[pstricks] a pst-graph and a pst-tools problems
Cyrille Piatecki cyrille.piatecki at univ-orleans.fr
Fri Dec 7 11:52:39 CET 2012

Dear all,
1)
I don't understand why the code
\documentclass{minimal}
\usepackage{pstricks, pst-plot,pstricks-add}
\begin{document}
\begin{center}
\readdata[ignoreLines=1]{\data}{revvcons.dat}%
%\pstScalePoints(1,1){1989 sub}{}
\psset{xAxisLabel=PIB,yAxisLabel=Consommation,xAxisLabelPos={c,-0.4cm},yAxisLabelPos={-1.5cm,c}}
\begin{psgraph}[axesstyle=frame,Dx=10,Ox=1920,subticks=0,
Dy=1](1920,0)(1920,-1)(2011,1){12cm}{50mm}%
\listplot[linecolor=blue,linewidth=5pt,plotNo=3,plotNoMax=5]{\data}
\end{psgraph}
\end{center}
\end{document}
for the attached file gives no errors but prints nothing -- in BaKoMa.
2) I do not want the graph to be boxed, but I have not found the command.
3) Concerning pst-tools, I think I have not really understood your
example.
Since I use BaKoMa and it is not capable of handling preprocessing code, I
was wondering if a call to PostScript would not be a good solution. For
instance, if I can transform the PS result into, for instance, a TeX
counter, I can use some other packages like ifthen to compute some clauses.
And I could also find the value of composed functions.
Here is what I have in mind.
\documentclass{minimal}
\usepackage{pst-math,multido}
\usepackage{pst-tools}
\usepackage{calc}
\def\showPSVal{ gsave 0 0 translate 1 -1 scale
10 string cvs /Helvetica findfont 100 scalefont setfont show grestore }
\SpecialCoor
\usepackage{ifthen}
\usepackage{calc}
\usepackage{settobox}
\SpecialCoor
\begin{document}
This works.
%%%%%%%%%%%%%
\newcommand{\fsin}[1]{
\psPrintValue [algebraic] {#1, sin(x)}}
\newcommand{\fcos}[1]{\psPrintValue [algebraic,VarName=cosa] {#1, cos(x)}}
$\sin(2) =\, \fsin{2}$
\vspace{.25cm}
$\cos(1) =\, \fcos{1}$
\vspace{.25cm}
This doesn't work
\vspace{.25cm}
\newcommand{\fsq}[1]{\psPrintValue [algebraic] {#1, x^2}}
$3^2 = \,\fsq{2}$
$\cos^2(1) = \fsq{cosa}$
\end{document}
Thanks for your answers and for all the very nice job done by the
pstricks contributors.
Cyrille Piatecki
Clojure Linear Algebra Refresher (3) - Matrix Transformations
June 13, 2017
A Clojure programmer will immediately feel at home with linear transformations - functions are also transformations!
Linear transformations preserve the mathematical structure of a vector space. Just like functions define transformations, matrices define a kind of linear transformations: matrix transformations. I often spot programmers using matrices and vectors as dumb data structures and writing their own loops, or accessing elements one by one in a haphazard fashion. Matrices are useful data structures, but using them as transformations is what really gives them power. This is something that is very well understood in computer graphics, but is often neglected in other areas.
Before I continue, a few reminders:
These articles are not stand-alone. You should follow along with a linear algebra textbook. I recommend Linear Algebra With Applications, Alternate Edition by Gareth Williams (see more in part 1 of this series). The intention is to connect the dots from a math textbook to Clojure code, rather than explain math theory or teach you basics of Clojure. Please read Clojure Linear Algebra Refresher (1) - Vector Spaces, and, optionally, Clojure Linear Algebra Refresher (2) - Eigenvalues and Eigenvectors first. Include Neanderthal library in your project to be able to use ready-made high-performance linear algebra functions.
This text covers the first part of chapter 6 of the alternate edition of the textbook.
Transformations
Consider a Clojure function (fn [x] (+ (* 3 x x) 4)). The set of allowed values for x is called the domain of the function. For the value 2, for example, the result is 16. We say that the image of 2 is 16. In linear algebra, the term transformation is used rather than the term function.
As an example, take the transformation \(T\) from \(R^3\) to \(R^2\) defined by \(T(x,y,z) = (2x,y-z)\) that the textbook I'm using presents. The domain, \(R^3\), is the set of all real vectors with dimension 3, while the codomain, the set of valid results, is \(R^2\), the set of all real vectors with dimension 2. The image of any vector can be found by filling in concrete values in the formula for T. The image of \((1,4,-2)\) is \(((2\times1),(4-(-2)))\), that is \((2,6)\).
We could write a function in Clojure that does exactly that:
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv mm nrm2 dot axpy]]
           [math :refer [sqrt sin cos pi]]])

(defn transformation-1 [x]
  (dv (* 2.0 (x 0)) (- (x 1) (x 2))))

(transformation-1 (dv 1 4 -2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   2.00    6.00 ]
Although this way of working with vectors seems natural to a programmer, I'll show you a much easier method. And much faster if you have lots of those numbers!
In general, a transformation T of \(R^n\) (domain) into \(R^m\) (codomain) is a rule that assigns to each vector \(\mathbf{u}\) in \(R^n\) a unique vector \(\mathbf{v}\) (image) in \(R^m\), and we write \(T(\mathbf{u})=\mathbf{v}\). We can also use the term mapping, which is more familiar to functional programmers.

Dilations, Contractions, Reflections, Rotations
Let's look at some useful geometric transformations, and find out that they can be described by matrices. For graphical representations of these examples, see the textbook, of course. Here, I concentrate on showing you the code.

Consider a transformation in \(R^2\) that simply scales a vector by some positive scalar \(r\). It maps every point in \(R^2\) into a point \(r\) times as far from the origin. If \(r>1\) it moves the point further from the origin (dilation), and if \(0< r < 1\), closer to the origin (contraction).
This equation can be written in a matrix form:\begin{equation} T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right) = \begin{bmatrix} r & 0\\ 0 & r \end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} \end{equation}
For example, when \(r=3\):
(mv (dge 2 2 [3 0 0 3]) (dv 1 2)) #RealBlockVector[double, n:2, offset: 0, stride:1] [ 3.00 6.00 ]
By multiplying our transformation matrix with a vector, we dilated the vector!
Consider reflection: it maps every point in \(R^2\) into its mirror image in the x-axis.

(mv (dge 2 2 [1 0 0 -1]) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   3.00   -2.00 ]

Rotation about the origin through an angle \(\theta\) is a bit more complicated to guess. After recalling a bit of knowledge about trigonometry, we can convince ourselves that it can be described by a matrix of sines and cosines of that angle.
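Written out, the rotation matrix has the standard trigonometric form:

\begin{equation} T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right) = \begin{bmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} \end{equation}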
If we'd like to do a rotation of \(\pi/2\), we'd use \(\sin\pi/2 = 1\) and \(\cos\pi/2 = 0\). For vector \((3,2)\):
(mv (dge 2 2 [0 1 -1 0]) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  -2.00    3.00 ]

Matrix transformations
Not only can we construct matrices that represent transformations; it turns out that
every matrix defines a transformation!
According to the textbook definition: Let \(A\) be an \(m\times{n}\) matrix, and \(\mathbf{x}\) an element of \(R^n\). \(A\) defines a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\) of \(R^n\) (domain) into \(R^m\) (codomain). The resulting vector \(A\mathbf{x}\) is the image of the transformation.
Note that the matrix dimensions m and n correspond to the dimensions of the codomain (m, the number of rows) and the domain (n, the number of columns).
Matrix transformations have the following geometrical properties:
They map line segments into line segments (or points); if \(A\) is invertible, they also map parallel lines into parallel lines.
Example 1 from page 248 illustrates this. Let \(T:R^2\rightarrow{R^2}\) be a transformation defined by the matrix \(A=\begin{bmatrix}4&2\\2&3\end{bmatrix}\). Determine the image of the unit square under the transformation.
The code:
(let [a (dge 2 2 [4 2 2 3])
      p (dv 1 0)
      q (dv 1 1)
      r (dv 0 1)
      o (dv 0 0)]
  [(mv a p) (mv a q) (mv a r) (mv a o)])

'(#RealBlockVector(double n:2 offset: 0 stride:1) ( 4.00 2.00 )
  #RealBlockVector(double n:2 offset: 0 stride:1) ( 6.00 5.00 )
  #RealBlockVector(double n:2 offset: 0 stride:1) ( 2.00 3.00 )
  #RealBlockVector(double n:2 offset: 0 stride:1) ( 0.00 0.00 ))
And, we got a parallelogram, since \(A\) is invertible. Check that as an exercise; a matrix is invertible if its determinant is non-zero (you can use the det function).
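For the invertibility exercise, a hedged sketch (assuming Neanderthal's trf and det functions from the uncomplicate.neanderthal.linalg namespace; API details may differ across versions):

(require '[uncomplicate.neanderthal.linalg :refer [trf det]])

;; det(A) = 4 * 3 - 2 * 2 = 8, which is non-zero, so A is invertible
;; and the unit square really maps to a parallelogram.
(det (trf (dge 2 2 [4 2 2 3])))
;; => 8.0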
Something bugs the programmer in me, though: what if we wanted to transform many points (vectors)? Do we use this pedestrian way, or do we put those points in some sequence and use good old Clojure higher-order functions such as map, reduce, filter, etc.? Let's try this.
(let [a (dge 2 2 [4 2 2 3])
      points [(dv 1 0) (dv 1 1) (dv 0 1) (dv 0 0)]]
  (map (partial mv a) points))

'(#RealBlockVector(double n:2 offset: 0 stride:1) ( 4.00 2.00 )
  #RealBlockVector(double n:2 offset: 0 stride:1) ( 6.00 5.00 )
  #RealBlockVector(double n:2 offset: 0 stride:1) ( 2.00 3.00 )
  #RealBlockVector(double n:2 offset: 0 stride:1) ( 0.00 0.00 ))
I could be pleased with this code. But I am not. I am not, because we only picked up the low-hanging fruit, and left a lot of simplicity and performance on the table. Consider this:
(let [a (dge 2 2 [4 2 2 3])
      square (dge 2 4 [1 0 1 1 0 1 0 0])]
  (mm a square))
#RealGEMatrix[double, mxn:2x4, order:column, offset:0, ld:2]
    4.00    6.00    2.00    0.00
    2.00    5.00    3.00    0.00
By multiplying matrix \(A\) with matrix \((\vec{p},\vec{q},\vec{r},\vec{o})\), we did the same operation as transforming each vector separately.
I like this approach much more.
It's simpler. Instead of maintaining disparate points of a unit square, we can treat them as one entity. If we still want access to the points, we just call the col function.

It's faster. Instead of calling mv four times, we call mm once. In addition to that, our data is concentrated next to each other, in a structure that is cache-friendly. This can give huge performance yields when we work with large matrices.
This might be obvious in graphics programming, but I've seen so much data-crunching code that uses matrices and vectors as dumb data structures that I think this point is worth reiterating again and again.
Composition of Transformations
I hope that every Clojure programmer will immediately understand function composition with the comp function:
(let [report+ (comp (partial format "The result is: %f") sqrt +)]
  (report+ 2 3))
The result is: 2.236068
By composing the +, sqrt, and format functions together, we got a function that transforms the input equivalently. Or, more formally, \((f\circ{g}\circ{h})(x)=f(g(h(x)))\).
Matrices are transformations just like functions are transformations; it's no surprise that matrices (as transformations) can also be composed!
Let's consider \(T_1(\mathbf{x}) = A_1(\mathbf{x})\) and \(T_2(\mathbf{x}) = A_2(\mathbf{x})\). The composite transformation \(T=T_2\circ{T_1}\) is given by \(T(\mathbf{x})=T_2(T_1(\mathbf{x}))=T_2(A_1(\mathbf{x}))=A_2 A_1(\mathbf{x})\).
Thus, the composed matrix transformation \(A_2\circ A_1\) is defined by the matrix product \(A_2{A_1}\). It can be extended naturally to a composition of \(n\) matrix transformations. \(A_n\circ{A_{n-1}}\circ\dots\circ{A_1}=A_n A_{n-1}\dots A_1\)
Take a look at the Clojure code for example 2 (page 250) from the textbook. There are 3 matrices given: one defines reflection, another rotation, and the third dilation. By composing them, we get one complex transformation that does all three transformations combined.
(let [pi2 (/ pi 2)
      reflexion (dge 2 2 [1 0 0 -1])
      rotation (dge 2 2 [(cos pi2) (sin pi2) (- (sin pi2)) (cos pi2)])
      dilation (dge 2 2 [3 0 0 3])]
  (def matrix-t (mm dilation rotation reflexion)))

matrix-t
#RealGEMatrix[double, mxn:2x2, order:column, offset:0, ld:2]
    0.00    3.00
    3.00   -0.00
The image of the point \((1,2)\) is:
(mv matrix-t (dv 1 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   6.00    3.00 ]

Orthogonal Transformations
Recall from Clojure Linear Algebra Refresher (1) - Vector Spaces that an orthogonal matrix is an invertible matrix for which \(A^{-1} = A^t\). An orthogonal transformation is a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\), where \(A\) is orthogonal. Orthogonal transformations preserve norms, angles, and distances. They preserve the shapes of rigid bodies.
I'll illustrate this with example 3 (page 252):
(let [sqrt12 (sqrt 0.5)
      a (dge 2 2 [sqrt12 (- sqrt12) sqrt12 sqrt12])
      u (dv 2 0)
      v (dv 3 4)
      tu (mv a u)
      tv (mv a v)]
  [[(nrm2 u) (nrm2 tu)]
   [(/ (dot u v) (* (nrm2 u) (nrm2 v))) (/ (dot tu tv) (* (nrm2 tu) (nrm2 tv)))]
   [(nrm2 (axpy -1 u v)) (nrm2 (axpy -1 tu tv))]])

[[2.0 2.0] [0.6 0.6000000000000001] [4.123105625617661 4.1231056256176615]]
We can see that norms, angles, and distances are preserved (with rather tiny differences due to rounding errors in floating-point operations).
Translation and Affine Transformation
There are transformations in vector spaces that are not truly matrix transformations, but are useful nevertheless. I'll show you why these transformations are not linear transformations in the next post; for now, let's look at how they are done. Translation is the transformation \(T:R^n \rightarrow R^n\) defined by \(T(\mathbf{u}) = \mathbf{u} + \mathbf{v}\). This transformation slides points in the direction of vector \(\mathbf{v}\).
In Clojure, the translation is done with the good old axpy function.
(axpy (dv 1 2) (dv 4 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   5.00    4.00 ]

An affine transformation is a transformation \(T:R^n \rightarrow R^n\) defined by \(T(\mathbf{u}) = A\mathbf{u} + \mathbf{v}\). It can be interpreted as a matrix transformation followed by a translation. See the textbook for more details; you can do a Clojure example as a customary trivial exercise left to the reader. Hint: the mv function can do affine transformations.

To be continued…
These were the basics of matrix transformations. In the next post, we will explore linear transformations further. I hope this was interesting and useful to you. If you have any suggestions, feel free to send feedback; this can make the next guides better.
Happy hacking!
Difference between revisions of "Probability Seminar"
Revision as of 15:30, 26 March 2015

Spring 2015

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu.
Thursday, January 15, Miklos Racz, UC-Berkeley Stats
Title: Testing for high-dimensional geometry in random graphs
Abstract: I will talk about a random geometric graph model, where connections between vertices depend on distances between latent d-dimensional labels; we are particularly interested in the high-dimensional case when d is large. Upon observing a graph, we want to tell if it was generated from this geometric model, or from an Erdos-Renyi random graph. We show that there exists a computationally efficient procedure to do this which is almost optimal (in an information-theoretic sense). The key insight is based on a new statistic which we call "signed triangles". To prove optimality we use a bound on the total variation distance between Wishart matrices and the Gaussian Orthogonal Ensemble. This is joint work with Sebastien Bubeck, Jian Ding, and Ronen Eldan.
Thursday, January 22, No Seminar

Thursday, January 29, Arnab Sen, University of Minnesota
Title: Double Roots of Random Littlewood Polynomials
Abstract: We consider random polynomials whose coefficients are independent and uniform on {-1,1}. We will show that the probability that such a polynomial of degree n has a double root is o(n^{-2}) when n+1 is not divisible by 4 and is of the order n^{-2} otherwise. We will also discuss extensions to random polynomials with more general coefficient distributions.
This is joint work with Ron Peled and Ofer Zeitouni.
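A double root at x = 1 means p(1) = p'(1) = 0, and a parity count on these two sums shows this is only possible when n+1 is divisible by 4, which is where the dichotomy in the abstract comes from. As an illustration (my own hedged sketch, not from the talk, and it only tests the two special points x = ±1):

```python
import random

def has_pm1_double_root(c):
    """True if the polynomial sum_i c[i] * x**i has a double root at x = 1 or
    x = -1, i.e. both p and its derivative p' vanish there."""
    def p_and_dp(x):
        p = sum(ci * x**i for i, ci in enumerate(c))
        dp = sum(i * ci * x**(i - 1) for i, ci in enumerate(c) if i > 0)
        return p, dp
    return any(p == 0 and dp == 0 for p, dp in (p_and_dp(1), p_and_dp(-1)))

def estimate_double_root_prob(n, trials=10000, seed=0):
    """Monte Carlo estimate over random Littlewood polynomials of degree n
    with i.i.d. uniform {-1, 1} coefficients."""
    rng = random.Random(seed)
    hits = sum(has_pm1_double_root([rng.choice((-1, 1)) for _ in range(n + 1)])
               for _ in range(trials))
    return hits / trials
```

For example, (x - 1)^2 (x + 1) = x^3 - x^2 - x + 1 is a Littlewood polynomial with a double root at 1, and its degree indeed satisfies n + 1 = 4.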
Thursday, February 5, No seminar this week

Thursday, February 12, No Seminar this week

Thursday, February 19, Xiaoqin Guo, Purdue
Title: Quenched invariance principle for random walks in time-dependent random environment
Abstract: In this talk we discuss random walks in a time-dependent zero-drift random environment in [math]Z^d[/math]. We prove a quenched invariance principle under an appropriate moment condition. The proof is based on the use of a maximum principle for parabolic difference operators. This is a joint work with Jean-Dominique Deuschel and Alejandro Ramirez.
Thursday, February 26, Dan Crisan, Imperial College London
Title: Smoothness properties of randomly perturbed semigroups with application to nonlinear filtering
Abstract: In this talk I will discuss sharp gradient bounds for perturbed diffusion semigroups. In contrast with existing results, the perturbation is here random and the bounds obtained are pathwise. Our approach builds on the classical work of Kusuoka and Stroock and extends their program developed for the heat semi-group to solutions of stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The analysis allows us to derive pathwise gradient bounds for the un-normalised conditional distribution of a partially observed signal. The estimates we derive have sharp small-time asymptotics.
This is joint work with Terry Lyons (Oxford) and Christian Literrer (Ecole Polytechnique) and is based on the paper
D. Crisan, C. Litterer, T. Lyons, Kusuoka–Stroock gradient bounds for the solution of the filtering equation, Journal of Functional Analysis, 2015.
Wednesday, March 4, Sam Stechmann, UW-Madison, 2:25pm Van Vleck B113
Please note the unusual time and room.
Title: Stochastic Models for Rainfall: Extreme Events and Critical Phenomena

Abstract: In recent years, tropical rainfall statistics have been shown to conform to paradigms of critical phenomena and statistical physics. In this talk, stochastic models will be presented as prototypes for understanding the atmospheric dynamics that leads to these statistics and extreme events. Key nonlinear ingredients in the models include either stochastic jump processes or thresholds (Heaviside functions). First, both exact solutions and simple numerics are used to verify that a suite of observed rainfall statistics is reproduced by the models, including power-law distributions and long-range correlations. Second, we prove that a stochastic trigger, which is a time-evolving indicator of whether it is raining or not, will converge to a deterministic threshold in an appropriate limit. Finally, we discuss the connections among these rainfall models, stochastic PDEs, and traditional models for critical phenomena.

Thursday, March 12, Ohad Feldheim, IMA
Title: The 3-states AF-Potts model in high dimension
Abstract: Take a bounded odd domain of the bipartite graph [math]\mathbb{Z}^d[/math]. Color the boundary of the set by [math]0[/math], then color the rest of the domain at random with the colors [math]\{0,\dots,q-1\}[/math], penalizing every configuration with proportion to the number of improper edges at a given rate [math]\beta\gt 0[/math] (the "inverse temperature"). Q: "What is the structure of such a coloring?"
This model is called the [math]q[/math]-states Potts antiferromagnet (AF), a classical spin glass model in statistical mechanics. The [math]2[/math]-states case is the famous Ising model, which is relatively well understood. The [math]3[/math]-states case in high dimension has been studied for [math]\beta=\infty[/math], when the model reduces to a uniformly chosen proper three-coloring of the domain. Several works, by Galvin, Kahn, Peled, Randall and Sorkin, established the structure of the model, showing long-range correlations and phase coexistence. In this work, we generalize this result to positive temperature, showing that for large enough [math]\beta[/math] (low enough temperature) the rigid structure persists. This is the first rigorous result for [math]\beta\lt \infty[/math].
In the talk, assuming no acquaintance with the model, we shall give the physical background, introduce all the relevant definitions and shed some light on how such results are proved using only combinatorial methods. Joint work with Yinon Spinka.
Thursday, March 19, Mark Huber, Claremont McKenna Math
Title: Understanding relative error in Monte Carlo simulations
Abstract: The problem of estimating the probability [math]p[/math] of heads on an unfair coin has been around for centuries, and has inspired numerous advances in probability such as the Strong Law of Large Numbers and the Central Limit Theorem. In this talk, I'll consider a new twist: given an estimate [math]\hat p[/math], suppose we want to understand the behavior of the relative error [math](\hat p - p)/p[/math]. In classic estimators, the values that the relative error can take on depends on the value of [math]p[/math]. I will present a new estimate with the remarkable property that the distribution of the relative error does not depend in any way on the value of [math]p[/math]. Moreover, this new estimate is very fast: it takes a number of coin flips that is very close to the theoretical minimum. Time permitting, I will also discuss new ways to use concentration results for estimating the mean of random variables where normal approximations do not apply.
Thursday, March 26, Ji Oon Lee, KAIST
Title: Tracy-Widom Distribution for Sample Covariance Matrices with General Population
Abstract: Consider the sample covariance matrix [math](\Sigma^{1/2} X)(\Sigma^{1/2} X)^*[/math], where the sample [math]X[/math] is an [math]M \times N[/math] random matrix whose entries are real independent random variables with variance [math]1/N[/math] and [math]\Sigma[/math] is an [math]M \times M[/math] positive-definite deterministic diagonal matrix. We show that the fluctuation of its rescaled largest eigenvalue is given by the type-1 Tracy-Widom distribution. This is a joint work with Kevin Schnelli.
Thursday, April 2, No Seminar, Spring Break

Thursday, April 9, Elnur Emrah, UW-Madison
Title: The shape functions of certain exactly solvable inhomogeneous planar corner growth models
Abstract: I will talk about two kinds of inhomogeneous corner growth models with independent waiting times {W(i, j): i, j positive integers}: (1) W(i, j) is distributed exponentially with parameter [math]a_i+b_j[/math] for each i, j. (2) W(i, j) is distributed geometrically with fail parameter [math]a_ib_j[/math] for each i, j. These generalize exactly-solvable i.i.d. models with exponential or geometric waiting times. The parameters (a_n) and (b_n) are random, with a joint distribution that is stationary with respect to the nonnegative shifts and ergodic (separately) with respect to the positive shifts of the indices. Then the shape functions of models (1) and (2) satisfy variational formulas in terms of the marginal distributions of (a_n) and (b_n). For certain choices of these marginal distributions, we still get closed-form expressions for the shape function, as in the i.i.d. models.
Thursday, April 16, Scott Hottovy, UW-Madison
Title: An SDE approximation for stochastic differential delay equations with colored state-dependent noise
Abstract: TBA
Thursday, April 23, Hoi Nguyen, Ohio State University
Title: On eigenvalue repulsion of random matrices
Abstract:
I will address certain repulsion behavior of roots of random polynomials and of eigenvalues of Wigner matrices, and their applications. Among other things, we show a Wegner-type estimate for the number of eigenvalues inside an extremely small interval for quite general matrix ensembles.
Thursday, April 30, Chris Janjigian, UW-Madison
Title: TBA
Abstract:
Thursday, May 7, TBA
Title: TBA
Abstract:
Explaining and Harnessing Adversarial Examples
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy (2014)
Paper summary by davidstutz

Goodfellow et al. introduce the fast gradient sign method (FGSM) to craft adversarial examples and further provide a possible interpretation of adversarial examples considering linear models. FGSM is a gradient-based, one-step method for generating adversarial examples. In particular, letting $J$ be the objective optimized during training and $\epsilon$ be the maximum $\infty$-norm of the adversarial perturbation, FGSM computes

$x' = x + \eta = x + \epsilon \text{sign}(\nabla_x J(x, y))$

where $y$ is the label for sample $x$. The $\text{sign}$ method is applied element-wise here. The applicability of this method is shown in several examples and it is commonly used in related work.

In the remainder of the paper, Goodfellow et al. discuss a linear interpretation of why adversarial examples exist. Specifically, considering the dot product

$w^T x' = w^T x + w^T \eta$

it becomes apparent that the perturbation $\eta$ – although insignificant on a per-pixel level (i.e. smaller than $\epsilon$) – causes the activation of a single neuron to be influenced significantly. What is more, this effect is more pronounced the higher the dimensionality of $x$. Additionally, many network architectures today use $\text{ReLU}$ activations, which are essentially linear.

Goodfellow et al. conduct several more experiments; I want to highlight the conclusions of some of them:
- Training on adversarial samples can be seen as regularization. Based on experiments, it is more effective than $L_1$ regularization or adding random noise.
- The direction of the perturbation matters most. Adversarial samples might be transferable as similar models learn similar functions where these directions are, thus, similarly effective.
- Ensembles are not necessarily resistant to perturbations.

Also view this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
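To make the method concrete, here is a minimal stdlib-Python sketch of FGSM applied to a logistic-regression model (a toy stand-in for the deep networks in the paper; the weights and inputs below are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step, x' = x + eps * sign(grad_x J(x, y)), for a logistic
    regression with weights w, bias b, cross-entropy loss, and label y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]  # dJ/dx_i for the cross-entropy loss
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Each coordinate moves by exactly eps, yet the model's confidence in the
# true label y = 1 drops, illustrating the linear-accumulation argument.
x_adv = fgsm([0.5, 0.5], 1, [1.0, -2.0], 0.0, 0.1)
```

The higher the input dimension, the more such per-coordinate nudges accumulate in $w^T \eta$, which is exactly the linearity argument in the summary above.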
First published: 2014/12/20 Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples: inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
#### Problem addressed:
A fast way of finding adversarial examples, and a hypothesis for why adversarial examples exist.

#### Summary:
This paper tries to explain why adversarial examples exist; the adversarial example is defined in another paper \cite{arxiv.org/abs/1312.6199}. Adversarial examples are counter-intuitive because they are normally visually indistinguishable from the original example, but lead to very different predictions from the classifier. For example, let sample $x$ be associated with the true class $t$. A classifier (in particular a well-trained DNN) can correctly predict $x$ with high confidence, but with a small perturbation $r$, the same network will predict $x+r$ as a different, incorrect class, also with high confidence.

This paper explains that the existence of such adversarial examples is due more to low model capacity in high-dimensional spaces than to overfitting, and it provides some empirical support for that. It also shows a new method that can reliably generate adversarial examples really fast, the 'fast sign' method: one can generate an adversarial example by taking a small step in the sign direction of the gradient of the objective. They also showed that training along with adversarial examples helps the classifier to generalize.

#### Novelty:
A fast method to generate adversarial examples reliably, and a linear hypothesis for those examples.

#### Datasets:
MNIST

#### Resources:
Talk on the paper: https://www.youtube.com/watch?v=Pq4A2mPCB0Y

#### Presenter:
Yingbo Zhou
WHY?
Reinforcement learning with sparse rewards often struggles to find any reward at all.
WHAT?
Forward-Backward Reinforcement Learning (FBRL) consists of a forward and a backward process. The forward process is like normal RL, using a memory to update the Q function. In the backward process, a new model is introduced, called the backward model b: a neural network that predicts the difference in state given the action and the next state.

b(s_{t+1}, a_t)\rightarrow \hat{\Delta}

Rewarded states are sampled from G, a distribution over goal states. Using the backward model, the previous state is estimated, and actions are sampled randomly or greedily. This imagined data is appended to the memory. (The algorithm figure from the paper is omitted here.)

So?

FBRL learned much faster than DDQN in the Gridworld and Towers of Hanoi environments.
Critic
Sampling from goal states may be difficult, and the backward model may be noisy in stochastic environments. Is it worth doubling the number of model parameters? Still, the concept of taking advantage of the memory seems like a good idea to develop.
WHY?
While the bilinear model is an effective method for capturing the relationship between two spaces, the number of parameters is often intractable. This paper suggests reducing the number of parameters by controlling the rank of the core tensor with a Tucker decomposition.
Note
With $\times_i$ denoting the $i$-mode product between a tensor and a matrix, the Tucker decomposition of a tensor $\mathbf{\tau}$ is as follows:
$$\mathbf{\tau} \in \mathbb{R}^{d_q \times d_v \times |\mathcal{A}|}, \qquad \mathbf{\tau} = ((\mathbf{\tau}_c \times_1 \mathbf{W}_q) \times_2 \mathbf{W}_v) \times_3 \mathbf{W}_O$$
WHAT?
The Tucker decomposition shows that a tensor can be represented with a limited number of parameters. The bilinear relationship between question vectors and image vectors can then be written in Tucker-fusion form (with $\tilde{\mathbf{q}} = \mathbf{q}^{\top}\mathbf{W}_q$ and $\tilde{\mathbf{v}} = \mathbf{v}^{\top}\mathbf{W}_v$):
$$\mathbf{y} = ((\mathbf{\tau}_c \times_1 (\mathbf{q}^{\top}\mathbf{W}_q)) \times_2 (\mathbf{v}^{\top}\mathbf{W}_v)) \times_3 \mathbf{W}_O$$
$$\mathbf{z} = (\mathbf{\tau}_c \times_1 \tilde{\mathbf{q}}) \times_2 \tilde{\mathbf{v}} \in \mathbb{R}^{t_o}, \qquad \mathbf{y} = \mathbf{z}^{\top}\mathbf{W}_O$$
We can control the sparsity and expressiveness of the bilinear model by controlling the rank of the slices of $\mathbf{\tau}_c$. If we impose that each slice of the core tensor has rank $R$, then each slice can be represented as a sum of $R$ rank-one matrices:
$$\mathbf{z}[k] = \tilde{\mathbf{q}}^{\top}\mathbf{\tau}_c[:,:,k]\,\tilde{\mathbf{v}}, \qquad \mathbf{\tau}_c[:,:,k] = \sum_{r=1}^R \mathbf{m}_r^k \otimes \mathbf{n}_r^{k\top}$$
$$\mathbf{z}[k] = \sum_{r=1}^R(\tilde{\mathbf{q}}^{\top}\mathbf{m}_r^k)(\tilde{\mathbf{v}}^{\top}\mathbf{n}_r^k), \qquad \mathbf{z} = \sum_{r=1}^R \mathbf{z}_r, \qquad \mathbf{z}_r = (\tilde{\mathbf{q}}^{\top}\mathbf{M}_r)*(\tilde{\mathbf{v}}^{\top}\mathbf{N}_r)$$
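The rank-$R$ computation can be sketched in NumPy; the dimensions and random factors below are illustrative only, and the assertion checks that the factored form matches the full core-tensor contraction.

```python
import numpy as np

rng = np.random.default_rng(1)
t_q, t_v, t_o, R = 5, 4, 3, 2   # projected input dims, output dim, imposed rank

q_tilde = rng.standard_normal(t_q)   # projected question vector
v_tilde = rng.standard_normal(t_v)   # projected image vector

# Rank-R factors: each output slice tau_c[:, :, k] = sum_r m_r^k (n_r^k)^T.
M = rng.standard_normal((R, t_q, t_o))
N = rng.standard_normal((R, t_v, t_o))

# z = sum_r (q^T M_r) * (v^T N_r), with * the element-wise product.
z = sum((q_tilde @ M[r]) * (v_tilde @ N[r]) for r in range(R))

# Equivalent full-tensor form: z[k] = q^T tau_c[:, :, k] v.
tau_c = np.einsum('rio,rjo->ijo', M, N)
z_full = np.einsum('i,ijk,j->k', q_tilde, tau_c, v_tilde)
assert np.allclose(z, z_full)
```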
MCB and MLB can be represented with a generalized form of MUTAN.
So?
MUTAN can represent rich multimodal representations and achieved state-of-the-art results on the VQA dataset.
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta = x = \pm\sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta = z = R\cos\sigma.$$ I am having a tough time visualising what this is.
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. Is the point $z = 0$ a removable singularity, a pole, an essential singularity, or a non-isolated singularity? Since $\cos(\frac{1}{z}) = 1 - \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots$ $$ = (1-y), \text{ where } y=\frac{1}{2z^2}+\frac{1}{4!...$$
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2); (3) follows similarly.
This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial, but I have not yet seen a formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
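For what it's worth, (1) does have a short formal proof: substitute using $x = y$ and apply reflexivity. A Lean 4 sketch (the statement and names are mine):

```lean
-- Reflexivity alone suffices to get x = y → r x y.
theorem eq_implies_rel {α : Type} (r : α → α → Prop)
    (hrefl : ∀ a : α, r a a) {x y : α} (h : x = y) : r x y := by
  subst h
  exact hrefl x
```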
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, an inference rule and a use of MP seem to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow us, in some cases, to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: as $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
For a massless Dirac particle, integrating out the fermion degrees of freedom in the path integral yields an effective action for the gauge field:
$$l(\psi,\bar\psi,A)=\bar\psi( \gamma^\mu (i \partial_\mu +A_\mu ) ) \psi $$
$$Z= \int D\psi D\bar\psi D A_\mu e^{(i \int d^3x l)}$$
$$e^{i S_{eff}} =\int D\psi D\bar\psi\, e^{(i \int d^3x\, l)}$$
$$S_{eff} =-i \ln (\det ( \gamma^\mu (i \partial_\mu +A_\mu )))$$
I want to know:
How can I calculate the following equation?
$$S_{eff} =C_1 C_2 $$
where
$$C_1=- \frac{1}{12} \epsilon^{\mu\nu\rho} \int \frac{d^3p}{(2\pi)^3} tr[ [G(p)\partial_\mu G^{-1}(p)] [G(p)\partial_\nu G^{-1}(p)] [G(p)\partial_\rho G^{-1}(p)] ] $$
and
$$C_2= \int d^3x \epsilon^{\mu\nu\rho}A_\mu \partial_\nu A_\rho $$
$G(p)$ is the fermion propagator and $G^{-1}(p)$ is its inverse.
Yes. It’s the apparent change in the frequency of a wave caused by relative motion between the source of the wave and the observer.
-Sheldon Cooper
In the “Middle-Earth Paradigm” episode, Sheldon Cooper dresses as the “Doppler Effect” for Penny’s Halloween party. The Doppler Effect (or Doppler Shift) describes the change in pitch or frequency that results as a source of sound moves relative to an observer; either the source or the observer may be the one moving. It is commonly heard when a siren approaches and then recedes from an observer: as the siren approaches, the pitch sounds higher, and it lowers as the siren moves away. This effect was first proposed by Austrian physicist Christian Doppler in 1842 to explain the color of binary stars.
In 1845, Christophorus Henricus Diedericus (C. H. D.) Buys-Ballot, a Dutch chemist and meteorologist, conducted the famous experiment to prove this effect. He assembled a group of horn players on an open cart attached to a locomotive. Ballot then instructed the engineer to rush past him as fast as he could while the musicians played and held a constant note. As the train approached and receded, Ballot, standing and listening on the stationary platform, noted that the pitch changed.
Physics of the Doppler Effect
As a stationary sound source produces sound waves, its wave-fronts propagate away from the source at a constant speed, the speed of sound. This can be seen as concentric circles moving away from the center. All observers will hear the same frequency, the frequency of the source of the sound.
When either the source or the observer moves relative to the other, the frequency of the sound that the source emits does not change; rather, the observer hears a change in pitch. We can think of it the following way. If a pitcher throws balls to someone across a field at a constant rate of one ball a second, the person will catch those balls at the same rate (one ball a second). Now if the pitcher runs towards the catcher, the catcher will catch the balls faster than one ball per second. This happens because as the pitcher moves forward, he closes the distance between himself and the catcher. When the pitcher tosses the next ball it has to travel a shorter distance and thus travels for a shorter time. The opposite is true if the pitcher were to move away from the catcher.
Now suppose that instead of the pitcher moving towards the catcher, the pitcher stays stationary and the catcher runs forward. As the catcher runs forward, he closes the distance between himself and the pitcher, so the time it takes for the ball to travel from the pitcher’s hand to the catcher’s mitt decreases. In this case, too, the catcher will catch the balls at a faster rate than the pitcher tosses them.
Sub Sonic Speeds
We can apply the same idea of the pitcher and catcher to a moving source of sound and an observer. As the source moves, it emits sound waves which spread out radially around it. As it moves forward, the wave-fronts in front of the source bunch up and the observer ahead hears an increase in pitch. Behind the source, the wave-fronts spread apart and an observer standing behind hears a decrease in pitch.
The Doppler Equation
When the speeds of the source and the receiver relative to the medium (air) are lower than the velocity of sound in the medium, i.e. they move at sub-sonic speeds, we can define a relationship between the observed frequency, \(f\), and the frequency emitted by the source, \(f_0\).
\[f = f_{0}\left(\frac{v + v_{o}}{v + v_{s}}\right)\] where \(v\) is the speed of sound, \(v_{o}\) is the velocity of the observer (positive if the observer is moving towards the source of sound) and \(v_{s}\) is the velocity of the source (positive if the source is moving away from the observer).
Source Moving, Observer Stationary
We can now use the above equation to determine how the pitch changes as the source of sound moves towards the observer, i.e. \(v_{o} = 0\): \[f = f_{0}\left(\frac{v}{v – v_{s}}\right)\] Because the source moves towards the observer, its velocity is negative in the convention above; writing \(v_{s}\) for its speed, the denominator becomes \(v - v_{s} < v\). This makes \(v/(v - v_{s})\) larger than 1, which means the pitch increases.
Source Stationary, Observer Moving
Now if the source of sound is still and the observer moves towards the sound, we get:
\[f = f_{0}\left( \frac{v + v_{o}}{v} \right)\] \(v_{o}\) is positive as the observer moves towards the source. The numerator is larger than the denominator, so \((v + v_{o})/v\) is greater than 1: the pitch increases.
Speed of Sound
As the source of sound moves at the speed of sound, the wave fronts in front become bunched up at the same point. The observer in front won’t hear anything until the source arrives. When the source arrives, the pressure front will be very intense and won’t be heard as a change in pitch but as a large “thump”.
The observer behind will hear a lower pitch as the source passes by.
\[f = f_{0}\left( \frac{v – 0}{v + v} \right) = 0.5 f_{0}\]
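The cases worked through above can be bundled into one small helper; this is a sketch using the text's sign conventions (observer velocity positive toward the source, source velocity positive away from the observer), with 343 m/s assumed for the speed of sound.

```python
def observed_frequency(f0, v_sound, v_observer=0.0, v_source=0.0):
    """General Doppler relation f = f0 * (v + v_o) / (v + v_s).

    Sign conventions from the text: v_observer > 0 when the observer
    moves toward the source; v_source > 0 when the source moves away
    from the observer.
    """
    return f0 * (v_sound + v_observer) / (v_sound + v_source)

v = 343.0  # assumed speed of sound in air (m/s)

# Source approaching at 34.3 m/s (signed velocity is negative): pitch rises.
assert observed_frequency(440.0, v, v_source=-34.3) > 440.0

# Source receding at the speed of sound: f = f0 * v / (v + v) = 0.5 * f0.
assert abs(observed_frequency(440.0, v, v_source=v) - 220.0) < 1e-9
```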
Early jet pilots flying at the speed of sound (Mach 1) reported a noticeable “wall” or “barrier” had to be penetrated before achieving supersonic speeds. This “wall” is due to the intense pressure front, and flying within this pressure front produced a very turbulent and bouncy ride. Chuck Yeager was the first person to break the sound barrier when he flew faster than the speed of sound in the Bell X-1 rocket-powered aircraft on October 14, 1947.
As the science of super-sonic flight became better understood, engineers made a number of changes to aircraft design that led to the disappearance of the “sound barrier”. Aircraft wings were swept back and engine performance increased. By the 1950s combat aircraft could routinely break the sound barrier.
Super-Sonic
As the sound source breaks through and moves past the “sound barrier”, it now travels faster than the sound waves it creates and leads the advancing wavefront. The source will pass the observer before the observer hears the sound it creates. As the source moves forward, it creates a Mach cone. The intense pressure front on the Mach cone creates a shock wave known as a “sonic boom”.
Twice the Speed of Sound
Something interesting happens when the source moves towards the observer at twice the speed of sound: the tone becomes time reversed. If music were being played, the observer would hear the piece with the correct tone but played backwards. This was first predicted by Lord Rayleigh in 1896.
We can see this by using the Doppler Equation.
\[f = f_{0}\left(\frac{v}{v-2v}\right)\] This reduces to \[f=-f_{0}\] which is negative because the sound is time reversed, i.e. heard backwards.
Applications
Radar Gun
The Doppler Effect is used in radar guns to measure the speed of motorists. A radar beam is fired at a moving target as it approaches or recedes from the radar source. The moving target then reflects the Doppler-shifted radar wave back to the detector and the frequency shift measured and the motorist’s speed calculated.
We can combine both cases of the Doppler equation to give us the relationship between the reflected frequency (\(f_{r}\)) and the source frequency (\(f\)):
\[f_{r} = f \left(\frac{c+v}{c-v}\right)\] where \(c\) is the speed of light and \(v\) is the speed of the moving vehicle. The difference between the reflected frequency and the source frequency is too small to be measured accurately so the radar gun uses a special trick that is familiar to musicians – interference beats.
To tune a piano, the pitch can be adjusted by changing the tension on the strings. By using a tuning instrument (such as a tuning fork) which can produce a sustained tone, a beat frequency can be heard when it is placed next to the vibrating piano wire. The beat frequency is an interference between two sounds with slightly different frequencies and can be heard as a periodic change in volume over time. This frequency tells us how far off the piano string is compared to the reference (tuning fork).
To detect this change, a radar gun does something similar. The returning wave is “mixed” with the transmitted signal to create a beat note. This beat signal or “heterodyne” is then measured and the speed of the vehicle calculated. The change in frequency, the difference between \(f_{r}\) and \(f\), or \(\Delta f\), is
\[f_{r} - f = f\frac{2v}{c-v}\] As the speed of the vehicle, \(v\), is tiny compared to the speed of light, \(c\), we can approximate this to \[\Delta f \approx f\frac{2v}{c}\] By measuring this frequency shift or beat frequency, the radar gun can calculate and display a vehicle’s speed.
“I am the Doppler Effect”
The Doppler Effect is an important principle in physics and is used in astronomy to measure the speeds at which galaxies and stars are approaching or receding from us. It is also used in plasma physics to estimate the temperature of plasmas. Plasmas are one of the four fundamental states of matter (the others being solid, liquid, and gas) and are made up of very hot, ionized gases. Their composition can be determined by the spectral lines they emit. As each particle jostles about, the light it emits is Doppler shifted and is seen as a broadened spectral line. This line shape is called a Doppler profile, and the width of the line is proportional to the square root of the temperature of the plasma gas. By measuring this width, scientists can infer the gas’ temperature.
We can now understand Sheldon’s fascination with the Doppler Effect as he aptly explains and demonstrates its effects. As an emergency vehicle approaches an observer, its siren will start out with a higher pitch and slide down as it passes and moves away from the observer. This can be heard as the sound he demonstrates to Penny’s confused guests.
if, but of when we find them.
Here's a video of Dr. Carl Sagan presenting a more sophisticated version of this (with actual numbers) to estimate the number of inhabited planets in our galaxy. Or try this worksheet on the Drake Equation on the BBC website.
It's a common argument. And it sounds pretty convincing. If you keep trying over and over, even though something is unlikely, eventually you will succeed.
It's common and convincing, but it's also fallacious. Here's the problem: How many times do we have to try before we're guaranteed to succeed?
The mathematical answer is infinitely many times.
But that's to guarantee we succeed, with 100% probability. So a better question might be: what happens to the probability of success as we keep trying?
Let the probability of a success be very low, set to $10^{-X}$, where $X$ is some large number. This makes $10^{-X}$ a very small number. Then let the number of trials be $10^{Y}$, where $Y$ is some large number. This makes $10^{Y}$ a very large number. Now we define a quantity $P_0$, which is the probability of never succeeding, even after $10^Y$ trials.
Assuming whether we succeed or not on a given trial is a simple coin flip with probability $10^{-X}$ of success, then the probability of failure in a single trial is $(1-10^{-X})$. The probability of never succeeding after $10^{Y}$ trials is just the product
$$P_0 = \left(1-\frac{1}{10^{X}}\right)^{10^Y}.$$
We have said that $X$ is large. Maybe you remember from algebra learning the formula for continuously compounded interest, where you ended up with an exponential, like so:
$$e^{x} = \lim_{n\rightarrow \infty} \left(1 + \frac{x}{n}\right)^n.$$
Well, in our case, if $X$ is large, then $10^{X}$ is really large, and
$$\left(1 + \frac{-1}{10^{X}}\right)^{10^X} \approx e^{-1} \approx 0.36788$$
If we re-write our expression for $P_0$, then, we find
$$P_0 = \left(1-\frac{1}{10^{X}}\right)^{10^{Y}} = \left[\left(1+\frac{-1}{10^{X}} \right)^{10^X}\right]^{10^{Y-X}} \approx \left[e^{-1}\right]^{10^{Y-X}} = e^{-10^{Y-X}}.$$
Now, $e^{-1} \approx 0.36788$ is less than one, so squaring or cubing it will make it even smaller. However, taking its square root will make it larger. The resolution comes down to: how does $X$ compare to $Y$?
Consider a simple case, where $X=Y$. Then $10^{Y-X} = 10^0 = 1.$ So $P_0=e^{-1} = 0.36788.$ That is, there is only about a 37% chance of there being no successes, or in other words, there is a 63% chance of a success happening at some point. It's not a guarantee, but it's more likely than not.
Now suppose that $Y = X+1$. This means that we do ten times as many trials as our inverse probability; if the probability is 1/10, do 100 trials; if the probability is 1/2, do 20 trials, etc. Then $10^{Y-X} = 10^{1} = 10$, so $P_0 = e^{-10} = 0.0000454$. That is, the probability of success is 99.995%. As we increase $Y$, this probability gets even closer to 100%. Success is all but guaranteed.
However, now suppose that $Y = X-1$. This means that we only do a tenth as many trials as the inverse probability; if the probability is 1/10, do 1 trial; if the probability is 1/20, do 2 trials, etc. Then $10^{Y-X} = 10^{-1} = 0.1$, so $P_0 = e^{-0.1} = 0.9048$. That is, the probability of success is down to a measly 9.516%. As $X-Y$ grows, this number gets even closer to 0%.
As we can see, our confidence of success depends drastically on the value of $Y-X$. Even slight differences here can mean huge changes in the probability of success, $P_{\geq1} = 1-P_0$.
Simple graph showing the steep rise from 0 to 1 in the probability of success.
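The numbers above are easy to check directly; a small sketch (the choice $X = 6$ and the particular $Y$ values are only for illustration):

```python
import math

def p_no_success_exact(X, Y):
    # P0 = (1 - 10**-X) ** (10**Y): probability of zero successes
    # in 10**Y independent trials, each succeeding with probability 10**-X.
    return (1.0 - 10.0**(-X)) ** (10.0**Y)

def p_no_success_approx(X, Y):
    # The e-based approximation derived above: P0 ~ exp(-10**(Y - X)).
    return math.exp(-(10.0**(Y - X)))

# Y = X: success more likely than not (P0 ~ e^-1 ~ 0.368).
assert abs(p_no_success_exact(6, 6) - math.exp(-1)) < 1e-3
# Y = X + 1: success all but guaranteed (P0 ~ e^-10 ~ 4.5e-5).
assert p_no_success_exact(6, 7) < 1e-4
# Y = X - 1: success unlikely (P0 ~ e^-0.1 ~ 0.905).
assert abs(p_no_success_exact(6, 5) - math.exp(-0.1)) < 1e-3
```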
What this comes down to is whether $X$ is greater than or less than $Y$. Put differently: how does the probability of a single success compare to the number of trials? Or, put in terms of aliens: is the number of planets out there that can give rise to life close to the inverse of the probability of life actually arising?
And the answer is: no one knows!
We do not know how many planets there are. If we estimate this as $N_p = 10^Y$, then $Y$ might be off by 2 or 3 in either direction. There might be a thousand times as many as we think now, or there might be only a hundredth of our current guess. As we just saw, for a fixed $X$, changing $Y$ even by 1 can drastically affect our confidence of extraterrestrial life existing.
Way more crucially, we have no idea how likely it is for life to occur on a planet that can give rise to life. Think about this. We have only ever observed life arising on a planet once. This means that we don't have a very good definition of a planet where life can arise (see above), but it also means that we have a single data point upon which to base a probability. If I were a pollster, and I went out on the street and asked a single person who they were voting for, and from that concluded that 54% of voters supported the candidate, you would rightly question my methodology. If we estimate this probability of life arising on a planet as $p_L = 10^{-X}$, then we don't even know what $X$ is.
Since we do not know what $X$ is, we don't know what $P_{\geq 1}$ is. This is a simple model, but it makes its point: even small differences between $Y$ and $X$ can lead to very different predictions.
Consider again what it would mean for $Y=X-1$. Take $10^Y$ to be the total number of planets that will ever exist in our universe's lifetime. Take $10^{-X}$ to be the probability of life ever arising on any given planet in our universe other than our own. Then if $Y = X-1$, as above, we have $P_{\geq1} \approx$ 10% as the probability of extraterrestrial life ever arising in this universe. This means that we'd need roughly 10 other universes just like our own before we can be back at roughly 63% probability of life arising again.
The popular statement that the universe is so big that there must be life in it somewhere is a false one. The universe is quite big, but the probability of life arising can also be so small as to negate this bigness, and we have no way to know if this is the case or not.
The universe can still be as big as it is, and yet still not be big enough for life to arise anywhere else within it.
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
I have no rigorous training in mathematics - I'm not quite sure what constitutes a proof and what doesn't. How can I properly use Mathematica to prove some theorems like this one?
Well, what you can do is use Mathematica to help visualise the function as well as computing derivatives, integrals or limits. My approach is a bit hacky :)
The function
Okay, so we're dealing with x Sin[Pi/x], which is an oscillatory function. It is also even: its plot is symmetric, and x Sin[Pi/x] == -x Sin[Pi/-x] evaluates to True. The fact that the function is even is interesting if you want to generalize your "proof".
From the plot of Sin[x], you can get a good idea of what the plot of x Sin[Pi/x] will look like, even before asking Mathematica to generate it. The fact that Sin[x] has a value between 0 and 1 while $0 \lt x \leq \pi$ is obvious from the graph. What is also obvious is that the function's value has a negative sign for $\pi \leq x \leq 2\pi$.
What are we expecting?
We're expecting a sinusoid-like curve with ascending amplitude and frequency (from the x multiplying Sin[Pi/x]). We can guess that because at values of x below 1, we will be evaluating the sine of a number larger than $\pi$, this trend being reversed at x=1. What happens after x=1? Let's ask Mathematica.
Plotting the functions
It turns out we were right about the amplitude and frequency, right up to a point, at x=1 - past this point, there are no more zeros. There's no need to go further, because it is obvious that our function has an asymptote. But what if we had our PlotRange wrong, and the (presumably more complicated) function really does have zeros after x=1? Let's again ask Mathematica.
It turns out that the limit of our function
Limit[x Sin[Pi/x], x -> \[Infinity]]
is $\pi$. Confirmation of asymptotic behaviour.
What about $\pi \cos (\frac{\pi}{x})$?
You could take a guess like we did before, or simply not bother and ask Mathematica right away.
It is clear from this image that the inequality $x \sin (\frac{\pi}{x})\geq \pi \cos (\frac{\pi}{x})$ can be valid for $x \geq 1$. To confirm, we need to check the limit of the cosine function with
Limit[Pi Cos[Pi/x], x -> \[Infinity]]
which also evaluates to $\pi$. Good news.
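Outside Mathematica, the same limits and the inequality can be spot-checked numerically; a quick sketch in Python (spot checks only, not a proof):

```python
import math

def f(x):
    # x * sin(pi / x), the function under discussion
    return x * math.sin(math.pi / x)

def g(x):
    # pi * cos(pi / x), the comparison function
    return math.pi * math.cos(math.pi / x)

# Both functions approach pi as x grows.
for x in (1e3, 1e6, 1e9):
    assert abs(f(x) - math.pi) < math.pi / x
    assert abs(g(x) - math.pi) < math.pi / x

# And f stays above g at a few sample points with x >= 1.
assert all(f(x) >= g(x) for x in (1.0, 1.5, 2.0, 10.0, 100.0))
```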
But how can I prove it's increasing?
As halirutan said, you can use derivatives. As a non-mathematician, I would be convinced by the plots of the functions and their limits. If you want to know how fast each function converges to $\pi$, then go ahead and take a look at each function's first/second derivative plot.
This approach might not qualify as rigorous, but I think it shows how you can use Mathematica to walk through math problems.
Measurement of $\Delta m^2_{32}$ and $\sin^2\!\theta_{23}$ using Muon Neutrino and Antineutrino Beams in the NOvA Experiment
Abstract
NOvA is a long-baseline neutrino oscillation experiment located in the midwest United States. It consists of two functionally identical tracking calorimeters, known as the near and far detectors, that measure neutrino interactions induced by the NuMI beam at baselines of 1 km and 810 km. NuMI can be configured to provide a muon neutrino or antineutrino beam. Analysis of $\nu_\mu + \bar\nu_\mu$ disappearance allows constraint of the $|\Delta m^2_{32}|$ and $\sin^2\theta_{23}$ oscillation parameters. This thesis presents the first NOvA disappearance results using both neutrino and antineutrino data; previous NOvA analyses have only used neutrino beam data. Two analysis improvements are delineated in dedicated chapters: the design of selection criteria to identify events that are fully contained in the far detector, and the optimization of particle identification selection criteria in a multi-dimensional parameter space. The full-detector equivalent beam exposures used for this thesis are $8.85\times 10^{20}$ and $6.91\times 10^{20}$ protons on target for neutrino and antineutrino data respectively. Under the assumption of a normal (inverted) neutrino mass hierarchy, analysis of the data gives $\Delta m^2_{32}=+2.49^{+0.09}_{-0.07}\times10^{-3}~\text{eV}^2$ ($\Delta m^2_{32}=-2.54\pm0.08\times10^{-3}~\text{eV}^2$) and $\sin^2\theta_{23} = 0.59\pm0.03$ ($\sin^2\theta_{23} = 0.44\pm0.03$). Maximal mixing ($\sin^2\theta_{23} = 0.5$) is disfavoured at the $1.7\sigma$ level.
Authors: Blackburn, Tristan (Sussex U.) Publication Date: 2019 Research Org.: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25) OSTI Identifier: 1502822 Report Number(s): FERMILAB-THESIS-2019-03 1726090 DOE Contract Number: AC02-07CH11359 Resource Type: Thesis/Dissertation Country of Publication: United States Language: English Subject: 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Citation Formats
Blackburn, Tristan.
Measurement of $\Delta m^2_{32}$ and $\sin^2\!\theta_{23}$ using Muon Neutrino and Antineutrino Beams in the NOvA Experiment. United States: N. p., 2019. Web. doi:10.2172/1502822.
This is College Physics Answers with Shaun Dychko. The overall proton-proton fusion cycle involves four protons plus two electrons creating a single helium-4 nucleus, two electron neutrinos, and six gamma rays. We are going to verify that the number of nucleons, the electron family number, and the charge are conserved in this reaction. For the number of nucleons, we have zero for the two electrons and four for the four protons, so on the left side we have a total of four. On the right hand side, helium-4 clearly has four nucleons, and there are no nucleons in the neutrinos or the gamma rays, so yes, this conservation law holds with four on each side. For electron family number, there are two electrons, each having an electron family number of one, for a total of two; there is no electron family number attributed to protons, nor to the helium nucleus, but an electron family number of two is attributed to the two electron neutrinos and zero to the gamma rays. So yes, we verified that electron family number is conserved; it is two on both sides. Now for charge: we have a negative 2 charge for the two electrons and a positive 4 charge for the four protons, for a total net charge of positive 2 on the left. On the right hand side, there is 0 charge for the gamma rays, 0 charge for the neutrinos, and a charge of 2 for the two protons in the helium nucleus. The net charge of 2 on the right side matching the net charge of 2 on the left side means yes, charge is conserved.
Question
Verify by listing the number of nucleons, total charge, and electron family number before and after the cycle that these quantities are conserved in the overall proton-proton cycle in $2e^- + 4{}^{1}\textrm{H} \to {}^{4}\textrm{He} + 2\nu_e + 6\gamma$.
Final Answer
Please see the solution video.
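As a sketch, the bookkeeping in the solution can be checked with a few lines of Python; the particle table below (nucleon number A, charge Q, electron family number L_e) is assembled for this illustration.

```python
# Each particle maps to (nucleon number A, charge Q, electron family number L_e)
# for the overall proton-proton cycle: 2 e^- + 4 p -> He-4 + 2 nu_e + 6 gamma.
particles = {
    "e-":    (0, -1, +1),
    "p":     (1, +1,  0),
    "He-4":  (4, +2,  0),
    "nu_e":  (0,  0, +1),
    "gamma": (0,  0,  0),
}

def totals(side):
    """Sum (A, Q, L_e) over a list of (particle, count) pairs."""
    return tuple(sum(n * particles[p][i] for p, n in side) for i in range(3))

left = totals([("e-", 2), ("p", 4)])
right = totals([("He-4", 1), ("nu_e", 2), ("gamma", 6)])
print(left, right)  # both sides give (4, 2, 2): nucleons, charge, L_e conserved
```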
Video Transcript |
Off-topic paragraph: Today, we celebrate 370 years since the birth of Isaac Newton, arguably the brightest scientist ever. He was born on January 4th, 1643, in the New Style calendar (December 25th, 1642, in the Old Style). He was the founder of classical physics (and, in fact, physics in the modern sense), the discoverer of the universal law of gravitation, a co-inventor of calculus, the discoverer of lots of mathematical methods and laws in optics, and so on. He was also a relentless prosecutor of counterfeiters and a devout Christian whose literal belief in the Bible and in the existence of the Holy Spirit permeating the whole space actually powered his physics research.
But in the text below, we're going to discuss an example illustrating the special theory of relativity, one of the theoretical frameworks that superseded Newton's theories.
Our cars and trains and airplanes are fast, but their speed is negligible relative to the speed of light \(c\), which is why Albert Einstein's special theory of relativity remains abstract for most of us. It hasn't been hardwired into our brains.
However, all the relativistic effects are mundane at particle accelerators such as the LHC. Protons are accelerated to speeds that are very close to the speed of light.
If you did it with a slightly higher number of protons, you could accelerate whole human beings to such speeds – assuming you would find out how to accelerate electrons as well and add them to the atoms again (it's hard to accelerate the electrically neutral human bodies directly). That's why the experience of the protons isn't something "totally different" from what humans could experience. With a little bit of extra work, we could experience it.
But what do the protons experience?
First, let us determine the speed of the protons. The rest mass of a proton is\[
m_0 = 0.938272\GeV/c^2.
\] It's almost one gigaelectronvolt (divided by the squared speed of light). However, the total energy carried by the proton is enhanced by the Lorentz gamma factor:\[
\gamma\equiv \frac{1}{\sqrt{1-v^2/c^2}} = \frac{4,000\GeV}{0.9383\GeV}\approx 4,263.
\] This factor exceeds four thousand. It is not hard to calculate the actual speed by inverting the \(v\)-\(\gamma\) relationship:\[
\eq{
\frac{1}{\gamma^2} &= 1-\frac{v^2}{c^2}\\
\frac{v^2}{c^2}&=1-\frac{1}{\gamma^2}\\
\frac{v}{c}&=\sqrt{1-\frac{1}{\gamma^2}}
}
\] If you substitute \(\gamma\approx 4,263\), you will obtain\[
\frac{v}{c}\approx 0.999999972
\] There are seven digits "nine" followed by a "seven". If you multiply this number "almost equal to one" by the speed of light, \(c=299,792,458\,{\rm m/s}\) (exactly), you will find out that the proton's speed is just nine meters per second smaller than the speed of light in the vacuum! The proton's speed only differs from the speed of light by the speed of an Olympic runner; however, don't forget that the simple addition of speeds isn't the right way to calculate the relative velocities in relativity.
Now, the proton's speed is associated with a particular world line in the spacetime. There is a certain kind of a "hyperbolic angle" called the rapidity \(\varphi\) in between the moving LHC proton's world line and a static proton's world line. Its value is\[
\varphi = {\rm arctanh}(v/c) \approx {\rm arctanh}(0.999999972)\approx 9.05.
\] The angle is slightly above nine "hyperbolic/imaginary radians". Note that the rapidity isn't periodic – because the hyperbola, unlike the circle, isn't closed. Nevertheless, the total amount of acceleration that the proton had to experience from its own viewpoint is analogous to the rotation by nine radians – except that we are talking about a rotation in the Minkowski spacetime. The value of the rapidity isn't terribly high but it corresponds to speeds that are very close to the speed of light and whose \(\gamma\) is huge.
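The numbers above (the Lorentz factor, \(v/c\), the speed deficit, and the rapidity) can be reproduced with a short Python sketch; the only inputs are the 4 TeV per-proton energy quoted in the text and standard constants.

```python
import math

m0 = 0.938272            # proton rest mass, GeV/c^2
E = 4000.0               # per-proton energy in GeV (2012 LHC, 8 TeV collisions)
c = 299_792_458.0        # speed of light in m/s (exact)

gamma = E / m0                              # Lorentz factor, ~4263
beta = math.sqrt(1.0 - 1.0 / gamma**2)      # v/c from inverting the gamma relation
deficit = c * (1.0 - beta)                  # m/s below the speed of light (~8-9 m/s)
rapidity = math.atanh(beta)                 # hyperbolic angle between world lines, ~9.05

print(gamma, beta, deficit, rapidity)
```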
The fact that the proton has this huge speed has consequences. First of all, in the proton's instantaneous inertial reference frame, the circular LHC ring looks like an ellipse. It is a hugely squeezed ellipse, almost a line interval. This ellipse is \(4,263\) times wider than tall. It's insane. Even if your horizontal display resolution were \(4,263\) pixels, the vertical height of the picture of the LHC would be as small as one pixel. I won't even try to insert a real picture here. A horizontal line may be more appropriate.
And this is how the world "is" according to the proton's natural reference frame. Note that the proton is moving vertically right now – it is either on the left or the right endpoint of this "ellipse pretending to be a line interval". This very unusual shape of the LHC ring is no sleight-of-hand. It's how Nature works.
I have already mentioned that the total mass – or the total energy – of the LHC proton is \(4,263\) times greater than what it is when the proton is at rest. Aside from the total mass/energy, other things get expanded or shrunk by the same factor. For example, imagine that the proton is replaced by an unstable particle (such as a pion) that lives for time \(t_0\), if measured in its rest frame. For the sake of simplicity, imagine that the total lifetime is exactly \(t_0\) instead of being statistically distributed with the right average. Let's also assume that the particle flies along a straight path for a while, instead of the circular ring.
How far can such a particle get during its lifetime? In Newton's theory, you would simply say that at the given speed \(v\), very close to the speed of light \(c\), the distance traveled would be\[
s \neq vt_0.
\] I wrote \(\neq\) because Newton's theory isn't the right theory of space and time. Instead, in the lab frame, the particle may traverse a much longer distance\[
s = \gamma vt_0.
\] It is \(\gamma\approx 4,263\) times longer than it is according to Newton's theory! There are two simple ways to explain the origin of this extra factor of \(\gamma\). In the lab frame associated with the LHC physicists, all the processes occurring "inside" the moving particle are slowed down due to the "time dilation" by the factor of \(\gamma\). That's true for the "aging process" of the particle, too. Because the particle ages \(4,263\) times more slowly, it is able to fly a distance \(4,263\) times longer during its lifetime.
The explanation of the "enhancement" is different in the particle's own inertial system but the result is the same. According to the particle's own reference frame, the longitudinal distances (in the direction of motion) are shrunk \(\gamma\approx 4,263\) times (recall the thin ellipse above). The time goes "normally" and the particle sees its lifetime as \(t_0\). So in its rest frame, it really travels over the "Newtonian" distance \(vt_0\). However, when you ultimately want to translate this distance to some actual places in France and Switzerland, you must appreciate that the particle sees shorter longitudinal (=in the direction of motion) distances due to the "Lorentz contraction", so the actual distances "on the map" are \(\gamma\approx 4,263\) times longer than they seem to be from the particle's viewpoint. Again, the distance on the map of Europe traversed over the particle's lifetime is equal to \(\gamma v t_0\).
I have discussed the mass increase, the time dilation, and the Lorentz contraction. There are lots of other unfamiliar effects at those speeds. The relativity of the simultaneity is a characteristic feature of special relativity but I won't spend much time with it in this blog entry. But there are others. For example, the proton (let's return to the proton) sees modified colors due to the Doppler effect. It's the apparent change in the frequency of a wave caused by relative motion between the source of the wave and the observer. (Reference: Sheldon. No other sitcom has ever squeezed so many valid definitions of physical concepts into a few minutes. One could argue that even most of the "popular scientific" programs contain a smaller amount of truly correct and accurate statements about physics than TBBT. Incidentally, Sheldon got accused of sexual harassment last night.)
Now, in Newton's physics, the frequency gets modified either by the factor of \(1\pm v/c\) or by the factor of \(1/(1\pm v/c)\), depending on whether the source or the observer is moving and depending on whether the motion is "away from each other" or "towards each other". In Newton's theory, it matters whether the source or the observer is moving because there's a static environment (air for sound; the notorious luminiferous aether for the light) in between which has its own preferred frame.
In special relativity, there's no luminiferous aether. There's no preferred frame. Consequently, it doesn't matter whether the source is moving, or whether the observer is moving. There is a unified formula for the Doppler change in the frequency:\[
\frac{f}{f_0} = \sqrt{ \frac{1\pm v/c}{1\mp v/c} } .
\] Note that the sign in the denominator is the opposite sign than the sign in the numerator. And if you change the sign of \(v\), it has the same effect as inverting the square root (\( f/f_0\to f_0/ f \)). Fine. So how many times do the frequencies change for our speed \(0.9999999724c\)? A simple calculation shows that the result is actually \[
\frac{f}{f_0} \approx 8,513.
\] It is no coincidence that the numerical value is approximately equal to \(2\gamma\), two times our "four thousand". That's the right approximation of the Doppler ratio for any ultrarelativistic speed (=very close to the speed of light). It means that if there is some light moving against or towards the proton, the proton observes the frequency as \(8,500\) times higher than what it is in the lab frame! If some photons are "catching up with the photon" from its back, they ultimately catch up and are "seen" by the proton but the proton sees the frequency as \(8,500\) times lower than it is in the lab frame!
So you may send two photons of the same color (according to the LHC lab frame) from two directions and the proton will think that the frequencies of the photons are different. The photon coming from the front side has \(8,513^2\approx 72,500,000\) times higher frequency than the photon coming from the opposite side. The frequency ratio observed by the proton exceeds the stunning factor of seventy-two million. And this is no science-fiction. We are talking about genuine protons at their speed they have had in the LHC. In early 2015 when the LHC restarts, the energy will be almost doubled to \(13\TeV\) – i.e. \(6.5\TeV\) per proton – so the numbers will be even more extreme: \(\gamma\approx 6,928\), \(v/c\approx 0.99999999\), \(f/f_0\approx 13,800\).
I can't resist mentioning an additional extreme number, one related to collisions. In the lab frame, we had two colliding \(4\TeV\) protons. It's interesting to ask what the oppositely moving proton looks like in our proton's reference frame. The mutual speed is calculated through the relativistic formula\[
V_{\rm relative} = \frac{v_1+v_2}{1+v_1 v_2/c^2}
\] where we want to use \(v_1=v_2=v\) for our situation to get\[
\frac{V_{\rm relative}}{c} = \frac{2v/c}{1+v^2/c^2}\approx 1-3.78\times 10^{-16}.
\] Note that the relativistic addition formula is essential here: naive Newtonian addition of the two speeds would give a result close to \(2c\). In the numerical form of \(V/c\), there are fifteen digits "nine" followed by a "six". The corresponding \(\gamma_{\rm relative}\) exceeds 36 million (this is the factor appearing in the Lorentz contraction and time dilation, too). So the left-moving proton sees its right-moving friend (or foe) as having an energy 36 million times greater than its rest mass/energy. It means that the other proton's energy is over \(31,000\TeV\) in the first proton's inertial system. However, one can't probe the energy scale at thousands of \(\TeV\)s (which would be great for the discovery of all the superpartners) because only the total energy in the center-of-mass frame determines how deeply the LHC sees into the matter – and it's been "just" \(8\TeV\).
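The head-on numbers can be verified with a sketch that avoids floating-point cancellation; the identity \(\gamma_{\rm relative} = 2\gamma^2 - 1\) for two equal and opposite speeds follows directly from the addition formula.

```python
import math

gamma = 4263.0
v = math.sqrt(1.0 - 1.0 / gamma**2)   # speed in units of c

# V_rel = 2v/(1+v^2)  =>  1 - V_rel = (1-v)^2 / (1+v^2), cancellation-free
one_minus_V = (1.0 - v)**2 / (1.0 + v * v)
gamma_rel = 2.0 * gamma**2 - 1.0      # exact identity for equal head-on speeds

print(one_minus_V)   # ~3.8e-16: fifteen nines followed by a six in V/c
print(gamma_rel)     # ~3.6e7
```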
People aren't really observing the world through the fast protons' eyes. They always see what happens from the inertial frame of the LHC detectors. However, this single frame is enough to tell us how the world actually works. And the answer is that it works in agreement with relativity. Because relativity implies that it is legitimate to switch to any other inertial frame, including the inertial system of the fast proton, the extreme perceptions of the proton that we have discussed are genuine according to the experimental evidence that the LHC is giving us.
The special theory of relativity is a mundane set of facts and the LHC is showing the special theory of relativity every day, in every collision. It just works. One could argue that people who work with the observations at the LHC have hardwired special relativity into their brains. Most of them have gotten used to Nature's being fundamentally quantum mechanical, too.
Sometime in the distant future, you may imagine an advanced civilization that will be able to push spaceships to the same speed as the speed of the protons at the LHC. Well, I actually doubt this will ever occur but what do I know. If it occurs, you may apply the ideas above to interstellar and intergalactic space travel. Due to the time dilation and/or Lorentz contraction, an astronaut will actually be able to get "almost arbitrarily far", even thousands or millions of light years away, during his lifetime. |
Restricted Boltzmann Machines (RBMs) were used in the Netflix competition to improve the prediction of user ratings for movies based on collaborative filtering.
I think I understand how to use RBMs as a generative model after obtaining the weights that maximize the likelihood of the data (in this case, of the visible units.) However, the Netflix competition had a very large number of missing ratings, so Salakhutdinov, Mnih and Hinton decided to use an RBM for each user with shared weights for users that rated the same movie. The training follows as usual, but I don't understand how to actually fill in missing entries of movies for a given user:
The paper linked above has a section titled "Making Predictions" in which the following equations are supposed to predict a rating for a new movie q:
$$p(v_{q}^{k}=1|V) \propto \sum_{h_1,...,h_F} \exp(-E(v_{q}^{k},V,h)) \\ \propto \Gamma_{q}^{k} \prod_{j=1}^{F}\sum_{h_j \in \{0,1\}}\exp\left(\sum_{il}v_{i}^{l}h_{j}W_{ij}^{l} + v_{q}^{k}h_{j}W_{qj}^{k}+h_jb_j\right)$$
This seems to be calculating the probability of an active visible unit with rating $k$ given the ratings $V$ of a single user. But I'm not sure how this helps to infer a rating for a movie that wasn't rated by a particular user, since there is no visible unit (and no attached weights) for that movie in the user's RBM.
In this PDF (page 27), the author describes prediction based on the reconstruction phase:
$$\sum_k p(v_{i}^{k}=1 \mid \mathbf{h})\times k,$$ which is very different from the previous equation. |
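To make the "Making Predictions" formula concrete, here is a hedged numpy sketch. All shapes and names are invented for illustration: `W[i, j, l]` couples movie $i$, hidden unit $j$, and rating $l$; `b_h` are hidden biases; `b_v` are visible biases playing the role of $\Gamma_q^k$. The $h_j=0$ term in each product contributes 1, hence the `log1p(exp(...))`.

```python
import numpy as np

def predict_rating_distribution(V, W, b_h, b_v, q):
    """p(v_q^k = 1 | V) for each rating k, normalised, computed in log space."""
    n_movies, n_hidden, n_ratings = W.shape
    s = np.einsum("il,ijl->j", V, W)          # sum_{i,l} v_i^l W_{ij}^l
    scores = np.array([
        b_v[q, k] + np.sum(np.log1p(np.exp(s + W[q, :, k] + b_h)))
        for k in range(n_ratings)
    ])
    p = np.exp(scores - scores.max())         # softmax, numerically stable
    return p / p.sum()

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5, 4, 3))    # 5 movies, 4 hidden units, 3 ratings
V = np.zeros((5, 3))
V[0, 1] = V[1, 2] = V[2, 0] = 1.0            # the user rated movies 0, 1, 2
p = predict_rating_distribution(V, W, np.zeros(4), np.zeros((5, 3)), q=4)
print(p)                   # distribution over the 3 ratings for unseen movie 4
print(p @ np.arange(1, 4)) # expected rating, cf. the reconstruction-based rule
```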
Conic quadratic optimization and second order cone optimization are the same thing. I prefer the name conic quadratic optimization, though.
Frequently it is asked on the internet what the computational complexity of solving conic quadratic problems is, or, relatedly, what the complexity of the algorithms implemented in MOSEK, SeDuMi, or SDPT3 is.
To the best of my knowledge, almost all open source and commercial software employs a primal-dual interior-point algorithm using, for instance, the so-called Nesterov-Todd scaling.
A conic quadratic problem can be stated in the form
\[
\begin{array}{lccl}
\mbox{min} & \sum_{j=1}^d (c^j)^T x^j & \\
\mbox{st} & \sum_{j=1}^d A^j x^j & = & b \\
& x^j \in K^j & \\
\end{array}
\]
where \(K^j\) is an \(n^j\)-dimensional quadratic cone. Moreover, I will use \(A = [A^1,\ldots, A^d ]\) and \(n=\sum_j n^j\). Note that \(d \leq n\). First observe that the problem cannot be solved exactly on a computer using floating-point numbers, since the solution might be irrational. This is in contrast to linear problems, which always have a rational solution if the data is rational.
Using, for instance, a primal-dual interior-point algorithm, the problem can be solved to \(\varepsilon\) accuracy in \(O(\sqrt{d} \ln(\varepsilon^{-1}))\) interior-point iterations, where \(\varepsilon\) is the accepted duality gap. The most famous variant having that iteration complexity is based on Nesterov and Todd's beautiful work on symmetric cones.
In each interior-point iteration, the dominant work is solving a linear system whose coefficient matrix has the form
\[ \left [ \begin{array}{cc} H & A^T \\ A & 0 \\ \end{array} \right ] \qquad (*) \]
This is the most expensive operation, and it can be done in \(O(n^3)\) operations using Gaussian elimination, so we end up at the overall complexity \(O(n^{3.5}\ln(\varepsilon^{-1}))\).
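As a toy back-of-envelope illustration of how these two bounds combine, the function below multiplies the worst-case iteration count by the dense per-iteration cost; the function name and the constant `C` are invented for this sketch.

```python
import math

def worst_case_cost(n, d, eps=1e-8, C=1.0):
    """Worst-case flop estimate: O(sqrt(d) ln(1/eps)) iterations x O(n^3) each."""
    iterations = math.sqrt(d) * math.log(1.0 / eps)
    per_iteration = n**3          # dense Gaussian elimination on the system (*)
    return C * iterations * per_iteration  # O(n^3.5 ln(1/eps)) when d ~ n

print(worst_case_cost(n=1000, d=1000))
```

In practice, as noted below, the iteration count is nearly constant (10 to 200), so the observed cost is dominated by the linear algebra, not by this worst-case bound.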
That is the theoretical result. In practice the algorithms usually work much better, because they normally finish in something like 10 to 100 iterations and rarely employ more than 200 iterations. In fact, if the algorithm requires more than 200 iterations, then typically numerical issues prevent the software from solving the problem.
Finally, a typical conic quadratic problem is sparse, which implies the linear system mentioned above can be solved much faster when the sparsity is exploited. Finding the ordering that solves the linear equation system (*) in the lowest complexity while exploiting sparsity is NP-hard, so optimization software only employs various heuristics, such as minimum-degree ordering, that help cut the per-iteration cost. If you want to know more, read my Mathematical Programming publication mentioned below. One important fact is that it is impossible to predict the iteration complexity without knowing the problem structure and then doing a complicated analysis of it; i.e., the iteration complexity is not a simple function of the number of constraints and variables unless A is completely dense.
To summarize: in practice, primal-dual interior-point algorithms solve a conic quadratic problem in less than 200 times the cost of solving the linear equation system (*).
So can the best proven polynomial complexity bound be established for software like MOSEK? In general the answer is no, because the software employs a bunch of tricks that speed up the practical performance but unfortunately destroy the theoretical complexity proof. In fact, it is commonly accepted that if the algorithm is implemented strictly as theory suggests, then it will be hopelessly slow.
I have spent a lot of time implementing interior-point methods, as documented by the Mathematical Programming publication, and my view is that the practical implementations are very close to theory. |
WHY?
In image captioning or visual question answering, the features of an image are extracted from the spatial output layer of a pretrained CNN model.
WHAT?
This paper suggests bottom-up attention using an object detection model for extracting image features.
Faster R-CNN in conjunction with ResNet-101 is used, followed by non-maximum suppression with an IoU threshold and mean-pooling. The model was pretrained on ImageNet to classify object classes and trained additionally to predict attribute classes.
The VQA model of this paper is rather simple. This model utilizes the ‘gated tanh’ layer for non-linear transformation.
$$
\begin{aligned}
f_a(x) &= \tilde{y}\circ g \\
\tilde{y} &= \tanh(Wx + b) \\
g &= \sigma(W'x + b') \\
a_i &= \mathbf{w}_a^{\top} f_a([\mathbf{v}_i, \mathbf{q}]) \\
\mathbf{h} &= f_q(\mathbf{q})\circ f_v(\hat{\mathbf{v}}) \\
p(y) &= \sigma(W_o f_o(\mathbf{h}))
\end{aligned}
$$
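A minimal numpy sketch of the gated tanh layer described by the first three equations; the weight names follow the equations, while the shapes are invented for illustration.

```python
import numpy as np

def gated_tanh(x, W, b, W_prime, b_prime):
    y_tilde = np.tanh(W @ x + b)                        # candidate activation
    g = 1.0 / (1.0 + np.exp(-(W_prime @ x + b_prime)))  # sigmoid gate in (0, 1)
    return y_tilde * g                                  # elementwise gating

rng = np.random.default_rng(0)
x = rng.normal(size=8)
out = gated_tanh(x, rng.normal(size=(4, 8)), np.zeros(4),
                 rng.normal(size=(4, 8)), np.zeros(4))
print(out.shape)  # (4,); every entry lies strictly inside (-1, 1)
```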
So?
Bottom-up attention is shown to be more useful than previous methods.
The Up-Down model showed competitive results compared to other models on the leaderboard of the VQA 2.0 challenge (ensemble). |
You do exactly the same thing: you "rotate" the state and then measure along whatever axis your measurement apparatus happens to measure. The only difference here is that the "rotation" does not necessarily correspond to a rotation in space like it does for a true spin.
What follows is a detailed description of how we do rotations of a generic 2-level system. These rotations, plus measurement along a fixed axis, yield effective measurements along any axis.
Generic example
Consider a harmonic oscillator system with $H=\hbar \omega_0(n + 1/2)$. Suppose we subject this thing to an external force
$$F(t) = F_d \cos(\omega_d t + \phi).$$
It's probably known to you that if $\omega_d=\omega_0$ then the driving will cause the system to undergo transitions between the various states. So, let's work in the case $\omega_d = \omega_0$. What is the Hamiltonian caused by this driving force? The work done by a force is $\text{force}\times \text{distance}$, so the Hamiltonian term is
$$H_d = -F(t)x = -F_d \cos(\omega_0 t + \phi) x,$$
where the minus sign comes in because an external force pointed to the right means the system has less potential energy if it goes to the right. We can rewrite the position operator as (see any intro textbook)
$$x = x_0(a + a^\dagger),$$
where $x_0$ is a characteristic length scale in the problem. Using this, the driving Hamiltonian becomes
$$H_d = -F_d x_0 \cos(\omega_0 t + \phi) (a + a^\dagger),$$
giving a full Hamiltonian
$$H = \hbar \omega_0(n + 1/2) - F_d x_0 \cos(\omega_0 t + \phi) (a + a^\dagger).$$
This is tricky because we have both the original Hamiltonian $H_0$ and a time dependent $V(t)$. To fix this we will go into a rotating frame.

The rotating frame
Suppose we have a system with Hamiltonian
$$H = H_0 + V(t).$$
There are several alternate "pictures" one can use to simplify the problem. You've probably heard of the "Heisenberg picture" and perhaps the "interaction picture". Here we develop a third picture called the "rotating frame". Consider the propagator of $H_0$:
$$U_0(t) = \exp[-itH_0/\hbar].$$
If the time dependent $V(t)$ were absent, then the time evolution of the system would be
$$|\Psi_0(t)\rangle = U_0(t)|\Psi(0)\rangle.$$
The idea of the rotating frame is to undo the part of the evolution due to $U_0$. Define a new state
$$|\Psi'(t)\rangle \equiv R(t) |\Psi_0(t)\rangle,$$
where $R(t) \equiv U_0(t)^\dagger = \exp[itH_0/\hbar]$. In other words, $R$ undoes $U_0$. Now let's track the evolution of this new thing:
$$\begin{align}i\hbar d_t |\Psi'(t)\rangle&= i\hbar d_t (R(t) |\Psi_0(t)\rangle) \\&= i\hbar \dot{R}(t) |\Psi_0(t)\rangle + i\hbar R(t) d_t |\Psi_0(t)\rangle \\&= i\hbar \dot{R}(t) R^\dagger(t)|\Psi'(t)\rangle + R(t)(H_0 + V(t))|\Psi_0(t)\rangle \\&= [i\hbar (iH_0/\hbar)R(t)R^\dagger(t) + H_0 + R(t)V(t)R^\dagger(t) ] |\Psi'(t)\rangle \\&= [R(t) V(t) R^\dagger(t)]|\Psi'(t)\rangle.\end{align}$$
We now have a simple Schrodinger equation where the effective "rotating frame Hamiltonian" is
$$H_r = R(t)V(t)R^\dagger(t).$$
The point is that the original Hamiltonian is completely gone. This leaves only a time dependent part, which makes life somewhat easier.

Back to the example
We had
$$H = \hbar \omega_0(n + 1/2) - F_d x_0 \cos(\omega_0 t + \phi) (a + a^\dagger).$$
Let's use the time independent part to make a rotating frame. The rotation operator $R$ is
$$R(t) = \exp[i\omega_0 t (n + 1/2)]$$
and the time dependent part of the Hamiltonian is
$$V(t) = -F_d x_0 \cos(\omega_0 t + \phi) (a + a^\dagger).$$
If we form the rotating frame Hamiltonian we find (not doing the algebra here) $R(t)aR^\dagger(t) = a e^{-i\omega_0 t}$, which leads to
$$H_r = R(t)V(t)R^\dagger(t) = -F_d x_0 \cos(\omega_0 t + \phi) (a e^{-i\omega_0 t} + a^\dagger e^{i\omega_0 t}).$$
If we now use
$$\cos(\omega_0 t + \phi) = \frac{1}{2} \left[ e^{i(\omega_0 t + \phi)} + e^{-i(\omega_0 t + \phi)} \right]$$
we get
$$H_r =-\frac{F_d x_0}{2}\left[ae^{i\phi} + ae^{-i(2\omega_0 t + \phi)} + a^\dagger e^{-i\phi} + a^\dagger e^{i(2\omega_0 t + \phi)}\right].$$
The two time dependent terms oscillate at high frequency and are neglected in the so-called "rotating wave approximation". This leaves
$$H_r =-\frac{F_d x_0}{2}\left[ ae^{i\phi} + a^\dagger e^{-i\phi} \right]. \qquad (*)$$
Now suppose that our system weren't quite a harmonic oscillator, so that only the first energy gap has energy spacing $\hbar \omega_0$. In that case our drive is not on resonance with any other levels, and it's an OK approximation to consider only the lowest two levels, which are on resonance with the drive. If we truncate the $a$ and $a^\dagger$ operators to the lowest two levels, they become
$$a = \left( \begin{array}{cc} 0&1\\0&0 \end{array}\right) \qquad a^\dagger = \left( \begin{array}{cc} 0&0\\1&0 \end{array} \right).$$
Substituting this into $(*)$ gives
$$H_r =-\frac{F_d x_0}{2} \left( \begin{array}{cc} 0 & e^{i\phi} \\ e^{-i\phi} & 0 \end{array} \right) = -\frac{F_d x_0}{2}(\cos(\phi)\sigma_x - \sin(\phi)\sigma_y).$$
This is the key result. Here we see $H_r$ to be a rotation of the state around an axis in the $xy$ plane. The axis angle is determined by $\phi$, which was just the phase of our initial drive signal. This means that by controlling the phase of our driving force, we can rotate the state about any axis in the $xy$ plane! Of course, except in the case of a real spin, this "rotation" is not a rotation in real 3D space. It is a rotation in the Hilbert space of the two level system, which, as you know, looks like the surface of a sphere (called the "Bloch sphere"). We have shown here how to make rotations around the $x$ and $y$ axes in that imaginary spherical space.
In order to do rotations about the $z$ axis, you just change the energy spacing between the levels. At this point I bet you can calculate exactly how that works using the formalism presented already.
In the case of electron transitions between various atomic orbitals, the physics is exactly as presented here. Usually the force comes from the electric field of a laser or RF generator acting on the charged electron. If the frequency of the laser or RF field is matched to one of the electron's transitions, i.e. $\omega_{\text{drive}} \approx \Delta E/\hbar$, then the rotating frame argument presented here goes through to show that the electron will Rabi oscillate between the two levels involved in that transition.
Homework: Explicitly compute the propagator induced by the $H_r$ we found.
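As a numerical cross-check (not the closed-form homework answer), the propagator of $H_r$ can be exponentiated directly; here $\hbar = 1$ and $\Omega$ stands for $F_d x_0$, a naming choice made for this sketch.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def propagator(Omega, phi, t):
    """U(t) = exp(-i H_r t) for H_r = -(Omega/2)(cos(phi) sx - sin(phi) sy)."""
    H_r = -(Omega / 2.0) * (np.cos(phi) * sx - np.sin(phi) * sy)
    return expm(-1j * H_r * t)

# A "pi pulse" (Omega * t = pi) should take |0> completely to |1>,
# regardless of the drive phase phi (which only sets the rotation axis):
Omega, phi = 1.0, 0.3
psi = propagator(Omega, phi, np.pi / Omega) @ np.array([1.0, 0.0])
print(abs(psi[1])**2)   # ~1.0: full population transfer
```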
Note: The rotating wave approximation is good as long as the frequency of rotation about the Bloch sphere induced by the drive is much less than $|E_1 - E_2|/\hbar$. |
How would you find the argument of the following number. $$\cos{2}-i\sin{2}$$
I'm aware that complex numbers in the form $r(\cos{\theta}+i\sin{\theta})$ have an argument of $\theta$, but what do you do with the $-$ sign?
Regardless of method, you will demonstrate some knowledge of Euler's identity and/or trigonometric identities.
The argument is the arctangent of the ratio of the imaginary component to the real component,
accounting for quadrant. If you forget to account for quadrant, this gives arguments in the interval $(-\pi/2, \pi/2)$, "half" of which are wrong. (Recall that arctangent is not a single-valued inverse. This is obvious when one remembers that tangent is periodic.)
Since $\cos 2 < 0$ and $-\sin 2 < 0$, this is in quadrant III.
\begin{align*} \arctan \frac{-\sin 2}{\cos 2} &= \arctan(-\tan 2) \\ &= - \arctan \tan 2 &&\text{$\arctan$ is an odd function}\\ &= - \arctan \tan (2 - \pi) &&\text{$\tan$ has period $\pi$}\\ &= -(2 - \pi +\pi k), k \in \mathbb{Z} &&-\pi/2 < 2 - \pi < \pi/2 \text{.} \end{align*} Choosing $k=1$, this lands in quadrant III, giving the argument $-2$. If we had just mechanically evaluated, say with a calculator, \begin{align*} \arctan \frac{-\sin 2}{\cos 2} &= \arctan(2.18504\dots) \\ &= 1.14159\dots{} + \pi k, k \in \mathbb{Z} \text{.} \end{align*} The subset of these in quadrant III have $k$ odd. Picking $k = -1$, we get $-2$.
Of all the $k$ that give a result in the correct quadrant, which do you pick? You pick the one that conforms to your convention for the range of values of an argument, if you have such a convention. If you do not, pick the one that makes your subsequent steps easier (or don't pick, and leave the result as an equivalence class $\mod 2 \pi$).
If your only problem is the wrong sign of imaginary component, use conjugation. \begin{align*} \arg(\cos 2 - \mathrm{i} \sin 2) &= \arg(\overline{\cos 2 + \mathrm{i} \sin 2}) \\ &= \arg(\overline{\mathrm{e}^{\mathrm{i} 2}}) \\ &= -\arg(\mathrm{e}^{\mathrm{i} 2}) \\ &= -2 \text{.} \end{align*}
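A quick numerical sanity check of this result (a sketch in Python; `cmath.phase` returns the principal argument in $(-\pi, \pi]$, which happens to contain $-2$):

```python
import cmath
import math

# cos(2) - i sin(2) = e^(-2i), so its principal argument should be -2
z = complex(math.cos(2), -math.sin(2))
print(cmath.phase(z))  # -> -2.0 (to floating-point precision)
```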
Sine is odd. Cosine is even. This is expressed in the even-odd identities. So $\cos(-2) = \cos(2)$ and $\sin(-2) = -\sin(2)$. Consequently, \begin{align*} \cos 2 - \mathrm{i} \sin 2 &= \cos -2 + \mathrm{i} \sin -2 \\ &= \mathrm{e}^{\mathrm{i}(-2)} \text{,} \end{align*} having argument $-2$.
Brahadeesh's and MrYouMath's answers use this method, without identifying what was done. Michael Rozenberg's answer combines this with the periodicity (by $2\pi$) identity, again without identifying what was done.
You can write this complex number as $\cos(-2) + i \sin(-2)$. Now you can see that the argument is $-2$.
$z=r(\cos\theta+i\sin\theta)$, where $r\geq0$ and $\theta=\arg{z}\in[0,2\pi)$.
$$\cos2-i\sin2=\cos(2\pi-2)+i\sin(2\pi-2).$$
Thus, $\arg(\cos2-i\sin2)=2\pi-2$.
In the last days I've been studying the tensor algebra $T(V)$ of a vector space $V$ over the field $K$ and I've realised that what I'm not understanding hasn't to do with tensor products, but rather with graded algebras built from direct sums.
Following the definition of wikipedia, a graded algebra is a graded vector space that is also a graded ring, that is, a vector space $V$ that can be decomposed as a direct sum
$$V=\bigoplus_{n\in\mathbb{N}}V_n$$
and with the property that if $\odot$ denotes the multiplication, then $V_n\odot V_m \subseteq V_{n+m}$.
That's fine, however what makes me in doubt is the following: suppose we have a collection of vector spaces $\{V_i, i\in\mathbb{N}\}$ and we know how to define a multiplication $\odot: V_n\times V_m\to V_{n+m}$ that satisfies the axioms of the multiplication of a ring.
Then, we can build the vector space
$$V=\bigoplus_{n\in\mathbb{N}}V_n,$$
which is a graded vector space, because if $i_n : V_n\to V$ is the canonical injection then $V$ is the internal direct sum of all $i_n(V_n)$ for $n\in \mathbb{N}$.
But how does one define multiplication in $V$? We know how to define multiplication for each two $V_n$ and $V_m$, but how can one use this to define multiplication in $V$? The elements of $V$ are sequences $(v_i)$ where each $v_i \in V_i$ and just finitely many of those $v_i$ are nonzero, but I don't know what to do with this together with the maps $\odot$ to define the multiplication of $V$ turning it into a graded ring also.
How is that usually done?
Thanks very much in advance!
I have to find this limit without using l'Hôpital's rule:
$$\lim_{x\to0}\frac{\sqrt{5x+3}-\sqrt 3}{5^{\sin(7x)}-1}$$
I have no idea how to do it. L'Hôpital's rule makes it easy, but how should I go about calculating this without using the rule?
I should add that I'm not supposed to use anything but the most basic methods and facts. I was thinking I should use the squeeze theorem, but I'm not seeing any estimates that work.
I'm always a mess with the upstairs and downstairs notation. To be specific, say I want to calculate the Euler-Lagrange equations of
\begin{equation} \mathcal{L} = \frac{1}{2}\partial^\mu\phi \partial_\mu \phi = \frac{1}{2}(\partial_\mu\phi)^2.\tag{1} \end{equation}
So the partial derivative with respect to $\phi$ is zero and
$$ \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} = \partial^\mu\phi. \tag{2}$$
Why is the index $\mu$ upstairs and not downstairs? If I use the metric $(+,-,-,-)$ and $\partial_\mu = (\frac{\partial}{\partial t}, \nabla)$, then $\partial^\mu$ should have a $-\nabla$, right? Why in this case can I make $\partial_\mu \phi = \partial^\mu \phi$?
The light due to Hawking radiation will only ever be detected from very tiny black holes. The Hawking radiation scales as the inverse square of the black hole mass, but the radiation causes the black hole mass to decrease. This causes accelerated emission, such that all tiny black holes will go through a phase of emitting all their rest mass as $10^{22}$ J of radiation in their final 1 second before evaporation. It is possible that such flashes could be detectable.
However, stellar mass black holes evaporate on timescales vastly longer than the age of the universe, and their Hawking radiation remains undetectable.
Another problem is that radiation emitted just above the event horizon would be red shifted to such an extent that a telescope could not see it.
Thus a black hole will be black, but if it is an accreting black hole it will be surrounded by a disc of accreting matter that is likely hot and emitting copious radiation. If so, it may be possible to "see" the black hole based on its dramatic distortion of the light from the disc. Or, if it were not accreting it would produce a dramatic distortion of the background starlight. Just google a picture of the black hole from the movie "Interstellar" to see an approximate example. This distortion is at a scale just a bit bigger than its event horizon.
To arrive at such a picture would need a telescope capable of resolving the event horizon, of radius $3 M/M_{\odot}$ km. The nearest black hole
could be only of order 10 parsecs away (if we could find it), assuming that most 20-plus solar-mass stars end their lives in this way. The angular size of the event horizon (assuming a 10 solar mass black hole) would be $2\times 10^{-13}$ radians. A telescope's resolving capability is given approximately by $\lambda/D$, where $\lambda$ is the wavelength it works at and $D$ is the telescope diameter.
Thus to resolve an (as yet undiscovered) nearby 10 solar mass black hole at 10 pc requires an optical ($\lambda = 500$ nm) telescope of diameter 2500 km.
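The arithmetic behind these numbers can be sketched in a few lines (assuming the roughly 3 km-per-solar-mass Schwarzschild radius and the $\lambda/D$ diffraction limit used above; the function name is my own):

```python
PARSEC = 3.086e16        # metres per parsec
RS_PER_MSUN = 3.0e3      # Schwarzschild radius per solar mass, ~3 km in metres

def required_aperture(mass_msun, distance_pc, wavelength=500e-9):
    """Telescope diameter needed to resolve an event horizon: D = wavelength / theta."""
    horizon_diameter = 2 * RS_PER_MSUN * mass_msun       # metres
    theta = horizon_diameter / (distance_pc * PARSEC)    # angular size in radians
    return wavelength / theta                            # metres

# 10 solar masses at 10 pc: angular size ~2e-13 rad, needing a ~2500 km optical aperture
print(required_aperture(10, 10) / 1e3, "km")
```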
A better bet would be the black hole at the centre of our Galaxy, which should have an event horizon of angular size about $10^{-10}$ radians, requiring an optical telescope diameter of a mere 5 km!
Yes. It’s the apparent change in the frequency of a wave caused by relative motion between the source of the wave and the observer.
-Sheldon Cooper
In the “Middle-Earth Paradigm” episode, Sheldon Cooper dresses as the “Doppler Effect” for Penny’s Halloween party. The Doppler Effect (or Doppler Shift) describes the change in pitch or frequency that results as a source of sound moves relative to an observer; this relative motion can mean either that the source moves while the observer is stationary, or vice versa. It is commonly heard when a siren approaches and recedes from an observer. As the siren approaches, the pitch sounds higher, and it lowers as the siren moves away. This effect was first proposed by Austrian physicist Christian Doppler in 1842 to explain the color of binary stars.
In 1845, Christophorus Henricus Diedericus (C. H. D.) Buys-Ballot, a Dutch chemist and meteorologist, conducted the famous experiment to prove this effect. He assembled a group of horn players on an open cart attached to a locomotive. Ballot then instructed the engineer to rush past him as fast as he could while the musicians played and held a constant note. As the train approached and receded, Ballot noted that the pitch changed as he stood and listened on the stationary platform.
Physics of the Doppler Effect
As a stationary sound source produces sound waves, its wave-fronts propagate away from the source at a constant speed, the speed of sound. This can be seen as concentric circles moving away from the center. All observers will hear the same frequency, the frequency of the source of the sound.
When either the source or the observer moves relative to the other, the frequency of the sound that the source emits does not change, but rather the observer hears a change in pitch. We can think of it in the following way. If a pitcher throws balls to someone across a field at a constant rate of one ball a second, the person will catch those balls at the same rate (one ball a second). Now if the pitcher runs towards the catcher, the catcher will catch the balls faster than one ball per second. This happens because as the pitcher moves forward, he closes the distance between himself and the catcher. When the pitcher tosses the next ball, it has to travel a shorter distance and thus travels for a shorter time. The opposite is true if the pitcher were to move away from the catcher.
Now suppose that instead of the pitcher moving towards the catcher, the pitcher stays stationary and the catcher runs forward. As the catcher runs forward, he closes the distance between himself and the pitcher, so the time it takes for the ball to travel from the pitcher’s hand to the catcher’s mitt is decreased. In this case, too, it means that the catcher will catch the balls at a faster rate than the pitcher tosses them.
Sub Sonic Speeds
We can apply the same idea of the pitcher and catcher to a moving source of sound and an observer. As the source moves, it emits sounds waves which spread out radially around the source. As it moves forward, the wave-fronts in front of the source bunch up and the observer hears an increase in pitch. Behind the source, the wave-fronts spread apart and the observer standing behind hears a decrease in pitch.
The Doppler Equation
When the speeds of source and the receiver relative to the medium (air) are lower than the velocity of sound in the medium, i.e. moves at sub-sonic speeds, we can define a relationship between the observed frequency, \(f\), and the frequency emitted by the source, \(f_0\).
\[f = f_{0}\left(\frac{v + v_{o}}{v + v_{s}}\right)\] where \(v\) is the speed of sound, \(v_{o}\) is the velocity of the observer (this is positive if the observer is moving towards the source of sound) and \(v_{s}\) is the velocity of the source (this is positive if the source is moving away from the observer).
Source Moving, Observer Stationary
We can now use the above equation to determine how the pitch changes as the source of sound moves
towards a stationary observer, i.e. \(v_{o} = 0\). Since the source moves towards the observer, \(v_{s}\) is negative, say \(v_{s} = -|v_{s}|\), and \[f = f_{0}\left(\frac{v}{v - |v_{s}|}\right)\] Because \(v - |v_{s}| < v\), the ratio \(v/(v - |v_{s}|)\) is larger than 1, which means the pitch increases.
Source Stationary, Observer Moving
Now if the source of sound is still and the observer moves towards the sound, we get:
\[f = f_{0}\left( \frac{v + v_{o}}{v} \right)\] \(v_{o}\) is positive as the observer moves towards the source. The numerator is larger than the denominator, which means that \((v + v_{o})/v\) is greater than 1. The pitch increases.
Speed of Sound
As the source of sound moves at the speed of sound, the wave fronts in front become bunched up at the same point. The observer in front won’t hear anything until the source arrives. When the source arrives, the pressure front will be very intense and won’t be heard as a change in pitch but as a large “thump”.
The observer behind will hear a lower pitch as the source passes by.
\[f = f_{0}\left( \frac{v + 0}{v + v} \right) = 0.5 f_{0}\]
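The cases worked out above can be checked with a tiny helper (a sketch; the 440 Hz tone and 343 m/s sound speed are illustrative values of my own, and the sign conventions follow the Doppler equation given earlier):

```python
def doppler(f0, v, v_o=0.0, v_s=0.0):
    """Observed frequency f = f0 (v + v_o) / (v + v_s).

    v_o > 0: observer moving towards the source.
    v_s > 0: source moving away from the observer.
    """
    return f0 * (v + v_o) / (v + v_s)

V_SOUND = 343.0  # m/s in air at about 20 C

# Source approaching at 30 m/s (v_s negative): the pitch rises
print(doppler(440.0, V_SOUND, v_s=-30.0))    # ~482 Hz

# Source receding at exactly the speed of sound: half the pitch
print(doppler(440.0, V_SOUND, v_s=V_SOUND))  # 220.0 Hz
```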
Early jet pilots flying at the speed of sound (Mach 1) reported a noticeable “wall” or “barrier” had to be penetrated before achieving supersonic speeds. This “wall” is due to the intense pressure front, and flying within this pressure front produced a very turbulent and bouncy ride. Chuck Yeager was the first person to break the sound barrier when he flew faster than the speed of sound in the Bell X-1 rocket-powered aircraft on October 14, 1947.
As the science of super-sonic flight became better understood, engineers made a number of changes to aircraft design that led to the disappearance of the “sound barrier”. Aircraft wings were swept back and engine performance increased. By the 1950s combat aircraft could routinely break the sound barrier.
Super-Sonic
As the sound source breaks through and moves past the “sound barrier”, the source now moves faster than the sound waves it creates and leads the advancing wavefront. The source will pass the observer before the observer hears the sound it creates. As the source moves forward, it creates a Mach cone. The intense pressure front on the Mach cone creates a shock wave known as a “sonic boom”.
Twice the Speed of Sound
Something interesting happens when the source moves towards the observer at twice the speed of sound: the tone becomes time reversed. If music was being played, the observer will hear the piece with the correct tone but played backwards. This was first predicted by Lord Rayleigh in 1896.
We can see this by using the Doppler Equation.
\[f = f_{0}\left(\frac{v}{v-2v}\right)\] This reduces to \[f=-f_{0}\] which is negative because the sound is time reversed, or heard backwards.
Applications
Radar Gun
The Doppler Effect is used in radar guns to measure the speed of motorists. A radar beam is fired at a moving target as it approaches or recedes from the radar source. The moving target then reflects the Doppler-shifted radar wave back to the detector and the frequency shift measured and the motorist’s speed calculated.
We can combine both cases of the Doppler equation to give us the relationship between the reflected frequency (\(f_{r}\)) and the source frequency (\(f\)):
\[f_{r} = f \left(\frac{c+v}{c-v}\right)\] where \(c\) is the speed of light and \(v\) is the speed of the moving vehicle. The difference between the reflected frequency and the source frequency is too small to be measured accurately so the radar gun uses a special trick that is familiar to musicians – interference beats.
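To get a feel for why the direct shift is so hard to measure (a sketch; the 24 GHz carrier and 30 m/s vehicle speed are illustrative values of my own, not from the text):

```python
C = 3.0e8  # speed of light, m/s

def reflected_shift(f, v):
    """Shift f_r - f = f * 2v / (c - v) for a target approaching at speed v."""
    return f * 2 * v / (C - v)

f_carrier = 24.0e9  # hypothetical 24 GHz radar carrier
v_car = 30.0        # ~108 km/h

df = reflected_shift(f_carrier, v_car)
print(df)              # ~4800 Hz
print(df / f_carrier)  # ~2e-7: a tiny fractional shift, hence the beat-note trick
```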
To tune a piano, the pitch can be adjusted by changing the tension on the strings. By using a tuning instrument (such as a tuning fork) which can produce a sustained tone over time, a beat frequency can be heard when it is placed next to the vibrating piano wire. The beat frequency is an interference between two sounds with slightly different frequencies and can be heard as a periodic change in volume over time. This frequency tells us how far off the piano strings are compared to the reference (tuning fork).
To detect this change, a radar gun does something similar. The returning wave is “mixed” with the transmitted signal to create a beat note. This beat signal or “heterodyne” is then measured and the speed of the vehicle calculated. The change in frequency, the difference between \(f_{r}\) and \(f\), or \(\Delta f\), is
\[f_{r} - f = f\frac{2v}{c-v}\] Since the speed of the vehicle, \(v\), is tiny compared to the speed of light, \(c\), we can approximate \(c - v \approx c\), giving \[\Delta f \approx f\frac{2v}{c}\] By measuring this frequency shift or beat frequency, the radar gun can calculate and display a vehicle’s speed.
“I am the Doppler Effect”
The Doppler Effect is an important principle in physics and is used in astronomy to measure the speeds at which galaxies and stars are approaching or receding from us. It is also used in plasma physics to estimate the temperature of plasmas. Plasmas are one of the four fundamental states of matter (the others being solid, liquid, and gas) and are made up of very hot, ionized gases. Their composition can be determined by the spectral lines they emit. As each particle jostles about, the light emitted by each particle is Doppler shifted and is seen as a broadened spectral line. This line shape is called a Doppler profile, and the width of the line is proportional to the square root of the temperature of the plasma gas. By measuring the width, scientists can infer the gas’ temperature.
We can now understand Sheldon’s fascination with the Doppler Effect, as he aptly explains and demonstrates its effects. As an emergency vehicle approaches an observer, its siren will start out with a higher pitch and slide down as it passes and moves away from the observer. This can be heard as the (confusing) sound he demonstrates to Penny’s confused guests.
Clojure Numerics, Part 3 - Special Linear Systems and Cholesky Factorization
September 18, 2017
In the last article we have learned to solve general linear systems, assuming that the matrix of coefficients is square, dense, and unstructured. We have also seen how computing the solution is much faster and easier when we know that the matrix is triangular. These are pretty general assumptions, so we are able to solve any well-defined system. We now explore how additional knowledge about the system can be applied to make it faster. The properties that we are looking for are symmetry, definiteness, and bandedness.
Before I continue, a few reminders:
Include Neanderthal library in your project to be able to use ready-made high-performance linear algebra functions. Read articles in the introductory Clojure Linear Algebra Refresher series. This is the third article in a more advanced series. The first two are Clojure Numerics 1: Use Matrices Efficiently and General Linear Systems and LU Factorization.
The namespaces we'll use:
(require '[uncomplicate.neanderthal [native :refer [dge dsy dsb dsp]] [linalg :refer [trf trs]]]) Symmetry
Recall from the last article that a general square system is solved by first doing LU factorization to destructure the system into two triangular forms (L and U), and then solving those two triangular systems (\(Ly=b\), then \(Ux=y\)). To fight the inherent instability of floating-point computations (as a non-perfect approximation of "ideal" real numbers), the algorithm does pivoting and row interchange. This is a burden not only because of the additional stuff (pivots) to carry around, but also because it requires additional memory reading and writing (we remember that this is more expensive than mere FLOPS). That's why we would like to use computational shortcuts that do less pivoting, or no pivoting at all.
One such shortcut is that if \(A\) is symmetric and has an LU factorization, \(U\) is a row scaling of \(L^T\). More precisely, \(A=LDL^T\), where \(D\) is a diagonal matrix. Since the system is symmetric, \(A=UDU^T\), too. From a theoretical perspective, it might not make much difference, but in implementation it is important. If the symmetric matrix data is stored in the lower triangle, that triangle will be (re)used for storing the factorization. Likewise for an upper symmetric matrix. Those two are equal but are obviously not the same (= vs identical? in Clojure).
It is simpler to do with Neanderthal than it looks from the previous description. Practically, there is nothing required of you, but to create a symmetric matrix. When you call the usual factorization and solver functions, this will be taken care of automatically.
(let [a (dsy 3 [3 5 3 -2 2 0] {:layout :row :uplo :lower})
      fact (trf a)
      b (dge 3 2 [-1 4 0.5 0 2 -1] {:layout :row})]
  [a fact (trs fact b)])
'(#RealUploMatrix(double type:sy mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       3.00    *       *
   →       5.00    3.00    *
   →      -2.00    2.00    0.00
   ┗                               ┛
  #uncomplicate.neanderthal.internal.common.LUFactorization(
   :lu #RealUploMatrix(double type:sy mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       3.00    *       *
   →       5.00    3.00    *
   →       1.00   -1.00    4.00
   ┗                               ┛
   :ipiv #IntegerBlockVector(int n:3 offset: 0 stride:1)(-2 -2 3)
   :master true :fresh #atom(true 0x6a306829))
  #RealGEMatrix(double mxn:3x2 layout:row offset:0)
   ▤       ↓       ↓       ┓
   →      -0.53    0.50
   →       0.47    0.00
   →       0.88   -1.25
   ┗                       ┛
  )
As you can see, it's all the same as for general dense matrices, with Neanderthal taking care to preserve the optimized symmetrical structure of :lu.
Positive definite systems and Cholesky factorization
The previous shortcut is cute, but nothing to write home about. Fortunately, there is a much better optimization available for a special subset of symmetric matrices - those that are positive definite.
A matrix \(A\in{R^{n\times{n}}}\) is positive definite if \(x^TAx>0\) for all nonzero \(x\in{R^n}\), positive semidefinite if \(x^TAx\geq{0}\), and indefinite if there are \(x_1,x_2\in{R^n}\) such that \((x_1^TAx_1)(x_2^TAx_2)<0\). Huh? So, is my system positive definite? How would I know that?
Before I show you that, let me tell you that the reason why symmetric positive definite matrices are handy is that for them, there is a special factorization available - Cholesky factorization - which preserves symmetry and definiteness. Now, it turns out that discovering whether a matrix is positive definite is not easy. That's why I will not try to explain here how to do that, nor would it help you in practical work. The important (and fortunate) thing is that you don't have to care; Neanderthal will determine that automatically, and return the Cholesky factorization if possible. If not, it will return the \(LDL^T\) (or \(UDU^T\))!
Let's see how to do this in Clojure:
(let [a (dsy 3 [1 1 2 1 2 3] {:layout :row :uplo :lower})
      fact (trf a)
      b (dge 3 2 [-1 4 0.5 0 2 -1] {:layout :row})]
  [a fact (trs fact b)])
'(#RealUploMatrix(double type:sy mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       1.00    *       *
   →       1.00    2.00    *
   →       1.00    2.00    3.00
   ┗                               ┛
  #uncomplicate.neanderthal.internal.common.PivotlessLUFactorization(
   :lu #RealUploMatrix(double type:sy mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       1.00    *       *
   →       1.00    1.00    *
   →       1.00    1.00    1.00
   ┗                               ┛
   :master true :fresh #atom(true 0x8cf263e))
  #RealGEMatrix(double mxn:3x2 layout:row offset:0)
   ▤       ↓       ↓       ┓
   →      -2.50    8.00
   →       0.00   -3.00
   →       1.50   -1.00
   ┗                       ┛
  )
Notice how the code is virtually the same as in the previous example. The only thing that is different is the data. In this example, Neanderthal could do the Cholesky factorization, instead of the more expensive LU with symmetric pivoting. Later it adapted the solver to use the available factorization for solving the linear equation, but everything went automatically.
In fact, Cholesky factorization is a variant of LU, just like \(LDL^T\) is. The difference is that L and U in Cholesky are \(G\) and \(G^T\). Notice: L is G, U is a transpose of G, and there is no need for the D in the middle. Also, no pivoting is necessary, which makes Cholesky quite nice; compact and efficient.
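The \(A = GG^T\) structure is easy to see concretely outside of Neanderthal; here is a quick NumPy sketch (the example matrix is my own):

```python
import numpy as np

# A small symmetric positive definite matrix
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# The Cholesky factor G is lower triangular; L is G, U is G's transpose,
# and no diagonal D or pivoting is needed
G = np.linalg.cholesky(A)

print(G)
print(np.allclose(G @ G.T, A))  # -> True
```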
There are more subtle controls related to this in Neanderthal; look at the treasure trove of API documentation and tests. Writing these blog posts takes time and energy, and now I don't feel like taking too much time delving more into details related to this. :)
Of course, sv is also available, as well as destructive variants of the functions I've shown, and with them, too, you can rely on Neanderthal to select the appropriate algorithm automatically. I didn't show those here because that is used virtually in the same way as in the examples I've shown in previous articles. Let it be a nice easy exercise to try them on your own.
Banded Systems
Are there more optimizations? Sure! One of them is for banded systems. A matrix is banded when all of its non-zero elements are concentrated in a narrow (relative to the whole matrix) band around the diagonal. Of course, the implementation does not care how narrow: even a completely dense matrix could be stored as banded, in the sense that the band covers the whole matrix. However, for the implementation to have more performance instead of much less, the band should not be too obese. For example, if the matrix is in \(R^{100\times{100}}\), a band 5 diagonals wide is obviously exploitable, while a band 50 elements wide is probably not (but test that yourself for your use cases).
Now, the cool thing is that the stuff I've shown you for the general, triangular, and symmetric matrices also work with banded matrices:
The most desirable case is a triangular banded matrix, since it does not need to be factorized at all. Then, if the matrix is symmetric banded, Neanderthal offers Cholesky if possible, with the LU fallback. And, for general banded matrices, it does the banded LU.
The best of all, it's all automatic (I'm only showing the symmetric case here):
(let [a (dsb 9 3 [1.0 1.0 1.0 1.0 2.0 2.0 2.0 1.0 3.0 3.0 2.0 1.0
                  4.0 3.0 2.0 1.0 4.0 3.0 2.0 1.0 4.0 3.0 2.0 1.0
                  4.0 3.0 2.0 4.0 3.0 4.0])
      fact (trf a)
      b (dge 9 3 [4.0 0.0 1.0 8.0 0.0 1.0 12.0 0.0 0.0
                  16.0 0.0 1.0 16.0 0.0 0.0 16.0 0.0 -1.0
                  15.0 1.0 0.0 13.0 1.0 -2.0 10.0 2.0 -3.0]
             {:layout :row})]
  [a fact (trs fact b)])
'(#RealBandedMatrix(double type:sb mxn:9x9 layout:column offset:0)
   ▥       ↓       ↓       ↓       ↓       ↓       ─
   ↘       1.00    2.00    3.00    4.00    4.00    ⋯
   ↘       1.00    2.00    3.00    3.00    3.00
   ↘       1.00    2.00    2.00    2.00    2.00
   ↘       1.00    1.00    1.00    1.00    1.00
   ┗                                               ┛
  #uncomplicate.neanderthal.internal.common.PivotlessLUFactorization(
   :lu #RealBandedMatrix(double type:sb mxn:9x9 layout:column offset:0)
   ▥       ↓       ↓       ↓       ↓       ↓       ─
   ↘       1.00    1.00    1.00    1.00    1.00    ⋯
   ↘       1.00    1.00    1.00    1.00    1.00
   ↘       1.00    1.00    1.00    1.00    1.00
   ↘       1.00    1.00    1.00    1.00    1.00
   ┗                                               ┛
   :master true :fresh #atom(true 0x39b524ea))
  #RealGEMatrix(double mxn:9x3 layout:column offset:0)
   ▥       ↓       ↓       ↓       ┓
   →       1.00    1.00    1.00
   →       1.00   -1.00    0.00
   →       ⁙       ⁙       ⁙
   →       1.00   -1.00    0.00
   →       1.00    1.00   -1.00
   ┗                               ┛
  )
I created a banded symmetric matrix a, with dimensions \(9\times 9\). The band consists of the main diagonal and 3 sub-diagonals. When it comes to storage, it means that instead of storing all 81 elements, only \(9+8+7+6=30\) non-zero elements are stored (even though the band was not particularly thin). When Neanderthal prints the matrix, it prints the diagonals horizontally (to avoid printing a bunch of zero entries). To avoid confusion, notice how Neanderthal prints the ↘ symbol for the printed rows and ↓ for columns to indicate that (in the case of column-major matrices) diagonals are printed horizontally, and columns vertically. Also note how this particular system is positive definite, and we get a nice Cholesky, which preserves the band!
Packed matrices
As a bonus, let me mention that Neanderthal supports packed dense storage, which can come in handy when memory is scarce. If we work with dense symmetric or triangular matrices that cannot be compressed into bands because they do not have many zero elements, we can still save half the space by storing only the lower or the upper half, and leaving out the half that is never accessed.
(let [a (dsp 3 [1 1 2 1 2 3] {:layout :row :uplo :lower})
      fact (trf a)
      b (dge 3 2 [-1 4 0.5 0 2 -1] {:layout :row})]
  [a fact (trs fact b)])
'(#RealPackedMatrix(double type:sp mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       1.00    .       .
   →       1.00    2.00    .
   →       1.00    2.00    3.00
   ┗                               ┛
 #uncomplicate.neanderthal.internal.common.PivotlessLUFactorization
 (:lu #RealPackedMatrix(double type:sp mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       1.00    .       .
   →       1.00    1.00    .
   →       1.00    1.00    1.00
   ┗                               ┛
  :master true :fresh #atom(true 0x65376e65))
 #RealGEMatrix(double mxn:3x2 layout:row offset:0)
   ▤       ↓       ↓       ┓
   →      -2.50    8.00
   →       0.00   -3.00
   →       1.50   -1.00
   ┗                       ┛
)
Hey, this example is virtually the same as when we used a dense symmetric matrix! That's right: Neanderthal can sort these kinds of things out without bothering you. Now, how cool is that? I don't know, but it is certainly very useful…
Use packed storage with caution, though: it saves only half the space, while many important operations, such as matrix multiplication, are noticeably slower than when working with plain dense triangular or symmetric matrices. Some operations can be faster, so YMMV; experiment with the use cases that you are interested in.
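The index arithmetic behind packed storage is simple enough to sketch in a few lines of Python (my own illustration of row-major lower-triangle packing, not Neanderthal's internals); note that the flat list below holds exactly the [1 1 2 1 2 3] from the example above:

```python
def packed_index(i, j):
    """Map (i, j), with j <= i, of a lower-triangular matrix stored
    row by row into its position in the packed 1-D array."""
    if j > i:
        raise ValueError("only the lower triangle is stored")
    return i * (i + 1) // 2 + j

# A 3x3 symmetric matrix packs its 6 lower-triangle elements into a flat list:
packed = [1.0, 1.0, 2.0, 1.0, 2.0, 3.0]   # rows: [1], [1 2], [1 2 3]

def sym_get(i, j):
    """Read a symmetric matrix element from packed storage."""
    i, j = max(i, j), min(i, j)           # reflect the upper triangle
    return packed[packed_index(i, j)]

assert sym_get(2, 0) == 1.0 and sym_get(0, 2) == 1.0   # symmetry via reflection
assert sym_get(2, 2) == 3.0
```

An n×n symmetric matrix thus needs only n(n+1)/2 slots, at the price of the extra index computation on every access, which is one reason multiplication on packed matrices tends to be slower.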
(Tri)diagonal storage
Everything described so far can be compacted and exploited even more. As an exercise, please look at diagonal (gd), tridiagonal (gt), diagonally dominant tridiagonal (dt), and symmetric tridiagonal (st) matrices, and try them out with the linear solvers. Yes, they are also supported…
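To see why (tri)diagonal systems are so cheap to solve, here is the classic O(n) Thomas algorithm in plain Python (a sketch of the general idea, assuming diagonal dominance so that no pivoting is needed; this is not Neanderthal's implementation):

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) by forward elimination and
    back substitution (Thomas algorithm). `sub`/`sup` are the sub- and
    super-diagonals (length n-1), `diag` the main diagonal (length n).
    Assumes no pivoting is needed (e.g. a diagonally dominant matrix)."""
    n = len(diag)
    d, r = list(diag), list(rhs)
    for i in range(1, n):                 # eliminate the sub-diagonal
        w = sub[i - 1] / d[i - 1]
        d[i] -= w * sup[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

# [2 -1 0; -1 2 -1; 0 -1 2] x = [1, 0, 1]  ->  x = [1, 1, 1]
x = solve_tridiagonal([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
assert all(abs(v - 1.0) < 1e-12 for v in x)
```

Compare that with the O(n³) cost of a general dense LU: this is the payoff for telling the library about your matrix's structure instead of using a general-purpose solver.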
What to expect next
We've taken a look at solving dense, banded, and packed systems described by general rectangular matrices, symmetric matrices, and triangular matrices. Now we can quickly and easily solve almost any kind of linear system that has a unique solution. Hey, but what if our system has fewer equations than unknowns, or if we have too many equations? Stay tuned, since I'll discuss these topics soon. Somewhere in the middle, I'll probably squeeze in a post explaining the details of those storage and matrix types, and how to read the symbols when matrices are printed (although I think it is intuitive enough that you've probably already picked up the most important details).
Until then, create a learning playground project, include Neanderthal, and have a happy hacking day! |
Can anybody tell me what is known about the classification of abelian transitive groups of the symmetric groups?
Let $G$ be an abelian transitive subgroup of the symmetric group $S_n$. Show that $G$ has order $n$.
Thanks for your help!
The following solution only needs basic group theory.
Let $G$ be a transitive abelian subgroup of $S_n$. By transitivity, for each $i\in\{1,\ldots,n\}$ there is a $\sigma\in G$ such that $\sigma(1) = i$. So $\# G\geq n$.
Assume that $\#G > n$. Then, by pigeonhole, there are $\sigma, \tau\in G$ with $x := \sigma(1) = \tau(1)$ and $\sigma\neq \tau$. Since $\sigma\neq\tau$, there is a $y\in\{1,\ldots,n\}$ with $\sigma(y) \neq \tau(y)$. From transitivity we get a $\pi\in G$ with $\pi(x) = y$.
Now $$ \pi\tau\pi\sigma(1) = \pi\tau\pi(x) = \pi\tau(y) $$ and $$ \pi\sigma\pi\tau(1) = \pi\sigma\pi(x) = \pi\sigma(y)\text{.} $$ Because of $\tau(y) \neq \sigma(y)$, these two elements are distinct. So the elements $\pi\tau\in G$ and $\pi\sigma\in G$ do not commute, which contradicts the precondition that $G$ is abelian.
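The archetypal example is the cyclic group generated by an $n$-cycle, and the theorem is easy to spot-check by brute force (a throwaway Python script of my own, with permutations as tuples):

```python
from itertools import product

n = 6
cycle = tuple((i + 1) % n for i in range(n))      # the n-cycle (0 1 2 ... n-1)

def compose(s, t):
    """(s ∘ t)(i) = s(t(i))"""
    return tuple(s[t[i]] for i in range(n))

# Generate the subgroup <cycle> of S_n by repeated composition.
G, g = [], tuple(range(n))                         # start from the identity
while g not in G:
    G.append(g)
    g = compose(cycle, g)

assert len(G) == n                                 # order n, as the theorem predicts
assert all(compose(s, t) == compose(t, s) for s, t in product(G, G))   # abelian
assert {s[0] for s in G} == set(range(n))          # transitive: the orbit of 0 is everything
```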
The question is answered by user641 in the comments.
Given our hypotheses, orbit–stabilizer gives $\{1,\cdots,n\}\cong^\dagger G/H$, where $H$ is the stabilizer of a point. Since $G$ is abelian, every point stabilizer is conjugate to $H$ and hence equal to $H$, so $H$ lies in the kernel of the action; the action of a subgroup of $S_n$ is faithful, so $H=1$. Thus $\{1,\cdots,n\}\cong G/1$, so $|G|=n$.
($^\dagger$A morphism of $G$-sets is a $G$-equivariant aka intertwining map, i.e. a map $\phi:X\to Y$ with the property that $\phi(gx)=g\phi(x)$ for all $x\in X$ and $g\in G$. In fact $G$-sets thus become a category.)
Let $f: X \rightarrow Y$ and $g: Y \rightarrow X$
If $f \circ g$ is injective and $g$ is surjective - is $f$ injective?
I am trying to learn how to prove things. So I'll give it a try, and please correct every wrong step.
I assume that the given statement is true.
So first I write down the condition that $g$ is surjective: $\forall x \in X\ \exists y \in Y : g(y)=x$
So two different $y$-values can point to the same $x$-value:
Let $y_1, y_2 \in Y, y_1 \neq y_2 : \exists x \in X : g (y_1)=x $ and $g (y_2)=x$
Next I want to connect the fact that $f \circ g$ is injective with the surjective condition of $g$:
Let $z_1, z_2 \in X$ with $f(z_1) = f(z_2)$. Because $g$ is surjective, there are $y_1, y_2 \in Y$ with $g(y_1) = z_1$ and $g(y_2) = z_2$, thus: $$f \circ g(y_1) = f(z_1) = f(z_2) = f \circ g (y_2)$$ and by injectivity of $f \circ g$: $$y_1 = y_2$$ $$z_1 = g(y_1) = g(y_2) = z_2$$
So $f$ is injective. |
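The statement itself can be sanity-checked by brute force over all functions between small finite sets (a throwaway Python script, not part of the original question):

```python
from itertools import product

X = range(3)
Y = range(3)

def injective(h, dom):
    """h (a tuple of images) is injective on dom."""
    return len({h[i] for i in dom}) == len(list(dom))

def surjective(h, dom, cod):
    """h hits every element of cod."""
    return {h[i] for i in dom} == set(cod)

# Enumerate every f: X -> Y and g: Y -> X as tuples of images.
for f in product(Y, repeat=len(X)):
    for g in product(X, repeat=len(Y)):
        fg = tuple(f[g[y]] for y in Y)             # f ∘ g : Y -> Y
        if injective(fg, Y) and surjective(g, Y, X):
            assert injective(f, X)                  # the statement holds in every case
```

Of course this only checks one pair of set sizes, but it is a cheap way to gain confidence before writing a proof.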
Let $\{x_n\}$ be an unbounded increasing sequence such that $x_n \ne 0$ and $n\in\Bbb N$. Let $K_n$ define the number of terms in $\{x_n\}$ such that: $$ x_n \le n, n\in\Bbb N $$ Prove that if: $$ \exists \lim_{n\to\infty} {K_n\over n} = L_1 $$ then: $$ \exists \lim_{n\to\infty} {n\over x_n} = L_2 $$ and vice versa. And: $$ L_1 = L_2 $$
I've started by putting down the given facts. First, $x_n$ is increasing and unbounded: $$ \forall M\in\Bbb R\ \exists n \in\Bbb N: x_n > M\\ x_{n+1} > x_n $$
This implies: $$ \lim_{n\to\infty}x_n = +\infty $$
Next, we have $K_n$, which is the number of terms of $x_n$ that are less than or equal to $n$. This means $K_n$ is never larger than $n$, which in turn implies: $$ 0 \le {K_n\over n} \le 1 $$
In case the limit exists then it must be somewhere in $[0, 1]$. Going back to the problem, what we want to prove is: $$ \exists \lim_{n\to\infty}{K_n\over n} = L \iff \exists \lim_{n\to\infty}{n\over x_n} = L $$ And the problem splits into two parts, proving $\implies$ and proving $\impliedby$. This is where I'm not sure how to start.
To get some insight I decided to consider a couple of examples for $x_n$. Consider the following sequence: $$ x_n = n - {1\over 2} = \left\{{1\over 2}, {3\over 2}, {5\over 2}, \dots \right\} $$
Clearly, the number of terms not exceeding $n$ is $n$ itself, because $x_k = k - \tfrac12 \le n$ exactly when $k \le n$. So: $$ K_n = n $$
And then both limits exist: $$ \lim_{n\to\infty} {K_n\over n} = 1 \\ \lim_{n\to\infty} {n\over n - {1\over 2}} = 1 $$
Let's also try the following sequence: $$ x_n = n + {1\over 2} = \left\{{3\over 2}, {5\over 2}, \dots \right\} $$ Thus: $$ K_n = n - 1 $$ Therefore: $$ \lim_{n\to\infty}{K_n\over n} = \lim_{n\to\infty}{n-1\over n} = 1 $$ And: $$ \lim_{n\to\infty}{n\over x_n} = \lim_{n\to\infty}{n\over n + {1\over 2}} = 1 $$
The statement holds for both examples, but:
How do I prove this in general? |
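The claim is also easy to test numerically on further examples, say $x_n = 2n - 1$, where both ratios should tend to $1/2$ (a throwaway Python check of my own):

```python
def K(n, x):
    """Number of terms x(1), x(2), ... with x(k) <= n, for increasing x."""
    k, j = 0, 1
    while x(j) <= n:        # terminates because x is increasing and unbounded
        k += 1
        j += 1
    return k

x = lambda k: 2 * k - 1     # increasing, unbounded, never zero

N = 10_000
ratio_K = K(N, x) / N       # K_N / N
ratio_x = N / x(N)          # N / x_N
assert abs(ratio_K - 0.5) < 1e-3
assert abs(ratio_x - 0.5) < 1e-3
```

This proves nothing, but it is a quick way to check a candidate counterexample before investing in the $\implies$ and $\impliedby$ arguments.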
I am trying to understand calculation of correlation function in the ground state of the Transverse Field Ising model, from the following book, which is freely available: http://link.springer.com/book/10.1007/978-3-642-33039-1
The calculations can be found in Chapter 2 of the book. I shall follow the notation from this book and try to describe most of the steps.
Set up:
Consider a spin chain with $N$ sites. The Hamiltonian for the transverse field Ising model is (page $17$ of the book) $$H= -\sum_i S^z_i - \lambda\sum_i S^x_i\otimes S^x_{i+1}.$$ Now, the book follows the well-known process of using the Jordan–Wigner transformation to map the Pauli operators ($S^x_i,S^y_i,S^z_i$) to fermionic operators $c_i, c^{\dagger}_i$. After this, a Fourier transform is performed (equation $2.2.7$), defining new operators $c_q, c^{\dagger}_q$, which are the Fourier transforms of the original $c_i,c^{\dagger}_i$, and now the Hamiltonian looks like (equation $2.2.8$): $$H=N-2\sum_q(1+\lambda \cos(q))c^{\dagger}_qc_q - \lambda\sum_q(e^{-iq}c^{\dagger}_qc^{\dagger}_{-q}-e^{iq}c_qc_{-q}).$$
Then a Bogoliubov transformation is performed, which is the source of my confusion. They define operators $\eta_q,\eta^{\dagger}_q$ in the following way (equation $2.2.11$): $$\eta_q = u_qc_q + iv_qc^{\dagger}_{-q} , \quad \eta^{\dagger}_q = iv_qc_q + u_qc^{\dagger}_{-q}.$$
This transformation diagonalizes the hamiltonian $H$, with appropriate choice of $u_q,v_q$ and one infers that the ground state $|\psi_0\rangle$ is the state annihilated by all $\eta_q$: $\eta_q|\psi_0\rangle = 0.$
Main question:
Now in appendix $2.A.3$ (Page $42$), correlation function $\langle \psi_0|S^x_iS^x_{i+n}|\psi_0\rangle$ is computed. This is a complicated expression when written in terms of operators $c_i, c^{\dagger}_i$ and for this Wick's theorem is used. But, as can be seen in equation $2.A.30$, calculation is done as if $|\psi_0\rangle$ is annihilated by $c_i$ themselves. Whereas, we saw that $|\psi_0\rangle$ is actually annihilated by $\eta_q$, which is a mixture of both $c_i$ and $c^{\dagger}_i$.
In fact, all the further calculations appear to be done in the same manner, assuming that $|\psi_0\rangle$ is annihilated by $c_i$. I traced equation $2.A.32$ to the following reference: http://pcteserver.mi.infn.it/~molinari/NOTES/Wick.pdf
In this reference, Wick's theorem is stated as Theorem $IV.4$ (page $4$). Equation $2.A.32$ (of the book) looks very similar to Corollary $IV.6$ (of the reference). But the corollary is true only if $|\psi_0\rangle$ has $0$ expectation value with all normal-ordered operators.
So how can $|\psi_0 \rangle$ have $0$ expectation value with the normal-ordered form of a product of $c_i,c^{\dagger}_i$? Shouldn't this be true only for $\eta_q,\eta^{\dagger}_q$? Is there an underlying principle here, that expectation values do not change under a Bogoliubov transformation?
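For what it's worth, here is a sketch of the resolution I would expect (my own, not quoted from the book). Since the Bogoliubov transformation is linear, the $c$'s are linear combinations of the $\eta$'s; schematically (up to the book's sign conventions),

```latex
c_q = u_q\,\eta_q - i\,v_q\,\eta^{\dagger}_{-q}
\quad\Longrightarrow\quad
\langle\psi_0|\,c^{\dagger}_q c_q\,|\psi_0\rangle = v_q^{\,2},
\qquad
\langle\psi_0|\,c_q c_{-q}\,|\psi_0\rangle \propto u_q v_q .
```

Because $\eta_q|\psi_0\rangle = 0$, every expectation value of a product of $c$'s reduces, after substituting these linear combinations, to a sum over pair contractions: $|\psi_0\rangle$ is a Gaussian (quasi-free) state for the $c$'s even though it is not their vacuum. So Wick's theorem still applies, only with the contractions $\langle c^{\dagger}c\rangle$, $\langle cc\rangle$, $\langle c^{\dagger}c^{\dagger}\rangle$ taken in $|\psi_0\rangle$ rather than set to zero; presumably this is what equation $2.A.32$ relies on.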
I've recently been presented with the following problem:
(b) (3 marks) Now consider the function $g: \mathbb{R}^2 \rightarrow \mathbb{R}$ where
$$ g(x, y) = \begin{cases} \frac{\sin(2x^2+2y^2)}{x^2+y^2},& (x, y) \neq (0,0) \\ a,& (x, y) = (0,0) \end{cases} $$
For what value(s) of $a$, if any, is $g(x, y)$ continuous at $(0, 0)$?
And I believe there are no values of $a$ which satisfy continuity. I've taken two limits (with analogous ones for the $y$ variable), which describe 4 approaches to the point in question:
$$ \lim_{x,0\to0,0} \frac{\sin(2x^2+2(0)^2)}{x^2+(0)^2} = \lim_{x\to0} \frac{\sin(2x^2)}{x^2} $$
I'll skip the verification that we can use L'Hôpital here: both numerator and denominator converge to 0, so, applying the rule to this single-variable limit:
$$ \lim_{x\to0} \frac{4x\cdot\cos(2x^2)}{2x} = \lim_{x\to0} 2\cdot\cos(2x^2) = 2$$
So on this particular approach, $a = 2$ would make the function continuous. However, note that when you take the approach $x = y$, you get the following (utilizing the product limit law):
$$ \lim_{x,x\to0,0} \frac{\sin(2x^2+2(x)^2)}{x^2+(x)^2} = \lim_{x\to0} \frac{\sin(4x^2)}{2x^2} = \frac{1}{2}\cdot \lim_{x\to0}\frac{\sin(4x^2)}{x^2}$$
Again we apply L'Hôpital's rule:
$$\frac{1}{2}\cdot\lim_{x\to0} \frac{8x\cdot\cos(4x^2)}{4x} = \frac{1}{2}\cdot\lim_{x\to0}2\cdot\cos(4x^2) = \lim_{x\to0} \cos(4x^2) = 1 $$
From this we find a different value that would also make the function continuous at the point $(0,0)$, so the limit does not exist. Is this right? According to online calculators there is only one limit, 2, but this path where $x = y$ seems to come out different...
Can someone poke a hole in my work for me please so I can realise my error? |
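For whatever it's worth, a quick numerical check (my own, in Python) evaluates $g$ along both approach paths; both come out near $2$, which suggests the discrepancy lies somewhere in the second L'Hôpital computation rather than in the function:

```python
import math

# g(x, y) away from the origin, as defined in the question.
g = lambda x, y: math.sin(2 * x**2 + 2 * y**2) / (x**2 + y**2)

for x in [1e-2, 1e-3, 1e-4]:
    along_axis = g(x, 0.0)                # approach along y = 0
    along_diag = g(x, x)                  # approach along y = x
    assert abs(along_axis - 2.0) < 1e-3
    assert abs(along_diag - 2.0) < 1e-3   # also near 2, not 1
```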
Here is a functor category that has a 2-universal property:
Theorem. Let $\mathbb{C}$ be a small category and let $h_\bullet : \mathbb{C} \to [\mathbb{C}^\textrm{op}, \textbf{Set}]$ be the Yoneda embedding. Then, for all locally small and cocomplete categories $\mathcal{E}$, the functor $F \mapsto F h_\bullet$ from the category of cocontinuous functors $[\mathbb{C}^\textrm{op}, \textbf{Set}] \to \mathcal{E}$ to the category of all functors $\mathbb{C} \to \mathcal{E}$ is fully faithful and essentially surjective on objects, and this functor is pseudonatural in $\mathcal{E}$. In other words, $[\mathbb{C}^\textrm{op}, \textbf{Set}]$ is the free cocompletion of $\mathbb{C}$.
But more generally, if $\mathbb{C}$ and $\mathbb{D}$ are both small categories, then $[\mathbb{C}, \mathbb{D}]$ has a 1-universal property:
Theorem. There is a bijection between functors $\mathbb{E} \times \mathbb{C} \to \mathbb{D}$ and functors $\mathbb{E} \to [\mathbb{C}, \mathbb{D}]$ and this bijection is natural in $\mathbb{C}, \mathbb{D}, \mathbb{E}$. In other words, the category of small categories is cartesian closed and the functor category is an exponential object. |
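The $\textbf{Set}$-level shadow of this second theorem is ordinary currying: maps $E \times C \to D$ correspond bijectively to maps $E \to D^C$. A toy illustration in Python (plain functions rather than functors, my own example):

```python
def curry(f):
    """Turn f : E x C -> D into curry(f) : E -> (C -> D)."""
    return lambda e: (lambda c: f(e, c))

def uncurry(g):
    """The inverse bijection."""
    return lambda e, c: g(e)(c)

f = lambda e, c: e * 10 + c
g = curry(f)
assert g(3)(4) == f(3, 4) == 34
assert uncurry(curry(f))(3, 4) == f(3, 4)   # the round trip is the identity
```

The theorem says the same correspondence works one categorical level up, with naturality in all three variables replacing the mere bijection.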
toroidalet wrote: I Undertale hate it when people Emoji movie insert keywords so people will see their berylium page.
berylium? really? okay then...
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
83bismuth38 Posts:453 Joined:March 2nd, 2017, 4:23 pm Location:Still sitting around in Sagittarius A... Contact:
When xq is in the middle of a different object's apgcode. "That's no ship!"
Airy Clave White It Nay
When you post something and someone else posts something unrelated and it goes to the next page.
Also when people say that things that haven't happened to them trigger them.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
drc wrote: "The speed is actually" posts
Huh. I've never seen a c/posts spaceship before.
Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace!
drc wrote: "The speed is actually" posts
Gamedziner wrote: What's wrong with them?
It could be solved with a simple PM rather than an entire post.
An exception is if it's contained within a significantly large post.
I hate it when people post rule tables for non-totalistic rules. (Yes, I know some people are on mobile, but they can just generate them themselves. [citation needed])
OK this is a very niche one that I hadn't remembered until a few hours ago.
You know in some arcades they give you this string of cardboard tickets you can redeem for stuff, usually meant for kids. The tickets fold beautifully perfectly packed if you order them one right, one left - zigzagging. When people fold them randomly in any direction giving a clearly low density packing with loads of strain, I just think
omg why on Earth would you do that?! Surely they'd have realised by now? It's not that crazy to realise. Surely there is a clear preference for having them well packed; nobody would prefer an unwieldy mess?!
Also when I'm typing anything and I finish writing it and it just goes to the next line or just goes to the next page. Especially when the punctuation mark at the end brings the last word down one line. This also applies to writing in a notebook: I finish writing something but the very last thing goes to a new page.
A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc. in their signature.
... you were referencing me before i changed it, weren't you? because I had fit both of those.
ON A DIFFERENT NOTE.
When i want to rotate a hexagonal file but golly refuses because for some reason it calculates hexagonal patterns on a square grid and that really bugs me because if you want to show that something has six sides you don't show it with four and it makes more sense to have the grid be changed to hexagonal but I understand Von Neumann because no shape exists (that I know of) that has 4 corners and no edges but COME ON WHY?! WHY DO YOU REPRESENT HEXAGONS WITH SQUARES?!
In all seriousness this bothers me and must be fixed or I will SINGLEHANDEDLY eat a universe.
EDIT: possibly this one.
EDIT 2:
IT HAS BEGUN.
HAS
BEGUN.
Last edited by 83bismuth38 on September 19th, 2017, 8:25 pm, edited 1 time in total.
83bismuth38 wrote: ... you were referencing me before i changed it, weren't you? because I had fit both of those.
Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome wrote: Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.
oh okay yeah of course sure
but really though, i wouldn't have cared.
When someone gives a presentation to a bunch of people and you know that they're getting the facts wrong. Especially if this is during the Q&A section.
When you watch a boring video in class but you understand it perfectly, and then at the end your classmates don't get it, so the teacher plays the boring video again.
when scientists decide to send a random guy into a black hole hovering directly above Earth for no reason at all.
hit; that random guy was me.
When I see a "one-step" organic reaction in an exercise book for senior high school that simply takes place under "certain circumstances", like the one marked "?" here, but fail to figure out how it works even though I have prepared for our provincial chemistry olympiad. EDIT: In fact it's not that hard. Just do a Darzens reaction, then hydrolysis, and decarboxylate.
Current status: outside the continent of cellular automata. Specifically, not on the plain of life.
An awesome gun firing cool spaceships:
Code: Select all
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry
2bo$2o$o$obo$b2o!
When there's a rule with a decently common puffer but it can't interact with itself
When that oscillator is just not sparky enough.
When you're sooooooo close to a thing you consider amazing but miss...
People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things are already known, despite repeated posts about where to find such resources (e.g. jslife, wiki, Life Lexicon, etc.).
People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive").
Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades.
Posts where the quoted text is substantially longer than added text. Especially "me too" posts.
People whose signatures are longer than the actual text of their posts.
People whose signatures include graphics or pattern files, especially ones that are just human-readable text.
Improper grammar, spelling, and punctuation (although I've gotten used to that; long-term use of the internet has made me rather fluent in typo, both reading and writing). Imperfect English is not unreasonable from people for whom English is not a primary language, but from English speakers, it is a symptom of sloppiness that can also manifest in other areas.
mniemiec wrote: People posting tons of "new" discoveries that have been known for decades…
That's G U S T A V O right there
Also, when you walk into a wall slowly and carefully but you hit your teeth on the wall and it hurts so bad.
Weil's bound for Kloosterman sums states that for $(a,b)\not=(0,0)$, $$ |K(a,b;q)|:=\left|\sum_{x\in\mathbb{F}_q^*}\chi(ax+bx^{-1})\right|\leq 2\sqrt{q}, $$ where $\chi$ is a non-trivial additive character on $\mathbb{F}_q$ (the field with $q$ elements).
My question is, is it known to be false that $\sqrt{q}$ can be replaced by $\sqrt{q-1}$?
Here's what is known (to me):
Weil's bound follows from the fact that $K(a,b;q)=\alpha+\beta$ where $\alpha\beta =q$ and $|\alpha|=|\beta|=\sqrt q$. Thus there is a unique angle $\theta(a,b;q)$ in $[0,\pi]$ such that $$ \frac{K(a,b;q)}{2\sqrt q}=\cos\theta(a,b;q) $$ My question then asks, is there $a,b,q$ such that $$ |\cos\theta(a,b;q)|>\sqrt{1-\frac 1q}?\qquad (*) $$ "Vertical" equidistribution of Kloosterman angles implies that as $q\to\infty$ $$ \frac 1{q-1}\sum_{\lambda\in F_q^*}f(\theta(1,\lambda;q))\to\frac 2\pi\int_0^\pi f(\theta)\sin^2\theta\,d\theta $$ Thus for any fixed $\delta>0$, as $q\to\infty$ the proportion of angles $\theta(a,b;q)\in [0,\delta]$ approaches $\frac 1\pi (\delta-\frac 12\sin(2\delta))\approx \frac{2\delta^3}{3\pi}$. $(*)$ is roughly equivalent to $|\theta(a,b;q)|<q^{-1/2}$, so by equidistribution the expected number of such angles is $\approx 2(q-1)\frac{2}{3\pi} q^{-3/2}\approx \frac{4}{3\pi} q^{-1/2}$, which is (much) less than 1.
So one might ask how good is the concentration around the expected number of angles? And how good is this approximation of the expectation to begin with?
Probably the most reasonable approach is to just search by computer. For $q=p$ prime and $p\leq 61$ there are no counterexamples, but this isn't very convincing. |
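The computer search is easy to reproduce; here is a small Python script of my own (exact modular arithmetic for $x^{-1}$, floating point for the character sum) that verifies $(*)$ has no solutions for some small primes, using the standard identity $K(a,b;p)=K(1,ab;p)$ to reduce to $a=1$:

```python
import cmath, math

def kloosterman(a, b, p):
    """K(a, b; p) = sum over x in F_p^* of e^{2*pi*i*(a*x + b*x^{-1})/p},
    with x^{-1} computed via Fermat's little theorem."""
    return sum(cmath.exp(2j * cmath.pi * ((a * x + b * pow(x, p - 2, p)) % p) / p)
               for x in range(1, p))

def worst_ratio(p):
    """max over b of |K(1, b; p)| / (2 sqrt(p)); a counterexample to (*)
    would make this exceed sqrt(1 - 1/p)."""
    return max(abs(kloosterman(1, b, p)) for b in range(1, p)) / (2 * math.sqrt(p))

for p in [5, 7, 11, 13, 17, 19, 23, 29, 31]:
    assert worst_ratio(p) <= math.sqrt(1 - 1 / p) + 1e-9   # no counterexample here
```

This runs in O(p²) per prime, so pushing it well past $p \le 61$ is cheap; the interesting question remains whether the equidistribution heuristic above actually rules counterexamples out for all $q$.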
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in proton–proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) has been measured at mid-rapidity ($|y|<0.5$) in proton–proton collisions at $\sqrt{s}$ = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary $\pi^{\pm}$, $K^{\pm}$, p and $\bar{\rm p}$ production at mid-rapidity ($|y|<0.5$) in proton–proton collisions at $\sqrt{s}$ = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
To increase the resolution of an instrument, a smaller wavelength and a larger aperture are desirable. It is mentioned in some textbooks that the "effective" diameter of a telescope can be increased by using arrays of smaller telescopes. I just wonder why this is possible, given that the telescopes are separated.
Picture yourself looking into a large mirror on the wall. Now picture the mirror is made up of smaller, tiled mirrors. You will still see your reflection. If you begin to remove the tiles, so that there are only a few left, you can still use them to reconstruct the image of your face that was given by the original mirror. This is what is happening with an interferometer. Astronomers are constructing an image measured by the "full mirror" (the longest baseline) based on the information they get from a few tiles (individual antennas).
Greatly rewritten based on feedback in comments. In order to understand this issue, it is worth considering what a telescope (or any optical / radio imaging system) really does.
Taking a simple parabolic mirror, the shape is chosen such that the total path length for all rays "from infinity" to the focal point is the same. By making the path lengths the same, the signals will all be in phase when they arrive at the focal point, which results in constructive interference. We see this as a "bright spot" at the focal point. I explained this in detail in this earlier answer.
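This equal-path-length property is easy to verify numerically for a parabola $y = x^2/4f$ with focus at $(0, f)$: the drop from a horizontal incoming wavefront $y = h$ to the mirror, plus the straight run from the mirror to the focus, is the same for every ray (a quick Python check with made-up numbers):

```python
import math

f = 2.0                                  # focal length (arbitrary choice)
h = 10.0                                 # height of the incoming wavefront, plane y = h

def path_length(x):
    """Vertical drop from the plane y = h to the mirror point (x, x^2/4f),
    then the straight-line distance to the focus (0, f)."""
    y = x**2 / (4 * f)
    return (h - y) + math.hypot(x, f - y)

lengths = [path_length(x) for x in [-3.0, -1.0, 0.0, 0.5, 2.5]]
assert all(abs(L - (h + f)) < 1e-9 for L in lengths)   # constant: h + f for every ray
```

Algebraically this is because $\sqrt{x^2 + (f-y)^2} = f + y$ when $x^2 = 4fy$, so the total is always $h + f$.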
Now if you consider a second set of rays that is at a slight angle to the first set of rays, the path lengths will no longer all be the same. There will come a point where the rays from one side are so far out of phase with the rays from the other side, that when you add them all together they cancel out exactly. For a circular aperture, this happens at an angle $$\alpha = \frac{1.22 \lambda}{d}$$ which is the usual expression for the angular resolution of a circular aperture. (The expression tells you where the first zero is on one side, but this is a pretty good approximation of the width of the central peak; in terms of the aperture radius $R$, of course $0.61\lambda/R = 1.22\lambda/d$.)
"Resolution", then, is interpreted as "how far off axis does the signal from a point source decay to zero" which is another way of saying "how close together can two points be and be seen as distinct" (it's not exactly the same thing, but it's usually "good enough" to equate these concepts). The resolution of the individual component doesn't matter too much - while its signal will be "virtually unchanged" as the source moves off axis, the phase relationship with the signal from the next (small) element will change much more rapidly. So while the components by themselves don't get better resolution, the combined signal does.
This has a few implications. First, any imaging system needs to maintain the phase relationship between the incident beams to much better than a wavelength - this is why an optical telescope has really smooth surfaces, but a radio antenna can be made out of a "roughly shaped" surface. Second, it means that we don't need to have a continuous circular lens/mirror in order to do imaging. Any set of reflectors that results in the detected signals remaining in phase will behave in the same way. Of course, the larger the total area, the more signal is detected - but if you are interested in angular resolution and you have enough signal, you need to increase the width of your optics; you don't necessarily need to increase the area.
And so it becomes possible to dream up telescopes that have unusual shapes, but that are particularly good at resolving along a particular axis. This is the principle behind very long baseline interferometry but it works on any scale. The key is that you have to maintain the phase relationship between the signals from different parts of your "mirror" to within a fraction of a cycle - the better you do this, the greater the resolution of the system will be. At optical wavelengths this quickly becomes really, really hard - at the wavelengths used in radio astronomy (meters down to millimeters) it is quite achievable.
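To get a feel for these numbers, here is a small Python sketch of the resolution formula quoted earlier; the 100 m dish and 8000 km baseline are illustrative values of my own choosing, not from the text.

```python
def angular_resolution(wavelength_m, diameter_m):
    """Rayleigh criterion for a circular aperture: alpha = 1.22 * lambda / d (radians)."""
    return 1.22 * wavelength_m / diameter_m

# the 21 cm hydrogen line observed with a single 100 m dish...
single_dish = angular_resolution(0.21, 100.0)
# ...versus an Earth-spanning 8000 km interferometer baseline at the same wavelength
vlbi = angular_resolution(0.21, 8.0e6)

print(single_dish, vlbi)
```

The resolution improves by exactly the ratio of the baselines, which is why adding collecting area helps sensitivity, while adding baseline helps resolution.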
A helpful description of the process (and the difficulty of doing this optically) is found in the Wikipedia article on aperture synthesis which states:
Aperture synthesis is possible only if both the amplitude and the phase of the incoming signal are measured by each telescope. For radio frequencies, this is possible by electronics, while for optical light, the electromagnetic field cannot be measured directly and correlated in software, but must be propagated by sensitive optics and interfered optically. Accurate optical delay and atmospheric wavefront aberration correction is required, a very demanding technology that became possible only in the 1990s. This is why imaging with aperture synthesis has been used successfully in radio astronomy since the 1950s and in optical/infrared astronomy only since the 2000s. See astronomical interferometer for more information. |
I've done so many limit problems in calculus lately, but I can't wrap my mind around how to simplify this one in order to solve it:
$$ \lim_{x\rightarrow 2} \dfrac{x^3-8}{x^2-x-2} $$
I understand the $x^3-8$ factors down to $(x-2)(x^2+2x+4)$, but that still leaves us with $$ \lim_{x\rightarrow 2} \dfrac{(x-2)(x^2+2x+4)}{x^2-x-2}, $$ which I can't seem to find a way to simplify so that the denominator is not equal to 0.
In case anyone wants to work it out themselves: the answer is 4 (I was given the answer; this is on a review sheet for an upcoming exam). Also, I tagged this as homework, even though it is not technically homework.
So if anyone could help point me in the right direction here, that would be very helpful. |
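For what it's worth, a quick numerical sanity check in plain Python (my own, just to double-check the answer from the review sheet) approaches $x = 2$ from both sides and agrees with 4:

```python
# f is undefined at x = 2 (0/0), so probe values on either side of 2
f = lambda x: (x**3 - 8) / (x**2 - x - 2)

for x in (1.999, 2.001, 1.999999, 2.000001):
    print(x, f(x))
```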
M.Sc. Student: Mour Tamer
Subject: New Efficient Constructions for Distributed Oblivious RAM
Department: Computer Science
Supervisor: Professor Eyal Kushilevitz
Oblivious RAM (ORAM) is a cryptographic primitive that allows a client to securely execute RAM programs over data that is stored in an untrusted server. Distributed Oblivious RAM is a variant of ORAM where the data is stored in $m>1$ servers. Extensive research over the last few decades has succeeded in reducing the bandwidth overhead of ORAM schemes, both in the single-server and the multi-server setting, from $O(\sqrt{N})$ to $O(1)$. However, all known protocols that achieve a sub-logarithmic overhead either require heavy server-side computation (e.g. homomorphic encryption), or a large block size of at least $\Omega(\log^3 N)$.
In this paper, we present a family of distributed ORAM constructions that follow the hierarchical approach of Goldreich and Ostrovsky [GO96]. We enhance known techniques, and develop new ones, to take better advantage of the existence of multiple servers. By plugging known efficient hashing schemes into our constructions, we obtain the following results:
1. For any number $m\geq 2$ of servers, we show an $m$-server ORAM scheme with $O(\log N/\log\log N)$ overhead and block size $\Omega(\log^2 N)$. This scheme is private even against an $(m-1)$-server collusion.
2. A three-server ORAM construction with $O(\omega(1)\cdot\log N/\log\log N)$ overhead and an almost logarithmic block size, i.e. $\Omega(\log^{1+\epsilon}N)$.
We also investigate a model where the servers are allowed to perform a linear amount of light local computation, and show that constant overhead is achievable in this model through a simple four-server ORAM protocol. Through the theoretical lens, this is the first ORAM scheme with asymptotically constant overhead and polylogarithmic block size that does not use homomorphic encryption. Practically speaking, although we do not provide an implementation of the suggested construction, evidence from related work (e.g. [DS17]) makes us believe that, despite the linear computational overhead, the construction can be very efficient in practice, in particular when applied to secure computation. |
Deep Learning from Scratch to GPU - 1 - Representing Layers and Connections

You can adopt a pet function! Support my work on my Patreon page, and access my dedicated discussion server. Can't afford to donate? Ask for a free invite.
February 6, 2019
Please share: Twitter.
New books are available for subscription.
Here we start our journey of building a deep learning library that runs on both CPU and GPU.
If you haven't yet, read my introduction to this series in Deep Learning in Clojure from Scratch to GPU - Part 0 - Why Bother?.
To run the code, you need a Clojure project with Neanderthal included as a dependency. If you're in a hurry, you can clone the Neanderthal Hello World project.
Don't forget to read at least some introduction from Neural Networks and Deep Learning, start up the REPL from your favorite Clojure development environment, and let's start from the basics.
Neural Network
Below is a typical neural network diagram. As the story usually goes, we plug some input data in the Input Layer, and the network then propagates the signal through Hidden Layer 1 and Hidden Layer 2, using weighted connections, to produce the output at the Output Layer. For example, the input data is the pixels of an image, and the outputs are "probabilities" of this image belonging to a class, such as cat (\(y_1\)) or dog (\(y_2\)).
Neural Networks are often used to classify complex things such as objects in photographs, or to "predict" future data. Mechanically, though, there is no magic. They just approximate functions. What exactly the inputs and outputs are is not particularly important at this moment. The network can approximate (or, to be fancy, "predict"), for example, even such mundane functions as \(y = \sin(x_1) + \cos(x_2)\).
Note that this is an instance of a transfer function. We provide an input, and the network propagates that signal to calculate the output; on the surface, just like any other function!
Unlike the everyday functions that we use, neural networks compute anything using only this architecture of nodes and weighted connections. The trick is in finding the right weights so that the approximation is close to the "right" value. This is what learning in deep learning is all about. For now, though, we are only dealing with inference, the process of computing the output using the given structure, input, and whatever weights there are.

How to approach the implementation
The most straightforward thing to do, and the most naive error to make, is to read about analogies with neurons in the human brain, look at these diagrams, and try to model nodes and weighted connections as first-class objects. This might be a good approach with business oriented problems. First-class objects might bring the ultimate flexibility: each node and connection could have different polymorphic logic. That is the flexibility that we do not need. Even if that flexibility could help with better inference, it would be much slower, and training such a wandering network would be a challenge.
Rather than in such enterprising "neurons", the strength of neural networks is in their simple structure. Each node in a layer and each connection between two layers have exactly the same structure and logic. The only moving parts are the numerical values in weights and thresholds. We can exploit that to create efficient implementations that fit well into hardware optimizations for numerical computations.
I'd say that the human brain analogy is more a product of marketing than a technical necessity. One layer of a basic neural network practically does logistic regression. There are more advanced structures, but the point is that they do not implement anything close to biological neurons.
The math
Let's just consider the input layer, the first hidden layer, and the connections between them.
We can represent the input with a vector of two numbers, and the output of the Hidden Layer 1 with a vector of four numbers. Note that, since there is a weighted connection from each \(x_n\) to each \(h^{(1)}_m\), there are \(m\times{n}\) connections. The only data about each connection are its weight, and the nodes it connects. That fits well with what a matrix can represent. For example, the number at \(w_{21}\) is the weight between the first input, \(x_1\) and the second output, \(h^{(1)}_2\).
We compute the output of the first hidden layer in the following way:\begin{gather*} h^{(1)}_1 = w_{11}\times{} x_1 + w_{12}\times{} x_2\\ h^{(1)}_2 = w_{21}\times{} x_1 + w_{22}\times{} x_2\\ h^{(1)}_3 = w_{31}\times{} x_1 + w_{32}\times{} x_2\\ h^{(1)}_4 = w_{41}\times{} x_1 + w_{42}\times{} x_2\\ \end{gather*}
Since you've read the Linear Algebra Refresher (1) - Vector Spaces that I recommended, you'll recognize that these are technically four dot products between the corresponding rows of the weight matrix and the input vector.\begin{gather*} h^{(1)}_1 = \vec{w_1} \cdot \vec{x} = \sum_{j=1}^n w_{1j} x_j\\ h^{(1)}_2 = \vec{w_2} \cdot \vec{x} = \sum_{j=1}^n w_{2j} x_j\\ h^{(1)}_3 = \vec{w_3} \cdot \vec{x} = \sum_{j=1}^n w_{3j} x_j\\ h^{(1)}_4 = \vec{w_4} \cdot \vec{x} = \sum_{j=1}^n w_{4j} x_j\\ \end{gather*}
Conceptually, we can go further than low-level dot products: the weight matrix transforms the input vector into the hidden layer vector. I've written about matrix transformations in Linear Algebra Refresher (3) - Matrix Transformations.
We don't have to juggle indexes and program low-level loops. The basic matrix-vector product implements the propagation from each layer to the next!\begin{gather*} \mathbf{h^{(1)}} = W^{(1)}\mathbf{x}\\ \mathbf{y} = W^{(2)}\mathbf{h^{(1)}} \end{gather*}
For example, for some specific input and weights:\begin{equation} \mathbf{h^{(1)}} = \begin{bmatrix} 0.3 & 0.6\\ 0.1 & 2\\ 0.9 & 3.7\\ 0.0 & 1\\ \end{bmatrix} \begin{bmatrix}0.3\\0.9\\\end{bmatrix} = \begin{bmatrix}0.63\\1.83\\3.6\\0.9\\\end{bmatrix}\\ \mathbf{y} = \begin{bmatrix} 0.75 & 0.15 & 0.22 & 0.33\\ \end{bmatrix} \begin{bmatrix}0.63\\1.83\\3.6\\0.9\\\end{bmatrix} = \begin{bmatrix}1.84\\\end{bmatrix} \end{equation}
The code
To try this in Clojure, we require some basic Neanderthal functions.
(require '[uncomplicate.commons.core :refer [with-release]] '[uncomplicate.neanderthal [native :refer [dv dge]] [core :refer [mv!]]])
The minimal code example, following the Yagni principle, would be something like this:
(with-release [x (dv 0.3 0.9) w1 (dge 4 2 [0.3 0.6 0.1 2.0 0.9 3.7 0.0 1.0] {:layout :row}) h1 (dv 4)] (println (mv! w1 x h1))) #RealBlockVector[double, n:4, offset: 0, stride:1] [ 0.63 1.83 3.60 0.90 ]
The code does not need much explaining: w1, x, and h1 represent the weights, the input, and the first hidden layer. The function mv! applies the matrix transformation w1 to the vector x (by multiplying w1 by x; mv stands for "matrix times vector").

We should make sure that this code works well with large networks processed through lots of cycles when we get to implement the learning part, so we have to take care to reuse memory; thus we use the destructive version of mv, mv!. Also, the memory that holds the data is outside the JVM; thus we need to take care of its lifecycle and release it (automatically, using with-release) as soon as it is not needed. This might not be important for such small examples, but is crucial in "real" use cases.
The output of the first layer is the input of the second layer:
(with-release [x (dv 0.3 0.9) w1 (dge 4 2 [0.3 0.6 0.1 2.0 0.9 3.7 0.0 1.0] {:layout :row}) h1 (dv 4) w2 (dge 1 4 [0.75 0.15 0.22 0.33]) y (dv 1)] (println (mv! w2 (mv! w1 x h1) y))) #RealBlockVector[double, n:1, offset: 0, stride:1] [ 1.84 ]
The final result is \(y_1=1.84\). What does it represent and mean? Which function does it approximate? Who knows. The weights I plugged in are not the result of any training nor insight. I've just pulled some random numbers out of my hat to show the computation steps.
This is not much but is a good first step
The network we created is still a simple toy.
It's not even a proper multi-layer perceptron, since we did not implement non-linear activation of the outputs. Funnily enough, the nodes we have implemented are perceptrons, and there are multiple layers full of these. You'll soon get used to the tradition of inventing confusing and inconsistent grand-sounding names for every incremental feature in machine learning. Without the non-linearity introduced by activations, we could stack thousands of layers, and our "deep" network would still perform only a linear approximation equivalent to a single layer. If it is not clear to you why this happens, check out the composition of transformations.
We have not implemented thresholds, or biases, yet. We've also left everything in independent matrices and vectors, without a structure involving layers that would hold them together. And we haven't even touched the learning part, which is 95% of the work. There are more things that are necessary, and even more things that are nice to have, and we'll cover these.
This code only runs on the CPU, but getting it to run on the GPU is rather easy. I'll show how to do this soon, but I bet you can even figure this out on your own, with nothing more than the Neanderthal Hello World example! There are posts on this blog showing how to do this, too.
My intention is to offer an easy start, so you do try this code. We will gradually apply and discuss each improvement in the following posts. Run this easy code on your own computer, and, why not, improve it in advance! The best way to learn is by experimenting and making mistakes.
I hope you're eager to continue to the next part: Bias and Activation Function
Thank you
Clojurists Together financially supported writing this series. Big thanks to all Clojurians who contribute, and thank you for reading and discussing this series. |
WHY?
The motivation is almost the same as that of NICE. This paper suggests a more elaborate transformation to represent complex data.
WHAT?
NICE introduced coupling layers with a tractable Jacobian matrix. This paper suggests a more flexible bijective function while keeping the properties of coupling layers. Affine coupling layers scale and translate part of the input; the Jacobian is easy to compute and the transformation is easy to invert. In this paper, s and t are rectified convolutional networks.
$$y_{1:d} = x_{1:d}$$
$$y_{d+1:D} = x_{d+1:D}\odot \exp(s(x_{1:d})) + t(x_{1:d})$$

The Jacobian is triangular,

$$\frac{\partial y}{\partial x^T} = \left[\begin{array}{cc} I_d & 0\\ \frac{\partial y_{d+1:D}}{\partial x^T_{1:d}} & \mathrm{diag}(\exp[s(x_{1:d})]) \end{array} \right]$$

and the inverse is

$$x_{1:d} = y_{1:d}$$
$$x_{d+1:D} = (y_{d+1:D} - t(y_{1:d}))\odot \exp(-s(y_{1:d}))$$

This paper suggests two ways to partition the dimensions: first, spatial checkerboard patterns, and second, channel-wise masking. To build these components into a multiscale architecture, a squeezing operation is used to divide each channel into four. A scale in the architecture consists of three coupling layers with checkerboard masks, a squeezing operation, and three more coupling layers with alternating channel-wise masks.
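As a sanity check on the coupling-layer algebra above, here is a minimal NumPy sketch of one affine coupling layer; the linear s and t maps are my own stand-ins for the paper's convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 3
# toy s and t: fixed linear maps standing in for the learned conv nets
Ws = rng.normal(size=(d, D - d))
Wt = rng.normal(size=(d, D - d))
s = lambda x1: np.tanh(x1 @ Ws)   # bounded scales keep exp() well behaved
t = lambda x1: x1 @ Wt

def forward(x):
    x1, x2 = x[:d], x[d:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    log_det = np.sum(s(x1))       # log|det J| comes from the diagonal block alone
    return np.concatenate([x1, y2]), log_det

def inverse(y):
    y1, y2 = y[:d], y[d:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])
```

Note that the inverse never needs to invert s or t themselves, which is what makes the layer cheap in both directions.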
The multiscale outputs are composed as

$$h^{(0)} = x$$
$$(z^{(i+1)}, h^{(i+1)}) = f^{(i+1)}(h^{(i)})$$
$$z^{(L)} = f^{(L)}(h^{(L-1)})$$
$$z = (z^{(1)}, \dots, z^{(L)})$$

Batch normalization is used, whose Jacobian matrix is also tractable, contributing the factor:
$$\left(\prod_i(\tilde{\sigma}_i^2 + \epsilon)\right)^{-\frac{1}{2}}$$
So?
Real NVP showed competitive generative performance on CIFAR-10, ImageNet, CelebA, and LSUN, in terms of both sample quality and log-likelihood.
Critic
Are the prior distributions trainable? I wonder whether diverse forms of priors would affect the quality of samples. |
First of all, you need to match the endpoints, as you said. I will assume that you also need to match the tangents (especially if it's a part of a spline). If the control points are A,B,C,D, the tangent in A points along vector AB, and the tangent in D points in DC direction.
You can therefore freely move B and C along the tangent direction, which changes the "sharpness" but keeps the tangents and endpoints intact.
There are three typical cases:
1) The tangents converge. This means that the intersection of lines AB and CD is on the same side as the curve. The closer B and C are to this intersection, the sharper the curve. If B or C is farther away than the intersection, you have a self-intersecting curve or at least something extremely sharp. If your condition is "soft" (not mathematical, but you just want some heuristic to limit the curvature), you can, for instance, just forbid B and C to be more than half way from A/D to the intersection. If you want a mathematically rigorous condition, you can vary the two parameters (the AB and CD distances) and calculate the minimal radius of curvature for each case until you go over the desired limit.
2) The tangents diverge on the same side. This case produces a "bubble". The farther away B and C are, the larger the circle-like bump is. This kind of spline is usually not something you want (at least in graphics); you have more control if you split it in half and get two "conventional" beziers. In any case, here, the curve is sharper if AB and CD are smaller, but varying these parameters makes drastic changes in the shape.
3) The tangents point in opposite directions (the curve intersects the A-D straight line). This is a horrible case (S-shape). Still, you can only vary the AB and CD distances if you want to keep the tangents, so you can vary them until you honor the condition. This case again has subcases (like the two above), depending on whether the angles BAD and ADC are obtuse or acute.
I hope this helps. Essentially, you have two free parameters that you can vary in order to minimize sharpness. It also makes sense (to preserve the general shape) to keep the ratio AB/CD fixed and scale the distances simultaneously. Then you have a 1-parametric case which you can solve algorithmically without ambiguity (in 2-parametric case, you have too many solutions).
Edit: I was hoping to avoid explicit math because it's easy to know what to do, but it's tedious to write down and do anything analytically, better to let the computer do it. However, here we go:
A cubic bezier is parameterized like this:$$\vec{r}(t)=(1-t)^3 A + 3(1-t)^2 t B+ 3(1-t)t^2 C +t^3 D$$the first derivative:$$\vec{r}'(t)=3(1-t)^2(B-A)+6(1-t)t(C-B)+3t^2(D-C)$$the second derivative is$$\vec{r}''(t)=6(1-t)(C-2B+A)+6t(D-2C+B)$$
Now, the curvature is expressed as such:$$\kappa=\frac{|\vec{r}'\times\vec{r}''|}{|\vec{r}'|^{3}}$$
You pretty much need to calculate this numerically and also find the maximum over $t$ numerically. It's too ugly to do anything "on paper". See the discussion here:
maximum curvature of 2D Cubic Bezier
My suggestion was to "correct" your control points as such:$$B_u=A+(B-A)u$$$$C_u=D+(C-D)u$$where you are looking for $u$ which minimizes your maximum curvature. $u=1$ is your original curve. This parametrization lets you keep the approximate shape of the curve. |
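A sketch of the numerical recipe above (NumPy, function names mine): sample the curvature formula on a dense grid of $t$ values and take the maximum; one can then search over $u$ until the result drops below the desired limit.

```python
import numpy as np

def max_curvature(A, B, C, D, samples=1001):
    """Approximate the maximum curvature of a 2D cubic Bezier by dense sampling."""
    A, B, C, D = (np.asarray(p, dtype=float) for p in (A, B, C, D))
    t = np.linspace(0.0, 1.0, samples)[:, None]
    r1 = 3*(1-t)**2*(B-A) + 6*(1-t)*t*(C-B) + 3*t**2*(D-C)   # r'(t)
    r2 = 6*(1-t)*(C-2*B+A) + 6*t*(D-2*C+B)                   # r''(t)
    cross = np.abs(r1[:, 0]*r2[:, 1] - r1[:, 1]*r2[:, 0])    # 2D cross product
    speed = np.maximum(np.linalg.norm(r1, axis=1), 1e-12)    # guard degenerate points
    return float(np.max(cross / speed**3))

def scaled_controls(A, B, C, D, u):
    """The B_u, C_u correction: slide B and C along the tangents by factor u."""
    A, B, C, D = (np.asarray(p, dtype=float) for p in (A, B, C, D))
    return A, A + (B - A)*u, D + (C - D)*u, D
```

A simple strategy is then to bisect on u in (0, 1] until max_curvature of the scaled controls is under the limit.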
CART and decision-tree-like algorithms work through recursive partitioning of the training set in order to obtain subsets that are as pure as possible with respect to a given target class. Each node of the tree is associated with a particular set of records $T$ that is split by a specific test on a feature. For example, a split on a continuous attribute $A$ can be induced by the test $ A \le x$. The set of records $T$ is then partitioned in two subsets that lead to the left branch of the tree and the right one.
$T_l = \{ t \in T: t(A) \le x \}$
and
$T_r = \{ t \in T: t(A) > x \}$
Similarly, a categorical feature $B$ can be used to induce splits according to its values. For example, if $B = \{b_1, \dots, b_k\}$ each branch $i$ can be induced by the test $B = b_i$.
The divide step of the recursive algorithm to induce decision tree takes into account all possible splits for each feature and tries to find the best one according to a chosen quality measure: the splitting criterion. If your dataset is induced on the following scheme
$$A_1, \dots, A_m, C$$
where $A_j$ are attributes and $C$ is the target class, all candidates splits are generated and evaluated by the splitting criterion. Splits on continuous attributes and categorical ones are generated as described above. The selection of the best split is usually carried out by impurity measures.
The impurity of the parent node has to be decreased by the split. Let $(E_1, E_2, \dots, E_k)$ be a split induced on the set of records $E$; a splitting criterion that makes use of the impurity measure $I(\cdot)$ is:
$$\Delta = I(E) - \sum_{i=1}^{k}\frac{|E_i|}{|E|}I(E_i)$$
Standard impurity measures are the Shannon entropy or the Gini index. More specifically, CART uses the Gini index that is defined for the set $E$ as following. Let $p_j$ be the fraction of records in $E$ of class $c_j$$$p_j = \frac{|\{t \in E:t[C] = c_j\}|}{|E|} $$ then$$ \mathit{Gini}(E) = 1 - \sum_{j=1}^{Q}p_j^2$$where $Q$ is the number of classes.
It leads to a 0 impurity when all records belong to the same class.
As an example, let's say that we have a binary class set of records $T$ where the class distribution is $(1/2, 1/2)$ - the following is a good split for $T$
the probability distribution of records in $T_l$ is $(1,0)$ and that of $T_r$ is $(0,1)$. Let's say that $T_l$ and $T_r$ are the same size; thus $|T_l|/|T| = |T_r|/|T| = 1/2$. We can see that $\Delta$ is high:
$$\Delta = 1 - 1/2^2 - 1/2^2 - 0 - 0 = 1/2$$
The following split is worse than the first one and the splitting criterion $\Delta$ reflects this characteristic. $$\Delta = 1 - 1/2^2 - 1/2^2 - 1/2 \bigg( 1 - (3/4)^2 - (1/4)^2 \bigg) - 1/2 \bigg( 1 - (1/4)^2 - (3/4)^2 \bigg) = 1/2 - 1/2(3/8) - 1/2(3/8) = 1/8$$
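The two worked examples above can be checked with a few lines of Python (a sketch; the per-class count lists are my own encoding of the splits):

```python
def gini(counts):
    """Gini impurity of a node, given the number of records of each class."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def gain(parent, children):
    """Impurity decrease Delta = I(E) - sum_i |E_i|/|E| * I(E_i)."""
    n = sum(sum(c) for c in children)
    return gini(parent) - sum(sum(c) / n * gini(c) for c in children)

# parent: 8 records with class distribution (1/2, 1/2)
good = gain([4, 4], [[4, 0], [0, 4]])   # the first (pure) split
bad = gain([4, 4], [[3, 1], [1, 3]])    # the second (mixed) split
print(good, bad)
```

This reproduces $\Delta = 1/2$ for the first split and $\Delta = 1/8$ for the second, so the first split wins.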
The first split will be selected as best split and then the algorithm proceeds in a recursive fashion.
It is easy to classify a new instance with a decision tree; in fact, it is enough to follow the path from the root node to a leaf.
A record is classified with the majority class of the leaf that it reaches.
Say that we want to classify the square in the figure, which is the graphical representation of a training set induced on the scheme $A,B,C$, where $C$ is the target class and $A$ and $B$ are two continuous features.
A possible induced decision tree might be the following:
It is clear that the record square will be classified by the decision tree as a circle, given that the record falls on a leaf labeled with circles.
In this toy example the accuracy on the training set is 100% because no record is mis-classified by the tree. On the graphical representation of the training set above we can see the boundaries (gray dashed lines) that the tree uses to classify new instances.
There is plenty of literature on decision trees; I just wanted to write down a sketchy introduction. Another famous implementation is C4.5. |
The maximum value of $w_n$ is
$w_n = 3^n$
To see this, first note that
If we want to distinguish between $x$ and $x+1$, some combination of weights has to form a value in the interval $[x, x+1]$, otherwise those two values would give the same result for any weighing.
Generalizing for all $w_n$ values...
We need to be able to weigh exactly a value in $[1, 2]$, a value in $[2,3]$, a value in $[3,4]$,...
And the way to accomplish this with the least number of weight combinations is to cover two intervals with each possible weighing combination, i.e. to be able to weigh exactly $2$, $4$, $6$, ...
Knowing all of this, the biggest we can get is
With $n$ weights, we can weigh exactly at most $\displaystyle\sum_{i=0}^{n-1}3^i$ distinct positive values (as mentioned in the question).
This is because each weight has 3 possibilities: either it's used in the same pan as the weight we want to measure, or in the opposite pan or not at all, giving us $3^n$ possibilities. One of those is zero (from not placing any weight), and for each positive weight we can measure we can also measure a corresponding negative weight by swapping all weights from one pan to the other, so the total of distinct positive weights is $\frac{3^n-1}{2}$.
This means we can weigh at most $2$, $4$, ..., $2 \times \displaystyle\sum_{i=0}^{n-1}3^i$ exactly (by using the weights $2$, $6$, ... $2 \times 3^{n-1}$), being able to distinguish up to $w_n = 2 \times \displaystyle\sum_{i=0}^{n-1}3^i + 1 = 3^n$. |
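For small $n$ this claim is easy to verify by brute force; the sketch below enumerates every placement (same pan, opposite pan, unused) of the weights $2, 6, 18$:

```python
from itertools import product

def exact_weighings(weights):
    """Positive loads that can be balanced exactly: sums of c*w with c in {-1, 0, 1}."""
    sums = {sum(c * w for c, w in zip(coeffs, weights))
            for coeffs in product((-1, 0, 1), repeat=len(weights))}
    return {s for s in sums if s > 0}

exact = exact_weighings([2, 6, 18])   # the weights 2 * 3^i for n = 3
print(sorted(exact))
```

These are exactly the even values $2, 4, \dots, 26$, so every integer load from $1$ to $27 = 3^3$ falls into a distinguishable interval.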
Wavefront shaping techniques in complex media
Category: Phase measurement. Published on Wednesday, 22 May 2019 12:09
\( \def\ket#1{{\left|{#1}\right\rangle}} \def\bra#1{{\left\langle{#1}\right|}} \def\braket#1#2{{\left\langle{#1}|{#2}\right\rangle}} \)

[tutorial] Semidefinite Programming for Intensity-Only Estimation of the Transmission Matrix
The possibility of measuring the transmission matrix using intensity-only measurements is a much sought-after feature, as it allows us not to rely on interferometry. Interferometry usually requires laboratory-grade stability, difficult to obtain for real-world applications. Typically, we want to be able to retrieve the transmission matrix from a set of pairs composed of input masks and output intensity patterns. However, this problem, which corresponds to a phase retrieval problem, is not convex, hence difficult to solve using standard techniques. The idea proposed in [I. Waldspurger et al., Math. Program (2015)] is to relax some constraints to approximate the problem by a convex one that can be solved using the semidefinite programming approach. I briefly detail the approach and provide an example of the procedure to reconstruct the transmission matrix using Python. A Jupyter notebook can be found on my Github account: semidefiniteTM_example.ipynb.

Context
Measuring the full complex transmission matrix requires access to the phase of the optical field. While non-interferometric approaches exist, they usually reduce the resolution, which is detrimental for complex media applications where the speckle pattern shows high spatial frequency fluctuations. Other methods require measuring the intensity pattern at different planes, adding more constraints on the experimental setup. Ideally, we want to be able to reconstruct the transmission matrix from a set of pairs, each consisting of one input field and the corresponding output intensity pattern. Various approaches were proposed, in particular statistical machine learning, deep learning, and semidefinite programming. We focus here on the semidefinite programming approach.
This approach was first proposed in [I. Waldspurger et al., Math. Program (2015)] and was later demonstrated for the first time in [N'Gom et al., Sci. Rep. (2017)] to measure the transmission matrix of a scattering medium.

The mathematical problem
Let's consider a linear medium of transmission matrix \(\mathbf{H}\) of size \(M\times N\) that links the input field \(x\) to the output one \(y\). The \(j^\text{th}\) line of the transmission matrix, \(H_j\), corresponds to the effect of the different input elements on the \(j^\text{th}\) output measurement point of field \(y_j\). The reconstruction of each line of the matrix can be treated independently; we consider only the output pixel \(j\) in the following.
We consider that we have at our disposal a set of input/output pairs \(\left\{X^k,\lvert Y_j^k\rvert\right\}\), with \(k \in [1...P]\), where \(X^k\) is a complex vector corresponding to an input wavefront, \(Y_j^k=\mathbf{H}X^k= \lvert Y_j^k\rvert \exp^{i\Phi_k}\) is the corresponding output complex field, and \(P\) is the number of elements in the data set. \(\mathbf{X}\) is the matrix containing all the input training masks, and \(Y_j\) is the vector containing the output fields at the target point \(j\) for all input masks.
As we only have access to the amplitude \(\lvert Y_j\rvert\) of the output field, we want to solve:
$$
\begin{aligned} \text{min.} & \quad &\lVert H_j\mathbf{X}-\lvert Y_j\rvert\exp^{i\Phi_j}\rVert_ 2^2 \\ \text{subject to} & \quad &H_j \in \mathbb{C}^M, \, \Phi_j \in [0,2\pi]^P \end{aligned}\tag{1} $$
It is shown in [I. Waldspurger et al., Math. Program (2015)] that this expression can be simply rearranged to become
$$
\begin{aligned} \text{min.}& \quad & u^\dagger \mathbf{Q} u = \mathrm{Tr}\left(\mathbf{Q}u u^\dagger\right)\\ \text{subject to} & \quad & H_j \in \mathbb{C}^M, \,u\in\mathbb{C}^P,\,\lvert u_k\rvert=1 \quad \forall k\in[0..P]\\ \text{with } & \quad & \mathbf{Q} = \text{diag}(\lvert Y_j\rvert)\left(\mathbf{I}-\mathbf{X}\mathbf{X}^p\right) \text{diag}(\lvert Y_j\rvert) \end{aligned}\tag{2} $$
\({}^p\) stands for the Moore-Penrose pseudoinverse and \({}^\dagger\) for the conjugate transpose.
The vector \(u\) contains the phase of the \(j^\text{th}\) output point for all the elements of the data set, so that \(u_k=\exp^{i\Phi_k}\). The equivalence between these two expressions is guaranteed by the fact that \(\mathbf{Q}\) is a positive semidefinite Hermitian matrix.
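A quick NumPy check of this construction (sizes and data are made up): for noiseless data, the true phase vector \(u\) lies in the null space of \(\mathbf{Q}\), and \(\mathbf{Q}\) is indeed Hermitian positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 4, 12                      # illustrative sizes: N inputs, P training masks
X = rng.normal(size=(P, N)) + 1j * rng.normal(size=(P, N))  # rows = input masks
h = rng.normal(size=N) + 1j * rng.normal(size=N)            # one line of H (unknown)
y = X @ h                         # noiseless complex outputs at pixel j
u = y / np.abs(y)                 # the true output phases exp(i*Phi_k)

proj = np.eye(P) - X @ np.linalg.pinv(X)                    # I - X X^p
Q = np.diag(np.abs(y)) @ proj @ np.diag(np.abs(y))

residual = np.real(np.conj(u) @ Q @ u)        # u^dagger Q u, zero for the true phases
h_rec = np.linalg.pinv(X) @ (np.abs(y) * u)   # pseudo-inversion given the phases
```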
By construction, \(\mathbf{U}=u u^\dagger\) is of rank \(1\). By relaxing this constraint, the problem can be written as a convex problem that can be solved using semidefinite programming:
$$
\begin{aligned} \text{min.}& \quad & \mathrm{Tr}\left(\mathbf{Q}\mathbf{U}\right)\\ \text{subject to} & \quad & \mathbf{U}=\mathbf{U}^\dagger,\, \text{diag}\left(\mathbf{U}\right) = 1, \mathbf{U} \succeq 0\\ \end{aligned}\label{eq:SDP}\tag{3} $$
with \(\mathbf{U} \succeq 0\) denoting the positive semidefinite constraint on \(\mathbf{U}\). We can now use standard convex solvers to find a solution. The difficulty is that \(\mathbf{U}\) is not of rank \(1\) anymore. To find an approximate solution, we take the first singular vector \(V_0\) of \(\mathbf{U}\), which gives the phase of the output field with good accuracy.
Now that we have the complex output field, we can use a pseudo-inversion to retrieve the transmission matrix.
$$
H_j = \lvert Y_j \rvert V_0\mathbf{X}^p\tag{4} $$

Python implementation
The only important part concerns solving the convex problem. In Python, CVXPY allows writing the problem in a natural way, i.e. exactly as we wrote it in equation \ref{eq:SDP}. The Matlab module CVX does the same thing.
The part of the code that corresponds to solving the convex problem is very concise:
A full Python code that simulates the reconstruction of a random transmission matrix using this procedure in the presence of noise can be found here.
Remarks
Using this approach, which is also the case when using machine learning, the output pixels are treated independently. For each output pixel, the system is not sensitive to a global phase shift or conjugation. That implies that the relative phase between the lines of the matrix is not known. That is not detrimental for the generation of output intensity patterns, but can be otherwise important. It would then require an additional measurement to find these relative phases. |
How can I prove or disprove that every uncountable collection of subsets of a countably infinite set must have two members whose intersection has at least 2010 elements?
This is true.
Let $A$ be this countable set and let $\binom{A}{2010}$ be the set of subsets of $A$ of size $2010$. Let $F$ be an uncountable collection of subsets of $A$.
Let $f: F\rightarrow P(\binom{A}{2010})$ be the function $f(B)=\binom{B}{2010}$ (it sends a subset $B$ of $A$ to the set of its subsets of size $2010$).
Assume for contradiction that $f(B) \cap f(C)=\varnothing$ for all distinct $B, C\in F$. Now consider the union of the sets $f(B)$ for $B\in F$, which is disjoint by assumption. Only countably many $B$ satisfy $f(B)=\varnothing$ (exactly those with fewer than $2010$ elements, and a countably infinite set has only countably many finite subsets), so uncountably many of the $f(B)$ are nonempty and pairwise disjoint, and therefore the union is uncountable. But it is a subset of $\binom{A}{2010}$, which is countable; contradiction.
I think it's easiest to understand this if one has a minimal understanding of QFT. I'm not sure about your background knowledge, but hopefully this isn't gibberish to you.
The QCD Lagrangian for massless quarks is given by,\begin{equation}{\cal L} = \sum_i \bar{\psi}_i i \gamma^\mu \left(\partial_\mu + i g A_\mu\right) \psi_i - \frac{1}{4} F _{ \mu \nu } F ^{ \mu \nu } \end{equation} where the fields are $ A _\mu $ and $ \psi _i $, and the interaction piece is $ -g \sum_i \bar{\psi}_i A_\mu \gamma^\mu \psi_i $. The only constant in the equation is the coupling constant, $g$. Therefore, we see that there is no single scale in the Lagrangian. Naively one would say that the theory is scale invariant.
However, there is a subtlety. We haven't fully specified the theory: we have yet to say what the value of the coupling constant is. The problem is that QFT causes the strength of an interaction to depend on the scale at which it's measured. Luckily, we know how to calculate how a coupling changes with scale (this is done in every full-year QFT course),\begin{equation} \frac{ d \alpha }{ d \log \mu } = - \frac{ b }{ 2\pi } \alpha ^2 \end{equation} where $ \alpha \equiv g ^2/4 \pi $ and $ b $ is a calculable number. For QCD with the SM fermions we have,\begin{equation} b = 7 \end{equation} From here it's easy to solve the differential equation above and get the coupling as a function of the scale, $ \mu $,\begin{align} \frac{1}{ \alpha ( \mu ) } &= \frac{1}{ \alpha ( \mu _0 ) } - \frac{ b }{ 2\pi } \log \frac{ \mu }{ \mu _0 } \\\alpha_s (\mu) &= \frac{ \alpha _s ( \mu _0 ) }{ 1 + \alpha _s ( \mu _0 ) \frac{ b }{ 2\pi } \log \frac{ \mu }{ \mu _0 } }\end{align}Therefore, we can measure the coupling at some scale and then know what it is at every scale. As pointed out by the OP, we can already see the breaking of scale invariance, since the couplings depend on scale.
Now we move on to the relation to $ \Lambda_{QCD} $. This is conventionally defined as the scale where the coupling becomes infinite. From the running above we see this occurs when,\begin{equation} \mu \equiv \Lambda_{QCD} = \mu _0\exp \left[ - \frac{ 2\pi }{ b \alpha _s ( \mu _0 ) } \right]\end{equation}
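These two formulas are easy to check numerically. The sketch below uses pure Python; the reference point $\alpha_s(91.2\,\text{GeV}) \approx 0.118$ is an illustrative input, not something derived here.

```python
import math

B = 7.0  # b for QCD with the SM fermion content, as above

def alpha_s(mu, alpha0, mu0, b=B):
    """One-loop running coupling alpha_s(mu), given alpha_s(mu0) = alpha0."""
    return alpha0 / (1.0 + alpha0 * b / (2.0 * math.pi) * math.log(mu / mu0))

def lambda_qcd(alpha0, mu0, b=B):
    """Scale at which the one-loop coupling diverges."""
    return mu0 * math.exp(-2.0 * math.pi / (b * alpha0))

a0, m0 = 0.118, 91.2      # GeV; illustrative reference values
lam = lambda_qcd(a0, m0)  # the denominator of alpha_s vanishes exactly here
```

At $\mu = \Lambda_{QCD}$ the denominator $1 + \alpha_0 \frac{b}{2\pi}\log(\mu/\mu_0)$ is exactly zero, which is the defining property used in the text.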
Here we see that the scale depends only on the field content (through $b$) and Nature's choice of the coupling.
WHY?
Recent variational training requires sampling from the variational posterior to estimate gradients. The NVIL estimator provides a method to estimate the gradient of the loss function w.r.t. the parameters. Since the score function estimator is known to have high variance, a baseline is used as a variance reduction technique. However, this technique is insufficient to reduce variance in the multi-sample setting, as in IWAE.
WHAT?
We want to fit an intractable model $P(x,h)$ to data. The simplest way to estimate the marginal likelihood is to sample $h$ from the prior and average:
$$\hat{I}(h^{1:K}) = \frac{1}{K}\sum^K_{i=1}P(x|h^i), \quad h^i \sim P(h)$$
However, this estimate has large variance, since only a small region of $P(h)$ may explain the data. Instead, we introduce a proposal distribution $Q(h^i|x)$ conditional on the observation and perform importance sampling:
$$\hat{I}(h^{1:K}) = \frac{1}{K}\sum^K_{i=1}\frac{P(x, h^i)}{Q(h^i|x)}, \quad h^{1:K} \sim Q(h^{1:K}|x) \equiv \prod^K_{i=1}Q(h^i|x)$$
With the proposal distribution in place, a stochastic lower bound can be found.
$$E_{Q(h^{1:K}|x)}[\log \hat{I}(h^{1:K})] \leq \log E_{Q(h^{1:K}|x)}[\hat{I}(h^{1:K})] = \log P(x), \qquad \hat{L}(h^{1:K}) = \log \hat{I}(h^{1:K})$$
The single-sample and multi-sample estimators have different forms (writing $f(x,h^i) \equiv P(x,h^i)/Q(h^i|x)$):
$$\mathcal{L}(x) = E_{Q(h|x)}\left[\log\frac{P(x, h)}{Q(h|x)}\right], \qquad \mathcal{L}^K(x) = E_{Q(h^{1:K}|x)}\left[\log\frac{1}{K}\sum^K_{i=1}f(x, h^i)\right]$$
The gradient of the multi-sample estimator can be divided into two terms:
$$\nabla_{\theta}\mathcal{L}^K(x) = E_{Q(h^{1:K}|x)}\left[\sum_j \hat{L}(h^{1:K})\nabla_{\theta}\log Q(h^j|x)\right] + E_{Q(h^{1:K}|x)}\left[\sum_j \tilde{w}^j \nabla_{\theta}\log f(x, h^j)\right], \qquad \tilde{w}^j \equiv \frac{f(x, h^j)}{\sum_{i=1}^K f(x,h^i)}$$
The two terms describe the effect of $\theta$ on $\mathcal{L}$ through the proposal distribution and through the stochastic lower bound, respectively. The second term is relatively stable compared to the first, since its contributions are normalized by the respective responsibilities $\tilde{w}^j$. The first term is problematic for two reasons: the learning signal $\hat{L}(h^{1:K})$ is the same for every sample, so it does not properly assign credit, and its magnitude is unbounded, overwhelming the second term.
While estimating the gradient w.r.t. $\psi$ is relatively simple, estimating the gradient w.r.t. $\theta$ is the issue. The simplest choice would be naive Monte Carlo; a more elaborate choice would be NVIL. However, even NVIL cannot address variation within the set of samples.
$$\nabla_{\psi}\mathcal{L}^K(x) \simeq \sum_j \tilde{w}^j\nabla_{\psi}\log f(x,h^j)$$
$$\nabla_{\theta}\mathcal{L}^K(x) \simeq \sum_j \hat{L}(h^{1:K})\nabla_{\theta}\log Q(h^j|x) + \sum_j \tilde{w}^j\nabla_{\theta}\log f(x,h^j)$$
$$\nabla_{\theta}\mathcal{L}^K(x) \simeq \sum_j (\hat{L}(h^{1:K}) - b(x) - b)\nabla_{\theta}\log Q(h^j|x) + \sum_j \tilde{w}^j\nabla_{\theta}\log f(x,h^j)$$
This paper suggests reducing the variance by introducing local learning signals instead of a global learning signal. This is possible by substituting the learning signal for sample $j$ with another term: either a learned mapping $\hat{f}(x)$ trained to predict $f(x, h^j)$, or a mean of the other samples. The paper found that the geometric mean worked slightly better than the arithmetic mean.
$$E_{Q(h^{1:K}|x)}[\hat{L}(h^{1:K})\nabla_{\theta}\log Q(h^j|x)] = E_{Q(h^{-j}|x)}[E_{Q(h^{j}|x)}[\hat{L}(h^{1:K})\nabla_{\theta}\log Q(h^j|x)\,|\,h^{-j}]]$$
$$\hat{L}(h^j|h^{-j}) = \hat{L}(h^{1:K}) - \log \frac{1}{K}\Big(\sum_{i\neq j}f(x, h^i) + f(x)\Big)$$
$$\hat{L}(h^j|h^{-j}) = \hat{L}(h^{1:K}) - \log \frac{1}{K}\Big(\sum_{i\neq j}f(x, h^i) + \hat{f}(x, h^{-j})\Big)$$
$$\hat{f}(x,h^{-j}) = \frac{1}{K-1}\sum_{i\neq j}f(x, h^i) \quad \text{or} \quad \exp\Big(\frac{1}{K-1}\sum_{i\neq j}\log f(x, h^i)\Big)$$
$$\nabla_{\theta}\mathcal{L}^K(x) \simeq \sum_j \hat{L}(h^j|h^{-j})\nabla_{\theta}\log Q(h^j|x) + \sum_j \tilde{w}^j\nabla_{\theta}\log f(x,h^j)$$
The final estimator is called Variational Inference for Monte Carlo Objectives (VIMCO).
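A toy numpy sketch of the local learning signals (my own illustration, not the paper's code): `log_f` holds $\log f(x,h^i)$ for the $K$ samples, and the geometric-mean substitute is used for the held-out sample, as the paper reports it works slightly better.

```python
import numpy as np

def vimco_signals(log_f):
    """Per-sample local learning signals L_hat(h^j | h^{-j}) for VIMCO.

    log_f: length-K array of log f(x, h^i) for K posterior samples.
    """
    log_f = np.asarray(log_f, dtype=float)
    K = len(log_f)
    # multi-sample bound: L_hat = log( (1/K) * sum_i f_i )
    L_hat = np.logaddexp.reduce(log_f) - np.log(K)
    signals = np.empty(K)
    for j in range(K):
        rest = np.delete(log_f, j)
        # geometric mean of the other samples stands in for f(x, h^j)
        log_f_hat = rest.mean()
        held_out = np.logaddexp.reduce(np.append(rest, log_f_hat)) - np.log(K)
        signals[j] = L_hat - held_out
    return signals
```

When all samples carry the same log weight, each held-out bound equals the full bound and every local signal is zero, which is the expected credit-assignment behaviour.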
So?
Compared to NVIL and RWS (Reweighted Wake-Sleep), VIMCO achieves better bounds as more samples are used in SBNs. VIMCO also showed better performance in structured output prediction on MNIST.
Critic
Seems like the ultimate version of gradient estimation!
Scientific posters present technical information and are intended for conferences or presentations with colleagues. Since LaTeX is the most natural choice to typeset scientific documents, one should be able to create posters with it. This article explains how to create posters with LaTeX.
Contents
The two main options when it comes to writing scientific posters are tikzposter and beamerposter. Both offer simple commands to customize the poster and support large paper formats. Below you can see a side-by-side comparison of the output generated by both packages (tikzposter on the left and beamerposter on the right).
Tikzposter is a document class that merges the projects fancytikzposter and tikzposter; it is used to generate scientific posters in PDF format. It accomplishes this by means of the TikZ package, which allows a very flexible layout.
The preamble in a tikzposter class has the standard syntax.
\documentclass[24pt, a0paper, portrait]{tikzposter} \usepackage[utf8]{inputenc} \title{Tikz Poster Example} \author{ShareLaTeX Team} \date{\today} \institute{ShareLaTeX Institute} \usetheme{Board} \begin{document} \maketitle \end{document}
The first command, \documentclass[...]{tikzposter}, declares that this document is a tikzposter. The additional parameters inside the brackets set the font size, the paper size and the orientation, respectively. The available font sizes are 12pt, 14pt, 17pt, 20pt and 24pt; the possible paper sizes are a0paper, a1paper and a2paper. There are some additional options; see the further reading section for a link to the documentation.
The commands \title, \author, \date and \institute are used to set the author information; they are self-descriptive.
The command \usetheme{Board} sets the current theme, i.e. changes the colours and the decoration around the text boxes. See the reference guide for screenshots of the available themes.
The command \maketitle prints the title on top of the poster.
The body of the poster is created by means of text blocks. Multi-column placement can be enabled and the width can be explicitly controlled for each column, this provides a lot of flexibility to customize the look of the final output.
\documentclass[25pt, a0paper, portrait]{tikzposter} \usepackage[utf8]{inputenc} \title{Tikz Poster Example} \author{ShareLaTeX Team} \date{\today} \institute{ShareLaTeX Institute} \usepackage{blindtext} \usepackage{comment} \usetheme{Board} \begin{document} \maketitle \block{~} { \blindtext } \begin{columns} \column{0.4} \block{More text}{Text and more text} \column{0.6} \block{Something else}{Here, \blindtext \vspace{4cm}} \note[ targetoffsetx=-9cm, targetoffsety=-6.5cm, width=0.5\linewidth ] {e-mail \texttt{sharelatex@sharelatex.com}} \end{columns} \begin{columns} \column{0.5} \block{A figure} { \begin{tikzfigure} \includegraphics[width=0.4\textwidth]{images/lion-logo.png} \end{tikzfigure} } \column{0.5} \block{Description of the figure}{\blindtext} \end{columns} \end{document}
In tikzposter the text is organized in blocks. Each block is created by the command \block{}{}, which takes two parameters, each inside a pair of braces: the first is the title of the block and the second is the actual text to be printed inside the block.
The environment columns enables multi-column text; the command \column{} starts a new column and takes as parameter the relative width of the column (1 means the whole text area, 0.5 means half the text area, and so on).
The command \note[]{} adds notes that are rendered overlapping the text block. Inside the brackets you can set parameters that control the placement of the note; inside the braces goes the text of the note.
The standard LaTeX commands to insert figures don't work in tikzposter; the environment tikzfigure must be used instead.
The package beamerposter enhances the capabilities of the standard beamer class, making it possible to create scientific posters with the same syntax as a beamer presentation.
By now there are not many themes for this package, and it is slightly less flexible than tikzposter, but if you are familiar with beamer, using beamerposter doesn't require learning new commands.
Note: In this article a special beamerposter theme is used. The theme "Sharelatex" is based on the theme "Dreuw" created by Philippe Dreuw and Thomas Deselaers, but it was modified to make it easier to insert the logo and print the e-mail address at the bottom of the poster; those are hard-coded in the original themes.
Even though this article explains how to typeset a poster in LaTeX, the easiest way is to use a template as a starting point. We provide several in the ShareLaTeX templates page.
The preamble of a beamerposter is basically that of a beamer presentation, except for an additional command.
\documentclass{beamer} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{amsmath,amsthm, amssymb, latexsym} \boldmath \usetheme{Sharelatex} \usepackage[orientation=portrait,size=a0,scale=1.4,debug]{beamerposter} \title[Beamer Poster]{ShareLaTeX example of the beamerposter class} \author[sharelatexteam@sharelate.com]{ShareLaTeX Team} \institute[Sharelatex University]{The ShareLaTeX institute, Learn faculty} \date{\today} \logo{\includegraphics[height=7.5cm]{SharelatexLogo}}
The first command in this file is \documentclass{beamer}, which declares that this is a beamer presentation. The theme "Sharelatex" is set by \usetheme{Sharelatex}. There are some beamerposter themes on the web; most of them can be found on the web page of the beamerposter authors.
The command
\usepackage[orientation=portrait,size=a0,scale=1.4,debug]{beamerposter}
imports the beamerposter package with some special parameters: the orientation is set to portrait, the poster size is set to a0 and the fonts are scaled by a factor of 1.4. The poster sizes available are a0, a1, a2, a3 and a4, but the dimensions can be set arbitrarily with the options width=x,height=y.
The rest of the commands set the standard information for the poster: title, author, institute, date and logo. The command \logo{} won't work in most of the themes and has to be set by hand in the theme's .sty file. Hopefully this will change in the future.
Since the document class is beamer, to create the poster all the contents must be typed inside a frame environment.
\documentclass{beamer} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{amsmath,amsthm, amssymb, latexsym} \boldmath \usetheme{Sharelatex} \usepackage[orientation=portrait,size=a0,scale=1.4]{beamerposter} \title[Beamer Poster]{ShareLaTeX example of the beamerposter class} \author[sharelatexteam@sharelate.com]{ShareLaTeX Team} \institute[Sharelatex University] {The ShareLaTeX institute, Learn faculty} \date{\today} \logo{\includegraphics[height=7.5cm]{SharelatexLogo}} \begin{document} \begin{frame}{} \vfill \begin{block}{\large Fontsizes} \centering {\tiny tiny}\par {\scriptsize scriptsize}\par {\footnotesize footnotesize}\par {\normalsize normalsize}\par ... \end{block} \vfill \begin{columns}[t] \begin{column}{.30\linewidth} \begin{block}{Introduction} \begin{itemize} \item some items \item some items ... \end{itemize} \end{block} \end{column} \begin{column}{.48\linewidth} \begin{block}{Introduction} \begin{itemize} \item some items and $\alpha=\gamma, \sum_{i}$ ... \end{itemize} $$\alpha=\gamma, \sum_{i}$$ \end{block} ... \end{column} \end{columns} \end{frame} \end{document}
Most of the content in the poster is created inside a block environment; this environment takes as parameter the title of the block.
The environment columns enables multi-column text; the environment column starts a new column and takes as parameter the width of said column. All LaTeX units can be used here; in the example the column width is set relative to the text width.
Tikzposter themes
Default Rays Basic Simple Envelope Wave Board Autumn Desert
For more information see |
Let's look at the transition amplitude $U(x_{b},x_{a})$ for a free particle between two points $x_{a}$ and $x_{b}$ in the Feynman path integral formulation
$U(x_{b},x_{a}) = \int_{x_{a}}^{x_{b}} \mathcal{D} x e^{\frac{i}{\hbar}S}$
($S$ is the classical action). It is often said that one gets classical mechanics in the limit $\hbar \rightarrow 0$. Then only the classical action contributes, since the terms with non-classical $S$ cancel each other out because of the heavily oscillating phase. This sounds reasonable.
But when we look at the Heisenberg equation of motion for an operator $A$
$\frac{dA}{dt} = \frac{1}{i \hbar} [A,H]$
the limit $\hbar \rightarrow 0$ does not make any sense (in my opinion) and does not reproduce classical mechanics. Basically, the whole procedure of canonical quantization does not make sense:
$\{\cdots,\cdots\} \rightarrow \frac{1}{i \hbar} [\cdots,\cdots]$
I don't understand when $\hbar \rightarrow 0$ gives a reasonable result and when it doesn't. The question was hinted at here: Classical limit of quantum mechanics. But that discussion dealt with only one particular example of this transition. Does anyone have more general knowledge about the limit $\hbar \rightarrow 0$?
Suppose you shot a large number of small classical magnetic dipoles with magnetic moment $\vec{\mu}$ through the field. Imagine the dipoles to be small enough that they could be treated as the particles of an ideal gas, and they are "boiled" out of some source into the magnetic field.
We would then expect each of the particle's components of velocity to be randomly distributed according to the Maxwell velocity distribution - because that is the classical result for an ideal gas. So the alignment of their magnetic moments would also start out random.
The dipoles would experience both a force and a torque from the magnetic field. The torque would cause them to rotate, and the force would, as you said, tend to line them up with the magnetic field, until they are in a minimum energy state. This alignment would take some time however, and, since they started out with random velocity and random orientation of their magnetic moment vector to the field, their final velocities once aligned would show some variation.
The key, though, that makes the motion of the particles vary AFTER they are aligned, is the nonuniform magnetic field. Suppose the field is in the z direction, and varies with z.
The particles are in a minimum potential energy state once aligned, with potential energy
$E=-\vec{\mu} \cdot\vec{B} = -\mu B $
But the magnetic field $B(z)$ varies with z, so the dipole still experiences a force
$\dfrac{\partial E}{\partial z} = F(z) = \mu\dfrac {\partial B(z)}{\partial z}$
So the classical dipoles, with randomly distributed magnetic moment orientations and velocities at start, would drift in varying directions, hit various positions on the detector.
But if the magnetic dipoles were somehow constrained to only two possible initial orientations, you would expect to see a concentration of hits at two locations on the detector, and nothing anywhere else. They would start out with only two orientations with respect to the field and end up being deflected into only two concentrations on the detector. They'd have only two end states of "lining up" with the magnetic field, and then drift apart due to the nonuniform magnetic field.
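A toy Monte Carlo (my own illustration, ignoring the alignment transient described above) makes the contrast concrete: the deflection is proportional to $\mu_z = \mu\cos\theta$, so random classical orientations give a continuous smear while two allowed orientations give two spots.

```python
import random

random.seed(0)
N = 10_000

# Classical dipoles: isotropic initial orientations make cos(theta)
# uniform on [-1, 1], so deflections smear over a continuous band.
classical = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Quantized moments: only two allowed projections, so only two spots.
quantized = [random.choice([-1.0, 1.0]) for _ in range(N)]
```

Histogramming `classical` would show a filled band between the extremes; `quantized` produces exactly the two concentrations seen by Stern and Gerlach.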
So the Stern Gerlach experiment is evidence that the magnetic moment of electrons in atoms, and thus electron spin, is quantized, because the results resemble the second case above, not the first. The initial direction of the magnetic moment of the electron is limited by quantization of spin. |
Mathematics - Functional Analysis and Mathematics - Metric Geometry
Abstract
The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: There exists an infinite subset $S\subseteq S_X$ and a constant $d>1$, satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$, for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society
Given a finite dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coordinates $0$, $-1$, $+1$) of the members of the basis, such that each pair of distinct elements of $D$ has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
WHY?
Directed latent variable models are known to be difficult to train at large scale because the posterior distribution is intractable.
WHAT?
This paper suggests a way to estimate the inference model with a feed-forward network. Since the exact posterior $P_{\theta}(h|x)$ is intractable, we use $Q_{\phi}(h|x)$ to approximate it.
$$\log P_{\theta}(x) = \log\sum_h P_{\theta}(x,h) \geq \sum_h Q_{\phi}(h|x) \log \frac{P_{\theta}(x,h)}{Q_{\phi}(h|x)} = E_Q[\log P_{\theta}(x, h) - \log Q_{\phi}(h|x)] = \mathcal{L}(x, \theta, \phi)$$
$$\mathcal{L}(x, \theta, \phi) = \log P_\theta(x) - KL(Q_{\phi}(h|x)\,\|\,P_{\theta}(h|x))$$
Since $h$ is sampled from the posterior, it is impossible to get the exact gradient of the lower bound w.r.t. the parameters. Therefore, Monte Carlo estimation is used, and the score function estimator (REINFORCE) provides the gradient of the lower bound, which involves the stochastic variable $h$.
$$\nabla_{\theta}\mathcal{L}(x) = E_Q[\nabla_{\theta}\log P_{\theta}(x, h)]$$
$$\nabla_{\phi}\mathcal{L}(x) = E_Q[(\log P_{\theta}(x,h) - \log Q_{\phi}(h|x)) \times \nabla_{\phi}\log Q_{\phi}(h|x)]$$
$$\nabla_{\theta}\mathcal{L}(x) \approx \frac{1}{n}\sum^n_{i=1}\nabla_{\theta}\log P_{\theta}(x, h^{(i)})$$
$$\nabla_{\phi}\mathcal{L}(x) \approx \frac{1}{n}\sum^n_{i=1}(\log P_{\theta}(x,h^{(i)}) - \log Q_{\phi}(h^{(i)}|x)) \times \nabla_{\phi}\log Q_{\phi}(h^{(i)}|x)$$
However, the variance of the gradient estimate from the score function estimator is usually high, so variance reduction techniques are applied. A global baseline $c$ is learned during training, and $C_{\psi}(x)$ serves as an input-dependent baseline; the input-dependent baseline is trained to minimize the mean squared error
$$l_{\phi}(x, h) = \log P_{\theta}(x,h) - \log Q_{\phi}(h|x), \qquad E_{Q}[(l_{\phi}(x, h) - C_{\psi}(x) - c)^2]$$
To make training stable, the learning signal is normalized by a running estimate of its standard deviation whenever that estimate is greater than 1. If the inference network is structured, we can compute a local learning signal for each factorized conditional:
$$\nabla_{\phi_i}\mathcal{L}(x) = E_{Q(h^{1:i-1}|x)}[E_{Q(h^{i:n}|h^{i-1})}[l_{\phi}(x,h)\nabla_{\phi_i}\log Q_{\phi_i}(h^i|h^{i-1})]\,|\,h^{i-1}]$$
$$l^i_{\phi}(x, h) = \log P_{\theta}(h^{i-1:n}) - \log Q_{\phi}(h^{i:n}|h^{i-1})$$
Then layer-dependent baselines need to be learned.
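A minimal numpy illustration (my own toy, not the paper's model) of why subtracting a baseline helps: for a Bernoulli proposal with logit $\phi$, the score is $h - p$, and a constant offset in the learning signal inflates the variance of the score-function estimator without changing its mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_grad(phi, f, baseline=0.0, n=10_000):
    """Per-sample REINFORCE estimates of d/dphi E_{h~Bern(p)}[f(h)],
    with p = sigmoid(phi) and an optional baseline subtracted."""
    p = 1.0 / (1.0 + np.exp(-phi))
    h = (rng.random(n) < p).astype(float)
    score = h - p                  # d/dphi log Q(h) for a Bernoulli with logit phi
    return (f(h) - baseline) * score

f = lambda h: 4.0 + h              # constant offset inflates variance
g_raw = score_grad(0.3, f)                  # no baseline
g_base = score_grad(0.3, f, baseline=4.0)   # baseline cancels the offset
```

Both estimators target the same true gradient $p(1-p)$, but the baselined version has a variance smaller by roughly two orders of magnitude in this toy.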
So?
NVIL used in a sigmoid belief network (SBN) outperformed SBNs trained with the wake-sleep algorithm and other models, including DARN, NADE, RBM and MoB, in NLL on MNIST. An SBN trained with NVIL also outperformed LDA in document modeling.
Critic
This seems like a smart move, but the reparameterization trick of the VAE proved too strong a competitor. NVIL can still be used in cases where the distribution is impossible to reparameterize.
Integer programming algorithms minimize or maximize a linear function subject to equality, inequality, and integer constraints. Integer constraints restrict some or all of the variables in the optimization problem to take on only integer values. This enables accurate modeling of problems involving discrete quantities (such as shares of a stock) or yes-or-no decisions. When there are integer constraints on only some of the variables, the problem is called a mixed-integer linear program. Example integer programming problems include portfolio optimization in finance, optimal dispatch of generating units (unit commitment) in energy production, and scheduling and routing in operations research.
Integer programming is the mathematical problem of finding a vector \(x\) that minimizes the function:
\[\min_{x} \left\{f^{\mathsf{T}}x\right\}\]
Subject to the constraints:
\[\begin{eqnarray}Ax \leq b & \quad & \text{(inequality constraint)} \\A_{eq}x = b_{eq} & \quad & \text{(equality constraint)} \\lb \leq x \leq ub & \quad & \text{(bound constraint)} \\ x_i \in \mathbb{Z} & \quad & \text{(integer constraint)} \end{eqnarray}\]
Solving such problems typically requires using a combination of techniques to narrow the solution space, find integer-feasible solutions, and discard portions of the solution space that do not contain better integer-feasible solutions. Common techniques include:
Cutting planes: Add additional constraints to the problem that reduce the search space.
Heuristics: Search for integer-feasible solutions.
Branch and bound: Systematically search for the optimal solution. The algorithm solves linear programming relaxations with restricted ranges of possible values of the integer variables.
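As a sketch, a small mixed-integer linear program of exactly this form can be posed with SciPy's `milp` (a HiGHS-based branch-and-bound solver); the data below are illustrative, not from the text.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# minimize f^T x  subject to  A x <= b,  0 <= x <= 10,  x integer
f = np.array([-3.0, -2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([6.0, 9.0])

res = milp(
    c=f,
    constraints=LinearConstraint(A, -np.inf, b),
    bounds=Bounds(0, 10),
    integrality=np.ones_like(f),  # 1 marks an integer-constrained variable
)
# res.x is the integer-feasible minimizer, res.fun the optimal value
```

Setting some entries of `integrality` to 0 turns this into a mixed-integer program, where only the flagged variables are restricted to integers.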
For more information on integer programming, see Optimization Toolbox™. |
To avoid confusion, let's clarify some notation. Let $\mathcal{M}^{*}$ denote the $\sigma$-algebra of $\mu^{*}$-measurable subsets of $X$. Let $\bar{\mathcal{M}}$ denote the completion of $\mathcal{M}$ with respect to $\mu$. Let $\widetilde{\bar{\mathcal{M}}}$ denote the $\sigma$-algebra of locally measurable subsets of $(X,\bar{\mathcal{M}},\bar{\mu})$. Since $\mu^{*}$ restricts to a complete measure on $\mathcal{M}^{*}$, we know that $\mu^{*}|_{\mathcal{M}^{*}}$ coincides with the completion of $\mu$ on the completion $\bar{\mathcal{M}}$ of $\mathcal{M}$ with respect to $\mu$, so there is no ambiguity in using the notation $\bar{\mu}$.
Lemma. Let $(X,\mathcal{M},\mu)$ be a measure space, and let $\mu^{*}$ be the outer measure induced by $\mu$. For $E\in\mathcal{M}^{*}$, with $\mu^{*}(E)<\infty$, there exists $A\in\mathcal{M}$ such that
\begin{align*} E\subset A, \quad \mu^{*}(E)=\mu(A) \end{align*} In particular, if $\mu^{*}(E)=0$, then $E$ is contained in a $\mu$-null set $N\in\mathcal{M}$.
Proof. Fix $\epsilon>0$. By definition of the infimum, there exists a countable collection of sets $\left\{A_{j}\right\}\subset\mathcal{M}$ such that\begin{align*}E\subset A:=\bigcup_{j=1}^{\infty}A_{j}, \quad \sum_{j=1}^{\infty}\mu(A_{j})<\mu^{*}(E)+\epsilon\end{align*}Since $\mathcal{M}$ is a $\sigma$-algebra, $A\in\mathcal{M}$, and by subadditivity, $\mu(A)\leq\mu^{*}(E)+\epsilon$.
We can apply the preceding result for each $\epsilon_{n}=1/n$, to obtain a collection of sets $\left\{A^{n}\right\}\subset\mathcal{M}$ such that $E\subset A^{n}$ and $\mu(A^{n})\leq\mu^{*}(E)+1/n$. If we set $A:=\bigcap_{n}A^{n}$, then $E\subset A$ and by monotonicity,\begin{align*}\mu(A)\leq\mu(A^{n})\leq\mu^{*}(E)+\dfrac{1}{n},\qquad\forall n\end{align*}Letting $n\rightarrow\infty$, we obtain $\mu(A)\leq\mu^{*}(E)$. The reverse inequality holds by monotonicity. $\Box$
Let $E$ be a locally measurable subset of $(X,\bar{\mathcal{M}},\bar{\mu})$. I claim that $E\in\mathcal{M}^{*}$. It suffices to show that for any $F\subset X$ with $\mu^{*}(F)<\infty$, we have\begin{align*}\mu^{*}(F)\geq\mu^{*}(E\cap F)+\mu^{*}(E^{c}\cap F)\end{align*}By the lemma, there exists a set $A\in\mathcal{M}$ such that\begin{align*}F\subset A, \quad \mu^{*}(F)=\mu(A)\end{align*}So $E\cap A$ is measurable, whence $(E^{c}\cup A^{c})\cap A=E^{c}\cap A$ is measurable. By monotonicity and additivity, we see that\begin{align*}\mu^{*}(E\cap F)+\mu^{*}(E^{c}\cap F)\leq\mu^{*}(E\cap A)+\mu^{*}(E^{c}\cap A)=\mu(A)=\mu^{*}(F)\end{align*}
The reverse inclusion also holds: $E\in\mathcal{M}^{*}$ implies $E\in\widetilde{\bar{\mathcal{M}}}$. For any set $A\in\bar{\mathcal{M}}$ with $\bar{\mu}(A)<\infty$, $\mu^{*}(E\cap A)<\infty$, whence there exists a set $B\in\mathcal{M}$ such that $E\cap A\subset B$ and $\mu^{*}(E\cap A)=\mu(B)$. Since $E\cap A, B\in\mathcal{M}^{*}$, $\mu^{*}(B\setminus (E\cap A))=0$. But then there exists $N\in\mathcal{M}$, such that\begin{align*}B\setminus (E\cap A)\subset N, \quad \mu^{*}(B\setminus (E\cap A))=\mu(N)=0\end{align*}We conclude that $B\setminus (E\cap A)\in\bar{\mathcal{M}}$, whence\begin{align*}E\cap A=B\setminus (B\setminus (E\cap A))\in\bar{\mathcal{M}}\end{align*}
With $E$ as above, suppose $\mu^{*}(E)<\infty$. Then, as asserted before, there exists a set $A\in\mathcal{M}$ such that $E\subset A$ and $\mu^{*}(E)=\mu(A)$. But then $E=E\cap A\in\bar{\mathcal{M}}$. We conclude that
\begin{align*}\widetilde{\bar{\mu}}(E)=\begin{cases}\bar{\mu}(E) & {E\in\bar{\mathcal{M}}}\\ \infty & {E\in\widetilde{\bar{\mathcal{M}}}\setminus\bar{\mathcal{M}}} \end{cases},\end{align*}which is the definition of $\widetilde{\bar{\mu}}$, the saturation of $\bar{\mu}$.
Statement: $g:A \to B$ and $f:B \to C$ are injective functions. Then $f \circ g:A \to C$ is injective.
Attempt of proof:
Suppose by contradiction that $f \circ g$ is not injective. Then there exist $x$ and $y$, $ x \neq y$, such that $f \circ g(x) = f\circ g(y)$. Then, by definition of composition, $f(g(x)) = f(g(y))$. Since $g$ is injective, $f(x) = f(y)$. Since $f$ is injective, $x = y$. But that is a contradiction, since $x \neq y$.
I'm not sure if every step I'm doing here is alright, especially the claim that if $f \circ g$ isn't injective then there exist $x$ and $y$ with $x \neq y$. Why must there be two such points in $A$?
The live video stream from Atiyah's talk (9:45-10:30) was mostly overloaded but we may already watch a
49-minute-long recording on the Laureates Forum YouTube channel. However, we were given two papers that are said to contain the proof:
The Fine Structure Constant (17 pages)
The Riemann Hypothesis (5 pages)
The second paper contains the proof – which would really be an elementary proof accessible to intelligent undergraduates – on 15 lines of page 3.
In the first paper, Atiyah claims to construct "the Todd function" \(T(s)\), which is weakly analytic and may be understood as a limit of analytic functions; a representation of the step function is his example. Well, I don't even understand this example. I can write the step function as a limit of analytic functions of the real variable in many different ways (arctan, tanh) but they lead to completely different continuations in the complex plane! He claims that the transition from "real analytic" to "complex analytic" is basically unique and straightforward, which is one of the things that look obviously wrong to me.
In the second paper, he claims to derive a contradiction from the existence of the smallest (by its imaginary part) root \(s=b\) away from the critical axis but in the critical strip. In a rectangle going up to this \(b\), he recursively defines \[
F(s) = T(1+\zeta(s+b)) - 1
\] and using some properties of his Todd function such as\[
T([1+f(s)][1+g(s)]) = T[1+f(s)+g(s)]
\] he derives \(F(s)=2F(s)\) and therefore \(F(s)=0\), and because \(F,\zeta\) contain a similar transformation "reshaped" by his \(T\), it would also follow that \(\zeta(s)=0\) everywhere which is wrong. He also claims that this proof would be an example of the "search for the first Siegel zero". Find a "smallest" wrong root, and then show that an even "smaller" wrong root exists.
Remarkably enough, I was answering the question whether a proof of RH could be elementary on Quora last night and I used exactly this strategy as a highly hypothetical example what a simple proof could look like! In fact, Atiyah claims that the imaginary part of \(s\) was halved, just like I wrote! ;-)
In fact, I am even worried that Atiyah was copying from me.
At any rate, I don't see how a locally holomorphic function \(T(s)\) in the complex plane could obey the properties he needs, and even if the properties were satisfied, I don't think that \(F(s)=2F(s)\) follows from them as he claims.
Off-topic: Like in 1945, the City of Pilsen has prepared a state-of-the-art choreography and the newest music to welcome the U.S. troops who will liberate us (I said "us", not only the "girls", I hope that you heard me well) from a totalitarian regime dreaming about the European domination. I was actually impressed by the quality of this video.
More importantly, while looking through the papers, I checked whether I couldn't kill the proof by the same simple argument as the argument that is enough to kill 90% of the truly hopeless attempts. The truly hopeless attempts seem to assume that you may just look at some function with a similar location of the zeros and poles and you may show that there are no nontrivial roots away from the critical axis.
Needless to say, any such attempt is wrong because the properties of the primes, the Euler and other formulae for the zeta function, or other special information about the positions of its zeroes were not used at all. There surely exist some similar functions with roots that are away from the critical axis.
And I think that Atiyah's proof sadly suffers from the same elementary problem. He claims that no functions with the symmetrically located "wrong" roots exist at all – which is clearly wrong. Just take (the subscript "c" stands for "crippled")\[
\eq{
\zeta_{c}(s) &= \zeta(s)\times R \times \bar R\\
R &= \frac{(s-0.6-9i)(s-0.4-9i)}{(s-\rho_1)(s-\rho_2)}
}
\] The denominators just removed some two pairs of zeros \(\rho_1,\rho_2,\bar\rho_1,\bar\rho_2\) from the critical axis and the numerator added a quadruplet of symmetrically placed (relatively to the real and critical axis) zeroes away from the critical axis (I am a perfectionist so I kept the "total number of roots" the same to minimize my footprint; with some adjustments of \(0.6+9i\) above, I could even keep some moments etc.). I think that Atiyah's proof, like hundreds of hopeless proofs, claims that \(\zeta_c(s)\) cannot exist at all. But it clearly can, I just defined it. ;-)
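A quick numerical sanity check of this construction — my own sketch using Python's mpmath, with `rho1`, `rho2` taken as the first two nontrivial zeros from standard tables (truncated):

```python
import mpmath as mp

# First two nontrivial zeros of zeta, from standard tables (truncated)
rho1 = mp.mpc('0.5', '14.134725141734693')
rho2 = mp.mpc('0.5', '21.022039638771555')

def R(s):
    return ((s - (0.6 + 9j)) * (s - (0.4 + 9j))) / ((s - rho1) * (s - rho2))

def Rbar(s):
    # Reflected factor: zeros and poles at the complex conjugates
    return ((s - (0.6 - 9j)) * (s - (0.4 - 9j))) / \
           ((s - rho1.conjugate()) * (s - rho2.conjugate()))

def zeta_c(s):
    # The "crippled" zeta: two pairs of on-axis zeros swapped for
    # a symmetric quadruplet of off-axis zeros
    return mp.zeta(s) * R(s) * Rbar(s)

# zeta_c vanishes at the planted off-axis root 0.6 + 9i
print(abs(zeta_c(mp.mpc('0.6', '9'))))
```

So a perfectly well-defined function with the "forbidden" symmetric off-axis roots does exist, which is the point of the counterexample.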
(If I remember the 2015 Nigerian "RH breakthrough" well, the guy didn't even have that.)
Maybe his proof isn't hopeless and while constructing the function \(T(s)\) in the "fine-structure constant" paper, he is using some special properties of \(\zeta(s)\) that are not shared by functions such as \(\zeta_c(s)\). But I just don't see where it could possibly be.
Instead, what we see in the "fine-structure constant" paper are musings about the unification of physics and mathematics that I sympathize with, but presenting them as exact science is just plain silly; plus truly crackpot numerology about the derivation of the fine-structure constant of electromagnetism, \(\alpha\approx 1/137.035999\), from some pure and canonical mathematical operations that "renormalize" \(\pi\) to a "ž" ("zh") written in the Cyrillic script, i.e. "Ж". ;-) I actually wanted to use a Cyrillic letter in a paper, too. And this is the most playful and original one.
Sorry, Prof Atiyah, but that made me laugh out loud – and your comments about the "well-known Russian letter" in the talk escalated my laughter, and probably those of many who understand or who can read Russian just fine.
First, \(\pi\) is a mathematical constant, so it doesn't change, doesn't run, and doesn't get renormalized. By contrast, the fine-structure constant of electromagnetism does run, and it is a complicated, rather messy parameter of the Standard Model: it depends on the "theory at short distances" (either a quantum field theory or, ideally, a string theory vacuum) plus renormalization flows, and all the renormalization flows depend on the whole electroweak theory (electromagnetism is just a part of it) as well as the spectrum of quarks and leptons, the number of their generations, and all other particles and interactions of the Standard Model.
The Standard Model is almost certainly not unique and canonical enough for its parameters to be on par with \(\pi\). Thank God, Sean Carroll wrote an equivalent argument a day after me. (I just can't understand how he or any theoretical physicist could be uninterested in the Riemann Hypothesis or "incapable" of following a simple proof of it.) And if Mr Atiyah has believed that \(\alpha\) could be as canonical as \(\pi\) or a renormalized \(\pi\) even a decade ago, then I am confident that his contributions to the paper with Witten about the \(G_2\) compactifications of M-theory and the topology change were at most purely technical, like those of a graduate student, but he couldn't possibly write anything correct about the "big picture" of that paper because he's completely confused about particle physics.
In the Team Stanford picture, the Standard Model is just one among \(10^{500}\) or so – a googol-like large number – compactifications of string/M-theory. Each of them has its own parameters similar to the fine-structure constant. So one such constant cannot be on par with \(\pi\). But even if e.g. Team Vafa were correct and the number of phenomenologically relevant corners of the stringy configuration space were much lower, the choice is still far from unique and the fine-structure constant is far from a simple canonical parameter comparable to \(\pi\).
So I have spent many hours following this story and expressed my admiration for Prof Atiyah by those efforts to listen to him. But I think that the bubble has burst, some credit has been spent, and I would probably not watch another of his attempts to achieve something comparably groundbreaking. I still admire him for his accumulated contributions to mathematics, mathematical physics, and the bridges between mathematics and physics, his love for simplicity, and his energy and ability to control much more than just the bladder at the age of 89+, but the proof of the Riemann Hypothesis will require more than that.
There is something possibly interesting about the strange Todd function – which would combine some great ideas from Hirzebruch and von Neumann. But he says that the function is polynomial in whole convex regions of the complex plane and locally holomorphic – but not fully holomorphic. I don't understand how a polynomial function in the complex plane could be nontrivial in any sense – and how its "different" extrapolation than the simple polynomial analytic continuation could be natural in any way. Because I think that other comments, such as those about the computation of the fine-structure constant, are just plain silly, it seems very likely that there won't be anything correct and clever in the claims about \(T(s)\) and the function with the desired properties probably doesn't exist.
The attempt to derive the contradiction from the "first Siegel zero" by showing that an even smaller one exists is something that I find potentially promising. It was some 50% of those efforts of mine to prove the RH that were not based on the Hilbert-Pólya program. I think it cannot be excluded that a simple proof based on this general strategy does exist. But I am rather certain that Atiyah's attempt isn't such a simple proof.
1. Homework Statement
A charged harmonic oscillator is placed in an external electric field [tex]\epsilon[/tex], i.e. its Hamiltonian is [tex] H = \frac{p^2}{2m} + \frac{1}{2}m \omega ^2 x^2 - q \epsilon x [/tex]. Find the eigenvalues and eigenstates of energy.
2. Homework Equations
3. The Attempt at a Solution
By completing the square I get
[tex] [-\frac{\hbar^2}{2m}\frac{d^2}{du^2}+\frac{1}{2}m \omega ^2u^2] \phi (u) = (E + \frac{q^2 \epsilon^2}{2m \omega ^2}) \phi (u) [/tex]
where
[tex]u=x-\frac{q\epsilon}{m\omega^2}[/tex].
Then usually for Hamiltonians of this kind the energy eigenvalues are
[tex]E_n=\hbar\omega(n+\frac{1}{2})[/tex]
but how do I obtain them in this case? Or is this the right way to go?
Do I call
[tex]E + \frac{q^2 \epsilon^2}{2m \omega ^2}=E'[/tex]
which would give me
[tex]E'_n=\hbar\omega(n+\frac{1}{2})[/tex]
And how do I switch back to x?
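Yes, that is the right way to go: in the shifted variable the problem is an ordinary oscillator, so \(E'_n=\hbar\omega(n+\frac{1}{2})\) and hence \(E_n=\hbar\omega(n+\frac{1}{2})-\frac{q^2\epsilon^2}{2m\omega^2}\). As a numerical sanity check (my own sketch, not from the thread), one can diagonalize a finite-difference Hamiltonian in units hbar = m = omega = q = 1 with field eps = 0.5, where the expected spectrum is (n + 1/2) - eps^2/2:

```python
import numpy as np

# Finite-difference check of the completed-square spectrum,
# units hbar = m = omega = q = 1; expect E_n = (n + 1/2) - eps^2/2
eps = 0.5
N, L = 1200, 16.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic term -1/2 d^2/dx^2 via the standard three-point stencil
T = (np.diag(np.full(N, 1.0))
     - 0.5 * np.diag(np.ones(N - 1), 1)
     - 0.5 * np.diag(np.ones(N - 1), -1)) / dx**2
V = np.diag(0.5 * x**2 - eps * x)   # oscillator plus linear field term
E = np.linalg.eigvalsh(T + V)

print(E[:3])  # approximately [0.375, 1.375, 2.375]
```

The eigenstates are just the usual oscillator states expressed in u, i.e. translated by qε/(mω²) in x.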
All about that Bayes - An Intro to Probability
RANDOM VARIABLES
In this world, things keep happening around us. Each event occurring is a Random Variable. A Random Variable is an event, like elections, snow or hail. Random variables have an outcome attached to them, whose value is between 0 and 1. This is the likelihood of that event happening. We hear the outcomes of random variables all the time: there is a 50% chance of precipitation; the Seattle Seahawks have a 90% chance of winning the game.
SIMPLE PROBABILITY
Where do we get these numbers from? From past data.
Year   2008   2009   2010   2011   2012   2013   2014   2015
Rain   Rainy  Dry    Rainy  Rainy  Rainy  Dry    Dry    Rainy
PROBABILITY OF 2 EVENTS
What is the probability of Event A and Event B happening together? Consider the following table, with data about the rain and sun received by Seattle for the past few years.
Year   2008   2009   2010    2011    2012    2013    2014   2015
Rain   Rainy  Dry    Rainy   Rainy   Rainy   Dry     Dry    Rainy
Sun    Sunny  Sunny  Sunny   Cloudy  Cloudy  Cloudy  Sunny  Sunny
Using the above information, can you compute what is the probability that it will be Sunny and Rainy in 2016?
We can get this number easily from the Joint Distribution:
SUN \ RAIN   Rainy   Dry
Sunny        3/8     2/8
Cloudy       2/8     1/8
In 3 out of the 8 examples above, it is Sunny and Rainy at the same time. Similarly, in 1 out of 8 times it is Cloudy and Dry. So we can compute the probability of multiple events happening at the same time using the Joint Distribution. If there are more than 2 variables, the table will be of a higher dimension.
We can extend this table further to include Marginalization. Marginalization is just a fancy word for adding up all the probabilities in each row, and in each column, respectively.
SUN \ RAIN   Rainy   Dry     Margin
Sunny        0.375   0.25    0.625
Cloudy       0.25    0.125   0.375
Margin       0.625   0.375   1
Why are margins helpful? They remove the effects of one of the two events in the table. So, if we want to know the probability that it will rain (irrespective of other events), we can find it from the marginal table as 0.625. From Table 1, we can confirm this by computing all the individual instances that it rains - 5/8 = 0.625
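The joint and marginal numbers above can be reproduced directly from the eight yearly observations; a small Python sketch (data transcribed from the tables):

```python
from collections import Counter

# The eight years of observations from the tables above (2008-2015)
rain = ["Rainy", "Dry", "Rainy", "Rainy", "Rainy", "Dry", "Dry", "Rainy"]
sun  = ["Sunny", "Sunny", "Sunny", "Cloudy", "Cloudy", "Cloudy", "Sunny", "Sunny"]

# Joint distribution: count each (sun, rain) pair and normalize
joint = Counter(zip(sun, rain))
n = len(rain)
p = {pair: count / n for pair, count in joint.items()}
print(p[("Sunny", "Rainy")])   # 0.375, as in the joint table

# Marginalization: sum the joint probabilities over the other variable
p_rainy = sum(v for (s, r), v in p.items() if r == "Rainy")
print(p_rainy)                 # 0.625, as in the margin row
```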
CONDITIONAL PROBABILITY
What do we do when one of the outcomes is already given to us? On this new day in 2016, it is very sunny, but what is the probability that it will rain?
We want $P(Rain \mid Sun)$, which is read as the probability that it will rain, given that there is sun.
This is computed in the same way as we compute normal probability, but we will just look at the cases where Sun = Sunny in Table 1. There are 5 instances of Sun = Sunny in Table 1, and in 3 of those cases Rain = Rainy. So the probability is $P(Rain \mid Sun) = 3/5 = 0.6$.
We can also compute this from Table 3. Total probability of Sun = 0.625 (Row 1 Marginal probability). Probability of Rain and Sun = 0.375
Probability of Rain given Sun = 0.375/0.625 = 0.6
DIFFERENCE BETWEEN CONDITIONAL AND JOINT PROBABILITY
Conditional and Joint probability are often mistaken for each other because of the similarity in their naming convention. So what is the difference between: $ P(AB) $ and $ P(A \mid B) $
The first is Joint Probability and the second is Conditional Probability.
Joint probability computes the probability of 2 events happening together. In the case above - what is the probability that Event A and Event B both happen together? We do not know whether either of these events actually happened, and are computing the probability of both of them happening together.
Conditional probability is similar, but with one difference - We already know that one of the events (e.g. Event B) did happen. So we are looking for the probability of Event A, when we know the Event B already happened or that the probability of Event B is 1. This is a subtle but a significantly different way of looking at things.
BAYES RULE
The joint probability can be written two ways: $P(AB) = P(A \mid B)\,P(B)$ and $P(AB) = P(B \mid A)\,P(A)$. Equating the two and dividing by $P(B)$ gives $$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$ This is the Bayes Rule.
Bayes Rule is interesting, and significant, because we can use it to discover the conditional probability of something, using the conditional probability going the other direction. For example: to find the probability $ P(death \mid smoking)$, we can get this unknown from $ P(smoking \mid death) $, which is much easier to collect data for, as it is easier to find out whether the person who died was a smoker or a non-smoker.
Let's look at some real examples of probability in action. Consider a prosecutor, who wants to know whether to charge someone with a crime, given the forensic evidence of fingerprints, and town population.
The data we have is the following:
One person in a town of 100,000 committed a crime, so the prior probability that a given person is guilty is $P(G) = 0.00001$, where $P(G)$ is the probability of a person being guilty of having committed the crime. The forensics experts tell us that if someone commits a crime, then they leave behind fingerprints 99% of the time: $P(F \mid G) = 0.99$, where $P(F \mid G)$ is the probability of fingerprints given that the crime is committed. There are usually 3 people's fingerprints in any given location, and each particular person's prints turn up with probability 1/100,000, so $P(F) = 3 \times 0.00001 = 0.00003$.
We need to compute $P(G \mid F)$, the probability of guilt given the fingerprint evidence. Using Bayes Rule we know that: $$P(G \mid F) = \frac{P(F \mid G)\,P(G)}{P(F)}$$ Plugging in the values that we already know: $$P(G \mid F) = \frac{0.99 \times 0.00001}{0.00003} = 0.33$$
This is a good enough probability to get in touch with the suspect and hear his side of the story. However, when the prosecutor talks to the detective, the detective points out that the suspect actually lives at the crime scene. This makes it highly likely to find the suspect's fingerprints in that location, and the new probability of finding fingerprints becomes $P(F) = 0.99$.
Plugging those values into the Bayes Rule formula again, we get: $$P(G \mid F) = \frac{0.99 \times 0.00001}{0.99} = 0.00001$$
So it completely changes the probability of the suspect being guilty.
This example is interesting because we computed the probability $P(G \mid F)$ using the probability $P(F \mid G)$. This is because we have more data from previous solved crimes about how many people actually leave fingerprints behind, and the correlation of that with them being guilty.
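The prosecutor's two posteriors can be checked with a one-line Bayes function (a sketch; the function name is mine):

```python
def posterior(p_e_given_h, p_h, p_e):
    """Bayes Rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# The prosecutor's numbers from the text:
print(posterior(0.99, 0.00001, 0.00003))  # 0.33: worth questioning the suspect
print(posterior(0.99, 0.00001, 0.99))     # 0.00001: suspect lives at the scene
```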
Another motivation for using conditional probability is that conditional probability in one direction is often less stable than the conditional probability in the other direction. For example, the probability of disease given a symptom, $P(D \mid S)$, is less stable than the probability of symptom given disease, $P(S \mid D)$.
So, consider a situation where you think that you might have a horrible disease, Severenitis. You know that Severenitis is very rare and the probability that someone actually has it is 0.0001. There is a test for it that is reasonably accurate: 99%. You go get the test, and it comes back positive. You think, "oh no! I am 99% likely to have the disease". Is this correct? Let's do the math.
Let $P(H \leftarrow w)$ be the probability of Health being well, and $P(H \leftarrow s)$ be the probability of Health being sick. Let $P(T \leftarrow p)$ be the probability of the Test being positive and $P(T \leftarrow n)$ be the probability of the Test being negative.
We know that the probability you have the disease is low: $P(H \leftarrow s) = 0.0001$. We also know that the test is 99% accurate. What does this mean? It means that if you are sick, then the test will detect it 99% of the time:
$P(T \leftarrow n \mid H \leftarrow w) = 0.99$
$P(T \leftarrow n \mid H \leftarrow s) = 0.01$
$P(T \leftarrow p \mid H \leftarrow w) = 0.01$
$P(T \leftarrow p \mid H \leftarrow s) = 0.99$
We need to find out the probability that you are sick given that the test is positive, or $P(H \leftarrow s \mid T \leftarrow p)$.
Using Bayes Rule: $$P(H \leftarrow s \mid T \leftarrow p) = \frac{P(T \leftarrow p \mid H \leftarrow s)\,P(H \leftarrow s)}{P(T \leftarrow p)}$$ We know the numerator, but not the denominator. However, it is easy enough to compute the denominator using some clever math! A positive test comes either from a sick person or from a well person, so $$P(T \leftarrow p) = P(T \leftarrow p \mid H \leftarrow s)\,P(H \leftarrow s) + P(T \leftarrow p \mid H \leftarrow w)\,P(H \leftarrow w)$$ $$= 0.99 \times 0.0001 + 0.01 \times 0.9999 = 0.010098$$ Therefore: $$P(H \leftarrow s \mid T \leftarrow p) = \frac{0.99 \times 0.0001}{0.010098} \approx 0.0098$$ So despite the positive test, you are only about 1% likely to have the disease.
This is the reason why doctors are hesitant to order expensive tests if it is unlikely that you have the disease. Even though the test is accurate, rare diseases are so rare that the very rarity dominates the accuracy of the test.
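The whole calculation fits in a few lines of Python:

```python
# Severenitis example: prior, test accuracy, then Bayes via total probability
p_sick = 0.0001
p_pos_given_sick = 0.99
p_pos_given_well = 0.01

# Law of total probability: a positive test from the sick or from the well
p_pos = p_pos_given_sick * p_sick + p_pos_given_well * (1 - p_sick)

# Bayes Rule
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(round(p_sick_given_pos, 4))  # ~0.0098: under 1%, not 99%
```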
NAIVE BAYES
When someone applies
Naive Bayes to a problem, they are assuming conditional independence of all the events. This means: $$P(BCD\ldots \mid A) = P(B \mid A)\,P(C \mid A)\,P(D \mid A)\ldots$$ When this is plugged into the Bayes Rule: $$P(A \mid BCD\ldots) = \frac{P(B \mid A)\,P(C \mid A)\,P(D \mid A)\ldots\,P(A)}{P(BCD\ldots)}$$ Let $P(BCD\ldots) = \alpha$, which is the normalization constant. Then, $$P(A \mid BCD\ldots) = \frac{1}{\alpha}\,P(A)\,P(B \mid A)\,P(C \mid A)\,P(D \mid A)\ldots$$
What we have done here is assumed that the events A, B, C etc. are not dependent on each other, thereby reducing a very high dimensional table into several low dimensional tables. If we have 100 features, and each feature can take 2 values, then we would have a table of size $2^{100}$. However, assuming independence of events, we reduce this to one hundred 4-element tables.
The Naive Bayes assumption is rarely ever true, but it often works because we are not interested in the exact probability, only in the fact that the correct class has the highest probability.
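A minimal sketch of the idea, with invented toy weather data; the add-one smoothing is my own assumption, added so unseen feature values don't zero out a class:

```python
from collections import Counter
import math

# Toy data: (class, (feature1, feature2)) pairs, invented for illustration
data = [("rainy", ("cloudy", "cold")), ("rainy", ("cloudy", "warm")),
        ("dry",   ("sunny",  "warm")), ("dry",   ("sunny",  "cold")),
        ("rainy", ("cloudy", "cold"))]

def predict(features):
    classes = Counter(c for c, _ in data)
    scores = {}
    for c, n in classes.items():
        # log P(c) + sum_i log P(feature_i | c), with add-one smoothing
        score = math.log(n / len(data))
        for i, f in enumerate(features):
            match = sum(1 for cc, ff in data if cc == c and ff[i] == f)
            score += math.log((match + 1) / (n + 2))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict(("cloudy", "cold")))  # rainy
```

Each class keeps only its own small per-feature tables, which is exactly the dimensionality reduction described above.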
If you use cylindrical coordinates $(r,\theta,z)$: since the $(r,z)$ relation is known, only the $(r,\theta)$ relation needs to be found.
The Clairaut's Law is especially suited to finding geodesics on surfaces of revolution.
The procedure can be used to find geodesics on
any surface of revolution when
meridian is given.
Choose any one of the two convex sheets of the hyperboloid given.
$ z^2 - r^2 = 1 $
Differentiating with respect to $r$ gives $z\,dz/dr = r$, i.e. $ z = r \tan(\phi) $ with $\tan\phi = dr/dz$. (1*)
Choose Clairaut's constant $ a = r \sin(\psi). $ (2*)
$r=a$ is the minimum radius, where the geodesic runs tangent to a parallel circle ($\psi = 90°$).
From differential geometry, $ dr/ \sin(\phi) = r d\theta * \cot(\psi).$ (3*)
Equations (1*) to (3*) are adequate to find $ r=f(\theta) $, after eliminating $z,\phi,\psi$ and integrating.
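Carrying out that elimination yields the standard Clairaut quadrature $d\theta/dr = a\sqrt{1+z'(r)^2}\,/\,(r\sqrt{r^2-a^2})$; here is a hedged numerical sketch of it (my own numerics, not from the answer) for the sheet $z=\sqrt{1+r^2}$ with an assumed Clairaut constant $a=0.5$:

```python
import numpy as np

# Clairaut quadrature d(theta)/dr = a*sqrt(1+z'(r)^2) / (r*sqrt(r^2 - a^2))
# on the sheet z = sqrt(1 + r^2) of z^2 - r^2 = 1; a is the Clairaut constant.
a = 0.5
r = np.linspace(a * (1 + 1e-6), 3.0, 200001)   # start just above the turning radius
zp = r / np.sqrt(1 + r**2)                     # dz/dr along the meridian
integrand = a * np.sqrt(1 + zp**2) / (r * np.sqrt(r**2 - a**2))

# Trapezoidal integration: total turning angle from r = a out to r = 3
theta = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r))
print(theta)
```

The integrable square-root singularity at $r=a$ is the turning point where the geodesic grazes its minimum parallel.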
How to make an animation of following gif in
Mathematica?
And how to make 3D analog?
I tried first few steps
line = Graphics[Line[{{1, 1}, {2, 2}}]]
Manipulate[Show[line, line /. l : Line[pts_] :> Rotate[l, n, Mean[pts]]], {n, 0, Pi}]
I'd like to expand on Quantum_Oli's answer to give an intuitive explanation for what's happening, because there's a neat geometric interpretation. At one point in the animation it looks like there is a circle of colored dots moving about the center; this is a special case of so-called hypocycloids known as Cardano circles. A hypocycloid is a curve generated by a point on a circle that moves along the inside of a larger circle. It is closely related to the epicycloid, for which I have previously written some code. Here's a hypocycloid generated with code modified from that answer:
The parametric equations for a hypocycloid are (as on Wikipedia) $$ x (\theta) = (R - r) \cos \theta + r \cos \left( \frac{R - r}{r} \theta \right) $$ $$ y (\theta) = (R - r) \sin \theta - r \sin \left( \frac{R - r}{r} \theta \right), $$ where $r$ is the radius of the smaller circle and $R$ is the radius of the larger circle. In a Cardano circle all points on the smaller circle move in straight lines, the relationship that characterizes a Cardano circle is $R = 2 r$.
The question is, how does this relate to Quantum_Oli's answer? The equation that he gives for his points is
{x,y} = Sin[ω t + φ] {Cos[φ], Sin[φ]}, we can rewrite this with
TrigReduce:
TrigReduce[Sin[ω t + φ] {Cos[φ], Sin[φ]}]
{1/2 (Sin[t ω] + Sin[2 φ + t ω]), 1/2 (Cos[t ω] - Cos[2 φ + t ω])}
That's neat; the form of this expression is the same as the form of the expression for a hypocycloid on Wikipedia. Identifying parameters between the formulae we find that $$ R - r = 1,\quad \frac{R-r}{r} = 1 \implies r = 1, R = 2 $$ thus proving that it's the formula for a Cardano circle, since the radii satisfy the condition that $R = 2 r$.
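The identity behind that `TrigReduce` step is just the product-to-sum formulas, so it's easy to verify numerically (a Python check, since Mathematica isn't needed for this part):

```python
import numpy as np

# Left side: Sin[t + phi] {Cos[phi], Sin[phi]}
def point(t, phi):
    return np.sin(t + phi) * np.array([np.cos(phi), np.sin(phi)])

# Right side: the TrigReduce / hypocycloid form quoted above
def hypocycloid_form(t, phi):
    return 0.5 * np.array([np.sin(t) + np.sin(2 * phi + t),
                           np.cos(t) - np.cos(2 * phi + t)])

# Agreement at a few arbitrary (t, phi) pairs
for t, phi in [(0.3, 1.1), (2.0, -0.7), (5.5, 3.9)]:
    assert np.allclose(point(t, phi), hypocycloid_form(t, phi))
```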
Obviously, though, the points aren't stationary on the circle the way that they are in my example above. The animation is created by moving the points about, we can see in the expression above that Quantum_Oli solved this by introducing a phase offset $2φ$, and then changing this differently for different points in a certain way that he came up with. I extracted the part that generates the phase offset:
phases[t_] := Table[t + Pi i, {i, 0, 1, 1/(3 \[Pi] - Abs[9.43 - t])}]
Plugging the phase offset into the equations for the hypocycloid and using the code for generating a plot that was used above we then get
This is the code that was used to generate the animation:
fx[θ_, phase_: 0, r_: 1, k_: 2] := r (k - 1) Cos[θ] + r Cos[(k - 1) θ + 2 phase Degree]
fy[θ_, phase_: 0, r_: 1, k_: 2] := r (k - 1) Sin[θ] - r Sin[(k - 1) θ + 2 phase Degree]
center[θ_, r_, k_] := {r (k - 1) Cos[θ], r (k - 1) Sin[θ]}
gridlines = Table[{x, GrayLevel[0.9]}, {x, -6, 6, 0.5}];
epilog[θ_, phases_, r_: 1, k_: 2] := {
  Thick, LightGray, Circle[{0, 0}, k r],
  LightGray, Circle[center[θ, r, k], r],
  MapIndexed[{
     Black, PointSize[0.03], Point[{fx[θ, #], fy[θ, #]}],
     Hue[First[#2]/10], PointSize[0.02], Point[{fx[θ, #], fy[θ, #]}]
     } &, phases]
  }
plot[max_, phases_] := ParametricPlot[
  Evaluate[Table[{fx[θ, phase], fy[θ, phase]}, {phase, phases}]], {θ, 0, 2 Pi},
  PlotStyle -> MapIndexed[Directive[Hue[First[#2]/10], Thickness[0.01]] &, phases],
  Epilog -> epilog[max, phases],
  GridLines -> {gridlines, gridlines}, PlotRange -> {-3, 3}, Axes -> False
  ]
phases[t_] := Table[t + Pi i, {i, 0, 1, 1/(3 π - Abs[9.43 - t])}]/Degree
Manipulate[plot[t, phases[t]], {t, 0, 6 Pi}]
Edit: Added the reversal and some refinements
ω = 1;
posP[t_, φ_] := Sin[ω t + φ] {Cos[φ], Sin[φ]}
posL[φ_] := {-#, #} &@{Cos[φ], Sin[φ]}
Animate[
 Graphics[{PointSize[0.02],
   Table[{Black, Line[posL[π i]], Hue[i], Point[posP[t, π i]]}, {i, 0, 1, 1/(3π - Abs[9.43 - t])}]},
  PlotRange -> {{-1.5, 1.5}, {-1.5, 1.5}}],
 {t, 0, 6π, 0.2}]
So I’m back to the LEGO dataset. In a previous post, the plot of the relative frequency of LEGO colors showed that, although there is a wide range of colors on the whole, just a few make up the majority of brick colors. This situation is similar to that encountered with texts, where common words – articles and prepositions, for example – occur frequently but those words’ meaning doesn’t add much to a (statistical) understanding of the text.
In this post, I use a few techniques associated with text mining to explore the color themes of LEGO sets. In particular, I'll build a topic model of LEGO color themes using latent Dirichlet allocation (LDA). Like k-means clustering, the LDA model requires that the user choose the number of topics \(k\). I try out several scoring methods available in R to evaluate the best number of color themes for the LEGO 'corpus'.
Note on code and R package shout-outs
The code for generating the analysis and plots, along with a little math background on the evaluation methods, is in this repo and this Kaggle notebook, which runs the same analysis on a smaller sample of sets.
One motivation for doing this analysis was to try some methods from the handy Text Mining in R book by Julia Silge and David Robinson and some of my code follows their examples. In particular, in the TF-IDF section and the analysis of the distribution of documents over the topics.
The LDA model is trained using the LDA function from the topicmodels package. Evaluation methods come from the ldatuning, SpeedReader, and clues packages. Unit plots use the waffle package.
Color TF-IDF
The LEGO dataset contains around 11,000 LEGO sets from 1950 to 2017 and the data includes the part numbers and brick colors that make up each set. Following the text mining analogy, I take LEGO sets to be the ‘documents’ that make up the LEGO ‘corpus’, and colors are the ‘words’ that make up each document or set. I ignored part numbers but it would be possible to incorporate them in a similar analysis by considering the color-part number pair as the unit of meaning.
In text mining, stop words are words that occur with similar frequency in all documents. These words are often removed before performing a statistical analysis. From the frequency plot, it's clear a few primary colors along with black, gray, and white make up the majority of brick colors. These might have been treated as stop words and removed. Our corpus has a vocabulary of just 125 unique colors so I chose to leave all of them in for the analysis.
Term frequency inverse document frequency (TF-IDF) is a metric of a word's importance to a specific document. Term frequency is the frequency of a word's appearance in each document, and inverse document frequency down-weights a word by the count of documents it appears in. Weighting TF by IDF means words appearing in all documents are down-weighted compared to rarely occurring words. A high TF-IDF corresponds to a color that is distinct to a LEGO set.
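As a toy illustration of the metric (an invented mini-corpus, not the real LEGO data, and Python rather than the post's R for brevity):

```python
import math
from collections import Counter

# Made-up sets: each LEGO set is a bag of brick colors
sets = {
    "castle":  ["gray"] * 50 + ["black"] * 20 + ["red"] * 5,
    "liberty": ["sea-green"] * 70 + ["gray"] * 2,
    "city":    ["gray"] * 30 + ["red"] * 25 + ["black"] * 10,
}

def tf_idf(color, set_name):
    counts = Counter(sets[set_name])
    tf = counts[color] / sum(counts.values())              # frequency within the set
    df = sum(color in bricks for bricks in sets.values())  # sets containing the color
    idf = math.log(len(sets) / df)                         # down-weight ubiquitous colors
    return tf * idf

print(tf_idf("gray", "castle"))        # 0.0 -- gray appears in every set
print(tf_idf("sea-green", "liberty"))  # high -- a rare color dominating one set
```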
Low TF-IDF colors
First, we'll look at low TF-IDF colors. Many of the color-set pairs with a low TF-IDF score show up as common colors in the relative frequency plot above.
Low TF-IDF sets
The three sets below are among the 7th-10th lowest TF-IDF set-color combinations. These are large sets with a common color that appears less frequently in the set than in the corpus. For example, the darker gray in the 'First LEGO League' set makes up a small proportion of the set but occurs in many sets.
High TF-IDF colors
The plot below shows the 10 set-color pairs with the highest TF-IDF score. These are sets with a high proportion of the set made up of a color that shows up infrequently in LEGO sets overall. The 'Statue of Liberty' set is an extreme example; it's made up almost entirely of a single sea-green color that doesn't occur in other sets.
High TF-IDF sets
Building a topic model
After that somewhat cursory look at the LEGO sets, we'll move on to building a topic model. The LDA model is a generative model of a body of documents (or LEGO sets or genetic markers or images). The output of the LDA algorithm is two distributions: the distribution of terms that make up a topic and the distribution of topics that make up a document. For our model, the term distribution is a distribution over colors that make up a color theme, while the topic distribution is a distribution of color themes that make up a LEGO set.
In this generative model, the LEGO set can be generated by one theme or many themes. Tuning the number of topics in the model changes both the number of themes a set is drawn from and the colors that make up that theme.
Evaluation methods
I used several methods (chosen because they were readily available in R packages) for evaluating the number of topics that make up the topic model. This is not meant to be an exhaustive list of automated topic evaluation methods. For gauging topic coherence, for example, the Python Gensim library has a more complete pipeline which includes options to modify segmentation, word co-occurence estimation, and scoring methods.
Cross validation on perplexity
Perplexity measures the difference between distributions learned on a training and holdout set, and the topicmodels package used to build the LDA model has a function for computing perplexity. This was the only evaluation method that required cross-validation.
Topic grid
I ended up running the cross-validation twice and refined the spacing on the parameter grid to capture both the larger trend and some detail where I thought better parameters \(k\) might be located. There appears to be diminishing returns to model complexity between \(k = 20\) and \(k = 35\).
Measures on the full models
After running cross-validation on the perplexity scores I reduced the number of models for the remaining evaluation methods. The remaining methods used models trained on the full data set.
Ldatuning
The ldatuning package has several other metrics of the quality of the topic models. I skimmed the references in the package documentation but I don't really understand these measures. At least two of the measures agree that fewer topics are better.
Topic coherence
There are several versions of topic coherence which measure the pairwise strength of the relationship of the top terms in a topic model. Given some score, where a larger value indicates a stronger relationship between two words \(w_i, w_j\), a generic coherence score is the sum of the top terms in a topic model:
\[ \sum_{w_i, w_j \in W_t} \text{Score}(w_i, w_j), \] with top terms \(W_t\) for each topic \(t\).
The coherence score used in the SpeedReader topic_coherence function uses the internal (UMass) coherence of the top terms. I compared the scores for the top 3, 5 and 10 terms.
External validation with cluster scoring
We can also treat the LDA models as a clustering of the LEGO sets by assigning each LEGO set to the topic which makes up the highest proportion of that set, that is, the topic with the highest probability given a document \(d\): \[ \text{gamma} \equiv \gamma = p(t|d). \] We then assign a document to a cluster by taking \[ \text{argmax}_t\, p(t|d). \]
For comparison’s sake, I also ran a kmeans clustering using the weighted term(color) distribution for each document as the vector that representing that set.
The k-means and LDA clusterings were evaluated against each set's parent_id label, which indicates the theme of the LEGO set. In total there were around 100 unique parent themes, although this included sets that were 'childless parents'.
Cluster scores
The cluster scores include Rand, adjusted Rand, Folkes-Mallow and Jaccard scores. All try to score a clustering on how well the discovered labels match the assigned parent_id. The Rand index assigns a score based on the number of pairwise agreements of the cluster labels with the original labels. The other scores use a similar approach, and two are versions of the Rand score adjusted for random agreement. All scores except the un-adjusted Rand index decrease with more topics.
There’s no reason to assume that theme labels match color topics. Poking around the data indicated that some themes ids are closely associated with a palette (Duplo, for example) while other parent themes are general categories with a mix of color theme.
Topic Distribution
The last method for evaluating the learned topics is to look at how documents are distributed over the topic distributions. This example follows this section from the tidytext mining book.
The plot below visualizes this as how the documents are distributed over the probability bins for each topic. If too many topics have sets or documents only in the low probability bins, then you may have too many topics, since few documents are strongly associated with any topic.
The chart above is also closely related to clustering based on LDA. If the distribution over the high probability bins of a topic is sparse, then few LEGO sets would be assigned to that topic. (You can compare the plot above to the total distribution of LEGO sets over these topics below. Topic 40 had the fewest sets assigned to it.)
Evaluation results
None of the preceding evaluation methods seem particularly conclusive. The pattern of diminishing returns on the perplexity scores is similar to other model complexity scores and suggests a value for \(k\) in the 25-35 range. This agrees somewhat with a visual evaluation of the set distribution over topic probabilities (the last chart above), where at 40 topics some topics seem to have few documents closely associated with them.
Color distributions over topics
Aside from these scoring methods, we can also plot the color distribution (or relevance) for each topic or color theme directly. Below are charts for the models with 30 and 40 topics.
The above plot is based on the beta \(\beta\) matrix, which gives the posterior distribution of words given a topic, \(p(w|t)\). The plot shows a weighted \(\beta\) (or relevance) like that used in the LDAvis package.
\[ \text{relevance}(w|t) = \lambda \cdot p(w|t) + (1-\lambda)\cdot \frac{p(w|t)}{p(w)}.\]
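A minimal numeric illustration of that relevance formula, with made-up \(p(w|t)\) and \(p(w)\) values for three colors (Python rather than the post's R):

```python
import numpy as np

# Invented beta row for one topic and corpus-wide color frequencies
p_w_given_t = np.array([0.6, 0.3, 0.1])   # p(w|t) for three colors
p_w         = np.array([0.5, 0.1, 0.01])  # marginal p(w) for the same colors

def relevance(lam):
    # lambda * p(w|t) + (1 - lambda) * p(w|t)/p(w), as in the formula above
    return lam * p_w_given_t + (1 - lam) * p_w_given_t / p_w

print(relevance(1.0))  # pure probability: common colors rank highest
print(relevance(0.0))  # pure lift: rare-but-topical colors rank highest
```

Sliding \(\lambda\) toward 0 re-ranks a topic's colors toward the ones distinctive to it, which is what makes the weighted \(\beta\) charts more readable than raw probabilities.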
How many themes?
Although there may be more topics that have few sets associated with them as we increase the number of topics, the coherence of a few topics appears to improve.
For example, the two plots below are selections from sets with the highest gamma, \(p(t|d)\), for each topic. When we go from 30 to 40 topics, the top sets in the topic are removed but the remaining sets are more visually similar. (Also note that the sets that stayed had the top relevance score or weighted \(\beta\).)
Topic 2 from the 30 topic model Topic 2 from 40 topic model
I’ll plot one more topic from the 40 topic model that looked ‘suspicious’ but a sampling of the top sets seem to go well together.
Topic 32 from the 40 topic model
Evaluation summary
Of the automated scoring methods, the perplexity scores and the distribution of topics over term probabilities were the only two that seemed readily interpretable and matched my personal sense of which model best identified coherent color themes. I ran the same scores on a small sample of the data for this Kaggle notebook, and the coherence score consistently pointed at models with fewer topics. It might be interesting to see how evaluation methods vary with parameters of the dataset like vocabulary size, variation in document size, and the number of documents.
LEGO color themes
For the final model, I used the 40 topic model. Although LDA models the LEGO sets as mixtures of topics or themes, for the next two charts I assigned each set to a single topic using the same method that I used for clustering. Topics are likewise mixtures of colors, but I chose a single color to represent each topic by blending the topic’s two most important color terms.
In the last plot, the topics are represented by the color probabilities of that topic: 1 brick represents roughly 1% of the distribution.
The opening sentence of section 2.8 starts "Tensors possess a compelling beauty and simplicity". That fills me with fear.
We are told that the Levi-Civita symbol, which is not a tensor, is defined as$${\widetilde{\epsilon }}_{{\mu }_1{\mu }_2\dots {\mu }_n}=\left\{ \begin{array}{ll} +1 & \mathrm{if}\mathrm{\ }{\mu }_1{\mu }_2\dots {\mu }_n\mathrm{\ is\ an\ even\ permutation\ of}\ 01..(n-1)\ \\ -1 & \mathrm{if\ }{\mu }_1{\mu }_2\dots {\mu }_n\mathrm{\ is\ an\ odd\ permutation\ of}\ 01..\left(n-1\right) \\ 0 & \mathrm{otherwise} \end{array} \right.$$and (Carroll's (2.66)) that given any ##n\times n## matrix ##M^{\mu }_{\ \ \ \mu '\ }##, the determinant ##\left|M\right|## obeys $${\widetilde{\epsilon }}_{{\mu '}_1{\mu '}_2\dots {\mu '}_n}\left|M\right|={\widetilde{\epsilon }}_{{\mu }_1{\mu }_2\dots {\mu }_n}M^{{\mu }_1}_{\ \ \ \ \ {\mu '}_1}M^{{\mu }_2}_{\ \ \ \ \ {\mu '}_2}\dots M^{{\mu }_n}_{\ \ \ \ \ {\mu '}_n}$$We are invited to check this for 2×2 and 3×3 matrices, which we do, and we discover some beauty in comparing the traditional method for calculating the determinant of a matrix using cofactors with the one using the Levi-Civita symbol. Setting ##{\mu '}_1{\mu '}_2\dots {\mu '}_n=01\dots (n-1)## we get the even simpler$$\left|M\right|={\widetilde{\epsilon }}_{{\mu }_1{\mu }_2\dots {\mu }_n}M^{{\mu }_1}_{\ \ \ \ \ 0}M^{{\mu }_2}_1\dots M^{{\mu }_n}_{\ \ \ \ \ (n-1)}$$Other combinations of ##{\mu '}_1{\mu '}_2\dots {\mu '}_n## either give ##0=0##, or the same as ##01\dots (n-1)##, or other cofactor expansions of the determinant. However, if the equation had been simplified in this way, the next equation (2.67) would not work, and (2.67) is the punch line. We also prove that the determinant of the metric under a coordinate transformation is given by $$g\left(x^{\mu '}\right)={\left|\frac{\partial x^{\mu '}}{\partial x^{\mu }}\right|}^{-2}g\left(x^{\mu }\right)$$ See the details at
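The simplified determinant formula is easy to verify numerically. Below is a sketch (function names are mine) that computes ##\left|M\right|## from the Levi-Civita expansion and compares it with a library determinant:

```python
import itertools
import numpy as np

def levi_civita_sign(perm):
    """Sign of a permutation: +1 if even, -1 if odd, by counting inversions."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def det_levi_civita(M):
    """|M| = eps_{mu1...mun} M^{mu1}_0 M^{mu2}_1 ... M^{mun}_(n-1)."""
    n = M.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        term = float(levi_civita_sign(perm))
        for col, row in enumerate(perm):
            term *= M[row, col]
        total += term
    return total

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
```

The sum has ##n!## nonzero terms, one per permutation, which is exactly the cofactor expansion written all at once.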
Commentary 2.8 Tensor Densities.pdf (6 pages)
WHY?
The object-box proposal step in the object detection pipeline is complicated and slow. This paper proposes the Single Shot Detector (SSD), which detects objects with a single neural network.
WHAT?
SSD produces a fixed-size collection of bounding boxes and scores for the presence of object classes in those boxes.
The front of SSD is a standard classification model with the classification layer truncated (the base network). After the base network, convolutional layers are added that decrease the feature-map size progressively. These progressively smaller feature layers represent bounding boxes at multiple granularities. For each feature map, classifiers at multiple scales (k) are applied to 3 x 3 areas to compute scores for c class labels and 4 relative offsets. For an m x n feature map, 3 x 3 filters with k x (c + 4) channels are applied to produce (c + 4)kmn outputs.
This process requires ground-truth bounding boxes. Default boxes whose Jaccard overlap with a ground-truth box is higher than a threshold (0.5) are considered matches. The training objective consists of two parts, the localization loss and the confidence loss, where
x_{ij}^p indicates the matching of the i-th default box to the j-th ground-truth box of category p.
L(x, c, l, g) = \frac{1}{N}(L_{conf}(x,c) + \alpha L_{loc}(x, l, g))
L_{loc}(x, l, g) = \sum^N_{i\in Pos}\sum_{m\in\{cx, cy, w, h\}} x_{ij}^k\, smooth_{L1}(l_i^m - \hat{g}_j^m)
\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx})/d_i^w, \quad \hat{g}_j^{cy} = (g_j^{cy} - d_i^{cy})/d_i^h
\hat{g}_j^{w} = \log(g_j^{w}/d_i^w), \quad \hat{g}_j^{h} = \log(g_j^{h}/d_i^h)
L_{conf}(x, c) = - \sum^N_{i\in Pos} x_{ij}^p \log(\hat{c}_i^p) - \sum_{i\in Neg}\log(\hat{c}_i^0)
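As a sketch (function names are mine, not the authors' code), the smooth L1 term and the offset encoding ĝ can be written as:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x * x, np.abs(x) - 0.5)

def encode_offsets(g, d):
    """Encode a ground-truth box g relative to a default box d.
    Boxes are (cx, cy, w, h), as in the g-hat equations above."""
    gcx, gcy, gw, gh = g
    dcx, dcy, dw, dh = d
    return np.array([(gcx - dcx) / dw,
                     (gcy - dcy) / dh,
                     np.log(gw / dw),
                     np.log(gh / dh)])
```

When a default box coincides with its matched ground-truth box, the encoded offsets are all zero, so the localization loss for that pair vanishes.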
Boxes of different aspect ratios are proposed for each feature map.
s_k = s_{min} + \frac{s_{max} - s_{min}}{m - 1}(k-1), k\in[1,m]
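A sketch of the scale formula (the function name is mine; 0.2 and 0.9 are typical choices for s_min and s_max):

```python
def default_box_scales(m, s_min=0.2, s_max=0.9):
    """s_k = s_min + (s_max - s_min)/(m - 1) * (k - 1), for k = 1..m.
    Scales are linearly spaced from s_min to s_max across the m feature maps."""
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]
```

Earlier (larger) feature maps get small scales for small objects; later ones get large scales.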
With feature maps of different granularities and aspect ratios, SSD can cover a wide range of boxes.
So?
SSD achieved better results on PASCAL VOC2007, VOC2012, and COCO than Fast and Faster R-CNN.
Critic
Integrating the redundant and slow proposal process into the neural network seems like a convenient idea. However, I think there may be a better way to suggest boxes than proposing hundreds of them.
1. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
Journal Article
2. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in proton-proton collisions at √s = 8 TeV at the Large Hadron...
Hadron-Hadron scattering (experiments) | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article
3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the ℓ+ℓ−bb̄, ℓνbb̄, and νν̄bb̄ channels with pp collisions at √s = 13 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32 - 52
Journal Article
Physical Review Letters, ISSN 0031-9007, 10/2014, Volume 113, Issue 15, p. 151601
Journal Article
5. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1 - 34
Journal Article
6. ZZ → ℓ+ℓ−ℓ′+ℓ′− cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector
PHYSICAL REVIEW D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3
Measurements of ZZ production in the l(+)l(-)l'(+)l'(-) channel in proton-proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are...
PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
7. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1 - 34
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
8. Measurement of exclusive γγ → ℓ+ℓ− production in proton–proton collisions at √s = 7 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242 - 261
Journal Article
9. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in...
Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment
Journal Article
10. Characterization of feruloyl esterases produced by the four lactobacillus species: L. amylovorus, L. acidophilus, L. farciminis and L. fermentum, isolated from ensiled corn stover
Frontiers in Microbiology, ISSN 1664-302X, 06/2017, Volume 8
Lactic acid bacteria (LAB) play important roles in silage fermentation, which depends on the production of sufficient organic acids to inhibit the growth of...
Feruloyl esterase | Silage | Hydroxycinnamic acids | Hydroxycinnamic esters | Lactobacillus | ASPERGILLUS-NIGER | ESCHERICHIA-COLI | hydroxycinnamic acids | MICROBIOLOGY | ACID ESTERASE | GENE | ANTIOXIDANT | CINNAMOYL ESTERASES | FIBROLYTIC ENZYMES | DEGRADATION | silage | feruloyl esterase | hydroxycinnamic esters | INOCULANTS
Journal Article
I am currently studying the Massive Thirring Model (MTM) with the Lagrangian
$$ \mathcal{L} = \imath {\bar{\Psi}} (\gamma^\mu {\partial}_\mu - m_0 )\Psi - \frac{1}{2}g: \left( \bar{\Psi} \gamma_\mu \Psi \right)\left( \bar{\Psi} \gamma^\mu \Psi \right): . $$ and Hamiltonian $$ \int \mathrm{d}x \imath \Psi^\dagger \sigma_z \partial_x \Psi + m_0 \Psi^\dagger \Psi + 2g \Psi^\dagger_1 \Psi^\dagger_2 \Psi_2\Psi_1\\ $$ Due to the infinite set of conservation laws, particle production is said to be absent from this theory. However, why isn't it sufficient to show that particle production is absent if the number operator $$N=\int \mathrm{d}x \Psi^\dagger \Psi$$ commutes with the Hamiltonian? Also, by particle production being absent, is that just a statement that all Feynman diagrams with self energy insertions evaluate to 0 but all other Feynman diagrams are possible?
Presentation at the MATRIX conference on Geometric and Categorical Representation Theory
Title: Toward a geometrisation of functions on the integral points of p-adic varieties
Abstract: In this talk we will see how to define a topology on the category of formal schemes over the p-adic integers whose fundamental group coincides with the étale fundamental group of Fp-schemes.
Speaker: Geoff Vooys
Event Date:
Friday, December 21, 2018 - 10:30 to 11:30
Presentation at the MATRIX conference on Geometric and Categorical Representation Theory
Title: The geometry of local Arthur packets
Speaker: Clifton Cunningham
Abstract: This talk explains how an Arthur parameter determines a category of perverse sheaves and how the microlocal perspective on this category reveals an Arthur packet.
Event Date:
Wednesday, December 19, 2018 - 12:00 to 13:00
Event Date:
Monday, December 17, 2018 - 12:00 to Friday, January 18, 2019 - 12:00
Event Date:
Monday, December 17, 2018 - 11:19
Conference on Geometric and Categorical Representation Theory at MATRIX, Creswick Campus, University of Melbourne and Monash University
Program Description: Geometric and categorical representation theory are advancing rapidly, with a growing number of connections to the wider mathematical universe. The goal of this program is to bring international experts in these areas together to facilitate exchange and development of ideas. During the first week, there will be a lecture series by Prof. Luca Migliorini on the arithmetic theory of Higgs bundles.
Organisers: Clifton Cunningham (University of Calgary), Masoud Kamgarpour (University of Queensland), Anthony Licata (Australian National University), Peter McNamara (University of Queensland), Sarah Scherotzke (Bonn University), Oded Yacobi (University of Sydney)
https://www.matrix-inst.org.au/events/geometric-and-categorical-represen...
Event Date:
Monday, December 10, 2018 - 09:00 to Friday, December 21, 2018 - 12:00
Speaker: Clifton Cunningham
Room: MS 337. This is a continuation of the talk from last week on admissible representations of $p$-adic G(2) associated to cubic unipotent Arthur parameters.
We have seen how the subregular unipotent orbit in the L-group for split G(2) determines a unipotent Arthur parameter and thus an unramified infinitesimal parameter $\lambda : W_F \to \,^LG(2)$.
Using the Voganish conjectures (\texttt{https://arxiv.org/abs/1705.01885v4}) we find that there are exactly 8 admissible representations with infinitesimal parameter $\lambda$. Last week Qing Zhang interpreted $\lambda$ as a Langlands parameter for the split torus in $p$-adic G(2) and worked out the corresponding quasi-character $\chi : T(F) \to \mathbb{C}^*$ using the local Langlands correspondence. We expect that all admissible representations in the composition series of $\mathop{Ind}_{B(F)}^{G(2,F)} \chi$ have infinitesimal parameter $\lambda$; we wonder if not all 8 admissible representations arise in this way.
In this talk I will calculate the multiplicity matrix that describes how these 8 admissible representations are related to 8 standard modules with infinitesimal parameter $\lambda$, assuming the Kazhdan-Lusztig conjecture as it appears in Section 10.2.3 of the preprint above.
To make this calculation I will use the Decomposition Theorem to calculate the stalks of all simple $H_\lambda$-equivariant perverse sheaves on the mini-Vogan variety $V_\lambda$, following the strategy explained in Section 10.3.3 of the preprint.
Event Date:
Thursday, November 29, 2018 - 10:30 to 11:30
Speaker: Clifton Cunningham and Qing Zhang,
We consider the Voganish project for the cubic unipotent Arthur parameter for the split exceptional group $G_2$ over a p-adic field, which was first considered by Gan-Gurevich-Jiang. After introducing this parameter $\lambda$, we consider the Vogan variety and its orbits under the action of the natural group $H_\lambda$. We then determine a smooth cover of each orbit, which will help to compute the $H_\lambda$-equivariant local systems on each orbit. We also determine the principal series representation of $G_2(F)$ associated with the unramified Langlands parameter.
Location: MS 337, University of Calgary
Event Date:
Thursday, November 22, 2018 - 10:00 to 11:30
Speaker: Jerrod Smith
In this talk, we discuss cuspidality of representations of p-adic groups in a relative setting. Let G be a connected reductive group over a p-adic field F and let $\theta$ be an involution of G. Let H be the subgroup fixed by $\theta$. One can then define H-relative supercuspidal representations via matrix coefficients. We will discuss the behavior of supercuspidality under induction.
Event Date:
Thursday, October 11, 2018 - 10:00 to 11:30
Speaker: Sarah Dijols,
The Generalized Injectivity Conjecture of Casselman-Shahidi states that the unique irreducible generic subquotient of a (generic) standard module is necessarily a subrepresentation. It is related to L-functions, as studied by Shahidi, hence has a number-theoretic flavor, although our techniques lie in the field of representations of reductive groups over local fields. It was proven for classical groups (SO(2n+1), Sp(2n), SO(2n)) by M. Hanzer in 2010. In this talk, I will first explain our interest in this conjecture and describe its main ingredients. I will then present our proof (under some restrictions), which uses techniques more amenable to proving this conjecture for all quasi-split groups.
Event Date:
Thursday, October 4, 2018 - 10:00 to 11:30
Speaker: Qing Zhang
In this talk, we will discuss two integrals involving the exceptional group G2, both discovered by D. Ginzburg. The first integral gives an integral representation of the adjoint L-function of GL(3). Then I will report my recent work joint with J. Hundley on the holomorphy of adjoint L-function of GL(3). The second integral gives an integral representation of standard L-function of G2 itself. Related to this integral, we can consider a Fourier-Jacobi model for G2. I will discuss recent work joint with B. Liu on the uniqueness of such models over finite fields.
Event Date:
Thursday, September 20, 2018 - 10:00 to 11:30 |
I apologize for my unclear question, I will write it in a more detailed way.
I first define:
g[τ_, ϵ_] := c /. FindRoot[2.` c^4 - 1.6931471805599454` c^4 τ - 0.8611473146305157` τ^2 + 0.25` τ^2 Log[ϵ^(1/6)/c^( 2/3)] + (1.` c^4 + 0.6732867951399863` τ) τ Log[τ] - 0.125` τ^2 Log[τ]^2 - 0.25` τ^2 Log[-0.6931471805599453` + Log[τ]], {c, 1000}]
The function I am trying to solve is as follows:
[image: the definition of $\zeta_2(r)$]
I can plot $\zeta_2 (r)$ in the region $1<r<\tau/2$ for different values of $a, \tau, \epsilon$, where $a>0$, $\tau>2$, $10^{-7}<\epsilon<10^{-2}$. What I need to do is find the smallest value of $r$ satisfying $\zeta_2(r)=0$. I use FindRoot to do it, but it does not work well throughout the parameter region.
The second question is that, supposing we find the smallest value of $r$, which is $h[a,\tau,\epsilon]$, I need to do an integration, defined as:
w[a_, τ_, ϵ_] := Integrate[Evaluate[{Subscript[ζ, 2][r]^2} /. f[a, τ, ϵ]], {r, 1, h[a,τ, ϵ]}]
As a test, I choose $a=1, \tau=5, \epsilon=10^{-6}, h(a,\tau,\epsilon)=1.2$, but $w(1,5,10^{-6})$ does not return a number. How should I define $w$ to get a number?
The last question is: if I have the right definition of $w$, I set the derivative $D[w,{a,1}]=0$ and find $a$ in terms of $\tau, \epsilon$. I am getting stuck doing the derivative of these functions. Thanks very much. By the way, I hope this time that Mr.Wizard won't have to waste time revising my writing. I am really sorry.
How can I evaluate this limit? $$ \lim_{x\to 0} \frac{\sin(x) + \sin(x + \pi/N) + \sin(x + 2\pi/N)+...+\sin(x + 2N\pi/N)}{x} $$
For each fixed $x,$ the numerator is the imaginary part of
$$e^{ix} + e^{i(x+ \pi/N)} + e^{i(x+ 2(\pi/N))}+\cdots + e^{i(x+ 2N(\pi/N))} = e^{ix}(1+e^{i\pi/N} + (e^{ i\pi/N})^2 + \cdots + (e^{ i\pi/N})^{2N}) = e^{ix}\frac{(e^{ i\pi/N})^{2N+1} -1 }{e^{ i\pi/N}-1}= e^{ix},$$ since $(e^{i\pi/N})^{2N+1} = e^{2\pi i}\,e^{i\pi/N} = e^{i\pi/N}$, so the geometric-series fraction equals $1$.
The imaginary part of the last term is of course $\sin x.$ Thus the desired limit is $\lim_{x\to 0}(\sin x)/x =1.$
Multiply both numerator and denominator by $2\sin(\pi/2N)$ and use $2\sin x\sin(\pi/2N)=\cos(x-\pi/2N)-\cos(x+\pi/2N)$; the numerator then telescopes to $\cos(x-\pi/2N)-\cos(x+2\pi+\pi/2N)=\cos(x-\pi/2N)-\cos(x+\pi/2N)=2\sin x\sin(\pi/2N)$, so the whole expression is $\sin x/x\rightarrow 1$ as $x\rightarrow 0$.
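Both answers rest on the identity that the sum of sines equals $\sin x$ exactly, from which the limit is $1$. A quick numerical check (function name mine):

```python
import math

def sine_sum(x, N):
    """sin(x) + sin(x + pi/N) + ... + sin(x + 2N*pi/N): 2N+1 terms in all."""
    return sum(math.sin(x + k * math.pi / N) for k in range(2 * N + 1))

# The geometric-series argument says sine_sum(x, N) == sin(x) for every x and N,
# so sine_sum(x, N) / x -> 1 as x -> 0.
```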
We adopt the system of units in which speed of light is 1.
Components of the stress tensor $T^{\alpha\beta}$ have the following physical meaning: $T^{00}$ is the energy density, $T^{0j}$ is the energy flux across the surface $x^j=$ constant ($j=1,2,3$), $T^{i0}$ is the density of the $i$-th component of momentum, and $T^{ij}$ is the flux of the $i$-th component of momentum across the surface $x^j=$ constant ($i,j=1,2,3$). The normal momentum fluxes ($T^{ij}$ for $i=j$) cause normal stress on a fluid element and the others ($T^{ij}$ for $i\neq j$) cause shear stress on it.
An ideal fluid is one whose viscosity and conductivity are zero. Consider an elemental volume of ideal fluid in its MCRF (momentarily co-moving reference frame). Since conductivity is zero, there is no energy flux into or out of it, which implies $T^{0j}=0$. Since there is no viscosity, it doesn't experience shear stresses, therefore $T^{ij}=0$ when $i\neq j$. Further, the statement that the fluid has no viscosity is a frame-independent statement, so $T^{ij}=0$ when $i\neq j$ in any reference frame, and so the matrix $T^{ij}$ must be diagonal in all reference frames. This is possible only if $T^{ij}=p\delta^{ij}$, in which $\delta^{ij}$ is the Kronecker delta and $p$ is a scalar called the pressure. If we denote the energy density by $\rho$, then the stress tensor $T^{\alpha\beta}$ in the MCRF of the fluid element is:$$\begin{bmatrix}\rho & 0 & 0& 0\\0 & p& 0& 0\\0 & 0& p& 0\\0 & 0& 0& p\end{bmatrix}$$This can be rewritten as:$$\begin{bmatrix}\rho & 0 & 0& 0\\0 & p& 0& 0\\0 & 0& p& 0\\0 & 0& 0& p\end{bmatrix}=\begin{bmatrix}\rho+p & 0 & 0& 0\\0 & 0& 0& 0\\0 & 0& 0& 0\\0 & 0& 0& 0\end{bmatrix}+\begin{bmatrix}-p & 0 & 0& 0\\0 & p& 0& 0\\0 & 0& p& 0\\0 & 0& 0& p\end{bmatrix}\\\Rightarrow\quad T^{\alpha\beta}=(\rho+p)(\mathbf{e}_0\mathbf{e}_0)^{\alpha\beta}+p\eta^{\alpha\beta}$$
in which $\eta^{\alpha\beta}$ is the metric tensor. The unit vector in the time direction (of the MCRF of the fluid element) $\mathbf{e}_0$ is nothing but its 4-velocity $\mathbf{U}$. Therefore the dyadic $\mathbf{e}_0\mathbf{e}_0=\mathbf{U}\mathbf{U}$, whose component is $(\mathbf{U}\mathbf{U})^{\alpha\beta}=U^\alpha U^\beta$. Thus we have:$$T^{\alpha\beta}=(\rho+p)U^\alpha U^\beta+p\eta^{\alpha\beta}$$
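A small numerical sketch of this result (variable names mine), building $T^{\alpha\beta}$ from $\rho$, $p$ and the 4-velocity, and checking that in the MCRF it reduces to $\mathrm{diag}(\rho, p, p, p)$:

```python
import numpy as np

# Minkowski metric eta^{ab} with signature (-,+,+,+), consistent with T^{00} = rho.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def ideal_fluid_T(rho, p, U):
    """T^{ab} = (rho + p) U^a U^b + p eta^{ab} for an ideal fluid."""
    U = np.asarray(U, dtype=float)
    return (rho + p) * np.outer(U, U) + p * eta

# In the MCRF the 4-velocity is U = (1, 0, 0, 0).
T = ideal_fluid_T(rho=2.0, p=0.5, U=[1.0, 0.0, 0.0, 0.0])
```

The $T^{00}$ entry works out to $(\rho+p)\cdot 1 + p\cdot(-1) = \rho$, which is why the metric must carry $\eta^{00}=-1$ here.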
Reference:
General Relativity by B. Schutz.
I'm sorry if my question sounds trivial, but analysis is not my field.
Consider the interval $[a,b]\subset \mathbb{R}$. On $[a,b]$, for every $n\in\mathbb{N}$, $n\ge 3$, I define the polynomials $P_n:=a_{0,n}+a_{1,n}x+\dots +a_{n,n}x^n$ (i.e. $a_{i,n}$ is the coefficient of $x^i$ in $P_n$ and $a_{j,n}=0$ in $P_n$ for $j>n$), the coefficients $a_{j,n}$ are given by the conditions:
$P_n(a)=A,P_n(b)=B,P_n'(a)=C,P_n'(b)=D$, with $A,B,C,D\in\mathbb{R}$, for $n=3$; and for $n>3$ I gradually impose that the higher derivatives of $P_n$ vanish at the two points $a,b$ (for $n=4$ I add the condition $P_4''(a)=0$; for $n=5$ I add $P_5''(a)=0$ and $P_5''(b)=0$; for $n=6$ I add $P_6''(a)=0$, $P_6''(b)=0$ and $P_6'''(a)=0$; and so on).
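For concreteness, here is a numerical sketch of the construction (function names are mine): each condition is linear in the coefficients $a_{j,n}$, so $P_n$ is obtained by solving a linear system.

```python
import numpy as np
from math import factorial

def deriv_row(x, n, k):
    """Coefficients of (d^k/dx^k) x^j evaluated at x, for j = 0..n."""
    row = np.zeros(n + 1)
    for j in range(k, n + 1):
        row[j] = factorial(j) // factorial(j - k) * x ** (j - k)
    return row

def build_Pn(n, a, b, A, B, C, D):
    """Solve for the coefficients (a_{0,n}, ..., a_{n,n}) of P_n."""
    rows = [deriv_row(a, n, 0), deriv_row(b, n, 0),
            deriv_row(a, n, 1), deriv_row(b, n, 1)]
    rhs = [A, B, C, D]
    # Higher derivatives set to zero, alternating between a and b as n grows.
    k, at_a = 2, True
    while len(rows) < n + 1:
        rows.append(deriv_row(a if at_a else b, n, k))
        rhs.append(0.0)
        if at_a:
            at_a = False
        else:
            at_a, k = True, k + 1
    return np.linalg.solve(np.array(rows), np.array(rhs))
```

For instance, with $a=0, b=1, A=0, B=1, C=D=0$ and $n=3$ this recovers the cubic $3x^2-2x^3$.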
My questions are:
1) What type of convergence can I expect on the sequence $P_n$ as $n\rightarrow \infty$? Does it converge uniformly?
2) Suppose $P_n$ converges to $P$ in the best possible way. In addition suppose that $P$ is differentiable, $C\neq D$ and $\frac{B-A}{b-a}\in[\min\{C,D\},\max\{C,D\}]$. Then is it true that $\min\{C,D\}\le P'(x)\le \max\{C,D\}$ $\forall x\in(a,b)$?
Thank you!
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
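Here's a quick sanity check of that multiplication rule with exact rationals (a sketch, names mine):

```python
from fractions import Fraction

def mul(u, v, delta):
    """(a + b*sqrt(delta)) * (c + d*sqrt(delta)) via the stated rule:
    (a*c + b*d*delta) + (b*c + a*d)*sqrt(delta), as pairs (rational, coeff)."""
    a, b = u
    c, d = v
    return (a * c + b * d * delta, b * c + a * d)

delta = Fraction(2)
alpha = (Fraction(1), Fraction(3))
beta = (Fraction(-2), Fraction(5))
gamma = (Fraction(7), Fraction(1, 2))

# Associativity: (alpha x beta) x gamma  vs  alpha x (beta x gamma)
left = mul(mul(alpha, beta, delta), gamma, delta)
right = mul(alpha, mul(beta, gamma, delta), delta)
```

This only spot-checks one triple, of course; the ring-property argument above is what turns it into a proof.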
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
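(For concreteness, here's a tiny dynamic-programming sketch of the 0/1 variant described above; the example numbers are made up:)

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via DP over capacities; O(n * capacity) time."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([3, 4, 5], [4, 5, 6], 7))  # → 9 (take the weight-3 and weight-4 items)
```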
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test
therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
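(Each partial sum above is a perfectly finite object — here's a sketch computing them as exact rationals; the function name and base are my own choices:)

```python
from fractions import Fraction
from math import factorial

def liouville_partial(b, M):
    """Finite partial sum sum_{k=1}^M 1/b^(k!) as an exact rational."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

# Each partial sum is a concrete rational; the sequence is strictly increasing.
s = [liouville_partial(10, M) for M in range(1, 5)]
print(all(s[i] < s[i + 1] for i in range(len(s) - 1)))  # → True
```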
There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
WHY?
Gaussian processes have several advantages. Based on robust statistical assumptions, a GP does not require an expensive training phase and can represent the uncertainty of unobserved areas. However, a GP is computationally expensive. The neural process tries to combine the best of Gaussian processes and neural networks.
WHAT?
Neural processes satisfy the two conditions of a stochastic process: exchangeability and consistency. To represent a GP with a neural network, the stochastic process F can be parameterised by a random vector z.
$$F(x) = g(x, z)$$
$$p(z, y_{1:n}|x_{1:n}) = p(z)\prod^n_{i=1} N(y_i|g(x_i, z), \sigma^2)$$
We can split the dataset into a context set $x_{1:m}, y_{1:m}$ and a target set $x_{m+1:n}, y_{m+1:n}$. Variational inference is used to estimate the log likelihood, and the intractable conditional prior $p(z|x_{1:m}, y_{1:m})$ is approximated with another variational distribution.
$$\log p(y_{m+1:n}|x_{1:n}, y_{1:m}) \geq \mathbb{E}_{q(z|x_{1:n},y_{1:n})}\left[\sum^n_{i=m+1}\log p(y_i|z, x_i) + \log\frac{p(z|x_{1:m}, y_{1:m})}{q(z|x_{1:n},y_{1:n})}\right] \approx \mathbb{E}_{q(z|x_{1:n},y_{1:n})}\left[\sum^n_{i=m+1}\log p(y_i|z, x_i) + \log\frac{q(z|x_{1:m}, y_{1:m})}{q(z|x_{1:n},y_{1:n})}\right]$$
To train this model to represent a distribution over functions, the dataset has to be divided into context points and target points. An encoder parameterized by a neural network takes each pair of context points $(x_i, y_i)$ and produces a representation $r_i$. To obtain an order-invariant global representation $r$, an aggregator takes the mean of the representations.
$$r = a(r_i) = \frac{1}{n}\sum_{i=1}^n r_i, \qquad z\sim N(\mu(r), I\sigma(r))$$
A conditional decoder takes the global latent variable and new target locations to make predictions.
So?
NP showed fine performance on 1-D function regression, 2-D function regression, black-box optimization with Thompson sampling, and contextual bandits.
Critic
Frankly, I think I need to implement this in order to fully understand this. |
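As a first step toward that, here is a minimal, dependency-free sketch of just the mean aggregator described above; a real NP would of course use neural encoders and decoders, and all names here are mine:

```python
import random

def aggregate(reps):
    """Mean aggregator: r = (1/n) * sum_i r_i, computed per dimension."""
    n = len(reps)
    dim = len(reps[0])
    return [sum(r[d] for r in reps) / n for d in range(dim)]

reps = [[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]]
shuffled = reps[:]
random.shuffle(shuffled)
# The mean is order-invariant, so any permutation of the context set
# yields the same global representation r (the exchangeability requirement).
print(aggregate(reps) == aggregate(shuffled))  # → True
```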
It's not that hard to see how a rotation can end up being represented by a matrix of dimension $(2j+1)\times(2j+1)$. The key concept is that this matrix acts on a subspace $V$ of the Hilbert space $\mathcal H$; that is, $V$ contains state vectors (kets). Generally, $V$ is required to be an invariant subspace in the sense that if $v\in V$, then under a rotation $v$ will in general go to some different vector $v'$ but it will nevertheless stay in $V$.
The easiest way to see this is by way of example, so let me show how this works for $j=2$. There are in general many possible realizations of $V$, but the cleanest realization is as the vector space of functions $f:\mathbb R^3\to \mathbb C$ which are homogeneous polynomials of degree 2, and which are 'traceless' in the sense that$$⟨f⟩=\int_{S^2} f(\hat{\mathbf{r}})\,\mathrm d \Omega=0.\tag1$$This vector space is best analysed in a convenient basis, and the cleanest one is$$B=\{x^2+y^2-2z^2, xz, yz, xy, x^2-y^2\}.$$It is fairly easy to see that $V$ is closed under rotations, because each vector component will go into a linear combination of $x,y$ and $z$, and multiplying any two such combinations will again give a homogeneous polynomial. Rotations will also not affect the tracelessness condition (1).
To calculate the effect of a rotation $R\in\mathrm{SO}(3)$, you simply take a given $f\in V$ to the function $G(R)f\in V$ which is given by$$(G(R)f)(\mathbf r)=f(R^{-1}\mathbf r).$$(The reason for the inverse is so that the operators $G(R)$ have the nice property that $G(R_1\circ R_2)=G(R_1)\circ G(R_2)$, so that $G$ itself is a homomorphism between $\mathrm{SO}(3)$ and the group of unitary transformations on $V$, $\mathrm{U}(V)$.)
For any given $R$, $G(R)$ is a geometrical transformation but it is also, at a simpler level, a linear transformation in a finite-dimensional vector space $V$ with basis $B$, so you can simply represent it by its matrix with respect to this basis. Thus, for example, a rotation by 90° about the $+x$ axis would be represented by the matrix$$\begin{pmatrix}-\tfrac12&0&0&0&\tfrac12\\0&0&0&-1&0\\0&0&-1&0&0\\0&1&0&0&0\\\tfrac32&0&0&0&\tfrac12\\\end{pmatrix}.$$(Work it out!)
The others have given more detail on how this works mathematically - the function $G$ being a
representation of the group $\mathrm{SO}(3)$ - but I think that examples of this sort help a lot in visualizing what's going on. |
Difference between revisions of "Notation"
Revision as of 12:39, 29 April 2018 (If the equations fail to render correctly, try refreshing the page. If all else fails, see the png figs.)
Generating a motor output depends on a critical step in which an input vector, $\vec{r}$ [math] \begin{equation} \vec{r} = \begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{n} \end{bmatrix} \end{equation} [/math] is compared to a long list of vectors of the same length, i.e. weight matrix $\mathbf{W}$
\begin{equation} \mathbf{W} =
\begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,n} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{m,1} & w_{m,2} & \cdots & w_{m,n} \end{bmatrix}
\end{equation}
and the stored vector most like $\vec{r}$ is chosen. This stored context is associated with a motor output (or equivalent). Some notation to describe this is given below. $n$ is the dimensionality of the space of sensory+motivational contexts ($\vec{r}\in \mathbb{R}^n$). In the case of the amoeba example, $n = 3$. For the brain, $n$ is very much larger (e.g. $n = 10^{10}$). $m$ is the number of stored contexts. In theory, this can be very large, e.g. of the order $2^n$.
By assumption, all the vectors stored in $\mathbf{W}$ have the same magnitude as each other and the same magnitude as $\vec{r}$:
\begin{equation} \lVert\vec{r}\rVert = \lVert w(i,*)\rVert, \forall i=[1,\ldots,m] \end{equation}
For example, if each neuron contributing to $\vec{r}$ is either firing or not (1 or 0) and each synaptic weight is either ‘on’ or ‘off’ (1 or 0), this is equivalent to assuming that the proportion of neurons firing at any one time is constant ($p$) and equal to the proportion of synapses that are ‘on’ in each stored context, w(i,*).
The stored vector, $w_{k,*}$, that is most similar to $\vec{r}$ can be found by:
[math]k = \underset{i}{\operatorname{argmax}}\, ({\mathbf{W} \vec{r}})_i[/math]
with the function [math]\underset{i}{\operatorname{arg\,max}}\, ({\vec{x}})_i[/math] returning the index of the maximum value in $\vec{x}$, i.e. $k$ is the index to $\mathbf{W}$ that gives the maximum correlation between $\vec{r}$ and $w_{i,*}$, for any $i = [1,\ldots,m]$.
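The matching step above can be sketched in a few lines of Python (a toy illustration with made-up numbers, not part of the original model code):

```python
def recognise_context(W, r):
    """Return k = argmax_i (W r)_i, the row of W most correlated with r."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    scores = [dot(row, r) for row in W]
    return max(range(len(scores)), key=scores.__getitem__)

# Three stored contexts (m = 3) in a 3-dimensional input space (n = 3).
W = [[1, 0, 0],
     [0, 1, 0],
     [1, 1, 1]]
r = [0.1, 0.9, 0.2]
print(recognise_context(W, r))  # row 2: 0.1 + 0.9 + 0.2 = 1.2 is the largest score
```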
For brevity, let
\begin{equation} \vec{c}=w_{k,*} \end{equation}
$\vec{c}$ is the 'recognised sensory+motivational context' and is associated with an output, $\vec{o}$, which in the simple examples here is a motor output (e.g. amoeba example). The output is not necessarily motor, though. For example, someone thinking for 10 minutes before making a move in chess may lead to lots of 'virtual' movement down paths through sensory+motivational space but no actual motor output.
If $\vec{o}$ leads to movement in the world, there is usually a new sensory input, new motivational input and hence a new input vector $\vec{r}$.
It will be useful to describe separate contributions to the input vector $\vec{r}$. It is a list concatenation of a vector of sensory inputs, $\vec{s}$, where $\vec{s} \in \mathbb{R}^{ns}$,
\begin{equation} \vec{s} =
\begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{ns} \\ \end{bmatrix}
\end{equation}
and a vector of motivational inputs, $\vec{t}$, where $\vec{t} \in \mathbb{R}^{nt}$,
\begin{equation} \vec{t} =
\begin{bmatrix} r_{ns+1} \\ r_{ns+2} \\ \vdots \\ r_{ns+nt} \\ \end{bmatrix}
\end{equation}
each of which add independent dimensions to $\vec{r}$, so in this case $n = ns + nt$:
\begin{equation} \vec{r} =
\begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{ns} \\ r_{ns+1} \\ \vdots \\ r_{ns+nt} \\ \end{bmatrix}
\end{equation}
In the case of the amoeba example, $\vec{s}\in \mathbb{R^2}$, $\vec{t} \in \mathbb{R}$, and $\vec{r} \in \mathbb{R^3}$.
Similarly, the sensory inputs can be described in terms of a list concatenation of non-overlapping contributory elements stored in vectors, for example a vector of dorsal visual inputs $\vec{s^{\prime}}$ with $nd = $ number of elements in $\vec{s^{\prime}}$, a vector of ventral visual inputs $\vec{s^{\prime\prime}}$ with $nv = $ number of elements in $\vec{s^{\prime\prime}}$, and a vector of other sensory inputs $\vec{s^{\prime\prime\prime}}$, with $no = $ number of elements in $\vec{s^{\prime\prime\prime}}$, with $ns = nd + nv + no$:
\begin{equation} \vec{s} =
\begin{bmatrix} s^{\prime}_{1} \\ s^{\prime}_{2} \\ \vdots \\ s^{\prime}_{nd} \\ s^{\prime\prime}_{nd+1} \\ s^{\prime\prime}_{nd+2} \\ \vdots \\ s^{\prime\prime}_{nd+nv} \\ s^{\prime\prime\prime}_{nd+nv+1} \\ s^{\prime\prime\prime}_{nd+nv+2} \\ \vdots \\ s^{\prime\prime\prime}_{nd+nv+no}\\ \end{bmatrix}
\end{equation} |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... |
Clojure Numerics, Part 4 - Singular Value Decomposition (SVD)
October 4, 2017
Today's article is a short one. Not because Singular Value Decomposition (SVD) is not important, but because it is so ubiquitous that we'll touch it in other articles where appropriate. The goal of this article is to give an overview and point to Neanderthal's functions that work with it. When you see SVD in the literature you read, you'll know where to look for the implementation; that's the idea.
Before I continue, a few reminders:
Include Neanderthal library in your project to be able to use ready-made high-performance linear algebra functions. Read articles in the introductory Clojure Linear Algebra Refresher series. This is the fourth article in a more advanced series. It is a good idea to read them all in sequence if this is your first contact with this kind of software.
The namespaces we'll use:
(require '[uncomplicate.neanderthal [native :refer [dge dgb dgd]] [core :refer [mm nrm2]] [linalg :refer [svd svd!]]])
Motivation
The main idea of many of the methods I've already mentioned and the ones we've yet to cover in this tutorial is to decompose a matrix into a product of a few matrices with special properties, and then exploit these properties to get what we want. Not all decompositions, aka factorizations, are the same; some give more benefits than others. On the other hand, they may be more difficult to compute, or require yet another set of properties from the original matrix. Recall that for general matrices, we have to do LU decomposition with pivoting, but for the positive definite, pivoting is not needed, while for the triangular, no decomposition is needed.
As diagonal matrices are very convenient, and computationally efficient, it makes sense to have a method to decompose a matrix into some diagonal form. One such decomposition, a very powerful one, is the Singular Value Decomposition (SVD). It is relatively expensive to compute, but once we have it, it can be used to get answers for some difficult cases when other decompositions can not help us.
The Definition of Singular Value Decomposition
If \(A\) is a matrix in \(R^{m \times n}\), then there exist
orthogonal matrices \(U\in{R^{m \times m}}\) and \(V\in{R^{n \times n}}\) and a diagonal matrix \(\Sigma\) such that \(U^TAV=\Sigma=diag(\sigma_1,\dots,\sigma_p) \in R^{m\times{n}},p=min(m,n)\), where \(\sigma_1\geq \sigma_2 \geq \dots \geq \sigma_p \geq 0\).
Here, the columns of \(U\) are the left singular vectors. The columns of \(V\) are the right singular vectors. The elements of \(\Sigma\) are the singular values of \(A\).
Here's a simple example in Clojure code:
(let [a (dge 4 3 [1 2 3 4 -1 -2 -1 -1 4 3 2 1])] (svd a true true))
#uncomplicate.neanderthal.internal.common.SVDecomposition{:sigma #RealDiagonalMatrix[double, type:gd mxn:3x3, offset:0] ▧ ┓ ↘ 7.51 3.17 0.79 ┗ ┛ , :u #RealGEMatrix[double, mxn:4x3, layout:column, offset:0] ▥ ↓ ↓ ↓ ┓ → -0.49 0.66 0.51 → -0.53 0.23 -0.81 → -0.49 -0.23 0.25 → -0.49 -0.68 0.13 ┗ ┛ , :vt #RealGEMatrix[double, mxn:3x3, layout:column, offset:0] ▥ ↓ ↓ ↓ ┓ → -0.66 0.34 -0.67 → -0.73 -0.06 0.69 → 0.19 0.94 0.29 ┗ ┛ , :master true}
Let's check whether it is true that \(U^TAV=\Sigma\) in this example.
(let [a (dge 4 3 [1 2 3 4 -1 -2 -1 -1 4 3 2 1]) usv (svd a true true) u (:u usv) vt (:vt usv) sigma (:sigma usv)] (mm u sigma vt))
#RealGEMatrix[double, mxn:4x3, layout:column, offset:0] ▥ ↓ ↓ ↓ ┓ → 1.00 -1.00 4.00 → 2.00 -2.00 3.00 → 3.00 -1.00 2.00 → 4.00 -1.00 1.00 ┗ ┛
Hurrah, it is equal to \(A\)! But, the original definition didn't say \(A=U\Sigma V^T\), it said \(U^TAV=\Sigma\)? It's just a matter of a bit of arithmetic. These equations are equivalent.
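Spelled out, the arithmetic is just multiplying both sides by the orthogonal factors:

```latex
U^T A V = \Sigma
\;\Longleftrightarrow\;
U \left( U^T A V \right) V^T = U \Sigma V^T
\;\Longleftrightarrow\;
A = U \Sigma V^T,
\qquad \text{since } U U^T = I_m \text{ and } V V^T = I_n .
```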
Properties of the SVD

\(Av_i=\sigma_i u_i\) and \(A^Tu_i=\sigma_i v_i\)

This is obvious from the definition, and it can be seen in the previous example, too.

\({\lVert A\rVert}_2 = \sigma_1\) and \({\lVert A\rVert}_F = \sqrt{\sigma_1^2 + \dots + \sigma_p^2}\)
If you're interested in the proof, look for it in a textbook. Here, we'll just see an example:
(let [a (dge 4 3 [1 2 3 4 -1 -2 -1 -1 4 3 2 1]) ] [(nrm2 (:sigma (svd a))) (nrm2 a)])
8.185352771872449
8.18535277187245

Rank, range, and null space
If \(A\) has \(r\) positive singular values, then \(rank(A) = r\), \(null(A) = span\{v_{r+1},\dots,v_n\}\), and \(ran(A) = span\{u_1,\dots,u_r\}\).
And so on. There are more interesting properties like these. I hope that with the help of this guide, you can easily find out how to use them in Clojure when you learn them from the textbooks.
Subspaces
Singular vectors and values are very useful in computations of subspaces (see Clojure Linear Algebra Refresher).
A convenient property of \(U\) and \(V\) is that they are orthogonal, and orthogonal transformations do not deform space.
If\begin{equation} U= \begin{bmatrix} U_r & | \tilde{U}_{m-r}\\ \end{bmatrix} ,\text{and} V= \begin{bmatrix} V_r & | \tilde{V}_{n-r}\\ \end{bmatrix} \end{equation}
Here are a few useful facts:
\(V_r V_r^T\) = projection onto \(null(A)^{\perp} = ran(A^T)\)

\(\tilde{V}_{n-r} \tilde{V}_{n-r}^T\) = projection onto \(null(A)\)

\(U_r U_r^T\) = projection onto \(ran(A)\)

\(\tilde{U}_{m-r} \tilde{U}_{m-r}^T\) = projection onto \(ran(A)^{\perp} = null(A^T)\)

Sensitivity of Square Linear Systems
There's a very important way in which SVD can help us with Linear Systems.
We start from \(Ax=b\), and then have \(x = A^{-1}b=(U \Sigma V^T)^{-1}b=\sum_{i=1}^n{\dfrac{u_i^T b}{\sigma_i}v_i}\)
Now, if \(\sigma_i\) is small enough, a small change in \(A\) or \(b\) will cause large changes in \(x\), and the system becomes unstable.
One of the properties of the SVD that we skipped in the last section is that the smallest singular value is the 2-norm distance of \(A\) to the set of rank-deficient (singular) matrices. The closer the matrix is to that set, the more sensitive the system it describes becomes to small changes (caused by imprecision in floating-point operations).
It boils down to this: when no other method can reliably show us whether a system is ill-conditioned, SVD usually can. On the other hand, SVD is usually more computationally expensive than other factorizations, so we should consider that too and prefer lighter methods when possible.
A precise measure of linear system sensitivity can be obtained by looking at the condition number. We've already dealt with that - we can get the condition number from triangular factorizations (see previous articles).
A word or two about determinants should be said here. I learned in school to find the determinant when I want to check whether a system is solvable. If \(det(A)=0\), the system is singular. But what if the determinant is close to zero? Is \(A\) near-singular? Unfortunately, we cannot say. The determinant is a poor tool for this, and in computational linear algebra, as I've shown you, we usually have much better tools for that. And next…
Orthogonalization seems to be a quite powerful factorization method. We'll explore it in the next sessions. |
Answer
$\theta = 210^{\circ}$ $\theta = 330^{\circ}$
Work Step by Step
$\csc\theta = -2$, so $\frac{r}{y} = -2$. Since $r$ is not negative, we can let $r = 2$ and $y = -1$. We can find the angle $\alpha$ below the x-axis: $\sin\alpha = \frac{1}{2}$. We know that in a $30^{\circ}-60^{\circ}$ triangle, the side opposite the $30^{\circ}$ angle is 1 while the hypotenuse is 2. Therefore, $\alpha = 30^{\circ}$. Since $\alpha$ is the angle below the x-axis, $\theta = 180^{\circ}+\alpha$ or $\theta = 360^{\circ}-\alpha$, so $\theta = 210^{\circ}$ or $\theta = 330^{\circ}$.
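A quick numerical check of both answers (my own illustration, using $\csc\theta = 1/\sin\theta$):

```python
from math import sin, radians, isclose

# csc(theta) = 1/sin(theta); both candidate angles should give csc = -2.
checks = [isclose(1 / sin(radians(t)), -2) for t in (210, 330)]
print(checks)  # → [True, True]
```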
Several years ago, Florian Raudies, Swapnaa Jayaraman and I published a paper where we simulated the optic flow that infants would experience in different head/body postures. We computed cyclopian (one-eyed) flow on the basis of this schematic:
Here, the key parameters were the instantaneous translation \((v_x{}, v_y{}, v_z{})\) and rotation \((\omega_{x}, \omega_y{}, \omega_z{})\) of the planar retina. Coupled with the optic flow equation,
\(\begin{pmatrix}\dot{x} \\ \dot{y}\end{pmatrix}=\frac{1}{z} \begin{pmatrix}-f & 0 & x\\ 0 & -f & y \end{pmatrix} \begin{pmatrix}{v_x{}}\\ {v_y{}} \\{v_z{}}\end{pmatrix}+ \frac{1}{f} \begin{pmatrix} xy & -(f^2+x^2) & fy\\ f^2+y^2 & -xy & -fx \end{pmatrix} \begin{pmatrix} \omega_{x}\\ \omega_{y}\\ \omega_{z} \end{pmatrix}\)
we were able to simulate the
perceptual effects of postural geometry: Changes in eye height and forward translational speed that would occur when a child changed from crawling to walking altered the pattern of retinal flow \((\dot{x}, \dot{y})\) in interesting ways.
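The flow equation is straightforward to evaluate numerically. Here is a small sketch (the function and parameter names are mine, and I use the standard Longuet-Higgins form of the rotational term); with pure forward translation and no rotation, the flow points radially outward from the focus of expansion at the image centre:

```python
def flow(x, y, f, z, v, w):
    """Instantaneous image motion (dx, dy) at image point (x, y), depth z,
    for translation v = (vx, vy, vz) and rotation w = (wx, wy, wz)."""
    vx, vy, vz = v
    wx, wy, wz = w
    dx = (-f * vx + x * vz) / z + (x * y * wx - (f**2 + x**2) * wy + f * y * wz) / f
    dy = (-f * vy + y * vz) / z + ((f**2 + y**2) * wx - x * y * wy - f * x * wz) / f
    return dx, dy

# Pure forward translation (vz > 0, no rotation): radial expansion.
for pt in [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]:
    print(pt, flow(*pt, f=1.0, z=2.0, v=(0, 0, 1), w=(0, 0, 0)))
```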
This work has lain dormant for a few years, but I now want to pick it back up. In short, there are a handful of perception/action systems that provide the nervous system with deterministic, causal information about the effects of different actions. These must be important for development.
For the next step, I’m looking for a concise, but thorough parameterization of body posture that includes the eyes, head, torso, arms, and legs. Here’s a sketch of what I have in mind for the upper parts body that have the greatest impact on the direction of visual fixation:
Body part | Parameter(s)
Eyes | \(r_x, r_y, l_x, l_y\)
Head | \(\theta_x{}, \theta_y{}, \theta_z{}\)
Torso | $_x{}, _y{}, _z{} $
Coupled with the distance between the eyes, \(i\), the radial distance to the head’s center of rotation, \(h\), and the distance from the head’s center of rotation to the torso’s center of rotation, \(t\), we can compute the effects of eye, head, and torso movement on visual motion at the two retinae. Now, if the
visual signals from eye vs. head vs. torso can be distinguished, then these could couple with other proprioceptive (muscle, tendon, cutaneous) signals to provide a powerful set of sensory signals that are directly caused by eye, head, and torso motion. See this earlier post for a causal graph that elaborates on this point. I’ll discuss why I think there are visual differences in the effects of eye and head motion in a future post.
My next step is to ask my colleagues in kinesiology if there is a canonical parameterization of body position that I can build upon. If you know of one, let me know. |
Add, snap, deflate, inflate, grow, shrink and format kites and darts with it. It is very easy to use and the functions allow you to rapidly create huge numbers of tiles properly arranged. Record to date is 22,523 tiles.
If the toolbox is enlarged, buttons to label tile vertices and produce a report on the slide also become available. The box at the bottom gives information and sometimes error messages.
To get the program just download the macro enabled PowerPoint file here (0.8 Mb) and open it. Macros must be enabled and the first slide contains instructions how to get started. There are another four slides which contain interesting ready-made patterns and some useful info.
The program was implemented in Microsoft PowerPoint VBA and tested on Office 365.
There is more on this site about Penrose tiles here and full documentation on the program here (10 Mb). It contains some big pictures and the proof that the ratio of the sides of kites and darts is the golden ratio ##\phi##, where $$\phi=\frac{1+\sqrt5}{2}$$This number was known to the ancient Greeks and appears in many mathematical formulas. I have not seen a proof of this for Penrose tiles anywhere else. It is always just stated as if it were obvious. |
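The defining property of the golden ratio can be checked numerically (a small sketch of my own, not part of the program above):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2   # the golden ratio
# phi is the positive root of x^2 = x + 1
print(phi**2 - (phi + 1))  # essentially zero

# It is also the limit of ratios of consecutive Fibonacci numbers
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b
print(b / a)
```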
WHY?
Policy gradient methods usually require an integral over all possible actions.
WHAT?
The purpose of reinforcement learning is to learn a policy that maximizes the objective function.
$$J(\pi_{\theta}) = \int_{\mathcal{S}}\rho^{\pi}(s)\int_{\mathcal{A}}\pi_{\theta}(s,a)r(s,a)\,da\,ds = \mathbb{E}_{s\sim\rho^{\pi}, a\sim \pi_{\theta}}[r(s,a)]$$ Policy gradient methods directly train the policy network to maximize this objective.
Stochastic Policy Gradient
$$\nabla_{\theta} J(\pi_{\theta}) = \int_{\mathcal{S}}\rho^{\pi}(s)\int_{\mathcal{A}} \nabla_{\theta}\pi_{\theta}(a|s)Q^{\pi}(s,a)\,da\,ds = \mathbb{E}_{s\sim\rho^{\pi}, a\sim \pi_{\theta}}[\nabla_{\theta} \log \pi_{\theta}(a|s)Q^{\pi}(s,a)]$$ Since this assumes a stochastic policy, it is called the Stochastic Policy Gradient. If a sampled return is used to estimate the action-value function, the algorithm is called REINFORCE.
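A minimal sketch of the REINFORCE estimator on a two-armed bandit (the rewards, learning rate, and step count are my own toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([0.2, 1.0])   # arm 1 pays more
theta = np.zeros(2)              # softmax policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    r = rewards[a]
    # REINFORCE estimator: grad log pi(a) * reward,
    # where grad log softmax = one_hot(a) - p
    grad_log = -p
    grad_log[a] += 1.0
    theta += 0.1 * grad_log * r

print(softmax(theta))   # most probability mass ends up on arm 1
```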
Stochastic Actor-Critic
We can train another network to directly learn the action-value function by TD learning.
$$\nabla_{\theta} J(\pi_{\theta}) = \mathbb{E}_{s\sim\rho^{\pi}, a\sim \pi_{\theta}}[\nabla_{\theta} \log \pi_{\theta}(a|s)Q^{w}(s,a)]$$ $$\epsilon^2(w) = \mathbb{E}_{s\sim\rho^{\pi}, a\sim \pi_{\theta}}[(Q^w(s,a) - Q^{\pi}(s,a))^2]$$
Off-policy Actor-Critic (OffPAC)
On-policy learning is limited in its exploration. Off-policy learning uses different policies for behaving and for evaluation.
$$J_{\beta}(\pi_{\theta}) = \int_{\mathcal{S}}\int_{\mathcal{A}}\rho^{\beta}(s)\pi_{\theta}(a|s)Q^{\pi}(s,a)\,da\,ds$$ $$\nabla_{\theta} J_{\beta}(\pi_{\theta}) = \int_{\mathcal{S}}\int_{\mathcal{A}}\rho^{\beta}(s)\nabla_{\theta}\pi_{\theta}(a|s)Q^{\pi}(s,a)\,da\,ds = \mathbb{E}_{s\sim\rho^{\beta}, a\sim \beta}\left[\frac{\pi_{\theta}(a|s)}{\beta(a|s)}\nabla_{\theta} \log \pi_{\theta}(a|s)Q^{\pi}(s,a)\right]$$ This Off-Policy Actor-Critic (OffPAC) requires importance sampling.
Deterministic Policy Gradient
In a continuous action space, the integral over all actions is intractable. The deterministic policy gradient uses a deterministic policy \(\mu_{\theta}(s)\) instead of \(\pi_{\theta}(a|s)\), and then moves the policy in the direction of the gradient of Q. The deterministic policy gradient is a special form of the stochastic policy gradient.
$$\theta^{k+1} = \theta^k + \alpha \mathbb{E}_{s\sim\rho^{\mu^k}}[\nabla_{\theta}\mu_{\theta}(s)\nabla_{a}Q^{\mu^k}(s,a)|_{a=\mu_{\theta}(s)}]$$ $$J(\mu_{\theta}) = \int_{\mathcal{S}}\rho^{\mu}(s)r(s, \mu_{\theta}(s))\,ds$$ $$\nabla_{\theta} J(\mu_{\theta}) = \mathbb{E}_{s\sim\rho^{\mu}}[\nabla_{\theta} \mu_{\theta}(s) \nabla_{a}Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)}]$$
Off-Policy Deterministic Actor-Critic (OPDAC)
As in the case of stochastic policy gradient, off-policy is required to ensure adequate exploration. We can use Q-learning to train the critic.
$$J_{\beta}(\mu_{\theta}) = \int_{\mathcal{S}}\rho^{\beta}(s)Q^{\mu}(s,\mu_{\theta}(s))\,ds$$ $$\nabla_{\theta} J_{\beta}(\mu_{\theta}) = \mathbb{E}_{s\sim\rho^{\beta}}[\nabla_{\theta} \mu_{\theta}(s)\nabla_{a}Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)}]$$ $$\delta_t = r_t + \gamma Q^w(s_{t+1}, \mu_{\theta}(s_{t+1})) - Q^w(s_t, a_t)$$ $$w_{t+1} = w_t + \alpha_w\delta_t\nabla_w Q^w(s_t, a_t)$$ $$\theta_{t+1} = \theta_t + \alpha_{\theta}\nabla_{\theta}\mu_{\theta}(s_t)\nabla_a Q^w(s_t, a_t)|_{a=\mu_{\theta}(s_t)}$$ The deterministic policy removes the need to integrate over actions, and Q-learning removes the need for importance sampling.
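As a toy sketch of the deterministic actor update (I assume a known critic here instead of a learned \(Q^w\), so this illustrates only the chain-rule update \(\nabla_{\theta}\mu_{\theta}\,\nabla_a Q\); everything else is my own minimal setup):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.0     # actor parameter: mu_theta(s) = theta * s
alpha = 0.05

# Assumed critic for illustration: Q(s, a) = -(a - 2*s)**2,
# so the optimal deterministic policy is mu(s) = 2*s (theta = 2).
def dQ_da(s, a):
    return -2.0 * (a - 2.0 * s)

for _ in range(500):
    s = rng.uniform(-1, 1)     # state from the behavior distribution
    a = theta * s              # deterministic action
    # DPG update: grad_theta mu(s) * grad_a Q(s, a)|_{a = mu(s)}
    theta += alpha * s * dQ_da(s, a)

print(theta)   # converges toward 2.0
```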
Compatible Off-Policy Deterministic Actor-Critic (COPDAC)
Since the function approximator \(Q^w(s,a)\) may not follow the true gradient, the paper suggests two restrictions for a compatible action-value function:
\nabla_a Q^w(s,a)|_{a=\mu_{\theta}(s)}=\nabla_{\theta}\mu_{\theta}(s)^T w
$w$ minimizes the MSE of $$\epsilon(s;\theta,w)=\nabla_a Q^w(s,a)|_{a=\mu_{\theta}(s)} - \nabla_a Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)}$$
The resulting algorithm is called Compatible Off-Policy Deterministic Actor-Critic (COPDAC). We can use a baseline function to reduce the variance of the gradient estimator. If we use gradient Q-learning for the critic, the algorithm is called COPDAC-GQ.
Critic
Great reviews of policy gradient algorithms. |
Continuing what helperid has...
We know that
2x + y = 37
And we know that
x < y
If x = y, then....
2x + x = 37
3x = 37
x = 37/3
And we know that x must be less than the value that makes it equal to y.
So... x must be less than 37/3.
So....there are infinitely many solutions that lie on the line 2x + y = 37 where x < 37/3 .
All the coordinates on the blue line that are in the red region of this graph will be solutions.
For example.... x = 11 and y = 15 , or x = 6 and y = 25 .
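A quick sketch of my own to enumerate the positive integer solutions (the original answer also allows non-integer points on the line):

```python
from fractions import Fraction

# Integer solutions of 2x + y = 37 with x < y
solutions = [(x, 37 - 2 * x) for x in range(1, 19) if x < 37 - 2 * x]
print(solutions)        # x runs from 1 to 12, since x < 37/3
print(Fraction(37, 3))  # the bound on x
```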
The wall and the ground form a right angle...so we can use the Pythagorean theorem to find x .
\(x^2 + (x + 5)^2 = 18^2\)
Multiply out \((x + 5)^2\).
\(x^2 + (x + 5)(x + 5) = 324\)
\(x^2 + x^2 + 10x + 25 = 324\)
Combine \(x^2\) and \(x^2\), and subtract 324 from both sides.
\(2x^2 + 10x - 299 = 0\)
Now we can use the quadratic formula to solve for x .
\(x = {-10 \pm \sqrt{10^2-4(2)(-299)} \over 2(2)} \\~\\ x={-10 \pm \sqrt{100+2392} \over 4} \\~\\ x={-10 \pm \sqrt{2492} \over 4} \\~\\ x={-10 \pm 2\sqrt{623} \over 4} \\~\\ x={-5 + \sqrt{623} \over 2}\,\approx\,9.98 \qquad \text{ or }\qquad x={-5 - \sqrt{623} \over 2}\, \approx\, -14.98\)
Since -14.98 causes a negative length for the side on the ground..... x must be ≈ 9.98
So, the length of the side on the wall = \({-5 + \sqrt{623} \over 2}+5\,\approx\,14.98\) feet
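The quadratic-formula result can be double-checked with a short script (my own sketch):

```python
from math import sqrt

# ladder length 18 ft, wall side = ground side + 5
a, b, c = 2, 10, -299
disc = b * b - 4 * a * c           # 100 + 2392 = 2492
x = (-b + sqrt(disc)) / (2 * a)    # keep only the positive root

print(round(x, 2))                 # 9.98  (side on the ground)
print(round(x + 5, 2))             # 14.98 (side on the wall)
```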
( 1 + (-8) - ( 2 - (-1) ) * ( -7/2 ) / ( -1/4 - (-1) )
I'm guessing that you meant to put another parenthesis here:
( 1 + (-8) - ( 2 - (-1) ) ) * ( -7/2 ) / ( -1/4 - (-1) )
= ( 1 - 8 - ( 2 + 1 ) ) * ( -7/2 ) / ( -1/4 + 1 )
= ( 1 - 8 - 3 ) * ( -7/2 ) / ( 3/4 )
= ( -10 ) * ( -7/2 ) * ( 4/3 )
= 140 / 3 |
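The steps above can be verified with exact rational arithmetic (a sketch of my own using Python's `fractions`):

```python
from fractions import Fraction

expr = (Fraction(1) + Fraction(-8) - (Fraction(2) - Fraction(-1))) \
       * Fraction(-7, 2) / (Fraction(-1, 4) - Fraction(-1))
print(expr)   # 140/3
```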
Tutorials. Published on Monday, 27 October 2014, written by sebastien.popoff. [tutorial] Modes of step index multimode fibers
Scattering media were the first type of "complex media" to which wavefront shaping techniques were applied. Applications were quickly developed for multimode fibers as well. One can consider a multimode fiber a complex medium: because of its inherent modal dispersion (different modes travel at different speeds) and the possible coupling between modes, the output field of the fiber does not resemble the input one. Wavefront shaping in multimode fibers has developed quickly because of its applications in biomedical endoscopic imaging and in telecommunications, where exploiting the spatial modes of multimode fibers offers a promising way to increase data rates compared to single mode fibers.
I present here a quick tutorial on the calculation of the modes of a step index multimode fiber and on how to find the so-called linearly polarized modes, which are convenient for manipulation using shaping techniques.
This summary is largely inspired by Chapter 3 of [Fundamentals of Optical Waveguides] by K. Okamoto and is available in PDF. Step index fibers are the simplest ones for the calculation of the modes: the mode shapes are found analytically, and the dispersion relation is analytical too, but one needs numerical calculations to find its solutions.
We consider a fiber of core radius \(a\) and refractive index \(n_1\), surrounded by a cladding of index \(n_0\). We assume the weakly guiding approximation is satisfied, i.e.:
$$\Delta = \frac{n_1-n_0}{n_1} \ll 1$$
In practice, this approximation is quite accurate as we have \(\Delta \leq 0.1\%\) in standard fibers.
Dispersion relation of Linearly Polarized (LP) modes
In the weakly guiding approximation, we can define Linearly Polarized (LP) modes, obtained by combining the TE, TM and hybrid modes supported by the fiber that have the same propagation constant. They satisfy the general dispersion relation:
$$\frac{J_m(u)}{u J_{m-1}(u)} = -\frac{K_m(w)}{w K_{m-1}(w)}$$
where \(J_m\) (resp. \(K_m\)) is the Bessel function of the first kind (resp. the modified Bessel function of the second kind) of order \(m\), and
$$u = a\sqrt{k^2 n_1^2 - \beta^2}$$ $$w = a\sqrt{\beta^2-k^2 n_0^2}$$
\(k=2\pi/\lambda\) and \(\beta\) is the propagation constant.
The modes are indexed by two integers \(m \geq 0\) and \(l \geq 1\). For a given \(m\), the integer \(l\) numbers the solutions of the dispersion relation. For \(m = 0\), the combination \((m,l)\) is twofold degenerate; for \(m \geq 1\), the combination \((m,l)\) is fourfold degenerate.
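As a numerical illustration (my own sketch, using the equivalent \(m=0\) form of the dispersion relation, \(u J_1(u)/J_0(u) = w K_1(w)/K_0(w)\), and an assumed normalized frequency \(V=5\)), the fundamental LP01 solution can be bracketed below the first zero of \(J_0\) and found with SciPy:

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

V = 5.0   # assumed normalized frequency, V^2 = u^2 + w^2

def lp01_equation(u):
    # m = 0 form of the LP dispersion relation:
    # u J1(u)/J0(u) - w K1(w)/K0(w) = 0, with w = sqrt(V^2 - u^2)
    w = np.sqrt(V**2 - u**2)
    return u * jv(1, u) / jv(0, u) - w * kv(1, w) / kv(0, w)

# The LP01 root lies below the first zero of J0 (~2.405)
u01 = brentq(lp01_equation, 1e-6, 2.404)
w01 = np.sqrt(V**2 - u01**2)
print(u01)   # normalized transverse wavenumber of the fundamental mode
```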
Spatial profile of the LP modes
TE, TM and hybrid modes
By solving the wave equation, one finds the different solutions corresponding to Transverse Electric (TE) modes, Transverse Magnetic (TM) modes and hybrid modes (called HE and EH).
TE modes
For \(r \leq a\):
$$E_\theta = -j \omega \mu_0 \frac{a}{u}A J_1\left(\frac{u}{a}r\right)$$ $$H_r = j \beta \frac{a}{u}A J_1\left(\frac{u}{a}r\right)$$ $$H_z = A J_0\left(\frac{u}{a}r\right)$$ $$E_r = E_z = H_\theta = 0$$
For \(r > a\):
$$E_\theta = j\omega\mu_0 \frac{a}{w}\frac{J_0(u)}{K_0(w)}A K_1\left(\frac{w}{a}r\right)$$ $$H_r = -j \beta \frac{a}{w}\frac{J_0(u)}{K_0(w)}A K_1\left(\frac{w}{a}r\right)$$ $$H_z = A\frac{J_0(u)}{K_0(w)} K_0\left(\frac{w}{a}r\right)$$ $$E_r = E_z = H_\theta = 0$$
\(A\) is a normalization constant.
TM modes
For \(r \leq a\):
$$E_r = j \beta \frac{a}{u}A J_1\left(\frac{u}{a}r\right)$$ $$E_z = A J_0\left(\frac{u}{a}r\right)$$ $$H_\theta = j\omega \epsilon_0 n_1^2 \frac{a}{u}A J_1\left(\frac{u}{a}r\right)$$ $$E_\theta = H_r = H_z = 0$$
For \(r > a\):
$$E_r = -j \beta \frac{a}{w}\frac{J_0(u)}{K_0(w)}A K_1\left(\frac{w}{a}r\right)$$ $$E_z = A\frac{J_0(u)}{K_0(w)} K_0\left(\frac{w}{a}r\right)$$ $$H_\theta = -j\omega \epsilon_0 n_0^2 \frac{a}{w}\frac{J_0(u)}{K_0(w)}A K_1\left(\frac{w}{a}r\right)$$ $$E_\theta = H_r = H_z = 0$$
Hybrid modes (HE and EH modes)
For \(r \leq a\):
$$E_r = -j A \beta \frac{a}{u}\left[\frac{1-s}{2}J_{n-1}\left(\frac{u}{a}r\right)-\frac{1+s}{2}J_{n+1}\left(\frac{u}{a}r\right) \right]\cos \left(n\theta + \psi \right)$$
$$E_\theta = j A \beta \frac{a}{u} \left[\frac{1-s}{2}J_{n-1}\left(\frac{u}{a}r\right)+\frac{1+s}{2}J_{n+1}\left(\frac{u}{a}r\right) \right]\sin \left(n\theta + \psi \right)$$ $$E_z = A J_n\left(\frac{u}{a}r\right)\cos\left(n\theta + \psi\right)$$ $$H_r = -j A \omega \epsilon_0 n_1^2\frac{a}{u} \left[\frac{1-s}{2}J_{n-1}\left(\frac{u}{a}r\right)+\frac{1+s}{2}J_{n+1}\left(\frac{u}{a}r\right) \right]\sin \left(n\theta + \psi \right)$$ $$H_\theta = -j A \omega \epsilon_0 n_1^2\frac{a}{u} \left[\frac{1-s}{2}J_{n-1}\left(\frac{u}{a}r\right)-\frac{1+s}{2}J_{n+1}\left(\frac{u}{a}r\right) \right]\cos \left(n\theta + \psi \right)$$ $$H_z = -A \frac{\beta}{\omega \mu_0}s J_n\left(\frac{u}{a}r\right)\sin \left(n\theta + \psi \right)$$
For \(r > a\):
$$E_r = -j A \beta \frac{aJ_n\left(u\right)}{wK_n\left(w\right)}\left[\frac{1-s}{2}K_{n-1}\left(\frac{w}{a}r\right)+\frac{1+s}{2}K_{n+1}\left(\frac{w}{a}r\right) \right]\cos \left(n\theta + \psi \right)$$
$$E_\theta = j A \beta \frac{aJ_n\left(u\right)}{wK_n\left(w\right)}\left[\frac{1-s}{2}K_{n-1}\left(\frac{w}{a}r\right)-\frac{1+s}{2}K_{n+1}\left(\frac{w}{a}r\right) \right]\sin \left(n\theta + \psi \right)$$ $$E_z = A \frac{J_n\left(u\right)}{K_n\left(w\right)}K_n\left(\frac{w}{a}r\right)\cos \left(n\theta + \psi \right)$$ $$H_r = -jA \omega \epsilon_0 n_0^2 \frac{aJ_n\left(u\right)}{wK_n\left(w\right)}\left[\frac{1-s}{2}K_{n-1}\left(\frac{w}{a}r\right)-\frac{1+s}{2}K_{n+1}\left(\frac{w}{a}r\right) \right]\sin \left(n\theta + \psi \right)$$ $$H_\theta = -jA \omega \epsilon_0 n_0^2 \frac{aJ_n\left(u\right)}{wK_n\left(w\right)}\left[\frac{1-s}{2}K_{n-1}\left(\frac{w}{a}r\right)+\frac{1+s}{2}K_{n+1}\left(\frac{w}{a}r\right) \right]\cos \left(n\theta + \psi \right)$$ $$H_z = - A \frac{\beta}{\omega \mu_0}s\frac{J_n\left(u\right)}{K_n\left(w\right)}K_n\left(\frac{w}{a}r\right)\sin \left(n\theta + \psi \right)$$
with \(s = \pm 1\). By convention, we call EH modes for \(s = 1\) and HE modes for \(s = -1\).
Relation between LP modes and TE, TM and hybrid modes
LP\(_\mathbf{0l}\) modes
For \(m = 0\), the LP modes correspond to the hybrid modes HE\(_{1l}\) with \(\psi = 0\) and \(\psi = \pi/2\). We then have two degenerate modes corresponding to the same mode profile but with two orthogonal linear polarizations.
$$LP_{0l} = HE_{1l}(\psi=0)$$ $$LP_{0l} = HE_{1l}(\psi=\pi/2)$$
where \(\beta = \beta_{ml}\) is obtained by finding the solutions of the dispersion relation.
LP\(_\mathbf{1l}\) modes
The linearly polarized modes for \(m = 1\) are obtained by the superpositions of the TE mode and the HE\(_{2l}\) mode for \(\psi = \pi/2\) and by the superpositions of the TM mode and the HE\(_{2l}\) mode for \(\psi = 0\).
$$LP_{1l} = TE + HE_{2l}(\psi=\pi/2)$$ $$LP_{1l} = TE-HE_{2l}(\psi=\pi/2)$$ $$LP_{1l} = TM + HE_{2l}(\psi=0)$$ $$LP_{1l} = TM-HE_{2l}(\psi=0)$$
where \(\beta = \beta_{ml}\) is obtained by finding the solutions of the dispersion relation.
LP\(_\mathbf{ml}\) modes, \(m > 1\)
The linearly polarized modes for \(m > 1\) are obtained by the superposition of the hybrid modes HE\(_{m+1\,l}\) and EH\(_{m-1\,l}\) for \(\psi = 0\) and \(\psi = \pi/2\).
$$LP_{ml} = HE_{m+1 l}(\psi=0)+EH_{m-1 l}(\psi=0)$$ $$LP_{ml} = HE_{m+1 l}(\psi=0)-EH_{m-1 l}(\psi=0)$$ $$LP_{ml} = HE_{m+1 l}(\psi=\pi/2)+EH_{m-1 l}(\psi=\pi/2)$$ $$LP_{ml} = HE_{m+1 l}(\psi=\pi/2)-EH_{m-1 l}(\psi=\pi/2)$$
where \(\beta = \beta_{ml}\) is obtained by finding the solutions of the dispersion relation.
General expression of the LP modes
The previous expressions can be simplified using trigonometric relations to obtain a general expression for the LP modes [1,2]. For any fixed orthonormal coordinate system [X,Y,Z] with Z the propagation axis, the electric field of the modes can be expressed as:
LP\(_\mathbf{ml}\)
For \(r \leq a\):
$$E_{x,y} = -j A \beta \frac{a}{u}J_{l-1}\left(\frac{u}{a}r\right) \cos \left(m\theta + \psi\right) $$
$$E_{y,x} = 0$$ $$E_z = A J_l\left(\frac{u}{a}r\right)\cos\left(m\theta + \psi\right)$$
For \(r\) > \( a\):
$$E_{x,y} = -j A \beta \frac{a}{w}\frac{J_{l-1}(u)}{K_{l-1}(w)}K_{l-1}\left(\frac{w}{a}r\right) \cos \left(m\theta + \psi\right)$$
$$E_{y,x} = 0$$ $$E_z = A \frac{J_l\left(u\right)}{K_l\left(w\right)}K_l\left(\frac{w}{a}r\right)\cos \left(m\theta+ \psi\right)$$
with \(\psi = 0\) for m = 0, and \(\psi = 0\) or \(\psi = \pi/2\) for m > 0 (degenerate modes).
Note that the longitudinal component is small compared to the transverse one under the weakly guiding approximation.
Visual aspect of the modes
Figure 1. Spatial shape of the first 6 modes.
I present in Figure 1 the spatial profiles of the first LP modes. If we define two orthogonal directions x and y, orthogonal to the optical fiber axis z, then for each of these figures there exist two modes: one for which \(E_y = 0\) and \(E_x\) has the spatial profile represented in Figure 1, and one for which \(E_x = 0\) and \(E_y\) has the spatial profile of the calculated LP mode. |
$$I=\int_1^\infty \frac{1}{x(x^2+1)}\ dx$$
I tried to use partial fractions, but am unsure why I can't evaluate it this way. Would appreciate any explanation about which step is incorrect.
$$\frac{1}{x(x^2+1)} = \frac{1}{2} \left(\frac{1}{x}-\frac{1}{x+i} -\frac{1}{x-i}\right)$$
So $$I = \frac{1}{2}\int_1^\infty\frac{1}{x}-\frac{1}{x+i} -\frac{1}{x-i} \ dx \\ = \frac{1}{2}\left[\log |x| - \log(|x+i|) - \log(|x-i|)\right]_1^\infty $$ which evaluates to be $\infty$. |
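For comparison, a small numerical sketch of my own using the standard real partial-fraction decomposition, which shows the integral is actually finite: the individual log terms diverge as $x\to\infty$, but their combination converges to $\frac{1}{2}\ln 2$.

```python
from math import log

# Real partial fractions: 1/(x(x^2+1)) = 1/x - x/(x^2+1),
# with antiderivative log(x) - (1/2)log(x^2+1) = (1/2)log(x^2/(x^2+1)).
def F(x):
    return 0.5 * log(x * x / (x * x + 1))

upper = F(1e9)      # tends to 0 as x -> infinity
I = upper - F(1)    # = (1/2) * log(2)
print(I)            # ≈ 0.34657
```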
WHY?
Segmenting objects in videos is difficult without manual labels.
WHAT?
This paper suggests learning to segment objects in video sequences with a model trained on a self-supervised colorization task.
Given reference frames and a grayscale input frame, the model learns to predict the colors of the grayscale input frame from the reference frames. To do this, the model computes an embedding for each pixel of each frame with a convolutional neural network. It then estimates the color of a pixel in the input frame as the sum of the colors in the reference frames, weighted by embedding similarity. Colors are quantized with k-means so that the estimation can be formulated as classification.
$$y_j = \sum_i A_{ij}c_i,\qquad A_{ij} = \frac{\exp(f_i^T f_j)}{\sum_k \exp(f_k^T f_j)},\qquad \min_{\theta}\sum_j\mathcal{L}(y_j, c_j)$$
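A tiny numpy sketch of the attention-weighted propagation (the shapes, embeddings, and labels here are made up purely for illustration):

```python
import numpy as np

# Hypothetical toy data: 4 reference pixels with embeddings f_i and
# quantized color labels c_i, plus one target pixel embedding f_j.
f_ref = np.eye(4)                      # reference embeddings (one-hot for clarity)
c_ref = np.array([0, 1, 2, 3])         # quantized color indices
f_j = np.array([0.0, 10.0, 0.0, 0.0])  # target embedding, closest to reference 1

# A_ij = softmax over references of f_i . f_j
logits = f_ref @ f_j
A = np.exp(logits - logits.max())
A /= A.sum()

# Predicted color distribution: attention-weighted copy of reference labels
pred = np.zeros(4)
for i, c in enumerate(c_ref):
    pred[c] += A[i]
print(pred.argmax())   # label of the most similar reference pixel
```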
With the learned colorization model, segments or keypoints can be tracked by propagating the labels of the reference frame, which is again formulated as classification.
So?
This model showed good performance on the segmentation and pose tracking. |
Let's suppose I want to write nested equations that contain a fair few nested parenthetical/delimiter characters, like
(),
[],
{},
||, and perhaps others. Suppose too that I think these things look nicer if the outer delimiters are a tad bigger than the inner ones (where possible), appearing to "frame" them.
I can write things like this:
$\pi_G\big(f\big(\big[[v]_{\sim}\big]_{\sim''}\big)\big)$
But this
\big(...\big) stuff is kind of ugly and gets in the way of seeing, from the LaTeX code, what's going on. What might be better is if I could write:
$\pi_G\mybig(){f\mybig(){\mybig[]{[v]_{\sim}}_{\sim''}}}$
Here, the
() after
\mybig mean that the macro argument should be surrounded by
\big(...\big). If it's
[], then the argument should be surrounded by
\big[...\big], and so on.
But wait! This can in fact be done; it is only necessary to define the macro with three arguments, like the following:
\newcommand{\mybig}[3]{\big{#1}#3\big{#2}}
How nice. But unfortunately, this doesn't work so well when I want to use braces, as in
$\mybig{}{(x)}$
Here,
{} is actually a group, and constitutes only one argument. So this fails. Of course, the following both work, but mean that uses of the macro are less uniform in appearance (which is not so pleasing):
$\mybig{}{}{(x)}$ $\mybig\{\}{(x)}$
So what I really need is to test whether argument #1 is empty (indicating that the macro-invocation was immediately followed by an empty group, i.e.
{}). If so, the macro "skips" to the third argument, and typesets
\big\{#3\big\}. Is there a way to do this? |
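One possible approach, sketched under the assumption that the `etoolbox` package is available (its `\ifblank` test checks for an empty or space-only argument; this is a sketch, not a fully vetted solution):

```latex
\usepackage{etoolbox}
\newcommand{\mybig}[3]{%
  \ifblank{#1}%
    {\big\{#3\big\}}%   % empty first argument: fall back to braces
    {\big#1#3\big#2}}   % otherwise use the given delimiters
```

With this definition, `\mybig(){...}` produces `\big(...\big)` and `\mybig{}{}{(x)}` produces `\big\{(x)\big\}`, so all invocations keep the same three-argument shape.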
Suppose we have a finite Egyptian fraction decomposition of a rational: $$\frac{n}{m} = \sum_{i=1}^k \frac{1}{x_i}$$ such that
(i) $x_i>0$,
(ii) $x_i \neq x_j$ for $i \neq j$, and
(iii) $\gcd(m, x_1,x_2,...x_k) = 1$.
Are there any known results concerning $\max_{i,j} |x_i-x_j|$ or maybe $\max_{i,j} \left|\frac{x_i}{x_j}\right|$?
For example,
$\frac{5}{121} = \frac{1}{26} + \frac{1}{350} + \frac{1}{275275}$ or $\frac{5}{121} = \frac{1}{33} + \frac{1}{93} + \frac{1}{3751}.$ Certainly the latter expression is "better" than the previous one for some vague notion of "better".
Note: Condition (iii) means we don't consider $\frac{5}{121}= \frac{1}{33}+\frac{1}{121}+\frac{1}{363}$ since this is really just a good decomposition of $\frac{5}{11}$ that has been divided through by $11$.
Motivation: I'm investigating a technique in my research that would produce Egyptian fraction representations where all the denominators are roughly the same size and I'm curious if this has been looked at before. |
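The two example decompositions, the three conditions, and the "spread" measure can be checked with exact arithmetic (a verification sketch of my own):

```python
from fractions import Fraction
from math import gcd

def check(target, denoms):
    """Verify conditions (i)-(iii) and return the spread max_i,j x_i/x_j."""
    assert sum(Fraction(1, x) for x in denoms) == target
    assert all(x > 0 for x in denoms)                 # (i)
    assert len(set(denoms)) == len(denoms)            # (ii) distinct denominators
    g = target.denominator
    for x in denoms:
        g = gcd(g, x)
    assert g == 1                                     # (iii) gcd(m, x_1..x_k) = 1
    return max(denoms) / min(denoms)

target = Fraction(5, 121)
r_greedy = check(target, [26, 350, 275275])   # huge spread
r_balanced = check(target, [33, 93, 3751])    # much smaller spread
print(r_greedy, r_balanced)
```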
Given two Riemannian manifolds $M,N$, we say that $f:M \to N$ is harmonic if it is a critical point of the Dirichlet energy functional.
More precisely, this means that for every variation $f_t$ of $f$ with variation-field $v:=\frac{\partial f_t}{\partial t}|_{t=0}$ which is compactly supported in the interior of $M$, $\frac{d E(f_t)}{dt}|_{t=0}=0$.
Using the fact that this property of $f$ is equivalent to $f$ being a solution of a certain differential equation, one can deduce the following statement:
Claim: Suppose that for every point $p \in M$, there exists an open neighbourhood of $p$, $U_p\subseteq M$, such that $f|_{U_p}:U_p \to N$ is harmonic. Then $f$ is harmonic as a map $M \to N$. Question: Is there a way to prove this claim without the passage $$\text{being critical} \to \text{satisfying E-L equation} \to \text{being critical}?$$
In other words, suppose you only know the "critical point definition" (and never heard of Euler-Lagrange equations). Is there a way to see directly that this property is local?
A naive idea is that given an arbitrary variation, we can somehow represent it as a finite composition of "small variations", but I am not sure this makes any sense or is really helpful.
For start, I am ready to assume $N=\mathbb{R}^n$ if it makes the problem easier. |
Proceedings of the Japan Academy, Series A, Mathematical Sciences
Proc. Japan Acad. Ser. A Math. Sci., Volume 91, Number 3 (2015), 39-44.
On Noether’s problem for cyclic groups of prime order
Abstract
Let $k$ be a field and $G$ be a finite group acting on the rational function field $k(x_{g}\mid g\in G)$ by $k$-automorphisms $h(x_{g})=x_{hg}$ for any $g,h\in G$. Noether’s problem asks whether the invariant field $k(G)=k(x_{g}\mid g\in G)^{G}$ is rational (i.e. purely transcendental) over $k$. In 1974, Lenstra gave a necessary and sufficient condition to this problem for abelian groups $G$. However, even for the cyclic group $C_{p}$ of prime order $p$, it is unknown whether there exist infinitely many primes $p$ such that $\mathbf{Q}(C_{p})$ is rational over $\mathbf{Q}$. The only 17 known primes $p$ for which $\mathbf{Q}(C_{p})$ is rational over $\mathbf{Q}$ are $p\leq 43$ and $p=61,67,71$. We show that for primes $p< 20000$, $\mathbf{Q}(C_{p})$ is not (stably) rational over $\mathbf{Q}$ except for the affirmative 17 primes and 46 undetermined primes. Under the generalized Riemann hypothesis (GRH), we also confirm that $\mathbf{Q}(C_{p})$ is not (stably) rational over $\mathbf{Q}$ for 28 of the 46 undetermined primes $p$.
Article information
Source: Proc. Japan Acad. Ser. A Math. Sci., Volume 91, Number 3 (2015), 39-44.
First available in Project Euclid: 3 March 2015.
Permanent link to this document: https://projecteuclid.org/euclid.pja/1425396669
Digital Object Identifier: doi:10.3792/pjaa.91.39
Mathematical Reviews number (MathSciNet): MR3317750. Zentralblatt MATH identifier: 1334.12007.
Subjects: Primary 11R18 (cyclotomic extensions); 11R29 (class numbers, class groups, discriminants); 12F12 (inverse Galois theory); 13A50 (actions of groups on commutative rings; invariant theory [see also 14L24]); 14E08 (rationality questions [see also 14M20]); 14F22 (Brauer groups of schemes [see also 12G05, 16K50]).
Citation
Hoshi, Akinari. On Noether’s problem for cyclic groups of prime order. Proc. Japan Acad. Ser. A Math. Sci. 91 (2015), no. 3, 39--44. doi:10.3792/pjaa.91.39. https://projecteuclid.org/euclid.pja/1425396669 |
Set
A set is an idea from mathematics. A set can hold zero or more things. A set cannot hold a particular item more than once: either that item is in the set or it is not. Multisets or bags are like sets in quite a few ways, but can hold a certain item more than once.

Notation
Most mathematicians use uppercase
italic (usually Roman) letters to write about sets. The things that are seen as elements of sets are usually written with lowercase Roman letters.
For example, X={1,2,3,...} is a set of numbers, and the set is called
X. The three dots mean that the numbers of the set go on to infinity after the number 3.

What to do with sets

How to tell others about the set
Usually, when things are put into a bag, all the things that are put in have something in common. If someone else needs to get the same set, there are different options on how to tell them:
All elements could simply be stated (like a shopping list). Some common thing could be stated (e.g. green vegetables).

Element of
Various things can be put into a bag. Later on, a valid question would be if a certain thing is in the bag. Mathematicians call this
element of. Something is an element of a set, if that thing can be found in the respective bag. The symbol used for this is [math]\in[/math]
[math]a \in \mathbf{A}[/math]
means that [math]a[/math] is in the bag [math]\mathbf{A}[/math]
Empty set
Like a bag, a set can also be
empty. The empty set is like an empty bag: it has no things in it.

Comparing sets
Two sets can be compared. This is like looking at two different bags. If they contain the same things, they are equal.
Cardinality of a set
When mathematicians talk about a set, they sometimes want to know how big a set is. They do this by counting how many elements are in the set (how many items are in the bag). The cardinality can be a simple number. The empty set has a cardinality of 0. The set [math]\{ apple, orange \}[/math] has a cardinality of 2.
Two sets have the same cardinality if we can pair up their elements—if we can join two elements, one from each set. The set [math]\{ apple, orange \}[/math] and the set [math]\{ sun, moon \}[/math] have the same cardinality. We can pair
apple with sun, and orange with moon. The order does not matter. It is possible to pair the elements, and none is left out. But the set [math]\{ dog, cat, bird \}[/math] and the set [math]\{ 5, 6 \}[/math] have different cardinality. If we try to pair them up, we always leave out one animal.

Infinite cardinality
At times cardinality is not a number. Sometimes a set has infinite cardinality. The set of integers is a set with infinite cardinality. Some sets with infinite cardinality are bigger (have a bigger cardinality) than others. There are more real numbers than there are natural numbers, for example. That means we cannot pair up the set of integers and the set of real numbers, even if we worked forever. If a set has the same cardinality as the set of integers, it is called a countable set. But if a set has the same cardinality as the real numbers, it is called an uncountable set.
Subsets
If you look at the set {a,b} and the set {a,b,c,d}, you can see that all elements in the first set are also in the second set.
We say: {a,b} is a subset of {a,b,c,d} As a formula it looks like this: [math]\{a,b\} \subseteq \{a,b,c,d\}[/math]
When all elements of A are also elements of B, we call A a subset of B:
[math]A \subseteq B[/math] It is usually read "A is contained in B"
Example:
Every Chevrolet is an American car. So the set of all Chevrolets is contained in the set of all American cars.

Set operations
There are different ways to combine sets.
Intersections
The intersection [math]A \cap B[/math] of two sets A and B is a set that contains all the elements that are both in set A and in set B. When A is the set of all cheap cars, and B is the set of all American cars, then [math]A \cap B[/math] is the set of all cheap American cars.

Unions
The union [math]A \cup B[/math] of two sets A and B is a set that contains all the elements that are in set A or in set B. This "or" is the inclusive disjunction, so the union also contains the elements that are in set A and in set B. By the way: this means that the intersection is a subset of the union: [math](A \cap B) \subseteq (A \cup B)[/math]
When A is the set of all cheap cars, and B is the set of all American cars,
then [math]A \cup B[/math] is the set of all cars, without the expensive cars that are not from America.

Complements
Complement can mean two different things:
The complement of A is the universe U without all the elements of A:
[math]A^{\rm C} = U \setminus A[/math]
The universe U is the set of all things you speak about. When U is the set of all cars, and A is the set of all cheap cars, then [math]A^{\rm C}[/math] is the set of all expensive cars. The relative complement of A in B is the set B without all the elements of A:
[math]B \setminus A[/math]
It is often called the set difference. When A is the set of all cheap cars, and B is the set of all American cars, then [math]B \setminus A[/math] is the set of all expensive American cars.
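The set operations above map directly onto Python's built-in `set` type; here is a small sketch using the cars example (the car names are made up for illustration):

```python
cheap = {"chevy_spark", "ford_fiesta", "fiat_500"}      # A: cheap cars
american = {"chevy_spark", "ford_fiesta", "cadillac"}   # B: American cars

print(cheap & american)   # intersection: cheap American cars
print(cheap | american)   # union
print(american - cheap)   # relative complement B \ A: expensive American cars
print(cheap - american)   # A \ B: cheap cars not made in America

# the intersection is always a subset of the union
print((cheap & american) <= (cheap | american))   # True
```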
If you exchange the sets in the set difference, the result is different:
In the example with the cars, the difference [math]A \setminus B[/math] is the set of all cheap cars that are not made in America.

Special sets
Some sets are very important to mathematics. They are used very often. One of these is the
empty set. Many of these sets are written using blackboard bold typeface, as shown below. Special sets include:
[math]\mathbb{P}[/math], denoting the set of all primes.
[math]\mathbb{N}[/math], denoting the set of all natural numbers. That is to say, [math]\mathbb{N}[/math] = {1, 2, 3, ...}, or sometimes [math]\mathbb{N}[/math] = {0, 1, 2, 3, ...}.
[math]\mathbb{Z}[/math], denoting the set of all integers (whether positive, negative or zero). So [math]\mathbb{Z}[/math] = {..., -2, -1, 0, 1, 2, ...}.
[math]\mathbb{Q}[/math], denoting the set of all rational numbers (that is, the set of all proper and improper fractions). So, [math]\mathbb{Q} = \left\{ \begin{matrix}\frac{a}{b} \end{matrix}: a,b \in \mathbb{Z}, b \neq 0\right\}[/math], meaning all fractions [math]\begin{matrix} \frac{a}{b} \end{matrix}[/math] where a and b are in the set of all integers and b is not equal to 0. For example, [math]\begin{matrix} \frac{1}{4} \end{matrix} \in \mathbb{Q}[/math] and [math]\begin{matrix}\frac{11}{6} \end{matrix} \in \mathbb{Q}[/math]. All integers are in this set, since every integer a can be expressed as the fraction [math]\begin{matrix} \frac{a}{1} \end{matrix}[/math].
[math]\mathbb{R}[/math], denoting the set of all real numbers. This set includes all rational numbers, together with all irrational numbers (that is, numbers which cannot be rewritten as fractions, such as [math]\pi,[/math] [math]e,[/math] and √2).
[math]\mathbb{C}[/math], denoting the set of all complex numbers.
Each of these sets of numbers has an infinite number of elements, and [math]\mathbb{P} \subset \mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}[/math]. The primes are used less frequently than the others outside of number theory and related fields.
Paradoxes about sets
A mathematician called Bertrand Russell found that there are problems with this theory of sets. He stated this in a paradox called
Russell's paradox. An easier-to-understand version, closer to real life, is called the barber paradox:

The barber paradox
There is a small town somewhere. In that town, there is a barber. All the men in the town do not like beards, so they either shave themselves, or they go to the barber shop to be shaved by the barber.
We can therefore make a statement about the barber himself:
The barber shaves all men that do not shave themselves. He only shaves those men (since the others shave themselves and do not need a barber to give them a shave).
This of course raises the question: What does the barber do each morning to look clean-shaven? This is the paradox.
If the barber does not shave himself, he must follow the rule and shave himself (go to the barber shop to have a shave). If the barber does indeed shave himself, he must not shave himself, according to the rule given above.

Further reading
The following are books about sets. They may not be easy to read though: |
The reason that the acceleration is independent of the friction for an object rolling down a plane (assuming no slipping) is because the friction in this system is static friction, and it does no work on a rolling object. Consider the following diagram:
The point of contact with the surface denoted by the red dot only comes into contact with a single point on the surface while rolling. Since friction only acts when it is in contact with the surface, and the distance the point travels while in contact with the surface is $\Delta x = 0$, the friction acting on the object can be considered to be static. The static frictional force is generally written as $F_s = \mu_s N$, where $\mu_s$ is the static frictional coefficient, and $N$ is the normal force. This isn't strictly true, and should be more generally written as $F_s \leq \mu_s N$. So, the magnitude of the force can change depending on the incline to a value necessary to supply the needed torque to get the object rolling.
Now, let's consider the work done by a static frictional force. The work done on a system can be considered to be how much you change the energy of the system. Since static friction does not change the energy, the conversion of gravitational potential energy to rotational and translational kinetic energy remains the same regardless of the coefficient of friction, $\mu_s$. Therefore, the acceleration is not affected by it. So, how do we know it does no work? Consider the following definition for work:
$W = F\Delta x$
where $F$ in this case is the force of friction, and $\Delta x$ is the distance over which the friction acts. Now, you might say that since the object is rolling, the frictional force acts over the distance the object rolls, but this is not the case: as we've established, the point of contact of the rolling object does not actually move relative to the surface while it is in contact with the surface. |
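To make the claim concrete, here is a small Python sketch of the standard energy-conservation result (not derived in the text above, but consistent with it): for a body with moment of inertia $I = cmr^2$ rolling without slipping down an incline at angle $\theta$, the acceleration is $a = g\sin\theta/(1+c)$, with no dependence on $\mu_s$.

```python
import math

def rolling_acceleration(theta_deg: float, c: float, g: float = 9.81) -> float:
    """Acceleration down the incline for a body with moment of inertia
    I = c*m*r**2 rolling without slipping; note mu_s never appears."""
    return g * math.sin(math.radians(theta_deg)) / (1.0 + c)

# Solid sphere (c = 2/5) versus hollow cylinder (c = 1) on a 30 degree incline:
print(rolling_acceleration(30, 2 / 5))  # ~3.50 m/s^2
print(rolling_acceleration(30, 1.0))    # ~2.45 m/s^2
```

The shape enters only through $c$; changing $\mu_s$ (as long as it is large enough to prevent slipping) changes nothing.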
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 39, Number 5 (1968), 1719-1723. Bounds on the Moments of Martingales Abstract
We prove the following THEOREM. Let $\{S_n, n \geqq 1\}$ be a martingale, $S_0 = 0, X_n = S_n - S_{n-1}, \gamma_{\nu n} = E(|X_n|^\nu)$ and $\beta_{\nu n} = (1/n) \sum^n_{j = 1} \gamma_{\nu j}$. Then for all $\nu\geqq 2$ and $n = 1, 2, \cdots$ \begin{equation*}\tag{1.1}E(|S_n|^\nu) \leqq C_\nu n^{\nu/2}\beta_{\nu n},\end{equation*} where \begin{equation*}\tag{1.2}C_\nu = \lbrack 8(\nu - 1) \max (1, 2^{\nu - 3})\rbrack^\nu.\end{equation*} As shown by Chung ([3], pp. 348-349) an inequality of Marcinkiewicz and Zygmund ([5], p. 87) implies that the theorem holds (possibly with a different value of $C_\nu$) whenever the $X$'s are independent. In the same way the above theorem is implied by the generalization of the Marcinkiewicz-Zygmund result given by Burkholder ([2], Theorem 9). However, our proof is elementary. Although our choice of $C_\nu$ is not the best possible, it is explicit. For the case of independent $X$'s, von Bahr ([6], p. 817) has given a bound for $E(|S_n|^\nu)$ which may sometimes involve powers of $\beta_{\nu n}$ higher than 1. Finally Doob ([4], Chapter V, Section 7) has treated the case when the $X$'s form a Markov chain. After proving some lemmata in Section 2, we give the proof of the theorem in Section 3. The case of exchangeable random variables is dealt with in Section 4.
Article information Source Ann. Math. Statist., Volume 39, Number 5 (1968), 1719-1723. Dates First available in Project Euclid: 27 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aoms/1177698154 Digital Object Identifier doi:10.1214/aoms/1177698154 Mathematical Reviews number (MathSciNet) MR230363 Zentralblatt MATH identifier 0196.19202 JSTOR links.jstor.org Citation
Dharmadhikari, S. W.; Fabian, V.; Jogdeo, K. Bounds on the Moments of Martingales. Ann. Math. Statist. 39 (1968), no. 5, 1719--1723. doi:10.1214/aoms/1177698154. https://projecteuclid.org/euclid.aoms/1177698154 |
WHY?
Visual question answering (VQA) is the task of answering natural language questions based on images. To solve questions that require multi-step reasoning, stacked attention networks (SANs) stack several layers of attention over parts of the image, conditioned on the query.
WHAT?
The image model extracts a feature map from the image with a VGGNet structure.
The question model uses the final hidden state of an LSTM to encode the question.
The question can also be encoded with a CNN-based question model.
Using the extracted features of the image (v_I) and the question (v_Q), attention is applied to the image. Several layers of attention can be stacked to pay attention progressively.
\[\begin{aligned}
h_A &= \tanh(W_{I,A}v_I \oplus (W_{Q,A}v_Q + b_A))\\
p_I &= \mathrm{softmax}(W_P h_A + b_P)\\
\tilde{v}_I &= \sum_i p_i v_i\\
u &= \tilde{v}_I + v_Q\\
h_A^k &= \tanh(W^k_{I,A}v_I \oplus (W^k_{Q,A}u^{k-1} + b^k_A))\\
p^k_I &= \mathrm{softmax}(W^k_P h^k_A + b^k_P)\\
\tilde{v}^k_I &= \sum_i p^k_i v_i\\
u^k &= \tilde{v}^k_I + u^{k-1}\\
p_{ans} &= \mathrm{softmax}(W_u u^K + b_u)
\end{aligned}\]
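As a rough illustration of these equations, the sketch below implements one attention step in NumPy. The sizes (196 regions, 512-d features, 256-d attention space) and the random weights are stand-ins for illustration, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k = 196, 512, 256  # image regions, feature dim, attention dim (stand-in sizes)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(v_I, u, W_I, W_Q, b_A, w_P, b_P):
    """One stacked-attention layer: refine query u with region features v_I of shape (m, d)."""
    h_A = np.tanh(v_I @ W_I.T + (W_Q @ u + b_A))  # (m, k); the query term broadcasts over regions
    p = softmax(h_A @ w_P + b_P)                  # (m,) attention weights over regions
    v_tilde = p @ v_I                             # (d,) attended image feature
    return v_tilde + u                            # refined query u^k

v_I = rng.normal(size=(m, d))
u = rng.normal(size=d)  # u^0 = v_Q, the question encoding
for _ in range(2):      # K = 2 stacked layers, fresh random weights per layer
    W_I = rng.normal(scale=0.02, size=(k, d))
    W_Q = rng.normal(scale=0.02, size=(k, d))
    b_A, w_P, b_P = np.zeros(k), rng.normal(scale=0.02, size=k), 0.0
    u = attention_step(v_I, u, W_I, W_Q, b_A, w_P, b_P)
print(u.shape)  # (512,)
```

Each pass re-attends over the same region features with a progressively refined query, which is the "progressive focusing" the paper reports.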
So?
SAN achieved SOTA results on DAQUAR-ALL, DAQUAR-REDUCED, COCO-QA and VQA. Also, the learned attention layers showed progressive focusing on the important parts of the image.
Critique
A foundational paper in the VQA area. |
Yes. It’s the apparent change in the frequency of a wave caused by relative motion between the source of the wave and the observer.
-Sheldon Cooper
In the “Middle-Earth Paradigm” episode, Sheldon Cooper dresses as the “Doppler Effect” for Penny’s Halloween party. The Doppler Effect (or Doppler Shift) describes the change in pitch or frequency that results as a source of sound moves relative to an observer; the relative motion can mean either that the source is moving while the observer is stationary, or vice versa. It is commonly heard when a siren approaches and recedes from an observer. As the siren approaches, the pitch sounds higher, and it lowers as the siren moves away. This effect was first proposed by Austrian physicist Christian Doppler in 1842 to explain the color of binary stars.
In 1845, Christophorus Henricus Diedericus (C. H. D.) Buys-Ballot, a Dutch chemist and meteorologist, conducted the famous experiment to prove this effect. He assembled a group of horn players on an open cart attached to a locomotive. Ballot then instructed the engineer to rush past him as fast as he could while the musicians played and held a constant note. As the train approached and receded, Ballot noted that the pitch changed as he stood and listened on the stationary platform.
Physics of the Doppler Effect
As a stationary sound source produces sound waves, its wave-fronts propagate away from the source at a constant speed, the speed of sound. This can be seen as concentric circles moving away from the center. All observers will hear the same frequency, the frequency of the source of the sound.
When either the source or the observer moves relative to the other, the frequency of the sound that the source emits does not change; rather, the observer hears a change in pitch. We can think of it in the following way. If a pitcher throws balls to someone across a field at a constant rate of one ball a second, the person will catch those balls at the same rate (one ball a second). Now if the pitcher runs towards the catcher, the catcher will catch the balls faster than one ball per second. This happens because as the pitcher moves forward, he closes the distance between himself and the catcher. When the pitcher tosses the next ball, it has to travel a shorter distance and thus travels for a shorter time. The opposite is true if the pitcher were to move away from the catcher.
Now suppose that instead of the pitcher moving towards the catcher, the pitcher stays stationary and the catcher runs forward. As the catcher runs forward, he closes the distance between himself and the pitcher, so the time it takes the ball to travel from the pitcher's hand to the catcher's mitt is decreased. In this case, it again means that the catcher will catch the balls at a faster rate than the pitcher tosses them.
Subsonic Speeds
We can apply the same idea of the pitcher and catcher to a moving source of sound and an observer. As the source moves, it emits sound waves which spread out radially around the source. As it moves forward, the wave-fronts in front of the source bunch up and the observer hears an increase in pitch. Behind the source, the wave-fronts spread apart and the observer standing behind hears a decrease in pitch.
The Doppler Equation
When the speeds of source and the receiver relative to the medium (air) are lower than the velocity of sound in the medium, i.e. moves at sub-sonic speeds, we can define a relationship between the observed frequency, \(f\), and the frequency emitted by the source, \(f_0\).
\[f = f_{0}\left(\frac{v + v_{o}}{v + v_{s}}\right)\] where \(v\) is the speed of sound, \(v_{o}\) is the velocity of the observer (this is positive if the observer is moving towards the source of sound) and \(v_{s}\) is the velocity of the source (this is positive if the source is moving away from the observer).
Source Moving, Observer Stationary
We can now use the above equation to determine how the pitch changes as the source of sound moves towards the observer, i.e. \(v_{o} = 0\). \[f = f_{0}\left(\frac{v}{v - v_{s}}\right)\] Here \(v_{s}\) denotes the speed of approach: because the source moves towards the observer, its signed velocity is \(-v_{s}\), so the denominator is \(v - v_{s} < v\). This makes \(v/(v - v_{s})\) larger than 1, which means the pitch increases.
Source Stationary, Observer Moving
Now if the source of sound is still and the observer moves towards the sound, we get:
\[f = f_{0}\left( \frac{v + v_{o}}{v} \right)\] \(v_{o}\) is positive as the observer moves towards the source. The numerator is larger than the denominator, which means that \((v + v_{o})/v\) is greater than 1. The pitch increases.
Speed of Sound
As the source of sound moves at the speed of sound, the wave fronts in front become bunched up at the same point. The observer in front won’t hear anything until the source arrives. When the source arrives, the pressure front will be very intense and won’t be heard as a change in pitch but as a large “thump”.
The observer behind will hear a lower pitch as the source passes by.
\[f = f_{0}\left( \frac{v - 0}{v + v} \right) = 0.5 f_{0}\]
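A quick numeric check of the Doppler equation using the sign convention above; the 440 Hz source and 343 m/s speed of sound are illustrative values:

```python
def doppler(f0, v_o=0.0, v_s=0.0, v=343.0):
    """Observed frequency with the text's sign convention: v_o > 0 when the
    observer moves towards the source, v_s > 0 when the source moves away."""
    return f0 * (v + v_o) / (v + v_s)

f0 = 440.0
print(round(doppler(f0, v_s=-30.0), 1))  # 482.2 -- source approaching: pitch rises
print(round(doppler(f0, v_o=30.0), 1))   # 478.5 -- observer approaching: pitch rises
print(doppler(f0, v_s=343.0) / f0)       # 0.5 -- source receding at the speed of sound
```

The last line reproduces the halved pitch heard behind a source moving at Mach 1.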
Early jet pilots flying at the speed of sound (Mach 1) reported a noticeable “wall” or “barrier” had to be penetrated before achieving supersonic speeds. This “wall” is due to the intense pressure front, and flying within this pressure front produced a very turbulent and bouncy ride. Chuck Yeager was the first person to break the sound barrier when he flew faster than the speed of sound in the Bell X-1 rocket-powered aircraft on October 14, 1947.
As the science of supersonic flight became better understood, engineers made a number of changes to aircraft design that led to the disappearance of the “sound barrier”. Aircraft wings were swept back and engine performance increased. By the 1950s combat aircraft could routinely break the sound barrier.
Supersonic
As the sound source breaks through and moves past the “sound barrier”, the source now moves faster than the sound waves it creates and leads the advancing wavefront. The source will pass the observer before the observer hears the sound it creates. As the source moves forward, it creates a Mach cone. The intense pressure front on the Mach cone creates a shock wave known as a “sonic boom”.
Twice the Speed of Sound
Something interesting happens when the source moves towards the observer at twice the speed of sound: the tone becomes time reversed. If music were being played, the observer would hear the piece with the correct tone but played backwards. This was first predicted by Lord Rayleigh in 1896.
We can see this by using the Doppler Equation.
\[f = f_{0}\left(\frac{v}{v-2v}\right)\] This reduces to \[f=-f_{0}\] which is negative because the sound is time reversed, or heard backwards.
Applications
Radar Gun
The Doppler Effect is used in radar guns to measure the speed of motorists. A radar beam is fired at a moving target as it approaches or recedes from the radar source. The moving target reflects the Doppler-shifted radar wave back to the detector, where the frequency shift is measured and the motorist’s speed calculated.
We can combine both cases of the Doppler equation to give us the relationship between the reflected frequency (\(f_{r}\)) and the source frequency (\(f\)):
\[f_{r} = f \left(\frac{c+v}{c-v}\right)\] where \(c\) is the speed of light and \(v\) is the speed of the moving vehicle. The difference between the reflected frequency and the source frequency is too small to be measured accurately so the radar gun uses a special trick that is familiar to musicians – interference beats.
To tune a piano, the pitch can be adjusted by changing the tension on the strings. By using a tuning instrument (such as a tuning fork) which can produce a sustained tone over time, a beat frequency can be heard when it is placed next to the vibrating piano wire. The beat frequency is an interference between two sounds with slightly different frequencies and can be heard as a periodic change in volume over time. This frequency tells us how far off the piano strings are compared to the reference (tuning fork).
To detect this change, a radar gun does something similar. The returning wave is “mixed” with the transmitted signal to create a beat note. This beat signal or “heterodyne” is then measured and the speed of the vehicle calculated. The change in frequency, the difference between \(f_{r}\) and \(f\), or \(\Delta f\), is
\[f_{r} - f = f\frac{2v}{c-v}\] As the speed of the vehicle, \(v\), is tiny compared with the speed of light, \(c\), we can approximate \(c - v \approx c\), which gives \[\Delta f \approx f\frac{2v}{c}\] By measuring this frequency shift or beat frequency, the radar gun can calculate and display a vehicle’s speed.
“I am the Doppler Effect”
The Doppler Effect is an important principle in physics and is used in astronomy to measure the speeds at which galaxies and stars are approaching or receding from us. It is also used in plasma physics to estimate the temperature of plasmas. Plasmas are one of the four fundamental states of matter (the others being solid, liquid, and gas) and are made up of very hot, ionized gases. Their composition can be determined by the spectral lines they emit. As each particle jostles about, the light emitted by each particle is Doppler shifted and is seen as a broadened spectral line. This line shape is called a Doppler profile, and the width of the line is proportional to the square root of the temperature of the plasma gas. By measuring the width, scientists can infer the gas’s temperature.
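A quick sanity check of the radar-gun beat frequency \(\Delta f \approx 2fv/c\). The 24 GHz carrier and 30 m/s vehicle speed below are illustrative values, not taken from the text:

```python
c = 299_792_458.0  # speed of light in m/s

def beat_frequency(f, v):
    """Exact and approximate Doppler beat frequency for a vehicle approaching at speed v."""
    return f * 2 * v / (c - v), f * 2 * v / c

# A hypothetical 24 GHz radar gun and a car at 30 m/s (about 108 km/h):
exact, approx = beat_frequency(24e9, 30.0)
print(round(exact, 1), round(approx, 1))  # both about 4803.3 Hz
```

The beat note lands in the audio range (a few kHz), which is exactly what makes the heterodyne easy to measure even though the shift is a minuscule fraction of the carrier.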
We can now understand Sheldon’s fascination with the Doppler Effect as he aptly explains and demonstrates its effects. As an emergency vehicle approaches an observer, its siren will start out with a higher pitch and slide down as it passes and moves away from the observer. This can be heard as the (confusing) sound he demonstrates to Penny’s confused guests.
|
WHY?
The hierarchical recurrent encoder-decoder model (HRED), which aims to capture the hierarchical structure of sequential data, tends to fail because the model is encouraged to capture only local structure and the LSTM often suffers from vanishing gradients.
WHAT?
The Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) improves HRED by forcing the model to learn a latent variable z with variational inference. The generative process outputs z from the previous w, and the inference process infers z through the next w.
\[\begin{aligned}
P_{\theta}(z_n|w_{<n}) &= N(\mu_{prior}(w_{<n}), \Sigma_{prior}(w_{<n}))\\
P_{\theta}(w_n|z_n, w_{<n}) &= \prod_{m=1}^{M_n}P_{\theta}(w_{n,m}|z_n, w_{<n}, w_{n, <m})\\
\log P_{\theta}(w_{\leq N}) &\geq \sum_{n=1}^N - KL[Q_{\psi}(z_n|w_{\leq n})\|P_{\theta}(z_n|w_{<n})] + E_{Q_{\psi}(z_n|w_{\leq n})}[\log P_{\theta}(w_n|z_n, w_{<n})]\\
Q_{\psi}(z_n|w_{\leq N}) &= Q_{\psi}(z_n|w_{\leq n}) = N(\mu_{posterior}(w_{\leq n}), \Sigma_{posterior}(w_{\leq n}))
\end{aligned}\]
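With diagonal-Gaussian prior and posterior as above, the KL term of the bound has a closed form. A small NumPy sketch (illustrative, not the paper's code; the example means and variances are made up):

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL[ N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ], the KL penalty in the bound."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

mu_q, var_q = np.array([0.5, -0.2]), np.array([0.8, 1.2])
mu_p, var_p = np.zeros(2), np.ones(2)
kl = kl_diag_gaussians(mu_q, var_q, mu_p, var_p)
print(kl >= 0.0)  # True: the KL penalty is always non-negative
```

During training this term keeps the posterior close to the prior, which is what forces z to carry useful global information.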
So?
VHRED tends to perform better on long contexts than LSTM and HRED.
Critique
It is good to apply variational inference to sequential data, but I'm not sure it significantly improved performance. This model could also be used to make a good paragraph vector. |
Population and Parameters
Population
A population is any large collection of objects or individuals, such as Americans, students, or trees, about which information is desired.
Parameter
A parameter is any summary number, like an average or percentage, that describes the entire population.
The population mean \(\mu\) (the Greek letter "mu") and the population proportion p are two different population parameters. For example: We might be interested in learning about \(\mu\), the average weight of all middle-aged female Americans. The population consists of all middle-aged female Americans, and the parameter is \(\mu\). Or, we might be interested in learning about p, the proportion of likely American voters approving of the president's job performance. The population comprises all likely American voters, and the parameter is p.
The problem is that 99.999999999999... % of the time, we don't — or can't — know the real value of a population parameter. The best we can do is estimate the parameter! This is where samples and statistics come into play.
Samples and Statistics
Sample
A sample is a representative group drawn from the population.
Statistic
A statistic is any summary number, like an average or percentage, that describes the sample.
The sample mean, \(\bar{x}\), and the sample proportion \(\hat{p}\) are two different sample statistics. For example:
We might use \(\bar{x}\), the average weight of a random sample of 100 middle-aged female Americans, to estimate \(\mu\), the average weight of all middle-aged female Americans. Or, we might use \(\hat{p}\), the proportion in a random sample of 1000 likely American voters who approve of the president's job performance, to estimate p, the proportion of all likely American voters who approve of the president's job performance.
Because samples are manageable in size, we can determine the actual value of any statistic. We use the known value of the sample statistic to learn about the unknown value of the population parameter.
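A small simulation makes this concrete. The population below and its true proportion of 0.10 are made up for illustration; the point is that the computable statistic \(\hat{p}\) lands close to the unknowable-in-practice parameter p:

```python
import random

random.seed(1)
# A made-up population of 42,000 individuals in which the true (in practice
# unknown) proportion of "successes" is p = 0.10:
population = [1] * 4_200 + [0] * 37_800
sample = random.sample(population, 987)  # a manageable random sample
p_hat = sum(sample) / len(sample)
print(p_hat)  # close to the parameter p = 0.10
```

With 987 observations, the standard error of \(\hat{p}\) is under 0.01, so the estimate is reliably near the truth.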
Example S.1.1 What was the prevalence of smoking at Penn State University before the 'no smoking' policy?
The main campus at Penn State University has a population of approximately 42,000 students. A research question is "what proportion of these students smoke regularly?" A survey was administered to a sample of 987 Penn State students. Forty-three percent (43%) of the sampled students reported that they smoked regularly. How confident can we be that 43% is close to the actual proportion of all Penn State students who smoke?
The population is all 42,000 students at Penn State University. The parameter of interest is p, the proportion of students at Penn State University who smoke regularly. The sample is a random selection of 987 students at Penn State University. The statistic is the proportion, \(\hat{p}\), of the sample of 987 students who smoke regularly. The value of the sample proportion is 0.43.
Example S.1.2 Are the grades of college students inflated?
Let's suppose that there exists a population of 7 million college students in the United States today. (The actual number depends on how you define "college student.") And, let's assume that the average GPA of all of these college students is 2.7 (on a 4-point scale). If we take a random sample of 100 college students, how likely is it that the sampled 100 students would have an average GPA as large as 2.9 if the population average was 2.7?
The population is all 7 million college students in the United States today. The parameter of interest is \(\mu\), the average GPA of all college students in the United States today. The sample is a random selection of 100 college students in the United States. The statistic is the mean grade point average, \(\bar{x}\), of the sample of 100 college students. The value of the sample mean is 2.9.
Example S.1.3 Is there a linear relationship between birth weight and length of gestation?
Consider the relationship between the birth weight of a baby and the length of its gestation:
The dashed line summarizes the (unknown) relationship, \(\mu_Y = \beta_0+\beta_1x\), between birth weight and gestation length of all births in the population. The solid line summarizes the estimated relationship, \(\hat{y} = b_0+b_1x\), between birth weight and gestation length in our random sample of 32 births. The goal of linear regression analysis is to use the solid line (the sample) in hopes of learning about the dashed line (the population).
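A sketch of this idea with synthetic data; the population coefficients and noise level below are made up, not taken from the real birth-weight data:

```python
import numpy as np

rng = np.random.default_rng(42)
beta0, beta1 = -2500.0, 150.0        # made-up population intercept/slope (grams, grams per week)
gestation = rng.uniform(34, 42, 32)  # 32 births, as in the example
weight = beta0 + beta1 * gestation + rng.normal(0, 200, 32)

b1, b0 = np.polyfit(gestation, weight, 1)  # the solid (sample) line
print(round(b1, 1), round(b0, 1))          # estimates land near the population values
```

Here the "dashed line" (beta0, beta1) is known because we invented it, so we can watch the fitted "solid line" (b0, b1) recover it from only 32 noisy observations.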
Next... Confidence Intervals and Hypothesis Tests
There are two ways to learn about a population parameter.
1) We can use confidence intervals to estimate parameters.
"We can be 95% confident that the proportion of Penn State students who have a tattoo is between 5.1% and 15.3%."
2) We can use hypothesis tests to test and ultimately draw conclusions about the value of a parameter.
"There is enough statistical evidence to conclude that the mean normal body temperature of adults is lower than 98.6 degrees F."
We review these two methods in the next two sections. |
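The tattoo statement above can be reproduced with the usual normal-approximation interval for a proportion. The sample proportion 0.102 and sample size n = 135 below are made-up values chosen so the numbers match the quoted interval:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a population proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical survey: 10.2% of 135 sampled students report a tattoo.
lo, hi = proportion_ci(0.102, 135)
print(round(lo, 3), round(hi, 3))  # 0.051 0.153 -- i.e. 5.1% to 15.3%
```

The interval width shrinks like \(1/\sqrt{n}\), which is why larger samples give sharper estimates of the parameter.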
As with the review of derivatives, it would be challenging to include a full review of integrals. In this review, we try to include the most common integrals and rules used in STAT 414. There are many helpful websites and texts out there to help you review. We have provided links to Khan Academy for you to take a look at if you have difficulty recalling these methods.
For a function, \(f(x)\), its indefinite integral is:
\[\int f(x)\; dx=F(x)+C, \qquad \text{where } F^\prime(x)=f(x)\]
We provide a short list of common integrals and rules that are used in STAT 414. It is important to have a lot of practice and keep these skills fresh.
Common Integrals and Rules
\(\int_a^a f(x)\,dx=0\)
\(\int_a^b f(x)\,dx=-\int_b^a f(x)\,dx\)
\(\int x^r\,dx=\frac{x^{r+1}}{r+1}+C\)
The Fundamental Theorem of Calculus: Let \(f\) be integrable on \([a,b]\) and let \(F\) be any antiderivative of \(f\) there. Then \(\int_a^b f(x)\,dx=F(b)-F(a)\). (FTC, Khan Academy)
\(\int x^n\,dx=\dfrac{1}{n+1}x^{n+1}+C, \;\;n\neq -1\)
\(\int \dfrac{1}{x}\,dx=\ln |x| +C\)
\(\int e^x\,dx=e^x +C\)
Integration Using Substitution: Let \(g\) have a continuous derivative on \([a,b]\) and let \(f\) be continuous on the range of \(g\). Then
\[\begin{equation}
\int_a^b f\left(g(x)\right)g^\prime(x)dx=\int_{g(a)}^{g(b)}f(u)du \end{equation}\]
where \(u=g(x)\). (u-Substitution, Khan Academy)
Integration by Parts
\[\begin{equation}
\int_a^b u\,dv=\left[uv\right]_a^b-\int_a^b v\,du \end{equation}\] (Integration by Parts, Khan Academy)
Example C.3.1
Integrate the following function from 0 to \(t\):
\[f(x)=\dfrac{2}{1000^2}xe^{-(x/1000)^2}\]
\[\int_0^t \frac{2}{1000^2}xe^{-(x/1000)^2} dx\label{eqn1}\]
Let \(u=\left(\frac{x}{1000}\right)^2\). Then \(du=\frac{2}{1000^2}xdx\). The equation becomes...
\[\begin{align*}
&= \int_0^{\left(\frac{t}{1000}\right)^2} e^{-u}du =-e^{-u}|_{0}^{\left(\frac{t}{1000}\right)^2}\\ &= -e^{-\left(\frac{t}{1000}\right)^2}-(-1)=1-e^{-\left(\frac{t}{1000}\right)^2}. \end{align*}\]
Example C.3.2
Integrate the following:
\[\int_0^5 x^2e^{-x}dx\]
Let us begin by setting up integration by parts. Let
\[\begin{align*}
& u=x^2 \qquad dv=e^{-x}dx\\ & du=2xdx \qquad v=-e^{-x} \end{align*}\]
Then
\[\begin{align*}
uv|_0^5-\int_0^5 vdu &=-x^2e^{-x}|_0^5+2\int_0^5xe^{-x}dx\\ &= -x^2e^{-x}|_0^5+2\left[-xe^{-x}|_0^5+\int_0^5 e^{-x}dx\right]\\ &= -x^2e^{-x}|_0^5+2\left[-xe^{-x}|_0^5-e^{-x}|_0^5\right]\approx 1.75 \end{align*}\]
Example C.3.3
Integrate the following from \(-\infty\) to \(\infty\).
\[f(y)=\frac{1}{2}e^{-|y|+ty}, \;\; \text{ for } -\infty<y<\infty\]
\begin{align*}
\int_{-\infty}^{\infty} \frac{1}{2} e^{ty-|y|}dy &= \int_{-\infty}^0 \frac{1}{2}e^{y+ty}dy+\int_0^{\infty} \frac{1}{2}e^{-y+ty}dy\\ & = \int_{-\infty}^0 \frac{1}{2}e^{y(1+t)}dy+\int_0^{\infty} \frac{1}{2}e^{-y(1-t)}dy\\ & = \frac{1}{2(1+t)}+ \frac{1}{2(1-t)}=\frac{1}{2}\left(\frac{1-t+t+1}{(1+t)(1-t)}\right)\\ & =\frac{1}{(1-t)(1+t)} \end{align*} |
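The worked examples above can be checked symbolically, for instance with SymPy (an illustrative check, not part of the course notes). For Example C.3.3 we evaluate at the sample point t = 1/2, splitting the integral over negative and positive y just as in the text; the closed form 1/((1-t)(1+t)) gives 4/3 there:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Example C.3.2: the integral of x^2 * exp(-x) from 0 to 5 is 2 - 37*exp(-5) ~ 1.75.
val = sp.integrate(x**2 * sp.exp(-x), (x, 0, 5))
print(round(float(val), 2))  # 1.75

# Example C.3.3 at t = 1/2, split at y = 0 as in the derivation:
t = sp.Rational(1, 2)
left = sp.integrate(sp.exp(y * (1 + t)) / 2, (y, -sp.oo, 0))
right = sp.integrate(sp.exp(-y * (1 - t)) / 2, (y, 0, sp.oo))
print(left + right)  # 4/3, matching 1/((1 - 1/2)(1 + 1/2))
```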
I have trouble understanding the derivation of the law of velocity addition from the composition of Lorentz transformations. The proof is from
Special Relativity by Nicholas Woodhouse:
The author sets up three reference frames $O,O'$ and $O''$, where $O'$ moves with velocity $u$ relative to $O$, $O$ with $v$ relative to $O''$ and $O'$ with $w$ relative to $O''$:
$\begin{pmatrix} ct\\ x \end{pmatrix} = \gamma(u)\begin{pmatrix} 1 & \frac{u}{c} \\ \frac{u}{c} & 1 \end{pmatrix}\begin{pmatrix} ct'\\x' \end{pmatrix}$
$\begin{pmatrix} ct''\\ x'' \end{pmatrix} = \gamma(v)\begin{pmatrix} 1 & \frac{v}{c} \\ \frac{v}{c} & 1 \end{pmatrix}\begin{pmatrix} ct\\x \end{pmatrix}$
$\begin{pmatrix} ct''\\ x'' \end{pmatrix} = \gamma(w)\begin{pmatrix} 1 & \frac{w}{c} \\ \frac{w}{c} & 1 \end{pmatrix}\begin{pmatrix} ct'\\x' \end{pmatrix}$
From there follows that $\gamma (w)\begin{pmatrix} 1 &\frac{w}{c} \\ \frac{w}{c} & 1 \end{pmatrix} = \gamma(u) \gamma(v)\begin{pmatrix} 1 & \frac{v}{c} \\ \frac{v}{c} & 1 \end{pmatrix}\begin{pmatrix} 1 & \frac{u}{c} \\ \frac{u}{c} & 1 \end{pmatrix}$.
Then the author states that because of that, $\gamma(w)=\gamma(u)\gamma(v)(1+\frac{uv}{c^2})$. How does that follow from the above equation? I can see that it holds if $w=\frac{u+v}{1+\frac{uv}{c^2}}$, which is the law of velocity addition. But isn't that circular reasoning? Couldn't you say, for example, that $\gamma(w)=\gamma(u)\gamma(v)$ if $w=v+u$?
My question is how can you derive the law of velocity addition from the composition of Lorentz transformations without assuming it a priori? Or am I missing something here? |
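One way to see that it is not circular: the product of two boosts is itself a boost, so it must again have the form $\gamma(w)\begin{pmatrix} 1 & w/c \\ w/c & 1 \end{pmatrix}$ for some $w$. Matching the diagonal entries gives $\gamma(w)=\gamma(u)\gamma(v)(1+\frac{uv}{c^2})$, and dividing the off-diagonal entry by the diagonal entry determines $w$ without assuming the addition law. A SymPy sketch of that computation:

```python
import sympy as sp

u, v, c = sp.symbols('u v c', positive=True)

def boost(s):
    """2x2 boost matrix gamma(s) * [[1, s/c], [s/c, 1]] as in the question."""
    gamma = 1 / sp.sqrt(1 - s**2 / c**2)
    return gamma * sp.Matrix([[1, s / c], [s / c, 1]])

M = sp.simplify(boost(v) * boost(u))    # composition of the two boosts
gamma_w = M[0, 0]                       # diagonal entry: gamma(u)*gamma(v)*(1 + u*v/c**2)
w = sp.simplify(c * M[0, 1] / M[0, 0])  # w/c is the off-diagonal-to-diagonal ratio
print(sp.simplify(w - (u + v) / (1 + u * v / c**2)))  # 0 -- the velocity-addition law
```

So the matrix equation fixes both $\gamma(w)$ and $w$ simultaneously; one is not free to pair $\gamma(w)=\gamma(u)\gamma(v)$ with $w=u+v$, because that pair does not satisfy both entry-wise equations at once.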