These are the notes taken in my master's course Multivariate Data Analysis via Matrix Decomposition. If the course name is confusing, you can think of this as a statistical course on unsupervised learning.\(\newcommand{1}[1]{\unicode{x1D7D9}_{\{#1\}}}\newcommand{span}{\text{span}}\newcommand{bs}{\boldsymbol}\newcommand{R}{\mathbb{R}}\newcommand{rank}{\text{rank}}\newcommand{\norm}[1]{\left\lVert#1\right\rVert}\newcommand{diag}{\text{diag}}\newcommand{tr}{\text{tr}}\newcommand{braket}[1]{\left\langle#1\right\rangle}\newcommand{C}{\mathbb{C}}\)
Notations
First let’s give some standard notations used in this course. Let’s assume no prior knowledge in linear algebra and start from matrix multiplication.
Matrix Multiplication
We denote a matrix \(\bs{A}\in\R^{m\times n}\), with its entries defined as \([a_{ij}]_{i,j=1}^{m,n}\). Similarly, we define \(\bs{B}=[b_{jk}]_{j,k=1}^{n,p}\), and thus the multiplication is defined as \(\bs{AB} = [\sum_{j=1}^n a_{ij}b_{jk}]_{i,k=1}^{m,p}\), which can also be represented in three other ways:
- vector form, using \(\bs{a}\) and \(\bs{b}\)
- a matrix of products of \(\bs{A}\) and \(\bs{b}\)
- a matrix of products of \(\bs{a}\) and \(\bs{B}\)
A special example of such representation: let’s assume
\[ \bs{A}=[\bs{a}_1,\bs{a}_2,\ldots,\bs{a}_n]\in\R^{m\times n}\text{ and } \bs{D} = \diag(d_1,d_2,\ldots,d_n) \in\R^{n\times n},\]
then we have right away \(\bs{AD}=[\bs{a}_id_i]_{i=1}^n\).
Exercise When multiplying matrices we care about their ranks. There is a quick conclusion: if \(\bs{x}\neq \bs{0}\) and \(\bs{y}\neq \bs{0}\), then \(\rank(\bs{xy'})=1\). Conversely, if \(\rank(\bs{A})=1\), then \(\exists\ \bs{x}\neq \bs{0}, \bs{y}\neq \bs{0}\) s.t. \(\bs{xy'}=\bs{A}\). Prove it.
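As a quick numerical illustration of the exercise, a NumPy sketch checking both directions of the claim on a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)   # nonzero with probability 1
y = rng.standard_normal(5)

# Forward direction: a nonzero outer product has rank 1.
A = np.outer(x, y)           # x y' is 4 x 5
assert np.linalg.matrix_rank(A) == 1

# Converse: from a rank-1 matrix, recover the factors via the top singular triple.
U, s, Vt = np.linalg.svd(A)
x_hat, y_hat = U[:, 0] * s[0], Vt[0]
assert np.allclose(np.outer(x_hat, y_hat), A)
```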
Norms
There are two types of norms we consider in this course:

- (Euclidean) We define the \(l^1\)-norm as \(\norm{x}_1 = \sum_{i=1}^n |x_i|\), the \(l^2\)-norm as \(\norm{x}_2 = \sqrt{\bs{x'x}}\), the \(l^{\infty}\)-norm as \(\norm{x}_{\infty} = \max_{1\le i \le n}\{|x_i|\}\), and the Mahalanobis norm as \(\norm{x}_A = \sqrt{\bs{x'Ax}}\) (for a positive definite \(\bs{A}\)).
- (Frobenius) We define the Frobenius norm of a matrix as \(\norm{\bs{A}}_F=\sqrt{\sum_{i=1}^m\sum_{j=1}^n a_{ij}^2}\). The spectral 2-norm of a matrix is defined as \(\norm{\bs{A}}_2=\max_{\bs{x}\neq \bs{0}} \norm{\bs{Ax}}_2 / \norm{\bs{x}}_2\).
Property What makes these norms?
- \(\norm{\bs{v}}=0\) iff. \(\bs{v}=\bs{0}\).
- \(\norm{\alpha \bs{v}} = |\alpha|\cdot\norm{\bs{v}}\) for any \(\alpha\in\R\) and any \(\bs{v}\in\mathcal{V}\).
- (Triangle Inequality) \(\norm{\bs{u} + \bs{v}} \le \norm{\bs{u}} + \norm{\bs{v}}\) for any \(\bs{u}, \bs{v}\in\mathcal{V}\).
- (Submultiplicativity, for matrix norms) \(\norm{\bs{AB}}\le \norm{\bs{A}}\cdot \norm{\bs{B}}\) for any matrices \(\bs{A}\) and \(\bs{B}\) whose product is defined.
Exercise Try to prove them for Euclidean 2-norm, Frobenius norm and spectral 2-norm.
Inner Products
There are two types of inner products we consider:
- (Euclidean) We define the inner product of vectors \(\bs{x},\bs{y}\in\R^n\) as \(\bs{x'y}=\sum_{i=1}^n x_iy_i\).
- (Frobenius) We define the inner product of matrices \(\bs{A},\bs{B}\in\R^{m\times n}\) as \(\braket{\bs{A},\bs{B}}=\tr(\bs{A'B})=\sum_{i=1}^m\sum_{j=1}^n a_{ij}b_{ij}\).
A famous inequality related to these inner products is the Cauchy-Schwarz inequality, which states
- (Euclidean) \(|\bs{x'y}|\le \norm{\bs{x}}_2\cdot\norm{\bs{y}}_2\) for any \(\bs{x,y}\in\R^n\).
- (Frobenius) \(|\braket{\bs{A},\bs{B}}|\le\norm{\bs{A}}_F\cdot\norm{\bs{B}}_F\) for any \(\bs{A},\bs{B}\in\R^{m\times n}\).

Eigenvalue Decomposition (EVD)
The first matrix decomposition we’re gonna talk about is the eigenvalue decomposition.
Eigenvalues and Eigenvectors
For a square matrix \(\bs{A}\in\R^{n\times n}\), if \(\bs{0}\neq \bs{x}\in\C^n\) and \(\lambda\in\C\) are s.t. \(\bs{Ax} = \lambda\bs{x}\), then \(\lambda\) is called an eigenvalue of \(\bs{A}\) and \(\bs{x}\) is called a \(\lambda\)-eigenvector of \(\bs{A}\).
Ideally, we want a matrix to have \(n\) eigenvalues and \(n\) corresponding eigenvectors, linearly independent of each other. This is not always true.
Existence of EVD
Theorem \(\bs{A}\in\R^{n\times n}\) has \(n\) linearly independent eigenvectors iff. there exists an invertible \(\bs{X}\in\R^{n\times n}\) s.t. \(\bs{X}^{-1}\bs{A}\bs{X}=\bs{\Lambda}\) is diagonal, i.e. \(\bs{A}\) is diagonalizable. This gives \(\bs{A}=\bs{X}\bs{\Lambda}\bs{X}^{-1}\), which is called the eigenvalue decomposition (EVD).
Theorem (Spectral Theorem for Symmetric Matrices) For symmetric matrix \(\bs{A}\in\R^{n\times n}\) there always exists an orthogonal matrix \(\bs{Q}\), namely \(\bs{Q}'\bs{Q}=\bs{I}\), that gives
\[ \bs{A}=\bs{Q\Lambda Q}' = \sum_{i=1}^n \lambda_i \bs{q}_i \bs{q}_i' \]
where the \(\bs{q}_i\) are the column vectors of \(\bs{Q}\). This is called the symmetric EVD, aka. \(\bs{A}\) being orthogonally diagonalizable.
Properties of EVD
Property We have several properties following the second theorem above. For all \(i=1,2,\ldots, n\):

- \(\bs{A}\bs{q}_i = \lambda_i \bs{q}_i\) (can be proved using \(\bs{Q}^{-1}=\bs{Q}'\))
- \(\norm{\bs{q}_i}_2=1\) (can be proved using \(\bs{QQ}'=\bs{I}\))
The second theorem above can also be represented as
Theorem If \(\bs{A}=\bs{A}'\), then \(\bs{A}\) has \(n\) orthogonal eigenvectors.
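Numerically, the symmetric EVD is what `numpy.linalg.eigh` computes; a small sanity check of the theorem:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                      # symmetrize to get A = A'

lam, Q = np.linalg.eigh(A)             # symmetric EVD, Q orthogonal
assert np.allclose(Q.T @ Q, np.eye(5))           # Q'Q = I
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)    # A = Q Λ Q'

# Rank-one expansion A = Σ_i λ_i q_i q_i'
A_sum = sum(l * np.outer(q, q) for l, q in zip(lam, Q.T))
assert np.allclose(A_sum, A)
```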
Singular Value Decomposition (SVD)
For general matrices, we have singular value decomposition.
Definition
The most famous form of SVD is defined as
\[ \bs{A} = \bs{U} \bs{\Sigma} \bs{V}' \]
where \(\bs{A}\in\R^{m\times n}\), \(\bs{U}\in\R^{m\times m}\), \(\bs{\Sigma}\in\R^{m\times n}\) and \(\bs{V}\in\R^{n\times n}\). Specifically, both \(\bs{U}\) and \(\bs{V}\) are orthogonal (i.e. \(\bs{U}'\bs{U}=\bs{I}\), same for \(\bs{V}\)) and \(\bs{\Sigma}\) is (rectangular) diagonal. Usually, we order the singular values to be non-increasing, namely
\[ \bs{\Sigma}=\diag(\sigma_1,\sigma_2,\ldots,\sigma_{\min\{m,n\}})\quad\text{where}\quad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min\{m,n\}}. \]
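A small NumPy check of these shape and orthogonality conventions; `numpy.linalg.svd` returns the singular values already sorted in non-increasing order:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A)            # full SVD: U is 4x4, Vt is 3x3
assert np.allclose(U.T @ U, np.eye(4))  # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(3))  # V orthogonal
assert np.all(np.diff(s) <= 0)          # σ1 ≥ σ2 ≥ ... (non-increasing)

Sigma = np.zeros((4, 3))
np.fill_diagonal(Sigma, s)              # rectangular diagonal Σ
assert np.allclose(U @ Sigma @ Vt, A)   # A = U Σ V'
```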
Terminology
Here we define a list of terms that’ll be used from time to time:
- (SVD) \(\bs{A} = \bs{U} \bs{\Sigma} \bs{V}'\).
- (Left Singular Vectors) Columns of \(\bs{U}\).
- (Right Singular Vectors) Columns of \(\bs{V}\).
- (Singular Values) Diagonal entries of \(\bs{\Sigma}\).

Three Forms of SVD
Besides the regular SVD given above, we have the
outer product SVD:
\[ \bs{A} = \sum_{i=1}^{\min\{m,n\}}\!\!\!\sigma_i \bs{u}_i \bs{v}_i' \]
and
condensed SVD:
\[ \bs{A} = \bs{U}_r\bs{\Sigma}_r\bs{V}_r' \]
where \(r=\rank(\bs{A})\) is also the number of non-zero singular values. In this form, we have \(\bs{\Sigma}_r\in\R^{r\times r}\) with the correspondingly truncated \(\bs{U}_r\in\R^{m\times r}\) and \(\bs{V}_r\in\R^{n\times r}\).
Existence of SVD
Theorem (Existence of SVD) Let \(\bs{A}\in\R^{m\times n}\) and \(r=\rank(\bs{A})\). Then \(\exists\ \bs{U}_r\in\R^{m\times r}\), \(\bs{V}_r\in\R^{n\times r}\) and \(\bs{\Sigma}_r\in\R^{r\times r}\) s.t. \(\bs{A} = \bs{U}_r\bs{\Sigma}_r\bs{V}_r'\), where \(\bs{U}_r\) and \(\bs{V}_r\) have orthonormal columns and \(\bs{\Sigma}_r\) is diagonal. This means the condensed SVD exists, and therefore so do the other two forms.
Proof Define symmetric \(\bs{W}\in\R^{(m+n)\times(m+n)}\) as
\[ \bs{W} = \begin{bmatrix} \bs{0} & \bs{A} \\ \bs{A}' & \bs{0} \end{bmatrix} \]
which has an orthogonal EVD as \(\bs{W} = \bs{Z}\bs{\Lambda}\bs{Z}'\) where \(\bs{Z}'\bs{Z}=\bs{I}\). Now, assume \(\bs{z}\in\R^{m+n}\) is an eigenvector of \(\bs{W}\) corresponding to \(\lambda\), then \(\bs{W}\bs{z} = \lambda \bs{z}\). Denote the first \(m\) entries of \(\bs{z}\) as \(\bs{x}\) and the rest \(\bs{y}\), which gives
\[ \begin{bmatrix} \bs{0} & \bs{A}\\ \bs{A}' & \bs{0} \end{bmatrix} \begin{bmatrix} \bs{x} \\ \bs{y} \end{bmatrix} = \lambda \begin{bmatrix} \bs{x} \\ \bs{y} \end{bmatrix} \Rightarrow \begin{cases} \bs{Ay} = \lambda \bs{x},\\ \bs{A}'\bs{x} = \lambda \bs{y}. \end{cases} \]
Using this result,
\[ \begin{bmatrix} \bs{0} & \bs{A}\\ \bs{A}' & \bs{0} \end{bmatrix} \begin{bmatrix} \bs{x} \\ -\bs{y} \end{bmatrix} = \begin{bmatrix} -\bs{Ay} \\ \bs{A}'\bs{y} \end{bmatrix} = \begin{bmatrix} -\lambda \bs{x}\\ \lambda \bs{y} \end{bmatrix} = -\lambda\begin{bmatrix} \bs{x}\\ -\bs{y} \end{bmatrix} \]
which means \(-\lambda\) is also an eigenvalue of \(\bs{W}\). Hence, we know
\[ \begin{align} \bs{W} &= \bs{Z}\bs{\Lambda}\bs{Z}' = \bs{Z}_r\bs{\Lambda}_r\bs{Z}_r'\\ &= \begin{bmatrix} \bs{X} & \bs{X}\\ \bs{Y} & -\bs{Y} \end{bmatrix} \begin{bmatrix} \bs{\Sigma} & \bs{0}\\ \bs{0} & -\bs{\Sigma} \end{bmatrix} \begin{bmatrix} \bs{X} & \bs{X}\\ \bs{Y} & -\bs{Y} \end{bmatrix}'\\ &= \begin{bmatrix} \bs{0} & \bs{X}\bs{\Sigma}\bs{Y}'\\ \bs{Y}\bs{\Sigma}\bs{X}' & \bs{0} \end{bmatrix}. \end{align} \]
Therefore, we conclude \(\bs{A}=\bs{X}\bs{\Sigma}\bs{Y}'\), where all that remains to prove is the orthogonality of \(\bs{X}\) and \(\bs{Y}\). Take the \(\bs{z}=(\bs{x},\bs{y})\) we just defined, normalized so that

\[ \norm{\bs{z}}_2^2=\bs{z}'\bs{z}=\bs{x}'\bs{x} + \bs{y}'\bs{y} = 2. \]
From the orthogonality of eigenvectors corresponding to different eigenvalues (here \(\lambda\) and \(-\lambda\), the latter with eigenvector \(\tilde{\bs{z}}=(\bs{x},-\bs{y})\)), we also know

\[ \bs{z}'\tilde{\bs{z}} = \bs{x}'\bs{x} - \bs{y}'\bs{y} = 0 \]

which altogether gives \(\norm{\bs{x}}_2=\norm{\bs{y}}_2=1\). Q.E.D.
Properties of SVD
Property There are several characteristics we have about SVD:
- The \(\bs{W}\) decomposition above.
- The defining relations: a left singular vector \(\bs{u}\) and the corresponding right singular vector \(\bs{v}\) satisfy \(\bs{Av} = \sigma \bs{u}\) and \(\bs{A}'\bs{u} = \sigma \bs{v}\).
- Relationship with eigenvectors/eigenvalues of \(\bs{A}'\bs{A}\): \(\bs{A}'\bs{A}\bs{v} = \sigma\bs{A}'\bs{u} = \sigma^2\bs{v}\); and of \(\bs{AA}'\): \(\bs{AA}'\bs{u} = \sigma\bs{A}\bs{v} = \sigma^2\bs{u}\).
- Norms (note that eigenvalues cannot define a norm!): \(\norm{\bs{A}}_F^2 = \sum_{i,j=1}^{m,n}a_{ij}^2=\sum_{i=1}^r\sigma_i^2\) and \(\norm{\bs{A}}_2 = \max_{\bs{x}\neq 0} \norm{\bs{Ax}}_2 / \norm{\bs{x}}_2 = \sigma_1\).
Exercise Show how to use SVD to calculate these two norms.
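As a numerical illustration of the exercise, both norms fall out of the singular values directly:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)

# Frobenius norm: square root of the sum of squared singular values.
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))

# Spectral 2-norm: the largest singular value.
assert np.isclose(np.linalg.norm(A, 2), s[0])

# Sanity check of the 2-norm definition on random directions:
x = rng.standard_normal((3, 100))
ratios = np.linalg.norm(A @ x, axis=0) / np.linalg.norm(x, axis=0)
assert np.all(ratios <= s[0] + 1e-10)
```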
Applications of SVD

One of the most important uses of SVD is computing projections. \(\bs{P}\in\R^{n\times n}\) is a projection matrix iff. \(\bs{P}^2=\bs{P}\). More commonly, we consider an orthogonal projection \(\bs{P}\) that is also symmetric. Now let's consider a dataset \(\bs{A}\in\R^{m\times n}\) where we have \(n\) observations, each with \(m\) dimensions. Suppose we want to project this dataset onto a \(k\)-dimensional subspace \(\bs{W}\subseteq\R^m\), i.e.
\[ \bs{W} = \span\{\bs{q}_1,\bs{q}_2,\ldots,\bs{q}_k\},\quad \bs{q}_i'\bs{q}_j=\1{i=j} \]
then the projection matrix would be \(\bs{P}_{\bs{W}}=\bs{Q}_k\bs{Q}_k'\).
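A small NumPy sketch of this construction, using QR to build an orthonormal basis \(\bs{Q}_k\) of a random subspace:

```python
import numpy as np

rng = np.random.default_rng(4)
m, k = 6, 2
# Orthonormal basis of a random k-dimensional subspace via QR.
Q_k, _ = np.linalg.qr(rng.standard_normal((m, k)))

P = Q_k @ Q_k.T
assert np.allclose(P @ P, P)        # idempotent: P² = P
assert np.allclose(P, P.T)          # symmetric: an orthogonal projection

# Projecting a dataset A (columns = observations) onto the subspace:
A = rng.standard_normal((m, 10))
A_proj = P @ A
assert np.allclose(P @ A_proj, A_proj)   # projected data stays in the subspace
```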
The nearest orthogonal matrix of \(\bs{A}\in\R^{p\times p}\) is the solution to
\[ \min_{\bs{X}'\bs{X}=\bs{I}}\norm{\bs{A}-\bs{X}}_F \]
which is solved once we find the optimum of
\[ \begin{align} \min_{\bs{X}'\bs{X}=\bs{I}}\norm{\bs{A}-\bs{X}}_F^2 &= \min_{\bs{X}'\bs{X}=\bs{I}}\tr[(\bs{A}-\bs{X})'(\bs{A}-\bs{X})]\\&= \min_{\bs{X}'\bs{X}=\bs{I}}\tr[\bs{A}'\bs{A} - \bs{X}'\bs{A} - \bs{A}'\bs{X} + \bs{X}'\bs{X}]\\&= \min_{\bs{X}'\bs{X}=\bs{I}} \norm{\bs{A}}_F^2 - \tr(\bs{A}'\bs{X}) - \tr(\bs{X}'\bs{A}) + \tr(\bs{X}'\bs{X})\\&= \norm{\bs{A}}_F^2 + p - 2\max_{\bs{X}'\bs{X}=\bs{I}} \tr(\bs{A}'\bs{X}) \end{align}. \]
Now we try to solve
\[ \max_{\bs{X}'\bs{X}=\bs{I}} \tr(\bs{A}'\bs{X}) \]
and claim the solution is given by \(\bs{X} = \bs{U}\bs{V}'\) where \(\bs{U}\) and \(\bs{V}\) are derived from SVD of \(\bs{A}\), namely \(\bs{A} = \bs{U\Sigma V}'\). Proof: We know
\[ \tr(\bs{A}'\bs{X}) = \tr(\bs{V}\bs{\Sigma}'\bs{U}'\bs{X}) = \tr(\bs{\Sigma}'\bs{U}'\bs{X}\bs{V}) =: \tr(\bs{\Sigma}'\bs{Z}) \]
where we define \(\bs{Z}\) as the product of the three orthogonal matrices, which therefore is orthogonal: \(\bs{Z}'\bs{Z}=\bs{I}\).
Orthogonality of \(\bs{Z}\) gives, for every \(i\),

\[ z_{i1}^2 + z_{i2}^2 + \cdots + z_{ip}^2 = 1 \Rightarrow z_{ii} \le 1. \]
Hence, (note all singular values are non-negative)
\[ \tr(\bs{\Sigma}'\bs{Z}) = \sum_{i=1}^p \sigma_i z_{ii} \le \sum_{i=1}^p \sigma_i \]
which gives optimal \(\bs{Z}^*=\bs{I}\) and thus the solution follows.
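A NumPy sketch of this recipe; the optimality claim is checked crudely against random orthogonal competitors:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 4
A = rng.standard_normal((p, p))

U, s, Vt = np.linalg.svd(A)
X = U @ Vt                                  # claimed nearest orthogonal matrix
assert np.allclose(X.T @ X, np.eye(p))      # X is orthogonal

# X should beat random orthogonal matrices in Frobenius distance:
best = np.linalg.norm(A - X, 'fro')
for _ in range(200):
    Z, _ = np.linalg.qr(rng.standard_normal((p, p)))
    assert best <= np.linalg.norm(A - Z, 'fro') + 1e-10
```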
The orthogonal Procrustes problem seeks the solution to
\[ \min_{\bs{X}'\bs{X}=\bs{I}}\norm{\bs{A}-\bs{BX}}_F \]
which is, similarly to the problem above, given by the SVD \(\bs{B}'\bs{A}=\bs{U\Sigma V}'\), namely \(\bs{X}=\bs{UV}'\).
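A sketch of the Procrustes recipe on a noiseless instance, where the true orthogonal factor should be recovered exactly (the SVD of \(\bs{B}'\bs{A}\) is used here, which matches the objective \(\min_{\bs{X}'\bs{X}=\bs{I}}\norm{\bs{A}-\bs{BX}}_F\) after expanding the square):

```python
import numpy as np

rng = np.random.default_rng(6)
m, p = 7, 3
B = rng.standard_normal((m, p))
X_true, _ = np.linalg.qr(rng.standard_normal((p, p)))
A = B @ X_true                          # noiseless instance: optimum is X_true

U, s, Vt = np.linalg.svd(B.T @ A)       # SVD of B'A
X = U @ Vt
assert np.allclose(X.T @ X, np.eye(p))  # X is orthogonal
assert np.allclose(X, X_true, atol=1e-6)  # exact recovery in the noiseless case
```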
The nearest symmetric matrix for \(\bs{A}\in\R^{p\times p}\) is the solution to \(\min_{\bs{X}=\bs{X}'}\norm{\bs{A}-\bs{X}}_F\), which is simply
\[ \bs{X} = \frac{\bs{A}+\bs{A}'}{2}. \]
In order to prove it, write \(\bs{A}\) in the form of
\[ \bs{A} = \frac{\bs{A} + \bs{A}'}{2} + \frac{\bs{A} - \bs{A}'}{2} =: \bs{X} + \bs{Y}. \]
Notice \(\tr(\bs{X}'\bs{Y})=0\); hence by Pythagoras, for any symmetric \(\bs{S}\) we have \(\norm{\bs{A}-\bs{S}}_F^2=\norm{\bs{X}-\bs{S}}_F^2+\norm{\bs{Y}}_F^2\), which is minimized exactly at \(\bs{S}=\bs{X}\).
Best rank-\(r\) approximation. In order to find the best rank-\(r\) approximation in Frobenius norm, we need solution to
\[ \min_{\rank(\bs{X})\le r} \norm{\bs{A}-\bs{X}}_F \]
which is merely the truncated SVD \(\bs{X}=\bs{U}_r\bs{\Sigma}_r\bs{V}_r'\), keeping the \(r\) largest singular values. See the condensed SVD above for notation.
The best approximation in 2-norm, namely solution to
\[ \min_{\rank(\bs{X})\le r} \norm{\bs{A}-\bs{X}}_2, \]
is exactly identical to the one above. We may prove both by contradiction. Proof: Suppose \(\exists\ \bs{B}\in\R^{m\times n}\) with \(\rank(\bs{B})\le r\) s.t.
\[ \norm{\bs{A}-\bs{B}}_2 < \norm{\bs{A}-\bs{X}}_2 = \sigma_{r+1}. \]
Now choose a nonzero \(\bs{w}\) from the kernel of \(\bs{B}\) and we have
\[ \bs{Aw}=\bs{Aw}+\bs{0} = (\bs{A}-\bs{B})\bs{w} \]
and thus
\[ \norm{\bs{Aw}}_2 = \norm{(\bs{A}-\bs{B})\bs{w}}_2 \le \norm{\bs{A}-\bs{B}}_2\cdot \norm{\bs{w}}_2 <\sigma_{r+1}\norm{\bs{w}}_2\tag{1}. \]
Meanwhile, we may choose \(\bs{w}\in\span\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_{r+1}\}=\bs{W}\) as well, since \(\ker(\bs{B})\cap\bs{W}\) is nontrivial: \(\dim\ker(\bs{B})\ge n-r\) and \(\dim\bs{W}=r+1\) sum to more than \(n\). Write \(\bs{w}=\bs{V}_{r+1}\bs{\alpha}\) with \(\bs{V}_{r+1}=[\bs{v}_1,\ldots,\bs{v}_{r+1}]\); then
\[ \begin{align} \norm{\bs{Aw}}_2^2&=\norm{\bs{U}\bs{\Sigma}\bs{V}'\bs{w}}_2^2 = \sum_{i=1}^{r+1}\sigma_i^2\alpha_i^2 \ge \sigma_{r+1}^2\sum_{i=1}^{r+1}\alpha_i^2\\ &= \sigma_{r+1}^2\norm{\bs{\alpha}}_2^2\equiv \sigma_{r+1}^2\norm{\bs{w}}_2^2.\tag{2} \end{align} \]
Due to the contradiction between eq. (1) and (2), we conclude such a \(\bs{B}\) doesn't exist.
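The best rank-\(r\) approximation statements above can be checked numerically; the error formulas \(\norm{\bs{A}-\bs{X}}_F=\sqrt{\sum_{i>r}\sigma_i^2}\) and \(\norm{\bs{A}-\bs{X}}_2=\sigma_{r+1}\) fall straight out of the SVD:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 5))
r = 2

U, s, Vt = np.linalg.svd(A)
X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]          # truncated SVD

# Approximation errors in both norms:
assert np.isclose(np.linalg.norm(A - X, 'fro'), np.sqrt(np.sum(s[r:]**2)))
assert np.isclose(np.linalg.norm(A - X, 2), s[r])

# No random rank-r competitor should do better in the 2-norm:
for _ in range(100):
    B = rng.standard_normal((6, r)) @ rng.standard_normal((r, 5))
    assert np.linalg.norm(A - B, 2) >= s[r] - 1e-10
```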
Is it possible to calculate the maximum value of a time-domain signal from its frequency-domain representation without performing an inverse transform?
Suppose that Alice has a vector $\mathrm x \in \mathbb R^n$. She computes the DFT of $\mathrm x$
$$\mathrm y := \mathrm F \mathrm x \in \mathbb C^n$$
where $\mathrm F \in \mathbb C^{n \times n}$ is a Fourier matrix. Alice then tells Bob what $\mathrm y$ is. Since the inverse of the Fourier matrix is $\mathrm F^{-1} = \frac 1n \, \mathrm F^*$, Bob can recover $\mathrm x$ via
$$\mathrm x = \frac 1n \, \mathrm F^* \mathrm y$$
and then compute $\| \mathrm x \|_{\infty}$ to find the maximum absolute value of the entries of $\mathrm x$. What if computing matrix inverses and Hermitian transposes is not allowed? Bob can then write $\mathrm F$ and $\mathrm y$ as follows
$$\mathrm F = \mathrm F_{\text{re}} + i \,\mathrm F_{\text{im}} \qquad \qquad \qquad \mathrm y = \mathrm y_{\text{re}} + i \,\mathrm y_{\text{im}}$$
and, since $\mathrm x \in \mathbb R^n$, the equation $\mathrm F \mathrm x = \mathrm y$ yields two equations over the reals, namely, $\mathrm F_{\text{re}} \, \mathrm x = \mathrm y_{\text{re}}$ and $\mathrm F_{\text{im}} \, \mathrm x = \mathrm y_{\text{im}}$. Bob can then solve the following
linear program in $t \in \mathbb R$ and $\mathrm x \in \mathbb R^n$
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & - t 1_n\leq \mathrm x \leq t 1_n\\ & \begin{bmatrix} \mathrm F_{\text{re}}\\ \mathrm F_{\text{im}}\end{bmatrix} \mathrm x = \begin{bmatrix} \mathrm y_{\text{re}}\\ \mathrm y_{\text{im}}\end{bmatrix}\end{array}$$
which can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & \begin{bmatrix} 1\\ 0_n\end{bmatrix}^{\top} \begin{bmatrix} t\\ \mathrm x \end{bmatrix}\\ \text{subject to} & \begin{bmatrix} -1_n & \mathrm I_n\\ -1_n & -\mathrm I_n\end{bmatrix} \begin{bmatrix} t\\ \mathrm x \end{bmatrix} \leq \begin{bmatrix} 0_n\\ 0_n\end{bmatrix}\\ & \begin{bmatrix} 0_n & \mathrm F_{\text{re}}\\ 0_n & \mathrm F_{\text{im}}\end{bmatrix} \begin{bmatrix} t\\ \mathrm x \end{bmatrix} = \begin{bmatrix} \mathrm y_{\text{re}}\\ \mathrm y_{\text{im}}\end{bmatrix}\end{array}$$
and not only recover $\mathrm x$ but also obtain $t = \| \mathrm x \|_{\infty}$. However, is solving a linear program cheaper than computing a Hermitian transpose?
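For comparison, note that the equality constraints alone already pin \(\mathrm x\) down, since the Fourier matrix is invertible; a plain real-valued linear solve therefore reproduces the LP's optimal value. A NumPy sketch of that equality-constrained core (this is not the LP itself, just its constraint system):

```python
import numpy as np

n = 8
rng = np.random.default_rng(8)
x = rng.standard_normal(n)

# DFT matrix F[k, m] = exp(-2*pi*i*k*m/n), as in MATLAB's dftmtx.
k, m = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
F = np.exp(-2j * np.pi * k * m / n)
y = F @ x

# Stack real and imaginary parts: the LP's equality constraints.
A_eq = np.vstack([F.real, F.imag])
b_eq = np.concatenate([y.real, y.imag])

# F is invertible, so the constraints determine x; least squares recovers it.
x_rec, *_ = np.linalg.lstsq(A_eq, b_eq, rcond=None)
t = np.abs(x_rec).max()                 # the LP's optimal value
assert np.allclose(x_rec, x)
assert np.isclose(t, np.abs(x).max())
```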
MATLAB code
The following MATLAB script
n = 8;
% build n x n Fourier matrix
F = dftmtx(n);
% -----
% Alice
% -----
% build vector x
x = randn(n,1);
% compute DFT of x
y = F * x;
% ---
% Bob
% ---
% solve linear program
c = eye(n+1,1);
A_in = [-ones(n,1), eye(n); -ones(n,1),-eye(n)];
b_in = zeros(2*n,1);
A_eq = [zeros(n,1), real(F); zeros(n,1), imag(F)];
b_eq = [real(y); imag(y)];
solution = linprog(c, A_in, b_in, A_eq, b_eq);
% extract t and x
t = solution(1);
x_rec = solution(2:n+1);
% check results
disp('t = '); disp(t);
disp('Infinity norm of x = '); disp(norm(x,inf));
disp('Reconstruction error = '); disp(x_rec - x);
produces the output
Optimization terminated.
t =
    2.2023
Infinity norm of x =
    2.2023
Reconstruction error =
   1.0e-013 *
    0.0910
    0.0711
    0.0167
   -0.1077
    0.1049
    0.0322
    0.1130
    0.2776
The original vector is
>> x
x =
   -1.1878
   -2.2023
    0.9863
   -0.5186
    0.3274
    0.2341
    0.0215
   -1.0039
It's generally not possible to compute the exact maximum value, but you can compute a bound on the maximum value. Assuming your data are discrete-time, and you're using the discrete Fourier transform (DFT), you have the following relation between time domain and frequency domain:
$$x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j2\pi kn/N}\tag{1}$$
where $N$ is the DFT length. From $(1)$ we can derive the following bound:
$$|x[n]|=\frac{1}{N}\left|\sum_{k=0}^{N-1}X[k]e^{j2\pi kn/N}\right|\le\frac{1}{N}\sum_{k=0}^{N-1}\left|X[k]\right|\left| e^{j2\pi kn/N}\right|=\frac{1}{N}\sum_{k=0}^{N-1}\left|X[k]\right|\tag{2}$$
For other types of (Fourier) transforms (DTFT, CTFT), similar bounds can be derived in the same way.
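A quick numerical check of the bound $(2)$, including the impulse case where all DFT phases align and the bound is attained:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 64
x = rng.standard_normal(N)
X = np.fft.fft(x)

bound = np.sum(np.abs(X)) / N        # (1/N) * sum_k |X[k]|
assert np.max(np.abs(x)) <= bound + 1e-10

# The bound is tight for an impulse, where every |X[k]| is equal:
imp = np.zeros(N)
imp[3] = 5.0
assert np.isclose(np.max(np.abs(imp)), np.sum(np.abs(np.fft.fft(imp))) / N)
```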
In addition to @MattL's answer, which provides a method to compute a strict upper bound for the maximum of a signal $x[n]$ from its DFT $X[k]$, I would like to provide an informal (without any proofs) lower bound for the maximum, which can also be beneficial at times.
Now my loose claim without any proof is that for a typical signal x[n], its maximum sample is expected to be larger (or not less) than the sum of its average value $\bar x$ and its standard deviation $\sigma_x$ : $$x_{max} > \sigma_x + \bar x $$
The average value of a signal is estimated as: $$ \bar x = \frac {\sum_{n=0}^{N-1} {x[n]}} {N}$$ And this is nothing but $ \frac{X[0]}N$, where $X[k]$ is the DFT of $x[n]$ given by: $$X[k] = \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} nk} $$
Now, to find an estimate of standard deviation, $\sigma_x$, we can use the following trick: $$\sigma_x^2 = \text{Var}(x) = E\{x^2\} - (E\{x\})^2$$ and replace $E\{x^2\}$ and $E\{x\}$ with the following estimates: $$ E\{x^2\} = \frac{1}{N} \sum{x[n]^2} $$ and $E\{x\}$ is already estimated as $\bar x$.
Finally we can invoke Parseval's theorem to compute the squared sum, which equals the total energy of the signal: $$\sum_{n=0}^{N-1} x[n]^2 = \frac {1}{N} \sum_{k=0}^{N-1} |X[k]|^2 $$
Since the standard deviation is the square root of the variance, we can have the following expression as a lower bound for the maximum value of a signal $x[n]$
$$ x_{max} > \frac { \sqrt{ \sum_{k=0}^{N-1} |X[k]|^2 - X[0]^2} } {N} + \frac {X[0]}{N} $$
Note that if you're bold enough you can further claim that the typical maximum will be less than the average plus 3 times the standard deviation with 0.999 probability (for a Gaussian distribution), i.e. $$x_{max} < \bar x + 3 \sigma_x$$ which can also be computed simply from the above; hence a total bracket for a typical maximum sample can be defined as: $$ \bar x + \sigma_x < x_{max} < \bar x + 3\sigma_x $$
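The mean and standard deviation estimates used above can be checked against direct time-domain computation; a NumPy sketch (the final inequality is only a heuristic that "typically" holds, as stressed above):

```python
import numpy as np

rng = np.random.default_rng(10)
N = 256
x = rng.standard_normal(N) + 1.5     # a "typical" signal with nonzero mean
X = np.fft.fft(x)

mean_est = X[0].real / N
var_est = (np.sum(np.abs(X)**2) - np.abs(X[0])**2) / N**2   # via Parseval
std_est = np.sqrt(var_est)

assert np.isclose(mean_est, np.mean(x))
assert np.isclose(std_est, np.std(x))

# The heuristic lower bound on the maximum sample:
assert mean_est + std_est < np.max(x)   # typically holds for N = 256
```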
Suppose we've got $X=(X(t))_{t\geq 0}$, a strong Markov process with respect to a filtration $\mathcal{F}_t$, taking values in some subset of $\mathbb{R}$. We take $\tau$, a stopping time w.r.t. $\mathcal{F}_t$, and we kill $X$ at $\tau$, obtaining a new process $Y$ which is given by
$$ Y(t)=\left\{ \begin{array}{ll} X(t), & \textrm{ for } t<\tau\\ \Delta, & \textrm{ for } t\geq \tau \end{array} \right.. $$ where $\Delta$ is an isolated point.
We can then ask if $Y$ is still a strong Markov process. There are a lot of examples where it is (e.g. when $\tau$ is a terminal time); however, I am interested in an example where it isn't. I've got simple examples where killing destroys some other properties (e.g. time-homogeneity), but the Markov property seems not so easy to eliminate.

I would be very thankful for any ideas.
In the paper https://www.scribd.com/doc/14674814/Regressions-et-equations-integrales (page 6) a method is presented to fit a single nonlinear Gaussian curve to noisy data. Later on in the paper, the same method is employed to fit a double exponential regression (and even more). I'm curious if it would be possible to employ the same technique to fit a double Gaussian regression with scaling constants? To be specific, I want to perform a regression of the following equation to data.
$$ f(x)=\frac{c_1}{\sigma_1} \text{exp}\left(-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right)+\frac{c_2}{\sigma_2} \text{exp}\left(-\frac{1}{2}\left(\frac{x-\mu_2}{\sigma_2}\right)^2\right) $$
Where you have data $x_i, y_i$ with normally distributed noise $\epsilon_i$.
I know nonlinear least squares works pretty well in this case, but having a non-iterative algorithm to do this (and potentially with more Gaussian kernels) would be useful in many situations.
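One observation worth sketching: once the \(\mu_i,\sigma_i\) are fixed, the model is linear in \(c_1,c_2\), so those two constants never require iteration regardless of how the shape parameters are obtained. A NumPy sketch with made-up parameters (hypothetical values, not fitted to the data below):

```python
import numpy as np

def gauss(x, mu, sigma):
    # One Gaussian kernel of the model, including the 1/sigma scaling.
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / sigma

# Hypothetical "true" parameters for a synthetic example.
mu1, s1, c1 = 5.0, 1.5, 3.0
mu2, s2, c2 = 10.0, 1.2, 2.4

x = np.linspace(0, 20, 50)
y = c1 * gauss(x, mu1, s1) + c2 * gauss(x, mu2, s2)   # noiseless data

# With (mu_i, sigma_i) known, the model is LINEAR in (c1, c2):
G = np.column_stack([gauss(x, mu1, s1), gauss(x, mu2, s2)])
c_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
assert np.allclose(c_hat, [c1, c2])
```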
Take for example this data.
0,0.0953435022119166
0.408163265306122,0.165876041641162
0.816326530612245,0.217055023196137
1.22448979591837,0.336625390515636
1.63265306122449,0.502382096769287
2.04081632653061,0.666515393163172
2.44897959183673,0.883208684189565
2.85714285714286,1.12458773291947
3.26530612244898,1.37726169379448
3.6734693877551,1.61866109023969
4.08163265306122,1.82013750956065
4.48979591836735,1.9779288108796
4.89795918367347,2.06828873076259
5.30612244897959,2.08056217287661
5.71428571428571,2.07533074340141
6.12244897959184,2.0104842044372
6.53061224489796,1.94357288893297
6.93877551020408,1.8682413902483
7.3469387755102,1.82822815415585
7.75510204081633,1.84578299714906
8.16326530612245,1.86852992222851
8.57142857142857,1.95624254355303
8.97959183673469,2.01135490423995
9.38775510204082,2.0796994498718
9.79591836734694,2.09619003334345
10.2040816326531,2.03603444176666
10.6122448979592,1.91638500266306
11.0204081632653,1.77786079492498
11.4285714285714,1.55497697393296
11.8367346938776,1.28953163571889
12.2448979591837,1.06603391254518
12.6530612244898,0.812745486024327
13.0612244897959,0.622710458610182
13.469387755102,0.433501473226128
13.8775510204082,0.333280824649249
14.2857142857143,0.190078385939407
14.6938775510204,0.133877864179819
15.1020408163265,0.0812258047441917
15.5102040816327,0.0427020032495271
15.9183673469388,0.0280925506752726
16.3265306122449,0.0283736350602543
16.734693877551,0.00634269170340754
17.1428571428571,0.00825683417491195
17.5510204081633,-0.0074678105115024
17.9591836734694,-0.000778850177607315
18.3673469387755,-0.00739023269152126
18.7755102040816,-0.00232605185006384
19.1836734693878,0.0194459618205399
19.5918367346939,0.000445784296190274
20,-0.000752024542365978
Which, when plotted, looks like this:
The $x$-component of a circular polarized plane wave is
$$ E_x(\vec r,t)=E_0\cos\left(\frac{w}{c}(0.6y-0.8z)-wt\right) $$
With only this given, we can devise the total electric field as
$$ \vec E(\vec r,t)=E_0 \left[\cos\left(\frac{w}{c}(0.6y-0.8z)-wt\right) \hat x \pm \sin\left(\frac{w}{c}(0.6y-0.8z)-wt\right)(0.8 \hat y + 0.6 \hat z) \right]$$
When looking for the total electric field, we first need to define the wave vector, which is $\vec k = \frac{w}{c}(0.6\hat y - 0.8\hat z)$. We know that $\vec k \cdot \vec E = 0$, which is already satisfied for the $x$-component of our electric field.
Since we want $\vec k \cdot \vec E = 0$ to be satisfied in the $y,z$-directions as well, we need to add a term to our total electric field which becomes zero when dotted with $\vec k$; this is represented by $(0.8 \hat y + 0.6 \hat z)$ in our answer, since $\vec k \cdot (0.8 \hat y + 0.6 \hat z) = 0$. What I don't understand in this question is why the second term needs to be a sine term, and not just be attached to the cosine. The answer would then look like
$$ \vec E(\vec r,t)=E_0 \left[\cos\left(\frac{w}{c}(0.6y-0.8z)-wt\right) (\hat x + 0.8 \hat y + 0.6 \hat z) \right]$$
But this is not a correct answer, because according to my lecture notes, $E_{y,z}$ needs to be phase-shifted 90 degrees, which is done using sine instead of cosine. Any help understanding why this is would be greatly appreciated.
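The difference between the two candidates can be seen numerically: the cos/sin combination has constant magnitude (circular polarization), while attaching everything to the cosine gives a linearly polarized field whose magnitude oscillates through zero. A small sketch:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 400)     # the phase (w/c)(0.6y - 0.8z) - wt

# Circular: cos on x-hat, sin on the transverse unit vector 0.8y-hat + 0.6z-hat.
E_circ = np.stack([np.cos(t), np.sin(t) * 0.8, np.sin(t) * 0.6])
mag = np.linalg.norm(E_circ, axis=0)
assert np.allclose(mag, 1.0)           # constant magnitude: circular polarization

# Without the 90-degree shift the field is linear along x-hat + 0.8y-hat + 0.6z-hat:
E_lin = np.stack([np.cos(t), np.cos(t) * 0.8, np.cos(t) * 0.6])
mag_lin = np.linalg.norm(E_lin, axis=0)
assert mag_lin.max() > 1.3 and mag_lin.min() < 0.05   # magnitude oscillates
```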
1. Given that AF=4sqrt(3) and FC=5sqrt(3), what is BC?
2. The diagram is not drawn to scale, but the measurements of the line segments and the right angles are correctly labeled. As shown in the diagram, BE=EC. Are the red and blue triangles similar? If so, enter the side of the blue triangle that corresponds to AB. If not, enter "no."
3. The lengths of four sides of a trapezoid have ratios 1:1:1:2. The area of the trapezoid is 48sqrt(3). What is its perimeter?
4. We know Angle A=45 degrees, Angle B=15 degrees and BC=sqrt(6). What is AB?
5. In Triangle ABC, the circumcenter and orthocenter are collinear with vertex A. Which of the following statements must be true?
(1) Triangle ABC must be an isosceles triangle. (2) Triangle ABC must be an equilateral triangle. (3) Triangle ABC must be a right triangle. (4) Triangle ABC must be an isosceles right triangle. Enter your answer as a comma-separated list. If there is no correct option, write "none".
6.
Let H be the orthocenter of the equilateral triangle ABC. We know the distance between the orthocenters of Triangle AHC and Triangle BHC is 12. What is the distance between the circumcenters of Triangle AHC and Triangle BHC?
7. As shown in the diagram, points B and D are on different sides of line AC. We know that Angle B=2*Angle D=60 degrees and that AC=4sqrt(3). What is the distance between the circumcenters of Triangle ABC and Triangle ADC?
4. We know Angle A=45 degrees, Angle B=15 degrees and BC=sqrt(6). What is AB?
Angle C = 120°
Using the Law of Sines
BC / sin A = AB/ sin C
sqrt (6) / (1/sqrt(2) ) = AB / ( sqrt (3) / 2)
sqrt (12) = 2 AB / sqrt (3)
sqrt (3) * sqrt (12) = 2 AB
sqrt (36) = 2 AB
6 = 2AB
AB = 3
The first one is answered, here :
https://web2.0calc.com/questions/plz-help-and-explain-thoroughly-helpp
2. The diagram is not drawn to scale, but the measurements of the line segments and the right angles are correctly labeled. As shown in the diagram, BE=EC. Are the red and blue triangles similar? If so, enter the side of the blue triangle that corresponds to AB. If not, enter "no."
It's obvious that, if the triangles are similar, then ΔABE is similar to ΔECD
Note that....if similar, the scale factor is 120/35 = 24/7
Since...if similar, then DC = (7/24)BE = (7/24)CE
Let CE = x ....so DC = (7/24)x
Then, by the Pythagorean Theorem,
sqrt ( x^2 + (7x/24)^2 ) = 35
x *sqrt [ 576 + 49] / 24 = 35
x * sqrt (625) = 24 * 35
x * 25 = 840
x = 840 /25 = 33.6 = CE = BE
And...if similar..... AB = (24/7)CE
So....by the Pythagorean Theorem......
sqrt (BE^2 + AB^2 ) =
sqrt (BE^2 + (24 CE/ 7)^2 ) =
sqrt ( 33.6^2 + [24 (33.6)/ 7]^2 ) =
sqrt ( 1128.96 + 13271.04) =
sqrt ( 14400) = 120 = AE
So.....they are similar
AB corresponds to EC
7. As shown in the diagram, points B and D are on different sides of line AC. We know that Angle B=2*Angle D=60 degrees and that AC=4sqrt(3). What is the distance between the circumcenters of Triangle ABC and Triangle ADC?
line segments:
\(\text{Let $AC =4\sqrt{3}$ } \\ \text{Let $AG=GC =2\sqrt{3}$ } \\ \text{Let $AE=EC =r$ } \\ \text{Let $AF=FC =R$ } \\ \text{Let $FE = EG+GF $ } \)
The distance between the circumcenters of Triangle ABC and Triangle ADC \(= FE\)
angle:
\(\text{Let $\angle ABC = 60^{\circ} $ } \\ \text{Let $\angle ADC = \frac{\angle ABC}{2} = 30^{\circ} $ } \\ \text{Let $\angle AEC = 2*\angle ABC = 120^{\circ} $ } \\ \text{Let $\angle AFC = 2*\angle ADC = 60^{\circ} $ } \)
\(\mathbf{EG = \ ?}\)
\(\begin{array}{|rcll|} \hline AC^2 &=& 2r^2\Big(1-\cos(120^{\circ})\Big) \quad \text{$\cos$-rule} \quad | \quad \cos(120)^{\circ} = -0.5 \\ (4\sqrt{3})^2 &=& 2r^2(1+0.5) \\ 48 &=& 2r^2(1.5) \\ 24 &=& r^2(1.5) \\ r^2 &=& 16 \\\\ AG^2 + EG^2 &=& r^2 \\ (2\sqrt{3})^2 + EG^2 &=& 16 \\ 12 + EG^2 &=& 16 \\ EG^2 &=& 4 \\ \mathbf{EG} & \mathbf{=}& \mathbf{2} \\ \hline \end{array}\)
\(\mathbf{GF = \ ?}\)
\(\begin{array}{|rcll|} \hline AC^2 &=& 2R^2\Big(1-\cos(60^{\circ})\Big) \quad \text{$\cos$-rule} \quad | \quad \cos(60)^{\circ} = 0.5 \\ (4\sqrt{3})^2 &=& 2R^2(1-0.5) \\ 48 &=& 2R^2(0.5) \\ 24 &=& R^2(0.5) \\ R^2 &=& 48 \\\\ AG^2 + GF^2 &=& R^2 \\ (2\sqrt{3})^2 + GF^2 &=& 48 \\ 12 + GF^2 &=& 48 \\ GF^2 &=& 36 \\ \mathbf{GF} & \mathbf{=}& \mathbf{6} \\ \hline \end{array}\)
\(\begin{array}{|rcll|} \hline FE &=& EG+GF \\ &=& 2+6 \\ \mathbf{FE} & \mathbf{=} & \mathbf{8} \\ \hline \end{array}\)
The distance between the circumcenters of Triangle ABC and Triangle ADC is
8
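The arithmetic above can be verified numerically: the inscribed-angle law gives each circumradius as \(AC/(2\sin\theta)\), each center lies on the perpendicular bisector of \(AC\), and the two centers sit on opposite sides of \(AC\) (each center is on the same side as its acute inscribed angle, and B, D are on opposite sides):

```python
import math

AC = 4 * math.sqrt(3)
half = AC / 2                          # AG = GC = 2*sqrt(3)

# Circumradii from the inscribed-angle law R = AC / (2 sin(theta)).
r = AC / (2 * math.sin(math.radians(60)))   # triangle ABC, angle B = 60 deg
R = AC / (2 * math.sin(math.radians(30)))   # triangle ADC, angle D = 30 deg

EG = math.sqrt(r**2 - half**2)         # center-to-chord distance for ABC
GF = math.sqrt(R**2 - half**2)         # center-to-chord distance for ADC

# Centers on opposite sides of AC, so the distance between them is EG + GF.
assert math.isclose(EG + GF, 8.0)
```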
I am looking for a solution to a wave equation
$\frac{\partial^2 u}{\partial \tau^2} = \frac{\partial^2 u}{\partial \xi^2}$
in which $t_c\tau = t$, $L\xi = x$,
and $t_c = L/v_c$ is the characteristic time,
$L$ is the sample thickness,
and $v_c$ is the characteristic wave speed,
with an IC of
$\left [\frac{\partial u}{\partial \tau} \right]_{x,t=0} = \theta \left (x, t=0 \right)$
and a BC of
$\left [\frac{\partial u}{\partial \xi} \right]_{x=0,t} = \phi \left (x=0, t \right)$
I have tried the d'Alembert solution, but I get a function $u\left(\xi, \tau \right)$ that depends on the integral of $\phi$, which I don't know since it is not analytic; it also introduces two new unknowns, $f\left (\tau_0 \right)$ and $g\left (\tau_0 \right)$. Moreover, I'm actually trying to find $\frac{\partial u}{\partial \tau}$ and $\frac{\partial u}{\partial \xi}$, not $u$.
I haven't tried separation of variables, Sturm-Liouville or Fourier transform yet.
This system is similar to the Cauchy-Riemann equations.
Define numbers $H_k$ for integers $k\geq 4$ by $\sum_{x \in \mathbf{Z}[i]}x^{-k}=\frac{H_k}{k!} \omega^k$, where $\omega=\frac{\Gamma(1/4)^2}{\sqrt{2\pi}}$. These are nonzero when $4|k$, and Hurwitz proved that they are rational numbers. Numerical experiments quickly reveal some remarkable properties of these numbers. Here is a table of the $H_k$'s for $4\leq k \leq 80$; I have factored the numerators and bolded all of their factors which are $\equiv 1 \; \mathrm{mod}\; 4$:
First of all, it seems the denominator of $H_k$ is given as the product of all primes $p$ such that $(p-1) \mid k$ and either $p=2$ or $p\equiv 1 \; \mathrm{mod} \; 4$. This is in obvious analogy to the von Staudt-Clausen theorem. More surprisingly, it seems the numerator is divisible by
every prime $p$ with $p < k-4$ and $p \equiv 3 \; \mathrm{mod} \; 4$, with fairly regular exponents. This stands in marked contrast to Bernoulli numbers, whose numerators display no patterns nearly so obvious. Are either of these observations proven somewhere? Do the inert primes dividing the numerator have arithmetic significance? (The sporadic split primes dividing the numerators are known to have arithmetic significance; see the paper "Kummer's criterion for Hurwitz numbers" by Coates and Wiles.)
Edit: The denominator given above for $H_{80}$ should be $6970$. In case you want to play along, I've been calculating these in Mathematica using the code:
S[k_]:=2*(2*Pi)^k*(-BernoulliB[k]/(2*k)+Sum[DivisorSigma[k-1,n]*Exp[-2*Pi*n],{n,1,800}])/(k-1)!
Om:=Gamma[1/4]^2/Sqrt[2*Pi]
Hur[j_]:=RootApproximant[N[S[j]*j!/Om^j,240],1]
(Adjust the 240 and 800 accordingly; it gives wrong answers with e.g. 800 replaced by infinity!)
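For reference, the same numerical scheme ports to Python with only the standard library; a sketch assuming double precision suffices for small $k$ (the Mathematica code's extra working precision matters for large $k$):

```python
import math
from fractions import Fraction

def bernoulli(n):
    # B_0..B_n via the standard recurrence, exactly in rationals.
    B = [Fraction(0)] * (n + 1)
    for m in range(n + 1):
        B[m] = Fraction(1) if m == 0 else -sum(
            Fraction(math.comb(m + 1, j)) * B[j] for j in range(m)
        ) / (m + 1)
    return B[n]

def sigma(k, n):
    # divisor power sum sigma_k(n)
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def hurwitz(k, terms=40):
    # S(k) is the lattice sum, computed from the Eisenstein q-expansion at q = e^{-2*pi}.
    q = math.exp(-2 * math.pi)
    S = 2 * (2 * math.pi)**k * (
        -float(bernoulli(k)) / (2 * k)
        + sum(sigma(k - 1, n) * q**n for n in range(1, terms))
    ) / math.factorial(k - 1)
    omega = math.gamma(0.25)**2 / math.sqrt(2 * math.pi)
    return S * math.factorial(k) / omega**k

assert abs(hurwitz(4) - Fraction(1, 10)) < 1e-9   # H_4 = 1/10
assert abs(hurwitz(8) - Fraction(3, 10)) < 1e-8   # H_8 = 3/10
```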
A vast majority of the answers I have posted on Stack Exchange have done fine, but I experienced a highly unexpected opposition today, about a basic problem in special relativity.
The question by seeking_infinity was: Refer to "The Classical Theory of Fields" by Landau and Lifshitz (Chap. 3). Consider a disk of radius \(R\); then the circumference is \(2\pi R\). Now make this disk rotate at a velocity of the order of \(c\) (the speed of light). Since the velocity is perpendicular to the radius vector, the radius does not change according to the observer at rest. But a length element at the boundary of the disk, parallel to the velocity vector, will experience length contraction. Thus the circumference-to-radius ratio is smaller than \(2\pi\) when the disk is rotating. But this violates the rules of Euclidean geometry. What is wrong here?

It is clearly a totally rudimentary problem in special relativity. It has its own name, and if you search for Ehrenfest paradox, you quickly find out that there's been a lot of debate in the history of physics – relative to what one would expect for such a basic high-school problem in classical physics. Born, Ehrenfest, Kaluza, von Laue, Langevin, Rosen, Eddington, and Einstein have participated, among many others.
My obviously correct answer immediately got at least two downvotes.
What is wrong is the idea that one can actually make the disk rotate and have it remain perfectly rigid. Many people don't like this answer for some reason! In reality, what this correct argument shows is that relativity doesn't admit the existence of any perfectly rigid bodies. Be sure that despite all the confused users' negative votes, this is perfectly basic, settled, and indisputable textbook material that every mature physicist knows. The first sentence of this paragraph contains a link to the Gravity Probe B website. When one takes a solid disk and makes it rotate, it will do all kinds of things resulting from the "imperfection of the material". It may be torn apart by the centrifugal force; if it isn't, it will either tear along roughly radial lines, or it will bend (the disk won't be planar anymore) because the circumference really shrinks by the Lorentz factor. If there existed a material that is perfectly rigid and cannot stretch or bend or tear, then it would be impossible to make it spin. However, the non-existence of such a material may be shown even microscopically. It is not possible to "order" a solid object to keep its proper distances at every moment, because the distance between two atoms (or points on the solid object) may only be measured with a delay \(\Delta t = \Delta x / c\), simply because no information may move faster than light. That's why it's always possible to squeeze any rod on one end while the opposite end of the rod won't move, at least for this \(\Delta t = \Delta x / c\). In fact, the delay will be much larger than that, dictated basically by the speed of sound, not by the speed of light. Whatever material you have, relativity guarantees that it can be squeezed as well as stretched as well as bent.
There actually exists a more popular answer that claims that rigid bodies are perfectly OK and possible in relativity – and that the paradox is cured by the "usual" reason, namely by the relativity of simultaneity.
But it cannot be cured in this way. Why? Because you may have a point \(A\) on a rotating disk at the distance \(R\) from the axis of rotation, and a nearby point \(B\) on the non-rotating table beneath the disk. The point is that the points \(A,B\) repeatedly touch each other as the disk rotates. It's because the angular coordinate along the circumference is periodic.
So the observers sitting at \(A\) and \(B\) may talk to each other about their measurements of the circumference. The guy on the table will see the circumference shrink, but it can't shrink, because the laws of Euclidean geometry are still valid in this inertial system (gravity and GR curvature are negligible) and they imply that the circumference is \(2\pi R\). On the other hand, the guy on the rotating disk may distribute his numerous equally rotating friends along the circle of radius \(R\). They may measure their proper distances in the local frames, and the sum of these distances all the way around to the guy \(A\) will clearly be \(2\pi R \sqrt{1-v^2/c^2}\).
This would be a real paradox. There is no paradox because one of the assumptions is wrong. And the wrong assumption is that a perfectly rigid object may exist and may be brought into rotation. It just can't. This thought experiment is a macroscopic proof of the non-existence of rigid bodies. However, as I mentioned, one may also easily present the microscopic proofs.
A rod can't be "unbendable" or "unsqueezable" or "unstretchable" because that would mean there is something in the rod that guarantees its prescribed proper length at all times. If you bent this rod or pushed it or pulled it, it would immediately have to change its shape along the whole length. But that can't happen, because in this way you could use the rod to send signals faster than light. With some extra thinking, you can convince yourself that the actual speed at which the squeezing or stretching or bending spreads through the rod is the speed of sound – something that is guaranteed to be lower than the speed of light (and is lower roughly by a factor of a million in the real world).
This non-existence of perfectly rigid rods in relativity should be totally obvious for rods. But it holds for disks, too. If you push a disk (a vinyl record) at a particular point and want to make it spin, the material gets squeezed "in front" of your finger and stretched "behind" your finger. Sound waves will move along the vinyl record. Sound waves mean that pieces of the disk may stretch or squeeze, and that's what will happen. The disk may also bend and become non-planar to adapt to the Lorentz-contracted circumference. At some high enough speed, it may crack along radial cracks. Or it may be torn apart by the centrifugal force along concentric cracks. Or something in between.
At any rate, the non-existence of perfectly rigid bodies is undoubtedly a characteristic, almost defining, implication of relativity.
I am pretty amazed that even in 2015, 110 years after Einstein presented his relativity, this very simple point remains controversial. Well, I am convinced that at least since 1911, almost all good physicists have agreed on what the correct answer basically is.
Well, in 1909, Max Born introduced "rigid motion". Yes, decades before quantum mechanics! ;-) In the same year, Ehrenfest presented the "paradox" – which is often named after him – and gave the right basic solution. Motions of extended objects can simply almost never be Born rigid! In 1910, Gustav Herglotz and Fritz Noether correctly argued that only 3 degrees of freedom may be picked for rigid objects. But the disk has many more (infinitely many, as Max von Laue argued in 1911), so it's impossible to make the disk spin, as Ehrenfest had previously correctly said. (Von Laue was the main guy who offered the proof of Ehrenfest's conclusion using the impossibility of sending faster-than-light signals.)
In 1910, Max Planck pointed out that one has to distinguish a non-rotating disk described in various coordinate systems (which is OK, of course), a permanently rotating disk constructed to rotate and observed in many frames (which is also OK, as long as you don't want it to stop), and the actual change of the angular velocity of the rigid disk (which is not allowed). He correctly said that the last problem among these three does require one to study elasticity etc. in the real world.
In the same year, Theodor Kaluza made a comment, without any argument, that the disk itself has the geometry of the hyperbolic plane. Well, it depends from which side you look at it. The 2+1D disk embedded in 3+1D is actually spherical (positive curvature) if you allow it to bend. If you don't allow it to bend, it's obviously flat, and the flatness of a 2+1D manifold embedded in the 3+1D space does not depend on whether or not you choose a rotating coordinate system or skeleton inside this 2+1D space! So Kaluza was pretty much wrong, as far as I can say.
In 1916, Einstein began to combine these insights with the new general relativity. He realized that GR allows the space to be curved, or Riemannian – and it is basically useful to use this for the rotating frame, too. Well, a problem with that is that a Minkowski space is flat – curvature-free – in all coordinate systems. Being flat is a coordinate-independent property. But if you eliminate the Coriolis force and the "mixed components" of the metric tensor and only acknowledge the centrifugal force, the rotating disk is a great model of the gravitational field, one from which Einstein had derived the correct (in the leading approximation) gravitational red shift, too.
In the 1920s, 1930s etc. people like Eddington and Langevin began to add complications, allowed both the radius and the circumference to adapt, and introduced new coordinate systems etc. etc. Various people introduced some errors, fixed other people's real errors, and claimed that other people had errors that were not actually errors. It became a messy history. But the basic Ehrenfest problem is still very simple and the answer is totally indisputable.
A simple Internet search finds lots of pages, books, and papers that say that perfectly rigid bodies aren't allowed according to relativity. But for certain reasons, this obviously true, fundamental, and catchy slogan isn't generally known and appreciated and if you say it and watch the reactions, you might even think that it's controversial! It's crazy.
I think that one reason why many laymen – including third-class physicists – don't like the correct answer is that the answer says that "one can't do something". They prefer the moronic "yes we can" answers to every question, even if these answers are incorrect. They confuse science – which impartially looks for the truth (and the true answer to every Yes/No question can a priori be both Yes and No) – with some kind of never-disappearing faith in one's omnipotence, never-ending self-confidence, or some sort of predetermined wishful thinking. It's the same misunderstanding of the "truth in science" that turns people into fanatical fans of cold fusion, warp drives, and tons of similar garbage. Sorry, such attitudes are not scientific, and many "conclusions" that this attitude produces are demonstrably incorrect according to the scientific method. |
I have some trouble with the concept of a "pseudotensor". Wikipedia distinguishes between that and a tensor density (e.g., here, where both concepts are used simultaneously) while e.g. in Eric Weisstein's Mathworld they say that
A pseudotensor is sometimes also called a tensor density.
These are two manifestly incompatible statements, and if asked, I'm inclined to believe that Wikipedia's definition of a pseudotensor, other than as a synonym for a tensor density, cannot be rigorously formulated, i.e., no such thing exists. The purpose of this question is to confirm or refute this claim, mathematically. Some details follow to justify my case. "Pseudo" is used in its "Wikipedia" meaning: changing sign under inversion, whatever inversion means (discussed below).
I would appreciate it if answerers read the question in full and consider my problem specifically (and using my "language", if possible), rather than going with "stock" definitions or examples. No "looking at something in a mirror" either unless you can put that into formulas and argue it's a passive or active transform of some kind. Also, bear in mind that the "usual" sources in your country / curriculum may not be readily available to me so I would greatly appreciate it if when citing them you could also quote the relevant part.
With this out of the way, I appreciate that all of these quantities are defined through the transformation formulas their components obey in given coordinate systems. However, what most sources neglect is the distinction between passive and active transforms, and, worse, between the orientation of a vector space and an orientation of its basis. For simplicity, let us only assume linear transforms of linear spaces; this should be WLOG for the purposes of the question as stated.
So, under a passive transform, one compares the decompositions of the same quantity in two bases $E = (e_1, \ldots, e_n)$ and $F = (f_1, \ldots, f_n)$, where $f_i = S^j_{\ i} e_j$, $S = (S^k_{\ l})_{k,l=1}^n \in GL(n)$. A $k$ times covariant and $l$ times contravariant tensor density $T$ of weight $w$ transforms, by the definition I know, as
$$\tilde T^{i_1,\ldots,i_l}_{j_1,\ldots,j_k} = (\det S)^w\ S^{m_1}_{\hphantom{m_1}j_1} \ldots S^{m_k}_{\hphantom{m_k}j_k}\ (S^{-1})^{i_1}_{\hphantom{i_1}n_1} \ldots (S^{-1})^{i_l}_{\hphantom{i_l}n_l}\ T\strut^{n_1, \ldots, n_l}_{m_1, \ldots, m_k}$$ where $T\strut^\ldots_\ldots$ and $\tilde T^\ldots_\ldots$ denote the components in $E$ and in $F$, respectively. (The sign convention of $w$ may differ.)
Since we are looking at the same object in different systems of coordinates, there is no reason the object should change, only its representation. I've been told ([1], [2]) that a typical example of a pseudoscalar in $\mathbb{R}^3$ is the triple product, $(\vec a,\vec b,\vec c) = \vec a \cdot (\vec b \times \vec c)$, since it "changes sign" under "inversion". Consider, though: in my notation, the inversion is just another $GL(n)$ matrix $S^i_{\ j} = -\delta^i_{\ j}$, and the "coordinate-less" definition of the triple product is$$s := (a,b,c) = \omega(a,b,c)$$where $\omega$ is a certain 3-form associated with the space, known as its volume form. In a "right-handed" orthonormal basis $E$, that is,$$e_i \cdot e_j = \delta_{ij}, \qquad \omega(e_1,e_2,e_3) = +1,$$the components of $\omega$ are$$\omega_{ijk} = \epsilon_{ijk}$$and thus we can express $s$ as$$s = \omega_{ijk} a^i b^j c^k = \epsilon_{ijk} a^i b^j c^k.$$If we now pick a basis $F = (-e_1,-e_2,-e_3)$ (which is left-handed), the quantity $s$ transforms as$$\tilde s = \tilde \omega_{ijk} \tilde a^i \tilde b^j \tilde c^k,$$where all of $a,b,c$ are contravariant and $\omega$ is fully covariant, so$$\tilde a^i = -a^i, \quad \tilde b^j = -b^j, \quad \tilde c^k = -c^k$$and also$$\tilde \omega_{ijk} = (-1)^3 \omega_{ijk} = -\epsilon_{ijk}$$by the above relation. Thus$$\tilde s = -\epsilon_{ijk} (-a^i) (-b^j) (-c^k) = +\epsilon_{ijk} a^i b^j c^k = +s,$$as expected. Of course, just looking at the same quantity (be it $s$, $\omega$, a cross product or anything else) in a different basis, there is no reason the quantity should change, only perhaps its numeric description – that's what a passive transform means.
I'm not opposed to believing that $s$ is a pseudoscalar, but I'm firm in saying that a passive transform can't manifest a distinction from a proper scalar. This is because although the orientation of two bases can easily differ, the orientation of the space itself is its inherent property, unaffected by what basis we choose to decompose its elements, so there are no "orientation-changing" passive transforms. In other words, while the above result shows that $s$ is consistent with being a scalar (of density zero), it does not show it could not actually be a pseudoscalar; the coordinate inversion just did not test this, because no change in orientation (of the space) actually took place.
(The very same reasoning works with no significant changes if $s$ is replaced by another typical example, the vector product, as defined using the Hodge star.)
This leaves active transforms that could possibly distinguish a pseudoscalar from a proper scalar. Well, defining (for comparison) $T = -\mathbb{I}$ and introducing new (actively transformed) vectors$$A^i = T^i_{\ j} a^j = -\delta^i_{\ j} a^j = -a^i, \quad B^i = -b^i, \quad C^i = -c^i$$and their triple product in the original vector space,$$S = \omega(A,B,C) = \epsilon_{ijk} A^i B^j C^k = (-1)^3 \epsilon_{ijk} a^i b^j c^k,$$we indeed obtain $-s$ (in accordance with the cited sources). But how is it justified to tell anything about the properties of $s$ when $S$ is a new quantity (only defined analogously)?
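Numerically, the active-transform behaviour is easy to check (a small sketch of my own; the `triple` helper uses the fact that $\epsilon_{ijk}a^ib^jc^k$ is exactly the determinant of the matrix with columns $a,b,c$):

```python
import numpy as np

rng = np.random.default_rng(0)

def triple(a, b, c):
    # epsilon_{ijk} a^i b^j c^k = det of the matrix with columns a, b, c
    return np.linalg.det(np.column_stack([a, b, c]))

a, b, c = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))          # a generic active GL(3) transform

s = triple(a, b, c)
S = triple(T @ a, T @ b, T @ c)

assert np.isclose(S, np.linalg.det(T) * s)   # S = det(T) * s for any T
assert np.isclose(triple(-a, -b, -c), -s)    # inversion T = -I: det(-I) = -1
```

This is just the multiplicativity of the determinant, and it already shows the $\det T$ scaling discussed next, not only the sign flip.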
What's worse, if we take an "arbitrary" active transform given by a generic $T \in GL(n)$ and recompute $S = (A,B,C)$ for the images $A, B, C$, we get$$S = \epsilon_{ijk}\ T^i_{\ p} T^j_{\ q} T^k_{\ r}\ a^p b^q c^r = \det T\ \epsilon_{pqr} a^p b^q c^r = \det T \cdot s,$$so if we accept that $s$ is a pseudoscalar just because $S$ had a different sign for a particular choice of a $\det < 0$ matrix, we can see that the same reasoning would lead to $S$ changing its magnitude as well (making it actually behave like a scalar density, rather than a pseudoscalar, in terms of its transformation as expressed using $s$). One could argue that active transforms by $T \not\in O(3)$ are not physical, but then again – improper rotations can't be "reached" in the real world any better than e.g. volume-nonpreserving ones. |
WHY?
Many machine learning problems involve loss functions that contain random variables. To perform backpropagation, we need to estimate the gradient of the loss function.
WHAT?
This paper formalizes the computation of gradients of loss functions with computation graphs. Assume we want to compute $\frac{\partial}{\partial\theta}\mathbb{E}_x[f(x)]$. There are two different ways that the random variable $x$ can be influenced by $\theta$.
Score Function Estimator
If the probability distribution is parametrized by $\theta$, the gradient can be estimated with the score function estimator:

$$\frac{\partial}{\partial\theta}\mathbb{E}_x[f(x)] = \mathbb{E}_x\left[f(x)\frac{\partial}{\partial\theta}\log p(x; \theta)\right]$$
The score function estimator is also known as the likelihood ratio estimator, or REINFORCE.

Pathwise Derivative
If $x$ is deterministically influenced by another random variable $z$, which is itself influenced by $\theta$, the gradient can be estimated with the pathwise derivative:

$$\frac{\partial}{\partial\theta}\mathbb{E}_x[f(x)] = \mathbb{E}_z\left[\frac{\partial}{\partial\theta}f(x(z, \theta))\right]$$
If $\theta$ appears both in the probability distribution and inside the expectation,

$$\frac{\partial}{\partial\theta}\mathbb{E}_{z\sim p(\cdot;\theta)}[f(x(z,\theta))] = \mathbb{E}_{z\sim p(\cdot;\theta)} \left[\frac{\partial}{\partial\theta}f(x(z,\theta)) + \left(\frac{\partial}{\partial\theta}\log p(z;\theta)\right)f(x(z,\theta))\right]$$
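As a concrete sanity check (a toy example of my own, not from the paper): take $x \sim \mathcal{N}(\theta, 1)$ and $f(x) = x^2$, so that $\frac{\partial}{\partial\theta}\mathbb{E}[f(x)] = 2\theta$. Both estimators recover this value, with very different variances:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 1.5, 200_000
z = rng.standard_normal(n)
x = theta + z                        # reparameterized sample: x ~ N(theta, 1)
f = x**2                             # f(x) = x^2, so dE[f]/dtheta = 2*theta

# score function estimator: f(x) * d/dtheta log p(x; theta), score = x - theta
score_est = f * (x - theta)
# pathwise estimator: df/dx * dx/dtheta = 2x * 1
path_est = 2 * x

print(score_est.mean(), path_est.mean())   # both near 2*theta = 3.0
print(score_est.var(), path_est.var())     # score variance is far larger
```

The pathwise estimator is usually preferable when it applies, precisely because of this variance gap.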
To formalize this with a directed acyclic graph, the paper represents deterministic nodes with squares and stochastic nodes with circles (examples and further notation are given in the paper's figures). Given that the differentiability requirements hold, the gradient of the sum of costs can be represented as two equivalent equations. The paper suggests a surrogate loss function which converts the stochastic graph into a deterministic graph:

$$\frac{\partial}{\partial\theta}\mathbb{E}\left[\sum_{c\in C}c\right] = \mathbb{E}\left[\frac{\partial}{\partial\theta} L(\Theta, S)\right], \qquad L(\Theta,S) := \sum_w\log p(w \mid \text{DEPS}_w)\hat{Q}_w + \sum_{c\in C}c(\text{DEPS}_c)$$

To reduce the variance of the score function estimator, we can subtract a baseline estimate of the function (the full algorithm is given in the paper).
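To illustrate the variance-reduction role of a baseline (my own toy sketch, not the paper's algorithm): subtracting a constant $b \approx \mathbb{E}[f]$ leaves the score function estimator unbiased, since $\mathbb{E}[b\,\partial_\theta \log p] = 0$, but shrinks its variance:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 1.5, 200_000
x = theta + rng.standard_normal(n)    # x ~ N(theta, 1)
f = x**2                              # dE[f]/dtheta = 2*theta = 3.0
score = x - theta                     # d/dtheta log p(x; theta)

plain = f * score
centered = (f - f.mean()) * score     # constant baseline b ~= E[f(x)]

print(plain.mean(), centered.mean())  # both estimate 2*theta = 3.0
print(plain.var(), centered.var())    # the baseline lowers the variance
```

(Using the sample mean of `f` as the baseline introduces an $O(1/n)$ bias, which is negligible here; a held-out or learned baseline avoids even that.)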
Critic
A great summary and clean formulation of how to compute gradients of functions with stochastic variables. |
I am a newbie in stat. I am working on the Laplace distribution for my algorithm.
Could you tell me, first, what the four moments of the Laplace distribution are? Does it have infinite tails like the Cauchy distribution? What is the empirical rule for it?
Here is a quick check using a symbolic algebra package ...
Let $X \sim \text{Laplace}(\mu, \sigma)$ with pdf $f(x)$:
Then, the first 4 raw moments $E[X^i]$ are given by:
where I am using the Expect function from the mathStatica package for Mathematica.
It is worth noting that the $3^\text{rd}$ and $4^\text{th}$ raw moments are different to those given in the answer above.
It's been a while and it looks like this hasn't been answered. I'll provide an answer and hopefully we can mark this as correct. I'll answer the questions in the order asked, using the parameterization of the Wikipedia page
$$f(x\mid\mu,b)= \frac{1}{2b} \exp \left( -\frac{|x-\mu|}{b} \right), x\in \mathbb{R}. $$
For the case $\mu=0$, the first four moments are: $$\mathbb{E}(X)=0, \quad \mathbb{E}(X^2)=2b^2, \quad \mathbb{E}(X^3)=0, \quad \text{and } \mathbb{E}(X^4) = 24b^4.$$ As whuber indicates in a comment, you can relate a non-central random variable $Y = X + \mu$ via a binomial expansion of $Y^k=(X+\mu)^k$. The value $\mu=0$ is often chosen to simplify the calculation and to build up to the solution.
The Laplace has infinite tails like the Cauchy; the support is $x \in (-\infty, \infty)$.
For the empirical rule, I'm assuming the OP is using the shorthand for the probability of observations within $\sigma$ of the mean $\mu$, within $2\sigma$ of $\mu$, and within $3\sigma$ of $\mu$, respectively. These probabilities are (0.75688, 0.94089, 0.98563) to 5 significant digits, respectively.
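These numbers are easy to reproduce (a small sketch under the same parameterization; for $\mu=0$ the even raw moments are $\mathbb{E}[X^{2m}] = (2m)!\,b^{2m}$, since $|X|/b \sim \text{Exp}(1)$, and $P(|X-\mu| \le k\sigma) = 1 - e^{-k\sqrt{2}}$ with $\sigma = b\sqrt{2}$):

```python
from math import exp, factorial, sqrt

def laplace_raw_moment(k, b=1.0):
    """E[X^k] for X ~ Laplace(0, b): zero for odd k, k! * b^k for even k."""
    return 0.0 if k % 2 else factorial(k) * b**k

def laplace_within_k_sigma(k):
    """P(|X - mu| <= k * sigma) for a Laplace r.v., where sigma = b * sqrt(2)."""
    return 1 - exp(-k * sqrt(2))

print([laplace_raw_moment(k) for k in range(1, 5)])         # [0.0, 2.0, 0.0, 24.0]
print([round(laplace_within_k_sigma(k), 5) for k in (1, 2, 3)])
# [0.75688, 0.94089, 0.98563]
```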
A couple of different ways to calculate the expected values are:
a. Direct integration (usually splitting the integral at the point $\mu$ where the sign changes).
b. Differentiate the moment generating function $m$ times and set $t=0$ to get the $m$th moment.
c. Formulate the Laplace random variable (r.v.) as a scale mixture of Normal and Exponential random variable. Then use conditional expectations.
Note approach (c) is only easier than (a) if you know the moments of the Normal and Exponential random variables – or can calculate them more easily than directly calculating the moments of the Laplace distribution. |
I wouldn't have asked this question if I hadn't seen this image:
From this image it seems like there are reals that are neither rational nor irrational (dark blue), but is it so or is that illustration incorrect?
A real number is irrational if and only if it is not rational. By definition any real number is either rational or irrational.
I suppose the creator of this image chose this representation to show that rational and irrational numbers are both part of the bigger set of real numbers. The dark blue area is actually the empty set.
This is my take on a better representation:
Feel free to edit and improve this representation to your liking. I've uploaded the SVG source code to pastebin.
No. The definition of an irrational number is a number which is not a rational number, namely it is not the ratio between two integers.
If a real number is not rational, then by definition it is irrational.
However, if you think about algebraic numbers, which are rational numbers and irrational numbers which can be expressed as roots of polynomials with integer coefficients (like $\sqrt2$ or $\sqrt[4]{12}-\frac1{\sqrt3}$), then there are irrational numbers which are not algebraic. These are called transcendental numbers.
Irrational means not rational. Can something be not rational, and not not rational? Hint: no.
Of course, the "traditional" answer is no, there are no real numbers that are not rational nor irrational. However, being the contrarian that I am, allow me to provide an alternative interpretation which gives a different answer.
In intuitionistic logic, where the law of excluded middle (LEM) $P\vee\lnot P$ is rejected, things become slightly more complicated. Let $x\in \Bbb Q$ mean that there are two integers $p,q$ with $x=p/q$. Then the traditional interpretation of "$x$ is irrational" is $\lnot(x\in\Bbb Q)$, but we're going to call this "$x$ is not rational" instead. The statement "$x$ is not not rational", which is $\lnot\lnot(x\in\Bbb Q)$, is implied by $x\in\Bbb Q$ but not equivalent to it.
Consider the equation $0<|x-p/q|<q^{-\mu}$ where $x$ is the real number being approximated and $p/q$ is the rational approximation, and $\mu$ is a positive real constant. We measure the accuracy of the approximation by $|x-p/q|$, but don't let the denominator (and hence also the numerator, since $p/q$ is near $x$) be too large by demanding that the approximation be within a power of $q$. The larger $\mu$ is, the fewer pairs $(p,q)$ satisfy the equation, so we can find the least upper bound of $\mu$ such that there are infinitely many coprime solutions $(p,q)$ to the equation, and this defines the irrationality measure $\mu(x)$. There is a nice theorem from number theory that says that the irrationality measure of any irrational algebraic number is $2$, and the irrationality measure of a transcendental number is $\ge2$, while the irrationality measure of any rational number is $1$.
Thus there is a measurable gap between the irrationality measures of rational and irrational numbers, and this yields an alternative "constructive" definition of irrational: let $x\in\Bbb I$, read "$x$ is irrational", if $|x-p/q|<q^{-2}$ has infinitely many coprime solutions. Then $x\in\Bbb I\to x\notin\Bbb Q$, i.e. an irrational number is not rational, and in classical logic $x\in\Bbb I\leftrightarrow x\notin\Bbb Q$, so this is equivalent to the usual definition of irrational. This is viewed as a more constructive definition because rather than asserting a negative (that $x=p/q$ yields a contradiction), it instead gives an infinite sequence of good approximations which verifies the irrationality of the number.
This approach is also similar to the continued fraction method: irrational numbers have infinite simple continued fraction representations, while rational numbers have finite ones, so given an infinite continued fraction representation you automatically know that the limit cannot be rational.
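To make the continued-fraction point concrete (a small sketch of my own; for $\sqrt{2} = [1;2,2,2,\ldots]$ the convergents $p/q$ satisfy the Pell relation $p^2 - 2q^2 = \pm 1$, which forces $|\sqrt{2} - p/q| < 1/q^2$):

```python
from fractions import Fraction
from math import sqrt

def sqrt2_convergents(n):
    """First n continued-fraction convergents of sqrt(2) = [1; 2, 2, 2, ...]."""
    out = []
    p_prev, q_prev, p, q = 1, 1, 3, 2            # convergents 1/1 and 3/2
    for _ in range(n):
        out.append(Fraction(p_prev, q_prev))
        p_prev, q_prev, p, q = p, q, 2 * p + p_prev, 2 * q + q_prev
    return out

for c in sqrt2_convergents(10):
    p, q = c.numerator, c.denominator
    assert p * p - 2 * q * q in (-1, 1)          # Pell relation, checked exactly
    assert abs(sqrt(2) - p / q) < 1 / q**2       # the witnesses for mu >= 2
```

The infinite family of solutions to $|\sqrt{2} - p/q| < q^{-2}$ produced here is exactly the kind of positive certificate of irrationality described above.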
The bad news is that because intuitionistic or constructive logic is strictly weaker than classical logic, it does not prove anything that classical logic cannot prove. Since classical logic proves that every number is rational or irrational, it does not prove that there is a non-rational non-irrational number (assuming consistency), so intuitionistic logic also cannot prove the existence of a non-rational non-irrational number. It just can't prove that this is impossible (it might be true, for some sense of "might"). On the other hand, there should be a model of the reals with constructive logic + $\lnot$LEM, such that there is a non-rational non-irrational number, and I invite any constructive analysts to supply such examples in the comments.
Every real number is either rational or irrational. The picture is not a good illustration, I think. Though notice that a number cannot be both irrational and rational (in the picture, the intersection is empty).
We can represent real numbers on a line, i.e. the real line, which contains the rationals and irrationals. By the completeness property of the real numbers, the real line has no gaps. So there is no real number that is neither rational nor irrational.
The set of irrational numbers is the complement of the set of rational numbers, in the set of real numbers. By definition, all real numbers must be either rational or irrational.
|
WHY?
The visual question answering (VQA) task is to answer natural language questions based on images, requiring the extraction of information from both images and text. Stacked Attention Networks (SAN) stack several layers of attention to answer complicated questions that require reasoning. Multimodal Residual Networks (MRN) point out that the weighted averaging of attention layers in SAN acts as a bottleneck restricting the information about the interaction between questions and images.
WHAT?
To address the bottleneck issue, MRN increases the expressiveness of the interaction between question and image by performing an elementwise product instead of an attention mechanism:

$$H_0 = \mathbf{q}$$
$$H_1(\mathbf{q}, \mathbf{v}) = W_{\mathbf{q}'}^{(1)}\mathbf{q} + \mathcal{F}^{(1)}(\mathbf{q}, \mathbf{v})$$
$$\mathcal{F}^{(k)}(\mathbf{q}, \mathbf{v}) = \sigma(W^{(k)}_{\mathbf{q}}\mathbf{q})\odot\sigma(W_2^{(k)}\sigma(W_1^{(k)}\mathbf{v}))$$
$$H_L(\mathbf{q}, \mathbf{v}) = W_{\mathbf{q}'}\mathbf{q} + \sum^L_{l=1}W_{\mathcal{F}^{(l)}}\mathcal{F}^{(l)}(H_{l-1}, \mathbf{v})$$
$$W_{\mathbf{q}'} = \prod^L_{l=1}W^{(l)}_{\mathbf{q}'}, \qquad W_{\mathcal{F}^{(l)}} = \prod^L_{m=l+1} W^{(m)}_{\mathbf{q}'}$$
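A shape-level NumPy sketch of one such residual block (toy sizes and random weights of my choosing; $\sigma$ stands for the network's nonlinearity, taken here to be $\tanh$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # toy joint embedding size (assumption)
q, v = rng.standard_normal(d), rng.standard_normal(d)

# random stand-ins for the learned weight matrices
W_shortcut, W_q, W_1, W_2 = (rng.standard_normal((d, d)) for _ in range(4))

def F(q, v, sigma=np.tanh):
    """Joint residual function: elementwise product of the two embeddings."""
    return sigma(W_q @ q) * sigma(W_2 @ sigma(W_1 @ v))

H1 = W_shortcut @ q + F(q, v)            # shortcut mapping plus the residual
```

The elementwise product lets every coordinate of the question embedding gate the corresponding coordinate of the image embedding, instead of collapsing the interaction into attention weights.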
So?
MRN achieved state-of-the-art results on both the open-ended and multiple-choice tasks of the VQA dataset.
Critic
There may be a more sophisticated way to decide the depth of reasoning instead of just stacking numerous layers. |
I wouldn't have asked this question if I hadn't seen this image:
From this image it seems like there are reals that are neither rational nor irrational (dark blue), but is it so or is that illustration incorrect?
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community
I wouldn't have asked this question if I hadn't seen this image:
From this image it seems like there are reals that are neither rational nor irrational (dark blue), but is it so or is that illustration incorrect?
A real number is irrational if and only if it is not rational. By definition any real number is either rational or irrational.
I suppose the creator of this image chose this representation to show that rational and irrational numbers are both part of the bigger set of real numbers. The dark blue area is actually the empty set.
This is my take on a better representation:
Feel free to edit and improve this representation to your liking. I've oploaded the SVG sourcecode to pastebin.
No. The definition of an irrational number is a number which is not a rational number, namely it is not the ratio between two integers.
If a real number is not rational, then by definition it is irrational.
However, if you think about
algebraic numbers, which are rational numbers and irrational numbers which can be expressed as roots of polynomials with integer coefficients (like $\sqrt2$ or $\sqrt[4]{12}-\frac1{\sqrt3}$), then there are irrational numbers which are not algebraic. These are called transcendental numbers.
Irrational means not rational. Can something be not rational, and not not rational? Hint: no.
Of course, the "traditional" answer is no, there are no real numbers that are not rational nor irrational. However, being the contrarian that I am, allow me to provide an alternative interpretation which gives a different answer.
In intuitionistic logic, where the law of excluded middle (LEM) $P\vee\lnot P$ is rejected, things become slightly more complicated. Let $x\in \Bbb Q$ mean that there are two integers $p,q$ with $x=p/q$. Then the traditional interpretation of "$x$ is irrational" is $\lnot(x\in\Bbb Q)$, but we're going to call this "$x$ is not rational" instead. The statement "$x$ is not not rational", which is $\lnot\lnot(x\in\Bbb Q)$, is implied by $x\in\Bbb Q$ but not equivalent to it.
Consider the equation $0<|x-p/q|<q^{-\mu}$ where $x$ is the real number being approximated and $p/q$ is the rational approximation, and $\mu$ is a positive real constant. We measure the accuracy of the approximation by $|x-p/q|$, but don't let the denominator (and hence also the numerator, since $p/q$ is near $x$) be too large by demanding that the approximation be within a power of $q$. The larger $\mu$ is, the fewer pairs $(p,q)$ satisfy the equation, so we can find the least upper bound of $\mu$ such that there are infinitely many coprime solutions $(p,q)$ to the equation, and this defines the irrationality measure $\mu(x)$. There is a nice theorem from number theory that says that the irrationality measure of any irrational algebraic number is $2$, and the irrationality measure of a transcendental number is $\ge2$, while the irrationality measure of any rational number is $1$.
Thus there is a measurable gap between the irrationality measures of rational and irrational numbers, and this yields an alternative "constructive" definition of irrational: let $x\in\Bbb I$, read "$x$ is irrational", if $|x-p/q|<q^{-2}$ has infinitely many coprime solutions. Then $x\in\Bbb I\to x\notin\Bbb Q$, i.e. an irrational number is not rational, and in classical logic $x\in\Bbb I\leftrightarrow x\notin\Bbb Q$, so this is equivalent to the usual definition of irrational. This is viewed as a more constructive definition because rather than asserting a negative (that $x=p/q$ yields a contradiction), it instead gives an infinite sequence of good approximations which verifies the irrationality of the number.
This approach is also similar to the continued fraction method: irrational numbers have infinite simple continued fraction representations, while rational numbers have finite ones, so given an infinite continued fraction representation you automatically know that the limit cannot be rational.
The bad news is that because intuitionistic or constructive logic is strictly weaker than classical logic, it does not prove anything that classical logic cannot prove. Since classical logic proves that every number is rational or irrational, it does not prove that there is a non-rational non-irrational number (assuming consistency), so intuitionistic logic also cannot prove the existence of a non-rational non-irrational number. It just can't prove that this is impossible (it
might be true, for some sense of "might"). On the other hand, there should be a model of the reals with constructive logic + $\lnot$LEM, such that there is a non-rational non-irrational number, and I invite any constructive analysts to supply such examples in the comments.
Every real number is either rational or irrational. I don't think the picture is a good illustration, though it does show that a number cannot be both rational and irrational (in the picture the intersection is empty).
We can represent real numbers on a line, i.e. the real line, which contains the rationals and the irrationals. By the completeness property of the real numbers, the real line has no gaps, so there is no real number that is neither rational nor irrational.
The set of irrational numbers is the complement of the set of rational numbers, in the set of real numbers. By definition, all real numbers must be either rational or irrational.
An engineer measured the Brinell hardness of 25 pieces of ductile iron that were subcritically annealed. The resulting data were:
170 167 174 179 179 187 179 183 179 156 163 156 187 156 167 156 174 170 183 179 174 179 170 159 187
The engineer hypothesized that the mean Brinell hardness of
all such ductile iron pieces is greater than 170. Therefore, he was interested in testing the hypotheses H₀: μ = 170 against Hₐ: μ > 170.
The engineer entered his data into Minitab and requested that the "one-sample
t-test" be conducted for the above hypotheses. He obtained the following output:

Descriptive Statistics
N    Mean     StDev   SE Mean   95% Lower Bound
25   172.52   10.31   2.06      168.99
$\mu$: mean of Brinell
Test
Null hypothesis H₀: $\mu$ = 170
Alternative hypothesis H₁: $\mu$ > 170
T-Value   P-Value
1.22      0.117
The output tells us that the average Brinell hardness of the
n = 25 pieces of ductile iron was 172.52 with a standard deviation of 10.31. (The standard error of the mean "SE Mean", calculated by dividing the standard deviation 10.31 by the square root of n = 25, is 2.06). The test statistic t* is 1.22, and the P-value is 0.117.
If the engineer set his significance level α at 0.05 and used the critical value approach to conduct his hypothesis test, he would reject the null hypothesis if his test statistic
t* were greater than 1.7109 (determined using statistical software or a t-table):
Since the engineer's test statistic,
t* = 1.22, is not greater than 1.7109, the engineer fails to reject the null hypothesis. That is, the test statistic does not fall in the "critical region." There is insufficient evidence, at the \(\alpha\) = 0.05 level, to conclude that the mean Brinell hardness of all such ductile iron pieces is greater than 170.
If the engineer used the
P-value approach to conduct his hypothesis test, he would determine the area under a t-distribution curve with n − 1 = 24 degrees of freedom to the right of the test statistic t* = 1.22:
In the output above, Minitab reports that the
P-value is 0.117. Since the P-value, 0.117, is greater than \(\alpha\) = 0.05, the engineer fails to reject the null hypothesis. There is insufficient evidence, at the \(\alpha\) = 0.05 level, to conclude that the mean Brinell hardness of all such ductile iron pieces is greater than 170.
Note that the engineer obtains the same scientific conclusion regardless of the approach used. This will
always be the case.
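The numbers in this example are easy to reproduce without Minitab. The sketch below (my own, standard-library Python; data copied from the example, critical value taken from the text) computes the test statistic by hand:

```python
import math
import statistics

# Brinell hardness measurements from the example above.
data = [170, 167, 174, 179, 179, 187, 179, 183, 179, 156, 163, 156, 187,
        156, 167, 156, 174, 170, 183, 179, 174, 179, 170, 159, 187]

n = len(data)
mean = statistics.mean(data)      # 172.52
s = statistics.stdev(data)        # sample standard deviation, about 10.31
se = s / math.sqrt(n)             # standard error of the mean, about 2.06
t_star = (mean - 170) / se        # test statistic, about 1.22

# Critical-value approach at alpha = 0.05 with n - 1 = 24 degrees of freedom.
t_crit = 1.7109                   # from a t-table, as quoted in the text
reject = t_star > t_crit          # False: fail to reject H0
```

Since `t_star` does not exceed the critical value, the code reaches the same conclusion as the text: fail to reject the null hypothesis.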
A biologist was interested in determining whether sunflower seedlings treated with an extract from
Vinca minor roots resulted in a lower average height of sunflower seedlings than the standard height of 15.7 cm. The biologist treated a random sample of n = 33 seedlings with the extract and subsequently obtained the following heights:
11.5 11.8 15.7 16.1 14.1 10.5 9.3 15.0 11.1 15.2 19.0 12.8 12.4 19.2 13.5 12.2 13.3 16.5 13.5 14.4 16.7 10.9 13.0 10.3 15.8 15.1 17.1 13.3 12.4 8.5 14.3 12.9 13.5
The biologist's hypotheses are:
H₀: μ = 15.7 against Hₐ: μ < 15.7
The biologist entered her data into Minitab and requested that the "one-sample
t-test" be conducted for the above hypotheses. She obtained the following output:

Descriptive Statistics
N    Mean     StDev   SE Mean   95% Upper Bound
33   13.664   2.544   0.443     14.414
$\mu$: mean of Height
Test
Null hypothesis H₀: $\mu$ = 15.7
Alternative hypothesis H₁: $\mu$ < 15.7
T-Value   P-Value
-4.60     0.000
The output tells us that the average height of the
n = 33 sunflower seedlings was 13.664 with a standard deviation of 2.544. (The standard error of the mean "SE Mean", calculated by dividing the standard deviation 2.544 by the square root of n = 33, is 0.443). The test statistic t* is -4.60, and the P-value is reported as 0.000 to three decimal places.

Minitab Note. Minitab will always report P-values to only 3 decimal places. If Minitab reports the P-value as 0.000, it really means that the P-value is 0.000....something. Throughout this course (and your future research!), when you see that Minitab reports the P-value as 0.000, you should report the P-value as being "< 0.001."
If the biologist set her significance level \(\alpha\) at 0.05 and used the critical value approach to conduct her hypothesis test, she would reject the null hypothesis if her test statistic
t* were less than -1.6939 (determined using statistical software or a t-table):
Since the biologist's test statistic,
t* = -4.60, is less than -1.6939, the biologist rejects the null hypothesis. That is, the test statistic falls in the "critical region." There is sufficient evidence, at the α = 0.05 level, to conclude that the mean height of all such sunflower seedlings is less than 15.7 cm.
If the biologist used the
P-value approach to conduct her hypothesis test, she would determine the area under a t-distribution curve with n − 1 = 32 degrees of freedom to the left of the test statistic t* = -4.60:
In the output above, Minitab reports that the
P-value is 0.000, which we take to mean < 0.001. Since the P-value is less than 0.001, it is clearly less than \(\alpha\) = 0.05, and the biologist rejects the null hypothesis. There is sufficient evidence, at the \(\alpha\) = 0.05 level, to conclude that the mean height of all such sunflower seedlings is less than 15.7 cm.
Note again that the biologist obtains the same scientific conclusion regardless of the approach used. This will
always be the case.
A manufacturer claims that the thickness of the spearmint gum it produces is 7.5 one-hundredths of an inch. A quality control specialist regularly checks this claim. On one production run, he took a random sample of
n = 10 pieces of gum and measured their thickness. He obtained:
7.65 7.60 7.65 7.70 7.55 7.55 7.40 7.40 7.50 7.50
The quality control specialist's hypotheses are:
H₀: μ = 7.5 against Hₐ: μ ≠ 7.5
The quality control specialist entered his data into Minitab and requested that the "one-sample
t-test" be conducted for the above hypotheses. He obtained the following output:

Descriptive Statistics
N    Mean    StDev    SE Mean   95% CI for $\mu$
10   7.550   0.1027   0.0325    (7.4765, 7.6235)
$\mu$: mean of Thickness
Test
Null hypothesis H₀: $\mu$ = 7.5
Alternative hypothesis H₁: $\mu \ne$ 7.5
T-Value   P-Value
1.54      0.158
The output tells us that the average thickness of the
n = 10 pieces of gum was 7.55 one-hundredths of an inch with a standard deviation of 0.1027. (The standard error of the mean "SE Mean", calculated by dividing the standard deviation 0.1027 by the square root of n = 10, is 0.0325). The test statistic t* is 1.54, and the P-value is 0.158.
If the quality control specialist set his significance level \(\alpha\) at 0.05 and used the critical value approach to conduct his hypothesis test, he would reject the null hypothesis if his test statistic
t* were less than -2.2622 or greater than 2.2622 (determined using statistical software or a t-table):
Since the quality control specialist's test statistic,
t* = 1.54, is not less than -2.2622 nor greater than 2.2622, the quality control specialist fails to reject the null hypothesis. That is, the test statistic does not fall in the "critical region." There is insufficient evidence, at the \(\alpha\) = 0.05 level, to conclude that the mean thickness of all of the manufacturer's spearmint gum differs from 7.5 one-hundredths of an inch.
If the quality control specialist used the
P-value approach to conduct his hypothesis test, he would determine the area under a t-distribution curve with n − 1 = 9 degrees of freedom to the right of 1.54 and to the left of -1.54:
In the output above, Minitab reports that the
P-value is 0.158. Since the P-value, 0.158, is greater than \(\alpha\) = 0.05, the quality control specialist fails to reject the null hypothesis. There is insufficient evidence, at the \(\alpha\) = 0.05 level, to conclude that the mean thickness of all pieces of spearmint gum differs from 7.5 one-hundredths of an inch.
Note that the quality control specialist obtains the same scientific conclusion regardless of the approach used. This will
always be the case.

In closing
In our review of hypothesis tests, we have focused on just one particular hypothesis test, namely that concerning the population mean \(\mu\). The important thing to recognize is that the topics discussed here — the general idea of hypothesis tests, errors in hypothesis testing, the critical value approach, and the
P-value approach — generally extend to all of the hypothesis tests you will encounter. |
WHY?
Skip-Gram Negative Sampling (SGNS) showed amazing performance compared to traditional word embedding methods. However, it was not clear what SGNS converges to.
WHAT?
This paper proved that minimizing the loss function of SGNS is equivalent to factorizing the word-context matrix with association measure of shifted PMI.
The loss function of SGNS can be factorized into a loss function for a specific (w, c) pair, with k being the number of negative samples.
$$\ell = \sum_{w\in V_W}\sum_{c\in V_C}\#(w,c)\left(\log \sigma(\vec{w}\cdot\vec{c}) + k\cdot\mathbb{E}_{c_N\sim P_D}[\log\sigma(-\vec{w}\cdot\vec{c}_N)]\right)$$
$$\mathbb{E}_{c_N\sim P_D}[\log\sigma(-\vec{w}\cdot\vec{c}_N)] = \frac{\#(c)}{|D|}\log\sigma(-\vec{w}\cdot\vec{c}) + \sum_{c_N \in V_C\setminus\{c\}}\frac{\#(c_N)}{|D|}\log\sigma(-\vec{w}\cdot\vec{c}_N)$$
$$\ell(w,c) = \#(w,c)\log \sigma(\vec{w}\cdot\vec{c}) + k\cdot\#(w)\cdot\frac{\#(c)}{|D|}\log\sigma(-\vec{w}\cdot\vec{c})$$
To find the minimum of the local loss function, we can differentiate it with respect to $x = \vec{w}\cdot\vec{c}$.
$$\frac{\partial \ell}{\partial x} = \#(w,c)\cdot\sigma(-x) - k\cdot\#(w)\cdot\frac{\#(c)}{|D|}\cdot\sigma(x) = 0$$
$$e^x = -1 \quad\text{or}\quad e^x = \frac{\#(w,c)\cdot|D|}{\#(w)\cdot\#(c)}\cdot\frac{1}{k}$$
$$\vec{w}\cdot\vec{c} = \log\left(\frac{\#(w,c)\cdot|D|}{\#(w)\cdot\#(c)}\right) - \log k$$
$$M_{ij}^{SGNS} = W_i\cdot C_j = PMI(w_i, c_j) - \log k$$
We can see that SGNS converges to shifted PMI given enough dimensions to reconstruct the full matrix. This paper also points out that SGNS is a kind of weighted matrix factorization that focuses more on frequent pairs. Based on this observation, the authors propose an alternative word representation whose association metric is Shifted PPMI (SPPMI), with symmetric SVD.
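As an illustration (my own sketch, not code from the paper), the SPPMI association metric can be computed directly from co-occurrence counts; all numbers below are toy values:

```python
import math

# Toy co-occurrence counts #(w, c): rows are words, columns are contexts.
counts = [
    [4, 1, 0],
    [1, 3, 2],
    [0, 2, 5],
]
D = sum(map(sum, counts))                    # |D|, total number of pairs
w_tot = [sum(row) for row in counts]         # #(w)
c_tot = [sum(col) for col in zip(*counts)]   # #(c)
k = 2                                        # number of negative samples

def sppmi(i, j):
    """Shifted positive PMI: max(PMI(w_i, c_j) - log k, 0)."""
    if counts[i][j] == 0:
        return 0.0                           # PPMI clips unseen pairs to 0
    pmi = math.log(counts[i][j] * D / (w_tot[i] * c_tot[j]))
    return max(pmi - math.log(k), 0.0)

M = [[sppmi(i, j) for j in range(len(c_tot))] for i in range(len(w_tot))]
```

Factorizing M with a truncated symmetric SVD then gives the alternative word representation the paper compares against SGNS.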
So?
On word similarity tasks, SVD and SPPMI achieved better results than SGNS. However, SGNS still performed better on the semantic analogy task. The authors conjecture that this is due to the weighted nature of SGNS.
Critic
Unveiling another mystery of SGNS. |
WHY?
Previous methods for visual question answering performed one-step or static reasoning, while some questions require a chain of reasoning.
WHAT?
The Chain of Reasoning (CoR) model alternately updates the objects and their relations to solve questions that require a chain of reasoning.
CoR consists of three parts: data embedding, chain of reasoning, and decision making. Data embedding encodes images and questions into vectors with Faster R-CNN and GRU, respectively.
The chain of reasoning step consists of a series of sub-chains that perform relational reasoning and object refining. With the objects of the image at step $t$ denoted by $O^{(t)}$, the output is calculated as follows.
$$P^{(t)} = \mathrm{relu}(O^{(t)}W_o^{(t)}),\quad S^{(t)} = \mathrm{relu}(QW_q^{(t)})$$
$$F^{(t)} = \sum_{k=1}^K(P^{(t)}W_{p,k}^{(t)})\odot(S^{(t)}W_{s,k})$$
$$\alpha^{(t)} = \mathrm{softmax}(F^{(t)}W_f^{(t)})$$
$$\tilde{Q}^{(t)} = (\alpha^{(t)})^T O^{(t)}$$
Relations of objects are calculated with guidance conditioned on question. Objects are refined with weighted sum of relations.
$$G_l = \sigma(\mathrm{relu}(QW_{l_1})W_{l_2}),\quad G_r = \sigma(\mathrm{relu}(QW_{r_1})W_{r_2})$$
$$R_{ij}^{(t)} = (O_i^{(t)}\odot G_l) \oplus (O_j^{(1)}\odot G_r)$$
$$O_j^{(t+1)} = \sum_{i=1}^m \alpha_i^{(t)} R_{ij}^{(t)}$$
Decisions are made with all the objects from each step.
$$O^* = [\mathrm{relu}(O^{(1)}W^{(1)});\ \mathrm{relu}(O^{(2)}W^{(2)});\ \ldots;\ \mathrm{relu}(O^{(T)}W^{(T)})]$$
$$H = \sum_{k=1}^K(O^* W_{O^*,k})\odot(QW_{q',k})$$
$$\hat{a} = \mathrm{softmax}(HW_h)$$
So?
CoR achieved the best results on various VQA tasks including VQA 1.0, VQA 2.0, COCO-QA, and TDIUC. Visualization shows that CoR performs appropriate reasoning.
Critic
I think more explanation is needed on the architecture of the relational reasoning and object refining steps. Also, the performance differences in the ablation studies seem too small to draw conclusions.
There's an amazing fraction macro here* for
FontSpec with LuaLaTeX (*: that would be the second macro in that answer, called
\unifrac, which I like better than the first "macro"/ using the
frac feature), but it only works if the font you're using has OpenType features
dnom and
numr. The font that I use, MinionMath (used with
unicode-math, which requires
FontSpec), does not have these features for anything beyond numbers (many fonts don't), and even fonts that have these features for letters don't have it for Greek letters.
To be honest, that macro looks absolutely beautiful for many fractions that I use for in-line text and exponents, but I wish I could use it for things other than numbers. I'm aware of the
\sfrac macro which can use all letters (as well as
\nicefrac and
\tfrac) but I like the look of the
\unifrac macro
much more. Is there any way that I can modify \unifrac to behave like \sfrac in the sense that I can use it with non-number characters, but retain the look of \unifrac?
I'll provide a MWE here along with pictures:
\documentclass{article}
\usepackage{xfrac}
\usepackage{fontspec} % Using unicode-math instead doesn't seem to make much difference
\setmainfont{EB Garamond 12 Regular} % this font has dnom and numr features;
                                     % XITS Math, for example, doesn't
\newcommand{\unifrac}[2]{\mbox{% making sure we don't get a line break
    {\addfontfeatures{RawFeature=+numr}#1}%
    ⁄% That slash is U+2044 FRACTION SLASH, which has special spacing
    {\addfontfeatures{RawFeature=+dnom}#2}%
}}
\begin{document}
This is \texttt{\unifrac:}\qquad
\unifrac{12}{14} \unifrac{31415}{27182} \unifrac{abc}{def} \unifrac{Foo!}{Bar?}
\unifrac{\#\$\%+/<>=}{?\@[]\textbackslash\_|\{\}§†}
\[\left(\frac{3x}{2y}\right)^{\unifrac{3}{2}}=\unifrac{\lambda}{2x}\]

And this is \texttt{\sfrac:}\qquad
\sfrac{12}{14} \sfrac{31415}{27182} \sfrac{abc}{def} \sfrac{Foo!}{Bar?}
\sfrac{\#\$\%+/<>=}{?\@[]\textbackslash\_|\{\}§†}
\[\left(\frac{3x}{2y}\right)^{\sfrac{3}{2}}=\sfrac{\lambda}{2x}\]
\end{document}
Answer
$1$
Work Step by Step
Convert the angle measure to degrees to obtain: $\frac{\pi}{4} \cdot \frac{180^\circ}{\pi} = 45^\circ$. Thus, $\tan{\frac{\pi}{4}} = \tan{45^\circ}$. From Section 2.1 (page 50), we learned that $\tan{45^\circ}=1$.
Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous periodic function of period $1$ and $a$ be an irrational number. My goal is to prove
$$\lim_{N\rightarrow +\infty}\frac{1}{N}\sum_{n=1}^Nf(na)=\int_0^1f(t)\,dt.$$
I have first defined $x_n=na-[na]$, where $[\cdot]$ is the floor function, and I have proved that $\{x_n\}_{n\geq 1}$ is dense in $[0,1]$.
My goal was then to create partitions of $[0,1]$ by $0<y_1<\ldots <y_N<1$, where $(y_n)_{n=1}^N$ is the ordering of $(x_n)_{n=1}^N$, and to use a Riemann sum. I thought this would give the result but what I actually get is
$$\lim_{N\rightarrow +\infty}\sum_{n=1}^{N-1}(y_{n+1}-y_n)f(na)=\int_0^1f(t)\,dt.$$ Can this expression be linked with the limit I am interested in or should I do a different reasoning? |
The cubic equation 𝑥^3 − 2𝑥 − 3 = 0 has roots 𝛼, 𝛽 and 𝛾. Find a cubic equation with integer coefficients which has roots:
For B so far i did:
inverse of 1/x = 1/x
then I subbed it into the cubic equation, however I stopped here as it doesn't match the answer at all
please help thanks
in (b) is it really supposed to be \(\dfrac 1 \alpha + \dfrac 1 \beta\)
or does it mean that \(\dfrac 1 \alpha, ~\dfrac 1 \beta,~\dfrac 1 \gamma\) are all roots?
I am with Rom, I think that + is meant to be a comma.
Once you respond to our query we might be able to answer you.
On the assumption that a) is indeed \(\frac{1}{\alpha},\frac{1}{\beta},\frac{1}{\gamma}\) we have:
or you can just find
\(p\left(\dfrac 1 x\right) = 0\\ \left(\dfrac 1 x\right)^3 - \dfrac 2 x - 3 = 0\\ 1 - 2x^2 - 3x^3 = 0 \\ \text{This is the form the problem asks for, but}\\ \text{dividing both sides by -3}\\ x^3 + \dfrac 2 3 x^2 - \dfrac 1 3 = 0\\ \text{which is the form Alan wrote it in}\)
\(\text{similarly for the second part}\\ x=\dfrac{1}{2\alpha+1} \Rightarrow \alpha = \dfrac 1 2\left(\dfrac 1 x -1\right)\\ \text{so find }\\ p\left(\dfrac 1 2\left(\dfrac 1 x -1\right)\right)\\ \text{and work the same sort of algebra to get nice coefficients}\) |
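Not from the thread, but a quick numerical check of the reciprocal-roots transformation (standard-library Python; the bisection bracket [1, 2] is my choice): the real root $r$ of $x^3 - 2x - 3$ should have a reciprocal satisfying $1 - 2x^2 - 3x^3 = 0$.

```python
def p(x):
    # The original cubic.
    return x**3 - 2*x - 3

# Bisection for the real root on [1, 2]: p(1) = -4 < 0, p(2) = 1 > 0.
lo, hi = 1.0, 2.0
for _ in range(200):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2

y = 1 / r
residual = 1 - 2*y**2 - 3*y**3   # should be ~0
```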
Understanding your Data - Basic Statistics10 min read
Have you ever had to deal with a lot of data, and don’t know where to start? If yes, then this post is for you. In this post I will try to guide you through some basic approaches and operations you can perform to analyze your data, make some basic sense of it, and decide on your approach for deeper analysis of it. I will use python and a small subset of data from the Kaggle Bikesharing Challenge to illustrate my examples. The code for this work can be found at this location. Please take a minute to download python and the sample data before we proceed.
DESCRIPTION OF DATASET
The data provided is a CSV file bikesharing.csv, with 5 columns - datetime, season, holiday, workingday, and count.
datetime: The date and time when the statistics were captured
season: 1 = spring, 2 = summer, 3 = fall, 4 = winter
holiday: whether the day is considered a holiday
workingday: whether the day is neither a weekend nor holiday
count: the total number of bikes rented on that day

MIN MAX AND MEAN
One of the first analyses one can do with their data is to find the minimum, maximum and the mean. The mean (or average) number of bikes rented per day in this case is the sum of all bikes rented per day divided by the total number of days:
$\bar{f} = \frac{\sum_{i=1}^{n} b_i}{n}$
where $\bar{f}$ is the mean, $b_i$ is the number of bikes rented on day $i$ and $n$ are the total number of days. We can compute these in python using the following code:
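The original code block did not survive extraction; a standard-library reconstruction might look like the following (the numbers are placeholders, since the Kaggle file isn't included here):

```python
import statistics

# Placeholder daily rental counts standing in for the "count" column
# of bikesharing.csv.
counts = [9321, 10841, 6825, 8540, 10205]

print("min:", min(counts))
print("max:", max(counts))
print("mean:", statistics.mean(counts))
```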
In this case, the mean is 9146.82. It looks like there are several large values in the data, because the mean is closer to the max than to the min. Maybe the data will provide more insight if we compute the min, max and mean per day, grouped by another factor, like season or weather. Here is some code to compute the mean per day grouped by the season:
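The grouped-mean snippet was also lost in extraction; a standard-library stand-in (with hypothetical rows in place of the CSV data) could be:

```python
import statistics
from collections import defaultdict

# Hypothetical (season, count) rows standing in for the CSV data.
rows = [(1, 3200), (1, 2800), (2, 9100), (2, 9700), (3, 12400), (3, 11800)]

by_season = defaultdict(list)
for season, count in rows:
    by_season[season].append(count)

means = {s: statistics.mean(c) for s, c in sorted(by_season.items())}
print(means)
```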
As you can see, the mean varies significantly with the season. This intuitively makes sense because we would expect more people to ride a bike in the summer as compared to the winter, which means a higher mean in the summer than in the winter; this also means higher min and max values in the summer than in the winter. This data also helps us intuitively guess that season 1 is most likely winter, and season 3 is most likely summer.
VARIABILITY IN DATA
The next thing we would like to know is the variability of the data provided. It is good to know if the data is skewed in a particular direction, or how varied it is. If the data is highly variable, it is hard to determine if the mean changes with different samples of data. Reducing variability is a common goal of designed experiments, and this can be done by finding subsets of data that have low variability, such that samples from each of the subsets produce similar mean values. We already did a little bit of that in the second example above.
There are 2 ways of measuring variability: variance and standard deviation.
VARIANCE
Variance is defined as the average of the squared differences from the mean. In most experiments, we take a random sample from a population. In this case, we will compute the population variance, which uses all possible data provided. Population variance can be computed as:
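The formula itself was lost in extraction; the standard definition, using the symbols described below, is:

$\sigma^2 = \frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N}$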
If you needed to compute the sample variance, you can use the following formula:
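The sample-variance formula was likewise lost; the standard form, with Bessel's correction, is:

$s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}$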
where $x_i$ is each instance, $\bar{x}$ is the mean, and $n$ is the total number of observations. Dividing by $n-1$ gives a better estimate of the variance of the larger parent population than dividing by $n$, which gives a result that is correct for the sample only. This is known as Bessel's correction.
In our case we will compute population variance using most of the same code as that above, except adding the following line to it:
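The variance line itself was lost in extraction; with the standard library it could be (placeholder data again):

```python
import statistics

counts = [9321, 10841, 6825, 8540, 10205]

# Population variance: the mean squared deviation from the mean.
pvar = statistics.pvariance(counts)
```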
STANDARD DEVIATION
Variance by itself is not particularly insightful, as its units are the square of the feature's units, and it is not possible to plot it on a graph and compare it with the min, max and mean values. The square root of the variance is the standard deviation, and it is a much more insightful metric.
The population standard deviation, $\sigma$, is the square root of the variance, $\sigma^2$. In python you can compute the standard deviation by adding the following line to the above code:
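Reconstructing the missing line with the standard library (placeholder data as before):

```python
import math
import statistics

counts = [9321, 10841, 6825, 8540, 10205]

# Population standard deviation: square root of the population variance.
pstd = statistics.pstdev(counts)
```

Here `pstd` is about 1398.5, on the same scale as the counts themselves, so it can be plotted alongside the min, max and mean.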
A standard deviation close to 0 indicates that the data points tend to be very close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values. In our case, the data is very spread out. Three standard deviations from the mean account for 99.7% of the sample population being studied, assuming the distribution is normal (bell-shaped).
STANDARD ERROR OF THE MEAN (SEM)
In this post, we have computed the population mean. However, if one has to compute the sample mean, it is useful to know how accurate this value is in estimating the population mean. SEM is the error in estimating $\mu$.
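The formula did not survive extraction; the standard definition is:

$SEM = \frac{\sigma}{\sqrt{N}}$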
However, as we are often unable to compute the population standard deviation, we will use the sample standard deviation instead:
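That is, reconstructing the missing formula with the sample standard deviation $s$:

$SEM \approx \frac{s}{\sqrt{n}}$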
The mean of any given sample is an estimate of the population mean number of features. Two aspects of the population and the sample could affect the variability of the mean number of features of those samples.
If the population of number of features has a very small standard deviation, then:
- the samples from that population will have a small sample standard deviation
- the sample means will be close to the population mean
- we will have a small standard error of the mean

If the population of number of features has a large standard deviation, then:
- the samples from that population will have a large sample standard deviation
- the sample means may be far from the population mean
- we will have a large standard error of the mean
So large population variability causes a large standard error of the mean. The estimate of the population mean using 2 observations is less reliable than the estimate using 20 observations, and much less reliable than the estimate of the mean using 100 observations. As N gets bigger, we expect our error in estimating the population mean to get smaller. |
Sunday, December 30, 2012
In macbook, I usually use ssh in terminal to connect to remote servers. However recently the terminal often becomes unresponsive after being idle for a while, then it prints the error "Write failed: Broken pipe" and quits the ssh connection.
The solution I found is to add the following line in ~/.ssh/config:
ServerAliveInterval 120
Here is what I did to install and link with the package Geometric Tools:
1. Download the package into your home directory and unzip it.
2. Add the following line into ~/.bashrc, and run "sh ~/.bashrc" or log out and log in again.
3. Change to the directory GeometricTools/WildMagic5, and type make CFG=Release -f makefile.wm5

To compile, we can use the following command
Sunday, December 9, 2012
If we simply call "sqrt()" function to take square root, very often we'll be caught by surprise.
The first common mistake is not checking negativity before taking the square root. For example, instead of getting 0.0 in both cases, one of the cases will yield "-nan". This is because y is actually not exactly \(\sqrt{2}\), but within machine precision of \(\sqrt{2}\). Therefore, we should always check negativity before taking the square root.
You may notice that, in one of the above two cases, even if the answer is not "-nan", the accuracy is quite poor, only within \(10^{-8}\) of zero, rather than within machine precision of zero. This is because although \(y^2 - 2\) is within machine precision zero, after taking square root, you'll only get square root of machine precision. This is another common mistake.
So how do we take square root accurately and error free? Assume you want to compute \(x = \sqrt{2 - y^2}\), you can do something like the following
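The original code was lost in extraction; a sketch of the guarded version (my own, in Python rather than the post's C-style code, so the negative radicand shows up as a domain error instead of -nan) could be:

```python
import math

# y is the closest double to sqrt(2); y*y is not exactly 2, so the
# radicand 2 - y*y can come out as a tiny negative number, and taking
# its square root raises a domain error (or yields -nan in C).
y = math.sqrt(2.0)

def safe_sqrt(v):
    # Clamp tiny negative radicands (roundoff noise) to zero first.
    return math.sqrt(v) if v > 0.0 else 0.0

x = safe_sqrt(2.0 - y * y)   # 0.0 instead of a domain error
```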
Wednesday, December 5, 2012
Assume an ellipse of width \(\sigma\) and length \(\kappa \sigma\) is centered at \((x_0, y_0)\), and has angle \(\theta_0\) with the \(x\)-axis. How do we determine whether it intersects a horizontal line or a vertical line?
It turns out the criteria is very simple
The ellipse intersects a horizontal line \(y = y_1\) if and only if the following inequality holds: \[ \triangle_1 = \sigma^2 \left(\cos^2(\theta_0) + \kappa^2 \sin^2(\theta_0)\right) - (y_1-y_0)^2 \geq 0 \] The ellipse intersects a vertical line \(x = x_1\) if and only if the following inequality holds: \[ \triangle_2 = \sigma^2 \left(\sin^2(\theta_0) + \kappa^2 \cos^2(\theta_0)\right) - (x_1-x_0)^2 \geq 0 \]

Sunday, October 28, 2012
I like to use "ssh -Y" to log into my university's server to remotely submit computation jobs, and vim is my indispensible editor. However, even if vim is a command line editor, it still tries to load X server on start, and this can take a few seconds (on my MacBook Air). A simple solution is to add the following line in your ~/.bashrc (in your account at the server):
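The exact line was lost in extraction; a common choice (my assumption, not necessarily the author's) is an alias passing vim's -X flag, which tells it not to connect to the X server:

```shell
# In ~/.bashrc on the server: never connect to the X server from vim.
alias vim='vim -X'
```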
In this way, vim won't try to load X server and starts much faster.
Wednesday, May 2, 2012
It's a design flaw for Thinkpad to put page forward/backward together with the arrow keys. This frequently leads to loss of unpublished posts. This is what you can do in Ubuntu to disable those two keys:
Create the file ~/.Xmodmap with the following contents
    keycode 166=
    keycode 167=
(Depending on your distribution, this step may not be necessary.)
Add the following code to your ~/.profile
    # keyboard modifier
    if [ -f $HOME/.Xmodmap ]; then
        /usr/bin/xmodmap $HOME/.Xmodmap
    fi
How do we go about calculating a posterior with a prior N~(a, b) after observing n data points? I assume that we have to calculate the sample mean and variance of the data points and do some sort of calculation that combines the posterior with the prior, but I'm not quite sure what the combination formula looks like.
The basic idea of Bayesian updating is that given some data $X$ and
prior over parameter of interest $\theta$, where the relation between data and parameter is described using likelihood function, you use Bayes theorem to obtain posterior
$$ p(\theta \mid X) \propto p(X \mid \theta) \, p(\theta) $$
This can be done sequentially: after seeing the first data point $x_1$, the prior over $\theta$ is updated to a posterior; next, you can take a second data point $x_2$ and use the posterior obtained before as your prior, updating it once again, etc.
Let me give you an example. Imagine that you want to estimate mean $\mu$ of normal distribution and $\sigma^2$ is known to you. In such case we can use normal-normal model. We assume normal prior for $\mu$ with hyperparameters $\mu_0,\sigma_0^2:$
\begin{align} X\mid\mu &\sim \mathrm{Normal}(\mu,\ \sigma^2) \\ \mu &\sim \mathrm{Normal}(\mu_0,\ \sigma_0^2) \end{align}
Since the normal distribution is a conjugate prior for $\mu$ of the normal distribution, we have a closed-form solution for updating the prior
\begin{align} E(\mu' \mid x) &= \frac{\sigma^2\mu_0 + \sigma^2_0 x}{\sigma^2 + \sigma^2_0} \\[7pt] \mathrm{Var}(\mu' \mid x) &= \frac{\sigma^2 \sigma^2_0}{\sigma^2 + \sigma^2_0} \end{align}
Unfortunately, such simple closed-form solutions are not available for more sophisticated problems and you have to rely on optimization algorithms (for point estimates using
maximum a posteriori approach), or MCMC simulation.
Below you can see data example:
n <- 1000
set.seed(123)
x <- rnorm(n, 1.4, 2.7)
mu <- numeric(n)
sigma <- numeric(n)
mu[1] <- (10000*x[1] + (2.7^2)*0) / (10000 + 2.7^2)
sigma[1] <- (10000*2.7^2) / (10000 + 2.7^2)
for (i in 2:n) {
  mu[i] <- ( sigma[i-1]*x[i] + (2.7^2)*mu[i-1] ) / ( sigma[i-1] + 2.7^2 )
  sigma[i] <- ( sigma[i-1]*2.7^2 ) / ( sigma[i-1] + 2.7^2 )
}
If you plot the results, you'll see how the
posterior mean approaches the true value (marked by the red line) as new data is accumulated.
For learning more you can check those slides and
Conjugate Bayesian analysis of the Gaussian distribution paper by Kevin P. Murphy. Check also Do Bayesian priors become irrelevant with large sample size? You can also check those notes and this blog entry for accessible step-by-step introduction to Bayesian inference.
If you have a prior $P(\theta)$ and a likelihood function $P(x \mid \theta)$ you can calculate the posterior with:
$$ P(\theta \mid x) = \frac{P(x \mid \theta) P(\theta)}{P(x)} $$
Since $P(x) = \sum_\theta P(x \mid \theta) P(\theta)$ is just a normalization constant that makes the probabilities sum to one, you can write:
$$P(\theta \mid x) \propto P(x \mid \theta)P(\theta) $$
Where $\propto$ means "is proportional to."
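A minimal discrete sketch of Bayes' rule at work (a hypothetical example of my own, not from the answer): take $\theta$ to be a coin's heads probability restricted to a grid of candidate values, and normalize the product of likelihood and prior by hand.

```python
# theta on a grid of candidate values with a uniform prior P(theta)
thetas = [k / 20 for k in range(1, 20)]        # 0.05, 0.10, ..., 0.95
prior = {t: 1 / len(thetas) for t in thetas}

heads, flips = 7, 10
def likelihood(t):
    # P(x | theta) for 7 heads in 10 flips, up to the binomial constant
    return t ** heads * (1 - t) ** (flips - heads)

unnormalized = {t: likelihood(t) * prior[t] for t in thetas}
evidence = sum(unnormalized.values())          # P(x) = sum over theta
posterior = {t: u / evidence for t, u in unnormalized.items()}

print(max(posterior, key=posterior.get))       # posterior mode: 0.7
```

Dividing by the summed evidence is exactly the normalization step; dropping it gives the proportional form.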
The case of conjugate priors (where you often get nice closed form formulas)
This Wikipedia article on conjugate priors may be informative. Let $\boldsymbol{\theta}$ be a vector of your parameters. Let $P(\boldsymbol{\theta})$ be a prior over your parameters. Let $P(\mathbf{x} \mid \boldsymbol{\theta})$ be the likelihood function, the probability of the data given the parameters. The prior is a conjugate prior for the likelihood function if the prior $P(\boldsymbol{\theta})$ and the posterior $P(\boldsymbol{\theta} \mid \mathbf{x})$ are in the same family (e.g. both Gaussian).
The table of conjugate distributions may help build some intuition (and also give some instructive examples to work through yourself).
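One instructive entry from that table, sketched in code (my own illustration): a Beta prior on a Bernoulli success probability stays Beta after observing data, so the whole update reduces to adjusting two hyperparameters.

```python
# Beta(alpha, beta) prior on a success probability; after observing
# k successes in n trials the posterior is Beta(alpha + k, beta + n - k).
def update_beta(alpha, beta, successes, trials):
    return alpha + successes, beta + trials - successes

# Uniform prior Beta(1, 1), then observe 3 successes in 5 trials:
print(update_beta(1, 1, 3, 5))  # (4, 3)
```

This is the "nice closed form" conjugacy buys you: no integration, just bookkeeping.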
This is the central computation issue for Bayesian data analysis. It really depends on the data and distributions involved. For simple cases where everything can be expressed in closed form (e.g., with conjugate priors), you can use Bayes's theorem directly. The most popular family of techniques for more complex cases is Markov chain Monte Carlo. For details, see any introductory textbook on Bayesian data analysis. |
Got the theorem, having trouble with the proof... [SOLVED]
Hi all. OK, so I am trying to prove a theorem that I have for some time been just using as-is. Long story short, it occurred to me that I needed to prove it. So, I have almost done it, but am stuck near the end. The theorem is:
Suppose [itex] \mathcal{X} [/itex] is a smooth vector field on a manifold [itex]\mathcal{M}[/itex]. If [itex]\mathcal{X}_p\neq0[/itex] at a point [itex]p\in\mathcal{M}[/itex], then there exists a coordinate neighborhood [itex]\left(\mathcal{W};w^i\right)[/itex] about [itex]p[/itex] such that
[tex]
\left.\mathcal{X}\right\vert_\mathcal{W}=\dfrac{\partial}{\partial w^1}
[/tex]
Proof: ????
Now, it's not that I have nothing for the proof, it's just that I'm stuck. As well, since there is more than one way to skin a cat, I figured it would be better to leave the proof empty, rather than potentially confuse anyone with the technique I have employed thus far.
That said, a big thanks in advance for all the help!
Edge at \(m_{\rm inv}=79\GeV\) in dilepton events
There were virtually no deviations from the Standard Model at the LHC a month ago. But during the last month, there has been an explosion of so far small excesses.
Off-topic: Have you ever played 2048? Flash 2048. A pretty good game. Cursor arrow keys.

One day after Tommaso Dorigo presented another failed attempt to mock the supersymmetric phenomenologists, he was forced to admit that "his" own CMS collaboration has found another intriguing excess – namely in the search for edge effects in dilepton events.
The detailed data may be seen in Konstantinos Theofilatos' slides shown at ICNFP2014 in Κολυμβάρι, Greece (if the city name sounds Greek to you, it's Kolimvari; Czech readers may call the place Kolín-Vary) or at a CMS website. The paper hasn't been released yet but it's already cited in the thesis by Marco-Andrea Buchmann (no idea about his or her sex but he or she looks like a boy) whose chapter 5 (page 49) is dedicated to this search (see also the reference [95] over there). On page
ii, Buchmann mentions significance of the edge at 2.96, almost 3 sigma.
What's going on?
Supersymmetry is expected to preserve, at least approximately, the multiplicative conservation law for the R-parity. The known Standard Model particles have the positive R-parity; their elusive superpartners have the negative R-parity. We collide known and boring protons only so the initial R-parity is positive (even). So if superpartners are produced, they have to be produced in pairs.
Each of the superpartners may decay, and the process of decay may have several steps. In each step, either missing energy (neutrinos or, at the end, the lightest superpartner) is released, or charged leptons are. Charged leptons are "clean" and not too contaminated by the hadronic junk that the LHC constantly produces in the proton-proton collisions.
In the decay chain, two charged leptons, e.g. one \(e^\pm\) and one \(\mu^\pm\), may be released as a superpartner decays to lighter superpartners via the chain \[
X\mathop{\to}_{e^\pm} Y\mathop{\to}_{\mu^\pm} Z.
\] Their invariant mass can't be arbitrarily high. The maximum value is equal to the difference between the masses of \(X\) and \(Z\), i.e. \(m_X-m_Z\).
(The actual search looks for same-flavor, opposite sign leptons i.e. \(e^+e^-\) and \(\mu^+\mu^-\).)
The reason why this is really the maximum invariant mass of the charged leptons is that it is clearly achieved if the two leptons' momenta are parallel in the spacetime, along with everything else. If they are not parallel, the invariant mass is lower than it could be.
So the dileptons resulting from the decay of the superpartner \(X\) with the invariant mass \(m_{\rm inv}(e^\pm \mu^\pm)\) above a threshold (edge) don't exist at all. On the other hand, lots of dilepton events are predicted "right beneath" the edge because they correspond to relatively slowly moving leptons and there's a lot of phase space over there because the invariant mass only depends slowly on the speeds if the speeds are low.
To make things clear, one would like to see something like this:
The chart was produced by a computer simulation. The green dotted curve shows the shape of the expected edge at \(m_{\rm inv}=65\GeV\) and you may see that the shape is matched by the observed (but simulated) events, the black curve.
The actual observed curves from CMS look much less spectacular,
but the mild edge located at \(m_{\rm inv} = 78.7\pm 1.4\GeV\) (Buchmann's thesis) was actually quantified to have significance 2.6 (the Greek guy) or 2.96 (Buchmann) sigma, formally corresponding to something like 99% or 99.7% certainty that new physics is there. Of course, a 99% certainty is nothing convincing in particle physics and most things one may be 99% certain about are almost certainly flukes or errors.
(The reason for the discrepancy is really the same as the reason why the sleeping beauty knows that the probability of "tails" is still \(P=1/2\). She wakes up after "heads" twice as often but she knows that; she knows that the heads – and similarly excesses at the LHC – are "overreported". So if she wants to have an idea about the actual probability of the new physics or heads, she must correct her estimates for this overreporting. When she does so, she knows that heads still have \(P=1/2\), less than the naive \(P=2/3\), and new physics given the excess is much less likely than the formally calculated and often incorrectly interpreted 99%.)
But it's surely another fluctuation that can keep one excited.
If you have been searching for masses in \({\rm GeV}\) in this blog post, you must have seen that the relevant edge was at \(78.7\GeV\) or so. If superpartners exist, this value could be the mass difference between two superpartners (or other new particles?) waiting to be discovered. |
We develop a general algebraic and proof-theoretic study of substructural logics that may lack associativity, along with other structural rules. Our study extends existing work on substructural logics over the full Lambek Calculus (see [34], Galatos and Ono [18], Galatos et al. [17]). We present a Gentzen-style sequent system that lacks the structural rules of contraction, weakening, exchange and associativity, and can be considered a non-associative formulation of . Moreover, we introduce an equivalent Hilbert-style system and show that the logic associated with and is algebraizable, with the variety of residuated lattice-ordered groupoids with unit serving as its equivalent algebraic semantics. Overcoming technical complications arising from the lack of associativity, we introduce a generalized version of a logical matrix and apply the method of quasicompletions to obtain an algebra and a quasiembedding from the matrix to the algebra. By applying the general result to specific cases, we obtain important logical and algebraic properties, including the cut elimination of and various extensions, the strong separation of , and the finite generation of the variety of residuated lattice-ordered groupoids with unit.
Along the same line as that in Ono (Ann Pure Appl Logic 161:246–250, 2009), a proof-theoretic approach to Glivenko theorems is developed here for substructural predicate logics relative not only to classical predicate logic but also to arbitrary involutive substructural predicate logics over intuitionistic linear predicate logic without exponentials, QFLe. It is shown that there exists the weakest logic over QFLe among substructural predicate logics for which the Glivenko theorem holds. Negative translations of substructural predicate logics are studied by using the same approach. First, a negative translation, called the extended Kuroda translation, is introduced. Then a translation result for arbitrary involutive substructural predicate logics over QFLe is shown, and the existence of the weakest logic is proved among such logics for which the extended Kuroda translation works. They are obtained by a slight modification of the proof of the Glivenko theorem. Relations of our extended Kuroda translation with other standard negative translations will be discussed. Lastly, algebraic aspects of these results will be mentioned briefly. In this way, a clear and comprehensive understanding of Glivenko theorems and negative translations will be obtained from a substructural viewpoint.
Glivenko-type theorems for substructural logics are comprehensively studied in the paper [N. Galatos, H. Ono, Glivenko theorems for substructural logics over FL, Journal of Symbolic Logic 71, 1353–1384]. Arguments used there are fully algebraic, and based on the fact that all substructural logics are algebraizable (see also [N. Galatos, P. Jipsen, T. Kowalski, H. Ono, Residuated Lattices: An Algebraic Glimpse at Substructural Logics, in: Studies in Logic and the Foundations of Mathematics, vol. 151, Elsevier, 2007] for the details). As a complementary work to the algebraic approach developed there, we present here a concise, proof-theoretic approach to Glivenko theorems for substructural logics. This will show different features of these two approaches.
The present paper deals with the predicate version MTL of the logic MTL by Esteva and Godo. We introduce a Kripke semantics for it, along the lines of Ono's Kripke semantics for the predicate version of FLew (cf. [O85]), and we prove a completeness theorem. Then we prove that every predicate logic between MTL and classical predicate logic is undecidable. Finally, we prove that MTL is complete with respect to the standard semantics, i.e., with respect to Kripke frames on the real interval [0,1], or equivalently, with respect to MTL-algebras whose lattice reduct is [0,1] with the usual order.
Substructural logics have received a lot of attention in recent years from the communities of both logic and algebra. We discuss the algebraization of substructural logics over the full Lambek calculus and their connections to residuated lattices, and establish a weak form of the deduction theorem that is known as parametrized local deduction theorem. Finally, we study certain interpolation properties and explain how they imply the amalgamation property for certain varieties of residuated lattices.
We will give here a purely algebraic proof of the cut elimination theorem for various sequent systems. Our basic idea is to introduce mathematical structures, called Gentzen structures, for a given sequent system without cut, and then to show the completeness of the sequent system without cut with respect to the class of algebras for the sequent system with cut, by using the quasi-completion of these Gentzen structures. It is shown that the quasi-completion is a generalization of the MacNeille completion. Moreover, the finite model property is obtained for many cases, by modifying our completeness proof. This is an algebraic presentation of the proof of the finite model property discussed by Lafont [12] and Okada-Terui [17].
We prove that certain natural sequent systems for bi-intuitionistic logic have the analytic cut property. In the process we show that the (global) subformula property implies the (local) analytic cut property, thereby demonstrating their equivalence. Applying a version of Maehara technique modified in several ways, we prove that bi-intuitionistic logic enjoys the classical Craig interpolation property and Maximova variable separation property; its Halldén completeness follows.
In this paper, we will develop an algebraic study of substructural propositional logics over FLew, i.e. the logic which is obtained from intuitionistic logic by eliminating the contraction rule. Our main technical tool is to use residuated lattices as the algebraic semantics for them. This enables us to study different kinds of nonclassical logics, including intermediate logics, BCK-logics, Lukasiewicz’s many-valued logics and fuzzy logics, within a uniform framework.
It is well known that classical propositional logic can be interpreted in intuitionistic propositional logic. In particular Glivenko's theorem states that a formula is provable in the former iff its double negation is provable in the latter. We extend Glivenko's theorem and show that for every involutive substructural logic there exists a minimum substructural logic that contains the first via a double negation interpretation. Our presentation is algebraic and is formulated in the context of residuated lattices. In the last part of the paper, we also discuss some extended forms of the Kolmogorov translation and we compare it to the Glivenko translation.
In this paper, a theorem on the existence of complete embedding of partially ordered monoids into complete residuated lattices is shown. From this, many interesting results on residuated lattices and substructural logics follow, including various types of completeness theorems of substructural logics.
For each ordinal $\alpha > 0$, $L(\alpha)$ is the intermediate predicate logic characterized by the class of all Kripke frames with the poset $\alpha$ and with constant domain. This paper will be devoted to a study of logics of the form $L(\alpha)$. It will be shown that for each uncountable ordinal of the form $\alpha + \eta$ with a finite or a countable $\eta (> 0)$, there exists a countable ordinal of the form $\beta + \eta$ such that $L(\alpha + \eta) = L(\beta + \eta)$. On the other hand, such a reduction of ordinals to countable ones is impossible for a logic $L(\alpha)$ if $\alpha$ is an uncountable regular ordinal. Moreover, it will be proved that the mapping $L$ is injective if it is restricted to ordinals less than $\omega^\omega$, i.e. $\alpha \neq \beta$ implies $L(\alpha) \neq L(\beta)$ for each ordinal $\alpha, \beta$.
An intermediate predicate logic S+n (n>0) is introduced and investigated. First, a sequent calculus GSn is introduced, which is shown to be equivalent to S+n and for which the cut elimination theorem holds. In § 2, it will be shown that S+n is characterized by the class of all linear Kripke frames of height n.
This paper shows a role of the contraction rule in decision problems for the logics weaker than the intuitionistic logic that are obtained by deleting some or all of structural rules. It is well-known that for such a predicate logic L, if L does not have the contraction rule then it is decidable. In this paper, it will be shown first that the predicate logic FLec with the contraction and exchange rules, but without the weakening rule, is undecidable while the propositional fragment of FLec is decidable. On the other hand, it will be remarked that logics without the contraction rule are still decidable, if our language contains function symbols.
We show that the variety of residuated lattices is generated by its finite simple members, improving upon a finite model property result of Okada and Terui. The reasoning is a blend of proof-theoretic and algebraic arguments.
The paper deals with involutive FLe-monoids, that is, commutative residuated, partially-ordered monoids with an involutive negation. Involutive FLe-monoids over lattices are exactly involutive FLe-algebras, the algebraic counterparts of the substructural logic IUL. A cone representation is given for conic involutive FLe-monoids, along with a new construction method, called twin-rotation. Some classes of finite involutive FLe-chains are classified by using the notion of rank of involutive FLe-chains, and a kind of duality is developed between positive and non-positive rank algebras. As a side effect, it is shown that the substructural logic IUL plus t ↔ f does not have the finite model property.
Hundred years ago, vernacular architecture once triumphed. Unfortunately, poverty and low education bring people facing difficulties in understanding their own culture, building techniques, and village management. This problem then leads them to a bigger issue regarding the alteration of culture and traditional architecture. Among all vernacular architecture in Indonesia, Sasak traditional architecture is one of the unique architectures that still exist until now. However, globalization issue leads the alteration of vernacular architecture includes Sasak tribe culture and traditional village in Lombok island, including the traditional houses. This paper takes Sade Traditional Hamlet as a research subject to provide a deeper understanding of the importance of cultural values of Sasak’s living space and settlement. This research shows that the living space and culture of the Sasak tribe in Sade hamlet has evolved and transformed due to the space necessity and financial ability. Among the total 68 houses, 55.8% are the original houses of Sasak people in Sade hamlet, Bale Tani, 38.2% are the traditional modified houses, Bale Bontar, and 6% are the transitional houses, Bale Kodong. Gradually, Bale Tani change to Bale Bontar house. However, Bale Tani could still be preserved by the system of pattern relatives in the family and awiq-awiq as customary law. A deeper understanding of the house preservation, traditional material, and cultural values of Bale Tani should be taken to create a sustainable conservation method.
Let L be any modal or tense logic with the finite model property. For each m, define r_L(m) to be the smallest number r such that for any formula A with m modal operators, A is provable in L if and only if A is valid in every L-model with at most r worlds. Thus, the function r_L determines the size of refutation Kripke models for L. In this paper, we will give an estimation of r_L(m) for some linear modal and tense logics L.
A semantical proof of Craig's interpolation theorem for the intuitionistic predicate logic and some intermediate propositional logics will be given. Our proof is an extension of Henkin's method developed in [4]. It will clarify the relation between the interpolation theorem and Robinson's consistency theorem for these logics and will enable us to give a uniform way of proving the interpolation theorem for them.
Abstract: Some aspects of the coverage of bioethical issues in Japanese (11 series) and German (10 series) biology textbooks for lower secondary school have been investigated, concentrating on the treatment of environmental issues. It was found that German textbooks devote more space to these problems than the Japanese ones and that the style of presentation in German books is aimed at appealing to the emotions of the pupils, whereas that of the Japanese ones is a more traditional scientific one. The inclusion of ethical viewpoints in biology teaching is discussed in this context.
In this paper, a semantics for predicate logics without the contraction rule will be investigated and the completeness theorem will be proved. Moreover, it will be found out that our semantics has a close connection with Beth-type semantics.
In this paper we will discuss constraints on the number of (non-dummy) players and on the distribution of votes such that local monotonicity is satisfied for the Public Good Index. These results are compared to properties which are related to constraints on the redistribution of votes (such as implied by global monotonicity). The discussion shows that monotonicity is not a straightforward criterion of classification for power measures. |
Question: Let $Y=\left [ -1,1 \right ]$ be a subspace of $\mathbb{R}$ so the subspace property holds. Is $\left \{ x:\frac{1}{2}\leq \left | x \right |<1 \right \}$ an open set?
From the definition of complement:
on a metric space $X$, a subset $V$ of $X$ is an open set w.r.t. $X$ iff its complement $X\setminus V=\left \{ x \in X : x \notin V \right \}$ is closed.
Here, $Y\setminus A=\left \{ y \in Y | y \notin A \right \}=\left ( \frac{-1}{2},\frac{1}{2} \right )\cup \left \{ -1 \right \}\cup \left \{ 1 \right \}$
Now, to speak about open sets we have to talk about open balls. I do not have good exposure to elementary real analysis, so I'm not exactly sure how we can speak about open balls in this question.
Any help is appreciated.
Thanks in advance.
Edit:
Let $\bar{y}$ be an element of $A=\left ( -1,\frac{-1}{2} \right ] \cup \left[\frac{1}{2},1 \right)$ (note that this set is $A$ itself, not the complement). The pertinent question is this:
Is there a point $\bar{y} \in A$ such that for every $\epsilon>0$ the ball $B_{\epsilon}\left ( \bar{y} \right )$ (taken within $Y$) satisfies $B_{\epsilon}\left ( \bar{y} \right )\nsubseteq A$?
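As a sketch of how this check can play out (my own completion, with balls taken within the subspace $Y$), consider the boundary point $\frac{1}{2}\in A$:

$$\forall\, 0<\epsilon<1:\qquad \tfrac{1}{2}-\tfrac{\epsilon}{2}\in B_{\epsilon}\!\left(\tfrac{1}{2}\right)\cap Y, \qquad \left|\tfrac{1}{2}-\tfrac{\epsilon}{2}\right|<\tfrac{1}{2} \;\Rightarrow\; \tfrac{1}{2}-\tfrac{\epsilon}{2}\notin A.$$

So every ball around $\frac{1}{2}$ contains a point whose absolute value is below $\frac{1}{2}$, hence a point outside $A$; no ball around $\frac{1}{2}$ fits inside $A$, which is what the question above is probing.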
In this tutorial we shall solve a differential equation of the form $$\left( {{x^2} + 1} \right)y’ = xy$$ by… Click here to read more
Calculus
In this tutorial we shall solve a differential equation of the form $$y’ = \frac{{\sqrt x }}{{{e^y}}}$$ by using the… Click here to read more
In this tutorial we shall solve a differential equation of the form $$y' + \sqrt{\frac{1 - y^2}{1 - x^2}}\,\ldots$$ by… Click here to read more
Differential equations are frequently used in solving mathematics and physics problems. In the following example we shall discuss the application of… Click here to read more
Differential equations are commonly used in physics problems. In the following example we shall discuss a very simple application of the… Click here to read more
In the following example we shall discuss the application of simple differential equation in business. If $$P$$ is the principal… Click here to read more
Let A and B be any two non–empty sets. Then a function ‘$$f$$’ is a rule or law which associates… Click here to read more
Meaning of the Phrase “Tend to Zero”: Suppose a variable ‘x’ assumes in succession a set of values. \[1,\;\frac{1}{10},\;\frac{1}{10^2},\;\frac{1}{10^3},\;\frac{1}{10^4},\;\cdots\]… Click here to read more
\[\frac{d}{dx}\left( c \right) = 0\] \[\frac{d}{dx}\left( x^n \right) = nx^{n-1}\] \[\frac{d}{dx}\left[ cf\left( x \right) \right] = cf'\left( x \right)\]… Click here to read more
\[\frac{d}{dx}\left( \sin x \right) = \cos x\] \[\frac{d}{dx}\left( \cos x \right) = -\sin x\] \[\frac{d}{dx}\left( \tan x \right) = \sec^2 x\]… Click here to read more
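The separable equation from the first tutorial above and a couple of the listed derivative rules can be checked symbolically. A small sketch (using sympy, which is assumed available; not part of the tutorials themselves):

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')

# (x^2 + 1) y' = x y : verify that the solution sympy finds satisfies the ODE
eq = sp.Eq((x**2 + 1) * y(x).diff(x), x * y(x))
sol = sp.dsolve(eq, y(x))
assert sp.checkodesol(eq, sol)[0]

# Derivative rules: d/dx(x^n) = n x^(n-1) and d/dx(sin x) = cos x
n = sp.Symbol('n')
assert sp.simplify(sp.diff(x**n, x) - n * x**(n - 1)) == 0
assert sp.diff(sp.sin(x), x) == sp.cos(x)
```

Separating variables by hand gives $\ln y = \frac{1}{2}\ln(x^2+1) + C$, i.e. $y = C\sqrt{x^2+1}$, which is the family `checkodesol` confirms.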
In Season 1 Episode 2 of The Big Bang Theory, “The Big Bran Hypothesis”, Penny (Kaley Cuoco) asks Leonard (Johnny Galecki) to sign for a furniture delivery if she isn’t home. Unfortunately for Leonard and Sheldon, they are left with the task of getting a huge (and heavy) box up to Penny’s apartment.
To solve this problem, Leonard suggests using the stairs as an inclined plane, one of the six classical simple machines defined by Renaissance scientists. Both Leonard and Sheldon have the right idea here. Not only are inclined planes used to raise heavy loads, but they require less effort to do so. Though this makes moving a heavy load easier, the tradeoff is that the load must now be moved over a greater distance. So while, as Leonard correctly calculates, the effort required to move Penny’s furniture is reduced by half, he and Sheldon must push the furniture twice the distance it would take to raise it directly.
Mathematics of the Inclined Plane

Effort to lift block on Inclined Plane
Now we got an inclined plane. Force required to lift is reduced by the sine of the angle of the stairs… call it 30 degrees, so about half.
To analyze the forces acting on a body, physicists and engineers use rough sketches or free body diagrams. This diagram can help physicists model a problem on paper and to determine how forces act on an object. We can resolve the forces to see the effort needed to move the block up the stairs.
If the weight of Penny’s furniture is \(W\) and the angle of the stairs is \(\theta\) then
\[\angle_{\mathrm{stairs}}\equiv\theta \approx 30^\circ\] and \[\Rightarrow\sin 30^\circ = \frac{1}{2}\] So the effort needed to keep the box in place is about half the weight of the furniture box, or \(\frac{1}{2}W\), just as Leonard says.

Distance moved along Inclined Plane
While the inclined plane allows Leonard and Sheldon to push the box with less effort, the tradeoff is that the distance they move along the incline is twice the height to raise the box vertically. Geometry shows us that
\[\sin \theta = \frac{h}{d}\] We again assume that the angle of the stairs is approximately \(30^\circ\); since \(\sin 30^{\circ} = 1/2\), we have \(d=2h\).

Uses of the Inclined Plane
We see inclined planes daily without realizing it. They are used as loading ramps to load and unload goods. Wheelchair ramps also allow wheelchair users, as well as users of strollers and carts, to access buildings easily. Roads sometimes have inclined planes to form a gradual slope to allow vehicles to move over hills without losing traction. Inclined planes have also played an important part in history and were used to build the Egyptian pyramids and possibly used to move the heavy stones to build Stonehenge.
Lombard Street (San Francisco)
Lombard Street in San Francisco is famous for its eight tight hairpin turns (or switchbacks) that have earned it the distinction of being the crookedest street in the world (though this title is contested). These eight switchbacks are crucial to the street’s design, as they reduce the hill’s natural 27° grade, which is too steep for most vehicles. The grade is also a hazard to pedestrians, who are accustomed to a more reasonable 4.86° incline due to wheelchair navigability concerns.
Technically speaking, the “zigzag” path doesn’t make climbing or descending the hill any easier. As we have seen, all it does is change how various forces are applied. It requires less effort to move up or down, but the tradeoff is that you travel a longer distance. This has several advantages. Car engines can be less powerful to climb the hill, and in the case of descent, less force needs to be applied to the brakes. There are also safety considerations. A car will not accelerate down the switchback path as fast as if it were driven straight down, making speeds safer and more manageable for motorists.

This idea of using zigzagging paths to climb steep hills and mountains is also used by hikers and rock climbers, for very much the same reason Lombard Street zigzags. The tradeoff is that the distance traveled along the path is greater than if a climber goes straight up.
The Descendants of Archimedes
We don’t need strength, we’re physicists. We are the intellectual descendants of Archimedes. Give me a fulcrum and a lever and I can move the Earth. It’s just a matter of… I don’t have this, I don’t have this!
We see that Leonard had the right idea. If we assume, based on the size of the box, that the furniture weighs approximately 150 lbs (68 kg) and the effort is reduced by half, then they need to push with at least 75 lbs of force. This is equivalent to lifting a 34 kg mass. If they both push equally, they are each left pushing a very manageable 37.5 lbs, the equivalent of a 17 kg mass.
Penny’s apartment is on the fourth floor, and if we assume a standard US building design of ten feet per floor, this means a 30-foot vertical rise. The boys are left with the choice of lifting 150 lbs vertically 30 feet or pushing 75 lbs a distance of 60 feet. The latter is more manageable, but then again, neither of our heroes has any upper body strength.
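The arithmetic above can be sketched in a few lines (the weight and the 30° angle are the post's estimates, not measured values):

```python
import math

W = 150.0                     # weight of the box, lbs (assumed)
theta = math.radians(30)      # assumed angle of the stairs
h = 30.0                      # vertical rise, feet (three floors of 10 ft)

effort = W * math.sin(theta)  # force needed along the incline: W*sin(theta)
d = h / math.sin(theta)       # distance traveled along the incline: h/sin(theta)

print(round(effort, 6))       # 75.0 lbs -- half the weight
print(round(d, 6))            # 60.0 feet -- twice the vertical rise
```

The product of effort and distance is the same either way (150 × 30 = 75 × 60 = 4500 ft·lbs): the inclined plane trades force for distance, it doesn't reduce the work.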
My basic question that will relate to what I write here after is this: Is there any incorrect statements I have made in my description of the mathematics shown?
We start again with the Kronecker delta:
$$\delta \left( x,y \right) =\begin{cases}1 & x=y\\ 0 & x\neq y\end{cases}\qquad\text{(1)}$$
This allows us to express the digits of a number $a$ in base $b$ as a computable integer sequence whose length we know in advance: it is simply the total number of digits. The expression for this computation is:
$$d_{n} \left( a,b \right) =\sum _{k=1}^{ \Bigl\lfloor {\frac { \ln \left( a \right) }{\ln \left( b \right) }}\Bigr\rfloor +1} \left( \delta \left( n,k \right) -b\,\delta \left( n,k+1 \right) \right) \Bigl\lfloor{a{b}^{k- {\Bigl\lfloor\frac {\ln \left( a\right) }{\ln \left( b \right) }\Bigr\rfloor} -1}} \Bigr\rfloor \qquad\text{(2)}$$
For example, $a=12345$ in base $b=10:$ will, purely coincidentally of course, evaluate to the arithmetic progression with initial value of 1 and d=1 of length 5: $$\left\{ d_{{1}} \left( 12345,10 \right) ,d_{{2}} \left( 12345,10 \right) ,d_{{3}} \left( 12345,10 \right) ,d_{{4}} \left( 12345,10 \right) ,d_{{5}} \left( 12345,10 \right) \right\} = \left\{ 1,2,3,4, 5 \right\} $$
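A direct numeric sanity check of the digit sequence (my own sketch, using integer arithmetic to sidestep the floating-point logarithm in the summation bound of (2)):

```python
def digits(a, b):
    """Base-b digits of a, most significant first (d_1, ..., d_L)."""
    L, t = 0, a
    while t > 0:        # count the digits without floating-point log
        t //= b
        L += 1
    return [(a // b**(L - n)) % b for n in range(1, L + 1)]

print(digits(12345, 10))  # [1, 2, 3, 4, 5]
```

Summing `d * b**(L - n)` over the returned digits reconstructs `a`, which is exactly the expansion that (3) below rebuilds.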
But (2) will compute the $n^{th}$ digit of the number in any base $b>1$, and these values correspond to the coefficients of the
b-adic expansion of the number; thus we have as follows:
$$\mathcal{P} \left( a,b \right) =\sum _{n=0}^{ \Bigl\lfloor { \frac {\ln \left( a \right) }{\ln \left( b \right) }} \Bigr\rfloor +1}d_{n} \left( a,b \right) {b}^{n}\qquad\text{(3)}$$ Additionally, we may also use (2) for the calculation of the $p$-adic valuation of the factorial of a number $N$ for a prime $p$ as follows:
$$\sum _{k=1}^{ \Bigl\lfloor {\frac {\ln \left( N \right) }{\ln \left( p \right) }} \Bigr\rfloor +1} \Bigl\lfloor {\frac {N}{{p}^{k}} } \Bigr\rfloor =\frac {N}{p-1}-\frac{\sum _{j=1}^{ \Bigl\lfloor {\frac {\ln \left( N \right) }{\ln \left( p \right) }} \Bigr\rfloor +1}d_{j} \left( N,p \right)}{p-1}\qquad\text{(4)}$$
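Identity (4) is Legendre's formula: the $p$-adic valuation of $N!$ equals $(N - s_p(N))/(p-1)$, where $s_p(N)$ is the base-$p$ digit sum. A quick numeric check (my own sketch):

```python
def legendre(N, p):
    """Left side of (4): v_p(N!) = sum of floor(N / p^k)."""
    total, pk = 0, p
    while pk <= N:
        total += N // pk
        pk *= p
    return total

def digit_sum(N, p):
    """Base-p digit sum s_p(N), i.e. the sum over the d_j(N, p)."""
    s = 0
    while N > 0:
        s += N % p
        N //= p
    return s

N, p = 100, 3
print(legendre(N, p))                    # 48
print((N - digit_sum(N, p)) // (p - 1))  # 48, matching (4)
```

For $p=2$ the identity reduces to $v_2(N!) = N - s_2(N)$, which is easy to spot-check by hand.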
And I include the lemma below for the reader to see my justification for the result stated in (6): there exist $\alpha\in\Bbb N$ and $\beta\in\Bbb N$, with $k \in \{0,1,2,\ldots,\bigl\lfloor\frac{\ln\left(a\right)}{\ln\left(\beta\right)}\bigr\rfloor\}$ and $\alpha\in\{\beta^{k}\}$, such that $$\alpha\,\frac{\mathcal{P}\left(\frac{\mathcal{P}\left(a,\beta\right)}{\beta},\beta\right)}{\beta}=a\tag{5}$$
Because of this almost "periodic" nature, with an iteration of (3) returning the original number in its original base representation, I believe this is why, when any value greater than 1 is taken for $b$ up to infinity, this recurrence reduces to a finite set: under the axiomatic requirement that the elements of a set be unique, only values that are elements of the least residue system modulo the upper bound $N$ of $a$ (excluding 0) will occur. Thus, once enclosed in a nested set, the infinite inner set reduces to $N-1$ distinct values, which are those of an arithmetic progression with initial value 1 and $d=1$, of total length $N-1$:
where $R_N$ is the least residue system modulo $N$:
$$\left\{\left\{\frac{\mathcal{P}\left(\frac{\mathcal{P}\left(a,b\right)}{b},b\right)}{b}\right\}_{b=2\ldots\infty}\right\}_{a=1\ldots N-1}=R_{N}\setminus\left\{0\right\}\tag{6}$$
We can then declare a congruence with 0 modulo the product of $a$ and the base $b$ of the number system in which the number is represented:
$$\mathcal{P}\left(\frac{\mathcal{P}\left(a,b\right)}{b},b\right)\equiv 0\pmod{ab}\tag{7}$$
I do, however, want to note (in light of some recent concerning trends on the internet propagated by an individual whom I will not name) that this should in no way be misconstrued as the author holding a finitist view. It is entirely a consequence of the nature of this particular computation; please do not ask me if this is proof that "numbers have an end".
I have included this enumeration below if the reader is unclear as to what I have attempted to state in (4), as I do not know the formal terminology for describing this observed pattern.
$$\mathcal{P}\left(\tfrac{1}{2}\,\mathcal{P}\left(12,2\right),2\right)=12$$ $$\tfrac{1}{2}\,\mathcal{P}\left(\tfrac{1}{2}\,\mathcal{P}\left(123,2\right),2\right)=123$$ $$\mathcal{P}\left(\tfrac{1}{2}\,\mathcal{P}\left(1234,2\right),2\right)=1234$$ $$\tfrac{1}{2}\,\mathcal{P}\left(\tfrac{1}{2}\,\mathcal{P}\left(12345,2\right),2\right)=12345$$
$$32\,\mathcal{P} \left( \frac{1}{2}\mathcal{P} \left( 123456,2 \right) ,2 \right) =123456 $$
$$\frac{1}{2}\mathcal{P} \left( \frac{1}{2}\mathcal{P} \left( 1234567,2 \right) ,2 \right) =1234567 $$
An additional note: one can clearly see that this approach can be used to compute the digital root of a number, or more readily the sum of its digits in any base, which has an intimate relationship with the p-adic order of the number; I encourage the reader's review of it here
I will end at this point to await feedback, since if anything I have stated above is incorrect, any of the further content on this subject must be brought into question.
Although it follows directly from assuming lemma (4) to be true, I cannot verify this result to sufficiently high values without finding a more efficient means of calculation or purchasing a more powerful computer, so I encourage the reader to attempt to prove or disprove it, as I will be doing (unless this has already been done):
$$\left\{\left\{\frac{\ln\left(\frac{a}{\mathcal{P}\left(\frac{\mathcal{P}\left(a,\beta\right)}{\beta},\beta\right)}\right)}{\ln\left(\beta\right)}+1\right\}_{a=1\ldots\infty}\right\}_{\beta=2\ldots\infty}=R_{9}\tag{8}$$
It can quite possibly be explained by the fact that this is the least residue system for the number of distinct digits used in our number system. But so far everything here has been derived under the author's assumption that the base of the chosen number system can theoretically be any fixed natural number greater than 1, so if this were calculated in a number system of a higher base, we would obtain a least residue system of a correspondingly higher order. That is to say:
Defining $U_{b}\left(N\right)$ as the intersection of all of the digit sets $D$ of all divisors of a number $N$ over all number bases less than or equal to its base $b$:
$$U_b(N)=\bigcap^{b}_{\beta=2}\bigcap^{\tau(N)}_{j=1}D(p_{N,b,j}^{v_{N,b,j}},\beta)$$
The following distributions show the variance in the quantity with respect to N (first two plots) and the variance between consecutive values of b (last three plots).
I get that the spins can interact with more than just the nearest neighbour in the general sense. But why, in Ising's paper, when he solves the linear chain model, does he consider spins that can only interact with nearest-neighbour spins? In his paper he does mention the reasons, but I don't quite understand them as I don't understand German. Why did he say that the forces exerted by the spins fade with distance?
You can think of each little spin as being a small little magnetic dipole. The magnetic field of a dipole reads: $$ {\bf B} = \frac{\mu_0}{4\pi}(\frac{3{\bf r}({\bf{m}}\cdot{\bf{r}})}{r^5}-\frac{{\bf m}}{r^3}) $$ The first term vanishes in the plane perpendicular to the dipole. You can see that the resulting field drops off in strength cubically with distance. If the spins are arranged at equal intervals, then the interaction energy coming from next to nearest neighbors will be 1/8 of that coming from nearest neighbors. Since this is an order of magnitude smaller, it is reasonable to neglect everything but nearest neighbor interactions.
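As a quick check of the arithmetic in the answer (my own illustration), the $1/r^3$ falloff gives the relative coupling strengths:

```python
# A dipole field falls off as 1/r^3, so a neighbour at distance k lattice
# spacings couples at 1/k^3 of the nearest-neighbour strength.
ratios = {k: 1 / k**3 for k in (1, 2, 3, 4)}
print(ratios[2])  # 0.125: next-nearest neighbours are an order of magnitude weaker
```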
Also, it was not Ising who decided that he should work on this particular model. The model was given to him by his Advisor, Wilhelm Lenz.
[Weiss explanation] proposes electrical dipole effects for the effects of the individual elements (= elementary magnets). But then very considerable electrical field strengths would result through the summation of very slowly decreasing dipole fields which would be destroyed by the conducting power of the material. Therefore, we propose, in contrast to Weiss, that the forces which the elements exert upon each other quickly fade with distance so that, in a first approach, only neighboring atoms influence each other. We want to apply these propositions to a model as simple as possible.
Just to add a bit to Andy's answer (a mathematical approach). In order to understand why, you need to be familiar with the frequency response. An opamp has input and stray capacitance on the inputs, which reduces the closed-loop bandwidth, as stated in the answer.
(Schematic created using CircuitLab)
Without going into the math, you can find the loop gain (which you'd use to find gain and phase margin for stability purposes), and it turns out to be:
$$\text{Loop Gain}=A_{ol}\dfrac{R_1}{R_1+R_F}\dfrac{1}{\frac{s}{\omega_p}+1} $$
Now, the opamp open loop gain, \$A_{ol}\$ is frequency dependent and we could model it as a 2 pole system:
$$ A_{ol}=\dfrac{A_{DC}}{(\frac{s}{\omega_1}+1)(\frac{s}{\omega_2}+1)}$$
From this, you know that if the pole due to the input capacitance (\$\omega_p\$) is close to the \$\omega_2\$ pole, you are adding an extra 90 degree phase shift and that puts you closer to instability. In the ideal case, where \$C_p\to 0\$, this pole is far away from the second pole of the open loop opamp gain, but as you increase the resistors' values, the pole may move to a bad spot. That is why, from a math standpoint, you may have to reduce the resistor values to avoid this.
In order to compensate for this, you may place a capacitor in parallel with the feedback resistor (as you have it), and then choose \$R_1C_p=R_FC_F\$ and that will cancel out (ideally) the effect of the pole caused by the parasitic capacitance. You could go through the math and derive this, I just didn't want to expand a lot more on what already has a good answer by Andy.
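As a numerical sketch, the compensation condition \$R_1C_p = R_FC_F\$ fixes \$C_F\$ once the other values are known. The component values below are assumptions of mine, not from the answer, and I assume the parasitic pole sits at \$\omega_p = 1/((R_1 \| R_F)C_p)\$:

```python
import math

# Assumed (hypothetical) component values, for illustration only:
R1 = 10e3    # ohms
RF = 100e3   # ohms
Cp = 5e-12   # farads, parasitic input capacitance

# Compensation condition from the answer: R1 * Cp = RF * CF
CF = R1 * Cp / RF

# Assumed parasitic pole location: 1 / ((R1 || RF) * Cp)
Rpar = R1 * RF / (R1 + RF)
f_p = 1 / (2 * math.pi * Rpar * Cp)

print(f"CF = {CF * 1e12:.2f} pF, parasitic pole near {f_p / 1e6:.1f} MHz")
```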
Only one of the theorems (Stokes or Divergence) can apply. Which one applies depends on your situation.
Stokes' Theorem applies if your surface has a boundary curve. This would mean that your truncated cone (frustum) does not include the end caps.
The Divergence Theorem applies if your surface is closed. This would mean that your frustum includes the end caps.
The way your problem is stated, it would seem as if the end caps are not included, meaning that the Divergence Theorem does not apply. However, while Stokes' Theorem does apply, it's difficult to see how it could be used practically here. My advice would then be to compute the integral directly, probably via a sort of rotated cylindrical coordinate system.
If the end caps are included, then you can use the Divergence Theorem and get an answer of $0$. This is because if $\mathbf{F}(x,y,z) = (x, -2y, z)$, then $\text{div }\mathbf{F} = 1 - 2 + 1 = 0$, so $\iint \mathbf{F}\cdot \mathbf{n}\, dS = \iiint \text{div }\mathbf{F}\,dV = 0$.
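The divergence computation is easy to spot-check numerically; a small sketch of mine, using central differences (exact here since $\mathbf{F}$ is linear):

```python
def divF(p, h=1e-6):
    """Central-difference divergence of F(x, y, z) = (x, -2y, z) at point p."""
    F = lambda x, y, z: (x, -2 * y, z)
    x, y, z = p
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0])
            + (F(x, y + h, z)[1] - F(x, y - h, z)[1])
            + (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

print(divF((0.3, -1.2, 2.0)))  # 1 - 2 + 1 = 0, up to rounding
```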
As an interesting aside, note that if both of the theorems applied to a surface $S$, then we would have$$\int_C \mathbf{F}\cdot d\mathbf{r} = \iint_S \text{curl }\mathbf{F}\cdot \mathbf{n}\,dS = \iiint_E \text{div}(\text{curl }\mathbf{F})\,dV = 0,$$and the integral of any vector field over the boundary curve $C$ would be zero. This is a fancy way of saying that "the boundary of a boundary is zero." In other words, if a volume has a boundary surface, then that surface cannot have a boundary curve.
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the ground lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
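A quick machine check of the associativity discussed above (my own sketch; $\delta = 2$ is an arbitrary choice) using exact rationals:

```python
from fractions import Fraction as Q
import random

DELTA = Q(2)   # delta: an arbitrary choice for this illustration

def mul(u, v):
    """(a + b*sqrt(delta)) x (c + d*sqrt(delta)), as pairs (a, b), per the rule above."""
    a, b = u
    c, d = v
    return (a * c + b * d * DELTA, b * c + a * d)

# e.g. (1 + sqrt(2))^2 = 3 + 2*sqrt(2)
assert mul((Q(1), Q(1)), (Q(1), Q(1))) == (Q(3), Q(2))

# associativity on random rational triples
random.seed(0)
rnd = lambda: (Q(random.randint(-9, 9), random.randint(1, 9)),
               Q(random.randint(-9, 9), random.randint(1, 9)))
for _ in range(200):
    al, be, ga = rnd(), rnd(), rnd()
    assert mul(mul(al, be), ga) == mul(al, mul(be, ga))
print("associativity holds on all sampled triples")
```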
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that it is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^{n}}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
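Those partial sums can be computed exactly with rationals; a sketch of mine (base $b = 10$ is an arbitrary choice):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(b, M):
    """Exact partial sum  sum_{k=1}^{M} 1 / b^(k!)  as a rational number."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

s = [liouville_partial(10, M) for M in range(1, 6)]
assert all(x < y for x, y in zip(s, s[1:]))    # monotonically increasing
assert all(x < Fraction(2, 10) for x in s)     # bounded above
print(float(s[2]))  # 0.110001
```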
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
Conic quadratic optimization and second-order cone optimization are the same thing. I prefer the name conic quadratic optimization though.
Frequently it is asked on the internet what the computational complexity of solving conic quadratic problems is. Or the related question: what is the complexity of the algorithms implemented in MOSEK, SeDuMi or SDPT3?
Here are some typical questions
To the best of my knowledge almost all open source and commercial software employs a primal-dual interior-point algorithm using, for instance, the so-called Nesterov-Todd scaling.

A conic quadratic problem can be stated on the form
\[
\begin{array}{lccl}
\mbox{min} & \sum_{j=1}^d (c^j)^T x^j & \\
\mbox{st} & \sum_{j=1}^d A^j x^j & = & b \\
 & x^j \in K^j & \\
\end{array}
\]
where \(K^j\) is an \(n^j\)-dimensional quadratic cone. Moreover, I will use \(A = [A^1,\ldots, A^d ]\) and \(n=\sum_j n^j\). Note that \(d \leq n\). First observe that the problem cannot be solved exactly on a computer using floating-point numbers, since the solution might be irrational. This is in contrast to linear problems, which always have a rational solution if the data is rational.
Using for instance the primal-dual interior-point algorithm, the problem can be solved to \(\varepsilon\) accuracy in \(O(\sqrt{d} \ln(\varepsilon^{-1}))\) interior-point iterations, where \(\varepsilon\) is the accepted duality gap. The most famous variant having that iteration complexity is based on Nesterov and Todd's beautiful work on symmetric cones.
In each interior-point iteration the main computational task is to solve a linear system whose coefficient matrix is
\[ \label{neweq} \left [ \begin{array}{cc} H & A^T \\ A & 0 \\ \end{array} \right ] \mbox{ (*)} \]
Solving this system is the most expensive operation, and it can be done in \(O(n^3)\) operations using Gaussian elimination, so we end at the complexity \(O(n^{3.5}\ln(\varepsilon^{-1}))\).
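For concreteness, here is an illustration only: the data below are random, and \(H\) merely stands in for the positive definite scaled matrix that arises in one interior-point iteration. Assembling and solving (*) by dense Gaussian elimination looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3                                # variables, equality constraints
G = rng.standard_normal((n, n))
H = G @ G.T + np.eye(n)                    # symmetric positive definite stand-in
A = rng.standard_normal((m, n))

# the coefficient matrix (*)
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)
sol = np.linalg.solve(K, rhs)              # dense elimination, O((n + m)^3)
assert np.allclose(K @ sol, rhs)
print("system (*) solved; residual negligible")
```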
That is the theoretical result. In practice the algorithms usually work much better because they normally finish in something like 10 to 100 iterations and rarely employ more than 200 iterations. In fact, if the algorithm requires more than 200 iterations then typically numerical issues prevent the software from solving the problem.
Finally, a conic quadratic problem is typically sparse, which implies the linear system mentioned above can be solved much faster when the sparsity is exploited. Figuring out how to solve the linear equation system (*) in the lowest complexity when exploiting sparsity is NP-hard, so optimization software employs heuristics, such as minimum-degree ordering, that help cut the cost per iteration. If you want to know more, then read my Mathematical Programming publication mentioned below. One important fact is that it is impossible to predict the iteration complexity without knowing the problem structure and then doing a complicated analysis of it; i.e., the iteration complexity is not a simple function of the number of constraints and variables unless A is completely dense.

To summarize, primal-dual interior-point algorithms solve a conic quadratic problem in less than 200 times the cost of solving the linear equation system (*) in practice.
So can the best proven polynomial complexity bound be proven for software like MOSEK? In general the answer is no, because the software employs a bunch of tricks that speed up the practical performance but unfortunately destroy the theoretical complexity proof. In fact, it is commonly accepted that if the algorithm is implemented strictly as theory suggests then it will be hopelessly slow.
I have spent a lot of time on implementing interior-point methods, as documented by the Mathematical Programming publication, and my view on the practical implementations is that they are very close to theory.
This is due to the special behavior of SetDelayed (:=) with regards to the first argument (see e.g. this question): The arguments of the l.h.s. are evaluated by SetDelayed, which causes the error you are seeing - after all, SeriesData[_, _, coeff_, _, _, _] is not a valid SeriesData construct. This is what HoldPattern is designed for: It prevents evaluation ...
One good thing about Mathematica is that when doing Series[], Mathematica understands that the singularity of $\log(x)$ is different from $x^{\alpha}$ (for any $\alpha$) as $x \rightarrow 0$, and thus treats it separately. Like this:

Series[Log[x^2 + x], {x, 0, 1}]

Output:

Log[x]+x+O[x]^2

Because mathematically, $$\log(x^2+x) = \log(x(x+1)) = \log(x) + \log(x+1)$$
The O representation of an expansion point of Infinity is obtained with:

O[x, Infinity]

(see this part of the documentation for O). So, you just need to do:

M = {{1+5/s+6/s^3+O[s,Infinity]^4, 1+8/s+4/s^2+O[s,Infinity]^4},
     {1+2/s+2/s^3+O[s,Infinity]^4, 1-1/s+8/s^3+O[s,Infinity]^4}};
Det[M] //TeXForm

$-\frac{6}{s}-\frac{25}{s^2}+\frac{4}{s^3}+O\left(\frac{1}{s^{4}}\right)$
You can use the new-in-M12 function AsymptoticSolve for this:

AsymptoticSolve[{p == u + a u^2 + b u v + c v^2,
                 q == v + d v^2 + e u v + f v^2},
                {{u, v}, {0, 0}}, {{p, q}, {0, 0}, 3}]

{{u -> p - a p^2 + 2 a^2 p^3 - b p q + (3 a b + b e) p^2 q - c q^2 + (b^2 + 2 a c + b d + 2 c e + b f) p q^2 + (b c + 2 c d + 2 ...
Example 1. Let's define a series like this:

p[u_] := u + a u^2 + b u^3 + O[u]^4

Inverting the series p gives:

q[x_] := Evaluate[InverseSeries[p[x], x]];
q[p]

$p-a p^2+p^3 \left(2 a^2-b\right)+O\left(p^4\right)$

Check that q is an inverse series for p:

q[p[u]]

$u+O\left(u^4\right)$

Example 2. I believe the solution that you are looking for is ...
Let us consider your example 1 (I think example 2 can be done in future versions of Mathematica only). There are several cases depending on the parameters, and four branches of u as a function of p, up to the result of

s = Reduce[p == u + a*u^2 + b*u^3, u] // ToRadicals

(b != 0 && (u == -(a/(3 b)) - (2^(1/3) (-a^2 + 3 b))/(3 b (-2 a^3 + 9 a ...
This question is about conditions on a mother wavelet that generates a countable family of child wavelets via scaling and translation, that are both necessary and sufficient for the child wavelets to form a frame in the Hilbert space $L^2(\mathbb{R})$.
Here are the precise definitions of these concepts in the context of this question:
Let a mother wavelet be an element $\psi \in L^2({\mathbb{R}})$ with $\|\psi\| = 1$ and a finite admissibility constant $0 \lt C_{\psi} \lt \infty$ that is defined as follows: $$ C_{\psi} = \int_{- \infty}^{\infty} \frac{| \hat{\psi} (\omega) |^2}{|\omega|} d\omega $$ where $\hat{\psi}$ denotes the Fourier transform of $\psi$.
Let $\psi$ be such a mother wavelet and define the countable set of child wavelets $W$ as follows: Let $\sigma \gt 1, \tau \gt 0$ be real numbers and
$$ W := \{ \psi_{j, k}: j, k \in \mathbb{Z}, \psi_{j, k}(t) = \frac{1}{\sigma^{- \frac{j}{2}}} \psi(\frac{t - k \tau \sigma^{-j}}{\sigma^{-j}}) \} $$
Now let a frame in a separable Hilbert space be a countable set of vectors $\{ \phi_j \} $ such that there are constants $a, b \gt 0$ such that for every vector $f$ we have
$$ a \|f\|^2 \le \sum_j | \langle f, \phi_j \rangle |^2 \le b \|f\|^2 $$
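As a side illustration (entirely my own, not part of the question), the admissibility constant defined above can be estimated numerically for a concrete mother wavelet, the Mexican hat:

```python
import numpy as np

# Estimate C_psi for the Mexican-hat wavelet psi(t) ~ (1 - t^2) exp(-t^2/2),
# normalized so that ||psi||_2 = 1.
t = np.linspace(-20, 20, 2**14, endpoint=False)
dt = t[1] - t[0]
psi = (1 - t**2) * np.exp(-t**2 / 2)
psi = psi / np.sqrt(np.sum(psi**2) * dt)       # ||psi||_2 = 1

# Continuous-time Fourier transform approximated by the FFT
omega = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)
psi_hat = np.fft.fft(psi) * dt
domega = 2 * np.pi / (t.size * dt)

# C_psi = integral of |psi_hat(w)|^2 / |w| dw; the w = 0 sample is skipped
# (psi_hat(0) = integral of psi = 0 for the Mexican hat, so nothing is lost)
mask = omega != 0
C_psi = np.sum(np.abs(psi_hat[mask])**2 / np.abs(omega[mask])) * domega
print(C_psi)   # finite and positive, so this psi is admissible
```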
My question is: Are there conditions known on the triple $(\psi, \sigma, \tau)$ that are both necessary and sufficient for $W$, the set of child wavelets, to be a frame in $L^2(\mathbb{R})$?

(AFAIK there are conditions that are necessary, and other conditions that are sufficient, known since Ingrid Daubechies published her results in 1990. But there don't seem to be any conditions that are both necessary and sufficient.)
Even and Odd Numbers
Phan Thanh Tinh Coordinator 27/03/2017 at 22:39
b^3 = 3375 = 15^3 => b = 15 => a = 13 ; c = 17 => ac = 221
Selected by MathYouLike
Futeruno Kanzuki Coodinator 29/03/2017 at 21:18
b^3 = 3375
=> b = \(\sqrt[3]{3375}=15\)
Because a, b, c are three consecutive odd numbers and b = 15,
a = 13 ; b = 15 ; c = 17
ac = 13 . 17 = 221
Nễu Lả Ánh 28/03/2017 at 12:40
b^3 = 3375 = 15^3 => b = 15 => a = 13 ; c = 17 => ac = 221
1
A number sequence is such that the first two numbers are both 1, From the 3rd number onwards, each one is the sum of the previous two terms. How many numbers in the sequence are odd numbers in the first 1000 numbers?
FA KAKALOTS 09/02/2018 at 22:06
The first two numbers are both 1, which is odd, so the 3rd number (their sum) is even
=> The 4th number is odd and so is the 5th number
=> The 6th number is even
=> The 7th number is odd and so is the 8th number
Hence, the 3rd number and every third number after that are even.
So the numerical orders of the even numbers are divisible by 3
In the first 1000 numbers,there are :
(999 - 3) : 3 + 1 = 333 (even numbers) and :
1000 - 333 = 667 (odd numbers)
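The count is easy to machine-check:

```python
# Walk the first 1000 terms of the sequence (1, 1, 2, 3, 5, ...) and count parities.
a, b = 1, 1
odd = 0
for _ in range(1000):
    odd += a % 2
    a, b = b, a + b
print(odd, 1000 - odd)  # 667 odd numbers, 333 even numbers
```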
Phan Thanh Tinh Coordinator 24/04/2017 at 14:03
The first two numbers are both odd, so the 3rd number is even
=> The 4th number is odd and so is the 5th number
=> The 6th number is even
=> The 7th number is odd and so is the 8th number
Hence, the 3rd number and every third number after that are even.
So the positions of the even numbers are exactly those divisible by 3
In the first 1000 numbers, there are :
(999 - 3) : 3 + 1 = 333 (even numbers) and :
1000 - 333 = 667 (odd numbers)
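The parity count claimed in the answers above can be verified directly; a small sketch (function name mine):

```python
# Count odd terms among the first 1000 of 1, 1, 2, 3, 5, ...
def count_odd_terms(n_terms):
    a, b = 1, 1
    odd = 0
    for _ in range(n_terms):
        odd += a % 2       # 1 if the current term is odd
        a, b = b, a + b
    return odd

print(count_odd_terms(1000))   # 667
```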
Thám Tử THCS Nguyễn Hiếu 27/03/2017 at 20:52
The middle one of the 7 consecutive odd numbers is :
1337 : 7 = 191
So the 7 consecutive numbers are 185 ; 187 ; 189 ; 191 ; 193 ; 195 ; 197
Selected by MathYouLike
Nễu Lả Ánh 28/03/2017 at 12:41
The middle one of the 7 consecutive odd numbers is :
1337 : 7 = 191
So the 7 consecutive numbers are 185 ; 187 ; 189 ; 191 ; 193 ; 195 ; 197
Pham Hoang Nam 18/04/2017 at 21:37
The middle one of the 7 consecutive odd numbers is :
1337 : 7 = 191
So the 7 consecutive numbers are 185 ; 187 ; 189 ; 191 ; 193 ; 195 ; 197
Investigate whether the following statement is true: the sum 2^2009 + 3^2009 + 7^2009 + 9^2009 is an odd number.
FA KAKALOTS 09/02/2018 at 22:06
We have :
2^2009 is an even number
3^2009 = 3^(4.502) . 3 = .....1 . 3 = ......3 (odd number)
7^2009 = 7^(4.502) . 7 = .....1 . 7 = .....7 (odd number)
9^2009 = 9^(4.502) . 9 = ......1 . 9 = .......9 (odd number)
So : 2^2009 + 3^2009 + 7^2009 + 9^2009 = .....2 + .....3 + ....7 + .....9 ends in 2 + 3 + 7 + 9 = 21, i.e. in the digit 1 (odd number)
Futeruno Kanzuki Coordinator 29/03/2017 at 21:51
We have :
2^2009 is an even number
3^2009 = 3^(4.502) . 3 = .....1 . 3 = ......3 (odd number)
7^2009 = 7^(4.502) . 7 = .....1 . 7 = .....7 (odd number)
9^2009 = 9^(4.502) . 9 = ......1 . 9 = .......9 (odd number)
So : 2^2009 + 3^2009 + 7^2009 + 9^2009 = .....2 + .....3 + ....7 + .....9 ends in 2 + 3 + 7 + 9 = 21, i.e. in the digit 1 (odd number)
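Python's arbitrary-precision integers make the parity claim checkable outright:

```python
# Direct check of the parity and the last digit of the sum.
total = 2**2009 + 3**2009 + 7**2009 + 9**2009
print(total % 2)    # 1: the sum is odd
print(total % 10)   # 1: last digits 2 + 3 + 7 + 9 = 21, carrying to ...1
```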
A mathematical competition has 30 questions.
5 marks are awarded for a correctly answered question.
1 mark is awarded for an unanswered question
1 mark is deducted for an incorrectly answered question
Show that the total score of all participants is an even number
»ﻲ2004#ﻲ« 29/03/2017 at 06:10
We have : abcabc = abc x 1001 = abc x 7 x 11 x 13
Because abc is a prime number, the number of divisors of abcabc is :
(1 + 1)^4 = 16
Indratreinpro 02/04/2017 at 22:12
Let a,b,c be the number of the correctly answered questions , unanswered questions , incorrectly answered questions of a participant respectively
Then the participant's total score is 5a + b - c with a + b + c = 30
If a is odd, then 5a is odd and b + c = 30 - a is odd, so b - c is also odd and 5a + b - c is even
If a is even, then 5a is even and b + c = 30 - a is even, so b - c is also even and 5a + b - c is even
From 2 cases,we know that the total score of all participants is always an even number
Nếu bây giờ ngỏ ý . Liệu có còn kịp không 28/03/2017 at 12:29
Let a,b,c be the number of the correctly answered questions , unanswered questions , incorrectly answered questions of a participant respectively
Then the participant's total score is 5a + b - c with a + b + c = 30
If a is odd, then 5a is odd and b + c = 30 - a is odd, so b - c is also odd and 5a + b - c is even
If a is even, then 5a is even and b + c = 30 - a is even, so b - c is also even and 5a + b - c is even
From 2 cases,we know that the total score of all participants is always an even number
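The two-case argument can also be confirmed exhaustively over every possible answer sheet; a sketch (names mine):

```python
# Check that 5a + b - c is even for every a + b + c = 30 with a, b, c >= 0.
def all_scores_even(questions=30):
    for a in range(questions + 1):
        for b in range(questions - a + 1):
            c = questions - a - b
            if (5 * a + b - c) % 2 != 0:
                return False
    return True

print(all_scores_even())   # True
```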
There are 71 cards numbered 1, 2, 3, ..., 71. Show that these cards can be arranged in a line so that the sum of any two neighbouring numbers is a prime number.
A page of a book was torn out. The sum of the remaining page numbers is 3030. How many pages does the book have? Which page was removed?
FA KAKALOTS 09/02/2018 at 22:07
Let n be the number of pages and i the number of the removed page.
(1 + 2 + ... + n) - i = 3030
=> 3030 + i = n(n + 1)/2
Pham Hoang Nam 18/04/2017 at 21:41
Let n be the number of pages and i the number of the removed page.
(1 + 2 + ... + n) - i = 3030
=> 3030 + i = n(n + 1)/2
Nhat Lee 16/04/2017 at 17:33
Let n be the number of pages and i the number of the removed page.
(1 + 2 + ... + n) - i = 3030
=> 3030 + i = n(n + 1)/2
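Solving that equation over the integers is a short search; a sketch (names mine) that also confirms the answer is unique:

```python
# Find all (n, i) with n*(n+1)/2 - i == 3030 and 1 <= i <= n.
def torn_page(total=3030):
    solutions = []
    n = 1
    while n * (n + 1) // 2 - n <= total:   # beyond this, i > n always
        i = n * (n + 1) // 2 - total
        if 1 <= i <= n:
            solutions.append((n, i))
        n += 1
    return solutions

print(torn_page())   # [(78, 51)]: a 78-page book, page 51 removed
```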
Phan Thanh Tinh Coordinator 27/03/2017 at 22:47
If a is odd, then a^3 and a^2 are also odd, so a^3 + a^2 + 1 is odd
If a is even, then a^3 and a^2 are also even, so a^3 + a^2 + 1 is odd
So a^3 + a^2 + 1 is always odd for any integer a
Selected by MathYouLike
»ﻲ2004#ﻲ« 29/03/2017 at 06:11
If a is odd, then a^3 and a^2 are also odd, so a^3 + a^2 + 1 is odd
If a is even, then a^3 and a^2 are also even, so a^3 + a^2 + 1 is odd
So a^3 + a^2 + 1 is always odd for any integer a
Nếu bây giờ ngỏ ý . Liệu có còn kịp không 28/03/2017 at 12:31
If a is odd, then a^3 and a^2 are also odd, so a^3 + a^2 + 1 is odd
If a is even, then a^3 and a^2 are also even, so a^3 + a^2 + 1 is odd
So a^3 + a^2 + 1 is always odd for any integer a
Run my EDM 20/03/2017 at 12:22
Put \(\left(2k-1\right)\left(2k+1\right)=123476543\)
\(\Leftrightarrow4k^2-1-123476543=0\)
\(\Leftrightarrow4k^2-123476544=0\)
\(\Leftrightarrow4\left(k^2-30869136\right)=0\)
\(\Leftrightarrow k^2=30869136=5556^2\)
\(\Leftrightarrow k=5556\)
\(\Rightarrow\left\{{}\begin{matrix}2k-1=11111\\2k+1=11113\end{matrix}\right.\)
Ans : 11111 & 11113.
Donald Trump selected this answer.
FA KAKALOTS 09/02/2018 at 22:08
Put (2k − 1)(2k + 1) = 123476543
<=> 4k^2 − 1 − 123476543 = 0
<=> 4k^2 − 123476544 = 0
<=> 4(k^2 − 30869136) = 0
<=> k^2 = 30869136 = 5556^2
<=> k = 5556
=> 2k − 1 = 11111 ; 2k + 1 = 11113
Ans : 11111 & 11113.
Nguyệt Nguyệt 20/03/2017 at 11:33
123476543 = 11111 x 11113
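Both the factorization and the algebra with k are quick to confirm:

```python
# Verify 123476543 = 4k^2 - 1 with k = 5556, i.e. 11111 * 11113.
k_squared = (123476543 + 1) // 4
print(k_squared)                    # 30869136
print(5556 * 5556 == k_squared)     # True
print(11111 * 11113 == 123476543)   # True
```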
It is given that (a + b + c) is an odd number. Show that (a + b + c) \(\times\) (a - b + c) is also an odd number.
6 DOF Robot Arm Introduction
This tutorial deals with the theory and implementation of the control of a 6 degree of freedom robot arm. The main idea is to use a C# form to obtain inputs from the user and send those inputs to MATLAB to perform trajectory calculations. The commands are then sent in regular intervals through Bluetooth to Arduino which then sends individual axis commands to a PWM driver via I2C.
Components
As it turns out, in this system most of the components could be bought from an electronics store. The main components are listed below.
Arduino Pro Mini (Microcontroller) PCA9685 (16-Channel PWM Driver) HC-06 (Bluetooth Module) Variable DC step-down (~6V at 6A) UF5404 (High current diodes) 6x hobby servos (High torque metal geared) Robot Arm Chassis/Claw
Note: The main issue that must be dealt with is the potential high current draw at 6V. Most 6V supplies cannot deliver such large peak currents, and therefore voltage regulators will be used to drop a 12V DC source to 6V. In my case, I was able to obtain two 3A variable voltage regulators and connect the outputs together using diodes to isolate the supplies from each other. The idea is that if the current draw on one regulator is too large, the other regulator will share the load. 6A total was chosen since each servo was budgeted approximately 1A (this is a rule of thumb). The variable-voltage aspect is important because the two diodes in the OR configuration (diode OR gate) may have different voltage drops.
Schematic
Below is the schematic of the hardware connections between the Arduino, HC-06, voltage regulators and PCA9685 driver. The connections between the different components can be made directly, as each of the boards has the necessary protection circuitry and logic-level shifting. Note that the servos are not shown.
Communication
This section details how communication is implemented from C# to Arduino via Bluetooth and from C# to MATLAB.
C# to Arduino (Bluetooth)
In order to use Bluetooth communication, you must first pair your HC-06 or other Bluetooth device with your computer. The computer will then expose it as a COM port device, which is exactly how an Arduino shows up. The relevant default settings for the HC-06 are shown below:
Baud Rate: 9600 (8N1) PIN: 1234 Name: HC-06
8-N-1 is a common notation that specifies the format of the data coming in and going out: 8 data bits, No parity bit, and 1 stop bit per packet. The baud rate indicates how fast the transfer occurs, 9600 meaning 9600 bits per second. With 8 data bits, only a number between 0-255 can be sent in one packet. However, since we are controlling the angles of servos, only numbers between 0-180 (the rotation in degrees) are needed, which leaves the values above 180 free to act as control bytes.
Implementation
The idea is that a start byte indicating that a command is beginning is sent first to the Arduino. The Arduino then listens for the information until it sees another start byte in which case it realizes another command is being sent. In our case, there are 6 servos, meaning 6 numbers from 0-180 are sent in each command. The data sent to Arduino is summarized in a list below.
Start byte (255) Axis command 0 (0-180) Axis command 1 (0-180) Axis command 2 (0-180) Axis command 3 (0-180) Axis command 4 (0-180) Axis command 5 (0-180)
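Before wiring anything, the 7-byte frame is easy to prototype on the PC side; here is a Python sketch of the same protocol (the helper names are mine, not part of the project code):

```python
START_BYTE = 255

def pack_command(angles):
    """Build the 7-byte frame: start byte followed by six angles (0-180)."""
    if len(angles) != 6 or not all(0 <= a <= 180 for a in angles):
        raise ValueError("expected six angles in 0-180")
    return bytes([START_BYTE] + list(angles))

def parse_stream(data):
    """Mimic the Arduino loop: sync on the start byte, then read six angles."""
    commands = []
    i = 0
    while i < len(data):
        if data[i] == START_BYTE and i + 6 < len(data):
            commands.append(list(data[i + 1:i + 7]))
            i += 7
        else:
            i += 1   # skip bytes until the next start byte
    return commands

frame = pack_command([90, 45, 135, 0, 180, 30])
print(parse_stream(frame))   # [[90, 45, 135, 0, 180, 30]]
```

Because angles never exceed 180, the value 255 can never appear in the payload, which is what makes resynchronizing on the start byte safe.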
Assuming that you know the basics of C# using the Toolbox for Forms, just add a "SerialPort" to the form and configure it to 9600 8N1. Note that if you used AT commands with the HC-06 you can change the baud rate. In my case I used 19200 baud rate. The settings for reference are given below.
Then the nitty-gritty around displaying the COM ports and selecting them is needed; the code, in the form of a Visual Studio solution, will be provided at the end of the tutorial :). The portion of the code to look for is where the transmission begins on the serial port. In theory, you can send the bytes in any order, and you can also send any number of bytes during one command. The snippet is given below so you know what to look for. Following the C# Master Code, the Arduino Slave code for receiving commands is also given. The main idea is that the commands from C# are stored in the Arduino after these key executions are complete.
C# Master Code

byte[] TxBytes = new byte[7]; // create vector of new bytes
// send all bytes
TxBytes[0] = Convert.ToByte(255);
TxBytes[1] = Convert.ToByte(vScrollBar1.Value);
TxBytes[2] = Convert.ToByte(vScrollBar2.Value);
TxBytes[3] = Convert.ToByte(vScrollBar3.Value);
TxBytes[4] = Convert.ToByte(vScrollBar4.Value);
TxBytes[5] = Convert.ToByte(vScrollBar5.Value);
TxBytes[6] = Convert.ToByte(vScrollBar6.Value);
if (serCOM.IsOpen == true)
{
    // write all the bytes
    for (int i = 0; i < 7; i++)
    {
        serCOM.Write(TxBytes, i, 1);
    }
}

Arduino Slave Code

void loop()
{
    // receive command from the serial
    if (Serial.available() >= 7) // wait for sufficient data
    {
        byte inByte = Serial.read(); // get incoming byte
        if (inByte == 255) // if it is a start byte, read all the data
        {
            for (int i = 0; i < 6; i++)
            {
                pos[i] = Serial.read();
            }
        }
    }
}
Note that once the Arduino is hooked up to the HC-06, transmission of data can already happen. As long as the baud rate of the HC-06 and the Arduino are the same the communication will work. It is a good idea to just test the HC-06 and Arduino communication at this point.
Trajectory Generation
The whole point of offloading calculations to MATLAB is that there are many easy to use functions that make MATLAB programming a lot easier than writing functions from scratch and working with different types of arrays or lists. That being said, it is of course possible to implement these calculations without MATLAB if you either don't have it or just don't want to use it.
The idea behind trajectory generation is that you know a path that you wish to move along, as well as the speed at which you wish to move along it. The question is: what commands need to be sent so that the point of interest, often called an "end-effector" (such as a claw or milling bit), moves along that path with a specified velocity profile. In this case, we will do a simple line between two points, at a constant velocity.
My tutorial here on trajectory generation covers lines and general curves. Note that in those tutorials, a method in which the velocity ramps upwards is used. This is necessary in precision systems; however, servo commands are very low resolution, and velocity "ramping" has no real benefit here.
Assuming that the trajectory is now planned (it should come out as a vector of X,Y components), we now work on deriving the relationships. The general approach is to linearize the system in two variables and then invert the system explicitly and incrementally to derive the path. Suppose we have a robot with 3 linkages and 3 motors, the first placed at the origin, with joint positions \(x_1,x_2\).
The equations describing \(x_1,x_2\) are given below. Let us suppose that \(x_2\) is the end effector for now in order to avoid a redundant degree of freedom.
$$ x_1 = (L_1 cos(\theta_1),L_1 sin(\theta_1)) $$
$$ x_2 = (L_2 cos(\theta_1 + \theta_2),L_2 sin(\theta_1 + \theta_2)) + x_1 $$
Now linearize \(x_2\)
$$ \Delta x_2 \approx \frac{dx_2}{d\theta_1} \Delta \theta_1 + \frac{dx_2}{d\theta_2} \Delta \theta_2 $$
Where
$$ \Delta x_2 = x_{2,n} - x_{2,n-1} $$
$$ x_{2,0} = x_2(\theta_{1,0} ,\theta_{2,0}) $$
Finally, we can write the matrix equation to be solved for each increment, where \(u = \partial x_2/\partial\theta_1\) and \(v = \partial x_2/\partial\theta_2\) (the subscripts \(x,y\) denote components)
$$ \begin{bmatrix} u_x & v_x \\ u_y & v_y \end{bmatrix} \begin{bmatrix} \Delta \theta_1 \\ \Delta \theta_2 \end{bmatrix} = \begin{bmatrix} \Delta x_{2,x} \\ \Delta x_{2,y} \end{bmatrix} $$
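The same incremental update can be sketched in plain Python (the link lengths, start angles, and target are my own test values) to check that solving the 2x2 system at each step really tracks the line:

```python
import math

L1, L2 = 1.0, 1.0   # assumed link lengths for the test

def fk(t1, t2):
    """Forward kinematics: position of the end effector x2."""
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def jacobian(t1, t2):
    """Rows are (x, y); columns are d/dtheta1 and d/dtheta2."""
    return ((-L1 * math.sin(t1) - L2 * math.sin(t1 + t2), -L2 * math.sin(t1 + t2)),
            ( L1 * math.cos(t1) + L2 * math.cos(t1 + t2),  L2 * math.cos(t1 + t2)))

def follow_line(t1, t2, target, steps=500):
    """March along the straight line, solving J * dtheta = dx each step."""
    x, y = fk(t1, t2)
    dx, dy = (target[0] - x) / steps, (target[1] - y) / steps
    for _ in range(steps):
        (a, b), (c, d) = jacobian(t1, t2)
        det = a * d - b * c               # vanishes at a singular pose
        t1 += ( d * dx - b * dy) / det
        t2 += (-c * dx + a * dy) / det
    return t1, t2

t1, t2 = follow_line(0.5, 0.5, (1.2, 0.8))
x, y = fk(t1, t2)
print(abs(x - 1.2) < 0.01 and abs(y - 0.8) < 0.01)   # True
```

The residual error comes only from linearizing at each step; more (smaller) increments drive it down.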
The critical MATLAB code to generate a path between \(x_0\) and \(x_1\) at some velocity is shown below.
Matlab Code

x = @(t1,t2) [L2*cos(t1+t2)+L1*cos(t1); L2*sin(t1+t2)+L1*sin(t1)];
% Suppose the plot was a line from x0 to 10,10 split into 100 pieces
x0 = x(th1,th2);
x1 = [px,py]';
T = norm(x1-x0)/vel;   % time to complete motion
N = round(T/(1/50));   % divide time by the update time
posx = [linspace(x0(1),x1(1),N)' linspace(x0(2),x1(2),N)'];
inposx = diff(posx);
n = length(inposx);
% store all angles over time
tVector = zeros(n+1,2);
tVector(1,:) = [th1,th2];
for i=1:n
    % solve for the incremental angle
    A = [dth1(tVector(i,1),tVector(i,2)) dth2(tVector(i,1),tVector(i,2))];
    incth = linsolve(A,inposx(i,:)');
    % add the incremental angle to get the absolute angle
    tVector(i+1,:) = tVector(i,:) + incth';
end

C# Code for MATLAB
First, I recommend that you look at MATLAB's documentation on calling functions from C#; you can copy-paste the code and it will work, provided you've added the "MATLAB Application Type Library" via the References tab. I found that one line was buggy, namely the line specifying the directory: basically, an extra single quote was added to the address. The corrected line is shown below.
matlab.Execute(@"cd 'C:\Users\Owen\Documents\MATLAB\ROBOT ARM'");
The critical C# code that gets an array from MATLAB and stores it to a C# variable is shown below.
object result = null;
// Call the MATLAB function gentraj
matlab.Feval("gentraj", 1, out result,
    Convert.ToDouble(txtServo2.Text),
    Convert.ToDouble(txtServo3.Text),
    Convert.ToDouble(px),
    Convert.ToDouble(py),
    Convert.ToDouble(txtSpeed.Text));
var res = (result as object[]).Select(x => (double[,])x).ToArray();
object t_angArray = res.GetValue(0);
angArray = (double[,])t_angArray;

Conclusion
Now that you have had a basic run-through of the methods used, hopefully you can examine the code and figure out what is going on. Below is the source code for each of the components. It is assumed that you have installed the PCA9685 library. Adafruit has very good documentation and instructions online on how you can get the driver working.
Find principal part of Laurent expansion of $f(z) = \frac{1}{(z^2+1)^2}$ about $z=i$.
My attempt at a solution: First, I noticed that if I plug in $z=i$, I get a zero in the denominator. This leads me to think that it is an isolated singularity. If I look at the classification of singularities, I believe it is a pole since $\lim_{z \to z_0} \vert f(z) \vert = \infty$ for $z_0 = i$. By recalling the definition of the
principal part of $f$, I am looking for the series containing all negative powers of $(z-z_0)$ in the Laurent expansion $\sum_{k=-\infty}^{\infty} a_k(z-z_0)^k$.
Based on what I have seen, I need to find the partial fraction decomposition of $f$. If so, then I have $\frac{1}{(z^2+1)^2} = \frac{A}{z^2+1}+\frac{B}{(z^2+1)^2}$. However, I think I am doing something wrong. From here, I believe I am supposed to use a geometric series.
I am using the textbook
Complex Analysis, Third Edition by Joseph Bak and Donald J. Newman.
Any assistance and clarification would be greatly appreciated.
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of ten billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetishism and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
I'm looking for directions to the literature that might contain fairly explicit constructions that might be called (the algebra of functions on) the "derived mapping space" from a simplicial set to a simplicial (affine) scheme. To make the question reasonably self-contained and to give a sense of my background and current understanding, I will begin with some general abstract nonsense, and then point to a construction that I have found in the literature and that does not work to my satisfaction.
A little abstract nonsense
Let $C$ and $D$ be categories. In a bit, I will give them specific values, but for now I will ask only that $D$ be small, and that $C$ have any necessary limits and colimits. A
(generalized) $D$-object in $C$ is a presheaf on $D$ valued in $C$, i.e. a functor $X : D^{\mathrm{op}}\to C$. Each $d\in D$ determines (and is determined by) a $D$-object in $\mathrm{SET}$, by the usual Yoneda embedding $d \mapsto \operatorname{hom}_D(-,d)$. It will be convenient for me to denote the presheaf $\operatorname{hom}_D(-,d)$ by $[d]$, and given $k\in D$ and $X : D^{\mathrm{op}}\to C$, I will write $X_k$ for $X(k)$.
For $x\in C$ and $s\in \mathrm{SET}$, there is an object $\operatorname{maps}(s,x) = x^s = \prod_s x \in C$, which is the $s$-fold cartesian product of $x$ with itself. Now, let $X : D^{\mathrm{op}} \to C$ be a $D$-object in $C$, and $S: D^{\mathrm{op}} \to \mathrm{SET}$ a $D$-set. Then there is an object $\operatorname{hom}_D(S,X) \in C$, which is built as a certain limit ranging over the objects $\operatorname{maps}(S_k,X_k)$ for $k\in D$. Even better, the categories of $D$-sets and $D$-objects in $C$ have products — the ("categorical") cartesian product of functors is constructed by taking the product for each — and so we can define an enriched hom by: $$ \underline{\operatorname{hom}}_D(S,X) : D^{\mathrm{op}} \to C, \quad d \mapsto \operatorname{hom}_D(S \times [d],X). $$ Finally, there is one more, much more naive "mapping space" between $D$-objects, which I will denote by $\operatorname{maps}(S,X) : D \times D^{\mathrm{op}} \to C$, sending $(d,k) \mapsto \operatorname{maps}(S_d,X_k)$.
A little concrete nonsense
I will be interested in the situation where $D = \Delta$ is the category of finite nonempty totally-ordered sets (and monotonic maps). It has a skeletalization with objects indexed by the natural numbers, given by $[n] = \lbrace 0 < \dots < n \rbrace$. Note that the $\Delta$-set $[0]$ is terminal, so $\underline{\operatorname{hom}}_D(S,X)_0 = \operatorname{hom}_D(S,X)$. An object $X : D^{\mathrm{op}} \to C$ determines, among other data, two maps $X_1 \rightrightarrows X_0$, corresponding to the two inclusions $[0] \rightrightarrows [1]$. By definition, $\pi_0(X) \in C$ is the coequalizer of the two arrows $X_1 \rightrightarrows X_0$.
Fix a commutative ring $\mathbb K$. I will not be upset if you would like to make further assumptions on $\mathbb K$, e.g. that $\mathbb K \supseteq \mathbb Q$, or that $\mathbb K$ is an algebraically closed field. I believe that I am primarily interested in the following two values for $C$, but I am open to being convinced otherwise:
$C = \mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$ is the category of affine schemes over $\mathbb K$. $C = \mathrm{Mod}_{\mathbb K}$ is the category of $\mathbb K$-modules.
There is a well-known contravariant forgetful functor $\mathcal O$ from $\mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$ to $\mathrm{Mod}_{\mathbb K}$.
I will also take inspiration from the case $C = \mathrm{Top}$ of nice enough topological spaces.
There is a further functor $\operatorname{ch}: \Delta\mathrm{Mod}_{\mathbb K} \to \mathrm{DGMod}_{\mathbb K}$ (the category of homologically-graded chain complexes of $\mathbb K$-modules) which sets $\operatorname{ch}(X)_k = X_k$ with differential a certain well-known alternating sum. There is a standard symmetric monoidal structure on $\mathrm{DGMod}_{\mathbb K}$ which sums the homological degrees, and for this structure $\operatorname{ch}$ is not strongly monoidal, but there is a canonical
Eilenberg–Zilber map $\operatorname{ch}(X) \otimes \operatorname{ch}(Y) \to \operatorname{ch}(X \otimes Y)$, which sums over all $(k+\ell)$-simplices in a product of a $k$-simplex with an $\ell$-simplex (closely related is the fact that for simplicial sets, the geometric realization of a product is homeomorphic to the product (in the category of compactly-generated spaces) of geometric realizations), making $\operatorname{ch}$ into a "lax symmetric monoidal functor". The Eilenberg–Zilber map is a quasi-isomorphism, and one choice of quasi-inverse is the (non-symmetric) Alexander–Whitney map; if $\mathbb K \supseteq \mathbb Q$, there are other more symmetrical choices. In any case, the Eilenberg–Zilber map means that any simplicial commutative algebra determines canonically a dg commutative algebra. Of course, when $C = \mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$, a simplicial affine scheme $X : \Delta^{\mathrm{op}} \to \mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$ determines a cosimplicial commutative algebra $\mathcal{O}(X)$, and so $\operatorname{ch}(\mathcal{O}(X))$ is not quite a dgca (the Alexander–Whitney map makes it into a dga). Anyway, this all won't matter much for me.
What I wanted to mention about all this is that if $X : \Delta^{\mathrm{op}} \to \mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$ is a simplicial affine scheme, then $\operatorname{H}_\bullet(\operatorname{ch}(\mathcal{O}(X)))$ is canonically a graded commutative algebra (supported in nonpositive homological degrees; and there is more algebraic data in the form of Massey products) and $$ \operatorname{H}_0(\operatorname{ch}(\mathcal{O}(X))) = \mathcal{O}(\pi_0(X)). $$
Examples
Let $A$ be a commutative $\mathbb K$-algebra, with corresponding affine scheme $X = \operatorname{spec}(A) \in \mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$. If you want, you can extend $X$ to a constant functor $X : \Delta^{\mathrm{op}} \to \mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$. Let $S^1$ denote the simplicial set generated by one nondegenerate $0$-simplex and one nondegenerate $1$-simplex. Then $\operatorname{maps}(S^1, X)$ is a cosimplicial affine scheme (or simplicial cosimplicial, but constant in the simplicial direction), and so $\mathcal{O}(\operatorname{maps}(S^1, X))$ is a simplicial commutative algebra. By definition,$$ \operatorname{HH}_\bullet(A) = \operatorname{H}_\bullet(\operatorname{ch}(\mathcal{O}(\operatorname{maps}(S^1, X)))) $$is the
Hochschild homology of $A$. The complex $\operatorname{ch}(\mathcal{O}(\operatorname{maps}(S^1, X)))$ can be alternately defined by making a certain choice of resolution of $A$ as an $(A\otimes A)$-module, and using this resolution to construct the derived tensor product $A \otimes_{A\otimes A} A$.
Let $G$ be an affine algebraic group over $\mathbb K$ (e.g. a finite group). There is a well-known simplicial affine scheme $X = \mathrm{B}G$ whose space of $k$-simplices is $G^k$, with boundary maps that encode the multiplication. Let $M$ be a simplicial set, and I am primarily interested in the case that $M$ is a simplicial finite set describing the homotopy type of a finite-dimensional compact manifold. The simplicial affine scheme $\underline{\operatorname{hom}}_\Delta(M,\mathrm{B}G)$ is the space of
$G$-local systems on $M$. In particular, $\pi_0(\underline{\operatorname{hom}}_\Delta(M,\mathrm{B}G))$ is the character variety of $M$.
My question
I am looking for a general construction, of the flavor above, that incorporates both examples. More specifically, the construction should:
input a simplicial (finite) set $M$ and a a simplicial affine scheme $X$ over $\mathbb K$ output a chain complex $V(M,X)$ over $\mathbb K$, supported in both directions, that deserves to be thought of as a "derived space of global functions on the space of maps from $M$ to $X$" have good functoriality and monoidality properties in both variables (implying for instance that $V(M,X)$ has a strongly-homotopy commutative dg algebra structure, coming from various diagonal and Eilenberg–Zilber-like maps) if $X = \operatorname{spec}(A)$ is a constant simplicial scheme, then $V(M,\operatorname{spec}(A))$ is the generalized Hochschild homology of $A$ determined by $M$ $\operatorname{H}_0(V(M,X)) = \mathcal{O}(\pi_0(\underline{\operatorname{hom}}_\Delta(M,X)))$ Some near misses
The problem seems to be when $X$ is not "simply connected". In particular, I have not come across a construction that works even when $X = \mathrm{B}G$ for $G$ a finite simple group.
Greg Ginot and collaborators (see e.g. Higher order Hochschild cohomology, Derived Higher Hochschild Homology, Topological Chiral Homology and Factorization algebras, and A Chen model for mapping spaces and the surface product) have extended work by Pirashvili defining the generalized Hochschild homology. Let $A$ be a cdga over $\mathbb K \supseteq \mathbb Q$ and let $M$ be a simplicial set. Then there is a simplicial cdga $\int_M A = \mathcal{O}(\operatorname{maps}(M,\operatorname{spec}(A)))$ with good functoriality and monoidality properties, which agrees up to quasi-isomorphism with Lurie's "topological chiral homology."
By definition, a
quasi-isomorphism of cdgas is a morphism that induces isomorphisms on homology. One of the things that Ginot et al prove is that a quasi-isomorphism $A \to B$ induces a quasi-isomorphism $\int_M A \to \int_M B$. Thus in particular when $A = \mathcal{O}(\mathrm{B}G) = \operatorname{Ext}_G(\mathbb K,\mathbb K)$, for any meaning of this, and $G$ is a finite simple group, then the canonical map $\mathbb K \to A$ is a quasi-isomorphism, and so the chain complex $\int_M A$ will never contain data. So this construction fails my last condition, e.g.: $\pi_0(\underline{\operatorname{hom}}_\Delta(S^1,\mathrm{B}G)) = G/G^{\mathrm{conj}}$ and $\mathcal{O}(\pi_0(\underline{\operatorname{hom}}_\Delta(S^1,\mathrm{B}G))) = \mathcal{O}(G)^G$ is the algebra of class functions on $G$, whereas $\operatorname{H}_0(\int_{S^1}\mathcal{O}(\mathrm{B}G)) = \mathbb K$.
Ben-Zvi and Nadler have discussed loop spaces and their relationships to Hochschild homology and representations. They run into what I believe are related issues, but work primarily not with the space of loops in a derived scheme, but rather with the infinitesimal neighborhood of the constant loops within that space. I should also mention that for my particular application, I really am looking for an explicit one-categorical construction (akin to the Pirashvili-style work), rather than quickly moving to model or $\infty$ categories.
Finally, perhaps the result I should have started with is one I learned from a review by Loday (original references are included there). Suppose that $M$ is a simplicial approximation of an $n$-dimensional manifold, and that $X$ is a simplicial set which is
$n$-connected, i.e. $\pi_{\leq n}(X)$ is trivial. (So $1$-connected means connected and simply connected.) I can build a cosimplicial simplicial set $\operatorname{maps}(M,X)$, and a simplicial set $\underline{\operatorname{hom}}_\Delta(M,X)$, as discussed above. There is a _free $\mathbb K$-module_ functor $\mathbb K : \mathrm{SET} \to \mathrm{Mod}_{\mathbb K}$, and with this I get a cosimplicial simplicial $\mathbb K$-module $\mathbb K\operatorname{maps}(M,X)$ and a simplicial $\mathbb K$-module $\mathbb K\underline{\operatorname{hom}}_\Delta(M,X)$. Of course, given a cosimplicial simplicial $\mathbb K$-module, I can apply the "alternating sum of boundaries" functor $\operatorname{ch}$ to get a bicomplex, which I can then totalize. Unless I have made a mistake, I believe the statement is that under the conditions on $M$ and $X$, the canonical map of chain complexes between $\operatorname{ch}(\mathbb K\operatorname{maps}(M,X))$ and $\operatorname{ch}(\mathbb K\underline{\operatorname{hom}}_\Delta(M,X))$ is a quasi-isomorphism. This example specifically does not include classifying spaces of finite groups.
Final comments and examples
Truth be told, I am most interested in the case $X = \mathrm B G$ for $G = \mathrm{SL}(2)$ and $M$ a simplicial approximation of a three-manifold. Note that the topological space $\mathrm{B}(\mathrm{SL}(2,\mathbb C))$ is $3$-connected (it is homotopy equivalent to $\mathrm{B}(\mathrm{Spin}(3,\mathbb R))$), but I don't have a good sense about notions like "3-connected" for algebraic stacks. And, besides, I would like a robust construction.
Let me end with an example that does work. A much easier category than $\mathrm{CAlg}^{\mathrm{op}}_{\mathbb K}$ is the category $\mathrm{CCog}_{\mathbb K}$ of cocommutative coalgebras (or "cogebres" in French, hence the name). A group object in $\mathrm{CCog}_{\mathbb K}$ is a
cocommutative Hopf algebra, and a good example is the universal enveloping algebra $U\mathfrak g$ of a Lie algebra $\mathfrak g$. Then $\mathrm B U\mathfrak g = \mathrm B \mathfrak g$ is a simplicial cocommutative coalgebra, which is quasi-isomorphic to the dg cocommutative coalgebra $\mathrm{CE}(\mathfrak g)$ of Chevalley–Eilenberg cochains with trivial coefficients. I believe that it _is_ true that the Hochschild homology of $\mathrm{CE}(\mathfrak g)$ is the Chevalley–Eilenberg cochain complex with coefficients in $U\mathfrak g$, as it should be if you think about loop spaces.
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:

FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switch...
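For reference, a runnable Python version of the pseudo-code above (a sketch; the `switched` flag gives the usual early-exit optimization the question seems to be about):

```python
def bubble_sort(lst):
    """Bubble sort with an early-exit flag, mirroring the pseudo-code."""
    n = len(lst)
    for i in range(n):
        switched = False
        # After pass i, the last i elements are already in their final place.
        for j in range(n - (i + 1)):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                switched = True
        if not switched:
            break  # no swaps in a full pass: the list is already sorted
    return lst

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```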
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks: allocation of one block, and freeing a previously allocated block which is not used anymore. Also, as a requiremen...
Rice's theorem tells us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false). But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in. As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be? What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot! However, I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?". Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here? I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation is introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on a regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the scheme used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction. In superscalar execution, where branch prediction is very important, the delay is mainly in execution rather than in fetching. In the instruction pipeline, where fetching is more of a problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436, however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example: "How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:

$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$

EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
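A minimal recognizer for EPAL, following the grammar directly (a Python sketch; a string is derivable exactly when it is a nonempty even-length palindrome over {a, b}):

```python
def is_epal(s):
    """Membership test for EPAL via the grammar S -> aa | bb | aSa | bSb."""
    if len(s) < 2 or len(s) % 2 != 0:
        return False
    if any(c not in "ab" for c in s):
        return False
    if len(s) == 2:
        return s[0] == s[1]                        # S -> aa | bb
    return s[0] == s[-1] and is_epal(s[1:-1])      # S -> aSa | bSb

print([w for w in ["aa", "abba", "abab", "aba", "bb"] if is_epal(w)])
# → ['aa', 'abba', 'bb']
```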
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...
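The "simple method" the question alludes to is presumably a round-trip exchange as in NTP. A sketch of the standard offset estimate, assuming symmetric network delay (function name and scenario are illustrative, not from the question):

```python
def estimate_offset(t0, t1, t2, t3):
    """t0: client send, t3: client receive (client clock);
    t1: server receive, t2: server send (server clock).
    Returns the estimated clock offset B, assuming symmetric delay."""
    return ((t1 - t0) + (t2 - t3)) / 2

# Example: server clock is 5.0 ahead, one-way delay 0.1 each way.
B = 5.0
t0 = 100.0
t1 = t0 + 0.1 + B      # server receives
t2 = t1 + 0.02         # server replies after 0.02 of processing
t3 = t2 - B + 0.1      # client receives
print(estimate_offset(t0, t1, t2, t3))  # ≈ 5.0
```

With asymmetric delays the estimate is biased by half the delay difference, which is why the question of determining $B$ exactly is interesting.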
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.

Inductive LTree : Set := Node : list LTree -> LTree.

The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting, but its only connection to sorting is that the algorithm happens to be a type of sort; it's not about sorting per se. So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic? I see four main classes of questions:

Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.
Proving theorems in a way that can be automated in the chosen formal setting.
Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS; examples include:

Computer architecture (operating systems, compiler design, programming language design)
Software engineering
Artificial intelligence
Computer graphics
Computer security

Source: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree, as well as count how many complete branches there are (a parent node with both left and right child nodes), with an assumed global counting variable. So far I have...
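One common way to structure such an assignment (a sketch, not the assignment's required solution): validate the BST property with min/max bounds and count complete branches in the same traversal, returning the count instead of using a global variable:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def check(node, lo=float("-inf"), hi=float("inf")):
    """Return (is_bst, complete_branch_count) for the subtree at node.
    A 'complete branch' is a node with both children present."""
    if node is None:
        return True, 0
    if not (lo < node.val < hi):
        return False, 0
    left_ok, left_count = check(node.left, lo, node.val)
    right_ok, right_count = check(node.right, node.val, hi)
    count = left_count + right_count + (1 if node.left and node.right else 0)
    return left_ok and right_ok, count

tree = Node(5, Node(3, Node(2), Node(4)), Node(8))
print(check(tree))  # → (True, 2): nodes 5 and 3 are complete branches
```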
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automaton. But, apparently, Buchi automata are a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two or more stacks or tapes, have been shown to be equi...
Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because, as you can probably tell from the answers, people seem to be unsure of where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
Given that there's been no other answer to this question, I'll leave some indications from my earlier (failed) attempt to answer it.
I think the answer to the question as stated is yes. If you include absolutely convergent integrals of arbitrary products of algebraic, exponential and logarithmic functions, then the Meissel-Mertens constant is a period in your sense, and it shouldn't be very hard to prove.
In fact, Zagier and Kontsevich mention something very close at the end of their seminal paper on the topic:
M. Kontsevich & D. Zagier, Periods (2001)
"There have been some recent indications that one can extend the exponential motivic Galois group still further, adding as new period the Euler constant $\gamma$, which is, incidentally, the constant term of $\zeta(s)$ at $s=1$. Then all classical constants are periods in an appropriate sense."
To see this, take the classic expression:
$$B_1=\gamma+\sum_p\biggl[\ln\biggl(1-\frac{1}{p}\biggl)+\frac{1}{p}\biggl]$$
As you mention, $\gamma$ has an integral representation. If you can prove that the sum in the previous expression is also a period, you are obviously done.
Now, I suspect that this is a relatively easy exercise in Chen integration. This is the standard method used to prove that the Riemann zeta function (and more generally the multiple zeta function) at the integers is a period. I'm not familiar with the technique, but perhaps someone else not necessarily familiar with periods can complete the argument.
I am trying to clarify the relationship between the three listed methods.
According to my understanding, kernel regression means that the weight vector $\textbf{w}$ lies in the space spanned by the training data.
$$ \alpha =(\mathbf{X}\mathbf{X}^\intercal+\lambda I)^{-1}y $$ $$ g(\textbf{x})=\textbf{x}^\intercal\textbf{w}=\textbf{x}^\intercal\mathbf{X}^\intercal\alpha=\sum\limits_{i=1}^m \alpha_{i}<\textbf{x},\textbf{x}_{i}> $$
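A quick numerical check of the identity above, using the linear kernel $\langle \textbf{x}, \textbf{x}_i\rangle$: the dual solution $\alpha = (\mathbf{X}\mathbf{X}^\intercal + \lambda I)^{-1} y$ gives the same weights as the primal ridge solution $\textbf{w} = (\mathbf{X}^\intercal\mathbf{X} + \lambda I)^{-1}\mathbf{X}^\intercal y$ (a dependency-free sketch with a hand-rolled 2×2 inverse):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(A):
    return [list(col) for col in zip(*A)]

X = [[1.0, 2.0], [3.0, 4.0]]   # two training points (rows)
y = [[1.0], [2.0]]
lam = 0.5
I2 = [[1.0, 0.0], [0.0, 1.0]]

# Dual: alpha = (X X^T + lam I)^-1 y, then w = X^T alpha
G = matmul(X, transpose(X))
A = [[G[i][j] + lam * I2[i][j] for j in range(2)] for i in range(2)]
alpha = matmul(inv2(A), y)
w_dual = matmul(transpose(X), alpha)

# Primal: w = (X^T X + lam I)^-1 X^T y
C = matmul(transpose(X), X)
B = [[C[i][j] + lam * I2[i][j] for j in range(2)] for i in range(2)]
w_primal = matmul(inv2(B), matmul(transpose(X), y))

print(w_dual, w_primal)  # the two weight vectors agree
```

This agreement is exactly why the prediction can be written as $\sum_i \alpha_i \langle \textbf{x}, \textbf{x}_i\rangle$, and why replacing the inner product with a kernel generalizes the model.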
My problem here is: is this already locally weighted regression? I get the intuition that the nearer the input vector is to a training vector, the more weight is assigned. Does this already mean "locally weighted"? I know the kernel trick here, but:

Must locally weighted methods always have a defined kernel?
Is there any method for locally weighted models other than kernel methods?
If so (I am not sure), does one type of locally weighted model correspond to one particular kind of kernel function? (Like the locally weighted polynomial regression: http://water.columbia.edu/files/2011/11/Lall2006Locally.pdf)

I see kernel methods as just adding space dependency to certain existing models, but I do not know exactly how an existing model should correspond to the kernel part.
Many thanks! |
I think your assignments for the equations need to be inspected closely. Let's look at those equations with their redox potentials:
In dilute conditions (say, ratio of $\ce{HNO3}$ to $\ce{Cu}$ is 8:3):$$\begin{align}\ce{NO3-(aq) + 4H3O+(aq) + 3e- &<=> NO(g) + 6H2O(l)} &E^\circ = \pu{0.957 V}\\\ce{Cu(s) &<=> Cu^2+ (aq) + 2e-} &E^\circ = \pu{-0.342 V}\\\ce{2NO3-(aq) + 8H3O+(aq) + 3Cu (s) &-> 3Cu^2+(aq) + 2NO(g) + 12H2O(l)}&E^\circ_{\text{rxn}} = \pu{0.615 V}\end{align}$$
In concentrated conditions (say, ratio of $\ce{HNO3}$ to $\ce{Cu}$ is 4:1):$$\begin{align}\ce{2NO3-(aq) + 4H3O+(aq) + 2e- &<=> N2O4(g) + 6H2O(l)} &&E^\circ = \pu{0.803 V}\\\ce{Cu(s) &<=> Cu^2+ (aq) + 2e-} &&E^\circ = \pu{-0.342 V}\\\ce{2NO3-(aq) + 4H3O+(aq) + Cu (s) &-> Cu^2+(aq) + N2O4(g) + 6H2O(l)}&&E^\circ_{\text{rxn}} = \pu{0.461 V}\end{align}$$
Because this concentrated reaction is exothermic, the colorless $\ce{N2O4}$ gas decomposes to reddish-brown $\ce{NO2}$ gas according to $\ce{N2O4 (g) <=> 2NO2 (g)},~\Delta H^\circ = \pu{+57.2 kJ mol^{-1}}$
Based on $E^\circ_{\text{rxn}}$ values, one can assume that the concentrated reaction is less spontaneous. However, some dissolved $\ce{N2O4}$ can still undergo the following redox reaction, which is the most spontaneous of the three:
$$\begin{align}\ce{N2O4(aq) + 4H3O+(aq) + 4e- &<=> 2NO(g) + 6H2O(l)} &E^\circ = \pu{1.035 V}\\\ce{Cu(s) &<=> Cu^2+ (aq) + 2e-} &E^\circ = \pu{-0.342 V}\\\ce{N2O4(aq) + 4H3O+(aq) + 2Cu(s) &-> 2Cu^2+(aq) + 2NO(g) + 6H2O(l)}&E^\circ_{\text{rxn}} = \pu{0.693 V}\end{align}$$
If we can consider the loss of $\ce{N2O4}$ to $\ce{NO2}$ is negligible, then total oxidation of $\ce{Cu}$ with both conc. $\ce{HNO3}$ and $\ce{N2O4}$ can be written as:$$\begin{align}\ce{2NO3-(aq) + 8H3O+(aq) + 3Cu (s) &-> 3Cu^2+(aq) + 2NO(g) + 12H2O(l)}&E^\circ_{\text{rxn}} = \pu{1.154 V}\end{align}$$
Although it is the same reaction as in dilute conditions, it proceeds much more favorably (compare the $E^\circ_{\text{rxn}}$ values in the two conditions).
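A quick arithmetic check of the quoted cell potentials, combining each nitrogen half-reaction with the copper oxidation half-reaction ($E^\circ_{\text{rxn}} = E^\circ_{\text{red}} + E^\circ_{\text{ox}}$; values as written above):

```python
E_Cu_ox = -0.342  # Cu(s) -> Cu2+ + 2e-, as written in the half-reactions above

# reduction potentials of the three nitrogen half-reactions
half = {"NO3-/NO (dilute)": 0.957,
        "NO3-/N2O4 (concentrated)": 0.803,
        "N2O4/NO": 1.035}

for name, E_red in half.items():
    print(f"{name}: E_rxn = {E_red + E_Cu_ox:+.3f} V")
# NO3-/NO (dilute):         +0.615 V
# NO3-/N2O4 (concentrated): +0.461 V
# N2O4/NO:                  +0.693 V
```

Adding the reduction and oxidation potentials directly is valid here because the potentials are intensive; no scaling by electron count is needed for the cell voltage.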
Late addition:
I have tried to avoid criticizing literature values on this subject, but my answer so far has raised questions from concerned readers. Thus, I'll try to do my best to handle it as delicately as possible.
To my knowledge, there shouldn't be a question about whether conc. $\ce{HNO3}$ is a better oxidizing agent than dilute $\ce{HNO3}$. The oxidizing agent here is the $\ce{NO3-}$ ion, as my equations above describe. For example, any source of $\ce{NO3-}$ ions (say $\ce{KNO3}$) will oxidize $\ce{Cu}$ in an acid medium (say, $\ce{HCl}$). In using dilute or concentrated $\ce{HNO3}$, we conveniently provide both the $\ce{NO3-}$ ions and the $\ce{H3O+}$ for our redox reaction. This reaction is highly dependent on reaction conditions. I think nobody knows the exact process; everybody speculates from the results. That's the reason you are seeing so many different equations in different sources. I'm using the comprehensive table in http://sites.chem.colostate.edu/diverdi/all_courses/CRC%20reference%20data/electrochemical%20series.pdf, which I believe is more realistic. Now, consider the following two questions:
Question 1: If concentrated nitric acid is a better oxidizing agent, then why does it oxidize only 2 moles of $\ce{Cu}$ per 8 moles of nitric acid, against the 3:8 ratio in the dilute case?
Answer: Conc. $\ce{HNO3}$ has more $\ce{NO3-}$ available to react. When available, two moles of it react with 1 mole of $\ce{Cu}$ to give more stable product, $\ce{N2O4}$ ($\Delta H^\circ_f = \pu{+9.16 kJ/mol}$) compared to $\ce{NO}$ ($\Delta H^\circ_f = \pu{+90.25 kJ/mol}$).
Question 2: Why is there a change of −1 in the oxidation state of $\ce{N}$ in concentrated solution, as opposed to −3 in dilute solution?
Answer: Look at the last series of reactions involving $\ce{N2O4}$ given in my original answer. The eventual change in the oxidation state of $\ce{N}$ is the same in both cases. This reaction is not just a matter of power. I don't think anybody has studied this process in detail yet. I'd like to make my point by giving some known redox processes of $\ce{NO3-}$.
$$\begin{align}\ce{NO3-(aq) + 3H3O+(aq) + 2e- &<=> HNO2(aq) + 4H2O(l)} &E^\circ = \pu{0.934 V}\\\ce{NO3-(aq) + 4H3O+(aq) + 3e- &<=> NO(g) + 6H2O(l)}&E^\circ = \pu{0.957 V}\end{align}$$
Both processes occur under relatively similar conditions (say, $\pu{2M}$ vs $\pu{3M}~\ce{HNO3}$). In process (1), the oxidation state of $\ce{N}$ changes from +5 to +3, while in (2) it goes from +5 to +2. Which process dominates when they are used to oxidize an appropriate amount of $\ce{Cu}$?
Asymptotic properties of the norm of the extremum of a sequence of normal random functions
Abstract
Under additional conditions on a bounded normally distributed random function $X = X(t)$, $t \in T$, we establish a relation of the form

$$\mathop {\lim }\limits_{n \to \infty } P(b_n (||Z_n || - a_n ) \leqslant x) = \exp ( - e^{ - x} ) \quad \forall x \in R^1 ,$$

where $Z_n = Z_n (t) = \mathop {\max }\limits_{1 \leqslant k \leqslant n} X_k (t)$, $(X_n)$ are independent copies of $X$, $||x(t)|| = \mathop {\sup }\limits_{t \in T} |x(t)|$, and $(a_n)$ and $(b_n)$ are numerical sequences.

Keywords: Weak Convergence, Asymptotic Property, Random Function, Random Element, Asymptotic Relation
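The double-exponential limit above is the classical Gumbel law for maxima. For i.i.d. standard normals one may take $b_n = \sqrt{2\ln n}$ and $a_n = b_n - (\ln\ln n + \ln 4\pi)/(2b_n)$. A Monte Carlo sketch (assuming a one-point index set $T$, so that $||Z_n||$ is simply the maximum of $n$ normals; normalizing constants are the textbook ones, not taken from this paper):

```python
import math, random

random.seed(0)
n, trials = 1000, 2000
bn = math.sqrt(2 * math.log(n))
an = bn - (math.log(math.log(n)) + math.log(4 * math.pi)) / (2 * bn)

# Empirical P(b_n (max - a_n) <= 0); the Gumbel limit predicts exp(-e^0) = exp(-1)
hits = 0
for _ in range(trials):
    m = max(random.gauss(0, 1) for _ in range(n))
    if bn * (m - an) <= 0:
        hits += 1
print(hits / trials)  # close to exp(-1) ≈ 0.37 (convergence for normal maxima is slow)
```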
© Kluwer Academic/Plenum Publishers 1999 |
After all this time, I came up with a very nice tensor calculus proof of the Hairy Ball Theorem. It only depends on Stokes' theorem and standard laws of tensor calculus like the Ricci identity and symmetries of curvature tensors. All the topology is done by Stokes' theorem. The remainder of the proof is equational, local and geometrical. It is coordinate/basis independent (only abstract indices are used) and proves the theorem for all orientable closed 2-manifolds with non-zero total curvature.
The basic logic of the proof is: given a vector field $x$ on the closed orientable manifold $M$, define a 1-form $z$ such that $dz$ is independent of $x$ when $||x|| = 1$ everywhere. By Stokes' theorem, such an $x$ can only exist on a manifold with $\int_M dz = 0$. For the 1-form defined in the proof, $dz$ is proportional to the scalar curvature, proving the theorem for all 2-manifolds with nonzero total curvature, which includes spheres.
The code for automating the equational part of the proof is at the end.
This was certainly a fun way to learn about tensor calculus, Mathematica & xAct all at the same time, and the folks on the xAct mailing list were very helpful, not to mention extremely patient.
Definitions The hair field
$$ \begin{align}||x||^2 &= x^i x_i = 1 \\\end{align}$$
Auxiliary 1-forms
$$ \begin{align}y_i &\triangleq \varepsilon_{ij} x^j \\\end{align}$$
$$ \begin{align}z_i &\triangleq y_j \nabla_i x^j\\\end{align}$$
Properties
$y$ is unit norm: $$\begin{align}||y||^2 &=y_i y^i \\ &= \varepsilon_{ij} x^j \varepsilon^{ik} x_k \\ &= 2 g_{[i}{}^{i} g_{j]}{}^k x^j x_k \\ &= 2 g_{[i}{}^i x_{j]}x^j \\ &= g_i{}^i x_j x^j - g_j{}^i x_i x^j \\ &= 2 x_j x^j - x_j x^j \\ &= x_j x^j = 1 \end{align}$$
$x$ is orthogonal to its derivative: $$x_i\nabla_jx^i= \frac{1}{2} \nabla_j x_i x^i= \frac{1}{2} \nabla_j 1=0$$
$y$ is orthogonal to $x$: $$y_i x^i = x^j\varepsilon_{ji} x^i = -x^j\varepsilon_{ij} x^i = -x^jy_j = -y_ix^i\\\ \therefore\ y_i x^i = 0$$
Curvature in 2 dimensions
Riemann curvature of a 2-manifold: $$ \begin{align}R_{ij}{}^{kl} &= R_{[ij]}{}^{[kl]} \\&= (\frac{1}{2} \varepsilon_{ij} \varepsilon^{mn}) (\frac{1}{2} \varepsilon^{kl} \varepsilon_{op}) R_{mn}{}^{op} \\&= (\frac{1}{2} \varepsilon_{ij} \varepsilon^{kl}) (\frac{1}{2} \varepsilon^{mn} \varepsilon_{op}) R_{mn}{}^{op} \\&= \frac{1}{2} \varepsilon_{ij} \varepsilon^{kl} R_{mn}{}^{[mn]} \\&= \frac{1}{2} \varepsilon_{ij} \varepsilon^{kl} R_{mn}{}^{mn} \\&= \frac{1}{2} \varepsilon_{ij} \varepsilon^{kl} R \\&= g_{[i}{}^k g_{j]}{}^{l}R \end{align}$$
Proof
1: $$\begin{align} \varepsilon_{jk} (\nabla_{[l} x^j) (\nabla_{m]} x^k) &= y_l y^i \varepsilon_{jk} (\nabla_{[l} x^j) (\nabla_{m]} x^k) \\ &= y_l \varepsilon^{in} x_n \varepsilon_{jk} (\nabla_{[l} x^j) (\nabla_{m]} x^k) \\ &= 2 y_l x_n (\nabla_{[j} x^{[i}) (\nabla_{k]} x^{n]}) \\ &= 2 y_l 0_{[j}{}^{[i} (\nabla_{k]} x^{n]}) \\ &= 0 \end{align}$$
2: $$\begin{align}(\nabla_{[j} \nabla_{i]} x^k) y_k&= R_{ji}{}^{k}{}_m x^m y_k / 2, & & \text{by the Ricci identity}\\&= R \ \varepsilon_{ji}\varepsilon^k{}_m x^m y_k / 4, & & \text{because of 2 dimensions}\\&= R \ \varepsilon_{ji} y^k y_k /4\\&= R \varepsilon_{ji}/4\end{align}$$
3: $$\begin{align}(\nabla_{[i} x^k) \nabla_{j]} y_k&= (\nabla_{[i} x^k) \nabla_{j]} x^l \varepsilon_{lk}\\&= (\nabla_{[i} x^k) (\nabla_{j]} x^l) \varepsilon_{lk} \\&= 0, & &\text{by step 1}\end{align}$$
4: $$\begin{align}(dz_i)_j&= \nabla_{[j}z_{i]} \\&= \nabla_{[j} (\nabla_{i]} x^k ) y_k \\&= (\nabla_{[j} \nabla_{i]} x^k) y_k + (\nabla_{[i} x^k) \nabla_{j]} y_k \\&= R\varepsilon_{ji}/4, & &\text{by steps 2 and 3}\end{align}$$
6: $$\int_{M}(dz_j)_i = \int_{M} R \varepsilon_{ij}/4 = \int_{M} R/4\ dM$$
7: So far the only assumptions are Riemannian metric on M and the existence of a differentiable unit vector field x. If the manifold $M$ is also closed and orientable then $z$ has compact support and Stokes theorem implies that$$\int_{M}(dz_j)_i = \int_{\partial M} z_j = \int_{\emptyset} z_j = 0$$This yields a contradiction when the total curvature of $M$ is not 0, proving the hairy ball theorem for all closed orientable manifolds with non-zero total curvature. $\square$
In particular a 2-sphere $S$ with radius $r$ has constant positive scalar curvature $R=2/r^2$.
$$\begin{align}\int_{S} R \varepsilon_{ij}/4= r^{-2}/2\int_{S} dS = 2 \pi \neq 0\end{align}$$
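A quick numerical check of that last computation (a sketch: integrate $R/4 = 1/(2r^2)$ over a sphere of radius $r$ by midpoint quadrature in spherical coordinates, and note the $r$-dependence cancels):

```python
import math

def total_quarter_curvature(r, n=400):
    """Integrate R/4 over a sphere of radius r, with R = 2/r^2.
    Area element: r^2 sin(theta) dtheta dphi; phi integral contributes 2*pi."""
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta  # midpoint rule in the polar angle
        total += (2 / r**2) / 4 * r**2 * math.sin(theta) * dtheta * (2 * math.pi)
    return total

print(total_quarter_curvature(3.0))  # → 2*pi ≈ 6.283, independent of r
```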
As it stands this only proves the nonexistence of smooth vector fields without zeros on manifolds with nonzero total curvature. Boothby $\S 4$ proves that this automatically implies the general continuous case.

Code
The use of Stokes' theorem cannot easily be automated since xAct does not really support integration on manifolds, but the equational proof of $(dz_i)_j = R\varepsilon_{ji}/4$ is within the capabilities of xAct using the xTras extension.
load xAct
Needs["xAct`xTras`"]
$PrePrint = ScreenDollarIndices;
$CovDFormat = "Prefix";
$CVVerbose = False;
$DefInfoQ = False;
define abstract 2-manifold
DefManifold[S2, 2, IndexRange[a, n]]
DefMetric[1, met[-a, -b], CD, {";", "\[Del]"}, PrintAs -> "g"]
some shortcuts
eps = epsilon[met]
EV[x_] := FullSimplification[][ToCanonical[x]]
define the hair field and associated forms
DefTensor[x[a], {S2}]
AutomaticRules[x,
MakeRule[{Evaluate[ToCanonical[x[-a] CD[-b]@x[a]]], 0}]]
AutomaticRules[x, MakeRule[{Evaluate[ToCanonical[x[-a] x[a]]], 1}]]
DefTensor[y[a], {S2}]
IndexSetDelayed[y[a_], eps[a, -b] x[b]]
DefTensor[z[-a], {S2}]
IndexSetDelayed[z[a_], CD[a][x[b]] y[-b]]
DefTensor[dz[-a, -b], {S2}]
IndexSetDelayed[dz[a_, b_] , Antisymmetrize[CD[a]@z[b]]]
check definitions
UpValues[y]
UpValues[x]
DownValues[x]
verify proof
check that
y
is unit norm
y[a] y[-a] // EV
show that $dz - R\,dM = 0$
dz[-a, -b] - RicciScalar[CD][] eps[-a, -b]/4
% // EV
% // RiemannToWeyl
EV[% y[c]] y[-c]
The above line returns 0. Each of the preceding lines is an identity transformation. The last line is just multiplication by 1.
Other neat proofs of the hairy ball theorem
Milnor: http://scholar.google.com/scholar?cluster=16974770955727113693
Boothby: http://www.jstor.org/discover/10.2307/2317520
Eisenberg & Guy: http://www.jstor.org/discover/10.2307/2320587
Dan Piponi: http://blog.sigfpe.com/2012/11/a-pictorial-proof-of-hairy-ball-theorem.html |
A lot of what is written in the question is not correct.
The URCA process would be cycles of beta decay followed by inverse beta decay as follows.
$$n \rightarrow p + e + \bar{\nu_e}$$$$ p + e \rightarrow n + \nu_e$$
The neutrinos escape from the star, carrying away energy.
The URCA process is very important during the collapse to a neutron star state, during the initial cooling of a neutron star and possibly in the most extreme density cores of neutron stars at later times.
The direct URCA process can be blocked by degeneracy. In the neutron fluid, there must be a small number (of order 1 part in 100) of protons and electrons, because neutrons are unstable and undergo beta decay. However, once the electron Fermi energy reaches the maximum possible decay energy of a beta electron, the reaction ceases because there are no free energy states for the electron to occupy. Instead, inverse beta decay (or neutronisation) becomes possible if the electron Fermi energy + proton Fermi energy equals or exceeds the neutron Fermi energy. An equilibrium is set up so that$$E_{F,n} = E_{F,p} + E_{F,e}.$$
Ok, having got this, you should then look at my answer to this question, where I explain why the direct URCA process is blocked unless densities are very high ($>8\times10^{17}$ kg/m$^3$), because it is impossible for the reactions to simultaneously conserve energy and momentum. At lower densities, the modified URCA process, though far less efficient, is the one that operates. This is similar to the URCA process but includes a bystander particle (a neutron) to enable simultaneous conservation of energy and momentum, at the expense of requiring an extra particle to be within $kT$ of its Fermi surface.
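The momentum-conservation blocking can be made quantitative with a small sketch. For degenerate $n$, $p$, $e$ with charge neutrality ($n_e = n_p$), Fermi momenta scale as $p_F \propto n^{1/3}$; direct URCA needs $p_{F,n} \le p_{F,p} + p_{F,e}$, which forces a minimum proton fraction of about 1/9 (this derivation is the standard textbook argument, not taken verbatim from the linked answer):

```python
def min_proton_fraction():
    """Direct URCA requires p_Fn <= p_Fp + p_Fe with p_F ~ n^(1/3) and n_e = n_p,
    so n_n^(1/3) <= 2 n_p^(1/3), i.e. n_p >= n_n / 8.
    Returns the threshold proton fraction x = n_p / (n_n + n_p)."""
    np_over_nn = 1 / 8
    return np_over_nn / (1 + np_over_nn)

print(min_proton_fraction())  # → 0.111..., i.e. roughly 1 proton per 9 baryons
```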
I do not really understand what you are asking in the last part of your question. Muons are produced at high densities. The higher density leads to higher neutron Fermi energies, and once this Fermi energy exceeds the Fermi energy of the protons plus the rest mass energy of the muons (105 MeV), then they can emerge. Almost equivalently, one can demand that the Fermi energy of the electrons in the gas reach 105 MeV, and then muons can be created. It is the appearance of muons at $\sim 8\times10^{17}$ kg/m$^3$ that increases the proton fraction, reduces the difference between the neutron and proton Fermi momenta and makes the direct URCA process possible.
Technical Report
Show 1 to 6 of 6
Issue 1 Why is a high-speed calculation engine necessary?
A Powerful Simulation Engine

In recent years, the needs of engineers are diversifying as computer aided engineering (CAE) tools are applied more regularly when designing electromag…
Issue 2 How JMAG Realized Accelerated Speed
A Powerful Simulation Engine

This technical report introduces content concerning the development of JMAG technology. As the previous section introduced why matrix solvers are necess…
Issue 3 What Does the JMAG Mesh Generation Engine have to Offer?
A Powerful Simulation Engine

These technical reports introduce the scope of JMAG's technological development. This edition introduces the value and future of one of the two major fo…
Issue 4 Material Modeling and Powerful Analysis Capabilities that Contribute to Limit Design
Elaborate Modeling Technology

Modeling Complex Nonlinear Materials at a Micro Level

\( \Large \nabla \times \frac{ 1 }{\mu_0}\nabla \times A = J -\sigma \frac{ \partial A } { \parti…
Issue 5 Versatile Mapping that Supports Multiphysics Simulations
Elaborate Modeling Technology

Material Modeling and Mapping Technology Supporting Multiphysics Simulations

Multiphysics simulations, such as coupled analyses (magnetic field, therma…
Issue 6 Generating Highly Accurate Machine Models Indispensable to Model Based Design (MBD)
Elaborate Modeling Technology

Ideal MBD and Its Challenges

Model based design (MBD) has been around the field of circuits/controls for motors for a long time, but has not been at th…
Difference between revisions of "Element structure of general affine group of degree two over a finite field"
(→Conjugacy class structure)
Line 59:
| <math>A</math> is the identity, <math>v \ne 0</math> || <math>\{ 1,1 \}</math> || <math>(x - 1)^2</math> || <math>x - 1</math> || <math>q^2 - 1</math> || 1 || <math>q^2 - 1</math> || Yes || Yes
|-
| <math>A</math> is diagonalizable over <math>\mathbb{F}_q</math> with equal diagonal entries not equal to 1, hence a scalar. The value of <math>v</math> does not affect the conjugacy class. || <math>\{a,a \}</math> where <math>a \in \mathbb{F}_q^\ast \setminus \{ 1 \}</math> || <math>(x - a)^2</math> where <math>a \in \mathbb{F}_q^\ast</math> || <math>x - a</math> where <math>a \in \mathbb{F}_q^\ast</math> || <math>q^2</math> || <math>q - 2</math> || <math>q^2(q - 2)</math> || Yes || Yes
|-
| <math>A</math> is diagonalizable over <math>\mathbb{F}_{q^2}</math>, not over <math>\mathbb{F}_q</math>. Must necessarily have no repeated eigenvalues. The value of <math>v</math> does not affect the conjugacy class. || Pair of conjugate elements of <math>\mathbb{F}_{q^2}</math> || <math>x^2 - ax + b</math>, irreducible || Same as characteristic polynomial || <math>q^3(q - 1)</math> || <math>q(q - 1)/2 = (q^2 - q)/2</math> || <math>q^4(q-1)^2/2</math> || Yes || No
Revision as of 23:40, 22 February 2012

This article gives specific information, namely, element structure, about a family of groups, namely: general affine group of degree two. View element structure of group families | View other specific information about general affine group of degree two
This article gives the element structure of the general affine group of degree two over a finite field. Similar structure works over an infinite field or a field of infinite characteristic, with suitable modification. For more on that, see element structure of general affine group of degree two over a field.
The discussion here builds upon the discussion of element structure of general linear group of degree two over a finite field.
Summary
Summary

| Item | Value |
|---|---|
| order | $q^3(q-1)^2(q+1)$ |
| exponent | |
| number of conjugacy classes | $q^2 + q - 1$ |

Particular cases

| $q$ (field size) | $p$ (underlying prime, field characteristic) | general affine group | order of the group (= $q^3(q-1)^2(q+1)$) | number of conjugacy classes (= $q^2+q-1$) | element structure page |
|---|---|---|---|---|---|
| 2 | 2 | symmetric group:S4 | 24 | 5 | element structure of symmetric group:S4 |
| 3 | 3 | general affine group:GA(2,3) | 432 | 11 | element structure of general affine group:GA(2,3) |
| 4 | 2 | general affine group:GA(2,4) | 2880 | 19 | |
| 5 | 5 | general affine group:GA(2,5) | 12000 | 29 | |

Conjugacy class structure

There is a total of $q^3(q-1)^2(q+1)$ elements, and there are $q^2 + q - 1$ conjugacy classes of elements.
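A quick check of the order and class-count formulas against the particular cases (a sketch; the formulas $|GA(2,q)| = q^2\,|GL(2,q)| = q^2(q^2-1)(q^2-q)$ and $q^2+q-1$ classes are inferred from the tabulated values):

```python
def order_GA2(q):
    # |GA(2,q)| = q^2 * |GL(2,q)|, with |GL(2,q)| = (q^2 - 1)(q^2 - q)
    return q**2 * (q**2 - 1) * (q**2 - q)

def num_classes_GA2(q):
    return q**2 + q - 1

# (q, order, number of conjugacy classes) from the particular-cases table
for q, order, classes in [(2, 24, 5), (3, 432, 11), (4, 2880, 19), (5, 12000, 29)]:
    assert order_GA2(q) == order and num_classes_GA2(q) == classes
print("formulas match the table")  # → formulas match the table
```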
The conjugacy class structure is closely related to that of GL(2,q) -- see Element structure of general linear group of degree two over a finite field#Conjugacy class structure.
We describe a generic element of GA(2,q) as the affine map x ↦ Ax + v, where A (an element of GL(2,q)) is the dilation component and v is the translation component.
Consider the quotient mapping GA(2,q) → GL(2,q), which sends the generic element to its dilation component A. Under this mapping, the following is true:
For those conjugacy classes of GL(2,q) comprising elements that do not have 1 as an eigenvalue, the full inverse image of the conjugacy class is a single conjugacy class in GA(2,q). In other words, the translation component does not matter. For those conjugacy classes of GL(2,q) comprising elements that do have 1 as an eigenvalue, the conjugacy class splits into two depending on whether v is in the image of A - 1.
Nature of conjugacy class (the numeric columns of the table, namely eigenvalues, characteristic and minimal polynomials, class sizes, class counts, and element totals, were lost in extraction; the recoverable row descriptions and flags are):

A is the identity, v = 0: semisimple, diagonalizable over F_q.
A is the identity, v ≠ 0: semisimple, diagonalizable over F_q.
A is diagonalizable over F_q with equal diagonal entries not equal to 1, hence a scalar; the value of v does not affect the conjugacy class: semisimple, diagonalizable.
A is diagonalizable over F_{q^2} but not over F_q; must necessarily have no repeated eigenvalues; the value of v does not affect the conjugacy class; characteristic polynomial x^2 - ax + b irreducible, minimal polynomial the same: semisimple, not diagonalizable over F_q.
A has a Jordan block of size two with repeated eigenvalue equal to 1, v in the image of A - 1: not semisimple, not diagonalizable.
A has a Jordan block of size two with repeated eigenvalue equal to 1, v not in the image of A - 1: not semisimple, not diagonalizable.
A has a Jordan block of size two with repeated eigenvalue not equal to 1 (multiplicity two): not semisimple, not diagonalizable.
A diagonalizable over F_q with distinct diagonal entries, one of which is 1, v in the image of A - 1: semisimple, diagonalizable.
A diagonalizable over F_q with distinct diagonal entries, one of which is 1, v not in the image of A - 1: semisimple, diagonalizable.
A diagonalizable over F_q with distinct diagonal entries, neither of which is 1 (interchangeable); eigenvalues distinct elements of F_q, neither equal to 1: semisimple, diagonalizable.
This answer is a long one, sorry.
For complex $q$ with $|q|<1$, introduce Ramanujan's functions $P(q),Q(q)$:$$P(q) = 1 - 24\sum_{n=1}^\infty \sigma(n)\,q^n$$$$Q(q) = 1 + 240\sum_{n=1}^\infty \sigma_3(n)\,q^n$$
The left-hand side of your first formula can be understood as the coefficient of $q^{2^n}$ in the square of the power series$$f(q) = \sum_{m=1}^\infty \sigma(2m-1)\,q^{2m-1}= \frac{1}{48}\left(P(-q)-P(q)\right)$$So we are interested in $f(q)^2$ expressed as a power series in $q$.
Proposition: We have$$f(q)^2 = \frac{1}{240}\left(Q(q^2)-Q(q^4)\right)$$
Once this is proven, we can conclude that$$f(q)^2 = \sum_{m=1}^\infty \left(\sigma_3(2m-1)\,q^{4m-2} +\left(\sigma_3(2m) - \sigma_3(m)\right)\,q^{4m}\right)$$Thus, the coefficient of $q^{2^n}$ turns out to be $\sigma_3(1) = 1$ for $n=1$ and$$\sigma_3(2^{n-1}) - \sigma_3(2^{n-2}) = 2^{3(n-1)} = 8^{n-1}$$for $n>1$, so it is indeed $8^{n-1}$ for all positive integers $n$.
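The coefficient claim is easy to confirm numerically by brute-force convolution of the divisor-sum series (a quick sketch; all names are mine):

```python
# Check that the coefficient of q^(2^n) in f(q)^2 is 8^(n-1),
# where f(q) = sum over odd m of sigma(m) q^m.
def sigma(n, power=1):
    return sum(d**power for d in range(1, n + 1) if n % d == 0)

N = 40  # truncation order of the power series
f = [0] * (N + 1)
for m in range(1, N + 1, 2):  # odd exponents only
    f[m] = sigma(m)

# Square the power series by convolution.
f2 = [0] * (N + 1)
for i in range(N + 1):
    for j in range(N + 1 - i):
        f2[i + j] += f[i] * f[j]

for n in range(1, 6):  # 2^5 = 32 <= N
    assert f2[2**n] == 8**(n - 1)
```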
It remains to prove the proposition.
To that end, let $\mathbb{H}$ be the complex upper half-plane, and introduce a new independent variable $\tau\in\mathbb{H}$ that determines$$q = \mathrm{e}^{2\pi\mathrm{i}\tau}$$so $|q|<1$. Furthermore, for positive integer $k$ let$$q_k = \mathrm{e}^{2\pi\mathrm{i}\tau/k}$$so $q_k^k = q$.
By means of these dependencies, we can consider $q$-series as functions of $\tau$. This leads us to the normalized Eisenstein series$$\begin{aligned} \mathrm{E}_2(\tau) &= P(q)\\ \mathrm{E}_4(\tau) &= Q(q)\end{aligned}$$I will also use the Jacobi thetanull functions (as functions of $\tau$):$$\begin{aligned} \Theta_{00}(\tau) &= \sum_{m\in\mathbb{Z}}q_2^{m^2}\\ \Theta_{01}(\tau) &= \sum_{m\in\mathbb{Z}}(-q_2)^{m^2}\\ \Theta_{10}(\tau) &= \sum_{m\in\mathbb{Z}} q_8^{(2m+1)^2} = 2\,q_8\sum_{m=0}^\infty q^{m (m+1)/2}\end{aligned}$$
Background on those functions is given in textbooks such as:
(Theta functions) E. T. Whittaker and G. N. Watson, A Course of Modern Analysis; 2nd edition 1915; Merchant Books, reprint 2008, ISBN 1-60386-121-1; 1st edition 1902.
(Eisenstein series) Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory; 2nd edition 1990; Springer, ISBN 0-387-97127-0; 1st edition 1976.
Let us prove, as a first lemma, the frequently used triplet of identities$$\begin{aligned} 3\,\Theta_{00}^4(\tau) &= 4\,\mathrm{E}_2(2\tau)-\mathrm{E}_2\left(\frac{\tau}{2}\right)\\ 3\,\Theta_{01}^4(\tau) &= 4\,\mathrm{E}_2(2\tau)-\mathrm{E}_2\left(\frac{\tau+1}{2}\right)\\ 3\,\Theta_{10}^4(\tau) &= \mathrm{E}_2\left(\frac{\tau+1}{2}\right) - \mathrm{E}_2\left(\frac{\tau}{2}\right)\end{aligned}$$the last of which can be translated to$$f(q_2) = \left(\frac{\Theta_{10}(\tau)}{2}\right)^4$$which gives yet another representation for our function $f$.
The second lemma shall comprise the three identities$$\begin{aligned} \Theta_{00}^8(\tau) &= \frac{1}{15}\left(16\,\mathrm{E}_4(\tau) - \mathrm{E}_4\left(\frac{\tau+1}{2}\right)\right)\\ \Theta_{01}^8(\tau) &= \frac{1}{15}\left(16\,\mathrm{E}_4(\tau) - \mathrm{E}_4\left(\frac{\tau}{2}\right)\right)\\ \Theta_{10}^8(\tau) &= \frac{16}{15}\left(\mathrm{E}_4(\tau)-\mathrm{E}_4(2\tau)\right)\end{aligned}$$the last of which can ultimately be translated to$$f(q_2)^2 = \frac{1}{240}\left(Q(q)-Q(q^2)\right)$$which is the proposition we need.
Why should we prove six identities although we only need two of them? Well, it turns out that the three Jacobi thetanulls are best dealt with altogether, otherwise things become less elementary. Furthermore, this is the third occasion within a fortnight that such identities find applications on this site (look here and there), so I think it is useful to have a package of them in a place we can link to. And last but not least, it's fun.
The plan is to find certain (modular) symmetries of expressions containing the above functions of $\tau$; then to use the fact that, up to some constant factor, only one function can have such symmetries. To prepare the ground, let us recall the most basic facts about the above Eisenstein series and Jacobi thetanull functions that we will need.
First, all those functions are holomorphic for $\tau\in\mathbb{H}$ and can be represented as a Maclaurin series in $q_k$ for some $k$, as demonstrated by the definitions given. (This implies that they approach a finite limit as $\Im\tau\to\infty$.)
Second, the thetanull functions $\Theta_{00}(\tau), \Theta_{01}(\tau), \Theta_{10}(\tau)$ are nonzero for every $\tau\in\mathbb{H}$, which implies that their reciprocals are holomorphic over $\mathbb{H}$ too.
Third, we have the following useful symmetries:Let $\tau' = \frac{-1}{\tau}$, then$$\begin{aligned} \mathrm{E}_2(\tau+1) &= \mathrm{E}_2(\tau)& \mathrm{E}_2(\tau') &= \frac{6\tau}{\pi\,\mathrm{i}} + \tau^2\,\mathrm{E}_2(\tau)\\ \mathrm{E}_4(\tau+1) &= \mathrm{E}_4(\tau)& \mathrm{E}_4(\tau') &= \tau^4\,\mathrm{E}_4(\tau)\\ \Theta_{00}(\tau+1) &= \Theta_{01}(\tau)& \Theta_{00}(\tau') &= \sqrt{-\mathrm{i}\tau}\,\Theta_{00}(\tau)\\ \Theta_{01}(\tau+1) &= \Theta_{00}(\tau)& \Theta_{01}(\tau') &= \sqrt{-\mathrm{i}\tau}\,\Theta_{10}(\tau)\\ \Theta_{10}(\tau+1) &= \sqrt{\mathrm{i}}\,\Theta_{10}(\tau)& \Theta_{10}(\tau') &= \sqrt{-\mathrm{i}\tau}\,\Theta_{01}(\tau)\end{aligned}$$
The fourth useful fact seems to be rarely mentioned in introductory texts, but it is easy to conclude from the stuff covered there. Concretely, the coefficients of the Eisenstein $q$-series, that is, the divisor power sums $\sigma_k(n)$, have a certain multiplicativity property that translates into functional equations for the Eisenstein series. Particularly, we have:$$\begin{aligned} \mathrm{E}_2\left(\frac{\tau}{2}\right) + \mathrm{E}_2\left(\frac{\tau+1}{2}\right) + 4\,\mathrm{E}_2(2\tau) &= 6\,\mathrm{E}_2(\tau)\\ \mathrm{E}_4\left(\frac{\tau}{2}\right) + \mathrm{E}_4\left(\frac{\tau+1}{2}\right) + 16\,\mathrm{E}_4(2\tau) &= 18\,\mathrm{E}_4(\tau)\end{aligned}$$This allows us to express an Eisenstein series with argument $\frac{\tau+1}{2}$ in terms of the Eisenstein series with arguments $\frac{\tau}{2}$, $\tau$, and $2\tau$. Why would we want to do that? Because the latter can easily be subjected to the transformation $\tau\to\tau'$. Doing so and substituting back, we find the remarkable symmetries$$\begin{aligned} \mathrm{E}_2\left(\frac{\tau'+1}{2}\right) &= \frac{12\tau}{\pi\,\mathrm{i}} + \tau^2\,\mathrm{E}_2\left(\frac{\tau+1}{2}\right)\\ \mathrm{E}_4\left(\frac{\tau'+1}{2}\right) &= \tau^4\,\mathrm{E}_4\left(\frac{\tau+1}{2}\right)\end{aligned}$$
Now that the basics are in place, consider the functions$$\begin{aligned} U_{00}(\tau) &= \frac {4\,\mathrm{E}_2(2\tau)-\mathrm{E}_2\left(\frac{\tau}{2}\right)} {3\,\Theta_{00}^4(\tau)}\\ U_{01}(\tau) &= \frac {4\,\mathrm{E}_2(2\tau)-\mathrm{E}_2\left(\frac{\tau+1}{2}\right)} {3\,\Theta_{01}^4(\tau)}\\ U_{10}(\tau) &= \frac {\mathrm{E}_2\left(\frac{\tau+1}{2}\right) - \mathrm{E}_2\left(\frac{\tau}{2}\right)} {3\,\Theta_{10}^4(\tau)}\end{aligned}$$Proving the first lemma is equivalent to showing that $U_{00}, U_{01}, U_{10}$ are all equal to the constant $1$.
As the Eisenstein series and the Jacobi thetanulls are holomorphic over $\mathbb{H}$, and the Jacobi thetanulls are furthermore nonzero on $\mathbb{H}$, we find that $U_{00}, U_{01}, U_{10}$ are holomorphic over $\mathbb{H}$ as well.
By considering the given series expansions of their constituents, we find that $U_{00}, U_{01}, U_{10}$ can be represented as Maclaurin series in $q_2$:$$\begin{aligned} U_{00} &= \frac{4\,P(q^2)-P(q_2)}{3\,\Theta_{00}^4} = \frac{3 + \mathrm{O}(q_2)}{3 + \mathrm{O}(q_2)} = 1 + \mathrm{O}(q_2)\\ U_{01} &= \frac{4\,P(q^2)-P(-q_2)}{3\,\Theta_{01}^4} = \frac{3 + \mathrm{O}(q_2)}{3 + \mathrm{O}(q_2)} = 1 + \mathrm{O}(q_2)\\ U_{10} &= \frac{P(-q_2) - P(q_2)}{3\,\Theta_{10}^4} = \frac{48 q_2 + \mathrm{O}(q_2^3)}{48 q_2 + \mathrm{O}(q_2^3)} = 1 + \mathrm{O}(q_2^2)\end{aligned}$$
Using the symmetries presented above, we find after some calculation$$\begin{aligned} U_{00}(\tau+1) &= U_{01}(\tau)& U_{00}(\tau') &= U_{00}(\tau)\\ U_{01}(\tau+1) &= U_{00}(\tau)& U_{01}(\tau') &= U_{10}(\tau)\\ U_{10}(\tau+1) &= U_{10}(\tau)& U_{10}(\tau') &= U_{01}(\tau)\end{aligned}$$
Together this implies that the functions$$\begin{aligned} C_1 &= U_{00} + U_{01} + U_{10}\\ C_2 &= U_{00}\,U_{01} + U_{00}\,U_{10} + U_{01}\,U_{10}\\ C_3 &= U_{00}\,U_{01}\,U_{10}\end{aligned}$$are holomorphic over $\mathbb{H}$, representable as Maclaurin series in $q_2$, and invariant under the transformations $\tau\to\tau+1$ and $\tau\to\tau'$. The first invariance also implies that the $q_2$-series only involve integer powers of $q = q_2^2$.
This means that $C_1,C_2,C_3$ are so-called entire modular forms of weight zero. However, as shown e.g. in the text by Apostol (reference 2 above), the only entire modular forms of weight zero are constants. From this very fact we can now draw our conclusions.
Hence each $C_i$ must equal the constant term of its $q$-series, so$$C_1 = 3\qquad C_2 = 3\qquad C_3 = 1$$independent of $\tau$.
Recall that, by Vieta's formulae, the solutions of the equation$$U^3 - C_1\,U^2 + C_2\,U - C_3 = 0$$are precisely $U\in\{U_{00}, U_{01}, U_{10}\}$. Now, since the equation amounts to $(U-1)^3 = 0$, we conclude that$$U_{00} = U_{01} = U_{10} = 1 = \mathrm{const.}$$Thus, the first lemma is proven.
In the very same manner, we can easily tackle the second lemma.You know what? I leave that to you, it's refreshing.
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
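Under this finitist reading, the truncations themselves already witness Liouville-type behaviour. A sketch with exact rationals (base b = 10; all names are mine):

```python
from fractions import Fraction
from math import factorial

# Partial sums x_M = sum_{k=1}^{M} 1/b^(k!) as exact rationals.
def partial_sum(M, b=10):
    return sum(Fraction(1, b**factorial(k)) for k in range(1, M + 1))

x = partial_sum(6)  # a deep truncation, standing in for the limit L
for n in range(1, 5):
    p_over_q = partial_sum(n)   # rational approximation with q = 10^(n!)
    q = 10**factorial(n)
    # Liouville-type inequality: 0 < |x - p/q| < 1/q^n
    assert 0 < abs(x - p_over_q) < Fraction(1, q**n)
```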
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
Given a WSS process with the following Auto Correlation Function:
$$ r\left ( \tau \right ) = {\sigma}^{2} {e}^{-\alpha \left | \tau \right |} $$
The Laplace Transform would be:
$$ R \left ( s \right ) = \mathfrak{L} \left \{ r \left ( \tau \right ) \right \} = \frac{-2 \alpha {\sigma} ^ {2}}{\left ( s - \alpha \right ) \left ( s + \alpha \right )} $$
Hence the filter would be of the form:
$$ H \left( s \right) = \frac{c}{s + \alpha} $$
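As a sanity check that the quoted transform factors as $ R(s) = H(s) H(-s) $, which forces $ c = \sigma\sqrt{2\alpha} $ (a value inferred here, not stated above), one can evaluate both sides numerically:

```python
import math

alpha, sigma = 1.5, 2.0
c = sigma * math.sqrt(2 * alpha)   # inferred from R(s) = H(s) H(-s)

def R(s):
    # Two-sided Laplace transform of sigma^2 exp(-alpha |tau|), as quoted above
    return -2 * alpha * sigma**2 / ((s - alpha) * (s + alpha))

def H(s):
    return c / (s + alpha)

# Spectral factorization check at a few points on the imaginary axis s = j w.
for w in (0.1, 1.0, 7.3):
    s = 1j * w
    assert abs(H(s) * H(-s) - R(s)) < 1e-12
```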
My question is about units. In the autocorrelation function the units of $ \alpha $ are [Hz], while in the filter form, assuming $ s = j \omega $, the units are [rad/sec]. How can this conflict be resolved? What am I missing?
Thanks. |
Answer
$\sqrt2$
Work Step by Step
Convert the angle measure to degrees: $\frac{\pi}{4} \cdot \frac{180^o}{\pi} = 45^o$. Thus, $\csc{(\frac{\pi}{4})} = \csc{45^o}$. From Section 2.1 (page 50), we learned that $\csc{45^o} = \sqrt2$.
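As a quick numerical check of the step above (a sketch):

```python
import math

# csc(pi/4) = 1 / sin(pi/4), which should equal sqrt(2)
csc_val = 1 / math.sin(math.pi / 4)
assert abs(csc_val - math.sqrt(2)) < 1e-12
```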
First order logic comes equipped with two kinds of terms:
Variable: those terms of the form $x$ for some variable $x$, of which there are infinitely many. Function application: those terms of the form $f(t_1,\dots,t_n)$ for some $n$-place function symbol $f$, of which there are infinitely many, and some $n$ terms $t_1, \dots, t_n$.
In practice, one more kind of term is commonly used informally in the context of set theory: set descriptors of the form $\{x:\varphi\}$, where $x$ is a variable, and $\varphi$ is a first order well-formed formula. This term cannot be rewritten as a function application, since $\varphi$ is not a term. This term creates a scope in which the variable $x$ is bound, similar to the way the quantified formulas of the form $\forall x\varphi$ and $\exists x\varphi$ work.
One way to introduce set descriptor terms into the logic, which then will no longer be a first order logic, so lets call it
extended first order logic, is by introducing an infinite set of binding term constructors that is disjoint from the first order logic vocabulary (consisting of logical symbols, variables, function symbols, and relation symbols), and a new way of forming terms: $Cx\varphi$, for every binding term constructor $C$, every variable $x$, and every extended first order well-formed formula $\varphi$ (the definition of function application and of a well-formed formula should be modified to accommodate this new kind of term). Let's call such terms binding terms.
We can now set aside one of the new-fangled binding term constructors, say $\sigma$, and interpret every binding term of the form $\sigma x\varphi$ as $\{x:\varphi\}$.
Set descriptors are probably the most familiar example of binding terms, but two others that I know of have been proposed in the past by mainstream mathematicians, likewise in the context of set theory: Hilbert's epsilon operator and Bourbaki's $\tau$ operator, which, though similar, are not the same operator, as they satisfy slightly different axioms.
Note that extending first order logic with binding terms necessitates a corresponding extension of the inference system, say Gentzen's Natural Deduction.
Has the combination of extended first order logic with a corresponding inference system been studied? Does it have a name? Where can I read more about it? |
AdaptiveCompositeIntegrator1D
Adaptive composite integrator: the step size is set to be small where the functional variation of the integrand is large. The integrator used on the individual intervals (the base integrator) should be specified via the constructor.
ExtendedTrapezoidIntegrator1D
The trapezoid integration rule is a two-point Newton-Cotes formula that approximates the area under the curve as a trapezoid.
GaussHermiteQuadratureIntegrator1D
Gauss-Hermite quadrature approximates the value of integrals of the form $$ \begin{align*} \int_{-\infty}^{\infty} e^{-x^2} g(x) dx \end{align*} $$ The weights and abscissas are generated by
GaussHermiteWeightAndAbscissaFunction
.
GaussHermiteWeightAndAbscissaFunction
Class that generates weights and abscissas for Gauss-Hermite quadrature.
GaussianQuadratureData GaussianQuadratureIntegrator1D
Class that performs integration using Gaussian quadrature.
GaussJacobiQuadratureIntegrator1D
Gauss-Jacobi quadrature approximates the value of integrals of the form $$ \begin{align*} \int_{-1}^{1} (1 - x)^\alpha (1 + x)^\beta f(x) dx \end{align*} $$ The weights and abscissas are generated by
GaussJacobiWeightAndAbscissaFunction
.
GaussJacobiWeightAndAbscissaFunction
Class that generates weights and abscissas for Gauss-Jacobi quadrature.
GaussLaguerreQuadratureIntegrator1D
Gauss-Laguerre quadrature approximates the value of integrals of the form $$ \begin{align*} \int_{0}^{\infty} e^{-x}f(x) dx \end{align*} $$ The weights and abscissas are generated by
GaussLaguerreWeightAndAbscissaFunction
.
GaussLaguerreWeightAndAbscissaFunction
Class that generates weights and abscissas for Gauss-Laguerre quadrature.
GaussLegendreQuadratureIntegrator1D
Gauss-Legendre quadrature approximates the value of integrals of the form $$ \begin{align*} \int_{-1}^{1} f(x) dx \end{align*} $$ The weights and abscissas are generated by
GaussLegendreWeightAndAbscissaFunction
.
GaussLegendreWeightAndAbscissaFunction
Class that generates weights and abscissas for Gauss-Legendre quadrature.
Integrator1D<T,U>
Class for defining the integration of 1-D functions.
Integrator2D<T,U>
Class for defining the integration of 2-D functions.
IntegratorRepeated2D
Two dimensional integration by repeated one dimensional integration using
Integrator1D
.
RealFunctionIntegrator1DFactory
Factory class for 1-D integrators that do not take arguments.
RombergIntegrator1D RungeKuttaIntegrator1D
Adapted from the fourth-order Runge-Kutta method for solving ODEs.
SimpsonIntegrator1D
Simpson's integration rule is a Newton-Cotes formula that approximates the function to be integrated with quadratic polynomials before performing the integration. |
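For illustration, the composite Simpson rule described in the last entry can be sketched in a few lines (an independent Python sketch, not this library's Java code):

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        n += 1  # Simpson's rule needs an even number of subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3

# The rule is exact for polynomials of degree <= 3:
# the integral of x^2 over [0, 1] is 1/3.
assert abs(simpson(lambda x: x * x, 0.0, 1.0) - 1.0 / 3.0) < 1e-12
```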
area
A rectangle has length p cm and breadth q cm, where p and q are integers and p and q satisfy the equation pq + q = 13 + q^2; then the maximum possible area of the rectangle is...
Tôn Thất Khắc Trịnh 24/07/2018 at 13:22
OwO, it's a typo: it's supposed to be width, okay? Selected by MathYouLike
If you use the AM-GM theorem, you'll know that any rectangle achieves the maximum possible area when both of its measurements are equal, making it a square (a square is technically a rectangle too). So we have p = q. Therefore, the equation becomes \(q^2+q=13+q^2\) <=> q = 13 (cm) <=> p = q = 13 (cm) <=> S = pq = 13^2 = 169 (cm^2). (P.S: don't do this here, cuz you'll get to a dead end, oof) \(pq+q=13+q^2\) \(\Leftrightarrow S=pq=q^2-q+13\)
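Brute-force enumeration over the integer constraint p = (q^2 - q + 13)/q settles the question and confirms the maximum of 169 (a sketch; names are mine):

```python
# pq + q = 13 + q^2  =>  p = (q^2 - q + 13) / q, which is an integer
# exactly when q divides 13, i.e. q in {1, 13}.
best = 0
for q in range(1, 200):
    num = q * q - q + 13
    if num % q == 0:
        p = num // q
        best = max(best, p * q)
assert best == 169  # attained at p = q = 13
```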
Uchiha Sasuke 24/07/2018 at 13:34
OMG OMG THANK YOU SO VERY MUCH!!!!!!!!!!!!!!!!!!!!! :DDDDDDDDDDDD
Given three circles of radius 2, tangent to each other as shown in the following diagram, what is the area for the shaded region?
Chibi 11/04/2017 at 11:31
Let A, B, C be the centers of the three circles.
=> AB = BC = AC = 2R = 4
=> ABC is an equilateral triangle.
Let S be the area of the shaded region, and let \(S_A\) be the area of the sector at A bounded by the two tangency points.
\(S = S_{ABC} - 3S_A\)
\(S_{ABC} = \dfrac{1}{2}\cdot 4\cdot 4\cdot\dfrac{\sqrt{3}}{2} = 4\sqrt{3}\)
\(S_A = \dfrac{60}{360}S_{circle} = \dfrac{1}{6}\pi\cdot 2^2 = \dfrac{2\pi}{3}\)
=> \(S = 4\sqrt{3} - 3\cdot\dfrac{2\pi}{3} = 4\sqrt{3} - 2\pi\)
Selected by MathYouLike
Let R and S be points on the sides BC and AC , respectively, of ΔABC , and let P be the intersection of AR and BS . Determine the area of ΔABC if the areas of ΔAPS , ΔAPB , and ΔBPR are 5, 6, and 7, respectively
An Duong 09/04/2017 at 07:31
We have \(\dfrac{SP}{PB}=\dfrac{area\left(APS\right)}{area\left(ABP\right)}=\dfrac{5}{6}\)
Call the area of PSR be x, the area of CSR be y, we have:
\(\dfrac{area\left(PSR\right)}{area\left(PBR\right)}=\dfrac{SP}{PB}=\dfrac{5}{6}\)
\(\Rightarrow\dfrac{x}{7}=\dfrac{5}{6}\) \(\Rightarrow x=\dfrac{35}{6}\)
\(\dfrac{BR}{CR}=\dfrac{area\left(BSR\right)}{area\left(CRS\right)}=\dfrac{7+x}{y}\) (1)
\(\dfrac{BR}{CR}=\dfrac{area\left(ABR\right)}{area\left(ACR\right)}=\dfrac{13}{x+y+5}\) (2)
(1), (2) => \(\dfrac{7+x}{y}=\dfrac{13}{x+y+5}\)
\(\Rightarrow y=\dfrac{\left(7+x\right)\left(5+x\right)}{6-x}=\dfrac{5005}{6}\)
So, \(area\left(ABC\right)=5+6+7+x+y\)
\(=5+6+7+\dfrac{35}{6}+\dfrac{5005}{6}=858\)
John selected this answer.
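The fractions can be checked exactly with Python's `fractions` module (a verification sketch, not part of the original solution):

```python
from fractions import Fraction as F

# Areas given: [APS] = 5, [APB] = 6, [BPR] = 7.
# SP/PB = [APS]/[APB] = 5/6, so x = [PSR] satisfies x/7 = 5/6.
x = F(5, 6) * 7                   # 35/6

# BR/CR computed two ways gives (7+x)/y = 13/(x+y+5);
# rearranging yields y = (7+x)(5+x)/(6-x).
y = (7 + x) * (5 + x) / (6 - x)   # 5005/6

total = 5 + 6 + 7 + x + y
print(total)  # 858
assert total == 858
```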
Given a rectangular sheet of paper with a circular hole, as in the figure below, how can we cut the paper along a straight line so that we get two parts of equal area?
An Duong 25/03/2017 at 21:31
Because any line through the center of a rectangle (or of a circle) divides it into two parts of equal area, you should cut the paper along the line connecting the two centers of the rectangle and the circle (see the following figure).
Selected by MathYouLike
A circle of radius 3 is inscribed in the pictured quadrant of a circle. Find the area of the shaded section.
mathlove 16/03/2017 at 18:29
The circle of radius 3 has area \(9\pi\). Let \(r\) denote the radius of the pictured quadrant of the large circle; then \(r=3\sqrt{2}+3\). Let \(x\) be the area to calculate. We have
\(\dfrac{1}{4}\pi r^2=2x+\pi\cdot3^2+\left(3^2-\dfrac{1}{4}\pi\cdot3^2\right)=2x+9\left(1+\dfrac{3\pi}{4}\right)\)
\(\Leftrightarrow\dfrac{\pi}{4}\left(3\sqrt{2}+3\right)^2=2x+9\left(1+\dfrac{3\pi}{4}\right)\Leftrightarrow2x=\dfrac{\left(27+18\sqrt{2}\right)\pi}{4}-\dfrac{36+27\pi}{4}\)
\(\Leftrightarrow x=\dfrac{9\sqrt{2}\pi-18}{4}\)
Selected by MathYouLike
mathlove 17/03/2017 at 10:45
We have \(y=3^2-\dfrac{1}{4}\pi\cdot3^2\) and \(r-3=3\sqrt{2}\Rightarrow r=3+3\sqrt{2}\).
Selected by MathYouLike
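A numerical check of the sector equation (pure Python; it also pins down the constant term in the final simplification):

```python
import math

r = 3 + 3 * math.sqrt(2)          # radius of the quadrant
quadrant = 0.25 * math.pi * r**2  # area of the quarter disc

# quadrant = 2x + (area of inscribed circle) + (corner piece 3^2 - (1/4)pi*3^2)
circle = math.pi * 3**2
corner = 3**2 - 0.25 * math.pi * 3**2
x = (quadrant - circle - corner) / 2

print(round(x, 4))
assert abs(x - (9 * math.sqrt(2) * math.pi - 18) / 4) < 1e-12
```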
Continuing the previous post:
I have another problem: Calculate the area of the curved square below (hatched area):
mathlove 13/03/2017 at 18:12
Let \(x\) be the area to find. It is easy to see that \(\left(1\right)+\left(2\right)+\left(1\right)=1-\dfrac{\pi}{4}\). According to the previous post, \(\left(1\right)=1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\), so that
\(\left(2\right)=\left(1-\dfrac{\pi}{4}\right)-2\left(1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\right)=-1+\dfrac{\pi}{12}+\dfrac{\sqrt{3}}{2}\).
Therefore \(x=\dfrac{\pi}{4}-\left[3\cdot\left(2\right)+2\cdot\left(1\right)\right]=\dfrac{\pi}{4}-\left[\left(-3+\dfrac{\pi}{4}+\dfrac{3\sqrt{3}}{2}\right)+\left(2-\dfrac{\sqrt{3}}{2}-\dfrac{\pi}{3}\right)\right]\)
\(=1+\dfrac{\pi}{3}-\sqrt{3}\).
Selected by MathYouLike
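Assuming the standard figure (a unit square with four quarter-circle arcs of radius 1 centered at its corners — the geometry is my assumption), the value \(1+\pi/3-\sqrt{3}\approx0.3151\) can be confirmed by a crude grid integration:

```python
import math

# Grid integration over the unit square: the "curved square" is the set of
# points within distance 1 of all four corners of the unit square.
n = 1000
inside = 0
for i in range(n):
    x = (i + 0.5) / n
    for j in range(n):
        y = (j + 0.5) / n
        if (x*x + y*y <= 1 and (x-1)**2 + y*y <= 1
                and x*x + (y-1)**2 <= 1 and (x-1)**2 + (y-1)**2 <= 1):
            inside += 1

area = inside / n**2
exact = 1 + math.pi / 3 - math.sqrt(3)
print(area, exact)  # both ~0.315
assert abs(area - exact) < 2e-3
```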
mathlove 11/03/2017 at 18:27
Let \(x\) be the area to calculate. We see that EAD is an equilateral triangle with side equal to 1 and altitude equal to \(\dfrac{\sqrt{3}}{2}\). So \(EF=1-\dfrac{\sqrt{3}}{2}\).
The angle EDC is \(30^0\), so that \(\dfrac{1}{2}\cdot\dfrac{1}{2}\left(1-\dfrac{\sqrt{3}}{2}\right)-\dfrac{x}{2}=\dfrac{\pi}{12}-\dfrac{1}{2}\cdot1\cdot1\cdot\sin30^0=\dfrac{\pi}{12}-\dfrac{1}{4}\)
So \(x=1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\).
I'm a bit confused about the gauge transformation properties of non-abelian gauge fields, and I just wanted some clarification. I keep seeing the statement that "gauge fields transform in the adjoint representation", but I have my doubts.
If we have a theory with a gauge symmetry corresponding to some simple, compact Lie group $G$, then we define the gauge covariant derivative $D_\mu$ as:
$$D_\mu\equiv\partial_\mu-igA_\mu^aT^a$$
Where $T^a\in\mathfrak{g}$ form a basis of the Lie algebra $\mathfrak{g}$ of $G$. This definition doesn't assume any representation of $T^a$, since this is determined by the representation of the field on which $D_\mu$ acts. I.e if we had a field $\psi$ which transforms in some representation $\Pi$ of a simple, compact Lie group $G$, $\psi\mapsto\Pi(g)\psi$, then we would have:
$$D_\mu\psi=\big(\partial_\mu-igA^a_\mu \pi(T^a)\big)\psi$$
Where $\pi(T^a)$ is the corresponding representation of $\mathfrak{g}$ that induces the representation $\Pi(g)$ after exponentiating. In this case, we require that the gauge covariant derivative has the same gauge transformation properties as $\psi$, namely $D_\mu\psi\mapsto\Pi(g)D_\mu\psi$ for some $g\in G$. This means that we must have:
$$D_\mu\mapsto\Pi(g)D_\mu\Pi^{-1}(g)$$
Question 1) I know that objects transforming in the adjoint representation transform as $x\mapsto gxg^{-1}$. This is obviously very similar to this expression, but I don't think they are the same. Is it therefore correct to say in this case that $D_\mu$ transforms in the adjoint representation, or rather that it transforms "adjointly" to $\psi$?
From the expression $D_\mu\mapsto \Pi(g)D_\mu \Pi^{-1}(g)$ we find:
$$A^a_\mu\pi(T^a)\mapsto \Pi(g)\Big(A^a_\mu\pi(T^a)+\frac{i}{g}\partial_\mu\Big)\Pi^{-1}(g)$$
If we consider $\Pi(g)=\exp\Big(i\alpha^a(x)\pi(T^a)\Big)$, then for an infinitesimal transformation we can expand to first order in $\alpha$ to find:
$$A^a_\mu\mapsto A^a_\mu+f^{abc}A^b_\mu\alpha^c+\frac{1}{g}\partial_\mu\alpha^a$$
The $f^{abc}A^b_\mu\alpha^c$ term in this expression is reminiscent of the adjoint rep of the Lie algebra, so Question 2) is this what people refer to when they say that the gauge field transforms in the adjoint?
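As a sanity check on this algebra (my own addition: SU(2) with $\pi(T^a)=\sigma^a/2$, $f^{abc}=\epsilon^{abc}$, and constant $A^a_\mu$ so the derivative term drops out), the homogeneous part can be verified numerically in pure Python:

```python
# Check: to first order, i*alpha^c [T^c, A^b T^b] has components f^{abc} A^b alpha^c.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_lin(*terms):
    # linear combination c1*M1 + c2*M2 + ... of 2x2 matrices
    return [[sum(c * M[i][j] for c, M in terms) for j in range(2)]
            for i in range(2)]

sigma = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]
T = [mat_lin((0.5, s)) for s in sigma]

def eps(a, b, c):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (a - b) * (b - c) * (c - a) / 2

A = [0.3, -1.1, 0.7]       # arbitrary components A^a (constant in x)
alpha = [0.2, 0.5, -0.4]   # arbitrary parameters alpha^c

A_mat = mat_lin(*zip(A, T))
terms = [(1j * alpha[c], mat_mul(T[c], A_mat)) for c in range(3)]
terms += [(-1j * alpha[c], mat_mul(A_mat, T[c])) for c in range(3)]
delta_A = mat_lin(*terms)

# Components: X^a = 2 tr(X T^a), since tr(T^a T^b) = delta^{ab}/2.
for a in range(3):
    XT = mat_mul(delta_A, T[a])
    proj = 2 * (XT[0][0] + XT[1][1])
    expected = sum(eps(a, b, c) * A[b] * alpha[c]
                   for b in range(3) for c in range(3))
    assert abs(proj - expected) < 1e-12
print("adjoint rotation of A^a verified")
```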
Sorry if there are any errors or glaring misunderstandings, I'm just trying to get my head around the terminology (and possibly the maths, who knows).
The electric potential $\Phi$ is defined through the following relation:
$$\mathbf{E}=-\nabla \Phi\tag{1}$$
Now consider a vector field $\mathbf{F}$ such that:
$$\nabla\cdot\mathbf{F}=D\qquad\nabla\times\mathbf{F}=\mathbf{C}$$
According to Helmholtz theorem, if the divergence $D(\mathbf{r})$ and the curl $\mathbf{C(r)}$ are specified and
if they both go to zero faster than $\dfrac{1}{ r^2}$ as $r\to \infty$, and if $\mathbf{F(r)}$ goes to zero as $r\to \infty$ then $\mathbf{F}$ is uniquely given by
$$\mathbf{F}=-\nabla U+ \nabla \times \mathbf{W}$$ where
$$U(\mathbf{r})=\frac{1}{4\pi}\int \frac{D(\mathbf{r'})}{|\mathbf{r-r'}|}d^3r'\tag{2}$$$$\mathbf{W}(\mathbf{r})=\frac{1}{4\pi}\int \frac{\mathbf{C}(\mathbf{r'})}{|\mathbf{r-r'}|}d^3r'\tag{3}$$
For a static electric field, $D=\dfrac{\rho}{\epsilon_0}$ and $\mathbf{C}=0$. So, according to $(1)$ and $(2)$ the electric potential of
a charge distribution that goes to zero faster than $\dfrac{1}{r^2}$ as $r\to \infty$ can be calculated as$$\Phi(\mathbf{r})=\frac{1}{4\pi \epsilon_0}\int \frac{\rho(\mathbf{r'})}{|\mathbf{r-r'}|}d^3r'$$where the integral is over all of space.
$\dfrac{1}{|\mathbf{r-r'}|}$ can be expanded using spherical harmonics to obtain a multipole expansion. So the multipole expansion is valid only under the above conditions too.
If the above condition doesn't hold, you have to use equation $(1)$, i.e. you have to find $\mathbf{E}$ first and then perform the integration to find $\Phi$ (as in the case of an infinite uniformly charged wire).
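As a small illustration of the expansion at work (my own example, in units where $1/(4\pi\epsilon_0)=1$): the exact potential of a physical dipole approaches its leading multipole term at large $r$:

```python
import math

# Exact potential of a physical dipole (charges +q, -q separated by d along z)
# versus the leading multipole (dipole) term p*cos(theta)/r^2.
k = 1.0          # 1/(4*pi*eps0), absorbed into the units
q, d = 1.0, 0.01
p = q * d        # dipole moment

def phi_exact(r, theta):
    z, x = r * math.cos(theta), r * math.sin(theta)
    r_plus = math.hypot(x, z - d / 2)
    r_minus = math.hypot(x, z + d / 2)
    return k * q * (1 / r_plus - 1 / r_minus)

def phi_dipole(r, theta):
    return k * p * math.cos(theta) / r**2

r, theta = 10.0, 0.7
rel_err = abs(phi_exact(r, theta) - phi_dipole(r, theta)) / abs(phi_exact(r, theta))
print(rel_err)   # tiny: the next correction is O((d/r)^2)
assert rel_err < 1e-5
```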
Answer
$1$
Work Step by Step
Convert the angle measure to degrees to obtain: $\frac{\pi}{2} \cdot \frac{180^o}{\pi} = 90^o$. Thus, $\csc{\left(\tfrac{\pi}{2}\right)} = \csc{90^o} = \frac{1}{\sin{90^o}} = \frac{1}{1} = 1$
Let $\mathfrak{Cat}$ be the 2-category of small categories, functors, and natural transformations. Consider the following diagram in $\mathfrak{Cat}$: $$\mathbb{D} \stackrel{F}{\longrightarrow} \mathbb{C} \stackrel{G}{\longleftarrow} \mathbb{E}$$
There are several notions of pullback one could investigate in $\mathfrak{Cat}$:
The ordinary pullback in the underlying 1-category $\textbf{Cat}$: these exist and are unique, by ordinary abstract nonsense. Explicitly, $\mathbb{D} \mathbin{\stackrel{1}{\times}_\mathbb{C}} \mathbb{E}$ has objects pairs $(d, e)$ such that $F d = G e$ (evil!) and arrows are pairs $(k, l)$ such that $F k = G l$. This evidently an evil notion: it is not stable under equivalence. For example, take $\mathbb{C} = \mathbb{1}$: then we get an ordinary product; but if $\mathbb{C}$ is the interval category $\mathbb{I}$, we have $\mathbb{1} \simeq \mathbb{I}$, yet if I choose $F$ and $G$ so that their images are disjoint, we have $\mathbb{D} \mathbin{\stackrel{1}{\times}_\mathbb{C}} \mathbb{E} = \emptyset$, and $\emptyset \not\simeq \mathbb{D} \times \mathbb{E}$ in general.
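The evil behaviour is easy to see even at the level of objects; a toy Python sketch (categories reduced to object sets, functors to dicts on objects — all names made up):

```python
from itertools import product

# Objects-level sketch of the ordinary (strict) pullback in Cat: objects are
# pairs (d, e) with F(d) == G(e).
D = ["d"]
E = ["e"]
I = ["0", "1"]      # the interval category (objects only)

F = {"d": "0"}      # functor D -> I hitting one endpoint
G = {"e": "1"}      # functor E -> I hitting the other

pullback = [(d, e) for d, e in product(D, E) if F[d] == G[e]]
print(pullback)     # [] -- empty, although I is equivalent to the terminal category

# Collapsing I to the terminal category 1 gives the product D x E instead:
F1 = {"d": "*"}
G1 = {"e": "*"}
print([(d, e) for d, e in product(D, E) if F1[d] == G1[e]])  # [('d', 'e')]
```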
The strict 2-pullback is a category $\mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E}$ and two functors $P : \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E} \to \mathbb{D}$, $Q : \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E} \to \mathbb{E}$ such that $F P = G Q$, with the following universal property (if I'm not mistaken): for all $K : \mathbb{T} \to \mathbb{D}$ and $L : \mathbb{T} \to \mathbb{E}$ such that $F K = G L$, there is a functor $H : \mathbb{T} \to \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E}$ such that $P H = K$ and $Q H = L$, and $H$ is unique up to equality; if $K' : \mathbb{T} \to \mathbb{D}$ and $L' : \mathbb{T} \to \mathbb{E}$ are two further functors such that $F K' = G L'$ and $H' : \mathbb{T} \to \mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E}$ satisfies $P H' = K'$ and $Q H' = L'$ and there are natural transformations $\beta : K \Rightarrow K'$ and $\gamma : L \Rightarrow L'$, then there is a unique natural transformation $\alpha : H \Rightarrow H'$ such that $P \alpha = \beta$ and $Q \alpha = \gamma$. So $\mathbb{D} \mathbin{\stackrel{s}{\times}_\mathbb{C}} \mathbb{E} = \mathbb{D} \mathbin{\stackrel{1}{\times}_\mathbb{C}} \mathbb{E}$ works, and in particular, strict 2-pullbacks are evil.
The pseudo 2-pullback is a category $\mathbb{D} \times_\mathbb{C} \mathbb{E}$,
three functors $P : \mathbb{D} \times_\mathbb{C} \mathbb{E} \to \mathbb{D}$, $Q : \mathbb{D} \times_\mathbb{C} \mathbb{E} \to \mathbb{E}$, $R : \mathbb{D} \times_\mathbb{C} \mathbb{E} \to \mathbb{C}$, and two natural isomorphisms $\phi : F P \Rightarrow R$, $\psi : G Q \Rightarrow R$, satisfying the following universal property: for all functors $K : \mathbb{T} \to \mathbb{D}$, $L : \mathbb{T} \to \mathbb{E}$, $M : \mathbb{T} \to \mathbb{C}$, and natural isomorphisms $\theta : F K \Rightarrow M$, $\chi : G L \Rightarrow M$, there is a unique functor $H : \mathbb{T} \to \mathbb{D} \times_\mathbb{C} \mathbb{E}$ and natural isomorphisms $\tau : K \Rightarrow P H$, $\sigma : L \Rightarrow Q H$, $\rho : M \Rightarrow R H$ such that $\phi H \bullet F \tau = \rho \bullet \theta$ and $\psi H \bullet G \sigma = \rho \bullet \chi$ (plus some coherence axioms I haven't understood); and some further universal property for natural transformations.
By considering the cases $\mathbb{T} = \mathbb{1}$ and $\mathbb{T} = \mathbb{2}$, it seems that $\mathbb{D} \times_\mathbb{C} \mathbb{E}$ can be taken to be the following category: its objects are quintuples $(c, d, e, f, g)$ where $f : F d \to c$ and $g : G e \to c$ are isomorphisms, and its morphisms are triples $(k, l, m)$ where $k : d \to d'$, $l : e \to e'$, $m : c \to c'$ make the evident diagram in $\mathbb{C}$ commute. The functors $P, Q, R$ are the obvious projections, and the natural transformations $\phi$ and $\psi$ are also given by projections.
Question. This seems to satisfy the required universal properties. Is my construction correct? Question. What are the properties of this construction? Is it stable under equivalences, in the sense that $\mathbb{D}' \times_{\mathbb{C}'} \mathbb{E}' \simeq \mathbb{D} \times_\mathbb{C} \mathbb{E}$ when there is an equivalence between $\mathbb{D}' \stackrel{F'}{\longrightarrow} \mathbb{C}' \stackrel{G'}{\longleftarrow} \mathbb{E}'$ and $\mathbb{D} \stackrel{F}{\longrightarrow} \mathbb{C} \stackrel{G}{\longleftarrow} \mathbb{E}$?
Finally, there is the non-strict 2-pullback, which as I understand it has the same universal property as the pseudo 2-pullback but with "unique functor" replaced by "functor unique up to isomorphism".
Question. Is this correct? General question. Where can I find a good explanation of strict 2-limits / pseudo 2-limits / bilimits and their relationships, with explicit constructions for concrete 2-categories such as $\mathfrak{Cat}$? So far I have only found definitions without examples. (Is there a textbook yet...?)
So the problem I have says rationalize the denominator and simplify. $$ \frac{ \sqrt{15}}{\sqrt{10}-3}$$
My answer I got was $\frac{5 \sqrt6}{7}$.
Am I doing this wrong? I was told my answer was incorrect.
It seems that you tried multiplying by $\frac{\sqrt{10}}{\sqrt{10}}$. Instead, you should try multiplying by the conjugate and take advantage of difference of squares: $$ \frac{\sqrt{15}}{\sqrt{10} - 3} = \frac{\sqrt{15}}{\sqrt{10} - 3} \cdot \frac{\sqrt{10} + 3}{\sqrt{10} + 3} = \frac{\sqrt{150} + 3\sqrt{15}}{(\sqrt{10})^2 - 3^2} = 5\sqrt{6} + 3\sqrt{15} $$
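A quick numeric check of both expressions — the incorrect $5\sqrt{6}/7$ and the conjugate-based answer:

```python
import math

lhs = math.sqrt(15) / (math.sqrt(10) - 3)
rhs = 5 * math.sqrt(6) + 3 * math.sqrt(15)
wrong = 5 * math.sqrt(6) / 7          # the answer proposed in the question

assert abs(lhs - rhs) < 1e-9          # the conjugate answer matches
assert abs(lhs - wrong) > 1           # the proposed answer is far off
```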
I think what happened was that you correctly multiplied the denominator by $\sqrt{10}+3$, but incorrectly multiplied the numerator by $\sqrt{10}$. The numerator should also have been multiplied by $\sqrt{10}+3$:
$$\frac{\sqrt{15}}{\sqrt{10}-3}=\frac{\sqrt{15} \cdot (\sqrt{10}+3)}{(\sqrt{10}-3) \cdot (\sqrt{10}+3)}=\frac{\sqrt{15} \cdot (\sqrt{10}+3)}{10-9}=\sqrt{15} \cdot (\sqrt{10}+3)$$
27/09/2019, 16:00 — 17:00 — Room P3.10, Mathematics Building Debashis Ghoshal, Jawaharlal Nehru University
Designing matrix models for zeta functions
The apparently random pattern of the non-trivial zeroes of the Riemann zeta function (all on the critical line, according to the Riemann hypothesis) has led to the suggestion that they may be related to the spectrum of an operator. It has also been known for some time that the statistical properties of the eigenvalue distribution of an ensemble of random matrices resemble those of the zeroes of the zeta function. With the objective to identify a suitable operator, we start by assuming the Riemann hypothesis and construct a unitary matrix model (UMM) for the zeta function. Our approach, however, could be termed
piecemeal, in the sense that we consider each factor (in the Euler product representation) of the zeta function to get a UMM for each prime, and then assemble these to get a matrix model for the full zeta function. This way we can write the partition function as a trace of an operator. A similar construction works for a family of related zeta functions.
Note: unusual date
24/06/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Stefano Andriolo, Hong Kong University of Science and Technology
The Weak Gravity Conjecture
We discuss various versions of the weak gravity conjecture.
20/05/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Vishnu Jejjala, University of the Witwatersrand
Experiments with Machine Learning in Geometry & Physics
Identifying patterns in data enables us to formulate questions that can lead to exact results. Since many of the patterns are subtle, machine learning has emerged as a useful tool in discovering these relationships. We show that topological features of Calabi–Yau geometries are machine learnable. We indicate the broad applicability of our methods to existing large data sets by finding relations between knot invariants, in particular, the hyperbolic volume of the knot complement and the Jones polynomial.
07/05/2019, 10:00 — 11:00 — Room P5.18, Mathematics Building Nils Carqueville, University of Vienna
TQFTS, Orbifolds and Topological Quantum Computation
I will review basic notions and results in topological quantum field theory and discuss its orbifolds, with the aim to apply them in the context of topological quantum computation.
Unusual day and hour and room.
06/05/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Ceyda Simsek, University of Groningen
Spacetime geometry of non-relativistic string theory
Non-relativistic string theory is described by a sigma model that maps a two dimensional string worldsheet to a non-relativistic spacetime geometry. We discuss recent developments in understanding the spacetime geometry of non-relativistic string theory trying to provide several new insights. We show that the non-relativistic string action admits a surprisingly large number of symmetries. We introduce a non-relativistic limit to obtain the non-relativistic string action which also provides us the non-relativistic T-duality transformation rules and spacetime equations of motion.
01/04/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Davide Masoero, Faculdade de Ciências, Universidade de Lisboa
Meromorphic opers and the Bethe Ansatz
The Bethe Ansatz equations were initially conceived as a method to solve some particular Quantum Integrable Models (IM), but are nowadays a central tool of investigation in a variety of physical and mathematical theories such as string theory, supersymmetric gauge theories, and Donaldson-Thomas invariants. Surprisingly, it has been observed, in several examples, that the solutions of the same Bethe Ansatz equations are provided by the monodromy data of some ordinary differential operators with an irregular singularity (ODE/IM correspondence).
In this talk I will present the results of my investigation on the ODE/IM correspondence in quantum $g$-KdV models, where $g$ is an untwisted affine Kac-Moody algebra. I will construct solutions of the corresponding Bethe Ansatz equations, as the (irregular) monodromy data of a meromorphic $L(g)$-oper, where $L(g)$ denotes the Langlands dual algebra of $g$.
The talk is based on:
D Masoero, A Raimondo, D Valeri, Bethe Ansatz and the Spectral Theory of affine Lie algebra-valued connections I. The simply-laced case. Comm. Math. Phys. (2016)
D Masoero, A Raimondo, D Valeri, Bethe Ansatz and the Spectral Theory of affine Lie algebra-valued connections II: The non-simply-laced case. Comm. Math. Phys. (2017)
D Masoero, A Raimondo, Opers corresponding to Higher States of the $g$-Quantum KdV model. arXiv 2018.

26/11/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Davide Polini, Instituto Superior Técnico
Counting formulae for extremal black holes in an STU-model
We present microstate counting formulae for BPS black holes in an $N=2$ STU-model.
12/11/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Alexandre Belin, University of Amsterdam
Siegel Modular Forms in AdS/CFT
I will discuss the application of Siegel modular forms for extracting the degeneracy of states of symmetric orbifold CFTs. These modular forms are closely related to the generating function for the elliptic genera of such CFTs and I will present an efficient technic for extracting their Fourier coefficients. I will then discuss to what extent symmetric orbifold CFTs can admit nice gravity duals and thus make an interesting connection between number theory and quantum gravity.
05/11/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Zhihao Duan, École Normale Supérieure Paris
Instantons in the Hofstadter butterfly: resurgence and quantum mirror curves
Recently an interesting connection between topological string theory and lattice models in condensed matter physics was discussed by several authors. In this talk, we will focus on the Harper-Hofstadter Hamiltonian. For special values of the magnetic flux, its energy spectrum can be exactly solved and its graph has a beautiful shape known as Hofstadter's butterfly. We are interested in the non-perturbative information inside the spectrum. First we consider the weak magnetic field limit and write down a trans-series ansatz for the energies. We then discuss fluctuations around instanton sectors as well as resurgence relations. For the second half of the talk, our goal is to present another powerful way to compute those fluctuations using the topological string formalism, after reviewing all the necessary background. The talk will be based on arXiv: 1806.11092.
03/10/2018, 14:30 — 15:30 — Room P3.10, Mathematics Building Abhiram Kidambi, Technical University of Vienna
BPS algebras and Moonshine
We give a brief introduction to BPS algebras and Moonshine in this informal seminar.
Unusual hour and day.
01/10/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Abhiram Kidambi, Technical University of Vienna
$\Gamma_0(N)$, quantum black holes and wall crossing
The degeneracies of supersymmetric dyonic black holes are known to be encoded in the Fourier coefficients of certain modular objects. For the case of $N = 4$, $d=4$ theory which I shall discuss, the spectrum of quarter BPS dyons is prone to wall crossing phenomena. The number theory machinery behind wall crossing in $4d$ $N = 4$ theories was described systematically in a comprehensive paper by Atish Dabholkar, Sameer Murthy and Don Zagier. There have also been supergravity localisation calculations thereafter which confirm some of the results that were shown by DMZ.
In this talk, I shall provide some of the number theoretic background for BPS state counting and review some of the key results known so far from both the microscopic and macroscopic side. I shall comment on black hole metamorphosis studied by Sen (and collaborators) and Nampuri et.al from a number theoretic framework. The remainder of the talk will be devoted to the generalisation of the number theory machinery of DMZ to congruence subgroups of $\operatorname{SL}(2,\mathbb{Z})$ i.e. for orbifolded CHL black holes and the supergravity approach for the CHL case.
This talk summarises some of the ongoing work with Sameer Murthy, Valentin Reys, Abhishek Chowdhury and Timm Wrase.
09/07/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Vladislav Kupriyanov, Ludwig-Maximilians-Universität München
$L_{\infty}$ bootstrap approach to non-commutative gauge theories
Non-commutative gauge theories with a non-constant NC-parameter are investigated. As a novel approach, we propose that such theories should admit an underlying $L_{\infty}$ algebra, that governs not only the action of the symmetries but also the dynamics of the theory. Our approach is well motivated from string theory. In this talk I will give a brief introduction to $L_{\infty}$ algebras and discuss in more details the $L_{\infty}$ bootstrap program: the existence of the solution, uniqueness and particular examples. The talk is mainly based on: arXiv:1803.00732 and 1806.10314.
04/06/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Frank Ferrari, Université Libre de Bruxelles
On Melonic Matrix Models and SYK-like Black Holes
I will illustrate three aspects of the new large $D$ limit of matrix models and their applications to black hole physics:
Graph theory aspect: I will review the basic properties of the new large $D$ limit of matrix models and provide a simple graph-theoretic argument for its existence, independent of standard tensor model techniques, using the concepts of Tait graphs and Petrie duals.
Phase diagrams: I will outline the interesting phenomena found in the phase diagrams of simple fermionic matrix quantum mechanics/tensor/SYK models at strong coupling, including first and second order phase transitions and quantum critical points. Some of these phase transitions can be argued to provide a quantum mechanical description of the phenomenon of gravitational collapse.
Probe analysis: I will briefly describe how the matrix point of view allows to naturally define models of D-particles probing an SYK-like black hole and discuss the qualitative properties of this class of models, emphasizing the difference between models based on fermionic and on bosonic strings. This approach provides an interesting strategy to study the emerging geometry of melonic/SYK black holes. In particular, it will be explained how a sharply defined notion of horizon emerges naturally.

28/05/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Salvatore Baldino, Instituto Superior Técnico
Introduction to resurgence (III)
This is the third in a series of talks introducing the subject of resurgence in quantum mechanics, field theory and string theory.
30/04/2018, 16:00 — 17:00 — Room P3.10, Mathematics Building Maximilian Schwick, Instituto Superior Técnico
Introduction to resurgence (II)
This is the second in a series of talks introducing the subject of resurgence in quantum mechanics, field theory and string theory.
24/04/2018, 11:00 — 12:00 — Room 6.2.33, Faculty of Sciences of the Universidade de Lisboa Panagiotis Betzios, University of Crete
Matrix Quantum Mechanics and the $S^1/\mathbb{Z}_2$ orbifold
We revisit $c=1$ non-critical string theory and its formulation via Matrix Quantum Mechanics (MQM). In particular we study the theory on an $S^1/\mathbb{Z}_2$ orbifold of Euclidean time and try to compute its partition function in the grand canonical ensemble that allows one to study the double scaling limit of the matrix model and connect the result to string theory (Liouville theory). The result is expressed as the Fredholm Pfaffian of a Kernel which we describe in several bases. En route we encounter interesting mathematics related to Jacobi elliptic functions and the Hilbert transform. We are able to extract the contribution of the twisted states at the orbifold fixed points using a formula by Dyson for the determinant of the sine kernel. Finally, we will make some comments regarding the possibility of using this model as a toy model of a two dimensional big-bang big-crunch universe.
23/04/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Olga Papadoulaki, University of Southampton
FZZT branes and non-singlets of Matrix Quantum Mechanics
We will discuss the non-singlet sectors of the matrix model associated with two dimensional non-critical string theory. These sectors of the matrix model contain rich physics and are expected to describe non-trivial states such as black holes. I will present how one can turn on the non-singlets by adding $N_f \times N$ fundamental and anti-fundamental fields in the gauge matrix quantum mechanics model as well as a Chern-Simons term. Then, I will show how one can rewrite our model as a spin-Calogero model in an external magnetic field. By introducing chiral variables we can define spin-currents that in the large $N$ limit satisfy an $SU(2N_f )_k$ Kac-Moody algebra. Moreover, we can write down the canonical partition function and study different limits of the parameters and possible phase transitions. In the grand canonical ensemble the partition function is a $\tau$ - function obeying discrete soliton equations. Also, in a certain limit we recover the matrix model of Kazakov-Kostov-Kutasov conjectured to describe the two dimensional black hole. Finally, I will discuss several implications that our model has for the understanding of the thermodynamics and the physics of such string theory states.
16/04/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Masazumi Honda, Weizmann Institute of Science
Resurgent transseries and Lefschetz thimble in 3d $\mathcal{N}=2$ supersymmetric Chern-Simons matter theories
We show that a certain class of supersymmetric (SUSY) observables in 3d $\mathcal{N}=2$ SUSY Chern-Simons (CS) matter theories has nontrivial resurgent structures with respect to coupling constants given by inverse CS levels, and that their exact results are expressed as appropriate resummations of weak coupling expansions given by transseries. With a real mass parameter varied, we encounter Stokes phenomena infinitely many times, where the perturbative series gets non-Borel-summable along positive real axis of the Borel plane. We also decompose integral representations of the exact results in terms of Lefschetz thimbles and study how they are related to the resurgent transseries. We further discuss connections between the non-perturbative effects appearing in the transseries and complexified SUSY solutions which formally satisfy SUSY conditions but are not on original path integral contour. We explicitly demonstrate the above for partition functions of rank-1 3d $\mathcal{N}=2$ CS matter theories on sphere. This talk is based on arXiv:1604.08653, 1710.05010, and an on-going collaboration with Toshiaki Fujimori, Syo Kamata, Tatsuhiro Misumi and Norisuke Sakai.
09/04/2018, 15:00 — 16:00 — Room P3.10, Mathematics Building Roberto Vega, Instituto Superior Técnico
Introduction to resurgence (I)
This is the first in a series of talks introducing the subject of resurgence in quantum mechanics, field theory and string theory.
04/12/2017, 16:00 — 17:00 — Room P3.10, Mathematics Building Junya Yagi, Perimeter Institute
String theory and integrable lattice models
I will discuss a string theoretic approach to integrable lattice models. This approach provides a unified perspective on various important notions in lattice models, and relates these notions to four-dimensional $N =1$ supersymmetric field theories and their surface operators. I will also explain the Nekrasov-Shatashvili correspondence.
You've pinpointed an important problem with unbiasedness as a desideratum for an estimator, and that is that it's not invariant under reparameterization. The same thing happens with an exponential distribution. There are two common parameters to use, the rate $\lambda$ or the mean $\theta=1/\lambda.$ The MLE is invariant, so what you get either way is consistent: $$ \hat\theta_{MLE} = \overline X,\qquad\hat\lambda_{MLE} = \frac{1}{\overline X}$$ where $\overline X$ is the sample mean. However, since generally $\tfrac1{E(X)} \ne E\left(\tfrac1{X}\right),$ it turns out that while $\hat \theta_{MLE}$ is unbiased, $\hat \lambda_{MLE}$ is biased.
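This bias is easy to see in a simulation (a sketch with parameters of my own choosing; for the exponential, $E[\hat\lambda_{MLE}]=\tfrac{n}{n-1}\lambda$ exactly):

```python
import random
import statistics

random.seed(0)
lam, n, trials = 2.0, 5, 200_000

theta_hats, lam_hats = [], []
for _ in range(trials):
    sample = [random.expovariate(lam) for _ in range(n)]
    xbar = sum(sample) / n
    theta_hats.append(xbar)      # MLE of theta = 1/lambda
    lam_hats.append(1 / xbar)    # MLE of lambda

print(statistics.mean(theta_hats))   # ~0.5 = theta: unbiased
print(statistics.mean(lam_hats))     # ~2.5 = n*lambda/(n-1): biased upward
```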
An obvious answer would seem to be that we should use the bias-adjusted estimator for whichever version of the parameter we "care about" more, or in other words, which parameter's
interpretation is more in line with what we are intuitively trying to measure by estimating. By this standard, one might think we should be using an unbiased estimator for the standard deviation rather than the variance, since the standard deviation is intuitively a size of an average fluctuation.
As straightforward as this sounds, there are a number of problems with this line of thinking. The first is pretty minor but still worth noting: actually, the standard deviation isn't the size of an average fluctuation! That would be something closer to the mean absolute deviation, which for normal distributions is smaller than the standard deviation by a factor of $\sqrt{2/\pi}$.
Which brings me to the second more important point. What is the formula even for the bias-adjusted standard deviation? It's very complicated compared to the bias-adjusted variance (for the normal distribution). Furthermore, the unbiased variance estimator has a nice property: it is unbiased regardless of distribution. The precise form of the unbiased estimator for the standard deviation depends on the distribution. So that said, it's pretty obvious why authors prefer the unbiased variance estimator.
(Also, "the unbiased estimator" is a misnomer here: I mean the estimator proportional to the square root of the standard variance estimator, with the proportionality constant chosen to make it unbiased.)
Fortunately the authors aren't sacrificing much for the sake of parsimony: unbiasedness is an extremely overrated property and we shouldn't care too much about it. Think about what it means: it means that if you repeat the experiment of collecting a sample of size $n$ a million times, the average value you get for the estimator is exactly, squarely equal to the true parameter. Think about this literally: is this actually what you want? Ideally, all else equal, yes; but this ignores an important dimension of estimator quality: its variance. Surely we'd prefer an estimator whose mean is $1\%$ off the true value with fluctuations of $2\%$ over one whose mean is exactly the true value but whose fluctuations are $20\%.$
One popular metric for the quality of an estimator is mean squared error, which includes contributions from both variance and bias. The estimator minimizing it is generally not the unbiased one. However, like the unbiased standard deviation estimator, the minimum-MSE estimator depends on the distribution, which, together with the additional conceptual overhead, explains why it isn't 'standard'.
As for why we typically use the bias-corrected variance estimator rather than the MLE, it's really just that bias-corrected MLEs usually have marginally better finite-sample efficiency than uncorrected ones. There's also the fact that the unbiased version is the one that makes the formula for the t-test the least cumbersome, an explanation that probably shouldn't be overlooked.
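The variance-versus-bias trade-off is also easy to simulate for normal data: the divisor-$n$ MLE of $\sigma^2$ beats the divisor-$(n-1)$ unbiased estimator in mean squared error despite its bias ($2\sigma^4/(n-1)$ versus $(2n-1)\sigma^4/n^2$). A sketch, with the sample size chosen arbitrarily:

```python
import random

random.seed(1)

n, trials = 5, 200_000
sigma2 = 1.0                        # true variance of the normal samples
sq_err_unbiased, sq_err_mle = 0.0, 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sq_err_unbiased += (ss / (n - 1) - sigma2) ** 2   # divisor n-1
    sq_err_mle += (ss / n - sigma2) ** 2              # divisor n (the MLE)

mse_unbiased = sq_err_unbiased / trials   # theory: 2*sigma^4/(n-1) = 0.5
mse_mle = sq_err_mle / trials             # theory: (2n-1)*sigma^4/n^2 = 0.36
```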
On coupled Dirac systems
1. Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
2. School of Mathematical Sciences, Beijing Normal University, Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, China
The paper studies the coupled Dirac system

$\left\{ \begin{aligned}Du&=\frac{\partial H}{\partial v}(x,u,v)\hspace{4mm} {\rm on}\hspace{2mm}M,\\Dv&=\frac{\partial H}{\partial u}(x,u,v)\hspace{4mm} {\rm on}\hspace{2mm}M,\end{aligned} \right.$

with Hamiltonians of the form

$H(x,u,v)=f(x)\frac{|u|^{p+1}}{p+1}+g(x)\frac{|v|^{q+1}}{q+1},$

under the condition

$\frac{1}{p+1}+\frac{1}{q+1}>\frac{n-1}{n}.$

Keywords: Coupled Dirac system, generalized fountain theorem, generalized linking theorem, strongly indefinite functionals.
Mathematics Subject Classification: Primary: 53C27, 57R58, 58E05, 58J05.
Citation: Wenmin Gong, Guangcun Lu. On coupled Dirac systems. Discrete & Continuous Dynamical Systems - A, 2017, 37 (8): 4329-4346. doi: 10.3934/dcds.2017185
A result of Cai and Ellis (see Theorem 5 in http://www.sciencedirect.com/science/article/pii/0166218X9190010T) implies that deciding whether a cubic perfect line-graph is $3$-edge-colorable is NP-complete. Counter-examples to Conjecture 2 can be built from their argument as follows:
First, notice that every cubic bridgeless graph $G$ satisfies $\chi_f'(G)=3$. This is easily obtained using the following formula for $\chi_f'(G)$, which is derived from Edmonds' inequalities for the matching polytope of $G$:$$\chi_f'(G)=\max\left(\Delta(G),\max_{U\subseteq V(G), |U|\geq 3\, \text{odd}}\frac{|E(U)|}{\frac{|U|-1}{2}}\right).$$
Now, consider the following construction: let $H$ be a bridgeless cubic graph and $S(H)$ be the graph obtained from $H$ by subdividing each edge exactly once. Let $G$ be the line graph of $S(H)$.
It is straightforward to check that $G$ is cubic and bridgeless, and that $\chi'(G)=3$ if and only if $\chi'(H)=3$. Furthermore, $G$ is perfect because $S(H)$ is bipartite.
Therefore, if $H$ is a cubic bridgeless graph with $\chi'(H)=4$ (for example the Petersen graph or any other
snark http://en.wikipedia.org/wiki/Snark_(graph_theory)), then $G$ is a cubic bridgeless perfect line-graph with $\chi'(G)>\lceil\chi_f'(G)\rceil$. |
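The degree claim in the construction can be sanity-checked on a small instance. A sketch taking $H = K_4$ (which is cubic and bridgeless), with ad-hoc labels for the subdivision vertices:

```python
from itertools import combinations

# H = K4, a cubic bridgeless graph on vertices 0..3.
H_edges = list(combinations(range(4), 2))               # 6 edges

# S(H): subdivide each edge once; ('m', e) labels the midpoint of edge e.
S_edges = []
for u, v in H_edges:
    m = ('m', (u, v))
    S_edges += [(u, m), (m, v)]                         # 12 edges

# G = L(S(H)): vertices are the edges of S(H), adjacent iff they share an
# endpoint.  Each S(H)-edge joins a branch vertex (degree 3) to a midpoint
# (degree 2), so its line-graph degree is 3 + 2 - 2 = 3.
degree = {e: 0 for e in S_edges}
for e1, e2 in combinations(S_edges, 2):
    if set(e1) & set(e2):
        degree[e1] += 1
        degree[e2] += 1

is_cubic = all(d == 3 for d in degree.values())
```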
Table of Contents
1 Physics of the Doppler Effect
2 Applications
3 “I am the Doppler Effect”
4 References
Yes. It’s the apparent change in the frequency of a wave caused by relative motion between the source of the wave and the observer.
-Sheldon Cooper
In the “Middle-Earth Paradigm” episode, Sheldon Cooper dresses as the “Doppler Effect” for Penny’s Halloween party. The Doppler Effect (or Doppler Shift) describes the change in pitch or frequency that results as a source of sound moves relative to an observer; either the source or the observer may be the one moving. It is commonly heard when a siren approaches and recedes from an observer. As the siren approaches, the pitch sounds higher; as it moves away, the pitch drops. This effect was first proposed by Austrian physicist Christian Doppler in 1842 to explain the color of binary stars.
In 1845, Christophorus Henricus Diedericus (C. H. D.) Buys-Ballot, a Dutch chemist and meteorologist, conducted the famous experiment to prove this effect. He assembled a group of horn players on an open cart attached to a locomotive. Ballot then instructed the engineer to rush past him as fast as he could while the musicians played and held a constant note. As the train approached and receded, Ballot noted that the pitch changed as he stood and listened on the stationary platform.

Physics of the Doppler Effect
As a stationary sound source produces sound waves, its wave-fronts propagate away from the source at a constant speed, the speed of sound. This can be seen as concentric circles moving away from the center. All observers will hear the same frequency, the frequency of the source of the sound.
When either the source or the observer moves relative to the other, the frequency of the sound that the source emits does not change; rather, the observer hears a change in pitch. We can think of it in the following way. If a pitcher throws balls to someone across a field at a constant rate of one ball a second, the person will catch those balls at the same rate (one ball a second). Now if the pitcher runs towards the catcher, the catcher will catch the balls faster than one ball per second. This happens because as the pitcher moves forward, he closes the distance between himself and the catcher. When the pitcher tosses the next ball it has to travel a shorter distance and thus travels for a shorter time. The opposite is true if the pitcher moves away from the catcher.
Now suppose that instead of the pitcher moving towards the catcher, the pitcher stays stationary and the catcher runs forward. As the catcher runs forward, he closes the distance between himself and the pitcher, so the time it takes for the ball to travel from the pitcher’s hand to the catcher’s mitt is decreased. In this case too, the catcher will catch the balls at a faster rate than the pitcher tosses them.
Sub Sonic Speeds
We can apply the same idea of the pitcher and catcher to a moving source of sound and an observer. As the source moves, it emits sounds waves which spread out radially around the source. As it moves forward, the wave-fronts in front of the source bunch up and the observer hears an increase in pitch. Behind the source, the wave-fronts spread apart and the observer standing behind hears a decrease in pitch.
The Doppler Equation
When the speeds of source and the receiver relative to the medium (air) are lower than the velocity of sound in the medium, i.e. moves at sub-sonic speeds, we can define a relationship between the observed frequency, \(f\), and the frequency emitted by the source, \(f_0\).
\[f = f_{0}\left(\frac{v + v_{o}}{v + v_{s}}\right)\]

where \(v\) is the speed of sound, \(v_{o}\) is the velocity of the observer (positive if the observer is moving towards the source of sound) and \(v_{s}\) is the velocity of the source (positive if the source is moving away from the observer).

Source Moving, Observer Stationary
We can now use the above equation to determine how the pitch changes as the source of sound moves towards the observer, i.e. \(v_{o} = 0\):

\[f = f_{0}\left(\frac{v}{v - v_{s}}\right)\]

Here \(v_{s}\) denotes the speed of the source towards the observer (in the sign convention above, the source's velocity is negative, turning the denominator into \(v - v_{s}\)), so \(v - v_{s} < v\). This makes \(v/(v - v_{s})\) larger than 1, which means the pitch increases.

Source Stationary, Observer Moving
Now if the source of sound is still and the observer moves towards the sound, we get:
\[f = f_{0}\left( \frac{v + v_{o}}{v} \right)\]

\(v_{o}\) is positive as the observer moves towards the source. The numerator is larger than the denominator, which means that \((v + v_{o})/v\) is greater than 1. The pitch increases.

Speed of Sound
As the source of sound moves at the speed of sound, the wave fronts in front become bunched up at the same point. The observer in front won’t hear anything until the source arrives. When the source arrives, the pressure front will be very intense and won’t be heard as a change in pitch but as a large “thump”.
The observer behind will hear a lower pitch as the source passes by.
\[f = f_{0}\left( \frac{v + 0}{v + v} \right) = 0.5 f_{0}\]
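The general equation and its limiting cases can be wrapped in one small helper, using the sign convention stated above and an assumed 343 m/s for the speed of sound:

```python
def doppler(f0, v_obs=0.0, v_src=0.0, v=343.0):
    """Observed frequency.  Sign convention as in the text: v_obs > 0 means
    the observer moves towards the source; v_src > 0 means the source moves
    away from the observer.  Speeds in m/s; v is the speed of sound."""
    return f0 * (v + v_obs) / (v + v_src)

approaching = doppler(440, v_src=-30)   # source approaching: pitch rises
receding = doppler(440, v_src=+30)      # source receding: pitch falls
half = doppler(440, v_src=343.0)        # receding at the speed of sound: f0/2
```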
Early jet pilots flying at the speed of sound (Mach 1) reported a noticeable “wall” or “barrier” had to be penetrated before achieving supersonic speeds. This “wall” is due to the intense pressure front, and flying within this pressure front produced a very turbulent and bouncy ride. Chuck Yeager was the first person to break the sound barrier when he flew faster than the speed of sound in the Bell X-1 rocket-powered aircraft on October 14, 1947.
As the science of super-sonic flight became better understood, engineers made a number of changes to aircraft design that led to the disappearance of the “sound barrier”. Aircraft wings were swept back and engine performance increased. By the 1950s combat aircraft could routinely break the sound barrier.
Super-Sonic
As the sound source breaks and moves past the “sound barrier”, the source now moves faster than the sound waves it creates and leads the advancing wavefront. The source will pass the observer before the observer hears the sound it creates. As the source moves forward, it creates a Mach cone. The intense pressure front on the Mach cone creates a shock wave known as a “sonic boom”.
Twice the Speed of Sound
Something interesting happens when the source moves towards the observer at twice the speed of sound: the tone becomes time reversed. If music were being played, the observer would hear the piece with the correct tone but played backwards. This was first predicted by Lord Rayleigh in 1896.
We can see this by using the Doppler Equation.
\[f = f_{0}\left(\frac{v}{v-2v}\right)\]

This reduces to

\[f=-f_{0}\]

which is negative because the sound is time reversed, i.e. heard backwards.

Applications

Radar Gun
The Doppler Effect is used in radar guns to measure the speed of motorists. A radar beam is fired at a moving target as it approaches or recedes from the radar source. The moving target then reflects the Doppler-shifted radar wave back to the detector and the frequency shift measured and the motorist’s speed calculated.
We can combine both cases of the Doppler equation to give us the relationship between the reflected frequency (\(f_{r}\)) and the source frequency (\(f\)):
\[f_{r} = f \left(\frac{c+v}{c-v}\right)\] where \(c\) is the speed of light and \(v\) is the speed of the moving vehicle. The difference between the reflected frequency and the source frequency is too small to be measured accurately so the radar gun uses a special trick that is familiar to musicians – interference beats.
To tune a piano, the pitch can be adjusted by changing the tension on the strings. By using a tuning instrument (such as a tuning fork) which can produce a sustained tone over time, a beat frequency can be heard when it is placed next to the vibrating piano wire. The beat frequency is an interference between two sounds with slightly different frequencies and can be heard as a periodic change in volume over time. This frequency tells us how far off the piano strings are compared to the reference (tuning fork).
To detect this change, a radar gun does something similar. The returning wave is “mixed” with the transmitted signal to create a beat note. This beat signal or “heterodyne” is then measured and the speed of the vehicle calculated. The change in frequency, the difference \(\Delta f\) between \(f_{r}\) and \(f\), is
\[f_{r} - f = f\frac{2v}{c-v}\]

Since the speed of the vehicle, \(v\), is tiny compared with the speed of light, \(c\), we can approximate this as

\[\Delta f \approx f\frac{2v}{c}\]

By measuring this frequency shift or beat frequency, the radar gun can calculate and display a vehicle’s speed.

“I am the Doppler Effect”

The Doppler Effect is an important principle in physics and is used in astronomy to measure the speeds at which galaxies and stars are approaching or receding from us. It is also used in plasma physics to estimate the temperature of plasmas. Plasma is one of the four fundamental states of matter (the others being solid, liquid, and gas) and is made up of very hot, ionized gases. Its composition can be determined by the spectral lines it emits. As each particle jostles about, the light emitted by each particle is Doppler shifted and is seen as a broadened spectral line. This line shape is called a Doppler profile, and the width of the line is proportional to the square root of the temperature of the plasma gas. By measuring the width, scientists can infer the gas’ temperature.
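Putting the exact shift and its small-speed approximation together (a sketch; the 24.15 GHz carrier is an assumed, typical K-band value, not from the text):

```python
C = 299_792_458.0                     # speed of light in m/s

def beat_frequency(f, v):
    """Exact frequency shift f_r - f for a target closing at speed v."""
    return f * 2 * v / (C - v)

def speed_from_beat(f, delta_f):
    """Invert the small-v approximation: delta_f ~ 2 f v / c."""
    return delta_f * C / (2 * f)

f = 24.15e9                           # assumed K-band carrier, 24.15 GHz
v_true = 30.0                         # about 108 km/h
df = beat_frequency(f, v_true)        # a few kHz: easy to measure as a beat
v_est = speed_from_beat(f, df)        # recovers the speed almost exactly
```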
We can now understand Sheldon’s fascination with the Doppler Effect as he aptly explains and demonstrates its effects. As an emergency vehicle approaches an observer, its siren will start out with a higher pitch, which slides down as it passes and moves away from the observer. This can be heard as the (confusing) sound he demonstrates to Penny’s puzzled guests.
References |
Mathematica:
LaTeX:
$\alpha =\underset{\square }{\overset{\square }{\lim \sup \sqrt[n]{\left|a_n\right|}}}$
Since the question contains no information about the context, I assume you only want to typeset the expression in a TraditionalForm cell.

Then just highlight the placeholder under the formula and press Ctrl-$\uparrow$ two or three times, according to your taste. Each key press will nudge the placeholder (or superscript) up toward the formula.
You may want to look up AdjustmentBox for more information.
Polynomial
When the polynomial $f(x)$ is divided by $x + 2$ and by $x^2 + 1$, it leaves the remainders $7$ and $x + 4$ respectively. What is the remainder when $f(x)$ is divided by $(x + 2)(x^2 + 1)$?
Suppose that the polynomial \(f\left(x\right)=2x^5-9x^3+2x^2+9x-3\) has 5 roots \(x_1;x_2;x_3;x_4;x_5\). The other polynomial is \(k\left(x\right)=x^2-4\). Find the value of \(P=k\left(x_1\right)\times k\left(x_2\right)\times k\left(x_3\right)\times k\left(x_4\right)\times k\left(x_5\right)\).
Let P(x) be the polynomial given by \(P\left(x\right)=\left(2+x+2x^3\right)^{15}\). Suppose that \(P\left(x\right)=a_0+a_1x+a_2x^2+...+a_{45}x^{45}\). Find the value of \(S=a_1-a_2+a_3-a_4+...-a_{44}+a_{45}\).
Uchiha Sasuke 26/05/2018 at 01:19
Thank you very much
Dao Trong Luan Coordinator 25/05/2018 at 12:30
\(P(x) = a_0 + a_1x + a_2x^2 + ... + a_{45}x^{45}\)

\(\Rightarrow P(1) = a_0 + a_1 + a_2 + ... + a_{45} = (2 + 1 + 2)^{15} = 5^{15}\)

\(\Rightarrow P(-1) = a_0 - a_1 + a_2 - ... - a_{45} = (2 - 1 - 2)^{15} = (-1)^{15}\)

\(\Rightarrow P(1) - P(-1) = 2a_1 + 2a_3 + ... + 2a_{45} = 5^{15} + 1\)

\(P(1) + P(-1) = 2a_0 + 2a_2 + ... + 2a_{44} = 5^{15} - 1\)

\(\Rightarrow (a_1 + a_3 + ... + a_{45}) - (a_0 + a_2 + ... + a_{44}) = \dfrac{5^{15}+1-5^{15}+1}{2}=1\)

\(\Rightarrow S = -a_0 + a_1 - a_2 + ... - a_{44} + a_{45} = 1\)
P/s: this is \(-a_0 + a_1 - a_2 + \cdots + a_{45}\), not exactly the \(S\) asked in the topic.
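The alternating sums can also be checked by brute force with a small convolution routine: expanding $(2+x+2x^3)^{15}$ directly confirms the identities above, and it also gives the requested $S = a_1 - a_2 + \cdots + a_{45}$ itself, which equals $a_0 - P(-1) = 2^{15} + 1$ since $P(-1) = a_0 - a_1 + a_2 - \cdots - a_{45}$. A sketch:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Coefficients of (2 + x + 2x^3)^15, built by repeated multiplication.
P = [1]
for _ in range(15):
    P = poly_mul(P, [2, 1, 0, 2])

a0 = P[0]                                           # P(0) = 2^15 = 32768
S = sum((-1) ** (k + 1) * P[k] for k in range(1, 46))
```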
Let P(x) be a polynomial of degree 2015.
Suppose P(n) = \(\dfrac{n}{n+1}\) for all n = 0, 1, 2, ..., 2015.
The value of P(2016) is ...
John 11/03/2017 at 09:07
Let \(Q\left(x\right)=\left(x+1\right)P\left(x\right)-x\), (*)
since \(P\left(n\right)=\dfrac{n}{n+1}\) we infer:
\(Q\left(n\right)=\left(n+1\right)P\left(n\right)-n=0\) for \(n=0,1,..,2015\).
Because P is a polynomial of degree 2015, Q is a polynomial of degree 2016. Q has 2016 roots (0, 1, ..., 2015), so Q can be expressed in the form:
\(Q\left(x\right)=a\left(x-0\right)\left(x-1\right)...\left(x-2015\right)\) (**)
On the other hand, setting x = -1 in (*) gives \(Q\left(-1\right)=1\). Substituting into (**) we have:
\(1=Q\left(-1\right)=a\left(-1\right)\left(-2\right)...\left(-2016\right)\)
\(\Rightarrow a=\dfrac{1}{2016!}\)
Finally \(Q\left(x\right)=\dfrac{1}{2016!}\left(x-0\right)\left(x-1\right)...\left(x-2015\right)\)
\(\Rightarrow Q\left(2016\right)=\dfrac{1}{2016!}\cdot 2016\cdot 2015\cdots 1=1\)
from (*) \(Q\left(2016\right)=\left(2016+1\right)P\left(2016\right)-2016\)
\(\Rightarrow\left(2016+1\right)P\left(2016\right)-2016=1\)
\(\Rightarrow P\left(2016\right)=\dfrac{2017}{2017}=1\)

tranthuydung selected this answer.
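John's argument can be checked numerically on a small analogue: take a degree-3 polynomial with $P(n) = n/(n+1)$ for $n = 0,\dots,3$; the identical reasoning (with $2016!$ replaced by $4!$) gives $P(4) = 1$. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

pts = [(n, Fraction(n, n + 1)) for n in range(4)]   # P(n) = n/(n+1), n = 0..3
value = lagrange_eval(pts, 4)                       # comes out to exactly 1
```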
Group cohomology of dihedral group:D8
Contents

This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8.

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action):
First homology group: first homology group for trivial group action equals tensor product with abelianization
Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization | Hopf's formula for Schur multiplier
General: universal coefficients theorem for group homology | homology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group homology

Over the integers
The homology groups with coefficients in the ring of integers are as follows:
$$H_p(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & \qquad p = 0 \\ \mathbb{Z}/2\mathbb{Z}, & \qquad p \equiv 1 \pmod 4 \\ \mathbb{Z}/8\mathbb{Z}, & \qquad p \equiv 3 \pmod 4 \\ 0, & \qquad p \ne 0,\ p \ \operatorname{even}\end{array}\right.$$
As a sequence (starting at \(p = 0\)), the first few homology groups are:

\(p\): 0, 1, 2, 3, 4, 5, 6, 7, 8
\(H_p\): \(\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\), \(0\), \(\mathbb{Z}/8\mathbb{Z}\), \(0\), \(\mathbb{Z}/2\mathbb{Z}\), \(0\), \(\mathbb{Z}/8\mathbb{Z}\), \(0\)

Over an abelian group

Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action):
First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms
Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization
In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers | cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology

Over the integers
The cohomology groups with coefficients in the ring of integers are as follows:
Cohomology ring with coefficients in integers

PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Second cohomology groups and extensions

Second cohomology groups for trivial group action
Group acted upon | Order | Second part of GAP ID | Second cohomology group for trivial group action | Extensions | Cohomology information
cyclic group:Z2 | 2 | 1 | elementary abelian group:E8 | direct product of D8 and Z2; SmallGroup(16,3); nontrivial semidirect product of Z4 and Z4; dihedral group:D16; semidihedral group:SD16; generalized quaternion group:Q16 | second cohomology group for trivial group action of D8 on Z2
cyclic group:Z4 | 4 | 1 | ? | ? | second cohomology group for trivial group action of D8 on Z4
Explicit estimates on positive supersolutions of nonlinear elliptic equations and applications
1. School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
2. Department of Mathematics, University of Manitoba, Winnipeg, Manitoba, R3T 2N2, Canada
The paper obtains explicit estimates on positive supersolutions $ u $ (with $ \nabla u\not\equiv0 $) of

$ - \Delta u = \rho(x) f(u)|\nabla u|^p \quad {\rm in}\ \Omega, $

where $ 0\le p<1 $, $ \Omega $ is a domain in $ {\mathbb{R}}^N $ with $ N\ge 2 $, $ f: [0, a_{f}) \rightarrow {\mathbb{R}}_{+} $ $ (0 < a_{f} \leq +\infty) $, and $ \rho: \Omega \rightarrow \mathbb{R} $. As an application, for domains satisfying $ \sup_{x\in\Omega}{\rm dist}(x, \partial\Omega) = \infty $, with $ \rho(x) = |x|^\beta $ for $ \beta\in {\mathbb{R}} $ and $ f(u) = u^q $ with $ q+p>1 $, Liouville-type nonexistence results are obtained in the range $ (N-2)q+p(N-1)< N+\beta. $

Keywords: Nonlinear elliptic problems, Liouville type theorems, dead core, supersolutions, gradient term.
Mathematics Subject Classification: Primary: 35J60; Secondary: 35B53.
Citation: Asadollah Aghajani, Craig Cowan. Explicit estimates on positive supersolutions of nonlinear elliptic equations and applications. Discrete & Continuous Dynamical Systems - A, 2019, 39 (5): 2731-2742. doi: 10.3934/dcds.2019114
I was trying to clarify some questions I had about elliptic integrals using a reference text.
There they define the map $$\phi\colon w\mapsto \int_0^w\frac{\mathrm{d}z}{\sqrt{1-z^2}}$$ on $\mathbb{C}\setminus[-1,1]$ to get $\phi$ well-defined up to periods of the integral. The choice of the interval $[-1,1]$ is made so that $\sqrt{1-z^2}$ admits a single-valued branch.
Now, I know that the principal branch of the square root $\sqrt{z}$ is discontinuous on the half-line $(-\infty,0)$, so to get a holomorphic map we restrict to $\mathbb{C}\setminus (-\infty,0]$. Substituting $1-z^2$ for $z$ we get that the appropriate branch cuts for the above mapping $\sqrt{1-z^2}$ would be $(-\infty,-1]$ and $[1,\infty)$, which is somewhat the opposite of the suggested interval $[-1,1]$.
From that I conclude that they didn't choose the principal branch, otherwise for e.g. $z=2$ the map would be discontinuous.
My question is: Are both choices possible? Then there must be some way to choose another branch of $\sqrt{1-z^2}$. Is there a good way to see how to choose "elegant" branch cuts and the corresponding holomorphic branches?
A thought of my own: It should be possible to instead integrate on the Riemann sphere, using $\infty$ and not $0$ as a starting point. Then the two intervals would "swap roles". But I don't see how to formalize this. |
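Both choices are indeed realizable, and the difference can be seen numerically. Writing $1 - z^2 = -z^2(1 - z^{-2})$ and taking principal roots throughout gives a branch that is holomorphic off $[-1,1]$; comparing it with the principal branch near $z = 2$ shows that one jumps across the real axis while the other does not. A sketch (the second branch is my own convenient choice, not taken from the text):

```python
import cmath

def principal(z):
    """Principal branch of sqrt(1 - z^2): cuts along (-inf,-1] and [1,inf)."""
    return cmath.sqrt(1 - z * z)

def outside_cut(z):
    """A branch of sqrt(1 - z^2) holomorphic on C minus [-1, 1]:
    sqrt(1 - z^2) = i z sqrt(1 - 1/z^2), with principal roots throughout."""
    return 1j * z * cmath.sqrt(1 - 1 / (z * z))

eps = 1e-9
# Approach z = 2 from above and below the real axis:
jump = abs(principal(2 + 1j * eps) - principal(2 - 1j * eps))      # ~ 2*sqrt(3)
cont = abs(outside_cut(2 + 1j * eps) - outside_cut(2 - 1j * eps))  # ~ 0
```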
Suppose I have the following plot:
Plot[(1 - p2)^2/(p2^2 + (1 - p2)^2), {p2, 0, 1}, AxesLabel -> {"p2", "p1"}]
Now I want to do a point-wise transform to every point $(p2,p1)$ on the line (making a new plot) using:
$ p1 \rightarrow \dfrac{p1-\gamma}{1-\gamma}\quad p2\rightarrow\dfrac{p2-\gamma}{1-\gamma} $
In other words, I want to move every point on the line $(p2,p1)$ to a new location $\left(\dfrac{p2-\gamma}{1-\gamma},\dfrac{p1-\gamma}{1-\gamma}\right)$.
How should I achieve that? |
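One route in Mathematica is to plot the transformed coordinates directly, e.g. ParametricPlot[{(p2 - g)/(1 - g), (f[p2] - g)/(1 - g)}, {p2, 0, 1}] for some fixed g (a suggestion, not from the question). The point-wise map itself, sketched in Python with an arbitrary value of the parameter:

```python
def p1_of(p2):
    """The curve being plotted: p1 as a function of p2."""
    return (1 - p2) ** 2 / (p2 ** 2 + (1 - p2) ** 2)

def transform(point, gamma):
    """Point-wise map (p2, p1) -> ((p2 - g)/(1 - g), (p1 - g)/(1 - g))."""
    p2, p1 = point
    return ((p2 - gamma) / (1 - gamma), (p1 - gamma) / (1 - gamma))

gamma = 0.25                        # arbitrary illustration value
curve = [transform((t / 100, p1_of(t / 100)), gamma) for t in range(101)]
```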
I tried to answer my own question Comparing two Bayesian models under disjoint prior supports using MCMC. Here is my attempt. I am not confident in what I wrote, so I prefer to post it as a question: is this a correct way to compute a Bayes factor?
I would like to compute the Bayes factor: $$ K = \frac{P(x|H_1)}{P(x|H_2)} =\frac{P(H_1|x)}{P(H_2|x)} \frac{P(H_2)}{P(H_1)} $$
I have a model consisting of three parameters $\theta_1$, $\theta_2$ and $\theta_3$. $H_1$ is $\{(\theta_1,\theta_2,\theta_3) \in [m,M]^3 \mbox{ such that } \theta_1<\theta_2<\theta_3\}$ and $H_2$ is its complement. Suppose that I have a model to infer $P(\theta_1,\theta_2,\theta_3 | x)$ under the complete parameter space $H_1 \cup H_2$; then $P(H_1|x)$ can simply be computed as: $$ P(H_1|x)=\int_m^M P(\theta_1,\theta_2,\theta_3|x)\, 1_{\theta_1<\theta_2<\theta_3}(\theta_1,\theta_2,\theta_3)\, d\theta_1 d\theta_2 d\theta_3 $$ by simply counting the posterior MCMC samples satisfying the condition, with $P(H_2|x)=1-P(H_1|x)$.
So it remains to compute the associated $\frac{P(H_2)}{P(H_1)}$ by taking the prior mass of $H_1$ within the overall parameter space: $$ P(H_1)=\int p(\theta_1,\theta_2,\theta_3) \cdot 1_{\theta_1<\theta_2<\theta_3}(\theta_1,\theta_2,\theta_3)\, d\theta_1 d\theta_2 d\theta_3, $$ which can be computed analytically for the prior $p(\theta_1,\theta_2,\theta_3)$ associated with my posterior, and again $P(H_2)=1-P(H_1)$.
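As a sanity check, the sample-counting step can be sketched in Python. Everything here is illustrative: the `samples` array is a synthetic stand-in for real posterior MCMC draws, and the analytic prior value $P(H_1)=1/6$ assumes an exchangeable (e.g. i.i.d. uniform) prior on $[m,M]^3$, under which all $3!$ orderings are equally likely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for posterior MCMC draws over (theta1, theta2, theta3);
# in practice these come from your sampler on the full space H1 u H2.
samples = rng.normal(loc=[1.0, 2.0, 3.0], scale=1.0, size=(100_000, 3))

# P(H1 | x): fraction of posterior samples with theta1 < theta2 < theta3.
in_h1 = (samples[:, 0] < samples[:, 1]) & (samples[:, 1] < samples[:, 2])
p_h1_post = in_h1.mean()
p_h2_post = 1.0 - p_h1_post

# P(H1) under an i.i.d. (exchangeable) prior: all 3! orderings of the
# coordinates are equally likely, so P(H1) = 1/6 analytically.
p_h1_prior = 1.0 / 6.0
p_h2_prior = 1.0 - p_h1_prior

# K = [P(H1|x) / P(H2|x)] * [P(H2) / P(H1)]
K = (p_h1_post / p_h2_post) * (p_h2_prior / p_h1_prior)
```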
Is this a correct way to compute the Bayes factor? If yes, does this method have a name? |
The Fibonacci problem is a well known mathematical problem that models population growth and was conceived in the 1200s. Leonardo of Pisa aka Fibonacci decided to use a recursive equation: $x_{n} = x_{n-1} + x_{n-2}$ with the seed values $x_0 = 0$ and $x_1 = 1$. Implementing this recursive function is straightforward:
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n-1) + fib(n-2)
Since the Fibonacci sequence was conceived to model population growth, it would seem that there should be a simple equation that grows almost exponentially. Plus, this recursive calling is expensive both in time and memory.
The cost of this function doesn’t seem worthwhile. To see the surprising formula that we end up with, we need to define our Fibonacci problem in a matrix language.
Calling each of those matrices and vectors variables and recognizing the fact that $\bm{x}_{n-1}$ follows the same formula as $\bm{x}_n$ allows us to write $\bm{x}_n = \bm{A}\bm{x}_{n-1} = \bm{A}^n \bm{x}_0,$
where we have used $\bm{A}^n$ to mean $n$ matrix multiplications. The corresponding implementation looks something like this:
import numpy as np

def fib(n):
    A = np.asmatrix('1 1; 1 0')
    x_0 = np.asmatrix('1; 0')
    # A^n x_0 holds [F_{n+1}; F_n]
    x_n = np.linalg.matrix_power(A, n).dot(x_0)
    return x_n[1]
While this isn’t recursive, there are still $n-1$ unnecessary matrix multiplications. These are expensive time-wise, and it seems like there should be a simple formula involving $n$. As populations grow exponentially, we would expect this formula to involve scalars raised to the $n$th power. A simple equation like this could be implemented many times faster than the recursive implementation!
The trick to do this rests on the mysterious and intimidating eigenvalues and eigenvectors. These are just a nice way to view the same data but they have a lot of mystery behind them. Most simply, for a matrix $\bm{A}$ they obey the equation $\bm{A}\bm{x} = \lambda\bm{x}$
for different eigenvalues $\lambda$ and eigenvectors $\bm{x}$. Through the way matrix multiplication is defined, we can represent all of these cases at once. This rests on the fact that right-multiplying by the diagonal matrix $\bm{\Lambda}$ just scales each column $\bm{x}_i$ by $\lambda_i$. The column-wise definition of matrix multiplication makes it clear that this represents every case where the equation above occurs.
Or compacting the vectors $\bm{x}_i$ into a matrix called $\bm{X}$ and the diagonal matrix of $\lambda_i$’s into $\bm{\Lambda}$, we find that $\bm{A}\bm{X} = \bm{X}\bm{\Lambda}.$
Because the Fibonacci matrix is diagonalizable, we can write $\bm{A} = \bm{X}\bm{\Lambda}\bm{X}^{-1}.$
And then, because a matrix and its inverse cancel, $\bm{A}^n = (\bm{X}\bm{\Lambda}\bm{X}^{-1})^n = \bm{X}\bm{\Lambda}^n\bm{X}^{-1}.$
$\bm{\Lambda}^n$ is a simple computation because $\bm{\Lambda}$ is a diagonal matrix: every element is just raised to the $n$th power. That means the expensive matrix multiplication only happens twice now. This is a powerful speed boost and we can calculate the result by substituting for $\bm{A}^n$: $\bm{x}_n = \bm{X}\bm{\Lambda}^n\bm{X}^{-1}\bm{x}_0.$
For this Fibonacci matrix, we find that $\bm{\Lambda} = \textrm{diag}\left(\frac{1+\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2}\right)= \textrm{diag}\left(\lambda_1, \lambda_2\right)$. We could define our Fibonacci function to carry out this matrix multiplication, but these matrices are simple: $\bm{\Lambda}$ is diagonal and $\bm{x}_0 = \left[1; 0\right]$. So, carrying out this fairly simple computation gives $$F_n = \frac{\lambda_1^n - \lambda_2^n}{\sqrt{5}}.$$
We would not expect this equation to give an integer. It involves powers of two irrational numbers, a division by another irrational number, and even the golden ratio $\phi \approx 1.618$! However, it gives exactly the Fibonacci numbers – you can check yourself!
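A quick NumPy check (not part of the original post) confirms both the claimed eigenvalues of the Fibonacci matrix and that the closed form reproduces the recurrence exactly:

```python
import numpy as np

A = np.array([[1, 1], [1, 0]])
eigvals, eigvecs = np.linalg.eig(A)

# The eigenvalues are (1 + sqrt(5))/2 (the golden ratio) and (1 - sqrt(5))/2.
phi = (1 + np.sqrt(5)) / 2
psi = (1 - np.sqrt(5)) / 2
assert np.allclose(sorted(eigvals), sorted([phi, psi]))

# Binet's formula, rounded, matches the recurrence exactly.
def fib_closed(n):
    return int(round((phi**n - psi**n) / np.sqrt(5)))

fibs = [0, 1]
for _ in range(18):
    fibs.append(fibs[-1] + fibs[-2])
assert [fib_closed(n) for n in range(20)] == fibs
```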
This means we can define our function rather simply:
from math import sqrt

def fib(n):
    lambda1 = (1 + sqrt(5)) / 2
    lambda2 = (1 - sqrt(5)) / 2
    return (lambda1**n - lambda2**n) / sqrt(5)

def fib_approx(n):
    # for practical range, percent error < 10^-6
    return 1.618034**n / sqrt(5)
As one would expect, this implementation is fast. We see speedups of roughly $1000$ for $n=25$: milliseconds vs microseconds. This is fairly typical when mathematics is applied to a seemingly straightforward problem. There are often large benefits to making the implementation slightly more cryptic!
I’ve found that mathematics becomes fascinating, especially in higher-level college courses, and can often yield surprising results. I mean, look at this blog post. We went from an expensive recursive equation to a simple and fast equation that only involves scalars. This derivation is one I enjoy, and I especially enjoy the simplicity of the final result. This is part of the reason why I’m going to grad school for highly mathematical signal processing. Real-world benefits $+$ neat theory $=$ <3.
|
Answer
$$\cos^2\frac{\pi}{6}-\sin^2\frac{\pi}{6}=\frac{1}{2}$$ 4 would be matched with A.
Work Step by Step
$$X=\cos^2\frac{\pi}{6}-\sin^2\frac{\pi}{6}$$ From the double-angle identities: $$\cos^2 A-\sin^2 A=\cos2A$$ Thus $X$ can be seen here as $\cos^2 A-\sin^2 A$ with $A=\frac{\pi}{6}$. Therefore, $$X=\cos\Big(2\times\frac{\pi}{6}\Big)$$ $$X=\cos\frac{\pi}{3}$$ $$X=\frac{1}{2}$$ So, $$\cos^2\frac{\pi}{6}-\sin^2\frac{\pi}{6}=\frac{1}{2}$$ 4 would be matched with A. |
Answer
320 Watts
Work Step by Step
We know that power equals force times velocity. Thus, we find: $ P = Fv = (mg\sin\theta + F_{\text{air}})v $, so $ P = (75 \times 9.81 \sin 5^{\circ} + 8.2)(4.4) \approx 320\ \mathrm{W} $.
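A quick numeric check of this arithmetic (a sketch, not part of the original solution):

```python
from math import sin, radians

# mass (kg), gravity (m/s^2), slope angle, air resistance (N), speed (m/s)
m, g, theta, F_air, v = 75.0, 9.81, radians(5), 8.2, 4.4
P = (m * g * sin(theta) + F_air) * v
print(round(P))  # 318 W, i.e. about 320 W to two significant figures
```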
|
Along the lines of Glen O's answer, this answer attempts to explain the solvability of the problem, rather than provide the answer, which has already been given. Instead of using the meta-knowledge approach, which, as Glen stated, can get hard to follow, I use the range-base approach used in Rubio's answer, and specifically address some of the objections being raised.
The argument has been put forward that when Mark fails to answer on the first morning, he gives Rose no new information. This is actually true (sort of— see the last spoiler section of this answer). Rose could have predicted beforehand with certainty that Mark would fail to answer on the first day, so his failure to answer doesn't tell her anything she didn't know. However, that doesn't make the problem unsolvable. To see why, you must understand the following logical axiom: Additional information never invalidates a valid deduction. In other words, if I know that all of the statements $P_1,\dots P_n$ and $Q$ are true, and that $R$ is definitely true if $P_1, \dots P_n$ are true, I can conclude that $R$ is true. My additional knowledge that $Q$ is true, though unnecessary to deduce $R$, doesn't hamper my ability to deduce $R$ from $P_1,\dots P_n$. I will call this rule
LUI for "Law of Unnecessary Information." (It may have some other name, but I don't know it, so I'm giving it a new one.)
The line of reasoning goes as follows:
Let $R,\;M$ be the number of bars on Rose's and Mark's windows, respectively. Before the first question is asked, both Mark and Rose know the following:
$P_1$: Mark knows the value of $M$
$P_2$: Rose knows the value of $R$
$P_3$: $M+R=20 \;\vee \;M+R=18\;$ ($\vee$ means "or", in case you're unfamiliar with the notation)
$P_4$: $M\ge 2\;\wedge\;R \ge2\;$ ($\wedge$ means "and")
$P_5$: Both of them know every statement on this list, and every statement that can be deduced from statements they both know.
To help keep track of $P_5$, I will call a statement $P$ (with some subscript) only if it is known to both prisoners (or neither); thus, $P_5$ becomes "the other prisoner knows every $P$ that I know."
Additionally, Mark knows that $M=12$ and Rose knows that $R=8$. Call this knowledge $Q_M$ and $Q_R$, respectively.
Finally, as soon as one of them is asked the question for the $k^\text{th}$ time, they both know (and know that the other knows, etc.) $P_{\leftarrow k}$:
$P_{\leftarrow k}$: The other prisoner could not deduce the value of $M+R$ given the information they already had.
After Mark doesn't answer on the morning of day one, both prisoners can deduce from $P_1, P_3, P_4, P_5,$ and $P_{\leftarrow 2}$ that $M\le 16$ (call this $P_6$). It is true that both prisoners have more information than this about the value of $M$, but LUI tells us that that doesn't invalidate the deduction. It basically just means that Rose won't be surprised when she gets asked the question. She already knows she will be.
By the following morning, both prisoners can deduce from $P_1\dots P_6$ and $P_{\leftarrow 3}$ that $4\le R \le 16$ ($P_7$), and that evening, they can deduce from $P_1,\dots P_7$ and $P_{\leftarrow 4}$ that $4 \le M \le 14$ ($P_8$). Again, both prisoners know all of this already. (But the conclusions are still valid by LUI.)
On the next day, in a similar manner, they can deduce in the morning that $6 \le R \le 14$ ($P_9$), and in the evening that $6 \le M \le 12$ ($P_{10}$). Here's where things get interesting. Mark can deduce from $P_3$ and $Q_M$ that $R$ is either $6$ or $8$, but $R=6\wedge P_{10} \wedge P_3\implies M+R=18$ and $R=6\wedge P_{10} \wedge P_3\wedge\left[R=6\wedge P_{10} \wedge P_3\implies M+R=18\right]\implies \neg P_{\leftarrow 7}$. When he gets asked the question again on the following morning, he learns that $P_{\leftarrow 7}$ is true, and can thus deduce that $R \neq 6$ and therefore $R=8$ and $M+R=20$. This is actually the first time in the sequence that a $P_{\leftarrow k}$ provides any more information about the value of $M+R$ than the prisoner already has, but the sequence of irrelevant questions is necessary to establish the deep metaknowledge Glen talks about. In this formulation, all this metaknowledge is encapsulated in $P_5$. When a prisoner is asked a question, $P_5$ says that they can deduce not only $P_{\leftarrow k}$ but also that both of them know $P_{\leftarrow k}$ and, by repeatedly applying $P_5$, that both of them know that both of them know $P_{\leftarrow k}$ and so on. For any $P_{\leftarrow k}$, there is some level of "we both know that we both know" that can't be deduced from $P_1\dots P_5$ and $Q_M$ or $Q_R$ alone. This is the "new information" being "learned" at each stage. Really nothing new is learned until Rose fails to answer on the $3^\text{rd}$ evening, but the sequence of non-answers $P_{\leftarrow k}$ is necessary to provide the deductive path to $P_{\leftarrow 7}$.
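The interval-narrowing deductions above can be mechanized. The following sketch is my own encoding of the rules as the answer's working assumes them (even bar counts of at least 2 each, total 18 or 20, Mark asked first, alternating turns): it tracks the commonly known candidate pairs and, after each non-answer, eliminates every pair that would have allowed an answer.

```python
# Common-knowledge simulation of the puzzle: Mark has 12 bars, Rose has 8.
M_TRUE, R_TRUE = 12, 8

# All (M, R) pairs consistent with the public rules.
candidates = {(m, r) for m in range(2, 19, 2) for r in range(2, 19, 2)
              if m + r in (18, 20)}

def answerable(pairs, idx, value):
    # A prisoner who knows coordinate idx equals `value` can answer
    # iff all remaining consistent pairs share a single total.
    totals = {m + r for (m, r) in pairs if (m, r)[idx] == value}
    return len(totals) == 1

question = 0
while True:
    question += 1
    idx, value = (0, M_TRUE) if question % 2 == 1 else (1, R_TRUE)
    if answerable(candidates, idx, value):
        total = {m + r for (m, r) in candidates if (m, r)[idx] == value}.pop()
        break
    # A non-answer publicly eliminates every pair whose idx-coordinate
    # would have allowed an answer.
    candidates = {p for p in candidates
                  if not answerable(candidates, idx, p[idx])}

print(question, total)  # prints: 7 20 -- Mark answers on the fourth morning
```

The simulation reproduces the sequence of bounds $P_6$ through $P_{10}$ and terminates exactly where the prose argument does.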
In fact, viewing it another way, the fact that not answering provides "no new information" (and in fact doesn't provide any new
direct information about the number of bars) is exactly why the puzzle is solvable, because
It says that the previous answer provided no new information. Because they both know that the number of bars is either $18$ or $20$ (only two possibilities), any new information about the number of bars (eliminating a possibility) will allow them to give the answer; thus, not answering sends the message "I have not yet received any new information," which, eventually,
is new information for the other prisoner.
The "conversation" the prisoners have amounts to this:
Mark: I don't know how many bars there are.
Rose: I already knew that (that you wouldn't know).
Mark: I already knew that (that you'd know I wouldn't know).
Rose: I already knew THAT (etc.)
Mark: I already knew THAT.
Rose: I already knew $\mathbf {THAT}$.
Mark (To the Evil Logician): There are $20$ bars.
But how, you may ask, can a series of messages that provide their recipient with no new information lead to one that does? Simple!
The non-answers provide no new information to the recipient, but they do provide information to the sender. If I tell you that I'm secretly a ninja, you might already know that, but even if you do, knowledge is gained, because by telling you, I give
myself the knowledge that you know I'm a ninja, and that you know I know you know I'm a ninja, etc. Thus, each message sent, even if the recipient already knows it, provides the sender with information. After several such questions, this is enough information that a message recipient can draw conclusions based on the sender's inability to draw any conclusions from the information they know the sender has.
Ok, fine, you might say, but what, exactly, is learned when Mark fails to answer on the first morning, and how can you prove this was not already known? Great question, thanks for asking. You see...
At this point, we have to resort to metaknowledge (I know she knows I know...) even though it can get confusing. However, I'll break it down in such a way as to hopefully satisfy anyone who still doubts that there is (meta)knowledge available after Mark fails to answer the first question that was not available before he did so. Specifically,
After failing to answer the first question, Mark gains the information that Rose knows that Mark knows that Rose knows that Mark knows that Rose knows that Mark's window has less than $18$ bars. Now, that's a mouthful, so let's break it down into parts:
$R_0$:Mark's window doesn't have $18$ bars
$M_1$:Rose knows $R_0$
$R_2$:Mark knows $M_1$
$M_3$:Rose knows $R_2$
$R_4$:Mark knows $M_3$
$M_5$:Rose knows $R_4$
My claim is that A) Before he fails to answer on the first morning, Mark does not know $M_5$, and B) Afterwards, he does. Let's examine A) first:
To show that Mark doesn't know $M_5$ beforehand, we work backwards from $R_0$. In order for Rose to know that Mark's window doesn't have $18$ bars, her window would have to have more than $2$ bars. Since the rules (and numbers of bars) imply that they both have an even number of bars, in order for Mark to know $M_1$, he would have to know that Rose's window has at least $4$ bars. The only way for him to know that is if his window has less than $16$ bars. Thus, for Rose to know $R_2$, she must know that Mark has no more than $14$ bars, which requires that she have at least $6$ bars. For Mark to know $M_3$, then, he must have no more than $12$ bars, so for Rose to know $R_4$ she must have at least $8$ bars, and for Mark to know $M_5$ he must have no more than $10$ bars. But he does have more than $10$ bars, so he doesn't know $M_5$ beforehand.
To see why Mark must know $M_5$ after he fails to answer the question, we must realize that they both know the rules of the game, and one of the rules of the game is that they both know the rules of the game. This creates an infinite loop of meta-knowledge, meaning that they both know that they both know that they both know... the rules, no matter how many times you repeat "they both know". This infinite-depth meta-knowledge extends to anything that can be deduced from the rules. If Mark's window had $18$ bars, he could deduce from the rules that Rose must have $2$, and the tower must have $20$ in total. Because he doesn't answer, Rose will be asked, and when she is, she will know that he couldn't deduce the answer, and therefore has less than $18$ bars. Because this is all deduced directly from the rules, rather than the private knowledge that either prisoner has, it inherits the infinite meta-knowledge of the rules, and Mark knows $M_5$.
So, Mark learns $M_5$. Does Rose learn anything? It's tempting to think that she doesn't, because she can predict in advance that Mark won't answer and therefore, one might think, she can draw in advance any conclusions that could be drawn from his not answering. However, as was shown above, by not answering, Mark learns $M_5$. Not answering changes the state of Mark's knowledge. This means that Rose's ability to predict Mark's behavior doesn't prevent her from gaining new information. She can predict in advance both what he will do (not answer) and what he will learn when he does it ($M_5$), but since he doesn't learn $M_5$ until he actually declines to answer, his failure to answer provides her with the information that he knows $M_5$. Since he didn't know $M_5$ beforehand, the knowledge that he does is by definition new information for Rose. Rose already knew that she would come to know this, but until Mark doesn't answer, she doesn't actually know it (because it isn't true yet). By following this prediction logic out, it's possible to show that Rose knows (at the start) that Mark will be unable to answer until the $4^\text{th}$ morning, but not whether or not he'll be able to answer then. Mark, meanwhile, knows that Rose will be unable to answer until the $3^\text{rd}$ evening, but not whether or not she'll be able to answer then. As soon as one of the prisoners observes an event that they were unable to predict at the beginning, they can deduce from it something they didn't know about the state of the other's knowledge. Since the only hidden information is how many bars are in the other prisoner's window, and they know that it must be one of two values, learning new information about that allows them to eliminate one of the values and find the correct result. |
Summary: Wigner's friend seems to lead to certainty in two complementary contexts.
This is probably pretty dumb, but I was just thinking about Wigner's friend and wondering about the two contexts involved.
The basic set up I'm wondering about is as follows:
The friend does a spin measurement in the ##\left\{|\uparrow_z\rangle, |\downarrow_z\rangle\right\}## basis, i.e. of ##S_z## at time ##t_1##. And let's say the particle is undisturbed after that.
For experiments outside the lab Wigner considers the lab to be in the basis:
$$\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)$$
He then considers a measurement of the observable ##\mathcal{X}## which has eigenvectors:
$$\left\{\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right), \frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle - |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)\right\}$$
with eigenvalues ##\{1,-1\}## respectively.
At time ##t_2## the friend flips a coin and either he does a measurement of ##S_z## or Wigner does a measurement of ##\mathcal{X}##
However, if the friend repeats the measurement of ##S_z##, he knows for a fact that he will get whatever result he originally got. He also knows that Wigner would obtain the ##1## outcome with certainty.
However, ##\left[S_{z},\mathcal{X}\right] \neq 0##. Thus the friend seems to be predicting with certainty the values of observables belonging to two separate contexts, which is not supposed to be possible in the quantum formalism.
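The non-commutation claim can be checked numerically in the two-dimensional subspace spanned by the two branch states. This reduced model is my own simplification, not the full lab Hilbert space: in the basis of the two branches, the restricted ##S_z## acts (up to ##\hbar/2##) as ##\sigma_z##, and ##\mathcal{X}## acts as ##\sigma_x##.

```python
import numpy as np

# Reduced 2-d model: basis e1 = |L_up, D_up, up>, e2 = |L_down, D_down, down>.
# S_z restricted to this subspace is sigma_z (up to hbar/2); the observable X,
# with eigenvectors (e1 +/- e2)/sqrt(2) and eigenvalues +/-1, is sigma_x.
Sz = np.array([[1.0, 0.0], [0.0, -1.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
X = np.outer(plus, plus) - np.outer(minus, minus)   # equals sigma_x

comm = Sz @ X - X @ Sz
print(np.allclose(comm, 0))  # False: the two observables do not commute
```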
What am I missing? |
This article is aimed at relatively new LaTeX users. It is written particularly for my own students, with the aim of helping them to avoid making common errors. The article exists in two forms: this WordPress blog post and a PDF file generated by LaTeX, both produced from the same Emacs Org file. Since WordPress does not handle LaTeX very well I recommend reading the PDF version.
1. New Paragraphs
In LaTeX a new paragraph is started by leaving a blank line.
Do not start a new paragraph by using \\ (it merely terminates a line). Indeed you should almost never type \\, except within environments such as array, tabular, and so on.
2. Math Mode
Always type mathematics in math mode (as $..$ or \(..\)), to produce “$y = f(x)$” instead of “y = f(x)”, and “the dimension $n$” instead of “the dimension n”. For displayed equations use $$..$$, \[..\], or one of the display environments (see Section 7).
Punctuation should appear outside math mode, for inline equations, otherwise the spacing will be incorrect. Here is an example.
Correct:
The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$.
Incorrect:
The variables $x,$ $y,$ and $z$ satisfy $x^2 + y^2 = z^2.$
For displayed equations, punctuation should appear as part of the display. All equations must be punctuated, as they are part of a sentence.
3. Mathematical Functions in Roman
Mathematical functions should be typeset in roman font. This is done automatically for the many standard mathematical functions that LaTeX supports, such as \sin, \tan, \exp, \max, etc.
If the function you need is not built into LaTeX, create your own. The easiest way to do this is to use the amsmath package and type, for example,
\usepackage{amsmath} ... % In the preamble. \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\inert}{Inertia}
Alternatively, if you are not using the amsmath package you can type
\def\diag{\mathop{\mathrm{diag}}}
4. Maths Expressions
Ellipses (dots) are never explicitly typed as “…”. Instead they are typed as \dots for baseline dots, as in $x_1,x_2,\dots,x_n$ (giving $x_1,x_2,\dots,x_n$), or as \cdots for vertically centered dots, as in $x_1 + x_2 + \cdots + x_n$ (giving $x_1 + x_2 + \cdots + x_n$).
Type $i$th instead of $i'th$ or $i^{th}$. (For some subtle aspects of the use of ellipses, see How To Typeset an Ellipsis in a Mathematical Expression.)
Avoid using \frac to produce stacked fractions in running text. Write the slashed form, as in $n/2$ flops, instead of the stacked $\frac{n}{2}$ flops.
For “much less than”, type \ll, giving $\ll$, not <<, which gives $<<$. Similarly, “much greater than” is typed as \gg, giving $\gg$. If you are using angle brackets to denote an inner product use \langle and \rangle:
incorrect: $<x,y>$, typed as $<x,y>$.
correct: $\langle x,y \rangle$, typed as $\langle x,y \rangle$.
5. Text in Displayed Equations
When a displayed equation contains text such as “subject to $x \ge 0$”, instead of putting the text in \mathrm put the text in an \mbox, as in \mbox{subject to $x \ge 0$}. Note that \mbox switches out of math mode, and this has the advantage of ensuring the correct spacing between words. If you are using the amsmath package you can use the \text command instead of \mbox.
Example:
$$ \min\{\, \|A-X\|_F: \mbox{$X$ is a correlation matrix} \,\}. $$
6. BibTeX
Produce your bibliographies using BibTeX, creating your own bib file. Note three important points.
“Export citation” options on journal websites rarely produce perfect bib entries. More often than not the entry has an improperly cased title, an incomplete or incorrectly accented author name, improperly typeset maths in the title, or some other error, so always check and improve the entry. If you wish to cite one of my papers, download the latest version of njhigham.bib (along with the strings.bib supplied with it) and include it in your \bibliography command.
Decide on a consistent format for your bib entry keys and stick to it. In the format used in the Numerical Linear Algebra group at Manchester, a 2010 paper by Smith and Jones has key smjo10, a 1974 book by Aho, Hopcroft, and Ullman has key ahu74, while a 1990 book by Smith has key smit90.
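For illustration, entries in this key format might look as follows; all bibliographic details here are invented, not real references.

```bibtex
% Hypothetical entries illustrating the key format described above.
@article{smjo10,
  author  = {John Smith and Alice Jones},
  title   = {An Example Paper Title},
  journal = {Journal of Examples},
  volume  = {31},
  number  = {4},
  pages   = {1--20},
  year    = {2010}
}

@book{smit90,
  author    = {John Smith},
  title     = {An Example Book},
  publisher = {Example Press},
  year      = {1990}
}
```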
7. Spelling Errors and LaTeX Errors
There is no excuse for your writing to contain spelling errors, given the wide availability of spell checkers. You’ll need a spell checker that understands LaTeX syntax.
There are also tools for checking LaTeX syntax. One that comes with TeX Live is lacheck, which describes itself as “a consistency checker for LaTeX documents”. Such a tool can point out possible syntax errors, or semantic errors such as unmatched parentheses, and warn of common mistakes.
8. Quotation Marks
LaTeX has a left quotation mark, denoted here \lq, and a right quotation mark, denoted here \rq, typed as the single left and right quotes on the keyboard, respectively. A left or right double quotation mark is produced by typing two single quotes of the appropriate type. The keyboard double quotation mark character itself produces the same as two right quotation marks. Example: “hello” is typed as \lq\lq hello \rq\rq.
9. Captions
Captions go above tables but below figures. So put the \caption command at the start of a table environment but at the end of a figure environment. The \label statement should go after the \caption statement (or it can be put inside it), otherwise references to that label will refer to the subsection in which the label appears rather than the figure or table.
10. Tables
LaTeX makes it easy to put many rules, some of them double, in and around a table, using \cline, \hline, and the | column formatting symbol. However, it is good style to minimize the number of rules. A common task for journal copy editors is to remove rules from tables in submitted manuscripts.
11. Source Code
LaTeX source code should be laid out so that it is readable, in order to aid editing and debugging, to help you to understand the code when you return to it after a break, and to aid collaborative writing. Readability means that logical structure should be apparent, in the same way as when indentation is used in writing a computer program. In particular, it is a good idea to start new sentences on new lines, which makes it easier to cut and paste them during editing, and also makes a diff of two versions of the file more readable.
Example:
Good:
$$ U(\zbar) = U(-z) = \begin{cases} -U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases} $$
Bad:
$$U(\zbar) = U(-z) = \begin{cases}-U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases}$$
12. Multiline Displayed Equations
For displayed equations occupying more than one line it is best to use the environments provided by the amsmath package. Of these, align (and align* if equation numbers are not wanted) is the one I use almost all the time. Example:
\begin{align*} \cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} + \cdots,\\ \sin(A) &= A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots, \end{align*}
Others, such as gather and aligned, are occasionally needed.
Avoid using the standard LaTeX environment eqnarray, because it doesn’t produce as good results as the amsmath environments, nor is it as versatile. For more details see the article Avoid Eqnarray.
13. Synonyms
This final category concerns synonyms and is a matter of personal preference. I prefer \ge and \le to the equivalent \geq and \leq (why type the extra characters?). I also prefer to use $..$ for math mode instead of \(..\) and $$..$$ for display math mode instead of \[..\]. My preferences are the original TeX syntax, while the alternatives were introduced by LaTeX. The slashed forms are obviously easier to parse, but this is one case where I prefer to stick with tradition. If dollar signs are good enough for Don Knuth, they are good enough for me!
I don’t think many people use LaTeX’s verbose \begin{math}..\end{math} or \begin{displaymath}..\end{displaymath} forms. Also note that \begin{equation*}..\end{equation*} (for unnumbered equations) exists in the amsmath package but not in LaTeX itself. |
If you want to understand how the 'seconds' value fits into the bigger picture, there's this rather contrived definition (which nobody uses, because it's contrived and mostly useless, but it is evocative enough.)
0)
$I_{sp}$ in seconds is equal to the amount of time a rocket must be fired to use a quantity of propellant with weight (measured at one standard gravity) equal to its thrust.
Imagine a test setup: rocket engine plus dummy weight, such that the total mass (engine+weight) is 100kg (kilogram-force, if you want to nitpick the units).
You drive an external, flexible fuel pipe from a fuel tank which stores 100kg of fuel+oxidizer (the same as the test rig). You start the engine and keep the thrust so that it hovers, without rising or falling. You measure time from the engine start until all fuel is spent, and when it is, the time is your specific impulse. The longer the better obviously, more acceleration from the fuel.
Obviously this is not a very practical test, and this definition is not very helpful - it helps imagine why specific impulse is expressed in seconds, and what these seconds mean... but honestly, beyond "understanding" nobody cares.
What really matters are other, more practical uses of specific impulse:
1)
$I_{sp} = { v_e \over g_0 } $
This is one trivial equation. You have $g_0$, the Earth's gravitational acceleration - a trivial conversion factor, a constant - and you have $v_e$ - the exhaust velocity, the speed at which propellant is ejected from the engine. That's the only variable, so you can think of specific impulse as the speed of the exhaust gases, only multiplied by some constant for convenience. It's not some magical property dependent on a hundred weird and obscure factors - it's just the speed of the exhaust gas, 'weirdified' a little by multiplying it by a constant. Really simple.
2)
$F_\text{thrust} = g_0 \cdot I_\text{sp} \cdot \dot m$
That's the same thing as that first "useless" definition, only made more useful. That's how you can practically measure specific impulse (measuring velocity of superheated gas or building hovering test rigs with flexible pipes is not really practical). Again, "bang for the buck", how much thrust is produced - per fuel flow. The more thrust and the less fuel used the better. But you can measure how much force an engine exerts (say, how far the girders of the test cell bend when it blasts at full thrust, if it's something like Apollo's F-1, or how much the ultra-precise torsion weight turns, if it's something like a colloid thruster), and check how much fuel it uses. This way you can get the specific impulse.
3)
$\Delta v = I_\text{sp} g_0 \ln \frac {m_0} {m_f}$
This is what this whole game is all about - where you make practical use of that painstakingly determined specific impulse.
Delta-v is the actual
mileage of a rocket. Specific impulse is about the engine. But besides the engine, you have fuel, and you have the payload. This is Tsiolkovsky's rocket equation, and this is about "where you can go with your rocket." A good 9 km/s to reach LEO. Another 4 km/s or so to escape Earth and start travelling around the solar system. 3 km/s to capture into Mars orbit. That's the delta-v budget, which is the foundation of any mission plan. And with that you have $m_f$, the dry mass of your rocket - engine, scientific payload, telemetry, tanks, panels, everything other than fuel. And $m_0$ - the launch mass. That's the above plus fuel.
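The three relations can be tied together in a small numeric sketch. The engine figures here (an $I_{sp}$ of 450 s, roughly a hydrogen/oxygen upper stage; a flow of 200 kg/s; a 10:1 mass ratio) are made-up illustrative values, not data from this answer:

```python
from math import log

g0 = 9.80665   # standard gravity, m/s^2
isp = 450.0    # s; illustrative value, roughly a hydrogen/oxygen upper stage

# (1) exhaust velocity from specific impulse: v_e = Isp * g0
v_e = isp * g0                  # about 4413 m/s

# (2) thrust for a given propellant mass flow: F = g0 * Isp * mdot
mdot = 200.0                    # kg/s, made-up figure
thrust = g0 * isp * mdot        # N

# (3) Tsiolkovsky: delta-v for a stage that is ~90% propellant
m0, mf = 100_000.0, 10_000.0    # launch mass vs dry mass, kg
delta_v = isp * g0 * log(m0 / mf)   # about 10.2 km/s
```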
That 'ln' plays a nasty trick. Usually your fuel mass will be something on the order of 90-95% of the launch mass. There's very little we can do about it, because we need good thrust to overcome Earth's gravity before we reach orbit, and that is provided by chemical engines, which have lousy specific impulse. But then, in orbit, we drop the launch stages (dry mass goes way down!) and switch to efficient engines, like ion engines. And so we can produce as much delta-v again as it took to reach LEO, but without the need to haul hundreds of tons of fuel. This is where good $I_{sp}$ rules. |
CMS Collaboration; Canelli, Florencia; Kilminster, Benjamin; Aarestad, Thea; Caminada, Lea; De Cosa, Annapaola; Del Burgo, Riccardo; Donato, Silvio; Galloni, Camilla; Hinzmann, Andreas; Hreus, Tomas; Ngadiuba, Jennifer; Pinna, Deborah; Rauco, Giorgia; Robmann, Peter; Salerno, Daniel; Schweiger, Korbinian; Seitz, Claudia; Takahashi, Yuta; Zucchetta, Alberto; et al. (2017).
Measurement of the transverse momentum spectrum of the Higgs boson produced in pp collisions at $\sqrt{s}=8$ TeV using H $\to$ WW decays. Journal of High Energy Physics, 3:32. Abstract
The cross section for Higgs boson production in pp collisions is studied using the H $\to$ W$^+$ W$^-$ decay mode, followed by leptonic decays of the W bosons to an oppositely charged electron-muon pair in the final state. The measurements are performed using data collected by the CMS experiment at the LHC at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.4 fb$^{−1}$. The Higgs boson transverse momentum ($p_T$) is reconstructed using the lepton pair $p_T$ and missing $p_T$. The differential cross section times branching fraction is measured as a function of the Higgs boson $p_T$ in a fiducial phase space defined to match the experimental acceptance in terms of the lepton kinematics and event topology. The production cross section times branching fraction in the fiducial phase space is measured to be 39 $\pm$ 8 (stat) $\pm$ 9 (syst) fb. The measurements are found to agree, within experimental uncertainties, with theoretical calculations based on the standard model.
This article is aimed at relatively new LaTeX users. It is written particularly for my own students, with the aim of helping them to avoid making common errors. The article exists in two forms: this WordPress blog post and a PDF file generated by LaTeX, both produced from the same Emacs Org file. Since WordPress does not handle LaTeX very well I recommend reading the PDF version.
1. New Paragraphs
In LaTeX a new paragraph is started by leaving a blank line. Do not start a new paragraph by using \\ (it merely terminates a line). Indeed you should almost never type \\, except within environments such as array, tabular, and so on.
2. Math Mode
Always type mathematics in math mode (as $..$ or \(..\)), to produce italic "$y = f(x)$" instead of upright "y = f(x)", and "the dimension $n$" instead of "the dimension n". For displayed equations use $$..$$, \[..\], or one of the display environments (see Section 12).
For inline equations, punctuation should appear outside math mode; otherwise the spacing will be incorrect. Here is an example.
Correct:
The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$.
Incorrect:
The variables $x,$ $y,$ and $z$ satisfy $x^2 + y^2 = z^2.$
For displayed equations, punctuation should appear as part of the display. All equations must be punctuated, as they are part of a sentence.

3. Mathematical Functions in Roman
Mathematical functions should be typeset in roman font. This is done automatically for the many standard mathematical functions that LaTeX supports, such as \sin, \tan, \exp, \max, etc.
If the function you need is not built into LaTeX, create your own. The easiest way to do this is to use the amsmath package and type, for example,

\usepackage{amsmath}  % In the preamble.
...
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\inert}{Inertia}

Alternatively, if you are not using the amsmath package you can type

\def\diag{\mathop{\mathrm{diag}}}

4. Maths Expressions
Ellipses (dots) are never explicitly typed as "…". Instead they are typed as \dots for baseline dots, as in $x_1,x_2,\dots,x_n$, or as \cdots for vertically centered dots, as in $x_1 + x_2 + \cdots + x_n$. Type $i$th instead of $i'th$ or $i^{th}$. (For some subtle aspects of the use of ellipses, see How To Typeset an Ellipsis in a Mathematical Expression.)
Avoid using \frac to produce stacked fractions in running text: write the slashed form $a/b$ rather than $\frac{a}{b}$, which cramps the line spacing.
For "much less than", type \ll, not <<. Similarly, "much greater than" is typed as \gg. If you are using angle brackets to denote an inner product, use \langle and \rangle:

incorrect: $<x,y>$

correct: $\langle x,y \rangle$
5. Text in Displayed Equations
When a displayed equation contains text such as "subject to $x \ge 0$", instead of putting the text in \mathrm put it in an \mbox, as in \mbox{subject to $x \ge 0$}. Note that \mbox switches out of math mode, which has the advantage of ensuring the correct spacing between words. If you are using the amsmath package you can use the \text command instead of \mbox.

Example:

$$ \min\{\, \|A-X\|_F: \mbox{$X$ is a correlation matrix} \,\}. $$

6. BibTeX
Produce your bibliographies using BibTeX, creating your own bib file. Note three important points.
"Export citation" options on journal websites rarely produce perfect bib entries. More often than not the entry has an improperly cased title, an incomplete or incorrectly accented author name, improperly typeset maths in the title, or some other error, so always check and improve the entry. If you wish to cite one of my papers, download the latest version of njhigham.bib (along with strings.bib supplied with it) and include it in your \bibliography command.
Decide on a consistent format for your bib entry keys and stick to it. In the format used in the Numerical Linear Algebra group at Manchester, a 2010 paper by Smith and Jones has key smjo10, a 1974 book by Aho, Hopcroft, and Ullman has key ahu74, and a 1990 book by Smith has key smit90.
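A hypothetical entry following that key convention might look like this (every field below is invented purely to illustrate the format):

```latex
@article{smjo10,
  author  = {J. Smith and A. Jones},
  title   = {An Illustrative Title with Protected {C}asing},
  journal = {Some Journal of Examples},
  volume  = {1},
  pages   = {1--10},
  year    = {2010}
}
```

Braces around a letter in the title, as in {C}asing, protect its case from the bibliography style.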
7. Spelling Errors and LaTeX Errors

There is no excuse for your writing to contain spelling errors, given the wide availability of spell checkers. You'll need a spell checker that understands LaTeX syntax.
There are also tools for checking LaTeX syntax. One that comes with TeX Live is lacheck, which describes itself as "a consistency checker for LaTeX documents". Such a tool can point out possible syntax errors, or semantic errors such as unmatched parentheses, and warn of common mistakes.
8. Quotation Marks
LaTeX has a left quotation mark, \lq, and a right quotation mark, \rq, typed as the single left and right quotes on the keyboard, respectively. A left or right double quotation mark is produced by typing two single quotes of the appropriate type. The double quotation mark character itself always produces the same as two right quotation marks. Example: "hello" is typed as \lq\lq hello \rq\rq.
9. Captions
Captions go above tables but below figures. So put the \caption command at the start of a table environment but at the end of a figure environment. The \label statement should go after the \caption statement (or it can be put inside it), otherwise references to that label will refer to the subsection in which the label appears rather than to the figure or table.
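A minimal skeleton of the placement just described (environment contents are placeholders):

```latex
\begin{table}
\caption{Results summary.}\label{tab:results} % caption first; label after it
\begin{tabular}{lr} A & 1 \\ B & 2 \end{tabular}
\end{table}

\begin{figure}
\includegraphics{plot}
\caption{A figure.}\label{fig:plot}           % caption last for figures
\end{figure}
```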
10. Tables
LaTeX makes it easy to put many rules, some of them double, in and around a table, using \cline, \hline, and the | column formatting symbol. However, it is good style to minimize the number of rules. A common task for journal copy editors is to remove rules from tables in submitted manuscripts.
11. LaTeX Source Code

LaTeX source code should be laid out so that it is readable, in order to aid editing and debugging, to help you to understand the code when you return to it after a break, and to aid collaborative writing. Readability means that logical structure should be apparent, in the same way as when indentation is used in writing a computer program. In particular, it is a good idea to start new sentences on new lines, which makes it easier to cut and paste them during editing, and also makes a diff of two versions of the file more readable.
Example:
Good:
$$ U(\zbar) = U(-z) = \begin{cases} -U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases} $$
Bad:
$$U(\zbar) = U(-z) = \begin{cases}-U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases}$$

12. Multiline Displayed Equations
For displayed equations occupying more than one line it is best to use the environments provided by the amsmath package. Of these, align (and align* if equation numbers are not wanted) is the one I use almost all the time. Example:
\begin{align*}
\cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots,\\
\sin(A) &= A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots.
\end{align*}
Others, such as gather and aligned, are occasionally needed.
Avoid using the standard LaTeX environment eqnarray, because it doesn't produce as good results as the amsmath environments, nor is it as versatile. For more details see the article Avoid Eqnarray.
13. Synonyms
This final category concerns synonyms and is a matter of personal preference. I prefer \ge and \le to the equivalent \geq and \leq (why type the extra characters?).

I also prefer to use $..$ for math mode instead of \(..\) and $$..$$ for display math mode instead of \[..\]. My preferences are the original TeX syntax, while the alternatives were introduced by LaTeX. The slashed forms are obviously easier to parse, but this is one case where I prefer to stick with tradition. If dollar signs are good enough for Don Knuth, they are good enough for me!

I don't think many people use LaTeX's verbose \begin{math}..\end{math} or \begin{displaymath}..\end{displaymath}.

Also note that \begin{equation*}..\end{equation*} (for unnumbered equations) exists in the amsmath package but not in LaTeX itself.
The (relativistic) mass of an object measured by an observer in the $xyz$-frame is given by $$m = \frac{m_{rest}}{\sqrt{1 - \left(\frac{v}{c}\right)^2}}.$$ Mathematically $v$ could be greater than the speed of light, but then the mass $m$ would become imaginary. Physically we would have to reach the speed of light first, i.e. $v = c$, which gives an undefined value for $m$. So do we believe that nothing moves faster than the speed of light simply because we do not like observables to be imaginary?
Physically, you math guys aren't allowed to cross the boundary $c$ (the speed of light); special relativity sees to that. SR says it is impossible for a particle to be accelerated to $c$ because the speed of light in vacuum is the same for all inertial observers: observers in every inertial frame measure the same value for $c$. Besides the fact that infinite energy would be required to accelerate an object to the speed of light, an observer would see things going crazy around someone traveling at $c$: length contraction (lengths contracted to zero), time dilation (time would freeze around him), and infinite relativistic mass. You can't enjoy anything when you travel at $c$, but the stationary observer measuring your speed (relative to his frame) would definitely suffer! Note: the relativistic energy-momentum relation $$E^2=p^2c^2+m^2c^4$$ formally admits solutions with negative mass squared, but let's try not to make the subject more complicated.
There are hypothetical particles, always traveling faster than the speed of light, with negative mass squared (i.e. imaginary mass), called tachyons. Physicists posited them in order to investigate the faster-than-light case. When $v>c$, the denominator in the mass formula becomes imaginary. But energy is an observable, so it should be real. A consistent theory can be built if the mass is taken to be imaginary, in which case the energy-momentum relation gives $p^2c^2-E^2=|m|^2c^4$ with $|m|$ real. This makes tachyons behave in a way opposite to ordinary particles: as they gain energy they slow down toward $c$, and as they lose energy they speed up (which strains all our usual assumptions).
The first reason this investigation was abandoned is Cherenkov radiation: particles traveling faster than light (in a medium) emit this kind of radiation. So far, no such radiation has been observed in vacuum, which argues against the existence of tachyons. It's like making a pencil stand on its graphite tip: if it stood, physicists would have to blow up their heads :-)

There are tougher stories on the topic when you Google it.
Actually, a quick search on Wikipedia shows that you have misinterpreted this formula: imaginary-mass particles do not propagate faster than the speed of light when you take quantum mechanics into account. A much better reason not to believe in faster-than-light particles is that they have never been observed to exist. Furthermore, if they were to exist, I could in principle do rather confusing things like cause the death of my own grandmother before my mother was even born. Generally, if something does not appear to exist, and it would cause everyone a massive headache if that something did exist, it is easier to assume that it doesn't! Likewise, I do not believe that an enormous colony of pixies living on the dark side of the moon is planning a surprise birthday party for me next year. Of course, someone, somewhere probably does believe that :)
The use of relativistic mass is deprecated in modern physics, which means that we can explain why nothing moves faster than the speed of light (in the context of special relativity) without even mentioning relativistic mass.
The special relativistic energy for a massive particle is $E^2=p^2c^2+m^2c^4$, where $m$ is the mass [1]. Solving the Hamilton equation we obtain the velocity $v = pc^2/E$. It is evident that $v = c$ only when $E=|p|c$, which for $m \neq 0$ happens only in the limit of infinite momentum, since $E^2 = p^2c^2+m^2c^4 > p^2c^2$ for every finite $p$. Therefore you need infinite energy to accelerate a massive particle to the speed of light. The energy of the observable universe is finite: you cannot break the $c$ limit.
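One way to make the infinite-energy argument explicit (a standard rearrangement, not part of the original answer): substituting $E=\sqrt{p^2c^2+m^2c^4}$ into $v = pc^2/E$ gives

```latex
v = \frac{pc^2}{\sqrt{p^2c^2 + m^2c^4}}
  = \frac{c}{\sqrt{1 + (mc/p)^2}} < c
  \quad \text{for all finite } p \text{ when } m > 0,
```

so $v \to c$ only as $p \to \infty$, i.e. as $E \to \infty$.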
For a massless particle, the special relativistic energy is $E^2=p^2c^2$. Solving the Hamilton equation we obtain the velocity $|v| = c$, a constant. Therefore no matter what energy you give to a massless particle, it will always move at the speed of light. Again, you cannot break the $c$ limit.
[1] This is standard notation but it disagrees with the notation in the question, where $m$ denotes the relativistic mass, often written $m_{rel}$.
According to Sudarshan, when particles travel with a velocity greater than that of light the mass becomes negative or imaginary; they are called tachyons. I have shown in an earlier article that the mass becomes imaginary or negative. Using the method of quantum mechanics, multiplying by a complex conjugate [that is, changing $-i$ to $+i$] one gets a positive mass, but slightly less than the original.
What is the derivative of $y=5x^{3}+\frac{1}{x^{6}}$?

What is the derivative of $-17x^{16}$?

If $f(x) = \sum_{k=1}^{n} x^k$, what is $\lim_{n \rightarrow \infty} \frac{f'(1)}{3n^2+4n+5}$?

What is the derivative of $y = \frac{4}{x}+\frac{4}{x^2}$?

What is the derivative of $x^{15}$?
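As a worked instance of the power rule these problems exercise (my own sketch, not part of the problem set), the first one comes out as

```latex
y = 5x^{3} + x^{-6}
\quad\Longrightarrow\quad
y' = 15x^{2} - 6x^{-7} = 15x^{2} - \frac{6}{x^{7}},
```

applying $\frac{d}{dx}\,x^{n} = n x^{n-1}$ to each term.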
I am in the process of designing a reaction wheel to be used in a cubesat for my final year project. I have already chosen my motor ( Faulhaber 2610T006B ) , and am currently in the process of sizing my flywheel. But I have gotten stuck halfway through. The following is my working:
Desired slew rate: $3^\circ$ per second $\approx 0.0523599$ rad per second.

$$\theta = \frac12 \frac{\tau}{J} t^2$$

For a $90^\circ$ rotation at $3^\circ$ per second, the time taken is $90/3 = 30$ s, so

$$\tau_{min} = \frac{2\theta J}{t^2} = \frac{2\times0.5\pi \times 0.03}{30^2} = 1.047 \times 10^{-4}\,\text{Nm},$$

where $\tau_{min}$ is the minimum torque required to achieve the $90^\circ$ rotation within 30 s.

For the reaction wheel, $\tau = I\alpha$, where $I$ is the moment of inertia of the flywheel and $\alpha_{max}$ is the maximum angular acceleration of the motor in rad/s$^2$. Setting $\tau_{min} = \tau$ gives $I = \tau_{min}/\alpha_{max}$.

Finding $\alpha_{max}$: from $F = ma$ with $a = r\alpha_{max}$, where $m$ is the mass of the flywheel ($50$ g $\approx 0.05$ kg) and $a$ is the linear acceleration in m/s$^2$. The torque is $\tau = Fd\sin\theta$, which for $\theta = 90^\circ$ reduces to $\tau = Fd$, where $d$ is the length of the rotor ($0.006$ m). So $\tau$ should be the torque required to move the load; what should $\tau$ be?
As can be seen from the working above, I am trying to achieve a slew rate of 3 degrees per second, but I am having trouble finding $\alpha_{max}$, that is, the maximum angular acceleration when my flywheel is attached to my motor. I also forgot to mention that $J$ is the principal moment of inertia about one axis and is 0.03 kg m$^2$.
My flywheel will be cylindrical; once I am able to find alpha max, I can find my required I (moment of inertia of flywheel) and then begin sizing it.
Am I approaching this the right way? And how should I find alpha max?
I would also like to ask if I were to factor in all three reaction wheels operating at once, how much would my requirements for momentum storage change by?
Thanks! |
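For reference, the minimum-torque arithmetic from the working above can be reproduced with a short script (a sketch using only the values stated in the question; it does not answer the alpha-max part):

```python
import math

# Values taken from the question above
J = 0.03              # principal moment of inertia about the slew axis, kg*m^2
theta = math.pi / 2   # 90 degree rotation, in radians
t = 30.0              # 90 deg at 3 deg/s -> 30 s

# Rearranging theta = (1/2) * (tau / J) * t^2 for tau:
tau_min = 2 * theta * J / t**2
print(f"tau_min = {tau_min:.3e} N*m")  # ~1.047e-4 N*m, matching the working
```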
A free-floating planet candidate from the OGLE and KMTNet surveys
(2017)
Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ...
OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary
(2017)
We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ...
OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing
(2017)
We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ...
OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only
(2018)
We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...
OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function
(2018)
We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...
OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy
(2018)
We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge
(2018)
We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ...
Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb
(2018)
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ...
OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit
(2018)
We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ...
KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion
(2018)
We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ... |
To help you better understand and master the idiomatic English usage of the query word or its translation, we have prepared a large number of example sentences taken from original English texts for your reference.
If is a ℤ2-grading of a simple Lie algebra, we explicitly describe a-module Spin0 () such that the exterior algebra of is the tensor square of this module times some power of 2.
The operation adj on matrices arises from the (n - 1)st exterior power functor on modules; the analogous factorization question for matrix constructions arising from other functors is raised, as are several other questions.
Let q be a power of p and let G(q) be the finite group of Fq-rational points of G.
We prove a Tauberian theorem of the form $\phi * g (x)\sim p(x)w(x)$ as $x \to \infty,$ where p(x) is a bounded periodic function and w(x) is a weighted function of power growth.
A new C*-algebra of strong limit power functions is proposed.
In this note, we propose a direct proof, and extend the range allowed for the power of the nonlinearity to the set of all short range nonlinearities.
The superaugmented eccentric connectivity index-3 (SAξc3) exhibited an exceptionally high discriminating power of > 4000 for all possible structures containing only five vertices.
Numerical examples compare the obtained results with the approximation power of diagonal Chisholm approximant and Taylor polynomial approximant.
Investigation on the integral output power model of a large-scale wind farm
The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future.
The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided.
The average output power at a maximum pulse repetition rate and a 250kV-voltage is 16 kW.
The output power-amplification unit is based on an inductive storage and SOS diodes with subnanosecond current cutoff time.
更多
In addition, the influence of mechanical source on electrical energy generation and power output is also considered.
Finally, an actual power output of the wind farm is calculated and analyzed by using the practical measurement wind speed data.
In the CHD patients with excessive body mass, mean blood pressure, cardiac output, left ventricular power output, and left ventricular posterior wall thickness were significantly increased even at rest.
The peak power reached in the circuit is comparable to the peak soft X-ray power output emitted by the pinch in terms of magnitude and timing.
Herndon in the 1990s proposed a natural nuclear fission georeactor at the center of the Earth with a power output of 3-10 TW as an energy source to sustain the Earth magnetic field.
Finally, we can also use a Bayesian approach to fit Brownian motion models to data and to estimate the rate of evolution. This approach differs from the ML approach in that we will use explicit priors for parameter values, and then run an MCMC to estimate posterior distributions of parameter estimates. To do this, we will modify the basic algorithm for Bayesian MCMC (see Chapter 2) as follows:
1. Sample a set of starting parameter values, σ² and $\bar{z}(0)$, from their prior distributions. For this example, we can set our prior distributions as uniform between 0 and 0.5 for σ² and uniform from 0 to 10 for $\bar{z}(0)$.
2. Given the current parameter values, select new proposed parameter values using the proposal density Q(p′|p). For both parameters, we will use a uniform proposal density with width $w_p$, so that (eq. 4.10): $$ Q(p'|p) \sim U(p-\frac{w_p}{2}, p+\frac{w_p}{2}) $$
3. Calculate three ratios:
   - The prior odds ratio, $R_{prior}$: the ratio of the probabilities of drawing the parameter values p and p′ from the prior. Since our priors are uniform, this is always 1.
   - The proposal density ratio, $R_{proposal}$: the ratio of the probability of proposals going from p to p′ and the reverse. We have already declared a symmetrical proposal density, so that Q(p′|p) = Q(p|p′) and $R_{proposal}$ = 1.
   - The likelihood ratio, $R_{likelihood}$: the ratio of the probabilities of the data under the two different parameter values, which we can calculate from equation 4.5 above (eq. 4.11): $$ R_{likelihood} = \frac{L(p'|D)}{L(p|D)} = \frac{P(D|p')}{P(D|p)} $$
4. Find the acceptance ratio, $R_{accept}$, which is the product of the prior odds, proposal density ratio, and likelihood ratio. In this case, both the prior odds and proposal density ratios are 1, so $R_{accept} = R_{likelihood}$.
5. Draw a random number x from a uniform distribution between 0 and 1. If x < $R_{accept}$, accept the proposed values of both parameters; otherwise reject, and retain the current values.
6. Repeat steps 2-5 a large number of times.
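The steps above can be sketched in code. This is a minimal illustration, not the book's analysis: to keep the likelihood simple it assumes the tip values are i.i.d. normal with mean $\bar{z}(0)$ and variance σ² (i.e., a star phylogeny with unit-length branches), rather than the full phylogenetic likelihood of equation 4.5, and the simulated data are hypothetical.

```python
import math
import random

random.seed(1)

# Hypothetical "tip" data, i.i.d. normal under the star-tree assumption
true_sigma2, true_zbar0 = 0.1, 3.0
data = [random.gauss(true_zbar0, math.sqrt(true_sigma2)) for _ in range(100)]

def log_likelihood(params):
    sigma2, zbar0 = params
    n = len(data)
    ss = sum((x - zbar0) ** 2 for x in data)
    return -0.5 * n * math.log(2 * math.pi * sigma2) - ss / (2 * sigma2)

def in_prior(params):
    sigma2, zbar0 = params
    return 0 < sigma2 < 0.5 and 0 < zbar0 < 10     # uniform priors (step 1)

# Step 1: starting values drawn from the priors
p = (random.uniform(0, 0.5), random.uniform(0, 10))
w = (0.05, 0.5)                                     # proposal widths w_p (eq. 4.10)

samples = []
for gen in range(10000):
    # Step 2: symmetric uniform proposal around the current values
    prop = tuple(x + random.uniform(-wi / 2, wi / 2) for x, wi in zip(p, w))
    if in_prior(prop):
        # Steps 3-4: R_prior = R_proposal = 1, so R_accept = R_likelihood
        log_r = log_likelihood(prop) - log_likelihood(p)
        # Step 5: accept if x ~ U(0,1) falls below the acceptance ratio
        if random.random() < math.exp(min(0.0, log_r)):
            p = prop
    # Step 6 is the loop itself; discard burn-in, then thin every 10 generations
    if gen >= 1000 and gen % 10 == 0:
        samples.append(p)

sigma2_hat = sum(s[0] for s in samples) / len(samples)
zbar0_hat = sum(s[1] for s in samples) / len(samples)
print(f"posterior means: sigma2 ~ {sigma2_hat:.3f}, zbar(0) ~ {zbar0_hat:.2f}")
```

Proposals that land outside the prior support have prior density zero, so they are rejected outright; everything else reduces to the likelihood ratio, exactly as in steps 3-5.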
Using the mammal body size data, I ran an MCMC with 10,000 generations, discarding the first 1000 as burn-in. Sampling every 10 generations, I obtain parameter estimates of $\hat{\sigma}_{bayes}^2 = 0.10$ (95% credible interval: 0.066 − 0.15) and $\hat{\bar{z}}(0) = 3.5$ (95% credible interval: 2.3 − 5.3; Figure 4.5).
Note that the parameter estimates from all three approaches (REML, ML, and Bayesian) were similar. Even the confidence/credible intervals, though they varied a little, were of about the same size in all three cases. All of these approaches are mathematically related and should, in general, return similar results. One might still prefer the Bayesian credible intervals over confidence intervals from the Hessian of the likelihood surface, for two reasons: first, the Hessian yields an accurate estimate of the CI only under certain conditions that may or may not hold for your analysis; and second, Bayesian credible intervals reflect overall uncertainty better than ML confidence intervals (see Chapter 2).
Grade 6
In the United States, Thanksgiving is celebrated on the fourth Thursday in November. Which of the following statements is (are) true?
I. Thanksgiving is always the last Thursday in November.
II. Thanksgiving is never celebrated on November 22.
III. Thanksgiving cannot be celebrated on the same date 2 years in a row.
In the figure at the right, how many paths are there from A to X
if the only ways to move are up and to the right?
(A) 4 (B) 5 (C) 6 (D) 8 (E) 9
If 0 < a < b < 1, which of the following is (are) true?
I. a - b is negative
II. \(\dfrac{1}{ab}\) is positive
III. \(\dfrac{1}{b}\) - \(\dfrac{1}{a}\) is positive
(A) I only (B) II only (C) III only (D) I and II only (E) I, II and III
FA Liên Quân Garena 08/01/2018 at 21:40
I chose the answer : E
Trương Như Hoàng 08/01/2018 at 22:22
I choose D because:
a < b => a - b < 0, which means a - b is negative, so I is true.
0 < a < b => ab > 0 => \(\dfrac{1}{ab}\) is positive, so II is true.
\(\dfrac{1}{b}\) - \(\dfrac{1}{a}\) = \(\dfrac{a-b}{ab}\), but a - b < 0 and ab > 0, so \(\dfrac{a-b}{ab}\) is negative and III is false.
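The three statements can also be checked numerically (a sketch; random samples with 0 < a < b < 1):

```python
import random

random.seed(0)
for _ in range(1000):
    a, b = sorted(random.uniform(0.001, 0.999) for _ in range(2))
    if a == b:
        continue
    assert a - b < 0                # I:  a - b is negative
    assert 1 / (a * b) > 0          # II: 1/(ab) is positive
    assert 1 / b - 1 / a < 0        # III fails: 1/b - 1/a is negative, not positive
print("I and II hold, III fails, so the answer is D")
```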
Draw two lines xy and zt intersecting at O. Take A on ray Ox, B on ray Ot, C on ray Oy, and D on ray Oz such that OA = OC = 3 cm, OB = 2 cm, and OD = 2OB.
Michael threw 8 darts at the dartboard shown.
All eight darts hit the dartboard. Which of the following could have been his total score?
A.22 B.37 C.42 D.69 E.76
Lê Quốc Trần Anh Coordinator 27/07/2017 at 09:09
Because he threw 8 darts at the dartboard, the total score must be even (a sum of eight odd numbers is even).
The highest score he could throw is \(8\times9=72\) points.
The lowest score he could throw is \(8\times3=24\) points.
Only \(C.42\) satisfies all of these conditions.
So the answer is C.
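The answer's conditions can be checked by enumeration. The ring values {3, 5, 7, 9} are an assumption read off the reasoning above (minimum 8×3, maximum 8×9), not the original figure:

```python
from itertools import combinations_with_replacement

# Assumed ring values, inferred from the 8*3 = 24 and 8*9 = 72 bounds above
values = [3, 5, 7, 9]
totals = {sum(c) for c in combinations_with_replacement(values, 8)}

achievable = [t for t in (22, 37, 42, 69, 76) if t in totals]
print(achievable)  # [42]
```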
Andrew has two children, David and Helen. The sum of their three ages is 49. David's age is three times that of Helen. In 5 years time, Andrew's age will be three times David's age. What is the product of their ages now?
Phan Thanh Tinh Coordinator 26/07/2017 at 23:42
Let A, D, H be the ages of Andrew, David, and Helen respectively. We have:
\(A+D+H=49;\quad D=3H;\quad A+5=3\left(D+5\right)\)
\(\Rightarrow H=\dfrac{1}{3}D;\ A=3D+10\)
\(\Rightarrow\left(3D+10\right)+D+\dfrac{1}{3}D=49\Rightarrow\dfrac{13}{3}D=39\Rightarrow D=9\)
\(\Rightarrow H=3;\ A=37\Rightarrow A\cdot D\cdot H=37\cdot9\cdot3=999\)
So, the product of their ages is 999.
A rhombus-shaped tile is formed by joining two equilateral triangles together. Three of these tiles are combined edge to edge to form a variety of shapes, as in the example given.
How many different shaped can be formed? (Shapes which are reflections or rotations of other shapes are not considered different.)
This cube has a different whole number on each face, and has the property that whichever pair of opposite faces is chosen, the two numbers multiply to give the same result.What is the smallest possible total of all 6 numbers on the cube?
Dao Trong Luan 28/07/2017 at 19:04
Because the problem does not say the three remaining faces must be different, let them be x, y, z, as small as possible and satisfying 12x = 9y = 6z. Then x = y = z = 0 is the smallest, so the total is 0 + 0 + 0 + 6 + 9 + 12 = 27. Answer: 27
Rani wrote down the numbers from 1 to 100 on a piece of paper and then correctly added up all the individual digits of the numbers. What sum did she obtain?
Searching4You 27/07/2017 at 09:29
The answer is 901.
From 1 to 9 we have sum 45.
From 10 to 19 we have (1+0) + (1+1) + (1+2) + ... + (1+9) = 55.
From 20 to 29 we have sum 65.
From 30 to 39 we have sum 75.
From 40 to 49 we have sum 85.
From 50 to 59 we have sum 95.
From 60 to 69 we have sum 105.
From 70 to 79 we have sum 115.
From 80 to 89 we have sum 125.
From 90 to 99 we have sum 135.
And the last number, 100, has digit sum 1 + 0 + 0 = 1.
So the sum she obtained is 45 + 55 + 65 + 75 + 85 + 95 + 105 + 115 + 125 + 135 + 1 = 901.
Lê Quốc Trần Anh Coordinator 27/07/2017 at 09:24
From 1 to 9, the sum of the individual digits is \(1+2+3+4+5+6+7+8+9=45\)
From 10 to 19, the tens digits are all the same and the ones digits run from 0 to 9; the same holds for 20-99.
The number 100 has digit sum \(1+0+0=1\)
She has obtained the sum: \(\left[\left(1+2+3+4+5+6+7+8+9\right)\times10\right]+\left[\left(1+2+3+4+5+6+7+8+9\right)\times10\right]+1\)
\(=450+450+1=901\)
So she obtained the sum \(901\)
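Both answers can be verified with a one-line digit sum:

```python
# Verify the digit-sum total for the numbers 1 to 100 computed above
total = sum(int(d) for n in range(1, 101) for d in str(n))
print(total)  # 901
```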
Traffic signals at each intersection on a main road all change on the same 2-minute cycle. A taxi driver knows that it is exactly 3.5 km from one intersection to the next. Without breaking the 50 km/h speed limit, what is the highest average speed, in kilometres per hour, he can travel so as to reach each intersection just as it changes to green?
Searching4You 27/07/2017 at 09:39
It's a good problem :((
Each cycle lasts \(\dfrac{2}{60}=\dfrac{1}{30}\) of an hour.
The speed needed to reach the next intersection after exactly one cycle satisfies \(\dfrac{3.5}{x}=\dfrac{1}{30}\Rightarrow x=105\) (kilometres per hour).
That breaks the speed limit, and so does two cycles (52.5 km/h), so he can instead reach each intersection after three cycles, at \(\dfrac{105}{3}=35\) (kilometres per hour).
That's what I think :))
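The answer's reasoning can be sketched directly: the trip between intersections must take a whole number of 2-minute cycles, so the speed is 105/n km/h for some integer n:

```python
# 3.5 km per n cycles of 2 minutes: speed = 3.5 / (2n/60) = 105/n km/h
candidates = [105 / n for n in range(1, 11)]
best = max(s for s in candidates if s <= 50)
print(best)  # 35.0
```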
A rectangular tile has a perimeter of 24 cm. When Sally places four of these tiles in a row to create a larger rectangle, she finds its perimeter is double the perimeter of a single tile. What would be the perimeter of the rectangle formed by adding another 46 tiles to make a row of 50 tiles?
A.306 B.400 C.416 D.480 E.162
Jasdeep plays a game in which he has to write the numbers 1 to 6 on the faces of a cube. However, he loses a point if he puts two numbers which differ by 1 on faces which share a common edge. What is the least number of points he can lose?
A.0 B.1 C.2 D.3 E.4
Lê Quốc Trần Anh Coordinator 27/07/2017 at 09:33
Put the pairs 1-2, 3-4 and 5-6 on opposite faces. Then 2-3 and 4-5 still share a common edge, so he will lose at least 2 points.
So the answer is \(C: 2\) points.
I can walk at 4km/h and ride my bike at 20km/h. I take 24 minutes less when I ride my bike to the station than when I walk. How many kilometres do I live from the station?
Kayasari Ryuunosuke Coordinator 26/07/2017 at 22:28
Ohhh! I'm sorry, the last row should read:
So, you live 120 kilometres from the station.
Kayasari Ryuunosuke Coordinator 26/07/2017 at 22:20
Let \(\left\{{}\begin{matrix}V_1=4\\V_2=20\end{matrix}\right.\) (km/h)
We have \(S=V\cdot t\) (with t the time, S the distance and V the speed)
\(\Rightarrow\left\{{}\begin{matrix}t_1=\dfrac{S}{V_1}\\t_2=\dfrac{S}{V_2}\end{matrix}\right.\)
From the problem: \(t_1=t_2+24\) (minutes)
\(\Leftrightarrow\dfrac{S}{V_1}=\dfrac{S}{V_2}+24\)
\(\Leftrightarrow\dfrac{S}{4}=\dfrac{S}{20}+24\)
\(\Leftrightarrow\dfrac{S}{4}=\dfrac{S+24\times20}{20}\)
\(\Leftrightarrow 20S = 4S + 4\times20\times24\)
\(\Leftrightarrow 16S = 4\times20\times24\)
\(\Leftrightarrow S = 120\) (km)
So, you live 120 kilometres from the station.
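Note that the derivation above mixes units: \(t_1 = S/V_1\) is in hours (V is in km/h) while the 24 is in minutes. Converting 24 minutes to 2/5 of an hour gives a much shorter distance (a quick check):

```python
from fractions import Fraction

v_walk, v_bike = 4, 20            # km/h
dt = Fraction(24, 60)             # 24 minutes expressed in hours

# S/v_walk - S/v_bike = dt  ->  S = dt / (1/v_walk - 1/v_bike)
S = dt / (Fraction(1, v_walk) - Fraction(1, v_bike))
print(S)  # 2 -> the station is 2 km away
```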
How many different isosceles triangles can be drawn with sides that can be only 2cm, 3cm, 7cm or 11cm in length? Note that equilateral triangles are isosceles triangles.
After half an hour, Maya notices that she is one-third of the way through her homework questions. If she keeps working at the same rate, how much longer, in minutes, can she expect her homework to take?
Phan Thanh Tinh Coordinator 26/07/2017 at 23:22
Convert: 1/2 hour = 30 minutes.
The number of minutes it takes Maya to finish all her homework questions is \(30:\dfrac{1}{3}=90\)
The number of minutes it takes Maya to finish the remaining questions is 90 - 30 = 60.
So, the answer is 60 minutes.
Answer
We cannot use the method in Example 2 to find a formula for $\tan(270^\circ-\theta)$, because doing so requires a defined value of $\tan270^\circ$ to carry on, and $\tan270^\circ$ is undefined.
Work Step by Step
To understand why we cannot use the identity for the tangent of a difference to find a formula for $\tan(270^\circ-\theta)$, we use that same identity to expand $\tan(270^\circ-\theta)$ and see where the problem arises. The identity is $$\tan(A-B)=\frac{\tan A-\tan B}{1+\tan A\tan B}$$ Applying it to $\tan(270^\circ-\theta)$: $$\tan(270^\circ-\theta)=\frac{\tan270^\circ-\tan\theta}{1+\tan270^\circ\tan\theta}$$ Here lies the problem: we cannot go any further, since $\tan270^\circ$ is undefined. To see why $\tan270^\circ$ is undefined, recall that $$\tan270^\circ=\frac{\sin270^\circ}{\cos270^\circ}$$ with $\sin270^\circ=\sin(-90^\circ)=-\sin90^\circ=-1$ and $\cos270^\circ=\cos(-90^\circ)=\cos90^\circ=0$. That means $$\tan270^\circ=\frac{-1}{0}$$ which is undefined. To conclude: the formula cannot be found using the tangent difference identity, because a defined value of $\tan270^\circ$ is essential to it.
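Although the tangent difference identity fails here, the sine and cosine difference identities still give a formula: $\sin(270^\circ-\theta)=-\cos\theta$ and $\cos(270^\circ-\theta)=-\sin\theta$, so $\tan(270^\circ-\theta)=\cos\theta/\sin\theta=\cot\theta$. A quick numerical check of that conclusion (a sketch, not part of the textbook answer):

```python
import math

# sin(270° - θ) = -cos θ and cos(270° - θ) = -sin θ, hence tan(270° - θ) = cot θ
for deg in (10, 37, 123, 290):
    theta = math.radians(deg)
    lhs = math.tan(math.radians(270) - theta)
    rhs = math.cos(theta) / math.sin(theta)   # cot θ
    assert abs(lhs - rhs) < 1e-9
print("tan(270° - θ) = cot θ verified numerically")
```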
I have a $4\times3$ matrix whose null space is spanned by $\{(1,2,3), (2,5,7)\}$.
How can I find the original matrix? It is not obvious from the span which vectors are free.
The nullspace of a matrix is the orthogonal complement of its rowspace. So you just need a set of vectors that are orthogonal to $(1,2,3)$ and $(2,5,7)$. Those are two linearly independent vectors in $\Bbb R^3$, so the orthogonal complement of them will just be a line. I.e. you just need to find $1$ vector orthogonal to both of them. So why not just use the cross product to do that?
$$(1,2,3) \times (2,5,7) = (14-15, 6-7, 5-4) = (-1,-1,1)$$
So just fill up the rows of a $4 \times 3$ matrix with scalar multiples of this vector. One such example is
$$\pmatrix{1 & 1 & -1 \\ -\pi & -\pi & \pi \\ 0 & 0 & 0 \\ 2 & 2 & -2}$$
You can construct the rows of the matrix $A$ whose null space is spanned by $\{(1,2,3)^\top, (2,5,7)^\top\}$ by finding rows orthogonal to these basis vectors, that is, by finding the null space of $$\pmatrix{1&2&3\\2&5&7} \to \pmatrix{1&0&1\\0&1&1}$$ You find that the null space of the latter is spanned by $$(1, 1, -1)^\top.$$
Therefore, one $4 \times 3$ matrix is $$A = \pmatrix{1&1&-1\\0&0&0\\0&0&0\\0&0&0\\} $$ and so is any matrix of the form $BA$, where $B$ is any $4 \times 4$ invertible matrix.
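Both answers can be checked numerically (a sketch in plain Python):

```python
# Cross product of the two spanning vectors, as in the first answer above
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

n = cross((1, 2, 3), (2, 5, 7))
print(n)  # (-1, -1, 1)

# Fill a 4x3 matrix with scalar multiples of that row and check A v = 0
A = [[1, 1, -1], [-3, -3, 3], [0, 0, 0], [2, 2, -2]]
for v in ([1, 2, 3], [2, 5, 7]):
    assert all(sum(r * x for r, x in zip(row, v)) == 0 for row in A)
print("both spanning vectors lie in the null space of A")
```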