Determine whether this series converges or diverges: $$\sum\limits_{n=0}^{\infty} \frac{1}{(2n+1)!}$$
I thought about using the limit comparison theorem or a direct comparison, but I'm stuck. Any pointers would be appreciated, guys.
Another way is
If $\displaystyle S_n = 1 + \frac{1}{3!} + \dots + \frac{1}{(2n+1)!}$
We have that
$\displaystyle S_n \le 1 + \frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \dots + \frac{1}{n(n+1)}$
$\displaystyle = 1 + (1 - \frac{1}{2}) + (\frac{1}{2} - \frac{1}{3}) + \dots + (\frac{1}{n} - \frac{1}{n+1}) = 2 - \frac{1}{n+1} < 2$
Thus $S_n < 2$ for all $n$; since $\displaystyle S_n$ is also monotonically increasing, it is bounded above and hence convergent.
I think svenkatr's answer is correct. He is using the comparison test, in particular comparing with the series for the exponential function at $x=1$, whose sum $e$ is a known finite number, so there is no need to separately prove that the series for $e$ converges.
Maybe you can prove the same by using the ratio test, $\lim_{n \rightarrow \infty} \displaystyle \left|\frac{a_{n+1}}{a_{n}}\right|$. Here you have $a_{n}=\displaystyle \frac{1}{(2n+1)!}$ and $a_{n+1}=\displaystyle \frac{1}{(2n+3)!}$, so $\displaystyle \frac{a_{n+1}}{a_n} = \frac{(2n+1)!}{(2n+3)!} = \frac{1}{(2n+3)(2n+2)}$, and hence $\lim_{n \rightarrow \infty} \displaystyle \frac{1}{(2n+3)(2n+2)} = 0$. According to the ratio test:
If r < 1, then the series converges. If r > 1, then the series diverges. If r = 1, the ratio test is inconclusive, and the series may converge or diverge.
Therefore, the series converges.
The series you have is
$1 + \frac{1}{3!} + \frac{1}{5!} \ldots $
If you add the even factorial terms, you get an upper bound i.e.,
$1 + \frac{1}{3!} + \frac{1}{5!} \ldots < \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!}+ \frac{1}{5!} \ldots$
This can be written more compactly as
$\sum_{n=0}^\infty \frac{1}{(2n+1)!} < \sum_{n=0}^\infty \frac{1}{n!} = e^1$
Therefore the series converges.
We have $$e^{1} = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots$$ and $$e^{-1} = 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots $$
Subtracting these two we get $$e - e^{-1} = 2 \cdot \Bigl( 1 + \frac{1}{3!} + \frac{1}{5!} + \cdots \Bigr)$$ Therefore the series converges to $$\frac{e-e^{-1}}{2} = \sum\limits_{n=0}^{\infty} \frac{1}{(2n+1)!}$$
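A quick numerical sanity check (not part of the original answers) confirms the closed form: the partial sums of $\sum_{n\ge 0} \frac{1}{(2n+1)!}$ rapidly approach $\frac{e-e^{-1}}{2} = \sinh(1)$.

```python
import math

# Partial sum of 1/(2n+1)! for n = 0..9; the truncation error is
# already below 1/21!, far under double precision.
s = sum(1 / math.factorial(2 * n + 1) for n in range(10))

print(s)                              # ~1.1752011936438014
print((math.e - math.exp(-1)) / 2)    # same value
print(math.sinh(1))                   # same value
```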
Can you bound the series from above by one that you know converges? The factorials grow very fast, so you should be able to.
To elaborate on the first answer given to this question by Ross Millikan.
$$\sum_{n=0}^\infty \frac{1}{(2n+1)!} = 1 + \sum_{n=1}^\infty \frac{1}{(2n+1)!}$$
$$< 1 + \sum_{n=1}^\infty \frac{1}{4^n} = \frac{4}{3}, \quad \textrm{ as } \frac{1}{(2n+1)!} < \frac{1}{4^n} \textrm{ for } n \ge1.$$
Hence by the comparison test the series converges.
Comparing with another more manageable series could be useful in this case for possible follow-on questions as, with this approach, it's not much extra work to prove that it converges to an irrational number. Such a proof might include: Let $S$ be the series and $S_N$ the $N$th partial sum and $R_N$ the remainder then $S=S_N + R_N,$ where we note that
$$R_N < \frac{1}{(2N+3)!} \left( 1 + \frac{1}{(2N+3)^2} + \frac{1}{(2N+3)^4} + \cdots \right).$$
METHOD I
We may simply resort to the Basel problem and get the inequality: $$0<\sum_{k=0}^{\infty}\frac{1}{(1+2k)!}\leq\sum_{k=0}^{\infty}\frac{1}{(1+k)^2}=\frac{\pi^2}{6}$$
METHOD II
According to Taylor's expansion we have that:
$$ \sinh(x) = \sum_{k=0}^{\infty}\frac{x^{1+2k}}{(1+2k)!}$$
For $x=1$ we get that the value of the series is $\sinh(1)$. The series converges.
Q.E.D.
This is how we can sometimes extract the exact closed form of a series:
$f(x) = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!}$
The radius of convergence is $\infty$
$f \in C^{\infty}(\mathbb{R})$
$f''(x) = f(x)$
The characteristic equation is $\lambda^2-1=0$, so $\lambda \in \{-1,1\}$, and solving our equation gives $f(x) = ae^{-x}+be^{x}$.
$f(0)= 0 \implies a+b=0 \implies a=-b \implies f(x) = a(e^{x}- e^{-x})$
$f'(0) = 1 \implies a=\frac{1}{2}$
$f(x) = \frac{e^{x}-e^{-x}}{2}$
so our series $\sum_{n=0}^{\infty} \frac{1}{(2n+1)!}$ converges to $\sinh(1)$.
We first determine all the eigenvalues of the matrix $A$. The characteristic polynomial $p(t)$ of $A$ is given by
\begin{align*}
p(t)&=\det(A-tI)\\
&=\begin{vmatrix}
3-t & -12 & 4 \\
-1 & -t &-2 \\
-1 & 5 & -1-t
\end{vmatrix}.
\end{align*}
Using the first row cofactor expansion, we compute
\begin{align*}
p(t)&=(3-t)\begin{vmatrix}
-t & -2\\
5& -1-t
\end{vmatrix}
-(-12)\begin{vmatrix}
-1 & -2\\
-1& -1-t
\end{vmatrix}
+4\begin{vmatrix}
-1 & -t\\
-1& 5
\end{vmatrix}\\
&=(3-t)(t^2+t+10)+12(t-1)+4(-5-t)\\
&=-t^3+2t^2+t-2.
\end{align*}
Therefore the characteristic polynomial of $A$ is
\[p(t)=-t^3+2t^2+t-2\]
and it can be factored as
\[p(t)=-(t-2)(t-1)(t+1).\]
The roots of the characteristic polynomial are exactly the eigenvalues of $A$. Thus, $2, \pm 1$ are the eigenvalues of $A$.
To find the eigenvalues of $A^5$, recall that if $\lambda$ is an eigenvalue of $A$, then $\lambda^5$ is an eigenvalue of $A^5$. It follows from this fact that $2^5, (-1)^5, 1^5$ are eigenvalues of $A^5$.

Since $A^5$ is a $3\times 3$ matrix, its characteristic polynomial has degree $3$, hence there are at most $3$ distinct eigenvalues of $A^5$. Because we have found three eigenvalues, $32, -1, 1$, of $A^5$, these are all the eigenvalues of $A^5$.
Recall that a matrix is singular if and only if $\lambda=0$ is an eigenvalue of the matrix. Since $0$ is not an eigenvalue of $A$, it follows that $A$ is nonsingular, and hence invertible. If $\lambda$ is an eigenvalue of $A$, then $\frac{1}{\lambda}$ is an eigenvalue of the inverse $A^{-1}$.

So $\frac{1}{\lambda}$ for $\lambda=2, \pm 1$ are eigenvalues of $A^{-1}$. As above, the matrix $A^{-1}$ is $3\times 3$, hence it has at most three distinct eigenvalues. We have found that $1/2, \pm 1$ are eigenvalues of $A^{-1}$, hence these are all the eigenvalues of $A^{-1}$.
In summary, all the eigenvalues of $A^5$ are $\pm 1, 32$. The matrix $A$ is invertible and all the eigenvalues of $A^{-1}$ are $\pm 1, 1/2$.
Comment.
Do not try to compute $A^5$ and $A^{-1}$ explicitly and then find their eigenvalues; that would be tedious by hand.
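A quick numerical cross-check (a sketch, not part of the original solution): verify that $2, 1, -1$ are indeed the roots of $-(t-2)(t-1)(t+1)$, and read off the corresponding eigenvalues of $A^5$ and $A^{-1}$.

```python
# Check the factored characteristic polynomial at the claimed roots,
# then map each eigenvalue lam of A to lam**5 (for A^5) and 1/lam
# (for A^{-1}).
def p(t):
    return -(t - 2) * (t - 1) * (t + 1)

eigenvalues = [2, 1, -1]
for lam in eigenvalues:
    assert p(lam) == 0                     # each claimed root vanishes

print([lam ** 5 for lam in eigenvalues])   # [32, 1, -1]
print([1 / lam for lam in eigenvalues])    # [0.5, 1.0, -1.0]
```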
I know how to derive the Navier-Stokes equations from the Boltzmann equation in the case where the bulk and shear viscosity coefficients are set to zero. I need only multiply it by the momentum and integrate over velocities.

But when I've tried to derive the NS equations with viscosity and bulk coefficients, I've failed. Most textbooks contain the following words: "to take into account the interchange of particles between fluid layers we need to modify the momentum flux density tensor". So are they saying that the NS equations with viscosity cannot be derived from the Boltzmann equation?
The target equation is $$ \partial_{t}\left( \frac{\rho v^{2}}{2} + \rho \epsilon \right) = -\partial_{x_{i}}\left(\rho v_{i}\left(\frac{v^{2}}{2} + w\right) - \sigma_{ij}v_{j} - \kappa \partial_{x_{i}}T \right), $$ where $$ \sigma_{ij} = \eta \left( \partial_{x_{[i}}v_{j]} - \frac{2}{3}\delta_{ij}\partial_{x_{k}}v_{k}\right) + \varepsilon \delta_{ij}\partial_{x_{k}}v_{k}, $$ $w = \mu + Ts$ is the specific heat function (enthalpy), and $\epsilon$ is the specific internal energy.
Edit. It seems that I've got this equation. After multiplying the Boltzmann equation by $\frac{m(\mathbf v - \mathbf u)^{2}}{2}$ and integrating over $\mathbf v$, I've got a transport equation which contains the objects $$ \Pi_{ij} = \rho\langle (v - u)_{i}(v - u)_{j} \rangle, \quad q_{i} = \rho \langle (\mathbf v - \mathbf u)^{2}(v - u)_{i}\rangle. $$ To calculate these I need an expression for the distribution function. For simplicity I've used the tau (relaxation-time) approximation; in the end I've got an expression $f = f_{0} + g$. The expressions for $\Pi_{ij}, q_{i}$ are then $$ \Pi_{ij} = \delta_{ij}P - \mu \left(\partial_{[i}u_{j]} - \frac{2}{3}\delta_{ij}\partial_{k}u_{k}\right) - \epsilon \delta_{ij}\partial_{k}u_{k}, $$ $$ q_{i} = -\kappa \partial_{i} T, $$ so I've got the wanted result.
This post imported from StackExchange Physics at 2016-02-10 14:08 (UTC), posted by SE-user Name YYY
Skills to Develop

- To understand what chi-square distributions are.
- To understand how to use a chi-square test to judge whether two factors are independent.

Chi-Square Distributions
As you know, there is a whole family of \(t\)-distributions, each one specified by a parameter called the degrees of freedom, denoted \(df\). Similarly, all the chi-square distributions form a family, and each of its members is also specified by a parameter \(df\), the number of degrees of freedom. Chi is a Greek letter denoted by the symbol \(\chi\) and chi-square is often denoted by \(\chi^2\).
Figure \(\PageIndex{1}\): Many \(\chi^2\) Distributions
Figure \(\PageIndex{1}\) shows several \(\chi\)-square distributions for different degrees of freedom. A chi-square random variable is a random variable that assumes only positive values and follows a \(\chi\)-square distribution.
Definition: critical value
The value of the chi-square random variable \(\chi^2\) with \(df=k\) that cuts off a right tail of area \(c\) is denoted \(\chi_c^2\) and is called a critical value (Figure \(\PageIndex{2}\)).
Figure \(\PageIndex{2}\): \(\chi_c^2\) Illustrated
Figure \(\PageIndex{3}\) below gives values of \(\chi_c^2\) for various values of \(c\) and under several chi-square distributions with various degrees of freedom.
Figure \(\PageIndex{3}\): Critical Values of Chi-Square Distributions Tests for Independence
Hypotheses tests encountered earlier in the book had to do with how the numerical values of two population parameters compared. In this subsection we will investigate hypotheses that have to do with whether or not two random variables take their values independently, or whether the value of one has a relation to the value of the other. Thus the hypotheses will be expressed in words, not mathematical symbols. We build the discussion around the following example.
There is a theory that the gender of a baby in the womb is related to the baby’s heart rate: baby girls tend to have higher heart rates. Suppose we wish to test this theory. We examine the heart rate records of \(40\) babies taken during their mothers’ last prenatal checkups before delivery, and to each of these \(40\) randomly selected records we compute the values of two random measures: 1) gender and 2) heart rate. In this context these two random measures are often called factors. Since the burden of proof is that heart rate and gender are related, not that they are unrelated, the problem of testing the theory on baby gender and heart rate can be formulated as a test of the following hypotheses:
\[H_0: \text{Baby gender and baby heart rate are independent}\\ vs. \\ H_a: \text{Baby gender and baby heart rate are not independent}\]
The factor gender has two natural categories or levels: boy and girl. We divide the second factor, heart rate, into two levels, low and high, by choosing some heart rate, say \(145\) beats per minute, as the cutoff between them. A heart rate below \(145\) beats per minute will be considered low and \(145\) and above considered high. The \(40\) records give rise to a \(2\times 2\)
contingency table. By adjoining row totals, column totals, and a grand total we obtain the table shown as Table \(\PageIndex{1}\). The four entries in boldface type are counts of observations from the sample of \(n = 40\). There were \(11\) girls with low heart rate, \(17\) boys with low heart rate, and so on. They form the core of the expanded table.
                          Heart Rate
\(\text{Gender}\)         \(\text{Low}\)   \(\text{High}\)   \(\text{Row Total}\)
\(\text{Girl}\)           \(11\)           \(7\)             \(18\)
\(\text{Boy}\)            \(17\)           \(5\)             \(22\)
\(\text{Column Total}\)   \(28\)           \(12\)            \(\text{Total}=40\)
In analogy with the fact that the probability of independent events is the product of the probabilities of each event, if heart rate and gender were independent then we would expect the number in each core cell to be close to the product of the row total \(R\) and column total \(C\) of the row and column containing it, divided by the sample size \(n\). Denoting such an expected number of observations \(E\), these four expected values are:
1st row and 1st column: \(E=(R\times C)/n = 18\times 28 /40 = 12.6\)
1st row and 2nd column: \(E=(R\times C)/n = 18\times 12 /40 = 5.4\)
2nd row and 1st column: \(E=(R\times C)/n = 22\times 28 /40 = 15.4\)
2nd row and 2nd column: \(E=(R\times C)/n = 22\times 12 /40 = 6.6\)
We update Table \(\PageIndex{1}\) by placing each expected value in its corresponding core cell, right under the observed value in the cell. This gives the updated table Table \(\PageIndex{2}\).
                          \(\text{Heart Rate}\)
\(\text{Gender}\)         \(\text{Low}\)          \(\text{High}\)        \(\text{Row Total}\)
\(\text{Girl}\)           \(O=11\), \(E=12.6\)    \(O=7\), \(E=5.4\)     \(R = 18\)
\(\text{Boy}\)            \(O=17\), \(E=15.4\)    \(O=5\), \(E=6.6\)     \(R = 22\)
\(\text{Column Total}\)   \(C = 28\)              \(C = 12\)             \(n = 40\)
A measure of how much the data deviate from what we would expect to see if the factors really were independent is the sum of the squares of the difference of the numbers in each core cell, or, standardizing by dividing each square by the expected number in the cell, the sum \(\sum (O-E)^2 / E\). We would reject the null hypothesis that the factors are independent only if this number is large, so the test is right-tailed. In this example the random variable \(\sum (O-E)^2 / E\) has the chi-square distribution with one degree of freedom. If we had decided at the outset to test at the \(10\%\) level of significance, the critical value defining the rejection region would be, reading from Figure \(\PageIndex{3}\), \(\chi _{\alpha }^{2}=\chi _{0.10 }^{2}=2.706\), so that the rejection region would be the interval \([2.706,\infty )\). When we compute the value of the standardized test statistic we obtain
\[\sum \frac{(O-E)^2}{E}=\frac{(11-12.6)^2}{12.6}+\frac{(7-5.4)^2}{5.4}+\frac{(17-15.4)^2}{15.4}+\frac{(5-6.6)^2}{6.6}=1.231\]
Since \(1.231 < 2.706\), the decision is not to reject \(H_0\). See Figure \(\PageIndex{4}\). The data do not provide sufficient evidence, at the \(10\%\) level of significance, to conclude that heart rate and gender are related.
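The arithmetic in this example is easy to check by script; here is a minimal Python sketch (not part of the original text) that recomputes the expected counts from the rule \(E = R\times C/n\) and sums \((O-E)^2/E\):

```python
# Recompute the chi-square statistic for the 2x2 gender / heart-rate table.
observed = [[11, 7],    # Girl: low, high
            [17, 5]]    # Boy:  low, high

row_totals = [sum(row) for row in observed]        # [18, 22]
col_totals = [sum(col) for col in zip(*observed)]  # [28, 12]
n = sum(row_totals)                                # 40

chi2 = sum((O - row_totals[i] * col_totals[j] / n) ** 2
           / (row_totals[i] * col_totals[j] / n)
           for i, row in enumerate(observed)
           for j, O in enumerate(row))

print(round(chi2, 3))   # 1.231, below the 10% critical value 2.706
```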
Figure \(\PageIndex{4}\): Baby Gender Prediction
With this specific example in mind, now turn to the general situation. In the general setting of testing the independence of two factors, call them Factor \(1\) and Factor \(2\), the hypotheses to be tested are
\[H_0: \text{The two factors are independent}\\ vs. \\ H_a: \text{The two factors are not independent}\]
As in the example each factor is divided into a number of categories or levels. These could arise naturally, as in the boy-girl division of gender, or somewhat arbitrarily, as in the high-low division of heart rate. Suppose Factor \(1\) has \(I\) levels and Factor \(2\) has \(J\) levels. Then the information from a random sample gives rise to a general \(I\times J\) contingency table, which with row totals, column totals, and a grand total would appear as shown in Table \(\PageIndex{3}\). Each cell may be labeled by a pair of indices \((i,j)\). \(O_{ij}\) stands for the observed count of observations in the cell in row \(i\) and column \(j\), \(R_i\) for the \(i^{th}\) row total and \(C_j\) for the \(j^{th}\) column total. To simplify the notation we will drop the indices so Table \(\PageIndex{3}\) becomes Table \(\PageIndex{4}\). Nevertheless it is important to keep in mind that the \(Os\), the \(Rs\) and the \(Cs\), though denoted by the same symbols, are in fact different numbers.
Table \(\PageIndex{3}\):

                            \(\text{Factor 2 Levels}\)
\(\text{Factor 1 Levels}\)  \(1\)        ⋯  \(j\)        ⋯  \(J\)        \(\text{Row Total}\)
\(1\)                       \(O_{11}\)   ⋯  \(O_{1j}\)   ⋯  \(O_{1J}\)   \(R_1\)
⋮                           ⋮               ⋮               ⋮            ⋮
\(i\)                       \(O_{i1}\)   ⋯  \(O_{ij}\)   ⋯  \(O_{iJ}\)   \(R_i\)
⋮                           ⋮               ⋮               ⋮            ⋮
\(I\)                       \(O_{I1}\)   ⋯  \(O_{Ij}\)   ⋯  \(O_{IJ}\)   \(R_I\)
\(\text{Column Total}\)     \(C_1\)      ⋯  \(C_j\)      ⋯  \(C_J\)      \(n\)

Table \(\PageIndex{4}\) has the same layout with the indices dropped: each core cell contains an \(O\), each row total is an \(R\), and each column total is a \(C\).
As in the example, for each core cell in the table we compute what would be the expected number \(E\) of observations if the two factors were independent. \(E\) is computed for each core cell (each cell with an \(O\) in it) of Table \(\PageIndex{4}\) by the rule applied in the example:
\[E=\frac{R\times C}{n}\]

where \(R\) is the row total and \(C\) is the column total corresponding to the cell, and \(n\) is the sample size.
Table \(\PageIndex{5}\):

                            \(\text{Factor 2 Levels}\)
\(\text{Factor 1 Levels}\)  \(1\)             ⋯  \(j\)             ⋯  \(J\)             \(\text{Row Total}\)
\(1\)                       \(O\), \(E\)      ⋯  \(O\), \(E\)      ⋯  \(O\), \(E\)      \(R\)
⋮                           ⋮                    ⋮                    ⋮                 ⋮
\(i\)                       \(O\), \(E\)      ⋯  \(O\), \(E\)      ⋯  \(O\), \(E\)      \(R\)
⋮                           ⋮                    ⋮                    ⋮                 ⋮
\(I\)                       \(O\), \(E\)      ⋯  \(O\), \(E\)      ⋯  \(O\), \(E\)      \(R\)
\(\text{Column Total}\)     \(C\)             ⋯  \(C\)             ⋯  \(C\)             \(n\)
Here is the test statistic for the general hypothesis based on Table \(\PageIndex{5}\), together with the conditions under which it follows a chi-square distribution.
Test Statistic for Testing the Independence of Two Factors
\[\chi^2=\sum \frac{(O-E)^2}{E}\]
where the sum is over all core cells of the table.
If
the two study factors are independent, and the expected count \(E\) of each cell in Table \(\PageIndex{5}\) is at least \(5\),
then \(\chi ^2\) approximately follows a chi-square distribution with \(df=(I-1)\times (J-1)\) degrees of freedom.
The same five-step procedures, either the critical value approach or the \(p\)-value approach, that were introduced in Section 8.1 and Section 8.3 are used to perform the test, which is always right-tailed.
Example \(\PageIndex{1}\)
A researcher wishes to investigate whether students’ scores on a college entrance examination (\(CEE\)) have any indicative power for future college performance as measured by \(GPA\). In other words, he wishes to investigate whether the factors \(CEE\) and \(GPA\) are independent or not. He randomly selects \(n = 100\) students in a college and notes each student’s score on the entrance examination and his grade point average at the end of the sophomore year. He divides entrance exam scores into two levels and grade point averages into three levels. Sorting the data according to these divisions, he forms the contingency table shown as Table \(\PageIndex{6}\), in which the row and column totals have already been computed.
                          \(GPA\)
\(CEE\)                   \(<2.7\)   \(2.7\; \; \text{to}\; \; 3.2\)   \(>3.2\)   \(\text{Row Total}\)
\(<1800\)                 \(35\)     \(12\)                            \(5\)      \(52\)
\(\geq 1800\)             \(6\)      \(24\)                            \(18\)     \(48\)
\(\text{Column Total}\)   \(41\)     \(36\)                            \(23\)     \(\text{Total}=100\)
Test, at the \(1\%\) level of significance, whether these data provide sufficient evidence to conclude that \(CEE\) scores indicate future performance levels of incoming college freshmen as measured by \(GPA\).
Solution:
We perform the test using the critical value approach, following the usual five-step method outlined at the end of Section 8.1.
Step 1. The hypotheses are \[H_0:\text{CEE and GPA are independent factors}\\ vs.\\ H_a:\text{CEE and GPA are not independent factors}\]

Step 2. The distribution is chi-square.

Step 3. To compute the value of the test statistic we must first compute the expected number for each of the six core cells (the ones whose entries are boldface):

1st row and 1st column: \(E=(R\times C)/n=52\times 41/100=21.32\)
1st row and 2nd column: \(E=(R\times C)/n=52\times 36/100=18.72\)
1st row and 3rd column: \(E=(R\times C)/n=52\times 23/100=11.96\)
2nd row and 1st column: \(E=(R\times C)/n=48\times 41/100=19.68\)
2nd row and 2nd column: \(E=(R\times C)/n=48\times 36/100=17.28\)
2nd row and 3rd column: \(E=(R\times C)/n=48\times 23/100=11.04\)
Table \(\PageIndex{6}\) is updated to Table \(\PageIndex{7}\).
                          \(GPA\)
\(CEE\)                   \(<2.7\)                \(2.7\; \; \text{to}\; \; 3.2\)   \(>3.2\)                \(\text{Row Total}\)
\(<1800\)                 \(O=35\), \(E=21.32\)   \(O=12\), \(E=18.72\)             \(O=5\), \(E=11.96\)    \(R = 52\)
\(\geq 1800\)             \(O=6\), \(E=19.68\)    \(O=24\), \(E=17.28\)             \(O=18\), \(E=11.04\)   \(R = 48\)
\(\text{Column Total}\)   \(C = 41\)              \(C = 36\)                        \(C = 23\)              \(n = 100\)
The test statistic is
\[\begin{align*} \chi^2 &= \sum \frac{(O-E)^2}{E}\\ &= \frac{(35-21.32)^2}{21.32}+\frac{(12-18.72)^2}{18.72}+\frac{(5-11.96)^2}{11.96}+\frac{(6-19.68)^2}{19.68}+\frac{(24-17.28)^2}{17.28}+\frac{(18-11.04)^2}{11.04}\\ &= 31.75 \end{align*}\]
Step 4. Since the \(CEE\) factor has two levels and the \(GPA\) factor has three, \(I = 2\) and \(J = 3\). Thus the test statistic follows the chi-square distribution with \(df=(2-1)\times (3-1)=2\) degrees of freedom.
Since the test is right-tailed, the critical value is \(\chi _{0.01}^{2}\). Reading from Figure \(\PageIndex{3}\), \(\chi _{0.01}^{2}=9.210\), so the rejection region is \([9.210,\infty )\).
Step 5. Since \(31.75 > 9.21\) the decision is to reject the null hypothesis. See Figure \(\PageIndex{5}\). The data provide sufficient evidence, at the \(1\%\) level of significance, to conclude that \(CEE\) score and \(GPA\) are not independent: the entrance exam score has predictive power.
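The same expected-count rule \(E = R\times C/n\) generalizes to any \(I\times J\) table; here is an illustrative Python sketch (not from the text) applied to the \(CEE\)/\(GPA\) data:

```python
# Chi-square statistic for an arbitrary I x J contingency table.
def chi_square_statistic(observed):
    rows = [sum(r) for r in observed]           # row totals R_i
    cols = [sum(c) for c in zip(*observed)]     # column totals C_j
    n = sum(rows)                               # grand total
    return sum((O - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i, r in enumerate(observed) for j, O in enumerate(r))

table = [[35, 12, 5],    # CEE < 1800
         [6, 24, 18]]    # CEE >= 1800
stat = chi_square_statistic(table)
df = (len(table) - 1) * (len(table[0]) - 1)

print(round(stat, 2), df)   # 31.75 2 -> exceeds 9.210, reject H0 at the 1% level
```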
Figure \(\PageIndex{5}\): Example \(\PageIndex{1}\)
Key Takeaway

- Critical values of a chi-square distribution with \(df\) degrees of freedom are found in Figure \(\PageIndex{3}\).
- A chi-square test can be used to evaluate the hypothesis that two random variables or factors are independent.

Contributor

Anonymous
Is the Derivative Linear Transformation Diagonalizable?
Problem 690
Let $\mathrm{P}_2$ denote the vector space of polynomials of degree $2$ or less, and let $T : \mathrm{P}_2 \rightarrow \mathrm{P}_2$ be the derivative linear transformation, defined by\[ T( ax^2 + bx + c ) = 2ax + b . \]
Is $T$ diagonalizable? If so, find a diagonal matrix which represents $T$. If not, explain why not.
The standard basis of the vector space $\mathrm{P}_2$ is the set $B = \{ 1 , x , x^2 \}$. The matrix representing $T$ with respect to this basis is\[ [T]_B = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix} . \]
The characteristic polynomial of this matrix is\[ \det ( [T]_B - \lambda I ) = \begin{vmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 2 \\ 0 & 0 & -\lambda \end{vmatrix} = -\lambda^3 . \]We see that the only eigenvalue of $T$ is $0$ with algebraic multiplicity $3$.
On the other hand, a polynomial $f(x)$ satisfies $T(f)(x) = 0$ if and only if $f(x) = c$ is a constant. The null space of $T$ is spanned by the single constant polynomial $\mathbb{1}(x) = 1$, and thus is one-dimensional. This means that the geometric multiplicity of the eigenvalue $0$ is only $1$.
Because the geometric multiplicity of $0$ is less than the algebraic multiplicity, the map $T$ is defective, and thus not diagonalizable.
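As a quick cross-check (a sketch, not part of the original solution), one can verify that $[T]_B$ is nilpotent of index $3$: a diagonalizable matrix whose only eigenvalue is $0$ would have to be the zero matrix, which $[T]_B$ is not, so $T$ cannot be diagonalizable.

```python
# Multiply 3x3 matrices and confirm M^2 != 0 but M^3 == 0.
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[0, 1, 0],
     [0, 0, 2],
     [0, 0, 0]]   # matrix of T in the basis {1, x, x^2}

M2 = matmul(M, M)
M3 = matmul(M2, M)
print(M2)   # [[0, 0, 2], [0, 0, 0], [0, 0, 0]] -> nonzero
print(M3)   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]] -> zero
```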
I have just learned about the Seifert-Van Kampen theorem and I find it hard to get my head around. The version of this theorem that I know is the following (given in Hatcher):
If $X$ is the union of path-connected open sets $A_\alpha$ each containing the basepoint $x_0 \in X$ and if each intersection $A_\alpha \cap A_\beta$ is path-connected, then the homomorphism $$\Phi:\ast_\alpha \pi_1(A_\alpha) \to \pi_1(X)$$ is surjective. If in addition each triple intersection $A_\alpha \cap A_\beta \cap A_\gamma$ is path-connected, then $\ker \Phi$ is the normal subgroup $N$ generated by all elements of the form $i_{\alpha\beta}(\omega)i_{\beta\alpha}(\omega^{-1})$, and so $\Phi$ induces an isomorphism $$\pi_1(X) \cong \ast_\alpha \pi_1(A_\alpha)/N.$$
$i_{\alpha\beta}$ is the homomorphism $\pi_1(A_\alpha \cap A_\beta) \to \pi_1(A_\alpha)$ induced from the inclusion $A_\alpha \cap A_\beta \hookrightarrow A_\alpha$ and $\omega$ is an element of $\pi_1(A_\alpha \cap A_\beta)$.
Now I tried to get my head round this theorem by trying to understand the example in Hatcher on the computation of the fundamental group of a wedge sum. Suppose for the moment we look at $X = X_1 \vee X_2$. I cannot just apply the theorem blindly because $X_i$ is not open in $X$. So we need to look at
$$A_1 = X_1 \vee W_2, \hspace{3mm} A_2 = X_2 \vee W_1$$
where $W_i$ is a neighbourhood of the basepoint $x_i$ in $X_i$ that deformation retracts onto $\{x_i\}$. I believe each of these is open in $X_1 \vee X_2$ because each $A_i$ is the union of equivalence classes that is open in $X_1 \sqcup X_2$. Now how do I see
that $A_1 \cap A_2$ deformation retracts onto the point $p_0$ (that I got from identifying $x_1 \sim x_2$) in $X$? If I can see that, then I know by Proposition 1.17 (Hatcher) that rigorously
$$\pi_1(A_1 \cap A_2) \cong \pi_1(p_0) \cong 0$$
from which it follows that $N= 0$ and the Seifert-Van Kampen Theorem tells me that
$$\pi_1(X_1\vee X_2) \cong \pi_1(X_1) \ast \pi_1(X_2).$$
1) Is my understanding of this correct?

2) What other useful exercises/examples/applications are there to illustrate the power of the Seifert-Van Kampen Theorem? I have also seen that you can use it to prove that $\pi_1(S^n) = 0$ for $n \geq 2$.
I have had a look at the examples section of Hatcher after the proof the theorem, but unfortunately I don't get much out of it. The only example I sort of got was the computation of $\pi_1(\Bbb{R}^3 - S^1)$.
I would appreciate it very much if I could see some other examples to illustrate this theorem. In particular, I heard that you can use it to compute group presentations for the fundamental group - it would be good if I could see examples like that.
Thanks.
Edit: Is there a way to rigorously prove that $A_1 \cap A_2$ deformation retracts onto the point $\{p_0\}$? |
I wish to compute $$ \int_C \frac{\sin z}{(z+1)^7}\ \text{d}z $$ where $C$ is the circle of radius $6$, centre $0$, positively oriented.
Now I know that $f(z) = \frac{\sin z}{(z+1)^7}$ is analytic on $\mathbb{C}\setminus\{-1\}$ since $f$ is the composition and quotient of analytic functions.
I tried to use the Cauchy-Goursat Extension which says $$ \int_C f(z)\ \text{d}z = \int_{C'} f(z)\ \text{d}z $$ where $C'$ is the circle of radius $1$, center $-1$, positively oriented. However this only reformulates the problem.
My second thought was to integrate around $-1$ with a keyhole-like contour with an $\varepsilon$-wide cut and show it goes to $0$ but this feels unnecessarily complicated. Am I missing something that'd simplify things?
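For what it's worth, a numerical cross-check (not part of the original question) is straightforward: parametrize the circle, integrate with the trapezoid rule, and compare against the value $\frac{2\pi i}{6!}f^{(6)}(-1)$ predicted by the generalized Cauchy integral formula with $f(z)=\sin z$, where $f^{(6)}=-\sin$, giving $\frac{\pi i \sin 1}{360}$.

```python
import cmath
import math

# Integrate sin(z)/(z+1)^7 over |z| = 6 (positively oriented) using the
# periodic trapezoid rule, which converges extremely fast for smooth
# periodic integrands.
N = 4096
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = 6 * cmath.exp(1j * t)                        # point on the contour
    dz = 6j * cmath.exp(1j * t) * (2 * math.pi / N)  # z'(t) dt
    total += cmath.sin(z) / (z + 1) ** 7 * dz

predicted = 1j * math.pi * math.sin(1) / 360         # 2*pi*i*sin(1)/720
print(abs(total - predicted))                        # negligibly small
```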
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including:ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room.An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. Just we need to carefully write down the potential function which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing or something else, I'm not sure
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from the experts on the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too.
As forest and natural resource managers, we must be aware of how our timber management practices impact the biological communities in which they occur. A silvicultural prescription is going to influence not only the timber we are growing but also the plant and wildlife communities that inhabit these stands. Landowners, both public and private, often require management of non-timber components, such as wildlife, along with meeting the financial objectives achieved through timber management. Resource managers must be cognizant of the effect management practices have on plant and wildlife communities. The primary interface between timber and wildlife is habitat, and habitat is simply an amalgam of environmental factors necessary for species survival (e.g., food or cover). The key component of habitat for most wildlife is vegetation, which provides food and structural cover. Creating prescriptions that combine timber and wildlife management objectives is crucial for a sustainable, long-term balance in the system.
So how do we develop a plan that will encompass multiple land use objectives? Knowledge is the key. We need information on the habitat required by the wildlife species of interest and we need to be aware of how timber harvesting and subsequent regeneration will affect the vegetative characteristics of the system. In other words, we need to understand the diversity of organisms present in the community and appreciate the impact our management practices will have on this system.
Diversity of organisms and the measurement of diversity have long interested ecologists and natural resource managers. Diversity is variety, and at its simplest level it involves counting or listing species. Biological communities vary in the number of species they contain (richness) and the relative abundance of these species (evenness). Species richness, as a measure on its own, does not take into account the number of individuals of each species present. It gives as much weight to species with few individuals as to species with many individuals. Thus a single yellow birch has as much influence on the richness of an area as 100 sugar maple trees. Evenness is a measure of the relative abundance of the different species making up the richness of an area. Consider the following example.
Example \(\PageIndex{1}\):
Species         Sample 1    Sample 2
Sugar Maple     167         391
Beech           145         24
Yellow Birch    134         31
Both samples have the same richness (3 species) and the same number of individuals (446). However, the first sample has more evenness than the second. The number of individuals is more evenly distributed between the three species. In the second sample, most of the individuals are sugar maples with fewer beech and yellow birch trees. In this example, the first sample would be considered more diverse.
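A minimal code sketch of this bookkeeping (counts taken from the example; evenness shows up in how uniform the relative abundances are):

```python
# Counts from the two samples in the example above
sample1 = {"sugar maple": 167, "beech": 145, "yellow birch": 134}
sample2 = {"sugar maple": 391, "beech": 24, "yellow birch": 31}

richness1 = len(sample1)            # 3 species
richness2 = len(sample2)            # 3 species
total1 = sum(sample1.values())      # 446 individuals
total2 = sum(sample2.values())      # 446 individuals

# Relative abundances: evenness is about how uniform these are.
p1 = {sp: n / total1 for sp, n in sample1.items()}
p2 = {sp: n / total2 for sp, n in sample2.items()}
# The largest share in sample 2 far exceeds the largest share in sample 1,
# reflecting that sample 2 is dominated by sugar maple.
```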
A diversity index is a quantitative measure that reflects the number of different species and how evenly the individuals are distributed among those species. Typically, the value of a diversity index increases when the number of types increases and the evenness increases. For example, communities with a large number of species that are evenly distributed are the most diverse and communities with few species that are dominated by one species are the least diverse. We are going to examine several common measures of species diversity.
Simpson’s Index
Simpson (1949) developed an index of diversity that is computed as:
$$D = \sum^R_{i=1} (\dfrac {n_i(n_i-1)}{N(N-1)})$$
where
\(n_i\) is the number of individuals in species \(i\), and \(N\) is the total number of individuals in the sample. An equivalent formula is:
$$D = \sum^R_{i=1} p_i^2$$
where \(p_i\) is the proportional abundance for each species and
R is the total number of species in the sample. Simpson’s index is a weighted arithmetic mean of proportional abundance and measures the probability that two individuals randomly selected from a sample will belong to the same species. Since the mean of the proportional abundance of the species increases with decreasing number of species and increasing abundance of the most abundant species, the value of D is small in data sets of high diversity and large in data sets of low diversity. The value of Simpson’s D ranges from 0 to 1, with 0 representing infinite diversity and 1 representing no diversity, so the larger the value of \(D\), the lower the diversity. For this reason, Simpson’s index is usually expressed as its inverse (1/D) or its complement (1 − D), which is also known as the Gini-Simpson index. Let’s look at an example.
Example \(\PageIndex{2}\): Calculating Simpson’s Index
We want to compute Simpson’s \(D\) for this hypothetical community with three species.
Species         No. of individuals
Sugar Maple     35
Beech           19
Yellow Birch    11
First, calculate N.
$$N = 35 + 19 + 11 = 65$$
Then compute the index using the number of individuals for each species:
$$D = \sum^R_{i=1} (\dfrac {n_i(n_i-1)}{N(N-1)}) = (\frac {35(34)}{65(64)} +\frac {19(18)}{65(64)} + \frac {11(10)}{65(64)}) = 0.3947$$
The inverse is found to be:
$$\frac {1}{0.3947} = 2.5336$$
Using the inverse, the value of this index starts with 1 as the lowest possible figure. The higher the value of this inverse index, the greater the diversity. If we use the complement of Simpson’s D, the value is:
$$1-0.3947 = 0.6053$$
This version of the index has values ranging from 0 to 1, but now the greater the value, the greater the diversity of your sample. This complement represents the probability that two individuals randomly selected from a sample will belong to different species. It is very important to clearly state which version of Simpson’s D you are using when comparing diversity.
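The arithmetic in this example is easy to verify in code. A minimal sketch (counts taken from the table above):

```python
# Counts for the hypothetical community: sugar maple, beech, yellow birch
counts = [35, 19, 11]
N = sum(counts)   # 65 individuals in total

# Simpson's D, its inverse, and its complement (the Gini-Simpson index)
D = sum(n * (n - 1) for n in counts) / (N * (N - 1))   # ≈ 0.3947
inverse_D = 1 / D                                       # ≈ 2.53
gini_simpson = 1 - D                                    # ≈ 0.6053
```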
Shannon-Wiener Index
The Shannon-Wiener index (Barnes et al. 1998) was developed from information theory and is based on measuring uncertainty. The degree of uncertainty in predicting the species of a random sample is related to the diversity of the community. If a community has low diversity (dominated by one species), the uncertainty of prediction is low; a randomly sampled individual is most likely going to be of the dominant species. However, if diversity is high, uncertainty is high. It is computed as:
$$H' = -\sum^R_{i=1} p_i \ln(p_i) = \ln \left(\frac {1}{\prod^R_{i=1} p^{p_i}_i}\right)$$
where
\(p_i\) is the proportion of individuals that belong to species \(i\) and \(R\) is the number of species in the sample. Since the sum of the \(p_i\)'s equals unity by definition, the denominator equals the weighted geometric mean of the \(p_i\) values, with the \(p_i\) values being used as weights. The term in the parentheses equals true diversity \(D\) and \(H' = \ln(D)\). When all species in the data set are equally common, all \(p_i\) values equal \(1/R\) and the Shannon-Wiener index equals \(\ln(R)\). The more unequal the abundance of species, the larger the weighted geometric mean of the \(p_i\) values, and the smaller the index. If abundance is primarily concentrated in one species, the index will be close to zero.
An equivalent and computationally easier formula is:
$$H' = \frac {N ln \ N -\sum (n_i ln \ n_i)}{N}$$
where
N is the total number of individuals and \(n_i\) is the number of individuals in species \(i\). The Shannon-Wiener index is most sensitive to the number of species in a sample, so it is usually considered to be biased toward measuring species richness.
Let’s compute the Shannon-Wiener diversity index for the same hypothetical community as in the previous example.
Example \(\PageIndex{3}\): Calculating the Shannon-Wiener Index
Species         No. of individuals
Sugar Maple     35
Beech           19
Yellow Birch    11
We know that N = 65. Now let’s compute the index:
$$H' = \dfrac {271.335 - (124.437+55.944+26.377)}{65}=0.993$$
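This computation can be checked in a few lines of code, using both the computational form and the defining form of the index (the two should agree):

```python
import math

counts = [35, 19, 11]   # sugar maple, beech, yellow birch
N = sum(counts)         # 65

# Computational form: H' = (N ln N - sum n_i ln n_i) / N
H = (N * math.log(N) - sum(n * math.log(n) for n in counts)) / N   # ≈ 0.993

# Defining form: H' = -sum p_i ln p_i  -- algebraically identical
H_check = -sum((n / N) * math.log(n / N) for n in counts)
```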
Let $G^n$ denote the OR product of a graph with itself $n$ times, i.e. the graph which has an edge between distinct vertices $(v_1,v_2,\ldots,v_n)$ and $(u_1,u_2,\ldots,u_n)$ if there exists some $i$ so that $(v_i,u_i)$ is an edge in $G.$
The book Fractional Graph Theory by Scheinerman and Ullman has a number of wonderful results connecting graph theory to fractional graph theory, some of them having a flavor illustrated by the following result:
If $\chi(G), \chi_f(G)$ are the chromatic number and fractional chromatic number of a graph respectively, then
$$\chi_f(G) = \lim_{n\to\infty} \chi(G^n)^{\frac 1n}.$$
I am interested in the rate of convergence in this and other results of this kind. What can we say about how fast the sequence
$$\log \chi_f(G) - \frac 1n \log\chi(G^n)$$
goes to zero? I suspect it goes as $\frac{1}{\sqrt{n}}.$ If this is true, then what is
$$\limsup_n \sqrt{n}\left(\log \chi_f(G) - \frac 1n \log\chi(G^n)\right)?$$
They make a virtual image on a plane behind the HUD. A good model system for understanding this kind of thing is the aplanatic sphere: Born and Wolf, "Principles of Optics", gives a good discussion of this if you can get hold of it.
If not, here's a simple explanation: imagine a lens system that collimates a point source, i.e. the point source lies on the lens's focal plane. Now imagine your HUD image on this plane, and shift this plane ever so slightly nearer to the lens than the focal plane. The lens will not quite collimate the light from the point sources: instead the light is still slightly divergent, and the light from each point source seems to be diverging from a point source a long way further from the lens than the focal plane.
The thin lens equation, with proper heed taken of the meaning of the sign on the results, can be used to explore this concept quantitatively thus:
$$\frac{1}{d_i}+\frac{1}{d_o} = \frac{1}{f}$$
Put $d_o =f-\epsilon$, where $d_o$ is the object distance and $\epsilon>0$ is small and positive. You'll see $d_i\approx -f^2/\epsilon$ is big and negative, meaning a virtual image a long way behind the focal plane.
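A quick numerical check of this, with hypothetical numbers (a 50 mm lens and the HUD plane 0.5 mm inside the focal plane); a negative $d_i$ signals a virtual image:

```python
# Thin lens: 1/d_i + 1/d_o = 1/f  =>  d_i = f * d_o / (d_o - f)
# Hypothetical values, purely for illustration:
f = 0.050        # focal length, metres (a 50 mm lens)
eps = 0.0005     # how far inside the focal plane the object sits
d_o = f - eps

d_i = f * d_o / (d_o - f)   # negative => virtual image behind the lens
# d_i is approximately -f**2 / eps, i.e. roughly -5 m here
```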
Edit after Question from OP
I think I'm probably going to need a diagram for this. I understand how optics can be used to make close things appear far away. I guess the real question is how do you do that AND see the real world behind that image? Is it just because the lensed image is only displayed on a relatively sparse grid of pixels, and you can see between them, or does the HUD optics actually incorporate the background image?
There are several ways this can be done. In older systems, a partially silvered mirror is used to add the light field from the imaging array to the incoming view, so that the user can see both lightfields at once, as in my drawing below.
One can also use transparent screens, grounded on LCD or similar technologies and put them just in front of the focal plane of the converging lens in a Galilean telescope. A second Galilean telescope is used to compensate for the gain of the one with the imaging array in it, so that the image through the viewfinder is unmagnified, as in my drawing below.
As an aside, a Galilean telescope is made of a converging lens and a diverging lens such that the focal planes of the two are on top of one another. Thus rays from infinity have the paths as shown in red.
Let's say we have an object in 3D space located at P(1,1,1), and we decide to rotate this object within any of the planes XY, XZ, YZ about any of the 3 axes, where this is a basic right-handed 3D Cartesian system. The rotation matrices $R_n$ for P will look like these:
$$ R_x \space|| \space R_y \space || \space R_z $$
$$R_x(\theta) P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \space\space\cos\theta \\\end{bmatrix} \begin{bmatrix} P_x \\ P_y \\ P_z \\ \end{bmatrix},$$
$$R_y(\theta) P = \begin{bmatrix} \space\cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \\\end{bmatrix} \begin{bmatrix} P_x \\ P_y \\ P_z \\\end{bmatrix},$$
$$R_z(\theta) P= \begin{bmatrix} \cos\theta & -\sin \theta & 0 \\ \sin\theta & \space\space\cos \theta & 0 \\ 0 & 0 & 1 \\\end{bmatrix}\begin{bmatrix} P_x \\ P_y \\ P_z \\ \end{bmatrix}$$
When doing rotations in 3D about the various axes, the order of the rotations, the handedness of the system, and the direction of the rotations all matter, as does rotating about multiple axes. To demonstrate this, let's say the angle $\theta = 90°$ and we apply this rotation consecutively about multiple axes; you will see that we eventually end up with gimbal lock. First we will do $R_x$ by 90°, then $R_y$, and finally try $R_z$.
Here we are going to apply a 90° rotation to the point or vector P(1,1,1) about the X axis:
$R_x(90°)$ $\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}$ = $\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos 90° & -\sin 90° \\ 0 & \sin 90° & \space\space\cos 90° \\\end{bmatrix}$ $\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}$ = $\begin{bmatrix} 1 & 0 & \space\space 0 \\ 0 & 0 & -1 \\ 0 & 1 & \space\space 0 \\\end{bmatrix}$ $\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}$ = $\begin{bmatrix} 1 \\ -1 \\ 1 \\\end{bmatrix}$
Now that our vector P has been transformed, we will apply another 90° rotation, this time about the Y axis, with the new values.
$R_y(90°)$ $\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$ =$\begin{bmatrix} \cos 90° & 0 & \sin 90°\\ 0 & 1 & 0 \\ -\sin 90° & 0 & \cos 90°\\ \end{bmatrix}$$\begin{bmatrix} 1 \\ -1 \\ 1 \\\end{bmatrix}$ =$\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0\\\end{bmatrix}$ $\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$ = $\begin{bmatrix} 1 \\ -1 \\ -1 \\ \end{bmatrix}$
We can now finish with $R_z$
$R_z(90°)$ $\begin{bmatrix} 1 \\ -1 \\ -1 \\ \end{bmatrix}$ = $\begin{bmatrix} \cos 90° & -\sin 90° & 0 \\ \sin 90° & \cos 90° & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}$ $\begin{bmatrix} 1 \\ -1 \\ -1 \\ \end{bmatrix}$ =$\begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\\end{bmatrix}$ $\begin{bmatrix} 1 \\ -1 \\ -1 \\ \end{bmatrix}$ =$\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}$
And as you can see in the matrix calculations, there has been a change in direction each time we rotated by 90 degrees, and here we have lost a degree of freedom of rotation. We rotated about the X axis by 90°, which happens to be perpendicular (orthogonal) to both the Y and Z axes, as is evident from the fact that $\cos(90°) = 0$. When we then rotate by 90° about the Y axis (again perpendicular to both X and Z), two axes of rotation become aligned, so when we try to rotate in the third dimension of space we have lost a degree of freedom: we can no longer distinguish between X and Y, as they both rotate simultaneously and there is no way to separate them. This can be seen in the matrix calculations above. It may not be completely evident now, but if you work through all 6 permutations of the order of axis rotations you will see the pattern emerge. These kinds of rotations are called Euler angles.
It also doesn't matter what combination of axes you rotate with, because this will happen with every combination whenever two axes of rotation become parallel.
$$R_x(90°)P \to R_y(90°)P \to R_z(90°)P \implies Gimbal Lock$$$$R_x(90°)P \to R_z(90°)P \to R_y(90°)P \implies Gimbal Lock$$$$R_y(90°)P \to R_x(90°)P \to R_z(90°)P \implies Gimbal Lock$$$$R_y(90°)P \to R_z(90°)P \to R_x(90°)P \implies Gimbal Lock$$$$R_z(90°)P \to R_x(90°)P \to R_y(90°)P \implies Gimbal Lock$$$$R_z(90°)P \to R_y(90°)P \to R_x(90°)P \implies Gimbal Lock$$
If I simplify this by showing all 6 combinations with the ending transformation vectors or matrices for that same point you should see the pattern and these transformations are:
$$R_x(90°) P(1,1,1) \to \begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}R_y(90°) \to\begin{bmatrix} 1 \\ -1 \\ -1 \\ \end{bmatrix}R_z(90°) \to\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}$$
$$R_x(90°) P(1,1,1) \to \begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}R_z(90°) \to\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}R_y(90°) \to\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}$$
$$R_y(90°) P(1,1,1) \to \begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}R_x(90°) \to\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}R_z(90°) \to\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}$$
$$R_y(90°) P(1,1,1) \to \begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}R_z(90°) \to\begin{bmatrix} -1 \\ 1 \\ -1 \\ \end{bmatrix}R_x(90°) \to\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}$$
$$R_z(90°) P(1,1,1) \to \begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}R_x(90°) \to\begin{bmatrix} -1 \\ -1 \\ 1 \\ \end{bmatrix}R_y(90°) \to\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$$
$$R_z(90°) P(1,1,1) \to\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}R_y(90°) \to\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}R_x(90°) \to\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$$
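These chains can be verified with plain Python (no libraries needed), and the lost degree of freedom can be made concrete: under this particular right-handed convention, once the middle rotation is pinned at 90°, the outer X and Z angles only enter through their difference. A minimal sketch:

```python
import math

def rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mat(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(a, v):
    return [sum(a[i][k] * v[k] for k in range(3)) for i in range(3)]

q = math.pi / 2  # 90 degrees

# Reproduce the worked chain R_x(90°) -> R_y(90°) -> R_z(90°) on P(1,1,1)
p = [1.0, 1.0, 1.0]
p1 = mat_vec(rx(q), p)    # [1, -1, 1]
p2 = mat_vec(ry(q), p1)   # [1, -1, -1]
p3 = mat_vec(rz(q), p2)   # [1, 1, -1]

# Lost degree of freedom: with the middle rotation pinned at 90°,
# R_z(g) R_y(90°) R_x(a) coincides with R_z(g - a) R_y(90°) -- the outer
# angles cannot be separated.
a, g = 0.3, 1.1
m1 = mat_mat(rz(g), mat_mat(ry(q), rx(a)))
m2 = mat_mat(rz(g - a), ry(q))
```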
If you look at the axis that you started with, it doesn't matter in which order you apply the other two, because the result will be the same for that starting axis of rotation. So intuitively we can say the following about using Euler angles of rotation in 3D: the handedness of the coordinate system matters, because it changes the rotation matrices — the trig functions and their signs will be different, and so will your results. Now for this particular coordinate system, we can visually conclude this about Euler angles:
$$R_x(\theta) P \implies \begin{bmatrix} a \\ a \\ -a \\ \end{bmatrix}$$
$$R_y(\theta) P \implies \begin{bmatrix} -a \\ a \\ a \\ \end{bmatrix}$$
$$R_z(\theta) P \implies \begin{bmatrix} a \\ -a \\ a \\ \end{bmatrix}$$
It may not seem quite apparent by the numbers as to what exactly is causing the gimbal lock, but the results of the transformations should give you some insight to what is going on. It might be easier to visualize than just by looking at the math. So I provided a link to a good video below. Now if you are interested in proofs then you have plenty of work ahead of you for there are also some other factors that cause this to happen such as the relationships of the $\cos(\theta)$ between two vectors being equal to the dot product between those vectors divided by their magnitudes.
$$\cos(\theta) = \frac{ V_1 \cdot V_2}{ \lvert V_1 \rvert \lvert V_2 \rvert } $$
Other contributing factors are the rules of calculus on the trigonometric functions especially the $\sin$ and $\cos$ functions.
$$(\sin{x})' = \cos{x}$$
$$(\cos{x})' = -\sin{x}$$
$$\int \sin{ax} \space dx = -\frac{1}{a}\cos{ax} + C$$
$$\int \cos{ax} \space dx = \frac{1}{a}\sin{ax} + C$$
There is another interesting fact that I think may bear on the reasoning behind gimbal lock, but that is a topic for another day, as it would merit its own page. Do forgive me if the math formatting isn't perfect; I am new to this particular Stack Exchange site and am learning the math tags and formatting as I go.
Here is an excellent video illustrating gimbal lock: YouTube: Gimbal Lock
I think I've figured this out. The point is that, the rigorous meaning one can draw from the formal covariance of $J^\mu$ is that the momentum-space coefficient functions of $J^\mu$ (i.e. the functions in front of monomials of $a_p$ and $a^\dagger_p$) transform covariantly under the change of variable $p\to \Lambda p$. The covariance of the coefficient functions is unaffected by normal ordering, and is sufficient to give rise to the covariance of $:J^\mu:$. The rest of this answer will be an elaboration of the first paragraph.
Let me first clarify the notations used and the meaning of the formal covariance of the ill-defined current $J^\mu$. I'm going to ignore the spin degrees of freedom in this discussion, but one should see the generalization to include spin only involves a straightforward (but perhaps cumbersome) change of notations. I'm also ignoring the spacetime dependence, that is to say I'm only considering the covariance of $J^\mu(0)$, and the generalization to $J^\mu(x)$ is straightforward and easy.
In the context of my question, $U(\Lambda)$ is defined such that
$$U(\Lambda) a_{p} U^{-1}(\Lambda)=\sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}.$$
The covariance of $J^\mu$ must be understood in a very formal and specific sense, the sense in which the covariance is formally proved. For example, in the case of a fermionic bilinear:
$$U(\Lambda)J^{\mu}U(\Lambda)^{-1}=U\bar{\psi}\gamma^{\mu}\psi U^{-1}\\ =U\bar{\psi}_iU^{-1}(\gamma^{\mu})_{ij}U \psi_j U^{-1}=\bar{\psi}D(\Lambda)\gamma^{\mu}D(\Lambda)^{-1}\psi= \Lambda^{\mu}_{\ \ \nu}\bar{\psi}\gamma^{\nu}\psi, $$
where $D(\Lambda)$ is the spinor representation of Lorentz group, typically constructed via Clifford algebra. Note in this formal proof, what's important is that, under the change $a_{p}\to \sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}$ (ignoring spin indices of course) the elementary field transforms as $\psi \to D(\Lambda)\psi$. In the proof, no manipulation of operator ordering and commutation relations ever occurs: all we do is to do a change of integration variable, and let the algebraic properties of the coefficient functions take care of the rest. In fact, we'd better not mess with the operator ordering, as it can easily spoil the formal covariance (example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d}p E_{p}(a_p^\dagger a_p+\delta(0))$, see my longest comment under drake's answer).
To explain what's going on in more detail without getting tangled in notational nuisances, let me remind you again that I'll omit the spin degrees of freedom, but it should be transparent enough by the end of the argument that it's readily generalizable to the spinor case, since all that matters is that we know the coefficient functions (even with spin indices) transform covariantly. The mathematical gist is, after multiplying the elementary fields and grouping c/a operators (during the grouping no operator ordering procedure should be performed at all, e.g. $a^\dagger(p_1)a(p_2)$ and $a(p_2)a^\dagger(p_1)$ should be treated as two independent terms), a typical monomial term in $J^\mu(0)$ has the form
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\mu(\{p_i\}),$$
where $M$ is a monomial of c/a operators not necessarily normally ordered, but has an ordering directly from the multiplication of elementary fields.
The formal covariance of $J^\mu$ means
$$\Lambda^\mu_{\ \ \nu}\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\nu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(\Lambda p_i), a(\Lambda p_i)\})f^\mu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}q_i\right)\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right) M(\{a^\dagger(q_i), a(q_i)\})f^\mu(\{\Lambda^{-1}q_i\}) ,$$
where $\prod\limits_{i=1}^n {E_{\Lambda^{-1} q_i}}/{E_{q_i}}$ comes from the transformation of measure and $\prod\limits_{i=1}^{m}\sqrt{{E_{q_i}}/{E_{\Lambda^{-1} q_i}}}$ from the transformation of c/a operators in $M$. This is equivalent to
$$f^\mu(\{\Lambda^{-1}q_i\})\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right)=\Lambda^\mu_{\ \ \nu}f^\nu(\{q_i\}).$$
The above equation makes completely rigorous sense since it's a statement about c-number functions. Obviously, this equation is sufficient to prove the covariance of the normal ordering
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right):M(\{a^\dagger(p_i), a(p_i)\}):f^\mu(\{p_i\}),$$
since on the operator part only a change of integration variable is needed for the proof.
So let's recapitulate the logic of this answer:
1. The current is only covariant when written in a certain way, but not in all ways. (recall the free scalar field Hamiltonian example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d} pE_{p}(a_p^\dagger a_p+\delta(0))$, which is formally covariant in the first form but not in the second form.)
2. In that certain way where the current is formally covariant, the formal covariance really means a genuine covariance of the coefficient functions.
3. The covariance of the coefficient functions is sufficient to establish the covariance of the normally ordered current.
Derive Q of R in parallel with tank circuit
I've been experimenting with an LC tank circuit in series with a resistance R, and I've noted that the Q seems to increase with R. I've tried to derive this result via phasor analysis, but I'm not sure if my expression is correct.
To make things clear, I'm talking about the circuit with impedance ##Z=R+jX_L || X_C=R+j(\dfrac{\omega L}{1-\omega^2 LC}) ##

The only thing I've found via Google is this: https://electronics.stackexchange.com/questions/108788/voltage-output-from-a-tank-circuit where the first answer suggests that ##Q=R\sqrt{\dfrac{C}{L}}##, which at least agrees with my measured results. I've found, however, that ##Q=R\sqrt{\dfrac{C}{L+4R^2C}}##

So which result, if either, is right? I note that mine approximates the quoted result if ##L \gg 4R^2C##.
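One way to sanity-check the two candidate expressions against each other is numerically. The component values below are hypothetical, chosen so that ##L \gg 4R^2C## holds; in that regime the two formulas should (and do) agree closely:

```python
import math

def q_quoted(R, L, C):
    # Q from the linked answer: Q = R * sqrt(C / L)
    return R * math.sqrt(C / L)

def q_derived(R, L, C):
    # Q from the poster's derivation: Q = R * sqrt(C / (L + 4 R^2 C))
    return R * math.sqrt(C / (L + 4 * R**2 * C))

# Hypothetical component values with L >> 4 R^2 C  (4 R^2 C = 4e-7 H here)
R, L, C = 10.0, 1e-3, 1e-9

q1 = q_quoted(R, L, C)
q2 = q_derived(R, L, C)
rel_diff = abs(q1 - q2) / q1   # small when L dominates 4 R^2 C
```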
Likelihood with \Omega_K \ne 0
Posts: 69, Joined: June 13 2007, Affiliation: Malaviya National Institute of Technology Jaipur
Hi,
I am trying to get the likelihood for the models with [tex]\Omega_k \ne 0[/tex]. I am using CAMB to get the [tex]cls[/tex] and then using these [tex]cls[/tex] to get the likelihood with the WMAP 5 likelihood code. I am putting the parameters in params.ini from the parameter table given on the WMAP site. If I put in the parameters of the model [tex]olcdm+sz+lens[/tex] with WMAP5 data only, I do not get a good likelihood. But if I use the parameters for the same model with [tex]wmap5+hst[/tex], I get a better likelihood. I do not understand why this is happening. I would like to know whether the likelihood code contains other data sets also. Regards, Akhilesh
The parameters in the table are the mean over the chain, not the best fit. Usually the mean is pretty close to the best fit, but in cases where the data aren't enough to constrain all the model's parameters (like OLCDM with WMAP only) there can be a bit of a difference. So it's not unexpected that the likelihood of the "OLCDM, WMAP only" mean parameters is not good. Adding in the HST data breaks an [tex]H_0 - \Omega_\Lambda[/tex] degeneracy, and then the mean winds up closer to the best fit.
I know asking for proof-verification on MO is a tricky thing. On one hand interesting research level proofs are usually subject of articles and can not be discussed here in detail. On the other hand most simple proof which can be written on a forum are not "high-level" enough for MO and other places are more appropriate. After all MO does not have a "proof-verification" tag, like math.stackexchange.
Anyway, for my personal taste at least, the following is research level. So let's see where this goes:
I want to prove the following statement:
Consider everything over the field $\mathbb{Q}$. For a fixed, given $n\geq 2$, let $\mathcal{E}_{n}$ be the $E_{n}$-suboperad of the Barratt-Eccles operad $\mathcal{E}$, $\mathcal{E}_{n}^{i}$ its Koszul dual cooperad in the sense described in the paper "Koszul duality of En-operads" by Benoit Fresse and let $e_{n}$ be the operad of $(n-1)$-Gerstenhaber algebras. Then there exist a solution to the Maurer Cartan equation in the convolution dg Lie algebra
$\Pi_{k\in\mathbb{N}}Hom_{\Sigma_{k}}(\mathcal{E}_{n}^{i}(k),\Omega e_{n}^{i}(k))$
where $\Omega e_{n}^{i}$ is the minimal model of $e_{n}$.
Proof:
Since $\mathcal{E}_{n}$ is an $E_{n}$-operad, by the definition of $E_{n}$-operads there is a zig-zag of quasi-isomorphisms of dg-operads
$ \mathcal{E}_{n}\overset{\simeq}{\longleftarrow}\bullet\overset{\simeq}{\longrightarrow}\cdots\overset{\simeq}{\longleftarrow}\bullet\overset{\simeq}{\longrightarrow}e_{n} $
where we consider $e_{n}$ as a differential graded operad with trivial differential in each arity. Now since in both cases ($\mathcal{E}_{n}$ as well as $e_{n}$), the appropriate Koszul dual cooperads $\mathcal{E}_{n}^{i}$ and $e_{n}^{i}$ are the linear duals "up to tensoring with appropriate shifting cooperads", this implies the existence of the following diagram of dg-cooperad quasi-isomorphisms:
$ \mathcal{E}_{n}^{i}\overset{\simeq}{\longrightarrow}\bullet\overset{\simeq}{\longleftarrow}\cdots\overset{\simeq}{\longrightarrow}\bullet\overset{\simeq}{\longleftarrow}e_{n}^{i} $
since the linear dual of a quasi-isomorphism is a quasi-isomorphism. Now if we change the category of differential graded cooperads with morphisms of differential graded cooperads into the category of dg cooperads with infinity morphisms of dg cooperads (such an infinity morphism $F_{\infty}:\mathcal{C}_{1}\rightsquigarrow\mathcal{C}_{2}$ is defined as (or is equivalent to) a morphism of dg operads $\Omega F_{\infty}:\Omega\mathcal{C}_{1}\to\Omega\mathcal{C}_{2}$), then any quasi-isomorphism has an actual inverse in terms of these infinity morphisms (to emphasize these different kinds of maps, I write $\rightsquigarrow$ for them). Therefore in this other category, there exists the following diagram of dg-cooperad infinity-isomorphisms
$ \mathcal{E}_{n}^{i}\overset{\simeq}{\rightsquigarrow}\bullet\overset{\simeq}{\rightsquigarrow}\cdots\overset{\simeq}{\rightsquigarrow}\bullet\overset{\simeq}{\rightsquigarrow}e_{n}^{i} $
and by composition, we get a single infinity isomorphism of dg-cooperads $\mathcal{E}_{n}^{i}\rightsquigarrow e_{n}^{i}$. By definition of these infinity morphisms, this is equivalent to the existence of an ordinary isomorphism of dg-operads
$ \Omega\mathcal{E}_{n}^{i}\to\Omega e_{n}^{i} $
which in turn is equivalent to the existence of a solution of the Maurer–Cartan equation in $\prod_{k\in\mathbb{N}}\mathrm{Hom}_{\Sigma_{k}}(\mathcal{E}_{n}^{i}(k),\Omega e_{n}^{i}(k))$.
q.e.d.
Second question: The proof relies on the transition from ordinary morphisms of dg-cooperads to $\infty$-morphisms of dg-cooperads. Is this the transition to the derived category of dg-cooperads?
I think I've figured this out. The point is that the rigorous meaning one can draw from the formal covariance of $J^\mu$ is that the momentum-space coefficient functions of $J^\mu$ (i.e. the functions in front of monomials of $a_p$ and $a^\dagger_p$) transform covariantly under the change of variable $p\to \Lambda p$. The covariance of the coefficient functions is unaffected by normal ordering, and is sufficient to give rise to the covariance of $:J^\mu:$. The rest of this answer elaborates on this paragraph.
Let me first clarify the notation used and the meaning of the formal covariance of the ill-defined current $J^\mu$. I'm going to ignore the spin degrees of freedom in this discussion, but one can see that the generalization to include spin only involves a straightforward (if perhaps cumbersome) change of notation. I'm also ignoring the spacetime dependence; that is to say, I'm only considering the covariance of $J^\mu(0)$, and the generalization to $J^\mu(x)$ is straightforward.
In the context of my question, $U(\Lambda)$ is defined as such that
$$U(\Lambda) a_{p} U^{-1}(\Lambda)=\sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}.$$
The covariance of $J^\mu$ must be understood in a very formal and specific sense, the sense in which the covariance is formally proved. For example, in the case of a fermionic bilinear:
$$U(\Lambda)J^{\mu}U(\Lambda)^{-1}=U\bar{\psi}\gamma^{\mu}\psi U^{-1}\\ =U\bar{\psi}_iU^{-1}(\gamma^{\mu})_{ij}U \psi_j U^{-1}=\bar{\psi}D(\Lambda)\gamma^{\mu}D(\Lambda)^{-1}\psi= \Lambda^{\mu}_{\ \ \nu}\bar{\psi}\gamma^{\nu}\psi, $$
where $D(\Lambda)$ is the spinor representation of the Lorentz group, typically constructed via the Clifford algebra. Note that in this formal proof, what's important is that, under the change $a_{p}\to \sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}$ (ignoring spin indices, of course), the elementary field transforms as $\psi \to D(\Lambda)\psi$. In the proof, no manipulation of operator ordering and commutation relations ever occurs: all we do is change the integration variable, and let the algebraic properties of the coefficient functions take care of the rest. In fact, we'd better not mess with the operator ordering, as it can easily spoil the formal covariance (example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d}p E_{p}(a_p^\dagger a_p+\delta(0))$, see my longest comment under drake's answer).
To explain what's going on in more detail without getting tangled in notational nuisances, let me remind you again that I'll omit the spin degrees of freedom; it should be transparent enough by the end of the argument that everything generalizes readily to the spinor case, since all that matters is that the coefficient functions (even with spin indices) transform covariantly. The mathematical gist is that, after multiplying the elementary fields and grouping c/a operators (during the grouping no operator ordering procedure should be performed at all, e.g. $a^\dagger(p_1)a(p_2)$ and $a(p_2)a^\dagger(p_1)$ should be treated as two independent terms), a typical monomial term in $J^\mu(0)$ has the form
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\mu(\{p_i\}),$$
where $M$ is a monomial of c/a operators not necessarily normally ordered, but has an ordering directly from the multiplication of elementary fields.
The formal covariance of $J^\mu$ means
$$\Lambda^\mu_{\ \ \nu}\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\nu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(\Lambda p_i), a(\Lambda p_i)\})f^\mu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}q_i\right)\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right) M(\{a^\dagger(q_i), a(q_i)\})f^\mu(\{\Lambda^{-1}q_i\}) ,$$
where $\prod\limits_{i=1}^n {E_{\Lambda^{-1} q_i}}/{E_{q_i}}$ comes from the transformation of measure and $\prod\limits_{i=1}^{m}\sqrt{{E_{q_i}}/{E_{\Lambda^{-1} q_i}}}$ from the transformation of c/a operators in $M$. This is equivalent to
$$f^\mu(\{\Lambda^{-1}q_i\})\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right)=\Lambda^\mu_{\ \ \nu}f^\nu(\{q_i\}).$$
The above equation makes completely rigorous sense since it's a statement about c-number functions. Obviously, this equation is sufficient to prove the covariance of the normal ordering
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right):M(\{a^\dagger(p_i), a(p_i)\}):f^\mu(\{p_i\}),$$
since on the operator part only a change of integration variable is needed for the proof.
So let's recapitulate the logic of this answer:
1. The current is only covariant when written in a certain way, but not in all ways. (recall the free scalar field Hamiltonian example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d} pE_{p}(a_p^\dagger a_p+\delta(0))$, which is formally covariant in the first form but not in the second form.)
2. In that certain way where the current is formally covariant, the formal covariance really means a genuine covariance of the coefficient functions.
3. The covariance of the coefficient functions is sufficient to establish the covariance of the normally ordered current.
Group Homomorphism from $\Z/n\Z$ to $\Z/m\Z$ When $m$ Divides $n$

Problem 613
Let $m$ and $n$ be positive integers such that $m \mid n$.
(a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined.
(b) Prove that $\phi$ is a group homomorphism.
(c) Prove that $\phi$ is surjective.
(d) Determine the group structure of the kernel of $\phi$.
Proof.

(a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined.
To show that $\phi$ is well-defined, we need to show that the value of $\phi$ does not depend on the choice of representative $a$.
So suppose that $a+n\Z=a’+n\Z$, so that $a$ and $a’$ are two representatives of the same element. This yields that $a-a’$ is divisible by $n$.
Now, $a+n\Z$ is mapped to $a+m\Z$ by $\phi$. On the other hand, $a’+n\Z$ is mapped to $a’+m\Z$ by $\phi$.
Since $a-a’$ is divisible by $n$ and $m \mid n$, it follows that $a-a’$ is divisible by $m$. This implies that $a+m\Z=a’+m\Z$. This proves that $\phi$ does not depend on the choice of the representative, and hence $\phi$ is well-defined.

(b) Prove that $\phi$ is a group homomorphism.
Let $a+n\Z$, $b+n\Z$ be two elements in $\Zmod{n}$. Then we have
\begin{align*} &\phi\left(\, (a+n\Z)+(b+n\Z) \,\right)\\ &=\phi\left(\, (a+b)+n\Z \,\right) &&\text{by addition in $\Zmod{n}$}\\ &=(a+b)+m\Z &&\text{by definition of $\phi$}\\ &=(a+m\Z)+(b+m\Z)&&\text{by addition in $\Zmod{m}$}\\ &=\phi(a+n\Z)+\phi(b+n\Z) &&\text{by definition of $\phi$}. \end{align*}
Hence $\phi$ is a group homomorphism.
(c) Prove that $\phi$ is surjective.
For any $c+m\Z \in \Zmod{m}$, we pick $c+n\Z\in \Zmod{n}$.
Then as $\phi(c+n\Z)=c+m\Z$, we see that $\phi$ is surjective.

(d) Determine the group structure of the kernel of $\phi$.
If $a+n\Z\in \ker(\phi)$, then we have $0+m\Z=\phi(a+n\Z)=a+m\Z$.
This implies that $m\mid a$. On the other hand, if $m\mid a$, then $\phi(a+n\Z)=a+m\Z=0+m\Z$ and $a+n\Z\in \ker(\phi)$.
It follows that
\[\ker(\phi)=\{mk+n\Z \mid k=0, 1, \dots, l-1\},\] where $l$ is an integer such that $n=ml$.
Thus, $\ker(\phi)$ is a group of order $l$.
Since $\ker(\phi)$ is a subgroup of the cyclic group $\Zmod{n}$, we know that $\ker(\phi)$ is also cyclic. Thus \[\ker(\phi)\cong \Zmod{l}.\]

Another approach
Here is a more direct proof of this result.
Define a map $\psi:\Z\to \ker(\phi)$ by sending $k\in \Z$ to $mk+n\Z$. It is straightforward to verify that $\psi$ is a surjective group homomorphism and the kernel of $\psi$ is $\ker(\psi)=l\Z$. It follows from the first isomorphism theorem that \[\Zmod{l}= \Z/\ker(\psi) \cong \im(\psi)=\ker(\phi). \]
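As a quick computational sanity check (not part of the proof), we can verify all four statements for a sample pair, say $n=12$ and $m=4$, so that $l=3$:

```python
# Residues mod n are represented by the integers 0..n-1;
# phi sends a mod n to a mod m.
n, m = 12, 4
l = n // m

def phi(a):
    return a % m

# (a) well-defined: representatives differing by a multiple of n agree
assert all(phi(a) == phi(a + n) for a in range(n))
# (b) homomorphism: phi(a + b) = phi(a) + phi(b) in Z/mZ
assert all(phi((a + b) % n) == (phi(a) + phi(b)) % m
           for a in range(n) for b in range(n))
# (c) surjective: every residue mod m is hit
assert {phi(a) for a in range(n)} == set(range(m))
# (d) kernel = multiples of m, a cyclic group of order l = n/m
kernel = [a for a in range(n) if phi(a) == 0]
print(kernel)   # [0, 4, 8] -> cyclic of order 3
```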
One more post before I return to core algebra. We need to look a bit more at division and fractions.
Now, as you’ve seen, something like 8 ÷ 2 indicates division, but another way to show exactly the same thing is \[\frac{8}{2}\]. In other words, fractions are just another way to show division. Now before I expand on this, let’s review how fractions multiply together.
When two fractions are to be multiplied, the process is very simple. You just multiply the numerators (the numbers above the line) together and the denominators (the numbers below the line) together.\[\frac{1}{2}\hspace{0.33em}\times\hspace{0.33em}\frac{3}{4}\hspace{0.33em}{=}\hspace{0.33em}\frac{{1}\hspace{0.33em}\times\hspace{0.33em}{3}}{{2}\hspace{0.33em}\times\hspace{0.33em}{4}}\hspace{0.33em}{=}\hspace{0.33em}\frac{3}{8}\]
Now this can be used to our advantage to simplify fractions. Each of the numbers in the above example is called a “factor”. Factors are things that are multiplied together. So if we can show the factors of the parts of a fraction, we can effectively cancel factors that are common between the numerator and the denominator, because we can split off the common factors as \[\frac{\mathrm{number}}{\mathrm{number}}\] and any number divided by itself is 1, and anything multiplied by 1 is the same anything.\[\frac{8}{2}\hspace{0.33em}{=}\hspace{0.33em}\frac{{2}\hspace{0.33em}\times\hspace{0.33em}{4}}{{2}\hspace{0.33em}\times\hspace{0.33em}{1}}\hspace{0.33em}{=}\hspace{0.33em}\frac{2}{2}\hspace{0.33em}\times\hspace{0.33em}\frac{4}{1}\hspace{0.33em}{=}\hspace{0.33em}{1}\hspace{0.33em}\times\hspace{0.33em}{4}\hspace{0.33em}{=}\hspace{0.33em}{4}
\]
A shortcut version of this is\[\frac{8}{2}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{2}\hspace{0.33em}\times\hspace{0.33em}{4}}{\rlap{/}{2}\hspace{0.33em}\times\hspace{0.33em}{1}}\hspace{0.33em}{=}\hspace{0.33em}\frac{4}{1}\hspace{0.33em}{=}\hspace{0.33em}{4}\]
So note that when you cross out the only factor in the numerator or denominator, a “1” is left, and this “1” can be left out of the result since it does not change the value of the remaining numbers. Also note that this works for known and unknown numbers such as x, which we will see in my next post.
A couple more examples:\[\begin{array}{c}
{\frac{16}{4}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{4}\hspace{0.33em}\times\hspace{0.33em}{4}}{\rlap{/}{4}\hspace{0.33em}\times\hspace{0.33em}{1}}\hspace{0.33em}{=}\hspace{0.33em}{4}}\\
{\frac{6}{9}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}{2}}{\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{2}{3}}
\end{array}\] |
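If you want to check this kind of cancellation by machine, Python's built-in fractions module reduces fractions exactly this way (a small illustrative script using the numbers from the examples above):

```python
# Fraction(a, b) automatically cancels the common factors of a and b.
from fractions import Fraction
from math import gcd

print(Fraction(8, 2))    # 4    (8/2 reduces completely, denominator 1)
print(Fraction(16, 4))   # 4
print(Fraction(6, 9))    # 2/3

# Doing the cancellation by hand: divide top and bottom by the common factor.
num, den = 6, 9
common = gcd(num, den)               # 3
print(num // common, den // common)  # 2 3
```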
How did the early sailors determine their latitude position without GPS? That is the topic of today’s post.
Now first, a little background. The earth’s axis is tilted with respect to its orbit about the sun. The angle of this tilt is approximately 23.5°. This causes the northern and southern hemispheres to get more sun in summer and less in winter, which is why the seasons exist. The tilted axis also causes our days to be shorter in the winter and longer in the summer. There are two times during the year when the days and nights are equal in length. These times are called the vernal and autumnal equinoxes. In the northern hemisphere, these equinoxes occur on the first days of spring and autumn. Here in Australia in the southern hemisphere, we elected to call the 1st of September the start of spring and the 1st of March the start of autumn, about 21 days short of the respective equinoxes. Perhaps this is because it is easier to remember. The main point here is that twice a year, at an equinox, the days and nights are equal.
At any time of the year other than an equinox, the highest height of the sun around noon is affected by the tilt of the earth’s axis. But at an equinox, the earth is in a neutral position where the axis tilt does not affect the highest sun height. At the equator (0° latitude), the sun would be directly overhead and a vertical stick in the ground would cast no shadow. As you go up or down in latitude, the highest sun height goes down and a vertical stick would cast the shortest shadow when the sun is at its highest. The below graphic shows the earth at an equinox with the sun at its maximum height. If a vertical stick is placed in the ground at your location, the sun’s rays would make an angle with it that is the same as your latitude angle.
Below is a blow-up of the vertical stick. You can see from the above picture that at the equator, the sun would be directly overhead at noon and there would be no shadow. At the poles, the sun would be at the horizon and the shadow would be very long (technically infinite). But in between, a measurable shadow would be made.
Now you could measure the angle directly with a sextant, but I hardly know what a sextant is, let alone how to use one. But I am good at maths and I have a good calculator. The shadow, the stick, and the line from the top of the stick to the shadow end form a right triangle. If you remember the post on trig functions, the tangent of an angle is the length of the opposite side divided by the length of the adjacent side. We want to measure the angle 𝝀, so the adjacent side is the stick and the opposite side is the shadow:\[\tan\mathit{\lambda}\hspace{0.33em}{=}\hspace{0.33em}\frac{{\mathrm{length}}\hspace{0.33em}{\mathrm{of}}\hspace{0.33em}{\mathrm{shadow}}}{{\mathrm{length}}\hspace{0.33em}{\mathrm{of}}\hspace{0.33em}{\mathrm{stick}}}\]
On your calculator, if you have the trig functions, you would also have keys labelled “arctan” or “tan⁻¹”. These keys mean “what is the angle that has what you entered as its tangent?”. So if you enter the result of the division and then hit this key (making sure that your calculator is in “degrees” mode), you will get your latitude.
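If you would rather let a computer be the calculator, here is a minimal sketch in Python (the stick and shadow lengths are made-up numbers; remember the shadow is the side opposite the angle and the stick is adjacent):

```python
# Latitude from a noon shadow at an equinox: lambda = arctan(shadow / stick).
from math import atan, degrees

stick = 1.0     # length of the vertical stick (metres) -- made-up number
shadow = 0.7    # length of its noon shadow (metres) -- made-up number
latitude = degrees(atan(shadow / stick))
print(round(latitude, 1))   # 35.0 -> about 35 degrees North or South
```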
Now this method will not tell you if the latitude is positive (North) or negative (South). But if you are so lost that you don’t even know what hemisphere you are in, finding your latitude is probably the least of your troubles.
Also, waiting for noon to find your latitude is not too bad, but waiting for an equinox is fairly restrictive. Fortunately, our early sailors had tables to correct the angle found depending on the time of the year. |
The problem is this-
Evaluate $\int\int_R \sqrt{xy - y^2}dxdy$ where R is a triangle with vertices (0,0), (10,1) and (1,1).
I have done up to this: $\int_{y=0}^{1}\int_{x=y}^{10y} (xy - y^2)^{\frac12}dxdy = \int_{y=0}^{1}dy\int_{x=y}^{10y}(xy - y^2)^{\frac12}dx$
I was stuck here, and opened the book to check the solution, and I found that it is done as $\int_{y=0}^{1}dy\int_{x=y}^{10y}(xy - y^2)^{\frac12}dx = \int_{y=0}^{1}dy \left[\frac23 \frac1y (xy - y^2)^{\frac32}\right]^{10y}_{y}$
Now, I have got no idea where the $\frac1y$ came from, or how that step is done. Can anyone please clarify?
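For what it's worth, I checked numerically (with an arbitrarily chosen value of $y$) that the book's bracketed expression does differentiate back to the integrand, so the algebra itself seems consistent:

```python
# Central-difference check that F(x) = (2/(3y)) * (xy - y^2)^(3/2)
# satisfies F'(x) = (xy - y^2)^(1/2), with y held constant.
y = 0.5
F = lambda x: (2.0 / (3.0 * y)) * (x * y - y * y) ** 1.5
f = lambda x: (x * y - y * y) ** 0.5

h = 1e-6
for x in (1.0, 2.0, 3.5):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-5, (x, deriv, f(x))
print("F'(x) matches the integrand")
```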
In the last 10 years there has been more and more crossover of techniques from high energy physics being used in AMO and condensed matter scenarios, in particular diagrammatic techniques and related perturbative calculations. Unlike the relativistic case, a simple Schrodinger field theory will conserve the number of particles; however, not all diagrams that one might draw respect this conserved quantity. My question is, what should be done with such diagrams? Will the theory generically result in these diagrams not contributing to scattering processes or is this something that has to be input 'by hand?'
As an illustrative example, consider the interaction of two non-relativistic scalar fields with a local quartic interaction
$$\mathcal{L} = \sum_{i=1}^2{(\psi^\dagger_i \dot{\psi}_i - \frac{1}{2m} \nabla \psi^\dagger_i \nabla \psi_i)} + \frac{g}{m} \psi^\dagger_1 \psi^\dagger_2 \psi_2 \psi_1,$$ which represents a system whose excitations are fermions with 2 orthogonal spin states with a contact interaction. Such a theory is considered in Braaten and Platter's [Exact Relations for a Strongly Interacting Fermi Gas from the Operator Product Expansion](http://journals.aps.org/prl/pdf/10.1103/PhysRevLett.100.205301).
Note in particular that they find that after imposing a momentum cutoff for the theory, the coupling constant must satisfy $$g(\Lambda) = \frac{4 \pi a}{1- 2 a \Lambda/\pi}$$ to recover the scattering amplitude we expect from the zero-range model with a scattering length $a$ (and $\Lambda$ is the momentum cutoff). Importantly, the coupling constant $g \sim \mathcal{O}(1/\Lambda)$ for large cutoff.
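As a numerical illustration of that scaling (my own check, not from the paper): for large $\Lambda$, $\Lambda\,g(\Lambda)$ approaches the $a$-independent constant $-2\pi^2$, confirming $g\sim\mathcal{O}(1/\Lambda)$:

```python
# g(Lambda) = 4*pi*a / (1 - 2*a*Lambda/pi); for large cutoff,
# Lambda * g(Lambda) -> -2*pi**2, independent of the scattering length a.
from math import pi

def g(Lam, a):
    return 4 * pi * a / (1 - 2 * a * Lam / pi)

a = 1.0
for Lam in (1e2, 1e4, 1e6):
    print(Lam * g(Lam, a))   # tends to -2*pi**2 ~ -19.739
```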
However, if one computes the corrections to the propagator (of either particle) you find that the first order correction has only one diagram and its value is $$\Delta_1(k) \sim -\frac{i g}{8 m \pi^2} \frac{\Lambda^3}{3}.$$ On the contrary, I expected this diagram to not contribute anything (similarly to how it does not in the relativistic $\phi^4$ theory) 1) because it does not conserve particle number and 2) because the parameter 'm' of the bare theory is the SAME 'm' that appears in the equivalent Schrodinger equation, meaning that the physical mass of the particle is not changed by the perturbative corrections. And then, not only is the contribution non-zero but it blows up as the cutoff becomes large. Are we required to postulate that $$m = m_0 + \mathcal{O}(\Lambda^3)$$ so that these contributions do go to zero as the cutoff increases?
Identical question for reference: http://physics.stackexchange.com/questions/200237/how-should-we-deal-with-diagrams-which-do-not-conserve-particle-number-in-a-non
Put into one sentence, Noether's first theorem states that a continuous, global, off-shell symmetry of an action $S$ implies a local on-shell conservation law. The words on-shell and off-shell refer to whether the Euler-Lagrange equations of motion are satisfied or not.
Now the question asks if continuous can be replaced by discrete.
It should immediately be stressed that the Noether theorem is a machine that for each input in the form of an appropriate symmetry produces an output in the form of a conservation law. To claim that a Noether theorem is behind, it is not enough to just list a couple of pairs (symmetry, conservation law).
Now, where could a discrete version of Noether's Theorem live? A good bet is in a discrete lattice world, if one uses finite differences instead of differentiation. Let us investigate the situation.
Our intuitive idea is that finite symmetries, e.g., time reversal symmetry, cannot be used in a Noether theorem in a lattice world, because they don't work in a continuous world either. Instead we pin our hopes on discrete infinite symmetries that become continuous symmetries when the lattice spacing goes to zero.
Imagine for simplicity a 1D point particle that can only be at discrete positions $q_t\in\mathbb{Z}a$ on a 1D lattice $\mathbb{Z}a$ with lattice spacing $a$, and that time $t\in\mathbb{Z}$ is discrete as well. (This was, e.g., studied in J.C. Baez and J.M. Gilliam, Lett. Math. Phys. 31 (1994) 205; hat tip: Edward.) The velocity is the finite difference
$$v_{t+\frac{1}{2}}:=q_{t+1}-q_t\in\mathbb{Z}a,$$
and is discrete as well. The action $S$ is
$$S[q]=\sum_t L_t$$
with Lagrangian $L_t$ on the form
$$L_t=L_t(q_t,v_{t+\frac{1}{2}}).$$
Define momentum $p_{t+\frac{1}{2}}$ as
$$ p_{t+\frac{1}{2}} := \frac{\partial L_t}{\partial v_{t+\frac{1}{2}}}. $$
Naively, the action $S$ should be extremized wrt. neighboring virtual discrete paths $q:\mathbb{Z} \to\mathbb{Z}a$ to find the equation of motion. However, it does not seem feasible to extract a discrete Euler-Lagrange equation in this way, basically because it is not enough to Taylor expand to the first order in the variation $\Delta q$ when the variation $\Delta q\in\mathbb{Z}a$ is not infinitesimal. At this point, we throw our hands in the air, and
declare that the virtual path $q+\Delta q$ (as opposed to the stationary path $q$) does not have to lie in the lattice, but that it is free to take continuous values in $\mathbb{R}$. We can now perform an infinitesimal variation without worrying about higher order contributions,
$$0 =\delta S := S[q+\delta q] - S[q]
= \sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_t + p_{t+\frac{1}{2}}\delta v_{t+\frac{1}{2}} \right] $$ $$ =\sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_{t} + p_{t+\frac{1}{2}}(\delta q_{t+1}- \delta q_t)\right] $$$$=\sum_t \left[\frac{\partial L_t}{\partial q_t} - p_{t+\frac{1}{2}} + p_{t-\frac{1}{2}}\right]\delta q_t + \sum_t \left[p_{t+\frac{1}{2}}\delta q_{t+1}-p_{t-\frac{1}{2}}\delta q_t \right].$$
Note that the last sum is telescopic. This implies (with suitable boundary conditions) the discrete Euler-Lagrange equation
$$\frac{\partial L_t}{\partial q_t} = p_{t+\frac{1}{2}}-p_{t-\frac{1}{2}}.$$
This is the evolution equation. At this point it is not clear whether a solution for $q:\mathbb{Z}\to\mathbb{R}$ will remain on the lattice $\mathbb{Z}a$ if we specify two initial values on the lattice. We shall from now on restrict our considerations to such systems for consistency.
As an example, one may imagine that $q_t$ is a cyclic variable, i.e., that $L_t$ does not depend on $q_t$. We therefore have a discrete global translation symmetry $\Delta q_t=a$. The Noether current is the momentum $p_{t+\frac{1}{2}}$, and the Noether conservation law is that momentum $p_{t+\frac{1}{2}}$ is conserved. This is certainly a nice observation. But this does
not necessarily mean that a Noether Theorem is behind.
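To make the cyclic-variable example concrete, here is a toy simulation (my own illustration, with the free-particle Lagrangian $L_t = v_{t+\frac{1}{2}}^2/2$ and made-up initial data):

```python
# Discrete free particle: L_t = v_{t+1/2}^2 / 2 with v_{t+1/2} = q_{t+1} - q_t.
# q_t is cyclic (L_t does not depend on q_t), so the discrete Euler-Lagrange
# equation  dL_t/dq_t = p_{t+1/2} - p_{t-1/2}  (with p = dL/dv = v)
# reduces to p_{t+1/2} = p_{t-1/2}: the momentum is conserved step by step.
a = 2                # lattice spacing (illustrative value)
q = [0, 3 * a]       # two initial positions on the lattice Z*a
for t in range(1, 20):
    p_prev = q[t] - q[t - 1]      # p_{t-1/2}
    q.append(q[t] + p_prev)       # evolution: p_{t+1/2} = p_{t-1/2}
momenta = {q[t + 1] - q[t] for t in range(len(q) - 1)}
print(momenta)                        # a single value -> momentum conserved
print(all(x % a == 0 for x in q))     # the solution stays on the lattice
```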
Imagine that the enemy has given us a global vertical symmetry $\Delta q_t = Y(q_t)\in\mathbb{Z}a$, where $Y$ is an arbitrary function. (The words
vertical and horizontal refer to translation in the $q$ direction and the $t$ direction, respectively. We will for simplicity not discuss symmetries with horizontal components.) The obvious candidate for the bare Noether current is
$$j_t = p_{t-\frac{1}{2}}Y(q_t).$$
But it is unlikely that we would be able to prove that $j_t$ is conserved merely from the symmetry $0=S[q+\Delta q] - S[q]$, which would now unavoidably involve higher order contributions. So while we stop short of declaring a no-go theorem, it certainly does not look promising.
Perhaps, we would be more successful if we only discretize time, and leave the coordinate space continuous? I might return with an update about this in the future.
An example from the continuous world that may be good to keep in mind: Consider a simple gravity pendulum with Lagrangian
$$L(\varphi,\dot{\varphi}) = \frac{m}{2}\ell^2 \dot{\varphi}^2 + mg\ell\cos(\varphi).$$
It has a global discrete periodic symmetry $\varphi\to\varphi+2\pi$, but the (angular) momentum $p_{\varphi}:=\frac{\partial L}{\partial\dot{\varphi}}= m\ell^2\dot{\varphi}$ is not conserved if $g\neq 0$.

This post imported from StackExchange Physics at 2015-10-04 21:40 (UTC), posted by SE-user Qmechanic
For discussion of specific patterns or specific families of patterns, both newly-discovered and well-known.
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Kazyan wrote:
Component found in a CatForce result:
Code: Select all
x = 42, y = 67, rule = LifeHistory
A$.2A$2A5$7.A$8.2A$7.2A19$24.2A$24.2A2$39.A$37.A3.A$36.A$36.A4.A$36.
5A$14.2A.2D$13.A.AD.D$13.A$12.2A25$5.3A$7.A$6.A!
That can be done with 4 gliders, although it's still interesting that it was found accidentally:
Code: Select all
x = 21, y = 30, rule = B3/S23
10b2o$11bo$11bobo$12b2o14$10bo4bo$10b2ob2o$9bobo2b2o8$2o17bo$b2o15b2o$
o17bobo!
What were you looking for, exactly? An MWSS-to-Herschel converter?
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm
gmc_nxtman wrote:What were you looking for, exactly? A MWSS-to-herschel converter?
I'd settle for any signal, but yes. The current Orthogonoids have geometry challenges that pad their size, and the limiting factor in their repeat time is the syringe. Repeat time is more important for single-channel operations than probably any other constructor design, so I'm trying to give that fire some better fuel.
Tanner Jacobi
mniemiec
Posts: 1055 Joined: June 1st, 2013, 12:00 am
gmc_nxtman wrote:4-glider trans-boat with tail edgeshoot: ...
Even though it was already buildable from 4 gliders, this method improves the syntheses of one still life and 18 pseudo-objects.
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm
Potential component spotted in a failed eating reaction:
Code: Select all
x = 21, y = 17, rule = B3/S23
o$3o$3bo$2b2o2$6bo$5bobo2$5b3o$19bo$8bo9bo$18b3o3$15bo$14b2o$14bobo!
Tanner Jacobi
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Unusual still life in 8 gliders:
Code: Select all
x = 18, y = 26, rule = B3/S23
11bo$10bobo$10b2o2$10bo$9b2o$9bobo5$obo$b2o$bo2$3b2o$4b2o$3bo$9bo$9b2o
$8bobo2$15b3o$7b2o6bo$6bobo7bo$8bo!
EDIT:
This also gives 21.41458 in 9 gliders.
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm
Potentially grow out a BTS into a structure like a snorkel loop:
Code: Select all
x = 15, y = 25, rule = B3/S23
2b2obo$3bob3o$bobo4bo$ob2ob2obo$o4bobo$b3obo$3bob2o3$10b3o2$8bo5bo$8bo
5bo$8bo5bo2$10b3o6$11b2o$10bo2bo$11b2o$11bo!
I suspect that the drifter catalyst and its variants also have odd transformations, since both objects are robust.
Tanner Jacobi
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Haven't seen a component quite like this before:
Code: Select all
x = 27, y = 18, rule = B3/S23
20bobo$20b2o$21bo7$15bo$15bobo$15b2o2$3o9bobo$b3o9b2o$13bo10b3o$24bo$
25bo!
EDIT:
Better version:
Code: Select all
x = 15, y = 11, rule = B3/S23
13bo$12bo$12b3o3$7bo$6bobo$6bobo2b2o$7bo2b2o$3o9bo$b3o!
Gamedziner
Posts: 796 Joined: May 30th, 2016, 8:47 pm Location: Milky Way Galaxy: Planet Earth
p8 c/2 derived from blinker puffer 1:
Code: Select all
2bo$o3bo$5bo$o4bo$b5o5$b2o2b2o$bob2ob2o$2b5o$3b3o$4bo$2bo3bo$7bo$2bo4bo$3b5o!
Code: Select all
x = 81, y = 96, rule = LifeHistory
58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27.
A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A
$4.2A18$4.2A$4.2A2.2A$8.2A!
mniemiec
Posts: 1055 Joined: June 1st, 2013, 12:00 am
This is known. It can be easily synthesized from 10 gliders:
Code: Select all
x = 88, y = 26, rule = B3/S23
34bobo$35boo$35bo3$45bo$bo44boo$bbo42boo$3o$20boo18boo6bo$bbo17bobo17b
obo4bo$boo18bo19bo5b3o$bobo$$43boo$44boo7b3o22bo4b3o$31b3o9bo9bobbo20b
3o3bobbo$33bo19bo16b3o3boobo3bo$32bo20bo3bo12bobbobb3o4bo3bo$53bo16bo
6boo4bo$54bobo13bo3bo3bo5bobo$70bo$71bobo$77bo$78bo$77bo!
gameoflifemaniac Posts: 774 Joined: January 22nd, 2017, 11:17 am Location: There too
Code: Select all
x = 17, y = 17, rule = B3/S23
8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob
o$b6o3b6o$o15bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobobob
obo$4bo2bobo2bo$7bobo$8bo!
Code: Select all
x = 17, y = 17, rule = B3/S23
8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob
o$b6o3b6o$o7bo7bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobob
obobo$4bo2bobo2bo$7bobo$8bo!
dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact:
While incompetently welding a tremi-Snark this evening...
Code: Select all
x = 23, y = 31, rule = LifeHistory
$3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B
$4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B
$7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8.
7A$8.A5.A$11.A$10.A.A$11.A!
... I ended up with a p3 that I didn't really want.
Doesn't seem worth keeping it around until people are synthesizing all the 58-bit p3's, but it seemed mildly entertaining anyway.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
dvgrn wrote:
Code: Select all
x = 23, y = 31, rule = LifeHistory
$3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B
$4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B
$7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8.
7A$8.A5.A$11.A$10.A.A$11.A!
Pointless reduction:
Code: Select all
x = 17, y = 25, rule = LifeHistory
4B$.4B$2.4B5.2A$3.4B4.2A$4.9B$5.6B$5.4BA3B$3.7BA2B$3.5B3A2B$3.11B$.2A
B.10B$.2AB3.B2A4B$6.2B2A5B$7.8B$7.6B$8.5B$9.3B$8.5B$7.B2AB2A$4.2A2.2A
.AB2.2A$4.A2.B3.A.A2.A$5.7A.3A2$7.2A.4A$7.2A.A2.A!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Can someone salvage this? (Look at T≈20)
Code: Select all
x = 16, y = 12, rule = B3/S23
bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
BlinkerSpawn Posts: 1905 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's
gmc_nxtman wrote:
Can someone salvage this? (Look at T≈20)
Code: Select all
x = 16, y = 12, rule = B3/S23
bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
The red pattern inserted at gen 16 would do it:
Code: Select all
x = 17, y = 14, rule = LifeHistory
13.D$11.2D$.A14.D$2.A8.5D$3A7$3.2A5.2A2.2A$4.2A3.A.A.2A$3.A7.A3.A!
AbhpzTa
Posts: 475 Joined: April 13th, 2016, 9:40 am Location: Ishikawa Prefecture, Japan
gmc_nxtman wrote:
Can someone salvage this? (Look at T≈20)
Code: Select all
x = 16, y = 12, rule = B3/S23
bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
Code: Select all
x = 27, y = 19, rule = B3/S23
16bo$4bo9b2o$5bo9b2o$3b3o$22bo$20b2o$21b2o$bo$2bo$3o2$25b2o$24b2o$26bo
3$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
Iteration of sigma(n)+tau(n)-n [sigma(n)+tau(n)-n : OEIS A163163] (e.g. 16,20,28,34,24,44,46,30,50,49,11,3,3, ...) :
965808 is period 336 (max = 207085118608).

gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Reduced an old synthesis from eleven (I think) down to eight gliders:
Code: Select all
x = 34, y = 34, rule = B3/S23
10bo$bobo7bo19bobo$2b2o5b3o19b2o$2bo29bo3$32bo$30b2o$31b2o3$12bo$12bob
o$12b2o11$14b2o$14bobo$14bo12b2o$27bobo$27bo3$b2o$obo$2bo!
Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA
Glider + two-glider loaf/tub/block/blinker constellation lasts for over 10K gens:
Code: Select all
x = 16, y = 13, rule = B3/S23
3bobo$3b2o$4bo9bo$13bobo$4b2o8bo$4b2o3$6bo$5bobo$4bo2bo$5b2o$3o!
I Like My Heisenburps! (and others)
Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am
A glider synthesis of Sawtooth 311
Code: Select all

x = 193, y = 140, rule = B3/S23 40bo$41bo$39b3o$72bo$70b2o$71b2o19$32bobo$33b2o$33bo30bo$63bo$63b3o2$ 75bo$74b2o$24bo49bobo$25b2o$24b2o$49bo$47b2o$48b2o2$2bo$obo$b2o7$34b2o $35b2o$34bo3$67b2o$67bobo$53b2o12bo$53bobo$53bo6$27b2o93b2o$26bobo93bo bo$28bo93bo2$53b3o$53bo74bobo$54bo73b2o$4bo124bo$4b2o172b2o$3bobo171b 2o$174b2o3bo$31b2o140bobo$30b2o98b2o43bo$32bo97bobo36bobo$130bo3bobo 10bo22b2o$135b2o11b2o20bo$135bo4bobo4b2o34bobo$115b2o21bobobobo6b2o20b o9b2o$116b2o21b2ob2o6b2o22bo9bo$115bo36bo19b3o2$120bo23b2ob2o23b3o5bo$ 120b2o21bobobobo24bo4bo$119bobo13b2ob2o5bobo25bo5b3o$134bobobobo$115bo 20bobo13b2o25b3o$116b2o12b2o19b2o22bo3bo$115b2o14b2o20bo22bo3bo$130bo 35bo4bo2b3o$164bobo2b2o$113b3o49b2o3b2o2b3o4bobo$115bo60bo4b2o$114bo 60bo6bo$179bo$126bo25bo18bo7b2o$125b2o23b2o19b2o5bobo$125bobo23b2o17bo bo$146b2o$147b2o$121b2o23bo$120b2o$122bo2$156b2o$142bobo10bobo$134bobo 5b2o13bo$134b2o7bo$135bo$132bo6bo$132b2o4bo$131bobo4b3o2$138b3o43bo$ 134bo3bo22bo20b2o$135bo3bo22b2o19b2o$133b3o25b2o13bobo$174bobobobo7bo$ 133b3o5bo25bobo5b2ob2o6bobo$134bo4bo24bobobobo15b2o3bo$63b3o68bo5b3o 23b2ob2o19b2o$63bo127b2o$64bo75b3o19bo$130bo9bo22b2o6b2ob2o$130b2o9bo 20b2o6bobobobo$129bobo34b2o4bobo$144bo20b2o$143b2o22bo12b2o$143bobo33b 2o$181bo2$139bo$138bo$138b3o2$137bo$136b2o$136bobo!

Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am
This Simkin-glider-gun-like object actually produces two MWSSes:
Code: Select all
x = 53, y = 17, rule = B3/S23
44b2o5b2o$44b2o5b2o2$47b2o$47b2o$12bo$12b3o$12bobo$14bo4$4b2o$4b2o2$2o
5b2o$2o5b2o!
mniemiec
Posts: 1055 Joined: June 1st, 2013, 12:00 am
Entity Valkyrie wrote:A glider synthesis of Sawtooth 311 ...
It's nice to have syntheses like this. Unfortunately, in this case, there are several pairs of gliders that would have had to pass through each other earlier (i.e. they would have already collided before this phase). To make sure this doesn't happen, it is usually a good idea to backtrack all the gliders a certain amount (e.g. far enough away that they are in four distinct clouds, one coming from each direction) and then run them to see if any unwanted interactions occur first.
Rhombic Posts: 1056 Joined: June 1st, 2013, 5:41 pm
This component was found accidentally, though the reverse component would have been more useful:
Code: Select all
x = 12, y = 14, rule = B3/S23
11bo$9b3o$8bo$9bo$6b4o$6bo$2b2o3b3o$2b2o5bo$9bobo$2bo7b2o$bobo$bob2o$o
$2bo!
Code: Select all
x = 13, y = 15, rule = B3/S23
7bo$7b3o$10bo$2b2ob3o2bo$o2bobo2bob2o$2o4b3o3bo$9bobo$3b2o3b2ob2o$3b2o
2$3bo$2bobo$2bob2o$bo$3bo!
Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA
Switch engine turns two rows of beehives into two rows of table on tables:
Code: Select all
x = 88, y = 96, rule = B3/S23
13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$
28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$
45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo
$24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o
$68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo
bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo
bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo!
I Like My Heisenburps! (and others)
KittyTac Posts: 533 Joined: December 21st, 2017, 9:58 am
Extrementhusiast wrote:
Switch engine turns two rows of beehives into two rows of table on tables:
Code: Select all
x = 88, y = 96, rule = B3/S23
13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$
28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$
45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo
$24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o
$68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo
bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo
bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo!
And then explodes. I wonder if there's a way to eat it at the end.
dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact:
KittyTac wrote:
Extrementhusiast wrote:Switch engine turns two rows of beehives into two rows of table on tables...
And then explodes. I wonder if there's a way to eat it at the end.
Yeah, switch engine/swimmer eaters definitely aren't a problem:
Code: Select all
x = 96, y = 98, rule = B3/S23
13b2o$12bo2bo$13b2o6$8b3o10b2o$20bo2bo$8bo3bo8b2o$9b4o$12bo4$29b2o$28b
o2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$45b
2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo$
24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o$
68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bob
o$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bob
o$64bobo$65bo5$73bo$72bobo$72bobo17b2o$73bo18bo$93b3o$95bo!
#C [[ AUTOSTART STEP 9 THEME 2 ]]
kiho park
Posts: 50 Joined: September 24th, 2010, 12:16 am
I found this c/3 diagonal fuse while searching for c/3 long barge crawlers.
Code: Select all
x = 10, y = 11, rule = B3/S23:T40,27
8b2o$7bo2$6bobo$5bo2bo$4bobo$3bobo$2bobo$bobo$obo$bo! |
The idea, as I understand it, is that the term in parentheses in the Big O bounds the rest of the series asymptotically (as $x$ goes to $0$ or $\infty$), giving an elegant way of writing the series (if you don't like "+ ..." or "etc.") while also providing an error estimate. Taking the example of $e^x$:
$$ e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+O(x^5) $$
When we write $O(x^5)$ we mean that all the rest of the terms (like $\frac{x^6}{6!}+\frac{x^7}{7!}+\frac{x^8}{8!}+\cdots+\frac{x^y}{y!}$) are bounded by $x^5$ as $x$ goes to $0$. To give a more concrete example, if we take $e^{2.42}$ then this is more or less $1 + 2.42 + (\frac12)(2.42)^2 + (\frac16)(2.42)^3 + (\frac{1}{24})(2.42)^4$, and the error between the approximation and the actual value is no larger than $2.42^5$.
The question is
how do we know this?
Certainly $x^5$ bounds each term individually as $x$ goes to $0$ ($Cx^n=O(x^5)$ as $x \rightarrow 0$, as long as $n>5$ and $C$ is constant), but how do we prove that the sum of all the individual terms is bounded? We have to show that:
$$ \frac{x^6}{6!}+\frac{x^7}{7!}+\frac{x^8}{8!}+\frac{x^9}{9!}+\frac{x^{10}}{10!}+\frac{x^{11}}{11!}+\cdots = O(x^5), \qquad x\rightarrow 0 $$
Falling back on the previous example, maybe the infinite sum will actually be slightly larger than $2.42^5$. How do we prove this is actually not the case? While I have used the example of $e^x$, feel free to give a more general proof, and not one for this particular function only.
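A quick numerical check (not a proof, and written by me as an illustration) makes the issue concrete: in Python one can compute the remainder $R(x)=e^x-\sum_{k=0}^4 x^k/k!$ and watch $R(x)/x^5$ approach a constant as $x\to 0$.

```python
import math

def remainder(x):
    """e^x minus its degree-4 Taylor polynomial about 0."""
    p4 = sum(x**k / math.factorial(k) for k in range(5))
    return math.exp(x) - p4

# R(x)/x^5 should settle toward 1/5! = 1/120 as x -> 0
# (x is kept away from 0 to avoid catastrophic cancellation in the subtraction)
ratios = [remainder(x) / x**5 for x in (0.5, 0.25, 0.125, 0.0625)]
```

The ratios decrease toward $1/120 \approx 0.00833$, which hints that the right statement is $|R(x)| \le C|x|^5$ for some constant $C$, rather than $|R(x)| \le x^5$ literally.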
Your help with this issue will be greatly appreciated.
Thank you for taking the time to read this and have a good day.
Note: I wrongly asked this on Math Overflow and copied it here. I'm not sure whether the other question gets transferred or if I made the right choice in reasking here. |
Presented by:
Jeremie Unterberger Université de Lorraine
Date:
Thursday 25th October 2018 - 10:00 to 11:00
Venue:
INI Seminar Room 1
Abstract:
We study in the present article the Kardar-Parisi-Zhang (KPZ) equation $$ \partial_t h(t,x)=\nu\Delta h(t,x)+\lambda |\nabla h(t,x)|^2 +\sqrt{D}\, \eta(t,x), \qquad (t,x)\in{\mathbb{R}}_+\times{\mathbb{R}}^d $$ in $d\ge 3$ dimensions in the perturbative regime, i.e. for $\lambda>0$ small enough and a smooth, bounded, integrable initial condition $h_0=h(t=0,\cdot)$. The forcing term $\eta$ in the right-hand side is a regularized space-time white noise. The exponential of $h$ -- its so-called Cole-Hopf transform -- is known to satisfy a linear PDE with multiplicative noise. We prove a large-scale diffusive limit for the solution, in particular a time-integrated heat-kernel behavior for the covariance in a parabolic scaling. The proof is based on a rigorous implementation of K. Wilson's renormalization group scheme. A double cluster/momentum-decoupling expansion allows for perturbative estimates of the bare resolvent of the Cole-Hopf linear PDE in the small-field region where the noise is not too large, following the broad lines of Iagolnitzer-Magnen. Standard large deviation estimates for $\eta$ make it possible to extend the above estimates to the large-field region. Finally, we show, by resumming all the by-products of the expansion, that the solution $h$ may be written in the large-scale limit (after a suitable Galilei transformation) as a small perturbation of the solution of the underlying linear Edwards-Wilkinson model ($\lambda=0$) with renormalized coefficients $\nu_{eff}=\nu+O(\lambda^2),D_{eff}=D+O(\lambda^2)$. This is joint work with J. Magnen.
I was recently considering how to justify the formula relating to the Doppler effect for sound waves to a group of eleventh grade students who are likely encountering it for the first time. The formula in question is
$$f'=f \frac{v-v_o}{v+v_s}, $$
where $v$ is the speed of the sound wave, $f$ is the frequency of the wave emitted by a source moving away from the observer at speed $v_s$ through a medium, and $f'$ is the frequency heard by an observer moving away from the source at speed $v_o$ through the medium.
These students will have no experience with results from special relativity, so I'm not considering any effects of special relativity. However, the students will be familiar with the concept of relative velocity as well as the formula $v = f \lambda.$
I thought about justifying the formula in the following manner:
Once the sound wave has been emitted, the distance in space between two successive crests (i.e. the wavelength) has a fixed value, say $\lambda^*$. From the source's reference frame, the frequency of the sound wave travelling towards the observer is $f$ and it is moving away with speed $v + v_s$. We can then write \begin{equation} v+v_s = f\lambda^*. \end{equation}
On the other hand, in the observer's reference frame the frequency of the sound wave is $f'$ and it is moving towards them with speed $v-v_o$. Hence \begin{equation} v-v_o = f'\lambda^*. \end{equation}
Combining these two equations gives the desired result.
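A small numeric check (with illustrative values for the speeds and frequency, chosen by me and not taken from any source) confirms that the two frame-by-frame equations reproduce the standard formula:

```python
v, f = 343.0, 440.0     # speed of sound and emitted frequency (illustrative)
v_s, v_o = 20.0, 15.0   # source and observer speeds, both receding (illustrative)

# Source frame: the wave heads toward the observer at v + v_s with frequency f,
# which fixes the wavelength lambda*.
lam_star = (v + v_s) / f

# Observer frame: the same wavelength arrives at speed v - v_o.
f_prime = (v - v_o) / lam_star

# Standard Doppler formula for comparison.
f_direct = f * (v - v_o) / (v + v_s)
```

Both routes give the same $f'$, as the algebra requires, since $\lambda^*$ cancels; the observed frequency is lower than $f$, as expected for mutually receding source and observer.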
A number of textbooks and other sources give a derivation involving a change in the effective wavelength, even though the derivation above seems simpler. For this reason, I'm not convinced that this derivation is correct. Is this approach valid? If I have made an error or oversight, I would also like to understand why the correct result still emerges. |
ISSN:
1547-5816
eISSN:
1553-166X
Journal of Industrial & Management Optimization
October 2013 , Volume 9 , Issue 4
Abstract:
In this paper, we propose a primal-dual approach for solving the generalized fractional programming problem. The outer iteration of the algorithm is a variant of interval-type Dinkelbach algorithm, while the augmented Lagrange method is adopted for solving the inner min-max subproblems. This is indeed a very unique feature of the paper because almost all Dinkelbach-type algorithms in the literature addressed only the outer iteration, while leaving the issue of how to practically solve a sequence of min-max subproblems untouched. The augmented Lagrange method attaches a set of artificial variables as well as their corresponding Lagrange multipliers to the min-max subproblem. As a result, both the primal and the dual information is available for updating the iterate points and the min-max subproblem is then reduced to a sequence of minimization problems. Numerical experiments show that the primal-dual approach can achieve a better precision in fewer iterations.
Abstract:
In this paper, we consider an optimal investment-consumption problem subject to a closed convex constraint. In the problem, a constraint is imposed on both the investment and the consumption strategy, rather than just on the investment. The existence of solution is established by using the Martingale technique and convex duality. In addition to investment, our technique embeds also the consumption into a family of fictitious markets. However, with the addition of consumption, it leads to nonreflexive dual spaces. This difficulty is overcome by employing the so-called technique of ``relaxation-projection" to establish the existence of solution to the problem. Furthermore, if the solution to the dual problem is obtained, then the solution to the primal problem can be found by using the characterization of the solution. An illustrative example is given with a dynamic risk constraint to demonstrate the method.
Abstract:
Due to globalization and technological advances, increasing competition and falling prices have forced enterprises to reduce cost; this poses new challenges in pricing and replenishment strategy. The study develops a piecewise production-inventory model for a multi-market deteriorating product with time-varying and price-sensitive demand. Optimal product pricing and material replenishment strategy is derived to optimize the manufacturer's total profit. Sensitivity analyses of how the major parameters affect the decision variables were carried out. Finally, the single production cycle is extended to multiple production cycles. We find that the total profit for multiple production cycles increases by 5.77% when compared with the single production cycle.
Abstract:
The system of absolute value equations $Ax+B|x|=b$, denoted by AVEs, is proved to be NP-hard, where $A, B$ are arbitrary given $n\times n$ real matrices and $b$ is arbitrary given $n$-dimensional vector. In this paper, we reformulate AVEs as a family of parameterized smooth equations and propose a smoothing-type algorithm to solve AVEs. Under the assumption that the minimal singular value of the matrix $A$ is strictly greater than the maximal singular value of the matrix $B$, we prove that the algorithm is well-defined. In particular, we show that the algorithm is globally convergent and the convergence rate is quadratic without any additional assumption. The preliminary numerical results are reported, which show the effectiveness of the algorithm.
Abstract:
We examine the problem of optimal capacity reservation policy on innovative product in a setting of one supplier and one retailer. The parameters of capacity reservation policy are two dimensional: reservation price and excess capacity that the supplier will have in additional to the reservation amount. The above problem is analyzed using a two-stage Stackelberg game. In the first stage, the supplier announces the capacity reservation policy. The retailer forecasts the future demand and then determines the reservation amount. After receiving the reservation amount, the supplier expands the capacity. In the second stage, the uncertainty in demand is resolved and the retailer places a firm order. The supplier salvages the excess capacity and the associated payments are made.
In the paper, with exogenous reservation price or exogenous excess capacity level, we study the optimal expansion policy and then investigate the impacts of reservation price or excess capacity level on the optimal strategies. Finally, we characterize Nash Equilibrium and derive the optimal capacity reservation policy, in which the supplier will adopt exact capacity expansion policy.
Abstract:
This paper develops three (re)ordering models of a supply chain consisting of one risk-neutral manufacturer and one loss-averse retailer to study the coordination mechanism and the effects of the reordering policy on the coordination mechanism. The three (re)ordering policies are twice ordering policy with break-even quantity, twice ordering policy without break-even quantity and once ordering policy, respectively. We design a buyback-setup-cost-sharing mechanism to coordinate the supply chain for each policy, and Pareto analysis indicates that both the manufacturer and the retailer will realize a 'win-win' situation. By comparing the models, we find that twice ordering policy with break-even quantity is absolutely dominant for both the retailer and the supply chain. However, only if the break-even quantity is less than the mean quantity to failure, twice ordering policy without break-even quantity is dominant over the once ordering policy. The higher marginal revenue can induce more order quantity of the retailer under both twice ordering policy with break-even quantity and once ordering policy. However, it is interesting that it has no effect on the order plan of centralized decision-maker in twice ordering policy without break-even quantity.
Abstract:
In today's business environment, there are various reasons, namely, bulk purchase discounts, seasonality of products, re-order costs, etc., which force the buyer to order more than the warehouse capacity (owned warehouse). Such reasons call for additional storage space to store the excess units purchased. This additional storage space is typically a rented warehouse. It is known that the demand of seasonal products increases at the beginning of the season up to a certain moment and then is stabilized to a constant rate for the remaining time of the season (ramp type demand rate). As a result, the buyer prefers to keep a higher inventory at the beginning of the season and so more units than can be stored in owned warehouse may be purchased. The excess quantities need additional storage space, which is facilitated by a rented warehouse.
In this study an order level two-warehouse inventory model for deteriorating seasonal products is studied. Shortages at the owned warehouse are allowed subject to partial backlogging. This two-warehouse inventory model is studied under two different policies. The first policy starts with an instant replenishment and ends with shortages and the second policy starts with shortages and ends without shortages. For each of the models, conditions for the existence and uniqueness of the optimal solution are derived and a simple procedure is developed to obtain the overall optimal replenishment policy. The dynamics of the model and the solution procedure have been illustrated with the help of a numerical example and a comprehensive sensitivity analysis, with respect to the most important parameters of the model, is considered.
Abstract:
In the framework of multi-choice games, we propose a specific reduction to construct a dynamic process for the multi-choice Shapley value introduced by Nouweland et al. [8].
Abstract:
In [8], Zhang et al. proposed a modified three-term HS (MTTHS) conjugate gradient method and proved that this method converges globally for nonconvex minimization in the sense that $\liminf_{k\to\infty}\|\nabla f(x_k)\|=0$ when the Armijo or Wolfe line search is used. In this paper, we further study the convergence property of the MTTHS method. We show that the MTTHS method has strongly global convergence property (i.e., $\lim_{k\to\infty}\|\nabla f(x_k)\|=0$) for nonconvex optimization by the use of the backtracking type line search in [7]. Some preliminary numerical results are reported.
Abstract:
This paper analyzes an M/G/1 queue with general setup times from an economical point of view. In such a queue whenever the system becomes empty, the server is turned off. A new customer's arrival will turn the server on after a setup period. Upon arrival, the customers decide whether to join or balk the queue based on observation of the queue length and the status of the server, along with the reward-cost structure of the system. For the observable and almost observable cases, the equilibrium joining strategies of customers who wish to maximize their expected net benefit are obtained. Two numerical examples are presented to illustrate the equilibrium joining probabilities for these cases under some specific distribution functions of service times and setup times.
Abstract:
In this paper, a new non-monotone trust-region algorithm is proposed for solving unconstrained nonlinear optimization problems. We modify the retrospective ratio which is introduced by Bastin et al. [Math. Program., Ser. A (2010) 123: 395-418] to form a convex combination ratio for updating the trust-region radius. Then we combine the non-monotone technique with this new framework of trust-region algorithm. The new algorithm is shown to be globally convergent to a first-order critical point. Numerical experiments on CUTEr problems indicate that it is competitive with both the original retrospective trust-region algorithm and the classical trust-region algorithms.
Abstract:
The aim of this paper is to develop an improved inventory model which helps the enterprises to advance their profit increasing and cost reduction in a single vendor-single buyer environment with permissible delay in payments depending on the ordering quantity and imperfect production. Through this study, some numerical examples available in the literature are provided herein to apply the permissible delay in payments depending on the ordering quantity strategy. Furthermore, imperfect products will cause the cost and increase number of lots through the whole model. Therefore, for more closely conforming to the actual inventories and responding to the factors that contribute to inventory costs, our proposed model can be the references to the business applications. Finally, results of this study showed applying the permissible delay in payments can promote the cost reduction; and also showed a longer trade credit term can decrease costs for the complete supply chain.
Abstract:
Channel coordination is an optimal state with operation of channel. For achieving channel coordination, we present a quantity discount mechanism based on a fairness preference theory. Game models of the channel discount mechanism are constructed based on the entirely rationality and self-interest. The study shows that as long as the degree of attention (parameters) of retailer to manufacturer's profit and the fairness preference coefficients (parameters) of retailers satisfy certain conditions, channel coordination can be achieved by setting a simple wholesale price and fixed costs. We also discuss the allocation method of channel coordination profit, the allocation method ensure that retailer's profit is equal to the profit of independent decision-making, and manufacturer's profit is raised.
Abstract:
Constraint qualification (CQ) is an important concept in nonlinear programming. This paper investigates the motivation of introducing constraint qualifications in developing KKT conditions for solving nonlinear programs and provides a geometric meaning of constraint qualifications. A unified framework of designing constraint qualifications by imposing conditions to equate the so-called ``locally constrained directions" to certain subsets of ``tangent directions" is proposed. Based on the inclusion relations of the cones of tangent directions, attainable directions, feasible directions and interior constrained directions, constraint qualifications are categorized into four levels by their relative strengths. This paper reviews most, if not all, of the commonly seen constraint qualifications in the literature, identifies the categories they belong to, and summarizes the inter-relationship among them. The proposed framework also helps design new constraint qualifications of readers' specific interests.
Let me first recall the definition and some properties of the exponential function. The exponential function in base $a$ is defined as follows: $\exp_a(x) = a^x$. If $a>1$, such a function grows faster than any polynomial $p(x)$. More formally, this property is interpreted in the two following ways.
1) For any polynomial $p$ and any base $a>1$, there exists $N$ such that for all $n\geq N, \exp_a(n) > p(n)$.
2) For any polynomial $p$ and any base $a>1$, $\lim\limits_{x \rightarrow \infty} \frac{\exp_a(x)}{p(x)} = \infty$ (and so $\lim\limits_{x \rightarrow \infty} \frac{p(x)}{\exp_a(x)} = 0$).
Now, we remark that the function $h(\lambda) = 2^{\lambda/2}$ is an exponential function, because $h(\lambda) = 2^{\lambda/2}=\left(2^{1/2}\right)^\lambda= \sqrt{2}^\lambda =\exp_{\sqrt{2}}(\lambda)$. Finally, your function is $f(\lambda) = \frac{1}{\exp_{\sqrt{2}}(\lambda)}$. In the following, we will prove that $f$ is negligible using the two definitions.
1) $\sqrt{2} >1$, so for any polynomial $p$, there exists $N$ such that for all $n\geq N, \exp_{\sqrt{2}}(n) > p(n)$, which implies that for all $n\geq N, \frac{1}{\exp_{\sqrt{2}}(n)} < \frac{1}{p(n)}$. Since $f(\lambda) = \frac{1}{\exp_{\sqrt{2}}(\lambda)}$, we deduce that for all $n\geq N, f(n) < \frac{1}{p(n)}$.
2) $\sqrt{2} >1$, so for any polynomial $p$, $\lim\limits_{\lambda \rightarrow \infty} \frac{p(\lambda)}{\exp_{\sqrt{2}}(\lambda)} = 0$. Since $\frac{p(\lambda)}{\exp_{\sqrt{2}}(\lambda)} = p(\lambda) \frac{1}{\exp_{\sqrt{2}}(\lambda)} = p(\lambda) f(\lambda)$, we deduce that $\lim\limits_{\lambda \rightarrow \infty} p(\lambda) f(\lambda) = 0$.
To conclude, the intuition is that a polynomial divided by something that grows faster than any polynomial is a negligible function, because it becomes very small very quickly.
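A short numeric illustration (with $p(\lambda)=\lambda^{10}$ chosen arbitrarily by me as an example polynomial) shows how quickly $p(\lambda)\,f(\lambda)$ collapses toward zero:

```python
def f(lam):
    """f(lambda) = 1 / 2^(lambda/2) = 1 / exp_sqrt(2)(lambda)."""
    return 1.0 / 2 ** (lam / 2)

# Even against a fairly large polynomial, lambda^10, the product vanishes fast.
values = [lam**10 * f(lam) for lam in (100, 200, 400)]
```

The values drop over many orders of magnitude; by $\lambda = 400$ the product is already below $10^{-30}$.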
The sequence $\{a_n\}_{n=1}^\infty$ is Cauchy if for every $\epsilon>0$, there is a corresponding natural number $N$ such that
$$ m,n\geq N\Rightarrow |a_m-a_n|<\epsilon $$
I am doing a particular problem that talks about Cauchy sequences of rational numbers, and I am not sure how that is different from an ordinary Cauchy sequence (defined above).
If $\{a_n\}_{n=1}^\infty$ is a Cauchy sequence of rational numbers and there is a subsequence $\{a_{n_j}\}_{j=1}^\infty$ which converges to a rational number $\frac{p}{q}$, then I need to show that the sequence $\{a_n\}_{n=1}^\infty$ converges to the rational number $\frac{p}{q}$.
How would this be different if we did not talk about rational numbers, so that the problem was the following:
If $\{a_n\}_{n=1}^\infty$ is a Cauchy sequence of real numbers and there is a subsequence $\{a_{n_j}\}_{j=1}^\infty$ which converges to a real number $L$, then I need to show that the sequence $\{a_n\}_{n=1}^\infty$ converges to the real number $L$.
Since this question talks about rational numbers and not real numbers, it confuses me. |
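The distinction matters because the rationals are not complete: a Cauchy sequence of rationals need not converge to any rational, which is why the hypothesis that the subsequence has a rational limit is doing real work. A small sketch (using decimal truncations of $\sqrt{2}$, a standard textbook example, not anything from the question itself) illustrates this:

```python
from math import isqrt, sqrt

def a(n):
    """Truncation of sqrt(2) to n decimal places: a rational number."""
    return isqrt(2 * 10 ** (2 * n)) / 10 ** n   # floor(sqrt(2) * 10^n) / 10^n

# Terms get arbitrarily close to each other (the sequence is Cauchy in Q) ...
gap = abs(a(6) - a(12))

# ... but the only possible limit is sqrt(2), which is irrational.
err = abs(a(12) - sqrt(2))
```

So this sequence is Cauchy in $\mathbb{Q}$ yet has no rational limit; the problem's subsequence hypothesis rules out exactly this situation.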
I'm an REU student who has just recently been thrown into a dynamical system problem without basically any background in the subject. My project advisor has told me that I should represent regions of my dynamical system by letters and look at the sequence of letters formed by the trajectory of a point under the iteration of my map.
He claims that it's a common result that if two points share the same sequence, then this sequence of letters is periodic. I've asked around among some of the other students, and they said that this is sometimes called symbolic dynamics, but none of them remembers this sort of result. I've also searched the internet, but it's possible that my google-fu is weak, since I didn't find any answers that way.
To go one step further, there are obvious cases where it is false: take $S^1\times I$, and encode the regions as $A$ corresponding to $[0,\pi)\times I$ and $B$ corresponding to $[\pi,2\pi)\times I$, with map $f(x,y)=(x+1\mod{2\pi},y)$. Obviously any two points $(x,y)$ and $(x,z)$ with $y\neq z$ will have the same sequence, but since 1 is an irrational multiple of $2\pi$, the trajectory will never be periodic.
I'm interested in the general theory and common techniques applied to the question:
Represent a dynamical system by associating symbols with regions of the space. When is it true that if two distinct points' trajectories have the same sequence of symbols, then the sequence of symbols is periodic?
Any answers, examples, or specific references would be greatly appreciated. |
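For what it's worth, the counterexample in the question is easy to check by direct simulation; the partition and map below are the ones described above, while the starting points are arbitrary choices of mine:

```python
import math

TWO_PI = 2 * math.pi

def itinerary(x, y, steps=200):
    """Symbol sequence of (x, y) under f(x, y) = (x + 1 mod 2*pi, y),
    with symbol A on [0, pi) x I and B on [pi, 2*pi) x I."""
    symbols = []
    for _ in range(steps):
        symbols.append('A' if x < math.pi else 'B')
        x = (x + 1.0) % TWO_PI   # y never changes and never affects the symbol
    return ''.join(symbols)

s1 = itinerary(0.3, 0.1)   # two distinct points with the same x-coordinate
s2 = itinerary(0.3, 0.9)
```

The two itineraries agree (the coding never sees the $y$-coordinate), yet no small period fits the sequence, consistent with the aperiodicity of an irrational rotation.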
A Simple Abelian Group if and only if the Order is a Prime Number: Let $G$ be a group. (Do not assume that $G$ is a finite group.) Prove that $G$ is a simple abelian group if and only if the order of $G$ is a prime number. Definition. A group $G$ is called simple if $G$ is a nontrivial group and the only normal subgroups of $G$ is […]
Commutator Subgroup and Abelian Quotient Group: Let $G$ be a group and let $D(G)=[G,G]$ be the commutator subgroup of $G$. Let $N$ be a subgroup of $G$. Prove that the subgroup $N$ is normal in $G$ and $G/N$ is an abelian group if and only if $N \supset D(G)$. Definitions. Recall that for any $a, b \in G$, the […]
If the Quotient by the Center is Cyclic, then the Group is Abelian: Let $Z(G)$ be the center of a group $G$. Show that if $G/Z(G)$ is a cyclic group, then $G$ is abelian. Steps. Write $G/Z(G)=\langle \bar{g} \rangle$ for some $g \in G$. Any element $x\in G$ can be written as $x=g^a z$ for some $z \in Z(G)$ and $a \in \Z$. Using […]
Group of Order 18 is Solvable: Let $G$ be a finite group of order $18$. Show that the group $G$ is solvable. Definition. Recall that a group $G$ is said to be solvable if $G$ has a subnormal series \[\{e\}=G_0 \triangleleft G_1 \triangleleft G_2 \triangleleft \cdots \triangleleft G_n=G\] such […]
Group of Order $pq$ is Either Abelian or the Center is Trivial: Let $G$ be a group of order $|G|=pq$, where $p$ and $q$ are (not necessarily distinct) prime numbers. Then show that $G$ is either abelian group or the center $Z(G)=1$. Hint. Use the result of the problem "If the Quotient by the Center is Cyclic, then the Group is […]
When using a hypothesis test for matched or paired samples, the following characteristics should be present:
- Simple random sampling is used.
- Sample sizes are often small.
- Two measurements (samples) are drawn from the same pair of individuals or objects.
- Differences are calculated from the matched or paired samples.
- The differences form the sample that is used for the hypothesis test.
- Either the matched pairs have differences that come from a population that is normal, or the number of differences is sufficiently large so that the distribution of the sample mean of differences is approximately normal.
In a hypothesis test for matched or paired samples, subjects are matched in pairs and differences are calculated. The differences are the data. The population mean for the differences, \(\mu_{d}\), is then tested using a Student's \(t\)-test for a single population mean with \(n - 1\) degrees of freedom, where \(n\) is the number of differences.
The test statistic (\(t\)-score) is:
\[t = \dfrac{\bar{x}_{d} - \mu_{d}}{\left(\dfrac{s_{d}}{\sqrt{n}}\right)}\]
Example \(\PageIndex{1}\)
A study was conducted to investigate the effectiveness of hypnotism in reducing pain. Results for randomly selected subjects are shown in Table. A lower score indicates less pain. The "before" value is matched to an "after" value and the differences are calculated. The differences have a normal distribution. Are the sensory measurements, on average, lower after hypnotism? Test at a 5% significance level.
Subject:   A     B     C     D     E     F     G     H
Before:    6.6   6.5   9.0   10.3  11.3  8.1   6.3   11.6
After:     6.8   2.4   7.4   8.5   8.1   6.1   3.4   2.0

Answer
Corresponding "before" and "after" values form matched pairs. (Calculate "after" – "before.")
After Data   Before Data   Difference
6.8          6.6            0.2
2.4          6.5           -4.1
7.4          9.0           -1.6
8.5          10.3          -1.8
8.1          11.3          -3.2
6.1          8.1           -2.0
3.4          6.3           -2.9
2.0          11.6          -9.6
The data for the test are the differences: \(\{0.2, -4.1, -1.6, -1.8, -3.2, -2, -2.9, -9.6\}\)
The sample mean and sample standard deviation of the differences are: \(\bar{x}_{d} = -3.13\) and \(s_{d} = 2.91\) Verify these values.
Let \(\mu_{d}\) be the population mean for the differences. We use the subscript \(d\) to denote "differences."
Random variable:
\(\bar{X}_{d} =\) the mean difference of the sensory measurements
\[H_{0}: \mu_{d} \geq 0\]
The null hypothesis is zero or positive, meaning that there is the same or more pain felt after hypnotism. That means the subject shows no improvement. \(\mu_{d}\) is the population mean of the differences.
\[H_{a}: \mu_{d} < 0\]
The alternative hypothesis is negative, meaning there is less pain felt after hypnotism. That means the subject shows improvement. The score should be lower after hypnotism, so the difference ought to be negative to indicate improvement.
Distribution for the test:
The distribution is a Student's \(t\) with \(df = n - 1 = 8 - 1 = 7\). Use \(t_{7}\). (Notice that the test is for a single population mean.) Calculate the p-value using the Student's \(t\)-distribution:
\[p\text{-value} = 0.0095\]
Graph:
Figure 10.5.1.
\(\bar{X}_{d}\) is the random variable for the differences.
The sample mean and sample standard deviation of the differences are:
\(\bar{x}_{d} = -3.13\)
\(s_{d} = 2.91\)
Compare \(\alpha\) and the \(p\text{-value}\)
\(\alpha = 0.05\) and \(p\text{-value} = 0.0095\). \(\alpha > p\text{-value}\)
Make a decision
Since \(\alpha > p\text{-value}\), reject \(H_{0}\). This means that \(\mu_{d} < 0\) and there is improvement.
Conclusion
At a 5% level of significance, from the sample data, there is sufficient evidence to conclude that the sensory measurements, on average, are lower after hypnotism. Hypnotism appears to be effective in reducing pain.
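The computation above can be reproduced with a few lines of standard-library Python; the final p-value lookup is left to a t-table or calculator, as in the text:

```python
from math import sqrt
from statistics import mean, stdev

before = [6.6, 6.5, 9.0, 10.3, 11.3, 8.1, 6.3, 11.6]
after  = [6.8, 2.4, 7.4, 8.5, 8.1, 6.1, 3.4, 2.0]

d = [a - b for a, b in zip(after, before)]   # "after" minus "before"

xbar_d = mean(d)     # sample mean of the differences
s_d = stdev(d)       # sample standard deviation of the differences
n = len(d)

t = (xbar_d - 0) / (s_d / sqrt(n))   # test statistic, with mu_d = 0 under H0
```

This gives \(\bar{x}_{d} \approx -3.13\), \(s_{d} \approx 2.91\), and \(t \approx -3.04\); comparing against a \(t_{7}\) distribution (one-tailed, left) yields the p-value of about 0.0095 quoted above.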
Use your list of differences as the data. Press STAT and arrow over to TESTS. Press 2:T-Test. Arrow over to Data and press ENTER. Arrow down and enter 0 for \(\mu_{0}\), the name of the list where you put the data, and 1 for Freq:. Arrow down to \(\mu\): and arrow over to < \(\mu_{0}\). Press ENTER. Arrow down to Calculate and press ENTER. The \(p\text{-value}\) is 0.0094, and the test statistic is -3.04. Repeat these instructions, except arrow to Draw (instead of Calculate), and press ENTER.
Exercise \(\PageIndex{1}\)
A study was conducted to investigate how effective a new diet was in lowering cholesterol. Results for the randomly selected subjects are shown in the table. The differences have a normal distribution. Are the subjects’ cholesterol levels lower on average after the diet? Test at the 5% level.
Subject | A | B | C | D | E | F | G | H | I
Before | 209 | 210 | 205 | 198 | 216 | 217 | 238 | 240 | 222
After | 199 | 207 | 189 | 209 | 217 | 202 | 211 | 223 | 201

Answer
The \(p\text{-value}\) is 0.0130, so we can reject the null hypothesis. There is enough evidence to suggest that the diet lowers cholesterol.
Example \(\PageIndex{2}\)
A college football coach was interested in whether the college's strength development class increased his players' maximum lift (in pounds) on the bench press exercise. He asked four of his players to participate in a study. The amount of weight they could each lift was recorded before they took the strength development class. After completing the class, the amount of weight they could each lift was again measured. The data are as follows:
Weight (in pounds) | Player 1 | Player 2 | Player 3 | Player 4
Amount of weight lifted prior to the class | 205 | 241 | 338 | 368
Amount of weight lifted after the class | 295 | 252 | 330 | 360
The coach wants to know if the strength development class makes his players stronger, on average.

Record the differences data. Calculate the differences by subtracting the amount of weight lifted prior to the class from the weight lifted after completing the class. The data for the differences are: \(\{90, 11, -8, -8\}\). Assume the differences have a normal distribution.
Using the differences data, calculate the sample mean and the sample standard deviation.
\[\bar{x}_{d} = 21.3\]
and
\[s_{d} = 46.7\]
Using the difference data, this becomes a test of a single __________ (fill in the blank).
Define the random variable: \(\bar{X}_{d} =\) the mean difference in the maximum lift per player.
The distribution for the hypothesis test is \(t_{3}\).
\(H_{0}: \mu_{d} \leq 0\), \(H_{a}: \mu_{d} > 0\)

Graph:

Figure 10.5.2.

Calculate the \(p\text{-value}\): The \(p\text{-value}\) is 0.2150.

Decision: If the level of significance is 5%, the decision is not to reject the null hypothesis, because \(\alpha < p\text{-value}\).

What is the conclusion?
At a 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the strength development class helped to make the players stronger, on average.
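The same check can be run for this example (a Python sketch using only the standard library; the p-value again comes from the \(t_{3}\) distribution):

```python
from math import sqrt
from statistics import mean, stdev

before = [205, 241, 338, 368]   # weight lifted prior to the class
after = [295, 252, 330, 360]    # weight lifted after the class
diffs = [a - b for a, b in zip(after, before)]   # [90, 11, -8, -8]

x_bar, s_d = mean(diffs), stdev(diffs)           # 21.25 and ~46.7
t = x_bar / (s_d / sqrt(len(diffs)))             # ~0.91, with df = 3
```

With \(t \approx 0.91\) and \(df = 3\), a \(t\)-table gives the one-tailed p-value of about 0.215 quoted above.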
Exercise \(\PageIndex{2}\)
A new prep class was designed to improve SAT test scores. Five students were selected at random. Their scores on two practice exams were recorded, one before the class and one after. The data are recorded in the table below. Are the scores, on average, higher after the class? Test at the 5% level.
SAT Scores | Student 1 | Student 2 | Student 3 | Student 4
Score before class | 1840 | 1960 | 1920 | 2150
Score after class | 1920 | 2160 | 2200 | 2100

Answer
The \(p\text{-value}\) is 0.0874, so we decline to reject the null hypothesis. The data do not support that the class improves SAT scores significantly.
Example \(\PageIndex{3}\)
Seven eighth graders at Kennedy Middle School measured how far they could push the shot-put with their dominant (writing) hand and their weaker (non-writing) hand. They thought that they could push equal distances with either hand. The data were collected and recorded in the table below.
Distance (in feet) using | Student 1 | Student 2 | Student 3 | Student 4 | Student 5 | Student 6 | Student 7
Dominant Hand | 30 | 26 | 34 | 17 | 19 | 26 | 20
Weaker Hand | 28 | 14 | 27 | 18 | 17 | 26 | 16
Conduct a hypothesis test to determine whether the mean difference in distances between the children’s dominant versus weaker hands is significant.
Record the
differences data. Calculate the differences by subtracting the distances with the weaker hand from the distances with the dominant hand. The data for the differences are: \(\{2, 12, 7, –1, 2, 0, 4\}\). The differences have a normal distribution.
Using the differences data, calculate the sample mean and the sample standard deviation. \(\bar{x}_{d} = 3.71\), \(s_{d} = 4.5\).
Random variable: \(\bar{X} =\) mean difference in the distances between the hands. Distribution for the hypothesis test: \(t_{6}\)
\(H_{0}: \mu_{d} = 0\), \(H_{a}: \mu_{d} \neq 0\)
Graph:
Figure 10.5.3.

Calculate the \(p\text{-value}\): The \(p\text{-value}\) is 0.0716 (using the data directly).

(Using \(\bar{x}_{d} = 3.71\) and \(s_{d} = 4.5\), the test statistic is 2.18 and the \(p\text{-value}\) is 0.0719.)

Decision: Assume \(\alpha = 0.05\). Since \(\alpha < p\text{-value}\), do not reject \(H_{0}\).

Conclusion: At the 5% level of significance, from the sample data, there is not sufficient evidence to conclude that there is a difference between the distances the children push the shot-put with their dominant and weaker hands.
Exercise \(\PageIndex{3}\)
Five ball players think they can throw the same distance with their dominant hand (throwing) and off-hand (catching hand). The data were collected and recorded in the table below. Conduct a hypothesis test to determine whether the mean difference in distances between the dominant and off-hand is significant. Test at the 5% level.
 | Player 1 | Player 2 | Player 3 | Player 4 | Player 5
Dominant Hand | 120 | 111 | 135 | 140 | 125
Off-hand | 105 | 109 | 98 | 111 | 99

Answer
The \(p\text{-value}\) is 0.0230, so we can reject the null hypothesis. The data show that, on average, the players do not throw the same distance with their off-hands as they do with their dominant hands.
Chapter Review
A hypothesis test for matched or paired samples (\(t\)-test) has these characteristics:
Test the differences by subtracting one measurement from the other measurement.
Random variable: \(\bar{x}_{d} =\) mean of the differences.
Distribution: Student's-t distribution with \(n - 1\) degrees of freedom.
If the number of differences is small (less than 30), the differences must follow a normal distribution.
Two samples are drawn from the same set of objects.
Samples are dependent.

Formula Review

Test Statistic (t-score):

\[t = \dfrac{\bar{x}_{d} - \mu_{d}}{\left(\dfrac{s_{d}}{\sqrt{n}}\right)}\]
where:
\(\bar{x}_{d}\) is the mean of the sample differences. \(\mu_{d}\) is the mean of the population differences. \(s_{d}\) is the sample standard deviation of the differences. \(n\) is the sample size.
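The formula above can be wrapped in a small helper and checked against the exercises in this section (a Python sketch, standard library only; `paired_t` is my own name, not from the text):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after, mu_d=0.0):
    """Matched-pairs t statistic: t = (x_bar_d - mu_d) / (s_d / sqrt(n))."""
    diffs = [a - b for a, b in zip(after, before)]
    x_bar, s_d, n = mean(diffs), stdev(diffs), len(diffs)
    return (x_bar - mu_d) / (s_d / sqrt(n))

# Exercise 1 (cholesterol): t ~ -2.73 with df = 8; one-tailed p ~ 0.013
t_chol = paired_t([209, 210, 205, 198, 216, 217, 238, 240, 222],
                  [199, 207, 189, 209, 217, 202, 211, 223, 201])

# Exercise 3 (throwing distances): t ~ 3.59 with df = 4; two-tailed p ~ 0.023
t_throw = paired_t([105, 109, 98, 111, 99],      # off-hand
                   [120, 111, 135, 140, 125])    # dominant hand
```

Converting these statistics to p-values via a \(t\)-table reproduces the answers 0.0130 and 0.0230 quoted in the exercises.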
Contributors
Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114. |
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of row?
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Suppose that we have a twice-differentiable function $f$ on $x\in [0,\infty)$ such that
$f(x)>0$ on $x\in [0,\infty)$ (i.e. strictly positive)
$f'(x)<0$ on $x\in [0,\infty)$ (i.e. strictly decreasing)
$f^{''}(x) >0$ on $x\in [0,\infty)$ (i.e. strictly convex)
$\lim_{x \to \infty} f(x)=0$
and we have a twice-differentiable function $g(x)$ such that
$f(x)>g(x)>0$ on $x\in [0,\infty)$ (i.e. strictly less than $f(x)$ and strictly positive)
$g'(x)<0$ on $x\in [0,\infty)$ (i.e. strictly decreasing)

Is it true that there exists some $x_0$ such that $g^{''}(x)>0$ for all $x \in [x_0,\infty)$?
Or, in other words: does a function dominated by a convex function eventually become convex?
As an example of $f(x)$ consider $e^{-x}$ or $\frac{1}{1+x}$.
Edit: Thanks to the example given by @user225318, the above is not true. What if we add one more assumption, namely:
3) There exists $x_1$ such that $g^{'}(x) < f^{'}(x)$ for all $x \in [x_1, \infty)$ (i.e. the derivative of $g(x)$ is eventually dominated by the derivative of $f(x)$).
Edit: Assumption 3) makes no sense.
I've written the following code for a quantum circuit using the qcircuit package and everything looks how I want it to look except that the curly brackets enclosing the bottom 2 lines are too far away.
I understand that my code for getting the labels on the dotted lines is probably not the best way of achieving this, but the main problem I'm concerned with is getting the curly braces closer to the circuit.
\begin{equation*}
\Qcircuit @C=1.4em @R=1em {
&&& \lstick{\ket{\psi}} \ar@{.}[]+<0.5em,1em>;[d]+<0.5em,-3em> & \ctrl{1} \ar@{.}[]+<1em,1em>;[d]+<1em,-3em> & \gate{H} \ar@{.}[]+<1.5em,1em>;[d]+<1.5em,-3em> & \meter \ar@{.}[]+<1.5em,1em>;[d]+<1.5em,-3em> & \ustick{M_{1}} \cw & \cw & \cw & \control \cw \cwx[2] \\
&&& & \targ & \qw & \meter & \ustick{M_{2}} \cw & \control \cw \cwx[1] \\
&&& & \qw & \qw & \qw & \qw & \gate{X^{M_{2}}} & \qw & \gate{Z^{M_{1}}} & \ar@{.}[]+<-1em,-0.5em>;[u]+<-1em,6em> \qw \\
&&& & \hspace{-2em} \ket{\psi_{0}} & \hspace{-2.2em} \ket{\psi_{1}} & \hspace{-2.2em} \ket{\psi_{2}} & \hspace{-0.5em} \ket{\psi_{3}} & \hspace{15em} \ket{\psi_{4}} \inputgroupv{2}{3}{0.7em}{1.1em}{\ket{\Phi^{+}}} \\
}
\end{equation*}
The \inputgroupv line of code is the way I've got the curly braces that are currently there but changing the "0.7em" and "1.1em" values doesn't seem to help in getting the bracket closer. |
ISSN: 1930-5311
eISSN: 1930-532X
Journal of Modern Dynamics
October 2013 , Volume 7 , Issue 4
Abstract:
Let ${\cal Q}$ be a connected component of a stratum in the moduli space of abelian or quadratic differentials for a nonexceptional Riemann surface $S$ of finite type. We prove that the probability measure on ${\cal Q}$ in the Lebesgue measure class which is invariant under the Teichmüller flow is obtained by Bowen's construction.
Abstract:
We analyze a class of $C^0$-small but $C^1$-large deformations of Anosov diffeomorphisms that break the topological conjugacy and structural stability, but unexpectedly retain the following stability property. The usual semiconjugacy mapping the deformation to the Anosov diffeomorphism is in fact an isomorphism with respect to all ergodic, invariant probability measures with entropy close to the maximum. In particular, the value of the topological entropy and the existence of a unique measure of maximal entropy are preserved. We also establish expansiveness around those measures. However, this expansivity is too weak to ensure the existence of symbolic extensions.
Many constructions of robustly transitive diffeomorphisms can be done within this class. In particular, we show that it includes a class described by Bonatti and Viana of robustly transitive diffeomorphisms that are not partially hyperbolic.
Abstract:
Given any Liouville number $\alpha$, it is shown that the nullity of the Hausdorff dimension of the invariant measure is generic in the space of the orientation-preserving $C^\infty$ diffeomorphisms of the circle with rotation number $\alpha$.
Abstract:
We consider a partially hyperbolic $C^1$-diffeomorphism $f\colon M \rightarrow M$ with a uniformly compact $f$-invariant center foliation $\mathcal{F}^c$. We show that if the unstable bundle is one-dimensional and oriented, then the holonomy of the center foliation vanishes everywhere, the quotient space $M/\mathcal{F}^c$ of the center foliation is a torus and $f$ induces a hyperbolic automorphism on it, in particular, $f$ is centrally transitive.
We actually obtain further interesting results without restrictions on the unstable, stable and center dimension: we prove a kind of spectral decomposition for the chain recurrent set of the quotient dynamics, and we establish the existence of a holonomy-invariant family of measures on the unstable leaves (Margulis measure).
Abstract:
We show that for every compact $3$-manifold $M$ there exists an open subset of $Diff^1(M)$ in which every generic diffeomorphism admits uncountably many ergodic probability measures that are hyperbolic while their supports are disjoint and admit a basis of attracting neighborhoods and a basis of repelling neighborhoods. As a consequence, the points in the support of these measures have no stable and no unstable manifolds. This contrasts with the higher-regularity case, where Pesin Theory gives stable and unstable manifolds with complementary dimensions at almost every point. We also give such an example in dimension two, without local genericity.
Abstract:
We consider cocycles $\tilde A: \mathbb{T}\times K^d \ni (x,v)\mapsto ( x+\omega, A(x,E)v)$ with $\omega$ Diophantine, $K=\mathbb{R}$ or $K=\mathbb{C}$. We assume that $A: \mathbb{T}\times \mathfrak{E} \to GL(d,K)$ is continuous, depends analytically on $x\in\mathbb{T}$ and is Hölder in $E\in \mathfrak{E}$, where $\mathfrak{E}$ is a compact metric space. It is shown that if all Lyapunov exponents are distinct at one point $E_{0}\in\mathfrak{E}$, then they remain distinct near $E_{0}$. Moreover, they depend in a Hölder fashion on $E\in B$ for any ball $B\subset \mathfrak{E}$ where they are distinct. Similar results, with a weaker modulus of continuity, hold for higher-dimensional tori $\mathbb{T}^\nu$ with a Diophantine shift. We also derive optimal statements about the rate of convergence of the finite-scale Lyapunov exponents to their infinite-scale counterparts. A key ingredient in our arguments is the Avalanche Principle, a deterministic statement about long finite products of invertible matrices, which goes back to work of Michael Goldstein and the author. We also discuss applications of our techniques to products of random matrices.
Latest results from NA62
Pre-published on: 2019 July 05
Published on: 2019 October 04
Abstract
The ultra-rare kaon decay $K\rightarrow \pi \nu \bar{\nu}$, being one of the theoretically cleanest meson decays, is very sensitive to the effects of New Physics at high mass scales. NA62 is a fixed-target experiment at the CERN SPS designed to measure the branching ratio of the $K^{+}\rightarrow \pi^{+} \nu \bar{\nu}$ decay with 10% precision using a novel decay-in-flight technique. NA62 took data during run periods in 2016, 2017 and 2018. The results from the analysis of the data collected by NA62 in 2016, corresponding to $(1.21 \pm 0.04_{syst}) \times 10^{11}$ $K^{+}$ decays (2% of the full available statistics), will be presented and future prospects will be reviewed. Although NA62 was designed to measure $K\rightarrow \pi \nu \bar{\nu}$, it is also sensitive to other rare and forbidden kaon decays, especially those with two leptons in the final state. The status and prospects of searches for lepton flavour and lepton number violation in kaon decays at the NA62 experiment will also be discussed.
DOI: https://doi.org/10.22323/1.352.0122 |
Not quite an answer, but a heuristic from the point of view of the Weil philosophy about why rationality mod $p$ is much easier.
Weil reduced the rationality of the zeta function to the existence of a good cohomology theory. The crucial property of a cohomology theory satisfying the Weil axioms is that it has characteristic-zero coefficients. This is needed because the Lefschetz formula is an identity in the coefficient ring, so to compute the number of $\mathbb{F}_q$-points precisely, we need the coefficient ring to have no $\mathbb{Z}$-torsion.

But if you are interested only in the number of points modulo $p$, a cohomology theory with $\mathbb{F}_p$-coefficients satisfying the Lefschetz trace formula would suffice (all the formulas for zeta functions should just be reduced modulo $p$).

It is very easy to give an example of such a cohomology theory: $X\mapsto H^i(X,\mathcal{O}_X)$. The holomorphic Lefschetz trace formula applied to Frobenius gives $$\sum\limits_{x\in X(\mathbb{F}_{p^n})}\frac{1}{\det(1-Fr|_{T_{x,X}})}=\sum\limits_i(-1)^i\mathrm{Tr}\, Fr|_{H^i(X,\mathcal{O}_X)}$$ But Frobenius is inseparable, so its differential is zero and the LHS is still $\# X(\mathbb{F}_{p^n})$, so we get the desired trace formula modulo $p$.

So rationality mod $p$ should be much easier. You can also trace the analogy between the Chevalley-Warning method and the holomorphic Lefschetz trace formula in the proof of the equivalence of the different definitions of a supersingular elliptic curve (see e.g. Hartshorne).

Historically, rationality mod $p$ was certainly known already to Serre, since he considered Witt cohomology $H^i(X,W\mathcal{O}_X)$ as a candidate for a Weil cohomology.
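To make the "mod $p$ is elementary" point concrete, here is a toy illustration of my own (not from the answer above) of the Chevalley-Warning identity $N \equiv \sum_x \bigl(1 - f(x)^{p-1}\bigr) \pmod p$ for counting solutions of $f = 0$ over $\mathbb{F}_p$:

```python
# Counting F_p-solutions of f = 0 two ways: brute force, and via the
# character sum sum(1 - f(x)^(p-1)), which only sees the count mod p.
# Since a^(p-1) is 1 for a != 0 and 0 for a = 0, the two agree mod p.
p = 7
f = lambda x, y: (y * y - x**3 - x - 1) % p   # an arbitrary affine curve

count = sum(1 for x in range(p) for y in range(p) if f(x, y) == 0)
char_sum = sum(1 - pow(f(x, y), p - 1, p)
               for x in range(p) for y in range(p)) % p
```

No factorisation of the zeta function is needed for this congruence, which is the elementary shadow of the trace formula above.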
Let $x\in H$. We have $P_n(x)=\sum_{j=1}^n\langle x,e_j\rangle e_j$, so, by the Bessel-Parseval equality, $$\lVert P_nx-x\rVert^2=\sum_{j\geq n+1}|\langle x,e_j\rangle|^2.$$ As the latter series converges, its tail tends to zero, which gives the result.
Now, let $K\subset H$ be compact. Fix $\varepsilon >0$. Then we can find an integer $N$ and $x_1,\dots,x_N\in K$ such that for each $x\in K$ there is $1\leq k \leq N$ with $\lVert x-x_k\rVert\leq\varepsilon$. Fix $x\in K$ and pick such a $k$. Since $I-P_n$ is an orthogonal projection, $\lVert I-P_n\rVert\leq 1$, so the triangle inequality gives $$\lVert P_nx-x\rVert\leq \lVert (I-P_n)(x-x_k)\rVert+\lVert P_nx_k-x_k\rVert\leq \varepsilon+\max_{1\leq k\leq N}\Big(\sum_{j\geq n+1}|\langle x_k,e_j\rangle|^2\Big)^{1/2}.$$ As the right-hand side doesn't depend on $x$, we have $$\sup_{x\in K}\lVert P_nx-x\rVert\leq \varepsilon+\max_{1\leq k\leq N}\Big(\sum_{j\geq n+1}|\langle x_k,e_j\rangle|^2\Big)^{1/2}.$$ Now take the $\limsup_{n\to+\infty}$ to get the result.
If $T$ is compact, then $K:=\overline{T(B(0,1))}$ is compact, so we apply the previous result to this $K$.
Note that the property of approximation of a compact operator by finite-rank operators is true in any Hilbert space, not only in separable ones. To see that, fix $\varepsilon>0$; then take $y_1,\dots,y_N$ such that $T(B(0,1))\subset \bigcup_{j=1}^NB(y_j,\varepsilon)$. Let $P$ be the projection onto the vector space generated by $\{y_1,\dots,y_N\}$ (it's a closed subspace). Consider $PT$: it's a finite-rank operator. Now take $x\in B(0,1)$. Then pick $j$ such that $\lVert Tx-y_j\rVert\leq\varepsilon$. We also have, as $\lVert P\rVert\leq 1$, that $\lVert PTx-Py_j\rVert\leq \varepsilon$. As $Py_j=y_j$, we get $\lVert PTx-Tx\rVert\leq 2\varepsilon$.
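As a toy numerical illustration of the finite-rank approximation (my own example, not part of the proof above): for the diagonal operator \(T e_j = e_j/j\), which is compact on \(\ell^2\), the truncations \(P_nT\) converge to \(T\) in operator norm at the explicit rate \(1/(n+1)\):

```python
# The operator norm of a diagonal operator is the sup of |entries|, so
# ||T - P_n T|| for T e_j = e_j / j is just the largest remaining entry.
N = 1000                                  # a large finite-dimensional section
diag = [1.0 / (j + 1) for j in range(N)]  # entries 1, 1/2, 1/3, ...

def truncation_error(n):
    # norm of the tail after cutting the first n coordinates
    return max(diag[n:])

errors = [truncation_error(n) for n in (1, 10, 100)]
```

The errors come out to exactly 1/2, 1/11 and 1/101, matching the \(1/(n+1)\) rate.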
Set up
I'm reading a text where formulae are numbered in the following way. First there is a numbering function $\nu$ such that all logical particles (truth-functional operators and quantifiers) are congruent to $0$ mod 5. In the author's choice, $\nu(\neg)=0$, $\nu(\rightarrow)=5$, and so on, with some arbitrary choices all divisible by 5. For a variable $v_n$ we set $\nu(v_n)=5n+1$. After that we assign values to $\underline{0}$, the numeral meant to represent the number 0, to $\approx$, the symbol meant to represent equality of terms, and then to the arithmetic operations.
$$\nu(\underline{0})=2,\qquad \nu(\approx)=3\\ \nu({\bf s})=4,\ \nu(+)=9,\ \nu(\cdot)=14$$
The Gödel number of a term or formula is then regarded as a list. The following defines the Gödel number for terms, in order of variables, the unique constant $\underline{0}$, successor, plus, and times.
$$GN(v_n) = \langle \nu(v_n)\rangle,\ GN(\underline{0})=\langle\nu(\underline{0})\rangle,\ GN({\bf s}(t)) = \langle \nu({\bf s}),GN(t)\rangle \\ GN(t_1+t_2)=\langle \nu(+),GN(t_1),GN(t_2)\rangle\\ GN(t_1\cdot t_2)=\langle \nu(\cdot),GN(t_1),GN(t_2)\rangle$$
Hence every term is coded in a tree structure of nested lists. We typically use $x$ for the code number of any list of numbers, $y$ for an index in the list, and $(x)_y$ for the number encoded in the $y$th place. We use formulae to represent numbers and numbers to encode lists. We have a defined length function, $\ell(x)$, and a function $th(y,x)$, such that $\ell(x)$ is the length of the list $x$ and $th(y,x)$ is the $y$th coordinate of $x$; each of these is represented by some formula. $(x)_y$ is just convenient notation for $th(y,x)$.
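The text does not fix a concrete list coding, but a standard choice that makes \(\ell\) and \(th\) computable and makes \((x)_y < x\) visible is the prime-power scheme \(\langle a_1,\dots,a_k\rangle = p_1^{a_1+1}\cdots p_k^{a_k+1}\). A Python sketch of this hypothetical realization (helper names are my own):

```python
def primes(k):
    """First k primes, by trial division (fine for small k)."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def code(lst):
    """<a_1,...,a_k> = prod p_i^(a_i + 1); the +1 keeps zero entries visible."""
    x = 1
    for p, a in zip(primes(len(lst)), lst):
        x *= p ** (a + 1)
    return x

def th(y, x):
    """The y-th coordinate (x)_y of the list coded by x (1-indexed y)."""
    p = primes(y)[-1]
    e = 0
    while x % p == 0:
        x, e = x // p, e + 1
    return e - 1

x = code([4, 9, 2])   # a flat list, purely illustrative
```

Under this coding each coordinate \(th(y,x)\) is bounded by an exponent in the factorization of \(x\), so \((x)_y < x\) holds, as the author claims.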
Question
The author says that, in any given term tree, encoded by $x$ which is a nested list of lists, and for any coordinate $y$, we have $(x)_y<x$ and therefore as we go down a branch the Gödel number of any term decreases. Ok, that I get by nature of how we've defined the construction of lists.
Next let $t$ be the Gödel number of some term. The author says that, since at every leaf of the tree we have atomic terms with Gödel number > 0, then every branch must have length $< t$. This is the part I don't quite get. I could imagine justifying the conclusion some other way, but I don't see how the given justification is adequate (or even really relevant). |
I know of two very general frameworks for describing generalizations of what a "cohomology theory" should be: Grothendieck's "six functors", and the theory of spectra.
In the former, one assigns to every "space" $X$ a triangulated category $\newcommand{\D}{\mathsf D} \D (X)$, its derived category, and to each morphism $f \colon X \to Y$ derived pushforward/pullback maps $f_\ast, f_!, f^\ast, f^!$ between the derived categories (as well as $\mathcal Hom$ and $\otimes$), which are required to satisfy a list of formal properties: adjunctions, base change theorems, projection formula, etc. The usual cohomology of a space is given by the functor $(a_X)_\ast (a_X)^\ast$ applied to our choice of coefficients in $\D(\mathrm{pt})$, where $a_X$ is the map from $X$ to a point. But the formalism also incorporates sheaf cohomology, and allows us to talk in a uniform language also about other cohomology theories (like Borel-Moore, intersection homology) by other combinations of the six functors, or to work freely in a relative setting (e.g. there are relative versions of things like Künneth theorem) or to talk about enhanced versions of cohomology (like mixed Hodge theory) by an appropriate other choice of functor $\D(-)$.
However my impression is that this approach is more popular among for instance algebraic geometers than honest topologists. Topologists who talk about generalized cohomology theories of course talk about K-theory, cobordism, elliptic cohomology... I understand much less of this. In any case, here a generalized cohomology theory is considered to be an object of the stable homotopy category $\mathrm{SH}$.
I have been wondering for a while what (if anything) it means that there are these two seemingly orthogonal ways of thinking about what cohomology is, which seem to allow for generalizations in different directions. Is there any way to unify the two approaches?
To ask a more precise question, is there a functor $\D$ which assigns to a nice enough space $X$ a triangulated category $\D(X)$, together with a "six functors" formalism satisfying the usual properties, such that $\D(\mathrm{pt}) \cong \mathrm{SH}$? Even better, can one in that case also find a subfunctor $\D' \subset \D$, stable under six functors, which assigns to a space $X$ the (for instance unbounded) derived category of abelian sheaves on $X$?
Some more speculative comments: If $\D(\mathrm{pt}) \cong \mathrm{SH}$ then possibly $\D(X)$ should be the category of spectra parametrized by the base space $X$ (as in parametrized homotopy theory), but I'm very ignorant about such things. I've understood that a large part of May-Sigurdsson's book is devoted to constructing functors $f^\ast$, $f_\ast$ and $f_!$ in parametrized homotopy theory - can these be considered as some kind of lifts of those in the usual derived category of abelian sheaves, or do they just have the same names? Is there a reason that $f^!$ does not appear; does Verdier duality fail in this context? |
Gregory from Magnus C of E School in Nottingham and Luke from St Patrick's School reasoned correctly. Here is Luke's explanation:
For this question I found a pattern in it:
For example, $C_1$ has length 1 and $C_2$ has length $\frac{2}{3}$,
and there is a pattern in this because 1 $\times$ 2 = 2, so there is your numerator,
and to get your denominator you multiply 1 by 3, so there you have your $\frac{2}{3}$.
So to put this into easier words, it simply means multiply thenumerator by 2 the whole way and multiply the denominator by3.
So there is my strategy.
So the answer to $C_3$ is $\frac{4}{9}$
and the answer to $C_4$ is $\frac{8}{27}$
and finally $C_5$ is $\frac{16}{81}$.
$C_n$ is $\left(\frac{2}{3}\right)^{n-1}$
and when $n \rightarrow$ infinity, $C_n \rightarrow$ 0
David from Gordonstoun School got the same result and added:
In a geometric progression,
$a$ = $ 1^{st}$ term
$r$ = constant factor
$n$ = number of terms
Any value in a geometric progression is $ ar^{n-1}$
In this case
$a$ = 1
$r$ = $\frac{2}{3}$
$r$ = $\frac{2}{3}$ (which is less than 1),
so the higher its power, the closer the result is tozero.
(Any positive number smaller than 1, to the power of infinity,tends to zero.
Therefore $ r^{n}$ tends to zero)
So as $n$ tends to infinity,
$ar^{n-1}$ tends to $a \times 0 = 0$
So the Cantor Set's length is zero.
Liam from Wilbarston School reasoned in a similar way:
The length of $C_{n+1}$ is simply two thirds of the length of$C_n$,
as $C_{n+1}$ is purely $C_n$ with the middle thirds removed.
Now taking $L_n$ to be the length of $C_n$:
$L_2$ = $\frac{2}{3}$,
$L_3$ = $\frac{4}{9}$,
$L_4$ = $\frac{8}{27}$ etc. etc.
It's obvious that $L_n$ = $\left(\frac{2}{3}\right)^{n-1}$.
So as $n$ tends to infinity, $L_n$ gets increasingly smaller, i.e. tends to zero.

Therefore the length of the Cantor set is zero. In fact, the Cantor set is a set of points, because endpoints of line segments will never be removed, only middle thirds.

And as Euclid said, 'A point is that which has no part', i.e. a point has zero length, zero width and zero height.

Well done to you all.
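The formula \(L_n = \left(\frac{2}{3}\right)^{n-1}\) that the solvers found can be checked mechanically with exact fractions (a short Python sketch, not part of the submitted solutions):

```python
from fractions import Fraction

def length(n):
    """Length of C_n: each stage keeps 2/3 of the previous stage."""
    return Fraction(2, 3) ** (n - 1)

values = [length(n) for n in range(1, 6)]
# 1, 2/3, 4/9, 8/27, 16/81 -- and length(n) shrinks to 0 as n grows
```

The first five values match the solvers' answers, and `length(100)` is already smaller than \(10^{-17}\), illustrating the limit.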
Reference documentation for deal.II version 8.4.1
#include <deal.II/dofs/function_map.h>
typedef std::map< types::boundary_id, const Function< dim, Number > * > type
This class declares a local typedef that denotes a mapping between a boundary indicator (see GlossBoundaryIndicator) that is used to describe what kind of boundary condition holds on a particular piece of the boundary, and the function describing the actual function that provides the boundary values on this part of the boundary. This type is required in many functions in the library where, for example, we need to know about the functions \(h_i(\mathbf x)\) used in boundary conditions
\begin{align*} \mathbf n \cdot \nabla u = h_i \qquad \qquad \text{on}\ \Gamma_i\subset\partial\Omega. \end{align*}
An example is the function KellyErrorEstimator::estimate() that allows us to provide a set of functions \(h_i\) for all those boundary indicators \(i\) for which the boundary condition is supposed to be of Neumann type. Of course, the same kind of principle can be applied to cases where we care about Dirichlet values, where one needs to provide a map from boundary indicator \(i\) to Dirichlet function \(h_i\) if the boundary conditions are given as
\begin{align*} u = h_i \qquad \qquad \text{on}\ \Gamma_i\subset\partial\Omega. \end{align*}
This is, for example, the case for the VectorTools::interpolate() functions.
Tutorial programs step-6, step-7 and step-8 show examples of how to use function arguments of this type in situations where we actually have an empty map (i.e., we want to describe that
no part of the boundary is a Neumann boundary). step-16 actually uses it in a case where one of the parts of the boundary uses a boundary indicator for which we want to use a function object.
It seems odd at first to declare this typedef inside a class, rather than declaring a typedef at global scope. The reason is that C++ does not allow to define templated typedefs, where here in fact we want a typedef that depends on the space dimension. (Defining templated typedefs is something that is possible starting with the C++11 standard, but that wasn't possible within the C++98 standard in place when this programming pattern was conceived.)
typedef std::map<types::boundary_id, const Function<dim,Number>*> FunctionMap<dim,Number>::type
S. G. Dani and C. R. E. Raja Asymptotics of measures under group automorphisms and an application to factor sets Lie groups and ergodic theory (Mumbai, 1996), 59-73, Tata Inst. Fund. Res. Stud. Math., 14, Tata Inst. Fund. Res., Bombay, 1998.
S. G. Dani and C. R. E. Raja A note on Tortrat Groups Journal of Theoretical Probability, 11 (1998), 571-576.
C. R. E. Raja Stable probability measures on p-adic Lie groups Sankhya Series A, 61 (1999), 1-11.
C. R. E. Raja A note on the equation $\lambda \ast \rho \ast \mu = \rho$ Bull. Austral. Math. Soc., 59 (1999), 421-426.
C. R. E. Raja On a class of Hungarian semigroups and factorization theorem of Khinchin Journal of Theoretical Probability, 12 (1999), 561-569.
C. R. E. Raja On classes of p-adic Lie groups New York J. Math. 5 (1999), 101-105.
C. R. E. Raja Weak mixing and Unitary representation problem Bull. Sci. Math. 124 (2000), 517-523 (see [17] for some corrections and modifications).
C. R. E. Raja and Riddhi Shah Factorization theorems for T-decomposable measures on groups Monatsh. Math. 133 (2001), 223-239.
P. Graczyk and C. R. E. Raja Classical Theorems of probability on Gelfand Pairs - Khinchin and Cramer Theorems Israel Journal of Mathematics 132 (2002), 61-107.
C. R. E. Raja Identity excluding groups Bull. Sci. Math. 126 (2002), 763-772 (see [17] for some corrections and modifications).
C. R. E. Raja On heredity of strongly proximal actions Archivum Math. (Brno) 39 (2003), 51-55.
C. R. E. Raja Normed convergence property for hypergroups admitting an invariant measure Southeast Asian Bulletin of Mathematics, 26 (2002), 479-481.
C. R. E. Raja Krengel-Lin decomposition for probability measures on hypergroups Bull. Sci. Math. 127 (2003), 283-291.
C. R. E. Raja Absolute continuity of autophage measures on finite-dimensional vector spaces Math. Nachr. 263-264 (2004), 198-203.
C. R. E. Raja Ergodic amenable actions of algebraic groups Glasgow Mathematical Journal, 46 (2004), 97-100.
C. R. E. Raja Operator semi-selfdecomposable measures and related nested subclasses of measures on vector spaces Monatsh. Math. 142 (2004), 351-361.
C. R. E. Raja A note on unitary representation problem with corrigenda to the articles Weak mixing and unitary representation problem, Bull. Sci. Math. 124 (2000) 517-523 and Identity excluding groups, Bull. Sci. Math. 126 (2002) 763-772 Bull. Sci. Math. 128 (2004), 803-809.
C. R. E. Raja On growth, recurrence and Choquet-Deny Theorem for p-adic Lie groups Math. Z. 251 (2005), 827-847.
W. Jaworski and C. R. E. Raja The Choquet-Deny Theorem and distal properties of totally disconnected locally compact groups of polynomial growth New York Journal of Mathematics 13 (2007), 159-174.
C. R. E. Raja and R. Schott Recurrent random walks on homogeneous spaces of p-adic algebraic groups of polynomial growth Archiv der Mathematik (Basel) 91 (2008), 379-384.
C. R. E. Raja Distal actions and ergodic actions on compact groups New York Journal of Mathematics 15 (2009), 301-318.
C. R. E. Raja and R. Shah Distal actions and shifted convolution property Israel Journal of Mathematics 177 (2010), 391-412.
C. R. E. Raja On the existence of ergodic automorphisms in ergodic ${\mathbb Z}^d$-actions on compact groups Ergodic Theory and Dynamical Systems 30 (2010), 1803-1816.
Y. Guivarc'h and C. R. E. Raja Polynomial Growth, Recurrence and Ergodicity for Random Walks on Locally Compact Groups and Homogeneous Spaces Progress in Probability, Vol. 64, 65-74.
Y. Guivarc'h and C. R. E. Raja Recurrence and ergodicity of random walks on linear groups and on homogeneous spaces Ergodic Theory and Dynamical Systems 32 (2012), 1313-1349.
C. R. E. Raja A stochastic difference equation with stationary noise on groups Canadian Journal of Mathematics 64 (2012), 1075-1089. http://dx.doi.org:10.4153/CJM-2011-094-6
C. R. E. Raja Strong relative property (T) and spectral gap of random walks Geometriae Dedicata 164 (2013), 9-25. DOI 10.1007/s10711-012-9756-7
C. R. E. Raja Liouville property on $G$-spaces Proceedings of the Conference on Recent Trends in Ergodic Theory and Dynamical Systems, Contemporary Mathematics 631, AMS, 2015.
C. R. E. Raja Operator decomposable measures and stochastic difference equation Journal of Theoretical Probability 28 (2015), no. 3, 785-803.
H. Glockner and C. R. E. Raja Expansive automorphisms of totally disconnected locally compact groups to appear in Journal of Group Theory.
Sharan Gopal and C. R. E. Raja Periodic points of solenoidal automorphisms Topology Proceedings, 50 (2017), 49-57.
I always thought that both “proof by reductio ad absurdum” and “proof by contradiction” mean the same, but now my professor asked this question on my homework and I don't know.
I believe that in both cases you assume the negation of the conclusion and derive a contradiction from the premises. This would imply the conclusion. Today I have a meeting with the assistant professor so I can clarify this, but I really would like to know what you guys think, and if possible it would be great if you could point me to some good references.
UPDATE:
I just came from my extra help and the assistant professor explains the difference this way:
Reductio ad absurdum: $$ \vDash [\neg p\to(q\wedge\neg q)]\to p$$ Proof by contradiction:
$$ \vDash [\neg (p\to q) \to (r\wedge \neg r)]\to (p\to q)$$
And the examples of application were these:
Using proof by contradiction: $\sqrt2$ is irrational. (First suppose it is rational and derive a contradiction.)
Using proof by reductio ad absurdum: If $f$ is differentiable on $(a,b)$ then $f$ is continuous on $(a,b)$. (First we suppose that $f$ is differentiable on $(a,b)$ but not continuous on $(a,b)$ and derive a contradiction.)
1. The graphic work of Felicien Rops. Notes on the life of Rops / by Lee Revens. "Instrumentum Diaboli" / by J.-K. Huysmans
1975, 286
Book
2012, 10th ed., ISBN 9780071761475
Book
Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the $J/\psi\,\phi$ mass spectrum near threshold is observed in $B^{\pm} \to J/\psi\,\phi K^{\pm}$ decays, produced in pp collisions at $\sqrt{s} = 7$ TeV collected with the CMS detector at the LHC. The...
Journal Article
Book
5. Updated cross section measurement of $e^+e^- \to K^+K^-J/\psi$ and $K^0_S K^0_S J/\psi$ via initial state radiation at Belle
PHYSICAL REVIEW D, ISSN 1550-7998, 04/2014, Volume 89, Issue 7
Journal Article
6. Chemical evidence of inter-hemispheric air mass intrusion into the Northern Hemisphere mid-latitudes
Scientific Reports, ISSN 2045-2322, 12/2018, Volume 8, Issue 1, pp. 4669 - 7
The East Asian Summer Monsoon driven by temperature and moisture gradients between the Asian continent and the Pacific Ocean, leads to approximately 50% of the...
TRANSPORT | PRECIPITATION | ATMOSPHERIC GASES | EMISSIONS | PACIFIC | MULTIDISCIPLINARY SCIENCES | LOWER STRATOSPHERE | EAST-ASIA | ASIAN SUMMER MONSOON | IN-SITU | WATER | Rainfall | Wind | Weather forecasting | Monsoons
Journal Article
Physical Review Letters, ISSN 0031-9007, 05/2005, Volume 94, Issue 18
Journal Article
The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 03/2018, Volume 78, Issue 3, pp. 1 - 8
We report the first observation of the Ξc(2930)0 charmed-strange baryon with a significance greater than 5σ. The Ξc(2930)0 is found in its decay to K-Λc+ in...
Charm (particle physics)
Journal Article
9. New top technologies every librarian needs to know: A LITA guide
: Varnum, K. J. (Ed.) (2019), Chicago, IL: ALA Neal-Schuman. 287 pp., $64.99, ISBN: 978-0-8389-1782-4
Journal of Web Librarianship, ISSN 1932-2909, 08/2019, pp. 1 - 2
Journal Article
10. Measurement of colour flow with the jet pull angle in $t\bar{t}$ events using the ATLAS detector at $\sqrt{s}=8$ TeV
PHYSICS LETTERS B, ISSN 0370-2693, 11/2015, Volume 750, pp. 475 - 493
The distribution and orientation of energy inside jets is predicted to be an experimental handle on colour connections between the hard-scatter quarks and...
PHYSICS, NUCLEAR | 3-JET EVENTS | ASTRONOMY & ASTROPHYSICS | E+E-ANNIHILATION | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
2015, Biblical tools and studies, ISBN 9042933178, Volume 22, x, 325 pages
Book
12. Observation of a peaking structure in the $J/\psi\,\phi$ mass spectrum from $B^{\pm} \to J/\psi\,\phi K^{\pm}$ decays
PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the $J/\psi\,\phi$ mass spectrum near threshold is observed in $B^{\pm} \to J/\psi\,\phi K^{\pm}$ decays, produced in pp collisions at $\sqrt{s} = 7$ TeV...
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
13. Evidence of a structure in $\bar{K}^{0}\Lambda_{c}^{+}$ consistent with a charged $\Xi_c(2930)^{+}$, and updated measurement of $\bar{B}^{0} \rightarrow \bar{K}^{0}\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ at Belle
: Belle Collaboration
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1 - 8
We report evidence for the charged charmed-strange baryon $\Xi_{c}(2930)^+$ with a signal significance of 3.9$\sigma$ with systematic errors...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
2008, 9th ed., ISBN 0071482709, xvii, 1158
Book
15. Measurement of branching fractions for $B \to J/\psi\,\eta K$ decays and search for a narrow resonance in the $J/\psi\,\eta$ final state
Progress of Theoretical and Experimental Physics, ISSN 2050-3911, 04/2014, Volume 2014, Issue 4, p. 43
We report an observation of the $B^{\pm } \to J/\psi \eta K^{\pm }$ and $B^0 \to J/\psi \eta K^0_S$ decays using $772\times 10^{6}B\overline {B}$ pairs...
leptonski trkalnik | High Energy Physics - Experiment | lastnosti delcev | branching ratio: measured [B0] | (J/psi eta) [mass spectrum] | 539.1 [udc] | eksperimentalna fizika delcev | branching ratio: measured [B+] | High Energy Physics | 10.58 GeV-cms | intermediate state [psi] | narrow resonance | lepton collider experiments | experimental particle physics | particle properties | experimental results | eksperimenti | Experiment | B0 --> J/psi eta K0(S) | hadronic decay [B] | Physics | intermediate state [X] | annihilation [electron positron] | pair production [B] | B+ --> J/psi eta K | branching fractions; decays; resonance; final state | upper limit [branching ratio] | Environmental Molecular Sciences Laboratory | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
16. Observation of a near-threshold omega J/psi mass enhancement in exclusive B -> K omega J/psi decays
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 05/2005, Volume 94, Issue 18
We report the observation of a near-threshold enhancement in the omega J/psi invariant mass distribution for exclusive B -> K omega J/psi decays. The results...
BELLE | BREAKING | STATES | PHYSICS, MULTIDISCIPLINARY | SEARCH | ANNIHILATION | EXOTIC MESONS | CHARMONIUM | Physics - High Energy Physics - Experiment
Journal Article
You can design that filter manually without problems. Matlab just uses a very simplistic approach to comb filtering with a delay line.
In order to keep things as simple as possible, I would recommend you use a series of notch filters to remove each partial of your harmonic noise separately. That also gives you more control over how much of each harmonic really has to be removed.
A simple notch filter is one with unit gain at DC, i.e. $H(1) = 1$, and the zeros on the unit circle at the desired notch frequency, positive and negative. The poles have to be inside the unit circle, close to the zeros to cancel their effect farther away from the notch frequency. We can design this directly in the z-domain.
$$H(z) = \frac{(z-\exp(i\omega))(z-\exp(-i\omega))}{(z-r\exp(i\omega))(z-r\exp(-i\omega))}\times\frac{(1-r\exp(i\omega))(1-r\exp(-i\omega))}{(1-\exp(i\omega))(1-\exp(-i\omega))}$$
Here $\omega$ is the normalized frequency, meaning $\omega=2 \pi \frac{f_0}{f_s}$ where $f_0$ is your notch frequency and $f_s$ the sampling frequency. The parameter $r\in]0,1[$ controls the width of the notch. The second fraction makes sure we get a unit response at $\omega=0$.
To get the coefficients for a recursive filter implementation, we have to simplify the transfer function and cancel $z^2$ to turn it into a rational function of $z^{-1}$. The result is $$H(z) = \frac{\left(1+r^2-2r\cos(\omega)\right)\left(z^{-2}+1-2z^{-1}\cos(\omega)\right)\csc(\omega/2)^2}{4\left(r^2 z^{-2}+1-2rz^{-1}\cos(\omega) \right)}$$
So you can read off your filter coefficients to be
$$ A[n]=\left(1,-2r\cos\omega,r^2 \right)$$$$ B[n]=\left(1,-2\cos\omega,1 \right)\cdot\frac{1}{4}\left(1+r^2-2r\cos\omega\right)\cdot\csc(\frac{\omega}{2})^2$$
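As a sanity check on these coefficient formulas, here is a small Python sketch (the notch frequency, sampling rate, and bandwidth values are arbitrary examples, not taken from the question) that builds $A[n]$ and $B[n]$ and evaluates $|H(e^{j\omega})|$ directly: unit gain at DC, a null at the notch, and roughly unit gain elsewhere.

```python
import cmath
import math

def notch_coeffs(f0, fs, delta_f):
    """Return (A, B) for the notch filter derived above (a sketch)."""
    w = 2 * math.pi * f0 / fs                        # normalized notch frequency
    dw = 2 * math.pi * delta_f / fs                  # normalized bandwidth
    r = 1 - dw / (2 * math.sqrt(1 + math.sqrt(2)))   # pole radius
    # B[n] = [1, -2 cos w, 1] * (1/4)(1 + r^2 - 2 r cos w) csc(w/2)^2
    gain = 0.25 * (1 + r * r - 2 * r * math.cos(w)) / math.sin(w / 2) ** 2
    B = [gain, -2 * gain * math.cos(w), gain]        # numerator: zeros on the unit circle
    A = [1.0, -2 * r * math.cos(w), r * r]           # denominator: poles at radius r
    return A, B

def magnitude(A, B, w):
    """|H(e^{jw})| for the second-order rational function in z^{-1}."""
    zi = cmath.exp(-1j * w)
    num = B[0] + B[1] * zi + B[2] * zi * zi
    den = A[0] + A[1] * zi + A[2] * zi * zi
    return abs(num / den)

A, B = notch_coeffs(101.9, 1000.0, 5.0)   # arbitrary example values
w0 = 2 * math.pi * 101.9 / 1000.0
```

With these coefficient vectors, `filter(B, A, x)` in Matlab/Scilab (or `scipy.signal.lfilter(B, A, x)` in Python) applies the notch to a signal.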
The last open question is how exactly the parameter $r$ controls the notch behavior. Near the zero the transfer function on the unit circle
locally behaves like
$$f(x) = \frac{x}{x-i(1-r)}$$
around $x=0$. For $r=1$ we get a singularity at $x=0$ and an ill-defined transfer function. For values $r<1$ but close to $1$ we can see that for both $x\to-\infty$ and $x\to\infty$ the transfer function becomes $1$. The interesting question is, how far does the effective influence of the pole reach on the x-axis?
The square magnitude of $f(x)$ is $$|f(x)|^2=f(x)\cdot f(x)^*=\frac{x^2}{x^2+(1-r)^2},$$ which is always real and positive. We can easily equate it to a threshold and solve for $x$. The most common threshold for filter cutoffs in signal processing is $1/\sqrt{2}$, and if we solve $|f(\Delta x/2)|^2=1/\sqrt{2}$ we get the effective bandwidth of the notch to be $$\Delta x = 2(1-r)\sqrt{1+\sqrt{2}}$$
and we can use the equivalence of our problem to the original pole/zero placement to conclude
$$ r = 1-\frac{\Delta \omega}{2\sqrt{1+\sqrt{2}}} $$
with the natural frequency bandwidth $\Delta \omega$. If you want this as a true frequency then again $\Delta \omega=2\pi\frac{\Delta f}{f_s}$
Note that this bandwidth calculation used a local approximation, so it is strictly only true for $(1-r)\ll 1$, i.e., for bandwidths small compared to the sampling frequency.
So this gives you a notch filter for one single harmonic of your harmonic noise. Just apply one filter at a time, one after the other, with the parameters matched to the harmonic you would like to cancel, up to $f_s/2$. The Matlab filter() function readily takes the coefficients $A[n]$ and $B[n]$ given above.
An attempt to implement this in Scilab seems to work nicely. Plot of the spectrum before and after filtering:
Code:
fs = 1000;                                  // sampling frequency in Hz
f0 = 101.89798;                             // notch frequency in Hz
omega = 2*%pi*f0/fs;                        // normalized notch frequency
delta_omega = omega/2;                      // desired notch bandwidth
N = 1000;
t = [0:N];                                  // sample indices
freq = [0:N]/N*fs - fs/2;                   // frequency axis for the shifted FFT
r = 1 - delta_omega/(2*sqrt(1+sqrt(2)));    // pole radius from the bandwidth formula
num = [1 -2*cos(omega) 1]*(1/4)*(1+r^2-2*r*cos(omega))*csc(omega/2)^2;  // B[n]
den = [1 -2*r*cos(omega) r^2];              // A[n]
X = sin(omega/2*t) + sin(omega*t) + sin(2*omega*t);  // test signal: notch target plus neighbors
Y = filter(num, den, X);
figure(1)
clf;
plot(freq, log(fftshift(abs(fft(X))) + 0.01))        // spectrum before
plot(freq, log(fftshift(abs(fft(Y))) + 0.01), 'k:')  // spectrum after
Haskell's answer does a great job of outlining conditions that a derivative $f'$
must satisfy, which then limits us in our search for an example. From there we see the key question: can we provide a concrete example of an everywhere differentiable function whose derivative is discontinuous on a dense, full-measure set of $\mathbb R$? Here's a closer look at the Volterra-type functions referred to in Haskell's answer, together with a little indication as to how it might be extended.
Basic example
The basic example of a differentiable function with discontinuous derivative is$$f(x) = \begin{cases} x^2 \sin(1/x) &\mbox{if } x \neq 0 \\0 & \mbox{if } x=0. \end{cases}$$The differentiation rules show that this function is differentiable away from the origin, and the difference quotient can be used to show that it is differentiable at the origin with value $f'(0)=0$. A graph is illuminating as well, as it shows how $\pm x^2$ forms an envelope for the function, forcing differentiability.
The derivative of $f$ is $$f'(x) = \begin{cases} 2 x \sin \left(\frac{1}{x}\right)-\cos \left(\frac{1}{x}\right)&\mbox{if } x \neq 0 \\0 & \mbox{if } x=0,\end{cases}$$which is discontinuous at $x=0$. Its graph looks something like so
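A quick numerical illustration in Python (a sketch, with arbitrarily chosen sample points): the difference quotient at the origin is $h\sin(1/h)$, which is squeezed to $0$, while $f'$ sampled at $x = 1/(k\pi)$ keeps alternating between values near $+1$ and $-1$ arbitrarily close to the origin.

```python
import math

def f(x):
    return x * x * math.sin(1.0 / x)

def fprime(x):
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# difference quotients (f(h) - f(0)) / h = h sin(1/h) for shrinking h
quotients = [f(10.0 ** -k) / 10.0 ** -k for k in range(1, 8)]

# f' sampled at x = 1/(k*pi), where cos(1/x) = (-1)^k and sin(1/x) = 0
extremes = [fprime(1.0 / (k * math.pi)) for k in (999, 1000)]
```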
Two points
The next step is to modify this example to obtain a function that is everywhere differentiable with a derivative that is continuous on all of $\mathbb R$, except for two points. To this end, consider$$f(x) = \begin{cases} x^2 (1-x)^2 \sin \left(\frac{1}{\pi x (1-x)}\right)&\mbox{if } 0<x<1 \\0 & \mbox{else}. \end{cases}$$The graph of $f$ and its derivative look like so.
A Cantor set of discontinuities
Now that we have a way to construct a differentiable function whose derivative is discontinuous exactly at the endpoints of an interval, it should be clear how to construct a differentiable function whose derivative is discontinuous on a Cantor set constructed in the interval. For $n\in\mathbb N$ and $m=1,2,\ldots,2^n$, let $I_{m,n}$ denote one of the $2^n$ intervals removed during the $n^{th}$ stage of construction of the Cantor set. Then let $f_{m,n}$ be scaled to have support $I_{m,n}$ and to have maximum value $4^{-n}$. The function$$F(x) = \sum_{n=0}^{\infty} \sum_{m=1}^{2^n} f_{m,n}(x)$$will be everywhere differentiable, but its derivative will be discontinuous on the given Cantor set. Assuming we do this with Cantor's standard ternary set, we get a picture that looks something like so:
Of course, there's really a sequence of functions here and care needs to be taken to show that the limit is truly differentiable. Let$$F_N(x) = \sum_{n=1}^{N} \sum_{m=1}^{2^n} f_{m,n}(x).$$The standard theorem then states that, as long as $F_N$ converges and $F_N'$ converges uniformly, then the limit of $F_N(x)$ will be differentiable. This is guaranteed by the choice of $4^{-n}$ as the max for $f_{m,n}$.
Increasing the measure
Again, the last example refers to the standard Cantor ternary set but there's no reason this can't be done with
any Cantor set. In particular, it can be done with a so-called fat Cantor set, which can have positive measure arbitrarily close to the measure of the interval containing it. We immediately produce an everywhere differentiable function whose derivative is discontinuous on a nowhere dense set of positive measure. (Of course, care must again be taken that the heights of the functions go to zero quickly enough to guarantee differentiability.)
Finally, we can fill the holes of the removed intervals with more Cantor sets (and their corresponding functions) in such a way that the union of all of them is of full measure. This allows us to construct an everywhere differentiable function whose derivative is discontinuous on the union of those Cantor sets, which is a set of full measure.
It is unknown whether $P\subseteq CSL$ or $P\not\subseteq CSL$, where
$P$ is the set of all languages decidable in polynomial time on a deterministic Turing machine, and $CSL$ is the class of context-sensitive languages, known to be equivalent to $NSPACE(O(n))$, the languages decided by linear-bounded automata.
For many open questions, there is a tendency towards one answer (à la "most experts believe that $P\neq NP$"). Is there something like this for this question?
In particular, would either answer have unexpected consequences? I can only see expected (but unproven) consequences:
If $P\subseteq CSL$, then $P\subseteq NSPACE(O(n))\subsetneq NSPACE(O(n^2))$ (space hierarchy theorem), hence $P\subsetneq PSpace$.
If $P\not\subseteq CSL$, then there is a language $l\in P\setminus NSPACE(O(n))$ and therefore $l\in P\setminus NL$, hence $NL\subsetneq P$.
(Acknowledgement: The second consequence of these two was pointed out by Yuval Filmus at https://cs.stackexchange.com/questions/69614/)
Actually, on a dark night, the fraction of the sky that is light is pretty negligible. That's what it means to be a dark night ;-)
It's actually not hard to get an estimate of the density of light in the universe. Let's say that "light" includes photons of all wavelengths (not just visible light) for simplicity. A straightforward way to do it is to point a wide-spectrum telescope at the night sky and see how fast it collects energy. (You have to point it away from the sun and other nearby sources like the galactic disc, because these sources emit a large amount of energy at Earth, which is not representative of the universe as a whole.) This was done with the COBE and WMAP satellites (and more recently Planck, with essentially the same result). They found that, if you eliminate contributions from a few specific nearby sources, the radiation in the universe follows a blackbody spectrum with a temperature of $2.73\text{ K}$. You can calculate the energy density of such a blackbody like this:
$$u = \frac{4\sigma T^4}{c} = 4.2\times 10^{-14}\ \mathrm{\frac{J}{m^3}}$$
This is just about five thousandths of a percent of the critical energy density, which is
$$u_c = \frac{3 c^2 H^2}{8\pi G} = 8.6\times 10^{-10}\ \mathrm{\frac{J}{m^3}}$$
On the other hand, the density of normal matter (atoms) is estimated to be about 4.6% of the critical density - four orders of magnitude higher. So the photon energy density is completely insignificant compared to the density of baryons, dark matter, or dark energy, and that means it has a negligible gravitational effect on the cosmological evolution of the universe.
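These two densities are easy to reproduce; here is a Python sketch in SI units (the Hubble constant value of roughly $71\text{ km/s/Mpc}$, i.e. $2.3\times10^{-18}\ \mathrm{s^{-1}}$, is an assumed input, not stated above):

```python
import math

# Reproducing the two energy densities above in SI units.
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
T = 2.73           # CMB blackbody temperature, K
H = 2.30e-18       # Hubble constant, s^-1 (assumed value, ~71 km/s/Mpc)

u_photon = 4 * sigma * T ** 4 / c                 # blackbody radiation energy density
u_crit = 3 * c ** 2 * H ** 2 / (8 * math.pi * G)  # critical energy density
```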
Of course, in the vicinity of a star, the photon energy density is much higher because of the star's higher blackbody temperature. Let's take the sun, for example. The sun has a surface temperature of $5778\text{ K}$, which means the intensity of radiation it emits is
$$I = \frac{P}{A} = \sigma T^4 = 6.31\times 10^7\ \mathrm{\frac{W}{m^2}}$$
This corresponds to an energy density
in the vicinity of the sun's surface of
$$u_\text{rad} = \frac{I}{c} = 0.211\ \mathrm{\frac{J}{m^3}}$$
However, the energy density of the
matter that constitutes the sun is
$$u_\text{matter} = \rho_\odot c^2 = 1.2\times 10^{20}\ \mathrm{\frac{J}{m^3}}$$
So again, the gravitational effect of the photons, even on a local scale, is completely negligible.
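The same comparison near the sun, as a sketch (the mean solar density of about $1408\ \mathrm{kg/m^3}$ is an assumed input, not stated above):

```python
# Constants in SI units; the mean solar density is an assumed value.
sigma = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8          # speed of light, m/s
T_sun = 5778.0       # solar surface temperature, K
rho_sun = 1408.0     # mean solar density, kg/m^3 (assumed)

I = sigma * T_sun ** 4       # emitted intensity at the surface, W/m^2
u_rad = I / c                # photon energy density near the surface, J/m^3
u_matter = rho_sun * c ** 2  # rest-mass energy density of solar matter, J/m^3
```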
Incidentally, this would not have been the case in the early universe, when the photons had much higher energy and thus their energy density relative to matter was much higher.
Let $A=\begin{bmatrix}a& b \\c& d\end{bmatrix}$. Then as $A$ is a symmetric matrix, we have $A^{\trans}=A$. This implies that\[\begin{bmatrix}a& c \\b& d\end{bmatrix}=\begin{bmatrix}a& b \\c& d\end{bmatrix}.\]Hence we have $b=c$ by comparing entries.
Now, we find the characteristic polynomial $p(t)$ of $A$. We have\begin{align*}p(t)&=\det(A-t I)=\begin{vmatrix}a-t & b\\b& d-t\end{vmatrix}\\[6pt]&=(a-t)(d-t)-b^2\\&=t^2-(a+d)t+ad-b^2.\end{align*}
Note that the eigenvalues of $A$ are roots of the characteristic polynomial $p(t)$. Hence, it suffices to show that the roots of $p(t)$ are real numbers. A quadratic polynomial with real coefficients has only real roots if and only if its discriminant is non-negative. The discriminant of $p(t)$ is given by\begin{align*}(a+d)^2-4(ad-b^2)&=a^2+2ad+d^2-4ad+4b^2\\&=a^2-2ad+d^2+4b^2\\&=(a-d)^2+4b^2. \end{align*}Observe that the last expression is the sum of two squares of real numbers. Hence the discriminant of $p(t)$ is nonnegative.
We conclude that every $2\times 2$ symmetric matrix has only real eigenvalues.
Remark
We also could find the eigenvalues directly. By the quadratic formula, the eigenvalues of $A$ are\[\frac{a+d\pm\sqrt{(a+d)^2-4(ad-b^2)}}{2}=\frac{a+d\pm \sqrt{(a-d)^2+4b^2}}{2},\]and as the number inside the square root (the discriminant) is nonnegative, we conclude that the eigenvalues are real.
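The computation is easy to verify numerically; here is a Python sketch with arbitrarily chosen entries $a$, $b$, $d$, checking that the discriminant is nonnegative and that the quadratic-formula values are genuine roots of the characteristic polynomial:

```python
import math

# Arbitrary entries of the symmetric matrix A = [[a, b], [b, d]]
a, b, d = 3.0, -2.0, 1.0

# discriminant (a+d)^2 - 4(ad - b^2), simplified above to a sum of squares
disc = (a - d) ** 2 + 4 * b * b

# eigenvalues from the quadratic formula
lam1 = (a + d + math.sqrt(disc)) / 2
lam2 = (a + d - math.sqrt(disc)) / 2

# characteristic polynomial p(t) = t^2 - (a+d) t + (ad - b^2)
def p(t):
    return t * t - (a + d) * t + (a * d - b * b)
```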
Talk:Celestial mechanics
The deleted discussions concern minor details of the two-body problem that are not important for understanding the subject of this article (celestial mechanics). I prefer not to overcrowd the article with many algebraic derivations.
Corrections
Using heliocentric vectors, the angular momentum of the planet is given by: \[ \frac{\mu^2}{m} \vec{r} \times \vec{v} \](where \(\mu\) is the reduced mass), not:\[ m \vec{r} \times \vec{v}. \]
The latter would be the angular momentum of the planet if the Sun were held fixed in space.
Also, the principle of conservation of mechanical energy (kinetic and potential) was certainly not known prior to 1740. Lagrange's Mécanique Analytique is generally acknowledged as the first appearance of the orbital conservation equation used in this article.
Existing: "Another result found by Newton is that the mechanical energy is conserved."
Suggested: "A century after Newton, Joseph-Louis Lagrange showed that the mechanical energy is also conserved." |
While the choice of activation functions for the hidden layer is quite clear (mostly sigmoid or tanh), I wonder how to decide on the activation function for the output layer. Common choices are linear functions, sigmoid functions and softmax functions. However, when should I use which one?
Regression: linear (because values are unbounded)
Classification: softmax (simple sigmoid works too, but softmax works better)
Use simple sigmoid only if your output admits multiple "true" answers, for instance, a network that checks for the presence of various objects in an image. In other words, the output is not a probability distribution (does not need to sum to 1).
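To make the distinction concrete, here is a minimal Python sketch (plain-list implementations, with arbitrary example logits): softmax outputs form a probability distribution, while independent sigmoid outputs need not sum to 1.

```python
import math

def softmax(zs):
    """Softmax over a list of logits: outputs form a probability distribution."""
    m = max(zs)                               # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    """Logistic sigmoid: an independent score per label, no sum constraint."""
    return 1.0 / (1.0 + math.exp(-z))

logits = [2.0, 1.0, 0.1]          # arbitrary example logits
p = softmax(logits)               # mutually exclusive classes
q = [sigmoid(z) for z in logits]  # independent "is this object present?" outputs
```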
I might be late to the party, but it seems that there are some things that need to be cleared out here.
First of all: the activation function $g(x)$ at the output layer
often depends on your cost function. This is done to make the derivative $\frac{\partial C}{\partial z}$ of the cost function $C$ with respect to the inputs $z$ at the last layer easy to compute.
As an
example, we could use the mean squared error loss $C(y, g(z)) = \frac{1}{2} (y - g(z))^2$ in a regression setting. By setting $g(x) = x$ (linear activation function), we find for the derivative$$\begin{align*} \frac{\partial C(y,g(z))}{\partial z} & = \frac{\partial C(y, g(z))}{\partial g(z)} \cdot \frac{\partial g(z)}{\partial z} \\ & = \frac{\partial}{\partial g(z)}\left(\frac{1}{2} (y - g(z))^2\right) \cdot \frac{\partial}{\partial z}\left(z\right) \\ & = - (y-g(z)) \cdot 1 \\ & = g(z) - y \end{align*}$$You get the same, easy expression for $\frac{\partial C}{\partial z}$ if you combine cross-entropy loss with the logistic sigmoid or softmax activation functions.
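This identity is easy to verify with a finite-difference check; the following Python sketch (with arbitrarily chosen $z$ and $y$) confirms $\partial C/\partial z = g(z) - y$ for both pairings:

```python
import math

# Finite-difference check of dC/dz = g(z) - y for the two pairings above:
# mean squared error with a linear output, and cross-entropy with a sigmoid.
def mse_linear(z, y):
    # C = (y - g(z))^2 / 2 with g(z) = z
    return 0.5 * (y - z) ** 2

def xent_sigmoid(z, y):
    # C = -[y log g(z) + (1 - y) log(1 - g(z))] with g the logistic sigmoid
    g = 1.0 / (1.0 + math.exp(-z))
    return -(y * math.log(g) + (1.0 - y) * math.log(1.0 - g))

z, y, eps = 0.7, 1.0, 1e-6          # arbitrary test point
grad_mse = (mse_linear(z + eps, y) - mse_linear(z - eps, y)) / (2 * eps)
grad_xent = (xent_sigmoid(z + eps, y) - xent_sigmoid(z - eps, y)) / (2 * eps)
g_lin = z                            # linear activation g(z) = z
g_sig = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
```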
This is the reason why linear activations are often used for regression and logistic/softmax activations for binary/multi-class classification. However, nothing keeps you from trying out different combinations. Although the expression for $\frac{\partial C}{\partial z}$ will probably not be so nice, it does not imply that your activation function would perform worse.
Second, I would like to add that there are plenty of activation functions that can be used for the hidden layers. Sigmoids (like the logistic function and hyperbolic tangent) have proven to work well indeed, but as indicated by Jatin, these suffer from vanishing gradients when your networks become too deep. In that case ReLUs have become popular. What I would like to emphasise, though, is that there are plenty more activation functions available, and different researchers keep looking for new ones (e.g. Exponential Linear Units (ELUs), Gaussian Error Linear Units (GELUs), ...) with different/better properties.
To conclude: When looking for the best activation functions, just be creative. Try out different things and see what combinations lead to the best performance.
Addendum: For more pairs of loss functions and activations, you probably want to look for (canonical) link functions.
Sigmoid and tanh should not be used as activation functions for the hidden layers. This is because of the vanishing gradient problem: if your input is large in magnitude (where the sigmoid goes flat), the gradient will be near zero. This causes very slow or no learning during backpropagation, as the weights are updated with really small values.
Detailed explanation here: http://cs231n.github.io/neural-networks-1/#actfun
The best function for hidden layers is thus ReLU.
Softmax outputs produce a vector that is non-negative and sums to 1. It's useful when you have mutually exclusive categories ("these images only contain cats or dogs, not both"). You can use softmax if you have $2,3,4,5,...$ mutually exclusive labels.
Using $2,3,4,...$ sigmoid outputs produces a vector where each element is a probability. It's useful when you have categories that are not mutually exclusive ("these images can contain cats, dogs, or both cats and dogs together"). You use as many sigmoid neurons as you have categories, and your labels should not be mutually exclusive.
A cute trick: you can also use a single sigmoid unit if you have a mutually exclusive binary problem; because a single sigmoid unit can be used to estimate $p(y=1)$, the Kolmogorov axioms imply that $1-p(y=1)=p(y=0)$.
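The distinction between the two output types can be seen numerically; a minimal sketch (the logits are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([2.0, 1.0, 0.1])

p = softmax(z)
print(np.isclose(p.sum(), 1.0))   # True: softmax outputs form a distribution

s = sigmoid(z)
print(((0 < s) & (s < 1)).all())  # True: each element is its own probability
print(s.sum())                    # ≈ 2.14, not 1: no distribution constraint
```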
Using the
identity function as an output can be helpful when your outputs are unbounded. A company's profit or loss for a quarter could be unbounded on either side. ReLU units or similar variants can be helpful when the output is bounded above or below. If the output is only restricted to be non-negative, it would make sense to use a ReLU activation as the output function.
Likewise, if the outputs are somehow constrained to lie in $[-1,1]$,
tanh could make sense.
The nice thing about neural networks is that they're incredibly flexible tools. |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{3} - x^{2} - 12x + 8\):
\(\beta_{0} = 1\), \(\beta_{1} = \nu\), \(\beta_{2} = (\nu^{2} - \nu - 8)/2\)
\(1 = \beta_{0}\), \(\nu = \beta_{1}\), \(\nu^{2} = 2\beta_{2} + \beta_{1} + 8\)
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
\( p \)   Sign
\(2\)     \(1\)
\(3\)     \(1\)
\(23\)    \(-1\)
\(29\)    \(1\)
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(4002))\):
\(T_{5}^{3} + T_{5}^{2} - 12 T_{5} - 8\), \(T_{7}^{3} - 6 T_{7}^{2} - 2 T_{7} + 32\), \(T_{11}^{3} + T_{11}^{2} - 12 T_{11} - 8\)
Solutions: Colligative Properties

Colligative properties:
(1) The properties of dilute solutions that depend on the number of solute particles, irrespective of their nature.
(2) Colligative properties are classified into four types:
a. Relative lowering of vapour pressure
b. Elevation of boiling point
c. Depression of freezing point
d. Osmotic pressure
(3) Normal colligative properties (when neither association nor dissociation of the solute particles takes place):
(i) Relative lowering of vapour pressure: \tt \frac{P^{o} - P}{P^{o}} = X_{solute}
(ii) Elevation of boiling point: ΔT_b = k_b m
(iii) Depression of freezing point: ΔT_f = k_f m
(iv) Osmotic pressure: π = CRT (or) π = CST
(i) Relative lowering of vapour pressure:
\tt RLVP : \frac{P^{o} - P}{P^{o}} = X_{solute} = \frac{n}{n + N}
Tricks:
(a) For a dilute solution (mass/mass % ≤ 5): \tt \frac{P^{o} - P}{P^{o}} = \frac{n}{N}
(b) For a concentrated solution (mass/mass % > 5): \tt \frac{P^{o} - P}{P^{o}} = \frac{n}{n + N}
(c) To find the molecular mass of the solute for either type of solution (dilute or concentrated) we can use \tt \frac{P^{o} - P}{P} = \frac{n}{N}
(d) Molality (m) = \tt \frac{P^{o} - P}{P} \times \frac{1000}{M(in \ g \ mol^{-1})}
where n = number of moles of solute, N = number of moles of solvent, M = molecular mass of solvent, P⁰ = vapour pressure of pure solvent, P = vapour pressure of solution
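As a hedged numerical illustration of trick (d), assuming made-up vapour pressures and water as the solvent:

```python
# Molality from relative lowering of vapour pressure:
# m = (P0 - P)/P * 1000/M, where M is the molar mass of the solvent.
P0, P = 100.0, 98.0  # assumed vapour pressures of solvent and solution (same units)
M = 18.0             # molar mass of water, g/mol
m = (P0 - P) / P * 1000.0 / M
print(round(m, 3))   # 1.134 mol/kg
```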
(b) Ostwald–Walker method:
Loss in weight of solution containers ∝ P
Loss in weight of solvent containers ∝ (P⁰ − P)
Gain in weight of dehydrating agent ∝ P⁰
\tt \frac{P^{o} - P}{P^{o}} = \frac{Loss \ in \ weight \ of \ solvent}{Gain \ in \ weight \ of \ dehydrating \ agent}
(ii) Elevation in boiling point:
(a) ΔT_b = k_b m, where \tt \Delta T_{b} = T_{b} - T_{b}^{0}
T_b = boiling point of the solution
\tt T_{b}^{0} = boiling point of the pure liquid (solvent)
k_b = boiling point elevation constant (or ebullioscopic constant)
m = molality of the solution
(b) \tt k_{b} = \frac{R(T_{b}^{0})^{2}}{1000 \ L_{v}}, where L_v = latent heat of vaporization per gram
(c) \tt k_{b} = \frac{MR(T_{b}^{0})^{2}}{1000 \ \Delta H_{vapour}}, where ΔH_vap = enthalpy of vaporization per mole and M = molar mass of the solvent (in g/mol)
(iii) Depression in freezing point:
(a) ΔT_f = k_f m, where \tt \Delta T_{f} = T_{f}^{0} - T_{f}
\tt T_{f}^{0} = freezing point of the solvent
k_f = freezing point depression constant (or cryoscopic constant)
(b) \tt k_{f} = \frac{R(T_{f}^{0})^{2}}{1000 \ L_{f}}, where L_f = latent heat of fusion per gram
(c) \tt k_{f} = \frac{MR(T_{f}^{0})^{2}}{1000 \ \Delta H_{fus}}, where ΔH_fus = enthalpy of fusion per mole and M = molar mass of the solvent (in g/mol)
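A small numerical sketch of ΔT_f = k_f m, assuming 5 g of glucose (M = 180 g/mol) dissolved in 100 g of water and the cryoscopic constant of water, k_f = 1.86 K kg/mol:

```python
# Freezing-point depression for an assumed glucose-in-water solution.
Kf = 1.86             # cryoscopic constant of water, K kg mol^-1
moles = 5.0 / 180.0   # moles of glucose (5 g, M = 180 g/mol)
m = moles / 0.100     # molality: moles per kg of solvent (100 g water)
dTf = Kf * m          # depression in freezing point
print(round(dTf, 3))  # 0.517 K
```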
(iv) Osmotic pressure (π):
(a) The hydrostatic pressure built up on the solution which just stops osmosis. In other words, "the pressure which must be applied on the concentrated-solution side to just stop osmosis."
(b) For dilute solutions, π = CRT = hdg, where
C = concentration of the solution (it must be in molarity)
R = solution constant, equivalent to the universal gas constant
h = height developed by the column of the concentrated solution
d = density of the solution in the column
(c) On the basis of osmotic pressure, solutions can be classified into three classes.
(i) Isotonic solutions: Two solutions having the same osmotic pressure are called isotonic solutions ⇒ C_1 = C_2 at a given T
(ii) Hypertonic and hypotonic solutions: When two solutions are compared, the solution with the higher osmotic pressure is termed hypertonic and the solution with the lower osmotic pressure is termed hypotonic.
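A quick numerical sketch of π = CRT, with an assumed 0.1 M solution at 300 K and R in L·atm·K⁻¹·mol⁻¹:

```python
# Osmotic pressure of an assumed dilute solution via pi = CRT.
R = 0.0821           # gas constant, L atm K^-1 mol^-1
C = 0.1              # concentration in molarity, mol/L
T = 300.0            # temperature, K
pi = C * R * T
print(round(pi, 3))  # 2.463 atm
```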
1. The relative lowering of vapour pressure is
\tt \frac{p_1^\star-p_1}{p_1^\star}=\frac{p_1^\star-x_{1}p_1^\star}{p_1^\star}=1-x_{1}=x_{2}\ or\ -\frac{\Delta p_{1}}{p_1^\star}=x_{2} (where \tt \Delta p_{1}=p_{1}-p_1^\star)
2. ΔT_b = K_b m, where K_b is known as the boiling point elevation constant.
3. ΔT_f = K_f m, where K_f is known as the freezing point depression constant.
4. Osmotic pressure π = cRT
Given regular expressions $R_1, \dots, R_n$, are there any non-trivial bounds on the size of the smallest context-free grammar for $R_1 \cap \cdots \cap R_n$?
This is a great question and it really lies within my interests. I'm glad that you asked it Max.
Let $n$ DFA's with at most $O(n)$ states each be given. It would be nice if there existed a PDA with sub-exponentially many states that accepts the intersection of the DFA's languages. However, I suggest that such a PDA might not always exist.
Consider the copy language. Now, restrict it to copying strings of length n.
Formally, consider $n$-copy $:=$ $\{ xx \, | \, x \in \{0,1\}^{n}\}$.
We can represent $n$-copy as the intersection of $n$ DFA's of size at most $O(n)$. However, the smallest DFA that accepts $n$-copy has $2^{\Omega(n)}$ states.
Similarly, if we restrict ourselves to a binary stack alphabet, then I suspect that the smallest PDA that accepts $n$-copy has exponentially many states.
P.S. Feel free to send me an email if you would like to discuss further. :)
I don't think that there can be any non-trivial lower or upper bounds.
For lower bounds, consider the language $L_1 = \{ a^{2^k} \}$ for a fixed $k$. The size of the smallest context-free grammar is logarithmic in the size of $L_1$'s regular expression, whereas the size of the smallest automaton for $L_1$ is linear in the size of $L_1$'s regex. This exponential difference stays the same if we intersect $L_1$ with other such languages. For upper bounds, consider a language $L_2$ that consists of exactly one de Bruijn sequence of length $n$. It is known that the size of a smallest grammar for $L_2$ is worst-case, i.e. $\Theta\left( \frac{n}{\log n} \right)$, so the difference to the "smallest" automaton for $L_2$ is only a logarithmic factor; see Proposition 1 in D. Hucke, M. Lohrey, E. Noeth, "Constructing Small Tree Grammars and Small Circuits for Formulas", to appear in FSTTCS 2014.
A non-trivial general lower or upper bound would contradict those results, since what is true for the intersection of $n$ languages must be true for the intersection of $1$ language.
Let me second Michael's judgment, this is indeed an interesting question. Michael's main idea can be combined with a result from the literature, thus providing a similar lower bound with a rigorous proof.
I will refer to bounds on CFG size in terms of the total number of alphabetic symbols in the $n$ regular expressions. Let this number be denoted by $k$. (As john_leo noted, we will not find any useful bounds in terms of the number of regular expressions taking part in the intersection.)
Neither the OP nor Michael did find it necessary to mention this, but an upper bound of $2^{k+1}$ (on the number of states) for converting an intersection of regular expressions into a NFA can be easily proved. For the record, here it is: Convert the regular expressions into Glushkov automata, which are all non-returning. Then apply the product construction to obtain an NFA for the intersection of these languages. (I suppose that one can improve the bound to $2^k+1$ or so.) An $s$-state NFA can be converted into a right-linear grammar (which is a special case of a CFG) of size $O(s^2)$ (if we measure grammar size as total number of symbols on the left- and right-hand-sides of the productions), thus giving size $O(4^{k})$. This bound of course sounds horrible if you have practical applications in mind. Trying to prove a better bound using nondeterministic transition complexity instead of nondeterministic state complexity for estimating the size of the NFA may be worth the effort.
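The product construction mentioned above can be sketched in a few lines (a toy example over a one-letter alphabet; the dictionary-based DFA representation is just for illustration):

```python
from itertools import product

def make_mod_dfa(m):
    # DFA over {'a'} accepting strings whose length is divisible by m
    return {
        "states": list(range(m)),
        "start": 0,
        "accept": {0},
        "delta": {(q, 'a'): (q + 1) % m for q in range(m)},
    }

def intersect(d1, d2):
    # Classical product construction: the state set is the Cartesian
    # product, so the result has at most |Q1| * |Q2| states.
    states = list(product(d1["states"], d2["states"]))
    return {
        "states": states,
        "start": (d1["start"], d2["start"]),
        "accept": {(p, q) for p in d1["accept"] for q in d2["accept"]},
        "delta": {((p, q), 'a'): (d1["delta"][(p, 'a')], d2["delta"][(q, 'a')])
                  for (p, q) in states},
    }

def accepts(dfa, w):
    q = dfa["start"]
    for c in w:
        q = dfa["delta"][(q, c)]
    return q in dfa["accept"]

# Intersecting "length divisible by 2" with "length divisible by 3"
# yields "length divisible by 6", with 2 * 3 = 6 product states.
d = intersect(make_mod_dfa(2), make_mod_dfa(3))
print(len(d["states"]))     # 6
print(accepts(d, 'a' * 6))  # True
print(accepts(d, 'a' * 4))  # False
```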
The other part is finding a witness language that can be succinctly expressed as the intersection of regular expressions, but is necessarily cumbersome to describe with a CFG. (Here we need to establish a lower bound on the size of all CFGs generating the language, of which there can be infinitely many.) The following argument gives a $2^{\Omega(\sqrt{k}/\log k)}$ lower bound.
Consider the finite language $L_n = \{\,ww^Rw \in \{a,b\}^*\mid |w|=n\,\}$, where $w^R$ denotes the reversal of $w$. Then $L_n$ can be expressed as the intersection of the following $2n+1$ regular expressions:
$r_i = (a+b)^ia(a+b)^{2(n-i-1)}a(a+b)^*+(a+b)^ib(a+b)^{2(n-i-1)}b(a+b)^*$, for $1\le i \le n$; $s_i = (a+b)^*a(a+b)^{2(n-i-1)}a(a+b)^i+(a+b)^*b(a+b)^{2(n-i-1)}b(a+b)^i$, for $1\le i \le n$; $\ell = (a+b)^{3n}$
The total number $k$ of alphabetic symbols in this intersection of expressions is in $O(n^2)$.
Using an argument given in the proof of Theorem 13 in (1), one can prove that every acyclic CFG that generates $L_n$ must have at least $2^n/(2n) = 2^{\Omega(\sqrt{k}/\log k)}$ distinct variables, if the right-hand side of each rule has length at most $2$. The latter condition is necessary for arguing about the number of variables, since we can generate a finite language with a single variable. But from the perspective of grammar size, this condition is not really a restriction, since we can transform a CFG into this form with only a linear blowup in size, see (2). Notice that the language used by Arvind et al. is over an alphabet of size $n$, and this yields a bound of $n^n/(2n)$; but the argument carries over with obvious modifications.
Still, a large gap remains between the $O(4^{k})$ upper bound and the abovementioned lower bound.
References:
(1) V. Arvind, Pushkar S. Joglekar, Srikanth Srinivasan, "Arithmetic Circuits and the Hadamard Product of Polynomials", FSTTCS 2009, Vol. 4 of LIPIcs, pp. 25-36.
(2) M. Lange, H. Leiß, "To CNF or not to CNF? An Efficient Yet Presentable Version of the CYK Algorithm", Informatica Didactica 8 (2009).
I have a bunch of points in $\mathbb{R}^3$ that I would like to translate and rotate so that their center is at the origin and the variance along the $x$ and $y$ axes are maximal (greedy, and in that order). To accomplish this I am trying to use python's principal components analysis algorithm. It is not behaving as I expect it to, most likely due to some misunderstanding about what PCA actually does on my part.
The Problem: When I center and then rotate the data, the variance along the third component is greater than along the second. This means that, once centered and rotated, there is more variance in the data along the $z$ axis than there is along the $y$. In other words, the rotation is not the correct one.

What I am Doing: Python's PCA routine returns an object (say myPCA) with several attributes. myPCA.Y is the data array, but centered, scaled, and rotated (in that order). I do not want the data to be scaled. I simply want a translation and a rotation.
import numpy as np
from matplotlib.mlab import PCA

# manufactured data producing the problem
data_raw = np.array([
    [80.0, 50.0, 30.0],
    [50.0, 90.0, 60.0],
    [70.0, 20.0, 40.0],
    [60.0, 30.0, 45.0],
    [45.0, 60.0, 20.0]
])

# obtain the PCA
myPCA = PCA(data_raw)

# center the raw data
centered = np.array([point - myPCA.mu for point in data_raw])

# rotate the centered data
centered_and_rotated = np.array([np.dot(myPCA.Wt, point) for point in centered])

# the variance along axis 0 should now be greater than along 1, and so on
variances = np.array([np.var(centered_and_rotated[:, i]) for i in range(3)])

# they are not:
print(variances[1] > variances[2])  # False; I want this to be True

# Now look at the PCA output, Y. This is centered, scaled, and rotated.
# The variances decrease in magnitude, as I want them to:
variances2 = np.array([np.var(myPCA.Y[:, i]) for i in range(3)])

# This looks good, but the coordinates have been scaled.
# Let's try to get from the raw coordinates to the PCA output Y.
# mu is the vector of means of the raw data, and sigma is the vector of
# standard deviations of the raw data along each coordinate direction
guess = np.array([np.dot(myPCA.Wt, (xxx - myPCA.mu) / myPCA.sigma) for xxx in data_raw])
print(guess == myPCA.Y)  # all True
The last two lines in the above show that we may take a point $\mathbf{x}$ from its representation in the raw data input into its representation $\mathbf{x}'$ in terms of the PCA axes via $$ \mathbf{x}' = \mathrm{R}\cdot\left((\mathbf{x}-\boldsymbol{\mu}) / \boldsymbol{\sigma} \right) $$
where $\mathrm{R}$ is myPCA.Wt, the weight matrix, $\boldsymbol{\mu}$ is the vector of means of the original data along each coordinate axis, $\boldsymbol{\sigma}$ is the vector of standard deviations of the original data along each coordinate axis, and the division is element-wise. In order to write this in standard mathematical notation, let's replace this division by multiplication: $$ \mathbf{x}' = \mathrm{R}\cdot\left(\mathrm{D}\cdot(\mathbf{x}-\boldsymbol{\mu}) \right) $$ where $\mathrm{D}$ is a diagonal matrix whose diagonal entries are $1/\sigma_i$.
This notation makes clear the problem: to undo the scaling, I need to act on the RHS above with $\mathrm{R}\mathrm{D}^{-1}\mathrm{R}^{-1}$. This will return me to the problem situation, in which the variance is greater along the $z$ axis than the $y$.
Is there a way to use PCA to get what I want, or do I need to use another method? |
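One possible approach (a sketch, not using matplotlib.mlab.PCA): center the data yourself and rotate with the right singular vectors of the centered matrix, skipping the per-axis standardization entirely. The rotated variances then come out in decreasing order:

```python
import numpy as np

# Translation + rotation only: center, then rotate onto the principal axes
# obtained from the SVD of the *unscaled* centered data.
data_raw = np.array([
    [80.0, 50.0, 30.0],
    [50.0, 90.0, 60.0],
    [70.0, 20.0, 40.0],
    [60.0, 30.0, 45.0],
    [45.0, 60.0, 20.0],
])

centered = data_raw - data_raw.mean(axis=0)       # translate center to origin
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
rotated = centered @ Vt.T                         # pure rotation, no scaling

variances = rotated.var(axis=0)
print(variances[0] >= variances[1] >= variances[2])  # True
```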
Skills to Develop
Compute probability in a situation where there are equally-likely outcomes Apply concepts to cards and dice Compute the probability of two independent events both occurring Compute the probability of either of two independent events occurring Do problems that involve conditional probabilities Compute the probability that in a room of \(N\) people, at least two share a birthday Describe the gambler's fallacy Probability of a Single event
If you roll a six-sided die, there are six possible outcomes, and each of these outcomes is equally likely. A six is as likely to come up as a three, and likewise for the other four sides of the die. What, then, is the probability that a one will come up? Since there are six possible outcomes, the probability is \(1/6\). What is the probability that either a one or a six will come up? The two outcomes about which we are concerned (a one or a six coming up) are called favorable outcomes. Given that all outcomes are equally likely, we can compute the probability of a one or a six using the formula:
\[\text{probability}=\frac{\text{Number of favorable outcomes}}{\text{Number of possible equally-likely outcomes}}\]
In this case there are two favorable outcomes and six possible outcomes. So the probability of throwing either a one or six is \(1/3\). Don't be misled by our use of the term "favorable," by the way. You should understand it in the sense of "favorable to the event in question happening." That event might not be favorable to your well-being. You might be betting on a three, for example.
The above formula applies to many games of chance. For example, what is the probability that a card drawn at random from a deck of playing cards will be an ace? Since the deck has four aces, there are four favorable outcomes; since the deck has \(52\) cards, there are \(52\) possible outcomes. The probability is therefore \(4/52 = 1/13\). What about the probability that the card will be a club? Since there are \(13\) clubs, the probability is \(13/52 = 1/4\).
Let's say you have a bag with \(20\) cherries: \(14\) sweet and \(6\) sour. If you pick a cherry at random, what is the probability that it will be sweet? There are \(20\) possible cherries that could be picked, so the number of possible outcomes is \(20\). Of these \(20\) possible outcomes, \(14\) are favorable (sweet), so the probability that the cherry will be sweet is \(14/20 = 7/10\). There is one potential complication to this example, however. It must be assumed that the probability of picking any of the cherries is the same as the probability of picking any other. This wouldn't be true if (let us imagine) the sweet cherries are smaller than the sour ones. (The sour cherries would come to hand more readily when you sampled from the bag.) Let us keep in mind, therefore, that when we assess probabilities in terms of the ratio of favorable to all potential cases, we rely heavily on the assumption of equal probability for all outcomes.
Here is a more complex example.
Example \(\PageIndex{1}\)
You throw \(2\) dice. What is the probability that the sum of the two dice will be \(6\)? To solve this problem, list all the possible outcomes. There are \(36\) of them since each die can come up one of six ways. The \(36\) possibilities are shown below.
Die 1  Die 2  Total    Die 1  Die 2  Total    Die 1  Die 2  Total
1      1      2        3      1      4        5      1      6
1      2      3        3      2      5        5      2      7
1      3      4        3      3      6        5      3      8
1      4      5        3      4      7        5      4      9
1      5      6        3      5      8        5      5      10
1      6      7        3      6      9        5      6      11
2      1      3        4      1      5        6      1      7
2      2      4        4      2      6        6      2      8
2      3      5        4      3      7        6      3      9
2      4      6        4      4      8        6      4      10
2      5      7        4      5      9        6      5      11
2      6      8        4      6      10       6      6      12
You can see that \(5\) of the \(36\) possibilities total \(6\). Therefore, the probability is \(5/36\).
If you know the probability of an event occurring, it is easy to compute the probability that the event does not occur. If \(P(A)\) is the probability of Event \(A\), then \(1-P(A)\) is the probability that the event does not occur. For the last example, the probability that the total is \(6\) is \(5/36\). Therefore, the probability that the total is not \(6\) is \(1 - 5/36 = 31/36\).
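Both the dice example and the complement rule can be verified by enumerating all \(36\) equally likely outcomes; a small Python sketch:

```python
from fractions import Fraction
from itertools import product

# Enumerate every (die 1, die 2) pair and count those summing to 6.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [o for o in outcomes if sum(o) == 6]
p = Fraction(len(favorable), len(outcomes))
print(p)      # 5/36
print(1 - p)  # 31/36, the probability that the total is not 6
```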
Probability of Two (or more) independent events
Events \(A\) and \(B\) are independent events if the probability of Event \(B\) occurring is the same whether or not Event \(A\) occurs. Let's take a simple example. A fair coin is tossed two times. The probability that a head comes up on the second toss is \(1/2\) regardless of whether or not a head came up on the first toss. The two events are
first toss is a head and second toss is a head.
So these events are independent.
Consider the two events:
"It will rain tomorrow in Houston" and "It will rain tomorrow in Galveston" (a city near Houston)
These events are not independent because it is more likely that it will rain in Galveston on days it rains in Houston than on days it does not.
Probability of A and B
When two events are independent, the probability of both occurring is the product of the probabilities of the individual events. More formally, if events \(A\) and \(B\) are independent, then the probability of both \(A\) and \(B\) occurring is:
\[P(A\; \text{and}\; B)=P(A)\times P(B)\]
where \(P(A\; \text{and}\; B)\) is the probability of events \(A\) and \(B\) both occurring, \(P(A)\) is the probability of event \(A\) occurring, and \(P(B)\) is the probability of event \(B\) occurring.
Example \(\PageIndex{2}\)
If you flip a coin twice, what is the probability that it will come up heads both times?
Solution
Event \(A\) is that the coin comes up heads on the first flip and Event \(B\) is that the coin comes up heads on the second flip. Since both \(P(A)\) and \(P(B)\) equal \(1/2\), the probability that both events occur is
\[\frac{1}{2}\times \frac{1}{2}=\frac{1}{4}\]
Example \(\PageIndex{3}\)
If you flip a coin and roll a six-sided die, what is the probability that the coin comes up heads and the die comes up \(1\)?
Solution
Since the two events are independent, the probability is simply the probability of a head (which is \(1/2\)) times the probability of the die coming up \(1\) (which is \(1/6\)). Therefore, the probability of both events occurring is
\[\frac{1}{2}\times \frac{1}{6}=\frac{1}{12}\]
Example \(\PageIndex{4}\)
You draw a card from a deck of cards, put it back, and then draw another card. What is the probability that the first card is a heart and the second card is black?
Solution
Since there are \(52\) cards in a deck and \(13\) of them are hearts, the probability that the first card is a heart is \(13/52 = 1/4\). Since there are \(26\) black cards in the deck, the probability that the second card is black is \(26/52 = 1/2\). The probability of both events occurring is therefore
\[\frac{1}{4}\times \frac{1}{2}=\frac{1}{8}\]
See the section on conditional probabilities on this page to see how to compute \(P(A\; \text{and}\; B)\)when \(A\) and \(B\) are not independent.
Probability of A or B
If Events \(A\) and \(B\) are independent, the probability that either Event \(A\) or Event \(B\) occurs is:
\[P(A\; \text{or}\; B)=P(A)+P(B)-P(A\; \text{and}\; B)\]
In this discussion, when we say "\(A\) or \(B\) occurs" we include three possibilities:
\(A\) occurs and \(B\) does not occur \(B\) occurs and \(A\) does not occur Both \(A\) and \(B\) occur
This use of the word "or" is technically called inclusive or because it includes the case in which both \(A\) and \(B\) occur. If we included only the first two cases, then we would be using an exclusive or.
(Optional) We can derive the law for \(P(A\; \mathbf{or}\; B)\) from our law about \(P(A\; \mathbf{and}\; B)\). The event "\(\textbf{A-or-B}\)" can happen in any of the following ways:
\(\textbf{A-and-B}\) happens \(\textbf{A-and-not-B}\) happens \(\textbf{not-A-and-B}\) happens
The simple event \(A\) can happen if either \(\textbf{A-and-B}\) happens or \(\textbf{A-and-not-B}\) happens. Similarly, the simple event \(B\) happens if either \(\textbf{A-and-B}\) happens or \(\textbf{not-A-and-B}\) happens. \(P(A) + P(B)\) is therefore \(P(A-and-B) + P(A-and-not-B) + P(A-and-B) + P(not-A-and-B)\), whereas \(P(A-or-B)\) is \(P(A-and-B) + P(A-and-not-B) + P(not-A-and-B)\). We can make these two sums equal by subtracting one occurrence of \(P(A-and-B)\) from the first. Hence, \(P(A-or-B) = P(A) + P(B) - P(A-and-B)\).
Now for some examples.
Example \(\PageIndex{5}\)
If you flip a coin two times, what is the probability that you will get a head on the first flip or a head on the second flip (or both)?
Solution
Letting Event \(A\) be a head on the first flip and Event \(B\) be a head on the second flip, then \(P(A) = 1/2\), \(P(B) = 1/2\), and \(P(A\; \text{and}\; B) = 1/4\). Therefore,
\[P(A\; \text{or}\; B)=\frac{1}{2}+\frac{1}{2}-\frac{1}{4}=\frac{3}{4}\]
Example \(\PageIndex{6}\)
If you throw a six-sided die and then flip a coin, what is the probability that you will get either a \(6\) on the die or a head on the coin flip (or both)?
Solution
Using the formula,
\[\begin{align*} P(6\; \text{or head}) &= P(6)+P(\text{head})-P(6\; \text{and head})\\ &= \frac{1}{6}+\frac{1}{2}-\left ( \frac{1}{6} \right )\left ( \frac{1}{2} \right )\\ &= \frac{7}{12} \end{align*}\]
An alternate approach to computing this value is to start by computing the probability of not getting either a \(6\) or a head. Then subtract this value from \(1\) to compute the probability of getting a \(6\) or a head. Although this is a complicated method, it has the advantage of being applicable to problems with more than two events. Here is the calculation in the present case. The probability of not getting either a \(6\) or a head can be recast as the probability of
\[(\text{not getting a 6})\; AND\; (\text{not getting a head})\]
This follows because if you did not get a \(6\) and you did not get a head, then you did not get a \(6\) or a head. The probability of not getting a six is \(1 - 1/6 = 5/6\). The probability of not getting a head is \(1 - 1/2 = 1/2\). The probability of not getting a six and not getting a head is \(5/6 \times 1/2 = 5/12\). This is therefore the probability of not getting a \(6\) or a head. The probability of getting a six or a head is therefore (once again) \(1 - 5/12 = 7/12\).
If you throw a die three times, what is the probability that one or more of your throws will come up with a \(1\)? That is, what is the probability of getting a \(1\) on the first throw OR a \(1\) on the second throw OR a \(1\) on the third throw? The easiest way to approach this problem is to compute the probability of
NOT getting a \(1\) on the first throw AND not getting a \(1\) on the second throw AND not getting a \(1\) on the third throw
The answer will be \(1\) minus this probability. The probability of not getting a \(1\) on any of the three throws is \(5/6 \times 5/6 \times 5/6 = 125/216\). Therefore, the probability of getting a \(1\) on at least one of the throws is \(1 - 125/216 = 91/216\).
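The same "one minus the complement" computation can be checked in exact arithmetic:

```python
from fractions import Fraction

# P(at least one 1 in three throws) = 1 - P(no 1 on any throw)
p_no_one = Fraction(5, 6) ** 3
print(1 - p_no_one)  # 91/216
```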
Conditional Probabilities
Often it is required to compute the probability of an event given that another event has occurred. For example, what is the probability that two cards drawn at random from a deck of playing cards will both be aces? It might seem that you could use the formula for the probability of two independent events and simply multiply \(4/52 \times 4/52 = 1/169\). This would be incorrect, however, because the two events are not independent. If the first card drawn is an ace, then the probability that the second card is also an ace would be lower because there would only be three aces left in the deck.
Once the first card chosen is an ace, the probability that the second card chosen is also an ace is called the conditional probability of drawing an ace. In this case, the "condition" is that the first card is an ace. Symbolically, we write this as:
\[P(\text{ace on second draw}\; |\; \text{an ace on the first draw})\]
The vertical bar "|" is read as "given," so the above expression is short for: "The probability that an ace is drawn on the second draw given that an ace was drawn on the first draw." What is this probability? After an ace is drawn on the first draw, there are \(3\) aces left out of \(51\) total cards. This means that the probability that one of these aces will be drawn is \(3/51 = 1/17\).
If Events \(A\) and \(B\) are not independent, then \[P(A\; \text{and}\; B) = P(A) \times P(B|A)\]
Applying this to the problem of two aces, the probability of drawing two aces from a deck is \(4/52 \times 3/51 = 1/221\).
Example \(\PageIndex{7}\)
If you draw two cards from a deck, what is the probability that you will get the Ace of Diamonds and a black card?
Solution
There are two ways you can satisfy this condition:
You can get the Ace of Diamonds first and then a black card.
You can get a black card first and then the Ace of Diamonds.
Let's calculate Case \(A\).
The probability that the first card is the Ace of Diamonds is \(1/52\). The probability that the second card is black given that the first card is the Ace of Diamonds is \(26/51\) because \(26\) of the remaining \(51\) cards are black. The probability is therefore \(1/52 \times 26/51 = 1/102\).
Now for Case \(B\):
The probability that the first card is black is \(26/52 = 1/2\). The probability that the second card is the Ace of Diamonds given that the first card is black is \(1/51\). The probability of Case \(B\) is therefore \(1/2 \times 1/51 = 1/102\), the same as the probability of Case \(A\). Recall that the probability of \(A\) or \(B\) is \(P(A)+P(B)-P(A\; \text{and}\; B)\). In this problem, \(P(A\; \text{and}\; B) = 0\) since a card cannot be the Ace of Diamonds and be a black card. Therefore, the probability of Case \(A\) or Case \(B\) is \(1/102 + 1/102 = 2/102 = 1/51\). So, \(1/51\) is the probability that you will get the Ace of Diamonds and a black card when drawing two cards from a deck.
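As a check, the two cases can be computed with exact fractions (a sketch using Python's built-in fractions module):

```python
from fractions import Fraction

# Case A: Ace of Diamonds first, then a black card
p_case_a = Fraction(1, 52) * Fraction(26, 51)
# Case B: a black card first, then the Ace of Diamonds
p_case_b = Fraction(26, 52) * Fraction(1, 51)

# The two cases are mutually exclusive, so the probabilities add
p_total = p_case_a + p_case_b
print(p_case_a, p_case_b, p_total)  # 1/102 1/102 1/51
```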
Birthday Problem
If there are \(25\) people in a room, what is the probability that at least two of them share the same birthday? If your first thought is that it is \(25/365 = 0.068\), you will be surprised to learn that it is much higher than that. This problem requires the application of the sections on \(P(A\; \text{and}\; B)\) and conditional probability.
This problem is best approached by asking what is the probability that no two people have the same birthday. Once we know this probability, we can simply subtract it from \(1\) to find the probability that two people share a birthday.
If we choose two people at random, what is the probability that they do not share a birthday? Of the \(365\) days on which the second person could have a birthday, \(364\) of them are different from the first person's birthday. Therefore the probability is \(364/365\). Let's define \(P2\) as the probability that the second person drawn does not share a birthday with the person drawn previously. \(P2\) is therefore \(364/365\). Now define \(P3\) as the probability that the third person drawn does not share a birthday with anyone drawn previously given that there are no previous birthday matches. \(P3\) is therefore a conditional probability. If there are no previous birthday matches, then two of the \(365\) days have been "used up," leaving \(363\) non-matching days. Therefore \(P3 = 363/365\). In like manner, \(P4 = 362/365\), \(P5 = 361/365\), and so on up to \(P25 = 341/365\).
In order for there to be no matches, the second person must not match any previous person and the third person must not match any previous person, and the fourth person must not match any previous person, etc. Since \(P(A\; \text{and}\; B) = P(A)P(B)\), all we have to do is multiply \(P2, P3, P4 ...P25\) together. The result is \(0.431\). Therefore the probability of at least one match is \(0.569\).
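The product of \(P2\) through \(P25\) can be computed directly; this sketch uses exact fractions before converting to a decimal:

```python
from fractions import Fraction

# Multiply P2 * P3 * ... * P25: the i-th person must avoid the
# (i - 1) birthdays already "used up", given no earlier matches.
p_no_match = Fraction(1)
for i in range(2, 26):
    p_no_match *= Fraction(365 - (i - 1), 365)

p_at_least_one_match = 1 - p_no_match
print(float(p_no_match))            # ~0.431
print(float(p_at_least_one_match))  # ~0.569
```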
Gambler's Fallacy
A fair coin is flipped five times and comes up heads each time. What is the probability that it will come up heads on the sixth flip? The correct answer is, of course, \(1/2\). But many people believe that a tail is more likely to occur after throwing five heads. Their faulty reasoning may go something like this: "In the long run, the number of heads and tails will be the same, so the tails have some catching up to do." The flaws in this logic are exposed in the simulation in this chapter.
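One way to see this without running a simulation is to enumerate all \(2^6\) equally likely sequences of six flips and condition on the first five being heads; this sketch does exactly that:

```python
from itertools import product

# Enumerate all 2**6 equally likely sequences of six fair-coin flips.
sequences = list(product("HT", repeat=6))

# Condition on the first five flips all being heads.
conditioned = [s for s in sequences if s[:5] == ("H",) * 5]

# Among those sequences, the sixth flip is heads in exactly half.
heads_on_sixth = sum(1 for s in conditioned if s[5] == "H")
print(heads_on_sixth, len(conditioned))  # 1 of 2, i.e. probability 1/2
```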
Contributor
Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University.
It is well known that the generating function of a regular language $L$, i.e. $\sum n_kz^k$ where $n_k$ is the number of words of length $k$ in $L$, is rational, i.e. a quotient of two polynomials $P(z)/Q(z)$. Suppose that $L$ is the language accepted by some finite automaton $\mathcal{A}$. How to find the polynomials $P, Q$ given $\mathcal{A}$? Is there a simple procedure and proof?
Cannon does more than just refer to the result that you ask for; he sketches the proof out. His proof is couched in the notation that he has set up for his application to hyperbolic groups. But it is easy enough to unravel the notation and express the proof in general.
Label the state set of the automaton as $0,\ldots,N$ where $0$ is the start state. Consider the transition matrix whose $i,j$ entry is the number of directed edges from $i$ to $j$ in the automaton. The growth function we want is the power series $f(x) = \sum n_k x^k$ where $n_k$ is the number of directed paths of length $k$ starting at $0$ and stopping at the terminal states of the automaton. For simplicity I'll assume every state is a terminal state; otherwise one just has to change the notation. With this assumption, $f(x) = f_0(x) + \ldots + f_N(x)$ where $f_i(x)$ is the growth function whose $k^{th}$ coefficient is the number of directed paths from state $0$ to state $i$ of length $k$. Cannon then writes a linear recursion for these functions: $f_0(x) = 1$ (the interpretation is that there are not actually any directed edges ending at the start state); and $$f_j(x) = x \cdot \sum_{i=0}^N b_{ij} \cdot f_i(x), \quad j=1,\ldots,N.$$ He explains how to express the coefficients $b_{ij}$ as functions of the entries of the transition matrix. Then he writes "It is a routine problem in linear algebra to solve (these equations) for $f_0, f_1, \ldots, f_N$ and $f$", which I interpret as rewriting the equations in vector form $F = x B F + E$, where $F$ is the column vector whose entries are the functions $f_0(x),\ldots,f_N(x)$, $B$ collects the coefficients $b_{ij}$, and $E = (1,0,\ldots,0)^T$ encodes the equation $f_0(x)=1$. So we get $(I - xB) F = (1,0,\ldots,0)^T$, and this can be solved for $F$, exhibiting each $f_i$, and hence $f$, as a rational function.
Start by making a deterministic finite automaton $M$. Now $n_k$ is the number of walks of length $k$ from the starting state to an accepting state, so $\sum n_k z^k$ is the sum of some entries of $(I-zA)^{-1}$, where $A=(a_{ij})$ is the integer matrix in which $a_{ij}$ is the number of transitions from state $i$ to state $j$. The entries you need to add are the $(k,\ell)$ entries where $k$ is the starting state and $\ell$ is an accepting state. To get this as a rational function, write $(I-zA)^{-1}$ using Cramer's rule. The denominator (before cancelling any common factors) is the determinant $|I-zA|$. The numerator is the adjugate of $I-zA$, whose entries are cofactors, which are also determinants. So in total, if there are $m$ accepting states, you get a sum of $m$ determinants divided by one determinant, and all these determinants are polynomials in $z$.
This is of course a special case of the Chomsky-Schutzenberger theorem that unambiguous context-free languages have algebraic generating functions. Restricted to a regular language it goes like this. Assume the automaton has state set $1,\ldots,n$. Let $1$ be the initial state for convenience. Let $A$ be the adjacency matrix of the automaton, let $e_1$ be the standard unit row vector and let $c$ be the column vector which is the characteristic vector of the terminal states. Then it is easy to see that the generating function is $$f(t)=\sum_{n=0}^{\infty}e_1A^nct^n = e_1\left[\sum_{n=0}^{\infty}A^nt^n\right]c= e_1(I-tA)^{-1}c.$$ Now using the classical adjoint formula for the inverse, you get that the denominator is $\det(I-tA)$ and the numerator is what it is.
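As a numerical sanity check of this formula (the automaton below is a hypothetical example, not taken from the question): for the language of binary strings with no two consecutive $1$s, with both states accepting, the word counts $n_k = e_1 A^k c$ should satisfy the linear recursion read off from the denominator $\det(I - tA) = 1 - t - t^2$:

```python
# Example DFA (assumed for illustration): binary strings with no "11".
# State 0: last symbol was not '1' (also the start state).
# State 1: last symbol was '1'. No transition from state 1 on input '1'.
A = [[1, 1],   # a_{ij} = number of transitions from state i to state j
     [1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# n_k = e_1 A^k c with e_1 = (1, 0) and c = (1, 1) (both states accepting)
counts, P = [], [[1, 0], [0, 1]]
for _ in range(10):
    counts.append(P[0][0] + P[0][1])
    P = matmul(P, A)

print(counts)  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

# det(I - tA) = 1 - t - t**2, so the counts should obey n_k = n_{k-1} + n_{k-2}
assert all(counts[k] == counts[k - 1] + counts[k - 2] for k in range(2, 10))
```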
This is basically equivalent to finding an unambiguous regular expression for the language. This MO answer explains how to do it, given a DFA $\mathcal A$.
The rest is easy:
replace $\emptyset$ by $0$
replace $\epsilon$ by $1$
replace any symbol with $x$
replace concatenation with multiplication
replace $\cup$ with $+$
replace the Kleene star of $f$ with $1/(1-f(x))$.
Source: this paper.
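As a small worked example (my own, not from the answer above): applying these rules to the unambiguous regular expression $(a \cup b)^*$, which denotes all words over a two-letter alphabet, gives

```latex
% each symbol contributes x, union becomes +, and the star becomes 1/(1-f):
f(x) = \frac{1}{1 - (x + x)} = \frac{1}{1-2x} = \sum_{k \ge 0} 2^k x^k
```

which correctly counts the $2^k$ words of length $k$.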
Please see http://algo.inria.fr/flajolet/Publications/books.html, the first several chapters of the book Analytic Combinatorics.
Submodule Consists of Elements Annihilated by Some Power of an Ideal
Problem 417
Let $R$ be a ring with $1$ and let $M$ be an $R$-module. Let $I$ be an ideal of $R$. Let $M'$ be the subset of elements $a$ of $M$ that are annihilated by some power $I^k$ of the ideal $I$, where the power $k$ may depend on $a$. Prove that $M'$ is a submodule of $M$.
For each positive integer $i$, let $N_i=\{a\in M \mid sa=0 \text{ for all } s\in I^i\}$. We prove three claims: (1) each $N_i$ is a submodule of $M$; (2) $N_i\subset N_{i+1}$ for all $i$; (3) $M'=\cup_{i=1}^{\infty} N_i$.

Let us prove claim 1. Let $a, b\in N_i$ and let $r\in R$. For any $s\in I^i$ we have
\begin{align*}
s(a+b)=sa+sb=0
\end{align*}
because $a, b$ are annihilated by $s\in I^i$. Also, we have
\begin{align*}
s(ra)=(sr)a=0
\end{align*}
since $sr\in I^i$ as $I^i$ is an ideal. Thus, $N_i$ is a submodule of $M$.

To prove claim 2, we note the inclusion
\[I^{i+1}=I^i\cdot I\subset I^{i}.\]
Thus each $a\in N_i$ is also annihilated by every element of $I^{i+1}$. Hence $N_i\subset N_{i+1}$ for any $i$, and this proves claim 2.

Claim 3 follows from the definition of the subset $M'$.
Since the union of submodules in an ascending chain of submodules is a submodule, we conclude that $M’$ is a submodule of $M$.
Abbreviation:
CRng$_1$
A commutative ring with identity is a ring with identity $\mathbf{R}=\langle R,+,-,0,\cdot,1\rangle$ such that $\cdot$ is commutative: $x\cdot y=y\cdot x$
Let $\mathbf{R}$ and $\mathbf{S}$ be commutative rings with identity. A morphism from $\mathbf{R}$ to $\mathbf{S}$ is a function $h:R\rightarrow S$ that is a homomorphism:
$h(x+y)=h(x)+h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(1)=1$
Remark: It follows that $h(0)=0$ and $h(-x)=-h(x)$.
Example 1: $\langle\mathbb{Z},+,-,0,\cdot,1\rangle$, the ring of integers with addition, subtraction, zero, multiplication, and one.
$0$ is a zero for $\cdot$: $0\cdot x=0$ and $x\cdot 0=0$.
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &4\\ f(5)= &1\\ f(6)= &1\\ \end{array}$
Images are essential elements in most scientific documents. LaTeX provides several options to handle images and make them look exactly the way you need. This article explains how to include images in the most common formats, how to shrink, enlarge and rotate them, and how to reference them within your document.
Below is an example of how to import a picture.
\documentclass{article}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\begin{document}
The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at.
\includegraphics{universe}
There's a picture of a galaxy above
\end{document}
LaTeX cannot manage images by itself, so we need to use the graphicx package. To use it, we include the following line in the preamble:
\usepackage{graphicx}
The command \graphicspath{ {./images/} } tells LaTeX that the images are kept in a folder named images under the directory of the main document.
The \includegraphics{universe} command is the one that actually includes the image in the document. Here universe is the name of the file containing the image, without the extension: universe.PNG becomes universe. The file name of the image should not contain white spaces nor multiple dots. Note: the file extension is allowed to be included, but it's a good idea to omit it. If the file extension is omitted, LaTeX will search for all the supported formats. For more details see the section about generating high resolution and low resolution images.
When working on a document which includes several images it's possible to keep those images in one or more separated folders so that your project is more organised.
The command \graphicspath{ {images/} } tells LaTeX to look in the images folder. The path is relative by default: if no initial directory is specified, the compiler will look for the folder relative to the current working directory, i.e. the same folder as the code where the image is included. For instance, a relative path:
%Path relative to the .tex file containing the \includegraphics command \graphicspath{ {images/} }
This is typically a straightforward way to reach the graphics folder within a file tree, but it can lead to complications when .tex files within folders are included in the main .tex file. Then the compiler may end up looking for the images folder in the wrong place. Thus, it is best practice to specify the graphics path to be relative to the main .tex file, denoting the main .tex file directory as ./ , for instance
%Path relative to the main .tex file \graphicspath{ {./images/} }
as in the introduction.
The path can also be absolute, if the exact location of the file on your system is specified. For example:
%Path in Windows format: \graphicspath{ {c:/user/images/} } %Path in Unix-like (Linux, Mac OS) format \graphicspath{ {/home/user/images/} }
Notice that this command requires a trailing slash
/ and that the path is in between double braces.
You can also set multiple paths if the images are saved in more than one folder. For instance, if there are two folders named
images1 and images2, use the command.
\graphicspath{ {./images1/}{./images2/} }
If no path is set LaTeX will look for pictures in the folder where the .tex file the image is included in is saved.
If we want to further specify how LaTeX should include our image in the document (length, height, etc), we can pass those settings in the following format:
\begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.5]{lion-logo}
The command
\includegraphics[scale=1.5]{lion-logo} will include the image
lion-logo in the document, the extra parameter
scale=1.5 will do exactly that: scale the image to 1.5 times its real size.
You can also scale the image to a specific width and height.
\begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[width=3cm, height=4cm]{lion-logo}
As you probably have guessed, the parameters inside the brackets
[width=3cm, height=4cm] define the width and the height of the picture. You can use different units for these parameters. If only the
width parameter is passed, the height will be scaled to keep the aspect ratio.
The length units can also be relative to some elements in the document. If you want, for instance, to make a picture the same width as the text:
\begin{document} The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at. \includegraphics[width=\textwidth]{universe}
Instead of
\textwidth you can use any other default LaTeX length:
\columnsep, \linewidth, \textheight, \paperheight, etc. See the reference guide for a further description of these units.
There is another common option when including a picture within your document: rotating it. This can easily be accomplished in LaTeX:
\begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.2, angle=45]{lion-logo}
The parameter
angle=45 rotates the picture 45 degrees counter-clockwise. To rotate the picture clockwise use a negative number.
The previous section explained how to include images in your document, but the combination of text and images may not look the way we expected. To change this we need to introduce a new environment.
In the next example the figure will be positioned right below this sentence. \begin{figure}[h] \includegraphics[width=8cm]{Plot} \end{figure}
The
figure environment is used to display pictures as floating elements within the document. This means you include the picture inside the
figure environment and you don't have to worry about its placement; LaTeX will position it in such a way that it fits the flow of the document.
Anyway, sometimes we need to have more control over the way the figures are displayed. An additional parameter can be passed to determine the figure positioning. In the example, \begin{figure}[h], the parameter inside the brackets sets the position of the figure to here. Below is a table listing the possible positioning values.
Parameter: Position
h: Place the float here, i.e., approximately at the same point it occurs in the source text (however, not exactly at the spot).
t: Position at the top of the page.
b: Position at the bottom of the page.
p: Put on a special page for floats only.
!: Override internal parameters LaTeX uses for determining "good" float positions.
H: Place the float at precisely the location in the LaTeX code. Requires the float package, though it may cause problems occasionally. This is somewhat equivalent to h!.
In the next example you can see a picture at the
top of the document, despite being declared below the text.
In this picture you can see a bar graph that shows the results of a survey which involved some important data studied as time passed. \begin{figure}[t] \includegraphics[width=8cm]{Plot} \centering \end{figure}
The additional command
\centering will centre the picture. The default alignment is
left.
It's also possible to
wrap the text around a figure. When the document contains small pictures this makes it look better.
\begin{wrapfigure}{r}{0.25\textwidth} %this figure will be at the right \centering \includegraphics[width=0.25\textwidth]{mesh} \end{wrapfigure} There are several ways to plot a function of two variables, depending on the information you are interested in. For instance, if you want to see the mesh of a function so it easier to see the derivative you can use a plot like the one on the left. \begin{wrapfigure}{l}{0.25\textwidth} \centering \includegraphics[width=0.25\textwidth]{contour} \end{wrapfigure} On the other side, if you are only interested on certain values you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, like the one on the left. On the other side, if you are only interested on certain values you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, like the one on the left.
For the commands in the example to work, you have to import the package
wrapfig. Add to the preamble the line
\usepackage{wrapfig}.
Now you can define the
wrapfigure environment by means of the commands
\begin{wrapfigure}{l}{0.25\textwidth} \end{wrapfigure}. Notice that the environment has two additional parameters enclosed in braces. Below, the code is explained in more detail:
{l} sets the alignment of the figure: l places it on the left side of the text, and r on the right.
{0.25\textwidth} sets the width of the area reserved for the figure, here one quarter of the text width.
\centering centres the image inside the reserved area.
For a more complete article about image positioning see Positioning images and tables
Captioning images to add a brief description and labelling them for further reference are two important tools when working on a lengthy text.
Let's start with a caption example:
\begin{figure}[h] \caption{Example of a parametric plot ($\sin (x), \cos(x), x$)} \centering \includegraphics[width=0.5\textwidth]{spiral} \end{figure}
It's really easy: just add the \caption{Some caption} command and inside the braces write the text to be shown. The placement of the caption depends on where you place the command; if it's above the \includegraphics then the caption will be on top of it, if it's below then the caption will also be set below the figure.
Captions can also be placed right after the figures. The
sidecap package uses similar code to the one in the previous example to accomplish this.
\documentclass{article} \usepackage[rightcaption]{sidecap} \usepackage{graphicx} %package to manage images \graphicspath{ {images/} } \begin{SCfigure}[0.5][h] \caption{Using again the picture of the universe. This caption will be on the right} \includegraphics[width=0.6\textwidth]{universe} \end{SCfigure}
There are two new commands in this example:
\usepackage[rightcaption]{sidecap} imports the sidecap package with the parameter rightcaption. This parameter establishes the placement of the caption at the right of the picture; you can also use leftcaption.
\begin{SCfigure}[0.5][h] \end{SCfigure} opens the environment for a side-captioned figure. The first optional parameter, 0.5, sets the width of the caption relative to the figure, and the second, h, works exactly as in the figure environment.
You can do a more advanced management of the caption formatting. Check the further reading section for references.
Figures, just as many other elements in a LaTeX document (equations, tables, plots, etc.), can be referenced within the text. This is very easy: just add a label to the figure or SCfigure environment, then later use that label to refer to the picture.
\begin{figure}[h] \centering \includegraphics[width=0.25\textwidth]{mesh} \caption{a nice plot} \label{fig:mesh1} \end{figure} As you can see in the figure \ref{fig:mesh1}, the function grows near 0. Also, in the page \pageref{fig:mesh1} is the same example.
There are three commands that generate cross-references in this example:
\label{fig:mesh1}: gives the figure a label that can be referred to later.
\ref{fig:mesh1}: prints the number assigned to the referenced figure.
\pageref{fig:mesh1}: prints the number of the page where the referenced figure appears.
The \caption command is mandatory to reference a figure.
Another great characteristic of a LaTeX document is the ability to automatically generate a list of figures. This is straightforward: just add the \listoffigures command where the list should appear.
This command only works on captioned figures, since it uses the captions to build the list. The example above lists the images in this article.
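A minimal sketch of a document that produces such a list (the image file name mesh is an assumption); the list itself is generated by the standard \listoffigures command:

```latex
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\listoffigures % prints the list, built from the figure captions
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{mesh} % assumed image file
\caption{a nice plot}
\end{figure}
\end{document}
```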
Important Note:
When using cross-references your LaTeX project must be compiled twice, otherwise the references, the page references and the table of figures won't work.
So far while specifying the image file name in the
\includegraphics command we have omitted file extensions. However, that is not necessary, though it is often useful. If the file extension is omitted, LaTeX will search for any supported image format in that directory, and will search for various extensions in the default order (which can be modified).
This is useful in switching between development and production environments. In a development environment (when the article/report/book is still in progress), it is desirable to use low-resolution versions of images (typically in .png format) for fast compilation of the preview. In the production environment (when the final version of the article/report/book is produced), it is desirable to include the high-resolution version of the images.
This is accomplished with the \DeclareGraphicsExtensions command, which sets the order in which LaTeX searches for file extensions. Thus, if we have two versions of an image, venndiagram.pdf (high-resolution) and venndiagram.png (low-resolution), then we can include the following line in the preamble to use the .png version while developing the report:
\DeclareGraphicsExtensions{.png,.pdf}
The command above will ensure that if two files are encountered with the same base name but different extensions (for example venndiagram.pdf and venndiagram.png), then the .png version will be used first, and in its absence the .pdf version will be used. This is also a good idea if some low-resolution versions are not available.
Once the report has been developed, to use the high-resolution .pdf version, we can change the line in the preamble specifying the extension search order to
\DeclareGraphicsExtensions{.pdf,.png} Improving on the technique described in the previous paragraphs, we can also instruct LaTeX to generate low-resolution .png versions of images on the fly while compiling the document if there is a PDF that has not been converted to PNG yet. To achieve that, we can include the following in the preamble after
\usepackage{graphicx}
\usepackage{epstopdf} \epstopdfDeclareGraphicsRule{.pdf}{png}{.png}{convert #1 \OutputFile} \DeclareGraphicsExtensions{.png,.pdf}
If venndiagram2.pdf exists but not venndiagram2.png, the file venndiagram2-pdf-converted-to.png will be created and loaded in its place. The command
convert #1 is responsible for the conversion and additional parameters may be passed between convert and #1. For example - convert -density 100 #1.
There are some important things to keep in mind though:
The on-the-fly conversion invokes an external program, so the document must be compiled with the --shell-escape option.
For the final, high-resolution version of the document, remove the \epstopdfDeclareGraphicsRule, so that only high-resolution PDF files are loaded. We'll also need to change the order of precedence.
LaTeX units and lengths
Abbreviation: Definition
pt: a point, the default length unit; about 0.3515 mm
mm: a millimetre
cm: a centimetre
in: an inch
ex: the height of an x in the current font
em: the width of an m in the current font
\columnsep: distance between columns
\columnwidth: width of the column
\linewidth: width of the line in the current environment
\paperwidth: width of the page
\paperheight: height of the page
\textwidth: width of the text
\textheight: height of the text
\unitlength: units of length in the picture environment

About image types in LaTeX:
JPG: best choice if we want to insert photos.
PNG: best choice if we want to insert diagrams (if a vector version could not be generated) and screenshots.
PDF: even though we are used to seeing PDF documents, a PDF can also store images.
EPS: EPS images can be included using the epstopdf package (we just need to install the package; we don't need to use \usepackage{} to include it in our document).
I am currently working on a project where a two-phase flow is considered. The phases are described using a level set approach and a signed distance function from the interface between the phases where positive values are liquid and negative values are gas.
I consider a Stokes flow, which gives me a velocity field $v$ and a pressure field $p$. Then I use $v$ to advect the level set using the advection equation
$\frac{\partial \phi}{\partial t} + v \cdot\nabla\phi=0$.
I am new to this type of problem, but I think I have some basic understanding of the numerical (diffusion) problems arising from this equation due to discretization, and I know that I need some kind of stabilization. My research concerns the modelling of two-phase flow in a porous material and the results need to be quite accurate, hence the popular SUPG scheme does not quite cut it.
My question is simply: what schemes (for finite elements) exist that give a better solution than the SUPG scheme? And furthermore, is there any good literature on the topic that discusses, and preferably compares, stabilization schemes?
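For context on the diffusion problem mentioned above, here is a minimal sketch (plain first-order finite differences in 1D, not FEM, and purely illustrative; all names and parameter values are mine) of how discretization alone smears an advected signed-distance function:

```python
# Minimal 1-D illustration (finite differences, NOT the FEM setting of the
# question) of the numerical diffusion that a plain first-order upwind scheme
# adds when advecting a level set:  phi_t + v * phi_x = 0  with constant v > 0.

def advect_upwind(phi, v, dx, dt, steps):
    n = len(phi)
    for _ in range(steps):
        # upwind difference for v > 0: take the value from the left neighbor
        # (phi[i-1] wraps around via Python negative indexing -> periodic BC)
        phi = [phi[i] - v * dt / dx * (phi[i] - phi[i - 1]) for i in range(n)]
    return phi

nx, v = 200, 1.0
dx = 1.0 / nx
x = [i * dx for i in range(nx)]
dt = 0.5 * dx / v                            # CFL number 0.5, stable
phi0 = [abs(xi - 0.25) - 0.1 for xi in x]    # signed distance to [0.15, 0.35]
phi = advect_upwind(phi0, v, dx, dt, steps=200)
# The zero level set moves right, but the minimum of phi rises: the scheme's
# truncation error acts like an artificial viscosity ~ v*dx/2, which is the
# kind of interface smearing that stabilization schemes must trade off.
print(min(phi0), min(phi))
```

The monotone upwind scheme cannot create new extrema, so the smearing shows up as a strictly shallower minimum of the signed-distance profile after advection.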
Recent Publications
Mechanical power limitations emerge from the physical trade-off between force and velocity. Many biological systems incorporate power-enhancing mechanisms enabling extraordinary accelerations at small sizes. We establish how power enhancement emerges through the dynamic coupling of motors, springs, and latches and reveal how each displays its own force-velocity behavior. We mathematically demonstrate a tunable performance space for spring-actuated movement that is applicable to biological and synthetic systems. Incorporating nonideal spring behavior and parameterizing latch dynamics allows the identification of critical transitions in mass and trade-offs in spring scaling, both of which offer explanations for long-observed scaling patterns in biological systems. This analysis defines the cascading challenges of power enhancement, explores their emergent effects in biological and engineered systems, and charts a pathway for higher-level analysis and synthesis of power-amplified systems.
The size dependence of the dielectric constants of barium titanate or other ferroelectric particles can be explored by embedding particles into an epoxy matrix whose dielectric constant can be measured directly. However, to extract the particle dielectric constant requires a model of the composite medium. We compare a finite-element model for various volume fractions and particle arrangements to several effective-medium approximations, which do not consider particle arrangement explicitly. For a fixed number of particles, the composite dielectric constant increases with the degree of agglomeration, and we relate this increase to the number of regions of enhanced electric field along the applied field between particles in an agglomerate. Additionally, even for dispersed particles, we find that the composite method of assessing the particle dielectric constant may not be effective if the particle dielectric constant is too high compared to the background medium dielectric constant.
We find that laser-induced local melting attracts and deforms grain boundaries in 2D colloidal crystals. When a melted region in contact with the edge of a crystal grain recrystallizes, it deforms the grain boundary—this attraction is driven by the multiplicity of deformed grain boundary configurations. Furthermore, the attraction provides a method to fabricate artificial colloidal crystal grains of arbitrary shape, enabling new experimental studies of grain boundary dynamics and ultimately hinting at a novel approach for fabricating materials with designer microstructures.
We study the two-dimensional superconductor-insulator transition (SIT) in thin films of tantalum nitride. At zero magnetic field, films can be disorder-tuned across the SIT by adjusting thickness and film stoichiometry; insulating films exhibit classical hopping transport. Superconducting films exhibit a magnetic-field-tuned SIT, whose insulating ground state at high field appears to be a quantum-corrected metal. Scaling behavior at the field-tuned SIT shows classical percolation critical exponents \( z\nu \approx 1.3 \), with a corresponding critical field \( H_c \ll H_{c2} \), the upper critical field. The Hall effect exhibits a crossing point near \( H_c \), but with a nonuniversal critical value \( \rho_{xy}^{c} \) comparable to the normal-state Hall resistivity. We propose that high-carrier-density metals will always exhibit this pattern of behavior at the boundary between superconducting and (trivially) insulating ground states.
Electrons confined to two dimensions display an unexpected diversity of behaviors as they are cooled to absolute zero. Noninteracting electrons are predicted to eventually “localize” into an insulating ground state, and it has long been supposed that electron correlations stabilize only one other phase: superconductivity. However, many two-dimensional (2D) superconducting materials have shown surprising evidence for metallic behavior, where the electrical resistivity saturates in the zero-temperature limit; the nature of this unexpected metallic state remains under intense scrutiny. We report electrical transport properties for two disordered 2D superconductors, indium oxide and tantalum nitride, and observe a magnetic field–tuned transition from a true superconductor to a metallic phase with saturated resistivity. This metallic phase is characterized by a vanishing Hall resistivity, suggesting that it retains particle-hole symmetry from the disrupted superconducting state.
Honeycomb iridates such as $\gamma$-Li$_2$IrO$_3$ are argued to realize Kitaev spin-anisotropic magnetic exchange, along with Heisenberg and possibly other couplings. While systems with pure Kitaev interactions are candidates to realize a quantum spin-liquid ground state, in $\gamma$-Li$_2$IrO$_3$ it has been shown that the presence of competing magnetic interactions leads to an incommensurate spiral spin order at ambient pressure below 38 K. We study the pressure sensitivity of this magnetically ordered state in single crystals of $\gamma$-Li$_2$IrO$_3$ using resonant x-ray scattering (RXS) under applied hydrostatic pressures of up to 3 GPa. RXS is a direct probe of electronic order, and we observe the abrupt disappearance of the $q_{\mathrm{sp}}=(0.57, 0, 0)$ spiral order at a critical pressure $P_c = 1.4$ GPa with no accompanying change in the symmetry of the lattice.
Two methods of quantifying the spatial resolution of a camera are described, performed, and compared, with the objective of designing an imaging-system experiment for students in an undergraduate optics laboratory. With the goal of characterizing the resolution of a typical digital single-lens reflex (DSLR) camera, we motivate, introduce, and show agreement between traditional test-target contrast measurements and the technique of using Fourier analysis to obtain the modulation transfer function (MTF). The advantages and drawbacks of each method are compared. Finally, we explore the rich optical physics at work in the camera system by calculating the MTF as a function of wavelength and f-number. For example, we find that the Canon 40D demonstrates better spatial resolution at short wavelengths, in accordance with scalar diffraction theory, but is not diffraction-limited, being significantly affected by spherical aberration. The experiment and data analysis routines described here can be built and written in an undergraduate optics lab setting.
Magnetic honeycomb iridates are thought to show strongly spin-anisotropic exchange interactions which, when highly frustrated, lead to an exotic state of matter known as the Kitaev quantum spin liquid. However, in all known examples these materials magnetically order at finite temperatures, the scale of which may imply weak frustration. Here we show that the application of a relatively small magnetic field drives the three-dimensional magnet \( \beta-\mathrm{Li}_2\mathrm{IrO}_3 \) from its incommensurate ground state into a quantum correlated paramagnet. Interestingly, this paramagnetic state admixes a zig-zag spin mode analogous to the zig-zag order seen in other Mott-Kitaev compounds. The rapid onset of the field-induced correlated state implies the exchange interactions are delicately balanced, leading to strong frustration and a near degeneracy of different ground states.
Direct experimental investigations of the low-energy electronic structure of the \( \mathrm{Na_2 IrO_3} \) iridate insulator are sparse and draw two conflicting pictures. One relies on flat bands and a clear gap, the other involves dispersive states approaching the Fermi level, pointing to surface metallicity. Here, by a combination of angle-resolved photoemission, photoemission electron microscopy, and x-ray absorption, we show that the correct picture is more complex and involves an anomalous band, arising from charge transfer from Na atoms to Ir-derived states. Bulk quasiparticles do exist, but in one of the two possible surface terminations the charge transfer is smaller and they remain elusive.
The complex antiferromagnetic orders observed in the honeycomb iridates are a double-edged sword in the search for a quantum spin-liquid: both attesting that the magnetic interactions provide many of the necessary ingredients, while simultaneously impeding access. Focus has naturally been drawn to the unusual magnetic orders that hint at the underlying spin correlations. However, the study of any particular broken symmetry state generally provides little clue about the possibility of other nearby ground states. Here we use magnetic fields approaching 100 tesla to reveal the extent of the spin correlations in \( \gamma \)-lithium iridate. We find that a small component of field along the magnetic easy-axis melts long-range order, revealing a bistable, strongly correlated spin state. Far from the usual destruction of antiferromagnetism via spin polarization, the high-field state possesses only a small fraction of the total iridium moment, without evidence for long-range order up to the highest attainable magnetic fields.
11.
, M. Saad Bhamla, Xiaotian Ma, Suzanne M. Cox, Leah L. Fitchett, Yongjin Kim, Je-sung Koh, Deepak Krishnamurthy, Chi-Yun Kuo, Fatma Zeynep Temel, Alfred J. Crosby, Manu Prakash, Gregory P. Sutton, Robert J. Wood, Emanuel Azizi, Sarah Bergbreiter, and S. N. Patek
The principles of cascading power limits in small, fast biological and engineered systems. Science 360 (2018) 397+.
12.
, , , , , , , , , , and Todd C. Monson
Permittivity effects of particle agglomeration in ferroelectric ceramic-epoxy composites using finite element modeling
13.
, , , , , , , , and
Local Melting Attracts Grain Boundaries in Colloidal Polycrystals. Physical Review Letters 120 (2018) 018002.
14.
, Mihir Tendulkar, Li Zhang, Sang-Chul Lee, and Aharon Kapitulnik
Superconductor To Weak-Insulator Transitions in Disordered Tantalum Nitride Films. Physical Review B 96 (2017) 134522.
15.
and Aharon Kapitulnik
Particle-Hole Symmetry Reveals Failed Superconductivity in the Metallic Phase of Two-Dimensional Superconducting Films. Science Advances 3 (2017) e1700612.
16.
, Alejandro Ruiz, Alex Frano, Wenli Bi, Robert J. Birgeneau, Daniel Haskel, and James G. Analytis
Resonant X-Ray Scattering Reveals Possible Disappearance of Magnetic Order Under Hydrostatic Pressure in the Kitaev Candidate $\gamma$-Li$_2$IrO$_3$. Physical Review B 96 (2017) 020402.
17.
and
Measuring the spatial resolution of an optical system in an undergraduate optics laboratory. American Journal of Physics 85 (2017) 429-438.
18.
Alejandro Ruiz, Alex Frano,, Itamar Kimchi, Toni Helm, Iain Oswald, Julia Y. Chan, R. J. Birgeneau, Zahirul Islam, and James G. Analytis
Correlated States in \( \beta-\mathrm{Li}_2\mathrm{IrO}_3 \) Driven by Applied Magnetic Fields. Nature Communications 8 (2017) 961.
19.
L. Moreschini, I. Lo Vecchio,, S. Moser, S. Ulstrup, R. Koch, J. Wirjo, C. Jozwiak, K. S. Kim, E. Rotenberg, A. Bostwick, J. G. Analytis, and A. Lanzara
Quasiparticles and Charge Transfer At the Two Surfaces of the Honeycomb Iridate \( \mathrm{Na_2 IrO_3} \). Physical Review B 96 (2017) 161116.
20.
K A Modic, B J Ramshaw, J B Betts,, James G Analytis, Ross D McDonald, and Arkady Shekhter
Robust Spin Correlations at High Magnetic Fields in the Harmonic Honeycomb Iridates. Nature Communications 8 (2017) 2.
Maybe this question is really silly and obvious, but I am missing something subtle. I am reading about sensitivity and block sensitivity.
Let $f:\{0,1\}^n\rightarrow \{0,1\}$ be a Boolean function.
Let $[n]=\{1,2,\dots,n\}$.
If $i\in[n]$, let $\Bbb 1_i$ be the length-$n$ vector with all $0$s except a $1$ at the $i$th position.
If $B\subseteq [n]$, let $\Bbb 1_B$ be the length-$n$ vector with $1$s only in the positions marked by $B$.
If $i\in[n]$ and $x\in\{0,1\}^n$, let $x^i=x\oplus\Bbb 1_i$ where $\oplus$ is $XOR$ operation.
If $B\subseteq [n]$ and $x\in\{0,1\}^n$, let $x^{B}=x\oplus\Bbb 1_B$ where $\oplus$ is $XOR$ operation.
Sensitivity of $f$ at input $x$ is $$S_x(f) = |\{i:f(x)\neq f(x^i)\}|$$
Sensitivity of $f$ is $$S(f)=\max_xS_x(f)$$
Block sensitivity of $f$ at input $x$, $BS_x(f)$, is the maximum $r$ such that there is a set of pairwise disjoint subsets $\{B_i\}_{i=1}^r$ ($B_i\cap B_j=\emptyset$ for all $i\neq j$) such that $$\forall j\mbox{, }f(x)\neq f(x^{B_j})$$
Block Sensitivity of $f$ is $$BS(f)=\max_xBS_x(f)$$
It is clear that $S(f)\leq BS(f)$, since we can take size-$1$ blocks at the sensitive coordinates of an input $x$ that achieves $S(f)$.
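For small $n$ both quantities can be checked by brute force straight from the definitions above (a Python sketch of mine, purely illustrative; none of these names come from the question):

```python
# Brute-force S(f) and BS(f) for f: {0,1}^n -> {0,1}; inputs and blocks are
# encoded as bitmasks, so x^B is just x ^ B. Exponential -- toy sizes only.

def sensitivity(f, n):
    return max(sum(f(x) != f(x ^ (1 << i)) for i in range(n))
               for x in range(1 << n))

def block_sensitivity(f, n):
    def best(x, avail):
        # max number of disjoint sensitive blocks drawn from the bits in `avail`
        r, B = 0, avail
        while B:  # enumerate all nonempty submasks of `avail`
            if f(x) != f(x ^ B):
                r = max(r, 1 + best(x, avail & ~B))
            B = (B - 1) & avail
        return r
    return max(best(x, (1 << n) - 1) for x in range(1 << n))

# OR on 3 bits: at x = 000 every single-bit flip changes the output,
# so S = BS = 3 and the inequality S(f) <= BS(f) is tight here.
f_or = lambda x: int(x != 0)
print(sensitivity(f_or, 3), block_sensitivity(f_or, 3))  # 3 3
```

Running it on candidate functions is a quick way to test any claimed relation between the two measures.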
So why can't I say $BS(f)\leq S(f)$? That is, suppose we found a maximum collection of disjoint blocks $B_1,\dots,B_r$; can't I then argue:

1. There is no bit position outside of $\cup_{i=1}^rB_i$ that we can flip in order to change the output (or else there would be a size-$1$ block, disjoint from the $B_i$, that we could add to the list, increasing $r$).
2. It is clear that $S(f) = |\cup_{i=1}^rB_i|$, since flipping any bit position in one of the $B_i$ changes the output, and by 1. there is no extra sensitive bit position.
3. Split the $B_i$ into size-$1$ blocks to get $S(f)=BS(f)$?

Am I wrong in that $2$ DOES NOT HOLD, since we can only flip one whole block at a time, and hence we can only say $S(f)\geq \max_i|B_i|$? Anything else I am missing?
Is there any example for which $BS(f)\geq S(f)^{1+\epsilon}$ for some $\epsilon>0$?
Recent Publications
Baryogenesis through neutrino oscillations is an elegant mechanism that has found several realizations in the literature corresponding to different parts of the model parameter space. Its appeal stems from its minimality and dependence only on physics below the weak scale. In this paper we show that by focusing on the physical time scales of leptogenesis instead of the model parameters, a more comprehensive picture emerges. The different regimes previously identified can be understood as different relative orderings of these time scales. This approach also shows that all regimes require a coincidence of time scales and this in turn translates to a certain tuning of the parameters, whether in mass terms or Yukawa couplings. Indeed, we show that the amount of tuning involved in the minimal model is never less than one part in $10^5$ according to a metric constructed from a combination of the sterile neutrino mass degeneracy and the Barbieri-Giudice tuning of the Yukawa coupling. Finally, we explore an extended model, where the tuning can be removed in exchange for the introduction of a new degree of freedom in the form of a leptophilic Higgs with a vacuum expectation value of the order of GeV.
We investigate the dewetting of a disordered melt of diblock copolymer from an ordered residual wetting layer. In contrast to simple liquids where the wetting layer has a fixed thickness and the droplets exhibit a single unique contact angle with the substrate, we find that structured liquids of diblock copolymer exhibit a discrete series of wetting layer thicknesses, each producing a different contact angle. These quantized contact angles arise because the substrate and air surfaces each induce a gradient of lamellar order in the wetting layer. The interaction between the two surface profiles creates an effective interface potential that oscillates with film thickness, thus producing a sequence of local minimums. The wetting layer thicknesses and corresponding contact angles are a direct measure of the positions and depths of these minimums. Self-consistent field theory is shown to provide qualitative agreement with the experiment.
We propose a practical scheme to use photons from causally disconnected cosmic sources to set the detectors in an experimental test of Bell’s inequality. In current experiments, with settings determined by quantum random number generators, only a small amount of correlation between detector settings and local hidden variables, established less than a millisecond before each experiment, would suffice to mimic the predictions of quantum mechanics. By setting the detectors using pairs of quasars or patches of the cosmic microwave background, observed violations of Bell’s inequality would require any such coordination to have existed for billions of years—an improvement of 20 orders of magnitude.
It has recently been shown that dark-matter annihilation to bottom quarks provides a good fit to the Galactic Center gamma-ray excess identified in the Fermi-LAT data. In the favored dark-matter mass range \( m \sim \) 30–40 GeV, achieving the best-fit annihilation rate \( \sigma v \sim 5 \times 10^{-26} \, \mathrm{cm^3 s^{-1}} \) with perturbative couplings requires a sub-TeV mediator particle that interacts with both dark matter and bottom quarks. In this paper, we consider the minimal viable scenarios in which a Standard Model singlet mediates \( s \)-channel interactions only between dark matter and bottom quarks, focusing on axial-vector, vector, and pseudoscalar couplings. Using simulations that include on-shell mediator production, we show that existing sbottom searches currently offer the strongest sensitivity over a large region of the favored parameter space explaining the gamma-ray excess, particularly for axial-vector interactions. The 13 TeV LHC will be even more sensitive; however, it may not be sufficient to fully cover the favored parameter space, and the pseudoscalar scenario will remain unconstrained by these searches. We also find that direct-detection constraints, induced through loops of bottom quarks, complement collider bounds to disfavor the vector-current interaction when the mediator is heavier than twice the dark-matter mass. We also present some simple models that generate pseudoscalar-mediated annihilation predominantly to bottom quarks.
We show that the existence of new, light gauge interactions coupled to Standard Model (SM) neutrinos gives rise to an abundance of sterile neutrinos through the sterile neutrinos’ mixing with the SM. Specifically, in the mass range of MeV–GeV and coupling of \( g' \sim 10^{-6} \)–\( 10^{-3} \), the decay of this new vector boson in the early Universe produces a sufficient quantity of sterile neutrinos to account for the observed dark matter abundance. Interestingly, this can be achieved within a natural extension of the SM gauge group, such as a gauged \( L_{\mu} \) − \( L_{\tau} \) number, without any tree-level coupling between the new vector boson and the sterile neutrino states. Such new leptonic interactions might also be at the origin of the well-known discrepancy associated with the anomalous magnetic moment of the muon.
We experimentally study the ghost critical field (GCF), a magnetic field scale for the suppression of superconducting fluctuations, using Hall effect and magnetoresistance measurements on a disordered superconducting thin film near its transition temperature \( T_c \). We observe an increase in the Hall effect with a maximum in field that tracks the upper critical field below \( T_c \), vanishes near \( T_c \), and returns to higher fields above \( T_c \). Such a maximum has been observed in studies of the Nernst effect and identified as the GCF. Magnetoresistance measurements near \( T_c \) indicate quenching of superconducting fluctuations, agree with established theoretical descriptions, and allow us to extract the GCF and other parameters. Above \( T_c \), the Hall peak field is quantitatively distinct from the GCF, and we contrast this finding with ongoing studies of the Nernst effect and superconducting fluctuations in unconventional and thin-film superconductors.
31.
, , , , and
Robust, Real-time, Digital Focusing for FD-OCM using ISAM on a GPU. Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XVIII 8934 (2014) 89342V.
32.
, , , , and
Physical Attributes and Assembly of PEG-linked Immuno-labeled Gold Nanoparticles for OCM Image Contrast in Tissue Engineering and Developmental Biology. Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XVIII 8934 (2014) 89342V.
33.
and Itay Yavin
Baryogenesis Through Neutrino Oscillations: a Unified Perspective. Physical Review D 89 (2014) 32.
34.
, Pawel Stasiak, Mark W. Matsen, and Kari Dalnoki-Veress
Quantized Contact Angles in the Dewetting of a Structured Liquid. Physical Review Letters 112 (2014) 068303.
35.
, Andrew S. Friedman, and David I. Kaiser
Testing Bell's Inequality with Cosmic Photons: Closing the Setting-Independence Loophole. Physical Review Letters 112 (2014) 195.
36.
Eder Izaguirre, Gordan Krnjaic, and
Bottom-up Approach to the Galactic Center Excess. Physical Review D 90 (2014) 18.
37.
and Itay Yavin
Dark Matter Progenitor: Light Vector Boson Decay into Sterile Neutrinos. Physical Review D 89 (2014) 113004.
38.
and Matthew D Schwartz
Quark and gluon jet substructure. Journal of High Energy Physics 2013 (2013).
39.
Yang Bai, Hsin-Chia Cheng,, and Jiayin Gu
A toolkit of the stop search via the chargino decay. Journal of High Energy Physics 2013 (2013).
40.
and Aharon Kapitulnik
Observation of the ghost critical field for superconducting fluctuations in a disordered TaN thin film. Physical Review B 88 (2013) 223.
01/14/16
No Comments
This is the third post in the “Machinery behind Machine Learning” series and after all the “academic” discussions it is time to show meaningful results for some of the most prominent Machine Learning algorithms – there have been quite a few requests from interested readers in this direction. Today we are going to present results for Linear Regression as prototype for a
Regression method, follow-up posts will cover Logistic Regression as prototype for a Classification method and a Collaborative Filtering / Matrix Factorization algorithm as prototype for a Recommender System. It is worth noting that these prototypes for Regression, Classification and Recommendation are relying on the same underlying Optimization framework – despite their rather different interpretation or usage in the field of Machine Learning.
To be a bit more precise: In all three applications the training part is driven by an algorithm for Unconstrained Optimization, in our case Gradient Descent. As we want to emphasize the tuning options and measure the impact of the stepsize in particular we provide a comparison of the standard Armijo rule, the Armijo rule with widening and the exact stepsize. In later articles – after we learned something about Conjugate Gradient and maybe some other advanced methods – we are going to repeat this benchmark, but for now let’s focus on the performance of Gradient Descent.
Linear Regression
Linear Regression is a conceptually simple approach for modeling a relationship between a set of numerical features – represented by the independent variables \(x_1,…,x_n\) – and a given numerical variable \(y\), the dependent variable. When we assume that we have \(m\) different data points or vectors \(x^{(j)}\) and values or real numbers \(y_j\), the model takes the following form:

\(y_j \approx c_0 + c_1 x^{(j)}_1 + … + c_n x^{(j)}_n\)

with \(c_i,\ i=0,..,n\), being some real-valued parameters. Using the matrix-vector notation from Linear Algebra we derive a more compact formulation. We put the parameter values \(c_i\) into the parameter vector \(c\) of length \(n+1\), collect the row vectors \(x^{(j)}:=[1,x^{(j)}_1,…,x^{(j)}_n]\) – the leading \(1\) in each vector belongs to the coefficient \(c_0\) – into an \(m\times (n+1)\)-matrix \(X\), and end up with something close to a linear system of equations:

\(Xc\approx y\).
The interpretation is that we want to find a parameter vector \(c\) that satisfies the linear system of equations as good as possible. Thus we are looking for the best approximate solution, because an exact solution does not exist in general if \(m>n+1\), which is assumed to be the case here.

The objective function for Linear Regression

One mathematical translation of "as good as possible" is to minimize the residual or error measured by the (squared) Euclidean norm:

\(\min_c f(c):=\|Xc - y\|^2\).

The Euclidean norm is the usual notion of distance, so nothing spectacular here. We square the expression to get rid of the square root that hides within the norm; it's simpler and better from a computational point of view. Of course squaring does influence the concrete value of the error, but it does not change the solution or optimal parameter vector \(c^*\) that we are looking for.

Now we have arrived at an unconstrained minimization problem: the process of minimizing the error theoretically involves all possible values of the parameters \(c_i\), there are no restrictions. It's time to show what we have learned so far. First, let's unfold the compact expression to see what exactly is measured by the objective function \(f\) defined above:

\(f(c)=\|Xc - y\|^2 = \sum_j \left( c^Tx^{(j)}-y_j \right)^2\).

In the implementation we have included an optional scaling factor of \(1/(2m)\) that normalizes the value of the objective function with respect to the number of data points. Its presence or absence does not change the concrete solution; it's an implementational detail that we have omitted here. The important ingredients are the summands \(\left( c^Tx^{(j)}-y_j \right)^2\) that quantify the pointwise deviation of the model from the input data.
Visualizing Linear Regression
Effectively we sum up the squared prediction errors for every data point, as is illustrated in the following plot. This example has been generated using the mtcars dataset that comes with R. The point-wise (squared) error is the (squared) length of the vertical line between the true data point (black) and the predicted point (blue) on the regression line.
The variables for the optimization algorithm are the coefficients \(c_i,\ i=0,…,n\), and the objective function \(f(c)\) is nothing but a polynomial of degree \(2\) in these variables. In fact there is a structural similarity to some of the simple test functions from part 2 of this blog series, but now let us look at the concrete test case. As usual you can find all information and the R code in the github repository; the code is self-contained.
The algorithmic setup
As we want to check the scaling behaviour in higher dimensions I have decided to create artificial data for this test. The setup is as follows:
- The first column of the \(m\times (n+1)\)-matrix \(X\) contains only \(1\)s; the remaining \(n\) columns contain random numbers uniformly distributed in the unit interval \([0,1]\).
- We define an auxiliary vector \(\hat{c}\) of length \(n+1\) by \(\hat{c} := [1,2,3,…,n+1]\).
- We define a random vector \(z\) of length \(m\) (the number of data points) containing random numbers uniformly distributed in the unit interval, scaled to unit length. \(z\) is used as noise generator.
- We define a weight \(\varepsilon\), a real number that allows us to scale the noise vector \(z\).
- Finally, we define the vector \(y\) by \(y:= X\hat{c} + \varepsilon z\).
The vector \(\hat{c}\) is by construction an approximate solution of \(y\approx Xc\) as long as the noise \(\varepsilon z\) is small. This setup might look somewhat complicated but it gives us a nice parametrization of the interesting things. The main parameters are
- The number of data points: \(m\).
- The number of parameters of the linear model: \(n+1\).
- The amount of noise: \(\varepsilon\), which gives a trivial upper bound \(\varepsilon^2\) on the minimal value of the objective function.
The last parameter allows for a quick sanity check, as \(\varepsilon^2\) is always a trivial upper bound: we already know the possible solution \(\hat{c}\) with the property \(f(\hat{c})=\varepsilon^2\).
Furthermore it makes the scenario somewhat realistic. Typically you want to reconstruct the unknown solution – modeled by \(\hat{c}\) – but what you get from any algorithm almost always is a perturbed solution \(c^*\) that still contains some of the noise. Using this parametrization you can get a feeling for how much of the added noise actually is present in the solution, which might lead to a deeper understanding of Linear Regression.
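The data setup above can be sketched in code as follows (the post's own code is R; this is an equivalent Python sketch, and all names are mine, not from the repository):

```python
# Python sketch of the artificial-data setup described above: X with a
# leading column of 1s, c_hat = [1, 2, ..., n+1], unit-length noise z,
# and y = X c_hat + eps * z.
import random

def make_data(m, n, eps, seed=0):
    rng = random.Random(seed)
    # first column all 1s, remaining n columns uniform in [0, 1]
    X = [[1.0] + [rng.random() for _ in range(n)] for _ in range(m)]
    c_hat = [float(i + 1) for i in range(n + 1)]
    z = [rng.random() for _ in range(m)]
    norm = sum(v * v for v in z) ** 0.5
    z = [v / norm for v in z]                 # noise vector, unit length
    y = [sum(Xj[i] * c_hat[i] for i in range(n + 1)) + eps * zj
         for Xj, zj in zip(X, z)]
    return X, y, c_hat

X, y, c_hat = make_data(m=100, n=3, eps=0.01)
# Sanity check from the text: f(c_hat) = ||X c_hat - y||^2 = eps^2
residual = sum((sum(Xj[i] * c_hat[i] for i in range(4)) - yj) ** 2
               for Xj, yj in zip(X, y))
print(abs(residual - 0.01 ** 2) < 1e-10)
```

The final check reproduces the sanity bound \(f(\hat{c})=\varepsilon^2\) mentioned in the text.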
Benchmarking Linear Regression
We compare the performance of the standard Armijo rule, the Armijo rule with widening, and the exact stepsize for several choices of \(m\), \(n\) and \(\varepsilon\). We also include a single comparison for some choices of a fixed stepsize at the end, which indicates what you can expect from this choice and where it can fail. In all cases, the algorithm terminates if the norm of the gradient is below \(1e-6\) or if the number of iterations exceeds \(100,000\).
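The benchmark's inner loop can be sketched like this (Python rather than the post's R; the Armijo parameters \(\sigma\) and \(\beta\) are illustrative assumptions, not taken from the repository):

```python
# Sketch of gradient descent on f(c) = ||Xc - y||^2 / (2m) with the standard
# Armijo rule, using the termination criteria stated in the text:
# stop when ||grad f|| < 1e-6 or after 100,000 iterations.
import random

def grad_descent_armijo(X, y, sigma=1e-4, beta=0.5, tol=1e-6, max_iter=100_000):
    m, n1 = len(X), len(X[0])
    c = [0.0] * n1                                   # start at the origin

    def f(c):
        return sum((sum(Xj[i] * c[i] for i in range(n1)) - yj) ** 2
                   for Xj, yj in zip(X, y)) / (2 * m)

    def grad(c):
        r = [sum(Xj[i] * c[i] for i in range(n1)) - yj
             for Xj, yj in zip(X, y)]
        return [sum(r[j] * X[j][i] for j in range(m)) / m for i in range(n1)]

    for it in range(max_iter):
        g = grad(c)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 ** 0.5 < tol:
            break
        t, fc = 1.0, f(c)
        # Armijo: shrink t until the sufficient-decrease condition holds
        while f([ci - t * gi for ci, gi in zip(c, g)]) > fc - sigma * t * gnorm2:
            t *= beta
        c = [ci - t * gi for ci, gi in zip(c, g)]
    return c, it

rng = random.Random(1)
X = [[1.0, rng.random()] for _ in range(50)]
y = [3.0 * Xj[0] + 2.0 * Xj[1] for Xj in X]          # noiseless: c* = (3, 2)
c, iters = grad_descent_armijo(X, y)
print([round(ci, 4) for ci in c])                    # close to [3.0, 2.0]
```

Widening would additionally try stepsizes larger than the initial \(t=1\) before shrinking; the exact stepsize for this quadratic objective has a closed form, which is why it is cheap enough to benchmark here.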
Impact of data complexity
The first test is about the simplest possible model, one-dimensional Linear Regression, i.e. the number of model parameters is \(2\). Don’t be confused by this, there is always one parameter for the “offset” \(c_0\), such that the \(n\)-dimensional model depends on \(n+1\) parameters. The optimization algorithm is initialized with the parameter vector \(c=(c_0,c_1)=(0,0)\). We vary the number of data points from \(5\) to \(10,000\) in order to experience the scaling with respect to the amount of data.
That’s more or less the expected behaviour. The number \(m\) of data points can grow but the underlying optimization problem still has the same dimension \(n+1=2\), thus the number of iterations remains roughly constant – given that there are sufficiently many data points – and the runtime increases linearly with \(m\). It is interesting that the exact stepsize needs so few iterations making it almost as fast as the standard Armijo rule, this indicates the simple structure of the problem. Nevertheless, the Armijo rule with widening is the best choice here.
Impact of model complexity
The second test varies the number of model parameters from \(2\) to \(16\), the number of data points is fixed at \(1,000\). Here, we expect the scaling behaviour to be somewhat different.
Still, the exact stepsize produces the smallest number of iterations but the difference is much smaller for more complex models which indicates that inexact stepsizes are a good choice here. On the other hand, the price for this slight advantage is prohibitively high. We can also see that not only the runtime per iteration, but also the number of iterations is increasing with the model complexity, this is something to be considered when choosing a model for a real-world problem.
Fixed stepsize
As promised we provide some results for fixed stepsizes as well. Conceptually, it does not make sense to use a fixed stepsize at all unless you do know in advance a good value that definitely works – which you typically don’t. In reality, you have to test several values in order to find something that allows the algorithm to converge in a reasonable time – which is nothing but a stepsize rule that requires a full algorithmic run in order to perform an update. But let’s look at the numbers.
Only for the one-dimensional – and almost trivial – Linear Regression problem there is a value of the stepsize for which the performance is competitive, but this choice is way too risky for non-trivial settings. Even for slightly more complex Linear regression problems and a fixed stepsize of \(1e-1\) the performance is worse by a factor of \(10\). And for more complicated objective functions, e.g. the Rosenbrock function that has been introduced in the last blog post, the value would have to be smaller than \(1e-6\) implying that the number of iterations and the runtime would explode.
Summary
The results for Linear Regression already indicate that it is indeed useful to keep an eye on the underlying machinery of Optimization methods. The Armijo rule with widening shows the best performance, but the exact stepsize leads to the smallest number of iterations. These two findings imply that the direction of steepest descent is a good, or at least reasonable, choice for Linear Regression. The reasoning behind this is twofold. Remember that one argument for inexact stepsizes was that they can help avoid the risk of overfitting, which the exact stepsize cannot avoid in case the search direction is bad. Overfitting – which can be interpreted as a significant deviation between the “local” search direction and the “global” optimal direction pointing directly to the solution – should lead to a higher number of iterations, or at least not to fewer. So, in reverse, if the number of iterations is smaller, there can be no overfitting and the search direction should be a reasonable approximation of the unknown optimal direction.
The second argument – which directly applies to widening but also to the exact stepsize – considers the step length. Widening leads to larger steps, which again only make sense if the local search direction is a reasonably good approximation of the global optimal direction in a larger environment of the current iterate. As a general rule: larger steps imply that the model is a “good fit” in a larger environment of the current point. The exact stepsize also produced larger steps, which we did not show here, but you can check it on your own. Feel free to take a look at the code in the GitHub repository and give it a try yourself; you can even apply it to your own data. For other choices of the parameters the numbers can look different: sometimes the widening idea has no effect, and sometimes it clearly outperforms the exact stepsize in terms of the number of iterations, but my goal was to show you the “average” case, as this is what mostly matters in practice. The results for Logistic Regression are coming soon, and there the impact on the runtime and the overall performance will be even more visible. Thanks for reading!
The Machinery behind Machine Learning – Part 2
The Machinery behind Machine Learning – Part 1 |
Consider the following constraint satisfaction problem: Let $\alpha_1 , \ldots, \alpha_k \in \mathbb{R}$ be given as well as an error parameter $\epsilon$. Find $p_1, \ldots, p_n$ such that
(i) $0 \le p_1, \ldots, p_n \le 1$
(ii) For $1\le j \le k$, $|(\sum_{i=1}^n p_i^j) - \alpha_j|\le \epsilon$.
Here $k \ll n$. I am interested in the time complexity of this problem. In particular, based on some symmetry considerations, one can show that it suffices to restrict one's search to the case where $p_1, \ldots, p_n$ all come from a set of size at most $k$. Based on this observation, one can consider all possible ways of partitioning $p_1, \ldots, p_n$ into $k$ sets (which takes time $n^k$) and subsequently solve a problem which requires solving polynomial equations over the reals (in $k$ variables and of degree bounded by $k$); this takes time $k^k$. The $n^k$ dependence is prohibitive for me, and I was wondering if there is a way to solve this in time $O(n^{O(1)} \cdot k^{k})$, or at least better than $n^k$. |
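For concreteness, conditions (i) and (ii) are cheap to verify for a candidate point; the sketch below (my own toy instance, not from the question) checks both constraints:

```python
def satisfies(p, alphas, eps):
    """Check (i) 0 <= p_i <= 1 and (ii) |sum_i p_i^j - alpha_j| <= eps for j = 1..k."""
    if any(not (0.0 <= pi <= 1.0) for pi in p):
        return False
    return all(abs(sum(pi ** j for pi in p) - a) <= eps
               for j, a in enumerate(alphas, start=1))

# Toy instance with n = 3, k = 2, built from a known feasible point.
p_true = [0.5, 0.5, 1.0]   # only 2 distinct values, matching the symmetry remark
alphas = [sum(pi ** j for pi in p_true) for j in (1, 2)]   # [2.0, 1.5]
print(satisfies(p_true, alphas, 1e-9), satisfies([0.9, 0.9, 0.9], alphas, 1e-9))
```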
T.R. Jain and V.K. Ohri Solutions for Class 11 Statistics for Economics Chapter 6 – Diagrammatic Presentation of Data: Bar Diagrams and Pie Diagrams is regarded as an important chapter to be studied thoroughly by students. Here, we have provided the T.R. Jain and V.K. Ohri Solutions for Class 11.
Board: CBSE
Class: Class 11
Subject: Statistics for Economics
Chapter: Chapter 6
Chapter Name: Diagrammatic Presentation of Data – Bar Diagrams and Pie Diagrams
Number of questions solved: 05
Category: T.R. Jain and V.K. Ohri
Chapter 6 – Diagrammatic Presentation of Data: Bar Diagrams and Pie Diagrams covers the below-mentioned concepts:
What is a bar diagram?
Types of bar diagram
Pie or circular diagrams
Multiple bar diagram

T.R. Jain and V.K. Ohri Solutions for Class 11 Statistics for Economics Chapter 6 – Diagrammatic Presentation of Data: Bar Diagrams and Pie Diagrams

Question 1
Represent the following data by a percentage bar diagram.
Subjects | Number of Students (2016-17) | Number of Students (2017-18)
Statistics | 25 | 30
Economics | 40 | 42
History | 35 | 28

Solution
Subject | 2016-17: Number of Students (%) | 2016-17: Cumulative Percentage | 2017-18: Number of Students (%) | 2017-18: Cumulative Percentage
Statistics | 25 | 25 | 30 | 30
Economics | 40 | 60 | 42 | 72
History | 35 | 100 | 28 | 100
Total | 100 | | 100 |

Question 2
Draw a suitable diagram to represent the following information
Factory | Selling Price per Unit (₹) | Quantity Sold | Wages (₹) | Material (₹) | Miscellaneous (₹) | Total Cost (₹)
X | 400 | 20 | 3,200 | 2,400 | 1,600 | 7,200
Y | 600 | 30 | 6,000 | 6,000 | 9,000 | 21,000
Also, show profit and loss.
Solution
First of all, we shall calculate the cost (wages, materials, miscellaneous) and profit per unit as given in the following table.
Item | Factory X (20 units): Total Cost (₹) | Factory X: Per Unit Cost (₹) | Factory Y (30 units): Total Cost (₹) | Factory Y: Per Unit Cost (₹)
Wages | 3,200 | 160 | 6,000 | 200
Materials | 2,400 | 120 | 6,000 | 200
Miscellaneous | 1,600 | 80 | 9,000 | 300
Profit/Loss | 800 (8,000 − 7,200) | 40 | −3,000 (18,000 − 21,000) | −100
Note: (Negative profit is regarded as a loss)
An appropriate diagram for representing this data would be rectangles whose widths are in the ratio of the quantities sold, i.e. 20:30, or 2:3. Selling prices would be represented by the corresponding heights of the rectangles, with the various costs (wages, materials, miscellaneous) and the profit or loss represented by the subdivisions of the rectangles, as shown in the diagram.
(Note: In the case of profit, i.e. when selling price > cost price, the entire rectangle will lie above the X-axis. But in the case of a loss, a portion of the rectangle will lie below the X-axis, reflecting the loss incurred, which cannot be recovered through sales.)
Question 3
Following are the data about the market share of four brands of TV sets sold in Panipat and Ambala. Present the data in the pie chart.
Brand of Sets | Units Sold in Panipat | Units Sold in Ambala
Samsung | 480 | 625
Akai | 360 | 500
Onida | 240 | 438
Sony | 120 | 312

Solution
Total sets sold in Panipat (Place A) and Ambala (Place B) are 1,200 and 1,875 respectively. The data are to be represented by two circles whose radii are in the ratio of the square roots of the total TV sets sold in each city, i.e. \(\sqrt{1200} : \sqrt{1875} = 4:5\). The calculations regarding the construction of the pie diagram are as follows.
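As a quick sanity check on the angle calculations, the 360° components can be computed directly from the unit counts (a small illustrative script, not part of the textbook solution). Note that computing from raw counts gives the exact Ambala angles (120°, 96°, 84.096°, 59.904°), while the table first rounds percentages to one decimal place and therefore obtains 119.88°, 96.12°, 84.24°, 59.76°.

```python
def pie_angles(units):
    """Convert unit counts to 360-degree pie components."""
    total = sum(units)
    return [u / total * 360.0 for u in units]

panipat = pie_angles([480, 360, 240, 120])   # Samsung, Akai, Onida, Sony
ambala = pie_angles([625, 500, 438, 312])
print(panipat)   # [144.0, 108.0, 72.0, 36.0] up to float rounding
print(ambala)
```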
Brand of Sets | Place A: Sets Sold | Place A: Sales % | Place A: Component of 360° | Place B: Sets Sold | Place B: Sales % | Place B: Component of 360°
Samsung | 480 | 40 | \(\frac{40}{100}\, \times \, 360^{\circ}\, =\, 144^{\circ}\) | 625 | 33.3 | \(\frac{33.3}{100}\, \times \, 360^{\circ}\, =\, 119.88^{\circ}\)
Akai | 360 | 30 | \(\frac{30}{100}\, \times \, 360^{\circ}\, =\, 108^{\circ}\) | 500 | 26.7 | \(\frac{26.7}{100}\, \times \, 360^{\circ}\, =\, 96.12^{\circ}\)
Onida | 240 | 20 | \(\frac{20}{100}\, \times \, 360^{\circ}\, =\, 72^{\circ}\) | 438 | 23.4 | \(\frac{23.4}{100}\, \times \, 360^{\circ}\, =\, 84.24^{\circ}\)
Sony | 120 | 10 | \(\frac{10}{100}\, \times \, 360^{\circ}\, =\, 36^{\circ}\) | 312 | 16.6 | \(\frac{16.6}{100}\, \times \, 360^{\circ}\, =\, 59.76^{\circ}\)
Total | 1,200 | 100 | 360° | 1,875 | 100 | 360°

Question 4
The following table shows the interest of students in a school in different games.
Games | Table Tennis | Volleyball | Hockey | Basketball | Cricket
No. of Students | 500 | 300 | 350 | 400 | 550

Solution

Question 5
The following table shows the monthly expenditure of different families on different items.
Items of Expenditure | Education | Clothing | Food | Rent | Other | Total Expenditure
Family A | 1,500 | 1,000 | 1,250 | 750 | 500 | 5,000
Family B | 1,700 | 850 | 1,200 | 850 | 600 | 5,200
Family C | 1,600 | 700 | 1,500 | 800 | 600 | 5,200
Represent the data in the form of a sub-divided bar diagram.
Solution
|
Is it known or unknown whether hypergraph minimal covers are P-enumerable? I would be most happy with lower bounds. I'd also like to hear about conditional results, which assume some conjecture is true. Of course, I'd also want to know about closely related problems (such as independent sets and cliques).
Motivation. I have a problem to which enumeration of minimal covers in hypergraphs can be reduced, and an algorithm that is exponential in the worst case and works OKish in practice. I wonder whether doing much better is possible; or are there good reasons I haven't found something better? (The problem arises in the context of static analysis of programs.) Background. A hypergraph is a pair $(V, E)$ of vertices $V$ and edges $E:(V\to2)\to2$, the latter being subsets of vertices. A cover $U$ is a subset of vertices that intersects all edges: $\forall e:E\;\exists u:U\;(u:e)$. A cover is minimal when no strict subset of it is a cover.
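To fix intuition, here is a brute-force enumeration of minimal covers matching the definitions above (my own exponential-time sketch – precisely the kind of behaviour the question hopes to beat):

```python
from itertools import combinations

def minimal_covers(vertices, edges):
    """Brute-force enumeration of all minimal covers of a hypergraph.
    Exponential in |V|; this only illustrates the definition, it is not
    an efficient (P-enumerable) algorithm."""
    covers = [set(c)
              for r in range(len(vertices) + 1)
              for c in combinations(vertices, r)
              if all(set(c) & set(e) for e in edges)]
    # keep only the inclusion-minimal covers
    return [c for c in covers if not any(d < c for d in covers)]

V = [1, 2, 3, 4]
E = [{1, 2}, {2, 3}, {3, 4}]
mc = minimal_covers(V, E)
print(mc)   # [{1, 3}, {2, 3}, {2, 4}]
```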
Judging from the results of googling ‘P-enumerable’, the term is not too popular. I'm referring to the definition given by Valiant [1]:
A relation $R$ is P-enumerable iff there is a polynomial $p$ such that for all $x$ the set $\{y:R(x,y)\}$ can be enumerated in time $|\{y:R(x,y)\}|\times p(|x|)$.

Related. According to Johnson et al. [2], in 1988 it was unknown whether minimal covers of hypergraphs are P-enumerable. The equivalent problem for graphs, when $\forall e:E\;(|e|=2)$, has been known to be P-enumerable since 1977 [3]. But [2] explains why the method of [3] cannot be generalized to hypergraphs. The related decision problem of finding one minimal cover is clearly in P. The related counting problem is #P-complete for graphs [1].
I also found some sources [4, 5] that I find hard to read: one uses many concepts I'm not familiar with, and the other is long. For example, Theorem 1.1 in [4] seems to imply that there exists a quasi-polynomial algorithm; but [5] has an extra condition (1.2, submodularity) that wouldn't hold for the covers problem. Moreover, [5] mentions an obstruction (Proposition 5.2) similar to the one alluded to by [2] (‘exercise for the reader’) for the methods of [3]. So it seems to me that it was still unknown in 2002 whether hypergraph minimal covers are P-enumerable, although I'm not completely sure I interpret [4] and [5] correctly.
[1] Valiant, The Complexity of Enumeration and Reliability Problems, 1979.
[2] Johnson, Yannakakis, Papadimitriou, On Generating All Maximal Independent Sets, 1988.
[3] Tsukiyama, Ide, Ariyoshi, Shirakawa, A New Algorithm for Generating All Maximal Independent Sets, 1977.
[4] Boros, Elbassioni, Gurvich, Khachiyan, Makino, Dual-Bounded Generating Problems: All Minimal Integer Solutions for a Monotone System of Linear Inequalities, 2002.
[5] Elbassioni, Incremental Algorithms for Enumerating Extremal Solutions of Monotone Systems of Submodular Inequalities and Their Applications, 2002.
I have a finite automaton by the standard model Hopcroft & Ullman define: $$ M = (Q, \Sigma, \delta, q_0, F) $$
Where $\delta$ is the transition function mapping $Q \times \Sigma \to Q$, such that $\delta(q, a)$ is a state for each state $q \in Q$ (the set of all states) and input symbol $a \in \Sigma$ (the alphabet). That allows $\delta$ to map to any element of $Q$. So that's a graph, although it's not described using the usual $G = (V, E)$ notation.
Without specifying any particular definition for $\delta$, I'd like to be able to write the constraint that $\delta$ may only define transitions which form a tree. How can that be expressed?
My thought is that I might say that $\delta$ must be recursive somehow (to give a tree shape), but I'm not sure how to go about that.
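For what it's worth, here is one possible way to make the constraint concrete. The formalization – every non-initial state has exactly one incoming transition, the start state has none, and every state is reachable from $q_0$ – is my own assumption (equivalent to the transition graph being a tree rooted at $q_0$), not a standard definition, and the sketch allows $\delta$ to be partial:

```python
def delta_is_tree(Q, Sigma, delta, q0):
    """Check whether the transitions of delta (a dict (state, symbol) -> state,
    possibly partial) form a tree rooted at q0."""
    targets = [delta[(q, a)] for q in Q for a in Sigma if (q, a) in delta]
    if targets.count(q0) != 0:          # the root has no incoming transition
        return False
    if any(targets.count(q) != 1 for q in Q if q != q0):
        return False                    # every other state has exactly one parent
    # every state must be reachable from q0
    seen, stack = {q0}, [q0]
    while stack:
        q = stack.pop()
        for a in Sigma:
            nxt = delta.get((q, a))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen == set(Q)

tree_delta = {(0, 'a'): 1, (0, 'b'): 2}    # 0 -> 1, 0 -> 2: a tree
cyclic_delta = {(0, 'a'): 1, (1, 'a'): 0}  # 0 -> 1 -> 0: a cycle
print(delta_is_tree([0, 1, 2], ['a', 'b'], tree_delta, 0))   # True
print(delta_is_tree([0, 1], ['a', 'b'], cyclic_delta, 0))    # False
```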
Thank you, |
ISSN: 0538-8066
Keywords: Chemistry ; Physical Chemistry
Source: Wiley InterScience Backfile Collection 1832-2000
Topics: Chemistry and Pharmacology
Notes: The high-temperature reaction between sulfur dioxide and acetylene in an excess of argon was studied in a 1-in. i.d. single-pulse shock tube. Mixtures ranging from 1.81% to 5.40% SO2 and 1.60% to 4.90% C2H2 were heated to reflected shock temperatures of 1550°–2150°K for dwell times of about 0.6 msec and gas-dynamically quenched. Total reaction densities were 0.89 to 5.4 × 10^-2 moles/l. The reaction products were analyzed by gas chromatography. A technique was developed for separating Ar, C2H4, C2H2, SO2, CO, CO2, H2S, COS, and CS2. The major products of the reaction are CO, H2, CS2, and sulfur. The products observed were compared with those predicted on the assumption that equilibrium was attained. Several preliminary experiments were carried out with ethylene–sulfur dioxide mixtures, and the results indicated that for this combination the sulfur dioxide probably reacted with the acetylene generated from the decomposition of the ethylene, rather than directly with the ethylene. The rate of decline in the sulfur dioxide content in C2H2–SO2 mixtures was found to be approximately second order (total) and can be empirically represented by $$-\Delta(\mathrm{SO}_2)/\Delta t = 3.1 \times 10^{10}\, T^{1/2} \exp(-40{,}800/RT)\,[\mathrm{Ar}]^{0.83}\,[\mathrm{SO}_2]^{0.87}\,[\mathrm{C}_2\mathrm{H}_2]^{0.25}\ \mathrm{mole\ cm^{-3}\ sec^{-1}}$$ A mechanism is proposed to account for the overall reaction kinetics.
Additional Material: 9 Ill.
Type of Medium: Electronic Resource
URL: Permalink
http://dx.doi.org/10.1002/kin.550030306 |
For (b), the statement is false. Try to find a counterexample. A typical nilpotent matrix is an upper triangular matrix whose diagonal entries are all zero.
Proof.
(a) Show that $AB$ is nilpotent
Since $A$ is nilpotent, there exists a positive integer $k$ such that $A^k=O$. Then we have\[(AB)^k=(AB)(AB)\cdots (AB)=A^kB^k=OB^k=O.\]
Here, in the second equality, we used the assumption that $AB=BA$. Thus we have $(AB)^k=O$; hence the product matrix $AB$ is nilpotent.
(b) Is $PN$ nilpotent?
In general, the product $PN$ of an invertible matrix $P$ and a nilpotent matrix $N$ is not nilpotent. Here is a counterexample. Let \[P=\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \text{ and } N=\begin{bmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.\]
Then the matrix $P$ is invertible since $\det(P)=1$. (Note that $P$ is a lower triangular matrix, so the determinant is the product of the diagonal entries.)
Also, it is easy to see by direct computation that $N^3=O$, hence $N$ is nilpotent. Indeed,\[N^2=\begin{bmatrix}0 & 0 & 1 \\0 &0 &0 \\0 & 0 & 0\end{bmatrix} \] and\[N^3=N^2N=\begin{bmatrix}0 & 0 & 1 \\0 &0 &0 \\0 & 0 & 0\end{bmatrix}\begin{bmatrix}0 & 1 & 1 \\0 &0 &1 \\0 & 0 & 0\end{bmatrix}=O.\]
Now the product $PN$ is \[PN=\begin{bmatrix} 0 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.\] We show that $PN$ is not nilpotent. We have \[(PN)^2=\begin{bmatrix} 0 & 1 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}\] and \[(PN)^3=(PN)^2(PN)=\begin{bmatrix} 0 & 1 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}=\begin{bmatrix} 0 & 1 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.\]
This calculation shows that\[(PN)^k=\begin{bmatrix}0 & 1 & 2 \\0 &1 &2 \\0 & 0 & 0\end{bmatrix}\neq O \text{ for all } k \geq 2.\]
Thus $PN$ is not nilpotent. In conclusion, the product $PN$ of the invertible matrix $P$ and the nilpotent matrix $N$ is not nilpotent.
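The matrix products above are easy to verify mechanically; this short script (not part of the original solution) redoes the computation with exact integer arithmetic:

```python
def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
N = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
Z = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

N3 = matmul(matmul(N, N), N)
PN = matmul(P, N)
PN2 = matmul(PN, PN)
PN3 = matmul(PN2, PN)
print(N3 == Z)      # True: N^3 = O, so N is nilpotent
print(PN2 == PN3)   # True: the powers (PN)^k stabilize for k >= 2
print(PN2 == Z)     # False: PN is not nilpotent
```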
|
I'm working with a simple local level model in a textbook \begin{align} y_t &= \alpha_t + \epsilon_t, \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2) \\ \alpha_{t+1} &= \alpha_t + \eta_t, \qquad \eta_t \sim N(0, \sigma_\eta^2) \end{align}
The conditional distribution of $\alpha_t$ given $Y_{t-1}$ (the set of all observations $y_j$ where $1 \leq j \leq t-1$) is $N(a_t, P_t)$, where we have $a_t = E(\alpha_t \mid Y_{t-1})$ and $P_t = Var(\alpha_t \mid Y_{t-1})$.
The book includes this calculation: \begin{equation} E[\alpha_t(\alpha_t - a_t)] = E[Var(\alpha_t \mid Y_{t-1})] = P_t \end{equation}
I don't understand where this first equality comes from. Working backwards from the law of conditional variance, I know that
\begin{align} Var(\alpha_t \mid Y_{t-1}) &= E[ (\alpha_t - E[\alpha_t \mid Y_{t-1}])^2\mid Y_{t-1}] \\ &= E[ (\alpha_t - a_t)^2 \mid Y_{t-1}] \\ &= E[ (\alpha_t^2 - 2 \alpha_t a_t + a_t^2) \mid Y_{t-1}] \\ &= E[\alpha_t^2 \mid Y_{t-1}] - 2E[\alpha_t a_t \mid Y_{t-1}] + E[a_t^2 \mid Y_{t-1}] \\ \end{align}
but I don't see how to get $\alpha_t (\alpha_t - a_t)$ from this, which would give me the correct value inside the expectation. |
You compare SFCs at different speeds. That is like comparing payloads for differently sized aircraft. SFC goes up with speed and, therefore, must be compared at the same speed. The work performed by an engine is thrust times distance, and higher speed means that the same thrust will perform more work per unit of time when the engine moves faster. The moving engine needs to slow down the airflow for combustion to take place, and then needs to accelerate the air by more than it has been slowed down to have positive thrust. Hence, SFC goes up in parallel with speed.
To have a meaningful comparison, we need to define efficiency. There are several, and two are of major importance for air-breathing aircraft engines: Thermal efficiency and propulsive efficiency.
Thermal efficiency
This describes how efficiently the chemical energy in the fuel $Q$ is converted into kinetic energy of the air flowing through the engine. Using $v_{\infty}$ for the incoming air speed, $v_{\infty} + \Delta v$ for the exit flow speed and $\dot{m}$ for the mass flow per unit of time, the kinetic energy added to the flow per unit of time is $\dot{m}\cdot\dfrac{(v_{\infty}+\Delta v)^2 - v_{\infty}^2}{2}$, and the thermal efficiency is $$\eta_{therm} = \frac{\dot{m}\cdot \left((v_{\infty} + \Delta v)^2 - v_{\infty}^2\right)}{2\cdot Q}$$To achieve good efficiency at high speed, a high $\Delta v$ is helpful. This explains why efficiency drops more strongly with speed for high-bypass-ratio engines and especially propellers. Since the thermal energy in fuel is the same for all engines in your question, because all run on kerosene, and we can assume a similar efficiency of combustion, we can neglect $Q$ in the comparison.
Propulsive efficiency
This describes how well the kinetic energy given to the flow is converted into propulsive work. Using the same variables as above, propulsive efficiency is $$\eta_{prop} = \frac{v_{\infty}}{v_{\infty} + \frac{\Delta v}{2}}$$
This equation explains the better efficiency of high-bypass-ratio engines and propellers at the same speed, because propulsive efficiency decreases as $\Delta v$ grows.
Overall efficiency
This is the product of thermal and propulsive efficiency, and the equation is $$\eta_{total} = \frac{T\cdot v_{\infty}}{Q}$$where $T = \dot{m}\cdot\Delta v$ denotes the thrust. Conveniently, $\Delta v$ is eliminated in the product, allowing turbojet engines like the Olympus 593 to look much better in comparison to other engines.
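The identity $\eta_{total} = \eta_{therm}\cdot\eta_{prop} = T\cdot v_{\infty}/Q$ can be sanity-checked numerically; the numbers below are purely illustrative, not Olympus 593 data:

```python
def efficiencies(v_inf, dv, mdot, Q):
    """Stream-tube efficiencies. Q is the chemical power released from
    the fuel; all inputs are illustrative values."""
    eta_therm = mdot * ((v_inf + dv) ** 2 - v_inf ** 2) / (2 * Q)
    eta_prop = v_inf / (v_inf + dv / 2)
    thrust = mdot * dv
    eta_total = thrust * v_inf / Q
    return eta_therm, eta_prop, eta_total

eta_therm, eta_prop, eta_total = efficiencies(v_inf=600.0, dv=300.0,
                                              mdot=100.0, Q=4.0e7)
print(eta_therm, eta_prop, eta_total)   # 0.5625 0.8 0.45
```

As expected, the product of the first two values equals the third: the $\Delta v$ dependence cancels.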
Intake efficiency
This answer would be incomplete without a look at the intake of the Concorde. At cruise, it would lift the pressure of the air at the compressor face by a factor of more than six over ambient by efficiently decelerating the flow. The compressor added a compression ratio of 12, so the pressure in the combustion chamber was 80 times higher than ambient. This high pressure makes the engine so efficient, but it is also needed to maintain combustion. Remember, ambient pressure at 18 km is just 76 mbar, so the absolute pressure in the combustion chamber at cruise was only 6 bar.
The full answer would be like this: The combination of intake and Olympus 593 at Mach 2.02 had a very good total efficiency, and comparisons with other engines at static conditions are misleading.
The comparison of results from a test stand on the ground would yield a very different picture, however. |
Let $p$ be a prime number. Denote by $P$ the set of all primes which are not greater than $p$.
Is there a well-known estimate of the product of all prime numbers in $P$ (i.e. $\prod_{q\in P}q$)?
The product of the first $n$ primes is called the $n$-th primorial: $$p_n \# = \prod_{k=1}^n p_k.$$
An estimate for their growth is $p_n\# =\exp((1+o(1))\, n\log n).$
What you are looking for is $\exp(\theta(x))$, where $\theta(x)$ is the first Chebyshev function: $$\theta(x) = \sum_{\substack{p\ \mathrm{prime} \\ p \leq x}} \log(p).$$ It is also related to the primorial, which is the product of the first $n$ primes.
The fact that $\theta(x) \sim x$, i.e. $\theta(x) = x + o(x)$, is equivalent to the prime number theorem. In fact, we can get a better quantification of the $o(x)$ term; this is obtained while proving the prime number theorem: $$\theta(x) = x + \mathcal{O}(x \exp(-c (\log x)^{\lambda}))$$ for some constants $c$ and $\lambda$.
Also, the fact that $$\theta(x) = x + \mathcal{O}(x^{1/2+\epsilon})$$ is equivalent to the Riemann hypothesis, where the power $\epsilon$ takes into account some $\log$ factors.
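Numerically, $\theta(x)/x$ is already close to $1$ for moderate $x$; here is a short computational illustration (my own sketch, using a simple sieve):

```python
from math import log

def theta(x):
    """First Chebyshev function: sum of log p over primes p <= x."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(log(p) for p in range(2, x + 1) if sieve[p])

# theta(x) ~ x by the prime number theorem, and exp(theta(p)) is the
# product of all primes up to p.
print(theta(10 ** 5) / 10 ** 5)
```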
EDIT
I am adding this to answer Tom's comment above. The claim that $$p_n\# =\exp((1+o(1))\, n\log n)$$ is equivalent to the prime number theorem. You can google for proofs of the prime number theorem. There are two main proofs. The first was by Jacques Hadamard and Charles Jean de la Vallée-Poussin in 1896 and uses complex analysis. The second was by Atle Selberg and Paul Erdős in 1949 and is an "elementary" proof. ("Elementary" here means that the proof doesn't use complex analysis. However, the elementary proof is supposedly much harder than any proof using complex analysis.)
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ( p T ). At central rapidity, ( |y|<0.8 ), ALICE can reconstruct J/ ψ via their decay into two electrons down to zero ... |
EUCLIDEAN, adjective. (geometry) Adhering to the principles of traditional geometry, in which parallel lines are equidistant.
EUCLIDEAN, adjective. Of or relating to Euclid's Elements, especially to Euclidean geometry.
EUCLIDEAN, adjective. Of or relating to Euclidean zoning.
EUCLIDEAN, adjective. (rare) Alternative spelling of Euclidean
EUCLIDEAN ALGORITHM, noun. (algebra) A method based on the division algorithm for finding the greatest common divisor (gcd) of two given integers.
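A minimal sketch of the algorithm this entry defines (an illustration, not part of the dictionary):

```python
def euclid_gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(euclid_gcd(252, 105))   # 21, since 252 = 2*105 + 42, 105 = 2*42 + 21, 42 = 2*21
```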
EUCLIDEAN ALPHABET, noun. The classical Greek alphabet, a modified form of the Ionic alphabet adopted by Athens in 403 BC following its defeat in the Peloponnesian War.
EUCLIDEAN DISTANCE, noun. (geometry) The distance between two points defined as the square root of the sum of the squares of the differences between the corresponding coordinates of the points; for example, in two-dimensional Euclidean geometry, the Euclidean distance between two points a = (ax, ay) and b = (bx, by) is defined as $\sqrt{(a_x - b_x)^2 + (a_y - b_y)^2}$.
EUCLIDEAN DOMAIN, noun. (algebra) a principal ideal domain in which division with remainder is possible
EUCLIDEAN GEOMETRY, noun. (geometry) The familiar geometry of the real world, based on the postulate that through any two points there is exactly one straight line.
EUCLIDEAN GROUP, noun. (mathematics) the set of rigid motions that are also affine transformations.
EUCLIDEAN GROUPS, noun. Plural of Euclidean group
EUCLIDEAN METRIC, noun. (analysis) In the space \(\mathbb{R}^n\), the metric \( d(\vec x, \vec y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + ... + (x_n - y_n)^2} \) where \( \vec x = (x_1, ..., x_n) \) and \( \vec y = (y_1, ..., y_n)\).
EUCLIDEAN NORM, noun. (mathematics) A norm of an ordinary Euclidean space, for which the Pythagorean theorem holds, defined by \(\| x \| = \sqrt{\sum_j {x_j}^2}\)
EUCLIDEAN PLANE, noun. (mathematics) Two-dimensional Euclidean space.
EUCLIDEAN PLANES, noun. Plural of Euclidean plane
EUCLIDEAN SPACE, noun. Ordinary two- or three-dimensional space, characterised by an infinite extent along each dimension and a constant distance between any pair of parallel lines.
EUCLIDEAN SPACE, noun. (mathematics) Any real vector space on which a real-valued inner product (and, consequently, a metric) is defined.
EUCLIDEAN SPACES, noun. Plural of Euclidean space
Dictionary definition
EUCLIDEAN, adjective. Relating to geometry as developed by Euclid; "Euclidian geometry".
|
I want to show that
$$\int_0^{\pi} \log(2 - 2 \cos x)\,dx = 0$$
However, I cannot do this. I tried splitting the integral into $\int_0^{\pi/3} \log(2 - 2 \cos x)\,dx + \int_{\pi/3}^{\pi} \log(2 - 2 \cos x) \,dx$ and showing that the two parts were negatives of one another. WolframAlpha does not give a very simple antiderivative. I was wondering if there is a nice way to do this.
Other attempts: using $\int_0^a f(x) \,dx = \int_0^{a} f(a-x) \,dx$, trying to change the $\cos$ to $\sin$ by some substitution like $u = \pi/2 - x$ and trying to get things to cancel. |
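A quick numerical check (not a proof, just my own sketch) supports the claimed value. A simple midpoint rule works despite the integrable logarithmic singularity at $0$, because it never samples the endpoint:

```python
from math import cos, log, pi

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# near 0 the integrand behaves like 2*log(x), which is integrable
approx = midpoint_rule(lambda x: log(2.0 - 2.0 * cos(x)), 0.0, pi, 200_000)
print(approx)   # close to 0
```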
Since $AB=I$, we have\begin{align*}\det(A)\det(B)=\det(AB)=\det(I)=1.\end{align*}This implies that the determinants $\det(A)$ and $\det(B)$ are not zero. Hence $A$ and $B$ are invertible matrices: $A^{-1}$ and $B^{-1}$ exist.
Now we compute\begin{align*}I&=BB^{-1}=BIB^{-1}\\&=B(AB)B^{-1} &&\text{since $AB=I$}\\&=BAI=BA.\end{align*}Hence we obtain $BA=I$.Since $AB=I$ and $BA=I$, we conclude that $B=A^{-1}$.
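The conclusion can be illustrated on a concrete integer example (a numerical illustration, not part of the proof):

```python
def matmul2(A, B):
    """2x2 matrix product with exact integer arithmetic."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

A = [[1, 2], [3, 5]]     # det(A) = -1, so the inverse has integer entries
B = [[-5, 2], [3, -1]]   # candidate inverse of A
I2 = [[1, 0], [0, 1]]
print(matmul2(A, B) == I2, matmul2(B, A) == I2)   # True True
```

As the theorem guarantees, $AB=I$ forces $BA=I$ as well.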
|
Answer
$$a=4,\:a=\frac{2}{7}$$
Work Step by Step
Solving the equation $7a^2 - 30a + 8 = 0$ using the quadratic formula, we obtain: $$a_{1,\:2}=\frac{-\left(-30\right)\pm \sqrt{\left(-30\right)^2-4\cdot \:7\cdot \:8}}{2\cdot \:7} \\ a=4,\:a=\frac{2}{7}$$
|
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$.If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?
(a) $A$
(b) $C^{-1}A^{-1}BC^{-1}AC^2$
(c) $B$
(d) $C^2$
(e) $C^{-1}BC$
(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
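Both parts can be double-checked numerically; a NumPy sketch (not something you would need in the exam itself):

```python
import numpy as np

A = np.array([[-3, -4],
              [8, 9]])
v = np.array([-1, 2])

# (a) A v = [-5, 10] = 5 v, so lambda = 5
Av = A @ v

# (b) since A v = 5 v, A^3 v = 5^3 v = 125 v, with no need to form A^3
A3v = 125 * v
```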
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
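One way to check a candidate answer (a NumPy sketch, assuming the usual approach of solving two of the three component equations for the coefficients and reading $a$ off the remaining one):

```python
import numpy as np

a1 = np.array([1, 2, 3])
a2 = np.array([2, -1, 4])

# If b = x*a1 + y*a2, the 1st and 3rd components force x + 2y = 0, 3x + 4y = 2
x, y = np.linalg.solve(np.array([[1.0, 2.0], [3.0, 4.0]]), [0.0, 2.0])

# the 2nd component then pins down a = 2x - y
a = 2 * x - y   # a = 5
```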
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
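The matrix does turn out to be invertible; a quick NumPy check (a sketch, not a substitute for the row reduction the exam expects):

```python
import numpy as np

A = np.array([[0, 0, 2, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1]], dtype=float)

Ainv = np.linalg.inv(A)                  # succeeds, so A is nonsingular
assert np.allclose(A @ Ainv, np.eye(4))  # sanity check: A A^{-1} = I
```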
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
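All three parts can be sketched in NumPy (the values come straight from the problem statement):

```python
import numpy as np

# (a) coefficient matrix and right-hand side
A = np.array([[3, 2],
              [5, 3]], dtype=float)
b = np.array([1, 2], dtype=float)

# (b) det A = 9 - 10 = -1, so the inverse exists
Ainv = np.linalg.inv(A)

# (c) the solution is x = A^{-1} b
x = Ainv @ b   # [1, -1]
```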
(Linear Algebra Midterm Exam 1, the Ohio State University) |
An excited atom of mass m and initial speed v emits a photon in its direction of motion. If $v \ll c$, show that the frequency of the photon is higher by $\frac{\Delta f}{f} \approx \frac{v}{c}$ than it would have been if the
...
q) Given k1 = 1500 N/m, k2 = 500 N/m, m1 = 2 kg and m2 = 1 kg. Find (a) the potential energy stored in the springs in equilibrium, and (b) the work done in slowly pulling down m2 by 8 cm. *Image* the image is not clear I know... but just
...
Consider a case when a block is placed on a horizontal platform at rest. It then starts accelerating with a constant acceleration such that, Friction present between the block and the horizontal platform > mass times its a
...
q) A chain is held on a frictionless table with (1/n)th of its length hanging over the edge. If the chain has a length L and mass M, how much work is required to slowly pull the hanging part back on the table?
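For reference, the standard route (my derivation, not from the post): the hanging piece has mass $M/n$ and its centre of mass sits $L/(2n)$ below the table edge, so $W = \frac{MgL}{2n^2}$. A tiny sketch with illustrative values:

```python
# Work to slowly pull the hanging (1/n)-th of a chain back onto the table:
# W = (M/n) * g * (L/(2n)) = M*g*L / (2*n**2)
M, L, n, g = 2.0, 1.5, 4, 9.8   # illustrative values only, not from the post
W = M * g * L / (2 * n**2)
```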
...
In a stationary wave that forms as a result of reflection of waves from an obstacle the ratio of the amplitude at an antinode to the amplitude at node is K. Find the percent of Total Energy reflected.
...
*Image* A force F acts on a block of mass m placed on a horizontal smooth surface at an angle Θ with horizontal. Then (A) If F sinΘ < mg then a = F + mg/m (B) Acceleration = FCosΘ/m when F > mg cosecΘ (C) Accelerati
...
What to do and what not to do in the last month before IIT-JEE http://www.youtube.com/watch?v=Z5-AfiJxjOc&context=C4acfebbADvjVQa1PpcFMQX-FKsRHXxJDA7RTt_k4sN1NL6Iapsuo=
...
Consider a rod CD which is held horizontally across a peg, transversally (something like a straight line about a peg, free to rotate about any point), at point A (which is not the c.o.m.), and D is the c.o.m., with DA = a. Find the angle rotated
...
if a body is rotating in a circular orbit then what is the moment of net force acting on the body about the axis of rotation?
...
Two uniform, thin identical rods each of mass 'm' and length 'l' are joined to form a cross as shown in the figure. Find the moment of inertia of the cross about a line AC. (Which is perpendicular to the plane of paper passing
...
This is a marathon for Physics, i.e., it continues for long. A user posts a problem, the next posts the solution and ANOTHER problem to be solved by the subsequent one, and this continues. Problem 1: A uniform rod of length *Image*
...
q) Two particles, each of mass m and speed v, travel in opposite directions along parallel lines separated by a distance d. Show that the vector angular momentum of the two-particle system is the same whatever be the point a
...
*Image* A projectile is fired at t=0 with velocity v = 20√2 m/s at an angle of 45° as shown. A wall inclined at the same angle and at a distance of 40 m starts moving towards the projectile at the same time (i.e., t
...
1) NCERT says in some cases the angular momentum of a particle may not be parallel to the fixed axis. Can you give an example? In case of rotation about fixed axis, the component of angular momentum perpendicular to the fixed
...
2 persons are holding a rope of negligible weight tightly at its ends,so that it is horizontal. A 20kg weight is attached to the rope at the mid-point,which is now no longer horizontal. The minimum tension required to strengt
...
What are the forces acting on the ladder in the example below? *Image* The ladder is held against the wall, and the wall is frictionless. So what will be the forces acting on the ladder? Please help!
...
The vibration frequencies of atoms in a solid at normal temperature are of the order of $10^{13}$/sec. Imagine the atoms to be connected by springs... Suppose that a single silver atom vibrates with this frequency and that all other atoms
...
Q: Why is it easier to break a piece of paper when it is wet than when it is dry?
...
Mechanics: *Image* The coefficients of friction are μs = 0.40 and μk = 0.30 between all surfaces of contact. Determine the force P for which motion of the 30-kg block is impending if cable AB (a) is attached as shown, (b) i
...
A mass 'm' is undergoing S.H.M. in the vertical direction about the mean position 'x' with amplitude 'A' and angular freq. $\omega$. At a distance y from the mean position, the mass detaches from the spring. Assuming that the
...
A straight conductor of uniform cross-section carries a current I. Let 'S' be the specific charge of an electron. The momentum of all the free electrons per unit length of the conductor, due to their drift velocities only, in
...
A block of mass 'M' executes S.H.M. with amplitude 'A'. When it passes through the mean position, a lump of putty of mass 'm' is dropped on it. Find the new amplitude.
...
A sphere of radius $R$ is floating in a liquid of density $\rho$ with half of its volume submerged. If the sphere is slightly pushed and released, it starts performing S.H.M. Find the frequency of these oscillations.
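A sketch of the standard solution (my derivation, not from the post): half-submerged floating means the sphere's density is $\rho/2$; pushing it down by $x$ adds a restoring buoyant force $\rho g \pi R^2 x$, so $\omega^2 = \frac{\rho g \pi R^2}{\frac{2}{3}\pi R^3 \rho} = \frac{3g}{2R}$ and $f = \frac{1}{2\pi}\sqrt{\frac{3g}{2R}}$:

```python
import math

def shm_frequency(R, g=9.8):
    """f = (1 / 2pi) * sqrt(3g / 2R) for a half-submerged floating sphere."""
    return math.sqrt(3 * g / (2 * R)) / (2 * math.pi)

f = shm_frequency(R=0.1)   # illustrative radius: 0.1 m gives roughly 1.9 Hz
```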
...
*Image* i am getting c) but ans. given is d)
...
NCERT question number 36. chapter - laws of motion.
...
*Image* Consider a mass $m$ rotating about the given axis with angular velocity $\omega$. Find the acceleration and pseudo force of the block as seen from the MID POINT of the string. Assume the length of the string is $L$.
...
|
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ equal to $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
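The brute-force list is at least easy to verify mechanically. A self-contained Python sketch (0-indexed permutations as length-4 tuples; the generators for orders 6 and below mirror the ones above, while the dihedral and $A_4$ generators are my own picks):

```python
def generate(gens):
    """Closure of a set of permutations of {0,1,2,3} under composition."""
    elems = {tuple(range(4))}
    frontier = list(elems)
    while frontier:
        p = frontier.pop()
        for g in gens:
            q = tuple(p[g[i]] for i in range(4))   # q = p composed with g
            if q not in elems:
                elems.add(q)
                frontier.append(q)
    return elems

e           = (0, 1, 2, 3)
two_cycle   = (1, 0, 2, 3)   # a transposition
three_cycle = (1, 2, 0, 3)   # a 3-cycle
four_cycle  = (1, 2, 3, 0)   # a 4-cycle
double      = (1, 0, 3, 2)   # a product of two disjoint transpositions

orders = {
    1:  [e],
    2:  [two_cycle],
    3:  [three_cycle],
    4:  [four_cycle],
    6:  [two_cycle, three_cycle],    # a copy of S3
    8:  [four_cycle, (2, 1, 0, 3)],  # a 2-Sylow (dihedral of order 8)
    12: [three_cycle, double],       # A4
    24: [two_cycle, four_cycle],     # S4 itself
}

for d, gens in orders.items():
    assert len(generate(gens)) == d
```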
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
I am looking for a single-number evaluation method that can be used in multi-class classification tasks and that takes imbalanced data-sets into account. For instance, ROC-AUC is a single number and accounts for imbalance, but is only defined for binary classifiers. Accuracy is a single number and defined for multi-class classifiers, but does not account for imbalance. Finally, the confusion matrix is defined for multi-class and accounts for imbalance, but is not a single number. Is there any evaluation method that satisfies all three conditions?
How about a weighted log-loss?
Let's say we have $m$ classes $c_1, \dots, c_m$. We can give each class $c_i$ a weight $w_i$ which is inversely proportional to the percentage of the dataset that belongs to $c_i$. Then, the loss for some data set with actual classes $y = y_1, \dots, y_n$ and predictions $\hat{y} = \hat{y}_1, \dots, \hat{y}_n$ can be defined as
$$ \text{loss}(y, \hat{y}) = -\frac{1}{mn} \sum_{j=1}^n\sum_{i=1}^m w_i\, {I}_{(y_j = i)}\log(\hat{y}_{j,i}) $$

where ${I}_{(y_j = i)}$ is an indicator function which evaluates to 1 if $y_j = i$ and 0 otherwise, and $\hat{y}_{j,i}$ is the predicted probability that sample $j$ belongs to class $i$. (The leading minus sign is needed so that a perfect prediction gives a loss of 0 and worse predictions give larger values.)
One disadvantage is that it's not immediately obvious, given some value of the loss function, how good that value is. However, it is easy to compare two values (lower is better).
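A minimal NumPy sketch of this idea (the function name and the toy numbers are mine; the constant $1/m$ factor is dropped since it only rescales the loss):

```python
import numpy as np

def weighted_log_loss(y_true, y_prob, w):
    """Class-weighted cross-entropy, averaged over samples.

    y_true : (n,) integer labels in [0, m)
    y_prob : (n, m) predicted class probabilities (rows sum to 1)
    w      : (m,) class weights, e.g. inverse class frequencies
    """
    n = len(y_true)
    p_true = y_prob[np.arange(n), y_true]       # probability of the true class
    return -np.mean(w[y_true] * np.log(p_true))

# toy usage: class 1 is the rare class, so it gets the larger weight
y_true = np.array([0, 0, 1])
y_prob = np.array([[0.9, 0.1],
                   [0.8, 0.2],
                   [0.3, 0.7]])
w = np.array([0.5, 1.0])
loss = weighted_log_loss(y_true, y_prob, w)     # about 0.174
```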
In https://www.sciencedirect.com/science/article/pii/S004896971831163X?via%3Dihub, we used the product over classes of the sensitivity of class i (i.e., the ratio of data correctly classified for class i). This summarizes the values into a single index which ranges between 0 and 1 and is, to some extent, insensitive to data imbalance. The product induces a good balance between the errors committed in each category, because any single value appreciably below 1 reduces the overall performance significantly. To relax this, allowing a greater error in one or more categories, you can simply use (in increasing order of leniency): the minimum, the geometric mean, or the arithmetic mean. In https://ieeexplore.ieee.org/document/6940273/ and https://ieeexplore.ieee.org/abstract/document/5428802/ you can find these alternatives. Good luck. Rafael.
You can use the F1 score, and a simple way to get it is via the confusion matrix. Finding the F1 score for each class is straightforward: the correctly classified samples of a class count as its true positives, the samples of that class assigned to other classes count as false negatives, and samples of other classes assigned to it count as false positives. From these you can compute an F1 score per class. For more details take a look at F1-score per class for multi-class classification. You can also take a look at this implementation. |
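With scikit-learn this is a one-liner per variant; `average=None` returns the per-class F1 scores and `average="macro"` collapses them into a single number that weights every class equally (the toy labels below are mine):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

per_class = f1_score(y_true, y_pred, average=None)   # one F1 per class
macro = f1_score(y_true, y_pred, average="macro")    # single number; every
# class counts equally, so rare classes are not drowned out by common ones
```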
Here is an interesting picture with two arrangements of four shapes.
How can they make a different area with the same shapes?
This is a famous physical puzzle that can be tied to the fibonacci series.
To answer the question as posed, the issue is that the two slopes are different ($\frac25$ vs $\frac38$). Note that all those numbers are in the fibonacci series ($1,1,2,3,5,8,13,21,\ldots$).
Successive fractions are closer approximations to $\varphi$, alternating between above and below. Diagrams like this can be generated by making a square with sides equal to a number in the fibonacci series (in this question 8), then dividing it into two rectangles with widths of the two fibonacci numbers that make up the first one chosen (3 and 5).
Cut the smaller one down the diagonal, and cut the bigger one down the middle at a diagonal, such that the width of the diagonal cut is the next smallest number (2 in this case). Note that this will leave a trapezoid, whose small parallel size matches the original small rectangle's smaller side (3 in this case), and whose larger parallel size matches the original larger rectangle's smaller side (5 in this case).
Since $\frac25\approx\frac38$, and from the above constructions, the pieces can be rearranged into a rectangle (as shown), the area of which will always be one away from the original square, but will look approximately correct, since the slopes almost match.
Edit: Since this answer received so many up-votes (thank you!), I suppose people are very interested in it, so I thought I'd draw up a few images!
- 1, 1, 2, 3: $3\times3 = 9$ vs. $2\times5 = 10$
- 1, 2, 3, 5: $5\times5 = 25$ vs. $3\times8 = 24$
- 2, 3, 5, 8: $8\times8 = 64$ vs. $5\times13 = 65$ (the OP's example)
- 3, 5, 8, 13: $13\times13 = 169$ vs. $8\times21 = 168$
- 5, 8, 13, 21: $21\times21 = 441$ vs. $13\times34 = 442$

The diagram is misleading, as it hides a gap in the middle of the second configuration.
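The one-unit discrepancy in every row is Cassini's identity, $F_{n-1}F_{n+1} - F_n^2 = (-1)^n$; a quick sketch:

```python
# Cassini's identity: F(n-1) * F(n+1) - F(n)**2 == (-1)**n,
# which is exactly the unit of area gained or lost in each dissection.
def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for n in range(2, 9):
    assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n
```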
This is what we actually get if we rearrange the shapes in question. Notice that the diagonal “bows” slightly, leaving some extra space between the shapes – this is where the extra unit of area creeps in.
But you shouldn’t trust me any more than the person who drew the original picture!
As we see here, pictures can be misleading – so my diagram isn’t proof that the original diagram was wrong. This just gives an intuitive sense of where the extra space has come from.
For a proper proof, consider the gradients:
Since the gradients don’t match, we can’t arrange them side-by-side like this without some blank space between them. But because they’re close, the eye can be tricked into thinking they form a single continuous line, and doesn’t notice the slope on the triangle changing midway down.
The image on the right
cheats: the pieces don't actually fit together perfectly; there's a gap in between. To prove it, we can calculate the size of the gap, i.e. the area of a triangle formed by:
The area of this triangle can be calculated using Heron's formula:
$$ A = \sqrt{s(s-a)(s-b)(s-c)} $$
where
$$ s = \frac{1}{2}(a+b+c) $$
Substituting the values into the formula gives exactly $0.5$ for $A$. There are two such triangles, so that's a total of $1$, the expected discrepancy.
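As a cross-check of the Heron computation (my addition; the gap triangle's vertices $(0,0)$, $(5,2)$, $(13,5)$ are my reconstruction from the two mismatched slopes $\frac25$ and $\frac38$):

```python
from math import dist, sqrt

# Assumed gap-triangle vertices: they lie along the two slopes that the
# picture pretends are one straight line from (0, 0) to (13, 5).
p, q, r = (0, 0), (5, 2), (13, 5)

a, b, c = dist(p, q), dist(q, r), dist(r, p)   # side lengths
s = (a + b + c) / 2                            # semi-perimeter
area = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula

assert abs(area - 0.5) < 1e-9  # two such triangles: total missing area = 1
```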
It's a misleading diagram. In reality, the angles do not match up: the larger interior angle of the orange triangle is about 69.5 degrees, whereas it's about 68.2 degrees for the grey quadrilateral. (Correct me if I'm wrong; I'm dusting off my trig here.) In the diagram with area 65, the orange areas are actually quadrilaterals. If you look closely, you can see that they have a slight inflection where they meet the other orange section. So that extra area comes from expanding them just a bit.
The triangles don't have the same slope; you can see that the large diagonal line through the "larger" rectangle bends. It's covered up by the thick lines around the triangles, but there is a very thin hole that has a total area of one square - the same square that supposedly "appeared out of nowhere".
Simple answer:
Those shapes (in orange) at the right side of the picture are not triangles at all! They are two quadrilaterals, and thus they have area greater than visually expected. So there is no equality here: they are different shapes and thus have different total area.
The picture of the bottom rectangle is misleading, because it fools people into incorrectly assuming the width of the triangles to be exactly 3 units.
The real width can be easily calculated: it's a fraction of the total width, determined by the height of the point on the diagonal, namely $\frac{8}{13}$ of 5, i.e. $\frac{40}{13} \approx 3.0769$ (and not 3), q.e.d.
Lattice problems are a good source of candidates. Given a basis for a lattice $L$ in $R^n$, one can look for a nonzero lattice vector whose ($\ell_2$) norm is smallest possible; this is the 'Shortest Vector Problem' (SVP). Also, given a basis for $L$ and a point $t \in R^n$, one can ask for a lattice vector as close as possible to $t$; this is the 'Closest Vector Problem' (CVP).
Both problems are NP-hard to solve exactly. Aharonov and Regev showed that in (NP $\cap$ coNP), one can solve them to within an $O(\sqrt{n})$ factor:
http://portal.acm.org/citation.cfm?id=1089025
I've read the paper, and I don't think there's any hint from their work that one can do this in UP $\cup$ coUP, let alone UP $\cap$ coUP.
A technicality: as stated, these are search problems, so strictly speaking we have to be careful about what we mean when we say they're in a complexity class. Using a decisional variant of the approximation problem, the candidate decision problem we get is a
promise problem: given a lattice $L$, distinguish between the following two cases:
Case I: $L$ has a nonzero vector of norm $\leq 1$;
Case II: $L$ has no nonzero vector of norm $\leq C\sqrt{n}$. (for some constant $C > 0$)
This problem is in Promise-NP $\cap$ Promise-coNP, and might not be in either Promise-UP or Promise-coUP. But even assuming it's not in Promise-UP, this doesn't seem to yield an example of a problem in (NP $\cap$ coNP)$\setminus$UP. The difficulty stems from the fact that NP $\cap$ coNP is a semantic class. (By contrast, if we identified a problem in Promise-NP$\setminus$Promise-P, then we could conclude P$\neq$NP. This is because any NP machine solving a promise problem $\Pi$ also defines an NP language $L$ which is no easier than $\Pi$.) |
Let $A_m$ and $B_{ij}$ be matrices of size $M\times N$ where $m\in\{1,2\}$ and $N \times N$ where $i,j\in\{1,2\}$, respectively. Note that $m,i,j$ can range over larger index sets in general, but here they are restricted to $\{1,2\}$ so that the question is clearer and easier to follow.
I am wondering whether there is an alternative way to write/decompose the following block matrix more succinctly. I have had a look at the Khatri-Rao product and the Tracy-Singh product, but neither of them fits the block matrix I have. I'm looking for a kind of
block-wise multiplication of block matrices, although I am not sure whether this type of matrix multiplication exists.
The matrix I have is as follows:
$$ \begin{bmatrix} A_1B_{11}A_1^T & A_1B_{12}A_2^T\\ A_2B_{21}A_1^T & A_2B_{22}A_2^T \end{bmatrix} $$
I would be so grateful if anyone can suggest how this matrix can be decomposed or written more succinctly. |
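For what it's worth, one compact form that reproduces this block structure is $\operatorname{diag}(A_1, A_2)\, B \,\operatorname{diag}(A_1, A_2)^T$, where $B$ is the $2\times2$ block matrix of the $B_{ij}$. A numerical sanity check of this suggestion (my addition, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2, 3

# Random test blocks: A_m is M x N, B_ij is N x N.
A1, A2 = rng.standard_normal((M, N)), rng.standard_normal((M, N))
B = {(i, j): rng.standard_normal((N, N)) for i in (1, 2) for j in (1, 2)}

# The block matrix from the question, assembled block by block.
target = np.block([[A1 @ B[1, 1] @ A1.T, A1 @ B[1, 2] @ A2.T],
                   [A2 @ B[2, 1] @ A1.T, A2 @ B[2, 2] @ A2.T]])

# Candidate compact form: diag(A1, A2) * B * diag(A1, A2)^T.
D = np.block([[A1, np.zeros((M, N))],
              [np.zeros((M, N)), A2]])
Bfull = np.block([[B[1, 1], B[1, 2]],
                  [B[2, 1], B[2, 2]]])

assert np.allclose(target, D @ Bfull @ D.T)
```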
Electronic Research Announcements
ISSN / eISSN: 1935-9179
2013, Volume 20
Abstract:
Mixed volumes, which are the polarization of volume with respect to the Minkowski addition, are fundamental objects in convexity. In this note we announce the construction of mixed integrals, which are functional analogs of mixed volumes. We build a natural addition operation $\oplus$ on the class of quasi-concave functions, such that every class of $\alpha$-concave functions is closed under $\oplus$. We then define the mixed integrals, which are the polarization of the integral with respect to $\oplus$.
We proceed to discuss the extension of various classic inequalities to the functional setting. For general quasi-concave functions, this is done by restating those results in the language of rearrangement inequalities. Restricting ourselves to $\alpha$-concave functions, we state a generalization of the Alexandrov inequalities in their more familiar form.
Abstract:
Infinite determinantal measures introduced in this note are inductive limits of determinantal measures on an exhausting family of subsets of the phase space. Alternatively, an infinite determinantal measure can be described as a product of a determinantal point process and a convergent, but not integrable, multiplicative functional.
Theorem 4.1, the main result announced in this note, gives an explicit description for the ergodic decomposition of infinite Pickrell measures on the spaces of infinite complex matrices in terms of infinite determinantal measures obtained by finite-rank perturbations of Bessel point processes.
Abstract:
In this paper, we first introduce the notion of a Yetter-Drinfeld comodule algebra and give examples. Then we give the structure theorems of Yetter-Drinfeld comodule algebras. That is, if $L$ is a Yetter-Drinfeld Hopf algebra and $A$ is a right $L$-Yetter-Drinfeld comodule algebra, then there exists an algebra isomorphism between $A$ and $A^{coL} \mathbin{\sharp} H$, where $A^{coL}$ is the coinvariant subalgebra of $A$.
Abstract:
We study conformal invariants that arise from functions in the nullspace of conformally covariant differential operators. The invariants include nodal sets and the topology of nodal domains of eigenfunctions in the kernel of GJMS operators. We establish that on any manifold of dimension $n\geq 3$, there exist many metrics for which our invariants are nontrivial. We discuss new applications to curvature prescription problems.
Abstract:
The purpose of this note is to announce two results, Theorem A and Theorem B below, concerning geometric and algebraic properties of fat points in the complex projective plane. Their somewhat technical proofs are available in [10] and will be published elsewhere. Here we present only main ideas which are fairly transparent.
Abstract:
We propose an explicit formula for the Segre classes of monomial subschemes of nonsingular varieties, such as schemes defined by monomial ideals in projective space. The Segre class is expressed as a formal integral on a region bounded by the corresponding Newton polyhedron. We prove this formula for monomial ideals in two variables and verify it for some families of examples in any number of variables.
Abstract:
On any closed symplectic manifold of dimension greater than $ 2 $, we construct a pair of smooth functions, such that on the one hand, the uniform norm of their Poisson bracket equals to $ 1 $, but on the other hand, this pair cannot be reasonably approximated (in the uniform norm) by a pair of Poisson commuting smooth functions. This comes in contrast with the dimension $ 2 $ case, where by a partial case of a result of Zapolsky [13], an opposite statement holds.
Abstract:
We prove a generalization of Gromov's packing inequality to symplectic embeddings of the boundaries of two balls such that the bounded components of the complements of the image spheres are disjoint. Moreover, we define a capacity which measures the size of Weinstein tubular neighborhoods of Lagrangian submanifolds. In symplectic vector spaces this leads to bounds on the codisc radius for any closed Lagrangian submanifold in terms of Viterbo's isoperimetric inequality. Furthermore, we introduce the spherical variant of the relative Gromov radius and prove its finiteness for monotone Lagrangian tori in symplectic vector spaces.
Abstract:
We use the Hofer norm to show that all Hamiltonian diffeomorphisms with compact support in $\mathbb{R}^{2n}$ that displace an open connected set with a nonzero Hofer-Zehnder capacity move a point farther than a capacity-dependent constant. In $\mathbb{R}^2$, this result is extended to all compactly supported area-preserving homeomorphisms. Next, using the spectral norm, we show the result holds for Hamiltonian diffeomorphisms on closed surfaces. We then show that all area-preserving homeomorphisms of $S^2$ and $\mathbb{RP}^2$ that displace the closure of an open connected set of fixed area move a point farther than an area-dependent constant.
Abstract:
The purpose of this note is to announce certain basic results on the construction of a degeneration of ${\mathcal{M}}_{{{X_{k}}}}^{{H}}(n,d)$ as the smooth curve $X_{k}$ degenerates to an irreducible nodal curve with a single node.
Abstract:
We introduce a new approach for the computation of characteristic classes of singular toric varieties and, as an application, we obtain generalized Pick-type formulae for lattice polytopes. Many of our results (e.g., lattice point counting formulae) hold even more generally, for closed algebraic torus-invariant subspaces of toric varieties. In the simplicial case, by combining this new computation method with the Lefschetz-Riemann-Roch theorem, we give new proofs of several characteristic class formulae originally obtained by Cappell and Shaneson in the early 1990s.
Problem 10. (16 points) If
$$\sum_{a=1}^\infty\sum_{b=1}^\infty\frac{1}{a^4+4b^4}=\frac{\pi^p}{q},$$
then evaluate the integer value of $p+q$.
Announcement. This is the last CMC problem of the season. I expect to return in about two weeks.
Note by Cody Johnson 5 years, 10 months ago
$p = 4, q = 288 \Rightarrow p + q = 292$
I don't have a full answer because I couldn't prove one crucial step.
Because (I can't prove this) $\displaystyle \sum_{a=1}^\infty \frac{8b^4}{a^4+4b^4} = b\pi \coth(b\pi) - 1$,
$$\begin{aligned}
\sum_{a=1}^\infty \sum_{b=1}^\infty \frac{1}{a^4+4b^4}
&= \sum_{b=1}^\infty \frac{1}{8b^4}\bigl(b\pi\coth(b\pi) - 1\bigr) \\
&= \frac{\pi}{8}\left(\sum_{b=1}^\infty \frac{\coth(b\pi)}{b^3}\right) - \frac{1}{8}\left(\sum_{b=1}^\infty \frac{1}{b^4}\right) \\
&= \frac{\pi}{8}\left(\sum_{b=1}^\infty \frac{\frac{e^{2b\pi}+1}{e^{2b\pi}-1}}{b^3}\right) - \frac{1}{8}\left(\sum_{b=1}^\infty \frac{1}{b^4}\right) \\
&= \frac{\pi}{8}\left(\sum_{b=1}^\infty \frac{2+e^{2b\pi}-1}{b^3(e^{2b\pi}-1)}\right) - \frac{1}{8}\left(\sum_{b=1}^\infty \frac{1}{b^4}\right) \\
&= \frac{\pi}{8}\left(\sum_{b=1}^\infty \frac{1}{b^3} + \sum_{b=1}^\infty \frac{2}{b^3(e^{2b\pi}-1)}\right) - \frac{1}{8}\zeta(4) \\
&= \frac{\pi}{8}\left(\zeta(3) + \frac{7\pi^3}{180} - \zeta(3)\right) - \frac{1}{8}\zeta(4) \\
&= \frac{\pi}{8}\left(\frac{7\pi^3}{180} - \frac{\pi^3}{90}\right) = \frac{\pi^4}{288}
\end{aligned}$$
Note: from this, $\displaystyle \zeta(3) = \frac{7\pi^3}{180} - 2\sum_{n=1}^\infty \frac{1}{n^3(e^{2n\pi}-1)}$, and $\zeta(4) = \frac{\pi^4}{90}$.
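As a numerical sanity check on the closed form (my addition, not part of the original solution):

```python
from math import pi

# Truncate both sums at N; the 1/a^4 decay makes convergence fast, and
# the truncation error is roughly pi / (16 N^2), well below 1e-4 here.
N = 200
total = sum(1.0 / (a**4 + 4 * b**4)
            for a in range(1, N + 1) for b in range(1, N + 1))

assert abs(total - pi**4 / 288) < 1e-4  # pi^4 / 288 ≈ 0.338226
```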
A completion of the starting step with the Poisson summation formula:
Fix $b$ and consider $f(a) = \frac{1}{a^4 + 4b^4}$. First, we compute the (continuous) Fourier transform $$\hat{f}(c) = \int_{-\infty}^{\infty} e^{-2i\pi ct}\cdot \frac{1}{t^4 + 4b^4}\,dt$$ with a contour integral.
Denote the integrand as $g(t)$, i.e. $g(t) = e^{-2i\pi ct}\cdot \frac{1}{t^4 + 4b^4}$.
Suppose $c \geq 0$. Then if $\Im(t) \leq 0$, we have $\Re(-2i\pi ct) \leq 0$ and $|e^{-2i\pi ct}| \leq 1$. Consider the contour integral of $g$ around a large semicircle centered at the origin in the half-plane $\Im(t) \leq 0$ of the complex plane. Since $\frac{1}{t^4 + 4b^4}$ decays quickly enough when $t$ has large magnitude, as the semicircle's radius goes to infinity the integral over the arc of the semicircle goes to 0, so the contour integral converges to the integral over the real line, $\int_{\infty}^{-\infty} g(t)\,dt$. (Note the orientation, since we need to go around the semicircle counterclockwise.)
At the same time, the integral can be evaluated with the residue formula. Let $\omega = e^{i\pi/4}$, a primitive eighth root of unity. $g$ has four simple poles, at $t = \sqrt{2}\omega^k b$ for $k = 1, 3, 5, 7$; the relevant poles inside our contour are $\sqrt{2}\omega^5 b$ and $\sqrt{2}\omega^7 b$. The residues at those points can be evaluated in an L'Hôpital-esque manner to be:
$$\begin{aligned}
\operatorname{res}_{\sqrt{2}\omega^5 b} g &= e^{-2i\pi c(\sqrt{2}\omega^5 b)}\cdot \frac{1}{4(\sqrt{2}\omega^5 b)^3} = e^{2\pi bc(-1+i)}\cdot \frac{1+i}{16b^3} \\
\operatorname{res}_{\sqrt{2}\omega^7 b} g &= e^{-2i\pi c(\sqrt{2}\omega^7 b)}\cdot \frac{1}{4(\sqrt{2}\omega^7 b)^3} = e^{2\pi bc(-1-i)}\cdot \frac{-1+i}{16b^3}
\end{aligned}$$
Additionally supposing $c$ is an integer, we then have $e^{2\pi bci} = 1$ and can further simplify to
$$\begin{aligned}
\operatorname{res}_{\sqrt{2}\omega^5 b} g &= e^{-2\pi bc}\cdot \frac{1+i}{16b^3} \\
\operatorname{res}_{\sqrt{2}\omega^7 b} g &= e^{-2\pi bc}\cdot \frac{-1+i}{16b^3}
\end{aligned}$$
So,
$$\begin{aligned}
\int_{\infty}^{-\infty} g(t)\,dt &= 2\pi i\left(e^{-2\pi bc}\cdot \frac{1+i}{16b^3} + e^{-2\pi bc}\cdot \frac{-1+i}{16b^3}\right) = -\frac{\pi}{4b^3}\cdot e^{-2\pi bc} \\
\hat{f}(c) &= \frac{\pi}{4b^3}\cdot e^{-2\pi bc}
\end{aligned}$$
Note that since $f$ is even, $\hat{f}(c) = \hat{f}(-c)$. Now the Poisson summation formula can be evaluated as two geometric series.
$$\sum_{a=-\infty}^\infty f(a) = \sum_{c=-\infty}^\infty \hat{f}(c) = \frac{\pi}{4b^3}\left(\frac{1}{1 - e^{-2\pi b}} + \frac{e^{-2\pi b}}{1 - e^{-2\pi b}}\right) = \frac{\pi \coth(b\pi)}{4b^3}$$
Thus, again since $f$ is even,
$$\begin{aligned}
\sum_{a=1}^\infty f(a) &= \frac{1}{2}\left(\sum_{a=-\infty}^\infty f(a) - f(0)\right) \\
&= \frac{\pi \coth(b\pi)}{8b^3} - \frac{1}{8b^4} \\
&= \frac{1}{8b^4}\bigl(b\pi \coth(b\pi) - 1\bigr),
\end{aligned}$$
as desired.
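The resulting inner-sum identity can also be verified numerically for small $b$ (my own check with truncated sums, not part of the original post):

```python
from math import pi, cosh, sinh

def lhs(b, N=4000):
    # Partial sum of sum_{a>=1} 8 b^4 / (a^4 + 4 b^4); the tail is O(1/N^3).
    return sum(8 * b**4 / (a**4 + 4 * b**4) for a in range(1, N + 1))

def rhs(b):
    # b * pi * coth(b * pi) - 1, written with cosh/sinh.
    return b * pi * cosh(b * pi) / sinh(b * pi) - 1

for b in (1, 2, 3):
    assert abs(lhs(b) - rhs(b)) < 1e-6
```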
I have no idea how the parent post managed to continue from here, or how Sophie-Germain can help, however.
mind = blown, 8 points for you
@Cody Johnson – What are you talking about? This answer easily deserve 1000 points!
Now, how do you solve it using Sophie-Germain Identity?
@Pi Han Goh – I used the Sophie-Germain Identity to do partial fraction decomposition and used complex numbers to arrive there.
@Cody Johnson – Really? That's possible? I got stuck halfway and can't continue.
If it's better than Brian Chen's answer, that is if you don't need to use Poisson summation formula, I would like to know...
@Pi Han Goh – Use partial fractions and the reflection formula for the polygamma function to prove that.
Excellent progress, 7 points. Can you finish the proof using the hint?
Has anyone proven the first step? If not, I'd like to take a shot.
how do you write the solution
Do I really need the knowledge of Fourier to really understand or maybe Riemann Zeta Function?
You should probably clear up any ambiguity relating to the fact that you used a,ba, ba,b as dummy variables and also in the answer form.
Fixed.
Hint: this problem is related to this problem.
Cody, I want to ask: how can I submit my solution, using all the math notation? Please help.
Just post your solution to this thread.
Hint #2: Sophie-Germain Identity
I'm still waiting for an explanation of how this helps, and I bet I'm not the only one ;-)
See my comment under Pi Han Goh's comment.
$p + q = 292$
I'll award 2 points for this answer, considering the magnitude of this problem. But the solution's where it's at.
I've always wondered this: what does the notation mean when you have two sigmas next to each other? Does it mean that they both start at 1, then both go to 2, then both go to 3? Or does it mean the sum with a = 1 and b = 1 to infinity, then the sum with a = 2 and b = 1 to infinity, then the sum with a = 3 and b = 1 to infinity, etc.?
From this problem I think it's the latter, or else it would just be stated as $\displaystyle \sum_{a = 1}^{\infty} \frac{1}{5a^4}$.
$\displaystyle \sum_{a=1}^m \sum_{b=1}^n f(a, b)$ means $\displaystyle \sum_{a=1}^m \left(\sum_{b=1}^n f(a, b)\right)$, or
$$(f(1, 1) + f(1, 2) + \ldots + f(1, n)) + (f(2, 1) + f(2, 2) + \ldots + f(2, n)) + \ldots + (f(m, 1) + f(m, 2) + \ldots + f(m, n)).$$
In other words, we evaluate the inner sums first, taking outside variables as constant.
Note that (in this example at least) we could switch the order of the summation symbols, to get $\displaystyle \sum_{b=1}^n \sum_{a=1}^m f(a, b)$ or
$$(f(1, 1) + f(2, 1) + \ldots + f(m, 1)) + (f(1, 2) + f(2, 2) + \ldots + f(m, 2)) + \ldots + (f(1, n) + f(2, n) + \ldots + f(m, n)).$$
See how these two are actually the same?
Be careful with limits to infinity. You cannot just interchange the order of summation, as that can affect the sum itself. Analysis deals with this, and one of the results is that if the sequence converges absolutely to a finite value, then we can rearrange the terms and it will still converge to the same (finite) value.
For fun, take the sequence $\frac{(-1)^i}{i}$ and rearrange terms to get it to converge to any value that you wish. The absolute value of this sequence is the harmonic sequence $\frac{1}{i}$, which sums to infinity.
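This rearrangement trick can be made concrete with a greedy algorithm (my sketch, assuming the alternating harmonic series): take positive terms while below the target, negative terms while above.

```python
def rearranged_partial_sum(target, steps=100_000):
    """Greedily reorder the terms of sum (-1)^(i+1)/i so the partial
    sums approach an arbitrary target (Riemann rearrangement)."""
    pos, neg = 1, 2   # next unused odd (positive) and even (negative) index
    s = 0.0
    for _ in range(steps):
        if s < target:
            s += 1.0 / pos   # terms +1/1, +1/3, +1/5, ...
            pos += 2
        else:
            s -= 1.0 / neg   # terms -1/2, -1/4, -1/6, ...
            neg += 2
    return s

# The same terms, reordered, approach whatever value we like.
assert abs(rearranged_partial_sum(0.0) - 0.0) < 1e-3
assert abs(rearranged_partial_sum(2.0) - 2.0) < 1e-3
```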
And if you're a computer-sciency type of person, the first way to write it is the same as
int S = 0;
for (int a = 1; a <= m; a++)
    for (int b = 1; b <= n; b++)
        S += f(a, b);
return S;
and the second way is the same as
int S = 0;
for (int b = 1; b <= n; b++)
    for (int a = 1; a <= m; a++)
        S += f(a, b);
return S;
which both return the same number.
what is this equation
Differential and Integral Equations, Volume 13, Number 7-9 (2000), 1025-1038.

Stable transition layers in a balanced bistable equation

Abstract
This paper is concerned with the existence of steady-state solutions for $$ \left\{ \begin{array}{ll} u_t = \epsilon^2 u_{xx} - (u-a(x))(u-b(x))(u-c(x))\quad & \mbox{in}~(0,1)\times(0,\infty),\\ u_x(0,t) = u_x(1,t) = 0\quad & \mbox{in}~(0,\infty). \end{array} \right. $$ Here $a, b$ and $c$ are $C^2$-functions satisfying $b = (a+c)/2$ and $c > a$. By using upper and lower solutions it is proved that there exist stable steady states with transition layers near any points where $c(x)-a(x)$ has its local minimum.
Article information
Source: Differential Integral Equations, Volume 13, Number 7-9 (2000), 1025-1038.
First available in Project Euclid: 21 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.die/1356061208
Mathematical Reviews number (MathSciNet): MR1775244
Zentralblatt MATH identifier: 0981.34011
Subjects: Primary: 34E15 (singular perturbations, general theory). Secondary: 34B15 (nonlinear boundary value problems), 35B25 (singular perturbations), 35B40 (asymptotic behavior of solutions), 35J60 (nonlinear elliptic equations), 35K57 (reaction-diffusion equations).
Citation
Nakashima, Kimie. Stable transition layers in a balanced bistable equation. Differential Integral Equations 13 (2000), no. 7-9, 1025--1038. https://projecteuclid.org/euclid.die/1356061208 |
We model the expansion history of the Universe as a Gaussian process and find constraints on the dark energy density and its low-redshift evolution using distances inferred from the Luminous Red Galaxy (LRG) and Lyman-alpha (Ly$\alpha$) datasets of the Baryon Oscillation Spectroscopic Survey, supernova data from the Joint Light-curve Analysis (JLA) sample, Cosmic Microwave Background (CMB) data from the Planck satellite, and local measurement of the Hubble parameter from the Hubble Space Telescope ($\mathsf H0$). Our analysis shows that the CMB, LRG, Ly$\alpha$, and JLA data are consistent with each other and with a $\Lambda$CDM cosmology, but the ${\mathsf H0}$ data is inconsistent at moderate significance. Including the presence of dark radiation does not alleviate the ${\mathsf H0}$ tension in our analysis. While some of these results have been noted previously, the strength here lies in that we do not assume a particular cosmological model. We calculate the growth of the gravitational potential in General Relativity corresponding to these general expansion histories and show that they are well-approximated by $\Omega_{\rm m}^{0.55}$ given the current precision. We assess the prospects for upcoming surveys to measure deviations from $\Lambda$CDM using this model-independent approach.
We incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties, in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the Fermi Gamma-Ray Space Telescope. The allowed ranges of particle annihilation rates and masses expand when these unknowns are included. However, empirical determinations of the Milky Way halo's local density and density profile place the signal region in considerable tension with dark matter annihilation searches from combined dwarf galaxy analyses. The GCE and dwarf tension can be alleviated if: one, the halo is extremely concentrated or strongly contracted; two, the dark matter annihilation signal differentiates between dwarfs and the Galactic Center; or, three, local stellar density measures are found to be significantly lower, like that from recent stellar counts, pushing up the local dark matter density.
The Milky Way's Galactic Center harbors a gamma-ray excess that is a candidate signal of annihilating dark matter. Dwarf galaxies remain predominantly dark in their expected commensurate emission. We quantify the degree of consistency between these two observations through a joint likelihood analysis. In doing so, we incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties, in dark matter annihilation models for the Galactic Center Extended gamma-ray excess (GCE) detected by the {\em Fermi Gamma-Ray Space Telescope}. The preferred range of annihilation rates and masses expands when including these unknowns. Even so, using two recent determinations of the Milky Way halo's local density leaves the GCE preferred region of single-channel dark matter annihilation models in strong tension with annihilation searches in combined dwarf galaxy analyses. A third, higher Milky Way density determination alleviates this tension. This joint likelihood analysis allows us to quantify this inconsistency. As an example, we test a representative inverse-Compton-sourced self-interacting dark matter model, which is consistent with both the GCE and dwarfs.
Self-interacting dark matter (SIDM) models have been proposed to solve the small-scale issues with the collisionless cold dark matter (CDM) paradigm. We derive equilibrium solutions in these SIDM models for the dark matter halo density profile including the gravitational potential of both baryons and dark matter. Self-interactions drive dark matter to be isothermal and this ties the core sizes and shapes of dark matter halos to the spatial distribution of the stars, a radical departure from previous expectations and from CDM predictions. Compared to predictions of SIDM-only simulations, the core sizes are smaller and the core densities are higher, with the largest effects in baryon-dominated galaxies. As an example, we find a core size around 0.3 kpc for dark matter in the Milky Way, more than an order of magnitude smaller than the core size from SIDM-only simulations, which has important implications for indirect searches of SIDM candidates. |
Mathematical Control & Related Fields
ISSN: 2156-8472, eISSN: 2156-8499
March 2014, Volume 4, Issue 1
Abstract:
We address in this work the null controllability problem for a linear heat equation with delay parameters. The control is exerted on a subdomain and we show how the global Carleman estimate due to Fursikov and Imanuvilov can be applied to derive results in this direction.
Abstract:
We present a controllability result for a class of linear parabolic systems of $3$ equations. We establish a global Carleman estimate for the solutions of systems of $2$ parabolic equations coupled with first order terms. Stability results for inverse coefficients problems are deduced.
Abstract:
These notes are intended to be tutorial material revisiting, in an almost self-contained way, some control results for the Korteweg-de Vries (KdV) equation posed on a bounded interval. We address the topics of boundary controllability and internal stabilization for this nonlinear control system. Concerning controllability, homogeneous Dirichlet boundary conditions are considered and a control is put on the Neumann boundary condition at the right end-point of the interval. We show the existence of some critical domains for which the linear KdV equation is not controllable. In spite of that, we prove that in these cases the nonlinearity gives exact controllability. Regarding stabilization, we study the problem where all the boundary conditions are homogeneous. We add an internal damping mechanism in order to force the solutions of the KdV equation to decay exponentially to the origin in the $L^2$-norm.
Abstract:
We consider a hybrid system coupling an elastic string with a rigid body at one end, and we study the existence of an almost periodic solution when an almost periodic force $f$ acts on the body. The weak dissipation of the system does not allow us to show the relative compactness of the trajectories, which generally implies the existence of such solutions. Instead, we use Fourier analysis to show that the existence or not of almost periodic solutions depends on the regularity and the exponents of the almost periodic nonhomogeneous term $f$.
Abstract:
We give algebraic characterizations of the properties of autonomy and of controllability of behaviours of spatially invariant dynamical systems, consisting of distributional solutions $w$, that are periodic in the spatial variables, to a system of partial differential equations $$ M\left(\frac{\partial}{\partial x_1},\cdots, \frac{\partial}{\partial x_d} , \frac{\partial}{\partial t}\right) w=0, $$ corresponding to a polynomial matrix $M\in ({\mathbb{C}}[\xi_1,\dots, \xi_d, \tau])^{m\times n}$.
This week
Several papers caught my eye this week, but I'll be discussing only Efficient Exploration with Self-Imitation Learning via Trajectory-Conditioned Policy in more depth. I'm choosing this paper because, as happens sometimes, I had this idea myself a few weeks ago. It's especially exciting to see something you suspected might improve the world fleshed out and vindicated.
This is the basic form of my shower-thought idea:
This paper investigates the imitation of diverse past trajectories and how that leads [to] further exploration and avoids getting stuck at a sub-optimal behavior. Specifically, we propose to use a buffer of the past trajectories to cover diverse possible directions. Then we learn a trajectory-conditioned policy to imitate any trajectory from the buffer, treating it as a demonstration. After completing the demonstration, the agent performs random exploration.
The problem¶
The main problem the authors want to solve is insufficient exploration leading to a sub-optimal policy. If you don't explore your environment enough, you will find local rewards, but miss globally optimal rewards. In this maze (their Figure 1), you can see that an agent that fails to explore will collect two apples in the next room, but may miss acquiring the key, unlocking the door, collecting an apple, and discovering the treasure.
In the notoriously difficult Atari game (for RL agents) Montezuma's Revenge, it is similarly extremely unlikely that random exploration suffices to explore the environment and achieve a high score. The authors report state-of-the-art performance without expert demonstrations on Montezuma's Revenge, netting 25k points.
SOTA without demonstrations¶
So, more precisely, how did they achieve this, and why does it work?
The main idea of our method is to maintain a buffer of diverse trajectories collected during training and to train a trajectory-conditioned policy by leveraging reinforcement learning and supervised learning to roughly follow demonstration trajectories sampled from the trajectory buffer. Therefore, the agent is encouraged to explore beyond various visited states in the environment and gradually push its exploration frontier further... We name our method as Diverse Trajectory-conditioned Self-Imitation Learning (DTSIL).
The trajectory buffer¶
Their trajectory buffer $\mathcal{D}$ contains $N$ 3-tuples $\{\left(e^{(1)}, \tau^{(1)}, n^{(1)}\right), \left(e^{(2)}, \tau^{(2)}, n^{(2)}\right), \ldots \left(e^{(N)}, \tau^{(N)}, n^{(N)}\right) \}$ where $e^{(i)}$ is a high-level state representation, $\tau^{(i)}$ is the shortest trajectory achieving the highest reward and arriving at $e^{(i)}$, and $n^{(i)}$ is the number of times $e^{(i)}$ has been encountered. Whenever they roll out a new episode, they check each high-level state representation encountered against those in $\mathcal{D}$, incrementing $n^{(i)}$ on a match (or adding a new entry), and replacing $\tau^{(i)}$ whenever the new trajectory is better.
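Here is a minimal sketch of what that bookkeeping might look like, together with the count-based sampling rule the post describes next. All names, the dict layout, and the tie-breaking details are my assumptions, not taken from the paper.

```python
import math
import random

# Hypothetical sketch of a DTSIL-style trajectory buffer. Each entry maps a
# high-level state embedding e to the best trajectory reaching it (highest
# return, with shorter length breaking ties) and a visit count n.

def update_buffer(buffer, embedding, trajectory, ret):
    """Record a visit to `embedding` via `trajectory` achieving return `ret`."""
    entry = buffer.get(embedding)
    if entry is None:
        buffer[embedding] = {"traj": trajectory, "ret": ret, "n": 1}
        return
    entry["n"] += 1
    # Replace the stored trajectory only if the new one is strictly better:
    # higher return, or equal return but fewer steps.
    if ret > entry["ret"] or (ret == entry["ret"] and len(trajectory) < len(entry["traj"])):
        entry["traj"], entry["ret"] = trajectory, ret

def sample_demo(buffer, rng=random):
    """Sample a stored trajectory with weight 1/sqrt(n), favouring rare states."""
    keys = list(buffer)
    weights = [1.0 / math.sqrt(buffer[k]["n"]) for k in keys]
    return buffer[rng.choices(keys, weights=weights, k=1)[0]]["traj"]

buffer = {}
update_buffer(buffer, "room1", ["a", "b", "c"], 5.0)
update_buffer(buffer, "room1", ["a", "c"], 5.0)  # same return, shorter: replaces
```

A state visited 100 times gets weight $1/10$ versus weight $1$ for a state visited once, so demonstrations ending at the exploration frontier are drawn far more often than well-trodden ones.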
Sampling¶
When training their trajectory-conditioned policy, they sample each 3-tuple with weight $\frac{1}{\sqrt{n^{(i)}}}$. Notice that this causes less frequently visited states to be sampled more often, encouraging exploration.
Imitation reward¶
Given a trajectory $g$ sampled from the buffer, and during interaction with the environment, the agent receives a positive reward if the current state has an embedding within some $\Delta t$ of the current timestep in $g$. Otherwise the imitation reward is 0. Once it reaches the end of $g$, there is no further imitation reward, and it explores randomly. The imitation reward is one of two components of the $r^{DTSIL}_{t}$ RL reward, where the other is a simple monotonic function of the reward received at each timestep.
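A hedged sketch of that reward scheme, assuming a pointer into the demonstration that advances on each match; the function name, window handling, and bonus value are my own illustration, not the paper's constants:

```python
# Sketch of the imitation reward: the agent keeps a pointer into the sampled
# demonstration g and earns a bonus when its current embedding matches g
# within a window of delta_t steps; once past the end of g, the reward is 0
# and the agent explores freely.

def imitation_reward(g, pointer, embedding, delta_t=2, bonus=0.1):
    """Return (reward, new_pointer) for one environment step."""
    if pointer >= len(g):
        return 0.0, pointer  # demonstration finished: free exploration
    window = g[pointer:pointer + delta_t]
    if embedding in window:
        # advance the pointer just past the matched demonstration step
        return bonus, pointer + window.index(embedding) + 1
    return 0.0, pointer

g = ["e0", "e1", "e2"]
r, p = imitation_reward(g, 0, "e1")  # "e1" is inside the first window
```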
Policy architecture¶
The DTSIL policy architecture is recurrent and attentional, inspired by machine translation!
Inspired by neural machine translation methods, the demonstration trajectory is the source sequence and the incomplete trajectory of the agent’s state representations is the target sequence. We apply a recurrent neural network and an attention mechanism to the sequence data to predict actions that would make the agent to follow the demonstration trajectory.
RL objective¶
DTSIL is trained using a policy gradient algorithm (PPO, in their experiments), with the RL loss$$\mathcal L^{RL} = {\mathbb{E}}_{\pi_\theta} [-\log \pi_\theta(a_t|e_{\leq t}, o_t, g) \widehat{A}_t]$$
where $$\widehat{A}_t=\sum^{n-1}_{d=0} \gamma^{d}r^\text{DTSIL}_{t+d} + \gamma^n V_\theta(e_{\leq t+n}, o_{t+n}, g) - V_\theta(e_{\leq t}, o_t, g)$$
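The advantage estimate above is a standard n-step bootstrap. A small helper makes the arithmetic concrete (illustrative only; the real estimator conditions $V_\theta$ on $(e_{\leq t}, o_t, g)$, which I collapse to scalar value estimates here):

```python
# n-step advantage from the formula above:
# A_t = sum_{d=0}^{n-1} gamma^d * r_{t+d} + gamma^n * V(t+n) - V(t)

def n_step_advantage(rewards, v_t, v_tn, gamma=0.99):
    """rewards: r_t ... r_{t+n-1}; v_t, v_tn: value estimates at t and t+n."""
    n = len(rewards)
    discounted = sum(gamma ** d * r for d, r in enumerate(rewards))
    return discounted + gamma ** n * v_tn - v_t

# Two-step case with gamma=1 for easy checking: 1 + 1 + 0.5 - 0.5 = 2.0
adv = n_step_advantage([1.0, 1.0], v_t=0.5, v_tn=0.5, gamma=1.0)
```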
SL objective¶
In each parameter optimization step, they also include a supervised loss designed to maximize the log probability of taking an action that imitates the chosen demonstration exactly, to better leverage a past trajectory $g$.$$\mathcal L^\text{SL} = - \log \pi_\theta(a_t|e_{\leq t}, o_t, g) \text{, where } g = \{e_0, e_1, \cdots, e_{|g|}\}$$
Optimization¶
The final parameter update is thus$$\theta \gets \theta - \eta \nabla_\theta (\mathcal{L}^\text{RL}+\beta \mathcal{L}^\text{SL})$$
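A toy scalar version of that update makes the weighting explicit; $\eta$ and $\beta$ values here are arbitrary stand-ins, not the paper's hyperparameters:

```python
# theta <- theta - eta * grad(L_RL + beta * L_SL), with scalar "gradients"
# standing in for the backpropagated gradients of the two losses.

def combined_step(theta, grad_rl, grad_sl, eta=0.1, beta=0.5):
    return theta - eta * (grad_rl + beta * grad_sl)

# 1.0 - 0.1 * (2.0 + 0.5 * 4.0) = 0.6
theta = combined_step(1.0, grad_rl=2.0, grad_sl=4.0)
```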
Parting thoughts¶
I love seeing methods developed for generative language models used in another context entirely, to generate another kind of sequence. I'm overjoyed that it worked well. They need a high-level embedding for two reasons: first, because storing entire trajectories exactly in memory is expensive; and second, because it's quite difficult to re-execute a previously-encountered trajectory exactly, so for this method to work at all it's important that an approximate re-execution be possible.
ISSN:
1556-1801
eISSN:
1556-181X
Networks & Heterogeneous Media
June 2016 , Volume 11 , Issue 2
Special issue on contemporary topics in conservation laws
Abstract:
During the last 20 years the theory of Conservation Laws underwent a dramatic development.
Networks and Heterogeneous Media is dedicating two consecutive Special Issues to this topic. Researchers belonging to some of the major schools in this subject contribute to these two issues, offering a view on the current state of the art, as well as pointing to new research themes within areas already exposed to more traditional methodologies.
For more information please click the “Full Text” above.
Abstract:
We revisit the Cauchy-Dirichlet problem for degenerate parabolic scalar conservation laws. We suggest a new notion of strong entropy solution. It gives a straightforward explicit characterization of the boundary values of the solution and of the flux, and leads to a concise and natural uniqueness proof, compared to the one of the fundamental work [J. Carrillo, Arch. Ration. Mech. Anal., 1999]. Moreover, general dissipative boundary conditions can be studied in the same framework. The definition makes sense under the specific weak trace-regularity assumption. Despite the lack of evidence that generic solutions are trace-regular (especially in space dimension larger than one), the strong entropy formulation may be useful for modeling and numerical purposes.
Abstract:
This paper presents an approximation of a Cauchy problem for Friedrichs' systems under convex constraints. The strong convergence in $L^2_{\text{loc}}$ of a parabolic-relaxed approximation towards the unique constrained solution is proved.
Abstract:
Two-dimensional Keller--Segel models for the chemotaxis with fractional (anomalous) diffusion are considered. Criteria for blowup of solutions in terms of suitable Morrey spaces norms are derived. Similarly, a criterion for blowup of solutions in terms of the radial initial concentrations, related to suitable Morrey spaces norms, is shown for radially symmetric solutions of chemotaxis in several dimensions. Those conditions are, in a sense, complementary to the ones guaranteeing the global-in-time existence of solutions.
Abstract:
We study bounded solutions for a multidimensional conservation law coupled with a power $s\in (0,1)$ of the Dirichlet Laplacian acting in a domain. If $s \leq 1/2$ then the study centers on the concept of entropy solutions, for which existence and uniqueness are proved to hold. If $s >1/2$ then the focus is rather on the $C^\infty$-regularity of weak solutions. This kind of result is known in $\mathbb{R}^N$ but perhaps not so much in domains. The extension given here relies on an abstract spectral approach, which would also allow many other types of nonlocal operators.
Abstract:
We establish new interaction estimates for a system introduced by Baiti and Jenssen. These estimates are pivotal to the analysis of the wave front-tracking approximation. In a companion paper we use them to construct a counter-example which shows that Schaeffer's Regularity Theorem for scalar conservation laws does not extend to systems. The counter-example we construct shows, furthermore, that a wave-pattern containing infinitely many shocks can be robust with respect to perturbations of the initial data. The proof of the interaction estimates is based on the explicit computation of the wave fan curves and on a perturbation argument.
Abstract:
We consider the Kawahara-Korteweg-de Vries equation, which contains nonlinear dispersive effects. We prove that as the dispersion parameter tends to zero, the solutions of the dispersive equation converge to discontinuous weak solutions of the Burgers equation. The proof relies on deriving suitable a priori estimates together with an application of the compensated compactness method in the $L^p$ setting.
Abstract:
The aim of this short note is twofold. First, we give a sketch of the proof of a recent result proved by the authors in the paper [7] concerning existence and uniqueness of renormalized solutions of continuity equations with unbounded damping coefficient. Second, we show how the ideas in [7] can be used to provide an alternative proof of the result in [6,9,12], where the usual requirement of boundedness of the divergence of the vector field has been relaxed to various settings of exponentially integrable functions.
Abstract:
We consider two compressible immiscible fluids in one space dimension and in the isentropic approximation. The first fluid is surrounded by and in contact with the second one. As the sound speed of the first fluid diverges to infinity, we present the proof of rigorous convergence for the fully nonlinear compressible to incompressible limit of the coupled dynamics of the two fluids. A linear example is considered in detail, where fully explicit computations are possible.
Abstract:
In this paper, we discuss the total variation bound for the solution of scalar conservation laws with discontinuous flux. We prove the smoothing effect of the equation, forcing the solution to be $BV_{loc}$ near the interface for $L^\infty$ initial data, without the assumption of uniform convexity of the fluxes made in [1,21]. The proof relies on the method of characteristics and explicit formulas.
Abstract:
We propose a new sufficient non-degeneracy condition for the strong precompactness of bounded sequences satisfying the nonlinear first-order differential constraints. This result is applied to establish the decay property for periodic entropy solutions to multidimensional scalar conservation laws.
Use that $X$ is connected if and only if the only continuous functions $f:X\to\{0,1\}$ are constant, where $\{0,1\}$ is endowed with the discrete topology.
Now, you know each $F$ in $\mathscr F$ is connected. Consider $f:\bigcup \mathscr F\to\{0,1\}$, $f$ continuous.
Take $\alpha \in\bigcap\mathscr F$. Look at $f(\alpha)$, and at the restriction $f\mid_{F}: F\to\{0,1\}$ for any $F\in\mathscr F$.
Since you mention metric spaces, I am not sure if you know about the first thing I mention, so let's prove it:
THM Let $(X,\mathscr T)$ be a metric (or more generally, a topological) space. Then $X$ is connected if and only if whenever $f:X\to\{0,1\}$ is continuous, it is constant. The space $\{0,1\}$ is endowed with the discrete metric (topology), that is, the open sets are $\varnothing,\{0\},\{1\},\{0,1\}$.
Proof. First, suppose $X$ is disconnected, say by $A,B$, so $A\cup B=X$ and $A\cap B=\varnothing$, with $A,B$ open and nonempty. Define $f:X\to\{0,1\}$ by $$f(x)=\begin{cases}1& \; ; x\in A\\0&\; ; x\in B\end{cases}$$
Then $f$ is continuous because $f^{-1}(G)$ is open for any open $G$ in $\{0,1\}$ (this is simply a case by case verification), yet it is not constant. Now suppose $f:X\to\{0,1\}$ is continuous but not constant. Set $A=\{x:f(x)=1\}=f^{-1}(\{1\})$ and $B=\{x:f(x)=0\}=f^{-1}(\{0\})$. By hypothesis, $A,B\neq \varnothing$. Moreover, both are open, since they are the preimage of open sets under a continuous map, and $A\cup B=X$ and $A\cap B=\varnothing$. Thus $X$ is disconnected. $\blacktriangle$
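For completeness, here is a sketch of how the hint at the top assembles, using this theorem, into a proof that a union of connected sets with a common point is connected:

```latex
\textbf{Claim.} If every $F \in \mathscr F$ is connected and
$\bigcap \mathscr F \neq \varnothing$, then $\bigcup \mathscr F$ is connected.

\textit{Proof sketch.} Let $f : \bigcup \mathscr F \to \{0,1\}$ be continuous
and fix $\alpha \in \bigcap \mathscr F$. For each $F \in \mathscr F$ the
restriction $f\mid_F : F \to \{0,1\}$ is continuous, and since $F$ is
connected the theorem gives that $f\mid_F$ is constant; as $\alpha \in F$,
that constant value is $f(\alpha)$. Hence $f(x) = f(\alpha)$ for every
$x \in \bigcup \mathscr F$, so every continuous $f$ into $\{0,1\}$ is
constant, and by the theorem $\bigcup \mathscr F$ is connected. $\blacksquare$
```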