Show that $\sum\limits_{n = 1}^\infty x_n\sin(nx)$ converges uniformly
iff $nx_n \to 0$ as $n \to\infty$, where $(x_n)$ is a decreasing sequence with $x_n>0$ for all $n=1,2,\cdots$.
My attempt was the M-test. I took $M_n = nx_n$. Since $|\sin(nx)|$ is bounded by $1$, we have $|x_n\sin(nx)| \le x_n \le M_n$, so if $\sum M_n$ converges, then $\sum x_n\sin(nx)$ converges uniformly by the theorem.
My problem is that the question asks for an "iff". If $\sum M_n$ converges, we know $|M_n|\to 0$. But I could not see how to show the opposite direction: if $|M_n|\to 0$ we cannot be sure whether $\sum M_n$ converges (consider the harmonic series). Could you help me with that?
1. Let's begin by studying whether $(a_n)$ is increasing or decreasing.
Note that $\forall n\in\Bbb N^*$, if $a_n\ne 0$:$$a_{n+1} - a_n = \dfrac{1}{a_n} - \dfrac{a_n}2 = \dfrac{2-a_n^2}{2a_n}\;\;\;(\star)$$
Thus, we have to see if this is positive or negative. To do this, we need to see if $a_n>0$ or not, and if $2-a_n^2\ge 0$ or not (i.e. what's the position of $a_n$ with respect to $0$ and to $\pm\sqrt{2}$). Since $a_1=2>\sqrt{2}$ and $a_2 = \dfrac32>\sqrt{2}$, we may hope that $\forall n\in\Bbb N^*, a_n\ge \sqrt{2}$. Let's try to prove it by induction:
$a_1 = 2>\sqrt{2}$; if we assume that for a given $n\ge 1$, $a_n\ge \sqrt{2}$, then$$a_{n+1} - \sqrt{2} = \dfrac{1}{a_n} + \dfrac{a_n}2 - \sqrt{2} = \dfrac{2+a_n^2 - 2\sqrt{2}a_n}{2a_n} = \dfrac{(a_n - \sqrt{2})^2}{2a_n}\ge 0$$since $a_n\ge \sqrt{2}> 0$. Thus $\forall n\in\Bbb N^*,\,a_n\ge \sqrt{2}\;\;(\triangle)$.
Thus, using $(\star)$, we can see that $\forall n\in\Bbb N^*,\,2-a_n^2\le 0$ (since $\forall n\in\Bbb N^*,\,a_n\ge \sqrt{2}$) and $a_n\ge 0$. This shows $a_{n+1}-a_n\le 0$ and $(a_n)$ is non-increasing.
2. Now, we also know that $(a_n)$ is bounded from below by $\sqrt{2}$ and non-increasing. Thus, it's a convergent sequence.
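As a numerical sanity check (assuming the recurrence under discussion is $a_{n+1} = \frac{1}{a_n} + \frac{a_n}{2}$ with $a_1 = 2$, which matches the algebra above), a few iterations in Python settle at $\sqrt{2}$:

```python
import math

# Recurrence from the solution above: a_{n+1} = 1/a_n + a_n/2, a_1 = 2.
a = 2.0
for _ in range(10):
    a = 1.0 / a + a / 2.0

print(a)                      # approaches sqrt(2)
print(abs(a - math.sqrt(2)))  # very small
```

The convergence is quadratic, since $a_{n+1} - \sqrt{2} = (a_n-\sqrt{2})^2/(2a_n)$, so ten iterations are far more than enough.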
Problem 718
Let
\[ A= \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix} . \] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
Compute the determinant of $A$.
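As a quick check, a cofactor expansion in Python (the helper `det3` is mine) gives the determinant of this magic square:

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[8, 1, 6],
     [3, 5, 7],
     [4, 9, 2]]
print(det3(A))  # -360
```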
Problem 686
In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not.
(a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$.
(b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$.

Problem 582
A square matrix $A$ is called
nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample.

Problem 571
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.
Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 0 \\ a \\ 2 \end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of \[A=\begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 &1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.

Problem 6. Consider the system of linear equations \begin{align*} 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. \end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
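A quick check of Problem 6 in Python; the `Fraction`-based $2\times 2$ inverse helper is mine, not part of the exam:

```python
from fractions import Fraction

# Problem 6: coefficient matrix and right-hand side.
A = [[Fraction(3), Fraction(2)],
     [Fraction(5), Fraction(3)]]
b = [Fraction(1), Fraction(2)]

# 2x2 inverse: A^{-1} = (1/det) * [[d, -b], [-c, a]].
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

# Solution x = A^{-1} b.
x = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]
print(x)  # [Fraction(1, 1), Fraction(-1, 1)], i.e. x1 = 1, x2 = -1
```

Here $\det A = -1$, so $A^{-1}=\begin{bmatrix}-3&2\\5&-3\end{bmatrix}$ and the solution is $(x_1,x_2)=(1,-1)$.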
(Linear Algebra Midterm Exam 1, the Ohio State University)

Problem 546
Let $A$ be an $n\times n$ matrix.
The $(i, j)$
cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\operatorname{Adj}(A)=C^{\mathrm{T}}$.
The matrix $\Adj(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\operatorname{Adj}(A).\]
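Assuming the standard adjugate formula $A^{-1} = \frac{1}{\det(A)}\operatorname{Adj}(A)$ (the formula referred to above), here is a minimal Python sketch; the helper names (`minor`, `det`, `inverse_via_adjugate`) are mine, and the example matrix is the one from part (a) below:

```python
from fractions import Fraction

def minor(M, i, j):
    # Delete row i and column j.
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    # Recursive cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def inverse_via_adjugate(M):
    n = len(M)
    d = det(M)
    # Adj(A) = transpose of the cofactor matrix; A^{-1} = Adj(A) / det(A).
    cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)] for i in range(n)]
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[1, 5, 2], [0, -1, 2], [0, 0, 1]]
Ainv = inverse_via_adjugate(A)
print(Ainv)  # [[1, 5, -12], [0, -1, 2], [0, 0, 1]]
```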
For each of the following matrices, determine whether it is invertible, and if so, find the inverse matrix using the above formula.
(a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$. (b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$.

Problem 509
Using the numbers appearing in
\[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 \end{bmatrix}.\]
Prove that the matrix $A$ is nonsingular.
Problem 505
Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\]
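A quick Python verification of the formula on the matrix that follows, taking $A=\begin{bmatrix} 1 & 1\\ 1& 1 \end{bmatrix}$ (singular, with $\operatorname{tr}(A)=2\neq -1$) so that $I+A=\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$:

```python
from fractions import Fraction

# A is singular with tr(A) = 2, and I + A is the matrix from the problem.
A = [[Fraction(1), Fraction(1)], [Fraction(1), Fraction(1)]]
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
tr = A[0][0] + A[1][1]

# Formula: (I + A)^{-1} = I - A / (1 + tr(A)).
inv = [[I[i][j] - A[i][j] / (1 + tr) for j in range(2)] for i in range(2)]
print(inv)  # [[2/3, -1/3], [-1/3, 2/3]]

# Check: (I + A) * inv == I.
IpA = [[I[i][j] + A[i][j] for j in range(2)] for i in range(2)]
prod = [[sum(IpA[i][k] * inv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(prod == I)  # True
```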
Using the formula, calculate the inverse matrix of $\begin{bmatrix}
2 & 1\\ 1& 2 \end{bmatrix}$.

Problem 486
Determine whether there exists a nonsingular matrix $A$ if
\[A^4=ABA^2+2A^3,\] where $B$ is the following matrix. \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\]
If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$.
(The Ohio State University, Linear Algebra Final Exam Problem)

Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue

Problem 419

(a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$.
(b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
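For part (b), a numerical sanity check (not a proof): a rotation matrix about the $z$-axis is orthogonal with determinant $1$, and $\det(A - I) = 0$, i.e. $1$ is an eigenvalue.

```python
import math

def rotation_z(theta):
    # Rotation about the z-axis: orthogonal, det = 1.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = rotation_z(0.7)
print(abs(det3(A) - 1.0) < 1e-12)  # True: det A = 1

# det(A - I) = 0 means 1 is an eigenvalue of A.
AmI = [[A[i][j] - (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]
print(abs(det3(AmI)) < 1e-12)      # True
```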
1) If (x+y)^2 = 31 and xy = 6, what is the value of (x-y)^2?
2)Compute the product \[\frac{(1998^2 - 1996^2)(1998^2 - 1995^2) \cdots (1998^2 - 0^2)}{(1997^2 - 1996^2)(1997^2 - 1995^2) \cdots (1997^2 - 0^2)}.\]
3)A square park measures 170 feet along each side. Two paved paths run from each corner to the opposite corner and extend 3 feet inwards from each corner, as shown. What is the total area, in square feet, taken by the paths?
4)If $\left(a-\frac 1a\right)^2=4$ and $a > 1,$ what is the absolute value of $a^3 - \frac1{a^3}?$
5)If $a+b=7$ and $a^3+b^3=42$, what is the value of the sum $\dfrac{1}{a}+\dfrac{1}{b}$? Express your answer as a common fraction.
1.)
\(\text{If you want to figure out }(x-y)^2\text{, You can expand it to }x^2-2xy+y^2\)
\(\text{Notice how }xy=6\text{ now that means we can place it into our } x^2-2xy+y^2\text{ equation}\)
\(\text{So we have: }x^2-2xy+y^2\Rightarrow{x^2}-2(6)+y^2\Rightarrow{\color{green}x^2+y^2-12}\)
\(\text{Now consider one of the equations we were given: }(x+y)^2=31\)
\(\text{We can now expand the following: } (x+y)^2=31\Rightarrow{\color{red}x^2+2xy+y^2=31}\)
\(\text{Now we place }xy=6\text{ into the red equation}\)
\(x^2+2xy+y^2=31\rightarrow{x^2}+2(6)+y^2=31\rightarrow{\color{red}x^2+y^2=19}\)
\(\text{Now we combine the new red equation and the green equation together for the answer}\)
\({\color{red}x^2+y^2=19}\text{ and }{\color{green}x^2+y^2-12}\Rightarrow{19-12}=\boxed{7}\)
I do not have time to answer the other questions, but I am sure that they all involve expanding special cases and combining equations!
4) If $\left(a-\frac 1a\right)^2=4$ and $a > 1$, what is the absolute value of $a^3 - \frac1{a^3}$?
Expand (a -1/a)^2 = 4 → a^2 - 2 + 1/a^2 = 4 → a^2 + 1/a^2 = 6 (1)
And from (a - 1/a)^2 = 4, since a > 1 we have a - 1/a > 0, so take the positive root
a - 1/a = 2 (2)
Factor a^3 - 1/a^3 as a difference of cubes
(a -1/a) ( a^2 + 1 + 1/a^2)
(a - 1/a) ( [a^2 + 1/a^2] + 1) (3)
Sub (1) and (2) into (3)
(2) ( 6 + 1)
(2) (7) =
14
5) If $a+b=7$ and $a^3+b^3=42$, what is the value of the sum $\dfrac{1}{a}+\dfrac{1}{b}$?
a + b = 7 square both sides
a^2 + 2ab +b^2 = 49 (1)
a^3 + b^3 = 42 factor the left side as a sum of cubes
(a + b) (a^2 - ab + b^2) = 42
(7) (a^2 - ab + b^2) = 42 divide both sides by 7
a^2 - ab + b^2 = 6 (2)
Subtract (2) from (1) and we have that
3ab = 43
ab = 43/3 ( 3)
And we can simplify 1/a + 1/b as (a + b) / ab
So
1/a + 1/b =
(a + b) / ab =
7 / ( 43/3) =
(7/1) / (43/3) =
(7/1) * (3/43) =
7 * 3 / 43 =
21 / 43
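All of the answers above can be double-checked numerically; here is a Python sketch (variable names mine):

```python
import math

# 1) With x + y = sqrt(31) and xy = 6, check (x - y)^2 = 7.
s, p = math.sqrt(31), 6.0
x = (s + math.sqrt(s * s - 4 * p)) / 2  # roots of t^2 - s t + p = 0
y = s - x
print((x - y) ** 2)  # approximately 7

# 4) a - 1/a = 2 with a > 1 gives a = 1 + sqrt(2); check a^3 - 1/a^3 = 14.
a = 1 + math.sqrt(2)
print(a ** 3 - 1 / a ** 3)  # approximately 14

# 5) a + b = 7, ab = 43/3; check a^3 + b^3 = 42 and 1/a + 1/b = 21/43.
s, p = 7.0, 43.0 / 3.0
print(s ** 3 - 3 * p * s)            # a^3 + b^3 = (a+b)^3 - 3ab(a+b), approx 42
print(abs(s / p - 21 / 43) < 1e-12)  # True
```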
Write the product as
$$\frac{(1998^2 - 1996^2)(1998^2 - 1995^2)\cdots(1998^2 - 0^2)}{(1997^2 - 1996^2)(1997^2 - 1995^2)\cdots(1997^2 - 0^2)}$$
Factor each difference of squares:
$$\frac{(1998 + 1996)(1998 - 1996)\,(1998 + 1995)(1998 - 1995)\,(1998 + 1994)(1998 - 1994)\cdots(1998)(1998)}{(1997 + 1996)(1997 - 1996)\,(1997 + 1995)(1997 - 1995)\,(1997 + 1994)(1997 - 1994)\cdots(1997)(1997)}$$
$$=\frac{[(3994)(2)]\,[(3993)(3)]\,[(3992)(4)]\cdots[(1998)(1998)]}{[(3993)(1)]\,[(3992)(2)]\,[(3991)(3)]\cdots[(1997)(1997)]}$$
The large factors in the numerator are $3994, 3993, \dots, 1998$ while those in the denominator are $3993, 3992, \dots, 1997$; the small factors are $2, 3, \dots, 1998$ on top and $1, 2, \dots, 1997$ below. After cancellation of like terms, the large factors leave $\frac{3994}{1997}$ and the small factors leave $1998$, so the product equals
$$\frac{3994 \cdot 1998}{1997} = \frac{(2)(1997)(1998)}{1997} = 2 \cdot 1998 = 3996$$
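The telescoping can be confirmed exactly with Python's `fractions` module:

```python
from fractions import Fraction

# Product of (1998^2 - k^2) / (1997^2 - k^2) for k = 0, 1, ..., 1996.
num = Fraction(1)
den = Fraction(1)
for k in range(1997):
    num *= 1998 ** 2 - k ** 2
    den *= 1997 ** 2 - k ** 2

print(num / den)  # 3996
```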
Electromagnetic Induction: Faraday's Law of Induction, Lenz's Law and Conservation of Energy

If the current flowing in the primary coil is ‘i_p’ and ‘φ_s’ is the magnetic flux linked with the secondary coil, then φ_s ∝ i_p ⇒ φ_s = M i_p ⇒ \tt M = \frac{\phi_{s}}{i_{p}}

Induced emf generated in the secondary coil is \tt e = - \frac{d \phi}{dt} = - M \left(\frac{di_{p}}{dt}\right), so \tt M = \frac{e}{\left(-di_{p} / dt \right)}

Dimensional formula of mutual inductance is [ML^2 T^{-2} A^{-2}]

For two co-axial solenoids with numbers of turns per unit length n_1 and n_2 (coil 1 inside, coil 2 outside): \tt M = \mu_{0} n_{1}n_{2} \pi r_{1}^{2} l

If a magnetic material of relative permeability μ_r fills the space inside the solenoid, then \tt M = \mu_{r} \mu_{0} n_{1}n_{2} \pi r_{1}^{2} l

Mutual inductance for coil 1 due to coil 2 is equal to mutual inductance for coil 2 due to coil 1 ⇒ M_12 = M_21

For two circular coils, one with very small radius r_1 and another with very large radius r_2, placed co-axially with their centres coinciding: \tt M = \frac{\mu_{0} \pi r^{2}_{1}}{2r_{2}}

If ‘i’ is the current flowing through a coil and ‘φ’ is the magnetic flux linked with it, then φ ∝ i ⇒ φ = Li ⇒ \tt L = \frac{\phi}{i}, where L is the coefficient of self-induction.

Self-induced emf: \tt e = \frac{- d \phi}{dt} = - L \frac{di}{dt}

A coil with high self-inductance is known as an INDUCTOR.

For a long solenoid whose core consists of a magnetic material of relative permeability μ_r: L = μ_r μ_0 n^2 A l

Energy in a current-carrying coil is stored in the form of the magnetic field: \tt U = \frac{1}{2} Li^{2}

Relation between self-inductance and mutual inductance: \tt M = K \sqrt{L_{1}L_{2}}, where K ≤ 1 is the coupling factor; K = 1 if the coils are wound one over the other.

Inductances add in series or parallel: L_series = L_1 + L_2 + L_3 + ……, \tt \frac{1}{L_{parallel}} = \frac{1}{L_{1}} + \frac{1}{L_{2}} + \frac{1}{L_{3}} + ....

An AC GENERATOR converts mechanical energy into electrical energy. Inducing an emf in a loop through a change in the loop's orientation is the PRINCIPLE OF A SIMPLE AC GENERATOR.

The induced emf for the rotating coil of N turns is
\tt \varepsilon = - \frac{d \phi_{B}}{dt} = - \frac{d}{dt} (NBA \cos \omega t)

∴ ε = NBAω sin ωt

The magnitude of the induced emf is ε = NBAω sin ωt = ε_0 sin ωt, where ε_0 = NBAω is the peak value of the emf.

If ‘i’ is the current in the circuit at any instant ‘t’ in an L-R circuit with a DC source, then \tt i = i_{0} \left\{{1 - e^{-t / \lambda}}\right\}, where \lambda = \frac{L}{R} is the inductive time constant.

At t = λ, \tt i = i_{0} \left(1 - \frac{1}{e}\right) = 0.63 i_{0}. The INDUCTIVE TIME CONSTANT of a circuit is defined as the time in which the current rises from zero to 63% of its final value.

During decay, the current at any instant in an L-R circuit with a DC source is \tt i = i_{0} e^{-t/ \lambda}; at t = λ, \tt i = \frac{i_{0}}{e} = 0.37i_{0}. Equivalently, the inductive time constant (λ) is the time interval during which the current decays to 37% of the maximum current.

For a charging capacitor, the charge present on the plates opposes further introduction of charge: \tt E - \frac{q}{C} = Ri \Rightarrow E - \frac{q}{C} = R \frac{dq}{dt}, and λ = CR is called the CAPACITIVE TIME CONSTANT. At t = λ, \tt q = q_{0} \left(1 - \frac{1}{e}\right) = 0.63 q_{0}: the CAPACITIVE TIME CONSTANT is the time in which the charge on the plates of the capacitor becomes 0.63 q_0.

If the charge slowly reduces to zero after infinite time (discharge), then \tt \frac{-q}{C} = Ri \Rightarrow \frac{-q}{C} = R \frac{dq}{dt}, and at t = λ, \tt q = \frac{q_{0}}{e} = 0.37 q_{0}.
1. The induced emf is given by rate of change of magnetic flux linked with the circuit, i.e., e = -\frac{d\phi}{dt}. For N turns e = -\frac{N d\phi}{dt}; Negative sign indicates that induced emf (e) opposes the change of flux.
2. When Φ changes, the average induced emf
e = -N \frac{d\phi}{dt} = - \frac{N(\phi_{2} - \phi_{1})}{\Delta t} = - \frac{NA(B_{2} - B_{1}) cos \theta}{\Delta t} = - \frac{NBA(\cos \theta_{2} - \cos \theta_{1})}{\Delta t}
3. Φ = NBA cos θ = NBA cos ωt, where ω is the angular velocity; then e = -\frac{d \phi}{dt} = - \frac{d}{dt}(NBA \ \cos \omega t)
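A small numeric illustration of the rotating-coil emf and the 63% rise in an L-R circuit; all sample values ($N$, $B$, $A$, $\omega$) are made up for the example:

```python
import math

# Hypothetical sample values: N = 100 turns, B = 0.5 T, A = 0.01 m^2, omega = 100*pi rad/s.
N, B, A, omega = 100, 0.5, 0.01, 100 * math.pi

eps0 = N * B * A * omega          # peak emf: eps0 = N B A omega = 50*pi (about 157 V)
t = 0.002
eps = eps0 * math.sin(omega * t)  # instantaneous emf: eps = eps0 sin(omega t)
print(eps0, eps)

# L-R circuit: at t = lambda = L/R the current reaches (1 - 1/e) of its final value.
print(1 - math.exp(-1))  # approximately 0.632, i.e. the 63% in the text
```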
Before we even consider this, consider the kinetic energy of a relativistic particle:
$$E=mc^{2}\left(\frac{1}{\sqrt{1-\left(\frac{v}{c}\right)^{2}}}-1\right)$$
This represents the amount of energy required to accelerate the particle from rest (with respect to a given reference frame) to the speed $v$. It should be immediately obvious that this quantity becomes infinite as $v\rightarrow c. {}^{1}$ Since it is impossible to generate an infinite amount of energy, this means that no particle with $m > 0$ can be accelerated to the speed of light. In fact, since high-energy particles have speeds that are generally extremely close to the speed of light in a laboratory frame, it has become customary to quote the speed of the particles in terms of their energy: particle physicists routinely talk about "1 GeV" electrons to refer to the kinetic + rest mass energy of the electron in question, rather than bothering to translate this into a translational velocity.
${}^{1}$ This is true unless $m=0$. In the latter case, you can get a finite answer out of the $0\cdot \infty$ in the case of massless particles, but ONLY if they travel at exactly the speed of light. Were they to travel faster than this, the formula would no longer be the indeterminate form $0 \cdot \infty$, but rather $0\cdot ({\rm complex \,\,number})$, which would be a definite, but nonsensical, answer.
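A short Python sketch of how this kinetic energy grows without bound as $v \rightarrow c$ (the electron-mass value is approximate):

```python
import math

def kinetic_energy(m, v, c=299_792_458.0):
    """Relativistic kinetic energy E = m c^2 (gamma - 1)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return m * c ** 2 * (gamma - 1.0)

c = 299_792_458.0
m_e = 9.109e-31  # electron mass in kg (approximate)
for frac in (0.5, 0.9, 0.99, 0.999999):
    print(frac, kinetic_energy(m_e, frac * c))
# The printed energies grow without bound as v -> c.
```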
First, the string of minimum length might not be defined properly, since it might not be unique. Here is a way to find a string of minimum length: Convert the regular expression to a nondeterministic finite automaton. Convert the nondeterministic automaton into a deterministic one. Use a breadth-first search until you encounter the first nonfinal state (if ...
Your reasoning is incorrect. It is true that your hypothetical "proof deriver" cannot derive all true statements. No proof derivation system can, and indeed, it is not even possible to express the set of true statements in arithmetic, which is a consequence of Tarski's theorem on truth, itself a consequence of Gödel's theorem. However, your algorithm does ...
Your basic idea (in particular the choice of $t$) works, but there are some issues with the "proof" as such: What is $n$? Why does $y=b$ have to be pumped? You have to work against all allowed decompositions of $t$! $n+i < n+2$ does indeed not work for all $i$, but that's kind of the point. Just drop that condition. I suggest you read our ...
This is similar to the recurrence arising in the analysis of Quicksort; search for "quicksort analysis" to get lots of results. An easy road is to write:
\begin{align}
(n + 1) T(n + 1) &= 2 \sum_{0 \le k \le n} T(k) + (n + 1) c \\
n T(n) &= 2 \sum_{0 \le k \le n - 1} T(k) + n c
\end{align}
Subtract to get:
$$(n + 1) T(n + 1) - (n + 2) T(n)...
Your idea is correct. You should take the following path to formally write it up/prove it. First assume that $L$ is accepted by some DFA $M=(Q,\Sigma,\delta,q_0,F)$, then define a new NFA $M'=(Q',\Sigma,\delta',q'_0,F')$ based on $M$. Just express your ideas formally; for example, start with $Q'=Q\cup\{q_\text{new1},q_\text{new2}\}$, $F'=\{q_\text{new2}\}$, ...
Your answers look good. To do this properly, it might be helpful to convert the expression into the equivalent language (i.e., using English words to describe it); however, for certain expressions this language will be very complicated. Another "algorithmic" way is to convert it to a DFA/NFA, and then check all the strings lexicographically, until you find ...
Your proof produces a tree in which all nodes are colored black. It doesn't necessarily satisfy the "black height" rule: Every path from a given node to any of its descendant NIL nodes contains the same number of black nodes. Not every AVL tree satisfies this condition; for example, the Wikipedia example doesn't.
It is hard to answer this question since it is hard to find a formal statement of Rabin's compression theorem. Here is one from the book Complexity Theory and Cryptology: An Introduction to Cryptocomplexity by Jörg Rothe, page 63: For each time constructible function $t(n)$ there exists a decidable set $D_t$ such that each deterministic Turing machine $M$ ...
Your invariant, together with the negation of the loop condition, is not strong enough to imply your postcondition. Try adding an additional conjunct to the invariant which, together with $\neg\ i<10$, implies $i=10$ (the $j=-1$ part then follows from $i+j=9$).
An alternative solution still using domain transformation / change of variables:
$$T(n) = T(\sqrt{n}) + \log \log n$$
1. Let $m = \log n$. We can then define a new function $S$ based on how $m$ changes in $T$ and expand it:
$$\begin{align}S(m) &= S\left(\frac{m}{2}\right) + \log m\\&=S\left(\frac{m}{4}\right) + \log m -1 + \log m\\& \vdots\\&...
To make life simple, assume $T(1)=1$. If we look at this just for integral powers of $k$, i.e. $n=k^m$ for some $m \in \mathbb Z$, we have, by definition,
$$T(k^m)=kT(k^{m-1})+ck\cdot k^m$$
We can repeatedly substitute into the recurrence to get:
$$\begin{align}T(k^m)&=k\cdot{\color{red}{T(k^{m-1})}}+ck\cdot k^m\\&=k\cdot{\color{red}{[k\cdot T(k^...
If $f = O(g)$, then
$$\exists n_0. \exists c. \forall n. (n > n_0 \rightarrow f(n) \le c g(n))$$
The negation of this statement is
$$\forall n_0. \forall c. \exists n. (n > n_0 \land f(n) > c g(n))$$
This doesn't match your negation, so I believe your proof is incorrect. I actually think the statement you're trying to prove is false. Try ...
Stirling's approximation states that
$$ n! \sim \sqrt{2\pi n} (n/e)^n. $$
This notation means that the ratio between the two sides tends to 1 as $n$ tends to infinity. For your purposes, we can simply write
$$ n! = \Theta(\sqrt{n} (n/e)^n). $$
Since $\sqrt{n} = o(e^n)$,
$$ n! = o(n^n). $$
If all you want is to show $n! = O(n^n)$, then as Jukka mentions you can ...
You seem to think the structure of the proof is: (1) suppose the algorithm is incorrect; (2) prove that the algorithm is, in fact, correct; (3) this contradicts (1), so the algorithm is correct. That's almost, but not quite, true. Step 2 doesn't prove that the returned value $\mathrm{max}$ is bigger than every element of the array, which is what would be required to ...
You can compute $S\setminus T$ and $T\setminus S$ from $S$ and $T$ in $O(n+m)$ time using a hash table. Put all of list $S$ into a hash table, and then iterate through list $T$ and look each element up in the hash table. Then do the same, with $T$ in the hash table and iterating through $S$. Fine print for complexity purists: this is expected running time, making ...
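A minimal Python sketch of the approach described above (the function name and sample lists are mine):

```python
def symmetric_parts(S, T):
    """Return (S \\ T, T \\ S) in expected O(n + m) time using hash sets."""
    t_set = set(T)  # hash all of T
    s_set = set(S)  # hash all of S
    s_minus_t = [x for x in S if x not in t_set]  # one pass over S with O(1) lookups
    t_minus_s = [x for x in T if x not in s_set]  # one pass over T with O(1) lookups
    return s_minus_t, t_minus_s

print(symmetric_parts([1, 2, 3, 4], [3, 4, 5]))  # ([1, 2], [5])
```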
Thanks for the clarification! I really misinterpreted your question. Okay, so we have the computable function $f$, the also-computable function $F$ that is based on $f$, and the Turing machine $M_F$ with the following behavior: when started with a tape with $y$ $1$'s, it writes a block of $F(y)$ $1$'s to the right, separated by at least one blank. Note that ...
In this context, lexicographic order means: first order by length; within each length, order lexicographically. You're saying that the proof is incorrect, but in fact it is only inaccurate in that the order is not quite the lexicographic order. I would say that the proof has a small mistake, but is otherwise correct. One often encounters such small issues ...
It's a proof by contradiction that could easily be rewritten as a direct proof. To rephrase it as a direct proof, we divide it into two claims: (1) $max$ is an element of $A$; (2) $max \geq A[j]$ for all $j$. We can conclude that $max = \max(A)$.
The definition you give looks like the definition of a complete tree. With the restriction that nodes are in $[\![1, 2^h-1]\!]$, then it is also a perfect tree of height $h$.Instead of looking at the leaves, you should look at the root of the tree: since the tree is perfect, it has two children of size $2^{h-1}-1$ (this is how $C_{h-1}$ appears). You just ...
The nature of the answer depends on what you are attempting to optimize (e.g. computation, communication, interactivity) and the computational model (e.g. deterministic, probabilistic, distributed/centralized). In the case where the two sets are on remote devices, this problem is known as the set reconciliation problem, or more generally the data ...
The solution in the question is correct. The constraint $i\ne j$ is the one that gives us trouble. To get around it we have to split into two cases: (i) $i<j$, and (ii) $i>j$ (but still $i<2j$). Thus $S\to S_{(i)} \mid S_{(ii)}$, as the first production splits into these two cases. Now, divide and conquer: case (i) is very simple, since we only ...
There is a trivial $O(kn^2)$ algorithm: Choose a vertex $v$ and calculate its degree $k$; then verify that all other vertices have degree $k$. Choose some neighbor $w$ of $v$ and calculate the number of common neighbors $\lambda$; then verify that all other pairs of adjacent vertices have $\lambda$ common neighbors. Choose some non-neighbor $x$ of $v$ and ...
Here's an explicit construction of $f$ and $g$ such that neither $f=O(g)$ nor $g=O(f)$. To make the calculations slightly easier, I've chosen $g$ to be increasing and $f$ to be nondecreasing (namely, $f$ will be a step function), but one can follow the technique and tweak $f$ so that it's increasing, as well.First, let $g(n)=n$. Construct $f$ as follows:...
You are missing something. If you are given what is supposed to be a factorisation of a number $x$, it's not enough to show that the product of those numbers is $x$. You also have to prove that all the numbers in the purported factorisation are primes. Fortunately, there is a theorem that for every prime number, there is a polynomial-time proof that it is a ...
"Our proof deriver enumerates all possible axiomatic systems." But the set of possible axiomatic systems also includes the inconsistent systems. On the other hand, the consistent axiomatic systems are not computably enumerable, hence you cannot enumerate them. But cody's point is probably even more relevant, since the algorithm can just write down an ...
"Then you can compress all $g_j ∈ S_N$, by iterating through $A$ until you find $a_i|f(a_i,g_j)−m>0$." I'm not sure what the notation means (the pipe here, and also $\#A$ elsewhere), but still: this is not a meaningful algorithm, since the set $A = \{a_i : \, \forall \, g_j \in S_N \, f(a_i, g_j) \gt \lceil(\log_2{\#A})\rceil\}$ is empty. I ...
Maybe I misunderstand your first question. The source is by definition a module of the vertex, namely the module $M$ for which $U | M^G$. Trivial source in this context means trivial module of the vertex $Q$.
Now for the second question. The easy part is why $k_Q|U_Q$ implies $k_{Q\cap H}|U_{Q\cap H}$. If you write out the definitions of these symbols, the implication becomes trivial, using transitivity of restriction:$$
\begin{align}
k_Q|U_Q & \Rightarrow U_Q = k_Q\oplus\ldots\\
& \Rightarrow U_{Q\cap H} = (U_Q)_{H\cap Q} = (k_Q)_{H\cap Q}\oplus \ldots= k_{H\cap Q}\oplus \ldots
\end{align}
$$It remains to show that $k_Q|U_Q$. Again, let us write out the definitions. We have $(k_Q)^G = U\oplus \ldots$, since $k_Q$ is a source of $U$. So, restricting back to $Q$, $((k_Q)^G)_Q = U_Q\oplus\ldots$ Now use Mackey:$$
((k_Q)^G)_Q = \bigoplus_{g\in Q\backslash G/Q}(k_{Q\cap Q^g})^Q = U_Q\oplus\ldots
$$By Krull-Schmidt, $U_Q$ is a direct sum of trivial modules induced from various subgroups, so let's simply write $U_Q = \bigoplus_i (k_{H_i})^Q$, $H_i\leq Q$, and we want to show that one of these $H_i$ is $Q$ itself. Now, $Q$ being a vertex implies that $U|(U_Q)^G$ (indeed, one of the possible definitions of vertex is that $Q$ is the smallest such subgroup of a Sylow). So, $U\;|\;\bigoplus_i (k_{H_i})^G$, and since $U$ is indecomposable, it divides one of the summands. But hang on a minute, $Q$ was the
smallest subgroup from which $U$ could divide an induction, so one of these $H_i$ must indeed be $Q$, as claimed.
Consider a linear discrete-time system. Assume we can define it in terms of an input-output relation as follows (you could assume a more general model, but this is enough for our purpose):
$$a_0y[n]+a_{1}y[n-1]+\cdots+a_{N}y[n-N]=b_0x[n]+b_{1}x[n-1]+\cdots+b_{M}x[n-M]\tag{1}$$
When the coefficients $\{a_i\}$ and $\{b_i\}$ are constant, we call it a finite-order constant-coefficient ordinary difference equation. It expresses the current output $y[n]$ in terms of a weighted sum of the current and past inputs and the past outputs:
$$y[n]=\frac{-a_{1}y[n-1]-\cdots-a_{N}y[n-N]+b_0x[n]+b_{1}x[n-1]+\cdots+b_{M}x[n-M]}{a_0}$$
It is very similar to a differential equation in continuous time.
Dealing with such an equation to find the output of the system can become complicated. It is a recurrence relation: it defines the values recursively, and for arbitrary inputs it is not straightforward to express the output in a closed-form representation.
How to deal with it in an easier way?
Consider the following transform (assume the sum exists):$$\mathcal{Z}(x[n])=\sum_{n=-\infty}^{+\infty}x[n]z^{-n}$$ You are right, it is just a transform that accepts $x[n]$ and gives you $X(z)$. But it has a useful property. Let's calculate the $z$-transform of $x[n-\alpha]$:$$\begin{align}\mathcal{Z}(x[n-\alpha])&=\sum_{n=-\infty}^{+\infty}x[n-\alpha]z^{-n}\\[10pt]&=\sum_{n'=-\infty}^{+\infty}x[n']z^{-(n'+\alpha)}\tag{2}\\[10pt]&=\left(\sum_{n'=-\infty}^{+\infty}x[n']z^{-n'}\right)z^{-\alpha}\\&=\left(\mathcal{Z}(x[n])\right)z^{-\alpha}\end{align}$$where in $(2)$ I changed the variable: $n'=n-\alpha \Rightarrow n=n'+\alpha$. Assume $\mathcal{Z}(x[n])=X(z)$. We have seen that a property of the $z$-transform is: $$\boxed{\mathcal{Z}(x[n-\alpha])=z^{-\alpha}X(z)}$$
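The shift property is easy to confirm numerically for a finite-length signal; a Python sketch (the helper `Z` just evaluates the finite sum at one point, and the sample values are made up):

```python
# x[n] nonzero on n = 0..3; evaluate the (finite) z-transform at a sample point.
x = [1.0, 2.0, 3.0, 4.0]

def Z(signal, start, z):
    # Sum over n of signal[n] * z^(-n), where `start` is the index of the first sample.
    return sum(v * z ** -(start + k) for k, v in enumerate(signal))

z0 = 1.5 + 0.5j
X = Z(x, 0, z0)
X_shifted = Z(x, 1, z0)  # same samples, delayed by one sample: x[n-1]

print(abs(X_shifted - X * z0 ** -1) < 1e-12)  # True: Z{x[n-1]} = z^{-1} X(z)
```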
Applying this to the difference equation $(1)$ makes it (note that it is a linear transform and can be applied term by term):
$$a_0Y(z)+a_1z^{-1}Y(z)+\cdots+a_Nz^{-N}Y(z)=b_0X(z)+b_1z^{-1}X(z)+\cdots+b_Mz^{-M}X(z)$$
Note that all terms are expressed in terms of $Y(z)$ and $X(z)$. So we can say that
the $z$-transform usually reduces difficult-to-handle recurrence relations to much easier-to-manipulate algebraic relations.
With an algebraic relation, we can factor the terms:
$$Y(z)\left(a_0+a_1z^{-1}+\cdots+a_Nz^{-N}\right)=X(z)\left(b_0+b_1z^{-1}+\cdots+b_Mz^{-M}\right)$$and consequently,
$$Y(z)=X(z)\frac{b_0+b_1z^{-1}+\cdots+b_Mz^{-M}}{a_0+a_1z^{-1}+\cdots+a_Nz^{-N}}$$
where $$H(z)\triangleq\frac{b_0+b_1z^{-1}+\cdots+b_Mz^{-M}}{a_0+a_1z^{-1}+\cdots+a_Nz^{-N}}$$is called the system function. So we have distilled the whole system into an algebraic expression, $H(z)$. Hence, the zeros and poles of $H(z)$ have a direct impact on the output.
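As a quick illustration (a hypothetical first-order system, not from the question): take $y[n] - 0.5\,y[n-1] = x[n]$, so $H(z) = \frac{1}{1 - 0.5z^{-1}}$. Feeding in $x[n] = z_0^n$ with zero initial conditions, the output ratio $y[n]/x[n]$ settles to $H(z_0)$, previewing the eigenfunction property derived below.

```python
# Hypothetical first-order system: y[n] - 0.5 y[n-1] = x[n]  =>  H(z) = 1/(1 - 0.5 z^{-1}).
def run_difference_equation(x):
    y, y_prev = [], 0.0
    for xn in x:
        yn = xn + 0.5 * y_prev  # y[n] = x[n] + 0.5 y[n-1], zero initial conditions
        y.append(yn)
        y_prev = yn
    return y

z0 = 2.0
x = [z0 ** n for n in range(30)]
y = run_difference_equation(x)

H_z0 = 1.0 / (1.0 - 0.5 / z0)  # H(z0) = 4/3
print(y[-1] / x[-1], H_z0)     # both approximately 1.3333 once the transient dies out
```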
There are also other reasons that make $z$-transform and $H(z)$ important, and we would prefer working with the $z$-domain rather than time-domain.
Assume the input to the system is $x[n]=z_{k}^n$, where $z_k$ is just a complex number (don't confuse it with the $z$ variable in the $z$-transform at this moment! Also there are some restrictions on $z_k$, as we will see next). The output of the LTI system can be calculated by convolution of the input and the impulse response $h[n]$:
$$\begin{align}y[n]&=h[n]*x[n]\\&=\sum_{m=-\infty}^{+\infty} h[m] x[n-m]\\&=\sum_{m=-\infty}^{+\infty} h[m] z_k^{n-m}\\&=z_k^n \sum_{m=-\infty}^{+\infty} h[m] z_k^{-m}\\&=z_k^n H(z_k)\end{align}$$
What does it really mean?
The output of the system is
a scaled version of the input. So the signal $z_k^n$ passes through the system and the only impact on it is a scaling. Note that the scaling factor is the system function evaluated at $z_k$. Why is it important?
If we could decompose an arbitrary signal (not of the form $z_k^n$) into several components of the form $z_k^{n}$, then we can easily calculate the output of the system to each of the components, and then add them together to find the overall output (benefiting from the superposition property of LTI systems).
How can we decompose the signal $x[n]$ into components of the form $z_k^n$? It is as easy as the following (assume the sum exists for now):
$$\left(\sum_{n=-\infty}^{+\infty}x[n]z_k^{-n}\right)z_k^n$$
Let's refer to this component as $X(z_k)z_k^n$. So the output to this component is $y_k[n]=H(z_k)X(z_k)z_k^n$. We can have as many components as we like; potentially a continuum of $z_k$. So we refer to all of them by a variable $z$, and the component-wise output becomes $H(z)X(z)z^n$. The overall output $y[n]$ is then in the form of an integral (since we need to add them all up). It can be shown that $y[n]$ is given by the following contour integral:
$$y[n]=\frac{1}{2\pi j}\oint_C\frac{H(z)X(z)z^n}{z}dz$$
where $C$ is a counterclockwise path inside the ROC (where $X(z)$ and $H(z)$ exist) that encircles the origin, which is the same as the inverse $z$-transform of $X(z)H(z)=Y(z)$.
Regarding your second question: in the definition of the $z$-transform (when it exists on the complex unit circle), if you choose $z=e^{j\omega}$ it becomes the definition of the Discrete-Time Fourier Transform. Hence this is the frequency-domain interpretation.
I am trying to solve the Diophantine equation:
$$ \binom{a}{2} + \binom{b}{2} = \binom{c}{2} $$
Here's what it looks like if you expand; it's a variant of the Pythagorean triples:
$$ a \times (a-1) + b \times (b-1) = c \times (c-1) $$
I was able to find solutions by computer search but this could have also been checked using the Hasse principle. \begin{eqnarray*} \binom{3}{2}+ \binom{3}{2}&=& \binom{4}{2} \\ \\ \binom{5}{2}+ \binom{10}{2}&=& \binom{11}{2} \\ \\ \binom{15}{2}+ \binom{15}{2}&=& \binom{21}{2} \end{eqnarray*}
and many others. Is there a general formula for the $(a,b,c) \in \mathbb{Z}^3$ that satisfy this integer constraint?
>>> N = 25
>>> f = lambda a : a*(a-1)/2
>>> X = [(a,b,c,f(a) + f(b) - f(c)) for a in range(N) for b in range(N) for c in range(N)]
>>> [(x[0],x[1],x[2]) for x in X if x[3] == 0 and x[0] > 1 and x[1] > 1 and x[2] > 1]
[(3, 3, 4), (4, 6, 7), (5, 10, 11), (6, 4, 7), (6, 7, 9), (6, 15, 16), (7, 6, 9), (7, 10, 12), (7, 21, 22), (9, 11, 14), (10, 5, 11), (10, 7, 12), (10, 14, 17), (10, 22, 24), (11, 9, 14), (12, 15, 19), (12, 21, 24), (13, 18, 22), (14, 10, 17), (15, 6, 16), (15, 12, 19), (15, 15, 21), (15, 19, 24), (18, 13, 22), (19, 15, 24), (21, 7, 22), (21, 12, 24), (22, 10, 24)]
This is not too obvious to me - what is the size of alternating group?
Following the hint in the comment, should it be $|A_n| = |S_n|/2$?
So I don't feel right up to here.....
The map $\sigma:S_n\to\mathbb{Z}/2\mathbb{Z}$ defined by sending a permutation to $0$ if it has even parity, and $1$ if it has odd parity, is a group homomorphism. The kernel of this map is $A_n$, so by the first isomorphism theorem, we have $[S_n:A_n]=2$ for $n\ge 2$ (the map is not surjective for $n=1$). It follows that for $n\ge 2$ we have
$$|A_n|=\frac{|S_n|}{2}=\frac{n!}{2}$$
The key is to use the existence of the sign homomorphism $\text{sgn} : S_n \rightarrow \{ \pm 1 \}$. By definition $A_n$ is the kernel of $\text{sgn}$. Since $\text{sgn}$ is surjective, it follows immediately that $[S_n : A_n ] = 2$, so $|A_n| = |S_n| / 2 = n! / 2$.
Edit: As noted by Jared, one must assume that $n \geq 2$. |
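As an empirical sanity check of $|A_n| = n!/2$ (not part of the answers above), one can count even permutations by their inversion count in a few lines of Python:

```python
from itertools import permutations
from math import factorial

def is_even(p):
    # a permutation is even iff its number of inversions is even
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

for n in range(2, 7):
    size = sum(1 for p in permutations(range(n)) if is_even(p))
    assert size == factorial(n) // 2   # |A_n| = n!/2
```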
What are some examples of complex functions with infinitely many complex zeros? There are no particular restrictions on the functions I am just curious and having a hard time finding examples. Also what can be said about a complex function with infinitely many complex zeros, must they have any special properties?
There is nothing special about functions with infinitely many zeros. In fact, it is the norm.
If $f(z)$ is any entire function which isn't a polynomial, then $\infty$ is an essential singularity of $f(z)$. By the Great Picard Theorem, $f(z)$ takes every value in $\mathbb{C}$, with at most a single exception, infinitely often.
This means that if your $f(z)$ is entire and not a polynomial, then $f(z) + \alpha$ has infinitely many zeros for every constant $\alpha$ with at most one exception.
Remember that the real numbers are a proper subset of the complex numbers and so infinitely many real zeros satisfies the criterion of infinitely many complex zeros. Hence: $$\operatorname{f}(z) := \sin z $$ If you want infinitely many non-real, complex zeros then note that $\cos(\operatorname{i}\!z) \equiv \cosh z$, and so $$\operatorname{g}(z) := \cosh(z)-1$$ has infinitely many non-real complex zeros: $z \in \{2\pi\operatorname{i}n : n \in \mathbb{Z}\}$
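A quick numerical check of these non-real zeros, using Python's `cmath` purely for illustration:

```python
import cmath, math

g = lambda z: cmath.cosh(z) - 1

# g vanishes at z = 2*pi*i*n for every integer n
for n in range(-3, 4):
    assert abs(g(2 * math.pi * 1j * n)) < 1e-12
```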
$f(z)=0$ for all $z \in \mathbb C$ |
yxTtherm
The Swedish company yxTherm produces thermostats and wishes to revise its established quality-assurance parameters. Currently, the production process is considered 'in control' when at most 10% of the produced units are defective.
On each production day, a random sample of 25 thermostats is inspected, and QA has decided that if 4 or more units in the sample turn out to be defective, the whole production of that day is sent back for inspection.
Inspections are costly, both directly and indirectly, as they tie up resources; but the negative effects of defective units reaching end users are considered even more costly in the long run, mainly in terms of brand and company image and in the costs of repair and compensation.
So it seems that the old production procedures, and maybe even the machinery, need to be revised. At first, yxTherm wishes to see whether it can improve through its working procedures.
Quality assessment in current production
First, the board wishes to take the temperature of current production quality, posing the following question:
Question 1:
The calculations: To answer Question 1, we first define the model to be used for the statistical calculation:
* Each draw in the sample has two possible outcomes: defective/not defective
* The draws are not independent, since sampling is without replacement
This leads to a hypergeometric distribution: X follows Hypergeometric(K, N, n), where:

* K = the number of defective units in one daily production
* N = the population size = the number of units produced per day
* n = the sample size = 25 draws
* p = the proportion of defective units in a daily production: p = K/N
However, we assume that the daily production is more than 10 times the sample size: \(N >10n\Leftrightarrow N>250\), which means that we can approximate by the binomial distribution \(X\sim Bin(n;p)\) with a 10% probability of a defective unit, expressed by p = 0.1. So we get the expression, the expected value and the variance:
* \({X\sim \operatorname {Bin} (25;0.1)}\)
* \({\operatorname {E} [X]=np}=2.5\)
* \({\operatorname {Var} (X)=np(1-p)} = 25 \cdot 0.1 \cdot (1-0.1) = 2.25\) (standard deviation \(\sqrt{2.25} = 1.5\))
The criterion for returning a one-day production to inspection is 4 or more defective units in the daily sample, which can be expressed as:
\(\displaystyle P(X\ge 4)=1-P(X\leq 3)\)
\(\displaystyle\Rightarrow P(X\leq k)=\sum _{i=0}^{k}{n \choose i}p^{i}(1-p)^{n-i}\) \(\displaystyle\Leftrightarrow P(X\leq 3)=\sum _{i=0}^{3}{25 \choose i}0.1^{i}(1-0.1)^{25-i}\)
The step-by-step sum of the probabilities of having 3 or fewer:
\(\displaystyle{\Pr(0{\text{ defect units}})=f(0)=\Pr(X=0)={25 \choose 0}0.1^{0}(1-0.1)^{25-0}=0.071789799}\)
\(\displaystyle{\Pr(1{\text{ defect unit}})=f(1)=\Pr(X=1)={25 \choose 1}0.1^{1}(1-0.1)^{25-1}=0.199416108}\) \(\displaystyle{\Pr(2{\text{ defect units}})=f(2)=\Pr(X=2)={25 \choose 2}0.1^{2}(1-0.1)^{25-2}=0.265888144}\) \(\displaystyle{\Pr(3{\text{ defect units}})=f(3)=\Pr(X=3)={25 \choose 3}0.1^{3}(1-0.1)^{25-3}=0.226497308}\)
\(\Leftrightarrow 1 - (0.071789799+0.199416108+0.265888144+0.226497308) = \underline{0.2364}\)
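The same number is easy to reproduce with a few lines of Python (standard library only), mirroring the step-by-step sum above:

```python
from math import comb

n, p = 25, 0.1
# P(X <= 3) for X ~ Bin(25, 0.1)
p_le_3 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(4))
p_ge_4 = 1 - p_le_3
print(round(p_ge_4, 4))  # 0.2364
```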
Conclusion: yxTherm finds that the described risk is too high and that action needs to be taken in order to improve production quality.

Action to be taken
QA evaluates this risk as too high; consequently, new production procedures are developed and implemented with the goal of decreasing the proportion of defective units.
Question 2
Has there been a decrease in the proportion of defective units after the implementation of the new production procedures?
QA decides to take a new and larger sample of 100 units. This sample returns 6 defective units, a proportion of 0.06, compared to the initial 0.1. Does this mean that QA can report an actual decrease?
Test: In order to answer Question 2, QA runs a hypothesis test at a significance level of 5%. As the new sample has returned a result lower than the initial proportion, the alternative hypothesis is: "the defective proportion is lower than 0.1", and the \(H_0\) hypothesis is the opposite, "conservative" version: "the defective proportion is at least 0.1".
\(z=\frac{\hat{p} - p_0}{\sqrt{p_0 (1-p_0)}}\sqrt n\)
\(\Leftrightarrow z=\frac{0.06 - 0.1}{\sqrt{0.1 (1-0.1)}}\sqrt {100}\)
\(\Leftrightarrow \underline {z=-1.33}\)
As the critical value is \(z_{5\%} = -1.645\) and \(z = -1.33 > -1.645\), the \(H_0\) hypothesis is not rejected, and we can therefore not conclude that the changes have led to an improvement in production.
Another way to test the sample result is to compute a confidence interval for the proportion:
\(\displaystyle{\hat {p}}\pm z{\sqrt {\frac {{\hat {p}}\left(1-{\hat {p}}\right)}{n}}}\)
\(\displaystyle 0.06\pm 1.96{\sqrt {\frac {{\hat {p}}\left(1-{\hat {p}}\right)}{n}}}\)

\(\displaystyle \Leftrightarrow\ 0.06\pm 1.96{\sqrt \frac {0.06 (1 - 0.06)}{100}}\)

\(\displaystyle \Leftrightarrow\ [0.06 \pm 0.0465]\Leftrightarrow\underline{[0.0135 ; 0.1065]}\)
This means that yxTherm can be 95% confident that this range contains the true defective proportion; since 0.1 lies within it, they cannot conclude that the defective proportion has decreased from 0.1.
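Both the test statistic and the interval are easy to reproduce in plain Python (an illustrative sketch of the calculations above):

```python
from math import sqrt

p0, p_hat, n = 0.1, 0.06, 100

# one-sample z test statistic for a proportion
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # about -1.33

# 95% confidence interval for the sample proportion
half = 1.96 * sqrt(p_hat * (1 - p_hat) / n)  # about 0.0465
ci = (p_hat - half, p_hat + half)            # about (0.0135, 0.1065)
```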
From here, yxTherm could consider taking a larger sample, and calculate what the sample size should be in order to narrow the interval down to a given width; or they could choose to act directly, e.g. by considering a change of machinery.
This is a follow up question to here.
The recommended solution doesn't work for me. I therefore left all my mathematical packages in the MWE. The result is an upright Psi followed by an italic Psi. I would like all upper-case Greek letters to be italic in formulas. The possibility to set one character upright (for a constant, an operator, ...) would be great. Therefore I wonder if there is a more beautiful solution around.
\documentclass[]{scrreprt}
\usepackage[final]{microtype}
\usepackage{luatextra}
\usepackage{fontenc}
\defaultfontfeatures{Ligatures={TeX}}
\usepackage[ngerman, english]{babel}
\useshorthands{"}
\addto\extrasenglish{\languageshorthands{ngerman}}
\usepackage{amsmath}
\let\Gamma\varGamma
\let\Delta\varDelta
\let\Theta\varTheta
\let\Lambda\varLambda
\let\Xi\varXi
\let\Pi\varPi
\let\Sigma\varSigma
\let\Upsilon\varUpsilon
\let\Phi\varPhi
\let\Psi\varPsi
\let\Omega\varOmega
\usepackage{mathtools}
\usepackage{amssymb}
\usepackage{commath}
\usepackage[per-mode=symbol-or-fraction,locale=DE,sticky-per]{siunitx}
\usepackage{xfrac}
%
\begin{document}
$\Psi \varPsi$
\end{document}
Edit: My knowledge of the math packages used is quite limited. I don't even know whether I really need them all (amssymb and mathtools). If they are obsolete or not recommended for typesetting in English and German, or if something is missing for typical formula setting, I would be glad for any advice.
If \(f\left( x \right)\) is not continuous at \(x = a\), then \(f\left( x \right)\) is said to be discontinuous at this point. Figures \(1 - 4\) show the graphs of four functions, two of which are continuous at \(x = a\) and two are not.
Classification of Discontinuity Points
All discontinuity points are divided into discontinuities of the first and second kind.
The function \(f\left( x \right)\) has a discontinuity of the first kind at \(x = a\) if
There exist the left-hand limit \(\lim\limits_{x \to a - 0} f\left( x \right)\) and the right-hand limit \(\lim\limits_{x \to a + 0} f\left( x \right)\); these one-sided limits are finite.
Further there may be the following two options:
The right-hand limit and the left-hand limit are equal to each other: \[{\lim\limits_{x \to a - 0} f\left( x \right) }={ \lim\limits_{x \to a + 0} f\left( x \right).}\] Such a point is called a removable discontinuity. The right-hand limit and the left-hand limit are unequal: \[{\lim\limits_{x \to a - 0} f\left( x \right) }\ne{ \lim\limits_{x \to a + 0} f\left( x \right).}\] In this case the function \(f\left( x \right)\) has a jump discontinuity.
The function \(f\left( x \right)\) is said to have a discontinuity of the second kind (or a nonremovable or essential discontinuity) at \(x = a\), if at least one of the one-sided limits either does not exist or is infinite.
Solved Problems
Example 1. Investigate continuity of the function \(f\left( x \right) = {3^{\large\frac{x}{{1 - {x^2}}}\normalsize}}.\)
Example 2. Show that the function \(f\left( x \right) = {\large\frac{{\sin x}}{x}\normalsize}\) has a removable discontinuity at \(x = 0.\)
Example 3. Find the points of discontinuity of the function \(f\left( x \right) = \begin{cases} 1 - {x^2}, & x \lt 0 \\ x + 2, & x \ge 0 \end{cases}\) if they exist.
Example 4. Find the points of discontinuity of the function \(f\left( x \right) = \arctan {\large\frac{1}{x}\normalsize}\) if they exist.
Example 5. Find the points of discontinuity of the function \(f\left( x \right) = {\large\frac{{\left| {2x + 5} \right|}}{{2x + 5}}\normalsize}\) if they exist.

Example 1. Investigate continuity of the function \(f\left( x \right) = {3^{\large\frac{x}{{1 - {x^2}}}\normalsize}}.\)
Solution.
The given function is not defined at \(x = -1\) and \(x = 1\). Hence, this function has discontinuities at \(x = \pm 1\). To determine the type of the discontinuities, we find the one-sided limits:
\[
{\lim\limits_{x \to -1-0} {3^{\large\frac{x}{{1 - {x^2}}}\normalsize}} = {3^{\large\frac{-1}{-0}\normalsize}} }={ {3^\infty } = \infty ,\;\;\;}\kern-0.3pt {\lim\limits_{x \to -1+0} {3^{\large\frac{x}{{1 - {x^2}}}\normalsize}} = {3^{\large\frac{-1}{+0}\normalsize}} }={ {3^{-\infty }} = \frac{1}{{{3^\infty }}} = 0.} \]
Since the left-hand limit at \(x = -1\) is infinite, we have an essential discontinuity at this point.
\[
{\lim\limits_{x \to 1-0} {3^{\large\frac{x}{{1 - {x^2}}}\normalsize}} = {3^{\large\frac{1}{+0}\normalsize}} }={ {3^\infty } = \infty ,\;\;\;}\kern-0.3pt {\lim\limits_{x \to 1+0} {3^{\large\frac{x}{{1 - {x^2}}}\normalsize}} = {3^{\large\frac{1}{-0}\normalsize}} }={ {3^{-\infty }} = \frac{1}{{{3^\infty }}} = 0.} \]
Similarly, the left-hand limit at \(x = 1\) is infinite. Hence, here we also have an essential discontinuity.
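These one-sided limits can also be illustrated numerically (a Python sketch; the offset of 10^-3 is chosen to show the trend without floating-point overflow):

```python
def f(x):
    return 3.0 ** (x / (1 - x * x))

eps = 1e-3
# at x = 1: the left-hand limit is +infinity, the right-hand limit is 0
assert f(1 - eps) > 1e100
assert f(1 + eps) < 1e-100
# at x = -1: the left-hand limit is +infinity, the right-hand limit is 0
assert f(-1 - eps) > 1e100
assert f(-1 + eps) < 1e-100
```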
Tagged: finite group

If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order

Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 455
Let $G$ be a finite group.
The centralizer of an element $a$ of $G$ is defined to be \[C_G(a)=\{g\in G \mid ga=ag\}.\]
A conjugacy class is a set of the form \[\Cl(a)=\{bab^{-1} \mid b\in G\}\] for some $a\in G$.

(a) Prove that the centralizer of an element $a$ of $G$ is a subgroup of the group $G$.
(b) Prove that the order (the number of elements) of every conjugacy class in $G$ divides the order of the group $G$.

Problem 420
In this post, we study the
Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.

Problem 302
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by
\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\] where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal.

(a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
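As a numerical illustration of part (a) (a sketch for the special case $R=\mathbb{Z}$ and $G=Z_5$, not the requested proof): every element with augmentation zero can be rewritten as an integer combination of the elements $g^i - e$, because the coefficients of $e$ cancel.

```python
import random

n = 5  # G = Z_5 written multiplicatively; an element of ZG is a
       # coefficient vector (a_0, ..., a_4) representing sum_i a_i g^i

def augmentation(v):
    # epsilon(sum a_i g^i) = sum a_i
    return sum(v)

def generator(i):
    # coefficient vector of g^i - e
    v = [0] * n
    v[i] += 1
    v[0] -= 1
    return v

random.seed(0)
for _ in range(100):
    v = [random.randint(-5, 5) for _ in range(n)]
    v[0] -= sum(v)                  # force augmentation zero
    assert augmentation(v) == 0
    # v = sum_i v_i (g^i - e): the e-coefficients cancel exactly
    w = [0] * n
    for i in range(n):
        gi = generator(i)
        for j in range(n):
            w[j] += v[i] * gi[j]
    assert w == v
```

For part (b), note that $g^i - e = (g - e)(g^{i-1} + \dots + g + e)$, which is why the single element $g - e$ suffices in the cyclic case.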
CZ: How to edit an article

Quick start
This page is about
the code.
When you work on your article, it's mostly just like writing a long e-mail. But to make text bold or italicized, or to create links, you'll be using wiki "markup." Don't worry--it's not complicated! There are just a few bits of code you'll be using again and again:

To start a new paragraph, skip down two lines. Skipping down one line has no effect.

To make text bold, put three single quotation marks around it:
'''bold'''
To italicize text, use two single quotes:
''italicized text''
To link to a page, surround the text to be linked with double brackets:
[[link]]
To make a link that points to an article that is different from the text of the link, use a "pipe":
[[Biology|link]]
To start a new section, mark the section title like this, using equals signs (flush left):
== My New Section ==
To start a subsection, mark the subsection title like this (flush left):
=== My New Sub-Section ===
To make a bulleted list, precede a list item with * and make sure it's flush left:
* My bullet point
To make a numbered list, use #, like this:
# My numbered point
If you see some formatting you'd like to replicate, just click the "edit" button to see how it's done. This is how most of us learned! But there is a more complete list below.
Introduction
The Citizendium is a Wiki, which means that anyone can edit any page and save those changes immediately. Whether authors, editors, or constables,
anyone taking part in Citizendium can edit almost any article.
Just click on the "edit this page" tab at the top of the page, and you will see the editable text of that page. Make any changes you want to, and put a short explanation in the small field below the edit-box. When you have finished, press the "show preview" button to see how your changes will look. You can also see the difference between the page with your edits and the previous version by pressing the "show changes" button. When you're satisfied, press "Save page".
If you click on the "Discussion" tab you will see the "talk page", which contains comments about the article from other Citizendium users. Edit the page in the same way as an article page. Always sign your messages on talk pages. Signing is easy -- just type four tildes (~~~~) at the end of what you post. The software will convert this to your name or signature and a timestamp, i.e. Matt Innis 08:24, 16 April 2007 (CDT). Note that three tildes (~~~) will only sign your name, i.e. Matt Innis. Please use the four tildes on all talk pages.
You should not sign edits you make to regular articles. Each article's page-history function within the MediaWiki software keeps track of which user makes each change.

Minor edits
When you save a page that you've just changed, you can mark your changes as "minor" in the edit summary. Minor edits generally mean spelling corrections, formatting, and minor rearrangement of text - any small and uncontroversial changes. It is possible to "hide" minor edits when viewing the "recent changes" link on the left side navigation bar of the Citizendium. If you accidentally mark an edit as minor, please edit the source again, and in the new edit summary, say that your previous edit was a major, not a minor edit.

Wiki markup
The wiki markup is the syntax system you can use to format a Citizendium page. The table below lists some of the edits you can make. The left column shows the effects, the right column shows the wiki markup used to achieve them. Some of these edits can also be made using the formatting buttons at the top of any page's edit box.

Examples
What it looks like What you type
Start sections of articles as follows:
Subsection
Sub-subsection
==New section== ===Subsection=== ====Sub-subsection====
A single newline generally has no effect on the layout. These can be used to separate sentences in a paragraph. Some editors find that this aids editing.
But an empty line starts a new paragraph.
A single [[newline]] generally has no effect on the layout. These can be used to separate sentences in a paragraph. Some editors find that this makes editing clearer. But an empty line starts a new paragraph.
You can break lines
You can break lines<br/> without starting a new paragraph.
marks the end of a list item.
* It's easy to create a list: ** Start every line with a star. *** More stars means deeper levels. **** A newline in a list marks the end of a list item. * An empty line starts a new list. # Numbered lists are ## very organized ## easy to follow ### easier still ; Definition list : list of definitions ; item : the item's definition ; another item : the other item's definition * You can create mixed lists *# and nest them *#* like this *#*; can I mix definition list as well? *#*: yes *#*; how? *#*: it's easy as *#*:* a *#*:* b *#*:* c
A manual newline starts a new paragraph.
: A colon indents a line. A manual newline starts a new paragraph.
When you want to separate a block of text,
<blockquote> The '''blockquote''' command is useful, for example, to display a quotation. </blockquote>
(See formula on right):
IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines will not wrap; END IF <center>Centered text.</center>
A horizontal dividing line: this is above it
and this is below it.
A [[horizontal dividing line]]: this is above it ---- and this is below it. Links and URLs
What it looks like What you type
Edinburgh is the capital of Scotland.
Edinburgh is the capital of [[Scotland]].
Glasgow is the largest Scottish city.
Glasgow is the largest [[Scotland| Scottish]] city.
San Francisco also has public transportation.
San Francisco also has [[public transport]]ation. Examples include [[bus]]es, [[taxicab]]s, and [[streetcar]]s. [[micro]]<nowiki>second </nowiki>
See the Citizendium:Manual of Style.
See the [[Citizendium:Manual of Style]].
Citizendium:Manual of Style#Italics is a link to a section within another page.
#Links and URLs is a link to another section on the current page.#example is a link to an anchor that was created using
an id attribute
[[Citizendium:Manual of Style#Italics]] is a link to a section within another page. [[#Links and URLs]] is a link to another section on the current page. [[#example]] is a link to an anchor that was created using <div id="example">an id attribute </div>
Automatically hide stuff in parentheses: kingdom.
Automatically hide namespace: Village Pump.
Or both: Manual of Style
But not: [[Citizendium:Manual of Style#Links|]]
Automatically hide stuff in parentheses: [[kingdom (biology)|]]. Automatically hide namespace: [[Citizendium:Village Pump|]]. Or both: [[Citizendium: Manual of Style (headings)|]] But not: [[Citizendium: Manual of Style#Links|]]
See Citizendium:Pipe trick for details.
National sarcasm society is a page that does not exist yet.
[[National sarcasm society]] is a page that does not exist yet.
When adding a comment to a Talk page, sign it by adding three tildes:
or four to add user name plus date/time:
Five tildes gives the date/time alone:
When adding a comment to a Talk page, sign it by adding three tildes: : ~~~ or four to add the date/time: : ~~~~ Five tildes gives the date/time alone: : ~~~~~ #REDIRECT [[United States]] [[fr:Wikipédia:Aide]] '''What links here''' and '''Related changes''' pages can be linked as: [[Special:Whatlinkshere/ Citizendium:How to edit a page]] and [[Special:Recentchangeslinked/ Citizendium:How to edit a page]] A user's '''Contributions''' page can be linked as: [[Special:Contributions/UserName]] or [[Special:Contributions/192.0.2.0]]
Two ways to link to external (non-wiki) sources:
Two ways to link to external (non-wiki) sources: # Unnamed link: [http://www.nupedia.com/] (only used within article body for footnotes) # Named link: [http://www.nupedia.com Nupedia] ISBN 012345678X ISBN 0-12-345678-X
Text mentioning RFC 4321 anywhere
Text mentioning RFC 4321 anywhere
Special [[WP:AO]] links like [[As of 2006|this year]] needing future maintenance [[media:Sg_mrob.ogg|Sound]] Images
Only images that have been uploaded to Citizendium can be used. To upload images, use the Upload Wizard.
After you upload an image with the Upload Wizard, the basic code to place it will appear right on the image page. Some things you can do to vary the placement are described below.
All uploaded images are at the image list.
NOTE: Citizendium is not yet able to totally support all of the following coding for image resizing and such.
What it looks like / What you type:

A picture:
[[Image:Logo200gr.jpg]]

With alternative text:
[[Image:Logo200gr.jpg|citi key logo]]

Floating to the right side of the page and with a caption:
[[Image:Logo200gr.jpg|frame|Citizendium Encyclopedia]]

Floating to the right side of the page without a caption:
[[Image:Logo200gr.jpg|right|Citizendium Encyclopedia]]

A picture resized to 100 pixels:
[[Image:Logo200gr.jpg|100 px|citi key logo]]

A picture resized to 100 pixels with a caption:
[[Image:Logo200gr.jpg|thumb|100 px|citi key logo]]

A picture resized to 100 pixels floating in the center with a caption:
[[Image:Logo200gr.jpg|thumb|center|100 px|citi key logo]]

A failed attempt to resize to 100 pixels, float in the center with a caption using frame:
[[Image:Logo200gr.jpg|frame|center|100 px|citi key logo]]

Linking directly to the description page of an image:
[[:Image:Logo200gr.jpg]]
(such as any of the ones above) also leads to the description page
Linking directly to an image without displaying it: Linking directly to an image without displaying it: [[media:Logo200gr.jpg|Image of the citi key logo]] Using the div tag to separate images from text (note that this may allow images to cover text): Example: <div style="display:inline; width:220px; float:right;"> Place images here </div> Using wiki markup to make a table in which to place a vertical column of images (this helps edit links match headers, especially in Firefox browsers): Example: {| align=right |- | Place images here |} Character formatting
What it looks like What you type ''Emphasized text'' '''Strong emphasis''' '''''Even stronger emphasis'''''
A typewriter font for
A typewriter font for <tt>monospace text</tt> or for computer code: <code>int main()</code>
You can use small text for captions.
You can use <small>small text</small> for captions.
Better stay away from big text, unless it's within small text.
Better stay away from <big>big text</big>, unless <small> it's <big>within</big> small</small> text.
You can
You can also mark
You can <s>strike out deleted material</s> and <u>underline new material</u>. You can also mark <del>deleted material</del> and <ins>inserted material</ins> using logical markup. For backwards compatibility better combine this potentially ignored new <del>logical</del> with the old <s><del>physical</del></s> markup. <nowiki>Link → (''to'') the [[Citizendium FAQ]]</nowiki> <!-- comment here --> À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ § ¶ † ‡ • – — ‹ › « » ‘ ’ “ ” ™ © ® ¢ € ¥ £ ¤
ε
''x''<sub>1</sub> ''x''<sub>2</sub> ''x''<sub>3</sub> or <br/> ''x''₀ ''x''⃥ ''x''₂ ''x''₃ ''x''₄ <br/> ''x''₅ ''x''₆ ''x''₇ ''x''₈ ''x''₉ ''x''<sup>1</sup> ''x''<sup>2</sup> ''x''<sup>3</sup> or <br/> ''x''⁰ ''x''¹ ''x''² ''x''³ ''x''⁴ <br/> x⁵ x⁶ x⁷ x⁸ x⁹ ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]] α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇐ ⇓ ⇑ ⇔ → ↓ ↑ ← ↔
Ordinary text should use wiki markup for emphasis, and should not use
<math>\sin x + \ln y\,</math> sin ''x'' + ln ''y'' <math>\mathbf{x} = 0</math> '''x''' = 0 Obviously, ''x''² ≥ 0 is true when ''x'' is a real number. : <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> (see also: Chess symbols in Unicode) No or limited formatting - showing exactly what is being typed
A few different kinds of formatting will tell the Wiki to display things as you typed them - what you see, is what you get!
What it looks like What you type <nowiki> tags
The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: →
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: → </nowiki> <pre> tags The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre> Leading spaces
Leading spaces are another way to preserve formatting.
Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: → Table of contents
At the current status of the wiki markup language, having at least four headers on a page triggers the TOC to appear in front of the first header (or after introductory sections). Putting __TOC__ anywhere forces the TOC to appear at that point (instead of just before the first header). Putting __NOTOC__ anywhere forces the TOC to disappear. See also compact TOC for alphabet and year headings.
Tables
There are two ways to build tables:
* in special Wiki-markup (see How to make tables)
* with the usual HTML elements: <table>, <tr>, <td> or <th>.

References and citations

The markup <ref>Put text to appear in note here</ref> creates a numbered note. A collected citation list is created by <references/>. The markup <ref name=Smith>Put text to appear in note here</ref> gives a name to a note which can be marked up again by calling the name. No space can be used in the name. Named references are called upon later in the text by <ref name=Smith />. Guidance on citation style is in Help:citation style.

Citation tools for Citizendium
There are some tools available to assist with citations in Citizendium. See CZ:MediaWiki Citation Tools.
Enhancing your editing with JavaScript
You can enhance your experience with wiki markup and make editing easier through the use of JavaScript extensions. See Enhancing your editing with javascript extensions.
See also: Getting Started
Theorem. $\int_0^\infty \sin x \phantom. dx/x = \pi/2$.
Poof. For $x>0$ write $1/x = \int_0^\infty e^{-xt} \phantom. dt$, and deduce that $\int_0^\infty \sin x \phantom. dx/x$ is $$\int_0^\infty \sin x \int_0^\infty e^{-xt} \phantom. dt \phantom. dx = \int_0^\infty \left( \int_0^\infty e^{-tx} \sin x \phantom. dx \right) \phantom. dt = \int_0^\infty \frac{dt}{t^2+1},$$ which is the arctangent integral for $\pi/2$, QED.
The theorem is correct, and usually obtained as an application of contour integration, or of Fourier inversion ($\sin x / x$ is a multiple of the Fourier transform of the characteristic function of an interval). The poof, which is the first one I saw (given in a footnote in an introductory textbook on quantum physics), is not correct, because the integral does not converge absolutely. One can rescue it by writing $\int_0^M \sin x \phantom. dx/x$ as a double integral in the same way, obtaining $$\int_0^M \sin x \frac{dx}{x} = \int_0^\infty \frac{dt}{t^2+1} - \int_0^\infty e^{-Mt} (\cos M + t \cdot \sin M) \frac{dt}{t^2+1}$$ and showing that the second integral approaches $0$ as $M \rightarrow \infty$; but this detour makes for a much less appealing alternative to the usual proof by complex or Fourier analysis.
Still the double-integral trick can be used legitimately to evaluate $\int_0^\infty \sin^m x \phantom. dx/x^n$ for integers $m,n$ such that the integral converges absolutely (that is, with $2 \leq n \leq m$; NB unlike the contour or Fourier approach this technique applies also when $m \not\equiv n \bmod 2$). Write $(n-1)!/x^n = \int_0^\infty t^{n-1} e^{-xt} \phantom. dt$ to obtain $$\int_0^\infty \sin^m x \frac{dx}{x^n} = \frac1{(n-1)!} \int_0^\infty t^{n-1} \left( \int_0^\infty e^{-tx} \sin^m x \phantom. dx \right)\phantom. dt,$$ in which the inner integral is a rational function of $t$, and then the integral with respect to $t$ is elementary. For example, when $m=n=2$ we find $$\int_0^\infty \sin^2 x \frac{dx}{x^2}= \int_0^\infty t \frac2{t^3+4t} dt= 2 \int_0^\infty \frac{dt}{t^2+4} = \frac\pi2.$$ As a bonus, we recover a correct proof of our starting theorem by integration by parts:
$$\frac\pi2 = \int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty \sin^2 x \phantom. d(-1/x) = \int_0^\infty \frac1x d(\sin^2 x) = \int_0^\infty 2 \sin x \cos x \frac{dx}{x};$$since $2 \sin x \cos x = \sin 2x$, the desired$\int_0^\infty \sin x \phantom. dx/x = \pi/2$follows by a linear change of variable.
Exercise Use this technique to prove that $\int_0^\infty \sin^3 x \phantom. dx/x^2 = \frac34 \log 3$, and more generally $$\int_0^\infty \sin^3 x \frac{dx}{x^\nu} = \frac{3-3^{\nu-1}}{4} \cos \frac{\nu\pi}{2} \Gamma(1-\nu)$$ when the integral converges. [Both are in Gradshteyn and Ryzhik, page 449, formula 3.827; the $\nu=2$ case is 3.827#3, credited to D. Bierens de Haan, Nouvelles tables d'intégrales définies, Amsterdam 1867; the general case is 3.827#1, from Gröbner and Hofreiter's Integraltafel II, Springer: Vienna and Innsbruck 1958.] |
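The $m=n=2$ value above is easy to confirm numerically. A Python sketch (the truncation point $M$ is chosen so that the tail $\int_M^\infty dx/x^2 = 1/M$ stays well under the tolerance):

```python
import numpy as np

# Numerical sanity check: ∫_0^∞ sin²x/x² dx = π/2.
M, n = 2000.0, 2_000_000
x = np.linspace(0.0, M, n)
y = np.sinc(x / np.pi) ** 2   # np.sinc(t) = sin(πt)/(πt), so this is (sin x / x)²
dx = x[1] - x[0]
approx = dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule

assert abs(approx - np.pi / 2) < 2e-3
```

The agreement to a few parts in ten thousand is consistent with the truncation bound; this is a check, not a proof.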
I was trying to perform the contour integral of the digamma function $\oint\limits_C \psi(z)\,dz$ on the neighborhood (a small circle $-k+re^{it}$, $k \in \mathbb{Z}$ ) of $k$, before actually realizing that due to the residue theorem $\operatorname{res}(\psi(z),-k)=\frac{1}{2\pi i}\oint\limits_C \psi(z)\,dz=-1$.
Now I know the answer, nevertheless I'm still curious as how this could be done by directly integrating.
I know that $\int \psi(z)\,dz=\log\Gamma(z)$, so $$\int_{0}^{2\pi} \psi(-k+re^{it})ire^{it}\,dt=\log\Gamma(\frac{k+re^{2 \pi}}{k+re^{0}})$$ but integrating between $0$ and $2\pi$ would just give zero as a result (due to the symmetry in the function?) so I divided the integration limits: $$2\int_{0}^{\pi} \psi(-k+re^{it})ire^{it}\,dt=2\log\Gamma(\frac{k+re^{ \pi}}{k+re^{0}})$$ When I do numerical approximations to this I do get the result I'm looking for, i.e. $-2\pi i$, but I do not know how to formalize this calculation in the limit $r\rightarrow0$. Could someone please help me?
Thanks in advance for your ideas! |
I am confused about the connection between the state $| \psi \rangle$ of a quantum system and the corresponding wave function $\psi(x)$ (at a given time). I have been told that for every state $| \psi \rangle$ we can define $\psi(x) \equiv \langle x | \psi \rangle$, where $|x\rangle$ are eigenkets of the position operator. So basically the wave function is the continuous coefficient in the expansion of $| \psi \rangle$ in the basis eigenkets $| x \rangle$, right? But this can be true for any state of the system only if the state space of the system is a subspace of the space spanned by position eigenkets $| x \rangle$, right? Moreover this has to be true for every quantum system. How is this possible?
The identification that $ ψ(x)≡⟨x|ψ⟩$ is completely correct, and this is the way to treat wavefunctions in 'grown-up' quantum mechanics. In short,
So basically the wave function is the continuous coefficient in the expansion of $|ψ⟩$ in the basis eigenkets $|x⟩$, right?
Yes, and
But this can be true for any state of the system only if the state space of the system is a subspace of the space spanned by position eigenkets $|x⟩$, right?
yes.
How is this possible?
The position is an observable and
all observables have orthonormal bases which span the Hilbert space. More generally, the fact that the eigenkets $| x \rangle$ span the Hilbert space simply tells you that all particles must be somewhere. Mathematically, it says that there does not exist any physical state $| \psi \rangle$ such that $\langle x |\psi \rangle=0$ for all $x$, which is precisely the statement that every physical state must be located at some position (or possibly several).
There's nothing really mysterious about this.
Edit, in response to your comment:
Thank you for your answer! I just don't fully understand how those position eigenkets span, say, $n$-dimensional state space $\mathcal H$ of the system. It looks like for such system there has to be set of only $n$ independent position ket states which are capable of spanning the $\mathcal H$, but this makes no sense to me. What would these "special" position kets represent? Clearly I am doing something wrong.
There are two distinct confusions here. First of all, the state space $\mathcal H$ of the system is in general
infinite dimensional. As such, no finite set of position kets $| x \rangle$ can span the whole space; instead, you need to sum over a whole continuum, and indeed over all real $x$:$$| \psi \rangle=\int_{-\infty}^\infty \text d x |x\rangle\langle x | \psi \rangle.\tag1$$
Secondly, some systems can indeed be described well using a finite dimension. When this happens, there exists a set of states $\{| \psi_1 \rangle, \ldots,| \psi_n \rangle\}\subset\mathcal H$ whose span $\mathcal H_0$ contains (most of) the system's evolution. In this case, $\mathcal H_0$ is still contained within $\mathcal H$, so all the states in it can still be expressed as an infinite sum over position kets as in (1). By going to finite dimension, what you lose is the ability to express a position ket $| x \rangle$ in terms of a basis of your subspace, not the other way around. |
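A finite-grid caricature can make this concrete (illustrative only — a real position "basis" is a continuum of generalized kets, not a grid of vectors). On a grid, the components $\langle x_k|\psi\rangle$ are simply the entries of an array, and they carry the whole state:

```python
import numpy as np

# Discretized toy model: position kets become grid basis vectors and
# ψ(x) = ⟨x|ψ⟩ is the array of components of the state.
dx = 0.01
x = np.arange(-5.0, 5.0, dx)
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)   # normalized Gaussian state

# ⟨ψ|ψ⟩ = ∫ |⟨x|ψ⟩|² dx ≈ Σ_k |ψ(x_k)|² dx: the components determine the
# norm (and, via the resolution of identity, the whole state).
norm = np.sum(np.abs(psi) ** 2) * dx
assert abs(norm - 1.0) < 1e-6

# "Every state is somewhere": a nonzero state cannot have ⟨x|ψ⟩ = 0 everywhere.
assert np.any(psi != 0.0)
```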
Answer
The judge should rule in favor of the police officer.
Work Step by Step
We use the equation for the velocity considering the Doppler effect: $v = \frac{c\Delta f}{2f} = \frac{(10.8\times10^8 \ km/h)(15.6\times10^3 \ Hz)}{2(70\times10^9 \ Hz)} = 120 \ km/h$ The judge should rule in favor of the police officer, for this is the same as the recorded speed. |
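The arithmetic in the work step can be checked in a couple of lines of Python (numeric values copied from the solution above):

```python
# Doppler speed estimate: v = c·Δf / (2f).
c = 10.8e8        # speed of light in km/h (3.0e5 km/s × 3600 s/h)
delta_f = 15.6e3  # measured frequency shift, Hz
f = 70e9          # radar frequency, Hz

v = c * delta_f / (2 * f)
assert abs(v - 120.34) < 0.05   # ≈ 120 km/h, matching the recorded speed
```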
A sequence of real numbers is a function \(f\left( n \right)\) whose domain is the set of positive integers \(n.\) The values \({a_n} = f\left( n \right)\) taken by the function are called the terms of the sequence.
The set of values \({a_n} = f\left( n \right)\) is denoted by \(\left\{ {{a_n}} \right\}.\)
A sequence \(\left\{ {{a_n}} \right\}\) has the limit \(L\) if for every \(\varepsilon \gt 0\) there exists an integer \(N \gt 0\) such that if \(n \ge N,\) then \(\left| {{a_n} - L} \right| \le \varepsilon .\) In this case we write:
\[\lim\limits_{n \to \infty } {a_n} = L.\]
The sequence \(\left\{ {{a_n}} \right\}\) has the limit \(\infty\) if for every positive number \(M\) there is an integer \(N \gt 0\) such that if \(n \ge N\) then \({a_n} \gt M.\) In this case we write
\[\lim\limits_{n \to \infty } {a_n} = \infty.\]
If the limit \(\lim\limits_{n \to \infty } {a_n} = L\) exists and \(L\) is finite, we say that the sequence converges. Otherwise the sequence diverges.
Squeezing Theorem.
Suppose that \(\lim\limits_{n \to \infty } {a_n} = \lim\limits_{n \to \infty } {b_n} = L\) and \(\left\{ {{c_n}} \right\}\) is a sequence such that \({a_n} \le {c_n} \le {b_n}\) for all \(n \gt N,\) where \(N\) is a positive integer. Then
\[\lim\limits_{n \to \infty } {c_n} = L.\]
The sequence \(\left\{ {{a_n}} \right\}\) is bounded if there is a number \(M \gt 0\) such that \(\left| {{a_n}} \right| \le M\) for every positive \(n.\)
Every convergent sequence is bounded. Every unbounded sequence is divergent.
The sequence \(\left\{ {{a_n}} \right\}\) is monotone increasing if \({a_n} \le {a_{n + 1}}\) for every \(n \ge 1.\) Similarly, the sequence \(\left\{ {{a_n}} \right\}\) is called monotone decreasing if \({a_n} \ge {a_{n + 1}}\) for every \(n \ge 1.\) The sequence \(\left\{ {{a_n}} \right\}\) is called monotonic if it is either monotone increasing or monotone decreasing.
Solved Problems
Click a problem to see the solution.
Example 1. Write a formula for the \(n\)th term \({a_n}\) of the sequence and determine its limit (if it exists).
Example 2. Write a formula for the \(n\)th term \({a_n}\) of the sequence and determine its limit (if it exists).
Example 3. Determine whether the sequence \(\left\{ {\large\frac{{2n + 3}}{{5n - 7}}\normalsize} \right\}\) converges or diverges.
Example 4. Does the sequence \(\left\{ {\large\frac{{{n^2}}}{{{2^n}}}\normalsize} \right\}\) converge or diverge?
Example 5. Determine whether the sequence \(\left\{ {\sqrt {n + 2} - \sqrt {n + 1} } \right\}\) converges or diverges.
Example 6. Determine whether the sequence \(\left\{ {\large\frac{{5n - 7}}{{3n + 4}}\normalsize} \right\}\) is increasing, decreasing, or neither.
Example 7. Determine whether the sequence \(\left\{ {\large\frac{{{2^n} + 3}}{{{2^n} + 1}}\normalsize} \right\}\) is increasing, decreasing, or not monotonic.

Example 1. Write a formula for the \(n\)th term \({a_n}\) of the sequence and determine its limit (if it exists).
Solution.
Here \({a_n} = {\large\frac{n}{{n + 2}}\normalsize}.\) Then the limit is
\[
{\lim\limits_{n \to \infty } \frac{n}{{n + 2}} } = {\lim\limits_{n \to \infty } \frac{{n + 2 - 2}}{{n + 2}} } = {\lim\limits_{n \to \infty } \left( {1 - \frac{2}{{n + 2}}} \right) } = {\lim\limits_{n \to \infty } 1 }-{ \lim\limits_{n \to \infty } \frac{2}{{n + 2}} }={ 1 - 0 = 1.} \]
Thus, the sequence converges to \(1.\)
Example 2.Write a formula for the \(n\)th term of \({a_n}\) of the sequence and determine its limit (if it exists).
Solution.
We easily can see that the \(n\)th term of the sequence is given by the formula \({a_n} = {\large\frac{{{{\left( { - 1} \right)}^{n - 1}}n}}{{{2^{n - 1}}}}\normalsize}.\) Since \( - n \le {\left( { - 1} \right)^{n - 1}}n \le n,\) we can write:
\[{\frac{{ - n}}{{{2^{n - 1}}}} \le \frac{{{{\left( { - 1} \right)}^{n - 1}}n}}{{{2^{n - 1}}}} }\le{ \frac{n}{{{2^{n - 1}}}}.}\]
Using L’Hopital’s rule, we obtain
\[
{\lim\limits_{x \to \infty } \left( { \pm \frac{x}{{{2^{x - 1}}}}} \right) } = { \pm \lim\limits_{x \to \infty } \frac{x}{{{2^{x - 1}}}} } = { \pm \lim\limits_{x \to \infty } \frac{1}{{{2^{x - 1}}\ln 2}} }={ 0.} \]
Hence, by the squeezing theorem, the limit of the initial sequence is
\[{\lim\limits_{n \to \infty } \frac{{{{\left( { - 1} \right)}^{n - 1}}n}}{{{2^{n - 1}}}} }={ 0.}\] |
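The squeeze argument of Example 2 is easy to check numerically (a Python sketch):

```python
import numpy as np

# a_n = (-1)^(n-1) n / 2^(n-1) is squeezed between ∓ n / 2^(n-1),
# both of which tend to 0.
n = np.arange(1, 60)
a = (-1.0) ** (n - 1) * n / 2.0 ** (n - 1)
upper = n / 2.0 ** (n - 1)

assert np.all(np.abs(a) <= upper + 1e-15)   # squeeze bounds hold
assert abs(a[-1]) < 1e-15                   # terms are essentially 0 by n = 59
```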
Custom software development is expensive, especially if you include the cost of maintaining the code over time. A common strategy is therefore to reuse code that has already been created, or to share the development and maintenance costs between different parties.

Different types of reuse

Reuse models can differ…
A while ago, I came across the concept of the explorable explanation as envisioned by Bret Victor and loved it immediately. Any document can be so much more powerful when you can interact with it. Stephen Wolfram recognized the value as well when his company introduced the computable document format. I struggled…
Legacy code becomes a concern for any company with a reasonably sized team pretty quickly. In this article Wayne Lobb from Foliage provides us with industry metrics on code growth (unfortunately you need to register to download). The quick summary is that on average a developer can either create…
The first assumption is that developer productivity declines at a constant rate $r$: (1) $\Large \frac{dP(t)}{dt}$ = $\large -r \cdot P(t)$ Divide by $P(t)$: (2) $\Large \frac{1}{P(t)}$ $\cdot$ $\Large \frac{dP(t)}{dt}$ = $-r$ (3) $\Large \frac{dP(t)}{P(t)}$ = $-r dt$ (4) $\Large \int \frac{dP(t)}{P(t)}$ = $-r \int dt$ (5) $\large ln(P(t))=…
At $t = 0$, $P_0$ is defined to be developer productivity without legacy obligations: (1) $\large P(0) = P_0$ At time $t$ the developer productivity is given by the following equation, where $r$ denotes the rate at which productivity declines over time: (2) $\large P(t) = P_0 \cdot e^{-r t}$…
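The closed form can be sanity-checked against a direct numerical integration of the assumed law $dP/dt = -rP$ (the parameter values below are purely illustrative):

```python
import math

# Compare P(t) = P0·e^{-rt} with Euler integration of dP/dt = -r·P.
P0, r, T, steps = 1.0, 0.3, 5.0, 100_000
dt = T / steps
P = P0
for _ in range(steps):
    P += -r * P * dt          # one Euler step of the decay law

closed = P0 * math.exp(-r * T)
assert abs(P - closed) < 1e-4   # Euler error shrinks with the step size
```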
Eric Ries' The Lean Startup is an interesting read that bridges the concepts introduced in Steve Blank's Four Steps to the Epiphany, Lean Manufacturing, and Agile development methods in software engineering. Eric describes how he learned the hard way in his first startup that the biggest Waste any company can… |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
Given a sketch of a function $f(x)$ , how to sketch the graph of $y^2=f(x)$?
Is there any relationship between the features of two graphs?
Like vertical asymptotes or the zeros.
As far as I'm aware there is no answer that fits all cases here, as different functions will behave in different ways under the square root, but one thing you must do (assuming you are trying to graph a real function) is take the modulus of $f(x)$ first, since if $f(x) < 0$, then $\sqrt{f(x)}\notin\mathbb{R}$. Now you are trying to graph $y=\pm\sqrt{\mid f(x)\mid}$ so there are a few different forms this can take depending on $f(x)$.
If $f(x) = ax^{2}$, for some $a\in\mathbb{R}$, then the graph of $y$ will be a straight line "V" shape. But then don't forget the $\pm$ in the expression for $y$ which gives you an overall "X" shape to the graph.
If $f(x) = e^{x}$, then $y=e^{\frac{x}{2}}$ will be a similar shape to $f(x)$. Again the $\pm$ part of the expression will give you another curve passing through $(0,-1)$ and tending off to $-\infty$.
Another interesting example is $f(x)=x^{3}$ which I will leave for you to think about.
So my point is given a sketch of $f(x)$ you can see what form the function approximately takes and, given the above examples, you can estimate what $y$ should look like.
If you are sketching over $\Bbb R$, the graph of $y$ has no real points wherever $f(x)<0$. $y$ will have the same zeroes and poles as $f(x)$, as well as a vertical tangent line at $f(x)=0$. This is because $$y=\pm\sqrt {f(x)}$$ so $$\frac{dy}{dx}=\pm \frac{f'(x)}{2\sqrt{f(x)}}$$ Only if $f'(x)=f(x)=0$ will there not necessarily be a vertical tangent, since $\frac{dy}{dx}$ will be indeterminate. For each value of $f(x)$ there will be two values of $y$, due to $y^2$. |
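A short numpy sketch of the branch picture for $f(x)=x^3$ (one example among many — the same recipe works for any $f$):

```python
import numpy as np

# For f(x) = x³, the curve y² = f(x) only has real points where f(x) ≥ 0,
# i.e. x ≥ 0, and each such x contributes two branches y = ±√(f(x)).
x = np.linspace(-2.0, 2.0, 401)
f = x ** 3
mask = f >= 0                      # domain of the curve y² = f(x)
y_plus = np.sqrt(f[mask])
y_minus = -np.sqrt(f[mask])

assert np.allclose(y_plus ** 2, f[mask])
assert np.allclose(y_minus ** 2, f[mask])
assert np.all(x[mask] >= 0)        # no real points where f(x) < 0
```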
I'm trying to show how the discrete Fourier transform (DFT) arises from the equation for the continuous-time Fourier Transform. I've run into an interesting caveat which I can't seem to find an explanation to anywhere. My 'derivation' goes something like this:
Let's consider the continuous time Fourier transform equation $$ F(\omega) = \int_{-\infty}^{+\infty} \, f(t) \, e^{-2\pi i \omega t} \,dt $$ If $f(t)$ were to be discretised at $N$ evenly spaced sampling points with the sampling interval $dt$, then it could be thought of as a continuous function of time with Dirac deltas at each sampling point. The integral of each of the deltas would be equal to the value of $f(t)$ at that point. Let's denote $f(t)$ at the $k$-th sampling point (0-indexed) as $f_k$. Now, the integral is non-zero only at those delta functions, so the entire above equation can be written as the sum $$ F(\omega) = dt \times \sum_{k=0}^{N-1} \, f_k\, e^{-2\pi i \omega (k dt)} $$ Even though the sum is finite, this is a continuous function of $\omega$. $F(\omega)$ turns out to be periodic in $\omega$ and therefore just a single period of it needs to be kept to retain all of the information about the sampled version of $f(t)$ it came from.
All of this is (I believe) fine. Here comes the tricky bit. I want to then say that this single period of $F(\omega)$ can be sampled down to $N$ evenly spaced points which would in fact give the equation for the DFT. I say that:
"It is one of the implications of the Shannon information theory (...) that for a general case of $N$ arbitrary bits of information, there does not exist a smaller set of bits which can be used to represent all of the information of the original $N$ bits. This places a lower bound on how many samples of $F(\omega)$ one can take." Question 1: Is this a correct statement about implications of the information theory? Question 2: I know that N bits of $F(\omega)$ are sufficient, but how do I prove it? To put it differently: How can I one show that for any arbitrary signal I will not loose any information about $f(t)$ by sampling $F(\omega)$ at only $N$ points. |
Definition and Properties of the Matrix Exponential
Consider a square matrix \(A\) of size \(n \times n,\) elements of which may be either real or complex numbers. Since the matrix \(A\) is square, the operation of raising to a power is defined, i.e. we can calculate the matrices
\[
{{A^0} = I,\;\;{A^1} = A,\;\;{A^2} = A \cdot A,\;\;{A^3} = {A^2} \cdot A,\; \ldots ,\;{A^k} = \underbrace {A \cdot A \cdots A}_\text{k times},} \]
where \(I\) denotes a unit matrix of order \(n.\)
We form the infinite matrix power series
\[{I + \frac{t}{{1!}}A + \frac{{{t^2}}}{{2!}}{A^2} }+{ \frac{{{t^3}}}{{3!}}{A^3} + \cdots }+{ \frac{{{t^k}}}{{k!}}{A^k} + \cdots }\]
The sum of the infinite series is called the matrix exponential and denoted as \({e^{tA}}:\)
\[{e^{tA}} = \sum\limits_{k = 0}^\infty {\frac{{{t^k}}}{{k!}}{A^k}} .\]
This series is absolutely convergent.
In the limiting case, when the matrix consists of a single number \(a,\) i.e. has a size of \(1 \times 1,\) this formula is converted into a known formula for expanding the exponential function \({e^{at}}\) in a Maclaurin series:
\[
{{e^{at}} = 1 + at + \frac{{{a^2}{t^2}}}{{2!}} + \frac{{{a^3}{t^3}}}{{3!}} + \cdots } = {\sum\limits_{k = 0}^\infty {\frac{{{a^k}{t^k}}}{{k!}}} .} \]
The matrix exponential has the following main properties:
If \(A\) is a zero matrix, then \({e^{tA}} = {e^0} = I\) (\(I\) is the identity matrix);
If \(A = I,\) then \({e^{tI}} = {e^t}I;\)
\({e^A}{e^{ - A}} = I\) for every square matrix \(A,\) so the matrix exponential is always invertible;
\({e^{mA}}{e^{nA}} = {e^{\left( {m + n} \right)A}},\) where \(m, n\) are arbitrary real or complex numbers;
The derivative of the matrix exponential is given by the formula \[\frac{d}{{dt}}\left( {{e^{tA}}} \right) = A{e^{tA}}.\]
Let \(H\) be a nonsingular linear transformation. If \(A = HM{H^{ - 1}},\) then \({e^{tA}} = H{e^{tM}}{H^{ - 1}}.\)

The Use of the Matrix Exponential for Solving Homogeneous Linear Systems with Constant Coefficients
The matrix exponential can be successfully used for solving systems of differential equations. Consider a system of linear homogeneous equations, which in matrix form can be written as follows:
\[\mathbf{X}'\left( t \right) = A\mathbf{X}\left( t \right).\]
The general solution of this system is represented in terms of the matrix exponential as
\[\mathbf{X}\left( t \right) = {e^{tA}}\mathbf{C},\]
where \(\mathbf{C} =\) \( {\left( {{C_1},{C_2}, \ldots ,{C_n}} \right)^T}\) is an arbitrary \(n\)-dimensional vector. The symbol \(^T\) denotes transposition. In this formula, we cannot write the vector \(\mathbf{C}\) in front of the matrix exponential as the matrix product \(\mathop {\mathbf{C}}\limits_{\left[ {n \times 1} \right]} \mathop {{e^{tA}}}\limits_{\left[ {n \times n} \right]} \) is not defined.
For an initial value problem (Cauchy problem), the components of \(\mathbf{C}\) are expressed in terms of the initial conditions. In this case, the solution of the homogeneous system can be written as
\[{\mathbf{X}\left( t \right) = {e^{tA}}{\mathbf{X}_0},\;\;\text{where}\;\;{\mathbf{X}_0} = \mathbf{X}\left( {t = {t_0}} \right).}\]
Thus, the solution of the homogeneous system becomes known, if we calculate the corresponding matrix exponential. To calculate it, we can use the infinite series, which is contained in the definition of the matrix exponential. Often, however, this allows us to find the matrix exponential only approximately. To solve the problem, one can also use an algebraic method based on the latest property listed above. Consider this method and the general pattern of solution in more detail.
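Before turning to the algebraic method, the series definition itself can be sanity-checked numerically. A Python sketch comparing the partial sums of \({e^{tA}}\) with a known closed form (this particular \(A\), a rotation generator, is chosen purely for illustration):

```python
import numpy as np

# For A = [[0, 1], [-1, 0]] we have A² = -I, so the series collapses to
# e^{tA} = cos(t)·I + sin(t)·A = [[cos t, sin t], [-sin t, cos t]].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 0.7

term = np.eye(2)
expm = np.eye(2)
for k in range(1, 30):           # partial sum of Σ t^k A^k / k!
    term = term @ (t * A) / k
    expm = expm + term

exact = np.array([[np.cos(t), np.sin(t)],
                  [-np.sin(t), np.cos(t)]])
assert np.allclose(expm, exact)
```

Thirty terms already reach machine precision here; in practice one would use a library routine rather than the raw series.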
Algorithm for Solving the System of Equations Using the Matrix Exponential

We first find the eigenvalues \({\lambda _i}\) of the matrix (linear operator) \(A;\)
Calculate the eigenvectors and (in the case of multiple eigenvalues) generalized eigenvectors;
Construct the nonsingular linear transformation matrix \(H\) using the found regular and generalized eigenvectors, and compute the corresponding inverse matrix \({H^{ - 1}};\)
Find the Jordan normal form \(J\) for the given matrix \(A,\) using the formula \[J = {H^{ - 1}}AH.\] Note: In the process of finding the regular and generalized eigenvectors, the structure of each Jordan block often becomes clear. This allows one to write down the Jordan form without calculating it by the above formula.
Knowing the Jordan form \(J,\) compose the matrix \({e^{tJ}}.\) The corresponding formulas for this conversion are derived from the definition of the matrix exponential. The matrices \({e^{tJ}}\) for some simple Jordan forms are shown in the following table:
Compute the matrix exponential \({e^{tA}}\) by the formula \[{e^{tA}} = H{e^{tJ}}{H^{ - 1}}.\]
Write the general solution of the system: \[\mathbf{X}\left( t \right) = {e^{tA}}\mathbf{C}.\] For a second order system, the general solution is given by \[{\mathbf{X}\left( t \right) = \left[ {\begin{array}{*{20}{c}} x\\ y \end{array}} \right] }={ {e^{tA}}\left[ {\begin{array}{*{20}{c}} {{C_1}}\\ {{C_2}} \end{array}} \right],}\] where \({C_1},{C_2}\) are arbitrary constants.

Solved Problems
Click a problem to see the solution. |
It is very simple:
NSP says: let us have a spherically symmetric density. Then
If it is concentrated in $\{r \ge R\}$, then inside the cavity $\{r<R\}$ the pull is $0$ If it is concentrated in $\{r \le R\}$, then in $\{r>R\}$ the pull is as if it were created by the same mass concentrated at the center
So, outer layers don't pull us and the pull of the layers below is $G\times \frac{4}{3}\pi\mu r^3 \times r^{-2}= \frac{4\pi G\mu}{3} r$.
In particular, travelling to the center of the Earth in the framework of this model we would see decaying gravity. Also, if we drill a tunnel and drop something, then the movement (barring air resistance) will be described by $r'' = - gr/R$, where $R$ is the radius and $g$ the gravitational acceleration at the surface. The period is then $\frac{2\pi }{\sqrt{g/R}}=2\pi \sqrt{\frac{R}{g}}\approx 5071\ \text{s} \approx 84\ \text{min}$, which coincides with the period of the circular orbit with radius $R$ (it is given by the same formula, as the speed is $\sqrt{gR}$ and the length of the orbit is $2\pi R$). |
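Plugging in rough values for Earth gives the quoted figure (a sketch; $R$ and $g$ are approximate, so the last digits of the period are not meaningful):

```python
import math

# Period of the radial oscillation through the tunnel: T = 2π √(R/g).
R = 6.371e6   # Earth's mean radius, m
g = 9.81      # surface gravity, m/s²

T = 2 * math.pi * math.sqrt(R / g)
assert 5000 < T < 5130        # ≈ 5060-5070 s, i.e. about 84 minutes
```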
I am presently reading this [1] paper on covariant phase space and I have difficulty understanding the following formalism developed:
In the paper (section $2.2$, pg. $12$), the authors have introduced the notion of pre-phase space and go on to reinterpret differential forms by their functional counterpart. Instead of viewing $\delta$ as the variation of a functional, it is viewed as an exterior derivative living in the configuration space. Thus, the action of $\delta \phi^{a}$ is given by
$$\delta \phi^{a}\left(\int d^{d}x'f^{b}\left(\phi,x' \right)\frac{\delta}{\delta \phi^{b}(x')} \right)=f^{a}(\phi,x)$$ They go on to derive a formula for the pre-symplectic current by making the assumption that $\delta^{2}=0$ (which holds since the functional is being viewed as an exterior derivative). Finally, in section $2.3$, they follow this formalism to define a vector field as follows $$X_{\xi}\equiv\int d^{d}x\mathcal{L}_{\xi}\phi^{a}(x)\frac{\delta}{\delta \phi^{a}}$$ such that $\cdot$ in $X_{\xi}\cdot \delta \phi^{a}(x)$ denotes the insertion of a vector into the first argument of the differential form.
I don't follow the formalism used: are they stating that the differential forms have the above-stated form in the functional space? If this is so, then how does one prove this and show that the assumption $\delta^{2}=0$ holds? Also, in section $2.2$, a statement is made that it is convenient to re-interpret the functionals $\Theta$ (the symplectic potential) and $C$ (a general $(d-2)$-form) as one-forms on the pre-phase space. How is this obvious?
[1]: https://arxiv.org/abs/1906.08616 |
Volumetric thermal expansion coefficient, abbreviated as \(\alpha_v\) (Greek symbol alpha), also known as coefficient of volumetric thermal expansion, is the ratio of the change in size of a material to its change in temperature.
Volumetric Thermal Expansion Coefficient FORMULA
\(\large{ \alpha_v = \frac { 1 }{ V } \; \frac {\Delta V } {\Delta T} }\)
Where:
\(\large{ \alpha_v }\) (Greek symbol alpha) = volumetric thermal expansion coefficient
\(\large{ \Delta V }\) = volume differential
\(\large{ \Delta T }\) = temperature differential
\(\large{ V }\) = volume of the object |
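A minimal sketch of the formula with made-up sample values (not data for any particular material):

```python
# α_v = (1/V)·(ΔV/ΔT): a 2.000 L sample that expands by 0.042 L over 100 K.
V = 2.000          # initial volume, L
delta_V = 0.042    # change in volume, L
delta_T = 100.0    # change in temperature, K

alpha_v = (1.0 / V) * (delta_V / delta_T)
assert abs(alpha_v - 2.1e-4) < 1e-9    # 2.1×10⁻⁴ per kelvin
```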
It seems to me that the $V$ function can be easily expressed by the $Q$ function and thus the $V$ function seems to be superfluous to me. However, I'm new to reinforcement learning so I guess I got something wrong.
Definitions
Q-learning and V-learning are defined in the context of Markov Decision Processes. An MDP is a 5-tuple $(S, A, P, R, \gamma)$, where:

$S$ is a set of states (typically finite)
$A$ is a set of actions (typically finite)
$P(s, s', a) = P(s_{t+1} = s' | s_t = s, a_t = a)$ is the probability of getting from state $s$ to state $s'$ with action $a$
$R(s, s', a) \in \mathbb{R}$ is the immediate reward after going from state $s$ to state $s'$ with action $a$ (it seems that usually only $s'$ matters)
$\gamma \in [0, 1]$ is called the discount factor and determines whether one focuses on immediate rewards ($\gamma = 0$), the total reward ($\gamma = 1$), or some trade-off
A policy $\pi$, according to Reinforcement Learning: An Introduction by Sutton and Barto, is a function $\pi: S \rightarrow A$ (this could be probabilistic).
According to Mario Martin's slides, the $V$ function is$$V^\pi(s) = E_\pi \{R_t | s_t = s\} = E_\pi \{\sum_{k=0}^\infty \gamma^k r_{t+k+1} | s_t = s\}$$and the $Q$ function is$$Q^\pi(s, a) = E_\pi \{R_t | s_t = s, a_t = a\} = E_\pi \{\sum_{k=0}^\infty \gamma^k r_{t+k+1} | s_t = s, a_t=a\}$$

My thoughts
The $V$ function states what the expected overall value (not reward!) of a state $s$ under the policy $\pi$ is.
The $Q$ function states what the value of a state $s$ and an action $a$ under the policy $\pi$ is.
This means, $$Q^\pi(s, \pi(s)) = V^\pi(s)$$
Right? So why do we have the value function at all? (I guess I mixed up something) |
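For tabular values, the relation $Q^\pi(s, \pi(s)) = V^\pi(s)$ really is a one-liner; the numbers below are made up for illustration:

```python
import numpy as np

# Tiny tabular example: with a fixed deterministic policy π,
# V(s) is just Q(s, π(s)) read out of the Q-table.
Q = np.array([[1.0, 2.0],      # Q[s, a] for 3 states, 2 actions
              [0.5, 0.0],
              [3.0, 3.5]])
pi = np.array([1, 0, 1])        # π(s): the action chosen in each state

V = Q[np.arange(3), pi]         # V^π(s) = Q^π(s, π(s))
assert np.allclose(V, [2.0, 0.5, 3.5])

# For the greedy policy, V(s) = max_a Q(s, a).
assert np.allclose(Q.max(axis=1), [2.0, 0.5, 3.5])
```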
A first order differential equation \(y’ = f\left( {x,y} \right)\) is called a separable equation if the function \(f\left( {x,y} \right)\) can be factored into the product of two functions of \(x\) and \(y:\)
\[f\left( {x,y} \right) = p\left( x \right)h\left( y \right),\]
where \(p\left( x \right)\) and \(h\left( y \right)\) are continuous functions.
Considering the derivative \(y'\) as the ratio of two differentials \({\large\frac{{dy}}{{dx}}\normalsize},\) we move \(dx\) to the right side and divide the equation by \(h\left( y \right):\)
\[{\frac{{dy}}{{dx}} = p\left( x \right)h\left( y \right),\;\; }\Rightarrow {\frac{{dy}}{{h\left( y \right)}} = p\left( x \right)dx.}\]
Of course, we need to make sure that \(h\left( y \right) \ne 0.\) If there’s a number \({y_0}\) such that \(h\left( {{y_0}} \right) = 0,\) then the constant function \(y = {y_0}\) will also be a solution of the differential equation. Division by \(h\left( y \right)\) causes the loss of this solution.
By denoting \(q\left( y \right) = {\large\frac{1}{{h\left( y \right)}}\normalsize},\) we write the equation in the form
\[q\left( y \right)dy = p\left( x \right)dx.\]
We have separated the variables so now we can integrate this equation:
\[{\int {q\left( y \right)dy} }={ \int {p\left( x \right)dx} }+{ C,}\]
where \(C\) is an integration constant.
Calculating the integrals, we get the expression
\[Q\left( y \right) = P\left( x \right) + C,\]
representing the general solution of the separable differential equation.
Solved Problems
Example 1. Solve the differential equation \({\large\frac{{dy}}{{dx}}\normalsize} = y\left( {y + 2} \right).\)
Example 2. Solve the differential equation \(\left( {{x^2} + 4} \right)y' = 2xy.\)
Example 3. Find all solutions of the differential equation \(y' = - x{e^y}.\)
Example 4. Find a particular solution of the differential equation \(x\left( {y + 2} \right)y' = \ln x + 1\) provided \(y\left( 1 \right) = - 1.\)
Example 5. Solve the differential equation \(y'{\cot ^2}x + \tan y = 0.\)
Example 6. Find a particular solution of the equation \(\left( {1 + {e^x}} \right)y' = {e^x}\) satisfying the initial condition \(y\left( 0 \right) = 0.\)
Example 7. Solve the equation \(y\left( {1 + xy} \right)dx = x\left( {1 - xy} \right)dy.\)
Example 8. Find the general solution of the differential equation \(\left( {x + y + 1} \right)dx + \left( {4x + 4y + 10} \right)dy = 0.\)

Example 1. Solve the differential equation \({\large\frac{{dy}}{{dx}}\normalsize} = y\left( {y + 2} \right).\)
Solution.
In the given case \(p\left( x \right) = 1\) and \(h\left( y \right) =\) \(y\left( {y + 2} \right).\) We divide the equation by \(h\left( y \right)\) and move \(dx\) to the right side:
\[\frac{{dy}}{{y\left( {y + 2} \right)}} = dx.\]
Notice that in dividing we may lose the solutions \(y = 0\) and \(y = -2,\) for which \(h\left( y \right)\) becomes zero. In fact, let us verify that \(y = 0\) is a solution of the differential equation. Obviously,
\[y = 0,\;\;dy = 0.\]
Substituting this into the equation gives \(0 = 0.\) Hence, \(y = 0\) is one of the solutions. Similarly, we can check that \(y = -2\) is also a solution.
Returning to the differential equation, we integrate it:
\[{\int {\frac{{dy}}{{y\left( {y + 2} \right)}}} }={ \int {dx} + C.}\]
We can calculate the left integral by using the fractional decomposition of the integrand:
\[ {\frac{1}{{y\left( {y + 2} \right)}} = \frac{A}{y} + \frac{B}{{y + 2}},\;\;}\Rightarrow {\frac{1}{{y\left( {y + 2} \right)}} = \frac{{A\left( {y + 2} \right) + By}}{{y\left( {y + 2} \right)}},\;\;}\Rightarrow {1 \equiv Ay + 2A + By,\;\;}\Rightarrow {1 \equiv \left( {A + B} \right)y + 2A,\;\;}\Rightarrow {\left\{ {\begin{array}{*{20}{c}} {A + B = 0}\\ {2A = 1} \end{array}} \right.,\;\;}\Rightarrow {\left\{ {\begin{array}{*{20}{c}} {A = \frac{1}{2}}\\ {B = – \frac{1}{2}} \end{array}} \right..} \]
Thus, we get the following decomposition of the rational integrand:
\[{\frac{1}{{y\left( {y + 2} \right)}} }={ \frac{1}{2}\left( {\frac{1}{y} – \frac{1}{{y + 2}}} \right).}\]
Hence,
\[
{{\frac{1}{2}\int {\left( {\frac{1}{y} – \frac{1}{{y + 2}}} \right)dy} }={ \int {dx} + C,\;\;}}\Rightarrow {{\frac{1}{2}\left( {\int {\frac{{dy}}{y}} – \int {\frac{{dy}}{{y + 2}}} } \right) }={ \int {dx} + C,\;\;}}\Rightarrow {{\frac{1}{2}\left( {\ln \left| y \right| – \ln \left| {y + 2} \right|} \right) }={ x + C,\;\;}}\Rightarrow {\frac{1}{2}\ln \left| {\frac{y}{{y + 2}}} \right| = x + C,\;\;}\Rightarrow {\ln \left| {\frac{y}{{y + 2}}} \right| = 2x + 2C.} \]
We can rename the constant: \(2C = {C_1}.\) Thus, the final solution of the equation is written in the form
\[{\ln \left| {\frac{y}{{y + 2}}} \right| = 2x + {C_1},\;\;\;}\kern-0.3pt{y = 0,\;\;\;}\kern-0.3pt{y = – 2.}\]
Here the general solution is expressed in implicit form. In the given case we can transform the expression to obtain the answer as an explicit function \(y = f\left( {x,{C_1}} \right),\) where \({C_1}\) is a constant. However, this is not possible for all differential equations.
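As a quick numerical sanity check of Example 1 (the constant \(K\) below renames \(\pm e^{C_1}\), with the sign choice absorbing the absolute value), solving the implicit relation for \(y\) gives \(y = 2Ke^{2x}/(1 - Ke^{2x})\), and a central-difference derivative confirms it satisfies \(y' = y(y+2):\)

```python
import math

# Verify numerically that y(x) = 2*K*exp(2x) / (1 - K*exp(2x)) solves
# y' = y*(y + 2), the equation of Example 1. K is an arbitrary constant.
K = 0.1

def y(x):
    e = K * math.exp(2 * x)
    return 2 * e / (1 - e)

h = 1e-6
for x in (-1.0, 0.0, 0.5):
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # central difference
    assert abs(dydx - y(x) * (y(x) + 2)) < 1e-4
```

The singular solutions \(y = 0\) (recovered in the limit \(K \to 0\)) and \(y = -2\) (the limit \(K \to \infty\)) are exactly the ones that had to be listed separately above.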
Gyroscope
1. Homework Statement
http://www.jyu.fi/kastdk/olympiads/2005/Th2.pdf [Broken]
http://www.jyu.fi/kastdk/olympiads/2005/Th2%20Solution.pdf [Broken]
(Question 3)
My first doubt is why the magnetic field at the center of the coil is:
[tex]|\vec B|=\frac{\mu_0 NI}{2a}[/tex] and not
[tex]|\vec B|=\mu_0 nI[/tex], where n=N/l, the density of turns. The problem does not give any value for l.
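For context (this is standard magnetostatics, not part of the original thread): the two formulas describe different geometries. By Biot–Savart, every element of a single circular loop of radius $a$ contributes a field in the same direction at the center, and $N$ tightly stacked turns multiply the result by $N$; the formula $\mu_0 n I$ is instead the interior field of a long solenoid, which is why it would require the length $l$:

```latex
B_{\text{coil}} = N\cdot\frac{\mu_0 I}{4\pi}\oint\frac{dl}{a^2}
               = \frac{\mu_0 N I}{4\pi a^2}\,(2\pi a)
               = \frac{\mu_0 N I}{2a},
\qquad
B_{\text{solenoid}} = \mu_0 n I,\quad n = N/l .
```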
Second,
They calculate the emf on each ring by doing [itex]\epsilon=\int E dr[/itex], where [tex]E=\omega Br[/tex]. Am I right?
And finally why do they sum the emfs of the rings?
----------------------
(Question 2) Why is it that for the stationary regime I should calculate the mean values of the induced magnetic field? I think the magnetic needle will never stay put, so there will never be a stationary regime.
Thank you for the help.
(e) I will work on this part in a little while. So far I think I have gotten all of the same solutions as Rong Wei. Added my solution below. For the case of the half-line and Dirichlet boundary condition, we will have the solution: \begin{equation} u(x,t) = \frac{e^{\alpha{}x + \beta{}t}}{2\sqrt{\pi{}kt}}\int_0^{\infty}[e^{-(x-y)^2/4kt} - e^{-(x+y)^2/4kt}]g(y)dy \end{equation} In the case of Neumann boundary conditions, we cannot use a similar method.

My question is about the boundary condition. For the general form of the 1D wave equation, there is no $v$ present. We used the boundary condition \begin{equation} u_{x=0} = 0 \end{equation}. Here you are asking about $v$. I understand that we need conditions, but why do we need \begin{equation} u_{x=vt} = something \end{equation} here, but not \begin{equation} u_{x=ct} = something \end{equation} or \begin{equation} u_{x=0} = something \end{equation}?

Thanks, professor, but I understand the process of doing this, just not why we need the condition on $u$ at $x=vt$ here rather than at $x=0$; similarly, for problems without $v$, why not use the boundary condition at $x=ct$?

This is completely incomprehensible. What do you really mean by this charade?
$\omega_1$ is generally referred to as an ordinal, when it is really more of a formula $\phi$ true for some sets in models of ZFC.
I started to wonder if $\omega_1$ really was an ordinal, unlike $\beth_1$ (which is a cardinal in every model but isn't always the same cardinal; it is more easily described as a formula where $\phi(x)\Leftrightarrow x=\beth_1$). This question was originally hard to describe, but I now have an explicit definition that I feel represents the "Canonicality" of $\omega_1$: Let a formula $\chi$ be $\phi$-Intact in a theory $T$ when:
$$\exists M(M\models T\land M\not\models\exists x(\phi(x)\land\chi(x)))\land\exists\chi_1(T\models(\exists x(\phi(x)\land\chi(x))\rightarrow\forall x(\chi(x)\Leftrightarrow\chi_1(x))\land\exists x(\phi(x)\land\chi(x))))$$
Broken down into several parts: there is some model $M$ of $T$ such that $M\models\neg\exists x(\phi(x)\land\chi(x))$. In other words, $T$ cannot prove the existence of some $x$ such that $\phi(x)\land\chi(x)$. However, there is some other formula $\chi_1$ such that $T$ semantically entails that some $x$ with $\chi_1(x)\land\phi(x)$ exists, and $T$ semantically entails "if some $x$ exists such that $\phi(x)\land\chi(x)$, then for every $x$, $\chi(x)\Leftrightarrow\chi_1(x)$".
This basically means: there are models $M$ of $T$ which do not contain any $x$ such that both $\chi(x)$ and $\phi(x)$ hold. However, there is some other formula $\chi_1$ such that in every model of $T$ there is some $x$ for which both $\chi_1(x)$ and $\phi(x)$ hold, and in the models of $T$ in which there is some $x$ such that $\chi(x)$ and $\phi(x)$ both hold, in that model $\chi(x)\Leftrightarrow\chi_1(x)$ for every $x$.
Let $\psi(x)$ be true if and only if $x$ is a countable ordinal. The Canonicality of $\omega_1$ is equivalent to $\chi$ being $\psi$-Intact in ZFC for every $\chi$ such that:
$\exists M\models\mathrm{ZFC}(M\models\exists x(\chi(x)\land\psi(x)))$
$\exists M\models\mathrm{ZFC}(M\not\models\exists x(\chi(x)\land\psi(x)))$
$\forall M\models\mathrm{ZFC}(M\models\exists x(\chi(x)\land\psi(x)))\rightarrow\forall x,y(\chi(x)\land\psi(x)\land\chi(y)\land\psi(y)\rightarrow x=y)$ (In all of the models of ZFC in which there is some countable ordinal $\alpha$ with $\chi(\alpha)$, there is only one such ordinal.)
If all of the previously mentioned bullet points hold for some $\chi$, $\psi$, and $T$, then one could generalize this by saying $\chi$ is a $\psi$-Undecidability in $T$. The previous bullet points holding for $\chi$ is equivalent to $\chi$ being a $\psi$-Undecidability in ZFC. My conjecture is that $\omega_1$ is not canonical. In other words, there is a $\psi$-Undecidability in ZFC which is not $\psi$-Intact in ZFC.
This conjecture came about when researching the Proof-Theoretic ordinal of KPI. The Proof-Theoretic ordinal is not only proven to exist from ZFC but proven to be countable, without even assuming the consistency of KPI.
Assuming the consistency of a weakly inaccessible cardinal, there are models of ZFC in which $I_0$, the smallest weakly inaccessible cardinal, exists. In all of these models, the Rathjen collapse $\psi_\Omega(\varepsilon_{I_0+1})$ exists as well. Consistency-wise this Rathjen collapse (when viewed as a formula) is as strong as a weakly inaccessible cardinal, but whenever it exists it is the proof-theoretic ordinal of KPI (which always exists).
With this, I felt that, even though this Rathjen collapse isn't provably existent from ZFC, it is "canonical" in a sort; whenever it does exist it is equal to a Truly Canonical countable ordinal.
Of course, the immediate question is whether or not all such undecidable countable ordinal formulas $\varphi$ are canonical in the same way; and the falsity of that is my conjecture.
Assuming My Conjecture is True, one could "expand" $\omega_1$ by making larger ordinals which are not equivalent to any provably existent and countable ordinal. This would make $\omega_1$ bigger in the way that adding Large Cardinal axioms makes $V$ bigger. It does not increase the cardinality of $\omega_1$ itself, but simply adds more ordinals. This could be a sort of 'antiforcing'; forcing is often used to make large sets smaller (like making an inaccessible $\omega_1$), but this could be used to literally add things to $\omega_1$ and keep it $\omega_1$. This could be generalized to any $\omega_n$ for finite $n$. My Question is whether or not there are any immediate proofs or disproofs of my Conjecture.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
Nuclei Mass-Energy and Nuclear Binding Energy

The mass of a nucleus is always less than the mass of its constituent nucleons in their free state. This difference is called the MASS DEFECT, Δm. The energy equivalent to the mass defect is called the BINDING ENERGY OF THE NUCLEUS, BE:

BE = [ZM_p + (A − Z)M_n − M] × 931.5 MeV.

The binding energy of a nucleus is given by
BE = (Δm)c² Joules (Δm in kg)
BE = (Δm) × 931.5 MeV (Δm in amu)

Average binding energy = \tt \frac{BE}{A}. The binding fraction, or average binding energy, is a measure of the stability of the nucleus.

Mass defect per nucleon is called the PACKING FRACTION:
\tt PF = \frac{M-A}{A} = \frac{\Delta m}{A}
If the packing fraction is positive, the nucleus is unstable; if the packing fraction is negative, the nucleus is stable.

With increase in mass number A, the BE per nucleon increases rapidly, reaches a maximum value, and then decreases slowly. The average binding energy is maximum for \tt {}^{56}_{26}Fe, at 8.7 MeV. For the deuteron, the average binding energy is about 1 MeV. Average binding energy is low for both light and heavy nuclei. The average binding energy for helium is about 7 MeV. The binding energy per nucleon for iron is maximum at 8.7 MeV; iron is the most stable nucleus and will undergo neither fission nor fusion.
1. Mass defect (Δm): Δm = sum of masses of nucleons − mass of nucleus =\left\{Zm_{p} + (A - Z)m_{n}\right\} - M = \left\{Zm_{p} + Zm_{e} + (A - Z)m_{n}\right\} - M', where M is the mass of the nucleus and M' is the mass of the atom (which is why the electron masses Zm_{e} appear in the second form).
2. Mass defect per nucleon is called packing fraction.
Packing fraction (f) = \frac{\Delta m}{A} = \frac{M - A}{A}, where M = Mass of nucleus, A = Mass number.
3. Binding energy per nucleon : The average energy required to release a nucleon from the nucleus is called binding energy per nucleon.
Binding energy per nucleon =\tt \frac{Total \ binding \ energy}{Mass \ number (i.e. total \ number \ of \ nucleons)} = \frac{\Delta m \times 931.5}{A} \frac{MeV}{Nucleon}
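The formulas above are easy to exercise numerically. The sketch below uses standard reference masses in atomic mass units (assumed values, not given in the text) and recovers the roughly 7 MeV per nucleon quoted for helium:

```python
# Binding energy of a nucleus from its mass defect (values in atomic mass
# units, u; 1 u = 931.5 MeV/c^2). The particle masses below are standard
# reference values, not taken from the text above.
m_p, m_n = 1.007276, 1.008665          # proton, neutron mass (u)
u_to_MeV = 931.5

def binding_energy(Z, A, m_nucleus):
    """Return (total BE, BE per nucleon) in MeV for a nucleus of mass m_nucleus (u)."""
    delta_m = Z * m_p + (A - Z) * m_n - m_nucleus   # mass defect
    be = delta_m * u_to_MeV
    return be, be / A

# Helium-4: nuclear mass about 4.001506 u
be, be_per_A = binding_energy(2, 4, 4.001506)
print(round(be, 1), round(be_per_A, 2))   # roughly 28.3 MeV total, 7.07 MeV/nucleon
```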
I am new to this stuff. Can someone explain how I could compute the stochastic integral of the form $\int_0^t W_sds$, where $W_t$ is a Brownian motion?
What it means to compute the integral is unclear in this context, but one can say this: for every $t\geqslant0$, the random variable $$X_t=\int_0^tW_s\mathrm ds$$ is centered normal with variance $\sigma_t^2$ where$$\sigma_t^2=\mathbb E(X_t^2)=2\int_0^t\int_0^s\mathbb E(W_sW_u)\mathrm du\mathrm ds=\int_0^t\int_0^s2u\mathrm du\mathrm ds=\frac{t^3}3.$$The process $(X_t)_{t\geqslant0}$ is called integrated Brownian motion and is the subject of some active research; for a sample see this paper and the list of references therein.
As Learner pointed out, the integral $\omega \mapsto \int_{0}^t W_s(\omega) \, ds$ is not a stochastic integral, it's a pathwise Lebesgue integration.
But anyway: If we would like to obtain another expression for this integral, we can apply Itô's formula:
$$f(W_t)-f(W_0) = \int_0^t f'(W_s) \, dW_s + \frac{1}{2} \int_0^t f''(W_s) \, ds \tag{1}$$
Since we are looking for the "$ds$-part", it would be nice to have
$$f''(W_s) = 2 W_s$$
i.e. $f''(x)=2x$. We obtain this by choosing $f(x) := \frac{x^3}{3}$. By applying $(1)$:
$$\frac{W_t^3}{3} - 0 = \int_0^t W_s^2 \, dW_s + \frac{1}{2} \int_0^t 2 W_s \, ds \\ \Rightarrow \int_0^t W_s \, ds = \frac{W_t^3}{3} - \int_0^t W_s^2 \, dW_s$$ |
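A Monte Carlo sketch (the step count and path count are arbitrary choices) supports the distributional claim above: simulated values of $X_t=\int_0^t W_s\,\mathrm ds$ at $t=1$ are centered with variance close to $t^3/3=1/3$.

```python
import numpy as np

# Simulate many Brownian paths on [0, 1] and integrate each with a
# left-point Riemann sum; X_1 should be centered with variance 1/3.
rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 500, 20000
dt = t / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # W_0 = 0 enters the sum
X = W_left.sum(axis=1) * dt

print(X.mean(), X.var())   # close to 0 and to 1/3
```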
The answer to your second question is also no: only local properties can be characterised in the internal language of a topos, and it is possible to have locally constant (pre)sheaves that are not constant.
A simple, and well-known, example runs as follows: let $\mathbb C$ be the poset with four elements $n$, $e$, $s$, and $w$, satisfying $w<n$, $w<s$, $e<n$, and $e<s$. Now let $P$ be the presheaf on $\mathbb C$ which maps: each object to $\{0,1\}$; all but one arrow (say, the one from $w$ to $n$) to the identity; that one other arrow to the non-identity automorphism of $\{0,1\}$. This presheaf is not isomorphic to a coproduct of copies of $1$, and therefore not constant. (If you draw it right, $P$ looks like the non-trivial double cover of the circle. In fact, this is not just a metaphor: $\hat{\mathbb C}$ can be construed as the topos of sheaves on a certain non-Hausdorff quotient of the circle.)
Now let $N$ and $S$ denote Yoneda of $n$ and $s$, respectively; then $N+S$ has global support. The slice topos $\hat{\mathbb C}/(N+S)$ is equivalent to $\hat{\mathbb D}$, where $\mathbb D$ is the poset having elements $n,e_N,e_S,s,w_S,w_N$, satisfying $w_N<n$, $w_S<s$, $e_N<n$, and $e_S<s$. Moreover $(S+N)^*:\hat{\mathbb C}\to\hat{\mathbb C}/(N+S)\simeq\hat{\mathbb D}$ maps $P$ to the presheaf (let's call it $Q$) which maps: every element of $\mathbb D$ to $\{0,1\}$; all but one arrow ($w_N \to n$) to the identity; that one arrow to the non-identity automorphism of $\{0,1\}$. Unlike $P$, $Q$ actually is isomorphic to a coproduct of copies of $1$, and therefore constant. (Continuing the parenthetic remark at the end of the previous paragraph: we have trivialised the non-trivial double cover by breaking the circle into two semicircles as usual.)
The larger point here is that $(S+N)^*$, being the inverse image functor of a surjective local homeomorphism, is both logical and faithful; logical functors preserve interpretations of formulas, and faithful ones reflect validity. So if $\Phi$ is a formula which is true for all constant sheaves, its validity for $Q$ entails its validity for $P$. In fact, this applies for infinitary formulae as well, since $(S+N)^*$, having both a left and a right adjoint, preserves arbitrary limits and colimits.
Now one may wonder: can locally constant sheaves be characterised by an internal formula? I think that the answer is yes, but the man to ask is Thomas Streicher. Both he and Richard Squire were (independently) interested in this question in the late '90s and (I think) reached answers in the early '00s. I am relatively sure that Richard did not publish his results, but perhaps Thomas did? This was about the time I was losing interest in topos theory, though I have picked it up again recently.
What I can remember is this:
locally constant presheaves are the same as presheaves valued in {sets-with-bijections};
[1.5 decidable presheaves are the same as presheaves valued in {sets-with-injections};]
(for presheaves) it is therefore enough to characterise "transition-surjective" presheaves---i.e., presheaves valued in {sets-with-surjections};
[2.5 Kuratowski-finite presheaves are precisely those which are both transition-surjective and finitely-valued;]
the problem of characterising transition-surjective presheaves is therefore equivalent to that of characterising finitely-valued presheaves, which is an interesting question in its own right. |
I wonder if someone can help me with the solution to this question from Björk's "Arbitrage theory in continuous time":
At date of maturity $T_2$ the holder of a financial contract will obtain the amount: $$ \frac{1}{T_2 - T_1 } \int_{T_1}^{T_2} S(u) du $$ where $T_1$ is some time point before $T_2$. Determine the arbitrage free price of the contract at time $t$. Assume you live in a Black-Scholes world and that $t<T_1$.
Earlier in the book he states this theorem that I think one might use:
The arbitrage free price of a claim $\Phi(S(T))$ is given by: $$ \Pi(t,\Phi)=F(s,t) $$ where $F(\cdot,\cdot)$ is given by the formula $$ F(s,t)=e^{-r(T-t)}E_{s,t}^Q [\Phi(S(T))] $$ where the $Q$-dynamics of $S(t)$ are given by $$ dS(t)=rS(t)dt + S(t)\sigma(t,S(t))dW(t) $$
However I'm not really sure how to apply it in this case. Can anybody help me out here? |
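One standard route (a sketch of how the theorem's risk-neutral valuation applies, not necessarily the book's intended solution): since the payoff is an integral of $S$, Fubini reduces the problem to pricing each $S(u)$ separately, and under $Q$ the discounted stock is a martingale, so $E^Q_t[S(u)] = S(t)e^{r(u-t)}$ for $u \ge t$:

```latex
\Pi(t) = e^{-r(T_2-t)}\,E^Q_t\!\left[\frac{1}{T_2-T_1}\int_{T_1}^{T_2} S(u)\,du\right]
       = \frac{e^{-r(T_2-t)}}{T_2-T_1}\int_{T_1}^{T_2} E^Q_t\!\left[S(u)\right]du
       = \frac{S(t)\,e^{-r(T_2-t)}}{T_2-T_1}\int_{T_1}^{T_2} e^{r(u-t)}\,du
       = \frac{S(t)}{T_2-T_1}\cdot\frac{1-e^{-r(T_2-T_1)}}{r}.
```

Note the answer does not depend on $\sigma$: the payoff is linear in $S$, so no optionality is involved.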
Tagged: abelian group

Problem 307
Let $A$ be an abelian group and let $T(A)$ denote the set of elements of $A$ that have finite order.
(a) Prove that $T(A)$ is a subgroup of $A$.
(The subgroup $T(A)$ is called the torsion subgroup of the abelian group $A$ and elements of $T(A)$ are called torsion elements.)
(b) Prove that the quotient group $G=A/T(A)$ is a torsion-free abelian group. That is, the only element of $G$ that has finite order is the identity element.

Problem 268
Let $G$ be a group with the identity element $e$ and suppose that we have a group homomorphism $\phi$ from the direct product $G \times G$ to $G$ satisfying
\[\phi(e, g)=g \text{ and } \phi(g, e)=g, \tag{*}\] for any $g\in G$.
Let $\mu: G\times G \to G$ be a map defined by
\[\mu(g, h)=gh.\] (That is, $\mu$ is the group operation on $G$.)
Then prove that $\phi=\mu$.
Also prove that the group $G$ is abelian.

Problem 240
A nontrivial abelian group $A$ is called divisible if for each element $a\in A$ and each nonzero integer $k$, there is an element $x \in A$ such that $x^k=a$. (Here the group operation of $A$ is written multiplicatively. In additive notation, the equation is written as $kx=a$.) That is, $A$ is divisible if each element has a $k$-th root in $A$.

(a) Prove that the additive group of rational numbers $\Q$ is divisible.
(b) Prove that no finite abelian group is divisible.

Problem 221
Let $p$ be a prime number. Let
\[G=\{z\in \C \mid z^{p^n}=1\} \] be the group of $p$-power roots of $1$ in $\C$.
Show that the map $\Psi:G\to G$ mapping $z$ to $z^p$ is a surjective homomorphism.
Also deduce from this that $G$ is isomorphic to a proper quotient of $G$ itself.

Problem 209
Let $G$ be a group. We fix an element $x$ of $G$ and define a map
\[ \Psi_x: G\to G\] by mapping $g\in G$ to $xgx^{-1} \in G$. Then prove the following.

(a) The map $\Psi_x$ is a group homomorphism.

(b) The map $\Psi_x=\id$ if and only if $x\in Z(G)$, where $Z(G)$ is the center of the group $G$.
(c) The map $\Psi_y=\id$ for all $y\in G$ if and only if $G$ is an abelian group. |
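Problem 209 (a) can be spot-checked computationally before proving it. The snippet below is a finite sanity check in $S_3$ (deliberately a nonabelian group, so that $\Psi_x \ne \id$ for some $x$); it is of course not a proof.

```python
from itertools import permutations

# Check that Psi_x(g) = x g x^{-1} is a homomorphism for every x in S_3,
# representing permutations as tuples.
def compose(p, q):          # (p . q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
for x in S3:
    psi = lambda g: compose(compose(x, g), inverse(x))
    for g in S3:
        for h in S3:
            assert psi(compose(g, h)) == compose(psi(g), psi(h))
```

The proof itself is one line: $\Psi_x(gh)=xghx^{-1}=(xgx^{-1})(xhx^{-1})=\Psi_x(g)\Psi_x(h)$.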
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?
(a) $A$
(b) $C^{-1}A^{-1}BC^{-1}AC^2$
(c) $B$
(d) $C^2$
(e) $C^{-1}BC$
(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ is also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
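Problem 7 above can be checked numerically; the intended trick for part (b) is that once $A\mathbf{v}=\lambda\mathbf{v}$, then $A^3\mathbf{v}=\lambda^3\mathbf{v}$, so $A^3$ never has to be formed (the final comparison against `matrix_power` below is only a verification):

```python
import numpy as np

A = np.array([[-3, -4], [8, 9]])
v = np.array([-1, 2])

# (a) A v is a scalar multiple of v, which identifies lambda.
Av = A @ v                      # equals 5 * v
lam = Av[0] // v[0]             # lambda = 5
assert np.array_equal(Av, lam * v)

# (b) Without forming A^3: A^3 v = lambda^3 v = 125 * v.
A3v = lam ** 3 * v
assert np.array_equal(A3v, np.linalg.matrix_power(A, 3) @ v)
```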
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
This post is Part 1 and contains the first three problems. Check out Part 2 and Part 3 for the rest of the exam problems.
Problem 1. Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A consistent system of $5$ equations in $3$ unknowns and the rank of the system is $1$.
(b) A homogeneous system of $5$ equations in $4$ unknowns and it has a solution $x_1=1$, $x_2=2$, $x_3=3$, $x_4=4$.
Problem 2. Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system.\[A=\begin{bmatrix}1 & 0 & -1 & -2 \\2 &1 & -2 & -7 \\3 & 0 & -3 & -6 \\0 & 1 & 0 & -3\end{bmatrix}.\]
Problem 3. Let $A$ be the following invertible matrix.\[A=\begin{bmatrix}-1 & 2 & 3 & 4 & 5\\6 & -7 & 8& 9& 10\\11 & 12 & -13 & 14 & 15\\16 & 17 & 18& -19 & 20\\21 & 22 & 23 & 24 & -25\end{bmatrix}\]Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix. Suppose that $ABA^{-1}=I$. Then determine the matrix $B$.
(Linear Algebra Midterm Exam 1, the Ohio State University) |
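Problem 3 is a trap: the entries of $A$ are irrelevant, since $ABA^{-1}=I$ forces $B=A^{-1}IA=I$. A random-matrix spot check of that cancellation (the random $5\times 5$ matrix is invertible with probability 1, so no special structure is needed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))          # invertible with probability 1
B = np.linalg.inv(A) @ np.eye(5) @ A     # the unique B with A B A^{-1} = I
assert np.allclose(B, np.eye(5))
assert np.allclose(A @ B @ np.linalg.inv(A), np.eye(5))
```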
I'm a student, currently studying about quadratic forms over a field $\mathbb{R}$ and I have a few questions regarding the topic.
From a book I currently read, a quadratic form is a real-valued function over a vector space $E$ (i.e. $Q: E \longrightarrow \mathbb{R}$, please correct me if I'm mistaken) such that there exists a symmetric bilinear form $B: E \times E \longrightarrow \mathbb{R}$ for which the following expression is valid: \begin{align} Q(x)=B(x,x) \end{align} $\forall x \in E$.
My question: given some function $F: E \longrightarrow \mathbb{R}$, the definition requires the existence of a symmetric bilinear form $G: E \times E \longrightarrow \mathbb{R}$ for which the above expression is valid, but how can we make sure that such a form exists? For example, if I have a function $F: \mathbb{R^3} \longrightarrow \mathbb{R}$ such that \begin{align} F(x)= x_1+x_2+x_3 \end{align} with $ x=(x_1,x_2,x_3) \in \mathbb{R^3}$,
how to check whether $F$ is a quadratic form or not?
The book also mentions a regular quadratic space $(E, Q)$, i.e. a vector space $E$ equipped with a quadratic form $Q$ that is nonsingular. What is the definition of nonsingular in terms of functions/transformations? And how does it connect to this?
I'm really lost, I know I'm still learning, and I need lots of help. Of course, this is not the last time I'll ask, maybe I'll come again if I find more difficulties but for now, this is all I've got to ask you good people in this community. Thanks! Any help will do! |
When the physical dimensions of a circuit approach the magnitude of a wavelength of the signal, wires and circuit traces begin to affect circuit performance. These parasitic effects are frequency dependent and typically have very small values that grow linearly with wire/trace length, so the impedance becomes non-negligible only when the frequency and the length are both comparably large. As a rule of thumb, transmission line effects can matter once the wire length reaches as little as one-hundredth of a wavelength.
As a mental shortcut, to avoid having to analyze the harmonic components of a signal, compare the rise time of the signal to the propagation delay. If the rise time is less than twice the propagation delay, transmission line effects must be considered. So if the propagation delay of a wire or trace is 5 ns, then any signal with a rise time of less than 10 ns will be affected by transmission line effects.
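The rule of thumb above is easy to encode (a minimal sketch; the threshold and example numbers are just the ones from the text):

```python
def needs_tline_analysis(rise_time_s, prop_delay_s):
    """Rule of thumb from the text: treat the interconnect as a
    transmission line when the rise time is less than twice the
    one-way propagation delay."""
    return rise_time_s < 2 * prop_delay_s

# The text's example: with 5 ns of propagation delay, a 4 ns edge needs
# transmission-line treatment while a 20 ns edge does not.
print(needs_tline_analysis(4e-9, 5e-9))    # True
print(needs_tline_analysis(20e-9, 5e-9))   # False
```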
To quantify this analytically, consider the familiar passive parasitics affecting a circuit’s performance: inductance, capacitance, resistance, and conductance (L, C, R, G). These elements can be thought of as being distributed along the length of the transmission line. For initial simplicity, the model is two parallel lines with one conductor and one ground.
Examining a section of the lumped element model as a single mesh with Kirchhoff's voltage law, and using circuit elements whose values are per-unit length:
We get the equation:
Another analysis of the lumped element model with Kirchhoff's current law, with one node at the top, gives the equation:
Divide both sides by $$\Delta z$$ and take the limit as $$\Delta z\rightarrow 0$$ (note that the last terms become derivatives).
Simplify equations 3 and 4 using Cosine phasors.
We can then solve these equations simultaneously to find I(z) and V(z).
Equations 7 and 8 are commonly known as the telegrapher's equations, where $$\gamma$$ is the complex propagation constant for a single line and is a function of frequency.
Solving equations 7 & 8 for I(z) and V(z) give
Where $$e^{-\gamma z}$$ accounts for propagation in the positive z direction and $$e^{\gamma z}$$ for the reflection in the negative direction. If eq. 10 is plugged into eq. 8, we can get the relationship
Comparing the terms in eq. 12 with eq. 11 leads to the conclusion that
In which case this ratio of voltage to current is defined as the characteristic impedance of the transmission line, $$Z_0$$.
Using the characteristic impedance, we can define the current in terms of the voltage.
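As a numerical sketch of the relationship above, assuming the standard expression $$Z_0 = \sqrt{(R + j\omega L)/(G + j\omega C)}$$ for the per-unit-length line constants (the RLGC values below are illustrative, not from the text):

```python
import cmath, math

def char_impedance(R, L, G, C, f):
    """Z0 = sqrt((R + jwL)/(G + jwC)) from per-unit-length R, L, G, C."""
    w = 2 * math.pi * f
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

# Lossless line (R = G = 0): Z0 reduces to the real value sqrt(L/C).
Z0 = char_impedance(0.0, 250e-9, 0.0, 100e-12, 1e9)
print(Z0)   # ≈ (50+0j) ohms
```

For a lossless line the frequency drops out entirely, which is why $$Z_0$$ is usually quoted as a single real number such as 50 Ω.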
With the transmission line clearly defined as a circuit element, it can now be analyzed when a load is attached. We define the load to be located at z=0 to simplify the analysis.
The current and voltage at the load can be related by the load impedance. Using equations 10 & 15, while setting z=0, we get
Rearranging, we can find the reflected voltage value in terms of known values
The ratio of the reflected voltage wave to the incident voltage wave is known as the reflection coefficient, $$\Gamma$$.
An important case should be observed from eq. 18. When the load impedance matches the characteristic impedance of the transmission line, the reflection coefficient $$\Gamma =0$$, and there is no reflected wave. This load is referred to as being matched to the transmission line.
Here is how load matching affects the reflections in a transmission line.
The first graph shows iterative reflections when the load is much smaller than $$Z_0$$, the second shows a load-matched transmission line with no reflections, and the third shows the same transmission line with a load that is much greater than $$Z_0$$.
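The three plotted cases can be sketched numerically, assuming eq. 18 takes its standard form $$\Gamma = (Z_L - Z_0)/(Z_L + Z_0)$$ (the load values below are illustrative):

```python
def reflection_coefficient(ZL, Z0):
    """Gamma = (ZL - Z0)/(ZL + Z0): zero for a matched load."""
    return (ZL - Z0) / (ZL + Z0)

# The three plotted cases: load << Z0, matched load, load >> Z0
for ZL in (5.0, 50.0, 500.0):
    print(ZL, reflection_coefficient(ZL, 50.0))
```

A small load gives a large negative Γ (inverted reflections), the matched load gives Γ = 0, and a large load gives a large positive Γ, matching the three graphs.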
There you have it-- the quick and dirty intro to the transmission line!
Next Article in Series: Transmission Lines: From Lumped Element to Distributed Element Regimes |
The problem is this: Use the recursion-tree method to give a good asymptotic upper bound on $$ T(n) = 9T(\sqrt[3]n) + \Theta(1). $$ I am able to get the tree started and find a pattern with the sub-problems, but I am having difficulty finding the total cost of the running times throughout the tree. I cannot figure out how to get the number of sub-problems at depth $i$ when $n=1$. I have a feeling the answer is $O(\log_3 n)$, but I cannot verify that at the moment. Any help would be appreciated.
The recurrence can be written as $$T(n) = 9T(\sqrt[3]n) + C, $$ where $C$ is some constant, since any constant will always be treated as 1 asymptotically. My recursion tree is explained by each level below:
Level 0: This is the constant $C$
Level 1: $T(\sqrt[3]n)$ is written 9 times which represent the sub-problems of $C$. This adds up to $9C\sqrt[3]n$.
Level 2: Each of the 9 sub-problems from level 1 gets divided into 9 more sub-problems, which are each written as $T(\sqrt[9]n)$. All of these add up to $81C\sqrt[9]n$.
Sub-Problem Sizes and Nodes: The number of nodes at depth $i$ is $9^i$. We know that the sub-problem size for a node at depth $i$ is $n^{1/3^i}$. The problem size hits $n=1$ when this size equals 1. Solving for $i$ yields:
$$ \left(n^{1/3^i}\right)^{3^i} = 1^{3^i} \implies n = 1. $$
This results in $n$ being 1 which doesn't give a logarithmic form! |
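One way to explore the tree numerically (a sketch; subproblem sizes are tracked through their base-2 logarithms to avoid overflow) is to count the $\Theta(1)$-cost nodes level by level and compare the total against $\lg^2 n$; the totals track $(\lg n)^2$, suggesting the bound is $O(\log^2 n)$ rather than $O(\log_3 n)$:

```python
def recursion_tree_cost(lg_n):
    """Count the Theta(1)-cost nodes in the tree for T(n) = 9T(n^(1/3)) + C.
    lg_n is log2(n); a node at depth i has subproblem size n^(1/3^i), i.e.
    log2(size) = lg_n / 3**i, and recursion stops once the size reaches 2."""
    total, i = 0, 0
    while lg_n / 3 ** i > 1:    # subproblem still larger than 2
        total += 9 ** i         # 9^i nodes at depth i, constant cost each
        i += 1
    total += 9 ** i             # the leaf level
    return total, i

for lg_n in (81, 243, 729):     # n = 2^81, 2^243, 2^729
    total, depth = recursion_tree_cost(lg_n)
    print(lg_n, depth, total, total / lg_n ** 2)   # ratio settles near 9/8
```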
Let $M$ be a symmetric square matrix with integer coefficients and $M_k$ the matrix obtained by deleting the $k$-th row and $k$-th column. If $\det(M)=0$, does it follow that $\det(M_kM_j)$ is a square?
Yes one gets the square of the determinant of the matrix where one deletes row j and column k. That follows from Dodgson's condensation formula for determinants.
Abdelmalek's answer was posted while I was finishing up this, but I decided to post it anyway since it gives insight into why the statement is indeed true.
Let $M = (m_{i,j})\in \mathrm{Mat}^{n\times n}(\mathbb{Z})$ and define $D_i := \det(M_i)$.
Since $\det(M) = 0$, some row of $M$ is linearly dependent on the others, WLOG say the last row, so that $m_{n,j} = \sum_{k=1}^{n-1}a_km_{k,j}$ for some $a_k$'s; furthermore, since $M$ is integer-valued, upon scaling one can assume the $a_k$'s are integers.
Now $M_n = \begin{pmatrix}m_{1,1}&m_{1,2}&\ldots&m_{1,n-1}\\m_{1,2}&m_{2,2}&\ldots&m_{2,n-1}\\ \vdots&\vdots&\ddots&\vdots\\m_{1,n-1}&m_{2,n-1}&\ldots&m_{n-1,n-1}\end{pmatrix}$
Let's compare $M_{n-1}$ to $M_n$:
$M_{n-1} = \begin{pmatrix}m_{1,1}&m_{1,2}&\ldots&m_{1,n-2}&m_{1,n}\\m_{1,2}&m_{2,2}&\ldots&m_{2,n-2}&m_{2,n}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\m_{1,n-2}&m_{2,n-2}&\ldots&m_{n-2,n-2}&m_{n,n-2}\\m_{1,n}&m_{2,n}&\ldots&m_{n,n-2}&m_{n,n}\end{pmatrix} =$
$\small\begin{pmatrix}m_{1,1}&m_{1,2}&\ldots&m_{1,n-2}&a_1m_{1,1}+a_2m_{1,2}+\ldots a_{n-2}m_{1,n-2}+a_{n-1}m_{1,n-1}\\m_{1,2}&m_{2,2}&\ldots&m_{2,n-2}&a_1m_{1,2}+a_2m_{2,2}+\ldots a_{n-2}m_{2,n-2}+a_{n-1}m_{2,n-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\m_{1,n-2}&m_{2,n-2}&\ldots&m_{n-2,n-2}&a_1m_{1,n-2}+a_2m_{2,n-2}+\ldots a_{n-2}m_{n-2,n-2}+a_{n-1}m_{n-1,n-2}\\m_{1,n}&m_{2,n}&\ldots&m_{n,n-2}&a_1m_{1,n}+a_2m_{2,n}+\ldots a_{n-2}m_{n-2,n}+a_{n-1}m_{n-1,n}\end{pmatrix}$
Now since a determinant is unchanged by subtracting a multiple of one column from another, for each $i$ from 1 to $n-2$ subtract $a_i$ times the $i^{th}$ column from the last column, this leaves the following matrix with the same determinant as $M_{n-1}$:
$\begin{pmatrix}m_{1,1}&m_{1,2}&\ldots&m_{1,n-2}&a_{n-1}m_{1,n-1}\\m_{1,2}&m_{2,2}&\ldots&m_{2,n-2}&a_{n-1}m_{2,n-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\m_{1,n-2}&m_{2,n-2}&\ldots&m_{n-2,n-2}&a_{n-1}m_{n-1,n-2}\\m_{1,n}&m_{2,n}&\ldots&m_{n,n-2}&a_{n-1}m_{n-1,n}\end{pmatrix}$
Now do the same thing along the rows of this matrix, giving the following matrix with the same determinant as $M_{n-1}$
$\begin{pmatrix}m_{1,1}&m_{1,2}&\ldots&m_{1,n-2}&a_{n-1}m_{1,n-1}\\m_{1,2}&m_{2,2}&\ldots&m_{2,n-2}&a_{n-1}m_{2,n-1}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\m_{1,n-2}&m_{2,n-2}&\ldots&m_{n-2,n-2}&a_{n-1}m_{n-1,n-2}\\a_{n-1}m_{1,n-1}&a_{n-1}m_{2,n-1}&\ldots&a_{n-1}m_{n-2,n-1}&a_{n-1}^2m_{n-1,n-1}\end{pmatrix}$.
Note that this matrix is obtained from $M_n$ by multiplying the last row by $a_{n-1}$ and then multiplying the last column by $a_{n-1}$, hence one has $D_{n-1} = a_{n-1}^2D_n$. A similar argument holds for the other $D_k$, thus $det(M_jM_k) = D_jD_k = a_{j}^2a_{k}^2D_n^2$ is indeed always a square. |
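A quick numerical illustration of the argument (a sketch; the matrix below is built exactly as in the proof, with the last row the integer combination $a=(1,2,3)$ of the first three rows):

```python
import math

def det(M):
    """Exact integer determinant by cofactor expansion along the first row
    (fine for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def minor(M, k):
    """Delete row k and column k (this is the M_k of the question)."""
    return [[M[i][j] for j in range(len(M)) if j != k]
            for i in range(len(M)) if i != k]

# Symmetric singular integer matrix: the last row (and, by symmetry, the
# last column) is the integer combination a = (1, 2, 3) of the first three.
S = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
a = [1, 2, 3]
Sa = [sum(S[i][k] * a[k] for k in range(3)) for i in range(3)]
M = [S[i] + [Sa[i]] for i in range(3)] + [Sa + [sum(a[i] * Sa[i] for i in range(3))]]

assert det(M) == 0
D = [det(minor(M, k)) for k in range(4)]
print(D)   # [18, 72, 162, 18], i.e. a_k^2 times D_4, as the proof predicts

for Dj in D:
    for Dk in D:
        r = math.isqrt(Dj * Dk)
        assert r * r == Dj * Dk   # every product D_j D_k is a perfect square
```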
Let us start with an assumption that the radiation damping broadening is negligible, so that, for all practical purposes the spread of the frequencies emitted by a collection of atoms in a gas is infinitesimally narrow. The observer, however, will not see an infinitesimally thin line. This is because of the motion of the atoms in a hot gas. Some atoms are moving hither, and the wavelength will be blue-shifted; others are moving yon, and the wavelength will be red-shifted. The result will be a broadening of the lines, known as thermal broadening. The hotter the gas, the faster the atoms will be moving, and the broader the lines will be. We shall be able to measure the kinetic temperature of the gas from the width of the lines.
First, a brief reminder of the relevant results from the kinetic theory of gases, to establish our notation.
\[\nonumber \begin{align}\text{Notation:}\qquad &c=\text{ speed of light } \\ \nonumber&\mathbf{V}=\text{ velocity of a particular atom }=u\hat{\mathbf{x}}+v\hat{\mathbf{y}}+w\hat{\mathbf{z}} \\ \nonumber &V = \text{ speed of that atom } = \left (u^2+v^2+w^2 \right )^{\frac{1}{2}} \\ \nonumber &V_\text{m} = \text{ modal speed of all the atoms } = \sqrt{\frac{2kT}{m}}=1.414\sqrt{\frac{kT}{m}} \\ \nonumber &\overline V = \text{mean speed of all the atoms }=\sqrt{\frac{8kT}{\pi m}}=1.596 \sqrt{\frac{kT}{m}}=1.128V_\text{m} \\ \nonumber &V_\text{RMS} =\text{ root mean square speed of all the atoms } =\sqrt{\frac{3kT}{m}}=1.732\sqrt{\frac{kT}{m}}=1.225V_\text{m} \\ \end{align}\]
The Maxwell distribution gives the distribution of speeds. Consider a gas of \(N\) atoms, and let \(N_VdV\) be the number of them that have speeds between \(V\text{ and }V + dV\). Then
\[\label{10.3.1}\frac{N_VdV}{N}=\frac{4}{\sqrt{\pi}V_\text{m}^3}V^2\text{exp}\left ( -\frac{V^2}{V_\text{m}^2}\right ) dV.\]
More relevant to our present topic is the distribution of a velocity component. We’ll choose the \(x\)-component, and suppose that the \(x\)-direction is the line of sight of the observer as he or she peers through a stellar atmosphere. Let \(N_udu\) be the number of atoms with velocity components between \(u\text{ and }u+du\). Then the gaussian distribution is
\[\label{10.3.2}\frac{N_udu}{N}=\frac{1}{\sqrt{\pi}V_\text{m}}\text{exp}\left ( -\frac{u^2}{V_\text{m}^2}\right ) du,\]
which, of course, is symmetric about \(u = 0\).
Now an atom with a line-of-sight velocity component \(u\) gives rise to a Doppler shift \(\nu - \nu_0\), where (provided that \(u^2 << c^2\) ) \(\frac{\nu-\nu_0}{\nu_0}=\frac{u}{c}\). If we are looking at an emission line, the left hand side of equation \ref{10.3.2} gives us the line profile \(I_\nu(\nu)/I_\nu(\nu_0)\) (provided the line is optically thin, as is always assumed in this chapter unless specified otherwise). Thus the line profile of an emission line is
\[\label{10.3.3}\frac{I_\nu(\nu)}{I_\nu(\nu_0)}=\text{exp}\left [ -\frac{c^2}{V_\text{m}^2}\frac{(\nu-\nu_0)^2}{\nu_0^2}\right ].\]
This is a gaussian, or Doppler, profile.
It is easy to show that the full width at half maximum (FWHM) is
\[\label{10.3.4}w=\frac{V_\text{m}\nu_0}{\text{c}}\sqrt{\ln 16}=1.6651\frac{V_\text{m}\nu_0}{\text{c}}.\]
This is also the full width at half minimum (FWHm) of an absorption line, in frequency units. This is also the FWHM or FWHm in wavelength units, provided that \(\lambda_0\) be substituted for \(\nu_0\).
The profile of an absorption line of central depth \(d ( = \frac{I_\nu(\text{c})-I_\nu(\nu_0)}{I_\nu(\text{c})})\) is
\[\label{10.3.5}\frac{I_\nu(\nu)}{I_\nu(\text{c})}=1-d\text{ exp}\left [ -\frac{c^2}{V_\text{m}^2}\frac{(\nu-\nu_0)^2}{\nu_0^2}\right ] ,\]
which can also be written
\[\label{10.3.6}\frac{I_\nu(\nu)}{I_\nu(\text{c})}=1-d \text{exp}\left [ -\frac{(\nu-\nu_0)^2\ln 16}{w^2}\right ].\]
(Verify that when \(\nu - \nu_0 = \frac{1}{2}w\), the right hand side is \(1-\frac{1}{2}d\). Do the same for equation 10.2.22.)
In figure X.2, I draw two gaussian profiles, each of the same equivalent width as the lorentzian profiles of figure X.1, and of the same two central depths, namely 0.4 and 0.8. We see that a gaussian profile is “all core and no wings”. A visual inspection of a profile may lead one to believe that it is probably gaussian, but, to be sure, one could write equation \ref{10.3.6} in the form
\[\label{10.3.7}\ln \left [ \frac{I_\nu(\text{c})-I_\nu(\nu)}{I_\nu(\text{c})}\right ] =\ln d -\frac{(\nu-\nu_0)^2\ln 16}{w^2}\]
and plot a graph of the left hand side versus \((\nu - \nu_0)^2\). If the profile is truly gaussian, this will result in a straight line, from which \(w\) and \(d\) can be found from the slope and intercept.
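A sketch of that linearization test on a synthetic profile (the parameter values below are illustrative): because the sampled data are exactly gaussian, the straight-line fit recovers \(w\) and \(d\) from the slope and intercept, as the text describes.

```python
import math

# Synthetic absorption profile with known FWHm w and central depth d
w_true, d_true, nu0 = 2.0e9, 0.6, 5.0e14   # illustrative values, Hz

def depth(nu):
    return d_true * math.exp(-(nu - nu0) ** 2 * math.log(16) / w_true ** 2)

# Linearize: y = ln d - x * ln 16 / w^2, with x = (nu - nu0)^2
pts = []
for k in range(1, 20):
    off = k * 1e8
    pts.append((off * off, math.log(depth(nu0 + off))))

# Least-squares straight-line fit
n = len(pts)
mx = sum(x for x, _ in pts) / n
my = sum(y for _, y in pts) / n
sxx = sum((x - mx) ** 2 for x, _ in pts)
sxy = sum((x - mx) * (y - my) for x, y in pts)
slope, intercept = sxy / sxx, my - (sxy / sxx) * mx

w_fit = math.sqrt(-math.log(16) / slope)   # slope = -ln 16 / w^2
d_fit = math.exp(intercept)                # intercept = ln d
print(w_fit, d_fit)
```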
Integrating the Doppler profile to find the equivalent width is slightly less easy than integrating the Lorentz profile, but it is left as an exercise to show that
\[\nonumber \begin{align}\text{Equivalent width } &=\sqrt{\frac{\pi}{\ln 16}}\times\text{ central depth }\times \text{FWHm} \\ &=1.064 \times \text{ central depth }\times \text{FWHm}. \\ \end{align}\]
Compare this with equation 10.2.23 for a Lorentz profile.
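The stated coefficient \(\sqrt{\pi/\ln 16}\approx 1.064\) can be checked by integrating the depth profile of equation \ref{10.3.6} numerically (a sketch with illustrative values of \(d\) and \(w\)):

```python
import math

d, w = 0.4, 1.0   # central depth and FWHm, in arbitrary frequency units

def depth_profile(x):   # x stands for nu - nu0 in eq. (10.3.6)
    return d * math.exp(-x * x * math.log(16) / w ** 2)

# Midpoint-rule integration over many FWHm on either side of line center
N, span = 200000, 20.0 * w
h = 2 * span / N
ew = h * sum(depth_profile(-span + (i + 0.5) * h) for i in range(N))

print(ew, math.sqrt(math.pi / math.log(16)) * d * w)   # both ≈ 0.4258
```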
\(\text{FIGURE X.2}\)
Figure X.3 shows a lorentzian profile (continuous) and a gaussian profile (dashed), each having the same central depth and the same FWHm. The ratio of the lorentzian equivalent width to the gaussian equivalent width is \(\frac{\pi}{2}\div \sqrt{\frac{\pi}{\ln 16}}=\sqrt{\pi \ln 2}=1.476.\)
\(\text{FIGURE X.3}\)
Start with the unperturbed gravitational potential for a uniform sphere of mass M and radius R, interior and exterior:
$$ \phi^0_\mathrm{in} = {-3M \over 2R} + {M\over 2R^3} (x^2 + y^2 + z^2) $$$$ \phi^0_\mathrm{out} = {- M\over r} $$
Add a quadrupole perturbation, you get
$$ \phi_\mathrm{in} = \phi^0_\mathrm{in} + {\epsilon M\over R^3} D $$$$ \phi_\mathrm{out} = \phi^0_\mathrm{out} + {M\epsilon R^2\over r^5} D $$
$$ D = x^2 + y^2 - 2 z^2 $$
The scale factors of M and R are just to make $\epsilon$ dimensionless, the falloff of $D\over r^5$ is just so that the exterior solution solves Laplace's equation, and the matching of the solutions is to ensure that on any ellipsoid near the sphere of radius R, the two solutions are equal to order $\epsilon$. The reason this works is because the $\phi^0$ solutions are matched both in value and in first derivative at r=R, so they stay matched in value to leading order even when perturbed away from a sphere. The order $\epsilon$ quadrupole terms are equal on the sphere, and therefore match to leading order.
The ellipsoid I will choose solves the equation:
$$ r^2 + \delta D = R^2 $$
The z-diameter is increased by a fraction $\delta$, while the x-diameter is decreased by $\delta/2$, so the polar radius exceeds the equatorial radius by the fraction $3\delta/2$. To leading order
$$ r = R - {\delta D \over 2R}$$
We already matched the values of the inner and outer solutions, but we still need to match the derivatives. Taking the "d":
$$ d\phi_\mathrm{in} = {M\over R^3} (rdr) + {\epsilon M\over R^3} dD $$$$ d\phi_\mathrm{out} = {M\over r^3} (rdr) + {MR^2\epsilon \over r^5} dD - {5\epsilon R^2 M D\over r^7} (rdr) $$
$$ rdr = x dx + y dy + z dz $$$$ dD = 2 x dx + 2ydy - 4z dz $$
To first order in $\epsilon$, only the first term of the second equation is modified by the fact that r is not constant on the ellipsoid. Specializing to the surface of the ellipsoid:
$$ d\phi_\mathrm{out}|_\mathrm{ellipsoid} = {M\over R^3} (rdr) + {3M\delta D \over 2 R^5}(rdr) + {\epsilon M \over R^3} dD - {5\epsilon M D \over R^5} (rdr)$$
Equating the in and out derivatives, the parts proportional to $dD$ cancel (as they must--- the tangential derivatives are equal because the two functions are equal on the ellipsoid). The rest must cancel too, so
$$ {3\over 2} \delta = 5 \epsilon $$
So you find the relation between $\delta$ and $\epsilon$. The solution for $\phi_\mathrm{in}$ gives
$$ \phi_\mathrm{in} + {3M\over 2R} = {M\over 2R^3}( r^2 + {3\over 5} \delta D ) $$
Which means, looking at the equation in parentheses, that the equipotentials are 60% as squooshed as the ellipsoid.
Now there is a condition that this is balanced by rotation, meaning that the ellipsoid is an equipotential once you add the centrifugal potential:
$$ - {\omega^2\over 2} (x^2 + y^2) = -{\omega^2 \over 3} (x^2 + y^2 + z^2) -{\omega^2\over 6} (x^2 + y^2 - 2z^2) $$
To make the $\delta$ ellipsoid equipotential requires that $\omega^2\over 6$ equal the remaining ${2\over 5} {M\delta\over 2R^3}$, so that, calling $M\over R^2$ (the acceleration of gravity) by the name "g", and $\omega^2 R$ by the name "C" (centrifugal)
$$\delta = {5\over 6} {C \over g} $$
The actual fractional difference between the equatorial and polar diameters is found by multiplying by 3/2 (see above):
$$ {3\over 2} \delta = {5\over 4} {C\over g} $$
instead of the naive estimate of ${C\over 2g}$. So the naive estimate is multiplied by two and a half for a uniform density rotating sphere.
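Plugging in rough Earth numbers (the values of $\omega$, $R$, and $g$ below are assumed round figures) shows the size of the naive and uniform-density estimates:

```python
import math

# Assumed round Earth parameters
omega = 2 * math.pi / 86164.0   # sidereal rotation rate, rad/s
R = 6.371e6                     # mean radius, m
g = 9.82                        # surface gravity, m/s^2

C = omega ** 2 * R              # centrifugal acceleration at the equator, "C"
naive = C / (2 * g)             # naive fractional bulge C/2g
uniform = 2.5 * naive           # uniform-density result (5/4) C/g

D = 2 * R                       # mean diameter, m
print(naive * D / 1e3)          # ≈ 22 km equatorial-polar diameter difference
print(uniform * D / 1e3)        # ≈ 55 km; the observed value is ≈ 42.8 km
```

The observed difference falls between the two estimates, which is what the nonuniform-interior treatment below accounts for.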
Nonuniform interior: primitive model
The previous solution gives both the interior and exterior potential for a rotating uniform ellipsoid; it is exact in r but only leading order in the deviation from spherical symmetry. It therefore immediately extends to give the shape of the Earth for a nonuniform interior mass distribution. The estimate with a uniform density is surprisingly good, and this is because there are competing effects largely cancelling out the correction for non-uniform density.
The two competing effects are:
1. The interior distribution is more elliptical than the surface, because the interior solution feels all the surrounding elliptical Earth deforming it, with extra density deforming it more.
2. The ellipticity of the interior is suppressed by the $1/r^3$ falloff of the quadrupole solution of Laplace's equation, which is $1/r^2$ faster than the usual potential.
So although the interior is somewhat more deformed, the falloff more than compensates, and the effect of the interior extra density is to make the Earth more spherical, although not by much.
These competing effects are what shift the correction factor from 2.5 to 2, which is actually quite small considering that the interior of the Earth is extremely nonuniform, with the center more than three times as dense as the outer parts.
The exact solution is a little complicated, so I will start with a dopey model. This assumes that the Earth is a uniform ellipsoid of mass M and ellipticity parameter $\delta$, plus a point source in the middle (or a sphere, it doesn't matter), accounting for the extra mass in the interior, of mass M'. The interior potential is given by superposition. With the centrifugal potential:
$$ \phi_{int} = - {M'\over r} - {2M\over 3R} + {M\over 2R^3}(r^2 + {3\over 5} \delta D) - {\omega^2\over 3} r^2 - {\omega^2\over 6} D $$
This has the schematic form of spherical plus quadrupole (including the centrifugal force inside F and G)
$$ \phi_{int} = F(r) + G(r) D $$
The condition that the $\delta$ ellipsoid is an equipotential is found by replacing $r$ with $R - {\delta D\over 2R}$ inside F(r), and setting the D-part to zero:
$$ {F'(R) \delta \over 2R} = G(R) $$
In this case, you get the equation below, which reduces to the previous case when $M'=0$:
$$ {M'\over M+M'}\delta + {M\over M+M'} (\delta - {3\over 5} \delta) = - {C\over 3 g } $$
where $C=\omega^2 R$ is the centrifugal force, and $ g= {M+M'\over R^2} $ is the gravitational force at the surface. I should point out that the spherical part of the centrifugal potential ${\omega^2\over 2} r^2$ always contributes a subleading term proportional to $\omega^2\delta$ to the equation and should be dropped. The result is
$$ {3\over 2} \delta = {1\over 2 (1 - {3\over 5} {M\over M+M'}) } {C\over g} $$
So that if you choose M' to be .2 M, you get the correct answer, so that the extra equatorial radius is twice the naive amount of ${C\over 2g}$.
This says that the potential at the surface of the Earth is only modified from the uniform ellipsoid estimate by adding a sphere with 20% of the total mass at the center. This is somewhat small, considering the nonuniform density in the interior contains about 25% of the mass of the Earth (the perturbing mass is twice the density at half the radius, so about 25% of the total). The slight difference is due to the ellipticity of the core.
Nonuniform mass density II: exact solution
The main thing neglected in the above is that the center is also nonspherical, and so adds to the nonspherical D part of the potential on the surface. This effect mostly counteracts the general tendency of extra mass at the center to make the surface more spherical, although imperfectly, so that there is a correction left over.
You can consider it as a superposition of uniform ellipsoids of mean radius s, with ellipticity parameter $\delta(s)$ for $0<s<R$ increasing as you go toward the center. Each is uniform on the interior, with mass density $|\rho'(s)|$ where $\rho(s)$ is the extra density of the Earth at distance s from the center, so that $\rho(R)=0$. These ellipsoids are superposed on top of a uniform density ellipsoid of density $\rho_0$ equal to the surface density of the Earth's crust.
I will consider $\rho(s)$ and $\rho_0$ known, so that I also know $|\rho'(s)|$, its (negative) derivative with respect to s, which is the density of the ellipsoid you add at s, and I also know:
$$ M(r) = \int_0^r \rho(s) s^2 ds $$
The quantity $M(s)$ is ${1\over 4\pi}$ times the additional mass in the interior, as compared to a uniform Earth at crust density. Note that $M(s)$ is not affected by the ellipsoidal shape to leading order, because all the nested ellipsoids are quadrupole perturbations, and so contain the same volume as spheres.
Each of these concentric ellipsoids is itself an equipotential surface for the centrifugal potential plus the potential from the interior and exterior ellipsoids. So once you know the form of the potential of all these superposed ellipsoids, which is of the form of spherical + quadrupole + centrifugal quadrupole (the centrifugal spherical part always gives a subleading correction, so I omit it):
$$ \phi_\mathrm{int}(r) = F(r) + G(r) D - {\omega^2 \over 6} D $$
You know that each of these nested ellipsoids is an equipotential
$$ F\left(s - {\delta(s) D \over 2s}\right) + G(s) D - {\omega^2\over 6} D $$
so that the equation demanding that this is an equipotential at any s is
$$ {\delta(s) F'(s) \over 2s} - G(s) + {\omega^2\over 6} = 0 $$
To find the form of F and G, you first express the interior/exterior solution for a uniform ellipsoid in terms of the density $\rho$ and the radius R:
$$ {\phi_\mathrm{int}\over 4\pi} = - {\rho R^2\over 2} + {\rho\over 6} r^2 + {\rho \delta\over 10} D $$
$$ {\phi_\mathrm{ext}\over 4\pi} = - {\rho R^3 \over 3 r} + {\rho\delta R^5\over 10 r^5} D $$
You can check the sign and numerical value of the coefficients using the 3/5 rule for the interior equipotential ellipsoids, the separate matching of the spherical and D perturbations at r=R, and dimensional analysis. I put a factor $4\pi$ on the bottom of $\phi$ so that the right hand side solves the constant free form of Laplace's equation.
Now you can superpose all the ellipsoids, by setting $\delta$ on each ellipsoid to be $\delta(s)$, setting $\rho$ on each ellipsoid to be $|\rho'(s)|$, and $R$ to be $s$. I am only going to give the interior solution at r (doing integration by parts on the spherical part, where you know the answer is going to turn out to be, and throwing away some additive constant C) is:
$$ {\phi_\mathrm{int}(r)\over 4\pi} - C = {\rho_0\over 6} r^2 + {\rho_0 \delta(R)\over 10} D - {M(r)\over r} + {1\over 10r^5} \int_0^r |\rho'(s)| \delta(s) s^5 \,ds\, D + {1\over 10} \int_r^R |\rho'(s)|\delta(s)\, ds\, D $$
The first two terms are the interior solution for constant density $\rho_0$. The third term is the total spherical contribution, which is just as in the spherically symmetric case. The fourth term is the superposed exterior potential from the ellipsoids inside r, and the last term is the superposed interior potential from the ellipsoids outside r.
From this you can read off the spherical and quadrupole parts:$$ F(r) = {\rho_0\over 6} r^2 + {M(r)\over r} $$$$ G(r) = {\rho_0\delta(R)\over 10} + {1\over 10r^5} \int_0^r |\rho'(s)|\delta(s) s^5 \,ds + {1\over 10} \int_r^R |\rho'(s)|\delta(s)\, ds $$
So that the integral equation for $\delta(s)$ asserts that the $\delta(r)$ shape is an equipotential at any depth.
$$ {F'(r)\delta(r)\over 2r} - G(r) + {\omega^2 \over 6} = 0 $$
This equation can be solved numerically for any mass profile in the interior, to find the $\delta(R)$. This is difficult to do by hand, but you can get qualitative insight.
Consider an ellipsoidal perturbation inside a uniform density ellipsoid. If you let this mass settle along an equipotential, it will settle to the same ellipsoidal shape as the surface, because the interior solution for the uniform ellipsoid is quadratic, and so has exact nested ellipsoids of the same shape as equipotentials. But this extra density will contribute less than its share of elliptical potential to the surface, diminishing as the third power of the ratio of the radius of the Earth to the radius of the perturbation. But it will produce stronger ellipses inside, so that the interior is always more elliptical than the surface.
Oblate Core Model
The exact solution is too difficult for paper and pencil calculations, but looking [here]( http://www.google.com/imgres?hl=en&client=ubuntu&hs=dhf&sa=X&channel=fs&tbm=isch&prmd=imvns&tbnid=hjMCgNhAjHnRiM:&imgrefurl=http://www.springerimages.com/Images/Geosciences/1-10.1007_978-90-481-8702-7_100-1&docid=ijMBfCAOC1GhEM&imgurl=http://img.springerimages.com/Images/SpringerBooks/BSE%253D5898/BOK%253D978-90-481-8702-7/PRT%253D5/MediaObjects/WATER_978-90-481-8702-7_5_Part_Fig1-100_HTML.jpg&w=300&h=228&ei=ZccgUJCTK8iH6QHEuoHICQ&zoom=1&iact=hc&vpx=210&vpy=153&dur=4872&hovh=182&hovw=240&tx=134&ty=82&sig=108672344460589538944&page=1&tbnh=129&tbnw=170&start=0&ndsp=8&ved=1t:429,r:1,s:0,i:79&biw=729&bih=483 ), you see that it is sensible to model the Earth as two concentric spheres of radius $R$ and $R_1$ with total mass $M$ and $M_1$ and $\delta$ and $\delta_1$.
I will take
$$ R_1 = {R\over 2} $$
and
$$ M_1 = {M\over 4} $$
that is, the inner sphere is 3000 km across, with twice the density, which is roughly accurate. Superposing the potentials and finding the equation for the $\delta$s (the two point truncation of the integral equation), you find
$$ -\delta + {3\over 5} {M_0\over M_0 + M_1} \delta + {3\over 5} {M_1\over M_0 + M_1} \delta_1 ({R_1\over R})^2 = {C\over 3g} $$
$$ {M_0 \over M_0 + M_1} (-\delta_1 + {3\over 5} \delta) + {M_1 \over M_0 + M_1}( -\delta_1 + {3\over 5} \delta_1) = {C\over 3g} $$
Where
$$ g = {M_0+ M_1\over R^2}$$$$ C = \omega^2 R $$
are the gravitational force and the centrifugal force per unit mass, as usual. Using the parameters, and defining $\epsilon = {3\delta\over 2}$ and $\epsilon_1={3\delta_1\over 2}$, one finds:
$$ - 1.04 \epsilon + .06 \epsilon_1 = {C\over g} $$$$ - 1.76 \epsilon_1 + .96 \epsilon = {C\over g} $$
(these are exact decimal fractions, there are denominators of 100 and 25). Subtracting the two equations gives:
$$ \epsilon_1 = {\epsilon\over .91} $$
(still exact fractions) Which gives the equation
$$ (-1.04 + {.06\over .91} ) \epsilon = {C\over g}$$
So the factor in front is $.974$, making the result about $2.05$ times the naive estimate, close to the observed factor of 2. This gives an equatorial diameter that exceeds the polar diameter by 44.3 km, as opposed to the observed 42.73 km, which is close enough that the model essentially explains everything you wanted to know.
The value of $\epsilon_1$ is also interesting, it tells you that the Earth's core is 9% more eccentric than the outer ellipsoid of the Earth itself. Given that the accuracy of the model is at the 3% level, this should be very accurate. |
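The two-layer arithmetic above can be checked numerically (a sketch; the Earth parameters are assumed round values, so the final number differs slightly from the text's 44.3 km):

```python
import math

# Solve the two-layer equations from the text (in units of C/g):
#   -1.04*eps + 0.06*eps1 = 1
#    0.96*eps - 1.76*eps1 = 1
a11, a12, b1 = -1.04, 0.06, 1.0
a21, a22, b2 = 0.96, -1.76, 1.0
det = a11 * a22 - a12 * a21
eps = (b1 * a22 - a12 * b2) / det
eps1 = (a11 * b2 - b1 * a21) / det

print(eps1 / eps)   # ≈ 1.099 = 1/0.91: the core is about 9% more oblate

# Assumed round Earth numbers for the equatorial-polar diameter difference
omega, R, g = 2 * math.pi / 86164.0, 6.371e6, 9.82
C_over_g = omega ** 2 * R / g
diff_km = abs(eps) * C_over_g * 2 * R / 1e3
print(diff_km)      # ≈ 45 km, close to the text's 44.3 (observed ≈ 42.8)
```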
Colloquia for the Department of Mathematics and Statistics are normally held in University Hall 4010 on Fridays at 4:00pm. Any departures from this are indicated below.
Light refreshments are served after the colloquia in 2040 University Hall.
Driving directions, parking information, and maps are available on the university website.
2017-2018 Colloquia
What follows is a list of speakers, talk titles and abstracts for the current academic year. Abstracts for the talks are also posted in the hallways around the departmental offices.
Spring Semester
April 27, 2018
Harm Derksen (University of Michigan)
Constructive Invariant Theory and Noncommutative Rank
Abstract: If $G$ is a group acting on a vector space $V$ by linear transformations, then the invariant polynomial functions on $V$ form a ring. In this talk we will discuss upper bounds for the degrees of generators of this invariant ring. An example of particular interest is the action of the group $SL_n \times SL_n$ on the space of $m$-tuples of $n \times n$ matrices by simultaneous left-right multiplication. In this case, Visu Makam and the speaker recently proved that invariants of degree at most $mn^4$ generate the invariant ring. We will explore an interesting connection between this result and the notion of noncommutative rank.
April 20, 2018
Anthony Vasaturo (University of Toledo)
Invertibility of Toeplitz Operators via Berezin Transforms
Abstract: Motivated by Douglas' question on the Hardy space about the invertibility of Toeplitz operators via the Berezin transforms of their symbols, we answer a related question about the invertibility of Toeplitz operators with certain measure symbols with respect to the Berezin transforms of these measures. In particular, we will consider the cases when these measures are Carleson measures on the Bergman, Bargmann-Fock, and Model spaces.
April 13, 2018
Jonathan Hall (Michigan State University)
Transpositions and Algebras
Abstract: The symbiotic relationship between groups and algebras goes back at least to Sophus Lie, who introduced Lie algebras to support the study of Lie groups. Weyl noted that finite groups generated by reflections - typified by symmetric groups and their transpositions - play a major role in the Lie classification. Similar interplay exists to this day, even in the realm of vertex operator algebras and the related Monster sporadic group. I will survey and discuss some of this current interaction.
April 6, 2018
Lizhen Ji (University of Michigan, Ann Arbor)
Bernhard Riemann and his moduli space
Abstract: Though Riemann only published a few papers in his lifetime, he introduced many notions which have had long lasting impact on mathematics. One is the concept of Riemann surfaces, and another is the related notion of moduli space of Riemann surfaces.
The road to formulate precisely the moduli space $M_g$ of compact Riemann surfaces of genus $g$ and to understand its meaning is long and complicated, and mathematicians are still working hard to understand its structures and properties from the perspectives of geometry, topology, analysis etc. Its analogy and connection with locally symmetric spaces have provided an effective way to study these problems.
In this talk, I will describe some historical aspects which may not be so well-known and some recent results on the interaction between moduli spaces and locally symmetric spaces.
March 23, 2018
Tian Chen (University of Toledo)
Marginalized Zero-Inflated Count Models for Overdispersed and Correlated Count Data with Missing Value
Abstract: Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) are widely used to model such responses. However, the interpretation of those models focuses on the "at-risk" subpopulation and fails to provide direct inference about the marginal effect for the overall population. Recently, new approaches have been proposed to facilitate marginal inference for count responses with excess zeros. However, they are likelihood based and therefore impose strong assumptions on the underlying distributions. In this paper, we propose a new distribution-free alternative that provides robust inference for the marginal effect when population mixtures are defined by zero-inflated count outcomes. The proposed method is also extended to the longitudinal case with missing data. We illustrate the proposed approach with both simulated and real study data.
March 16, 2018
Luen-Chau Li (The Pennsylvania State University)
Isospectral flows for shock clustering and Burgers turbulence
Abstract: In recent work, Menon and Srinivasan showed that the study of hyperbolic conservation laws with a certain class of random initial conditions give rise to isospectral flows, i.e., flows which preserve the spectrum of the underlying operator. In this talk, we will report on progress made in this program in the last few years. In particular, we will show that in the case of pure-jump initial data with a finite number of states, the flow is conjugate to a straight line motion, and is exactly solvable.
Fall Semester November 17, 2017
Yuri Berest (Cornell University)
Topological representation theory
Abstract: Deep connections between representation theory and low-dimensional topology became apparent in the late 80's, after the discovery of the Jones polynomial and its generalizations related to quantum groups. In recent years, new types of connections and, in fact, an entirely new paradigm of interactions between representation theory and topology have emerged. The study of these connections is part of a nascent area of research which might be called topological representation theory. By analogy with geometric representation theory, where classical representations of Lie algebras and groups are constructed by means of algebraic geometry, topological representation theory produces objects of representation-theoretic interest from topological surfaces and 3-manifolds, using tools of geometric topology.
In this talk, I will discuss one simple example by constructing some natural topological representations of double affine Hecke algebras from knot complements in $S^3$. This construction leads to an intriguing multidimensional generalization of the classical Jones polynomials. The talk is based on joint work with P. Samuelson.
November 3, 2017
Gerard Thompson (University of Toledo)
The origin of Lie symmetry methods for Differential Equations and the rise of abstract Lie algebras
Abstract: In this talk we shall focus on the origins of Lie theory and discuss several examples that could easily occur in Math 2860, to which the Lie symmetry method is applicable. Thereafter we shall trace the development of the theory of abstract Lie algebras and its importance in theoretical physics. Then, as time permits, we shall revisit the Lie symmetry method as it is still used today.
October 20, 2017
Alexander (Oleksandr) Tovstolis (University of Central Florida)
On Bernstein and Nikolskiı̆ Type Inequalities, and Poisson Summation Formula in Hardy Spaces
Abstract: We consider Hardy spaces $H^p(T_{\Gamma})$ in tube domains over open cones $(T_{\Gamma} \subset \mathbb{C}^n)$. When $p \ge 1$, these spaces have properties very similar to those of Lebesgue $L^p(\mathbb{R}^n)$ spaces. When $p < 1$, the situation is dramatically different. These spaces are not even normed (just pre-normed). However, they have very interesting properties related to the Fourier transform. These properties make those spaces much nicer than their "brothers" with $p > 1$. And it is possible to obtain general results (for any $p$) from those for $p \le 1$, which can be obtained more easily.
I am going to give a flavor of this idea showing how Fourier multipliers can be used. In particular, we will see how to obtain Bernstein and Nikolskiı̆ type inequalities for entire functions of exponential type $K$ belonging to $H^p (T_{\Gamma} )$.
Another result (joint work with Dr. Xin Li) for Hardy spaces $H^p (T_{\Gamma} )$ with $p \in (0, 1]$ is the Poisson summation formula:
$$\sum_{m \in \Lambda} f (z + m) = \sum_{m \in \Lambda} \hat{f} (m) e^{2\pi i(z,m)} , \forall z \in T_{\Gamma} .$$
The formula holds without any additional assumptions. Moreover, the series in both sides of this formula are analytic functions in $T_{\Gamma}$.
October 13, 2017
Alexander Odesskii (Brock University, Canada)
Deformations of complex structures on Riemann surfaces and Lie algebroids
Abstract: We obtain variational formulas for holomorphic objects on Riemann surfaces with respect to arbitrary local coordinates on the moduli space of complex structures. These formulas are written in terms of a canonical object on the moduli space which corresponds to the pairing between the space of quadratic differentials and the tangent space to the moduli space. This canonical object satisfies certain commutation relations which can be understood as a Lie algebroid.
September 29, 2017
Elmas Irmak (Bowling Green State University)
Simplicial Maps of Complexes of Curves and Mapping Class Groups of Surfaces
Abstract: I will talk about recent developments on simplicial maps of complexes of curves on both orientable and nonorientable surfaces. The talk will mainly be about a joint work with Prof. Luis Paris, where we prove that on a compact, connected, nonorientable surface of genus at least 5, any superinjective simplicial map from the two-sided curve complex to itself is induced by a homeomorphism that is unique up to isotopy. I will also talk about an application in the mapping class groups.
Shoemaker Lecture Series September 11-13, 2017
Miroslav Englis (Mathematics Institute, Czech Academy of Sciences -- Prague)
Lecture 1: An excursion into Berezin-Toeplitz quantization and related topics
September 11 (Monday), 4:00-5:00pm in GH 5300
Abstract: From the beginning, mathematical foundations of quantum mechanics have traditionally involved a lot of operator theory, with geometry, groups and their representations, and other themes thrown in not long afterwards. With the advent of deformation quantization, cohomology of algebras and related disciplines have also entered. The talk will discuss an elegant quantization procedure which is based on methods from analysis of several complex variables. Further highlights include connections to Lie group representations or related developments for harmonic functions.
Lecture 2: Arveson-Douglas conjecture and Toeplitz operators
September 12 (Tuesday) 4:00-5:00pm in FH 1270
Abstract: A basic problem in multivariable operator theory is finding appropriate "models" for tuples of operators. For the case of commuting tuples, this is resolved by a nice theory developed by William Arveson, and the question of the "size" of the commutators of the model operators with their adjoints is the subject of the Arveson-Douglas conjecture. Though the latter is still open in full generality at the moment, we give a proof of the conjecture in a special case, using methods verging on microlocal analysis and complex analysis of several variables. The same machinery can also be used to get (criteria for traceability and) formulas for the Dixmier trace of Toeplitz and Hankel operators, a theme of importance in Connes' noncommutative geometry.
Lecture 3: Reproducing kernels and distinguished metrics
September 13 (Wednesday), 4:00-5:00pm in GH 5300
Abstract: Two classical distinguished Hermitian metrics on a complex domain are the Bergman metric, coming from the reproducing kernel of the space of square-integrable holomorphic functions, and the Poincare metric, i.e. a Kähler-Einstein metric with prescribed (natural) behaviour at the boundary. In the setting of compact Kähler manifolds rather than domains, the so-called balanced metrics were introduced some time ago by S. Donaldson, building on earlier works of S.T. Yau and G. Tian. The talk will discuss the questions of existence and uniqueness of balanced metrics on (noncompact) complex domains, where some answers remain unknown even for the simplest case of the unit disc.
There will be a reception on Monday immediately following the talk at Libbey Hall from 5:00-7:00pm. |
This answer addresses the trivial part of the question, that is multiplicative stability, while providing
non-trivial examples of saturated sets $k(R)$ under two rather restrictive conditions: $R$ is a Dedekind domain or a polynomial ring over a unique factorization domain (UFD).
It is easy to show that $k(R)$ is multiplicatively closed.
Claim. Let $R$ be an integral domain. Then $k(R)$ is multiplicatively closed.
Proof. Let $x, y \in k(R)$ and let $b \in R$. If $Rx + Rb = Ry + Rb = R$, then $Rxy + Rb = R$ since $1 \in (Rx + Rb)(Ry + Rb) \subset Rxy + Rb$. Otherwise, there is a non-invertible element $d \in R$ such that $Rx + Rb \subset Rd$ or $Ry + Rb \subset Rd$. In both cases, we have $Rxy + Rb \subset Rd$, which completes the proof.
I fail to find an integral domain $R$ such that $k(R)$ is not saturated. The two propositions below provide conditions under which $k(R)$ is saturated, yielding non-trivial examples of sets $k(R)$. By trivial, I mean one of the two possibilities $k(R) = R^{\times}$ (the unit group of $R$) or $k(R) = R \setminus \{0\}$. The latter is the subject of a theorem of Robert Gilmer and William Heinzer [1, Corollary 2]:
Gilmer-Heinzer's Theorem. Let $R$ be an integral domain. If $k(R) = R \setminus \{0\}$ and $R$ satisfies the ascending chain condition on principal ideals (ACCP), then $R$ is a principal ideal domain (PID).
The authors define the
Kummer condition, or condition $(K)$, as $k(R) = R \setminus \{0\}$. According to the references of [1], condition $(K)$ is at the core of Ernst Kummer's error in his attempted proof that Fermat's Last Theorem holds for regular primes. Kummer would have assumed that $k(R) = R \setminus \{0\}$ holds for $R = \mathbb{Z}[\zeta_n]$ with $\zeta_n = e^{\frac{2i\pi}{n}}$ or $R = \mathbb{Z}[\zeta_n][X]$. It is nowadays well-known that either of these two Noetherian domains can fail to be a UFD, e.g., for $n = 23$.
The following remarks will come soon in handy.
Remark 1. Let $R$ be an integral domain. Then $a \in k(R)$ if and only if $Ra + Rb = R$ (in other words, $a$ is coprime with $b$) whenever $a$ and $b$ have no non-invertible common divisor (in other words, $\text{gcd}(a, b) = 1$).
Remark 2. Let $R$ be an integral domain. Then $k(R)$ contains
the multiplicative submonoid $M(R)$ of $R \setminus \{0\}$ generated by the units of $R$ and the prime elements $p \in R$ such that $Rp$ is a maximal ideal of $R$. The monoid $M(R)$ is saturated in the sense that, for every $x, y \in R$, $xy \in M(R)$ implies $x \in M(R)$.
Remark 3. Let $R$ be an atomic domain. If $k(R)$ is saturated, then $k(R) = M(R)$.
Remark 4. Let $R$ be an integral domain. Let $p$ be a prime element of $R$ and let $n > 0$ be such that $p^n \in k(R)$. Then $p \in k(R)$.
We establish first that $k(R)$ is saturated for $R$ an arbitrary Dedekind domain.
Proposition 1. Let $R$ be a Dedekind domain. Then $k(R) = M(R)$. In particular, $k(R)$ is saturated.
Proof. Given $a \in R \setminus \{0\}$, we can write
$Ra = \mathfrak{m}_1^{k_1}\mathfrak{m}_2^{k_2} \cdots \mathfrak{m}_n^{k_n}$, where each $\mathfrak{m_i}$ is a maximal ideal of $R$ and $k_i \ge 1$ an integer. Let us assume that one of the ideals $\mathfrak{m_i}$, say $\mathfrak{m_1}$, is not principal. Then we can find $b \in R$ such that $\mathfrak{m_1} = Ra + Rb$ (see, e.g., this MSE post). The elements $a$ and $b$ don't have any common non-invertible divisor, since otherwise $\mathfrak{m_1}$ wouldn't be maximal. Therefore $a \notin k(R)$. Apply now Remarks 1 and 2 to complete the proof.
The class of polynomial rings over UFDs provide other examples of sets $k(R)$ which are saturated.
Proposition 2. Let $S$ be a principal ideal domain (PID) and set $R \Doteq S[X]$. Then $k(R)$ is saturated. More precisely, we have
$k(R) = R^{\times}$ if $S$ has infinitely many prime elements up to multiplication by units.
$k(R) = R \setminus \{0\}$ if $S$ is a field.
$k(R) = Rp_1 \cdots p_n + R^{\times}$ if $S$ has exactly $n \ge 1$ prime elements $p_1, \dots, p_n$ up to multiplication by units.
Proof.
Let us assume first that $S$ has infinitely many non-associated prime elements. Let $P(X) \in k(R)$. Since $S$ is a PID, there are infinitely many non-associated prime elements of $S$ which don't divide $P(X)$. Let $p$ be one of them. By hypothesis, $RP(X) + Rp = R$. Thus the reduction of $P(X)$ modulo $p$ is a unit of $R/pR = (S/Sp)[X]$. From this we infer that $p$ divides every coefficient of $P(X)$ except the coefficient of degree zero. Since this holds for infinitely many $p$, the polynomial $P(X)$ is constant. As $RP(X) + RX = R$ must hold, we deduce that $P(X) \in S^{\times} = R^{\times}$. Conversely, any unit of $R$ belongs to $k(R)$. Thus $k(R) = R^{\times}$.
Let us assume now that $S$ has only finitely many non-associated prime elements and let $p$ be one of them. Given $P(X) \in k(R)$, we can find $s \in S$ such that $X - s$ doesn't divide $P(X)$, so that $RP(X) + R(X - s) = R$. Therefore $P(s)$ is a unit of $S$ and hence $p$ cannot divide $P(X)$. From this we infer that $RP(X) + Rp = R$, i.e., $P(X) \in Rp + R^{\times}$. Since this holds for any prime $p$ of $S$, we deduce from the Chinese Remainder Theorem that $P(X) \in R \pi + R^{\times}$, where $\pi$ is the product of $n$ non-associated prime elements of $S$.
We shall establish the converse statement, i.e., that an arbitrary element $P(X)$ of $R \pi + R^{\times}$ lies in $k(R)$. Let $Q(X) \in R$. If $P(X)$ and $Q(X)$ have a common non-invertible divisor, then $RP(X) + RQ(X)$ is contained in a proper ideal of $R$. Thus we can assume that the greatest common divisor (gcd) of $P(X)$ and $Q(X)$ is $1$. Since the image of $\pi$ in $A \Doteq R/RP(X)$ is a unit, the ring $A$ is isomorphic to $F/FP(X)$, where $F = \text{Frac}(S)[X]$ and $\text{Frac}(S)$ is the fraction field of $S$. As $\text{gcd}(P(X), Q(X)) = 1$, we have $FP(X) + FQ(X) = F$, i.e., the image of $Q(X)$ in $A$ is a unit. Equivalently, $RP(X) + RQ(X) = R$, as desired.
In order to check that $k(R)$ is saturated in the latter case, consider $P(X), Q(X) \in R$ such that $P(X)Q(X) \in k(R)$. If $P(X)$ doesn't lie in $k(R)$, then its reduction modulo $p$ for some prime $p$ of $S$ is not a unit of $(S/pS)[X]$. As the reduction of $P(X)Q(X)$ modulo $p$ is a unit, we obtain a contradiction. Hence $P(X) \in k(R)$.
Finally, if $S$ has no prime element, i.e., $S$ is a field, then $R$ is a principal ideal domain. The result follows.
The above proof shows actually a bit more than what was stated:
Statement. Let $S$ be a unique factorization domain (UFD) and set $R \Doteq S[X]$. Then $k(R)$ is the set of polynomials $P(X) \in R$ such that the reduction of $P(X)$ modulo $p$ is a unit of $(S/Sp)[X]$ for every prime element $p$ of $S$. In particular $k(R)$ is saturated.
Remark 5. Let $R$ be a UFD. Then $x \in k(R)$ if and only if $x$ is congruent either to zero or to a unit modulo $Rp$ for every prime element $p \in R$.
Remark 6. Let $R$ be a UFD. Let $x$ and $y$ be two distinct primes of $R$. If $xy \in k(R)$, then $Rx + Ry = R$.
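To make Proposition 2 concrete in the simplest case $S = \mathbb{Z}$ (which has infinitely many primes, so $k(\mathbb{Z}[X]) = \{\pm 1\}$): by Remark 1, $X \notin k(\mathbb{Z}[X])$, because $\gcd(X, 2) = 1$ while the ideal $(X, 2)$ is proper. The following brute-force sketch (the degree and coefficient bounds are arbitrary) illustrates why $1 \notin (X, 2)$: the constant term of $f \cdot X + g \cdot 2$ is $2g(0)$, which is always even.

```python
from itertools import product

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

# Search for f, g in Z[X] (degree <= 2, coefficients in {-2,...,2})
# with f*X + g*2 = 1; none exists, since the constant term of the
# combination is 2*g(0), hence even.
X, two = [0, 1], [2]
found = False
for f in product(range(-2, 3), repeat=3):
    for g in product(range(-2, 3), repeat=3):
        s = poly_add(poly_mul(list(f), X), poly_mul(list(g), two))
        while len(s) > 1 and s[-1] == 0:  # trim trailing zero coefficients
            s.pop()
        if s == [1]:
            found = True
print(found)  # False: (X, 2) is a proper ideal, so X is not in k(Z[X])
```
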
[1] R. Gilmer, W. Heinzer, "Principal ideal rings and a condition of Kummer", 1982. |
Let's take the usual definition of a spectral risk measure.
If we look at the integral we see that spectral risk measures have the property that the risk measure of a random variable $X$ can be represented by a combination of the quantiles of $X$.
Since the quantile function is rather friendly one gets that every spectral risk measure is also a coherent risk measure.
Examples are the expected value and the expected shortfall (CVaR). In those cases, the spectral representation yields a very convenient way to approximate the measure by simply weighing the quantiles of our dataset. That yields the following questions:
Are there any other known measures that have a spectral representation? If we relax the assumptions on the spectrum $\phi$, can we obtain (approximative sequences of) other (possibly non-coherent) risk measures?
EDIT: In reaction to the comment by @Joshua Ulrich I want to provide an example of what I want to achieve and some more details.
Example: the Conditional Value at Risk. We have the following formula: $\text{CVaR}_\alpha(X) = -\frac{1}{\alpha}\int_0^{\alpha}F^{-1}_X(p)dp$. From a sample $X_i$, $i=1,\ldots,N$, we can calculate the CVaR by averaging the order statistics that fall in the $\alpha$-tail of the sample. This measure has a spectral representation with $\phi(p) = \frac{1}{\alpha}$ for $p \in [0,\alpha]$ and $\phi(p) = 0$ for $p \in (\alpha, 1]$. So it's easy to check: the CVaR is a spectral measure.
Obviously, the "order statistics + weighted average" procedure does not only work for the CVaR, it works for all spectral measures: From the definition of spectral measure we see that, after discretizing the integral, we have an approximation of the measure that is a linear combination of quantiles which is very easy to compute.
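The "order statistics + weighted average" recipe can be sketched as follows (the sample and the discretization grid are illustrative assumptions, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal(100_000)   # hypothetical P&L sample (standard normal)
alpha = 0.05

# 1) Direct estimate: average the alpha-tail of the order statistics.
xs = np.sort(X)
k = int(round(alpha * len(xs)))
cvar_tail = -xs[:k].mean()

# 2) Spectral estimate: weight each order statistic by phi(p) with
#    phi(p) = 1/alpha on [0, alpha] and 0 elsewhere, then integrate.
p = (np.arange(len(xs)) + 0.5) / len(xs)   # mid-point quantile levels
phi = np.where(p <= alpha, 1.0 / alpha, 0.0)
cvar_spectral = -(phi * xs).mean()         # discretizes -∫ phi(p) F^{-1}(p) dp

# Theory for a standard normal at alpha = 5%: pdf(z_alpha)/alpha ≈ 2.06.
print(cvar_tail, cvar_spectral)
```

The two estimates coincide up to discretization, which is exactly the point: any spectrum $\phi$ plugged into step 2 yields a linear combination of quantiles that is trivial to compute from a sorted sample.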
In fact it's so easy that I would like to compute as many risk measures as possible this way (very convenient if you use Monte Carlo or scenarios, for example). For the computation alone, I don't need all the assumptions about $\phi$, so let's forget about them for a moment and see what else we can calculate this way.
About this question, I found this in Wikipedia: nuclear fission produces neutrons with a mean energy of 2 MeV (200 TJ/kg, i.e. 20,000 km/s), which qualifies as "fast". I only know a few details: the electron volt can apparently be converted to a velocity, and it is closely related to energy density, but I don't know the conversion formula from electron volts to velocity.
The electron volt is an energy unit: 1 eV = $1.602\times 10^{-19}$ J. It can be used in the context of the kinetic energy of a particle, the potential energy of a system, the mass-energy of a system or particle, and so on. In your situation, it sounds like it is being used as a kinetic energy.
From there, you need to determine whether the particle is likely to be moving relativistically or not. If the kinetic energy, $K$, is greater than 10% of the mass energy ($mc^2$), then you should probably use $$K=mc^2\left(\frac{1}{\sqrt{1-v^2/c^2}}-1\right).$$
If $K$ is less than 10%, you can get away with $$K=\frac{mv^2}{2}.$$
For small particles, the energy is sometimes conveniently expressed in eV - the energy an electron gets when accelerated through a 1 Volt field - and that's equivalent to about $1.6\cdot 10^{-19}~\rm{J}$
For the energy level you are talking about, you're in the relativistic regime. At low velocities, you can say the kinetic energy is given by $E=\frac12 mv^2$ from which it would follow that $v=\sqrt{\frac{2E}{m}}$; if that gives you a velocity that is close to (or greater than) the speed of light, you need to make a relativistic adjustment.
The energy is $1.6\cdot 10^{-19}\times 2\cdot 10^6~\rm{J}$, and the mass of the neutron is roughly $1.6\cdot 10^{-27}~\rm{kg}$.
At 2 MeV, your neutron is still OK - the above equation gives 20,000 km/s which is considerably less than the speed of light, 300,000 km/s. So there's no need to make the relativistic correction (which would change the result by about 0.2%). |
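Putting the numbers above into both formulas (a sketch; the constants are rounded approximations):

```python
import math

# Rounded physical constants (approximate values).
eV = 1.602e-19        # J per electron volt
K = 2.0e6 * eV        # 2 MeV of kinetic energy, in joules
m = 1.675e-27         # neutron mass, kg
c = 2.998e8           # speed of light, m/s

# Non-relativistic: K = m v^2 / 2  =>  v = sqrt(2 K / m)
v_classical = math.sqrt(2 * K / m)

# Relativistic: K = m c^2 (gamma - 1)  =>  gamma = 1 + K / (m c^2)
gamma = 1.0 + K / (m * c**2)
v_relativistic = c * math.sqrt(1.0 - 1.0 / gamma**2)

print(v_classical / 1e3, v_relativistic / 1e3)  # both ≈ 20,000 km/s, differing by ~0.2%
```
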
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
The Church-Turing thesis is about (partial) functions $\mathbb{N} \to \mathbb{N}$ (or $\Sigma^* \to \Sigma^*$ for a finite alphabet $\Sigma$). How do you define a definite value based on some random output (for a given input)? There is at most one (random) output occurring with probability $>\frac{1}{2}$, so taking that output if it exists (and undefined otherwise) as definite value could be a reasonable definition. (One might argue that $\frac{1}{2}$ is too small for physical implementation and one should take $\frac{2}{3}$ instead, but I guess that is not really important for this answer.) The Church-Turing thesis now asserts that even this extended notion of computability for (partial) functions $\mathbb{N} \to \mathbb{N}$ still leads to the same set of computable functions.
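The majority-output convention above can be simulated directly; in this sketch the randomized machine, its success probability of $2/3$, and the junk outputs are all hypothetical:

```python
import random

random.seed(1)

def randomized_machine(x):
    # Hypothetical randomized machine: outputs f(x) = x + 1 with
    # probability 2/3, and an arbitrary "junk" value otherwise.
    return x + 1 if random.random() < 2 / 3 else random.randrange(100)

def definite_value(machine, x, trials=10_000):
    # Define the machine's "definite value" on x as the output that
    # occurs with empirical frequency > 1/2, if such an output exists.
    counts = {}
    for _ in range(trials):
        y = machine(x)
        counts[y] = counts.get(y, 0) + 1
    winner, cnt = max(counts.items(), key=lambda kv: kv[1])
    return winner if cnt > trials / 2 else None  # None models "undefined"

val = definite_value(randomized_machine, 41)
print(val)  # with overwhelming probability this is 42
```

Repeating the run only sharpens the estimate, which is why such a random source does not enlarge the class of computable functions $\mathbb{N} \to \mathbb{N}$.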
If we are willing to leave the strict setting of (partial) functions $\mathbb{N} \to \mathbb{N}$, then we risk those toasting toast discussions. (Based on the older mustard watches or montres à moutarde parody by Jean-Yves Girard.) I preferred Martin Berger's and Neel Krishnaswami's position in that discussion over Scott Aaronson's position, but almost everybody else seems to agree with Scott. I guess even Andrej Bauer would be unable to change that outcome. Scott's position is not meant as a parody:
To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be ... err ...
THE CHURCH-TURING THESIS, the very same one you've been ragging on?
I previously wrote "So if a theoretical machine had access to a random source which its opponent could not predict (and could conceal its internal state from its opponent), then this theoretical machine would be more powerful than a Turing machine." But that argument takes place in a game-theoretic setting (an idealization of some real-world scenario) different from the context of the Church-Turing thesis. The simulation argument (explained by Martin Berger in the above discussion) could reduce that setting back to TMs by simulating all interactions between the random-source-enhanced Turing machines, but that misses my original point, that the random source is a separate idealization which can fail in its own ways. But if even clear concepts like higher-level formalisms get dismissed, then there is little point in elaborating such fine interpretative points.
It might be possible to keep the "$\mathbb{N} \to \mathbb{N}$" part, and replace the "(partial) functions" part with something else (I am thinking here of an analogy Fermions <-> "(partial) functions", Bosons <-> "something else"), but the Church-Turing thesis would probably still hold in such modified settings. |
Here's a neat one from optimization: the Alternating Direction Method of Multipliers (ADMM) algorithm.
Given a separable convex objective function in two blocks of variables (each block may itself be a vector) and a linear constraint coupling the two blocks:
$$\min\, f_1(x_1) + f_2(x_2) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 = b$$
The Augmented Lagrangian function for this optimization problem would then be $$ L_{\rho}(x_1, x_2, \lambda) = f_1(x_1) + f_2(x_2) + \lambda^T (A_1 x_1 + A_2 x_2 - b) + \frac{\rho}{2}|| A_1 x_1 + A_2 x_2 - b ||_2^2 $$
The ADMM algorithm roughly works by performing a "Gauss-Seidel" splitting on the augmented Lagrangian function for this optimization problem by minimizing $L_{\rho}(x_1, x_2, \lambda)$ first with respect to $x_1$ (while $x_2, \lambda$ remain fixed), then by minimizing $L_{\rho}(x_1, x_2, \lambda)$ with respect to $x_2$ (while $x_1, \lambda$ remain fixed), then by updating $\lambda$. This cycle goes on until a stopping criterion is reached.
(Note: some researchers such as Eckstein discard the Gauss-Seidel splitting view in favor of proximal operators; for example, see http://rutcor.rutgers.edu/pub/rrr/reports2012/32_2012.pdf )
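As a sanity check, here is a minimal sketch of the two-block iteration on a toy consensus instance I made up ($f_1(x_1) = (x_1 - a)^2$, $f_2(x_2) = (x_2 - b)^2$, constraint $x_1 - x_2 = 0$, i.e. $A_1 = 1$, $A_2 = -1$, $b = 0$), chosen so that each minimization has a closed form:

```python
# Toy consensus instance: minimize (x1 - a)^2 + (x2 - b)^2  s.t.  x1 = x2.
# The optimum is x1 = x2 = (a + b) / 2.
a, b, rho = 0.0, 4.0, 1.0
x1 = x2 = lam = 0.0
for _ in range(200):
    # x1-step: argmin_{x1} (x1 - a)^2 + lam*x1 + (rho/2)*(x1 - x2)^2
    x1 = (2 * a - lam + rho * x2) / (2 + rho)
    # x2-step: argmin_{x2} (x2 - b)^2 - lam*x2 + (rho/2)*(x1 - x2)^2
    x2 = (2 * b + lam + rho * x1) / (2 + rho)
    # dual ascent on the constraint residual x1 - x2
    lam += rho * (x1 - x2)
print(x1, x2)  # both converge to (a + b) / 2 = 2
```

Each closed-form step comes from setting the gradient of $L_\rho$ with respect to that block to zero while the other block and $\lambda$ are held fixed.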
For convex problems, this algorithm has been proven to converge - for two sets of variables. This is not the case for three variables. For example, consider the optimization problem
$$\min\, f_1(x_1) + f_2(x_2) + f_3(x_3) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 + A_3 x_3 = b$$
Even if all the $f_i$ are convex, the ADMM-like approach (minimizing the augmented Lagrangian with respect to each variable $x_i$ in turn, then updating the dual variable $\lambda$) is NOT guaranteed to converge, as was shown in this paper:
https://web.stanford.edu/~yyye/ADMM-final.pdf |
I'm not sure specifically what you're looking for here. Noise is typically described via its power spectral density, or equivalently its autocorrelation function; the autocorrelation function of a random process and its PSD are a Fourier transform pair. White noise, for example, has an impulsive autocorrelation; this transforms to a flat power spectrum in the Fourier domain.
Your example (while somewhat impractical) is analogous to a communication receiver that observes carrier-modulated white noise at a carrier frequency of $ 2 \omega $. The example receiver is quite fortunate, as it has an oscillator that is coherent with that of the transmitter; there is no phase offset between the sinusoids generated at the modulator and demodulator, allowing for the possibility of "perfect" downconversion to baseband. This isn't impractical on its own; there are numerous structures for coherent communications receivers. However, noise is typically modeled as an additive element of the communication channel that is uncorrelated with the modulated signal that the receiver seeks to recover; it would be rare for a transmitter to actually transmit noise as part of its modulated output signal.
With that out of the way, though, a look at the mathematics behind your example can explain your observation. In order to get the results that you describe (at least in the original question), the modulator and demodulator have oscillators that operate at an identical reference frequency and phase. The modulator outputs the following:
$$
\begin{align}
n(t) &\sim \mathcal{N}(0, \sigma^2) \\
x(t) & = n(t) \sin(2\omega t)
\end{align}
$$
The receiver generates the downconverted I and Q signals as follows:
$$
\begin{align}
I(t) &= x(t) \sin(2 \omega t) = n(t) \sin^2(2 \omega t)\\
Q(t) &= x(t) \cos(2 \omega t) = n(t) \sin(2 \omega t) \cos(2 \omega t)
\end{align}
$$
Some trigonometric identities can help flesh out $ I(t) $ and $ Q(t) $ some more:
$$
\begin{align}
\sin^2(2 \omega t) &= \frac{1 - \cos(4 \omega t)}{2}\\
\sin(2 \omega t) \cos(2 \omega t) &= \frac{\sin(4 \omega t) + \sin(0)}{2} = \frac{1}{2} \sin(4 \omega t)
\end{align}
$$
Now we can rewrite the downconverted signal pair as:
$$
\begin{align}
I(t) &= n(t) \frac{1 - \cos(4 \omega t)}{2}\\
Q(t) &= \frac{1}{2} n(t) \sin(4 \omega t)
\end{align}
$$
The input noise is zero-mean, so $ I(t) $ and $ Q(t) $ are also zero-mean. This means that their variances are:
$$
\begin{align}
\sigma^{2}_{I(t)} &= \mathbb{E}(I^2(t)) = \mathbb{E}\left(n^2(t) \left[\frac{1 - \cos(4 \omega t)}{2}\right]^2\right) = \mathbb{E}\left(n^2(t)\right) \mathbb{E}\left(\left[\frac{1 - \cos(4 \omega t)}{2}\right]^2\right) \\
\sigma^{2}_{Q(t)} &= \mathbb{E}(Q^2(t)) = \mathbb{E}\left(\tfrac{1}{4} n^2(t) \sin^2(4 \omega t)\right) = \tfrac{1}{4}\mathbb{E}\left(n^2(t)\right) \mathbb{E}\left(\sin^2(4 \omega t)\right)
\end{align}
$$
You noted the ratio between the variances of $ I(t) $ and $ Q(t) $ in your question. It can be simplified to:
$$
\frac{\sigma^{2}_{I(t)}}{\sigma^{2}_{Q(t)}} = \frac{\mathbb{E}\left(\left[\frac{1 - \cos(4 \omega t)}{2}\right]^2\right)}{\frac{1}{4}\mathbb{E}\left(\sin^2(4 \omega t)\right)}
$$
The expectations are taken over the random process $ n(t) $'s time variable $ t $. Since the functions are deterministic and periodic, this is really just equivalent to the mean-squared value of each sinusoidal function over one period; for the values shown here, you get a variance ratio of $ \frac{3/8}{1/8} = 3 $, i.e. a factor of $ \sqrt{3} $ between the standard deviations, as you noted. The fact that you get more noise power in the I channel is an artifact of noise being modulated coherently (i.e. in phase) with the demodulator's own sinusoidal reference. Based on the underlying mathematics, this result is to be expected. As I stated before, however, this type of situation is not typical.
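A quick Monte Carlo check of that time-averaged ratio (a sketch; $\theta$ stands in for $2\omega t$ sampled uniformly over many periods):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400_000
theta = rng.uniform(0.0, 2.0 * np.pi, N)  # stands in for 2*omega*t over many periods
n = rng.standard_normal(N)                # white Gaussian noise, sigma = 1

x = n * np.sin(theta)                     # "transmitted" modulated noise
I = x * np.sin(theta)                     # n(t) sin^2(2 w t)
Q = x * np.cos(theta)                     # n(t) sin(2 w t) cos(2 w t)

ratio = I.var() / Q.var()
print(ratio)  # ≈ (3/8) / (1/8) = 3, i.e. sqrt(3) between standard deviations
```
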
Although you didn't directly ask about it, I wanted to note that this type of operation (modulation by a sinusoidal carrier followed by demodulation of an identical or nearly-identical reproduction of the carrier) is a fundamental building block in communication systems. A real communication receiver, however, would include an additional step after the carrier demodulation: a lowpass filter to remove the I and Q signal components at frequency $ 4 \omega $. If we eliminate the double-carrier-frequency components, the ratio of I energy to Q energy looks like:
$$
\frac{\sigma^{2}_{I(t)}}{\sigma^{2}_{Q(t)}} = \frac{\sigma^2 / 4}{0} = \infty
$$
This is the goal of a coherent quadrature modulation receiver: signal that is placed in the in-phase (I) channel is carried into the receiver's I signal with no leakage into the quadrature (Q) signal.
Edit: I wanted to address your comments below. For a quadrature receiver, the carrier frequency would in most cases be at the center of the transmitted signal bandwidth, so instead of being bandlimited to the carrier frequency $ \omega\ $, a typical communications signal would be bandpass over the interval $ [\omega - \frac{B}{2}, \omega + \frac{B}{2}] $, where $ B $ is its modulated bandwidth. A quadrature receiver aims to downconvert the signal to baseband as an initial step; this can be done by treating the I and Q channels as the real and imaginary components of a complex-valued signal for subsequent analysis steps.
With regard to your comment on the second-order statistics of the cyclostationary $ x(t) $, you have an error. The cyclostationary nature of the signal is captured in its autocorrelation function. Let the function be $ R(t, \tau) $:
$$
R(t, \tau) = \mathbb{E}(x(t)x(t - \tau))
$$
$$
R(t, \tau) = \mathbb{E}(n(t)n(t - \tau) \sin(2 \omega t) \sin(2 \omega(t - \tau)))
$$
$$
R(t, \tau) = \mathbb{E}(n(t)n(t - \tau)) \sin(2 \omega t) \sin(2 \omega(t - \tau))
$$
Because of the whiteness of the original noise process $ n(t) $, the expectation (and therefore the entire right-hand side of the equation) is zero for all nonzero values of $ \tau $.
$$
R(t, \tau) = \sigma^2 \delta(\tau) \sin^2(2 \omega t)
$$
The autocorrelation is no longer just a simple impulse at zero lag; instead, it is time-variant and periodic because of the sinusoidal scaling factor. This causes the phenomenon that you originally observed, in that there are periods of "high variance" in $ x(t) $ and other periods where the variance is lower. The "high variance" periods are selected by demodulating by a sinusoid that is coherent with the one used to modulate it, which stands to reason. |
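To see the $\sqrt 3$ amplitude ratio numerically, here is a small simulation sketch (my own example, not part of the original answer; it assumes unit-variance white Gaussian noise, an arbitrary carrier frequency, and sample times drawn uniformly over many carrier periods):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500_000
omega = 2 * np.pi * 10.0                 # arbitrary carrier frequency (assumption)
t = rng.uniform(0.0, 100.0, N)           # sample times spread over many periods
n = rng.standard_normal(N)               # white noise, sigma^2 = 1

x = n * np.sin(2 * omega * t)            # noise modulated onto the carrier
I = x * np.sin(2 * omega * t)            # coherent (in-phase) demodulation
Q = x * np.cos(2 * omega * t)            # quadrature demodulation

# E[sin^4] / E[sin^2 cos^2] = (3/8) / (1/8) = 3, i.e. an amplitude ratio of sqrt(3)
power_ratio = I.var() / Q.var()
print(power_ratio, np.sqrt(power_ratio))
```

The estimated power ratio comes out near 3, matching the mean-squared values of the deterministic sinusoidal factors.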
Discussing Stirling's approximation, Wolfram Mathworld article mentions a modification of it due to Gosper: instead of the usual
$$n!\approx n^ne^{-n}\sqrt{2n\pi},$$
we have a tiny addition of $\frac13$ to $2n$ under the radical sign:
$$n!\approx n^ne^{-n}\sqrt{\left(2n+\frac13\right)\pi}.$$
This radically improves the precision of approximation for all $n\ge0$. But all the places I've seen discussing it don't explain how it was obtained. Wolfram Mathworld just says about Gosper's modification
...a better approximation to n! (i.e., one which approximates the terms in Stirling's series instead of truncating them)...
but it doesn't make me understand how this change was derived.
So, how to derive this modification? And is there a further improvement to the whole Stirling series, or is it just "whole series stuffed into one expression"? |
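While this doesn't answer the derivation question, the claimed improvement is easy to verify numerically; here is a quick sketch using only the standard library:

```python
import math

def stirling(n):
    # classic Stirling approximation: n^n e^{-n} sqrt(2 n pi)
    return n**n * math.exp(-n) * math.sqrt(2 * n * math.pi)

def gosper(n):
    # Gosper's modification: 2n -> 2n + 1/3 under the radical
    return n**n * math.exp(-n) * math.sqrt((2 * n + 1/3) * math.pi)

for n in range(1, 11):
    exact = math.factorial(n)
    err_s = abs(stirling(n) - exact) / exact
    err_g = abs(gosper(n) - exact) / exact
    print(n, f"{err_s:.2e}", f"{err_g:.2e}")
```

Even at $n=1$, Gosper's form is off by only about 0.4%, versus roughly 8% for the plain Stirling formula.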
Topological Methods in Nonlinear Analysis Topol. Methods Nonlinear Anal. Volume 29, Number 1 (2007), 181-198. On lifespan of solutions to the Einstein equations Abstract
We investigate the issue of existence of maximal solutions to the vacuum Einstein equations for asymptotically flat spacetime. Solutions are established globally in time outside the domain of influence of a suitably large compact set, where singularities can appear. Our approach shows existence of metric coefficients which obey the following behavior: $g_{\alpha\beta}=\eta_{\alpha\beta}+O(r^{-\delta})$ for a small fixed $\delta > 0$ at infinity (where $\eta_{\alpha\beta}$ is the Minkowski metric). The system is studied in the harmonic (wavelike) gauge.
Article information Source Topol. Methods Nonlinear Anal., Volume 29, Number 1 (2007), 181-198. Dates First available in Project Euclid: 13 May 2016 Permanent link to this document https://projecteuclid.org/euclid.tmna/1463144892 Mathematical Reviews number (MathSciNet) MR2308221 Zentralblatt MATH identifier 1136.83007 Citation
Mucha, Piotr Bogusław. On lifespan of solutions to the Einstein equations. Topol. Methods Nonlinear Anal. 29 (2007), no. 1, 181--198. https://projecteuclid.org/euclid.tmna/1463144892 |
Tagged: invertible matrix Problem 583
Consider the $2\times 2$ complex matrix
\[A=\begin{bmatrix} a & b-a\\ 0& b \end{bmatrix}.\] (a) Find the eigenvalues of $A$. (b) For each eigenvalue of $A$, determine the eigenvectors. (c) Diagonalize the matrix $A$.
(d) Using the result of the diagonalization, compute and simplify $A^k$ for each positive integer $k$. Problem 582
A square matrix $A$ is called
nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample. Problem 562
An $n\times n$ matrix $A$ is called
nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements. (a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. (b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then: The matrix $B$ is nonsingular. The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.) Problem 552
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
(a) $A=\begin{bmatrix} 1 & 3 & -2 \\ 2 &3 &0 \\ 0 & 1 & -1 \end{bmatrix}$ (b) $A=\begin{bmatrix} 1 & 0 & 2 \\ -1 &-3 &2 \\ 3 & 6 & -2 \end{bmatrix}$. Problem 548
An $n\times n$ matrix $A$ is said to be
invertible if there exists an $n\times n$ matrix $B$ such that $AB=I$, and $BA=I$,
where $I$ is the $n\times n$ identity matrix.
If such a matrix $B$ exists, then it is known to be unique and called the
inverse matrix of $A$, denoted by $A^{-1}$.
In this problem, we prove that if $B$ satisfies the first condition, then it automatically satisfies the second condition.
So if we know $AB=I$, then we can conclude that $B=A^{-1}$.
Let $A$ and $B$ be $n\times n$ matrices.
Suppose that we have $AB=I$, where $I$ is the $n \times n$ identity matrix.
Prove that $BA=I$, and hence $A^{-1}=B$.
Problem 546
Let $A$ be an $n\times n$ matrix.
The $(i, j)$
cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$.
The matrix $\Adj(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\]
For each of the following matrices, determine whether it is invertible, and if so, then find the invertible matrix using the above formula.
(a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$. (b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$. Problem 506
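As a numerical sanity check, the cofactor/adjugate construction can be carried out directly with NumPy (a sketch, assuming `numpy` is available) and compared against matrix (a) above:

```python
import numpy as np

def adjugate_inverse(A):
    """Invert A via the adjugate formula A^{-1} = Adj(A) / det(A)."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)   # (i, j) minor matrix
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)        # (i, j) cofactor
    return C.T / np.linalg.det(A)                               # Adj(A) = C^T

A = np.array([[1, 5, 2], [0, -1, 2], [0, 0, 1]], dtype=float)   # matrix (a) above
Ainv = adjugate_inverse(A)
print(np.allclose(A @ Ainv, np.eye(3)))                         # True
```

Since $A$ is triangular with $\det(A)=-1\neq 0$, it is invertible and the product recovers the identity.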
Let $A$ be an $n\times n$ invertible matrix. Then prove the transpose $A^{\trans}$ is also invertible and that the inverse matrix of the transpose $A^{\trans}$ is the transpose of the inverse matrix $A^{-1}$.
Namely, show that \[(A^{\trans})^{-1}=(A^{-1})^{\trans}.\] Problem 500
10 questions about nonsingular matrices, invertible matrices, and linearly independent vectors.
The quiz is designed to test your understanding of the basic properties of these topics.
You can take the quiz as many times as you like.
The solutions will be given after completing all the 10 problems.
Click the View question button to see the solutions. Problem 452
Let $A$ be an $n\times n$ complex matrix.
Let $S$ be an invertible matrix. (a) If $SAS^{-1}=\lambda A$ for some complex number $\lambda$, then prove that either $\lambda^n=1$ or $A$ is a singular matrix. (b) If $n$ is odd and $SAS^{-1}=-A$, then prove that $0$ is an eigenvalue of $A$.
(c) Suppose that all the eigenvalues of $A$ are integers and $\det(A) > 0$. If $n$ is odd and $SAS^{-1}=A^{-1}$, then prove that $1$ is an eigenvalue of $A$. Problem 438
Determine whether each of the following statements is True or False.
(a) If $A$ and $B$ are $n \times n$ matrices, and $P$ is an invertible $n \times n$ matrix such that $A=PBP^{-1}$, then $\det(A)=\det(B)$. (b) If the characteristic polynomial of an $n \times n$ matrix $A$ is \[p(\lambda)=(\lambda-1)^n+2,\] then $A$ is invertible. (c) If $A^2$ is an invertible $n\times n$ matrix, then $A^3$ is also invertible. (d) If $A$ is a $3\times 3$ matrix such that $\det(A)=7$, then $\det(2A^{\trans}A^{-1})=2$. (e) If $\mathbf{v}$ is an eigenvector of an $n \times n$ matrix $A$ with corresponding eigenvalue $\lambda_1$, and if $\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_2$, then $\mathbf{v}+\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_1+\lambda_2$.
(Stanford University, Linear Algebra Exam Problem)
Suppose we want to measure reasoning ability. For participant \(i\), the true reasoning ability score is \(T_{i}\) and the observed score is \(y_{i}\). If there is no measurement error, we would expect that
\[y_{i}=T_{i}.\]
However, we often cannot perfectly measure something like reasoning ability because of measurement error. With measurement error, the above equation becomes
\[y_{i}=T_{i}+e_{i},\]
where \(e_{i}\) is the difference between the observed score and the true reasoning ability score for participant \(i\). Measurement error is almost always present in a measurement. It is caused by unpredictable fluctuations in the data collection, and it can show up as different results for the same repeated measurement.
It is often assumed that the mean of \(e_{i}\) is equal to 0. We need to estimate the variance of \(e_{i}\). Note that we ignore the systematic errors here. The measurement error discussed here is purely random error.
We can express the measurement error using a path diagram shown below. Both the true score and the measurement error are unobserved. The only quantity that is available is the observed score \(y\). From the relationship, we can easily see that
\[ Var(y)=Var(T) + Var(e). \]
That is, the observed variance is equal to the sum of the true score variance and the measurement error variance. Note that reliability is defined as
\[ reliability = \frac{Var(T)}{Var(T) + Var(e)}.\]
Measurement error can be estimated by comparing multiple measurements, and reduced by averaging multiple measurements.
The most well known influence of measurement error is the attenuation of a relationship. For example, it can lead to reduced correlation between two variables if the two variables are observed with measurement error. In terms of regression analysis, it results in attenuated regression slope estimates, which is also known as regression dilution.
We can illustrate this through an example on correlation. The path diagram for the example is given below.
Note that from the diagram, we have
\[ X = \xi + \delta_1 \text{ and } Y = \eta + \delta_2.\]
The variances of the true scores $\xi$ and $\eta$ are $\sigma_{\xi}^2$ and $\sigma_{\eta}^2$, respectively. The variances of the measurement errors $\delta_1$ and $\delta_2$ are $\sigma_{1}^2$ and $\sigma_{2}^2$, respectively. The covariance between $\xi$ and $\eta$ is $\sigma_{\xi \eta}$.
The correlation between the true scores is
\[ \rho_{\xi\eta}=\frac{COV(\xi,\eta)}{\sigma_{\xi}\sigma_{\eta}}=\frac{\sigma_{\xi\eta}}{\sigma_{\xi}\sigma_{\eta}}\]
and the correlation between the observed scores is
\[\rho_{XY}=\frac{COV(X,Y)}{\sqrt{VAR(X)VAR(Y)}}=\frac{\sigma_{\xi\eta}}{\sqrt{(\sigma_{\xi}^{2}+\sigma_{1}^{2})(\sigma_{\eta}^{2}+\sigma_{2}^{2})}}.\]
Clearly, \( \rho_{XY} < \rho_{\xi \eta} \) whenever the error variances are positive.
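The attenuation is easy to check by simulation (a sketch; the numbers used here — true correlation 0.8, unit true-score variances, error variances 0.5 — are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
rho_true = 0.8                                   # correlation between xi and eta
cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
xi, eta = rng.multivariate_normal([0, 0], cov, size=N).T

X = xi + rng.normal(0, np.sqrt(0.5), N)          # observed scores, error variance 0.5
Y = eta + rng.normal(0, np.sqrt(0.5), N)

rho_obs = np.corrcoef(X, Y)[0, 1]                # attenuated correlation
rho_pred = rho_true / np.sqrt(1.5 * 1.5)         # formula above: 0.8 / 1.5
print(rho_obs, rho_pred)
```

The observed correlation lands near 0.53, well below the true correlation of 0.8, in agreement with the formula.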
If we know the variance of measurement errors, we can correct the influences by including measurement errors in a model. With only a single indicator for the latent variable \(T\) (the true score variable), we cannot estimate the variance of measurement errors. For example, for the measurement error model, we have one pieces of information – the variance of \(y\). However, we need to estimate the variance of \(T\) and the variance of \(e\). Thus, we are short of information. If we have multiple indicators of \(T\), we can estimate the measurement error variance and the variance of \(T\). This leads to factor models.
Factor analysis is a statistical method for studying the dimensionality of a set of variables/indicators. Factor analysis examines how underlying constructs influence the responses on a number of measured variables/indicators. It can effectively handle/model measurement errors. There are basically two types of factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA).
A typical factor analysis model expresses a set of observed variables \(y_{j}(j=1,\ldots,p)\) as a function of factors \(f_{k}(k=1,\ldots,m)\) and residuals/measurement errors/unique factors \(e_{j}(j=1,\ldots,p)\). Specifically, we have
\begin{eqnarray*} y_{i1} & = & \lambda_{11}f_{i1}+\lambda_{12}f_{i2}+\ldots+\lambda_{1m}f_{im}+e_{i1}\\& \ldots\\ y_{ij} & = & \lambda_{j1}f_{i1}+\lambda_{j2}f_{i2}+\ldots+\lambda_{jm}f_{im}+e_{ij}\\ & \ldots\\ y_{ip} & = & \lambda_{p1}f_{i1}+\lambda_{p2}f_{i2}+\ldots+\lambda_{pm}f_{im}+e_{ip} \end{eqnarray*}
where $\lambda_{jk}$ is a factor loading (the regression coefficient of $y_{j}$ on $f_{k}$) and $f_{ik}$ is the factor score of Person $i$ on the $k$th factor.
In a factor model, each observed variable (indicator \(y_{1}\) through indicator \(y_{p}\)) is influenced by both the underlying common factors \(f\) (factor 1 through factor \(m\)), and the underlying unique factors \(e\) (error 1 through error \(p\)). The strength of the link between each factor and each indicator, measured by the factor loading, varies, such that a given factor influences some indicators more than others. Factor analyses can be performed by examining the pattern of correlations (or covariances) among the observed variables. Measures that are highly correlated (either positively or negatively) are likely influenced by the same factors, while those that are relatively uncorrelated are likely influenced by different factors. |
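To make the identifiability point concrete: with three indicators of a single factor, the loadings and error variances become estimable from the covariances alone. The sketch below uses my own assumed loadings (0.9, 0.8, 0.7), unit factor variance, and error standard deviation 0.6:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
lam = np.array([0.9, 0.8, 0.7])                  # assumed true loadings
f = rng.standard_normal(N)                       # common factor, Var(f) = 1
e = rng.normal(0, 0.6, (3, N))                   # unique factors (error sd 0.6)
y = lam[:, None] * f + e                         # three observed indicators

S = np.cov(y)                                    # sample covariance matrix
# For a one-factor model, cov(y_i, y_j) = lam_i * lam_j (i != j), so:
lam1_hat = np.sqrt(S[0, 1] * S[0, 2] / S[1, 2])
err_var1_hat = S[0, 0] - lam1_hat**2             # estimated error variance of y_1
print(lam1_hat, err_var1_hat)
```

The recovered loading is close to 0.9 and the recovered error variance close to 0.36, which is exactly the information a single indicator cannot provide.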
Energy conservation does work perfectly in general relativity. The overall Lagrangian is invariant under time translations and Noether's Theorem can be used to derive a non-trivial and exact conserved current for energy. The only thing that makes general relativity a little different from electromagnetism is that the time translation symmetry is part of a larger gauge symmetry so time is not absolute and can be chosen in many ways. However there is no problem with the derivation of conserved energy with respect to any given choice of time translation.
There is a long and interesting history to this problem. Einstein gave a valid formula for the energy in the gravitational field shortly after publishing general relativity. The mathematicians Hilbert and Klein did not like the coordinate dependence in Einstein's formulation and claimed it reduced to a trivial identity. They enlisted Noether to work out a general formalism for conservation laws and claimed that her work supported their view.
The debate continued for many years especially in the context of gravitational waves which some people claimed did not exist. They thought that the linearised solutions for gravitational waves were equivalent to flat space via co-ordinate transformations and that they carried no energy. At one point even Einstein doubted his own formalism, but later he returned to his original view that energy conservation holds up. The issue was finally resolved when exact non-linear gravitational wave solutions were found and it was shown that they do carry energy. Since then this has even been verified empirically to very high precision with the observation of the slowing down of binary pulsars in exact agreement with the predicted radiation of gravitational energy from the system.
The formula for energy in general relativity is usually given in terms of pseudotensors such as those proposed by Landau & Lifshitz, Dirac, Weinberg or Einstein himself. Wikipedia has a good article on these and how they confirm energy conservation. Although pseudotensors are mathematically rigorous objects which can be understood as sections of jet bundles, some people don't like their apparent co-ordinate dependence. There are other covariant approaches such as the Komar superpotential or a more general formula of mine which gives the energy current in terms of the time translation vector $k^{\mu}$ as
$ J^{\mu}_G = \frac{1}{16\pi G} (k^{\mu}R - 2k^{\mu}\Lambda - 2{{k^{\alpha}}_{;\alpha}}^{\mu} + {{k^{\alpha}}_{;}}^{\mu}_{\alpha}+ {{k^{\mu}}_{;}}^{\alpha}_{\alpha})$
Despite these general formulations of energy conservation in general relativity there are some cosmologists who still take the view that energy conservation is only approximate or that it only works in special cases or that it reduces to a trivial identity. In each case these claims can be refuted either by studying the formulations I have referenced or by comparing the arguments given by these cosmologists with analogous situations in other gauge theories where conservation laws are accepted and follow analogous rules.
One area of particular contention is energy conservation in a homogeneous cosmology with cosmic radiation and a cosmological constant. Despite all the contrary claims, a valid formula for energy conservation in this case can be derived from the general methods and is given by this equation.
$ E = Mc^2 + \frac{\Gamma}{a} + \frac{\Lambda c^2}{\kappa}a^3 - \frac{3}{\kappa}\dot{a}^2a - Ka = 0$
$a(t)$ is the universal expansion factor as a function of time, normalised to 1 at the current epoch.
$E$ is the total energy in an expanding region of volume $a(t)^3$. This always comes to zero in a perfectly homogeneous cosmology.
$M$ is the total mass of matter in the region
$c$ is the speed of light
$\Gamma$ is the density of cosmic radiation normalised to the current epoch
$\Lambda$ is the cosmological constant, thought to be positive.
$\kappa$ is the gravitational coupling constant
$K$ is a constant that is positive for spherical closed space, negative for hyperbolic space and zero for flat space.
The first two terms describe the energy in matter and radiation, with the matter energy not changing and the radiation energy decreasing as the universe expands; both are positive. The third term is "dark energy", which is currently thought to be positive and to contribute about 75% of the non-gravitational energy, a share that increases with time. The final two terms represent the gravitational energy, which is negative to balance the other terms.
This equation holds as a consequence of the well-known Friedmann cosmological equations, that come from the Einstein field equations, so it is in no sense trivial as some people have claimed it must be. |
A note on a superlinear and periodic elliptic system in the whole space
1.
Department of Mathematics, Yunnan Normal University, Kunming 650092 Yunnan, China
2.
Office of Adult Education, Simao Teacher's College, Simao 665000 Yunnan, China
3.
Department of Mathematics, Yunnan Normal University, Kunming 650092 Yunnan
$ -\Delta u+V(x)u=g(x,v)$ in $R^N,$
$ -\Delta v+V(x)v=f(x,u)$ in $R^N,$
$ u(x)\to 0$ and $v(x)\to 0$ as $|x|\to\infty,$
where the potential $V$ is periodic and bounded from below by a positive constant, and $f(x,t)$ and $g(x,t)$ are periodic in $x$ and superlinear but subcritical in $t$ at infinity. By using the generalized Nehari manifold method, the existence of a positive ground state solution is obtained, as well as multiple solutions when $f$ and $g$ are odd.
Mathematics Subject Classification:Primary: 35J50; Secondary: 35J5. Citation:Shuying He, Rumei Zhang, Fukun Zhao. A note on a superlinear and periodic elliptic system in the whole space. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1149-1163. doi: 10.3934/cpaa.2011.10.1149
Detailed error analysis for a fractional Adams method with graded meshes. Affiliation: Lvliang University; University of Chester. Publication Date: 2017-09-21
Abstract: We consider a fractional Adams method for solving the nonlinear fractional differential equation $\, ^{C}_{0}D^{\alpha}_{t} y(t) = f(t, y(t)), \, \alpha >0$, equipped with the initial conditions $y^{(k)} (0) = y_{0}^{(k)}, k=0, 1, \dots, \lceil \alpha \rceil -1$. Here $\alpha$ may be an arbitrary positive number, $\lceil \alpha \rceil$ denotes the smallest integer no less than $\alpha$, and the differential operator is the Caputo derivative. Under the assumption $\, ^{C}_{0}D^{\alpha}_{t} y \in C^{2}[0, T]$, Diethelm et al. \cite[Theorem 3.2]{dieforfre} introduced a fractional Adams method with the uniform meshes $t_{n}= T (n/N), n=0, 1, 2, \dots, N$ and proved that this method has the optimal convergence order uniformly in $t_{n}$, that is, $O(N^{-2})$ if $\alpha > 1$ and $O(N^{-1-\alpha})$ if $\alpha \leq 1$. They also showed that if $\, ^{C}_{0}D^{\alpha}_{t} y(t) \notin C^{2}[0, T]$, the optimal convergence order of this method cannot be obtained with the uniform meshes. However, it is well known that for $y \in C^{m} [0, T]$ for some $m \in \mathbb{N}$ and $0 < \alpha < 1$, the Caputo derivative $\, ^{C}_{0}D^{\alpha}_{t} y$ typically behaves as $t^{\sigma}$ with $0< \sigma <1$ and so is not in $C^{2}[0, T]$. Using graded meshes, we show that the optimal convergence order of this method can be recovered uniformly in $t_{n}$ even if $\, ^{C}_{0}D^{\alpha}_{t} y$ behaves as $t^{\sigma}, 0< \sigma <1$. Numerical examples are given to show that the numerical results are consistent with the theoretical results. Citation: Yanzhi, L., Roberts, J., & Yan, Y. (2018). Detailed error analysis for a fractional Adams method with graded meshes. Numerical Algorithms, 78(4), 1195-1216. https://doi.org/10.1007/s11075-017-0419-5 Publisher: Springer Journal: Numerical Algorithms Type: Article Language: en Description: The final publication is available at Springer via http://dx.doi.org/10.1007/s11075-017-0419-5 ISSN: 1572-9265
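The uniform-mesh predictor–corrector scheme analyzed above (the fractional Adams method of Diethelm et al.) can be sketched in a few lines. The test problem below, with $\alpha = 1/2$ and exact solution $y(t)=t^{2}$, is my own illustrative choice, not an example from the paper:

```python
import math
import numpy as np

def frac_adams(f, alpha, y0, T, N):
    """Predictor-corrector fractional Adams method on the uniform mesh
    t_n = T*(n/N), for the Caputo problem D^alpha y = f(t, y), 0 < alpha < 1."""
    h = T / N
    t = np.arange(N + 1) * h
    y = np.zeros(N + 1); y[0] = y0
    fv = np.zeros(N + 1); fv[0] = f(t[0], y0)
    c1 = h**alpha / math.gamma(alpha + 1)          # predictor (rectangle-rule) scale
    c2 = h**alpha / math.gamma(alpha + 2)          # corrector (trapezoid-rule) scale
    for n in range(N):
        j = np.arange(n + 1)
        b = (n + 1 - j)**alpha - (n - j)**alpha    # predictor weights
        yp = y0 + c1 * np.dot(b, fv[:n + 1])
        a = ((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
             - 2 * (n - j + 1)**(alpha + 1))       # corrector weights for j >= 1
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        y[n + 1] = y0 + c2 * (np.dot(a, fv[:n + 1]) + f(t[n + 1], yp))
        fv[n + 1] = f(t[n + 1], y[n + 1])
    return t, y

# Test problem: alpha = 1/2, exact solution y = t^2, using
# D^{1/2} t^2 = Gamma(3)/Gamma(5/2) t^{3/2}; the (y - t^2) term exercises the predictor.
t, y = frac_adams(lambda t, y: 2 / math.gamma(2.5) * t**1.5 + (y - t**2),
                  0.5, 0.0, 1.0, 200)
print(np.max(np.abs(y - t**2)))                    # small discretization error
```

Replacing the uniform mesh by a graded one, $t_n = T(n/N)^r$, is the modification the paper analyzes for non-smooth Caputo derivatives.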
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/ |
Why is there a need to perform holographic renormalization for the normal $AdS_5\times S^5$/CFT$_4$ correspondence if the brane theory is conformal? Since the flow along the AdS direction $r$ is related to the renormalization scale does this not explicitly introduce an "energy" parameter that breaks conformal invariance in the SYM side of the duality?
The fact that the boundary theory is conformal means that renormalization does not induce running of the coupling. However, there are divergences which have to be regularized and renormalized. The regularization requires the introduction of an arbitrary scale, which is not Weyl invariant and leads to a conformal anomaly (in even dimensions).
Correspondingly, the bulk theory also has to be regularized by introducing a cutoff $\epsilon$ on the radial coordinate. The supergravity fields have to be expanded close to the boundary, and local counterterms have to be introduced to subtract the divergences when taking the limit $\epsilon\to 0$. For the metric, the regularization procedure requires picking a reference metric $g_{(0)}$ from the conformal structure on the boundary. For $d$ (the boundary dimension) even, the dependence of the counterterm on the chosen reference metric leads to a renormalized Lagrangian that is not Weyl invariant. One picks up exactly the expected Weyl anomaly.
This is a very neat example of a connection of boundary UV physics (the cutoff) and bulk IR physics (divergences close to the boundary) which lead to the same Weyl anomaly.
For details, see the paper by Henningson and Skenderis. There are also these very instructive lecture notes on holographic renormalization with the example of renormalization of the action of a massive bulk scalar.
Addendum: Example of why CFT correlators need regularization/renormalization. It is well known that conformal invariance greatly restricts the form of CFT correlation functions. For example, two-point functions of a scalar operator $\mathcal{O}$ are restricted to $$ \langle\mathcal{O}(x)\mathcal{O}(0)\rangle=\frac{C}{x^{2\Delta}} $$ where $\Delta$ is the scaling dimension of $\mathcal{O}$ and $C$ is a normalization constant. It is much less known, though, that this is only a bare correlator and is not valid at $x^{2}=0$. A correlator should be a well-defined distribution and have a well-defined Fourier transform: $$G(p)=\int d^dx\, e^{-ipx}\frac{C}{x^{2\Delta}}=\frac{C\pi^{d/2}2^{d-2\Delta}\Gamma\left(\frac{d-2\Delta}{2}\right)}{\Gamma(\Delta)}p^{2\Delta-d}.$$ Since the $\Gamma$-function is undefined for negative integer arguments, we can see that regularization is necessary when $\Delta=\frac{d}{2}+n$, where $n$ is a positive integer. This can be done, for example, using dimensional regularization. After the addition of a counterterm in the action, the correlator becomes $$G(p)=p^{2\Delta-d}\left(C_1 \log\frac{p^2}{\mu^2}+C_2\right),$$ which is clearly a scale-dependent expression. Scale invariance is an anomalous symmetry in the full quantum theory. However, in $\mathcal{N}=4$ super Yang-Mills the coupling is protected from running by supersymmetry. So it is not conformal symmetry that leads to a vanishing $\beta$-function, but SUSY. The vanishing of the $\beta$-function for this particular theory is discussed here, including some references.
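As a concrete check of the momentum-space formula in a non-anomalous case (my own choice of $d=3$, $\Delta=1$, $C=1$, $p=1$, where no regularization is needed), the angular integration reduces the Fourier transform to a radial integral that can be evaluated numerically, assuming the `mpmath` package is available:

```python
import mpmath as mp

d, Delta = 3, 1
# Closed form quoted above, with C = 1 and p = 1:
closed = (mp.pi**(mp.mpf(d) / 2) * mp.mpf(2)**(d - 2 * Delta)
          * mp.gamma(mp.mpf(d - 2 * Delta) / 2) / mp.gamma(Delta))
# Direct transform: in d = 3, ∫ d^3x e^{-i p·x}/x^2 = (4π/p) ∫_0^∞ sin(p r)/r dr.
direct = 4 * mp.pi * mp.quadosc(lambda r: mp.sin(r) / r, [0, mp.inf], period=2 * mp.pi)
print(closed, direct)    # both equal 2*pi^2
```

Both evaluations give $2\pi^{2}$, confirming the Gamma-function prefactor for this case.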
All celestial bodies have a gravitational well, and a particle in the vicinity of this well would feel a gravitational force. My questions are:
a) How can I find the thickness of such a gravitational well?
b) Shouldn't subatomic particles, e.g. an electron, be confined to the potential well of a planet or a star?
c) Is quantum mechanical tunnelling a possibility for the electron (here) to get over the potential barrier?
E.g.: A black hole of infinite mass in the presence of another body becomes completely transparent quantum mechanically ($\Pi = 1$). But in an Aharonov–Bohm-like effect, if we consider two systems, each with a black hole (B.H.) and a concentric shell, facing each other, the result can be a tunneling probability $\Pi$ greater than 0.
Consider a simplified model of a black hole facing a body of mass $M_{2}$: $M_{2}$ is centered at $R$, opposite a black hole of mass $M$ centered at the origin. Since tunneling is greatest near the top of the barrier, the deviation from a $\frac{1}{r}$ potential toward the center of each body is not critical. The potentials used are those of two point masses, so $M_{2}$ may also be a black hole. Thus two little black holes may get quite close for maximum tunneling radiation. Solving the Schrödinger equation outside the black hole:
$$-\frac{\hbar^{2}}{2m}D^{2}\psi=-\left[-\frac{GmM}{r}-\frac{GmM_{2}}{R-r}-E\right]\psi$$ in the region $a\leq r\leq b$, where $a$ and $b$ are the classical turning points, and $$E=-\frac{GmM}{a}-\frac{GmM_{2}}{R-a}=-\frac{GmM}{b}-\frac{GmM_{2}}{R-b}.$$ Since the transmission probability is $\Pi=e^{-2\Delta \gamma}$, $$\Delta \gamma=\frac{m}{\hbar}\sqrt{\frac{2GM}{d}}\left[\sqrt{b(b-d)}-\sqrt{a(a-d)}-d\ln\left|\frac{\sqrt{b}+\sqrt{b-d}}{\sqrt{a}+\sqrt{a-d}}\right|\right].$$ Here $d=\frac{Ma(R-a)R}{M(R-a)R+M_{2}a^{2}}$, and we take $R\gg b$ and $M_{2}\gg M$.
Thus, $\Delta \gamma$ approaches zero as $a$ approaches $b$, yielding $\Pi$ approaching 1. When $M$ approaches zero, or $M_{2}$ approaches infinity, or equivalently $\frac{M}{M_{2}}$ approaches zero, $\Delta \gamma$ approaches zero and $\Pi$ approaches one. Hence quantum tunnelling could be observed.
For a better detailed derivation refer here |
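As a numerical sanity check of the limiting behavior claimed above, the bracketed factor in $\Delta\gamma$ can be evaluated directly (prefactor set to 1, arbitrary units; the values of $a$, $b$, $d$ below are arbitrary choices with $a,b>d$):

```python
import math

def barrier_factor(a, b, d):
    """Bracketed factor in the Delta-gamma expression above (prefactor omitted)."""
    return (math.sqrt(b * (b - d)) - math.sqrt(a * (a - d))
            - d * math.log((math.sqrt(b) + math.sqrt(b - d))
                           / (math.sqrt(a) + math.sqrt(a - d))))

print(barrier_factor(2.0, 3.0, 1.0))      # wide barrier: factor well above zero
print(barrier_factor(2.0, 2.0001, 1.0))   # a -> b: factor -> 0, so Pi = e^{-2*dg} -> 1
```

The factor shrinks smoothly to zero as the turning points merge, consistent with $\Pi\to 1$.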
4 Methods to Account for Radiation in Participating Media
Radiative heat transfer in semitransparent media is described by the radiative transfer equation (RTE). Solving this equation is challenging in terms of computational costs. However, depending on a medium’s radiation properties, simplifications exist that allow the solving of such models in a fraction of the time. This blog post gives an overview of the available methods and when they can be applied.
Defining the Radiative Transfer Equation
An incident beam that travels in direction \Omega through a participating medium interacts with the medium. Part of its intensity, I(\Omega), is absorbed by the fraction \kappa I(\Omega), where \kappa (m^{-1}) is the absorption coefficient. Another fraction is scattered in another direction, \sigma_s I(\Omega), where \sigma_s(m^{-1}) is the scattering coefficient. The intensity in a given direction is attenuated by scattering in a different direction and augmented by radiation coming from a different direction. This is described by:
(1)

\Omega\cdot\nabla I(\Omega) = -\sigma_s I(\Omega) + \frac{\sigma_s}{4\pi}\int_{4\pi}I(\Omega^{\prime})\phi(\Omega^{\prime},\Omega)\,d\Omega^{\prime}
where \phi(\Omega^{\prime},\Omega) is the scattering phase function that describes the probability of a ray from direction \Omega^{\prime} being scattered into direction \Omega. The medium itself can emit radiation in all directions by the factor \kappa I_b, where I_b is the intensity of a blackbody.
Radiation interacting with a semitransparent medium.
All of these effects are fully described by an integro-differential equation called RTE:
(2)

\Omega\cdot\nabla I(\Omega) = \kappa I_b - (\kappa+\sigma_s)I(\Omega) + \frac{\sigma_s}{4\pi}\int_{4\pi}I(\Omega^{\prime})\phi(\Omega^{\prime},\Omega)\,d\Omega^{\prime}
The key to solving this equation lies in the approximation of the scattering integral. In combination with heat transfer, the incident radiation
(3)
and the radiative heat flux
(4)
are important quantities.
Before we discuss the different methods to solve this equation, we introduce another quantity that describes the participating medium — the optical thickness or optical depth:
(5)
It describes how transparent the medium is to radiation. If \tau\ll 1, the medium is called “optically thin”, and if \tau\gg1, “optically thick”.
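As a quick numerical sketch (not from the post), for a homogeneous medium with constant $\kappa$ and $\sigma_s$ along a path of length $L$, the optical thickness reduces to $\tau = (\kappa + \sigma_s)L$. The regime thresholds below are illustrative choices for "much less/greater than 1":

```python
# Optical thickness of a homogeneous medium: tau = (kappa + sigma_s) * L.
# The 0.1 / 10 regime thresholds are illustrative choices, not from the text.
def optical_thickness(kappa, sigma_s, length):
    return (kappa + sigma_s) * length

def regime(tau):
    if tau < 0.1:
        return "optically thin"
    if tau > 10.0:
        return "optically thick"
    return "intermediate"

tau_glass = optical_thickness(kappa=120.0, sigma_s=0.0, length=0.1)  # tau = 12
assert regime(tau_glass) == "optically thick"
assert regime(optical_thickness(5.0, 0.0, 0.01)) == "optically thin"
```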
Methods for Solving the RTE
The following sections give an overview of the four methods available with the COMSOL Multiphysics® software to solve the RTE:
- Discrete ordinates method (DOM)
- P1 approximation
- Rosseland approximation
- Beer–Lambert law
Except for the last one, we owe all of these methods to astrophysics and its analysis of stellar atmospheres. A comprehensive book about this complex topic is
Radiative Heat Transfer by M.F. Modest (Ref. 1). It contains detailed explanations and derivations of the solution methods that go beyond the scope of this blog post.
Method 1: The Discrete Ordinates Method
The most general method of solving the RTE is the DOM. Its name indicates the idea behind this method. The integral over the angular space is divided into discrete directions. Thus, one partial differential equation (PDE) per discrete ordinate remains to be solved for the intensity, I:
(6)
where \mathbf{S}_i is the i-th discrete ordinate and w_j are the quadrature weights.
The details of how to divide the angular space into discrete ordinates are described in this previous blog post. The default S_N method uses a symmetric quadrature set of order N and divides the 3D angular space into N(N+2) directions. The figure below illustrates the discrete ordinates for the symmetric even quadrature set and different orders, N.
Discrete ordinates for the level symmetric even quadrature set from S2 up to S12 (8–168 directions).
The default S4 method is sufficient for many applications but already introduces 24 dependent variables for the intensities. The major advantage of the DOM over the other methods is the high accuracy in arbitrary configurations because of its discretization of the angular space. Additionally, the method can handle various forms of the scattering phase functions: isotropic, linear or polynomial anisotropic, and Henyey–Greenstein.
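The direction counts quoted above follow directly from the N(N+2) rule; this small sketch reproduces them:

```python
# Number of discrete ordinates for the level symmetric even S_N quadrature
# in 3D: N*(N+2) directions, as stated above (S2 up to S12 gives 8-168).
def num_directions(N):
    assert N % 2 == 0 and N >= 2, "S_N quadrature uses even order N >= 2"
    return N * (N + 2)

counts = [num_directions(N) for N in range(2, 13, 2)]
assert counts == [8, 24, 48, 80, 120, 168]
assert num_directions(4) == 24   # default S4: 24 intensity variables
```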
Because this method is computationally expensive and the required memory easily exceeds the available memory on common workstations for complex 3D geometries, we want to talk briefly about a simple way to tweak the solver — the performance index, which is available at the interface level.
Let’s say we need to be very accurate and use the S8 method, which adds 80 additional equations to the model. The solver splits these variables into segregated groups, and each group is computed in a single iterative step before the solver moves on to the next group. The performance index controls the number of groups that are created. For a performance index of 0 (minimum value), 10 groups are created, where each group contains 8 intensity variables. If the performance index is set to 1 (maximum value), each intensity variable goes into a separate, segregated group and the required memory remains low. This approach also works for larger models, but the computational time increases.
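The grouping behaviour described above can be sketched as follows. Only the two endpoints (index 0 gives 10 groups of 8 for the S8 case, index 1 gives one variable per group) come from the text; the linear interpolation in between, and the round-robin assignment, are assumptions made purely for illustration:

```python
# Hypothetical sketch of mapping a performance index in [0, 1] to segregated
# solver groups for n_vars intensity variables. Endpoints match the S8
# example above; the interpolation in between is an assumption.
def segregated_groups(n_vars, perf_index):
    min_groups, max_groups = 10, n_vars
    n_groups = round(min_groups + perf_index * (max_groups - min_groups))
    groups = [[] for _ in range(n_groups)]
    for i in range(n_vars):           # distribute variables round-robin
        groups[i % n_groups].append(i)
    return groups

g0 = segregated_groups(80, 0.0)       # performance index 0
assert len(g0) == 10 and all(len(g) == 8 for g in g0)
g1 = segregated_groups(80, 1.0)       # performance index 1
assert len(g1) == 80 and all(len(g) == 1 for g in g1)
```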
Performance index to control the solver.
Method 2: The P1 Approximation
Instead of using discrete ordinates, the P1 method discretizes the angular space with spherical harmonics, which are eigenfunctions of the Laplace operator in spherical coordinates. The P1 approximation retains only the linear terms, and it follows that the incident radiation G from Eq. (3) satisfies:
(7)
D_\textrm{P1} is the P1 diffusion coefficient, defined as:
(8)
with the linear Legendre coefficient, a_1, for the scattering phase function.
Hence, with the P1 method, isotropic and linear anisotropic scattering can be considered. The second term on the left-hand side corresponds to the radiative heat source, Q_\textrm{r}. Thus, only one additional equation is needed to take radiation transport into account.
Method 3: The Rosseland Approximation
First, let’s recall the stationary heat equation for a medium with density \rho(kg/m^3), heat capacity C_p(J/(kg\cdot K)), and thermal conductivity k(W/(m\cdot K)):
(9)
The first term on the left-hand side is the convective term, and on the right side is the heat source term.
Let’s take a closer look at the conductive term where the heat flux, \mathbf{q}, follows Fourier’s law of heat conduction:
(10)
Getting back to radiation in participating media, light propagation behaves similarly to heat conduction under the assumption of large optical depths (\tau\gg 1), and we can rewrite Eq. (10) as follows:
(11)
with the highly nonlinear “radiative conductivity”, k_\textrm{R}:
(12)
with \beta_\textrm{R}=\kappa+\sigma_\textrm{s} being the Rosseland mean extinction coefficient and \sigma(W/(m^2K^4)), the Stefan–Boltzmann constant.
From a computational point of view, no additional equation is required to account for radiation in participating media. Just a highly nonlinear conductivity term appears. However, the number of problems for which this approximation is valid is limited: mainly, radiation that depends on the temperature and its gradient only and not on its direction or distance to the source, which is valid at very large optical depths. This applies to stellar atmospheres, for which the method was first developed by Rosseland. It is also a common method in the glass industry for a wide range of uses.
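Equation (12) is not reproduced above; a common form of the Rosseland radiative conductivity is k_R = 16σT³/(3β_R), which this sketch uses to illustrate the strong nonlinearity. The numbers are illustrative, not from the post:

```python
# Rosseland radiative conductivity, assuming the common form
# k_R = 16 * sigma * T^3 / (3 * beta_R). Values below are illustrative.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def k_rosseland(T, kappa, sigma_s=0.0):
    beta_R = kappa + sigma_s  # Rosseland mean extinction coefficient
    return 16.0 * SIGMA * T**3 / (3.0 * beta_R)

# The ~T^3 dependence means doubling T multiplies k_R by 8, which is why the
# conductivity term becomes highly nonlinear.
assert abs(k_rosseland(1200.0, 120.0) / k_rosseland(600.0, 120.0) - 8.0) < 1e-9
```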
Because the Rosseland approximation inserts an additional term into Eq. (9), it is available as an extension of the Solid feature and is called Optically Thick Participating Medium.
Subfeature to consider radiation at large optical depths.
Method 4: Beer–Lambert Law
This law is a great simplification of the RTE but still provides an accurate solution if the following conditions are fulfilled:
- The radiation source is described by collimated, almost monochromatic beams
- Refraction, reflection, or scattering in the medium can be neglected
- There is no emission in the wavelength range of the incident beam
This is the case for photometry and the analysis of chemical compositions. The RTE then simplifies to:
(13)
with the beam’s orientation \mathbf{e}_i.
As the beam travels through the medium, energy is absorbed and the radiative heat source term is defined by:
(14)
Hence, the Beer–Lambert law describes the attenuation of the radiation intensity by absorption as the beam travels through the medium.
Verification Examples: Accounting for Radiation in Participating Media
This section shows examples for the different methods we discussed earlier and compares the results for various properties of the participating medium.
Cooling Glass Melt
To compare the DOM, P1 approximation, and Rosseland approximation for a typical scenario, let’s take a look at a glass melt that is cooled down from 600°C to 20°C. Due to the high temperatures, this cooling occurs mainly via radiation. The resulting temperature distribution after 10 seconds is compared for low and high absorption coefficients.
Temperature profile at the centerline and in the glass plate for \kappa=5\ m^{-1}. The Rosseland approximation is not sufficient for a low optical thickness. The P1 approximation provides a very accurate solution.
Temperature profile at the centerline and in the glass plate for \kappa=120\ m^{-1}. The Rosseland approximation provides a reasonable result despite its simplicity. The P1 approximation is still very accurate but not as accurate as it is for a low \kappa.
This example shows that the P1 approximation can be very accurate over a large range of \tau and gives good results for optically thin media as well. It also shows the weakness of the Rosseland approximation for small optical depths that are present at the walls and for a low \kappa. The computational costs for the DOM are significantly higher, taking about 10 times longer to solve than the other methods.
Scattering in a Cylinder
To investigate the accuracy of the P1 approximation and DOM for different scattering effects and wall properties, a verification model is investigated for three different cases:
- Constant surface emissivity, \epsilon_r=0.5, with isotropic scattering
- Radially varying emissivity, \epsilon_r=0.5(1-y/R), with isotropic scattering
- Radially varying emissivity, \epsilon_r=0.5(1-y/R), with linear anisotropic scattering
The scattering albedo \omega=\sigma_s / (\sigma_s+\kappa) is used to parameterize the model.
Case 1: Incident radiation for a varying isotropic scattering albedo in the radial direction.
Case 2: Incident radiation for a varying isotropic scattering albedo in the azimuthal direction.
Case 3: Incident radiation for a varying linear anisotropic scattering along the normalized optical thickness.
This example shows that the P1 approximation approaches the accurate DOM solution for larger optical thicknesses for \omega \rightarrow 1. The error for small optical thickness increases. In particular, for linear anisotropic scattering (case 3), the method reproduces the results only roughly. Nevertheless, the P1 method is still a good approximation, especially if you consider the lower computational effort.
Concluding Thoughts on the Methods in COMSOL Multiphysics®
The methods we discussed cover the entire range of methods for computing radiation in participating media under different assumptions. To conclude this blog post, we summarize our findings for each method:
DOM
- Is the most versatile method, solving the full RTE in discrete directions (up to 512) and providing high accuracy
- Can include complex forms of anisotropic scattering
- Computational costs increase as the number of discrete ordinates increases
P1 method
- Is reasonably accurate for many configurations where the directional aspect of the radiation propagation does not dominate
- Can only include isotropic and linear anisotropic scattering
- Is computationally inexpensive, adding only one additional scalar equation to the system
Rosseland approximation
- Provides good results for very large optical depths
- Cannot consider scattering
- Is computationally inexpensive, using a radiative conductivity in the heat transfer equation
Beer–Lambert law
- Provides good results for applications that fulfill the assumptions of the underlying theory
- Has a narrow range of applications
- Is computationally inexpensive
Additional Resources
For more information about the functionality available for heat transfer modeling, click the button below to go to the Heat Transfer Module product page.
Learn more about modeling radiation by checking out the following tutorials:
Reference: M.F. Modest, Radiative Heat Transfer, Academic Press, 2003.
Computer Science > Computational Complexity
Title: The phase transition in random regular exact cover
(Submitted on 26 Feb 2015 (v1), last revised 4 Mar 2015 (this version, v3))
Abstract: A $k$-uniform, $d$-regular instance of Exact Cover is a family of $m$ sets $F_{n,d,k} = \{ S_j \subseteq \{1,...,n\} \}$, where each subset has size $k$ and each $1 \le i \le n$ is contained in $d$ of the $S_j$. It is satisfiable if there is a subset $T \subseteq \{1,...,n\}$ such that $|T \cap S_j|=1$ for all $j$. Alternately, we can consider it a $d$-regular instance of Positive 1-in-$k$ SAT, i.e., a Boolean formula with $m$ clauses and $n$ variables where each clause contains $k$ variables and demands that exactly one of them is true. We determine the satisfiability threshold for random instances of this type with $k > 2$. Letting $d^\star = \frac{\ln k}{(k-1)(- \ln (1-1/k))} + 1$, we show that $F_{n,d,k}$ is satisfiable with high probability if $d < d^\star$ and unsatisfiable with high probability if $d > d^\star$. We do this with a simple application of the first and second moment methods, boosting the probability of satisfiability below $d^\star$ to $1-o(1)$ using the small subgraph conditioning method.
Submission history
From: Cristopher Moore
[v1] Thu, 26 Feb 2015 15:22:02 GMT (9kb)
[v2] Fri, 27 Feb 2015 01:45:31 GMT (9kb)
[v3] Wed, 4 Mar 2015 17:49:19 GMT (9kb)
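The threshold $d^\star$ from the abstract is a closed-form expression; a small sketch evaluating it for a few $k$:

```python
import math

# Satisfiability threshold from the abstract:
# d* = ln(k) / ((k - 1) * (-ln(1 - 1/k))) + 1, defined for k > 2.
def d_star(k):
    return math.log(k) / ((k - 1) * (-math.log(1.0 - 1.0 / k))) + 1.0

# e.g. k = 3: note -ln(1 - 1/3) = ln(3/2), so d* = ln(3)/(2 ln(3/2)) + 1
assert abs(d_star(3) - (math.log(3) / (2 * math.log(1.5)) + 1)) < 1e-12
assert 2.3 < d_star(3) < 2.4
```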
1. Homework StatementI have phase space coordinates (x0,y0,z0,vx,vy,vz)=(1,0,0,0,1,0). I need to analytically show that these phase space coordinates correspond to a circular orbit.2. Homework Equationsr=sqrt(x^2+y^2+z^2) maybe?3. The Attempt at a SolutionMy core problem here is maybe...
A space vehicle enters the sensible atmosphere of the earth (300,000 ft) with a velocity of 25,000 ft/sec at a flight-path angle of -60 degrees. What is its velocity and flight-path angle at an altitude of 100 nautical miles during descent?(Assuming no drag or perturbations, two body orbital...
Would the velocity of a body which is orbiting another body change due to its radius to the center of gravity? If so, why? A body which moves passed a planet and starts orbiting it should have the same velocity it had before ,regarding the fact that it is orbiting a planet. Also, gravity isn’t...
This is a possible science-fiction scenario, and I'm wondering if it is scientifically plausible.If someone wanted to take a one-way trip into future, say 1000 years from now, then SR gives you a possible way to do it without dying of old age: Just hop in a rocket ship, accelerate to nearly...
1. Homework StatementThe orbits of the planets remains extremely stable over long times however this not always true for comets. Can you explain why not? What hazards might they encounter during their travels?2. Homework EquationsNone3. The Attempt at a SolutionI think that due to the...
1. The problem statement, all variables, and given/known dataSatellites orbiting the Earth are often put into orbit at different heights around the planet. What affect will this have on the motion of the satellite and how exactly would the motion of a very high orbit satellite differ from one...
1. The problem statement, all variables, and given/known dataVenus is sometimes described as either the “Morning Star” or the “Evening Star”, since it can only be seen near sunrise and sunset, very close to the Sun in the sky. Why does Venus always appear close to the Sun in the sky for an...
Hi! first time poster here.I'm making an orbital simulation and I am having a problem with one minor detail.The gravity is working great, and I've programmed it using this formula:A force vector is applied = DirectionOfCentralBodyNormalized * ((GravConstant * centralbodymass *...
I have a satellite orbiting at a velocity of 3850Km/hr around planet earth in a circular orbit, please help me in calculating it's height of orbit. I do know that the closer a satellite is, the faster it's orbital velocity, and the farther it is the slower it would be, and help me with the formulas?
How does the adiabatic invariance of actions J imply that closed orbits remain closed when the potential is deformed adiabatically?Is it because a closed orbit has commensurate angular frequencies $$\omega_i$$ defined by $$\omega_i = \frac{\partial H}{\partial J_i}$$ where H is the...
First off, I'd like to note that I am by no means a physics expert. I am merely a high school student and a physics/maths enthusiast, nothing more, so if my thoughts are completely dysfunctional and downright incorrect, which is more than a distinct possibility, please tell me.I recently took...
Hi!This is a textbook problem that I need help with (I want to practice as much as I can before the exams) and I hope that there is someone who can guide me. The question is:You’re doing a first-order analysis on a new satellite in an elliptical (e = 0.2) orbit at 700 km altitude. Can you...
1. Homework StatementA satellite moving in a highly elliptical orbit is given a retarded force concentrated at its perigee. This is modeled as an impulse I. By considering changes in energy and angular momentum, find the changes in a (semi major axis) and l (semi latus rectum). Show that...
1. Homework StatementShow that Kepler's third law, \tau = a^{3/2}, implies that the force on a planet is proportional to its mass.2. Homework Equations3. The Attempt at a SolutionI haven't really attempted anything. I'm not sure what the question is going for. What can we assume and use?
There have been many questions on this forum about celestial mechanics in general, and concerning position and velocity in an orbit in particular. So I offer this post as a summary and reference.Here's a method for finding heliocentric position and sun-relative velocity in ecliptic coordinates...
Hello friends (I hope :biggrin:),For a maths project I am working on, I need to be able to prove the equation for an elliptical orbit, related to Kepler's first law:and p = a(1-e2) (or should be as p can be replaced by that value)Where:r = distance from sun to any point on the orbitp =...
1. Homework StatementA satellite is in a circular orbit of radius R from the planet's center of mass around a planet of mass M.The angular momentum of the satellite in its orbit is:I. directly proportional to R.II. directly proportional to the square root of RIII. directly proportional to...
So let's say you are on an orbital satellite in an elliptical orbit around our planet Earth, meaning that the at one point in the orbit you are going faster, due to the gravitational pull of the planet. Would you feel the acceleration in space due to the shape of the orbit?
Would you guys agree with these websites on how orbits work?http://www.nasa.gov/audience/forstudents/5-8/features/nasa-knows/what-is-orbit-58.htmlhttp://www.qrg.northwestern.edu/projects/vss/docs/space-environment/1-what-causes-an-orbit.html
Hi! I am currently trying to write a code in C to simulate the orbit of planets around the Sun in the solar system.I am using the velocity Verlet approach and finding that my code produces no acceleration in the ##y##-direction (aside from for ##t_n = 0##,) and that the planet just flies off...
Hi,I've read that SOHO (SOlar and Heliospheric Observatory) is orbiting around the L1 point. I remember this point being unstable (that is, that something in orbit will diverge from stability). How are the corrections made for this orbit ? Does it really cost a tiny amount of fuel, or is the...
How did Kepler derive his laws of Planetary Motion without knowing about Newton's law of gravitation? Specifically, the first law of planetary motion which says that planets follow elliptical paths - how did he figure that out without the knowledge of the gravitational pull of the sun? Was it...
Can someone please give a step by step explanation of an orbital rendezvous by a spacecraft for a target that is orbiting the body it launched from? And if possible, can you explain how mission control is involved and what part computers play? When the RCS is active is it changing the...
Hi there,I was reading one of my textbooks and I had a thought. For a black hole, there is minimum orbiting radius of ##R_{min}=3R_s## where ##R_s## is the Schwarzschild Radius. This minimum orbit is created by the fact that in order to obtain an orbit of that radius around a black hole, you...
Firstly, apologies if this is in the wrong thread.I'm currently writing a presentation on the physics of getting a spacecraft from Earth to Mars in the near future. In my research I've come up against Porkchop plots which seem to plot contours of equal characteristic energy so you can find out...
I was thinking about the motion of two stars in a binary star system, but there is something I cannot quite figure out. Suppose you have a binary star system with two stars masses m1 and m2 with m2>m1 so that m2 is closer to the centre of mass of the system. Then when the two stars are as far...
A satellite changes its orbit inclined 66° at 260.0 km altitude to a polar orbit at the same altitude. What Delta V was required?... I am stuck.do I figure it out using:DeltaV1 = |V_transfer at orbit 1 - V_orbit 1|andDeltaV2 = |V_transfer at orbit 2 - V_orbit 2|or is there another...
A satellite has been placed in a circular, sun-synchronous orbit at an altitude of 1300.0 km. The satellite has a mass of 5,000.0 kg. What is the K.E. of the satellite?I know K=(1/2)mv^2 .... but I have no idea where to go from here. I Know I am over-thinking this. So any help would be greatly...
News stories make it sound like Rosetta is orbiting the comet. But presumably the comet's gravity is negligible, which means that orbiting it would require continuous acceleration (and therefore continuous use of energy) in order for Rosetta's motion to conform to a circle/ellipse, rather than... |
I don't know if the derivation for that expression is in the book or not, but for clarity and completeness I'll briefly write it here:
For a single particle, the probability of it having velocity $v$ to $v+dv$ and angle $\theta$ to $\theta + d\theta$ is given by
$$ \int_\theta^{\theta+d\theta}g(\theta')d\theta'\int_v^{v+dv}f(v')dv' = f(v)dv\;g(\theta)d\theta.$$
$g(\theta)$ is the probability distribution of $\theta$ for a particle. Crucially, if the particle has equal probability of moving in any direction, $g(\theta)$ isn't constant (if this isn't clear, consider that there's only one possible particle direction with $\theta=0$, but a particle with $\theta=\pi/2$ could be moving in any direction orthogonal to the wall).
The correct expression is $g(\theta) = \frac{1}{2} \sin(\theta)$, meaning the probability of a particle having speed/angle in the given constraints is
$$ f(v)dv\frac{1}{2}\sin(\theta)d\theta. $$
Now consider that the particle will only collide with the wall within one second if it lies within a distance $v \cos(\theta)$ of the wall (and $\theta<\frac{\pi}{2}$). For a single particle, this gives the probability of the particle colliding with the wall with the given speed/angle as
$$ v\cos(\theta) \frac{1}{V}f(v)dv\frac{1}{2}\sin(\theta)d\theta. $$
Q1. Since we're assuming the particles don't interact, the way to go from the 1-particle expression to the many-particle expression is the central limit theorem. You're right in that strictly we'd need a distribution, but for any macroscopic number of particles the CLT will give such a sharp peak that it may as well be a single value. This is probably discussed in your textbook somewhere, or googling the central limit theorem might help if you're not familiar with it and this isn't clear.
Q2. Basically, this isn't a problem. Remember that $\theta$ is a continuous variable, so the probability of being at any specific angle is 0. If you integrate from 0 to something you get a non-0 answer.
Q3. Your mistake is that you're averaging over every particle with $\theta < \frac{\pi}{2}$, not just the ones that collide with the wall. To get the right answer, you need to use the expression above as the probability distribution for all particles that hit the wall. Hence, you get
$$ \langle \cos(\theta)\rangle = \frac{\int_0^{\pi/2} p(\theta,v)\cos(\theta)\,d\theta}{\int_0^{\pi/2} p(\theta,v)\,d\theta} ,$$
where $p(\theta,v)$ is your expression. The speed parts cancel so you get a straightforward trig integral to solve. |
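A quick numerical check of this average (a sketch, with the speed factors cancelled as noted): the angular weight for wall-hitting particles is proportional to $\cos\theta\sin\theta$ on $[0,\pi/2]$, giving $\langle\cos\theta\rangle = (1/3)/(1/2) = 2/3$.

```python
import math

# Numerical check: with weight w(theta) = cos(theta)*sin(theta) on [0, pi/2],
# <cos(theta)> = (int cos^2 sin dtheta) / (int cos sin dtheta) = (1/3)/(1/2) = 2/3.
def mean_cos(n=200000):
    dtheta = (math.pi / 2) / n
    num = den = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta            # midpoint rule
        w = math.cos(theta) * math.sin(theta)
        num += w * math.cos(theta) * dtheta
        den += w * dtheta
    return num / den

assert abs(mean_cos() - 2.0 / 3.0) < 1e-6
```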
Difference between revisions of "Discontinuous derivative"
Revision as of 11:02, 13 February 2019
Consider the function (blue curve)
[math] f: \mathbb{R} \to \mathbb{R}, x \mapsto \begin{cases} x^2\sin(1/x),& x\neq 0\\ 0,& x=0 \end{cases}\,. [/math]
[math]f[/math] is continuous and differentiable. The derivative of [math]f[/math] is the function (red curve)
[math] f': \mathbb{R} \to \mathbb{R}, x \mapsto \begin{cases} 2x\sin(1/x) - \cos(1/x), &x \neq 0\\ 0,& x=0 \end{cases}\,. [/math]
We observe that [math]f'(0) = 0[/math], but [math]\lim_{x\to0}f'(x)[/math] does not exist.
Therefore, [math]f'[/math] is an example of a derivative which is not continuous.
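This behaviour can be checked numerically (a sketch, using f'(x) = 2x sin(1/x) − cos(1/x) for x ≠ 0): the difference quotient at 0 is squeezed to 0, while f' takes the values ±1 arbitrarily close to 0.

```python
import math

# f'(0) = 0 via the difference quotient, yet lim_{x->0} f'(x) does not exist.
def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# |f(h)/h| = |h sin(1/h)| <= |h|, so the quotient at 0 tends to 0.
for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(h) / h) <= abs(h)

def fprime(x):  # derivative for x != 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# At x_n = 1/(n*pi): sin(1/x_n) = 0 and f'(x_n) = -cos(n*pi) = -(-1)^n,
# alternating between -1 and +1, hence no limit as x -> 0.
assert abs(fprime(1.0 / (100 * math.pi)) + 1.0) < 1e-9
assert abs(fprime(1.0 / (101 * math.pi)) - 1.0) < 1e-9
```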
The underlying JavaScript code
var board = JXG.JSXGraph.initBoard('jxgbox', {axis:true, boundingbox:[-1/2,1/2,1/2,-1/2]});
var g = board.create('functiongraph', ["2*x*sin(1/x) - cos(1/x)"], {strokeColor: 'red'});
var f = board.create('functiongraph', ["x^2*sin(1/x)"], {strokeWidth:2});
A good-quality mirror may reflect more than 90% of the light that falls on it, absorbing the rest. But it would be useful to have a mirror that reflects all of the light that falls on it. Interestingly, we can produce total reflection using an aspect of refraction.
Consider what happens when a ray of light strikes the surface between two materials, such as is shown in Figure 1a. Part of the light crosses the boundary and is refracted; the rest is reflected. If, as shown in the figure, the index of refraction for the second medium is less than for the first, the ray bends away from the perpendicular. (Since \(n_{1} \gt n_{2}\), the angle of refraction is greater than the angle of incidence -- that is, \(\theta_{1} \gt \theta_{2}\).) Now imagine what happens as the incident angle is increased. This causes \(\theta_{2}\) to increase also. The largest the angle of refraction \(\theta_{2}\) can be is \(90^{\circ}\), as shown in Figure 1b. The
critical angle \(\theta_{c}\) for a combination of materials is defined to be the incident angle \(\theta_{1}\) that produces an angle of refraction of \(90^{\circ}\). That is, \(\theta_{c}\) is the incident angle for which \(\theta_{2} = 90^{\circ}\). If the incident angle \(\theta_{1}\) is greater than the critical angle, as shown in Figure 1c, then all of the light is reflected back into medium 1, a condition called total internal reflection.
CRITICAL ANGLE
The incident angle \(\theta_{1}\) that produces an angle of refraction of \(90^{\circ}\) is called the critical angle, \(\theta_{c}\).
Snell’s law states the relationship between angles and indices of refraction. It is given by
\[n_{1}\sin{\theta_{1}} = n_{2}\sin{\theta_{2}}\label{25.5.1}\]
When the incident angle equals the critical angle (\(\theta_{1} = \theta_{c}\)), the angle of refraction is \(90^{\circ}\) (\(\theta_{2} = 90^{\circ}\)). Noting that \(\sin{90^{\circ}} = 1\), Snell’s law in this case becomes
\[n_{1} \sin{\theta_{1}} = n_{2}\label{25.5.2}\]
The critical angle \(\theta_{c}\) for a given combination of materials is thus
\[\theta_{c} = \sin^{-1}\left( n_{2} / n_{1} \right) \label{25.5.3}\]
for \(n_{1} \gt n_{2}\).
Total internal reflection occurs for any incident angle greater than the critical angle \(\theta_{c}\), and it can only occur when the second medium has an index of refraction less than the first. Note the above equation is written for a light ray that travels in medium 1 and reflects from medium 2, as shown in the figure.
Example \(\PageIndex{1}\): How Big is the Critical Angle Here?
What is the critical angle for light traveling in a polystyrene (a type of plastic) pipe surrounded by air?
Strategy:
The index of refraction for polystyrene is found to be 1.49 in Figure 2, and the index of refraction of air can be taken to be 1.00, as before. Thus, the condition that the second medium (air) has an index of refraction less than the first (plastic) is satisfied, and the equation \(\theta_{c} = \sin^{-1}\left( n_{2} / n_{1} \right)\) can be used to find the critical angle \(\theta_{c}\). Here, \(n_{2} = 1.00\) and \(n_{1} = 1.49\).
Solution:
The critical angle is given by
\[\theta_{c} = \sin^{-1}\left( n_{2} / n_{1} \right). \nonumber\]
Substituting the identified values gives
\[\begin{align*} \theta_{c} &= \sin^{-1}\left( 1.00 / 1.49 \right) \\[4pt] &= \sin^{-1}\left(0.671\right) \\[4pt] &= 42.2^{\circ}. \end{align*}\]
Discussion:
This means that any ray of light inside the plastic that strikes the surface at an angle greater than \(42.2^{\circ}\) will be totally reflected. This will make the inside surface of the clear plastic a perfect mirror for such rays without any need for the silvering used on common mirrors. Different combinations of materials have different critical angles, but any combination with \(n_{1} \gt n_{2}\) can produce total internal reflection. The same calculation as made here shows that the critical angle for a ray going from water to air is \(48.6^{\circ}\), while that from diamond to air is \(24.4^{\circ}\), and that from flint glass to crown glass is \(66.3^{\circ}\). There is no total reflection for rays going in the other direction -- for example, from air to water -- since the condition that the second medium must have a smaller index of refraction is not satisfied. A number of interesting applications of total internal reflection follow.
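The critical angles quoted in the discussion can be reproduced from \(\theta_c = \sin^{-1}(n_2/n_1)\). In this sketch the indices for water (1.333), diamond (2.419), flint glass (1.66), and crown glass (1.52) are typical tabulated values assumed here, not taken from the text:

```python
import math

# Critical angle theta_c = arcsin(n2/n1), valid only for n1 > n2.
# Assumed indices: polystyrene 1.49, water 1.333, diamond 2.419,
# flint glass 1.66, crown glass 1.52, air 1.00.
def critical_angle_deg(n1, n2):
    if n1 <= n2:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

assert abs(critical_angle_deg(1.49, 1.00) - 42.2) < 0.1    # polystyrene -> air
assert abs(critical_angle_deg(1.333, 1.00) - 48.6) < 0.1   # water -> air
assert abs(critical_angle_deg(2.419, 1.00) - 24.4) < 0.1   # diamond -> air
assert abs(critical_angle_deg(1.66, 1.52) - 66.3) < 0.1    # flint -> crown
```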
Fiber Optics: Endoscopes to Telephones
Fiber optics is one application of total internal reflection that is in wide use. In communications, it is used to transmit telephone, internet, and cable TV signals.
Fiber optics employs the transmission of light down fibers of plastic or glass. Because the fibers are thin, light entering one is likely to strike the inside surface at an angle greater than the critical angle and, thus, be totally reflected (Figure \(\PageIndex{2}\)). The index of refraction outside the fiber must be smaller than inside, a condition that is easily satisfied by coating the outside of the fiber with a material having an appropriate refractive index. In fact, most fibers have a varying refractive index to allow more light to be guided along the fiber through total internal reflection. Rays are reflected around corners as shown, making the fibers into tiny light pipes.
Bundles of fibers can be used to transmit an image without a lens, as illustrated in Figure \(\PageIndex{3}\). The output of a device called an
endoscope is shown in Figure \(\PageIndex{3b}\). Endoscopes are used to explore the body through various orifices or minor incisions. Light is transmitted down one fiber bundle to illuminate internal parts, and the reflected light is transmitted back out through another to be observed. Surgery can be performed, such as arthroscopic surgery on the knee joint, employing cutting tools attached to and observed with the endoscope. Samples can also be obtained, such as by lassoing an intestinal polyp for external examination.
Fiber optics has revolutionized surgical techniques and observations within the body. There are a host of medical diagnostic and therapeutic uses. The flexibility of the fiber optic bundle allows it to navigate around difficult and small regions in the body, such as the intestines, the heart, blood vessels, and joints. Transmission of an intense laser beam to burn away obstructing plaques in major arteries as well as delivering light to activate chemotherapy drugs are becoming commonplace. Optical fibers have in fact enabled microsurgery and remote surgery where the incisions are small and the surgeon’s fingers do not need to touch the diseased tissue.
Fibers in bundles are surrounded by a cladding material that has a lower index of refraction than the core (Figure \(\PageIndex{4}\)). The cladding prevents light from being transmitted between fibers in a bundle. Without cladding, light could pass between fibers in contact, since their indices of refraction are identical. Since no light gets into the cladding (there is total internal reflection back into the core), none can be transmitted between clad fibers that are in contact with one another. The cladding prevents light from escaping out of the fiber; instead most of the light is propagated along the length of the fiber, minimizing the loss of signal and ensuring that a quality image is formed at the other end. The cladding and an additional protective layer make optical fibers flexible and durable.
Cladding
The cladding prevents light from being transmitted between fibers in a bundle.
Special tiny lenses that can be attached to the ends of bundles of fibers are being designed and fabricated. Light emerging from a fiber bundle can be focused and a tiny spot can be imaged. In some cases the spot can be scanned, allowing quality imaging of a region inside the body. Special minute optical filters inserted at the end of the fiber bundle have the capacity to image tens of microns below the surface without cutting the surface -- non-intrusive diagnostics. This is particularly useful for determining the extent of cancers in the stomach and bowel.
Most telephone conversations and Internet communications are now carried by laser signals along optical fibers. Extensive optical fiber cables have been placed on the ocean floor and underground to enable optical communications. Optical fiber communication systems offer several advantages over electrical (copper) based systems, particularly for long distances. The fibers can be made so transparent that light can travel many kilometers before it becomes dim enough to require amplification -- much superior to copper conductors. This property of optical fibers is called
low loss. Lasers emit light with characteristics that allow far more conversations in one fiber than are possible with electric signals on a single conductor. This property of optical fibers is called high bandwidth. Optical signals in one fiber do not produce undesirable effects in other adjacent fibers. This property of optical fibers is called reduced crosstalk. We shall explore the unique characteristics of laser radiation in a later chapter.
Corner Reflectors and Diamonds
A light ray that strikes an object consisting of two mutually perpendicular reflecting surfaces is reflected back exactly parallel to the direction from which it came. This is true whenever the reflecting surfaces are perpendicular, and it is independent of the angle of incidence. Such an object, shown in this link, is called a
corner reflector, since the light bounces from its inside corner. Many inexpensive reflector buttons on bicycles, cars, and warning signs have corner reflectors designed to return light in the direction from which it originated. It was more expensive for astronauts to place one on the moon. Laser signals can be bounced from that corner reflector to measure the gradually increasing distance to the moon with great precision.
Corner reflectors are perfectly efficient when the conditions for total internal reflection are satisfied. With common materials, it is easy to obtain a critical angle that is less than \(45^{\circ}\). One use of these perfect mirrors is in binoculars, as shown in Figure \(\PageIndex{6}\). Another use is in periscopes found in submarines.
The Sparkle of Diamonds
Total internal reflection, coupled with a large index of refraction, explains why diamonds sparkle more than other materials. The critical angle for a diamond-to-air surface is only \(24.4^{\circ}\), and so when light enters a diamond, it has trouble getting back out (Figure \(\PageIndex{7}\)). Although light freely enters the diamond, it can exit only if it makes an angle less than \(24.4^{\circ}\). Facets on diamonds are specifically intended to make this unlikely, so that the light can exit only in certain places. Good diamonds are very clear, so that the light makes many internal reflections and is concentrated at the few places it can exit—hence the sparkle. (Zircon is a natural gemstone that has an exceptionally large index of refraction, but not as large as diamond, so it is not as highly prized. Cubic zirconia is manufactured and has an even higher index of refraction (\(\approx 2.17\)), but still less than that of diamond.) The colors you see emerging from a sparkling diamond are not due to the diamond’s color, which is usually nearly colorless. Those colors result from dispersion, the topic of "Dispersion: The Rainbow and Prisms" in the next section. Colored diamonds get their color from structural defects of the crystal lattice and the inclusion of minute quantities of graphite and other materials. The Argyle Mine in Western Australia produces around 90% of the world’s pink, red, champagne, and cognac diamonds, while around 50% of the world’s clear diamonds come from central and southern Africa.
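For a quick check of such numbers, the critical angle follows from Snell's law: \(\sin \theta_c = n_2/n_1\). A small Python sketch (using the index values quoted in the text):

```python
import math

def critical_angle(n_inside, n_outside=1.0):
    """Incident angle (degrees) above which total internal reflection occurs,
    from Snell's law: sin(theta_c) = n_outside / n_inside."""
    return math.degrees(math.asin(n_outside / n_inside))

print(critical_angle(2.419))   # diamond-to-air: about 24.4 degrees
print(critical_angle(2.17))    # cubic zirconia: about 27.4 degrees
```

The larger index of diamond gives the smaller critical angle, which is why it traps light more effectively than cubic zirconia.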
PHET EXPLORATIONS: BENDING LIGHT
Explore bending of light between two media with different indices of refraction. See how changing from air to water to glass changes the bending angle. Play with prisms of different shapes and make rainbows.
Summary
The incident angle that produces an angle of refraction of \(90^{\circ}\) is called the critical angle.
Total internal reflection is a phenomenon that occurs at the boundary between two media, such that if the incident angle in the first medium is greater than the critical angle, then all the light is reflected back into that medium.
Fiber optics involves the transmission of light down fibers of plastic or glass, applying the principle of total internal reflection.
Endoscopes are used to explore the body through various orifices or minor incisions, based on the transmission of light through optical fibers.
Cladding prevents light from being transmitted between fibers in a bundle.
Diamonds sparkle due to total internal reflection coupled with a large index of refraction.
Glossary
critical angle: incident angle that produces an angle of refraction of \(90^{\circ}\)
fiber optics: transmission of light down fibers of plastic or glass, applying the principle of total internal reflection
corner reflector: an object consisting of two mutually perpendicular reflecting surfaces, so that the light that enters is reflected back exactly parallel to the direction from which it came
zircon: natural gemstone with a large index of refraction
Contributors
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
In analogy with the definition of torque, \(\boldsymbol{\tau}=\boldsymbol{r} \times \boldsymbol{F}\) as the rotational counterpart of the force, we define the
angular momentum \(\boldsymbol{L}\) as the rotational counterpart of momentum: \[\boldsymbol{L}=\boldsymbol{r} \times \boldsymbol{p} \label{rp}\]
For a rigid body rotating around an axis of symmetry, the angular momentum is given by
\[\boldsymbol{L}=I \boldsymbol{\omega} \label{iomega}\]
where \(I\) is the
moment of inertia of the body with respect to the symmetry axis around which it rotates. Equation \ref{iomega} also holds for a collection of particles rotating about a symmetry axis through their center of mass, as readily follows from 5.4.2 and \ref{rp}. However, it does not hold in general, as in general, \(\boldsymbol{L}\) does not have to be parallel to \(\boldsymbol{\omega}\). For the general case, we need to consider a moment of inertia tensor \(\boldsymbol{I}\) (represented as a \(3×3\) matrix) and write \(\boldsymbol{L}=\boldsymbol{I} \cdot \boldsymbol{\omega}\). We’ll consider this case in more detail in Section 7.3. |
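A small numerical sketch of this distinction (with illustrative principal moments, not values from the text): for rotation about a principal axis \(\boldsymbol{L}\) is parallel to \(\boldsymbol{\omega}\), but for a general rotation axis the tensor product \(\boldsymbol{I}\cdot\boldsymbol{\omega}\) points in a different direction.

```python
import numpy as np

# Moment of inertia tensor in a body-fixed principal frame
# (values are illustrative only).
I = np.diag([1.0, 2.0, 3.0])           # principal moments I_x, I_y, I_z

omega_sym = np.array([0.0, 0.0, 5.0])  # rotation about the z principal axis
omega_gen = np.array([1.0, 1.0, 1.0])  # rotation about a non-principal axis

L_sym = I @ omega_sym   # parallel to omega: L = I_z * omega
L_gen = I @ omega_gen   # NOT parallel to omega

print(L_sym)  # points along z, like omega_sym
print(L_gen)  # components scaled differently, so direction differs from omega_gen
```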
Ampere's law in magnetostatics is$$ \oint \vec{B} \cdot d\vec{l} = \mu_0 \mu_r I_c ,$$where the left hand side is a closed line integral around a loop and the right hand side contains $I_c$, the total current passing through the same loop.
The symmetry that your Professor is talking about is the cylindrical symmetry which allows you to write down the left hand side of this equation in a simpler way. This symmetry means that the magnitude of the B-field must only depend on $R$ and not on $z$ or $\phi$. It also means that the B-field must be in the $\phi$ direction: it cannot be in the $R$ direction because it would have a non-zero divergence at the centre and the Biot-Savart law tells us it must be perpendicular to the current.
If this is the case, and we choose to evaluate the line integral in a loop around the z-axis, such that $d\vec{l} = dl \hat{\phi}$, where $\hat{\phi}$ is a unit vector, then Ampere's law becomes$$ \oint \vec{B} \cdot dl\ \hat{\phi} = 2\pi R B_{\phi} = \mu_0 \mu_r I_c$$
Because the right hand side features
only the current enclosed by the loop, for point $a$ the current in the outer conductor can be ignored, as your Prof suggests.
Why symmetry is required to solve this problem easily
If the outer conductor
did not have cylindrical symmetry then you could not necessarily assume that the B-field was entirely in the $\phi$ direction. One can imagine two forms of symmetry breaking. (i) The cable is not co-axial. This wouldn't matter - because of symmetry around a new axis, you could still assume that there was no B-field in the region between the two conductors that was due to the current in the outer conductor. The B-field at $a$ would still just depend on the distance from $a$ to the centre of the inner wire. (ii) The outer cable had a non-uniform current density, so that the current running through it depended on $\phi$, or it had a uniform current density but its thickness varied with $\phi$. This does matter and means the B-field due to the outer conductor at point $a$ would not be zero.
To demonstrate the latter consider a hollow cylindrical conductor with a uniform current density, but where the central hole is hollowed out using a central axis displaced by a small amount from the axis of the outer perimeter, so that the thickness of the conductor varies with $\phi$. To calculate the B-field at $a$ due to this arrangement one could consider the field at $a$ imagining the conductor was solid and with a central axis defined by the outer perimeter. Then,
subtract from this the magnetic field that would be due to a solid cylindrical conductor with radius equal to the inner boundary but with a displaced central axis. The result would be a non-zero field perpendicular to a line joining the centres of the respective inner and outer cylinder axes. You would then have to add this non-zero field to the field due to an inner wire.
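This superposition argument can be checked numerically. Inside an infinite solid cylinder carrying a uniform current density $J\hat{z}$, the field at $(x,y)$ (axis at the origin) is $\frac{\mu_0 J}{2}(-y, x)$; subtracting the displaced cylinder leaves a field that is identical at every point of the hole. A sketch with made-up values for $J$ and the displacement:

```python
import numpy as np

mu0 = 4e-7 * np.pi
J = 1.0e6                    # uniform current density along z (A/m^2), illustrative
d = np.array([0.01, 0.0])    # displacement of the hole's axis (m), illustrative

def B_solid(x, y, cx=0.0, cy=0.0):
    """Field inside an infinite solid cylinder carrying J z-hat,
    axis through (cx, cy): B = (mu0*J/2) * (-(y-cy), (x-cx))."""
    return 0.5 * mu0 * J * np.array([-(y - cy), (x - cx)])

# Field in the hole = (full solid cylinder) - (displaced solid cylinder, same J).
# Sample a few points near the hole's axis (assumed inside the hole):
points = [(0.011, 0.0), (0.009, 0.002), (0.0125, -0.003)]
fields = [B_solid(x, y) - B_solid(x, y, *d) for x, y in points]
for B in fields:
    print(B)   # the same vector at every point: a uniform field (0, mu0*J*d/2)
```

The uniform result is perpendicular to the displacement vector, in line with the argument above.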
One of my discoveries as a physicist was that, despite all attempts at clarity, we still have different meanings for the same words and use different words to refer to the same thing. When Alice says measurement, Bob hears a `quantum to classical channel', but Alice, a hard-core Everettian, does not even believe such channels exist. When Charlie says non-local, he means Bell non-local, but string theorist Dan starts lecturing him about non-local Lagrangian terms and violations of causality. And when I say non-local measurements, you hear ???? ?????. Let me give you a hint, I do not mean 'Bell non-local quantum to classical channels', to be honest, I am not even sure what that means.
So what do I mean when I say measurement? A measurement is a quantum operation that takes a quantum state as its input and spits out a quantum state
and a classical result as an output (no, I am not an Everettian). For simplicity I will concentrate on a special case of this operation, a projective measurement of an observable A.
The classical result of a projective measurement is an eigenvalue of \(A\), but what is the outgoing state?
The Lüders Measurement
Even the term projective measurement can lead to confusion, and indeed in the early days of quantum mechanics it did. When von Neumann wrote down the mathematical formalism for quantum measurements he missed an important detail about degenerate observables (i.e., Hermitian operators with a degenerate eigenvalue spectrum). In the usual projective measurement the state of the system after the measurement is uniquely determined by the classical result (an eigenvalue of the observable). Consequently, if we don't look at the classical result the quantum channel is a standard dephasing channel. In the case of a degenerate observable, the same eigenvalue corresponds to two or more orthogonal eigenstates. Seemingly the state of the system should correspond to one of those eigenstates, and the channel is a standard dephasing channel. But a degenerate spectrum means that the set of orthogonal eigenvectors is not unique, instead each eigenvalue has a corresponding subspace of corresponding eigenvectors. What Lüders suggested is that the dephasing channel does nothing within these subspaces.
Example
Consider the two qubit observable \(A=|00\rangle\langle 00 |\). It has eigenvalues \(1,0,0,0\). A 1 result in this measurement corresponds to "The system is in the state \(|{00}\rangle\)." Following a measurement with outcome \(1\), the outgoing state will be \(|00\rangle\). Similarly, a 0 result corresponds to "The system is not in the state \(|{00}\rangle\)". But here is where the Lüders rule kicks in. Given a generic input state \(\alpha|{00}\rangle+\beta|{01}\rangle+\gamma|{10}\rangle+\delta|{11}\rangle\) and a Lüders measurement of \(A\) with outcome 0, the outgoing state will be \(\frac{1}{\sqrt{|\beta|^2+|\gamma|^2+|\delta|^2}}\left[\beta|{01}\rangle+\gamma|{10}\rangle+\delta|{11}\rangle\right]\).
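A minimal numpy sketch of this update rule (my own illustration, not from the post): project onto the outcome's eigenspace and renormalize, \(P|\psi\rangle/\lVert P|\psi\rangle\rVert\). Within the degenerate 0-eigenspace the relative amplitudes are untouched, exactly as Lüders prescribes.

```python
import numpy as np

# Projector onto the outcome-0 eigenspace of A = |00><00| (two qubits).
# Basis ordering: |00>, |01>, |10>, |11>.
P0 = np.diag([0.0, 1.0, 1.0, 1.0])

def luders_update(psi, P):
    """Post-measurement state P|psi>/||P|psi>|| for projective outcome P."""
    out = P @ psi
    return out / np.linalg.norm(out)

# Generic input state with amplitudes alpha, beta, gamma, delta (here 1/2 each).
psi = np.array([0.5, 0.5, 0.5, 0.5])

post = luders_update(psi, P0)
print(post)   # |00> amplitude is gone; |01>, |10>, |11> keep their relative weights
```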
Non-local measurements
The relation to non-locality may already be apparent from the example, but let me start with some definitions. A system can be called non-local if it has parts in different locations, e.g., one part on Earth and the other on the moon. A measurement is non-local if it reveals something about a non-local system as a whole. In principle these definitions apply to classical and quantum systems. Classically a non-local measurement is trivial, there is no conceptual reason why we can't just measure at each location. For a quantum system the situation is different. Let us use the example above, but now consider the situation where the two qubits are in separate locations. Local measurements of \(\sigma_z\) will produce the desired measurement statistics (after coarse graining) but reveal too much information and dephase the state completely, while a Lüders measurement should not. What is quite neat about this example is that the Lüders measurement of \(|{00}\rangle\) cannot be implemented without entanglement (or quantum communication) resources and two-way classical communication. To prove that entanglement is necessary, it is enough to give an example where entanglement is created during the measurement. To show that communication is necessary, it is enough to show that the measurement (even if the outcome is unknown) can be used to transmit information. The detailed proof is left as an exercise to the reader. The lazy reader can find it here. |
Homework Statement: Two identical audio speakers, connected to the same amplifier, produce monochromatic sound waves with a frequency that can be varied between 300 and 600 Hz. The speed of the sound is 340 m/s. You find that, where you are standing, you hear minimum intensity sound.
a) Explain why you hear minimum-intensity sound
b) If one of the speakers is moved 39.8 cm toward you, the sound you hear has maximum intensity. What is the frequency of the sound?
c) How much closer to you from the position in part (b) must the speaker be moved to the next position where you hear maximum intensity?
Homework Equations: interference
I have no idea on how to proceed
I started with
##\text{frequency} = \frac{\text{speed of sound}}{\lambda} = \frac{340\ \text{m/s}}{\lambda}##
then
##d\sin\alpha = \frac{\lambda}{2}##
but now i'm stuck
Any help please? |
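Not a full solution, but one consistent reading of part (b): moving one speaker by 39.8 cm changes the path difference by that amount, and going from a minimum to the adjacent maximum requires the path difference to change by half a wavelength. A quick numerical sketch under that assumption:

```python
v = 340.0        # speed of sound, m/s
shift = 0.398    # m; moves the pattern from a minimum to the adjacent maximum

# Minimum -> adjacent maximum means the path difference changed by lambda/2,
# and moving one speaker by `shift` changes the path difference by `shift`.
lam = 2 * shift          # wavelength, m
f = v / lam              # frequency, Hz
print(f)                 # about 427 Hz, inside the allowed 300-600 Hz range

# (c) maximum -> next maximum: the path difference must change by a full lambda,
# so the speaker would move a further `lam` closer.
extra_move = lam
print(extra_move)        # about 0.796 m
```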
Search
Now showing items 1-5 of 5
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
This questions concerns Exercise 2.11 in Polchinski. We are asked to compute the commutator $$L_{m}(L_{-m}|0;0\rangle) - L_{-m}(L_{m} |0;0\rangle)$$ By plugging the mode expansions, we use the definition from 2.7.6 $$L_{m}\sim \frac{1}{2}\sum_{n =-\infty}^{\infty} \alpha^{\mu}_{m-n} \alpha_{\mu n}. \tag{2.7.6}$$ Now, from the solution given here http://arxiv.org/abs/0812.4408, it is stated that we may consider only the summation $$L_{-m} |0;0\rangle = \frac{1}{2}\sum_{n=1}^{m-1}\alpha^{\mu}_{n-m}\alpha_{\mu (-n)}|0;0\rangle. \tag{36}$$ Now, I understand that for $n<0$, the operator will annihilate the state, but why do we cut it off at $m-1$? Why would the state be annihilated, at for example, the value $n=m+1$?
In addition, the calculation is carried out, and I was able to obtain the given result $$\frac{1}{4}\sum_{n=1}^{m-1}\sum_{n'=-\infty}^{\infty}\alpha^{\nu}_{m-n'}\alpha_{\nu n'}\alpha^{\mu}_{n-m}\alpha_{\mu(-n)}|0;0\rangle$$
However, in the next line, he proceeds the calculation with the equation $$\frac{1}{4}\sum_{n=1}^{m-1}\sum_{n'=-\infty}^{\infty}((m-n')n'\eta^{\nu \mu}\eta_{\nu \mu}\delta_{n' n}+(m-n')n'\delta^{\nu}_{\mu}\delta^{\mu}_{\nu}\delta_{m-n',n})|0;0\rangle$$
I can deduce that this is given by the commutator $$\frac{1}{4}\sum_{n=1}^{m-1}\sum_{n'=-\infty}^{\infty}[\alpha^{\nu}_{m-n'},\alpha^{\mu}_{n-m}][\alpha_{\nu n'},\alpha_{\mu(-n)}] + [\alpha^{\nu}_{m-n'},\alpha_{\mu(-n)}][\alpha_{\nu n'},\alpha^{\mu}_{n-m}]$$
where we are given the commutator relations $$[\alpha^{\mu}_{m},\alpha^{\nu}_{n}] = m\delta_{m,-n}\eta^{\mu \nu}\tag{2.7.5a}$$ from equation (2.7.5a).
I can see why we could convert the equation to one in terms of commutators if taking the product of operators the other way annihilates the state, but I don't see why, for example, we could say $\alpha_{\nu n'}$ annihilates the state. |
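Regarding the first question, here is a short sketch of why the sum truncates (assuming, as in Polchinski, that \(\alpha_k|0;0\rangle = 0\) for \(k \geq 0\); the \(k=0\) mode is the center-of-mass momentum, which vanishes on \(|0;0\rangle\)):

```latex
% (i)  n <= 0: \alpha_{\mu(-n)} has mode number -n >= 0, so it annihilates |0;0>.
% (ii) n >= m: now n-m >= 0, and the two oscillators commute, since
%      [\alpha^\mu_{n-m}, \alpha_{\mu(-n)}] \propto \delta_{n-m,\,n} = 0 for m \neq 0,
%      so \alpha^\mu_{n-m} can be moved to the right, where it annihilates |0;0>.
% Hence only the terms with 1 <= n <= m-1 survive:
\[
L_{-m}|0;0\rangle \;=\; \frac{1}{2}\sum_{n=1}^{m-1}
   \alpha^{\mu}_{n-m}\,\alpha_{\mu(-n)}\,|0;0\rangle .
\]
```

In particular, at \(n = m+1\) the left oscillator is \(\alpha^{\mu}_{1}\), which commutes past \(\alpha_{\mu(-m-1)}\) (the Kronecker delta in (2.7.5a) vanishes) and kills the vacuum.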
So, I was thinking about the Bohr model of the atom and I started to wonder how we could find the magnetic field due to a revolving electron (produced at the location of the proton) of a hydrogen atom in the first orbit. Example: How to find the magnetic field due to a revolving electron of a hydrogen atom in the first orbit? Given $h \approx 6.625\times 10^{-34}\ \mathrm{J\,s}$; charge of electron $\approx 1.6\times 10^{-19}\ \mathrm{C}$; $\pi \approx 3.141$; mass of electron $\approx 9.10\times 10^{-31}\ \mathrm{kg}$.
If you naively use a Bohr-like model for the hydrogen atom, then the electron in its ground state is imagined as moving in a circular orbit of radius $r$ and moving with a speed $v$. In this case you could argue the electron is moving, moving charge is current, current creates a magnetic field. Following this model you might expect the magnetic field at the centre of the loop. From classical electromagnetism the magnetic field at the centre of a loop of radius $r$ carrying a current $I$ is $B = \frac{\mu_0 I}{2 r}$.
The question now becomes what do you use for the current. You're aware that the electron isn't a continuous charge distribution, so that you have to use the following definition of current, namely current is the rate of change of charge passing you, $I = \frac{\Delta Q}{\Delta t}$. Now, if the electron is moving fast enough in its orbit you can imagine it to be roughly "smeared out" along its path. The electron takes an amount of time $\Delta t$ to move all the way round the orbit of length $2 \pi r$ and since its speed is $v$, this gives $\Delta t = \frac{2 \pi r}{v}$ and the appropriate current to use is $I = \frac{ev}{2 \pi r}$. Plugging this in gives $$B = \frac{\mu_0 e v}{4 \pi r^2}.$$
But, there are a few important problems with this model. It ignores the fact that the proton and the electron act like miniature magnets in their own right because of spin; have a look at the following reference, H.C. Ohanian, "What is spin?", Am. J. Phys. 54 (1986) 500–505. Much more important, however, is that the model of an electron orbiting a proton is wrong. Because of its wave nature, the electron in its ground state is actually smeared symmetrically about the proton (ignoring spin-spin effects) and the magnetic field turns out to be zero (this may be expected: if the electron is smeared spherically symmetrically about the proton, there is no special direction that you'd expect any magnetic field to point in, as distinct from the planetary model where you might expect a field perpendicular to the plane of the orbit).
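For what it's worth, plugging Bohr-model ground-state values into the naive formula gives a definite number; a quick check (standard constants, my own sketch):

```python
import math

mu0 = 4e-7 * math.pi        # T*m/A
e = 1.602e-19               # C
h = 6.626e-34               # J*s
m_e = 9.109e-31             # kg
eps0 = 8.854e-12            # F/m

# Bohr model, ground state (n = 1):
r = eps0 * h**2 / (math.pi * m_e * e**2)   # Bohr radius, ~0.529e-10 m
v = e**2 / (2 * eps0 * h)                  # orbital speed, ~2.19e6 m/s

B = mu0 * e * v / (4 * math.pi * r**2)     # naive field at the proton
print(r, v, B)                             # B comes out around 12.5 T
```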
Category: Ring theory
Problem 624
Let $R$ and $R’$ be commutative rings and let $f:R\to R’$ be a ring homomorphism.
Let $I$ and $I’$ be ideals of $R$ and $R’$, respectively. (a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$. (b) Prove that $\sqrt{f^{-1}(I’)}=f^{-1}(\sqrt{I’})$
(c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$.
Problem 618
Let $R$ be a commutative ring with $1$ such that every element $x$ in $R$ is idempotent, that is, $x^2=x$. (Such a ring is called a
Boolean ring.) (a) Prove that $x^n=x$ for any positive integer $n$.
(b) Prove that $R$ does not have a nonzero nilpotent element.
Problem 543
Let $R$ be a ring with $1$.
Suppose that $a, b$ are elements in $R$ such that \[ab=1 \text{ and } ba\neq 1.\] (a) Prove that $1-ba$ is idempotent. (b) Prove that $b^n(1-ba)$ is nilpotent for each positive integer $n$.
(c) Prove that the ring $R$ has infinitely many nilpotent elements. |
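For Problem 543, the standard example to keep in mind (my illustration, not part of the problem) is the pair of shift operators on one-sided sequences, where $ab=1$ but $ba\neq 1$. A quick numerical check of (a) and of (b) for $n=2$:

```python
# Shift operators on finite sequences (modeling a ring where ab = 1, ba != 1):
# a = left shift (deletes the first entry), b = right shift (prepends 0).
a = lambda x: x[1:]
b = lambda x: [0] + x

x = [3, 1, 4, 1, 5]
print(a(b(x)) == x)        # True:  ab = 1
print(b(a(x)) == x)        # False: ba != 1 (the first entry is lost)

def e(x):                  # e = 1 - ba: keeps only the first entry
    bax = b(a(x))
    return [xi - yi for xi, yi in zip(x, bax)]

print(e(e(x)) == e(x))     # True: 1 - ba is idempotent

def t(x, n=2):             # t = b^n (1 - ba)
    y = e(x)
    for _ in range(n):
        y = b(y)
    return y

print(t(t(x)))             # all zeros: t^2 = 0, so b^2(1 - ba) is nilpotent
```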
Thank you Kasper. I've built the GitHub version 2.2.7, and I see that you improved my rusty version of the notebooks. Great work!
I'd like to mention that I tried the following code:
{M,N,P,Q,J,K,L}::Indices(full, position=independent).
{\mu,\nu,\rho,\sigma,\gamma,\lambda}::Indices(sub,position=independent, parent=full).
e^{M}_{\mu}::Vielbein;
E^{\mu}_{M}::InverseVielbein;
\delta^{\mu?}_{\nu?}::KroneckerDelta;
\delta_{\mu?}^{\nu?}::KroneckerDelta;
ex := e^{M}_{\mu} E^{\nu}_{M};
eliminate_vielbein(ex);
The result is
E^{\nu}_{\mu}, which is correct of course, but I expected that after defining the
InverseVielbein the result would be a
KroneckerDelta.
Question: Do you think it is possible to change that behaviour?
I know that it is possible that my expectations do not make a lot of sense from the coding viewpoint... since it's possible that the user had not defined the
KroneckerDelta, or the fact that the delta has to be defined in both spaces, and so on.
BTW,
Bonus question: Instead of defining several
KroneckerDelta, would it be possible to define a single
\delta{#}::KroneckerDelta; that works on whatever index type and position?
Let E be a Grothendieck topos, such as the category of sheaves of sets on a topological space. Then there is a unique geometric morphism $(\Delta \dashv \Gamma)\colon E\to \mathrm{Set}$, where $\Delta\colon \mathrm{Set}\to E$ constructs constant sheaves and its right adjoint Γ takes global sections. If E is locally connected (i.e. Δ has a further left adjoint), then Δ is a cartesian closed functor, i.e. $\Delta(B^A)\cong \Delta B^{\Delta A}$ for sets A,B.
Now in any cartesian closed category, we can define the "object of isomorphisms" Iso(X,Y) between any objects X,Y, as an equalizer of a pair of maps $X^Y \times Y^X \rightrightarrows X^X \times Y^Y$. In particular, when X=Y, we have the object Aut(X) of automorphisms of X. Since inverse image functors preserve finite limits, if E is locally connected then $\Delta(\operatorname{Aut}(X))\cong \operatorname{Aut}(\Delta X)$ for any set X.
My question is twofold:
Is there a noticeably weaker condition on E than local connectedness which ensures that Δ preserves objects of automorphisms?
Can you give an explicit example of a topos for which Δ does not preserve objects of automorphisms? |
Tagged: abelian group
Abelian Group Problems and Solutions.
Problem 616
Suppose that $p$ is a prime number greater than $3$.
Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$.
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group.
Problem 434
Let $R$ be a ring with $1$.
A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.) (a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator.
(b) Determine all the irreducible $\Z$-modules.
Problem 420
In this post, we study the
Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.
Problem 343
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$. |
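For Problem 497, a small sanity check in the abelian group $\Zmod{4}\times\Zmod{6}$ (my own illustration): the elements $(1,0)$ and $(0,1)$ have orders $4$ and $6$, and their sum has order $\mathrm{lcm}(4,6)=12$.

```python
from math import gcd

def order(elem, add, identity):
    """Order of elem in a finite group, given the group operation and identity."""
    n, cur = 1, elem
    while cur != identity:
        cur = add(cur, elem)
        n += 1
    return n

# The abelian group Z_4 x Z_6 under componentwise addition.
add = lambda u, v: ((u[0] + v[0]) % 4, (u[1] + v[1]) % 6)
e0 = (0, 0)

a, b = (1, 0), (0, 1)            # orders 4 and 6
c = add(a, b)                    # candidate element of order lcm(4, 6)
lcm = 4 * 6 // gcd(4, 6)         # 12

print(order(a, add, e0), order(b, add, e0), order(c, add, e0))  # 4 6 12
```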
I have to implement a Flanger effect in Matlab, but first I have to plot its frequency response and impulse response.
The difference equation is $y[n]=x[n]+a\cdot x[n-d[n]]$ where $a$ is a constant, $|a|<1$, and $d[n]=\frac D2 (1-\cos(2\pi f_s n))$, with $D$ and $f_s$ constant.
I'm having trouble calculating either the DTFT or the Z-transform of this difference equation. I can't find how to compute the transform of the time-varying shift $x[n-d[n]]$.
Is there any other way to compute the impulse and frequency response?
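One observation that may help: because $d[n]$ depends on $n$, the flanger is linear but time-varying, so it has no single impulse response or transfer function; the response depends on when the impulse is applied. You can probe it numerically with impulses at different times. A sketch (the parameter values for $a$, $D$ and the modulation frequency are made up, and the delay is rounded to the nearest sample instead of interpolated):

```python
import math

def flanger(x, a=0.7, D=8, f=0.001):
    """y[n] = x[n] + a*x[n - d[n]], d[n] = (D/2)*(1 - cos(2*pi*f*n)),
    with the delay rounded to an integer number of samples."""
    y = []
    for n, xn in enumerate(x):
        d = round(0.5 * D * (1 - math.cos(2 * math.pi * f * n)))
        delayed = x[n - d] if n - d >= 0 else 0.0
        y.append(xn + a * delayed)
    return y

N = 400
imp0 = [1.0] + [0.0] * (N - 1)                      # impulse at n = 0
imp200 = [0.0] * 200 + [1.0] + [0.0] * (N - 201)    # impulse at n = 200

h0 = flanger(imp0)
h200 = flanger(imp200)
# The two responses are not shifted copies of each other, so no single h[n]
# (and hence no single frequency response) describes the system exactly.
print(h0[:3], h200[200:205])
```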
Sakharov condition states
both C and CP violation are necessary for baryogenesis. Now consider, for example, a theory with a B-number violating interaction and C-violation. Therefore, if $p\to e^+\gamma$ is an allowed B-violating process, C-violation would imply that the C-conjugated process $\bar{p}\to e^-\gamma$ would occur at a different rate. Shouldn't, therefore, C-violation be sufficient for baryogenesis? Why do we also need CP-violation?
Assume you have only $C$-violation. Then it implies that the rate $\Gamma$ of the hypothetical process $p^{-} \to e^{-}\gamma$ won't be equal to the rate of the hypothetical process $p^{+} \to e^{+}\gamma$, but only
for the given helicities $L/R$. Say,$$\tag 1 \Gamma\big(p_{L}^{+} \to e^{+}_{R}\gamma_{L}\big) \neq \Gamma(p_{L}^{-} \to e^{-}_{R}\gamma_{L}),$$and$$\tag 2 \Gamma\big(p_{R}^{+} \to e^{+}_{L}\gamma_{R}\big) \neq \Gamma(p_{R}^{-} \to e^{-}_{L}\gamma_{R})$$(note that the "photon" $\gamma$ has helicities $\pm 1$ while the "electron" $e$ and the "proton" $p$ have helicities $\pm \frac{1}{2}$). (Added) This is because the $C$-transformation changes the particle into the corresponding antiparticle without changing the helicity.
But let's assume that such processes respect $CP$-symmetry,
(added) under which a left/right particle is changed into a right/left antiparticle. Then there must be$$\tag 3 \Gamma (p^{-}_{L} \to e^{-}_{R}\gamma_{L}) = \Gamma (p^{+}_{R} \to e_{L}^{+}\gamma_{R})$$Let's separately add the left- and right-hand sides of $(1)$ and $(2)$ and use $(3)$. We obtain$$\Gamma\big(p_{R}^{+} \to e^{+}_{L}\gamma_{R}\big) + \Gamma\big(p_{L}^{+} \to e^{+}_{R}\gamma_{L}\big) = \Gamma(p_{L}^{-} \to e^{-}_{R}\gamma_{L})+ \Gamma(p_{R}^{-} \to e^{-}_{L}\gamma_{R}),$$and no total baryon asymmetry will be generated.
Therefore we require the CP-asymmetry. |
Fix a countable transitive model $M$ of ZFC. In my answer to this question I indicated that there are forcing iterations $((Q_\alpha:\alpha\leq\omega),(\dot P_\alpha:\alpha<\omega))$ in $M$ and sequences $(G_\alpha:\alpha<\omega)$ of filters such that the following happens:
Each $G_\alpha$ is a filter in the evaluation of $\dot P_\alpha$ with respect to the filter $G_0*\dots*G_{\alpha-1}$ and $G_\alpha$ is generic over $M[G_0,\dots,G_{\alpha-1}]$ (call such a sequence $(G_\alpha:\alpha<\omega)$ a sequence of generics), but there is no $Q_\omega$-generic filter over $M$ whose $\alpha$-th projection is $G_\alpha$ for all $\alpha<\omega$.
An example can be obtained as follows:
Take the countable support iteration of Sacks forcing (or any other nontrivial $\omega^\omega$-bounding proper forcing notion) of length $\omega$ (i.e., the supports are actually everything, but this doesn't matter). This forcing adds no Cohen real.
Compare this to the finite support iteration of the same forcing notions. This iteration does add a Cohen real. The Cohen real is coded by the sequence of generics and hence this sequence of generics does not come from a generic filter for the countable support iteration mentioned before. This sequence of generics is not even contained in a forcing extension obtained using the countable support iteration.
Now here are two questions:
1) Is there an example of a sequence of generics (of length $\omega$) that cannot come from any iteration of the $\dot P_\alpha$?
I am asking here for iterations where the finite initial segments are as usual (just plain iteration) and we choose whatever ideal for the supports, including all finite subsets of the index set. But I am open to more general forms of iteration. For example take a large forcing notion $Q$ along with commuting complete embeddings of the $Q_\alpha$, $\alpha<\omega$. This would be an iteration of the $\dot P_\alpha$, too, the most general one that I can think of right now.
2) Is there an example of a sequence of generics over $M$ that is not contained in any countable transitive extension of $M$ with the same ordinals as $M$ that is a model of ZFC?
Obviously, a positive answer to 2) solves 1) as well. |
Side of a regular polygon: \(a\)
Number of sides of a polygon: \(n\)
Interior angle: \(\alpha\)
Apothem: \(m\)
Area: \(S\)
Radius of the inscribed circle: \(r\)
Radius of the circumscribed circle: \(R\)
Perimeter: \(P\)
Semiperimeter: \(p\)
A regular polygon is a convex polygon with equal sides and equal angles. All interior angles in a regular polygon are equal and determined by the expression \(\alpha = {\large\frac{{n - 2}}{n}\normalsize} \cdot 180^\circ,\) where \(n\) is the number of sides of the polygon.

Radius of the circumscribed circle: \(R = {\large\frac{a}{{2\sin \frac{\pi }{n}}}\normalsize}\)

The radius of the inscribed circle of a regular polygon coincides with the apothem (the perpendicular drawn from the centre to any side) and is given by the formula \(r = m = {\large\frac{a}{{2\tan \frac{\pi }{n}}}\normalsize} = \sqrt {{R^2} - {\large\frac{{{a^2}}}{4}}\normalsize},\) where \(r\) is the radius of the inscribed circle, \(m\) is the apothem, \(R\) is the radius of the circumscribed circle, and \(a\) is the side of the polygon.

Perimeter of a regular polygon: \(P = na\)

Area of a regular polygon: \(S = {\large\frac{{n{R^2}}}{2}\normalsize}\sin {\large\frac{{2\pi }}{n}\normalsize} = pr = p\sqrt {{R^2} - {\large\frac{{{a^2}}}{4}}\normalsize},\) where \(p = {\large\frac{P}{2}\normalsize}\). |
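These formulas translate directly into code. A minimal sketch in plain Python (the function name is my own); the hexagon values at the end follow from the formulas above:

```python
import math

def regular_polygon(n, a):
    """Properties of a regular n-gon with side length a."""
    alpha = (n - 2) / n * 180.0                      # interior angle, degrees
    R = a / (2 * math.sin(math.pi / n))              # circumradius
    r = a / (2 * math.tan(math.pi / n))              # inradius = apothem m
    P = n * a                                        # perimeter
    S = n * R ** 2 / 2 * math.sin(2 * math.pi / n)   # area
    return alpha, R, r, P, S

# Unit hexagon: alpha = 120 degrees, R = 1, r = sqrt(3)/2, P = 6
alpha, R, r, P, S = regular_polygon(6, 1.0)
```

The identity \(S = pr\) with \(p = P/2\) serves as a handy cross-check on the implementation.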
Hyperparameter Tuning
Hyperparameter tuning in Deep Learning involves: learning rate $\alpha$; $\beta, \beta_1, \beta_2, \epsilon$; number of layers; number of hidden units; learning rate decay; mini-batch size.
Some parameters are more important than others:

1. Learning rate $\alpha$
2. Momentum term $\beta$, # hidden units, mini-batch size
3. # layers, learning rate decay
4. $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$

Grid Search: don't use a grid, use random values.

Coarse to fine sampling scheme: zoom in to a smaller region of the hyperparameters giving the best results, and create a new random grid within that smaller region.

Importance of picking an appropriate scale for hyperparameters
Say we are trying to tune the number of hidden units $n^{[l]}$ somewhere between 50 and 100, or the number of layers between 2 and 4. In such cases sampling uniformly at random makes sense. This is not true for all hyperparameters, however.
For example, the learning rate $\alpha$: say it lies between 0.0001 and 1.
In such a case, search for the parameter on a log scale rather than a uniform scale, so that every order of magnitude is explored equally.
python implementation
r = -4 * np.random.rand()  # r will be between [-4, 0]
alpha = 10 ** r
alpha will be between $10^{-4}$ and $10^{0}$
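A slightly more general, runnable version of this sampler (plain NumPy; the helper name is my own) that works for any interval $[10^a, 10^b]$ and also covers the $1-\beta$ trick discussed below:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_uniform(low, high):
    """Sample uniformly on a log scale from [low, high], e.g. [1e-4, 1]."""
    a, b = np.log10(low), np.log10(high)
    r = rng.uniform(a, b)        # r uniform in [a, b]
    return 10.0 ** r

alpha = sample_log_uniform(1e-4, 1.0)         # learning rate
beta = 1.0 - sample_log_uniform(1e-3, 1e-1)   # sample 1-beta on a log scale
```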
Generalization for a log scale
If you have to search between $10^a$ and $10^b$, where a and b are the ends of the scale
In the above case, $a = \log_{10} 0.0001 = -4$ and $b = \log_{10} 1 = 0$, so r will be between $[-4, 0]$, i.e. $[a, b]$.

Hyperparameter for exponentially weighted averages $\beta$
$\beta$ = 0.9 …. ..0.999
0.9: averaging over the last 10 days' values
0.999: averaging over the last 1000 days' values
Similar to log scale
Explore the values of $1-\beta$ = 0.1 … 0.001:

r will belong to [-3, -1]
$1-\beta = 10^r$
$\beta = 1-10^r$

Pandas vs Caviar

Re-test hyperparameters occasionally; intuitions do get stale.
Approaches:
- Babysit one model and keep working on it (the low-computational-capacity case) [Pandas]
- Train multiple models in parallel [Caviar]

Source material from Andrew Ng's awesome course on Coursera. The material in the video has been written in a text form so that anyone who wishes to revise a certain topic can go through this without going through the entire video lectures. |
First of all, contrary to some sources, I claim that the $\text{ECTT}$ can absolutely be understood as a mathematical axiom, or at least as a mathematical proposition if we doubt its truth. Introduce into our working language a new predicate symbol defined on models of computation with the intended meaning that a model is reasonable. This is essentially the same situation Peano and others faced: we already have an intended meaning for the symbols $\{0,1,+,\times\}$, even prior to writing the axioms involving them. At least until we axiomatize it, our theory remains sound under the interpretation of the new symbol, whatever it means, because the only facts about it that we can prove are tautologies. What is reasonable is reasonable, for example. Now add an axiom, the $\text{ECTT}$, which says that this predicate of reasonableness is satisfied by exactly those models that have a polynomial time-translation to a Turing machine. As an axiom it's not falsifiable in the sense of our theory being able to contradict it, as long as the theory was consistent to begin with, but the soundness of our theory is falsifiable: conceivably there is a reasonable model of computation that's not related to Turing machines by a polynomial time-translation. Allowing that this hypothetical discovery might involve a shift in thinking about what is reasonable, that is how I see the formal side. It seems trivial in retrospect but I think it's an important point to delineate the mathematics from everything else.
Overall, I view the $\text{ECTT}$ as a solid principle and axiom. But we have working computers that are well-described by $\text{BPP}$, and there are problems like prime-finding and polynomial identity-testing that are not known to be in $\text{P}$, so why doesn't this violate the $\text{ECTT}$? It doesn't until we can actually prove $\text{P} \neq \text{BPP}$: in the meantime, instead of shifting our focus to $\text{BPP}$, we're no worse off keeping the $\text{ECTT}$ as-is and saying what-if polynomial identity-testing is actually in $\text{P}$. This approach also lets us isolate particular problems we are interested in such as factoring. It's a subtly different assumption than equipping our model with an oracle, since we don't actually change the model, but the effect is the same. From this utilitarian point of view, the $\text{ECTT}$ is sufficient until we can prove any separations. The situation is the same for quantum computing, except we have to build a working quantum computer
and prove $\text{P} \neq \text{BQP}$ to really take the wind out of the $\text{ECTT}$. If we just build one without the proof, maybe the universe is a simulation running on a classical computer and the $\text{ECTT}$ still holds, or if we prove it without building one, maybe it's not really a reasonable model. To make the argument really tight, we need problems that are complete for $\text{BPP}$ and $\text{BQP}$ with respect to $\text{P}$, but we can make do with choosing whatever problems we know how to solve.
For example, suppose I claim to have built a machine that factors numbers and that its runtime satisfies a particular polynomial bound. The machine is in a box, you feed in the number written on a paper tape, and it prints out the factors. There is no doubt that it works, since I've used it to win the RSA challenges, confiscate cryptocurrency, factor large numbers of your choice, etc. What's in the box? Is it some amazing new type of computer, or is it an ordinary computer running some amazing new type of software?
By assuming the $\text{ECTT}$, we're saying it must be software, or at least that the same task could be accomplished by software. And until we can open the box by proving complexity class separations, no generality is lost under this assumption. That's because even if the operation of the machine is explained well by some reasonable non-classical or non-deterministic model and not explained by the classical deterministic one we would still need to prove those models are actually different in order to break our interpretation of the $\text{ECTT}$ and make our theory unsound.
To challenge the $\text{ECTT}$ from an entirely extra-mathematical direction, it seems that we'll need a machine or at least a plausible physical principle for solving an $\text{EXPTIME}$-complete problem in polynomial time. Even a time machine implementing $\text{P}_\text{CTC} = \text{PSPACE}$ is not powerful enough to defeat the $\text{ECTT}$ without a proof of $\text{P} \neq \text{PSPACE}$, although it might help us produce one.
To illustrate, Doctor Who has strung his telephone wires through a wormhole and built a contraption that he uses to discover a gigabyte-long formal proof of $\text{P} \neq \text{NP}$. He wins the Millennium Prize, and he has also invalidated the $\text{ECTT}$, because the result implies $\text{P} \neq \text{P}_\text{CTC}$. If his contraption finds a proof of $\text{P} = \text{NP}$ instead, or a proof of the Riemann hypothesis, he still wins the Prize, but that's it — no $\text{ECTT}$ violation. However, the Doctor's contraption seems like a better tool for attacking the $\text{ECTT}$ than my amazing factoring box, since I don't know how being able to magically factor numbers in polynomial time can help me prove that it isn't possible to do the same thing without magic. To be on equal footing it would have to be the case that factoring is $\text{NP}$-complete and also that I (somehow) know a reduction to it from $\text{3SAT}$ — then I could encode the search for a proof that factoring is not in $\text{P}$ as a series of factoring problems and have a chance at finding it before the wormhole reopens.
In the other corner towers Deep Blue, a giant robot designed by a corporation to solve $\text{EXPTIME}$-complete problems. Its challenge is to play perfect chess quickly on all board sizes and convince us all that it can really do that with an unlimited marketing budget. But it doesn't have to justify the uniqueness of its methods to make us rewrite the $\text{ECTT}$, since we already know that $\text{EXPTIME} \neq \text{P}$. This is more trivial than it may appear: if the robot is reasonably constructed, and what the robot does is amazing, then the reasonable model describing it is capable of amazing things and we can repurpose the $\text{ECTT}$ to polish its gears.
In my view, Scott Aaronson's answer is mathematically incoherent, because it's not compatible with any formalization of the $\text{ECTT}$ that I can identify. We are supposed to weigh evidence for and against $\text{P} = \text{BPP}$, but I think we should demand proof, not just evidence, before we drop the whole idea of the $\text{ECTT}$ or modify it for no practical benefit (never mind the nasty business of extending the concept of time-translations to non-deterministic models). And as I've argued above, the discussion of whether or not quantum computing is real is a red herring without a proof of $\text{P} \neq \text{BQP}$.
Here is a summary of the situation. For any given model of computation, it is inconsistent to simultaneously believe these three statements: the $\text{ECTT}$; that the model is reasonable or physically possible; and that the model is more powerful than a Turing machine. Only the last statement is in the language of our original theory, $\{\in\}$. If it's not already settled, then we're taking a gamble with consistency by assuming it as an axiom, or by assuming the first two statements together which imply its negation. So our only choice to incorporate any of these ideas which is sure to preserve consistency is between a definition of what reasonable means, and a statement that this particular model is reasonable (which by itself, without the definition, doesn't give us much to work with). Of course, we can have both and still be consistent if we change the $\text{ECTT}$ to something else, but this will have been wasted effort if the class separation is settled opposite the way we expected. Regardless, by axiomatizing our reasonability predicate symbol under such a nebulous interpretation, we're taking a gamble with soundness. Before, with our language equal to $\{\in\}$, we only had arithmetical soundness to worry about, and now we are expected to agree about what is reasonable as well.
Having browsed the linked paper by Dershowitz and Falkovich, I believe that its authors also hold an incoherent or maybe just tautological view of the $\text{ECTT}$. |
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that the nullity of $T$ is zero.
If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$.
Let $V$ denote the vector space of all real $2\times 2$ matrices. Suppose that the linear transformation from $V$ to $V$ is given as below.\[T(A)=\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}A-A\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}.\]Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism.
Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is included in the kernel of $f$: $\ker(\pi) \subset \ker(f)$.
Define a map $\bar{f}:H\to K$ as follows. For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective. Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$.
(a) Prove that the map $\bar{f}:H\to K$ is well-defined.
(b) Prove that $\bar{f}:H\to K$ is a group homomorphism.
Let $\calF[0, 2\pi]$ be the vector space of all real-valued functions defined on the interval $[0, 2\pi]$. Define the map $f:\R^2 \to \calF[0, 2\pi]$ by\[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\]We put\[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\]
(a) Prove that the map $f$ is a linear transformation.
(b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$.
(c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.)
(d) Define a map $g:V \to V$ by\[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\]Prove that the map $g$ is a linear transformation.
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$.
Suppose that the vectors\[\mathbf{v}_1=\begin{bmatrix}-2 \\1 \\0 \\0 \\0\end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix}-4 \\0 \\-3 \\-2 \\1\end{bmatrix}\]are a basis vectors for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that\[\mathbf{x}\neq0, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,\]and\[A\mathbf{x}=\mathbf{0}.\]
(Stanford University, Linear Algebra Exam Problem)
Let $V$ be the subspace of $\R^4$ defined by the equation\[x_1-x_2+2x_3+6x_4=0.\]Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$.
A hyperplane in $n$-dimensional vector space $\R^n$ is defined to be the set of vectors\[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\]satisfying the linear equation of the form\[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\]where $a_1, a_2, \dots, a_n$ and $b$ are real numbers, and at least one of $a_1, a_2, \dots, a_n$ is nonzero.
Consider the hyperplane $P$ in $\R^n$ described by the linear equation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\]where $a_1, a_2, \dots, a_n$ are some fixed real numbers and not all of these are zero.(The constant term $b$ is zero.)
Then prove that the hyperplane $P$ is a subspace of $\R^{n}$ of dimension $n-1$.
Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation. Prove the following.
(a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$.
(b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$. Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then\[B'=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\]is a basis of $\R^n$.
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as\[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\]for some vector $\mathbf{v}\in \calN(T)$.
Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$. We assume that $A$ is idempotent, that is, $A^2=A$. Then prove that\[\R^n=\im(T) \oplus \ker(T).\]
(a) Let $A=\begin{bmatrix}1 & 2 & 1 \\3 &6 &4\end{bmatrix}$ and let\[\mathbf{a}=\begin{bmatrix}-3 \\1 \\1\end{bmatrix}, \qquad \mathbf{b}=\begin{bmatrix}-2 \\1 \\0\end{bmatrix}, \qquad \mathbf{c}=\begin{bmatrix}1 \\1\end{bmatrix}.\]For each of the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, determine whether the vector is in the null space $\calN(A)$. Do the same for the range $\calR(A)$.
(b) Find a basis of the null space of the matrix $B=\begin{bmatrix}1 & 1 & 2 \\-2 &-2 &-4\end{bmatrix}$.
Let $A$ be a real $7\times 3$ matrix such that its null space is spanned by the vectors\[\begin{bmatrix}1 \\2 \\0\end{bmatrix}, \begin{bmatrix}2 \\1 \\0\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\-1 \\0\end{bmatrix}.\]Then find the rank of the matrix $A$.
(Purdue University, Linear Algebra Final Exam Problem)
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\]where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map and the kernel of $\epsilon$ is called the augmentation ideal.
(a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$. |
Number Theory Seminar: Tom Scanlon Date: 01/12/2012 Time: 15:00
University of British Columbia
p-independent bounds in the positive characteristic Mordell-Lang problem (part 1)
p-independent bounds in the positive characteristic Mordell-Lang problem (part 2)
Abstract: The usual Mordell-Lang conjecture, a theorem of Faltings, asserts that if $A$ is an abelian variety over $\mathbb{C}$, $\Gamma < A(\mathbb{C})$ is a finitely generated subgroup, and $X \subseteq A$ is a closed subvariety, then $X(\mathbb{C}) \cap \Gamma$ is a finite union of cosets of subgroups of $\Gamma$. If one were to ask instead that $A$ be defined over a field $K$ of positive characteristic, then such a conclusion cannot hold in general: if $A$ were defined over a finite field, $F: A \to A$ were the associated Frobenius morphism, $X \subseteq A$ were defined over the same finite field, and $P \in X(K) \cap \Gamma$, then $\{ F^n(P) : n \in \mathbb{N} \} \subseteq X(K) \cap \Gamma$. Other anomalous intersections may arise as sums of such orbits. Some years ago, in joint work with Moosa, I showed that these are essentially the only counterexamples to a naïve translation of the Mordell-Lang conjecture to semiabelian varieties defined over a finite field. Our proof, which was long but elementary, yields bounds which explicitly depend on the characteristic. In these lectures, I shall explain how to deduce characteristic-independent bounds from a differential algebraic argument.
3:00-5:00pm in WMAX 216 (Time and location subject to change)
For other information, please visit the Math Department website: http://www.math.ubc.ca/Dept/Events/index.shtml?period=future&series=69. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Here is an exposition of the ``alternative hack'' to which David Speyer alluded. For concreteness, I have fixed a group; the argument will go through in general, of course.
Let $g = (12) \in S_3$, a transposition in the symmetric group on three elements. The conjugacy class of $g$ consists of all three transpositions. Form $a = \frac{1}{3}\left[(12) + (13) + (23) \right] \in \mathbb{C}S_3$, the average of $g$'s conjugacy class. It is clear that for any $x \in G$,$$xax^{-1} = a$$since conjugating $a$ will simply rearrange the terms in the sum. Equivalently,$$xa = ax,$$and $a$ lies in the center of $\mathbb{C}S_3$. It follows that if $\rho: S_3 \longrightarrow \mbox{Aut}(V)$ is an irreducible representation, the matrix$$\rho(a) = \frac{1}{3}\left[\rho(12) + \rho(13) + \rho(23) \right]$$commutes with any $\rho(x)$. Schur's lemma tells us that $\rho(a)$ is a scalar matrix, that is, $\rho(a) = \lambda I$ for some $\lambda \in \mathbb{C}$.
Summarizing, for each irreducible representation $\rho$, we may define a class function $\psi_{\rho}$ which associates to any $g \in S_3$ the scalar $\lambda$ by which $a=\frac{1}{|S_3|}\sum_{x \in S_3} xgx^{-1}$ acts on $V$.
As it happens, $\psi_{\rho}(g)$ can be computed easily in terms of the character $\chi^{\rho}$: after all, every element conjugate to $g$ has the same trace; by linearity,$\mbox{Tr}(\rho(a)) = \chi^{\rho}(a) = \chi^{\rho}(g)$.It follows that$$\psi_{\rho}(g) = \frac{\chi^{\rho}(g)}{\mbox{dim}V}.$$A natural next step is to eliminate reference to a particular $g$ using an inner product:$$\langle \psi_{\rho},\chi^{\rho} \rangle = \frac{1}{\mbox{dim}V}.$$Expanding the definition of the inner product on class functions,$$\mbox{Tr}\left[\frac{1}{|S_3|^2}\sum_{g \in S_3} \left(\sum_{x \in S_3} \rho(xgx^{-1}) \right) \rho(g^{-1}) \right] = \frac{1}{\mbox{dim}V},$$and$$\frac{1}{|S_3|^2}\sum_{g \in S_3} \sum_{x \in S_3} \chi^{\rho}(xgx^{-1}g^{-1}) = \frac{1}{\mbox{dim}V}.$$
We are led to consider the formal sum $d=\sum_{g,h \in S_3} ghg^{-1}h^{-1}$ mentioned in David Speyer's post. This sum is invariant under any automorphism of $S_3$ (in particular inner automorphisms) and so lies in the center of the group algebra $\mathbb{C}S_3$. Schur's lemma tells us that the image of $d$ under any irreducible representation $\rho$ is a scalar. By the above we get$$\sum_{g \in S_3} \sum_{h \in S_3} \rho(ghg^{-1}h^{-1}) = \left[\frac{|S_3|}{\mbox{dim}V}\right]^2I.$$
Considering now the regular representation $\mathbb{C}S_3$, we see that the element $d$ acts with rational number eigenvalues. In other words, its characteristic polynomial splits completely over $\mathbb{Q}$. But we can also see by inspection that $d$ acts by an integer matrix. The characteristic polynomial is a monic integer polynomial and splits into linear factors, so its roots are integers. Since $\mathbb{C}S_3$ contains every irreducible representation as a summand at least once, we see that each$$\frac{|S_3|}{\mbox{dim}V} \in \mathbb{Z}.$$In particular, $1$, $2$, and $1$ all divide $6$. |
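The last step can be checked numerically. Below is a sketch in Python/NumPy (my own construction, using only the left regular representation of $S_3$) that builds $d=\sum_{g,h} ghg^{-1}h^{-1}$ as a $6\times 6$ matrix and confirms that its eigenvalues are the integers $\left(|S_3|/\dim V\right)^2$:

```python
import itertools
import numpy as np

# Elements of S_3 as permutation tuples; (g*h)(x) = g(h(x)).
elems = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(elems)}

def mul(g, h):
    return tuple(g[h[x]] for x in range(3))

def inv(g):
    out = [0] * 3
    for x in range(3):
        out[g[x]] = x
    return tuple(out)

def reg(g):
    """Left regular representation: rho(g) sends basis vector e_h to e_{g*h}."""
    M = np.zeros((6, 6))
    for h in elems:
        M[idx[mul(g, h)], idx[h]] = 1.0
    return M

# d = sum over all pairs (g, h) of the commutator g h g^{-1} h^{-1}
D = sum(reg(mul(mul(g, h), mul(inv(g), inv(h))))
        for g in elems for h in elems)

# C S_3 = trivial + sign + (standard)^2, so d should act with eigenvalue
# (6/1)^2 = 36 on each 1-dimensional summand and (6/2)^2 = 9 on the rest.
eigenvalues = sorted(np.linalg.eigvals(D).real)
```

Since the regular representation contains every irreducible at least once, the integrality of these eigenvalues is exactly the divisibility statement above.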
The Hamiltonian for a system of spinless fermions on a 1D chain (with chemical potential $\mu=0$) is given by $$ H=-\sum_j\left( c^\dagger_{j+1} c_j+h.c.\right)+\Delta \sum_j \left( c^\dagger_{j+1}c^\dagger_j+h.c.\right) $$ where $\Delta$ is some number. If we introduce $$ c_j=\frac{1}{\sqrt{N}}\sum_k e^{ikj}c_k$$ We obtain the result below:
$$H=\sum_k \xi(k) c_k^\dagger c_k+\Delta\sum_k \left(e^{-ik}c_k^\dagger c_{-k}^\dagger +e^{ik}c_k c_{-k}\right) $$
where $\xi(k)=-2\cos(k)$I am trying to represent this Hamiltonian in matrix form by using the Nambu operator
$$ \phi_k=\begin{pmatrix} c_k \\ c_{-k}^\dagger \end{pmatrix} $$
Numerous texts give it as $$ H=\sum_k \phi_k^\dagger \begin{pmatrix} \xi(k) & 2i\Delta \sin(k)\\ -2i\Delta \sin(k) & -\xi(k)\end{pmatrix}\phi_k $$ However, when I expand the above out, I do not get my original coupling term back--instead, I get $$ \Delta \sum_k \left( e^{-ik}c_k^\dagger c_{-k}^\dagger -e^{ik}c_k^\dagger c_{-k}^\dagger+e^{ik}c_k c_{-k}-e^{-ik}c_k c_{-k}\right) $$
I see that, to obtain my old coupling term, I have to let $e^{ik}c_k^\dagger c_{-k}^\dagger=e^{-ik}c_k c_{-k}=0$, but I can't explain why. Can someone please help me with this step? Here is a similar question posed in a problem set from a German university for your reference: http://users.physik.fu-berlin.de/~romito/qft2011/set6.pdf |
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?(a) $A$(b) $C^{-1}A^{-1}BC^{-1}AC^2$(c) $B$(d) $C^2$(e) $C^{-1}BC$(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
This post is Part 1 and contains the first three problems. Check out Part 2 and Part 3 for the rest of the exam problems.
Problem 1. Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A consistent system of $5$ equations in $3$ unknowns and the rank of the system is $1$.
(b) A homogeneous system of $5$ equations in $4$ unknowns and it has a solution $x_1=1$, $x_2=2$, $x_3=3$, $x_4=4$.
Problem 2. Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system.\[A=\begin{bmatrix}1 & 0 & -1 & -2 \\2 &1 & -2 & -7 \\3 & 0 & -3 & -6 \\0 & 1 & 0 & -3\end{bmatrix}.\]
Problem 3. Let $A$ be the following invertible matrix.\[A=\begin{bmatrix}-1 & 2 & 3 & 4 & 5\\6 & -7 & 8& 9& 10\\11 & 12 & -13 & 14 & 15\\16 & 17 & 18& -19 & 20\\21 & 22 & 23 & 24 & -25\end{bmatrix}\]Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix. Suppose that $ABA^{-1}=I$. Then determine the matrix $B$.
(Linear Algebra Midterm Exam 1, the Ohio State University) |
The recursion theorem in computability states that, for any computable map $f : \mathbb{N} \to \mathbb{N}$ there exists $n \in \mathbb{N}$ such that $\varphi_{f(n)} = \varphi_n$, where $\varphi$ is a standard enumeration of partial computable functions. The one given here is due to Rogers, there is another by Kleene (but they can be derived from each other).
I am collecting typical uses of recursion theorem. I can think of the following:
existence of quines, for instance that there is $k \in \mathbb{N}$ such that $\varphi_k = \lambda n . k$, or that there is $n$ such that $W_n = \{n\}$, where $W$ is a standard enumeration of c.e. sets.
it can be used to establish validity of various kinds of recursion schemata for defining computable functions,
the Kreisel-Lacombe-Shoenfield-Tseitin theorem stating that "all computable functionals are continuous".
What are some other typical uses of recursion theorem? |
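To make the first item concrete, here is the fixed-point idea in miniature: a Python quine, i.e. a program whose output is its own source. (A sketch for illustration; the recursion theorem is what guarantees such self-referential programs exist in any acceptable enumeration.)

```python
# s is a template containing a placeholder (%r) for its own repr;
# printing s % s reproduces the two non-comment lines of this program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick mirrors the proof of the recursion theorem: the program applies a "self-substitution" operation to a description of itself.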
I'll give an example from politics. Let's say you have a legislative body, such as the House of Representatives in the U. S. Congress. Over a period of time, the members of the House will vote on many bills and thereby accrue a voting history. Let's encode votes as numbers: a vote for a bill is 1, a vote against a bill is -1, and an abstention (no vote) is 0. Also, let's label the representatives $R_1,\ldots,R_m$, and label the bills $B_1,\ldots,B_n$. We thus have, for each pair $i,j$ of numbers with $0 <i\leq m$, $0 < j \leq n$, a vote $V_{ij}\in \{-1,0,1\}$, namely how congressperson $R_i$ voted on bill $B_j$. This gives us a big $m\times n$ "vote matrix" $V$ whose entries are the votes $V_{ij}$.
Now, it's true that each congressperson has an opinion on every bill, or dually, that each bill appeals to different congresspeople in different amounts (possibly negative). However, that description misses an important aspect of the situation: a congressperson's voting behavior can be approximated using much fewer parameters than a list of all his votes. Also, a bill's tendency to appeal to different people can be approximated using much fewer parameters than a list of all the people who voted for and against it. Indeed, it's not really
bills that congresspeople have opinions on; it's issues and policies. On the flip side, a given bill will implement various types of policies and address various issues, and that's really what determines who will like it and how much.
Thus, to describe voting behaviors, what you really need is (1) a list of policies $P_1,\ldots,P_f$ (the
latent factors), (2) for each congressperson $R_i$, a degree of preference $S_{ik}$ for each policy $P_k$, and (3) for each bill $B_j$, a value $C_{kj}$ describing to what extent it implements policy $P_k$. Let's continue the convention that positive values for $S_{ik}$ or $C_{kj}$ indicate accordance and negative values indicate opposition. To each congressperson $R_i$, we can assign the vector $S_i = (S_{i1},\ldots,S_{if})$ (which we might call the "policy vector" for that congressperson), and to each bill $B_j$, we can assign the vector $C_j = (C_{1j},\ldots,C_{fj})$ (which we might also call the "policy vector" for that bill).
For what follows, I'm going to use a very simplistic (but somewhat plausible) mathematical model. (Your article uses a more complicated and more realistic model, taking bias into account, for example.) Also, the model makes a lot more sense if congresspeople are asked to state their degree of preference for each bill rather than simply voting "yes" or "no," so that the "votes" $V_{ik}$ take values in $\mathbb{R}$. When deciding how to vote on a bill, a congressperson may consider how well it correlates with her opinions and make a decision based on that. With a lot of vigorous hand waving and wishful thinking, the outcome of this process can be described very simply in terms of policy vectors: the vote $V_{ij}$ is simply the dot product $S_i\cdot C_j = \sum_k S_{ik} C_{kj}$. (I'll leave it as an exercise to show that's not completely ridiculous, even if unlikely to be exactly true.) Another way of saying this is that if $S$ is the matrix with entries $S_{ik}$ and $C$ is the matrix with entries $C_{kj}$, then $V = SC$. In other words, our knowledge about legislative policies as latent factors induces a factorization of the matrix $V$.
One merit of the above approach is that although it's a bit too simple, it leads to well understood mathematics. For it to be useful, the number of policies $f$ should be much smaller than $m$ and $n$, the numbers of congresspeople and bills. In that case, the factorization $V = SC$ means that $V$ has rank $f$, which is small compared to its dimensions. In practice, admitting that the description in terms of policies as latent factors can only be a good approximation, not exact, this means that $V$ is well approximated by a low-rank matrix. Factorizations can be obtained from that observation alone using standard matrix tools like the singular-value decomposition. In particular, the policy vectors can be found even before you have any idea what the "policies" should be. (In other words, you don't have to sit down and make a list of policies you think are important and figure out what the policy vectors must be from that; you can use a standard algorithm which will determine the policies for you. Of course, it won't
name the policies, but if you need to, you can compare policy vectors to figure out what real-world policies the algorithmically extracted policies approximate.) |
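As an illustrative sketch (the sizes and random data here are made up), the two claims above — that a vote matrix generated by $f$ latent policies has rank $f$, and that standard tools like the SVD recover that structure without being told what the policies are — can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, f = 8, 12, 2           # congresspeople, bills, latent policies (toy sizes)
S = rng.normal(size=(m, f))  # policy vectors of the congresspeople
C = rng.normal(size=(f, n))  # policy vectors of the bills
V = S @ C                    # "vote matrix": V_ij = S_i . C_j, rank f by construction

# The SVD reveals the latent dimension: only f singular values are nonzero.
singular_values = np.linalg.svd(V, compute_uv=False)
print(int(np.sum(singular_values > 1e-10)))  # effective rank: 2
```

In practice $V$ is only approximately low-rank, so one keeps the $f$ largest singular values and their singular vectors; those singular vectors play the role of the algorithmically extracted (unnamed) policies.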
The time dilation and length contraction relationships actually have very limited applicability. To use the time dilation relationship, one of the two observers must measure a proper time, meaning the two events occur at exactly the same spatial point in that observer's frame. To use the length contraction relationship, one of the two observers must measure a proper length, meaning the object whose length is measured is at rest with respect to that observer. What if you want to compare measurements concerning more general events? To do this requires the Lorentz Transformation, which allows you to transform the spacetime coordinates of an event in one inertial reference system to any other inertial reference system.
To derive the
Lorentz transformation, imagine two inertial reference systems, labeled \(O\) and \(O’\). Let the origins of \(O\) and \(O’\) overlap at time zero, and allow \(O’\) to move with speed \(u\) relative to \(O\). (Therefore, at a later time \(t\), the origins are separated by a distance \(ut\)). Call the direction of motion the x-direction.
Now imagine an event that occurs somewhere in spacetime. This event is located at position \(x\) relative to the \(O\) system, and position \(x’\) relative to the \(O’\) system. How are these two locations related?
You may be tempted to state that
\[x = x' + ut\]
however, this can’t be correct because \(x\) and \(x’\) are measured in different reference systems. However, imagine that the event is the tip of a meterstick, fixed in \(O’\), striking some object. Since \(x’\) is now a proper length in \(O’\), it will appear contracted in \(O\) by the gamma factor. Therefore, the correct relationship between \(x\) and \(x’\) is
\[ x = \dfrac{x'}{\gamma} + ut\]
rearranging yields
\[ x' = \gamma (x-ut)\]
Since there is no relative motion in the \(y\) and \(z\) directions, these positions are the same in both coordinate systems
\[ y' = y\]
\[ z' = z\]
This completes the spatial part of the Lorentz transformation, but what about the temporal part? To determine how \(t\) and \(t’\) are related, now imagine that the event under investigation is the result of a light pulse, emitted from the origin when the two origins overlapped at time zero, striking some detector. Since the speed of light is the same in both systems, the distance measured in each system must be equal to the product of \(c\) and the elapsed time, so \(x = ct\) and \(x' = ct'\). Substituting these into the spatial transformation (and using \(t = x/c\) in the second step) gives
\[ x' = \gamma (x-ut)\]
\[ ct' = \gamma \left(ct - u \dfrac{x}{c} \right)\]
\[ t' = \gamma \left( t - u \dfrac{x}{c^2} \right)\]
Using the Lorentz Transformation

Inside of a spaceship zooming past earth at \(0.5c\), I fire a laser (in the same direction as the ship’s motion) and let it strike a mirror 10 m in front of the laser. What is the elapsed time measured on the earth between turning on the laser and the light striking the mirror? How far has the light traveled before hitting the mirror, as measured on earth?
Since neither the earth’s observers nor the observers on the ship measure a proper time or a proper length between the two events (turning on the laser and the laser striking the mirror), a more general method of relating different observers’ measurements is needed. This general method of relating measurements is the Lorentz Transformation. The Lorentz Transformation relates the coordinates of a spacetime event, \((x, y, z, t)\), measured in one frame to the coordinates of the same event in a frame moving with relative velocity \(u\), \((x’, y’, z’, t’)\) as follows:
\[ x' = \gamma (x-ut) \nonumber \]
\[ y' = y \nonumber\]
\[ z' = z \nonumber\]
\[ t' = \gamma \left( t - \dfrac{ux}{c^2} \right) \nonumber\]
These equations are written in a form that easily allows the determination of the primed coordinates from the unprimed. If the situation requires the inverse of this task, the equations can be easily inverted (by changing the sign of u and flipping the primed and unprimed notation) to yield
\[ x = \gamma (x'+ut') \]
\[ y = y' \]
\[ z = z' \]
\[ t = \gamma \left( t' + \dfrac{ux'}{c^2} \right) \]
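A quick symbolic check (a sympy sketch, not part of the original text) confirms that composing the forward transformation with its inverse — the same formulas with \(u \to -u\) — returns the original coordinates:

```python
import sympy as sp

x, t, u = sp.symbols("x t u", real=True)
c = sp.symbols("c", positive=True)
gamma = 1 / sp.sqrt(1 - u**2 / c**2)

# Forward transformation: unprimed (x, t) -> primed (x', t')
xp = gamma * (x - u * t)
tp = gamma * (t - u * x / c**2)

# Inverse: same formulas with u -> -u, applied to the primed coordinates
x_back = gamma * (xp + u * tp)
t_back = gamma * (tp + u * xp / c**2)

assert sp.simplify(x_back - x) == 0
assert sp.simplify(t_back - t) == 0
print("round trip OK")
```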
Let the two coordinate systems overlap at the first event (the laser is fired). Thus, the position and time of the laser’s firing is zero in both coordinate systems. We now must find the position and time of the second event (the laser strikes the mirror). This is relatively easy to determine in the frame of the spaceship (the primed frame):
\[x' = 10\,m\]
\[ t' = \dfrac{10\,m}{c} \approx 3.33 \times 10^{-8} s\]
Since we know the spacetime location of the event in the primed frame, the Lorentz Transformation allows us to
transform this information into the earth frame. With \(u = 0.5c\) (\(\gamma=1.155\)),
\[ \begin{align} t &= \gamma \left(t' + \dfrac{ux'}{c^2} \right) \\[5pt] &= 1.155 \left( \left(\dfrac{10}{c} \right) + \dfrac{(0.5c)(10)}{c^2} \right) \\[5pt] &= \dfrac{17.3\,m}{c} = 5.78 \times 10^{-8} s \end{align}\]
and for the x-direction
\[\begin{align} x &= \gamma (x' + ut') \\[5pt] &= 1.155 \left(10 + (0.5c) \left(\dfrac{10}{c} \right) \right) \\[5pt] &= 17.3 \,m\end{align}\]
The light travels \(17.3\, m\) and takes \(5.78 \times 10^{-8}\, s\) to strike the mirror in the earth’s frame. |
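The arithmetic of the worked example can be reproduced numerically (a sketch; the constants are just the example's values):

```python
# A laser fired on a ship moving at u = 0.5c strikes a mirror 10 m ahead
# (ship frame). Transform the strike event into the earth frame.
c = 299_792_458.0                           # speed of light, m/s
u = 0.5 * c
gamma = 1.0 / (1.0 - (u / c) ** 2) ** 0.5   # ~ 1.155

x_prime = 10.0          # m: event position in the ship frame
t_prime = x_prime / c   # s: light-travel time in the ship frame

# Inverse Lorentz transformation (primed -> unprimed, i.e. ship -> earth):
x = gamma * (x_prime + u * t_prime)
t = gamma * (t_prime + u * x_prime / c**2)

print(round(x, 1))        # 17.3 (meters)
print(round(t * 1e8, 2))  # 5.78 (times 10^-8 seconds)
```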
Test your understanding of basic properties of matrix operations.
There are 10 True or False Quiz Problems.
These 10 problems are very common and essential. Make sure you understand them so you don't lose points if any of them appears on your exam. (These are actual exam problems at the Ohio State University.)
You can take the quiz as many times as you like.
The solutions will be given after you complete all 10 problems. Click the View question button to see the solutions.
Determine whether each of the systems of equations (or matrix equations) described below has no solution, a unique solution, or infinitely many solutions, and justify your answer.
(a) \[\left\{\begin{array}{c}ax+by=c \\dx+ey=f,\end{array}\right.\]where $a,b,c,d,e,f$ are scalars satisfying $a/d=b/e=c/f$.
(b) $A \mathbf{x}=\mathbf{0}$, where $A$ is a singular matrix.
(c) A homogeneous system of $3$ equations in $4$ unknowns.
(d) $A\mathbf{x}=\mathbf{b}$, where the row-reduced echelon form of the augmented matrix $[A|\mathbf{b}]$ looks as follows:\[\begin{bmatrix}1 & 0 & -1 & 0 \\0 &1 & 2 & 0 \\0 & 0 & 0 & 1\end{bmatrix}.\]
(The Ohio State University, Linear Algebra Exam)
Determine whether the following sentence is True or False.
Question 1 of 3
True or False. A linear system of four equations in three unknowns is always inconsistent.
Correct
Good! For example, the homogeneous system\[\left\{\begin{array}{c}x+y+z=0 \\2x+2y+2z=0 \\3x+3y+3z=0 \\4x+4y+4z=0\end{array}\right.\]of four equations in three unknowns has the solution $(x,y,z)=(0,0,0)$. So the system is consistent, and the statement is false.
Question 2 of 3
True or False. A linear system with fewer equations than unknowns must have infinitely many solutions.
Correct
Good! For example, consider the system of one equation with two unknowns\[0x+0y=1.\]This system has no solution at all.
Question 3 of 3
True or False. If the system $A\mathbf{x}=\mathbf{b}$ has a unique solution, then $A$ must be a square matrix.
Correct
Good! For example, consider the matrix $A=\begin{bmatrix}1 \\1\end{bmatrix}$. Then the system\[\begin{bmatrix}1 \\1\end{bmatrix}[x]=\begin{bmatrix}0 \\0\end{bmatrix}\]has the unique solution $x=0$ but $A$ is not a square matrix.
Let $\Q$ denote the set of rational numbers (i.e., fractions of integers). Let $V$ denote the set of the form $x+y \sqrt{2}$ where $x,y \in \Q$. You may take for granted that the set $V$ is a vector space over the field $\Q$.
(a) Show that $B=\{1, \sqrt{2}\}$ is a basis for the vector space $V$ over $\Q$.
(b) Let $\alpha=a+b\sqrt{2} \in V$, and let $T_{\alpha}: V \to V$ be the map defined by\[ T_{\alpha}(x+y\sqrt{2}):=(ax+2by)+(ay+bx)\sqrt{2}\in V\]for any $x+y\sqrt{2} \in V$.Show that $T_{\alpha}$ is a linear transformation.
(c) Let $\begin{bmatrix}x \\y\end{bmatrix}_B=x+y \sqrt{2}$.Find the matrix $T_B$ such that\[ T_{\alpha} (x+y \sqrt{2})=\left( T_B\begin{bmatrix}x \\y\end{bmatrix}\right)_B,\]and compute $\det T_B$.
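As a sketch (not the graded solution), sympy can confirm the identity behind parts (b) and (c): $T_{\alpha}$ is just multiplication by $\alpha$ expressed in the basis $B$, and the matrix one computes by hand from the definition is $T_B=\begin{bmatrix}a & 2b\\ b & a\end{bmatrix}$:

```python
import sympy as sp

a, b, x, y = sp.symbols("a b x y")
r2 = sp.sqrt(2)

# T_alpha(x + y*sqrt(2)) as defined, versus multiplication by alpha = a + b*sqrt(2):
alpha = a + b * r2
product = sp.expand(alpha * (x + y * r2))
T_alpha = sp.expand((a * x + 2 * b * y) + (a * y + b * x) * r2)
assert sp.simplify(product - T_alpha) == 0

# Matrix of T_alpha in the basis B = {1, sqrt(2)}, and its determinant:
T_B = sp.Matrix([[a, 2 * b], [b, a]])
assert sp.simplify(T_B.det() - (a**2 - 2 * b**2)) == 0
```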
Find a basis for the subspace $W$ of all vectors in $\R^4$ which are perpendicular to the columns of the matrix\[A=\begin{bmatrix}11 & 12 & 13 & 14 \\21 &22 & 23 & 24 \\31 & 32 & 33 & 34 \\41 & 42 & 43 & 44\end{bmatrix}.\] |
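A hedged sympy sketch for this last problem: vectors perpendicular to the columns of $A$ are exactly the vectors in the null space of $A^{\trans}$ (equivalently, $A^T$), so a basis for $W$ can be computed and checked as follows:

```python
import sympy as sp

A = sp.Matrix([
    [11, 12, 13, 14],
    [21, 22, 23, 24],
    [31, 32, 33, 34],
    [41, 42, 43, 44],
])

# W = null space of A^T: v is in W iff v is orthogonal to every column of A.
basis = A.T.nullspace()
print(len(basis))  # dimension of W (A has rank 2, so this is 4 - 2 = 2)

for v in basis:
    assert A.T * v == sp.zeros(4, 1)
```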