If $M$ is a Riemannian manifold and $f:M\to \mathbb{R}$ a Morse-Smale function (which is just a rigorous way to say "generic smooth function"), then Morse theory essentially recovers the manifold itself from relatively basic information about the gradient flow diffeomorphisms of $f$. To sketch briefly: for each pair of critical points $p$ and $q$ of $f$ (i.e., fixed points of the diffeomorphisms), we can consider the subset $S_{p,q}$ of $M$ that is attracted to $p$ and repelled from $q$ under the diffeomorphisms. ("Repelled from $q$" just means attracted to $q$ under the inverse of the diffeomorphisms.) These $S_{p,q}$ essentially constitute a decomposition of $M$ as a cell complex. If you just want the homology groups, you can get away with considering only pairs of critical points whose indices differ by one, and if you just want the Euler characteristic, you only need local information around each critical point (to define its index). The index of a critical point $p$ is the number of negative eigenvalues of the Hessian (which does not actually depend on the coordinates chosen or even the metric), and it is also the dimension of the submanifold of points in any small neighborhood of $p$ that are attracted to $p$ under the gradient flow diffeomorphisms.
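As a concrete toy illustration of that last point (my own example, not part of the question): the index can be computed as the number of negative Hessian eigenvalues, here by finite differences for the saddle $f(x,y)=x^2-y^2$.

```python
def f(x, y):
    # a saddle: one attracting direction, one repelling direction at (0, 0)
    return x * x - y * y

def hessian(f, x, y, h=1e-4):
    """Finite-difference Hessian of a function of two variables."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fxy, fyy

def index(f, x, y):
    """Morse index: number of negative eigenvalues of the Hessian."""
    fxx, fxy, fyy = hessian(f, x, y)
    tr, det = fxx + fyy, fxx * fyy - fxy * fxy
    disc = max(tr * tr - 4 * det, 0.0) ** 0.5  # guard tiny negative round-off
    eigs = [(tr - disc) / 2, (tr + disc) / 2]
    return sum(1 for e in eigs if e < 0)

print(index(f, 0.0, 0.0))  # 1: a saddle has one negative direction
```

The index is $1$, matching the one-dimensional unstable direction of the saddle.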
I want to know how much of that can be done if we don't have $f:M\to \mathbb{R}$, but just some transformation $F:M\to M$ homotopic to the identity, and if $M$ isn't necessarily even a manifold (but probably compact and metrizable). Given information about the fixed points of $F$ (or other dynamical information?), how much of the topology of $M$ can be recovered? (Can we still try to define the "index" of a fixed point of $F$ by looking at the set of points that are attracted to it as $F$ is iterated?)
Some thoughts:
For some $M$, there might well be maps $F$ that have no fixed points at all. If the Euler characteristic can be recovered from the fixed points of $F$, then such $M$ would have to have an Euler characteristic of zero. (Is that the case??) So the fixed points of $F$ are not very useful in such cases, but are there more general dynamical features of $F$ that relate to the topology of $M$?
Some $M$ might admit perfectly continuous, even smooth, $F$ with chaotic dynamics.
If $F$ has a unique fixed point $x_0$ and for every $x\in M$, $F^n(x)\to x_0$, then $M$ is contractible (recalling our assumption that $F$ is homotopic to the identity).
Can we get better results by considering an even more restrictive class of transformations? Of course, I don't want to go as far as to say that $F$ belongs to some group of gradient flow diffeomorphisms on a manifold, but maybe we can try to relax that by supposing there exists $f:M\to \mathbb{R}$ such that $f\circ F \geq f$. (That condition makes sense even if $M$ is not a manifold.) |
There are infinitely many primes p such that p + 2 is also prime.
Such a pair of prime numbers is called a twin prime. The conjecture has been researched by many number theorists. The majority of mathematicians believe that the conjecture is true, based on numerical evidence and heuristic reasoning involving the probabilistic distribution of primes.
In 1940, Erdős showed that there is a constant c < 1 and infinitely many primes p such that p' - p < c ln(p), where p' denotes the next prime after p. This result was successively improved; in 1986, Maier showed that a constant c < 0.25 can be used.
In 1966, Jing-Run Chen showed that there are infinitely many primes p such that p + 2 is a product of at most two prime factors. The approach he took involved a topic called sieve theory, and he managed to treat the twin prime conjecture and Goldbach's conjecture in similar manners.
There is also a generalization of the twin prime conjecture, known as the Hardy-Littlewood conjecture, which is concerned with the distribution of twin primes, in analogy to the prime number theorem. Let π2(x) denote the number of primes p ≤ x such that p + 2 is also prime. Define the twin prime constant C2 as
<math>C_2 = \prod_{p \ge 3} \left(1 - \frac{1}{(p-1)^2}\right) \approx 0.66016</math>
(here the product extends over all prime numbers p ≥ 3). Then the conjecture is that
<math>\pi_2(x) \sim 2 C_2 \int_2^x {dt \over (\ln t)^2}</math>
in the sense that the quotient of the two expressions tends to 1 as x approaches infinity.
This conjecture can be justified (but not proven) by assuming that 1/ln(t) describes the density function of the prime distribution, an assumption suggested by the prime number theorem. The numerical evidence behind the Hardy-Littlewood conjecture is quite impressive.
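That numerical evidence is easy to reproduce. A quick sketch (the sieve and the midpoint quadrature are my own choices, not part of the article) comparing the actual count π2(x) with the Hardy-Littlewood estimate:

```python
from math import log

def sieve(n):
    """Boolean primality table up to n (sieve of Eratosthenes)."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return is_prime

def pi2(n):
    """Count primes p <= n such that p + 2 is also prime."""
    is_prime = sieve(n + 2)
    return sum(1 for p in range(2, n + 1) if is_prime[p] and is_prime[p + 2])

C2 = 0.6601618158  # twin prime constant, to 10 digits

def hl_estimate(n, steps=100_000):
    """Midpoint-rule approximation of 2*C2 * integral_2^n dt/(ln t)^2."""
    h = (n - 2) / steps
    return 2 * C2 * sum(h / log(2 + (i + 0.5) * h) ** 2 for i in range(steps))

print(pi2(100_000), hl_estimate(100_000))  # the two agree to a few percent
```

For example, π2(100) = 8, counting the pairs (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), (59,61), (71,73).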
I'm trying to evaluate these integrals using Convergence Theorems, but I'm not really sure how to go about it. Here are the integrals:
1. For $\phi$ bounded and continuous, $\psi\in L^1(m)$: $\lim_{n\to\infty}\int_{\mathbb R}\phi(x/n)\psi(x)\,dm(x)$
2. For $\phi$ continuous and compactly supported, $\psi\in L^1(m)$: $\lim_{n\to\infty}\int_{-\infty}^\infty \phi(nx)\psi(x)\,dm(x)$
3. $\lim_{n\to\infty}\int_{-\infty}^\infty \frac{n}{x}\sin(x/n)e^{-\lvert x\rvert}\,dm(x)$
4. $\lim_{n\to\infty}\int_{[0,1]}(1+nx^2)(1+x^2)^{-n}\,dm(x)$
$m$ denotes the Lebesgue measure, and $L^1(m)$ denotes the set of integrable functions with respect to the Lebesgue measure.
I think for the first two I need to find a dominating function for the integrand and then compute the pointwise limit, and for the last two I believe I need to show the integrands are monotone and find their limits, but I haven't been able to come up with dominating functions or proofs that the sequences of functions are monotone.
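For the fourth integral, a quick numerical sanity check (not a proof; the quadrature below is my own) is consistent with dominated convergence: the binomial bound $(1+x^2)^n \ge 1+nx^2$ makes the constant function $1$ a dominating function on $[0,1]$, and the integrand tends to $0$ pointwise for $x>0$, so the limit should be $0$.

```python
def integrand(x, n):
    # (1 + n x^2)(1 + x^2)^(-n), dominated by 1 on [0, 1]
    return (1 + n * x * x) / (1 + x * x) ** n

def integral(n, steps=20_000):
    # midpoint-rule approximation over [0, 1]
    h = 1.0 / steps
    return sum(h * integrand((i + 0.5) * h, n) for i in range(steps))

for n in (1, 10, 100, 1000):
    print(n, integral(n))  # decreases toward 0 as n grows
```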
I would really love to get some help. |
TODO¶
Listed below are plans/directions for the project's next stage.
Demanding¶
- lower search granularity to sector tree.
- merge CONST and VAR tokens.
- different symbol weights: math token > math variable > sub/sup-script.
\( \begin{equation} \left\{ \label{eq154} \begin{array}{ll} \text{Score} &= \sum_t \text{sf}_{t,d} \cdot \text{idf}_{t,d} \\ &\\ \text{sf}_{t,d} &= S_{\text{sy}} \left( (1- \theta) + \theta \frac 1 {\log(1 + \operatorname{leaves}(T_d))} \right) \\ &\\ \text{idf}_{t,d} &= \sum_{p \in \mathfrak{T}( M(t, d) )} \log \frac{N}{\text{df}_p} \end{array} \right. \end{equation} \)
- on-disk math index compression, faster indexer, index-stage init threshold.
- Re-design representation: eliminate the impact of sup/subscripts in some cases, e.g., definite and indefinite integrals, and also primed variables, e.g., x and x'. Be able to differentiate \(\sum_{i=0}^n x_i = x\) and \(\sum_{i=0} x_i^n = x\). Solution: e.g., \sum lifted to an operator, leaving a base to match the variable, with sub/sup-scripts hanging off it.
- boolean query language support (must, should, must-not).
- Field search (index many sources and search the MSE tag, for example).
- [✓] put some large resources on a CDN (jsdelivr.com)
- [✓] Show last update of index and some visit statistics on the homepage.
- [✓] faster TeX rendering using MathJax v3.
- [✓] Increase cache postlist hit chance by caching only long posting lists.
- [✓] scalability: multiple nodes on each core or on different machines (using MPI)
- [✓] re-entrant posting list iterators and MNC scoring.
- [✓] Combined math and text search under new model.
- [✓] Operand match highlight.
- [✓] Wildcard under new model.
- [✓] Prefix model efficiency: MaxScore-like pruning.
- [✓] Path operators hashing to distinguish operator symbols.
Misc¶
- picture input UI on mobile platforms, handwritten input on PC.
- QAC, spelling correction, search suggestion.
- faster Chinese tokenizer.
- Return informative msg on query TeX parse error.
- indexing automation.
- Special posting list for big-number exact match, e.g., "1/2016".
- Semantic math equivalence awareness, e.g. 1+1/n = (1+n)/n.
- Text synonym awareness, e.g. horse = pony.
- Embedding of both text and math, e.g. pythagorean == x^2 + y^2 = z^2
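For reference, a rough Python transcription of the scoring formula above. The names, and the assumption that \(\operatorname{leaves}(T_d)\ge 1\), are mine; this is an illustrative sketch, not the project's actual implementation.

```python
import math

def sf(S_sy, leaves_Td, theta=0.5):
    """Symbol score S_sy damped by the formula-tree size of document d."""
    return S_sy * ((1 - theta) + theta / math.log(1 + leaves_Td))

def idf(df_paths, N):
    """Sum of log(N / df_p) over the matched leaf-root paths p."""
    return sum(math.log(N / df_p) for df_p in df_paths)

def score(matches, N):
    """matches: one tuple (S_sy, leaves(T_d), [df_p, ...]) per query token t."""
    return sum(sf(s, leaves) * idf(dfs, N) for s, leaves, dfs in matches)
```

Here \(S_{\text{sy}}\) is the symbol-level similarity and \(\text{df}_p\) the document frequency of path \(p\); both would come from the index.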
Given that $(ab)^2=(bc)^4=(ca)^x=abc$, what is the value of $x$?
$2(\log a+\log b)=4(\log b+\log c)=x(\log c+\log a)=\log a+\log b+\log c$
Then I am lost; is there any easier way to solve this?
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields.
Taking logarithms gives: $$2(\log a+\log b)=4(\log b+\log c)=x(\log c+\log a)=\log a+\log b+\log c$$
then taking $2(\log a+\log b)=\log a+\log b+\log c$
and $4(\log b+\log c)=\log a+\log b+\log c$
we would get $\log a+\log b-\log c=0$ & $3\log b+3\log c-\log a=0$, and by solving these two equations we get $\log c=-2\log b$ and $\log a=-3\log b$.
Substituting into $x(\log c+\log a)=\log a+\log b+\log c$ then gives $-5x\log b=-4\log b$, so (for $\log b\neq 0$) $x=\frac45$.
Hint: Split these two equalities into
$$(ab)^2 = (ca)^x$$ and $$(bc)^4 = (ca)^x$$
Then use $\log$ on both equations and see what happens ;-)
Let’s try to solve this without logarithms, as simply as possible. This allows us to extend the solution to negative and complex values of $a$, $b$ and $c$, and includes a more interesting set of solutions in the specific case $|b|=1$.
If $abc=0$, then at least two elements of $\{a,b,c\}$ are $0$, and $x$ can take any value, except $0$ if $0^0=1$.
From now on $abc\neq 0$ : $$(ab)^2=abc \Rightarrow c=ab$$ replacing $c$ by its value transforms the equalities into : $$(ab)^2=\left(ab^2\right)^4=\left(a^2b\right)^x$$ The first equality then gives $$a^2b^6=1$$ Replacing $a^2$ by $\frac1{b^6}$, we have then $$\frac1{b^4}=\left(\frac1{b^5}\right)^x \Leftrightarrow b^4=b^{5x}.$$
If $b=1$, this is true for all $x$ and $a=c=\pm1$
Otherwise, if $|b|≠1$, one has $\boxed{x=\frac45}$ and $a=\pm\frac1{b^3}$, $c=\pm\frac1{b^2}$. This is the solution you were looking for.
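As a quick numeric sanity check of this generic solution (a throwaway computation; $b=2$ is an arbitrary choice with $|b|\neq 1$):

```python
b = 2.0
a, c, x = b ** -3, b ** -2, 4 / 5   # the generic solution family above

vals = [(a * b) ** 2, (b * c) ** 4, (c * a) ** x, a * b * c]
print(vals)  # all four quantities equal 1/16
```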
But, for $|b|=1$ and $b≠1$, the solution above is still true, but it is not the only one. Let’s define $β≠0$ by $b=e^{iβ}$. The condition on $x$ then becomes $$5xβ=4β+2kπ, k∈\mathbb Z$$ giving the set of solutions $$x=\frac45+\frac{2kπ}{5β}, k∈\mathbb Z.$$
This includes, for example the non trivial solution where $a=b=i$, $c=-1$ and $x=4$. |
I know this may seem like a homework problem at first, but please bear with me...
In this problem, we have two masses sliding without friction in a horizontal and vertical track, connected by a rigid massless link. Choosing $\theta$ as the single generalized coordinate, we can derive a single equation of motion, e.g. via the Lagrangian method.
The generalized force in context of Lagrangian dynamics is (often) defined as $$\begin{equation}\tag{1}Q_i = \sum_{j=1}^m \frac{\partial \mathbf{r}_j}{\partial q_i}\cdot \mathbf{F}_j\end{equation}$$ where $i$ is the index of the generalized coordinate, $m$ is the number of applied forces and $\mathbf{r}_j$ is the position vector to the $j$th force.
Question-1: Using equation (1), how is the applied force $F$ (or its moment) included as a generalized force when our single coordinate is $q_1=\theta$, and what expression will the generalized force take?
From a course at my university, the generalized force (for 2D systems) is defined as $$\begin{equation}\tag{2}Q_i = \sum_{j=1}^m \frac{\partial \mathbf{r}_j}{\partial q_i}\cdot\mathbf{F}_j+\sum_{j=1}^p\frac{\partial \theta_j}{\partial q_i}M_j\end{equation}$$ where $m,p$ are the numbers of applied forces and torques, respectively.
For this system, choosing $q_1=\theta$, we get $Q_1=M$. The answer should be $Q_1 = FL\sin(\theta)$, which is kind of intuitive (but not quite)
Question-2: Please clarify the $Q=FL\sin(\theta)$ part. Would that also be the case if e.g. $m_2$ had no mass ?
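For what it's worth, equation (1) can be checked symbolically under an assumed geometry (my assumptions, since the figure is not reproduced here): the mass on the horizontal track at $\mathbf r_1=(-L\cos\theta,\,0)$, consistent with the $\tfrac12 m_1 L^2 s_\theta^2\dot\theta^2$ kinetic term, with a horizontal applied force $F$.

```python
import sympy as sp

theta = sp.symbols('theta')
F, L = sp.symbols('F L', positive=True)

# assumed geometry: the horizontal-track mass sits at r1 = (-L cos(theta), 0)
r1 = sp.Matrix([-L * sp.cos(theta), 0])
F_vec = sp.Matrix([F, 0])          # horizontal applied force

# generalized force from equation (1): Q1 = (dr1/dtheta) . F_vec
Q1 = sp.simplify(r1.diff(theta).dot(F_vec))
print(Q1)  # F*L*sin(theta)
```

With this choice of geometry, equation (1) reproduces $Q_1 = FL\sin\theta$ directly, with no separate torque term needed.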
Edit: I have derived the equations of motion to be $$ (m_1L^2 s_{\theta}^2 + m_2 L^2 c_{\theta}^2)\ddot{\theta} - L^2s_{\theta}c_{\theta}(m_1-m_2)\dot{\theta}^2 + m_2gLc_{\theta} = Q$$ where $c_\theta=\cos(\theta)$, $s_\theta=\sin(\theta)$, and with kinetic and potential energies $$E_k=\frac{1}{2}m_1L^2s_{\theta}^2\dot{\theta}^2 + \frac{1}{2}m_2L^2c_{\theta}^2\dot{\theta}^2$$ $$E_p = m_2gLs_{\theta}$$
Perhaps this is flawed, and the force $F$ should be part of the potential ? |
If an electron has spin and volume, then a point on its surface rotates at constant speed, according to the angular momentum defined by the Planck constant. If this electron accelerates, then this point on the surface has to add its rotational velocity to the electron's translational velocity, but this sum must not reach the speed of light, so as not to contradict special relativity. So it seems its spin has to slow down in a reference frame that is not moving with the electron. Is that right?
I think there are two questions that are mixed together here: a question about the nature of quantum-mechanical spin, and a question about how to deal with the relative motion of subsystems when you have a system moving very close to the relativistic limit of $c$.
First, spin. It's very tempting to think of an electron as a "little ball of stuff," because macroscopic bits of matter come in discrete shapes that have surfaces, and because the people who illustrate textbooks feel like they have to have some picture of an electron, and choose a little ball. But those appealing models are wrong. We don't have any evidence that an electron has a surface or an interior, the way that a water droplet or a dust mote or even a nucleus has. (The nucleus is an interesting case because nucleons participate in interactions that electrons ignore, but we won't go there for now.)
The modern picture of an electron is as a quantized disturbance in a "field," where a "field" is a continuous property of spacetime. When one applies conservation laws to interactions with the electron field, it becomes parsimonious to talk about these disturbances as being associated with an intrinsic mass, charge, and angular momentum --- the same mass, charge, and angular momentum that one ascribes to the electron in the little-ball model. But little balls have an intrinsic size parameter, which the electron doesn't appear to.
If that little paragraph doesn't satisfy you, I'm not sure I can do better. Usually we tell people that quantum-mechanical spin is like a spinning little ball, but different, and if people press the issue we sign them up for a graduate class in QFT.
But let's think about the relativity side of your question, too. Here's an example from accelerator physics. Suppose you inject a group of ultra-relativistic electrons into an accelerator. (I like to use the CEBAF accelerator, where the electrons are injected into the accelerator with $\gamma = (1-v^2/c^2)^{-1/2} \approx 100$ and exit with $\gamma \approx 20\,000$.) These electrons are traveling at speeds upwards of $0.9999c$, in bunches that are about 0.3 mm long. (Usually the bunch length is measured in how many picoseconds it takes for a bunch to pass a point on the accelerator.)
The part of this that's relevant to your question is what happens to those little bunches of electrons as they spend a microsecond or so traveling around the accelerator. In their rest frame, the electrons in each little bunch think they are at rest and surrounded by other electrons --- whom they hate, because they all have the same sign of electric charge and repel each other. So without special focusing magnets which accelerate the front and rear of each bunch differently, the electron bunches would spread out as the beam travels: some would go faster or slower than the average, just like the surfaces on your imaginary spinning ball.
How can you have velocity dispersion in a system that's traveling at a speed experimentally indistinguishable from $c$? It works because of relativistic velocity addition. If the bunch is moving at speed $u$ relative to me, and the fastest electron in the bunch is moving at speed $v$ relative to its friends, then my measurement of the speed of the fastest electron in the bunch is
$$ w = \frac{u+v}{1 + uv/c^2} $$
Part of any first course on relativity is playing with this formula to convince yourself that, if the speeds $u$ and $v$ are less than the limit $c$, then so is $w$. You can't constrain an object's motion in its rest frame just by looking at it from a reference frame that's moving too fast. I'm pretty sure that was the main concern in your question.
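To make that concrete, a two-line numeric check (speeds in units of $c$; the numbers are illustrative):

```python
def add_velocities(u, v):
    """Relativistic velocity addition, with u and v in units of c."""
    return (u + v) / (1 + u * v)

w = add_velocities(0.9999, 0.99)
print(w)  # closer to 1 than either input, but still strictly below 1
```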
Presently, in the standard model of particle physics, spin is a necessary adjunct for keeping conservation of angular momentum within the framework of quantum mechanics. The historical survey given by Rob is the way it was discovered, interaction by interaction.
In all particle interactions, angular momentum would be missing where fermions take part (see the table of particles) if a fixed spin were not assigned to the fermions (defined by having half-integral spin). The way spin has been historically assigned conserves angular momentum in all quantum mechanical interactions. This has not been falsified by any data up to now.
Note that conservation laws hold in all inertial frames of special relativity, so the speed the particle is going makes no difference to the value of spin. There is no rotational velocity associated with the spin assignment, it is just a number needed to conserve angular momentum in quantum mechanical frameworks. |
Defining parameters
Level: \( N \) = \( 15 = 3 \cdot 5 \)
Weight: \( k \) = \( 3 \)
Nonzero newspaces: \( 3 \)
Newforms: \( 4 \)
Sturm bound: \(48\)
Trace bound: \(3\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{3}(\Gamma_1(15))\).
Total New Old
Modular forms 24 12 12
Cusp forms 8 8 0
Eisenstein series 16 4 12
Decomposition of \(S_{3}^{\mathrm{new}}(\Gamma_1(15))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label \(\chi\) Newforms Dimension \(\chi\) degree
15.3.c \(\chi_{15}(11, \cdot)\) 15.3.c.a 2 1
15.3.d \(\chi_{15}(14, \cdot)\) 15.3.d.a 1 1
15.3.d \(\chi_{15}(14, \cdot)\) 15.3.d.b 1 1
15.3.f \(\chi_{15}(7, \cdot)\) 15.3.f.a 4 2 |
I know a very similar question has been asked on this site already: Reciprocal lattices. The top answer states:
...The reciprocal lattice is simply the dual of the original lattice. And the dual lattice has a simple visual algorithm.
Given a lattice $L$, for each unit cell of $L$ find the point corresponding to that cell's "center of mass" (see below).
Connect each such "center of mass" to its nearest neighbors.
The resulting lattice is the dual of $L$.
Another explanation of reciprocal space comes from Ashcroft and Mermin's Solid State Physics. On page 86 the authors define the reciprocal lattice as follows:
The set of all wave vectors $\mathbf K$ that yield plane waves with the periodicity of a given Bravais lattice is known as its reciprocal lattice.
This is very confusing. The first answer seems to suggest that the reciprocal space is some sort of useful abstract geometrical construction while the definition from Ashcroft and Mermin seems to imply that the reciprocal space actually results from a physical phenomenon (Diffraction). Which one of them is correct?
Let's suppose I shoot some x-rays at this Bravais lattice: (Source: Wikipedia)
According to the Bragg formulation of x-ray diffraction, for the rays to interfere constructively, the path difference must be an integral number of wavelengths:
$$n \lambda =2d \sin{\theta}$$
Why is perfect constructive interference necessary? Isn't it enough to demand that there shouldn't be perfect destructive interference?
Assume I rotate a detector and an emitter around my crystal. Are the diffraction patterns that I see on my detector the reciprocal lattice? How is the reciprocal lattice linked to the Ewald sphere? Why do we even need to construct the Ewald sphere?
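The two definitions at least agree computationally. A small sketch (a simple cubic lattice is chosen purely for convenience) that builds the reciprocal basis $\mathbf b_i = 2\pi\,\mathbf a_j\times\mathbf a_k/(\mathbf a_1\cdot(\mathbf a_2\times\mathbf a_3))$ and checks the Ashcroft-Mermin plane-wave condition $e^{i\mathbf K\cdot\mathbf R}=1$:

```python
from math import pi

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reciprocal_basis(a1, a2, a3):
    vol = dot(a1, cross(a2, a3))          # direct-cell volume
    return [[2 * pi * c / vol for c in cross(a2, a3)],
            [2 * pi * c / vol for c in cross(a3, a1)],
            [2 * pi * c / vol for c in cross(a1, a2)]]

# simple cubic lattice with unit spacing
a1, a2, a3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
b1, b2, b3 = reciprocal_basis(a1, a2, a3)
# b_i . a_j = 2*pi*delta_ij, so K.R is a multiple of 2*pi for every lattice
# vector R, which is exactly the "plane waves with the periodicity of the
# lattice" condition from Ashcroft and Mermin
```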
I know this is a long question but I really would like to understand this. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Help:Color
{\color{Blue}{x^2}}+{\color{Orange}{2x}}-{\color{LimeGreen}{1}}
x_{1,2}=\frac{{\color{Blue}{-b}}\pm\sqrt{\color{Red}{b^2-4ac}}}{\color{Green}{2a}}
There are several alternative notation styles:
{\color{Blue}x^2}+{\color{Orange}2x}-{\color{LimeGreen}1} (works with both texvc and MathJax)
\color{Blue}x^2\color{Black}+\color{Orange}2x\color{Black}-\color{LimeGreen}1 (works with both texvc and MathJax)
\color{Blue}{x^2}+\color{Orange}{2x}-\color{LimeGreen}{1} (only works with MathJax)
Some color names are predeclared according to the following table; you can use them directly for the rendering of formulas (or for declaring the intended color of the page background).
Note that color should not be used as the only way to identify something, because it will become meaningless on black-and-white media or for color-blind people. See Wikipedia:Manual of Style (accessibility)#Color.
LaTeX does not have a command for setting the background color. The most effective way of setting a background color is to set CSS styling rules for a table cell:
{| class="wikitable" |- | style="background: gray" | <math>\pagecolor{Gray}x^2</math> || style="background: Goldenrod" | <math>\pagecolor{Goldenrod}y^3</math> |}
Rendered as
The \pagecolor{Goldenrod} command is necessary for the Texvc renderer to use the correct anti-aliasing around the edges of the semi-transparent images.
Custom colours can be defined using
\definecolor{myorange}{RGB}{255,165,100}\color{myorange}e^{i \pi}\color{Black} + 1 = 0 |
I'm trying to maximize the probability of a particular outcome occurring subject to a constraint. In particular,
$$\max \prod_{i \leq n} \left(1 - (1 - x_i)^{y_i}\right) \;\;\; \text{ s.t. } \;\;\; 0 \leq x_i \leq 1,\; x_1\cdots x_n = z, \quad n \in \mathbb{N}^+,$$
where $y_i \in \mathbb{N}^+$ and $0 \leq z \leq 1$. The context really isn't important; I'm interested only in a solution to this problem. I've been able to find and prove a solution for the minimum, but I haven't been able to for the maximum.
I highly suspect that the maximum is at $x_1 = \cdots = x_n = z^{(1/n)}$ ($y_i$ fixed for all $i$), but I have not been able to come close to proving this. I'm looking for a proof that either the solution that I have proposed is correct, or incorrect. I'm not necessarily looking for a solution, but it would be welcome. I've been trying to prove this for quite some time and haven't had any luck (proving it false or true). Any advice or suggestions would be greatly appreciated.
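Not a proof, but the conjecture can at least be probed numerically. A throwaway sketch (the helper names are mine) that compares the equal-split point against many random feasible points with the same product:

```python
import random

def objective(xs, ys):
    """The product to maximize: prod_i (1 - (1 - x_i)^{y_i})."""
    p = 1.0
    for x, y in zip(xs, ys):
        p *= 1.0 - (1.0 - x) ** y
    return p

def random_split(n, z):
    """Random x_i in (0, 1) whose product is exactly z (for 0 < z < 1)."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [z ** (wi / s) for wi in w]

random.seed(0)
n, z, ys = 3, 0.3, [2, 3, 5]
equal = [z ** (1 / n)] * n
best_random = max(objective(random_split(n, z), ys) for _ in range(10_000))
print(objective(equal, ys), best_random)  # compare the two candidates
```

Whether the equal split survives such comparisons for unequal $y_i$ is exactly what would need to be proved (or refuted).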
Note (read all first): I've posted this on Math Stack Exchange, but despite the number of views, I haven't received any responses. I'm reposting here (shouldn't do this, I know, but in retrospect I think this question is more relevant here) because I'm beginning to wonder if this problem is in fact much more difficult than I originally thought. It seems to me that someone in the research community would have looked at a similar problem, since this problem, at least to me, seems relatively elementary despite its potential usefulness when calculating outcome probabilities. Barring a solution, is anyone aware of any references that might lead me to a proof or disproof?
In addition, I have included algebraic geometry as a tag because one of the approaches I have looked at is a reduction of this problem by viewing it as the maximization of an $n$-dimensional hyperrectangle. That is to say, given a hyperrectangle with $n$-dimensional volume $x_1\cdots x_n = z$, what side lengths will give the largest $n$-dimensional volume if we set each side length to be $1 - (1 - x_i)^{y_i}$ for fixed $y_i$? From this perspective, I would expect the greatest $n$-dimensional volume increase (and thus the greatest volume) to occur when all sides ($x_i$) have the same length. I don't have much of a background in geometry, though, so I haven't gotten far on this.
Edit.
I mentioned that I was able to prove what the minimum was. My proof was incorrect. Doesn't change this question, but I wanted to make sure the problem description was as accurate as possible. |
Consider the following question in classical mechanics
Are Newton's Second Law, Hamilton's Principle and Lagrange Equations equivalent for particles and system of particles?
If Yes, where can I find a complete proof? Are there certain conditions for this equivalence?
If No, which one is the most general one?
I couldn't find the answer to my question in the books, since there are lots of sentences and no clear conclusion! Or at least I couldn't get it from the books! Maybe the reason is that physics books are not written axiomatically (like mathematics books). The book I focused on was Classical Mechanics by Herbert Goldstein.
\begin{align*} \text{Newton's Second Law},\qquad\qquad &\mathbf{F}_j=m_j\mathbf{a}_j,\qquad j=1,\dots,N \\[0.9em] \text{Lagrange's Equations},\qquad\qquad &\frac{d}{dt}\frac{\partial T}{\partial\dot q_j}-\frac{\partial T}{\partial q_j}=Q_j,\qquad j=1,\dots,M \\ \text{Hamilton's Principle},\qquad\qquad &\delta\int_{t_1}^{t_2}L(q_1,\dots,q_M,\dot q_1,\dots,\dot q_M,t)dt=0 \end{align*}
where $N$ is the number of particles and $M$ is the number of generalized coordinates $q_j$. Interested readers may also read this post. |
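A concrete (if very special) instance of the equivalence one would want in general: for a single particle in a potential, the Euler-Lagrange equation reproduces Newton's second law. A symbolic check for the harmonic oscillator, $L=\tfrac12 m\dot x^2-\tfrac12 kx^2$ (my own worked example, not from Goldstein):

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

# Lagrangian of the 1D harmonic oscillator
L = sp.Rational(1, 2) * m * xdot**2 - sp.Rational(1, 2) * k * x**2

# Euler-Lagrange equation: d/dt(dL/d(xdot)) - dL/dx = 0
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x)
print(sp.simplify(eom))  # m*x'' + k*x, i.e. Newton's law with F = -k*x
```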
Since sufficiency of fixed dimension only occurs in exponential families (Darmois-Pitman-Koopman lemma), apart from distributions with varying support like the Uniform, let us consider an exponential family with parameter $\theta$ and density [against a fixed dominating measure]$$f_\theta(x)=\exp\left\{\sum_{i=1}^k a_i(\theta) T_i(x) -\psi(\theta)\right\}$$ Assuming the functions $a_i$ are linearly independent on the maximal support of $\theta$ (namely, the range of $\theta$'s for which the density is integrable), the model associated with this density can be reparameterised in $\alpha=(\alpha_1,\ldots,\alpha_k)$ which varies in (at least) the parameter space$$A=\left\{\alpha=(\alpha_1,\ldots,\alpha_k);\,\exists\theta\in\Theta, \alpha_i=a_i(\theta) \right\}$$
"At least", since the parameter space can be naturally expanded to its limit$$A=\left\{\alpha=(\alpha_1,\ldots,\alpha_k);\,\displaystyle{\int \exp\left\{\sum_{i=1}^k \alpha_i T_i(x)\right\}\text{d}\lambda(x)}<\infty \right\}\,.$$Assuming further that the functions $T_i$ are also linearly independent over the support $\cal X$ of $X$, the statistic$$T=(T_1(X),\ldots,T_k(X))$$is sufficient, with density$$g_\alpha(t)=\exp\left\{\sum_{i=1}^k \alpha_i t_i-\mu(\alpha)\right\}$$against the appropriate dominating measure. Therefore, on the natural space of the exponential family (which may be larger than the original parameter space), the sufficient statistic is of the same dimension as the parameter. Even though the domain of variation of $T(x)$ can be constrained by non-linear relations, there is a sample size after which the dimensional constraint vanishes.
A completely different approach, avoiding exponential families, is provided by
Edward W. Barankin and Melvin Katz, Jr.
Sufficient Statistics of Minimal Dimension
Sankhyā: The Indian Journal of Statistics
Vol. 21, No. 3/4 (Aug., 1959), pp. 217-246
and they show the following result
where $r$ is the dimension of the sufficient statistic $T$ and $\rho(x^0)$ is the (local) rank of the second derivative of the log-likelihood in $\theta$ and $x$ [the definition is a bit too intricate to be reproduced here]. |
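To illustrate sufficiency concretely (a toy sketch of mine, using the exponential distribution, an exponential family with $k=1$ and $T(x)=\sum_i x_i$): two samples sharing the same value of $T$ have identical likelihood functions, so they are indistinguishable for inference about the parameter.

```python
import math

def loglik_exp(xs, lam):
    """Log-likelihood of an i.i.d. Exponential(lam) sample,
    density f(x) = lam * exp(-lam * x): it equals
    n*log(lam) - lam*sum(xs), so it depends on the data only through sum(xs)."""
    return len(xs) * math.log(lam) - lam * sum(xs)

s1 = [1.0, 2.0, 3.0]   # T = sum = 6
s2 = [0.5, 0.5, 5.0]   # a different sample with the same T = 6
for lam in (0.3, 1.0, 2.5):
    print(loglik_exp(s1, lam) - loglik_exp(s2, lam))  # 0.0 each time
```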
Complex Numbers
Problem 1-1
Let z = 2 - 3 i where i is the imaginary unit. Evaluate z z* , where z* is the conjugate of z , and write the answer in standard form.
Detailed Solution.
Problem 1-2
Evaluate and write in standard form \( \dfrac{1-i}{2-i} \) , where i is the imaginary unit.
Detailed Solution.
Quadratic Equations
Problem 2-1
Find all solutions of the equation \( x(x + 3) = - 5 \).
Detailed Solution.
Problem 2-2
Find all values of the parameter m for which the equation \( -2 x^2 + m x = 2 m \) has complex solutions.
Detailed Solution.
Functions
Problem 3-1
Let \( f(x) = - x^2 + 3(x - 1) \). Evaluate and simplify \( f(a-1)\).
Detailed Solution.
Problem 3-2
Write, in interval notation, the domain of function \(f\) given by \(f(x) = \sqrt{x^2-16} \).
Detailed Solution.
Problem 3-3
Find and write, in interval notation, the range of function \(f\) given by \(f(x) = - x^2 - 2x + 6 \).
Detailed Solution.
Problem 3-4
Let \(f(x) = \sqrt{x - 2} \) and \(g(x) = x^2 + 2 \); evaluate \( (f \circ g)(a - 1) \) for \( a \lt 1 \).
Detailed Solution.
Problem 3-5
Which of the following is a one-to-one function? (There may be more than one answer.) a) \(f(x) = - 2 \) b) \(g(x) = \ln(x^2 - 1) \) c) \(h(x) = |x| + 2 \) d) \(j(x) = 1/x + 2 \) e) \(k(x) = \sin(x) + 2 \) f) \(l(x) = \ln(x - 1) + 1 \)
Detailed Solution.
Problem 3-6
What is the inverse of function f given by \(f(x) = \dfrac{-x+2}{x-1}\)?
Detailed Solution.
Problem 3-7
Classify the following functions as even, odd or neither. a) \(f(x) = - x^3 \) b) \(g(x) = |x|+ 2 \) c) \( h(x) = \ln(x - 1) \)
Detailed Solution.
Problem 3-8
Function \(f \) has one zero only at \(x = -2\). What is the zero of the function \(2f(2x - 5) \)?
Detailed Solution.
Problem 3-9
Which of the following piecewise functions has the graph shown below? a) \( f(x) = \begin{cases}x^2 & \text{if} \; x \ge 0 \\2 & \text{if} \; -2 \lt x \lt 0\\- x + 1& \text{if} \; x \le -2 \end{cases} \) b) \( g(x) = \begin{cases}x^2 & \text{if} \; x \gt 0 \\2 & \text{if} \; -2 \lt x \le 0\\- x + 1& \text{if} \; x \le -2 \end{cases} \) c) \( h(x) = \begin{cases}x^2 & \text{if} \; x \gt 0 \\2 & \text{if} \; -2 \lt x \lt 0\\- x + 1 & \text{if} \; x \lt -2 \end{cases} \)
Detailed Solution.
Problem 3-10
Calculate the average rate of change of function \( f(x) = \dfrac{1}{x} \) as x changes from \( x = a\) to \( x = a + h \).
Detailed Solution.
Polynomials
Problem 4-1
Find the quotient and the remainder of the division \( \dfrac{-x^4+2x^3-x^2+5}{x^2-2} \).
Detailed Solution.
Problem 4-2
Find \( k \) so that the remainder of the division \( \dfrac{4 x^2+2x-3}{2 x + k} \) is equal to \( -1 \)?
Detailed Solution.
Problem 4-3
\( (x - 2) \) is one of the factors of \( p(x) = -2x^4-8x^3+2x^2+32x+24 \). Factor \(p\) completely.
Detailed Solution.
Problem 4-4
Factor \( 16 x^4 - 81 \) completely.
Detailed Solution.
Problem 4-5
Find all solutions to the equation \( (x - 3)(x^2 - 4) = (- x + 3)(x^2 + 2x) \)
Detailed Solution.
Problem 4-6
Solve the inequality \( (x + 2)(x^2-4x-5) \ge (-x - 2)(x+1)(x-3)\)
Detailed Solution.
Problem 4-7
The graph of a polynomial function is shown below. Which of the following functions can possibly have this graph? a) \( y = -(x+2)^5(x-1)^2 \) b) \( y = 0.5(x+2)^3(x-1)^2 \) c) \( y = -0.5(x+2)^3 (x-1)^2 \) d) \( y = -(x+2)^3(x-1)^2 \)
Detailed Solution.
Problem 4-8
Which of the following graphs could possibly be that of the function f given by \( f(x) = k (x - 1)(x^2 + 4) \) where k is a negative constant? Find k if possible.
Detailed Solution.
Rational Expressions, Equations, Inequalities and Functions
Problem 5-1
Write as a single rational expression: \( \dfrac{x^2+3x-5}{(x-1)(x+2)} - \dfrac{2}{x+2} - 1 \).
Detailed Solution.
Problem 5-2
Solve the equation: \( \dfrac{- x^2+5}{x-1} = \dfrac{x-2}{x+2} - 4 \).
Detailed Solution.
Problem 5-3
Solve the inequality: \( \dfrac{1}{x-1}+\dfrac{1}{x+1} \ge \dfrac{3}{x^2-1} \).
Detailed Solution.
Problem 5-4
Find the horizontal and vertical asymptotes of the function: \( y = \dfrac{3x^2}{5 x^2 - 2 x - 7} + 2 \).
Detailed Solution.
Problem 5-5
Which of the following rational functions has an oblique asymptote? Find the point of intersection of the oblique asymptote with the function. a) \( y = -\dfrac{x-1}{x^2+2} \) b) \( y = -\dfrac{x^4-1}{x^2+2} \) c) \( y = -\dfrac{x^3 + 2x ^ 2 -1}{x^2- 2} \) d) \( y = -\dfrac{x^2-1}{x^2+2} \)
Detailed Solution.
Problem 5-6
Which of the following graphs could be that of function \( f(x) = \dfrac{2x-2}{x-1} \)?
Detailed Solution.
Trigonometry and Trigonometric Functions
Problem 6-1
A rotating wheel completes 1000 rotations per minute. Determine the angular speed of the wheel in radians per second.
Detailed Solution.
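For Problem 6-1, the conversion is rotations → radians → seconds: \( 1000 \times 2\pi / 60 = 100\pi/3 \) rad/s. A one-line check:

```python
import math

# 1000 rotations per minute; each rotation is 2*pi radians, one minute is 60 s.
omega = 1000 * 2 * math.pi / 60   # rad/s, exactly 100*pi/3
print(round(omega, 2))            # 104.72
```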
Problem 6-2
Determine the exact value of \( \sec(-11\pi/3) \).
Detailed Solution.
Problem 6-3
Convert 1200° to radians, giving the exact value.
Detailed Solution.
Problem 6-4
Convert \( \dfrac{-7\pi}{9} \) to degrees, giving the exact value.
Detailed Solution.
Problem 6-5
What are the range and the period of the function \( f(x) = -2\sin(-0.5(x - \pi/5)) - 6 \)?
Detailed Solution.
Problem 6-6
Which of the following graphs could be that of the function given by \( y = - \cos(2x - \pi/4) + 2 \)?
Detailed Solution.
Problem 6-7
Find a possible equation of the form \( y = a \sin(b x + c) + d \) for the graph shown below. (There are many possible solutions.)
Detailed Solution.
Problem 6-8
Find the smallest positive value of \( x \), in radians, such that \( - 4 \cos (2x - \pi/4) + 1 = 3 \).
Detailed Solution.
Problem 6-9
Simplify the expression: \( \dfrac{\cot(x)\sin(x) + \cos(x) \sin^2(x)+\cos^3(x)}{\cos(x)} \)
Detailed Solution.
Logarithmic and Exponential Functions
Problem 7-1
Simplify the expression \( \dfrac{4x^2 y^8}{8 x^3 y^5} \) using positive exponents in the final answer.
Detailed Solution.
Problem 7-2
Evaluate the expression \( \dfrac{3^{1/3} 9^{1/3}}{4^{1/2}} \).
Detailed Solution.
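For Problem 7-2, the exponent laws give \( 3^{1/3} \cdot 9^{1/3} = 3^{1/3} \cdot 3^{2/3} = 3 \) and \( 4^{1/2} = 2 \), so the value is \( 3/2 \). A numeric check:

```python
# 3^(1/3) * 9^(1/3) / 4^(1/2) = 3^(1/3 + 2/3) / 2 = 3/2
value = 3 ** (1 / 3) * 9 ** (1 / 3) / 4 ** (1 / 2)
print(round(value, 6))  # 1.5
```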
Problem 7-3
Rewrite the expression \( \log_b(2x - 4) = c \) in exponential form.
Detailed Solution.
Problem 7-4
Simplify the expression: \( \log_a(9) \cdot \log_3(a^2) \)
Detailed Solution.
Problem 7-5
Solve the equation \( \log(x + 1) - \log(x - 1) = 2 \log(x + 1) \).
Detailed Solution.
Problem 7-6
Solve the equation \( e^{2x} + e^x = 6 \).
Detailed Solution.
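For Problem 7-6, the usual route is the substitution \( u = e^x \), which turns the equation into the quadratic \( u^2 + u - 6 = 0 \); the root \( u = 2 \) gives \( x = \ln 2 \) (the root \( u = -3 \) is rejected since \( e^x > 0 \)). A direct check:

```python
import math

# Substitute u = e^x: u^2 + u - 6 = (u + 3)(u - 2) = 0.
a, b, c = 1, 1, -6
roots = [(-b + s * math.sqrt(b * b - 4 * a * c)) / (2 * a) for s in (1, -1)]
u = max(roots)       # the positive root, u = 2
x = math.log(u)      # x = ln 2
print(round(x, 4))   # 0.6931
assert abs(math.exp(2 * x) + math.exp(x) - 6) < 1e-9  # verify in the original equation
```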
Problem 7-7
What is the horizontal asymptote of the graph of \( f(x) = 2 ( - 2 - e^{x-1}) \)?
Detailed Solution.
Problem 7-8
What is the vertical asymptote of the graph of \( f(x) = \log(2x - 6) + 3 \)?
Detailed Solution.
Problem 7-9
Match the given functions with the graph shown below. A) \( y = 2 - 0.5^{2x-1} \) B) \( y = 0.5^{2x-1} \) C) \( y = 2 - 0.5^{-2x+1} \) D) \( y = 0.5^{-2x+1} \)
Detailed Solution.
Problem 7-10
Match the given functions with the graph shown below. A) \( y = 2+\ln(x-2) \) B) \( y=-\log_2(x+1)-1 \) C) \( y = -\ln(-x) \) D) \( y =-\log_3(x+1)-1 \)
Detailed Solution. |
Tokyo Journal of Mathematics, Volume 34, Number 2 (2011), 383-406. Nested Subclasses of the Class of $\alpha$-selfdecomposable Distributions. Abstract
A probability distribution $\mu$ on $\mathbf{R}^d$ is selfdecomposable if its characteristic function $\widehat\mu(z), z\in\mathbf{R}^d$, satisfies that for any $b>1$, there exists an infinitely divisible distribution $\rho_b$ satisfying $\widehat\mu(z) = \widehat\mu(b^{-1}z)\widehat\rho_b(z)$. This concept has been generalized by many authors to $\alpha$-selfdecomposability in the following way. Let $\alpha\in\mathbf{R}$. An infinitely divisible distribution $\mu$ on $\mathbf{R}^d$ is $\alpha$-selfdecomposable if for any $b>1$, there exists an infinitely divisible distribution $\rho_b$ satisfying $\widehat\mu(z) = \widehat\mu(b^{-1}z)^{b^{\alpha}}\widehat\rho_b(z)$. Denoting the class of all $\alpha$-selfdecomposable distributions on $\mathbf{R}^d$ by $L^{(\alpha)}(\mathbf{R}^d)$, we define in this paper a sequence of nested subclasses of $L^{(\alpha)}(\mathbf{R}^d)$, and investigate several of their properties in two ways: one by using limit theorems, the other by using mappings of infinitely divisible distributions.
Article information. Source: Tokyo J. Math., Volume 34, Number 2 (2011), 383-406. First available in Project Euclid: 30 January 2012. Permanent link: https://projecteuclid.org/euclid.tjm/1327931393. DOI: 10.3836/tjm/1327931393. Mathematical Reviews (MathSciNet): MR2918913. Zentralblatt MATH: 1236.60018. Citation:
MAEJIMA, Makoto; UEDA, Yohei. Nested Subclasses of the Class of $\alpha$-selfdecomposable Distributions. Tokyo J. Math. 34 (2011), no. 2, 383--406. doi:10.3836/tjm/1327931393. https://projecteuclid.org/euclid.tjm/1327931393 |
So you have found out $X \in \mathbf{NP}$. Unfortunately, as Juho hints at in their answer, there is no "list" you can systematically go over to try and investigate $X$ further. This is mainly because separation results in complexity theory are (yet) few and far between, so most of the things you may try will not explicitly fail; you'll only be unsuccessful. For example, if you try to show $X \in \mathbf{P}$ you might succeed, but if it turns out to be incorrect it is very unlikely you will be able to prove it (since then $\mathbf{P} \neq \mathbf{NP}$).
This lack of "feedback" means you should tackle $X$ from different perspectives, and it is only experience (and a good deal of luck!) that will give you a "hunch" as to where the most promising one lies. The following are a few options you might consider, as well as the implications of proving or disproving each (where there are any). Note that I am only considering structural complexity theory here (which seems to be what you are interested in), but this is by no means all there is to complexity theory. Depending on the structure of $X$, it is also possible to consider it from a parameterized complexity point of view, or to try to find subexponential algorithms for it (see also number 4 below), and so on.
$X$ is $\mathbf{NP}$-complete
Prove: This is basically the end of the line for investigating $X$. It is also not a very surprising result since the list of $\mathbf{NP}$-complete problems is huge (and it already was when Richard Karp started to compile this list). Disprove: Congratulations! You have proved $\mathbf{P} \neq \mathbf{NP}$.
$X \in \mathbf{P}$
Prove: This is the second most likely outcome, and even less surprising than the former. Disprove: Congratulations! You have proved $\mathbf{P} \neq \mathbf{NP}$.
$X \in \mathbf{coNP}$
Prove: Then $X$ is in $\mathbf{NP} \cap \mathbf{coNP}$ (which, for instance, includes factoring). Reconsider $X \in \mathbf{P}$. Disprove: Congratulations! You have proved $\mathbf{NP} \neq \mathbf{coNP}$.
$X \in \mathbf{GI}$ (the graph isomorphism class)
Prove: Then $X$ admits a quasipolynomial-time algorithm (and is unlikely to be $\mathbf{NP}$-complete). Reconsider $X \in \mathbf{P}$. Disprove: Congratulations! You have proved $\mathbf{P} \subseteq \mathbf{GI} \neq \mathbf{NP}$.
$X \in \mathbf{BPP}$
Prove: Since it is widely suspected that $\mathbf{BPP} = \mathbf{P}$, this is only interesting if $X \in \mathbf{P}$ is not obvious. Consider also $X \in \mathbf{RP}$ or even $X \in \mathbf{ZPP}$. Disprove: Congratulations! You have proved $\mathbf{NP} \neq \mathbf{BPP}$ (and quite likely also $\mathbf{P} \neq \mathbf{NP}$).
And the list goes on and on. As you can see, disproving any of these options leads to separation results, all of which would be major breakthroughs and, thus, unlikely to be proven simply by considering a random problem $X \in \mathbf{NP}$. This means you cannot expect to go through this list from head to tail making ticks and crosses as you go; you have to "guess" which of the options (if any!) is more likely and (with some luck) you'll be able to prove it. |
I am currently reading through my electrodynamics lecture notes and I can't understand a calculation.
$$ E = \frac{1}{2} \int \mathrm{d}^3x \, \phi(\mathbf{x}) \rho(\mathbf{x}) = - \frac{\epsilon_0}{8\pi} \int \mathrm{d}^3x \, \phi(\mathbf{x}) \Delta \phi(\mathbf{x}) \\ = \frac{\epsilon_0}{8\pi} \int \mathrm{d}^3x \, \nabla \phi(\mathbf{x}) \cdot \nabla \phi(\mathbf{x}) = \frac{1}{8\pi} \int \mathrm{d}^3x \, \mathbf{D} \cdot \mathbf{E}$$
I know the identity $\nabla \cdot ( f \nabla g) = \nabla f \cdot \nabla g + f \Delta g$, which I could easily prove by writing out the components and summing. I think I have to use this identity to get from the first line to the second in the equation above, but I don't know why $\int \mathrm{d}^3x \, \nabla \cdot (\phi \nabla \phi) = 0$. I tried using $\nabla \phi = -\mathbf{E}$ and the divergence theorem, but I couldn't show it.
The question is pretty specific and you don't get much insight from it but I would really like some help. |
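For reference, the way this boundary term is usually argued away (assuming, as is standard, a localized charge distribution, so that $\phi \sim 1/r$ and $|\nabla\phi| \sim 1/r^2$ at large distances) is to apply the divergence theorem over a ball of radius $R$ and let $R \to \infty$:

```latex
\int_{r<R} \mathrm{d}^3x \, \nabla \cdot \bigl( \phi \, \nabla \phi \bigr)
  = \oint_{r=R} \phi \, \nabla \phi \cdot \mathrm{d}\mathbf{S}
  \;\sim\; \frac{1}{R} \cdot \frac{1}{R^2} \cdot R^2
  \;=\; \frac{1}{R} \;\xrightarrow{\;R \to \infty\;}\; 0
```

The surface area grows like $R^2$ but the integrand falls like $1/R^3$, so the surface term vanishes in the limit.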
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit. |
Regression (OLS) - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparison by clicking on the dropdown button in the right-hand column. To practice with a specific method, click the button in the bottom row of the table.
Regression (OLS)
$z$ test for the difference between two proportions
Paired sample $t$ test
Sign test
Cochran's Q test
Independent variable(s):
- Regression (OLS): one or more quantitative of interval or ratio level, and/or one or more categorical with independent groups, transformed into code variables
- $z$ test for the difference between two proportions: one categorical with 2 independent groups
- Paired sample $t$ test: 2 paired groups
- Sign test: 2 paired groups
- Cochran's Q test: one within subject factor ($\geq 2$ related groups)

Dependent variable:
- Regression (OLS): one quantitative of interval or ratio level
- $z$ test: one categorical with 2 independent groups
- Paired sample $t$ test: one quantitative of interval or ratio level
- Sign test: one of ordinal level
- Cochran's Q test: one categorical with 2 independent groups

Null hypothesis:
- Regression (OLS), $F$ test for the complete regression model: $\beta_1 = \beta_2 = \ldots = \beta_K = 0$
- $z$ test: $\pi_1 = \pi_2$. Here $\pi_1$ is the unknown proportion of "successes" in population 1 and $\pi_2$ is the unknown proportion of "successes" in population 2
- Paired sample $t$ test: $\mu = \mu_0$. Here $\mu$ is the unknown population mean of the difference scores and $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0
- Cochran's Q test: $\pi_1 = \pi_2 = \ldots = \pi_I$. Here $\pi_1$ is the population proportion of 'successes' in group 1, $\pi_2$ in group 2, and $\pi_I$ in group $I$

Alternative hypothesis:
- Regression (OLS), $F$ test for the complete regression model: not all $\beta_k = 0$
- $z$ test: two sided, $\pi_1 \neq \pi_2$; right sided, $\pi_1 > \pi_2$; left sided, $\pi_1 < \pi_2$
- Paired sample $t$ test: two sided, $\mu \neq \mu_0$; right sided, $\mu > \mu_0$; left sided, $\mu < \mu_0$
- Cochran's Q test: not all population proportions are equal

Assumptions:
- Paired sample $t$ test and sign test: sample of pairs is a simple random sample from the population of pairs; that is, pairs are independent of one another
- Cochran's Q test: sample of 'blocks' (usually the subjects) is a simple random sample from the population; that is, blocks are independent of one another

Test statistic, $F$ test for the complete regression model:
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
$p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$
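The $z$ statistic above can be transcribed directly into code; the counts below are invented purely for illustration.

```python
import math

# Two-proportion z test: pooled proportion in the standard error.

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(30, 100, 20, 100)   # e.g. 30/100 vs 20/100 successes
print(round(z, 3))  # 1.633
```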
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to H0, $s$ is the sample standard deviation of the difference scores, $N$ is the sample size (number of difference scores).
The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$
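A direct transcription of the paired-sample $t$ formula above; the difference scores are invented for illustration.

```python
import math

# t = (ybar - mu0) / (s / sqrt(N)), computed from the difference scores.

def paired_t(diffs, mu0=0.0):
    n = len(diffs)
    ybar = sum(diffs) / n
    s = math.sqrt(sum((d - ybar) ** 2 for d in diffs) / (n - 1))
    return (ybar - mu0) / (s / math.sqrt(n))

t = paired_t([1, 2, 3, 4, 5])
print(round(t, 3))  # 4.243
```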
Sign test: $W = $ number of difference scores that are larger than 0.

Cochran's Q test (if a failure is scored as 0 and a success is scored as 1):
$Q = k(k - 1) \dfrac{\sum_{groups} \Big (\mbox{group total} - \frac{\mbox{grand total}}{k} \Big)^2}{\sum_{blocks} \mbox{block total} \times (k - \mbox{block total})}$
Here $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.
Before computing $Q$, first exclude blocks with equal scores in all $k$ groups
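The recipe above (first drop blocks with equal scores in all $k$ groups, then compute $Q$) can be coded directly; the binary data below are invented for illustration.

```python
# Cochran's Q from binary scores: one row per block (subject),
# one column per group.

def cochran_q(blocks):
    k = len(blocks[0])
    blocks = [b for b in blocks if len(set(b)) > 1]   # exclude constant blocks
    group_totals = [sum(col) for col in zip(*blocks)]
    grand = sum(group_totals)
    num = k * (k - 1) * sum((g - grand / k) ** 2 for g in group_totals)
    den = sum(sum(b) * (k - sum(b)) for b in blocks)
    return num / den

q = cochran_q([[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 0, 0], [1, 1, 0]])
print(q)  # 6.0  (the all-ones block is excluded before computing Q)
```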
Sample standard deviation of the residuals (Regression (OLS) only):

$\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$

Sampling distribution if H0 were true:
- Regression (OLS): sampling distributions of $F$ and of $t$
- $z$ test: approximately standard normal
- Paired sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom
- Sign test: the exact distribution of $W$ under the null hypothesis is the Binomial($n$, $p$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $p = 0.5$
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $np = n \times 0.5$ and standard deviation $\sqrt{np(1-p)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately a standard normal distribution if the null hypothesis were true.
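The large-sample $z$ for the sign test can likewise be computed directly; $W$ and $n$ below are invented for illustration.

```python
import math

# z = (W - n*0.5) / sqrt(n * 0.5 * 0.5), the normal approximation
# to the sign test under the null p = 0.5.

def sign_test_z(w, n):
    mean = n * 0.5
    sd = math.sqrt(n * 0.5 * 0.5)
    return (w - mean) / sd

z = sign_test_z(15, 20)   # 15 positive differences out of 20
print(round(z, 3))  # 2.236
```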
- Cochran's Q test: if the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom

Significant?
- Sign test (two sided): if $n$ is small, the table for the binomial distribution should be used; if $n$ is large, the table for standard normal probabilities can be used
- Cochran's Q test: if the number of blocks is large, the table with critical $X^2$ values can be used, denoting $X^2 = Q$

Confidence intervals:
- Regression (OLS): $C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$
- $z$ test: approximate $C\%$ confidence interval for $\pi_1 - \pi_2$
- Paired sample $t$ test: $C\%$ confidence interval for $\mu$: $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
The confidence interval for $\mu$ can also be used as significance test.
Effect size:
- Paired sample $t$ test: Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$
Equivalent to:
- $z$ test, when testing two sided: chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels
- Paired sample $t$ test: one sample $t$ test on the difference scores; repeated measures ANOVA with one dichotomous within subjects factor
- Two sided sign test: Friedman test, with a categorical dependent variable consisting of two independent groups

Example context:
- Regression (OLS): can mental health be predicted from physical health, economic class, and gender?
- $z$ test: is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
- Paired sample $t$ test: is the average difference between the mental health scores before and after an intervention different from $\mu_0 = 0$?
- Sign test: do people tend to score higher on mental health after a mindfulness course?
- Cochran's Q test: subjects perform three different tasks, which they can either perform correctly or incorrectly. Is there a difference in task performance between the three different tasks?

SPSS:
- Regression (OLS): Analyze > Regression > Linear...
- $z$ test: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Analyze > Descriptive Statistics > Crosstabs...
- Paired sample $t$ test: Analyze > Compare Means > Paired-Samples T Test...
- Sign test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- Cochran's Q test: Analyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...

Jamovi:
- Regression (OLS): Regression > Linear Regression
- $z$ test: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to: Frequencies > Independent Samples - $\chi^2$ test of association
- Paired sample $t$ test: T-Tests > Paired Samples T-Test
- Sign test: Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to: ANOVA > Repeated Measures ANOVA - Friedman
- Cochran's Q test: Jamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to: ANOVA > Repeated Measures ANOVA - Friedman

Practice questions
Difference between revisions of "Group cohomology of elementary abelian group of prime-square order"
(→Over an abelian group)
The homology groups with coefficients in an abelian group <math>M</math> are given as follows:
<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/ pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>
Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
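As a quick sanity check of the formula for <math>M = \mathbb{Z}</math>: there <math>M/pM \cong \mathbb{Z}/p\mathbb{Z}</math> contributes rank 1 and <math>\operatorname{Ann}_M(p) = 0</math> contributes nothing, so the predicted elementary-abelian ranks should match the integer-coefficient table below.

```python
# Rank of H_q((Z/p)^2; Z) as an elementary abelian p-group, per the
# formula above with M = Z (so the Ann_M(p) terms vanish).

def rank_hq(q):
    if q % 2 == 1:
        return (q + 3) // 2   # odd q: (q+3)/2 copies of M/pM
    return q // 2             # even q: q/2 copies of M/pM

ranks = [rank_hq(q) for q in range(1, 6)]
print(ranks)  # [2, 1, 3, 2, 4]
```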
Revision as of 16:08, 24 October 2011

Particular cases

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action): First homology group: first homology group for trivial group action equals tensor product with abelianization. Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization (Hopf's formula for Schur multiplier). General: universal coefficients theorem for group homology; homology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group homology.

Over the integers
The first few homology groups are given below:
<math>q</math>: 0, 1, 2, 3, 4, 5; rank of <math>H_q</math> as an elementary abelian <math>p</math>-group: --, 2, 1, 3, 2, 4 (for <math>q = 0</math>, <math>H_0 \cong \mathbb{Z}</math>)
The homology groups with coefficients in an abelian group <math>M</math> are given as follows:

<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/ pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>

Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
These homology groups can be computed in terms of the homology groups over integers using the universal coefficients theorem for group homology.
Important case types for abelian groups
Case on Conclusion about odd-indexed homology groups, i.e., Conclusion about even-indexed homology groups, i.e., is uniquely -divisible, i.e., every element of can be divided uniquely by . This includes the case that is a field of characteristic not . all zero groups all zero groups is -torsion-free, i.e., no nonzero element of multiplies by to give zero. is -divisible, but not necessarily uniquely so, e.g., , any natural number is a finite abelian group isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of is a finitely generated abelian group all isomorphic to where is the rank for the -Sylow subgroup of the torsion part of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of all isomorphic to where is the rank for the -Sylow subgroup of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of Cohomology groups for trivial group action FACTS TO CHECK AGAINST(cohomology group for trivial group action): First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficientsto homology with coefficients in the integers. |Cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology Over the integers
The cohomology groups with coefficients in the integers are given as below:
The first few cohomology groups are given below:
<math>q</math>: 0, 1, 2, 3, 4, 5; rank of <math>H^q</math> as an elementary abelian <math>p</math>-group: --, 0, 2, 1, 3, 2 (for <math>q = 0</math>, <math>H^0 \cong \mathbb{Z}</math>)
The cohomology groups with coefficients in an abelian group <math>M</math> are given as follows:

Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.
Important case types for abelian groups
Case on Conclusion about odd-indexed cohomology groups, i.e., Conclusion about even-indexed homology groups, i.e., is uniquely -divisible, i.e., every element of can be divided by uniquely. This includes the case that is a field of characteristic not 2. all zero groups all zero groups is -torsion-free, i.e., no nonzero element of multiplies by to give zero. is -divisible, but not necessarily uniquely so, e.g., , any natural number is a finite abelian group isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of is a finitely generated abelian group all isomorphic to where is the rank for the -Sylow subgroup of the torsion part of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of all isomorphic to where is the rank for the -Sylow subgroup of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of |
How does a gauge covariant derivative in a non-abelian field theory act on various quantities which are not valued in the algebra, and why? In particular, how does it act on a scalar valued function $f(x)$ and a matrix-valued function $\chi(x)$ which is not valued in the [representation of the] algebra of the gauge symmetry group, and why?
My understanding is that the gauge covariant derivative acts as follows on $\psi$, which is in the fundamental representation and would be written as a column matrix function, and $\phi$, which is in the adjoint representation and would be written as an $n \times n$ matrix function (let "$\text{Id}$" be the identity matrix, and absorb the coupling constant):
1) $D_{\mu} \psi = \partial_{\mu} \psi + i A_{\mu} \psi$, where $A_{\mu}$ is hermitian matrix-valued.
2) $D_{\mu} \phi = \partial_{\mu} \phi + i [A_{\mu}, \phi]$
So then for a matrix-valued function $\chi$ which is not valued in the matrix representation of the algebra, and a scalar-valued function $f(x)$, I expect:
3) $D_{\mu} \chi = \partial_{\mu} \chi + i A_{\mu} \chi$, using a matrix product in the second term, and
4) $D_{\mu} f = \text{Id} \hspace{.1cm} \partial_{\mu} f$. However, I am wondering if it is instead $D_{\mu} f = \partial_{\mu} (f \hspace{.1cm} \text{Id}) + i A_{\mu} (f \hspace{.1cm} \text{Id})$. |
Your proposal is a type of perpetual motion device which is popularly referred to as a "magnetic motor" (NOT an "electric motor", which is a real thing). Every one of these magnetic motors is impossible because they all presuppose that it is in some way possible to have one permanent magnet "sneak up" on another and then suddenly experience repulsion. The ...
If an alternating voltage is applied at one end of your capacitor, then an alternating current will flow in (and out) of the central conductor at that end, with an opposite current in the outer conductor. The current will drop off to zero at the far end. An alternating magnetic field will be wrapped around the current inside the conductors and in the ...
If you had the hypothetical situation of a constant current travelling around a loop of wire, then any changing magnetic flux through the loop is calculated as $-\oint \vec{E}\cdot d\vec{l}$. For an ideal conductor then $\vec{J} = \sigma \vec{E}$, and the current and electric field are of constant magnitude and in the same direction, which is also the direction ...
I may be missing your point, but current carrying loops produce varying magnetic fields in all kinds of situations: AC motors, transformers, inductors, demagnetizers, and many others. In an electromagnetic wave, the interaction of varying electric and magnetic fields determines the rate of propagation of the wave.
Your question seems to ask whether there is an electromagnetic force between the Sun and the Earth which has an appreciable impact on our orbit. The Earth's magnetic field is very weak locally, barely enough to align a compass needle. The Sun's is about twice as strong, on average, but has intense variations associated with sunspots etc. The effect of the ...
It should be noted that in a real transformer working at the frequencies it is designed for, the second description is correct. I have played with transformers for a few years. I have wound one power transformer and 4 audio transformers myself. From what I have learned and from my experiments, the second description is correct. There is a fact that may or may ...
Your problem statement is not entirely clear, but if the rod is perpendicular to the field, and is moving sideways across the field, then each free electron in the rod will be subject to a force (evB) pushing it toward one end of the rod. They will move until the separation of charge produces an electrostatic force opposing the magnetic force (eE = evB). E ...
If during self-induction the inductor produces a current in the opposite direction of that of the battery in order to resist the change of current through it, then how could the current after some reasonable amount of time reach its peak? If you apply a DC voltage to an ideal inductor, it will never reach its peak current. The current will continue to ...
The whole point is that for a normal conductor Lenz can never win. It is true that at the start there is no current, but there is a finite rate of change of current with time, and so with the passage of time the current will change from its initial zero value. Another way of looking at the situation is that if Lenz did stop the change of current, and ...
Here is some guidance. You can't change the current in an ideal inductor instantaneously (in zero time). Therefore, whatever the current is in the inductor just prior to closing the key in part (a), it will be the same the instant after the key is closed. It will then change over time. Likewise, whatever the current in the inductor is just prior to opening ...
I will only answer the first part of the question since the rest is very much homework-like.When you open the switch, there is an EMF from the inductor because of the changing current, but you simultaneously lose the EMF from the voltage source E. Therefore the total EMF after opening the switch is not greater than before, hence the current does not ...
I found the answer to my question here. In summary, the field winding does not have to provide all the magnetic flux, due to the fact that the current-carrying armature, in turn, creates a magnetic field. So the net magnetic flux becomes $B_{net} = B_{rotor} + B_{stator}$. In an imaginary totally resistive stator, the power balances out without any rotor ...
I'm not sure I understand the nature of your question. Generator efficiency depends on shaft support bearing losses, ohmic losses in all the windings, and flux leakage in the gaps between the pole pieces in the stator and the armature. Efficiency is maximized when these are all minimized.
But, shouldn't we include this in the flux calculation? You can think about this problem like coupled inductors, where the 'voltage across' the 'secondary' (inner loop of wire) is given by $$v_s = k\sqrt{L_1L_2}\frac{di_1}{dt} + L_2\frac{di_2}{dt}$$ where $L_1$ and $L_2$ are the self-inductance of the solenoid and inner loop respectively, and $0\lt k \lt 1$ ...
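The coupled-inductor formula above is easy to sanity-check numerically. A minimal sketch; the component values and the function name are hypothetical, chosen only for illustration:

```python
import math

def secondary_emf(k, L1, L2, di1_dt, di2_dt=0.0):
    """Voltage across the inner loop per the coupled-inductor model:
    v_s = k*sqrt(L1*L2)*di1/dt + L2*di2/dt, with 0 < k < 1."""
    M = k * math.sqrt(L1 * L2)  # mutual inductance between solenoid and loop
    return M * di1_dt + L2 * di2_dt

# Hypothetical values: 10 mH solenoid, 1 uH inner loop, coupling k = 0.9,
# primary current ramping at 100 A/s, no current in the inner loop
v = secondary_emf(k=0.9, L1=10e-3, L2=1e-6, di1_dt=100.0)
```

With these numbers the mutual inductance is $0.9\sqrt{10^{-8}}\,\mathrm{H} = 90\,\mu\mathrm{H}$, so the induced voltage is about 9 mV.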
Why is the flux change which induces the AC current of the primary coil equal to the flux change that the primary coil induces on the secondary coil? Because you're only studying or modeling an ideal transformer. In a real transformer, flux through the secondary is somewhat less than the flux through the primary. Some of the flux produced by the current through ...
I've done some research and spent more time thinking about this. Provide feedback if any of this is inaccurate. Basically, magnets get their magnetism from the magnetic fields of all of the atoms lined up together. The magnetic field is generated by the motion of the electrons in the atom itself, and with a sufficient number of atoms (e.g. quadrillions) ...
Regression (OLS) - overview
This page offers structured overviews of one or more selected methods. Add additional methods for comparisons by clicking on the dropdown button in the right-hand column. To practice with a specific method click the button at the bottom row of the table
Regression (OLS)
$z$ test for the difference between two proportions
Paired sample $t$ test
Sign test
Marginal Homogeneity test / Stuart-Maxwell test
Independent variable(s):
- Regression (OLS): one or more quantitative of interval or ratio level and/or one or more categorical with independent groups, transformed into code variables
- $z$ test for the difference between two proportions: one categorical with 2 independent groups
- Paired sample $t$ test, sign test, Marginal Homogeneity test: 2 paired groups

Dependent variable:
- Regression (OLS): one quantitative of interval or ratio level
- $z$ test: one categorical with 2 independent groups
- Paired sample $t$ test: one quantitative of interval or ratio level
- Sign test: one of ordinal level
- Marginal Homogeneity test: one categorical with $J$ independent groups ($J \geqslant 2$)

Null hypothesis:
- Regression (OLS), $F$ test for the complete regression model: all population regression coefficients are 0
- $z$ test: $\pi_1 = \pi_2$
$\pi_1$ is the unknown proportion of "successes" in population 1; $\pi_2$ is the unknown proportion of "successes" in population 2
$\mu = \mu_0$
$\mu$ is the unknown population mean of the difference scores; $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0
For each category $j$ of the dependent variable:
$\pi_j$ in the first paired group = $\pi_j$ in the second paired group
Here $\pi_j$ is the population proportion for category $j$
Alternative hypothesis:
- Regression (OLS), $F$ test for the complete regression model: not all population regression coefficients are 0
- $z$ test, two sided: $\pi_1 \neq \pi_2$
Right sided: $\pi_1 > \pi_2$
Left sided: $\pi_1 < \pi_2$
Two sided: $\mu \neq \mu_0$
Right sided: $\mu > \mu_0$
Left sided: $\mu < \mu_0$
- Marginal Homogeneity test: for some categories of the dependent variable, $\pi_j$ in the first paired group $\neq$ $\pi_j$ in the second paired group

Assumptions:
- Paired sample $t$ test: population of difference scores can be conceived of as the difference scores we would find if we would apply our study to all individuals in the population
- Sign test: sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
- Marginal Homogeneity test: sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Test statistic:
- Regression (OLS), $F$ test for the complete regression model:
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$
$p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2
Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$
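The $z$ statistic above is straightforward to compute directly from the counts. A minimal sketch (the counts in the example call are made up):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions,
    using the pooled proportion p = (X1 + X2) / (n1 + n2)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 40/100 successes in group 1 vs 25/100 in group 2
z = two_proportion_z(40, 100, 25, 100)  # about 2.26
```

Note that when $p_1 = p_2$ the numerator is zero and $z = 0$, as expected.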
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$
$\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to H0, $s$ is the sample standard deviation of the difference scores, $N$ is the sample size (number of difference scores).
The denominator $s / \sqrt{N}$ is the standard error of the sampling distribution of $\bar{y}$. The $t$ value indicates how many standard errors $\bar{y}$ is removed from $\mu_0$
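The paired $t$ statistic can be computed in a few lines; a sketch with invented difference scores:

```python
import math

def paired_t(diffs, mu0=0.0):
    """t = (ybar - mu0) / (s / sqrt(N)) on a list of difference scores,
    with s the sample standard deviation (N - 1 in the denominator)."""
    n = len(diffs)
    ybar = sum(diffs) / n
    s = math.sqrt(sum((d - ybar) ** 2 for d in diffs) / (n - 1))
    return (ybar - mu0) / (s / math.sqrt(n))

# Hypothetical post-minus-pre scores for six subjects
t = paired_t([2, -1, 3, 0, 4, 1])  # compare against t(5) critical values
```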
- Sign test: $W =$ number of difference scores that is larger than 0
- Marginal Homogeneity test: computing the test statistic is a bit complicated and involves matrix algebra. You probably won't need to calculate it by hand (unless you are following a technical course)

Sample standard deviation of the residuals $s$ (n.a. for the other methods):
- Regression (OLS): $\begin{aligned} s &= \sqrt{\dfrac{\sum (y_j - \hat{y}_j)^2}{N - K - 1}}\\ &= \sqrt{\dfrac{\mbox{sum of squares error}}{\mbox{degrees of freedom error}}}\\ &= \sqrt{\mbox{mean square error}} \end{aligned}$

Sampling distribution of the test statistic if H0 were true:
- Regression (OLS): sampling distribution of $F$ and of $t$
- $z$ test: approximately standard normal
- Paired sample $t$ test: $t$ distribution with $N - 1$ degrees of freedom
- Sign test: the exact distribution of $W$ under the null hypothesis is the Binomial($n$, $p$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $p = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $np = n \times 0.5$ and standard deviation $\sqrt{np(1-p)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately a standard normal distribution if the null hypothesis were true.
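The large-sample standardization above takes only a few lines; a sketch with made-up difference scores (zeros are dropped, as is conventional for the sign test):

```python
import math

def sign_test_z(diffs):
    """Large-sample sign test: W = number of positive differences,
    z = (W - n*0.5) / sqrt(n*0.5*(1 - 0.5)), zeros excluded from n."""
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    n = pos + neg
    return (pos - n * 0.5) / math.sqrt(n * 0.25)

# Hypothetical difference scores: 8 positive, 2 negative, n = 10
z = sign_test_z([2, -1, 3, 0.5, 4, 1, -2, 5, 1, 3])
```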
- Marginal Homogeneity test: approximately a chi-squared distribution with $J - 1$ degrees of freedom

Significant?
- Sign test: if $n$ is small, the table for the binomial distribution should be used:
Two sided:
If $n$ is large, the table for standard normal probabilities can be used:
Two sided:
If we denote the test statistic as $X^2$:

Confidence interval:
- Regression (OLS): $C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$
- $z$ test: approximate $C\%$ confidence interval for $\pi_1 - \pi_2$
- Paired sample $t$ test: $C\%$ confidence interval for $\mu$, regular (large sample): $\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$
where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
The confidence interval for $\mu$ can also be used as significance test.
Effect size:
- Paired sample $t$ test, Cohen's $d$:
Standardized difference between the sample mean of the difference scores and $\mu_0$: $$d = \frac{\bar{y} - \mu_0}{s}$$ Indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$
Equivalent to:
- $z$ test (when testing two sided): chi-squared test for the relationship between two categorical variables, where both categorical variables have 2 levels
- Paired sample $t$ test: one sample $t$ test on the difference scores
Repeated measures ANOVA with one dichotomous within subjects factor
Two sided sign test is equivalent to -

Example context:
- Regression (OLS): can mental health be predicted from physical health, economic class, and gender?
- $z$ test: is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
- Paired sample $t$ test: is the average difference between the mental health scores before and after an intervention different from $\mu_0$ = 0?
- Sign test: do people tend to score higher on mental health after a mindfulness course?
- Marginal Homogeneity test: subjects are asked to taste three different types of mayonnaise, and to indicate which of the three types they like best. They then have to drink a glass of beer, and taste and rate the three types of mayonnaise again. Does drinking a beer change which type of mayonnaise people like best?

SPSS:
- Regression (OLS): Analyze > Regression > Linear...
- $z$ test: SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
- Paired sample $t$ test: Analyze > Compare Means > Paired-Samples T Test...
- Sign test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
- Marginal Homogeneity test: Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...

Jamovi:
- Regression (OLS): Regression > Linear Regression
- $z$ test: Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
- Paired sample $t$ test: T-Tests > Paired Samples T-Test
- Sign test: Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Practice questions
Frequency Domain View of Upsampling
Why Interpolator needs a LPF after Upsampling
Outline: Background, Introduction, Derivation, Example, Conclusion

Background
$ {f}_{s} $ = sampling frequency (number of samples/second) Hz
$ {T}_{s} $ = sampling period (number of seconds/sample), in seconds

$ {f}_{s} = {\frac{1}{{T}_{s}}} $

Sampling above the Nyquist rate guarantees that a bandlimited CT signal can be reconstructed from its samples. **add source**

Define the Nyquist sampling rate as $ {f}_{Nyquist} = 2{f}_{M} $, where $ {f}_{M} $ is the max frequency of the CT signal
Introduction
Sampling at frequencies much larger than Nyquist allows reconstruction with a filter that has a less sharp cutoff. A digital LPF can then be used to obtain the reconstructed signal. *add source*
Assume $ {x}_{c}(t) $ is a bandlimited CT signal and $ {x}_{1}[n] $ is a DT signal sampled from $ {x}_{c}(t) $ with sampling period $ {T}_{1} $. This leads to the question: can you use
$ {x}_{1}[n] = x_{c}(n{T}_{1}) $
to obtain
$ {x}_{u}[n] = {x}_{c}(n{T}_{u}) $, a signal sampled at a HIGHER sampling frequency than $ {x}_{1}[n] $, without having to fully reconstruct $ {x}_{c}(t) $?
Derivation
We want $ {f}_{u} > {f}_{Nyquist} $. In this situation, this means $ {f}_{u} > {f}_{1} $.
Therefore, we want $ {T}_{u} < {T}_{1} $ (i.e. $ {x}_{u}[n] $ is sampled at a higher frequency than $ {x}_{1}[n] $). In other words, $ {T}_{u} = {\frac{{T}_{1}}{D}} $ for some integer $D$.

$ {x}_{1}[n] = x_{c}(n{T}_{1}) $, $ {x}_{u}[n] = {x}_{c}(n{T}_{u}) $
$ {x}_{u}[n] = {x}_{1}[n/D] = {x}_{c}(n{T}_{1}({T}_{u}/{T}_{1})) = {x}_{c}(n{T}_{u}) $ if $n/D$ is an integer,
$ {x}_{u}[n] = 0 $ otherwise.

In the frequency domain:

$ {X}_{u}({\omega}) = {\sum_{n = -{\infty}}^{\infty} {x}_{u}[n]e^{-j{\omega}n}} $

$ {X}_{u}({\omega}) = {\sum_{n = -{\infty}}^{\infty} {x}_{1}[n/D]e^{-j{\omega}n}} $

Let $n = mD$:

$ {X}_{u}({\omega}) = {\sum_{m = -{\infty}}^{\infty} {x}_{1}[m]e^{-j{\omega}mD}} $

$ {X}_{u}({\omega}) = {X}_{1}(D{\omega}) $; notice this is a rescaled version of $ {X}_{1} $.

In order to get $ {x}_{int}[n] $, the reconstructed signal, we need to LPF $ {X}_{u}({\omega}) $:

$ {x}_{int}[n] = ({x}_{u} * h)[n] $, with $ h[n] = sinc(n/D) $

Example
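The zero-insertion and sinc-LPF steps above can be sketched in pure Python. This is an illustration, not the course's example: the ideal (infinite) sinc interpolator is approximated here by a truncated FIR filter, and the function names are my own:

```python
import math

def upsample(x1, D):
    """Insert D-1 zeros between samples: x_u[n] = x1[n/D] if D divides n, else 0."""
    xu = [0.0] * (len(x1) * D)
    for m, v in enumerate(x1):
        xu[m * D] = v
    return xu

def sinc_interp(xu, D, taps=41):
    """LPF with a truncated version of the ideal interpolator h[n] = sinc(n/D)."""
    half = taps // 2
    h = [1.0 if n == 0 else math.sin(math.pi * n / D) / (math.pi * n / D)
         for n in range(-half, half + 1)]
    out = []
    for n in range(len(xu)):
        acc = 0.0
        for k, hk in enumerate(h):
            idx = n - (k - half)  # convolution sum over tap offsets k - half
            if 0 <= idx < len(xu):
                acc += hk * xu[idx]
        out.append(acc)
    return out
```

Because $h[n]$ is zero at every nonzero multiple of $D$, the interpolated signal passes exactly through the original samples at indices $mD$; the filter fills in the zero-stuffed positions.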
Source: Prof. Mireille Boutin
Say we have used the TFIDF transform to encode documents into continuous-valued features.
How would we now use this as input to a Naive Bayes classifier?
Bernoulli naive-bayes is out, because our features aren't binary anymore.
Seems like we can't use Multinomial naive-bayes either, because the values are continuous rather than categorical.
As an alternative, would it be appropriate to use gaussian naive bayes instead? Are TFIDF vectors likely to hold up well under the gaussian-distribution assumption?
The scikit-learn documentation for MultinomialNB suggests the following:
The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.
Isn't it fundamentally impossible to use fractional values for MultinomialNB?
As I understand it, the likelihood function itself assumes that we are dealing with discrete-counts:
(From Wikipedia):
${\displaystyle p(\mathbf {x} \mid C_{k})={\frac {(\sum _{i}x_{i})!}{\prod _{i}x_{i}!}}\prod _{i}{p_{ki}}^{x_{i}}}$
How would TFIDF values even work with this formula, since the $x_i$ values are all required to be discrete counts?
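One way to see why fractional counts "work" in practice: the multinomial coefficient $\frac{(\sum_i x_i)!}{\prod_i x_i!}$ does not depend on the class, so it cancels when comparing classes, and the remaining log-likelihood term $\sum_i x_i \log p_{ki}$ is perfectly well-defined for fractional $x_i$ such as tf-idf weights (even though the probabilistic interpretation as a multinomial likelihood no longer strictly holds). A sketch with invented parameters:

```python
import math

def mnb_class_scores(x, class_log_priors, class_log_probs):
    """Multinomial NB decision scores: log prior + sum_i x_i * log p_ki.
    The multinomial coefficient is identical across classes, so it is
    dropped; fractional x_i (e.g. tf-idf) are accepted as-is."""
    return [lp + sum(xi * lpi for xi, lpi in zip(x, log_p))
            for lp, log_p in zip(class_log_priors, class_log_probs)]

# Hypothetical 2-class, 3-feature model with a tf-idf-like fractional input
log_priors = [math.log(0.5), math.log(0.5)]
log_probs = [[math.log(0.7), math.log(0.2), math.log(0.1)],
             [math.log(0.1), math.log(0.3), math.log(0.6)]]
scores = mnb_class_scores([1.5, 0.0, 2.3], log_priors, log_probs)
best = max(range(2), key=lambda k: scores[k])
```

The heavy weight on feature 3 pulls the decision toward class 1 here, exactly as integer counts would, just with continuous weighting.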
$$\sum_{i=1}^n \frac1{4i-1}$$
I know I have to integrate the function, but I don't know what bounds to use to find the lower and upper bound.
It sounds like you may be asking for a representation of $$\sum_{i=1}^n \frac{1}{4i-1}$$ as a Riemann sum estimate of two different integrals, one of which underestimates the true value of the integral (giving you an upper bound on your sum), and the other of which overestimates the true value of the integral (giving you a lower bound on your sum).
The right-hand Riemann sum approximation of $$\int_0^n \frac{1}{4x-1} dx$$ with $\Delta x = 1$ is $\sum_{i=1}^n \frac{1}{4i-1}$ and will underestimate the integral. However, the integrand is undefined at $x = 1/4$, and so you're better off looking at $$\frac{1}{3} + \sum_{i=2}^n \frac{1}{4i-1}$$ as an underestimate of $$\frac{1}{3} + \int_1^n \frac{1}{4x-1} dx.$$
Similarly, the left-hand Riemann sum approximation of $$\int_0^n \frac{1}{4(x+1)-1} dx$$ with $\Delta x = 1$ is $\sum_{i=1}^n \frac{1}{4i-1}$ and will overestimate the integral.
Drawing the Riemann sum approximations and the graphs of the functions will help to see this. Thus after putting it all together, we have
$$\int_0^n \frac{1}{4x+3} dx < \sum_{i=1}^n \frac{1}{4i-1} \leq \frac{1}{3} + \int_1^n \frac{1}{4x-1} dx,$$ with equality holding in the second case only when $n=1$.
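The sandwich bound above is easy to verify numerically. A sketch (the choice of $n$ is arbitrary) using the closed forms $\int_0^n \frac{dx}{4x+3} = \frac14\ln\frac{4n+3}{3}$ and $\int_1^n \frac{dx}{4x-1} = \frac14\ln\frac{4n-1}{3}$:

```python
import math

def harmonic_like(n):
    """The partial sum sum_{i=1}^n 1/(4i - 1)."""
    return sum(1.0 / (4 * i - 1) for i in range(1, n + 1))

def lower(n):
    """Integral of 1/(4x + 3) from 0 to n: (1/4) ln((4n + 3)/3)."""
    return 0.25 * math.log((4 * n + 3) / 3.0)

def upper(n):
    """1/3 plus the integral of 1/(4x - 1) from 1 to n."""
    return 1.0 / 3 + 0.25 * math.log((4 * n - 1) / 3.0)
```

For $n = 1$ the upper bound holds with equality (both sides equal $1/3$), matching the remark above.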
$\sum_{i=1}^n \frac{1}{4i - 1} = \frac{1}{3} + \sum_{i=2}^n \frac{1}{4i-1}.$
Now apply the usual technique for obtaining a lower bound on the sum on the right.
You can write $$\frac{1}{4i-1} = \frac{1}{4i} + \frac{1}{4i(4i-1)}.$$ Therefore $$\sum_{i=1}^n \frac{1}{4i-1} = \frac{1}{4} \sum_{i=1}^n \frac{1}{i} + \sum_{i=1}^n \frac{1}{4i(4i-1)}.$$ The second term is bounded (by comparison with the convergent infinite series $\sum_{i=1}^\infty 1/i^2$), and the first term is familiar.
Bathe's Finite Element Procedures shows the "nonlinear strain stiffness matrix" for a 2D truss element as $$\frac {^tP} {L_0 + \Delta L}\left[ \begin{array}{cccc}1 & 0 & -1 & 0 \\0 & 1 & 0 & -1 \\-1 & 0 & 1 & 0 \\0 & -1 & 0 & 1 \\ \end{array} \right]$$
But other sources, such as this http://people.duke.edu/~hpgavin/cee421/truss-finite-def.pdf and the ANSYS theory manual, omit rows 1 and 3 (axial direction):
$$ \frac {^tP} {L_0} \left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 \\ \end{array} \right] $$
The extra 1's don't seem to be physically correct according to my intuition. They say the element becomes stiffer when it's under tension. That suggests the force/displacement relationship is not linear in a seemingly arbitrary way.
Bathe does say "Note that if the material stress-strain relationship is such that $^tP$ is constant with changes in $\Delta L$, [then something that seems to lead to the extra 1's being omitted]"
Perhaps it's using a generalization of Hooke's law which is nonlinear but doesn't have any extra parameters?
My experiments with nonlinear solid elements in Calculix show the same behavior as these truss elements with the extra 1's present.
EDIT: Using the full diagonal matrix with pre-stressed frequency analysis leads to what seems like an incorrect result: a 1D spring-mass system's natural frequency increases with tension on the spring. In other words, the common physics result that gravity does not influence the motion described here is violated. https://share.ehs.uen.org/sites/default/files/Unit02Lesson2.pdf The same problem occurs with solid elements in Calculix. Are they really wrong?
EDIT 2: Example showing that the full-diagonal matrix does not work for pre-stressed frequency analysis. It's a 1D spring with length 1, spring constant k, mass m concentrated at each node and no constraints.
The natural frequency should be independent of P. We can see this is true when using the 2nd geometric stiffness matrix above by solving the eigenvalue equation $$ (\textbf K_e + \textbf K_g - \omega^2 \textbf M) \textbf u = 0 $$ I form the 1D matrices by taking only the 1st and 3rd rows and columns from the 2D matrices above.
$ \textbf K_e = \left[ \begin{array}{rr} k & -k \\ -k & k \\ \end{array} \right] $ $ \textbf K_g = \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \\ \end{array} \right] $ $ \textbf M = \left[ \begin{array}{rr} m & 0 \\ 0 & m \\ \end{array} \right] $
which has the non-zero eigenvalue $ \omega^2=\frac{2k}{m} $. This is consistent with a simple spring-mass oscillator. The factor of 2 appears because it's really two 1-dof oscillators with half the spring each.
Now, if we use the first geometric stiffness matrix above, with $ \Delta L $ approximately 0, we get the wrong answer: $$ (\textbf K_e + \textbf K_g - \omega^2 \textbf M) \textbf u = 0 $$ $ \textbf K_e = \left[ \begin{array}{rr} k & -k \\ -k & k \\ \end{array} \right] $ $ \textbf K_g = \left[ \begin{array}{rr} P & -P \\ -P & P \\ \end{array} \right] $ $ \textbf M = \left[ \begin{array}{rr} m & 0 \\ 0 & m \\ \end{array} \right] $
This has a non-zero eigenvalue of $ \omega^2=\frac{2(k+P)}{m} $. It's apparently wrong because it's a function of P but it should be independent of P. It shows that a spring is stiffened by increased tension. Conversely, if P is negative, it also shows that compression softens it and it even becomes unstable when P=-k. This indicates axial buckling (not Euler column buckling) under compression, which seems to be unphysical. Even if $\Delta L$ is not zero, it can't make $\textbf K_g$ zero, so there will always be some discrepancy.
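The two eigenvalue problems above can be checked numerically; a sketch using NumPy, with arbitrary values of $k$, $m$, $P$ (the function name is mine):

```python
import numpy as np

def natural_freq_sq(k, m, P, full_diagonal=True):
    """Largest generalized eigenvalue omega^2 of (Ke + Kg) u = omega^2 M u
    for the unconstrained two-node 1D spring, with either form of Kg."""
    Ke = np.array([[k, -k], [-k, k]], dtype=float)
    if full_diagonal:
        Kg = np.array([[P, -P], [-P, P]], dtype=float)  # Bathe-style axial terms
    else:
        Kg = np.zeros((2, 2))  # axial rows/columns zeroed out
    M = np.diag([m, m]).astype(float)
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ (Ke + Kg))
    return max(w2.real)

# Full-diagonal Kg: nonzero eigenvalue is 2(k + P)/m, i.e. P-dependent...
w2_full = natural_freq_sq(k=100.0, m=2.0, P=10.0, full_diagonal=True)
# ...while the zeroed form gives 2k/m, independent of the preload P
w2_zero = natural_freq_sq(k=100.0, m=2.0, P=10.0, full_diagonal=False)
```

This reproduces the discrepancy described in the question: with $k = 100$, $m = 2$, $P = 10$, the full-diagonal form gives $\omega^2 = 110$ while the zeroed form gives $\omega^2 = 100$.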
So I'm wondering if the matrix with the full diagonal above is really correct. At least it appears to be wrong when used for frequency or linear buckling analysis. Solid elements in Calculix follow this "incorrect" behavior for frequency analysis, which adds to the confusion.
I start with three independent random variables, $X_1, X_2, X_3$. They are each normally distributed with:
$$X_i \sim N(\mu_i, \sigma^2), i = 1, 2, 3.$$
I then have three transformations,
$$\eqalign{ Y_1 &= -X_1/\sqrt{2} + X_2/\sqrt{2} \cr Y_2 &= -X_1/\sqrt{3} - X_2/\sqrt{3} + X_3/\sqrt{3} \cr Y_3 &= X_1/\sqrt{6} + X_2/\sqrt{6} + 2X_3 / \sqrt{6} \cr }$$
I am supposed to show that when $\mu_i = 0,$ $i = 1, 2, 3,$ $(Y_1^2 + Y_2^2 + Y_3^2)/\sigma^2 \sim \chi^2(3)$. I have also shown the transformations to preserve the independence, as the transformation matrix is orthogonal.
I have already shown that the expectations of $Y_1, Y_2, Y_3$ are 0 and their variances are all the same. Using the normal pdf, I have shown that:
$$Y_i^2 \sim \frac{1}{2\pi\sigma^2} \exp(-2x^2 / 2\sigma^2).$$
I thought about applying a substitution of $z = 2x^2 / \sigma^2$ to get the exponent into a similar form as the chi-square's $\exp(-x/2)$ form, but I'm stuck on what to do with the constants outside to get them to look similar. Could someone offer a hand?
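As a quick numerical check of the setup (not a proof), one can verify that the transformation matrix is orthogonal, which is what makes $\sum Y_i^2 = \sum X_i^2$, and then simulate the statistic to see its mean approach $E[\chi^2(3)] = 3$. A sketch with an arbitrary $\sigma$ and sample count:

```python
import math, random

# The transformation matrix read off from the three Y definitions
A = [[-1/math.sqrt(2),  1/math.sqrt(2),  0.0],
     [-1/math.sqrt(3), -1/math.sqrt(3),  1/math.sqrt(3)],
     [ 1/math.sqrt(6),  1/math.sqrt(6),  2/math.sqrt(6)]]

def is_orthogonal(A, tol=1e-12):
    """Check A A^T = I; orthogonality preserves the sum of squares."""
    for i in range(3):
        for j in range(3):
            dot = sum(A[i][k] * A[j][k] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

random.seed(0)
sigma, N = 2.0, 20000
total = 0.0
for _ in range(N):
    x = [random.gauss(0.0, sigma) for _ in range(3)]
    y = [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]
    total += sum(v * v for v in y) / sigma**2
mean_stat = total / N  # should be close to E[chi2(3)] = 3
```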
Regression (OLS)
$z$ test for the difference between two proportions
$\pi_1 = \pi_2$

$\pi_1$ is the unknown proportion of "successes" in population 1; $\pi_2$ is the unknown proportion of "successes" in population 2

$\mu = \mu_0$

$\mu$ is the unknown population mean of the difference scores; $\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0
P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
The median of the difference scores is zero in the population
$\pi = \pi_0$

$\pi$ is the population proportion of "successes"; $\pi_0$ is the population proportion of successes according to the null hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
Alternative hypothesis
$F$ test for the complete regression model:
not all population regression coefficients are 0, or equivalently:
The variance explained by all the independent variables together (the complete model) is larger than 0 in the population: $\rho^2 > 0$
$t$ test for individual $\beta_k$:
Two sided: $\beta_k \neq 0$
Right sided: $\beta_k > 0$
Left sided: $\beta_k < 0$
Two sided: $\pi_1 \neq \pi_2$
Right sided: $\pi_1 > \pi_2$
Left sided: $\pi_1 < \pi_2$

Two sided: $\mu \neq \mu_0$
Right sided: $\mu > \mu_0$
Left sided: $\mu < \mu_0$
Two sided: P(first score of a pair exceeds second score of a pair) $\neq$ P(second score of a pair exceeds first score of a pair)
Right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)
Left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)
If the dependent variable is measured on a continuous scale, this can also be formulated as:
Two sided: the median of the difference scores is different from zero in the population
Right sided: the median of the difference scores is larger than zero in the population
Left sided: the median of the difference scores is smaller than zero in the population
Two sided: $\pi \neq \pi_0$
Right sided: $\pi > \pi_0$
Left sided: $\pi < \pi_0$
Assumptions
Assumptions
Assumptions
Assumptions
Assumptions
In the population, the residuals are normally distributed at each combination of values of the independent variables
In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)
In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables
The residuals are independent of one another
Often ignored additional assumption:
Variables are measured without error
Also pay attention to:
Multicollinearity
Outliers
Sample size is large enough for $z$ to be approximately normally distributed. Rule of thumb:
Significance test: number of successes and number of failures are each 5 or more in both sample groups
Regular (large sample) 90%, 95%, or 99% confidence interval: number of successes and number of failures are each 10 or more in both sample groups
Plus four 90%, 95%, or 99% confidence interval: sample sizes of both groups are 5 or more
Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Difference scores are normally distributed in the population
Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another
Population of difference scores can be conceived of as the difference scores we would find if we would apply our study (e.g., applying an intervention and measuring pre-post scores) to all individuals in the population.
Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Sample is a simple random sample from the population. That is, observations are independent of one another
Test statistic
Test statistic
Test statistic
Test statistic
Test statistic
$F$ test for the complete regression model:
$\begin{aligned}[t]F &= \dfrac{\sum (\hat{y}_j - \bar{y})^2 / K}{\sum (y_j - \hat{y}_j)^2 / (N - K - 1)}\\&= \dfrac{\mbox{sum of squares model} / \mbox{degrees of freedom model}}{\mbox{sum of squares error} / \mbox{degrees of freedom error}}\\&= \dfrac{\mbox{mean square model}}{\mbox{mean square error}}\end{aligned}$

where $\hat{y}_j$ is the predicted score on the dependent variable $y$ of subject $j$, $\bar{y}$ is the mean of $y$, $y_j$ is the score on $y$ of subject $j$, $N$ is the total sample size, and $K$ is the number of independent variables
$t$ test for individual $\beta_k$:
$t = \dfrac{b_k}{SE_{b_k}}$
If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$, with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ becomes complicated
Note 1: mean square model is also known as mean square regression; mean square error is also known as mean square residual

Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
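The $F$ statistic can be computed directly from the sums of squares. A one-predictor sketch ($K = 1$, so df model $= 1$ and df error $= N - 2$) with made-up data:

```python
def ols_f_test(x, y):
    """F = (SS_model / K) / (SS_error / (N - K - 1)) for simple OLS, K = 1."""
    n = len(y)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # Least-squares slope and intercept
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    b0 = ybar - b1 * xbar
    yhat = [b0 + b1 * xi for xi in x]
    ss_model = sum((yh - ybar) ** 2 for yh in yhat)
    ss_error = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    return (ss_model / 1) / (ss_error / (n - 2))

# Hypothetical data lying close to a straight line, so F is large
F = ols_f_test([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
```

With $K = 1$ this $F$ equals the square of the $t$ statistic for $\beta_1$, consistent with Note 2 above.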
$z = \dfrac{p_1 - p_2}{\sqrt{p(1 - p)\Bigg(\dfrac{1}{n_1} + \dfrac{1}{n_2}\Bigg)}}$

$p_1$ is the sample proportion of successes in group 1: $\dfrac{X_1}{n_1}$, $p_2$ is the sample proportion of successes in group 2: $\dfrac{X_2}{n_2}$, $p$ is the total proportion of successes in the sample: $\dfrac{X_1 + X_2}{n_1 + n_2}$, $n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2

Note: we could just as well compute $p_2 - p_1$ in the numerator, but then the left sided alternative becomes $\pi_2 < \pi_1$, and the right sided alternative becomes $\pi_2 > \pi_1$
$t = \dfrac{\bar{y} - \mu_0}{s / \sqrt{N}}$

$\bar{y}$ is the sample mean of the difference scores, $\mu_0$ is the population mean of the difference scores according to H0, $s$ is the sample standard deviation of the difference scores, $N$ is the sample size (number of difference scores).
Sampling distribution of $F$:

$F$ distribution with $K$ (df model, numerator) and $N - K - 1$ (df error, denominator) degrees of freedom
Sampling distribution of $t$:
$t$ distribution with $N - K - 1$ (df error) degrees of freedom
Approximately standard normal
$t$ distribution with $N - 1$ degrees of freedom
The exact distribution of $W$ under the null hypothesis is the Binomial($n$, $p$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $p = 0.5$.
If $n$ is large, $W$ is approximately normally distributed under the null hypothesis, with mean $np = n \times 0.5$ and standard deviation $\sqrt{np(1-p)} = \sqrt{n \times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \frac{W - n \times 0.5}{\sqrt{n \times 0.5(1 - 0.5)}}$$ follows approximately a standard normal distribution if the null hypothesis were true.
Binomial($n$, $p$) distribution
Here $n = N$ (total sample size), and $p = \pi_0$ (population proportion according to the null hypothesis)
$C\%$ confidence interval for $\beta_k$ and for $\mu_y$; $C\%$ prediction interval for $y_{new}$
Approximate $C\%$ confidence interval for $\pi_1 - \pi_2$
$C\%$ confidence interval for $\mu$
n.a.
n.a.
Confidence interval for $\beta_k$:
$b_k \pm t^* \times SE_{b_k}$
If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$
Confidence interval for $\mu_y$, the population mean of $y$ given the values on the independent variables:
$\hat{y} \pm t^* \times SE_{\hat{y}}$
If only one independent variable: $SE_{\hat{y}} = s \sqrt{\dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
Prediction interval for $y_{new}$, the score on $y$ of a future respondent:
$\hat{y} \pm t^* \times SE_{y_{new}}$
If only one independent variable: $SE_{y_{new}} = s \sqrt{1 + \dfrac{1}{N} + \dfrac{(x^* - \bar{x})^2}{\sum (x_j - \bar{x})^2}}$
In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
Regular (large sample):
$(p_1 - p_2) \pm z^* \times \sqrt{\dfrac{p_1(1 - p_1)}{n_1} + \dfrac{p_2(1 - p_2)}{n_2}}$where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
With plus four method:
$(p_{1.plus} - p_{2.plus}) \pm z^* \times \sqrt{\dfrac{p_{1.plus}(1 - p_{1.plus})}{n_1 + 2} + \dfrac{p_{2.plus}(1 - p_{2.plus})}{n_2 + 2}}$where $p_{1.plus} = \dfrac{X_1 + 1}{n_1 + 2}$, $p_{2.plus} = \dfrac{X_2 + 1}{n_2 + 2}$, and $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
$\bar{y} \pm t^* \times \dfrac{s}{\sqrt{N}}$where the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
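A small sketch with a made-up sample of $N = 21$ scores, so that df $= 20$ and $t^* = 2.086$ as in the example:

```python
import math
import statistics

# hypothetical sample of N = 21 scores (df = 20, so t* = 2.086 for 95%)
y = [4.1, 5.0, 3.8, 4.7, 5.2, 4.4, 4.9, 3.9, 4.6, 5.1,
     4.3, 4.8, 4.0, 5.3, 4.5, 4.2, 5.4, 3.7, 4.6, 5.0, 4.4]
t_star = 2.086

ybar = statistics.mean(y)
s = statistics.stdev(y)                  # sample standard deviation
half = t_star * s / math.sqrt(len(y))    # half-width of the interval
print(ybar - half, ybar + half)
```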
Proportion of variance explained $R^2$: Proportion of variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):$$\begin{align}R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\&= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\&= r(y, \hat{y})^2\end{align}$$$R^2$ is the proportion of variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion of variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the squared correlation between the independent variable $x$ and the dependent variable $y$.
Wherry's $R^2$ / shrunken $R^2$: Corrects for the positive bias in $R^2$ and is equal to$$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$$R^2_W$ is a less biased estimate than $R^2$ of the proportion of variance explained in the population by the population regression equation, $\rho^2$
Stein's $R^2$: Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to$$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
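The two corrections can be checked numerically; the values of $N$, $K$, and $R^2$ below are made up for illustration:

```python
# made-up values: sample size, number of predictors, sample R^2
N, K, R2 = 100, 3, 0.40

# Wherry's (shrunken) R^2
R2_wherry = 1 - (N - 1) / (N - K - 1) * (1 - R2)

# Stein's R^2 (expected cross-sample R^2)
R2_stein = 1 - ((N - 1) * (N - 2) * (N + 1)) / ((N - K - 1) * (N - K - 2) * N) * (1 - R2)
print(round(R2_wherry, 5), round(R2_stein, 5))
```

As expected, $R^2_S < R^2_W < R^2$: the cross-sample estimate shrinks the most.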
Per independent variable:
Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model
Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model
Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$
-
Cohen's $d$: Standardized difference between the sample mean of the difference scores and $\mu_0$:$$d = \frac{\bar{y} - \mu_0}{s}$$Indicates how many standard deviations $s$ the sample mean of the difference scores $\bar{y}$ is removed from $\mu_0$
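A minimal sketch with made-up difference scores:

```python
import statistics

# hypothetical difference scores (after - before), for illustration only
diffs = [1.2, 0.5, -0.3, 2.1, 0.8, 1.5, -0.1, 0.9]
mu0 = 0.0

# Cohen's d: standardized distance of the mean difference from mu_0
d = (statistics.mean(diffs) - mu0) / statistics.stdev(diffs)
print(round(d, 3))
```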
Can mental health be predicted from physical health, economic class, and gender?
Is the proportion of smokers different between men and women? Use the normal approximation for the sampling distribution of the test statistic.
Is the average difference between the mental health scores before and after an intervention different from $\mu_0$ = 0?
Do people tend to score higher on mental health after a mindfulness course?
Is the proportion of smokers among office workers different from $\pi_0 = .2$?
SPSS
SPSS
SPSS
SPSS
SPSS
Analyze > Regression > Linear...
Put your dependent variable in the box below Dependent and your independent (predictor) variables in the box below Independent(s)
SPSS does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two-sided $p$ value that would have resulted from the $z$ test. Go to:
Analyze > Descriptive Statistics > Crosstabs...
Put your independent (grouping) variable in the box below Row(s), and your dependent variable in the box below Column(s)
Click the Statistics... button, and click on the square in front of Chi-square
Continue and click OK
Analyze > Compare Means > Paired-Samples T Test...
Put the two paired variables in the boxes below Variable 1 and Variable 2
Put your dichotomous variable in the box below Test Variable List
Fill in the value for $\pi_0$ in the box next to Test Proportion
Jamovi
Jamovi
Jamovi
Jamovi
Jamovi
Regression > Linear Regression
Put your dependent variable in the box below Dependent Variable and your independent variables of interval/ratio level in the box below Covariates
If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well
Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'
Jamovi does not have a specific option for the $z$ test for the difference between two proportions. However, you can do the chi-squared test instead. The $p$ value resulting from this chi-squared test is equivalent to the two-sided $p$ value that would have resulted from the $z$ test. Go to:
Frequencies > Independent Samples - $\chi^2$ test of association
Put your independent (grouping) variable in the box below Rows, and your dependent variable in the box below Columns
T-Tests > Paired Samples T-Test
Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line
Under Hypothesis, select your alternative hypothesis
Jamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two-sided $p$ value that would have resulted from the sign test. Go to:
ANOVA > Repeated Measures ANOVA - Friedman
Put the two paired variables in the box below Measures
Frequencies > 2 Outcomes - Binomial test
Put your dichotomous variable in the white box at the right
Fill in the value for $\pi_0$ in the box next to Test value
Under Hypothesis, select your alternative hypothesis |
"""Author: John VolkDate: 10/10/2016"""from __future__ import print_functionfrom sympy.parsing.sympy_parser import (parse_expr, standard_transformations, implicit_multiplication,\ implicit_application)import numpy as npimport sympyimport re
Python, regex, and SymPy to automate custom text conversions to LaTeX

This post includes examples of how to:
Convert a text equation in a bad format into valid input for Python and SymPy
Convert a normal Python mathematical expression into a suitable form for SymPy's LaTeX printer
Use SymPy to produce LaTeX output
Create functions and data structures to make the process reusable and efficient to fit your needs
Let's start with the following string, assigned to the variable text, that represents a mathematical model but in poor printing form:
text = """Ln(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)+ a5 dtime + a6 dtime^2"""text
'\nLn(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)\n+ a5 dtime + a6 dtime^2'

However, we want this expression to look like:
$ \log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi dtime \right )} + a_{4} \cos{\left (2 \pi dtime \right )} + a_{5} dtime + a_{6} dtime^{2} $
Observe the following differences between text and valid LaTeX:
Some variables and functions are concatenated, i.e.: LnQ, correct latex would be \log{Q}
Functions are not in proper LaTeX form (e.g. Sin = \sin, Ln = \log, ...)
Missing subscripts: a0 = a_0
Newline characters need to be removed
Some symbols need to be replaced: dtime = t
Python's symbolic math package SymPy can automate some of the transformations that we need, and SymPy has built-in LaTeX printing capabilities.
If you are not familiar with SymPy you should take some time to familiarize yourself with it; it takes some time to get used to its syntax. Check out the well-done documentation for SymPy here.
First we need to convert the string (text) into valid SymPy input
Valid SymPy input includes valid Python math expressions with added recognition of math operations. For example, the following expression can be parsed by SymPy without error:
exp = "(x + 4) * (x + sin(x**3) + log(x + 5*x) + 3*x - sqrt(y))" sympy.expand(exp)
4*x**2 - x*sqrt(y) + x*log(x) + x*sin(x**3) + x*log(6) + 16*x - 4*sqrt(y) + 4*log(x) + 4*sin(x**3) + 4*log(6)
print(sympy.latex(sympy.expand(exp)))
4 x^{2} - x \sqrt{y} + x \log{\left (x \right )} + x \sin{\left (x^{3} \right )} + x \log{\left (6 \right )} + 16 x - 4 \sqrt{y} + 4 \log{\left (x \right )} + 4 \sin{\left (x^{3} \right )} + 4 \log{\left (6 \right )}

Now back to the original text that we want to convert: we need to make some simple adjustments to turn the string into a valid SymPy expression
You have several options here; in this case I choose to use regular expressions (regex) to do basic string pattern substitutions. You will likely need to modify these operations or create alternative regexes to prepare your text. If you do not know regex you can probably get by with basic Python string methods.
## Note, I removed the LHS and the equal sign from the equation - SymPy requires special syntax for equations
## further explanation below
text = """
a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)
+ a5 dtime + a6 dtime^2"""

## Make a dictionary to map our strings to standard python math or symbols as needed
symbol_map = {
    '^': '**',
    'Ln': 'log ',
    'Sin': 'sin ',
    'Cos': 'cos ',
    'dtime': 't'
}

## use the dictionary to compile a regex on the keys
## escape regex characters because ^ is one of the keys (^ is a regex special character)
to_symbols = re.compile('|'.join(re.escape(key) for key in symbol_map.keys()))

## run through the text looking for keys (regex) and replacing them with the values from the dict
text = to_symbols.sub(lambda x: symbol_map[x.group()], text)
text
'\na0 + a1 log Q + a2 log Q**2 + a3 sin (2 pi t) + a4 cos (2 pi t)\n+ a5 t + a6 t**2'
## remove newline characters from the text
text = re.sub('\n', ' ', text)
text
' a0 + a1 log Q + a2 log Q**2 + a3 sin (2 pi t) + a4 cos (2 pi t) + a5 t + a6 t**2'
## regex to replace coefficients a0, a1, ... with their equivalents with subscripts, e.g. a0 = a_0
text = re.sub(r"\s+a(\d)", r"a_\1", text)
text
'a_0 +a_1 log Q +a_2 log Q**2 +a_3 sin (2 pi t) +a_4 cos (2 pi t) +a_5 t +a_6 t**2'

At this point text is almost ready for LaTeX...

The remaining string manipulations would be difficult by hand; SymPy's parser is perfect for the remaining conversions:
Instead of trying to figure out how to place asterisks everywhere that multiplication is implied and parentheses where functions are implied (e.g. log Q**2 should be log(Q**2)), we can use SymPy's parser, which is quite powerful.
We use implicit multiplication (self-explanatory) and implicit application for function applications that are missing parentheses; both of these are transformations provided by the SymPy parser. Remember the parser will still follow mathematical order of operations (PEMDAS) when doing implicit application. The parser can handle additional cases as well, such as function exponentiation. Check the handy examples at the documentation link above.
## get the transformations we need (imported above) and place into a tuple as required by the parser
transformations = standard_transformations + (implicit_multiplication, implicit_application,)

## parse the text by applying implicit multiplication and implicit (math function) application
expr = parse_expr(text, transformations=transformations)
expr
a_0 + a_1*log(Q) + a_2*log(Q**2) + a_3*sin(2*pi*t) + a_4*cos(2*pi*t) + a_5*t + a_6*t**2
print(sympy.latex(expr))
a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}

SymPly amazing!!
$a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} $
## global variables for the function
symbol_map = {
    '^': '**',
    'Ln': 'log ',
    'Sin': 'sin ',
    'Cos': 'cos ',
    'dtime': 't'
}
transformations = standard_transformations + (implicit_multiplication, implicit_application,)

## the function
def translate(bad_text):
    """My custom string-to-LaTeX-ready SymPy expression translation function

    Arguments:
        bad_text (str): text that is in some bad format that requires string
            manipulation, including custom string modifications to math
            functions, symbols, and operators defined by the global symbol_map
            dictionary (for substitutions) and the regexes compiled herein.
            More advanced manipulations provided by SymPy are defined by the
            global variable `transformations`, an input to the SymPy parser.

    Returns:
        expr (sympy expression): a SymPy expression created by the SymPy
            expression parser after first doing custom string modifications
            to math functions, symbols, and operators
    """
    to_symbols = re.compile('|'.join(re.escape(key) for key in symbol_map.keys()))
    bad_text = to_symbols.sub(lambda x: symbol_map[x.group()], bad_text)
    bad_text = re.sub('\n', '', bad_text)
    text = re.sub(r"\s+a(\d)", r"a_\1", bad_text)
    expr = parse_expr(text, transformations=transformations)
    return expr
## very handy, now we just have to convert to TeX and print
print(sympy.latex(translate(text)))
a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}

What about the original text? It was an equation with a left-hand side:
Parse the LHS and RHS separately and combine them with SymPy's Eq
text = """Ln(Y) = a0 + a1 LnQ + a2 LnQ^2 + a3 Sin(2 pi dtime) + a4 Cos(2 pi dtime)+ a5 dtime + a6 dtime^2"""# split on the equal signt1 = text.split('=')[0] t2 = text.split('=')[1]
## Use sympy.Eq(LHS, RHS)
LHS = translate(t1)
RHS = translate(t2)
print(sympy.latex(sympy.Eq(LHS, RHS)))
\log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2}
$\log{\left (Y \right )} = a_{0} + a_{1} \log{\left (Q \right )} + a_{2} \log{\left (Q^{2} \right )} + a_{3} \sin{\left (2 \pi t \right )} + a_{4} \cos{\left (2 \pi t \right )} + a_{5} t + a_{6} t^{2} $
SymPly fantastic!!!
## extract SymPy symbols from both sides of eqn
LHS_symbols = [str(x) for x in LHS.atoms(sympy.symbol.Symbol)]
RHS_symbols = [str(x) for x in RHS.atoms(sympy.symbol.Symbol)]
LHS_symbols
['Y']
RHS_symbols
['a_0', 'Q', 'a_5', 'a_6', 'a_2', 'a_3', 'a_1', 'a_4', 't']
## remove Q and t from the RHS list because we do not want to plug values in for them
RHS_symbols.pop(RHS_symbols.index('Q'))
RHS_symbols.pop(RHS_symbols.index('t'));
## create a dictionary assigning each symbol a random integer value
plug_in_dict = {k: np.random.randint(10) for k in RHS_symbols}
print(plug_in_dict)
{'a_6': 4, 'a_5': 7, 'a_4': 5, 'a_3': 0, 'a_2': 1, 'a_1': 0, 'a_0': 6}
## now plug in our values and let sympy simplify! Note, the variables we changed only appear on the RHS
RHS.subs(plug_in_dict)
4*t**2 + 7*t + log(Q**2) + 5*cos(2*pi*t) + 6
print(sympy.latex(sympy.Eq(LHS, RHS.subs(plug_in_dict))))
\log{\left (Y \right )} = 4 t^{2} + 7 t + \log{\left (Q^{2} \right )} + 5 \cos{\left (2 \pi t \right )} + 6
$\log{\left (Y \right )} = 4 t^{2} + 7 t + \log{\left (Q^{2} \right )} + 5 \cos{\left (2 \pi t \right )} + 6 $
Remarks
I hope this was useful to anyone trying to use Python to batch process strings into mathematical expressions and LaTeX. In my case I needed to process many of these types of strings that were output from a computer code that fits regression models to input data. As you can see, if you work with mathematical expressions of any kind and already know basic Python, SymPy is undoubtedly useful. If you liked this or have experimented with your own implementations of Python, regex, and/or SymPy to do cool and useful things please share in the comments below. |
The Planck function that describes the blackbody emission is a function of temperature
and wavelength:$$B_\lambda(T)=\frac{2hc^2}{\lambda^5}\cdot\frac1{e^{hc/\lambda k_BT}-1}$$Due to the temperature dependence, blackbodies at different temperatures have different emissions. The graph below, from Wikipedia, shows the drastic changes for blackbodies at temperatures of 3000 K, 4000 K, and 5000 K (also shown is the Rayleigh-Jeans regime, where $B_\lambda(T)\propto\lambda^{-4}$, which blows up to infinity at short wavelengths).
The spectral class of the sun is G2V. The G2 signifies that the surface temperature of the sun is about 5800 K, not the 5250 °C (about 5520 K) of your diagram. Thus, the emissions observed from the sun should be larger than those of the modeled blackbody you show.
Plotted below is the Planck function for a 5520 K emitter, a 5777 K emitter, and a 5800 K emitter. Both the 5777 K and 5800 K blackbodies have a peak that is about 30% larger than that of the 5520 K blackbody (the left axis is W/sr/m$^3$; the bottom axis is $\mu$m).
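A quick numeric check of the peak comparison (a sketch, not the code behind the plot), assuming standard SI values for $h$, $c$, $k_B$, and Wien's displacement constant: since $\lambda_{max}T$ is constant, the peak value of $B_\lambda$ scales exactly as $T^5$.

```python
import math

# physical constants (SI)
h, c, kB = 6.62607e-34, 2.99792e8, 1.38065e-23
b_wien = 2.8978e-3                    # Wien displacement constant, m*K

def planck(lam, T):
    """Spectral radiance B_lambda(T) in W / (sr * m^3)."""
    return (2 * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * kB * T))

# ratio of peak values for a 5777 K and a 5520 K blackbody
ratio = planck(b_wien / 5777, 5777) / planck(b_wien / 5520, 5520)
print(ratio)    # equals (5777/5520)^5, roughly a 26% larger peak
```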
From Jim in the comments,
Don't forget temperature differences across the surface, light from hotter depth that eventually finds its way out, other light-producing phenomena (spectral emission from excited electrons recombining with atoms then ionizing again, photons created in scattering, decay, annihilation, etc processes), etc. A star is a complex system with a lot happening all the time. It's safe to assume that blackbody radiation (while the primary source) isn't the only source of radiation.
It appears that your image wants to fit the curve to the larger wavelengths, rather than the peak--this is where your confusion lies. If you fit the
peaks, then surely $\varepsilon\leq1$ is satisfied (so long as you note the above comment from Jim, that the sun really isn't a pure blackbody). |
Show that $$\int_0^\pi{d\theta\over(a-b\cos\theta)^2}={a\pi\over (a^2-b^2)^{3\over 2}}$$ where $a>b>0$. I'm not sure how to simplify this integral or evaluate it. Any solutions or hints are greatly appreciated.
HINTS:
Note that $$\int_0^\pi \frac{1}{(a-b\cos(\theta))^2}\,d\theta=-\frac{\partial}{\partial a}\int_0^\pi \frac{1}{a-b\cos(\theta)}\,d\theta$$
Then, evaluate the integral on the right-hand side using either the classical Tangent Half-Angle Substitution or use contour integration.
Under the assumption $a>b>0$, let us start with: $$ I(a,b) = \int_{0}^{\pi}\frac{dt}{a-b\cos t} = 2\int_{0}^{\pi/2}\frac{dt}{(a+b)-2b\cos^2 t} $$ that through the substitution $t=\arctan u$ becomes: $$ I(a,b) = 2\int_{0}^{+\infty}\frac{du}{(a+b)(1+u^2)-2b}=\frac{\pi}{\sqrt{a^2-b^2}}. $$ Our integral is just $-\frac{\partial}{\partial a} I(a,b)$, hence it equals $\frac{\pi a}{(a^2-b^2)^{3/2}}$. |
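The identity is easy to sanity-check numerically; the sketch below uses a composite Simpson's rule, with $a = 3$, $b = 2$ as an arbitrary choice satisfying $a > b > 0$:

```python
import math

def simpson(f, lo, hi, n=10000):
    """Composite Simpson's rule on [lo, hi] with an even number n of subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

a, b = 3.0, 2.0                                   # any a > b > 0 will do
numeric = simpson(lambda t: 1.0 / (a - b * math.cos(t)) ** 2, 0.0, math.pi)
closed_form = a * math.pi / (a * a - b * b) ** 1.5
print(numeric, closed_form)
```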
I happen to have revised our calculus syllabus for first year biology majors about one year ago (in a French university, for that matter). I benefited a lot from my wife's experience as a math-friendly biologist.
The main point of the course is to get students able to deal with
quantitative models. For example, my wife studied the movement of cells under various circumstances.
A common model postulates that the average distance $d$ between two
positions of a cell at times $t_0$ and $t_0+T$ is given by
$$d = \alpha T^\beta$$ where $\alpha>0$ is a speed parameter and
$\beta\in[\frac12,1]$ is a parameter that measures how the movement
fits between a Brownian motion ($\beta=\frac12$)
and a purely ballistic motion ($\beta=1$).
This simple model is a great example to show how calculus can be relevant to biology.
My first point might be specific to recent French students: first-year students are often not even proficient enough with basic algebraic manipulations to do anything relevant with such a model. For example, even asking how $d$ changes when $T$ is multiplied by a constant requires knowing how to deal with exponents. In fact, we even had serious issues with the mere use of percentages.
One of the main points of our new calculus course is the ability to estimate uncertainties: in particular, given that $T=T_0\pm \delta T$, $\alpha=\alpha_0\pm\delta\alpha$ and $\beta=\beta_0\pm\delta\beta$, we ask them to estimate $d$ up to order one (i.e. using first-order Taylor series). This already involves derivatives of multivariable functions, and is an important computation when you want to draw conclusions from experiments.
Another important point of the course is the
use of logarithms and exponentials, in particular to interpret log or log-log graphs. For example, in the above model, it takes (very) little practice to see that taking logs is a good thing to do: $\log d = \beta\log T+\log \alpha$, so that plotting your data in a log-log chart should give you a line (if the model accurately represents your experiments).
This then interacts with
statistics: one can find the linear regression in log-log charts to find estimates for $\alpha$ and $\beta$. But then one really gets an estimate of $\beta$ and... $\log\alpha$, so one should have a sense of how badly this uncertainty propagates to $\alpha$ (one-variable first-order Taylor series: easy peasy).
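A hedged sketch of this workflow (the parameters are made up): simulate noisy data from $d = \alpha T^\beta$, then fit a line to the log-log points so that the slope estimates $\beta$ and the intercept estimates $\log\alpha$.

```python
import math
import random

random.seed(0)                      # reproducible "experiment"
alpha, beta = 2.0, 0.75             # made-up "true" parameters

# simulated measurements d = alpha * T^beta with small multiplicative noise
T = [0.5 * k for k in range(1, 41)]
d = [alpha * t ** beta * math.exp(random.gauss(0, 0.01)) for t in T]

# least-squares line through (log T, log d): slope ~ beta, intercept ~ log alpha
xs = [math.log(t) for t in T]
ys = [math.log(v) for v in d]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((u - xbar) * (v - ybar) for u, v in zip(xs, ys)) / sum((u - xbar) ** 2 for u in xs)
intercept = ybar - slope * xbar

beta_hat, alpha_hat = slope, math.exp(intercept)
print(beta_hat, alpha_hat)
```

Note that the regression returns $\log\alpha$, so the exponentiation at the end is exactly where the uncertainty propagation discussed above matters.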
The other main goal of the course is to get them able to deal with some (ordinary) differential equations. The motivating example I chose was offered to me by the chemist of our syllabus meeting.
A common model for the kinetics of a chemical reaction$$A + B \to C$$is the second-order model: one assumes that the speed of the reaction is proportional to the product of the concentrations of the species A and B. This leads to a not-so-easy differential equation of the form$$ y'(t) = (a-y(t))(b-y(t)).$$This is a
first-order ODE with separable variables. One can solve it explicitly (a luxury!) by dividing by the right-hand side, integrating in $t$, doing the change of variable $u=y(t)$ on the left-hand side, resolving the resulting rational fraction into partial fractions, and remembering that log is an antiderivative of the reciprocal function (and adjusting for the various constants that appeared in the process). Then, you need some algebraic manipulations to transform the resulting equation into the form $y(t) = \dots$. Unfortunately and of course, we are far from being able to properly cover all this material, but we try to get the students able to follow this road later on, with their chemistry teachers.
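As a check (my own sketch, not course material): carrying out that partial-fraction computation with the initial condition $y(0)=0$ and $a\neq b$ gives $y(t) = ab\,(e^{(a-b)t}-1)/(a e^{(a-b)t}-b)$, which can be verified symbolically:

```python
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)

# candidate explicit solution of y' = (a - y)(b - y) with y(0) = 0, assuming a != b
E = sp.exp((a - b) * t)
y = a * b * (E - 1) / (a * E - b)

# the residual of the ODE should simplify to zero
residual = sp.simplify(sp.diff(y, t) - (a - y) * (b - y))
print(residual, y.subs(t, 0))
```

As $t\to\infty$ (with $a>b$) the solution tends to $b$: the reaction stops when the limiting species is exhausted.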
In fact, I would love to be able to do more quantitative analysis of differential equations, but it is difficult to teach since it quickly goes beyond a few recipes. For example, I would like them to become able to tell in a glimpse the
variations of solutions to$$y'(t)=a\cdot y(t)-b \sqrt{y(t)}$$(a model of population growth for colonies of small living entities organized in circles, where deaths occur mostly on the edge - note how basic geometry makes an appearance here to explain the model) in terms of the initial value. Or to be able to realize that solutions to$$y'(t)=\sqrt{y(t)}$$must be sub-exponential (and what that even means...). For this kind of goal, one must first aim at basic proficiency in calculus.
To sum up,
dealing with any quantitative model needs a fair bit of calculus, in order to have a sense of what the model says, to use it with actual data, to analyze experimental data, to interpret it, etc.
To finish with a controversial point, it seems to me that, at least in my environment, biologists tend to underestimate the usefulness of calculus (and statistics, and more generally mathematics) and that improving the basic understanding of mathematics among biologists-to-be can only be beneficial. |
Prove that there is no $N\in\mathbb{N}$ with the following property: for all primes $p>N$ there exists $n\in\{3, 4,\ldots, N\}$ such that $n$, $n-1$, $n-2$ are all quadratic residues modulo $p$.
By Dirichlet's theorem, there exists $p>N$ such that each prime $l\leq N$, with the exception of $l=3$, satisfies $(l/p) = (l/3)$. I claim that this $p$ is a counterexample. Indeed, by multiplicativity $(m/p) = (m/3)$ for each $m \leq N$ that is not a multiple of 3. In particular $(m/p) = -1$ if $m \equiv -1 \bmod 3$. Each triple $\{ n, n-1, n-2 \}$ with $n \leq N$ contains one such $m$, and therefore cannot comprise three quadratic residues of $p$. QED.
What's the context? Seems rather tricky for homework; hope it's not a problem from an ongoing contest...
[Added later] In fact this seems to be the only construction, in thefollowing sense:
Conjecture. For every prime $l \neq 3$ there exists $N$ with the following property: for all primes $p>N$ such that $(l/p) \neq (l/3)$ there is some $n \in \lbrace 3, 4, \ldots, N \rbrace$ such that each of $n$, $n-1$, and $n-2$ is a quadratic residue of $p$.
For example, if $l \in \lbrace 2, 5, 7, 11, 13, 17 \rbrace$ then we can take $N=121$. For $19 \leq l \leq 43$ we can use $N = 325$, and $N = 376$ works for $l=47$ and several larger $l$.
This can be checked as follows. For a positive integer $n$ let $s(n)$ be the unique squarefree number such that $n/s(n)$ is a square; e.g. for $n=24,25,26,27,28$ we have $s(n)=6,1,26,3,7$ respectively. Then $(n/p) = (s(n)/p)$ for all $p>n$. Given a small set $S$ of primes containing $l$ and a bound $N$, let $\cal N\phantom.$ be the set of all $n \in \lbrace 3, 4, \ldots, N \rbrace$ such that each of $s(n)$, $s(n-1)$, and $s(n-2)$ is a product of primes in $S$. Now try all $2^{|S|}$ ways to assign $\pm 1$ to each $(l'/p)$ with $l' \in S$, and see which ones make at least one of $s(n),s(n-1),s(n-2)$ a quadratic nonresidue for each $n \in \cal N$.
For $S = \lbrace 2, 3, 5, 7, 11, 13, 17 \rbrace$ and $N = 121$, we compute $${\cal N} = \lbrace 3, 4, 5, \ldots, 17, 18, 22, 26, 27, 28, 34, 35, 36, 50, 51, 52, 56, 65, 66, 100, 121 \rbrace,$$ and find that the only choices that work are the two that make $(l/p) = (l/3)$ for each $l \in S - \lbrace 3 \rbrace$.
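A short Python sketch (my own code, not the original computation) that reproduces $s(n)$ for $n = 24,\ldots,28$ and the set $\cal N$ quoted above; it implements only the set computation, not the $2^{|S|}$ sign search:

```python
def squarefree_part(n):
    """s(n): the unique squarefree number such that n / s(n) is a perfect square."""
    s, d = 1, 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        if n % d == 0:
            s *= d
            n //= d
        d += 1
    return s * n

def cal_N(S, N):
    """All n in {3, ..., N} such that s(n), s(n-1), s(n-2) are products of primes in S."""
    def smooth(m):
        m = squarefree_part(m)
        for p in S:
            while m % p == 0:
                m //= p
        return m == 1
    return [n for n in range(3, N + 1) if all(smooth(n - i) for i in range(3))]

print([squarefree_part(n) for n in range(24, 29)])    # [6, 1, 26, 3, 7]
print(cal_N({2, 3, 5, 7, 11, 13, 17}, 121))
```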
Then if we put $l=19$ into $S$ and increase $N$ to $325$ we find that ${\cal N} \ni 325$, with $323 = 17 \cdot 19$, $324 = 18^2$, and $325 = 13 \cdot 5^2$. So the only way to avoid $(323/p) = (324/p) = (325/p) = 1$ is to make $(19/p) = +1$. We then incorporate $l=23$ by considering $n=92$, and $l=29$ using $n=290$, "etc." Computation suggests that there are lots of choices to make this work once we get past $l=19$, but I don't know how feasible it might be to prove this.
[The exhaustive computation over $2^{|S|}$ choices of $(l'/p)$ is what led me to the pattern $(l/p) = (l/3)$ in the first place. Once only two choices remained for $S = \lbrace 2, 3, 5, 7, 11, 13, 17 \rbrace$ I thought that a few more primes might whittle it down to zero and disprove the claim, but I kept seeing only two choices that differed only in the value of $(3/p)$, and the pattern in the other $(l/p)$ values soon became clear.]
This is Theorem 2 in D. Lehmer & E. Lehmer, On runs of residues, Proceedings of the American Mathematical Society, Vol. 13, No. 1 (Feb. 1962), 102-106. The proof there uses quadratic reciprocity and Dirichlet's theorem on primes in arithmetic progressions. They also deal with similar problems for $k$-th powers.
For the pressure exerted by compressed gas, it is stated in my textbook: "The result is that at high pressures the molecules of a gas are so compressed that their volume becomes a significant fraction of the total volume of the gas. Since this reduces the volume available for molecular motion, collisions occur more frequently. The pressure exerted by the real gas is higher than that predicted by Boyle's law for that particular volume of an ideal gas." What confuses me is that the statement does not take into consideration the gravitational force of attraction between gas particles. At the same temperature, if the volume is decreased by a significant amount, wouldn't the gravitational force of attraction between gas particles no longer be negligible (since $F_G$ is inversely proportional to the square of the distance between particles)? This would cause a decrease in the real pressure exerted on the exterior surface (the wall of the container). But, as stated in the textbook, it is also true that there is an increase in the real pressure due to the smaller space available for the random motion of the particles (accounting for the volume of the particles, which is assumed negligible for an ideal gas). So don't these two effects contradict each other? Or is the increase in real pressure numerically larger than the decrease, so that there is still an overall increase?
You are right to think it through: an attractive force would indeed tend to reduce pressure. However, in this example the gravitational force is simply too weak to overturn the repulsive effect of the electrons of one molecule repelling those of another. So overall the force is repulsive when molecules are close.
Intermolecular/interatomic forces are much, much,
much stronger than the gravitational force.
Consider for example an Argon gas. The interaction between Argon atoms is well described by the Lennard-Jones potential,
$$ U(r) = 4 \epsilon \left[\left(\frac \sigma r\right)^{12} -\left(\frac \sigma r\right)^6\right] $$
where $r$ is the distance between the centers of mass of the atoms and for Argon,
$$\epsilon/k_B \simeq 126\ \mathrm{K}$$ $$\sigma \simeq 0.335\ \mathrm{nm}$$
At short distances, only the repulsive part is relevant, and therefore
$$ U(r) \simeq 4 \epsilon \left(\frac \sigma r\right)^{12} $$
Now let's compare this with the gravitational interaction,
$$ U_G(r) = - \frac{G m m} {r} $$
For two Argon atoms,
$$G m m/k_B = 2.13 \cdot 10^{-38}\ \mathrm{m\,K}$$
Comparing the absolute values of the potentials we obtain therefore
$$ \frac{U(r)}{|U_G(r)|}\simeq \frac{4 \epsilon \sigma}{Gmm} \left(\frac r \sigma\right)^{-11} = 7.92 \cdot 10^{30} \left(\frac r \sigma\right)^{-11} $$
or
$$ \frac{U(r)}{|U_G(r)|}\simeq 4.65 \cdot 10^{25} x^{-11} $$
where $x$ is the numerical value of $r$ in nm.
So you can see that the gravitational interaction is completely negligible compared to interatomic forces. This is true in general, and not only for ideal gases, even if finding an approximate form for the interatomic/intermolecular potential is quite difficult in general. |
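The quoted numbers can be reproduced in a few lines of Python, assuming standard values for $G$, $k_B$, and the mass of an argon atom (small differences from the figures above are just rounding):

```python
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
kB = 1.380649e-23              # Boltzmann constant, J/K
m = 39.948 * 1.66054e-27       # mass of one argon atom, kg
eps_over_kB = 126.0            # LJ well depth over k_B, K (value used above)
sigma = 0.335e-9               # LJ length scale, m (value used above)

Gmm_over_kB = G * m * m / kB                        # ~ 2.13e-38 m K
prefactor = 4 * eps_over_kB * sigma / Gmm_over_kB   # 4*eps*sigma/(G*m*m), ~ 7.9e30
coeff_nm = prefactor * 0.335 ** 11                  # coefficient with r in nm, ~ 4.7e25
print(Gmm_over_kB, prefactor, coeff_nm)
```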
I asked a student I tutor what $\sqrt{x^2}$ was so I could show him why the solution is $|x|$ instead of just $x$. He ended up changing the problem to $(x^2)^{1/2}$ and then multiplied the exponents to get $x^{2/2}=x^1=x$. Now I'm well aware why the answer to the problem I gave him is $|x|$, and explained it, and the answer to $(x^2)^{1/2}$ is likewise $|x|$, but I wasn't able to give a really good explanation of why he couldn't use the rule $\sqrt[n]{a^m}=a^{m/n}$.
This is a surprisingly tricky issue, and the exact answer "why" depends on the context (real or complex numbers?) and definitions in play (varies by textbook). I'm looking for a good general answer to effectively the same issue on this question here.
Restricting the discussion purely to real numbers, the most popular and straightforward response seems to be that the identity $(a^r)^s = a^{rs}$ requires some additional restriction on it, for example, that both $a^r$ and $a^s$ be defined in real numbers. Observing that restriction, the student's simplification is invalid for negative $x$, because then $x^\frac{1}{2}$ is undefined in real numbers. It's possible that may only be implied (not explicitly stated) in a given textbook that you might use.
For some exposition of the complex-valued case, consider the Wikipedia article here.
Edit: Note that as defined in complex numbers, $(z^m)^\frac{1}{n} = (z^\frac{1}{n})^m$ will be true if and only if $m$ and $n$ are relatively prime (assumes $n$, $z$ nonzero; Silverman, 1975).
The identity $(x^m)^n=x^{mn}$ is true for real numbers $m,n$ and $x\ge 0$. It is not necessarily true otherwise.
An $n$th root of $b$ (where $n$ is a positive integer and $b$ is a real number) is a value of $x$ such that $x^n=b$. Note that if $b\ne 0$, then there are $n$ distinct values of $x$, some of which may be complex.
Although the expression $x^n=b$ implies many possible values for $x$, the notation $x=\sqrt[n]{b}$ implies
only one value of $x$, the principal $n$th root of $b$. This is the unique positive real value of $x$ (if it exists), or the unique negative real value of $x$ otherwise (if it exists). (In my experience, the use of the notation $x=\sqrt[n]{b}$ to denote a unique value of $x$ is often done in primary and secondary education.)
For example, if $x^2=4$, then this means that either $x=2$ or $x=-2$. But $x=\sqrt{4}$ means that $x=2$ (and not $x=-2$). If $x^3=-8$, then there are three solutions (only one of which is real), but the notation $x=\sqrt[3]{-8}$ means $x=-2$ (and no other values).
The equation $x^2=b$ (where $b>0$) means that either $x=\sqrt{b}$ or $x=-\sqrt{b}$. If it is the former, then $\sqrt{x^2}=x$ but if it is the latter then $\sqrt{x^2}=-x$. So one cannot claim that, in general, $\sqrt{x^2}=x$. However, $\sqrt{x^2}=|x|$ holds for either case, so it is, in general, true.
The identity $(x^m)^n = x^{mn}$ is only true if $x>0$. In fact pretty much all identities involving exponents are only true for positive bases -- another example that often trips students up is $x^r \cdot y^r = (xy)^r$.
There are several ways to explain why these formulas are only true for positive bases, but at a fundamental level it has to do with the question of how we even
define $x^r$ when $r$ is a real number. The conventional definition is that we define $$x^r = \exp \left(r \ln x \right)$$ (although in order to make this definition non-circular we have to have some way of defining what $\exp$ and $\ln$ are without invoking exponents.)
Based on the above definition it can be shown (for positive $x$) that $x^{r+s}=x^r x^s$. From this it can further be deduced that if the exponent $r$ is some integer $n$ then the definition above is consistent with the more elementary notions based on "repeated multiplication." It further follows that $x^{1/n}$ is the unique positive $n^{th}$ root of $x$, and we can likewise prove that $(x^r)^s = x^{rs}$ for any real $r$ and $s$. But
all of these provable properties hinge on how exactly we define $x^r$, and note that the definition given above only makes sense for $x>0$, because $\ln x$ is not defined (or, more precisely, is not single-valued) for negative values of $x$. Update: Actually, the situation is more complicated than what I have written above; see https://math.stackexchange.com/a/2085269/124095, and scroll down to the second update, for more information regarding the question of in which contexts the various exponent rules apply.
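A quick illustration of the point, using only Python's standard library: for positive $x$ the definition via $\exp$ and $\ln$ agrees with the built-in power, while for negative $x$ the definition cannot even get off the ground, because $\ln$ fails at the first step.

```python
import math

x, r = 2.0, 0.5
# For x > 0 the definition x**r = exp(r * ln x) matches the built-in power.
assert math.isclose(math.exp(r * math.log(x)), x ** r)

# For x < 0 the definition breaks down at the very first step:
try:
    math.log(-2.0)
except ValueError:
    print("ln(-2) is undefined over the reals, so (-2)**r is too")
```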
Other answers have concentrated on the base, but the problem can also be ascribed to the exponents. $(x^m)^n=x^{mn}$ is true for any base, as long as $m$ and $n$ are positive integers. This follows simply from the definition of exponentiation as repeated multiplication. $x^m$ is $m$ copies of $x$ multiplied together. $(x^m)^n$ is $n$ copies of ($m$ copies of $x$ multiplied together) multiplied together, and so is $mn$ copies of $x$ multiplied together. This works for any base: integer, rational, irrational, positive, negative, complex. In fact, it works in any ring.
The problem comes when we try to extend the concept of exponents to non-integers. If we want to define $y = x^{\frac{1}{n}}$, we can take $y^n$ and extend the rule to say $y^n = (x^{\frac{1}{n}})^n =x^{\frac{1}{n}n}=x$. With this extension, $y$ is a number such that $y^n = x$. This is a description of $y$, but it's not a definition of $y$, as it does not uniquely specify $y$. Thus, when we take $(x^2)^{\frac{1}{2}}$, we are finding a number that when squared is equal to $x^2$. And while it's tempting to look at the phrase "a number that when squared is equal to $x^2$" and say "Well, that's a rather complicated way of saying '$x$'", $x$ isn't the only number that satisfies this description. Thus, we have to make qualifications such as "$x^{\frac{1}{n}}$ is the principal $n$th root of $x$", and it's those qualifications that cause the rule to fail with negative numbers.
Thus, while we can save this rule by saying "$x$ has to be a positive number", we can also save it by saying "$m$ and $n$ have to be integers". Or combine these qualifications and say "This works for non-integer exponents only when the base is positive".
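A numerical check of the whole discussion, in Python: $\sqrt{x^2}=|x|$ holds for every real $x$, while the naive cancellation $(x^2)^{1/2}=x$ holds only when $x \ge 0$.

```python
import math

for x in [-3.0, -0.5, 0.0, 2.0]:
    # sqrt(x^2) = |x| holds for every real x ...
    assert math.sqrt(x ** 2) == abs(x)
    # ... while (x^2)^(1/2) = x holds only when x >= 0.
    assert ((x ** 2) ** 0.5 == x) == (x >= 0)
print("checks passed")
```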
Two hours before the seminar, the auditorium was already half-full. By 2:40, it was full. The web chat room for the event was embedded in the post.
A Dutch variation of this blog post is available. Astrophysicist Mario Livio has joined the community of people who spread the rumor about a bump that will be presented on Tuesday between 3 pm and 5 pm: "Rumor from #LHC @CERN: potential detection of excess at \(700\GeV\) [most recently sometimes said to be \(750\GeV\)] decaying into 2 photons (3 sigma in both @ATLASexperiment & @CMSexperiment)." I think that by now, everyone who wanted to know something about the 2015 results has heard or seen this rumor so let me officially declassify it. BTW the schedule for Tuesday December 15th (webcast.cern.ch): 15:00-15:40 CMS, Jim Olsen (Princeton U., USA); 15:40-16:20 ATLAS, Marumi Kado (Lab. de l'Acc. Lin., FR).
All this knowledge could have come from a single source and the source may hypothetically be a prankster. But I have a feeling that the sources are actually numerous and independent at the root. So I think it's more likely than not that an excess of this kind will be reported. There may be other interesting (and maybe more interesting) excesses announced on Tuesday but I won't discuss this possibility in this blog post at all.
Exactly four years ago, we were shown pictures such as this one. When you collide proton beams and look for final states that include two photons (or "diphotons", as physicists like to call it when they want to pretend that they speak Greek), you may find a 3-sigmaish bump around \(125\GeV\) – which is the value of the invariant mass \(\sqrt{(p_1^\mu+p_2^\mu)^2}\) of the two photons – already with the data that was available by the end of 2011.
The Higgs boson had to be somewhere on the mass axis and the remaining values had been largely excluded, so it was pretty much unavoidable that this excess did mean that the Higgs boson existed and had a mass around \(125\GeV\) – we were more likely to call the mass \(126\GeV\) at least throughout 2012 but the most accurate values are close to \(125.1\GeV\) now. Additional channels strengthened the case in 2012. The Higgs was seen to decay to \(ZZ^*\), a pair of massive electroweak gauge bosons (one of them must be virtual because they're too heavy), and some lepton and quark pairs later.
Now, on Tuesday, we are likely to be told that there is a very similar \(\gamma\gamma\) excess near the invariant mass of \(700\GeV\) – which happens to be 4 times the top quark mass, if you haven't noticed. There is a 3-sigma excess in the ATLAS data and a similar 3-sigma excess in the CMS data. By the Pythagorean rule, the "combined" excess has the significance of \[
\sqrt{3^2 + 3^2} = \sqrt{18} \approx 4.24.
\] Even the combination is insufficient to be a discovery but the excess is pretty strong. But if your prior probability that a "similar particle" should exist is low, 4.2 sigma is a pretty weak piece of evidence even though it's formally just a "1 in 100,000" risk that it's a fluke. Numerous 4-sigma signals have gone away.
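The quadrature combination and the "1 in 100,000" figure quoted above can be checked with a few lines of Python (standard library only):

```python
import math

# Combining two independent 3-sigma excesses in quadrature, plus the
# one-sided Gaussian tail probability corresponding to the result.
combined = math.sqrt(3**2 + 3**2)                    # sqrt(18) ~ 4.24 sigma
p_one_sided = 0.5 * math.erfc(combined / math.sqrt(2))
print(f"{combined:.2f} sigma, p ~ {p_one_sided:.1e}")  # ~1.1e-05, i.e. "1 in 100,000"
```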
In fact, there were apparently even 4-sigma bumps in diphotons that have gone away. Look at this April 2011 blog post about an ATLAS diphoton memo. At that time, it suggested that the Higgs boson could have existed and have the mass of \(115\GeV\). It was the most convincing value of the mass at that moment, I think – but you may also verify that I have never made any statements about being "comparably certain" about the \(115\GeV\) Higgs as I made about the \(125\GeV\) Higgs in December.
More...
The somewhat lighter \(115\GeV\) Higgs would have meant a more direct support for some of the most popular models with supersymmetry "right around the corner". Consequently, I think that it is probably not a coincidence that a Finnish company named Darkglass Electronics introduced a compressor called Supersymmetry \(115\GeV\) a bit later. By a compressor, they probably mean some gadget (a pedal!) for electric guitars but I am really confused what the product actually does LOL. They must have followed particle physics – or know someone who does. Or do you seriously believe that someone could have chosen this bizarre name that so accurately picks the excitement of a light Higgs in Spring 2011?
Too bad. If the Higgs were found at \(115\GeV\), I am sure that the pedal would be a huge bestseller. ;-)
Note that \(115\GeV\) was an intriguing value of the mass also because a decade earlier, the LEP collider saw a very weak hint of a Higgs boson at that mass – which was coincidentally the maximum mass that LEP was able to probe. At any rate, we know that the \(115\GeV\) hints went away (the same was true for hints around \(140\)-\(145\GeV\) from the Tevatron etc.) and a new bump ultimately began to emerge around \(125\GeV\) later in 2011 and it was declared a discovery on July 4th, 2012.
When the ATLAS and CMS see a very similar bump in the same channel, it may turn out to be a fluke – much like the \(115\GeV\) bump etc. In that case, it's not too interesting. Flukes have always taken place and they will never be banned. However, the signals may also be real – after all, 4.2 sigma isn't quite negligible evidence.
Because the particle decays to two photons, it must be a boson. I think that the simplest guess would be that it's a new Higgs boson. Supersymmetric models predict that there exist at least five faces of the God particle. Such a new boson would marginally increase the probability that supersymmetry is right – \(700\GeV\) is an allowed mass, of course. But this new particle wouldn't be a superpartner yet, so I wouldn't say that "supersymmetry will have been discovered" as soon as this new scalar boson were proven to exist.
However, a new Higgs boson would surely be exciting, anyway. We would enter the epoch "Beyond the Standard Model". It would mean that despite all the talk about the Standard Model's nearly eternal validity and the nicknames The Core Theory and similar ideology, the completed Standard Model (with all parameters basically known) will have only been valid for some modest four years! ;-) This completely real possibility is the right perspective from which you should evaluate the question whether the modest, technical, seemingly temporary name "The Standard Model" is appropriate for the particular quantum field theory. It surely is appropriate because we are aware of no good reasons why such a theory couldn't fall very soon.
There is an extra problem about any new particle of mass \(700\GeV\) that decays to two photons. What is the extra problem? The extra problem is that this is a pretty low mass and at these low masses, the increase of the LHC energy from \(8\TeV\) to \(13\TeV\) doesn't substantially improve the visibility of the new particle. So a new \(700\GeV\) particle visible in the 4/fb of the 2015 data should have been visible in the 20/fb of the 2012 data, too! But no one saw a big (or any) bump near \(700\GeV\) in the 2012 data, I think, although I am simply unable to find good diphoton graphs from the 2012 data that go this high.
This ATLAS diphoton graph only goes to \(600\GeV\), with an over-two-sigma bump around \(530\GeV\). The CMS graph goes up to \(3.5\TeV\) and is very chaotic around \(700\GeV\) or so. For a cleaner CMS paper, see the comments. Update: on Dec 15th in the morning, I found a TRF blog post and a CMS paper in it that discuss \(8\TeV\) excesses of 2.56 and 2.64 sigma between \(700\) and \(800\GeV\) on page 15.
If they are going to announce a bump in the simple \(\gamma\gamma\) channel, it will "probably" look like a pair of flukes due to the tension with the 2012 data. But the channel may be a more complicated one – with other particles produced simultaneously with the diphoton. If that's so, the process that creates the \(700\GeV\) particle may involve some heavier particles for which the increase from \(8\TeV\) to \(13\TeV\) is much more useful. Some multi-\({\rm TeV}\) particle (which is easy to produce now but was very hard in 2012) may decay to a product that decays further – we have decay chains or cascades etc.
The new particles that exist before the photons may belong e.g. to hidden valleys. What are hidden valleys? Matt Strassler has given this definition:
"An unexpected place … of beauty and abundance … discovered only after a long climb …" Some of the TRF readers may say: Could you please be a bit more specific about the steps 1-4, Dr Strassler? ;-) More seriously, the definition's being vague is pretty much the point of the hidden valleys. Hidden valleys are any collections (hidden sectors) of new particles and fields influencing experiments at accessible energies that look pretty useless at the beginning and that only play a role in the middle of some decay chains. Something more important stands at the beginning of the decay chains; but the chains always end with the Standard Model and not new particles.
To some extent, to focus on hidden valleys means to abandon the minimality – or Occam's razor, if you wish. I would say that minimality is overrated but it's still more sensible to spend more time with models that look simpler than with the contrived ones. One simply shouldn't try to construct completely arbitrary Rube Goldberg machines and test whether they're the right theories of Nature.
On Tuesday, they may announce something much more convoluted than just the "diphoton final state". Perhaps the events will suggest that there is a heavier particle such as the \(2\TeV\) \(W'\)-boson – or a superpartner, if you want to be really ambitious – that decays into some decay products including a new \(700\GeV\) Higgs which decays into two photons. I believe that the number of possibilities is too high here and I won't be able to pick a reasonable candidate of the sort.
Even if these two 3-sigma excesses were the only result in tension with the Standard Model on Tuesday, it should be a very good idea to watch the 2-hour talks carefully.
Note first that the first $=$ (equals) in $\frac{dl(\theta)}{d\theta} = 0 = −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta)$ should be interpreted as a "is set to", that is, we set $\frac{dl(\theta)}{d\theta} = 0$. Given that (apparently) $\frac{dl(\theta)}{d\theta} = −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta)$, $\frac{dl(\theta)}{d\theta} = 0$ is equivalent to $0 = −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta)$.
Now, let's apply some basic linear algebra:
\begin{align}0 &= −\frac{1}{2\sigma^2}(0−2X^TY + X^TX \theta) \iff \\0 &= −(0−2X^TY + X^TX \theta) \iff \\0 &= −0 + 2X^TY - X^TX \theta \iff \\0 &= 2X^TY - X^TX \theta \iff \\X^TX \theta &= 2X^TY \iff \\(X^TX)^{-1}(X^TX) \theta &= (X^TX)^{-1}2X^TY \iff \\\theta &= (X^TX)^{-1}2X^TY\end{align}
Now, about the leftover factor of $2$: differentiating the quadratic term $\theta^T X^TX \theta$ actually produces $2X^TX\theta$, so the derivative should read $−\frac{1}{2\sigma^2}(−2X^TY + 2X^TX \theta)$; the two factors of $2$ then cancel, leaving the familiar $\theta = (X^TX)^{-1}X^TY$. (A constant multiplying the whole gradient does not change where the optimum is, which is why the overall $−\frac{1}{2\sigma^2}$ can be dropped; a constant multiplying the estimate itself, however, would change it.)
Note that using $\hat{\theta}$ instead of $\theta$ is just to indicate that what we get is an "estimate" of the real $\theta$: it is computed from a finite, noisy sample, so it will generally differ from the true parameter that generated the data.
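The closed-form estimate can be sketched in a few lines of NumPy. The data below ($X$, $Y$, and the true $\theta$) are made up for illustration, not taken from the question:

```python
import numpy as np

# A minimal sketch of the closed-form estimate theta_hat = (X^T X)^{-1} X^T Y.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                   # 50 observations, 3 features
theta_true = np.array([1.0, -2.0, 0.5])
Y = X @ theta_true + 0.01 * rng.normal(size=50)

# Solve the normal equations (X^T X) theta = X^T Y directly;
# numerically preferable to forming the explicit inverse.
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(theta_hat)
```

Solving the normal equations (or better, using a QR-based least-squares routine) avoids explicitly inverting $X^TX$, which is both slower and less numerically stable.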
As I work on a problem with 3 linear equations in 2 unknowns, I notice that whenever I use any two of the equations I seem to find a solution. But when I plug that solution into the third equation, it may or may not produce a contradiction, depending on whether it happens to satisfy that equation too, and I am OK with that. What confuses me is that whichever two equations I pick, they seem to have no choice but to work. Is there something about linear algebra that makes this so, and are there conditions under which I won't find a consistent solution using only the two equations? My linear algebra is rusty and I am getting back up to speed. These are just equations of lines and maybe the geometry would explain it, but I am not sure how. Thank you.
Each linear equation represents a line in the plane. Most of the time two lines will intersect in one point, which is the simultaneous solution you seek. If the two lines have exactly the same slope, they may not meet so there is no solution or they may be the same line and all the points on the line are solutions. When you add a third equation into the mix, that is another line. It is unlikely to go through the point that solves the first two equations, but it might.
There are three possible cases for $2$ linear equations in $2$ unknowns (each line being determined by its slope and intercept):
$\qquad$ $\mathbf{0}$ solution points $\qquad$ $\qquad$ $\mathbf{1}$ solution point $\qquad$ $\qquad$ $\mathbf{\infty}$ solution points
$\qquad \quad$ $\nexists$ no existence $\qquad$ $\qquad$ $\exists !$ uniqueness $\qquad$ $\qquad$ $\exists$ no uniqueness
The lines have the form $y(x) = mx + b$.
Case 1: parallel lines
A solution
does not exist.
The lines are parallel: they have the same slope.
$$ % \begin{align} % y_{1}(x) &= m x + b_{1} \\ % y_{2}(x) &= m x + b_{2} \\ % \end{align} % $$
Case 2: intersecting lines
We have
existence and uniqueness.
The slopes are distinct.
$$m_{1} \ne m_{2}$$
$$ % \begin{align} % y_{1}(x) &= m_{1} x + b_{1} \\ % y_{2}(x) &= m_{2} x + b_{2} \\ % \end{align} % $$
Case 3: coincident lines
We have
existence, but not uniqueness. There is an infinite number of solutions. Every point solves the system of equations.
Both lines are the same.
$$ % \begin{align} % y_{1}(x) &= m x + b \\ % y_{2}(x) &= m x + b \\ % \end{align} % $$
In terms of linear algebra, look at the problem in terms of $\color{blue}{range}$ and $\color{red}{null}$ spaces.
The linear system for two equations is $$ % \begin{align} % m_{1} x - y &= b_{1} \\ % m_{2} x - y &= b_{2} \\ % \end{align} $$ which has the matrix form $$ % \begin{align} % \mathbf{A} x &= b \\ % \left[ \begin{array}{cc} m_{1} & -1 \\ m_{2} & -1 \\ \end{array} \right] % \left[ \begin{array}{cc} x \\ y \\ \end{array} \right] % &= % \left[ \begin{array}{cc} b_{1} \\ b_{2} \\ \end{array} \right] % \end{align} % $$
The Fundamental Theorem provides a natural framework for classifying data and solutions.
Fundamental Theorem of Linear Algebra
A matrix $\mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}$ induces four fundamental subspaces: $$ \begin{align} % \mathbf{C}^{n} = \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A} \right)} \\ % \mathbf{C}^{m} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} % \end{align} $$
Case 1: No existence
The matrix $\mathbf{A}$ has a rank defect $(m_{1} = m_{2})$ and $b_{1} \ne b_{2}$. $$ b = \color{blue}{b_{\mathcal{R}}} + \color{red}{b_{\mathcal{N}}} $$ It is the $\color{red}{null}$ space component which precludes direct solution. (Interestingly enough, there is a least squares solution.)
The data vector $b$ is not a combination of the columns of $\mathbf{A}$. The column space is $$ \mathbf{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m \\ -1 \end{array} \right] } \, \right\} \qquad \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} = % \text{span } \left\{ \, \color{red}{ \left[ \begin{array}{r} -1 \\ m \end{array} \right] } \, \right\} $$
Case 2: Existence and uniqueness
The matrix $\mathbf{A}$ has full rank $(m_{1}\ne m_{2})$. The data vector is entirely in the $\color{blue}{range}$ space $\color{blue}{\mathcal{R} \left( \mathbf{A} \right)}$ $$ b = \color{blue}{b_{\mathcal{R}}} $$
The $\color{red}{null}$ space is trivial: $\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}=\mathbf{0}$. $$ \mathbf{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m_{1} \\ -1 \end{array} \right] }, \, \color{blue}{ \left[ \begin{array}{c} m_{2} \\ -1 \end{array} \right] } \right\} $$
Case 3: Existence, no uniqueness
The matrix $\mathbf{A}$ has a rank defect $(m_{1} = m_{2} = m)$, yet $b_{1} = b_{2}$. $$ b = \color{blue}{b_{\mathcal{R}}} $$ The column space has $\color{blue}{range}$ and $\color{red}{null}$ space components: $$ \mathbf{C}^{2} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)} $$ The decomposition is $$ \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} = % \text{span } \left\{ \, \color{blue}{ \left[ \begin{array}{c} m \\ -1 \end{array} \right] } \, \right\} \qquad \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} = % \text{span } \left\{ \, \color{red}{ \left[ \begin{array}{r} -1 \\ m \end{array} \right] } \, \right\} $$
Postscript: the theoretical foundations here are useful. The trip to understanding starts with simple examples like in @Nick's comment.
Let's think using vector notation.
A linear system with two unknowns $x$ and $y$, and two equations $$ \begin{align*} v_1 x + w_1 y &= a_1 \\ v_2 x + w_2 y &= a_2 \end{align*} $$ can be written in vector notation as $$ x\, \vec{v} + y\, \vec{w} = \vec{a}. $$ That is, you want to know if $\vec{a}$ can be written as a linear combination of $\vec{v}$ and $\vec{w}$.
Fixing the vectors $\vec{v}$ and $\vec{w}$, to say that a solution exists for every $\vec{a}$ is the same as to say that $\vec{v}$ and $\vec{w}$ span the whole plane. If they do not (i.e., $\vec{v}$ and $\vec{w}$ are parallel), then depending on $\vec{a}$, a solution might not exist. And when it does exist, it will not be unique.
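The asker's situation (three lines, two unknowns) can be checked mechanically: the system $A z = b$ is consistent if and only if $\operatorname{rank}(A)$ equals the rank of the augmented matrix $[A \mid b]$. The particular numbers below are illustrative:

```python
import numpy as np

# Three lines in two unknowns x, y written as rows of A z = b.
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
b_good = np.array([3.0, 1.0, 5.0])   # all three lines pass through (2, 1)
b_bad = np.array([3.0, 1.0, 6.0])    # third line misses that point

def consistent(A, b):
    """Rouche-Capelli test: consistent iff rank(A) == rank([A | b])."""
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

print(consistent(A, b_good), consistent(A, b_bad))   # True False
```

Any two of the three equations here have distinct slopes, so they always intersect in one point; the rank test then tells you whether the third line happens to pass through it.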
I don't have much to add to mweiss' nice answer in terms of teaching suggestions. But I would like to add my own point of view on what the abuse of notation $\mathbf{r}=\mathbf{r}(s)=\mathbf{r}(t)$
means and where it comes from historically.
The first person to do this abuse of notation (implicitly) seems to have been Jacobi around 1830 (I suggest you read that link
after reading the rest of what I'll say). Note that 1830 is more than 100 years after Bernoulli and Euler introduced the notation $y=f(x)$ and more than 140 years after Leibniz started to talk of functions. In the period between Leibniz and Jacobi we find luminaries like Euler, Lagrange, Laplace, Fourier, Bolzano, Cauchy and Gauss who, as far as I can tell, never did this. So it's not the case, as some people believe, that mathematicians and physicists have been writing $y=y(x)$ ever since the invention of functions. Even after Jacobi there are many people, like Riemann, Peano or Planck who apparently didn't do it. So it's also not the case that physicists invented $y=y(x)$ or somehow need it more urgently than mathematicians.
So why did Jacobi start doing it? In the above link you can read what he himself had to say about it (I highly recommend it to anyone teaching multivariable calculus), but his own words actually don't explain it completely. To understand it better we first need to understand how the word
function was used prior to about 1900. Most mathematicians seem unaware of the dramatic change of meaning this word underwent during the period 1900-1920. And it's a non-trivial task for a modern mathematician with a set theoretic perspective to make sense of what it meant prior to 1900. But let's try.
If you open
any calculus textbook written after Leibniz and before at least 1910 (a ~200 year period) you'll find that the word "function" is always defined as a function of something. Here is a typical example from the end of that period, taken from Peano's Calcolo differenziale e principii di calcolo integrale 1884, p.3:
Among the variables there are those to which we can assign arbitrarily and successively different values, called independent variables, and others whose values depend on the values given to the first ones. These are called dependent variables, or functions of the first ones.
We shall first treat functions of a single independent variable, and
we shall say that:
a function $y$ of $x$ is given in an interval
$(a, b)$, if to any value of $x$ in between $a$ and $b$ corresponds a
unique and determinate value for $y$ - whatever the means of
determining it.
So for example $x^{2}$ is a definite function of $x$ for any value of
$x$, and is hence given in any interval; $\sqrt{x}$ understood as the
arithmetic root of $x$, is given for all positive values of $x$ ;
while $\frac{1}{1} + \frac{1}{2} +\frac {1}{3} +\cdots + \frac{1}{x}$
is a function of $x$ defined only for integer and positive values of
the variable, etc.
So a function, like $x^2$ or $y$, is a variable quantity, just like $x$, only that it satisfies some additional property. If you feel uncomfortable with "$y$ is a function of $x$" because you are so used to modern functions, maybe paraphrasing it as "$y$ depends on $x$" helps a bit. (Certainly if $f:\mathbb{R}\to \mathbb{R}$, no modern mathematician would say that $f$ depends on $x\in \mathbb{R}$.) Hence whenever they called something a function, they had to add
of what that thing is a function. But often it was clear from the context or the notation, so they would soon drop the "of $x$" and simply talk of functions. So for example on p.13 we find Peano writing
One says that a function $f(x)$ becomes maximal relative to an
interval $(a,b)$ when $x=x_0$...
To be correct he should have said something like: "One says that a function
of $x$, $f(x)$, becomes ...". But it was clear from the notation. (Observe that nothing prevents $f(x)$ from being a function of something else too. For example when $x$ is itself a function of $t$, then $f(x)$ would also be a function of $t$.) We inherited the phrase "$f(x)$ is a function" —which we find in every modern calculus textbook (and which, btw, you used in the question)— from this pre-1900 period, even though according to our modern convention we should correctly say "$f$ is a function".
To emphasise again the difference between "function of ..." and our modern "functions": when $y=f(x)$ they called $y$ and $f(x)$ a function, while we call $f$ a function.
If they called $f(x)$ the function, how did they call $f$? After all the notation $f(x)$ existed since Bernoulli ~1718. Didn't they also call $f$ a function? No, not officially. The first ones (Bernoulli, Euler, Lagrange) called $f$ the
characteristic of the function $f(x)$. I interpret this as saying: it's just a character used to distinguish one function of $x$, say $f(x)$, from another, say $g(x)$. But mainly people before 1900 didn't call $f$ anything at all! Peano for example doesn't in his calculus book. In fact, no one treated $f$ as a mathematical object on its own, before Dedekind, Peano, Cantor and Frege (to a certain extent independently) changed that around 1890. (There are some surprising historical bits in that thread. For example: Dedekind, Peano and Cantor all gave $f$ a new name, calling it resp. "map", "prefix sign for a function" and "allocation". They all preserved the original use of "function" for $f(x)$! Moreover Dedekind formulated his notion of map nine years before realising that he could identify it with the $f$ appearing in their functions $f(x)$. Only Frege (unfortunately) suggested calling $f$ a function, and somehow it became standard. We would probably have less difficulty communicating today, if $f$ had not gotten the name function.)
Returning to $y=y(x)$ and why Jacobi started that. Besides the good reasons about partial derivatives he gives in De Determinantibus 1840, I suspect that he simply wanted a notation to indicate
of what a certain variable is to be considered a function. I used to believe that in the equation $y=y(x)$ the $y$ on the left and on the right were objects of different types which by abuse were given the same name (the right one being of type $\mathbb{R}\to\mathbb{R}$, the left one of type $\mathbb{R}$.) But I now think that this is not what Jacobi intended and how physicists would like to use it. Instead, we should literally think of the $y$ on the right and left as the same object (of type "variable quantity"), and what is being abused is the notation for function application $f(x)$. In other words: had Jacobi chosen another notation like square brackets $y[x]$ to annotate that $y$ is considered as a function of $x$, instead of using the already existing notation $f(x)$ for "application of a map to a variable", we might have less trouble with it now. I suspect one could formalise this idea similarly to how computer scientists formalise "ascription". See for example Pierce, Types and Programming Languages.
Having said that, I don't think that by doing that we would gain much. Writing $\mathbf{r}=\mathbf{r}[t]$ seems mainly an aid for the reader, and we might as well do without it, as mweiss suggested. The harder problem seems to be to correctly formalise the notion of "variable quantity" and surrounding things like the notation $dy/dx$.
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be the "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer) only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$. External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv...
As I am now studying the history of the discovery of electricity, I have been searching for each scientist on Google, but I am not getting good answers about some of them. So I want to ask you to recommend a good app for studying the history of these scientists.
I am working on correlation in quantum systems.Consider for an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$ under the assumption which fulfilled continuity.My question is that would it be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians*, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars.

Method. One handed: One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
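The counting rule above can be sketched as a tiny function (a toy illustration; `knuckle_days` is my own name, not a standard library routine):

```python
# A sketch of the one-handed knuckle count described above: months that land
# on a knuckle get 31 days, months in a depression get 30 (or 28/29 for
# February).

def knuckle_days(month):
    """Days in `month` (1-12) for a non-leap year, via the knuckle pattern."""
    if month == 2:
        return 28  # 29 in leap years
    # positions 1..7 run little finger -> index finger; August wraps back
    # to the little-finger knuckle, so months 8-12 map like months 1-5
    pos = month if month <= 7 else month - 7
    return 31 if pos % 2 == 1 else 30
```

Odd knuckle positions are knuckles (31 days), even positions are depressions (30 days), which reproduces the usual month lengths.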
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it |
I am learning the Riemann Surfaces and I have a question when I read the appendix A.1.5. of textbook A course Complex analysis and Riemann surfaces by W.Schlag.
One way to define the degree of a non-constant analytic map $f$ between Riemann surfaces is $$\deg(f) := \sum_{p\in f^{-1}(q)}(b_{p}(f)+1), $$ where $b_{p}(f)$ denotes the branch number of $f$ at $p$.
And the other notion of the degree from topology,
Let $f: M \to N$ be a smooth map and $f^{*}: H^{n}(N) \to H^{n}(M)$ the induced map defined via the pullback. There exists a real number denoted by $\deg(f)$ s.t. $$ \int_{M} f^{*}(\omega)=\deg(f) \int_{N} \omega,\ \forall \omega \in H^{n}(N).$$
For the Riemann surfaces $n=2$, hence the definition of the degree is that $$\int_{M} f^{*}(d\sigma)=\deg(f) \int_{N} d\sigma,\ \forall d\sigma \in H^{2}(N).$$
$\textbf{My question}$ is why it is easy to verify that for any $\textbf{regular}$ value $q\in N$, meaning that $Df(p): T_{p}M \to T_{q} N$ is invertible for every $p$ with $f(p)=q$, $$\deg(f)= \sum_{p\in f^{-1}(q)} \operatorname{Ind}(f;p),$$ where $\operatorname{Ind}(f;p)=\pm 1$ depending on whether $Df(p)$ preserves or reverses the orientation. |
Edit: I'm leaving the old post below, but first I want to write the proof as suggested by Bruce from his book, which uses the ideas in a more efficient way.
Assume that $\|p-q\|<1$, with $p,q\in A$, a unital C$^*$-algebra. Let $x=pq+(1-p)(1-q)$. Then, as $2p-1$ is a unitary, $$\|1-x\|=\|(2p-1)(p-q)\|=\|p-q\|<1.$$So $x$ is invertible. Now let $x=uz$ be the polar decomposition, $z=(x^*x)^{1/2}\in A$. Then $u=xz^{-1}\in A$. Also, $px=pq=xq$, and $qx^*x=qpq$, so $qx^*x=x^*xq$, and then $qz=zq$. Then$$pu=pxz^{-1}=xqz^{-1}=uzqz^{-1}=uqzz^{-1}=uq.$$So $q=u^*pu$.
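The construction can be sanity-checked numerically in the C$^*$-algebra $M_2(\mathbb C)$ (a sketch; the two projections are my own example, chosen so that $\|p-q\|<1$):

```python
import numpy as np
from scipy.linalg import polar

# Numerical check of the argument above in M_2(C): for projections p, q with
# ||p - q|| < 1, the unitary polar factor u of x = pq + (1-p)(1-q)
# conjugates p to q, i.e. q = u* p u.

def rank_one_projection(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

p = rank_one_projection(np.array([1.0, 0.0]))
q = rank_one_projection(np.array([1.0, 0.3]))  # close to p
assert np.linalg.norm(p - q, 2) < 1

x = p @ q + (np.eye(2) - p) @ (np.eye(2) - q)
u, z = polar(x)  # polar decomposition x = u z with z = (x*x)^{1/2} >= 0
assert np.allclose(u.conj().T @ p @ u, q)
```

Since $\|p-q\|<1$, the element $x$ is invertible, so `polar` returns a genuine unitary factor and the final assertion is exactly the conclusion $q=u^*pu$.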
=============================================
(the old post starts here)
(A good friend pointed me to the ideas in this answer, so I'm sharing them here)
The result holds in any unital C$^*$-algebra. So assume that $\|p-q\|<1$, with $p,q$ in a unital C$^*$-algebra $A\subset B(H)$.
Claim 1: There is a continuous path of projections joining $p$ and $q$.
Proof. Let $\delta\in(0,1)$ with $\|p-q\|<\delta$. For each $t\in[0,1]$, let $x_t=tp+(1-t)q$. Then$$\|x_t-p\|=\|(1-t)(p-q)\|<\delta(1-t),$$$$\|x_t-q\|=\|t(p-q)\|<\delta t.$$This, together with the fact that $x_t$ is selfadjoint, implies that $\sigma(x_t)\subset K=[-\delta/2,\delta/2]\cup[1-\delta/2,1+\delta/2]$ (since $\min\{t,1-t\}\leq1/2$). Now let $f$ be the continuous function on $K$ defined as $0$ on $[-\delta/2,\delta/2]$ and $1$ on $[1-\delta/2,1+\delta/2]$. Then, for all $t\in[0,1]$, $f(x_t)\in A$ is a projection. And$$t\to x_t\to f(x_t)$$is continuous, completing the proof of the claim. Edit: years later, I posted this answer to a question on MSE that proves the continuity.
Claim 2: We may assume without loss of generality that $\|p-q\|<1/2$.
This is simply a compactness argument, using that each projection in the path $f(x_t)$ is very near another projection in the path. Compactness allows us to make the number of steps finite, and so if we find projections $p=p_0,p_1,\ldots,p_n=q$ and unitaries with $u_kp_ku_k^*=p_{k+1}$, we can multiply the unitaries to get the unitary that achieves $q=upu^*$.
Claim 3: If $\|p-q\|<1/2$, there exists a unitary $u\in A$ with $q=upu^*$.
Let $x=pq+(1-p)(1-q)$. Then $$\|x-1\|=\|2pq-p-q\|=\|p(q-p)+(p-q)q\|\leq2\|p-q\|<1,$$so $x$ is invertible. Let $x=uz$ be the polar decomposition. Then $u$ is a unitary. Note that$$qx^*x=q(qpq+(1-q)(1-p)(1-q))=qpq,$$so $q$ commutes with $x^*x$ and then with $z=(x^*x)^{1/2}$. Note also that $px=xq$, so $puz=uzq=uqz$. As $z$ is invertible, $pu=uq$, i.e.$$q=u^*pu.$$Note that $u=xz^{-1}\in A$. |
I have this funny equation
$$ \frac{\partial^2 u}{\partial t^2} = \frac{\partial^3 u}{\partial x^3}, \qquad x \in [0,1], \qquad t \in (0,T] $$ with initial conditions $u(x,0) = \sin(2\pi x)$, $\frac{\partial u}{\partial t}(x,0) = 0$ and periodic boundary conditions: $u(0,t) = u(1,t)$, $\frac{\partial u}{\partial x}(0,t) = \frac{\partial u}{\partial x}(1,t)$ and $\frac{\partial^2 u}{\partial x^2}(0,t) = \frac{\partial^2 u}{\partial x^2}(1,t)$. The exact solution of this model is $$ u(x,t) = \frac{1}{2}[e^{2 \pi^{\frac32}t} \sin (2\pi x - 2\pi^{\frac32} t) + e^{-2\pi^{\frac32}t} \sin (2\pi x + 2 \pi^{\frac32}t)] $$ I would like to use the boundary value technique to numerically solve this equation, i.e. I would like to write the following system of $2MN$ unknowns in matrix form and solve it in Matlab.
I partition time as $0 = t_0 < t_1 < \dots < t_{N} = 0.1$. Step size is of length $h=\Delta t =\frac{0.1}{N}$, i.e. $t_n = t_0 + nh$, $n=1,2,\dots,N$.
I partition space as $0 = x_0 < x_1 < \dots < x_{M} = 1$. Step size is of length $h'=\Delta x =\frac{1}{M}$, i.e. $x_i = x_0 + ih'$, $i=1,2,\dots,M$.
I rewrite the equation as system,
\begin{align*} u_t &= v, \\ v_t &= u_{xxx} \end{align*}
and using central approximation for time derivative and second order scheme for space derivative
\begin{align*} \frac{u_i^{n+1} - u_i^{n-1}}{2\Delta t} &= v_i^n \\ \frac{v_i^{n+1} - v_i^{n-1}}{2\Delta t} &= \frac{u_{i+2}^n - 2 u_{i+1}^n +2u_{i-1}^n -u_{i-2}^n}{2\Delta x^3} \end{align*}
To add an interesting fact: this can't be solved via a step-by-step approach, at least as far as I know. BTCS and FTCS methods do not work, and other schemes are not stable either (the eigenvalues of the step-by-step matrix form a cross in the complex plane and lie outside the stability regions of the schemes for any $\Delta t$ and $\Delta x$). This is why I would like to use the approach of solving a system of equations to get the values at the grid points.
Here is my attempt at deriving the system of equations when I introduce the vector $v$. I define $u^{n}:=[u_1^{n},u_2^{n},\dots,u_M^{n}]^T$ and $v^{n}:=[v_1^{n},v_2^{n},\dots,v_M^{n}]^T$ and then $w^n= [(u^n)^T,(v^n)^T]^T$, and then discretizing the space yields
$$ \begin{pmatrix} u^n \\ v^n \end{pmatrix}_t = \begin{pmatrix} 0 & I \\ A & 0 \end{pmatrix}w^n =: D w^n $$ where $$ A=\frac{1}{2 \Delta x^3} \begin{pmatrix} 0 & -2 & 1 & 0 & \cdots & 0 & -1 & 2 \\ 2 & 0 & -2 & 1 & 0 & \cdots & 0 & -1\\ -1 & 2 & 0 & -2 & 1 & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -1 & 2 & 0 & -2 & 1 \\ 1 & 0 & \cdots & 0 & -1 & 2 & 0 & -2 \\ -2 & 1 & 0 & \cdots & 0 & -1 & 2 & 0 \end{pmatrix} $$ using periodic boundary conditions. Central discretization in time yields
$$w^{n+1} - w^{n-1} - 2\Delta tD w^n = 0$$
In the last step I use the backward-in-time derivative approximation to get
$$-w^{N-1} - (\Delta tD - I) w^{N} = 0$$
Lastly, defining $w=[(w^0)^T,(w^1)^T,\dots,(w^N)^T]^T$,
\begin{align*} C &=\begin{pmatrix} I & 0 & 0 & 0 & \dots & 0 & 0 & 0 \\ -I & -2\Delta tD & I & 0 & \dots & 0 & 0 & 0 \\ 0 & -I & -2\Delta tD & I & \ddots & 0 & 0 & 0 \\ 0 & 0 & -I & -2\Delta tD & \ddots & \ddots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & \ddots & -2\Delta tD & I & 0 \\ 0 & 0 & 0 & 0 & \ddots & -I & -2\Delta tD & I \\ 0 & 0 & 0 & 0 & \dots & 0 & -I & I-\Delta tD \\ \end{pmatrix}\\ \end{align*} and $c = ((w^0)^T,0...,0)^T$ i have to solve
$$C w = c$$
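For concreteness, the assembly and solve of $Cw=c$ can be sketched with sparse matrices (a Python/SciPy illustration rather than Matlab code; the function names are mine, and the blocks follow the matrices $A$, $D$, $C$ defined above):

```python
import numpy as np
from scipy.linalg import circulant
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def third_derivative_matrix(M, dx):
    """Circulant matrix A for the periodic central stencil of u_xxx."""
    c = np.zeros(M)
    c[1], c[2] = 2.0, -1.0          # coefficients of u_{i-1}, u_{i-2}
    c[M - 1], c[M - 2] = -2.0, 1.0  # coefficients of u_{i+1}, u_{i+2}
    return circulant(c) / (2 * dx**3)

def solve_all_at_once(u0, v0, T, N):
    """Solve C w = c for w = [w^0; ...; w^N]; returns u at every time level."""
    M = len(u0)
    dx, dt = 1.0 / M, T / N
    A = third_derivative_matrix(M, dx)
    I2 = sp.identity(2 * M, format="csr")
    D = sp.bmat([[None, sp.identity(M)], [sp.csr_matrix(A), None]], format="csr")
    # block rows: w^0 given; interior: -w^{n-1} - 2*dt*D w^n + w^{n+1} = 0;
    # last row (backward Euler): -w^{N-1} + (I - dt*D) w^N = 0
    blocks = [[None] * (N + 1) for _ in range(N + 1)]
    blocks[0][0] = I2
    for n in range(1, N):
        blocks[n][n - 1], blocks[n][n], blocks[n][n + 1] = -I2, -2 * dt * D, I2
    blocks[N][N - 1], blocks[N][N] = -I2, I2 - dt * D
    C = sp.bmat(blocks, format="csc")
    c = np.zeros(2 * M * (N + 1))
    c[: 2 * M] = np.concatenate([u0, v0])
    w = spla.spsolve(C, c)
    return w.reshape(N + 1, 2 * M)[:, :M]  # keep the u-part of each w^n
```

A sparse direct solve (`spsolve`) exploits the block-tridiagonal structure, which should be considerably faster than forming an explicit inverse of $C$.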
Edit: It works in Matlab now and the results are nice. The only drawback is that the matrix inversion is not fast. |
Welcome to the LHCb web

Organisation
We are a relatively new group on the LHCb experiment, and as such we are quite small. However, we are rapidly growing, and this section will keep track of who is working on what. Please feel free to edit your section if I have missed anything out.
Nigel Watson: Head of the LHCb group. Responsible for implementing new versions of Geant4 within the LHCb framework, and testing performance against previous versions. Working on the %$\Lambda_{\rm b}^{0} \rightarrow \Lambda^0 \mu^+ \mu^-$% analysis.
Cristina Lazerroni: Working on the %$\Lambda_{\rm b}^{0} \rightarrow \Lambda^0 \mu^+ \mu^-$% analysis. Also head of the NA62 experiment at Birmingham.
Mark Slater: Responsible for GRID operations and GANGA development.
Jimmy McCarthy: 4th year PhD student. Working on the search for %$\Lambda_{b}^{0} \rightarrow \Lambda^{0} \eta^{(\prime)}$% decays. Optimising the Geant4 generator cuts used by LHCb in order to improve the performance of Monte Carlo productions. Also responsible for the XML description of the beam pipe supports.
Pete Griffith: 3rd year PhD student. Currently working on the %$\Lambda_{b}^{0} \rightarrow J/\psi p K$% analysis and the search for prompt %$\psi(3770)$%.
Luca Pescatore: 3rd year PhD student. Currently working on the RK* analysis and on the update of %$\Lambda_{\rm b}^{0} \rightarrow \Lambda^0 \mu^+ \mu^-$%, including 2012 data and an angular analysis.
Simone Bifani: Research Fellow in Particle Physics. Currently working on RK*.
Nathanael Farley: 2nd year PhD student. Working on the search for %$B_{c}^{+} \rightarrow K^{*}(892) K^{-}$%.
Tim Williams: 1st year student.
We currently have regular meetings every Wednesday, 09:00 in the West 229 meeting room
Ongoing and past analyses
VELO tests
Paper reviews

For New Students
Welcome to Birmingham. Click here
for information to get yourself started.
Physics Analysis
These pages are aimed at giving a brief introduction to the tools needed for a physics analysis. It is intended as a guide only, to jog your memory and give some example code. Please refer to the manuals and tutorials for a full description of how everything works.
Using DaVinci to fill an nTuple from Data or Monte Carlo. Using ROOT to analyse the nTuple. Using the TMVA tool to refine your selection. Using RooFit to perform fits to Data.

Simulations
These pages are aimed at giving a brief introduction to the tools needed to run simulations for your technical work. It is intended as a guide only, to jog your memory and give some example code. Please refer to the manuals and tutorials for a full description of how everything works.
Validation Studies. Using Gauss and Boole to perform simulations. Using Panoramix to visualize the detector. Writing XML for the detector descriptions. Making a new release of Geant4 for LHCb.

Working Locally
We have local copies of the LHCb software installed on eprexb. The advantage of working locally is that it is a lot quicker than trying to run emacs etc. over the network. The disadvantage is that every project has to be installed before it can be used, and you can only use the installed version. When a new version is released by LHCb, it needs to be installed locally before it can be used.
The projects currently available are:
Bender v21r2, v22r3 Geant4 v94r2p2, v94r2p4, v94r2p6, v95r2, v95r2p4g1 v94r2p5, v95r2p5g1, v95r2p5g3 Gauss v41r1, v41r4, v42r1, v43r3, v44r4, v45r3, v46r1, v46r2, v46r2p1 Boole v25r1, v26r6 Moore v14r8p1, v20r3p1 Brunel v44r8 DaVinci v32r2, v32r2p1, v33r1, v46r3 Panoramix v21r3, v22r0 Erasmus v8r1, v9r0
All the software is stored in:
/home/lhsoft
We also have a directory in which to store any large data files we use. This can be found in:
/home/lhdata1
Everybody in the group has write access to both areas, so feel free to add your own data, or install software. These areas are not backed up very regularly, so it is a good idea to keep a copy of all the data files on castor as well.
To use the LHCb software locally, you first need to set up an LHCb environment with the login scripts available. You can then set up a project in the same way you would on lxplus.
source /home/lhsoft/LbLogin.sh
SetupProject davinci
Installing a new version of the software is easy to do:
export MYSITEROOT=/home/lhsoft/
cd $MYSITEROOT
python install_project.py -b DaVinci v33r0
And then please update the list above so we can keep track of which projects are available.
Useful Links LHCb Web Utilities |
In Multivariable Calculus, we can easily find the gradient of a scalar function (producing a scalar field) $f : \mathbb{R^n} \to \mathbb{R}$, and the gradient function would produce a vector field.
$$\operatorname{grad}(f) = \vec{\nabla}(f) = \left< \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2} , \dots , \frac{\partial f}{\partial x_n} \right> = \begin{bmatrix} \frac{\partial f}{\partial x_{1}} \\ \frac{\partial f}{\partial x_{2}} \\ \vdots \\ \frac{\partial f}{\partial x_{n}} \end{bmatrix}$$
Evaluating Vector Functions By Components
In Multivariable Calculus we learn that we can differentiate any vector function by taking the derivatives of its scalar components/functions, likewise we also learn that we can integrate any vector function by integrating each of its scalar components.
e.g.
Given a function $\vec{g} : \mathbb{R} \to \mathbb{R^m}$, comprised of scalar functions $f_{i} : \mathbb{R} \to \mathbb{R} $
$${\vec{g}}\,'(t) = \left< f_{1}'(t), f_{2}'(t), \dots, f_{m}'(t)\right> = \begin{bmatrix} f_{1}'(t) \\ f_{2}'(t) \\ \vdots \\ f_{m}'(t) \end{bmatrix}$$
$$\int\vec{g}(t)\,dt = \left< \int f_{1}(t)\,dt \ , \int f_{2}(t)\,dt\ , \ \dots, \ \int f_{m}(t)\,dt\right> = \begin{bmatrix} \int f_{1}(t)\,dt \\ \int f_{2}(t)\,dt \\ \vdots \\ \int f_{m}(t)\,dt \end{bmatrix}$$
Can we do the same for the Del Operator?
Since we can differentiate and integrate any vector function by taking the derivatives or integrals of its scalar components, can we evaluate the gradient of a vector function by applying the del operator to each of its scalar components, computing the gradient of each scalar function? I realize that this would produce a tensor field as a result.
Again given the same vector function $g : \mathbb{R^n} \to \mathbb{R^m}$, comprised of scalar functions $f_{i} : \mathbb{R^n} \to \mathbb{R} $,
can we say the following :
$$ T = \operatorname{grad}(\vec{g}) = \vec{\nabla}(\vec{g}) = \left< \vec{\nabla}(f_1), \vec{\nabla}(f_2), \dots, \vec{\nabla}(f_m) \right> = \begin{bmatrix} \vec{\nabla}(f_1) \\ \vec{\nabla}(f_2) \\ \vdots \\ \vec{\nabla}(f_m) \end{bmatrix}$$
With $T$ denoting the tensor field obtained by taking the gradient of the vector field produced by the function $g$.
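This componentwise construction can be checked numerically: stacking the gradient of each scalar component row-wise gives the Jacobian matrix, the rank-2 tensor $T$ above (a sketch with finite differences; the function names and the example $g$ are my own):

```python
import numpy as np

# Applying the gradient to each scalar component f_i of g and stacking the
# results as rows produces the Jacobian matrix of g - the tensor T above.
# Gradients are approximated by central finite differences.

def numerical_gradient(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def grad_of_vector_function(components, x):
    # one gradient per scalar component, stacked into rows
    return np.array([numerical_gradient(f, x) for f in components])

# example g : R^2 -> R^2 with components f1(x,y) = x*y, f2(x,y) = x + y^2
components = [lambda p: p[0] * p[1], lambda p: p[0] + p[1] ** 2]
T = grad_of_vector_function(components, np.array([2.0, 3.0]))
```

At $(2,3)$ the rows are $\nabla f_1 = (y, x) = (3, 2)$ and $\nabla f_2 = (1, 2y) = (1, 6)$, which is exactly the Jacobian of $g$ there.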
Just to close off: I realize that a vector function can take both vectors and scalars as inputs, and my question only covers the case of scalar inputs to a vector function. However, extending this to vector inputs would be a fairly trivial task, as we could break the vector inputs into their scalar components and work from there, which is covered within the scope of this question. |
This is standard theory. Try
Birrell, N. D., & Davies, P. C. W. (1982). Quantum Fields in Curved Space. Cambridge: Cambridge University Press.
Bog standard Curved space QFT text. Don't remember how much is said specifically about spinors though.
Brill, D., & Wheeler, J. (1957). Interaction of Neutrinos and Gravitational Fields. Reviews of Modern Physics, 29(3), 465–479. doi:10.1103/RevModPhys.29.465
<-- This paper was particularly clear from memory.
Yepez, J. (2011). Einstein’s vierbein field theory of curved space. General Relativity and Quantum Cosmology; History of Physics. Retrieved from http://arxiv.org/abs/1106.2037
Great discussion. Thanks twistor59.
Boulanger, N., Spindel, P., & Buisseret, F. (2006). Bound states of Dirac particles in gravitational fields. Physical Review D, 74(12). doi:10.1103/PhysRevD.74.125014
Technical examples worked out in painful detail
Lasenby, A., Doran, C., & Gull, S. (1998). Gravity, gauge theories and geometric algebra. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 356(1737), 487–582. General Relativity and Quantum Cosmology; Astrophysics. doi:10.1098/rsta.1998.0178 http://arxiv.org/abs/gr-qc/0405033
Geometric algebra technique - a powerful and elegant modern formalism that I'm hardly an expert on. See Muphrid's answer for more details. :)
These are less specific to the question but still with material pertaining to it:
There are other references. I'll put them in as I think of them or others point them out (thanks guys!).
The reason you need the spin connection is because fundamentally you need the tetrad or orthonormal frame fields. These fields give a set of "laboratory frames" at every point in spacetime:
$$ e^\mu_a(x),\ \text{with}\ e^\mu_a(x) e_{\mu b}(x) = \eta_{ab}, $$
where $a$ labels the field $a=\hat{t},\hat{x},\cdots$ and $\mu$ is the spacetime vector index. The intuitive meaning of this is that $e^\mu_{\hat{t}}$ represents the 4-velocity of the lab, $e^\mu_{\hat{x}}$ is bolted down on the lab bench oriented along the $x$-axis etc. You can prove the relationship
$$ g_{\mu\nu}(x) = \eta_{ab} e^a_\mu(x) e^b_\nu(x). $$
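This relationship is easy to verify numerically for a simple diagonal example (a sketch; the FRW-like metric and tetrad components are my own illustrative choice):

```python
import numpy as np

# Numerical check of g_{mu nu} = eta_{ab} e^a_mu e^b_nu for a diagonal,
# FRW-like spatial metric at one fixed time. Rows of e are the frame index a,
# columns are the spacetime index mu, so the relation reads g = e^T eta e.

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric in the lab frame
a = 2.0                                # scale factor at some fixed time
g = np.diag([-1.0, a**2, a**2, a**2])  # target spacetime metric
e = np.diag([1.0, a, a, a])            # tetrad components e^a_mu
assert np.allclose(e.T @ eta @ e, g)
```

Multiplying `e` on the left by any constant Lorentz matrix leaves the product `e.T @ eta @ e` unchanged, which is the local Lorentz redundancy described next.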
For this reason the tetrad is commonly known as the "square root of the metric," which is not an entirely satisfactory notion. Anyway, you can see that the tetrad is not uniquely defined. Any tetrad related to another by $e'^a_\mu(x) = \Lambda^a_b(x) e^b_\mu(x)$ where $\Lambda^a_b(x)$ is a
local Lorentz transformation is just as good. This corresponds to the freedom of different labs to rotate their axes and boost themselves independently.
This means the theory has a huge built in redundancy - local Lorentz invariance - which plays a similar role for spinors as coordinate tranformation invariance does in GR. The tetrads are necessary because spinor representations are defined in relation to the double cover of the Lorentz group SL(2,C), which cannot be represented in terms of tensors under the diffeomorphism group. You can however define spinors relative to a locally Minkowski frame:
$$ \psi(x) \rightarrow \left( 1 - \frac{i}{2} \omega_{ab}(x) \sigma^{ab} \right) \psi(x), $$
where $\omega_{ab}(x)$ is a
local Lorentz transformation and $\sigma^{ab}\propto [\gamma^a,\gamma^b]$ are the generators of spinor transformations. The spinors basically live in an internal space. The next key idea is that you have to then be able to "solder" SL(2,C) representations in these frames together consistently to cover the spacetime. The consistency conditions you desire are that:
$\bar{\psi} \psi$ is a scalar field
The product rule and linearity work for covariant derivatives
The tetrad postulate, i.e. compatibility of the spinor covariant derivative with the ordinary covariant derivative
These conditions form the relationship between the internal spin and the spacetime, and they give the formula for the spin connection:
$$ \omega_{\mu b}^{a}=e_{\lambda}^{a}\Gamma_{\mu\nu}^{\lambda}e_{b}^{\nu}-\left(\partial_{\mu}e_{\nu}^{a}\right)e_{b}^{\nu}. $$
In older literature you may see curved space gamma matrices defined by contraction with the tetrad:
$$ \gamma^\mu(x) = \gamma^a e^\mu_a(x). $$
This is fine but I find it less confusing to keep the tetrads explicit. Note that the $\gamma^a$ are constant numerical matrices, whereas $\gamma^\mu(x)$ are spacetime functions.
This, combined with the references and some googling, should hopefully get you started. If you are still really stuck after trying to work some of this out (try to do it for yourself! There are so many different conventions in the literature it's hard to trust copy'n'pasting different people's results!) then I have an example calculation from go to whoa in my honours thesis. |
Set is Subset of Union/Set of Sets
Theorem
Let $\mathbb S$ be a set of sets.
Then: $\displaystyle \forall T \in \mathbb S: T \subseteq \bigcup \mathbb S$ Proof
Let $T$ be any element of $\mathbb S$.
We wish to show that $T \subseteq \bigcup \mathbb S$.
Let $x \in T$.
Then:
$x \in T \implies x \in \bigcup \mathbb S$ (Definition of Set Union)
Since this holds for each $x \in T$:
$T \subseteq \bigcup \mathbb S$ (Definition of Subset)
As $T$ was arbitrary, it follows that: $\forall T \in \mathbb S: T \subseteq \bigcup \mathbb S$
$\blacksquare$ |
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box..
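The two-box scheme sketched there is just the Euclidean algorithm; a minimal sketch (the function names are mine):

```python
# A "division box" returning (q, r) with a = b*q + r and 0 <= r < b,
# fed back on (b, r) until r = 0 - i.e. the Euclidean algorithm for gcd.

def division_box(a, b):
    return a // b, a % b

def gcd(a, b):
    while True:
        q, r = division_box(a, b)
        if r == 0:
            return b
        a, b = b, r
```

For example, `gcd(48, 18)` runs through the pairs (48, 18) → (18, 12) → (12, 6) and returns 6.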
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j} \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of row?
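The row-independence being asked about is easy to observe numerically before proving it (a sketch; `det_cofactor` is my own illustrative implementation of the recursive cofactor expansion):

```python
import numpy as np

# Recursive cofactor expansion of det(A) along an arbitrary row r.
# Expanding along different rows gives the same value, which is the
# independence claim in question.

def det_cofactor(A, r=0):
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor: delete row r and column j
        minor = np.delete(np.delete(A, r, axis=0), j, axis=1)
        total += (-1) ** (r + j) * A[r, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 4.0], [0.0, 5.0, 6.0]])
vals = [det_cofactor(A, r) for r in range(3)]  # same value for every row
```

Here all three expansions give $-10$, agreeing with `np.linalg.det(A)`; the proof itself goes through the uniqueness of the multilinear alternating form mentioned below.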
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
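That contour-integral definition can be checked numerically (a sketch; the sampled function and helper name are my own example, and the equally spaced quadrature on a circle is exact for finite Laurent sums):

```python
import numpy as np

# Residue as (1/2*pi*i) * integral of f(z) dz around a small circle about 0,
# computed by quadrature at equally spaced points. For
# f(z) = 1/z + 2/z^2 + 3 + 4z the result is the a_{-1} coefficient, i.e. 1.

def residue_at_zero(f, radius=0.5, n=2000):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = radius * np.exp(1j * t)
    dz = 1j * z * (2 * np.pi / n)  # dz along the circle
    return np.sum(f(z) * dz) / (2j * np.pi)

f = lambda z: 1 / z + 2 / z**2 + 3 + 4 * z
```

Each power $z^n$ integrates to zero except $n=-1$, so only $a_{-1}$ survives, matching the coordinate-independence remark above.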
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Let $(A,m)$ be a commutative noetherian local ring such that $m$ is principal, say $m=(t)$. Let $(\hat A,\hat m)$ be its $m$-adic completion. Let $A\subset B\subset\hat A$ be any intermediate subring such that $n=tB$ is a maximal ideal of $B$.
The question is: Is it true that the localisation $B_n$ is contained in $\hat A$?
Does this follow simply by considering the isomorphisms $\hat A\simeq A[[X]]/(X-t)\simeq A[[t]]$? This would imply that an element $g\in B\setminus n$ is a unit in $\hat A$. Am I right?
Thanks in advance! |
BICEP2 near the South Pole might have found a gem. In the morning, Sam Telfer asked Matt Strassler, Adam Falkowski, and myself about the new buzz related to the B-modes. It turns out that he was ahead of us. But now, all of us know what is supposed to happen soon, and it is exciting. Update, Monday 4 pm: The rumor was 100% true. Ahead of the press conference, official data have been released. They measured \(n\sim 0.96\) and, more importantly, \(r=0.20\pm 0.05\) or so (see a new graph), and could exclude \(r=0\) at a 6-7 sigma confidence level. The peaks are where we expect them from cosmic inflation, and contamination by instruments seems very unlikely to them. See the FAQ. See also a post-discovery blog post by Prof Liam McAllister.
The rumor is all about BICEP2, a small experiment at the Amundsen–Scott South Pole Station in the Antarctica (BICEP1 concluded with this paper; see also BICEP2 status in 2012). Focusing on the frequency \(150\, {\rm GHz}\) i.e. wavelength 2 millimeters, it is trying to find the primordial B-modes, something that could be important to pick the winners among theories of cosmic inflation and possible alternative theories to cosmic inflation.
What are the B-modes?
They are one of the two components of anisotropies of the cosmic microwave background that may be separated by the Helmholtz decomposition (or its generalization for the celestial sphere). Well, not so fast.
Recall that the cosmic microwave background (CMB) is a thermal black body radiation (the most perfect natural black body radiation we have observed in the Universe) at current temperature \(2.7\,{\rm K}\). It used to be much warmer but it cooled down, because of the global cooling in the whole Universe (an inseparable consequence of the expansion of the Universe; no, there is no global warming in the Universe) that has lasted for 13.8 billion years.
The cosmic microwave background was only created about 400,000 years after the big bang. Around that moment, atoms were born and the ionized plasma turned into a nearly transparent gas of neutral atoms. Since that moment, the photons were propagating through space almost uninterrupted. Their wavelength was just expanding proportionally to the expansion of the whole Universe and because longer wavelengths correspond to a lower frequency, energy, and temperature, the CMB was cooling down.
The temperature (as given by the maximum-intensity frequency etc.) is almost constant in all directions, the variations only amount to (relative) 10 parts per million or so. The variations are nevertheless important, may be measured, and decomposed to spherical harmonics. Those graphs of the amplitudes as functions of \(\ell\), the orbital angular momentum, provide us with some of the most spectacular confirmations of our theories about the very young Universe (including the scale invariance).
So far, I have only talked about the intensity. But the microwave radiation may be polarized which means that one linear or circular polarization has a higher intensity (or, more precisely, a higher temperature by a microkelvin or two) than the other one. At some level, this polarization is inevitable. Because the CMB was created from a variable density \(\kappa\), we may deduce the main components of the polarization from derivatives of this density field \(\kappa\), see e.g. page 2 of this paper. Define\[
\nabla \kappa = \vec u
\] and try to separate the density field \(\kappa\) into \(\kappa^E\) and \(\kappa^B\) (yes, the letters coincide with the names of the electric and magnetic vectors in electromagnetism) so that\[
\nabla^2 \kappa^E = \nabla\cdot \vec u, \quad \nabla^2 \kappa^B = \nabla\times \vec u.
\] So that's the basic separation into E-modes and B-modes, something that may be expressed in various other ways. The E-modes and B-modes try to isolate the "gradient" and "curl" parts of \(\vec u\). These modes imprint themselves to the linear and/or circular polarization of the CMB in some way. There are two kinds of modes because there are two polarizations. In a plane, you could choose the usual bases but on the sphere, there is no universal x-polarization or y-polarization or right-handed polarization or left-handed polarization because you can't "comb the sphere". So the 2-dimensional basis has to be chosen more cleverly (in some sense, the problem is analogous to the addition of the angular momentum, \(\ell+1\)) and the E-modes and the B-modes happen to do the job well.
In some sense, the B-modes denote/quantify/measure the "vortices" or "whirlpools" in the CMB as measured by the polarization of the photons.
If I simplify a bit, B-modes are the part of the variable polarization of the CMB that has something to do with the "curl" and it's called in this way because it's the magnetic field's fault that any "curls" appear in Maxwell's equations, anyway. ;-) It's actually more accurate to define the B-modes not as the "curl-ful" part of the non-uniformities but as the "divergence-less" part (note that the divergence of a curl vanishes, \(\nabla\cdot(\nabla\times \vec u)=0\)).
You may find the review of the modes by Wayne Hu of Chicago or Peter Coles' blog post or John Kováč's 40-minute-long talk (B-modes from the ground) much more helpful and informative than my modest remarks.
The graph above was taken from this HHZ 2002 paper or some slides (page 2) and it is showing the relative strength of various E-modes and B-modes from different sources, as a function of \(\ell\). In general, there are "gravitational lensing" i.e. "mundane" B-modes but if the inflation scale is sufficiently high (close enough to the Planck scale), for example near \(2.6\times 10^{16}\GeV\) on the graph, the "primordial" waves prevail at lower values of \(\ell\). To be a bit slower:
The B-modes may have arisen "recently", from the gravitational lensing of the CMB photons (this is the mundane origin); or, more excitingly, "a long time ago" (these are the important "primordial" B-modes, probably created during inflation when the Universe was really, really young and we may be jealous about its GDP growth rate).
The "primordial" B-modes should be evidence of the gravitational radiation (i.e. gravitational waves) caused by the cosmic inflation itself and the numerical data about these "primordial" modes should tell us something about the characteristic energy scale of cosmic inflation which is mostly expected to be near the GUT scale, \(10^{15-16}\GeV\) or so. The closer the inflation scale was to the Planck scale, the stronger gravitational waves we expect. For any scale, there seems to be a maximum (bump) in the B-modes around \(\ell=90\) where BICEP2 and others may have focused (the "mundane" B-modes from gravitational lensing are elsewhere, near \(\ell\sim 1,000\)). In the early Universe, B-modes behave just like tensor modes (those related to the metric-like tensor field). The tensor modes are the only conceivable source of B-modes during the extreme childhood of the Cosmos. Their emergence in cosmology would be exciting because so far, we have only seen signs of the "scalar" non-uniformities of the matter density in the outer space (via the CMB).
Amundsen-Scott South Pole Station. The microwaves and millimeter waves are nicely observed at the South Pole because it's 2.8 km above sea level (a thin atmosphere); because the atmosphere is stable, as there are no sunrises and sunsets; and because the concentration of water vapor that could steal/absorb these waves is low.
In Summer 2013 (Nature, Science Daily, arXiv/PRL), the "mundane" B-modes were finally observed by the South Pole Telescope with some help from the Herschel Space Observatory.
BICEP2 focal plane with four detector tiles.
Now it seems rather plausible that less than a year after the detection of the "mundane" B-modes, BICEP2 may actually detect the exciting "primordial" ones, too. And the announcement may be just 3 days from now! The rumor may also suffer from a bug. A positive signal would be sort of surprising because it was generally expected that the Planck satellite would be the first experiment with a chance to detect the "primordial" B-modes (see the current status of the Planck measurements; note that Planck has died due to a helium heart attack but papers should continue to be published up to Summer 2014 or so). WMAP saw no B-modes. And the same thing more or less holds for POLARBEAR in Chile that published its results earlier this week.
Finally, the rumor.
Finally, I may show you that I can also be a linker-not-thinker and overwhelm you with ten more URLs to sources that discuss the rumor and some anti-rumors:
You are invited to read the articles now. Czech, German, Italian, Dutch, and Spanish readers get a bonus link. Five more bonus links: Hank Campbell, Universe Today, Raw Story, IO9, Sky and Telescope.
Spaceref.com: March 17th Press Conference on Major Discovery [official title, no rumor! Note by LM] at Harvard-Smithsonian Center for Astrophysics
The Guardian: Gravitational waves: have US scientists heard echoes of the big bang? (By Stuart Clark)
The Trenches of Discovery: "A major discovery", BICEP2 and B-modes (by Shaun Hotchkiss)
Richard Easther's ExcursionSet.com: The Smoking Gnu [sic]
Bruce Bassett's writing: Should you hold your breath for B-modes?
Résonaances: Plot for Weekend: flexing biceps (Adam Falkowski)
viXra: Primordial Gravitational Waves? (Phil Gibbs)
Blank on the map: B-modes, rumours, and inflation (by Sesh Nadathur)
Preposterous Universe: Gravitational Waves in the Cosmic Microwave Background (by Sean Carroll, a rather clean and meaningful intro added on Sunday)
Prof Matt Strassler: Getting Ready for the Cosmic News (a brief comment added by an active particle phenomenologist on Sunday; see also his Brief History of the Universe, Saturday)
The first (Spaceref) article says that the Harvard-Smithsonian will be streaming a press conference on Monday at 5 p.m., Prague Winter Time (9 a.m., California Daylight Savings Time). Some contacts are mentioned there. You may already see all the secret data now (if you guess the right password).
The Guardian and the Trenches emphasize that it would be a huge, potentially Nobel-prize-winning discovery of gravitational waves (the Nobel prize could go to the experimenters, to my former student Alan Guth [interview on B-modes] and his competitor Andrei Linde [both are rumored to attend the press conference on Monday], or others). Gravitational waves need to have a source but in this case, cosmic inflation itself may offer a source. "Trenches" also suggest that this could be the greatest hard-science discovery in astrophysics although cosmologists will surely consider it a discovery in cosmology.
Tweets about "B-modes"
Twitter may sometimes make you feel that something is happening in a highly specialized topic every minute.
Richard Easther says a few words about the inflationary origin of the "primordial" B-modes and admits that the precise information in the rumor seems contradictory at this point. Some sources say that the discovered B-modes are stronger than other teams indicated, therefore suggesting a contradiction or some really weird behavior of the early Universe; other sources say that the new observation is compatible with everything else.
Bruce Bassett tries to use a combination of Bayesian inference and psychology to determine how strong a signal they may announce. He considers the physical and experimental priors and the likelihood that a big signal would leak or that it would need rechecks, and tries to use some experience with some Planck and Opera announcements to check his musings. Well, it is amusing but a bit too speculative. Rumors are of course unreliable but in many cases, I prefer to believe that the rumor is essentially accurate over these vague Bayesian inferences which are nothing else than a guesswork using fancy words.
The graph includes both measurements and theoretical predictions. The horizontal axis is the primordial tilt \(n_s\approx 0.96\) (according to observations); the tilt (expected to be around one in "clean inflation"; more generally known as the spectral index) is defined as the exponent in \(P(k) = \langle |\delta_k(t)|^2 \rangle\sim k^n\). The vertical axis is the T/S (tensor-to-scalar) ratio \(r\). You may see that e.g. T/S around \(r\approx 0.05\) is preferred (and hoped for) by proponents of the (violet) natural inflation.
Adam Falkowski at Résonaances wants you to look at this graph because it may be the last days when the graph looks like this. On Monday, it may change rather dramatically.
Finally, Phil Gibbs adds some pessimistic remarks. Not such a long time ago, we were disappointed by the pre-hyped press conferences of AMS, LUX, and others. He adds some comments about the decomposition of the waves.
And Sesh Nadathur is also skeptical, claiming that a truly positive signal would contradict the already published Planck and WMAP data: note that the \(r=0.2\) vertical line on the graph above is above the recommended big "hill" in the middle of the picture. Nadathur suggests that the rumor also says that the T/S \(r\approx 0.2\) which he finds incompatible with others but a reader says that both Planck and BICEP2 could be compatible with the real value around \(r\approx 0.1\). Note that BICEP1 measured \(r\approx 0.03\pm 0.25\) or so. It seems to me that the error margin of BICEP2 may be at most 5-10 or so times smaller, like \(0.03\), so for the claim about a nonzero \(r\) to be significant, the mean value should better be higher and close to \(r\approx 0.2\), indeed. This 2010 talk comparing BICEP2 with others claimed that BICEP would reach \(r\sim 0.03\) by 2013 so I wouldn't quite exclude the possibility that they will claim a discovery with a very low \(r\), either.
Stay tuned.
Originally posted on Friday, March 14th. Update:On Monday after 4 pm, everything became clear as the BICEP server became accessible. See Andrei Linde who heard the news and will probably get very drunk (via Stanford). They announced \(r=0.20\pm 0.05\) or so, excluded \(r=0\) at 6-7 sigma or so, and \(n=0.96\pm 0.01\). This is consistent, for example, with the hilltop quartic inflationary model with many (60) e-foldings. |
Focus Questions
The following questions are meant to guide our study of the material in this section. After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions. What is an identity? How do we verify an identity?
Consider the trigonometric equation \(\sin(2x) = \cos(x)\). Based on our current knowledge, an equation like this can be difficult to solve exactly because the periods of the functions involved are different. What will allow us to solve this equation relatively easily is a trigonometric identity, and we will explicitly solve this equation in a subsequent section. This section is an introduction to trigonometric identities.
As we discussed in Section 2.6, a mathematical
equation like \(x^{2} = 1\) is a relation between two expressions that may be true for some values of the variable. To solve an equation means to find all of the values for the variables that make the two expressions equal to each other. An identity is an equation that is true for all allowable values of the variable. For example, from previous algebra courses, we have seen that
\[x^{2} - 1 = (x + 1)(x - 1)\]
for all real numbers \(x\). This is an algebraic identity since it is true for all real number values of \(x\). An example of a trigonometric identity is \(\cos^{2}(x) + \sin^{2}(x) = 1\) since this is true for all real number values of \(x\).
So while we solve equations to determine when the equality is valid, there is no reason to solve an identity since the equality in an identity is always valid. Every identity is an equation, but not every equation is an identity. To know that an equation is an identity it is necessary to provide a convincing argument that the two expressions in the equation are always equal to each other. Such a convincing argument is called a proof and we use proofs to verify trigonometric identities.
Definition: Identity
An identity is an equation that is true for all allowable values of the variables involved.
Beginning Activity
1. Use a graphing utility to draw the graphs of \(y = \cos(x - \dfrac{\pi}{2})\) and \(y = \sin(x + \dfrac{\pi}{2})\) over the interval \([-2\pi, 2\pi]\) on the same set of axes. Are the two expressions \(\cos(x - \dfrac{\pi}{2})\) and \(\sin(x + \dfrac{\pi}{2})\) the same – that is, do they have the same value for every input \(x\)? If so, explain how the graphs indicate that the expressions are the same. If not, find at least one value of \(x\) at which \(\cos(x - \dfrac{\pi}{2})\) and \(\sin(x + \dfrac{\pi}{2})\) have different values.
2. Use a graphing utility to draw the graphs of \(y = \cos(x - \dfrac{\pi}{2})\) and \(y = \sin(x)\) over the interval \([-2\pi , 2\pi]\) on the same set of axes. Are the two expressions \(\cos(x - \dfrac{\pi}{2})\) and \(\sin(x)\) the same – that is, do they have the same value for every input \(x\)? If so, explain how the graphs indicate that the expressions are the same. If not, find at least one value of \(x\) at which \(\cos(x - \dfrac{\pi}{2})\) and \(\sin(x)\) have different values.
Some Known Trigonometric Identities
We have already established some important trigonometric identities. We can use the following identities to help establish new identities.
The Pythagorean Identity
This identity is fundamental to the development of trigonometry. See Section 1.2.
For all real numbers \(t\),
\[\cos^{2}(t) + \sin^{2}(t) = 1.\]
Identities from Definitions
The definitions of the tangent, cotangent, secant, and cosecant functions were introduced in Section 1.6. The following are valid for all values of \(t\) for which the right side of each equation is defined.
\[\tan(t) = \dfrac{\sin(t)}{\cos(t)}\]
\[\cot(t) = \dfrac{\cos(t)}{\sin(t)}\]
\[\sec(t) = \dfrac{1}{\cos(t)}\]
\[\csc(t) = \dfrac{1}{\sin(t)}\]
Negative Identities
The negative identities were introduced in Chapter 2 when the symmetry of the graphs was discussed. (See page 82 and Exercise (2) on page 139.)
\[\cos(-t) = \cos(t)\]
\[\sin(-t) = -\sin(t)\]
\[\tan(-t) = -\tan(t)\]
The negative identities for cosine and sine are valid for all real numbers \(t\), and the negative identity for tangent is valid for all real numbers \(t\) for which \(\tan(t)\) is defined.
Verifying Identities
Given two expressions, say \(\tan^{2}(x) + 1\) and \(\sec^{2}(x)\), we would like to know if they are equal (that is, have the same values for every allowable input) or not. We can draw the graphs of \(y = \tan^{2}(x) + 1\) and \(y = \sec^{2}(x)\) and see if the graphs look the same or different. Even if the graphs look the same, as they do with \(y = \tan^{2}(x) + 1\) and \(y = \sec^{2}(x)\), that is only an indication that the two expressions are equal for every allowable input. In order to verify that the expressions are in fact always equal, we need to provide a convincing argument that works for all possible inputs. To do so we use facts that we know (existing identities) to show that two trigonometric expressions are always equal. As an example, we will verify that the equation \[\tan^{2}(x) + 1 = \sec^{2}(x)\] is an identity.
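Before giving the proof, a numerical spot-check can play the same role as the graphs: it supplies evidence, never a proof. The short script below (names are made up) checks the equation at a few sample inputs.

```python
import math

# Numerically spot-check tan^2(x) + 1 = sec^2(x) at a few sample inputs.
# Like comparing graphs, this is only evidence that the equation may be an
# identity -- the proof is the chain of known identities in the text.

def lhs(x):
    return math.tan(x) ** 2 + 1

def rhs(x):
    return 1.0 / math.cos(x) ** 2   # sec^2(x) = 1 / cos^2(x)

for x in (0.3, 1.0, -2.2, 4.0):
    assert math.isclose(lhs(x), rhs(x), rel_tol=1e-9)
```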
A proper format for this kind of argument is to choose one side of the equation and apply existing identities that we already know to transform the chosen side into the remaining side. It usually makes life easier to begin with the more complicated looking side (if there is one). In our example of equation (1) we might begin with the expression \(\tan^{2}(x) + 1\).
Example \(\PageIndex{1}\): Verifying a Trigonometric Identity
To verify that equation (1) is an identity, we work with the expression \(\tan^{2}(x) + 1\). It can often be a good idea to write all of the trigonometric functions in terms of the cosine and sine to start. In this case, we know that \(\tan(t) = \dfrac{\sin(t)}{\cos(t)}\), so we could begin by making this substitution to obtain the identity \[\tan^{2}(x) + 1 = (\dfrac{\sin(x)}{\cos(x)})^{2} + 1\]
Note that this is an identity and so is valid for all allowable values of the variable. Next we can apply the square to both the numerator and denominator of the right hand side of our identity (2).
\[(\dfrac{\sin(x)}{\cos(x)})^{2} + 1 = \dfrac{\sin^{2}(x)}{\cos^{2}(x)} + 1\]
Next we can perform some algebra to combine the two fractions on the right hand side of the identity (3) and obtain the new identity
\[\dfrac{\sin^{2}(x)}{\cos^{2}(x)} + 1 = \dfrac{\sin^{2}(x) + \cos^{2}(x)}{\cos^{2}(x)}\]
Now we can recognize the Pythagorean identity \(\cos^{2}(x) + \sin^{2}(x) = 1\), which makes the right side of identity (4)
\[\dfrac{\sin^{2}(x) + \cos^{2}(x)}{\cos^{2}(x)} = \dfrac{1}{\cos^{2}(x)}\] Recall that our goal is to verify identity (1), so we need to transform the expression into \(\sec^{2}(x)\). Recall that \(\sec(x) = \dfrac{1}{\cos(x)}\), and so the right side of identity (5) leads to the identity that completes the verification.
\[\dfrac{1}{\cos^{2}(x)} = \sec^{2}(x)\]
An argument like the one we just gave that shows that an equation is an identity is called a proof. We usually leave out most of the explanatory steps (the steps should be evident from the equations) and write a proof in one long string of identities as
\[\tan^{2}(x) + 1 = (\dfrac{\sin(x)}{\cos(x)})^{2} + 1 = \dfrac{\sin^{2}(x)}{\cos^{2}(x)} + 1= \dfrac{\sin^{2}(x) + \cos^{2}(x)}{\cos^{2}(x)} = \dfrac{1}{\cos^{2}(x)} = \sec^{2}(x).\]
To prove an identity is to show that the expressions on each side of the equation are the same for every allowable input. We illustrated this process with the equation \(\tan^{2}(x) + 1 = \sec^{2}(x)\). To show that an equation isn’t an identity it is enough to demonstrate that the two sides of the equation have different values at one input.
Example \(\PageIndex{2}\): (Showing that an Equation is not an Identity)
Consider the equation \(\cos(x - \dfrac{\pi}{2}) = \sin(x + \dfrac{\pi}{2})\) that we encountered in our Beginning Activity. Although you can check that \(\cos(x - \dfrac{\pi}{2})\) and \(\sin(x + \dfrac{\pi}{2})\) are equal at some values, \(\dfrac{\pi}{4}\) for example, they are not equal at all values: \(\cos(0 - \dfrac{\pi}{2}) = 0\) but \(\sin(0 + \dfrac{\pi}{2}) = 1\). Since an identity must provide an equality for
all allowable values of the variable, if the two expressions differ at one input, then the equation is not an identity. So the equation \(\cos(x - \dfrac{\pi}{2}) = \sin(x + \dfrac{\pi}{2})\) is not an identity.
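The refutation above can also be checked numerically; the tiny script below (illustrative only) confirms both the counterexample at \(x = 0\) and the coincidental agreement at \(x = \pi/4\), which shows why a single agreement proves nothing.

```python
import math

# One input where the two sides differ is enough to refute an identity:
# at x = 0, cos(x - pi/2) = 0 while sin(x + pi/2) = 1.
x = 0.0
left = math.cos(x - math.pi / 2)    # 0.0 (up to rounding)
right = math.sin(x + math.pi / 2)   # 1.0
assert abs(left) < 1e-12 and abs(right - 1.0) < 1e-12

# ... while at x = pi/4 the two sides happen to agree, so checking a
# single input can never establish an identity.
assert math.isclose(math.cos(math.pi / 4 - math.pi / 2),
                    math.sin(math.pi / 4 + math.pi / 2))
```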
Example 4.2 illustrates an important point: to show that an equation is not an identity, it is enough to find one input at which the two sides of the equation are not equal. We summarize our work with identities as follows.
To prove that an equation is an identity, we need to apply known identities to show that one side of the equation can be transformed into the other.
To prove that an equation is not an identity, we need to find one input at which the two sides of the equation have different values.
Important Note: When proving an identity it might be tempting to start working with the equation itself and manipulate both sides until you arrive at something you know to be true. DO NOT DO THIS! By working with both sides of the equation, we are making the assumption that the equation is an identity – but this assumes the very thing we need to show. So the proper format for a proof of a trigonometric identity is to choose one side of the equation and apply existing identities that we already know to transform the chosen side into the remaining side. It usually makes life easier to begin with the more complicated looking side (if there is one).
Example \(\PageIndex{3}\): Verifying an Identity
Consider the equation \[2\cos^{2}(x) - 1 = \cos^{2}(x) - \sin^{2}(x).\]
Graphs of both sides appear to indicate that this equation is an identity. To prove the identity we start with the left hand side:
\[2\cos^{2}(x) - 1 = \cos^{2}(x) + \cos^{2}(x) - 1 = \cos^{2}(x) + (1 - \sin^{2}(x)) - 1 = \cos^{2}(x) - \sin^{2}(x).\]
Notice that in our proof we rewrote the Pythagorean identity \(\cos^{2}(x) + \sin^{2}(x) = 1\) as \(\cos^{2}(x) = 1 - \sin^{2}(x)\). Any proper rearrangement of an identity is also an identity, so we can manipulate known identities to use in our proofs as well.
To reiterate, the proper format for a proof of a trigonometric identity is to choose one side of the equation and apply existing identities that we already know to transform the chosen side into the remaining side. There are no hard and fast methods for proving identities – it is a bit of an art. You must practice to become good at it.
Exercise \(\PageIndex{1}\)
For each of the following use a graphing utility to graph both sides of the equation. If the graphs indicate that the equation is not an identity, find one value of \(x\) at which the two sides of the equation have different values. If the graphs indicate that the equation is an identity, verify the identity.
1. \[\dfrac{\sec^{2}(x) - 1}{\sec^{2}(x)} = \sin^{2}(x)\]
2. \[\cos(x)\sin(x) = 2\sin(x)\]
Answer
1. The graphs of both sides of the equation indicate that this is an identity.
2. The graphs of both sides of the equation indicate that this is not an identity. For example, if we let \(x = \dfrac{\pi}{2}\), then
\[\cos(\dfrac{\pi}{2})\sin(\dfrac{\pi}{2}) = 0\cdot 1 = 0\] and \[2\sin(\dfrac{\pi}{2}) = 2\cdot 1 = 2\]
Summary
In this section, we studied the following important concepts and ideas:
An identity is an equation that is true for all allowable values of the variables involved.
To prove that an equation is an identity, we need to apply known identities to show that one side of the equation can be transformed into the other.
To prove that an equation is not an identity, we need to find one input at which the two sides of the equation have different values.
I have a question about the definition of a Markov process and a Markov family by Karatzas/Shreve in the book "Brownian Motion and Stochastic Calculus" (cf. p. 74). Let me first recall their definitions:
Markov process: Let $d$ be a positive integer and $\mu$ a probability measure on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$. An adapted, $d$-dimensional process $X=\{X_t,\mathcal{F}_t;t\geq 0 \}$ on some probability space $(\Omega, \mathcal{F},P^{\mu})$ is said to be a Markov process with initial distribution $\mu$, if
(i) $P^{\mu}(X_0 \in \Gamma ) = \mu(\Gamma), \forall \Gamma \in \mathcal{B}(\mathbb{R}^d)$.
(ii) for $s,t \geq 0$ and $\Gamma \in \mathcal{B}(\mathbb{R}^d)$: $P^{\mu} (X_{t+s} \in \Gamma | \mathcal{F}_s) = P^{\mu} (X_{t+s} \in \Gamma | X_s)$
Markov Family: Let $d$ be a positive integer. A $d$-dimensional Markov Family is an adapted process $X=\{X_t,\mathcal{F}_t;t\geq 0 \}$ on some $(\Omega, \mathcal{F})$, together with a family of probability measures $(P^x)_{x\in \mathbb{R}^d}$ on $(\Omega,\mathcal{F})$, such that
(a) for each $F\in \mathcal{F}$, the mapping $x\mapsto P^x(F)$ is universally measurable;
(b) $P^x(X_0=x)=1, \forall x \in \mathbb{R}^d$;
(c) for $x\in \mathbb{R}^d$, $s,t \geq 0$ and $\Gamma \in \mathcal{B}(\mathbb{R}^d)$: $P^x(X_{t+s} \in \Gamma | \mathcal{F}_s ) = P^x(X_{t+s} \in \Gamma | X_s ), P^x-a.s.$;
(d) for $x\in \mathbb{R}^d$, $s,t \geq 0$ and $\Gamma \in \mathcal{B}(\mathbb{R}^d)$: $P^x(X_{t+s} \in \Gamma | X_s=y ) = P^y(X_{t} \in \Gamma), P^xX_s^{-1}-a.e. y$;
Now my question: To me it looks like the definition of a Markov Family serves to have a Markov process start at different starting points, and for each different starting point $x$, we have one measure $P^x$. Why do we introduce this new definition instead of considering a family of processes $(X^x_t)_{x\in \mathbb{R}^d, t\geq 0}$ with $X^x_t = X_t+x$, and $(X_t)_{t\geq 0}$ a Markov process starting in zero with respect to one measure $P^0$. In this case we would have a family of processes but only one measure, whereas in the aforementioned definition we have one process but a family of measures. What is the advantage of the aforementioned definition?
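The translation picture in the question can be made concrete for a spatially homogeneous process such as a random walk: driving both walks with the same increments gives $X^x_t = x + X^0_t$ exactly, so one measure $P^0$ plus shifts really does suffice. The sketch below (all names made up) checks this; note it only covers the homogeneous case, whereas the family-of-measures definition also accommodates processes whose dynamics depend on the current state, where no such shift identity exists.

```python
import random

# For a spatially homogeneous process (here a Gaussian random walk), the
# "family of measures" and "family of shifted processes" pictures agree:
# reusing the same noise gives X^x_t = x + X^0_t path by path.
# For state-dependent dynamics (e.g. diffusions with non-constant
# coefficients) this shift identity fails, which is one motivation for
# the measure-family definition.

def walk(x0, increments):
    path = [x0]
    for dw in increments:
        path.append(path[-1] + dw)
    return path

rng = random.Random(42)
increments = [rng.gauss(0.0, 1.0) for _ in range(100)]

path_from_0 = walk(0.0, increments)
path_from_x = walk(3.5, increments)
assert all(abs(a - (b + 3.5)) < 1e-12
           for a, b in zip(path_from_x, path_from_0))
```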
Thank you in advance, Luke |
Let $\gamma : [0,1] \to \mathbb R^2$ be a finite $C^2$-curve in the plane which does not intersect itself. Let $p(z)$ be a second-degree polynomial in $z \in \mathbb R^2$. Can we construct a Riemannian metric $g$ along $\gamma$ such that
$\gamma$ is a geodesic of $g$, $\gamma$ has length 1, and $p(z)$ is the 2-jet of $g$ at $\gamma(0)$? (i.e. this prescribes $g$ and its first and second derivatives at $\gamma(0)$) Edit: As per Sergei's comment, assume that $p$ is chosen so that $\gamma$ does in fact solve the geodesic equation.
I think so, and my sketch of an argument follows the proof of existence of Fermi coordinates in reverse. I haven't worked through this in detail yet, though, because I'm more concerned about the next question:
Let $\gamma$ and $\eta$ both be finite curves $[0,1] \to \mathbb R^2$ which do not intersect themselves, and such that $\gamma(0) = \eta(0)$ and $\gamma(1) = \eta(1)$ with no other intersections (i.e. $\gamma \cup \eta$ is a piecewise, simple, closed $C^2$-curve in the plane). Can we construct a metric $g$ as above? Note that if so, $\gamma(0)$ and $\gamma(1)$ will be conjugate points along $\gamma$. |
Suppose there are $N$ subjects under study, with subject $i$ contributing $n_i$ observations, for $i =1,...,N$. And let $y_{ij}$ denote a response variable for subject $i$ at observation $j$. Let $x_{ij}$ denote a $p\times 1$ vector of predictors, and let $z_{ij}$ denote a $q\times 1$ vector of predictors. In general, the linear mixed effects model is $$y_i=X_i\alpha+Z_i\beta_i+\epsilon_i$$ where $y_i=(y_{i1},...,y_{in_i})^T$, $X_i=(x_{i1}^T,...,x_{in_i}^T)^T$, $Z_i=(z_{i1}^T,...,z_{in_i}^T)^T$, $\alpha$ is a $p \times 1$ vector of unknown population parameters, $\beta_i$ is a $q \times 1$ vector of unknown subject-specific random effects with $\beta_i \sim N(0, D)$, and the elements of the residual vector, $\epsilon_i$, are $N(0, \sigma^2I)$.
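To make the notation concrete, here is a tiny worked instance of $y_i = X_i\alpha + Z_i\beta_i + \epsilon_i$ for one subject, with all numbers made up and the residuals set to zero so the arithmetic stays visible:

```python
# One subject with n_i = 2 observations, p = 2 fixed effects, and q = 1
# random intercept. All numbers are illustrative.

def matvec(M, v):
    """Plain matrix-vector product on nested lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

X_i = [[1.0, 0.5],
       [1.0, 1.5]]        # intercept column + one predictor
alpha = [1.0, 2.0]        # population-level coefficients
Z_i = [[1.0],
       [1.0]]             # random-intercept design
beta_i = [0.3]            # this subject's draw from N(0, D)
eps_i = [0.0, 0.0]        # residuals zeroed out for clarity

y_i = [f + r + e for f, r, e in zip(matvec(X_i, alpha),
                                    matvec(Z_i, beta_i),
                                    eps_i)]
# X_i @ alpha = [2.0, 4.0]; adding beta_i shifts both by 0.3
assert all(abs(a - b) < 1e-12 for a, b in zip(y_i, [2.3, 4.3]))
```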
And we can use a call like
lmer(frequency ~ attitude + (1|subject) + (1|scenario), data=politeness)
from the lme4 package to specify which variable has a fixed effect (attitude) and which have random effects (subject and scenario are the two variables with random effects).
But my questions are:
In our example, the variable which has a random effect is $Z_i$. But according to the error I got when I use lmer, the number of levels of the random-effect grouping variable should be more than 1 and smaller than the number of observations. How should we deal with it when our $Z_i$ here is continuous?
We actually know the distribution of the random effect coefficients $\beta$; how should we use this information here? I.e., where do we put the information in $D$?
Many thanks for any comments!
Asymptotic large time behavior of singular solutions of the fast diffusion equation
1. Institute of Mathematics, Academia Sinica, Taipei, Taiwan
2. Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China
We study the asymptotic large time behavior of singular solutions of the fast diffusion equation $u_t=\Delta u^m$ in $({\mathbb R}^n\setminus\{0\})\times(0, \infty)$ in the subcritical range $0<m<\frac{n-2}{n}$, $n\ge 3$. We prove that if the initial value $u_0$ satisfies $A_1|x|^{-\gamma}\le u_0\le A_2|x|^{-\gamma}$ for some constants $A_2>A_1>0$ and $\frac{2}{1-m}<\gamma<\frac{n-2}{m}$, then the solution $u$ converges to self-similar solutions of the form $t^{-\alpha} f_i(t^{-\beta}x)$, $i=1, 2$, where $\beta:=\frac{1}{2-\gamma(1-m)}$, $\alpha:=\frac{2\beta-1}{1-m}$, and the profiles $f_i$ satisfy $\Delta f^m+\alpha f+\beta x\cdot \nabla f=0$ in ${\mathbb R}^n\setminus\{0\}$. The convergence is expressed in terms of the rescaled solution $\tilde u(y, \tau):= t^{\,\alpha} u(t^{\,\beta} y, t)$, $\tau:=\log t$.
Keywords: Existence, large time behavior, fast diffusion equation, singular solution, self-similar solution.
Mathematics Subject Classification: Primary: 35B35, 35B44, 35K55, 35K65.
Citation: Kin Ming Hui, Soojung Kim. Asymptotic large time behavior of singular solutions of the fast diffusion equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11) : 5943-5977. doi: 10.3934/dcds.2017258
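As a quick consistency check (a sketch, not part of the paper's argument), one can verify that the self-similar ansatz solves the equation exactly when $\alpha$ and $\beta$ satisfy the stated relation:

```latex
% Substitute u(x,t) = t^{-\alpha} f(\xi), \; \xi = t^{-\beta}x into u_t = \Delta u^m:
u_t = -t^{-\alpha-1}\bigl(\alpha f(\xi) + \beta\,\xi\cdot\nabla f(\xi)\bigr),
\qquad
\Delta_x u^m = t^{-\alpha m - 2\beta}\,\Delta_\xi f^m(\xi).
% The two powers of t match iff \alpha + 1 = \alpha m + 2\beta, i.e.
\alpha = \frac{2\beta-1}{1-m},
% and then u_t = \Delta u^m reduces to the profile equation
\Delta f^m + \alpha f + \beta\,\xi\cdot\nabla f = 0
\quad\text{in } {\mathbb R}^n\setminus\{0\}.
```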
Recently, I read the definition of oxidation state on Wikipedia. It read that a 100% ionic bond is impossible. So what does a 75% ionic and 25% covalent bond mean at all?
A "100% ionic" bond would be a bond whose bonding electron(s) were
never in the vicinity of the cation, but rather always in unperturbed valence orbitals of the anion. That's not far from the truth in a paradigmatic ionic compound like $\ce{NaCl}$, but no matter how electronegative the anion is, the bonding electrons will still experience some attraction toward the positive charge on the cation, and so a detailed model of their wave functions will show a small but nonzero amplitude near the cation. That's just basic electrostatics; you have a positive charge and a negative charge not that far away from each other, so they will attract.
Despite that, we can still talk about 100% ionic bonds as a theoretical limit and an acceptable approximation for modeling some compounds. It's kind of like how we can use absolute zero as the theoretical zero point of a temperature scale even though nothing in the universe can ever be that cold.
In a "100 %"
covalent bond, like the $\sigma_{ss}$ bond in the dihydrogen molecule, the electron probability density is perfectly symmetrically divided between the two bonded nuclei because both have the same electronegativity. Below: the schematised electron density $\psi^2$, for a 100 % covalent bond:
But when the two atoms have different electronegativities, the electron probability density will be higher towards the more electronegative atom, see e.g. $\ce{HCl}$. We say the bond is
polarised. See below, schematised, the permanent dipole of $\ce{XY}$, with partial charges $\delta +$ and $\delta -$: Below: the schematised electron density $\psi^2$, for a bond between atoms with differing electronegativity (highest electronegativity to the right):
If the electronegativity difference is really high, see e.g. $\ce{NaCl}$, the bond becomes so strongly polarised (electron probability density strongly skewed towards the $\ce{Cl}$ atom) that it starts taking on an ionic character.
Ionic and covalent must be considered relative to each other: in a "75% ionic" bond the electron distribution is strongly skewed toward the electronegative element.
There is however no meaningful way to measure this "percentage".
Actually bonds that are $100\%$ ionic, or close to it, are possible. But they require a little ingenuity. Instead of combining atoms of very different electronegativity, we can build multiatomic ions in which the charge is (formally) buried or highly de-localized making covalent bonding unfavorable. We use molecular orbital structures rather than electronegativity to drive the charge separation.
One example of this is the cyclopropenyl salt $\ce{C_3H_3^+SbCl_6^-}$ produced by Breslow and Groves in 1970 (http://pubs.acs.org/doi/abs/10.1021/ja00707a040). To form a covalent bond between the ions would require either occupying a highly destabilized antibonding orbital in the cyclopropenyl ring, or an interaction between the ring and poorly overlapping orbitals in the bulky anion. So we have very little covalent bonding; the molecular orbital structure does not favor it. We could get covalent compounds by transferring a chloride ion, but this reaction is energetically unfavorable. The salt, with its ionic bonding between the aromatic cation and complex anion, is surprisingly stable for a species with a three-membered carbon ring. Further stabilization may be achieved, and the ionicity reinforced, by placing substituents on the cyclopropenyl ring.
Another example is provided by compounds in which alkali metals are disproportionated into cations and anions (https://en.m.wikipedia.org/wiki/Alkalide). Because the ionization energies of alkali metals (at least from sodium on down) are so small, the cation may be stabilized by forming a complex where the (formal) positive charge is deeply buried and, again, no obvious means exists for covalent bonding. The alkalide anion must be bound by essentially ionic attraction to the cationic complex.
Tuned Mass Dampers
A Tuned Mass Damper (TMD) is a mechanical device designed to add damping to a structure for a certain range of exciting frequencies. The extra damping will reduce the movement of the structure to an acceptable level.
A tuned mass damper contains a mass that is able to oscillate in the same direction as the structure. The oscillation frequency of the mass can be tuned using springs, suspension bars, or ball transfers. When the structure starts to oscillate, the mass of the TMD will initially remain stationary due to inertia. A frictional or hydraulic component connected between the structure and the TMD mass then turns the kinetic energy of the structure into thermal energy, which results in a lower vibration amplitude of the structure.
The design of a TMD depends on the oscillation frequency and mass of the structure, the direction of the movements (one horizontal direction, two horizontal directions, or vertical), and the available space.
Guarantee
Flow Engineering has more than 20 years of experience in the calculation, designing, fabrication, and installation of Tuned Mass Damper (TMD) systems. Our TMD systems are applied to bridges, flagpoles, chimneys, distillation columns and other slender structures.
Our TMD systems have proven themselves in practice. We guarantee that the solutions we offer ensure the reduction of unwanted vibrations to an acceptable level, thereby preventing fatigue damage.
Tuning
By cleverly designing a Tuned Mass Damper (TMD), a single device can add sufficient damping for two or more of the structure's natural frequencies, reducing the number of necessary TMDs. In order to perform such optimizations, we model the structure with a finite element package to calculate the modes of vibration of the structure. The TMD is then tuned to add at least the necessary damping for each natural frequency.
Design Considerations
In simple situations a structure with a connected Tuned Mass Damper (TMD) can be modelled as in the following figure.
Here \(k\) is the spring constant, \(c\) is the damper constant, and \(m\) is the mass. Subscript \(1\) pertains to the structure and subscript \(2\) to the TMD.
A TMD can significantly reduce the response of a structure, as can be seen from the following graph.
The effects of varying several design parameters are given below.
Mass Ratio \(\mu\)
Increasing the mass ratio \(\mu\) (increasing the damper mass) will decrease the structural displacement. The normalized structural displacement amplitude can be computed with the formula given by J.P. Den Hartog in “Mechanical Vibrations”:
$$ \frac{\left| z_{1} \right|}{x_{st}} = \sqrt{1+\frac{2}{\mu}} $$
As can be seen from the figure, Den Hartog’s approach, calculating with \(\zeta_{1}=0\), is slightly conservative for steel structures (\(\zeta_{1}=0.2\%\)) at the lower mass ratios.
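For a rough feel of the numbers, Den Hartog's formula can be evaluated directly. The sketch below is purely illustrative (the mass ratios are example values, not design recommendations) and assumes zero structural damping, as in Den Hartog's derivation:

```python
import math

def den_hartog_amplitude(mu):
    """Normalized peak displacement |z1|/x_st of the main structure for an
    optimally tuned TMD (Den Hartog, 'Mechanical Vibrations'), assuming
    zero structural damping (zeta_1 = 0)."""
    return math.sqrt(1.0 + 2.0 / mu)

# A heavier damper mass (larger mass ratio mu) lowers the structural response:
for mu in (0.01, 0.02, 0.05, 0.10):
    print(f"mu = {mu:.2f}  ->  |z1|/x_st = {den_hartog_amplitude(mu):.1f}")
```

Doubling the mass ratio roughly halves the squared amplification, which is why even a damper mass of a few percent of the modal mass is effective.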
Damper Frequency \(f\)
The eigenfrequencies of a structure may not be known to a sufficient level of accuracy at the time that the TMDs are designed. It is then useful to define a range in which the frequency of the eigenmode to be damped is sure to reside. By designing an appropriate TMD for the entire range, the need to measure a structure's eigenfrequencies before a TMD can be produced is eliminated.
In the case that the structure has multiple eigenfrequencies relatively near to each other, a wide-range TMD may be used to add damping to several eigenmodes, reducing the cost of the vibration damping system.
Internal Damping Ratio \(\zeta_{2}\)
The increase in amplitude from mis-tuned internal damping can be significant. It is because of this effect that we advise changing the internal dampers at set intervals of 15 to 25 years, depending on the damper used.
Our ongoing research into maintenance free tuned mass dampers has solved this issue for linear tuned mass dampers. See our solution for linear tuned mass dampers: Magnovisco Linear Dampers |
Let $C_n = \{1, x, \dots, x^{n-1}\}$ and define $\varphi_r : C_n \to C_n$ as $\varphi_r(x^s) = x^{rs}$.
Theorem: $\varphi_r$ is bijective $\iff$ $\gcd(r,n)=1$.
This is the proof given in a set of notes I'm using:
$\varphi_r$ is bijective $\iff \varphi_r$ is surjective $\iff$ $\{\varphi_r(x)^t : 0 \leq t < n\} = C_n \iff \mathrm{ord}( \varphi_r(x)) = n \iff \frac{n}{\gcd(r,n)} = n \iff \gcd(r,n)=1$
I'm confused by the step $\{\varphi_r(x)^t : 0 \leq t < n\} = C_n \iff \mathrm{ord}( \varphi_r(x)) = n$. What is the justification for this?
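The theorem itself is easy to sanity-check by brute force on the exponents: $\varphi_r$ sends $x^s \mapsto x^{rs}$, i.e. $s \mapsto rs \bmod n$. A small sketch (the helper `is_bijective` is my own illustrative name):

```python
from math import gcd

def is_bijective(r, n):
    """Check whether s -> r*s (mod n), i.e. x^s -> x^{rs}, permutes the
    exponents {0, 1, ..., n-1} of the cyclic group C_n."""
    return len({(r * s) % n for s in range(n)}) == n

# The theorem: phi_r is bijective exactly when gcd(r, n) = 1.
for n in range(1, 30):
    for r in range(n):
        assert is_bijective(r, n) == (gcd(r, n) == 1)
print("verified for n < 30")
```

This does not prove the step you are asking about, but it confirms the statement the proof is driving at.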
Working in the set theory ZFC, all reasoning about classes is strictly informal. A class is informally taken to consist of all sets satisfying some condition. For example, the 'universe' is the collection of sets $x$ which satisfy $x=x$. More formally, two formulae $\phi$ and $\psi$ (with one free variable) refer to the same class if $\forall x(\phi(x) \leftrightarrow \psi(x))$, and a set $x$ is a 'member' of the class referred to by $\phi$ if and only if $\phi(x)$ is true. We denote this class by $\{ x\, :\, \phi(x)\}$, so for example $\{ x\, :\, x=x \}$ is the universe, the class of all sets.
All sets are themselves classes, but it's possible for a class not to be a set. Let $V$ denote the class of all sets satisfying $x=x$. If $V$ were a set, then the axiom schema of separation would dictate that $W = \{ x \in V\, :\, x \not \in x \}$ is a set. But then $W \in W \leftrightarrow W \not \in W$, which is obviously nonsense; so the step where we went wrong must have been to assume that $V$ is a set! This is Russell's paradox.
Some set theories treat classes as formal objects, such as NBG and MK.
Anyway, yes, a group is defined to be a
set $G$ with some operation on it, but the class of groups isomorphic to $G$ cannot be a set. To see a simple example, the isomorphism class of the trivial group contains $\{ x \}$ for each set $x$, with the trivial group operation $x \cdot x = x$. The class of all such groups cannot be a set since if it were then it would biject with the universe via $\{x\} \mapsto x$, thus making the universe a set by the axiom schema of replacement (contradicting what we saw above). |
I read in a book about optical fibers that the different spectral components of a light pulse transmitted in the fiber propagate with different velocities due to a wavelength dependent refractive index. Can someone explain that? Why is that silica refractive index depends on the wavelength/frequency of the wave?
The fundamental reason for the wavelength dependence of refractive index ($n$), in fact the fundamental description of refraction itself, is the domain of quantum field theory and is beyond my understanding. Hopefully somebody else can provide an answer on that subject.
However, I can state that it isn't just silica that has a wavelength dependent $n$. In fact, every material has some wavelength dependence, and this property is called
dispersion. In optical materials, the dispersion curve is very well approximated by the Sellmeier Equation:$$ n^2(\lambda) = 1 + \sum_k \frac{B_k \lambda^2}{\lambda^2 - C_k} $$
usually taken to $k=3$, where $B_k$ and $C_k$ are measured experimentally. As far as I know this equation is not derived from theory; it is completely empirical.
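For a concrete feel, here is a sketch evaluating the Sellmeier equation with commonly quoted fused-silica coefficients (Malitson's values; treat them as illustrative rather than authoritative):

```python
import math

# Commonly quoted Sellmeier coefficients for fused silica
# (wavelengths in micrometers) -- illustrative values.
B = (0.6961663, 0.4079426, 0.8974794)
C = (0.0684043**2, 0.1162414**2, 9.896161**2)  # um^2

def n_silica(lam_um):
    """Refractive index of fused silica at wavelength lam_um (microns)."""
    lam2 = lam_um**2
    return math.sqrt(1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C)))

# Dispersion: n varies measurably across the telecom window.
print(n_silica(1.31), n_silica(1.55))
```

The values come out around 1.44–1.45, and the small difference between them is exactly what spreads the spectral components of a pulse in a fiber.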
Given that electron and atomic beams also exhibit refraction, this appears to be a particle property: for a given medium, velocity is inversely proportional to the particle's mass/size. The photon behaves as a particle in this effect, with its mass given by the de Broglie relation $m = h\nu/c^2$, where $\nu$ is the frequency.
I got a task, I don't quite know how to solve.
I've got the following Hamiltonian: $$ \hat H = \frac{B}{\hbar^2}\hat{\mathbf S}_1\cdot \hat{\mathbf S}_2+\frac{C}{\hbar}\left(\hat S_{1z}+\hat S_{2z}\right), \qquad \hat{\mathbf S}_{j=1,2}=(\hat S_{jx},\hat S_{jy},\hat S_{jz}). $$ ($B,C$ constants)
My task is to calculate the Eigenvalues and Eigenstates of $\hat{H}$ for:
Two spin 1/2 particles
One spin 1/2 and one spin 1 particle
I got a Tip. I have to write Hamiltonian with the following operators $\hat{S}^2,\hat{S}_z,\hat{S}_1^2,\hat{S}_2^2$, where $\boldsymbol{\hat{S}}=\boldsymbol{\hat{S}_1}+\boldsymbol{\hat{S}_2}$
This was no problem: using $\hat{\mathbf S}_1\cdot \hat{\mathbf S}_2=\frac{1}{2}\left(\hat{S}^2-\hat{S}_1^2-\hat{S}_2^2\right)$, I get $\hat{H}=\frac{B}{2\hbar^2}(\hat{S}^2-\hat{S}_1^2-\hat{S}_2^2)+\frac{C}{\hbar}\hat{S}_z$
Now I'm pretty much stuck. My idea was to write out these operators as matrices, after which the rest is simple linear algebra, but I don't quite understand how. Furthermore, I heard that I need the Clebsch-Gordan coefficients, but I don't know exactly where.
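For the two spin-1/2 case one can bypass Clebsch-Gordan entirely and just diagonalize the $4\times 4$ matrix numerically. A sketch with illustrative constants $B=1$, $C=0.3$ and $\hbar=1$ (my choices, not from the problem), checked against the coupled-basis prediction singlet $-\tfrac{3}{4}B$, triplet $\tfrac{1}{4}B + Cm$:

```python
import numpy as np

hbar = 1.0
B, C = 1.0, 0.3  # illustrative values

# Spin-1/2 operators in units of hbar
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Two-particle operators via tensor (Kronecker) products
S1 = [np.kron(S, I2) for S in (Sx, Sy, Sz)]
S2 = [np.kron(I2, S) for S in (Sx, Sy, Sz)]

H = (B / hbar**2) * sum(a @ b for a, b in zip(S1, S2)) \
    + (C / hbar) * (S1[2] + S2[2])

evals = np.sort(np.linalg.eigvalsh(H))

# Coupled-basis prediction: singlet -3B/4; triplet B/4 + C*m, m = -1, 0, 1
expected = np.sort([-0.75 * B, 0.25 * B - C, 0.25 * B, 0.25 * B + C])
print(evals)  # matches `expected`
```

The spin-1/2 + spin-1 case works the same way with $3\times 3$ spin-1 matrices, giving a $6\times 6$ Hamiltonian.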
The solution is quite nice, and simply relies on the fact that $\lim_{x\to 0^+} x^x = 1$, hence for $n$ large enough, we can approximate the integral with $\int_0^{1/n} x\, dx$ instead. There’s an easy generalization of this problem:\begin{align*}\lim_{n\to \infty} n^{k+1} \int_0^{1/n} x^{x+k} \, dx = 1/(k + 1).\end{align*}
Generalizing further, we don’t even need the composite exponential: the proof just needs $f(x)$ to be a function with $\lim_{x\to 0^+} f(x) = 1$ and an integration bound approaching $0$.
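A quick numerical sanity check of the generalization, using a simple trapezoid rule (a sketch, not a proof):

```python
import numpy as np

def check_limit(n, k, m=200001):
    """Numerically evaluate n^{k+1} * integral_0^{1/n} x^{x+k} dx
    with a simple trapezoid rule on m points."""
    x = np.linspace(0.0, 1.0 / n, m)
    y = x ** (x + k)
    dx = x[1] - x[0]
    integral = dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
    return n ** (k + 1) * integral

print(check_limit(1000, 1))  # approaches 1/(k+1) = 1/2 as n grows
```

For $n = 1000$, $k = 1$ the value already sits within a fraction of a percent of $1/2$.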
Recently finished reading Celeste Ng’s Everything I Never Told You in some five days while in Florida. I’ve never read a book written by an Asian-American author before, nor one where Asian-American issues are discussed in great detail. A central tenet of one character, wanting to fit in but being unable to, resonated with me. The issues I faced when I first moved here were less significant than the ones in the book (which took place in the 60s), but this concept of wanting people to see one’s differences as something other than skin color still holds today.
The left controller stick of my Switch has been suffering greatly from drift recently, and I finally managed to fix it. The parts were quite cheap, with the tools included from Amazon. It felt quite nice again to work with my hands and really made me want to go do some small projects in the Brown design lab.
My adviser told me about this meshing of a cube (or any hexahedral element) into 6 different tetrahedrons, which is easy to draw. For the sake of exposition, we will consider the cube $(-1,1)^3$. The procedure is as follows:
Draw a diagonal from $(-1,-1, -1)$ to $(1,1,1)$.
Now, project the diagonal to each of the 6 faces of the cube, which will result in a mesh of the cube.
While the procedure is simple enough, the individual tetrahedrons were a bit difficult to visualize. To help with that, I’ve made a small Mathematica script that one can play with:
So what does the “conforming” part of the title mean? Of course, there is an easier way to tile the cube using only 5 tetrahedrons, but if you put together multiple cubes, one has to be careful about how they are oriented. Using the above meshing, as long as the cubes are not too distorted, one can easily create a conforming tetrahedral mesh by drawing the diagonals in the same direction.
For example, below we have eight hexahedral elements laid out in a cube, but there are three slabs, three columns, and two cubes (with one significantly smaller). This whole thing was needed so that I could construct something as anisotropic as the mesh below without resorting to fancy software.
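The six tetrahedra can also be enumerated directly (this construction is often called the Kuhn or Freudenthal subdivision). A sketch on the unit cube $[0,1]^3$ rather than $(-1,1)^3$ for brevity; every tetrahedron contains the main diagonal, and each axis ordering gives one tetrahedron:

```python
from itertools import permutations

import numpy as np

def kuhn_tetrahedra():
    """Six tetrahedra of [0,1]^3: walk from (0,0,0) to (1,1,1) one axis
    at a time; each of the 6 axis orderings yields one tetrahedron, and
    all of them share the main diagonal."""
    e = np.eye(3)
    tets = []
    for p in permutations(range(3)):
        v0 = np.zeros(3)
        v1 = v0 + e[p[0]]
        v2 = v1 + e[p[1]]
        v3 = np.ones(3)
        tets.append((v0, v1, v2, v3))
    return tets

def volume(t):
    v0, v1, v2, v3 = t
    return abs(np.linalg.det(np.array([v1 - v0, v2 - v0, v3 - v0]))) / 6.0

vols = [volume(t) for t in kuhn_tetrahedra()]
print(vols)  # six tetrahedra, each of volume 1/6
```

The volumes sum to 1, so the six tetrahedra tile the cube exactly.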
I finished this series relatively quickly, probably in the span of a month total for three books. Looking back, the best books were probably the first two. There was just an air of mystery surrounding the nature of the invading aliens. Who are they? Why are they coming? What kinds of technology do they have? These questions really drive the first novel toward a satisfying conclusion.
In the second and third books, where time skips anywhere from one to a few dozen years, a bleak picture of the universe is painted by Liu (the author). To no surprise, the universe of the novel is populated with lifeforms who mistrust each other and seek to destroy one another. Everything is explained quite thoroughly, but sometimes a bit too much. I wish he left some deduction for the readers to make ourselves rather than spoon feeding all the details.
There are a few more criticisms I have of the second and third books:
The character development falls mostly flat. I really didn’t care about any of them but rather the state of humanity as a whole. In contrast, the first book contained a fascinating historical overview set in communist China which helped build the characters.
Liu is really quite imaginative in the types of weapons that aliens with far superior technology can employ. Unfortunately, some of them seem quite farfetched. I just think that if they possess the power to alter reality, there would be better ways of waging war.
Multiple times, Liu thinks that society as a whole would “agree” on an idea. As we see in our current political situation, this really doesn’t make any sense.
One can solve Poisson’s problem $-\Delta u = f$ in $d$ dimensions with homogeneous Dirichlet boundary conditions using a mixed formulation as explained below:
Let $\sigma = \nabla u$, then for a sufficiently smooth function $\tau$, by Green’s theorem\begin{align*}(\sigma, \tau) &= (\nabla u, \tau) \\&= -(u, \textrm{div } \tau).\end{align*}Again, choosing a sufficiently smooth function $v$, we have\begin{align*}f = -\textrm{div } \sigma \implies (f, v) = (-\textrm{div } \sigma, v).\end{align*}This gives the saddle-point problem: find $(\sigma, u) \in V \times M$ such that\begin{align*}(\sigma, \tau) + (u, \textrm{div } \tau) &= 0\\(\textrm{div } \sigma, v) &= -(f, v)\end{align*}hold for all $(\tau, v) \in V \times M$. Note that we don’t have to take a derivative of $u$, hence it’s natural to try $M = L^2$, but what about the space $V$?
One very easy choice to guess is $V = [H^1(\Omega)]^d$, as we want the divergence to be well defined, but unfortunately this doesn’t work: the gradient of the solution to Poisson’s problem can easily fail to be in $[H^1(\Omega)]^d$.
In order to illustrate this, consider $u =\left(r^{2/3}-r^{5/3}\right) \sin \left(\frac{2 \theta }{3}\right)$ on the domain of the unit circle with bottom left quarter taken out. It’s not hard to see that $u = 0$ on the boundary of the domain, and we can easily find the $f$ such that it satisfies Poisson’s equation. Now, we can either calculate the gradient exactly or argue as follows.
First, recall how to take a gradient in polar coordinates. Note that $\partial_r u \approx r^{-1/3}$ plus higher order terms, and that $\frac{1}{r}\partial_\theta u \approx r^{-1/3}$ plus higher order terms also. Now, one can calculate the $H^1$ seminorm of the gradient to see that it is unbounded: differentiating once more produces $r^{-4/3}$ terms, so we’re integrating $(r^{-4/3})^2 r$ over $[0,1]$ (the extra $r$ comes from the change of variables in polar integration), which diverges.
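One can let a CAS confirm the leading $r^{-4/3}$ behaviour of the second radial derivative; a small sympy sketch:

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)
u = (r**sp.Rational(2, 3) - r**sp.Rational(5, 3)) * sp.sin(2 * theta / 3)

# Second radial derivative behaves like r^{-4/3} near r = 0, so
# |partial_rr u|^2 * r ~ r^{-5/3}, which is not integrable on (0, 1).
u_rr = sp.diff(u, r, 2)
lead = sp.limit(u_rr * r**sp.Rational(4, 3), r, 0)
print(sp.simplify(lead))  # -2*sin(2*theta/3)/9: finite and nonzero
```

Since the limit is finite and nonzero, $\partial_{rr} u$ really does blow up like $r^{-4/3}$, and the gradient is not in $[H^1]^2$.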
The above is an example of why the space $H(\textrm{div})$ is needed.
Once again, Dunkey has proven himself to be a modern day Donald Draper… in some sense. I bought Celeste almost strictly due to how fun it seemed. It truly is a great game with tight controls and extremely interesting level design.
The first point is starting to get quite standard now, but I want to reiterate on the latter point. It seems many high-ceiling platformers like Supermeatboy or the age-old N game constantly rely on precision. Celeste throws that away with an emphasis on when/where/how you use the air dash. That air dash, that one extra mechanic, really is the crux. Honestly, it reminded me of Ori’s dash, but the level design here is more like a puzzle.
Of course, the game can get a bit annoying. There are places where precise timing is the only way through the level (or so it seems to me?). Flag number 9 is particularly annoying. One can turn on the assist mode, but it really breaks the game by making it very easy. Another quirk is that it doesn’t save automatically when quitting from the Switch; this caused me to have to beat certain levels twice as I was switching between games.
All in all, quite a fun game with a ton of content. Worth it for just $20.
The edge functions' orthogonality is explicitly stated as $\hat a(u, v) = 0$ for all $u \in \Gamma_i$ (edge space) and $v \in \mathcal{I}$ (interior space). A very natural question is why the vertex functions do not need to be orthogonal to the interior functions. The fun fact is that they secretly are.
Note that the paper is for the $H^1$ semi-norm, hence $\hat a(u, v) = \int_T \nabla u \cdot \nabla v \, dx$. Now let $u$ be a hat (or bilinear) function, and let $v \in \mathcal{I}$. Then we have that\begin{align*}\int_T \nabla u \cdot \nabla v = \int_{\partial T} (\partial_n u)\, v - \int_T \Delta u\, v = 0.\end{align*}The first term is 0 because the bubble function $v$ vanishes on the boundary, and the second because $\Delta u = 0$, as $u$ is (bi)linear.
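This is easy to verify symbolically on the reference square; a sketch with one illustrative hat/bubble pair (my choice of functions, not the paper's):

```python
import sympy as sp

x, y = sp.symbols("x y")

# A bilinear vertex ("hat") function and an interior bubble on [0,1]^2:
u = (1 - x) * (1 - y)            # hat function at the corner (0, 0)
v = x * (1 - x) * y * (1 - y)    # bubble: vanishes on the boundary

grad = lambda f: (sp.diff(f, x), sp.diff(f, y))
ux, uy = grad(u)
vx, vy = grad(v)

# The H^1 semi-inner product a(u, v) over the square:
a_uv = sp.integrate(sp.integrate(ux * vx + uy * vy, (x, 0, 1)), (y, 0, 1))
print(a_uv)  # 0
```

The same computation with any of the four hat functions and any interior bubble gives zero, matching the Green's identity argument above.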
For our first two papers, we essentially reused the same few examples as model problems to test our method with (sine-Gordon and Brusselator). For our next paper, my advisor wanted something different and pointed towards the Grey-Scott equations. It’s a simple reaction diffusion equation as follows\begin{align*}\frac{\partial u}{\partial t} &= d_u\Delta u – uv^2 + F(1 – u) \\\frac{\partial v}{\partial t} &= d_v\Delta v + uv^2 – (F+k)v\end{align*}where $F, k, d_u, d_v$ are constants.
There’s a short paper (“Complex Patterns in a Simple System,” by John E. Pearson) where he plots the function for different values of $F, k$. The problem for me in replicating that paper is that Pearson employed a periodic boundary condition, which is easy to implement for finite difference and spectral methods, but a bit awkward for finite element methods (if you’re not using a very specific mesh).
The solution is quite nifty. Instead of a 2D plane, we simply project the domain onto a torus. It turns out FEM code on a surface is almost the same as on a plane, unless one uses curvilinear elements which then becomes a hassle. Furthermore, visualization gives pretty cool results… take a look at the two simulations below (their parameters differ just ever so slightly, but ends up giving drastically different patterns, though both reach a steady state):
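For reference, a periodic-boundary Gray-Scott time-stepper is only a few lines with finite differences. This is a sketch in lattice units with illustrative parameters (not the torus FEM setup described above, and not Pearson's exact values):

```python
import numpy as np

def gray_scott(n=64, steps=200, du=0.16, dv=0.08, F=0.035, k=0.065, dt=1.0):
    """Explicit finite-difference Gray-Scott on a periodic n x n grid
    (lattice-unit diffusion constants; illustrative parameters)."""
    rng = np.random.default_rng(0)
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # Seed a perturbed square in the middle to break symmetry
    s = slice(n // 2 - 5, n // 2 + 5)
    u[s, s], v[s, s] = 0.5, 0.25
    u += 0.01 * rng.random((n, n))

    def lap(a):  # 5-point Laplacian; np.roll gives periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(steps):
        uvv = u * v * v
        u += dt * (du * lap(u) - uvv + F * (1 - u))
        v += dt * (dv * lap(v) + uvv - (F + k) * v)
    return u, v

u, v = gray_scott()
```

The `np.roll` trick is exactly the periodicity that is trivial here but awkward in FEM, which is what motivates the torus construction.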
Every year, I make the resolution of blogging and journalling more, and each passing year, I’ve noticed my failure. Yet here I am again stubborn as always to see how I can keep this up. Since it’s been awhile, here’s a laundry list of updates:
Family and I went cruising on RCI to Haiti, Jamaica and Mexico over Christmas, and it felt just so artificial. There was a thin veneer of luxury painted over the backs of the employees, who were (almost) all from poor countries. Everything seems to be nickel and dimed…. nevertheless, it’s still relaxing not to do any chores or cooking.
It’s incredibly hard to attempt to work at a place where you have never worked before. I look forward to getting back to writing and math tomorrow (and lifting/working out).
Finished Sapiens, Fahrenheit 451, and How I Killed Pluto and Why It Had It Coming. All fun books to read, with Sapiens being a bit too long and Fahrenheit too short. The Pluto book was quite fun to read, and it’s interesting to see how other fields work.
We were introduced to hyperbolic functions previously, along with some of their basic properties. In this section, we look at differentiation and integration formulas for the hyperbolic functions and their inverses.
Derivatives and Integrals of the Hyperbolic Functions
Recall that the hyperbolic sine and hyperbolic cosine are defined as
\[\sinh x=\dfrac{e^x−e^{−x}}{2}\]
and
\[\cosh x=\dfrac{e^x+e^{−x}}{2}.\]
The other hyperbolic functions are then defined in terms of \(\sinh x\) and \(\cosh x\). The graphs of the hyperbolic functions are shown in Figure \(\PageIndex{1}\).
It is easy to develop differentiation formulas for the hyperbolic functions. For example, looking at \(\sinh x\) we have
\[\begin{align*} \dfrac{d}{dx} \left(\sinh x \right)&=\dfrac{d}{dx} \left(\dfrac{e^x−e^{−x}}{2}\right) \\ &=\dfrac{1}{2}\left[\dfrac{d}{dx}(e^x)−\dfrac{d}{dx}(e^{−x})\right] \\ &=\dfrac{1}{2}[e^x+e^{−x}] \\ &=\cosh x. \end{align*} \]
Similarly,
\[\dfrac{d}{dx} \cosh x=\sinh x.\]
We summarize the differentiation formulas for the hyperbolic functions in Table \(\PageIndex{1}\).
\(\dfrac{d}{dx}\sinh x=\cosh x\)
\(\dfrac{d}{dx}\cosh x=\sinh x\)
\(\dfrac{d}{dx}\tanh x=\text{sech}^2 \,x\)
\(\dfrac{d}{dx}\text{coth } x=−\text{csch}^2\, x\)
\(\dfrac{d}{dx}\text{sech } x=−\text{sech}\, x \tanh x\)
\(\dfrac{d}{dx}\text{csch } x=−\text{csch}\, x \coth x\)
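These six formulas can be verified symbolically; a quick check using SymPy's built-in hyperbolic functions (rewriting in exponentials so the simplifier can cancel the differences):

```python
import sympy as sp

x = sp.symbols("x")
pairs = [
    (sp.sinh(x), sp.cosh(x)),
    (sp.cosh(x), sp.sinh(x)),
    (sp.tanh(x), sp.sech(x) ** 2),
    (sp.coth(x), -sp.csch(x) ** 2),
    (sp.sech(x), -sp.sech(x) * sp.tanh(x)),
    (sp.csch(x), -sp.csch(x) * sp.coth(x)),
]
for f, df in pairs:
    # The difference must simplify to zero once written in exponentials
    assert sp.simplify((sp.diff(f, x) - df).rewrite(sp.exp)) == 0
print("all six derivative formulas check out")
```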
Let’s take a moment to compare the derivatives of the hyperbolic functions with the derivatives of the standard trigonometric functions. There are a lot of similarities, but differences as well. For example, the derivatives of the sine functions match:
\[\dfrac{d}{dx} \sin x=\cos x\]
and
\[\dfrac{d}{dx} \sinh x=\cosh x. \nonumber\]
The derivatives of the cosine functions, however, differ in sign:
\[\dfrac{d}{dx} \cos x=−\sin x, \nonumber\]
but
\[\dfrac{d}{dx} \cosh x=\sinh x. \nonumber\]
As we continue our examination of the hyperbolic functions, we must be mindful of their similarities and differences to the standard trigonometric functions. These differentiation formulas for the hyperbolic functions lead directly to the following integral formulas.
\[ \begin{align} \int \sinh u \,du &=\cosh u+C \\[4pt] \int \text{csch}^2 u \, du &=−\coth u+C \\[4pt] \int \cosh u \,du &=\sinh u+C \\[4pt] \int \text{sech } u \tanh u \,du &=−\text{sech } u+C \\[4pt] \int \text{sech}^2u \,du &=\tanh u+C \\[4pt] \int \text{csch } u \coth u \,du &=−\text{csch } u+C \end{align}\]
Example \(\PageIndex{1}\): Differentiating Hyperbolic Functions
Evaluate the following derivatives:
a. \(\dfrac{d}{dx}(\sinh(x^2))\)
b. \(\dfrac{d}{dx}(\cosh x)^2\)
Solution:
Using the formulas in Table \(\PageIndex{1}\) and the chain rule, we get
a. \(\dfrac{d}{dx}(\sinh(x^2))=\cosh(x^2)⋅2x\)
b. \(\dfrac{d}{dx}(\cosh x)^2=2\cosh x\sinh x\)
Exercise \(\PageIndex{1}\)
Evaluate the following derivatives:
a. \(\dfrac{d}{dx}(\tanh(x^2+3x))\)
b. \(\dfrac{d}{dx}\left(\dfrac{1}{(\sinh x)^2}\right)\)
Hint
Use the formulas in Table \(\PageIndex{1}\) and apply the chain rule as necessary.
Answer a
\(\dfrac{d}{dx}(\tanh(x^2+3x))=(\text{sech}^2(x^2+3x))(2x+3)\)
Answer b
\(\dfrac{d}{dx}\left(\dfrac{1}{(\sinh x)^2}\right)=\dfrac{d}{dx}(\sinh x)^{−2}=−2(\sinh x)^{−3}\cosh x\)
Example \(\PageIndex{2}\): Integrals Involving Hyperbolic Functions
Evaluate the following integrals:
a. \( \displaystyle \int x\cosh(x^2)dx\)
b. \( \displaystyle \int \tanh x\,dx\)
Solution:
We can use u-substitution in both cases.
a. Let \(u=x^2\). Then, \(du=2xdx\) and
\[\begin{align*} \int x\cosh (x^2)dx &=\int \dfrac{1}{2}\cosh u\,du \\[4pt] &=\dfrac{1}{2}\sinh u+C \\[4pt] &=\dfrac{1}{2}\sinh (x^2)+C. \end{align*}\]
b. Let \(u=\cosh x\). Then, \(du=\sinh x\,dx\) and
\[\begin{align*} \int \tanh x \,dx=\int \dfrac{\sinh x}{\cosh x}dx &=\int \dfrac{1}{u}du \\[4pt] &=\ln|u|+C \\[4pt] &= \ln|\cosh x|+C.\end{align*}\]
Note that \(\cosh x>0\) for all \(x\), so we can eliminate the absolute value signs and obtain
\[\int \tanh x \,dx=\ln(\cosh x)+C. \nonumber\]
Exercise \(\PageIndex{2}\)
Evaluate the following integrals:
a. \(\displaystyle \int \sinh^3x \cosh x \,dx\)
b. \(\displaystyle \int \text{sech}^2(3x)\, dx\)
Hint
Use the formulas above and apply u-substitution as necessary.
Answer a
\(\displaystyle \int \sinh^3x \cosh x \,dx=\dfrac{\sinh^4x}{4}+C\)
Answer b
\(\displaystyle \int \text{sech }^2(3x) \, dx=\dfrac{\tanh(3x)}{3}+C\)
Calculus of Inverse Hyperbolic Functions
Looking at the graphs of the hyperbolic functions, we see that with appropriate range restrictions, they all have inverses. Most of the necessary range restrictions can be discerned by close examination of the graphs. The domains and ranges of the inverse hyperbolic functions are summarized in Table \(\PageIndex{2}\).
\(\sinh^{−1}x\): domain \((−∞,∞)\), range \((−∞,∞)\)
\(\cosh^{−1}x\): domain \((1,∞)\), range \([0,∞)\)
\(\tanh^{−1}x\): domain \((−1,1)\), range \((−∞,∞)\)
\(\coth^{−1}x\): domain \((−∞,−1)∪(1,∞)\), range \((−∞,0)∪(0,∞)\)
\(\text{sech}^{−1}x\): domain \((0,1)\), range \([0,∞)\)
\(\text{csch}^{−1}x\): domain \((−∞,0)∪(0,∞)\), range \((−∞,0)∪(0,∞)\)
The graphs of the inverse hyperbolic functions are shown in the following figure.
To find the derivatives of the inverse functions, we use implicit differentiation. We have
\[\begin{align} y&=\sinh^{−1}x \\ \sinh y&=x \\ \dfrac{d}{dx} \sinh y&=\dfrac{d}{dx}x \\ \cosh y\dfrac{dy}{dx}&=1. \end{align}\]
Recall that \(\cosh^2y−\sinh^2y=1,\) so \(\cosh y=\sqrt{1+\sinh^2y}\). Then,
\[\dfrac{dy}{dx}=\dfrac{1}{\cosh y}=\dfrac{1}{\sqrt{1+\sinh^2y}}=\dfrac{1}{\sqrt{1+x^2}}.\]
We can derive differentiation formulas for the other inverse hyperbolic functions in a similar fashion. These differentiation formulas are summarized in Table \(\PageIndex{3}\).
\(\dfrac{d}{dx}\sinh^{−1}x=\dfrac{1}{\sqrt{1+x^2}}\)
\(\dfrac{d}{dx}\cosh^{−1}x=\dfrac{1}{\sqrt{x^2−1}}\)
\(\dfrac{d}{dx}\tanh^{−1}x=\dfrac{1}{1−x^2}\)
\(\dfrac{d}{dx}\coth^{−1}x=\dfrac{1}{1−x^2}\)
\(\dfrac{d}{dx}\text{sech}^{−1}x=\dfrac{−1}{x\sqrt{1−x^2}}\)
\(\dfrac{d}{dx}\text{csch}^{−1}x=\dfrac{−1}{|x|\sqrt{1+x^2}}\)
Note that the derivatives of \(\tanh^{−1}x\) and \(\coth^{−1}x\) are the same. Thus, when we integrate \(1/(1−x^2)\), we need to select the proper antiderivative based on the domain of the functions and the values of \(x\). Integration formulas involving the inverse hyperbolic functions are summarized as follows.
\[\int \dfrac{1}{\sqrt{1+u^2}}du=\sinh^{−1}u+C\]
\[\int \dfrac{1}{u\sqrt{1−u^2}}du=−\text{sech}^{−1}|u|+C\]
\[\int \dfrac{1}{\sqrt{u^2−1}}du=\cosh^{−1}u+C\]
\[\int \dfrac{1}{u\sqrt{1+u^2}}du=−\text{csch}^{−1}|u|+C\]
\[\int \dfrac{1}{1−u^2}du=\begin{cases}\tanh^{−1}u+C, & \text{if } |u|<1\\[4pt] \coth^{−1}u+C, & \text{if } |u|>1\end{cases}\]
Example \(\PageIndex{3}\): Differentiating Inverse Hyperbolic Functions
Evaluate the following derivatives:
a. \(\dfrac{d}{dx}(\sinh^{−1}(\dfrac{x}{3}))\)
b. \(\dfrac{d}{dx}(\tanh^{−1}x)^2\)
Solution
Using the formulas in Table \(\PageIndex{3}\) and the chain rule, we obtain the following results:
a. \(\dfrac{d}{dx}(\sinh^{−1}(\dfrac{x}{3}))=\dfrac{1}{3\sqrt{1+\dfrac{x^2}{9}}}=\dfrac{1}{\sqrt{9+x^2}}\)
b. \(\dfrac{d}{dx}(\tanh^{−1}x)^2=\dfrac{2(\tanh^{−1}x)}{1−x^2}\)
Exercise \(\PageIndex{3}\)
Evaluate the following derivatives:
a. \(\dfrac{d}{dx}(\cosh^{−1}(3x))\)
b. \(\dfrac{d}{dx}(\coth^{−1}x)^3\)
Hint
Use the formulas in Table \(\PageIndex{3}\) and apply the chain rule as necessary.
Answer a
\[\dfrac{d}{dx}(\cosh^{−1}(3x))=\dfrac{3}{\sqrt{9x^2−1}} \nonumber\]
Answer b
\[\dfrac{d}{dx}(\coth^{−1}x)^3=\dfrac{3(\coth^{−1}x)^2}{1−x^2} \nonumber\]
Example \(\PageIndex{4}\): Integrals Involving Inverse Hyperbolic Functions
Evaluate the following integrals:
\(\displaystyle \int \dfrac{1}{\sqrt{4x^2−1}}dx\) \(\displaystyle \int \dfrac{1}{2x\sqrt{1−9x^2}}dx\) Solution:
We can use u-substitution in both cases.
a. Let \(u=2x\). Then, \(du=2dx\) and we have
\[\begin{align*} \int \dfrac{1}{\sqrt{4x^2−1}}dx &=\int \dfrac{1}{2\sqrt{u^2−1}}du \\[4pt] &=\dfrac{1}{2}\cosh^{−1}u+C \\[4pt] &=\dfrac{1}{2}\cosh^{−1}(2x)+C. \end{align*} \]
b. Let \(u=3x.\) Then, \(du=3dx\) and we obtain
\[\begin{align*} \int \dfrac{1}{2x\sqrt{1−9x^2}}dx&=\dfrac{1}{2}\int \dfrac{1}{u\sqrt{1−u^2}}du \\[4pt] &=−\dfrac{1}{2}\text{sech}^{−1}|u|+C \\[4pt] &=−\dfrac{1}{2}\text{sech}^{−1}|3x|+C \end{align*}\]
Exercise \(\PageIndex{4}\)
Evaluate the following integrals:
a. \(\displaystyle \int \dfrac{1}{\sqrt{x^2−4}}dx,\quad x>2\)
b. \(\displaystyle \int \dfrac{1}{\sqrt{1−e^{2x}}}dx\)
Hint
Use the formulas above and apply u-substitution as necessary.
Answer a
\(\displaystyle \int \dfrac{1}{\sqrt{x^2−4}}dx=\cosh^{−1}(\dfrac{x}{2})+C\)
Answer b
\( \displaystyle \int \dfrac{1}{\sqrt{1−e^{2x}}}dx=−\text{sech}^{−1}(e^x)+C\)
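A convenient check is to differentiate each answer numerically and compare with the original integrand; \(\text{sech}^{−1}\) is not in Python's math module, but \(\text{sech}^{−1}u=\cosh^{−1}(1/u)\):

```python
import math

def dcentral(f, x, h=1e-6):
    # central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# a. d/dx cosh^(-1)(x/2) = 1/sqrt(x^2 - 4) for x > 2
x = 3.0
assert abs(dcentral(lambda t: math.acosh(t / 2), x) - 1 / math.sqrt(x * x - 4)) < 1e-6

# b. d/dx [-sech^(-1)(e^x)] = 1/sqrt(1 - e^(2x)) for x < 0
asech = lambda u: math.acosh(1 / u)  # valid for 0 < u < 1
x = -1.0
num = dcentral(lambda t: -asech(math.exp(t)), x)
assert abs(num - 1 / math.sqrt(1 - math.exp(2 * x))) < 1e-6
```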
Applications
One physical application of hyperbolic functions involves hanging cables. If a cable of uniform density is suspended between two supports without any load other than its own weight, the cable forms a curve called a catenary. High-voltage power lines, chains hanging between two posts, and strands of a spider’s web all form catenaries. The following figure shows chains hanging from a row of posts.
Hyperbolic functions can be used to model catenaries. Specifically, functions of the form \(y=a\cosh(x/a)\) are catenaries. Figure \(\PageIndex{4}\) shows the graph of \(y=2\cosh(x/2)\).
Example \(\PageIndex{5}\): Using a Catenary to Find the Length of a Cable
Assume a hanging cable has the shape \(10\cosh(x/10)\) for \(−15≤x≤15\), where \(x\) is measured in feet. Determine the length of the cable (in feet).
Solution
Recall from Section 6.4 that the formula for arc length is
\[\underbrace{\int ^b_a\sqrt{1+[f′(x)]^2}dx}_{\text{Arc Length}}. \nonumber\]
We have \(f(x)=10 \cosh(x/10)\), so \(f′(x)=\sinh(x/10)\). Then the arc length is
\[\int ^b_a\sqrt{1+[f′(x)]^2}dx=\int ^{15}_{−15}\sqrt{1+\sinh^2 \left(\dfrac{x}{10}\right)}dx. \nonumber\]
Now recall that
\[1+\sinh^2x=\cosh^2x, \nonumber\]
so we have
\[\begin{align*} \text{Arc Length} &= \int ^{15}_{−15}\sqrt{1+\sinh^2 \left(\dfrac{x}{10}\right)}dx \\[4pt] &=\int ^{15}_{−15}\cosh \left(\dfrac{x}{10}\right)dx \\[4pt] &= \left.10\sinh \left(\dfrac{x}{10}\right)\right|^{15}_{−15}=10\left[\sinh\left(\dfrac{3}{2}\right)−\sinh\left(−\dfrac{3}{2}\right)\right]=20\sinh \left(\dfrac{3}{2}\right) \\[4pt] &≈42.586\,ft. \end{align*}\]
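The closed form \(20\sinh(3/2)\) agrees with a direct numerical integration of \(\cosh(x/10)\) over \([−15,15]\):

```python
import math

# closed-form arc length from the worked example
length = 20 * math.sinh(1.5)
assert abs(length - 42.586) < 1e-3   # about 42.586 ft

# cross-check: trapezoidal integration of cosh(x/10) over [-15, 15]
n, a, b = 200_000, -15.0, 15.0
h = (b - a) / n
s = 0.5 * (math.cosh(a / 10) + math.cosh(b / 10))
s += sum(math.cosh((a + i * h) / 10) for i in range(1, n))
assert abs(s * h - length) < 1e-6
```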
Exercise \(\PageIndex{5}\):
Assume a hanging cable has the shape \(15 \cosh (x/15)\) for \(−20≤x≤20\). Determine the length of the cable (in feet).
Answer
\(52.95\) ft
Key Concepts
- Hyperbolic functions are defined in terms of exponential functions.
- Term-by-term differentiation yields differentiation formulas for the hyperbolic functions. These differentiation formulas give rise, in turn, to integration formulas.
- With appropriate range restrictions, the hyperbolic functions all have inverses. Implicit differentiation yields differentiation formulas for the inverse hyperbolic functions, which in turn give rise to integration formulas.
- The most common physical applications of hyperbolic functions are calculations involving catenaries.
Glossary
catenary: a curve in the shape of the function \(y=a\cosh(x/a)\); a cable of uniform density suspended between two supports assumes the shape of a catenary
Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
EDIT: As I mentioned in my original question, I do not have the background to fully understand @Timaeus's answer (which was very detailed indeed). I would appreciate it if someone could give a more 'classical physics' answer, even a less detailed one, to clarify some things for me a little bit more. In addition, I would like to know the difference between the terms 'wave' and 'wavefunction', and how two uncharged particles would interfere in the experiment demonstrated with a single-particle emission source.
Without enough theoretical background in physics, I post this question, which actually has two related parts.
Wavefunction and 'de Broglie hypothesis':
As far as I can understand, the wavefunction of a massless particle is described by the change in magnitude of a certain property of the particle; i.e., the wavefunction of a photon is described as the changing intensity of its EM field over time. This wave moves through space with velocity $c$ and carries energy equal to $hf$.
On the other hand, the 'de Broglie hypothesis' suggests that 'all matter has wave properties' and that the wavelength of this function is equal to $\lambda=h/p=h/(\gamma m v)$. Now here comes my first question: to which property of the particle is this wavefunction related? Or does this wavefunction actually describe the particle's motion (~) with its simultaneous transport through space at velocity $v$, carrying $\rm{KE}$?
b) In the double-slit experiment demonstrated with a single-electron emission source, to what physical property is the interference pattern related? Is this pattern the result of the interference of the accelerating electron's EM wave, or something else? If the source emits $n$ single electrons, how many arrivals of matter do we detect on the screen?
In single-variable calculus, differentiation and integration are thought of as inverse operations. For instance, to integrate a function \(f (x)\) it is necessary to find the antiderivative of \(f\) , that is, another function \(F(x)\) whose derivative is \(f (x)\). Is there a similar way of defining integration of real-valued functions of two or more variables? The answer is yes, as we will see shortly. Recall also that the definite integral of a nonnegative function \(f (x) \ge 0\) represented the area “under” the curve \(y = f (x)\). As we will now see, the double integral of a nonnegative real-valued function \(f (x, y) \ge 0\) represents the volume “under” the surface \(z = f (x, y)\).
Let \(f (x, y)\) be a continuous function such that \(f (x, y) \ge 0 \text{ for all }(x, y)\) on the rectangle \(R = \{(x, y) : a \le x \le b, c \le y \le d\}\) in \(\mathbb{R}^2\). We will often write this as \(R = [a,b] \times [c,d]\). For any number \(x^*\) in the interval \([a,b]\), slice the surface \(z = f (x, y)\) with the plane \(x = x^*\) parallel to the \(yz\)-plane. Then the trace of the surface in that plane is the curve \(f (x^*, y)\), where \(x^*\) is fixed and only \(y\) varies. The area \(A\) under that curve (i.e. the area of the region between the curve and the \(xy\)-plane) as \(y\) varies over the interval \([c,d]\) then depends only on the value of \(x^*\). So using the variable \(x\) instead of \(x^*\), let \(A(x)\) be that area (see Figure 3.1.1).
Figure 3.1.1 The area \(A(x) \text{ varies with }x\)
Then \(A(x) = \int_c^d f (x, y)d y\) since we are treating \(x\) as fixed, and only \(y\) varies. This makes sense since for a fixed \(x\) the function \(f (x, y)\) is a continuous function of \(y\) over the interval \([c,d]\), so we know that the area under the curve is the definite integral. The area \(A(x)\) is a function of \(x\), so by the “slice” or cross-section method from single-variable calculus we know that the volume \(V\) of the
solid under the surface \(z = f (x, y)\) but above the \(x y\)-plane over the rectangle \(R\) is the integral over \([a,b]\) of that cross-sectional area \(A(x)\):
\[V = \int_a^b A(x)dx = \int_a^b \left [\int_c^d f (x, y)d y \right ] dx \label{Eq3.1}\]
We will always refer to this volume as “the volume under the surface”. The above expression uses what are called
iterated integrals. First the function \(f (x, y)\) is integrated as a function of \(y\), treating the variable \(x\) as a constant (this is called integrating with respect to \(y\)). That is what occurs in the “inner” integral between the square brackets in Equation \ref{Eq3.1}. This is the first iterated integral. Once that integration is performed, the result is then an expression involving only \(x\), which can then be integrated with respect to \(x\). That is what occurs in the “outer” integral above (the second iterated integral). The final result is then a number (the volume). This process of going through two iterations of integrals is called double integration, and the last expression in Equation \ref{Eq3.1} is called a double integral.
Notice that integrating \(f (x, y)\) with respect to \(y\) is the inverse operation of taking the partial derivative of \(f (x, y)\) with respect to \(y\). Also, we could just as easily have taken the area of cross-sections under the surface which were parallel to the \(xz\)-plane, which would then depend only on the variable \(y\), so that the volume \(V\) would be
\[V = \int_c^d \left [\int_a^b f (x, y)dx \right ] dy \label{Eq3.2}\]
It turns out that in general the order of the iterated integrals does not matter. Also, we will usually discard the brackets and simply write
\[V=\int_c^d \int_a^b f (x, y)dx d y \label{Eq3.3}\]
where it is understood that the fact that \(dx\) is written before \(d y\) means that the function \(f (x, y)\) is first integrated with respect to \(x\) using the “inner” limits of integration \(a \text{ and }b\), and then the resulting function is integrated with respect to y using the “outer” limits of integration \(c \text{ and }d\). This order of integration can be changed if it is more convenient.
Example 3.1
Find the volume \(V\) under the plane \(z = 8x +6y\) over the rectangle \(R = [0,1]\times [0,2]\).
Solution
We see that \(f (x, y) = 8x+6y \ge 0 \text{ for }0 \le x \le 1 \text{ and }0 \le y \le 2\), so:
\[\nonumber \begin{align} V&=\int_0^2 \int_0^1 (8x+6y)dx \,d y \\ \nonumber &= \int_0^2 \left ( 4x^2 +6x y \big |_{x=0}^{x=1} \right )dy \\ \nonumber &=\int_0^2 (4+6y)d y \\ \nonumber &=4y+3y^2 \big |_0^2 \\ &=20 \end{align}\]
Suppose we had switched the order of integration. We can verify that we still get the same answer:
\[\nonumber \begin{align}V&=\int_0^1 \int_0^2 (8x+6y)dy \,d x \\ \nonumber &=\int_0^1 \left ( 8x y+3y^2 \big |_{y=0}^{y=2} \right )dx \\ \nonumber &=\int_0^1 (16x+12)dx \\ &=8x^2 +12x \big |_0^1 \\ \nonumber &=20 \end{align}\]
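The two computations above can be reproduced symbolically (here with SymPy, assuming it is available), confirming that both orders of integration give \(V=20\):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 8 * x + 6 * y

# inner integral in x, outer in y; then the swapped order
V1 = sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 2))
V2 = sp.integrate(sp.integrate(f, (y, 0, 2)), (x, 0, 1))

assert V1 == V2 == 20
```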
Example 3.2
Find the volume \(V\) under the surface \(z = e^{x+y}\) over the rectangle \(R = [2,3] \times [1,2]\).
Solution
We know that \(f (x, y) = e^{x+y} > 0 \text{ for all }(x, y)\), so
\[\nonumber \begin{align} V &=\int_1^2 \int_2^3 e^{x+y} dx\, d y \\ \nonumber &= \int_1^2 \left (e^{x+y} \big |_{x=2}^{x=3} \right )dy \\ \nonumber &= \int_1^2 (e^{y+3}-e^{y+2})dy \\ \nonumber &= e^{y+3}-e^{y+2} \big |_1^2 \\ \nonumber &= e^5 − e^4 −(e^4 − e^3 ) = e^5 −2e^4 + e^3 \end{align} \]
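The same symbolic check (SymPy assumed) reproduces \(e^5−2e^4+e^3\):

```python
import sympy as sp

x, y = sp.symbols('x y')
V = sp.integrate(sp.exp(x + y), (x, 2, 3), (y, 1, 2))

# should equal e^5 - 2e^4 + e^3
assert sp.simplify(V - (sp.E**5 - 2 * sp.E**4 + sp.E**3)) == 0
```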
Recall that for a general function \(f (x)\), the integral \(\int_a^b f (x)dx\) represents the difference of the area below the curve \(y = f (x)\) but above the \(x\)-axis when \(f (x) \ge 0\), and the area above the curve but below the \(x\)-axis when \(f (x) \le 0\). Similarly, the double integral of any continuous function \(f (x, y)\) represents the difference of the volume below the surface \(z = f (x, y)\) but above the \(x y\)-plane when \(f (x, y) \ge 0\), and the volume above the surface but below the \(x y\)-plane when \(f (x, y) \le 0\). Thus, our method of double integration by means of iterated integrals can be used to evaluate the double integral of
any continuous function over a rectangle, regardless of whether \(f (x, y) \ge 0\) or not.
Example 3.3
Evaluate \(\displaystyle \int_0^{2\pi}\int_0^{\pi}\sin(x+y)\,dx\,dy \)
Solution
Note that \(f (x, y) = \sin(x+ y)\) is both positive and negative over the rectangle \([0,\pi]\times [0,2\pi]\). We can still evaluate the double integral:
\[\nonumber \begin{align} \int_0^{2\pi} \int_0^{\pi}\sin(x+y)\,dx\,dy &=\int_0^{2\pi} \left (-\cos(x+y)\big |_{x=0}^{x=\pi} \right )dy \\ \nonumber &= \int_0^{2\pi} (-\cos(y+\pi)+\cos y)\,dy \\ \nonumber &=-\sin(y+\pi)+\sin y\,\big |_0^{2\pi} = -\sin 3\pi+\sin 2\pi - (-\sin \pi+\sin 0) \\ \nonumber &=0 \end{align} \]
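A symbolic evaluation (SymPy assumed) confirms that the positive and negative volumes cancel exactly:

```python
import sympy as sp

x, y = sp.symbols('x y')
V = sp.integrate(sp.sin(x + y), (x, 0, sp.pi), (y, 0, 2 * sp.pi))
assert V == 0
```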
Let $N = q^k n^2$ be an odd perfect number with Euler prime $q$. (That is, $\gcd(q,n)=1$ and $q \equiv k \equiv 1 \pmod 4$.) Let $\sigma(x)$ denote the
sum of the divisors of $x \in \mathbb{N}$.
Define $$D(n^2) := 2n^2 - \sigma(n^2)$$ to be the deficiency of the non-Euler part $n^2$.
CLAIM
$\gcd(n^2, D(n^2)) \neq 1$.
MY ATTEMPT
From this preprint, we have the relationships
$$\gcd(n^2, \sigma(n^2)) = \dfrac{D(n^2)}{\sigma(q^{k-1})} = \dfrac{\sigma(n^2)}{q^k}.$$
We also have the lower bound
$$\dfrac{\sigma(n^2)}{q^k} \geq 3.$$
Since $\gcd(a,b) = \gcd(a, ax+by)$ for $x, y \in \mathbb{Z}$, we have
$$\gcd(n^2, \sigma(n^2)) = \gcd(n^2, 2n^2 - \sigma(n^2)) = \gcd(n^2, D(n^2)) \geq 3.$$
Here is my question:
QUESTION
Is this proof correct?
Added February 13 2017
Note that, for the Descartes spoof $d = 198585576189 = KM$ (where $K$ is a square and $M$ is the quasi-Euler prime), then $$D(K) = D({3^2}\cdot{7^2}\cdot{{11}^2}\cdot{{13}^2}) = 819 = {3^2}\cdot{7}\cdot{13}$$ which divides $K = {3^2}\cdot{7^2}\cdot{{11}^2}\cdot{{13}^2}$. (In other words, $K$ is an odd deficient-perfect number.)
In particular, note that $$\gcd(K, D(K)) = {3^2}\cdot{7}\cdot{13} = D(K) \neq 1.$$
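These numbers for the Descartes spoof are easy to confirm by direct computation, using SymPy's divisor_sigma for $\sigma$:

```python
from math import gcd
from sympy import divisor_sigma

# K is the square part of the Descartes spoof d = 198585576189
K = 3**2 * 7**2 * 11**2 * 13**2
D = 2 * K - divisor_sigma(K)      # deficiency D(K)

assert D == 819 == 3**2 * 7 * 13  # D(K) = 819
assert K % D == 0                 # K is deficient-perfect: D(K) divides K
assert gcd(K, int(D)) == 819      # gcd(K, D(K)) = D(K), which is not 1
```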
Negative of Subring is Negative of Ring Theorem
Let $\struct {R, +, \circ}$ be a ring.
For each $x \in R$ let $-x$ denote the ring negative of $x$ in $R$.
Let $\struct {S, + {\restriction_S}, \circ {\restriction_S}}$ be a subring of $R$.
For each $x \in S$ let $\mathbin \sim x$ denote the ring negative of $x$ in $S$.
Then: $\forall x \in S: \mathbin \sim x = -x$ Proof
Let $i_S: S \to R$ be the inclusion mapping from $S$ to $R$.
By Ring Homomorphism of Addition is Group Homomorphism, $i_S$ is a group homomorphism on ring addition $+$.
Let $x \in S$.
Then:
$\mathbin \sim x = \map {i_S} {\mathbin \sim x}$ (as $\mathbin \sim x \in S$)
$\phantom{\mathbin \sim x} = -x$ (by Group Homomorphism Preserves Inverses)
$\blacksquare$
19 Mar 2018 Using a Bernoulli VAE on Real-Valued Observations
The Bernoulli observation VAE is meant to be used when one’s observed samples $x \in \sset{0, 1}^n$ are vectors of binary elements. However, I have, on occasion, seen people (and even papers) apply Bernoulli observation VAEs to real-valued samples $x \in [0, 1]^n$. This will be a quick and dirty post going over whether this unholy marriage of the Bernoulli VAE with real-valued samples is appropriate.
Background and Notation for Bernoulli VAE
Given an empirical distribution $\hat{p}(x)$ whose samples are binary $x \in \sset{0, 1}^n$, the VAE objective is
If $p_\theta(x \giv z)$ is furthermore a fully-factorized Bernoulli observation model, then the distribution can be expressed as
where $\pi: \Z \to [0, 1]^n$ is a neural network parameterized by $\theta$. As preparation for the next section, we shall—with a slight abuse of notation—also define
where $\pi \in [0, 1]^n$.
Applying Bernoulli VAE to Real-Valued Samples
Suppose we have a distribution $r(\pi)$ over $\pi$, and $\hat{p}(x)$ is in fact the marginalization of $r(\pi)p(x \giv \pi)$. This is the case for MNIST, where the real-valued samples are interpreted as observations of $\pi$. This allows us to construct the objective as
It turns out there is another equally valid lower bound
However, since $q_\phi(z \giv \pi)$ does not have access to $x$, it is unlikely to give a better approximation of $p_\theta(z \giv x)$ than the previous equation. Consequently, it is likely to be a looser bound (which can be verified empirically). A bit of tedious algebra shows that the objective is equivalent to
where the inner-most term is exactly the sum of element-wise cross-entropy terms, where each cross-entropy term is
Note that this is exactly the application of Bernoulli observation VAEs to real-valued samples. So long as the real-valued samples can be interpreted as the Bernoulli distribution parameters, then this lower bound is valid. However, as noted above, this lower bound tends to be looser.
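One way to see why this still trains sensibly: for a real-valued target $x \in [0, 1]$, the element-wise cross-entropy $-x\ln\pi - (1-x)\ln(1-\pi)$ is minimized at $\pi = x$, so the decoder is still pushed toward reproducing the data. A small numerical illustration (NumPy; the function name here is mine, not from the post):

```python
import numpy as np

def bernoulli_xent(x, pi):
    # element-wise cross-entropy between a real-valued target x in [0,1]
    # and a Bernoulli parameter pi in (0,1)
    return -(x * np.log(pi) + (1 - x) * np.log(1 - pi))

x = 0.3                              # a real-valued "pixel"
pis = np.linspace(0.01, 0.99, 99)
losses = bernoulli_xent(x, pis)

# the loss is minimized when the decoder outputs pi = x itself
best = pis[np.argmin(losses)]
assert abs(best - x) < 0.011
```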
I'm considering to study some high-dimensional Navier-Stokes equations. One problem is to do write the viscous equation for vorticity, helicity and other conserved quantities. I think it might be better if it is possible to work with differential form and exterior calculus? Is there any reference that I may find somewhere?
MS Mohamed et al begin with a "standard vector calculus formulation of the NS equations",
$$ \frac{∂ \bf u}{∂ t} - \mu ∆ {\bf u} + ({\bf u} \cdot \nabla) {\bf u} + \nabla p = 0 $$ $$ \nabla \cdot {\bf u} = 0 $$
and use the rotational form,
$$ \frac{∂ \bf{u}}{∂ t} + \mu \nabla \times \nabla \times {\bf u} - {\bf u}\times(\nabla \times {\bf u}) + \nabla p^d = 0 $$
to eventually derive
$$ \frac{∂ {\bf u}^\flat}{∂ t} + (-1)^{N+1} \mu \star d \star d {\bf u}^\flat + (-1)^{N+2}\star ({\bf u}^\flat \wedge \star d{\bf u}^\flat) + d p^d = 0, $$ $$ \star d \star {\bf u}^\flat = 0 $$
But they also later derive it as
$$ \frac{∂ d {\bf u}^\flat}{∂ t} + (-1)^{N+1} \mu d \star d \star d {\bf u}^\flat + (-1)^{N}d\star({\bf u}^\flat \wedge \star d{\bf u}^\flat) = 0, $$
Judging from the stars, -1s, and flats, these results could probably be simplified by substitution of the codifferential operator $d^\dagger$. Both are also the rotational form; I would rather provide a convective form, but I'm unaware of such a derivation in the literature.
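For reference, the passage from the convective to the rotational form rests on the vector identity $(\mathbf u\cdot\nabla)\mathbf u = (\nabla\times\mathbf u)\times\mathbf u + \nabla(|\mathbf u|^2/2)$, with the $|\mathbf u|^2/2$ term absorbed into $p^d$. A symbolic spot-check with SymPy on an arbitrary smooth field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# an arbitrary smooth test field (need not be divergence-free for this identity)
u = sp.Matrix([x * y, sp.sin(z) + y**2, x * z])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# convective term (u . grad) u, component-wise
conv = sp.Matrix([sum(u[j] * sp.diff(u[i], (x, y, z)[j]) for j in range(3))
                  for i in range(3)])

# rotational form: (curl u) x u + grad(|u|^2 / 2)
rot = curl(u).cross(u) + grad(u.dot(u) / 2)

assert (conv - rot).applyfunc(sp.simplify) == sp.zeros(3, 1)
```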
I urge Troy to actually post an answer rather than referring us to his book, but lacking the points I cannot leave a comment to that effect.
This is a fairly tricky concept, and I see research physicists stumbling with (analogues of) this problem with a somewhat alarming frequency. When you say
the nucleus is seemingly spherical
what that really means is that the electrons interact with a spherical object (essentially a point Coulomb charge at the nuclear centre) and therefore their dynamics must be invariant under any rotation about this point. However:
This does not mean that all the solutions to those dynamics must be spherically symmetric. What it does mean is that for every non-symmetric solution $S_1$ of those dynamics that points along some direction $\hat n_1$ and every other direction $\hat n_2$, there must exist a separate, equally valid, yet distinct, solution $S_2$ that points along $\hat n_2$.
Thus, it's perfectly possible for an atom to exist in an anisotropic state, like, say, the $2p_z$ state of hydrogen. The only thing that symmetry requires is that we have analogous $2p_{\hat{n}}$ states along any given axis $\hat n$ as possibilities, which is obviously true.
That still doesn't answer, of course, the question of which way a given atom in a $2p$ state will point, but the answer to that is simply that it depends on the situation, and you need to specify more information to say anything useful.
If all you know is that a given atom is in a $2p$ state but you don't have any more information, then all you can say is that the electron is in a mixed state of all possible orientations, which is isotropic and can be written down as $$\hat{\rho}_{2p,\text{ isotropic}} = \frac{1}{3}\sum_{j=x,y,z}|2p_j⟩⟨2p_j| = \frac{1}{4\pi} \int |2p_{\hat{n}}⟩⟨2p_{\hat{n}}| \mathrm d \Omega_{\hat{n}}.$$ This means that the electron is not in a pure state and therefore you cannot assign it a wavefunction, which is standard fare when you have not specified enough information to pin down a single state.
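The isotropy of that mixture can be checked numerically: writing $|2p_{\hat n}⟩ = n_x|2p_x⟩+n_y|2p_y⟩+n_z|2p_z⟩$, the sphere average of the projector should come out to one third of the identity. A Monte Carlo sketch with NumPy:

```python
import numpy as np

# uniform random directions on the sphere
rng = np.random.default_rng(0)
v = rng.normal(size=(200_000, 3))
n = v / np.linalg.norm(v, axis=1, keepdims=True)

# Monte Carlo sphere average of |2p_n><2p_n| in the {p_x, p_y, p_z} basis
rho = np.einsum('ki,kj->ij', n, n) / len(n)

# isotropic mixed state: identity/3, unit trace, no preferred axis
assert np.allclose(rho, np.eye(3) / 3, atol=5e-3)
assert abs(np.trace(rho) - 1) < 1e-9
```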
In practice, however, when you work with $2p$ states, or other anisotropic orbitals, you've normally prepared them yourself, by e.g. stimulating a transition up from the $1s$ state with a linearly polarized laser, or some other anisotropic pumping method. In those cases, the orientation will be fixed by the pumping mechanism, i.e. the symmetry gets broken by your preparation apparatus, which selects one out of the many possible orientations for the state.
There is much misconception in the question, but I'll hazard an answer. I am not going to give any links (no point linking to Wikipedia, it's just there), but I'll highlight the important terms in bold. If you want to research more, search for these. Any good general astronomy course textbook will cover these topics, too, if you want a bit more systematic approach.
My understanding is that the sun is basically a sphere of hydrogen with a helium core, and that the hydrogen is undergoing nuclear fusion to produce helium.
Basically, the Sun is a ball of hydrogen and helium, but this is not all there is. Being a Population I star, the Sun contains heavier elements (called metals in stellar astrophysics; anything lithium and heavier is considered metal in this sense). These elements already came with the gas cloud the Sun has formed from, and were produced by older stars that burst long ago. Despite low abundance, the metallicity plays an important role in the Sun's core power stability.
At some depth the gas ball compresses its inner area enough to heat it up so much that hydrogen fusion into helium begins. This area is called the core. This is where practically all fusion happens, and what is responsible for the star's energy production. For a Sun-mass star and below, the proton-proton chain dominates. The pp-chain energy output is approximately proportional to $T^4$. The good news is, if the reaction rate drops, then the outer layer of the star will compress the core, so it heats up, and the renewed energy output compensates for the compression. So this highly sensitive dependency on the temperature is what gives the star its long-term stability.
It is also notable that the center of the core is hotter and therefore more energetic than its periphery, and turns hydrogen into helium faster. Absent any mixing, the core would develop an inert helium ball in the middle (helium cannot be fused by a Sun-mass star; its core is too cold for that): a pp-chain core is entirely non-convective. However, there is another multistage reaction that fuses protons into helium nuclei, the CNO cycle. This cycle requires metals ($C$, $N$ and $O$, naturally) to be present in the core. They are not consumed, but participate in stages of the reaction and are ultimately recycled. The rate of this reaction depends on the temperature as $T^{20}$. It's a huge dependency! A CNO-dominant core has so much temperature gradient that it's fully convective, so it mixes the material very thoroughly.
It happens so that for a star with the mass $M=1.5 M_\odot$ the core is fully convective, but this is not an on/off phenomenon. Even in the Sun, the CNO cycle produces roughly 10% of core's output power, and is responsible for intermittent mixing of the core material. The dependency on temperature for this reaction is so large that the reaction is practically irrelevant at $M=0.9 M_\odot$ and $T=14.5\times 10^6 K$, and becomes dominant at $M=1.5 M_\odot$ and $T=17.5\times 10^6 K$. The Sun is at the very lower end of this range.
There is not a huge difference in the lifetime of the star even absent the CNO mechanism; it only changes the hydrodynamics of the core and its reactivity to temperature variation. But for the short-term stability it's very important; it amplifies the negative feedback loop that stabilizes the core reaction rate. It is probable (so models tell us) that the Sun's energy output would be much more variable on the scales of $\sim 10^3$ years. So we are lucky to have gotten enough "metal" in our home star, in the end--our ice ages have been bad enough already!
There are many images and cross-sectional schematics on Google but I can't find any actual numbers for the radii.
About $0.2\,R_\odot$.
Are the nuclear reactions occurring where the helium meets the hydrogen?
A Sun-mass star does not fuse helium. Helium fusion is a much more energetic process, and happens only in more massive and shorter-living stars. Helium is the embers of the combustion in the Sun, not its fuel.
As a side note, the Sun is a very calm reactor by Earthling's standards. The core's energy output is about $300\, W/m^3$, far too low for any practical fusion reactor on Earth. You need a chunk of the Sun's core $10^7\,m^3$ in size to match the power of a large coal-fueled electrical plant, and that's the volume of a ball about $300\,m$ in size. No way we could contain such a fireball at 15 million K; terrestrial fusion projects aim at much higher temperatures and thus reaction rates.
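The arithmetic behind that comparison, using the figures as quoted in the answer:

```python
import math

power_density = 300.0   # W/m^3, the quoted solar-core value
volume = 1e7            # m^3, the chunk of core in question

power = power_density * volume
assert power == 3e9     # 3 GW, the scale of a large power plant

# such a volume is a ball roughly 270 m in diameter ("about 300 m in size")
r = (3 * volume / (4 * math.pi)) ** (1 / 3)
assert 250 < 2 * r < 300
```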
What radius are the nuclear reaction occurring at?
In the Sun while on the main sequence, all throughout the core. The core is essentially isolated against hydrogen supply from outer layers by the radiative zone, where the high thermal gradient stabilizes the gas against convection. As the Sun exhausts its fuel in the core, it transitions into the red giant phase (and the Sun will do it twice!). This happens when the core mostly turns into unburnable helium and cools down. What was previously the hydrogen-rich material in the radiative zone collapses onto the surface of the helium core, and restarts the hydrogen reaction when its temperature reaches the ignition point. This reaction occurs only on the surface of the inner helium ball, in a spherical shell. There won't be any mixing mechanism this time that could disturb the inner inert ball.
All numbers come entirely off the top of my head, the best I could recall. Please double-check after me!
ISO 31-11
Its definitions include the following:
Mathematical logic
Sign · Example · Meaning and verbal equivalent · Remarks
∧ · p ∧ q · conjunction sign; p and q
∨ · p ∨ q · disjunction sign; p or q (or both)
¬ · ¬p · negation sign; negation of p; not p; non p
⇒ · p ⇒ q · implication sign; if p then q; p implies q · can also be written as q ⇐ p; sometimes → is used
∀ · ∀x∈A p(x), or (∀x∈A) p(x) · universal quantifier; for every x belonging to A, the proposition p(x) is true · the "∈A" can be dropped where A is clear from context
∃ · ∃x∈A p(x), or (∃x∈A) p(x) · existential quantifier; there exists an x belonging to A for which the proposition p(x) is true · the "∈A" can be dropped where A is clear from context; ∃! is used where exactly one x exists for which p(x) is true
Sets
Sign · Example · Meaning and verbal equivalent · Remarks
∈ · x ∈ A · x belongs to A; x is an element of the set A
∉ · x ∉ A · x does not belong to A; x is not an element of the set A · the negation stroke can also be vertical
∋ · A ∋ x · the set A contains x (as an element) · same meaning as x ∈ A
∌ · A ∌ x · the set A does not contain x (as an element) · same meaning as x ∉ A
{ } · {x_1, x_2, …, x_n} · set with elements x_1, x_2, …, x_n · also {x_i ∣ i ∈ I}, where I denotes a set of indices
{ ∣ } · {x ∈ A ∣ p(x)} · set of those elements of A for which the proposition p(x) is true · example: {x ∈ ℝ ∣ x > 5}; the "∈ A" can be dropped where this set is clear from the context
card · card(A) · number of elements in A; cardinal of A
∖ · A ∖ B · difference between A and B; A minus B · the set of elements which belong to A but not to B: A ∖ B = {x ∣ x ∈ A ∧ x ∉ B}; A − B should not be used
∅ · the empty set
ℕ · the set of natural numbers; the set of positive integers and zero · ℕ = {0, 1, 2, 3, …}; exclusion of zero is denoted by an asterisk: ℕ* = {1, 2, 3, …}; ℕ_k = {0, 1, 2, …, k − 1}
ℤ · the set of integers · ℤ = {…, −3, −2, −1, 0, 1, 2, 3, …}; ℤ* = ℤ ∖ {0}
ℚ · the set of rational numbers · ℚ* = ℚ ∖ {0}
ℝ · the set of real numbers · ℝ* = ℝ ∖ {0}
ℂ · the set of complex numbers · ℂ* = ℂ ∖ {0}
[,] · [a, b] · closed interval in ℝ from a (included) to b (included) · [a, b] = {x ∈ ℝ ∣ a ≤ x ≤ b}
],] or (,] · ]a, b] or (a, b] · left half-open interval in ℝ from a (excluded) to b (included) · ]a, b] = {x ∈ ℝ ∣ a < x ≤ b}
[,[ or [,) · [a, b[ or [a, b) · right half-open interval in ℝ from a (included) to b (excluded) · [a, b[ = {x ∈ ℝ ∣ a ≤ x < b}
],[ or (,) · ]a, b[ or (a, b) · open interval in ℝ from a (excluded) to b (excluded) · ]a, b[ = {x ∈ ℝ ∣ a < x < b}
⊆ · B ⊆ A · B is included in A; B is a subset of A · every element of B belongs to A; ⊂ is also used
⊂ · B ⊂ A · B is properly included in A; B is a proper subset of A · every element of B belongs to A, but B is not equal to A; if ⊂ is used for "included", then ⊊ should be used for "properly included"
⊈ · C ⊈ A · C is not included in A; C is not a subset of A · ⊄ is also used
⊇ · A ⊇ B · A includes B (as subset) · A contains every element of B; ⊃ is also used; B ⊆ A means the same as A ⊇ B
⊃ · A ⊃ B · A includes B properly · A contains every element of B, but A is not equal to B; if ⊃ is used for "includes", then ⊋ should be used for "includes properly"
⊉ · A ⊉ C · A does not include C (as subset) · ⊅ is also used; A ⊉ C means the same as C ⊈ A
∪ · A ∪ B · union of A and B · the set of elements which belong to A or to B or to both: A ∪ B = {x ∣ x ∈ A ∨ x ∈ B}
⋃ · ⋃_{i=1}^n A_i · union of a collection of sets · ⋃_{i=1}^n A_i = A_1 ∪ A_2 ∪ … ∪ A_n, the set of elements belonging to at least one of the sets A_1, …, A_n; ⋃_{i∈I} is also used, where I denotes a set of indices
∩ · A ∩ B · intersection of A and B · the set of elements which belong to both A and B: A ∩ B = {x ∣ x ∈ A ∧ x ∈ B}
⋂ · ⋂_{i=1}^n A_i · intersection of a collection of sets · ⋂_{i=1}^n A_i = A_1 ∩ A_2 ∩ … ∩ A_n, the set of elements belonging to all sets A_1, …, A_n; ⋂_{i∈I} is also used, where I denotes a set of indices
∁ · ∁_A B · complement of subset B of A · the set of those elements of A which do not belong to the subset B; the symbol A is often omitted if the set A is clear from context; also ∁_A B = A ∖ B
(,) · (a, b) · ordered pair a, b; couple a, b · (a, b) = (c, d) if and only if a = c and b = d; ⟨a, b⟩ is also used
(,…,) · (a_1, a_2, …, a_n) · ordered n-tuple · ⟨a_1, a_2, …, a_n⟩ is also used
× · A × B · cartesian product of A and B · the set of ordered pairs (a, b) such that a ∈ A and b ∈ B: A × B = {(a, b) ∣ a ∈ A ∧ b ∈ B}; A × A × ⋯ × A with n factors is denoted by A^n
Δ · Δ_A · set of pairs (a, a) ∈ A × A where a ∈ A; diagonal of the set A × A · Δ_A = {(a, a) ∣ a ∈ A}; id_A is also used
Miscellaneous signs and symbols
Sign · Example · Meaning and verbal equivalent · Remarks
≝ · a ≝ b · a is by definition equal to b · := is also used
= · a = b · a equals b · ≡ may be used to emphasize that a particular equality is an identity
≠ · a ≠ b · a is not equal to b · a ≢ b may be used to emphasize that a is not identically equal to b
≙ · a ≙ b · a corresponds to b · on a 1:10^6 map: 1 cm ≙ 10 km
≈ · a ≈ b · a is approximately equal to b · the symbol ≃ is reserved for "is asymptotically equal to"
∼, ∝ · a ∼ b, a ∝ b · a is proportional to b
< · a < b · a is less than b
> · a > b · a is greater than b
≤ · a ≤ b · a is less than or equal to b · the symbol ≦ is also used
≥ · a ≥ b · a is greater than or equal to b · the symbol ≧ is also used
≪ · a ≪ b · a is much less than b
≫ · a ≫ b · a is much greater than b
∞ · infinity
( ), [ ], { }, ⟨ ⟩ · (a+b)c, [a+b]c, {a+b}c, ⟨a+b⟩c, all equal to ac+bc · parentheses, square brackets, braces, angle brackets · in ordinary algebra, the sequence of ( ), [ ], { }, ⟨ ⟩ in order of nesting is not standardized; special uses are made of them in particular fields
∥ · AB ∥ CD · the line AB is parallel to the line CD
⊥ · AB ⊥ CD · the line AB is perpendicular to the line CD
Operations
Sign · Example · Meaning and verbal equivalent · Remarks
+ · a + b · a plus b
− · a − b · a minus b
± · a ± b · a plus or minus b
∓ · a ∓ b · a minus or plus b · −(a ± b) = −a ∓ b
⋮
Functions
Example · Meaning and verbal equivalent · Remarks
f: D → C · function f has domain D and codomain C · used to explicitly define the domain and codomain of a function
f(S) · {f(x) ∣ x ∈ S} · set of all possible outputs in the codomain when given inputs from S, a subset of the domain of f
⋮
Exponential and logarithmic functions
Example · Meaning and verbal equivalent · Remarks
e · base of natural logarithms · e = 2.718 28…
e^x · exponential function to the base e of x
log_a x · logarithm to the base a of x
lb x · binary logarithm (to the base 2) of x · lb x = log_2 x
ln x · natural logarithm (to the base e) of x · ln x = log_e x
lg x · common logarithm (to the base 10) of x · lg x = log_10 x
⋮
Circular and hyperbolic functions
Example · Meaning and verbal equivalent · Remarks
π · ratio of the circumference of a circle to its diameter · π = 3.141 59…
⋮
Complex numbers
Example · Meaning and verbal equivalent · Remarks
i, j · imaginary unit; i² = −1 · in electrotechnology, j is generally used
Re z · real part of z · z = x + iy, where x = Re z and y = Im z
Im z · imaginary part of z
∣z∣ · absolute value of z; modulus of z · mod z is also used
arg z · argument of z; phase of z · z = r e^{iφ}, where r = ∣z∣ and φ = arg z, i.e. Re z = r cos φ and Im z = r sin φ
z* · (complex) conjugate of z · sometimes a bar above z is used instead of z*
sgn z · signum z · sgn z = z/∣z∣ = exp(i arg z) for z ≠ 0, sgn 0 = 0
Matrices
Example · Meaning and verbal equivalent · Remarks
A · matrix A
⋮
Coordinate systems
Coordinates · Position vector and its differential · Name of coordinate system · Remarks
x, y, z · [x, y, z]; [dx, dy, dz] · cartesian · x_1, x_2, x_3 for the coordinates and e_1, e_2, e_3 for the base vectors are also used; this notation easily generalizes to n-dimensional space; e_x, e_y, e_z form an orthonormal right-handed system; for the base vectors, i, j, k are also used
ρ, φ, z · [x, y, z] = [ρ cos φ, ρ sin φ, z] · cylindrical · e_ρ(φ), e_φ(φ), e_z form an orthonormal right-handed system; if z = 0, then ρ and φ are the polar coordinates
r, θ, φ · [x, y, z] = r [sin θ cos φ, sin θ sin φ, cos θ] · spherical · e_r(θ, φ), e_θ(θ, φ), e_φ(φ) form an orthonormal right-handed system
Vectors and tensors
Example Meaning and verbal equivalent Remarks a
<math>\vec a</math>
vector a Instead of italic boldface, vectors can also be indicated by an arrow above the letter symbol. Any vector can be multiplied by a scalar a k, i.e. k. a ... ... ... ⋮ Special functions
Example Meaning and verbal equivalent Remarks J ( l x) cylindrical Bessel functions (of the first kind) ... ... ... ... ⋮ See also References and notes "ISO 80000-2:2009". International Organization for Standardization. Retrieved 1 July 2010. Thompson, Ambler; Taylor, Barry M (March 2008). Guide for the Use of the International System of Units (SI) — NIST Special Publication 811, 2008 Edition — Second Printing(PDF). Gaithersburg, MD, USA: NIST. These brace or fence characters are upper level unicode characters, fairly recently established and so may not display correctly in every browser. A close approximation of the appearance is found in the standard Latin characters: ( ), [ ], { }, < >. A more accurate glyph depiction of the mathematical angle bracket characters are found in the Chinese-Japanese-Korean (CJK) punctuation category: 〈h; 〉h;. If the perpendicular symbol, ⟂h;, does not display correctly, it is similar to ⊥h; (up tack: sometimes meaning orthogonal to) and it also appears similar to ⏊h; (the dentistry: symbol light up and horizontal) |
I see definitions (in Wikipedia, for example) about inner product spaces over arbitrary fields. But I don't understand how positivity makes any sense for fields which are not ordered? Am I missing something?
Clarification: Let me elaborate, to make clear what I already know. An inner product $g$ is an $\mathbb F$-sesquilinear map from $V\times V\to \mathbb F$ that is conjugate symmetric, non-degenerate and positive. My qualms are regarding positivity: that $g(v,v)\geq0$ for all $v\in V$. That inequality means nothing if $\mathbb F$ has no order structure on it.
A suitable solution is to actually restrict our definition to ordered fields. But, this clearly is a strong condition- even the complex numbers are not ordered.
In the real case, the above definition corresponds to the familiar symmetric, non-degenerate, positive bilinear forms; as $\mathbb R$ has a trivial $*$-operation.
Let us look at the complex case, where the first such non-trivial definition arises. By conjugate symmetry, $g_{\mathbb C}(v,v)=g_{\mathbb C}(v,v)^*$, so $g_{\mathbb C}(v,v)\in \mathbb R$, and since $\mathbb R$ has a canonical order, one can talk about positivity. So the definition seems to be consistent here. A similar argument seems to work for the quaternions as well. This suggests that if your field is a normed division algebra over $\mathbb R$ (by Hurwitz's theorem, there are only four such examples), the above construction may be extendable. Does it work for $*$-fields in general? While conjugate symmetry and sesquilinearity generalise well to $*$-fields, positivity remains an issue. For the above argument to work in the general case, one would require the conjugate-symmetric elements of a $*$-field to have an order structure. Any concrete counter-example to that here would be helpful. This is still far away from general fields.
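To see concretely why conjugation is needed over $\mathbb C$: a plain symmetric bilinear form admits nonzero isotropic vectors, so "$g(v,v)\geq 0$" cannot even get off the ground. A minimal numeric illustration (the vector is my own example):

```python
import numpy as np

v = np.array([1.0, 1j])  # a nonzero vector in C^2

# Plain symmetric bilinear "dot product" (no conjugation):
bilinear = np.sum(v * v)            # 1 + i^2 = 0, so v is isotropic

# Sesquilinear (Hermitian) inner product: conjugate the first slot
hermitian = np.sum(np.conj(v) * v)  # |1|^2 + |i|^2 = 2, real and positive

print(bilinear)   # positivity is meaningless for the bilinear form
print(hermitian)  # conjugate symmetry forces g(v,v) into the reals
```

The conjugation is exactly what pushes $g(v,v)$ into the ordered subfield $\mathbb R$, as described above.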
Suppose I begin with the time-independent Schrödinger equation $$ \left(-\frac{1}{2m}\partial_x^2 + V(x)\right)\psi_n(x) = E_n\psi_n(x), $$ ordinarily we specify the function $V$ and then solve for a set of eigenfunctions and eigenvalues. And just to be slightly more general, we do the same thing with Sturm-Liouville equations, which I'll write in terms of the momentum operator and an extra function $U$, $$ \left(\hat{p} U(\hat{x}) \hat{p} + V(\hat{x})\right)\psi_n = E_n\psi_n.$$
Now nothing is stopping us from defining a new Hamiltonian operator with the same eigenvectors but different arbitrary eigenvalues $\lambda_n$,
$$\hat{H}\psi_n = \lambda_n \psi_n$$ Under what conditions can this eigenvalue equation for the new Hamiltonian be represented as a (not-necessarily second order) differential equation in $x$ with the same eigenfunctions? In other words when does $\hat{H}$ belong to the operator algebra generated by $\hat{x}$ and $\hat{p}$?
I see if I define the new eigenvalues by some $n$-independent function $f$ of the original eigenvalues $\lambda_n = f(E_n)$, I can come up with a new differential equation, but does this exhaust the possibilities? |
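One way to probe the $\lambda_n = f(E_n)$ case numerically: discretize a Hamiltonian, then build the new operator from the spectral decomposition $\hat H_{\rm new} = U f(E) U^T$ and check that it lies in the algebra generated by $\hat x$ and $\hat p$ (for polynomial $f$ it is literally a polynomial in the original $\hat H$). A rough finite-difference sketch; the grid, the harmonic potential, and the choice $f(E)=E^2+3$ are all my illustrative assumptions, not from the question:

```python
import numpy as np

# Finite-difference Hamiltonian H = -d^2/dx^2 + V(x) on a grid (units 2m = hbar = 1)
N, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2  # harmonic potential (illustrative choice)
lap = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -lap + np.diag(V)

E, U = np.linalg.eigh(H)  # eigenvalues E_n, eigenvectors as columns of U

# New operator with the same eigenvectors but eigenvalues f(E_n):
f = lambda e: e**2 + 3.0  # an n-independent function of the energy
H_new = U @ np.diag(f(E)) @ U.T

# Check: H_new has eigenvalues f(E_n) -- and equals H^2 + 3, i.e. it stays
# inside the operator algebra generated by x and p.
E2, _ = np.linalg.eigh(H_new)
assert np.allclose(E2, f(E))
```

If instead the $\lambda_n$ are chosen with no functional relation to the $E_n$, the construction above still produces a matrix, but there is no reason for it to be expressible in $\hat x$ and $\hat p$ alone; that is exactly the question being asked.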
I have found many sources (c.f. Schwartz's QFT book section 10.4) that try to obtain the non-relativistic limit of the Dirac equation by first "squaring it" so that it looks somewhat like the Klein-Gordon equation.
Kleiner eqn 6.113, for example, shows that in the chiral basis this "squared" Dirac equation becomes
$\left[-(\hbar \partial_\mu + i\frac{e}{c}A_\mu)^2+\frac{e\hbar}{c}\vec{\sigma}\cdot(\vec B \pm i \vec E)-M^2c^2\right]\left\{{\xi(x)\atop \eta(x)}\right\} = 0$,
where $\xi(x)$ and $\eta(x)$ are the two-component chiral spinors. Kleiner then claims that we remove the fast oscillations from those spinors by defining $\xi(x)\equiv e^{-iMc^2t/\hbar }\psi(\vec{x},t)$ to obtain the non-relativistic Pauli equation
$\left(i\partial_t + \hbar^2/2M\left(\vec \nabla-i \frac{e}{c \hbar}\vec A\right)^2+\frac{e}{2Mc}\vec\sigma \cdot \vec B-e \phi\right)\psi(\vec x,t)=0$.
Same thing for $\eta$ presumably.
My question is this: How was the $\vec \sigma\cdot \vec E$ term removed when taking the non-relativistic limit in this way? As far as I can tell, the removal of the rapid oscillations is only going to affect the part of the equation involving covariant derivatives.
I know there are other ways to get the non-relativistic limit, but I would like to understand this one, since it comes up a lot. |
Demonstrate this.
But it is given in the OP as "p". The expectation takes the largest value for p = 1/2, because as post #6 shows the expected number of games played...
Let $n = \frac{N-1}{2} \geq 0$. Then our expectation value goes like this: $E = \sum_{i=0}^{2i-1 < N} \sum_{j=0}^{2j-1 < N} \binom{i+j}{i} p^i...$
Thus to decide a best-of-1 series, exactly one game is played, taking us from state (0,0) to (1,0) with probability p or to state (0,1) with...
Expansion of $(p+(1-p))^n$, arranged as a triangle: $$\begin{array}{ccccc} & & 1 & & \\ & p & & (1-p) & \\ p^2 & & 2p(1-p) & & \ldots \end{array}$$...
Let N be an odd, positive number. A best-of-N series of games is played until one team has won a majority of the N maximum possible games in the...
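The expected series length discussed in the posts above can be computed directly by dynamic programming over the win counts; a minimal sketch (the function names are mine):

```python
from functools import lru_cache

def expected_games(N, p):
    """Expected number of games in a best-of-N series (N odd), where one
    team wins each game independently with probability p."""
    need = (N + 1) // 2  # wins required to take the series

    @lru_cache(maxsize=None)
    def e(a, b):
        # a, b = games won so far by the two teams
        if a == need or b == need:
            return 0.0
        return 1.0 + p * e(a + 1, b) + (1 - p) * e(a, b + 1)

    return e(0, 0)

print(expected_games(1, 0.5))  # 1.0
print(expected_games(3, 0.5))  # 2.5
```

Consistent with the claim quoted above, `expected_games(N, p)` is maximized at p = 1/2 for fixed N.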
This shows evidence of originating from a quote mine. First Edition was published circa 1998...
My flourless chocolate torte says you are wrong.
That "mechanism" question is literally not required to predict the behavior of gravitational phenomena, just as the color of your mother's KitchenAid...
Not quite. The essential thing different with n>2 is that it is possible that the lead color changes. So as the number of colors goes down, so does...
Wrong. It's not a problem about how to paint the balls the same color. It's a question about how long a given description of the process is expected...
That's not actually the same problem. Here the balls are actually changing colors and nothing prevents any of the remaining colors from overtaking...
Excellent blog post by Laurent Lessard on this: http://www.laurentlessard.com/bookproofs/colorful-balls-puzzle/A PDF by Tim Black:...
A Simpler Model

nColors=15; transitions = DiagonalMatrix[1 - Table[Binomial[nColors + 1 - k,2]/Binomial[nColors,2],{k,1,nColors}]] +...
Automation

All of the above can be coded and run for different numbers of balls from 1..15. Maybe higher if you don't use a free-to-use cloud...
Uh oh: whereas in simulation each run individually came to an end, the state vector always has a non-zero component not yet in the final state....
Tricks with Matrices

In addition to the five-state transition matrix, let's have it automatically add 1 to a counter for the portion of our state...
Stochastic Matrices and Simulation

Instead of having sampling error, we could compute the probability of outcomes exactly. First we need to enumerate all...
Underthinking

I predict a number of people will attempt simulation to attempt to model this. Simulation is reasonably fast to code, fast to run, and...
On April 28, FiveThirtyEight.com posted the weekend puzzle contest (which closed Sunday night) with their solutions coming this Friday, May the...
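For reference, a Monte Carlo sketch of the repainting process discussed in this thread, assuming the standard formulation of the puzzle (each step picks an ordered pair of distinct balls uniformly at random and repaints the second ball with the first ball's color); the ball count and trial count are my own choices:

```python
import random

def steps_until_monochrome(n_balls, rng):
    """Simulate repainting until all balls share one color; return step count."""
    colors = list(range(n_balls))  # start with all-distinct colors
    steps = 0
    while len(set(colors)) > 1:
        i, j = rng.sample(range(n_balls), 2)  # ordered pair of distinct balls
        colors[j] = colors[i]                 # repaint ball j with ball i's color
        steps += 1
    return steps

rng = random.Random(0)
trials = [steps_until_monochrome(6, rng) for _ in range(2000)]
print(sum(trials) / len(trials))  # sample mean of the absorption time
```

With two balls every run takes exactly one step; for larger n the sample mean can be compared against the exact transition-matrix computation sketched in the posts above.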
The discovery of TON 618 has created a new black hole species (already hinted at by the cores of M87 or even IC 1101): the ultramassive black holes, with masses greater than $10^{10}M_\odot$. As said in the previous answer, in classical settings there is no upper limit on the mass of a black hole (and I am not sure you get a theory beyond General Relativity even in classical settings).
Maybe one day we will learn that quantum gravity says something about that. Interestingly, every stellar, intermediate, supermassive or ultramassive black hole has a mass much, much greater than the Planck mass, which is about a microgram. The issue is that we think quantum gravity applies only to VERY MASSIVE TINY (very dense) objects, not merely to very massive ones. Indeed, any person has a mass much greater than the Planck mass, but it is not "concentrated". When you have mass concentrated in very tiny regions, we have no idea how to handle quantum fluctuations and amplitudes except with superstring theory. Another related question is whether you can have black holes of any DENSITY. Again, as said, you need to consider quantum processes like Hawking radiation. However, there is a subtle point, called the trans-Planckian problem. In principle, as a black hole evaporates it gets smaller and smaller, until at a certain size the emitted wavelength would be less than the Planck length. We will have to wait for a definitive theory of quantum gravity before answering the ultimate fate of black holes and thus the destiny of both black holes and the whole universe (even spacetime could be a metastable, provisional/transitional state).
How large can a black hole formed from the collapse of a massive star grow in 1 Gyr? Suppose the black hole grows as fast as it can, and suppose, for the moment, that it satisfies the Eddington limit. Then an exponential law follows: $$\dot{M}=kM=M/\tau$$ where $k=4\cdot 10^{-16}\,\mathrm{s}^{-1}$ for a ten-solar-mass initial mass, according to the Eddington limit. Then, as
$$M=M_0\exp(kt)$$
Plugging $M_0=10M_\odot$ and the value of $k$ into this formula, you get that the maximum mass reached is in the range of ultramassive BHs, i.e., $M_f\sim 10^{10}M_\odot$, on a timescale of about 1 Gyr (be aware, the numbers are tricky). Of course, the trans-Eddington regime is tricky, but there are some reasons to believe black holes bigger than $10^{10}M_\odot$ are unstable and eject material. In the absence of any other argument, the above argument does NOT provide an upper limit in principle; only other considerations, relating to quasars and jets, seem to apply. But the issue is a hot topic of debate in astrophysics. On the other hand, the minimal (or tiniest) black hole mass is also a mystery. At the macroscale, we have NOT found black holes smaller than 3-5 solar masses (stellar black holes). However, primordial black holes or micro black holes could make up some of the dark matter hidden in clusters and other parts of the galaxies. Again, the only hints are inflationary ideas, astronomical measurements and experimental bounds (recently, the possibility of dark matter being entirely black holes has been analyzed, but some evidence seems to say that is not the case: black holes cannot be all of the dark matter).
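A quick numeric check of the exponential growth law above (a sketch only; note that growing from $10\,M_\odot$ to $10^{10}M_\odot$ requires a factor $e^{kt}=10^9$, i.e. $kt=\ln 10^9\approx 20.7$, so the conclusion is sensitive to the exact value of $k$, as the "the numbers are tricky" caveat warns):

```python
import math

GYR_S = 3.156e16  # one gigayear in seconds

def final_mass(m0, k, t):
    """M(t) = M0 * exp(k t) for Eddington-limited exponential growth."""
    return m0 * math.exp(k * t)

# Growth factor needed to go from 10 to 1e10 solar masses:
kt_needed = math.log(1e10 / 10.0)
print(kt_needed)           # about 20.7 e-foldings
print(kt_needed / GYR_S)   # the k (in s^-1) that achieves this in 1 Gyr
```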
It seems that you are having difficulty somewhere with converting the informal short sketch into a fully detailed sketch.
So let's try to see in detail how a (non-empty) set being r.e. (recursively enumerable) implies that it is the range of some total recursive function.
Let $A\subseteq \mathbb{N}$ be a non-empty r.e. set. Then, by definition, there exists a partial recursive function $f_A:\mathbb{N} \rightarrow \mathbb{N}$ such that:$$f_A(x)=\begin{cases}1 & \text{if } x\in A\\ \uparrow & \text{if } x\notin A\end{cases}$$
We want to construct a (total) recursive function $g:\mathbb{N} \rightarrow \mathbb{N}$ such that:$$range(g)=A$$Let's divide into two cases:
(1) A has infinitely many elements
In that case define a function $step:\mathbb{N}^3\rightarrow\mathbb{N}$ such that $step(i,t,x)$ returns $1$ if the program with index $i$, when given the input $x$, halts exactly at step $t$, and $0$ otherwise.
Note that since the function $f_A$ described above is partial recursive, there exists some program that computes it. The variable $x$ below is supposed to be given as input to the function $g$. Here is how we proceed with constructing the function $g(x)$:
$i:=index\;of\;some\;program\;that\;computes\;f_A \\ n:=0 \\ m:=0 \\ while(n\neq x+1) \,\,\{ \\ a:=first(m) \\ b:=second(m) \\ \qquad if(step(i,a,b)=1) \,\,\{ \\ \qquad \,\, n:=n+1\\ \qquad \,\,y:=b \\ \qquad \} \\ m:=m+1 \\ \} \\ return \;\; y
$
For the functions $first:\mathbb{N} \rightarrow \mathbb{N}$ and $second:\mathbb{N} \rightarrow \mathbb{N}$, see for example https://en.wikipedia.org/wiki/Pairing_function. To define them rigorously, consider any computable bijective function $pair:\mathbb{N}^2 \rightarrow \mathbb{N}$, and let $first(x)$ and $second(x)$ be the unique $a$ and $b$, respectively, such that $pair(a,b)=x$ (uniqueness follows from bijectivity). The main properties of these functions are that for all $a,b \in \mathbb{N}$ we have:$$first(pair(a,b))=a$$$$second(pair(a,b))=b$$
(2) A has finitely many elements
Suppose the number of elements of A is $N$. Then we can write A (without repetition) as:$$A=\{a_0,\,a_1,....,a_{N-1}\}$$
Now define:
$$g(x)=\begin{cases}a_{x} & \text{for } 0\leq x \leq N-1\\ a_{N-1} & \text{for } x \geq N\end{cases}$$
P.S.
The above method in case (1) doesn't repeat the elements of A (in the range of g) when A is infinite.
The method that is described in comments below the question is a little different and removes the need of having a separate case for finite number of elements. Suppose $e\in A$. Here is a sketch for calculating $g(x)$ using that method:
$i:=index\;of\;some\;program\;that\;computes\;f_A \\ e:=some\;element\;that\;belongs\;to\;A \\ a:=first(x) \\ b:=second(x) \\ if(step(i,a,b)=1) \,\,\{ \\ y:=b \\ \} \\ if(step(i,a,b)=0) \,\,\{ \\ \\ y:=e \\ \} \\ return \;\; y
$
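A runnable illustration of the case-(1) dovetailing construction, with a toy stand-in for $step$ (here the single "program" semi-decides the even numbers, and I pretend it halts on input $b$ at exactly step $b+1$; the pairing function is Cantor's — all of this is my illustrative model, not part of the proof):

```python
import math

def pair(a, b):
    # Cantor pairing: a computable bijection N^2 -> N
    return (a + b) * (a + b + 1) // 2 + b

def first(m):
    w = (int(math.isqrt(8 * m + 1)) - 1) // 2
    b = m - w * (w + 1) // 2
    return w - b

def second(m):
    w = (int(math.isqrt(8 * m + 1)) - 1) // 2
    return m - w * (w + 1) // 2

def step(i, t, x):
    # Toy model of "program i halts on input x at exactly step t":
    # the program semi-decides the even numbers, halting at step x + 1.
    return 1 if (x % 2 == 0 and t == x + 1) else 0

def g(x):
    # Dovetail: scan all (step-count, input) pairs in order; return the
    # (x+1)-th input on which the program is seen to halt.
    n, m, y = 0, 0, None
    while n != x + 1:
        a, b = first(m), second(m)
        if step(0, a, b) == 1:
            n += 1
            y = b
        m += 1
    return y

print([g(x) for x in range(5)])  # → [0, 2, 4, 6, 8]
```

The range of `g` is exactly the even numbers, i.e. the toy r.e. set, matching the claim $range(g)=A$.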
I was reading about fibre optic cables, and it was mentioned that the individual "light pipes" are coated with a material whose refractive index is less than that of the glass. My question is: why a material with smaller $\mu$? According to $\sin\theta=1/\mu$, if I decrease $\mu$ then $\theta$, which is the critical angle, will increase. Hence data loss will increase! (since these cables are used in communication) Then why are we doing so? Or is my logic incorrect?
According to $\sin\theta=1/\mu$, if I decrease $\mu$ then $\theta$, which is the critical angle, will increase.
Your formula is for propagation from an optically dense medium into air ($n=1$). For optical fiber you should consider the case where both media have non-unity refractive index.
The simplest case is step index fiber with a core large enough to allow solution by ray optics. This kind of fiber works on the principle of total internal reflection. Total internal reflection occurs when the incident angle at the interface is greater than the critical angle. The critical angle occurs when
$$\sin\theta = \frac{n_2}{n_1}$$
If $n_1 < n_2$, you'd need $\sin\theta > 1$ to solve this equation. Since the sine function is always in the range [-1, 1], there is no such solution. And indeed, total internal reflection doesn't occur when $n_1 < n_2$, and optical fibers are designed with the core index ($n_1$) slightly higher than the cladding index ($n_2$).
Even if you talk about fiber small enough that you must consider the waveguide properties of the structure, rather than simply ray optics, you'll find that there is no guided wave solution unless $n_2<n_1$.
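As a quick numeric illustration of the condition above (the index values are typical-looking placeholders of my own, not from the answer):

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection going from the core (n1)
    into the cladding (n2); only defined when n1 > n2."""
    if n1 <= n2:
        raise ValueError("no total internal reflection when n1 <= n2")
    return math.degrees(math.asin(n2 / n1))

# Silica core with a slightly lower-index cladding (illustrative values):
print(critical_angle_deg(1.48, 1.46))  # a bit above 80 degrees
```

The small index contrast puts the critical angle close to 90°, so rays travelling nearly along the fiber axis are totally internally reflected.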
It sounds like your $\mu$ is $n_1 / n_2$ where light is being transmitted from a medium with index of refraction $n_1$ to an index of refraction $n_2$.
Decreasing $n_2$ therefore increases $\mu$, which appears to be your fundamental misunderstanding. Yes, you want $\mu$ as big as possible, and the way you do that (if you cannot increase $n_1$) is by decreasing $n_2$.
so here's something that might be a neat topic for discussion, if anyone ever comes around and wants to talk: recall the spectra $X(k)$, which are the Thom spectra of the maps $\Omega SU(k)\to \Omega SU\simeq BU$. This is pretty much all I talk about these days... sorry.
Well, here's a neat fact, if $E$ is complex oriented then $E^\ast(\mathbb{C}P^k)\cong E_\ast[x]/(x^{k+1})$ and $E_\ast[b_1,\ldots,b_{k-1}]$, sooooo part of me is wondering whether or not one might compare $X(k)$ to $\mathbb{A}^k$, and the notion of an $n$-orientation is sort of similar to the cogroup structures I described above.
whoops, that should say $E_\ast(X(k))\cong E_\ast[b_1,\ldots,b_{k-1}]$
in other words, one notes that $\pi_\ast(HR\wedge X(k))$ is the affine $R$-line, so one might hope that $\pi_\ast(X(k))$ is something like.... the affine $\mathbb{S}$-line. LOL
If you want a rigorous proof, the following lemma is often more useful, and handier, than the definitions.
If $c = \lim_{n\to\infty} \frac{f(n)}{g(n)}$ exists, then
$c=0 \qquad \ \,\iff f \in o(g)$,
$c \in (0,\infty) \iff f \in \Theta(g)$ and
$c=\infty \quad \ \ \ \iff f \in \omega(g)$.
With this, you should be able to order most of the functions coming up in algorithm analysis¹. As an exercise, prove it!
Of course you have to be able to calculate the limits accordingly. Some useful tricks to break complicated functions down to basic ones are:
Express both functions as $e^{\dots}$ and compare the exponents; if their ratio tends to $0$ or $\infty$, so does the original quotient.
More generally: if you have a convex, continuously differentiable and strictly increasing function $h$ so that you can re-write your quotient as
$\qquad \displaystyle \frac{f(n)}{g(n)} = \frac{h(f^*(n))}{h(g^*(n))}$,
with $g^* \in \Omega(1)$ and
$\qquad \displaystyle \lim_{n \to \infty} \frac{f^*(n)}{g^*(n)} = \infty$,
then
$\qquad \displaystyle \lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$.
See here for a rigorous proof of this rule (in German).
Consider continuations of your functions over the reals. You can now use L'Hôpital's rule; be mindful of its conditions²!
Have a look at the discrete equivalent, Stolz–Cesàro.
When factorials pop up, use Stirling's formula:
$\qquad \displaystyle n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n$
It is also useful to keep a pool of basic relations you prove once and use often, such as:
logarithms grow slower than polynomials, i.e.
$\qquad\displaystyle (\log n)^\alpha \in o(n^\beta)$ for all $\alpha, \beta > 0$.
order of polynomials:
$\qquad\displaystyle n^\alpha \in o(n^\beta)$ for all $\alpha < \beta$.
polynomials grow slower than exponentials:
$\qquad\displaystyle n^\alpha \in o(c^n)$ for all $\alpha$ and $c > 1$.
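These limit rules lend themselves to mechanical checking; for instance, a short sympy sketch (the particular function pairs are my own illustrations of the basic relations above):

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# (log n)^a in o(n^b): the ratio tends to 0
assert sp.limit(sp.log(n)**3 / sp.sqrt(n), n, sp.oo) == 0

# n^a in o(c^n) for c > 1: polynomials lose to exponentials
assert sp.limit(n**10 / 2**n, n, sp.oo) == 0

# Theta via a finite, nonzero limit
assert sp.limit((3 * n**2 + n) / n**2, n, sp.oo) == 3

print("all growth-order checks passed")
```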
It can happen that the above lemma is not applicable because the limit does not exist (e.g. when functions oscillate). In this case, consider the following characterisation of Landau classes using limes superior/inferior:
With $c_s := \limsup_{n \to \infty} \frac{f(n)}{g(n)}$ we have
$0 \leq c_s < \infty \iff f \in O(g)$ and
$c_s = 0 \iff f \in o(g)$.
With $c_i := \liminf_{n \to \infty} \frac{f(n)}{g(n)}$ we have
$0 < c_i \leq \infty \iff f \in \Omega(g)$ and
$c_i = \infty \iff f \in \omega(g)$.
Furthermore,
$0 < c_i,c_s < \infty \iff f \in \Theta(g) \iff g \in \Theta(f)$ and
$ c_i = c_s = 1 \iff f \sim g$.
Check here and here if you are confused by my notation.
¹ Nota bene: My colleague wrote a Mathematica function that does this successfully for many functions, so the lemma really reduces the task to mechanical computation.
² See also here.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
If a function is a combination of other functions whose derivatives are known via composition, addition, etc., the derivative can be calculated using the chain rule and the like. But even the product of integrals can't be expressed in general in terms of the integral of the products, and forget about composition! Why is this?
Here is an extremely generic answer. Differentiation is a "local" operation: to compute the derivative of a function at a point you only have to know how it behaves in a neighborhood of that point. But integration is a "global" operation: to compute the definite integral of a function on an interval you have to know how it behaves on the entire interval (and to compute the indefinite integral you have to know how it behaves on all intervals). That is a lot of information to summarize. Generally, local things are much easier than global things.
On the other hand, if you can do the global things, they tend to be useful because of how much information goes into them. That's why theorems like the fundamental theorem of calculus, the full form of Stokes' theorem, and the main theorems of complex analysis are so powerful: they let us calculate global things in terms of slightly less global things.
The family of functions you generally consider (e.g., elementary functions) is closed under differentiation, that is, the derivative of such a function is still in the family. However, the family is not in general closed under integration. For instance, even the family of rational functions is not closed under integration, because $\int \frac{1}{x}\,dx = \log x$.
Answering an old question just because I saw it on the main page. From Roger Penrose (Road To Reality):
... there is a striking contrast between the operations of differentiation and integration, in this calculus, with regard to which is the 'easy' one and which is the 'difficult' one. When it is a matter of applying the operations to explicit formulae involving known functions, it is differentiation which is 'easy' and integration 'difficult', and in many cases the latter may not be possible to carry out at all in an explicit way. On the other hand, when functions are not given in terms of formulae, but are provided in the form of tabulated lists of numerical data, then it is integration which is 'easy' and differentiation 'difficult', and the latter may not, strictly speaking, be possible at all in the ordinary way. Numerical techniques are generally concerned with approximations, but there is also a close analogue of this aspect of things in the exact theory, and again it is integration which can be performed in circumstances where differentiation cannot.
I guess the OP asks about the symbolic integration. Other answers already dealt with the numeric case where integration is easy and differentiation is hard.
If you recall the definition of the derivative, you can see it's just a subtraction and a division by a small increment. Even if you can't do any algebraic simplification, it won't get any more complex than that. But usually you can simplify a lot, because in the zero limit many terms fall out as being too small. From this definition it can be shown that if you know the derivatives of $f(x)$ and $g(x)$, then you can use them to express the derivatives of $f(x) \pm g(x)$, $f(x)g(x)$ and $f(g(x))$. This makes symbolic differentiation easy, as you just need to apply the rules recursively.
Now about integration. Integration is basically an infinite sum of small quantities. So if you see $\int f(x)\,dx$, you can imagine it as an infinite sum $(f_1 + f_2 + \dots)\,dx$ where the $f_i$ are consecutive values of the function.
This means if you need to calculate $\int (a f(x) + b g(x))\,dx$, then you can imagine the sum $((af_1 + bg_1) + (af_2 + bg_2) + \dots)\,dx$. Using associativity and distributivity, you can transform this into $a(f_1 + f_2 +\dots)\,dx + b(g_1 + g_2 + \dots)\,dx$. So this means $\int (a f(x) + b g(x))\,dx = a \int f(x)\,dx + b \int g(x)\,dx$.
But if you have $\int f(x) g(x)\,dx$, you have the sum $(f_1 g_1 + f_2 g_2 + \dots)\,dx$, from which you cannot factor out the sum of the $f$s and $g$s. This means there is no recursive rule for multiplication.
Same goes for $\int f(g(x))\,dx$. You cannot extract anything from the sum $(f(g_1) + f(g_2) + \dots)\,dx$ in general.
So far, only linearity is the useful property. What about the analogues of the Differentiation rules? We have the product rule: $$\frac{d f(x)g(x) }{\, d x} = f(x) \frac{d g(x)}{\, d x} + g(x) \frac{d f(x)}{\, d x}.$$ Integrating both sides and rearranging the terms, we get the well-known integral by parts formula:
$$\int f(x) \frac{d g(x)}{\, d x} \, d x = f(x)g(x) - \int g(x) \frac{d f(x)}{\, d x} \, d x.$$
But this formula is only useful if $\frac{d f(x)}{dx} \int g(x) \, d x$ or $\frac{d g(x)}{dx} \int f(x) \, d x$ is easier to integrate than $f(x)g(x)$.
And it's often hard to see when this rule is useful. For example, when you try to integrate $\ln(x)$, it's not obvious to see it as $1 \cdot \ln(x)$. The integral of $1$ is $x$ and the derivative of $\ln(x)$ is $\frac{1}{x}$, which leads to a very simple integrand of $x\cdot\frac{1}{x} = 1$, whose integral is again $x$.
Another well-known differential rule is the chain rule $$\frac{d f(g(x))}{\, d x} = \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{\, d x}.$$
Integrating both sides, you get the reverse chain rule:
$$f(g(x)) = \int \frac{d f(g(x))}{d g(x)} \frac{d g(x)}{\, d x} \, d x.$$
But again it's hard to see when it is useful. For example what about the integration of $\frac{x}{\sqrt{x^2 + c}}$? Is it obvious to you that $\frac{x}{\sqrt{x^2 + c}} = 2x \frac{1}{2\sqrt{x^2 + c}}$ and this is the derivative of $\sqrt{x^2 + c}$? I guess not, unless someone showed you the trick.
For differentiation, you can mechanically apply the rules. For integration, you need to recognize patterns and even need to introduce cancellations to bring the expression into the desired form, and this requires a lot of practice and intuition.
For example how would you integrate $\sqrt{x^2 + 1}$?
First you turn it into a fraction:
$$\frac{x^2 + 1}{\sqrt{x^2+1}}$$
Then multiply and divide by 2:
$$\frac{2x^2 + 2}{2\sqrt{x^2+1}}$$
Separate the terms like this:
$$\frac{1}{2}\left(\frac{1}{\sqrt{x^2+1}}+\frac{x^2+1}{\sqrt{x^2+1}}+\frac{x^2}{\sqrt{x^2+1}} \right)$$
Play with 2nd and 3rd term:
$$\frac{1}{2} \left( \frac{1}{\sqrt{x^2+1}}+ 1\cdot\sqrt{x^2+1}+ x\cdot 2x\cdot\frac{1}{2\sqrt{x^2+1}} \right)$$
Now you can see the first bracketed term is the derivative of $\operatorname{arsinh}(x)$. The second and third terms together are the derivative of $x\sqrt{x^2+1}$. Thus the integral will be:
$$\frac{\mathrm{arsinh}(x)}{2} + \frac{x\sqrt{x^2+1}}{2} + C$$
Were these transformations obvious to you? Probably not. That's why differentiation is mere mechanics, while integration is an art.
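For what it's worth, a CAS reproduces the worked example above; a quick sympy check (sympy expresses the result with $\operatorname{asinh}$, the same function as $\operatorname{arsinh}$ up to notation):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.sqrt(x**2 + 1), x)
print(F)  # an antiderivative of sqrt(x^2 + 1)

# Differentiating back recovers the integrand, confirming the manual result:
assert sp.simplify(sp.diff(F, x) - sp.sqrt(x**2 + 1)) == 0
```

Of course, the CAS succeeds here precisely because someone taught it the pattern; as the transcript below notes, integration is a search through such patterns rather than a mechanical recursion.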
In the MIT lecture 6.001 "Structure and Interpretation of Computer Programs" by Sussman and Abelson, this contrast is briefly discussed in terms of pattern matching. See the lecture video (at 3:56) or alternatively the transcript (p. 2, or see the quote below). The book used in the lecture does not provide further details. Edit: Apparently, they discuss the Risch algorithm. It might be worthwhile to have a look at the same question on mathoverflow.SE: Why is differentiating mechanics and integration art?
And you know from calculus that it's easy to produce derivatives of arbitrary expressions. You also know from your elementary calculus that it's hard to produce integrals. Yet integrals and derivatives are opposites of each other. They're inverse operations. And they have the same rules. What is special about these rules that makes it possible for one to produce derivatives easily and integrals why it's so hard? Let's think about that very simply.
Look at these rules. Every one of these rules, when used in the direction for taking derivatives, which is in the direction of this arrow, the left side is matched against your expression, and the right side is the thing which is the derivative of that expression. The arrow is going that way. In each of these rules, the expressions on the right-hand side of the rule that are contained within derivatives are subexpressions, are proper subexpressions, of the expression on the left-hand side.
So here we see the derivative of the sum, which is the expression on the left-hand side, is the sum of the derivatives of the pieces. So the rules, moving to the right, are reduction rules. The problem becomes easier. I turn a big complicated problem into lots of smaller problems and then combine the results — a perfect place for recursion to work.
If I'm going in the other direction like this, if I'm trying to produce integrals, well there are several problems you see here.
First of all, if I try to integrate an expression like a sum, more than one rule matches. Here's one that matches. Here's one that matches. I don't know which one to take. And they may be different. I may get to explore different things. Also, the expressions become larger in that direction. And when the expressions become larger, then there's no guarantee that any particular path I choose will terminate, because we will only terminate by accidental cancellation. So that's why integrals are complicated searches and hard to do.
I will try to bring this to you in another way. Let us start by thinking in terms of something as simple as a straight line. If I give you the equation of a line y = mx + c, its slope can be easily determined, which in this case is nothing but m. Now let me make the question a bit trickier. Let me say that the line given above intersects the x and y axes at some points. I ask you to give me the area between the line, the abscissa, and the ordinate.

This is obviously not as easy as finding the slope. You shall have to find the intersections of the line with the axes, get two points of intersection, and then, taking the origin as a third point, find the area. This is not the only method of finding the area, as we know there are loads of formulas for the area of a triangle. Let us now view this in terms of curves. If the simple process of finding the slope in the case of a line is translated to curves, we get differential calculus, which is a bit more complicated than the method of finding slopes of straight lines.

Add finding the area under the curve to that and you get integral calculus, which by our experience from straight lines we know should be much harder than finding the slope, i.e., differentiation. Also, there is no one fixed method for finding the area of a figure; hence the many methods of integration. |
Here is a somewhat different way from Johan's of looking at this problem. At each stage of the walk, choose a number $x$ uniformly from $[0,1]$ and then walk either a distance $x$ to the right or $1-x$ to the left. This does not affect the probability of becoming negative since there is still a uniform probability of taking a step whose length belongs to the interval $[-1,1]$. However, it does have the property that after taking $n$ steps and choosing $0\leq x\leq 1$, the two possible locations following the next step are the same modulo 1. Hence the walk can be described as follows. Choose $n$ numbers $0\lt x_1\lt \cdots\lt x_n\lt 1$, a sequence $\epsilon=(\epsilon_1,\dots,\epsilon_n)$ of signs $\pm 1$, and a permutation $w$ of $1,2,\dots,n$. Let the location be $y_k$ after the $k$th step. If $\epsilon_k=1$ then step to the least real number $y_{k+1}\equiv x_{w(k+1)}$ (mod 1), $y_{k+1}>y_k$. If $\epsilon_k=-1$ then step to the greatest real number $y_{k+1}\equiv x_{w(k+1)}$ (mod 1), $y_{k+1}\lt y_k$. But the question of whether any $y_k$ is negative depends only on $\epsilon$ and $w$, not the choice of $x_1,\dots,x_n$. There are $2^n n!$ ways to choose $\epsilon$ and $w$. Is there a simple combinatorial argument that the number of choices such that each $y_k>0$ is $(2n-1)!!=1\cdot 3\cdot 5\cdots (2n-1)$? Then the probability of success is $(2n-1)!!/2^nn! = (2n)!/4^nn!^2$.
Here is a reformulation of the combinatorial result that needs a simple direct proof.
Let $f(n)$ be the number of pairs $(a_1a_2\cdots a_n,\, b_1b_2\cdots b_{n-1})$ such that (a) $a_1 a_2\cdots a_n$ is a permutation of $1,2,\dots, n$, (b) $b_i=0$ or $1$ if $a_i\lt a_{i+1}$, (c) $b_i=0$ or $-1$ if $a_i>a_{i+1}$, and (d) $b_1+b_2+\cdots+b_j\geq 0$ for all $1\leq j\leq n-1$. Then $f(n)=(2n-1)!!$.
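The claim is easy to verify by brute force for small $n$ (a quick sketch, assuming Python; `f` below enumerates exactly the pairs defined by (a)-(d)):

```python
from itertools import permutations, product
from math import prod

# Brute-force check that f(n) = (2n-1)!! for small n.
def f(n):
    count = 0
    for a in permutations(range(1, n + 1)):
        # (b), (c): b_i ranges over {0,1} on ascents and {0,-1} on descents
        choices = [(0, 1) if a[i] < a[i + 1] else (0, -1) for i in range(n - 1)]
        for b in product(*choices):
            # (d): all partial sums nonnegative
            s, ok = 0, True
            for bi in b:
                s += bi
                if s < 0:
                    ok = False
                    break
            if ok:
                count += 1
    return count

def double_factorial_odd(n):          # (2n-1)!! = 1*3*5*...*(2n-1)
    return prod(range(1, 2 * n, 2))

for n in range(1, 6):
    assert f(n) == double_factorial_odd(n)
```

For $n=2$ the three valid pairs are $(12,\,b_1=0)$, $(12,\,b_1=1)$, and $(21,\,b_1=0)$, matching $3!!=3$.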
Update. The combinatorial result is proved bijectively by O. Bernardi, B. Duplantier, and P. Nadeau in Séminaire Lotharingien de Combinatoire, B63e (2010). In their citation [1] they use this result for the same purpose as above, i.e., to compute the probability $P_n$ (though they state the result a little differently).
Second update. The method above can be applied to the $[l,r]$ generalization mentioned by Lwins in his comment. By rescaling we may assume $l=-1$. If we are at $y$ sometime during the walk, choose a number $x$ uniformly from $[0,1]$. With probability 1/2 step from $y$ to $y+\frac{r-1}{2}+\frac{r+1}{2}x$. With probability 1/2 step from $y$ to $y-1-\frac{r+1}{2}x$. This gives a uniform probability of stepping from $y$ to a point in the interval $[y-1,y+r]$. It has the property that once $x$ is chosen, the value of $y$ is determined modulo $\frac{r+1}{2}$. Thus the walk can be described as follows: pick uniformly and independently $0\lt x_1\lt \cdots\lt x_n \lt \frac{r+1}{2}$, pick a permutation $w$ uniformly from the symmetric group $S_n$, and a sequence $\epsilon=(\epsilon_1,\dots,\epsilon_n)$ of independently distributed signs, with a probability of $\frac{r}{r+1}$ for a plus sign and $\frac{1}{r+1}$ for a minus sign. Go through the same procedure as above, working mod $\frac{r+1}{2}$ instead of mod 1. Again a proper walk (i.e., one which never becomes negative) depends only on $w$ and $\epsilon$, and we get the following result:
Theorem. The probability $P_n(r)$ that the walk is proper is given by $$ P_n(r) = \frac{1}{(1+r)^nn!}\sum r^{1+f(w,\beta)}, $$ summed over all pairs $w=a_1a_2\cdots a_n\in S_n$ and $\beta=(b_1,\dots, b_{n-1})\in \lbrace 0,\pm 1\rbrace^{n-1}$ satisfying the conditions (b), (c), and (d) above, where $f(w,\beta)$ is the number of integers $1\leq i\leq n-1$ for which either $a_i\lt a_{i+1}$ and $b_i=1$, or $a_i\gt a_{i+1}$ and $b_i=0$.
For instance, $P_2(r)= (r+2r^2)/2(r+1)^2$ and $P_3(r) =(r+8r^2+6r^3)/6(r+1)^3$. I conjecture that the numerator $N_n(r)$ of $P_n(r)$ is just the polynomial $\sum B_{n,i}r^i$ defined by equation (4) of http://math.mit.edu/~rstan/pubs/pubfiles/29.pdf. This paper gives some additional information about the polynomials $\sum B_{n,i}r^i$. Much additional information can be found in the literature on Stirling permutations, e.g., Bona proves in http://wenku.baidu.com/view/dfa70012cc7931b765ce15e4.html that all zeros of this polynomial are real.
Third update. Alas, the conjecture in my second update is false. Unless there is an error in my code, the sequence of coefficients of $N_n(r)$ for $2\leq n\leq 7$ are $(1,2)$, $(1,8,6)$, $(1,25,55,24)$, $(1,69,361,394,120)$, $(1,176,1999,4416,3083,720)$, $(1,426,9836,41019,52193,26620,5040)$. It is easy to see why the leading coefficient of $N_n(r)$ is $n!$. |
Theory

Definition. A hierarchy is a single-rooted tree.
Let \(T=(V,E)\) be a connected, acyclic directed graph that expresses a hierarchical structure. The set of vertices \(V\) represents the entities that are being organized, and the set of edges \(E\) maps from a given entity to its respective superior.
A vertex is said to have degree \(k\) if it has exactly \(k\) direct subordinates (incoming edges). Vertices with degree 0 are called leaves and vertices with degree \(>0\) are called nodes.
Definition. The average degree of a hierarchy is the average degree of its nodes.
If \(T\) is not empty, then there exists exactly one distinguished vertex \(v_1\in{}V\) that has no superior. All other vertices have exactly one superior.
For every \(v\in{}V\) there exists a unique path that leads from \(v\) to \(v_1\). The length of that path is referred to as depth of \(v\). The depth of \(v_1\) is 0.
Definition. The average depth of a hierarchy is the average depth of its leaves.
Let \(\mu\) be the average degree of \(T\), and let \(l\) be the average depth of \(T\). Then there exists a perfectly regular tree \(T'\) in which
all nodes have degree \(\mu\) and all leaves are at depth \(l\):
Such a tree has \(\mu^k\) vertices at any given depth \(k\), which means that the total number of vertices is \[ n \;=\; \sum\limits_{k=0}^{l}\,\mu^k \;=\; \frac{1-\mu^{l+1}}{1-\mu} . \]
Corollary. In every perfectly regular hierarchy, the relationship between the average degree \(\mu\), the average depth \(l\), and the number of vertices \(n\) is \[\mu^{l+1}-n\,\mu+n-1\;=\;0 .\] Corollary. The proportion of nodes in every perfectly regular hierarchy is \[\frac{1-\mu^{l}}{1-\mu^{l+1}} .\]

Application
Consider any given company hierarchy of \(n=10\,000\) employees. Nodes represent managers and leaves represent workers. Workers have an average chain of \(l=4\) superiors up to (and including) the big boss. If that hierarchy is somewhat regular, then approximately 10% of the workforce — \(1\,000\) employees — would be required for purposes of management, and the average team in that company would have approximately \(\mu=9.7\) members. If the company does not match these predictions, then its hierarchy would have to be somewhat irregular. |
I'm new on Mathematics Stack Exchange, and it's awesome to be a new member of this awesome community. Please correct me if I have made any errors in terms of following the rules for asking questions here. Also, please feel free to drop down below and say hi.
This is a question from one of my statistics courses at uni, on which I am completely stumped (this is not something I'm being marked on, it's a question from practice problems). So here I go :)
Let $X$ be a continuous random variable, whose moment generating function $m_X(t)$ exists, i.e. $\exists$ $h>0$ such that $m_X(t)<\infty$ for $t\in(-h,h)$.
Let $a\geq0$ and let $(X_i)_{i=1,\dots,n}$ be a sequence of i.i.d. random variables whose distribution follows that of $X$. We set $S_n := \sum_{i=1}^nX_i$ and $\bar{X}_n=\frac{1}{n}S_n$.
(i) Show that $\forall x\in \mathbb{R}$ and $t>0$, $I_{\{x>a\}}\leq e^{(x-a)t}$.
(ii) Show that $\mathbb{P}(S_n>a)\leq e^{-at}[m_X(t)]^n$ for $0<t<h$.
(iii) Use the facts that $m_X(0)=1$ and $m_X'(0)=\mathbb{E}(X)$ to show that if $\mathbb{E}(X)<0$ then there exists $0<c<1$ with $\mathbb{P}(S_n>a)\leq c^n$.
Hint: use the mean value theorem and the intermediate value theorem.
Any help would be greatly appreciated.
Thanks in advance! |
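Not a solution, but a numerical sanity check of the bound in (ii) for one concrete choice (my assumption: $X\sim N(-1,1)$, whose MGF is $m_X(t)=e^{-t+t^2/2}$):

```python
import random, math

# Monte Carlo check of P(S_n > a) <= e^{-at} [m_X(t)]^n for X ~ N(-1, 1).
# (Illustration only; any X with an MGF and negative mean would do.)
random.seed(0)
n, a, trials = 20, 0.0, 100_000
hits = sum(1 for _ in range(trials)
           if sum(random.gauss(-1.0, 1.0) for _ in range(n)) > a)
empirical = hits / trials

t = 1.0                                   # t = 1 minimizes the bound here
bound = math.exp(-a * t) * math.exp(-t + t * t / 2) ** n   # e^{-10}
print(empirical, bound)
```

Here the bound is $e^{-10}\approx 4.5\times 10^{-5}$, already of the form $c^n$ with $c=e^{-1/2}<1$, which is the shape of the estimate part (iii) asks for.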
Basically 2 strings, $a>b$, which go into the first box that does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box.
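The "division box" loop is just Euclid's algorithm; a Python sketch of the scheme described (divide, check the remainder, feed the divisor and remainder back in):

```python
# gcd via repeated division: a = b*q + r with r < b; stop when r = 0.
def gcd(a, b):
    while True:
        q, r = divmod(a, b)   # the division box
        if r == 0:
            return b          # the r = 0 check
        a, b = b, r           # loop the remainder back in

print(gcd(48, 18))   # 6
```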
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
What did you try? To prove that $A$ is a subring of $K$, you need to prove that:

(i) $1 \in A$ (note: it is easy to forget this one, so I put it first!), which means that $v(1) \geq 0$. We could easily prove this one using fact 1. of your question: namely, $v(1 \times 1) = v(1) + v(1)$, what could $v(1)$ be? (ii) for $x, y \in A$, $x+y$ and $xy \in A$. These two should be trivial consequences of your points 1. and 3.
Your second point, however, is not true as you wrote it. You must replace "$p$ is a prime number" by "$p$ is a prime element of $A$" (and it is more common to call it $\pi$ instead of $p$, precisely because of this; we usually reserve the name $p$ for prime numbers). Counter-example: $K = \mathbb{Q}((X))$, $A = \mathbb{Q}[[X]]$, ${\frak m} = (X)$.
If $A$ has only one maximal ideal $\mathfrak m$ then it is a local ring, which means that $A^\times = A \setminus \mathfrak m$ is the set of invertible elements of $A$. You may easily prove that an element $x \in A$ is invertible iff $v(x) = 0$. So a candidate for $\mathfrak m$ is the set$$ \mathfrak m = \left\{ x \in A, v(x) > 0 \right\}.$$Proving this should now be as easy as proving that $A$ is a subring of $K$.
The only remaining point to prove is that $\frak m$ is principal; namely, any maximal ideal is always prime, and if it is principal, then its generator must be a prime element. Since multiplying elements adds their valuations, any generator must have valuation $1$. Why does such an element always exist?
Oh, and by the way, condition 2. in your question is redundant. Since $K$ is a field, for any $a \neq 0$, we have $v(1) = v(a \times 1/a) = v(a) + v(1/a)$, therefore $v(a) = \infty \Rightarrow a = 0$; conversely, for $a$ such that $v(a) = 1$, $v(0) = v(0 a)$ proves that $v(0) = v(0) + 1$, which is possible only for $v(0) = \infty$. |
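As a concrete sanity check (my own illustration): the 2-adic valuation on $\mathbb{Q}$ is a standard example of such a valuation, and a brute-force test confirms that $A=\{x : v(x)\geq 0\}$ is closed under sums and products and that the units are exactly the elements of valuation $0$:

```python
from fractions import Fraction

# 2-adic valuation on Q: v(2^k * u/w) = k with u, w odd; v(0) = infinity.
def v2(x):
    x = Fraction(x)
    if x == 0:
        return float('inf')
    k, num, den = 0, x.numerator, x.denominator
    while num % 2 == 0:
        num //= 2; k += 1
    while den % 2 == 0:
        den //= 2; k -= 1
    return k

samples = [Fraction(a, b) for a in range(-6, 7) for b in range(1, 7)]
ring = [x for x in samples if v2(x) >= 0]          # A = {x : v(x) >= 0}
for x in ring:
    for y in ring:
        assert v2(x * y) >= 0                       # closed under products
        assert v2(x + y) >= min(v2(x), v2(y))       # hence closed under sums
# units of A are exactly the elements of valuation 0
assert all((v2(x) == 0) == (x != 0 and v2(1 / x) >= 0) for x in ring)
```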
Case 1: all roots are distinct
In this case, assume $f$ has a total of $n$ real roots. I can take any consecutive pair of roots $a,b$, $a < b$ and say that since $f(a) = f(b) = 0$, using Rolle's theorem, $\exists c \in \mathbb{R}$ such that $f'(c) = 0, a < c < b$. Then I end up with $n-1$ real roots in $f'$. How do I show that $f'$ has a total of $n-1$ roots?
Case 2: repeated roots
Then for some order $k$ at $x_0$, $f(x) = (x-x_0)^kq(x)$ where $q(x_0)\neq 0$. Then $f'(x) = k(x-x_0)^{k-1}q(x) + (x-x_0)^k q'(x)$. How many roots does $f'$ have, and how do I show that they are all real? |
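Not a proof, but a numerical illustration of the Rolle argument in Case 1 (assuming NumPy; the specific roots are my choice): the $n-1$ critical points are real and interlace the $n$ roots.

```python
import numpy as np

# Polynomial with distinct real roots 1, 2, 3, 4; inspect its derivative's roots.
roots = np.array([1.0, 2.0, 3.0, 4.0])
p = np.poly(roots)                       # coefficients of (x-1)(x-2)(x-3)(x-4)
crit = np.sort(np.roots(np.polyder(p)))  # roots of f'

assert np.allclose(np.imag(crit), 0)     # all n-1 = 3 critical points are real
crit = np.real(crit)
for i in range(len(roots) - 1):          # Rolle: one in each open interval
    assert roots[i] < crit[i] < roots[i + 1]
print(crit)
```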
Topic: Discrete Fourier Transform
(This problem clarifies how zero-padding a signal changes its DFT.)
Question
Let x[n] be a signal with duration N. More precisely, assume that $ x[n]=0 $ for $ n> N-1 $ and for $ n<0 $.
Let y[n] be the zero-padding of x[n] to length M>N:
$ y[n]= \left\{ \begin{array}{ll} x[n], & 0\leq n < N,\\ 0, & N \leq n <M. \end{array} \right. $
Show that the M point DFT of y[n] satisfies
$ Y_M [k] = {\mathcal X} \left( \frac{2 \pi k }{M}\right), \text{ for } k=0,1,\ldots, M-1, $
where $ {\mathcal X} (\omega) $ is the DTFT of x[n].
You will receive feedback from your instructor and TA directly on this page. Other students are welcome to comment/discuss/point out mistakes/ask questions too!
TA's comments: As for the question "What's the effect of zero padding?", I want to point out that zero padding can only make your frequency spectrum smoother. It cannot increase the frequency resolution. That means no matter how many zeros you pad, the envelope of your frequency spectrum will remain the same; no new frequency components will show up. This implies that given a length L signal, the length L DFT contains enough information to fully reconstruct the original signal.
Instructor's comments: While it is true that the length L DFT contains enough information to fully reconstruct the original signal, and thus its DTFT, zero padding does increase the number of samples of the DTFT, and thus the resolution of the sampling of the DTFT. This extra resolution is not adding any more "information" per se (from a mathematical standpoint), but it is changing the position and distance between the samples, thus giving a different approximation (from a visual standpoint) of the DTFT. -pm

Answer 1
$ {\mathcal X}(\omega)= \sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n} $
---------------------------------------------
set $ \omega = \frac{2\pi k}{N} $ and use the fact that $ x[n]=0 $ for $ n> N-1 $ and for $ n<0 $
$ {\mathcal X}(\frac{2\pi k}{N})= \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi k}{N} n} = X_{N}[k] $ formula(1)
---------------------------------------------
Why do you need this part? Shouldn't you get an expression for the M point DFT of x[n] instead?
Now we can manipulate the DFT of y[n]
$ \begin{align} Y_{M}[k]&= \sum_{n=0}^{M-1}y[n]e^{-j\frac{2\pi k}{M} n} \\ &= \sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi k}{M} n} && \text{since } y[n]=0 \text{ for } n>N-1 \\ &= {\mathcal X}\left(\frac{2\pi k}{M}\right) && \text{by comparing to formula (1)} \end{align} $
Answer 2
By definition of DFT and DTFT:
$ Y_{M}[k] = \sum_{n=0}^{M-1}x[n]e^{-j2\pi\frac{kn}{M}} $
$ {\mathcal X}(\omega)= \sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n} $
Since x[n] is nonzero only in [0, N-1] and N<M, x[n] is zero whenever n<0 or n>M-1.
(Note: x[n]=0 for n in [N, M-1], but we still keep those terms for notation.)
$ {\mathcal X}(\omega)= \sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n}= \sum_{n=0}^{M-1}x[n]e^{-j\omega n} $
Plug in $ \omega = \frac{2\pi k}{M} $
$ {\mathcal X}(\frac{2\pi k}{M})= \sum_{n=0}^{M-1}x[n]e^{-j\frac{2\pi k}{M}n} $
Compare to $ Y_{M}[k] = \sum_{n=0}^{M-1}x[n]e^{-j2\pi\frac{kn}{M}} $
one can conclude that $ Y_{M}[k] = \sum_{n=0}^{M-1}x[n]e^{-j2\pi\frac{kn}{M}} = {\mathcal X}(\frac{2\pi k}{M}) $
Q.E.D.
Yimin.
Instructor's comment: This proof is very clear. But on the exam, you would not have to write that many details. Perhaps somebody can try to write a shorter proof below? -pm Answer 3
write it here. |
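A quick numerical check of the identity proved above (my own addition, assuming NumPy): the M-point DFT of the zero-padded signal matches the DTFT of x[n] sampled at $\omega = 2\pi k/M$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 32
x = rng.standard_normal(N)               # length-N signal
y = np.concatenate([x, np.zeros(M - N)]) # zero-pad to length M

Y = np.fft.fft(y)                        # M-point DFT of y[n]
# DTFT of x sampled at omega = 2*pi*k/M, computed from the definition
k = np.arange(M)
n = np.arange(N)
X_dtft = (x[None, :] * np.exp(-1j * 2 * np.pi * np.outer(k, n) / M)).sum(axis=1)

assert np.allclose(Y, X_dtft)
```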
Next we consider series with both positive and negative terms, but in a regular pattern: they alternate, as in the
alternating harmonic series for example:
\[ \sum_{n=1}^\infty {(-1)^{n-1}\over n}= {1\over1}+{-1\over2}+{1\over3}+{-1\over4}+\cdots= {1\over1}-{1\over2}+{1\over3}-{1\over4}+\cdots. \]
In this series the sizes of the terms decrease, that is, \( |a_n|\) forms a decreasing sequence, but this is not required in an alternating series. As with positive term series, however, when the terms do have decreasing sizes it is easier to analyze the series, much easier, in fact, than positive term series. Consider pictorially what is going on in the alternating harmonic series, shown in Figure 11.4.1. Because the sizes of the terms \( a_n\) are decreasing, the partial sums \( s_1\), \( s_3\), \( s_5\), and so on, form a decreasing sequence that is bounded below by \( s_2\), so this sequence must converge. Likewise, the partial sums \( s_2\), \( s_4\), \( s_6\), and so on, form an increasing sequence that is bounded above by \( s_1\), so this sequence also converges. Since all the even numbered partial sums are less than all the odd numbered ones, and since the "jumps'' (that is, the \( a_i\) terms) are getting smaller and smaller, the two sequences must converge to the same value, meaning the entire sequence of partial sums \( s_1,s_2,s_3,\ldots\) converges as well.
There's nothing special about the alternating harmonic series---the same argument works for any alternating sequence with decreasing size terms. The alternating series test is worth calling a theorem.
Theorem 11.4.1: The Alternating Series Test
Suppose that \(\{a_n\}_{n=1}^\infty\) is a non-increasing sequence of positive numbers and \(\lim_{n\to\infty}a_n=0\). Then the alternating series \(\sum_{n=1}^\infty (-1)^{n-1} a_n\) converges.
Proof
The odd numbered partial sums, \( s_1\), \( s_3\), \( s_5\), and so on, form a non-increasing sequence, because \( s_{2k+3}=s_{2k+1}-a_{2k+2}+a_{2k+3}\le s_{2k+1}\), since \( a_{2k+2}\ge a_{2k+3}\). This sequence is bounded below by \( s_2\), so it must converge, say \(\lim_{k\to\infty}s_{2k+1}=L\). Likewise, the partial sums \( s_2\), \( s_4\), \( s_6\), and so on, form a non-decreasing sequence that is bounded above by \( s_1\), so this sequence also converges, say \(\lim_{k\to\infty}s_{2k}=M\). Since \(\lim_{n\to\infty} a_n=0\) and \( s_{2k+1}= s_{2k}+a_{2k+1}\),
\[ L=\lim_{k\to\infty}s_{2k+1}=\lim_{k\to\infty}(s_{2k}+a_{2k+1})= \lim_{k\to\infty}s_{2k}+\lim_{k\to\infty}a_{2k+1}=M+0=M, \]
so \(L=M\), the two sequences of partial sums converge to the same limit, and this means the entire sequence of partial sums also converges to \(L\).
\(\square \)
Another useful fact is implicit in this discussion. Suppose that \(L=\sum_{n=1}^\infty (-1)^{n-1} a_n\) and that we approximate \(L\) by a finite part of this sum, say \(L\approx \sum_{n=1}^N (-1)^{n-1} a_n.\) Because the terms are decreasing in size, we know that the true value of \(L\) must be between this approximation and the next one, that is, between \(\sum_{n=1}^N (-1)^{n-1} a_n \quad \) and \(\quad \sum_{n=1}^{N+1} (-1)^{n-1} a_n. \) Depending on whether \(N\) is odd or even, the second will be larger or smaller than the first.
Example 11.4.2
Approximate the alternating harmonic series to one decimal place.
Solution
We need to go roughly to the point at which the next term to be added or subtracted is \(1/10\). Adding up the first nine and the first ten terms we get approximately \(0.746\) and \(0.646\). These are \(1/10\) apart, but it is not clear how the correct value would be rounded. It turns out that we are able to settle the question by computing the sums of the first eleven and twelve terms, which give \(0.737\) and \(0.653\), so correct to one place the value is \(0.7\).
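The numbers in this example are easy to reproduce (a quick sketch; recall the limit is $\ln 2\approx0.693$):

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
def s(N):
    return sum((-1) ** (n - 1) / n for n in range(1, N + 1))

print(round(s(9), 3), round(s(10), 3))   # 0.746 and 0.646: still ambiguous
print(round(s(11), 3), round(s(12), 3))  # 0.737 and 0.653: both round to 0.7
assert s(12) < math.log(2) < s(11)       # consecutive partial sums bracket the limit
```

The final assertion is exactly the error-bound remark above: the true value lies between any two consecutive partial sums.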
We have considered alternating series with first index 1, and in which the first term is positive, but a little thought shows this is not crucial. The same test applies to any similar series, such as \(\sum_{n=0}^\infty (-1)^n a_n\), \(\sum_{n=1}^\infty (-1)^n a_n\), \(\sum_{n=17}^\infty (-1)^n a_n\), etc. |
I am studying the dual simplex method from Lieberman - 10e. An approach called dual simplex method was described that is "applied" on the "primal table" itself, i.e., We do not convert it into its dual problem. For the Primal problem, I know how to find an initial basis using Big-M method or Two phase method. Such a basis is called primal feasible and it may not be dual feasible(i.e., optimal or all function coefficient non-negative). However, from that table itself, is there any way to find an initial dual feasible or optimal solution. So that we can even apply the dual simplex algorithm?
"Initialization: After converting any functional constraints in $\geq$ form to $\leq$ form (by multiplying through both sides by -1), introduce slack variables as needed to construct a set of equations describing the problem.
Find a basic solution such that the coefficients in Eq. (0) are zero for basic variables and nonnegative for nonbasic variables (so the solution is optimal if it is feasible). Go to the feasibility test."
How to find such a basic solution which may be optimal but not feasible in the primal sense? I.e., how can I make all nonbasic functional coefficients non-negative by using the given constraints?
Edit: This is an example.
Suppose we have the problem.
$$\text{max} \;\;2x-3y+4z \\ x+y+z\leq8 \\ 2x+y-z\leq4 \\-x+2y-z\leq-3$$
Then in Tableau form we have,
$$ \begin{matrix} Z & x & y & z & s1 & s2 & s3 & b \\ 1 & -2 & 3 & -4 & 0 & 0 & 0 & 0 && eq(0) \\ 0 & 1 & 1 & 1 & 1 & 0 & 0 & 8 && eq(1)\\ 0 & 2 & 1 & -1 & 0 & 1 & 0 & 4 && eq(2) \\ 0 & -1 & 2 & -1 & 0 & 0 & 1 & -3 && eq(3) \end{matrix} $$
Please note that here it's obvious by adding eq(1) to eq(0) required number of times. I want a more general way.
How can I initialize it for Dual simplex algorithm, i.e., how can I make all coefficients in eq(0) non-negative? Or simply put, how to initialize any table for the dual simplex algorithm?
Edit2: I propose the following conjecture, will it work?
1) Find the most negative coefficient in $eq(0)$ and choose it as our entering variable.
2) Choose the max positive ratio and choose that as our leaving variable. (This way we will proceed toward an optimal solution fastest without caring for primal feasibility.)

3) If no positive ratio exists for $a_{i, enter}>0$, where $i$ represents the row index, the problem is dual infeasible. |
What is a function? Informally, it is a process, or an assignment, from an input set to an output set. It is not just the process or assignment that forms a function, but specifying the input and output is part of what it is. Formally, a function is a triple $(A,B,f)$ where $A,B$ are sets and $f\subseteq A\times B$ is a subset of the cartesian product, i.e., a relation
from $A$ to $B$. If instead one wanted to view the function $(A,B,f)$ as equivalent to $(A,Im(f),f)$ then one needs to add a condition to an otherwise very clean definition. So, to force the codomain to be the range actually burdens the definition rather than simplifying things. That is one reason not to force such things.
Another reason is so that one has no problems talking about composition when it's obvious one should be able to talk about composition (so actually this is a category-theoretic reasoning). Suppose that $f:A\to B$ and $g:B\to C$. If I want to know if the composition $g\circ f$ exists I don't care what the actual range of the functions is. Just a glance at the domain of $g$ and the (ta ta ta taaaaaa)
codomain of $f$. In other words, if the input type of $g$ matches the output type of $f$, then the composition is defined.
If we insisted that codomain=range, then the condition above will have to be replaced by "the range of $f$ is contained in the domain of $g$". And now I'll get into a bit of technical lingo from category theory. The resulting category from forcing codomain=range will be the category $Set_{Surj}$ of sets and surjections. It's a perfectly legit category, but it has far fewer nice properties when compared to $Set$, the category of sets and
all functions. For instance, the empty set is characterized in $Set$ as an initial object (i.e., there is precisely one function from it to any other given set) and is dual to singletons, which are terminal. This is no longer the case in $Set_{Surj}$. The disjoint union of two (or more) sets is an example of a categorical coproduct in $Set$, and is dual to the cartesian product. Disjoint unions are no longer coproducts in $Set_{Surj}$. Many other of the nice properties of $Set$ are lost if passing to $Set_{Surj}$.
In particular, many very convenient injections will no longer be allowed if we require all functions to be surjective. For instance, it is very convenient to be able to speak of the functions $f_y:\mathbb R \to \mathbb R^2$, given by $f_y(x)=(x,y)$, or various curves in the plane being parametrized by some function $\gamma:[a,b]\to \mathbb R^2$. |
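The point that composability is a pure type check can be made concrete in a toy model of $Set$ (my own illustration): a function carries its domain and codomain, and $g\circ f$ is defined exactly when the codomain of $f$ equals the domain of $g$, with no range computation needed.

```python
# A function as a triple (domain, codomain, mapping).
class Func:
    def __init__(self, dom, cod, mapping):
        self.dom, self.cod = frozenset(dom), frozenset(cod)
        self.map = {a: mapping(a) for a in self.dom}
        assert set(self.map.values()) <= self.cod   # well-typed, need not be onto

    def __call__(self, a):
        return self.map[a]

    def compose(self, f):                           # self ∘ f
        if f.cod != self.dom:                       # the only check needed
            raise TypeError("codomain of f must equal domain of g")
        return Func(f.dom, self.cod, lambda a: self(f(a)))

f = Func({1, 2}, {1, 2, 3}, lambda a: a + 1)        # not surjective, and fine
g = Func({1, 2, 3}, {"a", "b", "c"}, lambda a: "abc"[a - 1])
print(g.compose(f)(1))   # 'b'
```

If we instead demanded codomain = range, composing would require computing and comparing images, which is the burden the answer describes.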
The first observation of top quark production in proton-nucleus collisions is reported using proton-lead data collected by the CMS experiment at the CERN LHC at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}} =$ 8.16 TeV. The measurement is performed using events with exactly one isolated electron or muon and at least four jets. The data sample corresponds to an integrated luminosity of 174 nb$^{-1}$. The significance of the $\mathrm{t}\overline{\mathrm{t}}$ signal against the background-only hypothesis is above five standard deviations. The measured cross section is $\sigma_{\mathrm{t}\overline{\mathrm{t}}} =$ 45$\pm$8 nb, consistent with predictions from perturbative quantum chromodynamics.
Measurements of two- and multi-particle angular correlations in pp collisions at $\sqrt{s} =$ 5, 7, and 13 TeV are presented as a function of charged-particle multiplicity. The data, corresponding to integrated luminosities of 1.0 pb$^{-1}$ (5 TeV), 6.2 pb$^{-1}$ (7 TeV), and 0.7 pb$^{-1}$ (13 TeV), were collected using the CMS detector at the LHC. The second-order ($v_2$) and third-order ($v_3$) azimuthal anisotropy harmonics of unidentified charged particles, as well as $v_2$ of $\mathrm{K}^0_\mathrm{S}$ and $\Lambda/\overline{\Lambda}$ particles, are extracted from long-range two-particle correlations as functions of particle multiplicity and transverse momentum. For high-multiplicity pp events, a mass ordering is observed for the $v_2$ values of charged hadrons (mostly pions), $\mathrm{K}^0_\mathrm{S}$, and $\Lambda/\overline{\Lambda}$, with lighter particle species exhibiting a stronger azimuthal anisotropy signal below $p_\mathrm{T} \approx$ 2 GeV/$c$. For 13 TeV data, the $v_2$ signals are also extracted from four- and six-particle correlations for the first time in pp collisions, with comparable magnitude to those from two-particle correlations. These observations are similar to those seen in pPb and PbPb collisions, and support the interpretation of a collective origin for the observed long-range correlations in high-multiplicity pp collisions.
Measurements are presented of the associated production of a W boson and a charm-quark jet (W + c) in pp collisions at a center-of-mass energy of 7 TeV. The analysis is conducted with a data sample corresponding to a total integrated luminosity of 5 inverse femtobarns, collected by the CMS detector at the LHC. W boson candidates are identified by their decay into a charged lepton (muon or electron) and a neutrino. The W + c measurements are performed for charm-quark jets in the kinematic region $p_T^{jet} \gt$ 25 GeV, $|\eta^{jet}| \lt$ 2.5, for two different thresholds for the transverse momentum of the lepton from the W-boson decay, and in the pseudorapidity range $|\eta^{\ell}| \lt$ 2.1. Hadronic and inclusive semileptonic decays of charm hadrons are used to measure the following total cross sections: $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 107.7 +/- 3.3 (stat.) +/- 6.9 (syst.) pb ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 84.1 +/- 2.0 (stat.) +/- 4.9 (syst.) pb ($p_T^{\ell} \gt$ 35 GeV), and the cross section ratios $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.954 +/- 0.025 (stat.) +/- 0.004 (syst.) ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.938 +/- 0.019 (stat.) +/- 0.006 (syst.) ($p_T^{\ell} \gt$ 35 GeV). Cross sections and cross section ratios are also measured differentially with respect to the absolute value of the pseudorapidity of the lepton from the W-boson decay. These are the first measurements from the LHC directly sensitive to the strange quark and antiquark content of the proton. Results are compared with theoretical predictions and are consistent with the predictions based on global fits of parton distribution functions.
A search for narrow resonances in the dijet mass spectrum is performed using data corresponding to an integrated luminosity of 2.9 inverse pb collected by the CMS experiment at the LHC. Upper limits at the 95% confidence level (CL) are presented on the product of the resonance cross section, branching fraction into dijets, and acceptance, separately for decays into quark-quark, quark-gluon, or gluon-gluon pairs. The data exclude new particles predicted in the following models at the 95% CL: string resonances, with mass less than 2.50 TeV, excited quarks, with mass less than 1.58 TeV, and axigluons, colorons, and E_6 diquarks, in specific mass intervals. This extends previously published limits on these models.
The production of jets associated with bottom quarks is measured for the first time in PbPb collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. Jet spectra are reported in the transverse momentum (pt) range of 80-250 GeV, and within pseudorapidity abs(eta) < 2. The nuclear modification factor (R[AA]) calculated from these spectra shows a strong suppression in the b-jet yield in PbPb collisions relative to the yield observed in pp collisions at the same energy. The suppression persists to the largest values of pt studied, and is centrality dependent. The R[AA] is about 0.4 in the most central events, similar to previous observations for inclusive jets. This implies that jet quenching does not have a strong dependence on parton mass and flavor in the jet pt range studied.
A search for neutral Higgs bosons in the minimal supersymmetric extension of the standard model (MSSM) decaying to tau-lepton pairs in pp collisions is performed, using events recorded by the CMS experiment at the LHC. The dataset corresponds to an integrated luminosity of 24.6 fb$^{−1}$, with 4.9 fb$^{−1}$ at 7 TeV and 19.7 fb$^{−1}$ at 8 TeV. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes the case where the Higgs boson is produced in association with a b-quark jet. No excess is observed in the tau-lepton-pair invariant mass spectrum. Exclusion limits are presented in the MSSM parameter space for different benchmark scenarios, m$_{h}^{max}$ , m$_{h}^{mod +}$ , m$_{h}^{mod −}$ , light-stop, light-stau, τ-phobic, and low-m$_{H}$. Upper limits on the cross section times branching fraction for gluon fusion and b-quark associated Higgs boson production are also given.
Measurements of the differential production cross sections in transverse momentum and rapidity for B0 mesons produced in pp collisions at sqrt(s) = 7 TeV are presented. The dataset used was collected by the CMS experiment at the LHC and corresponds to an integrated luminosity of 40 inverse picobarns. The production cross section is measured from B0 meson decays reconstructed in the exclusive final state J/Psi K-short, with the subsequent decays J/Psi to mu^+ mu^- and K-short to pi^+ pi^-. The total cross section for pt(B0) > 5 GeV and y(B0) < 2.2 is measured to be 33.2 +/- 2.5 +/- 3.5 microbarns, where the first uncertainty is statistical and the second is systematic.
The Upsilon production cross section in proton-proton collisions at sqrt(s) = 7 TeV is measured using a data sample collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 3.1 +/- 0.3 inverse picobarns. Integrated over the rapidity range |y|<2, we find the product of the Upsilon(1S) production cross section and branching fraction to dimuons to be sigma(pp to Upsilon(1S) X) B(Upsilon(1S) to mu+ mu-) = 7.37 +/- 0.13^{+0.61}_{-0.42}\pm 0.81 nb, where the first uncertainty is statistical, the second is systematic, and the third is associated with the estimation of the integrated luminosity of the data sample. This cross section is obtained assuming unpolarized Upsilon(1S) production. If the Upsilon(1S) production polarization is fully transverse or fully longitudinal the cross section changes by about 20%. We also report the measurement of the Upsilon(1S), Upsilon(2S), and Upsilon(3S) differential cross sections as a function of transverse momentum and rapidity.
A search for Z bosons in the mu^+mu^- decay channel has been performed in PbPb collisions at a nucleon-nucleon centre of mass energy = 2.76 TeV with the CMS detector at the LHC, in a 7.2 inverse microbarn data sample. The number of opposite-sign muon pairs observed in the 60--120 GeV/c2 invariant mass range is 39, corresponding to a yield per unit of rapidity (y) and per minimum bias event of (33.8 ± 5.5 (stat) ± 4.4 (syst)) 10^{-8}, in the |y|<2.0 range. Rapidity, transverse momentum, and centrality dependencies are also measured. The results agree with next-to-leading order QCD calculations, scaled by the number of incoherent nucleon-nucleon collisions.
A measurement of the J/psi and psi(2S) production cross sections in pp collisions at sqrt(s)=7 TeV with the CMS experiment at the LHC is presented. The data sample corresponds to an integrated luminosity of 37 inverse picobarns. Using a fit to the invariant mass and decay length distributions, production cross sections have been measured separately for prompt and non-prompt charmonium states, as a function of the meson transverse momentum in several rapidity ranges. In addition, cross sections restricted to the acceptance of the CMS detector are given, which are not affected by the polarization of the charmonium states. The ratio of the differential production cross sections of the two states, where systematic uncertainties largely cancel, is also determined. The branching fraction of the inclusive B to psi(2S) X decay is extracted from the ratio of the non-prompt cross sections to be: BR(B to psi(2S) X) = (3.08 +/- 0.12(stat.+syst.) +/- 0.13(theor.) +/- 0.42(BR[PDG])) 10^-3
Isolated photon production is measured in proton-proton and lead-lead collisions at nucleon-nucleon centre-of-mass energies of 2.76 TeV in the pseudorapidity range |eta|<1.44 and transverse energies ET between 20 and 80 GeV with the CMS detector at the LHC. The measured ET spectra are found to be in good agreement with next-to-leading-order perturbative QCD predictions. The ratio of PbPb to pp isolated photon ET-differential yields, scaled by the number of incoherent nucleon-nucleon collisions, is consistent with unity for all PbPb reaction centralities.
The prompt D0 meson azimuthal anisotropy coefficients, v[2] and v[3], are measured at midrapidity (abs(y) < 1.0) in PbPb collisions at a center-of-mass energy sqrt(s[NN]) = 5.02 TeV per nucleon pair with data collected by the CMS experiment. The measurement is performed in the transverse momentum (pT) range of 1 to 40 GeV/c, for central and midcentral collisions. The v[2] coefficient is found to be positive throughout the pT range studied. The first measurement of the prompt D0 meson v[3] coefficient is performed, and values up to 0.07 are observed for pT around 4 GeV/c. Compared to measurements of charged particles, a similar pT dependence, but smaller magnitude for pT < 6 GeV/c, is found for prompt D0 meson v[2] and v[3] coefficients. The results are consistent with the presence of collective motion of charm quarks at low pT and a path length dependence of charm quark energy loss at high pT, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma.
The transverse momentum (pt) spectrum of prompt D0 mesons and their antiparticles has been measured via the hadronic decay channels D0 to K- pi+ and D0-bar to K+ pi- in pp and PbPb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair with the CMS detector at the LHC. The measurement is performed in the D0 meson pt range of 2-100 GeV and in the rapidity range of abs(y)<1. The pp (PbPb) dataset used for this analysis corresponds to an integrated luminosity of 27.4 inverse picobarns (530 inverse microbarns). The measured D0 meson pt spectrum in pp collisions is well described by perturbative QCD calculations. The nuclear modification factor, comparing D0 meson yields in PbPb and pp collisions, was extracted for both minimum-bias and the 10% most central PbPb interactions. For central events, the D0 meson yield in the PbPb collisions is suppressed by a factor of 5-6 compared to the pp reference in the pt range of 6-10 GeV. For D0 mesons in the high-pt range of 60-100 GeV, a significantly smaller suppression is observed. The results are also compared to theoretical calculations.
A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance pTmiss in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb−1. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the pTmiss, the scalar sum of jet transverse momenta, and the mT2 mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results.
A measurement of the exclusive two-photon production of muon pairs in proton-proton collisions at sqrt(s)= 7 TeV, pp to p mu^+ mu^- p, is reported using data corresponding to an integrated luminosity of 40 inverse picobarns. For muon pairs with invariant mass greater than 11.5 GeV, transverse momentum pT(mu) > 4 GeV and pseudorapidity |eta(mu)| < 2.1, a fit to the dimuon pt(mu^+ mu^-) distribution results in a measured cross section of sigma(pp to p mu^+ mu^- p) = 3.38 [+0.58 -0.55] (stat.) +/- 0.16 (syst.) +/- 0.14 (lumi.) pb, consistent with the theoretical prediction evaluated with the event generator Lpair. The ratio to the predicted cross section is 0.83 [+0.14-0.13] (stat.) +/- 0.04 (syst.) +/- 0.03 (lumi.). The characteristic distributions of the muon pairs produced via photon-photon fusion, such as the muon acoplanarity, the muon pair invariant mass and transverse momentum agree with those from the theory.
Measurements of the differential cross sections for the production of exactly four jets in proton-proton collisions are presented as a function of the transverse momentum pt and pseudorapidity eta, together with the correlations in azimuthal angle and the pt balance among the jets. The data sample was collected in 2010 at a center-of-mass energy of 7 TeV with the CMS detector at the LHC, with an integrated luminosity of 36 inverse picobarns. The cross section for a final state with a pair of hard jets with pt > 50 GeV and another pair with pt > 20 GeV within abs(eta) < 4.7 is measured to be sigma = 330 +- 5 (stat.) +- 45 (syst.) nb. It is found that fixed-order matrix element calculations including parton showers describe the measured differential cross sections in some regions of phase space only, and that adding contributions from double parton scattering brings the Monte Carlo predictions closer to the data.
Square roots are typically computed by root finding; if you want to find $\sqrt{x}$, you would use a root finding algorithm on
$$g(t) = t^2 - x$$
One such algorithm would be, for example, Newton's Method. Wikipedia has an entire page describing algorithms for root finding, and some more for computing square roots.
The unifying theme behind these algorithms is that you define a sequence $\{x_n\}_{n \in \mathbb{N}}$ such that $x_n \to x^*$, where $g(x^*) = 0$ and, depending on the algorithm and function, you can prove that it converges to the true root, and that it does so with a certain speed. The convergence properties usually depend on a good first approximation to where the root is, so not everything's rose colored. However, a priori this means that you can get arbitrarily close to the root and, for well behaved inputs, you will get the actual root (i.e. perfect squares).
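As a concrete sketch (in Python; the starting guess and stopping rule are my own choices, not prescribed by any particular source), Newton's method applied to $g(t) = t^2 - x$ gives the classic Babylonian iteration $t \leftarrow (t + x/t)/2$:

```python
def newton_sqrt(x, tol=1e-12, max_iter=100):
    """Approximate sqrt(x) via Newton's method on g(t) = t^2 - x.

    The Newton update t - g(t)/g'(t) = t - (t^2 - x)/(2t) simplifies
    to (t + x/t)/2, the Babylonian iteration.
    """
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    t = x if x >= 1 else 1.0        # crude but sufficient first guess
    for _ in range(max_iter):
        t_next = 0.5 * (t + x / t)
        if abs(t_next - t) < tol:   # stop once the iterates settle
            return t_next
        t = t_next
    return t

print(newton_sqrt(2))    # ~1.4142135623730951
print(newton_sqrt(49))   # ~7.0
```

Near the root the number of correct digits roughly doubles per step, which is the "certain speed" of convergence alluded to above.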
As many other answers described, computers work with a floating point representation, which are limited in what they can represent to a subset of the rational numbers. This means, in practice, that most numbers can't in fact be handled by your computer, and furthermore that you have to take several precautions when working with them; "What Every Computer Scientist Should Know About Floating-Point Arithmetic" is a pretty classic introduction, which is also friendly for math students.
Computer Algebra Systems usually choose to instead work with a symbolic representation of math: instead of using actual numbers, they implement the typical rules of math on symbols. In this approach, you would have a representation for $\sqrt{2}$, and a rule that tells you $\sqrt{x}^2 = x$; thus $\sqrt{2}^2 = 2$. At the end, they do have to approximate, but they keep the most accurate representation possible until the very last minute, which avoids many errors.
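As a toy illustration of the symbolic idea (a hand-rolled sketch, not how any real CAS is implemented), one can keep $\sqrt{n}$ as a symbol, apply the rule $\sqrt{x}^2 = x$ exactly, and fall back to floating point only on demand:

```python
from fractions import Fraction

class Sqrt:
    """A toy symbolic square root: Sqrt(n) stands for the exact value of sqrt(n)."""
    def __init__(self, n):
        self.n = Fraction(n)

    def squared(self):
        # The rewrite rule sqrt(x)^2 = x, applied exactly -- no rounding.
        return self.n

    def approx(self):
        # Only when a numeric answer is demanded do we use floats.
        return float(self.n) ** 0.5

r = Sqrt(2)
print(r.squared())   # 2, exactly
print(r.approx())    # 1.4142135623730951, only approximate
```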
If you are interested in reading more, "Numerical Analysis" by Richard L. Burden is a book I really liked when taking my Numerical Analysis class.
There is a very simple method to simulate from the Gaussian copula which is based on the definitions of the multivariate normal distribution and the Gauss copula.
I'll start by providing the required definition and properties of the multivariate normal distribution, followed by the Gaussian copula, and then I'll provide the algorithm to simulate from the Gauss copula.
Multivariate normal distribution

A random vector $X = (X_1, \ldots, X_d)'$ has a multivariate normal distribution if $$X \stackrel{\mathrm{d}}{=} \mu + AZ,$$ where $Z$ is a $k$-dimensional vector of independent standard normal random variables, $\mu$ is a $d$-dimensional vector of constants, and $A$ is a $d\times k$ matrix of constants. The notation $\stackrel{\mathrm{d}}{=}$ denotes equality in distribution. So, each component of $X$ is essentially a weighted sum of independent standard normal random variables. From the properties of mean vectors and covariance matrices, we have ${\rm E}(X) = \mu$ and ${\rm cov}(X) = \Sigma$, with $\Sigma = AA'$, leading to the natural notation $X \sim {\rm N}_d(\mu, \Sigma)$.
Gauss copula

The Gauss copula is defined implicitly from the multivariate normal distribution; that is, the Gauss copula is the copula associated with a multivariate normal distribution. Specifically, from Sklar's theorem the Gauss copula is $$C_P(u_1, \ldots, u_d) = \boldsymbol{\Phi}_P(\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_d)),$$ where $\Phi$ denotes the standard normal distribution function, and $\boldsymbol{\Phi}_P$ denotes the multivariate standard normal distribution function with correlation matrix $P$. So, the Gauss copula is simply a standard multivariate normal distribution where the probability integral transform is applied to each margin.
Simulation algorithm

In view of the above, a natural approach to simulate from the Gauss copula is to simulate from the multivariate standard normal distribution with an appropriate correlation matrix $P$, and to convert each margin using the probability integral transform with the standard normal distribution function. Simulating from a multivariate normal distribution with covariance matrix $\Sigma$ essentially comes down to taking a weighted sum of independent standard normal random variables, where the "weight" matrix $A$ can be obtained from the Cholesky decomposition of the covariance matrix $\Sigma$.
Therefore, an algorithm to simulate $n$ samples from the Gauss copula with correlation matrix $P$ is:
1. Perform a Cholesky decomposition of $P$, and set $A$ as the resulting lower triangular matrix.
2. Repeat the following steps $n$ times.
   1. Generate a vector $Z = (Z_1, \ldots, Z_d)'$ of independent standard normal variates.
   2. Set $X = AZ$.
   3. Return $U = (\Phi(X_1), \ldots, \Phi(X_d))'$.
The following code is an example implementation of this algorithm using R:
## Initialization and parameters
set.seed(123)
P <- matrix(c(1, 0.1, 0.8, # Correlation matrix
0.1, 1, 0.4,
0.8, 0.4, 1), nrow = 3)
d <- nrow(P) # Dimension
n <- 200 # Number of samples
## Simulation (non-vectorized version)
A <- t(chol(P))
U <- matrix(nrow = n, ncol = d)
for (i in 1:n){
Z <- rnorm(d)
X <- A%*%Z
U[i, ] <- pnorm(X)
}
## Simulation (compact vectorized version)
U <- pnorm(matrix(rnorm(n*d), ncol = d) %*% chol(P))
## Visualization
pairs(U, pch = 16,
labels = sapply(1:d, function(i){as.expression(substitute(U[k], list(k = i)))}))
The following chart shows the data resulting from the above R code.
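For comparison, here is a sketch of the same algorithm in Python with NumPy (variable names are mine; the correlation matrix is copied from the R example, and $\Phi$ is built from the standard library's error function):

```python
import math
import numpy as np

def std_normal_cdf(x):
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2, using only the standard library
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = np.random.default_rng(123)
P = np.array([[1.0, 0.1, 0.8],   # same correlation matrix as the R example
              [0.1, 1.0, 0.4],
              [0.8, 0.4, 1.0]])
n = 200                          # number of samples
d = P.shape[0]                   # dimension

A = np.linalg.cholesky(P)        # lower-triangular factor, P = A A'
Z = rng.standard_normal((n, d))  # independent N(0, 1) variates, one row per sample
X = Z @ A.T                      # each row is now N(0, P)
U = np.vectorize(std_normal_cdf)(X)  # probability integral transform per margin

print(U.shape)                   # (200, 3)
```

Note that R's `chol` returns the upper-triangular factor (hence the `t()` in the R code), while `np.linalg.cholesky` returns the lower-triangular one directly.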
In rudimentary thermodynamics, an equation of state is any equation relating state variables such as pressure, volume, temperature, energy, and so on. In cosmology and fundamental physics, the term is most often used for equations describing the relationship between the energy density \(\rho\) and pressure \(p\):
\[ p = p(\rho) \] That's because these two variables are entries in the stress-energy tensor that directly enters Einstein's equations, those that determine the evolution of the spacetime geometry. The simplest relationship you may have is a proportionality law:
\[ p = w\cdot \rho \] where \(w\) is a numerical constant. Why is it dimensionless? It's because dimensionally speaking, pressure is force per unit area which is the same thing as energy per unit distance and unit area which is the same as energy per unit volume i.e. energy density. Now, the number \(w\) can't ever jump out of the interval \([-1,+1]\). Why?
The reason is that the speed of sound \(v\) can't exceed the speed of light in the vacuum, \(c\). The speed of sound may be calculated from:
\[ \frac{ v^2}{c^2} = \left( \frac{\partial p}{\partial \rho} \right)^2 \] If you substitute a linear dependence of the pressure on the energy density, both sides are simply equal to \(w^2\) which therefore can't be greater than \(+1\). Similar constraints are equivalent to some of the "energy conditions" but I don't want to discuss numerous energy conditions that have been analyzed in some older TRF blog entries.
Important values of \(w\)
Instead, let us look at important values of \(w\) in the allowed interval. The lowest allowed value is
\[ w=-1 \] Is there a material that may give you this highly negative pressure? Well, it's not really a "material": it's the cosmological constant, the dominant model for the "dark energy" we know (or believe) to constitute 73% of the energy density in the Universe. Note that if you set \(p=-\rho\), the 4-dimensional stress-energy tensor will be
\[ T_{\mu\nu} = {\rm diag} (+1,-1,-1,-1)\rho \] which means that it will be proportional to the metric tensor. Such a tensor doesn't pick any preferred reference frame; it locally preserves the Lorentz symmetry. Globally, when you require Einstein's equations to hold, you will find many solutions to these equations. Among them, the most important ones will be the "maximally symmetric" universes that include, aside from the flat Minkowski space for \(\rho=0\), de Sitter space for a positive \(\rho\) and anti de Sitter space for a negative \(\rho\) as well.
The existence of the cosmological constant may also be deduced from a Lagrangian; the relevant term in the action is proportional simply to
\[\int {\rm d}^4x \,\sqrt{g} \] so the Lagrangian density is constant, assuming that you include the correct "proper spacetime volume" integration measure.
The negative-energy-density space, the anti de Sitter space, may be viewed as a Lorentzian-signature hyperboloid of a sort. It contains globally timelike Killing vectors which is why you may define "globally static" frames. That's necessary for supersymmetry – because supercharges' anticommutators include time-like translations. And indeed, anti de Sitter spaces are among the spaces that are most fully understood within string theory (and its approximations) because of the supersymmetry that may remain unbroken.
More realistically, we may also consider de Sitter space with a positive value of \(\rho\). Our Universe is almost certainly an example. Supersymmetry has to be broken in such a space; well, in the real world, it's broken much more intensely than by the minimum breaking required by the de Sitter curvature. De Sitter space may also be visualized as a "static space" but only if you cut the "exterior space" beyond the cosmic horizon. Also, de Sitter space may be imagined as a space whose spatial volume is exponentially increasing – a picture that is essential in cosmology (both during inflation which has an approximate de Sitter space, as well as during the recent and future expansion of our Universe that is dominated by a much smaller positive value of the vacuum energy than the huge value during inflation).
The supernova observations indicate that most of the energy density in the Universe indeed has \(w\) close to \(-1\) which supports the idea that the cosmological constant (or something very similar to it) is the right detailed name of the vague beast known as "dark energy". Are there other negative values of \(w\) that you should know? You bet.
Cosmic strings and domain walls
In string theory (whether the real one or its partially inconsistent imitations and approximations), one may find one-dimensional and two-dimensional objects, the cosmic strings and the domain walls. The cosmic strings may be nothing else than the fundamental strings that have grown to astronomical dimensions (usually because of the extreme conditions of the early Universe) but they may also be other kinds of string-like objects that string theory admits that just happen to be large. The same thing applies to the two-dimensional domain walls (more generally, domain walls are objects of co-dimension one) which may be some membranes or membrane-shaped entities found in string theory.
What is the equation of state of a material composed of cosmic strings? Well, that's not hard to calculate. Their histories in the spacetime look like two-dimensional world sheets. The energy density within these world sheet is given by the stress-energy tensor. It's proportional to a two-dimensional delta-function, so that the energy density is only nonzero at the locus of the world sheet. However, it must be proportional to
\[ T_{\mu\nu} = {\rm diag} (+1,-1,0,0) \rho \] because in the two-dimensional "spacetime" (world sheet) generated by the first two dimensions, the (world sheet) Lorentz symmetry must continue to be unbroken so the corresponding stress-energy tensor has to be proportional to the metric tensor again. It's just like the case of the cosmological constant but now it only applies to two spacetime dimensions – time and the dimension along the string.
However, strings may have arbitrary directions in space. For all of them, the trace of the spatial part of the stress energy tensor is the same. But you may average over the directions, anyway. The outcome is an averaged value of the stress-energy tensor:
\[ T_{\mu\nu} = {\rm diag} (+1,-1/3,-1/3,-1/3) \rho \] Well, we just divided \(-1\) into three pieces, to make the spatial part rotationally symmetric while preserving the trace. Without much ado, we see that cosmic strings have \(w=-1/3\), already too close to zero to be consistent with the "dark energy" seen in the Universe. The domain walls have a Lorentz-invariant stress-energy tensor in the 2+1-dimensional world volumes,
\[ T_{\mu\nu} = {\rm diag} (+1,-1,-1,0) \rho, \] which after averaging over all the directions of the membrane gives us
\[ T_{\mu\nu} = {\rm diag} (+1,-2/3,-2/3,-2/3) \rho \] i.e. \(w=-2/3\). It's more negative than for cosmic strings but still too close to zero.
Dust
The dust, i.e. matter particles that are almost at rest and not moving much, have \(p=0\) which means \(w=0\) as well. Pressure is force exerted by the material on the walls but if the particles inside the box are not moving at all, they don't act on the walls of the box and the pressure vanishes. I have actually used this logic above – for cosmic strings and domain walls – because this realization was needed to justify why the "last components" of the stress-energy tensor were set to zero. It's also because the objects are separated from the walls in the transverse dimensions and they're not moving in those directions, so they can't exert a pressure there.
All regular "slow and cold" materials belong to the category of "dust", whether they're gold or something much less valuable.
Radiation
When particles get faster, the pressure keeps on increasing. An important point is when the particles move by the speed of light. In that case, we have \(w=+1/3\). Note that the sign is opposite than for the cosmic strings. Why is it exactly \(+1/3\)? Well, consider a photon confined in a box of volume \(L^3\). The photon moves by the velocity \(v_x\) in the \(x\)-direction so it takes \(L/v_x\) for it to get from the left wall to the right one or vice versa. Every time it hits the wall, it gets reflected: the momentum changes from \(p_x\) to \(-p_x\) or vice versa. Regardless of the sign, the photon delivers the outward momentum of \(2p_x\) to the left or right walls. Recall it takes \(L/v_x\) of time so the momentum per unit time, counting only the left and right walls, is the ratio \(2p_x / (L/v_x) = 2p_x\cdot v_x/L\). Add the same "outward force" contributions from the other two pairs of the faces of the box to get a force equal to \(F=2p\cdot v / L\). When you divide this force by the area, you get
\[ p_{\rm pressure} = F/A = \frac{2p\cdot v}{6L^3} = \frac{\rho}{3} \] because \(p\cdot v = c|p| = E\) is the energy of the photon and, when divided by \(L^3\), you get the energy density. This proves \(p=\rho/3\) for radiation. The same derivation holds for any particles that move (nearly) by the speed of light, including neutrinos or gravitons or whatever you like (even though gravitons won't really get reflected from any wall you may construct). It follows that \(w=+1/3\) for radiation.
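The isotropic averaging behind this derivation can be sanity-checked numerically: with \(c=1\) and \(E=|p|\), the pressure of an isotropic photon gas is \(\rho\langle v_x^2\rangle\), so \(p=\rho/3\) boils down to \(\langle v_x^2\rangle = 1/3\) over random directions. A quick Monte Carlo sketch (my own setup, in units with \(c=1\)):

```python
import math
import random

# Check <v_x^2> = 1/3 for directions drawn uniformly from the unit sphere:
# this is the isotropic average behind p = rho / 3 for radiation.
random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    z = random.uniform(-1.0, 1.0)          # cos(theta) is uniform on a sphere
    azimuth = random.uniform(0.0, 2.0 * math.pi)
    vx = math.sqrt(1.0 - z * z) * math.cos(azimuth)
    total += vx * vx

print(total / N)   # close to 1/3
```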
Dense black hole gas
For the sake of completeness, I want to mention a favorite "material" of Tom Banks, a somewhat unrealistic "dense black hole gas", which allows you to reach \(p=+\rho\), the opposite extreme value of \(p\) than the vacuum energy density. The relationship may be at least formally derived if you imagine a region with lots of black holes treated as "densely packed balls" and if you use the Schwarzschild formulae for the radius-mass relationship etc. Don't expect this material to occur in your lab: it's mostly a purely theoretical visual way to imagine how the opposite extreme has to look like.
Cosmological evolution
For different values of \(w\), you get very different cosmological evolutions. The negative pressure offered by the cosmological constant, cosmic strings, or domain walls will be able to accelerate the expansion of the Universe – although the cosmological constant is the only one that quantitatively agrees with the observed acceleration of the expansion. Now, you may re-read the article about energy non-conservation in cosmology.
I was explaining that the total mass-energy carried by the dust remains constant when the Universe is expanding. However, the radiation sees its wavelength to increase proportionally to \(a\), the linear dimensions of the expanding Universe, which is why the energy of each photon (or another particle) goes down like \(1/a\): recall that the energy is proportional to the frequency or inversely proportional to the wavelength. The volume goes like \(a^3\) so the energy density, \(E/V\), goes like \(1/a^4\) for radiation.
The energy stored in the cosmological constant has a constant energy density; that's why the cosmological constant has the word "constant" in it. So the energy density goes like \(1/a^0\) and the total energy goes up as \(a^3\). During inflation, this is the reason why the total mass of the Universe expands much like the volume, a reason why Alan Guth says that while people believe that there's no free lunch, inflation is the ultimate free lunch. For other equations of state, the energy density goes like
\[ \rho \sim a^{-3(w+1)}. \] Of course, one may generalize those comments to more complicated, nonlinear equations of state but it's important for a physicist to cover the landscape of possibilities by friendly, well-understood situations so that no place ends up being "completely unfamiliar".
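A two-line sketch makes the dictionary between \(w\) and the dilution rate explicit (the list of cases simply restates the values derived above):

```python
# Dilution of the energy density with the scale factor a: rho ~ a^(-3(w+1)).
def density_exponent(w):
    """Exponent n in rho ~ a^(-n) for the equation of state p = w * rho."""
    return 3 * (w + 1)

for name, w in [("cosmological constant", -1.0),
                ("domain walls", -2 / 3),
                ("cosmic strings", -1 / 3),
                ("dust", 0.0),
                ("radiation", 1 / 3),
                ("dense black hole gas", 1.0)]:
    print(f"{name:22s} w = {w:+.2f}   rho ~ a^(-{density_exponent(w):.0f})")
```

The familiar cases come out as expected: dust dilutes as \(a^{-3}\), radiation as \(a^{-4}\), and the cosmological constant not at all.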
Summary
To summarize, for the purpose of cosmology, the dominant property describing the type of material is \(w=p/\rho\) which is \(0\) for ordinary matter composed of particles and similarly \(-d/3\) for large objects with \(d\) spatial dimensions (the cosmological constant may be uniformly counted as a bulk-filling three-brane). Positive values of \(w\) are possible if the objects have a high velocity; \(w=+1/3\) is valid for radiation (at the speed of light). Higher values of \(w\) are extreme, unrealistic, and at \(w=+1\), they may be linked to a dense gas of black holes. Each nonzero value of \(w\) quantifies how much the total energy will fail to be conserved during the cosmological evolution.
Hipster dynamics¶
This week I started seeing references all over the internet to this paper:
The Hipster Effect: When Anticonformists All Look The Same. It describes a simple mathematical model of conformity and non-conformity among a mutually interacting population, and finds some interesting results: namely, conformity among a population of self-conscious non-conformists is similar to a phase transition in a time-delayed thermodynamic system. In other words, with enough hipsters around responding to delayed fashion trends, a plethora of facial hair and fixed gear bikes is a natural result.
Also naturally, upon reading the paper I wanted to try to reproduce the work. The paper solves the problem analytically for a continuous system and shows the precise values of certain phase transitions within the long-term limit of the postulated system. Though such theoretical derivations are useful, I often find it more intuitive to simulate systems like this in a more approximate manner to gain hands-on understanding.
Mathematically Modeling Hipsters¶
We'll start by defining the problem, and going through the notation suggested in the paper. We'll consider a group of $N$ people, and define the following quantities:
- $\epsilon_i$ : this value is either $+1$ or $-1$. $+1$ means person $i$ is a hipster, while $-1$ means they're a conformist.
- $s_i(t)$ : this is also either $+1$ or $-1$. This indicates person $i$'s choice of style at time $t$. For example, $+1$ might indicate a bushy beard, while $-1$ indicates clean-shaven.
- $J_{ij}$ : The influence matrix. This is a value greater than zero which indicates how much person $j$ influences person $i$.
- $\tau_{ij}$ : The delay matrix. This is an integer telling us the length of delay for the style of person $j$ to affect the style of person $i$.
The idea of the model is this: on any given day, person $i$ looks at the world around him or her, and sees some previous day's version of everyone else. This information is $s_j(t - \tau_{ij})$.
The amount that person $j$ influences person $i$ is given by the influence matrix, $J_{ij}$, and after putting all the information together, we see that person $i$'s mean impression of the world's style is
$$ m_i(t) = \frac{1}{N} \sum_j J_{ij} \cdot s_j(t - \tau_{ij}) $$
Given the problem setup, we can quickly check whether this impression matches their own current style:
- if $m_i(t) \cdot s_i(t) > 0$, then person $i$ matches those around them
- if $m_i(t) \cdot s_i(t) < 0$, then person $i$ looks different than those around them
A hipster who notices that their style matches that of the world around them will risk giving up all their hipster cred if they don't change quickly; a conformist will have the opposite reaction. Because $\epsilon_i$ = $+1$ for a hipster and $-1$ for a conformist, we can encode this observation in a single value which tells us which way the person will lean that day:
$$ x_i(t) = -\epsilon_i m_i(t) s_i(t) $$
Simple! If $x_i(t) > 0$, then person $i$ will more likely switch their style that day, and if $x_i(t) < 0$, person $i$ will more likely maintain the same style as the previous day. So we have a formula for how to update each person's style based on their preferences, their influences, and the world around them.
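As a sketch of how this update could be computed with NumPy (the variable names and random parameter choices here are mine, not from the paper), the delayed styles can be gathered with fancy indexing:

```python
import numpy as np

# One update step of the model; parameter ranges are illustrative only.
rng = np.random.default_rng(42)
N, T = 50, 100

eps = rng.choice([1, -1], size=N)        # +1 hipster, -1 conformist
J = rng.uniform(0.5, 1.5, size=(N, N))   # influence of person j on person i
tau = rng.integers(1, 10, size=(N, N))   # delay (in days) of that influence
s = np.ones((T, N), dtype=int)           # style history s[t, i]
s[0] = rng.choice([1, -1], size=N)       # random initial styles

def lean(t):
    """x_i(t) = -eps_i * m_i(t) * s_i(t), from each person's delayed view."""
    # seen[i, j] = s_j(t - tau_ij), clipped to day 0 at the start
    seen = s[np.maximum(t - tau, 0), np.arange(N)]
    m = (J * seen).sum(axis=1) / N       # m_i(t), mean impression of the world
    return -eps * m * s[t]

x = lean(0)
print(x.shape)   # one lean value per person: (50,)
```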
But the world is a noisy place. Each person might have other things going on that day, so instead of using this value directly, we can turn it in to a probabilistic statement. Consider the function
$$ \phi(x;\beta) = \frac{1 + \tanh(\beta \cdot x)}{2} $$
We can plot this function quickly:
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh', 'matplotlib')
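Before plotting, note that the switching probability itself is a one-liner; a plain-Python sketch (my own, separate from the notebook's plotting code) makes its limiting behaviour easy to check:

```python
import math

def phi(x, beta):
    """Switching probability: phi(x; beta) = (1 + tanh(beta * x)) / 2."""
    return (1 + math.tanh(beta * x)) / 2

print(phi(0, 1))     # 0.5: a perfectly torn person flips a fair coin
print(phi(10, 5))    # ~1.0: strong pressure to switch, little noise
print(phi(-10, 5))   # ~0.0: strong pressure to keep the current style
```

Here $\beta$ plays the role of an inverse temperature: for large $\beta$ the decision is nearly deterministic, while $\beta \to 0$ gives a coin flip regardless of $x$.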