Before answering the question more or less directly, I'd like to point out that this is a good question that provides an object lesson and opens a foray into the topics of
singular integral equations, analytic continuation and dispersion relations. Here are some references for these more advanced topics: Muskhelishvili, Singular Integral Equations; Courant & Hilbert, Methods of Mathematical Physics, Vol I, Ch 3; Queen & Violini, Dispersion Theory in High Energy Physics; Eden et al., The Analytic S-matrix. There is also a condensed discussion of 'invariant functions' in Schweber, An Intro to Relativistic QFT, Ch 13d.
The quick answer is that, for $m^2 \in\mathbb{R}$, there's no "shortcut." One must
choose a path around the singularities in the denominator. The appropriate choice is governed by the boundary conditions of the problem at hand. The $+i\epsilon$ "trick" (it's not a "trick") simply encodes the boundary conditions relevant for causal propagation of particles and antiparticles in field theory.
We briefly study the analytic form of $G(x-y;m)$ to demonstrate some of these features.
Note, first, that for real values of $p^2$, the singularity in the denominator of the integrand signals the presence of (a) branch point(s). In fact, [Huang,
Quantum Field Theory: From Operators to Path Integrals, p29] the Feynman propagator for the scalar field (your equation) may be explicitly evaluated:\begin{align}G(x-y;m) &= \lim_{\epsilon \to 0} \frac{1}{(2 \pi)^4} \int d^4p \, \frac{e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} \nonumber \\&= \left \{ \begin{matrix}-\frac{1}{4 \pi} \delta(s) + \frac{m}{8 \pi \sqrt{s}} H_1^{(1)}(m \sqrt{s}) & \textrm{ if }\, s \geq 0 \\ -\frac{i m}{ 4 \pi^2 \sqrt{-s}} K_1(m \sqrt{-s}) & \textrm{if }\, s < 0.\end{matrix} \right.\end{align}where $s=(x-y)^2$.
The first-order Hankel function of the first kind $H^{(1)}_1$ has a logarithmic branch point at $x=0$; so does the modified Bessel function of the second kind, $K_1$. (Look at the small $x$ behavior of these functions to see this.)
A branch point indicates that the Cauchy-Riemann conditions have broken down at $x=0$ (or $z=x+iy=0$). And the fact that these singularities are logarithmic is an indication that we have an endpoint singularity [e.g. Eden et al., Ch 2.1]. (To see this, consider $m=0$; then the denominator of the integrand, $p^{2}$, vanishes at the lower limit of integration in $dp^2$.)
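As an illustrative aside (my addition, not part of the original answer, assuming SciPy's `scipy.special.k1`): subtracting the $1/z$ pole from $K_1$ leaves a remainder that grows like $\tfrac{1}{2}\ln z$, which is exactly the logarithmic branch-point behavior described above. The additive constant drops out when two small arguments are compared.

```python
import numpy as np
from scipy.special import k1

# Subtract the 1/z pole of K_1; what remains, divided by z, behaves like
# (1/2)*ln(z/2) + const for small z -- the logarithmic branch point.
def log_part(z):
    return (k1(z) - 1/z) / z

# The constant cancels in the difference, leaving (1/2)*ln(z1/z2):
diff = log_part(1e-3) - log_part(1e-4)
print(abs(diff - 0.5*np.log(10)) < 1e-3)  # -> True
```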
Coming back to the question of boundary conditions, there is a good discussion in Sakurai, Advanced Quantum Mechanics, Ch 4.4 [NB: "East Coast" metric]. From the asymptotic form of the Hankel function in the expression above, you can see that for large values of $s>0$ we have an outgoing wave.
Connecting it back to the original references I cited above, the $+i\epsilon$ form is a version of the Plemelj formula [Muskhelishvili]. And the expression for the propagator is a type of Cauchy integral [Musk.; Eden et al.]. And these notions lead quickly to the topics I mentioned above -- certainly a rich landscape for research. This post imported from StackExchange Physics at 2014-07-13 04:38 (UCT), posted by SE-user MarkWayne
|
No:
Consider the case in which $F = \Bbb Q$, the rationals, and $K = \Bbb R$, the reals; let $\tau$ be any transcendental real number, e.g. we might take $\tau = e$ or $\tau = \pi$. Let $R$ be the ring $\Bbb Q [\tau]$, i.e. polynomial expressions in $\tau$ with rational coefficients. Then $\Bbb Q \subset R \subset \Bbb R$. $R$ is easily seen to be a subring of $\Bbb R$, but it cannot be a field; it does not contain $\tau^{-1}$; for if it did, we would have, for some polynomial $p(x) \in \Bbb Q [x]$,
$p(\tau) = \tau^{-1}$;
but then
$\tau p(\tau) = 1$,
so $\tau$ would satisfy the polynomial equation
$\tau p(\tau) - 1 = 0$,
which would imply that $\tau$ is algebraic over $\Bbb Q$, a contradiction. QED.
|
One might think it possible to reproduce the reasoning that leads to this equation from the Schrödinger equation in any quantum system where the Schrödinger equation holds (i.e. certainly QFT and QED; I don't know about string theory). However, this is actually rather difficult, and it is not possible to obtain the nice form of the continuity equation (in my opinion) apart from some special situations. In addition, I would like to point out that the local conservation equation is a direct consequence of the Schrödinger equation (and of the particular form of "standard" Hamiltonians in non-relativistic QM), and does not add any further insight beyond the Schrödinger equation.
One problem in QFT is that we often do not know the precise form of the interacting Hamiltonian; however, we may suppose that we know it and that it is of the form $H=H_0(\Pi)+V(\Phi)$, where $\Phi$ is the quantum field operator and $\Pi$ its conjugate momentum.
Another problem is that $\Pi$ does not necessarily behave like a derivation operator on the wavefunction $\Psi$, nor $V(\Phi)$ as a multiplicative operator. However, this can be made possible using a "trick" at least in a special case, i.e. when the Hilbert space of the QFT is a Fock space $\Gamma(\mathscr{H})$, where $\mathscr{H}$ is the one-particle (separable) Hilbert space. In fact there is a construction, called Q-space, that unitarily identifies the Fock space with an $L^2(\Omega,d\mu)$ space of functionals $\Psi(\phi)$ on $\Omega$ with Gaussian measure $d\mu(\phi)$. In this space, the Fock space field $\Phi(x)$ acts as multiplication by the function $\phi(x)$, and the momentum $\Pi(x)$ as the functional derivative $-i\partial_{\phi(x)}$.
Now in the Q-space form the Hamiltonian becomes $H=H_0(\partial_\phi)+V(\phi)$, and this is analogous to the usual $L^2(\mathbb{R}^d)$ form of QM, and therefore the conservation equation$$\partial_{\phi(t)} J\Bigl(\Psi(t,\phi),\partial_{\phi(t)}\Psi(t,\phi(t))\Bigr)+\partial_t\Bigl(\Psi(t,\phi)^*\Psi(t,\phi)\Bigr)=0\; ;$$may be recovered (with a suitable current $J$, that in the case of $H_0(\Pi)\simeq \Pi^2$ has the usual form).
|
I have asked this question of many people/professors without getting a sufficient answer: why in QM are Lebesgue spaces of second degree assumed to be the ones that correspond to the Hilbert vector space of state functions? Where does this arise from? And why does the 2-order space assume the following inner product:
$\langle\phi|\psi\rangle =\int\phi^{*}\psi\,dx$
while there are many ways to define an inner product.
In physics books this is always assumed as given and never explained. I also tried to read some abstract math books on these topics and found concepts like "metric weight" that is minimized in such spaces, but even so I don't really understand what is behind that. So why $L_2$? What is special about these spaces? How did physicists come to understand that these are the ones we need to use?
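For concreteness, here is a small numerical sketch of the inner product written above (an illustrative addition, not part of the original question), approximating the integral by a Riemann sum on a grid, with two Gaussian wavefunctions chosen purely for illustration:

```python
import numpy as np

# Numerical sketch of <phi|psi> = ∫ phi* psi dx on a grid.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
phi = np.pi**-0.25 * np.exp(-x**2 / 2)   # normalized Gaussian
psi = phi * np.exp(1j * x)               # same Gaussian with a phase factor

norm = np.sum(np.abs(phi)**2) * dx          # <phi|phi>, should be 1
overlap = np.sum(np.conj(phi) * psi) * dx   # <phi|psi>, should be exp(-1/4)
print(abs(norm - 1) < 1e-6, abs(overlap - np.exp(-0.25)) < 1e-6)  # -> True True
```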
|
Explain logically, not algebraically, why $\dbinom{n}{1}+\dbinom{n}{3}+\dbinom{n}{5}+\cdots = 2^{n-1}$. By logically I mean, some argument by which we can calculate the sum orally.
Note by Vikram Waradpande 6 years, 5 months ago
Flip $n$ coins. Clearly, there are $2^n$ strings of heads and tails possible.
There are $\binom{n}{1}$ strings with exactly 1 head, $\binom{n}{3}$ with 3 heads, and so on.
Observe that the probability of getting an odd number of heads is precisely half (by a variety of arguments like induction, recursion, or plain obviousness). This gives you the desired result.
Only, induction and recursion should not be considered 'logical reasoning' (though of course they are logical); they should be considered 'mathematical reasoning'.
That's quite good thinking. I have an argument that'll give us the answer quickly. All we want is the number of odd selections out of a total of $n$. Fix any object $x_n$ out of the $n$ objects. Now the total number of strings possible for the remaining $n-1$ elements is $2^{n-1}$. Take any possible string of the total strings possible. If the set contains an odd number of elements, keep it as it is; otherwise add $x_n$. We see there's a perfect pairing. So we are done!
Take $n$ balls numbered from 1 to $n$. Take the $n$th ball and put it aside. Split the rest of the $n-1$ balls into two sets, set A and set B. There are $2^{n-1}$ ways to do that, which is our R.H.S. Now take the $n$th ball which we set aside initially and put it into either set A or set B so that the number of balls in set A is odd. So, basically, we have chosen an odd number of balls from $n$ balls in set A, which is the L.H.S. Hence the equation is proved. [EDIT] I should have read the comments first :S, Vikram W. has provided almost the same solution as mine.
Actually people wouldn't have been confused if you said "combinatorial proof" instead of "logical explanation".
We have to find $\sum_{k\ge0} \binom{n}{2k+1}$. Now imagine it like this: suppose you have to choose groups of size $2k+1$ from $n$ students. For each $k$ with $0 \le 2k+1 \le n$ there are $\binom{n}{2k+1}$ groups of size $2k+1$, so there are $\sum_{k \ge 0} \binom{n}{2k+1}$ such groups in total.
Now we do this sum logically. For each of $n-1$ students there is a choice of taking him in the group or not, giving $2^{n-1}$ possibilities. Once these choices are made, the fate of the $n$th student is completely determined so that the final group size is an odd number. Consequently, there are $2^{n-1}$ such committees. $\Box$
Another idea: Use the fact that nC0+nC1+...+nCn=2^n and the expansion of 0=(1+(-1))^n by the Binomial Theorem...
We know that $\binom{N}{K}$ is part of the binomial theorem: $(x+y)^{N}= \binom{N}{0}x^N+\binom{N}{1}x^{N-1}y+\dots+\binom{N}{N}y^{N}$. Let $x$ and $y$ equal 1 and we see that $2^{N}=\binom{N}{0}+\binom{N}{1}+\dots+\binom{N}{N}$. By halving both sides we see that $\binom{N}{1}+\binom{N}{3}+\binom{N}{5}+\dots=2^{N-1}$.
[Note: In order to type $\binom{N}{1} + \binom{N}{3}$, you need to use { N \choose 1} + { N \choose 3} as the LaTeX code, otherwise it doesn't know how to write it up. E.g. it could appear as ${ N \choose { 1 + {N\choose 3} } }$, or any of the other variants - Calvin]
Alternatively, it is known that the sum of $\binom{N}{0}$ through $\binom{N}{N}$ is $2^{N}$. We also know that $\binom{N}{K}$ is the same thing as $\binom{N}{N-K}$. So we conclude that $\binom{N}{0}=\binom{N}{N}$ and $\binom{N}{1}=\binom{N}{N-1}$; thus the stated sum is half of $2^{N}$.
If $n$ is even, e.g. $n = 6$, then $\binom{n}{1} = \binom{n}{5}$, but both are in the set, so the pairing breaks down. This idea does work for odd $n$ though.
that is, by binomial theorem, ((1+1)^n-(1-1)^n)/2=2^(n-1)
Not 'logical reasoning', algebraic reasoning.
$\binom{n}{2k-1}=\binom{n-1}{2k-1}+\binom{n-1}{2k-2}$, and as we apply this to all terms, the sum becomes $\binom{n-1}{0}+\binom{n-1}{1}+\dots+\binom{n-1}{n-1}$, which is $2^{n-1}$.
i think the (1-1)^n explanation makes perfect sense, but if you want something "logical" then consider the construction of pascal's triangle. When a row goes to the next row, each term on it is added to two consecutive terms in the next row, and as of course one of these is odd and one is even, the even and odd terms have the same sum. There is also the committee argument which this is pretty much equivalent to, and I'm sure several other clever bijections. What's notable about the generating function is that it generalizes easily, say by changing to (2-1)^n. These are both special cases of roots of unity filters, which are very intuitive.
This can also be used to prove why there exist $2^n$ subsets of a set with $n$ elements.
Yeah. We consider $n$ objects. Assume you want to make a team out of them consisting of any number of objects. Each object can be given a label 'Yes' or 'No' based on whether it is selected or not. So the total number of possible teams is $\underbrace{2\cdot2\cdot2\cdots2}_{n \text{ times}}=2^n$.
$((1+x)^n-(1-x)^n)/2=2^{n-1}$ for $x=1$, where $n$ is a positive whole number. The sum of the combinations of $n$ items taking an odd number of items (which is less than or equal to the number of items) at a time is $2^{n-1}$. Example: $n=4$: $\binom{4}{1}+\binom{4}{3}=8=2^{4-1}$, and so on.
We know that the total number of subsets of a set containing $n$ elements is $2^n$. Again, we know that the total number of odd-element subsets equals the total number of even-element subsets, so $\binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \dots + \binom{n}{n} = 2^n$, or $2\left(\binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \dots\right) = 2^n$, or $\binom{n}{1} + \binom{n}{3} + \binom{n}{5} + \dots = 2^n / 2 = 2^{n-1}$.
we know that the total number of subsets of a set containing n elements is 2^n.
Also, the number of subsets containing an odd number of elements equals the number of subsets containing an even number of elements.
So we have ,
n C 1 + n C 3 + n C 5 + n C 7 +....= n C 0 + n C 2 + n C 4 + n C 6 +...
again,
n C 0 + n C 1 + n C 2 + n C 3 + ... + n C n = 2^n, or 2(n C 1 + n C 3 + n C 5 + n C 7 + ...) = 2^n, or n C 1 + n C 3 + n C 5 + n C 7 + ... = 2^(n-1).
This solution is half 'mathematical', half 'logical'. From the binomial expansion, the sum of all $\binom{n}{r}$ is $2^{n}$. Now the sum given is equal to the sum that has been removed, since $\binom{n}{r} = \binom{n}{n-r}$ (i.e. $\binom{n}{1} = \binom{n}{n-1}$). Hence the sum is equal to $\frac{1}{2} \cdot 2^{n} = 2^{n-1}$, as required.
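None of this replaces the combinatorial arguments above, but a quick brute-force check of the identity is easy to run (an illustrative addition to the thread):

```python
from math import comb

# Sanity check of C(n,1) + C(n,3) + C(n,5) + ... = 2^(n-1).
for n in range(1, 16):
    odd_sum = sum(comb(n, k) for k in range(1, n + 1, 2))
    assert odd_sum == 2**(n - 1)
print("identity holds for n = 1..15")
```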
|
M. Schechter introduced an operational quantity characterizing strictly singular operators as follows:
For an operator $T:X\rightarrow Y$, we set $$\tau(T)=\sup_{M}\inf_{x\in S_{M}}\|Tx\|,$$ where $M$ ranges over the infinite-dimensional closed subspaces of $X$ and $S_M$ denotes the unit sphere of $M$.
If $A$ and $B$ are two nonempty subsets of a Banach space $X$, we set $$d(A,B)=\inf\{\|a-b\|:a\in A,b\in B\},$$$$\widehat{d}(A,B)=\sup\{d(a,B):a\in A\}.$$ Thus, $d(A,B)$ is the ordinary distance between $A$ and $B$, and $\widehat{d}(A,B)$ is the non-symmetrized Hausdorff distance from $A$ to $B$.
Let $A$ be a bounded subset of a Banach space $X$. The Hausdorff measure of non-compactness of $A$ is defined by $\chi(A)=\inf\{\widehat{d}(A,F):F\subset X$ finite subset $\}$.
Then $\chi(A)=0$ if and only if $A$ is relatively norm compact. For an operator $T: X\rightarrow Y$, $\chi(T)$ will denote $\chi(TB_{X})$.
I prove the following result:
Theorem. Let $T:X\rightarrow Y$ be an operator. Then $$\tau(T)\leq 2\chi(T).$$
I have two questions about this theorem.
Question 1. Is this theorem new? I am not sure that this theorem has already appeared somewhere.
Question 2. Is the constant 2 in the theorem optimal?
Thank you!
|
It says that enthalpies of combustion are always negative as these reactions are exothermic. The heat of combustion tables list positive values for all substances. Why do these values differ in sign?
Looking at Wikipedia for the definitions:
The
standard enthalpy of combustion is the enthalpy change when one mole of a reactant completely burns in excess oxygen under standard thermodynamic conditions (although experimental values are usually obtained under different conditions and subsequently adjusted).
The heat of combustion $(\Delta H_c^\circ)$ is the
energy released as heat when a compound undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon reacting with oxygen to form carbon dioxide, water and heat.
On first inspection the two seem the same; however, there is a slight difference in terminology and wording.
1.) What are standard conditions?
Referring back to Wikipedia:
Standard conditions for temperature and pressure are standard sets of conditions for experimental measurements established to allow comparisons to be made between different sets of data.
In chemistry, IUPAC established standard temperature and pressure (informally abbreviated as STP) as a temperature of 273.15 K (0 °C, 32 °F) and an absolute pressure of 100 kPa (14.504 psi, 0.987 atm, 1 bar)
As you know, the standard enthalpy of combustion can be referred to as $\Delta H ^{\circ} _{\mathrm{total}}$, whose units are kJ. The molar enthalpy of combustion is the energy released per mole, in kJ/mol. How is the standard enthalpy of combustion found?
Using the individual enthalpies of formation, and Hess's law, we can calculate the standard enthalpy of combustion, or $\Delta H ^{\circ} _{\mathrm{total}}$.
$\Delta H_{reaction}^\ominus = \sum \Delta H_{\mathrm f \,(products)}^{\ominus} - \sum \Delta H_{\mathrm f \,(reactants)}^{\ominus}$
The standard enthalpies of formation (the negative values) are located here: https://en.wikipedia.org/wiki/Standard_enthalpy_change_of_formation_(data_table), and generally also in an appendix of your textbook, along with the standard entropy of formation and the Gibbs free energy of formation, as all three are state functions.
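As a worked sketch of the Hess's-law calculation just described (my addition; the formation enthalpies below are typical tabulated values in kJ/mol, so check them against your own textbook's table), consider the combustion of methane, CH4 + 2 O2 -> CO2 + 2 H2O(l):

```python
# Hess's law: dH_reaction = sum(dHf products) - sum(dHf reactants).
# Typical tabulated formation enthalpies in kJ/mol (illustrative values).
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}

products = dHf["CO2"] + 2 * dHf["H2O(l)"]
reactants = dHf["CH4"] + 2 * dHf["O2"]
dH_combustion = products - reactants
print(round(dH_combustion, 1))  # -> -890.3 (negative, i.e. exothermic)
```

The negative sign is the point of the question: the enthalpy change of the reaction is negative, while heating-value tables report the same magnitude as a positive "heat released".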
2.) Why do the signs differ in the values from your textbook compared to the table in Wikipedia?
For the heat of combustion, it is generally expressed as a higher heating value (HHV), lower heating value (LHV), or gross heating value (GHV). As the Wikipedia article states, the heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.
In summary, the sign difference that I see is due to different terminology, they are specifically referring to units of HHV, LHV, and GHV and expressing those in the table below. Although, heat of combustion may also be calculated as the difference between the heat of formation $\big( \Delta H_f^\circ \big)$ of the products and reactants just as the standard enthalpy of combustion, it seems to me that the heat of combustion is specialized in a way to be applied to fuels, and as such, different from the standard enthalpy of combustion that we are accustomed to.
Perhaps someone could clarify a little more on this?
The lower heating value is the measured, or empirical heat of combustion. It is lower because some of the internal energy (enthalpy) that is released as heat goes into the production of water in vapor form.
Since standard enthalpy values require compounds in their standard states (water in liquid form), the higher heating value must be calculated. This is done by adjusting for the heat of vaporization of water for the given mass. Typically this is done by adding the adjustment quantity to the measured value.
|
Let $K$ be a convex body and let $\| \cdot \|_{K}$ be the corresponding Minkowski functional $$\| x \|_{K} = \inf\{\lambda > 0 : x \in \lambda K \}.$$
Let us consider the following map $f: K \rightarrow \partial \mathring K$ such that $f(x) = \nabla \| x \|_{K}$. Here $\mathring K$ stands for the polar set, i.e. $$\mathring K = \{ y : \sup_{x \in K}{\langle x, y \rangle} \leq 1 \},$$ and $\partial \mathring K$ for its boundary.
It is pointed out that $f$ is an analogue of the Gauss map $v_{K}: \partial K \rightarrow S^{n-1}$, which maps a boundary point to its outer unit normal on the unit sphere. Are there any easy ways to recover this geometrically?
It looks as if there is a direct relationship between the fact that the subgradients of convex functions are precisely the outer normal vectors of supporting hyperplanes of sublevel sets and the statement above, but I can't see any fast way to figure it out.
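An illustrative numerical check (my addition, for a case where everything is explicit): for the ellipsoid $K = \{x : x^T A x \le 1\}$ one has $\|x\|_K = \sqrt{x^T A x}$, so $\nabla\|x\|_K = Ax/\sqrt{x^T A x}$, and the polar body is $\{y : y^T A^{-1} y \le 1\}$. The gradient should land on the polar boundary, i.e. satisfy $y^T A^{-1} y = 1$:

```python
import numpy as np

# Ellipsoid K = {x : x^T A x <= 1}; check that grad ||x||_K lies on the
# boundary of the polar body {y : y^T A^{-1} y <= 1}.
A = np.diag([1.0, 4.0, 9.0])
x = np.array([0.3, -0.2, 0.5])

norm_K = np.sqrt(x @ A @ x)
grad = A @ x / norm_K                              # gradient of ||x||_K
on_polar_boundary = grad @ np.linalg.inv(A) @ grad  # should equal 1
print(np.isclose(on_polar_boundary, 1.0))  # -> True
```

Algebraically this is immediate: $(Ax)^T A^{-1} (Ax)/(x^TAx) = 1$, which is the subgradient/supporting-hyperplane relationship the question alludes to, in the smooth case.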
|
How would you prove this identity using combinatorics? Any hints or advice?
For all positive integers $n>1$,
$\sum_{k=0}^{n} \frac{1}{k+1} {n\choose k} (-1)^{k+1}=\frac{-1}{n+1} $
We obtain \begin{align*} \color{blue}{\sum_{k=0}^n\frac{1}{k+1}\binom{n}{k}(-1)^{k+1}} &=\frac{1}{n+1}\sum_{k=0}^n\binom{n+1}{k+1}(-1)^{k+1}\tag{1}\\ &=\frac{1}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}(-1)^{k}\tag{2}\\ &=\frac{1}{n+1}(1-1)^{n+1}-\frac{1}{n+1}\tag{3}\\ &\color{blue}{=-\frac{1}{n+1}} \end{align*}
Comment:
In (1) we use the binomial identity $\binom{n+1}{k+1}=\frac{n+1}{k+1}\binom{n}{k}$.
In (2) we shift the index to start from $k=1$.
In (3) we apply the binomial summation formula and subtract $\frac{1}{n+1}$ as compensation for the lower limit starting with $1$ instead of $0$.
We have: $f(x) = \displaystyle \sum_{k=0}^n \dfrac{1}{k+1}\binom{n}{k}x^{k+1}\implies f'(x) = (1+x)^n\implies f(x) = f(0) + \displaystyle \int_{0}^x f'(t)\,dt = 0 + \displaystyle \int_{0}^x(1+t)^n\,dt = \displaystyle \int_{1}^{1+x}u^n\,du= \left.\dfrac{u^{n+1}}{n+1}\right|_{u=1}^{u=x+1}= \dfrac{(x+1)^{n+1}}{n+1}- \dfrac{1}{n+1}\implies f(-1) = -\dfrac{1}{n+1}$.
HINT 1: Notice that $$\frac{1}{k+1}\binom{n}{k}=\frac{1}{n+1}\binom{n+1}{k+1}.$$
HINT 2: Do you know the binomial theorem?
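As a complement to the proofs above, here is a quick exact check of the identity with rational arithmetic (an illustrative addition, not part of the original answers):

```python
from fractions import Fraction
from math import comb

# Left-hand side of the identity, computed exactly.
def lhs(n):
    return sum(Fraction((-1)**(k + 1) * comb(n, k), k + 1)
               for k in range(n + 1))

for n in range(2, 12):
    assert lhs(n) == Fraction(-1, n + 1)
print(lhs(4))  # -> -1/5
```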
|
Algebra, esp. ring theory, is about the relationship between addition and multiplication. One concept of exceptional importance is the relation of being a $k$th root of a number (with $k\in \mathbb{N}$ not considered as an element of the ring):
$x$ is a $k$th root of $y$ when $\underbrace{x\cdot x\cdot \dots \cdot x}_{k \text{ times}} = y$.
An example of a proposition concerning roots is this:
In $\mathbb{C}$ the sum of all $k$th roots of $1$ is $0$.
Now I wonder why another concept – the concept of being part of a number, which can be defined in perfect analogy to roots – has not been deemed worth a name of its own and correspondingly is not found explicitly in any relevant theorem or proof (as far as I can see):
$x$ is a $k$th part of $y$ when $\underbrace{x + x + \dots + x}_{k \text{ times}} = y$.
As we write $x^k$ for $\underbrace{x\cdot x\cdot \dots \cdot x}_{k \text{ times}}$ we may write $k\times x$ for $\underbrace{x + x + \dots + x}_{k \text{ times}}$, related to but not to be confused with $k \cdot x$ when $k$ is an element of the ring.
Compare the definitions of being root, part, and divisor:
$x$ is a root of $y$ when there is a $k$ with $x^k = y$.
$x$ is a part of $y$ when there is a $k$ with $k \times x = y$.
$x$ is a divisor of $y$ when there is a $z$ with $z \cdot x = y$.
Note, that the concept of part somehow builds a bridge between roots and divisors.
The concept of being part of a number has one great appearance: In the definition of the characteristic of a ring.
The characteristic of a ring is the smallest number $k$ such that $1$ is a $k$th part of $0$.
But after this appearance the concept of a part steps back behind the curtain and seems not to be needed anymore.
My questions:
Do I overlook something? Is an equivalent concept of being-part-of important per se and used in propositions and proofs of ring theory and algebra (but in disguise and under another name)?
If not so: How can this be understood? Why is the concept not so important per se?
If you find these questions unclear and too unspecific, maybe you can answer this one:
In which rings is being part and being divisor equivalent, i.e. $x$ is part of $y$ iff $x$ is divisor of $y$?
|
The basic point is the following:
If an element commutes with more than half of the other elements, it already commutes with all elements.
This is basically Lagrange's theorem: let $C_R(x) = \{r \in R \mid rx = xr\}$ be the centralizer of $x$. We assume $|C_R(x)| > \frac{1}{2} |R|$. $C_R(x)$ is a subgroup of $R$ (under addition) and thus its order divides $|R|$. However, by our assumption $\frac{|R|}{|C_R(x)|} < 2$, so $C_R(x)$ already has to be the whole of $R$. Intuitively, $C_R(x)$ is big enough to cover $R$: take an element $r \in R$ and look at $C_R(x) + r$. This set has the same number of elements as $C_R(x)$, and since there are more than $\frac{1}{2} |R|$ such elements, these two sets cannot be disjoint. But then they must actually be equal!
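The same Lagrange observation holds in any finite group, and it is easy to illustrate by brute force (an added sketch, not part of the argument) in the smallest nonabelian group $S_3$, with permutations represented as tuples:

```python
from itertools import permutations

# In a finite group, an element whose centralizer contains more than half
# of the group must be central. Brute-force check in S_3.
def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))
for x in S3:
    centralizer = [g for g in S3 if compose(g, x) == compose(x, g)]
    if len(centralizer) > len(S3) / 2:
        assert len(centralizer) == len(S3)  # x commutes with everything
print("checked all elements of S_3")
```

In $S_3$ the centralizer orders are 6 (identity), 2 (transpositions), and 3 (3-cycles), so only the identity exceeds half the group, and it is indeed central.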
Now if more than half of the elements commute with more than half of the elements, then more than half of the elements commute with every element. Put another way: Every element commutes with more than half of the elements of $R$ and thus -- by the above -- every element commutes with every element; $R$ is commutative.
Now why do more than half of the elements commute with more than half of the elements? Now the idempotents come into play: Let $T = \{r \in R | r^2=r\}$ be the idempotents. Then $|T| > \frac{3}{4} |R|$ by assumption. Now fix $x \in T$. We will show that $x$ commutes with a special subset of $R$ and then count that subset to see that it has more than $\frac{1}{2} |R|$ elements.
The idea to get to this set is the following: If $x$ and $y$ and $x + y$ are idempotent, we get $(x + y)^2 = x^2 + xy + yx + y^2 = x + y$, so $xy = -yx$. For $x=y$ this says $x=-x$ or $2x = 0$. Let's concentrate on this case for a bit: Since we have a lot of idempotents, the "probability" is high for $x + x$ or equivalently $-x$ to also be idempotent. (If $-x$ is idempotent, we have $-x = (-x)^2=x^2=x$, thus $2x = x + x$ which is idempotent; if $x + x$ is idempotent, we have $x = -x$ idempotent by the above.) How many elements does $T \cap (-T)$ have? Well, by inclusion-exclusion for example, more than half of $|R|$. With a similar argument to the above (Lagrange), and with the above we now see that $T \cap (-T)$ is a subset of the subgroup $\{r \in R | 2r = 0 \}$ and thus this subgroup exhausts $R$. $R$ has characteristic 2!
Now the only thing that is left to ensure is that for our $x \in T$ there are enough idempotents $y$ such that $x + y$ is also idempotent, because then -- as we saw -- $xy = -yx = yx$. So we count the set $T \cap (T - x)$ or equivalently $T\cap (T + x)$. This is fairly easy using inclusion-exclusion again: since we know $|T + x| = |T| > \frac{3}{4} |R|$ and $|T \cup (x + T)| \le |R|$, we get $|T \cap (x + T)| > \frac{1}{2} |R|$ as we wanted!
I hope this didn't become too convoluted; this theorem is a corollary of Theorem 4 of this paper.
I tried to make its application to this specific scenario a bit more transparent.
|
Last edited: January 26th 2018
Splines are a type of data interpolation, a method of (re)constructing a function between a given set of data points. Interpolation can be used to represent complicated and computationally demanding functions by simpler ones, e.g. polynomials. Then, using a table of a few function evaluations, one can easily approximate the true function with high accuracy.
In spline interpolation the data is interpolated by several low-degree polynomials. This differs from polynomial interpolation, in which the data is interpolated by a single polynomial of high order. For a general discussion on polynomial interpolation we refer you to our notebook on polynomial interpolation. The simplest example of spline interpolation is linear splines, where the data points are simply connected by straight lines. We are going to discuss interpolation by cubic splines, which interpolates the data using cubic polynomials with continuous first and second derivatives. We will create an algorithm and some functions for computing a cubic spline.
We start by importing needed packages and setting common figure parameters.
import numpy as np
import matplotlib.pyplot as plt
import scipy.sparse as sp
import scipy.linalg as la
%matplotlib inline

# Set some figure parameters
newparams = {'figure.figsize': (15, 7), 'axes.grid': False,
             'lines.markersize': 10, 'lines.linewidth': 2,
             'font.size': 15, 'mathtext.fontset': 'stix',
             'font.family': 'STIXGeneral'}
plt.rcParams.update(newparams)
Assume that we are given four data points: $\{(0, 0), (1, -1), (2, 2), (3, 0)\}$. The cubic spline interpolating these points is $$ S(x) = \begin{cases} -\frac{12}{5}x + \frac{7}{5}x^3, & 0\leq x < 1,\\ -1 + \frac{9}{5}(x - 1) + \frac{21}{5}(x-1)^2 - 3(x-1)^3, & 1 \leq x < 2,\\ 2 + \frac{6}{5}(x - 2) -\frac{24}{5}(x-2)^2 + \frac{8}{5}(x-2)^3, & 2 \leq x < 3.\\ \end{cases} $$ Let's plot the data points, linear spline and cubic spline!
n = 200
x1 = np.linspace(0, 1, n)
x2 = np.linspace(1, 2, n)
x3 = np.linspace(2, 3, n)

# Cubic spline, one piece per interval
S1 = -12/5*x1 + 7/5*x1**3
S2 = -1 + 9/5*(x2 - 1) + 21/5*(x2 - 1)**2 - 3*(x2 - 1)**3
S3 = 2 + 6/5*(x3 - 2) - 24/5*(x3 - 2)**2 + 8/5*(x3 - 2)**3
plt.plot(np.concatenate([x1, x2, x3]), np.concatenate([S1, S2, S3]),
         label="Cubic spline")

# Linear spline
plt.plot([0, 1, 2, 3], [0, -1, 2, 0], "--", label="Linear spline")

# Data points
plt.plot([0, 1, 2, 3], [0, -1, 2, 0], "o", label="Data points")
plt.legend()
plt.show()
Exercise: Check that the cubic spline above has continuous first and second derivatives.
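One way to approach this exercise numerically (an added sketch, using the hand-computed derivatives of the piecewise formulas above) is to evaluate $S'$ and $S''$ from both sides of the interior knots $x=1$ and $x=2$:

```python
import numpy as np

# Derivatives of the three pieces of S(x), computed by hand from above.
S1p  = lambda x: -12/5 + 21/5*x**2               # S_1'
S2p  = lambda x: 9/5 + 42/5*(x - 1) - 9*(x - 1)**2
S3p  = lambda x: 6/5 - 48/5*(x - 2) + 24/5*(x - 2)**2
S1pp = lambda x: 42/5*x                          # S_1''
S2pp = lambda x: 42/5 - 18*(x - 1)
S3pp = lambda x: -48/5 + 48/5*(x - 2)

print(np.isclose(S1p(1), S2p(1)), np.isclose(S2p(2), S3p(2)))      # -> True True
print(np.isclose(S1pp(1), S2pp(1)), np.isclose(S2pp(2), S3pp(2)))  # -> True True
```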
A general cubic spline $S(x)$ interpolating the $n$ data points $\{(x_1,y_1), (x_2, y_2),..., (x_n, y_n)\}$ can be written as\begin{equation} S(x) = \begin{cases} S_1(x) &= y_1 + b_1(x - x_1) + c_1(x-x_1)^2 + d_1(x-x_1)^3, & \text{for } x\in[x_1, x_2],\\ S_2(x) &= y_2 + b_2(x - x_2) + c_2(x-x_2)^2 + d_2(x-x_2)^3, & \text{for } x\in[x_2, x_3],\\ &\vdots&\\ S_{n-1} (x) &= y_{n-1} + b_{n-1}(x - x_{n-1}) + c_{n-1}(x-x_{n-1})^2 + d_{n-1}(x-x_{n-1})^3, & \text{for } x\in[x_{n-1}, x_n],\\ \end{cases} \label{eq:spline} \end{equation}
for some constants $b_i, c_i, d_i$, $i=1, ..., n-1$. As mentioned in the introduction, we demand that the spline is continuous and has continuous first and second derivatives. This gives the following properties: [1]
1. $S_i(x_i)=y_i$ and $S_i(x_{i+1})=y_{i+1}$ for $i=1,...,n-1$,
2. $S_{i-1}'(x_i)=S_{i}'(x_i)$ for $i=2,...,n-1$,
3. $S_{i-1}''(x_i)=S_{i}''(x_i)$ for $i=2,...,n-1$.
The three properties make sure that the spline is continuous and smooth.
Note that the total number of conditions imposed by the properties above is $3n-5$, while the total number of coefficients $b_i, c_i, d_i$ we need to compute is $3(n-1)$. Hence, we need two additional conditions to make the spline $S(x)$ unique. This is achieved through endpoint conditions.
There are several choices of endpoint conditions (see e.g. [1]). We will be considering
natural cubic splines,
4a. $S''_1(x_1)= 0$ and $S''_{n-1}(x_n)=0$,
and
not-a-knot cubic splines
4b. $S_1'''(x_2)=S_2'''(x_2), \; S_{n-2}'''(x_{n-1})=S_{n-1}'''(x_{n-1})$.
Exercise: Which endpoint condition is used in the example above?
From property 1, 2 and 3 we obtain\begin{equation} y_{i+1} = y_{i}+b_{i}(x_{i+1}-x_i) + c_i(x_{i+1}-x_i)^2 + d_i(x_{i+1}-x_i)^3, \quad i=1,...,n-1, \label{eq:prop1} \end{equation}\begin{equation} 0 = b_i + 2c_i(x_{i+1}-x_i) + 3d_i(x_{i+1}-x_i)^2-b_{i+1}, \quad i=1,...,n-2, \label{eq:prop2} \end{equation}
and\begin{equation} 0 = c_i+3d_i(x_{i+1}-x_i)-c_{i+1}, \quad i=1,...,n-2, \label{eq:prop3} \end{equation}
respectively. The derivation is straightforward and left as an exercise for the reader. If we solve these equations, we obtain the constants $b_i, c_i, d_i$ and thus the cubic spline. To simplify the notation, we define $\Delta x_i=x_{i+1}-x_i$ and $\Delta y_i = y_{i+1}-y_i$. By using equations \eqref{eq:prop1} and \eqref{eq:prop3} we obtain the following expressions for $b_i$ and $d_i$ in terms of the $c$-coefficients:\begin{align} d_i &= \frac{c_{i+1}-c_i}{3\Delta x_i}, \label{eq:d}\\ b_i &= \frac{\Delta y_i}{\Delta x_i}-\frac{1}{3}\Delta x_i (2 c_i + c_{i+1}).\label{eq:b} \end{align}
If we insert this into equation \eqref{eq:prop2} we obtain$$\Delta x_ic_i + 2(\Delta x_i + \Delta x_{i+1})c_{i+1}+\Delta x_{i+1}c_{i+2} = 3\left(\frac{\Delta y_{i+1}}{\Delta x_{i+1}}-\frac{\Delta y_{i}}{\Delta x_{i}}\right), \quad i=1,...,n-2,$$
which gives $n-2$ equations for the $n$ unknowns $c_1,..., c_n$. The natural spline endpoint condition supplies the two remaining equations, $c_1=c_n=0$, and the resulting system can be written as a tridiagonal matrix equation.
The algorithm for finding the spline is now quite apparent. We start by constructing the matrix equation, then solve it to find $c_i$ and in turn compute $b_i$ and $d_i$ via the equations \eqref{eq:d} and \eqref{eq:b}.
The first and last rows of the matrix are altered by the not-a-knot endpoint conditions. Note that property 4b implies $d_1=d_2$ and $d_{n-2}=d_{n-1}$. If we insert this into equation \eqref{eq:d} for $d_i$, we obtain$$\Delta x_2 c_1 -(\Delta x_1 + \Delta x_2) c_2 + \Delta x_1 c_3 = 0,$$$$\Delta x_{n-1} c_{n-2} -(\Delta x_{n-2} + \Delta x_{n-1})c_{n-1}+\Delta x_{n-2}c_n = 0.$$The first row of the matrix with the not-a-knot end conditions thus becomes $$(\Delta x_2\;\; -(\Delta x_1 + \Delta x_2)\;\; \Delta x_1\;\; 0\;\; 0\;\; ...).$$ Likewise, the last row becomes $$(0\;\; ... \;\; 0 \;\; \Delta x_{n-1}\;\; -(\Delta x_{n-2} + \Delta x_{n-1})\;\; \Delta x_{n-2}).$$
We now proceed to create a function that computes the cubic spline that interpolates the points $\{(x_1,y_1), (x_2, y_2),..., (x_n, y_n)\}$. Note that the matrix equation above is tridiagonal, and it can therefore be stored as a $3\times n$ array and solved efficiently by using scipy.linalg.solve_banded. In the not-a-knot case the matrix becomes a banded matrix with two upper and two lower diagonals. Exercise: Derive equations \eqref{eq:prop1}, \eqref{eq:prop2} and \eqref{eq:prop3} from properties 1-3.
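As a small, self-contained illustration of the diagonal ordered form expected by scipy.linalg.solve_banded (the matrix entries here are arbitrary):

```python
import numpy as np
from scipy import linalg as la

# Tridiagonal system A x = rhs, with A given in full form for comparison
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
rhs = np.array([1.0, 2.0, 3.0, 4.0])

# Diagonal ordered form: row 0 holds the upper diagonal (shifted right),
# row 1 the main diagonal, row 2 the lower diagonal (shifted left)
ab = np.zeros((3, 4))
ab[0, 1:] = np.diag(A, 1)    # upper diagonal
ab[1, :] = np.diag(A)        # main diagonal
ab[2, :-1] = np.diag(A, -1)  # lower diagonal

x = la.solve_banded((1, 1), ab, rhs)
print(np.allclose(A @ x, rhs))  # True
```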
import numpy as np
from scipy import linalg as la

def cubic_spline_coeffs(x, y, endpoint="natural"):
    """
    Computes the coefficients of the cubic spline that interpolates the
    points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n).

    Parameters:
        x: array_like, shape (n>2,). x-values of the points being
           interpolated. Values must be real and in strictly increasing order.
        y: array_like, shape (n>2,). y-values of the points being
           interpolated. Values must be real.
        endpoint: "natural" or "not-a-knot". Endpoint condition.

    Returns:
        b, c, d: arrays of shape (n-1,), (n,) and (n-1,) holding the
        spline coefficients.
    """
    x = np.asarray(x)
    y = np.asarray(y)
    n = len(x)
    dx = np.diff(x)
    dy = np.diff(y)

    # Right-hand-side vector
    rhs = np.zeros(n)
    rhs[1:-1] = 3*(dy[1:]/dx[1:] - dy[:-1]/dx[:-1])

    # Build the matrix in diagonal ordered form (see scipy.linalg.solve_banded)
    if endpoint == "natural":
        matrix = np.zeros((3, n))
        bands = (1, 1)
        matrix[1, 1:-1] = 2*(dx[:-1] + dx[1:])  # Diagonal
        matrix[1, 0] = matrix[1, -1] = 1        # Enforces c_1 = c_n = 0
        matrix[0, 2:] = dx[1:]                  # Upper diagonal
        matrix[2, :-2] = dx[:-1]                # Lower diagonal
    if endpoint == "not-a-knot":
        matrix = np.zeros((5, n))
        bands = (2, 2)
        matrix[2, 1:-1] = 2*(dx[:-1] + dx[1:])  # Diagonal
        matrix[1, 2:] = dx[1:]                  # Upper diagonal
        matrix[3, :-2] = dx[:-1]                # Lower diagonal
        # First row
        matrix[2, 0] = dx[1]
        matrix[1, 1] = -dx[0] - dx[1]
        matrix[0, 2] = dx[0]
        # Last row
        matrix[2, -1] = dx[-2]
        matrix[3, -2] = -dx[-2] - dx[-1]
        matrix[4, -3] = dx[-1]

    # Solve the banded system for the c-coefficients
    c = la.solve_banded(bands, matrix, rhs, overwrite_ab=True,
                        overwrite_b=True, check_finite=False)

    # Find the remaining coefficients
    d = np.diff(c)/(3*dx)
    b = dy/dx - dx*(2*c[:-1] + c[1:])/3
    return b, c, d
We also need a function that can evaluate the spline given the coefficients.
def cubic_spline_eval(x, xdata, ydata, b, c, d):
    """
    Evaluates the cubic spline that interpolates {(xdata, ydata)} at x,
    with coefficients b, c and d.

    Parameters:
        x: array_like, shape (m,). x-values (in increasing order) at which
           the spline is evaluated.
        xdata, ydata: array_like, shape (n,). The interpolated data points.
        b, c, d: array_like, shapes (n-1,), (n,) and (n-1,).
                 Coefficients of the spline.

    Returns:
        array, shape (m,). Function evaluation of the spline.
    """
    x = np.asarray(x)
    y = np.zeros(len(x))
    m = 0
    for i in range(len(xdata) - 1):
        # Number of evaluation points that fall before the end of interval i
        n = np.sum(x < xdata[i + 1]) - m
        xx = x[m:m + n] - xdata[i]
        y[m:m + n] = ydata[i] + b[i]*xx + c[i]*xx**2 + d[i]*xx**3
        m = m + n
    # Points beyond the last node use the last polynomial
    xx = x[m:] - xdata[-2]
    y[m:] = ydata[-2] + b[-1]*xx + c[-2]*xx**2 + d[-1]*xx**3
    return y
We are now ready to find the cubic spline of a general set of data points. To get some basis for comparison, we will interpolate some points from $\sin(x)$.
import matplotlib.pyplot as plt

def func(x):
    return np.sin(x)

xdata = np.asarray([0, 2, 4, 6, 8, 10])*np.pi/5
ydata = func(xdata)

x = np.linspace(xdata[0] - 2, xdata[-1] + 2, 200)
y = func(x)

b, c, d = cubic_spline_coeffs(xdata, ydata, "natural")
ya = cubic_spline_eval(x, xdata, ydata, b, c, d)

b, c, d = cubic_spline_coeffs(xdata, ydata, "not-a-knot")
yb = cubic_spline_eval(x, xdata, ydata, b, c, d)

plt.figure()
plt.plot(x, y, "--", label=r"$\sin(x)$")
plt.plot(x, ya, label="Natural cubic spline")
plt.plot(x, yb, label="Not-a-knot cubic spline")
plt.plot(xdata, ydata, 'o', label="Data points")
plt.xlim(x[0], x[-1])
plt.ylim(np.min(y)*1.1, np.max(y)*1.1)
plt.legend()
plt.show()
Note that we have evaluated the cubic splines outside the domain of the data points. In the not-a-knot cubic spline, the outer polynomials are simply extended: the polynomial at $x<x_1$ is the same as for $x_1<x<x_2$. The natural cubic spline, on the other hand, defines a polynomial with "opposite curvature" outside the data points. That is, if the spline curves away from the axis at $x_1<x<x_2$ it will curve towards the axis at $x<x_1$.
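As a sanity check on the endpoint conditions, one can compare against SciPy's reference implementation (a sketch, assuming scipy.interpolate is available): the natural spline should reproduce the data exactly and have vanishing curvature at both endpoints.

```python
import numpy as np
from scipy.interpolate import CubicSpline

xdata = np.asarray([0, 2, 4, 6, 8, 10])*np.pi/5
ydata = np.sin(xdata)

cs = CubicSpline(xdata, ydata, bc_type="natural")

# The spline reproduces the data points exactly ...
print(np.allclose(cs(xdata), ydata))            # True
# ... and the natural condition forces S'' = 0 at both endpoints
print(np.allclose(cs(xdata[[0, -1]], 2), 0.0))  # True
```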
It is also possible to interpolate parameterized curves with splines. This is done by performing the interpolation on both the x-axis and y-axis as functions of the curve parameter. Consider the following example:
# Define some data points
xdata = [-0.5, -1.0, -0.5, 0.2, 1.5, 2.0, 1.0]
ydata = [ 5.0,  3.7,  1.0, 1.0, -0.5, 1.5, 4.0]
n = len(xdata)

# Parameter values
t = np.linspace(0, 1, 100)

# Curve interpolation using uniformly distributed parameter nodes
ti = np.linspace(0, 1, n)

# x-axis
b, c, d = cubic_spline_coeffs(ti, xdata, "not-a-knot")
Px = cubic_spline_eval(t, ti, xdata, b, c, d)

# y-axis
b, c, d = cubic_spline_coeffs(ti, ydata, "not-a-knot")
Py = cubic_spline_eval(t, ti, ydata, b, c, d)

plt.figure()
plt.plot(Px, Py, 'g', label='Uniformly distributed nodes')
plt.plot(xdata, ydata, 'r*', label='Data points')
plt.title('Polynomial curve interpolation of a set of data points')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Exercise: Use the Chebyshev nodes instead of the uniformly distributed parameter nodes. Hint: see the polynomial interpolation notebook.
There are several benefits of using cubic splines as opposed to, e.g., a high order polynomial interpolation or higher order splines. In our notebook on polynomial interpolation we showed that large oscillations (and thus large errors) may occur when approximating a function using a high order polynomial. This is called
Runge's phenomenon. We showed that the error due to this oscillation is minimized when using so-called Chebyshev nodes as base for the interpolation. Another workaround is to use low order polynomial splines.
You may be wondering why we don't use higher order splines instead of cubic splines. Approximating a function using more function evaluations and higher order polynomials must always be better, right? Wrong! One may of course argue that higher order polynomial splines give smoother curves, since we demand that higher order derivatives are continuous. This may, however, lead to larger errors due to Runge's phenomenon. In addition, each polynomial piece of a higher order spline depends on more data points, so a change in a single data point produces a larger global change in the curve.
[1] Sauer, T.: Numerical Analysis international edition, second edition, Pearson 2014
[2] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes, the Art of Scientific Computing, 3rd edition, Cambridge University Press 2007
|
Difference between revisions of "De Bruijn-Newman constant"
Latest revision as of 17:37, 30 April 2019
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The
Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:
- Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
- Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-vanishing whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
- Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].

[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
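As a quick numerical illustration of this bound (a sketch; it uses the standard fact that exactly ten nontrivial zeta zeros have imaginary part below 50, so that [math]N_0(100)=10[/math]):

```python
import math

T = 100.0
# Main term of the Riemann-von Mangoldt formula for N_0(T)
main = T/(4*math.pi)*math.log(T/(4*math.pi)) - T/(4*math.pi) - 7/8
# Backlund's explicit error bound
bound = 0.137*math.log(T/2) + 0.443*math.log(math.log(T/2)) + 4.350

# N_0(100) counts zeta zeros up to height T/2 = 50; there are 10 of them
N0 = 10
print(abs(N0 - main) < bound)  # True
```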
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math]; thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
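A minimal sketch of these dynamics for just two zeroes, where the ODE can be checked against an exact solution: zeroes starting at [math]\pm 1[/math] repel and evolve as [math]\pm\sqrt{1+2t}[/math].

```python
import math

# Two real zeroes starting at -1 and +1, evolved by forward Euler
z = [-1.0, 1.0]
dt = 1e-4
t = 0.0
while t < 0.5:
    # ODE from the text: dz_j/dt = -sum_{k != j} 2/(z_k - z_j)
    force = [sum(-2.0/(z[k] - z[j]) for k in range(2) if k != j)
             for j in range(2)]
    z = [zj + dt*fj for zj, fj in zip(z, force)]
    t += dt

# Exact solution: zeroes at +/- sqrt(1 + 2t)
print(abs(z[1] - math.sqrt(1 + 2*t)) < 1e-3)  # True
```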
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads
- Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
- Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
- Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
- Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
- Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
- Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
- Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
- Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
- Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
- Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
- Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.
- Polymath15, eleventh thread: Writing up the results, and exploring negative t, Terence Tao, Dec 28, 2018.
- Effective approximation of heat flow evolution of the Riemann xi function, and a new upper bound for the de Bruijn-Newman constant, Terence Tao, Apr 30, 2019.

Other blog posts and online discussion
- Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
- The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
- Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
- A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

Writeup
Here are the Polymath15 grant acknowledgments.
Test problem

Zero-free regions
See Zero-free regions.
Wikipedia and other references

Bibliography
- [A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009.
- [B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609-630.
- [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
- [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
- [G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height, 2004.
- [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306.
- [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
- [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467.
- [P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
- [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
- [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function, second edition, The Clarendon Press, Oxford University Press, New York, 1986.
|
Assumptions
There is no function in (all) target operating systems that gives you the maximum/minimum values of the accelerometer
You do not want to spend a lot of money and time to build a database of these values on your own
Suggested solutions
I assume there is no way of getting around a kind of calibration procedure. Here are two thoughts:
#1
The simplest one that I can think of is that you let people drop their phone from a certain height, let's say 1 meter. Have them do this:
Hit a calibrate switch. This starts a count down that starts some kind of user feedback (e.g. a certain color on the screen, a tone being played) for 5 seconds.
Within these 5 seconds the phone is to be held still, approximately 1 meter above a soft ground that is safe to let a phone fall onto (e.g. blanket, bed, ...)
When the 5 seconds are over and the feedback stops (or changes, e.g. different color on the screen, or the tone stops playing), the phone is to be dropped on the soft ground.
Since you know the approximate height of the phone, you can select a part of the accelerometer data where the phone certainly is in free fall, and which is therefore free of vibrations from the user's hand or from the phone touching the soft surface. All you probably need is about 10 centimeters of plain free fall. Rotations are not a problem if you intend to use $\bar{a} = \sqrt{a_x^2 + a_y^2 + a_z^2}$. If needed, you could also let them repeat the process to gain a higher SNR from averaging. For your application, it is certainly sufficient to assume that the measured signal is due to the Earth's gravitational acceleration of $g \approx 9.81\,\frac{m}{s^2}$. If you need slightly higher precision, you could use the geolocation of the device and a theoretical gravity approximation model.
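For the latter, a sketch of one such model (the Somigliana normal-gravity formula for the WGS84 ellipsoid; the constants below are the standard WGS84 values, quoted as an assumption, so verify them against the official definition if precision matters):

```python
import math

def normal_gravity(lat_deg):
    """Approximate normal gravity (m/s^2) at sea level for a geodetic
    latitude, using the Somigliana formula with WGS84 constants."""
    s2 = math.sin(math.radians(lat_deg))**2
    return (9.7803253359 * (1 + 0.00193185265241 * s2)
            / math.sqrt(1 - 0.00669437999013 * s2))

print(f"{normal_gravity(0):.3f}")   # 9.780 (equator)
print(f"{normal_gravity(90):.3f}")  # 9.832 (poles)
```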
#2
If the app is used in certain contexts and only those values are important, you could simply introduce a learning phase. Let the user wear the device/use it in the intended way, but only for learning. You could:
let the user rate the acceleration during the training phase and use this as a ground truth for your model
just collect the data and let the app figure out the levels by itself, assuming that after let's say... 13 training sessions all intensities were measured
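A sketch of the second option, assuming the app bins measured intensities into three levels using percentiles of the collected training data (the synthetic data, thresholds and level names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the acceleration magnitudes collected during training
training = rng.gamma(shape=2.0, scale=1.5, size=1000)

# Derive level boundaries from the empirical distribution
low, high = np.percentile(training, [33, 66])

def intensity_level(a):
    """Map a measured magnitude to a discrete intensity level."""
    if a < low:
        return "light"
    elif a < high:
        return "moderate"
    return "vigorous"

print(intensity_level(np.min(training)))  # light
print(intensity_level(np.max(training)))  # vigorous
```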
Further ideas
In both cases, you could ask the user to share this data with the other users of your app: after a successful training session, ask them if they would share the phone model/accelerometer model (which you can probably retrieve from the OS) and the calibration values. This way, the app could look for calibration values online. If they are available, they could be downloaded and used without the inconvenience to the user of doing the calibration. Additionally, with such a database, you could process and thereby refine the calibration data.
|
I have created a channel model in GNU Radio (a modified version of the default GNU Radio Channel Block). Its performance under AWGN (without timing, phase or frequency offsets) aligns properly with theory (for BPSK). I'm currently evaluating the effect of carrier frequency offset on BER by means of the flowgraph below. The block "debug_bpsk_phase_recovery" is a modified version of the default GNU Radio Costas loop (provides a way of configuring the loop damping factor). The curves below were obtained by running the simulation with loop bandwidths $B_{n} = 0.1$ and 0.05 and two different data rates (250k and 500k). The frequency offset is $0.01R_{s}$.
Out of curiosity, I'm trying to find out how much frequency offset the Costas loop can handle. To my understanding, the maximum frequency offset is related to the PLL pull-in range. From theory, the PLL pull-in range is $\left( 2\pi\sqrt{2}\,\zeta\right) B_{n}$, where $\zeta$ is the damping factor, which I set to 1.0 for critical damping (some texts use $\sqrt{2}/2$). This evaluates to $0.88\times R_{s}$ and $0.44\times R_{s}$ for $B_{n} = 0.1$ and 0.05 respectively ($R_{s}$ is the symbol rate).
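For reference, the pull-in arithmetic above can be restated numerically (illustrative Python, assuming $B_n$ is already normalised to the symbol rate so the result is a fraction of $R_s$):

```python
import math

def pull_in_range(Bn, zeta=1.0):
    """Theoretical PLL pull-in range, 2*pi*sqrt(2)*zeta*Bn, expressed
    as a fraction of the symbol rate when Bn is normalised to it."""
    return 2 * math.pi * math.sqrt(2) * zeta * Bn

print(pull_in_range(0.1), pull_in_range(0.05))  # ≈ 0.889 and 0.444
```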
However, as seen in the table below (SNR = 6 dB), the loop fails to handle offsets above $0.04R_{s}$ ($B_n = 0.1$) and $0.03R_{s}$ ($B_n = 0.05$), which is much less than theory predicts. Any ideas why this is happening?
As I understand (from the Mengali book), coarse carrier frequency recovery (like the FLL block in GNU Radio) can reduce offsets to within 10% of the symbol rate. I therefore tried to include the FLL block. This didn't help at all. As a matter of fact, the FLL introduced BER degradation even for small frequency offsets (~0.01) that the Costas loop was able to handle on its own. Why does the FLL show this behavior?
The book also suggested the use of fine carrier recovery (tracking?) (clock-aided, decision-directed) before phase recovery. Any suggestion on algorithms that I could use for this are welcome.
Thanks in Advance, M.
|
Difference between revisions of "De Bruijn-Newman constant"
Latest revision as of 17:37, 30 April 2019
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
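The definitions above are straightforward to evaluate numerically. As an illustrative check (Python/numpy; not part of the wiki page), truncating the series for [math]\Phi[/math] and applying the trapezoidal rule reproduces the expected behaviour of [math]H_0[/math], in particular a sign change between [math]z=28[/math] and [math]z=29[/math], i.e. at twice the first non-trivial zero 14.1347... of the zeta function:

```python
import numpy as np

def Phi(u, n_terms=40):
    """The super-exponentially decaying function Phi defined above."""
    n = np.arange(1, n_terms + 1)[:, None]
    return np.sum(
        (2 * np.pi**2 * n**4 * np.exp(9 * u) - 3 * np.pi * n**2 * np.exp(5 * u))
        * np.exp(-np.pi * n**2 * np.exp(4 * u)),
        axis=0,
    )

def H(t, z, u_max=3.0, n_u=60001):
    """H_t(z) for real t, z by trapezoidal quadrature; the integrand
    is utterly negligible beyond u_max."""
    u, du = np.linspace(0.0, u_max, n_u, retstep=True)
    g = np.exp(t * u**2) * Phi(u) * np.cos(z * u)
    return du * (g.sum() - 0.5 * (g[0] + g[-1]))

# H_0 changes sign at z = 2 * 14.1347..., twice the first zeta zero:
print(H(0, 28) > 0, H(0, 29) < 0)
```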
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:
Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].
[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads
Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.
Polymath15, eleventh thread: Writing up the results, and exploring negative t, Terence Tao, Dec 28, 2018.
Effective approximation of heat flow evolution of the Riemann xi function, and a new upper bound for the de Bruijn-Newman constant, Terence Tao, Apr 30, 2019.
Other blog posts and online discussion
Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.
Code and data
Writeup
Here are the Polymath15 grant acknowledgments.
Test problem
Zero-free regions
See Zero-free regions.
Wikipedia and other references
Bibliography
[A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009.
[B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609–630.
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
|
I am referring to the following three reactions. Two of these reactions are imaginary ($\ce{M}$ and $\ce{X}$ are imaginary).
$$\begin{alignat}{3} \ce{X- + 2e- \;&<=> X^3-} \qquad &&E^\circ_{\ce{X^3-}/\ce{X-}}=0.9\ \mathrm{V}\qquad &&&\text{(a)}\\ \ce{2H+ + MO^2+ + e- \;&<=> M^3+ + H2O} \qquad &&E^\circ_{\ce{MO^2+}/\ce{M^3+}}=0.5\ \mathrm{V} \qquad &&&\text{(b)}\\ \ce{8H+ + MnO4- + 5e- \;&<=> Mn^2+ + 4H2O} \qquad &&E^\circ_{\ce{MnO4-}/\ce{Mn^2+}}=1.5\ \mathrm{V} \qquad &&&\text{(c)} \end{alignat}$$
Now consider the reaction between a solution having equimolar amounts of $\ce{M^3+}$ and $\ce{X^3-}$ and a $\ce{KMnO4}$ solution. (Note that all the species are in their respective standard states, so there is no need for Nernst equation.)
To find out the possible reactions, the following steps are taken.
Equations $\text{(a)}$ and $\text{(c)}$;
$$ \begin{align} &{-}\text{(a)} \times 5 + \text{(c)} \times 2 : \\ \ce{&16H+ +5X^3- + 2MnO4- -> 2Mn^2+ + 5X- + 8H2O} \qquad E^\circ_1\qquad\text{(1)} \\ \\ &E^\circ_1 = 1.5\ \mathrm{V} + (-0.9\ \mathrm{V}) = 0.6\ \mathrm{V} \\ &\Delta G^\circ_1 = -nE^\circ_1F = -10 \times 0.6 \times F = -6F \end{align} $$
Equations $\text{(b)}$ and $\text{(c)}$;
$$ \begin{align} &{-}\text{(b)} \times 5 + \text{(c)}: \\ \ce{&5M^3+ + H2O + MnO4- -> Mn^2+ + 5MO^2+ + 2H+} \qquad E^\circ_2 \qquad(2)\\ \\ &E^\circ_2 = 1.5\ \mathrm{V} + (-0.5\ \mathrm{V}) = 1.0\ \mathrm{V} \\ &\Delta G^\circ_2 = -nE^\circ_2F = -5 \times 1.0 \times F = -5F \end{align}$$
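As a quick numerical restatement of the two $\Delta G^\circ$ values above (illustrative Python; $F$ is the Faraday constant):

```python
F = 96485.0  # C/mol, Faraday constant

# Reaction (1): n = 10 electrons transferred, E° = 0.6 V
dG1 = -10 * 0.6 * F   # J/mol, i.e. -6F

# Reaction (2): n = 5 electrons transferred, E° = 1.0 V
dG2 = -5 * 1.0 * F    # J/mol, i.e. -5F

print(dG1 / F, dG2 / F)  # -6.0 -5.0
```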
My questions:
Are there any errors in above calculations? If so, please suggest corrections. According to $E^\circ$ values, reaction $(2)$ is more feasible than reaction $(1)$. Am I correct? According to $\Delta G^\circ$ values, reaction $(1)$ is more feasible than reaction $(2)$. Am I correct? Is it useless to use $E$ values to predict the feasibilities of redox reactions? Should we always resort to $\Delta G$? Is it possible for predictions based on $E$ values and $\Delta G$ values to contradict? If not how can these results be explained?
|
Ex.11.1 Q1 Constructions Solution - NCERT Maths Class 10 Question
Draw a line segment of length \(7.6 \,\rm{cm}\) and divide it in the ratio \(5:8\). Measure the two parts.
Text Solution
What is known?
Length of line segment and the ratio to be divided.
What is unknown?
Construction
Reasoning: Draw the line segment of the given length. Then draw another ray which makes an acute angle with the given line. Divide this ray into \(m + n\) parts, where \(m\) and \(n\) are the terms of the given ratio. The basic proportionality theorem states that, "If a straight line is drawn parallel to a side of a triangle, then it divides the other two sides proportionally."
Steps:
(i) Draw a line segment \(AB = 7.6\,\rm{cm}\).
(ii) Draw ray \(AX,\) making an acute angle with \(AB.\)
(iii) Locate \(13\) \((= 5 + 8)\) points \(A_1, A_2, \ldots, A_{13}\) on \(AX\) so that \(AA_1 = A_1A_2 = \ldots = A_{12}A_{13}.\)
(iv) Join \(BA_{13}.\)
(v) Through \(A_5\) (since we need \(5\) parts to \(8\) parts) draw \(CA_5\) parallel to \(BA_{13}\), where \(C\) lies on \(AB.\)
Now \(AC:CB = 5:8\)
We find \(AC = 2.9 \,\rm{cm}\) and \(CB = 4.7 \,\rm{cm}\)
Proof:
\(CA_5\) is parallel to \(BA_{13}\)
By the Basic Proportionality theorem, in \(\Delta AA_{13}B\)
\(\begin{align}\frac{AC}{CB} = \frac{AA_5}{A_5A_{13}} = \frac{5}{8}\end{align}\) (By Construction)
Thus, \(C\) divides \(AB\) in the ratio \(5:8\).
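As a quick arithmetic check of the measured lengths (illustrative Python, not part of the construction):

```python
total = 7.6           # cm, length of AB
m, n = 5, 8           # required ratio
AC = total * m / (m + n)
CB = total * n / (m + n)
print(round(AC, 1), round(CB, 1))  # 2.9 4.7
```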
|
In some Yang-Mills theory with gauge group $G$, the gauge fields $A_{\mu}^{a}$ transform as $$A_{\mu}^{a} \to A_{\mu}^{a} \pm \partial_{\mu}\theta^{a} \pm f^{abc}A_{\mu}^{b}\theta^{c}$$ $$A_{\mu}^{a} \to A_{\mu}^{a} \pm \left(\partial_{\mu}\theta^{a}-A_{\mu}^{b}f^{bac}\theta^{c}\right)$$ $$A_{\mu}^{a} \to A_{\mu}^{a} \pm \left(\partial_{\mu}\theta^{a}-iA_{\mu}^{b}(T^{b}_{\text{adj}})^{ac}\theta^{c}\right),$$
where $T^{a}_{\text{adj}}$ is the adjoint representation of the gauge group $G$ and the gauge parameters $\theta^{a}$ are seen to transform in the adjoint representation of the gauge group $G$.
Why does this mean that the gauge fields $A_{\mu}^{a}$ transform in the adjoint representation?
Should the transformation of the gauge fields $A_{\mu}^{a}$ in the adjoint representation not be given by
$$A_{\mu}^{a} \to A_{\mu}^{a} \pm i\theta^{b}(T^{b}_{\text{adj}})^{ac}A_{\mu}^{c}?$$
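As a concrete numerical check (illustrative Python/numpy, using $G = SU(2)$ where $f^{abc} = \epsilon^{abc}$), the matrices $(T^{a}_{\text{adj}})^{bc} = -if^{abc}$ appearing above indeed satisfy the Lie algebra $[T^{a}, T^{b}] = if^{abc}T^{c}$:

```python
import numpy as np

# su(2) structure constants: f^{abc} = Levi-Civita epsilon_{abc}
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0

# Adjoint-representation generators: (T^a_adj)^{bc} = -i f^{abc}
T = -1j * f

# Verify the algebra [T^a, T^b] = i f^{abc} T^c
ok = all(
    np.allclose(T[a] @ T[b] - T[b] @ T[a],
                1j * np.einsum('c,cij->ij', f[a, b], T))
    for a in range(3) for b in range(3)
)
print(ok)  # True
```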
|
This post is part 3 of the "clustering" series:
Generate datasets to understand some clustering algorithms behavior K-means is not all about sunshines and rainbows Gaussian mixture models: k-means on steroids How many red Christmas baubles on the tree? Chaining effect in clustering Animate intermediate results of your algorithm Accuracy: from classification to clustering evaluation
The k-means algorithm assumes the data is generated by a mixture of Gaussians, each having the same proportion and variance, and no covariance. These assumptions can be relaxed with a more general algorithm: the CEM algorithm applied to a mixture of Gaussians.
To illustrate this, we will first apply a more general clustering algorithm than k-means on nine synthetic datasets previously used in this series. The internal criterion and a variation of it will be discussed, along with the algorithms that optimize them. The complexity of the algorithms will also be examined: it explains why using the k-means algorithm is often justified instead of a more general one.
Gaussian mixture models behavior on generated datasets
Let’s first apply a Gaussian mixture model on some datasets. For this purpose, the Rmixmod library (I recommend the article on Rmixmod in the Journal of Statistical Software for further reading) is used with its default parameters (only the number of clusters is specified).
library("Rmixmod")

processMixmod <- function(data) {
  d = data %>% select(x, y)
  x = mixmodCluster(d, 2)
  x["bestResult"]["partition"]
}

dataMixmod = data %>%
  group_by(dataset) %>%
  do(data.frame(., cluster = factor(processMixmod(.))))

scatterPlotWithLabels(dataMixmod)
One can see from the plots that these models fit the datasets better than k-means, which only fits the first dataset well. In particular, the first five datasets are generated by a mixture of Gaussians, hence the Gaussian mixture models work well. The other datasets, except for the two classes of uniform data, are not fitted correctly since they are generated differently (incidentally, the last dataset exhibits no classes at all).
The internal criterion optimized by the algorithm used, namely EM, explains why it works for the mixture of Gaussians.
EM and CEM algorithms
The EM algorithm is an iterative algorithm which alternates between two steps: an "expectation" step and a "maximisation" step, hence its name. The CEM algorithm inserts another step of "classification" between the two steps of the EM algorithm. The two algorithms also differ slightly in their criterion.
EM algorithm
With the EM algorithm, we seek to maximize the log likelihood:

\(L(\theta; x_1, \ldots, x_n) = \sum_{i=1}^{n} \log \left( \sum_{k=1}^{K} \pi_k \, \varphi_k(x_i, \alpha_k) \right)\)

where:
\(K\) is the number of clusters; \(\theta = (\pi, \alpha)\), where \(\pi\) is a vector of proportions and \(\alpha\) a vector of component parameters; \(\varphi_k(x_i, \alpha_k)\) is the density of an observation \(x_i\) under the \(k^{th}\) component.
Maximizing the likelihood means that we want to maximize the plausibility of the model parameter values, given the observed data. The EM algorithm applied to a mixture of Gaussians tries to find the parameters of the mixture (the proportions) and of the Gaussians (the means and the covariance matrices) that best fit the data. The cluster assignations are then found a posteriori: the points generated by the same Gaussian are to be classified in the same cluster.
CEM algorithm
The CEM algorithm considers the cluster assignations as parameters to be learned. The criterion optimized by CEM is the completed log likelihood:

\(L_C(\theta, z; x_1, \ldots, x_n) = \sum_{i=1}^{n} \sum_{k=1}^{K} z_{ik} \log \left( \pi_k \, \varphi_k(x_i, \alpha_k) \right)\)

with \(z_{ik} = 1\) if the point \(x_i\) is assigned to the cluster \(k\) and \(0\) otherwise.
The CEM algorithm is often preferred to EM for clustering since it aims to learn the cluster assignations directly, not afterwards. It often converges faster than EM in practice. It is also a generalization of k-means.
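To make the alternation concrete, here is a minimal, illustrative CEM loop for a one-dimensional Gaussian mixture (in Python rather than R; a sketch only, not the Rmixmod implementation, with no handling of empty clusters and a fixed iteration count):

```python
import numpy as np

def cem_gmm_1d(x, K=2, n_iter=25):
    """Minimal CEM for a 1-D Gaussian mixture: an E step computing
    posteriors, a C step of hard assignment, and an M step on the
    resulting partition."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))  # deterministic init
    var = np.full(K, np.var(x))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E step: unnormalised posterior of each component for each point
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        post = pi * dens
        # C step: hard assignment z_i = argmax_k posterior
        z = post.argmax(axis=1)
        # M step: re-estimate proportions, means, variances per class
        for k in range(K):
            xk = x[z == k]
            pi[k] = len(xk) / n
            mu[k] = xk.mean()
            var[k] = xk.var()
    return z, pi, mu, var
```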
CEM algorithm as an extension of k-means
The CEM algorithm applied to a mixture of Gaussians with some assumptions on the Gaussians (no covariance and equal variance) and on the mixture (equal proportions) is actually the same as the k-means algorithm. With these assumptions, the criteria are the same (see the previous post on k-means in this series on clustering), and it can be shown that the steps are also the same.
Let's see this in practice when fitting CEM with a mixture of Gaussians with diagonal variance matrices and equal proportions, volumes and shapes (see page 25 of the reference manual of Rmixmod):
library("Rmixmod")

processMixmod2 <- function(data) {
  d = data %>% select(x, y)
  x = mixmodCluster(d, 2,
        models = mixmodGaussianModel(listModels = c("Gaussian_p_L_I")),
        strategy = new("Strategy", algo = "CEM"))
  x["bestResult"]["partition"]
}

dataMixmod = data %>%
  group_by(dataset) %>%
  do(data.frame(., cluster = factor(processMixmod2(.))))

scatterPlotWithLabels(dataMixmod)
We obtain the same results as with k-means.
It seems that, since Gaussian mixture models are more general, there is no reason to use k-means. However, k-means has fewer parameters to estimate and is thus faster and requires less memory. Let's explore this.
Number of parameters to estimate
When fitting a mixture of Gaussians, we have to estimate the proportions of the mixture, and the parameters of each Gaussian:
Since the proportions sum to one, we only have to estimate \(K-1\) proportions, with \(K\) the number of Gaussians. We also have to estimate a mean for each Gaussian, thus \(Kd\) parameters with \(d\) the number of dimensions. The covariance matrix is a \(d \times d\) square matrix which is symmetric (the entries below the main diagonal are the same as the entries above it), so we have \(\frac{d(d+1)}{2}\) values to estimate for each Gaussian: the \(d\) variances on the diagonal plus the \(\frac{d(d-1)}{2}\) covariances below it.
Overall, for the general form of the mixture of Gaussians (without any constraints), we have \(K-1 + Kd + \frac{Kd(d+1)}{2}\) parameters to estimate, with:
\(K\) the number of Gaussians (or classes);
\(d\) the number of variables.
If we set constraints on the model, we can lower the number of parameters to estimate. For instance, if we assume that the covariance matrices are a common scalar times the identity matrix (no covariances, one variance shared by all components) and that the proportions of the mixture are equal, we only have \(Kd + 1\) parameters to estimate. This corresponds to the assumptions of k-means.
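The two counts can be written down directly; a small, illustrative Python helper mirroring the formulas (general symmetric-covariance mixture versus the k-means assumptions):

```python
def gmm_param_count(K, d):
    """Free parameters of a general Gaussian mixture in d dimensions:
    K-1 proportions, K means, K symmetric covariance matrices."""
    return (K - 1) + K * d + K * d * (d + 1) // 2

def kmeans_param_count(K, d):
    """k-means assumptions: equal proportions, K means,
    one shared spherical variance."""
    return K * d + 1

print(gmm_param_count(2, 2), kmeans_param_count(2, 2))  # 11 5
```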
To sum up
Gaussian mixture models with the CEM algorithm can be seen as a generalisation of k-means: whereas k-means fits data generated by a mixture of Gaussians with equal proportions and a shared spherical covariance, Gaussian mixture models allow other assumptions on the proportions of the mixture and on the covariance matrices of the Gaussians.
Setting constraints on the parameters of the Gaussians and of the mixture reduces the number of parameters to estimate, and thus the runtime of the algorithm.
Several algorithms can fit a mixture of Gaussians. For instance, the EM algorithm estimates the parameters of the mixture, and the classes are found a posteriori; the CEM algorithm instead estimates the parameters and the cluster assignations simultaneously, allowing a boost in execution time.
|
The nozzle is the part of a rocket that limits the exhaust velocity. (It's also the part that converts the pressure and temperature of the expanding propellant into velocity.)
The speed of sound in the exhaust likewise regulates the expansion of the propellant gas.
For rockets using nozzles, the exhaust velocity can be expressed as
$$V=\sqrt{\frac{2 \gamma R_{{}^{\circ}} T_{{}^{\circ}}}{(\gamma -1) \mu }\left(1-\left(\frac{P_e}{P_c}\right)^{\frac{\gamma -1}{\gamma }}\right)}$$
(in the form from Hash's self-answer elsewhere; k is used instead of $\gamma$ and M instead of $\mu$ in Basics of Spaceflight.)
Restrictions on the exhaust temperature $T$ are implied by the temperature that the nozzle can withstand. $\gamma$, $\mu$, and $R$ have to do with the choice of propellant. $\gamma$, $R$, and $T$ also directly determine the speed of sound in the gas.
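As a rough numerical sketch of the expression above, plugging in illustrative hydrogen/oxygen-like numbers (all assumed for illustration: $\gamma \approx 1.2$, a chamber temperature of 3500 K, an exhaust mean molar mass of 13 g/mol, and $P_e/P_c = 1/70$) lands in the familiar few-km/s range:

```python
import math

def exhaust_velocity(gamma, T_c, mu, pe_over_pc, R_univ=8.314):
    """Ideal-nozzle exhaust velocity from the expression above.
    T_c in K, mu in kg/mol, R_univ in J/(mol K)."""
    return math.sqrt(
        (2 * gamma * R_univ * T_c) / ((gamma - 1) * mu)
        * (1 - pe_over_pc ** ((gamma - 1) / gamma))
    )

# Illustrative (assumed) numbers, roughly LOX/LH2-like:
v = exhaust_velocity(gamma=1.2, T_c=3500.0, mu=0.013, pe_over_pc=1 / 70)
print(round(v))  # about 3.7 km/s
```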
As in Tom Spilker's answer, ion engines avoid that limitation because they don't rely on the gas's expansion to provide the exhaust velocity. The directed application of electromagnetic fields to an ionized exhaust stream allows higher velocities to be imparted.
|
Show that the first excited state of the 8-Be nucleus fits the experimental value of the rotational band $E(2^+) = 92 keV$ (this is the first excited state, which is 92 keV above the ground state). To do so, model 8-Be as two alpha particles $4.5 fm$ apart rotating about their center of mass.
The method I used is:
1) Get the moment of inertia of the system.
For two particles rotating (in the same plane) about their common center of mass, one gets:
$$I = m_1 r_1^2 + m_2 r_2^2 = 2m_{\alpha} (d/2)^2 = 6.728 \times 10^{-56}kgm^2$$
where:
$$m_{\alpha} = 6.645 \times 10^{-27} kg$$
$$d = 4.5 \times 10^{-15}m$$
I've checked this value and it's correct.
2) Verify the experimental value for the rotational band by applying the rotational energy formula:
$$E = J (J + 1) \frac{\hbar^2}{2I}$$
OK let's first get what's called "characteristic rotational energy": $\frac{\hbar^2}{2I}$
$$\frac{\hbar^2}{2I} = 8.256 \times 10^{-14} J= 0.516 MeV$$
where:
$$\hbar = 1.054 \times 10^{-34}Js$$
$$1 eV = 1.6 \times 10^{-19}J$$
The $2^+$ band has $J=2$ associated with it, so it's just a matter of plugging numbers in:
$$E = J (J + 1) \frac{\hbar^2}{2I} = 3096 keV$$
Which is way off $92keV$.
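For reference, the arithmetic above reproduced numerically (illustrative Python):

```python
m_alpha = 6.645e-27   # kg, alpha particle mass
d = 4.5e-15           # m, separation of the two alphas
hbar = 1.054e-34      # J s
MeV = 1.602e-13       # J

I = 2 * m_alpha * (d / 2) ** 2        # moment of inertia about the CM
E_char = hbar**2 / (2 * I) / MeV      # characteristic rotational energy, MeV
E_2plus = 2 * (2 + 1) * E_char        # J = 2 level, MeV

print(I, E_char, E_2plus)  # ≈ 6.73e-56 kg m^2, ≈ 0.516 MeV, ≈ 3.09 MeV
```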
Where has my method gone wrong? Maybe the original question provided an incorrect experimental result.
PS/ For anyone interested in rotational energy related to nucleus this is a good video to check: https://www.youtube.com/watch?v=rwdBnwznt3s
|
In many textbooks, the derivation of the energy levels in a hydrogen atom starts from the basic Hamiltonian $H = \frac{\mathbf{p}^2}{2m} - \frac{e^2}{4\pi\epsilon_0 r}$ and then adds relativistic fine structure or hyperfine structure as corrections to these basic energy levels. The effect of an electromagnetic field is then often included by modifying the Hamiltonian to $H = \frac{\left(\mathbf{p} - e \mathbf{A}\right)^2}{2m} + e \phi$ for electromagnetic potentials $\mathbf{A}$ and $\phi$.
What I find strange about this approach is that from the very beginning, the interaction between proton and electron is electrostatic (thus requiring the presence of an electromagnetic field), so why is the quantised electromagnetic field only added later and not right at the start? Also, the energy of the spins of the particles in an external magnetic field does not seem to be included in this treatment.
What I wonder is the following: If we do not start from the basic Hamiltonian and add all terms later, but set out to derive from first principles the energy levels for a proton and a (bound) electron in a quantised electromagnetic field (generated by proton and electron, but also due to external radiation that might be present), what is the Hamiltonian that describes this system? The proton and electron should be considered non-relativistic, so should be regarded as quantised in the sense that observables should be operators, but without the need for quantum field theory to describe proton and electron. How could relativistic effects then be included perturbatively in this general treatment?
EDIT: Just to clarify, the situation that I am interested in is the following: Two charged, spin-carrying non-relativistic particles (proton and electron) move under the influence of an internal electric field arising from their charges and an external field due to radiation. Both particles should be included in the Hamiltonian, so the proton will also have a coupling to any electromagnetic fields. Questions: How is the internal field treated as a quantised electromagnetic field, leading to a potential energy? Which terms in the Hamiltonian does the interaction with the external field lead to? What is therefore the most general Hamiltonian for a non-relativistic proton-electron system moving in a quantised electromagnetic field?
Any references to textbooks or journal articles with a detailed treatment of the most general and (within the limits of neglecting relativistic effects) most exact Hamiltonian describing such a system would be very useful.
|
Ex.6.5 Q11 Triangles Solution - NCERT Maths Class 10 Question
An aeroplane leaves an airport and flies due north at a speed of \(1000\; \rm{}km\) per hour. At the same time, another aeroplane leaves the same airport and flies due west at a speed of \(1200\; \rm{}km\) per hour. How far apart will be the two planes after \(\,1\frac{1}{2}\) hours?
Diagram
Text Solution Reasoning:
To find the distances travelled by the aeroplanes, we use
\(\text{distance}\,\,\text{=}\,\,\text{speed}\,\,\text{ }\!\!\times\!\!\text{ }\,\,\text{time}\)
In a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides.
Steps:
\(AB\) is the distance travelled by the aeroplane flying north
\[\begin{align} A B &=1000\, \mathrm{km} / \mathrm{hr} \times 1 \frac{1}{2} \mathrm{hr} \\ &=1000 \times \frac{3}{2} \mathrm{km} \\ {AB} &=1500\, \mathrm{km} \end{align}\]
\(BC\) is the distance travelled by the other aeroplane, flying west
\[\begin{align} BC&=1200\,\,\text{km/hr}\times 1\frac{1}{2}\,\,\text{hr} \\ &=1200\times \frac{3}{2}\,\text{km} \\ BC&=1800\,\,\text{km} \\ \end{align}\]
Now, in \(\Delta ABC,\ \angle ABC={{90}^{\circ }}\)
\[\begin{align} A{{C}^{2}}&=A{{B}^{2}}+B{{C}^{2}}\,\,\left( \text{Pythagoras theorem} \right) \\ & ={{(1500)}^{2}}+{{(1800)}^{2}} \\ & =2250000+3240000 \\ A{{C}^{2}}&=5490000 \\ AC&=\sqrt{5490000} \\ &=300\sqrt{61}\,\,\text{km} \end{align}\]
The distance between the two planes after \(1\frac{1}{2}\) hours is \(300\sqrt{61}\,\,\text{km}.\)
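A one-line numerical check of the result (a sketch, not part of the NCERT solution):

```python
import math

ab = 1000 * 1.5    # km travelled north
bc = 1200 * 1.5    # km travelled west
ac = math.hypot(ab, bc)    # Pythagoras theorem
print(ac)                  # ~2343.07 km
assert math.isclose(ac, 300 * math.sqrt(61))
```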
|
Ex.7.4 Q4 Coordinate Geometry Solution - NCERT Maths Class 10 Question
The two opposite vertices of a square are \(\begin{align}\left( { - 1,\,2} \right){\text{ }}{\text{ and}}\left( {3,\,2} \right).\end{align}\) Find the coordinates of the other two vertices.
Text Solution Reasoning: What is known?
The \(x\) and \(y\) co-ordinates of the two opposite vertices of a square.
What is unknown?
The coordinates of the other two vertices.
Steps:
From the figure:
Let \(ABCD\) be a square with known opposite vertices \(A\,\left( { - 1,\,2} \right)\) and \(C\,\left( {3,\,2} \right)\). Let \(B\left( {x_1, y_1} \right)\) be one unknown vertex.
We know that the sides of a square are equal to each other.
\(\begin{align}∴ {{AB = BC}}\end{align}\)
Using the distance formula to find the lengths \(AB\) and \(BC\),
\(\begin{align} \sqrt {{{(x_1 + 1)}^2} + {{(y_1 - 2)}^2}} &= \sqrt {{{(x_1 - 3)}^2} + {{(y_1 - 2)}^2}} \end{align}\)
\(\begin{align} {x_1^2} + 2x_1 + 1 + {y_1^2} - 4y_1 + 4 \end{align}\) \(\begin{align} &= {x_1^2} - 6x_1 + 9 + {y_1^2} - 4y_1 + 4 \qquad \end{align}\) (By Simplifying & Transposing)
\(\begin{align} 8x_1 &= 8 \end{align}\)
\(\begin{align} x_1 &= 1\end{align}\)
We know that in a square, all interior angles are of \(\begin{align}{90^\circ }.\end{align}\)
In \(\begin{align}\Delta ABC\end{align}\)
\(\begin{align}{{AB}^{2}+{BC}^{2}={AC}^{2}} \quad \text { [By Pythagoras theorem }] \end {align}\)
The distance formula is used to find the lengths of \(AB\), \(BC\) and \(AC\):
\(\begin{align} {\left( {\sqrt {{{(1 + 1)}^2} + {{(y_1 - 2)}^2}} } \right)^2} + {\left( {\sqrt {{{(1 - 3)}^2} + {{(y_1 - 2)}^2}} } \right)^2}\end{align}\) \(\begin{align} &= {\left( {\sqrt {{{(3 + 1)}^2} + {{(2 - 2)}^2}} } \right)^2} \end{align}\)
\(\begin{align} 4 + {y_1^2} + 4 - 4y_1 + 4 + {y_1^2} - 4y_1 + 4 &= 16 \end{align}\)
\(\begin{align} 2{y_1^2} + 16 - 8y_1 &= 16 \end{align}\)
\(\begin{align} 2{y_1^2} - 8y_1 &= 0 \end{align}\)
\(\begin{align} y_1(y_1 - 4) &= 0 \end{align}\)
\(\begin{align}y_1 &= 0\,\,or\,\,4\end{align}\)
Hence the required vertices are \(\begin{align}B{\text{ }}\left( {1,\,\,0} \right){\text{ }}{\text{ and}}\;D{\text{ }}\left( {1,{\text{ }}4} \right)\end{align}\)
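The answer can be sanity-checked numerically (a small sketch, not part of the textbook solution):

```python
import math

A, C = (-1, 2), (3, 2)     # the given opposite vertices
B, D = (1, 0), (1, 4)      # the vertices found above

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = [dist(A, B), dist(B, C), dist(C, D), dist(D, A)]
assert len({round(s, 9) for s in sides}) == 1      # four equal sides
assert math.isclose(dist(A, C), dist(B, D))        # equal diagonals
```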
|
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
|
Forgive me if this question is poorly worded, I'm not sure if centroid is the word to use here.
Say I want to interpolate the peak of a cross-correlation function in order to get sub-sample delays. I can fit a parabola with three points, or simply upsample my signal through zero-padding/sinc-interpolation. However, I haven't seen any comments (or found any literature) on using some sort of weighted average on the main peak.
In case it's unclear, I mean performing the following:
Finding the maximum of the cross-correlation in order to get a coarse estimate: $$ \hat{D} = \arg\max_k R_{12}(k) $$ then taking the $P$ nearest neighbours (where $[-P,P]$ corresponds to the central peak area) and performing the following fine estimate
$$ \hat{d} = \frac{\sum_{k =-P}^P kR_{12}(k)}{\sum_{k=-P}^PR_{12}(k)} $$
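For concreteness, here is a minimal NumPy sketch of the coarse-plus-fine estimator described above (the function name is mine, and it assumes the peak is not at the array edge):

```python
import numpy as np

def centroid_delay(R, P=1):
    """Coarse estimate via argmax, then a centroid (centre-of-mass)
    fine estimate over the 2P+1 samples around the peak."""
    D = int(np.argmax(R))                # coarse lag index
    k = np.arange(-P, P + 1)
    w = R[D - P : D + P + 1]             # samples on the central peak
    return D + float(np.sum(k * w) / np.sum(w))

# A symmetric peak gives no fractional correction,
# while an asymmetric one is pulled toward the heavier side:
print(centroid_delay(np.array([0.0, 1.0, 3.0, 1.0, 0.0])))   # 2.0
print(centroid_delay(np.array([0.0, 1.0, 3.0, 2.0, 0.0])))   # ~2.1667
```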
I've tested it for a Gaussian-distributed, infinite-SNR signal, critically sampled (so the peak spans only three samples): centroid (yellow) versus sinc interpolation (purple) and parabolic fitting (orange), and it seems to offer good performance.
It appears to have less maximum bias, despite the discontinuity. I also tested it with noise and the results were also okay. I get that this requires knowledge of the signal A.C.F. beforehand whereas fitting a parabola doesn't, but is there any other reason why this method is not seen in the literature?
Either way, I'm finding it somewhat advantageous in my application (also not sure why), so I would like to read some more formal studies or descriptions of the technique. My background is not signal processing, so I'm afraid I'm overlooking some severe shortcoming.
NOTE: I also tested the mean-square error. It seems to perform much worse there, but, as I said, for my use case it seems to perform well. (The black line is the Cramér-Rao lower bound. Not sure why the parabola goes below it; probably because it's a biased estimator.) EDIT: Signal example of the application (this picture has quite a lot of distortion of the delayed signal, but it's what I have at hand currently).
|
Note: In the below, I used $X$ instead of $H$ by accident. Also, $X^*$ is my notation for $X^H$.
We can find this derivative by the chain rule. Note that $f = f_1 \circ f_2 \circ f_3$, where$$f_1(A) = Tr(A)\\f_2(A) = A^{-1}\\f_3(A) = XAX^* + I$$The derivatives of these functions are given by$$[f_1'(A)](B) = Tr(B)\\[f_2'(A)](B) = -A^{-1}BA^{-1}\\[f_3'(A)](B) = XBX^*\\$$The chain rule tells us that$$[(f_1 \circ f_2 \circ f_3)'(A)] = \\[f_1'(f_2(f_3(A)))]\circ[f_2'(f_3(A))] \circ [f_3'(A)]$$Applying the three maps in turn to a direction $B$: first $[f_3'(A)](B) = XBX^*$, then$$[f_2'(f_3(A))](XBX^*) = -(XAX^* + I)^{-1}\,XBX^*\,(XAX^* + I)^{-1},$$and finally taking the trace. So, all together, that gives us$$[f'(A)](B) = -Tr\left[(XAX^* + I)^{-1}\,XBX^*\,(XAX^* + I)^{-1}\right]$$In other words, if $A$ is parameterized as a function of $t \in \Bbb R$, then we can write$$\frac{d}{dt}f(A(t)) = -Tr\left[(XAX^* + I)^{-1}\,X\frac{dA}{dt}X^*\,(XAX^* + I)^{-1}\right]$$
Clarification of my derivatives:
By $[f'(A)](B)$, I mean that for every $A$, $f'(A)$ is a function on $B$. You are right to be confused. Let's look at an example.
Take $f(X) = Tr(X)$, and find its derivative my way. In fact, $f$ is linear, so this will be easy: we have$$[f'(A)](B) = Tr(B)$$Note that, in this case, $A$ doesn't pop up in the definition of $f'$. In other words, $f$ has a
constant derivative. This makes sense because $f$ is linear. Another way of writing the above is to say that if $A(t)$ is a matrix-valued function of $t \in \Bbb R$, then$$\frac{d}{dt}f(A(t)) = Tr \frac{dA}{dt}$$Now, my understanding of your notation is that$$\frac{\partial f}{\partial X}[i,j] = \frac{\partial f}{\partial x_{ji}} $$equivalently, define $A_{ij}(t) = A + t E_{ji}$ (where $E_{ji}$ is the matrix with $0$s everywhere except at the $j,i$ entry, where it has a $1$). Then we should have$$\frac{\partial f}{\partial X}(A)[i,j] = \frac {d}{dt} f(A_{ij}(t))$$By my above statement, that means we can write$$\frac{\partial f}{\partial X}(A)[i,j] = Tr(E_{ji}) = \begin{cases}1 & i=j\\0 & i \neq j\end{cases}$$In other words, $\frac {\partial f}{\partial X}$ is $I$, which is exactly what we expected.
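As a sanity check (not part of the original answer), the chain-rule expression $[f'(A)](B) = -Tr[(XAX^* + I)^{-1}\,XBX^*\,(XAX^* + I)^{-1}]$ can be compared against a central finite difference. The setup below is hypothetical and uses real matrices, so $X^* = X^T$; $A$ is taken symmetric positive definite to guarantee invertibility:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
A = C @ C.T + np.eye(n)                  # symmetric positive definite
B = rng.standard_normal((n, n))          # direction of differentiation

def f(A):
    # f(A) = Tr[(X A X^T + I)^{-1}]
    return np.trace(np.linalg.inv(X @ A @ X.T + np.eye(n)))

M = np.linalg.inv(X @ A @ X.T + np.eye(n))
analytic = -np.trace(M @ (X @ B @ X.T) @ M)   # chain-rule derivative along B

h = 1e-6
numeric = (f(A + h * B) - f(A - h * B)) / (2 * h)
assert abs(analytic - numeric) < 1e-4 * max(1.0, abs(analytic))
```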
|
Ex.5.3 Q4 Arithmetic Progressions Solution - NCERT Maths Class 10 Question
How many terms of the AP. \(9, 17, 25 \dots\) must be taken to give a sum of \(636\)?
Text Solution What is Known?
The AP and sum.
What is Unknown?
Number of terms.
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\)
Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms.
Steps:
Given,
• First term, \(a = 9\)
• Common difference, \(d = 17 - 9 = 8\)
• Sum of the first \(n\) terms, \({S_n} = 636\)
We know that sum of \(n\) terms of AP
\[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\636 &= \frac{n}{2}\left[ {2 \times 9 + \left( {n - 1} \right)8} \right]\\636 &= \frac{n}{2}\left[ {18 + 8n - 8} \right]\\636& = \frac{n}{2}\left[ {10 + 8n} \right]\\636& = n\left[ {5 + 4n} \right]\\636& = 5n + 4{n^2}\\4{n^2} + 5n - 636 &= 0\\4{n^2} + 53n - 48n - 636 &= 0\\n\left( {4n + 53} \right) - 12\left( {4n + 53} \right) &= 0\\(4n + 53)(n - 12) &= 0\end{align}\]
Either \(4n + 53 = 0\) or \(n - 12 = 0\)
\(n = - \frac{{53}}{4}\) or \(n = 12\)
\(n\) cannot be \( - \frac{{53}}{4}\), as the number of terms can be neither negative nor fractional. Therefore, \(n = 12\).
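A quick numerical check of the answer (a sketch, not from the textbook):

```python
a, d, n = 9, 8, 12
terms = [a + k * d for k in range(n)]      # 9, 17, 25, ...
print(sum(terms))                          # 636
print(n * (2 * a + (n - 1) * d) // 2)      # 636, same by the S_n formula
```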
|
OpenCV 3.4.7
Open Source Computer Vision
This tutorial will demonstrate the basic concepts of the homography with some code. For detailed explanations of the theory, please refer to a computer vision course or a computer vision book, e.g.:
Briefly, the planar homography relates the transformation between two planes (up to a scale factor):
\[ s \begin{bmatrix} x^{'} \\ y^{'} \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
The homography matrix is a 3x3 matrix but with 8 DoF (degrees of freedom), as it is estimated up to a scale. It is generally normalized (see also 1) with \( h_{33} = 1 \) or \( h_{11}^2 + h_{12}^2 + h_{13}^2 + h_{21}^2 + h_{22}^2 + h_{23}^2 + h_{31}^2 + h_{32}^2 + h_{33}^2 = 1 \).
The following examples show different kinds of transformation but all relate a transformation between two planes.
The homography can be estimated using, for instance, the Direct Linear Transform (DLT) algorithm (see 1 for more information). As the object is planar, the transformation between points expressed in the object frame and projected points in the image plane expressed in the normalized camera frame is a homography. Since the object is planar, the camera pose can be retrieved from the homography, provided the camera intrinsic parameters are known (see 2 or 4). This can be tested easily using a chessboard object and findChessboardCorners() to get the corner locations in the image.
The first step is to detect the chessboard corners; the chessboard size (patternSize), here 9x6, is required:
The object points expressed in the object frame can be computed easily knowing the size of a chessboard square:
The coordinate Z=0 must be removed for the homography estimation part:
The image points expressed in the normalized camera frame can be computed from the corner points by applying a reverse perspective transformation using the camera intrinsics and the distortion coefficients:
The homography can then be estimated with:
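The tutorial's code listing is not reproduced here. As an illustration only, here is a minimal NumPy implementation of the DLT mentioned earlier (no point normalization, so best suited to well-scaled coordinates; the function name is mine):

```python
import numpy as np

def find_homography_dlt(src, dst):
    """Estimate the 3x3 homography H (up to scale) mapping src -> dst.
    src, dst: (N, 2) arrays of matching points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # right singular vector of the least singular value
    return H / H[2, 2]         # normalize with h33 = 1

# Round-trip check with a synthetic homography
H_true = np.array([[1.0, 0.2, 3.0], [0.1, 1.1, -2.0], [0.001, 0.002, 1.0]])
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3], [4, 1]], dtype=float)
proj = np.c_[pts, np.ones(len(pts))] @ H_true.T
dst = proj[:, :2] / proj[:, 2:]
assert np.allclose(find_homography_dlt(pts, dst), H_true, atol=1e-6)
```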
A quick solution to retrieve the pose from the homography matrix is (see 5):
\[ \begin{align*} \boldsymbol{X} &= \left( X, Y, 0, 1 \right ) \\ \boldsymbol{x} &= \boldsymbol{P}\boldsymbol{X} \\ &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{r_3} \hspace{0.5em} \boldsymbol{t} \right ] \begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix} \\ &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \\ &= \boldsymbol{H} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \end{align*} \]
\[ \begin{align*} \boldsymbol{H} &= \lambda \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\ \boldsymbol{K}^{-1} \boldsymbol{H} &= \lambda \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\ \boldsymbol{P} &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \left( \boldsymbol{r_1} \times \boldsymbol{r_2} \right ) \hspace{0.5em} \boldsymbol{t} \right ] \end{align*} \]
This is a quick solution (see also 2), as it does not ensure that the resulting rotation matrix is orthogonal, and the scale is estimated roughly by normalizing the first column to 1.
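The recovery described by these equations can be sketched with NumPy (hypothetical intrinsics and pose; as noted, the rotation matrix is not re-orthogonalized):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover [r1 r2 t] from H = lambda * K * [r1 r2 t] (quick solution)."""
    M = np.linalg.inv(K) @ H
    M = M / np.linalg.norm(M[:, 0])        # rough scale: first column to unit norm
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])   # r3 = r1 x r2
    return R, t

# Round-trip check with a synthetic planar pose
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([0.1, -0.2, 2.0])
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])

R_est, t_est = pose_from_homography(H, K)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```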
To check the result, the object frame projected into the image with the estimated camera pose is displayed:
In this example, a source image will be transformed into a desired perspective view by computing the homography that maps the source points into the desired points. The following image shows the source image (left) and the chessboard view that we want to transform into the desired chessboard view (right).
The first step is to detect the chessboard corners in the source and desired images:
The homography is estimated easily with:
To warp the source chessboard view into the desired chessboard view, we use cv::warpPerspective
The result image is:
To compute the coordinates of the source corners transformed by the homography:
To check the correctness of the calculation, the matching lines are displayed:
The homography relates the transformation between two planes, and it is also possible to retrieve the corresponding camera displacement that goes from the first to the second plane view (see [135] for more information). Before going into the details of computing the homography from the camera displacement, we first recall some facts about camera poses and homogeneous transformations.
The function cv::solvePnP computes the camera pose from correspondences between 3D object points (points expressed in the object frame) and their projected 2D image points (object points viewed in the image). The intrinsic parameters and the distortion coefficients are required (see the camera calibration process).
\[ \begin{align*} s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} &= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \boldsymbol{K} \hspace{0.2em} ^{c}\textrm{M}_o \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \end{align*} \]
\( \boldsymbol{K} \) is the intrinsic matrix and \( ^{c}\textrm{M}_o \) is the camera pose. The output of cv::solvePnP is exactly this: rvec is the Rodrigues rotation vector and tvec the translation vector.
\( ^{c}\textrm{M}_o \) can be represented in homogeneous form and transforms a point expressed in the object frame into the camera frame:
\[ \begin{align*} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} &= \hspace{0.2em} ^{c}\textrm{M}_o \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} ^{c}\textrm{R}_o & ^{c}\textrm{t}_o \\ 0_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \end{align*} \]
Transforming a point expressed in one frame into another frame is easily done with matrix multiplication. To transform a 3D point expressed in the camera-1 frame into the camera-2 frame:
\[ ^{c_2}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} ^{o}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} \left( ^{c_1}\textrm{M}_{o} \right )^{-1} = \begin{bmatrix} ^{c_2}\textrm{R}_{o} & ^{c_2}\textrm{t}_{o} \\ 0_{3 \times 1} & 1 \end{bmatrix} \cdot \begin{bmatrix} ^{c_1}\textrm{R}_{o}^T & - \hspace{0.2em} ^{c_1}\textrm{R}_{o}^T \cdot \hspace{0.2em} ^{c_1}\textrm{t}_{o} \\ 0_{1 \times 3} & 1 \end{bmatrix} \]
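The composition above is easy to check numerically (all poses below are hypothetical):

```python
import numpy as np

def homogeneous(R, t):
    """Build the 4x4 matrix [R t; 0 1]."""
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

M_c1_o = homogeneous(rot_z(0.3), np.array([0.1, 0.0, 2.0]))    # c1_M_o
M_c2_o = homogeneous(rot_z(-0.2), np.array([-0.3, 0.2, 2.5]))  # c2_M_o

M_c2_c1 = M_c2_o @ np.linalg.inv(M_c1_o)   # c2_M_c1 = c2_M_o * (c1_M_o)^-1

# Mapping an object point directly or via camera 1 gives the same result
X_o = np.array([0.4, -0.1, 0.7, 1.0])
assert np.allclose(M_c2_o @ X_o, M_c2_c1 @ (M_c1_o @ X_o))
```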
In this example, we will compute the camera displacement between two camera poses with respect to the chessboard object. The first step is to compute the camera poses for the two images:
The camera displacement can be computed from the camera poses using the formulas above:
The homography related to a specific plane computed from the camera displacement is:
In this figure, n is the normal vector of the plane and d the distance between the camera frame and the plane along the plane normal. The equation to compute the homography from the camera displacement is:
\[ ^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} - \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d} \]
Where \( ^{2}\textrm{H}_{1} \) is the homography matrix that maps the points in the first camera frame to the corresponding points in the second camera frame, \( ^{2}\textrm{R}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \) is the rotation matrix that represents the rotation between the two camera frames and \( ^{2}\textrm{t}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \left( - \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \cdot \hspace{0.1em} ^{c_1}\textrm{t}_{o} \right ) + \hspace{0.1em} ^{c_2}\textrm{t}_{o} \) the translation vector between the two camera frames.
Here the normal vector n is the plane normal expressed in camera frame 1; it can be computed as the cross product of two vectors (using three non-collinear points that lie on the plane) or, in our case, directly with:
The distance d can be computed as the dot product between the plane normal and a point on the plane, or by computing the plane equation and using the D coefficient:
The projective homography matrix \( \textbf{G} \) can be computed from the Euclidean homography \( \textbf{H} \) using the intrinsic matrix \( \textbf{K} \) (see [135]), here assuming the same camera between the two plane views:
\[ \textbf{G} = \gamma \textbf{K} \textbf{H} \textbf{K}^{-1} \]
In our case, the Z-axis of the chessboard goes inside the object whereas in the homography figure it goes outside. This is just a matter of sign:
\[ ^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} + \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d} \]
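This relation can be verified numerically. The sketch below uses the plus-sign convention of the last formula, writing the plane in the camera-1 frame as n·X = d (all values hypothetical):

```python
import numpy as np

# Camera displacement: X2 = R @ X1 + t  (camera-1 frame -> camera-2 frame)
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([0.3, -0.1, 0.5])

n = np.array([0.0, 0.0, 1.0])   # plane normal in the camera-1 frame
d = 4.0                         # distance from camera 1 to the plane

H = R + np.outer(t, n) / d      # Euclidean homography 2_H_1

# Any point on the plane maps consistently through H, up to scale
X1 = np.array([1.0, -2.0, 4.0])         # satisfies n . X1 = d
X2 = R @ X1 + t
x2 = H @ X1
assert np.allclose(x2 / x2[2], X2 / X2[2])
```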
We will now compare the projective homography computed from the camera displacement with the one estimated with cv::findHomography
The homography matrices are similar. If we compare the image 1 warped using both homography matrices:
Visually, it is hard to see any difference between the image warped with the homography computed from the camera displacement and the one warped with the homography estimated by the cv::findHomography function.
OpenCV 3 contains the function cv::decomposeHomographyMat, which decomposes the homography matrix into a set of rotations, translations and plane normals. First we will decompose the homography matrix computed from the camera displacement:
The results of cv::decomposeHomographyMat are:
The result of the decomposition of the homography matrix can only be recovered up to a scale factor, which in fact corresponds to the distance d, since the normal is of unit length. As you can see, there is one solution that matches the computed camera displacement almost perfectly. As stated in the documentation:
As the result of the decomposition is a camera displacement, if we have the initial camera pose \( ^{c_1}\textrm{M}_{o} \), we can compute the current camera pose \( ^{c_2}\textrm{M}_{o} = \hspace{0.2em} ^{c_2}\textrm{M}_{c_1} \cdot \hspace{0.1em} ^{c_1}\textrm{M}_{o} \) and test if the 3D object points that belong to the plane are projected in front of the camera or not. Another solution could be to retain the solution with the closest normal if we know the plane normal expressed at the camera 1 pose.
The same thing but with the homography matrix estimated with cv::findHomography
Again, there is also a solution that matches with the computed camera displacement.
The homography transformation applies only to planar structures. But in the case of a rotating camera (pure rotation around the camera's axis of projection, no translation), an arbitrary world can be considered (see previously).
The homography can then be computed using the rotation transformation and the camera intrinsic parameters as (see for instance 8):
\[ s \begin{bmatrix} x^{'} \\ y^{'} \\ 1 \end{bmatrix} = \bf{K} \hspace{0.1em} \bf{R} \hspace{0.1em} \bf{K}^{-1} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
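A quick numerical check of this rotation-only relation (hypothetical intrinsics and rotation; the world point is expressed in the first camera's frame):

```python
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])

H = K @ R @ np.linalg.inv(K)    # homography induced by a pure rotation

X = np.array([0.5, -0.2, 3.0])  # a 3D point, camera-1 frame
x1 = K @ X
x1 = x1 / x1[2]                 # pixel in view 1
x2 = K @ R @ X
x2 = x2 / x2[2]                 # pixel in view 2 (same centre, rotated axes)

m = H @ x1
assert np.allclose(m / m[2], x2)
```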
To illustrate, we used Blender, a free and open-source 3D computer graphics program, to generate two camera views with only a rotation transformation between them. More information about how to retrieve the camera intrinsic parameters and the 3x4 extrinsic matrix with respect to the world in Blender can be found in 9 (an additional transformation is needed to get the transformation between the camera and the object frames).
The figure below shows the two generated views of the Suzanne model, with only a rotation transformation:
With the known associated camera poses and the intrinsic parameters, the relative rotation between the two views can be computed:
Here, the second image will be stitched with respect to the first image. The homography can be calculated using the formula above:
The stitching is made simply with:
The resulting image is:
|
I've implemented the Earth harmonics calculation in JGM-3 model, using the coefficients from here
2 0 -0.10826360229840e-02 0.0
2 1 -0.24140000522221e-09 0.15430999737844e-08
2 2 0.15745360427672e-05 -0.90386807301869e-06
3 0 0.25324353457544e-05 0.0
Now, I want to switch to EGM2008, the coefficients are taken from here (Tide-free).
2 0 -0.484165143790815e-03 0.000000000000000e+00
2 1 -0.206615509074176e-09 0.138441389137979e-08
2 2 0.243938357328313e-05 -0.140027370385934e-05
3 0 0.957161207093473e-06 0.000000000000000e+00
There is a major difference. Probably I need to apply some transformation to the coefficients first? For example, multiplying $C_{20}$ by $\sqrt{5}$?
I multiplied all coefficients of EGM2008 by $\sqrt{2n+1}$, where $n$ is the degree, and got:
2 0 -1.082626173852220E-03 0.0000000000000E+00
2 1 -4.6200632349558E-10 3.0956435701202E-09
2 2 5.4546274930574E-06 -3.1311071889349E-06
3 0 2.5324105185677E-06 0.0000000000000E+00
The difference is still major, especially for $C_{21}$ and $C_{22}$.
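For what it's worth, $\sqrt{2n+1}$ is the correct conversion only for the zonal ($m=0$) terms; for fully normalized coefficients the unnormalization factor also carries $(2-\delta_{0m})$ and factorial terms. A sketch (function name mine), using the coefficient values quoted above:

```python
import math

def unnorm_factor(n, m):
    """Factor converting fully normalized coefficients (EGM2008-style)
    to unnormalized ones (JGM-3-style)."""
    k = 2 if m > 0 else 1
    return math.sqrt(k * (2 * n + 1) * math.factorial(n - m) / math.factorial(n + m))

C20_egm = -0.484165143790815e-03
C22_egm = 0.243938357328313e-05
print(C20_egm * unnorm_factor(2, 0))   # ~-1.08263e-03, close to JGM-3's C20
print(C22_egm * unnorm_factor(2, 2))   # ~1.5746e-06, close to JGM-3's C22
```

The small residual differences are genuine differences between the two gravity models, not a normalization issue.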
Extra
To calculate the Earth gravity potential in JGM-3, the equation was used: $U_{har}=\frac{\mu}{r}[1+\sum_{i=2}^d\sum_{j=0}^o (\frac{R_{eq}}{r})^iP_{ij}(\sin\phi)(S_{ij}\sin{j\lambda}+C_{ij}\cos{j\lambda})]$,
As you see, the argument of the Legendre polynomial $P_{ij}$ is $\sin\phi$.
However, here for EGM2008 it's said to use $\cos\phi$. Is that specific to the EGM2008 model?
|
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
|
In a solution to a problem calculating the molecular mass of an unknown gas in a mixture containing $80~\%$ $\ce{O2}$, the author used the formula:
$\frac{1}{\sqrt{M_\text{mix}}}=\frac{X_{\ce{O2}}}{\sqrt{M_{\ce{O2}}}}+\frac{X_\text{gas}}{\sqrt{M_\text{gas}}}$.
The exact problem is:
Pure $\ce{O_2}$ diffuses through an aperture in $224~\mathrm{s}$, whereas a mixture of $\ce{O_2}$ and another gas, containing $80~\%\ \ce{O_2}$, diffuses through the same aperture in $234~\mathrm{s}$. What is the molecular mass of the other gas?
Solution by author:
$$\begin{align}\frac{t_\text{mix}}{t_{\ce{O2}}} &= \frac{r_{\ce{O2}}}{r_\text{mix}} = \sqrt{\frac{M_\text{mix}}{32}}\\[1em] \frac{234}{224} &= \sqrt{\frac{M_\text{mix}}{32}}\\[0.5em] M_\text{mix} &= 34.92\\[1em] \Longrightarrow \qquad \frac{1}{\sqrt{M_\text{mix}}} &= \frac{X_\text{gas}}{\sqrt{M_\text{gas}}} + \frac{X_{\ce{O2}}}{\sqrt{M_{\ce{O2}}}}\\[1em] \Longrightarrow \qquad \frac{1}{\sqrt{34.92}} &= \frac{0.2}{\sqrt{M_\text{gas}}} + \frac{0.8}{\sqrt{32}}\\[0.5em] M_\text{gas} &= 51.5\end{align}$$
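As a numerical check of the author's arithmetic (variable names mine):

```python
import math

M_O2 = 32.0
t_O2, t_mix = 224.0, 234.0

M_mix = M_O2 * (t_mix / t_O2) ** 2     # Graham's law: ~34.92
x_O2, x_gas = 0.8, 0.2

# Invert 1/sqrt(M_mix) = x_O2/sqrt(M_O2) + x_gas/sqrt(M_gas)
inv_sqrt = (1 / math.sqrt(M_mix) - x_O2 / math.sqrt(M_O2)) / x_gas
M_gas = inv_sqrt ** -2
print(M_gas)   # ~51.8 (the author's 51.5 comes from rounding M_mix)
```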
I wish to know why this formula works here.
|
Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Spring 2018
Schedule (date: speaker, title. Host(s)):
January 29 (Monday): Li Chao (Columbia), "Elliptic curves and Goldfeld's conjecture". Host: Jordan Ellenberg
February 2 (Room 911): Thomas Fai (Harvard), "The Lubricated Immersed Boundary Method". Hosts: Spagnolie, Smith
February 5 (Monday, Room 911): Alex Lubotzky (Hebrew University), "High dimensional expanders: From Ramanujan graphs to Ramanujan complexes". Hosts: Ellenberg, Gurevitch
February 6 (Tuesday 2 pm, Room 911): Alex Lubotzky (Hebrew University), "Groups' approximation, stability and high dimensional expanders". Hosts: Ellenberg, Gurevitch
February 9: Wes Pegden (CMU), "The fractal nature of the Abelian Sandpile". Host: Roch
March 2: Aaron Bertram (University of Utah), "Stability in Algebraic Geometry". Host: Caldararu
March 16: Anne Gelb (Dartmouth), "Reducing the effects of bad data measurements using variance based weighted joint sparsity". Host: WIMAW
April 4 (Wednesday): John Baez (UC Riverside), TBA. Host: Craciun
April 6: Edray Goins (Purdue), "Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups". Host: Melanie
April 13: Jill Pipher (Brown), TBA. Host: WIMAW
April 16 (Monday): Christine Berkesch Zamaere (University of Minnesota), TBA. Hosts: Erman, Sam
April 20: Xiuxiong Chen (Stony Brook University, CANCELLED), TBA. Host: Bing Wang
April 25 (Wednesday): Hitoshi Ishii (Waseda University), Wasow lecture, TBA. Host: Tran

Spring Abstracts

January 29 Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, one which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture, and illustrate the key ideas and ingredients behind this new progress.
February 2 Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
February 5 Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract:
Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science over the last five decades, and more recently also in pure mathematics. The first explicit construction of bounded-degree expanding graphs was given by Margulis in the early 70s. In the mid-80s, Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders.
In recent years a high-dimensional theory of expanders has been emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded-degree complexes of dimension d>1.
This question was recently answered affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2, and by S. Evra and T. Kaufman for general d) by showing that the d-skeletons of (d+1)-dimensional Ramanujan complexes provide such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract:
Several well-known open questions, such as "are all groups sofic or hyperlinear?", have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (= L_2) norm.
The strategy is via the notion of "stability": a certain higher-dimensional cohomology-vanishing phenomenon is proven to imply stability, and, using high-dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated.
All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.
February 9 Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor.
Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
March 2 Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.
March 16 Anne Gelb (Dartmouth)
Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity
Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
April 6 Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math]
This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus.
This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
|
The Tarski–Vaught test for $\preceq$ states that, given a structure $\mathfrak{B}$ and $A\subseteq B$, the set $A$ is the underlying set of an elementary substructure of $\mathfrak{B}$ iff for all formulas $\psi(x_1,\cdots ,x_m,y)$ in $L$ and every sequence $a_1,\cdots , a_m$ of elements of $A$: whenever $\mathfrak{B}\models \exists y \,\psi[a_1,\cdots ,a_m]$, there is $a\in A$ such that $\mathfrak{B}\models \psi[a_1,\cdots ,a_m,a]$.
One direction is clear from the definition of elementary substructure and the other one can be proven using induction on formulas.
I asked around and was told that the second condition can't be reduced to just checking formulas $\exists y\psi$ for $\psi$ quantifier free, but I haven't seen a counterexample.
Intuitively an induction on the number of quantifiers seems to give a proof, but apparently it's wrong(?). So what would be a counterexample showing that it is not enough to check only quantifier-free $\psi$?
|
The adèles $\mathbb A$ arise naturally when considering the Berkovich space $\mathcal M(\mathbb Z)$ of the integers. Namely, they are the stalk $\mathbb A = (j_\ast j^{-1} \mathcal O_\mathbb Z)_p$ where $p \in \mathcal M(\mathbb Z)$ is the trivial norm, $j: U \to \mathcal M(\mathbb Z)$ is the inclusion of its complement, and $\mathcal O_\mathbb Z$ is the structure sheaf of analytic functions on $\mathcal M(\mathbb Z)$.
(Of course, rigid analytic geometry seems to mostly be done locally at a prime $p$, but I think there is some work that proceeds globally with spaces like $\mathcal M(\mathbb Z)$.)
Now, this is my ignorance speaking, but the coolest thing I know about the adèles is that taking them seriously is the starting point for proving the functional equation for $\zeta(s)$ à la Tate's thesis. So if the adèles have a natural rigid-analytic interpretation, it seems natural to want to interpret the rest of Tate's thesis in this framework. Questions:
Do the main concepts of Tate's thesis have natural interpretations in rigid-analytic geometry? I have in mind the following:
Haar measure on the additive and unit groups of a Banach algebra.
The Fourier transform on the additive group of a Banach algebra.
Eigenfunctions for this Fourier transform.
Schwartz functions on a Banach algebra.
The Fourier inversion formula on the additive group of a Banach algebra.
Characters and quasi-characters on the group of units of a Banach algebra.
The discreteness of $\mathbb Q \subset \mathbb A$ (more generally, if $\mathcal F$ is a coherent sheaf on a rigid-analytic space $X$ and $p \in X$ with inclusion $j: X \setminus p \to X$, then is $\mathcal F_p \to (j_\ast j^\ast \mathcal F)_p$ a topological embedding?).
The Poisson summation formula.
If all the above components have good rigid-analytic interpretations, then by putting them together as Tate does it seems one should arrive at a good rigid-analytic interpretation of the function equation of $\zeta(s)$. Even if this does not work out, is there some other rigid-analytic interpretation of the functional equation?
EDIT: Here are some more appearances of the adeles $\mathbb A$ and the ideles $\mathbb I$:
An element of $\mathbb I$ is precisely the data needed to extend a trivial line bundle on $U \subset \mathcal M(\mathbb Z)$ (the complement of the trivial norm) to a line bundle on $\mathcal M(\mathbb Z)$, so I'm tempted to identify $\mathbb I$ with the divisor group of $\mathcal M(\mathbb Z)$.
Modding out global sections of $\mathcal O_\mathbb Z^\times$, $\mathbb I / \mathbb Q^\times$ is a sort of moduli space of line bundles on $\mathcal M(\mathbb Z)$. I don't know if this refines to a moduli Berkovich space "$Pic(\mathcal M(\mathbb Z))$" of line bundles.
Similarly, $\mathbb A / \mathbb Q$ is a moduli space of $\mathcal O_\mathbb Z$-principal bundles.
On a scheme $X$, the structure sheaf $\mathcal O_X$ locally has the property that every restriction map is a localization. But on $\mathcal M(\mathbb Z)$, the structure sheaf $\mathcal O_\mathbb Z$ fails to have this property (although restrictions are completions of localizations). A resolution by sheaves which do have this property is given by $j_\ast j^{-1} \mathcal O_\mathbb Z \to i_\ast \mathbb A / \mathbb Q$ where $i$ is the inclusion of the trivial norm into $\mathcal M(\mathbb Z)$ and $j$ as before is the inclusion of the complement. I'm not sure if this is significant.
|
To describe the evolution of a (in this example non-relativistic) fluid system, the evolution equations for all relevant variables, the conservation laws, the second law of thermodynamics, and an appropriate equation (or equations) of state have to be considered.
The evolution equation for momentum is the Navier-Stokes equation which in a geophysical context can be written as
\[\frac{\partial u}{\partial t} + (u\cdot\nabla)u = -\frac{\nabla p}{\rho} -\nabla\Phi +\frac{1}{\rho}\nabla\cdot S\]
$\Phi$ is the geopotential and $S$ is the stress tensor. Conservation of angular momentum is taken into account by imposing the constraint that the stress tensor
\[S = \rho \nu \left( \nabla u + (\nabla u)^T\right) + \rho \eta I (\nabla \cdot u)\]
where $\nu$ and $\eta$ denote the kinematic shear and bulk viscosities respectively, is symmetric.
The evolution equation for internal energy (or equivalently temperature) can be written as
\[\frac{d e}{d t} = \frac{p}{\rho^2}\frac{d\rho}{d t} + Q_{rad} + Q_{lat} -\frac{1}{\rho}\nabla\cdot J + \frac{1}{\rho}\, S : \nabla u\]
The second law of thermodynamics is taken into account by demanding that the last term in the above equation,
\[\epsilon = \frac{1}{\rho}\, S : \nabla u,\]
which describes the frictional heating or dissipation, is positive definite.
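As a quick numerical sanity check of this positivity claim, one can contract the symmetric stress tensor above with arbitrary velocity-gradient tensors; with non-negative viscosities the heating term, written here as $(1/\rho)\,S:\nabla u$, never comes out negative. (The parameter values and function name below are illustrative choices of mine, not taken from any specific model.)

```python
import numpy as np

rng = np.random.default_rng(0)
rho, nu, eta = 1.2, 1.5e-5, 1.5e-5   # illustrative values, not physical constants

def dissipation(G):
    """Frictional heating (1/rho) S : grad(u) for a velocity-gradient tensor
    G[i, j] = du_i/dx_j, with S built as in the text."""
    S = rho*nu*(G + G.T) + rho*eta*np.eye(3)*np.trace(G)
    return np.sum(S * G) / rho

# minimum over many random velocity gradients stays non-negative
eps_min = min(dissipation(rng.normal(size=(3, 3))) for _ in range(1000))
print(eps_min >= 0.0)   # True
```

Algebraically this is no surprise: $S:\nabla u = 2\rho\nu\, e:e + \rho\eta\,(\nabla\cdot u)^2$ with $e$ the symmetric strain-rate tensor, which is manifestly non-negative for $\nu,\eta \ge 0$; only the symmetric part of $\nabla u$ dissipates.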
Conservation of mass is considered by including the continuity equation
\[\frac{\partial\rho}{\partial t} +\nabla\cdot(\rho u) = 0\]
into the relevant system of equations.
As you can see, this is a coupled system of equations. The kinetic energy dissipated due to the friction term in the Navier-Stokes equation reappears as dissipative heating $\epsilon$ in the internal energy (or temperature) equation, so the (kinetic) energy can not disappear.
A nice example of a study which includes the temperature equation in addition to the Navier-Stokes equation is the study of Sukoriansky et al., who investigate stochastically forced turbulent flows with stable stratification using renormalization-group-like methods. They derive a coupled system of RG equations that describes the scale dependence of the anisotropic diffusivities of velocity as well as of (potential) temperature fluctuations. By assuming the presence of a Kolmogorov scale-invariant subrange, they are able to reproduce the correct kinetic energy cascade, and by slightly extending their work it should be possible to derive the corresponding scale dependence of the spectrum of temperature fluctuations (or available potential energy) too.
If this answer is not exactly what you wanted, I hope that it helps a bit at least.
|
I don't know how to explain the bending mode, but there is a nice mathematical explanation for the difference in stretching modes.
Take $\ce{CO2}$, carbon dioxide, as an example. I'll use B3LYP/6-31G(d,p) values. $\nu_{1}$, the symmetric stretch, is 1372 cm$^{-1}$, and $\nu_{3}$, the asymmetric stretch, is at 2436 cm$^{-1}$. Now consider $\ce{CO}$, a single carbonyl group (the molecule won't work for this example). Looking at an IR table, the average stretch will be about 1750 cm$^{-1}$. $\ce{CO2}$ looks a lot like two $\ce{CO}$ bonds that are linear and share a carbon. But clearly $\ce{CO2}$ doesn't have a doubly-degenerate stretch mode of any kind at that frequency, so something is causing the local modes to differ from the normal modes. A local mode is a vibration that is completely uncoupled from all other modes; its motion does not influence the motion of other atoms and vice versa. Because both local modes have the same symmetry, they are allowed to mix, with some coupling between them, just like molecular orbitals.
Set up a Hamiltonian for a degenerate two-level system with some coupling between them. Keeping with a Huckel-like picture, I'll use $\alpha$ for the energy level of each degenerate local mode and $\beta$ for the coupling between them.
$$\hat{H} = \begin{pmatrix}\alpha & \beta \\ \beta & \alpha\end{pmatrix}$$
Obtaining the normal modes and their energies is equivalent to diagonalizing this Hamiltonian, where the eigenvalues are the normal mode energies and the eigenvectors are the normal modes themselves as linear combinations of the local modes. Doing so gives the normal mode energies:
$$\alpha - \beta, \, \alpha + \beta$$
and the normal modes themselves:
$$\begin{pmatrix}-1 \\ 1\end{pmatrix},\,\begin{pmatrix}1 \\ 1\end{pmatrix}$$
By convention in Huckel theory, $\beta$ is usually negative, so the 2nd eigenvalue is the one lower in energy.
If you take the dot product between each normal mode eigenvector and the vector of local modes:
$$\begin{pmatrix}\phi_{1} \\ \phi_{2}\end{pmatrix}$$
out falls the symmetric and antisymmetric combinations leading to the symmetric and asymmetric stretches. Leaving out the normalization factor of $1/\sqrt{2}$:
$$(-\phi_{1} + \phi_{2}), \, (\phi_{1} + \phi_{2})$$
Setting $\alpha = 1750$ cm$^{-1}$ and $\beta = -500$ cm$^{-1}$ gives a symmetric stretching frequency of 1250 cm$^{-1}$ and an asymmetric stretching frequency of 2250 cm$^{-1}$. Not bad for a model with a single parameter.
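This diagonalization is easy to reproduce numerically. The sketch below (my own variable names, with the values from the paragraph above) confirms both the eigenvalues and the symmetric/antisymmetric eigenvectors:

```python
import numpy as np

# Two-level (Hueckel-like) model: two degenerate local C=O stretches at
# alpha = 1750 cm^-1, coupled by beta = -500 cm^-1 (illustrative values).
alpha, beta = 1750.0, -500.0
H = np.array([[alpha, beta],
              [beta,  alpha]])

vals, vecs = np.linalg.eigh(H)   # eigenvalues returned in ascending order
print(vals)                      # [1250. 2250.]
print(vecs)                      # columns ~ (phi1 + phi2), (-phi1 + phi2), up to sign/normalization
```

`eigh` returns the symmetric combination (1250 cm$^{-1}$) first and the antisymmetric one (2250 cm$^{-1}$) second, matching the $\alpha \pm \beta$ result above.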
The plot shows how the energy levels (eigenvalues) split into distinct normal modes as the coupling between the initially degenerate local modes increases:
Here is a link to the Jupyter Notebook containing the math I did. This is a simplified (math-wise) version of what we did in a paper, since the conceptual point is identical.
This sort of description is how any two-level time-independent system works. See how the two spin energy levels of a free electron change in energy when an external magnetic field is applied (Zeeman effect):
|
Lang is very lazy at learning English. He is always getting distracted during English classes. Being happy with the sweet message he got from his girlfriend yesterday, Lang invented a game to play during his English class.
Lang has an English dictionary containing $n$ words, denoted as $T_1, T_2, \ldots , T_ n$. Denoting as $S$ the message he received, his game is as below:
First, he chooses two indices $\ell $ and $r$ $(1 \leq \ell \leq r \leq |S|)$.
Then, he considers the substring of $S$ from $\ell $ to $r$ and finds all words in his dictionary which are subsequences of this substring.
Next, he sorts all these words in lexicographic order.
Finally, he chooses a positive integer $k$ and wonders which is the $k$-th word of this list.
Lang wants to play this game $q$ times. However, he worries that the class would end before he finishes playing, so he asks you to write a program to find the $k$-th word quickly.
Given a string $S = s_1 s_2 \ldots s_ n$:
The substring of $S$ from $\ell $ to $r$ $(1 \leq \ell \leq r \leq n)$ is the string $s_\ell s_{\ell +1} \ldots s_ r$.
A string $T = t_1 t_2 \ldots t_ m$ is a subsequence of $S$ iff there exists a sequence of indices $1 \leq i_1 < i_2 < \cdots < i_ m \leq n$ such that $S_{i_ j} = T_ j~ \forall 1 \leq j \leq m$.
A string $S = s_1 s_2 \ldots s_ n$ is lexicographically smaller than a string $T = t_1 t_2 \ldots t_ m$ iff any of the following is satisfied:
$n < m$ and $S_ i = T_ i~ \forall 1 \leq i \leq n$.
There exists an index $j$ such that $1 \leq j \leq \min (m,n), S_ i = T_ i~ \forall 1\leq i < j$ and $S_ j < T_ j$.
The first line contains a string $S$ of at most $500$ lowercase English characters — the message Lang received yesterday.
The second line contains two integers $n$ $(1 \leq n \leq 2 \cdot 10^4)$ — the number of words in Lang’s dictionary, and $q$ $(1 \leq q \leq 3 \cdot 10^5)$ — the number of times Lang plays his game.
Each of the next $n$ lines contains a non-empty string of lowercase English characters — a word in Lang’s dictionary. The total length of these $n$ words does not exceed $8 \cdot 10^4$.
Each of the last $q$ lines contains three positive integers $\ell $, $r$, and $k$ $(1 \leq \ell \leq r \leq |S|, 1 \leq k \leq n)$, describing a game as defined above.
Print $q$ lines describing the words Lang is looking for in these $q$ games.
If the length of the word is longer than $10$ characters, print only the first $10$ characters. If in some game, that word does not exist (i.e, $k$ is greater than the number of words in the list), print ‘NO SUCH WORD’ instead.
Please note that the checker of this problem is case-sensitive.
Sample Input 1:
abcd
3 5
a
ac
bd
1 3 1
1 3 2
1 3 3
2 4 1
2 4 2

Sample Output 1:
a
ac
NO SUCH WORD
bd
NO SUCH WORD
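For small inputs the game can be checked with a direct brute force. The sketch below (my own helper names; far too slow for the stated limits, where $q$ up to $3\cdot 10^5$ demands a much cleverer approach) reproduces the sample. Note that Python's built-in string comparison implements exactly the lexicographic order defined above, including the prefix rule.

```python
# Naive reference implementation of one game (brute force over the dictionary).
def is_subsequence(t, s):
    it = iter(s)
    return all(ch in it for ch in t)   # `in` advances the iterator as it searches

def play(S, words, l, r, k):
    sub = S[l-1:r]                     # 1-based, inclusive substring
    found = sorted(w for w in words if is_subsequence(w, sub))
    if k > len(found):
        return 'NO SUCH WORD'
    return found[k-1][:10]             # print at most the first 10 characters

S, words = 'abcd', ['a', 'ac', 'bd']
queries = [(1, 3, 1), (1, 3, 2), (1, 3, 3), (2, 4, 1), (2, 4, 2)]
answers = [play(S, words, *q) for q in queries]
print(answers)   # ['a', 'ac', 'NO SUCH WORD', 'bd', 'NO SUCH WORD']
```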
|
This is a really beautiful problem!
As there is some discussion in the question/comments/one of the answers about the history of this problem, let me narrate here what I found. I’m making this community-wiki as it is not exactly an answer to the question.
In Mathematics Magazine, which is published five times a year by the Mathematical Association of America, in the November 1982 issue (Vol. 55, No. 5), in the problems section (p. 300), the following (much easier) problem was posed anonymously (or rather, by “Anon, Erewhon-upon-Spanish River”, who seems to have been a prolific contributor) as problem 1158 (I’ve changed the notation slightly):
Set $a_0 = 1$ and for $n \ge 1$, $a_n = a_{\lfloor n/2 \rfloor} + a_{\lfloor n/3 \rfloor}$. Find $\lim_{n\to\infty} a_n/n$.
This is a much easier problem, as $\frac12 + \frac13 \neq 1$. (Hint: try polynomial growth.)
Solutions to this problem 1158 were given in the January 1984 issue (Vol. 57, No. 1)’s Problems section (pp. 49–50), under the title A Pseudo-Fibonacci Limit, where it was solved by a host of people.
One of them was Daniel A. Rawsthorne, Wheaton, Maryland, who in the same section of the same issue (p. 42) proposed the harder problem *1185. The asterisk means that he proposed the problem without supplying a solution himself.
Set $a_0 = 1$ and for $n \ge 1$, $a_n = a_{\lfloor n/2 \rfloor} + a_{\lfloor n/3 \rfloor} + a_{\lfloor n/6 \rfloor}$. Find $\lim_{n\to\infty} a_n/n$.
This is a much harder problem, as we need to determine not just the rate of growth ("linear"), but also the constant proportionality factor.
Solutions were given in the January 1985 issue (Vol. 58, No. 1), in the Problems section (pp. 51–52), under the title A Very Slowly Converging Sequence, by (together) P. Erdős, A. Hildebrand, A. Odlyzko, P. Pudaite, and B. Reznick.
Note that the same page also says:
Also solved by Noam Elkies (student), who used Dirichlet series and the residue theorem; and partially (under the assumption that the limit exists) by Don Coppersmith, who gave the explicit formula
$$ a_n = 1 + 2 \sum \frac{(r+s+t)!}{r!s!t!} $$
where the sum is extended over all triples $(r, s, t)$ of nonnegative integers such that $2^r3^s6^t \le n$.
Anyway, in their solution, the authors EHOPR also say
We are writing a paper inspired by this problem and its generalizations.
and give general results, such as the following (I've simplified the numerator a bit):
Suppose $a_0 = 1$, and $a_n = \sum_{i=1}^{s} \lambda_i a_{\lfloor n/m_i \rfloor}$ for $n \ge 1$. Suppose also that not all $m_i$s are (integer) powers of some common integer. Then
$$ \lim_{n\to\infty} \frac{a_n}{n^{\tau}} = \frac{ \sum_{i=1}^{s} \lambda_i - 1}{\tau \sum_{i=1}^{s} p_i \log m_i} $$
where $\tau$ is the unique real number satisfying $\sum_{i=1}^{s} \lambda_i / m_i^\tau = 1$, and $p_i = \lambda_i / m_i^\tau$ (assuming $\tau \neq 0$).
So in this case, we have $$\lim_{n\to\infty} \frac{a_n}{n} = \frac{3 - 1}{\frac12\log2 + \frac13\log 3 + \frac16\log 6} = \frac{12}{\log 432} \approx 1.977.$$ The sequence is OEIS A007731.
For the earlier problem (1158), we have, with $\tau \approx 0.78788$ the solution to $(1/2)^x + (1/3)^x = 1$, $p_1 = \frac1{2^\tau}$, and $p_2 = \frac1{3^\tau} = 1 - p_1$, the ratio $\frac{1}{\tau\left(2^{-\tau}\log 2 + 3^{-\tau}\log 3\right)} \approx 1.469$, so $$a_n \sim 1.469\, n^{0.78788}.$$
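Both results are easy to probe numerically with a memoized implementation of the recurrence (a quick sketch; the function names are mine). As the 1985 title promises, the convergence of $a_n/n$ is very slow.

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def a(n):
    # a_0 = 1,  a_n = a_[n/2] + a_[n/3] + a_[n/6]  (problem *1185)
    if n == 0:
        return 1
    return a(n // 2) + a(n // 3) + a(n // 6)

def tau_1158():
    # Bisection for (1/2)^t + (1/3)^t = 1 (problem 1158); f is decreasing in t.
    f = lambda t: 0.5**t + (1/3)**t - 1
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

limit = 12 / math.log(432)
print(a(1), a(2), a(3), a(6))   # 3 5 7 15, matching OEIS A007731
print(limit)                    # ~1.977
print(tau_1158())               # ~0.78788
print(a(10**6) / 10**6)         # still noticeably off the limit: very slow convergence
```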
The paper they wrote was published as:
P. Erdős, A. Hildebrand, A. Odlyzko, P. Pudaite, and B. Reznick,
The asymptotic behavior of a family of sequences,
Pacific Journal of Mathematics, Vol. 126, No. 2 (1987), pp. 227–241: Link 1 (PDF), Link 2
|
Let $\mathcal{O}$ be the $\sigma$-algebra on $\omega_1$ generated by its countable subsets. Is there a ($\sigma$-additive) probability measure on $\mathcal{O}$ that is not concentrated on a countable set? (I am trying to construct a real random variable whose support has size $\aleph_1$.)
Another nice example is related to this. Let $\Omega$ be the set of all countable ordinals, with its order topology. I may write $\Omega = [0,\omega_1)$. Note that the last point is missing, but any countable subset has a supremum in $\Omega$. Topologically, $\Omega$ is not compact, but is locally compact and pseudo-compact: Indeed, even stronger, any continuous function $f : \Omega \to \mathbb R$ is eventually constant. The linear functional $\Lambda$ that assigns to each continuous function $f$ this eventual value, is what we want. As usual, there is a measure $\mu$ so that $\Lambda(f) = \int_\Omega f\,d\mu$ for all $f$. This is a good example of a measure with "empty support". For every point $t \in \Omega$, there is a neighborhood $A$ with $\mu(A) = 0$.
The $\sigma$-algebra you describe includes all the one-element sets. Due to the countable additivity of the measure, we conclude that the measure of any countable set is determined by the sum of the measures of its elements.
Suppose the set of elements with positive measure is uncountable. Then some countable set must have infinite measure. Proof: consider the sets $\{x \mid \mu(\{x\}) > \epsilon\}$. For some $\epsilon > 0$, this set must be infinite. Choose a countably infinite subset. (Clever ZF-without-choice mojo may enable you to construct a counterexample here.)
If you're okay with that behavior, then the counting measure is an example.
If you're not okay with that behavior, then all such measures will look like the sum of Michael Greinecker's measure and a measure with countable support. (Let $S$ be the set of elements with positive measure and $T$ another set; then $\mu(T)=\mu(S\cap T)+\mu(T\setminus S)$, the first term being the sum of the measures of its elements and the second being $0$ on all countable sets, and therefore equal on all co-countable sets.)
Note: I think Gerald Edgar's measure is the restriction of Michael's measure to the Borel measure space.
|
In Numerical Determination of Eigenenergies for the Harmonic Oscillator, we showed how to numerically determine the eigenenergies of the harmonic oscillator potential. Using the fact that the potential was symmetric, and hence the wave functions symmetric or anti-symmetric, we could easily choose the correct boundary conditions for the wave function we were interested in. However, for an asymmetric potential we don't have any such information! All we know is that the eigenfunctions have to be square integrable and smooth. And that is actually enough information to solve this problem.
Consider the following potential, $$ V(x) = ax^4-b(x+c)^2+d, $$ with $a=1$, $b=1.5$, $c=0.2$, and $d=1.17$. The potential is slightly asymmetrical, as seen in the plot below.
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt

# Set common figure parameters:
newparams = {'axes.labelsize': 14, 'axes.linewidth': 1, 'savefig.dpi': 300,
             'lines.linewidth': 1.0, 'figure.figsize': (8, 3),
             'figure.subplot.wspace': 0.4,
             'ytick.labelsize': 10, 'xtick.labelsize': 10,
             'ytick.major.pad': 5, 'xtick.major.pad': 5,
             'legend.fontsize': 10, 'legend.frameon': False,
             'legend.handlelength': 1.5}
plt.rcParams.update(newparams)
n = 1000            # number of points per unit on x-axis
dx = 1.0/n
p = 10.0            # which x-values to include
linP = np.linspace(0, p*n-1, int(p*n), True)
linM = np.linspace(-(p*n-1), 0, int(p*n), True)
xP = linP/n         # x-values in positive direction
xM = linM/n         # x-values in negative direction
a = 1.0
b = 1.5
c = 0.2
d = 1.17
VP = a*xP**4 - b*(xP+c)**2 + d   # potential for x>0
VM = a*xM**4 - b*(xM+c)**2 + d   # potential for x<0
plt.figure()
plt.plot(xM, VM, 'r', xP, VP, 'r')
plt.grid()
plt.ylim([-1, 10])
plt.xlim([-2, 2])
plt.title(r'The Potential Under Consideration')
plt.xlabel(r'$x$')
plt.ylabel(r'$V(x)$');
Setting $\hbar = 1$, $m = 1$ for simplicity, the Schrödinger equation reads $$\left[-\frac{1}{2}\frac{\rm d^2}{{\rm d}x^2} + V(x) \right]\psi(x) = E \psi(x), $$ yielding the following equation for $\psi''(x)$, $$\psi''(x) = 2[V(x)-E]\psi(x).$$ Discretizing the $x$-axis and the wave function, and using the second-order central difference method, we get a formula for a function value $\psi_{i+1}$ based on two previous points, $$\psi_{i+1} = 2\psi_i-\psi_{i-1}-2(\Delta x)^2\left[E-V(x) \right]\psi_i, $$ where $\Delta x$ is the distance between two points $x_i$ and $x_{i+1}$ on the $x$-axis. For the first function value on each side of the origin we use $$ \psi_1 = \psi_0 + \psi_0'\Delta x,$$ where $\psi_0$ and $\psi_0'$ are the initial value at the origin for the wave function and the slope of the wave function respectively.
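Before tackling the asymmetric potential, the recurrence can be sanity-checked on a case with a known answer. The sketch below (a toy setup of mine, not part of the notebook's workflow) integrates the harmonic oscillator $V(x)=x^2/2$ at its exact ground-state energy $E=1/2$, starting from $\psi(0)=1$, $\psi'(0)=0$, and compares with the analytic ground state $e^{-x^2/2}$:

```python
import numpy as np

def shoot(E, V, x, psi0=1.0, dpsi0=0.0):
    """Integrate psi'' = 2[V(x) - E] psi with the central-difference step."""
    dx = x[1] - x[0]
    psi = np.zeros_like(x)
    psi[0] = psi0
    psi[1] = psi0 + dpsi0*dx                 # psi_1 = psi_0 + psi_0' dx
    for i in range(1, len(x) - 1):
        psi[i+1] = 2*psi[i] - psi[i-1] - 2*dx**2*(E - V(x[i]))*psi[i]
    return psi

x = np.linspace(0.0, 2.0, 2001)
psi = shoot(0.5, lambda t: 0.5*t**2, x)      # harmonic oscillator, E = 1/2
err = np.max(np.abs(psi - np.exp(-x**2/2)))  # deviation from exact ground state
print(err)                                   # small discretization error
```

The numerical solution tracks the Gaussian closely; away from an exact eigenenergy, the same integrator would instead blow up, which is exactly the behavior the slope-shooting functions below exploit.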
As mentioned, we have to find functions that are square integrable and smooth. Hence we will try the following procedure: for a given energy, use the above equations to find wave functions for both positive and negative values of $x$ which are square integrable. To do this we need to define functions that find an initial value for the slope such that the wave function goes towards zero as $x \rightarrow \infty $, e.g. by using the bisection method. It is important to note that the criterion for the bisection method will vary depending on whether the wave function approaches zero from above or below. Hence we need two different functions, and for simplicity we have also defined separate functions for positive and negative $x$-values. These four functions are defined below.
def findSlopeP(Ein, acc):
    """Function for positive x, approaches zero from above.
    Takes energy and desired accuracy as input.
    """
    S1 = -30.0     # lower limit for slope
    S2 = 30.0      # upper limit for slope
    DeltaSP = 1.0
    S = 0          # starting value for slope
    while DeltaSP > acc:
        for i in linP[0:-1].astype(int):
            if i == 0:
                f1[i+1] = f1[i] + dx*S
            else:
                f1[i+1] = -f1[i-1] + f1[i]*(2 - dx**2*2*(Ein - VP[i]))
            if f1[i] < -20:    # the wave function shoots off towards minus infinity, adjust slope
                S1 = S
                S = S1 + (S2-S1)/2
                break
            elif f1[i] > 20:   # the wave function shoots off towards plus infinity, adjust slope
                S2 = S
                S = S2 - (S2-S1)/2
                break
        DeltaSP = S2 - S1
    # when DeltaSP is smaller than the given accuracy, return the calculated slope
    return S

def findSlopeM(Ein, acc):
    """Function for negative x, approaches zero from above."""
    S1 = -30.0
    S2 = 30.0
    DeltaSM = 1.0
    S = 0
    while DeltaSM > acc:
        for i in linP[1:-1].astype(int):
            if i == 1:
                f2[-(i+1)] = f2[-i] + dx*S
            else:
                f2[-(i+1)] = -f2[-(i-1)] + f2[-i]*(2 - dx**2*2*(Ein - VM[-i]))
            if f2[-i] < -20:
                S1 = S
                S = S1 + (S2-S1)/2
                break
            elif f2[-i] > 20:
                S2 = S
                S = S2 - (S2-S1)/2
                break
        DeltaSM = (S2-S1)
    return S

def findSlopeP2(Ein, acc):
    """Function for positive x, approaches zero from below."""
    S1 = -30.0
    S2 = 30.0
    DeltaSP = 1.0
    S = 0.0
    while DeltaSP > acc:
        for i in linP[0:-1].astype(int):
            if i == 0:
                f1[i+1] = f1[i] + dx*S
            else:
                f1[i+1] = -f1[i-1] + f1[i]*(2 - dx**2*2*(Ein - VP[i]))
            if f1[i] > 20:
                S1 = S
                S = S1 + (S2-S1)/2
                break
            elif f1[i] < -20:
                S2 = S
                S = S2 - (S2-S1)/2
                break
        DeltaSP = abs(S2-S1)
    return S

def findSlopeM2(Ein, acc):
    """Function for negative x, approaches zero from below."""
    S1 = -30.0
    S2 = 30.0
    DeltaSM = 1.0
    S = 0.0
    while DeltaSM > acc:
        for i in linP[1:-1].astype(int):
            if i == 1:
                f2[-(i+1)] = f2[-i] + dx*S
            else:
                f2[-(i+1)] = -f2[-(i-1)] + f2[-i]*(2 - dx**2*2*(Ein - VM[-i]))
            if f2[-i] > 20:
                S1 = S
                S = S1 + (S2-S1)/2
                break
            elif f2[-i] < -20:
                S2 = S
                S = S2 - (S2-S1)/2
                break
        DeltaSM = abs(S2-S1)
    return S
We now need to look for values of the energy which give the same slope in both directions. Using the fact that the $n^{\rm th}$ excited state has $n$ nodes, we can determine which combinations of the above functions to use. Now all is set to start computing the eigenenergies. We start with the ground state energy, $E_0$.
The ground state has zero nodes, hence we will use the functions where the wave function approaches zero from above,
findSlopeP and findSlopeM, beginning the search in an interval $0<E<2$, with step length 0.1. Plotting the slope as a function of energy for both positive and negative values of $x$ will show if any of the energy values give a smooth wave function.
f1 = np.zeros(p*n)
f1[0] = 1.0
f2 = np.zeros(p*n)
f2[-1] = 1.0
acc = 0.01
N = 2
E = np.linspace(0, N, 10*N + 1, True)
lin = np.arange(10*N + 1)  # integer indices (a float linspace cannot be used for indexing)
SP = np.zeros(10*N + 1)
SM = np.zeros(10*N + 1)
for k in lin:
    SP[k] = findSlopeP(E[k], acc)
    SM[k] = findSlopeM(E[k], acc)
plt.figure()
plt.plot(E, SP, 'b', E, -SM, 'g')
plt.xticks(np.arange(min(E), max(E), (max(E) - min(E))/10))
plt.grid()
plt.xlabel('Energy')
plt.ylabel('Slope');
We see that there is an intersection for an energy $E_0 \approx 1.1$. After some trial and error, we see that the ground state energy is approximately $E_0 = 1.09$ which gives a quite smooth function as seen below. So it seems like the method works!
E0 = 1.09
f1 = np.zeros(p*n)
f1[0] = 1.0
f2 = np.zeros(p*n)
f2[-1] = 1.0
acc = 1e-6
SP0 = findSlopeP(E0, acc)
SM0 = findSlopeM(E0, acc)
print('Right slope: %s' % SP0)
print('Left slope: %s' % -SM0)

# Plot ground state
plt.figure()
plt.plot(xP, f1, 'b', xM, f2, 'b')
plt.xlim([-3, 3])
plt.ylim([-0.5, 2])
plt.ylabel(r'$\psi(x)$')
plt.xlabel(r'$x$')
plt.title('Ground State')
plt.grid();
Right slope: 0.635657161474 Left slope: 0.64026966691
We see now that the computed slopes are quite similar. To get higher accuracy we could have used a bisection method for energies close to $1.09$.
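As a sanity check of the approach, here is a compact, self-contained sketch (an illustration, not the notebook's own code) that uses the same finite-difference recurrence but bisects on the energy itself, applied to the harmonic oscillator $V(x)=x^2/2$ with $\hbar = m = 1$, whose exact ground state energy is $1/2$:

```python
def shoot(E, x_max=6.0, dx=1e-3):
    """Integrate psi'' = 2*(V - E)*psi outward from x = 0 for an even state
    (psi(0) = 1, psi'(0) = 0) with V(x) = x**2 / 2, and return psi(x_max)."""
    n = int(x_max / dx)
    psi_prev = 1.0
    psi = 1.0 - dx**2 * E  # Taylor start: psi(dx) ~ 1 + (dx**2/2)*2*(V(0) - E)
    for i in range(1, n):
        V = 0.5 * (i * dx) ** 2
        psi_prev, psi = psi, -psi_prev + psi * (2 + dx**2 * 2 * (V - E))
    return psi

def ground_state_energy(lo=0.1, hi=1.0, tol=1e-8):
    """Bisect on E: the sign of the diverging tail psi(x_max) flips
    exactly when E crosses an eigenvalue."""
    f_lo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = shoot(mid)
        if f_mid * f_lo > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(ground_state_energy())  # close to the exact value 0.5
```

Bisecting on the energy (rather than on the initial slope, as above) works for even states of a symmetric potential because a single shot from $x=0$ suffices; the slope-matching approach in this notebook is the natural generalization to asymmetric potentials.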
For the first excited state, the node theorem states that we will have one node. Hence, the wave function will approach the $x$-axis from below on one side. But on which side of the origin will the node be? Looking at the plot of the potential above, we see that the potential is lower on the right side, so this region is more "allowed". The node will therefore probably lie to the right of the origin. To find $E_1$, we then have to use the second function,
findSlopeP2, for positive values of $x$.
f1 = np.zeros(p*n)
f1[0] = 1.0
f2 = np.zeros(p*n)
f2[-1] = 1.0
acc = 0.01
N = 2
start = 1
E = np.linspace(start, N + start, 10*N + 1)
lin = np.arange(10*N + 1)  # integer indices
SP = np.zeros(10*N + 1)
SM = np.zeros(10*N + 1)
for k in lin:
    SP[k] = findSlopeP2(E[k], acc)
    SM[k] = findSlopeM(E[k], acc)

plt.figure()
plt.plot(E, SP, 'b', E, -SM, 'g')
plt.xticks(np.arange(min(E), max(E), (max(E) - min(E))/10))
plt.grid()
plt.xlabel('Energy')
plt.ylabel('Slope');
Here we see that there is an intersection at $E_1 \approx 2.3$, or more accurately $E_1 = 2.33$.
E1 = 2.33
f1 = np.zeros(p*n)
f1[0] = 1.0
f2 = np.zeros(p*n)
f2[-1] = 1.0
acc = 1e-6
SP1 = findSlopeP2(E1, acc)
SM1 = findSlopeM(E1, acc)
print('Right slope: %s' % SP1)
print('Left slope: %s' % -SM1)

# Plot first excited state
plt.figure()
plt.plot(xP, f1, 'b', xM, f2, 'b')
plt.xlim([-3, 3])
plt.ylim([-2, 2.5])
plt.ylabel(r'$\psi(x)$')
plt.xlabel(r'$x$')
plt.title('First Excited State')
plt.grid();
Right slope: -2.84302040935 Left slope: -2.85926297307
For the second excited state, we expect two nodes, but where? Probably they will be on each side of the origin, but shifted to the right compared to a symmetric potential. Hence the wave function will approach the $x$-axis from below on both sides, making the second set of functions,
findSlopeP2 and findSlopeM2, necessary for both negative and positive values of $x$.
f1 = np.zeros(p*n)
f1[0] = 1.0
f2 = np.zeros(p*n)
f2[-1] = 1.0
acc = 0.01
N = 3
start = 2
E = np.linspace(start, N + start, 10*N + 1)
lin = np.arange(10*N + 1)  # integer indices
SP = np.zeros(10*N + 1)
SM = np.zeros(10*N + 1)
for k in lin:
    SP[k] = findSlopeP2(E[k], acc)
    SM[k] = findSlopeM2(E[k], acc)

plt.figure()
plt.plot(E, SP, 'b', E, -SM, 'g')
plt.xticks(np.arange(min(E), max(E), (max(E) - min(E))/10))
plt.grid()
plt.xlabel('Energy')
plt.ylabel('Slope');
The intersection is at $E_2 \approx 4.2$, and further investigation gives a more accurate value $E_2 = 4.21$.
E2 = 4.21
f1 = np.zeros(p*n)
f1[0] = 1.0
f2 = np.zeros(p*n)
f2[-1] = 1.0
acc = 1e-6
SP2 = findSlopeP2(E2, acc)
SM2 = findSlopeM2(E2, acc)
print('Right slope: %s' % SP2)
print('Left slope: %s' % -SM2)

# Plot second excited state
plt.figure()
plt.plot(xP, f1, 'b', xM, f2, 'b')
plt.xlim([-3, 3])
plt.ylim([-2, 2.5])
plt.ylabel(r'$\psi(x)$')
plt.xlabel(r'$x$')
plt.title('Second Excited State')
plt.grid();
Right slope: 1.05427667499 Left slope: 1.06592908502
We see from the plot above that the assumption regarding the nodes was correct. Continuing in a similar manner we could have determined the energies and plotted the wave functions for higher states as well. So, we have shown that one can in fact find the eigenenergies even for an asymmetric potential.
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
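(An editorial aside, not part of the original exchange: the stated formula, Jacobi's four-square theorem, is easy to check by brute force for small $n$. Here `r4` counts ordered integer quadruples, signs and zeros included.)

```python
from itertools import product

def r4(n):
    """Count representations n = a^2 + b^2 + c^2 + d^2 over ordered
    integer quadruples (signs and zeros included)."""
    s = int(n**0.5)
    rng = range(-s, s + 1)
    return sum(1 for a, b, c, d in product(rng, repeat=4)
               if a*a + b*b + c*c + d*d == n)

def divisor_sum(n, odd_only=False):
    """Sum of divisors of n, optionally restricted to odd divisors."""
    return sum(d for d in range(1, n + 1)
               if n % d == 0 and (not odd_only or d % 2 == 1))

# Jacobi: r4(n) = 8*sigma(n) for odd n, 24*(sum of odd divisors) for even n.
for n in range(1, 13):
    expected = 8 * divisor_sum(n) if n % 2 else 24 * divisor_sum(n, odd_only=True)
    assert r4(n) == expected
print("Jacobi's four-square formula verified for n = 1..12")
```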
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
The background of my question is that I want to compute an arbitrary square matrix $A \in \mathbb{R}^{n \times n}$ in a convex optimization problem and ensure it is regular, i.e., that it can be inverted. For this reason, I would like to add a second, also convex, summand to the objective function, or, alternatively, add a convex inequality constraint to the optimization problem. Adding, e.g., $-\log(|\det(A)|)$ to the objective function ensures invertibility, but this term is not convex.
Invertibility can be ensured in a convex way by the methods described in Bounds for determinants with positive diagonals for diagonally dominant matrices or, allowing for somewhat more general matrices, Verified bounds for singular values, in particular for the spectral norm of a matrix and its inverse (e.g., Lemma 2.1). This, however, restricts the matrices to a certain structure, while general matrices would be much better for the optimization I have in mind.
From my research, the most promising approach seems to be to consider the smallest singular value of $A$, which equals the reciprocal of the largest singular value of its inverse:
$\sigma_{\mathrm{min}}(A)=\frac{1}{\lVert A^{-1}\rVert_2}$,
where $\sigma_{\mathrm{min}}(A)$ denotes the smallest singular value of $A$ and $\lVert\cdot\rVert_2$ the spectral norm, i.e., the largest singular value. My key question is whether one can obtain a (concave) positive lower bound for the smallest singular value using $\sigma_{\mathrm{min}}(A)=\frac{1}{\lVert A^{-1}\rVert_2}$. Maximizing this concave lower bound ensures that the eigenvalues of $A$ have absolute value $>0$, see this question. The bound itself can be very inaccurate; the main thing is that $\sigma_{\mathrm{min}}(A)>0$ is ensured. As far as I have seen, using the triangle inequality (subadditivity) and submultiplicativity of matrix norms only leads to upper bounds for $\sigma_{\mathrm{min}}(A)$, not to lower ones. Maybe it is possible to play with subadditivity and submultiplicativity as well as other matrix norm properties to obtain such a lower bound that depends on $A$ or its elements, but not on its inverse, so as to allow easy optimization?
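As a quick numerical sanity check of the identity $\sigma_{\mathrm{min}}(A) = 1/\lVert A^{-1}\rVert_2$ (illustrative only, on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))  # a generic (almost surely invertible) matrix

sigma_min = np.linalg.svd(A, compute_uv=False).min()      # smallest singular value of A
inv_spectral_norm = np.linalg.norm(np.linalg.inv(A), 2)   # largest singular value of A^{-1}

assert np.isclose(sigma_min, 1.0 / inv_spectral_norm)
```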
Answers on this specific idea (lower bound for minimal singular value) or on how to ensure invertibility in a convex way in general would be greatly appreciated.
I can’t write this computer simulation for you (at least not based on the data provided) but will instead explain a few basic relationships that govern the heating and cooling of objects. I hope this helps.
Consider a building as an object that is composed of $n$ objects of masses $m_i$ with specific heat capacities $c_{p,i}$; then the building has an overall heat capacity given by:
$cM=\displaystyle\sum_{i=1}^{n} c_{p,i} m_i$.
1. Heating:
Understand $cM$ to be the total heat energy $\Delta Q$ needed to raise the building’s temperature by 1 degree Celsius:
$\Delta Q=cM\Delta T$ for $\Delta T = 1 C$.
If we add an amount of heat per unit of time $t$, that is $\frac{\Delta Q}{\Delta t}$, then we can write:
$\frac{\Delta Q}{\Delta t}=cM \frac{\Delta T}{\Delta t}$.....(Eq.1).
With $\frac{\Delta T}{\Delta t}$ the rate of temperature change in time.
2. Cooling:
Assuming that the building’s temperature $T$ is higher than that of its surroundings $T_O$, the building is constantly losing heat in accordance with Newton’s cooling law:
$\frac{\Delta Q}{\Delta t}=hA(T(t)-T_O)$.....Eq.2
where $h$ is the heat transfer coefficient, $A$ the total outside surface area of the building and $T(t)$ the building’s temperature at any time $t$.
3. Thermostatic heating:
Most buildings require temperature to be kept near a desired set point, say $T_s$ (typically 18 to 20 degrees Celsius).
It should now be apparent that when $T(t) \geq T_s$ and the heating power $\frac{\Delta Q}{\Delta t}$ is still ‘on’, then in accordance with Eq.1, $T(t)$ will continue rising. But that is not a desired outcome. Instead it would be better to switch the heating power to ‘off’ when $T(t) \geq T_s$. This is of course the principle of thermostatic heating. When the building has cooled back down so that $T(t) \leq T_s$, the thermostatic controller switches the heating power back on. This system requires no timed heating power program and also adapts well to changing values of $T_O$, the building's surrounding temperature.
At this set temperature the average power consumption is given by Eq.2:
$\frac{\Delta Q}{\Delta t}=hA(T_s-T_O)$.
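The relations above are all you need to write the simulation. Here is a minimal explicit-Euler sketch of Eq.1 and Eq.2 with an on/off thermostat; every parameter value below is invented purely for illustration:

```python
# Hypothetical building parameters (invented for illustration):
cM = 5e6             # total heat capacity c*M [J/K]
hA = 300.0           # heat transfer coefficient times surface h*A [W/K]
P_heater = 10_000.0  # heating power when switched on [W]
T_out = 5.0          # surrounding temperature T_O [deg C]
T_set = 20.0         # thermostat set point T_s [deg C]

def simulate(T0=10.0, dt=10.0, t_end=48 * 3600):
    """Explicit Euler on cM * dT/dt = P_in - hA*(T - T_out)  (Eq.1 and Eq.2)."""
    T = T0
    for _ in range(int(t_end / dt)):
        P_in = P_heater if T < T_set else 0.0  # thermostat: heater on below set point
        T += dt * (P_in - hA * (T - T_out)) / cM
    return T

print(simulate())  # settles near T_set = 20
```

With these numbers the steady-state duty cycle approaches $hA(T_s - T_O)/P = 4500/10000 = 45\%$, consistent with the average-power formula above.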
Ex.9.1 Q11 Some Applications of Trigonometry Solution - NCERT Maths Class 10 Question
A TV tower stands vertically on the bank of a canal. From a point on the other bank directly opposite the tower, the angle of elevation of the top of the tower is \(60^\circ\). From another point \(20 \,\rm{m}\) away from this point on the line joining this point to the foot of the tower, the angle of elevation of the top of the tower is \(30^\circ\) see Figure. Find the height of the tower and the width of the canal.
Text Solution

What is Known?
(i) The angle of elevation of the top of the tower from a point on the other bank directly opposite the tower is \(60^\circ\)
(ii) From another point \(20\,\rm{ m}\) away from this point in (i) on the line joining this point to the foot of the tower, the angle of elevation of the top of the tower is \(30^\circ\)
(iii) \( CD =20\,\rm{ m}\)
What is Unknown?
Height of the tower \(=AB\) and the width of canal \(=BC\)
Reasoning:
Trigonometric ratio involving \(CD,\, BC,\) the angles and the height of the tower \(AB\) is \(\tan \theta \).
Steps:
Considering \(\Delta ABC,\)
\[\begin{align} \tan 60 ^ { \circ } & = \frac { A B } { B C } \\ \sqrt { 3 } & = \frac { A B } { B C } \\ A B & = \sqrt { 3 } B C \ldots . ( 1 ) \end{align}\]
Considering \(\Delta ABD,\)
\[\begin{align}\tan 30^{\circ}&=\frac{AB}{BD} \\ \tan 30^{\circ}&=\frac{AB}{CD+BC} \\ \frac{1}{\sqrt{3}}&=\frac{\sqrt{3}\,BC}{20+BC}\qquad\text{from }(1) \\ 20+BC&=\sqrt{3}\times \sqrt{3}\,BC \\ 20+BC&=3BC \\ 3BC-BC&=20 \\ 2BC&=20 \\ BC&=10\end{align}\]
Substituting \(BC = 10 \,\rm{m}\) in Equation (\(1\)), we get
\(AB = 10\sqrt{3}\,\rm{m}\)
Height of the tower \(AB = 10\sqrt{3}\,\rm{m}\)
Width of the canal \(BC=10 \,\rm{m}\)
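As a quick numerical cross-check of the answer (an illustrative aside, not part of the textbook solution):

```python
import math

AB = 10 * math.sqrt(3)  # height of the tower [m]
BC = 10.0               # width of the canal [m]

# Elevation angle from the opposite bank (distance BC) should be 60 degrees,
# and from the point 20 m further away (distance BC + 20) should be 30 degrees.
assert abs(math.degrees(math.atan(AB / BC)) - 60.0) < 1e-9
assert abs(math.degrees(math.atan(AB / (BC + 20))) - 30.0) < 1e-9
print("both elevation angles check out")
```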
Event detail

On Gaussian-width gradient complexity and mean-field behavior of interacting particle systems and random graphs
Seminar | November 1 | 3:10-4 p.m. | 1011 Evans Hall
Ronen Eldan, Weizmann Institute of Science
The motivating question for this talk is: What does a sparse Erd\"os-R\'enyi random graph, conditioned to have twice as many triangles as expected, typically look like? Motivated by this question, in 2014, Chatterjee and Dembo introduced a framework for obtaining Large Deviation Principles (LDP) for nonlinear functions of Bernoulli random variables (this followed an earlier work of Chatterjee-Varadhan which used graph limit theory to answer this question in the dense regime). The aforementioned framework relies on a notion of "low complexity" functions on the discrete cube, defined in terms of the covering numbers of their gradient. The central lemma used in their proof provides a method of estimating the log-normalizing constant $\log \sum_{x \in \{-1,1\}^n} e^{f(x)}$ by a corresponding mean-field functional.
In this talk, we will introduce a new notion of complexity for measures on the discrete cube, namely the mean-width of the gradient of the log-density. We prove a general structure theorem for such measures which goes beyond the discrete cube. In particular, we show that a measure $\nu$ attaining low complexity (with no extra smoothness assumptions needed) is close to a product measure in the following sense: there exists a measure $\tilde \nu$, a small "tilt" of $\nu$ in the sense that their log-densities differ by a linear function with small slope, such that $\tilde \nu$ is close to a product measure in transportation distance. An easy corollary of our result is a strengthening of the framework of Chatterjee-Dembo, which in particular simplifies the derivation of LDPs for subgraph counts, and improves the attained bounds. We will demonstrate how our framework can be used to study the behavior of low-complexity measures beyond the approximation of the partition function. As an example application, we prove that exponential random graphs behave roughly like mixtures of stochastic block models.
OpenCV 3.4.7
Open Source Computer Vision
In this tutorial, you will learn how image inpainting using F-transform works. It consists in:
The goal of this tutorial is to show that the inverse F-transform can be used for image reconstruction. By image reconstruction, we mean reconstruction of a corrupted image, where corruption is everything that the original image does not include: noise, text, scratches, etc. The proposal is to solve the reconstruction problem with the help of an approximation technique. This means that we will be looking for an approximating image which is close to the given one and, at the same time, does not contain what we recognize as the corruption. This task is called
image inpainting.
As shown in the previous tutorial, the F-transform is a tool of fuzzy mathematics highly useful in image processing. Let me rewrite the formula using the kernel \(g\) introduced before as well:
\[ F^0_{kl}=\frac{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} \iota_{kl}(x,y) g(x,y)}{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} g(x,y)}, \]
where \(\iota_{kl} \subset I\) is centered at pixel \((k \cdot h,l \cdot h)\) and \(g\) is a kernel. For the purpose of image processing, a binary mask \(S\) is used such that
\[ g^s_{kl} = g \circ s_{kl} \]
where \(s_{k,l} \subset S\). The subarea \(s\) of mask \(S\) corresponds with the subarea \(\iota\) of image \(I\). The operator \(\circ\) is element-wise matrix multiplication (Hadamard product). The formula is updated to
\[ F^0_{kl}=\frac{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} \iota_{kl}(x,y) g^s(x,y)}{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} g^s(x,y)}. \]
More details can be found in related papers.
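The masked component formula above can be sketched in a few lines of NumPy (a minimal illustration, not the OpenCV implementation; the separable triangular kernel used for \(g\) is an assumed choice):

```python
import numpy as np

def f0_components(image, mask, h):
    """Zeroth-degree F-transform components F0_kl = sum(I*g*s) / sum(g*s),
    computed over (2h+1)x(2h+1) windows centered on a grid with step h.
    `mask` is 1 on valid pixels and 0 on corrupted ones."""
    r = 1.0 - np.abs(np.arange(-h, h + 1)) / (h + 1)
    g = np.outer(r, r)  # separable triangular kernel (an assumed choice of g)
    H, W = image.shape
    ks, ls = range(0, H, h), range(0, W, h)
    comps = np.full((len(ks), len(ls)), np.nan)
    for a, k in enumerate(ks):
        for b, l in enumerate(ls):
            y0, y1 = max(k - h, 0), min(k + h + 1, H)
            x0, x1 = max(l - h, 0), min(l + h + 1, W)
            gs = g[y0 - (k - h):y1 - (k - h), x0 - (l - h):x1 - (l - h)] \
                 * mask[y0:y1, x0:x1]
            if gs.sum() > 0:  # component undefined if the whole window is damaged
                comps[a, b] = (image[y0:y1, x0:x1] * gs).sum() / gs.sum()
    return comps
```

On a constant image the components reproduce the constant wherever at least one valid pixel falls in the window, which is exactly what lets the inverse F-transform fill damaged areas from their surroundings.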
The sample below demonstrates the usage of image inpainting. Three artificial images are created using the same input and three different types of corruption. In real-life usage, the input image would already be given, but here we create it ourselves.
First of all, we must load our image and three masks used for artificial damage creation.
Note that the masks must be loaded as
IMREAD_GRAYSCALE.
In the next step, the masks are used for damaging our input image.
Using the masks, we applied three different kinds of corruption to the same input image. Here is the result.
Do not forget that in real life usage, images
input1,
input2and
input3 are created naturally and used as the input directly.
Declaration of output images follows. In the following lines, the method of inpainting is applied. Let me explain three different algorithms one by one.
First of them is
ONE_STEP.
The
ONE_STEP algorithm simply computes the direct F-transform, ignoring the damaged parts, using a kernel with radius
2 (as specified in the method call). The inverse F-transform fills in the missing area using values from the components nearby. It is up to you to choose a radius which is big enough.
Second is
MULTI_STEP.
MULTI_STEP algorithm works in the same way, but the specified radius (
2 in this case) is automatically increased if it is found insufficient. If you want to fill up a hole and you are not sure how big a radius you need, you can choose
MULTI_STEP and let the computer decide. The smallest sufficient radius will be found.
Last one is
ITERATIVE.
The best choice in the majority of cases is
ITERATIVE. This way of processing uses a small radius of the basic functions for small damage and larger radii for bigger holes.
Main Page The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
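These definitions are easy to make computational. A small brute-force sketch (an editorial illustration, practical only for tiny [math]n[/math] since it enumerates every subset of [math][3]^n[/math]):

```python
from itertools import product

def combinatorial_lines(n):
    """Yield all combinatorial lines of [3]^n: templates over {1,2,3,x}
    with at least one wildcard x, instantiated at x = 1, 2, 3."""
    for tmpl in product('123x', repeat=n):
        if 'x' in tmpl:
            yield tuple(''.join(v if c == 'x' else c for c in tmpl)
                        for v in '123')

def is_line_free(subset, n):
    s = set(subset)
    return not any(all(p in s for p in line) for line in combinatorial_lines(n))

def c(n):
    """Brute-force c_n: try every subset of [3]^n (exponential in 3^n!)."""
    pts = [''.join(t) for t in product('123', repeat=n)]
    best = 0
    for bits in range(1 << len(pts)):
        sub = [p for i, p in enumerate(pts) if bits >> i & 1]
        if len(sub) > best and is_line_free(sub, n):
            best = len(sub)
    return best

print(c(1), c(2))  # known values: c_1 = 2, c_2 = 6
```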
The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Useful background materials
Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide
Threads

(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (inactive)
(500-599) Possible proof strategies (active)
(600-699) A reading seminar on density Hales-Jewett (active)
We are also collecting bounds for Fujimura's problem.
Here are some unsolved problems arising from the above threads.
Bibliography

M. Elkin, "An Improved Construction of Progression-Free Sets", preprint.
H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem for k=3", Graph Theory and Combinatorics (Cambridge, 1988), Discrete Math. 75 (1989), no. 1-3, 227–241.
H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem", J. Anal. Math. 57 (1991), 64–119.
B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint.
K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint.
R. McCutcheon, "The conclusion of the proof of the density Hales-Jewett theorem for k=3", unpublished.
I first learnt about consistent hashing from a paper
1 that my manager at Google suggested I read. Recently I started thinking about it again as I was considering implementing it in one of my projects. I wanted to make sure that it was reliable enough but, looking again at the paper and searching online, I could only find simple introductions and implementations of the technique and no in-depth theoretical analyses. I thus ended up figuring this out myself, and wrote it down here for future reference.

TL;DR
In a consistent hashing setting with \(N\) servers and \(K\) markers per server, the fraction of load handled by each server is distributed according to the beta distribution \(Β(K, (N-1)K)\). In order for every server to stay within \(1+\epsilon\) of its expected load with probability \(1 - \delta\) a safe choice of \(K\) is \(N (\epsilon^2 \delta)^{-1}\).
What is consistent hashing?
Consistent hashing is a solution to the following problem. Suppose we have some clients that need to perform operations on some objects. These objects are shared among many servers. We want the clients to be able to determine which server hosts which object in a distributed and consistent way. That is, a client should be able to decide this autonomously, without consulting a central authority (or even other clients/servers), and all clients should agree on these decisions.
If that were all, then perhaps the simplest solution would be to put a canonical order on the servers (on which all clients can agree) and then, for every object, compute a hash of it, take the result modulo the number of servers and use that value as an index to look up a server in the sorted list.
But we want to add another requirement: we need to be able to add servers to the pool or remove them dynamically at any time and we want this to cause the least number of objects to be moved from one server to another. The above modulo-based approach fails at this.
Consistent hashing takes some inspiration from the modulo-based approach. It consists in imagining a circle of unit length and mapping each server to a random uniformly distributed point on it (since all clients need to agree on the position of this point, it won’t actually be random but it will be based on a hash that is sufficiently uniform). The server that hosts a given object is determined as follows: compute a hash of the object, map it to a point on the circle and then walk clockwise along the circle until you reach the first marker of one of the servers; that server is the one the object maps to. Equivalently, we can say that a server’s marker covers the arc of the circle that “precedes” it (in clockwise direction) until the previous server’s marker. That server will then host all objects whose hashes fall onto that arc.
When using hashes this technique allows clients to deterministically compute the object-to-server function using only the servers’ and objects’ hashes, hence every client can perform this calculation locally and agree on the result. Moreover, when adding a server to the pool, its marker will fall on the arc covered by some other server’s marker and cut that arc in two, with one portion staying covered by the other server’s marker and another portion going under the control of the new server’s marker. Hence the objects that see their “ownership” transferred are only those whose hashes fall in the arc that the new server took over from the other one. The reverse happens when a server leaves the pool. Hence the only transferred objects are the ones belonging to the server that joins or leaves the pool, which is the optimal behavior.
Since the technique is (pseudo-)random there’s inherent uncertainty in its performance. In particular, while each server is expected to receive on average the same amount of objects (and thus of load), it may happen that its marker covers a very long arc and that it therefore receives many more objects (and thus requests) than other servers. To hedge against this risk it is common to have every server place more than one marker on the circle, as if there were many “virtual copies” of it. The larger the number of copies the more likely its actual load is to be close to the expected ideal average.
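A minimal Python sketch of the scheme described so far (illustrative only; SHA-256 stands in for whatever sufficiently uniform hash the clients agree on):

```python
import bisect
import hashlib

def point(key):
    """Map a string to a deterministic, uniform-looking point on the unit circle."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], 'big') / 2**64

class ConsistentHashRing:
    def __init__(self, servers, markers_per_server=100):
        # Each server places several "virtual copy" markers on the circle.
        self._markers = sorted(
            (point(f'{s}#{j}'), s)
            for s in servers
            for j in range(markers_per_server))

    def lookup(self, obj):
        """Walk clockwise from the object's point to the first server marker."""
        i = bisect.bisect_right(self._markers, (point(obj),))
        return self._markers[i % len(self._markers)][1]  # wrap around the circle
```

Adding a server to the ring only reassigns the objects that its new markers take over; every other object keeps its old owner, and `markers_per_server` plays the role of \(K\) in the analysis below.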
Suppose we have \(N\) servers, each with \(K\) markers. My question is…
How much load does one server get?
Let’s call \(R_j^{(i)}\) the length of the arc covered by the \(j\)-th marker of the \(i\)-th server. We have of course that \(\sum_{i=1}^N \sum_{j=1}^K R_j^{(i)} = 1\). The random variables \(R_j^{(i)}\) are not independent but they are identically distributed. This is enough to tell us that \(\operatorname{E}(R_j^{(i)}) = (NK)^{-1}\) by symmetry. Let’s analyze the distribution of \(R_j^{(i)}\) in more detail.
We can suppose without loss of generality that the \(j\)-th marker of the \(i\)-th server falls on “coordinate zero” of the circle (if it didn’t, we could just shift the coordinate system). The length of the arc it covers equals the coordinate of the marker that follows it, that is, the one with the smallest coordinate among all the others. Those other markers fall uniformly on the circle, so what we’re looking for is the minimum of \(NK-1\) independent values uniformly distributed on \([0, 1]\).
We’ll later need a generalization of this, so let’s do the extra work now. Consider \(M\) other markers (rather than \(NK - 1\)) and suppose the interval on which they fall has length \(L\) (rather than \(1\)). Let \(T_{L,M}\) be a random variable defined as the minimum of \(M\) independent random variables uniformly distributed on \([0, L]\). We have that \[ \Pr(T_{L,M} \ge t) = \left(\frac{L - t}{L}\right)^{M} \] since all \(M\) uniformly distributed random variables need to fall in \([t, L]\) for that event to happen, each of them does so with probability \(\frac{L-t}{L}\) and, as they are all independent, the total probability is the product of the individual ones.
From that we can deduce the probability density function \(\tau_{L,M}\) of \(T_{L,M}\), which is \[ \tau_{L,M}(t) = \frac{\mathrm{d}}{\mathrm{d}t} \Pr(T_{L,M} \le t) = \frac{\mathrm{d}}{\mathrm{d}t} \left(1 - \Pr(T_{L,M} \ge t) \right) = - \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{L - t}{L}\right)^{M} = \frac{M}{L} \left(\frac{L - t}{L}\right)^{M-1} \]
Since \(R_j^{(i)}\) is distributed as \(T_{1,NK-1}\), the pdf of \(R_j^{(i)}\) is \(\tau_{1,NK-1}\).
We introduce \(S_d^{(i)}\) as \(\sum_{j=1}^d R_j^{(i)}\). We are interested in the distribution of \(S_K^{(i)}\) but we will show a more general result, namely that the pdf \(\sigma_d\) of \(S_d^{(i)}\) is: \[ \sigma_d(x) = \frac{(NK-1)!}{(d-1)! (NK-d-1)!} x^{d-1} (1-x)^{NK-d-1} \]
Since \(S_1^{(i)} = R_1^{(i)} = T_{1,NK-1}\) we have already shown the result for \(d = 1\), because \(\sigma_1(x) = \tau_{1,NK-1}(x)\). Let’s show it for every \(d\) by induction: \[ \begin{split} \sigma_d(x) &= \int_0^x \sigma_{d-1}(t) \tau_{1-t,NK-d}(x - t) \,\mathrm{d}t \\ &= \int_0^x \frac{(NK-1)!}{(d-2)! (NK-d)!} t^{d-2} (1-t)^{NK-d} \frac{NK-d}{1-t} \left(\frac{1-x}{1-t}\right)^{NK-d-1} \,\mathrm{d}t \\ &= \frac{(NK-1)!}{(d-2)! (NK-d-1)!} (1-x)^{NK-d-1} \int_0^x t^{d-2} \,\mathrm{d}t \\ &= \frac{(NK-1)!}{(d-2)! (NK-d-1)!} (1-x)^{NK-d-1} \frac{x^{d-1}}{d-1} \end{split} \] The integral ranges over all possible values \(t\in[0,x]\) of the sum of the first \(d-1\) arcs, and multiplies the probability that those arcs have lengths summing to \(t\) by the probability that the \(d\)-th arc has length \(x-t\) (within the remaining length of the circle, which is \(1-t\)).
It turns out that \(\sigma_d\) is in fact the pdf of the beta distribution with parameters \(\alpha = d\) and \(\beta = NK - d\) (recall that for natural numbers \(\Gamma(n) = (n-1)!\)). Hence \(S_K^{(i)}\) follows the distribution \(\mathrm{Beta}(K, (N-1)K)\), which is exactly what we were looking for.
As it is a very well known distribution we can just look up its properties on Wikipedia rather than computing them ourselves. We find out that the expected value of \(S_K^{(i)}\) is \(N^{-1}\) (which we already knew) and that its variance is \[ \operatorname{Var}(S_K^{(i)}) = \frac{N-1}{N^2 (NK + 1)} \]
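These formulas are easy to sanity-check with a small Monte Carlo simulation (my own addition; the parameter values are arbitrary):

```python
import random

def server_loads(N, K, trials=3000, seed=0):
    """Simulate the total arc length owned by server 0 (N servers, K markers each)."""
    rng = random.Random(seed)
    loads = []
    for _ in range(trials):
        markers = sorted((rng.random(), s) for s in range(N) for _ in range(K))
        prev = markers[-1][0] - 1.0  # wrap around the unit circle
        load = 0.0
        for pos, s in markers:
            if s == 0:               # a marker covers the arc preceding it
                load += pos - prev
            prev = pos
        loads.append(load)
    return loads

N, K = 10, 20
loads = server_loads(N, K)
mean = sum(loads) / len(loads)
var = sum((x - mean) ** 2 for x in loads) / len(loads)
# theory: E = 1/N = 0.1 and Var = (N-1)/(N^2 (N*K + 1)) ≈ 4.5e-4
```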
That’s all quite interesting, but…
What do we make of it?
Suppose that given \(N\) we want to choose a \(K\) that guarantees that a server will receive at most \(1+\epsilon\) times its ideal average load with probability at least \(1 - \delta\). What I mean by this is: in ideal conditions every server should have markers whose covered arcs sum up to exactly one \(N\)-th of the circle and should thus receive one \(N\)-th of the load; for, say, \(\epsilon = 0.1\) and \(\delta = 0.001\), we are requiring that a server stays under 110% of its ideal load with 99.9% probability.
Using Chebyshev’s inequality we can figure out what values of \(K\) are appropriate. This will most likely be an overestimate, but it will give a simple closed-form answer. The inequality can be stated in the following form: \[ \Pr(|X - \operatorname{E}(X)| \ge \epsilon \operatorname{E}(X)) \le \frac{\operatorname{Var}(X)}{\epsilon^2 \operatorname{E}^2(X)} \]
If we make the right-hand side smaller than \(\delta\) then the probability on the left-hand side will be as well, which is what we want. Let’s express that condition and plug in the values of \(\operatorname{E}(X)\) and \(\operatorname{Var}(X)\): \[ \frac{N-1}{\epsilon^2 (NK + 1)} \le \delta \] which means that any \(K\) such that \[ K \ge \left(1 - \frac{1}{N}\right)\frac{1}{\epsilon^2 \delta} - \frac{1}{N} \] will work. We can lose the dependency on \(N\) by taking a slightly larger value of \(K\), namely \[ K = \frac{1}{\epsilon^2 \delta} \]
In our previous example with \(\epsilon = 0.1\) and \(\delta = 0.001\) this gives us \(K = 100000\).
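As a sketch, the bound can be wrapped in a tiny helper (the function name is mine):

```python
import math

def markers_per_server(eps, delta):
    """Chebyshev-based overestimate: smallest integer K with K >= 1/(eps^2 * delta)."""
    return math.ceil(1.0 / (eps * eps * delta))

print(markers_per_server(0.1, 0.001))  # 100000
```

Note how quickly the count grows as either tolerance shrinks: the bound is quadratic in \(1/\epsilon\) and linear in \(1/\delta\).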
Observe that we only looked at a single server. If we want every server’s load to stay within a \(1 + \epsilon\) factor of its ideal average with probability \(1 - \delta'\) then, by applying the union bound, we find that each of them must satisfy the previous condition with \(\delta = \frac{\delta'}{N}\). This again is an overestimate.
Tighter bounds can be obtained. The tightest bound involves inverting the cumulative distribution function of the beta distribution. As far as I know this can only be done numerically, and therefore doesn’t lead to nice closed-form expressions.
David Karger et al. “Web caching with consistent hashing.” Computer Networks 31.11 (1999): 1203–1213.
|
Since $(a_n)$ is bounded, $S$ is nonempty and bounded above. So by the Axiom of Completeness (AoC) there exists a least upper bound $s=\sup S$.
Consider $s-\frac1k$ and $s+\frac1k$, where $k$ is an arbitrary but fixed natural number. Since any number smaller than $s$ is not an upper bound of $S$, there exists $s'\in S$ with $s-\frac1k < s'$. By the property defining $S$ and the transitivity of $<$, the number $s-\frac1k$ also has that property: $s-\frac1k < a_n$ for infinitely many terms $a_n$. Applying similar reasoning to $s+\frac1k$, we see that $s+\frac1k \notin S$, so there are at most finitely many terms $a_n$ satisfying $s+\frac1k < a_n$; equivalently, all but finitely many terms satisfy $a_n \le s+\frac1k$. Combining these two parts we get: for all $k\in \mathbb N$, there are infinitely many terms of $(a_n)$ satisfying $s-\frac1k < a_n \le s+\frac1k$.
The last statement gives us a hint of how to build a subsequence of $(a_n)$. For every $k\in \mathbb N$ we can pick a term from the infinitely many terms that satisfy that inequality. For example we can pick $a_{n_1}$ from $\{a_n : s-1<a_n\le s+1\}$. After picking $a_{n_k}$, we only need to require $n_{k+1}>n_k$ to ensure this is indeed a subsequence, with no repetition and no going backward. (This can always be done because at every step we still have infinitely many terms to choose from.)
Then we need to check that this subsequence $(a_{n_k})$ converges; intuitively its limit should be $s=\sup S$. To satisfy the inequality $|a_{n_k}-s|<\epsilon$ for every $\epsilon >0$, choose $K>\frac1\epsilon$. If $k\ge K$ then $\frac1k < \epsilon$, which implies $$s-\epsilon < s-\frac1k < a_{n_k} \le s+\frac1k < s+\epsilon$$
So the Bolzano–Weierstrass theorem has been proved using the Axiom of Completeness.
|
I am trying to figure out what is the largest possible order that an element of the multiplicative group $\bmod{n}$ can have if $n=p_1^{k_1} \cdot p_2^{k_2} \cdot \dots \cdot p_m^{k_m}$.
then I can prove that
$$U(n) \cong U(p_1^{k_1}) \times U(p_2^{k_2}) \times \dots \times U(p_m^{k_m})$$
Now I know from my number theory book, that if $p_i$ is an odd prime, then $p_i^{k_i}$ has a primitive root, which makes $U(p_i^{k_i})$ cyclic. So the largest order of an element in that group is $\phi(p_i^{k_i})$. I also know, that $2$ and $4$ have primitive roots, so the largest orders in $U(2)$ and $U(2^2)$ are $\phi(2)$ and $\phi(2^2)$ respectively.
However, I am having trouble with $U(2^k)$ where $k\geq 3$.
I was able to show that for any $x\in U(2^k), \quad x^{2^{k-2}}\equiv 1 \pmod{2^k}$.
But how can I show that there **must** be an element of order $2^{k-2}$ in $U(2^k)$ for $k\geq3$?
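This is not a proof, but a quick empirical check in plain Python (the helper name is mine) that the bound is sharp: the element $3$ already has order $2^{k-2}$ modulo $2^k$ for every $k\ge 3$, and the same is true of $5$:

```python
def mult_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    x, k = a % n, 1
    while x != 1:
        x = x * a % n
        k += 1
    return k

for k in range(3, 13):
    assert mult_order(3, 2**k) == 2**(k - 2)  # 3 attains the maximal order in U(2^k)
```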
|
Deterministic models. Clarification of the question:
The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics.
My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows:
Did any of these people actually read the work and can anyone tell me where a mistake was made?
Now the details. I can't help being disgusted by the "many world" interpretation, or the Bohm-de Broglie "pilot waves", and even the idea that the quantum world must be non-local is difficult to buy. I want to know what is really going on, and in order to try to get some ideas, I construct some models with various degrees of sophistication. These models are of course "wrong" in the sense that they do not describe the real world, they do not generate the Standard Model, but one can imagine starting from such simple models and adding more and more complicated details to make them look more realistic, in various stages.
Of course I know what the difficulties are when one tries to underpin QM with determinism. Simple probabilistic theories fail in an essential way. One or several of the usual assumptions made in such a deterministic theory will probably have to be abandoned; I am fully aware of that. On the other hand, our world seems to be extremely logical and natural.
Therefore, I decided to start my investigation at the other end. Make assumptions that later surely will have to be amended; make some simple models, compare these with what we know about the real world, and then modify the assumptions any way we like.
The no-go theorems tell us that a simple cellular automaton model is not likely to work. One way I tried to "amend" them, was to introduce information loss. At first sight this would carry me even further away from QM, but if you look a little more closely, you find that one still can introduce a Hilbert space, but it becomes much smaller and it may become holographic, which is something we may actually want. If you then realize that information loss makes any mapping from the deterministic model to QM states fundamentally non-local—while the physics itself stays local—then maybe the idea becomes more attractive.
Now the problem with this is that again one makes too big assumptions, and the math is quite complicated and unattractive. So I went back to a reversible, local, deterministic automaton and asked: To what extent does this resemble QM, and where does it go wrong? With the idea in mind that we will alter the assumptions, maybe add information loss, put in an expanding universe, but all that comes later; first I want to know what goes wrong.
And here is the surprise: In a sense, nothing goes wrong. All you have to assume is that we use quantum states, even if the evolution laws themselves are deterministic. So the probability distributions are given by quantum amplitudes. The point is that, when describing the mapping between the deterministic system and the quantum system, there is a lot of freedom. If you look at any one periodic mode of the deterministic system, you can define a common contribution to the energy for all states in this mode, and this introduces a large number of arbitrary constants, so we are given much freedom.
Using this freedom I end up with quite a few models that I happen to find interesting. Starting with deterministic systems I end up with quantum systems. I mean real quantum systems, not any of those ugly concoctions. On the other hand, they are still a long way off from the Standard Model, or even anything else that shows decent, interacting particles.
Except string theory. Is the model I constructed a counterexample, showing that what everyone tells me about fundamental QM being incompatible with determinism, is wrong? No, I don't believe that. The idea was that, somewhere, I will have to modify my assumptions, but maybe the usual assumptions made in the no-go theorems will have to be looked at as well.
I personally think people are too quick in rejecting "superdeterminism". I do reject "conspiracy", but that might not be the same thing. Superdeterminism simply states that you can't "change your mind" (about which component of a spin to measure), by "free will", without also having a modification of the deterministic modes of your world in the distant past. It's obviously true in a deterministic world, and maybe this is an essential fact that has to be taken into account. It does not imply "conspiracy".
Does someone have a good, or better, idea about this approach, without name-calling? Why are some of you so strongly opinionated that it is "wrong"? Am I stepping on someone's religious feelings? I hope not.
References:
"Relating the quantum mechanics of discrete systems to standard canonical quantum mechanics", arXiv:1204.4926 [quant-ph];
"Duality between a deterministic cellular automaton and a bosonic quantum field theory in $1+1$ dimensions", arXiv:1205.4107 [quant-ph];
"Discreteness and Determinism in Superstrings", arXiv:1207.3612 [hep-th].
Further reactions on the answers given. (Writing this as "comment" failed, then writing this as "answer" generated objections. I'll try to erase the "answer" that I should not have put there...)
First: thank you for the elaborate answers.
I realise that my question raises philosophical issues; these are interesting and important, but not my main concern. I want to know why I find no technical problem while constructing my model. I am flattered by the impression that my theories were so "easy" to construct. Indeed, I made my presentation as transparent as possible, but it wasn't easy. There are many dead alleys, and not all models work equally well. For instance, the harmonic oscillator can be mapped onto a simple periodic automaton, but then one does hit upon technicalities: The hamiltonian of a periodic system seems to be unbounded above and below, while the harmonic oscillator has a ground state. The time-reversible cellular automaton (CA) that consists of two steps $A$ and $B$, where both $A$ and $B$ can be written as the exponent of physically reasonable Hamiltonians, itself is much more difficult to express as a Hamiltonian theory, because the BCH series does not converge. Also, explicit $3+1$ dimensional QFT models resisted my attempts to rewrite them as cellular automata. This is why I was surprised that the superstring works so nicely, it seems, but even here, to achieve this, quite a few tricks had to be invented.
@RonMaimon. I here repeat what I said in a comment, just because there the 600 character limit distorted my text too much. You gave a good exposition of the problem in earlier contributions: in a CA the "ontic" wave function of the universe can only be in specific modes of the CA. This means that the universe can only be in states $\psi_1,\ \psi_2,\ ...$ that have the property $\langle\psi_i\,|\,\psi_j\rangle=\delta_{ij}$, whereas the quantum world that we would like to describe, allows for many more states that are not at all orthonormal to each other. How could these states ever arise? I summarise, with apologies for the repetition:
We usually think that Hilbert space is separable, that is, inside every infinitesimal volume element of this world there is a Hilbert space, and the entire Hilbert space is the product of all these. Normally, we assume that any of the states in this joint Hilbert space may represent an "ontic" state of the Universe. I think this might not be true. The ontic states of the universe may form a much smaller class of states $\psi_i$; in terms of CA states, they must form an orthonormal set. In terms of "Standard Model" (SM) states, this orthonormal set is not separable, and this is why, locally, we think we have not only the basis elements but also all superpositions. The orthonormal set is then easy to map back onto the CA states.
I don't think we have to talk about a non-denumerable number of states, but the number of CA states is extremely large. In short: the mathematical system allows us to choose: take all CA states, then the orthonormal set is large enough to describe all possible universes, or choose the much smaller set of SM states, then you also need many superimposed states to describe the universe. The transition from one description to the other is natural and smooth in the mathematical sense.
I suspect that, this way, one can see how a description that is not quantum mechanical at the CA level (admitting only "classical" probabilities), can "gradually" force us into accepting quantum amplitudes when turning to larger distance scales, and limiting ourselves to much lower energy levels only. You see, in words, all of this might sound crooky and vague, but in my models I think I am forced to think this way, simply by looking at the expressions: In terms of the SM states, I could easily decide to accept all quantum amplitudes, but when turning to the CA basis, I discover that superpositions are superfluous; they can be replaced by classical probabilities without changing any of the physics, because in the CA, the phase factors in the superpositions will never become observable.
@Ron I understand that what you are trying to do is something else. It is not clear to me whether you want to interpret $\delta\rho$ as a wave function. (I am not worried about the absence of $\mathrm{i}$, as long as the minus sign is allowed.) My theory is much more direct; I use the original "quantum" description with only conventional wave functions and conventional probabilities.
(New since Sunday Aug. 20, 2012)
There is a problem with my argument. (I correct some statements I had put here earlier.) I have to work with two kinds of states: 1: the template states, used whenever you do quantum mechanics; these allow for any kind of superposition; and 2: the ontic states, the set of states that form the basis of the CA. The ontic states $|n\rangle$ are all orthonormal: $\langle n|m\rangle=\delta_{nm}$, so no superpositions are allowed for them (unless you want to construct a template state of course). One can then ask the question: How can it be that we (think we) see superimposed states in experiments? Aren't experiments only seeing ontic states?
My answer has always been: Who cares about that problem? Just use the rules of QM. Use the templates to do any calculation you like, compute your state $|\psi\rangle$, and then note that the CA probabilities, $\rho_n=|\langle n|\psi\rangle|^2$, evolve exactly as probabilities are supposed to do.
That works, but it leaves the question unanswered, and for some reason, my friends on this discussion page get upset by that.
So I started thinking about it. I concluded that the template states can be used to describe the ontic states, but this means that, somewhere along the line, they have to be reduced to an orthonormal set. How does this happen? In particular, how can it be that experiments strongly suggest that superpositions play extremely important roles, while according to my theory, somehow, these are plutoed by saying that they aren't ontic?
Looking at the math expressions, I now tend to think that orthonormality is restored by "superdeterminism", combined with vacuum fluctuations. The thing we call vacuum state, $|\emptyset\rangle$, is not an ontological state, but a superposition of many, perhaps all, CA states. The phases can be chosen to be anything, but it makes sense to choose them to be $+1$ for the vacuum. This is actually a nice way to define phases: all other phases you might introduce for non-vacuum states now have a definite meaning.
The states we normally consider in an experiment are usually orthogonal to the vacuum. If we say that we can do experiments with two states, $A$ and $B$, that are not orthonormal to each other, this means that these are template states; it is easy to construct such states and to calculate how they evolve. However, it is safe to assume that, actually, the ontological states $|n\rangle$ with non-vanishing inner product with $A$, must be different from the states $|m\rangle$ that occur in $B$, so that, in spite of the template, $\langle A|B\rangle=0$. This is because the universe never repeats itself exactly. My physical interpretation of this is "superdeterminism": If, in an EPR or Bell experiment, Alice (or Bob) changes her (his) mind about what to measure, she (he) works with states $m$ which all differ from all states $n$ used previously. In the template states, all one has to do is assume at least one change in one of the physical states somewhere else in the universe. The contradiction then disappears.
The role of vacuum fluctuations is also unavoidable when considering the decay of an unstable particle.
I think there's no problem with the above arguments, but some people find it difficult to accept that the working of their minds may have any effect at all on vacuum fluctuations, or the converse, that vacuum fluctuations might affect their minds. The "free will" of an observer is at risk; people won't like that.
But most disturbingly, this argument would imply that what my friends have been teaching at Harvard and other places, for many decades as we are told, is actually incorrect. I want to stay modest; I find this disturbing.
A revised version of my latest paper was now sent to the arXiv (will probably be available from Monday or Tuesday). Thanks to you all. My conclusion did not change, but I now have more precise arguments concerning Bell's inequalities and what vacuum fluctuations can do to them.
|
Is it correct to represent Higgs VEV as the coherent state?
I mean, suppose a translationally invariant coherent state
$$
|\alpha\rangle = Ne^{\alpha \hat{a}^{\dagger}_{\mathbf p =0}}|\alpha =0\rangle, \quad N: \quad \langle \alpha|\alpha\rangle = 1 $$
of massless particles with zero dispersion relation. In principle, one could state that the non-shifted Higgs doublet operator acquires its VEV on the state $|\alpha \rangle$. But is this correct?
|
Let $\{g_i:X\subset\mathbb{R}\rightarrow\mathbb{R};\;i=1,...,m\}$ be a linearly independent set of real functions.
Given $n$ points $(x_1,y_1),...,(x_n,y_n)\in X\times\mathbb{R}$, consider the following function
$$G(\beta_1,...,\beta_m)=\sum_{k=1}^n\left(\sum_{l=1}^m\left[ \beta_lg_l(x_k)-y_k\right]\right)^2$$
I need to prove that $G$ attains a local minimum. For this I need to show that the Hessian matrix of $G$ is positive definite, but I'm not able to. So far I'm only able to prove that it's positive semidefinite (for this, I'm doing things like this and this). Can someone help me to show that the Hessian matrix of $G$ is positive definite?
Notice that
$$\frac{\partial G}{\partial \beta_i}=\sum_{k=1}^n\left(2g_i(x_k)\sum_{l=1}^m\left[ \beta_lg_l(x_k)-y_k\right]\right)$$
and
$$\frac{\partial^2 G}{\partial \beta_j\,\partial \beta_i}=2\sum_{k=1}^ng_i(x_k)g_j(x_k)$$
Hence, the Hessian matrix of $G$ is
$$H=2 \begin{bmatrix} \sum_{k=1}^ng_1(x_k)^2 & \sum_{k=1}^ng_1(x_k)g_2(x_k) & \cdots & \sum_{k=1}^ng_1(x_k)g_m(x_k)\\ \sum_{k=1}^ng_2(x_k)g_1(x_k) & \sum_{k=1}^ng_2(x_k)^2 & \cdots & \sum_{k=1}^ng_2(x_k)g_m(x_k)\\ & & \vdots & \\ \sum_{k=1}^ng_m(x_k)g_1(x_k) & \sum_{k=1}^ng_m(x_k)g_2(x_k) & \cdots & \sum_{k=1}^ng_m(x_k)^2 \end{bmatrix}$$
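A remark that may help (my own reformulation, not part of the original question): writing $A$ for the $n\times m$ matrix with entries $A_{ki}=g_i(x_k)$, the Hessian above is exactly $H=2A^\top A$, which is always positive semidefinite, and positive definite precisely when the columns of $A$, i.e. the vectors $(g_i(x_1),\dots,g_i(x_n))$, are linearly independent; independence of the functions $g_i$ alone is not enough, it must survive sampling at the points $x_k$. A small numpy sketch with made-up basis functions and points:

```python
import numpy as np

# hypothetical basis functions and sample points
g = [lambda x: 1.0, lambda x: x, lambda x: x ** 2]
xs = np.array([0.0, 1.0, 2.0, 3.0])

A = np.array([[gi(x) for gi in g] for x in xs])  # A[k, i] = g_i(x_k)
H = 2 * A.T @ A                                   # exactly the Hessian above

# positive definite <=> columns of A linearly independent
print(np.linalg.eigvalsh(H).min() > 0)  # True for these points
# with xs = [1.0, 1.0, 1.0, 1.0] all columns coincide: only semidefinite
```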
Thanks.
|
Two mistakes.
For all $\epsilon > 0$ there will indeed exist an $n_\epsilon\in \mathbb N$ so that $\alpha -\epsilon < n_\epsilon \le \alpha$ and $n_\epsilon < \alpha +\epsilon$, but that does not mean $n_\epsilon < \alpha + \epsilon$ for all $\epsilon$.
$n_\epsilon < \alpha + \epsilon$ is only true for that $n_\epsilon$ and that $\epsilon$. For a different value of $\delta > 0$ it will follow that there is an $n_\delta$ so that $n_\delta < \alpha + \delta$, but $n_\delta$ could be a completely different value than $n_\epsilon$.
Second.
$n\ge \alpha$ does not contradict that $\alpha$ is a least upper bound. $\alpha$ being an upper bound and $n \in \mathbb N$ mean that $\alpha \ge n$; together with $n \ge \alpha$ this just gives $n = \alpha$. That's not a contradiction.
......
So here's a hint.
let $0 < \epsilon <1$.
Let $n_\epsilon$ be a natural number with $\alpha - \epsilon < n_\epsilon \le \alpha$.
Now I'll tell you right off the bat, you will never find a contradiction with $n_\epsilon$. You can note that $n_\epsilon < \alpha+\epsilon$ if you want but that won't be a contradiction nor will it help you.
You will find nothing wrong with $n_\epsilon$.
Try to find a
different natural number that does cause a contradiction.
Second hint. Don't bother trying to find a different $\delta > 0$ and a different $n_\delta$ so that $\alpha - \delta < n_\delta \le \alpha$. If you do that you will find something very important about $n_\epsilon$ vs. $n_\delta$ but it
won't be a contradiction.
Third hint: You have $\alpha -\epsilon < n_\epsilon \le \alpha$. Try to find an $m\in \mathbb N$ so that $m > \alpha$. That was your original goal after all. How does knowing $\alpha - \epsilon < n_\epsilon \le \alpha$ help you find $m$ so that $m > \alpha$?
=====
Fourth Hint: FORGET ANALYSIS! How would a five year old answer this?
Try it. Go up to a five year old and ask her "I'm thinking of a real big number. How do you know that there is a bigger one?" I bet you she will say the answer that is the utter
key to this proof!
|
First of all I define the convention I use.
The matrices $\bar{\sigma}^\mu$ I will use are $\{ Id, \sigma^i \}$, where $\sigma^i$ are the Pauli matrices and $Id$ is the $2\times 2$ identity matrix. I will use the chiral Fierz identity $$(\bar{\sigma}^\mu)[\bar{\sigma}^\nu] = (\bar{\sigma}^\mu][\bar{\sigma}^\nu) + (\bar{\sigma}^\nu][\bar{\sigma}^\mu) - \eta^{\mu\nu}(\bar{\sigma}^\lambda][\bar{\sigma}_\lambda) + i\epsilon^{\mu\nu\rho\lambda}(\bar{\sigma}_\lambda][\bar{\sigma}_\rho)$$ written in the Takahashi notation.
Let us consider the left-handed component $\chi$ of a massless fermion field $\psi$ and the operator defined as $$\mathcal{O} = \chi^\dagger \bar{\sigma}^\mu\chi (\partial_\mu\partial_\nu\chi^\dagger)\bar{\sigma}^\nu\chi.$$
If I use the chiral Fierz identity I get $\mathcal{O} = 2\mathcal{O}$, where I used $\partial_\mu\partial^\mu \chi = 0$. So, I get $\mathcal{O}=0$.
This equality, if true, suggests to me that there is another way to show that this operator vanishes for massless fermions. Is there any way? Do you suggest anything?

This post imported from StackExchange Physics at 2016-05-07 13:22 (UTC), posted by SE-user FrancescoS
|
Hiya!
Somewhere in thermodynamics, the thermal wavelength [tex] \lambda = \frac {h}{\sqrt{2 \pi m k T}} [/tex] appears, apparently out of nothing (well, out of the canonical state integral).
Does anyone know what the physical meaning of this wavelength is?
Also, what is the physical meaning of the fugacity [tex] z = e^{\beta \mu} [/tex] ?
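For intuition on the first question: the thermal wavelength is essentially the de Broglie wavelength of a particle carrying a typical thermal momentum, and quantum (degeneracy) effects set in once it becomes comparable to the interparticle spacing. A quick numerical sketch (my own addition; the helium-4 mass is approximate):

```python
import math

h  = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J / K

def thermal_wavelength(m, T):
    """lambda = h / sqrt(2 pi m k T), in metres."""
    return h / math.sqrt(2 * math.pi * m * kB * T)

m_He = 6.646e-27  # helium-4 mass, kg
print(thermal_wavelength(m_He, 300))  # ~5e-11 m: far below atomic spacing, classical regime
print(thermal_wavelength(m_He, 4))    # ~4e-10 m: comparable to spacing, quantum regime
```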
|
Ex.13.2 Q6 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question
A solid iron pole consists of a cylinder of height \(220\,\rm{cm}\) and base diameter \(24\,\rm{cm},\) which is surmounted by another cylinder of height \(60\,\rm{cm}\) and radius \(8\,\rm{cm}\).
Find the mass of the pole, given that \(1\,\rm {cm^3}\) of iron has approximately \(8\,\rm{g}\) mass.
\(\left(\text{Use } \pi = 3.14\right)\)
Text Solution

What is known?
A solid iron pole consisting of a cylinder of height \( 220\,\rm{cm}\) and base diameter \(24\,\rm {cm}\) which is surmounted by another cylinder of height \( 60\,\rm {cm}\) and radius \(8\,\rm {cm}\)
Mass of \(1\;\rm{cm}^3\) iron \(= 8\rm\,{g}\)
What is unknown?
The mass of the solid iron pole
Reasoning:
Draw the figure to visualize the iron pole
Visually it’s clear that
Volume of the solid iron pole \(=\) volume of larger cylinder \(+\) volume of smaller cylinder
Mass of iron in the pole \(=8\,\rm{g}\) \(\;\times\;\)volume of the solid iron pole in \({\text{c}}{{\text{m}}^3}\)
We will find the volume of the solid by using formula;
Volume of the cylinder\( = \pi {r^2}h\)
where
\(r\) and \(h\) are the radius and height of the cylinder respectively.

Steps:
Radius of larger cylinder, \(R = \frac{24\,\rm{cm}}{2} = 12\,\rm{cm}\)

Height of larger cylinder, \(H = 220\,\rm{cm}\)

Radius of smaller cylinder, \(r = 8\,\rm{cm}\)

Height of smaller cylinder, \(h = 60\,\rm{cm}\)
Volume of the solid iron pole \(=\) volume of larger cylinder \(+\) volume of smaller cylinder
\[\begin{align}&= \pi {R^2}H + \pi {r^2}h\\&= \pi \left( 12 \times 12 \times 220 + 8 \times 8 \times 60 \right)\rm{cm^3}\\&= 3.14 \times \left( 31680 + 3840 \right)\,\rm{cm^3}\\&= 3.14 \times 35520\,\rm{cm^3}\\&= 111532.8\,\rm{cm^3}\end{align}\]
Mass of \(1\,\rm{cm^3}\) iron is \(8\,\rm{g}\)

Mass of iron in the pole \(= 8\,\rm{g}\;\times\) volume of the solid iron pole in \(\rm{cm^3}\)

\[\begin{align}&= 8\,\rm{g} \times 111532.8\\&= 892262.4\,\rm{g}\\&= \frac{892262.4}{1000}\,\rm{kg}\\&= 892.2624\,\rm{kg}\end{align}\]
Mass of iron in the pole is \(892.26\,\rm{ kg}\)
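The arithmetic above can be cross-checked in a few lines of plain Python (using the same \(\pi = 3.14\)):

```python
pi = 3.14  # value used in the solution

R, H = 24 / 2, 220   # larger cylinder: radius and height in cm
r, h = 8, 60         # smaller cylinder

volume = pi * (R ** 2 * H + r ** 2 * h)  # cm^3
mass_kg = 8 * volume / 1000              # 8 g per cm^3, converted to kg

print(round(volume, 1), round(mass_kg, 4))  # 111532.8 892.2624
```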
|
When proving that conditions $A$ and $B$ are equivalent, it is often an arbitrary choice whether to first prove $A\implies B$ or $B\implies A$. Are there examples where the second implication uses the first in a nontrivial way, and becomes significantly more difficult without invoking the former?
The proof of the Shephard-Todd theorem proceeds exactly that way. The theorem can be stated as:
For a finite subgroup $G\subset GL(n,\mathbb C)$ the following are equivalent:
(A) $G$ is generated by complex reflections (i.e., elements $g$ with $\text{rank}(g-\mathbf1_n)=1$).
(B) The ring of invariants $\mathbb C[x_1,\ldots,x_n]^G$ is a polynomial ring.
First, Shephard-Todd prove $A\Rightarrow B$ by classifying all complex reflection groups and then proceed by inspection (Chevalley found later a conceptual proof of this direction).
To prove $B\Rightarrow A$ let $H\subseteq G$ be the subgroup generated by all complex reflections. From $A\Rightarrow B$ one gets that $\mathbb C[x_1,\ldots,x_n]^H=\mathbb C[y_1,\ldots,y_n]=:P$ is a polynomial ring. By assumption $\mathbb C[x_1,\ldots,x_n]^G=\mathbb C[z_1,\ldots,z_n]=:Q$. The ring extension $P|Q$ is, by construction, unramified in codimension one. On the other hand it is ramified along the zero set of the Jacobian $\det(\partial z_i/\partial y_j)$. So $P=Q$ and therefore $H=G$.
This is not exactly about an equivalence, but the same question can be raised for an existence and uniqueness theorem: does the proof of existence use the already-proven uniqueness, or the converse? I have one example of each situation:
In elliptic linear PDE, one often applies the Lax–Milgram theorem. There the proof of existence does involve the (quite obvious) uniqueness.
More involved: the KdV equation $u_t+uu_x+u_{xxx}=0$ admits an infinite list of conserved quantities, parametrized by a natural integer (Gardner, Greene, Kruskal & Miura) $$I_n=\int_{-\infty}^{+\infty}\left((d^nu/dx^n)^2+\cdots\right)dx.$$ I proved that the list is complete (there do not exist other invariants in the form of an integral involving $u$ and finitely many derivatives), and the proof needs the knowledge of the list.
Lafforgue's proof of the Langlands correspondence over function fields provides an important example.
The Langlands correspondence over a function field can be expressed, roughly, as an "if and only if" statement:
An assignment of conjugacy classes in $GL_n$ to the closed points of an algebraic curve $X$ over a finite field $k$ is the set of Frobenius conjugacy classes of some irreducible $n$-dimensional $\operatorname{Gal}(k(X))$-representation if and only if it is the set of Satake parameters of some cuspidal automorphic form on $GL_n(k(X))$.
L. Lafforgue proved both directions of this. He proved the "if" direction directly, by a geometric argument involving the moduli space of shtukas, building on work of Drinfeld and others. He deduced the "only if" direction from the "if" direction for smaller $n$ via the converse theorem.
V. Lafforgue generalized the "if" direction to an arbitrary reductive group, but the second part breaks down in the case of a general group, as the converse theorem is not available there.
This is perhaps overly elementary since it is something that might appear in an undergraduate course, but it's what came immediately to my mind.
For $\Omega \subseteq \mathbb{C}$ simply connected and open, a continuous function $f\colon \Omega \to \mathbb{C}$ is holomorphic if and only if $$\int_\gamma f(z)\,dz = 0$$ for any closed curve $\gamma$ in $\Omega$.
The forward direction is Cauchy's theorem. As a consequence one obtains the famous integral formula and from that the fact that holomorphic functions are analytic.
The reverse direction is a particular case of Morera's theorem, which can be easily proven as follows: observe that the condition on integrals implies $f$ has an antiderivative, which is necessarily holomorphic and therefore analytic. Since $f$ is the derivative of an analytic function, it is itself analytic.
I asked about this before on MSE: https://math.stackexchange.com/q/1500691/101420
There are some arguments that the Pythagorean theorem and its converse provide an example of what you're asking. (Given a triangle with sides $p, q, r$ lying opposite corners $P, Q, R$, statement $A$ would be 'angle $R$ is a right angle' and statement $B$ would be $p^2 + q^2 = r^2$.) For a more extensive discussion of this see my answer to my MSE question linked above and the links therein.
I am still interested in more examples, though!
|
Belyi's theorem states that the following properties of a nonsingular projective algebraic curve $X$ are equivalent:
1) $X$ is defined over $\overline{\mathbb{Q}};$
2) There exists a meromorphic function $\phi: X\to\mathbb{P}^1(\mathbb{C})$ ramified at most over $0,1,$ and $\infty$;
3) $X$ is isomorphic to $\Gamma \backslash \mathbb{H}$ (compactified at cusps) for a subgroup $\Gamma$ of finite index $[\mathrm{PSL}_2(\mathbb{Z}) : \Gamma]<\infty.$
The remaining question is: $$\boxed{\text{ Is there a way to treat singularities in this or a similar framework? }}$$
The following of my original questions have been answered:
Can this be generalized to arbitrary projective nonsingular varieties of higher dimension? (I discussed this with one professor here in Goettingen. That seems to be ongoing research. Please see also the comment of David Roberts.)
What compactification do they mean here? $\Gamma \backslash (\mathbb{H} \cup \mathbb{Q} \cup \{\infty\})$! (see the answer of Robin Chapman)
What is a nice reference for the proof of Belyi's theorem? (see answer of YBL and Koeck + the comments of Emerton)
How does the Galois group $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ enter the picture? (see comment of Ariyan and answer of YBL)
Where can I find nice examples where these computations have been done explicitly? (see answer of Andy Putnam and JSE)
|
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
|
The definition of the entropy is
$$H(Y) = -\sum p(y_j)\log_2 p(y_j)\,.$$
Now my textbook says that to compute the entropy for each attribute, we consider the grouping of the data by that attribute; then in each group we calculate the entropy (with respect to the classes in each subgroup) and take a weighted sum. In this way the attribute we choose yields the most purity of data within its subgroups.
Then I wondered why we cannot use the entropy in this way:
We consider each class and calculate the entropy of each class (with respect to the groups they belong to) and do a weighted average.
For example, let's say by putting attribute $t_1$ in the root, the data are split into two sets and our class labels are $+$ and $-$.
group 1      group 2
{30+  1-}    {30+  1-}
(the numbers show the frequency of each class)
Method 1:
\begin{align*} H(\text{group 1}) &= -\left(\tfrac1{31} \log\tfrac1{31}+\tfrac{30}{31}\log\tfrac{30}{31}\right)\\ H(\text{group 2}) &= -\left(\tfrac{1}{31} \log\tfrac1{31}+\tfrac{30}{31}\log\tfrac{30}{31}\right)\qquad \text{(just the same)}\\ H(f1) &= \tfrac{31}{62} H(\text{group 1}) + \tfrac{31}{62} H(\text{group 2})\,. \end{align*}
Method 2:
\begin{align*} H(\text{label }+) &= -\left(\tfrac{30}{60}\log\tfrac{30}{60} + \tfrac{30}{60}\log\tfrac{30}{60}\right)\\ H(\text{label }-) &= -\left(\tfrac12\log\tfrac12 + \tfrac12\log\tfrac12\right)\\ H(f1) &= \tfrac{60}{62} H(\text{label }+) + \tfrac{2}{62} H(\text{label }-)\,. \end{align*}
Now I know that the second method does not measure the purity of each subgroup but it can help us. For example,
        f1                         f2
group 1      group 2      group 1      group 2
{30+  1-}    {30+  1-}    {30+  1-}    {30-  1+}
In this case method 1 thinks both of the attributes do a good job of making pure subgroups, but we also need the ability to make good decisions, meaning differentiating between class labels with the help of our attribute. f2 does this much better, and I think method 2 can capture that.
Here method 1 gives equal entropy while method 2 gives much lower entropy for f2.
But again method 2 is not enough by itself, since it cannot measure purity.
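To make the comparison concrete, here is a quick sketch (helper names are mine, base-2 logs as in the definition above) computing both methods on the f1/f2 example:

```python
import math

def entropy(counts):
    """Shannon entropy (base 2) of a list of frequencies."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# (count of +, count of -) in each group for the two example attributes
f1 = [(30, 1), (30, 1)]   # group 1, group 2
f2 = [(30, 1), (1, 30)]   # same purity, but group 2 flips the labels

def method1(groups):
    """Weighted entropy of the class distribution inside each group."""
    n = sum(sum(g) for g in groups)
    return sum(sum(g) / n * entropy(g) for g in groups)

def method2(groups):
    """Weighted entropy of the group distribution inside each class label."""
    n = sum(sum(g) for g in groups)
    return sum(sum(label) / n * entropy(label) for label in zip(*groups))
```

Method 1 gives the same value for both attributes (about 0.206), while method 2 gives 1.0 for f1 and about 0.206 for f2, matching the observation above.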
My question: am I right? If yes then how can we combine the use of both these measures to choose the best attribute for each node?
|
I've been reading David Marker's Introduction to Model Theory, and found
Vaught's two-cardinal theorem (4.3.34): if a theory $T$ has a $(\kappa,\lambda)$-model, where $\kappa > \lambda \geq \aleph_0$, then $T$ has an $(\aleph_1,\aleph_0)$-model. The proof is made using Vaught pairs, and uses the following chain: existence of a $(\kappa,\lambda)$-model $\Rightarrow$ existence of a Vaught pair $\Rightarrow$ existence of a countable Vaught pair $\Rightarrow$ existence of an $(\aleph_1,\aleph_0)$-model.

However, when constructing the $(\aleph_1,\aleph_0)$-model from a countable Vaught pair, it uses the formula that defines the Vaught pair (which may have parameters), and asserts that the same formula can be used to define the $(\aleph_1,\aleph_0)$-model (which should not have parameters). So according to the theorem, a Vaught pair of dense linear orders without endpoints should assure the existence of an $(\aleph_1,\aleph_0)$-model. I've been able to exhibit a Vaught pair of DLO w/o endpoints.

QUESTIONS

1. Is there an $(\aleph_1,\aleph_0)$-model of the theory? My guess is not, since it seems to me that all elements have the same type over $\emptyset$.

2. If the proof of the theorem is wrong, is there an easy workaround? (Not using saturated models, but Vaught pairs)

3. If the proof of the theorem is correct, why can you use a formula with parameters to construct the $(\aleph_1,\aleph_0)$-model?
You're right that Marker is sloppy on this point. However, if you work through the proofs, you'll see that the formulas witnessing the $(\kappa,\lambda)$ model and the Vaughtian pair and the $(\aleph_1,\aleph_0)$-model are all the same. So if you start with a formula without parameters witnessing the $(\kappa,\lambda)$ model, you'll get a Vaughtian pair and an $(\aleph_1,\aleph_0)$-model without parameters. On the other hand, if the formula in your Vaughtian pair has parameters, you'll get a formula
with parameters witnessing the $(\aleph_1,\aleph_0)$-model, or, if you prefer, you can add the parameters as constants to the language and get a true $(\aleph_1,\aleph_0)$-model.
|
Errata List for “Intuitive Guide to Fourier Analysis and Spectral Estimation”, First Edition. I feel embarrassed that after reading the material more than twenty times, I did not notice so many glaring errors. Well, that’s my excuse anyway! Your indulgence is requested until the next edition.
Here I have listed all the errors and typos that have already been corrected in the Kindle version but remain in the First edition of the book. Please add your discoveries below in the comment section.
These apply only to the printed book. They have already been corrected in the electronic version.
(Equations are given in Latex)
\item Chapter 1: page 21, below Eq. 1.16, the last line should say {Now the whole expression uses only the cosine waves and the index, $k$ spans not just from $0$ to $\infty$, but from $-\infty$ to $+\infty$.}
\item Chapter 1: page 17, Fig. 1.13(b and d) the sign for the second term should be negative, i.e. $-\frac{1}{3}\cos(3 \omega_0 t)$ and $-\frac{1}{7}\cos(7 \omega_0 t)$.
\item Chapter 2: Eq. 2.11 through Eq. 2.22 should have a subscript 0 on the frequency, $\omega_0$ instead of $\omega$ alone. This subscript makes it clear that $\omega$ is not a variable but a constant, the starting or fundamental frequency.
\item Chapter 2: Eq. 2.21 and in Eq. 2.22 the sign of the exponential is reversed. The first one should be minus and second, a plus.
\item Chapter 2: Eq. 2.24 is missing the term $t$, should be $e^{\j k \omega \underline{t}}$.
\item Page 59: The duty cycle term is $\tau/T$ and not $\tau$ alone.
\item Page 57: Example 2.9, the signal should be \newline
$x(t) = 3+ 6\cos(4\pi t+2)+\underline{4}\j\sin(4\pi t+3)- 3\j \sin(10\pi t+1.5)$ \newline Note the underlined quantity $4$ for the third term, $\j \sin(4\pi t+3)$.
\item Page 58: The calculated value of the coefficient $e^{-\j 4\pi t}$ should be 2.552. The corresponding Fig. 2.14 has been changed.
\item Page 68: Example 2.15 The correct answer is -0.142 instead of 0.142.
\item Page 74: Figure 3.3 (a), the y axis has been moved to the center of the shape to be consistent with figures (b) and (c).
\item Page 75: Eq. 3.8, added subscript s to Time T.
\item Page 83: Last line, the equation number should be (3.17) instead of (3.19).
\item Page 86: First paragraph, last line, should be “with only 1 Hz” and not 2 Hz.
\item Page 92: Third paragraph, third line, should be $11 \pi /5$ and not $12 \pi /5$
\item Page 97: Eq. (3.37), Eq. (3.38) et al. are missing the coefficient $1/N_0$ in front of the RHS.
\item Page 102: The correct values for the coefficients are: 1.5, .612, 0.5 and 0.612, as shown in the figure 3.25.
\item Page 103: Correct coefficients are: 0.75, 0.56, 0.25 and 0.56.
\item Page 104: Added subscript 0 to N to clarify that this is the fundamental period.
\item Page 149: N is equal to ${(2M+1)}$ and not $\frac{(2M+1)} {2}$.
\item Page 157: Eq.(5.22) is missing the coefficient $\frac{1}{N_0}$ in the front.
\item Page 190: The index of the top equation should be 6 and not 7.
|
In general $$F(\alpha) = \sum_{n=1}^\infty \frac{\cot(n\pi \alpha)}{n^3}$$converges and can be explicitly calculated when $\alpha$ is a
quadratic irrational. The convergence in this case is easily seen as $\alpha$ has irrationality measure $2$. More precisely, $F(\alpha)/\pi^3 \in \mathbb{Q}(\alpha)$ when $\alpha$ is quadratic irrational.
The procedure below also works when $n^3$ is replaced by any $n^{2k+1}$.
Let $$g(z) = \frac{\cot(z\pi \alpha)\cot(z\pi)}{z^3}$$ then $g$ has simple poles at non-zero integer multiples of $1$ and $1/\alpha$, and $5$-th order pole at $0$. Let $R_N$ denote a large rectangle with corners at $N(\pm 1 \pm i)$. Then contour integration gives$$\tag{1}\sum_{\substack{n\in R_N \\ n\neq 0}} \frac{\cot(n\pi\alpha)}{\pi n^3} + \sum_{\substack{n/\alpha\in R_N \\ n\neq 0}} \frac{\alpha^2\cot(n\pi/\alpha)}{\pi n^3}-\frac{\pi ^2 \left(\alpha^4-5 \alpha^2+1\right)}{45 \alpha} = \frac{1}{2\pi i} \int_{R_N} g(z)dz$$I claim there exists a sequence of integers $N_1, N_2, \cdots$ such that RHS tends to $0$. Note that $\cot(z\pi)$ is uniformly bounded on the annulus $R_{N+3/4} - R_{N+1/4}$ when $N$ is an integer. Hence by equidistribution of $n\alpha$ modulo $1$, we can find integers $N_i$ such that both $\cot(z\pi\alpha)$ and $\cot(z\pi)$ are uniformly bounded on $R_{N_i+3/4} - R_{N_i+1/4}$.
Since we already know the series converges, from $(1)$:$$\tag{2}F(\alpha) + \alpha^2F(\frac{1}{\alpha}) = \underbrace{\frac{\pi ^3 \left(\alpha^4-5 \alpha^2+1\right)}{90 \alpha}}_{\rho(\alpha)}$$Note that obviously $F(\alpha+1)=F(\alpha)$.
Let the continued fraction expansion of $\alpha$ be given by$$\alpha = [a_0;a_1,a_2,\cdots]$$Successive complete quotients are denoted by:$$\zeta_0 = [a_0;a_1,a_2,\cdots]\qquad \zeta_1 = [a_1;a_2,a_3,\cdots]\qquad \zeta_2 = [a_2;a_3,a_4,\cdots]$$Then $(2)$ and periodicity imply for $k\geq 0$:$$\tag{3} F(\zeta_{k+1}) + \zeta_{k+1}^2 F(\zeta_k) = \rho(\zeta_{k+1})$$If the continued fraction of $\alpha$ is of the form$$\alpha = [a_0;a_1,\cdots,a_m,\overline{b_1,\cdots,b_r}]$$then $\zeta_{m+r+1} = \zeta_{m+1}$, so we eventually enter a cycle. $(3)$ gives a system of $m+r+1$ linear equations (by setting $k=0,\cdots,m+r$), with $m+r+1$ variables: $F(\zeta_0), F(\zeta_1),\cdots,F(\zeta_{m+r})$.$$\begin{cases}F(\zeta_1) + \zeta_1^2 F(\zeta_0) &= \rho(\zeta_1) \\ F(\zeta_2) + \zeta_2^2 F(\zeta_1) &= \rho(\zeta_2) \\ \cdots \\F(\zeta_{m+1}) + \zeta_{m+1}^2 F(\zeta_{m+r}) &= \rho(\zeta_{m+1})\end{cases}$$Solving it gives the value of $F(\zeta_0)=F(\alpha)$.
For $\alpha = \sqrt{61} = [7;\overline{1,4,3,1,2,2,1,3,4,1,14}]$, we have$$\begin{aligned} \zeta_0 = \sqrt{61} \qquad \zeta_1 &= \frac{1}{12}(7+\sqrt{61}) \\\zeta_2 = \frac{1}{3}(5+\sqrt{61}) \qquad \zeta_3 &= \frac{1}{4}(7+\sqrt{61})\\\zeta_4 = \frac{1}{9}(5+\sqrt{61}) \qquad \zeta_5 &= \frac{1}{5}(4+\sqrt{61})\\\zeta_6 = \frac{1}{5}(6+\sqrt{61}) \qquad \zeta_7 &= \frac{1}{9}(4+\sqrt{61})\\\zeta_8 = \frac{1}{4}(5+\sqrt{61}) \qquad \zeta_9 &= \frac{1}{3}(7+\sqrt{61}) \\\zeta_{10} = \frac{1}{12}(5+\sqrt{61}) \qquad \zeta_{11} &= \frac{1}{12}(7+\sqrt{61}) \end{aligned}$$ solving the above system gives the result.
A few examples: for $\alpha = (1+\sqrt{5})/2$, the continued fraction has period $1$, direct substitution into $(2)$ gives$$\sum_{n=1}^\infty \frac{\cot(n\pi \frac{1+\sqrt{5}}{2})}{n^3} = -\frac{\pi ^3}{45 \sqrt{5}}$$Complexity of result increases as period of $\alpha$ increases. For $\alpha = \sqrt{211}$, which has period $26$:$$\sum_{n=1}^\infty \frac{\cot(n\pi \sqrt{211})}{n^3} = \frac{128833758679 \pi ^3}{383254107060 \sqrt{211}}$$For $\alpha = \sqrt{1051}$, with period $50$:$$\sum_{n=1}^\infty \frac{\cot(n\pi \sqrt{1051})}{n^3} = \frac{47332791433774124737806821 \pi ^3}{589394448213331173141730140 \sqrt{1051}}$$For $\alpha = \sqrt{3361}$, with period $95$:$$\sum_{n=1}^\infty \frac{\cot(n\pi \sqrt{3361})}{n^3} = -\frac{21398204683553311136959481444654882139602077 \pi ^3}{862144802276358317644836682685386007371980 \sqrt{3361}}$$When $\alpha$ is not a "pure" quadratic irrational, for example, $\alpha = 1/4 + \sqrt{7}/{3}$, the result involves "constant term" (because of non-trivial automorphism of $\mathbb{Q}(\alpha)$):$$\sum_{n=1}^\infty \frac{\cot(n\pi(\frac{1}{4}+\frac{\sqrt{7}}{3}))}{n^3} = \frac{13 \pi ^3}{288}+\frac{104771 \pi ^3}{1244160 \sqrt{7}}$$The closed-form here follows immediately by noting $\csc x = \cot (x/2) - \cot x$.
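As a numerical sanity check (a rough sketch, not part of the derivation), the golden-ratio identity above can be compared against partial sums of the series; reducing $n\alpha$ modulo $1$ before taking the cotangent keeps the evaluation well-conditioned in floating point:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def F_partial(alpha, N):
    """Partial sum of F(alpha) = sum_{n>=1} cot(n*pi*alpha)/n^3."""
    s = 0.0
    for n in range(1, N + 1):
        # cot(n*pi*alpha) depends only on n*alpha mod 1
        s += 1.0 / (math.tan(math.pi * ((n * alpha) % 1.0)) * n**3)
    return s

closed_form = -math.pi**3 / (45 * math.sqrt(5))   # the value derived above
```

Since $|\cot(n\pi\varphi)|$ grows at most linearly in $n$ for a quadratic irrational, the terms are $O(1/n^2)$ and the truncation error is roughly $O(1/N)$; a few hundred thousand terms already agree with the closed form to several decimal places.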
I wrote a Mathematica code to evaluate this sum. The command cotsum[Sqrt[61]] evaluates the sum in the question. You can try other quadratic irrationals as well. The case when the denominator is $n^5$ or $n^{2k+1}$ can be similarly handled by modifying $f$.
This algorithm is not very efficient: it works by creating plenty of variables (one for each term of the continued fraction until repetition occurs), then solving the resulting equations. For $\alpha$ with period around $100$, it typically takes around 3-6 minutes. It would be much more efficient to back-substitute, but I don't have much motivation to optimize it at the moment.
cotsum[x_] /; QuadraticIrrationalQ[x] :=
Module[{a1 = x, list, l, r, i, nlist, f, solution, output, equation,
string}, list = ContinuedFraction[a1];
l = Length[list] - 1;
r = Length[list[[l + 1]]];
i = 1; string = "{";
While[i < l + r + 1, string = string <> "x" <> ToString[i] <> ",";
i++]; string = StringTake[string, StringLength[string] - 1] <> "}";
Do[Evaluate[ToExpression["a" <> ToString[i + 1]]] =
FromContinuedFraction[Drop[list, i]], {i, 1, l - 1}];
nlist = list[[l + 1]];
Do[Evaluate[ToExpression["a" <> ToString[i + l + 1]]] =
FromContinuedFraction[{Flatten[
Append[Drop[nlist, i], Take[nlist, i]]]}], {i, 0, r - 1}];
f[a_] := (1 - 5 a^2 + a^4)/90/a;
equation =
Table[ToExpression[
"x" <> ToString[i + 1] <> "+a" <> ToString[i + 1] <> "^2*x" <>
ToString[i] <> "==f[a" <> ToString[i + 1] <> "]"], {i, 1,
r + l - 1}];
equation =
Append[equation,
ToExpression[
"x" <> ToString[l + 1] <> "+a" <> ToString[l + 1] <> "^2*x" <>
ToString[r + l] <> "==f[a" <> ToString[l + 1] <> "]"]];
solution = Solve[equation, ToExpression[string]]; Clear["a*"];
output = (ToExpression[string][[1]] /. Flatten[solution])*Pi^3;
FullSimplify[output]]
|
I'm reading about induced representations for research. Particularly, I'm trying to get a firm grasp on the finite group case before venturing on to the locally compact case. I've been looking at Wikipedia and more or less get the idea, with one somewhat significant issue: why should we look to $\bigoplus g_iV$ (where the $g_i$ are coset representatives for $H$) when defining the induced representation? Why shouldn't the induced representation of $G$ act on simply $V$, or even $V^{|G|}$, instead of $[G:H]$ copies of $V$?
For a general pair $G, H$ and a representation $V$ of $H$ there may be no way to extend the action of $H$ to an action of $G$ on $V$. By making the $[G:H]$ copies we have enough wiggle room for $G$ to act.
More or less I like to think of it as follows: Take $[G : H]$ copies of the representation indexed by the cosets of $H$. When you act by something in $H$ you act on each copy separately, and when you act by something else you permute the copies according to how that element permutes the cosets of $H$.
This isn't quite correct, as other elements of $G$ not in $H$ will act on the separate copies as well as permute them, but I think it gives good intuition for what is going on. The Wikipedia article gives a more explicit formula for how a general $g \in G$ acts.
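Here is a small numerical sketch of this picture (the coset bookkeeping and helper names are mine): take $G=S_3$, $H$ the cyclic subgroup generated by a 3-cycle, and the one-dimensional representation of $H$ sending the 3-cycle to $\omega = e^{2\pi i/3}$. Building the matrices by the act-and-permute rule gives a genuine 2-dimensional representation of $G$ (in fact the standard irreducible one):

```python
import cmath
from itertools import permutations

def compose(p, q):                  # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, x in enumerate(p):
        q[x] = i
    return tuple(q)

G = list(permutations(range(3)))
e, c, t = (0, 1, 2), (1, 2, 0), (1, 0, 2)   # identity, 3-cycle, transposition
w = cmath.exp(2j * cmath.pi / 3)
chi = {e: 1, c: w, compose(c, c): w * w}    # 1-dim rep of H = <c>
reps = [e, t]                               # coset representatives of H in G

def induced(g):
    """Matrix of g on V + tV: g.(g_i v) = g_j (h v) where g g_i = g_j h."""
    M = [[0j, 0j], [0j, 0j]]
    for i, gi in enumerate(reps):
        x = compose(g, gi)
        for j, gj in enumerate(reps):
            h = compose(inverse(gj), x)
            if h in chi:                    # x lies in the coset g_j H
                M[j][i] = chi[h]
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Checking `induced(compose(g, h)) == matmul(induced(g), induced(h))` over all pairs confirms this really is a homomorphism, and the trace at the 3-cycle is $\omega + \omega^2 = -1$, the character value of the 2-dimensional irreducible of $S_3$.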
This is a more advanced view.
More generally, if you have a ring homomorphism $\phi:R_1\to R_2$, and $M$ is a left $R_1$-module, then there is a left $R_2$-module that is the "induced module" which is:
$$R_2\otimes_{R_1} M$$ which is an $R_2$ module.
The case of $H<G$ is then $R_1=\mathbb C[H], R_2=\mathbb C[G]$ is the case of induced group representations. In that case, $R_2$ is generated by $[G:H]$ elements when it is considered as an $R_1$-module, so this is $[G:H]$ times the 'dimension' of $M$.
If you look at the categories $R_1\text{-Mod}$ and $R_2\text{-Mod}$, there is an obvious restriction functor $F_{\phi}\colon R_2\text{-Mod}\to R_1\text{-Mod}$. I believe, but I'm not sure, that $M\mapsto R_2\otimes_{R_1} M$ is the adjoint of that functor, but I wouldn't swear that is true.
Induction of modules is in a sense related to extension by scalars. Recall that any ring morphism $f \colon R \to S$ induces a functor from left $R$-modules to left $S$-modules by sending an $R$-module $M$ to $S \otimes_R M$, where $S$ is viewed as a $(S, R)$-bimodule via $f$. So the extension by scalar functor is actually just a left tensor functor $L \otimes_R -$ for some $(S, R)$-bimodule $L$.
Now suppose that $G$ is a finite group, $H$ a subgroup of $G$, and $K$ some field. We have a $KH$-module $N$ and want to obtain a $KG$-module the most universal way. Viewing the group ring $KH$ as a subring of $KG$ and then extending by scalars suggests that $KG \otimes_{KH} N$ is our candidate. In fact, you can prove that $\text{Ind}_H^G N \cong KG \otimes_{KH} N$ as $(KG, KH)$-bimodules and more generally that the induction functor $\text{Ind}_H^G \colon KH\text{-Mod} \to KG\text{-Mod}$ between the respective module categories is naturally isomorphic to the left tensor functor $KG \otimes_{KH} -$.
It should also be noted that the other basic representation theory operations of restriction, inflation, and deflation can similarly be defined as tensor functors.
Induced representations are a special case of
extension of scalars. Say you're doing linear algebra with a vector space $V$ defined over a field $K$. Often it's useful to have an algebraically closed field of scalars, since then we can decompose a space into generalized eigenspaces of a given linear map, so what do we do if our scalars aren't algebraically closed? We extend them! To this end, let $L/K$ be any extension of fields: we want the extension-of-scalars-from-$K$-to-$L$ to take the free $K$-vector space on a set $X$, and return the free $L$-vector space on the set $X$. Thus, if our space is comprised of elements which are $K$-linear combinations of a given set of basis vectors, then after extension of scalars the same is true only we have $L$-linear combinations. There is a way to do this independent of choosing a basis, which is to tensor against $L$ over $K$, i.e. $V_L:=L\otimes_KV$. The tensor product is essentially a way to "pretend multiply" vectors in $V$ by scalars in $L$ and allow ourselves linear combinations of these "pretend products." One easily checks this makes $V_L$ an $L$-vector space.
The same can be done with $R$-modules. (Vector spaces are modules over fields.) If $S$ is a ring which contains the ring $R$, and $M$ is any $R$-module, then $S\otimes_RM$ is formed by "pretending" to multiply elements of $M$ by scalars in $S$ (subject to the proviso that scalars in $R$ act the same way they did originally on $M$) and then adding these pretend products, and this turns $S\otimes_RM$ into a module over the bigger ring $S$.
Linear representations of a group $H$ over a field $k$ are basically $k[H]$-modules. If we want to extend the action of $H$ on a $k$-space $V$ to an action of $G$ on it, we need to extend the scalars of $k[H]$ to the scalars of $k[G]$. This is achieved by $k[G]\otimes_{k[H]}V$. It is comprised of linear combinations of vectors $gv$ for $g\in G$, $v\in V$, subject to $(ab)v=a(bv)$, $(a+b)v=av+bv$, $a(v+w)=av+aw$, and the rule that $hv$ for $h\in H$ is exactly as it's defined when $V$ is a $k[H]$-module.
To a category theorist, we notice that "restriction of the action of $G$ to the action of $H$" is basically a forgetful functor from the category of representations of $G$ to the category of representations of $H$. The notion of "free" or "universal" constructions is captured in categorical language as "adjoints to forgetful functors," so if the opposite of restricting actions is inducing them upwards, then we can define the representations of $G$ induced from a representation of $H$ via applying such an adjoint. Frobenius reciprocity (the $\hom$ version) essentially states that $k[G]\otimes_{k[H]}V$ is the left adjoint applied to a representation $V$ of $H$ to induce a representation of $G$.
Whenever there is some definition (or in this case, more precisely a construction), it is indeed a good idea to ask yourself "why does it have to be exactly this way".
To answer such a question, one should start by figuring out what the constructed object should satisfy. In this case, we have a group $G$, a subgroup $H$ and a representation $V$ of $H$. What we want is a representation of $G$, and we would of course like it to have some sort of relation to $V$. Since we have a very nice way to get a representation of $H$ from a representation of $G$ (restriction), this should probably somehow play into this relation.
I will discuss three possible relations one might wish for: The overly optimistic, the overly pessimistic, and the "just right".
The overly optimistic:
If we were to wish for the very best relation we could get, we would like our induced representation $W$ to be such that restricting $W$ to $H$ gives back $V$. This would obviously give us the very best possible relation we could ever hope for, but unfortunately, $G$ need not have any representation with this property (in fact, determining when this is the case is a very interesting topic in the representation theory of finite groups).
The overly pessimistic:
A relation that we should at least require is that if $V$ is simple, then when we restrict the induced module to $H$, then $V$ is either a submodule or a quotient (note that I have not specified anything about the groups or the field we work over, so these need not be equivalent). However, as we will shortly see, we can get something even better than this, and we might as well get all that we can.
The "just right":
In the overly pessimistic version, we were interested in submodules or quotients. But often a more useful thing to consider will be $\operatorname{Hom}$-spaces between modules. In this context, having a certain simple submodule is the same as having a non-zero homomorphism from said simple module (and having it as a quotient is then the same with the arrow in the other direction). So in this way, the overly pessimistic version becomes a statement about certain $\operatorname{Hom}$-spaces being non-zero. To make this more general, we have an $H$-module $V$ and we want a $G$-module $W$ such that whenever we have a $G$-module $M$, we have some sort of comparison between the spaces $\operatorname{Hom}_H(V,M)$ and $\operatorname{Hom}_G(W,M)$ (or with the entries switched). So what sort of comparison do we want? Well, these are vector spaces, so how about asking that they have the same dimension? This turns out to be just the right thing to ask, since it is on the one hand a very strong condition, but on the other, we can actually construct such a $W$ (that the induced module satisfies this is known as Frobenius reciprocity).
Note that to recover the overly pessimistic version, we can take $M = W$ in the above.
As a final note, I would like to add a bit of notation. If we denote restriction from $G$ to $H$ by $\operatorname{res}_H^G$ and induction from $H$ to $G$ by $\operatorname{ind}_H^G$ then the above condition becomes $\operatorname{Hom}_H(V,\operatorname{res}_H^G M)\cong\operatorname{Hom}_G(\operatorname{ind}_H^G V,M)$, or in other words, that induction is left adjoint to restriction (or right adjoint if we switch the entries), at least when everything behaves nicely as functors, which indeed it tends to do.
Also worth noting is that the above gives two possibilities for what we might require of our "induction". A "slightly more optimistic" version could be to require both to hold. And indeed, for finite groups over $\mathbb{C}$, this is what we get, but in general the two need not give the same thing (in fact, we might not always be sure they both exist). For example, when dealing with algebraic groups, we will usually be more interested in the version mentioned in parentheses, since this is somewhat better behaved.
|
I'm reading Kudla's article on the Local Langlands Conjecture for $p$-adic general linear groups, and specifically I'm trying to understand how the ideas of Bernstein-Zelevinsky show that you only need to prove the LLC for supercuspidal representations (on the automorphic side) and irreducible representations (on the Galois side).
The part I can't find a reference on is: getting the $L$-functions to match. In particular, if $\tau_1,\,\ldots,\,\tau_r$ are essentially-square-integrable representations, then they correspond to indecomposable representations $\rho_1',\,\ldots,\,\rho_r'$ on the Galois side. Then the correspondence gives $$Q(\tau_1,\ldots,\, \tau_r) \mapsto \rho_1' \oplus\ldots\oplus \rho_r'$$ (here $Q$ is the Langlands quotient, i.e. the unique irreducible quotient of the representation parabolically induced from $\tau_1\otimes \ldots\otimes \tau_r$), and so for the $L$-functions to match we need $$L(Q(\tau_1,\ldots,\, \tau_r),\,s) = \prod_i L(\rho_i',\,s) = \prod_i L(\tau_i,\,s).$$
The fact that $$L(Q(\tau_1,\ldots,\, \tau_r),\,s) = \prod_i L(\tau_i,\,s)$$ is stated in Kudla's article, but is not proved, and a reference is not provided. I was wondering if anyone could point me in the right direction, or explain why this is true?
(I guess the same question goes for $\epsilon$ factors but these are much more mysterious to me).
EDIT: unnecessary question removed, then edited for clarity.
Thanks!
|
I have $n$ objects $O_i$, each of them having $3$ values, $O_i = (A_i, B_i, C_i)$. I am trying to group them into $k$ groups $P_u$, each described by values $P_u = (A_u, B_u, C_u)$, so as to

$$\text{minimize} \quad \sum_{u=1}^k M_u \left( a A_u + b B_u + c C_u \right)$$

subject to $A_i \leq A_u$, $B_i \leq B_u$, $C_i \leq C_u$ for all $O_i \in P_u$. Here, $M_u$ is the number of objects $O_i$ in group $P_u$ ($\sum_u M_u = n$). $a$, $b$ and $c$ are constants, as $A$, $B$ and $C$ do not have the same weight.
Can anyone link me to a theorem or some literature that would give me an idea of how to solve my problem?
I am not a mathematician or anything of the sort. I am an engineer with what I assume is a trivial optimization problem. Let me clarify what I'm trying to achieve.
I am a structural engineer and I have to design beam bearing plates, each bearing plate having $3$ dimensions (length, width and thickness). In a given building it is not uncommon to need $100$ or more bearing plates, each with their own dimensions. However for the sake of practicality we oversize some and group them so that at the end we only have a few different bearing plates ($k$ groups of plates, $k$ being a number chosen arbitrarily by the engineer and not a variable to optimize). We cannot undersize any plates.
I am trying to find a way to optimize the design of those groups so that the act of grouping the plates wastes as little money as possible, so the values to be optimized are the dimensions of each group of plates.
Factors $a$, $b$ and $c$ come into play because bumping up a plate thickness is more costly than bumping up its length or width.
I hope this clears up any misunderstanding, if not just let me know. Thanks all!
Fedja's possible clarification (I'm not sure if this is what OP had in mind)
There are $n$ triples $(A_i,B_i,C_i)$ of real numbers ($i=1,\dots,n$). We want to partition the index set $\{1,\dots,n\}$ into $k$ subsets $P_u$ so that the sum $$ \sum_u |P_u|[a\max_{i\in P_u}A_i+b\max_{i\in P_u}B_i+c\max_{i\in P_u}C_i] $$ is as small as possible, where $|P_u|$ is the cardinality of $P_u$.
What is an efficient algorithm for doing that?
I would also risk to assume that $a,b,c>0$ (which actually makes them redundant in the mathematical formulation of the problem: just replace $A_i$ by $aA_i$ and so on).
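For very small instances, the minimization in the formulation above can be checked by exhaustive search over all assignments of plates to groups. A minimal sketch (the function name is mine; it enumerates all $k^n$ assignments, so it is only practical for tiny $n$ and $k$, but it makes the objective precise):

```python
from itertools import product

def best_grouping(plates, k, a=1.0, b=1.0, c=1.0):
    """Brute-force the partition of `plates` into k groups that minimizes
    sum over groups of |P_u| * (a*max A + b*max B + c*max C)."""
    n = len(plates)
    best_cost, best_assign = float("inf"), None
    for assign in product(range(k), repeat=n):  # group index for each plate
        cost = 0.0
        for u in range(k):
            members = [plates[i] for i in range(n) if assign[i] == u]
            if members:  # empty groups contribute nothing
                cost += len(members) * (a * max(p[0] for p in members)
                                        + b * max(p[1] for p in members)
                                        + c * max(p[2] for p in members))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign

# Two small plates and one large one: keeping the large plate in its own
# group avoids oversizing the two small ones.
cost, assign = best_grouping([(1, 1, 1), (1, 1, 1), (5, 5, 5)], k=2)
```

For realistic $n$ this blows up combinatorially, which is exactly why pointers to the clustering/partitioning literature are being asked for.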
|
Since the Apollo 11 code is on GitHub, I was able to find the code that looks like an implementation of sine and cosine functions: see here for the command module and here for the lunar lander (it looks like it is the same code).
For convenience, here is a copy of the code:
# Page 1102
BLOCK 02
# SINGLE PRECISION SINE AND COSINE
COUNT* $$/INTER
SPCOS AD HALF # ARGUMENTS SCALED AT PI
SPSIN TS TEMK
TCF SPT
CS TEMK
SPT DOUBLE
TS TEMK
TCF POLLEY
XCH TEMK
INDEX TEMK
AD LIMITS
COM
AD TEMK
TS TEMK
TCF POLLEY
TCF ARG90
POLLEY EXTEND
MP TEMK
TS SQ
EXTEND
MP C5/2
AD C3/2
EXTEND
MP SQ
AD C1/2
EXTEND
MP TEMK
DDOUBL
TS TEMK
TC Q
ARG90 INDEX A
CS LIMITS
TC Q # RESULT SCALED AT 1.
The comment
# SINGLE PRECISION SINE AND COSINE
indicates that the following is indeed an implementation of the sine and cosine functions.
Information about the type of assembler used can be found on Wikipedia.
Partial explanation of the code:
The subroutine
SPSIN actually calculates $\sin(\pi x)$, and
SPCOS calculates $\cos(\pi x)$.
The subroutine
SPCOS first adds one half to the input, and then proceeds to calculate the sine (this is valid because of $\cos(\pi x) = \sin(\pi (x+\tfrac12))$). The argument is doubled at the beginning of the
SPT subroutine. That is why we now have to calculate $\sin(\tfrac\pi2 y)$ for $y=2x$.
The subroutine
POLLEY calculates an almost-Taylor polynomial approximation of $\sin(\tfrac\pi2 x)$. First, we store $x^2$ in the register SQ (where $x$ denotes the input). This is used to calculate the polynomial
$$ ((C_{5/2}\, x^2 + C_{3/2})\, x^2 + C_{1/2})\, x.$$
The values for the constants can be found in the same GitHub repository and are
$$\begin{aligned}C_{5/2} &= .0363551 \approx \left(\frac\pi2\right)^5 \cdot \frac1{2\cdot 5!}\\C_{3/2} &= -.3216147 \approx -\left(\frac\pi2\right)^3 \cdot \frac1{2\cdot 3!}\\C_{1/2} &= .7853134 \approx \frac\pi2 \cdot \frac12\\\end{aligned}$$
which look like the first Taylor coefficients of the function $\frac12 \sin(\tfrac\pi2 x)$.
These values are not exact! So this is a polynomial approximation, which is very close to the Taylor approximation, but even better (see below, also thanks to @uhoh and @zch).
Finally, the result is doubled with the
DDOUBL command, and the subroutine
POLLEY returns an approximation to $\sin(\tfrac\pi2 x)$.
As for the scaling (first halve, then double, ...), @Christopher mentioned in the comments, that the 16-bit fixed-point number could only store values from -1 to +1. Therefore, scaling is necessary. See here for a source and further details on the data representation. Details for the assembler instructions can be found on the same page.
How accurate is this almost-Taylor approximation? Here you can see a plot on WolframAlpha for the sine, and it looks like a good approximation for $x$ from $-0.6$ to $+0.6$. The cosine function and its approximation is plotted here. (I hope they never had to calculate the cosine for a value $\geq \tfrac\pi2$, because then the error would be unpleasantly large.)
@uhoh wrote some Python code, which compares the coefficients $C_{1/2}, C_{3/2}, C_{5/2}$ from the Apollo code with the Taylor coefficients and calculates the optimal coefficients (based on the maximal error for $-\tfrac\pi2 \leq x \leq \tfrac\pi2$ and the quadratic error on that domain). It shows that the Apollo coefficients are closer to the optimal coefficients than the Taylor coefficients.
In this plot the differences between $\sin(\pi x)$ and the approximations (Apollo/Taylor) are displayed. One can see that the Taylor approximation is much worse for $x\geq .3$, but much better for $x\leq .1$. Mathematically, this is not a huge surprise, because Taylor approximations are only locally defined, and therefore they are often only useful close to a single point (here $x=0$).
Note that for this polynomial approximation you only need four multiplications and two additions (MP and AD in the code). For the Apollo Guidance Computer, memory and CPU cycles were only available in small numbers.
There are some ways to increase accuracy and input range that would have been available to them, but they would result in more code and more computation time. For example, exploiting symmetry and periodicity of sine and cosine, using the Taylor expansion for cosine, or simply adding more terms of the Taylor expansion would have improved the accuracy and would also have allowed for arbitrarily large input values.
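The scaling and polynomial steps described above are easy to reproduce in Python. This is a sketch of the arithmetic only, with the constants copied from the repository; the AGC's fixed-point representation, overflow handling, and the ARG90 branch are deliberately not modeled:

```python
import math

# Polynomial coefficients from the Apollo source, approximately
# (pi/2)^n / (2 * n!) with alternating signs, slightly tuned.
C1_2 = 0.7853134
C3_2 = -0.3216147
C5_2 = 0.0363551

def polley(x):
    """Approximate (1/2) * sin(pi/2 * x) via the three-term polynomial."""
    sq = x * x  # stored in register SQ on the AGC
    return ((C5_2 * sq + C3_2) * sq + C1_2) * x

def spsin(x):
    """SPSIN: approximate sin(pi * x) for inputs scaled at pi."""
    y = 2.0 * x             # SPT doubles the argument
    return 2.0 * polley(y)  # DDOUBL doubles the polynomial result

def spcos(x):
    """SPCOS: cos(pi*x) = sin(pi*(x + 1/2)); valid only while x + 1/2
    stays in the polynomial's good range (no ARG90 wrap-around here)."""
    return spsin(x + 0.5)
```

Comparing `spsin(0.25)` against `math.sin(math.pi * 0.25)` shows agreement to roughly four decimal places, consistent with the error plots linked above.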
|
Ex.5.3 Q17 Arithmetic Progressions Solution - NCERT Maths Class 10 Question
In a school, students thought of planting trees in and around the school to reduce air pollution. It was decided that the number of trees, that each section of each class will plant, will be the same as the class, in which they are studying, e.g., a section of class I will plant \(1\) tree, a section of class II will plant \(2\) trees and so on till class XII. There are three sections of each class. How many trees will be planted by the students?
Text Solution What is Known?
The number of trees planted by \(3\) sections of each class (I to XII).
What is Unknown?
Number of trees planted by the students.
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\) Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms.
Steps:
|           | Class I | Class II | Class III | ... | Class XII |
|-----------|---------|----------|-----------|-----|-----------|
| Section A | \(1\)   | \(2\)    | \(3\)     | ... | \(12\)    |
| Section B | \(1\)   | \(2\)    | \(3\)     | ... | \(12\)    |
| Section C | \(1\)   | \(2\)    | \(3\)     | ... | \(12\)    |
| Total     | \(3\)   | \(6\)    | \(9\)     | ... | \(36\)    |
It can be observed that the numbers of trees planted by the students form an AP.
\[3, 6, 9, 12, 15, \ldots, 36\]
First term \(a = 3\) Common difference, \(d = 6 - 3 = 3\) Number of terms, \(n = 12\)
We know that sum of \(n\) terms of AP,
\[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{12}} &= \frac{{12}}{2}\left[ {2 \times 3 + \left( {12 - 1} \right) \times 3} \right]\\& = 6\left[ {6 + 11 \times 3} \right]\\ &= 6\left[ {6 + 33} \right]\\ &= 6 \times 39\\ &= 234\end{align}\]
Therefore, \(234\) trees will be planted by the students.
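The arithmetic can be double-checked with a few lines of Python, computing the direct sum alongside the closed-form \({S_n}\) formula:

```python
# Trees per class: each of the 3 sections of class i plants i trees.
direct = sum(3 * i for i in range(1, 13))

# Closed form: S_n = n/2 * (2a + (n - 1) d) with a = 3, d = 3, n = 12.
n, a, d = 12, 3, 3
closed_form = n * (2 * a + (n - 1) * d) // 2

print(direct, closed_form)  # both give 234
```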
|
I think I've figured this out. The point is that, the rigorous meaning one can draw from the formal covariance of $J^\mu$ is that the momentum-space coefficient functions of $J^\mu$ (i.e. the functions in front of monomials of $a_p$ and $a^\dagger_p$) transform covariantly under the change of variable $p\to \Lambda p$. The covariance of the coefficient functions is unaffected by normal ordering, and is sufficient to give rise to the covariance of $:J^\mu:$. The rest of this answer will be an elaboration of the first paragraph.
Let me first clarify the notations used and the meaning of the formal covariance of the ill-defined current $J^\mu$. I'm going to ignore the spin degrees of freedom in this discussion, but one should see the generalization to include spin only involves a straightforward (but perhaps cumbersome) change of notations. I'm also ignoring the spacetime dependence, that is to say I'm only considering the covariance of $J^\mu(0)$, and the generalization to $J^\mu(x)$ is straightforward and easy.
In the context of my question, $U(\Lambda)$ is defined as such that
$$U(\Lambda) a_{p} U^{-1}(\Lambda)=\sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}.$$
The covariance of $J^\mu$ must be understood in a very formal and specific sense, the sense in which the covariance is formally proved. For example, in the case of a fermionic bilinear:
$$U(\Lambda)J^{\mu}U(\Lambda)^{-1}=U\bar{\psi}\gamma^{\mu}\psi U^{-1}\\ =U\bar{\psi}_iU^{-1}(\gamma^{\mu})_{ij}U \psi_j U^{-1}=\bar{\psi}D(\Lambda)\gamma^{\mu}D(\Lambda)^{-1}\psi= \Lambda^{\mu}_{\ \ \nu}\bar{\psi}\gamma^{\nu}\psi, $$
where $D(\Lambda)$ is the spinor representation of Lorentz group, typically constructed via Clifford algebra. Note in this formal proof, what's important is that, under the change $a_{p}\to \sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}$ (ignoring spin indices of course) the elementary field transforms as $\psi \to D(\Lambda)\psi$. In the proof, no manipulation of operator ordering and commutation relations ever occurs: all we do is to do a change of integration variable, and let the algebraic properties of the coefficient functions take care of the rest. In fact, we'd better not mess with the operator ordering, as it can easily spoil the formal covariance (example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d}p E_{p}(a_p^\dagger a_p+\delta(0))$, see my longest comment under drake's answer).
To explain what's going on in more detail without getting tangled in notational nuisances, let me remind you again that I'll omit the spin degrees of freedom; but it should be transparent enough by the end of the argument that it's readily generalizable to the spinor case, since all that matters is that the coefficient functions (even with spin indices) transform covariantly. The mathematical gist is: after multiplying the elementary fields and grouping c/a operators (during the grouping no operator-ordering procedure should be performed at all, e.g. $a^\dagger(p_1)a(p_2)$ and $a(p_2)a^\dagger(p_1)$ should be treated as two independent terms), a typical monomial term in $J^\mu(0)$ has the form
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\mu(\{p_i\}),$$
where $M$ is a monomial of c/a operators not necessarily normally ordered, but has an ordering directly from the multiplication of elementary fields.
The formal covariance of $J^\mu$ means
$$\Lambda^\mu_{\ \ \nu}\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\nu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(\Lambda p_i), a(\Lambda p_i)\})f^\mu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}q_i\right)\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right) M(\{a^\dagger(q_i), a(q_i)\})f^\mu(\{\Lambda^{-1}q_i\}) ,$$
where $\prod\limits_{i=1}^n {E_{\Lambda^{-1} q_i}}/{E_{q_i}}$ comes from the transformation of measure and $\prod\limits_{i=1}^{m}\sqrt{{E_{q_i}}/{E_{\Lambda^{-1} q_i}}}$ from the transformation of c/a operators in $M$. This is equivalent to
$$f^\mu(\{\Lambda^{-1}q_i\})\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right)=\Lambda^\mu_{\ \ \nu}f^\nu(\{q_i\}).$$
The above equation makes completely rigorous sense since it's a statement about c-number functions. Obviously, this equation is sufficient to prove the covariance of the normal ordering
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right):M(\{a^\dagger(p_i), a(p_i)\}):f^\mu(\{p_i\}),$$
since on the operator part only a change of integration variable is needed for the proof.
So let's recapitulate the logic of this answer:
1. The current is only covariant when written in a certain way, but not in all ways. (recall the free scalar field Hamiltonian example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d} pE_{p}(a_p^\dagger a_p+\delta(0))$, which is formally covariant in the first form but not in the second form.)
2. In that certain way where the current is formally covariant, the formal covariance really means a genuine covariance of the coefficient functions.
3. The covariance of the coefficient functions is sufficient to establish the covariance of the normally ordered current.
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
Tautology (logic)
Philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in 1921 (it had been used earlier to refer to rhetorical tautologies, and continues to be used in that alternate sense). A formula is satisfiable if it is true under at least one interpretation; thus a tautology is a formula whose negation is unsatisfiable. Unsatisfiable statements, both through negation and affirmation, are known formally as contradictions. A formula that is neither a tautology nor a contradiction is said to be logically contingent. Such a formula can be made either true or false based on the values assigned to its propositional variables. The double turnstile notation $\vDash S$ is used to indicate that $S$ is a tautology. Tautology is sometimes symbolized by "V pq", and contradiction by "O pq". The tee symbol $\top$ is sometimes used to denote an arbitrary tautology, with the dual symbol $\bot$ (falsum) representing an arbitrary contradiction.
Tautologies are a key concept in propositional logic, where a tautology is defined as a propositional formula that is true under any possible Boolean valuation of its propositional variables. A key property of tautologies in propositional logic is that an effective method exists for testing whether a given formula is always satisfied (or, equivalently, whether its negation is unsatisfiable).
The definition of tautology can be extended to sentences in predicate logic, which may contain quantifiers, unlike sentences of propositional logic. In propositional logic, there is no distinction between a tautology and a logically valid formula. In the context of predicate logic, many authors define a tautology to be a sentence that can be obtained by taking a tautology of propositional logic and uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). The set of such formulas is a proper subset of the set of logically valid sentences of predicate logic (which are the sentences that are true in every model).

History
The word tautology was used by the ancient Greeks to describe a statement that was true merely by virtue of saying the same thing twice, a pejorative meaning that is still used for rhetorical tautologies. Between 1800 and 1940, the word gained new meaning in logic, and is currently used in mathematical logic to denote a certain type of propositional formula, without the pejorative connotations it originally possessed.
In 1800, Immanuel Kant wrote in his book Logic: "The identity of concepts in analytical judgments can be either explicit (explicita) or non-explicit (implicita). In the former case analytic propositions are tautological."
Here
analytic proposition refers to an analytic truth, a statement in natural language that is true solely because of the terms involved.
In 1884, Gottlob Frege proposed in his
Grundlagen that a truth is analytic exactly if it can be derived using logic. But he maintained a distinction between analytic truths (those true based only on the meanings of their terms) and tautologies (statements devoid of content).
In 1921, in his
Tractatus Logico-Philosophicus, Ludwig Wittgenstein proposed that statements that can be deduced by logical deduction are tautological (empty of meaning) as well as being analytic truths. Henri Poincaré had made similar remarks in Science and Hypothesis in 1905. Although Bertrand Russell at first argued against these remarks by Wittgenstein and Poincaré, claiming that mathematical truths were not only non-tautologous but were synthetic, he later spoke in favor of them in 1918: "Everything that is a proposition of logic has got to be in some sense or the other like a tautology. It has got to be something that has some peculiar quality, which I do not know how to define, that belongs to logical propositions but not to others."
Here
logical proposition refers to a proposition that is provable using the laws of logic.
During the 1930s, the formalization of the semantics of propositional logic in terms of truth assignments was developed. The term tautology began to be applied to those propositional formulas that are true regardless of the truth or falsity of their propositional variables. Some early books on logic (such as Symbolic Logic by C. I. Lewis and Langford, 1932) used the term for any proposition (in any formal logic) that is universally valid. It is common in presentations after this (such as Stephen Kleene 1967 and Herbert Enderton 2002) to use tautology to refer to a logically valid propositional formula, but to maintain a distinction between tautology and logically valid in the context of first-order logic (see below).

Background
Propositional logic begins with propositional variables, atomic units that represent concrete propositions. A formula consists of propositional variables connected by logical connectives in a meaningful way, so that the truth of the overall formula can be uniquely deduced from the truth or falsity of each variable. A valuation is a function that assigns each propositional variable either T (for truth) or F (for falsity). So, for example, using the propositional variables $A$ and $B$, the binary connectives $\lor$ and $\land$ representing disjunction and conjunction respectively, and the unary connective $\lnot$ representing negation, the following formula can be obtained: $(A \land B) \lor (\lnot A) \lor (\lnot B)$. A valuation here must assign to each of $A$ and $B$ either T or F. But no matter how this assignment is made, the overall formula will come out true. For if the first conjunction $(A \land B)$ is not satisfied by a particular valuation, then one of $A$ and $B$ is assigned F, which will cause the corresponding later disjunct to be T.

Definition and examples
A formula of propositional logic is a tautology if the formula itself is always true regardless of which valuation is used for the propositional variables.
There are infinitely many tautologies. Examples include:
$(A \lor \lnot A)$ ("$A$ or not $A$"), the law of the excluded middle. This formula has only one propositional variable, $A$. Any valuation for this formula must, by definition, assign $A$ one of the truth values true or false, and assign $\lnot A$ the other truth value.
$(A \to B) \Leftrightarrow (\lnot B \to \lnot A)$ ("if $A$ implies $B$ then not-$B$ implies not-$A$", and vice versa), which expresses the law of contraposition.
$((\lnot A \to B) \land (\lnot A \to \lnot B)) \to A$ ("if not-$A$ implies both $B$ and its negation not-$B$, then not-$A$ must be false, so $A$ must be true"), which is the principle known as reductio ad absurdum.
$\lnot(A \land B) \Leftrightarrow (\lnot A \lor \lnot B)$ ("if not both $A$ and $B$, then not-$A$ or not-$B$", and vice versa), which is known as De Morgan's law.
$((A \to B) \land (B \to C)) \to (A \to C)$ ("if $A$ implies $B$ and $B$ implies $C$, then $A$ implies $C$"), which is the principle known as syllogism.
$((A \lor B) \land (A \to C) \land (B \to C)) \to C$ ("if at least one of $A$ or $B$ is true, and each implies $C$, then $C$ must be true as well"), which is the principle known as proof by cases.
A minimal tautology is a tautology that is not the instance of a shorter tautology.
$(A \lor B) \to (A \lor B)$ is a tautology, but not a minimal one, because it is an instantiation of $C \to C$.

Verifying tautologies
The problem of determining whether a formula is a tautology is fundamental in propositional logic. If there are $n$ variables occurring in a formula then there are $2^n$ distinct valuations for the formula. Therefore the task of determining whether or not the formula is a tautology is a finite, mechanical one: one need only evaluate the truth value of the formula under each of its possible valuations. One algorithmic method for verifying that every valuation causes this sentence to be true is to make a truth table that includes every possible valuation.
For example, consider the formula $((A \land B) \to C) \Leftrightarrow (A \to (B \to C))$. It contains three propositional variables, $A$, $B$ and $C$, represented by the first three columns of the following table. The remaining columns show the truth of subformulas of the formula above, culminating in a column showing the truth value of the original formula under each valuation.
| $A$ | $B$ | $C$ | $A \land B$ | $(A \land B) \to C$ | $B \to C$ | $A \to (B \to C)$ | $((A \land B) \to C) \Leftrightarrow (A \to (B \to C))$ |
|---|---|---|---|---|---|---|---|
| T | T | T | T | T | T | T | T |
| T | T | F | T | F | F | F | T |
| T | F | T | F | T | T | T | T |
| T | F | F | F | T | T | T | T |
| F | T | T | F | T | T | T | T |
| F | T | F | F | T | F | T | T |
| F | F | T | F | T | T | T | T |
| F | F | F | F | T | T | T | T |
Because each row of the final column shows T, the sentence in question is verified to be a tautology.
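The truth-table procedure is mechanical enough to state in a few lines of Python; here is a small brute-force checker (the function names are mine) applied to the formula above:

```python
from itertools import product

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

def is_tautology(formula, num_vars):
    """Evaluate `formula` under all 2**num_vars valuations;
    returns True iff it is true under every one of them."""
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

# ((A and B) -> C)  <->  (A -> (B -> C)): true in all 8 rows of the table.
f = lambda A, B, C: implies(A and B, C) == implies(A, implies(B, C))
print(is_tautology(f, 3))  # True

# A and (B or not B) is satisfiable but not a tautology (fails when A is false).
g = lambda A, B: A and (B or not B)
print(is_tautology(g, 2))  # False
```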
It is also possible to define a deductive system (proof system) for propositional logic, as a simpler variant of the deductive systems employed for first-order logic (see Kleene 1967, Sec 1.9 for one such system). A proof of a tautology in an appropriate deduction system may be much shorter than a complete truth table (a formula with $n$ propositional variables requires a truth table with $2^n$ lines, which quickly becomes infeasible as $n$ increases). Proof systems are also required for the study of intuitionistic propositional logic, in which the method of truth tables cannot be employed because the law of the excluded middle is not assumed.

Tautological implication
A formula $R$ is said to tautologically imply a formula $S$ if every valuation that causes $R$ to be true also causes $S$ to be true. This situation is denoted $R \models S$. It is equivalent to the formula $R \to S$ being a tautology (Kleene 1967 p. 27).
For example, let $S$ be $A \land (B \lor \lnot B)$. Then $S$ is not a tautology, because any valuation that makes $A$ false will make $S$ false. But any valuation that makes $A$ true will make $S$ true, because $B \lor \lnot B$ is a tautology. Let $R$ be the formula $A \land C$. Then $R \models S$, because any valuation satisfying $R$ makes $A$ true and thus makes $S$ true.
It follows from the definition that if a formula $R$ is a contradiction then $R$ tautologically implies every formula, because there is no truth valuation that causes $R$ to be true and so the definition of tautological implication is trivially satisfied. Similarly, if $S$ is a tautology then $S$ is tautologically implied by every formula.

Substitution
There is a general procedure, the substitution rule, that allows additional tautologies to be constructed from a given tautology (Kleene 1967 sec. 3). Suppose that $S$ is a tautology and for each propositional variable $A$ in $S$ a fixed sentence $S_A$ is chosen. Then the sentence obtained by replacing each variable $A$ in $S$ with the corresponding sentence $S_A$ is also a tautology.
For example, let $S$ be $(A \land B) \lor (\lnot A) \lor (\lnot B)$, a tautology. Let $S_A$ be $C \lor D$ and let $S_B$ be $C \to E$. It follows from the substitution rule that the sentence
$$((C \lor D) \land (C \to E)) \lor (\lnot (C \lor D)) \lor (\lnot (C \to E))$$
is a tautology.
Efficient verification and the Boolean satisfiability problem
The problem of constructing practical algorithms to determine whether sentences with large numbers of propositional variables are tautologies is an area of contemporary research in the area of automated theorem proving.
The method of truth tables illustrated above is provably correct – the truth table for a tautology will end in a column with only T, while the truth table for a sentence that is not a tautology will contain a row whose final column is F, and the valuation corresponding to that row is a valuation that does not satisfy the sentence being tested. This method for verifying tautologies is an effective procedure, which means that given unlimited computational resources it can always be used to mechanistically determine whether a sentence is a tautology. This means, in particular, that the set of tautologies over a fixed finite or countable alphabet is a decidable set.
As an efficient procedure, however, truth tables are constrained by the fact that the number of valuations that must be checked increases as $2^k$, where $k$ is the number of variables in the formula. This exponential growth in the computation length renders the truth table method useless for formulas with thousands of propositional variables, as contemporary computing hardware cannot execute the algorithm in a feasible time period.
The problem of determining whether there is any valuation that makes a formula true is the Boolean satisfiability problem; the problem of checking tautologies is equivalent to this problem, because verifying that a sentence $S$ is a tautology is equivalent to verifying that there is no valuation satisfying $\lnot S$. It is known that the Boolean satisfiability problem is NP-complete, and it is widely believed that there is no polynomial-time algorithm that can solve it. Current research focuses on finding algorithms that perform well on special classes of formulas, or terminate quickly on average even though some inputs may cause them to take much longer.

Tautologies versus validities in first-order logic
The fundamental definition of a tautology is in the context of propositional logic. The definition can be extended, however, to sentences in first-order logic (see Enderton (2002, p. 114) and Kleene (1967 secs. 17–18)). These sentences may contain quantifiers, unlike sentences of propositional logic. In the context of first-order logic, a distinction is maintained between logical validities, sentences that are true in every model, and tautologies, which are a proper subset of the first-order logical validities. In the context of propositional logic, these two terms coincide.
A tautology in first-order logic is a sentence that can be obtained by taking a tautology of propositional logic and uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). For example, because $A \lor \lnot A$ is a tautology of propositional logic, $(\forall x (x = x)) \lor (\lnot \forall x (x = x))$ is a tautology in first-order logic. Similarly, in a first-order language with unary relation symbols $R$, $S$, $T$, the following sentence is a tautology:
$$(((\exists x\, Rx) \land \lnot (\exists x\, Sx)) \to \forall x\, Tx) \Leftrightarrow ((\exists x\, Rx) \to ((\lnot \exists x\, Sx) \to \forall x\, Tx)).$$
It is obtained by replacing $A$ with $\exists x\, Rx$, $B$ with $\lnot \exists x\, Sx$, and $C$ with $\forall x\, Tx$ in the propositional tautology $((A \land B) \to C) \Leftrightarrow (A \to (B \to C))$.
Not all logical validities are tautologies in first-order logic. For example, the sentence
$$(\forall x\, Rx) \to \lnot \exists x\, \lnot Rx$$
is true in any first-order interpretation, but it corresponds to the propositional sentence $A \to B$, which is not a tautology of propositional logic.
See also

Normal forms; related logical topics.

References

Bocheński, J. M. (1959). Précis of Mathematical Logic, translated from the French and German editions by Otto Bird. Dordrecht, South Holland: D. Reidel.
Enderton, H. B. (2002). A Mathematical Introduction to Logic. Harcourt/Academic Press. ISBN 0-12-238452-0.
Kleene, S. C. (1967). Mathematical Logic. Reprinted 2002, Dover Publications. ISBN 0-486-42533-9.
Reichenbach, H. (1947). Elements of Symbolic Logic. Reprinted 1980, Dover. ISBN 0-486-24004-5.
Wittgenstein, L. (1921). "Logisch-philosophische Abhandlung", Annalen der Naturphilosophie (Leipzig), v. 14, pp. 185–262; reprinted in English translation as Tractatus Logico-Philosophicus, New York and London, 1922.
|
Wikipedia says that a linear transformation is a $(1,1)$ tensor. Is this restricting it to transformations from $V$ to $V$ or is a transformation from $V$ to $W$ also a $(1,1)$ tensor? (where $V$ and $W$ are both vector spaces). I think it must be the first case since it also states that a linear functional is a $(0,1)$ tensor and this is a transformation from $V$ to
$R$. If it is the second case, could you please explain why linear transformations are $(1,1)$ tensors.
It's very common in tensor analysis to associate endomorphisms on a vector space with $(1,1)$ tensors, because there exists an isomorphism between the two sets.
Define $E(V)$ to be the set of endomorphisms on $V$.
Let $A\in E(V)$ and define the map $\Theta:E(V)\rightarrow T^1_1(V)$ by \begin{align*} (\Theta A)(\omega,X)&=\omega(AX). \end{align*} We show that $\Theta$ is an isomorphism of vector spaces. Let $\{e_i\}$ be a basis for $V$ and let $\{\varepsilon^i\}$ be the corresponding dual basis. First, we note $\Theta$ is linear by the linearity of $\omega$. To show injectivity, suppose $\Theta A = \Theta B$ for some $A,B\in E(V)$ and let $X\in V$, $\omega \in V^*$ be arbitrary. Then \begin{align*} (\Theta A)(\omega,X)&=(\Theta B)(\omega,X)\\ \\ \iff \omega(AX-BX)&=0. \end{align*} Since $X$ and $\omega$ were arbitrary, it follows that \begin{align*} AX&=BX\\ \iff A&=B. \end{align*} To show surjectivity, suppose $f\in T^1_1$ has coordinate representation $f^j_i \varepsilon^i \otimes e_j$. We wish to find $A\in E(V)$ such that $\Theta A = f$. We simply choose $A\in E (V)$ such that $A$ has the matrix representation $(f^j_i)$. If we write the representation of our vector $X$ and covector $\omega$ as \begin{align*} X&=X^i e_i\\ \omega&=\omega_i \varepsilon^i, \end{align*} we have \begin{align*} (\Theta A)(\omega, X)&=\omega(AX)\\ \\ &=\omega_k \varepsilon^k(f^j_i X^i e_j)\\ \\ &=f^j_i X^i \omega_k \varepsilon^k (e_j)\\ \\ &=f^j_i X^i \omega_k \delta^k_j\\ \\ &=f^k_i X^i \omega_k. \end{align*} However we see \begin{align*} f(\omega,X)&=f(\omega_k\varepsilon^k,X^ie_i)\\ \\ &=\omega_k X^i f(\varepsilon^k,e_i)\\ \\ &=f^k_i X^i \omega_k. \end{align*} Since $X$ and $\omega$ were arbitrary, it follows that $\Theta A = f$. Thus, $\Theta$ is linear and bijective, hence an isomorphism.
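As a quick numerical sanity check of the correspondence $\Theta$ above (a sketch; the particular 2×2 matrix, vector and covector are arbitrary illustrative values), one can verify that $\omega(AX)$ equals the contraction $f^k_{\ i} X^i \omega_k$ when $f$ is taken to be the matrix of $A$:

```python
import numpy as np

# Hypothetical 2x2 endomorphism A, vector X, covector omega (illustrative values)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
X = np.array([1.0, -1.0])
omega = np.array([2.0, 5.0])

# (Theta A)(omega, X) = omega(A X)
lhs = omega @ (A @ X)

# The (1,1) tensor with components f^k_i = A^k_i, contracted as f^k_i X^i omega_k
rhs = np.einsum('ki,i,k->', A, X, omega)

assert np.isclose(lhs, rhs)
```

The `einsum` contraction is exactly the index expression $f^k_{\ i} X^i \omega_k$ derived in the surjectivity argument.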
Let $T : V \mapsto W$. Then define $\tau: V \times W^* \mapsto K$ such that, for $a \in V$ and $\alpha \in W^*$, we have
$$\tau(a, \alpha) = (\alpha \circ T)(a)$$
Note that it's typical to define
tensor to mean a multilinear map that is a function of vectors only in the same vector space, or of covectors in the associated dual space, or some combination of the two. So, we could identify a linear operator $T: V \mapsto V$ with a $(1,1)$ tensor $\tau: V \times V^* \mapsto K$, but in the case that $V$ and $W$ are distinct vector spaces, these would just be some construction of multilinear maps, not tensors.
To summarize as an answer what I wrote in various comments above: first beware that authors differ in their definition of tensor, even when using the same approach, i.e. using the tensor product in this case.
For some authors a tensor is defined only as ...
$$ T\in \underbrace{V \otimes\dots\otimes V}_{n \text{ copies}} \otimes \underbrace{V^* \otimes\dots\otimes V^*}_{m \text{ copies}}$$
From which it makes sense to speak of a type-$(n,m)$ tensor.
For others, a tensor is any...
$$T\in V_1 \otimes\dots\otimes V_d$$
where $V_1, \dots, V_d$ can be different vector spaces, however all must be over the same scalar field. With this latter definition one can speak of an order-$d$ tensor. A type-$(n,m)$ tensor [in the former sense] is a tensor of order $d=n+m$ in the latter sense, but the second definition is broader, for it does not restrict us to a single vector space. In particular, a second-order tensor is an element of $V \otimes W$ where $V$ and $W$ may be two different vector spaces. Type-(1,1) tensors are tensors of second order, but the converse of this statement doesn't make sense. (N.B.: I've updated Wikipedia to reflect these different definitions.)
As for your 2nd question, endomorphisms (linear maps) from a vector space to itself are (isomorphic with) type-(1,1) tensors (detailed proof given here by beedge89), but if you consider homomorphisms (linear maps) between
different vector spaces $V$ and $W$, i.e. $\mathrm{Hom}(V,W)$, these are isomorphic with only a certain class of order-2 tensors, namely with $V^* \otimes W$. If we let $(\phi, w)\in V^* \times W$, then the correspondence is given by $\phi \otimes w \leftrightarrow F_{\phi, w}$, where the latter is a (linear) map defined as $F_{\phi, w} (v) = \phi(v)w$. (Remember that covectors are themselves maps from vectors to scalars, so the formula for $F$ makes sense as it's a product of the scalar $\phi(v)$ with the vector $w$). A detailed proof of the fact that this is an isomorphism is given in Yokonuma (pp. 18-19). Apologies for not including it here.
As you may expect, the result for type-(1,1) tensors also follows as a corollary of this, i.e. $\mathrm{Hom}(V,V)$ is isomorphic with $V^* \otimes V$ (and with $V \otimes V^*$ by commutativity of the tensor product, which is also understood in the sense of an isomorphism between $V \otimes W$ and $W \otimes V$ for any vector spaces $V$ and $W$).
And one important caveat here: this is an isomorphism only for finite-dimensional vector spaces. (The introduction of Yokonuma's book actually says to assume all vector spaces in the book are finite-dimensional unless stated otherwise.) If both $V$ and $W$ are infinite-dimensional, then it turns out $V^*\otimes W$ is only a proper subspace of $\mathrm{Hom}(V,W)$, namely it is the subspace of linear transformation of finite rank.
And to tie this in with bilinear (and in general with multi-linear) maps: there's also a one-one correspondence between
bilinear maps $f : V\times W \to U$ and linear maps $g : V\otimes W \to U$. (For a proof see for instance http://www.landsburg.com/algebra.pdf) That's why second-order tensors are basically said to be just bilinear maps, and in general why order-$d$ tensors are said to be just multi-linear maps.
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
|
Plot of the Chebyshev rational functions for n = 0, 1, 2, 3, 4 for 0.01 ≤ x ≤ 100, log scale.

In mathematics, the Chebyshev rational functions are a sequence of functions which are both rational and orthogonal. They are named after Pafnuty Chebyshev. A rational Chebyshev function of degree $n$ is defined as:

$$R_n(x)\ \stackrel{\mathrm{def}}{=}\ T_n\left(\frac{x-1}{x+1}\right)$$

where $T_n(x)$ is a Chebyshev polynomial of the first kind.

Properties
Many properties can be derived from the properties of the Chebyshev polynomials of the first kind. Other properties are unique to the functions themselves.
Recursion

$$R_{n+1}(x) = 2\,\frac{x-1}{x+1}\,R_n(x) - R_{n-1}(x) \quad \text{for } n \geq 1$$

Differential equations

$$(x+1)^2 R_n(x) = \frac{1}{n+1}\frac{\mathrm{d}}{\mathrm{d}x}R_{n+1}(x) - \frac{1}{n-1}\frac{\mathrm{d}}{\mathrm{d}x}R_{n-1}(x) \quad \text{for } n \geq 2$$

$$(x+1)^2 x\,\frac{\mathrm{d}^2}{\mathrm{d}x^2}R_n(x) + \frac{(3x+1)(x+1)}{2}\frac{\mathrm{d}}{\mathrm{d}x}R_n(x) + n^2 R_n(x) = 0$$

Orthogonality
Plot of the absolute value of the seventh-order (n = 7) Chebyshev rational function for 0.01 ≤ x ≤ 100. Note that there are n zeroes arranged symmetrically about x = 1 and if $x_0$ is a zero, then $1/x_0$ is a zero as well. The maximum value between the zeros is unity. These properties hold for all orders.
Defining:

$$\omega(x)\ \stackrel{\mathrm{def}}{=}\ \frac{1}{(x+1)\sqrt{x}}$$

The orthogonality of the Chebyshev rational functions may be written:

$$\int_0^\infty R_m(x)\,R_n(x)\,\omega(x)\,\mathrm{d}x = \frac{\pi c_n}{2}\,\delta_{nm}$$
where $c_n = 2$ for $n = 0$, $c_n = 1$ for $n \geq 1$, and $\delta_{nm}$ is the Kronecker delta function.

Expansion of an arbitrary function
For an arbitrary function $f(x) \in L^2_\omega$, the orthogonality relationship can be used to expand $f(x)$:

$$f(x) = \sum_{n=0}^\infty F_n R_n(x)$$

where

$$F_n = \frac{2}{c_n \pi} \int_0^\infty f(x)\,R_n(x)\,\omega(x)\,\mathrm{d}x.$$

Particular values

$$\begin{aligned}R_0(x)&=1\\R_1(x)&=\frac{x-1}{x+1}\\R_2(x)&=\frac{x^2-6x+1}{(x+1)^2}\\R_3(x)&=\frac{x^3-15x^2+15x-1}{(x+1)^3}\\R_4(x)&=\frac{x^4-28x^3+70x^2-28x+1}{(x+1)^4}\\R_n(x)&=(x+1)^{-n}\sum_{m=0}^n(-1)^m\binom{2n}{2m}x^{n-m}\end{aligned}$$

Partial fraction expansion

$$R_n(x) = \sum_{m=0}^n \frac{(m!)^2}{(2m)!}\binom{n+m-1}{m}\binom{n}{m}\frac{(-4)^m}{(x+1)^m}$$
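The definition, the recursion, and one of the particular values above can be checked numerically (a sketch using NumPy's Chebyshev class; the test point $x = 3$ is arbitrary):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def R(n, x):
    """Chebyshev rational function R_n(x) = T_n((x - 1)/(x + 1))."""
    return Chebyshev.basis(n)((x - 1.0) / (x + 1.0))

x = 3.0  # arbitrary test point

# Recursion: R_{n+1}(x) = 2 (x-1)/(x+1) R_n(x) - R_{n-1}(x) for n >= 1
for n in range(1, 6):
    assert np.isclose(R(n + 1, x),
                      2 * (x - 1) / (x + 1) * R(n, x) - R(n - 1, x))

# Particular value: R_2(x) = (x^2 - 6x + 1)/(x + 1)^2
assert np.isclose(R(2, x), (x**2 - 6*x + 1) / (x + 1)**2)
```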
|
An abelian group is the same thing as a module over the ring $\mathbb{Z}$ (think about it). The ring $\mathbb{Z}$ is a PID, thus submodules of free $\mathbb{Z}$-modules are free. Reformulated in the context of abelian groups, submodules of free abelian groups are free abelian. You can use this fact to show that every abelian group has a length 2 free resolution (this works over any PID $R$, in particular $R = \mathbb{Z}$):
Let $P_0 = \bigoplus_{m \in M} R_m$ be a direct sum of copies of $R$, one for each element of $M$ (the index is just here for bookkeeping reasons). This is a free $R$-module. This maps to $M$ through $\varepsilon : P_0 \to M$ by defining $\varepsilon_m : R_m \to M$, $x \mapsto x \cdot m$ and extending to the direct sum (coproduct).
The kernel $P_1 = \ker(P_0 \to M)$ is a submodule of the free module $P_0$, hence it is free as $R$ is a PID. Thus you get a free resolution (exact sequence):
$$0 \to P_1 \to P_0 \to M \to 0.$$
In the above proof, notice that "free $\mathbb{Z}$-module" is also the same thing as "free abelian group", so everything works out. You can also choose a set of generators for your module $M$, but I feel it's cleaner by just taking every element of $M$ and be done with it.
As Bernard mentions in the comments, for finitely generated abelian groups it's easy to see from the structure theorem (e.g. $\mathbb{Z}/n\mathbb{Z}$ has the free resolution $0 \to \mathbb{Z} \xrightarrow{\cdot n} \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z} \to 0$).
Other people have already commented on the difference between resolutions of abelian groups and groups in general. Surprisingly enough, subgroups of free groups are free too by the Nielsen–Schreier theorem, and the standard proof even uses algebraic topology! It all comes around. So you can directly adapt the above argument to show that every (not necessarily abelian) group has a free resolution of length at most two:
Let $G$ be a group. Let $P_0 = \bigstar_{g \in G} \mathbb{Z}_g$ be the free product of copies of $\mathbb{Z}$, one for each element of $G$. This is a free group. By the universal property of free groups, this maps to $G$ by sending $1 \in \mathbb{Z}_g$ to $g \in G$. The kernel $P_1 = \ker(P_0 \to G)$ is a subgroup of a free group, thus it is free itself, and you get a free resolution of $G$ of length at most 2:
$$0 \to P_1 \to P_0 \to G \to 0.$$
In the above proof, $\mathbb{Z}$ appears too, but for different reasons: it is the free group on one generator. It also happens to be the free abelian group on one generator, but that's not its role in the proof above. The groups $P_0$ and $P_1$ that appear in the proof are, in general, not free abelian.
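The concrete resolution $0 \to \mathbb{Z} \xrightarrow{\cdot n} \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z} \to 0$ mentioned earlier can be checked on a finite window of integers (a sketch; the window size and $n = 6$ are arbitrary):

```python
# Check exactness of 0 -> Z --(*n)--> Z --(mod n)--> Z/nZ -> 0 on a finite window
n = 6
window = range(-60, 61)

mult_n = lambda x: n * x   # the free-module map P_1 = Z -> P_0 = Z
proj = lambda x: x % n     # the quotient map P_0 = Z -> Z/nZ

# multiplication by n is injective: distinct inputs give distinct outputs
assert len({mult_n(x) for x in window}) == len(window)

# image of (*n) equals kernel of (mod n), restricted to the window
image = {mult_n(x) for x in window if abs(mult_n(x)) <= 60}
kernel = {x for x in window if proj(x) == 0}
assert image == kernel

# the projection is surjective onto Z/nZ
assert {proj(x) for x in window} == set(range(n))
```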
|
Let me add some detail first.
An Anosov automorphism on $\mathbb{R}^2$ is a mapping from the unit square $S$ onto $S$ of the form

$\begin{bmatrix}x \\ y\end{bmatrix}\rightarrow\begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} \pmod 1$

in which (i) $a$, $b$, $c$, and $d$ are integers, (ii) the determinant of the matrix is $\pm 1$, and (iii) the eigenvalues of the matrix do not have magnitude $1$.
It is easy to show that Arnold's cat map is an Anosov automorphism, and that it is chaotic.
To define "chaotic" in this context,
A mapping $T$ of $S$ onto itself is said to be chaotic if:
(i) $S$ contains a dense set of periodic points of the mapping $T$ (ii) There is a point in $S$ whose iterates under $T$ are dense in $S$.
That said, it is said that all Anosov automorphisms are chaotic mappings. Based on the definition of chaotic, how can one prove that statement?
Any feedback will be appreciated.
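Not a proof, but conditions (ii)–(iii) and the periodicity of rational points can be checked numerically for Arnold's cat map $\begin{bmatrix}2&1\\1&1\end{bmatrix}$ (a sketch using exact rational arithmetic; the starting point is arbitrary):

```python
import numpy as np
from fractions import Fraction

M = np.array([[2, 1],
              [1, 1]])   # Arnold's cat map matrix

# (ii) |det M| = 1 and (iii) no eigenvalue has magnitude 1
assert round(abs(np.linalg.det(M))) == 1
assert all(abs(abs(lam) - 1) > 1e-9 for lam in np.linalg.eigvals(M))

def cat(p):
    """One iterate of the cat map on the unit square, mod 1."""
    x, y = p
    return ((2 * x + y) % 1, (x + y) % 1)

# Points with rational coordinates are periodic: follow (1/5, 2/5)
p0 = (Fraction(1, 5), Fraction(2, 5))
p, period = cat(p0), 1
while p != p0:
    p, period = cat(p), period + 1

assert period == 2  # (1/5, 2/5) -> (4/5, 3/5) -> (1/5, 2/5)
```

Periodic points like these are dense (every rational point is periodic), which is half of the chaoticity definition above.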
|
I've been following the news around the work they are doing at the LHC particle accelerator in CERN. I am wondering what the raw data that is used to visualize the collisions looks like. Maybe someone can provide a sample csv or txt?
Different pieces of equipment will produce somewhat different looking data, but typically it consists of voltages defined as a function of time. In some cases (spark chambers, for example) the "voltage" is digital, and in others it is analog.
Traditionally, the time series for the data is slower than the times required for the (almost light speed) particles to traverse the detector. Thus one had an effective photograph for a single experiment. More modern equipment is faster but they still display the data that way. Here's an LHC example:
In the above, the data has been organized for display according to the shape and geometry of the detector. The raw data itself would be digitized and just a collection of zeroes and ones.
There are typically two types of measurements, "position" and "energy". The position measurements are typically binary, that is, they indicate that a particle either came through that (very small) element or did not. In the above, the yellow lines are position measurements.
Note that some of the yellow lines are curved. Actually all of them are curved at least some. This is because there is a strong magnetic field. The curvature of the particle tracks helps determine what particles they are. For example, given the same speed and charge, a heavier particle will run straighter.
The radius of curvature is given by:

$$r = \frac{p}{qB} = \frac{\gamma m v}{qB}$$ where $\gamma = 1/\sqrt{1-(v/c)^2}$ is the Lorentz factor, $p$ is the (relativistic) momentum, $q$ is the charge, and $B$ is the magnetic field strength. This helps determine the particle type and energy.
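In accelerator practice the curvature radius is often quoted in mixed units as r [m] ≈ p [GeV/c] / (0.3 · q [e] · B [T]), a standard rule of thumb; a minimal sketch with illustrative numbers:

```python
# r = p / (qB); mixed-units form: r [m] ~ p [GeV/c] / (0.3 * q [e] * B [T])
def radius_m(p_gev_per_c, b_tesla, charge_e=1):
    return p_gev_per_c / (0.3 * abs(charge_e) * b_tesla)

# A 1 GeV/c singly charged track in a 2 T field bends with r ~ 1.67 m
r = radius_m(1.0, 2.0)
assert abs(r - 5.0 / 3.0) < 1e-9
```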
Energy measurements are generally analog. In them, one gets an indication of how much energy was deposited by the particle as it went by. In the above, the light blue and red data are energy measurements. For these measurements, one doesn't get such a precise position, but the amplitude is very accurate.
Years ago, as a grad student in particle physics, I used to work on the PHENIX experiment at BNL. Before I had shown up (I think near the end of run 2) the main data structure used for analysis was called a "tuple". Tuples were pretty much like the lists used today in Python with a bit more structure to make access faster and contained the actual data corresponding to what we called an "event" (something interesting that happened in the detector which was captured by the various subsystems and written eventually into a tuple). Unfortunately tuples were generally just too large and one needed to analyze a smaller subset of the entries in the tuples -- so micro-tuples were born and then shortly afterwards nano-tuples.
There were different types of nano-tuples defined and used by the various working groups on the experiment which had different subsets of the original tuples. Which type of nano-tuple you used depended on the analysis you were trying to do and roughly corresponded to the working group you were in. In my case this was heavy flavor where I was studying charm.
So a nano-tuple might look like this:
(x_1, x_2, ..., x_n)
where the x_i would be all the different quantities of interest associated with the event: transverse momentum, energy deposited in the EM-cal, blah, blah, blah..
In the end the data analysis revolved around the manipulation of these nano-tuples and amounted to:
1. Put in a request with the data guys to get raw data collected by the different subsystems in the form of nano-tuples.
2. Wait a couple days for the data to show up on disk since it was a huge set of data.
3. Loop over the events (nano-tuples) filtering out the stuff you weren't interested in (usually events associated with pions).
4. Bin the data in each entry of the tuple.
5. Overlay the theoretical prediction of these distributions on top of what you extracted from the tuple.
6. Make your statement about what was going on (confirmation of theory, conjecture about disagreement, etc.).
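A toy sketch of that loop-filter-bin workflow (all field names, the pion cut, and the numbers are hypothetical; real nano-tuples held many more entries):

```python
# Hypothetical nano-tuples, one dict per event (field names are illustrative)
events = [
    {"pid": "pion",  "pt": 0.4, "emcal_e": 0.1},
    {"pid": "charm", "pt": 2.1, "emcal_e": 1.3},
    {"pid": "charm", "pt": 3.7, "emcal_e": 2.0},
]

# 1) filter out what we are not interested in (here: pions)
selected = [ev for ev in events if ev["pid"] != "pion"]

# 2) bin a quantity of interest, e.g. transverse momentum
bins = [0.0, 1.0, 2.0, 3.0, 4.0]
counts = [0] * (len(bins) - 1)
for ev in selected:
    for i in range(len(counts)):
        if bins[i] <= ev["pt"] < bins[i + 1]:
            counts[i] += 1

assert counts == [0, 0, 1, 1]
```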
The truth is that we rarely looked at the RAW, raw data streaming out of the detector unless you were on shift and part of the data acquisition system had stopped operating for some reason. But in that case the data was pretty meaningless when you looked at it. You'd be more concerned that the data wasn't flowing. However if you were one of the people responsible for maintaining a subsystem (say, EM-cal) then you'd probably be doing calibration on a regular basis and regularly looking over raw data from your particular subsystem to tune the calibration and make the raw data analyzable.
Mostly the raw data was only meaningful for the subsystem you had a responsibility to and looking at all the raw data from all the subsystems as a whole wasn't really done. I don't think anyone had that kind of breadth across all the different subsystems...
Regarding the data for the visualizations you asked about: I believe these were specially defined nano-tuples which had entries from enough of the subsystems to allow for reconstruction and the final visualization (pretty pictures) but I'm 99% sure the visualizations weren't created from the "raw" data. Rather they were done using these nano-tuples.
If you poke around the PHENIX website you can see some pretty fancy animations (at least fancy for back then) of collisions in the detector. Mostly these pics and movies were part of a larger experiment wide PR effort. They were made by a guy named Jeffery Mitchel and you should email him to find out more details on the format of the data he used (mitchell@bnl.gov) Everyone was talking about the LHC back then (2004 or 2005ish) and most of them have long since moved on so you can probably get more insight into the "raw" data created by the LHC today and used for those visualizations if you ask someone like him directly.
|
Ex.13.1 Q5 Surface Areas and Volumes Solution - NCERT Maths Class 10

Question:
A hemispherical depression is cut out from one face of a cubical wooden block such that the diameter
\(l \) of the hemisphere is equal to the edge of the cube. Determine the surface area of the remaining solid.
What is known?
Diameter \(l\) of the hemisphere is equal to the edge of the cube.
What is unknown?
The surface area of the remaining solid.
Reasoning:
We can create the figure of the solid as per given information
From the figure it’s clear that the surface area of the remaining solid includes TSA of the cube, CSA of the hemisphere and excludes base of the hemisphere.
Surface area of the remaining solid \(=\) TSA of the cubical part \(+\) CSA of the hemispherical part \(–\) Area of the base of the hemispherical part
We will find the remaining area of the solid by using formulae;
TSA of the cube \( = 6{l^2}\) where \(l\) is the length of the edge of the cube
CSA of the hemisphere \( = 2\pi {r^2}\)
Area of the base of the hemisphere \( = \pi {r^2}\) where \(r\) is the radius of the hemisphere

Steps:
Diameter of the hemisphere \(=\) Length of the edge of the cube \( = l\)
Radius of the hemisphere, \(r = \frac{l}{2}\)
Surface area of the remaining solid \(=\) TSA of the cubical part \(+\) CSA of the hemispherical part \(–\) Area of the base of the hemispherical part
\[\begin{align}&= 6{l^2} + 2\pi {r^2} - \pi {r^2}\\&= 6{l^2} + \pi {r^2}\\&= 6{l^2} + \pi {\left( {\frac{l}{2}} \right)^2}\\&= 6{l^2} + \frac{{\pi {l^2}}}{4}\\&= \frac{1}{4}{l^2}\left( {\pi + 24} \right)\end{align}\]
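A quick numerical check that the direct sum of areas matches the simplified closed form (the edge length $l = 4$ is an arbitrary test value):

```python
import math

l = 4.0          # arbitrary edge length
r = l / 2        # hemisphere radius = half the edge

# TSA of cube + CSA of hemisphere - base of hemisphere
direct = 6 * l**2 + 2 * math.pi * r**2 - math.pi * r**2
closed_form = 0.25 * l**2 * (math.pi + 24)

assert math.isclose(direct, closed_form)
```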
|
What is Electric Field Intensity?
The space around an electric charge in which its influence can be felt is known as the
electric field. The electric field intensity at a point is the force experienced by a unit positive charge placed at that point. Electric field intensity is a vector quantity. It is denoted by ‘E’. Formula: Electric Field E = F/q. The unit of E is NC⁻¹ or Vm⁻¹.
The electric field intensity due to a positive charge is always directed away from the charge and the intensity due to a negative charge is always directed towards the charge.
Due to a point charge q, the intensity of the electric field at a point d units away from it is given by the expression:

Electric Field Intensity (E) = q/[4πε₀d²] NC⁻¹
The intensity of the electric field at any point due to a number of charges is equal to the vector sum of the intensities produced by the separate charges.
Force Experienced by a Charge in Electric Field
The force experienced by a charge in an electric field is given by, \(\vec{F}=Q\vec{E}\). where E is the electric field intensity.
Special Cases: If Q is a positive charge, the force \(\vec{F}\) acts in the direction of \(\vec{E}\) . Acceleration a = F/m = QE/m. If Q is a negative charge, the force acts in a direction opposite to \(\vec{E}\). Acceleration a = F/m = QE/m
A charge in an electric field experiences a force whether it is at rest or moving. The electric force is independent of the mass and velocity of the charged particle, it depends upon the charge.
⇒ Also Read: Motion of Charged Particle in Electric Field
If a charged particle of charge Q is placed in an electric field of strength E, the force experienced by the charged particle = EQ.
The acceleration of the charged particle in the electric field, a = EQ/m. The velocity of the charged particle after time t is v = (EQ/m)t if the initial velocity is zero. The distance travelled by the charged particle is S = ½ × at² = ½(EQ/m) × t² if the initial velocity is zero.

The Trajectory of a Particle in an Electric Field
When a charged particle is projected into a uniform electric field with some velocity perpendicular to the field, the path traced by it is a parabola. The trajectory of a charged particle projected in a different direction from the direction of a uniform electric field is a parabola.
When a charged particle of mass m and charge Q remains suspended in a vertical electric field, then mg = EQ. When a charged particle of mass m and charge Q remains suspended in an electric field and the number of fundamental charges on the charged particle is n, then mg = E(ne), thus n = mg/(Ee).

Pendulum Oscillating in a Uniform Electric Field
The bob of a simple pendulum is given a +ve charge and made to oscillate in a vertically upward electric field; the time period of oscillation is then
Time Period (T) = 2π × √[l/(g – EQ/m)]
In the above case, if the bob is given a -ve charge then the time period is given by;
Time Period (T) = 2π × √[l/(g + EQ/m)]
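The two periods can be compared numerically (the field strength, bob charge, and mass below are illustrative, chosen so that EQ/m = 2 m/s²):

```python
import math

g, l = 9.8, 1.0                 # gravity (m/s^2), pendulum length (m)
E, Q, m = 2.0e5, 1.0e-6, 0.1    # illustrative field (V/m), charge (C), mass (kg)

T_plus = 2 * math.pi * math.sqrt(l / (g - E * Q / m))   # +ve charge: g reduced
T_minus = 2 * math.pi * math.sqrt(l / (g + E * Q / m))  # -ve charge: g increased
T_zero = 2 * math.pi * math.sqrt(l / g)                 # uncharged bob

# An upward field lengthens the period of a +ve bob and shortens it for a -ve bob
assert T_plus > T_zero > T_minus
```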
Projectile Motion in an Electric Field
A charged particle is projected with an initial velocity u and charge Q making an angle to the horizontal in an electric field directed vertically upward then,
1. Time of flight = 2u sin θ/[g ± EQ/m]
2. Maximum height = u² sin²θ/(2[g ± EQ/m])

3. Range = u² sin 2θ/[g ± EQ/m]

Properties of a Charged Particle in an Electric Field

1. The intensity of the electric field inside a charged hollow conducting sphere is zero.
2. A sphere is given a charge of ‘Q’ and is suspended in a horizontal electric field. The angle made by the string with the vertical is θ = tan⁻¹(EQ/mg).
3. The tension in the string is √[(EQ)² + (mg)²].
4. A bob carrying a +ve charge is suspended by a silk thread in a vertically upward electric field; the tension in the string is T = mg – EQ.
5. If the bob carries a -ve charge, the tension in the string is T = mg + EQ.

Force Experienced by Electron and Proton
A proton and an electron in the same electric field experience forces of the same magnitude but in opposite directions. If the proton and electron are initially moving in the direction of the electric field, the force on the proton is accelerating while the force on the electron is retarding.
Acceleration of proton/Retardation of electron = mass of an electron/mass of a proton.
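The projectile-motion results quoted above (time of flight, maximum height and range with the effective acceleration g ± EQ/m) can be sketched numerically. All values below are assumptions chosen only for illustration:

```python
import math

# Assumed illustrative values: a positive charge in a vertically upward
# field E, so the electric force opposes gravity.
m, Q, E, g = 0.01, 1e-4, 50.0, 9.8   # kg, C, V/m, m/s^2
u, theta = 20.0, math.radians(30)     # launch speed and angle

g_eff = g - E * Q / m                             # effective downward acceleration
T = 2 * u * math.sin(theta) / g_eff               # time of flight
H = (u * math.sin(theta)) ** 2 / (2 * g_eff)      # maximum height
R = u ** 2 * math.sin(2 * theta) / g_eff          # range
```

With these numbers g_eff = 9.3 m/s², slightly less than g, so the charged projectile stays up longer and lands farther than an uncharged one.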
Solved Example

Question: A rigid insulated wire frame in the form of a right-angled triangle ABC is set in a vertical plane as shown. Two beads of equal mass m, carrying charges q1 and q2, are connected by a cord of length L and can slide without friction on the wires. Considering the case when the beads are stationary, determine: (1) the angle α (∠APQ), (2) the tension in the cord, (3) the normal reactions on the beads. (4) If the cord is now cut, what are the values of the charges for which the beads continue to remain stationary?
Solution: 1. The forces acting on each bead are: mg (downward), the tension, the electric force, and the normal reactions.
The directions of the forces are shown in the figure:
Equating the forces along and perpendicular to AB and AC, for equilibrium of the beads;
T cos α = F cos α + mg sin 30° . . . . (1)
F sin α + N1 = mg cos 30° + T sin α . . . . . (2)
T sin α = F sin α + mg cos 30° . . . . . . (3)
N2 + F cos α = T cos α + mg cos 60° . . . . . (4)
From equations (1) and (3);
(T – F) cos α = mg sin 30°
and (T – F) sin α = mg cos 30°
Dividing, cot α = tan 30° = cot 60°
Therefore, α = 60°.
2. Substituting the value of α in (1) and noting that
F = \(\frac{1}{4\pi {{\varepsilon }_{0}}}\;\;\frac{{{q}_{1}}{{q}_{2}}}{{{\ell }^{2}}}\)
T cos 60º= \(\frac{1}{4\pi {{\varepsilon }_{0}}}\;\;\frac{{{q}_{1}}{{q}_{2}}}{{{\ell }^{2}}}\) cos 60º + mg sin 30º
or T = \(\frac{1}{4\pi {{\varepsilon }_{0}}}\;\;\frac{{{q}_{1}}{{q}_{2}}}{{{\ell }^{2}}} + mg\).
3. T – F =\(\frac{mg\,\,\sin 30{}^\text{o}}{\cos 60{}^\text{o}}\) = mg
From (2), N1 = mg cos 30º + (T – F) sin 60º
⇒ N1 = mg \(\frac{\sqrt{3}}{2} + mg . \frac{\sqrt{3}}{2}= \sqrt{3} mg\)
From (4), N2 = (T – F) cos 60º + mg cos 60º
= mg × 1/2 + mg × 1/2 = mg.
4. When the cord is cut, T = 0; then from (1),
0 = F cos 60º + mg sin 30º
or F + mg = 0
This result is on the assumption that q1 and q2 are of the same sign, so it was taken that there was a force of repulsion.
Since mg is fixed in direction, its sign cannot be reversed but the sign of F can be reversed because if q1 and q2 are of opposite sign, F will change its sign from + to -.
Let q1 and q2 be of the opposite sign then,
-F + mg = 0 is the condition for equilibrium.
or \(\frac{1}{4\pi {{\varepsilon }_{0}}}\;\;\frac{{{q}_{1}}{{q}_{2}}}{{{\ell }^{2}}}\) = mg
or q1q2 = \(4\pi\varepsilon _0mg\ell^{2}\)
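As a sanity check of the solved example, the claimed equilibrium values (α = 60°, T = F + mg, N1 = √3 mg, N2 = mg) can be substituted back into equations (1)-(4). The weight and electric force below are arbitrary assumed numbers:

```python
import math

mg = 1.0          # weight of each bead (assumed unit value)
F = 2.5           # electric force q1*q2/(4*pi*eps0*L^2), arbitrary positive value
alpha = math.radians(60)

T = F + mg                # tension from part 2
N1 = math.sqrt(3) * mg    # normal reaction from part 3
N2 = mg                   # normal reaction from part 3

# Residuals of equations (1)-(4); all should vanish at equilibrium.
r1 = T * math.cos(alpha) - F * math.cos(alpha) - mg * math.sin(math.radians(30))
r2 = F * math.sin(alpha) + N1 - mg * math.cos(math.radians(30)) - T * math.sin(alpha)
r3 = T * math.sin(alpha) - F * math.sin(alpha) - mg * math.cos(math.radians(30))
r4 = N2 + F * math.cos(alpha) - T * math.cos(alpha) - mg * math.cos(math.radians(60))
```

All four residuals vanish (to floating-point precision) regardless of the assumed value of F, confirming that the solution is self-consistent.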
|
You have forgotten that there is work done by the battery after $t=0$, so that there is energy lost before S1 is opened and S2 is closed.
The work done by the battery to transfer charge $Q$ onto the 1st capacitor is $W=QV=\frac{Q^2}{C}$.
The final charges on the capacitors are $\frac12Q, \frac14Q, \frac18Q, \frac{1}{16}Q, ...$, halving at each step. The total energy stored in all the capacitors at this time is $$E_{\infty}=\frac{Q^2}{2C}\left[\left(\frac12\right)^2+\left(\frac14\right)^2+\left(\frac18\right)^2+\left(\frac{1}{16}\right)^2 +...\right]=\frac{Q^2}{2C}\cdot\frac{1}{3}=\frac{Q^2}{6C}$$ The total loss of energy after closing S1 is $$W-E_{\infty}=\frac{5Q^2}{6C}=\frac56CV^2=\frac{20}{6}\pi\epsilon_0 RV^2$$
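A quick numeric check of the geometric series and the energy bookkeeping, with unit $C$ and $V$ assumed for convenience:

```python
C, V = 1.0, 1.0
Q = C * V

W = Q**2 / C                         # work done by the battery, Q^2/C

# Charges halve from one capacitor to the next: Q/2, Q/4, Q/8, ...
# Stored energy is sum over k of (1/2C)(Q/2^k)^2; truncate when terms vanish.
E_inf = sum((Q / 2**k) ** 2 / (2 * C) for k in range(1, 200))

loss = W - E_inf                     # energy dissipated after closing S1
```

The sum converges to $Q^2/6C$, so the loss is $5Q^2/6C = \frac56 CV^2$, matching option A.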
So I think option A should be the correct answer. The marking scheme appears to be wrong.
|
Can anyone state the difference between frequency response and impulse response in simple English?
The impulse response and frequency response are two attributes that are useful for characterizing linear time-invariant (LTI) systems. They provide two different ways of calculating what an LTI system's output will be for a given input signal. A continuous-time LTI system is usually illustrated like this:
In general, the system $H$ maps its input signal $x(t)$ to a corresponding output signal $y(t)$. There are many types of LTI systems that can apply very different transformations to the signals that pass through them. But, they all share two key characteristics:
The system is linear, so it obeys the principle of superposition. Stated simply, if you linearly combine two signals and input them to the system, the output is the same linear combination of what the outputs would have been had the signals been passed through individually. That is, if $x_1(t)$ maps to an output of $y_1(t)$ and $x_2(t)$ maps to an output of $y_2(t)$, then for all values of $a_1$ and $a_2$,
$$ H\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 y_1(t) + a_2 y_2(t) $$
The system is time-invariant, so its characteristics do not change with time. If you add a delay to the input signal, then you simply add the same delay to the output. For an input signal $x(t)$ that maps to an output signal $y(t)$, then for all values of $\tau$,
$$ H\{x(t - \tau)\} = y(t - \tau) $$
Discrete-time LTI systems have the same properties; the notation is different because of the discrete-versus-continuous difference, but they are a lot alike. These characteristics allow the operation of the system to be straightforwardly characterized using its impulse and frequency responses. They provide two perspectives on the system that can be used in different contexts.
Impulse Response:

The impulse that is referred to in the term impulse response is generally a short-duration time-domain signal. For continuous-time systems, this is the Dirac delta function $\delta(t)$, while for discrete-time systems, the Kronecker delta function $\delta[n]$ is typically used. A system's impulse response (often annotated as $h(t)$ for continuous-time systems or $h[n]$ for discrete-time systems) is defined as the output signal that results when an impulse is applied to the system input.
Why is this useful? It allows us to predict what the system's output will look like in the time domain. Remember the linearity and time-invariance properties mentioned above? If we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. What if we could decompose our input signal into a sum of scaled and time-shifted impulses? Then, the output would be equal to the sum of copies of the impulse response, scaled and time-shifted in the same way.
For discrete-time systems, this is possible, because you can write any signal $x[n]$ as a sum of scaled and time-shifted Kronecker delta functions:
$$ x[n] = \sum_{k=-\infty}^{\infty} x[k] \delta[n - k] $$
Each term in the sum is an impulse scaled by the value of $x[k]$ at that time instant. What would we get if we passed $x[n]$ through an LTI system to yield $y[n]$? Simple: each scaled and time-delayed impulse that we put in yields a scaled and time-delayed copy of the impulse response at the output. That is:
$$ y[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k] $$
where $h[n]$ is the system's impulse response. The above equation is the convolution theorem for discrete-time LTI systems. That is, for any signal $x[n]$ that is input to an LTI system, the system's output $y[n]$ is equal to the discrete convolution of the input signal and the system's impulse response.
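A small numerical illustration of the convolution sum, with an assumed 3-tap moving-average impulse response (the example system and signal are mine, not from the answer):

```python
import numpy as np

h = np.array([1/3, 1/3, 1/3])        # assumed impulse response: 3-tap averager
x = np.array([1.0, 2.0, 3.0, 4.0])   # example input signal

# Direct evaluation of y[n] = sum_k x[k] h[n-k]
N = len(x) + len(h) - 1
y_direct = np.array([sum(x[k] * (h[n - k] if 0 <= n - k < len(h) else 0.0)
                         for k in range(len(x)))
                     for n in range(N)])

y_conv = np.convolve(x, h)           # same result via the library convolution
```

Both evaluations agree term by term, which is exactly the statement of the discrete convolution theorem above.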
For continuous-time systems, the above straightforward decomposition isn't possible in a strict mathematical sense (the Dirac delta has zero width and infinite height), but at an engineering level, it's an approximate, intuitive way of looking at the problem. A similar convolution theorem holds for these systems:
$$ y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) d\tau $$
where, again, $h(t)$ is the system's impulse response. There are a number of ways of deriving this relationship (I think you could make a similar argument as above by claiming that Dirac delta functions at all time shifts make up an orthogonal basis for the $L^2$ Hilbert space, noting that you can use the delta function's sifting property to project any function in $L^2$ onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse responses), but I'm not a licensed mathematician, so I'll leave that aside). One method that relies only upon the aforementioned LTI system properties is shown here.
In summary: For both discrete- and continuous-time systems, the impulse response is useful because it allows us to calculate the output of these systems for any input signal; the output is simply the input signal convolved with the impulse response function.

Frequency response:

An LTI system's frequency response provides a similar function: it allows you to calculate the effect that a system will have on an input signal, except those effects are illustrated in the frequency domain. Recall the definition of the Fourier transform:
$$ X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2 \pi ft} dt $$
More importantly for the sake of this illustration, look at its inverse:
$$ x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2 \pi ft} df $$
In essence, this relation tells us that any time-domain signal $x(t)$ can be broken up into a linear combination of many complex exponential functions at varying frequencies (there is an analogous relationship for discrete-time signals called the discrete-time Fourier transform; I only treat the continuous-time case below for simplicity). For a time-domain signal $x(t)$, the Fourier transform yields a corresponding function $X(f)$ that specifies, for each frequency $f$, the scaling factor to apply to the complex exponential at frequency $f$ in the aforementioned linear combination. These scaling factors are, in general, complex numbers. One way of looking at complex numbers is in amplitude/phase format, that is:
$$ X(f) = A(f) e^{j \phi(f)} $$
Looking at it this way, then, $x(t)$ can be written as a linear combination of many complex exponential functions, each scaled in amplitude by the function $A(f)$ and shifted in phase by the function $\phi(f)$. This lines up well with the LTI system properties that we discussed previously; if we can decompose our input signal $x(t)$ into a linear combination of a bunch of complex exponential functions, then we can write the output of the system as the same linear combination of the system response to those complex exponential functions.
Here's where it gets better: exponential functions are the eigenfunctions of linear time-invariant systems. The idea is, similar to eigenvectors in linear algebra, if you put an exponential function into an LTI system, you get the same exponential function out, scaled by a (generally complex) value. This has the effect of changing the amplitude and phase of the exponential function that you put in.
This is immensely useful when combined with the Fourier-transform-based decomposition discussed above. As we said before, we can write any signal $x(t)$ as a linear combination of many complex exponential functions at varying frequencies. If we pass $x(t)$ into an LTI system, then (because those exponentials are eigenfunctions of the system), the output contains complex exponentials at the same frequencies, only scaled in amplitude and shifted in phase. These effects on the exponentials' amplitudes and phases, as a function of frequency, are the system's frequency response. That is, for an input signal with Fourier transform $X(f)$ passed into system $H$ to yield an output with a Fourier transform $Y(f)$,
$$ Y(f) = H(f) X(f) = A(f) e^{j \phi(f)} X(f) $$
In summary: So, if we know a system's frequency response $H(f)$ and the Fourier transform of the signal that we put into it $X(f)$, then it is straightforward to calculate the Fourier transform of the system's output; it is merely the product of the frequency response and the input signal's transform. For each complex exponential frequency that is present in the spectrum $X(f)$, the system has the effect of scaling that exponential in amplitude by $A(f)$ and shifting the exponential in phase by $\phi(f)$ radians. Bringing them together:
An LTI system's impulse response and frequency response are intimately related. The frequency response is simply the Fourier transform of the system's impulse response (to see why this relation holds, see the answers to this other question). So, for a continuous-time system:
$$ H(f) = \int_{-\infty}^{\infty} h(t) e^{-j 2 \pi ft} dt $$
So, given either a system's impulse response or its frequency response, you can calculate the other. Either one is sufficient to fully characterize the behavior of the system; the impulse response is useful when operating in the time domain and the frequency response is useful when analyzing behavior in the frequency domain.
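A short sketch of this duality for discrete signals, assuming a simple two-tap averager as the impulse response: multiplying the DFTs (with zero-padding so circular convolution matches linear convolution) reproduces the time-domain result.

```python
import numpy as np

h = np.array([0.5, 0.5])                       # assumed impulse response
x = np.sin(2 * np.pi * 0.1 * np.arange(32))    # example input signal

N = len(x) + len(h) - 1                        # pad so circular = linear convolution
H = np.fft.fft(h, N)                           # frequency response = DFT of h
X = np.fft.fft(x, N)

y_freq = np.fft.ifft(H * X).real               # Y(f) = H(f) X(f), back to time domain
y_time = np.convolve(x, h)                     # direct time-domain convolution
```

Convolving in time and multiplying in frequency give the same output, which is the relationship between the two responses stated above.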
Bang on something sharply once and plot how it responds in the time domain (as with an oscilloscope or pen plotter). That will be close to the impulse response.
Get a tone generator and vibrate something with different frequencies. Some resonant frequencies it will amplify. Others it may not respond at all. Plot the response size and phase versus the input frequency. That will be close to the frequency response.
For certain common classes of systems (where the system doesn't much change over time, and any non-linearity is small enough to ignore for the purpose at hand), the two responses are related, and a Laplace or Fourier transform might be applicable to approximate the relationship.
The impulse response is the response of a system to a single pulse of infinitely small duration and unit area (a Dirac pulse). The frequency response shows how much each frequency is attenuated or amplified by the system.
The frequency response of a system is the impulse response transformed to the frequency domain. If you have an impulse response, you can use the FFT to find the frequency response, and you can use the inverse FFT to go from a frequency response to an impulse response.
In short, we have two kinds of basic responses: time responses and frequency responses. Time responses test how the system behaves under a momentary disturbance, while frequency responses test it under a continuous disturbance. Time responses include the step response, ramp response and impulse response; frequency responses are sinusoidal responses.

Aalto University offers material for its course Mat-2.4129 freely here; the Matlab files are probably the most relevant part, since most of the material is in Finnish. If you are more interested, you could check the videos below for an introduction. I found them helpful myself.
I have only very elementary knowledge about LTI problems, so I will cover those below -- but there are surely many other kinds of problems!

Responses with linear time-invariant problems
With LTI (linear time-invariant) problems, the input and output have the same form: a sinusoidal input gives a sinusoidal output, and similarly a step input results in a step output. If you don't have an LTI system -- say you have feedback, or your control/noise and input correlate -- then all the above assertions may be wrong. With LTI, you get two types of change: a phase shift and an amplitude change, but the frequency stays the same. If you break some assumptions, say the non-correlation assumption, then the input and output may have very different forms.
If you need to investigate whether a system is LTI or not, you could use tools such as the Wiener-Hopf equation and correlation analysis. The Wiener-Hopf equation is used with noisy systems. It is essential to validate results and verify premises; otherwise it is easy to make mistakes with the different responses. More about determining the impulse response of a noisy system here.
References
Wikipedia article about LTI here
|
Recall that in the category of Topological spaces or in the category of Manifolds, a
submersion is a (not necessarily surjective!) map $f: X \to Y$ such that for each point $x\in X$, there exist an open neighborhood $U \subseteq Y$ of $f(x)$ and a map $g: U \to X$ splitting $f$, i.e. $f \circ g = \operatorname{id}_U$. This definition does not generalize well to other categories: it requires at least that "points" know a lot about the objects, and that we know what "open neighborhoods" are.
My question is: How much extra "abstract nonsense" structure do I need to put on a category for it to have a good theory of submersions?
On the one hand, the surjective submersions of manifolds are all regular epimorphisms (does this characterize the surjective submersions?), and so I could imagine defining "submersion" to mean a map that factors as a regular epi and a regular mono (I think that the regular monos in manifolds are the open embeddings?). Then it seems that I don't need any extra structure, but I have not checked that this condition characterizes submersions.
On the other hand, (surjective?) submersions form a Grothendieck pretopology, and hence determine a Grothendieck topology. Conversely, I would have assumed that a Grothendieck topology (which is extra structure on a category) determines which maps are submersions, although I am sufficiently new to this that I don't have a proposal for such a definition.
|
I) OP is right, ideologically speaking. Ideologically, OP's first eq.
$$ \tag{1} \left| \int_{\mathbb{R}}\! \mathrm{d}x_f~K(x_f,t_f;x_i,t_i) \right| ~\stackrel{?}{=}~1 \qquad(\leftarrow\text{Turns out to be ultimately wrong!}) $$
is the statement that a particle that is initially localized at a spacetime event $(x_i,t_i)$ must with probability 100% be within $x$-space $\mathbb{R}$ at a final time $t_f$, as our QM model does not allow creation or annihilation of particles.
However, such notion of absolute probabilities of the Feynman kernel $K(x_f,t_f;x_i,t_i)$ cannot be maintained when ideology has to be converted into mathematical formulas. E.g. for the harmonic oscillator, one has
$$\tag{A} \left| \int_{\mathbb{R}}\!\mathrm{d}x_f ~ K(x_f,t_f;x_i,t_i)\right|~=~\frac{1}{\sqrt{\cos\omega \Delta t}}, \qquad \Delta t ~:=~t_f-t_i,$$
which only becomes unity for $\omega \Delta t \to 0$. The problem can ultimately be traced to the fact that there is no normalizable uniform probability distribution on the real axis $\mathbb{R}$, i.e. the $x$-position space. In general, OP's first eq. (1) only holds for short times $\Delta t\ll \tau$, where $\tau$ is some characteristic time scale of the system.
II) Let us review how normalization appears in the Feynman path integral from first principles. The main tool to determine the Feynman propagator/kernel/amplitude $K(x_b,t_b;x_a,t_a)$ is the (semi)group property
$$\tag{B} K(x_f,t_f;x_i,t_i) ~=~ \int_{\mathbb{R}}\!\mathrm{d}x_m ~ K(x_f,t_f;x_m,t_m) K(x_m,t_m;x_i,t_i). $$
III) Equivalently, if we identify
$$\tag{C} K(x_f,t_f;x_i,t_i)~=~\langle x_f,t_f \mid x_i,t_i \rangle$$
with an overlap of instantaneous$^1$ position eigenstates in the Heisenberg picture, then eq. (B) follows from the (first of) the completeness relations
$$\tag{D} \int \!\mathrm{d}x ~|x,t \rangle \langle x,t |~=~{\bf 1}, \qquad \text{and} \qquad \int \!\mathrm{d}p~ |p,t \rangle \langle p,t |~=~{\bf 1}.$$
These instantaneous position and momentum eigenstates have overlap$^2$
$$\tag{E} \langle p,t \mid x,t \rangle~=~\frac{1}{\sqrt{2\pi\hbar}}\exp\left[\frac{px}{i\hbar}\right].$$
IV) OP's first eq. (1) is equivalent to the statement that
$$\tag{F} \left| \langle p_f=0,t_f \mid x_i,t_i \rangle \right| ~\stackrel{?}{=}~\frac{1}{\sqrt{2\pi\hbar}},\qquad(\leftarrow\text{ Ultimately wrong!}) $$
due to the identification (C) and
$$\tag{G} \langle p_f,t_f \mid x_i,t_i \rangle~\stackrel{(D)+(E)}{=}~\int_{\mathbb{R}}\!\frac{\mathrm{d}x_f}{\sqrt{2\pi\hbar}}\exp\left[\frac{p_fx_f}{i\hbar}\right] \langle x_f,t_f \mid x_i,t_i \rangle. $$
Eq. (F) is violated for e.g. the harmonic oscillator, where one has
$$\tag{H} \left| \langle p_f,t_f \mid x_i,t_i \rangle \right| ~=~\frac{1}{\sqrt{2\pi\hbar\cos\omega \Delta t}}. $$
V) For sufficiently short times $\Delta t\ll \tau$, one derives from the Hamiltonian formulation (without introducing arbitrary normalization/fudge factors!) that
$$ \langle x_f,t_f \mid x_i,t_i\rangle~\stackrel{(D)}{=}~\int_{\mathbb{R}} \!\mathrm{d}p~ \langle x_f,t_f \mid p,\bar{t} \rangle \langle p,\bar{t} \mid x_i,t_i\rangle $$$$ ~=~~\int_{\mathbb{R}} \!\mathrm{d}p~\langle x_f,\bar{t} \mid \exp\left[-\frac{i\Delta t}{2\hbar}\hat{H}\right]\mid p,\bar{t} \rangle \langle p,\bar{t} \mid \exp\left[-\frac{i\Delta t}{2\hbar}\hat{H}\right]\mid x_i,\bar{t}\rangle$$ $$ ~\approx~\int_{\mathbb{R}} \!\mathrm{d}p~\langle x_f,\bar{t} \mid p,\bar{t} \rangle \langle p,\bar{t} \mid x_i,\bar{t}\rangle \exp\left[-\frac{i\Delta t}{\hbar} H(\bar{x},p) \right]$$ $$~\stackrel{(E)}{=}~ \int_{\mathbb{R}} \!\frac{\mathrm{d}p}{2\pi\hbar}\exp\left[\frac{i}{\hbar}\left(p\Delta x -\left(\frac{p^2}{2m} + V(\bar{x})\right)\Delta t\right) \right]$$$$ ~=~ \sqrt{\frac{A}{\pi}} \exp\left[-A(\Delta x)^2-\frac{i}{\hbar}V(\bar{x})\Delta t\right], \qquad A~:=~\frac{m}{2 i\hbar} \frac{1}{\Delta t},$$$$ \tag{I} ~=~\sqrt{\frac{m}{2\pi i\hbar} \frac{1}{\Delta t}} \exp\left[ \frac{i}{\hbar}\left(\frac{m}{2}\frac{(\Delta x)^2}{\Delta t}-V(\bar{x})\Delta t\right)\right],$$
where
$$\tag{J} \Delta t~ :=~t_f-t_i, \quad \bar{t}~ :=~ \frac{t_f+t_i}{2}, \quad \Delta x~ :=~x_f-x_i, \quad \bar{x}~ :=~ \frac{x_f+x_i}{2} .$$
The oscillatory Gaussian integral (I) over momentum $p$ was performed by introducing the pertinent $\Delta t\to\Delta t-i\epsilon$ prescription. Eq. (I) implies that
$$\tag{K} K(x_f,t_f;x_i,t_i) ~\longrightarrow~\delta(\Delta x) \quad \text{for} \quad \Delta t \to 0^{+}, $$
which in turn implies OP's first eq. (1) in the short time limit $\Delta t \to 0^{+}$. More generally, Eq. (I) implies OP's first eq. (1) for $\Delta t\ll \tau$.
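The oscillatory Gaussian momentum integral in eq. (I) (with $V=0$ for brevity) can be verified in complex arithmetic, implementing the $\Delta t \to \Delta t - i\epsilon$ prescription explicitly; the numerical values are arbitrary:

```python
import cmath, math

m = hbar = 1.0
dt, dx = 0.05, 0.2
eps = 1e-10
dtc = dt - 1j * eps                           # the dt -> dt - i*eps prescription

a = -1j * dtc / (2 * m * hbar)                # coefficient of p^2 (Re a < 0)
b = 1j * dx / hbar                            # coefficient of p

# int dp exp(a p^2 + b p) = sqrt(-pi/a) exp(-b^2/4a); divide by 2*pi*hbar
lhs = cmath.sqrt(-math.pi / a) * cmath.exp(-b * b / (4 * a)) / (2 * math.pi * hbar)

# Closed form claimed in eq. (I): sqrt(m/(2*pi*i*hbar*dt)) exp(i m dx^2/(2 hbar dt))
rhs = cmath.sqrt(m / (2j * math.pi * hbar * dt)) * cmath.exp(1j * m * dx**2 / (2 * hbar * dt))
```

Both sides agree to the accuracy set by $\epsilon$, confirming the Gaussian prefactor and phase of the short-time kernel.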
VI) Note that the short time probability
$$\tag{L} P(x_f,t_f;x_i,t_i)~=~|K(x_f,t_f;x_i,t_i)|^2~\stackrel{(I)}{\approx}~\frac{m}{2\pi \hbar} \frac{1}{\Delta t} , \qquad \Delta t\ll \tau, $$
is independent of initial and final positions, $x_i$ and $x_f$, respectively. For fixed initial position $x_i$, the formula (L) can be interpreted as a uniform and unnormalizable probability distribution in the final position $x_f\in\mathbb{R}$. This reflects the fact that the instantaneous eigenstate $|x_i,t_i \rangle$ is not normalizable in the first place, and ultimately dooms the notion of absolute probabilities.
VII) For finite times $\Delta t$ that are not small, the interaction term $V$ becomes important. In the general case, the functional determinant typically needs to be regularized by introducing a cut-off and counterterms. But regularization is not the (only) source of violation of OP's first eq. (1), or equivalently, eq. (F). Rather it is a generic feature that the $px$ matrix element of a unitary evolution operator
$$\tag{M} \frac{\langle p,t \mid \exp\left[-\frac{i\Delta t}{\hbar}\hat{H}\right]\mid x,t\rangle}{\langle p,t \mid x,t\rangle} $$
is not just a phase factor away from the short time approximation $\Delta t\ll \tau$.
VIII)
Example: Consider the Hermitian Hamiltonian
$$\tag{N} \hat{H}~:= \frac{\omega}{2}(\hat{p}\hat{x}+\hat{x}\hat{p})~=~ \omega(\hat{p}\hat{x}+\frac{i\hbar}{2}). $$
Then
$$\tag{O} \frac{\langle p,t \mid \exp\left[-\frac{i\Delta t}{\hbar}\hat{H}\right]\mid x,t\rangle}{\langle p,t \mid x,t\rangle} ~=~1 - \omega\Delta t\left(\frac{1}{2}-i\frac{px}{\hbar} \right)+\frac{(\omega\Delta t)^2}{2}\left(\frac{1}{4}-2i\frac{px}{\hbar} - \left(\frac{px}{\hbar} \right)^2\right)+{\cal O}\left((\omega\Delta t)^3\right), $$
which is not a phase factor if $\omega\Delta t\neq 0$. To see this more clearly, take for simplicity $px=0$.
References:
R.P. Feynman and A.R. Hibbs,
Quantum Mechanics and Path Integrals, 1965.
J.J. Sakurai,
Modern Quantum Mechanics, 1994, Section 2.5.
--
$^1$ Instantaneous eigenstates are often introduced in textbooks of quantum mechanics to derive the path-integral formalism from the operator formalism in the simplest cases, see e.g. Ref. 2. Note that the instantaneous eigenstates $\mid x,t \rangle $ and $\mid p,t \rangle $ are time-independent states (as they should be in the Heisenberg picture).
$^2$ Here we assume that possible additional phase factors in the $px$ overlap (E) have been removed via appropriate redefinitions, cf. this Phys.SE answer.
|
HINT 1: Your matrix $A$ is $$(a-b)I + b e e^T$$
Can you now compute the determinant?
HINT 2: Make use of the fact that $\det(\lambda A) = \lambda^n \det(A)$
HINT 3: $\det(I + \alpha e e^T) = (1+n \alpha)$ where $e$ is a column vector of ones.
Complete answer:
$$\det ((a-b)I + b e e^T) = (a-b)^n \det \left( I + \dfrac{b}{a-b} e e^T\right)$$ Hence, all we need is to find the determinant of $I + \alpha ee^T$, where $\alpha = \dfrac{b}{a-b}$ in our case. Note that $ee^T$ is a rank-one matrix and its eigenvalues are $e^Te = n$ and $n-1$ zeros. If $\lambda$ is an eigenvalue of $I + \alpha ee^T$, then $$\det (I + \alpha ee^T - \lambda I) = \alpha^n \det \left(ee^T + \dfrac{(1-\lambda)}{\alpha}I \right) = 0$$ This means that $-\dfrac{(1-\lambda)}{\alpha}$ are the eigenvalues of $ee^T$. Hence, we get that $$-\dfrac{(1-\lambda)}{\alpha} = n \text{ or }0 \text{ ($n-1$ times)}.$$ Hence, we get that $$\lambda = 1 + n \alpha, 1 \text{ ($n-1$ times)}$$ Hence, $$\det ((a-b)I + b e e^T) = (a-b)^n \times \left( 1 + n \dfrac{b}{a-b} \right) = (a-b)^{n-1} (a+(n-1)b)$$
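A quick numerical confirmation of the final formula; the size and entries below are arbitrary assumptions:

```python
import numpy as np

n, a, b = 5, 7.0, 2.0
e = np.ones((n, 1))
A = (a - b) * np.eye(n) + b * (e @ e.T)      # a on the diagonal, b elsewhere

det_direct = np.linalg.det(A)
det_formula = (a - b) ** (n - 1) * (a + (n - 1) * b)
```

For these values both evaluate to $5^4 \times 15 = 9375$, as the formula predicts.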
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, which might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (consicous desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream still yet to show any sign of revisiting the exact same idea, and there are no known instance of either sequel dreams nor recurrence dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle and Carroll, and on introductory papers. About quantum mechanics I have only poor knowledge yet.
So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes, GWs would interfere just like light waves.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, would we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like.
if**
Pardon, I just spent some naive-philosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of torturous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h Bar having software-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. Though back in high school, regardless of the language, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just use tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the university's server, which means running another environment remotely, I found an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
|
Chapter 10 of Advanced Topics in Types and Programming Languages gives a very comprehensive description of type inference by constraint solving.
They introduce the free program identifiers of an environment $\Gamma$ with the pair of equations $fpi(\emptyset) = \emptyset$ and $fpi(\Gamma;x:\sigma) = fpi(\Gamma) \cup fpi(\sigma)$.
I would like some clarification on the $fpi(\sigma)$ term. Does it contain the set of free type variables ($ftv$)?
Specifically they say (page 408):
The sets of free type variables of a type scheme $\sigma$ and of a constraint $C$, written $ftv(\sigma)$ and $ftv(C)$, respectively, are defined accordingly... The sets of free program identifiers of a type scheme $\sigma$ and of a constraint $C$, written $fpi(\sigma)$ and $fpi(C)$, respectively, are defined accordingly.
Note that $x$ is free in $x \preceq T$.
The last sentence seems to indicate that type variables are program identifiers.
Are there any other program identifiers? Are predicates and the names of ground types program identifiers?
Is the result of $fpi(x \rightarrow integer)$ the set $\{x, integer, \rightarrow\}$?
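Not an authoritative answer, but one way to make the definitions concrete is a toy implementation in which program identifiers enter a scheme only through instantiation constraints $x \preceq T$ (consistent with the quoted remark that $x$ is free in $x \preceq T$). All names and the tuple encoding below are illustrative, not from the book:

```python
# Hedged sketch: fpi over a tiny constraint language encoded as nested tuples.
# Constraints: ("inst", x, T) for x <= T, ("and", C1, C2), ("eq", T1, T2).

def fpi_constraint(c):
    """Free program identifiers of a constraint."""
    kind = c[0]
    if kind == "inst":            # x <= T : x is a free program identifier
        return {c[1]}
    if kind == "and":
        return fpi_constraint(c[1]) | fpi_constraint(c[2])
    return set()                  # equality constraints mention no identifiers

def fpi_scheme(sigma):
    """sigma = (quantified_type_vars, constraint, body_type)."""
    return fpi_constraint(sigma[1])

def fpi_env(gamma):
    """fpi(empty) = empty;  fpi(Gamma; x:sigma) = fpi(Gamma) U fpi(sigma)."""
    out = set()
    for _x, sigma in gamma:
        out |= fpi_scheme(sigma)
    return out
```

Under this reading, type variables, ground types like `integer`, and the arrow constructor are not program identifiers, so $fpi(x \rightarrow integer)$ viewed as a plain type would be empty; only an instantiation constraint mentioning $x$ contributes $\{x\}$.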
|
Harry Svensson
Harry.Nossnevs@gmail.com
Electrical Engineer (Bachelor)
If you feel that I've helped you in a way you deem worthy of a payment, then just PayPal me.
Buzz words I have reasonably good knowledge about:
DSP, control theory, digital filters, analog filters, digital logic, buck converters, 3-phase motors and households, 3D design (using Blender), telecommunication, programming (ASM/C/C++ on computers and microcontrollers), web design (prefer front end but know some Node.js), FPGA/CPLD (primarily VHDL, not used to Verilog yet), talking in front of an audience, thinking outside the box
Hobby:
Designing (primarily ADC/DAC circuits, and some oscillators and sensors) in CircuitJS, gaming (primarily Factorio, and some League of Legends), designing my cube
\\$\beta\\$ = \$\beta\$
\\$\lambda\\$=\$\lambda\$ \\$\pi\\$=\$\pi\$ \\$\omega\\$=\$\omega\$ \\$\tau\\$=\$\tau\$ \\$\sqrt{\frac{\pi}{R_{C}^5}}\\$=\$\sqrt{\frac{\pi}{R_{C}^5}}\$
á à â ǎ ă ã ả ȧ ạ ä å ḁ ā ą ᶏ ⱥ ȁ ấ ầ ẫ ẩ ậ ắ ằ ẵ ẳ ặ ǻ ǡ ǟ ȁ ȃ ɑ ᴀ ɐ ɒ a æ ǽ ǣ ꜳ ꜵ ꜷ ꜹ ꜻ
Linköping, Sweden
|
Mark's answer is correct, but in my opinion it is not clear enough. Let's make it a bit simpler: Is it possible that the geomagnetic field reversal led to the extinction of the dinosaurs? NO, DEFINITELY NOT. Here's why: the cause of the K-Pg extinction event (in which many living species, including dinosaurs, died) is well known: volcanic eruptions (the ...
The idea of mass extinction is not that recent actually: Cuvier (1798), Buckland (1823) and d'Orbigny (1851) for instance were already talking about global catastrophes in earth history, linked to extinctions. But during the same period, Brocchi (1814) and Lyell (1832) proposed that extinctions of species occurred individually and were a gradual process (...
It's a commonly-proposed theory that geomagnetic reversals cause extinction events, but there's no evidence for it. There aren't enough mass extinction events for any sort of statistical analysis, and there are a number of geologic processes that can give the illusion of simultaneous reversal and extinction. In particular, an erosion event can erase both ...
Some authors think that the extinction of the Pleistocene megafauna (large mammals such as mammoths, etc.) was contemporaneous with the Younger Dryas (Firestone et al. 2007; Faith & Surovell 2009), while others think that it predates it by a couple thousand years (Gill et al. 2009). Whether or not it was contemporaneous with the Younger Dryas, it ...
Though I agree with @kaberett that there is indeed more and more evidence that the Deccan volcanism was the main trigger of the K/Pg crisis, I wanted to add that there is a more nuanced hypothesis (that I heard about last week during a talk at EGU) according to which the seismic response resulting from the Chicxulub impact may be the trigger of one of the main stages ...
No, it did not definitively single-handedly cause the KT mass extinction event. Around the same time, the Deccan Traps Large Igneous Province (India) was being emplaced. Flood volcanism has been associated with other mass extinctions, due to the impact on climate, sunlight at the surface, etc, of the output of huge volumes of sulphur-based gases.The ...
Is there any consensus about the conjecture that gravitational force on Earth may have changed significantly over geological time? No, Earth's gravity did not change significantly over time. Yes, Earth's mass increases because of meteorites and decreases because of loss of some atmospheric gases to space, but this is extremely negligible. And in ...
I'll ignore the complete impossibility of getting the world's nuclear arsenal to the center of the Earth and the impossibility of exploding them all at once. The total number of nuclear weapons, including those held in reserve and those scheduled for dismantling, is about 15000, with an average explosive power of less than half a megaton of TNT. This is ...
As a preamble, let me say that I don't know remotely enough on galactic dynamic to know if a supernova could have possibly been close enough during the Ordovician for Earth to be affected by a gamma ray burst, nor do I know enough about geochemistry to know if there are ways to detect such an event in the fossil record.That being said, the questions that ...
You need to be very wary of anything written in the non-scientific media about science. The media loves woo and controversy because those are the things that garners readers, and that in turn garners advertising revenue.By way of analogy, suppose as near-adult in gym class someone said "My gym shoes smell bad. Bad! Awfully bad!" Someone else would ...
I can't be entirely sure but I'll make an informed guess: that value doesn't come from a single measurement. Therefore, if the error in the age of a single sample is $\pm125$ kyr, you just need to average 16 samples to get it down to $\pm31$ kyr. The uncertainty in the addition (or subtraction) of two or more quantities is equal to the square root of the ...
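The averaging step in that guess is the standard error of the mean: averaging $N$ independent measurements with uncertainty $\sigma$ each shrinks the uncertainty of the mean by $\sqrt{N}$. A two-line sanity check (function name is mine):

```python
import math

def sigma_of_mean(sigma, n):
    """Standard error of the mean of n independent measurements,
    each with uncertainty sigma."""
    return sigma / math.sqrt(n)

# sigma_of_mean(125.0, 16) == 125 / 4 == 31.25, matching the quoted ~31 kyr
```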
Firstly, I'd like to state that so far there is no evidence that a civilization similar to ours has appeared on Earth before us. It might be apparent that we as a species have left quite a significant geological footprint, (if we didn't Archeologists would be out of a job!), and as of yet we've discovered no evidence to indicate a civilization that cannot ...
The answer that you're asking for is within the paper that you cited in the question: Did a gamma-ray burst initiate the late Ordovician mass extinction? (Melott et al.), specifically the section titled "ASPECTS OF THE LATE ORDOVICIAN MASS EXTINCTION POTENTIALLY COMPATIBLE WITH A GRB". But if you're looking for an article summary, here is the evidence they ...
I'm not going to give an extensive account of "mass extinction" epistemology here, but I think the first thing one has to consider is the difficulty of studying paleobiodiversity numerically (mostly because of a gappy fossil record, but also a historically unbalanced sampling effort and the rock availability for some of the time periods); although, along the ...
Being in a submarine in the ocean is not a good idea because if a large asteroid hits the ocean the shock wave created, and its energy, would be very large. If the submarine survives intact its occupants may not. The occupants could be thrown about so much they liquidize & turn into people puree.Similarly being airborne in a blimp or airplane would be ...
There are several factors to consider. The main one is the atmosphere (especially if you want to compare Mars with the Earth's during magnetic reversals). Earth's atmosphere is a formidable shield against solar wind and cosmic radiation. Each type of radiation has a different penetration, but in general the radiation dose associated with each type of ...
To add to David Hammen's answer. Earth is big. I hate to use the words "really really big" cause there are things much bigger, like the sun, but Earth is quite large.Imagine what would happen if all the Nukes went off 3000 miles from you. You're in LA, the bombs are all set off in NY. The curvature of the Earth would prevent you from seeing much, ...
As bon noted in the comments, that is the time it would take the gamma rays to reach Earth. Gamma rays travel at the same speed as all other forms of electromagnetic radiation, including visible light, so we will not have any warning. This kind of makes the "6000" number meaningless. We will know that the star exploded and the gamma rays will reach Earth at ...
Mass extinctions are selective: not all living organisms will be affected by it to the same extent. Meaning also that various groups will recover from it at various speed: a group from which half of the species were exterminated by the event will most probably take longer to recover (i. e. reach its pre-event diversity) than a group for which only 10% of the ...
A weed is just a plant where you do not want it. Totally a matter of context. Tumbleweeds are non-native, introduced centuries ago. I assume you mean the invasive species of plants that have been spread by humans and are disrupting ecologies throughout most of the worldUntil recently, these plants we consider weeds were limited in their range to home ...
Almost nil on human time scales. The total estimated reserves (this number keeps changing over time) are of the order of 300E9 m^3. For convenience let's say all of this oil formed over the last 300 million years at a constant rate. Then the rate of formation is 1000 m^3 per year, or ~6000 barrels of usable oil. This obviously is a simple back-of-the ...
Estimates of earth's total biomass vary widely, from 0.5 to 4 trillion tons C, so instead of citing a source, I'll just go with an assumption of $1\times10^{15} \text{kg C}$. Measuring biomass in carbon is a convenient segue to the next assumption: assume that all biomass burns only in $\text{CX + O}_2\,\rightarrow\,\text{CO}_2 + \text{X}$.Given these ...
National Geographic published an article about this in January 2018 - No, We're Not All Doomed by Earth's Magnetic Field Flip.Yes, the flipping of the magnetic poles does take a long time - thousands of years. But during the change, the Earth's magnetic field does not cease to exist & the magnetic poles do not disappear. They slowly migrate.For ...
There is one scenario in which climate change could render the Earth inhospitable to life in general, which would be a runaway greenhouse effect. In a runaway greenhouse effect, the Earth would get so hot that the oceans start to evaporate, adding more water vapour (powerful greenhouse gas) to the atmosphere, which will make it hotter yet, until Earth ends ...
Estimated sulfur release 325 gigatonnes = 325,000 teragrams. The numbers in this diagram are in teragramsSulfur Cycleso the release is $\approx 1000\times $ today's annual sulfur cycle.I think most of the sulfur compounds would be washed into the ocean and then deposited into sediments. I can't find how much sulfur is currently in the oceans, this ...
100 years is not very long-term at all. The added CO2 will not miraculously disappear after a century, but will go on warming the planet. (And that's just assuming that humans stop burning fossil fuels now.) There are all sorts of feedback effects that will come into play at some point, if they haven't already.For one example, a lot of CO2 winds up ...
Depends on the further actions humanity takes. If we keep on pumping green house gases into the atmosphere, survival will become significantly harder. Sure, we most likely wont be able to create a Venus-like atmosphere, but as of today we simply do not know, what tipping points in the climate system we might go beyond.Actually I am more concerned by the ...
Wow, I just finished watching this. It seems we know, or can guess, quite a bit about the levels of carbon, particularly methane, that were released at the end of the Permian and what they did to the climate. What seems to be missing is any consensus about why the Permian extinction stopped; just that once things settled down, climate and atmosphere swung back to ...
|
I have recorded Signal_A, and downloaded signal_B from the internet (therefore two different microphones/settings are used for recording)
You might be able to achieve a "seamless" mix (or transition), but the fact that you have two different microphones and settings might add artifacts to the final product. You can tell how this sounds if you have ever noticed the differences between microphones. Accounting for such differences might be a bit more challenging.
The sound pressure levels of both signals are known (I measured the sound pressure level of Signal_A at 1 m distance, and also know the sound pressure level of Signal_B at the same distance).
It is good that you have measured the SPL but most commonly you are unlikely to know the exact relationship that takes you from SPL to converted values.
As far as I know, the value recorded/represented in each waveform of any signal is relative to that corresponding signal, and since two signals are recorded using different settings we can expect that one might be more amplified than the other. Therefore a simple mixing of the two won’t give us a realistic result.
I am not entirely clear on what you mean by "a simple mixing", but the first approach would be to simply adjust the amplitudes of the two recordings and then "add" them together as per your MergedSignal equation.
Now the question is: is there a way to manipulate the recorded signal_A and signal_B programmatically, assuming that the SPL values of the original signals at 1 m distance are known, and get the merged signal as if they were playing simultaneously and recorded at 1 m distance?
Yes.
BUT!, you have to make sure that the rest of the signal pathway was the same as well. Now:
Sound waves hit the transducer. The transducer converts them to voltage with a (hopefully) linear relationship of Volts per Pascal of pressure. In addition, this relationship is frequency dependent. But let's say that it is constant across the spectrum and call it $\alpha$.
Behind the transducer there is an amplifier whose job is to take the electrical signal and bring it up to some level. Again, this is achieved with a (hopefully) linear relationship. Let's keep things simple and assume that the job of this amplifier is to take the signal of the transducer and adapt it to a standard line level of approximately 2 Volts (peak-to-peak). Let's call this $\beta$.
The final stage is the Analog-to-Digital Converter (ADC). It converts the 2 volts peak-to-peak to some range with a given word length (for example 0..255, -128..127 and so on).
So, assuming that $\alpha$ (mic), $\beta$ (amp/signal conditioning) are the same, you can indeed scale the recordings by the ratio of the SPLs. That is:
$$MergedSignal = (1.0 - \frac{SPL_A}{SPL_B}) \times Signal_A + \frac{SPL_A}{SPL_B} \times Signal_B$$
In the more general case and when you don't know the SPL, you can simply "match" the average level of the converted values. So, same relationship, instead of $SPL_A, SPL_B$ you have the averages of the signals.
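As a sketch of both options, here is the SPL-ratio weighting from the MergedSignal equation above, plus a fallback that matches average (RMS) levels when the SPLs are unknown. Function names are mine, and the assumption of identical $\alpha$ (mic) and $\beta$ (amp) applies:

```python
import math

def merge_by_spl(a, b, spl_a, spl_b):
    # Weights taken directly from the MergedSignal equation above.
    r = spl_a / spl_b
    return [(1.0 - r) * xa + r * xb for xa, xb in zip(a, b)]

def merge_by_rms(a, b):
    # Fallback: scale b so its RMS level matches a's, then mix equally.
    def rms(x):
        return math.sqrt(sum(v * v for v in x) / len(x))
    g = rms(a) / rms(b)
    return [0.5 * (xa + g * xb) for xa, xb in zip(a, b)]
```

Note that `merge_by_spl` reduces to plain averaging only for particular SPL ratios; for real recordings you would load the sample arrays (e.g. from WAV files) and apply one of these element-wise.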
Hope this helps.
|
I'm having trouble determining whether the infinite product in $(1.)$ converges or diverges; my attack was to use the convergence criteria stated in $(0.)$
$(0.)$
An Infinite Product is said to be convergent if there exists a non-zero limit of the sequence of partial products:
$$P_{n}=\prod_{k=1}^n(1 + u_{k})$$
as $n \rightarrow \infty$. The value of the infinite product is the limit: $$P = \lim_{n \rightarrow \infty}P_{n}$$
and one writes $$\prod_{k=1}^{\infty}(1 + u_{k}) = P$$
$(1.)$
$$\prod_{k=2}^{\infty}\left(1 + (-1)^{k}\frac{1}{k}\right)$$
Attacking $(1.)$ via our convergence criteria stated in $(0.)$ one can make the following observations:
$$P_{n} = \prod_{k=2}^n\left(1 + (-1)^{k}\frac{1}{k}\right)$$
Now taking the limit of $P_{n}$ we may now see: $$\lim_{n \rightarrow \infty} \left(\prod_{k=2}^n\left(1 + (-1)^{k}\frac{1}{k}\right)\right)$$
Further observations reveal that we have the following: $$\lim_{n \rightarrow \infty}P_{n}=\lim_{n \rightarrow \infty} \left(\prod_{k=2}^n\left(1 + (-1)^{k}\frac{1}{k}\right)\right)= \lim_{n \rightarrow \infty}\left(a_{2}\times a_{3}\times a_{4}\times a_{5} \times \cdots \times a_{n}\right),$$ where $a_{k} = 1 + (-1)^{k}\frac{1}{k}$ denotes the $k$-th factor.
Finally, in conclusion, the limit on the RHS of our result becomes the following: $$\lim_{n \rightarrow \infty}\left(1 +(-1)^{2}\frac{1}{2}\right) \times \left(1 +(-1)^{3}\frac{1}{3}\right) \times \cdots \times \left(1 +(-1)^{n}\frac{1}{n}\right)$$
My question is: how does one convert our sequence of partial products, as seen in the previous result, into an infinite series of any form?
|
My friend gave me a problem, which can be reduced to the following.
Let $$$S(k)$$$ be the set of all arrays of size $$$n$$$ that contain $$$k$$$ ones and $$$n - k$$$ zeroes. For some array $$$s \in S(k)$$$, let $$$pos_s[1..k]$$$ denote the positions of those ones (say in increasing order; it doesn't matter actually). We have to calculate $$$\sum_{s \in S(k)} \sum_{i = 1}^{k} pos_s[i]$$$,
i.e., for all arrays $$$s \in S(k)$$$ we have to sum the positions of all $$$k$$$ ones.
My thoughts : Modification to binomial expansion? Noo. hmm, what about counting how many times each position occurs, but then there will be intersections, like I might put two ones at the same index, so ? what about inclusion-exclusion? No it's getting a lot messy.
Linearity of Expectation : Let's find the expected value of $$$ \sum_{i = 1}^{k} pos_s[i] $$$
Let's look at it some other way. We choose random variables $$$X_{1, 2, .. k}$$$ lying in $$$[1, n]$$$ with the same probability (note that I'm not saying they're independent). Thus $$$E[X_i] = \frac{1 + 2 + .. + n}{n} = \frac{n + 1}{2}$$$. Now, by linearity of expectation,
$$$E\left[\sum_{i = 1}^{k} X_i\right] = \sum_{i = 1}^{k} E[X_i] = \frac{k(n + 1)}{2}.$$$
Now, it's left to you to realise that
$$$E\left[\sum_{i = 1}^{k} pos_s[i]\right] = \frac{1}{\binom{n}{k}} \sum_{s \in S(k)} \sum_{i = 1}^{k} pos_s[i].$$$
And since we've $$$n \choose k $$$ arrays possible, we deduce,
$$$\sum_{s \in S(k)} \sum_{i = 1}^{k} pos_s[i] = \binom{n}{k} \cdot \frac{k(n + 1)}{2}.$$$
I went numb for a few seconds after completing the solution. I mean wtf, what about intersections, how can this even be possible, you can't just choose it randomly (I know the formal proof of LOE, but still :\ )
There are a few things that I know are true but can never digest: "my favourite song has just 7 million views on YouTube" is second; "linearity of expectation actually works" remains first.
Maybe I overkilled it. Other easier solutions are welcome :)
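For what it's worth, the closed form is easy to sanity-check by brute force for small $$$n$$$ and $$$k$$$ (a quick sketch; `total_position_sum` is my name, not from the problem):

```python
from itertools import combinations
from math import comb

def total_position_sum(n, k):
    """Sum, over every array in S(k), of the (1-indexed) positions of its
    k ones; each array corresponds to a k-subset of {1, ..., n}."""
    return sum(sum(pos) for pos in combinations(range(1, n + 1), k))

# Compare against the closed form C(n, k) * k * (n + 1) / 2 for one case:
assert total_position_sum(5, 2) == comb(5, 2) * 2 * 6 // 2  # both equal 60
```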
|
Let $f$ be a monotone decreasing, continuously differentiable function with $\lim_{x\rightarrow \infty}f(x)=0$. Let $\chi$ be a non-principal Dirichlet character. It is standard to show that $\sum_{n\geq x}\chi(n)f(n)=O(f(x))$, where the big-O constant is easily computable and depends only on $\chi$. In particular, we have $\sum_{n\leq x}\chi(n)f(n)=A+O(f(x))$ where $A=\sum_{n\in \mathbb{N}}\chi(n)f(n)$ is a constant.
When $f(x)=\log(x)/x$, Mertens used the fact that $\log(ab)=\log(a)+\log(b)$ to re-express the sum in terms of a sum over primes. He showed that $\sum_{p\leq x}\chi(p)\log(p)/p$ is bounded, in absolute value, by a computable constant. Then, by partial summation techniques, he removed the $\log(p)$ from the numerator and obtained bounds of the form $$\left|\sum_{p\leq x}\chi(p)/p- C \right| < D/\log(x)$$ where $C$ and $D$ are easily computable constants (possibly depending on $\chi$).
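For concreteness, the step that removes $\log(p)$ from the numerator can be sketched via Abel (partial) summation; the notation $A(x)$ below is mine, and this is a sketch of the shape of the argument, not Mertens' exact constants:

```latex
% Let A(x) = \sum_{p \le x} \chi(p)\log(p)/p, which is bounded by hypothesis,
% say |A(x)| \le M for all x \ge 2. Writing 1/\log p as the extra weight,
\[
\sum_{p \le x} \frac{\chi(p)}{p}
  \;=\; \sum_{p \le x} \frac{\chi(p)\log p}{p}\cdot\frac{1}{\log p}
  \;=\; \frac{A(x)}{\log x} \;+\; \int_{2}^{x} \frac{A(t)}{t \log^{2} t}\,dt .
\]
% Since \int_2^\infty dt/(t \log^2 t) converges, boundedness of A gives
% convergence of the integral, and both tail terms are O(M/\log x),
% which is the shape of the bound D/\log(x) quoted above.
```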
My question is two-fold. First, what conditions on a function $f$ (satisfying any of the nice properties above, or more) guarantee that $\sum_{p\in \mathbb{N}}\chi(p)f(p)$ exists?
Second, since $-L'(s;\chi)/L(s;\chi)$ is analytic in a neighborhood of $s=1$, we know that $\sum_{p\in\mathbb{N}}\chi(p)\log(p)/p$ converges, say to a constant $E$. Is there an easy way to obtain explicit bounds of the form $$\left|\sum_{p\leq x}\chi(p)\log(p)/p - E \right| < o(1)$$ where the $o$-function is fairly simple, etc...?
The reason I ask is that I want to find an effective formula for $\sum_{p\equiv a\pmod{k},\, p\leq x}\log(p)/p$, where the error term is small. If anyone has such a reference that would also be appreciated.
|