You are blindfolded and disoriented, standing exactly 1 mile from the Great Wall of China. How far must you walk to find the wall?
Assume the earth is flat and the Great Wall is infinitely long and straight.
$\DeclareMathOperator{\arcsec}{arcsec}$
For each possible orientation of the wall (relative to some arbitrary initial orientation), the point on the wall closest to our starting point is a distance $1$ away. The collection of the closest points for all possible orientations of the wall form a circle of radius $1$ around our starting point.
If we move a distance $r>1$ away from the initial point, we intersect two orientations of the wall that are an angle $\theta$ apart. In order to reach that point we must have crossed all of the orientations in that angle. In the figure below on the left, those "explored" points are marked by a magenta line.
By trigonometry we can show that $\theta = 2\arcsec r$. If we traverse the path shown on the right side of the above figure, we travel a worst-case distance of:
$$ r + r(2\pi - \theta) \\ r + 2r(\pi - \arcsec r) $$
This distance is minimized when $r\approx 1.04356$, for a worst-case distance of $6.99528$, an improvement of about $3.95\%$ over the naive $2\pi+1\approx 7.28$ bound.
However, looking at the figure we can immediately see that the majority of the large circular arc is "wasted" distance. Only the ends contribute to additional "explored" points. If we shrink-wrap the rest of the path around the unit circle, we get the following path:
The worst-case distance of this path is:
$$ r + 2\sqrt{r^2-1} + (2\pi - 2\theta) \\ r + 2\left(\sqrt{r^2-1} + \pi - 2\arcsec r\right) $$
This happens to be minimized for $r = \sqrt{\frac{15-\sqrt{33}}{6}} \approx 1.24200$ (not the distance shown in the figure), for a worst-case distance of:
$$ \sqrt{\frac{9+\sqrt{33}}{2}}+4\arctan \sqrt{\frac{9+\sqrt{33}}{8}} \approx 6.45891 $$
an improvement of $11.32\%$.
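As a quick numerical sanity check of these constants (not part of the original answer; $\arcsec r$ is computed as `acos(1/r)`):

```python
import math

def worst_case(r):
    # r + 2*(sqrt(r^2 - 1) + pi - 2*arcsec(r)), valid for r > 1
    return r + 2 * (math.sqrt(r * r - 1) + math.pi - 2 * math.acos(1 / r))

# closed-form minimizer claimed in the answer
r_star = math.sqrt((15 - math.sqrt(33)) / 6)
best = worst_case(r_star)

# crude grid search over r in (1, 1.6) around the claimed optimum
grid_best = min(worst_case(1 + i / 100000) for i in range(1, 60000))
print(r_star, best)  # ~1.24200, ~6.4589
```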
Thanks to Michael Seifert for pointing out that we can do better by letting the radii of the start and end be different, in which case we have the distance:
$$ r_1 + \sqrt{r_1^2-1} + \sqrt{r_2^2-1} + 2\pi - \theta_1 - \theta_2 \\ r_1 + \sqrt{r_1^2-1} + \sqrt{r_2^2-1} + 2\pi - \arcsec r_1 - \arcsec r_2 $$
Which is minimized by $r_1=2/\sqrt{3},\ r_2=\sqrt{2}$ (with $\theta_1=\pi/3,\ \theta_2=\pi/2$):
(Because of the nice angles, this picture is exactly to scale.) The worst-case distance here is simply
$$ \frac{2}{\sqrt{3}} + \frac{1}{\sqrt{3}} + \frac{2\pi}{3} + \frac{\pi}{2} + 1 \\ = 1 + \sqrt{3} + \frac{7\pi}{6} $$
(a $12.16\%$ improvement.)
If the angle between the possible wall and the initial line is $x$ (the angle between the diagonal line and the bottom line in the diagram below), then the distance travelled is $1+(\pi/2+2x)+1/\tan(x)+1/\sin(x)$.
Gratifyingly this gives a slightly improved answer of $2+3\pi/2\approx6.7124$ for my first attempt (because you can drop down straight rather than complete the circle), where $x=\pi/2$.
It also gives my second attempt for $x=\pi/4$ (answer $2+\sqrt{2}+\pi\approx6.5558$).
Throwing the expression into Wolfram Alpha shows that the minimum occurs at $x=\pi/3$. This gives a value of $1+\sqrt{3}+7\pi/6\approx6.397$.
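A grid search reproduces the same minimum without Wolfram Alpha (a hedged sanity check, not part of the original post):

```python
import math

def dist(x):
    # 1 + (pi/2 + 2x) + 1/tan(x) + 1/sin(x), for x in (0, pi/2)
    return 1 + math.pi / 2 + 2 * x + 1 / math.tan(x) + 1 / math.sin(x)

xs = [i * math.pi / 2 / 100000 for i in range(1, 100000)]  # grid over (0, pi/2)
x_best = min(xs, key=dist)
closed_form = 1 + math.sqrt(3) + 7 * math.pi / 6           # claimed value at x = pi/3
```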
Old new upper bound: $2+\sqrt{2}+\pi$ as per diagram:
(Old upper bound: $2\pi+1$ miles. Walk 1 mile in any direction and then walk is a circle of radius 1, centred at your starting point. )
I would like to present this non-rigorous but hopefully more intuitive explanation for the optimal path. (The technique used here was very helpful for working on Oray's variant with two people.)
The first part of 2012rcampion's answer explains that we should go as far out as some tangent $l$, before going around the circle to get back to $l$ on the other side. Call the starting point $A$ and the circle $O$. Then the problem is this:
Find the shortest path that comes from $A$, touches $l$, then goes around the circle and touches $l$ again.
It won't change which path is shortest if we turn around at the end and go all the way back:
Find the shortest path that comes from $A$, touches $l$, then goes around the circle and touches $l$ again, and then goes back around the circle to $l$ and then $A$.
Now, if we reflect the entire diagram over $l$, we get this:
Instead of having our path touch $l$ and go back, we can have it switch sides every time instead, which won't change the length because it's just a reflection. So now the problem is this:
Find the shortest path from point $A$ that goes around circle $O^\prime$, then around circle $O$, then goes to point $A^\prime$.
Anyone should be able to do that (imagine putting a string from $A$ around the circles to $A^\prime$ and pulling it tight):
And now if we only look at the part of the diagram above $l$, there's the answer without any calculations.
"...standing exactly 1 mile from the Great Wall of China. How far must you walk to find the wall?"
You must walk 1 mile. If you go the wrong way then you will end up walking further. If you don't walk that far you can't reach it.
@DrXorile is close to the answer. Mine isn't an answer either, but here's some food for thought.
I wanted to picture it; it looks like this:
If we take 360 individuals, all starting at the center of the circle and each at a different angle, only one will find the wall.
That's roughly a $1/360 \approx 0.28\%$ chance of finding the wall if you walk exactly one mile. If you need to reach the wall for your survival, you're dead.
Also imagine the guy who started just one degree off: he extends his hands, the wall is just 2 inches further, and he starts over in the wrong direction.
Walking more than one mile means we can increase our chances of reaching the wall at a slightly-off angle.
But then again, this could happen:
Constraints on non-Standard Model Higgs boson interactions in an effective field theory using differential cross sections measured in the $H \rightarrow \gamma\gamma$ decay channel at $\sqrt{s} = 8$ TeV with the ATLAS detector Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: Physics Letters B Issue / Number: 753 Start Page: 69 End Page: 85 Document Type: Article ID: 716329.0
Dijet production in $\sqrt{s}=7$ TeV $pp$ collisions with large rapidity gaps at the ATLAS experiment Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: Physics Letters B Issue / Number: 754 Start Page: 214 Document Type: Article ID: 716381.0
Measurement of the correlations between the polar angles of leptons from top quark decays in the helicity basis at $\sqrt{s}=7$ TeV using the ATLAS detector Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: Physical Review D Issue / Number: 93 Start Page: 012002 Document Type: Article ID: 716376.0
Measurement of the production cross-section of a single top quark in association with a $W$ boson at 8 TeV with the ATLAS experiment Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: Journal of High Energy Physics Issue / Number: 01 Start Page: 064 Document Type: Article ID: 716455.0
Measurements of fiducial cross-sections for $t\bar{t}$ production with one or two additional $b$-jets in $pp$ collisions at $\sqrt{s}$ = 8 TeV using the ATLAS detector Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: European Physical Journal C Issue / Number: 76 Start Page: 11 Document Type: Article ID: 716253.0
Measurements of four-lepton production in $pp$ collisions at $\sqrt{s}=$ 8 TeV with the ATLAS detector Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: Physics Letters B Issue / Number: 753 Start Page: 552 End Page: 572 Document Type: Article ID: 716448.0
Measurements of the Higgs boson production and decay rates and coupling strengths using $pp$ collision data at $\sqrt{s}=7$ and $8$ TeV in the ATLAS experiment Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: European Physical Journal C Issue / Number: 76 Start Page: 6 Document Type: Article ID: 716212.0
Searches for scalar leptoquarks in $pp$ collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: European Physical Journal C Issue / Number: 76 Start Page: 5 Document Type: Article ID: 716248.0
Search for a high-mass Higgs boson decaying to a $W$ boson pair in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: Journal of High Energy Physics Issue / Number: 01 Start Page: 032 Document Type: Article ID: 716261.0
Search for flavour-changing neutral current top-quark decays to $qZ$ in $pp$ collision data collected with the ATLAS detector at $\sqrt{s}=8$ TeV Authors: ATLAS Collaboration Date of Publication (YYYY-MM-DD): 2016 Title of Journal: European Physical Journal C Issue / Number: 76 Start Page: 12 Document Type: Article ID: 716251.0
observation: the usual topological proof that complex polynomials have roots uses that the degree of $f/|f|$ on some large radius circle is one (by explicit calculation). the real version of this is the intermediate value theorem; it says that if the degree of $f/|f|: S^0 \to S^0$ is one, $f$ has a zero
We have that $$\int_{\Sigma}2\sqrt{1-x^2-y^2}dA=\iint_D2dxdy$$ where $\Sigma (x,y)=(x,y,-\sqrt{1-x^2-y^2})$. What will $D$ be?
Will it be the set of all $(x,y)$ such that the square root is defined? It must be $1-x^2-y^2\geq 0 \Rightarrow y^2\leq 1-x^2 \Rightarrow -\sqrt{1-x^2}\leq y\leq \sqrt{1-x^2}$. So that the square root $\sqrt{1-x^2}$ is defined, it must be $1-x^2\geq 0 \Rightarrow x^2\leq 1\Rightarrow -1\leq x\leq 1$. So, we get that $D=\{(x,y)\mid -1\leq x\leq 1, -\sqrt{1-x^2}\leq y\leq \sqrt{1-x^2}\}$.
@BalarkaSen do you remember a simple proof that hypersurfaces in $\Bbb R^n$ are orientable? My students want to do it using Jordan separation, which is quite nice, but I feel like there should be something simpler
I guess maybe I'm secretly just going to reproduce a proof of Jordan-Brouwer
Oh duh, you check that the normal bundle is trivial otherwise you would have nontrivial intersection with some loop
@AkivaWeinberger I had an inductive proof which is similar to yours. Assume by induction true for n lines. For n+1 lines, color the thing formed by n of them. Then reverse the colors on one side of the line
hi, if $A,B$ are 3x3 matrices such that $A$ is invertible, is $A + nB$ invertible for some $n$? I assumed the answer to be no, then used the multilinearity of the determinant and analysed the value for $n-1$, $n$, $n+1$, and arrived at a contradiction. But this doesn't seem to me to be an elegant solution (also I haven't verified if my method is right), so can someone answer this?
So I posted the question and arrived at an answer in a non-elegant way (1st comment); I wanted to cross-check my answer and also find a simpler one, so could you recheck whether the logic behind your answer and mine is the same?
What tools, ways would you propose for getting the closed form of this integral?$$\int_0^{\pi/4}\frac{\log(1-x) \tan^2(x)}{1-x\tan^2(x)} \ dx$$EDIT: It took a while since I made this post. I'll give a little bounty for the solver of the problem, 500 points bounty.Supplementary question:Ca...
Consider distinct integer polynomials of distinct degree. Also they are all univariate polynomials of the variable $x$.Let $*$ denote composition and $*$ is the operation for the monoid.Consider when such polynomials $a,b,c,d$ forms an abelian monoid.Many questions arise naturally.For insta...
Let $f : X \to \Bbb{R}$ be some function, where $X$ is a measurable space. I have that $f^{-1}([q,\infty))$ is measurable for every rational $q$. I need to conclude that $f^{-1}([c,\infty))$ is measurable for all $c \in \Bbb{R}$. Let $(q_n)$ be a decreasing rational sequence converging to $c$. Is $[c, \infty) = \bigcup_{n=1}^\infty [q_n , \infty)$ or $[c, \infty) = \bigcap_{n=1}^\infty [q_n , \infty)$?
This is giving me so much damn trouble for some reason...
Let $u_t$ be the random walk $$ u_t = u_{t-1} + \varepsilon_t $$ where $\mathrm{E}[\varepsilon_t]=0$ and $\mathrm{var}[\varepsilon_t]=\sigma^2$, i.e. $\varepsilon_t$ is stationary.
Now let $$X_t = \alpha u_t +\nu_t$$ and$$Y_t = \beta u_t + \eta_t$$where $\nu_t$ and $\eta_t$ are stationary processes similar to $\varepsilon_t$
Then both $X_t$ and $Y_t$ are non stationary because they are linear functions of the non-stationary (stochastic trend) variable $u_t$.
However $$\beta X_t - \alpha Y_t = \beta \nu_t - \alpha \eta_t $$ is a linear combination of the stationary disturbances and is therefore stationary. When this happens, $X_t$ and $Y_t$ are said to be cointegrated: they contain the same stochastic trend.
The idea behind the Dickey-Fuller test is to estimate a regression that recovers the ratio of $\alpha$ and $\beta$, and then to test whether the estimated residuals are stationary. These residuals do not follow a standard distribution.
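A small simulation makes this concrete (a sketch using numpy only; the values of $\alpha$, $\beta$ and the unit noise scales are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, beta = 20000, 0.5, 2.0

u = np.cumsum(rng.normal(0, 1, T))   # random walk u_t (the stochastic trend)
X = alpha * u + rng.normal(0, 1, T)  # X_t = alpha*u_t + nu_t
Y = beta * u + rng.normal(0, 1, T)   # Y_t = beta*u_t + eta_t
Z = beta * X - alpha * Y             # = beta*nu_t - alpha*eta_t, stationary

# the variance of the non-stationary series grows with the sample,
# while the cointegrating combination's variance stays bounded
print(np.var(X[:1000]), np.var(X))   # grows
print(np.var(Z[:1000]), np.var(Z))   # stays near beta^2 + alpha^2 = 4.25
```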
According to the Perron-Frobenius theorem, a real matrix with only positive entries (or one with non-negative entries with a property called irreducibility) will have a unique eigenvector that contains only positive entries. Its corresponding eigenvalue will be real and positive, and will be the eigenvalue with greatest magnitude.
I have a situation where I'm interested in such an eigenvector. I'm currently using numpy to find all the eigenvalues, then taking the eigenvector corresponding to the one with largest magnitude. The trouble is that for my problem, when the size of the matrix gets large, the results start to go crazy, e.g. the eigenvector found that way might not have all positive entries. I guess this is due to rounding errors.
Because of this, I'm wondering if there's an algorithm that can give better results by making use of the facts that $(i)$ the matrix has non-negative entries and is irreducible, and $(ii)$ we're only looking for the eigenvector whose entries are positive. Since there are algorithms that can make use of other matrix properties (e.g. symmetry), it seems reasonable to think this might be possible.
While writing this question it occurred to me that just iterating $\nu_{t+1} = \frac{A\nu_t}{|A\nu_t|}$ will work (starting with an initial $\nu_0$ with positive entries), but I imagine that with a large matrix the convergence will be very slow, so I guess I'm looking for a more efficient algorithm than this. (I'll try it though!)
Of course, if the algorithm is easy to implement and/or has been implemented in a form that can easily be called from Python, that's a huge bonus.
Incidentally, in case it makes any kind of difference, my problem is this one. I'm finding that as I increase the matrix size (finding the eigenvector using Numpy as described above) it looks like it's converging, but then suddenly starts to jump all over the place. This instability gets worse the smaller the value of $\lambda$.
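The iteration $\nu_{t+1} = A\nu_t/|A\nu_t|$ suggested in the question is exactly the power method, and its convergence is geometric with ratio $|\lambda_2/\lambda_1|$, so it is often not slow at all. A minimal numpy sketch (the function name, tolerance, and test matrix are my own choices):

```python
import numpy as np

def perron_vector(A, tol=1e-12, max_iter=10000):
    """Power iteration for the Perron eigenvector of a non-negative,
    irreducible matrix: normalize A@v repeatedly until it stops moving."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)      # positive starting vector
    for _ in range(max_iter):
        w = A @ v
        lam = np.linalg.norm(w)      # eigenvalue estimate
        w = w / lam
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return lam, w

rng = np.random.default_rng(1)
A = rng.random((50, 50)) + 0.01      # strictly positive matrix
lam, v = perron_vector(A)
```

Unlike a full eigendecomposition, this never mixes in the other eigenvectors, so the positivity of the iterate is preserved at every step.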
5. Matrix and more examples. Example 12 (HKALE Pure 93 I 10). Let $M$ be the set of $3\times 3$ real matrices. For $A,B\in M$, define a relation $\sim$ by: $A\sim B$ if there exists a non-singular $P\in M$ such that $A=PBP^{-1}$.
a) Show that ~ is an equivalence relation.
b) Show if $A\sim B$ then $A^n\sim B^n$
Not really an "MI" question, but this is a rare question involving MI, because proofs about matrices rarely depend on integers, and such proofs would otherwise be extremely difficult.
Definition. A relation A~B is an equivalence relation if and only if:
1) A~A (Reflexivity)
2) A~B implies B~A (symmetry)
3) A~B, B~C implies A~C (Transitivity).
Example 13. Define a~b if $a-b=c\in Q$; this is an equivalence relation because:
1) $x-x=0\in Q$, therefore $x\sim x$.
2) $a-b=c\in Q$ implies $b-a=-c\in Q$ so it's symmetrical.
3) $a-b,b-c\in Q$ implies $a-c\in Q$ since the rationals are closed under addition.
Now back to the original problem:
1) $A=IAI^{-1}$, therefore A~A.
2) $A=PBP^{-1}$, then $B=(P^{-1})A(P^{-1})^{-1}$, so it's symmetrical.
3) If $A=PBP^{-1}$ and $B=QCQ^{-1}$, then $A=(PQ)C(PQ)^{-1}$, where $PQ$ is non-singular (by determinants), so it's transitive. For part (b) we have $A^n=(PBP^{-1})^n=PB^nP^{-1}$ so that $A^n\sim B^n$. How can we do this through MI?
A~B is the case n=1, which is given. Assume $A^n\sim B^n$ via $A^n=PB^nP^{-1}$; then $A^{n+1}=(PB^nP^{-1})(PBP^{-1})=PB^{n+1}P^{-1}$ and the result follows, though this is quite routine.
Readers can try to prove the following:
1) If C~0, then C is zero matrix.
2) AB ~ BA may not be true.
<i>Example 14. (HKALE Pure 1992 I 5)</i> Given that $u_1=0$, $u_{n+1}=2n-u_n$, show that $u_n=n+\frac{-1+(-1)^n}{2}$.
Approach 1: induction through odd and even separately.
We have $u_{n+2}=2+u_n$ so that it's true by summing up from the first term ($u_1$ for odd terms and $u_2$ for even terms).
Approach 2: induction through all integers.
Assume $u_n=n+\frac{-1+(-1)^n}{2}$, then $u_{n+1}=2n-u_n=n-\frac{-1+(-1)^n}{2}=(n+1)+\frac{-1+(-1)^{n+1}}{2}$ so that the induction is complete.
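The closed form can be checked against the recurrence with a few lines of code (a sanity test of the algebra, not part of the original example):

```python
# u_1 = 0, u_{n+1} = 2n - u_n, 1-indexed (index 0 unused)
u = [None, 0]
for n in range(1, 50):
    u.append(2 * n - u[n])

# closed form: u_n = n + (-1 + (-1)^n)/2 (the fraction is always an integer)
closed = lambda n: n + (-1 + (-1) ** n) // 2
print(u[1:11])
```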
<i>Example 15. (HKALE Pure 1989 I 11a)</i> Prove the existence and uniqueness of $a_n,b_n$ so that $(1+\sqrt{2})^n=a_n+b_n\sqrt{2}$, and also:
i) $a_n\equiv 1\mod 2$ for all n.
ii) $b_n\equiv 1\mod 2$ for odd n.
Existence is trivial, since $Z[\sqrt{2}]$ (numbers of the form $a+b\sqrt{2}$ with $a,b$ integers) is closed under multiplication, so there is at least one solution $(a_n,b_n)$ for every n. The representation is unique because $\sqrt{2}$ is irrational, so $a\sqrt{2}=b$ has no non-zero rational solution.
For n=1, a=b=1.
Part I: Assume $a_n$ is odd for n = k. For n = k+1,
$(a_n+b_n\sqrt{2})(1+\sqrt{2})=(a_n+2b_n)+(a_n+b_n)\sqrt{2}=a_{n+1}+b_{n+1}\sqrt{2}$ so that $a_{n+1}=a_n+2b_n\equiv 1\mod 2$.
Part II: Notice that $(1+\sqrt{2})^2=3+2\sqrt{2}$; tweak the proposition to: for $(1+\sqrt{2})(3+2\sqrt{2})^{n-1}=c_n+d_n\sqrt{2}$, $d_n\equiv 1\mod 2$. Assume that n = k is true; then for n = k+1,
$(c_n+d_n\sqrt{2})(3+2\sqrt{2})=(3c_n+4d_n)+(2c_n+3d_n)\sqrt{2}$, so that $d_{n+1}=2c_n+3d_n$ is also odd.
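Both parts can be spot-checked by running the multiplication in $Z[\sqrt{2}]$ directly (a sketch; the recurrence $a_{n+1}=a_n+2b_n$, $b_{n+1}=a_n+b_n$ is the one derived in Part I):

```python
# (a + b*sqrt2)(1 + sqrt2) = (a + 2b) + (a + b)*sqrt2
a, b = 1, 1                 # (1 + sqrt2)^1
pairs = {1: (1, 1)}
for n in range(2, 60):
    a, b = a + 2 * b, a + b
    pairs[n] = (a, b)
print(pairs[2], pairs[3])   # (3, 2), (7, 5)
```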
To end this one we would introduce a special case of M.I.
<i>Theorem. (M.I. from the middle)</i> The proposition is true for all n if:
a) P(k) is true, k is not necessarily 1.
b) Assuming P(x) is true, then P(x+1) and P(x-1) is true.
If P(x-1) is not necessarily true, then there exists $n_0\leq k$ such that for all $n\geq n_0$, P(n) is true.
Somehow we don't have many elementary cases for this, because most proofs involving M.I. are true from n=1. Such bounding occurs more frequently in inequality and bounding cases, which can be much more difficult. There's a modified question at the bottom (Q1) concerning this M.I. method.
To conclude, the followings are usual techniques for M.I.:
1) Expressing the case n = k+1 in terms of n = k, including their difference.
2) Separate odd and even cases, especially in sequences.
3) Artificially make up expression, like $\frac{(n+2)(n+3)}{2}=\frac{((n+1)+1)((n+1)+2)}{2}$ as to show the case n = k+1
4) For M.I. in inequalities, the quantities usually behave monotonically as the power increases, so observing the pattern is important.
Problems from HK A-Level Examinations:
1) (HKALE 1993 Pure I 2 modified)
$u_8=47,u_{11}=199$ and $u_{n+2}=u_{n+1}+u_n$ for all natural n. Show that $u_n=\alpha^n+\beta^n$ where $\alpha, \beta$ are roots of $x^2-x-1=0$ by:
a) Finding $u_1,u_2$ and induction through natural n.
b) By M.I. from the middle.
2) (HKALE 1996 Pure II 7b)
Show that $\sum_{r=0}^n \frac{x^r}{r!}+\frac{ex^{n+1}}{(n+1)!} > e^x > \sum_{r=0}^n \frac{x^r}{r!}$; hence show that $\sum_{r=0}^n \frac{x^r}{r!}$ tends to $e^x$.
3) (HKALE 1991 Pure I 11)
a) Prove the existence and uniqueness of natural $p_n,q_n$ with $(\sqrt{3}+\sqrt{2})^{2n}=p_n+q_n\sqrt{6}$. Also show that $(\sqrt{3}-\sqrt{2})^{2n}=p_n-q_n\sqrt{6}$. Hence deduce that $2p_n-1$ < $(\sqrt{3}+\sqrt{2})^{2n}$ < $2p_n$.
b) Show that the followings are multiples of 10:
i) $2^{5n}-2^n$
ii) $3^{4n}-1$
iii) $2p_{2n}-(2^{3n+1})(3^n)$.
c) By (a),(b) or otherwise, find the unit digit of $[(\sqrt{3}+\sqrt{2})^{100}]$ when it's expressed in decimal form. Note that [] is the integer (floor) function.
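For question 3a, one useful invariant (my own observation, not stated in the problem) is the norm identity $p_n^2-6q_n^2=1$: since $(\sqrt{3}+\sqrt{2})^2=5+2\sqrt{6}$ has norm $5^2-6\cdot 2^2=1$ and norms are multiplicative, it holds for every $n$, and it is what makes $(\sqrt{3}-\sqrt{2})^{2n}=p_n-q_n\sqrt{6}$ lie in $(0,1)$. A quick integer check, using the recurrence $(p,q)\mapsto(5p+12q,\,2p+5q)$ obtained by multiplying out $(p+q\sqrt{6})(5+2\sqrt{6})$:

```python
# p_1 + q_1*sqrt6 = (sqrt3 + sqrt2)^2 = 5 + 2*sqrt6
p, q = 5, 2
ok = True
for _ in range(200):
    p, q = 5 * p + 12 * q, 2 * p + 5 * q   # multiply by 5 + 2*sqrt6
    ok = ok and (p * p - 6 * q * q == 1)   # norm stays 1 (exact integers)
print(ok)
```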
Definition: Generalized Sum
Let $\subseteq$ denote the subset relation on $\mathcal F$.
Define the net $\phi: \mathcal F \to G$ by:
$\displaystyle \phi \left({F}\right) = \sum_{i \mathop \in F} g_i$
Then $\phi$ is denoted $\displaystyle \sum \left\{{g_i: i \in I}\right\}$ and referred to as a generalized sum.
Statements about convergence of $\displaystyle \sum \left\{{g_i: i \in I}\right\}$ are as for general convergent nets.
This nomenclature is appropriate as we have Absolutely Convergent Generalized Sum Converges.
Note
While the notion of a topological group may be somewhat overwhelming, one may as well read normed vector space in its place to at least grasp the most important use of a generalized sum.
Abstract
Let $T$ be a complete, first-order theory in a finite or countable language having infinite models. Let $I(T,\kappa)$ be the number of isomorphism types of models of $T$ of cardinality $\kappa$. We denote by $\mu$ (respectively $\hat\mu$) the number of cardinals (respectively infinite cardinals) less than or equal to $\kappa$.
THEOREM. $I(T,\kappa)$, as a function of $\kappa > \aleph_0$, is the minimum of $2^{\kappa}$ and one of the following functions:
1. $2^\kappa$;
2. the constant function $1$;
3. $\begin{cases} |\hat\mu^n/{\sim_G}|-|(\hat\mu - 1)^n/{\sim_G}| & \hat\mu\lt \omega \\ \hat\mu & \hat\mu\ge\omega \end{cases}$ for some $1\lt n\lt \omega$ and some group $G\le \mathrm{Sym}(n)$;
4. the constant function $\beth_2$;
5. $\beth_{d+1}(\mu)$ for some infinite, countable ordinal $d$;
6. $\sum_{i=1}^d \Gamma(i)$ where $d$ is an integer greater than $0$ (the depth of $T$) and $$ \Gamma(i)\; \textit{ is either } \beth_{d-i-1}(\mu^{\hat\mu})\; \textit{ or } \beth_{d-i}(\mu^{\sigma(i)} + \alpha(i)),$$ where $\sigma(i)$ is either $1$, $\aleph_0$ or $\beth_1$, and $\alpha(i)$ is $0$ or $\beth_2$; the first possibility for $\Gamma(i)$ can occur only when $d-i>0$.
The cases (2), (3) of functions taking a finite value were dealt with by Morley and Lachlan. Shelah showed that (1) holds unless a certain structure theory (superstability and extensions) is valid. He also characterized (4) and (5) and showed that in all other cases, for large values of $\kappa$, the spectrum is given by $\beth_{d-1}(\mu^{<\sigma})$ for a certain $\sigma$, the "special number of dimensions."
The present paper shows, using descriptive set theoretic methods, that the continuum hypothesis holds for the special number of dimensions. Shelah's superstability technology is then used to complete the classification of all possible uncountable spectra.
Bonds are traditionally valued using the discount curve, the credit spread to determine the probability of default and usually you still have to adjust the price using a spread because bonds can become illiquid.
If you ignore the last spread, the price is the sum of discounted cashflows, except that at every point in time you need to compute the implied probability of default. If the bond defaults, the market assumes that you receive a recovery. By computing the value of the coupon leg conditional on survival and the default leg for the default case, you should obtain an NPV which is much closer to the traded price of this security.
Usually you still need to adjust this price by a spread as the market can trade the security at a different price than your theoretical price and this is especially true for illiquid bonds. You could generalise this by saying that the market tries to anticipate what is the distribution of the expected cashflows and asks for a premium for uncertainty.
Assuming a flat rate, the basic NPV of the cashflows is:$$B = P_T \frac{1}{(1 + r)^T} + \sum C_t \frac{1}{(1 + r)^t}$$
Some would try to assume that you need to add a credit spread over the discounting rate:$$B = P_T \frac{1}{(1 + r + c)^T} + \sum C_t \frac{1}{(1 + r +c)^t}$$
This is still incorrect: what you actually want is to use the implied default probability from the CDS spread. You would want to compute:$$B = P_T \frac{1}{(1 + r)^T}surv(T) + \sum C_t \frac{1}{(1 + r)^t}surv(t)$$
where $surv(t)$ is the cumulative probability of survival at time $t$.
But you are still incorrect: bonds actually pay you a recovery, in the form of cash or new bonds, in the case of default. Assuming that this recovery is represented by an amount $R$ and the instantaneous default probability at time $t$ is represented by a function $q(t)$:$$B = P_T \frac{1}{(1 + r)^T}surv(T) + \sum C_t \frac{1}{(1 + r)^t}surv(t) + R \int_{t=0}^T \frac{1}{(1 + r)^t} \, q(t)\, dt$$
Even with all these adjustments, you will still not be matching the market price of a security, you can consider the difference is likely to be due to some liquidity issues or error in your estimation of your recovery or other parameters.
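A toy implementation of the progression above, assuming annual coupons, a flat rate $r$, and a flat per-year default probability $h$ (so $surv(t)=(1-h)^t$, and the discrete year-$t$ default probability $surv(t-1)-surv(t)$ stands in for the integral term; all numbers are illustrative):

```python
def bond_npv(face, coupon, r, h, recovery, T):
    """Discounted cashflows weighted by survival, plus a recovery
    payment in the year of default (discrete approximation)."""
    surv = lambda t: (1 - h) ** t
    disc = lambda t: 1 / (1 + r) ** t
    npv = face * disc(T) * surv(T)                                  # principal leg
    npv += sum(coupon * disc(t) * surv(t) for t in range(1, T + 1)) # coupon leg
    npv += sum(recovery * disc(t) * (surv(t - 1) - surv(t))         # default leg
               for t in range(1, T + 1))
    return npv

risk_free = bond_npv(100, 5, 0.03, 0.00, 40, 10)  # h = 0: plain discounted cashflows
risky = bond_npv(100, 5, 0.03, 0.02, 40, 10)      # default risk lowers the price
```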
There are various basis methods which will yield different results; the best is to find which one the people who trade this market seem to be using, and to try to match the market in addition to calculating a fair theoretical value.
HTH
This was supposed to be a comment, but it became too long.
When you want your students to understand the material, I think it's good practice to avoid memorized procedures as much as possible (I avoid the term 'trick' because I want to distinguish between two kinds of tricks: IMO, there is a good and a bad kind). They are harmful for understanding of the topic. A much more useful skill is to derive formulas on the spot. I have to admit that there are examples in which it is just not feasible to do this (for example, I would advise just remembering the quotient rule, as it's too complex to derive on the spot every time, but I think it's also good to be able to derive it, so that forgetting a rule is no big deal).
So, to recap, I think it's better to teach techniques which enable students to come up with formulas. Remembering a technique (normally, I'd refer to a technique as a trick, rather than a formula or abbreviation) usually works way better than remembering a formula, because there is usually some intuition behind the technique, and our brains are just better at remembering ideas than raw formulas.
I can (hopefully) show what I mean, by an example.
Consider two students that have learned how to find roots of a quadratic equation.
Student A has learned to write $x^2 + bx + c = 0$ in the form $(x-a)^2 = d$, so that you can take a square root on both sides, then solve for $x$ to get $x = a \pm \sqrt{d}$. Ideally, he would understand that we translate the graph of the polynomial (by substituting for $x$) to get rid of the $bx$ term, get an equation that is easy to solve, and then translate/substitute back again.
Student B has learned $$ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$
Now suppose they learn about cubic polynomials and their roots. Using some clever geometric tricks, it is possible to solve $x^3 + px + q = 0$.
Student A's knowledge of the technique now enables him to see a similar possibility here. If you have $x^3 + ux^2 + vx + w = 0$, and you can find an $a$ such that the quadratic term vanishes (just like the first-order term vanished in the quadratic equation), you can write any cubic equation in the simpler form. So student A may realize that the solution to the simpler cubic equations is enough to solve all cubic equations.
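The substitution can be made concrete: putting $x = t - u/3$ into $x^3+ux^2+vx+w$ gives a depressed cubic $t^3+pt+q$ with $p=v-u^2/3$ and $q=2u^3/27-uv/3+w$. A quick numeric check (the sample coefficients are arbitrary; numpy's root finder is only used to confirm the roots shift by $u/3$):

```python
import numpy as np

u, v, w = 6.0, -4.0, 2.0         # an arbitrary cubic x^3 + u x^2 + v x + w
p = v - u * u / 3                # coefficients after the shift x = t - u/3
q = 2 * u ** 3 / 27 - u * v / 3 + w

orig_roots = np.sort(np.roots([1, u, v, w]))
depr_roots = np.sort(np.roots([1, 0, p, q]))
# roots of the depressed cubic are the original roots shifted by u/3
```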
Article Abstract
This work shows how to encode inductive types using recursion schemes. Unlike the Church–Boehm–Berarducci encoding, which can encode inductive types without a fixpoint, recursion schemes require the fixpoint constructor in the typechecker core in order to express this encoding. We will use the cubical type checker from Mörtberg et al. You may want to try this in Agda, Idris, Coq, or any other MLTT prover.
Fixpoint
The core fixpoint reflection type is parametrized by a functor and has only one constructor, holding the value of this functor applied to the fixpoint itself.
data fix (F: U -> U) = Fix (point: F (fix F))
We also need functions for projecting and embedding values from/to the fixpoint functorial stream.
unfix (F: U -> U): fix F -> F (fix F) = split Fix f -> f
embed (F: U -> U): F (fix F) -> fix F = \(x: F (fix F)) -> Fix x
F-Algebra
F-Algebras give us a categorical understanding of recursive types. Let $F : C \rightarrow C$ be an endofunctor on category $C$. An F-algebra is a pair $(A, \varphi)$, where A is an object and $\varphi\ : F\ A \rightarrow A$ is a morphism in the category $C$. The object A is the carrier and the functor F is the signature of the algebra. Reversing arrows gives us F-coalgebra.
Initial Algebra
An F-algebra $(\mu F, in)$ is the initial F-algebra if for any F-algebra $(C, \varphi)$ there exists a unique arrow $\llparenthesis \varphi \rrparenthesis : \mu F \rightarrow C$, called a catamorphism. Similarly, an F-coalgebra $(\nu F, out)$ is the terminal F-coalgebra if for any F-coalgebra $(C, \phi)$ there exists a unique arrow $\llbracket \phi \rrbracket : C \rightarrow \nu F$, called an anamorphism.
Example of Initial Algebra
The data type of lists over a given set $A$ can be represented as the initial algebra $(\mu L_A, in)$ of the functor $L_A(X) = 1 + (A \times X)$. Denote $\mu L_A = List(A)$. The constructor functions $nil: 1 \rightarrow List(A)$ and $cons: A \times List(A) \rightarrow List(A)$ are defined by $nil = in \circ inl$ and $cons = in \circ inr$, so $in = [nil,cons]$.
Catamorphism
Catamorphism is known as the generalized version of fold; it is used to consume (fold) instances of inductive datatypes. Assume we have fmap defined somewhere else.
fmap (A B: U) (F: U -> U): (A -> B) -> F A -> F B = undefined
Then cata is defined as follows:
cata (A: U) (F: U -> U) (alg: F A -> A) (f: fix F): A = alg (fmap (fix F) A F (cata A F alg) (unfix F f))
Inductive
Let's rewrite fix data type as an interface structure along with its fold:
ind (F: U -> U) (A: U): U = (in_: F (fix F) -> fix F) * (in_rev: fix F -> F (fix F)) * (fold_: (F A -> A) -> fix F -> A) * Unit
Then instance of this type class would be:
inductive (F: U -> U) (A: U): ind F A = (embed F,unfix F,cata A F,tt)
Anamorphism
Anamorphism is used to build instances of coinductive data types and represents a generic stream unfold.
ana (A: U) (F: U -> U) (coalg: A -> F A) (a: A): fix F = Fix (fmap A (fix F) F (ana A F coalg) (coalg a))
Coinductive
All arrows are reversed, in is out, fold is unfold.
coind (F: U -> U) (A: U): U = (out_: F (fix F) -> fix F) * (out_rev: fix F -> F (fix F)) * (unfold_: (F A -> A) -> fix F -> A) * Unit
Then instance of this type class would be:
coinductive (F: U -> U) (A: U): coind F A = (unfix F,embed F,ana A F,tt)
Inductive List Nat
Here is an example of inductive encoding of list nat:
> inductive list
EVAL: (\(A : U) -> (embed F,(unfix F,(cata A F,tt)))) (F = (\(A : U) -> list))
> inductive list nat
EVAL: ((\(x : F (fix F)) -> Fix x) (F = (\(A : U) -> list)), (unfix (\(A : U) -> list),((\(alg : Pi \(_ : F A) -> A) -> \(f : fix F) -> alg (fmap (fix F) A F (cata A F alg) (unfix F f))) (A = nat, F = (\(A : U) -> list)),tt)))
Coinductive Stream Nat
Here is example of coinductive encoding of stream nat:
> coinductive stream nat
EVAL: (unfix (\(A : U) -> stream),((\(x : F (fix F)) -> Fix x) (F = (\(A : U) -> stream)),((\(coalg : Pi \(_ : A) -> F A) -> \(a : A) -> Fix (fmap A (fix F) F (ana A F coalg) (coalg a))) (A = nat, F = (\(A : U) -> stream)),tt)))
Hylomorphism
Hylomorphism composes an anamorphism with a catamorphism; it can be taken as primitive, since the other recursion schemes are derivable from it. More generally, (co)inductive types can be represented as di-algebras.
hylo (A B: U) (F: U -> U) (alg: F B -> B) (coalg: A -> F A) (a: A): B = alg (fmap A B F (hylo A B F alg coalg) (coalg a))
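As an illustration (in Python rather than cubicaltt, with our own encoding of the base functor), a hylomorphism fuses an unfold with a fold so the intermediate structure is never materialized:

```python
# hylo alg coalg = alg . fmap (hylo alg coalg) . coalg
def hylo(fmap, alg, coalg, seed):
    return alg(fmap(lambda s: hylo(fmap, alg, coalg, s), coalg(seed)))

# List base functor: None for the empty case, (head, seed) otherwise.
def fmap_list(f, fx):
    return None if fx is None else (fx[0], f(fx[1]))

# Factorial as a hylomorphism: unfold n into n, n-1, ..., 1, fold with (*).
coalg = lambda n: None if n == 0 else (n, n - 1)
alg = lambda fx: 1 if fx is None else fx[0] * fx[1]
fact5 = hylo(fmap_list, alg, coalg, 5)
print(fact5)  # 120
```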
Prelude
First we need to set up an inductive tuple type for para and an either type for apomorphism.
data tuple (A B: U) = pair (a: A) (b: B)
data either (A B: U) = left (a: A) | right (b: B)
either_ (A B C: U): (A -> C) -> (B -> C) -> (either A B) -> C
  = \(b: A -> C) -> \(c: B -> C) -> split@(either A B -> C) with
      left x -> b x
      right y -> c y
fst (A B: U): tuple A B -> A = split pair a b -> a
snd (A B: U): tuple A B -> B = split pair a b -> b
Paramorphism (Primitive Recursion)
para (A: U) (F: U -> U) (psi: F (tuple (fix F) A) -> A) (f: fix F): A = psi (fmap (fix F) (tuple (fix F) A) F (\(m: fix F) -> pair m (para A F psi m)) (unfix F f))
Apomorphism
apo (A: U) (F: U -> U) (coalg: A -> F(either (fix F) A)) (a: A): fix F = Fix(fmap (either (fix F) A) (fix F) F (\(x: either (fix F) A) -> either_ (fix F) A (fix F) (idfun (fix F)) (apo A F coalg) x) (coalg a))
Gapomorphism
gapo (A B: U) (F: U -> U) (coalg: A -> F A) (coalg2: B -> F(either A B)) (b: B): fix F = Fix((fmap (either A B) (fix F) F (\(x: either A B) -> either_ A B (fix F) (\(y: A) -> ana A F coalg y) (\(z: B) -> gapo A B F coalg coalg2 z) x) (coalg2 b)))
Morphisms on (Co)-Initial Objects
data freeF (F: U -> U) (A B: U) = ReturnF (a: A) | BindF (f: F B)
data cofreeF (F: U -> U) (A B: U) = CoBindF (a: A) (f: F B)
data free (F: U -> U) (A: U) = Free (_: fix (freeF F A))
data cofree (F: U -> U) (A: U) = CoFree (_: fix (cofreeF F A))
unfree (A: U) (F: U -> U): free F A -> fix (freeF F A) = split Free a -> a
uncofree (A: U) (F: U -> U): cofree F A -> fix (cofreeF F A) = split CoFree a -> a
Histomorphism
histo (A:U) (F: U->U) (f: F (cofree F A) -> A) (z: fix F): A = extract A F ((cata (cofree F A) F (\(x: F (cofree F A)) -> CoFree (Fix (CoBindF (f x) ((fmap (cofree F A) (fix (cofreeF F A)) F (uncofree A F) x)))))) z) where extract (A: U) (F: U -> U): cofree F A -> A = split CoFree f -> unpack_fix f where unpack_fix: fix (cofreeF F A) -> A = split Fix f -> unpack_cofree f where unpack_cofree: cofreeF F A (fix (cofreeF F A)) -> A = split CoBindF a -> a
Futumorphism
futu (A: U) (F: U -> U) (f: A -> F (free F A)) (a: A): fix F = Fix (fmap (free F A) (fix F) F (\(z: free F A) -> w z) (f a)) where w: free F A -> fix F = split Free x -> unpack x where unpack_free: freeF F A (fix (freeF F A)) -> fix F = split ReturnF x -> futu A F f x BindF g -> Fix (fmap (fix (freeF F A)) (fix F) F (\(x: fix (freeF F A)) -> w (Free x)) g) unpack: fix (freeF F A) -> fix F = split Fix x -> unpack_free x
Chronomorphism
chrono (A B: U) (F: U -> U) (f: F (cofree F B) -> B) (g: A -> F (free F A)) (a: A): B = histo B F f (futu A F g a)
Appendix

Metamorphism
meta (A B: U) (F: U -> U) (f: A -> F A) (e: B -> A) (g: F B -> B) (t: fix F): fix F = ana A F f (e (cata B F g t))
Mutumorphism
mutu (A B: U) (F: U -> U) (f: F (tuple A B) -> B) (g: F (tuple B A) -> A) (t: fix F): A = g (fmap (fix F) (tuple B A) F (\(x: fix F) -> pair (mutu B A F g f x) (mutu A B F f g x)) (unfix F t))
Zygomorphism
zygo (A B: U) (F: U -> U) (g: F A -> A) (alg: F (tuple A B) -> B) (f: fix F): B = snd A B (cata (tuple A B) F (\(x: F (tuple A B)) -> pair (g(fmap (tuple A B) A F (\(y: tuple A B) -> fst A B y) x)) (alg x)) f)
Prepromorphism
prepro (A: U) (F: U -> U) (nt: F(fix F) -> F(fix F)) (alg: F A -> A) (f: fix F): A = alg (fmap (fix F) A F (\(x: fix F) -> prepro A F nt alg (cata (fix F) F (\(y: F(fix F)) -> Fix (nt(y))) x)) (unfix F f))
Postpromorphism
postpro (A: U) (F: U -> U) (nt : F(fix F) -> F(fix F)) (coalg: A -> F A) (a: A): fix F = Fix(fmap A (fix F) F (\(x: A) -> ana (fix F) F (\(y: fix F) -> nt(unfix F y)) (postpro A F nt coalg x)) (coalg a))
The code is here.
Current Return vs Forward-Looking Return
This article explains how we calculate returns at the portfolio level, and why we make a distinction between what we call ‘Current Return’ and ‘Forward-looking Return’.
Averaging Returns
We calculate the return of a portfolio as a weighted average, based on the amount invested in each note. For a portfolio made up of $n$ loans, where loan $i$ has amount $a_i$ and return $r_i$, the portfolio return $R$ is:
\[ R = \frac{\sum_{i=1}^n{a_i \cdot r_i}}{\sum_{i=1}^n{a_i}} \]
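As a quick sketch of the formula (the amounts and per-note returns below are made-up illustrative numbers):

```python
# Weighted-average portfolio return: R = sum(a_i * r_i) / sum(a_i)
def portfolio_return(amounts, returns):
    assert len(amounts) == len(returns)
    return sum(a * r for a, r in zip(amounts, returns)) / sum(amounts)

amounts = [25.0, 25.0, 50.0]   # dollars invested per note
rates = [0.09, 0.13, 0.07]     # per-note return
r = portfolio_return(amounts, rates)
print(round(r, 4))  # 0.09
```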
Dead and Live Loans
Loans can be either paid off, defaulted, or on-going. Dead loans are not expected to make any more payments, and therefore their return calculation is simple and accurate. Dead loans, like human beings, may have had an admirable life (for loans: they were fully paid) or may have disappointed (for loans: they were charged off).
Live loans are the ones that are still expected to make further payments. They can be current or late (past a payment due), and more or less mature. The mechanism to predict the returns of live loans has been described in a previous article. There are two important facts to remember. First, because the probability of defaulting decreases over time, the expected return of a live note increases as the loan becomes more mature. Second, the risk of default is not uniform over time: loans have a significantly higher risk of defaulting during the first half of their life.
For instance, a fictitious loan with a 10-month maturity returns at best 13.5% upon reaching maturity. Its expected return over time could look like this:
| Month | Expected Return |
| --- | --- |
| 0 (issuance) | 9.0% |
| 1 | 9.1% |
| 2 | 9.3% |
| 3 | 10.0% |
| 4 | 11.0% |
| 5 | 11.8% |
| 6 | 12.5% |
| 7 | 12.9% |
| 8 | 13.2% |
| 9 | 13.4% |
| 10 | 13.5% |
At issuance, the expected return is only 9%. After one payment this return increases, but only slightly, because very few loans default that early. At the 4th month, the growth in expected return accelerates. A loan is unlikely to miss the last payment once 9 payments have been made; therefore the ‘discount’ for the risk of default only lowers the return to 13.4%.
Aggregation at the Portfolio level
Since there is still some uncertainty regarding live loans, it seems at first more reliable to calculate the return of a dead one.
However, because defaults happen, by definition, prior to maturity, an investor is likely to see those charge-offs accumulate only a few months after the initial investment, while he’ll have to wait several years for the surviving loans to reach maturity.
Let’s imagine a portfolio made up of 7 identical loans similar to the one described above. Each loan is issued at the same time. However, loan #1 stops paying after only 2 months, and loan #2 defaults after making a total of 4 payments. The 5 remaining loans reach maturity. The returns we have for each loan are:
| Month | Loan #1 | Loan #2 | Loan #3 | Loan #4 | Loan #5 | Loan #6 | Loan #7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 (issuance) | 9.0% | 9.0% | 9.0% | 9.0% | 9.0% | 9.0% | 9.0% |
| 1 | 9.1% | 9.1% | 9.1% | 9.1% | 9.1% | 9.1% | 9.1% |
| 2 | 9.3% | 9.3% | 9.3% | 9.3% | 9.3% | 9.3% | 9.3% |
| 3 | -30.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% |
| 4 | – | 11.0% | 11.0% | 11.0% | 11.0% | 11.0% | 11.0% |
| 5 | – | -15.0% | 11.8% | 11.8% | 11.8% | 11.8% | 11.8% |
| 6 | – | – | 12.5% | 12.5% | 12.5% | 12.5% | 12.5% |
| 7 | – | – | 12.9% | 12.9% | 12.9% | 12.9% | 12.9% |
| 8 | – | – | 13.2% | 13.2% | 13.2% | 13.2% | 13.2% |
| 9 | – | – | 13.4% | 13.4% | 13.4% | 13.4% | 13.4% |
| 10 | – | – | 13.5% | 13.5% | 13.5% | 13.5% | 13.5% |
Depending upon which loans we take into account for calculating the portfolio return, the numbers are very different. There are 3 options:

1) Consider only the dead loans. We’ll call this ‘Past Return’.

2) Consider only the live loans. We’ll call this ‘Forward-Looking Return’, because it doesn’t take into consideration what already happened.

3) Consider all the loans, both live and dead. We’ll call this the ‘Current Return’.
| Month | Loan #1 | Loan #2 | Loans #3 to #7 | Past Return | Current Return | Forward-Looking Return |
| --- | --- | --- | --- | --- | --- | --- |
| 0 (issuance) | 9.0% | 9.0% | 9.0% | – | 9.0% | 9.0% |
| 1 | 9.1% | 9.1% | 9.1% | – | 9.1% | 9.1% |
| 2 | 9.3% | 9.3% | 9.3% | – | 9.3% | 9.3% |
| 3 | -30.0% | 10.0% | 10.0% | -30.0% | 4.3% | 10.0% |
| 4 | – | 11.0% | 11.0% | -30.0% | 5.1% | 11.0% |
| 5 | – | -15.0% | 11.8% | -22.5% | 2.0% | 11.8% |
| 6 | – | – | 12.5% | -22.5% | 2.5% | 12.5% |
| 7 | – | – | 12.9% | -22.5% | 2.8% | 12.9% |
| 8 | – | – | 13.2% | -22.5% | 3.0% | 13.2% |
| 9 | – | – | 13.4% | -22.5% | 3.1% | 13.4% |
| 10 | – | – | 13.5% | -22.5% | 3.2% | 13.5% |
| 11 | – | – | – | 3.2% | 3.2% | – |
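The month-5 figures above can be reproduced with a short computation; since all loan amounts are equal in this example, the weighted averages reduce to plain means:

```python
# Month 5 of the example: loan #1 dead at -30%, loan #2 dead at -15%,
# loans #3 to #7 live at 11.8%.
dead = [-0.30, -0.15]
live = [0.118] * 5

past_return = sum(dead) / len(dead)                          # dead loans only
forward_return = sum(live) / len(live)                       # live loans only
current_return = sum(dead + live) / (len(dead) + len(live))  # all loans

print(round(past_return, 3), round(current_return, 3), round(forward_return, 3))
# -0.225 0.02 0.118
```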
As shown above, the Past Return shows meaningless numbers until all the loans have been able to reach maturity. The Current Return is a much more accurate measurement. However, it is also heavily penalized by early defaults. At the 5th month, the Current Return indicates 2.0%, well below the final performance of 3.2%.
As for the Forward-Looking Return, it tends to be too optimistic (even more so in this example; for real portfolios the discrepancy is much smaller). However, it is an important measurement, because it allows tracking the performance of a strategy over time.
A Note for Our Clients
Because of the way we calculate Current Returns and Forward-Looking Returns, many clients who invested on their own before switching to LendingRobot are likely to encounter this kind of summary information:
| Account | Current Return | Forward-Looking Return |
| --- | --- | --- |
| Managed by LendingRobot | 7.72% | 9.76% |
| Not Managed by LendingRobot | 4.61% | 11.29% |
Here the forward-looking return of the loans we do manage is significantly lower than the one for the investments we do not manage (9.76% vs 11.29%). Is LendingRobot doing a bad job? Not necessarily.
Because the non-managed loans are older than the ones managed by LendingRobot, more of them have already defaulted. This is why, on the ‘few’ remaining ones, the return is so high. This can be easily verified by comparing the current returns: it’s 7.72% for the loans managed by LendingRobot, while it’s only 4.61% for the loans purchased before switching to LendingRobot. This discrepancy can be explained by two reasons: the non-managed portfolio is older, so more of its loans have defaulted already, and LendingRobot selected better loans.
Considering that, in this case, LendingRobot is 1.53% below on the forward-looking return but 3.11% above on the current return, both reasons seem valid.
Nota Bene: the numbers above are real numbers from my personal account.
Emmanuel Marot, March 24, 2016
Thinking about all of the things in the statistical world that we can estimate, the one that has always perplexed me is estimating the size of an unknown population \(N\). Usually when we compute estimates based on samples, we involve the size of the sample \(n\) somewhere, thus we take “size” for granted — the size of a sample is known. We also make inferences based on sample statistics using theory such as the Central Limit Theorem, but seem to never care about the population size \(N\), we either know it, or assume it is infinite. But, in fields like ecology and environmental studies, this attitude of gluttony is dangerous!
In the summers I spend a lot of time in the Eastern Sierra where there are deer and bears. These animals are so large that we cannot consider their population in the area to be infinite (like we may with ants or bacteria). As a matter of fact, one of our local animal behavior specialists knows the exact number of bears that live in my mountain community. I had always just assumed he spent all day tracking down the local bears and marking them with radio collars.
There must be some way to estimate the population size, but how? This will be the first aspect of the problem we will discuss. Then we will talk about how to formally derive an estimate of the population. The computation, and even the probability distribution used, is quite niche and really piqued my interest. Finally, I conclude with some wisdom on why we even need to go through this theoretical process, something I did not appreciate in school.
Two deer enjoying a nice day across the street. A bear fishing in Lake Mary. A (sedated) bear walks past me at Lake Mary.

The Capture-Recapture Design
One nugget of information I had learned in a course about web data mining and search taught by Professor John Cho was a method called capture-recapture. Oddly enough, it was used to estimate the number of web pages on the Internet 1. A short while later, I came across this same mechanism in a textbook about Markov Chain Monte Carlo. The mechanism is called capture-recapture, but it goes by many other names such as mark-recapture.
How it works for animals such as my woodland friends:
1. Capture some number of bears, deer, whatever. The number itself doesn’t matter.
2. Mark these animals somehow to remember that we have seen them. For animals, this means physically marking them (birds, snails, squirrels) or putting radio collars on them (bears, deer and mountain lions).
3. Release them back into the wild.
4. Allow enough time to pass so that the animals disperse into their natural habitat.
5. Capture a sample of \(n\) animals and count the number of animals that are marked.
For estimating the number of web pages on the Internet, it might look like the following:
1. Collect a random sample of web pages using a random walk (see 2, 3), or assume a week of search queries and the links clicked are a sample of the web (see 4).
2. Store the page addresses.
3. Crawl, or collect, a sample again.
4. Count the number of pages in the new sample that we have already seen.

Estimating \(N\) with a Simple Proportion
We can simply form an 8th grade proportion of the following form, where we let \(N\) represent the unknown population size, \(n\) represent the number of animals we captured at the second sampling point, \(K\) be the total number of animals we marked and \(k\) be the number of tagged animals we counted in the recaptured sample.
\[
\frac{k}{n} = \frac{K}{N} \]
This yields the estimator:
\[
\hat{N}_{\tiny\mbox{EST}} = \frac{Kn}{k} \]
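A quick sketch of this estimator in code (the counts are made up: mark 50 animals, recapture 40, find 10 marked):

```python
def lincoln_index(K, n, k):
    """N_hat = K * n / k: K marked, n recaptured, k marked among the recaptured."""
    if k == 0:
        raise ValueError("no marked animals recaptured; estimate undefined")
    return K * n / k

n_hat = lincoln_index(K=50, n=40, k=10)
print(n_hat)  # 200.0
```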
This estimator is actually called the Lincoln index and assumes that only one marking and one recapture take place, and that the population is “closed.” A “closed” population is one that does not lose animals to death or emigration and does not gain any from birth or immigration. In my area, the population is mostly closed throughout the year if we perform the mark and recapture within a day or so. During migration and mating periods, this assumption is not as reliable. Being in the mountains, targeted or clustered deaths of these larger animals are not common.
Computing the Maximum Likelihood Estimator of \(N\)
Consider the formulation of the problem again. We have some population \(N\), and from that population we draw a sample of \(n\) objects. That sample itself is divided into two subgroups: marked objects \(k\), and unmarked objects \(n-k\). This sounds like the classic colored balls in urns problem where we want to find the probability that we select \(k\) of the \(K\) red balls when we draw a sample of \(n\) balls from an urn containing \(N\) balls. If this smells hypergeometric to you, you would be right.
The hypergeometric is a discrete probability distribution with the following probability mass function:
\[
P(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} \]
Usually when we construct a likelihood function, we take the product of the joint PMF over all the observations. Since we are estimating a population size, and the estimate is based not on a sample of observations but on a single observation, we represent the likelihood as just the PMF itself. Perhaps if we did multiple recaptures, this would be different. The likelihood function thus looks like the following.
\[
\begin{align} \mathscr{L}(N ; K, k, n) &= \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} \end{align} \]
We could take the log of this likelihood function like we usually do, but we will see that this is not necessary. Recall that the beauty of the log-likelihood was that it converted all of the products into sums of logs. Since we are only working with one term, we don’t need to do this.
This is where the story gets weird. Usually we take the partial derivative of the log-likelihood function with respect to the parameter of interest, set it equal to zero and then solve for the estimate. There is a wrinkle in this case because the parameter of interest \(N\) is a non-negative integer. This means we cannot find the maximum likelihood estimate using calculus by taking the derivative with respect to \(N\). Instead, we look at the likelihood ratio of successive hypothetical values of \(N\). That is
\[
D(N) = \frac{\mathscr{L}(N)}{\mathscr{L}(N-1)} = \frac{\binom{K}{k} \binom{N-K}{n-k} \Big/ \binom{N}{n}}{\binom{K}{k} \binom{N-K-1}{n-k} \Big/ \binom{N-1}{n}} \]
The purpose of taking the partial derivative of the log-likelihood function was to determine the point where the log-likelihood (and thus the likelihood) function switches from being increasing to decreasing — an optimum. We want to find this same phenomenon with our likelihood ratio \(D\). We want to find where \(D\) switches from increasing to decreasing. That is, we want to find where \(D(N) > 1\) and \(D(N) < 1\). Note that for \(N > 1\), \(D(N) \ne 1\). I apologize for the formatting of the math; I wanted to save screen real estate. Anyway, we then have
\[
\begin{align}
D(N) &= \frac{\mathscr{L}(N)}{\mathscr{L}(N-1)}
= \frac{\binom{K}{k} \binom{N-K}{n-k} \Big/ \binom{N}{n}}{\binom{K}{k} \binom{N-K-1}{n-k} \Big/ \binom{N-1}{n}}
= \frac{\frac{K!}{(K-k)!\,k!} \frac{(N-K)!}{(n-k)!\,(N-K-n+k)!} \Big/ \frac{N!}{(N-n)!\,n!}}{\frac{K!}{(K-k)!\,k!} \frac{(N-K-1)!}{(n-k)!\,(N-K-n-1+k)!} \Big/ \frac{(N-1)!}{(N-n-1)!\,n!}} \\
& \mbox{Cancel the \(\binom{K}{k}\) term as well as the \(n!\) terms and \((n-k)!\) terms:} \\
&= \frac{\frac{(N-K)!}{(N-K-n+k)!} \Big/ \frac{N!}{(N-n)!}}{\frac{(N-K-1)!}{(N-K-n-1+k)!} \Big/ \frac{(N-1)!}{(N-n-1)!}}
= \frac{\frac{(N-K)!\,(N-n)!}{N!\,(N-K-n+k)!}}{\frac{(N-K-1)!\,(N-n-1)!}{(N-1)!\,(N-K-1-n+k)!}}
= \frac{(N-K)!\,(N-n)!\,(N-1)!\,(N-K-1-n+k)!}{N!\,(N-K-n+k)!\,(N-K-1)!\,(N-n-1)!} \\
& \mbox{Apply variations of \(y\,(y-1)! = y!\):} \\
&= \frac{(N-K)(N-K-1)!\,(N-n)(N-n-1)!\,(N-K-1-n+k)!\,(N-1)!}{N(N-1)!\,(N-K-n+k)(N-K-n+k-1)!\,(N-K-1)!\,(N-n-1)!} \\
&= \frac{(N-K)(N-n)}{N(N-K-n+k)}
\end{align} \]
which is a nice result, but we still are not done. We need to find where \(D(N)\) goes from being greater than 1, to being less than 1. So let’s continue.
\[
\begin{align}
D(N) = \frac{(N-K)(N-n)}{N(N-K-n+k)} &> 1 \\
(N-K)(N-n) &> N(N-K-n+k) \\
N^2 + Kn - NK - Nn &> N^2 - NK - Nn + Nk \\
N^2 - N^2 + Kn - NK + NK - Nn + Nn &> Nk \\
Kn &> Nk \\
\frac{Kn}{k} &> N \\
N &< \frac{Kn}{k}
\end{align} \]
Thus \(D(N) > 1\) and \(\mathscr{L}(N) > \mathscr{L}(N-1)\) when \(N < \frac{Kn}{k}\) and thus \(D(N) < 1\) and \(\mathscr{L}(N) \le \mathscr{L}(N-1)\) when \(N > \frac{Kn}{k}\). So \(\hat{N} = \frac{Kn}{k}\) is the maximum likelihood estimator, right? Not quite.
According to the definition of a maximum likelihood estimator, the estimate must fall in the original parameter space, which here is the set of non-negative integers. The fraction \(\frac{Kn}{k}\) is likely not an integer, so it cannot be the maximum likelihood estimate. But don’t fret, this is very close to the MLE.
The correct maximum likelihood estimator is (drumroll) the integer part of this result.
\[
\hat{N}_{MLE} = \left[ \frac{Kn}{k} \right] \]
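The derivation can be sanity-checked by brute force: evaluate the hypergeometric likelihood over a range of \(N\) and confirm the argmax is the integer part of \(Kn/k\) (the counts below are made up):

```python
from math import comb

def likelihood(N, K, n, k):
    """Hypergeometric likelihood L(N); zero when the draw is impossible."""
    if N < max(K, n):
        return 0.0
    # comb(a, b) returns 0 when b > a, covering the remaining impossible cases
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

K, n, k = 50, 40, 13            # K*n/k = 153.84..., so the MLE should be 153
mle = max(range(max(K, n), 2000), key=lambda N: likelihood(N, K, n, k))
print(mle, (K * n) // k)  # 153 153
```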
Note that our original 8th grade proportion gave us the Lincoln index, which does not use the integer part. For small \(n\), the Lincoln index is biased. Shao 5 shows that there is also a second candidate for the MLE, due to the increasing and decreasing nature of \(D(N)\) and the fact that the estimate must be an integer. This second candidate is simply \(\left[ \frac{Kn}{k} \right] + 1\), but there is no further discussion of this candidate in Shao’s work. References: 6 7

So What Was the Point of Having to Compute the MLE?
When I was in graduate school, I used to ask the same thing all the time. We would spend 5-10 minutes deriving the fact that the sample mean is the maximum-likelihood estimator of the population mean under the normal distribution. I would ask myself “What is the point of this? Isn’t it obvious?”
In some ways it is obvious, but just like mathematicians have to prove obvious statements, so do statisticians. If somebody were to ask “how do we know that the 8th grade proportion yields a good estimate?”, as statisticians we would be expected to prove it rigorously. Consider that in our first derivation of the Lincoln index we made a crucial assumption that the relationship between the sample markings and the population was linear. Aside from that, it came out of educated thin air. We also have no idea how good an estimator our result is, and because we essentially just assumed it, we cannot prove anything about it without some sort of theoretical context.
So now that we know we have to prove it rigorously, the next step is to figure out what distribution the phenomenon follows. In this case it was the hypergeometric, which is a bit niche, but still a fairly simple distribution to work with. For other problems it could be a distribution that is very messy, or a mixture model, or something requiring simulation. Once we have defined a probability distribution that makes sense for our context, we can compute all the estimators we want, whether it be by maximum likelihood, method of moments or MAP such as in a Bayesian context. Once we have constructed an estimate using these tried and true methods, we can rely on centuries worth of statistical theory to measure how optimal each estimator is — unbiasedness, consistency, efficiency, sufficiency etc. If we wanted, we could probably show that in order for our estimate of \(N\) to have low variance, we would need high \(n\) or high \(K\) or both. Let’s leave that as an exercise for the reader…
Conclusion
In this article we take a problem that seems hopeless, or at least difficult, to solve and solve it rigorously. We start with an educated guess of an estimator by setting up a simple proportion. We then set up this basic form of the problem as a realization of the hypergeometric distribution. We then execute a niche maximum-likelihood estimation based on the likelihood ratio, since the parameter we are trying to solve for must be an integer. We get the same answer as our original guess, but we now have the backup of statistical theory such that we can ask questions about how good an estimator the Lincoln index really is.

Footnotes and References

1. This course is actually the course that got me interested in pursuing additional graduate study in computer science. ↩
2. Baykan, Eda, et al. “A comparison of techniques for sampling web pages.” arXiv preprint arXiv:0902.1604 (2009). ↩
3. Rusmevichientong, Paat, et al. “Methods for sampling pages uniformly from the world wide web.” AAAI Fall Symposium on Using Uncertainty Within Computation. 2001. ↩
4. Mauldin, Michael L. “Measuring the Size of the Web with Lycos.” ↩
5. Shao, J. “Mathematical Statistics, Springer Texts in Statistics.” (2003). Pg. 227. ↩
6. Watkins, Joe. “Topic 15: Maximum Likelihood Estimation.” 1 Nov. 2011. Mathematics 363, University of Arizona. ↩
7. Zhang, Hanwen. “A note about maximum likelihood estimator in hypergeometric distribution.” Revista Comunicaciones en Estadística, Universidad Santo Tomás 2.2 (2009). ↩
Let $y$ be a dependent variable of a feature vector $x$.

Problem: Given a training set $\langle x^{(i)}, y^{(i)} \rangle$, $1 \le i \le m$, predict the value of $y$ for any input vector $x$.
We solve this problem by constructing a hypothesis function $h_\theta(x)$ using one of the methods below.

Notation

$m$ is the number of training examples, $n$ is the number of features, $x^{(i)}_j$ is the value of feature $j$ in the $i$-th example, and $x_0 = 1$ by convention.

Optimization Objective
Step 1. Normalize each feature $(x^{(0)}_j, \dots, x^{(m)}_j)$, $1 \le j \le n$, by its mean $\mu_j$ and standard deviation $\sigma_j$: $x_j := (x_j - \mu_j)/\sigma_j$.
Step 2. Minimize the cost function

\[ J(\theta) = \frac{1}{2m} \sum_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 \]

where $\theta = (\theta_0, \dots, \theta_n)^T$.
Step 3. Compute the hypothesis function as

\[ h_\theta(x) = \theta^T x = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n \]

where the vector $x$ is normalized using the same values of $\mu$ and $\sigma$ as in Step 1.
Gradient Descent

Gradient descent is a method for finding the (global) minimum of the cost function $J(\theta)$. There are a few ways to implement this method.

Direct method

Choose a small learning rate $\alpha > 0$ and find the fixed point of the iteration

\[ \theta := \theta - \alpha \nabla J(\theta) \]
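A minimal NumPy sketch of this iteration, assuming the usual squared-error cost (the data is a toy example where $y = 1 + 2x$ exactly):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=500):
    """Iterate theta := theta - alpha * grad J(theta) a fixed number of times."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / m   # gradient of the squared-error cost
        theta -= alpha * grad
    return theta

X = np.array([[1.0, 0], [1, 1], [1, 2], [1, 3]])  # column of ones = bias term
y = np.array([1.0, 3, 5, 7])                      # y = 1 + 2x
theta = gradient_descent(X, y)
print(np.round(theta, 3))  # [1. 2.]
```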
Optimized method

Many mathematical software packages already include implementations of gradient descent that compute the learning rate $\alpha$ automatically. These methods accept the cost function $J(\theta)$ and its gradient $\nabla J(\theta)$ as arguments, which for linear regression is computed as

\[ \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}_j \]
Normal Equation

Unlike gradient descent, this method requires neither feature normalization (Step 1) nor a convergence loop. The normal equation gives the closed-form solution to linear regression:

\[ \theta = (X^T X)^{-1} X^T y \]

where $X$ is the $m \times (n+1)$ design matrix whose rows are the training vectors and $y$ is the vector of training outputs.

Regularization
In case of overfitting, both methods can be tweaked by introducing polynomial features and adjusting the equations as follows. Let $\lambda > 0$ and let $E$ be the diagonal matrix

\[ E = \mathrm{diag}(0, 1, \dots, 1) \]

Then the cost function for gradient descent becomes

\[ J(\theta) = \frac{1}{2m} \left[ \sum_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 + \lambda \sum_{j=1}^n \theta_j^2 \right] \]

and the normal equation becomes

\[ \theta = (X^T X + \lambda E)^{-1} X^T y \]
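A NumPy sketch of the regularized normal equation; with `lam = 0` it reduces to the unregularized closed-form solution (the data is the same toy exact-fit example):

```python
import numpy as np

def normal_equation(X, y, lam=0.0):
    """theta = (X^T X + lam * E)^{-1} X^T y, with E = diag(0, 1, ..., 1)."""
    E = np.eye(X.shape[1])
    E[0, 0] = 0.0                       # leave the bias term unregularized
    return np.linalg.solve(X.T @ X + lam * E, X.T @ y)

X = np.array([[1.0, 0], [1, 1], [1, 2], [1, 3]])
y = np.array([1.0, 3, 5, 7])
theta_hat = normal_equation(X, y)
print(np.round(theta_hat, 3))  # [1. 2.]
```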
You are correct in that the series is not stationary. The ADF test isn't designed to test for stationarity outside the center of location. You are not going to be able to use the square root rule to extrapolate because you have significant autocorrelation of the variances.
I do have a suggestion on your problem by noting that returns are not data. Prices are data, but returns are transformations of data. The log return is an even greater transformation of the raw data.
Let's start with a simple AR(1) process of prices $p_{t+1}=Rp_t+\epsilon_{t+1}$, where R is implicitly a return and, for simplicity, $\epsilon_{t+1}\sim\mathcal{N}(0,\sigma_{t+1})$ and $\epsilon_{t}\perp\epsilon_{t+\Delta{t}}$. From Mann and Wald, we know that the OLS estimator is the MLE estimator of $R$ for any distribution of the error term. From White, we know that the sampling distribution of the OLS estimator for $R$ is the Cauchy distribution. Since the Cauchy distribution has no mean, this is the same thing as saying no non-Bayesian solution exists that is also consistent with the thinking behind mean-variance finance. If, on the other hand, returns are defined as $r_t=\frac{p_{t+1}}{p_t}$ and both $p_t$ and $p_{t+1}$ are independent normal random variables then $r_t$ would have a Cauchy distribution. The log version of this would be the hyperbolic secant distribution which does have a mean and a variance.
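The ratio-of-normals claim is easy to check numerically. This is a sketch under simplifying assumptions (independent zero-mean unit-variance draws, not the AR(1) model itself): the sample quartiles of the ratio should sit near the Cauchy quartiles of ±1, even though the sample mean never settles down.

```python
import random

random.seed(0)
n = 200_000
# Ratio of two independent standard normals is standard Cauchy distributed.
ratios = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

q1, q3 = ratios[n // 4], ratios[3 * n // 4]
print(round(q1, 2), round(q3, 2))  # close to -1.0 and 1.0
```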
Let's further assume that prices are locally a function of liquidity and globally a function of discounted cash flows. This makes liquidity a fast function and cash flow a slow function. Although this implies that returns should be a function of liquidity through prices, liquidity itself has two components, the global interest rate and the local adjustment for market-maker specific liquidity needs. The discounting of cash flows shares the global rate. In a separate paper, I argue that the distribution of liquidity is normal or log-normal depending on the model you are using.
The short-run effect is that returns should be centered on the discounted cash flows from dividends, but that the variance of log-returns should be focused on the short-run liquidity. Although liquidity should have a slight effect on the center of location, that should appear in the regression constant because of the form you are using. The variability of prices, which is what the bid-ask spread is, is a short-run process. As liquidity is not idiosyncratic within institutions, but rather a systematic issue, you have to expect serial correlation among the terms.
The number of lags should reflect how long it takes firms to change their overall liquidity levels. This depends on both internal factors, such as margin credit and so forth, and the short-run external popularity of a particular issue.
In looking at the serial correlation of returns you are implying momentum in prices. This implies that prices do not adjust instantaneously, otherwise there would be no information in historical returns. You should look at both the bid-ask spread and interest rates. I would argue that you have a misspecified model and that there is no fix except to match it to time series of short-term interest rates and the bid-ask spread. You may want to look at Abbott's model of marketability and liquidity in The Valuation Handbook. You should also grab an article on the summation of variables drawn from the hyperbolic secant distribution, as that is essentially what the right-hand side of the regression formula is.
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Is it possible to reduce this problem using a Levin reduction?
If anything is unclear, ask for clarification!
Let $(L,B)$ be an instance of subset sum, where $L$ is a list (multiset) of numbers, and $B$ is the target sum. Let $S = \sum L$. Let $L'$ be the list formed by adding $S+B,2S-B$ to $L$.
(1) If there is a sublist $M \subseteq L$ summing to $B$, then $L'$ can be partitioned into two equal parts: $M \cup \{ 2S-B \}$ and $L\setminus M \cup \{ S+B \}$. Indeed, the first part sums to $B+(2S-B) = 2S$, and the second to $(S-B)+(S+B) = 2S$.
(2) If $L'$ can be partitioned into two equal parts $P_1,P_2$, then there is a sublist of $L$ summing to $B$. Indeed, since $(S+B)+(2S-B) = 3S$ and each part sums to $2S$, the two elements belong to different parts. Without loss of generality, $2S-B \in P_1$. The rest of the elements in $P_1$ belong to $L$ and sum to $B$.
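This reduction can be sketched and checked by brute force in Python (the helper names are ours; as another answer notes, it assumes the numbers are non-negative):

```python
from itertools import combinations

def reduce_to_partition(L, B):
    """Map a subset-sum instance (L, B) to a partition instance L'."""
    S = sum(L)
    return L + [S + B, 2 * S - B]

def has_subset_sum(L, B):
    """Brute-force subset sum, used only to check the reduction."""
    return any(sum(c) == B for r in range(len(L) + 1)
               for c in combinations(L, r))

def has_partition(L):
    # L splits into two equal halves iff some sublist sums to half the total.
    total = sum(L)
    return total % 2 == 0 and has_subset_sum(L, total // 2)

L, B = [3, 1, 5, 9], 8
print(has_subset_sum(L, B), has_partition(reduce_to_partition(L, B)))  # True True
```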
The answer mentioned by @Yuval Filmus is incorrect (it's correct ONLY if there are no negative integers). Consider the following multiset :
$$\{-5, 2, 2, 2, 2, 2\} $$
and the target sum is $-2$. We know that there is no subset. Now, we construct the instance for the partition problem. The two new elements added are $2\sigma-t = 12$ and $\sigma+t = 3$. The multiset is now: $$\{-5, 2, 2, 2, 2, 2, 3, 12\}$$ and the total sum is $20$.
Solving the partition problem gives the part $$\{2, 2, 2, 2, 2\}$$ Here, the 2 new elements are in the same part (there is no other way to partition into two halves of equal sum). Hence, this is a counterexample. The correct reduction is as follows:
Add an element whose value is $2t-\sigma$. The total sum of the multiset is now $2t$. Solve the partition problem, which will give 2 parts each of sum $t$. Only one of the parts contains the new element. We choose the other part, whose sum is $t$ and which consists only of original elements, and we have solved the subset-sum instance by reducing it to a partition problem. This is what the link explains.
Here is a straightforward proof:
It is easy to see that SET-PARTITION can be verified in polynomial time; given a partition $P_1,P_2$ just sum the two and verify that their sums equal each other, which is obviously a polynomial time verification (because summation is a polynomial operation and we are only performing at most $|X|$ many summations).
The core of the proof is in reducing SUBSETSUM to PARTITION; to that end given set $X$ and a value $t$ (the subset sum query) we form a new set $X'=X \cup \{s-2t\}$ where $s=\sum_{x \in X}x$. To see that this is a reduction:
($\implies$ ) assume there exists some $S \subset X$ such that $t=\sum_{x \in S}x$ then we would have that \begin{equation*} s-t=\sum_{x \in S\cup \{ s-2t \} }x, \end{equation*} \begin{equation*} s-t=\sum_{x \in X' \setminus( S\cup \{s-2t\})}x \end{equation*} and we would have that $S\cup \{ s-2t \} $ and $X' \setminus( S\cup \{s-2t\})$ form a partition of $X'$
($\impliedby $) Suppose that there is a partition $P_1',P_2' $ of $X'$ such that $\sum_{x \in P_1'}x= \sum_{x \in P_2'}x$. Notice that this induces a natural partition $P_1$ and $P_2$ of $X$ such that WLOG we have that \begin{equation*} s-2t+\sum_{x \in P_1}x= \sum_{x \in P_2}x \end{equation*} \begin{equation*} \implies s-2t+\sum_{x \in P_1}x+\sum_{x \in P_1}x= \sum_{x \in P_2}x+\sum_{x \in P_1}x = s \end{equation*} \begin{equation*} \implies s-2t+2\sum_{x \in P_1}x = s \end{equation*} \begin{equation*} \implies \sum_{x \in P_1}x = t \end{equation*}
Hence from a solution $t=\sum_{x \in S}x$ we can form a partition $P_1 =S\cup \{ s-2t \} $, $P_2=X' \setminus( S\cup \{s-2t\})$ and conversely from a partition $P_1',P_2' $ we can form a solution $t=\sum_{x \in P_1'\setminus \{s-2t\}}x$; therefore the mapping $f:(X,t)\rightarrow X'$ is a reduction (because $(X,t)$ is in the language/set SUBSETSUM $\Leftrightarrow X'=f(X,t)$ is in the language/set PARTITION), and it is clear that the transformation was done in polynomial time.
Subset Sum:
Input: a multiset {a1, a2, ..., am} of non-negative integers and a target t. Question: is there S ⊆ {1,...,m} with Σai (i∈S) = t?
Partition:
Input: a multiset {a1, a2, ..., am}. Question: is there S ⊆ {1,...,m} with Σai (i∈S) = Σaj (j∉S)?
Partition is in NP: if the prover provides a partition (P1, P2), the verifier can compute the sums of P1 and P2 and check that they are equal in linear time. NP-hardness: SubsetSum ≤p PARTITION.
Let x = 〈a1, a2, ..., am, t〉 be an input of SubsetSum and let a = Σai (i from 1 to m).
Case 1: 2t ≥ a:
Let f(x) = 〈a1, a2, ..., am, am+1〉 where am+1 = 2t − a.
We want to show that
x∈SubsetSum ⇔ f(x)∈PARTITION
(⇒) If x∈SubsetSum, there exists S ⊆ {1,...,m} s.t. Σai (i∈S) = t. Let T = {1,...,m} − S, so Σai (i∈T) = a − t,
and let T' = T ∪ {m+1}, so Σaj (j∈T') = (a − t) + (2t − a) = t,
which is exactly Σai (i∈S) = t, and this shows f(x)∈PARTITION.
(⇐) Now suppose f(x)∈PARTITION,
so {1,...,m,m+1} splits into S and T with equal sums. The total is a + (2t − a) = 2t, so each side sums to t.
WLOG m+1∈T; then S ⊆ {1,...,m} and Σai (i∈S) = t,
so x∈SubsetSum.
Case 2: 2t ≤ a:
The same argument works with am+1 = a − 2t: the total is now 2(a − t), each side sums to a − t, and the original elements on the side containing m+1 sum to (a − t) − (a − 2t) = t.
This link has a good description of both reductions, partition to subset-sum and subset-sum to partition. I think it is more obvious than Yuval's answer.
We present a search for new heavy particles, $X$, which decay via $X \rightarrow WZ \to e\nu + jj$ in $p{\bar p}$ collisions at $\sqrt{s} = 1.8$ TeV. No evidence is found for production of $X$ in 110 pb$^{-1}$ of data collected by the Collider Detector at Fermilab. Limits are set at the 95% C.L. on the mass and the production of new heavy charged vector bosons which decay via $W'\to WZ$ in extended gauge models as a function of the width, $\Gamma(W')$, and mixing factor between the $W'$ and the Standard Model $W$ bosons.
The polarization of neutral Cascade and anti-Cascade hyperons produced by 800 GeV/c protons on a BeO target at a fixed targeting angle of 4.8 mrad is measured by the KTeV experiment at Fermilab. Our result of 9.7% for the neutral Cascade polarization shows no significant energy dependence when compared to a result obtained at 400 GeV/c production energy and at twice our targeting angle. The polarization of the neutral anti-Cascade is measured for the first time and found to be consistent with zero. We also examine the dependence of polarization on transverse production momentum.
We present a measurement of $\sigma \cdot B(W \rightarrow e \nu)$ and $\sigma \cdot B(Z^0 \rightarrow e^+e^-)$ in proton - antiproton collisions at $\sqrt{s} = 1.8$ TeV using a significantly improved understanding of the integrated luminosity. The data represent an integrated luminosity of 19.7 pb$^{-1}$ from the 1992-1993 run with the Collider Detector at Fermilab (CDF). We find $\sigma \cdot B(W \rightarrow e \nu) = 2.49 \pm 0.12$ nb and $\sigma \cdot B(Z^0 \rightarrow e^+e^-) = 0.231 \pm 0.012$ nb.
We report measurements of the inclusive transverse momentum pT distribution of centrally produced kshort, kstar(892), and phi(1020) mesons up to pT = 10 GeV/c in minimum-bias events, and kshort and lambda particles up to pT = 20 GeV/c in jets with transverse energy between 25 GeV and 160 GeV in pbar p collisions. The data were taken with the CDF II detector at the Fermilab Tevatron at sqrt(s) = 1.96 TeV. We find that as pT increases, the pT slopes of the three mesons (kshort, kstar, and phi) are similar, and the ratio of lambda to kshort as a function of pT in minimum-bias events becomes similar to the fairly constant ratio in jets at pT ~ 5 GeV/c. This suggests that the particles with pT >~ 5 GeV/c in minimum-bias events are from soft jets, and that the pT slope of particles in jets is insensitive to light quark flavor (u, d, or s) and to the number of valence quarks. We also find that for pT <~ 4 GeV relatively more lambda baryons are produced in minimum-bias events than in jets.
Measurements of $\mathrm{B}^*_\mathrm{s2}(5840)^0$ and $\mathrm{B}_\mathrm{s1}(5830)^0$ mesons are performed using a data sample of proton-proton collisions corresponding to an integrated luminosity of 19.6 fb$^{-1}$, collected with the CMS detector at the LHC at a centre-of-mass energy of 8 TeV. The analysis studies $P$-wave $\mathrm{B}^0_\mathrm{S}$ meson decays into $\mathrm{B}^{(*)+}\mathrm{K}^-$ and $\mathrm{B}^{(*)0}\mathrm{K}^0_\mathrm{S}$, where the $\mathrm{B}^+$ and $\mathrm{B}^0$ mesons are identified using the decays $\mathrm{B}^+\to\mathrm{J}/\psi\,\mathrm{K}^+$ and $\mathrm{B}^0\to\mathrm{J}/\psi\,\mathrm{K}^*(892)^0$. The masses of the $P$-wave $\mathrm{B}^0_\mathrm{S}$ meson states are measured and the natural width of the $\mathrm{B}^*_\mathrm{s2}(5840)^0$ state is determined. The first measurement of the mass difference between the charged and neutral $\mathrm{B}^*$ mesons is also presented. The $\mathrm{B}^*_\mathrm{s2}(5840)^0$ decay to $\mathrm{B}^0\mathrm{K}^0_\mathrm{S}$ is observed, together with a measurement of its branching fraction relative to the $\mathrm{B}^*_\mathrm{s2}(5840)^0\to\mathrm{B}^+\mathrm{K}^-$ decay.
Nuclear transparencies measured in exclusive incoherent ρ0 meson production from hydrogen, deuterium, carbon, calcium, and lead in muon-nucleus scattering are reported. The data were obtained with the E665 spectrometer using the Fermilab Tevatron muon beam with a mean beam energy of 470 GeV. Increases in the nuclear transparencies are observed as the virtuality of the photon increases, in qualitative agreement with the expectations of color transparency.
A high statistics sample of photoproduced charm particles from the FOCUS (E831) experiment at Fermilab has been used to search for CP violation in the Cabibbo suppressed decay modes D+ to K-K+pi+, D0 to K-K+ and D0 to pi-pi+. We have measured the following CP asymmetry parameters: A_CP(K-K+pi+) = +0.006 +/- 0.011 +/- 0.005, A_CP(K-K+) = -0.001 +/- 0.022 +/- 0.015 and A_CP(pi-pi+) = +0.048 +/- 0.039 +/- 0.025 where the first error is statistical and the second error is systematic. These asymmetries are consistent with zero with smaller errors than previous measurements.
We present a study of the production of K_s^0 and Lambda^0 in inelastic pbar-p collisions at sqrt(s)= 1800 and 630 GeV using data collected by the CDF experiment at the Fermilab Tevatron. Analyses of K_s^0 and Lambda^0 multiplicity and transverse momentum distributions, as well as of the dependencies of the average number and <p_T> of K_s^0 and Lambda^0 on charged particle multiplicity are reported. Systematic comparisons are performed for the full sample of inelastic collisions, and for the low and high momentum transfer subsamples, at the two energies. The p_T distributions extend above 8 GeV/c, showing a <p_T> higher than previous measurements. The dependence of the mean K_s^0(Lambda^0) p_T on the charged particle multiplicity for the three samples shows a behavior analogous to that of charged primary tracks.
Inclusive and semi-inclusive cross sections for ρ0 production in 100, 200, and 360 GeV/c π−p interactions are presented. Differential cross sections for ρ0 production as functions of c.m. rapidity and transverse momentum are compared with the corresponding differential cross sections for pion production. Effects of various methods of estimating background on the values obtained for ρ0 production cross sections are discussed. About 10% of the final-state charged pions appear to come from ρ0 decay. Thus, while ρ0 production and decay is a significant source of final-state pions, other sources must contribute the majority of the produced pions.
We report measurements from elastic photoproduction of ω's on hydrogen for photon energies between 60 and 225 GeV, elastic φ photoproduction on hydrogen between 35 and 165 GeV and on deuterium between 45 and 85 GeV, elastic photoproduction on deuterium of an enhancement at 1.72 GeV/c2 decaying into K+K−, and elastic and inelastic photoproduction on deuterium of pp¯ pairs.
We measure the forward-backward asymmetry in the production of Λb0 and Λ¯b0 baryons as a function of rapidity in pp¯ collisions at √s = 1.96 TeV using 10.4 fb-1 of data collected with the D0 detector at the Fermilab Tevatron collider. The asymmetry is determined by the preference of Λb0 or Λ¯b0 particles to be produced in the direction of the beam protons or antiprotons, respectively. The measured asymmetry integrated over rapidity y in the range 0.1<|y|<2.0 is A=0.04±0.07(stat)±0.02(syst).
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Translation to Dutch of
"Given a description in words of the costs of some items in terms of an unknown cost, write down an expression for the total cost of a selection of items. Then simplify the expression, and finally evaluate it at a given point.
The word problem is about the costs of sweets in a sweet shop."
Lengths in a right-angled triangle are provided; sin, cos and tan of an angle are asked for.
Find $\displaystyle \frac{d}{dx}\left(\frac{m\sin(ax)+n\cos(ax)}{b\sin(ax)+c\cos(ax)}\right)$. Three part question.
multiple choice testing sin, cos, tan of random(0,90,120,135,150,180,210,225,240,270,300,315,330) degrees
Given a description in words of the costs of some items in terms of an unknown cost, write down an expression for the total cost of a selection of items. Then simplify the expression, and finally evaluate it at a given point.
The word problem is about the costs of sweets in a sweet shop.
Calculate the local extrema of a function $f(x) = e^{x/C_1}\left(C_2\sin(x)-C_3\cos(x)\right)$
The graph of f(x) has to be identified.
The first derivative of f(x) has to be calculated.
The min/max points have to be identified using the graph and/or calculated using the first-derivative method. Requires solving a trigonometric equation.
The Annals of Statistics, Volume 43, Number 1 (2015), 382-421.
Universality for the largest eigenvalue of sample covariance matrices with general population
Abstract
This paper is aimed at deriving the universality of the largest eigenvalue of a class of high-dimensional real or complex sample covariance matrices of the form $\mathcal{W}_{N}=\Sigma^{1/2}XX^{*}\Sigma^{1/2}$. Here, $X=(x_{ij})_{M,N}$ is an $M\times N$ random matrix with independent entries $x_{ij}$, $1\leq i\leq M$, $1\leq j\leq N$ such that $\mathbb{E}x_{ij}=0$, $\mathbb{E}|x_{ij}|^{2}=1/N$. On dimensionality, we assume that $M=M(N)$ and $N/M\rightarrow d\in(0,\infty)$ as $N\rightarrow\infty$. For a class of general deterministic positive-definite $M\times M$ matrices $\Sigma$, under some additional assumptions on the distribution of $x_{ij}$'s, we show that the limiting behavior of the largest eigenvalue of $\mathcal{W}_{N}$ is universal, via pursuing a Green function comparison strategy raised in [Probab. Theory Related Fields 154 (2012) 341–407, Adv. Math. 229 (2012) 1435–1515] by Erdős, Yau and Yin for Wigner matrices and extended by Pillai and Yin [Ann. Appl. Probab. 24 (2014) 935–1001] to sample covariance matrices in the null case ($\Sigma=I$). Consequently, in the standard complex case ($\mathbb{E}x_{ij}^{2}=0$), combining this universality property and the results known for Gaussian matrices obtained by El Karoui in [Ann. Probab. 35 (2007) 663–714] (nonsingular case) and Onatski in [Ann. Appl. Probab. 18 (2008) 470–490] (singular case), we show that after an appropriate normalization the largest eigenvalue of $\mathcal{W}_{N}$ converges weakly to the type 2 Tracy–Widom distribution $\mathrm{TW}_{2}$. Moreover, in the real case, we show that when $\Sigma$ is spiked with a fixed number of subcritical spikes, the type 1 Tracy–Widom limit $\mathrm{TW}_{1}$ holds for the normalized largest eigenvalue of $\mathcal{W}_{N}$, which extends a result of Féral and Péché in [J. Math. Phys. 50 (2009) 073302] to the scenario of nondiagonal $\Sigma$ and more generally distributed $X$. In summary, we establish the Tracy–Widom type universality for the largest eigenvalue of generally distributed sample covariance matrices under quite light assumptions on $\Sigma$. Applications of these limiting results to statistical signal detection and structure recognition of separable covariance matrices are also discussed.
Dates: First available in Project Euclid: 6 February 2015
Permanent link to this document: https://projecteuclid.org/euclid.aos/1423230084
Digital Object Identifier: doi:10.1214/14-AOS1281
Mathematical Reviews number (MathSciNet): MR3311864
Zentralblatt MATH identifier: 06420692
Subjects: Primary: 60B20 (random matrices, probabilistic aspects). Secondary: 62H10, 15B52, 62H25.
Citation
Bao, Zhigang; Pan, Guangming; Zhou, Wang. Universality for the largest eigenvalue of sample covariance matrices with general population. Ann. Statist. 43 (2015), no. 1, 382--421. doi:10.1214/14-AOS1281. https://projecteuclid.org/euclid.aos/1423230084
Supplemental materials: Proofs of some lemmas. In the supplementary material [6], we provide the proofs of Lemmas 4.2, 4.5, 4.6, 5.1, 5.4 and 5.5.
Many authors pull the definitions of the raising and lowering (or ladder) operators out of their butt with no attempt at motivation. This is pointed out nicely in [1] by Eli along with one justification based on factoring the Hamiltonian.
In [2] is a small exception to the usual presentation. In that text, these operators are defined as usual with no motivation. However, after the utility of these operators has been shown, the raising and lowering operators show up in a context that does provide that missing motivation as a side effect.
It doesn’t look like the author was trying to provide a motivation, but it can be interpreted that way.
When seeking the time evolution of Heisenberg-picture position and momentum operators, we will see that those solutions can be trivially expressed using the raising and lowering operators. No special tools or black magic are required to find the structure of these operators. Unfortunately, we must first switch to the Heisenberg picture representation of the position and momentum operators, and also employ the Heisenberg equations of motion. Neither of these last two fits into the standard narrative of most introductory quantum mechanics treatments. We will also see that these raising and lowering “operators” could also be introduced in classical mechanics, provided we were attempting to solve the SHO system using the Hamiltonian equations of motion.
I’ll outline this route to finding the structure of the ladder operators below. Because these are encountered trying to solve the time evolution problem, I’ll first show a simpler way to solve that problem. Because that simpler method depends a bit on lucky observation and is somewhat unstructured, I’ll then outline a more structured procedure that leads to the ladder operators directly, also providing the solution to the time evolution problem as a side effect.
The starting point is the Heisenberg equations of motion. For a time independent Hamiltonian \( H \), and a Heisenberg operator \( A^{(H)} \), those equations are
\begin{equation}\label{eqn:harmonicOscDiagonalize:20}
\ddt{A^{(H)}} = \inv{i \Hbar} \antisymmetric{A^{(H)}}{H}. \end{equation}
Here the Heisenberg operator \( A^{(H)} \) is related to the Schrodinger operator \( A^{(S)} \) by
\begin{equation}\label{eqn:harmonicOscDiagonalize:60}
A^{(H)} = U^\dagger A^{(S)} U, \end{equation}
where \( U \) is the time evolution operator. For this discussion, we need only know that \( U \) commutes with \( H \), and do not need to know the specific structure of that operator. In particular, the Heisenberg equations of motion take the form
\begin{equation}\label{eqn:harmonicOscDiagonalize:80}
\begin{aligned} \ddt{A^{(H)}} &= \inv{i \Hbar} \antisymmetric{A^{(H)}}{H} \\ &= \inv{i \Hbar} \antisymmetric{U^\dagger A^{(S)} U}{H} \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} U H - H U^\dagger A^{(S)} U } \\ &= \inv{i \Hbar} \lr{ U^\dagger A^{(S)} H U - U^\dagger H A^{(S)} U } \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{A^{(S)}}{H} U. \end{aligned} \end{equation}
The Hamiltonian for the harmonic oscillator, with Schrodinger-picture position and momentum operators \( x, p \) is
\begin{equation}\label{eqn:harmonicOscDiagonalize:40}
H = \frac{p^2}{2m} + \inv{2} m \omega^2 x^2, \end{equation}
so the equations of motions are
\begin{equation}\label{eqn:harmonicOscDiagonalize:100}
\begin{aligned} \ddt{x^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{x}{\frac{p^2}{2m}} U \\ &= \inv{2 m i \Hbar} U^\dagger \lr{ i \Hbar \PD{p}{p^2} } U \\ &= \inv{m } U^\dagger p U \\ &= \inv{m } p^{(H)}, \end{aligned} \end{equation}
and
\begin{equation}\label{eqn:harmonicOscDiagonalize:120} \begin{aligned} \ddt{p^{(H)}} &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{H} U \\ &= \inv{i \Hbar} U^\dagger \antisymmetric{p}{\inv{2} m \omega^2 x^2 } U \\ &= \frac{m \omega^2}{2 i \Hbar} U^\dagger \lr{ -i \Hbar \PD{x}{x^2} } U \\ &= -m \omega^2 U^\dagger x U \\ &= -m \omega^2 x^{(H)}. \end{aligned} \end{equation}
In the Heisenberg picture the equations of motion are precisely those of classical Hamiltonian mechanics, except that we are dealing with operators instead of scalars
\begin{equation}\label{eqn:harmonicOscDiagonalize:140}
\begin{aligned} \ddt{p^{(H)}} &= -m \omega^2 x^{(H)} \\ \ddt{x^{(H)}} &= \inv{m } p^{(H)}. \end{aligned} \end{equation}
In the text the ladder operators are used to simplify the solution of these coupled equations, since they can decouple them. That’s not really required since we can solve them directly in matrix form with little work
\begin{equation}\label{eqn:harmonicOscDiagonalize:160}
\ddt{} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix} = \begin{bmatrix} 0 & -m \omega^2 \\ \inv{m} & 0 \end{bmatrix} \begin{bmatrix} p^{(H)} \\ x^{(H)} \end{bmatrix}, \end{equation}
or, with length scaled variables
\begin{equation}\label{eqn:harmonicOscDiagonalize:180}
\begin{aligned} \ddt{} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} &= \begin{bmatrix} 0 & -\omega \\ \omega & 0 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \\ &= -i \omega \sigma_y \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix}. \end{aligned} \end{equation}
Writing \( y = \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \), the solution can then be written immediately as
\begin{equation}\label{eqn:harmonicOscDiagonalize:200}
\begin{aligned} y(t) &= \exp\lr{ -i \omega \sigma_y t } y(0) \\ &= \lr{ \cos \lr{ \omega t } I - i \sigma_y \sin\lr{ \omega t } } y(0) \\ &= \begin{bmatrix} \cos\lr{ \omega t } & -\sin\lr{ \omega t } \\ \sin\lr{ \omega t } & \cos\lr{ \omega t } \end{bmatrix} y(0), \end{aligned} \end{equation}
or
\begin{equation}\label{eqn:harmonicOscDiagonalize:220}
\begin{aligned} \frac{p^{(H)}(t)}{m \omega} &= \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \sin\lr{ \omega t } x^{(H)}(0) \\ x^{(H)}(t) &= \sin\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} + \cos\lr{ \omega t } x^{(H)}(0). \end{aligned} \end{equation}
This solution depends on being lucky enough to recognize that the matrix has a Pauli matrix as a factor (which squares to unity, and allows the exponential to be evaluated easily.)
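If you want to confirm the exponential numerically rather than via the squares-to-unity trick, a small numpy sketch does it (the `expm` helper here is my own, built from an eigendecomposition):

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition; fine for this
    diagonalizable 2x2 (a sketch, not production code)."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

omega, t = 2.0, 0.7
sigma_y = np.array([[0, -1j], [1j, 0]])

U = expm(-1j * omega * t * sigma_y)
R = np.cos(omega * t) * np.eye(2) - 1j * np.sin(omega * t) * sigma_y
assert np.allclose(U, R)        # exp(-i w t sigma_y) = cos(wt) I - i sigma_y sin(wt)
assert np.allclose(U.imag, 0)   # and the result is a purely real rotation matrix
```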
If we hadn’t been that observant, then the first tool we’d have used instead would have been to diagonalize the matrix. For such diagonalization, it’s natural to work in completely dimensionless variables. Such a non-dimensionalisation can be had by defining
\begin{equation}\label{eqn:harmonicOscDiagonalize:240}
x_0 = \sqrt{\frac{\Hbar}{m \omega}}, \end{equation}
and dividing the working (operator) variables through by those values. Let \( z = \inv{x_0} y \), and \( \tau = \omega t \) so that the equations of motion are
\begin{equation}\label{eqn:harmonicOscDiagonalize:260}
\frac{dz}{d\tau} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} z. \end{equation}
This matrix can be diagonalized as
\begin{equation}\label{eqn:harmonicOscDiagonalize:280}
A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = V \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} V^{-1}, \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:300}
V = \inv{\sqrt{2}} \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}. \end{equation}
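A quick numpy sanity check of this diagonalization (a sketch):

```python
import numpy as np

A = np.array([[0, -1], [1, 0]], dtype=complex)
D = np.diag([1j, -1j])
V = np.array([[1j, -1j], [1, 1]]) / np.sqrt(2)

# V D V^{-1} recovers the system matrix ...
assert np.allclose(V @ D @ np.linalg.inv(V), A)

# ... and V^{-1} is (1/sqrt(2)) [[-i, 1], [i, 1]], the matrix that
# appears when expanding V^{-1} z.
Vinv = np.array([[-1j, 1], [1j, 1]]) / np.sqrt(2)
assert np.allclose(np.linalg.inv(V), Vinv)
```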
The equations of motion can now be written
\begin{equation}\label{eqn:harmonicOscDiagonalize:320}
\frac{d}{d\tau} \lr{ V^{-1} z } = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix} \lr{ V^{-1} z }. \end{equation}
This final change of variables \( V^{-1} z \) decouples the system as desired. Expanding that gives
\begin{equation}\label{eqn:harmonicOscDiagonalize:340}
\begin{aligned} V^{-1} z &= \inv{\sqrt{2}} \begin{bmatrix} -i & 1 \\ i & 1 \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i \frac{p^{(H)}}{m \omega} + x^{(H)} \\ i \frac{p^{(H)}}{m \omega} + x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} a^\dagger \\ a \end{bmatrix}, \end{aligned} \end{equation}
where
\begin{equation}\label{eqn:harmonicOscDiagonalize:n} \begin{aligned} a^\dagger &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ -i \frac{p^{(H)}}{m \omega} + x^{(H)} } \\ a &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ i \frac{p^{(H)}}{m \omega} + x^{(H)} }. \end{aligned} \end{equation}
Lo and behold, we have the standard form of the raising and lowering operators, and can write the system equations as
\begin{equation}\label{eqn:harmonicOscDiagonalize:360}
\begin{aligned} \ddt{a^\dagger} &= i \omega a^\dagger \\ \ddt{a} &= -i \omega a. \end{aligned} \end{equation}
It is actually a bit fluky that this matched exactly, since we could have chosen eigenvectors that differ by constant phase factors, like
\begin{equation}\label{eqn:harmonicOscDiagonalize:380}
V = \inv{\sqrt{2}} \begin{bmatrix} i e^{i\phi} & -i e^{i \psi} \\ 1 e^{i\phi} & e^{i \psi} \end{bmatrix}, \end{equation}
so
\begin{equation}\label{eqn:harmonicOscDiagonalize:341}
\begin{aligned} V^{-1} z &= \frac{e^{-i(\phi + \psi)}}{\sqrt{2}} \begin{bmatrix} -i e^{i\psi} & e^{i \psi} \\ i e^{i\phi} & e^{i \phi} \end{bmatrix} \begin{bmatrix} \frac{p^{(H)}}{x_0 m \omega} \\ \frac{x^{(H)}}{x_0} \end{bmatrix} \\ &= \inv{\sqrt{2} x_0} \begin{bmatrix} -i e^{i\phi} \frac{p^{(H)}}{m \omega} + e^{i\phi} x^{(H)} \\ i e^{i\psi} \frac{p^{(H)}}{m \omega} + e^{i\psi} x^{(H)} \end{bmatrix} \\ &= \begin{bmatrix} e^{i\phi} a^\dagger \\ e^{i\psi} a \end{bmatrix}. \end{aligned} \end{equation}
To make the resulting pairs of operators Hermitian conjugates, we’d want to constrain those constant phase factors by setting \( \phi = -\psi \). If we were only interested in solving the time evolution problem no such additional constraints are required.
The raising and lowering operators are seen to naturally occur when seeking the solution of the Heisenberg equations of motion. This is found using the standard technique of non-dimensionalisation and then seeking a change of basis that diagonalizes the system matrix. Because the Heisenberg equations of motion are identical to the classical Hamiltonian equations of motion in this case, what we call the raising and lowering operators in quantum mechanics could also be utilized in the classical simple harmonic oscillator problem. However, in a classical context we wouldn’t have a justification to call this more than a change of basis.
References
[1] Eli Lansey.
The Quantum Harmonic Oscillator Ladder Operators, 2009. URL http://behindtheguesses.blogspot.ca/2009/03/quantum-harmonic-oscillator-ladder.html. [Online; accessed 18-August-2015].
[2] Jun John Sakurai and Jim J Napolitano.
Modern Quantum Mechanics, chapter “Time Development of the Oscillator”. Pearson Higher Ed, 2014.
No; the equator is always warmer than the poles
Earth has a 23.5 degree axial tilt. That means that the sun is never more than 23.5 degrees of latitude from the equator. The incident solar energy ($E$) for any given day is
$$ E = E_d\cos{A_i}$$ where $E_d$ is the incident energy with the sun directly overhead at noon and $A_i$ is the angle of the sun from the zenith (directly overhead) at noon. There are plenty of associated factors like cloud cover and refraction of light from various layers of the atmosphere, but we can ignore them for now.
On the Earth, over one year of sunlight, the total incident sunlight can be calculated as the percentage of the maximum possible sunlight ($E_{max}$); that is, the amount of energy you would receive if the sun were directly overhead at noon every day of the year (i.e. a planet with no axial tilt). We will define a year to be $2\pi$ units long, so $E_{max} = 2\pi E_d$.
The motion of the sun over the course of the year can be modeled by the sine (or cosine) function as $$A_i = \min\left[\frac{\pi}{2}, \left|A_{tilt}\sin{t} + L\right|\right]$$ where $t \in [0, 2\pi]$ are times within one solar year, $A_{tilt}$ is the axial tilt of the planet and $L$ is the absolute value of latitude (north and south do not matter). Note that if $L$ is greater than $A_{tilt}$, then the location is outside of the tropics and the sun can never be overhead. Also note that the clipping at $\pi/2$ is necessary, since if the sun is 90 degrees or more from the zenith it is below the horizon and gives zero light.
Plugging equation 2 into equation 1 and integrating over a year, we get
$$\begin{align}\frac{E}{E_{max}} &= \frac{1}{2\pi}\int_{0}^{2\pi}\cos\left(\min\left[\frac{\pi}{2}, \left|A_{tilt}\sin{t} + L\right|\right]\right)dt\\\end{align}$$
The closed-form solution of this integral can be expressed with Bessel functions, with some added complexity due to the clipping at $\pi/2$, but we can solve it numerically. For Earth, the axial tilt is 23.5 degrees or 0.410 radians, and the solution is roughly 0.958. That is, the equator gets 95.8% as much sunlight as a planet with no axial tilt. The solution for the Tropic of Cancer (or Capricorn) at 23.5 degrees from the equator is 0.878. At 60 degrees (north or south) the value is 0.478, while at the pole it is 0.128. So far, this reflects reality pretty well.
Now, let us change the axial tilt of your planet to 60 degrees. The corresponding number for the equator is 0.743, and for the poles 0.294. You have succeeded in making the entire planet into a temperate or cooler climate, but you have not succeeded in making the poles warmer than the equator.
A general proof of this for any axial tilt comes from observing that the yearly average is a decreasing function of $L$ on $\left[0,\frac{\pi}{2}\right]$ for any $A_{tilt} \in \left[0,\frac{\pi}{2}\right]$ (pairing times $t$ symmetric about the equinoxes, the paired average of the integrand shrinks as $L$ grows), so it is maximized by $L = 0$. Thus for any non-tidally locked planet orbiting the sun, the equator will receive the most solar radiation.
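The integral is easy to evaluate numerically. The sketch below uses a plain Riemann sum over the clipped zenith-angle form $\min[\pi/2, |A_{tilt}\sin t + L|]$ of the noon angle, which is the form the quoted numbers correspond to (the function name is mine):

```python
import math

def yearly_fraction(tilt, lat, n=100_000):
    """E/E_max: yearly noon insolation at absolute latitude `lat`
    relative to a zero-tilt planet, for axial tilt `tilt` (radians).
    Clouds, refraction and day length are ignored, as above."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        zenith = min(math.pi / 2, abs(tilt * math.sin(t) + lat))
        total += math.cos(zenith)
    return total / n

earth = math.radians(23.5)
for lat_deg in (0, 23.5, 60, 90):
    print(lat_deg, round(yearly_fraction(earth, math.radians(lat_deg)), 3))
```

This reproduces the quoted values (about 0.958, 0.878, 0.478 and 0.128 for Earth's tilt, and 0.743 / 0.294 for a 60 degree tilt) to within rounding.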
Difference between revisions of "Main Page"
I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
Line 47: Line 47: −
O'Donnell.35: Just to confirm I have the question right…
+
O'Donnell.35:Just to confirm I have the question right…
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
Line 64: Line 64: −
McCutcheon.469: IP Roth:
+
McCutcheon.469:IP Roth:
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
−
=== High-dimensional Sperner ===
=== High-dimensional Sperner ===
Revision as of 20:58, 11 February 2009

The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
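The wildcard definition is easy to mechanize; a minimal sketch in Python (the template string is a made-up example):

```python
# A combinatorial line in [3]^n: take a template over the alphabet {1,2,3}
# plus a wildcard symbol 'x', and substitute 1, 2, 3 for every wildcard.
def combinatorial_line(template):
    assert 'x' in template, "a line needs at least one wildcard"
    return [template.replace('x', c) for c in '123']

# every line has exactly three points, one per substituted symbol
assert combinatorial_line('11x2x') == ['11121', '11222', '11323']
```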
The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Threads:
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (final call)
(500-599) TBA
(600-699) A reading seminar on density Hales-Jewett (active)
A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here.
Unsolved questions

Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose.

IP-Szemeredi (a weaker problem than DHJ)

Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].)
Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner.
The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.
Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length [math]d[/math]. There is a one-to-one mapping to [math][4]^d[/math]: given a point [math]( (x_1,…,x_d),(y_1,…,y_d) )[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math]; [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0[/math]; [math]z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math]; and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement.
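Solymosi's encoding of a pair of 0-1 sequences as a point of [math][4]^d[/math] can be written out directly; a small sketch (using symbols 0-3 for the four cases, as above):

```python
# z_i = 0 if x_i=y_i=0;  z_i = 1 if x_i=1, y_i=0;
# z_i = 2 if x_i=0, y_i=1;  z_i = 3 if x_i=y_i=1.
CODE = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 3}

def encode(x, y):
    """Map a pair of 0-1 sequences of length d to a point of [4]^d."""
    return tuple(CODE[pair] for pair in zip(x, y))

assert encode((0, 1, 0, 1), (0, 0, 1, 1)) == (0, 1, 2, 3)
```

Since the map is one-to-one, a combinatorial line in [math][4]^d[/math] pulls back to a square in the Cartesian product, which is the point of the argument.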
Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product.
This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do.
I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C, where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.
O'Donnell.35: Just to confirm I have the question right…
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
[ x_i x'_i ]
[ y_i y'_i ]
[ z_i z'_i ]

are equal to one of the following (each column below lists the three pairs top-to-bottom):

[ 0 0 ]  [ 0 0 ]  [ 0 1 ]  [ 1 0 ]  [ 1 1 ]  [ 1 1 ]
[ 0 0 ]  [ 0 1 ]  [ 0 1 ]  [ 1 0 ]  [ 1 0 ]  [ 1 1 ]
[ 0 0 ]  [ 1 0 ]  [ 0 1 ]  [ 1 0 ]  [ 0 1 ]  [ 1 1 ]

?
McCutcheon.469: IP Roth:
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
High-dimensional Sperner
Kalai.29: There is an analogue of this for Sperner, but with high-dimensional combinatorial spaces instead of "lines"; I do not remember the details (Kleitman? Katona? Those are the usual suspects.)
Fourier approach
Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express, using the Fourier expansion of f, the expression \int f(x)f(y)1_{x<y}, where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density by passing to subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier ;) This is not unrelated to the regularity philosophy.
Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again.
The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient.
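The structural claim in this example (two points of a combinatorial line in A force the third into A) can be checked by brute force; a sketch with 7 replaced by 3 and small n, so the search stays tiny:

```python
from itertools import product

n, m = 6, 3   # string length and modulus (3 stands in for 7)

def in_A(s):
    """s is in A iff its counts of 1s, 2s and 3s are all multiples of m."""
    return all(s.count(c) % m == 0 for c in '123')

def line(template):
    """The combinatorial line of a template over '123x'."""
    return [template.replace('x', c) for c in '123']

for t in (''.join(p) for p in product('123x', repeat=n)):
    if 'x' not in t:
        continue
    members = [in_A(p) for p in line(t)]
    if sum(members) >= 2:      # two points of the line in A ...
        assert all(members)    # ... force the third one in as well
```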
You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7.

DHJ for dense subsets of a random set
Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
Bibliography

H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3”, Graph Theory and Combinatorics (Cambridge, 1988), Discrete Math. 75 (1989), no. 1-3, 227–241.
R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3”, unpublished.
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem”, J. Anal. Math. 57 (1991), 64–119.
An alternate explanation rather than from wikipedia is preferable.
I am not an expert, but it seems that the fundamental plane is a relation among characteristic quantities of a galaxy, showing a correlation that is the analogue of the Tully-Fisher relation for spiral galaxies.
You can start from the virial theorem: $\frac{M}{R}\sim v^2$
($M$ mass contained in the radius $R$, $v$ velocity dispersion of the stars, also indicated with $\sigma$ in these cases) meaning that the stars behave like an isothermal sphere.
Assuming that all galaxies have the same mass-to-light ratio $M/L$, and that all galaxies have the same surface brightness
$\Sigma=L/R^2$
(with $L$ luminosity), you obtain the Tully-Fisher relation:
$L\sim v^4$
Of course, not all galaxies have the same surface brightness. So if you take $\Sigma=L/R^2$ and substitute it into the virial theorem (keeping the mass-to-light ratio assumption), you obtain a relation between luminosity, surface brightness, and velocity dispersion:
$L \sim \sigma^4 \Sigma^{-1}$
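As a sanity check that the three ingredients really combine this way, here is a toy numeric sketch (the constant k, standing for the mass-to-light ratio, and the sample values are arbitrary):

```python
import math

# Virial theorem: M/R = v^2; constant mass-to-light ratio: M = k*L;
# constant surface brightness: Sigma = L/R^2.
# Eliminating M and R gives L = v^4 / (k^2 * Sigma), i.e. L ~ v^4 / Sigma.
def luminosity(v, k, Sigma):
    return v**4 / (k**2 * Sigma)

k, Sigma, v = 2.0, 0.5, 3.0
L = luminosity(v, k, Sigma)
M = k * L                         # mass from the mass-to-light ratio
R = math.sqrt(L / Sigma)          # radius from the surface brightness
assert math.isclose(M / R, v**2)  # the virial theorem is recovered
```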
You can plot elliptical galaxies according to this relation, and you will note that they do not distribute randomly but are concentrated along the fundamental plane (more or less the same happens for stars in the HR diagram):
(Here $R_e$ is the so-called effective radius.)
For completeness, another illuminating picture from here:
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in R dividing it.
But, for $n=2$, I can't find any non-constant divisors of $f(x,y)=xy$ other than $x$, $y$, and $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed; no, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
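For what it's worth, the 6 pm answer survives a direct check in code:

```python
# Train A: 40 mph starting at noon; train B: 60 mph starting at 2 pm.
def miles_A(hours_after_noon):
    return 40 * hours_after_noon

def miles_B(hours_after_noon):
    return 60 * max(0, hours_after_noon - 2)

assert miles_A(6) == miles_B(6) == 240   # 6 pm: B has just caught A
assert miles_A(4) > miles_B(4)           # 4 pm: A (160 mi) still ahead of B (120 mi)
```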
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term e^{(r_1-r_2)x} come from?
It seems like it is taken out of the blue, but it yields the desired result. |
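One way to see what the linked answer is doing: $e^{rx}$ solves the ODE exactly when $r$ is one of the roots $r_1, r_2$ of the characteristic polynomial, which a quick numeric sketch confirms (the values of $r_1, r_2$ are arbitrary):

```python
import math

r1, r2 = 2.0, -3.0

def residual(r, x):
    """Plug y = e^{r x} into y'' - (r1+r2) y' + r1 r2 y."""
    y = math.exp(r * x)
    return (r * r) * y - (r1 + r2) * r * y + r1 * r2 * y

for x in (-1.0, 0.0, 0.5, 2.0):
    assert math.isclose(residual(r1, x), 0.0, abs_tol=1e-9)
    assert math.isclose(residual(r2, x), 0.0, abs_tol=1e-9)
assert abs(residual(1.0, 1.0)) > 1e-6   # a generic r does not solve it
```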
As all words of length $>1$ and only consisting of a's should be contained in L2, there is a simple finite automaton that recognizes it. So your attempt at using the pumping lemma is futile, as the pumping lemma only helps you prove that a language is irregular if it is, and doesn't tell you anything about languages that are regular.Maybe I'm also ...
Here is yet another proof. It is known that the number of integers at most $n$ which are the product of two primes is $o(n)$, see for example this answer, which gives the asymptotic $\frac{n\log\log n}{\log n}$. This means that your language is infinite yet has vanishing asymptotic density. This is impossible for a regular unary language.
According to the Fundamental Theorem of Arithmetic, any integer $>1$ can be written as a product of one or more primes (in a unique way). So, it seems that your language can be simplified as $\{a^n\mid n\geq 2\}$.
Just pump up $(M+1)$ $y$'s. Now you get $xy^{M+1}z=a^{(M+1)j+M-j}=a^{M(j+1)}$. Since $M$ is a product of two primes, $M(j+1)$ is a product of at least 3 primes, so $a^{M(j+1)}\notin L_1$, which proves $L_1$ is not regular by the pumping lemma.
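The counting step of the argument can be checked numerically; a small sketch (the sample M is an arbitrary product of two primes):

```python
def num_prime_factors(n):
    """Prime factors of n counted with multiplicity (Omega(n))."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

M = 7 * 11                          # product of exactly two primes
assert num_prime_factors(M) == 2
# Pumping y (with |y| = j >= 1) up to M+1 copies yields length M*(j+1),
# which always has at least three prime factors:
for j in range(1, M):
    assert num_prime_factors(M * (j + 1)) >= 3
```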
You need to keep doing it an infinite number of times before you reach any infinite languages. So your proof will involve transfinite induction. As Wikipedia says: Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers. Let P(α) be a property defined for all ...
There is an alternative to the “pumping” lemma which I find easier: After each possible input, determine the set of continuations that would complete a string of the language. You can use each of those sets as a state in the finite state machine for the language, so if there is a finite number of those sets then the language is regular; if there are ...
The idea is to start with a grammar for the related language $L'_2 = \{a^ib^j \mid 2j \leq i \leq 3j\}$:$$ S \to a^2Sb \mid a^3Sb \mid \epsilon. $$We want to force at least one production of the form $a^2Sb$ and at least one of the form $a^3Sb$. There are many ways of doing that. The simplest, probably, is to force one of these productions to be the first, ...
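The grammar for $L'_2$ can be checked by enumerating short derivations; a brute-force sketch:

```python
# S -> aaSb | aaaSb | epsilon; each production adds 2 or 3 a's and one b,
# so a derivation with j b's produces between 2j and 3j a's.
def generate(max_steps):
    """All (a-part, b-part) pairs derivable in at most max_steps productions."""
    out = {("", "")}
    if max_steps > 0:
        for a_part, b_part in generate(max_steps - 1):
            out.add(("aa" + a_part, b_part + "b"))
            out.add(("aaa" + a_part, b_part + "b"))
    return out

for a_part, b_part in generate(5):
    i, j = len(a_part), len(b_part)
    assert 2 * j <= i <= 3 * j
```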
The same "argument" would show that $\mathbb{N}$ is finite since it's the union of finite sets $$\mathbb{N}=\{0\}\cup\{0,1\}\cup\{0,1,2\}\cup ...$$ The point is that knowing that a given property is preserved under a given operation does not mean that it's preserved under "infinite iterations" of that operation.
Here is working code. Don't forget to input your percentage in fractional form (e.g. $0.02$ for $2\%$).

#include <stdio.h>

#define MONTHS_IN_YEAR 12

float monthlyRepaymentAmount(float principalAmount, float interestRate, int numberOfYears);

int main(void) {
    // Problem statement variables
    float principalAmount; // use int if principal can ...
Here is one result in the direction you're looking into:Suppose that $A,B,C$ are languages such that $A$ is a non-regular subset of the regular language $C$, and $B$ is disjoint from $C$. Then $A \cup B$ is non-regular.For the proof, consider the intersection of $A \cup B$ and $C$. Details left to the reader.The question you link to concerns the ...
I'm not sure what $L1, \dots, Lk$ are since you did not define them. The easiest way is probably that of starting with a DFA for $L$ and constructing a NFA for $drop(L)$ (hint in the spoiler below).Then it should be easy to show that:If $w \in drop(L)$ then the NFA accepts $w$: use the definition of the function $drop$ to conclude that there must be a ...
Can you use a rule that looks like $S \rightarrow A_1A_2\dots A_iS \mid B_1B_2\dots B_iS \mid Z_i$, where $i \ge 2$?No. Grammar rules consist of explicitly given finite strings of terminals and non-terminals on each side of the arrow, and a grammar may contain only finitely many rules. The first restriction rules out the "for all $i\ge 2$" part of your ...
I believe I can prove that:a RE using only possessive operators is equivalent to a RE without possessive operators.any RE can be rewritten to an equivalent RE that uses only possessive operators.For two expressions $A$ and $B$ to be equivalent means that they define the same language:$$A\equiv B \implies L(A) = L(B)$$Let's mark with "$\hat{\ }$" the ...
Pushdown automata do not necessarily halt. They are not forced to read input each step, they can also do so-called $\lambda$-instructions where the tape is not advanced.Then we can have an infinite loop at a certain tape position.Likewise, linear bounded automata may loop, and do not always halt. However, they can be simulated by a Turing machine, which ...
Using the accepted answer by @David Richerby: I think what we have to do is modify the DFAs that recognize L1 and L2. Let Σ1 be the alphabet of L1 and Σ2 the alphabet of L2, and let Σ = Σ1 ∪ Σ2. Say we have a DFA for L1 called M. To M add an extra state called y, and for every letter in Σ but not in Σ1 add a transition from each state of M to y. Then ...
Let $S$ be the list of all prefixes of words in $L$. Create a DFA with a state $q_s$ for each $s \in S$, and an additional sink state $q_\bot$. The starting state is $q_\epsilon$, and a state is accepting if it corresponds to a word in $L$. When at a non-sink state $q_s$, upon reading $\sigma$, move to $q_{s\sigma}$ if $s\sigma \in S$, and to $q_\bot$ ...
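For a finite language the construction is mechanical; a sketch (the sample language is made up, and the sink state is represented by None):

```python
def make_dfa(L):
    """States are the prefixes of words in L, plus an implicit sink (None)."""
    prefixes = {w[:i] for w in L for i in range(len(w) + 1)}
    sigma = {c for w in L for c in w}
    delta = {(s, c): (s + c if s + c in prefixes else None)
             for s in prefixes for c in sigma}
    return delta, set(L)

def accepts(dfa, word):
    delta, accepting = dfa
    state = ""
    for c in word:
        if state is None:          # trapped in the sink
            return False
        state = delta.get((state, c))
    return state in accepting

dfa = make_dfa({"ab", "aab"})
assert accepts(dfa, "ab") and accepts(dfa, "aab")
assert not accepts(dfa, "a") and not accepts(dfa, "abb")
```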
The subset of all palindromes in L is obviously not usually regular; take the simple example $a^*ba^*$, where the subset of palindromes $a^nba^n$ is not regular. Assume you have an FSM for L (that is, an FSM describing and defining L). You can take that FSM and use a simple algorithm to determine if w is in L: Given a state S, define succ(S, a) as the state ...
The class of languages recognized by FTMs is the class of regular languages. An FTM can completely simulate a DFA, so each regular language can be recognized by an FTM. On the other hand, if we can prove that each FTM recognizes a regular language, we have proved the assertion. Given an FTM $F$ with set of states $Q$, we use the Myhill–Nerode theorem to show the ...
Clearly not. Let $A=\{a\}$ and $B=\{aa\}$. Now, $A\cap B = \emptyset$, so $(A\cap B)^* = \{\epsilon\}$, but $A^*\cap B^*=B^*=\{a^{2i} : i \in \mathbb{N}\}$ (all strings consisting of an even number of $a$'s).
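A bounded enumeration makes the counterexample concrete (string lengths capped at 6 for this sketch):

```python
def star_upto(lang, max_len):
    """All concatenations of words from lang of total length <= max_len."""
    result, frontier = {""}, {""}
    while frontier:
        new = {w + x for w in frontier for x in lang if len(w + x) <= max_len}
        frontier = new - result
        result |= new
    return result

A, B = {"a"}, {"aa"}
assert A & B == set()
assert star_upto(A & B, 6) == {""}                    # (A ∩ B)* = {ε}
assert star_upto(A, 6) & star_upto(B, 6) == {"", "aa", "aaaa", "aaaaaa"}
```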
The answers above are excellently written, but I think one other approach would be helpful. We want to prove that if $L(A)$ is a $CFL$ and $L(B)$ is regular then $L(A/B) = \{w \mid wx \in A,\ x \in B,\ w, x \in \Sigma^*\}$ is also a $CFL$. Let's use the fact that the set of regular languages and the set of context-free languages are ...
Given that, I would expect that for any reasonable model of computation, if $f : A \rightarrow B$ and $g : B \rightarrow C$ are computable, then $g \circ f : A \rightarrow C$ should be as well.Let's say our model is quadratic time computation. If $f$ is the function which maps a string of length $n$ to a string of $n^2$ zeroes, then $f$ is computable in ...
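The growth in that example is easy to make concrete (a sketch of the length-squaring map):

```python
# f maps a string of length n to n^2 zeroes; composing it with itself
# therefore squares twice, so g∘f-style composition is not quadratic-time.
f = lambda s: "0" * (len(s) ** 2)

s = "abcd"
assert len(f(s)) == 16          # n^2
assert len(f(f(s))) == 256      # (n^2)^2 = n^4
```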
After days of constantly refreshing this page in hope of an answer, today I woke up and my very first thought might be it. If all states are reachable and the automaton is deterministic, this means there has to be a set of inputs by which every state will at some point be reached. Semi-decidable algorithm: Validate the DLBA encoding and reject if it is not one; # returns 'no' on bogus...
Let w be the shortest word in L with length l. Then the shortest word in $L^2$ has length 2l, but the shortest word in $L^3$ has length 3l. If l ≥ 1 then the sets are different.Now assume L contains the empty string. $L^3$ is the set of all strings that can be created by taking an element of $L^2$ and adding an element of L. Among many others, this ...
I'm assuming by $\lambda$ you mean the empty word and by $n_0(w)$ the length of a word.The proof for the first part is not correct: You argue that every word in $L^2$ has length at least $2$ and every word in $L^3$ has length at least $3$. From that it does not follow that every word in $L^3$ is longer than every word $L^2$, because there could also be a ...
There is something to be gotten out of the computation. Either you interpret the final tape contents (and you stop by the "HALT" command, not by going into an accept state, which you do not have), or you distinguish two states (accept, reject), or stopping means accept (again by HALT, no accept state) and running forever means reject. The halting problem ...
Your exam question makes very little sense. The obvious reading would be this: Let $M$ and $N$ be two Turing machines. Why is it not possible to prove that $M$ and $N$ compute the same function? More precisely: It is not the case that for all Turing machines $M$ and $N$ it is provable that $M$ and $N$ compute the same function. Well, this is quite ...
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm\sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma.$$ I am having a tough time visualising what this is.
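With the first relation read as $x=\pm\sqrt{R^2-b^2}\,\sin\sigma$, curves of constant $R$ are ellipses, since $x^2/(R^2-b^2)+z^2/R^2=\sin^2\sigma+\cos^2\sigma=1$. A quick numeric sketch with arbitrary sample values:

```python
import math

b, R = 1.0, 2.0                     # arbitrary sample values with R > b
for k in range(1, 20):
    sigma = 0.3 * k
    x = math.sqrt(R**2 - b**2) * math.sin(sigma)
    z = R * math.cos(sigma)
    # the point (x, z) stays on the ellipse of constant R
    assert math.isclose(x**2 / (R**2 - b**2) + z**2 / R**2, 1.0)
```

This is the usual picture of confocal ellipses (constant $R$) and hyperbolae (constant $\sigma$) with foci at $z=\pm b$.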
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$
I am having trouble understanding non-isolated singularity points. An isolated singularity I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it
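For what it's worth, $|x+i|=\sqrt{x^2+1}$ is correct, and the principal branch satisfies $\log(x+i)=\tfrac12\log(1+x^2)+i\left(\tfrac{\pi}{2}-\arctan x\right)$ (note the factor $\tfrac12$, since $\log|x+i|=\tfrac12\log(x^2+1)$). A quick numeric check:

```python
import cmath, math

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    w = cmath.log(x + 1j)                            # principal branch
    assert math.isclose(w.real, 0.5 * math.log(1 + x**2))
    assert math.isclose(w.imag, math.pi / 2 - math.atan(x))
```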
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
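For what it's worth, (1) needs only reflexivity plus substitution of equals for equals; formalized, the proof is one rewrite (a sketch in Lean 4, with arbitrary names):

```lean
-- From x = y, rewrite the goal r x y to r y y and close it by reflexivity.
example {X : Type} (r : X → X → Prop) (hrefl : ∀ a, r a a)
    (x y : X) (h : x = y) : r x y := by
  rw [h]
  exact hrefl y
```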
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\ldots,a_{n-1}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Difference between revisions of "Angle"
Latest revision as of 13:05, 4 December 2016
A geometrical figure consisting of two distinct rays issuing from the same point. The rays are called the sides of the angle, and their common origin is called the vertex of the angle. Let $ [BA),[BC) $ denote the sides of an angle, $ B $ its vertex, and $ \alpha $ the plane defined by its sides. The figure $ \Gamma = [BA) \cup [BC) $ divides the plane $ \alpha $ into two adjoint regions $ \alpha'_{i} $, $ i \in \{ 1,2 \} $: $ \alpha'_{1} \cup \alpha'_{2} = \alpha \setminus \Gamma $. The region $ \alpha_{i} = \alpha'_{i} \cup \Gamma $, $ i \in \{ 1,2 \} $, is also called an angle or a plane angle; $ \alpha'_{i} $ is called the interior domain of the plane angle $ \alpha_{i} $. The regions (angles) $ \alpha_{1},\alpha_{2} $ are called supplementary.
Two angles are called equal (or congruent) if they can be superposed so that corresponding sides and vertices coincide. For every ray in the plane and a given side of it, one can construct a unique angle equal to a given one. The comparison of two angles can be realized in two ways. If the angle is considered as a pair of rays with a common origin, then to answer the question which of the two angles is the greater, it is necessary to superpose in the given plane the vertices and one of each pair of sides (see Figure (a)). If the second side of an angle comes to lie inside the second angle, one says that the first angle is smaller than the second.
Figure (a)
A second method of comparing angles consists in assigning to each angle a certain number. Equal angles will have the same number of degrees or radians (see below), greater angles a greater and smaller angles a smaller number.
Two angles are called supplementary if they have their vertex and one side is common, while the other two sides form a straight line (see Figure (b)).
Figure (b)
Generally, angles having a common vertex and one common side are called adjoining. Two angles are called vertical if the sides of one are the prolongations through the vertex of the sides of the other. Vertical angles are equal to each other. Angles the sides of which form a straight line are called straight. Half a straight angle is called a right angle. A right angle may equivalently be defined as follows: An angle equal to its supplement is called right. The interior domain of a plane angle that is not greater than a straight angle is called a convex domain in the plane.
As the unit of measurement of an angle, one uses the $ 90 $th part of a right angle, called a degree. The cyclic or radian measure of an angle is also used. The numerical value of the radian measure of an angle is the length of the arc cut out by the sides of the angle on the unit circle. One radian is assigned to the angle the arc of which has length equal to the radius. A straight angle is equal to $ \pi $ radians.
By intersecting two straight lines lying in a plane with a third, there are formed angles (see Figure (c)): 1 and 5, 2 and 6, 4 and 8, 3 and 7; these are called corresponding angles; 2 and 5, 3 and 8 are called interior angles on the same side; 1 and 6, 4 and 7 are called exterior angles on the same side; 3 and 5, 2 and 8 are called interior opposite angles; and 1 and 7, 4 and 6 are called exterior opposite angles.
Figure (c)
In practical problems, it is convenient to consider angles as the measure of rotation of a fixed ray round its origin to a given position. Depending on the direction of the rotation, the angle may in this case be considered as positive or negative. Thus, an angle in this sense can have as value any real number. The angle as a measure of the rotation of a ray is considered in the theory of trigonometric functions: For any value of the argument (an angle), one can define a value for the trigonometric functions. The concept of an angle in geometrical systems founded on the axiomatics of positions of point vectors is radically different from the definition of angles as figures — in such axiomatics, by an angle, one understands a well-defined metric quantity connected with two vectors by means of their scalar product. Thus, each pair $ (\mathbf{a},\mathbf{b}) $ of vectors defines a certain angle $ \phi $ — the number given by the vector formula $$ \cos(\phi) = \frac{\langle \mathbf{a},\mathbf{b} \rangle}{\| \mathbf{a} \| \| \mathbf{b} \|}, $$ where $ \langle \mathbf{a},\mathbf{b} \rangle $ is the scalar product of the vectors.
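The vector formula for $ \cos(\phi) $ above is easy to check numerically. A minimal sketch (the function name `angle_between` is my own, not from the article):

```python
import math

def angle_between(u, v):
    # cos(phi) = <u, v> / (||u|| ||v||), the scalar-product formula above
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    # clamp to [-1, 1] to guard against floating-point round-off
    return math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v))))

# perpendicular unit vectors form a right angle, pi/2 radians (90 degrees)
print(angle_between((1, 0), (0, 1)))  # 1.5707963267948966
```

Note the clamp before `acos`: for nearly parallel vectors the quotient can drift just outside $ [-1,1] $ in floating point, which would otherwise raise a domain error.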
The concept of an angle as a plane figure and as a certain numerical quantity is applied in different geometrical problems in which the angle plays a special role. Thus, by the angle between two intersecting curves having a well-defined tangent in their point of intersection one understands the angle formed by these tangents.
By the angle between a straight line and a plane, one understands the angle formed by the straight line and its orthogonal projection on the plane; its value lies between $ 0^{\circ} $ and $ 90^{\circ} $. By the angle between two crossing lines, one understands the angle between the directions of these lines, i.e., between two lines parallel to the given crossing lines and passing through a given point.
In higher-dimensional geometry, one defines the angle between multi-dimensional planes, between straight lines and planes, etc. Angles between straight lines, between planes and straight lines, and between higher-dimensional planes are also defined in non-Euclidean spaces.
References
[a1] M.J. Greenberg, “Euclidean and non-Euclidean geometries”, Freeman (1974)

Source: Angle. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Angle&oldid=13323
Without some information about the construction of these $12\times 12$ positive definite real symmetric matrices, the suggestions to be made are of necessity fairly limited.
I downloaded the Armadillo package from Sourceforge and took a look at the documentation. Try to improve the performance of separately computing $\det(Q)$ and $\det(12I - Q - J)$, where $J$ is the rank-one matrix of all ones, by setting, e.g., det(Q, slow=false). The documentation notes that this is the default for matrices up to size $4\times 4$, so by omission I assume the slow=true option is the default for the $12\times 12$ case.
What slow=true presumably does is partial or full pivoting in getting a row echelon form, from which the determinant is easily found. However, you know in advance that the matrix $Q$ is positive definite, so pivoting is unnecessary for stability (at least presumptively for the bulk of your computations). It's unclear whether the Armadillo package throws an exception if the pivots become unduly small, but this should be a feature of a reasonable numerical linear algebra package.

EDIT: I found the Armadillo code that implements det in the header file include\armadillo_bits\auxlib_meat.hpp, using C++ templates for substantial functionality. The setting slow=false doesn't appear to affect how a $12\times 12$ determinant will be done, because the computation gets "thrown over a wall" to LAPACK (or ATLAS) at that point with no indication that pivoting is not required; see det_lapack and its invocations in that file.
The other point would be to follow their recommendation of building the Armadillo package linking to high speed replacements for BLAS and LAPACK, if you are indeed using those; see Sec. 5 of the Armadillo README.TXT file for details. [The use of a dedicated 64-bit version of BLAS or LAPACK is also recommended for speed on current 64-bit machines.]
Row reduction to echelon form is essentially Gaussian elimination, and has arithmetic complexity $\frac{2}{3} n^3 + O(n^2)$. For both matrices this then amounts to twice that work, or $\frac{4}{3} n^3 + O(n^2)$. These operations may well be the "bottleneck" in your processing, but there's little hope that without special structure in $Q$ (or some known relationships among the trillion test cases allowing amortization) the work could be reduced to $O(n^2)$.
For comparison, expansion by cofactors of a general $n\times n$ matrix involves $n!$ multiplication operations (and roughly as many additions/subtractions), so for $n=12$ the comparison ($12! = 479001600$ vs. $\frac{2}{3} n^3 = 1152$) clearly favors elimination over cofactors.
Another approach requiring $\frac{4}{3} n^3 + O(n^2)$ work would be reducing $Q$ to tridiagonal form with Householder transformations, which also puts $12I - Q$ into tridiagonal form. Computing $\det(Q)$ and $\det(12I - Q - J)$ can thereafter be done in $O(n)$ operations. [The effect of the rank one update $-J$ in the second determinant can be expressed as a scalar factor given by solving one tridiagonal system.]
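The $O(n)$ evaluation of a tridiagonal determinant mentioned above uses the standard three-term recurrence for leading principal minors. An illustrative pure-Python sketch (this is not the Armadillo API; the function name is mine):

```python
def tridiag_det(diag, sub, sup):
    # Determinant of the tridiagonal matrix with main diagonal `diag`
    # (length n) and sub-/super-diagonals `sub`, `sup` (length n-1), via
    # the minor recurrence f_k = diag[k]*f_{k-1} - sub[k-1]*sup[k-1]*f_{k-2}.
    # O(n) work, versus O(n^3) for elimination on a full matrix.
    f_prev, f = 1.0, float(diag[0])
    for k in range(1, len(diag)):
        f_prev, f = f, diag[k] * f - sub[k - 1] * sup[k - 1] * f_prev
    return f

# Sanity check: the n-by-n second-difference matrix (2 on the diagonal,
# -1 off it) is known to have determinant n + 1.
print(tridiag_det([2] * 5, [-1] * 4, [-1] * 4))  # 6.0
```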
Implementing such an independent computation might be worthwhile as a check on the results of successful (or failed) calls to Armadillo's det function.
Special Case: As suggested by a Comment of Jernej, suppose that $Q = D - J$ where $J$ as before is the (rank 1) matrix of all ones and $D=\text{diag}(d_1,\ldots,d_n)$ is a nonsingular (positive) diagonal matrix. Indeed for the proposed application in graph theory these would be integer matrices. Then an explicit formula for $\det(Q)$ is:
$$ \det(Q) = \left(\prod_{i=1}^n d_i \right)\left(1 - \sum_{i=1}^n d_i^{-1} \right) $$
A sketch of its proof affords an opportunity to illustrate wider applicability, i.e. whenever $D$ has a known determinant and the system $Dv = (1\ldots 1)^T$ is quickly solved. Begin by factoring out:
$$ \det(D - J) = \det(D) \cdot \det(I - D^{-1}J) $$
Now $D^{-1}J$ is again rank 1, namely $(d_1^{-1}\ldots d_n^{-1})^T(1\ldots 1)$. Note that the second determinant is simply:
$$ f(1) = \det(I- D^{-1}J) $$
where $f(x)$ is the characteristic polynomial of $D^{-1}J$. Since $D^{-1}J$ has rank $1$, $f(x)$ must have (at least) $n-1$ factors of $x$ to account for its nullspace. The "missing" eigenvalue is $\sum d_i^{-1}$, as may be seen from the computation:
$$ D^{-1}J\; (d_1^{-1}\ldots d_n^{-1})^T = \left(\sum d_i^{-1} \right) (d_1^{-1}\ldots d_n^{-1})^T $$
It follows that the characteristic polynomial is $f(x) = x^{n-1} \left(x - \sum d_i^{-1}\right)$, so that $f(1) = 1 - \sum d_i^{-1}$, which is the value of $\det(I - D^{-1}J)$ used in the formula above.
Also note that if $Q = D-J$, then $12I - Q - J = 12I - D + J - J = 12I - D$, a diagonal matrix whose determinant is simply the product of its diagonal entries. |
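Under the stated assumptions, the closed-form determinant for the special case can be sanity-checked against pivot-free Gaussian elimination. A hedged sketch (helper names are mine):

```python
def det_no_pivot(M):
    # Gaussian elimination WITHOUT pivoting, as discussed above: fine here
    # because D - J is positive definite when sum(1/d_i) < 1, so every
    # pivot stays positive. O(2/3 n^3) flops.
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for k in range(n):
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

def det_closed_form(ds):
    # det(D - J) = (prod d_i) * (1 - sum 1/d_i), the formula derived above
    prod = 1.0
    for di in ds:
        prod *= di
    return prod * (1.0 - sum(1.0 / di for di in ds))

ds = [3.0, 4.0, 5.0, 6.0]           # here sum(1/d_i) = 57/60 < 1
M = [[(ds[i] if i == j else 0.0) - 1.0 for j in range(4)] for i in range(4)]
print(det_no_pivot(M), det_closed_form(ds))  # both approximately 18.0
```

For this example the closed form gives $360 \cdot (1 - 57/60) = 18$ exactly, so both paths should agree to round-off.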
Fujimura's problem
Let [math]\overline{c}^\mu_n[/math] denote the size of the largest subset of the triangular grid
[math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math]
which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets
triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture.
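The small values of [math]\overline{c}^\mu_n[/math] tabulated below can be reproduced by exhaustive search. A brute-force sketch (function names are mine; the search is exponential in the grid size, so it is only sensible for tiny n):

```python
from itertools import combinations

def delta(n):
    # the triangular grid: {(a, b, c) >= 0 with a + b + c = n}
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangles(n):
    # all equilateral triangles (a+r,b,c), (a,b+r,c), (a,b,c+r) with r > 0
    tris = []
    for r in range(1, n + 1):
        for (a, b, c) in delta(n - r):
            tris.append(((a + r, b, c), (a, b + r, c), (a, b, c + r)))
    return tris

def fujimura(n):
    # size of the largest triangle-free subset of delta(n), by brute force
    pts, tris = delta(n), triangles(n)
    for size in range(len(pts), -1, -1):
        for s in combinations(pts, size):
            chosen = set(s)
            if not any(all(v in chosen for v in t) for t in tris):
                return size

print([fujimura(n) for n in range(5)])  # [1, 2, 4, 6, 9]
```

Note that `len(triangles(n))` equals the [math]\binom{n+2}{3}[/math] count used in the upper-bound argument further down.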
n | 0 | 1 | 2 | 3 | 4 | 5
[math]\overline{c}^\mu_n[/math] | 1 | 2 | 4 | 6 | 9 | 12

n=0
[math]\overline{c}^\mu_0 = 1[/math]:
This is clear.
n=1
[math]\overline{c}^\mu_1 = 2[/math]:
This is clear.
n=2
[math]\overline{c}^\mu_2 = 4[/math]:
This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).
n=3
[math]\overline{c}^\mu_3 \geq 6[/math]:
For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math].
For the upper bound: observe that with only three removals each of these (non-overlapping) triangles must have one removal:
set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)
Consider choices from set A:
(0,3,0) leaves triangle (0,2,1) (1,2,0) (1,1,1)
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work
(1,2,0) is symmetrical with (0,2,1)

n=4

[math]\overline{c}^\mu_4=9[/math]:
The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c =0, has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.)
Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can be only one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can be only 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3), (1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].
Remark: curiously, the best constructions for [math]c_4[/math] use only 7 points instead of 9.

n=5

[math]\overline{c}^\mu_5=12[/math]:
The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn’t contain any equilateral triangles.
Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can be only one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can be only 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if [math]|S|\gt12[/math], S doesn't contain any of these. S can only contain 2 points in each of the following equilateral triangles:
(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)
So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]S\leq 21-9=12[/math].
n=6

[math]15 \leq \overline{c}^\mu_6 \leq 17[/math]:
[math]15 \leq \overline{c}^\mu_6[/math] from the bound for general n.
Note that there are eight extremal solutions to [math] \overline{c}^\mu_3 [/math]:
Solution I: remove 300, 020, 111, 003
Solution II: remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201
Also consider the same triangular lattice with the point 020 removed, making a trapezoid. Solutions based on I-III are:
Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102
Solution VI: remove 210, 021, 102
Solution VI': remove 120, 012, 201
Suppose we can remove all equilateral triangles in our 7×7×7 triangular lattice with only 10 removals.
The triangle 141-411-114 must have at least one point removed. Remove 141, and note because of symmetry any logic that follows also applies to 411 and 114.
There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed.
(Now only six removals remaining.)
The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose the trapezoid 600-420-321-303 uses the solution IV (by symmetry the same logic will work with the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006. Then 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice.
Note this means the "top triangle" 060-330-033 must have only four points removed, so it must conform to either solution I or II, because of the removal of 141.
Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'.
Suppose the solution of the trapezoid 600-420-321-303 is V. This leaves an equilateral triangle 420-321-330 which forces the "top triangle" to be solution I. This leaves the equilateral triangle 201-321-222. So the trapezoid can't be V.
Therefore the solution of the trapezoid 600-420-321-303 is IV. Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, that means the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves triangle 321-024-051 open, and we have reached a contradiction.
General n
A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle. In a similar spirit, we have the lower bound
[math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math]
for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has third vertex outside of the original example.
An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero.
A trivial upper bound is
[math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math]
since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound
[math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math]
which comes from deleting two bottom rows of a triangle-free set and counting how many vertices are possible in those rows.
Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].
Asymptotics
The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math].
By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math]. |
Previously, we studied how functions can be represented as power series, \(y(x)=\sum_{n=0}^{\infty} a_nx^n\). We also saw that we can find series representations of the derivatives of such functions by differentiating the power series term by term. This gives
\[y′(x)=\sum_{n=1}^{\infty}na_nx^{n−1} \nonumber\]
and
\[y″(x)=\sum_{n=2}^{\infty}n(n−1)a_nx^{n−2}. \nonumber\]
In some cases, these power series representations can be used to find solutions to differential equations.
The examples and exercises in this section were chosen so that power series solutions exist. However, it is not always the case that power series solutions exist. Those of you interested in a more rigorous treatment of this topic should review the differential equations section of the LibreTexts.
PROBLEM-SOLVING STRATEGY: FINDING POWER SERIES SOLUTIONS TO DIFFERENTIAL EQUATIONS
1. Assume the differential equation has a solution of the form \[y(x)=\sum_{n=0}^{\infty}a_nx^n.\]
2. Differentiate the power series term by term to get \[y′(x)=\sum_{n=1}^{\infty}na_nx^{n−1}\] and \[y″(x)=\sum_{n=2}^{\infty}n(n−1)a_nx^{n−2}.\]
3. Substitute the power series expressions into the differential equation.
4. Re-index sums as necessary to combine terms and simplify the expression.
5. Equate coefficients of like powers of \(x\) to determine values for the coefficients \(a_n\) in the power series.
6. Substitute the coefficients back into the power series and write the solution.

Bessel functions
We close this section with a brief introduction to Bessel functions. Complete treatment of Bessel functions is well beyond the scope of this course, but we get a little taste of the topic here so we can see how series solutions to differential equations are used in real-world applications. The Bessel equation of order \(n\) is given by
\[x^2y″+xy′+(x^2−n^2)y=0.\]
This equation arises in many physical applications, particularly those involving cylindrical coordinates, such as the vibration of a circular drum head and transient heating or cooling of a cylinder. In the next example, we find a power series solution to the Bessel equation of order 0.
Example \(\PageIndex{2}\): Power Series Solution to the Bessel Equation
Find a power series solution to the Bessel equation of order 0 and graph the solution.
Solution
The Bessel equation of order 0 is given by
\[x^2y″+xy′+x^2y=0. \nonumber\]
We assume a solution of the form \(y=\sum_{n=0}^∞ a_nx^n\). Then \(y′(x)=\sum_{n=1}^∞ na_nx^{n−1}\) and \(y''(x)=\sum_{n=2}^∞n(n−1)a_nx^{n−2}\). Substituting this into the differential equation, we get
\[\begin{array}{ll} x^2 \sum_{n=2}^∞ n(n−1)a_nx^{n−2}+x \sum_{n=1}^∞ na_nx^{n−1}+x^2 \sum_{n=0}^∞ a_nx^n=0 & \text{Substitution.} \\ \\ \sum_{n=2}^∞ n(n−1)a_nx^n+\sum_{n=1}^∞ na_nx^n+ \sum_{n=0}^∞ a_nx^{n+2}=0 & \text{Bring external factors within sums.} \\ \\ \sum_{n=2}^∞ n(n−1)a_nx^n+\sum_{n=1}^∞ na_nx^n+\sum_{n=2}^∞ a_{n−2}x^n=0 & \text{Re-index third sum.} \\ \\ \sum_{n=2}^∞ n(n−1)a_nx^n+a_1x+\sum_{n=2}^∞ na_nx^n+\sum_{n=2}^∞ a_{n−2}x^n=0 & \text{Separate }n=1 \text{ term from second sum.} \\ \\ a_1x+\sum_{n=2}^∞ [n(n−1)a_n+na_n+a_{n−2}]x^n=0 & \text{Collect summation terms.} \\ \\ a_1x+\sum_{n=2}^∞ [(n^2−n)a_n+na_n+a_{n−2}]x^n=0 & \text{Multiply through in first term.} \\ \\ a_1x+\sum_{n=2}^∞ [n^2a_n+a_{n−2}]x^n=0. & \text{Simplify.} \end{array}\]
Then, \(a_1=0\), and for \(n≥2,\)
\[\begin{align*} n^2a_n+a_{n−2} & = 0 \\ a_n=−\dfrac{1}{n^2}a_{n−2}. \end{align*} \]
Because \(a_1=0\), all odd terms are zero. Then, for even values of \(n\), we have
\[\begin{align*}a_2 & =−\dfrac{1}{2^2}a_0 \\ a_4 & = −\dfrac{1}{4^2}a_2=\dfrac{1}{4^2⋅2^2} a_0 \\ a_6 & =−\dfrac{1}{6^2}a_4 =−\dfrac{1}{6^2⋅4^2⋅2^2}a_0. \end{align*}\]
In general,
\[a_{2k}=\dfrac{(−1)^k}{(2)^{2k}(k!)^2}a_0. \nonumber\]
Thus, we have
\[y(x)=a_0 \sum_{k=0}^∞ \dfrac{(−1)^k}{(2)^{2k}(k!)^2}x^{2k}. \nonumber\]
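As a quick numerical check (not part of the original text), the partial sums of this series with \(a_0=1\) reproduce the Bessel function \(J_0\); the tabulated value \(J_0(1)\approx 0.76519769\) makes a convenient reference point. A minimal sketch:

```python
import math

# Partial sums of the series solution y(x) = sum_k (-1)^k x^(2k) / (2^(2k) (k!)^2),
# taking a0 = 1; this is the Bessel function J0(x).  Note 2^(2k) = 4^k.
def j0_series(x, terms=20):
    return sum((-1) ** k * x ** (2 * k) / (4 ** k * math.factorial(k) ** 2)
               for k in range(terms))

# The coefficients here satisfy the recurrence a_n = -a_{n-2} / n^2 derived above.
```

With 20 terms, `j0_series(1.0)` agrees with the tabulated value of \(J_0(1)\) to roughly machine precision.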
Exercise \(\PageIndex{2}\)
Verify that the expression found in Example \(\PageIndex{2}\) is a solution to the Bessel equation of order 0.
Hint
Differentiate the power series term by term and substitute it into the differential equation.
Key Concepts

Power series representations of functions can sometimes be used to find solutions to differential equations. Differentiate the power series term by term and substitute into the differential equation to find relationships between the power series coefficients.

Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org. |
In some sense, this is answered in the book Nonabelian Algebraic Topology (EMS Tracts in Math Vol 15, 2011). However this is not done for
spaces but for filtered spaces $$X_*: X_0 \subseteq X_1 \subseteq \cdots \subseteq X_n \subseteq \cdots \subseteq X. $$From this we construct a somewhat nonabelian analogue of a chain complex called a crossed complex $\Pi X_*$, using the fundamental groupoid $\pi_1(X_1,X_0)$ and relative homotopy groups $$\pi_n(X_n,X_{n-1},x), x \in X_0$$ for $n \geqslant 2$, with operations, and boundary maps. So this is nonabelian in dimensions $1$ and $2$. But since the relative homotopy groups are defined as homotopy classes of maps $$(D^n,S^{n-1}, 0) \to (X_n,X_{n-1},x)$$ we are using "singular cells". (This construction on a filtered space was first considered with a different name by A L Blakers, Annals of Math., 1948, so it is quite classical.)
The problem is then how to compute $\Pi X_*$. Fortunately, there is a Seifert-van Kampen type theorem for $\Pi X_*$, allowing its determination or computation for filtered spaces built by gluing out of other filtered spaces. This includes the skeletal filtration of CW-complexes, giving a more powerful version of the cellular chains of the universal cover of a CW-complex, and from which that operator chain complex may be recovered.
The above theorem is not obtained directly but by using a cubical version, which we write $\rho X_*$, using homotopy classes rel vertices of maps $I^n_* \to X_*$, and which is shown to be a
strict cubical higher homotopy groupoid. The compositions in $\rho X_*$ are intuitively natural constructions, but the proofs that they are well defined are non trivial. Once these compositions are obtained, the proof of a Seifert-van Kampen theorem for $\rho X_*$ follows the pattern of some proofs of the $1$-dimensional theorem. One also has to show that $\rho X_*$ is "equivalent" to $\Pi X_*$.
Other things are easier for $\rho$ than for $\Pi$, particularly tensor products. For example a morphism $$\eta: \rho X_* \otimes \rho Y_* \to \rho (X_* \otimes Y_*)$$ is easily seen to be well defined by $[f] \otimes [g] \mapsto [f \otimes g]$, using $I^m_* \otimes I^n_* \cong I^{m+n}_*$, so the corresponding construction for $\Pi$ may be deduced.
There is a methodological argument for using filtered spaces, apart from the facts that they arise naturally and that the theory works. To develop algebraic topology on a space $X$, the space $X$ has to be given by some data. That data will have some kind of structure. It is not unreasonable to reflect that structure in terms of structure imposed on $X$. A filtration is one such example of structure.
Note that the results obtained by the theory do not require the setting up of simplicial homology, nor is simplicial approximation used at all, and most of the calculations obtained of relative homotopy groups in dimension $2$ are not obtainable by the use of abelian methods.
Some traditional results of homotopy theory, such as the simplicial Homotopy Addition Lemma, are also nicely expressed in the above terms.
The above results were all obtained with much collaboration from about 1965 by pursuing the question: if groupoids are useful in dimension $1$ homotopy theory, can they be useful in higher dimensional homotopy theory, using some form of multiple groupoids? |
In mechanics, we obtain the equations of motion (Euler-Lagrange equations) via Hamilton's principle by considering stationary points of the action $$ S = \int_{t_i}^{t_f} L ~ dt $$ where we have $L=T-V$, the difference between kinetic and potential energy. The usual derivation sets the first variation to zero and integrates by parts, to yield the requirement $$ \delta S = \int_{t_i}^{t_f} \left[ \frac{\partial L}{\partial q} - \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}} \right) \right] \delta q ~ dt + \frac{\partial L}{\partial \dot{q}}(t_f) ~ \delta q (t_f) - \frac{\partial L}{\partial \dot{q}}(t_i) ~ \delta q (t_i) = 0 $$ where $q$ denotes the generalised coordinates and $\dot{q}$ the corresponding velocities.
At this point, most textbook derivations eliminate the second and third terms by claiming $\delta q (t_i) = 0$ and $\delta q (t_f)=0$. The first of these is intuitive, because in practice we normally consider initial value problems in which the initial positions are known.
But, a priori, we don't typically know $q (t_f)$ for an arbitrary time $t_f$, so why do we set $\delta q (t_f)=0$?
For some other variational principles, it is intuitive to assume the coordinates at both endpoints are known and fixed, for example Fermat's principle to work out the path of a light ray between two points.
Is there an intuitive explanation of why the final coordinates are considered fixed when applying Hamilton's principle, or a derivation of the mechanical Euler-Lagrange equations without this assumption?
In considering the problem myself, I tried to obtain the same conditions in another way: if we instead take the final position $q (t_f)$ as free but with $t_f$ fixed, then, in addition to the Euler-Lagrange equation, we get the extra requirement for stationarity $$\frac{\partial L}{\partial \dot{q}}(t_f) = 0$$ but it seems that this does not hold in general. If we consider a harmonic oscillator, for example, this condition implies that the kinetic energy is minimised at the (arbitrary) fixed time $t_f$. I haven't yet considered the necessary conditions if we also consider $t_f$ as free, as I'm not totally sure of how to carry out the analysis without incorporating elements from optimal control theory (e.g. Pontryagin's principle or the HJB equation). |
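One concrete way to see the role of the fixed-endpoint assumption: with $\delta q(t_i)=\delta q(t_f)=0$, stationarity of a discretized action enforces exactly the Euler-Lagrange equation at the interior grid points. A numerical sketch for a unit-mass harmonic oscillator (my own discretization, purely illustrative): setting $\partial S/\partial q_i=0$ at each interior node of $S=\sum_i h\left[\tfrac12\left(\tfrac{q_{i+1}-q_i}{h}\right)^2-\tfrac12 q_i^2\right]$ gives the recurrence below, and stepping it forward reproduces the true trajectory $q(t)=\sin t$ — the path a fixed-endpoint variation would select.

```python
import math

# Discretized action for L = q'^2/2 - q^2/2 on a grid of step h.  Stationarity
# at each interior point (endpoints held fixed) yields the discrete
# Euler-Lagrange equation (q[i+1] - 2 q[i] + q[i-1]) / h^2 = -q[i].
T, N = 1.0, 1000
h = T / N
q = [0.0, math.sin(h)]            # q(0) fixed at 0; second value seeds the recurrence
for i in range(1, N):
    q.append(2 * q[i] - q[i - 1] - h * h * q[i])

# The stationary path matches the exact solution q(t) = sin(t).
max_err = max(abs(q[i] - math.sin(i * h)) for i in range(N + 1))
```

The discrete stationary path tracks $\sin t$ to within a few times $10^{-8}$ here; nothing in the recurrence uses the extra condition $\partial L/\partial\dot q(t_f)=0$, which is consistent with that condition failing for generic solutions.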
Along the lines of Glen O's answer, this answer attempts to explain the solvability of the problem, rather than provide the answer, which has already been given. Instead of using the meta-knowledge approach, which, as Glen stated, can get hard to follow, I use the range-base approach used in Rubio's answer, and specifically address some of the objections being raised.
The argument has been put forward that when Mark fails to answer on the first morning, he gives Rose no new information. This is actually true (sort of— see the last spoiler section of this answer). Rose could have predicted beforehand with certainty that Mark would fail to answer on the first day, so his failure to answer doesn't tell her anything she didn't know. However, that doesn't make the problem unsolvable. To see why, you must understand the following logical axiom: Additional information never invalidates a valid deduction. In other words, if I know that all of the statements $P_1,\dots P_n$ and $Q$ are true, and that $R$ is definitely true if $P_1, \dots P_n$ are true, I can conclude that $R$ is true. My additional knowledge that $Q$ is true, though unnecessary to deduce $R$, doesn't hamper my ability to deduce $R$ from $P_1,\dots P_n$. I will call this rule
LUI for "Law of Unnecessary Information." (It may have some other name, but I don't know it, so I'm giving it a new one.)
The line of reasoning goes as follows:
Let $R,\;M$ be the number of bars on Rose's and Mark's windows, respectively. Before the first question is asked, both Mark and Rose know the following:
$P_1$: Mark knows the value of $M$
$P_2$: Rose knows the value of $R$
$P_3$: $M+R=20 \;\vee \;M+R=18\;$ ($\vee$ means "or", in case you're unfamiliar with the notation)
$P_4$: $M\ge 2\;\wedge\;R \ge2\;$ ($\wedge$ means "and")
$P_5$: Both of them know every statement on this list, and every statement that can be deduced from statements they both know.
To help keep track of $P_5$ I will say that I will call a statement $P$ (with some subscript) only if it is known to both prisoners (or neither); thus, $P_5$ becomes "the other prisoner knows every $P$ that I know."
Additionally, Mark knows that $M=12$ and Rose knows that $R=8$. Call this knowledge $Q_M$ and $Q_R$, respectively.
Finally, as soon as one of them is asked the question for the $k^\text{th}$ time, they both know (and know that one another knows, etc.) $P_{\leftarrow k}$:
$P_{\leftarrow k}$: The other prisoner could not deduce the value of $M+R$ given the information they already had.
After Mark doesn't answer on the morning of day one, both prisoners can deduce from $P_1, P_3, P_4, P_5,$ and $P_{\leftarrow 2}$ that $M\le 16$ (call this $P_6$). It is true that both prisoners have more information than this about the value of $M$, but LUI tells us that that doesn't invalidate the deduction. It basically just means that Rose won't be surprised when she gets asked the question. She already knows she will be.
By the following morning, both prisoners can deduce from $P_1\dots P_6$ and $P_{\leftarrow 3}$ that $4\le R \le 16$ ($P_7$), and that evening, they can deduce from $P_1,\dots P_7$ and $P_{\leftarrow 4}$ that $4 \le M \le 14$ ($P_8$). Again, both prisoners know all of this already. (But the conclusions are still valid by LUI.)
On the next day, in a similar manner, they can deduce in the morning that $6 \le R \le 14$ ($P_9$), and in the evening that $6 \le M \le 12$ ($P_{10}$). Here's where things get interesting. Mark can deduce from $P_3$ and $Q_M$ that $R$ is either $6$ or $8$, but $R=6\wedge P_{10} \wedge P_3\implies M+R=18$ and $R=6\wedge P_{10} \wedge P_3\wedge\left[R=6\wedge P_{10} \wedge P_3\implies M+R=18\right]\implies \neg P_{\leftarrow 7}$. When he gets asked the question again on the following morning, he learns that $P_{\leftarrow 7}$ is true, and can thus deduce that $R \neq 6$ and therefore $R=8$ and $M+R=20$. This is actually the first time in the sequence that a $P_{\leftarrow k}$ provides any more information about the value of $M+R$ than the prisoner already has, but the sequence of irrelevant questions is necessary to establish the deep metaknowledge Glen talks about. In this formulation, all this metaknowledge is encapsulated in $P_5$. When a prisoner is asked a question, $P_5$ says that they can deduce not only $P_{\leftarrow k}$ but also that both of them know $P_{\leftarrow k}$ and, by repeatedly applying $P_5$, that both of them know that both of them know $P_{\leftarrow k}$ and so on. For any $P_{\leftarrow k}$, there is some level of "we both know that we both know" that can't be deduced from $P_1\dots P_5$ and $Q_M$ or $Q_R$ alone. This is the "new information" being "learned" at each stage. Really nothing new is learned until Rose fails to answer on the $3^\text{rd}$ evening, but the sequence of non-answers $P_{\leftarrow k}$ is necessary to provide the deductive path to $P_{\leftarrow 7}$.
In fact, viewing it another way, the fact that not answering provides "no new information" (and in fact doesn't provide any new
direct information about the number of bars) is exactly why the puzzle is solvable, because
It says that the previous answer provided no new information. Because they both know that the number of bars is either $18$ or $20$ (only two possibilities), any new information about the number of bars (eliminating a possibility) will allow them to give the answer; thus, not answering sends the message "I have not yet received any new information," which, eventually,
is new information for the other prisoner.
The "conversation" the prisoners have amounts to this:
Mark: I don't know how many bars there are.
Rose: I already knew that (that you wouldn't know).
Mark: I already knew that (that you'd know I wouldn't know).
Rose: I already knew THAT (etc.)
Mark: I already knew THAT.
Rose: I already knew $\mathbf {THAT}$.
Mark (To the Evil Logician): There are $20$ bars.
But how, you may ask, can a series of messages that provide their recipient with no new information lead to one that does? Simple!
The non-answers provide no new information to the recipient, but they do provide information to the sender. If I tell you that I'm secretly a ninja, you might already know that, but even if you do, knowledge is gained, because by telling you, I give
myself the knowledge that you know I'm a ninja, and that you know I know you know I'm a ninja, etc. Thus, each message sent, even if the recipient already knows it, provides the sender with information. After several such questions, this is enough information that a message recipient can draw conclusions based on the sender's inability to draw any conclusions from the information they know the sender has.
Ok, fine, you might say, but what, exactly, is learned when Mark fails to answer on the first morning, and how can you prove this was not already known? Great question, thanks for asking. You see...
At this point, we have to resort to metaknowledge (I know she knows I know...) even though it can get confusing, However, I'll break it down in such a way as to hopefully satisfy anyone who still objects that there is (meta)knowledge available after Mark fails to answer the first question was not available before he did so. Specifically,
After failing to answer the first question, Mark gains the information that Rose knows that Mark knows that Rose knows that Mark knows that Rose knows that Mark's window has less than $18$ bars. Now, that's a mouthful, so let's break it down into parts:
$R_0$: Mark's window doesn't have $18$ bars
$M_1$: Rose knows $R_0$
$R_2$: Mark knows $M_1$
$M_3$: Rose knows $R_2$
$R_4$: Mark knows $M_3$
$M_5$: Rose knows $R_4$
My claim is that A) Before he fails to answer on the first morning, Mark does not know $M_5$, and B) Afterwards, he does. Let's examine A) first:
To show that Mark doesn't know $M_5$ beforehand, we work backwards from $R_0$. In order for Rose to know that Mark's window doesn't have $18$ bars, her window would have to have more than $2$ bars. Since the rules (and numbers of bars) imply that they both have an even number of bars, in order for Mark to know $M_1$, he would have to know that Rose's window has at least $4$ bars. The only way for him to know that is if his window has less than $16$ bars. Thus, for Rose to know $R_2$, she must know that Mark has no more than $14$ bars, which requires that she have at least $6$ bars. For Mark to know $M_3$, then, he must have no more than $12$ bars, so for Rose to know $R_4$ she must have at least $8$ bars, and for Mark to know $M_5$ he must have no more than $10$ bars. But he does have more than $10$ bars, so he doesn't know $M_5$ beforehand.
To see why Mark must know $M_5$ after he fails to answer the question, we must realize that they both know the rules of the game and one of the rules of the game is that they both know the rules of the game. This creates an infinite loop of meta-knowledge, meaning that they both know that they both know that they both know... the rules, no matter how many times you repeat "they both know". This infinite-depth meta-knowledge extends to anything that can be deduced from the rules. If Mark's window had $18$ bars, he could deduce from the rules that Rose must have $2$, and the tower must have $20$ in total. Because he doesn't answer, Rose will be asked, and when she is, she will know that he couldn't deduce the answer, and therefore has less than $18$ bars. Because this is all deduced directly from the rules, rather than the private knowledge that either prisoner has, it inherits the infinite meta-knowledge of the rules, and Mark knows $M_5$.
So, Mark learns $M_5$. Does Rose learn anything? It's tempting to think that she doesn't, because she can predict in advance that Mark won't answer and therefore, one might think, she can draw in advance any conclusions that could be drawn from his not answering. However, as was shown above, by not answering, Mark learns $M_5$. Not answering changes the state of Mark's knowledge. This means that Rose's ability to predict Mark's behavior doesn't prevent her from gaining new information. She can predict in advance both what he will do (not answer) and what he will learn when he does it ($M_5$), but since he doesn't learn $M_5$ until he actually declines to answer, his failure to answer provides her with the information that he knows $M_5$. Since he didn't know $M_5$ beforehand, the knowledge that he does is by definition new information for Rose. Rose already knew that she now would know this, but until Mark doesn't answer, she doesn't actually know it (because it isn't true). By following this prediction logic out, it's possible to show that Rose knows (at the start) that Mark will be unable to answer until the $4^\text{th}$ morning, but not whether or not he'll be able to answer then. Mark, meanwhile, knows that Rose will be unable to answer until the $3^\text{rd}$ evening, but not whether or not she'll be able to answer then. As soon as one of the prisoners observes an event that they were unable to predict at the beginning, they can deduce from it something they didn't know about the state of the other's knowledge. Since the only hidden information is how many bars are in the other prisoners window, and they know that it must be one of two values, learning new information about that allows them to eliminate one of the values and find the correct result. |
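The round-by-round tightening of the public bounds ($P_6$ through $P_{10}$, and the final deduction from $P_{\leftarrow 7}$) can also be checked mechanically. A sketch, with variable names of my own choosing and the answer's values $M=12$, $R=8$: each set holds the bar counts still publicly possible for a prisoner, and a non-answer publicly eliminates every count that would have allowed an answer.

```python
# Totals are 18 or 20; each window has an even number of bars, at least 2.
SUMS = (18, 20)
M_true, R_true = 12, 8
poss = {"M": set(range(2, 19, 2)), "R": set(range(2, 19, 2))}

def consistent_sums(v, other):
    # Totals a prisoner holding v bars cannot rule out, given the public
    # possibilities for the other window.
    return [s for s in SUMS if s - v in other]

asked, answer = 0, None
turn = ["M", "R"]                      # Mark is asked first, then they alternate
while answer is None:
    asked += 1
    me = turn[(asked - 1) % 2]
    other = "R" if me == "M" else "M"
    v = M_true if me == "M" else R_true
    sums = consistent_sums(v, poss[other])
    if len(sums) == 1:
        answer = sums[0]
    else:
        # Publicly, any bar count that would have allowed an answer is eliminated.
        poss[me] = {w for w in poss[me]
                    if len(consistent_sums(w, poss[other])) > 1}
```

Running this, the bounds shrink exactly as in $P_6$–$P_{10}$, and Mark answers $20$ on the seventh question — the fourth morning.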
The footnote on page E. II.6 in Bourbaki’s 1970 edition of “Theorie des ensembles” reads
If this is completely obvious to you, stop reading now and start getting a life. For the rest of us, it took me quite some time before I was able to parse this formula, and when I finally did, it only added to my initial confusion.
Though the Bourbakis had a very preliminary version of their set-theory already out in 1939 (Fascicule des Resultats), the version as we know it now was published, chapter-wise, in the fifties: Chapters I and II in 1954, Chapter III in 1956 and finally Chapter IV in 1957.
In the first chapter they develop their version of logic, using ‘assemblages’ (assemblies) which are words of signs and letters, the signs being $\tau, \square, \vee, \neg, =, \in$ and $\supset$.
Of these, we have the familiar signs $\vee$ (or), $\neg$ (not), $=$ (equal to) and $\in$ (element of) and, three more exotic ones: $\tau$ (their symbol for the Hilbert operator $\varepsilon$), $\square$ a sort of wildcard variable bound by an occurrence of $\tau$ (the ‘links’ in the above scan) and $\supset$ for an ordered couple.
The connectives are written in front of the symbols they connect rather than between them, avoiding brackets, so for example $(x \in y) \vee (x=x)$ becomes $\vee \in x y = x x$.
If $R$ is some assembly and $x$ a letter occurring in $R$, then the intended meaning of the Hilbert operator $\tau_x(R)$ is 'some $x$ for which $R$ is true, if such a thing exists'. $\tau_x(R)$ is again an assembly constructed in three steps: (a) form the assembly $\tau R$, (b) link the starting $\tau$ to all occurrences of $x$ in $R$ and (c) replace those occurrences of $x$ by an occurrence of $\square$.
For MathJax reasons we will not try to draw links but rather give a linked $\tau$ and $\square$ the same subscript. So, for example, the claimed assembly for $\emptyset$ above reads
$\tau_y \neg \neg \neg \in \tau_x \neg \neg \in \square_x \square_y \square_y$
If $A$ and $B$ are assemblies and $x$ a letter occurring in $B$ then we denote by $(A | x)B$ the assembly obtained by replacing each occurrence of $x$ in $B$ by the assembly $A$. The upshot of this is that we can now write quantifiers as assemblies:
$(\exists x) R$ is the assembly $(\tau_x(R) | x)R$ and as $(\forall x) R$ is $\neg (\exists x) \neg R$ it becomes $\neg (\tau_x(\neg R) | x) \neg R$
Okay, let’s try to convert Bourbaki’s definition of the emptyset $\emptyset$ as ‘something that contains no element’, or formally $\tau_y((\forall x)(x \notin y))$, into an assembly.
– by definition of $\forall$ it becomes $\tau_y(\neg (\exists x)(\neg (x \notin y)))$
– write $\neg ( x \notin y)$ as the assembly $R= \neg \neg \in x \square_y$
– then by definition of $\exists$ we have to assemble $\tau_y \neg (\tau_x(R) | x) R$
– by construction $\tau_x(R) = \tau_x \neg \neg \in \square_x \square_y$
– using the description of $(A | x)B$ we finally indeed obtain $\tau_y \neg \neg \neg \in \tau_x \neg \neg \in \square_x \square_y \square_y$
But, can someone please explain what's wrong with $\tau_y \neg \in \tau_x \in \square_x \square_y \square_y$, which is the assembly corresponding to $\tau_y(\neg (\exists x) (x \in y))$, which could equally well have been taken as defining the empty set and has a shorter assembly (length 8 with 3 links, compared to the one given of length 12 with 3 links)?
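Both assemblies are easy to build mechanically. Here is a small sketch in Python (the token names `tau_x`, `box_x`, etc. are my own stand-ins for the linked $\tau$s and $\square$s):

```python
# tau(letter, R): prepend a tau bound to `letter` and replace each occurrence
# of that letter in R by a box linked back to it.
def tau(letter, R):
    return [f"tau_{letter}"] + [f"box_{letter}" if t == letter else t for t in R]

# subst(A, x, B) is Bourbaki's (A | x)B: replace each occurrence of x in B by A.
def subst(A, x, B):
    out = []
    for t in B:
        out.extend(A if t == x else [t])
    return out

def exists(letter, R):          # (exists x)R is defined as (tau_x(R) | x)R
    return subst(tau(letter, R), letter, R)

# Bourbaki's definition: tau_y(not (exists x)(not (x notin y)))
R = ["not", "not", "in", "x", "y"]              # the assembly for not (x notin y)
emptyset = tau("y", ["not"] + exists("x", R))   # 12 signs, 3 links

# The shorter candidate: tau_y(not (exists x)(x in y))
S = ["in", "x", "y"]
short = tau("y", ["not"] + exists("x", S))      # 8 signs, 3 links
```

Running it reproduces exactly the length-12 assembly above and the length-8 alternative.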
Hair-splitting as this is, it will have dramatic implications when we will try to assemble Bourbaki’s definition of “1” another time. |
In the end we shall want to write a solution to an equation as a series of Bessel functions. In order to do that we shall need to understand the orthogonality of Bessel functions – just as sines and cosines were orthogonal. This is most easily done by developing a mathematical tool called Sturm-Liouville theory. It starts from an equation in the so-called self-adjoint form
\[[r(x) y'(x)]' + [p(x)+\lambda s(x)] y(x) = 0 \label{eq:selfadj}\]
where \(\lambda\) is a number, and \(r(x)\) and \(s(x)\) are greater than 0 on \([a,b]\). We apply the boundary conditions
\[\begin{aligned} a_1 y(a)+ a_2 y'(a)&=0,\nonumber\\ b_1 y(b)+ b_2 y'(b)&=0,\end{aligned}\]
with \(a_1\) and \(a_2\) not both zero, and \(b_1\) and \(b_2\) likewise not both zero.
Theorem \(\PageIndex{1}\)
If there is a nontrivial solution to (\ref{eq:selfadj}) satisfying the boundary conditions, then \(\lambda\) is real.
Proof
Assume that \(\lambda\) is a complex number (\(\lambda = \alpha + i \beta\)) with solution \(\Phi\). By complex conjugation we find that
\[\begin{aligned} [r(x) \Phi'(x)]' + [p(x)+\lambda s(x)] \Phi(x) &= 0\nonumber\\ {}[r(x) (\Phi^*)'(x)]' + [p(x)+\lambda^* s(x)] (\Phi^*)(x) &= 0\end{aligned}\]
where \(*\) denotes complex conjugation. Multiply the first equation by \(\Phi^*(x)\) and the second by \(\Phi(x)\), and subtract the two equations: \[(\lambda^*-\lambda)s(x) \Phi^*(x)\Phi(x)=\Phi(x)[r(x) (\Phi^*)'(x)]'-\Phi^*(x)[r(x) \Phi'(x)]'.\] Now integrate over \(x\) from \(a\) to \(b\) and find
\[(\lambda^*-\lambda)\int_a^bs(x) \Phi^*(x)\Phi(x)\,dx = \int_a^b\Phi(x)[r(x) (\Phi^*)'(x)]'-\Phi^*(x)[r(x) \Phi'(x)]'\,dx\]
The second part can be integrated by parts, and we find
\[\begin{aligned} (\lambda^*-\lambda)\int_a^b s(x) \Phi^*(x)\Phi(x)\,dx &= \left[\Phi(x) r(x) (\Phi^*)'(x)-\Phi^*(x)r(x) \Phi'(x)\right|^b_a \nonumber\\ &= r(b)\left[\Phi(b) (\Phi^*)'(b)-\Phi^*(b)\Phi'(b)\right] -r(a)\left[\Phi(a) (\Phi^*)'(a)-\Phi^*(a)\Phi'(a)\right] \nonumber\\ &=0,\end{aligned}\]
where the last step can be done using the boundary conditions. Since \(s(x)>0\) and \(\Phi^*(x)\Phi(x)\geq 0\), with \(\Phi\) not identically zero, we conclude that \(\int_a^bs(x) \Phi^*(x)\Phi(x)\,dx > 0\), which can now be divided out of the equation to lead to \(\lambda=\lambda^*\).
Theorem \(\PageIndex{2}\)
Let \(\Phi_n\) and \(\Phi_m\) be two solutions for different values of \(\lambda\), \(\lambda_n\neq \lambda_m\), then \[\int_a^b s(x) \Phi_n(x) \Phi_m(x)\,dx = 0.\]
Proof
The proof is to a large extent identical to the one above: multiply the equation for \(\Phi_n(x)\) by \(\Phi_m(x)\) and vice-versa. Subtract and find \[(\lambda_n-\lambda_m)\int_a^b s(x) \Phi_m(x)\Phi_n(x)\,dx = 0\] which leads us to conclude that \[\int_a^b s(x) \Phi_n(x) \Phi_m(x)\,dx = 0.\]
Theorem \(\PageIndex{3}\)
Under the conditions set out above
There exists an infinite set of real eigenvalues \(\lambda_0,\ldots,\lambda_n, \ldots\) with \(\lim_{n\rightarrow \infty} \lambda_n = \infty\). If \(\Phi_n\) is the eigenfunction corresponding to \(\lambda_n\), it has exactly \(n\) zeroes in \([a,b]\).

Proof

No proof shall be given.
Clearly the Bessel equation is of self-adjoint form: rewrite \[x^2 y'' + xy' + (x^2-\nu^2) y = 0\] as (divide by \(x\)) \[[x y']' + (x-\frac{\nu^2}{x}) y = 0\] We cannot identify \(\nu\) with \(\lambda\), and we do not have positive weight functions. It can be proven from properties of the equation that the Bessel functions have an infinite number of zeroes on the interval \([0,\infty)\). A small list of these:
\[\begin{array}{lclllll} J_0 &:& 2.40&5.52 &8.65&11.79&\ldots\\ J_{1/2} &:& \pi & 2 \pi & 3\pi & 4\pi & \ldots\\ J_{8} &:& 12.23 & 16.04 & 19.55 & 22.94 & \ldots \end{array}\] |
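As a numerical aside (my own check, using only the power series of \(J_0\)), the first of these zeroes is easy to locate by bisection:

```python
import math

# J0 via its power series; accurate to roughly machine precision for small x.
def j0(x, terms=40):
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

# J0(2) > 0 > J0(3), so bisect on [2, 3] for the first zero (~2.405).
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if j0(lo) * j0(mid) <= 0:
        hi = mid
    else:
        lo = mid
first_zero = (lo + hi) / 2
```

This converges to \(2.404826\ldots\), matching the first entry in the table for \(J_0\).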
PREFACE: This question is about a proper algorithm and its implementation. I will explain the problem in as much detail as possible and will give my current algorithm as well as two more possible solutions which are far better suited for this task. Unfortunately I have no idea how to implement either of the latter ones. Therefore any help in that direction is appreciated.

INPUT:

1) The input is $k$ lists $L_i$. Each of these $k$ lists contains $a$ sublists. E.g.: $k=2$, $a=3$ $$L_1=\{l_{11},l_{12},l_{13}\} \\ L_2=\{l_{21},l_{22},l_{23}\}$$

1.a) Every sublist $l_{ij}$ contains $m_{ij}$ sublists $b_{ij,k=1\dots,m_{i,j}}$ of length $j$.

1.b) The first element in every sub-sub-list $b_{ijk}$ of list $L_i$ is equal. ALL other elements of ALL lists are unequal. Example: $k=2$, $a=3$ (explicit) $$ L_1=\{\{\{1\}\},\{\{1,2\},\{1,3\}\},\{\{1,4,5\},\{1,6,7\},\{1,8,9\}\}\} \\ \ \ \ L_2=\{\{\{10\}\},\{\{10,11\},\{10,12\},\{10,13\}\},\{\{10,14,15\}\}\}$$

TASK: Find all sets of length $k$ containing elements of the lists $L_i$ which have the following properties:

I.) No doubled elements: no set contains more than one element from any single list $L_i$. E.g.: Possible sets created from $L_1$ and $L_2$ given above: $$ \{\{1\},\{10,12\}\}: \ \textit{ok} \\ \ \ \quad \{\{1\},\{1,6,7\}\}: \ \textit{not ok}$$

II.) The total number of elements in the union of each of these sets is $\leq a+k$. E.g.: $k=2$, $a=3$ $$ \ \ \quad \{\{1\},\{10,12\}\}\to ||\{1\}\cup \{10,12\}||<5: \ \textit{ok} \\ \{\{1,6,7\},\{10,14,15\}\}\to ||\{1,6,7\}\cup \{10,14,15\}||>5: \ \textit{not ok} $$

Current algorithm: Currently I create all possible subsets of length $k$ from the lists $L_i$ and apply the conditions afterwards. That is ``okayish'' for small $k$ and $a$ but problematic for larger values, due to the (unnecessary) combinatorics.
(* minimal example *)
k = 2; a = 3;
L1 = {{{1}}, {{1, 2}, {1, 3}}, {{1, 4, 5}, {1, 6, 7}, {1, 8, 9}}};
L2 = {{{10}}, {{10, 11}, {10, 12}}, {{10, 14, 15}}};

(* create all subsets and apply conditions afterwards <-> combinatorics :( *)
Lall = Flatten[Union[L1, L2], 1];
Lall = Subsets[Lall, {k}];
linter = Length[Lall] (* intermediate length blows up due to combinatorics *)

(* No doubled elements -> delete them *)
Do[If[Length[Union @@ Lall[[i]]] < Sum[Length[Lall[[i, j]]], {j, 1, k}], Lall[[i]] = {};], {i, 1, Length[Lall]}]
Lall = Lall /. {} -> Nothing;
linter2 = Length[Lall] (* second intermediate length: way shorter (corresponds to ``Problem 1.a)'' *)

(* total number of elements in the union of every subset has to be <= a+k *)
Do[If[Length[Flatten[Lall[[i]]]] > a + k, Lall[[i]] = {};], {i, 1, Length[Lall]}]
Lall = Lall /. {} -> Nothing;
lresult = Length[Lall]
PROBLEM: The unnecessary combinatorial overhead.

1.a) Here any help is appreciated. I would save a lot of the overhead if I took a similar approach but restricted the creation of the subsets (see linter to linter2 in the code) by creating them according to:
subsets = {};
Do[subsets = Append[subsets, Subsets[Join[L1[[l]], Flatten[L2, 1]], {k}]], {l, 1, Length[L1]}]
subsets = Flatten[subsets, 1]
and applying condition
II.) afterwards. Unfortunately it is not clear to me how that approach can be implemented for an arbitrary number of lists $k$.
1.b) Elegant but probably overly hard to implement.
The complete problem could be solved without overhead by recognizing that the proper way of building the subsets is given by all permutations of the integer partitions of the integers $j\leq a+k$ into exactly $k$ parts. Then one gets for the example:
s = {};
Do[s = Append[s, IntegerPartitions[l, {k}]], {l, k, a + k}]
s = Flatten[Permutations /@ Flatten[s, 1], 1]
Which yields:
{{1, 1}, {2, 1}, {1, 2}, {3, 1}, {1, 3}, {2, 2}, {4, 1}, {1, 4}, {3, 2}, {2, 3}}
and therefore exactly the indices $i,j$ of $l_{1,i}$ and $l_{2,j}$ from which all possible subsets fulfill condition
I.) and II.). But I guess implementing that is far from trivial and I have no idea how to attempt it for general $k$.
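For what it's worth, here is a sketch of the length-based enumeration in Python rather than Mathematica (names mine, and it assumes every $L_i$ provides sublists of each length $1,\dots,a$). Since elements of different lists never coincide and a valid set picks at most one sublist per $L_i$, a cross-list choice with lengths $(j_1,\dots,j_k)$ has a union of exactly $j_1+\dots+j_k$ elements, so condition II.) reduces to $\sum_i j_i\leq a+k$ and condition I.) holds automatically:

```python
from itertools import product

def valid_sets(lists, a):
    # lists[i][j-1] holds the length-j sublists of list i
    k = len(lists)
    for lengths in product(range(1, a + 1), repeat=k):
        if sum(lengths) <= a + k:
            # One sublist from each list, with the chosen lengths.
            yield from product(*(L[j - 1] for L, j in zip(lists, lengths)))

# The minimal example from the question, as tuples:
L1 = [[(1,)], [(1, 2), (1, 3)], [(1, 4, 5), (1, 6, 7), (1, 8, 9)]]
L2 = [[(10,)], [(10, 11), (10, 12)], [(10, 14, 15)]]
result = list(valid_sets([L1, L2], a=3))
```

On this minimal example the enumeration yields 21 sets, the same count the brute-force filtering produces, but without ever generating a candidate that violates either condition.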
To everybody who made it until here: Many thanks, Armin! |
In many acoustic power-transfer applications, ultrasonic sound is used for transmission of power. However, the efficiency of transmission over distance reduces as attenuation is high. Why does sound at ultrasonic frequency have high attenuation?
When sound travels through a medium, its intensity diminishes with distance. In idealized materials, sound pressure (signal amplitude) is only reduced by the spreading of the wave. Natural materials, however, all produce an effect which further weakens the sound. This further weakening results from scattering and absorption. Scattering is the reflection of the sound in directions other than its original direction of propagation. Absorption is the conversion of the sound energy to other forms of energy. The combined effect of scattering and absorption is called attenuation. Ultrasonic attenuation is the decay rate of the wave as it propagates through material.
Absorption is a function of frequency--more cycles per second means the material is being 'worked' more--expending more energy into the material. Scattering is also a function of frequency--higher frequency pulses are more likely to be deflected off in another direction ('specular reflection'--google it).
As user45664 notes, attenuation is the summation of all kinds of losses, including absorption, scattering, or reflection.
Ultrasonic absorption coefficients have been determined over a wide frequency range in all cases and found to be largely independent of the degree of cellular and subcellular destruction, suggesting that
more than two-thirds of liver tissue absorption originates on a macro-molecular level, including submicroscopic particles such as ribosomes and aggregations of macromolecules.
Let's consider this intuitively for a moment. In human tissues there's (typically) a linear increase in the attenuation of ultrasound with frequency from $0.3$ to $10$ MHz. If we take the speed of sound in human soft tissue to be $1540 \frac{m}{s}$, we can estimate the wavelength of the ultrasound:
$$\begin{align} 3 \cdot10^5 \ \text{Hz} ≤ \ & f \ ≤ 1 \cdot 10^7 \ \text{Hz} \\[10pt] \frac{1540 \frac{m}{s}}{1 \cdot10^7 \ \text{Hz}} ≤ \ & \lambda \ ≤ \frac{1540 \frac{m}{s}}{3 \cdot10^5 \ \text{Hz}} \\[10pt] 1.5 \cdot 10^{-4} \ m ≤ \ & \lambda \ ≤ \ 5.1 \cdot 10^{-3} \ m \\ \end{align} $$
Which - at the smallest - is still an order of magnitude larger than most eukaryotic cells ($2 \cdot 10^{-5} \ m$), but small enough where it should be apparent that the size of the wave is approximately the same as structures within the tissue. This will tend to cause resonance of those structures, and a loss of acoustic energy proportional to the number of structures along the path.
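A one-line check of that arithmetic (taking 1540 m/s for soft tissue, as above):

```python
# Wavelength = speed / frequency, across the clinical ultrasound band.
c = 1540.0                 # m/s, speed of sound in human soft tissue
lam_low_f = c / 0.3e6      # ~5.1e-3 m at 0.3 MHz
lam_high_f = c / 10e6      # ~1.5e-4 m at 10 MHz
cell = 2e-5                # ~typical eukaryotic cell size, m
```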
Waves with lower frequencies will not have as pronounced of an effect.
In addition, frequency matters for scattering. These UNC lecture notes on the Acoustics of the Body are helpful in visualizing why scattering tends to happen more for high frequencies than for low:
For the so-called nonspecular boundaries between different tissues (of which the kidney's many nephrons are a great example), the size of the bumps and crenelations of the tissue boundary scatter wavelengths smaller than them. This also causes increased attenuation for smaller wavelengths. |
Let $Y_i=\alpha_0+\beta_0 X_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma_0^2)$ and $X_i \sim N(\mu_x,\tau_0^2)$ are independent.
The data $(X_i, Y_i)$ are generated from $Y_i=\alpha_0+\beta_0 X_i + \epsilon_i$.
I have found the maximum likelihood estimator for each parameter by using $$L_n({X_i, Y_i};\, \alpha, \beta, \mu_x, \sigma^2, \tau^2) = \prod_{i=1}^n f(X_i, Y_i)=\prod_{i=1}^n f_x(X_i)f_{.|X_i}(Y_i),$$ differentiating it with respect to the parameter, setting it equal to zero, and solving for the parameter.
For example, I get that $\hat{\alpha}_{MLE}=\bar{Y}_n-\hat{\beta}_{MLE} \bar{X}_n$.
But how do I show that $\hat{\alpha}_{MLE}=\bar{Y}_n-\hat{\beta}_{MLE} \bar{X}_n$ converges to its true value?
I do not know how to find the true value of $\hat{\alpha}_{MLE}$ or how to begin showing convergence.
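A quick simulation sketch (with arbitrarily chosen "true" parameter values) illustrates the consistency being asked about: as $n$ grows, $\hat{\alpha}_{MLE}$ approaches $\alpha_0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary "true" parameters, for illustration only.
alpha0, beta0, mu_x, sigma0, tau0 = 2.0, -1.5, 3.0, 1.0, 2.0

for n in (100, 10_000, 1_000_000):
    x = rng.normal(mu_x, tau0, n)
    y = alpha0 + beta0 * x + rng.normal(0.0, sigma0, n)
    # The MLE of the slope equals the least-squares slope; the
    # intercept estimate then follows from the stated identity.
    beta_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    alpha_hat = y.mean() - beta_hat * x.mean()
    print(n, alpha_hat)   # drifts toward alpha0 = 2.0 as n grows
```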
Thank you. |
I want to know if there is a way to compute, for example, $\ln(2)$ without using a calculator?
Thanks
And let's not forget this method (reading the value off the Ln scale of a slide rule).
$$\log 2 = 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots$$ In the general case $$\log \frac{1+x}{1-x} = 2(x+\frac{x^3}{3}+\frac{x^5}{5}+\frac{x^7}{7}+\ldots)$$
How precise do you need the calculation to be?
As a quick and dirty approximation, we know that $2^3 = 8$ and $e^2 \approx 2.7^2 = 7.29$, and so $\ln(2)$ should be just over $\frac{2}{3} \approx 0.67$. Continuing to match powers, we find $2^{10} = 1024$, and $e^7 \approx (2.7)^7 = (3 - 0.3)^7 = 3^7 -7(3)^6(.3) + 21(3)^5(.3)^2 - 35(3)^4(.3)^3 \dots$ $= 3^7 (1 - .7 + .21 - .035 \dots)$ $\approx 2187(.475) = 1038.825$. Therefore, $e^7 \approx 2^{10}$ and so $\ln(2)$ should be just under $0.7$.
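The two power-matching comparisons above amount to squeezing $\ln(2)$ between $2/3$ and $7/10$; a few-line sketch confirms them:

```python
import math

assert math.e**2 < 2**3    # e^2 ~ 7.39 < 8,      so ln(2) > 2/3
assert 2**10 < math.e**7   # 1024 < e^7 ~ 1096.6, so ln(2) < 7/10

print(2 / 3, math.log(2), 7 / 10)   # 0.666... < 0.693... < 0.7
```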
The operations that are relatively easy to compute by hand are addition and multiplication, and their inverses, subtraction and division. With these operations we can compute all rational functions, e.g. $\frac{2x^2-1}{x^3+x-1}$.
We know that $$\ln(x)=\sum_{k=1}^{\infty}(-1)^{k+1}\frac{(x-1)^k}{k}$$
for values of $x$ close to $1$. So, if we take partial sums of this series, we get approximations to the logarithm that only require multiplications, additions, and subtractions.
Notice that we only need to be able to compute values of the logarithm for numbers close to $1$, since the identity $\ln(e^kx)=k+\ln(x)$ allows us to reduce to this case.
$$\log2=\frac{2}{3}\left(1+\frac{1}{27}+\frac{1}{405}+\frac{1}{5103}+\frac{1}{59049}+\frac{1}{649539}+...\right)$$
The denominator is $(2k+1)9^k$.
Gourdon and Sebah discuss the efficiency of this formula in http://plouffe.fr/simon/articles/log2.pdf (page 11)
A "little more effort" is required to compute $\log(2)$ using this formula than to compute $\pi$ using Machin's relation.
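A sketch of the partial sums shows how quickly this series converges; each term contributes roughly one more decimal digit:

```python
import math

# log(2) = (2/3) * sum over k of 1 / ((2k+1) * 9^k)
total = 0.0
for k in range(12):
    total += 1.0 / ((2 * k + 1) * 9**k)
approx = (2.0 / 3.0) * total

print(approx)                      # agrees with log(2) = 0.693147...
print(abs(approx - math.log(2)))   # error already below 1e-11
```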
We have the CORDIC method, which can be quite effective for by-hand computation as it requires additions/subtractions only (and one multiply by a small integer).
There are two limitations though:
it is better performed in base $2$, so a preliminary change of base is needed for the input argument (you can do it in base $10$ as well but it takes about $3$ times more operations);
you need a small table of constants.
It is based on the identity $\log(ab)=\log(a)+\log(b)$.
You first normalize the binary number as $x=z\cdot2^e$, with $1\le z<10_b$. You have $\log(x)=\log(z)+e\cdot\log(2)$.
Then $$\log(z)=\log(0.11_bz)-\log(0.11_b)\\ \log(z)=\log(0.111_bz)-\log(0.111_b)\\ \log(z)=\log(0.1111_bz)-\log(0.1111_b)\\ \cdots$$
You will use these equalities as follows. Initialize an accumulator $l\leftarrow0$ and
if $0.11_bz>1$ (i.e. $z>1.01010101_b\cdots$) let $z\leftarrow 0.11_bz$, $l\leftarrow l-\log(0.11_b)$;
if $0.111_bz>1$ (i.e. $z>1.00100100_b\cdots$) let $z\leftarrow 0.111_bz$, $l\leftarrow l-\log(0.111_b)$;
if $0.1111_bz>1$ (i.e. $z>1.00010001_b\cdots$) let $z\leftarrow 0.1111_bz$, $l\leftarrow l-\log(0.1111_b)$;
$\cdots$
The multiplies are actually performed as shifts and subtractions (f.i. $0.111_bz=z-0.001_bz$).
This way, we progressively reduce $z$ to bring it closer and closer to $1$, while $l$ gets closer and closer to the logarithm of the initial $z$. On every step we gain one bit of accuracy.
The table of constants ($\log(10_b)=-\log(0.1_b),-\log(0.11_b),-\log(0.111_b),\cdots$ up to the desired number of significant bits) is computed in the decimal base, so that the answer is readily available as such.
$$\begin{align}z&\to-\log(z)\\ 0.1000000000000000000000000000000_b&\to 0.6931471806_d\\ 0.1100000000000000000000000000000_b&\to 0.2876820725_d\\ 0.1110000000000000000000000000000_b&\to 0.1335313926_d\\ 0.1111000000000000000000000000000_b&\to 0.0645385211_d\\ 0.1111100000000000000000000000000_b&\to 0.0317486983_d\\ 0.1111110000000000000000000000000_b&\to 0.0157483570_d\\ 0.1111111000000000000000000000000_b&\to 0.0078431775_d\\ 0.1111111100000000000000000000000_b&\to 0.0039138993_d\\ 0.1111111110000000000000000000000_b&\to 0.0019550348_d\\ 0.1111111111000000000000000000000_b&\to 0.0009770396_d\\ 0.1111111111100000000000000000000_b&\to 0.0004884005_d\\ 0.1111111111110000000000000000000_b&\to 0.0002441704_d\\ 0.1111111111111000000000000000000_b&\to 0.0001220778_d\\ 0.1111111111111100000000000000000_b&\to 0.0000610370_d\\ 0.1111111111111110000000000000000_b&\to 0.0000305180_d\\ 0.1111111111111111000000000000000_b&\to 0.0000152589_d\\ 0.1111111111111111100000000000000_b&\to 0.0000076294_d\\ 0.1111111111111111110000000000000_b&\to 0.0000038147_d\\ 0.1111111111111111111000000000000_b&\to 0.0000019074_d\\ 0.1111111111111111111100000000000_b&\to 0.0000009537_d\\ 0.1111111111111111111110000000000_b&\to 0.0000004768_d\\ 0.1111111111111111111111000000000_b&\to 0.0000002384_d\\ 0.1111111111111111111111100000000_b&\to 0.0000001192_d\\ 0.1111111111111111111111110000000_b&\to 0.0000000596_d\\ 0.1111111111111111111111111000000_b&\to 0.0000000298_d\\ 0.1111111111111111111111111100000_b&\to 0.0000000149_d\\ 0.1111111111111111111111111110000_b&\to 0.0000000075_d\\ 0.1111111111111111111111111111000_b&\to 0.0000000037_d\\ 0.1111111111111111111111111111100_b&\to 0.0000000019_d\\ 0.1111111111111111111111111111110_b&\to 0.0000000009_d\\ 0.1111111111111111111111111111111_b&\to 0.0000000005_d\\ \end{align}$$
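A minimal floating-point sketch of this shift-and-add scheme. For clarity the table constants are produced with `math.log` rather than precomputed, and each factor is allowed to be applied more than once; a true fixed-point implementation would replace the multiplications by shifts and subtractions as described above.

```python
import math

def shift_add_log(z, bits=40):
    """log(z) for 1 <= z < 2, computed by driving z down toward 1
    with factors (1 - 2^-k) = 0.11b, 0.111b, 0.1111b, ... while
    accumulating the table constants -log(factor)."""
    acc = 0.0
    for k in range(2, bits):
        factor = 1.0 - 2.0**-k          # 0.11b, 0.111b, 0.1111b, ...
        while z * factor >= 1.0:        # keep z >= 1 at every step
            z *= factor                 # really a shift and a subtraction
            acc -= math.log(factor)     # table constant -log(0.1..1b)
    return acc

print(shift_add_log(1.5))   # close to log(1.5) = 0.405465...
print(math.log(1.5))
```

On every pass the residual $z$ is squeezed below $1/(1-2^{-k})$, so roughly one bit of accuracy is gained per step, matching the description above.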
One can use the fact that$$\log x=\lim_{n\to\infty}n\left(1-\frac{1}{\sqrt[n]{x}}\right)$$For $\log2$ a good approximation is$$1048576\left(1-\frac{1}{\sqrt[1048576]{2}}\right)$$where $\sqrt[1048576]{2}$ can be computed by pressing the SQRT key on a pocket calculator twenty times, since $1048576=2^{20}$ (or by computing it by hand, with much patience and time to spend).
What I get doing those computations is $0.6931469565952$, while a real computer gives $0.69314718055994530941$, so we have five exact decimal digits. Of course bigger numbers won't do, since the $2^{20}$-th root of it will be too near $1$ and the necessary digits would have already been lost.
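The whole procedure is easy to replicate (a sketch; `math.sqrt` plays the role of the pocket calculator's SQRT key):

```python
import math

x = 2.0
for _ in range(20):   # press SQRT twenty times: x becomes 2^(1/2^20)
    x = math.sqrt(x)

n = 2**20             # 1048576
approx = n * (1.0 - 1.0 / x)

print(approx)         # five correct decimal digits of log(2)
print(math.log(2))    # 0.6931471805599453
```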
(Note: $\log$ is the natural logarithm; I refuse to denote it in any other way. ;-))
$$\log (x)=\sum _{n=1}^{\infty } \frac{\left(\frac{x-1}{x}\right)^n}{n}$$ when $x>1$
What you can use is the Taylor expansion of $\ln (1+x)$:
$$\ln (1+x) = \sum (-1)^{j+1}{x^j\over j}$$
which converges for $-1<x\le1$. It would be tempting to insert $x=1$ into it, but that would be a poor choice since the convergence for $x=1$ is painfully slow. Instead you use the fact that $\ln 2 = -\ln 1/2$ and insert $x=-1/2$ instead:
$$\ln (1-{1\over 2}) = \sum (-1)^{j+1}{1\over j2^j} = -\sum {1\over j2^j}$$
So
$$\ln 2 = \sum {1\over j2^j}$$
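Partial sums of this series gain roughly one bit per term, far faster than the alternating series at $x=1$. A small sketch:

```python
import math

def ln2_partial(terms):
    # sum over j of 1 / (j * 2^j), truncated after `terms` terms
    return sum(1.0 / (j * 2**j) for j in range(1, terms + 1))

for terms in (5, 10, 20, 40):
    approx = ln2_partial(terms)
    print(terms, approx, abs(approx - math.log(2)))
# by ~40 terms the truncation error approaches the double-precision limit
```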
This is similar to how the calculator does it, but there are probably a few more tricks used. First, it probably uses the base-two logarithm and has a stored value of $\log_2 e$ to be able to produce the natural logarithm. The reason for this is to be able to handle logarithms of values outside the convergence region (and generally we want to use the series on as narrow a region as possible). We can generally write any number in the form $x2^p$ (in fact, the numbers are already represented in that form) with $x$ near $1$, and then $\log_2(x2^p) = p + \log_2(x)$ (a similar trick is done for all these kinds of functions).
The second trick is to approximate $\ln(1+x)$ on the interval $[1/\sqrt2, \sqrt2]$ even better than the Taylor expansion does: the trick is to find a polynomial that approximates it as uniformly well as possible. The Maclaurin expansion has the property that it yields a good approximation quickly for values near zero, at the expense of values further away. For the generic case one uses a polynomial that yields a good enough approximation equally fast over the whole interval.
We can represent the logarithm of positive rational numbers as follows.
First, consider the following null conditionally convergent series (cancelled harmonic series):
$$0=(1-1)+\left(\frac{1}{2}-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{3}\right)+\left(\frac{1}{4}-\frac{1}{4}\right)+\left(\frac{1}{5}-\frac{1}{5}\right)+...$$
Note that we are computing $0=\log(1)=\log\left(\frac{1}{1}\right)$ by adding consecutive terms with one positive fraction and one negative fraction each, taken from the reciprocals of the positive integers. This observation may sound trivial now, but it is interesting for what comes next.
We can rearrange the terms of this series to compute $\log(2)$ by taking two positive fractions and one negative for each term.
$$\log\left(2\right)=\left(1+\frac{1}{2}-1\right)+\left(\frac{1}{3}+\frac{1}{4}-\frac{1}{2}\right)+\left(\frac{1}{5}+\frac{1}{6}-\frac{1}{3}\right)+\left(\frac{1}{7}+\frac{1}{8}-\frac{1}{4}\right)+...$$
This can be easily seen to be the Mercator series in disguise, so we have discovered nothing new yet.
But there is more. Similarly, we have
$$\log\left(3\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{2}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{3}\right)+\left(\frac{1}{10}+\frac{1}{11}+\frac{1}{12}-\frac{1}{4}\right)+...$$
This pattern holds for all positive integers, so the next step is applying the property that $\log(p/q)=\log(p)-\log(q)$ to these representations.
This leads to $\log(p/q)$ by adding $p$ positive fractions and $q$ negative fractions at each step. For example, we have
$$\log\left(\frac{3}{2}\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1-\frac{1}{2}\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{3}-\frac{1}{4}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{5}-\frac{1}{6}\right)+...$$
as illustrated in http://oeis.org/A166871. |
Solving 1D and 2D complex Schroedinger wave equations with NDSolve
I do not agree with you when you write:
I know the NDSolve is not magic...
My opinion is that NDSolve is one of the most complex functionalities I've met so far in the Mathematica environment. With its many options and special functions it is a genuinely complex thing, and it is indeed hard to get a proper result. NDSolve is big; you could even sell it as a standalone program.
This post is somewhat long; on one side it is an answer (or an approximation of an answer), and on the other side it opens up a huge number of new questions...
It's a pity that guys like Rob Knapp are not available to ask, so it's only me trying to give a look under the hood of NDSolve while scratching only at the surface.
NDSolve includes a general solver for partial differential equations based on the method of lines. There are several ways to control the selection of the spatial grid (I'll list here only those that are relevant for the 1D case):
AccuracyGoal $\rightarrow$ the number of digits of absolute tolerance
PrecisionGoal $\rightarrow$ the number of digits of relative tolerance
MinStepSize $\rightarrow$ the minimum grid spacing to use
MaxStepSize $\rightarrow$ the maximum grid spacing to use
The discretization is done (not for pseudospectral methods) with uniform grids with the following direct correspondences (with interval length L):
MaxPoints $\rightarrow$ n $\Leftrightarrow$ MaxStepSize $\rightarrow$ L/n
MinPoints $\rightarrow$ n $\Leftrightarrow$ MinStepSize $\rightarrow$ L/n
1) The 1D case:
sol = NDSolve[{I D[u[t, x], t] == -D[u[t, x], {x, 2}],
u[0., x] == Exp[-(x^2.)], u[t, 5.] == 0, u[t, -5.] == 0},
u, {t, 0., 20.}, {x, -5., 5.}, MaxStepSize -> 0.01,
AccuracyGoal -> 3, PrecisionGoal -> 3]
Animate[Plot[Evaluate[Abs[u[t, x] /. First[sol]]^2], {x, -5, 5},
PlotRange -> {0, 1}], {t, 0, 17, 0.01}]
In order to increase the grid size you have to decrease MaxStepSize. Now your plot will work nicely.
Warming up:
Now let's turn to a 2D heat equation. To get warm, we'll solve a problem from the SIAM 100-Digit Challenge using NDSolve, although an analytical solution would be the best way. (I only want to show that the solver is quite capable of getting an answer.)
The SIAM 100-Digit Challenge problem #8 states the following:
A square plate $[-1,1] \times [-1,1]$ is at temperature $u = 0$. At time $t = 0$ the temperature is increased to $u = 5$ along one of the four sides while being held at $u = 0$ along the other three sides, and heat then flows into the plate according to $u_t = \Delta u$. When does the temperature reach $u = 1$ at the center of the plate? (Folkmar Bornemann)
The initial condition in that problem is discontinuous, so we'll provide a specific grid spacing.
Quiet[Block[{n = 25, he = 5 UnitStep[-(x + 1)]}, hsol = NDSolve[
{
D[u[t, x, y], t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y],
he == u[0, x, y],
u[t, -1, y] == 5, u[t, 1, y] == 0, u[t, x, -1] == he,
u[t, x, 1] == he
}, u, {t, 0, 1}, {x, -1, 1}, {y, -1, 1},
Method -> {"MethodOfLines",
Method -> {"EventLocator",
"Event" -> u[t, 0, 0] - 1,
"EventAction" :> Throw[end = t, "StopIntegration"]},
"SpatialDiscretization" -> {"TensorProductGrid",
"MinPoints" -> {n, n}, "MaxPoints" -> {n, n}}}]]]
which yields an answer quickly:
u->InterpolatingFunction[{{0.,0.424014},{-1.,1.},{-1.,1.}},<>]}}
We've specified the EventLocator controller method. Whenever the Event expression crosses zero, EventAction is evaluated, which in this case will stop the integration.
The most important suboptions for Method -> "MethodOfLines" are:
"SpatialDiscretization" $\rightarrow$ "TensorProductGrid"
"MinPoints" $\rightarrow$ list of discretization points for each spatial variable
"MaxPoints" $\rightarrow$ ditto
"DifferenceOrder" $\rightarrow$ positive integer or "Pseudospectral"
Now let's Plot3D the solution of the heat equation at t = 0.424014, along with a DensityPlot:
GraphicsGrid[
{{
Plot3D[Evaluate[u[0.424014, x, y] /. hsol], {x, -1, 1}, {y, -1, 1}],
DensityPlot[Evaluate[Abs[u[0.424014, x, y]] /. hsol], {x, -1, 1}, {y, -1, 1},
PlotPoints -> 200, Mesh -> False]
}}
]
The 1D complex Schroedinger wave equation
We will model the quantum-mechanical scattering of a Gaussian wave packet by using a 1D time-dependent Schroedinger equation.
Let's define the the Schroedinger equation:
The time-dependent Schroedinger equation is defined as:
$i\hbar \frac{\partial}{\partial t} \Psi(x,t)=\left[-\frac{\hbar^2}{2m} \nabla^2+V(x,t)\right] \Psi(x,t)$
As you already wrote in your Note 1, the initial condition is a simple spreading Gaussian probability.
The initial wave packet is defined as:
$e^{-(x+3)^{2}+3 i x}$
If we integrate this against $\Psi$ we get the maximum value of the potential, which is $\approx 6$.
Setting $\hbar$ and $m$ = 1, and the potential to be 6, we get:
schroedingerEq = I D[u[x, t], {t, 1}] == -1/2 D[u[x, t], {x, 2}] + 6 Exp[-x^2] u[x, t]
In order to make sure that at $\pm xMax$ our wave function behaves identically, we define a Dirichlet boundary condition which adds a cosine to u:
DirichletBC[u_, x_, xM_] := u - (((u /. x -> -xM) - (u /. x -> xM))/
2*(Cos[(x + xM)/(2 xM) Pi] + 1) + (u /. x -> xM))
Now let's solve this wave equation numerically:
With[{xMax = 15},(nsol =
NDSolve[{schroedingerEq,
u[x, 0] == DirichletBC[Exp[-(x + 3)^2], x, xMax] Exp[3 I x],
u[xMax, t] == 0, u[-xMax, t] == 0},
u[x, t], {x, -xMax, xMax}, {t, 0, 5},
AccuracyGoal -> 3, PrecisionGoal -> 3]) // Timing]
Plotting nsol yields:
DensityPlot[Evaluate[Abs[u[x, t]] /. nsol], {x, -15, 15}, {t, 0, 5},
PlotPoints -> 200, Mesh -> False]
Here we can see that at t $\approx$ 3.2 the wave packet reaches the right boundary and gets reflected there.
The 2D complex Schroedinger wave equation
When we look at the byte size of sol (see the 1D part of this answer), we realize that it is about 72 MB.
In order to avoid such huge InterpolatingFunction objects, and since I found out in our prior discussion that low memory consumption is important to you, I want to show another way to tackle the problem.
Under the hood, NDSolve is actually broken up into several parts:
Equation processing and method selection
Method initialization
Numerical solution
Solution processing
Normally, if you use NDSolve you won't even notice these steps, but there exist low-level functions which you can use on your own to break up the steps:
ProcessEquations
Iterate
ProcessSolutions
ProcessEquations is used to set up the problem and to create the StateData data structure.
Iterate advances the numerical solution.
ProcessSolutions converts the numerical data into an InterpolatingFunction.
Please see the documentation for NDSolve steps and components
Let's first set up the problem, using a periodic boundary condition on all four edges, with length L, the initial condition ics0, and the DifferenceOrder option do:
makeNDSolveStateDataObject[ics0_, do_, L_: 5, opts___] :=
First[NDSolve`ProcessEquations[{
I D[u[t, x, y], t] == -D[u[t, x, y], {x, 2}] - D[u[t, x, y], {y, 2}],
u[0., x, y] == ics0,
u[t, -L, y] == u[t, L, y], u[t, x, -L] == u[t, x, L]},
u, {t, 0., 2.}, {x, -L, L}, {y, -L, L}, opts,
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"DifferenceOrder" -> do}}]]
and build the StateData data structure:
stateD = makeNDSolveStateDataObject[Exp[-(x^2 + y^2)], "Pseudospectral"];
stateD ==> NDSolve`StateData[<0.>]
The nice thing about the StateData data structure is that we have more control over the integration. Sometimes it is appropriate to check the solution, maybe change some parameters, and then start it over again.
Iterate on its own does not return a value; it modifies the StateData data structure.
We can integrate using Iterate, and if we want to integrate further we have to call Iterate again, but with a larger value of time.
For the sake of brevity, I'm now going to create a GraphicsGrid by iterating through the StateData structure:
GraphicsGrid[Partition[sols = Table[
NDSolve`Iterate[stateD, t];
Plot3D[
Evaluate[
Abs@u[t, x, y] /.
NDSolve`ProcessSolutions[stateD, "Forward"]],
{x, -5, 5}, {y, -5, 5}, PlotRange -> {0, 1}],
{t, 0, 2, .1}], 2]]
And here the canonical animation:
This solution may not be the "straightforward" one, but if you are short on resources (memory), this is the way to go.
According to Weierstrass, the ultimate goal is always the representation of a function.
I guess I fulfilled his dictum now...
I hope this helps. |
There is no such thing as a "small" supernova, especially not for a "stellar death" scenario as you're imagining.
In order for a star to produce a supernova upon death, it must have a certain minimum mass - in particular, about 10 solar masses ($M_\odot$), where one solar mass is, not surprisingly, a mass standard equal to the mass of the Sun, or very roughly 2,000,000,000 Yg (yottagrams; for comparison, the Earth is around 6,000 Yg).
Such a star will be rather short-lived - only about 10 million years or so, which is far too quick for any sort of life to ever evolve in such a system, unless it was introduced from elsewhere, e.g. someone settled on the planet from another star system.
However, regardless of this, the supernova cannot be made arbitrarily small, because there is, as mentioned, a minimum mass for the star that will die in it, and thus the smallest supernova cannot be smaller than the supernova that this mass of star would produce. And even at this level, the supernova is devastating - about $10^{46}$ joules, of which essentially all is initially released as neutrino radiation; 1% of this is soaked up by the star and blows it apart. The resulting mechanical and radiative explosion at $10^{44}$ joules is always going to be enough to completely incinerate everything in the system, including all planets. In particular, suppose there were a planet as far out as 100 AU, or 15 000 Gm, about 100 times further away than the Earth is from the Sun. Assuming the energy radiates out in a spherical pattern, one can find the intensity of energy that will be deposited at this distance by
$$I = \frac{E}{4\pi r^2}$$
If you take $r$ to be the given distance (take $E = 10^{44}\ \mathrm{J}$ and $r = 1.5 \times 10^{13}\ \mathrm{m}$), you will find that the total energy pulse will be around 35 petajoules per square meter, or roughly one hydrogen-bomb explosion detonating on every square meter of planetary surface area at this distance. Granted, this energy will not arrive instantly, but over time; nonetheless it would be sufficient to flay an Earth-like planet at least down to the mantle, and for anything closer in (and it couldn't be too close if you imagine it to be habitable, since the 10 solar mass star will be very bright, especially toward the end of its life), it may even be enough to entirely evaporate the planet altogether. As a result, no life forms of any kind will survive; the effects of the supernova will be complete and total annihilation, the death of the humanoid race in full as well as all other habitation within the system, if any is present.
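The fluence figure quoted above follows directly from the inverse-square spreading formula (a sketch using the numbers from the text):

```python
import math

E = 1e44      # mechanical/radiative explosion energy in joules
r = 1.5e13    # 100 AU expressed in metres

fluence = E / (4.0 * math.pi * r**2)   # energy per unit area, J/m^2
print(f"{fluence:.2e} J/m^2")          # ~3.5e16 J/m^2, i.e. ~35 PJ/m^2
```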
Instead of a supernova, you may be more interested in checking out a stellar superflare of the type produced by very low-mass "flare stars" (red dwarfs), which is a rather explosive event but not of apocalyptic proportions. There are plenty of planets that have been found orbiting such stars, and one could ask about a binary system in which one of the stars is such a flare star, or at least capable of putting out a similar flare, as apparently they have sometimes been observed to occur with more massive stars.
Regarding your question about figure-8 orbits, they are possible in theory but infinitely unstable, in that an arbitrarily small "nudge" of the planet off its orbit by anything at all - e.g. the gravity of another planet in the system, or even just an irregularity in the stars' gravity owing to them not being perfectly homogeneous or symmetrical masses - will cause it to deviate from that orbit and into a different trajectory, perhaps ending up being consumed by the stars or ejected. (The instability is in a way the same as trying to balance a sharp pencil on its tip.) Thus such a system will not form naturally or persist were it formed artificially, and thus will not likely be observed to be in existence at all. For a stable orbit of a planet in a binary system, the binary must be either wide enough (the two stars sufficiently far apart) that the planet can orbit only one of them without getting cooked or pulled into the other, or else very close together (more like Tatooine from Star Wars, as you named in the question) such that the planet orbits in a huge circle or ellipse around both of them. |
I'm trying to plot the function $$f(x,y)=\cos(\sqrt{x^2 +y^2}),$$ i.e. $\cos(r)$. I want $f(x,y)=0$ for points on the circle, and when points are outside this circle I want $f(x,y)$ to be $0$ as well, so that the support of my function is the disk. I want to plot this situation. As I thought, my function is $$f(x,y)= \begin{cases} \cos(r), & r<\pi/2\\ 0, & \text{otherwise} \end{cases}$$ but my plot isn't continuous... please help me understand the problem.
That is a feature to indicate that your function is not continuous there. You can use the Exclusions option to prevent it:
k[x_, y_] := With[{r = Sqrt[x^2 + y^2]}, Piecewise[{{Cos[r], r < Pi/2}}, 0]]
Plot3D[k[x, y], {x, -5, 5}, {y, -5, 5}, Exclusions -> None]
Edit
If you are wondering why this plot looks a bit bumpy at the bottom, the reason is that there are too few polygons used to create a sharp plot. You can try
Plot3D[k[x, y], {x, -5, 5}, {y, -5, 5}, Exclusions -> None, PlotPoints -> 100, MaxRecursion -> 6]
and then you clearly see your circle at the bottom
I think the problem is with $\texttt{Piecewise[]}$. Try this variation instead:
$\texttt{f[x_, y_] := With[{r = Sqrt[x*x + y*y]}, If[r > Pi/2, 0, Cos@r]];}$ $\texttt{Plot3D[f[x, y], {x, -5, 5}, {y, -5, 5}]}$
You can use the option $\texttt{PlotPoints -> 100}$ to make the base sharper, but using $\texttt{Piecewise[]}$ leaves a gap in the plot surface as you noticed. |
Degree $n$: $42$
Transitive number $t$: $46$
Group: $C_7\times S_3^2$
Parity: $-1$
Primitive: No
Nilpotency class: $-1$ (not nilpotent)
Generators: (1,23,2,24,3,22)(4,27,6,26,5,25)(7,30,9,28,8,29)(10,32,11,31,12,33)(13,36,15,35,14,34)(16,39,17,38,18,37)(19,41,21,40,20,42), (1,34,26,16,7,42,32,23,15,6,37,28,19,11,3,36,27,17,8,41,33,24,13,5,38,29,20,12,2,35,25,18,9,40,31,22,14,4,39,30,21,10)
$|\mathrm{Aut}(F/K)|$: $7$
$|G/N|$: Galois groups for stem field(s)
2: $C_2$ x 3
4: $C_2^2$
6: $S_3$ x 2
7: $C_7$
12: $D_{6}$ x 2
36: $S_3^2$
Resolvents shown for degrees $\leq 10$
There are no siblings with degree $\leq 10$ Data on whether or not a number field with this Galois group has arithmetically equivalent fields has not been computed.
There are 63 conjugacy classes of elements. Data not shown.
Order: $252=2^{2} \cdot 3^{2} \cdot 7$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: [252, 35]
Character table: Data not available. |
I do not understand the distinction between configuration state functions (CSFs) and Slater determinants. Isn't every Slater determinant a CSF?
Please explain the difference and the use/need of configuration state functions.
Slater determinants are not eigenfunctions of the $\hat S^2$ operator, but CSFs are.
The Hamiltonian commutes with the operators for total and projected spin\begin{align} [\hat H, \hat S^2] &= 0 \\ [\hat H, \hat S_z] &= 0\end{align}Therefore, a set of common eigenfunctions to all three operators exists. As Slater determinants are eigenfunctions of $\hat S_z$, but not of $\hat S^2$, using them as a basis for the electronic wave function (= eigenstate of $\hat H$) will not guarantee the (approximate) solution to be an eigenfunction of $\hat S^2$. The spin multiplicity of the found solution may not be a pure Singlet (or Doublet, Triplet, etc.).
As an example, consider a 2 electron system, with both electrons occupying different spatial orbitals $a$ and $b$. The situation with the same spin for both electrons can be represented as the Slater determinant $|\alpha\alpha\rangle=\frac{1}{\sqrt{2}}[a(1)b(2)-a(2)b(1)]\,\alpha(1)\alpha(2)$; the antisymmetric spatial part is omitted in the following, to keep the notation short. The corresponding eigenvalue equations are then \begin{align} \hat S_z |\alpha\alpha\rangle &= 1 |\alpha\alpha\rangle \\ \hat S^2 |\alpha\alpha\rangle &= 2 |\alpha\alpha\rangle \end{align} Thus we have $S=1$, from $S(S+1)=2$, and the spin multiplicity is $2S+1=3$, i.e. a Triplet. Here the Slater determinant directly corresponds to a CSF.
For the case of opposite spin, we have \begin{align} \hat S_z |\alpha\beta\rangle &= 0 |\alpha\beta\rangle \\ \hat S^2 |\alpha\beta\rangle &= |\alpha\beta\rangle + |\beta\alpha\rangle \end{align} and therefore $|\alpha\beta\rangle$ is not an eigenfunction of $\hat S^2$. The expectation value for total spin would yield $\langle\hat S^2\rangle=1$, which is neither a Singlet nor a Triplet (not even a Doublet where $S(S+1)=0.75$).
This can be fixed by taking suitable linear combinations of Slater determinants. In this example we have 2 options: \begin{align} |^1\Psi\rangle &= \frac{1}{\sqrt{2}} \left( |\alpha\beta\rangle - |\beta\alpha\rangle \right) \\ |^3\Psi\rangle &= \frac{1}{\sqrt{2}} \left( |\alpha\beta\rangle + |\beta\alpha\rangle \right) \end{align} This yields the eigenvalue equations \begin{align} \hat S^2 |^1\Psi\rangle &= 0 |^1\Psi\rangle \\ \hat S^2 |^3\Psi\rangle &= 2 |^3\Psi\rangle \end{align} which correspond to a Singlet and a Triplet state respectively.
Overall, this yields 1 Singlet component $|^1\Psi\rangle = \frac{1}{\sqrt{2}} \left( |\alpha\beta\rangle - |\beta\alpha\rangle \right)$, and the 3 Triplet components $|^3\Psi\rangle = \frac{1}{\sqrt{2}} \left( |\alpha\beta\rangle + |\beta\alpha\rangle \right)$, $|\alpha\alpha\rangle$ and $|\beta\beta\rangle$.
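These eigenvalue relations can be verified numerically by building $\hat S_z$ and $\hat S^2$ as matrices in the two-electron spin basis $\{\alpha\alpha, \alpha\beta, \beta\alpha, \beta\beta\}$. A sketch (in units of $\hbar$, with NumPy standing in for the operator algebra):

```python
import numpy as np

# One-electron spin operators: S = sigma / 2 (units of hbar).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def total(op):
    # Two-electron total operator: op(1) + op(2).
    return np.kron(op, I2) + np.kron(I2, op)

Sx, Sy, Sz = total(sx), total(sy), total(sz)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

# Basis order: |aa>, |ab>, |ba>, |bb>.
ab = np.array([0, 1, 0, 0], dtype=complex)
ba = np.array([0, 0, 1, 0], dtype=complex)
singlet = (ab - ba) / np.sqrt(2)
triplet = (ab + ba) / np.sqrt(2)

print(np.allclose(S2 @ ab, ab + ba))           # |ab> is not an eigenfunction
print(np.allclose(S2 @ singlet, 0 * singlet))  # eigenvalue 0 -> Singlet
print(np.allclose(S2 @ triplet, 2 * triplet))  # eigenvalue 2 -> Triplet
```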
For the arithmetic on how to apply the $\hat S_z$ and $\hat S^2$ operators to multi-electron systems, see Chapter 2.5 in Modern Quantum Chemistry by A. Szabo and N. Ostlund. |
I am preparing for my master's thesis in Quantum Image Processing (QImP); I chose to work with the Novel Enhanced Quantum Representation of Digital Images (NEQR).
To convert an image from the classical domain to the quantum domain, we need to perform a quantum image preparation, which in the case of NEQR consists of two steps, as shown in the image below:
The second step is the one that sets the colors. The paper describes this step as follows:
It is divided into $2^{2n}$ sub-operations to store the gray-scale information for every pixel. For pixel $(Y,X)$, the quantum sub-operation $ U_{YX}$ is shown as (8) $$ U_{YX} = \Biggl(I \otimes \sum_{j=0}^{2^n -1} \sum_{i=0,ji \neq YX}^{2^n - 1} \lvert ji \rangle \langle ji \rvert \Biggr) + \Omega_{YX} \otimes \lvert YX \rangle \langle YX \rvert \tag{8}$$
Where $ \Omega_{YX} $ is a quantum operation as shown in (9), which is the value-setting operation for pixel $ (Y,X)$: $$ \Omega_{YX} = {\displaystyle \bigotimes_{i=0}^{q-1} \Omega_{YX}^{i}} \tag{9}$$ Because $ q $ qubits represent the gray-scale value in NEQR, $ \Omega_{YX}$ consists of $ q $ quantum oracles, as shown in (10): $$ \Omega_{YX}^{i} : \lvert 0 \rangle \rightarrow \Bigl\lvert 0 \oplus C_{YX}^{i} \Bigr\rangle \tag{10}$$ From (10), if $ C_{YX}^{i}=1$, then $ \Omega_{YX}^i $ is a $ 2n $-CNOT gate. Otherwise, it is a quantum gate which will do nothing on the quantum state.
My question is: how is (10) a $2n$-CNOT gate if $ C_{YX}^{i}$ is $1$?
From my understanding, $ C_{YX}^{i}$ is a computational basis state, that is, it is either $\lvert 0 \rangle$ or $\lvert 1 \rangle$, and the tensoring of the $ C_{YX}^{i}$ in (9) will produce a column vector.
Also, if I interpret $ \Bigl\lvert 0 \oplus C_{YX}^{i} \Bigr\rangle $ as follows: it is the result of $ 0 \oplus C_{YX}^i$, which is just $C_{YX}^i$, because $ 0 \oplus x$ is just $x$ (where $ \oplus $ is XOR). How will this produce a $2n$-CNOT gate, where it is a 3-qubit gate (its matrix is 8 × 8)? |
I'm trying to understand why there are differing predictions of the atmospheric temperature profile. It is well established that the dry adiabatic lapse rate (DALR) is:
$$ \frac{\mathrm{d}T}{\mathrm{d}z} = -\frac{g}{c_p} \approx -9.8\ \mathrm{K/km} $$
This is derived by assuming adiabatic process and hydrostatic pressure gradient:
$$ \mathrm{d}s = c_p\mathrm{d}\ln{T} - R\mathrm{d}\ln{p}\quad(= \frac{\delta q}{T} = 0)\\ \quad \frac{\mathrm{d}p}{\mathrm{d}z} = -\rho' g $$
where $\rho'$ is the density of the ambient air, and the pressure of an air parcel is the same as the ambient pressure ($p = p'$). It is essentially the cooling an air parcel will experience, due to the change in ambient pressure, when raised adiabatically in the atmosphere.
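Plugging in standard values (a sketch; the values of $g$ and $c_p$ for dry air are assumed, not taken from the text) reproduces the quoted number:

```python
g = 9.81     # gravitational acceleration, m/s^2
cp = 1004.0  # specific heat of dry air at constant pressure, J/(kg K)

dalr = g / cp            # lapse rate in K/m
print(dalr * 1000.0)     # ~9.77 K/km, i.e. dT/dz of about -9.8 K/km
```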
However, when using the principle of maximum entropy (i.e. looking for the equilibrium profile), we get an isothermal profile, as predicted classically by Gibbs and Boltzmann.
Apparently, the actual atmospheric profile (where there is no moisture condensing) is consistent with the dry adiabatic lapse rate much more than with the isothermal profile. The actual profile is of course subject to continuous thermal cooling of the atmospheric layers, thermal warming by radiation from the Earth's surface, heating from the surface by conduction (by turbulence and molecular diffusion), and during the day to solar heating. These factors can obviously affect any equilibrium.
My question is:
What causes the discrepancy? Is there a consensus?
If there were initially a DALR profile, would it eventually turn into an isothermal profile if there were no external influences (no radiation, no surface, no dynamic phenomena)?
My impression is that when radiative processes (thermal cooling and thermal heating from surface) are included in the maximum entropy calculation, one might get some sort of lapse rate, i.e. something between a pure thermal radiative profile and the isothermal profile.
I found some papers discussing the issue, esp.:
Verkley WTM., Gerkema T, 2003. On Maximum Entropy Profiles.
Journal of atmospheric sciences.
Akmaev RA., 2008. On the energetics of maximum‐entropy temperature profiles.
Quarterly Journal of the Royal Meteorological Society
where they treat potential temperature as something which is conserved when applying the maximum entropy principle in order to reach a profile with a lapse rate. But it is not clear to me why such an assumption should be made, or if there is any general consensus that this is the right approach. |
Let us write the equation in state-space form
\begin{equation}\frac{d}{dt}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}+\begin{bmatrix}\eta/2 & -\omega\\\omega & \eta/2\end{bmatrix}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}=0 \tag1\end{equation}
where the natural frequency is $\omega \triangleq \sqrt{1-\eta^2/4}$. It is easy to verify that the characteristic polynomial is indeed $p(\lambda) = \lambda^2 + \eta \lambda + 1$ by construction. When $\eta \ll 1$, we have $\omega \approx 1$, so the solution can always be approximated to great accuracy as
\begin{equation}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}=\exp\left(-t\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\right)\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}\tag2\end{equation}
with $y_{1}=x_{1}(0), y_{2}=x_{2}(0)$. Indeed this is the exact solution when $\eta=0$. Now, with a finite $\eta$, this solution is not completely correct, so we let $y_{1},y_{2}$ vary with time to yield
\begin{equation}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}=\exp\left(-t\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\right)\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix},\tag3\end{equation}
with $y_{1}(0)=x_{1}(0)$, $y_{2}(0)=x_{2}(0)$. These variables can be intuitively understood as “corrections” to the initial guess above. Then using the laws of matrix exponentials, we derive an ODE for the correction terms
\begin{equation}\frac{d}{dt}\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix}+\begin{bmatrix}\eta/2 & -(\omega-1)\\(\omega-1) & \eta/2\end{bmatrix}\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix}=0.\tag4\end{equation}
This latter ODE is non-stiff, and can be solved using any simple integration method. With $y_{1}(t)$ and $y_{2}(t)$ known, $x(t)$ is easily recovered.
To summarize, the principal idea is to start with an excellent initial guess, and to write the true solution as a time-dependent correction multiplied by this initial guess. Solving for the correction is then a far easier problem (read: less stiff) than solving for the solution directly.
The particular strategy above is known as an “exponential integrator” (more specifically, of the Lawson type), and assumes that the nonlinear dynamics can be well approximated by a linear one. For this very specific problem of a damped oscillation, it is also known as a phasor transform.
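To make the recipe concrete, here is a minimal numerical sketch (my own; forward Euler on the correction ODE (4), though any non-stiff integrator would do), checked against the closed-form solution of (1):

```python
import math

def solve_lawson(x0, eta, t_end, steps):
    """Integrate (1) by Euler-stepping the non-stiff correction ODE (4),
    then rotating back through the exact matrix exponential (a rotation)."""
    omega = math.sqrt(1.0 - eta ** 2 / 4.0)
    h = t_end / steps
    y1, y2 = x0  # y(0) = x(0)
    for _ in range(steps):
        d1 = -(eta / 2.0) * y1 + (omega - 1.0) * y2
        d2 = -(omega - 1.0) * y1 - (eta / 2.0) * y2
        y1, y2 = y1 + h * d1, y2 + h * d2
    # Recover x(t) = exp(-t [[0,-1],[1,0]]) y(t)
    c, s = math.cos(t_end), math.sin(t_end)
    return (c * y1 + s * y2, -s * y1 + c * y2)

def solve_exact(x0, eta, t):
    """Closed-form solution of (1): a damped rotation at frequency omega."""
    omega = math.sqrt(1.0 - eta ** 2 / 4.0)
    decay = math.exp(-eta * t / 2.0)
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (decay * (c * x0[0] + s * x0[1]),
            decay * (-s * x0[0] + c * x0[1]))
```

With a small $\eta$, the correction variables drift slowly, so even a crude integrator with relatively large steps tracks the true solution well.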
Planets and stars, no. Globular clusters and galaxies, yes. Small scales: To condense into such relatively compact objects as planets, stars, and even the more diffuse star-forming clouds, particles need to be able to dissipate their energy. If they don't do this, their velocities prohibit them from forming anything. "Normal" particles, i.e. atoms, do this ...
It doesn't matter if the body is made of gas, rocks, liquid or plasma; the four states of matter all have mass. So, as we know, mass creates a gravitational field, and the more mass the stronger the gravity - and Jupiter has 317x Earth's mass.
Comet Shoemaker–Levy 9 crashed into Jupiter a few years back. As well as these molecules, emission from heavy atoms such as iron, magnesium and silicon was detected, with abundances consistent with what would be found in a cometary nucleus. Those heavy elements are consistent with the comet being at least partially composed of rock. So Jupiter is ...
The mass of the Sun is determined from Kepler's laws: $$\frac{4\pi^2\times(1\,\mathrm{AU})^3}{G\times(1\,\mathrm{year})^2}$$ Each term in this expression contributes to both the value of the solar mass and our uncertainty. First, we know to very good precision that the (sidereal) year is 365.256363004 days. We have also defined the astronomical unit (AU) to ...
I assume you're asking about central supermassive black holes (SMBHs, one per galaxy), not stellar-mass black holes. The answer is yes, but what actually happens is the two SMBHs have to merge first, and then the resulting combined SMBH can sometimes be ejected from the combined (merged) galaxy. [Edited to add: Since you've updated the question with a ...
I concur with everyone else here (of course) that the gravity at the "surface" of Jupiter is entirely determined by the mass contained within that surface. The composition makes no difference. However, I differ with some on the answer to the headline title question. We simply do not know whether Jupiter has a rocky core. A popular theory for the formation ...
Well, I wasn't going to answer but the other two answers are wrong, or at least incomplete. If you wish to make a black hole from a stellar-sized object, then there is no need to compress it to as small as the Schwarzschild radius (though that would certainly work and would certainly be the answer for smaller objects with negligible self-gravity). Instead, ...
I asked a somewhat similar question but just about the Earth. Here and in my question there are some links that you might be interested in. Jupiter, for example, is thought to have moved closer to the sun during the late heavy bombardment, then back outwards. So I suspect that "Through the Wormhole" didn't get that point quite right. Observations by the ...
The gap appears because of pair instability supernovae. In short, as one looks at such massive stellar cores at increasing temperatures, an ever-larger fraction of the photons are sufficiently energetic to spontaneously form electron-positron pairs. True, they soon recombine, but there is nevertheless a loss in (radiation) pressure, which causes contraction, ...
A succinct summary of supernova types is given in the following image based on Heger et al. (2003): Image courtesy of Wikipedia user Fulvio 314 under the Creative Commons Attribution-Share Alike 3.0 Unported license. The graph is based on the graph in Fig. 1 of the linked paper. The pair instability realm is upwards of ~100 solar masses, though it is ...
Your question body is different from your question title and it seems you really want to ask what you did in the question body, so I'll address that. Short Answer: The simple power law which applies for larger asteroids and comets actually doesn't extend that well to smaller bodies and shouldn't be trusted too much in that range. Long Answer: You're right ...
The mass of an average galaxy appears to be totally dominated by dark matter, so your calculation would not give the galaxy mass. Even if all you wanted was the baryonic (non-dark-matter) mass, then what you suggest will be very much a lower limit. For example, you can look at this paper by Chabrier (2001), who estimates that gas forms less than half the ...
According to the standard ΛCDM cosmological model, the observable universe has a density of about $\rho = 2.5\!\times\!10^{-27}\;\mathrm{kg/m^3}$, with a cosmological constant of about $\Lambda = 1.3\!\times\!10^{-52}\;\mathrm{m^{-2}}$, is very close to spatially flat, and has a current proper radius of about $r = 14.3\,\mathrm{Gpc}$. From this, we can ...
This Wikipedia page does a decent job of describing the orbit-clearing criterion, based on the original paper by Stern & Levison (2002), which can be found here (PDF). In order to have cleared its orbit over a period of billions of years, an object needs a "Stern-Levison parameter" $\Lambda$ which is $> 1$; Pluto has $\Lambda \approx 3$-$4 \times 10^{...
The mass of an object does not increase when it collapses into a black hole. So a supermassive black hole must have started off quite small, and then grown. The formation and growth of supermassive black holes is not settled science. Supermassive black holes probably started as large stellar-mass black holes (the very earliest stars could have been very large,...
There is no general consensus on this. Different evolutionary models give different results. The factors (in addition to the initial mass of the star) that effect the final black hole mass would be the rotation rate of the progenitor, its composition (or metallicity) and whether it was in a binary system or not and whether that binary system was able to ...
Your approach is completely correct; just note three things. Logarithmic distribution: First, since the distribution of masses is logarithmic in nature (as are most other things), be sure to bin them logarithmically. Otherwise you will oversample (undersample) the bins at the low-(high-)mass end. Comoving densities: Second, to be able to compare mass ...
Photons are massless. This doesn't depend on their energy, so doesn't depend on their frequency or wavelength. Massless particles travel at the speed of light. Even if we abandon particles and look at classical electrodynamics, we find that the speed of an electromagnetic wave (in vacuum) has a fixed value. It doesn't depend on wavelength. Gravitational ...
The article you've read is not quite accurate/correct. A more correct picture is as follows: A star may approach a super-massive black hole (SMBH) so closely that the tidal forces of the SMBH tear it apart. The distance to the SMBH at which this happens is often referred to as the tidal radius. For a (non-rotating) SMBH with a mass in excess of about $10^8$...
What I'd like to know: at what distance do they have to be from each other to create only one gravitational influence. At whichever point you decide to call them two objects rather than one object. It's a completely arbitrary choice that depends on you rather than gravitational physics. What's going on is that gravity can be described by a mass density ...
The Schwarzschild radius of a black hole is probably the closest we can get to your question.$$r_s = (2G/c^2) \cdot m \mbox{, with }\ 2G/c^2 = 2.95\ \mbox{km}/\mbox{solar mass}.$$This means that the Schwarzschild radius for a given mass is proportional to that mass. The radius shouldn't be taken too literally in the physical sense, because space is ...
I don't have enough reputation to comment... I think this might help you understand the formation of binary and multiple star systems. This of course is not the only possible method, but it might explain the systems with big mass differences. As the initial rotation speed increases (marked in the videos as beta) you will see how the protoplanetary disk breaks ...
Here you can find a list of all the natural satellites in our Solar System. You can check one by one (good luck!) OR you can check this webpage and just add the terms. Please keep in mind that the latter website is kind of unknown, so double-check at least some of the masses before trusting it. Perhaps you can cross-check with this list as well, and ...
According to Newton's Law of Universal Gravitation, you simply need interacting masses in order to generate a gravitational force between them. Gases have mass and they therefore can contribute to gravity. So even if Jupiter is entirely gaseous, it is so incredibly massive besides (so much gas!), that it has a much stronger gravitational pull than Earth. ...
It's not completely clear what you are asking, but if this is a multi-choice quiz, then the only option that could be correct is (a).(b) Is not correct, because a white dwarf that just passes the Chandrasekhar mass is comfortably below the maximum mass that is supportable by a neutron star. So neutronisation followed by neutron degeneracy pressure and the ...
You don't have to guesstimate to come up with the answer. What you do is look at the dynamics of stars with respect to the Galactic plane - in particular, the velocity dispersions of stars with known distances from the plane, combined with a reasonable assessment of where the Sun is with respect to the plane (close), yields an almost model-independent ...
I'll shamelessly reference an answer I wrote on Worldbuilding to an almost identical question. Lammer et al. (2014) suggested that "super-Earths" with masses of $2$-$5M_\oplus$1 could retain massive hydrogen/helium envelopes, up to $\sim10^{25}$ kilograms. Above this, up to about $10M_\oplus$ or more, "mini-Neptunes" exist, possibly composed of volatiles and ...
As you say, making black holes quickly in the early Universe is a major unsolved problem in astrophysics. There are various hypotheses, of which two roughly correspond to supermassive stars. All basically involve trying to give the black hole a headstart in mass. There isn't really enough time to grow a $100\,M_\odot$ black hole to $10^9\,M_\odot$, so the ...
There are three factors that all play into this: the Jupiter/Sun mass ratio, the Jupiter/Sun distance, and the Sun's radius. The barycenter of any pair of orbiting masses lies on the line connecting their centers of mass, and its position depends on the masses of the two objects. If two objects have the same mass, their barycenter will lie halfway between them. ...
A Black Hole (BH) is an object of General Relativity (GR), not of Newtonian physics, so the answer involves both. First Newton: As another answer notes, the acceleration due to gravity depends on the mass of the body divided by the square of the distance from it. At a given distance (say, ten million miles) the acceleration due to gravity is simply ...
Method 1: To compute $\log_a(b)$, compute the smallest integer $n$ such that $b/e^n \leq 2$. Also compute the smallest integer $m$ such that $a/e^m \leq 2$. Then $\log_a(b)=\frac{n+\ln(b/e^n)}{m+\ln(a/e^m)}$. These logarithms can now be computed using the Maclaurin series $\ln(1+x)=\sum_{n=1}^\infty (-1)^{n-1} \frac{x^n}{n}$, which converges in this range.
If $|x|$ is very close to $1$, and especially if $x=1$, this series converges quite slowly, in which case you will want to use series acceleration. One way to achieve this is to use the fact mentioned in another answer, which is that if $y=x/(2+x)$ then $\ln(1+x)=2\sum_{n=1}^\infty y^{2n-1}/(2n-1)$. With this series acceleration, even in the worst case $x=1$, you still get about 1 ternary digit per step.
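A sketch of Method 1 in code (using the accelerated series throughout; I assume $a, b \ge 1$ with $a > 1$, and the term count of 60 is an illustrative choice that is ample for $|y| \le 1/3$):

```python
import math

def ln_series(x, terms=60):
    """ln(1+x) via the accelerated series: with y = x/(2+x),
    ln(1+x) = 2 * sum_{n>=1} y^(2n-1) / (2n-1)."""
    y = x / (2.0 + x)
    y2 = y * y
    total, power = 0.0, y
    for n in range(terms):
        total += power / (2 * n + 1)
        power *= y2
    return 2.0 * total

def log_base(a, b):
    """log_a(b) by reducing a and b into (0, 2] with powers of e, so that
    ln(b) = n + ln(b/e^n) and ln(a) = m + ln(a/e^m)."""
    n = 0
    while b / math.e ** n > 2:
        n += 1
    m = 0
    while a / math.e ** m > 2:
        m += 1
    return (n + ln_series(b / math.e ** n - 1.0)) / \
           (m + ln_series(a / math.e ** m - 1.0))
```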
Method 2: To get binary digits of $\log_2(x)$, first read off its integer part by repeated division by $2$ (or directly from the representation, if you are given the number in floating point). Now read off the binary digits after the "decimal" point by dividing by $2^{2^{-n}},n=1,2,\dots$ and seeing if the result is now less than $1$ or not. If it is, then that digit is $0$ and you go back to the same input you had. If it is not, then that digit is $1$, and you preserve the division and continue. So for example, $\log_2(127)=6+\log_2(127/64)$, and now to continue you need to check whether $127/64$ is greater than $\sqrt{2}$ (it is), whether $127/(64\sqrt{2})$ is greater than $2^{1/4}$ (it is), whether $127/(64 \cdot 2^{3/4})$ is greater than $2^{1/8}$ (it is), etc.
You can also evaluate these inequalities indirectly if evaluating roots is a problem: for example, you want to check whether $127/64$ is greater than $\sqrt{2}$ so instead you check whether $127^2>2 \cdot 64^2$. This will always boil down to checking whether your original number to some integer power exceeds $2$ to some other integer power, which is particularly straightforward in floating point arithmetic (where you don't actually need to compute the powers of $2$ at all).
You can then adapt this method for getting $\log_2(x)$ to other bases using the change of base formula as in the previous method. |
After completing this reading you should be able to:
Describe linear and nonlinear trends.
Describe trend models to estimate and forecast trends.
Compare and evaluate model selection criteria, including mean squared error (MSE), \(s^2\), the Akaike information criterion (AIC), and the Schwarz information criterion (SIC).
Explain the necessary conditions for a model selection criterion to demonstrate consistency.
Modeling Trend
Trend is a general systematic linear or (most often) nonlinear component that changes over time and does not repeat. It is a pattern of gradual change in a condition, output, or process, or an average or general tendency of a series of data points to move in a certain direction over time.
There are deterministic trend models whose evolution is perfectly predictable. These models are important and widely applicable in practice.
Consider the following linear time function:
$$ { T }_{ t }={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ t } $$
This is a perfect example of a trend description equation. Note that the time trend or time dummy, denoted by the variable \(TIME\) is an artificial creation. A sample of size \(T\) has:
$$ { TIME }_{ t }=t $$
Where:
\( TIME=\left( 1,2,3,\dots ,T-1,T \right) \)
The regression intercept is denoted by \({ \beta }_{ 0 }\); at time \(t = 0\), the regression equals \({ \beta }_{ 0 }\). The regression slope is denoted by \({ \beta }_{ 1 }\). Note also that an increasing trend has a positive slope while a decreasing one has a negative slope. An increasing linear trend in the fields of business, finance, and economics usually indicates growth, though this is not always the case.
Trends may also take a nonlinear or curved form. A good example is a variable that increases at an increasing or declining rate. Trends need not be linear; they should, however, be smooth.
The other model is the quadratic trend model. Such models are capable of capturing nonlinearities. Consider the following function:
$$ { T }_{ t }={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ t }+{ \beta }_{ 2 }{ TIME }_{ t }^{ 2 } $$
If \({ \beta }_{ 2 }=0\), the model reduces to a linear trend. If smoothness is to be maintained, polynomials of low order are applied. Nonlinear trends may take several shapes, determined by the magnitudes of the coefficients and whether they are positive or negative.
If both \({ \beta }_{ 1 }\) and \({ \beta }_{ 2 }\) are positive, the trend increases monotonically and nonlinearly. On the other hand, if both \({ \beta }_{ 1 }\) and \({ \beta }_{ 2 }\) are less than zero, the trend decreases monotonically.
The trend becomes U-shaped when \({ \beta }_{ 1 }\) is negative and \({ \beta }_{ 2 }\) is positive. We obtain an inverted U-shaped trend in the event that \({ \beta }_{ 1 }\) is positive and \({ \beta }_{ 2 }\) is negative.
The concept of an exponential trend arises when a trend appears nonlinear in levels but linear in logarithms. It is also referred to as the log-linear trend. This is a very common occurrence in business, finance, and economics, where many variables grow at a roughly constant rate.
A trend with a constant growth rate of \({ \beta }_{ 1 }\) can be expressed as:
$$ { T }_{ t }={ \beta }_{ 0 }{ e }^{ { \beta }_{ 1 }{ TIME }_{ t } } $$
In levels, this trend is an exponential, hence nonlinear, function of time. However, the same trend can be written in logarithms:
$$ ln\left( { T }_{ t } \right) =ln\left( { \beta }_{ 0 } \right) +{ \beta }_{ 1 }{ TIME }_{ t } $$
Hence, it becomes a linear function of time.
The exponential trend is similar to the quadratic trend in that a variety of patterns can arise depending on whether the parameter values are large or small and whether they are positive or negative. Their rate of increase or decrease can either decline or increase.
Although quadratic and exponential trends can achieve qualitatively similar shapes, there are subtle differences between them. Quadratic trends approximate the nonlinear trends of certain series quite well, yet for others it is the exponential trend that offers a better approximation.
Estimating Trend Models
It is important that variables such as time and its square be created and stored in a computer prior to estimating trend models.
Least squares regression is used to fit some trend models to data on a time series \(y\). We compute the following using a computer:
$$ \hat { \theta } =\underset { \theta }{ argmin } \sum _{ t=1 }^{ T }{ { \left( { y }_{ t }-{ T }_{ t }\left( \theta \right) \right) }^{ 2 } } $$
The set of parameters to be estimated is given as \( \theta\).
For a linear trend, we have that:
$$ { T }_{ t }\left( \theta \right) ={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ t } $$
$$ \theta =\left( { \beta }_{ 0 },{ \beta }_{ 1 } \right) $$
Therefore, the computer determines:
$$ \left( { \hat { \beta } }_{ 0 },{ \hat { \beta } }_{ 1 } \right) =\underset { { \beta }_{ 0 },{ \beta }_{ 1 } }{ argmin } \sum _{ t=1 }^{ T }{ { \left( { y }_{ t }-{ \beta }_{ 0 }-{ \beta }_{ 1 }{ TIME }_{ t } \right) }^{ 2 } } $$
For a quadratic trend, the computer determines:
$$ \left( { \hat { \beta } }_{ 0 },{ \hat { \beta } }_{ 1 },{ \hat { \beta } }_{ 2 } \right) =\underset { { \beta }_{ 0 },{ \beta }_{ 1 },{ \beta }_{ 2 } }{ argmin } \sum _{ t=1 }^{ T }{ { \left( { y }_{ t }-{ \beta }_{ 0 }-{ \beta }_{ 1 }{ TIME }_{ t }-{ \beta }_{ 2 }{ TIME }_{ t }^{ 2 } \right) }^{ 2 } } $$
An exponential trend can be estimated in two ways. The first approach involves proceeding directly from the exponential representation and letting the computer determine:
$$ \left( { \hat { \beta } }_{ 0 },{ \hat { \beta } }_{ 1 } \right) =\underset { { \beta }_{ 0 },{ \beta }_{ 1 } }{ argmin } \sum _{ t=1 }^{ T }{ { \left( { y }_{ t }-{ \beta }_{ 0 }{ e }^{ { \beta }_{ 1 }{ TIME }_{ t } } \right) }^{ 2 } } $$
In the second approach, log \(y\) is regressed on an intercept and \(TIME\). The computer therefore determines:
$$ \left( { \hat { \beta } }_{ 0 },{ \hat { \beta } }_{ 1 } \right) =\underset { { \beta }_{ 0 },{ \beta }_{ 1 } }{ argmin } \sum _{ t=1 }^{ T }{ { \left( ln{ y }_{ t }-ln{ \beta }_{ 0 }-{ \beta }_{ 1 }{ TIME }_{ t } \right) }^{ 2 } } $$
It is important to note that the fitted values from this regression are fitted values of log \(y\). Therefore, the fitted values of \(y\) are obtained by exponentiating them.
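As an illustration, the linear-trend argmin has a familiar closed-form (normal equations) solution, so it can be computed without a regression package (a sketch; \(TIME_t = 1, \dots, T\) is assumed):

```python
def fit_linear_trend(y):
    """Least squares estimates (b0_hat, b1_hat) of T_t = b0 + b1*TIME_t,
    with TIME_t = 1, ..., T, via the closed-form normal-equation solution."""
    T = len(y)
    time = range(1, T + 1)
    tbar = sum(time) / T
    ybar = sum(y) / T
    b1 = (sum((t - tbar) * (yt - ybar) for t, yt in zip(time, y))
          / sum((t - tbar) ** 2 for t in time))
    b0 = ybar - b1 * tbar
    return b0, b1
```

For data generated exactly as \(y_t = 2 + 3t\), the estimates recover \(\beta_0 = 2\) and \(\beta_1 = 3\).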
Argmin just means “the argument that minimizes.” Least squares proceeds by finding the argument (in this case, the value of θ) that minimizes the sum of squared residuals; thus, the least squares estimator is the “argmin” of the sum of squared residuals function.
Forecasting Trend
The construction of point forecasts should be our first consideration. Assume that we are currently at time \(T\), and that the \(h\)-step-ahead value of a series \(y\) is to be forecasted using a trend model. Consider the following linear model, which holds for any time \(t\):
$$ { y }_{ t }={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ t }+{ \varepsilon }_{ t } $$
At time \(T + h\) we have:
$$ { y }_{ T+h }={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ T+h }+{ \varepsilon }_{ T+h } $$
On the right side of the equation we have two future values: \({ TIME }_{ T+h }\) and \({ \varepsilon }_{ T+h }\). If we knew both \({ TIME }_{ T+h }\) and \({ \varepsilon }_{ T+h }\) at time \(T\), the forecast could be cranked out immediately. \({ TIME }_{ T+h }\) is indeed known, because the artificially constructed time variable is perfectly predictable.
However, since we do not know \({ \varepsilon }_{ T+h }\) at time \(T\), an optimal forecast of \({ \varepsilon }_{ T+h }\) is constructed using information up to time \(T\). \(\varepsilon\) is assumed to be simple independent zero-mean random noise, so for any future period the optimal forecast of \({ \varepsilon }_{ T+h }\) is zero. This results in the following point forecast:
$$ { y }_{ T+h,T }={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ T+h } $$
It is important to note that the subscript \({ T+h,T }\) informs us that the forecast is for time \(T+h\), and is made at time \(T\). This point forecast is made practically operational by applying the least squares estimates rather than the unknown parameters. This results in:
$$ { \hat { y } }_{ T+h,T }={ \hat { \beta } }_{ 0 }+{ \hat { \beta } }_{ 1 }{ TIME }_{ T+h } $$
An interval forecast is formed by assuming that the trend regression disturbance is normally distributed. Ignoring parameter estimation uncertainty, a 95% interval forecast is \({ y }_{ T+h,T }\pm 1.96\sigma \), where \(\sigma\) is the standard deviation of the disturbance in the trend regression. This is made operational by applying \({ \hat { y } }_{ T+h,T }\pm 1.96\hat { \sigma } \), where \(\hat { \sigma }\) is the standard error of the trend regression.
A density forecast is likewise formed by assuming that the trend regression disturbance is normally distributed. Ignoring parameter estimation uncertainty, the density forecast is \(N\left( { y }_{ T+h,T },{ \sigma }^{ 2 } \right) \), where \({ \sigma }\) is the standard deviation of the disturbance. This is made operational by applying \(N\left( { \hat { y } }_{ T+h,T },{ \hat { \sigma } }^{ 2 } \right) \) as the density forecast.
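The three forecast objects can be assembled in a few lines (a sketch; `b0`, `b1`, and `sigma_hat` are assumed to come from the fitted trend regression, and parameter estimation uncertainty is ignored, as in the text):

```python
def trend_forecast(b0, b1, sigma_hat, T, h):
    """h-step-ahead forecasts from a fitted linear trend, made at time T.
    Returns the point forecast, a 95% interval forecast, and the
    (mean, variance) pair of the normal density forecast."""
    point = b0 + b1 * (T + h)                         # TIME_{T+h} = T + h
    interval = (point - 1.96 * sigma_hat, point + 1.96 * sigma_hat)
    density = (point, sigma_hat ** 2)                 # N(point, sigma^2)
    return point, interval, density
```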
Applying the Akaike and Schwarz Criteria to Select Forecasting Models
Strategies that select models by in-sample fit alone rarely produce good out-of-sample forecasting models. Several modern tools are applied to help in model selection. For most model selection criteria, the task is to identify the model with the smallest out-of-sample 1-step-ahead mean squared prediction error.
The criteria differ in the penalties they impose for the number of degrees of freedom used in model estimation. All the criteria have a negative orientation (smaller values are preferred), since they are all effectively estimates of out-of-sample mean squared prediction error.
Consider the following mean squared error (\(MSE\)):
$$ MSE=\frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ T } $$
\(T\) denotes the sample size and:
$$ { e }_{ t }={ y }_{ t }-{ \hat { y } }_{ t } $$
Where:
$$ { \hat { y } }_{ t }={ \hat { \beta } }_{ 0 }+{ \hat { \beta } }_{ 1 }{ TIME }_{ t } $$
From this formula, it is clear that the model with the smallest \(MSE\) is also the model with the smallest sum of squared residuals, because scaling the sum of squared residuals by \({ 1 }/{ T }\) does not change the ranking.
Recall that:
$$ { R }^{ 2 }=1-\frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ { \Sigma }_{ t=1 }^{ T }{ \left( { y }_{ t }-{ \bar { y } } \right) }^{ 2 } } $$
\({ { \Sigma }_{ t=1 }^{ T }{ \left( { y }_{ t }-{ \bar { y } } \right) }^{ 2 } }\) is the total sum of squares, which depends only on the data rather than the particular model fit. Therefore, selecting the model that minimizes the sum of squared residuals, the model that minimizes the \(MSE\), or the model that maximizes \({ R }^{ 2 }\) yields the same result.
Adding more variables to a model does not cause the in-sample \(MSE\) to rise; on the contrary, it falls continuously. In fitting polynomial trend models, the degree of the polynomial, say \(p\), determines the number of variables in the model:
$$ { T }_{ t }={ \beta }_{ 0 }+{ \beta }_{ 1 }{ TIME }_{ t }+{ \beta }_{ 2 }{ TIME }_{ t }^{ 2 }+\cdots +{ \beta }_{ p }{ TIME }_{ t }^{ p } $$
When \(p=1\) the trend is linear, and when \(p=2\) it is quadratic, both of which have already been considered. The estimated parameters explicitly minimize the sum of squared residuals, so including higher powers of time cannot raise it. When more variables are included in a forecasting model, the sum of squared residuals falls, leading to a lower \(MSE\) and a higher \({ R }^{ 2 }\).
This results in overfitting and data mining: as more variables are included in a forecasting model, its out-of-sample forecasting performance may or may not improve, yet its fit on historical data will improve.
The \(MSE\) is a biased estimator of out-of-sample 1-step-ahead prediction error variance, and increasing the number of variables included in the model increases the size of the bias.
The degrees of freedom used should be penalized if the bias associated with the \(MSE\) and its relatives is to be reduced. Consider the mean squared error corrected for degrees of freedom:
$$ { s }^{ 2 }=\frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ T-k } $$
Where \(k\) degrees of freedom are used in model fitting, the usual unbiased estimate of the regression disturbance variance is given by \({ s }^{ 2 }\). Selecting the model that minimizes \({ s }^{ 2 }\) gives the same result as selecting the model that minimizes the standard error of the regression.
Earlier on, we had stated that:
$$ { \bar { R } }^{ 2 }=1-\frac { \frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ T-k } }{ \frac { { \Sigma }_{ t=1 }^{ T }{ \left( { y }_{ t }-{ \bar { y } } \right) }^{ 2 } }{ T-1 } } =1-\frac { { s }^{ 2 } }{ \frac { { \Sigma }_{ t=1 }^{ T }{ \left( { y }_{ t }-{ \bar { y } } \right) }^{ 2 } }{ T-1 } } $$
To highlight the degrees-of-freedom penalty, \({ s }^{ 2 }\) should be written as the product of the \(MSE\) and a penalty factor. That is:
$$ { s }^{ 2 }=\left( \frac { T }{ T-k } \right) \frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ T } $$
When more variables are included in a regression, the degrees-of-freedom penalty rises. The \(MSE\) must be penalized to reflect the degrees of freedom used, as this enables us to obtain an accurate estimate of the 1-step-ahead out-of-sample prediction error variance.
The Akaike information criterion (\(AIC\)) and the Schwarz information criterion (\(SIC\)) are two of the most important such criteria. Their formulas are:
$$ AIC={ e }^{ \left( \frac { 2k }{ T } \right) }\frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ T } $$
And:
$$ SIC={ T }^{ \left( \frac { k }{ T } \right) }\frac { { \Sigma }_{ t=1 }^{ T }{ e }_{ t }^{ 2 } }{ T } $$
\( \frac { k }{ T } \) is the number of estimated parameters per sample observation, and the penalty factors of all the criteria are functions of it.
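The two formulas translate directly into code (a sketch; `ssr` is the sum of squared residuals, `T` the sample size, and `k` the number of estimated parameters):

```python
import math

def aic(ssr, T, k):
    """Akaike information criterion: exp(2k/T) * SSR/T."""
    return math.exp(2.0 * k / T) * ssr / T

def sic(ssr, T, k):
    """Schwarz information criterion: T^(k/T) * SSR/T."""
    return T ** (k / T) * ssr / T
```

For any \(k > 0\) (and any realistic sample size), the SIC applies the harsher penalty, so it tends to select more parsimonious models than the AIC.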
Working with the same sample size, a graphical comparison of the Akaike Information Criterion (AIC) and the Schwarz Information Criterion (SIC) reveals a few things:
The penalty for SIC is the largest and rises at a slightly increasing rate with \(k/T\).
The penalty for AIC, though larger than that of \(s^2\), still rises only slowly with \(k/T\).
The penalty for \(s^2\) is small and rises slowly with \(k/T\).
Consistency
The consistency of a model selection criterion is the key property in its evaluation. The following conditions must hold for a model selection criterion to be consistent:
As the sample size gets large, the probability that the true data-generating process (DGP) will be selected approaches one, provided the DGP is among the models considered.
As the sample size gets large, the probability of selecting the best approximation to the true DGP approaches one, in the case where the true DGP is not among the models considered and thus cannot be selected.
Question 1
A trend model has the following characteristics:
60 observations; 6 parameters; a sum of squared residuals of 785.
Let’s define the “corrected MSE” as the MSE that is penalized for degrees of freedom used.
Compute (i) the mean squared error (MSE), (ii) the corrected MSE, and (iii) the standard error of the regression (SER).
A. MSE: 13.08; Corrected MSE: 3.813; SER: 14.54
B. MSE: 12.09; Corrected MSE: 11.82; SER: 3.813
C. MSE: 13.08; Corrected MSE: 14.54; SER: 3.813
D. MSE: 12.09; Corrected MSE: 13.01; SER: 8.56
The correct answer is C.
$$ MSE=\frac { SSR }{ T } = \frac { 785 }{ 60 } = 13.08 $$
$$ Corrected \quad MSE=\frac { SSR }{ T - k } = \frac { 785 }{ 60 - 6 } = 14.54 $$
$$ SER = \sqrt{Corrected \quad MSE} = \sqrt { \frac { SSR }{ T - k } } = \sqrt { \frac { 785 }{ 60 - 6 } } = 3.813 $$
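The arithmetic above can be checked in a few lines (a sketch; the variable names are mine):

```python
# Given quantities from the question
SSR, T, k = 785, 60, 6

mse = SSR / T                  # mean squared error
corrected_mse = SSR / (T - k)  # MSE penalized for degrees of freedom used
ser = corrected_mse ** 0.5     # standard error of the regression
```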
Note: this is a question related to a CS course I am studying at a university; it is NOT homework, and it can be found here under the Fall 2011 exam 2.
Here are the two questions I'm looking at from a past exam. They seem to be related, the first:
Let
$\qquad \mathrm{FINITE}_{\mathrm{CFG}} = \{ < \! G \! > \mid G \text{ is a Context Free Grammar with } |\mathcal{L}(G)|<\infty \} $
Prove that $\mathrm{FINITE}_{\mathrm{CFG}}$ is a decidable language.
and...
Let
$\qquad \mathrm{FINITE}_{\mathrm{TM}} = \{ < \! M\!> \mid M \text{ is a Turing Machine with } |\mathcal{L}(M)|<\infty \}$
Prove that $\mathrm{FINITE}_{\mathrm{TM}}$ is an undecidable language.
I am a bit lost on how to tackle these problems, but I have a few insights which I think may be in the right direction. The first thing I am aware of is that the language $A_{\mathrm{REX}}$, where
$\qquad A_{\mathrm{REX}} = \{ <\! R, w \!> \mid R \text{ is a regular expression with } w \in\mathcal{L}(R)\}$
is a decidable language (the proof is in Michael Sipser's Theory of Computation, p. 168). The same source also proves that a Context Free Grammar can be converted to a regular expression, and vice versa. Thus $A_{\mathrm{CFG}}$ must also be decidable, as it can be converted to a regular expression. This, and the fact that $A_{\mathrm{TM}}$ is undecidable, seems to be related to this problem.
The only thing I can think of is passing G to the Turing machines for $A_{\mathrm{REX}}$ (after converting G to a regular expression) and $A_{\mathrm{TM}}$, then accepting if G does and rejecting if G doesn't. As $A_{\mathrm{TM}}$ is undecidable, this will never happen. Somehow I feel like I'm making a mistake here, but I'm not sure what it is. Could someone please lend me a hand?
Lateral earth pressure is the pressure that soil exerts in the horizontal direction. The lateral earth pressure is important because it affects the consolidation behavior and strength of the soil and because it is considered in the design of geotechnical engineering structures such as retaining walls, basements, tunnels, deep foundations and braced excavations.
The coefficient of lateral earth pressure, $K$, is defined as the ratio of the horizontal effective stress, $\sigma'_h$, to the vertical effective stress, $\sigma'_v$. The effective stress is the intergranular stress calculated by subtracting the pore pressure from the total stress as described in soil mechanics. $K$ for a particular soil deposit is a function of the soil properties and the stress history. The minimum stable value of $K$ is called the active earth pressure coefficient, $K_a$; the active earth pressure is obtained, for example, when a retaining wall moves away from the soil. The maximum stable value of $K$ is called the passive earth pressure coefficient, $K_p$; the passive earth pressure would develop, for example, against a vertical plow that is pushing soil horizontally. For a level ground deposit with zero lateral strain in the soil, the "at-rest" coefficient of lateral earth pressure, $K_0$, is obtained.
There are many theories for predicting lateral earth pressure; some are empirically based, and some are analytically derived.
At rest pressure
At-rest lateral earth pressure, represented as $K_0$, is the in-situ lateral pressure. It can be measured directly by a dilatometer test (DMT) or a borehole pressuremeter test (PMT). As these are rather expensive tests, empirical relations have been created in order to predict at-rest pressure with less involved soil testing; these relate $K_0$ to the angle of shearing resistance. Two of the more commonly used are presented below.
Jaky (1948) [1] for normally consolidated soils:
$$ K_{0(NC)} = 1 - \sin \phi' $$
Mayne & Kulhawy (1982) [2] for overconsolidated soils:
$$ K_{0(OC)} = K_{0(NC)} \cdot OCR^{\sin \phi'} $$
The latter requires the OCR profile with depth to be determined. $OCR$ is the overconsolidation ratio and $\phi'$ is the effective stress friction angle.
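Assuming the friction angle is given in degrees, the two correlations can be sketched as follows (the function names are hypothetical, not from any standard library):

```python
import math

def k0_nc(phi_deg):
    """Jaky (1948): at-rest coefficient for normally consolidated soil."""
    return 1.0 - math.sin(math.radians(phi_deg))

def k0_oc(phi_deg, ocr):
    """Mayne & Kulhawy (1982): at-rest coefficient for overconsolidated
    soil, scaling the Jaky value by OCR^(sin phi')."""
    return k0_nc(phi_deg) * ocr ** math.sin(math.radians(phi_deg))
```

For example, with $\phi' = 30°$ Jaky gives $K_0 = 0.5$, and an OCR of 4 doubles it to 1.0.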
To estimate $K_0$ due to compaction pressures, refer to Ingold (1979). [3]
Soil Lateral Active Pressure and Passive Resistance
The active state occurs when a retained soil mass is allowed to relax or deform laterally and outward (away from the soil mass) to the point of mobilizing its available full shear resistance (or engaging its shear strength) in trying to resist lateral deformation; that is, the soil is at the point of incipient failure by shearing due to unloading in the lateral direction. It is the minimum theoretical lateral pressure that a given soil mass will exert on a retaining wall that moves or rotates away from the soil until the active state is reached (not necessarily the actual in-service lateral pressure on walls that do not move when subjected to soil lateral pressures higher than the active pressure). The passive state occurs when a soil mass is externally forced laterally and inward (towards the soil mass) to the point of mobilizing its available full shear resistance in trying to resist further lateral deformation; that is, the soil mass is at the point of incipient failure by shearing due to loading in the lateral direction. It is the maximum lateral resistance that a given soil mass can offer to a retaining wall that is being pushed towards the soil mass. Thus active pressure and passive resistance define the minimum lateral pressure and the maximum lateral resistance possible from a given mass of soil.
Rankine theory
Rankine's theory, developed in 1857, [4] is a stress field solution that predicts active and passive earth pressure. It assumes that the soil is cohesionless, the wall is frictionless, the soil-wall interface is vertical, the failure surface on which the soil moves is planar, and the resultant force is angled parallel to the backfill surface. The equations for the active and passive lateral earth pressure coefficients are given below. Note that $\phi'$ is the angle of shearing resistance of the soil and the backfill is inclined at angle $\beta$ to the horizontal:
$$ K_a = \cos\beta \, \frac{\cos \beta - \left(\cos ^2 \beta - \cos ^2 \phi \right)^{1/2}}{\cos \beta + \left(\cos ^2 \beta - \cos ^2 \phi \right)^{1/2}} $$
$$ K_p = \cos\beta \, \frac{\cos \beta + \left(\cos ^2 \beta - \cos ^2 \phi \right)^{1/2}}{\cos \beta - \left(\cos ^2 \beta - \cos ^2 \phi \right)^{1/2}} $$
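A direct evaluation of the two Rankine coefficients can be sketched as below (the function name and the degree-based inputs are my choices, not from the text):

```python
import math

def rankine_coefficients(phi_deg, beta_deg=0.0):
    """Rankine K_a and K_p for friction angle phi and backfill slope beta
    (both in degrees); requires beta <= phi so the root is real-valued."""
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    root = math.sqrt(math.cos(beta) ** 2 - math.cos(phi) ** 2)
    ka = math.cos(beta) * (math.cos(beta) - root) / (math.cos(beta) + root)
    kp = math.cos(beta) * (math.cos(beta) + root) / (math.cos(beta) - root)
    return ka, kp
```

Note that for level backfill the two coefficients are reciprocals of each other, which is a handy sanity check.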
For the case where $\beta$ is 0, the above equations simplify to
$$ K_a = \tan ^2 \left( 45 - \frac{\phi}{2} \right) = \frac{ 1 - \sin\phi }{ 1 + \sin\phi } $$
$$ K_p = \tan ^2 \left( 45 + \frac{\phi}{2} \right) = \frac{ 1 + \sin\phi }{ 1 - \sin\phi } $$
Coulomb theory
Coulomb (1776) [5] first studied the problem of lateral earth pressures on retaining structures. He used limit equilibrium theory, which considers the failing soil block as a free body, to determine the limiting horizontal earth pressure. The limiting horizontal pressures at failure in extension or compression are used to determine $K_a$ and $K_p$ respectively. Since the problem is indeterminate, [6] a number of potential failure surfaces must be analysed to identify the critical failure surface (i.e. the surface that produces the maximum or minimum thrust on the wall). Mayniel (1808) [7] later extended Coulomb's equations to account for wall friction, symbolized by $\delta$. Müller-Breslau (1906) [8] further generalized Mayniel's equations for a non-horizontal backfill and a non-vertical soil-wall interface (represented by angle $\theta$ from the vertical).
$$ K_a = \frac{ \cos ^2 \left( \phi - \theta \right)}{\cos ^2 \theta \cos \left( \delta + \theta \right) \left( 1 + \sqrt{ \frac{ \sin \left( \delta + \phi \right) \sin \left( \phi - \beta \right)}{\cos \left( \delta + \theta \right) \cos \left( \beta - \theta \right)}} \right) ^2} $$
$$ K_p = \frac{ \cos ^2 \left( \phi + \theta \right)}{\cos ^2 \theta \cos \left( \delta - \theta \right) \left( 1 - \sqrt{ \frac{ \sin \left( \delta + \phi \right) \sin \left( \phi + \beta \right)}{\cos \left( \delta - \theta \right) \cos \left( \beta - \theta \right)}} \right) ^2} $$
Instead of evaluating the above equations or using commercial software applications for this, books of tables for the most common cases can be used. Generally, instead of $K_a$, the horizontal part $K_{ah}$ is tabulated; it is equal to $K_a \cos(\delta+\theta)$.
The actual earth pressure force $E_a$ is the sum of a part $E_{ag}$ due to the weight of the earth and a part $E_{ap}$ due to extra loads such as traffic, minus a part $E_{ac}$ due to any cohesion present.
$E_{ag}$ is the integral of the pressure over the height of the wall, which equates to $K_a$ times the specific gravity of the earth, times one half the wall height squared.
In the case of a uniform pressure loading on a terrace above a retaining wall, $E_{ap}$ equates to this pressure times $K_a$ times the height of the wall. This applies if the terrace is horizontal or the wall vertical. Otherwise, $E_{ap}$ must be multiplied by $\cos\theta \cos\beta / \cos(\theta - \beta)$.
$E_{ac}$ is generally assumed to be zero unless a value of cohesion can be maintained permanently.
$E_{ag}$ acts on the wall's surface at one third of its height from the bottom and at an angle $\delta$ relative to a right angle at the wall. $E_{ap}$ acts at the same angle, but at one half the height.
Caquot and Kerisel
In 1948, Albert Caquot (1881-1976) and Jean Kerisel (1908-2005) developed an advanced theory that modified Müller-Breslau's equations to account for a non-planar rupture surface, using a logarithmic spiral to represent the rupture surface instead. This modification is extremely important for passive earth pressure where there is soil-wall friction: Mayniel's and Müller-Breslau's equations are unconservative in this situation and are dangerous to apply. For the active pressure coefficient, the logarithmic-spiral rupture surface gives a negligible difference compared to Müller-Breslau. These equations are too complex to evaluate by hand, so tables or computers are used instead.
Equivalent fluid pressure
Terzaghi and Peck, in 1948, developed empirical charts for predicting lateral pressures. Only the soil's classification and backfill slope angle are necessary to use the charts.
Bell's relationship
For soils with cohesion, Bell developed an analytical solution that uses the square root of the pressure coefficient to predict the cohesion's contribution to the overall resulting pressure. These equations represent the total lateral earth pressure. The first term represents the non-cohesive contribution and the second term the cohesive contribution. The first equation is for an active situation and the second for passive situations.
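The two relations displayed just below can be wrapped in one small helper (a sketch; the function name and the boolean flag are my own conventions):

```python
def bell_lateral_pressure(k, sigma_v, c, passive=False):
    """Bell's relationship: horizontal stress from vertical effective
    stress sigma_v, cohesion c, and earth pressure coefficient k
    (pass K_a for the active case, K_p for the passive case).
    Cohesion reduces the active pressure and increases the passive one."""
    sign = 1.0 if passive else -1.0
    return k * sigma_v + sign * 2.0 * c * k ** 0.5
```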
$$ \sigma_h = K_a \sigma_v - 2c \sqrt{K_a} $$
$$ \sigma_h = K_p \sigma_v + 2c \sqrt{K_p} $$
Coefficients of earth pressure
Coefficient of active earth pressure at rest
Coefficient of active earth pressure
Coefficient of passive earth pressure
References
Coduto, Donald (2001), Foundation Design, Prentice-Hall.
California Department of Transportation, Material on Lateral Earth Pressure.
Notes
^ Jaky, J. (1948). Pressure in silos. 2nd ICSMFE, London, Vol. 1, pp. 103-107.
^ Mayne, P.W. and Kulhawy, F.H. (1982). "K0-OCR relationships in soil". Journal of Geotechnical Engineering, Vol. 108 (GT6), 851-872.
^ Ingold, T.S. (1979). The effects of compaction on retaining walls. Géotechnique, 29, pp. 265-283.
^ Rankine, W. (1857). On the stability of loose earth. Philosophical Transactions of the Royal Society of London, Vol. 147.
^ Coulomb, C.A. (1776). Essai sur une application des règles des maximis et minimis à quelques problèmes de statique relatifs à l'architecture. Mémoires de l'Académie Royale près Divers Savants, Vol. 7.
^ Kramer, S.L. (1996). Earthquake Geotechnical Engineering. Prentice Hall, New Jersey.
^ Mayniel, K. (1808). Traité expérimental, analytique et pratique de la poussée des terres et des murs de revêtement. Paris.
^ Müller-Breslau, H. (1906). Erddruck auf Stützmauern. Alfred Kröner, Stuttgart.
Let $S_{n}:= \sum_{i=1}^{n} X_{i,n}$, where for each $n$, $X_{1,n}, X_{2,n},\ldots, X_{n,n}$ is a sequence of independent r.v.'s. $$X_{i,n}=\begin{cases}1, & \text{with probability }p_n\\0,& \text{with probability }1-p_n\end{cases}$$ for $i = 1,\dots, n$. Suppose that $p_{n}={\beta}/n$.
The problem is to show that $S_{n}$ doesn't converge almost surely to a limit.
I proved that it converges in distribution to a Poisson distribution, but I'm not sure how to proceed in proving that $S_{n}$ does not converge almost surely to a limit. The hint given is to show that $P(|S_{m}-S_{n}|>1)$ does not converge to zero. I know this is related to the Cauchy criterion, so guidance in this direction (or any other that you see fit) would be appreciated.
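Taking the rows of the triangular array to be mutually independent (an assumption the problem statement leaves implicit), a seeded Monte Carlo sketch suggests that $P(|S_{2n}-S_n|>1)$ stays bounded away from zero, which is consistent with the hint:

```python
import random

def estimate_gap_probability(beta=1.0, n=200, trials=4000, seed=1):
    """Estimate P(|S_{2n} - S_n| > 1) where S_n ~ Binomial(n, beta/n),
    simulating the rows of the array as mutually independent."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s_n = sum(rng.random() < beta / n for _ in range(n))
        s_2n = sum(rng.random() < beta / (2 * n) for _ in range(2 * n))
        if abs(s_2n - s_n) > 1:
            hits += 1
    return hits / trials
```

For $\beta = 1$ both sums are approximately independent Poisson(1) variables, and the limiting value of this probability works out to roughly 0.26; since it does not shrink as $n$ grows, $(S_n)$ fails the Cauchy criterion almost surely.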
Thanks!!
Latest revision as of 00:35, 5 June 2009
A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
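The small cases are cheap to verify by machine: [math]k_2=7[/math] requires only a scan over the [math]2^9[/math] subsets of [math]{\mathbb F}_3^2[/math]. A brute-force sketch (helper names are mine):

```python
from itertools import combinations, product

def is_kakeya(points, n):
    """True if `points` (a set of tuples in F_3^n) contains a full line
    {e, e+d, e+2d} for every direction, taking one representative per
    pair of opposite directions {d, -d}."""
    dirs = {min(d, tuple((-x) % 3 for x in d))
            for d in product(range(3), repeat=n) if any(d)}
    return all(
        any(all(tuple((e[i] + t * d[i]) % 3 for i in range(n)) in points
                for t in range(3))
            for e in points)
        for d in dirs)

def min_kakeya_size(n):
    """Smallest Kakeya set size in F_3^n, by exhaustive search
    (feasible only for n = 1, 2)."""
    grid = list(product(range(3), repeat=n))
    for size in range(1, 3 ** n + 1):
        for subset in combinations(grid, size):
            if is_kakeya(set(subset), n):
                return size
```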
Basic Estimates
Trivially, we have
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to
[math]k_{n+m} \leq k_m k_n[/math];
this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
Lower Bounds
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\ge 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
The better estimate
[math]k_n\ge (9/5)^n[/math]
is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements).
A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
Upper Bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
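This construction is easy to check directly for small [math]n[/math] (a sketch; helper names are mine):

```python
from itertools import product

def kakeya_by_construction(n):
    """All vectors in F_3^n in which at least one of the coordinate
    values 1 and 2 does not occur; has 2^n + 2^n - 1 = 2^(n+1) - 1
    elements by inclusion-exclusion."""
    return {v for v in product(range(3), repeat=n)
            if 1 not in v or 2 not in v}

def contains_line_in_every_direction(points, n):
    """Check the Kakeya property: a line {e, e+d, e+2d} inside `points`
    for every nonzero direction d."""
    for d in product(range(3), repeat=n):
        if not any(d):
            continue
        if not any(all(tuple((e[i] + t * d[i]) % 3 for i in range(n)) in points
                       for t in range(3))
                   for e in points):
            return False
    return True
```

For [math]n=2[/math] the construction has exactly [math]7[/math] elements, matching the optimal value [math]k_2=7[/math] quoted earlier.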
This estimate can be improved using an idea due to Ruzsa (apparently unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]n/3+O(\sqrt n)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2n/3+O(\sqrt n)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{n/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
It is my first question here; I hope it is not too stupid...
I am writing my thesis in math and I need a lot of subscripts and superscripts. My problem arises because, in subscripts and superscripts, numbers and capital letters look too big next to a lowercase letter.
For instance, I have to write things like $c_{1}$ or $q_{A*}$ (I know it is not that nice having both A and *, but the function is called $q_{A}$ and the operation _{*} has a precise meaning). And it is not that nice.
Then I started using \scriptscriptstyle when I have capital letters or numbers (for instance, c_{{\scriptscriptstyle 1}}). And finally I noticed that several times I have to subscript things like _{2g} or _{k+1}, and there the numbers come back to their original size.
Hence I was wondering whether there is a standard way or etiquette to deal with this issue (since now I feel either I go back to the standard style or I start writing things like u_{{\scriptscriptstyle 2}g}...).
\documentclass[a4paper,11pt,oneside,openright]{book}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{amsmath,amscd}
\usepackage[english]{babel}
\usepackage{latexsym}
\begin{document}
$Hom_{0}(X)$ or $Hom_{\scriptscriptstyle 0}(X)$? But then I have also $Hom_{2g-1}(X)$...
Then also $c_{1}$ or $c_{\scriptscriptstyle 1}$.
Mixed things like $k^{2g-2p+s}$ or $(u_{{\scriptscriptstyle 1}},\ldots,u_{g}, \tilde{u}_{{\scriptscriptstyle 1}}=u_{g+1}, \ldots , \tilde{u}_{g}=u_{2g})$.
Also $\omega_{E}$ is quite annoying.
\end{document}
I'll take on problem 1, though I'm only somewhat familiar with its subject matter.
(a) This is presumably for finding the primitive polynomials in GF8. These are cubic polynomials with coefficients in GF2 that cannot be expressed as products of corresponding polynomials for GF4 and GF2. I will use x as an undetermined variable here.
For GF2, the primitive polynomials are $x + \{0,1\}$, i.e. $x$ and $x+1$.
For GF4, we consider primitive-polynomial candidates $x^2 + \{0,1\}x + \{0,1\}$: $x^2$; $x^2+1 = (x+1)^2$; $x^2+x = x(x+1)$; $x^2+x+1$. That last one is the only primitive polynomial for GF4.
For GF8, we consider primitive-polynomial candidates $x^3 + \{0,1\}x^2 + \{0,1\}x + \{0,1\}$: $x^3$; $x^3+1 = (x^2+x+1)(x+1)$; $x^3+x = x(x+1)^2$; $x^3+x+1$; $x^3+x^2 = x^2(x+1)$; $x^3+x^2+1$; $x^3+x^2+x = x(x^2+x+1)$; $x^3+x^2+x+1 = (x+1)^3$.
Thus, GF8 has primitive polynomials $x^3+x+1$ and $x^3+x^2+1$.
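Since any factorization of a cubic over GF2 must include a linear factor, irreducibility can be tested simply by checking for roots in {0, 1}. A brute-force sketch (helper names are mine) recovers exactly the two polynomials above:

```python
def gf2_eval(coeffs, x):
    """Evaluate a polynomial with GF2 coefficients (constant term first)
    at x in {0, 1}."""
    return sum(c * x ** i for i, c in enumerate(coeffs)) % 2

def irreducible_cubics():
    """Enumerate all eight monic cubics over GF2 and keep those with no
    root in {0, 1}; for cubics this is equivalent to irreducibility."""
    found = []
    for a in (0, 1):            # x^2 coefficient
        for b in (0, 1):        # x coefficient
            for c in (0, 1):    # constant term
                coeffs = (c, b, a, 1)
                if gf2_eval(coeffs, 0) == 1 and gf2_eval(coeffs, 1) == 1:
                    found.append(coeffs)
    return found
```

(For degree 3 over GF2, irreducible and primitive coincide, since the multiplicative group of GF8 has prime order 7.)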
(b) There is a problem here. A basis is easy to define for addition: $\{1, x, x^2\}$, where multiplication uses the remainder from dividing by a primitive polynomial. The additive group is thus $(\mathbb{Z}_2)^3$. The multiplicative group is, however, $\mathbb{Z}_7$, and it omits 0. That group has no nontrivial subgroups, so it's hard to identify a basis for it.
(c) That is a consequence of every finite field $GF(p^n)$ being a subfield of an infinite number of finite fields $GF(p^{mn})$, each one with a nonzero number of primitive polynomials with coefficients in $GF(p^n)$. Since each field's primitive polynomials cannot be factored into its subfields' ones, each field adds some polynomial roots, and thus there are an infinite number of such roots.
I will now try to show that every finite field has a nonzero number of primitive polynomials with respect to some subfield. First, a field $F$ with $N$ elements relative to itself: for every element $a$ of $F$, $(x - a)$ is primitive, so $F$ has $N$ primitive polynomials. For $GF(p^{mn})$ relative to $GF(p^n)$, I will call the number $N(m)$. Counting all the possible candidate polynomials for $GF(p^{mn})$, one gets
$$ \sum_{\sum_k k m_k = r} \prod_k P(N(k),m_k) = N^r $$
If N is a prime, then the solution is known:
$$ N(m) = \frac{1}{m} \sum_{d|m} N^{m/d} \mu(d) $$
where $\mu$ is the Möbius mu function: $(-1)^k$ if its argument is square-free with $k$ distinct prime factors, and $0$ otherwise. I don't know if that is correct when $N$ is a power of a prime.
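For what it's worth, that counting formula does hold for any prime power $N$: it counts monic irreducible degree-$m$ polynomials over a field with $N$ elements. A small sketch (helper names are mine) checks it against the counts found in part (a):

```python
def mobius(n):
    """Moebius function: (-1)^k if n is a product of k distinct primes,
    0 if n is divisible by a perfect square > 1."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return result if n == 1 else -result

def num_irreducible(N, m):
    """Number of monic irreducible degree-m polynomials over a field
    with N elements, via (1/m) * sum_{d|m} mu(d) N^(m/d)."""
    total = sum(mobius(d) * N ** (m // d)
                for d in range(1, m + 1) if m % d == 0)
    return total // m
```

Degree 3 over GF2 gives 2, matching the pair $x^3+x+1$, $x^3+x^2+1$ found above.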
I would like to know if there is a simple real-world problem which requires knowing a closed form for $\displaystyle \sum_{i=1}^n i$ and/or the sum of the first $n$ even/odd numbers. The only application I can think of is when doing Riemann Sums, but this student has not taken calculus.
Students sometimes find applications to computer programming interesting. The sum in question often pops up in determining the complexity of various algorithms. For a simple example, selection sort sorts a list of $n$ items by first scanning over it to select the first element, then scanning over the remaining $n-1$ items to find the second element of the sorted list, etc. The total amount of scanning is $1 + 2 + \cdots + n$. Alternatively, it involves $1 + 2 + \cdots + (n-1)$ pairwise comparisons in the course of those scans. In either case, the formula for the sum shows that selection sort is $O(n^2)$.
More generally, this sort of analysis is applicable to any algorithm with nested for-loops where the outer loop has its index (say $i$) ranging from $1$ to $n$ and the inner loop has its index (say $j$) ranging from $1$ to $i$.
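An instrumented sketch (the helper name is mine) confirms that selection sort performs exactly $1+2+\cdots+(n-1) = n(n-1)/2$ comparisons:

```python
def selection_sort_comparisons(items):
    """Selection sort on a copy of `items`, counting pairwise comparisons."""
    a = list(items)
    comparisons = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1          # one comparison per inner-loop step
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons
```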
If $n$ people meet and everyone shakes hands with everyone else, how many hand shakes were there?
You could do this combinatorially, e.g. $n$ choose 2, but another way to think about it is that the first person shakes $n-1$ times, the next $n-2$, ... until the second to last person only shakes the last person's hand. This leads to the summation.
Perhaps you can make the topic "applicable" by comparing it to some popular social media posts of the form, "How many triangles are in this figure? Only geniuses get this right!"
Take a $5\times 1$ rectangular grid, for example, and ask the students: How many rectangles of any size can be found in this figure? Surely, some of them will just start counting right away and find that the answer is 15. Then, prod them further by asking, "Okay, of those 15 rectangles, how many of them are $1\times 1$? How many are $2\times 1$? etc., up to $5\times 1$."
The students will see that there are 5, 4, 3, 2, and 1 such rectangles, respectively. Hopefully, they will then realize how to generalize the problem: if we start with an $n\times 1$ rectangle, the total number of rectangles will be $n+(n-1)+(n-2)+\cdots+3+2+1$.
Now, ask them what the answer to this problem would be if $n=100$. See how many students started punching calculator buttons or doing mental/paper arithmetic. Stop them immediately and say, "Surely, there must be a better,
more efficient, method than just adding 100 numbers!" This motivates the necessity of a closed-form solution, and you can use this to jump into finding such a formula. Addendum: Depending on the audience and time available, you could also use that problem to introduce binomial coefficients, since ${n+1\choose 2}$ is the solution to that problem. You can then segue into a combinatorial "proof without words" for that same fact. The image below demonstrates a bijection between the $1+2+\cdots+n$ elements in the upper triangle and pairs of elements in the row of $n+1$ elements in the bottom row (source via math.se):
Finally, you could follow this up by exploring a similar 2-dimensional problem: how many rectangles can you find in the grid formed by $m$ horizontal and $n$ vertical lines? The solution is ${m\choose 2}\cdot{n\choose 2}$. See this link for more.
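A brute-force check (my own snippet, not from the answer) that the $n\times 1$ rectangle count from the discussion above really is $1+2+\cdots+n = \binom{n+1}{2}$:

```python
from math import comb

def count_subrectangles(n):
    """Brute-force count of contiguous runs of cells in a 1-by-n strip."""
    # a sub-rectangle is determined by its first cell i and last cell j
    return sum(1 for i in range(n) for j in range(i, n))

for n in range(1, 12):
    assert count_subrectangles(n) == sum(range(1, n + 1)) == comb(n + 1, 2)
assert count_subrectangles(5) == 15   # the 5x1 grid from the discussion
```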
You are able to buy cubical wooden blocks of unit size, and you want to build a staircase to get over a wall of height $n$ units. How many blocks are required?
A gold mining operation has two shifts. The day shift bores a tunnel that is deeper by an additional depth $\delta$ each day. The night shift comes in and works the whole accumulated length of the tunnel every day, extracting gold worth $g$ dollars per unit of depth. Find the gross revenue after $n$ days.
The population of a town grows over time as 1, 2, 3, ... Each person produces one unit of garbage per year, which fills up a landfill.
[Re #3, from comments:]
Is that model close to the actual way populations grow? I don't want to make the example sound too artificial.
I would approach this from the point of model building. Models don't need to be perfectly realistic in order to be informative. Also, linear population growth is likely to be a pretty decent approximation over an appropriate time interval (say a decade).
I am not quite sure if this answers the question in the body of your post, but here is how I have motivated the search for a closed form regarding these summations.
Summing the first $n$ counting numbers:
I used this formula when I demoed for my current teaching position, and think it is a good way to start a new year off, too: To learn student names, I have each of them say their own name, and then the name of each of their predecessors. So, the first person says just her own name ($1$ name), the second person says her sole predecessor's name as well as her own ($2$ more names), the third person says each of the first two's names and then her own ($3$ more names), and so forth.
At the end, I ask the students whose name was said the most times, whose name was said the fewest times, and how many total name utterances there were. Generally this is solved in one of three (related) ways: the Gaussian summation trick of duplicating the total, evaluating it as $n(n+1)$, and dividing by $2$ for the answer; pairing the number of utterances by the first and last speaker (i.e., $1$ and $n$), the second and penultimate speaker ($2$ and $n-1$) etc., with a small caveat around when there are an odd number of students; or multiplying the average number of utterances by the number of students (to my surprise, a group of eighth graders devised this method when I actually demoed!).
I like to follow-up and ask a question such as, e.g., for a class of $16$ students: Would the total number of utterances be greater, fewer, or the same, if instead of having them go through the names as above, we broke into two groups of $8$ students and had each group proceed with this name game? (This can be computed as $16(17)/2$ versus $2 \cdot 8(9)/2$, or answered without computation...)
If you devise a method of summing the first $n$ counting numbers and realize it as $n(n+1)/2$, then the sum of the first $n$ even numbers is (clearly) twice this, i.e., $n(n+1) = n^2 + n$. And once you know that, you can sum the first $n$ odd numbers by summing the first $n$ even numbers and subtracting $1$ from each of the $n$ addends, which gives a total of $(n^2 + n) - n = n^2$.
An application of the fact that the first $n$ odds sum to $n^2$ arises when one broaches quadratic functions. Specifically, students in my algebra classes will write a quadratic expression in either vertex form, $a(x-h)^2 + k$, or standard form, $ax^2 + bx + c$, and then graph them by finding the vertex, first, and then (when the quadratic is monic, i.e., $a=1$) going over one, up $1$; over another, up $3$, over another, up $5$, etc. When the quadratic has a coefficient $a \not = 1$, then you can take these same rises and multiply through by $a$.
For example, the quadratic function defined by $x \mapsto 2(x-3)^2 - 8$ can be graphed by first noting that the vertex is at $(3, -8)$, and then going over one (to the left and right) and up $2 \cdot 1 = 2$; over another, up $2 \cdot 3 = 6$; over another, up $2 \cdot 5 = 10$, etc. This is a quick way to graph the quadratics that we are dealing with, but making sense of
why this works relies on the closed formula for the sum of odds. (So, any problem you have around quadratics that benefits from a graphical representation can be readily connected to $\sum$odds as described above.)
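This over-one-up-odd pattern is easy to verify numerically; here is a quick Python check of my own for the example quadratic $x \mapsto 2(x-3)^2 - 8$:

```python
def f(x):
    return 2 * (x - 3) ** 2 - 8    # vertex form, a = 2, vertex (3, -8)

# successive y-increments from the vertex are a times the odd numbers
steps = [f(3 + k) - f(3 + k - 1) for k in range(1, 6)]
assert steps == [2 * 1, 2 * 3, 2 * 5, 2 * 7, 2 * 9]

# because the first k odds sum to k^2, the total rise is a * k^2
assert all(f(3 + k) - f(3) == 2 * k * k for k in range(6))
```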
In my experience, this most commonly pops up in terms of averages of a uniform distribution. What is the average face value of a standard die (6-sided)? Of a polyhedral 4, 8, 12, or 20-sided die? What is the average of the numerical cards in a standard deck (2-10)? While the answer may seem trivial, many students find it totally opaque on first encounter.
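A one-line check of those die averages (my own Python, using the closed form to get $(n+1)/2$):

```python
def average_face(n):
    """Average face value of a fair n-sided die, faces 1..n."""
    return sum(range(1, n + 1)) / n      # = (n(n+1)/2)/n = (n+1)/2

assert average_face(6) == 3.5
for n in (4, 8, 12, 20):
    assert average_face(n) == (n + 1) / 2
```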
The first (and simplest) method to try for drawing a polar graph is to rewrite $r=f(\theta)$ as a relation between $x$ and $y$, and then draw the graph of this relation. For example, when $r=2\cos(\theta)$, multiplying both sides by $r$ gives $r^2 - 2r\cos(\theta) = 0$. But $$ r^2 \ = \ x^2+y^2 \quad \hbox{and} \quad 2r \cos(\theta) \ = \ 2 x,$$ so $x^2-2x + y^2=0$. Completing the square with $x^2-2x = (x-1)^2 - 1$ gives $$(x-1)^2 + y^2=1.$$ This is a circle of radius 1 centered at $(1,0)$.
In other cases, the curve doesn't have a nice description in rectangular coordinates -- that's usually why we're using polar coordinates! We then plot points as follows: start with the graph of $r=f(\theta)$ with $(r,\theta)$ as rectangular coordinates, and then plot the corresponding values of $(x,y)$ using $$x=r\cos(\theta),\qquad y=r\sin(\theta).$$ In other words, we obtain a parametrization of our polar curve as $\theta=t$, $r=f(t)$, and so: $$ x(t) = f(t) \cos(t); \qquad y(t) = f(t) \sin(t). $$ We can then apply everything we know about parametrized curves to polar curves.
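This parametrization is easy to test numerically. The sketch below (our own Python, not part of the text) samples the parametrized form of $r=2\cos(\theta)$ and confirms every point lies on the circle $(x-1)^2+y^2=1$ found above:

```python
import math

def polar_point(f, t):
    """Parametrize r = f(theta) as (x, y) = (f(t) cos t, f(t) sin t)."""
    r = f(t)
    return (r * math.cos(t), r * math.sin(t))

f = lambda t: 2 * math.cos(t)          # the curve r = 2 cos(theta)
for k in range(200):
    t = 2 * math.pi * k / 200
    x, y = polar_point(f, t)
    # every sampled point lies on the circle (x - 1)^2 + y^2 = 1
    assert abs((x - 1) ** 2 + y ** 2 - 1) < 1e-12
```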
The following video shows three ways to plot the polar curve $r = 2 \cos(\theta)$.
A Belyi-extender (or dessinflateur) $\beta$ of degree $d$ is a quotient of two polynomials with rational coefficients
\[ \beta(t) = \frac{f(t)}{g(t)} \] with the special properties that for each complex number $c$ the polynomial equation of degree $d$ in $t$ \[ f(t)-c g(t)=0 \] has $d$ distinct solutions, except perhaps for $c=0$ or $c=1$, and, in addition, we have that \[ \beta(0),\beta(1),\beta(\infty) \in \{ 0,1,\infty \} \]
Let’s take for instance the power maps $\beta_n(t)=t^n$.
For every $c$ the degree $n$ polynomial $t^n - c = 0$ has exactly $n$ distinct solutions, except for $c=0$, when there is just one. And, clearly we have that $0^n=0$, $1^n=1$ and $\infty^n=\infty$. So, $\beta_n$ is a Belyi-extender of degree $n$.
A cute observation being that if $\beta$ is a Belyi-extender of degree $d$, and $\beta'$ is an extender of degree $d'$, then $\beta \circ \beta'$ is again a Belyi-extender, this time of degree $d.d'$.
That is, Belyi-extenders form a monoid under composition!
In our example, $\beta_n \circ \beta_m = \beta_{n.m}$. So, the power-maps are a sub-monoid of the Belyi-extenders, isomorphic to the multiplicative monoid $\mathbb{N}_{\times}$ of strictly positive natural numbers.
In their paper Quantum statistical mechanics of the absolute Galois group, Yuri I. Manin and Matilde Marcolli say they use the full monoid of Belyi-extenders to act on all Grothendieck’s dessins d’enfant.
But, they attach properties to these Belyi-extenders which they don’t have, in general. That’s fine, as they foresee in Remark 2.21 of their paper that the construction works equally well for any suitable sub-monoid, as long as this sub-monoid contains all power-map extenders.
I’m trying to figure out what the maximal mystery sub-monoid of extenders is satisfying all the properties they need for their proofs.
But first, let us see what Belyi-extenders have to do with dessins d’enfant.
Look at all complex solutions of $f(t)=0$ and label them with a black dot (and add a black dot at $\infty$ if $\beta(\infty)=0$). Now, look at all complex solutions of $f(t)-g(t)=0$ and label them with a white dot (and add a white dot at $\infty$ if $\beta(\infty)=1$).
Now comes the fun part.
Because $\beta$ has exactly $d$ pre-images for all real numbers $\lambda$ in the open interval $(0,1)$ (and $\beta$ is continuous), we can connect the black dots with the white dots by $d$ edges (the pre-images of the open interval $(0,1)$), giving us a $2$-coloured graph.
For the power-maps $\beta_n(t)=t^n$, we have just one black dot at $0$ (being the only solution of $t^n=0$), and $n$ white dots at the $n$-th roots of unity (the solutions of $x^n-1=0$). Any $\lambda \in (0,1)$ has as its $n$ pre-images the numbers $\zeta_i.\sqrt[n]{\lambda}$ with $\zeta_i$ an $n$-th root of unity, so we get here as picture an $n$-star. Here for $n=5$:
This dessin should be viewed on the 2-sphere, with the antipodal point of $0$ being $\infty$, so projecting from $\infty$ gives a homeomorphism between the 2-sphere and $\mathbb{C} \cup \{ \infty \}$.
To get all information of the dessin (including possible dots at infinity) it is best to slice the sphere open along the real segments $(\infty,0)$ and $(1,\infty)$ and flatten it to form a ‘diamond’ with the upper triangle corresponding to the closed upper semisphere and the lower triangle to the open lower semisphere.
In the picture above, the right hand side is the dessin drawn in the diamond, and this representation will be important when we come to the action of extenders on more general Grothendieck dessins d’enfant.
Okay, let’s try to get some information about the monoid $\mathcal{E}$ of all Belyi-extenders.
What are its invertible elements?
Well, we’ve seen that the degree of a composition of two extenders is the product of their degrees, so invertible elements must have degree $1$, so are automorphisms of $\mathbb{P}^1_{\mathbb{C}} - \{ 0,1,\infty \} = S^2-\{ 0,1,\infty \}$ permuting the set $\{ 0,1,\infty \}$.
They form the symmetric group $S_3$ on $3$-letters and correspond to the Belyi-extenders
\[ t,~1-t,~\frac{1}{t},~\frac{1}{1-t},~\frac{t-1}{t},~\frac{t}{t-1} \] You can compose these units with an extender to get another extender of the same degree where the roles of $0,1$ and $\infty$ are changed.
For example, if you want to colour all your white dots black and the black dots white, you compose with the unit $1-t$.
Manin and Marcolli use this and claim that you can transform any extender $\eta$ to an extender $\gamma$ by composing with a unit, such that $\gamma(0)=0, \gamma(1)=1$ and $\gamma(\infty)=\infty$.
That’s fine as long as your original extender $\eta$ maps $\{ 0,1,\infty \}$
onto $\{ 0,1,\infty \}$, but usually a Belyi-extender only maps into $\{ 0,1,\infty \}$.
Here are some extenders of degree three (taken from Melanie Wood’s paper Belyi-extending maps and the Galois action on dessins d’enfants):
with dessin $5$ corresponding to the Belyi-extender
\[ \beta(t) = \frac{t^2(t-1)}{(t-\frac{4}{3})^3} \] with $\beta(0)=0=\beta(1)$ and $\beta(\infty) = 1$.
So, a first property of the mystery Manin-Marcolli monoid $\mathcal{E}_{MMM}$ must surely be that all its elements $\gamma(t)$ map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, for they use this property a number of times, for instance to construct a monoid map
\[ \mathcal{E}_{MMM} \rightarrow M_2(\mathbb{Z})^+ \qquad \gamma \mapsto \begin{bmatrix} d & m-1 \\ 0 & 1 \end{bmatrix} \] where $d$ is the degree of $\gamma$ and $m$ is the number of black dots in the dessin (or white dots for that matter).
Further, they seem to believe that the dessin of any Belyi-extender must be a 2-coloured tree.
Already last time we’ve encountered a Belyi-extender $\zeta(t) = \frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with dessin
But then, you may argue, this extender sends all of $0,1$ and $\infty$ to $0$, so it cannot belong to $\mathcal{E}_{MMM}$.
Here’s a trick to construct Belyi-extenders from Belyi-maps $\beta : \mathbb{P}^1 \rightarrow \mathbb{P}^1$, defined over $\mathbb{Q}$ and having the property that there are rational points in the fibers over $0,1$ and $\infty$.
Let’s take an example, the ‘monstrous dessin’ corresponding to the congruence subgroup $\Gamma_0(2)$
with map $\beta(t) = \frac{(t+256)^3}{1728 t^2}$.
As it stands, $\beta$ is not a Belyi-extender because it does not map $1$ into $\{ 0,1,\infty \}$. But we have that
\[ -256 \in \beta^{-1}(0),~\infty \in \beta^{-1}(\infty),~\text{and}~512,-64 \in \beta^{-1}(1) \] (the last one follows from $(t+256)^3-1728 t^2=(t-512)^2(t+64)$).
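One can verify that factorization by expanding both sides with bare-hands polynomial arithmetic (my own check, coefficient lists lowest degree first):

```python
def poly_mul(p, q):
    """Multiply coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

# (t + 256)^3 - 1728 t^2
lhs = poly_sub(poly_mul(poly_mul([256, 1], [256, 1]), [256, 1]), [0, 0, 1728])
# (t - 512)^2 (t + 64)
rhs = poly_mul(poly_mul([-512, 1], [-512, 1]), [64, 1])
assert lhs == rhs   # both expand to t^3 - 960 t^2 + 196608 t + 16777216
```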
We can now pre-compose $\beta$ with the automorphism (defined over $\mathbb{Q}$) sending $0$ to $-256$, $1$ to $-64$ and fixing $\infty$ to get a Belyi-extender
\[ \gamma(t) = \frac{(192t)^3}{1728(192t-256)^2} \] which maps $\gamma(0)=0,~\gamma(1)=1$ and $\gamma(\infty)=\infty$ (so belongs to $\mathcal{E}_{MMM}$) with the same dessin, which is not a tree.
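Here's a quick numerical check (my own) that this $\gamma$ really is $\beta$ pre-composed with the affine map $t \mapsto 192t-256$, and that it fixes $0$ and $1$:

```python
def beta(t):
    return (t + 256) ** 3 / (1728 * t ** 2)

def phi(t):
    # the automorphism sending 0 to -256 and 1 to -64, fixing infinity
    return 192 * t - 256

def gamma(t):
    return (192 * t) ** 3 / (1728 * (192 * t - 256) ** 2)

assert gamma(0) == 0.0 and gamma(1) == 1.0      # 0 and 1 are fixed
for t in (2, 3, 0.5, -1):                        # gamma = beta o phi
    assert abs(gamma(t) - beta(phi(t))) < 1e-9 * abs(beta(phi(t)))
```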
That is, $\mathcal{E}_{MMM}$ can at best consist only of those Belyi-extenders $\gamma(t)$ that map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$ and such that their dessin is a tree.
Let me stop, for now, by asking for a reference (or counterexample) to perhaps the most startling claim in the Manin-Marcolli paper, namely that any 2-coloured tree can be realised as the dessin of a Belyi-extender!
In each case we draw an altitude of height \(h \) from the vertex at \(C \) to \(\overline{AB} \), so that the area (which we will denote by the letter \(K\)) is given by \(K = \frac{1}{2}hc \). But we see that \(h = b\;\sin\;A \) in each of the triangles (since \(\;h=b \) and \(\sin\;A = \sin\;90^\circ = 1 \) in Figure 2.4.1(b), and \(\;h = b\;\sin\;(180^\circ - A) =b\;\sin\;A \) in Figure 2.4.1(c)). We thus get the following formula:
\[\fbox{\(\text{Area} ~=~ K ~=~ \tfrac{1}{2}\,bc\;\sin\;A\)}\label{2.23}\]
The above formula for the area of \(\triangle\,ABC \) is in terms of the known parts \(A \), \(b \), and \(c \). Similar arguments for the angles \(B \) and \(C \) give us:
\[\label{2.24} \begin{align}\text{Area} ~&=~ K =~ \tfrac{1}{2}\,ac\;\sin\;B \\
\text{Area} ~&=~ K =~ \tfrac{1}{2}\,ab\;\sin\;C\label{2.25} \end{align}\]
Notice that the height \(h \) does not appear explicitly in these formulas, although it is implicitly there. These formulas have the advantage of being in terms of parts of the triangle, without having to find \(h \) separately.
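These formulas are easy to sanity-check numerically. The following Python snippet (the helper name is ours, not the text's) verifies two special cases of $K = \frac{1}{2}bc\sin A$:

```python
import math

def area_sas(b, c, A_deg):
    """K = (1/2) b c sin(A): two sides and the included angle, A in degrees."""
    return 0.5 * b * c * math.sin(math.radians(A_deg))

# a right angle reduces to the familiar (1/2) * base * height
assert abs(area_sas(3, 4, 90) - 6.0) < 1e-9
# an equilateral triangle of side 1 has area sqrt(3)/4
assert abs(area_sas(1, 1, 60) - math.sqrt(3) / 4) < 1e-9
```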
Example 2.13
Find the area of the triangle \(\triangle\,ABC \) given
\(A = 33^\circ \), \(b = 5 \), and \(c = 7 \).
Solution:
Using Equation \ref{2.23}, the area \(K \) is given by:
\[ \nonumber \begin{align*} K ~&=~ \tfrac{1}{2}\,bc\;\sin\;A\\ \nonumber &=~ \tfrac{1}{2}\,(5)(7)\;\sin\;33^\circ\\ \nonumber K ~&=~ 9.53 \end{align*}\]
Case 2: Three angles and any side.
Suppose that we have a triangle \(\triangle\,ABC \) in which one side, say, \(a \), and all three angles are known. By the Law of Sines we know that
\[\nonumber
c ~=~ \frac{a\;\sin\;C}{\sin\;A} ~, \]
so substituting this into Equation \ref{2.24} we get:
\[\fbox{\(\text{Area} ~=~ K ~=~ \frac{a^2 \;\sin\;B \;\sin\;C}{2\;\sin\;A}
\)}\label{2.26}\] Similar arguments for the sides \(b \) and \(c \) give us: \[\begin{align} \text{Area} ~&=~ K =~ \frac{b^2 \;\sin\;A \;\sin\;C}{2\;\sin\;B}\label{2.27}\\ \text{Area} ~&=~ K =~ \frac{c^2 \;\sin\;A \;\sin\;B}{2\;\sin\;C}\label{2.28} \end{align}\]
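As a numeric sanity check of formula (2.26), the snippet below (our own Python, with an illustrative helper name) tests two triangles whose areas are known in closed form:

```python
import math

def area_asa(a, A_deg, B_deg, C_deg):
    """K = a^2 sin(B) sin(C) / (2 sin(A)); angles in degrees, summing to 180."""
    A, B, C = (math.radians(x) for x in (A_deg, B_deg, C_deg))
    return a * a * math.sin(B) * math.sin(C) / (2 * math.sin(A))

# 30-60-90 triangle with a = 1 opposite the 30 degree angle:
# its sides are 1, sqrt(3), 2, so the area is (1/2)(1)(sqrt(3)) = sqrt(3)/2
assert abs(area_asa(1, 30, 60, 90) - math.sqrt(3) / 2) < 1e-9
# equilateral triangle with side 2: area sqrt(3)
assert abs(area_asa(2, 60, 60, 60) - math.sqrt(3)) < 1e-9
```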
Example 2.14
Find the area of the triangle \(\triangle\,ABC \) given \(A = 115^\circ \), \(B=25^\circ \), \(C=40^\circ \), and \(a = 12 \).
Solution:
Using Equation \ref{2.26}, the area \(K \) is given by:
\[\begin{align*} K ~&=~ \frac{a^2 \;\sin\;B \;\sin\;C}{2\;\sin\;A}\\ &=~ \frac{12^2 \;\sin\;25^\circ \;\sin\;40^\circ}{2\;\sin\;115^\circ}\\ K ~&=~ 21.58 \end{align*}\]
Case 3: Three sides.
Suppose that we have a triangle \(\triangle\,ABC \) in which all three sides are known. Then
Heron's formula gives us the area:
Heron's formula
For a triangle \(\triangle\,ABC \) with sides \(a \), \(b \), and \(c \), let \(s = \frac{1}{2}\,(a+b+c) \) (i.e. \(2s = a+b+c \) is the perimeter of the triangle). Then the area \(K \) of the triangle is
\[ \text{Area} ~=~ K ~=~ \sqrt{s\,(s-a)\,(s-b)\,(s-c)} ~~.\label{2.29}\]
To prove this, first remember that the area \(K \) is one-half the base times the height. Using \(c \) as the base and the altitude \(h \) as the height, as before in Figure 2.4.1, we have \(K = \frac{1}{2}hc \). Squaring both sides gives us
\[\label{2.30}
K^2 = \tfrac{1}{4}\,h^2 c^2 ~. \]
By the Pythagorean Theorem, we see that \(\;h^2 = b^2 - (AD)^2 \). In Figure 2.4.2(a), we see that \(\;AD = b\;\cos\;A \). And in Figure 2.4.2(b) we see that \(\;AD = b\;\cos (180^\circ - A) = -b\cos\;A \). Hence, in either case we have \(\;(AD)^2 = b^2 \;(\cos\;A)^2 \), and so
\[h^2 ~=~ b^2 - b^2 \;(\cos\;A)^2 ~=~ b^2 \,(1 - (\cos\;A)^2 ) ~=~ b^2 \,(1+ \cos\;A)\,(1- \cos\;A)~.\label{2.31} \]
(Note that the above equation also holds when \(A=90^\circ \) since \(\cos\;90^\circ =0 \) and \(h=b\)). Thus, substituting Equation \ref{2.31} into Equation \ref{2.30}, we have
\[\label{2.32}
K^2 = \tfrac{1}{4}\,b^2 c^2 \,(1+ \cos\;A)\,(1- \cos\;A) ~. \]
By the Law of Cosines we know that
\[\nonumber \begin{align*}
1 + \cos\;A ~&=~ 1 + \frac{b^2 + c^2 - a^2}{2bc} ~=~ \frac{2bc + b^2 + c^2 - a^2}{2bc} ~=~ \frac{(b+c)^2 - a^2}{2bc} ~=~ \frac{((b+c) + a)\,((b+c) - a)}{2bc}\\ \nonumber &=~ \frac{(a + b + c)\,(b + c - a)}{2bc} ~, \end{align*}\]
and similarly
\[\nonumber \begin{align*}
1 - \cos\;A ~&=~ 1 - \frac{b^2 + c^2 - a^2}{2bc} ~=~ \frac{2bc - b^2 - c^2 + a^2}{2bc} ~=~ \frac{a^2 - (b-c)^2}{2bc} ~=~ \frac{(a - (b-c))\,(a + (b-c))}{2bc}\\ \nonumber &=~ \frac{(a - b + c)\,(a + b - c)}{2bc} ~. \end{align*}\] Thus, substituting these expressions into Equation \ref{2.32}, we have \[\nonumber \begin{align} K^2 ~&=~ \tfrac{1}{4}\,b^2 c^2 \;\frac{(a + b + c)\,(b + c - a)}{2bc} \;\cdot\; \frac{(a - b + c)\,(a + b - c)}{2bc}\\ \nonumber &=~ \frac{a + b + c}{2} \;\cdot\; \frac{b + c - a}{2} \;\cdot\; \frac{a - b + c}{2} \;\cdot\; \frac{a + b - c}{2} ~,\\ \end{align}\] and since we defined \(s = \frac{1}{2}\,(a+b+c) \), we see that \[\nonumber K^2 ~=~ s\,(s-a)\,(s-b)\,(s-c) ~,\] so upon taking square roots we get \[\nonumber K ~=~ \sqrt{s\,(s-a)\,(s-b)\,(s-c)} ~~.\quad \textbf{QED}\]
Example 2.15
Find the area of the triangle \(\triangle\,ABC \) given \(a=5 \), \(b=4 \), and \(c = 7 \).
Solution:
Using Heron's formula with \(s = \frac{1}{2}\,(a+b+c) = \frac{1}{2}\,(5+4+7) = 8 \), the area \(K \) is given by:
\[ \begin{align*} K ~&=~ \sqrt{s\,(s-a)\,(s-b)\,(s-c)}\\ &=~ \sqrt{8\,(8-5)\,(8-4)\,(8-7)} ~=~ \sqrt{96} \quad\Rightarrow\quad \boxed{K ~=~ 4\,\sqrt{6} ~\approx~ 9.8} ~. \end{align*}\]
Heron's formula is useful for theoretical purposes (e.g. in deriving other formulas). However, it is not well-suited for calculator use, exhibiting what is called numerical instability for "extreme" triangles, as in the following example.
Example 2.16
Find the area of the triangle \(\triangle\,ABC \) given \(a=1000000 \), \(b=999999.9999979 \), and \(c = 0.0000029 \).
Solution:
To use Heron's formula, we need to calculate \(s = \frac{1}{2}\,(a+b+c) \). Notice that the actual value of \(a+b+c \) is \(2000000.0000008 \), which has \(14 \) digits. Most calculators can store \(12\)-\(14 \) digits internally (even if they display less), and hence may round off that value of \(a+b+c \) to \(2000000 \). When we then divide that rounded value for \(a+b+c \) by \(2 \) to get \(s \), some calculators (e.g. the TI-83 Plus) will give a rounded down value of \(1000000 \).
This is a problem because \(a=1000000 \), and so we would get \(s-a=0 \), causing Heron's formula to give us an area of \(0 \) for the triangle! And this is indeed the incorrect answer that the TI-83 Plus returns. Other calculators may give some other inaccurate answer, depending on how they store values internally. The actual area - accurate to \(15\) decimal places - is \(K = 0.99999999999895 \), i.e. it is basically \(1 \).
The above example shows how problematic
floating-point arithmetic can be. Luckily there is a better formula for the area of a triangle when the three sides are known:
For a triangle \(\triangle\,ABC \) with sides \(a \ge b \ge c \), the area is:
\[ \text{Area} ~=~ K ~=~ \tfrac{1}{4}\,\sqrt{(a + (b+c))\,(c - (a-b))\,(c + (a-b))\,(a + (b-c))}\label{2.33}\]
To use this formula, sort the names of the sides so that \(a \ge b \ge c \). Then perform the operations inside the square root
in the exact order in which they appear in the formula, including the use of parentheses. Then take the square root and divide by \(4 \). For the triangle in Example 2.16, the above formula gives an answer of exactly \(K = 1 \) on the same TI-83 Plus calculator that failed with Heron's formula. What is amazing about this formula is that it is just Heron's formula rewritten! The use of parentheses is what forces the correct order of operations for numerical stability. Another formula for the area of a triangle given its three sides is given below:
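The contrast between the two formulas can be reproduced in any language with floating-point arithmetic. The sketch below (our own code, with illustrative function names) implements both; note that 64-bit floats carry more digits than the 12-digit calculator discussed above, so Heron's formula degrades rather than collapsing all the way to $0$:

```python
import math

def heron(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def stable_area(a, b, c):
    # the rearranged formula above; requires a >= b >= c, and the
    # parentheses must not be "simplified" away
    a, b, c = sorted((a, b, c), reverse=True)
    t = (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    return 0.25 * math.sqrt(t)

# for ordinary triangles the two formulas agree
assert abs(stable_area(5, 4, 7) - heron(5, 4, 7)) < 1e-9

# the "extreme" triangle of Example 2.16: the true area is essentially 1
a, b, c = 1000000.0, 999999.9999979, 0.0000029
assert abs(stable_area(a, b, c) - 1.0) < 1e-3
assert stable_area(a, b, c) > 0 and heron(a, b, c) > 0
```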
For a triangle \(\triangle\,ABC \) with sides \(a \ge b \ge c \), the area is:
\[\label{2.34} \text{Area} ~=~ K ~=~ \tfrac{1}{2}\,\sqrt{a^2 c^2 ~-~ \left( \tfrac{a^2 + c^2 - b^2}{2} \right)^2}\]
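As a quick check (our own code), this last formula reproduces the area from Example 2.15 once the sides are sorted in descending order:

```python
import math

def area_from_sides(a, b, c):
    """Formula (2.34); assumes the sides are sorted so that a >= b >= c."""
    return 0.5 * math.sqrt(a * a * c * c - ((a * a + c * c - b * b) / 2) ** 2)

# Example 2.15 revisited: sides 5, 4, 7 sorted descending are 7, 5, 4,
# and the area should again be 4*sqrt(6)
assert abs(area_from_sides(7, 5, 4) - 4 * math.sqrt(6)) < 1e-12
```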
After the Mandelbulb, several new types of 3D fractals appeared at Fractal Forums. Perhaps one of the most impressive and unique is the Mandelbox. It was first described in this thread, where it was introduced by Tom Lowe (Tglad). Similar to the original Mandelbrot set, an iterative function is applied to points in 3D space, and points which do not diverge are considered part of the set.
Tom Lowe has a great site, where he discusses the history of the Mandelbox, and highlights several of its properties, so in this post I’ll focus on the distance estimator, and try to make some more or less convincing arguments about why a scalar derivative works in this case.
The Mandelbulb and Mandelbrot systems use a simple polynomial formula to generate the escape-time sequence:\(z_{n+1} = z_n^\alpha + c\)
The Mandelbox uses a slightly more complex transformation:\(z_{n+1} = scale*spherefold(boxfold(z_n)) + c\)
I have mentioned
folds before. These are simply conditional reflections. A box fold is a similar construction: if the point, p, is outside a box with a given side length, reflect the point in the box side. Or as code: if (p.x>L) { p.x = 2.0*L-p.x; } else if (p.x<-L) { p.x = -2.0*L-p.x; }
(this must be done for each dimension. Notice, that in GLSL this can be expressed elegantly in one single operation for all dimensions: p = clamp(p,-L,L)*2.0-p)
The
sphere fold is a conditional sphere inversion. If a point, p, is inside a sphere with a fixed radius, R, we will reflect the point in the sphere, e.g: float r = length(p); if (r<R) p=p*R*R/(r*r);
(Actually, the sphere fold used in most Mandelbox implementations is slightly more complex and adds an inner radius, where the length of the point is scaled linearly).
Now, how can we create a DE for the Mandelbox?
Again, it turns out that it is possible to create a scalar running derivative based distance estimator. I think the first scalar formula was suggested by Buddhi in this thread at fractal forums. Here is the code:
float DE(vec3 z) {
	vec3 offset = z;
	float dr = 1.0;
	for (int n = 0; n < Iterations; n++) {
		boxFold(z,dr);        // Reflect
		sphereFold(z,dr);     // Sphere Inversion
		z = Scale*z + offset; // Scale & Translate
		dr = dr*abs(Scale) + 1.0;
	}
	float r = length(z);
	return r/abs(dr);
}
where the sphereFold and boxFold may be defined as:
where the sphereFold and boxFold may be defined as:

void sphereFold(inout vec3 z, inout float dz) {
	float r2 = dot(z,z);
	if (r2 < minRadius2) {
		// linear inner scaling
		float temp = (fixedRadius2/minRadius2);
		z *= temp;
		dz *= temp;
	} else if (r2 < fixedRadius2) {
		// this is the actual sphere inversion
		float temp = (fixedRadius2/r2);
		z *= temp;
		dz *= temp;
	}
}

void boxFold(inout vec3 z, inout float dz) {
	// a reflection is an isometry, so dz is left unchanged
	z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}

It is possible to simplify this even further by storing the scalar derivative as the fourth component of a 4-vector. See Rrrola’s post for an example.
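To make the estimator easy to experiment with outside a shader, here is a plain-Python port (a sketch of my own; the parameter values are illustrative choices, not canonical):

```python
Scale, Iterations = 2.0, 12
minRadius2, fixedRadius2, foldingLimit = 0.25, 1.0, 1.0

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def mandelbox_de(p):
    x, y, z = p
    ox, oy, oz = p                      # the constant offset c
    dr = 1.0
    for _ in range(Iterations):
        # box fold: reflect in the sides of the cube |coord| <= foldingLimit
        x = clamp(x, -foldingLimit, foldingLimit) * 2.0 - x
        y = clamp(y, -foldingLimit, foldingLimit) * 2.0 - y
        z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z
        # sphere fold: linear inner scaling, then a true sphere inversion
        r2 = x * x + y * y + z * z
        if r2 < minRadius2:
            t = fixedRadius2 / minRadius2
            x, y, z, dr = x * t, y * t, z * t, dr * t
        elif r2 < fixedRadius2:
            t = fixedRadius2 / r2
            x, y, z, dr = x * t, y * t, z * t, dr * t
        # scale and translate, updating the running scalar derivative
        x, y, z = Scale * x + ox, Scale * y + oy, Scale * z + oz
        dr = dr * abs(Scale) + 1.0
    r = (x * x + y * y + z * z) ** 0.5
    return r / abs(dr)

assert mandelbox_de((0.0, 0.0, 0.0)) == 0.0   # the origin never escapes
assert mandelbox_de((3.0, 0.0, 0.0)) > 0.0    # outside points get a positive bound
```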
However, one thing that is missing, is an explanation of why this distance estimator works. And even though I do not completely understand the mechanism, I’ll try to justify this formula. It is not a strict derivation, but I think it offers some understanding of why the scalar distance estimator works.
A Running Scalar Derivative
Let us say that for a given starting point, p, we obtain a length, R, after having applied a fixed number of iterations. If the length is less than \(R_{min}\), we consider the orbit to be bounded, and the point is thus part of the fractal; otherwise it is outside the fractal set. We want to obtain a distance estimate for this point p. Now, the distance estimate must tell us how far we can go in any direction before the final radius falls below the minimum radius, \(R_{min}\), and we hit the fractal surface. One distance estimate approximation would be to find the direction where R decreases fastest, and do a linear extrapolation to estimate when R becomes less than \(R_{min}\):\(DE=(R-R_{min})/DR\)
where DR is the magnitude of the derivative along this steepest descent (this is essentially Newton root finding).
In the previous post, we argued that the linear approximation to a vector-function is best described using the Jacobian matrix:\(
F(p+dp) \approx F(p) + J(p)dp
\)
The fastest decrease is thus given by the induced matrix norm of J, since the matrix norm is the maximum of \(|J v|\) for all unit vectors v.
So, if we could calculate the (induced) matrix norm of the Jacobian, we would arrive at a linear distance estimate:\(
DE=(R-R_{min})/\|J\|
\)
Calculating the Jacobian matrix norm sounds tricky, but let us take a look at the different transformations involved in the iteration loop: Reflections (R), Sphere Inversions (SI), Scalings (S), and Translations (T). It is also common to add a rotation (ROT) inside the iteration loop.
Now, for a given point, we will end applying an iterated sequence of operations to see if the point escapes:\(
Mandelbox(p) = (T\circ S\circ SI\circ R\circ \ldots\circ T\circ S\circ SI\circ R)(p)
\)
In the previous part, we argued that the most obvious derivative for a R^3 to R^3 function is a Jacobian. According to the chain rule for Jacobians, the Jacobian for a function such as this Mandelbox(z) will be of the form:\(
J_{Mandelbox} = J_T * J_S * J_{SI} * J_R \cdots J_T * J_S * J_{SI} * J_R
\)
In general, all of these matrices will be functions of the position in R^3, and should be evaluated at different positions. Now, let us take a look at the individual Jacobian matrices for the Mandelbox transformations.
Translations
A translation by a constant will simply have an identity matrix as Jacobian matrix, as can be seen from the definitions.
Reflections
Consider a simple reflection in one of the coordinate system planes. The transformation matrix for this is:\(
T_{R} = \begin{bmatrix}
1 & & \\
& 1 & \\
& & -1
\end{bmatrix}
\)
Now, the Jacobian of a transformation defined by multiplying with a constant matrix is simply the constant matrix itself. So the Jacobian is also simply a reflection matrix.
Rotations
A rotation (for a fixed angle and rotation vector) is also a constant matrix, so the Jacobian is also simply a rotation matrix.
Scalings
The Jacobian for a uniform scaling operation is:\(
J_S = scale*\begin{bmatrix}
1 & & \\
& 1 & \\
& & 1
\end{bmatrix}
\)
Sphere Inversions
Below can be seen how the sphere fold (the conditional sphere inversion) transforms a uniform 2D grid. As can be seen, the sphere inversion is an anti-conformal transformation – the angles are still 90 degrees at the intersections, except for the boundary where the sphere inversion stops.
The Jacobian for sphere inversions is the most tricky. But a derivation leads to:\(
J_{SI} = (r^2/R^2) \begin{bmatrix}
1-2x^2/R^2 & -2xy/R^2 & -2xz/R^2 \\
-2yx/R^2 & 1-2y^2/R^2 & -2yz/R^2 \\
-2zx/R^2 & -2zy/R^2 & 1-2z^2/R^2
\end{bmatrix}
\)
Here R is the length of p, and r is radius of the inversion sphere. I have extracted the scalar front factor, so that the remaining part is an orthogonal matrix (as is also demonstrated in the derivation link).
Notice that all reflection, translation, and rotation Jacobian matrices will not change the length of a vector when multiplied with it. The Jacobian for the scaling matrix will multiply the length by the scale factor, and the Jacobian for the sphere inversion will multiply the length by a factor of (r^2/R^2) (notice that the length of the point must be evaluated at the correct point in the sequence).
Now, if we calculated the matrix norm of the Jacobian:\(
|| J_{Mandelbox} || = || J_T * J_S * J_{SI} * J_R \cdots J_T * J_S * J_{SI} * J_R ||
\)
we can easily do it, since we only need to keep track of the scalar factor whenever we encounter a Scaling Jacobian or a Sphere Inversion Jacobian. All the other matrices will simply not change the length of a given vector and may be ignored. Also notice that only the sphere inversion depends on the point where the Jacobian is evaluated; if this operation were not present, we could simply count the number of scalings performed and multiply the escape length with \(2^{-scale}\).
This means that the matrix norm of the Jacobian can be calculated using only a simple scalar variable, which is scaled whenever we apply the scaling or sphere inversion operation.
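As a concrete illustration, here is a minimal sketch (in Python, with assumed parameter values; not the article's exact code) of such a running scalar derivative for a typical Mandelbox loop. The scalar is multiplied by the sphere-fold factor and by the scale factor, while the folds and the translation leave it untouched; the escape length R is then divided by the scalar (neglecting the small \(R_{min}\) offset):

```python
import math

def mandelbox_de(p, scale=2.0, min_r=0.5, fix_r=1.0, max_iter=30, escape=100.0):
    """Linear distance estimate DE ~ R / DR with a running scalar derivative DR."""
    cx, cy, cz = p          # the starting point, re-added each iteration
    x, y, z = p
    dr = 1.0                # running scalar |J|
    r = math.sqrt(x * x + y * y + z * z)
    for _ in range(max_iter):
        # box fold: conditional reflections, an isometry -> dr unchanged
        x = 2 * fix_r - x if x > fix_r else (-2 * fix_r - x if x < -fix_r else x)
        y = 2 * fix_r - y if y > fix_r else (-2 * fix_r - y if y < -fix_r else y)
        z = 2 * fix_r - z if z > fix_r else (-2 * fix_r - z if z < -fix_r else z)
        # sphere fold: conditional sphere inversion, scales lengths by f
        r2 = x * x + y * y + z * z
        if r2 < min_r * min_r:
            f = (fix_r * fix_r) / (min_r * min_r)
        elif r2 < fix_r * fix_r:
            f = (fix_r * fix_r) / r2
        else:
            f = 1.0
        x, y, z, dr = f * x, f * y, f * z, f * dr
        # scale, then translate by the starting point; the "+1" upper-bounds
        # the Jacobian norm of adding c (triangle inequality)
        x, y, z = scale * x + cx, scale * y + cy, scale * z + cz
        dr = abs(scale) * dr + 1.0
        r = math.sqrt(x * x + y * y + z * z)
        if r > escape:
            break
    return r / abs(dr)
```

The parameter names (`scale`, `min_r`, `fix_r`) are illustrative; any conditional-fold variant works as long as the same conditions are used for the derivative as for the point itself.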
This seems to hold for all conformal transformations (strictly speaking, sphere inversions and reflections are not conformal but anti-conformal, since orientations are reversed). Wikipedia also mentions that any function with a Jacobian equal to a scalar times a rotation matrix must be conformal, and it seems the converse is also true: any conformal or anti-conformal transformation in 3D has a Jacobian equal to a scalar times an orthogonal matrix.
Final Remarks
There are some reasons why I’m not completely satisfied with the above derivation: first, the translational part of the Mandelbox transformation is not really a constant. It would be if we were considering a Julia-type Mandelbox, where you add a fixed vector at each iteration, but here we add the starting point, and I’m not sure how to express the Jacobian of this transformation. Still, it is possible to do Julia-type Mandelbox fractals (they are quite similar), and there the derivation should be more sound. The transformations used in the Mandelbox are also conditional, and not simple reflections and sphere inversions, but I don’t think that matters with regard to the Jacobian, as long as the same conditions are used when calculating it.
Update: As Knighty pointed out in the comments below, it is possible to see why the scalar approximation works in the Mandelbrot case too:
Let us go back to the original formula:
\(f(z) = scale*spherefold(boxfold(z)) + c\)
and take a look at its Jacobian:
\(J_f = J_{scale}*J_{spherefold}*J_{boxfold} + I\)
Now by using the triangle inequality for matrix norms, we get:
\(||J_f|| = ||J_{scale}*J_{spherefold}*J_{boxfold} + I|| \) \(\leq ||J_{scale}*J_{spherefold}*J_{boxfold}|| + ||I|| \) \(= S_{scale}*S_{spherefold}*S_{boxfold} + 1 \)
where the S’s are the scalars for the given transformations. This argument can also be applied to repeated applications of the Mandelbox transformation. This means that if we add one to the running derivative at each iteration (like in the Mandelbulb case), we get an upper bound of the true derivative. And since our distance estimate is calculated by dividing by the running derivative, this approximation yields a smaller distance estimate than the true one (which is good).
Another point is that it is striking that we end up with the same scalar estimator as for the tetrahedron in part 3 (except that it has no sphere inversion). But for the tetrahedron, the scalar estimator was based on straightforward arguments, so perhaps it is possible to come up with a much simpler argument for the running scalar derivative for the Mandelbox as well.
There must also be some kind of link between the gradient and the Jacobian norm. It seems that the norm of the Jacobian should be equal to the length of the gradient of the length of the Mandelbox(p) function: \(||J|| = |\nabla |MB(p)||\), since they both describe how fast the length varies along the steepest descent path. This would also make the link to the gradient-based numerical methods (discussed in part 5) more clear.
And finally, if we reuse our argument for a linear zero-point approximation of the escape length on the Mandelbulb, it just doesn’t work. Here it is necessary to introduce a log term (\(DE= 0.5*r*log(r)/dr\)). Of course, the Mandelbulb is not composed of conformal transformations, so the “Jacobian to scalar running derivative” argument is not valid anymore, but we already have an expression for the scalar running derivative for the Mandelbulb, and this expression does not seem to work well with the \(DE=(r-r_{min})/dr\) approximation. So it is not clear under what conditions this approximation is valid.
Update: Again, Knighty makes some good arguments in the comments below for why the linear approximation holds here. The next part is about dual numbers and distance estimation.
1. Homework Statement
Determine the period of oscillations of a simple pendulum ( a particle of mass m suspended by a string of length l in a gravitational field) as a function of the amplitude of oscillations.
2. Homework Equations
[tex] T(E) = \sqrt{2m} \int^{x_2(E)}_{x_1(E)}\frac{dx}{\sqrt{E-U(x)}} [/tex]
where T is the period of oscillations
3. The Attempt at a Solution
I only need the expression of E(x) and the problem is pretty much solved, but I can't figure out why (which is rather embarrassing) the energy of the pendulum is
[tex] E = \frac{1}{2}ml^2\dot{\phi}^2 - mgl\cos{\phi} = -mgl\cos{\phi_0} [/tex]
where [tex] \phi [/tex] is the angle between the string and the vertical and [tex] \phi_0 [/tex] the maximum value of [tex] \phi [/tex]
Any hints?
thanks
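For reference, the amplitude dependence can be checked numerically; the period integral above reduces to a complete elliptic integral. This is a quick sketch (using SciPy's convention that `ellipk` takes the parameter \(m = k^2\)):

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral K(m), m = k^2

def pendulum_period(phi0, l=1.0, g=9.81):
    """Period of a simple pendulum released from rest at amplitude phi0 (radians)."""
    # T = 4*sqrt(l/g) * K(sin^2(phi0/2)); reduces to 2*pi*sqrt(l/g) as phi0 -> 0
    return 4.0 * np.sqrt(l / g) * ellipk(np.sin(phi0 / 2.0) ** 2)

T_small = pendulum_period(1e-4)        # essentially the small-angle result
T_large = pendulum_period(np.pi / 2)   # noticeably longer at 90 degrees
```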
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which measures 4.5V on my multimeter), but the voltage multiplier doesn't seem to work. I first tried making a voltage doubler and it showed 9V (which is correct, I suppose), but when I try a quadrupler, for example, the voltage starts from around 6V and drops by about 0.1V per second.
Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec.
But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them.
So what did the guys in the EE chat say...
The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you...
A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it.
Hi all! There is a theorem that links the imaginary and the real part of a time-dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics. Who can help?
The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names...
I have a weird question: The output on an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V...and the multimeter averages it out and displays 4.5V). But then if I put that output to a voltage doubler, the voltage should be 18V, not 9V right? Since the voltage doubler will output in DC.
I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz), but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked afterwards and the astable multivibrator still works.
I searched the whole god damn internet, asked every god damn forum, and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh 1 billion tons....
something so "simple" turns out to be hard as duck
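For a rough sense of scale (an illustrative sketch, with component values I made up rather than from the chat): an ideal diode-capacitor ladder (Cockcroft-Walton style) delivers about 2·n·Vpeak for n stages, but its voltage droop under load current grows roughly like the cube of the stage count, which is why a 9 V input makes 1500 V so painful:

```python
import math

V_peak, V_target = 9.0, 1500.0
n = math.ceil(V_target / (2 * V_peak))   # stages needed in the ideal, no-load case
f, C, I_load = 671.0, 10e-6, 1e-3        # assumed drive frequency, stage cap, load current
# leading-order (n^3) term of the usual Cockcroft-Walton droop estimate:
droop = I_load / (f * C) * (2 * n**3 / 3)
print(n, droop)  # 84 stages, and the droop dwarfs the ideal 2*n*V_peak output
```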
In Peskin's book on QFT, the sum over zero point energy modes is an infinite c-number; fortunately, its experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it?
If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that, experimentally, we always obtain an infinite spectrum.
@AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that. They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics
I have recently come up with a design of a conceptual electromagntic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ...
I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array "
@ACuriousMind What confuses me is the interpretation of Peskin to this infinite c-number and the experimental fact
He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H".
@ACuriousMind Thank you, I understood your explanations clearly. However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite and this is the confusion) and a higher level.
It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale.
according to the author, the energy difference is always infinite according to two facts: first, the ground state energy is infinite; secondly, the energy difference is defined by subtracting a higher level energy from the ground state one.
@enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: Causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization
The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e. dividing by zero to get infinities. The problem stems from the fact that $R_{aa}$ can be zero due to using point particles; overall it's an infinite constant added to the particle's energy that we throw away, just as in QFT
@bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why the experiment didn't predict such infinities that arose in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values. |
Yes, I would agree they are correct; in essence they differ by the placement of the $\forall x$ quantifier, you could say.
The first can be made really topological because the $\delta$ varies by point, and so is just another way of saying "some neighbourhood of $x$". If we denote by $\mathcal{N}_X(x)$ the set of all neighbourhoods of $x$ in a space $X$, the regular continuity can be formulated as
$$ \forall N \in \mathcal{N}_Y(f(x)) \exists N' \in \mathcal{N}_X(x): f[N'] \subseteq N$$ where the last inclusion could also have been written $N' \subseteq f^{-1}[N]$, to stay closer to your formulation.
The second one is not topological, but a notion that belongs to so-called uniform spaces, spaces with a uniform structure. If I choose the "entourages"-view so that $(X,d)$ has a metric uniform structure $\mathcal{E}_d$ generated by entourages of the form $\{(x,y): d(x,y) < \varepsilon\}$ etc. we formulate the uniform continuity as
$$\forall U \in \mathcal{E}_{(Y,d)}: (f \times f)^{-1}[U] \in \mathcal{E}_{(X,d)}$$ which looks like normal continuity between topological spaces (inverse image of a "special" subset in the co-domain is "special" in the domain).
The notion is not purely topological because the global "narrowness" $\delta$ cannot be really stated as a condition on open sets, but needs the metric here. |
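As a concrete illustration of the gap between the two notions (my addition, not part of the original answer): $f(x) = x^2$ on $\mathbb{R}$ is continuous but not uniformly continuous, since
$$|f(x+h) - f(x)| = |2xh + h^2| \geq 2|x||h| - h^2 \, ,$$
so the $\delta$ that works at $x$ must shrink roughly like $\varepsilon/(2|x|)$; no single $\delta$ (equivalently, no single entourage) serves all $x$ at once, which is exactly the $\forall x$ placement described above.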
Purpose
EBEs stands for Empirical Bayes Estimates. The EBEs are the most probable values of the individual parameters (parameters for each individual), given the estimated population parameters and the data of each individual. In more mathematical language, they are the mode of the conditional parameter distribution for each individual.
These values are useful to compute the most probable prediction for each individual, for comparison with the data (for instance in the Individual Fits plot).
Calculation of the EBEs (conditional mode)
When launching the “EBEs” task, the mode of the conditional parameter distribution is calculated.
Conditional distribution
The conditional distribution is \( p(\psi_i|y_i;\hat{\theta})\), with \(\psi_i\) the individual parameters for individual i, \(\hat{\theta}\) the estimated population parameters, and \(y_i\) the data (observations) for individual i. The conditional distribution represents the uncertainty of the individual’s parameter value, taking into account the information at hand for this individual: the observed data for that individual, the covariate values for that individual, and the fact that the individual belongs to the population for which we have already estimated the typical parameter value (fixed effects) and the variability (standard deviation of the random effects). It is not possible to directly calculate the probability for a given \(\psi_i\) (no closed form), but it is possible to obtain samples from the distribution using a Markov-Chain Monte-Carlo procedure (MCMC). This is detailed more on the Conditional Distribution page.
Mode of the conditional distribution
The mode is the parameter value with the highest probability:
$$ \hat{\psi}_i^{mode} = \underset{\psi_i}{\textrm{arg max }}p(\psi_i|y_i;\hat{\theta})$$
To find the mode, we thus need to maximize the conditional probability with respect to the individual parameter value \(\psi_i\).
Individual random effects
Once the individual parameters values \(\psi_i\) are known, the corresponding individual random effects can be calculated using the population parameters and covariates. Taking the example of a parameter \(\psi\) having a normal distribution within the population and that depends on the covariate \(c\), we can write for individual \(i\):
$$ \psi_i = \psi_{pop} + \beta \times c_i + \eta_i$$
As \(\psi_i\) (estimated conditional mode), \(\psi_{pop}\) and \(\beta\) (population parameters) and \(c_i\) (individual covariate value) are known, the individual random effect \(\eta_i\) can easily be calculated.
Algorithm
For each individual, to find the \(\psi_i\) values that maximizes the conditional distribution, we use the Nelder-Mead Simplex algorithm [1].
As the conditional distribution does not have a closed form solution (i.e \(p(\psi_i|y_i;\hat{\theta})\) cannot be directly or easily calculated for a given \(\psi_i\)), we use the Bayes law to rewrite it in the following way (leaving the population parameters \(\hat{\theta}\) out for clarity):
$$p(\psi_i|y_i)=\frac{p(y_i|\psi_i)p(\psi_i)}{p(y_i)}$$
The conditional density function of the data given the individual parameter values (i.e. \(p(y_i|\psi_i)\)) is easy to calculate, as is the density function for the individual parameters (i.e. \(p(\psi_i)\)), because they have closed form solutions. On the other hand, the likelihood \(p(y_i)\) has no closed form solution. But as it does not depend on \(\psi_i\), we can leave it out of the optimization procedure and only optimize \(p(y_i|\psi_i)p(\psi_i)\).
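As a toy illustration of this trick (a sketch with an invented model and made-up numbers, not the software's actual implementation), one can maximize \(\log p(y_i|\psi_i) + \log p(\psi_i)\) for a single individual with the Nelder-Mead method named in the Algorithm section:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative exponential-decay model with a log-normal individual parameter.
rng = np.random.default_rng(0)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
psi_pop, omega, sigma = 1.0, 0.3, 0.05        # population median, BSV sd, residual sd
psi_true = psi_pop * np.exp(0.2)              # this individual's true parameter
y = np.exp(-psi_true * t) + rng.normal(0.0, sigma, t.size)

def neg_log_posterior(u):
    """-(log p(y_i | psi_i) + log p(psi_i)), up to constants."""
    log_psi = u[0]
    pred = np.exp(-np.exp(log_psi) * t)
    loglik = -0.5 * np.sum((y - pred) ** 2) / sigma**2        # log p(y|psi)
    logprior = -0.5 * (log_psi - np.log(psi_pop)) ** 2 / omega**2  # log p(psi)
    return -(loglik + logprior)

res = minimize(neg_log_posterior, x0=[np.log(psi_pop)], method="Nelder-Mead",
               options={"maxiter": 200, "fatol": 1e-6})
psi_mode = float(np.exp(res.x[0]))            # the EBE (conditional mode)
eta_mode = float(res.x[0] - np.log(psi_pop))  # corresponding individual random effect
```

Note how the intractable \(p(y_i)\) never appears: only the two closed-form densities enter the objective, exactly as described above.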
Running the EBEs task
When running the EBEs task, the progress is displayed in the pop-up window:
Dependencies between tasks: The “Population parameters” task must be run before launching the EBEs task. The EBEs task is recommended before calculating the Standard errors task and the Log-likelihood task using the linearization method.
Outputs
In the graphical user interface
In the Indiv.Param section of the Results tab, a summary of the individual parameters is proposed (min, max, median and quartiles), as shown in the figure below. The elapsed time for this task is also shown.
To see the estimated parameter value for each individual, the user can click on the [INDIV. ESTIM.] section. Notice that the user can also see them in the output files, which can be accessed via the folder icon at the bottom left. Notice that there is a “Copy table” icon on the top of each table to copy them in Excel, Word, … The table format and display will be kept.
In the output folder
After having run the EBEs task, the following files are available:
summary.txt: contains the summary statistics (as displayed in the GUI)
IndividualParameters/estimatedIndividualParameters.txt: the individual parameters for each subject-occasion are displayed. In addition to the already present approximation of the conditional mean from SAEM (*_SAEM), the conditional mode (*_mode) is added to the file.
IndividualParameters/estimatedRandomEffects.txt: the individual random effects for each subject-occasion are displayed (*_mode), in addition to the already present value based on the approximate conditional mean from SAEM (*_SAEM).
More details about the content of the output files can be found here.
Settings
The settings are accessible through the interface via the button next to the EBEs task.
Maximum number of iterations (default: 200): maximum number of iterations of the Nelder-Mead Simplex algorithm, for each individual. Even if the tolerance criterion is not met, the algorithm stops after that number of iterations.
Tolerance (default: 1e-6): absolute tolerance criterion. The algorithm stops when the change of the conditional probability value between two iterations is less than the tolerance.
Defining parameters
Level: \( N \) = \( 23 \)
Weight: \( k \) = \( 7 \)
Nonzero newspaces: \( 2 \)
Newforms: \( 4 \)
Sturm bound: \(308\)
Trace bound: \(1\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{7}(\Gamma_1(23))\).
                   Total  New  Old
Modular forms        143  143    0
Cusp forms           121  121    0
Eisenstein series     22   22    0

Decomposition of \(S_{7}^{\mathrm{new}}(\Gamma_1(23))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label 23.7.b, character \(\chi_{23}(22, \cdot)\) (degree 1); newforms: 23.7.b.a (dimension 1), 23.7.b.b (dimension 2), 23.7.b.c (dimension 8)
Label 23.7.d, character \(\chi_{23}(5, \cdot)\) (degree 10); newform: 23.7.d.a (dimension 110)
In Exercises [exer:10.5.1]–[exer:10.5.12] find the general solution.

[exer:10.5.1] [exer:10.5.3] [exer:10.5.5] [exer:10.5.7] [exer:10.5.9] [exer:10.5.11] [exer:10.5.13] [exer:10.5.19] [exer:10.5.21] [exer:10.5.23] [exer:10.5.25] [exer:10.5.27] [exer:10.5.29] (the coefficient matrices for these exercises did not survive extraction)
[exer:10.5.31] \({\bf y}' =\threebythree{-3}{-3}445{-8}23{-5}{\bf y}\) [exer:10.5.32] \({\bf y}'={\threebythree{-3}{-1}01{-1}0{-1}{-1}{-2}}{\bf y}\) [exer:10.5.33] Under the assumptions of Theorem [thmtype:10.5.1], suppose \({\bf u}\) and \(\hat{\bf u}\) are vectors such that
\[(A-\lambda_1I){\bf u}={\bf x}\quad\mbox{and }\quad (A-\lambda_1I)\hat{\bf u}={\bf x},\]
and let
\[{\bf y}_2={\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t} \quad\mbox{and }\quad \hat{\bf y}_2=\hat{\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}.\]
Show that \({\bf y}_2-\hat{\bf y}_2\) is a scalar multiple of \({\bf y}_1={\bf x}e^{\lambda_1t}\). [exer:10.5.34] Under the assumptions of Theorem [thmtype:10.5.2], let
\[\begin{aligned} {\bf y}_1 &=&{\bf x} e^{\lambda_1t},\\ {\bf y}_2&=&{\bf u}e^{\lambda_1t}+{\bf x} te^{\lambda_1t},\mbox{ and }\\ {\bf y}_3&=&{\bf v}e^{\lambda_1t}+{\bf u}te^{\lambda_1t}+{\bf x} {t^2e^{\lambda_1t}\over2}.\end{aligned}\]
Complete the proof of Theorem [thmtype:10.5.2] by showing that \({\bf y}_3\) is a solution of \({\bf y}'=A{\bf y}\) and that \(\{{\bf y}_1,{\bf y}_2,{\bf y}_3\}\) is linearly independent. [exer:10.5.35] Suppose the matrix \(A\) has a repeated eigenvalue \(\lambda_1\) and the associated eigenspace is one-dimensional. Let \({\bf x}\) be a \(\lambda_1\)-eigenvector of \(A\). Show that if \((A-\lambda_1I){\bf u}_1={\bf x}\) and \((A-\lambda_1I){\bf u}_2={\bf x}\), then \({\bf u}_2-{\bf u}_1\) is parallel to \({\bf x}\). Conclude from this that all vectors \({\bf u}\) such that \((A-\lambda_1I){\bf u}={\bf x}\) define the same positive and negative half-planes with respect to the line \(L\) through the origin parallel to \({\bf x}\).
[exer:10.5.36] \({\bf y}'=\twobytwo{-3}{-1}41{\bf y}\)
[exer:10.5.37] \({\bf y}'=\twobytwo2{-1}10{\bf y}\)
[exer:10.5.38] \({\bf y}'=\twobytwo{-1}{-3}35{\bf y}\)
[exer:10.5.39] \({\bf y}'=\twobytwo{-5}3{-3}1{\bf y}\)
[exer:10.5.40] \({\bf y}'=\twobytwo{-2}{-3}34{\bf y}\)
[exer:10.5.41] \({\bf y}'=\twobytwo{-4}{-3}32{\bf y}\)
[exer:10.5.42] \({\bf y}'=\twobytwo0{-1}1{-2}{\bf y}\)
[exer:10.5.43] \({\bf y}'=\twobytwo01{-1}2{\bf y}\)
[exer:10.5.44] \({\bf y}'=\twobytwo{-2}1{-1}0{\bf y}\)
[exer:10.5.45] \({\bf y}'=\twobytwo0{-4}1{-4}{\bf y}\)
Ever need to convert between specular exponent, roughness, and glossiness? For the Blinn-Phong BRDF, these three values represent the same concept. The specular exponent is the actual value used to compute the BRDF; let’s call that variable \(e\). The roughness \(m\) and glossiness \(g\) values are numbers between 0 and 1, and they are additive inverses of each other. In other words
$$g = 1 - m \, ,$$
$$m = 1 - g \, .$$
The exponent is given by
$$e = \frac{2}{m^2} - 2$$
or
$$e = \frac{2}{{(1 - g)}^2} - 2 \, .$$
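These relations are easy to wrap in small helpers (a minimal sketch; the function names are mine):

```python
import math

def exponent_from_roughness(m: float) -> float:
    """Blinn-Phong specular exponent e from roughness m in (0, 1]."""
    return 2.0 / (m * m) - 2.0

def exponent_from_glossiness(g: float) -> float:
    """Same conversion via glossiness, using m = 1 - g."""
    return exponent_from_roughness(1.0 - g)

def roughness_from_exponent(e: float) -> float:
    """Inverse mapping: m = sqrt(2 / (e + 2))."""
    return math.sqrt(2.0 / (e + 2.0))

# round trip: a mirror-like surface (small m) gives a large exponent
e = exponent_from_roughness(0.1)   # about 198
```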
The BRDF is
$$ f_r(\omega_i, \omega_o) = \pi \frac{F(\omega_i \cdot \omega_h) D(\omega_i) G_2(\omega_i \cdot \omega_g, \omega_o \cdot \omega_g)}{4 (\omega_i \cdot \omega_g) (\omega_o \cdot \omega_g)}$$
where Blinn-Phong is given by
$$ D_{\mathrm{Blinn-Phong}}(\omega_i, \omega_h, e) = \frac{e+2}{2\pi} (\omega_i \cdot \omega_h)^e $$
and the resulting specular illumination is given by
$$ \mathbf{L}_o = f_r(\omega_i, \omega_o) \mathbf{L}_i (\omega_i) \mathbf{V}_i (\omega_i) (\omega_g \cdot \omega_i) \, . $$ |
So I have a differential equation :
$\left\{\begin{split} & c'(t) = \alpha q(t) c(t)^{2/3} - \beta(1 - q(t))c(t)\\ &\chi'(t) = \kappa c(t) q(t) \\ & q(t) = \int_{-\infty}^t \Big( \frac{\chi(u)}{\chi(u) + \chi_c}e^{-(t-u)/t0}\Big)du\end{split} \right.$
I wrote the following DDE :
alpha = 0.1
beta = 0.1
kappa = 0.1
t0 = 0.5
chic = 1/2
sol = NDSolve[{c'[t] == alpha*c[t]^(2/3)*q[t] - beta*(1 - q[t])*c[t],
   chi'[t] == kappa*c[t]*q[t],
   q[t] == Integrate[chi[u]/((chi[u] + chic))*Exp[-(t - u)/t0], {u, -5, t}],
   q[0] == 1/2, chi[t /; t <= 0] == 1/2*(Tanh[-t*10]),
   c[t /; t <= 0] == 20}, {c, chi}, {t, 0, 14}]
But I'm getting those errors :
NDSolve::rdelay: Delayed time u = … computed at t = … did not evaluate to a real number.
General::stop: Further output of StringForm::sfr will be suppressed during this calculation.
NDSolve::rdelay: Delayed time u = … computed at t = … did not evaluate to a real number.
I don't understand what this means. Could you explain the errors, and how to fix them?
Thanks! |
A possible ideal-gas cycle operates as follows:
1. From an initial state ($p_1$, $V_1$) the gas is cooled at constant pressure to ($p_1$, $V_2$); let's call the start and end temperatures $T_1$ and $T_2$.
2. The gas is heated at constant volume to ($p_2$, $V_2$); let's call the start and end temperatures $T_2$ and $T_3$.
3. The gas expands adiabatically back to ($p_1$, $V_1$); let's call the start and end temperatures $T_3$ and $T_1$.
Assuming constant heat capacities, show that the thermal efficiency η is
$$ \eta=1-\gamma\frac{V_2/V_1 -1}{p_2/p_1-1} $$
Efficiency is defined as $$\eta=\frac{W}{Q_h},$$ the work done over the heat input. The heat enters at stage 2 (and some leaves at stage 1, but that doesn't matter). So I need to find the heat entering at stage 2 and the work done.
Stage 1:
From the ideal gas equation we get: $$ p_1V_1=nRT_1, \ \ \ \ p_1V_2=nRT_2 \implies \frac{T_2}{T_1}=\frac{V_2}{V_1} $$
The work done is just force times distance which is pressure times change in volume:
$$ \Delta W=-p_1\Delta V=-p_1(V_2-V_1) $$
Stage 2:
The volume doesn't change, so no work is done. However, heat is put into the system, increasing the pressure. We need to find this heat.
$\Delta U= Q_h$
For an ideal gas we have: $$ \Delta U= C_v\Delta T=C_v(T_3-T_2) $$
Where $C_v$ is heat capacity at constant volume.
Stage 3:
Stage 3 is adiabatic so $\Delta U=\Delta W=C_v(T_1-T_3)$
We also have, using the ideal gas law: $$ T_3=\frac{p_2V_2}{p_1V_1}T_1 $$
Let us sub this into the efficiency:
$$ \eta=\frac{C_v(T_1-T_3)-p_1(V_2-V_1)}{C_v(T_3-T_2)} $$
If we get $T_3$ and $T_2$ in terms of $T_1$ and sub these in we get:
$$ \eta=-1-\frac{p_1(V_2-V_1)}{C_vT_1(\frac{p_2V_2}{p_1V_1}-\frac{V_2}{V_1})} $$ And with the ideal gas law, with $n=1$ for simplicity we get $T_1=\frac{p_1V_1}{R}$
$$ \implies\eta=-1-\frac{R(V_2/V_1-1)}{C_vT_1(\frac{p_2V_2}{p_1V_1}-\frac{V_2}{V_1})} $$
$R=C_p-C_v$ and $\gamma=C_p/C_v$
$$ \implies\eta=-1-\frac{(\gamma-1)(V_2/V_1-1)}{C_vT_1(\frac{p_2}{p_1}-1)\frac{V_2}{V_1}} $$
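One way to sanity-check an efficiency derivation like this is to run the cycle with concrete numbers straight from the first law. This is only a sketch: the values $p_1=1$, $V_1=2$, $V_2=1$, $\gamma=1.4$ and $nR=1$ are my own choices, and $p_2$ is fixed by requiring the adiabat through $(p_1,V_1)$ to pass through $(p_2,V_2)$:

```python
gamma = 1.4
Cv = 1 / (gamma - 1)          # per nR = 1
Cp = gamma * Cv
p1, V1, V2 = 1.0, 2.0, 1.0
p2 = p1 * (V1 / V2) ** gamma  # adiabatic closure: p1 V1^gamma = p2 V2^gamma

T1, T2, T3 = p1 * V1, p1 * V2, p2 * V2  # ideal gas law with nR = 1

Q_cool = Cp * (T2 - T1)  # stage 1, isobaric (negative: heat leaves)
Q_heat = Cv * (T3 - T2)  # stage 2, isochoric (positive: heat enters)
W_1 = p1 * (V2 - V1)     # work done BY the gas in stage 1
W_3 = Cv * (T3 - T1)     # adiabatic work by the gas equals -dU
W_net = W_1 + W_3
eta = W_net / Q_heat
```

With these numbers the first law closes (W_net = Q_heat + Q_cool) and η ≈ 0.146, a useful cross-check on the algebra above.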
I really have no clue whether this is right or wrong. |
No offense but it will be much more complicated than what you think... I'm not even sure that you are familiar with risk-neutral pricing in the first place? I'll try to give you some clues.
This security is called a basket option. On top of the multi-asset feature, there are non-trivial mechanisms embedded in the contract you mention: an auto-callable feature, meaning early redemption can happen if certain conditions are met at discrete observation dates specified as part of the contract; and a compo/quanto feature, because the individual indices are not denominated in the same currency: either you define the basket $t$-value by converting each individual index's $t$-value into a fixed reference currency (compo), or you simply view these index values as plain 'numbers' and express their weighted sum in a fixed reference currency regardless of the original denominations (quanto).
Let's forget about the auto-callable and quanto/compo features (as they would require a post of their own to be correctly addressed) and focus only on the basket part for the moment.
Omitting these features makes the payoff purely European, i.e. it only depends on the terminal value of the basket (its $T$-value). Let's denote it by $\phi(B_T)$, where in your case $\phi$ looks like the payoff of a put (or possibly a down-and-in put, it's not clear from your explanation).
Let's also assume deterministic and constant rates in what follows, along with proportional dividends for the sake of clarity. The model categories I list below only reflect my own views, there is no such distinction in practice.
Pricing 101
Under the risk-neutral measure $\mathbb{Q}$ the price of an option is given by:$$ V_0 = e^{-r T} \mathbb{E}_0^\mathbb{Q} \left[ \phi(B_T) \right] $$with here $$B_t = \frac{1}{N} \sum_{i=1}^N S_t^{(i)} $$the $t$-value of the basket, $r$ the risk-free rate (rate at which money can be deposited on the money market, here you can take the EONIA discount curve for instance).
Mathematically, the risk-neutral measure $\mathbb{Q}$ is defined in such a way that:$$ F(0,T) = \mathbb{E}_0^\mathbb{Q}[ S_T ] $$where $F(0,T)$ is the forward price of the equity $S$ for delivery at $T$, as seen from $t=0$.
There is a lot of theory hidden behind the last few equations; I would recommend investigating that using a good reference book, e.g. Shreve. Maybe this post is a good start too. Anyway, the consequence is that your model should eventually read something like:
$$ \frac{dS_t^{(i)}}{S_t^{(i)}} = \frac{\partial \ln F^{(i)}(0,t)}{\partial t} dt + \sigma^{(i)}(...) dW_t^{(i),\mathbb{Q}} $$with a certain dependence structure between the Brownian motions driving the individual indices' prices.
Basket Pricing 101
From the above you see that:$$ V_0 = e^{-rT} \int_0^\infty \phi(B_T) p(B_T) dB_T $$in other words, if you know the probability distribution of $B_T$ you are done, since you can either: (1) perform a numerical quadrature and get the option price (2) sample from the terminal distribution and compute the expectation using Monte Carlo simulations. (1) is when $p(B_T)$ is known in closed-form, (2) is more general.
It is straightforward to show that the first 2 moments of $B_T$ under $\mathbb{Q}$ are:$$ B(0,T) = \mathbb{E}_0^\mathbb{Q}[B_T] = \frac{1}{N} \sum_{i=1}^N F^{(i)}(0,T) $$(i.e. sum of the expectations)$$ \sigma^2_B = \frac{1}{N^2} \left( \sum_{i=1}^N \sigma^{2,(i)} + 2 \sum_{i=1}^N \sum_{j<i} \rho_{ij} \sigma^{(i)} \sigma^{(j)} \right) $$(i.e. sum of the covariances)
[Model Type 1]
Assume $B_T$ is log-normal with first two moments given by the above. This is a mere approximation, since we know that the sum of $N$ log-normal variables is not log-normal. But it allows you to compute the price of a basket option using the BS formula:$$ V_0 = e^{-rT} ( B(0,T) N(d_+) - K N(d_-) ) $$$$ d_\pm = \frac{\ln\left(\frac{B(0,T)}{K}\right) \pm \frac{1}{2}\sigma^2_B T}{\sigma_B \sqrt{T}} $$
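A minimal sketch of this moment-matched log-normal approximation, for a call with equal weights (names and signature are mine; the basket variance follows the covariance sum given above):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def basket_bs_price(forwards, vols, corr, K, T, r):
    """Moment-matched log-normal basket call, equal weights 1/N (a sketch)."""
    N = len(forwards)
    B0 = sum(forwards) / N
    # basket variance from the covariance sum, as in the text
    var = sum(corr[i][j] * vols[i] * vols[j]
              for i in range(N) for j in range(N)) / N**2
    sigma_B = math.sqrt(var)
    d_plus = (math.log(B0 / K) + 0.5 * sigma_B**2 * T) / (sigma_B * math.sqrt(T))
    d_minus = d_plus - sigma_B * math.sqrt(T)
    return math.exp(-r * T) * (B0 * Phi(d_plus) - K * Phi(d_minus))
```

With a single asset this collapses to the standard Black formula, which is a convenient sanity check.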
[Model Type 2]
Use a shifted log-normal to approximate the distribution of $B_T$, or any other known distribution for that matter. The idea is that you already know the first two theoretical moments of $B_T$, and you can easily write the third and even the fourth moments using standard calculus. You then fit the known distribution to the unknown distribution of $B_T$ by matching their moments. This is known as moment-matching. In the case of a shifted log-normal, this leads to a closed-form formula.
[Model Type 3]
You consider the true marginals of each individual asset $S_t^{(i)}$, i.e.$$ p^{(i)}(x) = \frac{d F^{(i)}(x)}{d x} = \frac{d \mathbb{Q}\left(S_T^{(i)} \leq x\right)}{ dx} $$where $F^{(i)}$ here denotes the $i^{th}$ asset's cumulative distribution function (not to be confused with the forward price above).
You can infer the above risk-neutral probabilities from listed option prices using the Breeden-Litzenberger identity, see here.
Now that you have identified the marginal distributions of each asset $S_t^{(i)}$, you need to define their dependence structure so that you can eventually obtain their joint distribution, from which you will be able to infer the distribution of $B_t$ since:$$ p(B_t \in A) = \int_{\frac{1}{N} \sum_{i=1}^N x_i \in A} p\left(x_1,...,x_N\right) dx_1 ... dx_N $$You can use copulas to form the joint distribution $F^B(x)$ from the knowledge of the marginals $F^{(i)}(x)$.
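A hedged two-asset sketch of the copula approach, using log-normal marginals as a stand-in for the implied marginals you would extract via Breeden-Litzenberger (all names and parameter values are mine):

```python
import math, random

def basket_put_mc_gaussian_copula(F, vols, rho, K, T, r, n_paths=50_000, seed=1):
    """Equal-weight 2-asset basket put via a Gaussian copula (log-normal placeholder marginals)."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)  # Gaussian copula, 2 assets
        # invert each (log-normal) marginal at the correlated Gaussian draws
        s1 = F[0] * math.exp(-0.5 * vols[0]**2 * T + vols[0] * math.sqrt(T) * z1)
        s2 = F[1] * math.exp(-0.5 * vols[1]**2 * T + vols[1] * math.sqrt(T) * z2)
        payoff_sum += max(K - 0.5 * (s1 + s2), 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths
```

For genuinely implied marginals, you would replace the log-normal inversion by the inverse CDF recovered from option prices; the copula step (correlating the Gaussian draws) is unchanged.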
[Basket model $\infty$] There are of course many variations of the above methods, depending for instance on the choice of copula used in model 3. Gaussian is the usual go-to choice, but you could pick any other dependence structure. One which accounts for the fact that correlations explode when the market crashes seems like a better option. See the problem of correlation skew.
You could use local volatility + instantaneous correlations as a replacement for coupling individual marginals with a Gaussian copula. Choosing different copulas then amounts to defining different local correlation structures. This is still an open research subject, see the work of Langnau, Guyon etc. in this area.
Be careful with decorrelation issues and the famous instantaneous vs. terminal correlation debate.
[Additional details]
To price in the auto-callable feature it may be interesting to set up a hybrid equity-rates model (i.e. work with stochastic interest rates).
For the same reason, it might also be useful to consider discrete cash dividends instead of discrete proportional dividends.
To price in the compo feature you'll need to know the FX forwards. To price in the quanto feature you'll need to know the FX volatilities (see quanto drift adjustment) and sometimes more depending on whether you regard the equity volatility as a stochastic process or not.
[Your implementation]
You may wonder what model your implementation corresponds to. Assuming you used the appropriate equity forward curves to build the risk-neutral drifts of the individual indices, plus an implied volatility number (i.e. not quantities estimated under the real-world measure $\mathbb{P}$ such as expected returns and historical volatility, but rather quantities inferred from listed option prices, hence under $\mathbb{Q}$), what you did corresponds to postulating log-normal marginals combined through a Gaussian copula.
This is in-between model 1 (which is worse than yours because it assumes $B_t$ is log-normally distributed, but exhibits the correct first two moments) and model 3 (which is better than yours because it uses the true implied marginals and not a log-normal assumption for the marginals).
Good luck |
The average equilibrium temperature can be obtained from the Stefan-Boltzmann law; for your data it is 293.5 K (20 °C). Compensating for the Earth-like atmosphere (+15 K for Earth, closer to +12.5 K for this planet), we have an average temperature of approximately 306 K (33 °C). Quite hot, as expected from a higher solar flux and smaller albedo.
Another useful average we can get from this law is the equatorial average, 311 K without the atmosphere, ~323 K compensated.
Equations for temperature estimations without an atmosphere:
Effective influx: $I_e = S(1-A)$, where $S$ is the solar flux and $A$ the albedo.
Global average: $T = \left(\frac{I_e}{4\sigma}\right)^{\frac{1}{4}}$
Equatorial average: $T = \left(\frac{I_e}{\pi \sigma}\right)^{\frac{1}{4}}$
Stationary sun-in-zenith average: $T = \left(\frac{I_e}{\sigma}\right)^{\frac{1}{4}}$
Where $\sigma$ is the Stefan-Boltzmann constant ($\sigma = 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$), and $I_e$ is the effective influx.
For a rapidly rotating planet, use the equatorial average for the equator temperature; for a very slowly rotating planet, use the sun-in-zenith equation for the peak temperature. For a case in between those extremes, use something in between those equations.
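These averages are easy to compute; a quick sketch (cross-checked against Earth's well-known airless global average of about 255 K for S = 1361 W/m², A = 0.3):

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def airless_temperatures(solar_flux, albedo):
    """Global, equatorial, and sun-in-zenith black-body averages (no atmosphere), in K."""
    ie = solar_flux * (1.0 - albedo)  # effective influx
    return {
        "global": (ie / (4 * SIGMA)) ** 0.25,
        "equatorial": (ie / (math.pi * SIGMA)) ** 0.25,
        "zenith": (ie / SIGMA) ** 0.25,
    }
```

Plug in the planet's flux and albedo to recover the figures quoted above, then add the atmospheric compensation by hand.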
Your planet seems to be divided into two regions, a lowland and a highland. We find the highest temperature variations in the equatorial part of the highland, where a necessarily low cloud cover gives huge, desert-like variations, reaching almost 90 °C shortly after noon (actually 130 °C if we calculate the black-body equilibrium, but we must compensate for atmospheric convection), and less than 0 °C (perhaps as low as -15 °C) shortly before dawn.
In the lowland, the atmosphere, combined with clouds formed by the lakes, gives more inertia to the system, thereby limiting the variations (0 to 50 °C). |
I need some help with deciding if a given language is regular, context-free or not context-free.
Let's say I have the following languages over the alphabet $\mathcal{A} = \{a,b,c,d\}$: $$ \begin{align} L_1 &= \{ w \in \mathcal{A}^* \mid \text{\(\#a(w)\) is even and \(\#b(w) \equiv 1 \pmod 3\) and \(w \not\in \mathcal{A}^* abc \mathcal{A}^* \)} \} \\ L_2 &= \{ w \in \mathcal{A}^* \mid \text{\(\#a(w)\) is even and \(\#b(w) \lt \#c(w)\)} \} \\ L_3 &= \{ w \in \mathcal{A}^* \mid \#a(w) \lt \#b(w) \lt \#c(w) \} \\ \end{align} $$
This is my solution:
$L_1 = L_4 \cap L_5 \cap L_6$ where $$ \begin{align} L_4 &= \{ w \mid \text{\(w\) does not contain the substring \(abc\)} \} \\ L_5 &= \{ w \mid \#a(w) \text{ is even} \} \\ L_6 &= \{ w \mid \#b(w) \equiv 1 \pmod 3 \} \\ \end{align} $$
A DFA can be constructed for $L_5$, because $L_5$ does not need infinite memory, so $L_5$ is regular. The same reasoning applies to $L_6$. And for $L_4$ we can construct a DFA that simply rejects every string containing the substring $abc$, hence $L_4$ is regular.
$L_1$ is regular because regular languages are closed under intersection.
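The decomposition of $L_1$ can be sanity-checked with a direct membership test (a sketch of the three conditions, not a DFA construction):

```python
def in_L1(w):
    """w over {a,b,c,d}: even #a, #b congruent to 1 mod 3, and no 'abc' substring."""
    return (w.count("a") % 2 == 0
            and w.count("b") % 3 == 1
            and "abc" not in w)
```

For example, `in_L1("bdd")` is True (zero $a$s, one $b$, no $abc$), while `in_L1("babc")` is False.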
For $L_2$ we can divide the language thus: $L_2 = L_5 \cap L_7$ where
$$ \begin{align} L_5 &= \{ w \mid \#a(w) \text{ is even} \} \\ L_7 &= \{ w \mid \#b(w) \lt \#c(w) \} \\ \end{align} $$
We know that a DFA can be constructed for $L_5$, hence $L_5$ is regular. $L_7$ is context-free because we can construct a PDA whose stack compares the number of $b$s and $c$s.
$L_2$ is hence context-free because the intersection of a regular language and a context-free language result in a context-free language.
For $L_3$ we can see that it is not context-free because a PDA is limited to one stack.
Is my reasoning right? |
Banach indicatrix
Latest revision as of 14:32, 16 May 2015

multiplicity function, of a continuous function $y=f(x)$, $a\leq x\leq b$
An integer-valued function $N(y,f)$, $-\infty < y < \infty$, equal to the number of roots of the equation $f(x)=y$. If, for a given value of $y$, this equation has an infinite number of roots, then $$ N(y,f) = +\infty, $$
and if it has no roots, then
$$ N(y,f) = 0. $$
The function $N(y,f)$ was defined by S. Banach [1] (see also [2]). He proved that the indicatrix $N(y,f)$ of any continuous function $f(x)$ in the interval $[a,b]$ is a function of Baire class not higher than 2, and \begin{equation}\label{eq1} V_a^b(f) = \int\limits_{-\infty}^{+\infty} N(y, f) \, dy, \end{equation}
where $V_a^b(f)$ is the variation of $f(x)$ on $[a,b]$. Thus, equation \eqref{eq1} can be considered as the definition of the variation of a continuous function $f(x)$. The Banach indicatrix is also defined (preserving equation \eqref{eq1}) for functions with discontinuities of the first kind [3]. The concept of a Banach indicatrix was employed to define the variation of functions in several variables [4], [5].
References
[1] S. Banach, "Sur les lignes rectifiables et les surfaces dont l'aire est finie" Fund. Math., 7 (1925) pp. 225–236
[2] I.P. Natanson, "Theorie der Funktionen einer reellen Veränderlichen", H. Deutsch, Frankfurt a.M. (1961) (Translated from Russian)
[3] S.M. Lozinskii, "On the Banach indicatrix" Vestnik Leningrad. Univ. Math. Mekh. Astr., 7 : 2 pp. 70–87 (In Russian)
[4] A.S. Kronrod, "On functions of two variables" Uspekhi Mat. Nauk, 5 : 1 (1950) pp. 24–134 (In Russian)
[5] A.G. Vitushkin, "On higher-dimensional variations", Moscow (1955) (In Russian)
Comments
More generally, for any mapping $f:X\to Y$ define $N(y,f)$ analogously. Then, let $X$ be a separable metric space and let $f(A)$ be $\mu$-measurable for all Borel subsets $A$ of $X$. Let $\zeta(S) = \mu(f(S))$ for $S\subset X$ and let $\psi$ be the measure on $X$ defined by the Carathéodory construction from $\zeta$. Then $$ \psi(A) = \int\limits_{Y}N(y,f|_A)\, d\mu(y) $$ for every Borel set $A\subset X$. Cf. [a1], p. 176 ff. For a significant extension of \eqref{eq1}, cf. [a2].
References
[a1] H. Federer, "Geometric measure theory", Springer (1969)
[a2] H. Federer, "An analytic characterization of distributions whose partial derivatives are representable by measures" Bull. Amer. Math. Soc., 60 (1954) pp. 339 |
Purpose
The log-likelihood is the objective function and a key piece of information. The log-likelihood cannot be computed in closed form for nonlinear mixed effects models. It can however be estimated.
Log-likelihood estimation
Performing likelihood ratio tests and computing information criteria for a given model requires computation of the log-likelihood
$$ {\cal LL}_y(\hat{\theta}) = \log({\cal L}_y(\hat{\theta})) \triangleq \log(p(y;\hat{\theta})) $$
where \(\hat{\theta}\) is the vector of population parameter estimates for the model being considered, and \(p(y;\hat{\theta})\) is the probability distribution function of the observed data given the population parameter estimates. The log-likelihood cannot be computed in closed form for nonlinear mixed effects models. It can however be estimated in a general framework for all kinds of data and models using the importance sampling Monte Carlo method. This method has the advantage of providing an unbiased estimate of the log-likelihood – even for nonlinear models – whose variance can be controlled by the Monte Carlo size.
Two different algorithms are proposed to estimate the log-likelihood: by linearization and by importance sampling. The estimated log-likelihoods are computed and stored in the LogLikelihood folder in the result folder. In this folder, two files are stored: logLikelihood.txt, containing the OFV (objective function value), AIC, and BIC; and individualLL.txt, containing the -2LL for each individual.
Log-likelihood by importance sampling
The observed log-likelihood \({\cal LL}(\theta;y)=\log({\cal L}(\theta;y))\) can be estimated without requiring approximation of the model, using a Monte Carlo approach. Since
$${\cal LL}(\theta;y) = \log(p(y;\theta)) = \sum_{i=1}^{N} \log(p (y_i;\theta))$$
we can estimate \(\log(p(y_i;\theta))\) for each individual and derive an estimate of the log-likelihood as the sum of these individual log-likelihoods. We will now explain how to estimate \(\log(p(y_i;\theta))\) for any individual \(i\). Using the \(\phi\)-representation of the model (the individual parameters are transformed to be Gaussian), notice first that \(p(y_i;\theta)\) can be decomposed as follows:
$$p(y_i;\theta) = \int p(y_i,\phi_i;\theta)d\phi_i = \int p(y_i|\phi_i;\theta)p(\phi_i;\theta)d\phi_i = \mathbb{E}_{p_{\phi_i}}\left(p(y_i|\phi_i;\theta)\right)$$
Thus, \(p(y_i;\theta)\) is expressed as a mean. It can therefore be approximated by an empirical mean using a Monte Carlo procedure:
Draw \(M\) independent values \(\phi_i^{(1)}\), \(\phi_i^{(2)}\), …, \(\phi_i^{(M)}\) from the marginal distribution \(p_{\phi_i}(.;\theta)\), and estimate \(p(y_i;\theta)\) with \(\hat{p}_{i,M}=\frac{1}{M}\sum_{m=1}^{M}p(y_i | \phi_i^{(m)};\theta)\).
By construction, this estimator is unbiased, and consistent since its variance decreases as \(1/M\):
$$\mathbb{E}\left(\hat{p}_{i,M}\right)=\mathbb{E}_{p_{\phi_i}}\left(p(y_i|\phi_i^{(m)};\theta)\right) = p(y_i;\theta) ~~~~\mbox{Var}\left(\hat{p}_{i,M}\right) = \frac{1}{M} \mbox{Var}_{p_{\phi_i}}\left(p(y_i|\phi_i^{(m)};\theta)\right)$$
We could consider ourselves satisfied with this estimator, since we "only" have to select \(M\) large enough to get an estimator with a small variance. Nevertheless, it is possible to improve its statistical properties by importance sampling, i.e. by drawing the \(\phi_i^{(m)}\) from a proposal distribution close to the optimal one, which is the conditional distribution \(p_{\phi_i|y_i}\).
The problem is that it is not possible to generate the \(\phi_i^{(m)}\) with this conditional distribution, since that would require computing a normalizing constant, which here is precisely \(p(y_i;\theta)\).
Nevertheless, this conditional distribution can be estimated using the Metropolis-Hastings algorithm, and a practical proposal "close" to the optimal proposal \(p_{\phi_i|y_i}\) can be derived. We can then expect to get a very accurate estimate with a relatively small Monte Carlo size \(M\).
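As a hedged, self-contained illustration of why the proposal matters, consider a toy model where \(p(y_i;\theta)\) is known exactly (\(\phi\sim N(0,1)\), \(y|\phi\sim N(\phi,1)\), so marginally \(y\sim N(0,2)\)); sampling from a posterior-like proposal gives a far less noisy estimate than plain sampling from the prior:

```python
import math, random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def estimate_marginal(y, M=20000, use_posterior_proposal=True, seed=0):
    """Estimate p(y) = integral of p(y|phi) p(phi) dphi by (importance) sampling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(M):
        if use_posterior_proposal:
            # proposal = exact conditional N(y/2, 1/2) for this conjugate toy model
            phi = rng.gauss(y / 2, math.sqrt(0.5))
            weight = normal_pdf(phi, 0, 1) / normal_pdf(phi, y / 2, 0.5)
        else:
            phi = rng.gauss(0, 1)  # plain Monte Carlo from the prior
            weight = 1.0
        total += normal_pdf(y, phi, 1) * weight
    return total / M
```

Both estimators converge to the true marginal density; with the posterior proposal, the weighted integrand \(p(y|\phi)p(\phi)/g(\phi)\) is constant in this conjugate case, so the estimate is essentially noise-free even for small \(M\).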
The mean and variance of the conditional distribution \(p_{\phi_i|y_i}\) are estimated by Metropolis-Hastings for each individual \(i\). Then, the \(\phi_i^{(m)}\) are drawn from a noncentral Student \(t\)-distribution:
$$ \phi_i^{(m)} = \mu_i + \sigma_i \times T_{i,m}$$
where \(\mu_i\) and \(\sigma^2_i\) are estimates of \(\mathbb{E}\left(\phi_i|y_i;\theta\right)\) and \(\mbox{Var}\left(\phi_i|y_i;\theta\right)\), and \((T_{i,m})\) is a sequence of i.i.d. random variables with a Student's \(t\)-distribution with \(\nu\) degrees of freedom.
Remark: The standard error of all the draws is provided. It represents the impact of the variability of the draws of the proposed population parameters, not the uncertainty of the model.
Remark: Even if \(\hat{\cal L}_y(\theta)=\prod_{i=1}^{N}\hat{p}_{i,M}\) is an unbiased estimator of \({\cal L}_y(\theta)\), \(\hat{\cal LL}_y(\theta)\) is a biased estimator of \({\cal LL}_y(\theta)\). Indeed, by Jensen's inequality, we have:
$$\mathbb{E}\left(\log(\hat{\cal L}_y(\theta))\right) \leq \log \left(\mathbb{E}\left(\hat{\cal L}_y(\theta)\right)\right)=\log\left({\cal L}_y(\theta)\right)$$
Best practice: the bias decreases as M increases and also if \(\hat{\cal L}_y(\theta)\) is close to \({\cal L}_y(\theta)\). It is therefore highly recommended to use a proposal as close as possible to the conditional distribution \(p_{\phi_i|y_i}\), which means having to estimate this conditional distribution before estimating the log-likelihood (i.e. run task “Conditional distribution” before).
Display and outputs
In case of estimation using the importance sampling method, a graphical representation of the evolution of the estimated mean value over the Monte Carlo iterations is displayed.
The final estimates are displayed in the result frame. Notice that there is a "Copy table" icon at the top of each table to copy it into Excel, Word, etc.; the table format and display will be kept.
In terms of output, a folder called LogLikelihood is created in the result folder, where the following files are created: logLikelihood.txt, containing for each computed method the -2 x log-likelihood, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the corrected Bayesian Information Criterion (BICc); and individualLL.txt, containing the -2 x log-likelihood for each individual for each computed method.
The new BIC criterion penalizes the size of \(\theta_R\) (population parameters from random individual parameter models) with the log of the number of subjects (\(N\)) and the size of \(\theta_F\) (population parameters from non-random individual parameter models and error parameters) with the log of the total number of observations (\(n_{tot}\)), as follows (see here for more explanation):
$$ BIC_c = -2\log(p(y;\theta)) + \dim(\theta_R)\log N+\dim(\theta_F)\log n_{tot}$$
Advanced settings for the log-likelihood
A t-distribution is used as the proposal. The number of degrees of freedom of this distribution can be either fixed or optimized. In the latter case, the default possible values are 1, 2, 5, 10 and 20 degrees of freedom. A distribution with a small number of degrees of freedom (i.e. heavy tails) should be avoided for models defined by stiff ODEs. We recommend setting the degrees of freedom to 5.
Log-likelihood by linearization
The likelihood of a nonlinear mixed effects model cannot be computed in closed form. An alternative is to approximate this likelihood by the likelihood of the Gaussian model deduced from the nonlinear mixed effects model after linearization of the function \(f\) (defining the structural model) around the predictions of the individual parameters \((\phi_i; 1 \leq i \leq N)\). Notice that the log-likelihood cannot be computed by linearization for discrete outputs (categorical, count, etc.) nor for mixture models.
Best practice: We strongly recommend computing the conditional mode before computing the log-likelihood by linearization. Indeed, the linearization should be made around the most probable values, as these are the same for both the linear and the nonlinear model.
Best practices: When should I use the linearization and when should I use the importance sampling?
Firstly, the linearization algorithm can only be used for continuous data. In that case, this method is generally much faster than the importance sampling method and also gives good estimates of the LL. The LL calculation by model linearization will generally be able to identify the main features of the model. More precise, and more time-consuming, estimation procedures such as stochastic approximation and importance sampling will have very limited impact in terms of decisions for these most obvious features. Selection of the final model should instead use the unbiased estimator obtained by Monte Carlo. |
The hyperbolic functions appear with some frequency in applications, and are quite similar in many respects to the trigonometric functions. This is a bit surprising given our initial definitions.
Definition 4.11.1: Hyperbolic Cosines and Sines
The
hyperbolic cosine is the function
\[\cosh x ={e^x +e^{-x }\over2},\]
and the
hyperbolic sine is the function
\[\sinh x ={e^x -e^{-x}\over 2}.\]
Notice that \(\cosh\) is even (that is, \(\cosh(-x)=\cosh(x)\)) while \(\sinh\) is odd (\(\sinh(-x)=-\sinh(x)\)), and \( \cosh x + \sinh x = e^x\). Also, for all \(x\), \(\cosh x >0\), while \(\sinh x=0\) if and only if \( e^x -e^{-x }=0\), which is true precisely when \(x=0\).
Lemma 4.11.2
The range of \(\cosh x\) is \([1,\infty)\).
Proof
Let \(y= \cosh x\). We solve for \(x\):
\[\eqalign{y&={e^x +e^{-x }\over 2}\cr 2y &= e^x + e^{-x }\cr 2ye^x &= e^{2x} + 1\cr 0 &= e^{2x}-2ye^x +1\cr e^{x} &= {2y \pm \sqrt{4y^2 -4}\over 2}\cr e^{x} &= y\pm \sqrt{y^2 -1}\cr} \]
From the last equation we see that \( y^2 \geq 1\), and since \(y > 0\), it follows that \(y\geq 1\).
Now suppose \(y\geq 1\), so \( y\pm \sqrt{y^2 -1}>0\). Then \( x = \ln(y\pm \sqrt{y^2 -1})\) is a real number, and \(y =\cosh x\), so \(y\) is in the range of \(\cosh(x)\).
\(\square\)
Definition 4.11.3: Hyperbolic Tangent and Cotangent
The other hyperbolic functions are
\[\eqalign{\tanh x &= {\sinh x\over\cosh x}\cr \coth x &= {\cosh x\over\sinh x}\cr \text{sech} x &= {1\over\cosh x}\cr \text{csch} x &= {1\over\sinh x}\cr} \]
The domain of \(\coth\) and \(\text{csch}\) is \(x\neq 0\), while the domain of the other hyperbolic functions is all real numbers. Graphs are shown in Figure \(\PageIndex{1}\).
Certainly the hyperbolic functions do not closely resemble the trigonometric functions graphically. But they do have analogous properties, beginning with the following identity.
Theorem 4.11.4
For all \(x\) in \(\mathbb{R}\), \( \cosh ^2 x -\sinh ^2 x = 1\).
Proof
The proof is a straightforward computation:
\[\cosh ^2 x -\sinh ^2 x = {(e^x +e^{-x} )^2\over 4} -{(e^x -e^{-x} )^2\over 4}= {e^{2x} + 2 + e^{-2x } - e^{2x } + 2 - e^{-2x}\over 4}= {4\over 4} = 1. \]
\(\square\)
This immediately gives two additional identities:
\[1-\tanh^2 x =\text{sech}^2 x\qquad\hbox{and}\qquad \coth^2 x - 1 =\text{csch}^2 x.\]
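These identities are easy to spot-check numerically from the exponential definitions (a quick sketch using Python's math module, not part of the text's development):

```python
import math

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    cosh_x = (math.exp(x) + math.exp(-x)) / 2
    sinh_x = (math.exp(x) - math.exp(-x)) / 2
    # cosh^2 x - sinh^2 x = 1
    assert abs(cosh_x**2 - sinh_x**2 - 1) < 1e-9
    # 1 - tanh^2 x = sech^2 x
    assert abs(1 - (sinh_x / cosh_x) ** 2 - (1 / cosh_x) ** 2) < 1e-9
```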
The identity of the theorem also helps to provide a geometric motivation. Recall that the graph of \( x^2 -y^2 =1\) is a hyperbola with asymptotes \(x=\pm y\) whose \(x\)-intercepts are \(\pm 1\). If \((x,y)\) is a point on the right half of the hyperbola, and if we let \(x=\cosh t\), then \( y=\pm\sqrt{x^2-1}=\pm\sqrt{\cosh^2 t-1}=\pm\sinh t\). So for some suitable \(t\), \(\cosh t\) and \(\sinh t\) are the coordinates of a typical point on the hyperbola. In fact, it turns out that \(t\) is twice the area shown in the first graph of Figure \(\PageIndex{2}\). Even this is analogous to trigonometry; \(\cos t\) and \(\sin t\) are the coordinates of a typical point on the unit circle, and \(t\) is twice the area shown in the second graph of Figure \(\PageIndex{2}\).
Given the definitions of the hyperbolic functions, finding their derivatives is straightforward. Here again we see similarities to the trigonometric functions.
Theorem 4.11.5
\( {d\over dx}\cosh x=\sinh x\) and \( {d\over dx}\sinh x = \cosh x\).
Proof
\[ {d\over dx}\cosh x= {d\over dx}{e^x +e^{-x}\over 2} = {e^x- e^{-x}\over 2} =\sinh x,\]
and
\[ {d\over dx}\sinh x = {d\over dx}{e^x -e^{-x}\over 2} = {e^x +e^{-x }\over 2} =\cosh x.\]
\(\square\)
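Both derivative formulas can be spot-checked with a symmetric difference quotient (a quick numerical sketch, not part of the proof):

```python
import math

def deriv(f, x, h=1e-6):
    # symmetric difference quotient approximates f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [-1.5, 0.0, 0.75, 2.0]:
    assert abs(deriv(math.cosh, x) - math.sinh(x)) < 1e-6
    assert abs(deriv(math.sinh, x) - math.cosh(x)) < 1e-6
```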
Since \(\cosh x > 0\), \(\sinh x\) is increasing and hence injective, so \(\sinh x\) has an inverse, \(\text{arcsinh} x\). Also, \(\sinh x > 0\) when \(x>0\), so \(\cosh x\) is injective on \([0,\infty)\) and has a (partial) inverse, \(\text{arccosh} x\). The other hyperbolic functions have inverses as well, though \(\text{arcsech} x\) is only a partial inverse. We may compute the derivatives of these functions as we have other inverse functions.
Theorem 4.11.6
\( {d\over dx}\text{arcsinh} x = {1\over\sqrt{1+x^2}}\).
Proof
Let \(y=\text{arcsinh} x\), so \(\sinh y=x\). Then
\[ {d\over dx}\sinh y = \cosh(y)\cdot y' = 1,\]
and so
\[ y' ={1\over\cosh y} ={1\over\sqrt{1 +\sinh^2 y}} = {1\over\sqrt{1+x^2}}.\]
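The result can be checked numerically; here is a small sketch comparing a difference quotient of Python's `math.asinh` against \(1/\sqrt{1+x^2}\):

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    expected = 1.0 / math.sqrt(1.0 + x * x)
    assert abs(central_diff(math.asinh, x) - expected) < 1e-6
```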
The other derivatives are left to the exercises.
Basics
A coin (or a stamp) has a value. A dime is worth 10 cents.
The total value of a number of coins (or stamps) is the product of the number and the value.
Examples
Some of the problems here are simple. The solution can be worked out fast by quick reasoning. The benefit of these problems is not to find the solution by reasoning, but by learning algebraic steps applicable to more challenging situations. Solving application problems is a skill. Cultivate it.
My approach is not to reread a problem until I understand it. Many of my colleagues will disagree with me. I propose creating a preamble in which you write a symbol or set of symbols for each element in the problem. (Refer to exercise 10.2.9.)
Look at the end of the problem, where you usually find what to solve for. Let \(x\) be the number we are looking for. Write this as the first step in the preamble (the list of symbols whose values we seek in non-trivial exercises).
Next, start reading the problem from the beginning and develop the preamble step by step. Write the symbol(s) for each step on a new line.
Finish the preamble by converting all problem steps into symbols. Now look at your preamble and gain an overview of your problem.
It should now be easier to obtain an equation using the preamble symbols. Then solve the equation by the methods introduced this far for linear equations. (A linear equation has a variable to the first degree, that is, with exponent 1.)
Example \(\PageIndex{1}\)
A coin bank contains nickels, dimes and quarters.
There are \(8\) more dimes than quarters. The number of nickels is \(8\) less than twice the number of dimes. The total value of the coins is \(\$5.70\).
Find the number of each coin.
Solution:
Preamble:
Let \(x\) be the number of quarters (for example \(x=20\) quarters).
There are \(8\) more dimes than quarters: \(x+8\) (here \(20+8=28\) dimes). Nickels: \(8\) less than \(2\times\) the number of dimes: \(2(x+8)-8\) (here \(2(20+8)-8\)); \(2(x+8)-8=2x+16-8=2x+8\).
\(\begin{array}{|l|c|c|c|}\hline \hbox{Coin}&\hbox{Number of}&\hbox{Value of one}&\hbox{Total value}\\ \hbox{}&\hbox{coins}&\hbox{coin in cents}&\hbox{in cents}\\ \hline \hbox{Nickels}&2x+8&5&5(2x+8)\\ \hline \hbox{Dimes}&x+8&10&10(x+8)\\ \hline \hbox{Quarters}&x&25&25x\\ \hline \end{array}\)
Equation:
\(\begin{array}{rcl lll} \hbox{Total value of coins}&=&\hbox{\$5.70}=570\hbox{ cents}\\[4pt] 5(2x+8)+10(x+8)+25x&=&570\\[4pt] 10x+40+10x+80+25x&=&570\\[4pt] 45x+40+80&=&570\\[4pt] 45x+120&=&570\\[4pt] 45x+120-120&=&570-120\\[4pt] 45x&=&450\\[4pt] \displaystyle \frac{45x}{45}&=&\displaystyle \frac{450}{45}\\[10pt] x&=&10 \end{array}\)
Number of quarters: \(x=10\). Number of dimes: \(x+8=10+8=18\). Number of nickels: \(2x+8=2(10)+8=28\).
Note: If \(x\) had been chosen to be the number of dimes or nickels, the development would have been more difficult.
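The answer to Example 1 can be double-checked with a short script; this is only a verification sketch, with all quantities in cents:

```python
# Total value in cents for x quarters, x + 8 dimes and 2x + 8 nickels.
def total_cents(x):
    return 5 * (2 * x + 8) + 10 * (x + 8) + 25 * x

assert total_cents(10) == 570                                    # x = 10 satisfies the equation
assert [x for x in range(100) if total_cents(x) == 570] == [10]  # and is the only solution
```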
Example \(\PageIndex{2}\)
A wallet contains \(42\) one-, five- and ten-dollar bills.
The value of all the bills is \(\$135\). There are \(5\) times as many one-dollar bills as ten-dollar bills. Find the number of bills of each denomination.

Solution:
Preamble:
Let \(x\) be the number of \(\$10\) bills (for example \(x=20\)).
Then the number of \(\$1\) bills is \(5x\) (here \(5(20)\)). We need to become creative (more variables and more equations come later). The wallet contains \(42\) bills, including \(???\) five-dollar bills: \(\begin{array}{rcl lll} x+5x+???&=&42\\[5pt] 6x+???&=&42\\[5pt] 6x-6x+???&=&42-6x\\[5pt] ???&=&42-6x \end{array}\)
The number of \(\$5\) bills is \(???=42-6x\).
\(\begin{array}{|r|c|c|c|}\hline \hbox{Bill}&\hbox{Number of}&\hbox{Value of}&\hbox{Total value}\\ \hbox{}&\hbox{bills}&\hbox{each bill}&\hbox{of the bills}\\ \hline \$10&x&10&10x\\ \hline \$5&42-6x&5&5(42-6x)\\ \hline \$1&5x&1&5x\\ \hline \end{array}\)
Equation:
\(\begin{array}{rcl lll} \hbox{The total value of the bills}&=&\hbox{\$135}\\[5pt] 10x+5(42-6x)+5x&=&135\\[5pt] 10x+210-30x+5x&=&135\\[5pt] 210-15x&=&135\\[5pt] 210-210-15x&=&135-210\\[5pt] -15x&=&-75\\[5pt] \displaystyle \frac{-15x}{-15}&=&\displaystyle \frac{-75}{-15}\\[10pt] x&=&5 \end{array}\)
The number of ten-dollar bills is \(5\).
The number of five-dollar bills is \(42-6x=42-6(5)=42-30=12\). The number of one-dollar bills is \(5(5)=25\).
Example \(\PageIndex{3}\)
An order of \(\$0.44\) stamps and \(\$1.29\) custom photo postcards cost \(\$14.87\). The order consists of \(28\) items.
Find the number of custom photo postcards.
Solution:
Preamble:
Let \(x\) be the number of stamps (for example \(x=20\)).
Then the number of postcards is \(28-x\) (here \(28-20\)).
\(\begin{array}{|l|c|c |c|c|r |r|r|r|}\hline \hbox{Stamps}&\hbox{Number of}&\hbox{Value of }&\hbox{Total value}\\ \hbox{or postcards}&\hbox{items}&\hbox{one item}&\hbox{of the items}\\[5pt] \hline \hbox{stamp}&x&0.44&0.44x\\ \hline \hbox{postcard}&28-x&1.29&1.29(28-x)\\ \hline \end{array}\)
Equation:
\(\begin{array}{rcl lll} \hbox{The whole order costs}&=&\hbox{\$14.87}\\[5pt] 0.44x+1.29(28-x)&=&14.87\\[5pt] 0.44x+36.12-1.29x&=&14.87\\[5pt] -0.85x+36.12&=&14.87\\[5pt] -0.85x+36.12-36.12&=&14.87-36.12\\[5pt] -0.85x&=&-21.25\\[5pt] \displaystyle \frac{-0.85}{-0.85}x&=&\displaystyle \frac{-21.25}{-0.85}\\[10pt] x&=&25 \end{array}\)
The order consists of \(25\) stamps and \(28-25=3\) postcards.
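Working in integer cents avoids rounding issues; a brute-force sketch confirming the answer to Example 3:

```python
# 44x + 129(28 - x) = 1487, with x stamps and 28 - x postcards (values in cents).
solutions = [x for x in range(29) if 44 * x + 129 * (28 - x) == 1487]
assert solutions == [25]        # 25 stamps
assert 28 - solutions[0] == 3   # 3 postcards
```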
Exercises 12
A collection of stamps from a foreign country consists of five-franc stamps and two-franc stamps.
The number of two-franc stamps is \(12\) less than twice the number of five-franc stamps. The total value of the stamps is \(381\) francs. Find the number of stamps.
A cash register contains five-franc, ten-franc and twenty-five-franc coins.
The number of five-franc coins is twice the number of ten-franc coins. The number of twenty-five-franc coins is \(4\) more than the number of five-franc coins. The total value of the coins is \(940\) francs. How many coins of each denomination are there?
A collection of stamps from a foreign country consists of five-franc stamps and two-franc stamps.
The number of two-franc stamps is \(12\) less than twice the number of five-franc stamps. The total value of the stamps is \(381\) francs. Find the number of stamps.

Solution:
Preamble:
Let \(x\) be the number of five-franc stamps (for example \(x=20\)).
Two-franc stamps: \(12\) less than \(2\times\) the number of five-franc stamps: \(2x-12\) (here \(2(20)-12\)).
\(\begin{array}{|l|c|c |c|c|r |r|r|r|}\hline \hbox{Stamp}&\hbox{Number of}&\hbox{Value of }&\hbox{Total value}\\ &\hbox{stamps}&\hbox{each stamp}&\hbox{of the stamps}\\ \hline \hbox{five-franc}&x&5&5x\\ \hline \hbox{two-franc}&2x-12&2&2(2x-12)\\ \hline \end{array}\)
Equation:
\(\begin{array}{rcl lll} \hbox{The total value of the stamps}&=&381\hbox{ francs}\\[5pt] 5x+2(2x-12)&=&381\\[5pt] 5x+4x-24&=&381\\[5pt] 9x-24&=&381\\[5pt] 9x-24+24&=&381+24\\[5pt] 9x&=&405\\[9pt] \displaystyle \frac{9}{9}x&=&\displaystyle \frac{405}{9}\\[11pt] x&=&45 \end{array}\)
The number of five-franc stamps is \(45\).
The number of two-franc stamps is \(2(45)-12=90-12=78\).
A cash register contains five-franc, ten-franc and twenty-five-franc coins.
The number of five-franc coins is twice the number of ten-franc coins.
The number of twenty-five-franc coins is \(4\) more than the number of five-franc coins.
The total value of the coins is \(940\) francs. How many coins of each denomination are there?
Solution:
Preamble:
Let \(x\) be the number of ten-franc coins (for example \(x=20\)).
Number of five-franc coins: \(2\times\) the number of ten-franc coins: \(2x\) (here \(2(20)\)).
Number of twenty-five-franc coins: \(4\) more than the number of five-franc coins: \(2x+4\) (here \(2(20)+4\)).
\(\begin{array}{|l|c|c |c|c|r |r|r|r|}\hline \hbox{Coin}&\hbox{Number of}&\hbox{Value of }&\hbox{Total value}\\ \hbox{}&\hbox{coins}&\hbox{one coin}&\hbox{of the coins}\\ \hline \hbox{five-franc}&2x&5&5(2x)\\ \hline \hbox{ten-franc}&x&10&10x\\ \hline \hbox{twenty-five}&2x+4&25&25(2x+4)\\ \hline \end{array}\)
Equation:
\(\begin{array}{rcl lll} \hbox{The total value of the coins}&=&940\hbox{ francs}\\[5pt] 5(2x)+10x+25(2x+4)&=&940\\[5pt] 10x+10x+50x+100&=&940\\[5pt] 70x+100&=&940\\[5pt] 70x+100-100&=&940-100\\[5pt] 70x&=&840\\[5pt] \displaystyle \frac{70x}{70}&=&\displaystyle \frac{840}{70}\\[10pt] x&=&12 \end{array}\)
The number of five-franc coins is \(2(12)=24\).
The number of ten-franc coins is \(12\). The number of twenty-five-franc coins is \(2(12)+4=28\).
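Both exercise solutions can be confirmed by substituting back into the value equations (a verification sketch):

```python
# Exercise: 45 five-franc stamps and 2(45) - 12 = 78 two-franc stamps total 381 francs.
assert 5 * 45 + 2 * (2 * 45 - 12) == 381

# Exercise: x = 12 ten-franc coins, 2x five-franc coins, 2x + 4 twenty-five-franc coins.
x = 12
assert 5 * (2 * x) + 10 * x + 25 * (2 * x + 4) == 940
```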
Number System
Category : 6th Class
Learning Objective: Number System
A system of naming or representing numbers.
Number
A number is a mathematical object which is used to count, label and measure.
Example
1, 5, 19, 325
Main Type
Natural numbers/whole numbers, integers, rational numbers, irrational numbers, real numbers.
Natural Numbers
Counting numbers 1, 2, 3, 4, 5, 6... are called natural numbers. These numbers are also referred to as the positive integers.
Properties
Whole Numbers
The natural numbers along with zero form the collection of whole numbers.
Example 0, 1, 2, 3, 4, 5, 6 ....
Properties

Factors
A factor of a number is an exact divisor of that number.
Example 1
Factors of \[4=1,\,2,\,4\]
Example 2
Factors of \[15=1,\,3,\,5,\,15\]
Properties

Multiple

A multiple of a natural number is the product of that number and any whole number. Equivalently, a number is said to be a multiple of any of its factors.
Example 1
The first three multiples of 4 are \[4\times 1= 4,4\times 2 = 8\] and \[4\times 3 =12\]
Example 2
The first three multiples of 19 are \[19\times 1=19,19\times 2=38\] and \[19\times 3=57\]
Properties
Prime Numbers
The numbers which have exactly two factors, 1 and the number itself, are called prime numbers.
Example
2, 3, 5, 7 etc. are example of prime numbers.
Properties

Co-primes: Two natural numbers which have only the common factor 1 are called co-primes.
Example
Some examples of pairs of co-primes are \[\left( 2,3 \right),\left( 5,7 \right),\left( 8,13 \right),\left( 17,26 \right),\left( 29,33 \right).\]
Twin-primes: Prime numbers that differ by 2 are called twin-primes.
Example
Some examples of pairs of twin-primes are \[\left( 3,5 \right),\left( 5,7 \right),\left( 11,13 \right),\left( 17,19 \right)\left( 29,31 \right).\]
Prime triplet: A prime triplet is a set of three prime numbers of the form \[\left( p,p+2,p+6 \right)\] or \[\left( p,p+4,p+6 \right).\]
Example
Some examples of prime triplets are \[\left( 5,7,11 \right),\left( 7,11,13 \right),\left( 11,13,17 \right),\left( 13,17,19 \right),\left( 17,19,23 \right),\left( 37,41,43 \right).\]
Composite Number
Numbers having more than two factors are called composite numbers.
Example
4, 6, 8, 9 are examples of composite numbers.
Properties:
Test for divisibility of numbers

Divisible by 2: 6, 24, 20, 34, 102.
Divisible by 3: 9, 96, 102, 201, 216.
Divisible by 4: 324, 428, 520, 672.
Divisible by 5: 15, 20, 25, 30.
Divisible by 6 (that is, by both 2 and 3): 12, 24, 60, 36.
Divisible by 8: 1000, 1728, 13824, 8184.
Divisible by 9: 981, 729, 108, 21456.
Divisible by 10: 20, 40, 50, 100, 200, 260.
Divisible by 11: 132, 1452, 7172, 2277.
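The listed examples can be verified directly with the modulo operator; a small sketch:

```python
# Each number in the list should leave remainder 0 when divided by the key.
examples = {
    2:  [6, 24, 20, 34, 102],
    3:  [9, 96, 102, 201, 216],
    4:  [324, 428, 520, 672],
    5:  [15, 20, 25, 30],
    6:  [12, 24, 60, 36],
    8:  [1000, 1728, 13824, 8184],
    9:  [981, 729, 108, 21456],
    10: [20, 40, 50, 100, 200, 260],
    11: [132, 1452, 7172, 2277],
}
for divisor, numbers in examples.items():
    assert all(n % divisor == 0 for n in numbers)
```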
HCF (highest common factor)
The HCF of two or more numbers is the greatest number which divides each number exactly.
Example
HCF of 9 and 12 is 3 because 3 is the highest common factor among all the common factors of 9 and 12.
Properties

LCM (lowest common multiple)
The LCM of two or more given numbers is the lowest (or smallest) of their common multiples.
Example
LCM of 18 and 24 is 72 because 72 is the smallest common multiple among all the common multiples of 18 and 24.
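Both examples can be reproduced in a few lines of Python; `math.gcd` computes the HCF, and the LCM follows from the identity lcm(a, b) = a * b / gcd(a, b):

```python
import math

assert math.gcd(9, 12) == 3     # HCF of 9 and 12

def lcm(a, b):
    # lowest common multiple via the gcd identity
    return a * b // math.gcd(a, b)

assert lcm(18, 24) == 72        # LCM of 18 and 24
```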
Properties
Integers
Integers include the counting numbers \[\left( 1,2,3,4,... \right)\], zero (0) and the negatives of the counting numbers \[\left( -1,-2,-3,-4,... \right)\]. So we can denote the integers as \[\left( ...,-4,-3,-2,-1,0,1,2,3,4,... \right).\] The integers can be marked at equally spaced points on a number line.
Properties
Example: Write all the integers between \(-3\) and \(4\).

Explanation: The integers between \(-3\) and \(4\) are \[-2,-1,0,1,2\] and 3.
Fractions
A fraction is a number which represents a part of a whole. The whole may be a single object or a group of objects. A fraction is written in the form a/b where a is called numerator of the fraction and b is called denominator of the fraction.
Example
\[\frac{5}{7},\frac{9}{8}\] and \[\frac{11}{13}\] are examples of fractions.
Proper fraction: A fraction whose numerator is smaller than its denominator is called a proper fraction.

Example

\[\frac{5}{7},\frac{3}{4}\] and \[\frac{54}{59}\] are examples of proper fractions.
Improper fraction: A fraction whose numerator is greater than its denominator is called an improper fraction.

Example

\[\frac{9}{8},\frac{15}{13}\] and \[\frac{21}{19}\] are examples of improper fractions.
Mixed fraction: A combination of a whole number and a proper fraction is called a mixed fraction.

Example

\[5\frac{1}{3},2\frac{1}{2}\] and \[3\frac{1}{2}\] are examples of mixed fractions.
Equivalent fraction: Fractions having the same value are called equivalent fractions.

Example

\[\frac{4}{5},\frac{8}{10}\] and \[\frac{16}{20}\] are examples of equivalent fractions.
Like fractions: Fractions having the same denominator are called like fractions.

Example

\[\frac{4}{7},\frac{5}{7}\] and \[\frac{6}{7}\] are examples of like fractions.
Unlike fractions: Fractions having different denominators are called unlike fractions.

Example

\[\frac{2}{3},\frac{4}{5}\] and \[\frac{6}{7}\] are examples of unlike fractions.
Decimals
The numbers which use a decimal point followed by one or more digits are called decimal numbers.
Example
\[4.25,3.2,0.698\] are examples of decimal numbers.
Properties
Commonly Asked Questions
Which one of the following options is correct?
(a) \[\left( \frac{4}{3}\div 1 \right)-\frac{4}{3}=1\] (b) \[\left( \frac{4}{3}\div 1 \right)-\frac{4}{3}=0-1\]
(c) \[\left( \frac{4}{3}\div 1 \right)-\frac{4}{3}=0\] (d) All of these
(e) None of these
Answer: (c)

Explanation: \[\left( \frac{4}{3}\div 1 \right)-\frac{4}{3}=\frac{4}{3}-\frac{4}{3}=0\]

Simplify: \[\mathbf{45+3\times 2}\,\mathbf{of}\,\mathbf{5-(16+4)-8\div 4}\]
(a) 55 (b) 43
(c) 53 (d) Both (a) and (c)
(e) None of these
Answer: (c)

Explanation: \[45+3\times 2\text{ }of\,5-\left( 16+4 \right)-8\div 4\]
\[=45+3\times 2\text{ }of\text{ }5-20-8\div 4\] [Bracket removed]
\[=45+3\times 2\text{ }of\text{ }5-20-2\] [Operation of division \[8\div 4=2\]]
\[=45+30-20-2\] [Operation of multiplication \[3\times 2\times 5=30\]]
\[=75-20-2\] [Operation of addition \[45+30=75\]]
\[=75-22=53\] [Operation of subtraction].
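The same simplification can be mirrored in code; here "of" is treated as multiplication and the operations follow the same order as above:

```python
value = 45 + 3 * 2 * 5 - (16 + 4) - 8 // 4  # 3 x 2 of 5 = 30, 8 / 4 = 2
assert value == 53
```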
In the following picture, some parts of picture are shaded but some are not. What part of the picture is unshaded?
(a) \[\frac{2}{5}\] (b) \[\frac{1}{5}\]
(c) \[\frac{3}{5}\] (d) All of these
(e) None of these
Answer: (b)

Explanation: The picture has 5 equal parts; 4 parts are shaded and one part is unshaded, so the unshaded part is \(\frac{1}{5}\).

Find the product of the decimals 234.567 and 123.7.
(a) 29016.9372 (b) 29015.9373
(c) 29016.1853 (d) 29015.9379
(e) All of these
Answer: (d)

Explanation: \[234.567\times 123.7=29015.9379.\]

Write the shortest form of the following: \[\mathbf{4000+500+8+0+}\frac{\mathbf{7}}{\mathbf{10}}\mathbf{+}\frac{\mathbf{6}}{\mathbf{1000}}\mathbf{.}\]
(a) 4508.706 (b) 4507.705
(c) 4509.707 (d) All of these
(e) None of these
Answer: (a)

Explanation: \[4000+500+8+0+\frac{7}{10}+\frac{6}{1000}=4508.706.\]
QTW Workshop on Analytic Methods in Computer Science Synopsis
This Quarterly Theory Workshop is on the theme of analytic methods in
computer science. In the last two decades, methods from mathematical analysis have resulted in breakthroughs in the otherwise "discrete" world of theoretical computer science. The speakers Ryan O'Donnell, Oded Regev and Li-Yang Tan will speak about fundamental problems in complexity theory, quantum computing and lattices which have succumbed to these methods.

Logistics

Location: Ruan Transportation Center, Chambers Hall (basement level) (map), 600 Foster Street, Evanston, IL 60208.

Transit: Noyes St. Purple Line (map).

Parking: Validation for North Campus Parking Garage (map) available at workshop.

Registration
Registration is free. Please register at https://goo.gl/forms/wpkRhkaEiznvSfIr2 if you plan to attend.
Titles and Abstracts

Speaker: Ryan O'Donnell, Carnegie Mellon University (CMU)

Title: Statistics of longest increasing subsequences and quantum states

Abstract: Suppose you have access to i.i.d. samples from an unknown probability distribution $p = (p_1, \dots, p_d)$ on $[d]$, and you want to learn or test something about it. For example, if you want to estimate $p$ itself, then the empirical distribution will suffice when the number of samples, $n$, is $O(d/\epsilon^2)$. In general, you can ask many more specific questions about $p$: Is it close to some known distribution $q$? Does it have high entropy? Etc. For many of these questions the optimal sample complexity has only been determined over the last $10$ years in the computer science literature.
The natural quantum version of these problems involves being given
samples of an unknown $d$-dimensional quantum mixed state $\rho$, which is a positive $d \times d$ matrix with trace $1$. Many questions from learning and testing probability carry over naturally to this setting: for example, estimating $\rho$ (the problem of “quantum tomography/state-learning”); or, testing if $\rho$ is close to a fixed state (the problem of “quantum hypothesis-testing/state-certification”).
Surprisingly, the analysis of these quantum problems mostly reduces to
questions concerning the combinatorics of longest increasing subsequences (LISes) in random words. In particular, a key technical question that needs to be faced is this: given a random word $w \in [d]^n$, where each letter $w_i$ is drawn i.i.d. from some distribution $p$, what do we expect $\mathrm{LIS}(w)$ to be? Answering this question requires diversions into the Robinson–Schensted–Knuth algorithm, representation theory of the symmetric group, the theory of symmetric polynomials, and many other interesting areas.
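For readers who want to experiment, $\mathrm{LIS}(w)$ for a word can be computed in $O(n \log n)$ time by patience sorting; the sketch below uses the weakly increasing version, which is the natural notion for words with repeated letters (that choice is my assumption, not stated in the abstract):

```python
import bisect
import random

def lis_length(word):
    # patience sorting: tails[i] holds the smallest possible last letter
    # of a weakly increasing subsequence of length i + 1
    tails = []
    for letter in word:
        i = bisect.bisect_right(tails, letter)
        if i == len(tails):
            tails.append(letter)
        else:
            tails[i] = letter
    return len(tails)

assert lis_length([3, 1, 2, 2, 5, 4]) == 4       # e.g. the subsequence 1, 2, 2, 4

random.seed(0)
w = [random.randrange(5) for _ in range(10000)]  # uniform random word over [5]
# LIS is at least n/d by pigeonhole (take all copies of the most frequent letter)
assert lis_length(w) >= 2000
```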
Speaker: Li-Yang Tan, Toyota Technological Institute (TTI) at Chicago

Title: Fooling intersections of low-weight halfspaces

Abstract: A weight-t halfspace is a Boolean function f(x)=sign(w_1 x_1 + \cdots + w_n x_n - \theta) where each w_i is an integer in \{-t,\dots,t\}. We give an explicit pseudorandom generator that \delta-fools any intersection of k weight-t halfspaces with seed length poly(\log n, \log k,t,1/\delta). In particular, our result gives an explicit PRG that fools any intersection of any quasipoly(n) number of halfspaces of any polylog(n) weight to any 1/polylog(n) accuracy using seed length polylog(n). Prior to this work no explicit PRG with non-trivial seed length was known even for fooling intersections of n weight-1 halfspaces to constant accuracy.
The analysis of our PRG fuses techniques from two different lines of work on
unconditional pseudorandomness for different kinds of Boolean functions. We extend the approach of Harsha, Klivans, and Meka for fooling intersections of regular halfspaces, and combine this approach with results of Bazzi and Razborov on bounded independence fooling CNF formulas. Our analysis introduces new coupling-based ingredients into the standard Lindeberg method for establishing quantitative central limit theorems and associated pseudorandomness results.
Joint work with Rocco Servedio.
Speaker: Oded Regev, Courant Institute, New York University (NYU)

Title: A Reverse Minkowski Theorem

Abstract: Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.
I will present a proof of a reverse form of Minkowski’s theorem,
conjectured by Daniel Dadush in 2012, showing that locally dense lattices are also globally dense (in the appropriate sense).
The talk will be self-contained and I will not assume any familiarity with lattices.

Based on joint papers with Daniel Dadush and Noah Stephens-Davidowitz.
Let $Li_s(z)$ denote the usual polylogarithm. The elementary functional equation $$Li_{-n}(z)=(-1)^{n-1}Li_{-n}(1/z)$$ holds for $n\geq 1$. I remember only that the proof used some reproducing property of the Stirling numbers of the second kind.
This functional equation is rather useful because, taking linear combinations, it amounts to saying that a meromorphic function whose power series on the open unit disc has coefficients given by the values of a polynomial $f$ of degree $d$ at integer arguments:
$$\sum_{k=0}^{\infty}f(k)z^k=\frac{1}{(1-z)^{d+1}}\sum_{k=0}^{d}\left(\sum_{j=0}^k{d+1\choose j}(-1)^jf(k-j)\right)z^k$$
is defined for $|z|>1$ by the series
$$-\sum_{k=1}^{\infty}f(-k)z^{-k}.$$
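For small $n$ the functional equation is easy to check numerically from the standard closed forms $Li_{-1}(z)=z/(1-z)^2$ and $Li_{-2}(z)=z(1+z)/(1-z)^3$, which follow by applying $z\frac{d}{dz}$ to the geometric series (a sanity-check sketch, not a proof):

```python
def li_neg1(z):
    # Li_{-1}(z) = z / (1 - z)^2
    return z / (1 - z) ** 2

def li_neg2(z):
    # Li_{-2}(z) = z (1 + z) / (1 - z)^3
    return z * (1 + z) / (1 - z) ** 3

for z in [0.5, -2.0, 3.0 + 1.0j, -0.25 + 0.5j]:
    assert abs(li_neg1(z) - li_neg1(1 / z)) < 1e-12  # n = 1: (-1)^{n-1} = +1
    assert abs(li_neg2(z) + li_neg2(1 / z)) < 1e-12  # n = 2: (-1)^{n-1} = -1
```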
I would like to know how this functional equation (or the continuation by sum over negative powers) can be proved?
Please note that the use of statements like
$$\sum_{k=-\infty}^{\infty}k^nz^k=\left(z\frac{d}{dz}\right)^n\sum_{k=-\infty}^{\infty}z^k=\left(z\frac{d}{dz}\right)^n 0=0$$ is acceptable only if you can also prove that the sums of positive and negative powers converge on a common arc of the unit circle.
In his answer below, Fedor Petrov has pointed out that the use of divergent series as formal generating functions is justified in some cases, so the last statement is incorrect.
One thing which we should keep in mind is that
duplicates might be useful. They improve the chance that another user will find the question, since each duplicate adds another copy with a somewhat different phrasing of the title. So if you spent a reasonable time searching and did not find your question, it is not such a bad thing if it is later closed as a duplicate. (And if it is closed as a duplicate, that is not automatically a reason to delete the question.) But at least for basic questions, which have already been asked many times, it is probably better to avoid adding new copies. And, of course, trying to avoid duplicates is definitely not the only situation in which it is useful to be able to find questions on this site.
One another thing worth mentioning is that
experience both with this site and with the subject of the question helps a lot. A regular user of the site might immediately spot: This is a question I have seen here before. Somebody who has been teaching calculus for a long time can immediately recognize that a question is in fact a standard exercise which is in almost every introductory textbook on the subject and therefore it is very likely to already exist on the site. Experience with the subject can also help to come up with alternative phrases which might appear in the title of question we are looking for. Experience with the site can help in guessing what tag might have been used if this was asked in the past.
But enough side remarks, let us get to the core of the question. However, I should say in advance that you will find here nothing special. The whole answer could be summarized as: use Google, use the built-in search, have a look at lists of questions (created automatically by SE software or manually by users). I will try to list some useful techniques and illustrate them with examples. (And I will expand this post if I am able to think of some other useful tips or examples.) When listing these examples, I will say which posts I was able to find. Of course, as the contents of the site change, the results of searches might change, too. (And I am also aware that Google does not give the same results to everybody; they might differ slightly.)
Let us start with basics:
You can use an external search engine; for instance, you can use Google and add
site:math.stackexchange.com to restrict the search to this site.
You can use built-in search on this site. It is good to know that you can refine search using tags. And also some other advanced search tips, which can be found here or here.
Now let me list some examples of things I use when searching for questions.
When searching by tags, it is possible to choose the frequent tab. (This shows the questions which have the most links, which is a good proxy for "questions which are asked frequently".)
Examples:
It might be useful to check list of related questions (and other lists generated by the software)
When viewing a question, you can see a list of related questions in the sidebar. (I'd say, based on my experience, that they seem to depend mostly on the title and tags. However, I do not know the exact mechanism by which they are generated.) For questions which are asked frequently, you can surprisingly often find out that a question is a duplicate just by looking at the related questions. And this list can be useful if you are searching for some question, too. If you managed to find at least a question which has some similar mathematical expression in the title, there is a chance that you will find something useful among the related questions.
There is also another list which is autogenerated. When posting a question, as soon as you write the title, you are shown list named "Questions that may already have your answer". So maybe if you are searching for something, one thing worth trying is to click on "ask question" and just fill the title (and perhaps tags - I am not sure whether they influence this list) and then check the questions which are offered by the software. Some users say that this way of searching returns much better results than using built-in search. See Why are “Questions that may already have your answer” search results better than the actual search results? and Link up the excellent search engine that gives “Questions that may already have an answer” with the search box. (This probably depends on the type of the question and of how large part of the question can be included in the title. And this is another thing which shows that using descriptive titles is a very useful thing to do.)
For some basic questions, which appear here frequently, some users compiled lists containing those questions and links to posts on this site. So it might be useful checking whether your question is there:
Google Images might be useful, too.
Many posts on MSE contain nice images illustrating the problems or solutions. If you have seen before that the problem you are looking for has picture proof and you think that you can recognize the relevant picture, searching on Google Images might be a good way to go.
For example if you have seen that the proof that harmonic numbers grow approximately as logarithm can be illustrated using nice graph, you can try to search for harmonic series site:math.stackexchange.com or harmonic number site:math.stackexchange.com or harmonic logarithm site:math.stackexchange.com (or some other similar possibilities).
If you know that the inequality $\sin x<x$ can be illustrated using a picture you can try inequality "sin x" site:math.stackexchange.com.
If you recall that you have seen somewhere a proof of Young's inequality illustrated by a nice picture, you can try to search for young inequality site:math.stackexchange.com or young inequality area site:math.stackexchange.com.
How to search for mathematical expressions?
EDIT: Notice that this was written before I learned about the Approach0 search engine. There still might be situations where it might be useful to try Google. (For example, when searching for multiple expressions, Approach0 uses the OR operator, while in Google you can search for posts which contain all of the given keywords. So I can imagine trying Google if I want to find some combination of a formula with specific keywords. Or if I want, for some reason, to rely on Google's PageRank.) And it is useful to know about several methods of searching. But generally, when searching for formulas on this site, Approach0 is currently my first choice. You can check yourself whether the results in Approach0 are good for the examples I listed; see the links at the end of the post.
As soon as we are looking for mathematical formulas, the things become more complicated. (Although I have seen posts, both on meta and on main, mentioning some search engines designated specifically for searching mathematical expression, I have not experimented with them much.)
Posts on this site (mostly) use MathJax (LaTeX) syntax to write mathematics. (And, of course, this is not the only site using MathJax. You might try omit restriction to this site in the searches below to see that you also get results from some other sites.) So we could simply search for parts of the formula we are looking for, and write them using MathJax.
One problem with this is that for many things there are different possibilities how to write the same thing. For example
\frac 1x $\frac 1x$,
\frac1x $\frac1x$,
dfrac 1x $\dfrac 1x$, or
{1\over x} ${1\over x}$.
Another problem is that there are many possible choices for the variables. But still it is not always that bad.
Usually if a variable is index in a sum, it will be i, j, k or n. The variables often used in integrals are $x$, $y$ and (mostly after substitution) $t$ or $u$. Terms of sequences will most frequently be denoted as $x_n$, $y_n$, $a_n$, $b_n$. (Maybe sometimes $x_k$ or $x_i$ instead of $x_n$.) In calculation of limits, $n$ is often used for limits of sequences. For limit of functions, the variable is often $x$. So there might be several possibilities, but we might at least try the ones which are most frequently used. (Depending on how much time we are willing to spend with the searching.)
If the formula/object/theorem I am looking for has a name and I know this name, it might be useful to add it among the keywords used in the search. If I can guess some result or technique which might probably be used in the solution, that might give me other reasonable keywords to add.
Even if the search does not return exactly the question we want, but at least something which has similar title, it might still be worth trying to click on that post and checking list of related questions in the sidebar. (Since the list of related questions is highly dependent on the title and the tags, we have a chance that among the questions with similar titles is the question we want.)
I do not know some good general advice to add here. Rather than that, let us try some specific examples.
Some further examples can be found here and here
We want to search for the formula for $\sum\limits_{k=1}^n k^2$
We can try searches like: sum "k^2" site:math.stackexchange.com, sum "j^2" site:math.stackexchange.com, sum "i^2" site:math.stackexchange.com or sum "n^2" site:math.stackexchange.com.
Already the first search returns: Prove that $\sum\limits_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$?
It is probably not too surprising that this was easy to find; I chose a very well-known formula as an example. (I showed above that it is relatively easy to find a post about this formula using tags and the frequent tab.) But still, a few comments which might be useful in general, not only for this particular question.
If I know what the result is supposed to be, that might help me further limit the search results. For example, I could search for something like: sum "k^2" frac "2n+1 6" site:math.stackexchange.com.
In this case, searching for the textual description would also help. Search forsum first squares site:math.stackexchange.com returns How to get to the formula for the sum of squares of first n numbers? and also some other relevant result. (Notice that the post I linked to is closed as a duplicate. But the existence of the duplicate still helps when searching.)
We want to find posts about $\sum\limits_{k=1}^\infty\frac 1{k(k+1)(k+2)}$.
A reasonable thing to try is: sum frac "k(k+1)(k+2)" site:math.stackexchange.com. We might also try other names of variables and search for sum frac "n(n+1)(n+2)" site:math.stackexchange.com or sum frac "i(i+1)(i+2)" site:math.stackexchange.com or sum frac "j(j+1)(j+2)" site:math.stackexchange.com.
If we can guess what could be used in the proof, we can add some phrases which might improve our chances of finding posts about this. (If we guessed correctly and the added word is indeed used in one of the answers.) For example, telescoping sum frac "k(k+1)(k+2)" site:math.stackexchange.com, partial fraction sum frac "k(k+1)(k+2)" site:math.stackexchange.com or induction sum frac "k(k+1)(k+2)" site:math.stackexchange.com
Almost any of the searches I listed above returns some relevant results - either about the infinite series or about the finite sum $\sum\limits_{k=1}^n\frac 1{k(k+1)(k+2)}$. The latter is also helpful for somebody interested in this question.
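Since trying several index names by hand is tedious, the variants above can also be generated programmatically. A minimal sketch (the site, keywords, and expression template are just the ones used in this example; the function name is our own):

```python
# Build search-query variants for a summand, trying common index names.
# The quoted expression is kept verbatim so the search engine matches it literally.
def query_variants(template, site="math.stackexchange.com",
                   variables=("k", "n", "i", "j"), extra_keywords=("sum", "frac")):
    queries = []
    for v in variables:
        expr = template.format(v=v)
        queries.append('{kw} "{expr}" site:{site}'.format(
            kw=" ".join(extra_keywords), expr=expr, site=site))
    return queries

for q in query_variants("{v}({v}+1)({v}+2)"):
    print(q)
```

Each printed line is one search string to paste into a search engine, e.g. sum frac "k(k+1)(k+2)" site:math.stackexchange.com.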
We want to find posts containing the limit $\lim_{n\to\infty} (\sqrt{n+1}-\sqrt n)$.
We can search for limit "sqrt n+1 sqrt n" site:math.stackexchange.com
Among the first results I found this post Convergence of $\sum_{n=1}^\infty (-1)^n(\sqrt{n+1}-\sqrt n)$ - it is about a different question, but contains the computation of the limit. (So this was not an entirely unsuccessful attempt.)
We could also try limit "sqrt n sqrt n-1" site:math.stackexchange.com, since the limit might be shifted by one.
Finally, after trying limit "sqrt x+1 sqrt x" site:math.stackexchange.com I found this question Limit problem: $\sqrt{x+1} - \sqrt{x}$ as $x$ approaches infinity. (And some other posts about the same limit are linked to it.)
EDIT: Here you can try to search for the same expressions in Approach0:
Since the above examples include infinite series and limits, I will point out that Approach0 returns different results if you replace $\infty$ by $+\infty$. (And both notations are relatively common, especially for limits.) You can test this if you search for $\sum\limits_{k=1}^{+\infty}\frac 1{k(k+1)(k+2)}$ and for $\lim_{n\to+\infty} (\sqrt{n+1}-\sqrt n)$. |
I am not aware of any name for this property. However, it can be proved rather quickly. I won't deal with the existence of geodesics (some compactness argument may work, but it is not obvious to me which one), and will only prove that any geodesic in $X := \mathbb{R}^n-C^{int}$ between points of $C^{bd}$ must lie in $C^{bd}$.
Let $x$, $y$ be in $C^{bd}$. Assume that there exists a geodesic $\gamma : [0,1] \to X$ between $x$ and $y$. If this geodesic does not lie in $C^{bd}$, then there must be some point $z = \gamma (t_0) \in \gamma ([0,1])$ which does not belong to $C^{bd}$.
By the Hahn-Banach theorem, I can find a hyperplane $H$ which strictly separates $C$ and $z$. By the intermediate value theorem, there must be some times $t_1$, $t_2$ with $0 < t_1 < t_0 < t_2 < 1$ and such that $\gamma (t_1)$ and $\gamma (t_2)$ both belong to $H$. The geodesic $\gamma$ restricted to $[t_1, t_2]$ must also be a geodesic.
The shortest path between two points in $H$ is a line. So, $z$ cannot belong to the geodesic between $\gamma (t_1)$ and $\gamma (t_2)$. We get a contradiction. Hence, the geodesic between $x$ and $y$ must lie in $C^{bd}$. |
I've checked this for all $X$ up to $10^{10^5}$, here's how. We are searching for integer solutions to $T_{n-1}^2 + T_n^2 + T_{n+1}^2 = X^2$ (note I've reparametrised, compared to the OP), which expands to$$3 n^4 + 6 n^3 + 15 n^2 + 12 n + 4 =4X^2. $$ Curiously enough, this is an invertible expression, and we can instead say$$n = \frac{1}{6} \left(\sqrt{24 \sqrt{3X^2 + 6} - 63}- 3\right)$$assuming $n>0$.
For $n$ to be an integer, $\sqrt{24 \sqrt{3X^2 + 6} - 63}$ must be an integer, so $24 \sqrt{3X^2 + 6} - 63$ must be a square. Furthermore, it must be an integer, hence $ \sqrt{3X^2 + 6}$ must be an integer. (Note that $3X^2 + 6$ is an integer, so $ \sqrt{3X^2 + 6}$ is either an integer or irrational - being $\frac{1}{24}$ of an integer cannot work.) So, it suffices to search for integer solutions $(X,Y)$ to the equation $Y^2 - 3X^2 = 6$ such that $24Y-63$ is a square.
But, these are easy to generate, with some knowledge of Pell equations. I'll switch to talking about $x^2 - 3y^2 = 6$ for now, for consistency with literature. First observe that the equation $a^2 - 3b^2 = 1$ has a solution given by $(a,b) = (2, 1)$, and that this is the positive non-trivial solution with smallest $b$. By Theorem 5.3 (and Example 5.6) in Keith Conrad's first blurb on Pell equations, all solutions to this equation are generated by this one. Consulting Keith Conrad's second blurb on Pell equations, we can see that every other solution is a Pell multiple of this one. Considering the original equation $x^2 - 3y^2 = 6$ now, Theorem 3.3 in the second blurb tells us that every solution of this is given by a Pell multiple of a solution $(x_1, y_1)$ where $|x_1| \leq 22$ and $|y_1| \leq 7$. A quick computer check gives only $(3, 1)$ and $(9, 5)$ as the positive solutions here, and $(9, 5)$ is a Pell multiple of $(3, 1)$.
Hence, all solutions of $x^2 - 3y^2 = 6$ are Pell multiples of $(3, 1)$, where the next Pell multiple of pair $(x, y)$ is given by $(2x + 3y, x + 2y)$, so we can easily generate all the positive integer solutions in Haskell (using the arithmoi library)!
import Math.NumberTheory.Powers.Squares

-- Step from one solution of x^2 - 3y^2 = 6 to the next Pell multiple.
recurse :: (Integer, Integer) -> (Integer, Integer)
recurse (a, b) = (2*a + 3*b, a + 2*b)

-- All positive solutions, starting from the fundamental solution (3, 1).
vals :: [(Integer, Integer)]
vals = iterate recurse (3, 1)

-- A pair (Y, X) works if 24Y - 63 is a perfect square, i.e. n is an integer.
works :: (Integer, Integer) -> Bool
works (y, _) = isSquare' (24 * y - 63)

-- All working pairs with X below the given limit.
solutions :: Integer -> [(Integer, Integer)]
solutions limit = filter works bounded
  where bounded = takeWhile (\(_, x) -> x < limit) vals
vals is a list of the solutions to the Pell equation, generated using the recurrence explained above. A pair $(Y, X) = (x, y)$ works if $24Y - 63$ is a square, checked using the arithmoi package for number theory. Finally, solutions limit checks every possible $(Y, X)$ that satisfies X < limit to see if it works or not.
solutions (10^(10^5)) gives [(3, 1), (33, 19)]. The pair $(3, 1)$ corresponds to the trivial solution $T_{-1}^2 + T_0^2 + T_1^2 = 1^2$, while $(33, 19)$ corresponds to the solution in the OP. In particular, $24 \times 33 - 63 = 729 = 27^2$, so $n=4$, and $T_3^2 + T_4^2 + T_5^2 = 19^2$.
My computer checks this in under two minutes, but fails for $10^{10^6}$, seemingly due to memory errors. Hopefully someone can grow this into a full proof using cleverer ideas about Pell equations, or check larger values.
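For readers without a Haskell toolchain, the same search is easy to port. A sketch in Python (our own translation, using the standard library's exact integer square root for the square test):

```python
from math import isqrt

def is_square(m):
    """Exact perfect-square test for non-negative integers."""
    if m < 0:
        return False
    r = isqrt(m)
    return r * r == m

def solutions(limit):
    """Pell multiples (Y, X) of (3, 1) for Y^2 - 3X^2 = 6 with X < limit,
    keeping those where 24Y - 63 is a perfect square (so n is an integer)."""
    out = []
    y, x = 3, 1
    while x < limit:
        if is_square(24 * y - 63):
            out.append((y, x))
        y, x = 2 * y + 3 * x, y + 2 * x
    return out

print(solutions(10**100))  # → [(3, 1), (33, 19)]
```

Each Pell step multiplies the solution by roughly $2+\sqrt 3 \approx 3.73$, so even a bound like $10^{100}$ takes only a couple of hundred iterations.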
EDIT: Now checked up to $10^{2\times {10^5}}$. |
Let $A$ be a square matrix with real or complex coefficients of size $n$. Define its characteristic polynomial by $\chi_A(X) = \det(A-XI_n)$ (or $\det(XI_n-A)$ if you prefer). The question is: Prove that there exists a positive integer $q$ such that $\chi_A^q(A)=0$, where $\chi_A^q(X)= \chi_A(X)\times\cdots\times \chi_A(X)$ ($q$ factors), without using the Cayley-Hamilton theorem or the definition of an ideal of a ring.
My problem with this problem is to find a "good" proof that doesn't mimic a proof of Cayley-Hamilton theorem. I know that it is easy to see (with a dimensional argument) that there exists a annihilating polynomial. But here I can't think of anything.
An attempt of solution in the real or complex case :
We know that there exists an annihilating polynomial (since $(I_n,A,A^2,\cdots,A^{n^2})$ can't be linearly independent, where $I_n$ is the identity matrix); we denote by $P$ such a polynomial. Then $P=\prod_{i=1}^p \left( X-\lambda_i\right)^{\alpha_i}$ ($\lambda_i\in \mathbb{C}$). We can assume that the $\lambda_i$ are roots of the characteristic polynomial: if not, then by definition of the characteristic polynomial $A-\lambda_iI_n$ is invertible, so we can multiply $P$ by $(A-\lambda_iI_n)^{-\alpha_i}$ and still get an annihilating polynomial; since all the factors commute, the term $\left( X-\lambda_i\right)^{\alpha_i}$ disappears. So if we take a sufficiently large power of the characteristic polynomial of $A$, we recover all the factors of $P$ times another polynomial. This gives the desired result.
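As a quick numerical sanity check of the claim (here with $q=1$, which is the Cayley-Hamilton statement itself), one can evaluate the characteristic polynomial at a random matrix. A sketch with numpy (our own illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Coefficients of the characteristic polynomial det(X*I - A), highest degree first.
coeffs = np.poly(A)

# Evaluate the polynomial at the matrix A: sum_i c_i * A^(n-i).
n = len(coeffs) - 1
P = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))

print(np.linalg.norm(P))  # numerically ~ 0
```

The norm is zero up to floating-point error, as the statement predicts.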
What do you think of my attempt? Any ideas for the case where the coefficients of $A$ belong to an arbitrary commutative ring?
Edited after Will Sawin's comment:
Consider the set $\mathcal{M}$ of all automorphic L-functions belonging to the Selberg class. Such a set is closed under the product $\cdot$ and the tensor product $\otimes$ such that $\forall p\in\mathbb{P}, \ \ a_{p}(F\otimes G)=a_{p}(F)\cdot a_{p}(G)$ where $a_{n}(H)$ is the $n$-th Dirichlet coefficient of $H$, that is $H(s)=\displaystyle{\sum_{n\gt 0}\dfrac{a_{n}(H)}{n^s}}$ whenever $\Re(s)\gt 1$. This tensor product corresponds, on the automorphic side, to the Rankin-Selberg convolution. For the sake of simplicity, the term 'L-function' will be used to mean any element of $\mathcal{M}$.
Let's define the automorphism group of $\mathcal{M}$ as the group, under composition, of the bijective maps $\Phi$ from $\mathcal{M}$ to itself such that the following properties are simultaneously fulfilled:
A) $\Phi$ maps a primitive L-function to a primitive L-function
B) $\forall (F,G)\in\mathcal{M}^{2}, \ \ \Phi(F\odot G)=\Phi(F)\odot\Phi(G)$ where $\odot\in\{\times, \otimes\}$
Such an automorphism of $\mathcal{M}$ preserves the degree of any L-function, that is $d_{\Phi(F)}=d_{F}$.
Assuming that for any two L-functions $F$ and $G$, $d_{F\otimes G}=d_{F}.d_{G}$, let's now associate to an L-function $H$ a complex manifold $X_{H}$ of dimension $d_{H}$ such that for any two L-functions $F$ and $G$, $X_{F.G}=X_{F}\oplus X_{G}$ and $X_{F\otimes G}=X_{F}\otimes X_{G}$ so that $F=G\Leftrightarrow X_{F}=X_{G}$. We shall denote the set of all $X_{F}$ where $F$ runs over $\mathcal{M}$ by $\mathcal{M}'$ and any element of $\mathcal{M}'$ will be called an L-manifold.
The automorphism group of $\mathcal{M}'$ is defined in a similar fashion as for the one of $\mathcal{M}$ so that these two groups are isomorphic.
Let's now define the notion of abstract Galois group $\operatorname{Gal}(A/B)$ as the set of all automorphisms of $A$ that preserve $B$ pointwise, and let's associate to any L-function $F$ its 'canonical' representation $(\rho_{F}, V_{F})$ such that the following properties are simultaneously fulfilled:
C) there exists an algebraic number field $K_{F}$ the absolute Galois group of which, denoted by $G_{K_{F}}$, is isomorphic to both $\operatorname{Gal}(\mathcal{M}/<F>)$ and $\operatorname{Gal}(\mathcal{M}'/<X_F>)$, and is such that $\rho_{F}$ is a group homomorphism from $G_{K_{F}}$ to $\operatorname{GL}_{d_{F}}(\mathbb{C})=Aut(V_{F})$ where $<F>=\{\bigodot_{k=0}^{m}F, \odot\in\{\times,\otimes\}, m\in\mathbb{N}_{0}\}$, $<X_{F}>$ is defined in a similar way, and so that $X_{F}$ is locally isomorphic to $V_{F}$.
Assuming $F$ is the L-function attached to an automorphic representation of $\operatorname{GL}_{d_{F}}(\mathbb{A}_{K_{F}})$ where $\mathbb{A}_{K_{F}}$ is the adele ring of $K_{F}$ we require that the considered representation $(\rho_{F}, V_{F})$ is faithful, and that it is irreducible if and only if $F$ is primitive. We also require that $V_{F.G}=V_{F}\oplus V_{G}$ and that $V_{F\otimes G}=V_{F}\otimes V_{G}$.
D) $F(s)=L(s,\rho_{F})$
My questions are:
1) does every L-manifold give rise naturally to a motive?
2) If so, is every L-function motivic? Can one say that any motivic L-function arises from a Galois representation and conversely?
Many thanks in advance. |
Theorem \(\PageIndex{1}\): Law of Sines
If a triangle has sides of lengths \(a \), \(b \), and \(c \) opposite the angles \(A \), \(B \), and \(C \), respectively, then
\[\label{2.1} \dfrac{a}{\sin\;A} ~=~ \dfrac{b}{\sin\;B} ~=~ \dfrac{c}{\sin\;C} ~.\]
Note that by taking reciprocals, Equation \ref{2.1} can be written as
\[\label{2.2}
\dfrac{\sin\;A}{a} ~=~ \dfrac{\sin\;B}{b} ~=~ \dfrac{\sin\;C}{c} ~, \]
and it can also be written as a collection of three equations:
\[\label{2.3}
\dfrac{a}{b} ~=~ \dfrac{\sin\;A}{\sin\;B} ~~,\quad \dfrac{a}{c} ~=~ \dfrac{\sin\;A}{\sin\;C} ~~,\quad \dfrac{b}{c} ~=~ \dfrac{\sin\;B}{\sin\;C} \]
Another way of stating the Law of Sines is:
The sides of a triangle are proportional to the sines of their opposite angles.
Proof
To prove the Law of Sines, let \(\triangle\,ABC \) be an oblique triangle. Then \(\angle \,ABC \) can be acute, as in Figure \(\PageIndex{1}\)(a), or it can be obtuse, as in Figure \(\PageIndex{1}\)(b). In each case, draw the
altitude from the vertex at \(C \) to the side \(\overline{AB} \). In Figure \(\PageIndex{1}\)(a) the altitude lies inside the triangle, while in Figure \(\PageIndex{1}\)(b) the altitude lies outside the triangle.
Let \(h \) be the height of the altitude. For each triangle in Figure \(\PageIndex{1}\), we see that
\[ \dfrac{h}{b} ~=~ \sin\;A\label{2.4}\]
and
\[ \dfrac{h}{a} ~=~ \sin\;B\label{2.5}\]
(in Figure \(\PageIndex{1}\)(b), \(\dfrac{h}{a} = \sin\;(180^\circ - B) = \sin\;B \) by Equation (1.19) in Section 1.5). Thus, solving for \(h \) in Equation \ref{2.5} and substituting that into Equation \ref{2.4} gives
\[\dfrac{a\;\sin\;B}{b} ~=~ \sin\;A ~,\label{2.6}\]
and so putting \(a \) and \(A \) on the left side and \(b \) and \(B \) on the right side, we get
\[\dfrac{a}{\sin\;A} ~=~ \dfrac{b}{\sin\;B} ~.\label{2.7}\]
By a similar argument, drawing the altitude from \(A \) to \(\overline{BC} \) gives
\[\label{2.8} \dfrac{b}{\sin\;B} ~=~ \dfrac{c}{\sin\;C} ~,\]
so putting the last two equations together proves the theorem.
\(\square\)
Note that we did not prove the Law of Sines for right triangles, since it turns out (see Exercise 12) to be trivially true for that case.
Example \(\PageIndex{1}\):
Case 1 - One side and two angles known
Solve the triangle \(\triangle\,ABC \) given \(a = 10 \), \(A =41^\circ \), and \(C = 75^\circ \).
Solution
We can find the third angle by subtracting the other two angles from \(180^\circ \), then use the law of sines to find the two unknown sides. In this example we need to find \(B \), \(b \), and \(c \). First, we see that
\[\nonumber B ~=~ 180^\circ ~-~ A ~-~ C ~=~ 180^\circ ~-~ 41^\circ ~-~ 75^\circ \quad\Rightarrow\quad
\boxed{B ~=~ 64^\circ} ~.\]
So by the Law of Sines we have
\[\nonumber \begin{align}
\dfrac{b}{\sin\;B} ~&=~ \dfrac{a}{\sin\;A} \quad&\Rightarrow\quad b ~&=~ \dfrac{a\;\sin\;B}{\sin\;A} ~&=~ \dfrac{10\;\sin\;64^\circ}{\sin\;41^\circ} \quad&\Rightarrow\quad \boxed{b ~=~ 13.7} ~,~\text{and}\\[4pt]\nonumber \dfrac{c}{\sin\;C} ~&=~ \dfrac{a}{\sin\;A} \quad&\Rightarrow\quad c ~&=~ \dfrac{a\;\sin\;C}{\sin\;A} ~&=~ \dfrac{10\;\sin\;75^\circ}{\sin\;41^\circ} \quad&\Rightarrow\quad \boxed{c ~=~ 14.7} ~. \end{align}\]
Example \(\PageIndex{2}\):
Case 2 - Two sides and one opposite angle known
Solve the triangle \(\triangle\,ABC \) given \(a = 18 \), \(A = 25^\circ \), and \(b = 30 \).
Solution
In this example we know the side \(a \) and its opposite angle \(A \), and we know the side \(b \). We can use the Law of Sines to find the other opposite angle \(B \), then find the third angle \(C \) by subtracting \(A \) and \(B \) from \(180^\circ \), then use the law of sines to find the third side \(c \). By the Law of Sines, we have
\[\nonumber \dfrac{\sin\;B}{b} ~=~ \dfrac{\sin\;A}{a} \quad\Rightarrow\quad \sin\;B ~=~ \dfrac{b\;\sin\;A}{a} ~=~
\dfrac{30\;\sin\;25^\circ}{18} \quad\Rightarrow\quad \sin\;B ~=~ 0.7044 ~.\]
Using the \(\fbox{\(\sin^{-1}\)}\) button on a calculator gives \(B = 44.8^\circ \). However, recall from Section 1.5 that \(\sin\;(180^\circ - B) = \sin\;B \). So there is a second possible solution for \(B \), namely \(180^\circ - 44.8^\circ = 135.2^\circ \). Thus, we have to solve
twice for \(C \) and \(c \) : once for \(B = 44.8^\circ \) and once for \(B = 135.2^\circ\):
\[\begin{array}{c|c}
\boxed{B = 44.8^\circ} & \boxed{B = 135.2^\circ} \\ C = 180^\circ - A - B = 180^\circ - 25^\circ - 44.8^\circ = 110.2^\circ & C = 180^\circ - A - B = 180^\circ - 25^\circ - 135.2^\circ = 19.8^\circ \\ \dfrac{c}{\sin\;C} = \dfrac{a}{\sin\;A} ~\Rightarrow~ c = \dfrac{a\;\sin\;C}{\sin\;A} = \dfrac{18\;\sin\;110.2^\circ}{\sin\;25^\circ} & \dfrac{c}{\sin\;C} = \dfrac{a}{\sin\;A} ~\Rightarrow~ c = \dfrac{a\;\sin\;C}{\sin\;A} = \dfrac{18\;\sin\;19.8^\circ}{\sin\;25^\circ} \\ \Rightarrow~ c = 40 & \Rightarrow~ c = 14.4 \end{array}\]
Hence, \(\fbox{\(B = 44.8^\circ \), \(C = 110.2^\circ \),\(c = 40\)}\) and \(\fbox{\(B = 135.2^\circ \), \(C = 19.8^\circ \), \(c = 14.4\)}\) are the two possible sets of solutions. This means that there are two possible triangles, as shown in Figure \(\PageIndex{2}\).
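The two-branch computation can likewise be scripted, returning zero, one, or two solutions. A sketch (names are our own):

```python
from math import sin, asin, degrees, radians

def solve_ssa(a, A_deg, b):
    """Case 2: two sides and one opposite angle known. Returns a list of
    (B_deg, C_deg, c) solutions (0, 1, or 2 of them)."""
    sinB = b * sin(radians(A_deg)) / a
    if sinB > 1:
        return []                        # impossible: |sin B| <= 1
    # B and its supplement 180° - B both have the same sine.
    candidates = {round(degrees(asin(sinB)), 1)}
    candidates.add(round(180 - degrees(asin(sinB)), 1))
    out = []
    for B_deg in sorted(candidates):
        C_deg = 180 - A_deg - B_deg
        if C_deg <= 0:
            continue                     # angles must sum to 180°
        c = a * sin(radians(C_deg)) / sin(radians(A_deg))
        out.append((B_deg, round(C_deg, 1), round(c, 1)))
    return out

print(solve_ssa(18, 25, 30))  # → [(44.8, 110.2, 40.0), (135.2, 19.8, 14.4)]
```

Both branches reproduce the two sets of solutions found above.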
In Example \(\PageIndex{2}\) we saw what is known as the
ambiguous case. That is, there may be more than one solution. It is also possible for there to be exactly one solution or no solution at all.
Example \(\PageIndex{3}\):
Case 2 - Two sides and one opposite angle known
Solve the triangle \(\triangle\,ABC \) given \(a = 5 \), \(A = 30^\circ \), and \(b = 12 \).
Solution
By the Law of Sines, we have
\[\dfrac{\sin\;B}{b} ~=~ \dfrac{\sin\;A}{a} \quad\Rightarrow\quad \sin\;B ~=~ \dfrac{b\;\sin\;A}{a} ~=~
\dfrac{12\;\sin\;30^\circ}{5} \quad\Rightarrow\quad \sin\;B ~=~ 1.2 ~,\nonumber \]
which is impossible since \(| \sin\;B | \le 1 \) for any angle \(B \). Thus, there is \(\fbox{no solution}\).
There is a way to determine how many solutions a triangle has in Case 2. For a triangle \(\triangle\,ABC \), suppose that we know the sides \(a \) and \(b \) and the angle \(A \). Draw the angle \(A\) and the side \(b \), and imagine that the side \(a \) is attached at the vertex at \(C \) so that it can "swing" freely, as indicated by the dashed arc in Figure \(\PageIndex{3}\) below.
If \(A \) is acute, then the altitude from \(C \) to \(\overline{AB} \) has height \(h = b\;\sin\;A \). As we can see in Figure \(\PageIndex{3}\)(a)-(c), there is no solution when \(a < h \) (this was the case in Example \(\PageIndex{3}\)); there is exactly one solution - namely, a right triangle - when \(a = h\); and there are two solutions when \(h < a < b \) (as was the case in Example \(\PageIndex{2}\)). When \(a \ge b \) there is only one solution, even though it appears from Figure \(\PageIndex{3}\)(d) that there may be two solutions, since the dashed arc intersects the horizontal line at two points. However, the point of intersection to the left of \(A \) in Figure \(\PageIndex{3}\)(d) can not be used to determine \(B \), since that would make \(A \) an obtuse angle, and we assumed that \(A \) was acute.
If \(A \) is not acute (i.e. \(A \) is obtuse or a right angle), then the situation is simpler: there is no solution if \(a \le b \), and there is exactly one solution if \(a > b \) (see Figure \(\PageIndex{4}\)).
Table 2.1 summarizes the ambiguous case of solving \(\triangle\,ABC \) when given \(a \), \(A \), and \(b \). Of course, the letters can be interchanged, e.g. replace \(a \) and \(A \) by \(c\) and \(C \), etc.
Table 2.1 Summary of the ambiguous case
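The case analysis above can be condensed into a small helper that reports how many triangles exist given \(a \), \(A \), and \(b \). A sketch (names are our own):

```python
from math import sin, radians, isclose

def num_solutions(a, A_deg, b):
    """Number of triangles with sides a, b and angle A opposite side a."""
    if A_deg < 90:
        h = b * sin(radians(A_deg))      # altitude from C to line AB
        if a >= b:
            return 1
        if isclose(a, h):
            return 1                     # right triangle: a equals the altitude
        if a < h:
            return 0
        return 2                         # h < a < b: the ambiguous case
    else:                                # A obtuse or right
        return 1 if a > b else 0

print(num_solutions(18, 25, 30), num_solutions(5, 30, 12))  # → 2 0
```

This reproduces the counts from Examples \(\PageIndex{2}\) and \(\PageIndex{3}\).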
There is an interesting geometric consequence of the Law of Sines. Recall from Section 1.1 that in a right triangle the hypotenuse is the largest side. Since a right angle is the largest angle in a right triangle, this means that the largest side is opposite the largest angle. What the Law of Sines does is generalize this to
any triangle:
In any triangle, the largest side is opposite the largest angle.
Proof
To prove this, let \(C \) be the largest angle in a triangle \(\triangle\,ABC \). If \(C = 90^\circ \) then we already know that its opposite side \(c \) is the largest side. So we just need to prove the result for when \(C \) is acute and for when \(C \) is obtuse. In both cases, we have \(A \le C \) and \(B \le C \). We will first show that \(\sin\;A \le \sin\;C \) and \(\sin\;B \le \sin\;C \).
If \(C \) is acute, then \(A \) and \(B \) are also acute. Since \(A \le C \), imagine that \(A \) is in standard position in the \(xy\)-coordinate plane and that we rotate the terminal side of \(A \) counterclockwise to the terminal side of the larger angle \(C \), as in Figure \(\PageIndex{5}\). If we pick points \((x_{1},y_{1}) \) and \((x_{2},y_{2}) \) on the terminal sides of \(A \) and \(C \), respectively, so that their distance to the origin is the same number \(r \), then we see from the picture that \(y_{1} \le y_{2} \), and hence
\[\nonumber
\sin\;A ~=~ \dfrac{y_{1}}{r} ~\le~ \dfrac{y_{2}}{r} ~=~ \sin\;C ~. \]
By a similar argument, \(B \le C \) implies that \(\sin\;B \le \sin\;C \). Thus, \(\sin\;A \le \sin\;C\) and \(\sin\;B \le \sin\;C \) when \(C \) is acute. We will now show that these inequalities hold when \(C \) is obtuse.
If \(C \) is obtuse, then \(180^\circ - C \) is acute, as are \(A \) and \(B \). If \(A > 180^\circ -C\) then \(A + C > 180^\circ \), which is impossible. Thus, we must have \(A \le 180^\circ - C \). Likewise, \(B \le 180^\circ - C \). So by what we showed above for acute angles, we know that \(\sin\;A \le \sin\;(180^\circ - C) \) and \(\sin\;B \le \sin\;(180^\circ - C) \). But we know from Section 1.5 that \(\sin\;C = \sin\;(180^\circ - C) \). Hence, \(\sin\;A \le \sin\;C\) and \(\sin\;B \le \sin\;C \) when \(C \) is obtuse.
Thus, \(\sin\;A \le \sin\;C \) if \(C \) is acute or obtuse, so by the Law of Sines we have
\[\nonumber
\dfrac{a}{c} ~=~ \dfrac{\sin\;A}{\sin\;C} ~\le~ \dfrac{\sin\;C}{\sin\;C} ~=~ 1 \quad\Rightarrow\quad \dfrac{a}{c} ~\le~ 1 \quad\Rightarrow\quad a ~\le~ c ~. \]
By a similar argument, \(b \le c \). Thus, \(a \le c \) and \(b \le c \), i.e. \(c \) is the largest side.
\(\square\) |
Singleton Equality
Theorems
Let $x$ and $y$ be sets.
Then:
$\left\{{x}\right\} \subseteq \left\{{y}\right\} \iff x = y$
$\left\{{x}\right\} = \left\{{y}\right\} \iff x = y$
Proof
\(\left\{ {x}\right\} \subseteq \left\{ {y}\right\}\)
\(\iff \forall z: \left({z \in \left\{ {x}\right\} \implies z \in \left\{ {y}\right\} }\right)\) (Definition of Subset)
\(\iff \forall z: \left({z = x \implies z = y}\right)\) (Definition of Singleton)
\(\iff x = y\) (Equality implies Substitution)
$\Box$
For the second statement:
\(x = y\)
\(\implies \left\{ {x}\right\} = \left\{ {y}\right\}\) (Substitutivity of Equality)
\(\implies \left\{ {x}\right\} \subseteq \left\{ {y}\right\}\)
\(\implies x = y\) (by the first part)
$\blacksquare$ |
In this paper we consider the problem of decomposing a given integer matrix A into a positive integer linear combination of consecutive-ones matrices with a bound on the number of columns per matrix. This problem is of relevance in the realization stage of intensity modulated radiation therapy (IMRT) using linear accelerators and multileaf collimators with limited width. Constrained and unconstrained versions of the problem with the objectives of minimizing beam-on time and decomposition cardinality are considered. We introduce a new approach which can be used to find the minimum beam-on time for both constrained and unconstrained versions of the problem. The decomposition cardinality problem is shown to be NP-hard and an approach is proposed to solve the lexicographic decomposition problem of minimizing the decomposition cardinality subject to optimal beam-on time.
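To make the beam-on time objective above concrete: for a single row, the minimum total coefficient needed to write it as a positive combination of consecutive-ones rows is the sum of the row's positive increments, and in the unconstrained setting rows decompose independently, so the matrix optimum is the maximum over rows. A sketch of this standard computation (our own illustration, not the paper's constrained algorithm):

```python
def row_beam_on_time(row):
    """Minimum total coefficient sum needed to write `row` as a positive
    integer combination of 0-1 rows with consecutive ones:
    the sum of the positive increments of the row."""
    prev, total = 0, 0
    for a in row:
        if a > prev:
            total += a - prev
        prev = a
    return total

def matrix_beam_on_time(A):
    """Unconstrained minimum beam-on time: rows decompose independently,
    so the bottleneck row determines the optimum."""
    return max(row_beam_on_time(row) for row in A)

A = [[0, 2, 3, 1],
     [4, 4, 0, 2]]
print(matrix_beam_on_time(A))  # → 6
```

The constrained version (bounded number of columns per matrix) studied in the paper requires the more elaborate approach described there.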
Matrix Decomposition with Times and Cardinality Objectives: Theory, Algorithms and Application to Multileaf Collimator Sequencing (2005)
In this thesis we have discussed the problem of decomposing an integer matrix \(A\) into a weighted sum \(A=\sum_{k \in {\mathcal K}} \alpha_k Y^k\) of 0-1 matrices with the strict consecutive ones property. We have developed algorithms to find decompositions which minimize the decomposition time \(\sum_{k \in {\mathcal K}} \alpha_k\) and the decomposition cardinality \(|\{ k \in {\mathcal K}: \alpha_k > 0\}|\). In the absence of additional constraints on the 0-1 matrices \(Y^k\) we have given an algorithm that finds the minimal decomposition time in \({\mathcal O}(NM)\) time. For the case that the matrices \(Y^k\) are restricted to shape matrices -- a restriction which is important in the application of our results in radiotherapy -- we have given an \({\mathcal O}(NM^2)\) algorithm. This is achieved by solving an integer programming formulation of the problem by a very efficient combinatorial algorithm. In addition, we have shown that the problem of minimizing decomposition cardinality is strongly NP-hard, even for matrices with one row (and thus for the unconstrained as well as the shape matrix decomposition). Our greedy heuristics are based on the results for the decomposition time problem and produce better results than previously published algorithms. |
I've heard a little recently about equivariant homotopy theory, and so I decided to try out some baby examples just to get a feel for it. I'm not even sure if these are the right thing to look at, and I'm sure I'm butchering the notation, but I've attempted to compute $\pi_2^{C_n}(S^2)$ and $\pi_3^{C_3}(S^2)$; here, $C_n$ acts by rotations on the plane and $C_3$ acts by the standard representation on 3-space.
For the first one, there's the obvious equivariant cell structure on $S^2$, which has fixed vertices at 0 and at the basepoint $\infty$, one orbit of $n$ edges, and one orbit of $n$ faces. My maps are based and I need to map fixed points to fixed points, so either both vertices go to $\infty$ or else my map fixes them.
Suppose both go to $\infty$. The image of any one edge is an arbitrary based loop, and of course the other edges' images are determined by this. Assuming my faces are going to be able to be mapped in at all, the inclusion of an edge into a face is a cofibration, so I may as well just homotope the edges to $\infty$ now. From here, the map is determined by the image of one face, i.e. an element of (nonequivariant) $\pi_2(S^2)$. In the case where $0$ and $\infty$ are both fixed, the image of an edge is an arbitrary path between $0$ and $\infty$, and then we just get to decide where a face goes, which I think is just an element of the preimage of 1 under the connecting map $\pi_2(S^2,S^1)\rightarrow \pi_1(S^1)$. There's no extension problem in the long exact sequence, so this is just a projection onto one factor $\mathbb{Z}\times \mathbb{Z}\rightarrow \mathbb{Z}$, so this looks like $\mathbb{Z}\times \{1\}$.
So as a set, $\pi_2^{C_n}(S^2)=\mathbb{Z} \sqcup \mathbb{Z}$.
For the second one, I have a 1-cell of fixed points at $(x,y,z)=(t,t,t)$, my three 2-cells are the half-planes through that 1-cell and one of the three axes, and my three 3-cells fill in the rest. The 1-cell needs to be sent to fixed points, and these are still just $0$ and $\infty$, so by continuity it's all sent to $\infty$. So the image of a 2-cell is just an element of $\pi_2(S^2)$. Once we choose that (which determines the images of the other 2-cells, of course) we can always extend to the 3-cells, since these are just homotopies from a map to itself. Once we've chosen one, any other gives an element of $\pi_3(S^2)=\mathbb{Z}$. So as a set, $\pi_3^{C_3}(S^2)=\mathbb{Z} \times \mathbb{Z}$.
So, my first question is: Are these right?

Also, I've learned to compute (usual) homotopy groups of spheres by making a Postnikov tower, and I'm wondering if there's a sufficiently easy example for the equivariant case where I can do the analogous calculations by hand without the full generality of slice cells or whatever's going on (those could be the wrong words -- I don't think I understand what these are well enough to know whether this is a decent request, either). In any case, I'd love suggestions of better/more instructive examples.

Lastly, I'm wondering if there are actually group structures here. In the first example, it looks like I can reasonably hope to add guys that are both in the same copy of $\mathbb{Z}$, but not otherwise. I think it's easy to show that there's no $C_n$-equivariant coproduct on $S^2$. On the other hand, quotienting by the plane $x+y+z=0$ in $\mathbb{R}^3$ looks like it gives a $C_3$-equivariant coproduct on $S^3$. If two elements came from the same choice of $\pi_2(S^2)$, then there's an obvious origin for the $\pi_3(S^2)$-torsor, namely using the trivial homotopy to extend the map over the 3-cells. I think this agrees with my equivariant coproduct, just because it matches up with the usual picture you draw of how to add elements of $\pi_2$ (two squares sitting on top of each other). However, I can't tell whether this makes any sense when the elements came from different choices of $\pi_2(S^2)$. It seems like if it does work at all, there might be something funnier than the obvious group structure...
A blog by Sebastian Liem and Erwin Poeze – ViriCiti Labs
ViriCiti provides insights into electric, CNG, diesel, hybrid, and hydrogen vehicles, as well as their charging infrastructure. We can, therefore, provide full insight into energy management, maintenance, route operations, and flexible charging for mixed fleets. Currently, we are working on the service 'Smart Driving', a tool assisting drivers to drive in an energy-efficient, comfortable, and safe way. One of the core inputs of Smart Driving is the measurement of the vehicle acceleration, which is done via the DataHub — ViriCiti's onboard hardware unit — that has a built-in 3D accelerometer. Before the acceleration signals can be useful to us, however, the coordinate system of the DataHub must be aligned with the vehicle's. Only then can we interpret the acceleration in the x-direction (driving direction) as braking or accelerating, and only then can we interpret rotation around the z-axis (downward direction) as the vehicle turning.
The DataHub is installed by our customers and we at ViriCiti do not necessarily know the orientation of the DataHub in relation to the vehicle. The coordinate systems of the DataHub and the vehicle are therefore not necessarily aligned, as depicted in Figure 1.
Figure 1: Vehicle’s coordinate system ($x_v$, $y_v$, $z_v$) is not aligned with that of the DataHub ($x_d$, $y_d$, $z_d$)
To overcome this misalignment, we have developed a method to automatically determine the DataHub’s orientation enabling us to align its coordinate system with the vehicle’s. This method has two steps; in the first step, we align the z-axes of the DataHub and vehicle. Secondly, we align the
x- and y-axes. The principle is the same for both steps: we measure the acceleration in a known vehicle state where we know the forces acting on the vehicle, meaning that we know the acceleration vector in the vehicle's coordinate system. In the first step, the vehicle must be stationary and positioned on a level surface. In this situation, only gravity affects the vehicle and the acceleration is $a_v = (0, 0, g)$ in the vehicle's coordinate system. As the DataHub is misaligned, it measures some other acceleration $a_d$. Knowing how $a_v$ is expressed in the coordinate system of the DataHub, we can find a coordinate transformation that aligns the z-axes of the vehicle's and DataHub's coordinate systems.
In the z-aligned coordinate system $x'_d, y'_d, z'_d$ (where $z'_d = z_v$), the x- and y-axes can still be misaligned, but this is solved in the second step. In this step, the vehicle should be braking while driving in a straight line. We then derive that $a_v = (-a_{x;v}, 0, g)$, which we use to find the coordinate transformation from the z-aligned DataHub coordinate system to the vehicle's coordinate system. Combining the coordinate transformations from steps 1 and 2, we have a coordinate transformation that aligns the measurements from the DataHub with the vehicle. This allows us to use the acceleration to describe the movements of the vehicle. In the two following sections, we provide details on how each step works.
Figure 2: Step 1, rotation to align the z-axes of the vehicle and DataHub ($z_v$ and $z_d$). Step 2, rotation around the aligned z-axis to align the x- and y-axes.

Step 1: Using Gravity
In this first step, we need to determine when the vehicle is stationary and standing on a level surface. Determining if the vehicle is stationary is quite straightforward, as we have direct access to the vehicle speed. Determining if the vehicle is on a level surface, however, is more difficult, as there is no sensor reading readily available. Instead, we rely on statistics taken from many samples: no city is all uphill, after all, and with a sufficient number of samples the average models a level surface.
The measured acceleration $a_d$ in the DataHub's coordinate system, when expressed in the vehicle's coordinate system, should be $a_v = (0, 0, g)$. We can now find a transformation matrix $R$ such that
$$a_v = R \cdot a_d.$$
We use $R$ to denote the matrix because we know it should be a rotation matrix: it preserves the origin, the norm, and the orientation of the acceleration signal. Note that $R$ doesn't fully transform the vector from the DataHub's coordinate system to that of the vehicle. It does, however, align their z-axes or, equivalently, their xy-planes.
There is more than one way to construct a rotation matrix; Euler angles and quaternions are two popular approaches. We found both to be unnecessarily complex and numerically error-prone for our purposes. Instead, we took a step back and used the theory of rotations: $SO(3)$, the group of 3D Euclidean rotations. We will sketch, with no claims of rigor, the method we settled on. $SO(3)$ is a Lie group with corresponding Lie algebra $\mathcal{so}(3)$. If we can express our rotation in this algebra, we can generate the rotation matrix. To do this we chose the basis $L = [L_x, L_y, L_z]$ for $\mathcal{so}(3)$:
$$L_x =\begin{pmatrix}0 & 0 & 0 \\0 & 0 & -1 \\0 & 1 & 0 \\\end{pmatrix},\quad L_y =\begin{pmatrix}0 & 0 & 1 \\0 & 0 & 0 \\-1 & 0 & 0 \\\end{pmatrix},\quad L_z =\begin{pmatrix}0 & -1 & 0 \\1 & 0 & 0 \\0 & 0 & 0 \\\end{pmatrix}$$
With this basis we can identify a rotation by an angle $\theta$ around a unit vector $u$ with the element $\theta u \cdot L \in \mathcal{so}(3)$. With this description of the rotation in the Lie algebra, we use the exponential map to generate the actual rotation matrix. The map is defined by the matrix exponential series
$$\mathcal{so}(3) \to SO(3); \quad \theta u \cdot L \to R = e^{\theta u \cdot L } = I + \theta u \cdot L + \frac{1}{2!}(\theta u \cdot L )^2 + \ldots.$$
The infinite series has an analytical solution because $u \cdot L$ is skew-symmetric, which gives $(u \cdot L)^3 = -u \cdot L$. All higher-order terms therefore collapse into a single $u \cdot L$ term and a single $(u \cdot L)^2$ term. We get
$$R = I + \left[\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \ldots\right] u \cdot L + \left[\frac{\theta^2}{2!} - \frac{\theta^4}{4!} + \frac{\theta^6}{6!} - \ldots \right](u \cdot L)^2,$$
which, after remembering our trigonometric expansions, lets us write
$$R = I + \sin\theta \; u \cdot L + (1 - \cos\theta) \, (u \cdot L)^2.$$
With this formula we can return to our problem of rotating $a_d$ onto $a_v$. We rotate in the plane spanned by the two vectors, i.e. around their cross product $v = a_d \times a_v$. The values of $\cos \theta$ and $\sin \theta$ can be found from the geometric meanings of the dot and cross products, respectively. We identify
$$u = \frac{v}{||v||}, \quad \cos \theta = \frac{a_v \cdot a_d}{||a_v|| ||a_d||}, \quad \sin \theta = \frac{||v||}{ ||a_v|| ||a_d||}.$$
And with $a_d$ measured and $a_v$ assumed we have our rotation matrix $R$ which aligns the z-axes of the DataHub and that of the vehicle.
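To make this concrete, here is a minimal NumPy sketch of step 1 (an illustration with made-up numbers, not ViriCiti's production code; the degenerate case where $a_d$ already points straight down, so that the cross product vanishes, is not handled):

```python
import numpy as np

def skew(u):
    """The matrix u . L, i.e. the cross-product matrix of the unit vector u."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def align_z(a_d):
    """Rotation matrix R = I + sin(t) K + (1 - cos(t)) K^2 taking a_d onto the z-axis."""
    a_v = np.array([0.0, 0.0, np.linalg.norm(a_d)])   # assumed: pure gravity
    v = np.cross(a_d, a_v)                            # rotation axis (unnormalized)
    norm_prod = np.linalg.norm(a_d) * np.linalg.norm(a_v)
    sin_t = np.linalg.norm(v) / norm_prod             # |v| / (|a_d| |a_v|)
    cos_t = np.dot(a_d, a_v) / norm_prod              # (a_d . a_v) / (|a_d| |a_v|)
    K = skew(v / np.linalg.norm(v))
    return np.eye(3) + sin_t * K + (1.0 - cos_t) * (K @ K)

a_d = np.array([1.0, 2.0, 9.0])   # hypothetical stationary measurement
R = align_z(a_d)
print(R @ a_d)                    # ~ (0, 0, |a_d|): gravity now points along z
```

The final formula is exactly Rodrigues' rotation formula, which is what the exponential-map derivation above produces.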
Step 2: Braking
With the z-axes aligned, the next step is to align the x- and y-axes. We do this by identifying a scenario where the acceleration in the xy-plane is known to be only in the x-direction. When the vehicle is braking in a straight line, the acceleration is $a_v = (-a_{x;v}, 0, g)$ in the vehicle's coordinate system. Having measured the acceleration in the DataHub's coordinate system and rotated it into the z-aligned frame during the braking event, we can then find the rotation matrix that completely aligns the DataHub with its vehicle.
To detect a braking event, we simply observe whether the speed is rapidly diminishing. While braking, the vehicle must maintain the same driving direction (within some tolerance); to enforce this, we use the circular dispersion of the acceleration samples taken during the braking event. Only samples that meet a maximum dispersion threshold are accepted, so we can be sure that the vehicle is indeed braking in a sufficiently straight line.
We combine the samples from a number of braking events into our final measurement $a_d$, from which we find the rotation matrix using the method outlined in step 1.
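Step 2 then reduces to a single rotation about the (now shared) z-axis. A sketch with a made-up z-aligned braking sample, choosing the angle so that the horizontal acceleration lands on the negative x-axis:

```python
import numpy as np

def align_xy(a_brake):
    """Rotation about z taking the horizontal part of a_brake to (-r, 0)."""
    ax, ay = a_brake[0], a_brake[1]
    phi = np.pi - np.arctan2(ay, ax)   # rotate the angle of (ax, ay) onto pi
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

a_brake = np.array([1.5, -2.0, 9.81])  # hypothetical z-aligned braking sample
Rz = align_xy(a_brake)
print(Rz @ a_brake)  # ~ (-2.5, 0, 9.81): braking now points along -x
```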
Results
By composing the rotation matrices from steps 1 and 2, we achieve the alignment of the DataHub and the vehicle. Figure 3 shows the difference between the acceleration signals of an unaligned DataHub and one that is aligned properly. As one would expect, the x- and y-components of the acceleration (blue and orange) are nearly zero when the vehicle is stationary, between timesteps 750 and 1100.

Figure 3: x, y, z acceleration in the coordinate system of the DataHub (left) and the same signals transformed to the vehicle coordinate system (right).
The process of DataHub alignment is fully automated and currently runs on a selection of vehicles. In the near future we will roll out this new feature to all vehicles equipped with a DataHub and the users will have access to the acceleration signals, useful for, e.g., brake tests of new buses. In the meantime, we continue to develop Smart Driving using the acceleration signals. |
Going in Circles
The main strategy for integration by parts is to pick $u$ and $dv$ so that $v\, du$ is simpler to integrate than $u \,dv$. Sometimes this isn't possible. In those cases we look for ways to relate $\int u\, dv$ to $\int v \,du$ algebraically, and then use algebra to solve for $\int u \,dv$. This method is especially good for integrals involving products of $e^x$, $\sin(x)$, and $\cos(x)$.

Sometimes you need to integrate by parts.
In the video, we computed $\int \sin^2 x\, dx$.
Solution 1: We set $u =\sin(x)$ and $dv = \sin(x)\, dx$, so applying integration by parts gives $$\int \sin^2(x) \,dx \overset{\fbox{$\,\,u\,=\,\sin(x)\quad\,\,\, v\,=\,-\cos(x)\\du\,=\,\cos(x) dx\,\,\, dv\,=\,\sin(x)\,dx$}\\}{=} -\sin(x)\cos(x) + \int \cos^2(x) \,dx.$$ But $\cos^2(x)=1-\sin^2(x)$, so $\int \cos^2(x) \,dx=\int(1-\sin^2 x)\,dx=\int\,dx-\int\sin^2 x\,dx$. We substitute this in, and then add this last term to the left-hand side from above, getting
$$ \begin{eqnarray}\int \sin^2(x)\, dx &=& - \sin(x)\cos(x) + \int dx - \int \sin^2(x) \,dx \cr 2 \int \sin^2(x)\, dx &=& x - \sin(x)\cos(x) + C \cr \int \sin^2(x) \,dx &=& \frac{x - \sin(x)\cos(x)}{2} + C\end{eqnarray} $$
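Before looking at a second approach, we can check this antiderivative by differentiating it; here is that check done mechanically with SymPy (assuming the library is available):

```python
import sympy as sp

x = sp.symbols('x')
antideriv = (x - sp.sin(x) * sp.cos(x)) / 2

# d/dx [(x - sin x cos x)/2] should simplify to sin^2 x.
assert sp.simplify(sp.diff(antideriv, x) - sp.sin(x)**2) == 0
```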
Solution 2: $$\int \sin^2 x\, dx=\int \frac{1-\cos(2x)}{2}\,dx=\frac{1}{2}\int(1-\cos(2x))\,dx=\frac{1}{2}\left(x-\frac{1}{2}\sin(2x)\right)+C=\frac{1}{2}\left(x-\frac{1}{2}\cdot 2\sin x\cos x\right)+C$$ $$= \frac{x - \sin(x)\cos(x)}{2} + C.$$ Here, we used another trig identity that you should know: $\sin(2x)=2\sin x\cos x$. Remember: always check that you have the right antiderivative by differentiating it! |
Changzhi Li
According to our database, Changzhi Li authored at least 63 papers between 2010 and 2019.
2019
A Noncontact Breathing Disorder Recognition System Using 2.4-GHz Digital-IF Doppler Radar.
IEEE J. Biomedical and Health Informatics, 2019
A DC-Coupled High Dynamic Range Biomedical Radar Sensor With Fast-Settling Analog DC Offset Cancelation.
IEEE Trans. Instrumentation and Measurement, 2019
Blind Separation of Doppler Human Gesture Signals Based on Continuous-Wave Radar Sensors.
IEEE Trans. Instrumentation and Measurement, 2019
Remote Blind Motion Separation Using a Single-Tone SIMO Doppler Radar Sensor.
IEEE Trans. Geoscience and Remote Sensing, 2019
Continuous Human Motion Recognition With a Dynamic Range-Doppler Trajectory Method Based on FMCW Radar.
IEEE Trans. Geoscience and Remote Sensing, 2019
Portable Microwave Radar Systems for Short-Range Localization and Life Tracking: A Review.
Sensors, 2019
2018
Bioinspired In-Grid Navigation and Positioning Based on an Artificially Established Magnetic Gradient.
IEEE Trans. Vehicular Technology, 2018
Noncontact Human-Machine Interface With Planar Probing Coils in a Differential Sensing Architecture.
IEEE Trans. Instrumentation and Measurement, 2018
Wireless Indoor Positioning With Vertically Uniform Alternating Magnetic Fields.
IEEE Trans. Instrumentation and Measurement, 2018
Overview of Recent Development on Wireless Sensing Circuits and Systems for Healthcare and Biomedical Applications.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2018
Guest Editorial Wireless Sensing Circuits and Systems for Healthcare and Biomedical Applications.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2018
E-Eye: Hidden Electronics Recognition through mmWave Nonlinear Effects.
Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, SenSys 2018, 2018
Indoor localization based on a single-tone SIMO-structured Doppler radar system.
Proceedings of the 2018 IEEE Radio and Wireless Symposium, 2018
Single frequency microwave imaging based on compressed sensing.
Proceedings of the 2018 IEEE Radio and Wireless Symposium, 2018
An improved indoor localization solution using a hybrid UWB-Doppler system with Kalman filter.
Proceedings of the 2018 IEEE Radio and Wireless Symposium, 2018
Investigation of unique broadband nonlinear RF response of electronic devices.
Proceedings of the 2018 IEEE Radio and Wireless Symposium, 2018
5.8-GHz ISM band intermodulation radar for high-sensitivity motion-sensing applications.
Proceedings of the 2018 IEEE Radio and Wireless Symposium, 2018
A DC-coupled biomedical radar sensor with analog DC offset calibration circuit.
Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, 2018
Long-time non-contact water level measurement with a 5.8-GHz DC-coupled interferometry radar.
Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, 2018
A Human Tracking and Physiological Monitoring FSK Technology for Single Senior at Home Care.
Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2018
2017
Noncontact Physiological Dynamics Detection Using Low-power Digital-IF Doppler Radar.
IEEE Trans. Instrumentation and Measurement, 2017
SleepSense: A Noncontact and Cost-Effective Sleep Monitoring System.
IEEE Trans. Biomed. Circuits and Systems, 2017
Hand gesture recognition based on Wi-Fi chipsets.
Proceedings of the 2017 IEEE Radio and Wireless Symposium, 2017
Orientation and cancellation of directional interfering signals based on a radio frequency beamforming array.
Proceedings of the 2017 IEEE Radio and Wireless Symposium, 2017
Cardiac Scan: A Non-contact and Continuous Heart-based User Authentication System.
Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking, 2017
2016
Energy and Area Efficient Three-Input XOR/XNORs With Systematic Cell Design Methodology.
IEEE Trans. VLSI Syst., 2016
Short-Range Doppler-Radar Signatures from Industrial Wind Turbines: Theory, Simulations, and Measurements.
IEEE Trans. Instrumentation and Measurement, 2016
A Self-Calibrating Radar Sensor System for Measuring Vital Signs.
IEEE Trans. Biomed. Circuits and Systems, 2016
Microwave Imaging under Oblique Illumination.
Sensors, 2016
Time-Varying Vocal Folds Vibration Detection Using a 24 GHz Portable Auditory Radar.
Sensors, 2016
On the feasibility of Non-contact Cardiac Motion Sensing for emerging heart-based biometrics.
Proceedings of the 2016 IEEE Radio and Wireless Symposium, 2016
A step forward towards radar sensor networks for structural health monitoring of wind turbines.
Proceedings of the 2016 IEEE Radio and Wireless Symposium, 2016
2015
A 0.7 V Relative Temperature Sensor With a Non-Calibrated $\pm 1~^{\circ}{\rm C}$ 3$\sigma$ Relative Inaccuracy.
IEEE Trans. on Circuits and Systems, 2015
Noncontact Vital Sign Detection based on Stepwise Atomic Norm Minimization.
IEEE Signal Process. Lett., 2015
Assessment of Human Respiration Patterns via Noncontact Sensing Using Doppler Multi-Radar System.
Sensors, 2015
An Interference Suppression Technique for Life Detection Using 5.75- and 35-GHz Dual-Frequency Continuous-Wave Radar.
IEEE Geosci. Remote Sensing Lett., 2015
Power synthesis at low frequencies in the sub-THz gap.
Proceedings of the 2015 IEEE Radio and Wireless Symposium, 2015
Remote phase synchronization for satellite network systems.
Proceedings of the 2015 IEEE Radio and Wireless Symposium, 2015
Non-contact hand interaction with smart phones using the wireless power transfer features.
Proceedings of the 2015 IEEE Radio and Wireless Symposium, 2015
SleepSense: Non-invasive sleep event recognition using an electromagnetic probe.
Proceedings of the 12th IEEE International Conference on Wearable and Implantable Body Sensor Networks, 2015
2014
Runtime Self-Calibrated Temperature-Stress Cosensor for 3-D Integrated Circuits.
IEEE Trans. VLSI Syst., 2014
Doppler Radar Motion Sensor With CMOS Digital DC-Tuning VGA and Inverter-Based Sigma-Delta Modulator.
IEEE Trans. Instrumentation and Measurement, 2014
Noncontact Distance and Amplitude-Independent Vibration Measurement Based on an Extended DACM Algorithm.
IEEE Trans. Instrumentation and Measurement, 2014
Automated DC Offset Calibration Strategy for Structural Health Monitoring Based on Portable CW Radar Sensor.
IEEE Trans. Instrumentation and Measurement, 2014
Long-Distance Geomagnetic Navigation: Imitations of Animal Migration Based on a New Assumption.
IEEE Trans. Geoscience and Remote Sensing, 2014
Leakage, Area, and Headroom Tradeoffs for Scattered Relative Temperature Sensor Front-End Architectures.
IEEE Trans. on Circuits and Systems, 2014
Cyclostationary approach to Doppler radar heart and respiration rates monitoring with body motion cancelation using Radar Doppler System.
Biomed. Signal Proc. and Control, 2014
Gesture recognition for smart home applications using portable radar sensors.
Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014
Non-contact multi-radar smart probing of body orientation based on micro-Doppler signatures.
Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014
2013
PLL-Based Self-Adaptive Resonance Tuning for a Wireless-Powered Potentiometer.
IEEE Trans. on Circuits and Systems, 2013
A Subthreshold-MOSFETs-Based Scattered Relative Temperature Sensor Front-End With a Non-Calibrated ±2.5°C 3σ Relative Inaccuracy From -40°C to 100°C.
IEEE Trans. on Circuits and Systems, 2013
A 0.45-V MOSFETs-Based Temperature Sensor Front-End in 90 nm CMOS With a Noncalibrated ± 3.5°C 3σ Relative Inaccuracy From -55°C to 105°C.
IEEE Trans. on Circuits and Systems, 2013
A compact phased array antenna system based on dual-band Butler matrices.
Proceedings of the 2013 IEEE Radio and Wireless Symposium, 2013
2012
Wireless Sensing System-on-Chip for Near-Field Monitoring of Analog and Switch Quantities.
IEEE Trans. Industrial Electronics, 2012
Accurate Respiration Measurement Using DC-Coupled Continuous-Wave Radar Sensor for Motion-Adaptive Cancer Radiotherapy.
IEEE Trans. Biomed. Engineering, 2012
Using moderate inversion to optimize voltage gain, thermal noise, and settling time in two-stage CMOS amplifiers.
Proceedings of the 2012 IEEE International Symposium on Circuits and Systems, 2012
A power-optimized reconfigurable CT ΔΣ modulator in 65nm CMOS.
Proceedings of the 2012 IEEE International Symposium on Circuits and Systems, 2012
An all-CMOS low supply voltage temperature sensor front-end with error correction techniques.
Proceedings of the 2012 IEEE International Symposium on Circuits and Systems, 2012
2011
All-CMOS subbandgap reference circuit operating at low supply voltage.
Proceedings of the International Symposium on Circuits and Systems (ISCAS 2011), 2011
A regulated 3.1-10.6 GHz linear dual-tuning differential ring oscillator for UWB applications.
Proceedings of the International Symposium on Circuits and Systems (ISCAS 2011), 2011
A multi-radar wireless system for respiratory gating and accurate tumor tracking in lung cancer radiotherapy.
Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2011
2010
Accurate Doppler Radar Noncontact Vital Sign Detection Using the RELAX Algorithm.
IEEE Trans. Instrumentation and Measurement, 2010
Instrument-Based Noncontact Doppler Radar Vital Sign Detection System Using Heterodyne Digital Quadrature Demodulation Architecture.
IEEE Trans. Instrumentation and Measurement, 2010 |
This is a very nice problem. If I may dispense with the notation for random variables as $f$ and $k$, and refer to them instead as $X$ and $Y$ ...
The Problem
Let $X \sim Uniform(0,1)$ and $Y \sim Uniform(2,5)$ be independent random variables, with joint pdf $f(x,y) = \frac13$:
f = 1/3; domain[f] = {{x, 0, 1}, {y, 2, 5}};
We seek a closed-form solution to:
$$E\big[\,X \,\big|\, \big\{(1 - X)^Y < 1-p \big\} \,\big] = \underset{\Omega}{\int} x \; \frac{f(x,y)}{P\big((x,y) \in \Omega\big)} \, dx \, dy$$
where $\Omega =\left\{(x,y): (1 - x)^y < 1-p\right\}$ denotes the domain of support defined by the condition; integration is carried out over $\Omega$; and $0<p<1$.
To illustrate the dependency created by the condition, here is a plot of the domain of support $\Omega =\left\{(x,y): (1 - x)^y < 1-p\right\}$, when $p=0.9$:
RegionPlot[(1-x)^y < .1, {x, 0, 1}, {y, 2, 5}, FrameLabel -> {x, y}]
The essence of the problem is to define the shaded space (the domain of support) in the above picture, given an arbitrary symbolic parameter $p$, in such a way that
Mathematica can integrate over the domain of support $\Omega$, and so obtain a closed-form solution. Unfortunately, the usual simple trick of adding
Boole[ (1-x)^y < 1-p ] into a manual integration does not work. In fact, even the simplest possible case,
i.e.:
Integrate[Boole[(1-x)^y < 1-p], {x, 0, 1}, {y, 2, 5}, Assumptions -> 0<p<1]
returns unevaluated input
So, if we require a closed-form symbolic solution for arbitrary parameter $p$, then other methods are needed.
Tackle the denominator first ...
The Denominator: $\quad P[(1-X)^Y < 1-p]$
Note that $P\big((1-X)^Y < q\big)$ where $q = 1-p$ is just the cdf of the distribution of $Z = (1-X)^Y$ ... so to get around the awkward non-rectangular domain of support, let us introduce the transformation:
$$Z = (1-X)^Y$$
Unfortunately,
Mathematica 10 doesn't seem able to find this transformation by itself:
CDF[TransformedDistribution[(1-x)^y,
{Distributed[x, UniformDistribution[{0, 1}]],
Distributed[y, UniformDistribution[{2, 5}]]}], q]
returns unevaluated input
No cigar. Let's try using the
mathStatica package for Mathematica ...
Let $Z = (1-X)^Y$ and $W=Y$. Then, the joint pdf of $(Z,W)$ is:
Since $W=Y$, let us rather write this as the joint pdf of $(Z,Y)$, namely $g(z,y)$:
Then, the marginal pdf of $Z$ is say $h(z)$:
We seek $P\big((1-X)^Y < 1-p\big) = P(Z<1-p)$, which is:
Denominator: All done.
The Numerator:
We now wish to find the numerator, namely:
$$\underset{\Omega}{\int} x \; f(x,y) \, dx \, dy \quad \text{ where } \quad \Omega =\left\{(x,y): (1 - x)^y < 1-p\right\}$$
Since $(X,Y)$ have joint pdf $f(x,y)$; and $(Z,Y)$ have joint pdf $g(z,y)$ (already derived); and since $X = 1 - Z^{1/Y}$:
... the required integral is equivalent to finding:
$$ \int _0^{1-p}\int _2^5 (1-z^{1/y}) \; g(z,y) \; dy \; dz$$
Note that we have converted a very awkward non-rectangular integration problem in $(X,Y)$ space into a neat, rectangular integral in $(Z,Y)$ space. Note too that the upper integral bound on $dz$ is $(1-p)$, since the imposed condition is $Z<1-p$. And so the numerator solution is:
Summary
The solution to $E\big[\,X \, \, \big| \,\, \big\{(1 - X)^Y < 1-p \big\} \,\big]\,$ is:
sol = top/PP
... which yields the closed-form solution (copy and paste-able):
(2 + 10 (1 - p)^(1/5) - 5 (1 - p)^(2/5) - 4 Sqrt[1 - p] - 2 p +
2 Log[1-p] (-ExpIntegralEi[(1/5) Log[1-p]] + ExpIntegralEi[(2/5) Log[1-p]] +
ExpIntegralEi[(1/2) Log[1-p]] - LogIntegral[1-p]))/
(2*(5*(1 - p)^(1/5) - 2 Sqrt[1-p] + (-ExpIntegralEi[(1/5) Log[1-p]] + ExpIntegralEi[(1/2) Log[1-p]]) Log[1-p]))
All done.
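As an extra sanity check outside Mathematica, a brute-force Monte Carlo simulation (a quick Python sketch with an arbitrarily chosen sample size, not part of the derivation) should reproduce the conditional expectation; for $p = 0.9$ it should come out near $0.7404$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
x = rng.uniform(0.0, 1.0, n)   # X ~ Uniform(0, 1)
y = rng.uniform(2.0, 5.0, n)   # Y ~ Uniform(2, 5)

mask = (1.0 - x) ** y < 0.1    # the condition (1 - X)^Y < 1 - p with p = 0.9
estimate = x[mask].mean()      # sample mean of X over the conditioning event
print(estimate)                # ~ 0.7404
```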
Here is a plot of the conditional expectation, as $p$ varies from 0 to 1:
Plot[sol, {p, 0, 1}, PlotRange -> {0, 1}, AxesLabel -> {p, "E[X | cond]"}]
When $p = 0$, there is no constraint, and so the conditional expectation is equal to the unconditional expectation $E[X] = \frac12$.
Finally, a quick numerical check: when $p = 0.9$:
sol /. p -> 0.9
0.740363
which matches the numerical solution
Mathematica can obtain, given a numerical value for $p$:
Expectation[Conditioned[x, (1 - x)^y < 1 - 0.9],
Distributed[{x, y}, UniformDistribution[{{0, 1}, {2, 5}}]]]
0.740363 |
I don't understand the words "risk-neutral density". Please explain what it is, and how it can be estimated in practice.
My
guess would be that we have an underlying probability space $(\Omega, P)$. We are interested in the equivalent martingale measure $Q$, i.e. the space $(\Omega, Q)$.
For every fixed $t$, we wish to determine the $Q$-distribution of the stock price $S_t$. The risk-neutral density, if it exists, is the density $\phi$ of $S_t$ with respect to the Lebesgue measure, i.e. we can write $$E^Q[S_t] = \int s \, \phi(s) \, ds.$$
Is this guess correct? If so, how do we estimate $\phi$ from stock prices? |
I recently asked a related question about the relationship between monads in category theory and Haskell. The answerer showed me the following classes and instances:
class MyMonad t where
  fun :: (a -> b) -> (t a -> t b)  -- functorial action, known as fmap
  eta :: a -> t a
  mu  :: t (t a) -> t a

class KleisliTriple t where
  ret    :: a -> t a
  (>>==) :: t a -> (a -> t b) -> t b

instance KleisliTriple t => MyMonad t where
  fun f m = m >>== (ret . f)
  eta     = ret
  mu m    = m >>== id

instance MyMonad t => KleisliTriple t where
  ret      = eta
  m >>== f = mu (fun f m)
And told me I should show that the rules for each class are equivalent. I've finally gotten around to this exercise but I've already run into some confusion.
I know that the "laws" for a Kleisli Triple are as follows:
return x >>= f = f x
m >>= return = m
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
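(Not a proof, but to convince myself these laws hold somewhere concrete, I spot-checked them for the list monad, transliterated into Python for quick experimentation; the names ret and bind are my own.)

```python
def ret(x):
    return [x]          # the list monad's return: a singleton list

def bind(m, f):
    return [y for x in m for y in f(x)]   # m >>= f, i.e. concatMap f m

f = lambda a: [a, a + 1]
g = lambda b: [b * 2]
m = [1, 2, 3]

assert bind(ret(5), f) == f(5)                                  # left identity
assert bind(m, ret) == m                                        # right identity
assert bind(bind(m, f), g) == bind(m, lambda a: bind(f(a), g))  # associativity
```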
And I know that the coherence conditions for a monad can be written $$\mu \circ \eta T = \mu \circ T \eta = id_C$$ $$\mu \circ \mu T = \mu \circ T \mu$$ where $T$ is the endofunctor,
fun in the Haskell class.
I know $(\eta T)_x = \eta_{T\, x}$ and $(T\eta)_x=T(\eta_x)$, so I think the rules for
MyMonad could be written as follows:
mu . eta . (fun f) = f
mu . (fun eta) = id
mu . mu . (fun id) = mu . (fun mu)
Writing any of these expressions as a function with the desired type signatures loads in GHCI, but I don't think I'm right.
It's mainly associativity that I'm not confident about. Unwrapping associativity for the Kleisli triple yields
mu (fun g (mu (fun f m))) = mu (fun (\x -> mu (fun g (f x))) m)
Which seems over-complicated, but unlike my attempt to express the category-theoretic constraints, this equation universally quantifies over the functions
f :: a -> m b
g :: b -> m c
How close am I to coming to the correct conclusion? Hints would be appreciated. |
Exercise 8.1.1: Sketch the phase plane vector field for:
a) \(x'=x^2, ~~y'=y^2\),
b) \(x'=(x-y)^2, ~~y'=-x\),
c) \(x'=e^y,~~ y'=e^x\).
Exercise 8.1.2: Match systems
1) \(x'=x^2\), \(y'=y^2\), 2) \(x'=xy\), \(y'=1+y^2\), 3) \(x'=\sin(\pi y)\), \(y'=x\), to the vector fields below. Justify.
a) b) c)
Exercise 8.1.3: Find the critical points and linearizations of the following systems.
a) \(x'=x^2-y^2\), \(y'=x^2+y^2-1\),
b) \(x'=-y\), \(y'=3x+yx^2\),
c) \(x'=x^2+y\), \(y'=y^2+x\).
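For checking answers, the recipe behind these exercises (solve for the zeros of the vector field, then evaluate the Jacobian matrix at each zero) can be sketched with SymPy; here it is applied to system b) above, \(x'=-y\), \(y'=3x+yx^2\):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = -y               # x'
g = 3*x + y*x**2     # y'

# Critical points: simultaneous zeros of the vector field.
crit = sp.solve([f, g], [x, y], dict=True)

# The linearization at each critical point is the Jacobian evaluated there.
J = sp.Matrix([f, g]).jacobian([x, y])
linearizations = [J.subs(pt) for pt in crit]
print(crit)            # [{x: 0, y: 0}]
print(linearizations)  # [Matrix([[0, -1], [3, 0]])]
```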
Exercise 8.1.4: For the following systems, verify they have critical point at \((0,0)\), and find the linearization at \((0,0)\).
a) \(x'=x+2y+x^2-y^2\), \(y'=2y-x^2\)
b) \(x'=-y\), \(y'=x-y^3\)
c) \(x'=ax+by+f(x,y)\), \(y'=cx+dy+g(x,y)\), where \(f(0,0) = 0\), \(g(0,0) = 0\), and all first partial derivatives of \(f\) and \(g\) are also zero at \((0,0)\), that is,
\(\frac{\partial f}{\partial x}(0,0) = \frac{\partial f}{\partial y}(0,0) = \frac{\partial g}{\partial x}(0,0) = \frac{\partial g}{\partial y}(0,0) = 0\).
Exercise 8.1.5: Take \(x'=(x-y)^2\), \(y'=(x+y)^2\).
a) Find the set of critical points.
b) Sketch a phase diagram and describe the behavior near the critical point(s).
c) Find the linearization. Is it helpful in understanding the system?
Exercise 8.1.6: Take \(x'=x^2\), \(y'=x^3\).
a) Find the set of critical points.
b) Sketch a phase diagram and describe the behavior near the critical point(s).
c) Find the linearization. Is it helpful in understanding the system?
Exercise 8.1.101: Find the critical points and linearizations of the following systems.
a) \(x'=\sin(\pi y)+(x-1)^2\), \(y'=y^2-y\),
b) \(x'=x+y+y^2\), \(y'=x\),
c) \(x'=(x-1)^2+y\), \(y'=x^2+y\).
Exercise 8.1.102: Match systems
1) \(x'=y^2\), \(y'=-x^2\), 2) \(x'=y\), \(y'=(x-1)(x+1)\), 3) \(x'=y+x^2\), \(y'=-x\), to the vector fields below. Justify.
a) b) c)
Exercise 8.1.103: The idea of critical points and linearization works in higher dimensions as well. You simply make the Jacobian matrix bigger by adding more functions and more variables. For the following system of 3 equations find the critical points and their linearizations:
\(x' = x + z^2,\\ y' = z^2-y, \\ z' = z+x^2.\)
Exercise 8.1.104: Any two-dimensional non-autonomous system \(x'=f(x,y,t)\), \(y'=g(x,y,t)\) can be written as a three-dimensional autonomous system (three equations). Write down this autonomous system using the variables \(u\), \(v\), \(w\).
Exercise 8.2.1: For the systems below, find and classify the critical points, also indicate if the equilibria are stable, asymptotically stable, or unstable.
a) \(x'=-x+3x^2\), \(y'=-y\) b) \(x'=x^2+y^2-1\), \(y'=x\) c) \(x'=ye^x\), \(y'=y-x+y^2\)
Exercise 8.2.2: Find the implicit equations of the trajectories of the following conservative systems. Next find their critical points (if any) and classify them.
a) \(x''+ x+x^3 = 0\) b) \(\theta''+\sin \theta = 0\) c) \(z''+ (z-1)(z+1) = 0\) d) \(x''+ x^2+1 = 0\)
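As a reminder of the standard first-integral technique for such conservative equations (stated here for reference, with \(y = x'\)):

```latex
% Multiply x'' + h(x) = 0 by x' and integrate in t:
x'' x' + h(x)\, x' = 0
\quad\Longrightarrow\quad
\frac{1}{2} (x')^2 + H(x) = C, \qquad H'(x) = h(x).
% For example, for part a), x'' + x + x^3 = 0 gives the trajectories
% \frac{1}{2} y^2 + \frac{1}{2} x^2 + \frac{1}{4} x^4 = C,  where y = x'.
```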
Exercise 8.2.3: Find and classify the critical point(s) of \(x' = -x^2\), \(y' = -y^2\).
Exercise 8.2.4: Suppose \(x'=-xy\), \(y'=x^2-1-y\). a) Show there are two spiral sinks at \((-1,0)\) and \((1,0)\). b) For any initial point of the form \((0,y_0)\), find the trajectory. c) Can a trajectory starting at \((x_0,y_0)\) where \(x_0 > 0\) spiral into the critical point at \((-1,0)\)? Why or why not?
Exercise 8.2.5: In the example \(x'=y\), \(y'=y^3-x\), show that for any trajectory, the distance from the origin is an increasing function. Conclude that the origin behaves like a spiral source. Hint: Consider \(f(t) = {\bigl(x(t)\bigr)}^2 + {\bigl(y(t)\bigr)}^2\) and show it has positive derivative.
Exercise 8.2.6: Suppose \(f\) is always positive. Find the trajectories of \(x''+f(x') = 0\). Are there any critical points?
Exercise 8.2.7: Suppose that \(x' = f(x,y)\), \(y' = g(x,y)\). Suppose that \(g(x,y) > 1\) for all \(x\) and \(y\). Are there any critical points? What can we say about the trajectories as \(t\) goes to infinity?
Exercise 8.2.101: For the systems below, find and classify the critical points. a) \(x'=-x+x^2\), \(y'=y\) b) \(x'=y-y^2-x\), \(y'=-x\) c) \(x'=xy\), \(y'=x+y-1\)
Exercise 8.2.102: Find the implicit equations of the trajectories of the following conservative systems. Next find their critical points (if any) and classify them. a) \(x''+ x^2 = 4\) b) \(x''+ e^x = 0\) c) \(x''+ (x+1)e^x = 0\)
Exercise 8.2.103: The conservative system \(x''+x^3 = 0\) is not almost linear. Classify its critical point(s) nonetheless.
Exercise 8.2.104: Derive an analogous classification of critical points for equations in one dimension, such as \(x'= f(x)\), based on the derivative. A point \(x_0\) is critical when \(f(x_0) = 0\), and almost linear if in addition \(f'(x_0) \not= 0\). Figure out whether the critical point is stable or unstable depending on the sign of \(f'(x_0)\). Explain. Hint: see Ch. 1.6.
Exercise 8.3.1: Take the damped nonlinear pendulum equation \(\theta '' + \mu \theta' + (\frac{g}{L}) \sin \theta = 0\) for some \(\mu > 0\) (that is, there is some friction). a) Suppose \(\mu = 1\) and \(\frac{g}{L} = 1\) for simplicity; find and classify the critical points. b) Do the same for any \(\mu > 0\) and any \(g\) and \(L\), but such that the damping is small, in particular, \(\mu^2 < 4(\frac{g}{L})\). c) Explain what your findings mean, and whether they agree with what you expect in reality.
Exercise 8.3.2: Suppose the hares do not grow exponentially, but logistically. In particular consider
\[x' = (0.4-0.01y)x - \gamma x^2, ~~~~~ y' = (0.003x-0.3)y .\]
For the following two values of \(\gamma\), find and classify all the critical points in the positive quadrant, that is, for \(x \geq 0\) and \(y \geq 0\). Then sketch the phase diagram. Discuss the implication for the long term behavior of the population. a) \(\gamma=0.001\), b) \(\gamma=0.01\).
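As a cross-check for part a), the following sketch in Python with sympy (an assumed tool, with \(\gamma = 0.001\) hard-coded) finds the critical points in the positive quadrant and prints the eigenvalues of the Jacobian at each, from which the classification can be read off:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
gamma = sp.Rational(1, 1000)  # case a); use sp.Rational(1, 100) for case b)

# The logistic predator-prey system above, with exact rational coefficients
f = (sp.Rational(4, 10) - sp.Rational(1, 100) * y) * x - gamma * x**2
g = (sp.Rational(3, 1000) * x - sp.Rational(3, 10)) * y

crit = sp.solve([f, g], [x, y], dict=True)
J = sp.Matrix([f, g]).jacobian([x, y])

for p in crit:
    if p[x] >= 0 and p[y] >= 0:  # positive quadrant only
        print(p, J.subs(p).eigenvals())
```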
Exercise 8.3.3: a) Suppose \(x\) and \(y\) are positive variables. Show \(\frac{y x}{e^{x+y}}\) attains a maximum at \((1,1)\). b) Suppose \(a,b,c,d\) are positive constants, and also suppose \(x\) and \(y\) are positive variables. Show \(\frac{y^a x^d}{e^{cx+by}}\) attains a maximum at \((\frac{d}{c},\frac{a}{b})\).
Exercise 8.3.4: Suppose that for the pendulum equation we take a trajectory giving the spinning-around motion, for example \(\omega = \sqrt{\frac{2g}{L} \cos \theta + \frac{2g}{L} + \omega_0^2}\). This is the trajectory where the lowest angular velocity is \(\omega_0\). Find an integral expression for how long it takes the pendulum to go all the way around.
Exercise 8.3.5: [challenging] Take the pendulum, and suppose the initial position is \(\theta = 0\). a) Find the expression for \(\omega\) giving the trajectory with initial condition \((0,\omega_0)\). Hint: Figure out what \(C\) should be in terms of \(\omega_0\). b) Find the crucial angular velocity \(\omega_1\), such that for any higher initial angular velocity, the pendulum will keep going around its axis, and for any lower initial angular velocity, the pendulum will simply swing back and forth. Hint: When the pendulum doesn't go over the top, the expression for \(\omega\) will be undefined for some \(\theta\)s. c) What do you think happens if the initial condition is \((0,\omega_1)\), that is, the initial angle is 0 and the initial angular velocity is exactly \(\omega_1\)?
Exercise 8.3.101: Take the damped nonlinear pendulum equation \(\theta '' + \mu \theta' + (\frac{g}{L}) \sin \theta = 0\) for some \(\mu > 0\) (that is, there is friction). Suppose the friction is large, in particular \(\mu^2 > 4 (\frac{g}{L})\). a) Find and classify the critical points. b) Explain what your findings mean, and if it agrees with what you expect in reality.
Exercise 8.3.102: Suppose we have the predator-prey system where the foxes are also killed at a constant rate \(h\) (\(h\) foxes killed per unit time): \(x' = (a-by)x,\) \(y' = (cx-d)y - h\). a) Find the critical points and the Jacobian matrices of the system. b) Put in the constants \(a=0.4\), \(b=0.01\), \(c=0.003\), \(d=0.3\), \(h=10\). Analyze the critical points. What do you think it says about the forest?
Exercise 8.3.103: [challenging] Suppose the foxes never die. That is, we have the system \(x' = (a-by)x,\) \(y' = cxy\). Find the critical points and notice they are not isolated. What will happen to the population in the forest if it starts at some positive numbers? Hint: Think of the constant of motion.
Exercise 8.4.1: Show that the following systems have no closed trajectories. a) \(x'=x^3+y\), \(y'=y^3+x^2\), b) \(x'=e^{x-y}\), \(y'=e^{x+y}\), c) \(x'=x+3y^2-y^3\), \(y'=y^3+x^2\).
Exercise 8.4.2: Formulate a condition for a 2-by-2 linear system \({\vec{x}\,}' = A \vec{x}\) to not be a center using the Bendixson-Dulac theorem. That is, the theorem says something about certain elements of \(A\).
Exercise 8.4.3: Explain why the Bendixson-Dulac Theorem does not apply for any conservative system \(x''+h(x) = 0\).
Exercise 8.4.4: A system such as \(x'=x\), \(y'=y\) has solutions that exist for all time \(t\), yet there are no closed trajectories or other limit cycles. Explain why the Poincaré-Bendixson Theorem does not apply.
Exercise 8.4.5: Differential equations can also be given in different coordinate systems. Suppose we have the system \(r' = 1-r^2\), \(\theta' = 1\) given in polar coordinates. Find all the closed trajectories and check if they are limit cycles and if so, if they are asymptotically stable or not.
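For the polar system above, a quick numerical check (in Python with scipy, an assumed dependency) shows that trajectories started both inside and outside the unit circle approach \(r=1\), which is consistent with an asymptotically stable limit cycle:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state):
    # r' = 1 - r^2, theta' = 1 in polar coordinates
    r, theta = state
    return [1 - r**2, 1.0]

# Start inside (r0 = 0.2) and outside (r0 = 2.0) the unit circle
sols = {r0: solve_ivp(rhs, (0, 20), [r0, 0.0], rtol=1e-8) for r0 in (0.2, 2.0)}
for r0, sol in sols.items():
    print(r0, "-> final r =", sol.y[0, -1])  # both approach r = 1
```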
Exercise 8.4.101: Show that the following systems have no closed trajectories. a) \(x'=x+y^2\), \(y'=y+x^2\), b) \(x'=-x\sin^2(y)\), \(y'=e^x\), c) \(x'=xy\), \(y'=x+x^2\).
Exercise 8.4.102: Suppose an autonomous system in the plane has a solution \(x=\cos(t)+e^{-t}\), \(y=\sin(t)+e^{-t}\). What can you say about the system (in particular about limit cycles and periodic solutions)?
Exercise 8.4.103: Show that the limit cycle of the Van der Pol oscillator (for \(\mu > 0\)) must not lie completely in the set where \(- \sqrt{\frac{1+\mu}{\mu}} < x < \sqrt{\frac{1+\mu}{\mu}}\).
Exercise 8.4.104: Suppose we have the system \(r' = \sin(r)\), \(\theta' = 1\) given in polar coordinates. Find all the closed trajectories. |
I am trying to derive the radial momentum equation in the equatorial plane of the Kerr geometry obtained by Lasota (1994), which reads (eqn. 6 on page 343; I am using units in which $M=1$) as follows: $$uu'+\frac{1}{r\Delta}\left(a^2-r-\frac{A\gamma^2K}{r^3}\right)u^2-\frac{A\gamma^2K}{r^6}+\frac{1}{P+\rho}\left(\frac{\Delta}{r^2}+u^2\right)P'=0 \qquad (1)$$ where $K=\dfrac{(\Omega-\Omega_K^+)(\Omega-\Omega_K^-)}{\Omega_K^+\Omega_K^-},\qquad \Omega_K^\pm=\pm\dfrac{1}{r^{3/2}\pm a},\qquad u\equiv u^r$
The primes in the above equation denote derivatives with respect to the coordinate $r$. This is obtained from the equation $$(P+\rho)u^\nu u^r_{;\nu}+(g^{r\nu}+u^ru^\nu)P_{,\nu}=0 \qquad (2)$$ which represents the projection of the covariant derivative of the perfect fluid energy-momentum tensor onto the hypersurface orthogonal to the four-velocity.
I tried to derive it as follows. First term in eqn. (2): $$(P+\rho)u^\nu u^r_{;\nu}=(P+\rho)u^r u^r_{;r}=(P+\rho)u^r\frac{1}{\sqrt{-g}}(\sqrt{-g}\, u^r)_{,r}=(P+\rho)uu'+(P+\rho)\frac{u^2}{r}$$ where I used $A^i_{;i}=\dfrac{1}{\sqrt{-g}}(\sqrt{-g}\, A^i)_{,i},\quad \sqrt{-g}=r,\quad u\equiv u^r$. Second term in eqn. (2): $$(g^{r\nu}+u^ru^\nu)P_{,\nu}=(g^{rr}+u^ru^r)P_{,r}=\frac{\Delta}{r^2}P'+u^2P'$$ where I used $g^{rr}=\dfrac{\Delta}{r^2}$.
Adding these two terms, we obtain $$uu'+\dfrac{u^2}{r}+\dfrac{1}{P+\rho}\left(u^2+\dfrac{\Delta}{r^2}\right)P'=0$$
Comparing this with eqn. (1), only the first and last terms match. However, the unmatched terms contain no factor of $P'$, which means I am missing something in the calculation of the first term.
Can anyone please point out what I am missing?
EDIT:
I missed the $u^t$ and $u^\phi$ terms in the expansion of the first term in eqn. (2). I have now expanded the term as $$(P+\rho)u^\nu u^r_{;\nu}=(P+\rho)(u^r u^r_{;r}+\Gamma^r_{\phi\phi}u^\phi u^\phi+\Gamma^r_{t\phi}u^tu^\phi+\Gamma^r_{\phi t}u^\phi u^t+\Gamma^r_{tt}u^tu^t)$$ and, using the expressions for the Christoffel symbols, I get the following equation: $$uu'+\frac{1}{r\Delta}\left(-\frac{A\gamma^2K}{r^3}\right)u^2-\frac{A\gamma^2K}{r^6}+\frac{1}{P+\rho}\left(\frac{\Delta}{r^2}+u^2\right)P'=0$$ Comparing this with eqn. (1), I am still missing two terms.
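In case it helps others check the algebra, here is a generic sympy routine for the Christoffel symbols of an arbitrary metric, sanity-checked below on the unit 2-sphere (a deliberately simple test case; the same routine can be pointed at the equatorial Kerr metric to verify the connection coefficients used above):

```python
import sympy as sp

def christoffel(g, coords):
    """Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})."""
    n = len(coords)
    ginv = g.inv()
    return [[[sp.simplify(sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c])
                      + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d]))
        for d in range(n)) / 2)
        for c in range(n)] for b in range(n)] for a in range(n)]

# Sanity check on the unit 2-sphere: ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2
th, ph = sp.symbols('theta phi', positive=True)
Gamma = christoffel(sp.diag(1, sp.sin(th)**2), [th, ph])
print(Gamma[0][1][1])  # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma[1][0][1])  # Gamma^phi_{theta phi} = cot(theta)
```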
Is there a way to calculate the expected number density of Dark Matter Halos above a given mass, in a certain redshift range, and in a certain area?
The function you're requesting — i.e. the number density $N$ of DM halos above a given mass $M_\mathrm{h}$ — is called the cumulative halo mass function (cHMF). It is obtained by integrating the halo mass function (HMF) from a given mass to infinity. The HMF, in turn, describes the (differential) number density of DM halos of a given mass.
In other words, $$ \boxed{N(>\!\!M_\mathrm{h}) = \int_{M_\mathrm{h}}^\infty \!\!\!dM_\mathrm{h}'\, \frac{dN}{dM_\mathrm{h}'}.} $$
Halo mass function
So, the problem is to determine the HMF, i.e. $dN/dM_\mathrm{h}$. This was first calculated, analytically, by Press & Schechter (1974) assuming spherical collapse of structures from an initial, smoothed density field. It may be written as $$ \frac{dN}{dM_\mathrm{h}} = \frac{\rho_{\mathrm{m,0}}}{M_\mathrm{h}} \left| \frac{d\ln\sigma}{dM_\mathrm{h}} \right| f(\sigma), $$ where $\rho_{\mathrm{m,0}}$ is the present-day average mass density of the Universe, $\sigma = \sigma(M_\mathrm{h},z)$ is the rms fluctuation of the (smoothed) density field, and $f(\sigma)$ is the "multiplicity function" (note that, because the number density decreases so rapidly with halo mass, for computational purposes the HMF is often expressed as $dN/d \ln M_\mathrm{h}$ rather than $dN/dM_\mathrm{h}$).
If you want, I can give you more details about how to calculate $\sigma(M_\mathrm{h},z)$ and $f(\sigma)$. You can also find details on this in sec. 2.1 of Laursen et al. (2018), but note that there's an error in Eq. 1. If you're happy with using a "black box", you can get both the HMF and cHMF for your favorite cosmological parameters using this online HMF calculator.
An analytical form of $f(\sigma)$ can be derived only in the Press-Schechter formalism, and it was subsequently found to overpredict the collapsed fraction at the low-mass end and underpredict it at the high-mass end (see e.g. Governato et al. 1999); in general one must obtain $f(\sigma)$ by fitting halo abundances in cosmological $N$-body simulations.
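To make the pipeline concrete, here is a toy numerical sketch in Python: it combines the Press-Schechter multiplicity function with an assumed power-law $\sigma(M)$ and a rough mean matter density (both are illustrative assumptions, not calibrated numbers), and then integrates the HMF from above to get the cumulative version:

```python
import numpy as np

delta_c = 1.686  # spherical-collapse threshold

def f_PS(sigma):
    """Press-Schechter multiplicity function."""
    nu = delta_c / sigma
    return np.sqrt(2 / np.pi) * nu * np.exp(-nu**2 / 2)

def sigma(M):
    """Toy sigma(M): a power law, purely for illustration."""
    return 2.0 * (M / 1e13) ** (-0.25)

rho_m0 = 4e10  # mean matter density today, Msun/Mpc^3 (roughly, for h ~ 0.7)

M = np.logspace(10, 16, 600)                            # halo mass grid, Msun
dlnsig_dM = np.gradient(np.log(sigma(M)), M)
hmf = rho_m0 / M * np.abs(dlnsig_dM) * f_PS(sigma(M))   # dN/dM

# Cumulative HMF: integrate dN/dM from each M up to the top of the grid
seg = 0.5 * (hmf[1:] + hmf[:-1]) * np.diff(M)           # trapezoid segments
chmf = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
```

With a real $\sigma(M,z)$ (from the linear power spectrum) and a calibrated $f(\sigma)$, the same two integration steps give the actual cHMF.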
Defining a halo
This is what N. Steinle describes in his answer, but actually you don't need to assume a density profile for the halos. So, how do you count halos in a simulation? There are several ways, arguably the two most popular being the spherical overdensity (SO) and the friends-of-friends (FoF) methods.
Spherical overdensity
In the SO method, you first calculate the center of mass (CoM) of a clump of particles in the simulation, and then you calculate the average density $\bar{\rho}$ of particles in successively larger spheres centered on the CoM. As you increase the radius of the sphere, $\bar{\rho}$ drops because you include less and less dense regions. When the density has reached a certain factor $\Delta$ times the average density $\rho_\mathrm{m}(z)$ in the Universe you stop, and the total mass of all particles inside that sphere is then the halo mass. The overdensity factor $\Delta$ is usually chosen to be around 200, but other choices exist, e.g. 500 which then gives slightly smaller masses (because you stop counting earlier).
A variant of this method is using ellipsoids rather than spheres, allowing for more realistic results for elongated structures.
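A minimal sketch of the SO procedure in plain Python/numpy (the synthetic Gaussian clump, step size, and reference density below are all illustrative assumptions; real halo finders are far more careful):

```python
import numpy as np

def so_mass(pos, m_p, center, rho_ref, delta=200, r0=0.1, dr=0.01):
    """Grow a sphere around `center` until the mean enclosed density
    drops below delta * rho_ref; return the enclosed mass and radius."""
    r_part = np.linalg.norm(pos - center, axis=1)
    r = r0
    while True:
        M = m_p * np.sum(r_part < r)
        if M / (4 / 3 * np.pi * r**3) < delta * rho_ref:
            return M, r
        r += dr

# Synthetic "halo": 5000 equal-mass particles in a Gaussian clump
rng = np.random.default_rng(0)
pos = rng.normal(scale=0.05, size=(5000, 3))
M_halo, r_delta = so_mass(pos, m_p=1.0, center=np.zeros(3), rho_ref=1.0)
print(M_halo, r_delta)
```

Choosing `delta=500` instead stops the growth earlier and hence returns a slightly smaller mass, exactly as described above.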
Friends-of-friends
In the FoF method, you start at an overdensity and count all particles that are "linked" together, meaning they are within some chosen distance (the "linking length") of each other. This method may give more realistic results for very non-spherical structures, but also tends to include particles in filaments streaming into the galaxies, which perhaps shouldn't be considered a part of a galaxy.
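A minimal FoF sketch in plain Python/numpy, using union-find over all pairs (an O(N^2) illustration; production halo finders use trees and periodic boundaries, neither of which is attempted here):

```python
import numpy as np

def fof_groups(pos, b):
    """Friends-of-friends: particles closer than the linking length b are
    'friends'; groups are the connected components of the friendship graph."""
    n = len(pos)
    parent = list(range(n))

    def find(i):
        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        for j in np.nonzero(d < b)[0]:
            ri, rj = find(i), find(int(j))
            if ri != rj:
                parent[ri] = rj

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated clumps should come out as two FoF groups
pts = np.array([[0, 0], [0.1, 0], [0.2, 0], [5, 5], [5.1, 5]])
print([sorted(g) for g in fof_groups(pts, b=0.3)])
```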
The figure below (from Klypin et al. 2011) shows halos from the Bolshoi simulation identified with the two methods: SO (red points) and FoF (blue points), compared to an extension of the P-S HMF (solid black line) that allows for ellipsoidal structures (Sheth & Tormen 1999, 2002). At all masses, FoF halos are seen to be more massive than SO halos.
Redshift range and area
Re-reading your question, I think maybe you're interested not in the number density, but in the absolute number in a volume spanned by a redshift range $dz$ and area $dA$. If that is the case, then you simply multiply your $N(>\!\!M_\mathrm{h})$ by the cosmological volume given by $dz$ and $dA$. Let me know if you also want to know how to calculate that.
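For completeness, here is a sketch of that volume factor in plain Python/numpy, for an assumed flat Lambda-CDM cosmology (H0 = 70 km/s/Mpc, Omega_m = 0.3; swap in your own parameters):

```python
import numpy as np

H0, Om = 70.0, 0.3          # assumed cosmology: flat Lambda-CDM
c = 299792.458              # speed of light, km/s
DH = c / H0                 # Hubble distance, Mpc

def E(z):
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def comoving_distance(z, n=2048):
    """Line-of-sight comoving distance in Mpc (trapezoid rule)."""
    zz = np.linspace(0.0, z, n)
    integrand = 1.0 / E(zz)
    return DH * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zz))

def comoving_volume(z1, z2, area_deg2):
    """Comoving volume in Mpc^3 between z1 < z2 over `area_deg2` of sky."""
    sky_frac = area_deg2 * (np.pi / 180)**2 / (4 * np.pi)
    shell = lambda z: 4 / 3 * np.pi * comoving_distance(z)**3
    return sky_frac * (shell(z2) - shell(z1))

print(comoving_distance(1.0))            # roughly 3.3e3 Mpc for these parameters
print(comoving_volume(0.9, 1.1, 100.0))  # Mpc^3 in a 100 deg^2 survey slice
```

Multiplying this volume by $N(>\!\!M_\mathrm{h})$ then gives the expected number of halos in the survey slice.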
Is there a way to calculate the expected number density of Dark Matter Halos above a given mass, in a certain redshift range, and in a certain area?
There are ways of counting mass distributions of dark matter halos. I'm not sure what you mean by "expected number density," perhaps you could elaborate if my answer is not what you wanted? Do you mean the number density of the background galaxies if one is using lensing data? That is a very different question.
As with all astrophysics, limitations on computational power are limits on "experimental" power, since we use simulations as our laboratories, and the modeling of dark matter halos is no exception.
So, a standard technique is to run N-body simulations of a dark-matter universe and see how everything arranges itself by fitting functional forms to local regions (hence the density of a halo out to a certain mass and radius). Thus, the density you get depends on the functional form you choose! The standard profile is the NFW profile, which treats the dark matter halo spherically, but this profile has many limitations that one must be careful of; for instance, it is only valid approximately out to the virial radius of the galaxy. Nowadays there are numerous available profiles (each with its own peculiarities) that try to include ellipticity of the dark matter halo. Comparing and improving these profiles is an area of active research; for instance, see this recent paper, where the Diemer and NFW profiles are compared. The Diemer profile is capable of going beyond the virial radius because it is composed of an "inner" density profile, similar to the NFW, that transitions at approximately the virial radius to an "outer" density profile.