Definition:Chebyshev Distance Definition Let $M_1 = \left({A_1, d_1}\right)$ and $M_2 = \left({A_2, d_2}\right)$ be metric spaces. Let $A_1 \times A_2$ be the cartesian product of $A_1$ and $A_2$. The Chebyshev distance on $A_1 \times A_2$ is defined as: $d_\infty \left({x, y}\right) := \max \left\{{d_1 \left({x_1, y_1}\right), d_2 \left({x_2, y_2}\right)}\right\}$ where $x = \left({x_1, x_2}\right), y = \left({y_1, y_2}\right) \in A_1 \times A_2$. The Chebyshev distance on $\displaystyle \mathcal A = \prod_{i \mathop = 1}^n A_i$ is defined as: $\displaystyle d_\infty \left({x, y}\right) = \max_{i \mathop = 1}^n \left\{{d_i \left({x_i, y_i}\right)}\right\}$ where $x = \left({x_1, x_2, \ldots, x_n}\right), y = \left({y_1, y_2, \ldots, y_n}\right) \in \mathcal A$. The Chebyshev distance on $\R^2$ is defined as: $\displaystyle d_\infty \left({x, y}\right) := \max \left\{ {\left\vert{x_1 - y_1}\right\vert, \left\vert{x_2 - y_2}\right\vert}\right\}$ where $x = \left({x_1, x_2}\right), y = \left({y_1, y_2}\right) \in \R^2$. Also known as The Chebyshev distance is also known as the maximum metric or sup metric. Also see Results about the Chebyshev distance can be found here. Source of Name This entry was named for Pafnuty Lvovich Chebyshev.
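As an illustrative sketch of the definition (plain Python; taking each coordinate metric $d_i$ to be the usual absolute-value metric on $\R$ is an assumption of the example, not part of the definition):

```python
# Chebyshev distance on a finite product of metric spaces:
#     d_inf(x, y) = max_i d_i(x_i, y_i)
# Here each component metric d_i is |x_i - y_i| (the real line).

def chebyshev(x, y):
    """Chebyshev (maximum) distance between two points of R^n."""
    return max(abs(a - b) for a, b in zip(x, y))

print(chebyshev((0, 0), (3, -4)))  # max(|0 - 3|, |0 - (-4)|) = 4
```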
How would you respond to a middle school student who says: “How do they know that irrational numbers NEVER repeat? I mean, there are only 10 possible digits, so they must eventually start repeating. And, how do they know that numbers like $\pi$ and $\sqrt2$ are irrational, since they can't check an infinite number of digits in the decimal form to see whether there is a repetition?” We know that irrational numbers never repeat by combining the following two facts: every rational number has a repeating decimal expansion, and every number which has a repeating decimal expansion is rational. Together these facts show that a number is rational if and only if it has a repeating decimal expansion. Decimal expansions which don't repeat are easy to construct; other answers already have examples of such things. I think the most important point to make with regard to your question is that nobody determines the irrationality of a number by examining its decimal expansion. While it is true that an irrational number has a non-repeating decimal expansion, you don't need to show a given number has a non-repeating decimal expansion in order to show it is irrational. In fact, this would be very difficult as we would have to have a way of determining all the decimal places. Instead, we use the fact that an irrational number is not rational; in fact, this is the definition of an irrational number (note, the definition is not that it has a non-repeating decimal expansion; that is a consequence). In particular, to show a number like $\sqrt{2}$ is irrational, we show that it isn't rational. This is a much better approach because, unlike irrational numbers, rational numbers have a very specific form that they must take, namely $\frac{a}{b}$ where $a$ and $b$ are integers, $b$ non-zero. The standard proofs show that you can't find such $a$ and $b$ so that $\sqrt{2} = \frac{a}{b}$, thereby showing that $\sqrt{2}$ is not rational; that is, $\sqrt{2}$ is irrational.
What about the number $0.1010010001000010000010000001 \dots$? The runs of zeroes always get longer so clearly it never repeats. I think that might satisfy the objection that the whole concept of a non-repeating decimal doesn't make sense. The equivalence between having a representation as a fraction and having a representation as an eventually-repeating decimal is somewhat deeper and for a middle school student it might be best to take it on faith. A place to start on that topic is Euler's theorem. One direction of it (that all eventually-repeating decimals can be represented as fractions) can be shown with a convergent series but I'm just not sure about a simple way to explain the other. Show them a simple decimal like 0.11000111100000111111... which has 2 ones, 3 zeros, 4 ones, 5 zeros, 6 ones, and so on. This never repeats. Also, say that numbers are proved to be irrational by assuming them to be rational and deriving a contradiction. Perhaps you can then give one of the many proofs that $\sqrt{2}$ is irrational. You can also use the fact that a fraction must repeat, so the first number above can not be a fraction since it has arbitrarily long sequences of zeros and ones. Looks like you've missed a few symbols/letters in your question, but the spirit is there. Depends what you mean by 'repeat'. If you mean eventually terminate in an $N$-digit sequence that is iterated infinitely, then you can write out the fraction that corresponds to that 'irrational' number $\implies$ it's not actually irrational. E.g., $$0.8333\ldots = 8/10 + 3/90 = 15/18.$$ There's also the canonical proof that $\sqrt 2$ isn't rational. Assume so. Then $\sqrt 2=p/q$ for $p,q\in\mathbb Z$ and $(p,q)=1$. Then $$ 2=\frac{p^2}{q^2}\iff 2q^2=p^2\implies 2\mid p\implies 4\mid p^2. $$ So write $p^2=4r^2$. Thus $2q^2=4r^2\implies 2\mid q$, which contradicts $(p,q)=1$. That's the rough proof at least.
One way of disproving their suggestion that "there are only 10 possible digits, so they must eventually start repeating" would be to show them the string: 0.10110111011110111110111111011111110.... and convince them that they can always choose a sub-string (for example '0' + '1' repeated $n$ times + '0') that will never be repeated further along the string. Thus, the string can never repeat-in-full (which is what you are really trying to show). If you can convince them of that then you should have no trouble convincing them that with 10 digits it is even easier to find non-repeating infinite strings of digits.
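To make this concrete, here is a small Python sketch (my own illustration, not from any of the answers) that builds a long prefix of $0.101001000\dots$ and checks that no tail of it repeats with any short period. A caveat in the same spirit as the answers above: a finite check like this can only rule out short periods over a finite prefix; the actual proof of non-periodicity is the growing-runs argument.

```python
def digits(n_runs):
    """Prefix of 0.101001000...: '1's separated by ever longer runs of '0'."""
    return "".join("0" * k + "1" for k in range(n_runs))

def tail_has_period(s, start, p):
    """Does s[start:] repeat with period p all the way to the end?"""
    tail = s[start:]
    return all(tail[i] == tail[i % p] for i in range(len(tail)))

def eventually_periodic(s, max_start, max_period):
    return any(tail_has_period(s, a, p)
               for a in range(max_start) for p in range(1, max_period + 1))

s = digits(60)  # 1830 digits; the zero runs grow to length 59
# Any tail starting early enough contains a zero run longer than every
# candidate period, followed by a later '1', so no short period can work:
print(eventually_periodic(s, 500, 50))  # False
```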
The problem: Find a formula for the total number of calls occurring during the insertion of n elements into an initially empty set. Assume that the insertion process fills up the binary search tree level-by-level. Leave your answer in the form of a sum.

Code for the INSERT function:

procedure INSERT(x: elementtype; var A: SET);
begin
  if A = nil then begin
    A^.element := x;
    A^.leftchild := nil;
    A^.rightchild := nil
  end
  else if x < A^.element then
    INSERT(x, A^.leftchild)
  else if x > A^.element then
    INSERT(x, A^.rightchild)
end;

The main confusion for me here is with leaving my answer in the form of a sum. I'm not all that great at sums (haven't taken Calc 2 yet), so I don't really know how to set them up or extract information from them all that well. Any help here would be greatly appreciated. For clarity: This is a review problem where the answer is: Let $2^k \leq n < 2^{k+1}$, i.e. $k = \lfloor \log_2 n \rfloor$. Then the number of calls equals $$ \sum_{i = 0}^{k - 1} (i + 1)2^i + (k + 1)(n - 2^k + 1) $$ I'd like to know the process behind getting this answer. Thank you.
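For intuition (not a substitute for deriving the sum), here is a Python sketch that checks the answer's formula against a direct count. It uses the fact that with level-by-level filling, the $m$-th inserted node sits at depth $\lfloor \log_2 m \rfloor$, and inserting a node at depth $d$ costs $d+1$ calls to INSERT (one call per level on the path down from the root):

```python
def calls_formula(n):
    """The review answer's sum, with 2^k <= n < 2^(k+1), i.e. k = floor(log2 n)."""
    k = n.bit_length() - 1
    return sum((i + 1) * 2**i for i in range(k)) + (k + 1) * (n - 2**k + 1)

def calls_direct(n):
    """Direct count: the m-th inserted node has depth floor(log2 m), and
    inserting it costs depth + 1 calls; m.bit_length() equals exactly that."""
    return sum(m.bit_length() for m in range(1, n + 1))

print(calls_formula(3), calls_direct(3))  # 5 5   (insertion costs 1 + 2 + 2)
```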
A proof that uses more basic techniques for the case $c\neq 0$ is this: Suppose $X_n\rightarrow X$ in distribution, and suppose $c_n\rightarrow c$ (where $X_n, X$ are random variables and $c_n,c$ real numbers with $c\neq 0$). We prove that $c_nX_n\rightarrow cX$ in distribution. Proof: Without loss of generality assume $c>0$. The CDFs of $X$ and $cX$ are nondecreasing and hence have at most a countably infinite number of discontinuities. Fix $x \in \mathbb{R}$ such that the CDF of $cX$ is continuous at $x$. We want to show $\lim_{n\rightarrow\infty} P[c_nX_n\leq x] = P[cX\leq x]$. 1) Fix $\epsilon>0$ such that the CDF of $X$ is continuous at $x/c + \epsilon$ (since there are at most a countably infinite number of discontinuities, we can find arbitrarily small values of $\epsilon>0$ for which this holds). There is an index $N$ such that $$ c_n>0, \frac{x}{c_n} \leq \frac{x}{c} + \epsilon \quad \forall n \geq N$$So for all $n \geq N$ we have \begin{align}P[c_nX_n \leq x] &= P[X_n \leq x/c_n]\\&\leq P[X_n \leq x/c + \epsilon]\end{align}Taking a $\limsup$ as $n\rightarrow\infty$ and using the fact that $X_n\rightarrow X$ in distribution and the CDF of $X$ is continuous at $x/c+\epsilon$ gives$$ \limsup_{n\rightarrow\infty} P[c_nX_n\leq x] \leq P[X \leq x/c + \epsilon]$$This holds for arbitrarily small $\epsilon>0$ and since the CDF of $X$ is continuous from the right we get: $$ \boxed{\limsup_{n\rightarrow\infty} P[c_nX_n\leq x] \leq P[X \leq x/c] = P[cX \leq x]}$$ 2) Fix $\epsilon>0$ such that the CDF of $X$ is continuous at $(x-\epsilon)/c$ (again, since discontinuities are countable, there are arbitrarily small $\epsilon>0$ for which this holds). 
There is an index $N$ such that $$ c_n >0 , \frac{x-\epsilon}{c} \leq \frac{x}{c_n} \quad, \forall n \geq N $$So for $n \geq N$ we have \begin{align}P[cX \leq x - \epsilon] &= P[X \leq \frac{x-\epsilon}{c}]\\&\overset{(a)}{=}\liminf_{n\rightarrow\infty} P[X_n \leq \frac{x-\epsilon}{c}]\\&\leq \liminf_{n\rightarrow\infty} P[X_n \leq \frac{x}{c_n}]\\&= \liminf_{n\rightarrow\infty} P[c_nX_n \leq x]\end{align}where (a) holds because $X_n\rightarrow X$ in distribution and the CDF of $X$ is continuous at $(x-\epsilon)/c$. This holds for arbitrarily small $\epsilon>0$ and since the CDF of $cX$ is continuous at $x$ we get$$ \boxed{P[cX\leq x] \leq \liminf_{n\rightarrow\infty} P[c_nX_n\leq x]} $$The two boxed inequalities imply the result. $\Box$
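A quick numerical illustration of the result (the particular choices here are mine, purely for illustration: $X_n$ uniform on the grid $\{1/n, \dots, n/n\}$, so that $X_n \rightarrow X \sim \mathrm{Uniform}(0,1)$ in distribution, and $c_n = 2 + 1/n \rightarrow c = 2$):

```python
def cdf_cn_xn(x, n):
    """P[c_n X_n <= x] for X_n uniform on {1/n, ..., n/n} and c_n = 2 + 1/n."""
    c_n = 2 + 1 / n
    count = int(x * n / c_n)          # number of atoms k/n with c_n * k/n <= x
    return max(0, min(count, n)) / n

def cdf_limit(x):
    """P[cX <= x] for c = 2 and X ~ Uniform(0, 1): equals x/2 on [0, 2]."""
    return max(0.0, min(x / 2, 1.0))

for x in (0.4, 1.0, 1.6):
    print(round(cdf_cn_xn(x, 10**6), 4), cdf_limit(x))
```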
How can I draw the figure shown above in rectangular coordinates, calculate the area and perimeter of the shaded region as a function of the radius r of the outer circle, and find the points of intersection of the inner circles?

Show it:

RegionPlot[
 region = RegionUnion[
   Sequence @@ RegionIntersection @@@ Subsets[{Disk[{-1, 0}], Disk[{0, -1}], Disk[{1, 0}], Disk[{0, 1}]}, {2}],
   Fold[RegionDifference, {Disk[{0, 0}, 2], Disk[{-1, 0}], Disk[{0, -1}], Disk[{1, 0}], Disk[{0, 1}]}]],
 Frame -> False]

Area:

Area[region]

4 (-2 + π)

Perimeter:

ArcLength@RegionBoundary[region]

12 π

I would use BooleanRegion:

reg = BooleanRegion[Xor, {Disk[{-1, 0}, 1], Disk[{0, 1}, 1], Disk[{1, 0}, 1], Disk[{0, -1}, 1], Disk[{0, 0}, 2]}];
RegionMeasure @ reg
RegionMeasure @ RegionBoundary @ reg

4 (-2 + Pi)

12 Pi

Your question wants a relationship for r, which I assume is the radius of the larger circle. You can get it like this:

c1 = ImplicitRegion[(x - r)^2 + y^2 <= r^2, {x, y}];
c2 = ImplicitRegion[x^2 + (y - r)^2 <= r^2, {x, y}];
Assuming[r > 0, Area[RegionIntersection[c1, c2]]]

yields $\frac{1}{2} (\pi -2) r^2$ All the shaded areas in terms of r:

FullSimplify[\[Pi]*r^2 - 4 \[Pi] (r/2)^2 + 8 1/8 (-2 + \[Pi]) r^2]

$\left ( \pi -2\right )r^2$ Testing for the particular answer given by yode:

(-2 + \[Pi]) r^2 /. r -> 2

yields $4\pi - 8$ Just for fun. By symmetry we need only consider the first quadrant.
Graphics[{EdgeForm[Black], FaceForm[None],
  Disk[{0, 0}, 2, {0, Pi/2}], Disk[{1, 0}, 1, {0, Pi}], Disk[{0, 1}, 1, {-Pi/2, Pi/2}],
  Line[{{1, 0}, {1, 1}}], Line[{{0, 0}, {1, 1}}], Line[{{0, 1}, {1, 1}}],
  Text[Style["A", 20], {0, 0}, {1, 1}], Text[Style["B", 20], {1, 0}, {1, 1}],
  Text[Style["C", 20], {1, 1}, {-2, -1}], Text[Style["D", 20], {2, 0}, {1, 1}],
  Text[Style["arc 1", 20, Background -> White], {0.7, 0.3}, {0, 0}],
  Text[Style["arc 2", 20, Background -> White], {0.3, 0.7}, {0, 0}],
  Text[Style["arc 3", 20, Background -> White], {1.4, 1.4}, {0, 0}],
  Text[Style["arc 4", 20, Background -> White], {0.8, 1.5}, {0, 0}],
  Text[Style["arc 5", 20, Background -> White], {1.5, 0.8}, {0, 0}]}]

Let the radius of the small circle be 1 (hence the radius of the large circle is 2). The perimeter can be seen to be the total length of arcs 1 through 5. Let $p_i$ denote the length of arc $i$. Now $p_1=p_2=p_4=p_5= \pi/2$ and $p_3= \pi/2 \times 2$. Hence the total perimeter:

perimeter = 4 (4 Pi/2 + Pi/2 2)

i.e. $12\pi$ For the area: the area bounded by arc 1 and arc 2 is 2 x (area of sector - area of triangle ABC):

area1 = 2 (Pi/4 - 1/2)

The area bounded by arcs 3, 4 and 5 = area of quarter circle - area of 2 semicircles + area of overlap:

area2 = (Pi/2) 2^2/2 - Pi + area1

Note area1 = area2 = $\pi/2-1$, so the total area is

total = Simplify[4 (area1 + area2)]

yielding: 4 (-2 + \[Pi]) This isn't really a Mathematica problem. It is a Euclidean geometry problem and can be solved by a little classic geometry reasoning. Like ubpdqn I will work in the 1st quadrant and invoke symmetry. By construction $\qquad$OC = OE = R $\qquad$OB = OD = DA = BA = R/2 By observation $\qquad$Quadrant perimeter = EA + AO + OA + BA + EC $\qquad$OBAD is a square Arcs EA + OA and AO + BA are each one half the circumference of the equal circles centered at B and D, which have radius R/2, so EA + OA + AO + BA = circumference of an inner circle = π R.
EC is one quarter of the circumference of the outer circle, so EC = (2 π R)/4 = π R/2. The quadrant perimeter is therefore π R + π R/2 = 3 π R/2. It follows that the full perimeter, 4 x (quadrant perimeter), is 6 π R. Point A is one of the points where the inner circles intersect and it clearly lies at {R/2, R/2}. By symmetry, the four points of intersection are $\qquad${{R/2, R/2}, {-R/2, R/2}, {-R/2, -R/2}, {R/2, -R/2}}. Finding the area is a little more complicated, but not much. The area, a1, between the two arcs ending at points O and A is clearly twice the difference of the area between the arc OA and the dashed line OA. This in turn is the area of a quadrant of the inner circle centered at B less half of the square OBAD. Thus, $\qquad$a1 = 2 ((π (R/2)^2)/4 - ((R/2)^2)/2) = 1/8 (π - 2) R^2 The area, a2, bordered by the arcs EC, EA and AC is the area of the quadrant less the area of 2 quadrants of an inner circle less the area of the square OBAD. This is given by $\qquad$a2 = (π R^2)/4 - (π (R/2)^2)/2 - (R/2)^2 = 1/8 (π - 2) R^2 Note that a1 = a2 (which I find an interesting result in itself). Therefore, the full area is $\qquad$4 (2 a1) = (π - 2) R^2
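As an independent numerical cross-check of these closed forms (a Monte Carlo sketch in Python rather than Mathematica; the odd-coverage/XOR membership test mirrors the BooleanRegion[Xor, ...] construction above, with R = 2):

```python
import random

R = 2.0                                   # outer radius; inner radius R/2 = 1
centers = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def in_region(x, y):
    """Point is shaded iff it lies in an odd number of the five disks (XOR)."""
    inside = sum((x - a) ** 2 + (y - b) ** 2 <= 1 for a, b in centers)
    inside += x * x + y * y <= R * R
    return inside % 2 == 1

random.seed(0)
n = 200_000
hits = sum(in_region(random.uniform(-R, R), random.uniform(-R, R)) for _ in range(n))
area_mc = hits / n * (2 * R) ** 2          # sample box has area (2R)^2 = 16
print(area_mc)   # should be close to (pi - 2) R^2 = 4 (pi - 2) ≈ 4.566
```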
As Andrew mentions you can use \usepackage[fleqn]{amsmath}, but this will mean that all your equations will be moved to the left. However, if you want to be able to have some centered and some on the left, then you can use the flalign environment. But, note the trailing & that is required when this is used. As mentioned by egreg, the trailing & in the flalign environment is only required in one of the lines. \documentclass{article} \usepackage{amsmath}% mathtools includes this so this is optional \usepackage{mathtools} \begin{document} \begin{align*}% centered \cos\theta_1 \cos\theta_2-\sin\theta_1\sin\theta_2 &= \cos(\theta_1 +\theta_2) \\ \sin\theta_1 \cos\theta_2 + \cos\theta_1 \sin\theta_2 &= \sin(\theta_1+\theta_2) \end{align*} \begin{flalign*}% left aligned \cos\theta_1 \cos\theta_2-\sin\theta_1\sin\theta_2 &= \cos(\theta_1 +\theta_2) &\\ \sin\theta_1 \cos\theta_2 + \cos\theta_1 \sin\theta_2 &= \sin(\theta_1+\theta_2) &% Need tailing alignment char to get all the way left \end{flalign*} \end{document} I removed the \; from the OP's MWE, which inserted additional spacing where it was not necessary.
Difference between revisions of "Moser-lower.tex"

Revision as of 12:01, 18 June 2009

\section{Lower bounds for the Moser problem}\label{moser-lower-sec} In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ 2 entries (see Section \ref{notation-sec} for definition), is a Moser set, so that \begin{equation}\label{cin} c'_{n,3}\geq \vert S_{i,n}\vert \end{equation} holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq (C-o(1)) 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$; in fact \eqref{cin} gives \eqref{cpn3} with $C := \sqrt{\frac{9}{4\pi}}$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$. Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or semispheres or applying elementary results from coding theory.
Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$. As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound \begin{equation}\label{cn3-low} c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}. \end{equation} It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$. Asymptotically, Stirling's formula and \eqref{cn3-low} then give the lower bound \eqref{cpn3} with $C = \frac{3}{2} \times \sqrt{\frac{9}{4\pi}}$, which is asymptotically $50\%$ better than the bound \eqref{cin}. The work of Chv\'{a}tal \cite{chvatal1} already contained a refinement of this idea which we here translate into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$ and minimal distance $d$. Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right).
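The displayed values of \eqref{cn3-low} can be checked mechanically; the following short Python sketch (an illustration, not part of the argument) maximises $\binom{n+1}{i}2^{i-1}$ over all $i$:

```python
from math import comb

def semisphere_bound(n):
    """max over 1 <= i <= n of binom(n+1, i) * 2^(i-1)."""
    return max(comb(n + 1, i) * 2 ** (i - 1) for i in range(1, n + 1))

print([semisphere_bound(n) for n in range(2, 7)])  # [6, 16, 40, 120, 336]
```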
\end{equation} With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }} Generally, $A(n,1)=2^n, A(n,2)=2^{n-1}, A(n-1,2e-1)=A(n,2e), A(n,d)=2$, if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html \textbf{include to references? or other book with explicit values of $A(n,d)$ } For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344.
\end{array} \] With $k=3$ \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With $k=4$ \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\] It should be pointed out that these bounds are even numbers, so the fact that $c'_{4,3}=43$ is odd shows that one cannot generally expect this lower bound to give the optimum. The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that using Stirling's formula and explicit bounds on $A(n,d)$ the best possible value known to date of the constant $C$ in equation \eqref{cpn3} can be worked out, but we refrain from doing this here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$ Chv\'{a}tal \cite{chvatal1} proved that the expression on the right hand side of \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$, so that the refinement described above gains a constant factor over the initial construction only. For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}.
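These evaluations of \eqref{cnchvatal} can be reproduced directly from the table of $A(n,d)$ values; a short Python sketch (only the table entries actually needed for $k=2,3$ are transcribed):

```python
from math import comb

# Entries A(n, d) taken from the table above.
A = {(2, 1): 4, (3, 1): 8, (3, 2): 4, (4, 1): 16, (4, 2): 8, (4, 3): 2,
     (5, 1): 32, (5, 2): 16, (5, 3): 4, (6, 1): 64, (6, 2): 32, (6, 3): 8,
     (7, 2): 64, (7, 3): 16, (7, 4): 8, (8, 3): 20, (8, 4): 16, (9, 4): 20}

def chvatal_bound(n, k):
    """sum_{j=0}^{k} binom(n, j) * A(n - j, k - j + 1)."""
    return sum(comb(n, j) * A[(n - j, k - j + 1)] for j in range(k + 1))

print(chvatal_bound(4, 2), chvatal_bound(7, 3))  # 42 960
```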
A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following: Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$: \begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize} This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above. A modification of the construction in \eqref{cn3-low} leads to a slightly better lower bound. Observe that if $B \subset \Delta_n$, then the set $A_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}$ is a Moser set as long as $B$ does not contain any ``isosceles triangles'' $(a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c)$ for any $r,s \geq 0$ not both zero; in particular, $B$ cannot contain any ``vertical line segments'' $(a+r,b,c+r), (a,b+2r,c)$. An example of such a set is provided by selecting $0 \leq i \leq n-3$ and letting $B$ consist of the triples $(a, n-i, i-a)$ when $a \neq 0 \mod 3$, $(a,n-i-1,i+1-a)$ when $a \neq 1 \mod 3$, $(a,n-i-2,i+2-a)$ when $a=0 \mod 3$, and $(a,n-i-3,i+3-a)$ when $a=2 \mod 3$.
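The three itemised bounds follow from \eqref{cnn} by pure arithmetic; a short Python sketch:

```python
from math import comb

def bound_with_extra(n, i, a_prime_size):
    """binom(n+1, i) * 2^(i-1) + |A'|, as in the displayed bound."""
    return comb(n + 1, i) * 2 ** (i - 1) + a_prime_size

print(bound_with_extra(4, 3, 3),   # |A'| = 3  -> 43
      bound_with_extra(5, 4, 4),   # |A'| = 4  -> 124
      bound_with_extra(6, 5, 6))   # |A'| = 6  -> 342
```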
Asymptotically, this set occupies about two thirds of the spheres $S_{i,n}$, $S_{i+1,n}$ and one third of the spheres $S_{i+2,n}, S_{i+3,n}$ and (setting $i$ close to $2n/3$) gives a lower bound \eqref{cpn3} with $C = 2 \times \sqrt{\frac{9}{4\pi}}$, which is thus superior to the previous constructions. An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}: \begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline n & lower bound & n & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_{n,3}$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure} More complete data, including the list of optimisers, can be found at {\tt http://abel.math.umu.se/~klasm/Data/HJ/}. This indicates that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. The lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}. \begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure} Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$.
If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=8$ case, the set $$ B = \{ (0,1,7), (0,3,5), (1,0,7), (1,2,5), (1,3,4), (1,4,3), (2,4,2), (3,1,4), (3,3,2), (4,2,2), (4,3,1), (4,4,0), (6,1,1), (7,0,1), (7,1,0) \}$$ generates the lower bound $c'_{8,3} \geq 2902$ given above (and, up to reflection $a \leftrightarrow c$, is the only such set that does so); but by adding the four elements $11333333, 33113333, 33331133, 33333311$ from $\Gamma_{2,0,6}$ one can increase the lower bound slightly to $2906$. However, we have been unable to locate a lower bound which is asymptotically better than \eqref{cpn3}. Indeed, any method based purely on the $A_B$ construction cannot do asymptotically better than the previous constructions: \begin{proposition} Let $B \subset \Delta_n$ be such that $A_B$ is a Moser set. Then $|A_B| \leq (2 \sqrt{\frac{9}{4\pi}} + o(1)) \frac{3^n}{\sqrt{n}}$. \end{proposition} \begin{proof} By the previous discussion, $B$ cannot contain any pair of the form $(a,b+2r,c), (a+r,b,c+r)$ with $r>0$. In other words, for any $-n \leq h \leq n$, $B$ can contain at most one triple $(a,b,c)$ with $c-a=h$. From this and \eqref{cn3}, we see that $$ |A_B| \leq \sum_{h=-n}^n \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!}.$$ From the Chernoff inequality (or the Stirling formula computation below) we see that $\frac{n!}{a! b! c!} \leq \frac{1}{n^{10}} 3^n$ unless $a,b,c = n/3 + O( n^{1/2} \log^{1/2} n )$, so we may restrict to this regime, which also forces $h = O( n^{1/2} \log^{1/2} n )$.
If we write $a = n/3 + \alpha$, $b = n/3 + \beta$, $c = n/3+\gamma$ and apply Stirling's formula $n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n}$, we obtain $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) - (\frac{n}{3}+\beta) \log (1 + \frac{3\beta}{n} ) - (\frac{n}{3}+\gamma) \log (1 + \frac{3\gamma}{n} ) ).$$ From Taylor expansion one has $$ (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) = \alpha + \frac{3}{2} \frac{\alpha^2}{n} + o(1)$$ and similarly for $\beta,\gamma$; since $\alpha+\beta+\gamma=0$, we conclude that $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{2n} (\alpha^2+\beta^2+\gamma^2) ).$$ If $c-a=h$, then $\alpha^2+\beta^2+\gamma^2 = \frac{3\beta^2}{2} + \frac{h^2}{2}$. Thus we see that $$ \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!} \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{4n} h^2 ).$$ Using the integral test, we thus have $$ |A_B| \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \int_\R \exp( - \frac{3}{4n} x^2 )\ dx.$$ Since $\int_\R \exp( - \frac{3}{4n} x^2 )\ dx = \sqrt{\frac{4\pi n}{3}}$, we obtain the claim. \end{proof}
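The integral-test step can be sanity-checked numerically: for moderately large $n$ the sum $\sum_{h=-n}^{n} \exp(-\frac{3}{4n}h^2)$ is already extremely close to $\int_\R \exp(-\frac{3}{4n}x^2)\,dx = \sqrt{4\pi n/3}$. A short Python sketch:

```python
from math import exp, pi, sqrt

n = 10_000
gauss_sum = sum(exp(-3 * h * h / (4 * n)) for h in range(-n, n + 1))
integral = sqrt(4 * pi * n / 3)    # closed form of the Gaussian integral
print(gauss_sum, integral)         # the two agree to high relative accuracy
```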
The aim of the SURVPOOL project is to investigate the potential influence of lifetime exposure to modifiable risk factors, such as overweight and obesity, on the occurrence of cancer and death due to cancer. Here, we succinctly describe the methodology used. \( \newcommand{\ud}{\mathrm{d}} \newcommand{\p}{\mathrm{P}} \newcommand{\np}{\mathrm{NP}} \newcommand{\pr}{\mathrm{Pr} } \newcommand{\tlm}{\textlengthmark} \newcommand{\bth}{\boldsymbol{\theta}} \newcommand{\yb}{\mathbf{y}} \newcommand{\xb}{\mathbf{x}} \newcommand{\zb}{\mathbf{z}} \newcommand{\w}{\mathbf{w}} \newcommand{\rr}{\mathrm{RR}} \newcommand{\bbt}{\boldsymbol{\beta}} \newcommand{\bgm}{\boldsymbol{\gamma}} \newcommand{\xbi}{\boldsymbol{\xi}} \newcommand{\bzt}{\boldsymbol{\zeta}} \) Body mass index (BMI) trajectories over time will be modelled for every study participant with at least two BMI assessments (after exclusion of BMI information in the year before and the year after cancer diagnosis for those who developed an invasive malignancy). This will be done using a quadratic growth model with a random intercept and random slope. More precisely, for individual \(i\) and measurement occasion \(j\), the BMI will be modelled as a quadratic polynomial of age according to the following equation: \begin{equation} \label{eq:1} \nonumber \mathrm{BMI}_{ij}= (\alpha_0+u_{0i})+(\alpha_1+u_{1i})\cdot\mathrm{Age}_{ij}+\alpha_2\cdot\mathrm{Age}_{ij}^2+\varepsilon_{ij} \end{equation} \begin{equation} \label{eq:3} \nonumber \mathrm{with~~} \begin{pmatrix} u_{0i} \\ u_{1i} \end{pmatrix} \sim N(\mathbf{0},\Sigma) \mathrm{~~and~~} \varepsilon_{ij}\sim N(0,\sigma^2) \end{equation} This model describes an individual's BMI trajectory as the sum of a population-average quadratic trend in age and individual-specific random deviations from that trend (a random shift \(u_{0i}\) of the intercept and a random shift \(u_{1i}\) of the slope), plus a residual error \(\varepsilon_{ij}\). Each individual BMI trajectory can then be described by a simple polynomial equation using the estimated parameters from this model, as illustrated in the figure below.
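To make the growth model concrete, here is a minimal Python sketch that simulates one individual's trajectory from the model above and recovers the individual-specific quadratic coefficients. All numerical values (\(\alpha_0,\alpha_1,\alpha_2\), \(\Sigma\), the measurement ages) are made-up illustrations, not SURVPOOL estimates; in practice the mixed model is fitted jointly across all individuals with dedicated mixed-model software, not by per-individual least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, a2 = 30.0, -0.45, 0.006                     # hypothetical fixed effects
Sigma = np.array([[4.0, -0.02], [-0.02, 0.001]])    # hypothetical cov of (u0, u1)

def trajectory(ages, u0, u1, noise_sd=0.0):
    """BMI_ij = (a0 + u0) + (a1 + u1) * Age_ij + a2 * Age_ij^2 + eps_ij."""
    eps = rng.normal(0.0, noise_sd, size=len(ages)) if noise_sd > 0 else 0.0
    return (a0 + u0) + (a1 + u1) * ages + a2 * ages ** 2 + eps

u0, u1 = rng.multivariate_normal([0.0, 0.0], Sigma)  # one individual's random effects
ages = np.array([22.0, 31.0, 38.0, 47.0, 55.0])      # irregular measurement ages
bmi = trajectory(ages, u0, u1)                       # noise-free for the check

# With noise-free data, an ordinary quadratic fit recovers the
# individual-specific coefficients (up to floating-point error):
c2, c1, c0 = np.polyfit(ages, bmi, 2)
print(np.allclose([c0, c1, c2], [a0 + u0, a1 + u1, a2], atol=1e-6))  # True
```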
The left panel shows the observed trajectories for a group of individuals, with two individual trajectories highlighted (red and dark-blue lines). The number and the timing of the measurements differ from one individual to the next, making it difficult to determine meaningful average values (e.g. the mean BMI at the age of 50 years). The right panel shows the result of the modelling process: the dashed black curve represents the mean BMI trajectory of the population, and each light-blue line represents an individual-specific trajectory (the red and dark-blue curves represent the modelled trajectories for the same two individuals highlighted in the left panel). These individual-specific curves can be used to define individual BMI-related variables, as explained in the next section. The growth curve model described in the previous section enables us to define individual-specific BMI trajectories as simple quadratic polynomial functions of age. These curves will be used to define several BMI-related variables that summarize the individual BMI trajectories and that can be used as predictors in time-to-event models. For the purpose of this example, we restrict our interest to summary BMI-related variables over a specified period of time (in this case, between the ages of 20 and 50 years), of length \(T\). We describe three such variables below: The next step is to use the previously defined variables as predictors in time-to-event models (e.g. Cox proportional hazard regression models) to predict the occurrence of cancer (or of death in cancer patients). The general use of this kind of model is to assess whether the individual-specific hazard (i.e. the function of time describing the instantaneous risk of presenting the event of interest) is dependent on the variables of interest (e.g. BMI-related variables), possibly after adjustment for variables that are already known to have an effect on the event of interest. 
A standard model formulation might relate the hazard of, for example, cancer occurrence \(\lambda\) at time \(t\) and a BMI-related variable (e.g. \(\mathrm{BMI}_{25}\) as defined in the previous section) according to the following equation: \begin{equation} \nonumber \lambda(t,\mathrm{BMI}_{25},\xb)=\lambda_0(t)\exp\bigl\{\beta\,\mathrm{BMI}_{25}+\xb^{T}\bgm\bigr\} \end{equation} where \(\lambda_0\) is the baseline hazard, \(\xb\) represents a vector of adjusting variables (e.g. age, gender, or smoking status), and \(\beta\) and \(\bgm\) are regression coefficients related to the effects of the variables on the hazard. In particular, the exponential of \(\beta\) is the hazard ratio related to the variable \(\mathrm{BMI}_{25}\): it indicates by how much the hazard at any point in time is increased when the variable \(\mathrm{BMI}_{25}\) is increased by one unit.
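As a minimal illustrative sketch (with made-up trajectory coefficients and an assumed \(\beta\), not estimates from the project), the mean BMI between ages 20 and 50 can be computed in closed form from a fitted quadratic trajectory, and the hazard ratio is then \(e^\beta\) per unit increase in the summary variable:

```python
from math import exp

# Hypothetical fitted trajectory for one individual:
# BMI(t) = b0 + b1*t + b2*t^2 (illustrative values, not SURVPOOL estimates)
b0, b1, b2 = 18.0, 0.20, -0.001

def mean_bmi(b0, b1, b2, t0=20.0, t1=50.0):
    """Mean BMI over [t0, t1], i.e. (1/T) times the integral of the quadratic."""
    antider = lambda t: b0 * t + b1 * t ** 2 / 2 + b2 * t ** 3 / 3
    return (antider(t1) - antider(t0)) / (t1 - t0)

bmi_2050 = mean_bmi(b0, b1, b2)   # summary variable over ages 20-50

# With an (assumed) log-hazard coefficient beta, the hazard ratio for a
# one-unit increase in the summary variable is exp(beta):
beta = 0.05
hr_per_unit = exp(beta)
hr_per_5 = exp(5 * beta)          # hazard ratio per +5 kg/m^2
print(bmi_2050, hr_per_unit, hr_per_5)
```

The closed-form integral avoids any numerical quadrature, since the trajectory is a polynomial with known coefficients.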
MATHEMATICS
- On the theory of semigroups of linear operators — D. L. Berman, p. 9
- An application of the concept of rotation of a vector field — Yu. G. Borisovich, p. 12
- Conditions for a function of many variables to be representable as a sum of a finite number of plane waves traveling in given directions — B. A. Vostretsov, p. 16
- On the principle of limiting amplitude — M. M. Gekhtman, p. 20
- Boundary properties of a class of mappings in space — V. Zorich, p. 23
- Boundary-value problems for elliptic equations in conical regions — V. A. Kondrat'ev, p. 27
- An error bound for the numerical solution of a nonlinear integral equation — I. P. Mysovskikh, p. 30
- On the solution of boundary-value problems for linear elliptic equations in an infinite region — A. P. Oskolkov, p. 34
- On the topological characterization of uniform properties of metric spaces — Yu. P. Orevkov, p. 38
- On infinite-dimensional linear groups — B. I. Plotkin, p. 42
- Categories with division and integral representations — A. V. Roiter, p. 46
- On the regularization of ill-posed problems — A. N. Tikhonov, p. 49
- The Dirichlet problem for an elliptic system of two second-order differential equations — N. E. Tovmasyan, p. 53
- Imbedding the space of $s$-smooth functions of $n$ variables into a space of sufficiently smooth functions of fewer variables — G. M. Henkin, p. 57

CYBERNETICS AND REGULATION THEORY
- Some theorems in the theory of $\Psi$-stability in cooperative games — O. N. Bondareva, p. 61

MATHEMATICAL PHYSICS
- Asymptotic method of investigating transients in nonlinear oscillatory systems — T. A. Tibilov, p. 64

PHYSICS
- Experimental determination of the matrix element of an electron transition for $\gamma$- and $\beta$-systems of the $\mathrm{NO}$ molecule — E. T. Antropov, A. P. Dronov, N. N. Sobolev, V. P. Cheremisinov, p. 67
- Photosemiconducting properties of methylchlorophyllide $\mathrm{a}$ — A. T. Vartanyan, p. 70
- The hydrodynamics of a rotating Bose system below the condensation point — S. V. Iordanskii, p. 74

GEOPHYSICS
- The mechanism of the 1948 Ashgabat earthquake from data secured by geophysical investigations — D. N. Rustanovich, p. 86

ELECTRICAL ENGINEERING
- The optimal (i.e. reducing the losses to a minimum) law of magnetic polarity reversal in ferromagnetic cores with a rectangular hysteresis loop — M. A. Rozenblat, Yu. D. Rosental, p. 90

CRYSTALLOGRAPHY
- The determination of crystal structures by the $R$-factor minimization method — B. K. Vaĭnshteĭn, I. M. Gel'fand, R. L. Kayushina, Yu. G. Fedorov, p. 93

CHEMISTRY
- The use of luminescent reagents in the kinetic method of analysis — E. A. Bozhevol'nov, S. U. Kreynhold, R. P. Lastovskii, V. V. Sidorenko, p. 97
- Directed catalytic synthesis of solid paraffin from carbon monoxide and hydrogen — T. F. Bulanova, Ya. T. Eidus, N. S. Sergeeva, Yu. T. Khudyakov, p. 101
- Homolytic reactions of tetraethylgermane — N. S. Vyazankin, E. N. Gladyshev, G. A. Razuvaev, p. 104
- The synthesis and dehydration of certain germanium-containing diene carbinols — I. M. Gverdtsiteli, T. P. Guntsadze, A. D. Petrov, p. 107
- Energy transfer upon the oxidation of aromatic hydrocarbons by radiation — A. T. Koritzky, V. N. Shamshev, p. 111
- Comparative reactivity of hydrocarbons in the cyclopropane series — O. A. Nesmeyanova, M. Yu. Lukina, B. A. Kazanskii, p. 114
- The five-component system $\mathrm{UO}_2(\mathrm{NO}_3)_2$ – $(\mathrm{C}_4\mathrm{H}_9)_2\mathrm{PO}(\mathrm{C}_4\mathrm{H}_9\mathrm{O})$ – $\mathrm{H}_2\mathrm{O}$ – $\mathrm{HNO}_3$ – $\mathrm{CCl}_4$ with a constant ratio of $(\mathrm{C}_4\mathrm{H}_9)_2\mathrm{PO}(\mathrm{C}_4\mathrm{H}_9\mathrm{O})$ to $\mathrm{CCl}_4$ in the stratification region — A. V. Nikolaev, Yu. A. Dyadin, I. I. Yakovlev, Z. N. Mironova, p. 118
- The doubling mechanism in the cyclization of depsipeptides and peptides — Yu. A. Ovchinnikov, V. T. Ivanov, A. A. Kiryushkin, M. M. Shemyakin, p. 122
- Purification of indium from tin and lead impurities by zonal melting of its chloride — P. I. Fedorov, N. S. Sitdykova, p. 126

PHYSICAL CHEMISTRY
- Preliminary computation of adsorption equilibrium parameters for the adsorbent – binary vapour mixture system — B. P. Bering, V. V. Serpinskii, S. I. Surinova, p. 129
- The effect of zinc ions on the sorption of hydrogen and the catalytic activity of palladium — N. A. Zakarina, G. D. Zakumbaeva, D. V. Sokol'skii, p. 133
- The activity of alkali-metal and ammonium chlorides in aqueous solutions of sodium chloride — A. N. Kirgintsev, A. V. Luk'yanov, p. 136
- Kinetic stages of ethylene oxidation by palladium in aqueous solutions — I. I. Moiseev, M. N. Vargaftik, Ya. K. Syrkin, p. 140
- New polyconjugate systems and their electrophysical properties — S. A. Nizova, I. I. Patalakh, Ya. M. Paushkin, p. 144
- On the theory of the compensation effect in diffusion processes taking place in solids — S. Z. Roginskii, Yu. L. Hait, p. 147
- On the semiempirical theory of isotropic superfine splitting in the electron spin resonance spectra of free radicals — P. V. Schastnev, G. M. Zhidomirov, p. 151
- Corrections to the article "An X-ray spectroscopic study of some polyferrocenes" (Doklady AN, vol. 149, no. 6, 1963) — È. E. Vainstein, Yu. F. Kopelev
Superpower The two real-holomorphic superpower functions are considered here: \(\mathrm{SuPow}_a(z)=\exp(a^z)\) and \(\mathrm{SdPow}_a(z)=\exp(-a^z)\) For \(a\!=\!2\), explicit plots of these two functions are shown in Figure 1 at right. Transfer equation \(F(z\!+\!1)=T(F(z))\) Functions \(F\!=\!\mathrm{SuPow}_a\) and \(F\!=\!\mathrm{SdPow}_a\), together with their modifications by a displaced argument, seem to be the only real-holomorphic solutions of the transfer equation with a power function \(T\) as transfer function. Periodicity The complex map of the function SuPow for the same value \(a\!=\!2\) is shown in Figure 2. Functions \(\mathrm{SuPow}_a\) and \(\mathrm{SdPow}_a\) are periodic; the period is \(P= 2 \pi \mathrm i/\ln(a)\) For positive \(a\), the period is pure imaginary. At \(a\!=\!2\), for the case shown in Figure 2, the period is \(P=2\pi \mathrm i / \ln(2) \approx 9.06472028365\,\mathrm i\) The map in Fig. 2 is reproduced under translations by roughly \(\pm 9.06472028365\) along the imaginary axis. The range of Fig. 2 covers a little more than one period of the function. Functions \(\mathrm{SuPow}_a\) and \(\mathrm{SdPow}_a\) can be expressed through each other: \(\mathrm{SdPow}_a(z)=\mathrm{SuPow}_a\big(z\!+\mathrm i \pi/\ln(a) \big)\) In particular, the map of the function \(\mathrm{SdPow}_2\) shown in Fig. 3 appears as the map of \(\mathrm{SuPow}_2\) shown in Fig. 2 displaced by \(P/2\), i.e. translated by roughly 4.53236014183 along the imaginary axis. There is another relation between these two superfunctions: \(\mathrm{SdPow}_a(z)=1/\mathrm{SuPow}_a(z)=\mathrm{SuPow}_a(z)^{-1}=\displaystyle\frac{1}{\mathrm{SuPow}_a(z)}\) Inverse functions \(G=\mathrm{AuPow}_a=\mathrm{SuPow}_a^{-1}\) and \(G=\mathrm{AdPow}_a=\mathrm{SdPow}_a^{-1}\) satisfy the Abel equation \(G(T(z))=G(z)+1\) A solution \(G\) of this equation for \(~T(z)\!=\!z^a~\) can be referred to as an Abelpower function. For \(a\!=\!2\), the complex map of the function \(\mathrm{AuPow}_a\) is shown in the figure at right.
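The transfer equation, the periodicity and the relation between the two superfunctions stated above are easy to verify numerically; the following sketch (not part of the article) checks them at a sample point:

```python
import cmath

a = 2.0

def supow(z):
    """SuPow_a(z) = exp(a^z)"""
    return cmath.exp(a ** z)

def sdpow(z):
    """SdPow_a(z) = exp(-a^z)"""
    return cmath.exp(-(a ** z))

z = 0.3 + 0.4j
P = 2j * cmath.pi / cmath.log(a)          # period, pure imaginary for a > 0

err_transfer = abs(supow(z + 1) - supow(z) ** a)   # F(z+1) = T(F(z)), T(w) = w^a
err_period = abs(supow(z + P) - supow(z))          # periodicity with period P
err_relation = abs(sdpow(z) - 1 / supow(z))        # SdPow = 1/SuPow
print(err_transfer, err_period, err_relation)
```

All three errors are at the level of floating-point round-off.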
These functions differ by a constant: \(\mathrm{AdPow}_a=\mathrm{AuPow}_a \pm P/2\) where \(P\) is the period of the superpower function defined above. The sign in this formula changes at the cut lines of the functions \(\mathrm{AuPow}_a\) and \(\mathrm{AdPow}_a\); the sets of cut lines are different for these two functions. Application of Superpower The superpower function is an important example for illustrating properties of superfunctions. Other superfunctions mentioned in the Table of superfunctions show properties similar to those of the functions SuPow and SdPow. This applies also to superfunctions that cannot be expressed through elementary functions. It would be interesting to construct a proof that, for the power function, SuPow and SdPow are the only non-trivial superfunctions with simple asymptotic behaviour, and that the other periodic superfunctions appear by displacement of the argument by a constant. Other meanings of the term superpower In physics, the term power denotes a transfer of energy; it refers to energy per unit time and can be measured in watts (\(\mathrm{W}=\mathrm{J/s}=\mathrm{kg\,m^2/s^3}\)). In this sense, power is the main parameter of an engine, of an electric plant, of an electric cable, or of a source of waves (highball mixer, laser, acoustic emitter, wave machine, etc.). For this reason, in TORI, only the meaning described in the previous sections is supposed to be used.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ... Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt{s} = 7$ and 13 TeV with ALICE (Elsevier, 2017-11) Two-particle correlations in relative azimuthal angle ($\Delta\varphi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ... The new Inner Tracking System of the ALICE experiment (Elsevier, 2017-11) The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ... Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ... Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE (Elsevier, 2017-11) The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV and $\sqrt{s_{\rm NN}} = 8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi(2S)$ nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... Jet-hadron correlations relative to the event plane at the LHC with ALICE (Elsevier, 2017-11) In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ... Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE (Elsevier, 2017-11) We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I came across this question as part of my self-study of abstract algebra and so I would prefer answers suitable for a beginner. First I established that the set $$M = \{a + b\sqrt{2}\mid a, b\in \mathbb{Z}, 5\mid a, 5 \mid b\}$$ is an ideal in the ring $R = \mathbb{Z}[\sqrt{2}]$ and this is more of a routine exercise. To prove that $M$ is a maximal ideal took me some time. The approach is to start with an ideal $N$ which properly contains $M$ i.e. $M \subset N$ and then show that $1 \in N$ so that $N = R$. Thus I start with an element $a + b\sqrt{2} \in N \setminus M$ so that at least one of $a, b$ is not divisible by $5$. Using this I show that norm $a^{2} - 2b^{2}$ is not divisible by $5$ and that is the only fact from number theory I used here. Now it is easy to show that there is an integer $k$ such that $k(a^{2} - 2b^{2}) \equiv 1\pmod{5}$. Thus we can see that $1 = k(a - b\sqrt{2})(a + b\sqrt{2}) - 5n \in N$ for some integer $n$ (note that $5n \in M \subset N$). Next $R$ is commutative with unity and hence $R/M$ is a field as $M$ is a maximal ideal of $R$. The field $R/M$ consists of all the cosets of type $a + b\sqrt{2} + M$ with $a, b$ taking values $0, 1, 2, 3, 4$ so that it has 25 elements. There is another way to look at this field and that is to consider the field $\mathbb{Z}_{5}$ and the polynomial $x^{2} - 2$ which is irreducible in $\mathbb{Z}_{5}[x]$ so that the quotient $\mathbb{Z}_{5}[x]/(x^{2} - 2)$ is a field and looking at its elements we see that it is isomorphic to $R/M$ considered earlier. My question is whether the second viewpoint can be used to directly infer the fact that $M$ is a maximal ideal in $R$?
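As a concrete complement to the argument (a brute-force sketch, not needed for the proof), one can verify both that $x^2 - 2$ has no root modulo $5$ and that every nonzero coset has a norm invertible modulo $5$:

```python
# x^2 ≡ 2 (mod 5) has no solution, so x^2 - 2 is irreducible over Z_5:
roots = [x for x in range(5) if (x * x - 2) % 5 == 0]

# Every nonzero coset a + b*sqrt(2) + M (a, b in {0,...,4}) has norm
# a^2 - 2b^2 nonzero mod 5, hence is invertible -- so R/M is a field
# with 5 * 5 = 25 elements:
norm_ok = all((a * a - 2 * b * b) % 5 != 0
              for a in range(5) for b in range(5) if (a, b) != (0, 0))
print(roots, norm_ok)  # -> [] True
```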
For weak or covariance stationarity of $\{Z_t\}$, you need the cross-covariance functions $\operatorname{cov}(A_t,B_{t+h})$ and $\operatorname{cov}(A_{t+h},B_{t})$ to be functions of $h$ alone, and not dependent on the choice of $t$. Some people call this property joint weak stationarity, meaning that $\{A_t\}$ and $\{B_t\}$ are individually weakly stationary processes and that the cross-covariance functions have the desired property. Note that the cross-covariance functions are $0$ when $\{A_t\}$ and $\{B_t\}$ are uncorrelated processes, meaning that $\operatorname{cov}(A_{t_1},B_{t_2}) = 0$ for all choices of $t_1$ and $t_2$, or in words, every random variable from the process $\{A_t\}$ is uncorrelated with every random variable from the process $\{B_t\}$. Independent processes are a subclass of uncorrelated processes. If $\{A_t\}$ and $\{B_t\}$ are uncorrelated weakly stationary processes, then their sum is a weakly stationary process. Answer to question in comment: In general, $\operatorname{cov}(A_{t+h},B_{t})$ is a function of $h$ and $t$, and so it is of course a function of $h$. What you need to determine is whether it is also a function of $t$. Example: Let $\Theta$ denote a random variable enjoying the property $$E[\cos(\Theta)] = E[\sin(\Theta)]=E[\cos(2\Theta)] = E[\sin(2\Theta)]=E[\cos(4\Theta)] = E[\sin(4\Theta)]=0$$ and define random processes $\{A_t \colon t \in \mathbb R\}$ and $\{B_t \colon t \in \mathbb R\}$ via $A_t = \cos(t+\Theta)$, $B_t = \cos(t+2\Theta)$. Then, $$E[A_t]=E[\cos(t+\Theta)]=E[\cos(t)\cos(\Theta)-\sin(t)\sin(\Theta)]= 0$$ and $$\begin{align}E[A_tA_{t+h}]&=E[\cos(t+\Theta)\cos(t+h+\Theta)]\\&=\frac{1}{2}E[\cos(2t+h+2\Theta)+\cos(h)]\\&=\frac{1}{2}E[\cos(2t+h)\cos(2\Theta)-\sin(2t+h)\sin(2\Theta)+\cos(h)]\\&=\frac{1}{2}\cos(h)\end{align}$$ showing that $\{A_t\}$ is weakly stationary.
A similar calculation showsthat $\{B_t\}$ is also weakly stationary.However, the processes are not necessarily jointly weakly stationary because$$\begin{align}\operatorname{cov}(A_{t},B_{t+h})&= E[A_{t}B_{t+h}]\\&=E[\cos(t+\Theta)\cos(t+h+2\Theta)]\\&=\frac{1}{2}E[\cos(2t+h+3\Theta)+\cos(h+\Theta)]\\&=\frac{1}{2}E[\cos(2t+h)\cos(3\Theta)-\sin(2t+h)\sin(3\Theta)+\cos(h+\Theta)]\end{align}$$ is a function of both $t$ and $h$ unless we makethe additional assumption that$E[\sin(3\Theta)]=E[\cos(3\Theta)]=0$. Returning to the original question, we note that the sum of weakly stationary random processes is not necessarily weakly stationary: additional assumptions such as joint weak stationarity are needed.
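To see this failure concretely, one can take a specific $\Theta$ satisfying all the stated moment conditions but with $E[\cos(3\Theta)] \ne 0$; the following sketch (an added illustration, not part of the original answer) computes the covariances exactly for a discrete $\Theta$ uniform on $\{0, 2\pi/3, 4\pi/3\}$:

```python
from math import cos, pi

# Theta uniform on {0, 2*pi/3, 4*pi/3}: E[cos(k*Theta)] and E[sin(k*Theta)]
# vanish for k = 1, 2, 4, but E[cos(3*Theta)] = 1 -- exactly the bad case.
thetas = [0.0, 2 * pi / 3, 4 * pi / 3]
E = lambda f: sum(f(th) for th in thetas) / len(thetas)

def auto_cov(t, h):   # cov(A_t, A_{t+h}) with A_t = cos(t + Theta); E[A_t] = 0
    return E(lambda th: cos(t + th) * cos(t + h + th))

def cross_cov(t, h):  # cov(A_t, B_{t+h}) with B_t = cos(t + 2*Theta)
    return E(lambda th: cos(t + th) * cos(t + h + 2 * th))

h = 0.7
vals_auto = [auto_cov(t, h) for t in (0.0, 1.0, 2.5)]    # all equal cos(h)/2
vals_cross = [cross_cov(t, h) for t in (0.0, 1.0, 2.5)]  # varies with t
print(vals_auto, vals_cross)
```

The autocovariance is the same for every $t$, while the cross-covariance clearly changes with $t$, so the pair is not jointly weakly stationary.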
$a=\frac1{\sqrt6}(1,1,-2)$ is a counterexample for both. (There are really only two cases to consider, $\langle(1,1,-2),(1,1,-2)\rangle$ and $\langle(1,1,-2),(1,-2,1)\rangle$.) Edit: More on the case $-1$: By the equality case of the Cauchy-Schwarz inequality, the only way to get $\langle a,a^\sigma\rangle=-1$ is if $a$ and $-a$ are permutations of each other. The only way that can happen is if $a$ is a permutation of $(v,-v)$ (if $n$ even) or of $(v,-v,0)$ (if $n$ odd), where $v\in\mathbb R^{\lfloor n/2\rfloor}$. We automatically get $\sum_i a_i = 0$ in either case, and can get $\|a\|=1$ just by scaling $v$ appropriately. (These cases were described in a now-deleted answer; the point is that these are the only ones.) Edit: More on the case $0$: For any $n\ge 4$, there are vectors $a$ with $\min |s(\sigma)|=0$, e.g., $a=\frac1{\sqrt2}(1,-1,0,0,\dotsc,0)$. (Just permute so that $a$ and $a^\sigma$ have disjoint supports. Geometrically this corresponds to the fact that the opposite edges of a regular tetrahedron (more generally, disjoint edges of a regular simplex) are perpendicular.) On the other hand, for any $n\ge 3$, there are vectors $a$ with $\min |s(\sigma)|>0$, such as $a=\frac1{\sqrt{n(n-1)}}(1,1,\dotsc,1,-n+1)$, for which $\min |s(\sigma)| = 1/(n-1)$. (Again, there are only two cases to consider.) (Geometrically such $a$ are the vertices of the regular simplex you get by projecting the standard basis vectors on the hyperplane $\sum_i x_i=0$ (and normalizing); permutations of coordinates give the symmetries of this simplex, so you can only get the inner product of a vertex with another vertex this way. So in fact in this case, $s(\sigma) = -1/(n-1)$ for any $\sigma$ other than the identity ( correction: for any $\sigma$ that moves the last coordinate)). Edit: For the question in comments, to determine $\sup_a \min_\sigma |\langle a,a^\sigma\rangle|$, here's a computation to show that it's at most $1/\sqrt{n-1}$. 
First we compute the average of the squares:\begin{align*}\frac1{n!} \sum_\sigma \langle a,a^\sigma\rangle^2&= \frac1{n!} \sum_\sigma \sum_i \sum_j a_i a_j a_{\sigma(i)} a_{\sigma(j)} \\&= \sum_i \sum_j a_i a_j \Big( \frac1{n!} \sum_\sigma a_{\sigma(i)} a_{\sigma(j)} \Big) \\&= \sum_i a_i^2 \Big( \frac1{n!} \sum_\sigma a_{\sigma(i)}^2 \Big) + \sum_i \sum_{j\ne i} a_i a_j \Big( \frac1{n!} \sum_\sigma a_{\sigma(i)} a_{\sigma(j)} \Big) \\&= \sum_i a_i^2 \Big( \frac1n \sum_k a_k^2 \Big) + \sum_i \sum_{j\ne i} a_i a_j \Big( \frac1{n(n-1)} \sum_k \sum_{\ell\ne k} a_k a_\ell \Big) \\&= \frac1n \Big( \sum_i a_i^2 \Big)^2 + \frac1{n(n-1)} \Big( \sum_i \sum_{j\ne i} a_i a_j \Big)^2 \\&= \frac1n \Big( \sum_i a_i^2 \Big)^2 + \frac1{n(n-1)} \Big( \Big(\sum_i a_i\Big)^2 - \sum_i a_i^2 \Big)^2 \\&= \frac1n + \frac1{n(n-1)} = \frac1{n-1}\end{align*}Thus$$ \min_\sigma |\langle a,a^\sigma\rangle|= \sqrt{\min_\sigma \langle a,a^\sigma\rangle^2}\le \sqrt{\frac1{n!} \sum_\sigma \langle a,a^\sigma\rangle^2}= \frac1{\sqrt{n-1}} $$I'm not sure whether this is sharp. (Well, you could shave off a tiny bit by excluding the identity from the average; I mean I'm not sure whether it's asymptotically sharp.)
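For small $n$ these claims can be confirmed by exhaustive enumeration; the sketch below (an added check, not part of the argument) verifies, for $n=5$, both the set of attained inner products and the average of their squares:

```python
from itertools import permutations
from math import isclose, sqrt

n = 5
# a = normalized (1, 1, ..., 1, -(n-1)): entries sum to 0, norm is 1
a = [1 / sqrt(n * (n - 1))] * (n - 1) + [-(n - 1) / sqrt(n * (n - 1))]

inner = lambda u, v: sum(x * y for x, y in zip(u, v))
vals = [inner(a, [a[s[i]] for i in range(n)]) for s in permutations(range(n))]

# Only two values occur: 1 (sigma fixes the last coordinate) and -1/(n-1)
distinct_ok = all(isclose(v, 1) or isclose(v, -1 / (n - 1)) for v in vals)

# The average of the squares over all n! permutations is exactly 1/(n-1)
avg_sq = sum(v * v for v in vals) / len(vals)
print(distinct_ok, avg_sq)
```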
Graphical Models and Structural Learning for Extremes Conditional independence, graphical models and sparsity are key notions for parsimonious models in high dimensions and for learning structural relationships in the data. The theory of multivariate and spatial extremes describes the risk of rare events through asymptotically justified limit models such as max-stable and multivariate Pareto distributions. Statistical modeling in this field has been limited to moderate dimensions so far, owing to complicated likelihoods and a lack of understanding of the underlying probabilistic structures. We introduce a general theory of conditional independence for multivariate Pareto distributions that makes it possible to define graphical models and sparsity for extremes. New parametric models can be built in a modular way, and statistical inference can be simplified to lower-dimensional margins. We define the extremal variogram, a new summary statistic that turns out to be a tree metric and therefore allows an underlying tree structure to be learned efficiently through Prim's algorithm. For a popular parametric class of multivariate Pareto distributions we show that, similarly to the Gaussian case, the sparsity pattern of a general graphical model can easily be read off from suitable inverse covariance matrices. This enables the definition of an extremal graphical lasso that enforces sparsity in the dependence structure. We illustrate the results with an application to flood risk assessment on the Danube river. This is joint work with Adrien Hitz. Preprint available at \texttt{https://arxiv.org/abs/1812.01734}. Building Deep Statistical Thinking for Data Science 2020: Privacy Protected Census, Gerrymandering, and Election The year 2020 will be a busy one for statisticians and, more generally, data scientists.
The US Census Bureau has announced that the data from the 2020 Census will be released under differential privacy (DP) protection, which in layperson's terms means adding noise to the data. While few would argue against protecting data privacy, many researchers, especially from the social sciences, are concerned about whether the right trade-offs between data privacy and data utility are being made. The DP protection also has a direct impact on redistricting, an issue that is already complicated enough with accurate counts, owing to the need to guard against excessive gerrymandering. The central statistical problem there is a rather unique one: how to determine whether a realization is an outlier with respect to a null distribution, when that null distribution itself cannot be fully determined? The 2020 US election will be another highly watched event, with many groups already busy making predictions. Will the lessons from predicting the 2016 US election be learned, or will the failures be repeated? This talk invites the audience on a journey of deep statistical thinking prompted by these questions, regardless of whether they have any interest in the US Census or politics. On the properties of $\Lambda$-quantiles We present a systematic treatment of $\Lambda$-quantiles, a family of generalized quantiles introduced in Frittelli et al. (2014) under the name of Lambda Value at Risk. We consider various possible definitions and derive their fundamental properties, mainly working under the assumption that the threshold function $\Lambda$ is nonincreasing. We refine some of the weak continuity results derived in Burzoni et al. (2017), showing that the weak continuity properties of $\Lambda$-quantiles are essentially similar to those of the usual quantiles. Further, we provide an axiomatic foundation for $\Lambda$-quantiles based on a locality property that generalizes a similar axiomatization of the usual quantiles based on the ordinal covariance property given in Chambers (2009).
We study scoring functions consistent with $\Lambda$-quantiles and as an extension of the usual quantile regression we introduce $\Lambda$-quantile regression, of which we provide two financial applications (joint work with Ilaria Peri). Variable selection for structured high-dimensional data using known and novel graph information Variable selection for structured high-dimensional covariates lying on an underlying graph has drawn considerable interest. However, most of the existing methods may not be scalable to high dimensional settings involving tens of thousands of variables lying on known pathways such as the case in genomics studies, and they assume that the graph information is fully known. This talk will focus on addressing these two challenges. In the first part, I will present an adaptive Bayesian shrinkage approach which incorporates known graph information through shrinkage parameters and is scalable to high dimensional settings (e.g., p~100,000 or millions). We also establish theoretical properties of the proposed approach for fixed and diverging p. In the second part, I will tackle the issue that graph information is not fully known. For example, the role of miRNAs in regulating gene expression is not well-understood and the miRNA regulatory network is often not validated. We propose an approach that treats unknown graph information as missing data (i.e. missing edges), introduce the idea of imputing the unknown graph information, and define the imputed information as the novel graph information. In addition, we propose a hierarchical group penalty to encourage sparsity at both the pathway level and the within-pathway level, which, combined with the imputation step, allows for incorporation of known and novel graph information. The methods are assessed via simulation studies and are applied to analyses of cancer data. 
Nonregular and Minimax Estimation of Individualized Thresholds in High Dimension with Binary Responses Given a large number of covariates $\mathbf{Z}$, we consider the estimation of a high-dimensional parameter $\boldsymbol{\theta}$ in an individualized linear threshold $\boldsymbol{\theta}^T\mathbf{Z}$ for a continuous variable $X$, which minimizes the disagreement between $\mathrm{sign}(X-\boldsymbol{\theta}^T\mathbf{Z})$ and a binary response $Y$. While the problem can be formulated in the M-estimation framework, minimizing the corresponding empirical risk function is computationally intractable due to the discontinuity of the sign function. Moreover, estimating $\boldsymbol{\theta}$ even in the fixed-dimensional setting is known to be a nonregular problem leading to nonstandard asymptotic theory. To tackle the computational and theoretical challenges in the estimation of the high-dimensional parameter $\boldsymbol{\theta}$, we propose an empirical risk minimization approach based on a regularized smoothed non-convex loss function. The Fisher consistency of the proposed method is guaranteed as the bandwidth of the smoothed loss is shrunk to 0. Statistically, we show that the finite-sample error bound for estimating $\boldsymbol{\theta}$ in the $\ell_2$ norm is $(s\log d/n)^{\beta/(2\beta+1)}$, where $d$ is the dimension of $\boldsymbol{\theta}$, $s$ is the sparsity level, $n$ is the sample size and $\beta$ is the smoothness of the conditional density of $X$ given the response $Y$ and the covariates $\mathbf{Z}$. The convergence rate is nonstandard and slower than that in classical Lasso problems. Furthermore, we prove that the resulting estimator is minimax rate optimal up to a logarithmic factor. Lepski's method is developed to achieve adaptation to the unknown sparsity $s$ and smoothness $\beta$. Computationally, an efficient path-following algorithm is proposed to compute the solution path. We show that this algorithm achieves a geometric rate of convergence for computing the whole path.
Finally, we evaluate the finite sample performance of the proposed estimator in simulation studies and a real data analysis from the ChAMP (Chondral Lesions And Meniscus Procedures) Trial. On Khintchine's Inequality for Statistics In complex estimation and hypothesis testing settings, it may be impossible to compute p-values or construct confidence intervals using classical analytic approaches like asymptotic normality. Instead, one often relies on randomization and resampling procedures such as the bootstrap or permutation test. But these approaches suffer from the computational burden of large scale Monte Carlo runs. To remove this burden, we develop analytic methods for hypothesis testing and confidence intervals by specifically considering the discrete finite sample distributions of the randomized test statistic. The primary tool we use to achieve such results is Khintchine's inequality and its extensions and generalizations. More information about this seminar will be added as soon as possible.
Given a modulus $n\in\mathbb{N}$ and another natural number $0<x<n$, what's an efficient algorithm to enumerate all pairs of natural numbers $(a,b)\in\mathbb{N}^2$ such that both $a$ and $b$ are less than $n$ and $$ab\equiv x \mod n\text?$$ Let's solve the problem for a specific $a$. Observe that $ab \equiv x \pmod{n}$ if and only if there exists an $m \in \mathbb{Z}$ such that $\qquad ab + mn = x$ By Bézout's theorem, the above has solutions if and only if $(a, n) \mid x$ (the "if" direction is the theorem, the "only if" direction is trivial). One of those solutions can be found by the extended Euclidean algorithm, and all the others can be obtained by observing that if $(b, m)$ is a solution, then $\qquad \left( b + \frac{n}{(a, n)}, m - \frac{a}{(a, n)} \right) $ is also a solution. Repeating the above process for all $a \in \mathbb{Z}/n\mathbb{Z}$ yields all the pairs you are looking for. While it's rather naive, I don't think you can get much better than that if you're looking for all solutions.
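The per-$a$ procedure above can be sketched in Python (the function name `mod_pairs` is mine; `pow(c, -1, m)` computes the modular inverse, internally via the extended Euclidean algorithm):

```python
from math import gcd

def mod_pairs(x, n):
    """Enumerate all pairs (a, b), 0 <= a, b < n, with a*b ≡ x (mod n).

    For a fixed a, solutions in b exist iff g = gcd(a, n) divides x;
    one solution comes from the extended Euclidean algorithm, and all
    others differ from it by multiples of n // g.
    """
    pairs = []
    for a in range(1, n):   # a = 0 gives a*b ≡ 0, never x, since 0 < x < n
        g = gcd(a, n)
        if x % g:
            continue
        step = n // g
        # Solve (a/g) * b ≡ x/g (mod n/g); a/g is invertible mod n/g.
        b0 = (x // g) * pow(a // g, -1, step) % step
        pairs.extend((a, b) for b in range(b0, n, step))
    return pairs
```

For example, `mod_pairs(4, 12)` agrees with a brute-force scan over all $12^2$ products.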
Suppose $G$ is not $k$-edge connected. Let the minimum number of edges required to disconnect $G$ be $m$; of course, $m<k$, so after removing $m$ edges from $G$ the graph becomes disconnected. Let this disconnected graph be $H$. Now the disconnected graph $H$ has $\binom{n-k-1}{2}+\binom{k+1}{2}+(k-m)$ edges, hence by the result you proved it must contain a component $M$ of size less than $k+1$; say its size is $p$, where $1\leq p\leq k$. Hence the degree of each vertex in that component $M$ of $H$ is at most $p-1$, whereas the degree of each vertex of $M$ in the original graph $G$ was at least $k$. Note that if there was an edge $e$ between any two vertices $u$, $v$ of $M$ in the original graph $G$, that edge was not removed to get $H$: as the graph $M$ is connected, removing that edge $e$ would not help to disconnect $G$. In other words, if $e$ does not belong to the edge set of $H$, then adding $e$ back to $H$ would leave the graph disconnected, contradicting the minimality of $m$. Hence at least $(k-(p-1))\times p$ edges have been removed from the original graph $G$. Now $[(k-(p-1))\times p]-k=kp-k-p^2+p=(k-p)(p-1) \geq 0$. Hence $(k-(p-1))\times p\geq k$. But we know we have removed $m$ edges from the original graph $G$ to obtain $H$ and $m<k$, hence a contradiction. So we have proved that $G$ is $k$-edge connected.
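The algebraic step in the counting argument can be sanity-checked numerically (this check is mine, not part of the proof):

```python
# Verify (k-(p-1))*p - k == (k-p)*(p-1) >= 0 for all 1 <= p <= k,
# which is the inequality showing at least k edges must have been removed.
for k in range(1, 40):
    for p in range(1, k + 1):
        diff = (k - (p - 1)) * p - k
        assert diff == (k - p) * (p - 1)  # the factorization used above
        assert diff >= 0                  # hence (k-(p-1))*p >= k > m
```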
We are asked to use natural deduction to prove some stuff. Problem is, without De Morgan's law, which I think belongs in transformational proof, lots of things seem difficult to prove. Would using De Morgan's laws be a violation of "In your natural deduction proofs, use only natural deduction inference rules; i.e., do not use any transformational laws."? If so, how can I work around De Morgan? The usual natural deduction introduction and elimination rules for $\land$ and $\lor$, together with the classical rules for negation, allow you to derive De Morgan's laws, i.e. to show that from $\neg(\varphi \land \psi)$ you can derive $\neg\varphi \lor \neg\psi$, and vice versa, and the duals. Each of the four proofs is easy and no more than about a dozen lines [Fitch style] or the equivalent [Gentzen style]. They are routine examples, or exercises for beginners. So it is never really harder to prove something from natural deduction first principles alone rather than from the natural deduction rules augmented with De Morgan's laws as derived rules; it is just a bit longer. Whenever you want to invoke one of De Morgan's laws, just slot in the standard proof routine using the basic natural deduction rules to derive the required instance. What's the problem? Eric, my advice would be to learn the transformational laws expressed in natural deduction. Then, whenever you feel a transformational law is needed you can apply the natural deduction proof of said rule. 
For instance, here is an example of $(\lnot\phi\lor\lnot\psi) \to \lnot(\phi\land\psi)$: $$ \frac{\displaystyle \frac{\displaystyle \lnot\phi \lor \lnot\psi \quad \frac{\displaystyle \frac{\displaystyle \frac{}{\phi\land\psi}\scriptstyle (2)} {\phi} \quad \frac{}{\lnot\phi}\scriptstyle (1)} {\bot} \quad \frac{\displaystyle \frac{\displaystyle \frac{}{\phi\land\psi}\scriptstyle(2)} {\psi} \quad \frac{}{\lnot\psi}\scriptstyle(1)} {\bot} } {\bot}\scriptstyle (1) } {\lnot(\phi\land\psi)}\scriptstyle (2) $$
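The derivations have a simple semantic counterpart; a quick exhaustive truth-table check of both De Morgan laws:

```python
from itertools import product

# Check both De Morgan equivalences on every truth assignment.
for phi, psi in product([False, True], repeat=2):
    assert (not (phi and psi)) == ((not phi) or (not psi))
    assert (not (phi or psi)) == ((not phi) and (not psi))
```

Of course this checks only semantic validity; the syntactic derivations themselves are the four short proofs described above.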
Contents Practice Question on Computing the Fourier Transform of a Continuous-time Signal Compute the Fourier transform of the signal $ x(t)= \sum_{k=-\infty}^\infty f(t+2k) $, where $ f(t)=\left\{ \begin{array}{ll} t+1, & \text{ for } -1 \leq t <0, \\ 1-t, & \text{ for } 0 \leq t <1, \\ 0, & \text{ else}. \end{array} \right. $ You will receive feedback from your instructor and TA directly on this page. Other students are welcome to comment/discuss/point out mistakes/ask questions too! Answer 1 Write it here. Answer 2 Write it here. Answer 3 Write it here.
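A numerical cross-check (not a substitute for the analytical answer): since $x(t)$ is periodic with period $T=2$, its Fourier transform is an impulse train at $\omega = k\pi$ with weights $2\pi a_k$, where $a_k$ are the Fourier series coefficients; for this triangle wave one expects $a_k = \tfrac12\operatorname{sinc}^2(k/2)$ with $\operatorname{sinc}(u)=\sin(\pi u)/(\pi u)$. The coefficients can be checked by direct integration:

```python
import cmath
import math

def f(t):
    # One period of the triangular pulse defined in the problem.
    if -1 <= t < 0:
        return t + 1
    if 0 <= t < 1:
        return 1 - t
    return 0.0

def coeff(k, steps=20000):
    # a_k = (1/T) * integral over one period of f(t)*exp(-j*k*pi*t), T = 2.
    dt = 2.0 / steps
    s = sum(f(-1 + i * dt) * cmath.exp(-1j * k * math.pi * (-1 + i * dt))
            for i in range(steps))
    return s * dt / 2.0

# Expected: a_0 = 1/2, a_1 = 2/pi**2, a_2 = 0 (even harmonics vanish).
```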
Every computable function can be expressed in continuation-passing style, in which all calls are tail calls. The trick is to add a "continuation" parameter to every function. Instead of making a non-tail call to a function, you make a tail call to that function with a modified continuation, describing what to do with the result. All instances where a value is directly returned (such as recursion base cases) are replaced by calling the continuation in tail position with the result as an argument. Transformation from the lambda calculus into CPS can be done mechanically. So, $TR=R$. EDIT: addressing the comments about higher-order functions: Tail calls are almost always discussed in the context of higher-order functions and the lambda calculus. So the problem is, what precisely is our definition of $TR$? You can certainly add a wrapper around a CPS function to give it the type $\mathbb{N}^n \to \mathbb{N}$, by giving it an initial continuation of $\lambda k \ldotp k$. If higher-order functions are allowed internally, then the result that $TR=R$ still holds. If higher-order functions aren't allowed internally, what is our definition of $TR$? If we define it in the same way as $PR$, then it is going to contain only primitive recursive problems by definition (since it's just the restriction of $PR$ to tail recursion). If we add $\mu$ for infinite search, I think we're just going to get $R$, since we can encode higher-order functions using integers. So, I'm not sure there's a meaningful question to be asked in the non-higher-order case. EDIT 2: As for the class of first-order functions that only allow tail recursion, with Constant, Successor, Projection and Composition functions, and extension by tail recursion: h(x1, ..., xn) = if c(x1, ..., xn) = 0 then h(g1(x1), ..., gn(xn)) else f(x1, ..., xn) where $c$, $g_i$ and $f$ are all tail-recursive functions, I think we can prove that it's Turing complete, by solving Post's Correspondence Problem, which is undecidable but semi-decidable: Assume that we've got nice functions for dealing with strings encoded as integers, with concatenation, etc. Let $pcpInst(k, n)$ be a function which takes an integer $k$ and returns the $k$th string over the alphabet $\{1, \ldots, n \}$. Let $c(k, x_1, \ldots, x_n)$ be a function, where $k$ is an integer, and each $x_i$ is a pair containing two strings over a binary alphabet. This function does the following: Computes $k_1 \cdots k_p = pcpInst(k,n)$, the $k$th possible PCP solution indices. Constructs $s_1=\pi_1(x_{k_1}) \cdots \pi_1(x_{k_p})$. This is the string we get by concatenating the first string of the arguments indexed by our $k_i$ sequence. We define $s_2$ with $\pi_2$ similarly. Returns $0$ if $s_1 \neq s_2$, and $1$ otherwise. Now, we'll define our function to solve a PCP instance with $n$ strings: $h(k, x_1, \ldots, x_n) = h(S(k), x_1, \ldots, x_n)$ if $c(k, x_1, \ldots, x_n) = 0 $ $h(k, x_1, \ldots, x_n) = 0$ otherwise Now we define $h'(x_1, \ldots, x_n) = h(0, x_1, \ldots, x_n)$. It is easy to see that $h'(x_1, \ldots, x_n)$ returns 0 if and only if there is a solution to the correspondence problem defined by pairs of strings $x_1, \ldots, x_n$. If there is a solution, we eventually iterate to it by increasing $k$, and return $0$ when our $c$ function returns 1. If there is no solution, we never return. The trick here is ensuring that $c$ is itself tail-recursive. I am fairly confident that it is, but proving so would be tedious. Since it is performing simple string manipulations and equality checks, I would be very surprised if it were not tail-recursive.
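The CPS trick described above can be illustrated with a minimal Python sketch (my example, not from the original answer; note that CPython does not eliminate tail calls, so this only shows the shape of the transformation):

```python
# Direct style: the recursive call is NOT a tail call, because the
# multiplication happens after it returns.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# CPS: every call is a tail call; the pending multiplication moves
# into the continuation argument k.
def fact_cps(n, k):
    if n == 0:
        return k(1)                          # base case: hand result to k
    return fact_cps(n - 1, lambda r: k(n * r))

# Wrapping with the identity continuation recovers the type N -> N.
def fact_wrapped(n):
    return fact_cps(n, lambda r: r)
```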
V. Gitman and J. D. Hamkins, “A model of the generic Vopěnka principle in which the ordinals are not Mahlo,” Archive for Mathematical Logic, pp. 1-21, 2018. @ARTICLE{GitmanHamkins2018:A-model-of-the-generic-Vopenka-principle-in-which-the-ordinals-are-not-Mahlo, author = {Gitman, Victoria and Hamkins, Joel David}, year = {2018}, title = {A model of the generic Vopěnka principle in which the ordinals are not Mahlo}, journal = {Archive for Mathematical Logic}, issn = {0933-5846}, doi = {10.1007/s00153-018-0632-5}, month = {5}, pages = {1--21}, eprint = {1706.00843}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1xT}, abstract = {The generic Vopěnka principle, we prove, is relatively consistent with the ordinals being non-Mahlo. Similarly, the generic Vopěnka scheme is relatively consistent with the ordinals being definably non-Mahlo. Indeed, the generic Vopěnka scheme is relatively consistent with the existence of a Δ2-definable class containing no regular cardinals. In such a model, there can be no Σ2-reflecting cardinals and hence also no remarkable cardinals. This latter fact answers negatively a question of Bagaria, Gitman and Schindler.}, } Abstract. The generic Vopěnka principle, we prove, is relatively consistent with the ordinals being non-Mahlo. Similarly, the generic Vopěnka scheme is relatively consistent with the ordinals being definably non-Mahlo. Indeed, the generic Vopěnka scheme is relatively consistent with the existence of a $\Delta_2$-definable class containing no regular cardinals. In such a model, there can be no $\Sigma_2$-reflecting cardinals and hence also no remarkable cardinals. This latter fact answers negatively a question of Bagaria, Gitman and Schindler. The Vopěnka principle is the assertion that for every proper class of first-order structures in a fixed language, one of the structures embeds elementarily into another. 
This principle can be formalized as a single second-order statement in Gödel–Bernays set theory GBC, and it has a variety of useful equivalent characterizations. For example, the Vopěnka principle holds precisely when for every class $A$, the universe has an $A$-extendible cardinal, and it is also equivalent to the assertion that for every class $A$, there is a stationary proper class of $A$-extendible cardinals (see theorem 6 in my paper The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme). In particular, the Vopěnka principle implies that ORD is Mahlo: every class club contains a regular cardinal and indeed, an extendible cardinal and more. To define these terms, recall that a cardinal $\kappa$ is extendible, if for every $\lambda>\kappa$, there is an ordinal $\theta$ and an elementary embedding $j:V_\lambda\to V_\theta$ with critical point $\kappa$. It turns out that, in light of the Kunen inconsistency, this weak form of extendibility is equivalent to a stronger form, where one insists also that $\lambda<j(\kappa)$; but there is a subtle issue about this that comes up with the virtual forms of these axioms, where the virtual weak and virtual strong forms are no longer equivalent. Relativizing to a class parameter, a cardinal $\kappa$ is $A$-extendible for a class $A$, if for every $\lambda>\kappa$, there is an elementary embedding $$j:\langle V_\lambda, \in, A\cap V_\lambda\rangle\to \langle V_\theta,\in,A\cap V_\theta\rangle$$ with critical point $\kappa$, and again one may equivalently insist also that $\lambda<j(\kappa)$. Every such $A$-extendible cardinal is therefore extendible and hence inaccessible, measurable, supercompact and more. These are amongst the largest large cardinals. In the first-order ZFC context, set theorists commonly consider a first-order version of the Vopěnka principle, which we call the Vopěnka scheme, the scheme making the Vopěnka assertion of each definable class separately, allowing parameters. 
That is, the Vopěnka scheme asserts, of every formula $\varphi$, that for any parameter $p$, if $\{\,x\mid \varphi(x,p)\,\}$ is a proper class of first-order structures in a common language, then one of those structures elementarily embeds into another. The Vopěnka scheme is naturally stratified by the assertions $\text{VP}(\Sigma_n)$, for the particular natural numbers $n$ in the meta-theory, where $\text{VP}(\Sigma_n)$ makes the Vopěnka assertion for all $\Sigma_n$-definable classes. Using the definable $\Sigma_n$-truth predicate, each assertion $\text{VP}(\Sigma_n)$ can be expressed as a single first-order statement in the language of set theory. In my previous paper, The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme, I proved that the Vopěnka principle is not provably equivalent to the Vopěnka scheme, if consistent, although they are equiconsistent over GBC and furthermore, the Vopěnka principle is conservative over the Vopěnka scheme for first-order assertions. That is, over GBC the two versions of the Vopěnka principle have exactly the same consequences in the first-order language of set theory. In this article, Gitman and I are concerned with the virtual forms of the Vopěnka principles. The main idea of virtualization, due to Schindler, is to weaken elementary-embedding existence assertions to the assertion that such embeddings can be found in a forcing extension of the universe. Gitman and Schindler had emphasized that the remarkable cardinals, for example, instantiate the virtualized form of supercompactness via the Magidor characterization of supercompactness. This virtualization program has now been undertaken with various large cardinals, leading to fruitful new insights. 
Carrying out the virtualization idea with the Vopěnka principles, we define the generic Vopěnka principle to be the second-order assertion in GBC that for every proper class of first-order structures in a common language, one of the structures admits, in some forcing extension of the universe, an elementary embedding into another. That is, the structures themselves are in the class in the ground model, but you may have to go to the forcing extension in order to find the elementary embedding. Similarly, the generic Vopěnka scheme, introduced by Bagaria, Gitman and Schindler, is the assertion (in ZFC or GBC) that for every first-order definable proper class of first-order structures in a common language, one of the structures admits, in some forcing extension, an elementary embedding into another. On the basis of their work, Bagaria, Gitman and Schindler had asked the following question: Question. If the generic Vopěnka scheme holds, then must there be a proper class of remarkable cardinals? There seemed good reason to expect an affirmative answer, even assuming only $\text{gVP}(\Sigma_2)$, based on strong analogies with the non-generic case. Specifically, in the non-generic context Bagaria had proved that $\text{VP}(\Sigma_2)$ was equivalent to the existence of a proper class of supercompact cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form $\text{gVP}(\Sigma_2)$ was equiconsistent with a proper class of remarkable cardinals, the virtual form of supercompactness. Similarly, higher up, in the non-generic context Bagaria had proved that $\text{VP}(\Sigma_{n+2})$ is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form $\text{gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals. 
But further, they achieved direct implications, with an interesting bifurcation feature that specifically suggested an affirmative answer to the question above. Namely, what they showed at the $\Sigma_2$-level is that if there is a proper class of remarkable cardinals, then $\text{gVP}(\Sigma_2)$ holds, and conversely if $\text{gVP}(\Sigma_2)$ holds, then there is either a proper class of remarkable cardinals or a proper class of virtually rank-into-rank cardinals. And similarly, higher up, if there is a proper class of virtually $C^{(n)}$-extendible cardinals, then $\text{gVP}(\Sigma_{n+2})$ holds, and conversely, if $\text{gVP}(\Sigma_{n+2})$ holds, then either there is a proper class of virtually $C^{(n)}$-extendible cardinals or there is a proper class of virtually rank-into-rank cardinals. So in each case, the converse direction achieves a disjunction with the target cardinal and the virtually rank-into-rank cardinals. But since the consistency strength of the virtually rank-into-rank cardinals is strictly stronger than the generic Vopěnka principle itself, one can conclude on consistency-strength grounds that it isn’t always relevant, and for this reason, it seemed natural to inquire whether this second possibility in the bifurcation could simply be removed. That is, it seemed natural to expect an affirmative answer to the question, even assuming only $\text{gVP}(\Sigma_2)$, since such an answer would resolve the bifurcation issue and make a tighter analogy with the corresponding results in the non-generic/non-virtual case. In this article, however, we shall answer the question negatively. The details of our argument seem to suggest that a robust analogy with the non-generic/non-virtual principles is achieved not with the virtual $C^{(n)}$-cardinals, but with a weakening of that property that drops the requirement that $\lambda<j(\kappa)$. 
Indeed, our results seem to offer an illuminating resolution of the bifurcation aspect of the results we mentioned from Bagaria, Gitman and Schindler, because they provide outright virtual large-cardinal equivalents of the stratified generic Vopěnka principles. Because the resulting virtual large cardinals are not necessarily remarkable, however, our main theorem shows that it is relatively consistent with even the full generic Vopěnka principle that there are no $\Sigma_2$-reflecting cardinals and therefore no remarkable cardinals. Main Theorem. It is relatively consistent that GBC and the generic Vopěnka principle hold, yet ORD is not Mahlo. It is relatively consistent that ZFC and the generic Vopěnka scheme hold, yet ORD is not definably Mahlo, and not even $\Delta_2$-Mahlo. In such a model, there can be no $\Sigma_2$-reflecting cardinals and therefore also no remarkable cardinals. For more, go to the article: V. Gitman and J. D. Hamkins, “A model of the generic Vopěnka principle in which the ordinals are not Mahlo,” Archive for Mathematical Logic, pp. 1-21, 2018.
Solve for $x$ I have an equation that I have been working on solving; I know the solution, but I cannot get to it myself. Almost every simplification I do reverts back to a previous step. Can anyone show me how to solve for $x$ in this equation? Equation: $$\log_6(2x-3)+\log_6(x+5)=\log_3x$$ Solution: $$x ≅ \frac{3347}{2000} ≅ 1.6735$$ Note: upon further analysis of the answer, while close, it does not seem to be the exact solution. What I Have Tried So Far $$\log_6(2x-3) + \log_6(x + 5) = \log_3x$$ $$\frac{\log(2x-3)}{\log6} + \frac{\log(x + 5)}{\log6} = \frac{\log x}{\log3}$$ $$\log3 \cdot \log(2x-3) + \log3 \cdot \log(x + 5) = \log6 \cdot \log x$$ $$\log3 \cdot \log \left[(2x - 3)(x + 5)\right] = \log6 \cdot \log x$$ $$\frac{\log \left[(2x - 3)(x + 5)\right]}{\log_3 10} = \frac{\log6}{\log_x10}$$ $$\log_x10 \cdot \log \left[(2x - 3)(x + 5)\right] = \log_3 10 \cdot \log6$$ $$\log_x \left[(2x - 3)(x + 5)\right] = \log_3 6$$ $$\log_x3 \cdot \log_x \left[(2x - 3)(x + 5)\right] = \frac{\log_3 6}{\log_3 x}$$ $$\log_x \left[(2x - 3)(x + 5)\right]^{\ \log_x3} = \log_x 6$$ $$\left[(2x - 3)(x + 5)\right]^{\ \log_x3} = 6$$ $$(2x - 3)(x + 5) = x^{\log_3 6}$$ I know these steps aren't really working towards the solution at points; I was sort of just playing around with the equation. Regardless, I really don't know how to go about moving forward from here.
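Since the algebraic manipulations keep circling back on themselves, one can at least pin the root down numerically; a bisection sketch (the bracket $[1.51, 3]$ is chosen by inspection, since the domain requires $x > 3/2$):

```python
import math

# Root of g(x) = log_6((2x-3)(x+5)) - log_3(x), i.e. of the original equation.
def g(x):
    return (math.log((2 * x - 3) * (x + 5)) / math.log(6)
            - math.log(x) / math.log(3))

lo, hi = 1.51, 3.0
assert g(lo) * g(hi) < 0      # sign change brackets a root
for _ in range(100):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2          # approximately 1.6735, as quoted above
```

This confirms the quoted value $x \approx 1.6735$ is a close but not exact decimal for the root.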
With the help of Raymond Manzoni and Greg Martin I was able to derive an explicit formula for the number of primes of the form $4n+3$ in terms of (sums of) sums of Riemann's $R$ functions over roots of Riemann's $\zeta$ resp. Dirichlet $\beta$ function: \begin{align*} \pi^*(x;4,3)&=\sum_{k=0}^\infty 2^{-k-1}\left( \operatorname{R}(x^{1/2^{k}})-\sum_{\rho_\zeta} \operatorname{R}(x^{\rho_\zeta/2^k}) +\sum_{\rho_\beta} \operatorname{R}(x^{\rho_\beta/2^k}) \right) \end{align*} Ilya helped a lot to derive a formula for summing over General Functions of Primes $$ \sum_{p\le x}f(p)=\int_2^x f(t) d(\pi(t))\tag{1} $$ and applying this to Prime $\zeta$ function this simplifies (haha) to $$ P_{x;4,1}(r)+P_{x;4,3}(r)= \sum_{p<\color{red}x} \frac{1}{p^{ir}} =\sum_{n=1}^{\infty}\frac{ \mu (n)}{n}\sum_{z\in\{1,\rho\}}(-1)^{1-\delta_{1z}} \left[ {\rm li}(t^{\frac zn-ir}) \right]^{\color{red}x}_2 $$ So I think it's possible to combine these two partial results and come up with something $$ P_{x;4,3}(r)=\sum_{k=0}^\infty 2^{-k-1} \sum_{n=1}^{\infty}\frac{ \mu (n)}{n}\sum_{z\in\{1,\rho(\zeta),\rho(\beta) \}/2^k}\alpha(z) \left[ {\rm li}(t^{\frac zn-ir}) \right]^{\color{red}x}_2,\tag{2} $$ where $\alpha(z)=\cases{\phantom{-} 1 ,\text{if $z=1/2^k$ or $z=\rho_\beta/2^k$},\\ -1 , \text{if $z=\rho_\zeta/2^k$}}$. But I also found another, pretty simple, way to represent $P_{x;4,3}(r)$: $$ P_{x;4,3}(r)= \sum_{p<x} \frac12\left(1+ie^{i2\pi\frac{p}4}\right)p^{ir}, $$ where $\left(1+ie^{i2\pi\frac{p}4}\right)$ just cancels for primes of the form $4n+1$. We have to apply $(1)$ and insert $$ \pi(x) = \operatorname{R}(x^1) - \sum_{\rho(\zeta)}\operatorname{R}(x^{\rho(\zeta)}) \tag{3} $$ to get something comparable to $(2)$, but the roots of Dirichlet's $\beta$ function wouldn't obviously show up here. Is it possible to show that the effect of $\rho(\beta)$ in $(2 )$ can be condensed to $\frac12\left(1+ie^{i2\pi\frac{p}4}\right)$?
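The cancellation claim is easy to verify numerically; a quick check (helper names are mine) that $\tfrac12\left(1+ie^{i2\pi\frac{p}4}\right)$ equals $1$ for odd primes $p\equiv 3 \pmod 4$ and $0$ for $p\equiv 1 \pmod 4$:

```python
import cmath
import math

def is_prime(n):
    # Trial division is enough for this small range.
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def weight(p):
    # The factor (1 + i*e^{i*2*pi*p/4}) / 2 from the formula above.
    return (1 + 1j * cmath.exp(2j * math.pi * p / 4)) / 2

for p in range(3, 500):
    if is_prime(p):
        expected = 1.0 if p % 4 == 3 else 0.0
        assert abs(weight(p) - expected) < 1e-9
```

(The prime $p=2$ is excluded: there the factor is $(1-i)/2$, neither $0$ nor $1$.)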
Electronic Journal of Probability Electron. J. Probab. Volume 1 (1996), paper no. 3, 19 pp. Eigenvalue Expansions for Brownian Motion with an Application to Occupation Times Abstract Let $B$ be a Borel subset of $R^d$ with finite volume. We give an eigenvalue expansion for the transition densities of Brownian motion killed on exiting $B$. Let $A_1$ be the time spent by Brownian motion in a closed cone with vertex $0$ until time one. We show that $\lim_{u\to 0} \log P^0(A_1 < u) /\log u = 1/\xi$ where $\xi$ is defined in terms of the first eigenvalue of the Laplacian in a compact domain. Eigenvalues of the Laplacian in open and closed sets are compared. Article information Source Electron. J. Probab., Volume 1 (1996), paper no. 3, 19 pp. Dates Accepted: 31 January 1996 First available in Project Euclid: 25 January 2016 Permanent link to this document https://projecteuclid.org/euclid.ejp/1453756466 Digital Object Identifier doi:10.1214/EJP.v1-3 Mathematical Reviews number (MathSciNet) MR1386295 Zentralblatt MATH identifier 0891.60079 Subjects Primary: 60J65: Brownian motion [See also 58J65] Secondary: 60J35: Transition functions, generators and resolvents [See also 47D03, 47D07] 60J45: Probabilistic potential theory [See also 31Cxx, 31D05] Rights This work is licensed under a Creative Commons Attribution 3.0 License. Citation Bass, Richard; Burdzy, Krzysztof. Eigenvalue Expansions for Brownian Motion with an Application to Occupation Times. Electron. J. Probab. 1 (1996), paper no. 3, 19 pp. doi:10.1214/EJP.v1-3. https://projecteuclid.org/euclid.ejp/1453756466
Search Now showing items 1-10 of 166 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... 
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
Ordinal Sum of Powers Theorem Let $x$, $y$, and $z$ be ordinals. Then: $x^y \times x^z = x^{y + z}$ Proof The proof shall proceed by Transfinite Induction on $z$. Basis for the Induction $x^0 = 1$ for all $x$ \(\displaystyle x^y \times x^0\) \(=\) \(\displaystyle x^y\) since $x^0 = 1$ \(\displaystyle \) \(=\) \(\displaystyle x^{y + 0}\) by Ordinal Addition by Zero This proves the basis for the induction. $\Box$ Induction Step Suppose that $x^y \times x^z = x^{y + z}$. Then: \(\displaystyle x^y \times x^{z^+}\) \(=\) \(\displaystyle x^y \times \left({ x^z \times x }\right)\) definition of ordinal exponentiation \(\displaystyle \) \(=\) \(\displaystyle \left({ x^y \times x^z }\right) \times x\) by Ordinal Multiplication is Associative \(\displaystyle \) \(=\) \(\displaystyle x^{y + z} \times x\) by Inductive Hypothesis \(\displaystyle \) \(=\) \(\displaystyle x^{\left({y + z}\right)^+}\) definition of ordinal exponentiation \(\displaystyle \) \(=\) \(\displaystyle x^{\left({y + z^+}\right)}\) definition of ordinal addition This proves the induction step. $\Box$ Limit Case Suppose that $\forall w \in z: x^y \times x^w = x^{y + w}$ for limit ordinal $z$. 
\(\displaystyle \forall w \in z: \ \ \) \(\displaystyle \left({x^y \times x^w}\right)\) \(\le\) \(\displaystyle \left({x^y \times x^z}\right)\) by Membership is Left Compatible with Ordinal Multiplication \(\displaystyle \implies \ \ \) \(\displaystyle \bigcup_{w \in z} \left({ x^y \times x^w }\right)\) \(\le\) \(\displaystyle \left({ x^y \times x^z }\right)\) by Indexed Union Subset Conversely: \(\displaystyle w \in \left({ x^y \times x^z }\right)\) \(\implies\) \(\displaystyle \exists u \in x^z: w \in \left({ x^y \times u }\right)\) by Ordinal is Less than Ordinal times Limit \(\displaystyle \) \(\implies\) \(\displaystyle \exists v \in z: \exists u \in x^v: w \in \left({ x^y \times u }\right)\) by Ordinal is Less than Ordinal to Limit Power But this means that $u$ is bounded above by $x^v$ for some $v \in z$. Thus there exists a $v \in z$ such that: $w \le \left({ x^y \times x^v }\right)$ By Supremum Inequality for Ordinals, it follows that: $\left({ x^y \times x^z }\right) \le \bigcup_{w \in z} \left({ x^y \times x^w }\right)$ \(\displaystyle \left({ x^y \times x^z }\right)\) \(=\) \(\displaystyle \bigcup_{w \in z} \left({ x^y \times x^w }\right)\) Definition of Set Equality \(\displaystyle \) \(=\) \(\displaystyle \bigcup_{w \in z} x^{y+w}\) Inductive Hypothesis for the Limit Case $\Box$ \(\displaystyle \forall w \in z: \ \ \) \(\displaystyle y + w\) \(\le\) \(\displaystyle y + z\) by Membership is Left Compatible with Ordinal Addition \(\displaystyle \forall w \in z: \ \ \) \(\displaystyle x^{y + w}\) \(\le\) \(\displaystyle x^{y + z}\) by Membership is Left Compatible with Ordinal Exponentiation \(\displaystyle \implies \ \ \) \(\displaystyle \bigcup_{w \in z} x^{y + w}\) \(\le\) \(\displaystyle x^{y + z}\) by Indexed Union Subset Conversely: \(\displaystyle w \in x^{y + z}\) \(\implies\) \(\displaystyle \exists u \in \left({y + z}\right): w \in x^u\) definition of ordinal exponentiation \(\displaystyle \) \(\implies\) \(\displaystyle \exists v \in z: 
\exists u \in \left({y + v}\right): w \in x^u\) definition of ordinal addition Thus, $u$ is bounded above by $\left({ y + v }\right)$ for some $v \in z$. Therefore: $x^u \le x^{y + v}$ By Supremum Inequality for Ordinals, it follows that: $x^{y + z} \le \bigcup_{w \in z} x^{y + w}$ Thus, by definition of set equality: $x^{y + z} = \bigcup_{w \in z} x^{y + w}$ $\Box$ Combining the results of the first and second lemmas for the limit case: $x^{y+z} = x^y \times x^z$ This proves the limit case. $\blacksquare$
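For finite ordinals (the natural numbers), the identity reduces to the familiar law of exponents. A quick script can spot-check that case; this is only a sanity check on the finite fragment, not a substitute for the transfinite argument above.

```python
# Spot-check x^y * x^z == x^(y + z) for finite ordinals (natural numbers).
# The limit case, of course, still requires the transfinite proof above.
for x in range(1, 6):
    for y in range(5):
        for z in range(5):
            assert x**y * x**z == x**(y + z), (x, y, z)
print("finite-case identity verified")
```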
Recently the question If $\frac{d}{dx}$ is an operator, on what does it operate? was asked on mathoverflow. It seems that some users there objected to the question, apparently interpreting it as an elementary inquiry about what kind of thing is a differential operator, and on this interpretation, I would agree that the question would not be right for mathoverflow. And so the question was closed down (and then reopened, and then closed again…. sigh). (Update 12/6/12: it was opened again, and so I’ve now posted my answer over there.) Meanwhile, I find the question to be more interesting than that, and I believe that the OP intends the question in the way I am interpreting it, namely, as a logic question, a question about the nature of mathematical reference, about the connection between our mathematical symbols and the abstract mathematical objects to which we take them to refer. And specifically, about the curious form of variable binding that expressions involving $dx$ seem to involve. So let me write here the answer that I had intended to post on mathoverflow: ————————- To my way of thinking, this is a serious question, and I am not really satisfied by the other answers and comments, which seem to answer a different question than the one that I find interesting here. The problem is this. We want to regard $\frac{d}{dx}$ as an operator in the abstract senses mentioned by several of the other comments and answers. In the most elementary situation, it operates on a function of a single real variable, returning another such function, the derivative. And the same for $\frac{d}{dt}$. The problem is that, described this way, the operators $\frac{d}{dx}$ and $\frac{d}{dt}$ seem to be the same operator, namely, the operator that takes a function to its derivative, but nevertheless we cannot seem freely to substitute these symbols for one another in formal expressions.
For example, if an instructor were to write $\frac{d}{dt}x^3=3x^2$, a student might object, “don’t you mean $\frac{d}{dx}$?” and the instructor would likely reply, “Oh, yes, excuse me, I meant $\frac{d}{dx}x^3=3x^2$. The other expression would have a different meaning.” But if they are the same operator, why don’t the two expressions have the same meaning? Why can’t we freely substitute different names for this operator and get the same result? What is going on with the logic of reference here? The situation is that the operator $\frac{d}{dx}$ seems to make sense only when applied to functions whose independent variable is described by the symbol “x”. But this collides with the idea that what the function is at bottom has nothing to do with the way we represent it, with the particular symbols that we might use to express which function is meant. That is, the function is the abstract object (whether interpreted in set theory or category theory or whatever foundational theory), and is not connected in any intimate way with the symbol “$x$”. Surely the functions $x\mapsto x^3$ and $t\mapsto t^3$, with the same domain and codomain, are simply different ways of describing exactly the same function. So why can’t we seem to substitute them for one another in the formal expressions? The answer is that the syntactic use of $\frac{d}{dx}$ in a formal expression involves a kind of binding of the variable $x$. Consider the issue of collision of bound variables in first order logic: if $\varphi(x)$ is the assertion that $x$ is not maximal with respect to $\lt$, expressed by $\exists y\ x\lt y$, then $\varphi(y)$, the assertion that $y$ is not maximal, is not correctly described as the assertion $\exists y\ y\lt y$, which is what would be obtained by simply replacing the occurrence of $x$ in $\varphi(x)$ with the symbol $y$. 
For the intended meaning, we cannot simply syntactically replace the occurrence of $x$ with the symbol $y$, if that occurrence of $x$ falls under the scope of a quantifier. Similarly, although the functions $x\mapsto x^3$ and $t\mapsto t^3$ are equal as functions of a real variable, we cannot simply syntactically substitute the expression $x^3$ for $t^3$ in $\frac{d}{dt}t^3$ to get $\frac{d}{dt}x^3$. One might even take the latter as a kind of ill-formed expression, without further explanation of how $x^3$ is to be taken as a function of $t$. So the expression $\frac{d}{dx}$ causes a binding of the variable $x$, much like a quantifier might, and this prevents free substitution in just the way that collision does. But the case here is not quite the same as the way $x$ is a bound variable in $\int_0^1 x^3\ dx$, since $x$ remains free in $\frac{d}{dx}x^3$, but we would say that $\int_0^1 x^3\ dx$ has the same meaning as $\int_0^1 y^3\ dy$. Of course, the issue evaporates if one uses a notation, such as the $\lambda$-calculus, which insists that one be completely explicit about which syntactic variables are to be regarded as the independent variables of a functional term, as in $\lambda x.x^3$, which means the function of the variable $x$ with value $x^3$. And this is how I take several of the other answers to the question, namely, that the use of the operator $\frac{d}{dx}$ indicates that one has previously indicated which of the arguments of the given function is to be regarded as $x$, and it is with respect to this argument that one is differentiating. In practice, this is almost always clear without much remark. For example, our use of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ seems to manage very well in complex situations, sometimes with dozens of variables running around, without adopting the onerous formalism of the $\lambda$-calculus, even if that formalism is what these solutions are essentially really about. 
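The binding point can be made concrete with a toy symbolic differentiator, entirely illustrative and not any standard library: the operator must be told which variable name it binds, and differentiating $x^3$ with respect to the name $t$ yields the zero function, precisely because $x$ is not the bound variable.

```python
# Toy symbolic differentiator, purely illustrative.
# An expression is ('const', c), ('var', name), or ('pow', name, n) for name**n.
def diff(expr, var):
    """Differentiate expr with respect to the *named* variable var.

    The name matters: the operator binds a symbol, not merely a function.
    """
    kind = expr[0]
    if kind == 'const':
        return ('const', 0)
    if kind == 'var':
        return ('const', 1) if expr[1] == var else ('const', 0)
    if kind == 'pow':
        name, n = expr[1], expr[2]
        if name != var:
            return ('const', 0)                      # e.g. d/dt of x^3
        return ('mul', n, ('pow', name, n - 1))      # power rule
    raise ValueError(expr)

x_cubed = ('pow', 'x', 3)
print(diff(x_cubed, 'x'))  # power rule applies: represents 3 * x^2
print(diff(x_cubed, 't'))  # x^3 is constant in t: represents 0
```

This is essentially the $\lambda$-calculus point in executable form: the differentiation variable must be named explicitly, and renaming it changes the result.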
Meanwhile, it is easy to make examples where one must be very specific about which variables are the independent variable and which are not, as Todd mentions in his comment to David’s answer. For example, cases like $$\frac{d}{dx}\int_0^x(t^2+x^3)dt\qquad \frac{d}{dt}\int_t^x(t^2+x^3)dt$$ are surely clarified for students by a discussion of the usage of variables in formal expressions and more specifically the issue of bound and free variables.
CAPM states that the expected return of any given asset should equal $ER_i=R_f+β_i (R_m-R_f)$, with α being the error term of the previous equation. Now, as α has an expected value of zero, the only way to achieve higher expected returns is taking on more β (given that $E[(R_m-R_f )]>0$). Every individual stock has some idiosyncratic risk in addition to its market β (always true when it is less than perfectly correlated with the market). Thus, we can get the best return/risk ratio by buying the market portfolio: buying anything else, we could not get more expected return for the same β, but would only get some additional idiosyncratic risk. Now, if you use historical data to estimate expected returns, you imply nonzero expected α's for all assets. This is not coherent with the CAPM framework, so although this methodology sits within MPT, it has nothing to do with CAPM. In effect, by using MPT this way, you are generating a momentum-based investment strategy, as you assume that assets that have had good returns historically will continue to have good returns in the future. Here is a paper in which a strategy is analyzed that utilizes short-term past historical returns as the expected returns for mean-variance optimization. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2606884 Edit: My initial answer was fairly ambiguous in terms of notation. To clarify my notion of the connection between CAPM, Jensen's Alpha and the security characteristic line (SCL) in this context, as discussed in the comments below: We can define the SCL as $R_i = \alpha_i + \beta_i * R_m + \epsilon_i$ (with $R_i$ and $R_m$ being the realized security and market returns in excess of the risk-free rate, and $\beta_i$ being the OLS regression beta with $R_i$ being the dependent variable and $R_m$ being the independent variable). We can define Jensen's alpha as $\alpha_i = R_i - \beta_i * R_m$ (with the variables defined as above).
From here it can be seen that Jensen's alpha equation is just another form of the SCL (with $\alpha_i$ and $R_i$ switching sides, and the equation multiplied by $-1$). While the SCL and Jensen's alpha equations use realized returns, CAPM uses expected returns and can be formulated as follows: $E(R_i) = \beta_i * E(R_m) + \epsilon_i$ (notation similar to the previous equations, but with $E(R_i)$ being the expected excess return of the security and $E(R_m)$ being the expected excess return of the market portfolio), where $\epsilon_i$ is an error term and $E(\epsilon_i) = 0$. Now, when previous realized returns are used as proxies for the expected returns (i.e. $E(R_i) = R_i$ and $E(R_m) = R_m$), when plugged in to the CAPM, we find that it must be the case that $\alpha_i ≡ \epsilon_i$ for all $i$. As the (realized) $\alpha_i$ is a deterministic term and does not necessarily equal zero, we find that it can't be that $\alpha_i ≡ E(\epsilon_i) = 0$ for all $i$. Thus, using realized security returns as proxies for expected returns is not compatible with the CAPM. Edit2: I figure I still did not actually answer your question very well. Sharpe’s development of the CAPM was originally spurred by the problem his graduate school supervisor Markowitz had with mean-variance optimization. As computers were slow and expensive, it was not feasible to do the calculations for a large number of securities. Sharpe then first came up with the single-index model (SIM), which is basically what I previously referred to as the security characteristic line (SCL). The reasoning here was that the returns of different securities were related only through common relationships with some basic underlying factor.
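The regression quantities above can be sketched in a few lines of pure Python. The return series below are made up purely for illustration: beta is the OLS slope $\mathrm{cov}(R_i, R_m)/\mathrm{var}(R_m)$, and Jensen's alpha is the mean excess return left over.

```python
# Estimate the SCL beta and Jensen's alpha from excess-return series.
# All numbers are hypothetical, chosen only to illustrate the formulas.
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)

r_m = [0.02, -0.01, 0.03, 0.00, 0.01]   # market excess returns (hypothetical)
r_i = [0.03, -0.02, 0.05, 0.00, 0.02]   # security excess returns (hypothetical)

beta = cov(r_i, r_m) / cov(r_m, r_m)    # OLS slope of the SCL
alpha = mean(r_i) - beta * mean(r_m)    # Jensen's alpha (the intercept)
print(beta, alpha)
```

With these particular numbers beta comes out to 1.7 and alpha to a small negative value, exactly the kind of nonzero realized alpha the answer argues is incompatible with using realized returns as CAPM expectations.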
This being the case, instead of calculating all the pairwise covariances and the resulting portfolio volatilities, the volatility of a (well-diversified) portfolio (where all idiosyncratic risk is diversified away) could be approximated via the securities’ weighted covariances with the underlying factor (i.e. the market index). This decreased the computing-power cost of the operation dramatically. The SIM was thus used to decompose (“analyst’s”) estimates of expected returns on different securities for a more efficient calculation of the efficient frontier. CAPM followed soon after, when Sharpe concluded that (if alphas could not be predicted) the market portfolio itself is the tangency portfolio. Now, as computing power is cheap today, and you can easily calculate the covariance matrix as well as the portfolio volatilities of a large number of different combinations, the SIM is no longer needed for the analysis.
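The computational shortcut described above can be sketched with made-up numbers: under the single-index model, portfolio variance decomposes into a systematic term driven by the portfolio beta plus a diversifiable idiosyncratic term, so no full covariance matrix is needed.

```python
# Single-index model approximation of portfolio variance (illustrative numbers).
# var(R_p) ~= (sum_i w_i * beta_i)^2 * var(R_m) + sum_i w_i^2 * var(eps_i)
weights = [0.5, 0.5]       # portfolio weights (hypothetical)
betas = [1.2, 0.8]         # market betas (hypothetical)
idio_var = [0.02, 0.03]    # idiosyncratic variances (hypothetical)
market_var = 0.04          # variance of the market index (hypothetical)

portfolio_beta = sum(w * b for w, b in zip(weights, betas))
systematic = portfolio_beta ** 2 * market_var
idiosyncratic = sum(w * w * s for w, s in zip(weights, idio_var))
total_var = systematic + idiosyncratic
print(total_var)  # the idiosyncratic part shrinks as the portfolio diversifies
```

The design point is exactly Sharpe's: only one covariance per security (its beta with the index) is needed, rather than all pairwise covariances.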
A positive definite matrix is a symmetric matrix A for which all eigenvalues are positive. - Gilbert Strang I have heard of positive definite quadratic forms, but never heard of definiteness for a matrix. Because definiteness is a higher-dimensional analogue of whether something is convex (opening up) or concave (opening down). It does not make sense to me to say a matrix is opening up, or a matrix is opening down. Therefore it does not make sense to say that a matrix has definiteness. In addition, when we say $M \in \mathbb{R}^{n \times n}$ is positive definite, what is the first thing we do? We plug $M$ into a function(al) $x^T (\cdot) x$ and check whether the function is positive for all $x \in \mathbb{R}^n$. Clearly, that means we are defining this definiteness with respect to $x^T (\cdot) x$ and NOT $M$ itself. Furthermore, when a matrix has complex eigenvalues, we ditch the notion of definiteness altogether. Clearly, definiteness is a flimsy property for matrices if we can just throw it away when it becomes inconvenient. I will grant you that if we were to define positive definite matrices, we should only define them with respect to symmetric matrices. This is the definition on Wikipedia, the definition used by numerous linear algebra books and many applied math books. But then when confronted with a matrix of the form $$\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}$$ I still firmly believe that this matrix is not positive definite because it is not symmetric. Because to me positive definiteness implies symmetry. To what degree is it widely agreed upon in the math community that a positive definite matrix is defined strictly with respect to symmetric matrices, and why only with respect to symmetric matrices?
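A small numeric check makes the tension in the question concrete (illustrative only): the non-symmetric matrix above still satisfies $x^T M x > 0$ for every nonzero $x$, because the quadratic form only sees the symmetric part $(M + M^T)/2$.

```python
# For M = [[1, -1], [0, 1]], x^T M x = x1^2 - x1*x2 + x2^2, which is positive
# for every nonzero real x even though M is not symmetric.  The form depends
# only on the symmetric part S = (M + M^T)/2 = [[1, -0.5], [-0.5, 1]].
def quad_form(m, x):
    n = len(x)
    return sum(x[i] * m[i][j] * x[j] for i in range(n) for j in range(n))

M = [[1.0, -1.0], [0.0, 1.0]]
S = [[1.0, -0.5], [-0.5, 1.0]]

samples = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-2.0, 3.0), (0.5, -0.5)]
for x in samples:
    qm, qs = quad_form(M, x), quad_form(S, x)
    assert abs(qm - qs) < 1e-12   # M and its symmetric part give the same form
    assert qm > 0                 # the form itself is positive on these samples
print("x^T M x > 0 on all samples, and equals x^T S x")
```

This is why the common convention restricts "positive definite" to symmetric matrices: the quadratic form cannot distinguish $M$ from its symmetric part, so symmetry costs no generality.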
Now showing items 1-10 of 26

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...

K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...

Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...

Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Superpower The two real-holomorphic superpower functions are considered here: \(\mathrm{SuPow}_a(z)=\exp(a^z)\) and \(\mathrm{SdPow}_a(z)=\exp(-a^z)\) For \(a\!=\!2\), the explicit plots of these two functions are shown in figure 1 at right. Contents Transfer equation \(F(z\!+\!1)=T(F(z))\) Functions \(F\!=\!\mathrm{SuPow}_a\) and \(F\!=\!\mathrm{SdPow}_a\), and their modifications with displaced argument, seem to be the only real-holomorphic solutions of the transfer equation with a power function \(T\) as transfer function. Periodicity The complex map of the function SuPow for the same value \(a\!=\!2\) is shown in Figure 2. Functions \(\mathrm{SuPow}_a\) and \(\mathrm{SdPow}_a\) are periodic; the period is \(P= 2 \pi \mathrm i/\ln(a)\) For positive \(a\), the period is pure imaginary. At \(a\!=\!2\), for the case shown in figure 2, the period is \(P=2\pi \mathrm i / \ln(2) \approx 9.06472028365\, \mathrm i\) The map in Fig. 2 is reproduced under translations by roughly \(\pm 9.06472028365\) along the imaginary axis. The range of Fig. 2 covers a little more than one period of the function. Functions \(\mathrm{SuPow}_a\) and \(\mathrm{SdPow}_a\) can be expressed through each other: \(\mathrm{SdPow}_a(z)=\mathrm{SuPow}_a\big(z\!+\mathrm i \pi/\ln(a) \big)\) In particular, the map of the function \(\mathrm{SdPow}_2\) shown in Fig. 3 appears as the map of \(\mathrm{SuPow}_2\) shown in Fig. 2 displaced by \(P/2\), that is, translated by roughly 4.53236014183 along the imaginary axis. There is another relation between these two superfunctions: \(\mathrm{SdPow}_a(z)=1/\mathrm{SuPow}_a(z)=\mathrm{SuPow}_a(z)^{-1}=\displaystyle\frac{1}{\mathrm{SuPow}_a(z)}\) Inverse functions \(G=\mathrm{AuPow}_a=\mathrm{SuPow}_a^{-1}\) and \(G=\mathrm{AdPow}_a=\mathrm{SdPow}_a^{-1}\) satisfy the Abel equation \(G(T(z))=G(z)+1\) A solution \(G\) of this equation for \(~T(z)\!=\!z^a~\) can be referred to as an Abelpower function. For \(a\!=\!2\), the complex map of the function \(\mathrm{AuPow}_a\) is shown in the figure at right.
These functions differ by a constant, \(\mathrm{AdPow}_a=\mathrm{AuPow}_a \pm P/2\) where \(P\) is the period of the superpower function defined above. The sign in this formula changes at the cut lines of the functions \(\mathrm{AuPow}_a\) and \(\mathrm{AdPow}_a\); the sets of cut lines are different for these two functions. Application of Superpower The superpower function is an important example for illustrating the properties of superfunctions. Other superfunctions mentioned in the Table of superfunctions show properties similar to those of the functions SuPow and SdPow. This also applies to superfunctions that cannot be expressed through elementary functions. It would be interesting to construct a proof that, for the power function, SuPow and SdPow are the only non-trivial superfunctions with simple asymptotic behaviour, and that the other periodic superfunctions appear by displacement of the argument by a constant. Other meanings of term superpower In physics, the term power denotes transfer of energy, refers to energy per time, and can be measured in Watt\(=\)Joule/second\(=\mathrm{ Kg\, m^2/sec^3}\). In this sense, power is the main parameter of an engine, of an electric plant, of an electric cable, of a source of waves (highball mixer, laser, acoustic emitter, wave machine, etc.). For this reason, in TORI, only the meaning mentioned in the previous sections is supposed to be used. References http://www.dosguys.com/JCS/Jesus_Christ_Superstar_-_Chords_&_Lyrics_(by_RA).htm JESUS CHRIST SUPERSTAR by Andrew Lloyd Webber & Tim Rice. 7/7/2007. 2-3 Poor Jerusalem: Neither you Simon, nor the fifty thousand Nor the Romans, nor the Jews, Nor Judas, nor the Twelve, nor the Priests, nor the Scribes, Nor doomed Jerusalem itself Understand what power is ..
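The defining relations above admit a quick numerical check (a sketch only; the test point is chosen arbitrarily): for \(T(w)=w^a\), the function \(F(z)=\exp(a^z)\) satisfies the transfer equation \(F(z+1)=F(z)^a\), is periodic with period \(2\pi\mathrm i/\ln a\), and is the reciprocal of SdPow shifted by half a period.

```python
import cmath

a = 2
ln_a = cmath.log(a)
P = 2j * cmath.pi / ln_a              # the period 2*pi*i / ln(a)

def supow(z):
    """SuPow_a(z) = exp(a^z), with a^z computed as exp(z * ln a)."""
    return cmath.exp(cmath.exp(z * ln_a))

z = 0.3 + 0.4j                        # arbitrary test point
# Transfer equation F(z+1) = T(F(z)) with T(w) = w^a:
assert abs(supow(z + 1) - supow(z) ** a) < 1e-9
# Periodicity with period P = 2*pi*i / ln(a):
assert abs(supow(z + P) - supow(z)) < 1e-9
# SdPow_a(z) = 1/SuPow_a(z) = SuPow_a(z + P/2):
assert abs(1 / supow(z) - supow(z + P / 2)) < 1e-9
print("transfer equation, periodicity, and SdPow relation verified")
```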
Fujimura's problem Let [math]\overline{c}^\mu_n[/math] denote the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture. The known values are:

n: 0 1 2 3 4 5
[math]\overline{c}^\mu_n[/math]: 1 2 4 6 9 12

n=0 [math]\overline{c}^\mu_0 = 1[/math]: This is clear. n=1 [math]\overline{c}^\mu_1 = 2[/math]: This is clear. n=2 [math]\overline{c}^\mu_2 = 4[/math]: This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]). n=3 [math]\overline{c}^\mu_3 \geq 6[/math]: For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math]. For the upper bound: observe that if only three points were removed, each of these (non-overlapping) triangles must contain exactly one removal:

set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)

Consider the choices from set A:
(0,3,0) leaves the triangle (0,2,1) (1,2,0) (1,1,1);
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work;
(1,2,0) is symmetrical with (0,2,1).

n=4 [math]\overline{c}^\mu_4=9[/math]: The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c =0 has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.) Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles.
If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can be only 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math]. n=5 [math]\overline{c}^\mu_5=12[/math]: The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn't contain any equilateral triangles. Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can be only 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if [math]|S|\gt12[/math], S doesn't contain any of these. S can only contain 2 points in each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math]. General n A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], obtained by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside that triangle.
In a similar spirit, we have the lower bound [math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math] for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has its third vertex outside of the original example. An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math], which comes from deleting the two bottom rows of a triangle-free set and counting how many vertices are possible in those rows. Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to exactly n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math]. Asymptotics The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math]. By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
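The small values tabulated above can be confirmed by brute force. Here is a sketch (naive bitmask enumeration, fast enough for n at most 4) that builds all upward triangles in [math]\Delta_n[/math] and finds the largest triangle-free subset.

```python
# Brute-force check of small values of Fujimura's problem (n <= 4 is fast).
def max_triangle_free(n):
    pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    idx = {p: i for i, p in enumerate(pts)}
    # Triangles (a+r,b,c), (a,b+r,c), (a,b,c+r) with r > 0 and a+b+c+r = n.
    tris = []
    for r in range(1, n + 1):
        for a in range(n - r + 1):
            for b in range(n - r - a + 1):
                c = n - r - a - b
                tris.append((idx[(a + r, b, c)],
                             idx[(a, b + r, c)],
                             idx[(a, b, c + r)]))
    best = 0
    for mask in range(1 << len(pts)):
        # A subset is triangle-free if no triangle has all three bits set.
        if all(not ((mask >> i) & (mask >> j) & (mask >> k) & 1)
               for i, j, k in tris):
            best = max(best, bin(mask).count("1"))
    return best

print([max_triangle_free(n) for n in range(5)])  # matches the table: [1, 2, 4, 6, 9]
```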
This is something I discovered after an attempt at a more mathematically rigorous investigation of how to interpret polarimetry results. It's not that hard, but I don't think you'll ever find a similar analysis in a book because in practice the details are pretty much unnecessary for experimental purposes (though fun mathematically!). Before we begin, it's very important to distinguish two types of optical rotation. The first is the real optical rotation, $\alpha_{real}$. This is the twist which a substance actually causes on the plane of polarized light as it travels through the optically active solution, and therefore is the rotation which is part of the specific rotation equation, shown further below. The numerical value of this angle can be any finite real number, in degrees or radians. There is no constraint on how much an optically active solution is "allowed" to twist a plane of polarized light; the fact that you can't directly see how many complete twists the plane of polarized light made is entirely an experimental limitation. This experimental limitation is what gives rise to the other type of rotation: the observed or apparent rotation, $\alpha_{obs}$. This is an angle which is also a real number, but which is limited between $0$ and $180$ degrees or $0$ and $\pi$ radians (or you could say between $-90$ and $90$ degrees or $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ radians, it makes no conceptual difference). The observed rotation is what you measure in a polarimeter, and in principle it cannot be directly put into the specific rotation equation. Why is $\alpha_{obs}$ allowed only in the range $[0, 180]$ degrees? Because you can't observe a difference between an angle $\alpha_{obs}$ and $\alpha_{obs} + 180\times n$ (where $n$ is a positive or a negative integer). Look at a figure such as this (source) and try to convince yourself of this.
If it helps, remember that two identical polarizers on top of each other with an angle of $0^\circ$ let light through completely, but rotating one will block more and more light until they are perpendicular ($90^\circ$) and no light at all comes through, at which point increasing the angle will brighten the light until it reaches a maximum again at $180^\circ$ before once again starting to dim. This means that, in polarimetry, the observed brightening or dimming by rotations of plane-polarized light is cyclic with period $180^\circ$, not $90^\circ$ as you suggest. Turning the previous sentence into a simple equation, we obtain: $$\alpha_{real} = \alpha_{obs} + 180\times n \ \ \ , \ \ n \in \mathbb{Z} $$ (The angles in this answer will always be treated in degrees, though for simplicity, in the equations I shall forget about the $^\circ$ unit and treat angles as pure numbers*). Now we use the equation for optical rotation to find the specific rotation $[\alpha]^T_\lambda$, which characterizes a substance when measured at a temperature $T$ with a monochromatic light source of wavelength $\lambda$: $$\alpha_{real} = [\alpha]^T_\lambda \times l \times c$$ where $l$ is the optical path and $c$ is the concentration of the (enantiomerically pure) optically active substance. Substituting that into the previous equation: $$[\alpha]^T_\lambda = \frac{\alpha_{obs} + 180 n}{lc}$$ So for each individual measurement of $\alpha_{obs}$ made, there are infinitely many solutions for $[\alpha]^T_\lambda$. This comes as no surprise. But if we take several measurements, then we can find a value of $[\alpha]^T_\lambda$ as a unique solution to the equation, right? Speaking in strictly mathematical terms, no. Any finite number of measurements, even with perfect accuracy and precision, will still yield infinitely many possible values of $[\alpha]^T_\lambda$.
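The ambiguity can be made concrete with a short sketch (all numbers hypothetical): each choice of the winding number $n$ gives a different candidate specific rotation from the same observed angle.

```python
# Candidate specific rotations [alpha] = (alpha_obs + 180*n) / (l*c)
# from a single (hypothetical) measurement.  Angles in degrees.
alpha_obs = 30.0   # observed rotation, limited to one 180-degree cycle
l = 1.0            # optical path (hypothetical)
c = 0.5            # concentration (hypothetical)

candidates = [(alpha_obs + 180 * n) / (l * c) for n in range(-2, 3)]
print(candidates)  # [-660.0, -300.0, 60.0, 420.0, 780.0]
```

Note that both levorotatory (negative) and dextrorotatory (positive) candidates appear, just as in the graph discussion below.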
In fact, there are infinitely many levorotatory and dextrorotatory solutions, so in principle you can't even determine the direction of rotation by this method! See the graph below: EDIT: In my previous graph, I incorrectly equated the values of $\theta$ with $\alpha_{real}$. The actual identity comes from $\mathrm{tan}\ \theta = \frac{d\alpha_{obs}}{dc} = \frac{d}{dc}(\alpha_{real}+180n)=\frac{d\alpha_{real}}{dc}=\frac{d}{dc}(lc[\alpha]^T_\lambda)=l[\alpha]^T_\lambda$, from which $\theta = \mathrm{arctan}\ (l[\alpha]^T_\lambda)$. This however does not change the theoretical development of the answer. Note that since the measurements are cyclic with period $180^\circ$, straight lines wrap around. The two black points are two measurements of $\alpha_{obs}$ at two different concentrations. There is an "obvious" red line connecting them, but it need not be the correct one. There are infinitely many lines which cut both points; I have represented two more perfectly valid solutions with the green and blue lines. Each line which cuts all points represents a possible solution, with an inclination given by the angle $\theta = \mathrm{arctan}\ (l[\alpha]^T_\lambda)$ (see derivation above). From this, $[\alpha]^T_\lambda = \frac{\mathrm{tan}\ \theta}{l}$. Interestingly, the red and blue lines represent dextrorotatory solutions to the problem, while the green line indicates a levorotatory solution for the same compound. There are infinitely many more solutions, characterized by steeper lines which wrap around more times. See the comments for information about how they come into play. However, in practice, materials have only a finite and usually relatively small ability to perform optical rotation. Most of the more commonly encountered substances have a value of $[\alpha]^T_\lambda$ which tops out at a few hundred. Some extreme examples are helicenes, for which specific optical rotations can reach as high as $\approx 10000^\circ$ in [13]helicene.
After performing enough measurements (especially if you do them intelligently, by not measuring at different concentrations which are related to each other by simple fractions), you basically reach a situation where you can find an infinite family of solutions such as $[\alpha]^T_\lambda = -534 + 462785\times n$, in which case the true specific rotation is probably levorotatory with a value of just $-534^\circ$. As a small additional comment, usually the filters in an optical polarimeter are rotated until the least amount of light comes through, as the human eye is better at detecting small differences in light levels in darker conditions, which makes for a more accurate measurement of the observed rotation. Originally I had answered this question by bringing up modular arithmetic, but while it's interesting, it can possibly make the answer harder to understand at first. I have relocated my previous argument here. The first equation shown is equivalent to: $$\alpha_{obs} = \alpha_{real}\ \ (\mathrm{mod\ 180})$$ The $\mathrm{mod\ 180}$ part comes from modular arithmetic, which may sound scary and new, but it's just normal mathematics, with the small quirk that every time a number appears outside the range $[0, 180]$, you just add or subtract $180$ as many times as necessary for the answer to appear between $0$ and $180$. * In reality some of the $\alpha_{real}$ and $\alpha_{obs}$ variables in certain equations above would have to be divided by $^\circ$, so that all equalities are dimensionally valid. Equivalently, the modulus could be expressed with the unit as $\mathrm{mod\ 180^\circ}$, but I'm not sure if the modulus is allowed to have physical dimensions, even if I don't see any issue with it.
Hint: The best way to see this is to look at it locally. If $f$ is a diffeomorphism, then $f^{-1}$ is a diffeomorphism and the differential of $f^{-1}$ is the inverse of the differential of $f$: $d(f^{-1})_{f(x)} = (df_x)^{-1}$. If $f$ is an immersion/submersion and $x\in X$, there exists a chart $U$ containing $x$ such that the restriction of $f$ to $U$ is an immersion/submersion whose image is contained in a chart containing $f(x)$. It is therefore enough to show the result when $f:U\subset \mathbb{R}^n\rightarrow \mathbb{R}^m$. In this case, immersions and submersions are characterized by the constant rank theorem and local diffeomorphisms (see Local Submersion Theorem - Differential Topology of Guillemin and Pollack and Local Immersion Theorem in $\mathbb{R}^n$ proof), which reduce the problem to a linear map.
All right, let's take a crack at this: First, let's define a sound as a pressure wave traveling through a transmission medium. Since our aether is functionally identical to our atmosphere for these purposes, we need to find out what the threshold for human hearing is in terms of sound pressure, then find out how far a supernova's blast can travel before the imparted energy falls to that level. For the first bit, Wikipedia has a chart of various sounds and their decibel level as well as the effective pressure necessary to reach that level. Looking at the chart, the auditory threshold is given as $0\;\text{dB}$, or $2 \cdot 10^{-5}\;\text{Pa}$. That's given for a constant tone rather than a single event, though, so let's bump our limit up to leaves rustling, which is $10\;\text{dB}$, or $6.32 \cdot 10^{-5}\;\text{Pa}$. That gives a measure of how much pressure we need to hear a sound, so let's move on to the supernova. Wikipedia comes to the rescue again in this case with all sorts of figures about supernovae. Since the energy released varies by the type of star, let's look at the low end, a star that has 8-10 solar masses worth of material. The chart "energetics of supernovae" on that page lists the kinetic energy released in that type of event as 1 foe, or $10^{44}\;\text{J}$. Now for the conversion to see how far that energy will get us. A joule can be defined as a pascal times meters cubed, or pascals in a certain volume. Shuffle that around and we get joules divided by pascals equals the volume required in meters cubed. $\frac{10^{44}\;\text{J}}{6.32 \cdot 10^{-5}\;\text{Pa}} = 1.58 \cdot 10^{48}\;\text{m}^{3}$ That's the total volume of space within which the pressure wave through the aether will be a sound akin to leaves rustling. Since a supernova is an event that emanates from a single point in space, let's assume that's a perfect sphere. The radius of a sphere is the cube root of three times the volume divided by four pi.
$\sqrt[3]{\frac{3 \cdot 1.58 \cdot 10^{48}\;\text{m}^{3}}{4 \pi}} = 7.2 \cdot 10^{15}\;\text{m} \approx 0.76\;\text{ly}$ About three quarters of a light year. Not nearly as far as it would probably need to reach to hit us, but a pretty massive distance nonetheless. I took the minimum values for the supernova, too, so if there was a bigger bang, the distance would scale up appropriately.
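The arithmetic is quick to check (a sketch only; the constants are the ones quoted above, and $r=\sqrt[3]{3V/4\pi}$ is the standard inversion of the sphere-volume formula $V=\frac{4}{3}\pi r^3$):

```python
import math

E = 1e44                    # kinetic energy of a low-end supernova, J (1 foe)
p = 6.32e-5                 # rustling-leaves pressure threshold, Pa (10 dB)
V = E / p                   # volume in which the wave stays "audible", m^3
r = (3 * V / (4 * math.pi)) ** (1 / 3)   # radius of the equivalent sphere, m
ly = 9.461e15               # metres per light year
print(f"V = {V:.3g} m^3, r = {r:.3g} m = {r / ly:.2f} ly")
```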
Global attractors for damped semilinear wave equations John M. Ball, Mathematical Institute, University of Oxford, 24-29 St Giles', Oxford OX1 3LB, United Kingdom In the case $n\geq 3$ and $\gamma>\frac{n}{n-2}$ the existence of a global attractor is proved under the (unproved) assumption that every weak solution satisfies the energy equation. Keywords: global attractor, Lyapunov function, semilinear wave equation, damped, weak solution, nonuniqueness, weakly continuous, asymptotically compact, asymptotically smooth, Kneser's property, generalized semi-flow. Mathematics Subject Classification: 37L30, 37L05, 35L70. Citation: John M. Ball. Global attractors for damped semilinear wave equations. Discrete & Continuous Dynamical Systems - A, 2004, 10 (1&2): 31-52. doi: 10.3934/dcds.2004.10.31
I have always thought that there are two solutions to the square root of a real number, one being positive and the other being negative. However, in Penrose's book, The Road to Reality, he seems to claim that $e^{1/2}$ will always give a positive answer, since $e^n$ is defined as $1+ \frac{n}{1!} +\frac{n^2}{2!} + \ldots$, so substituting $\frac{1}{2}$ into the equation will give us a positive number. And so the logarithm defined with base $e$ will be unambiguous, since there is only one answer for every $e^n$. However, I find it quite puzzling because $-e^{1/2}$ will also give me $e$ when squared, won't it? An old chestnut (and I'm sure there are many other posts on this, but here's another short attempt): For real numbers $x$ which are zero or positive, the square root of $x$ is defined to be the real number $y \geq 0$ such that $y^2 = x$. It's easy to see that $y$ is unique and we denote it by the symbol $\sqrt{x}$. We make the square root unique in part so that the function $f(x) = \sqrt{x}$ is well defined. If you then ask the question: what are the solutions of $x^2 = c$, for a positive, real number $c$, then there are two: $x = \sqrt{c}$ and $x = -\sqrt{c}$. Similarly, the reason why the formula for solving the generic quadratic equation over the reals of $ax^2 + bx + c = 0$ has a $\pm$ symbol is because the expression $\sqrt{b^2 - 4ac}$ has a unique value when it exists. So what Penrose is doing is using the established convention for what $\sqrt{x}$ means for a positive real number $x$. I think the confusion arises because you are reading $e^{1/2}$ as "the set of solutions to $x^2=e$", and Penrose is defining $e^{1/2}$ as $\sum_{k\ge0}\frac{(1/2)^k}{k!}$. He is not using $e^{1/2}$ the way $y^{1/2}$ is defined in complex analysis, but rather as a notation for a function given by a convergent series. It would be clearer if he defined $$ \exp(n)=\sum_{k\ge0}\frac{n^k}{k!} $$ and then said that $\exp(1/2)$ is unambiguously defined.
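Penrose's point is easy to check numerically: summing the defining series at $n = 1/2$ produces one definite positive number whose square is $e$. (A quick sketch; the function name is mine.)

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the series 1 + x/1! + x^2/2! + ... defining exp."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

root_e = exp_series(0.5)
print(root_e)        # a single, unambiguously positive value
print(root_e ** 2)   # agrees with e to within floating-point error
```

The series can only ever output this one value; the "other" square root $-e^{1/2}$ is a solution of $x^2 = e$, not a value of the series.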
If the solution to $x^2-e=0$ doesn't have the two roots $\pm\sqrt e$, some fundamental theorems of algebra have to be revised. But of course, the exponential function must also be well defined for $\displaystyle x=\frac{1}{2}$. I guess it is simply not correct to claim $\displaystyle a^{\frac{1}{2}}=\pm\sqrt a$.
I have asked this question on MSE but did not receive an answer, so I thought I would try it here. Let $T$ be a self-adjoint trace-class operator on $L^2(\mathbb{R})$. Is it true that it can be represented as an integral operator? I thought the kernel would be $$k_T(x,y) =\sum_{i=1}^\infty \lambda_i \phi_i(x) \bar\phi_i(y).$$ Here $\{\phi_i\}$ is an eigenbasis of $T$, i.e. $T=\sum_i \lambda_i |\phi_i\rangle\langle\phi_i|$. Then, we have $$\int k_T(\cdot,y) f(y)\, dy = \int\sum_i \lambda_i \phi_i(\cdot) \bar\phi_i(y) f(y)\, dy = \sum_i \lambda_i \phi_i \langle \phi_i, f\rangle=\sum_i \lambda_i |\phi_i\rangle\langle\phi_i|f\rangle = Tf.$$ Is this correct?
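This is not a proof, but a finite-dimensional discretization makes the identity easy to sanity-check. The following sketch (all names mine, rank and eigenvalues arbitrary) builds a rank-5 self-adjoint operator on a grid and compares the kernel form with the spectral form:

```python
import numpy as np

m, rank = 200, 5
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]
rng = np.random.default_rng(0)

# Columns of phi: orthonormal w.r.t. the dx-weighted inner product
Q, _ = np.linalg.qr(rng.standard_normal((m, rank)))
phi = Q / np.sqrt(dx)
lam = np.array([1.0, 0.5, -0.3, 0.1, 0.05])

# k(x, y) = sum_i lam_i phi_i(x) phi_i(y), sampled on the grid
K = (phi * lam) @ phi.T

f = np.sin(2 * np.pi * x)
Tf_kernel = K @ f * dx                          # integral against the kernel
Tf_spectral = phi @ (lam * (phi.T @ f * dx))    # sum_i lam_i <phi_i, f> phi_i
print(np.allclose(Tf_kernel, Tf_spectral))      # True
```

The genuinely infinite-dimensional question is about the convergence of the kernel series, which the discretization sidesteps.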
If $z \ne 0, \tag 1$ then $\left \vert \dfrac{z}{\vert z \vert} \right \vert = \dfrac{\vert z \vert}{\vert z \vert} = 1; \tag 2$ thus $\dfrac{z}{\vert z \vert} \in S^1, \tag 3$ the unit circle in $\Bbb C$; therefore, we may always find some $\phi \in \Bbb R$ with $\dfrac{z}{\vert z \vert} = \cos \phi + i \sin \phi; \tag 4$ it remains to show that $\cos \phi + i \sin \phi = e^{i \phi}; \tag 5$ but this may easily be done by expanding $e^{i \phi}$ in a power series $e^{i\phi} = \displaystyle \sum_0^\infty \dfrac{(i\phi)^n}{n!}, \tag 6$ and separating out the real and imaginary parts, as is shown in this wikipedia article, as well as by Jose Carlos Santos in his answer. Then (4) becomes $\dfrac{z}{\vert z \vert} = \cos \phi + i \sin \phi = e^{i\phi}, \tag 7$ or $z = \vert z \vert e^{i\phi}. \tag 8$ Note Added in Edit, Tuesday 3 April 2018 10:34 AM PST: This is in response to the comment on this answer made by our OP Hendrra. The easiest way to see that $z \in S^1 \Longrightarrow \exists \phi \in \Bbb R, \; z = \cos \phi + i \sin \phi, \tag 9$ is via simple geometry and trigonometry. Since $z$ is a point on the unit circle, there is a radial line segment 'twixt the origin $O$ and $z$, $\overline{Oz}$; the length of this segment is $1$, since $S^1$ is the "unit circle". Then let $\phi$ be the angle 'twixt the positive $x$-axis and the segment $\overline{Oz}$; the $x$-coordinate of the point $z$ is then the real part of $z$ considered as a complex number: $\Re(z) = \vert \overline{Oz} \vert \cos \phi = \cos \phi, \tag{10}$ since $\vert \overline{Oz} \vert = 1$; likewise the $y$-coordinate is $\Im(z) = \sin \phi; \tag{11}$ thus $z = \Re(z) + i\Im(z) = \cos \phi + i \sin \phi. \tag{12}$ End of Note.
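As a quick numerical illustration of (8), Python's complex arithmetic recovers any nonzero $z$ from its modulus and a suitable $\phi$ (here taken from `cmath.phase`):

```python
import cmath

z = complex(-3.0, 4.0)              # any nonzero complex number
r, phi = abs(z), cmath.phase(z)     # modulus, and an angle with z/|z| = e^{i phi}
assert abs(z / r - cmath.exp(1j * phi)) < 1e-12   # (4) and (5) combined
assert abs(r * cmath.exp(1j * phi) - z) < 1e-12   # equation (8)
print(r, phi)
```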
In mathematics, an ordered pair $(a, b)$ is a pair of mathematical objects. The order in which the objects appear in the pair is significant: the ordered pair $(a, b)$ is different from the ordered pair $(b, a)$ unless $a = b$. (In contrast, the unordered pair $\{a, b\}$ equals the unordered pair $\{b, a\}$.) Ordered pairs are also called 2-tuples, or sequences of length 2; ordered pairs of scalars are also called 2-dimensional vectors. The entries of an ordered pair can be other ordered pairs, enabling the recursive definition of ordered $n$-tuples (ordered lists of $n$ objects). For example, the ordered triple $(a, b, c)$ can be defined as $(a, (b, c))$, i.e., as one pair nested in another. In the ordered pair $(a, b)$, the object $a$ is called the first entry, and the object $b$ the second entry of the pair. Alternatively, the objects are called the first and second coordinates, or the left and right projections of the ordered pair. Cartesian products and binary relations (and hence functions) are defined in terms of ordered pairs.
Generalities
Let $(a_1, b_1)$ and $(a_2, b_2)$ be ordered pairs. Then the characteristic (or defining) property of the ordered pair is: $$(a_1, b_1) = (a_2, b_2)\quad\text{if and only if}\quad a_1 = a_2\text{ and }b_1 = b_2.$$ The set of all ordered pairs whose first entry is in some set $A$ and whose second entry is in some set $B$ is called the Cartesian product of $A$ and $B$, and written $A \times B$. A binary relation between sets $A$ and $B$ is a subset of $A \times B$. If one wishes to employ the $(a,b)$ notation for a different purpose (such as denoting open intervals on the real number line), the ordered pair may be denoted by the variant notation $\left\langle a,b\right\rangle$. The left and right projections of a pair $p$ are usually denoted by $\pi_1(p)$ and $\pi_2(p)$, or by $\pi_l(p)$ and $\pi_r(p)$, respectively. In contexts where arbitrary $n$-tuples are considered, $\pi_i(t)$ is a common notation for the $i$-th component of an $n$-tuple $t$.
Defining the ordered pair using set theory
The above characteristic property of ordered pairs is all that is required to understand the role of ordered pairs in mathematics. Hence the ordered pair can be taken as a primitive notion, whose associated axiom is the characteristic property. This was the approach taken by the N. Bourbaki group in its Theory of Sets, published in 1954, long after Kuratowski discovered his reduction (below). The Kuratowski definition was added in the second edition of Theory of Sets, published in 1970. If one agrees that set theory is an appealing foundation of mathematics, then all mathematical objects must be defined as sets of some sort. Hence if the ordered pair is not taken as primitive, it must be defined as a set. [1] Several set-theoretic definitions of the ordered pair are given below.
Wiener's definition
Norbert Wiener proposed the first set-theoretical definition of the ordered pair in 1914: [2] $$\left( a, b \right) := \left\{\left\{ \left\{a\right\},\, \emptyset \right\},\, \left\{\left\{b\right\}\right\}\right\}.$$ He observed that this definition made it possible to define the types of Principia Mathematica as sets. Principia Mathematica had taken types, and hence relations of all arities, as primitive. Wiener used $\{\{b\}\}$ instead of $\{b\}$ to make the definition compatible with type theory, where all elements in a class must be of the same "type". By nesting $b$ within an additional set, its type is made equal to that of $\{\{a\}, \emptyset\}$.
Hausdorff's definition
About the same time as Wiener (1914), Felix Hausdorff proposed his definition: $$(a, b) := \left\{ \{a, 1\}, \{b, 2\} \right\}$$ "where 1 and 2 are two distinct objects different from a and b." [3]
Kuratowski definition
In 1921 Kazimierz Kuratowski offered the now-accepted definition [4] [5] of the ordered pair $(a, b)$: $$(a, b)_K := \{ \{ a \}, \{ a, b \} \}.$$
Note that this definition is used even when the first and the second coordinates are identical: $$(x, x)_K = \{\{x\},\{x, x\}\} = \{\{x\},\{x\}\} = \{\{x\}\}$$ Given some ordered pair $p$, the property "$x$ is the first coordinate of $p$" can be formulated as: $$\forall Y \in p : x \in Y.$$ The property "$x$ is the second coordinate of $p$" can be formulated as: $$(\exists Y \in p : x \in Y) \land (\forall Y_1, Y_2 \in p : Y_1 \ne Y_2 \rightarrow (x \notin Y_1 \lor x \notin Y_2)).$$ In the case that the left and right coordinates are identical, the right conjunct $(\forall Y_1, Y_2 \in p : Y_1 \ne Y_2 \rightarrow (x \notin Y_1 \lor x \notin Y_2))$ is trivially true, since $Y_1 \ne Y_2$ is never the case. This is how we can extract the first coordinate of a pair (using the notation for arbitrary intersection and arbitrary union): $$\pi_1(p) = \bigcup\bigcap p$$ This is how the second coordinate can be extracted: $$\pi_2(p) = \bigcup\{x \in \bigcup p \mid \bigcup p \ne \bigcap p \rightarrow x \notin \bigcap p \}$$
Variants
The above Kuratowski definition of the ordered pair is "adequate" in that it satisfies the characteristic property that an ordered pair must satisfy, namely that $(a,b) = (x,y) \leftrightarrow (a=x) \land (b=y)$. In particular, it adequately expresses 'order', in that $(a,b) = (b,a)$ is false unless $b = a$. There are other definitions, of similar or lesser complexity, that are equally adequate: $$(a, b)_{\text{reverse}} := \{ \{ b \}, \{a, b\}\};$$ $$(a, b)_{\text{short}} := \{ a, \{a, b\}\};$$ $$(a, b)_{01} := \{\{0, a \}, \{1, b \}\}.$$ [6] The reverse definition is merely a trivial variant of the Kuratowski definition, and as such is of no independent interest. The definition short is so-called because it requires two rather than three pairs of braces. Proving that short satisfies the characteristic property requires the Zermelo–Fraenkel set theory axiom of regularity.
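The two extraction formulas can be tried out directly. Here is a small Python sketch (function names mine), modelling sets as `frozenset`s with atoms as plain integers; since atoms are not sets, the outer $\bigcup$ over a set of atoms is replaced by picking out the unique element:

```python
def kpair(a, b):
    """Kuratowski pair (a, b)_K = {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

def first(p):
    """pi_1(p): the unique element of the intersection of p."""
    (x,) = frozenset.intersection(*p)
    return x

def second(p):
    """pi_2(p): from the union of p, keep x outside the intersection,
    unless union and intersection coincide (the case a = b)."""
    union = frozenset.union(*p)
    inter = frozenset.intersection(*p)
    rest = union - inter
    (x,) = rest if rest else union
    return x

assert (first(kpair(1, 2)), second(kpair(1, 2))) == (1, 2)
assert (first(kpair(7, 7)), second(kpair(7, 7))) == (7, 7)
assert kpair(1, 2) != kpair(2, 1)
```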
[7] Moreover, if one uses von Neumann's set-theoretic construction of the natural numbers, then 2 is defined as the set $\{0, 1\} = \{0, \{0\}\}$, which is indistinguishable from the pair $(0, 0)_{\text{short}}$. Yet another disadvantage of the short pair is the fact that, even if $a$ and $b$ are of the same type, the elements of the short pair are not. (However, if $a = b$ then the short version keeps having cardinality 2, which is something one might expect of any "pair", including any "ordered pair". Also note that the short version is used in Tarski–Grothendieck set theory, upon which the Mizar system is founded.)
Proving that definitions satisfy the characteristic property
Prove: $(a, b) = (c, d)$ if and only if $a = c$ and $b = d$.
Kuratowski:
If. If $a = c$ and $b = d$, then $\{\{a\}, \{a, b\}\} = \{\{c\}, \{c, d\}\}$. Thus $(a, b)_K = (c, d)_K$.
Only if. Two cases: $a = b$, and $a \ne b$.
If $a = b$: $(a, b)_K = \{\{a\}, \{a, b\}\} = \{\{a\}, \{a\}\} = \{\{a\}\}$. $(c, d)_K = \{\{c\}, \{c, d\}\} = \{\{a\}\}$. Thus $\{c\} = \{c, d\} = \{a\}$, which implies $a = c$ and $a = d$. By hypothesis, $a = b$. Hence $b = d$.
If $a \ne b$, then $(a, b)_K = (c, d)_K$ implies $\{\{a\}, \{a, b\}\} = \{\{c\}, \{c, d\}\}$. Suppose $\{c, d\} = \{a\}$. Then $c = d = a$, and so $\{\{c\}, \{c, d\}\} = \{\{a\}, \{a, a\}\} = \{\{a\}\}$. But then $\{\{a\}, \{a, b\}\}$ would also equal $\{\{a\}\}$, so that $b = a$, which contradicts $a \ne b$. Suppose $\{c\} = \{a, b\}$. Then $a = b = c$, which also contradicts $a \ne b$. Therefore $\{c\} = \{a\}$, so that $c = a$ and $\{c, d\} = \{a, b\}$. If $d = a$ were true, then $\{c, d\} = \{a, a\} = \{a\} \ne \{a, b\}$, a contradiction. Thus $d = b$ is the case, so that $a = c$ and $b = d$.
Reverse:
$(a, b)_{\text{reverse}} = \{\{b\}, \{a, b\}\} = \{\{b\}, \{b, a\}\} = (b, a)_K$.
If. If $(a, b)_{\text{reverse}} = (c, d)_{\text{reverse}}$, then $(b, a)_K = (d, c)_K$. Therefore $b = d$ and $a = c$.
Only if. If $a = c$ and $b = d$, then $\{\{b\}, \{a, b\}\} = \{\{d\}, \{c, d\}\}$. Thus $(a, b)_{\text{reverse}} = (c, d)_{\text{reverse}}$.
Short: [8]
If: If $a = c$ and $b = d$, then $\{a, \{a, b\}\} = \{c, \{c, d\}\}$. Thus $(a, b)_{\text{short}} = (c, d)_{\text{short}}$.
Only if: Suppose $\{a, \{a, b\}\} = \{c, \{c, d\}\}$. Then $a$ is in the left hand side, and thus in the right hand side. Because equal sets have equal elements, one of $a = c$ or $a = \{c, d\}$ must be the case.
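The same case analysis can be brute-force checked over a small universe of atoms (a sketch only; with integer atoms, the regularity subtleties discussed above for the short pair cannot arise):

```python
def rpair(a, b):
    """Reverse pair (a, b)_reverse = {{b}, {a, b}}."""
    return frozenset({frozenset({b}), frozenset({a, b})})

def spair(a, b):
    """Short pair (a, b)_short = {a, {a, b}}; here an int is never a frozenset."""
    return frozenset({a, frozenset({a, b})})

atoms = range(4)
for pair in (rpair, spair):
    for a in atoms:
        for b in atoms:
            for c in atoms:
                for d in atoms:
                    # characteristic property: pairs equal iff components equal
                    assert (pair(a, b) == pair(c, d)) == (a == c and b == d)
print("characteristic property holds on this universe")
```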
If $a = \{c, d\}$, then by similar reasoning as above, $\{a, b\}$ is in the right hand side, so $\{a, b\} = c$ or $\{a, b\} = \{c, d\}$. If $\{a, b\} = c$ then $c$ is in $\{c, d\} = a$ and $a$ is in $c$, and this combination contradicts the axiom of regularity, as $\{a, c\}$ has no minimal element under the relation "element of." If $\{a, b\} = \{c, d\}$, then $a$ is an element of $a$, from $a = \{c, d\} = \{a, b\}$, again contradicting regularity. Hence $a = c$ must hold. Again, we see that $\{a, b\} = c$ or $\{a, b\} = \{c, d\}$. The option $\{a, b\} = c$ together with $a = c$ implies that $c$ is an element of $c$, contradicting regularity. So we have $a = c$ and $\{a, b\} = \{c, d\}$, and so: $\{b\} = \{a, b\} \setminus \{a\} = \{c, d\} \setminus \{c\} = \{d\}$, so $b = d$.
Quine-Rosser definition
Rosser (1953) [9] employed a definition of the ordered pair due to Quine which requires a prior definition of the natural numbers. Let $\N$ be the set of natural numbers and $x \setminus \N$ be the set of elements of $x$ not in $\N$. Define $$\varphi(x) = (x \setminus \N) \cup \{n+1 : n \in (x \cap \N) \}.$$ Applying this function simply increments every natural number in $x$. In particular, $\varphi(x)$ does not contain the number 0, so that for any sets $x$ and $y$, $$\varphi(x) \ne \{0\} \cup \varphi(y).$$ Define the ordered pair $(A, B)$ as $$(A, B) = \{\varphi(a) : a \in A\} \cup \{\varphi(b) \cup \{0\} : b \in B \}.$$ Extracting all the elements of the pair that do not contain 0 and undoing $\varphi$ yields $A$. Likewise, $B$ can be recovered from the elements of the pair that do contain 0. In type theory and in outgrowths thereof such as the axiomatic set theory NF, the Quine-Rosser pair has the same type as its projections and hence is termed a "type-level" ordered pair. Hence this definition has the advantage of enabling a function, defined as a set of ordered pairs, to have a type only 1 higher than the type of its arguments. This definition works only if the set of natural numbers is infinite. This is the case in NF, but not in type theory or in NFU. J.
Barkley Rosser showed that the existence of such a type-level ordered pair (or even a "type-raising by 1" ordered pair) implies the axiom of infinity. For an extensive discussion of the ordered pair in the context of Quinian set theories, see Holmes (1998). [10]
Morse definition
Morse–Kelley set theory makes free use of proper classes. [11] Morse defined the ordered pair so that its projections could be proper classes as well as sets. (The Kuratowski definition does not allow this.) He first defined ordered pairs whose projections are sets in Kuratowski's manner. He then redefined the pair $$(x, y) = (\{0\} \times s(x)) \cup (\{1\} \times s(y))$$ where the component Cartesian products are Kuratowski pairs of sets and where $$s(x) = \{\emptyset \} \cup \{\{t\} \mid t \in x\}.$$ This renders possible pairs whose projections are proper classes. The Quine-Rosser definition above also admits proper classes as projections. Similarly the triple is defined as a 3-tuple as follows: $$(x, y, z) = (\{0\} \times s(x)) \cup (\{1\} \times s(y)) \cup (\{2\} \times s(z))$$ The use of the singleton set $s(x)$, which has an inserted empty set, allows tuples to have the uniqueness property that if $a$ is an $n$-tuple and $b$ is an $m$-tuple and $a = b$ then $n = m$. Ordered triples which are defined as ordered pairs do not have this property with respect to ordered pairs.
Category theory
A category-theoretic product $A \times B$ in a category of sets represents the set of ordered pairs, with the first element coming from $A$ and the second coming from $B$. In this context the characteristic property above is a consequence of the universal property of the product and the fact that elements of a set $X$ can be identified with morphisms from 1 (a one-element set) to $X$. While different objects may have the universal property, they are all naturally isomorphic.
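The Quine–Rosser encoding and its decoding can be sketched concretely. In this illustration (names mine), sets are Python `frozenset`s and the projections are sets whose elements are themselves sets of natural numbers, so $x \setminus \N$ is always empty and $\varphi$ just shifts every number up:

```python
def phi(x):
    """Increment every natural number in x (here x contains only naturals)."""
    return frozenset(n + 1 for n in x)

def unphi(x):
    return frozenset(n - 1 for n in x)

def qr_pair(A, B):
    """Quine-Rosser pair: tag the B-elements with 0, which phi never produces."""
    return frozenset(phi(a) for a in A) | frozenset(phi(b) | {0} for b in B)

def qr_first(p):
    return frozenset(unphi(e) for e in p if 0 not in e)

def qr_second(p):
    return frozenset(unphi(e - {0}) for e in p if 0 in e)

A = frozenset({frozenset({1}), frozenset({2, 3})})
B = frozenset({frozenset(), frozenset({0})})
p = qr_pair(A, B)
assert qr_first(p) == A and qr_second(p) == B
```

Note how the pair itself is a set of the same "kind" as its projections' elements, which is the type-level property discussed above.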
References
^ Quine has argued that the set-theoretical implementations of the concept of the ordered pair is a paradigm for the clarification of philosophical ideas (see "Word and Object", section 53). The general notion of such definitions or implementations is discussed in Thomas Forster, "Reasoning about theoretical entities".
^ Wiener's paper "A Simplification of the logic of relations" is reprinted, together with a valuable commentary, on pages 224ff in van Heijenoort, Jean (1967), From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge MA, ISBN 0-674-32449-8 (pbk.). van Heijenoort states the simplification this way: "By giving a definition of the ordered pair of two elements in terms of class operations, the note reduced the theory of relations to that of classes".
^ cf introduction to Wiener's paper in van Heijenoort 1967:224
^ cf introduction to Wiener's paper in van Heijenoort 1967:224. van Heijenoort observes that the resulting set that represents the ordered pair "has a type higher by 2 than the elements (when they are of the same type)"; he offers references that show how, under certain circumstances, the type can be reduced to 1 or 0.
^
^ This differs from Hausdorff's definition in not requiring the two elements 0 and 1 to be distinct from a and b.
^ Tourlakis, George (2003), Lectures in Logic and Set Theory. Vol. 2: Set Theory, Cambridge Univ. Press, Proposition III.10.1.
^ For a formal Metamath proof of the adequacy of short, see here (opthreg). Also see Tourlakis (2003), Proposition III.10.1.
^ Rosser, J. Barkley (1953), Logic for Mathematicians, McGraw-Hill.
^ Holmes, Randall (1998), Elementary Set Theory with a Universal Set, Academia-Bruylant. The publisher has graciously consented to permit diffusion of this monograph via the web.
^ Morse, Anthony P. (1965), A Theory of Sets, Academic Press.
Seminars and Colloquiums for the week of April 1, 2019 SPEAKERS Monday Hamza Ruzayqat, University of Tennessee Ibrahim Aslan, University of Tennessee Pawel Grzegrzolka, University of Tennessee Ryan Unger, University of Tennessee Tuesday Tanya Moore, Goodwill of San Francisco, San Mateo, and Marin Counties Wednesday Joan Lind, University of Tennessee Andreas Prohl, Universität Tübingen Thursday Mahir Demir, University of Tennessee Ram Iyer, Texas Tech University Suzanne Lenhart, University of Tennessee and National Institute for Mathematical and Biological Synthesis (NIMBioS) Friday Mahir Demir, Logan Perry, & Tyler Poppenwimer, University of Tennessee Industrial Panel: - Greg Clark (Software Engineer at Google) - Nate Pollesch (Postdoctoral Mathematician at EPA) - Kelly Rooker (Senior Data Scientist I at Johns Hopkins Applied Physics Laboratory) - Kirill Yakovlev (Senior Vice President & Director of Data Science at Fifth Third Bank) Monday, 4/1 DOCTORAL DEFENSE TITLE: Rejection Enhanced Off-Lattice Kinetic Monte Carlo SPEAKER: Hamza Ruzayqat, University of Tennessee TIME: 1:30 PM ROOM: Ayres 405 His committee consists of Professors: Schulze (Chair), Karakashian, Wise, and Xu (MSE). DOCTORAL DEFENSE TITLE: Leptospirosis models: Vaccination of cattle and early detection in humans SPEAKER: Ibrahim Aslan, University of Tennessee TIME: 3:30 PM ROOM: Ayres 123 His committee consists of Professors: Lenhart (chair), Alexiades, Day and Eda (For., Wildlife, & Fish.). DOCTORAL DEFENSE TITLE: Coarse Proximity, Proximity at Infinity, and Coarse Structures SPEAKER: Pawel Grzegrzolka, University of Tennessee TIME: 3:35 PM ROOM: Ayres 308H His committee consists of Professors: Dydak (chair), Brodskiy, Thistlethwaite, and Berry (EECS).
GEOMETRIC ANALYSIS READING SEMINAR TITLE: Perelman’s monotonicity and kappa-noncollapsing SPEAKER: Ryan Unger, University of Tennessee TIME: 3:35 PM-5:30 PM ROOM: Ayres 113 We will derive Perelman’s monotonicity formula for the graviton-dilaton action + Boltzmann entropy under Ricci flow. A consequence of this is kappa-noncollapsing for closed manifolds, which guarantees the existence of tangent flows. Tuesday, 4/2 SIAM STUDENT CHAPTER TALK TITLE: Finding Certainty in Randomness: My Journey with Mathematics SPEAKER: Tanya Moore, Goodwill of San Francisco, San Mateo, and Marin Counties TIME: 2:10 PM-3:10 PM ROOM: Ayres 405 I love finding patterns. But, for a long time I couldn’t find them in my own life. Growing up, life events felt random, chaotic even. Discovering mathematics in college gave me a way of bringing order and new possibility into my life. Professionally, it laid a foundation for me to tackle challenging social problems in health, education, and workforce development. Math inspired me to fully engage life. This talk will explore why even the seemingly random steps in one’s academic and professional journey have magnitude and direction. Tanya’s bio: Prior to joining Goodwill, Tanya’s professional career in local government and higher education focused on leading multi-agency collective impact initiatives aimed at reducing health and education inequities and supporting college to career pathways. After earning a Bachelor's degree in Mathematics from Spelman College, she received doctoral training in Biostatistics at the University of California, Berkeley and remains passionate in her support of women in STEM careers. Tanya is one of the co-founders of the Infinite Possibilities Conference, a conference designed to support, promote and encourage underrepresented minority women in mathematics.
She has been featured in Essence Magazine, Black Enterprise and O, The Oprah Magazine and was recognized as one of the “STEM Women of the Year” by California State Assembly member Nancy Skinner in April 2014. Wednesday, 4/3 ANALYSIS SEMINAR TITLE: Fair Peano Curves, part 2 SPEAKER: Joan Lind, University of Tennessee TIME: 2:30 PM-3:20 PM ROOM: Ayres 113 Given a spanning tree of a rectangular grid in some domain, one can create the so-called Peano curve that winds around the tree. Thus the uniform spanning tree (UST), which is the random family of spanning trees that gives equal weight to each possible spanning tree, gives rise to a random family of Peano curves. Lawler, Schramm, and Werner showed that the scaling limit of these curves is SLE(8). We will consider the notion of fair trees, which will lead to a different probability measure on spanning trees, and we will ask about the scaling limit of the corresponding fair Peano curves. This is joint work with Nathan Albin and Pietro Poggi-Corradini. COMPUTATIONAL and APPLIED MATHEMATICS (CAM) SEMINAR TITLE: Numerical Approximation of the Stochastic Cahn-Hilliard Equation Near the Sharp Interface Limit SPEAKER: Andreas Prohl, Universität Tübingen TIME: 3:35 PM-4:25 PM ROOM: Ayres 113 In this talk I shall discuss a stable time discretization of the stochastic Cahn-Hilliard equation with an additive noise term $\varepsilon^{\gamma}\dot{W}$, where $\gamma >0$, and $\varepsilon>0$ is the interfacial width parameter. For sufficiently small noise (i.e., for $\gamma$ sufficiently large) and sufficiently small time-steps $k\leq k_0(\gamma)$, I shall present arguments which lead to strong error estimates where the parameter $\varepsilon$ only enters polynomially, without using Gronwall’s lemma. This is joint work with D. Antonopoulou (Chester, UK), L. Banas (Bielefeld, Germany), and R. Nuernberg (London, UK).
Friday, 4/5 MATH BIOLOGY SEMINAR SPEAKERS: Mahir Demir, Logan Perry, & Tyler Poppenwimer, University of Tennessee TIME: 11:15 AM-12:05 PM ROOM: Ayres 401 In this week's Math Biology seminar, Mahir Demir, Logan Perry, and Tyler Poppenwimer will begin presenting the paper "Efficacy of Infection Control Interventions in Reducing the Spread of Multidrug-resistant organisms in the Hospital Setting" by Mary Ann Horn, PhD, Case Western Reserve University. (published in Plos One, 2012, vol. 7, no. 2). The discussion of this paper will continue into the next week's seminar (on 4/12), followed on Monday, April 15th by a special seminar in which Dr. Horn will discuss the work with us in person! Dr. Horn will also be giving a NIMBioS seminar on Tues. 4/16 - be sure to watch for announcements about it. If you are interested in being added to the Math Biology Seminar 'BaseCamp' site to receive notices and seminar materials directly, please contact Judy Day at judyday@utk.edu. GRADUATE STUDENT PROFESSIONAL DEVELOPMENT EVENT TITLE: Industry Panel PANELISTS: Greg Clark (Software Engineer at Google) Nate Pollesch (Postdoctoral Mathematician at EPA) Kelly Rooker (Senior Data Scientist I at Johns Hopkins Applied Physics Laboratory) Kirill Yakovlev (Senior Vice President & Director of Data Science at Fifth Third Bank) TIME: 3:35 PM-4:35 PM ROOM: Ayres 405 Former UT math students with industry jobs will share information about their career paths and their advice for current grad students. If you are interested in giving or arranging a talk for one of our seminars or colloquiums, please review our calendar. If you have questions, or a date you would like to confirm, please contact mlangfo5@utk.edu Past notices: Mar. 18, 2019 (Spring break) Winter Break
Summary: The expected number of survivors after a shootout, given as $$\operatorname{E}[\text{survivors}]\approx 0.284051\, n \quad\text{as } n\rightarrow\infty \quad \text{(Tao/Wu - see below)}$$ is, if not correct, almost certainly very close to being correct (see update 2). However, this is disputed by Finch in Mathematical Constants (again, see below for details). The results from Finch are easily replicable in Mathematica or similar, but I was not able to replicate even the partial results in Tao/Wu's paper (despite leaving out the absolute values of $\alpha$ and $\beta,$ which Finch points out as being incorrect - see below for further details), leaving me unsure as to whether I am missing something in my "translation" of the problem into Finch's more modern notation. I should be most grateful if someone could illuminate me further in this matter. Original answer: Based on numerical tests, I would say the expected number of survivors for $n>3$ is $\approx n/3.5$. Trial example test[20] (code below): anim[20,8]: For $1000$ trials, $1\leq n\leq 40$ est[40,10^3]: Note Using RandomReal it is very unlikely that any two distances will be exactly equal, thereby fulfilling the no-isosceles-triangle requirement. Update 1 History of the problem Robert Abilock proposed The Rifle Problem in the American Mathematical Monthly (R. Abilock; 1967): $n$ riflemen are distributed at random points on a plane. At a signal, each one shoots at and kills his nearest neighbor. What is the expected number of riflemen who are left alive? This was reposed as the Vicious neighbor problem (R. Tao and F. Y. Wu; 1986), where the answer of $\approx 0.284051 n$ remaining riflemen (or $\approx n/3.52049$) was given as the solution in $2$ dimensions.
This agrees closely with tests of sample-size $10^5:$ ListLinePlot[{const[#, 100000] & /@ Range@40}, GridLines -> {{}, {1/0.284051}}] However, in Mathematical Constants, Nearest-neighbor graphs (S.R.Finch; 2008), Finch states that In [Vicious neighbor problem], the absolute value signs in the definitions of $\varphi$ and $\psi$ were mistakenly omitted. $\dots$ Given the discrepancy between our estimate $\dots$ and their estimate $\dots$, it seems doubtful that their approximation $\beta(2) = 0.284051\dots$ is entirely correct. So the question (for the bounty) is then reduced to: Has any progress been made since 2008 on the problem? In short, is Tao and Wu's calculation incorrect, and if so, is a more precise estimate of $\beta(2)$ known? Update 2 I have also tested the problem in other regular polygons (circle, triangle, pentagon, etc.) for $10^5$ trials, $1\leq n \leq 30$, and it seems that the comment by @D.Thomine below is in agreement with the data gathered, in that the constant for any bounded $2$-dimensional region appears to be the same for large enough $n$, i.e., independent of the global geometry of the domain: while further simulations, using $2\cdot 10^6$ trials for $n=30$ and $n=100$, yielded the following results: with the final averages after $2\cdot 10^6$ trials, compared to Tao/Wu's result, being: \begin{align}&n=30:&0.284090\dots\\&n=100:&0.284066\dots\\&\text{Tao/Wu:}&0.284051\dots\\\end{align} indicating that the Tao/Wu result of $\operatorname{E}[\text{survivors}]\approx 0.284051\, n$ for large $n$ is, if not correct, almost certainly very close to being correct.
Upper and lower bounds Following up on @mathreadler's suggestion that it may be interesting to study the spread of data, I include the following as a minor contribution to the topic: Since arrangements like this are possible (and their circular counterparts, however unlikely through random point selection), clearly the lower bound for odd $n$ is $1$ and for even $n$ it is $0$ (since the points can be paired). Finding an upper bound is less obvious, though. Looking at this sketch proof for the upper bound at $n=10$ provided by @JohnSmith in the comments, it is easy to see that the upper bound is $7:$ and by employing the same method, upper bounds for larger $n$ can be constructed: Assuming one can repeat this process indefinitely, it is likely that an upper bound for $n\geq 6$ is then $n-\lfloor n/3\rfloor:$ which has been set against the data for $2\cdot 10^4$ trials (red dots - see data below). Regarding density of spread (again with $2\cdot 10^4$ trials), the following plot is produced by ListPlot3D[Flatten[data, 1], ColorFunction -> "LakeColors"] (courtesy of @AlexeiBoulbitch here), and regarding max. density of spread along the $x/z$ axes from the above plot, the following is produced by With[{c = 0.284051}, Show[ListLinePlot[Max@#[[All, 3]] & /@ data, PlotRange -> All], Plot[{(1 + c)/(n - (1 + c)^2)^(1/2)}, {n, 0, 100}, PlotRange -> All, PlotStyle -> {Dashed, Red}]]] It is tempting to conjecture the max height of the distribution to be $\approx (c+1)/\sqrt{n-(c+1)^2},$ but of course this is largely empirical.
test[nn_] := With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]}, With[{cc = ({aa[[#]], First@Nearest[DeleteCases[aa, aa[[#]]], aa[[#]]]} & /@ Range@nn)}, With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]}, With[{ee = Complement[Range@nn, dd]}, Column[{StringJoin[ToString["Expected: "], ToString[nn/3.5]], StringJoin[ToString["Survivors: "], ToString[Length@ee], ToString[": "], ToString[ee]], Show[Graphics[{Gray, Line@# & /@ cc}, Frame -> True, PlotRange -> {{0, 1}, {0, 1}}, Epilog -> {Text[Style[(Range@nn)[[#]], 30/Floor@Log@nn], aa[[#]]] & /@ Range@nn}], ImageSize -> 300]}]]]]] est[mm_, trials_] := ListLinePlot@({Quiet@With[{nn = #}, (N@Total@(With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]}, With[{cc = ({aa[[#]], First@Nearest[DeleteCases[aa, aa[[#]]], aa[[#]]]} & /@ Range@nn)}, With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]}, With[{ee = Complement[Range@nn, dd]},Length@ee]]]] & /@ Range@trials)/trials)] & /@ Range@mm, Range@mm/3.5}) anim[nn_, range_] := ListAnimate[test@nn & /@ Range@range, ControlPlacement -> Top, DefaultDuration -> nn] const[mm_, trials_] := With[{ans = Quiet@With[{nn = #}, SetPrecision[(Total@(With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]}, With[{cc = ({aa[[#]],First@Nearest[DeleteCases[aa, aa[[#]]], aa[[#]]]} & /@ Range@nn)}, With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]}, With[{ee = Complement[Range@nn, dd]}, Length@ee]]]] & /@ Range@trials)/trials), 20]] &@ mm}, mm/ans] act[nn_, trials_] := With[{aa = Partition[RandomReal[{0, 1}, 2 nn], 2]}, With[{cc = ({aa[[#]], First@Nearest[DeleteCases[aa, aa[[#]]], aa[[#]]]} & /@ Range@nn)}, With[{dd = Table[Position[aa, cc[[p, 2]]][[1, 1]], {p, nn}]}, With[{ee = Complement[Range@nn, dd]}, Length@ee]]]] & /@ Range@trials data = Quiet@ Table[With[{tt = 2*10^4}, With[{aa = act[nn, tt]}, With[{bb = Sort@DeleteDuplicates@aa}, Transpose@{ConstantArray[nn, Length@bb], bb, (Length@# & /@ Split@Sort@aa)/tt}]]], {nn, 1, 100}];
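For readers without Mathematica, the same Monte Carlo experiment can be sketched in Python. This is an illustrative re-implementation of the simulation above, not the original code; the function names are mine.

```python
import random

def survivors(n, rng=random):
    """One trial: n riflemen at uniform random points in the unit square;
    each shoots his nearest neighbour; return the number left unshot."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    shot = set()
    for i, (px, py) in enumerate(pts):
        # index of the nearest other point; exact ties have probability zero
        j = min((k for k in range(n) if k != i),
                key=lambda k: (pts[k][0] - px) ** 2 + (pts[k][1] - py) ** 2)
        shot.add(j)
    return n - len(shot)

def survivor_fraction(n, trials, seed=1):
    """Average fraction of survivors over many independent trials."""
    rng = random.Random(seed)
    return sum(survivors(n, rng) for _ in range(trials)) / (trials * n)

# For moderate n and enough trials this hovers near 0.284,
# the Tao/Wu constant discussed above.
```

Note that for $n=3$ the outcome is deterministic (the closest pair shoot each other and the third shoots into that pair), so exactly one rifleman survives in every trial.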
As already mentioned,$$(\spadesuit) \qquad C(\Bbb{T}) \rtimes_{\alpha} \Bbb{Z}_{2} \cong \{f \in C([0,1] \to {\text{M}_{2}}(\Bbb{C})) \mid \text{$ f(0) $ and $ f(1) $ are diagonal}\}.$$Hence, by the definition of a continuous field of $ C^{*} $-algebras, $ C(\Bbb{T}) \rtimes_{\alpha} \Bbb{Z}_{2} $ is a continuous field of $ C^{*} $-algebras over the compact Hausdorff space $ [0,1] $ with the following structure: The fibers over $ 0 $ and $ 1 $ are the $ C^{*} $-subalgebra of $ {\text{M}_{2}}(\Bbb{C}) $ consisting of all diagonal matrices, which is isomorphic to $ \Bbb{C} \oplus \Bbb{C} $. The fibers over $ (0,1) $ are $ {\text{M}_{2}}(\Bbb{C}) $. The generating $ * $-subalgebra of cross-sections is simply the set on the right-hand side of the relation $ (\spadesuit) $. This agrees with a $ 1976 $ result by Ru-Ying Lee, which states that a $ C^{*} $-algebra $ A $ is a continuous field over a locally compact Hausdorff space $ X $ if and only if there exists a continuous open map from the primitive-ideal space of $ A $, $ \text{Prim}(A) $, onto $ X $. As $ (\Bbb{Z}_{2},\Bbb{T},\alpha) $ is a second-countable transformation group, certain results in the theory of transformation-group $ C^{*} $-algebras show that $ \text{Prim}(C(\Bbb{T}) \rtimes_{\alpha} \Bbb{Z}_{2}) $ is homeomorphic to a closed and bounded interval of $ \Bbb{R} $. Of course, any $ C^{*} $-algebra is a continuous field over a single point, but this is uninteresting.
Consider the equation $$|2^x-3^y|=1$$ in the unknowns $x \in \mathbb{N}$ and $y \in \mathbb{N}$. Is it possible to prove that the only solutions are $(1,1)$, $(2,1)$ and $(3,2)$? Yes. Levi ben Gerson (1288-1344), also known as Gersonides, proved this. The Gersonides proof can be found here. EDIT: miracle notes that the link is broken. This link works (for now). But to keep everyone happy, I'll paste in the relevant parts of what's there: In 1342, Levi ben Gerson, also known as Gersonides (1288-1344), proved that the original four pairs of harmonic numbers are the only ones that differ by 1. Here's how his proof went (using modern notation). [note --- "harmonic numbers" are those that can be written as $2^n3^m$] If two harmonic numbers differ by 1, one must be odd and the other even. The only odd numbers are powers of 3. Hence one of the two harmonic numbers must be a power of 3 and the other a power of 2. The task then involves solving two equations: $2^n = 3^m + 1$ and $2^n = 3^m - 1$. Gersonides had the idea of looking at remainders after division of powers of 3 by 8 and powers of 2 by 8. For example, 27 divided by 8 gives a remainder of 3. For powers of 3, the remainders all turn out to be 1 or 3, depending on whether the power is even or odd. The remainders for powers of 2 are 1, 2, 4, then 0 for all powers higher than 2. For $2^n = 3^m + 1$, when $m$ is odd, $3^m$ has remainder 3, and $2^n = 3^m + 1$ must then have the remainder 4, so $n = 2$ and $m = 1$. That gives the consecutive harmonic numbers 3, 4. When $m$ is even, the equation gives the consecutive harmonic numbers 1, 2. For $2^n = 3^m - 1$, when $m$ is odd, $3^m$ has remainder 3, so $2^n = 3^m - 1$ has remainder 2; as a result, $n = 1$ and $m = 1$, to give the consecutive harmonic numbers 2, 3. The final case, when $m$ is even, is a little trickier and requires substituting $2k$ for $m$, then solving the equation $2^n = 3^{2k} - 1 = (3^k - 1)(3^k + 1)$. That gives the consecutive harmonic numbers 8, 9.
QED. This is an easy consequence of Catalan's conjecture, which was famously proved by Preda Mihăilescu in 2002 (so now it's also called Mihăilescu's theorem): $3^2-2^3=1$ is the only solution to $x^a - y^b = 1$ for integers $a, b, x, y \ge 2$. Assume that $x>3$ and $y>2$. $3^2\equiv 1\pmod{8}$. Since $3\not\equiv -1\pmod{8}$, we know that $3^y\not\equiv -1\pmod{8}$. Thus, if $|2^x-3^y|=1$, we must have $2^x-3^y=-1$ and $y$ must be even. Thus, we get that $$ 2^x=(3^{y/2}-1)(3^{y/2}+1)\tag{1} $$ The only factors of a power of $2$ are other powers of $2$, and the only powers of two that differ by $2$ are $2$ and $4$, but since $y>2$, we have $3^{y/2}-1>2$ and $3^{y/2}+1>4$. Therefore, there are no $x>3$ and $y>2$ so that $|2^x-3^y|=1$. Case 1: If $2^x=3^y+1$ then $x \ge 1$. If $y=0$ then $x=1$. If $y \ge 1$ then $3^y+1 \equiv 1 \pmod{3}$. Therefore $2^x \equiv 1 \pmod{3}$. Hence $x=2x_1$ with $x_1 \in \mathbb{N}^*$. The equation is equivalent to $$3^y= (2^{x_1}-1)(2^{x_1}+1)$$Since $2^{x_1}+1>2^{x_1}-1$ and $\gcd (2^{x_1}-1,2^{x_1}+1) \ne 3$ then $2= 3^{m} (3^{n-m}-1)$ with $2^{x_1}-1=3^m,2^{x_1}+1=3^n$. Thus, $m=0,n=1$. We obtain $x_1=1$, i.e. $x=2$. Thus, $y=1$. Case 2. If $3^y=2^x+1$. If $x=0$ then there is no natural number $y$. If $x=1$ then $y=1$. If $x \ge 2$ then $3^y \equiv 1 \pmod{4}$. Thus, $y$ is even, let $y=2y_1$ with $y_1 \in \mathbb{N}^*$. The equation is equivalent to $$2^x=(3^{y_1}-1)(3^{y_1}+1)$$Thus, $2=2^n(2^{m-n}-1)$ with $2^m=3^{y_1}+1,2^n=3^{y_1}-1$. It follows that $n=1,m=2$. Thus, $y_1=1$, i.e. $y=2$. We get $x=3$. The answer is $\boxed{ (x,y)=(1,0),(1,1),(2,1),(3,2)}$.
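The elementary arguments above can also be sanity-checked by brute force. Here is a short Python sketch (mine, not from the answers) that scans all small exponent pairs with exact integer arithmetic:

```python
def small_solutions(max_exp):
    """All pairs (x, y) with 1 <= x <= max_exp and 0 <= y <= max_exp
    satisfying |2^x - 3^y| = 1, found by exhaustive search."""
    return [(x, y)
            for x in range(1, max_exp + 1)
            for y in range(max_exp + 1)
            if abs(2 ** x - 3 ** y) == 1]

# Scanning up to exponent 200 finds only (1,0), (1,1), (2,1), (3,2),
# consistent with the boxed answer above.
```

Of course a finite scan proves nothing on its own; the point is only to confirm that no further small solutions were overlooked.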
Contents MAP Estimation by Landis The Big Picture Given an observation $ X $ used to estimate an unknown parameter $ \theta $ of the distribution $ f_X(x) $ (i.e. $ f_X(x) = $ some function $ g(\theta) $). Consider three expressions (distributions): 1. Likelihood $ p(X; \theta) $ (discrete) $ f_X(x; \theta) $ (continuous), used for the MLE: $ \hat{\theta}_{ML} = \arg\max_{\theta} f_X(X; \theta) $ 2. Prior $ P(\theta) $ (discrete) $ f_{\theta}(\theta) $ (continuous) Indicates some prior knowledge as to what $ \theta $ should be. "Prior" refers to before seeing the observation. 3. Posterior $ p(\theta | x) $ (discrete) $ f_{\theta | X}(\theta | x) $ (continuous) "Posterior" refers to after seeing observations. Use the posterior to define the maximum a posteriori (MAP) estimate: $ \hat{\theta}_{MAP} = \arg\max_{\theta} f_{\theta | X}(\theta | X) $ Using Bayes' rule, we can expand the posterior $ f_{\theta | X}(\theta | X) $: $ f_{\theta | X}(\theta | X) = \frac{f_{X|\theta}(X|\theta)\, f_{\theta}(\theta)}{f_X(X)} $ Since the denominator does not depend on $ \theta $: $ \hat{\theta}_{MAP} = \arg\max_{\theta} f_{X | \theta}(X | \theta)\, f_{\theta}(\theta) $ So What? So, what does this mean in a nutshell? Essentially, an ML estimator is as follows: "I know that a random variable follows some sort of pattern, and this pattern can be adjusted by changing a variable called a parameter. (I have no clue what this parameter should be.) I run some experiment to get a sample output from the random variable. Based on my experiment, I would like to find the pattern for the random variable that best matches my data. So, I will find the pattern that gives my data the most likely chance of occurring." A MAP estimator is as follows: "I know that a random variable follows some sort of pattern, and this pattern can be adjusted by changing a variable called a parameter. (I have some idea what this parameter should be, so I will treat the parameter as a random variable.
Then, the chance that the parameter will have some value will be expressed by its PDF/PMF.) I run some experiment to get a sample output from the random variable. Based on my experiment and on my prior ideas of what the parameter should be, I would like to find the pattern that best matches both my data and my prior knowledge. So, I will find the pattern that maximizes the product of two things: how well the parameter matches the data, and how well the parameter meets my expectations of what the parameter should be." Example 1: Continuous $ X \sim f_X(x) = \lambda e^{-\lambda x} $ but we don't know the parameter $ \lambda $. Let us assume, however, that $ \lambda $ is actually itself exponentially distributed, i.e. $ \lambda \sim f_{\lambda}(\lambda) = \Lambda e^{-\Lambda\lambda} $ where $ \Lambda $ is fixed and known. Find $ \hat{\lambda}_{MAP} $. Solution: $ \hat{\lambda}_{MAP} = \arg\max_{\lambda} f_{\lambda | X}(\lambda | X) = \arg\max_{\lambda} f_{\lambda}(\lambda)\, f_{X|\lambda}(X | \lambda) = \arg\max_{\lambda} \Lambda e^{-\Lambda\lambda}\, \lambda e^{-\lambda X} $ Setting the derivative to zero: $ \frac{d}{d\lambda} \lambda \Lambda e^{-\lambda(\Lambda + X)} = 0 $ $ \Lambda e^{-\lambda(\Lambda + X)} - \lambda \Lambda (\Lambda + X) e^{-\lambda(\Lambda + X)} = 0 $ $ 1 - \lambda(\Lambda + X) = 0 $ $ \hat{\lambda}_{MAP} = \frac{1}{\Lambda + X} $ Recall from homework: $ \hat{\lambda}_{ML} = \frac{1}{X} $ Prior: $ f_{\lambda}(\lambda) = \Lambda e^{-\Lambda\lambda} $
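As a quick numerical check of the worked example, the following Python sketch (function names are mine) maximizes the unnormalized posterior on a grid and compares the result against the closed form $\hat{\lambda}_{MAP} = 1/(\Lambda + X)$:

```python
import math

def posterior_unnormalized(lam, X, Lam):
    """f_lambda(lam) * f_{X|lambda}(X|lam); the 1/f_X(X) factor is
    constant in lam, so it can be dropped for the argmax."""
    return Lam * math.exp(-Lam * lam) * lam * math.exp(-lam * X)

def lambda_map_numeric(X, Lam, points=200_000, hi=10.0):
    """Grid search for the maximizer of the unnormalized posterior."""
    grid = [hi * k / points for k in range(1, points + 1)]
    return max(grid, key=lambda l: posterior_unnormalized(l, X, Lam))

# e.g. X = 2, Lam = 1.5: the grid maximum lands near 1/(1.5 + 2) = 0.2857...
```

The grid spacing bounds how far the numeric maximizer can sit from the true one, so agreement to a few decimal places is all this check can show.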
As far as I know, it is not possible to drive Color Ramp or Mapping nodes from another socket (but I am not super experienced in drivers). However, I have managed to re-create the Color Ramp and Mapping nodes with math nodes, which you can plug inputs into directly. Color Ramp Unfortunately there is no way to create a group node exactly like the Color Ramp, with addable and removable color swatches. To get around this I have created a node with two movable swatches; you can then combine multiple of these nodes together to get the functionality of multiple swatches. The theory: The two input colors are plugged directly into a Mix RGB node. The two Pos inputs need to be sent through a function and plugged into the mix factor. The position of the first swatch needs to be mapped to $0$ to get just the first color out of the mix node. The position of the second swatch needs to be mapped to $1$ to get just the second color out of the mix node. The math: Here's a graph to visualize what we are trying to do; on the x-axis is the input factor of the color ramp, on the y-axis is the desired output, and $a$ and $b$ are the positions of the two swatches. With some simple algebra we can find the equation of the line to be: $$y = \frac{1}{b-a}x + \frac{a}{a-b}$$ The math nodes below are simply replicating this equation. The final Add node also has Clamp checked to clamp the output to the interval $[0,1]$, which is what the mix node accepts. Mapping node The mapping node has three functions: it can Translate (move), Rotate, and Scale the texture. Since often not all of these are needed I will create separate group nodes for each of these functions. If you don't have a decent understanding about how texture coordinates work you may want to read my answer here.
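Before moving on to the Mapping node details, the two-swatch ramp math above reduces to a very small function. This is a Python sketch of the math-node chain (not a Blender API call), with the same clamp as the final Add node:

```python
def ramp_factor(x, a, b):
    """y = x/(b-a) + a/(a-b): maps x = a to 0 and x = b to 1,
    then clamps to [0, 1] exactly like the final Add node."""
    y = x / (b - a) + a / (a - b)
    return min(1.0, max(0.0, y))

def mix_rgb(c1, c2, fac):
    """Linear blend of two RGB triples, as a Mix RGB node set to Mix."""
    return tuple((1.0 - fac) * u + fac * v for u, v in zip(c1, c2))
```

Chaining several `ramp_factor`/`mix_rgb` pairs reproduces a ramp with more than two swatches, mirroring the node-combining trick described above.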
The basic theory of manipulating texture coordinates is to separate the components of the vector with a Separate XYZ node, manipulate the components individually, then combine them back into a vector with a Combine XYZ node. Mapping Node - Translation Translation is easy: to translate $V$ to $V^\prime$ just add the desired amounts to each of the components of $V$. In the above node setup I actually used Subtract nodes instead of Add so positive values will move the texture to the right instead of left. Mapping Node - Scaling Scaling is the same as translating, just multiply the individual coordinates by the scale factor instead of adding. Mapping Node - Rotation Rotation is a little more complicated and I won't take the time to derive the equations here, but they are pretty standard pre-calc formulas. Here are the formulas for rotation. Around the x-axis $$\begin{aligned}x^\prime &= x\\y^\prime &= y\cos{\theta} - z\sin{\theta}\\z^\prime &= y\sin{\theta} + z\cos{\theta}\end{aligned}$$ Around the y-axis $$\begin{aligned}x^\prime &= x\cos{\theta} - z\sin{\theta}\\y^\prime &= y\\z^\prime &= x\sin{\theta} + z\cos{\theta}\end{aligned}$$ Around the z-axis$$\begin{aligned}x^\prime &= x\cos{\theta} - y\sin{\theta}\\y^\prime &= x\sin{\theta} + y\cos{\theta}\\z^\prime &= z\end{aligned}$$ The first two nodes in each of the rotation setups convert the degree input to radians, which is what the trig math nodes want. And finally, here's a .blend file with all the above node groups.
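The rotation formulas above translate directly into code. Here is a Python sketch of the z-axis case (mine, for illustration), including the degree-to-radian conversion that the node group performs first:

```python
import math

def rotate_z(v, degrees):
    """Rotate the vector (x, y, z) about the z-axis using
    x' = x cos t - y sin t, y' = x sin t + y cos t, z' = z."""
    x, y, z = v
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

# Rotating (1, 0, 0) by 90 degrees about z gives (approximately) (0, 1, 0).
```

The x-axis and y-axis variants follow the same pattern with the roles of the components permuted as in the formulas above.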
Another (quick) question: Let $T \subset N$ be a coalition. The unanimity game on $T$ is the game $(N, u_T)$ where $u_T(S)=1$ if $T \subset S$ and $u_T(S)=0$ if $T \not\subset S$. In other words, a coalition $S$ has worth $1$ (is winning) if it contains all players of $T$, and worth $0$ (is losing) if this is not the case. Calculate the core and the Shapley value for $(N, u_T)$. Then the core consists of $x_{n-m} \geq 0$ with $m \in [0,n-1]$. And then we know $x_{n-0} + x_{n-1} + \dots + x_{n-(n-1)} = 1$ (efficiency). So we could denote the core as $x_{n-m} + x_{n-m'} \geq 0$ plus the efficiency condition. Am I right thinking the Shapley value should be $$\frac{1}{(n-1)!} \cdot \left(\frac{1}{n},\frac{1}{n},\dots,\frac{1}{n}\right)=\left(\frac{1}{n!},\frac{1}{n!},\dots,\frac{1}{n!}\right)\tag{1}$$ Is this ok? Thanks!
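Not part of the original question, but a proposed Shapley value can always be checked numerically for small $n$. The following Python sketch enumerates all player orderings (feasible only for small $n$) and applies the definition to the unanimity game:

```python
from itertools import permutations

def shapley(n, v):
    """Shapley value via the average marginal contribution of each
    player over all n! orderings (brute force; small n only)."""
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        coalition = set()
        for i in order:
            before = v(frozenset(coalition))
            coalition.add(i)
            phi[i] += v(frozenset(coalition)) - before
    return [p / len(orders) for p in phi]

def unanimity(T):
    """Characteristic function of the unanimity game u_T."""
    T = frozenset(T)
    return lambda S: 1.0 if T <= S else 0.0

# For n = 4 and T = {0, 2} this returns (1/2, 0, 1/2, 0):
# each member of T gets 1/|T|, everyone else 0.
```

Comparing such a brute-force computation against a closed-form guess like $(1)$ is a quick way to test it.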
Edit: I realized that the key piece of information that I need is question 1, and so I'd like to rephrase this post: What are the possible eigenvalues of nonnegative integer matrices? Any answer to this question would be appreciated and checkmarked. Original Question: My question is related to this question about integer nonnegative matrices but goes in a slightly different direction. Like the previous poster, my question comes from solving linear recursions (specifically, computing the discrete modulus of a product finite subdivision rule acting on a grid). Given $A$ a square matrix with nonnegative integer entries and $v$ a column vector of the same dimension, we can use the Jordan canonical form to get a closed expression for $A^n(v)$. If we sum the entries of $A^n(v)$, we get a function of the form $f(n)=a_1 P_1(n)\lambda_1^n+\dots+a_k P_k(n)\lambda_k^n$, where the $\lambda$'s are the distinct eigenvalues and each $P_i$ is a monic polynomial in $n$. My question is, what are the possible values for the $a_i$ and the $\lambda_i$? In particular: Can the $\lambda_i$ be any algebraic integer? Can the $a_i$ be non-integers? (My real question): Given an algebraic integer $\lambda$, can we construct two matrices $A_1$ and $A_2$ such that the ratio of their associated polynomials $\frac{f_1(n)}{f_2(n)}$ (where, as above, $f_i(n)$ is the sum of the entries of $A_i^n v$) has limit $\lambda$? The limit of such a fraction will be 0 unless the 'largest terms' of each polynomial have the same magnitude; my worry is that it would be impossible to get some algebraic integers in this way because they have Galois conjugates of equal or larger size. But it seems that the column vector $v$ might allow one to 'cancel out' unwanted eigenvalues. Is this possible? Is it even possible to get $\sqrt{3}$ in this way? Thank you for your help! The first two questions seem like they would be easily answerable by experts in matrix theory, but google searches have led to nothing.
I appreciate in advance your help. An example of $f_1(n)$ Let $A=\left[ \begin{array}{cc} 1 & 2 \\ 0 & 2 \end{array} \right]$. It can be diagonalized as $A=\left[ \begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{cc} 1 & 0 \\ 0 & 2 \end{array} \right] \left[ \begin{array}{cc} 1 & -2 \\ 0 & 1 \end{array} \right]$. Now, let $v = \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] $. Then $A^{n} v= \left[ \begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{cc} 1 & 0 \\ 0 & 2^{n} \end{array} \right] \left[ \begin{array}{cc} 1 & -2 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] = \left[ \begin{array}{cc} 1 & 2^{n+1} -2 \\ 0 & 2^{n} \end{array} \right] \left[ \begin{array}{c} 2 \\ 3 \end{array} \right] = \left[ \begin{array}{c} 3(2^{n+1})-4 \\ 3(2^{n}) \end{array} \right]$. Adding all the entries of this matrix together, we see that the growth function $f(n)$ is $3(2^{n+1} + 2^{n})-4=9(2^{n})-4$.
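The closed form above can be double-checked by iterating the matrix-vector product directly; a small Python sketch:

```python
def total_after(A, v, n):
    """Sum of the entries of A^n v, via n repeated matrix-vector products
    in exact integer arithmetic."""
    x = list(v)
    for _ in range(n):
        x = [sum(row[j] * x[j] for j in range(len(x))) for row in A]
    return sum(x)

A = [[1, 2], [0, 2]]
v = [2, 3]
# Agrees with the closed form f(n) = 9 * 2**n - 4 derived above,
# e.g. n = 0 gives 5 and n = 1 gives 14.
```

This kind of direct check is handy when experimenting with the eigenvalue questions posed earlier, since it avoids any floating-point issues in the diagonalization.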
Q. A 15 g mass of nitrogen gas is enclosed in a vessel at a temperature of $27^{\circ}$C. The amount of heat transferred to the gas, so that the rms velocity of the molecules is doubled, is about: [Take R = 8.3 J/(K mol)] Solution: $Q = nC_v \Delta T$, as the gas is in a closed vessel. Doubling the rms velocity quadruples the temperature, so $\Delta T = 4T - T = 3T$ with $T = 300$ K: $Q = \frac{15}{28} \times\frac{5R}{2} \times\left(4T -T\right) \approx 10000 \text{ J} = 10 \text{ kJ}$ Questions from JEE Main 2019 5. One of the two identical conducting wires of length L is bent in the form of a circular loop and the other one into a circular coil of N identical turns. If the same current is passed in both, the ratio of the magnetic field at the centre of the loop $(B_L)$ to that at the centre of the coil $(B_C)$, i.e., $\frac{B_L}{B_C}$, will be 10. The magnetic field associated with a light wave is given, at the origin, by $B = B_0 [\sin(3.14 \times 10^7)ct + \sin(6.28 \times 10^7)ct]$. If this light falls on a silver plate having a work function of 4.7 eV, what will be the maximum kinetic energy of the photoelectrons? $(c = 3 \times 10^{8} ms^{-1}, h = 6.6 \times 10^{-34} J\text{-}s)$ Physics Most Viewed Questions 1. An AC ammeter is used to measure current in a circuit. When a given direct current passes through the circuit, the AC ammeter reads 3 A. When another alternating current passes through the circuit, the AC ammeter reads 4 A. Then the reading of this ammeter, if DC and AC flow through the circuit simultaneously, is 9. A cannon of mass 1000 kg, located at the base of an inclined plane, fires a shell of mass 100 kg in a horizontal direction with a velocity of 180 $kmh^{-1}$. The angle of inclination of the inclined plane with the horizontal is 45$^{\circ}$. The coefficient of friction between the cannon and the inclined plane is 0.5.
The height, in metre, to which the cannon ascends the inclined plane as a result of the recoil is (g = 10 $ms^{-2}$)
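Plugging the numbers from the nitrogen-gas heat-transfer solution above into Python (a sketch of the arithmetic only, with the values from the worked solution):

```python
R = 8.3                    # J/(K mol), as given in the problem
mass, molar = 15.0, 28.0   # grams of N2 and its molar mass
T = 300.0                  # 27 C in kelvin
n_moles = mass / molar
Cv = 5.0 * R / 2.0         # diatomic gas at constant volume
dT = 4.0 * T - T           # doubling v_rms quadruples T, so dT = 3T
Q = n_moles * Cv * dT
# Q comes out near 1.0e4 J, i.e. roughly 10 kJ, matching the solution.
```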
Q. Ge and Si diodes start conducting at 0.3 V and 0.7 V respectively. In the following figure, if the Ge diode connections are reversed, the value of $V_o$ changes by: (assume that the Ge diode has a large breakdown voltage) Solution: Initially Ge & Si are both forward biased, so current will effectively pass through the Ge diode with a drop of 0.3 V. If Ge is reversed, then current will flow through the Si diode, hence an effective drop of (0.7 - 0.3) = 0.4 V is observed.
$$\mathbf D = \varepsilon \mathbf E$$ I don't understand the difference between $\mathbf D$ and $\mathbf E$. When I have a plate capacitor, a different medium inside will change $\mathbf D$, right? $\mathbf E$ is only dependent on the charges, right? $\mathbf E$ is the fundamental field in Maxwell's equations, so it depends on all charges. But materials have lots of internal charges you usually don't care about. You can get rid of them by introducing the polarization $\mathbf P$ (which is the material's response to the applied $\mathbf E$ field). Then you can subtract the effect of internal charges and you'll obtain equations just for free charges. These equations will look just like the original Maxwell equations but with $\mathbf E$ replaced by $\mathbf D$ and charges by just free charges. Similar arguments hold for currents and magnetic fields. With this in mind, you see that you need to take $\mathbf D$ in your example because $\mathbf E$ is sensitive also to the polarized charges inside the medium (about which you don't know anything). So the $\mathbf E$ field inside will be $\varepsilon$ times lower than for the capacitor in vacuum. Like @Marek has said above, the electric field $E$ is the fundamental field, and is in some sense the more physical. However, Maxwell's equations have a neater geometric meaning if you throw in the "auxiliary" fields $D$ (and $H$ for $B$). I usually tell my students the following version of electromagnetism: There are 4 fields in electromagnetism. We call them $E$, $D$, $B$ and $H$. All of these fields are independent and equally important. Furthermore, they actually embody geometric concepts which are manifest in the integral equations: $$\oint_S D \cdot dS = Q(S)$$ $$\oint_S B \cdot dS = 0$$ $$\oint_{\partial S} E \cdot dl + \partial_t \int_S B \cdot dS = 0$$ $$\oint_{\partial S} H \cdot dl - \partial_t \int_S D \cdot dS = \int_S j \cdot dS$$ Note that: $E$ and $B$ form an independent pair, as do $D$ and $H$.
$E$ and $B$ do not depend on the sources $Q$ and $j$, but $D$ and $H$ do. $D$ and $B$ are integrated through surfaces, and represent flux through those surfaces. (The correct mathematical gadgets to describe these are actually 2-forms.) $E$ and $H$ are integrated along lines, and end up representing the potential difference across the ends (or circulation in a loop). The latter pair connect the change of flux through surfaces with certain circulations. These equations form Maxwell's equations. They do not uniquely determine a physical situation. In particular, they need to be augmented with constitutive relations which describe (macroscopic) material properties. For example, we might have linear, isotropic, homogeneous (LIH) media, in which case we would have $D = \epsilon E$ and $B = \mu H$. But in general, we might have $\epsilon$ and $\mu$ being tensors, varying as functions of time and space, or even depending on the fields $E$, $B$, etc.! These constitutive relations could be arbitrarily complicated, and indeed much of the new field of meta-material engineering is all about creating micro-structures which would yield interesting and useful constitutive relations at the macroscopic scale. More commonly, a scenario where the linearity breaks down is in ferromagnets/ferroelectrics. There is usually another constitutive relation, linking current and electric field. In LIH media this is called Ohm's Law: $J = \sigma E$. There is one more equation, which is simply always true, which is the conservation of charge; in the notation above, $\partial_t Q(S) + \oint_S j \cdot dS = 0$ for a closed surface $S$. Edit: some additional observations: In a relativistically covariant form, we can merge $E$ and $B$ together to get the 2-form $F$, and $D$ and $H$ to get its Hodge dual $\star F$. The latter in general depends on the metric we choose. For linear materials it's possible to hide the effects of the material polarisation/magnetisation as a background metric.
Incidentally, in this form, the energy is given by $F \wedge \star F$, so it is clear that energy/momentum should be "opposing" pairs, i.e. the Poynting vector is $N = E \times H$. In numerical simulations, it's doubly important that we obey Maxwell's equations --- failure to do so leads to highly unphysical things like superluminal propagation of waves or failure to conserve energy or momentum. It has been found that the key is to be exact with respect to the integral forms of the equations, and put all of the discretisation error into not meeting the material constitutive properties. The electrical field $\mathbf E$ is the fundamental one. In principle, you don't need the electrical displacement field $\mathbf D$; everything can be expressed in terms of the field $\mathbf E$ alone. This works well for the vacuum. However, to describe electromagnetic fields in matter, it is convenient to introduce another field $\mathbf D$. Maxwell's original equations are still valid, but in matter, you have to deal with additional charges and currents that are induced by the electric field and that also induce additional electric fields. (More precisely, one usually makes the approximation that the electric field induces tiny dipoles, which are described by the electric polarization $\mathbf P$.) A little bit of calculation shows that you can conveniently hide these additional charges by introducing the electrical displacement field $\mathbf D$, which then fulfills the equation $$ \nabla \cdot \mathbf D = \rho_\text{free} .$$ The point is that this equation involves only the "external" ("free") charge density $\rho_\text{free}$. Charges that accumulate inside the block of matter have already been taken into account by the introduction of the $\mathbf D$ field. To understand which field is "real", write a charge's equation of motion. The force in it is determined by the real field there. In a medium it is still E: $m\vec{a} = q\vec{E}$.
In the case of the magnetic field, it is $\vec{B}$ that determines the force: $m\vec{a} = q \vec{v} \times \vec{B}/c$. $D$ is the electric displacement field, commonly called the flux density, and $E$ is the field intensity. There is a fundamental difference between them which will be understood to a certain extent as you go through the following answer. Consider a point charge of $Q$ coulombs. This means that the number of flux lines emitted by the charge is $Q$ coulombs. Let the hypothetical sphere shown in the figure have a radius $r$. Then $D$ is given by \begin{equation} D = \frac{Q}{4\pi r^2}. \end{equation} That is, $D$ is the number of flux lines passing per area. So, to get an intuitive grasp, interpret $Q$ as a number (number of flux lines) and $D$ as a number density (number of flux lines per area). Now, what about $E$? $E$, which is the electric field intensity, is actually a force ($E$ is defined as force per coulomb) per flux line, that is, the force carried by each flux line. So, the relation $D = \varepsilon E$ connects the number density of flux lines, $D$, with a force-per-flux-line term, $E$. Now, the permittivity $\varepsilon$ is defined as the ability of a medium to pass lines of electric flux through it. That is a qualitative way of putting it. Quantitatively, it can be seen as the ratio $\frac{D}{E}$; that is, $\varepsilon$ is the number of electric flux lines (unit is coulomb, as mentioned earlier) passing through unit area for unit force per flux line (which is unit field intensity). That is, say, $\varepsilon = 5$ (this value of $\varepsilon$ is hypothetical and considered only for the sake of explanation) means there are 5 flux lines in a unit area considered normal to an electric field, with each flux line carrying $1\,\mathrm{N}$ of force.
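As a numerical sanity check on the relation $D = \varepsilon E$ described above, here is a small Python sketch (the charge and radius are arbitrary example values, and vacuum is assumed, so $\varepsilon = \varepsilon_0$): the flux density $D = Q/(4\pi r^2)$ and the field intensity $E$ differ exactly by the permittivity.

```python
import math

eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
k = 1.0 / (4.0 * math.pi * eps0)  # Coulomb constant

Q = 1e-9   # charge in coulombs (made-up example value)
r = 0.5    # radius of the hypothetical sphere, metres (made-up)

D = Q / (4.0 * math.pi * r**2)   # flux density: flux lines per area
E = k * Q / r**2                 # field intensity: force per coulomb

# the two descriptions are linked by the permittivity: D = eps0 * E
assert abs(D - eps0 * E) < 1e-9 * D
```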
Fujimura's problem Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles. Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain "hyper-optimistic" version of DHJ(3). n=0 It is clear that [math]\overline{c}^\mu_0 = 1[/math]. n=1 It is clear that [math]\overline{c}^\mu_1 = 2[/math]. n=2 It is clear that [math]\overline{c}^\mu_2 = 4[/math] (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]). n=3 Deleting (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math] shows that [math]\overline{c}^\mu_3 \geq 6[/math]. In fact [math]\overline{c}^\mu_3 = 6[/math]: just note that (3,0,0) or something symmetrical to it has to be removed, leaving 3 triangles which do not intersect, so 3 more removals are required. n=4 [math]\overline{c}^\mu_4=9:[/math] The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c equal to 0 has 9 elements and doesn't contain any equilateral triangles. Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for each [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].
n=5 [Cleanup required here] [math]\overline{c}^\mu_5=12[/math] The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c equal to 0 has 12 elements and doesn't contain any equilateral triangles. Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for each x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if |S|>12, S doesn't contain any of these. S can only contain 2 points in each of the following equilateral triangles: (3,1,1),(0,4,1),(0,1,4) (4,1,0),(1,4,0),(1,1,3) (4,0,1),(1,3,1),(1,0,4) (1,2,2),(0,3,2),(0,2,3) (3,2,0),(2,3,0),(2,2,1) (3,0,2),(2,1,2),(2,0,3) So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math]. General n [Cleanup required here] A lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), given by all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set.
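The small cases above can be confirmed by exhaustive search. Here is a brute-force Python sketch (our own illustration, feasible only for small n), using the convention consistent with the n=2 example that an equilateral triangle is a triple (a+r,b,c), (a,b+r,c), (a,b,c+r) with r>0:

```python
from itertools import combinations

def fujimura(n):
    """Brute-force the size of the largest triangle-free subset of Delta_n."""
    # all lattice points (a,b,c) with a+b+c = n
    pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    index = {p: i for i, p in enumerate(pts)}
    # all "upward" equilateral triangles: base point sums to n - r
    tris = []
    for r in range(1, n + 1):
        s = n - r
        for a in range(s + 1):
            for b in range(s + 1 - a):
                c = s - a - b
                tris.append((index[(a + r, b, c)],
                             index[(a, b + r, c)],
                             index[(a, b, c + r)]))
    # search subset sizes from |Delta_n| downwards; first hit is the maximum
    for k in range(len(pts), -1, -1):
        for sub in combinations(range(len(pts)), k):
            chosen = set(sub)
            if not any(t[0] in chosen and t[1] in chosen and t[2] in chosen
                       for t in tris):
                return k

# matches the values derived above
assert [fujimura(n) for n in range(5)] == [1, 2, 4, 6, 9]
```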
Orthogonal means that the inner product is zero. For example, when using the dot product as your inner product, two perpendicular vectors are orthogonal. Orthonormal means these vectors have also been normalized so that their length is 1. Orthogonal vectors are useful for creating a basis for a space. This is because every point in the space can be represented as a (linear) combination of the vectors. So for example in 3D space, x=[0,0,1], y=[0,1,0] and z=[1,0,0] form an orthonormal basis. No component of x can be represented by components of the other vectors. This is because they are linearly independent. This is a different notion from statistical independence, however. Correlation is different again. There are different ways of representing the correlation of two vectors (random variables X and Y). $corr(X,Y) = \frac{cov(X,Y)}{\sigma_x \sigma_y} = \frac{E[(X - \mu_x)(Y - \mu_y)]}{\sigma_x \sigma_y}$ Covariance is a measure of how two variables change together. Another measure of correlation is cross-correlation. This is a measure of the similarity of two functions (or vectors). These concepts are used in many different ways in image processing. For example, cross-correlation is used in template matching. When looking for a small image inside a much larger image, the small template image is slid over the large image and the cross-correlation is computed for each position. Locations with a high cross-correlation are likely to contain the image we are looking for. The concept of linearly independent component vectors is used in principal component analysis (PCA). PCA takes a cloud of points in space and calculates a set of orthogonal basis vectors to represent them. Independent component analysis (ICA) is used for separating mixed signals, e.g. separating a single speaker at a noisy cocktail party. ICA uses properties of the signal correlation to separate the signals.
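The sliding-window template matching described above can be sketched in a few lines of plain Python (a 1-D toy example with made-up numbers; real image pipelines usually normalize the correlation so bright regions don't dominate):

```python
def cross_correlation(signal, template):
    """Sliding (unnormalized) cross-correlation of a template over a signal."""
    m = len(template)
    return [sum(s * t for s, t in zip(signal[i:i + m], template))
            for i in range(len(signal) - m + 1)]

# A toy 1-D "image": the template is hidden starting at index 3.
template = [1.0, 3.0, 1.0]
signal = [0.0, 0.2, 0.1, 1.0, 3.0, 1.0, 0.3, 0.0]

scores = cross_correlation(signal, template)
best = max(range(len(scores)), key=scores.__getitem__)
assert best == 3  # the highest score locates the template
```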
This is the question: Use the integral test to determine the convergence of $\sum_{n=1}^{\infty}\frac{1}{1+2n}$. I started by writing: $$\int_1^\infty\frac{1}{1+2x}dx=\lim_{a \rightarrow \infty}\left(\int_1^a\frac{1}{1+2x}dx\right)$$ I then decided to use u-substitution with $u=1+2x$ to solve the improper integral. I got the answer wrong and resorted to my answer book, and this is where they went after setting $u=1+2x$: $$\lim_{a \rightarrow \infty}\left(\frac{1}{2}\int_{3}^{1+2a}\frac{1}{u}du\right)$$ And the answer goes on... What I can't figure out is where the $\frac{1}{2}$ came from when the u-substitution began, and also why the lower bound of the integral was changed to 3. Can someone tell me?
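For what it's worth, both things come straight from the substitution itself; spelled out:

```latex
% differentiating the substitution gives the 1/2:
u = 1 + 2x \;\Longrightarrow\; du = 2\,dx \;\Longrightarrow\; dx = \tfrac{1}{2}\,du
% the limits transform along with the variable:
x = 1 \mapsto u = 1 + 2\cdot 1 = 3, \qquad x = a \mapsto u = 1 + 2a
\int_1^a \frac{dx}{1+2x}
  = \frac{1}{2}\int_3^{1+2a} \frac{du}{u}
  = \frac{1}{2}\ln\frac{1+2a}{3}
  \;\longrightarrow\; \infty \quad (a \to \infty)
% so the integral diverges and, by the integral test, so does the series.
```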
According to the spectral theorem every normal matrix can be represented as the product of a unitary matrix $\mathbb{U}$ and a diagonal matrix $\mathbb{D}$ $$\mathbb{A} = \mathbb{U}^H\mathbb{D}\mathbb{U}$$ meaning that every normal matrix is diagonalizable. Does it necessarily mean that the unitary matrix has to be composed from the eigenvectors of $\mathbb{A}$? I presume not, because then the eigenvectors of every normal matrix would form an orthonormal set (rows and columns of a unitary matrix are orthonormal in $\mathbb{C}^n$). So am I right that only the set of eigenvectors of a Hermitian (or symmetric, while in $\mathbb{R}^n$) matrix is orthonormal? According to the spectral theorem every normal matrix can be represented as the product of a unitary matrix $\mathbb{U}$ and a diagonal matrix $\mathbb{D}$ $$\mathbb{A} = \mathbb{U}^H\mathbb{D}\mathbb{U}$$ meaning that every normal matrix is diagonalizable. An $n\times n$ matrix $A$ over the field $\mathbf{F}$ is diagonalizable if and only if there is a basis of $\mathbf{F}^n$ of eigenvectors of $A$. This occurs if and only if there exists an invertible $n\times n$ matrix $Q$ such that $Q^{-1}AQ$ is a diagonal matrix; the columns of $Q$ form the basis made up of eigenvectors, and conversely, if you take a basis made up of eigenvectors and arrange them as columns of a matrix, then the matrix is invertible and conjugating $A$ by that matrix will yield a diagonal matrix. An $n\times n$ matrix $A$ with coefficients in $\mathbb{R}$ or $\mathbb{C}$ is orthogonally diagonalizable if and only if there is an orthonormal basis of eigenvectors of $A$ for $\mathbb{R}^n$ (or $\mathbb{C}^n$, respectively). An $n\times n$ matrix with coefficients in $\mathbb{C}$ is orthogonally (unitarily) diagonalizable over $\mathbb{C}$ if and only if it is normal; a real square matrix is orthogonally diagonalizable over $\mathbb{R}$ if and only if it is symmetric.
"Unitary" is usually reserved for complex matrices, with "orthogonal" being the corresponding term for real matrices. Yes, the columns of $U^\dagger$ are always eigenvectors of $A$ whenever $AA^\dagger = A^\dagger A$ and $A=U^\dagger D U$ with a diagonal $D$. It's not hard to see why. Take $v_n$ to be the $n$-th column of $U^\dagger$. Then $$ U v_n = (UU^\dagger)_{n{\rm -th\,\,column}} = 1_{n{\rm -th\,\,column}}$$ so $$ A v_n = U^\dagger D U^\dagger v_n = U^\dagger D\cdot 1_ {n{\rm -th\,\,column}} = D_{nn} U^\dagger \cdot 1_{n{\rm -th\,\,column}} = D_{nn} U_{n{\rm -th\,\,column}} = D_{nn} v_n $$ Looking at the first and last expression of this long equation, we see that $v_n$ is an eigenstate of $A$ with eigenvalue $D_{nn}$ - a diagonal element of the matrix $D$. I have only used that $U$ is unitary; indeed, this is equivalent to the basis' being orthonormal. More precisely, one may always find an orthogonal basis of eigenvectors of any normal matrix $A$ (the only extra work is that we must make orthogonal every higher-dimensional space corresponding to the same eigenvalue) and by normalizing the vectors, we get an orthonormal basis of eigenvectors of $A$. The basis vectors are the same thing as columns of $U^\dagger$. You can always find $n$ orthonormal eigenvectors for a normal $n\times n$ matrix $A$. Indeed, since you know $A=U^HDU$, if $e_1,\dots,e_n$ is the standard orthonormal basis of $\mathbb{C}^n$ then these vectors are eigenvectors of the diagonal marix $D$, hence $U^He_1,\dots,U_He_n$ (the columns of $U^H$) are orthonormal eigenvectors of $A$. Here is what I think is correct: Normal matrices are matrices that have orthogonal eigenvectors. Hermitian matrices are normal matrices that have real eigenvalues. So this answers your first question in positive: Yes, the unitary matrix in your decomposition has the same eigenvectors as your original matrix.
Equivalence of Well-Ordering Principle and Induction/Proof/PCI implies WOP Theorem That is: Principle of Complete Induction: Given a subset $S \subseteq \N$ of the natural numbers which has these properties: $0 \in S$ $\set {0, 1, \ldots, n} \subseteq S \implies n + 1 \in S$ then $S = \N$. implies: Well-Ordering Principle: Every non-empty subset of $\N$ has a minimal element. Proof To save space, we will refer to the Principle of Complete Induction as PCI and to the Well-Ordering Principle as WOP. Let us assume that the PCI is true. Let $\O \subset S \subseteq \N$, that is, let $S$ be a non-empty subset of $\N$. Aiming for a contradiction, suppose that: $(C): \quad S$ has no minimal element. Let $P \paren n$ be the propositional function: $n \notin S$ We have that $0$ is a lower bound for $\N$, and hence for $S$. Suppose $0 \in S$. Then $0$ would be the minimal element of $S$. This contradicts our supposition $(C)$, namely, that $S$ does not have a minimal element. So $0 \notin S$ and so $P \paren 0$ holds. Suppose $P \paren j$ for $0 \le j \le k$. That is: $\forall j \in \closedint 0 k: j \notin S$ where $\closedint 0 k$ denotes the closed interval between $0$ and $k$. Now if $k + 1 \in S$, then since no smaller natural number is in $S$, $k + 1$ would be the minimal element of $S$, contradicting $(C)$. So $k + 1 \notin S$ and so $P \paren {k + 1}$. Thus we have proved that: $(1): \quad P \paren 0$ holds $(2): \quad \paren {\forall j \in \closedint 0 k: P \paren j} \implies P \paren {k + 1}$ So PCI implies that $P \paren n$ holds for all $n \in \N$, that is, $S = \O$. This contradicts the assumption that $S$ is non-empty. Hence $S$ has a minimal element after all, and so $\N$ satisfies the Well-Ordering Principle. $\blacksquare$ Sources 1951: Nathan Jacobson: Lectures in Abstract Algebra: I. Basic Concepts ... (previous) ... (next): Introduction $\S 4$: The natural numbers 1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): Chapter $2$: Integers and natural numbers: $\S 2.1$: The integers 2000: James R. Munkres: Topology (2nd ed.) ... (previous) ... (next): $1$: Set Theory and Logic: $\S 4$: The Integers and the Real Numbers
Consider an atom to consist of an electron orbiting a positively charged ion core (with one or more electrons associated to it). If you excite the electron to larger and larger values of the principal quantum number $n$, the classical electron orbit becomes larger and larger as well (in fact it scales as $n^2$). At sufficiently large distance, the positively charged ion core behaves as a point charge (like a single proton) and at long range you basically have the hydrogen problem with a different mass (assuming perfect shielding of the core electrons). In order to find the overall solution to this problem, one has to connect the long-range and short-range behavior of the system. One way of doing this is by considering the system as a collision problem at negative energy (that is, a bound system). The effect of the nonhydrogenic ion core then results in a scattering phase shift between an incoming and outgoing electron. It can be shown that the solutions to this problem have a very similar appearance to the Bohr formula, namely: $$E_{n,\ell}=E_\text{IE}-\frac{hcR_A}{(n-\delta_\ell)^2}$$ where $R_A=R_\infty(1-m_e/m_A)$ is the mass-corrected Rydberg constant, $E_\text{IE}$ is the ionization energy and $\delta_\ell$ is the so-called quantum defect for electrons with orbital angular momentum $\ell$. The quantum defect is related to the phase shift induced by the core and is different for $s, p, d, f$ etc. electrons. In principle, the quantum defect is a function of the binding energy of the electron to the ionic core, but in many cases this dependence may be neglected (in particular for large $n$). What does this mean in practice? Let us for example look at the $n$s levels of the sodium atom. We find that $R_\text{Na}=109734.697205$ cm$^{-1}$ and $E_\text{IP}(\text{Na})=41449.451$ cm$^{-1}$. The second column in the table below gives the energy (in cm$^{-1}$, from the NIST website) and the third column gives the binding energy of the electron (also in cm$^{-1}$).
The last column gives the so-called effective quantum number, that is $n^*=n-\delta_\ell$, with $\ell=0$ for $s$ orbitals:

State   Energy (cm^-1)   Binding energy (cm^-1)   n*
3s          0.000            -41449.451           1.63
4s      25739.999            -15709.452           2.64
5s      33200.673             -8248.778           3.65
6s      36372.618             -5076.833           4.65
7s      38012.042             -3437.409           5.65
8s      38968.510             -2480.941           6.65
9s      39574.850             -1874.601           7.65
10s     39983.270             -1466.181           8.65
11s     40271.396             -1178.055           9.65

As you can see, the quantum numbers take the form of the hydrogen solution if we start counting the principal quantum number from the 3$s$ level and take $\delta_0\approx -0.65$. Calculating the quantum defect from first principles is difficult, as it requires ab initio calculations and then converting the energy levels to effective quantum numbers from which the defects can be extracted. However, experimentally these numbers are very convenient, and one may use this quantum-defect theory for instance to determine the ionization energy of atoms and even simple molecules very accurately by extrapolating Rydberg series.
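The effective quantum numbers in the table follow from inverting the Rydberg formula, $n^* = \sqrt{R_\text{Na}/E_b}$; a quick Python check against a few rows of the table:

```python
import math

R_NA = 109734.697205  # mass-corrected Rydberg constant for sodium, cm^-1

def n_star(binding_energy_cm):
    """Effective quantum number n* from E_b = R_Na / n*^2."""
    return math.sqrt(R_NA / binding_energy_cm)

# binding energies (cm^-1) of some ns levels, taken from the table above
binding = {"3s": 41449.451, "4s": 15709.452, "5s": 8248.778, "6s": 5076.833}
effective = {k: round(n_star(v), 2) for k, v in binding.items()}

# reproduces the last column of the table
assert effective == {"3s": 1.63, "4s": 2.64, "5s": 3.65, "6s": 4.65}
```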
SuNem \(\mathrm{Nem}_{q}(z)=z+z^3+qz^4\) It is assumed that \(q\!>\!0\), although the formula can be used for some other values of the parameters too. SuNem is a specific solution of the transfer equation \(\mathrm{Nem}_{q}\big( \mathrm{SuNem}_{q}(z)\big)=\mathrm{SuNem}_{q}(z\!+\!1)\). It is assumed that \(\mathrm{SuNem}_{q}(0)=1\). Also, a specific asymptotic behaviour at infinity is assumed: \(\mathrm{SuNem}_{q}(z) = {\displaystyle \frac{1}{\sqrt{-2z}}}\left(1+O\left(\frac{1}{\sqrt{-2z}} \right)\right)\) for any fixed phase \(\mathrm{Arg}(z)\) different from zero. For any \(\varepsilon>0\), the formula is valid for any large \(|z|\) such that \(|\mathrm{Arg}(z)|>\varepsilon\). Along the real axis, SuNem shows fast growth from zero at \(-\infty\) to plus infinity at \(+\infty\). Asymptotic expansion Function SuNem is constructed from its asymptotic expansion. For the superfunction \(F\) of the Nemtsov transfer function \(T=\mathrm{Nem}_{q}\), it can be obtained from the transfer equation \(T(F(z))=F(z+1)\) Keeping some positive integer number \(M\) of terms, the asymptotic solution can be written as follows: \(\displaystyle \tilde F(z) = \frac{1}{\sqrt{-2z}} \left(1+ \sum_{m=1}^{M} \frac{P_m(\ln(-z))} {(-2z)^{m/2}} \right)\) where \(\displaystyle P_m(z)=\sum_{n=0}^{\mathrm{IntegerPart}(m/2)} A[m,n]\, z^n\) Substitution of \(\tilde F\) into the transfer equation gives the coefficients \(A\).
These coefficients can be calculated with the Mathematica code below: Mathematica generator of the algorithm The first 18 terms of the asymptotic representation of the superfunction \(F\) can be computed with the Mathematica software, using the code below: T[z_]=z+z^3+q z^4 P[m_, L_] := Sum[a[m, n] L^n, {n, 0, IntegerPart[m/2]}] a[1, 0] = -q; a[2, 0] = 0; m = 2; F[m_,z_] = (-2 z)^(-1/2) (1 + Sum[P[n, Log[-z]]/(-2 z)^(n/2), {n, 1, 2}]); s[m]=Numerator[Normal[Series[(T[F[m,-1/x^2]] - F[m,-1/x^2+1]) 2^((m+1)/2)/x^(m+3), {x,0,0}]]] sub[m] = Extract[Solve[s[m]==0, a[m,1]], 1]; SUB[m] = sub[m] For[m = 3, m < 18, F[m, z_] = ReplaceAll[(-2 z)^(-1/2) (1+Sum[P[n, Log[-z]]/(-2 z)^(n/2), {n,1,m}]), SUB[m-1]]; s[m] = Numerator[Normal[Series[(T[F[m,-1/x^2]]-F[m,-1/x^2+1]) 2^((m+1)/2)/x^(m+3),{x,0,0}]]]; t[m] = Collect[Numerator[ReplaceAll[s[m], Log[x] -> L]], L]; u[m] = Table[ Coefficient[t[m] L, L^n] == 0, {n, 1, 1 + IntegerPart[m/2]}]; tab[m] = Table[a[m, n], {n, 0, IntegerPart[m/2]}]; Print[sub[m] = Simplify[Extract[Solve[u[m], tab[m]], 1]]]; SUB[m] = Join[SUB[m - 1], sub[m]]; m++]; For[m=1, m<18, For[n=0,n<(m+1)/2, A[m, n] = TeXForm[ReplaceAll[a[m, n], sub[m]]]; Print["APQ[", m, "][", n, "]=", A[m, n], ";"] n++]; m++]; Evaluation of superfunction First, a superfunction of the Nemtsov function is constructed that does not yet satisfy the requirement on its value at zero. The idea is to use the asymptotic expansion \(\tilde F\) of the superfunction in the region where it provides a good approximation, displacing the argument of the superfunction into this region using the transfer equation. The superfunction \(\mathrm{SuNe}_q\) of the Nemtsov function \(\mathrm{Nem}_q\) appears as the limit \(\displaystyle \mathrm{SuNe}_q(z)=\lim_{n \rightarrow \infty} \mathrm{Nem}_q^{\,n} (\tilde F(z\!-\!n))\) An explicit plot of function \(\mathrm{SuNe}_q\) is shown in the figure at right for \(q=-1, -0.5, 0, 0.5, 1, 2, 3~\).
Then, the function SuNem appears with the appropriate displacement of the argument: \(\mathrm{SuNem}_q(z)=\mathrm{SuNe}_q(x_0(q)+z)\) where the displacement \(x_0=x_0(q)\) is the real solution of the equation \(\mathrm{SuNe}_q(x_0)\!=\!1\). This solution is shown in the figure at left. In order to show the general trend of function \(x_0\), the graphic is extended a little bit into the range of negative \(q\). Inverse function The inverse function \(\mathrm{AuNem}_q = \mathrm{SuNem}_q^{-1}\) satisfies the Abel equation \(\mathrm{AuNem_{q}} \big( \mathrm{Nem_{q}}(z)\big) = \mathrm{AuNem_{q}}(z)\!+\!1\) Iterates of the Nemtsov function \(\mathrm{Nem}^n(z)=\mathrm{SuNem}(n+\mathrm{AuNem}(z))\)
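As a sanity check on the assumed asymptotic behaviour, one can verify numerically (here in Python, with the example value q=1) that the leading term \(1/\sqrt{-2z}\) satisfies the transfer equation far out on the negative real axis, up to corrections of higher order than \(F(z)^3\):

```python
import math

def nem(z, q):
    """Nemtsov transfer function Nem_q(z) = z + z^3 + q z^4."""
    return z + z**3 + q * z**4

def f_leading(z):
    """Leading asymptotic term of the superfunction, F(z) ~ 1/sqrt(-2z)."""
    return 1.0 / math.sqrt(-2.0 * z)

q = 1.0        # example value; the construction assumes q > 0
z = -1.0e6     # far out on the negative real axis

# transfer equation defect: Nem(F(z)) should approximate F(z+1)
residual = abs(nem(f_leading(z), q) - f_leading(z + 1.0))
# with only the leading term kept, the defect is much smaller than F(z)^3
assert residual < 1e-2 * f_leading(z)**3
```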
Q. Half a mole of an ideal monoatomic gas is heated at a constant pressure of 1 atm from $20{^{\circ}C}$ to $90^{\circ}C$. The work done by the gas is close to: (Gas constant R = 8.31 J/mol.K) Solution: $W = P\Delta V = nR\Delta T = \frac{1}{2} \times 8.31 \times 70 \approx 291\ \mathrm{J}$
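Spelling out the arithmetic of the solution (a trivial Python check):

```python
R = 8.31        # gas constant, J/(mol K)
n = 0.5         # half a mole
dT = 90 - 20    # K; a temperature *difference* is the same in K and deg C

# work done by the gas in an isobaric process: W = P dV = n R dT
W = n * R * dT
assert round(W) == 291  # about 291 J
```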
The definition of $f(n) = O(g(n))$ (for $n \to \infty$) is that there are $N_0, c > 0$ such that $f(n) \leq c\, g(n)$ for all $n \geq N_0$. In your case, pick e.g. $N_0 = 2$; then you have $10 n^3 + 3 n < 10 n^3 + 3 n^3 = 13 n^3$, and $c = 13$ works. The definition of $f(n) = \Omega(g(n))$ (for $n \to \infty$) is that there are $N_0, c > 0$ such that $f(n) \geq c\, g(n)$ for all $n \geq N_0$. Pick e.g. $N_0 = 5$; then $10 n^3 + 3 n > 10 n^3$, and $c = 10$ works. Now, $f(n) = \Theta(g(n))$ if both $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$, and you are done. Note the $N_0$, $c$ don't have to be the same in the two bounds (usually at least $c$ is different).
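A quick empirical spot-check of the two bounds (not a proof, of course — just evaluating the inequalities over a finite range):

```python
# f(n) = 10 n^3 + 3 n, squeezed between 10 n^3 and 13 n^3
f = lambda n: 10 * n**3 + 3 * n

for n in range(2, 1000):
    assert f(n) <= 13 * n**3   # the O(n^3) bound with c = 13, N0 = 2
    assert f(n) >= 10 * n**3   # the Omega(n^3) bound with c = 10
```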
H-Quincy: Fair Scheduling for Hadoop Clusters 2013-09-14 16:49 H-Quincy implements the paper Quincy: Fair Scheduling for Distributed Computing Clusters on Hadoop, which improves the MapReduce scheduler by replacing the default queue-based one with a flow-based one. A min cost flow is calculated and updated to assign map tasks across the cluster, according to the size of the data split and the communication overhead in the cluster's hierarchy. git clone https://github.com/puncsky/H-Quincy.git You can either build from source code or use the JAR directly. Build from source code: Replace your $HADOOP_HOME/src/mapred/org/apache/hadoop/mapred with files in src/. Enter $HADOOP_HOME and build with ant. Use the JAR directly: Replace your $HADOOP_HOME/hadoop-core-{version}.jar with hadoop-core-1.0.4-SNAPSHOT.jar The Quincy paper is rather theoretically organized and involves a large number of mathematical details, which makes it hard to understand. The following sections explain our implementation. Figure 1 displays an outline of our architecture. There exist three kinds of nodes and accordingly three levels of hierarchy in the cluster. Computing nodes are connected via a rack switch, and placed in the same rack. Rack switches are connected via a core switch. Core switches and rack switches do not undertake computing work but can still be represented in the Hadoop system as nodes. Figure 1: A sample architecture with simplified preferred lists. As we know, Hadoop takes a master/slave architecture, which includes a single master and multiple worker nodes. The master node in general consists of a JobTracker and a NameNode. Each slave worker consists of a TaskTracker and a DataNode. Nevertheless, the master can also play the role of a slave at the same time. The JobTracker assigns tasks to the TaskTrackers for execution, and then collects the results. The NameNode manages the index for blocks of data stored in DataNodes across the Hadoop Distributed File System.
In our cluster, each computing node is both a TaskTracker and a DataNode, which means they are all slaves. And we select one of them as a master, which is both a JobTracker and a DataNode simultaneously. The master maintains the relationship with slaves through heartbeats, whose overhead is negligible compared to the data transfer or the execution of tasks. There may be many jobs sharing the cluster resources at the same time. When a job comes into the JobTracker's job queue, the JobTracker re-sorts the queue by the priority and the start time of these jobs. A job is composed of map tasks and reduce tasks. When the job comes out of the queue and starts to run, the JobTracker will analyze the input data's distribution over those TaskTrackers and initialize a list of preferred tasks for each TaskTracker, rack switch, and core switch, as shown in Figure 1. A task will occur on the preferred list of some node if its data splits are stored in that node or in any of its child nodes. Then the JobTracker's scheduler matches tasks to the TaskTrackers and launches them to run the tasks on their own newly-assigned lists. At the same time, the JobTracker keeps collecting status information from TaskTrackers until all the tasks finish. If a failure happens in a TaskTracker, the JobTracker will restart the task from that TaskTracker or enable a new TaskTracker to execute the task. The scheduler can kill a task on a TaskTracker with preemption if there is a more suitable arrangement. After the initialization of preferred lists, the JobTracker packs a series of actions, including the tasks waiting for execution, into the response to the heartbeat from the matched TaskTracker. The matched TaskTracker receives the heartbeat response and adds the actions to its task queue. In the task assignment, the JobTracker's task scheduler first calculates the max workload for every job, and leaves certain padding slots for speculative tasks.
A speculative task backs up some running task, in case the running task is too slow and impedes the job's progress. Map tasks are assigned before reduce tasks. Data-local and rack-local tasks are assigned before non-local tasks. The default setup for Hadoop is a queue-based greedy scheduler. The preferred task lists of every node can be viewed as queues. Each computing node has a queue of tasks which can be executed without pulling data from other places. The computing nodes in a rack share a rack queue, so that a computing node can execute tasks by pulling data splits from other nodes in the same rack once its local list has been exhausted. Since racks are connected with a core switch, racks also share a global queue. For every assignment, a node will execute previously failed tasks first, then the non-running local and non-local tasks, and finally the speculative tasks. When only one job is running on the cluster, that job will use the entire cluster. What will happen if multiple jobs are submitted? The default sorting by priority and start time only ensures that more significant and earlier-submitted jobs are dequeued first. However, the Hadoop fair scheduler arranges jobs into pools. Computing resources are divided fairly between these pools. A pool can be occupied by only one user (by default) or shared by a user group. Within the pool, jobs can be scheduled evenly or first-come-first-served. Assume we have $M$ computers and $K$ running jobs, and job $j$ has $N_{j}$ tasks in total. Each job $j$ will be allocated $A_{j} = \min(\lfloor M/K\rfloor, N_{j})$ task slots. If there are still remaining slots, divide them evenly. After finishing its running tasks, a TaskTracker will follow the new allocation $A_{j}$. However, if the process is not preemptive, the running tasks will not be affected when new jobs are submitted, irrespective of the changed allocation $A_{j}$.
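The allocation rule above can be sketched in a few lines of Python (the round-robin hand-out of leftover slots is one plausible reading of "divide them evenly"; the function name is ours):

```python
def fair_allocation(M, task_counts):
    """Sketch of the fair-scheduler slot allocation described above.

    M: total task slots; task_counts[j]: number of tasks N_j of job j.
    Each job first gets A_j = min(floor(M/K), N_j); leftover slots are then
    handed out round-robin to jobs that can still use them.
    """
    K = len(task_counts)
    alloc = [min(M // K, n) for n in task_counts]
    leftover = M - sum(alloc)
    progress = True
    while leftover > 0 and progress:
        progress = False
        for j, n in enumerate(task_counts):
            if leftover > 0 and alloc[j] < n:
                alloc[j] += 1
                leftover -= 1
                progress = True
    return alloc

# 10 slots, three jobs with 8, 2 and 3 tasks: base shares are [3, 2, 3],
# and the 2 leftover slots go to the only job that can absorb them.
assert fair_allocation(10, [8, 2, 3]) == [5, 2, 3]
```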
If fairness is ensured with preemption, running tasks will be killed when a new quota $A_{j}$ shows up. The Hadoop default scheduler sets up a wait time so that a portion of the cluster can wait to get better locality, preventing a job's tasks from becoming sticky to a TaskTracker. Quincy (Quincy: Fair Scheduling for Distributed Computing Clusters) introduces a new framework transforming the scheduling into a global min cost max flow problem. Running a particular task on some machine incurs a data calculation cost and, potentially, a data transfer cost. Killing a running task also incurs a wasted-time cost. If different kinds of costs can be expressed in the same unit, then we can investigate an algorithm to minimize the total cost of the scheduling. In a flow network, a directed graph $G=(V, E)$ has $source \in V$ and $sink \in V$. For each edge $(u,v)\in E$, there are $capacity(u,v)\in\mathbb{N}$, $flow(u,v)\in\mathbb{N}$ and $cost(u,v)\in\mathbb{R}$. The problem is to calculate the min cost flow $\min\big(\sum_{(u,v)\in E}flow(u,v)\cdot cost(u,v)\big)$ The Edmonds-Karp algorithm is used to calculate the min cost flow in $O(V\cdot E ^ 2)$ in our implementation. Figure 2 shows the graph with the same topology as in Figure 1. Since supplies come from a variety of sources – task nodes and unscheduled nodes – the graph is a multi-source single-sink flow. Our implementation adds a virtual source to transform the flow into a single-source one. Figure 2: Min-Cost Max Flow Graph Each task node has a supply of 1, so $capacity(source, task) = 1$. The unscheduled node is used to control the fairness. Tasks flowing to the unscheduled node will not be assigned to computing nodes at this time. Each job has exactly one unscheduled node with $$capacity(source, unscheduled) = F_j - N_j$$ where $F_j$ is the max number of running tasks job $j$ may have. $N_j$ is the number of TaskTrackers assigned to this job.
From each task node, there are edges to the core switch, the preferred rack, and the preferred computing nodes. By default, every split of data has three replicas, so the number of preferred computing nodes is usually 3. So we have $$capacity(task, coreSwitch) = 1$$ $$capacity(task, preferredRackSwitch) = 1$$ $$capacity(task, preferredComputingNode) = 1$$ From the unscheduled node, there is only one edge to the sink with $capacity(unscheduled, sink) = F_j - E_j$, where $E_j$ is the min number of running tasks job $j$ may have. From the core switch, there are edges to every rack with capacities of $$capacity(coreSwitch, rackSwitch) = numberOfTaskTrackersInThatRack$$ From each rack switch, there are edges to every computing node with capacity $capacity(rackSwitch, computingNode) = 1$. From each computing node, there is only one edge with $capacity(computingNode, sink) = numberOfTaskSlots$. The number of task slots is 2 by Hadoop's default for map tasks. The value is 1 for reduce tasks. The cost of scheduling task $t_n ^ j$ (task $n$ of job $j$) onto an arbitrary computing node through the core switch is $\alpha_n ^ j = \psi R ^ X(t_n ^ j) + \xi X ^ X(t_n ^ j)$, where $\psi$ is the cost to transfer one GB across a rack switch, $\xi$ is the cost to transfer one GB across the core switch, and $(R ^ X(t_n ^ j), X ^ X(t_n ^ j))$ are the upper bounds of the data size transferred across a rack switch and across the core switch. The cost of scheduling a task onto a preferred rack is $\rho ^ j_{n,l} = \psi R ^ R_l(t_n ^ j) + \xi X ^ R_l(t_n ^ j)$. The cost of scheduling a task onto a preferred computer is $\gamma ^ j_{n,m} = \psi R ^ C_m(t_n ^ j) + \xi X ^ C_m(t_n ^ j)$. However, if that computer is currently executing the same task, the cost should be $\gamma ^ j_{n,m} = \psi R ^ C_m(t_n ^ j) + \xi X ^ C_m(t_n ^ j) - \theta ^ j_n$, where $\theta ^ j_n$ is the number of seconds for which the task has been running.
The cost of routing a task to the unscheduled node is $\upsilon ^ j_n = \omega \nu ^ j_n$, where $\omega$ is a wait-time factor and $\nu ^ j_n$ is the total number of seconds that task $n$ of job $j$ has spent unscheduled. In our current version for testing, the wait-time factor is $\omega=0.5$, with $\psi = 1$ per GB and $\xi = 2$ per GB; $\psi$ and $\xi$ can be set larger to achieve better locality. After initialization, the min-cost flow is recomputed every time before a new task is assigned to a TaskTracker. While the job is running on the cluster, the capacity matrix and cost matrix are updated whenever a task finishes: an edge from the finished task to the sink is added with capacity 1 and cost $-1000-\nu ^ j_n$. There are four versions of Quincy: without preemption and without fairness (Q), with preemption and without fairness (QP), without preemption and with fairness (QF), and with preemption and with fairness (QPF). Due to time constraints, our current implementation includes neither preemption nor fairness. Preemption is easy to add, but fairness control requires modifying more classes and source code.
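To make the min-cost flow formulation concrete, here is a toy successive-shortest-paths solver on a made-up miniature graph (one virtual source, two tasks, one unscheduled node, two computing nodes, one sink). The costs and capacities are invented for illustration; this is a sketch of the graph structure described above, not the production implementation.

```python
def min_cost_max_flow(n, edges, s, t):
    """edges: list of [u, v, capacity, cost]; returns (max_flow, min_cost)."""
    # Build adjacency lists with residual arcs; each arc stores its reverse index.
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward arc
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual arc
    total_flow = total_cost = 0
    while True:
        # Bellman-Ford shortest path by cost in the residual graph
        # (handles the negative residual costs that appear after augmenting).
        INF = float("inf")
        dist = [INF] * n
        dist[s] = 0
        parent = [None] * n       # (node, edge index) used to reach each node
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:
            break                 # no augmenting path left
        # Find the bottleneck capacity along the path, then push flow.
        push = INF
        v = t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[graph[u][i][0]][graph[u][i][3]][1] += push
            v = u
        total_flow += push
        total_cost += push * dist[t]
    return total_flow, total_cost

# Nodes: 0 source, 1-2 tasks, 3 unscheduled, 4-5 computing nodes, 6 sink.
edges = [
    [0, 1, 1, 0], [0, 2, 1, 0],        # source -> tasks, supply 1 each
    [1, 4, 1, 2], [1, 5, 1, 5],        # task 1 placement costs (invented)
    [2, 4, 1, 4], [2, 5, 1, 1],        # task 2 placement costs (invented)
    [1, 3, 1, 8], [2, 3, 1, 8],        # leaving a task unscheduled is costly
    [3, 6, 2, 0],
    [4, 6, 1, 0], [5, 6, 1, 0],        # one task slot per computing node
]
flow, cost = min_cost_max_flow(7, edges, 0, 6)
print(flow, cost)   # both tasks land on their cheapest nodes -> 2 3
```

With these numbers the solver schedules task 1 on node 4 (cost 2) and task 2 on node 5 (cost 1); neither is routed to the unscheduled node because its cost (8) dominates.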
I'm stuck on some of the math behind this problem and could use some help working this out. I'm trying to calculate the Lagrangian, find the conjugate momenta, and finally calculate the Hamiltonian given the following information. In cartesian coordinates, we have that $ds^2 = dx_1^2 + dx_2^2 + dx_3^2$. In generalized coordinates $u_\alpha$, this becomes: $$ ds^2 = \sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta}du_{\alpha} du_{\beta} $$ where the metric tensor $g$ is $$ g_{\alpha \beta} = \sum_{\mu = 1}^3 \dfrac{\partial x_{\mu}}{\partial u_{\alpha}}\dfrac{\partial x_{\mu}}{\partial u_{\beta}} $$ I found the Lagrangian easily enough. This is $$ \mathcal{L} = \dfrac{m}{2}\sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta} \dot u_{\alpha} \dot u_{\beta} $$ The conjugate momenta calculation confuses me once I get to a certain point. I will show my work up until there. \begin{align} p_i = \dfrac{\partial \mathcal L}{\partial \dot u_i} &= \dfrac{m}{2}\sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta} \dfrac{\partial }{\partial \dot u_i} \bigg ( \dot u_{\alpha} \dot u_{\beta} \bigg ) \\ &=\dfrac{m}{2}\sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta} \bigg ( \delta_{i\alpha} \dot u_{\beta} + \dot u_{\alpha} \delta_{i\beta} \bigg ) \end{align} This is one source of my confusion. I'm not sure why the Kronecker deltas pop up in the last line. My professor tried to explain it to me, but I wasn't able to follow his logic there. Continuing the problem, we have that \begin{align} p_i &=\dfrac{m}{2}\sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta} \bigg ( \delta_{i\alpha} \dot u_{\beta} + \dot u_{\alpha} \delta_{i\beta} \bigg ) \\ &=\dfrac{m}{2}\sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta} \delta_{i\alpha} \dot u_{\beta} + \dfrac{m}{2}\sum_{\alpha=1}^3 \sum_{\beta=1}^3 g_{\alpha \beta} \dot u_{\alpha} \delta_{i\beta} \end{align} From here, my professor invoked some symmetry argument on the metric tensor to collapse the two parts together. He was rushed and stopped here.
I was completely lost at this point and I'm not sure how to proceed from there. Once I get this, I should be able to find the Hamiltonian easily enough. I'm just new to tensor math and this is a bit confusing. Thank you!
Defining parameters Level: \( N \) = \( 210 = 2 \cdot 3 \cdot 5 \cdot 7 \) Weight: \( k \) = \( 2 \) Nonzero newspaces: \( 12 \) Newforms: \( 32 \) Sturm bound: \(4608\) Trace bound: \(4\) Dimensions The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(210))\).

                    Total   New    Old
Modular forms        1344   249   1095
Cusp forms            961   249    712
Eisenstein series     383     0    383

Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(210))\) We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
I have two square matrices - $A$ and $B$. $A^{-1}$ is known and I want to calculate $(A+B)^{-1}$. Are there theorems that help with calculating the inverse of the sum of matrices? In the general case $B^{-1}$ is not known, but if necessary it can be assumed that $B^{-1}$ is also known. In general, $A+B$ need not be invertible, even when $A$ and $B$ are. But one might ask whether you can have a formula under the additional assumption that $A+B$ is invertible. As noted by Adrián Barquero, there is a paper by Ken Miller published in Mathematics Magazine in 1981 that addresses this. He proves the following: Lemma. If $A$ and $A+B$ are invertible, and $B$ has rank $1$, then let $g=\mathrm{trace}(BA^{-1})$. Then $g\neq -1$ and $$(A+B)^{-1} = A^{-1} - \frac{1}{1+g}A^{-1}BA^{-1}.$$ From this lemma, we can take a general $A+B$ that is invertible and write it as $A+B = A + B_1+B_2+\cdots+B_r$, where the $B_i$ each have rank $1$ and each $A+B_1+\cdots+B_k$ is invertible (such a decomposition always exists if $A+B$ is invertible and $\mathrm{rank}(B)=r$). Then you get: Theorem. Let $A$ and $A+B$ be nonsingular matrices, and let $B$ have rank $r\gt 0$. Let $B=B_1+\cdots+B_r$, where each $B_i$ has rank $1$, and each $C_{k+1} = A+B_1+\cdots+B_k$ is nonsingular. Setting $C_1 = A$, then $$C_{k+1}^{-1} = C_{k}^{-1} - g_kC_k^{-1}B_kC_k^{-1}$$ where $g_k = \frac{1}{1 + \mathrm{trace}(C_k^{-1}B_k)}$. In particular, $$(A+B)^{-1} = C_r^{-1} - g_rC_r^{-1}B_rC_r^{-1}.$$ (If the rank of $B$ is $0$, then $B=0$, so $(A+B)^{-1}=A^{-1}$.) It is shown in On Deriving the Inverse of a Sum of Matrices that $(A+B)^{-1}=A^{-1}-A^{-1}B(A+B)^{-1}$. This equation cannot be used directly to calculate $(A+B)^{-1}$, but it is useful for perturbation analysis where $B$ is a perturbation of $A$. There are several other variations of the above form (see equations (22)-(26) in this paper). This result is good because it only requires $A$ and $A+B$ to be nonsingular.
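The rank-1 lemma above is easy to check numerically. A minimal sketch in pure Python on a hand-picked $2\times 2$ pair (the example matrices are ours, not from Miller's paper; any invertible $A$ with a rank-1 $B$ such that $A+B$ is invertible would do):

```python
# Numerical check of the rank-1 update lemma:
# (A+B)^{-1} = A^{-1} - (1/(1+g)) A^{-1} B A^{-1},  g = trace(B A^{-1}).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

def trace(X):
    return X[0][0] + X[1][1]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 2.0], [2.0, 4.0]]          # rank 1: second row = 2 * first row

Ainv = inv2(A)
g = trace(matmul(B, Ainv))            # the lemma guarantees g != -1 here
correction = matmul(Ainv, matmul(B, Ainv))
lemma = [[Ainv[i][j] - correction[i][j] / (1 + g) for j in range(2)]
         for i in range(2)]

direct = inv2([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
max_err = max(abs(lemma[i][j] - direct[i][j])
              for i in range(2) for j in range(2))
print(max_err < 1e-12)                # the two computations agree
```

The same helpers extend to the theorem's iterated form: split $B$ into rank-1 pieces and apply the update once per piece.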
As a comparison, the SMW identity or Ken Miller's paper (as mentioned in the other answers) requires some nonsingularity or rank conditions on $B$. This I found accidentally. Suppose we are given $A$ and $B$, where $A$ and $A+B$ are invertible. We want an expression for $(A+B)^{-1}$ without assuming that $B$ itself is invertible. The intuition goes like this: suppose we can express $(A+B)^{-1} = A^{-1} + X$; we now present a simple, straightforward method to compute $X$. \begin{equation} (A+B)^{-1} = A^{-1} + X \end{equation} \begin{equation} (A^{-1} + X) (A + B) = I \end{equation} \begin{equation} A^{-1} A + X A + A^{-1} B + X B = I \end{equation} \begin{equation} X(A + B) = - A^{-1} B \end{equation} \begin{equation} X = - A^{-1} B ( A + B)^{-1} \end{equation} \begin{equation} X = - A^{-1} B (A^{-1} + X) \end{equation} \begin{equation} (I + A^{-1}B) X = - A^{-1} B A^{-1} \end{equation} \begin{equation} X = - (I + A^{-1}B)^{-1} A^{-1} B A^{-1} \end{equation} This lemma is a simplification of the lemma presented by Ken Miller (1981). $(A+B)^{-1} = A^{-1} - A^{-1}BA^{-1} + A^{-1}BA^{-1}BA^{-1} - A^{-1}BA^{-1}BA^{-1}BA^{-1} + \cdots$ provided $\|A^{-1}B\|<1$ or $\|BA^{-1}\| < 1$ (here $\|\cdot\|$ denotes a norm). This is just the Taylor expansion of the inversion function together with basic information on convergence. (posted essentially at the same time as mjqxxx) A formal power series expansion is possible: $$ \begin{eqnarray} (A + \epsilon B)^{-1} &=& \left(A \left(I + \epsilon A^{-1}B\right)\right)^{-1} \\ &=& \left(I + \epsilon A^{-1}B\right)^{-1} A^{-1} \\ &=& \left(I - \epsilon A^{-1}B + \epsilon^2 A^{-1}BA^{-1}B - ...\right) A^{-1} \\ &=& A^{-1} - \epsilon A^{-1} B A^{-1} + \epsilon^2 A^{-1} B A^{-1} B A^{-1} - ... \end{eqnarray} $$ Under appropriate conditions on the eigenvalues of $A$ and $B$ (such that $A$ is sufficiently "large" compared to $B$), this will converge to the correct result at $\epsilon=1$.
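The series expansion above can be sketched numerically. Pure Python, $2\times 2$, with a hand-picked pair for which $\|A^{-1}B\|$ is comfortably below 1 (the matrices are our own example, not from the answer):

```python
# Accumulate A^{-1} - A^{-1}BA^{-1} + A^{-1}BA^{-1}BA^{-1} - ... and compare
# with the directly computed inverse of A+B.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

A = [[4.0, 0.0], [0.0, 5.0]]
B = [[0.5, 0.2], [0.1, 0.3]]

Ainv = inv2(A)
M = matmul(Ainv, B)                     # the "small" matrix A^{-1}B

term = [row[:] for row in Ainv]         # current term (A^{-1}B)^k A^{-1}
series = [[0.0, 0.0], [0.0, 0.0]]
sign = 1.0
for _ in range(60):                     # 60 terms: far past convergence here
    for i in range(2):
        for j in range(2):
            series[i][j] += sign * term[i][j]
    term = matmul(M, term)
    sign = -sign

direct = inv2([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
err = max(abs(series[i][j] - direct[i][j])
          for i in range(2) for j in range(2))
print(err < 1e-12)
```

With $A$ much "larger" than $B$, as here, the partial sums converge geometrically; when $\|A^{-1}B\|$ approaches 1 the series becomes useless, matching the stated convergence condition.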
Assuming everything is nicely invertible, you are probably looking for the SMW identity (which, I think, can also be generalized to pseudoinverses if needed). Please see the caveat in the comments below; in general, if $B$ is low-rank, then you'd be happy using SMW. I'm surprised that no one realizes it's a special case of the well-known matrix inversion lemma or [Woodbury matrix identity], which says $ \left(A+UCV \right)^{-1} = A^{-1} - A^{-1}U \left(C^{-1}+VA^{-1}U \right)^{-1} VA^{-1}$ ; just set $U=V=I$, and we immediately get $ \left(A+C \right)^{-1} = A^{-1} - A^{-1} \left(C^{-1}+A^{-1} \right)^{-1} A^{-1}$ . It is possible to come up with pretty simple examples where $A$, $A^{-1}$, $B$, and $B^{-1}$ are all very nice, but applying $(A+B)^{-1}$ is considered very difficult. The canonical example is where $A = \Delta$ is a finite-difference implementation of the Laplacian on a regular grid (with, for example, Dirichlet boundary conditions), and $B=k^2I$ is a multiple of the identity. The finite-difference Laplacian and its inverse are very nice and easy to deal with, as is the identity matrix. However, the combination $$\Delta + k^2 I$$ is the Helmholtz operator, which is widely known to be extremely difficult to solve for large $k$. If $A$ and $B$ were numbers, there would be no simpler way to write $\frac{1}{A+B}$ in terms of $ \frac{1}{A}$ and $B$, so I don't know why you would expect there to be one for matrices. It is even possible to have matrices $A$ and $B$ such that neither $A^{-1}$ nor $B^{-1}$ exists but $(A+B)^{-1}$ does or, conversely, such that both $A^{-1}$ and $B^{-1}$ exist but $(A+B)^{-1}$ doesn't.
Extending Muhammad Fuady's approach: We have: \begin{equation} (A+B)^{-1} = A^{-1} + X \end{equation} \begin{equation} X = - (I + A^{-1}B)^{-1} A^{-1} B A^{-1} \end{equation} So \begin{equation} (A+B)^{-1} = A^{-1} - (I + A^{-1}B)^{-1} A^{-1} B A^{-1} \tag{1}\label{eq1} \end{equation} This rearranges to: \begin{equation} (A+B)^{-1} = (I - (I + A^{-1}B)^{-1} A^{-1} B )A^{-1} \tag{2}\label{eq2} \end{equation} If we consider the part \begin{equation} (I + A^{-1}B)^{-1} \end{equation} then this is itself an inverse of a sum of two matrices, so we can use \eqref{eq2}, setting $A=I$ and $B = A^{-1}B$, which gives: \begin{equation} (I + A^{-1}B)^{-1} = (I - (I + A^{-1}B)^{-1}A^{-1}B ) \end{equation} so we can substitute the LHS of this for the right-hand side which appears in \eqref{eq2}, giving: \begin{equation} (A+B)^{-1} = (I + A^{-1}B)^{-1}A^{-1} \tag{3}\label{eq3} \end{equation} which is simpler than \eqref{eq1} and is very similar to the scalar identity: \begin{equation} \frac{1}{a+b}=\frac{1}{\left(1+\frac{b}{a}\right)a} \tag{4}\label{eq4} \end{equation} The technique is useful in computation because, if the values in $A$ and $B$ can be very different in size, then calculating $(A+B)^{-1}$ according to \eqref{eq3} gives a more accurate floating-point result than if the two matrices are summed first. Actually, starting directly from @Shiyu's answer about perturbations, subtracting $(A+B)^{-1}$ and factoring, we arrive at $$0=A^{-1}-(A^{-1}B+I)(A+B)^{-1}$$ followed by $$(A+B)^{-1}=(A^{-1}B+I)^{-1}A^{-1}$$ And by symmetry of course $$(A+B)^{-1}=(B^{-1}A+I)^{-1}B^{-1}$$ Now remember, $(I+X)^{-1}$ can be expanded as $I-X+X^2-\cdots$ by the geometric series. So if $X=B^{-1}A$ or $X=A^{-1}B$, and multiplication by $A$, $B$ and either of $A^{-1}$ or $B^{-1}$ is cheap, then this could work better than some other method of finding the inverse.
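The identity $(A+B)^{-1} = (I + A^{-1}B)^{-1}A^{-1}$ is also easy to sanity-check. A pure-Python sketch on a small hand-picked example (matrices are ours):

```python
# Check (A+B)^{-1} == (I + A^{-1}B)^{-1} A^{-1} on a 2x2 example.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

A = [[3.0, 1.0], [0.0, 2.0]]
B = [[1.0, 0.0], [1.0, 1.0]]

Ainv = inv2(A)
M = matmul(Ainv, B)                                   # A^{-1} B
I_plus_M = [[1.0 + M[0][0], M[0][1]],
            [M[1][0], 1.0 + M[1][1]]]                 # I + A^{-1}B

rhs = matmul(inv2(I_plus_M), Ainv)                    # (I + A^{-1}B)^{-1} A^{-1}
lhs = inv2([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)])
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)
```

Both sides come out to $\frac{1}{11}\begin{pmatrix}3 & -1\\ -1 & 4\end{pmatrix}$ here, up to floating-point rounding.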
When I evaluate Solve[a==Sin[b*c], b] to rearrange the following for $ b $: $$ a = \sin(bc) $$ I get the following result from Mathematica: $$\begin{align*} \left\{\left\{b\to \text{ConditionalExpression}\left[\frac{-\sin ^{-1}(a)+2 \pi c_1+\pi }{c},c_1\in \mathbb{Z}\right]\right\},\right.\left.\left\{b\to \text{ConditionalExpression}\left[\frac{\sin ^{-1}(a)+2 \pi c_1}{c},c_1\in \mathbb{Z}\right]\right\}\right\} \end{align*}$$ It seems far too complicated. Unless I'm making a huge mistake, surely solving the equation for $ b $ would give: $$ b = \frac{\sin ^{-1}(a)}{c} $$ Am I doing something wrong?
Revista Matemática Iberoamericana Volume 19, Issue 3, 2003, pp. 813–856 DOI: 10.4171/RMI/371 Published online: 2003-12-31 Poissonian products of random weights: Uniform convergence and related measures Julien Barral [1] (1) Domaine de Voluceau, Le Chesnay, France The random multiplicative measures on $\mathbb{R}$ introduced in Mandelbrot ([Mandelbrot 1996]) are a fundamental particular case of a larger class we deal with in this paper. An element $\mu$ of this class is the vague limit of a continuous-time measure-valued martingale $\mu _{t}$, generated by multiplying i.i.d. non-negative random weights, the $(W_M)_{M\in S}$, attached to the points $M$ of a Poisson point process $S$, in the strip $H=\{(x,y)\in \mathbb{R}\times\mathbb{R}_+ ; 0 < y\leq 1\}$ of the upper half-plane. We are interested in giving estimates for the dimension of such a measure. Our results give these estimates almost surely for uncountable families $(\mu ^{\lambda})_{\lambda \in U}$ of such measures constructed simultaneously, when every measure $\mu^{\lambda}$ is obtained from a family of random weights $(W_M(\lambda))_{M\in S}$ and $W_M(\lambda)$ depends smoothly upon the parameter $\lambda\in U\subset\mathbb{R}$. This problem leads us to study, in several senses, the convergence, for every $s\geq 0$, of the function-valued martingale $Z^{(s)}_t: \lambda \mapsto \mu_{t}^{\lambda }([0,s])$. The study includes the case of analytic versions of $Z^{(s)}_t(\lambda)$ where $\lambda\in\mathbb{C}^n$. The results make it possible to show in certain cases that the dimension of $\mu^{\lambda}$ depends smoothly upon the parameter. When the Poisson point process is statistically invariant under horizontal translations, this construction provides new non-decreasing multifractal processes with stationary increments $s\mapsto \mu ([0,s])$, for which we derive limit theorems, with uniform versions when $\mu$ depends on $\lambda$.
Keywords: Poisson point processes, Banach space valued martingales, random measures, Hausdorff dimension Barral Julien: Poissonian products of random weights: Uniform convergence and related measures. Rev. Mat. Iberoam. 19 (2003), 813-856. doi: 10.4171/RMI/371
Downsampling Contents Outline Introduction Definition of Downsampling Derivation of DTFT of downsampled signal Example Decimator Conclusion Introduction This slecture provides the definition of downsampling, derives the DTFT of a downsampled signal, and demonstrates it in the frequency domain. It also explains the process of decimation and why it needs a low-pass filter. Definition of Downsampling Downsampling is an operation which involves throwing away samples from a discrete-time signal. Let x[n] be a discrete-time signal shown below: then y[n] is produced by downsampling x[n] by a factor D = 3, so y[n] = x[Dn]. As seen in the above graph, y[n] is obtained by throwing away some samples from x[n]; that is, y[n] is the signal downsampled from x[n]. Derivation of DTFT of downsampled signal Let x(t) be a continuous-time signal. Then let $x_1[n] = x(T_1 n)$ and $x_2[n] = x(T_2 n)$, and let the ratio of sampling periods be $D = T_2/T_1$, an integer greater than 1. From these equations we obtain the relationship between $x_1[n]$ and $x_2[n]$: $ \begin{align} x_2 [n] = x(T_2 n) = x(DT_1 n) = x_1 [nD] \end{align} $ Below we derive the Discrete-Time Fourier Transform of $x_2[n]$ in terms of the DTFT of $x_1[n]$: $ \begin{align} &\mathcal{X}_2(\omega)= \mathcal{F}(x_2 [n]) = \mathcal{F}(x_1 [Dn])\\ &= \sum_{n = -\infty}^\infty x_1[Dn] e^{-j \omega n} = \sum_{\substack{m = -\infty \\ D | m}}^\infty x_1[m] e^{-j \omega {\frac{m}{D}}}\\ &= \sum_{m = -\infty}^\infty s_D[m] \, x_1 [m] \, e^{-j \omega {\frac{m}{D}}}\\ \end{align} $ (substituting $m = Dn$, so $m$ runs over multiples of $D$) where $ s_D [m]=\left\{ \begin{array}{ll} 1,& \text{ if } m \text{ is a multiple of } D,\\ 0, & \text{ else}. \end{array}\right.
= {\frac{1}{D}} \sum_{k = 0}^{D-1} e^{jk {\frac{2 \pi}{D} m}} $ $ \begin{align} &\mathcal{X}_2(\omega)= \sum_{m = -\infty}^\infty {\frac{1}{D}} \sum_{k = 0}^{D-1} e^{jk {\frac{2 \pi}{D} m}} x_1[m] e^{-j \omega {\frac{m}{D}}}\\ &= {\frac{1}{D}} \sum_{k = 0}^{D-1} \sum_{m = -\infty}^\infty x_1[m] e^{-jm ({\frac{\omega - 2 \pi k}{D}})}\\ &= {\frac{1}{D}} \sum_{k = 0}^{D-1} \mathcal{X}_1 ({\frac{\omega - 2 \pi k}{D}}) \\ \end{align} $ Example Let's take a look at an original signal $\mathcal{X}_1(\omega)$ and the signal $\mathcal{X}_2(\omega)$ obtained by downsampling $\mathcal{X}_1(\omega)$ by a factor D = 2, in the frequency domain. From the two graphs it is seen that after downsampling the signal is stretched by D in the frequency domain and decreased by D in magnitude. Both signals are periodic with period $ \begin{align} 2\pi \end{align} $ . Decimator As seen in the second graph, if $ \begin{align} D 2\pi T_1 f_{max} \end{align} $ is greater than $ \begin{align} \pi \end{align} $, aliasing occurs. The downsampler is a part of a decimator, which also has a low-pass filter to prevent aliasing. The LPF eliminates signal components which have frequencies higher than the cutoff frequency, which can be found from the graphs shown above. $ \begin{align} & D\omega_c = D 2 \pi T_1 f_{max} < \pi\\ & {\frac{T_2}{T_1}} 2\pi T_1 f_{max} < \pi \\ & 2\pi T_2f_{max} < \pi \\ &f_{max} < {\frac{1}{2T_2}} \end{align} $ Thereby, the signal needs to be filtered before downsampling if $f_{max} > 1/(2T_2)$. The complete block diagram of a decimator is shown below: Conclusion To summarize, downsampling is a process of removing samples from a signal. After downsampling, the signal decreases by a factor D in magnitude and stretches by D in the frequency domain. To prevent aliasing, a signal should first be filtered by an LPF before it is downsampled. Both the LPF and the downsampler are parts of a decimator. If you have any questions, comments, etc. please post them on this page.
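A small numeric sketch of the two operations above, using our own toy signal and a crude length-D moving average as a stand-in for a proper anti-aliasing low-pass filter:

```python
import math

D = 3                                   # downsampling factor
x1 = [math.sin(2 * math.pi * 0.02 * n) for n in range(30)]

# Plain downsampling: keep every D-th sample, y[n] = x1[D n].
y = x1[::D]

# Decimation: low-pass filter first (crude moving average), then downsample.
filtered = [sum(x1[max(0, n - D + 1): n + 1]) / D for n in range(len(x1))]
decimated = filtered[::D]

print(len(x1), len(y), len(decimated))  # 30 samples become 10
```

A real decimator would use a properly designed filter with cutoff $\pi/D$ (e.g. a windowed-sinc FIR); the moving average here only illustrates the filter-then-downsample ordering.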
A rational function $f$ in $n$ variables is a ratio of $2$ polynomials, $$f(x_1,...x_n) = \frac{p(x_1,...x_n)}{q(x_1,...x_n)}$$ where $q$ is not identically $0$. The function is called symmetric if $$f(x_1,...,x_n) = f(x_{\sigma(1)},...,x_{\sigma(n)})$$ for any permutation $\sigma$ of $\{1,\ldots,n\}$. Let $F$ denote the field of rational functions and $S$ denote the subfield of symmetric rational functions. Suppose the coefficients of the polynomials are all real numbers. Show that $F = S(h)$, where $h = x_1 + 2x_2 + ... + nx_n$. In other words, show that $h$ generates $F$ as a field extension of $S$. Attempt at Solution: Can't seem to get very far with this one. I know that $F$ is a finite extension of $S$ of degree $n!$ and the Galois group of the extension is $S_n$. Using $h$ and the 1st symmetric function $s_1 = x_1 + x_2 + \ldots + x_n$, we see that $h - s_1 = x_2 + 2x_3 + \ldots + (n-1)x_n \in S(h)$. Can't seem to find a good way to use the other symmetric functions $s_2,\ldots, s_n$. Any help would be greatly appreciated. Thank you.
I have the following system of equations: $ \begin{cases} \frac{du}{dt} = v - v^3 \,, \\ \frac{dv}{dt} = -u - u^3 \,. \end{cases} $ I'm asked to find a Lyapunov function (Lyapunov's second method) to determine the stability around the origin. Using a linearization near the origin, I have found that the eigenvalues of the Jacobian are $\pm i$ and hence, the origin is a stable center point. I figured this means I need to find a positive definite function (that is zero in the origin) and has negative semidefinite derivative (with respect to the system). The questions in the book $\textit{Elementary Differential Equations and Boundary Value Problems}$ by $\textit{Boyce}$ and $\textit{DiPrima}$ are usually solved by trying the polynomials $V(u,v) = au^2 + bv^2$ or $V(u,v) = au^2 + buv + cv^2$. Sometimes a change to polar coordinates is made to determine a radius in which the derivative is negative. But I can't seem to ensure a derivative that is less or equal to zero in this case, for example: Take $V(u,v) := au^2 + bv^2$, then $ \begin{align*} \dot V &= 2auu' + 2bvv' \\ &= 2au(v-v^3) + 2bv(-u-u^3) & \mbox{let (for example) $a=b=1$}\\ &= -2uv^3 - 2vu^3 \end{align*} $ As these are cubic terms, they may very well be positive.
I am trying to find an algorithm to determine whether an $N\times N$ matrix of ones and zeroes has a subset of its ones containing exactly one $1$ from each row and each column. This is the perfect matching problem in a bipartite graph: construct a bipartite graph with nodes $1,...,N$ on one side and nodes $-1,...,-N$ on the other side, and with an edge from $i$ to $-j$ if your matrix has a 1 in row $i$, column $j$. Then a perfect matching in this graph corresponds to a subset of ones in the matrix having the property that no two of them are in the same row or column. Textbook algorithm The Hopcroft-Karp algorithm solves the problem; it can determine if a perfect matching exists in time $O(\sqrt{N} M)$ where $M$ is the number of ones in your matrix (clearly, $M \le N^2$). A simpler randomized algorithm If you are satisfied with a randomized algorithm (with a small probability of error), there is indeed a conceptually simpler algorithm due to László Lovász. Create a new matrix $X$ in which every 1 in the original matrix is replaced by a random integer between 1 and some bound $P>100 \cdot N$; zeros in the original matrix remain zeros in $X$. Compute the determinant of $X$. If $\det(X) \neq 0$, then for sure the original matrix has a perfect matching; if $\det(X) = 0$, then with probability at least $1-1/100$, the original matrix does not have a perfect matching. (If you want to be sure that the value of the determinant does not overflow, you can do the computations modulo $P$ if you choose $P$ to be prime.) Computing the determinant will take time $O(N^3)$ with standard decomposition methods. An even simpler (and slower) method The simplest polynomial-time algorithm I know for bipartite perfect matching is due to Linial, Samorodnitsky and Wigderson and runs in time $O(N^4 \log N)$. Let $c_j$ denote the sum of the elements in column $j$. The algorithm is as follows:

For $N^2 \log N$ iterations do:
1. Normalize each column so that it sums to 1
2. Normalize each row so that it sums to 1
3. Compute $c_1,\ldots,c_N$
4. If $\sum_{j=1}^N (c_j-1)^2 < 1/N$ return YES

Return NO

If $N$ is tiny If you want something even simpler (?) to implement (but much, much slower if $N$ is large), you could just enumerate all permutations over $N$ elements, and for each of these check if the corresponding set satisfies your property. That is, for each permutation $\pi: \{1,\ldots,N\} \to \{1,\ldots,N\}$, you check if cell $(i, \pi(i))$ of the matrix contains a 1, for each $i=1,\ldots,N$. The standard libraries of some programming languages have support for generating all permutations of a set; for example, itertools.permutations in Python or next_permutation in C++. However, this approach based on enumeration will take time about $N \cdot N!$, which is more than exponential in $N$.
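The brute-force permutation check described above can be written in a few lines (the example matrices are ours, for illustration):

```python
from itertools import permutations

def has_perfect_matching(matrix):
    """True iff some permutation pi has matrix[i][pi(i)] == 1 for all i."""
    n = len(matrix)
    return any(all(matrix[i][p[i]] == 1 for i in range(n))
               for p in permutations(range(n)))

m1 = [[1, 1, 0],
      [0, 1, 0],
      [0, 1, 1]]          # rows can take columns 0, 1, 2 -> matching exists
m2 = [[1, 0, 0],
      [1, 0, 0],
      [0, 1, 1]]          # rows 0 and 1 both need column 0 -> no matching
print(has_perfect_matching(m1), has_perfect_matching(m2))  # True False
```

As noted, this is $O(N \cdot N!)$ and only practical for tiny $N$; for anything larger, use Hopcroft-Karp or the determinant test.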
Recently the question If $\frac{d}{dx}$ is an operator, on what does it operate? was asked on mathoverflow. It seems that some users there objected to the question, apparently interpreting it as an elementary inquiry about what kind of thing is a differential operator, and on this interpretation, I would agree that the question would not be right for mathoverflow. And so the question was closed down (and then reopened, and then closed again…. sigh). (Update 12/6/12: it was opened again, and so I’ve now posted my answer over there.) Meanwhile, I find the question to be more interesting than that, and I believe that the OP intends the question in the way I am interpreting it, namely, as a logic question, a question about the nature of mathematical reference, about the connection between our mathematical symbols and the abstract mathematical objects to which we take them to refer. And specifically, about the curious form of variable binding that expressions involving $dx$ seem to involve. So let me write here the answer that I had intended to post on mathoverflow: ————————- To my way of thinking, this is a serious question, and I am not really satisfied by the other answers and comments, which seem to answer a different question than the one that I find interesting here. The problem is this. We want to regard $\frac{d}{dx}$ as an operator in the abstract senses mentioned by several of the other comments and answers. In the most elementary situation, it operates on functions of a single real variable, returning another such function, the derivative. And the same for $\frac{d}{dt}$. The problem is that, described this way, the operators $\frac{d}{dx}$ and $\frac{d}{dt}$ seem to be the same operator, namely, the operator that takes a function to its derivative, but nevertheless we cannot seem freely to substitute these symbols for one another in formal expressions.
For example, if an instructor were to write $\frac{d}{dt}x^3=3x^2$, a student might object, “don’t you mean $\frac{d}{dx}$?” and the instructor would likely reply, “Oh, yes, excuse me, I meant $\frac{d}{dx}x^3=3x^2$. The other expression would have a different meaning.” But if they are the same operator, why don’t the two expressions have the same meaning? Why can’t we freely substitute different names for this operator and get the same result? What is going on with the logic of reference here? The situation is that the operator $\frac{d}{dx}$ seems to make sense only when applied to functions whose independent variable is described by the symbol “x”. But this collides with the idea that what the function is at bottom has nothing to do with the way we represent it, with the particular symbols that we might use to express which function is meant. That is, the function is the abstract object (whether interpreted in set theory or category theory or whatever foundational theory), and is not connected in any intimate way with the symbol “$x$”. Surely the functions $x\mapsto x^3$ and $t\mapsto t^3$, with the same domain and codomain, are simply different ways of describing exactly the same function. So why can’t we seem to substitute them for one another in the formal expressions? The answer is that the syntactic use of $\frac{d}{dx}$ in a formal expression involves a kind of binding of the variable $x$. Consider the issue of collision of bound variables in first order logic: if $\varphi(x)$ is the assertion that $x$ is not maximal with respect to $\lt$, expressed by $\exists y\ x\lt y$, then $\varphi(y)$, the assertion that $y$ is not maximal, is not correctly described as the assertion $\exists y\ y\lt y$, which is what would be obtained by simply replacing the occurrence of $x$ in $\varphi(x)$ with the symbol $y$. 
For the intended meaning, we cannot simply syntactically replace the occurrence of $x$ with the symbol $y$, if that occurrence of $x$ falls under the scope of a quantifier. Similarly, although the functions $x\mapsto x^3$ and $t\mapsto t^3$ are equal as functions of a real variable, we cannot simply syntactically substitute the expression $x^3$ for $t^3$ in $\frac{d}{dt}t^3$ to get $\frac{d}{dt}x^3$. One might even take the latter as a kind of ill-formed expression, without further explanation of how $x^3$ is to be taken as a function of $t$. So the expression $\frac{d}{dx}$ causes a binding of the variable $x$, much like a quantifier might, and this prevents free substitution in just the way that collision does. But the case here is not quite the same as the way $x$ is a bound variable in $\int_0^1 x^3\ dx$, since $x$ remains free in $\frac{d}{dx}x^3$, but we would say that $\int_0^1 x^3\ dx$ has the same meaning as $\int_0^1 y^3\ dy$. Of course, the issue evaporates if one uses a notation, such as the $\lambda$-calculus, which insists that one be completely explicit about which syntactic variables are to be regarded as the independent variables of a functional term, as in $\lambda x.x^3$, which means the function of the variable $x$ with value $x^3$. And this is how I take several of the other answers to the question, namely, that the use of the operator $\frac{d}{dx}$ indicates that one has previously indicated which of the arguments of the given function is to be regarded as $x$, and it is with respect to this argument that one is differentiating. In practice, this is almost always clear without much remark. For example, our use of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ seems to manage very well in complex situations, sometimes with dozens of variables running around, without adopting the onerous formalism of the $\lambda$-calculus, even if that formalism is what these solutions are essentially really about. 
Meanwhile, it is easy to make examples where one must be very specific about which variables are the independent variable and which are not, as Todd mentions in his comment to David’s answer. For example, cases like $$\frac{d}{dx}\int_0^x(t^2+x^3)dt\qquad \frac{d}{dt}\int_t^x(t^2+x^3)dt$$ are surely clarified for students by a discussion of the usage of variables in formal expressions and more specifically the issue of bound and free variables.
Defining parameters Level: \( N \) = \( 20 = 2^{2} \cdot 5 \) Weight: \( k \) = \( 4 \) Nonzero newspaces: \( 3 \) Newforms: \( 4 \) Sturm bound: \(96\) Trace bound: \(1\) Dimensions The following table gives the dimensions of various subspaces of \(M_{4}(\Gamma_1(20))\).

                    Total   New   Old
Modular forms          46    25    21
Cusp forms             26    17     9
Eisenstein series      20     8    12

Decomposition of \(S_{4}^{\mathrm{new}}(\Gamma_1(20))\) We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.

Label    \(\chi\)                    Newforms   Dimension   \(\chi\) degree
20.4.a   \(\chi_{20}(1, \cdot)\)    20.4.a.a   1           1
20.4.c   \(\chi_{20}(9, \cdot)\)    20.4.c.a   2           1
20.4.e   \(\chi_{20}(3, \cdot)\)    20.4.e.a   2           2
                                     20.4.e.b   12
In combinatorics there are quite many such disproven conjectures. The most famous of them are: 1) Tait conjecture: Any 3-connected planar cubic graph is Hamiltonian. The first counterexample found has 46 vertices. The "least" counterexample known has 38 vertices. 2) Tutte conjecture: Any 3-connected bipartite cubic graph is Hamiltonian. The first counterexample found has 96 vertices. The "least" counterexample known has 54 vertices. 3) Thom conjecture: If two finite undirected simple graphs have conjugate adjacency matrices over $\mathbb{Z}$, then they are isomorphic. The least known counterexample pair is formed by two trees with 11 vertices. 4) Borsuk conjecture: Every bounded subset $E$ of $\mathbb{R}^n$ can be partitioned into $n+1$ sets, each of which has a smaller diameter than $E$. In the first counterexample found, $n = 1325$. In the "least" counterexample known, $n = 64$. 5) Danzer-Gruenbaum conjecture: If $A \subset \mathbb{R}^n$ and $\forall u, v, w \in A$ $(u - w, v - w) > 0,$ then $|A| \leq 2n - 1$. This statement is not true for any $n \geq 35$. 6) The Boolean Pythagorean Triple Conjecture: There exists $S \subset \mathbb{N}$ such that neither $S$ nor $\mathbb{N} \setminus S$ contains Pythagorean triples. This conjecture was disproved by M. Heule, O. Kullman and V. Marek. They proved that there does exist an $S \subset \{n \in \mathbb{N}| n \leq k\}$ such that neither $S$ nor $\{n \in \mathbb{N}| n \leq k\} \setminus S$ contains Pythagorean triples, for all $k \leq 7824$, but not for $k = 7825$. 7) Burnside conjecture: Every finitely generated group with period $n$ is finite. This statement is not true for any odd $n \geq 667$. 8) Otto Schmidt conjecture: If all proper subgroups of a group $G$ are isomorphic to $C_p$, where $p$ is a fixed prime number, then $G$ is finite. Alexander Olshanskii proved that there are continuum many non-isomorphic counterexamples to this conjecture for any $p > 10^{75}$.
9) Von Neumann conjecture: Any non-amenable group has a free subgroup of rank 2. The least known finitely presented counterexample has 3 generators and 9 relators. 10) Word problem conjecture: The word problem is solvable for any finitely generated group. The "least" counterexample known has 12 generators. 11) Leinster conjecture: Any Leinster group has even order. The least counterexample known has order 355433039577. 12) Rotman conjecture: The automorphism groups of all finite groups not isomorphic to $C_2$ have even order. The first counterexample found has order 78125. The least counterexample has order 2187: it is the automorphism group of a group of order 729. 13) Rose conjecture: Any nontrivial complete finite group has even order. The least counterexample known has order 788953370457. 14) Hilton conjecture: The automorphism group of a non-abelian group is non-abelian. The least counterexample known has order 64. 15) Hughes conjecture: Suppose $G$ is a finite group and $p$ is a prime number. Then $[G : \langle\{g \in G| g^p \neq e\}\rangle] \in \{1, p, |G|\}$. The least known counterexample has order 142108547152020037174224853515625. 16) Moreto conjecture: Let $S$ be a finite simple group and $p$ the largest prime divisor of $|S|$. If $G$ is a finite group with the same number of elements of order $p$ as $S$ and $|G| = |S|$, then $G \cong S$. The first counterexample pair constructed is formed by groups of order 20160 (these groups are $A_8$ and $L_3(4)$). 17) This false statement is not a conjecture, but rather a popular mistake made by many people who have just started learning group theory: All elements of the commutator subgroup of any finite group are commutators. The least counterexample has order 96. If the numbers mentioned in this text do not impress you, please do not feel disappointed: there are complex combinatorial objects "hidden" behind them.
I want to use the $\LaTeX$ font in graphics. For text and numbers, this is easily done with FontFamily -> "CMU Serif" (on Ubuntu), like this:

Plot[Sin[x], {x, -Pi, Pi}, LabelStyle -> Directive[16, FontFamily -> "CMU Serif"], AxesLabel -> {"Greek: \[Alpha],\[Beta],\[Gamma]", "Fancy font!"}]

The English text and numbers come out in the LaTeX Computer Modern font, which is great, but the Greek does not. For comparison, it should be

This is because LaTeX does not use Computer Modern for Greek letters, but rather a font which (on Ubuntu, at least) is called cmmi10, as I found by looking at the PDF properties. The Greek letters correspond to characters in the range 161-195, as seen in this picture (obtained from the LibreOffice Special Character menu)

but sadly only some of them work. The output of

Style[FromCharacterCode[Range[161, 195]], FontFamily -> "cmmi10"]

is

So for some reason $\beta, \epsilon, \zeta$ and a few more display correctly, but the others don't. Is there any way to fix this?
We say that the language $L$ is Cook-Levin deterministically reducible to XOR satisfiability in polynomial time if and only if for each word $w\in\Sigma^*$: $w\in L\iff f(w)\in \mathrm{XORSAT}$, where $\Sigma=\{0,1\}$ and $f:\Sigma^*\rightarrow\Sigma^*$ is the function in the Cook-Levin reduction that was introduced in 1971. In other words, if the language $L$ is Cook-Levin deterministically reducible to XOR satisfiability in polynomial time and $M$ is a nondeterministic Turing machine that decides $L$ in polynomial time, then for each word $w\in\Sigma^*$ the formula $\Phi_{M,w}$ is satisfiable if and only if $M$ accepts $w$ (by the Cook-Levin theorem), and moreover $\Phi_{M,w}$ is in XNF, i.e. XOR normal form. A boolean formula is in XOR normal form (XNF) if and only if it is a conjunction of clauses and each clause is a XOR of literals.

It is well known, by the Cook-Levin theorem, that every NP language is Cook-Levin deterministically reducible to CNF satisfiability in polynomial time: if $L$ is an arbitrary NP language and $M$ is a nondeterministic Turing machine that decides $L$ in polynomial time, then for each word $w\in\Sigma^*$ we have $w\in L\iff f(w)\in \mathrm{CNFSAT}$, the formula $\Phi_{M,w}$ is satisfiable if and only if $M$ accepts $w$, and $\Phi_{M,w}$ is necessarily in CNF (conjunctive normal form), that is, a conjunction of clauses in which each clause is a disjunction of literals. But I don't know whether there exists a special language in NP for which the Cook-Levin reduction produces not merely a SAT instance, or even just a CNFSAT instance, but actually a XORSAT instance. So my question is: does there exist a special language in NP such that, after applying the Cook-Levin reduction, we get a XORSAT instance for each word $w\in\Sigma^*$? Note that I don't want the function of the reduction to be the identity, and the NP language to be found, which is Cook-Levin reducible to XORSAT in polynomial time, must not be XORSAT itself.
Also, the function of the Cook-Levin reduction is not the identity at all.
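For background on why this question is interesting: XORSAT, unlike CNFSAT, is decidable in polynomial time, because each XOR clause is a linear equation over $GF(2)$ and the whole instance can be decided by Gaussian elimination. A minimal sketch of this (my own illustration, not part of the question; the clause encoding is an assumption):

```python
def xorsat(clauses, n):
    """Decide a XORSAT instance by Gaussian elimination over GF(2).

    clauses: list of (variables, parity) pairs; `variables` holds 1-based
    variable indices and `parity` is the required XOR of their values
    (a negated literal just flips the parity bit).
    n: number of variables.
    """
    # Encode each clause as an (n+1)-bit integer: n coefficient bits
    # plus the right-hand-side parity bit at position n.
    rows = []
    for variables, parity in clauses:
        row = parity << n
        for v in variables:
            row ^= 1 << (v - 1)
        rows.append(row)
    # Forward elimination, one column at a time.
    for col in range(n):
        pivot = next((r for r in rows if (r >> col) & 1), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> col) & 1 else r for r in rows]
    # After elimination every leftover row is 0 (redundant) or the
    # contradictory equation 0 = 1, encoded as the bare bit 1 << n.
    return all(r != 1 << n for r in rows)

# x1 ^ x2 = 1, x2 ^ x3 = 1, x1 ^ x3 = 1 is unsatisfiable (XORing the
# three equations gives 0 = 1); dropping one clause makes it satisfiable.
print(xorsat([([1, 2], 1), ([2, 3], 1), ([1, 3], 1)], 3))  # False
print(xorsat([([1, 2], 1), ([2, 3], 1)], 3))               # True
```

So a language whose Cook-Levin image always landed in XORSAT would, in particular, be decidable in polynomial time by this routine.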
Problem

I'm trying to understand this with a view towards the etale fundamental group, where we can't talk about loops. What I'm missing is how the fundamental group functor should work on morphisms, without mentioning loops.

Naive attempt

Let's say we have a map $X \rightarrow Y$ (of topological spaces, schemes, what have you). Let's say $\tilde Y$ is $Y$'s universal cover (in the case of schemes, this only exists as a pro-object, and only in some cases, but for simplicity assume it exists) and $\tilde X$ is $X$'s universal cover. My first, naive, approach was the following: take $X \times_Y \tilde Y$. This is a cover of $X$ (etale is invariant under base change; again, $\tilde Y$ isn't really etale over $Y$ because it's not finite, but once we have the topological case down, ironing out the arithmetic details should be easy). So we have a map from $\tilde X$ to $X \times_Y \tilde Y$. Now, since $\tilde Y$ over $Y$ was Galois (= normal, for the topologists; with group of deck transformations $\pi_1(Y, y)$), so is $X \times_Y \tilde Y$ over $X$. With what group? It seems (and correct me if I'm wrong) that this will always be some quotient of $\pi_1(Y,y)$ (meaning that the group action of $\pi_1(Y,y)$ on $X \times_Y \tilde Y$, as a map $\pi_1(Y,y) \times X \times_Y \tilde Y \rightarrow (X \times_Y \tilde Y) \times_X (X \times_Y \tilde Y)$, is surjective but not necessarily an immersion). Since $\tilde X$ maps to $X \times_Y \tilde Y$, we get a natural map $\pi_1(X,x) \twoheadrightarrow \mathrm{Aut}_X(X \times_Y \tilde Y)$, where, as we said, $\mathrm{Aut}_X(X \times_Y \tilde Y)$ is a quotient of $\pi_1(Y,y)$. This is not going to work. What is the right definition of how the fundamental group functor acts on morphisms, via a deck-transformations approach?
Integrate over the region in the first octant above the parabolic cylinder and below the paraboloid. I could not get the limits right even though I tried many; I still could not get it.

We can set up the integral as follows: $$ \iiint F(x,y,z)\,dz\,dy\,dx$$ For the $z$ integration, the bounds run from the lower surface ($z=y^2$) to the upper surface ($z=8-2x^2-y^2$). So now we have $$ \iint \!\!\int_{y^2}^{8-2x^2-y^2}(8xz)\, dz\,dy\,dx $$ Now we can think of the integral as being projected onto the $xy$-plane, where $z=0$. Setting the two functions of $x$ and $y$ equal to each other and simplifying, we get $x^2+y^2=4$. To find the $y$ bounds, we solve for $y$ in terms of $x$; thus $y=\sqrt{4-x^2}$. Next, we need to integrate along the $x$ axis where $y=0$: this runs from $0$ to $2$. Our integral ends up being $$\int_0^2\!\!\!\int_0^{\sqrt{4-x^2}}\!\!\int_{y^2}^{8-2x^2-y^2}(8xz)\,dz\,dy\,dx$$ It has been a while since I have done these, so please correct me if there is an error or a better way to do it.
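As a sanity check (my own addition, not part of the original answer), a brute-force midpoint-rule evaluation agrees with the iterated integral above, whose exact value works out to $4096/21 \approx 195.05$:

```python
def triple_midpoint(n=80):
    """Midpoint-rule approximation of the iterated integral
    int_0^2 int_0^sqrt(4-x^2) int_{y^2}^{8-2x^2-y^2} 8xz dz dy dx
    using n subdivisions along each axis."""
    total = 0.0
    dx = 2.0 / n
    for i in range(n):
        x = (i + 0.5) * dx
        ymax = (4 - x * x) ** 0.5
        dy = ymax / n
        for j in range(n):
            y = (j + 0.5) * dy
            zlo, zhi = y * y, 8 - 2 * x * x - y * y
            dz = (zhi - zlo) / n
            for k in range(n):
                z = zlo + (k + 0.5) * dz
                total += 8 * x * z * dz * dy * dx
    return total

print(triple_midpoint())  # ~ 195.05 (exact value is 4096/21)
```

Working the integral by hand: the inner $z$-integral simplifies to $16x(4-x^2)(4-x^2-y^2)$, the $y$-integral then gives $\frac{32}{3}x(4-x^2)^{5/2}$, and substituting $u=4-x^2$ yields $4096/21$, matching the numerical result.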
$v$ should be the total number of parameters (constants + AR + MA + GARCH + ARCH). I disagree with @kiwiakos: the Student $t(df)$ distribution is used because we compute the statistic with standard errors, which are estimates of standard deviations (not the true standard deviations). That is the reason why we use the Student t test, even though the asymptotic parameter distribution is Gaussian. The formula for the p-value is then:

Example: Coeff = 0.15, Std Error = 0.064 (or robust std errors). The t-statistic is: 0.15/0.064 = 2.3438 (not a p-statistic, because we use the standard error!)

t-prob = p-value = 2*(1-tcdf( | 2.3438 | , n-v)) (a p-value with a Student t distribution, not a p-value with a normal distribution!)

EDIT

This is not specific to GARCH parameters but belongs to the general theory of test statistics. The main idea: we should use the z test only if there is no uncertainty regarding the population variance. However, this is rarely the case, so the p-values are obtained using the Student t distribution. If the sample size is large enough, the normal distribution (i.e. the z test) can be used.

Let $X$ be a Gaussian random variable, and let $\bar{X}$ be the sample mean. The standardized statistic $z$ is normally distributed (see the central limit theorem) and is given by: $z = (\bar{x} - m_{0} )/ \sigma_{\bar{X}} $ where $\sigma_{\bar{X}} $ is the standard deviation of the sample mean. We can compute this standard deviation from the population variance $\sigma_{X}^{2}$; it is given by: \begin{equation*}\begin{aligned}\sigma_{\bar{X}}^{2} &= \operatorname{var}\{\dfrac{1}{n}(X_{1}+\ldots + X_{n})\}\\&=\dfrac{1}{n^2} \operatorname{var}\{ X_{1}+\ldots + X_{n} \}\\&=\dfrac{1}{n^2} (\sigma_{X}^{2}+ \ldots+ \sigma_{X}^{2})\\&=\dfrac{1}{n^2} (n\sigma_{X}^{2})\\&= \frac{\sigma_{X}^{2}}{n}\end{aligned}\end{equation*} However, since we don't know the population variance $\sigma^{2}_{X}$, we need to employ an estimate of it, and this introduces some uncertainty.
To approximate the population variance $\sigma_{X}^{2}$, we (generally) employ the sample variance, given by: \begin{equation*}s_{X}^{2}=\dfrac{1}{n-1}\sum_{i=1}^{n}(X_{i}-\bar{X})^{2}\end{equation*} Then the variance of the sample mean is approximated as follows: $\widehat{ \sigma^{2}_{\bar{X}}} = \frac{s_{X}^{2} }{ n}$ (instead of $ \frac{\sigma_{X}^{2}}{n}$ ). So the standard errors (which take this uncertainty into account) are: $ se = \sqrt{\widehat{ \sigma^{2}_{\bar{X}}} } = \frac{s_{X} }{\sqrt{n}}$ and the z test is modified in the following way: $z = \frac{ \bar{x} - m_{0} }{ \sigma_{\bar{X}} } $ becomes $t = \frac{ \bar{x} - m_{0} }{ se } $, and $t$ will be Student t distributed with (n - number of parameters) degrees of freedom (see here). The fact that we employ the standard errors instead of the standard deviation turns the Z test into a t test.

When we use the MLE method it is the same: $X$ is replaced by the estimated parameters (which are asymptotically Gaussian random variables). We don't use their true standard deviations but an estimate of them. The Hessian gives us the standard errors, not the standard deviations (because there is uncertainty: we don't observe the population but a sample). So we should employ the t test.

Note that, when $n$ (the number of observations) increases, the Student t distribution becomes closer to the normal distribution, and then the p-values obtained with the normal distribution or the Student t distribution become the same (cf. @John's comment). This is logical because the uncertainty decreases as the number of observations increases.
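To make the recipe concrete, here is a small sketch (my own; the sample size $n=500$ and $v=4$ parameters are assumptions purely for illustration) that computes the two-sided p-value $2(1 - F_t(|t|;\, n-v))$ with a pure-Python Student t CDF obtained by numerically integrating the density:

```python
from math import exp, lgamma, pi, sqrt

def t_cdf(x, df, steps=20000):
    """Student-t CDF for x >= 0, by midpoint-rule integration of the
    density on [0, x]; lgamma avoids overflow for large df."""
    c = exp(lgamma((df + 1) / 2) - lgamma(df / 2)) / sqrt(df * pi)
    h = x / steps
    area = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        area += c * (1 + u * u / df) ** (-(df + 1) / 2) * h
    return 0.5 + area

def two_sided_p(coeff, se, df):
    """p-value for H0: parameter = 0, using the t statistic coeff/se."""
    t = abs(coeff / se)
    return 2 * (1 - t_cdf(t, df))

# The example from the answer: coeff = 0.15, se = 0.064, with an
# assumed n = 500 observations and v = 4 estimated parameters.
p = two_sided_p(0.15, 0.064, df=500 - 4)
print(f"t = {0.15 / 0.064:.3f}, p = {p:.3f}")  # t ~ 2.344, p ~ 0.019
```

With 496 degrees of freedom the t-based p-value is already almost identical to the normal-based one, illustrating the last paragraph above.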
ISSN: 1930-8337 eISSN: 1930-8345

Inverse Problems & Imaging, February 2009, Volume 3, Issue 1

Abstract: We show that the Dirichlet-to-Neumann operator of the Laplacian on an open subset of the boundary of a connected compact Einstein manifold with boundary determines the manifold up to isometries. Similarly, for connected conformally compact Einstein manifolds of even dimension $n+1,$ we prove that the scattering matrix at energy $n$ on an open subset of its boundary determines the manifold up to isometries.

Abstract: A positivity principle for parabolic integro-differential equations is proved. By means of this principle, uniqueness, existence and stability for an inverse source problem and two inverse coefficient problems are established.

Abstract: In this work we wish to recover an unknown image from a blurry, or noisy-blurry version. We solve this inverse problem by energy minimization and regularization. We seek a solution of the form $u + v$, where $u$ is a function of bounded variation (cartoon component), while $v$ is an oscillatory component (texture), modeled by a Sobolev function with negative degree of differentiability. We give several results of existence and characterization of minimizers of the proposed optimization problem. Experimental results show that this cartoon + texture model better recovers textured details in natural images, by comparison with the more standard models where the unknown is restricted only to the space of functions of bounded variation.

Abstract: An iterative search method is proposed for obtaining orientation maps inside polycrystals from three-dimensional X-ray diffraction (3DXRD) data. In each step, detector pixel intensities are calculated by a forward model based on the current estimate of the orientation map. The pixel at which the experimentally measured value most exceeds the simulated one is identified.
This difference can only be reduced by changing the current estimate at a location from a relatively small subset of all possible locations in the estimate and, at each such location, an increase at the identified pixel can only be achieved by changing the orientation in only a few possible ways. The method selects the location/orientation pair indicated as best by a function that measures data consistency combined with prior information on orientation maps. The superiority of the method to a previously published forward projection Monte Carlo optimization is demonstrated on simulated data.

Abstract: Bayesian solution of an inverse problem for the indirect measurement $M = AU + \varepsilon$ is considered, where $U$ is a function on a domain of $\R^d$. Here $A$ is a smoothing linear operator and $\varepsilon$ is Gaussian white noise. The data is a realization $m_k$ of the random variable $M_k = P_k A U + P_k \varepsilon$, where $P_k$ is a linear, finite dimensional operator related to the measurement device. To allow computerized inversion, the unknown is discretized as $U_n = T_n U$, where $T_n$ is a finite dimensional projection, leading to the computational measurement model $M_{kn} = P_k A U_n + P_k \varepsilon$. Bayes' formula then gives the posterior distribution $$\pi_{kn}(u_n \mid m_{kn}) \sim \Pi_n(u_n)\exp\left(-\tfrac{1}{2}\|m_{kn} - P_k A u_n\|_2^2\right)$$ in $\R^{d_n}$, and the mean $u_{kn} := \int u_n \, \pi_{kn}(u_n \mid m_{kn})\, du_n$ is considered as the reconstruction of $U$. We discuss a systematic way of choosing prior distributions $\Pi_n$ for all $n \geq n_0 > 0$ by achieving them as projections of a distribution in an infinite-dimensional limit case. Such a choice of prior distributions is discretization-invariant in the sense that the priors $\Pi_n$ represent the same a priori information for all $n$ and that the mean $u_{kn}$ converges to a limit estimate as $k, n \to \infty$. Gaussian smoothness priors and wavelet-based Besov space priors are shown to be discretization invariant.
In particular, Bayesian inversion in dimension two with a $B^1_{11}$ prior is related to penalizing the $\ell^1$ norm of the wavelet coefficients of $U$.

Abstract: As a rule of thumb, sampling methods for inverse scattering problems suffer from interior eigenvalues of the obstacle. Indeed, throughout the history of such algorithms one meets the phenomenon that if the wave number meets some resonance frequency of the scatterer, then those methods can only be shown to work under suitable modifications. Such modifications often require a-priori knowledge, thereby corrupting the main advantage of sampling methods. It was common belief that transmission eigenvalues play a role corresponding to Dirichlet or Neumann eigenvalues in this respect. We show that this is not the case for the Factorization method: when applied to inverse medium scattering problems this method is stable at transmission eigenvalues.

Abstract: We link boundary control theory and inverse spectral theory for the Schrödinger operator $H=-\partial _{x}^{2}+q( x) $ on $L^{2}( 0,\infty) $ with Dirichlet boundary condition at $x=0.$ This provides a shortcut to some results on inverse spectral theory due to Simon, Gesztesy-Simon and Remling. The approach also has a clear physical interpretation in terms of boundary control theory for the wave equation.
Problem: A vertex of one square is pegged to the centre of an identical square, and the overlapping area is blue. One of the squares is then rotated about the vertex and the resulting overlap is red. Which area is greater? Let the area of each large square be exactly $1$ unit squared. Then, the area of the blue square is exactly $1/4$ units squared. The same would apply to the red area if you were to rotate the square $k\cdot 45$ degrees for a natural number $k$. Thus, I am assuming that no area is greater, and that it is a trick question $-$ although the red area might appear to be greater than the blue area, they are still the same: $1/4$. But how can it be proven? I know the area of a triangle with a base $b$ and a height $h\perp b$ is $bh\div 2$. Since the area of each square is exactly $1$ unit squared, then each side would also have a length of $1$. Therefore, the height of the red triangle area is $1/2$, and so $$\text{Red Area} = \frac{b\left(\frac 12\right)}{2} = \frac{b}{4}.$$ According to the diagram, the square has not rotated a complete $45$ degrees, so $b < 1$. It follows, then, that $$\begin{align} \text{Red Area} &< \frac 14 \\ \Leftrightarrow \text{Red Area} &< \text{Blue Area}.\end{align}$$ Assertion: To conclude, the $\color{blue}{\text{blue}}$ area is greater than the $\color{red}{\text{red}}$ area. Is this true? If so, is there another way of proving the assertion? Thanks to users who commented below, I did not take account of the fact that the red area is not a triangle $-$ it does not have three sides! This now leads back to my original question on whether my hypothesis was correct. This question is very similar to this post. Source: The Golden Ratio (why it is so irrational) $-$ Numberphile from $14$:$02$.
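For what it's worth, a numerical experiment (my own sketch, not part of the question) supports the original hypothesis rather than the triangle argument: pegging a corner of a unit square at the centre of the fixed square $[-\tfrac12,\tfrac12]^2$, rotating it, and computing the overlap by polygon clipping (Sutherland-Hodgman) plus the shoelace formula gives exactly $1/4$ at every angle, so the red and blue areas are equal.

```python
import math

def clip_halfplane(poly, a, b):
    """Sutherland-Hodgman step: keep the part of `poly` with a . p <= b."""
    out = []
    for i in range(len(poly)):
        cur, nxt = poly[i], poly[(i + 1) % len(poly)]
        cin = a[0] * cur[0] + a[1] * cur[1] <= b
        nin = a[0] * nxt[0] + a[1] * nxt[1] <= b
        if cin:
            out.append(cur)
        if cin != nin:  # the edge crosses the boundary line
            t = (b - a[0] * cur[0] - a[1] * cur[1]) / (
                a[0] * (nxt[0] - cur[0]) + a[1] * (nxt[1] - cur[1]))
            out.append((cur[0] + t * (nxt[0] - cur[0]),
                        cur[1] + t * (nxt[1] - cur[1])))
    return out

def overlap_area(theta):
    """Overlap of [-1/2,1/2]^2 with a unit square whose corner is pegged
    at the centre and which is rotated by theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    # corners of the rotated square, starting at the pegged vertex
    poly = [(0.0, 0.0), (c, s), (c - s, s + c), (-s, c)]
    for a, b in [((1, 0), 0.5), ((-1, 0), 0.5), ((0, 1), 0.5), ((0, -1), 0.5)]:
        poly = clip_halfplane(poly, a, b)
    area = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        area += x1 * y2 - x2 * y1  # shoelace formula
    return abs(area) / 2

for deg in (0, 15, 30, 45, 60, 85):
    print(deg, round(overlap_area(math.radians(deg)), 6))  # always 0.25
```

The classical proof behind this: the two edges leaving the pegged corner cut the fixed square into pieces that a quarter-turn about its centre maps onto one another, so the overlap is always a quarter of the square, whether or not it is a triangle.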
Difference between revisions of "Moser-lower.tex"

Revision as of 04:51, 19 June 2009

\section{Lower bounds for the Moser problem}\label{moser-lower-sec}

In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ entries equal to $2$ (see Section \ref{notation-sec} for the definition) is a Moser set, so that \begin{equation}\label{cin} c'_{n,3}\geq \vert S_{i,n}\vert \end{equation} holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq (C-o(1)) 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$; in fact \eqref{cin} gives \eqref{cpn3} with $C := \sqrt{\frac{9}{4\pi}}$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$.
Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or semispheres, or by applying elementary results from coding theory. Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$. As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound \begin{equation}\label{cn3-low} c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}. \end{equation} It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$. Asymptotically, Stirling's formula and \eqref{cn3-low} then give the lower bound \eqref{cpn3} with $C = \frac{3}{2} \times \sqrt{\frac{9}{4\pi}}$, which is asymptotically $50\%$ better than the bound \eqref{cin}. The work of Chv\'{a}tal \cite{chvatal1} already contained a refinement of this idea, which we here translate into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$ and minimal distance $d$. Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right).
\end{equation} With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }} Generally, $A(n,1)=2^n, A(n,2)=2^{n-1}, A(n-1,2e-1)=A(n,2e)$, and $A(n,d)=2$ if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html \textbf{include in references? or another book with explicit values of $A(n,d)$} For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344.
\end{array} \] With $k=3$ \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With $k=4$ \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\] It should be pointed out that these bounds are even numbers, so the fact that $c'_{4,3}=43$ shows that one cannot generally expect this lower bound to give the optimum. The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that using Stirling's formula and explicit bounds on $A(n,d)$ the best value of the constant $C$ in equation \eqref{cpn3} known to date could be worked out, but we refrain from doing this here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$, Chv\'{a}tal \cite{chvatal1} proved that the expression on the right-hand side of \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$, so the refinement described above gains only a constant factor over the initial construction. For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}.
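Both formulas above lend themselves to a mechanical check (my own illustration, not part of the draft; the dictionary simply transcribes the $A(n,d)$ table): maximising $\binom{n+1}{i}2^{i-1}$ over $i$ reproduces $6, 16, 40, 120, 336$, and the Chv\'atal sums reproduce every bound listed.

```python
from math import comb

def semisphere_bound(n):
    """max over 1 <= i <= n of C(n+1, i) * 2^(i-1), the semisphere bound."""
    return max(comb(n + 1, i) * 2 ** (i - 1) for i in range(1, n + 1))

# Values of A(n, d) transcribed from the table above.
A = {
    (1, 1): 2,
    (2, 1): 4, (2, 2): 2,
    (3, 1): 8, (3, 2): 4, (3, 3): 2,
    (4, 1): 16, (4, 2): 8, (4, 3): 2, (4, 4): 2,
    (5, 1): 32, (5, 2): 16, (5, 3): 4, (5, 4): 2, (5, 5): 2,
    (6, 1): 64, (6, 2): 32, (6, 3): 8, (6, 4): 4, (6, 5): 2, (6, 6): 2,
    (7, 1): 128, (7, 2): 64, (7, 3): 16, (7, 4): 8, (7, 5): 2, (7, 6): 2,
    (8, 1): 256, (8, 2): 128, (8, 3): 20, (8, 4): 16, (8, 5): 4, (8, 6): 2,
    (9, 1): 512, (9, 2): 256, (9, 3): 40, (9, 4): 20, (9, 5): 6, (9, 6): 4,
    (10, 1): 1024, (10, 2): 512, (10, 3): 72, (10, 4): 40, (10, 5): 12,
    (11, 1): 2048, (11, 2): 1024, (11, 3): 144, (11, 4): 72, (11, 5): 24,
    (12, 1): 4096, (12, 2): 2048, (12, 3): 256, (12, 4): 144, (12, 5): 32,
    (12, 6): 24,
    (13, 1): 8192, (13, 2): 4096, (13, 3): 512, (13, 4): 256, (13, 5): 64,
    (13, 6): 32,
}

def chvatal_bound(n, k):
    """The Chvatal lower bound: sum_{j=0}^{k} C(n, j) * A(n-j, k-j+1)."""
    return sum(comb(n, j) * A[(n - j, k - j + 1)] for j in range(k + 1))

print([semisphere_bound(n) for n in range(2, 7)])
for n, k in [(4, 2), (5, 2), (6, 2), (7, 3), (8, 3), (9, 3),
             (10, 4), (11, 4), (12, 4), (13, 5)]:
    print(n, k, chvatal_bound(n, k))
```

In particular the $k=5$ entry only needs $A(13,6)=32$ (note that the table row for length 13 should read $A(13,6)$, not $A(12,6)$), and the script returns $539168$ for $c'_{13,3}$, matching the text.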
A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following: Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$: \begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize} This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above. A modification of the construction in \eqref{cn3-low} leads to a slightly better lower bound. Observe that if $B \subset \Delta_n$, then the set $A_B := \bigcup_{\vec a \in B} \Gamma_{a,b,c}$ is a Moser set as long as $B$ does not contain any ``isosceles triangles'' $(a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c)$ for any $r,s \geq 0$ not both zero; in particular, $B$ cannot contain any ``vertical line segments'' $(a+r,b,c+r), (a,b+2r,c)$. An example of such a set is provided by selecting $0 \leq i \leq n-3$ and letting $B$ consist of the triples $(a, n-i, i-a)$ when $a \neq 2 \mod 3$, $(a,n-i-1,i+1-a)$ when $a \neq 1 \mod 3$, $(a,n-i-2,i+2-a)$ when $a=0 \mod 3$, and $(a,n-i-3,i+3-a)$ when $a=2 \mod 3$.
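The triangle-free property of this four-sphere construction can be verified mechanically. The sketch below (my own addition; it reads the first residue class as $a \not\equiv 2 \pmod 3$, which appears to be the intended condition, and uses a brute-force search for the forbidden configurations):

```python
def build_B(n, i):
    """The candidate set B from the construction above, with residue
    classes a != 2, a != 1, a == 0, a == 2 (mod 3) on the four spheres."""
    B = []
    B += [(a, n - i, i - a) for a in range(i + 1) if a % 3 != 2]
    B += [(a, n - i - 1, i + 1 - a) for a in range(i + 2) if a % 3 != 1]
    B += [(a, n - i - 2, i + 2 - a) for a in range(i + 3) if a % 3 == 0]
    B += [(a, n - i - 3, i + 3 - a) for a in range(i + 4) if a % 3 == 2]
    return set(B)

def has_isosceles(B):
    """Look for (a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c) all in B with
    r, s >= 0 not both zero (r = s gives the vertical segments)."""
    for (a2, b2, c2) in B:              # candidate apex (a, b+r+s, c)
        for (a1, b1, c1) in B:          # candidate base point (a+r, b, c+s)
            rs, r, s = b2 - b1, a1 - a2, c1 - c2
            if rs > 0 and r >= 0 and s >= 0 and r + s == rs:
                if (a2 + s, b1, c2 + r) in B:
                    return True
    return False

for n in (9, 12, 15):
    B = build_B(n, 2 * n // 3)
    print(n, len(B), has_isosceles(B))  # no isosceles triangles found
```

The third assertion in the check below confirms that the search does detect a vertical segment when one is present, so the negative results are meaningful.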
Asymptotically, this set occupies about two thirds of the spheres $S_{i,n}$, $S_{i+1,n}$ and one third of the spheres $S_{i+2,n}, S_{i+3,n}$ and (setting $i$ close to $2n/3$) gives a lower bound \eqref{cpn3} with $C = 2 \sqrt{\frac{9}{4\pi}}$, which is thus superior to the previous constructions. An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}: \begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline n & lower bound & n & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_n$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure} More complete data, including the list of optimisers, can be found at {\tt http://abel.math.umu.se/~klasm/Data/HJ/}. This indicates that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. The lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}. \begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure} Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$.
If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=10$ case, the set $$B = \{(0,0,10),(0,2,8),(0,3,7),(0,4,6),(1,4,5),(2,1,7),(2,3,5),(3,2,5),(3,3,4),(3,4,3),(4,4,2),(5,1,4),(5,3,2),(6,2,2),(6,3,1),(6,4,0),(8,1,1),(9,0,1),(9,1,0)\}$$ generates the lower bound $c'_{10,3} \geq 24786$ given above (and, up to the reflection $a \leftrightarrow c$, is the only such set that does so); but by adding twelve elements from $\Gamma_{5,0,5}$ one can increase the lower bound slightly to $24798$. However, we have been unable to locate a lower bound which is asymptotically better than \eqref{cpn3}. Indeed, any method based purely on the $A_B$ construction cannot do asymptotically better than the previous constructions: \begin{proposition} Let $B \subset \Delta_n$ be such that $A_B$ is a Moser set. Then $|A_B| \leq (2 \sqrt{\frac{9}{4\pi}} + o(1)) \frac{3^n}{\sqrt{n}}$. \end{proposition} \begin{proof} By the previous discussion, $B$ cannot contain any pair of the form $(a,b+2r,c), (a+r,b,c+r)$ with $r>0$. In other words, for any $-n \leq h \leq n$, $B$ can contain at most one triple $(a,b,c)$ with $c-a=h$. From this and \eqref{cn3}, we see that $$ |A_B| \leq \sum_{h=-n}^n \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!}.$$ From the Chernoff inequality (or the Stirling formula computation below) we see that $\frac{n!}{a! b! c!} \leq \frac{1}{n^{10}} 3^n$ unless $a,b,c = n/3 + O( n^{1/2} \log^{1/2} n )$, so we may restrict to this regime, which also forces $h = O( n^{1/2} \log^{1/2} n)$.
If we write $a = n/3 + \alpha$, $b = n/3 + \beta$, $c = n/3+\gamma$ and apply Stirling's formula $n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n}$, we obtain $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) - (\frac{n}{3}+\beta) \log (1 + \frac{3\beta}{n} ) - (\frac{n}{3}+\gamma) \log (1 + \frac{3\gamma}{n} ) ).$$ From Taylor expansion one has $$ (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) = \alpha + \frac{3}{2} \frac{\alpha^2}{n} + o(1)$$ and similarly for $\beta,\gamma$; since $\alpha+\beta+\gamma=0$, we conclude that $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{2n} (\alpha^2+\beta^2+\gamma^2) ).$$ If $c-a=h$, then $\alpha^2+\beta^2+\gamma^2 = \frac{3\beta^2}{2} + \frac{h^2}{2}$. Thus we see that $$ \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!} \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{4n} h^2 ).$$ Using the integral test, we thus have $$ |A_B| \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \int_\R \exp( - \frac{3}{4n} x^2 )\ dx.$$ Since $\int_\R \exp( - \frac{3}{4n} x^2 )\ dx = \sqrt{\frac{4\pi n}{3}}$, we obtain the claim. \end{proof}
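The constant $2\sqrt{9/4\pi}$ in the proposition can be sanity-checked numerically (my own addition, not part of the proof): already for a moderate $n$, the sum $\sum_h \max_{c-a=h} \frac{n!}{a!\,b!\,c!}$ is close to $2\sqrt{9/4\pi}\; 3^n/\sqrt{n}$.

```python
from math import comb, pi, sqrt

def row_max_sum(n):
    """The bound from the proof: sum over h of the largest multinomial
    coefficient n!/(a! b! c!) with a+b+c = n and c-a = h."""
    total = 0
    for h in range(-n, n + 1):
        best = 0
        for a in range(n + 1):
            c, b = a + h, n - 2 * a - h
            if c >= 0 and b >= 0:
                best = max(best, comb(n, a) * comb(n - a, b))
        total += best
    return total

n = 60
asymptote = 2 * sqrt(9 / (4 * pi)) * 3 ** n / sqrt(n)
print(row_max_sum(n) / asymptote)  # ratio close to 1
```

The ratio approaches $1$ from below as $n$ grows, consistent with the $o(1)$ correction in the statement.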
JEE Advanced is just 5 days away. Best of luck! to all those who are appearing. Also, after 25th May, please tell me about your experience, i.e., how was the paper and the expected score!

Note by Avineil Jain, 5 years, 4 months ago

Comments:

All the best to everyone!
All the best to you as well and all others who are taking the exam!!!
Best of luck to you too! Feeling sooooo nervous! Best of luck to all!!!
well I have 370 days to go . lucky me ... :-)
All the best Avineil :D !!
Best of Luck to you too!
all the best bro! :)
Thnx
All the best to everybody !! :D maintain the confidence and hit the end just the way you do on brilliant
ALL the best. To u as well, others too.. i will be giving nxt year, would learn frm u all
all the very best to you too!! and to all those who are giving it....
Good luck
Everyone be optimistic and give your best .. ;).. i ll be appearing next year and I hope to crack it
All the best to everyone!! :D
@Avineil Jain Best luck to you all who are going to take JEE Advanced
How was the paper?!
avineil jaiswal hv u tooo cleared JEE advanced??
all the best to everybody who are appearing, from me ......
dude what the heck is JEE? where is it found?
You got 7 downvotes !!! .... I think that's a record .! :P
Its Joint Entrance Examination.... and its found in India .... u cud have searched in Google...
I have the following problem: Let $\Omega$ be a finite or countable set, $\mathfrak{F}$ a $\sigma$-algebra and $\mathbb{P}$ a probability measure. Prove there doesn't exist a collection $(A_i)_{i\in\mathbb{N}}$ of independent events in $(\Omega, \mathfrak{F},\mathbb{P})$ with $\mathbb{P}(A_i)=1/2$ for all $i\in \mathbb{N}.$ First I suppose that the collection exists and then I conclude $$\mathbb{P}\left(\bigcap_{i\geq1}A_i\right)=0$$ and $$\mathbb{P}\left(\bigcup_{i\geq1}A_i\right)=1,$$ but then I don't know how to continue. Also I don't know if this is useful. Can someone give me an idea or help? Thanks in advance :)
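For what it's worth, here is a sketch of one standard way to finish; it uses the countability of $\Omega$ rather than the two limits computed above:

```latex
\text{Fix } \omega \in \Omega \text{ and } n \in \mathbb{N}.
\text{ Let } B_i = A_i \text{ if } \omega \in A_i, \text{ and } B_i = A_i^{c} \text{ otherwise.}
\text{ Complements of independent events are independent, so}
\[
  \mathbb{P}(\{\omega\}) \le \mathbb{P}(B_1 \cap \dots \cap B_n)
  = \prod_{i=1}^{n} \mathbb{P}(B_i) = 2^{-n}.
\]
\text{Letting } n \to \infty \text{ gives } \mathbb{P}(\{\omega\}) = 0
\text{ for every } \omega. \text{ But } \Omega \text{ is finite or countable, so}
\[
  1 = \mathbb{P}(\Omega) = \sum_{\omega \in \Omega} \mathbb{P}(\{\omega\}) = 0,
\]
\text{a contradiction.}
```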
Your justifications are not strong enough. Calculating some values for specific $n$ is not enough. You either have to give specific values for $c$ and $n_0$ such that the definitions for $\mathcal{O}$, $\Omega$, or $\Theta$ are fulfilled, or you have to show that for any given $c$ and $n_0$ there exists an $n \geq n_0$ such that the definition is not fulfilled. To show that $f_1 = \mathcal{O}(f_2)$ you have to give constants $c$ and $n_0$ such that $$ f_1(n) \leq c \cdot f_2(n)$$ for all $n \geq n_0$. To show that $f_1 \neq \mathcal{O}(f_2)$ you have to show that for any constants $c$ and $n_0$ you can give an $n \geq n_0$ such that $$ f_1(n) > c \cdot f_2(n).$$ Similarly for $f_1 = \Omega(f_2)$ and $f_1 \neq \Omega(f_2)$. $f_1 = \Theta(f_2)$ can only hold if $f_1 = \mathcal{O}(f_2)$ and $f_1 = \Omega(f_2)$. Let me guide you through the proof that $f_1 = \mathcal{O}(f_2)$ holds in a) while the other two relations do not. To show that $f_1 = \mathcal{O}(f_2)$ you can set $c = 500$ and $n_0 = 10$. Then $1 \leq \log n$ and, thus, $$ f_1(n) = 1000 n - 100 \leq 1000 n \log n + 3000n = 500 f_2(n)$$ for all $n \geq 10$. This shows that $f_1 = \mathcal{O}(f_2)$. To show that $f_1 \neq \Omega(f_2)$, let constants $c$ and $n_0$ be given. If $n \geq n_0$ is chosen such that $\log n > \frac{501 - 3c}{c}$ and $n > 50$ then \begin{align*} c \cdot f_2(n) - f_1(n) &= c \cdot (2 n \log n + 6n) - 1000n - 100 \\ &= (2 c \log n + 6 c - 1000) n - 100 \\ &> (2 (501 - 3c) + 6c - 1000)n - 100 \\ &= 2n - 100 > 0\end{align*} which is equivalent to $c \cdot f_2(n) > f_1(n)$ for such $n$. Thus, $f_1 \neq \Omega(f_2)$. This directly implies $f_1 \neq \Theta(f_2)$.
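The two claims lend themselves to a quick numerical sanity check (a sketch, assuming as in the answer that $f_1(n)=1000n-100$ and $f_2(n)=2n\log n+6n$ with the natural logarithm):

```python
from math import log

def f1(n): return 1000 * n - 100
def f2(n): return 2 * n * log(n) + 6 * n

# O-direction: with c = 500 and n0 = 10, f1(n) <= 500 f2(n).
assert all(f1(n) <= 500 * f2(n) for n in range(10, 10_000))

# Not-Omega direction: whatever c is, repeatedly doubling n eventually
# gives c * f2(n) > f1(n), because f2/f1 grows like log n.
def witness(c, n0):
    n = max(n0, 51)
    while c * f2(n) <= f1(n):
        n *= 2
    return n

for c in (1, 10, 1000):
    n = witness(c, 100)
    assert n >= 100 and c * f2(n) > f1(n)
```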
Yet Another Generalization of Bottema's Theorem What Might This Be About? Problem Given three points $A$, $B,$ $C,$ and angle $\phi.$ Define $C_A$ to be the rotation of $C$ through angle $\phi$ around $A$ and $C_B$ to be the rotation of $C$ through angle $\phi-\pi$ around $B.$ Let $D$ be the midpoint of $C_A$ and $C_B.$ (This means rotating by $\phi$ and $\pi -\phi$ in opposite directions.) Then $D$ is independent of $C.$ $D$ lies on the circle with diameter $AB,$ and therefore $\angle ADB=90^{\circ}.$ Solution 1 I'll use complex numbers, assuming $A=\alpha,$ $B=\beta,$ $C=\gamma.$ Then $\begin{align}\displaystyle C_{A}&=\alpha + (\gamma -\alpha)e^{i\phi},\\ C_{B}&=\beta + (\gamma -\beta)e^{i(\phi-\pi)}=\beta - (\gamma -\beta)e^{i\phi} \end{align}$ such that $\displaystyle D=\frac{1}{2}\big(\alpha +\beta +e^{i\phi}(\beta -\alpha)\big),$ which is independent of $C(\gamma ).$ This proves the first part, but also the second, because, as $\phi$ changes, $D$ stays on the circle centered at $\displaystyle \frac{\alpha +\beta}{2}$ with radius $\displaystyle \frac{|\beta -\alpha |}{2}.$ Solution 2 A product of rotations is a rotation through the angle which is the sum of the angles of the two constituent rotations. (It could be a translation if the angles add up to a multiple of $360^{\circ}.)$ If we start working backwards, first rotating $C_B$ to $C$ and then $C$ to $C_A,$ we may consider the latter as the image of $C_B$ under the rotation through $180^{\circ},$ which is a central symmetry. The center of that central symmetry depends on the centers of the two rotations but not on the point being rotated, and is thus independent of $C_B$ and, therefore, independent of $C.$ If it were not for the Bottema theorem, how would we go about finding point $D?$ I'll do that for the product of two rotations through angles $\alpha$ and $\beta$ around points $A$ and $B,$ respectively.
The product of rotations rotates $A$ around $B$ through angle $\beta$ into $A'.$ There is a point $B'$ that is brought onto $B$ when rotated around $A$ through angle $\alpha.$ Thus $A'$ is the image of $A$ under the product of two rotations, while $B$ is the image of $B'.$ It follows that the center $D$ of the product of the two rotations lies on the perpendicular bisectors of $AA'$ and $BB'.$ One of these passes through $A,$ the other through $B;$ they meet at $D.$ In triangle $ABD,$ the angle at $A$ is $\alpha /2,$ that at $B$ is $\beta /2.$ The remaining angle at $D$ is therefore $180^{\circ}-(\alpha +\beta )/2,$ such that if $\alpha + \beta =180^{\circ}$ then the angle at $D$ is $90^{\circ}.$ Hubert Shutrick offered a shortcut: "$B$ is not moved by the first rotation and goes to $B'$ with the one round $A,$ so $BAB'$ is an isosceles triangle with $D$ at the midpoint of $BB'.$" Acknowledgment The second solution came up in a discussion with Machó Bónis. Bottema's Theorem Bottema's Theorem An Elementary Proof of Bottema's Theorem Bottema's Theorem - Proof Without Words On Bottema's Shoulders On Bottema's Shoulders II On Bottema's Shoulders with a Ladder Friendly Kiepert's Perspectors Bottema Shatters Japan's Seclusion Rotations in Disguise Four Hinged Squares Four Hinged Squares, Solution with Complex Numbers Pythagoras' from Bottema's A Degenerate Case of Bottema's Configuration Properties of Flank Triangles Analytic Proof of Bottema's Theorem Yet Another Generalization of Bottema's Theorem Bottema with a Product of Rotations Bottema with Similar Triangles Bottema in Three Rotations Bottema's Point Sibling
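The complex-number computation in Solution 1 is easy to verify numerically (a sketch with arbitrary sample points; `rotate` is my helper, not part of the original text — rotating $z$ about $c$ through $\phi$ is $c+(z-c)e^{i\phi}$):

```python
import cmath
import random

def rotate(z, center, phi):
    """Rotate the complex point z about `center` through angle phi."""
    return center + (z - center) * cmath.exp(1j * phi)

random.seed(0)
A, B = 1 + 2j, 4 - 1j
phi = 0.7

# D as predicted by Solution 1.
D_formula = 0.5 * (A + B + cmath.exp(1j * phi) * (B - A))

for _ in range(5):
    C = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    C_A = rotate(C, A, phi)                # rotate C through phi about A
    C_B = rotate(C, B, phi - cmath.pi)     # rotate C through phi - pi about B
    D = 0.5 * (C_A + C_B)                  # midpoint
    assert abs(D - D_formula) < 1e-9       # D does not depend on C

# D lies on the circle with diameter AB, so angle ADB is a right angle.
assert abs(abs(D_formula - (A + B) / 2) - abs(B - A) / 2) < 1e-9
```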
Huge cardinal Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kenneth Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. [1] Contents Definitions Their formulation is similar to that of superstrong cardinals. More precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability. Elementary embedding definitions In what follows, $j:V\to M$ denotes a nontrivial elementary embedding of the universe $V$ into a transitive class $M$ with critical point $\kappa$, and $j^n$ is its $n$-th iterate. $\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$). $\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$). $\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large). $\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$. $\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively. Ultrahuge cardinals A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants. Ultrafilter definition The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2<\dots<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e.
$j$ witnesses n-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are. As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The strength of this condition lies in the fact that the collection of all $x\subseteq\lambda$ of order-type $\kappa$ is in the ultrafilter; that is, every collection containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set." Consistency strength and size Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest): measurable = 0-superstrong = 0-huge; n-superstrong; n-fold supercompact; (n+1)-fold strong, n-fold extendible; (n+1)-fold Woodin, n-fold Vopěnka; (n+1)-fold Shelah; almost n-huge; super almost n-huge; n-huge; super n-huge; ultra n-huge; (n+1)-superstrong. All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n. [1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$.
Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1]; in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every $m<n$. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals. In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. References 1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009 (paperback reprint of the 2003 edition). 2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
Given $\triangle ABC$ with $\angle B = 90^\circ$, $\overline{AC}$ hypotenuse, known points $A = (x_a, y_a)$ and $B = (x_b, y_b)$, and known angle $\angle A = \theta$, how do I find $(x_c, y_c)$? This is easiest with vectors. First calculate the vector from $B$ to $A$. This is simply: $$v_{BA}=(x_a-x_b, y_a-y_b)$$ Rotate that vector by 90 degrees. You do this by swapping its coordinates and negating one of them. Which one you negate doesn't matter, and the choice determines which of the two solutions you get: $$v^\perp_{BA} = (-(y_a-y_b),\ x_a-x_b) \text{ or } (y_a-y_b,\ -(x_a-x_b))$$ Note however that the vector you now have is still of length $|BA|$, though it is now pointing in the right direction to $C$ (if you start from $B$). To scale the vector to the right length we need to multiply it by the factor $\frac{|BC|}{|BA|} = \tan\theta$: $$v_{BC} = v^\perp_{BA} \cdot \tan\theta$$ Lastly you add this vector to the coordinates of $B$ to get the coordinates of $C$: $$(x_c,y_c) = (x_b-(y_a-y_b)\tan\theta, y_b+(x_a-x_b)\tan\theta)$$ or $$(x_c,y_c) = (x_b+(y_a-y_b)\tan\theta, y_b-(x_a-x_b)\tan\theta)$$ Write $$\sin^2\theta=(\frac{BC}{AC})^2=\frac{(x_c-x_b)^2+(y_c-y_b)^2}{(x_c-x_a)^2+(y_c-y_a)^2}\\\cos^2\theta=(\frac{AB}{AC})^2=\frac{(x_b-x_a)^2+(y_b-y_a)^2}{(x_c-x_a)^2+(y_c-y_a)^2}$$ You have two equations with two unknowns. Notice that they are quadratic equations, so you will have two solutions, depending on which side of the $AB$ line you find point $C$. $C$ is on the perpendicular to segment $AB$ at $B$. So $x_c, y_c$ can be parametrized by $\lambda$: $x_c - x_b, y_c - y_b = \lambda \cdot (y_b - y_a, x_a - x_b)$. Now compute the angle $\angle CAB$ as a function of $\lambda$ and solve the equation (in $\lambda$) $\angle CAB = \theta$. Let's move the system of axes to the point $O'(x_a,y_b)$; the coordinates of the points $A, B, C$ will change respectively. Let's call $\angle O'AB=\alpha$. Also $\angle ABO' =90^o-\alpha$, and since the sum of angles along the X axis must be $180^o$, $\angle CBX_c=\alpha$.
From $\triangle O'AB$ we find that $AB = \frac{y_b-y_a}{\cos\alpha}$ (1). From $\triangle CBX_c$ we find that $BC = \frac{x_c-x_b+x_a}{\cos\alpha}$ (2). From $\triangle ABC$ we find that $\tan\theta= \frac{BC}{AB}$ (3). Substituting (1) and (2) into (3): $\tan\theta= \frac{BC}{AB}=\frac{\frac{x_c-x_b+x_a}{\cos\alpha}}{\frac{y_b-y_a}{\cos\alpha}} = \frac{x_c-x_b+x_a}{y_b-y_a}$ (4). From (4), $x_c = (x_b-x_a) + (y_b-y_a)\tan\theta$ (5). If we move back to the original axes (see Picture 1) we need to add $x_a$ to (5), so (5) becomes $x_c = (x_b-x_a) + (y_b-y_a)\tan\theta + x_a = x_b + (y_b-y_a)\tan\theta$ (6). Let's find $y_c$ from the condition that $\triangle O'AB$ is similar to $\triangle X_cBC$, because $\angle O'AB = \angle X_cBC = \alpha$ and $\angle AO'B = \angle BX_cC = 90^o$. So $\frac{O'A}{BX_c}=\frac{O'B}{X_cC}$ (7), or $\frac{y_b-y_a}{x_c-x_b+x_a}=\frac{x_b-x_a}{y_c}$ (7'). From (7'), $y_c=\frac{x_b-x_a}{y_b-y_a} (x_c-x_b+x_a)$ (7''). Substituting (5) into (7''): $y_c=\frac{x_b-x_a}{y_b-y_a} \big((x_b-x_a) + (y_b-y_a)\tan\theta-x_b+x_a\big) = (x_b-x_a)\tan\theta$ (7'''). If we move back to the original axes (see Picture 1) we need to add $y_b$ to (7'''): $y_c=(x_b-x_a)\tan\theta + y_b = y_b + (x_b-x_a)\tan\theta$ (8). Finding the second solution $C'$ I leave to the reader. Questions, edits, comments? Do you know the length of the hypotenuse? If so, then there's actually more information than you need to solve the problem. If you know the hypotenuse and the vertex coordinates of a leg: the midpoint of the hypotenuse is equidistant from every vertex of a right triangle by Thales' Theorem. So by subtracting one equation from the other: $$(x_c-x_b)^2+(y_c-y_b)^2=m^2/4=(x_c-x_a)^2+(y_c-y_a)^2$$ This gives you two solutions, but the resulting triangles are congruent to each other. Alternatively, we know $(x_c,y_c)$ lies on the line through $B$ perpendicular to $AB$. From trigonometry, the distance from $B$ to $C$ is $AB\tan\theta$, where $AB$ is the length of segment $AB$.
$$(x_c,y_c)=(x_b,y_b)+\tan{\theta}\,(y_b-y_a,\ x_a-x_b)$$ (no extra factor of $AB$ is needed: the perpendicular vector $(y_b-y_a,\ x_a-x_b)$ already has length $AB$, so the displacement has length $AB\tan\theta$ as required). It should be possible to derive the solution without having to handle any non-linear equations.
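The vector recipe from the first answer translates directly into code (a sketch; the helper name and sample points are mine):

```python
from math import hypot, tan

def third_vertex(A, B, theta, sign=+1):
    """Given the right-angle vertex B, vertex A, and angle A = theta,
    return C so that angle B = 90 degrees. `sign` picks the solution."""
    xa, ya = A
    xb, yb = B
    # perpendicular to BA, same length as |BA|
    px, py = sign * -(ya - yb), sign * (xa - xb)
    t = tan(theta)
    return (xb + t * px, yb + t * py)

A, B = (1.0, 1.0), (4.0, 2.0)
theta = 0.5
C = third_vertex(A, B, theta)

# Right angle at B: BA . BC = 0.
BA = (A[0] - B[0], A[1] - B[1])
BC = (C[0] - B[0], C[1] - B[1])
assert abs(BA[0] * BC[0] + BA[1] * BC[1]) < 1e-9

# Angle at A equals theta: tan(theta) = |BC| / |BA|.
assert abs(hypot(*BC) / hypot(*BA) - tan(theta)) < 1e-9
```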
Suppose the element is $\ce{Q}$, its chloride is $\ce{QCl_x}$ (since the oxidation number of $\ce{Cl}$ is $-1$, the oxidation number of $\ce{Q}$ would be $+x$), and the atomic weight of $\ce{Q}$ is $W$. Since $\pu{1.0 g}$ of $\ce{QCl_x}$ contains $\pu{0.835 g}$ of $\ce{Cl}$, the mass of $\ce{Q}$ is $\pu{(1.0-0.835) g} = \pu{0.165 g}$. Also: $$\frac{x}{1} = \frac{n_\ce{Cl}}{n_\ce{Q}} $$ But, $n_\ce{Cl} = \frac{\pu{0.835 g}}{\pu{35.5 gmol^{-1}}}= \pu{0.0235 mol}$ and $n_\ce{Q} = \frac{\pu{0.165 g}}{\pu{W gmol^{-1}}}= \pu{\frac{0.165}{W} mol}$. Therefore, $$\frac{x}{1} = \frac{n_\ce{Cl}}{n_\ce{Q}} = \frac{\pu{0.0235 mol}}{\pu{\frac{0.165}{W} mol}}=0.1424W$$$$\therefore x=0.1424W$$ Now, it is known that $\text{vapor density} \approx \frac12 \times \text{molar mass}$ (Wikipedia).$$\therefore W + x \times 35.5 = 2 \times 85= 170 $$Apply $x=0.1424W$ here, and hence,$$W + 0.1424W \times 35.5 = (1+5.055)W = 6.055W = 170 $$$$\therefore W=\frac{170}{6.055}=28.08$$$$\therefore x=0.1424 \times 28.08 \approx 4.0$$ Thus, the valency of the element is $4$ (oxidation state $+4$). Since its calculated atomic weight is $\pu{28.08 gmol^{-1}}$, it should be silicon ($\ce{Si}$). Thus, I conclude that the compound is $\ce{SiCl4}$.
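The arithmetic can be checked quickly (a sketch; same data, rounded constants):

```python
M_Cl = 35.5                     # g/mol
m_Cl, m_Q = 0.835, 0.165        # grams of Cl and Q in 1.0 g of QCl_x
molar_mass = 2 * 85             # molar mass ~ 2 x vapour density = 170

k = (m_Cl / M_Cl) / m_Q         # x / W, i.e. mol Cl per gram of Q
W = molar_mass / (1 + M_Cl * k) # from W + 35.5 x = 170 with x = k W
x = k * W

assert round(x) == 4            # QCl4
assert abs(W - 28.08) < 0.5     # consistent with silicon
```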
First of all, I do apologise if the question is not formulated in precise mathematical terms, but as a physics student I lack a formal background in rigorous functional analysis. Suppose we have a Fourier integral of the type \begin{equation} F(\omega)=\int_{-\infty}^{\infty} dt\, e^{-i\omega t}f(t) \end{equation} and we want to obtain an asymptotic expansion as $\omega\rightarrow 0$ that approximates the behaviour of $F(\omega)$ at low frequencies. The function $f(t)$ decays rapidly enough as $t\rightarrow \infty$ so that the previous integral exists. My question is: is there any general approach to obtain such an asymptotic expansion as $\omega\rightarrow 0$ that goes beyond the leading term (setting $\omega=0$ and trying to perform the integral)? My interest arises basically because all the literature I have looked through on asymptotic expansions of integrals ("Advanced Mathematical Methods for Scientists and Engineers" by Bender & Orszag; "Applied Asymptotic Analysis" by P.D. Miller, and some lecture notes I found online) focuses only on the high-frequency limit, i.e. as $\omega\rightarrow \infty$, for which methods like steepest descent or stationary phase are available (or, in Laplace-type integrals, Laplace's method). My guess is that it would be possible to expand the exponential in powers of $\omega t$ and try to perform the integral term by term if the integral boundaries were finite (with better results if $f(t)$ decays quickly to 0), but I'm not completely sure about that either. Note: even though I'm interested in the general case, to give a bit of background: $f(t)$ in my case would be an autocorrelation function $C(\tau)=\langle x(t_0)x(t_0+\tau)\rangle - \langle x(t_0)\rangle^2$ of a stationary stochastic point process $x(t)$; so $F(\omega)\equiv S(\omega)$ is the power spectral density of the process and the previous statement is equivalent to the Wiener-Khinchin theorem. $C(t)$ and $S(\omega)$ are even functions in $t$ and $\omega$ respectively. Thank you in advance!
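For what it's worth, the guess in the question can be sanity-checked numerically on a rapidly decaying example (a sketch using $f(t)=e^{-t^2}$, for which $F(\omega)=\sqrt{\pi}\,e^{-\omega^2/4}$; integrating the Taylor series of $e^{-i\omega t}$ term by term gives the moment expansion $F(\omega)\sim m_0-\omega^2 m_2/2+\dots$):

```python
from math import cos, exp, pi, sqrt

def F_numeric(omega, T=10.0, steps=200_000):
    """Midpoint rule for the Fourier integral of f(t) = exp(-t^2);
    f is even, so only the cosine part contributes."""
    h = 2 * T / steps
    total = 0.0
    for i in range(steps):
        t = -T + (i + 0.5) * h
        total += cos(omega * t) * exp(-t * t)
    return total * h

omega = 0.1
exact = sqrt(pi) * exp(-omega**2 / 4)     # known transform of exp(-t^2)
# Moments: m0 = sqrt(pi), m2 = sqrt(pi)/2, so the two-term expansion is
# F(omega) ~ sqrt(pi) * (1 - omega**2 / 4).
expansion = sqrt(pi) * (1 - omega**2 / 4)

assert abs(F_numeric(omega) - exact) < 1e-6
assert abs(expansion - exact) < 1e-5
```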
I have been trying to find the arc-length of $\sin^{-1}(x)$ over $[0,1]$. Of course, it is given by the integral $$J=\int_0^1\sqrt{1+\frac1{1-x^2}}\ dx=\int_0^1 \sqrt{\frac{2-x^2}{1-x^2}}\, dx$$ To compute this, I used $x\mapsto \cos x$: $$J=\int_0^{\pi/2}\sqrt{\frac{1+1-\cos^2x}{1-\cos^2x}}\, \sin(x)dx=\int_0^{\pi/2}\sqrt{1+\sin^2x}\,dx=\mathrm{E}(-1)$$ where $$\mathrm{E}(k)=\int_0^{\pi/2}\sqrt{1-k\sin^2x}\,dx$$ is the complete elliptic integral of the second kind. I am usually one to accept values of $\mathrm{E}$ as closed forms by themselves, but by chance, Wolfram kindly provided the explicit evaluation $$\mathrm{E}(-1)=\frac{\pi\sqrt{2\pi}}{\Gamma^2(1/4)}+\frac{\Gamma^2(1/4)}{4\sqrt{2\pi}}\tag{1}$$ Which I would like to know how to prove. I was able to immediately recognize that $$\frac{\Gamma^2(1/4)}{4\sqrt{2\pi}}=\frac{1}{4\sqrt{2}}\int_0^1\frac{dx}{x^{3/4}(1-x)^{3/4}}$$ But I cannot seem to find an integral representation for the other chunk, namely $\frac{\pi\sqrt{2\pi}}{\Gamma^2(1/4)}$. I suspect that the solution involves the Jacobi elliptic functions and the Jacobi theta functions, but I have no idea how to use them. Could I have some help proving $(1)$? Thanks.
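Before hunting for a proof, identity (1) is at least easy to check numerically (a quick sketch using Simpson's rule and `math.gamma`; the convention for $\mathrm E$ is the one defined in the question):

```python
from math import gamma, pi, sin, sqrt

def E(k, steps=10_000):
    """E(k) = int_0^{pi/2} sqrt(1 - k sin^2 x) dx via Simpson's rule."""
    h = (pi / 2) / steps
    total = 0.0
    for i in range(steps):
        a, b = i * h, (i + 1) * h
        m = (a + b) / 2
        fa = sqrt(1 - k * sin(a) ** 2)
        fm = sqrt(1 - k * sin(m) ** 2)
        fb = sqrt(1 - k * sin(b) ** 2)
        total += (b - a) / 6 * (fa + 4 * fm + fb)
    return total

g = gamma(0.25) ** 2
closed_form = pi * sqrt(2 * pi) / g + g / (4 * sqrt(2 * pi))

# Both sides are ~1.9101.
assert abs(E(-1) - closed_form) < 1e-9
```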
Consider two parallel cylinders with radii $R_1$ and $R_2$: The contact width can be calculated from the 2D Hertz formula: $$ a=2 \sqrt{\frac{PR}{\pi E_c}} \tag{1}$$ where $P=\frac{F}{L}$ is the force per unit length, and $$\frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}\tag{2}$$ and $$\frac{1}{E_c}=\frac{1-\nu_1^2}{E_1}+\frac{1-\nu_2^2}{E_2}\tag{3}$$ From here I know the total displacement can be written as: $$ \delta=\frac{P}{\pi E_c}\left( \ln\left(\frac{8R_1}{a} \right)+\ln\left(\frac{8R_2}{a} \right)-1 \right) \tag{4}$$ I need to calculate $P$ as a function of $\delta$, so:

R := (R1*R2)/(R1 + R2)
a := 2*Sqrt[P*R/(pi*Ec)]
Solve[delta == P*(Log[8*R1/a] + Log[8*R2/a] - 1)/(pi*Ec), P]

which gives me "a" solution and a warning:

Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information.

To have a general understanding of the solution I plotted a simplified version:

Plot[-x/ProductLog[-x], {x, 0, 1/E}]

which does not make sense, because at $\delta=0$ the force must be zero, and it must increase as the displacement increases. Trying to solve the equation with Reduce, as suggested in the warning:

Reduce[delta == P*(Log[8*R1/a] + Log[8*R2/a] - 1)/(pi*Ec), P]

also does not yield any results after a long time of calculation. I would appreciate it if you could help me see whether I'm making any mistakes and/or how to calculate $P$ versus $\delta$ for parallel cylinders.
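As a cross-check outside Mathematica, $P(\delta)$ can be recovered by plain bisection on equation (4) (a sketch with hypothetical material and geometry values, not from the question):

```python
from math import log, pi, sqrt

# Hypothetical values: steel cylinders, radii in metres, moduli in Pa.
E1 = E2 = 210e9
nu1 = nu2 = 0.3
R1, R2 = 0.05, 0.08

R = R1 * R2 / (R1 + R2)                             # eq. (2)
Ec = 1 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)    # eq. (3)

def delta_of_P(P):
    a = 2 * sqrt(P * R / (pi * Ec))                 # eq. (1)
    return P / (pi * Ec) * (log(8 * R1 / a) + log(8 * R2 / a) - 1)  # eq. (4)

def P_of_delta(delta, P_hi=1e9):
    # Eq. (4) has the shape delta = P (ln(C/P) - 1)/(pi Ec), which is
    # increasing in P only for physically small loads; keep P_hi inside
    # that monotone range so bisection is valid.
    lo, hi = 1e-12, P_hi
    for _ in range(200):
        mid = (lo + hi) / 2
        if delta_of_P(mid) < delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

P_ref = 1e5                      # N/m
d = delta_of_P(P_ref)
assert abs(P_of_delta(d) - P_ref) / P_ref < 1e-6
```

The two-branch structure of $\delta(P)$ (increasing, then turning over at unphysically large $P$) matches the two branches of Lambert W (`ProductLog`) in Mathematica's closed form, which may be why the naive branch plots unphysically.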
where $$ \begin{equation} \begin{aligned} C_1&=2\pi hc^2\\ C_2&=\cfrac{hc}{k} \end{aligned} \end{equation} $$ $h$ is Planck's constant, $c$ is the speed of light in vacuum, $k$ is the Boltzmann constant and $\lambda$ is the wavelength. As per the CIE 015:2004 Colorimetry, 3rd Edition recommendation, the $C_2$ value used in colorimetry should be $C_2 = 1.4388\times10^{-2}\ \mathrm{m\,K}$ as defined by the International Temperature Scale (ITS-90). The $C_1$ value is given by the Committee on Data for Science and Technology (CODATA) and should be $C_1 = 3.741771\times10^{-16}\ \mathrm{W\,m^2}$. In the current CIE 015:2004 Colorimetry, 3rd Edition recommendation, colour temperature and correlated colour temperature are calculated with $n=1$. Colour implements various blackbody computation related objects in the colour.colorimetry sub-package:

import colour.colorimetry

Note: the colour.colorimetry package public API is also available from the colour namespace.

Planck's law is called using either the colour.planck_law or colour.blackbody_spectral_radiance definitions; they expect the wavelength $\lambda$ to be given in metres and the temperature $T$ to be given in degrees kelvin:

import colour

colour.colorimetry.planck_law(500 * 1e-9, 5500)

20472701909806.578

Generating the spectral distribution of a blackbody is done using the colour.sd_blackbody definition:

with colour.utilities.suppress_warnings(python_warnings=True):
    colour.sd_blackbody(6500, colour.SpectralShape(0, 10000, 10))

With its temperature lowering, the blackbody peak shifts to longer wavelengths while its intensity decreases:

from colour.plotting import *

colour_style();

# Plotting various *blackbodies* spectral distributions.
blackbodies_sds = [
    colour.sd_blackbody(i, colour.SpectralShape(0, 10000, 10))
    for i in range(1000, 15000, 1000)
]
with colour.utilities.suppress_warnings(python_warnings=True):
    plot_multi_sds(
        blackbodies_sds,
        y_label='W / (sr m$^2$) / m',
        use_sds_colours=True,
        normalise_sds_colours=True,
        legend_location='upper right',
        bounding_box=[0, 1000, 0, 2.25e15]);

Let's plot the blackbody colours from temperature in domain [150, 12500, 50]:

plot_blackbody_colours(colour.SpectralShape(500, 12500, 50));

Let's compare the extraterrestrial solar spectral irradiance to the blackbody spectral radiance of a thermal radiator with a temperature of 5778 K:

# Comparing theoretical and measured *Sun* spectral distributions.
# Arbitrary ASTM_G_173_ETR scaling factor calculated with
# :def:`colour.sd_to_XYZ` definition.
ASTM_G_173_sd = ASTM_G_173_ETR.copy() * 1.37905559e+13

blackbody_sd = colour.sd_blackbody(5778, ASTM_G_173_sd.shape)
blackbody_sd.name = 'The Sun - 5778K'

plot_multi_sds(
    [ASTM_G_173_sd, blackbody_sd],
    y_label='W / (sr m$^2$) / m',
    legend_location='upper right');

As you can see, the Sun's spectral distribution is very close to that of a blackbody at a similar temperature $T$. Calculating the theoretical colour of any star is thus possible, for example VY Canis Majoris, the red hypergiant in the constellation Canis Major.

plot_blackbody_spectral_radiance(temperature=3500, blackbody='VY Canis Majoris');

Or Rigel, the brightest star in the constellation Orion and the seventh brightest star in the night sky.

plot_blackbody_spectral_radiance(temperature=12130, blackbody='Rigel');

plot_blackbody_spectral_radiance(temperature=5778, blackbody='The Sun');
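The CIE form of Planck's law is simple enough to sketch from scratch, which makes a handy cross-check of the library value quoted above (a sketch assuming the CIE 015 constants with $n=1$; `planck_law` here is my own function, not the library's):

```python
from math import exp, pi

C1 = 3.741771e-16  # W m^2
C2 = 1.4388e-2     # m K

def planck_law(wavelength, temperature, n=1):
    """Spectral radiance of a blackbody in W / (sr m^2) / m.
    `wavelength` in metres, `temperature` in kelvin."""
    l, t = wavelength, temperature
    return (C1 * n**-2 * l**-5) / pi / (exp(C2 / (n * l * t)) - 1)

value = planck_law(500 * 1e-9, 5500)
# Agrees with the colour.colorimetry.planck_law output quoted above
# to well within 0.1%.
assert abs(value - 20472701909806.578) / 20472701909806.578 < 1e-3
```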
Code

B-splines cheatsheet

degree: $k$ · control points: $n + 1$ · knots: $m + 1$, where $m = n + k + 1$

B-splines

Downside of Bézier splines is their global nature: moving a single control point changes the whole spline. A possible solution to this issue are the B-splines. Given a degree $k$, control polygon $\mathbf d_0,\dots,\mathbf d_n$, and a knot vector $t_0 \leq t_1 \leq \dots \leq t_m$ with $m = n+k+1$, a B-spline curve $S(t)$ is defined as

$$\mathbf S(t) = \sum_{j=0}^{n} \mathbf d_j \, N_j^k(t).$$

The $N_j^k$ are the recursively-defined basis functions (hence the name B-spline):

$$N_j^0(t) = \begin{cases} 1 & t_j \le t < t_{j+1}, \\ 0 & \text{otherwise,} \end{cases} \qquad N_j^k(t) = \frac{t - t_j}{t_{j+k} - t_j} N_j^{k-1}(t) + \frac{t_{j+k+1} - t}{t_{j+k+1} - t_{j+1}} N_{j+1}^{k-1}(t).$$

Looks complicated? Don’t worry if you cannot get your head around all the indices and whatnot; De Boor is here to help you!

B-spline basis functions $N^k_j$ up to degree 5 for the knot sequence $(0,1,2,3,4,5,6,7)$.

De Boor’s algorithm

… also called the De Boor-Cox algorithm, can be seen as a generalization of De Casteljau’s algorithm. (A Bézier curve is in fact a B-spline with a special knot sequence.)

input:
$\mathbf{d_{0}},\dots,\mathbf{d_{n}}$ : control points
$t_0 \leq t_1 \leq \dots \leq t_m$ : knot vector
$t \in [t_i, t_{i+1}) \subset [t_k, t_{m-k})$ where $k = m-n-1$ is the degree

output: point $\mathbf S(t) = \mathbf d_i^k$ on the curve

algorithm: For $j=i-k, \dots, i,$ set $\mathbf d_j^0 = \mathbf d_j$. Then compute the points \begin{align} \mathbf d_{j}^{r} &= (1-w_{j,k-(r-1)}) \mathbf d_{j-1}^{r-1} + w_{j,k-(r-1)} \mathbf d_{j }^{r-1} \end{align} for \begin{align} \quad r = 1,\dots,k, \quad \quad j = i-k+r,\dots,i \end{align} with \begin{align} w_{j,k-(r-1)} &= \frac{ t - t_j }{ t_{j+k-(r-1)} - t_j }. \end{align}

Be careful with the indices! Here we have expressed a point at depth $r$ in terms of points at depth $r-1$ – that is why there is the $r-1$ everywhere in the formula. This might be a bit annoying, but I think it’s also more practical for the recursive implementation. (The formula becomes much more elegant if we express level $r+1$ in terms of level $r$.)

A cubic B-spline with 16 segments and endpoint interpolation.
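De Boor's algorithm fits in a few lines of Python (a sketch in the common formulation where `k_idx` is the index $i$ with $t_i \le x < t_{i+1}$; this is not the course's reference solution):

```python
def de_boor(k_idx, x, t, c, p):
    """Evaluate the degree-p B-spline with knots t and control points c
    at x, where k_idx satisfies t[k_idx] <= x < t[k_idx + 1]."""
    d = [c[j + k_idx - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            num = x - t[j + k_idx - p]
            den = t[j + 1 + k_idx - r] - t[j + k_idx - p]
            alpha = num / den
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Sanity check on a clamped cubic: with knots (0,0,0,0,1,1,1,1) the curve
# is the cubic Bezier of its control points, and for the collinear values
# 0, 1, 2, 3 that Bezier is simply S(x) = 3x.
t = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
c = [0.0, 1.0, 2.0, 3.0]
assert abs(de_boor(3, 0.5, t, c, 3) - 1.5) < 1e-12
assert abs(de_boor(3, 0.25, t, c, 3) - 0.75) < 1e-12
```

For planar curves, replace the scalar control values by tuples or NumPy rows; the convex combinations go through unchanged.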
ToDo$^1$

1. Implement the De Boor's algorithm.
2. Evaluate B-spline for the simple dataset. Modify the knot vector and recompute. What changed?
3. Evaluate B-spline for the spiral dataset. Modify the knot vector to 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 5. What changed?
4. Evaluate B-spline for the camel dataset. Move the front leg by changing the x-coordinate of the last control point to -1.5. Which segments of the curve have changed? Why?

NURBS

If you’ve tested with the circle.bspline dataset, you were probably disappointed: the resulting curve is far from being a circle. Here’s the bad news: it’s mathematically impossible to represent a circle as a B-spline. But, it is possible via a generalization called Non-Uniform Rational B-Splines, or NURBS.

Why the long name? Non-uniform simply means the knot sequence is non-uniformly spaced. Rational means we work in homogeneous coordinates by assigning a weight to each point. In the plane, the point $(x,y)$ becomes $(x,y,1) \approx (wx,wy,w)$.

If you examine the circle9.nurbs file, you’ll see it’s quite similar to circle.bspline. The only thing that’s changed is the addition of a third coordinate: this is the weight w.

And here’s the good news: even with the homogeneous coordinates, we can apply exactly the same De Boor’s algorithm without any modifications! Here’s a secret recipe for transforming your B-spline code to work with NURBS:

1. The columns of ControlPts read from a .nurbs correspond to x, y and w. Therefore, you first need to multiply both x and y (columns 0 and 1) by w (column 2).
2. Feed the homogeneous control points [w*x, w*y, w] to the De Boor's algorithm you've implemented previously.
3. Convert the computed points (stored in the matrix Segment) back to Cartesian coordinates. Divide by the third column to pass from [w*x, w*y, w] to [x, y, 1]. As before, plot the first two coordinates.
Hint: in Python, the operators * and / are applied element-wise, so you can do stuff like
matrix[:,0] *= matrix[:,2]
matrix[:,0] /= matrix[:,2]

ToDo$^2$ Modify your code to work in homogeneous coordinates (if dim=3). Evaluate circle9.nurbs and circle7.nurbs. Compare the results with circle.bspline.

Resources
B-spline and De Boor’s algorithm
1.4.2 B-spline curve and 1.4.3 Algorithms for B-spline curves, online chapters from the book Shape Interrogation for Computer Aided Design and Manufacturing by N. Patrikalakis, T. Maekawa & W. Cho
NURBS on Wikipedia (includes the circle example)
homepage of Prof. de Boor
Beatty's Theorem/Proof 1 Theorem Let $r, s \in \R \setminus \Q$ be irrational numbers such that $r > 1$ and $s > 1$. Let $\mathcal B_r$ and $\mathcal B_s$ be the Beatty sequences on $r$ and $s$ respectively. Let: $\dfrac 1 r + \dfrac 1 s = 1$ Then $\mathcal B_r$ and $\mathcal B_s$ together form a partition of $\Z_{>0}$. Proof We have been given that $r > 1$. Let $\dfrac 1 r + \dfrac 1 s = 1$. Then: $s = \dfrac r {r - 1}$ Consider the list of all numbers of the form $\dfrac j r$ or $\dfrac k s$ with $j, k \in \Z_{>0}$, arranged in increasing order. Aiming for a contradiction, suppose that $\dfrac j r = \dfrac k s$ for some $j, k \in \Z_{>0}$. Then: $\dfrac r s = \dfrac j k$ which is rational. But also: $\dfrac r s = r \left({1 - \dfrac 1 r}\right) = r - 1$ which is not rational. Therefore, no two of the numbers are equal, and each occupies its own position in the list. Consider some $\dfrac j r$. There are $j$ numbers $\dfrac i r \le \dfrac j r$. There are also $\left\lfloor{\dfrac {j s} r}\right\rfloor$ numbers $\dfrac k s \le \dfrac j r$. So the position of $\dfrac j r$ in the list is $j + \left\lfloor{\dfrac {j s} r}\right\rfloor$. The equation $\dfrac 1 r + \dfrac 1 s = 1$ implies $\dfrac s r = s - 1$, so: $j + \left\lfloor{\dfrac {j s} r}\right\rfloor = j + \left\lfloor{j \left({s - 1}\right)}\right\rfloor = \left\lfloor{j s}\right\rfloor$ Likewise, the position of $\dfrac k s$ in the list is $\left\lfloor{k r}\right\rfloor$. It is concluded that every positive integer corresponding to every position in the list is of the form $\left\lfloor{n r}\right\rfloor$ or of the form $\left\lfloor{n s}\right\rfloor$, but not both. The converse statement is also true: if $p$ and $q$ are two real numbers such that every positive integer occurs precisely once in the above list, then $p$ and $q$ are irrational and the sum of their reciprocals is $1$. $\blacksquare$ Source of Name This entry was named for Samuel Beatty.
Defining parameters

Level: \( N \) = \( 4000 = 2^{5} \cdot 5^{3} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 4000.de (of order \(200\) and degree \(80\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 4000 \)
Character field: \(\Q(\zeta_{200})\)
Newforms: \( 0 \)
Sturm bound: \(600\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).

                   Total   New   Old
Modular forms        160   160     0
Cusp forms             0     0     0
Eisenstein series    160   160     0

The following table gives the dimensions of subspaces with specified projective image type.

            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension      0         0         0         0
Let $V_t$ be a solution of the SDE $$dV_t=V_t(r\,dt+\sigma_t\, dW_t)$$ where $\sigma_t$ satisfies some other SDE $$d\sigma_t=\alpha(t,\sigma_t)\,dt+\beta(t,\sigma_t)\,dW'_t$$ and $W_t$ and $W'_t$ are possibly correlated Brownian motions. I'm interested in the Laplace transform $\mathbb{E}[\exp(-aT)]$ of the stopping time $$T:=\inf\{t>0 : V_t\le \overline V\}.$$ If $\sigma$ is a constant, both $V_t$ and $\mathbb{E}[\exp(-aT)]$ are explicitly computable and everything is known. Question: What is known about $\mathbb{E}[\exp(-aT)]$ in the general case? Are there any specifications of the coefficients $\alpha$ and $\beta$ that allow an explicit formula? Thank you!
Consider the following axiomatic definition of a field: A field is a set $F$ together with two binary operations $+$ and $\cdot$ on $F$ such that $(F,+)$ is an Abelian group with identity $0$ and $(F\setminus\{0\},\cdot)$ is an Abelian group with identity $1$, and the following left-distributive law holds: $$a\cdot(b+c)=(a\cdot b)+(a\cdot c)\quad\forall a,b,c\in F.$$ I want to show that $0\cdot x=0$ for any $x\in F$ using these, and only these, field axioms. I can prove that $x\cdot 0=0$ using left-distributivity, but multiplication with $0$ is not necessarily commutative a priori [that $(F\setminus\{0\},\cdot)$ is an Abelian group does not say anything about multiplication with $0$]. Any hint would be appreciated. To elaborate on my point, let me prove that $x\cdot 0=0$ for any $x\in F$: \begin{align*} 0+0=&\,0\\ \Downarrow&\,\\ x\cdot(0+0)=&\,x\cdot0\\ \Downarrow&\,\text{(left-distributivity)}\\ (x\cdot 0)+(x\cdot 0)=&\,x\cdot 0\\ \Downarrow&\,\\ [(x\cdot 0)+(x\cdot 0)]+[-(x\cdot 0)]=&\,x\cdot 0+[-(x\cdot 0)]\\ \Downarrow&\,\\ (x\cdot0)+\{(x\cdot0)+[-(x\cdot0)]\}=&\,0\\ \Downarrow&\,\\ (x\cdot0)+0=&\,0\\ \Downarrow&\,\\ x\cdot0=&\,0. \end{align*} My problem is that I would need to exploit right-distributivity to show that $0\cdot x=0$, but right-distributivity does not follow immediately from the axioms.
I am doing some research into the movement of robots executing a given algorithm, and have come up with the following recursive equations that describe the movement of the robots at each iteration: \begin{equation} \theta_i = \frac{3}{4}\theta_{i-1} \end{equation} \begin{equation} L_i=\frac{1}{2}\sqrt{L_{i-1}^2+A_{i-1}^2} \end{equation} \begin{equation} A_i=L_i\tan\theta_i \end{equation} where the initial conditions are $\theta_1=45^{\circ}$, $\;L_1=\frac{1}{2}d$, and $A_1=\frac{1}{2}d$. It is clear that the first recurrence relation can simply be replaced with the following closed-form solution: \begin{equation} \theta_i = \big(\frac{3}{4}\big)^{i-1}\;\theta_1 \end{equation} But my problem is trying to obtain closed-form solutions to the other two equations that are defined in terms of each other. Is there a way to do this?
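While searching for a closed form, it can help to iterate the recurrences numerically and compare against candidate formulas. Here is a short sketch (the function name `simulate` and the default $d = 1$ are just illustrative choices):

```python
import math

def simulate(n, d=1.0):
    """Iterate the three recurrences n times, starting from
    theta_1 = 45 degrees (pi/4 radians) and L_1 = A_1 = d/2.
    Returns a list of (theta_i, L_i, A_i) tuples for i = 1..n."""
    theta, L, A = math.pi / 4, d / 2, d / 2
    seq = [(theta, L, A)]
    for _ in range(n - 1):
        theta = 0.75 * theta                   # theta_i = (3/4) theta_{i-1}
        L = 0.5 * math.sqrt(L * L + A * A)     # L_i = (1/2) sqrt(L^2 + A^2)
        A = L * math.tan(theta)                # A_i = L_i tan(theta_i)
        seq.append((theta, L, A))
    return seq
```

Substituting $A_{i-1} = L_{i-1}\tan\theta_{i-1}$ into the second equation gives $L_i = \frac{1}{2} L_{i-1} \sec\theta_{i-1}$, so the output of the loop should match the product $L_i = L_1 \prod_{j=1}^{i-1} \frac{\sec\theta_j}{2}$, which is a useful cross-check even if that product has no simpler closed form.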