You have made several incorrect assumptions in this question. First, theoretically speaking, filters do not work by 'picking up elements'; they work on the principle of edge detection. You have also assumed that only a single combination of filter weights will give the desired output (assuming continuous rather than binary weights). This becomes especially prominent in the problem of regularization, where we want to choose a set of weights without over-fitting the data. The error you used looks very similar to the Perceptron update rule (squared error gives the same derivative, but make sure you are not confusing the two). Backpropagation through 'dead ReLUs' is not possible (see this answer for more details). Now, let us check mathematically: Input volume: $$ \begin{matrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \\ \end{matrix}$$ Desired output: $$ \begin{matrix} 1 & -1 \\ 0 & 1 \\ \end{matrix}$$ Note that in this step you desire an output which contains a negative entry ($-1$), but you are forward propagating through a ReLU, which cuts off the negative part, so the gradients have no way to communicate or produce the required negative value. Basically, $ wx \rightarrow ReLU \rightarrow y$ is happening, and if $wx$ is a negative number then $y$ is always $0$; thus $(target-y)$ is always $target$, and hence whatever the value of $wx$, the error remains constant. If we want to backpropagate (assuming squared error), then: $\frac{d}{dw} (target - y)^2 = -2(target - y)\frac{d}{dw}y = -2(target - y)\cdot 0 = 0$ (remember from the ReLU output graph that the slope is $0$ in the negative region). Now, you randomise a filter: $$ \begin{matrix} 1 & -1 \\ 1 & 1 \\ \end{matrix}$$ apply ReLU and get the following: $$ \begin{matrix} 3 & 0 \\ 0 & 1 \\ \end{matrix}$$ Again you have chosen your target to contain a negative number, which is not possible in the case of a ReLU activation. Continuing anyway, you get the error: $$ \begin{matrix} -2 & -1 \\ 0 & 0 \\ \end{matrix}$$ You then use the error to compute gradients (which, again, you have calculated incorrectly by backpropagating through the ReLUs; you also missed the minus sign associated with the output, but you compensated for it by adding the update to $w$, whereas the convention is to subtract): $$ \begin{matrix} 1 & 2 \\ 1 & -2 \\ \end{matrix}$$ And you get the new filter: $$ \begin{matrix} 0.5 & 0 \\ 0.5 & 0 \\ \end{matrix}$$ This is a pretty good approximation of the desired filter (even though the previous steps rest on wrong assumptions, it does not matter much, since what you essentially did was use a linear activation function, which will work if you run enough iterations). So basically you are using a linear filter; the details are too tangled for me to go into here, so I will suggest a resource on ReLU backpropagation: Deep Neural Network - Backpropogation with ReLU
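To make the "gradient is zero wherever the ReLU clipped the output" point concrete, here is a minimal sketch (my own illustration, not the original poster's code; it assumes Python with NumPy) of one gradient step for a 2x2 filter on the 3x3 input above, using squared error. The resulting numbers will not match the hand calculation above exactly, since the point is only to show how the mask kills the gradient at clipped positions:

```python
import numpy as np

def conv2d_valid(x, w):
    """2x2 cross-correlation of a 3x3 input, producing a 2x2 output."""
    out = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            out[i, j] = np.sum(x[i:i+2, j:j+2] * w)
    return out

x = np.array([[1., -1., 0.],
              [0.,  1., 0.],
              [0., -1., 1.]])
target = np.array([[1., -1.],
                   [0.,  1.]])      # note the -1: unreachable after a ReLU
w = np.array([[1., -1.],
              [1.,  1.]])

z = conv2d_valid(x, w)              # pre-activation
y = np.maximum(z, 0.0)              # ReLU
err = target - y                    # [[-2, -1], [0, 0]] as in the answer

# Backprop of 0.5*sum((target - y)^2) through the ReLU:
# dL/dy = -(target - y), and dy/dz = 0 wherever z <= 0 ("dead" positions).
dz = -err * (z > 0)                 # gradient is killed where the ReLU clipped
dw = np.zeros_like(w)
for i in range(2):
    for j in range(2):
        dw += dz[i, j] * x[i:i+2, j:j+2]

w_new = w - 0.25 * dw               # gradient *descent*: subtract, don't add
print(y, dz, w_new, sep="\n")
```

The key line is the `(z > 0)` mask: at the position where the target is $-1$ but the ReLU clipped the pre-activation, the gradient contribution is zero, so no amount of training can push that output negative.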
In the first week of teaching my Calculus 1 discussion section this term, I decided to give the students a Precalc Review Worksheet. Its purpose was to refresh their memories of the basics of arithmetic, algebra, and trigonometry, and see what they had remembered from high school. Surprisingly, it was the arithmetic part that they had the most trouble with. Not things like multiplication and long division of large numbers – those things are taught well in our grade schools – but when they encountered a complicated multi-step arithmetic problem such as the first problem on the worksheet, they were stumped: Simplify: $1+2-3\cdot 4/5+4/3\cdot 2-1$ Gradually, some of the groups began to solve the problem. But some claimed it was $-16/15$, others guessed that it was $34/15$, and yet others insisted that it was $-46/15$. Who was correct? And why were they all getting different answers despite carefully checking over their work? The answer is that the arithmetic simplification procedure that one learns in grade school is ambiguous and sometimes incorrect. In American public schools, students are taught the acronym “PEMDAS”, which stands for Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. This is called the order of operations, which tells you which arithmetic operations to perform first by convention, so that we all agree on what the expression above should mean. But PEMDAS doesn’t work properly in all cases. (This has already been wonderfully demonstrated in several YouTube videos such as this one, but I feel it is good to re-iterate the explanation in as many places as possible.) To illustrate the problem, consider the computation $6-2+3$. Here we’re starting with $6$, taking away $2$, and adding back $3$, so we should end up with $7$. This is what any modern calculator will tell you as well (try typing it into Google!) But if you follow PEMDAS to the letter, it tells you that addition comes before subtraction, and so we would add $2+3$ first to get $5$, and then end up with $6-5=1$. Even worse, what happens if we try to do $6-3-2$? We should end up with $1$ since we are taking away $2$ and $3$ from $6$, and yet if we choose another order in which to do a subtraction first, say $6-(3-2)=6-1$, we get $5$. So, subtraction can’t even properly be done before itself, and the PEMDAS rule does not deal with that ambiguity. Mathematicians have a better convention that fixes all of this. What we’re really doing when we’re subtracting is adding a negative number: $6-2+3$ is just $6+(-2)+3$. This eliminates the ambiguity; addition is commutative and associative, meaning no matter what order we choose to add several things together, the answer will always be the same. In this case, we could either do $6+(-2)=4$ and $4+3=7$ to get the answer of $7$, or we could do $(-2)+3$ first to get $1$ and then add that to $6$ to get $7$. We could even add the $6$ and the $3$ first to get $9$, and then add $-2$, and we’d once again end up with $7$. So now we always get the same answer! There’s a similar problem with division. Is $4/3/2$ equal to $4/(3/2)=8/3$, or is it equal to $(4/3)/2=2/3$? PEMDAS doesn’t give us a definite answer here, and has the further problem of making $4/3\cdot 2$ come out to $4/(3\cdot 2)=2/3$, which again disagrees with Google Calculator. As in the case of subtraction, the fix is to turn all division problems into multiplication problems: we should think of division as multiplying by a reciprocal. 
So in the exercise I gave my students, we’d have $4/3\cdot 2=4\cdot \frac{1}{3}\cdot 2=\frac{8}{3}$, and all the confusion is removed. To finish the problem, then, we would write $$\begin{eqnarray*} 1+2-3\cdot 4/5+4/3\cdot 2-1&=&1+2+(-\frac{12}{5})+\frac{8}{3}+(-1) \\ &=&2+\frac{-36+40}{15} \\ &=&\frac{34}{15}. \end{eqnarray*} $$ The only thing we need to do now is come up with a new acronym. We still follow the convention that Parentheses, Exponents, Multiplication, and Addition come in that order, but we no longer have division and subtraction since we replaced them with better operators. So that would be simply PEMA. But that’s not quite as catchy, so perhaps we could add in the “reciprocal” and “negation” rules to call it PERMNA instead. If you have something even more catchy, post it in the comments below!
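For readers who want to check these conventions against an actual implementation, here is a small sketch (assuming Python 3, whose parser treats `+`/`-` and `*`/`/` as left-associative with equal precedence within each pair) confirming the values discussed above:

```python
from fractions import Fraction as F

# 6 - 2 + 3 is evaluated left to right, not "addition before subtraction"
print(6 - 2 + 3)            # 7, not 1

# 4/3*2 is (4/3)*2, not 4/(3*2); exact arithmetic with Fractions
print(F(4, 3) * 2)          # 8/3

# the worksheet problem: 1 + 2 - 3*4/5 + 4/3*2 - 1
print(1 + 2 - F(3) * 4 / 5 + F(4) / 3 * 2 - 1)   # 34/15
```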
Abstract. We say that a Riemannian manifold $(M, g)$ with a non-empty boundary $\partial M$ is a minimal orientable filling if, for every compact orientable $(\widetilde M,\tilde g)$ with $\partial \widetilde M=\partial M$, the inequality $ d_{\tilde g}(x,y) \ge d_g(x,y)$ for all $x,y\in\partial M$ implies $ \operatorname{vol}(\widetilde M,\tilde g) \ge \operatorname{vol}(M,g)$. We show that if a metric $g$ on a region $M \subset \mathbf{R}^n$ with a connected boundary is sufficiently $C^2$-close to a Euclidean one, then it is a minimal filling. By studying the equality case $ \operatorname{vol}(\widetilde M,\tilde g) = \operatorname{vol}(M,g)$ we show that if $ d_{\tilde g}(x,y) = d_g(x,y)$ for all $x,y\in\partial M$, then $(M,g)$ is isometric to $(\widetilde M,\tilde g)$. This gives the first known open class of boundary rigid manifolds in dimensions higher than two and makes a step towards a proof of Michel's conjecture.

References

[AB] J. C. Alvarez-Paiva and G. Berck, "What is wrong with the Hausdorff measure in Finsler spaces," Adv. Math., vol. 204, pp. 647-663, 2006.
[Bes] A. S. Besicovitch, "On two problems of Loewner," J. London Math. Soc., vol. 27, pp. 141-144, 1952.
[Br] L. E. J. Brouwer, "Beweis der Invarianz des $n$-dimensionalen Gebiets," Math. Ann., vol. 71, iss. 3, pp. 305-313, 1911.
[BI95] D. Burago and S. Ivanov, "On asymptotic volume of tori," Geom. Funct. Anal., vol. 5, iss. 5, pp. 800-808, 1995.
[BI02] D. Burago and S. Ivanov, "On asymptotic volume of Finsler tori, minimal surfaces in normed spaces, and symplectic filling volume," Ann. of Math., vol. 156, iss. 3, pp. 891-914, 2002.
[BI04] D. Burago and S. Ivanov, "Gaussian images of surfaces and ellipticity of surface area functionals," Geom. Funct. Anal., vol. 14, iss. 3, pp. 469-490, 2004.
[BCG] G. Besson, G. Courtois, and S. Gallot, "Entropies et rigidités des espaces localement symétriques de courbure strictement négative," Geom. Funct. Anal., vol. 5, iss. 5, pp. 731-799, 1995.
[Croke91] C. B. Croke, "Rigidity and the distance between boundary points," J. Differential Geom., vol. 33, iss. 2, pp. 445-464, 1991.
[Croke04] C. B. Croke, I. Lasiecka, G. Uhlmann, and M. S. Vogelius, Eds., Geometric Methods in Inverse Problems and PDE Control, New York: Springer-Verlag, 2004.
[CDS] C. B. Croke, N. S. Dairbekov, and V. A. Sharafutdinov, "Local boundary rigidity of a compact Riemannian manifold with curvature bounded above," Trans. Amer. Math. Soc., vol. 352, iss. 9, pp. 3937-3956, 2000.
[CK] C. B. Croke and B. Kleiner, "A rigidity theorem for simply connected manifolds without conjugate points," Ergodic Theory Dynam. Systems, vol. 18, pp. 807-812, 1998.
[Gromov] M. Gromov, "Filling Riemannian manifolds," J. Differential Geom., vol. 18, iss. 1, pp. 1-147, 1983.
[HT] R. D. Holmes and A. C. Thompson, "$n$-dimensional area and content in Minkowski spaces," Pacific J. Math., vol. 85, iss. 1, pp. 77-110, 1979.
[Ivanov] S. V. Ivanov, "On two-dimensional minimal fillings," Algebra i Analiz, vol. 13, iss. 1, pp. 26-38, 2001.
[Morgan] F. Morgan, "Examples of unoriented area-minimizing surfaces," Trans. Amer. Math. Soc., vol. 283, iss. 1, pp. 225-237, 1984.
[Michel] R. Michel, "Sur la rigidité imposée par la longueur des géodésiques," Invent. Math., vol. 65, iss. 1, pp. 71-83, 1981.
[PU] L. Pestov and G. Uhlmann, "Two dimensional compact simple Riemannian manifolds are boundary distance rigid," Ann. of Math., vol. 161, iss. 2, pp. 1093-1110, 2005.
[Santalo] L. A. Santaló, Integral Geometry and Geometric Probability, Reading, MA: Addison-Wesley Publ. Co., 1976.
[SU] P. Stefanov and G. Uhlmann, "Boundary rigidity and stability for generic simple metrics," J. Amer. Math. Soc., vol. 18, iss. 4, pp. 975-1003, 2005.
[Th] A. C. Thompson, Minkowski Geometry, Cambridge: Cambridge Univ. Press, 1996.
77th SSC CGL level Solution Set, topic Trigonometry 7

This is the 77th solution set of 10 practice problems for the SSC CGL exam and the 7th on the topic of Trigonometry. If you have not taken the test yet, you should refer to the 77th SSC CGL question set and 7th on Trigonometry before going through this solution.

77th solution set - 10 problems for SSC CGL exam: 7th on Trigonometry - testing time 12 mins

Problem 1. If $7\sin^2 \theta+3\cos^2 \theta=4$, then the value of $\tan \theta$, where $\theta$ is acute, is,
$1$
$\displaystyle\frac{1}{\sqrt{3}}$
$\sqrt{3}$
$\displaystyle\frac{1}{\sqrt{2}}$

Solution 1: Problem analysis and execution

$3\sin^2 \theta$ out of $7\sin^2 \theta$ combines with $3\cos^2 \theta$ to produce 3, which reduces the 4 on the RHS to 1 and leaves $4\sin^2 \theta$ on the LHS. So,
$4\sin^2 \theta=1$,
Or, $\sin^2 \theta=\displaystyle\frac{1}{4}$,
Or, $\sin \theta=\displaystyle\frac{1}{2}$, as $\theta$ is acute and so $\sin \theta$ is positive,
Or, $\theta=30^0$,
Or, $\tan \theta=\displaystyle\frac{1}{\sqrt{3}}$, as $\tan 30^0=\displaystyle\frac{1}{\sqrt{3}}$.

Answer: Option b: $\displaystyle\frac{1}{\sqrt{3}}$.

Key concepts used: Basic trigonometry concepts -- use of $\sin^2 \theta + \cos^2 \theta=1$.

Problem 2. If $\alpha + \beta=90^0$, then the expression $\displaystyle\frac{\tan \alpha}{\tan \beta}+ \sin^2 \alpha + \sin^2 \beta$ is equal to,
$\sec^2 \beta$
$\sec^2 \alpha$
$\tan^2 \beta$
$\tan^2 \alpha$

Solution 2: Problem analysis and execution

Using the complementary trigonometric function concept we get,
$\sin \alpha=\sin (90^0-\beta)=\cos \beta$, and similarly, $\cos \alpha=\sin \beta$, and $\tan \beta=\text{cot } \alpha$.
So the target expression reduces to,
$\tan^2 \alpha + \sin^2 \alpha + \cos^2 \alpha$
$=1+\tan^2 \alpha$
$=\sec^2 \alpha$.

Answer: Option b: $\sec^2 \alpha$.

Key concepts used: Complementary trigonometric functions -- Trigonometric function relations.

Problem 3. If $0^0 \lt \theta \lt 90^0$ then the value of $\displaystyle\frac{\tan \theta-\sec \theta-1}{\tan \theta + \sec \theta +1}$ is,
$\displaystyle\frac{1-\cos \theta}{\sin \theta}$
$\displaystyle\frac{\sin \theta +1}{\cos \theta}$
$\displaystyle\frac{1-\sin \theta}{\cos \theta}$
$\displaystyle\frac{\sin \theta-1}{\cos \theta}$

Solution 3: Problem analysis and pattern identification

The similarity of the numerator and denominator expressions and the presence of $\sec \theta$ and $\tan \theta$ in additive (or subtractive) relation urged us to apply the powerful friendly trigonometric function pair relation,
$\sec \theta - \tan \theta=\displaystyle\frac{1}{\sec \theta + \tan \theta}$, as $\sec^2 \theta -\tan^2 \theta=1$.

Solution 3: Problem solving execution

As expected, when we replaced $-(\sec \theta-\tan \theta)$ in the numerator by $-\displaystyle\frac{1}{\sec \theta + \tan \theta}$, we got the numerator the same as the denominator except for a negative sign,
$E=\displaystyle\frac{\tan \theta-\sec \theta-1}{\tan \theta + \sec \theta +1}$, where the target expression is denoted by $E$,
$=\displaystyle\frac{-\displaystyle\frac{1}{\sec \theta + \tan \theta}-1}{\sec \theta+\tan \theta + 1}$
$=-\displaystyle\frac{1}{\sec \theta + \tan \theta}\left[\displaystyle\frac{\sec \theta + \tan \theta +1}{\sec \theta + \tan \theta + 1}\right]$
$=-\displaystyle\frac{1}{\sec \theta + \tan \theta}$.
We now multiply the numerator and denominator by $(\sec \theta - \tan \theta)$,
$E=\tan \theta - \sec \theta$
$=\displaystyle\frac{\sin \theta - 1}{\cos \theta}$.

Answer: Option d: $\displaystyle\frac{\sin \theta-1}{\cos \theta}$.
Key concepts used: Problem analysis -- Key pattern identification -- Friendly trigonometric function pair concepts -- Efficient simplification. The solution could easily be carried out mentally.

Problem 4. If $5\sin \theta=3$, then the numerical value of $\displaystyle\frac{\sec \theta - \tan \theta}{\sec \theta + \tan \theta}$ is,
$\displaystyle\frac{1}{2}$
$\displaystyle\frac{1}{4}$
$\displaystyle\frac{1}{3}$
$\displaystyle\frac{1}{5}$

Solution 4: Problem analysis and planning for using componendo dividendo

The target expression is just right for applying the fast and quick method of componendo dividendo. One way to proceed is to set it equal to a dummy variable $x$ and apply componendo dividendo to get the simplified form $\displaystyle\frac{\sec \theta}{\tan \theta}$ on the LHS and $\displaystyle\frac{1+x}{1-x}$ on the RHS. The answer is then just a step ahead. Let us show how.

Solution 4: Problem solving execution

As planned we have,
$\displaystyle\frac{\sec \theta-\tan \theta}{\sec \theta + \tan \theta}=x$.
Applying three-step componendo dividendo on both sides,
$\displaystyle\frac{\sec \theta}{\tan \theta}=\displaystyle\frac{1+x}{1-x}$
Or, $\displaystyle\frac{1+x}{1-x}=\text{cosec } \theta=\displaystyle\frac{5}{3}$, from the given expression, $5\sin \theta=3$.
Applying componendo dividendo a second time,
$x=\displaystyle\frac{\sec \theta -\tan \theta}{\sec \theta + \tan \theta}=\displaystyle\frac{5-3}{5+3}=\frac{1}{4}$

Solution 4: Faster method to reach the solution

We know that the paired fraction of the target signature expression of componendo dividendo is, in our case,
$\displaystyle\frac{\sec \theta}{\tan \theta}=\text{cosec } \theta=\displaystyle\frac{5}{3}$.
Subtracting 1, adding 1 and dividing the two results we have,
$\displaystyle\frac{\sec \theta -\tan \theta}{\sec \theta + \tan \theta}=\frac{1}{4}$.

Answer: Option b: $\displaystyle\frac{1}{4}$.

Key concepts used: Key pattern identification -- Componendo dividendo in reverse.

Problem 5. If $\displaystyle\frac{\sec \theta + \tan \theta}{\sec \theta - \tan \theta}=2\displaystyle\frac{51}{79}$, the value of $\sin \theta$ is,
$\displaystyle\frac{39}{72}$
$\displaystyle\frac{91}{144}$
$\displaystyle\frac{65}{144}$
$\displaystyle\frac{35}{72}$

Solution 5: Problem analysis and execution: applying componendo dividendo and keeping the mixed fraction in mixed form

The given expression is just right for applying componendo dividendo and getting the value of $\text{cosec } \theta$ immediately. But on the RHS the mixed fraction is a bit imposing. To speed up, as we usually do with mixed fractions, we will keep the mixed fractions in mixed form (which is inherently an integer plus a fraction) as long as possible.

As planned we have,
$\displaystyle\frac{\sec \theta+\tan \theta}{\sec \theta - \tan \theta}=2\displaystyle\frac{51}{79}$,
Or, $\displaystyle\frac{\sec \theta}{\tan \theta}=\displaystyle\frac{3\displaystyle\frac{51}{79}}{1\displaystyle\frac{51}{79}}$,
Or, $\text{cosec } \theta=\displaystyle\frac{288}{130}$,
Or, $\sin \theta=\displaystyle\frac{65}{144}$.
By the dual application of componendo dividendo and the mixed fraction technique, the answer could be reached in minimum time.

Answer: Option c: $\displaystyle\frac{65}{144}$.

Key concepts used: Key pattern identification -- Componendo dividendo -- Mixed fraction breakup technique -- Efficient simplification.

Problem 6.
The value of $\theta$ ($0 \leq \theta \leq 90^0$) satisfying $2\sin^2 \theta=3\cos \theta$ is,
$60^0$
$45^0$
$90^0$
$30^0$

Solution 6: Problem analysis and execution

As the given expression is not readily amenable to solution by derivation and the given angles are well-known ones, we decided to test the choice values. The answer could be reached quickly because of the nature of the given expression. With $\theta=60^0$, $\sin \theta=\displaystyle\frac{\sqrt{3}}{2}$ and $\cos \theta=\displaystyle\frac{1}{2}$, which satisfy the given expression perfectly.

Solution 6: Conventional solution by derivation

$2\sin^2 \theta=3\cos \theta$,
Or, $2(1-\cos^2\theta)=3\cos \theta$,
Or, $2\cos^2 \theta+3\cos \theta-2=0$,
Or, $(2\cos \theta -1)(\cos \theta +2)=0$.
As $\theta$ is acute, $\cos \theta$ must be positive and so,
$2\cos \theta=1$,
Or, $\cos \theta =\displaystyle\frac{1}{2}$,
Or, $\theta =60^0$.

Answer: Option a: $60^0$.

Key concepts used: Choice value test for quick solution -- Trigonometric ratio values -- Solving a quadratic equation.

Problem 7. If $a$, $b$, $c$ are the lengths of the three sides of a $\triangle ABC$, and they are related by $a^2+b^2+c^2=ab+bc+ca$, then the value of $\sin^2 \text{A}+\sin^2 \text{B}+\sin^2 \text{C}$ is,
$\displaystyle\frac{3}{4}$
$\displaystyle\frac{9}{4}$
$\displaystyle\frac{3}{2}$
$\displaystyle\frac{3\sqrt{3}}{2}$

Solution 7: Problem analysis and strategy decision

This is one of those problems that seem very difficult at first glance. But if you are well acquainted with algebraic patterns you will see the solution through the curtain in twenty seconds at most. The algebraic expression calls for multiplying both sides by 2, taking all terms to the LHS and rearranging to transform the expression into a sum of three squares equal to 0. By a well-known algebraic principle, each of the square terms must then be 0, resulting in $a=b=c$, that is, an equilateral triangle with each angle $60^0$ and each $\sin$ value equal to $\displaystyle\frac{\sqrt{3}}{2}$. Final result: $\displaystyle\frac{9}{4}$. Let us show the deductions.

Solution 7: Problem solving execution

We have,
$a^2+b^2+c^2=ab+bc+ca$,
Or, $2(a^2+b^2+c^2)-2(ab+bc+ca)=0$, multiplying both sides by 2 and rearranging,
Or, $(a-b)^2+(b-c)^2+(c-a)^2=0$.
By the zero sum of square terms principle, each of the square terms must be zero. So,
$(a-b)=(b-c)=(c-a)=0$,
Or, $a=b=c$, that is, all three sides of the triangle are of equal length.
Thus, $\angle \text{A}=\angle \text{B}=\angle \text{C}=60^0$,
Or, $\sin \text{A}=\sin \text{B}=\sin \text{C}=\displaystyle\frac{\sqrt{3}}{2}$.
Finally, $\sin^2 \text{A}+\sin^2 \text{B}+\sin^2 \text{C}=\displaystyle\frac{9}{4}$.

Answer: Option b: $\displaystyle\frac{9}{4}$.

Key concepts used: Key pattern identification -- Zero sum of square terms -- Property of an equilateral triangle -- Algebraic patterns and methods.

Problem 8. If $x\cos^2 30^0.\sin 60^0=\displaystyle\frac{\tan^2 45^0.\sec 60^0}{\text{cosec } 60^0}$, then the value of $x$ is,
$\displaystyle\frac{1}{\sqrt{3}}$
$\displaystyle\frac{1}{2}$
$2\displaystyle\frac{2}{3}$
$\displaystyle\frac{1}{\sqrt{2}}$

Solution 8: Problem analysis and execution

Instead of substituting the values of the trigonometric ratios and doing the involved calculations right away, we decide that the faster way is to simplify the expression first using patterns and trigonometric or algebraic relations, and only then use the values of the trigonometric ratios.
In the given expression, $\sin 60^0$ is taken to the denominator of the RHS to cancel out $\text{cosec } 60^0$, and $\tan 45^0$ being 1, $\tan^2 45^0$ also evaporates, leaving,
$x\cos^2 30^0 =\sec 60^0=\displaystyle\frac{1}{\cos 60^0}=2$,
Or, $x\left(\displaystyle\frac{3}{4}\right)=2$, as $\cos 30^0=\displaystyle\frac{\sqrt{3}}{2}$,
Or, $x=\displaystyle\frac{8}{3}=2\displaystyle\frac{2}{3}$.

Answer: Option c: $2\displaystyle\frac{2}{3}$.

Key concepts used: Basic trigonometric concepts -- Values of trigonometric ratios -- Efficient simplification.

Problem 9. If $\sec \theta-\cos \theta = \displaystyle\frac{3}{2}$, where $\theta$ is a positive acute angle, the value of $\sec \theta$ is,
$2$
$0$
$-\displaystyle\frac{1}{2}$
$1$

Solution 9: Problem analysis and pattern identification

This is a special problem in which the two trigonometric functions cannot be made to interact with each other trigonometrically, but then we notice that one is the inverse of the other. This immediately reminds us of the powerful and general algebraic principle of interaction of inverses, and we become sure of the path to the solution. The sum (here subtractive) of inverses is the crucial pattern that we will exploit.

Solution 9: Problem solving execution

The given expression is,
$\sec \theta-\cos \theta=\displaystyle\frac{3}{2}$.
Squaring both sides,
$\sec^2 \theta -2+\cos^2 \theta=\displaystyle\frac{9}{4}$.
Adding 4 to both sides to convert the LHS to the square of $(\sec \theta+\cos \theta)$,
$\sec^2 \theta+2+\cos^2 \theta=\displaystyle\frac{9}{4}+4=\displaystyle\frac{25}{4}$,
Or, $(\sec \theta+\cos \theta)^2=\displaystyle\frac{25}{4}$.
As $\theta$ is a positive acute angle, neither $\sec \theta$ nor $\cos \theta$ can be negative, and so,
$\sec \theta+\cos \theta=\displaystyle\frac{5}{2}$.
Adding this to the given equation and simplifying,
$\sec \theta=2$.

Answer: Option a: $2$.

Key concepts used: Key pattern identification -- Principle of interaction of inverses -- Algebraic patterns and methods -- Basic trigonometry concepts -- Efficient simplification.

By the principle of interaction of inverses as applied to algebraic relations, if we know the numeric value of $\left(x + \displaystyle\frac{1}{x}\right)$, we can always derive the numeric value of $\left(x-\displaystyle\frac{1}{x}\right)$ by squaring the first expression, adjusting the numeric middle term and taking the square root. This is the simplest result of the principle. You can find the details in the article Principle of interaction of inverses.

Problem 10. If $1+\cos^2 \theta=3\sin \theta.\cos \theta$, then the integral value of $\text{cot } \theta$ $\left(0 \lt \theta \lt \displaystyle\frac{\pi}{2}\right)$ is equal to,
$0$
$3$
$1$
$2$

Solution 10: Problem analysis

To find the quickest path to the solution of this unbalanced expression, we paused briefly, which extended the time to solve to a minute. The thought process was: with $(\sin \theta.\cos \theta)$ on the RHS and $\cos^2 \theta$ on the LHS, the situation called for expanding the 1 on the LHS to $(\sin^2 \theta + \cos^2 \theta)$ and transforming the LHS towards $(\sin \theta - \cos \theta)^2$. This was the key to the solution.

Solution 10: Problem solving execution

The given expression is,
$1+\cos^2 \theta=3\sin \theta.\cos \theta$,
Or, $\sin^2 \theta+2\cos^2 \theta=3\sin \theta.\cos \theta$,
Or, $\sin^2 \theta-2\sin\theta.\cos\theta+\cos^2\theta=\cos\theta(\sin\theta-\cos\theta)$,
Or, $(\sin \theta-\cos\theta)^2=\cos\theta(\sin \theta-\cos \theta)$.
Or, $(\sin \theta -\cos \theta)(\sin \theta-2\cos\theta)=0$.
Thus we have two possible values of $\text{cot } \theta$:
$\sin \theta=\cos \theta$, Or, $\text{cot } \theta=1$.
And, $\sin \theta=2\cos \theta$, which is also possible, Or, $\text{cot } \theta=\displaystyle\frac{1}{2}$.
Desired is the integral value, that is, $\text{cot } \theta=1$.

Answer: Option c: $1$.

Key concepts and techniques used: Problem analysis and key pattern identification -- Basic algebraic concepts -- Basic trigonometry concepts.

Note: Observe that in many of the trigonometric problems, basic and rich algebraic concepts and techniques are to be used. In fact, that is the norm: algebraic concepts are frequently used for elegant solutions of trigonometric problems.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra: Tutorials on Trigonometry, General guidelines for success in SSC CGL, Efficient problem solving in Trigonometry, SSC CGL Tier II level question and solution sets on Trigonometry, SSC CGL level question and solution sets in Trigonometry, SSC CGL level Solution Set 77 on Trigonometry 7, and Algebraic concepts.

A note on usability: The efficient math problem solving sessions on school maths are equally usable by SSC CGL aspirants, firstly because the "prove the identity" problems can easily be converted to MCQ type questions, and secondly because the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving.

If you like, you may subscribe to get the latest content on competitive exams in your mail as soon as we publish it.
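As a final cross-check for readers who like to verify such results numerically, here is a small sketch (not part of the original solution set; it assumes Python with the standard math module) confirming the answers to Problems 1 and 9:

```python
import math

# Problem 1: 7*sin^2(t) + 3*cos^2(t) = 4 with t acute should give tan(t) = 1/sqrt(3)
t = math.radians(30)
print(math.isclose(7*math.sin(t)**2 + 3*math.cos(t)**2, 4))   # True
print(math.isclose(math.tan(t), 1/math.sqrt(3)))              # True

# Problem 9: sec(t) - cos(t) = 3/2 with t acute should give sec(t) = 2
t = math.acos(0.5)                 # cos(t) = 1/2, i.e. sec(t) = 2
print(math.isclose(1/math.cos(t) - math.cos(t), 1.5))         # True
```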
I’ve been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others. There is one passage from Neyman’s (1952) book “Lectures and Conferences on Mathematical Statistics and Probability” (available here) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our confidence interval paper, but for those of you who have read it, you’ll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it. Neyman gets bonus points for the footnote suggesting the “eminent”, “elderly” boss is so obtuse (a reference to Fisher?) and that the young frequentists should be “remind[ed] of the glory” of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did. [Neyman is discussing using “sampling experiments” (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. \(\theta\) is a true parameter of a probability distribution to be estimated.] The sampling experiments are more easily performed than described in detail. Therefore, let us make a start with \(\theta_1 = 1\), \(\theta_2 = 2\), \(\theta_3 = 3\) and \(\theta_4 = 4\). We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating \(\theta\), each time from twelve observations, and that the true values of \(\theta\) are as above [i.e., \(\theta_1,\ldots,\theta_4\)] although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for \(\theta\), one computed by the elderly Boss, the other by his young Assistant. [Formulas 21 and 22 are simply different 95% confidence procedures. Formula 21 has better frequentist properties; Formula 22 is inferior, but the Boss likes it because it is intuitive to him.] Using the first column on the first page of Tippett’s tables of random numbers and performing the indicated multiplications, we obtain the following four sets of figures. The last two lines give the assertions regarding the true value of \(\theta\) made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to \(\alpha = .95\). You will notice that in three out of the four cases considered, both assertions (the Boss’ and the Assistant’s) regarding the true value of \(\theta\) are correct and that in the last case both assertions are wrong. In fact, in this last case the true \(\theta\) is 4 while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846. Although the probability of success in estimating \(\theta\) has been fixed at \(\alpha = .95\), the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855.
Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of \(\theta\), may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of \(\theta\), made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, \(\alpha= .95\). Let us put this into more precise terms. Suppose you decide on a number \(N\) of samples which you will take and use for estimating the true value of \(\theta\). The true values of the parameter \(\theta\) may be the same in all \(N\) cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to \(\alpha = .95\). Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number \(Z(N)\) of successes in estimating \(\theta\) is the familiar binomial variable with expectation equal to \(N\alpha\) and with variance equal to \(N\alpha(1 - \alpha)\). Thus, if \(N = 100\), \(\alpha = .95\), it is rather improbable that the relative frequency \(Z(N)/N\) of successes in estimating \(\theta\) will differ from \(\alpha\) by more than \( 2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042 \) This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating \(\theta\) is equal to the preassigned \(\alpha\). Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving. Among other things, the sampling experiment will attract attention to the frequent difference in the precision of estimating \(\theta\) by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on \(X\), the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean \(\bar{X}\). If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other. [See footnote] For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it. Thus, as a result of observing the first sample, the Assistant asserts that \( .956 \leq \theta \leq 1.227. \) On the other hand, the assertion of the Boss is far more conservative and admits the possibility that \(\theta\) may be as small as .688 and as large as 1.355.
And both assertions correspond to the same confidence coefficient, \(\alpha = .95\)! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy. Boss: “Now, how can this be true? I am to assert that \(\theta\) is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that \(\theta\) is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that \(\theta\) may be some number between .688 and .956 or between 1.227 and 1.355. Thus, the probability of \(\theta\) falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that \( \begin{eqnarray*} P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\ && + P\{1.227 \leq \theta \leq 1.355\}\\ &=& P\{.956 \leq \theta \leq 1.227\}.\mbox{”} \end{eqnarray*} \) Assistant: “But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter \(\theta\) will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to \(\alpha\).” Boss: “Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that \(.688 \leq \theta \leq 1.355\). This assertion is a success only if \(\theta\) falls within the limits indicated. Hence, the probability of success is equal to the probability of \(\theta\) falling within these limits —.” Assistant: “No, Sir, it is not. The probability you describe is the a posteriori probability regarding \(\theta\), while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, \(N = 100\) samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95.” I do hope that the Assistant will not get fired. However, if he does, I would remind him of the glory of Giordano Bruno who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —. [footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people. [end excerpt]
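Neyman's sampling experiment is easy to reproduce on a computer today. The sketch below (my own illustration, assuming Python; it does not reproduce Neyman's formulas (21) and (22), which are not given in the excerpt) estimates \(\theta\) from samples of size 12 drawn from a uniform(0, \(\theta\)) distribution, using an exact 95% interval based on the sample maximum, and checks that the long-run coverage is close to .95 regardless of how \(\theta\) varies from trial to trial:

```python
import random

random.seed(1)
ALPHA = 0.95
N_TRIALS = 10_000
n = 12

def max_based_ci(sample):
    """Exact 95% CI for theta from a uniform(0, theta) sample of size n,
    built from the sample maximum M, whose scaled CDF is P(M/theta <= t) = t^n.
    With a central split of the tails, P(c1*M <= theta <= c2*M) = 0.95."""
    m = max(sample)
    c1 = (1 - (1 - ALPHA) / 2) ** (-1 / n)   # lower factor: M / 0.975^(1/n) ... well, m*c1
    c2 = ((1 - ALPHA) / 2) ** (-1 / n)       # upper factor: m * 0.025^(-1/n)
    return c1 * m, c2 * m

hits = 0
for trial in range(N_TRIALS):
    theta = random.choice([1, 2, 3, 4])      # the true value may change every trial
    sample = [random.uniform(0, theta) for _ in range(n)]
    lo, hi = max_based_ci(sample)
    hits += (lo <= theta <= hi)

print(hits / N_TRIALS)   # close to 0.95
```

As Neyman stresses, letting the true value change from trial to trial is immaterial: the relative frequency of correct assertions still converges to the preassigned coverage.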
Lie algebras over rings (Lie rings) are important in group theory. For instance, to every group $G$ one can associate a Lie ring $$L(G)=\bigoplus _{i=1}^\infty \gamma _i(G)/\gamma _{i+1}(G),$$ where $\gamma _i(G)$ is the $i$-th term of the lower central series of $G$. The addition is defined by the additive structure of the quotients $\gamma _i(G)/\gamma _{i+1}(G)$, and the Lie product is defined on homogeneous elements by $[x\gamma _{i+1}(G),y\gamma _{j+1}(G)]=[x,y]\gamma _{i+j+1}(G)$, where $[x,y]$ on the right-hand side is the group commutator, and then extended to $L(G)$ by linearity. There are several other ways of constructing Lie rings associated to groups, and there are numerous applications of these. One of the most notable is the solution of the Restricted Burnside Problem by Zelmanov; see the book M. R. Vaughan-Lee, "The Restricted Burnside Problem". Other books related to these rings include Kostrikin, "Around Burnside"; Huppert and Blackburn, "Finite Groups II"; and Dixon, du Sautoy, Mann and Segal, "Analytic pro-$p$ Groups".
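To get a hands-on feel for the graded pieces $\gamma_i(G)/\gamma_{i+1}(G)$, here is a small sketch (my own illustration; it assumes Python with SymPy's `sympy.combinatorics`, whose permutation groups provide, as I recall, a `lower_central_series()` method) that lists the orders of the successive quotients for a dihedral 2-group:

```python
from sympy.combinatorics.named_groups import DihedralGroup

# DihedralGroup(8) is the dihedral group of order 16 (symmetries of the octagon),
# a nilpotent 2-group, so its lower central series reaches the trivial group.
G = DihedralGroup(8)

series = G.lower_central_series()   # [gamma_1 = G, gamma_2, gamma_3, ...]
for i in range(len(series) - 1):
    q = series[i].order() // series[i + 1].order()
    print(f"|gamma_{i+1}/gamma_{i+2}| = {q}")

# The direct sum of these abelian quotients carries the Lie ring structure L(G)
# described above, with the bracket induced by the group commutator.
```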
A site C with pullbacks is subcanonical (all representable presheaves are sheaves) if and only if its codomain fibration $Arr(C) \to C$ is a prestack (all hom-presheaves are sheaves). Is there a common name for a site whose codomain fibration is a stack? The canonical topology on a Grothendieck topos has this property, as does the coherent topology on a pretopos, the regular topology on a Barr-exact category, the extensive topology on a lextensive category, etc. I don't have an answer to your question, but I'm going to post whatever thoughts I had about it. Maybe something here will help someone answer the question, or at least help more people understand what's involved. I'm sorry that it's come out so long. Definitions (skip this unless you suspect we mean different things by "(pre)stack") A functor $F\to C$ is a fibered category if for every arrow $f:U\to X$ in $C$ and every object $Y$ in $F$ lying over $X$, there is a cartesian arrow $V\to Y$ in $F$ lying over $f$ (see Definition 3.1 of Vistoli's notes). This arrow is determined up to unique isomorphism (by the cartesian property), so I'll call $V$ "the" pullback of $Y$ along $f$ and maybe denote it $f^*Y$. A fibered category is roughly a "category-valued presheaf (contravariant functor) on $C$". Given an object $X$ in $C$, let $F(X)$ be the subcategory of objects in $F$ lying over $X$, with morphisms being those morphisms in $F$ which lie over the identity morphism of $X$. I'll call $F(X)$ the "fiber over $X$." Given a morphism $f:U\to X$ in $C$, let $F(U\to X)$ be "the category of descent data along $f$," whose objects consist of an element $Z$ of $F(U)$ and an isomorphism $\sigma:p_2^*Z\to p_1^*Z$ (where $p_1,p_2:U\times_XU\to U$ are the projections) satisfying the usual cocycle condition over $U\times_XU\times_XU$ (see Definition 4.2 of Vistoli's notes). A morphism in $F(U\to X)$ is a morphism $Z\to Z'$ in $F(U)$ such that the following square commutes: $\begin{matrix} p_2^*Z & \xrightarrow{\sigma} & p_1^*Z \\ \downarrow & & \downarrow\\ p_2^*Z' & \xrightarrow{\sigma'} & p_1^*Z' \end{matrix}$ Suppose $C$ has the structure of a site. Then we say that $F$ is a prestack (resp. stack) over $C$ if for any cover $U\to X$ in $C$, the functor $F(X)\to F(U\to X)$ given by pullback is fully faithful (resp. an equivalence). Roughly, a prestack is a "separated presheaf of categories" and a stack is a "sheaf of categories" over $C$. The domain fibration (not your question, but related) Consider the domain functor $Arr(C)\to C$ given by $(X\to Y)\mapsto X$. You can check that a cartesian arrow over $f:U\to X$ is a commutative square $\begin{matrix} U & \xrightarrow{f} & X \\ \downarrow & & \downarrow\\ Y & = & Y \end{matrix}$ If I haven't made a mistake, This fibered category is a prestack iff every cover $U\to X$ is an epimorphism. It is a stack if furthermore every cover $U\to X$ is the coequalizer of the projection maps $p_1,p_2:U\times_XU\to U$. This last condition is equivalent to saying that every object $Y$ of $C$ satisfies the sheaf axiom with respect to the morphism $U\to X$. In particular, the domain fibration is a stack if and only if the topology is subcanonical. The codomain fibration (your question) Consider the codomain functor $Arr(C)\to C$ given by $(U\to X)\mapsto X$. 
You can check that a cartesian arrow over a morphism $f:U\to X$ is a cartesian square $\begin{matrix} V & \to & U \\ \downarrow & & \downarrow\\ Y & \xrightarrow{f} & X \end{matrix}$ There is a general result that says that the fibered category of sheaves on a site is itself a stack (I usually call this result "descent for sheaves on a site"). If you're working with the canonical topology on a topos (where every sheaf is representable), it follows that the codomain fibration is a stack. If the topology is subcanonical, then objects are sheaves, so descent for sheaves tells you that the pullback functor is fully faithful (i.e. the codomain fibration is a prestack), but when you "descend" a representable sheaf, it may no longer be representable, so the codomain fibration may not be a stack. In your question you say that being a prestack is actually equivalent to the topology being subcanonical, but I can't see the other implication (prestack⇒subcanonical). Supposing the codomain fibration is a prestack, saying that it is a stack roughly says that when you glue representable sheaves along a "cover relation," you get a representable sheaf, but with the strange condition that the "cover relation" you started with came from a relation where you could glue to get a representable sheaf. That is, given this diagram, where the squares on the left are cartesian ($\Rightarrow$ is meant to be two right arrows), can you fill in the "?" so that the square on the right is cartesian? $\begin{matrix} Z' & \Rightarrow & Z & \to & ?\\ \downarrow & & \downarrow & & \downarrow\\ U\times_XU & \Rightarrow & U & \to & X \end{matrix}$ A more natural (to me) condition is to ask that the only sheaves you can glue together from representable sheaves are already representable. That is, if $R\Rightarrow U$ is a "covering relation" (i.e. each of the maps $R\to U$ is a covering and $R\to U\times U$ is an equivalence relation), then the quotient sheaf $U/R$ is representable. I would call such a site "closed under gluing." For example, the category of schemes with the Zariski topology is closed under gluing (it's the "Zariski gluing closure" of the category affine schemes). The category of algebraic spaces with the etale topology is closed under gluing (it's the "etale gluing closure" of the category of affine schemes). In fact, I think that a standard structure theorem for smooth morphisms and a theorem of Artin (∃ fppf cover ⇒ ∃ smooth cover) imply that the category of algebraic spaces with the fppf topology is closed under gluing.
I would like to display a matrix with a dot (derivative) on top of it. The entries of this matrix consist of arbitrary symbols. When I write
\begin{align} \dot{\begin{pmatrix} x \\ y \end{pmatrix}} \end{align}
everything works fine. However,
\begin{align} \dot{\begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix}} \end{align}
results in a weird series of errors (Illegal units of measure, etc.). How can I display the derivative of a matrix with hats (or other ornaments) on my x and y variables? Neither changing from pmatrix to array nor setting brackets brought any improvement. A complete example of the problem reads:
\documentclass{report}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath,amsfonts,mathrsfs,amssymb,dsfont}
\begin{document}
\begin{align}
\dot{\begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix}} % Problem!
\end{align}
\end{document}
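One workaround (a sketch, not tested against this exact package set) is to avoid wrapping the whole matrix in the \dot accent and to place the dot with \overset instead; switching from utf8x to plain utf8 may also help, since the unmaintained ucs/utf8x combination is a frequent source of such incompatibilities:
\documentclass{report}
\usepackage[utf8]{inputenc} % utf8x (ucs) is unmaintained; plain utf8 avoids many clashes
\usepackage{amsmath,amsfonts,mathrsfs,amssymb,dsfont}
\begin{document}
\begin{align}
% place the dot with \overset rather than \dot, so no accent command
% has to enclose the pmatrix that contains the \hat entries
\overset{\boldsymbol{\cdot}}{\begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix}}
\end{align}
\end{document}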
This year’s Prove it! Math Academy was a big success, and it was an enormous pleasure to teach the seventeen talented high school students that attended this year. Some of the students mentioned that they felt even more inspired to study math further after our two-week program, but the inspiration went both ways – they inspired me with new ideas as well! One of the many, many things we investigated at the camp was the Fibonacci sequence, formed by starting with the two numbers $0$ and $1$ and then at each step, adding the previous two numbers to form the next: $$0,1,1,2,3,5,8,13,21,\ldots$$ If $F_n$ denotes the $(n+1)$st term of this sequence (where $F_0=0$ and $F_1=1$), then there is a remarkable formula for the $n$th term, known as Binet’s Formula: $$F_n=\frac{1}{\sqrt{5}}\left( \left(\frac{1+\sqrt{5}}{2}\right)^n – \left(\frac{1-\sqrt{5}}{2}\right)^n \right)$$ Looks crazy, right? Why would there be $\sqrt 5$’s showing up in a sequence of integers? At Prove it!, we first derived the formula using generating functions. I mentioned during class that it can also be proven by induction, and later, one of our students was trying to work out the induction proof on a white board outside the classroom. She was amazed how many different proofs there could be of the same fact, and it got me thinking: what if we expand each of the terms using the binomial theorem? Is there a combinatorial proof of the resulting identity? In particular, suppose we use the binomial theorem to expand $(1+\sqrt{5})^n$ and $(1-\sqrt{5})^n$ in Binet’s formula. The resulting expression is: $$F_n=\frac{1}{\sqrt{5}\cdot 2^n}\left( \left(\sum_{i=0}^n \binom{n}{i}(\sqrt{5})^i \right) – \left(\sum_{i=0}^n (-1)^i\binom{n}{i}(\sqrt{5})^i \right) \right)$$ Note that the even terms in the two summations cancel, and combining the odd terms gives us: $$F_n=\frac{1}{\sqrt{5}\cdot 2^n}\left( \sum_{k=0}^{\lfloor n/2\rfloor} 2 \binom{n}{2k+1}(\sqrt{5})^{2k+1} \right)$$ Since $(\sqrt{5})^{2k+1}=\sqrt{5}\cdot 5^k$, we can cancel the factors of $\sqrt{5}$ and multiply both sides by $2^{n-1}$ to obtain: $$2^{n-1}\cdot F_n=\sum_{k=0}^{\lfloor n/2\rfloor} \binom{n}{2k+1}\cdot 5^k.$$ Now, the left hand and right hand side are clearly nonnegative integers, and one handy fact about nonnegative integers is that they count the number of elements in some collection. The proof method of counting in two ways is the simple principle that if by some method one can show that a collection $A$ has $n$ elements, and by another method one can show that $A$ has $m$ elements, then it follows that $m=n$. Such a “combinatorial proof” may be able to be used to prove the identity above, with $m$ being the left hand side of the equation and $n$ being the right. I started thinking about this after Prove it! ended, and remembered that the $(n+1)$st Fibonacci number $F_n$ counts the number of ways to color a row of $n-2$ fenceposts either black or white such that no two adjacent ones are black. (Can you see why this combinatorial construction would satisfy the Fibonacci recurrence?) For instance, we have $F_5=5$, because there are five such colorings of a row of $3$ fenceposts: $$\begin{array}{ccc} \square & \square & \square \\ \\ \square & \square & \blacksquare \\ \\ \square & \blacksquare & \square \\ \\ \blacksquare & \square & \square \\ \\ \blacksquare & \square & \blacksquare \end{array}$$ Note also that $2^{n-1}$ counts the number of length-$(n-1)$ sequences of $0$’s and $1$’s. 
Thus, the left hand side of our identity, $2^{n-1}\cdot F_n$, counts the number of ways of choosing a binary sequence of length $n-1$ and also a fence post coloring of length $n-2$. Because of their lengths, given such a pair we can interlace their entries, forming an alternating sequence of digits and fence posts such as: $$1\, \square\, 0\, \square\, 1\, \blacksquare\, 1$$ We will call such sequences interlaced sequences. We now need only to show that the right hand side also counts these interlaced sequences. See the next page for my solution, or post your own solution in the comments below!
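While working out a combinatorial proof, it is reassuring to check the identity numerically for small $n$ first. Here is a quick sketch (my own check, assuming Python 3.8+ for math.comb) verifying $2^{n-1}\cdot F_n=\sum_{k} \binom{n}{2k+1}\cdot 5^k$:

```python
from math import comb

def fib(n):
    a, b = 0, 1          # F_0 = 0, F_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 15):
    lhs = 2**(n - 1) * fib(n)
    rhs = sum(comb(n, 2*k + 1) * 5**k for k in range(n // 2 + 1))
    assert lhs == rhs, (n, lhs, rhs)

print("identity holds for n = 1..14")
```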
A. SAXENA Articles written in Journal of Astrophysics and Astronomy Volume 40 Issue 2 April 2019 Article ID 0009 We report optical observations of TGSS J1054 $+$ 5832, a candidate high-redshift ($z = 4.8 \pm 2$) steep-spectrum radio galaxy, in $r$ and $i$ bands, using the faint object spectrograph and camera mounted on the 3.6-m Devasthal Optical Telescope (DOT). The source, previously detected at 150 MHz from the Giant Metrewave Radio Telescope (GMRT) and at 1420 MHz from the Very Large Array, has a known counterpart in near-infrared bands with a $K$-band magnitude of AB 22. The source is detected in the $i$-band with an AB 24.3 $\pm$ 0.2 magnitude in the DOT images presented here. The source remains undetected in the $r$-band image at a 2.5$\sigma$ depth of AB 24.4 mag over a $1.2^{\prime\prime}\times 1.2^{\prime\prime}$ aperture. An upper limit to the $i-K$ color is estimated to be $\sim$2.3, suggesting youthfulness of the galaxy with active star formation. These observations highlight the importance and potential of the 3.6-m DOT for detections of faint galaxies.
Bill Dubuque raised an excellent point here: Coping with *abstract* duplicate questions. I suggest we use this question as a list of the generalized questions we create. I suggest we categorize these abstract duplicates based on topic (please edit the question). Also please feel free to suggest a better way to list these. Also, as per Jeff's recommendation, please tag these questions as faq. Laws of signs (minus times minus is plus): Why is negative times negative = positive? Order of operations in arithmetic: What is the standard interpretation of order of operations for the basic arithmetic operations? Solving equations with multiple absolute values: What is the best way to solve an equation involving multiple absolute values? Extraneous solutions to equations with a square root: Is there a name for this strange solution to a quadratic equation involving a square root? Principal $n$-th roots: $0! = 1$: Prove $0! = 1$ from first principles Partial fraction decomposition of rational functions: Converting multiplying fractions to sum of fractions Highest power of a prime $p$ dividing $N!$, number of zeros at the end of $N!$ and related questions: Highest power of a prime $p$ dividing $N!$ Solving $x^x=y$ for $x$: Is $x^x=y$ solvable for $x$? What is the value of $0^0$? Zero to the zero power – is $0^0=1$? Integrating polynomial and rational expressions of $\sin x$ and $\cos x$: Evaluating $\int P(\sin x, \cos x) \text{d}x$ Integration using partial fractions: Integration by partial fractions; how and why does it work? Intuitive meaning of Euler's constant $e$: Intuitive Understanding of the constant "$e$" Evaluating limits of the form $\lim_{x\to \infty} P(x)^{1/n}-x$ where $P(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_0$ is a monic polynomial: Limits: How to evaluate $\lim\limits_{x\rightarrow \infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x$ Finding the limit of rational functions at infinity: Finding the limit of $\frac{Q(n)}{P(n)}$ where $Q,P$ are polynomials Divergence of the harmonic series: Why does the series $\sum_{n=1}^\infty\frac1n$ not converge? Universal Chord Theorem: Universal Chord Theorem Nested radical series: $\sqrt{c+\sqrt{c+\sqrt{c+\cdots}}}$, or the limit of the sequence $x_{n+1} = \sqrt{c+x_n}$ Derivative of a function expressed as $f(x)^{g(x)}$: Differentiation of $x^{\sqrt{x}}$, how? Removable discontiuity: How can a function with a hole (removable discontinuity) equal a function with no hole? Calculus Meets Geometry Volume of intersection between cylinders Two cylinders, same radius, orthogonal. This post is not particularly good but there are many existing duplicate-links. Note that this can be done without calculus. Two cylinders variation: different radii (orthogonal), non-orthogonal (same radius), and elliptic cylinders (essentially unsolved). Three cylinders: same radius and orthogonal. Number of permutations of $n$ where no number $i$ is in position $i$ How many equivalence relations on a set with 4 elements. How many ways can N elements be partitioned into subsets of size K? Seating arrangements of four men and three women around a circular table How to use stars and bars? How many different spanning trees of $K_n \setminus e$ are there? (or Spanning Trees of the Complete Graph minus an edge) Definition of Matrix Multiplication: (Maybe there should just be one canonical one?) 
On the determinant: Determinants of special matrices: Eigenvectors and Eigenvalues Gram-Schmidt Orthogonalization Prove that A + I is invertible if A is nilpotent A generalization for non-commutative rings Modular exponentiation: How do I compute $a^b\,\bmod c$ by hand? Solving the congruence $x^2\equiv1\pmod n$: Number of solutions of $x^2=1$ in $\mathbb{Z}/n\mathbb{Z}$ Can $\sqrt{n} + \sqrt{m}$ be rational if neither $n,m$ are perfect squares? What is the period of the decimal expansion of $\frac mn$? Geometric Series: Value of $\sum\limits_n x^n$ Summing series of the form $\sum_n (n+1) x^n$: How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$? Finding the limit of rational functions at infinity: Finding the limit of $\frac{Q(n)}{P(n)}$ where $Q,P$ are polynomials Divergence of the harmonic series: Why does the series $\sum_{n=1}^\infty\frac1n$ not converge? Nested radical series: $\sqrt{c+\sqrt{c+\sqrt{c+\cdots}}}$, or the limit of the sequence $x_{n+1} = \sqrt{c+x_n}$ Limit of exponential sequence and $n$ factorial: Prove that $\lim \limits_{n \to \infty} \frac{x^n}{n!} = 0$, $x \in \Bbb R$. There are different sizes of infinity: What Does it Really Mean to Have Different Kinds of Infinities? Solving triangles: Solving Triangles (finding missing sides/angles given 3 sides/angles) (Confusing) notation for inverse functions ($\sin^{-1}$ vs. $\arcsin$): $\arcsin$ written as $\sin^{-1}(x)$
Ok, here are some rules because it would be too easy/too hard without them:
Round to the nearest 100th
The first number cannot be an integer
Needs to be an equality
No mixed numbers
Hint: The equal sign need not be aligned.
Answer after second puzzle change: Can do it moving $2$ matches (maybe there is a way for $1$?). Take $2$ from the top of the right hand side's numerator and move them to its denominator: Left - Roman numerals, $\frac{X}{XX}=\frac{10}{20}=0.5$. Right - Tally marks, $\frac{||}{||||}=\frac24=0.5$.
Answer post puzzle change (without fourth rule): Easily done for equality, only $1$ match moving.
Answer pre puzzle change (add matches, not rule 3 or 4): Easily done for equality with $2$ more matches ...or for a true statement, we can do it with just $1$ match: strike the equals to a not equals, $\frac{X}{XX}\neq\frac{||||}{||}\rightarrow 0.5\neq2$.
Move 0 matchsticks to make the equation true: The left side is in Suzhou numerals, reading 4/44. The right side is $\frac{1\left|1\right|}{11}$: 1 times the absolute value of 1, divided by 11. Both sides equal 1/11. This seems a little too simple to be correct, but..
My guess is 2. Method: Move 2 matchsticks from IIII to the denominator, so it becomes 10/20 = 2/4.
I'll take a stab, not sure if this is what is being asked... Answer: 1. Reason: Move one match stick from the top of the second number to the bottom of the second number, changing the first vertical line into a V, 10/20 = 3/6.
Answer: 2. How? $\frac{X}{XX}=\frac{IIII}{II}$ becomes $\frac{XI}{XI}=\frac{III}{III}$, by way of: moving one of the cross matches on the second X to the top (and straightening them out), and moving one of the 4 matches to the bottom. Or, simply put... 11/11 = 3/3, or 1=1.
A bunch of points:$\def\Spec{\mathrm{Spec}\ }$ • Let $A$ be a cluster algebra over a field $k$, let $(x_1, \ldots, x_n)$ be a cluster and let $L$ be the Laurent polynomial ring $k[x_1^{\pm}, \ldots, x_n^{\pm}]$. I imagine your intended question is whether the map $\Spec L \to \Spec A$ is an open immersion. (You ask about a map $\mathbb{A}^n \to X$, but there isn't a natural such map; $\Spec L$ is a torus, not affine space.) The answer is yes. The question is equivalent to asking whether $L$ is a localization of $A$. I claim that $L = A[x_1^{-1}, \ldots, x_n^{-1}]$. Proof: On the one hand, $A \subset L$ (by the Laurent phenomenon) and $x_1^{-1}$, ..., $x_n^{-1} \in L$ (obviously), so $A[x_1^{-1}, \ldots, x_n^{-1}] \subseteq L$. On the other hand, $L$ is generated by the $x_j$ and their reciprocals, and these are all in $A[x_1^{-1}, \ldots, x_n^{-1}]$, so $L \subseteq A[x_1^{-1}, \ldots, x_n^{-1}]$. $\square$. • $\Spec A$ is not generally the union of cluster tori. This is true even in the simplest case: The extended exchange matrix $\left( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right)$ gives the cluster algebra$$\mathbb{C}[x,x', y^{\pm 1}]/(x x'-y-1).$$The two tori are $xy \neq 0$ and $x' y \neq 0$. The point $(x,x',y) = (0,0,-1)$ is in neither torus. • Some sources define "the cluster variety" as the (quasi-affine) union of cluster tori. If you take that as the definition, it is obviously smooth. • But I imagine what you care about is whether $\Spec A$ is smooth (or possibly $\Spec U$, where $U$ is the upper cluster algebra). No, that doesn't have to be smooth. (The singular points are not in any of the cluster tori, of course.) I think the simplest example is the Markov cluster algebra, with $B$-matrix $\left( \begin{smallmatrix} 0 & 2 & -2 \\ -2 & 0 & 2 \\ 2 & -2 & 0 \end{smallmatrix} \right)$. This ring is not finitely generated, and there is a maximal ideal generated by all cluster variables at which the Zariski tangent space is infinite dimensional. If you look at the upper cluster algebra instead, it is $k[\lambda, x_1, x_2, x_3]/(x_1 x_2 x_3 \lambda - x_1^2 - x_2^2 - x_3^2)$, which is singular along the line $x_1 = x_2 = x_3 = 0$. For another example, which doesn't have the $A$ versus $U$ issue, look at the $A_3$ cluster algebra with no frozen variables. From Corollary 1.17 in Cluster Algebras III, this is generated by $(x_1, x_2, x_3, x'_1, x'_2, x'_3)$ modulo the relations$$x_1 x'_1 = x_2+1,\ x_2 x'_2 = x_1 + x_3,\ x_3 x'_3 = x_2+1.$$ Look at the point $(x_1, x_2, x_3 , x'_1, x'_2, x'_3) = (0, -1, 0, 0,0,0)$. The Jacobian matrix of these $3$ equations with respect to the $6$ variables is$$\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$$which has rank two, so this is a singular point. • We do have Theorem 7.7 of Greg Muller's Locally Acyclic Cluster Algebras -- if the cluster algebra is locally acyclic, the $B$-matrix has full rank and our ground field has characteristic zero, then $\Spec A$ is smooth. See Muller 1 and Benito-Muller-Rajchgot-Smith 2 for more on singularities of cluster varieties.
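As a quick numerical cross-check of the $A_3$ example (my own addition, not part of the answer), the following sympy sketch forms the Jacobian of the three exchange relations at the point $(0,-1,0,0,0,0)$, confirms that the point satisfies the relations, and confirms that the Jacobian has rank two there.

import sympy as sp

x1, x2, x3, y1, y2, y3 = sp.symbols("x1 x2 x3 xp1 xp2 xp3")
relations = sp.Matrix([x1*y1 - x2 - 1, x2*y2 - x1 - x3, x3*y3 - x2 - 1])
variables = [x1, x2, x3, y1, y2, y3]

J = relations.jacobian(variables)
point = {x1: 0, x2: -1, x3: 0, y1: 0, y2: 0, y3: 0}

print(relations.subs(point).T)   # (0, 0, 0): the point lies on the variety
print(J.subs(point))             # the 3 x 6 Jacobian at that point
print(J.subs(point).rank())      # 2, which is less than 3, so the point is singular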
Let $K \subset {\bf R}^d$ be a symmetric convex body (an open bounded convex neighbourhood of the origin with $K = -K$) with the property that $K + {\bf Z}^d \neq {\bf R}^d$, i.e. the projection of $K$ to the standard torus ${\bf R}^d/{\bf Z}^d$ is not surjective, or equivalently $K$ is disjoint from some coset $x + {\bf Z}^d$ of the standard lattice. My question is: what does this say about the polar body $$K^* := \{ \xi \in {\bf R}^d: \xi \cdot x < 1 \hbox{ for all } x \in K \}?$$ Intuitively, the property $K + {\bf Z}^d \neq {\bf R}^d$ is a "smallness" condition on K, and is thus a "largeness" condition on $K^*$. If $K^*$ contains a non-trivial element $n$ of $2 {\bf Z}^d$, then $K$ is contained in the strip $\{ x: |n \cdot x| < 1/2 \}$, and will thus avoid the coset $x+{\bf Z}^d$ whenever $x \cdot n = 1/2$. So this is a sufficient condition for $K + {\bf Z}^d \neq {\bf R}^d$, but it is not necessary. Indeed, if one takes $K$ to be the octahedron $$K := \{ (x_1,\ldots,x_d) \in {\bf R}^d: |x_1|+\ldots+|x_d| < d/2 \}$$ then $K$ avoids $(1/2,\ldots,1/2)+{\bf Z}^d$, but the dual body $$ K^* = \{ (\xi_1,\ldots,\xi_d) \in {\bf R}^d: |\xi_1|,\ldots,|\xi_d| < 2/d \}$$ is quite far from reaching a non-trivial element of $2 {\bf Z}^d$. On the other hand, by using the theory of Mahler bases or Fourier analysis one can show that if $K + {\bf Z}^d \neq {\bf R}^d$, then $K^*$ must contain a non-trivial element of $\varepsilon_d {\bf Z}^d$ for some $\varepsilon_d > 0$ depending only on $d$. However the bounds I can get here are exponentially poor in $d$. Based on the octahedron example (which intuitively seems to be the "biggest" convex set that still avoids a coset of ${\bf Z}^d$), one might tentatively conjecture that if $K + {\bf Z}^d \neq {\bf R}^d$, then the closure of $K^*$ contains a non-trivial element of $\frac{2}{d} {\bf Z}^d$. I do not know how to prove or disprove this conjecture (though I think the $d=2$ case might be worked out by ad hoc methods, and the $d=1$ case is trivial), so I am posing it here as a question.
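For what it is worth, here is a small brute-force sketch (my own, purely illustrative) of the octahedron example: it checks numerically that every point of the coset $(1/2,\ldots,1/2)+{\bf Z}^d$ has $\ell^1$ norm at least $d/2$, so the open body $K$ indeed misses that coset in each small dimension tested.

import itertools

def min_l1_on_coset(d, box=2):
    # minimum of ||(1/2, ..., 1/2) + n||_1 over integer vectors n with |n_i| <= box;
    # the true minimizer has every coordinate equal to 1/2 or -1/2, so box = 2 is enough
    return min(sum(abs(0.5 + ni) for ni in n)
               for n in itertools.product(range(-box, box + 1), repeat=d))

for d in range(1, 6):
    m = min_l1_on_coset(d)
    # K = {x : ||x||_1 < d/2} is open, so it misses the coset exactly when m >= d/2
    print(d, m, m >= d / 2)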
Basic MOSAICmodeling models consist of Equations with a single Notation, which are then joined together to form an Equation System, which can be simulated or evaluated. At this point we assume that you already know how to set up a Notation. The editor to enter or edit an Equation inside MOSAICmodeling is shown in Fig. 1. Each Equation has a location at which it is saved, is connected to one Notation, and may load a Parameter List. The Equation editor has a Toolbox, which can be opened via the “Open Toolbox” button and can help you in entering Equations. MOSAICmodeling uses a LaTeX style to enter Equations. Figure 1: The Equation editor inside MOSAICmodeling. The Toolbox offers short-cuts to enter LaTeX commands for complex mathematical operations. Before presenting the workflow for equations, we introduce the structure of variables inside MOSAICmodeling. This structure needs to be transferred to MOSAICmodeling’s LaTeX style to use variables inside Equations: baseName_{subscript, index1, index2}^{superscript1, superscript2} Inside MOSAICmodeling this variable is then rendered in standard mathematical notation, with the base name carrying the given subscripts, indices and superscripts. The supported mathematical operations and their MOSAICmodeling LaTeX commands are:
Multiplication: a \cdot b
Division: a / b
Fraction: \frac{a}{b}
Power Function: (a)^{b}
n-th Root: (a)^{1/n}
Exponential Function: \exp(a)
Natural Logarithm: \ln(a)
Sine: \sin(a)
Cosine: \cos(a)
Summation: \sum_{i=1}^{Ni}{x_{i}}
Derivative: \diff{x}{t}
Partial Derivative: \pdiff{a}{b}
Second Partial Derivative: \p2diff{a}{b}{c}
The workflow to enter a new Equation inside MOSAICmodeling is displayed in the following:
I was reading about the conic sections and that a conic section includes pairs of straight lines, ellipses, hyperbolas, circles and parabolas. Are all these 5 components enough to form a right circular cone or are these just parts of a right circular cone? What I mean is: if I put together an infinite number of circles, parabolas, hyperbolas and ellipses, will I get a right circular cone? If you intersect a cone with a plane, the intersection will be one of the following: a parabola, a circle, an ellipse, a hyperbola, a pair of lines (the plane must lie along the axis of the cone), a single line (plane is tangent to cone), or a single unique point (the plane must pass through the apex). There are infinitely many parabolas, circles, ellipses, hyperbolas, and lines that can be found in this way. You're asking if we can build a cone using these "parts." Well, intuitively, yes. After all, we just found these parts on the surface! But let's be more specific. Will any arrangement in space of, say, circles generate a cone? No. A random arrangement of many circles would probably look like a scattered mess. Or maybe not. If you're really lucky, your random arrangement could perhaps be a torus. Okay, so what if we're not randomly placing our parts. What if we're carefully placing circles against each other to generate our cone? Then yes, you can build a cone. That is, if you consider the point at the apex a circle of radius zero. Using ellipses would work similarly. What if you use hyperbolas? Sure, that works. Though as you move from one side to the other, the center-most piece will degenerate to two lines. Using parabolas would similarly have a center-most piece that degenerates into a single line. If you take a single straight line and rotate it about an axis through a point on that line, you'll also map out a cone. In this case, there will be no degeneracy. There is also a single straight line and a point - both degenerate in the sense that the degree of the equation is less than $2$. You can consider the cone as cut by a family of parallel planes, and the conics on each plane will combine to give the whole cone. They will all be of the same type, with the plane through the apex giving a degenerate form. You get a family of pairs of lines by considering cuts by the family of planes which contain the axis of the cone, and in this case all the "conics" are the same. You can also consider a family of conics generated by the planes which contain a random line in space. In this case you will likely get all conics except a pair of lines (you only get a pair of lines when the line goes through the apex; consider a plane containing your line and the axis of the cone). You might miss a circle in cases where the plane through the apex is perpendicular to the axis and you get a point as a degenerate circle. Otherwise the apex gives you a degenerate ellipse. A cone is a surface and as such cannot consist of a finite number of curves. A conic section is the intersection of a cone with a plane, so it is a planar curve.
Depending on the relative positions of the cone and the plane, the section can be a single or a double line, an ellipse, a parabola or a hyperbola (the circle is just a particular case of the ellipse). A way to reconstruct a cone is to sweep a plane over the whole space (for instance by translation or rotation), and put together all the sections so obtained. Putting together random lines, ellipses... in any number will just result in a mess. Yes, a right circular cone consists only of pairs of straight lines, hyperbolas, parabolas, circles and ellipses, obtained in each case by cutting it with transverse planes. Take a cone of a fixed semi-vertical angle $\alpha$ whose symmetry axis is the x-axis. Case 1: Cut it by a plane of variable inclination $ \beta$ and constant $C$, $$y = \tan \beta \,x + C \tag{1} $$ The conic section so formed has the eccentricity $$ \epsilon = \dfrac{\sin\alpha }{\sin \beta} \tag{2}$$ CASE $ C \ne 0 $, i.e., the cutting plane avoids the cone apex: If $\beta > \alpha $ you get an ellipse. If $\beta < \alpha $ you get a hyperbola in two disjointed parts. If $\beta = \alpha $ you get a parabola. If you rotate any of the three above conics about the x-axis you get back the corresponding cut segment of the cone. CASE $ C = 0 $, i.e., the cutting plane includes the cone apex: A pair of straight lines which include an angle $ 2 \gamma $, where $$ \tan \gamma = \tan \alpha \,\cos \beta \tag{3}$$ Case 2: You can keep $\beta $ constant and $C$ variable; you get back the cone as stacked conic sections if you dilate and shift each conic by multiplication factors $ (\sin \alpha, \cos \alpha )$ respectively along the x-axis. This response would be more convincing with images. Using expression (4), I uploaded a (poor quality) image of a two-parameter $ (r,\theta) $ cone surface where one of $ \beta, C $ is varied at a time, keeping the other fixed. Ellipses are stacked to make up the cone. All three types of conics are seen on one cone sheet. (The cone apex is excluded in the first plot. The parabola is not plotted in the second, due to limitations in the expression when some quantities go infinite.) Parametric equations of all possible conics: $$ (x,y,z) = (x, \tan \beta\, x - c , \pm \sqrt {x^2( \tan{^2} \alpha -\tan{^2} \beta) + 2 \tan \beta\, c \,x - c^2} ) \tag{4} $$ The process by which you get back the cone from conic sections works in two ways: rotation about the x-axis, or dilation/translation keeping the conic on the x-axis.
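The stacking described above is easy to reproduce numerically. The following sketch (mine, with arbitrarily chosen angles, not part of the answers) samples the family of conics given by expression (4) for a fixed inclination and a range of offsets c, and checks that every sampled point satisfies the cone equation $y^2+z^2=x^2\tan^2\alpha$.

import numpy as np

alpha, beta = np.radians(30.0), np.radians(50.0)   # semi-vertical angle and plane inclination
ta, tb = np.tan(alpha), np.tan(beta)

points = []
for c in np.linspace(-2.0, 2.0, 41):               # sweep a family of parallel cutting planes
    x = np.linspace(-3.0, 3.0, 401)
    rad = x**2 * (ta**2 - tb**2) + 2 * tb * c * x - c**2   # radicand in expression (4)
    x = x[rad >= 0]
    r = np.sqrt(rad[rad >= 0])
    y = tb * x - c
    points.append(np.column_stack([x, y, r]))
    points.append(np.column_stack([x, y, -r]))

pts = np.vstack(points)
# every sampled point satisfies y^2 + z^2 = x^2 tan(alpha)^2, up to floating-point round-off
print(np.max(np.abs(pts[:, 1]**2 + pts[:, 2]**2 - (ta * pts[:, 0])**2)))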
Problem: $dX_t = \sigma X_t\,dB_t$, $X_0=x$, and $dY_t=X_t\,dt-Z_t\,dt$. Find $Y_t$, where $Z_t$ is a control and $B_t$ is standard Brownian motion. My attempt: From Ito's lemma, $\partial_B X_t=\sigma X_t$, therefore $X_t=c(t)\exp(\sigma B_t)$. Also $\partial_tX_t+\frac{1}{2}\partial^2_BX_t=0$; substituting the above expression for $X_t$ gives $c^\prime(t)+\frac{\sigma^2}{2}c(t)=0$, which implies $c(t)=a\exp(-\sigma^2\frac{t}{2})$. Putting it all together, $$X_t=a\exp\left(-\sigma^2\frac{t}{2}+\sigma B_t\right),$$ and evaluating at $t=0$ with $X_0=x$ gives $a$. Then $Y_s-Y_r=\int_r^s a\exp(-\sigma^2\frac{t}{2}+\sigma B_t)\,dt-\int^s_r Z_t\,dt$. How can I solve the time integral of the exponential of Brownian motion? A hint is sufficient. I would like to do it myself. Thank you all. Edit: I need help to find a closed form expression for $\int_r^s a\exp(\sigma B_t)\,dt.$
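A hint in that direction: the time integral of the exponential of Brownian motion is not expected to have an elementary closed form (it is the exponential functional studied by Yor and others), so in practice it is characterized distributionally or approximated numerically. As a purely illustrative sketch (the parameter values and the case $r=0$, $s=T$ are my own choices), a Monte Carlo estimate of its mean can be compared against $\int_0^T e^{\sigma^2 t/2}\,dt$:

import numpy as np

rng = np.random.default_rng(0)

def mc_integral_exp_bm(sigma=0.5, T=1.0, n_steps=1000, n_paths=5000):
    # Monte Carlo estimate of E[ int_0^T exp(sigma * B_t) dt ] (illustrative only)
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(increments, axis=1)                 # Brownian paths on the time grid
    integrals = np.exp(sigma * B).sum(axis=1) * dt    # Riemann sums of exp(sigma * B_t)
    return integrals.mean(), integrals.std() / np.sqrt(n_paths)

sigma = 0.5
mean, err = mc_integral_exp_bm(sigma=sigma)
exact_mean = (np.exp(sigma**2 / 2.0) - 1.0) / (sigma**2 / 2.0)   # int_0^1 E[exp(sigma B_t)] dt
print(mean, "+/-", err, "vs", exact_mean)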
PandaX is designed to build and operate a ton-scale liquid xenon experiment to detect the so far elusive dark matter in the Universe. The PandaX experiment will use a two-phase (liquid and gas) xenon position-sensitive time projection chamber detector. It is located at CJPL, which is one of the deepest underground labs in the world. The program will evolve in two stages, initially probing the low-mass regime (<10 GeV) with a nuclear-recoil energy threshold of about 5 keV and ultimately employing a ton-scale detector to probe the higher-mass regime (10 to 1,000 GeV), reaching a sensitivity down to $10^{-47}\ cm^{2}$ for the spin-independent WIMP-nucleon cross section. PandaX-II dark matter experiment: PandaX-II is a dark matter direct detection experiment equipped with a half-ton scale dual-phase time projection chamber (TPC). The PandaX-II experiment started in Oct 2014 with the detector installation at CJPL, followed by about one year of engineering runs and tests. The commissioning run started on Nov 22, 2015. After collecting 19 days of effective data, this run stopped due to a high level of krypton background. Nevertheless, a factor of two improvement on the spin-independent (SI) WIMP-nucleon cross section limits was obtained using this data compared to the final result from PandaX-I using 80 days of data. After a krypton distillation campaign, the experiment restarted data taking in March 2016. The first physics run lasted till the end of June and collected 80 days of effective data. Combining the data from the commissioning run and the first physics run, PandaX-II obtained the most stringent upper limits on the SI WIMP-nucleon cross sections for WIMPs with masses between 5 and 1000 GeV (see figure below). The PandaX-II experiment is expected to run till the end of 2017. PandaX-III neutrinoless double beta decay ($0\nu\beta\beta$): The PandaX-III project searches for the possible $0\nu\beta\beta$ of ${}^{136}Xe$: ${}^{136}Xe \rightarrow {}^{136}Ba + 2e{}^{-}$, with 200 kg to one ton of 90% enriched ${}^{136}Xe$ as the source. The high pressure gas Time Projection Chamber (TPC) we use can measure the energies of the two emitted electrons from $0\nu\beta\beta$ and image the electron tracks, and thus can effectively identify $0\nu\beta\beta$ signals and suppress the unwanted backgrounds. Our TPC is symmetrical with the negative high voltage in the middle. Electrons ionized by an energy release in the TPC drift to either end, where Micromegas detectors are used to amplify and collect the charges. The expected spatial resolution is 3 mm and the energy resolution is 3%. At SJTU, we are building a 20 kg scale prototype detector, which consists of a stainless steel pressure vessel and a single-ended TPC. We aim to optimize the design of the Micromegas readout plane, to finalize the energy calibration of the TPC, and to develop an algorithm for 3D electron track reconstruction.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Exams can be easily created in LaTeX by means of the class exam.cls. This class makes it straightforward to typeset questions, and it sets a 1in margin in all paper sizes and provides special commands to write and compute grades. This article explains how to edit with exam.cls. Contents Let's see a simple working example of the exam class: \documentclass{exam} \usepackage[utf8]{inputenc} \begin{document} \begin{center} \fbox{\fbox{\parbox{5.5in}{\centering Answer the questions in the spaces provided. If you run out of room for an answer, continue on the back of the page.}}} \end{center} \vspace{5mm} \makebox[\textwidth]{Name and section:\enspace\hrulefill} \vspace{5mm} \makebox[\textwidth]{Instructor’s name:\enspace\hrulefill} \begin{questions} \question Is it true that \(x^n + y^n = z^n\) if \(x,y,z\) and \(n\) are positive integers?. Explain. \question Prove that the real part of all non-trivial zeros of the function \(\zeta(z)\) is \(\frac{1}{2}\) \question Compute \[\int_{0}^{\infty} \frac{\sin(x)}{x}\] \end{questions} To use the exam class you must put the line \documentclass{exam} on top of your .tex file. This will enable the package's exam-related commands, and set the page format to allow margins for corrections. The syntax of the questions environment is very similar to that of the itemize and enumerate environments. Each question is typed by putting the command \question before it. The other commands in this example are not specific to the exam class, but may be useful to create a quick header for your exam. In the previous section, a basic example showing how to create question was presented. Questions can be further customized, and this section explains how. If the students are required to answer the exam in the space provided, that space can be manually set or evenly distributed. See the example below: \begin{questions} \question Is it true that \(x^n + y^n = z^n\) if \(x,y,z\) and \(n\) are positive integers?. Explain. \vspace{\stretch{1}} \question Prove that the real part of all non-trivial zeros of the function \(\zeta(z)\) is \(\frac{1}{2}\) \vspace{\stretch{1}} \question Compute \[\int_{0}^{\infty} \frac{\sin(x)}{x}\] \vspace{\stretch{1}} \end{questions} \clearpage In this example the command \vspace{\stretch{1}} after each question equally distributes the available space. The command \clearpage inserts a page break point to continue typing questions in a new page. If you want to manually assign the space to each question, use the command \vspace{} and in between the braces write the units of space you need. For instance, \vspace{1in} inserts a 1-inch vertical space. Check the documentation about lenghts in LaTeX for a list of available units. If your questions have several parts focused on some subtopics you can use the environments parts, subparts, subsubparts and the corresponding commands \part, \subpart and \subsubpart. See the next example: \begin{questions} \question Given the equation \(x^n + y^n = z^n\) for \(x,y,z\) and \(n\) positive integers. \begin{parts} \part For what values of $n$ is the statement in the previous question true? \vspace{\stretch{1}} \part For $n=2$ there's a theorem with a special name. What's that name? \vspace{\stretch{1}} \part What famous mathematician had an elegant proof for this theorem but there was not enough space in the margin to write it down? \vspace{\stretch{1}} \begin{subparts} \subpart Who actually proved the theorem? \vspace{\stretch{1}} \subpart How long did actually take to solve this problem? 
\vspace{\stretch{1}} \end{subparts} \end{parts} \question Prove that the real part of all non-trivial zeros of the function \(\zeta(z)\) is \(\frac{1}{2}\) ... \end{questions} The environments parts and subparts provide question-like nested lists. Jut like in questions you can set manually the vertical spacing. There are four environments to create multiple choice questions. \question Which of these famous physicists invented time? \begin{oneparchoices} \choice Stephen Hawking \choice Albert Einstein \choice Emmy Noether \choice This makes no sense \end{oneparchoices} \question Which of these famous physicists published a paper on Brownian Motion? \begin{checkboxes} \choice Stephen Hawking \choice Albert Einstein \choice Emmy Noether \choice I don't know \end{checkboxes} In this example, two different environments are used to list the possible choices for multiple-choice questions. oneparchoices labels the choices with upper case letters and prints them horizontally. If you want the choices to be printed in a list-like format, the environment choices is the right choice. checkboxes prints check boxes before each choice. If you need the choices to be printed horizontally use the environment oneparcheckboxes instead. Another important feature of the exam class is that it provides commands to make grading the exams easier. You can add a parameter to each \question or \part to print the number of points you attain by correctly answering it \begin{questions} \question Given the equation \(x^n + y^n = z^n\) for \(x,y,z\) and \(n\) positive integers. \begin{parts} \part[10] For what values of $n$ is the statement in the previous question true? \vspace{\stretch{1}} \part[10] For $n=2$ there's a theorem with a special name. What's that name? \vspace{\stretch{1}} \part[10] What famous mathematician had an elegant proof for this theorem but there was not enough space in the margin to write it down? \vspace{\stretch{1}} \end{parts} \question[20] Compute \[\int_{0}^{\infty} \frac{\sin(x)}{x}\] \vspace{\stretch{1}} \end{questions} The additional parameter inside brackets after a question or a part represents the number of points assigned to it. You can change the appearance and the place where the points are printed, see the reference guide for additional commands. Sometimes it's convenient to include half points as value for parts of a questions. You can do this and then print then the value of the whole question. See the example below: \documentclass[addpoints]{exam} \usepackage[utf8]{inputenc} \begin{document} \begin{questions} \question Given the equation \(x^n + y^n = z^n\) for \(x,y,z\) and \(n\) positive integers. \begin{parts} \part[5] For what values of $n$ is the statement in the previous question true? \vspace{\stretch{1}} \part[2 \half] For $n=2$ there's a theorem with a special name. What's that name? \vspace{\stretch{1}} \part[2 \half] What famous mathematician had an elegant proof for this theorem but there was not enough space in the margin to write it down? \vspace{\stretch{1}} \end{parts} \droptotalpoints \question[20]... \end{questions} The command \half adds half points to a question. The command \droptotalpoints prints the total number of points for the last question. For this last command to work you must add the option [addpoints] to the document class statement. It is possible to add bonus questions, this extra points will later show up in the grading table. Adding bonus questions and parts is actually as simple as creating regular questions and parts. 
\begin{questions} \question Given the equation \(x^n + y^n = z^n\) for \(x,y,z\) and \(n\) positive integers. \begin{parts} \part[5] For what values of $n$ is the statement in the previous question true? \vspace{\stretch{1}} \part[2 \half] For $n=2$ there's a theorem with a special name. What's that name? \vspace{\stretch{1}} \bonuspart[2 \half] What famous mathematician had an elegant proof for this theorem but there was not enough space in the margin to write it down? \vspace{\stretch{1}} \end{parts} \droptotalpoints \question[20] Compute \[\int_{0}^{\infty} \frac{\sin(x)}{x}\] \vspace{\stretch{1}} \bonusquestion[30] Prove that the real part of all non-trivial zeros of the function \(\zeta(z)\) is \(\frac{1}{2}\) \vspace{\stretch{1}} \end{questions} The commands \bonusquestion and \bonuspart print "(bonus)" next to the point value of the question. A table that shows the points of each question can be printed with a special command. There are three commands to print a table of grades: \gradetable, \bonusgradetable and \combinedgradetable. These commands take two extra parameters, each inside brackets: [h] for a horizontal table or [v] for a vertical table, and [questions] to index the points by question or [pages] to list the points by page number. There is no support for languages other than English in the exam class. Nevertheless, it's easy to translate the default words into your local language. The next snippet shows how to translate the example presented in the previous sections to Spanish. \documentclass[addpoints]{exam} \usepackage[utf8]{inputenc} \usepackage[spanish]{babel} \pointpoints{punto}{puntos} \bonuspointpoints{punto extra}{puntos extra} \totalformat{Pregunta \thequestion: \totalpoints puntos} \chqword{Pregunta} \chpgword{Página} \chpword{Puntos} \chbpword{Puntos extra} \chsword{Puntos obtenidos} \chtword{Total} ... The rest of the document would be exactly the same as shown in previous examples. The commands typed here change the default words in the exam class: \pointpoints{punto}{puntos} and \bonuspointpoints{punto extra}{puntos extra} set the words used for point values and bonus point values, \totalformat{Pregunta \thequestion: \totalpoints puntos} sets the text printed by \droptotalpoints, and in the grade table \chqword{Pregunta} changes the word for Question, \chpgword for Page, \chpword for Points, \chbpword for Bonus Points, \chsword for Score and \chtword for Total. Placing and formatting the points mark for questions: These commands can be typed in the preamble to change the format of the whole document, or right before a question to change the format from that question down to the next formatting command or the end of the document. \pointsinmargin prints the point values in the left margin; use \nopointsinmargin to revert to the default format. \pointsinrightmargin prints the point values in the right margin; \nopointsinrightmargin reverts to the normal behaviour. \bracketedpoints uses brackets instead of parentheses around the point values. \boxedpoints draws a box around the point values. Changing default names in Grade Tables: The commands depend on the format and the information displayed in the table. The h and v within each command mean horizontal or vertical orientation. If the command is preceded by a b it changes the words in a bonus table; if it is preceded by a c it works on combined tables. For instance, to change the word "Score" in a vertically oriented bonus table to the words "Points Awarded" you should use \bvsword{Points Awarded}. Below a table with the default values is shown.
Default words in the grade tables (horizontal and vertical forms):
Grades table, horizontal: \hpgword{Page:}, \hpword{Points:}, \hsword{Score:}, \htword{Total}
Grades table, vertical: \vpgword{Page}, \vpword{Points}, \vsword{Score}, \vtword{Total:}
Bonus points table, horizontal: \bhpgword{Page:}, \bhpword{Bonus Points:}, \bhsword{Score:}, \bhtword{Total}
Bonus points table, vertical: \bvpgword{Page}, \bvpword{Bonus Points}, \bvsword{Score}, \bvtword{Total:}
Combined table, horizontal: \chpgword{Page:}, \chpword{Points:}, \chbpword{Bonus Points:}, \chsword{Score:}, \chtword{Total}
Combined table, vertical: \cvpgword{Page}, \cvpword{Points}, \cvbpword{Bonus Points}, \cvsword{Score}, \cvtword{Total:}
For more information see:
Ground state solutions of fractional Schrödinger equations with potentials and weak monotonicity condition on the nonlinear term 1. Center for Applied Mathematics, Tianjin University, Tianjin 300072, China 2. Department of Mathematics, East China University of Science and Technology, Shanghai 200237, China In this paper we are concerned with the fractional Schrödinger equation $ (-\Delta)^{\alpha} u+V(x)u = f(x, u) $, $ x\in {{\mathbb{R}}^{N}} $, where $ f $ is superlinear, of subcritical growth and $ u\mapsto\frac{f(x, u)}{\vert u\vert} $ is nondecreasing. When $ V $ and $ f $ are periodic in $ x_{1},\ldots, x_{N} $, we show the existence of ground states and of infinitely many solutions if $ f $ is odd in $ u $. When $ V $ is coercive or $ V $ has a bounded potential well and $ f(x, u) = f(u) $, the ground states are obtained. When $ V $ and $ f $ are asymptotically periodic in $ x $, we also obtain ground state solutions. In the previous research, $ u\mapsto\frac{f(x, u)}{\vert u\vert} $ was assumed to be strictly increasing; due to this small change, we are forced to go beyond methods of smooth analysis. Keywords: Fractional logarithmic Schrödinger equation, periodic potential, coercive potential, bounded potential, nonsmooth critical point theory. Mathematics Subject Classification: Primary: 35J60, 35R11; Secondary: 47J30. Citation: Chao Ji. Ground state solutions of fractional Schrödinger equations with potentials and weak monotonicity condition on the nonlinear term. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11): 6071-6089. doi: 10.3934/dcdsb.2019131
Ok so I've got a question after walking through the time dilation derivation that used 'light clocks' (think a beam of light bouncing back and forth between mirrors) to derive ##\delta t^\prime = \frac{\delta t}{\sqrt{1-\frac{v^2}{c^2}}}##. So my question is: could you derive the same equation if you had used atomic clocks instead? Don't actually do so here, I just wanted to know if it would have led to the same relation. Thanks for your time!
Dimensionality reduction is used to remove irrelevant and redundant features. When the number of features in a dataset is bigger than the number of examples, the probability density function of the dataset becomes difficult to estimate. For example, if we model a dataset \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\) as a single Gaussian N(μ, ∑), then the probability density function is defined as: \(P(x) = \frac{1}{{(2π)}^{\frac{n}{2}} |Σ|^\frac{1}{2}} exp(-\frac{1}{2} (x-μ)^T {Σ}^{-1} (x-μ))\) where \(μ = \frac{1}{m} \sum_{i=1}^m x^{(i)} \\ ∑ = \frac{1}{m} \sum_{i=1}^m (x^{(i)} - μ)(x^{(i)} - μ)^T\). But if n >> m, then ∑ will be singular, and calculating P(x) will be impossible. Note: each \((x^{(i)} - μ)(x^{(i)} - μ)^T\) is always singular, but the sum of many such singular matrices is most likely invertible when m >> n. Principal Component Analysis Given a set \(S = \{x^{(1)}=(0,1), x^{(2)}=(1,1)\}\), to reduce the dimensionality of S from 2 to 1, we need to project the data on the vector that maximizes the sum of squared projections. In other words, find the normalized vector \(μ = (μ_1, μ_2)\) that maximizes \( ({x^{(1)}}^T.μ)^2 + ({x^{(2)}}^T.μ)^2 = (μ_2)^2 + (μ_1 + μ_2)^2\). Using the method of Lagrange Multipliers, we can solve the maximization problem with the constraint \(||μ||^2 = μ_1^2 + μ_2^2 = 1\): \(L(μ, λ) = (μ_2)^2 + (μ_1 + μ_2)^2 - λ (μ_1^2 + μ_2^2 - 1) \) We need to find μ such that \(∇_μ L = 0 \) and ||μ|| = 1. After the derivation we find that the solution is approximately the vector μ = (0.53, 0.85). Generalization Given a set \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\), to reduce the dimensionality of S, we need to find the u that solves \(arg \ \underset{u: ||u|| = 1}{max} \frac{1}{m} \sum_{i=1}^m ({x^{(i)}}^T u)^2\)\(=\frac{1}{m} \sum_{i=1}^m (u^T {x^{(i)}})({x^{(i)}}^T u)\) \(=u^T (\frac{1}{m} \sum_{i=1}^m {x^{(i)}} * {x^{(i)}}^T) u\) Let’s define \( ∑ = \frac{1}{m} \sum_{i=1}^m {x^{(i)}} * {x^{(i)}}^T \) Using the method of Lagrange Multipliers, we can solve the maximization problem with the constraint \(||u||^2 = u^Tu = 1\): \(L(u, λ) = u^T ∑ u - λ (u^Tu - 1) \) If we calculate the derivative with respect to u, we will find: \(∇_u L = ∑ u - λ u = 0\) Therefore a u that solves this maximization problem must be an eigenvector of ∑. We need to choose the eigenvector with the highest eigenvalue. If we choose k eigenvectors \({u_1, u_2, \ldots, u_k}\), then we transform the data by multiplying each example with each eigenvector: \(z^{(i)} := (u_1^T x^{(i)}, u_2^T x^{(i)}, \ldots, u_k^T x^{(i)}) = U^T x^{(i)}\) Data should be normalized before running the PCA algorithm: 1-\(μ = \frac{1}{m} \sum_{i=1}^m x^{(i)}\) 2-\(x^{(i)} := x^{(i)} - μ\) 3-\(σ_j^2 = \frac{1}{m} \sum_{i=1}^m {x_j^{(i)}}^2\) 4-\(x_j^{(i)} := \frac{x_j^{(i)}}{σ_j}\) To reconstruct an approximation of the original data, we calculate \(\widehat{x}^{(i)} := U z^{(i)} = U U^T x^{(i)}\) Factor Analysis Factor analysis is a way to take a mass of data and shrink it to a smaller data set with fewer features. Given a set \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\), and S is modeled as a single Gaussian. To reduce the dimensionality of S, we define a relationship between the variable x and a latent (hidden) variable z called a factor, such that \(x^{(i)} = μ + Λ z^{(i)} + ϵ^{(i)}\) with \(μ \in R^{n}\), \(z^{(i)} \in R^{d}\), \(Λ \in R^{n*d}\), \(ϵ \sim N(0, Ψ)\), Ψ diagonal, \(z \sim N(0, I)\) and d <= n. From Λ we can find the features that are related to each factor, and then identify the features that need to be eliminated or combined in order to reduce the dimensionality of the data.
Below the steps to estimate the parameters Ψ, μ, Λ.\(E[x] = E[μ + Λz + ϵ] = E[μ] + ΛE[z] + E[ϵ] = μ \) \(Var(x) = E[(x – μ)^2] = E[(x – μ)(x – μ)^T] = E[(Λz + ϵ)(Λz + ϵ)^T]\) \(=E[Λzz^TΛ^T + ϵz^TΛ^T + Λzϵ^T + ϵϵ^T]\) \(=ΛE[zz^T]Λ^T + E[ϵz^TΛ^T] + E[Λzϵ^T] + E[ϵϵ^T]\) \(=Λ.Var(z).Λ^T + E[ϵz^TΛ^T] + E[Λzϵ^T] + Var(ϵ)\) ϵ and z are independent, then the join probability of p(ϵ,z) = p(ϵ)*p(z), and \(E[ϵz]=\int_{ϵ}\int_{z} ϵ*z*p(ϵ,z) dϵ dz\)\(=\int_{ϵ}\int_{z} ϵ*z*p(ϵ)*p(z) dϵ dz\) \(=\int_{ϵ} ϵ*p(ϵ) \int_{z} z*p(z) dz dϵ\) \(=E[ϵ]E[z]\) So:\(Var(x)=ΛΛ^T + Ψ\) Therefore \(x \sim N(μ, ΛΛ^T + Ψ)\) and \(P(x) = \frac{1}{{(2π)}^{\frac{n}{2}} |ΛΛ^T + Ψ|^\frac{1}{2}} exp(-\frac{1}{2} (x-μ)^T {(ΛΛ^T + Ψ)}^{-1} (x-μ))\) \(Λ \in R^{n*d}\), if d <= m, then \(ΛΛ^T + Ψ\) is most likely invertible. To find Ψ, μ, Λ, we need to maximize the log-likelihood function.\(l(Ψ, μ, Λ) = \sum_{i=1}^m log(P(x^{(i)}; Ψ, μ, Λ))\) \(= \sum_{i=1}^m log(\frac{1}{{(2π)}^{\frac{n}{2}} |ΛΛ^T + Ψ|^\frac{1}{2}} exp(-\frac{1}{2} (x^{(i)}-μ)^T {(ΛΛ^T + Ψ)}^{-1} (x^{(i)}-μ)))\) This maximization problem cannot be solved by calculating the \(∇_Ψ l(Ψ, μ, Λ) = 0\), \(∇_μ l(Ψ, μ, Λ) = 0\), \(∇_Λ l(Ψ, μ, Λ) = 0\). However using the EM algorithm, we can solve that problem. More details can be found in this video: https://www.youtube.com/watch?v=ey2PE5xi9-A Restricted Boltzmann Machine A restricted Boltzmann machine (RBM) is a two-layer stochastic neural network where the first layer consists of observed data variables (or visible units), and the second layer consists of latent variables (or hidden units). The visible layer is fully connected to the hidden layer. Both the visible and hidden layers are restricted to have no within-layer connections. In this model, we update the parameters using the following equations: \(W := W + α * \frac{x⊗Transpose(h_0) – v_1 ⊗ Transpose(h_1)}{n} \\ b_v := b_v + α * mean(x – v_1) \\ b_h := b_h + α * mean(h_0 – h_1) \\ error = mean(square(x – v_1))\). Deep Belief Network A deep belief network is obtained by stacking several RBMs on top of each other. The hidden layer of the RBM at layer i becomes the input of the RBM at layer i+1. The first layer RBM gets as input the input of the network, and the hidden layer of the last RBM represents the output. Autoencoders An autoencoder, autoassociator or Diabolo network is a deterministic artificial neural network used for unsupervised learning of efficient codings. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. A deep Autoencoder contains multiple hidden units. Loss function For binary values, the loss function is defined as: \(loss(x,\hat{x}) = -\sum_{k=1}^{size(x)} x_k.log(\hat{x_k}) + (1-x_k).log(1 – \hat{x_k})\). For real values, the loss function is defined as: \(loss(x,\hat{x}) = ½ \sum_{k=1}^{size(x)} (x_k – \hat{x_k})^2\). Dimensionality reduction Autoencoders separate data better than PCA. Variational Autoencoder Variational autoencoder (VAE) models inherit autoencoder architecture, but make strong assumptions concerning the distribution of latent variables. In general, we suppose the distribution of the latent variable is gaussian. The training algorithm used in VAEs is similar to EM algorithm.
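To make two of the preceding recipes concrete, here is a single, self-contained NumPy sketch (my own illustration; the layer sizes, learning rate and toy data are arbitrary choices) that first runs the PCA steps described earlier (normalize, form ∑, take the top-k eigenvectors, project, reconstruct) and then performs contrastive-divergence (CD-1) updates implementing the RBM update equations quoted above, with the hidden probabilities used in place of h_1 in the final term (a common smoothing choice).

import numpy as np

rng = np.random.default_rng(0)

# ---- PCA, following the steps above ----
def pca(X, k):
    # X has shape (m, n): m examples, n features; returns projections and reconstructions
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0                     # guard against constant features
    Xn = (X - mu) / sigma                       # normalization steps 1-4
    cov = Xn.T @ Xn / X.shape[0]                # the matrix called Sigma in the text
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    U = eigvecs[:, ::-1][:, :k]                 # top-k eigenvectors
    Z = Xn @ U                                  # projected data, shape (m, k)
    X_hat = (Z @ U.T) * sigma + mu              # approximate reconstruction
    return Z, X_hat

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.5], [3.0, 2.0]])
Z, X_hat = pca(X, k=1)
print(Z.ravel())
print(X_hat)

# ---- one CD-1 step for a binary RBM, following the update equations above ----
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(x, W, b_v, b_h, alpha=0.1):
    n = x.shape[0]
    p_h0 = sigmoid(x @ W + b_h)                         # positive phase
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                      # one Gibbs step back to the visibles
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    W += alpha * (x.T @ h0 - v1.T @ p_h1) / n           # W := W + alpha*(x h0^T - v1 h1^T)/n
    b_v += alpha * np.mean(x - v1, axis=0)
    b_h += alpha * np.mean(h0 - p_h1, axis=0)
    return np.mean((x - v1) ** 2)                       # reconstruction error

n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
data = (rng.random((20, n_visible)) < 0.5).astype(float)
for _ in range(50):
    err = cd1_step(data, W, b_v, b_h)
print(err)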
Let's examine the one-dimensional three-point stencil case in detail,because I think it's important to be clear just how this behaviourarises, and what it means to set a point to a certain value in afinite-difference grid when the underlying function isdiscontinuous. The equation will be$$ u''(x) = \rho(x). $$Instead of using the interval $[-1,1]$ with discontinuity at$\frac12$, I will use the interval $[-\frac12,\frac12]$ with thediscontinuity placed at $0$. The grid size will be $h$, and I will only have to consider the interval $[-\frac h2,\frac h2]$ around the grid point $0$. First, in a finite-difference approximation, we approximate$$ u''(0) = \frac{\hat u(-h) - 2\hat u(0) + \hat u(h)}{h^2} $$and solve$$ \frac{\hat u(-h) - 2\hat u(0) + \hat u(h)}{h^2} = \hat \rho(0). $$(Here the variables with hats are the numerical approximations on thegrid to the variables without hats.) But this is a very bad specification of the problem, because yourfunction $\rho(0) = 2$ at $x=0$, and $\rho(x) = 0$ everywhere else, isdiscontinuous. In particular, if we shift the grid point $0$ to eitherside by some tiny amount $\epsilon$, the numerical solution changesentirely and becomes exactly zero.This means that this is a terribly misspecified problem. We can make sense of it byconverting it to an equivalent finite-volume formulation, where itwill make much more sense. In a finite-volume method, we solve$$ \int_{-h/2}^{h/2} u''(x)\,dx = \int_{-h/2}^{h/2}\rho(x)\,dx, $$by picking $u(x)$ to be a suitable approximation to the unknownfunction. Let's pick, on the interval $[-\frac h2,\frac h2]$ the approximation$$ u(x) = \hat u(-h) \phi(x+h) + \hat u(0) \phi(x) + \hat u(h) \phi(x-h), $$where $\phi(x)$ is the basis function$$ \phi(x) = \max\left(1-\frac{|x|}{h}, 0\right), $$(it's a piecewise linear function that goes from $0$ at $-h$ to $1$ at$0$ to $0$ at $+h$, thus interpolating between grid points.). The approximation $u(x)$ is a weighted sum of three basis functions that look like this: We can then compute$$ \phi''(x) = \frac1h\delta(h-|x|) - \frac2h \delta(x), $$so that the approximation to the integral is$$ \int_{-h/2}^{h/2} u''(x) = \frac{\hat u(-h) -2\hat u(0)+\hat u(h)}{h}, $$and the finite-volume approximation to our equation becomes$$ \frac{\hat u(-h) -2\hat u(0) + \hat u(h)}{h} = \int_{-h/2}^{h/2}\rho(x)\,dx = h \hat \rho(0). $$ There are two important things here. First, this is equivalent tothe finite-difference formulation in that we end up solving the sameequations. Second, the discontinuity in $\rho$ is given a very precisemeaning: when we use the value $2$ for $\hat \rho(0)$, we are sayingthat this is the average value of $\rho$ on the interval $[-h/2,h/2]$:$$2 = \hat\rho(0) = \frac1h \int_{-h/2}^{h/2} \rho(x)\,dx.$$This interpretation is not available in the finite-differenceformulation. It is also not so sensitive to the location of the discontinuity: if the discontinuity were at some small distance $\epsilon$ away from $0$, the average value would be almost the same, but the value at $0$ might be completely different. But if we say that $2$ is the average value of $\rho$ near the gridpoint, we can then go back and compute the exact solution of theequation with the right-hand side given by$$ \tilde \rho(x) = 2[-h/2 < x < h/2]. $$(We pick a function of our own choice that gives the right average.)In this case, the exact solution will be$$ \tilde u(x) = 2 \int_{-h/2}^{h/2} G(x; u)\,du \approx 2h G(x;0), $$in terms of the Green function for the Poisson equation. 
In the two-dimensional case it will be$$ \approx 2h^2 G(x, y; 0, 0), $$as in the other (correct) answer on MSE. Finally, the outcome of all this is that when you say that you compareyour numerical approximate solution with the exact solution $u(x)=0$,this is wrong. The exact solution should not be zero, it should be$$ \approx 2h^2 G(x, y; 0,0). $$It therefore should make perfect sense that the fourth-order solution does not converge to zero with order $4$: it should converge to the correct solution, which is not zero but has magnitude of order $O(h^2)$. If you do want to get zero as the numerical solution and compare with the mathematical solution, you should set $\hat\rho(0)$ to be the average value of $\rho$, which is $0$, not $2$. The fact that the exact solution depends on the chosen grid size indicates that this is not a good way to check whether you implemented the method correctly. A very straightforward technique known as the method of manufactured solutions is better for this.
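To see this scaling in the one-dimensional setting of the answer, here is a short NumPy sketch (my own) that solves the three-point-stencil system on $[-\frac12,\frac12]$ with homogeneous Dirichlet boundary values, with $\hat\rho=2$ at the grid point at $0$ and $0$ elsewhere, and compares the computed value at the centre with $2h\,G(0;0)=-h/2$. The numerical solution tracks $-h/2$ rather than the zero function.

import numpy as np

def fd_center_value(n):
    # solve u'' = rho on [-1/2, 1/2] with u(+-1/2) = 0 using the 3-point stencil;
    # rho is 2 at the grid point sitting at x = 0 and 0 at every other node (n odd)
    h = 1.0 / (n + 1)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    rho = np.zeros(n)
    rho[n // 2] = 2.0               # the node at x = 0
    u = np.linalg.solve(A, rho)
    return h, u[n // 2]

for n in (9, 19, 39, 79):
    h, u0 = fd_center_value(n)
    print(h, u0, -h / 2)            # u(0) tracks 2h*G(0;0) = -h/2, not zero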
Assume that a function $f$ is integrable on $[0, x]$ for every $x > 0$. Prove that for any $x > 0$, $\displaystyle\left (\int_{0}^{x}fdx \right )^2\leq x\int_{0}^{x}f^2dx$. I have no idea how to even start this... What concept should I be using? EDIT: So upon the hint of using the C-S inequality/using a dummy variable for clarity, I have come up with the following proof: Let $g$ be the constant function with $g(t)=1$ for all $t$. Note that $\displaystyle\ x \cdot \int_{0}^{x}f^2(t)dt = \left(\int_{0}^{x}g(t)dt \right) \cdot \left( \int_{0}^{x}f^2(t)dt \right)$. Since we know that $f$ and $g$ are integrable, we can apply the Cauchy-Schwarz Inequality for integrals, which states that $\left(\int_{0}^{x}f(t)g(t)dt\right)^2 \leq \int_{0}^{x}f^2(t)dt \cdot \int_{0}^{x}g^2(t)dt$. Thus, $$ \left (\int_{0}^{x}f(t)\cdot g(t)dt \right )^2 = \left (\int_{0}^{x}f(t)dt \right )^2 \leq \int_{0}^{x}g(t)dt \cdot \int_0^x f^2(t)dt=x\int_0^x f^2dt .$$ Q.E.D. //I don't know if I should be using $t$ or $x$ here though... As a matter of fact, shouldn't the statement change to Prove that for any $t,x>0$, [inequality] holds. now that we use $t$? Or am I misunderstanding the use of a dummy variable?//
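As an optional numerical sanity check (not part of the proof; the test function and grid are arbitrary choices of mine), one can compare both sides of the inequality with a simple trapezoidal rule:

import numpy as np

def trap(y, t):
    # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def check(f, x, n=200001):
    t = np.linspace(0.0, x, n)
    lhs = trap(f(t), t) ** 2
    rhs = x * trap(f(t) ** 2, t)
    return lhs, rhs

for x in (0.5, 1.0, 3.0):
    lhs, rhs = check(lambda t: np.sin(3.0 * t) + t, x)
    print(x, lhs <= rhs, lhs, rhs)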
CGAL 4.7 - Number Types

class CGAL::Root_of_traits< RT >: For a RealEmbeddable IntegralDomain RT, the class template Root_of_traits<RT> associates a type Root_of_2, which represents algebraic numbers of degree 2 over RT.

class CGAL::Sqrt_extension< NT, ROOT, DifferentExtensionComparable, FilterPredicates >: An instance of this class represents an extension of the type NT by one square root of the type ROOT.

template<typename RT, typename OutputIterator> OutputIterator CGAL::compute_roots_of_2(const RT &a, const RT &b, const RT &c, OutputIterator oit): solves the univariate polynomial defined by the given coefficients and writes the solutions into the given OutputIterator. Writes the real roots of the polynomial \(aX^2+bX+c\) into \(oit\) in ascending order; multiplicities are not reported. Requirement: RT is an IntegralDomainWithoutDivision. See also: RootOf_2, CGAL::Root_of_traits<RT>, CGAL::make_root_of_2(), CGAL::make_sqrt(), CGAL::Sqrt_extension<NT,ROOT>. #include <CGAL/Root_of_traits.h>

template<typename RT> Root_of_traits<RT>::Root_of_2 CGAL::make_root_of_2(const RT &a, const RT &b, const RT &c, bool s): constructs an algebraic number of degree 2 over a ring number type. Returns the smallest real root of the polynomial \(aX^2+bX+c\) if \(s\) is true, and the largest root if \(s\) is false. Requirement: RT is an IntegralDomainWithoutDivision. See also: RootOf_2, CGAL::Root_of_traits<RT>, CGAL::make_sqrt(), CGAL::compute_roots_of_2(), CGAL::Sqrt_extension<NT,ROOT>. #include <CGAL/Root_of_traits.h>

template<typename RT> Root_of_traits<RT>::Root_of_2 CGAL::make_root_of_2(RT alpha, RT beta, RT gamma): constructs an algebraic number of degree 2 over a ring number type, namely the number \(\alpha+ \beta\sqrt{\gamma}\). Requirement: RT is an IntegralDomainWithoutDivision. See also: RootOf_2, CGAL::Root_of_traits<RT>, CGAL::make_sqrt(), CGAL::compute_roots_of_2(), CGAL::Sqrt_extension<NT,ROOT>. #include <CGAL/Root_of_traits.h>

template<typename RT> Root_of_traits<RT>::Root_of_2 CGAL::make_sqrt(const RT &x): constructs a square root of a given value of type \(RT\). Depending on the type \(RT\), the square root may be returned in a new type that can represent algebraic extensions of degree 2. #include <CGAL/Root_of_traits.h>
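The reference above documents a C++ library; since the worked code elsewhere in this document is Python, here is a small Python toy analogue rather than CGAL code. The class and function names below are my own invention; the sketch only illustrates the kind of value these CGAL functions produce, a degree-2 algebraic number stored exactly as \(\alpha + \beta\sqrt{\gamma}\), plus the two ordered roots of \(aX^2+bX+c\).

from fractions import Fraction
import math

class SqrtExt:
    """alpha + beta*sqrt(gamma) with rational alpha, beta and fixed gamma >= 0 (toy analogue)."""
    def __init__(self, alpha, beta, gamma):
        self.alpha, self.beta, self.gamma = Fraction(alpha), Fraction(beta), Fraction(gamma)
    def to_float(self):
        return float(self.alpha) + float(self.beta) * math.sqrt(float(self.gamma))

def roots_of_2(a, b, c):
    """Both real roots of a*X^2 + b*X + c (a != 0, non-negative discriminant), ascending."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    disc = b * b - 4 * a * c
    lo = SqrtExt(-b / (2 * a), -abs(Fraction(1) / (2 * a)), disc)
    hi = SqrtExt(-b / (2 * a), abs(Fraction(1) / (2 * a)), disc)
    return lo, hi

lo, hi = roots_of_2(1, -3, 1)          # X^2 - 3X + 1
print(lo.to_float(), hi.to_float())    # approx. 0.381966 and 2.618034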
In Morii, Lim, Mukherjee, The Physics of the Standard Model and Beyond, 2004, ch. 8, they claim that the Peskin–Takeuchi oblique parameters S, T and U are in fact Wilson coefficients of certain dimension-6 operators. On page 212, they claim that the T parameter is described by $$O_T=(\phi^\dagger D_\mu \phi)(\phi^\dagger D^\mu \phi)-\frac{1}{3}(\phi^\dagger D_\mu D^\mu\phi)(\phi^\dagger\phi)\,,$$ and the S parameter by $$O_S=[\phi^\dagger(F_{\mu\nu}^i\sigma^i)\phi]B^{\mu\nu}\,,$$ where $\phi$ is the Higgs doublet, $F_{\mu\nu}^i$ and $\sigma^i$ are the SU(2) weak isospin field strength and sigma matrices respectively, and $B^{\mu\nu}$ is the U(1) weak hypercharge field strength. On the next page (p. 213), problem 8.6 asks us to show that these are the operators. How do I show precisely that these higher-dimension operators give the Peskin–Takeuchi parameters?
Introduction: Two days ago, while thinking about brain networks, it occurred to me that almost all simple graphs are small world networks in the sense that if $G_N$ is a simple graph with $N$ nodes sampled from the Erdös-Rényi random graph distribution with probability half then when $N$ is large: \begin{equation} \mathbb{E}[d(v_i,v_j)] \leq \log_2 N \end{equation} My strategy for proving this was to show that when $N$ is large, there exists a chain of distinct nodes of length $\log_2 N$ originating from $v_i$ almost surely. This implies that: \begin{equation} \forall v_i, v_j \in G_N, d(v_i,v_j) \leq \log_2 N \end{equation} almost surely when $N$ is large. Now, by using the above method of proof I managed to show that almost all simple graphs are very small in the sense that: \begin{equation} \mathbb{E}[d(v_i,v_j)] \leq \log_2\log_2 N \end{equation} when $N$ tends to infinity. We can actually do even better. Using my proof that almost all simple graphs are connected, I can show that almost all simple graphs have diameter 2. However, I think there is more value in going through my original proof which in my opinion provides greater insight into the problem.

Degrees of separation and the neighborhood of a node: We may think of degrees of separation as a sequence of 'hops' between the neighborhoods of distinct nodes. Given a node $v_i$ we may define its neighborhood $\mathcal{N}(v_i)$ as follows: \begin{equation} \mathcal{N}(v_i) = \{v_j \in G_N: \overline{v_i v_j} \in G_N \} \end{equation} where $G_N$ is a graph with $N$ nodes. Now, given the E-R model we can say that $v_i \neq v_j$ implies: \begin{equation} P(v_k \notin \mathcal{N}(v_i) \land v_k \notin \mathcal{N}(v_j)) = P(v_k \notin \mathcal{N}(v_i)) \cdot P(v_k \notin \mathcal{N}(v_j)) = \frac{1}{4} \end{equation} and by induction: \begin{equation} P(v_k \notin \mathcal{N}(v_{1}) \land … \land v_k \notin \mathcal{N}(v_{n})) = \frac{1}{2^n} \end{equation} It follows that if there is a chain of $n$ distinct nodes $\overline{v_1 \ldots v_n}$ we can say that: \begin{equation} P(d(v_1,v_k) \leq n) = 1- \frac{1}{2^n} \end{equation}

Almost all simple graphs are very small world networks: A chain of distinct nodes exists almost surely: The probability that there exists a chain of nodes of length $\log_2\log_2 N$: \begin{equation} \overline{v_1 … v_{\log_2\log_2 N}} \end{equation} such that consecutive nodes are adjacent is given by: \begin{equation} P(\overline{v_1 … v_{\log_2\log_2 N}} \in G_N) = \prod_{k=1}^{\log_2\log_2 N} \big(1-\frac{1}{2^{N-k}} \big) \geq \big(1- \frac{\log_2 N}{2^N}\big)^{\log_2\log_2 N} \end{equation} and we note that: \begin{equation} \lim\limits_{N \to \infty} \big(1- \frac{\log_2 N}{2^N}\big)^{\log_2\log_2 N} = 1 \end{equation} This guarantees the existence of a chain of distinct nodes of length $\log_2\log_2 N$ originating from any $v_1 \in G_N$ almost surely. Given that such a chain exists almost surely, we may deduce that $d(v_1,v_k) \leq \log_2\log_2 N$ almost surely: If $\overline{v_1 … v_{\log_2\log_2 N}}$ exists we have: \begin{equation} \forall \{v_i\}_{i=1}^n, v_k \in G_N, P(d(v_1,v_k) \leq \log_2\log_2 N) = 1 - \frac{1}{2^{\log_2\log_2 N}} \end{equation} and so we have: \begin{equation} \lim\limits_{N \to \infty} \forall \{v_i\}_{i=1}^n, v_k \in G_N, P(d(v_1,v_k) \leq \log_2\log_2 N) = 1 \end{equation}

Discussion: I must say that initially I found the above result quite surprising and I think it partially explains why small world networks frequently occur in nature. Granted, in natural settings the graph is typically embedded in some kind of Euclidean space so in addition to the degrees of separation we must consider the Euclidean distance. But, I suspect that in real-world networks with small world effects the Euclidean distance plays a marginal role.
In particular, I believe that wherever small-world networks prevail the Euclidean distance is dominated by ergodic dynamics between nodes. There is probably some kind of stochastic communication process between the nodes.
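As a quick empirical check of the "diameter 2" claim above (a simulation of my own, not part of the proof), one can sample Erdös-Rényi graphs with edge probability one half and test whether every pair of nodes is either adjacent or shares a neighbour. The sketch below does exactly that; the point at which almost every sample passes is already visible at modest $N$.

import itertools, random

def er_graph(N, p=0.5):
    # sample G(N, p) as a set of undirected edges
    return {frozenset(e) for e in itertools.combinations(range(N), 2) if random.random() < p}

def diameter_at_most_2(edges, N):
    adj = [set() for _ in range(N)]
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v); adj[v].add(u)
    # diameter <= 2 iff every pair is adjacent or has a common neighbour
    return all(v in adj[u] or adj[u] & adj[v]
               for u, v in itertools.combinations(range(N), 2))

for N in (10, 20, 40, 80):
    trials = 50
    good = sum(diameter_at_most_2(er_graph(N), N) for _ in range(trials))
    print(f"N = {N}: diameter <= 2 in {good}/{trials} samples")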
Quasirandomness

Introduction

Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density. Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit. These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.

Examples of quasirandomness definitions

Bipartite graphs

Let X and Y be two finite sets and let [math]f:X\times Y\rightarrow [-1,1].[/math] Then f is defined to be c-quasirandom if [math]\mathbb{E}_{x,x'\in X}\mathbb{E}_{y,y'\in Y}f(x,y)f(x,y')f(x',y)f(x',y')\leq c.[/math] Since the left-hand side is equal to [math]\mathbb{E}_{x,x'\in X}(\mathbb{E}_{y\in Y}f(x,y)f(x',y))^2,[/math] it is always non-negative, and the condition that it should be small implies that [math]\mathbb{E}_{y\in Y}f(x,y)f(x',y)[/math] is small for almost every pair [math]x,x'.[/math] If G is a bipartite graph with vertex sets X and Y and [math]\delta[/math] is the density of G, then we can define [math]f(x,y)[/math] to be [math]1-\delta[/math] if xy is an edge of G and [math]-\delta[/math] otherwise. We call f the balanced function of G, and we say that G is c-quasirandom if its balanced function is c-quasirandom. It can be shown that if H is any fixed graph and G is a large quasirandom graph, then the number of copies of H in G is approximately what it would be in a random graph of the same density as G.

Subsets of finite Abelian groups

If A is a subset of a finite Abelian group G and A has density [math]\delta,[/math] then we define the balanced function f of A by setting [math]f(x)=1-\delta[/math] when [math]x\in A[/math] and [math]f(x)=-\delta[/math] otherwise. Then A is c-quasirandom if and only if f is c-quasirandom, and f is defined to be c-quasirandom if [math]\mathbb{E}_{x,a,b\in G}f(x)f(x+a)f(x+b)f(x+a+b)\leq c.[/math] Again, we can prove positivity by observing that the left-hand side is a sum of squares.
In this case, it is [math]\mathbb{E}_{a\in G}(\mathbb{E}_{x\in G}f(x)f(x+a))^2.[/math] If G has odd order, then it can be shown that a quasirandom set A contains approximately the same number of triples [math](x,x+d,x+2d)[/math] as a random subset A of the same density. However, it is decidedly not the case that A must contain approximately the same number of arithmetic progressions of higher length (regardless of torsion assumptions on G). For that one must use "higher uniformity".

Hypergraphs

Subsets of grids

A function f from [math][n]^2[/math] to [-1,1] is c-quasirandom if the "sum over rectangles" is at most c. The sum over rectangles is [math]\mathbb{E}_{x,y,a,b}f(x,y)f(x+a,y)f(x,y+b)f(x+a,y+b)[/math]. Again, it is easy to show that this sum is non-negative by expressing it as a sum of squares. And again, one defines a subset [math]A\subset[n]^2[/math] to be c-quasirandom if it has a balanced function that is c-quasirandom. If A is a c-quasirandom set of density [math]\delta[/math] and c is sufficiently small, then A contains roughly the same number of corners as a random subset of [math][n][/math] of density [math]\delta.[/math]

A possible definition of quasirandom subsets of [math][3]^n[/math]

As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
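To make the "sum over rectangles" definition above a bit more tangible, here is a small sketch of my own. It estimates the expectation by random sampling rather than summing exactly, and wraps coordinates mod n for simplicity (both are my simplifications, not part of the definition): the balanced function of a random subset of the grid gives a small value, while a highly structured subset (a union of columns) stays bounded away from zero.

import random

n, samples = 60, 200_000

def balanced(A):
    delta = len(A) / (n * n)
    return lambda x, y: (1 - delta) if (x, y) in A else -delta

def rectangle_sum(f):
    total = 0.0
    for _ in range(samples):
        x, a = random.randrange(n), random.randrange(n)
        y, b = random.randrange(n), random.randrange(n)
        total += (f(x, y) * f((x + a) % n, y) *
                  f(x, (y + b) % n) * f((x + a) % n, (y + b) % n))
    return total / samples

random_set = {(x, y) for x in range(n) for y in range(n) if random.random() < 0.5}
column_set = {(x, y) for x in range(n) for y in range(n) if x % 2 == 0}

print("random set:    ", rectangle_sum(balanced(random_set)))   # typically much smaller
print("structured set:", rectangle_sum(balanced(column_set)))   # bounded away from 0 (about 1/16 here)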
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: The action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational...what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something. he based much of his success on principles like this I cant believe ive forgotten it it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely.
but that is a very good book regardless of you attending Princeton university or not yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned @TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college. long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even A proof of this uses (basically) Fourier analysis Even though it looks rather innocuous albeit surprising result in pure number theory @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. 
actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about @Daminark They key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ having no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps So it sort of makes sense Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. 
Indeed, one basically argues like the maximum value theorem in complex analysis @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
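As a footnote to the four-square theorem quoted earlier in this conversation, here is a small brute-force check of the statement (my own sketch, not from the chat): it counts ordered, signed quadruples with $a^2+b^2+c^2+d^2=n$ and compares against the divisor-sum formula for small $n$.

from itertools import product

def r4_bruteforce(n):
    # count ordered, signed quadruples (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n
    m = int(n**0.5)
    return sum(1 for a, b, c, d in product(range(-m, m + 1), repeat=4)
               if a*a + b*b + c*c + d*d == n)

def r4_formula(n):
    # 8 * (sum of divisors) for odd n, 24 * (sum of odd divisors) for even n
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 8 * sum(divs) if n % 2 else 24 * sum(d for d in divs if d % 2)

for n in range(1, 30):
    assert r4_bruteforce(n) == r4_formula(n), n
print("four-square count matches the divisor-sum formula for n = 1..29")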
I have been asked to work out the hardness of water in German degrees and this is what I did: I got a sample of water. I didn't measure how much I put into my volumetric flask; perhaps it was $\pu{8.5ml}$. I didn't think I would need that volume, but I do end up using it, which made me believe I was wrong. The sample was made up to $\pu{100ml}$. $\pu{10ml}$ of this dilution was used as the analyte along with buffer and the indicator. I titrated $\pu{9.07ml}$ of EDTA with a concentration of $\pu{0.0185mol/dm^3}$. So the calculation I did was: $$\begin{align} n(\ce{edta})&= (9.07\times0.0185)/1000= 0.000167795\\ n(\ce{Ca^2+}) &= n(\ce{edta})\end{align}$$ So then I find the mass of Ca: $$m(\ce{Ca^2+}) = 40\times0.000167795= \pu{0.0067118g}$$ Now my issue: So $\pu{0.0067118g}$ of Ca in the $\pu{10ml}$, therefore in the $\pu{100ml}$ there will be $10$ times that, as it was a dilution. So $\pu{0.067118g}$, which is $\pu{67.118mg}$. So there is $\pu{67.118mg}$ of Ca in my $\pu{8.5ml}$ sample. Now a $1^\circ\ \mathrm{dH}$ is $\pu{10mg}$ of Ca per $\pu{1000ml}$. So is there $\pu{7896.2mg}$ in $\pu{1000ml}$ of sample — therefore having $789.6^\circ\ \mathrm{dH}$ of hardness?
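For what it's worth, here is the same calculation written out as a short Python sketch. It follows the question's own conventions exactly (molar mass 40 g/mol and "1 °dH = 10 mg Ca per 1000 mL"), so only the arithmetic is being checked, not the chemistry conventions.

# hardness calculation as described in the question (assumed values follow the question)
V_edta_mL = 9.07
c_edta = 0.0185                  # mol/L
M_Ca = 40.0                      # g/mol, as used above (40.08 would be slightly more precise)
dilution_factor = 100 / 10       # 10 mL aliquot taken from the 100 mL flask
V_sample_mL = 8.5

n_edta = V_edta_mL / 1000 * c_edta          # mol EDTA = mol Ca2+ in the aliquot
m_Ca_mg = n_edta * M_Ca * 1000              # mg Ca in the 10 mL aliquot
m_Ca_total_mg = m_Ca_mg * dilution_factor   # mg Ca in the whole 8.5 mL sample
mg_per_L = m_Ca_total_mg * 1000 / V_sample_mL
hardness_dH = mg_per_L / 10                 # using the question's 10 mg Ca per litre per degree
print(f"{mg_per_L:.1f} mg Ca per litre -> {hardness_dH:.1f} degrees dH")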
It is a well-known fact of representation theory that, if the irreducible representations of a finite group $\DeclareMathOperator{\maj}{maj} \DeclareMathOperator{\sh}{sh} G$ are $V_1,\ldots,V_m$, and $R$ is the regular representation formed by $G$ acting on itself by left multiplication, then $$R=\bigoplus_{i=1}^{m} (\dim V_i) \cdot V_i$$ is its decomposition into irreducibles. I’ve recently discovered a $q$-analog of this fact for $G=S_n$ that is a simple consequence of some known results in symmetric function theory. In Enumerative Combinatorics, Stanley defines a generalization of the major index on permutations to standard tableaux. For a permutation $$w=w_1,\ldots,w_n$$ of $1,\ldots,n$, a descent is a position $i$ such that $w_i>w_{i+1}$. For instance, $52413$ has two descents, in positions $1$ and $3$. The major index of $w$, denoted $\maj(w)$, is the sum of the positions of the descents, in this case $$\maj(52413)=1+3=4.$$ To generalize this to standard Young tableaux, notice that $i$ is a descent of $w$ if and only if the location of $i$ occurs after $i+1$ in the inverse permutation $w^{-1}$. With this as an alternative notion of descent, we define a descent of a tableau $T$ to be a number $i$ for which $i+1$ occurs in a lower row than $i$. In fact, this is precisely a descent of the inverse of the reading word of $T$, the word formed by reading the rows of $T$ from left to right, starting from the bottom row. As an example, the tableau $T$ below has two descents, $2$ and $4$, since $3$ and $5$ occur in lower rows than $2$ and $4$ respectively: So $\maj(T)=2+4=6$. Note that its reading word $5367124$, and the inverse permutation is $5627134$, which correspondingly has descents in positions $2$ and $4$. (This is a slightly different approach to the major index than taken by Stanley, who used a reading word that read the columns from bottom to top, starting at the leftmost column. The descents remain the same in either case, since both reading words Schensted insert to give the same standard Young tableau.) Now, the major index for tableaux gives a remarkable specialization of the Schur functions $s_\lambda$. As shown in Stanley’s book, we have $$s_\lambda(1,q,q^2,q^3,\ldots)=\frac{\sum_{T} q^{\maj(T)}}{(1-q)(1-q^2)\cdots(1-q^n)}$$ where the sum is over all standard Young tableaux $T$ of shape $\lambda$. When I came across this fact, I was reminded of a similar specialization of the power sum symmetric functions. It is easy to see that $$p_\lambda(1,q,q^2,q^3,\ldots)=\prod_{i}\frac{1}{1-q^{\lambda_i}},$$ an identity that comes up in defining a $q$-analog of the Hall inner product in the theory of Hall-Littlewood symmetric functions. In any case, the power sum symmetric functions are related to the Schur functions via the irreducible characters $\chi_\mu$ of the symmetric group $S_n$, and so we get \begin{eqnarray*} p_\lambda(1,q,q^2,\ldots) &=& \sum_{|\mu|=n} \chi_{\mu}(\lambda) s_{\mu}(1,q,q^2,\ldots) \\ \prod_{i} \frac{1}{1-q^{\lambda_i}} &=& \sum_{\mu} \chi_{\mu}(\lambda) \frac{\sum_{T\text{ shape }\mu} q^{\maj(T)}}{(1-q)(1-q^2)\cdots(1-q^n)} \\ \end{eqnarray*} This can be simplified to the equation: \begin{equation} \sum_{|T|=n} \chi_{\sh(T)}(\lambda)q^{\maj(T)} = \frac{(1-q)(1-q^2)\cdots (1-q^n)}{(1-q^{\lambda_1})(1-q^{\lambda_2})\cdots(1-q^{\lambda_k})} \end{equation} where $\sh(T)$ denotes the shape of the tableau $T$. Notice that when we take $q\to 1$ above, the right hand side is $0$ unless $\lambda=(1^n)$ is the partition of $n$ into all $1$’s. 
If $\lambda$ is not this partition, setting $q=1$ yields $$\sum \chi_\mu(\lambda)\cdot f^{\mu}=0$$ where $f^\mu$ is the number of standard Young tableaux of shape $\mu$. Otherwise if $\lambda=(1^n)$, we obtain $$\sum \chi_\mu(\lambda)\cdot f^{\mu}=n!.$$ Recall also that $f^\mu$ (see e.g. Stanley or Sagan) is equal to the dimension of the irreducible representation $V_\mu$ of $S_n$. Thus, these two equations together are equivalent to the fact that, if $R$ is the regular representation, $$\chi_R=\sum_\mu (\dim V_\mu) \cdot \chi_{\mu}$$ which is in turn equivalent to the decomposition of $R$ into irreducibles. Therefore, Equation (1) is a $q$-analog of the decomposition of the regular representation. I’m not sure this is known, and I find it’s a rather pretty consequence of the Schur function specialization at powers of $q$. EDIT: It is known, as Steven Sam pointed out in the comments below, and it gives a formula for a graded character of a graded version of the regular representation.
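As a small sanity check of the descent and major-index conventions used above (my own sketch, not part of the post), the following computes maj for a tableau given by its rows and for a permutation, and reproduces the value 6 for the example with reading word 5367124.

def maj_permutation(w):
    # sum of positions i (1-indexed) with w[i] > w[i+1]
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def maj_tableau(rows):
    # i is a descent of T if i+1 sits in a strictly lower row than i
    row_of = {entry: r for r, row in enumerate(rows) for entry in row}
    n = len(row_of)
    return sum(i for i in range(1, n) if row_of[i + 1] > row_of[i])

print(maj_tableau([[1, 2, 4], [3, 6, 7], [5]]))   # 6, the tableau with reading word 5367124
print(maj_permutation([5, 6, 2, 7, 1, 3, 4]))     # 6, maj of the inverse of the reading word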
A lot is known about the use of groups -- they just really appear a lot, and appear naturally. Is there any known nice use of semigroups in mathematics that shows they are indeed important? I understand that it is a research question, but maybe somebody can hint at a direction to look in so that I can see the point of semigroups, if you see what I mean (so replies like "look at Wikipedia" do not work, as they are anti-answers). Semigroups provide a fundamental, algebraic tool in the analysis of regular languages and finite automata. This book chapter (pdf) by J-E Pin gives a brief overview of this area. Am slightly surprised no one has mentioned the Galvin-Glazer proof of Hindman's theorem via the existence of semigroup structure on $\beta{\mathbb N}$, the Stone-Cech compactification of the positive integers (see, for instance, part of this note by Hindman). The relevance to the original question is that knowing that ``compact right topological semigroups have idempotents'' may sound recondite, but it is just what was needed to answer Galvin's original question about translation-invariant ultrafilters, which was itself motivated by a "concrete" question in additive combinatorics. On a related note, while it is in general not possible to embed a locally compact group as a dense subgroup of something compact (the map from a group to its Bohr compactification need not be injective), you can always embed it densely into various semigroups equipped with topological structure that interacts with the semigroup action: there are various of these, perhaps the most common being the WAP-compactification and the LUC-compactification. Unfortunately this often says more about the complicated behaviour of compact semitopological semigroups (and their one-sided versions) than about anything true for all locally compact groups, but the compactifications are a useful resource in some problems in analysis, and the semigroup structure gives one some extra grip on how points in this compactification behave. (Disclaimer: this is rather off my own fields of core competence.) Victor, I don't understand your claim that $C^0$-semigroups aren't really semigroups. You are not free to decide for all the mathematical community what is a semigroup (I guess that you are interested only in discrete semigroups, aren't you?). $C^0$-semigroups are fundamental in PDEs (in probability too, as mentioned by Steinhurst). The reason is that a lot of evolution PDEs (basically all parabolic ones, like the heat equation, or Navier-Stokes) can be solved only forward but not backward. In linear PDEs, this is a consequence of the Uniform Boundedness Principle (= Banach-Steinhaus Theorem). There is a nice theory relating operators and semigroups, the former being the generator of the latter. In the linear case, a fundamental result is the Hille-Yosida Theorem. Subsequent tools are Duhamel's principle and Trotter's formula. A part of the theory extends to nonlinear semigroups. Edit. John B. expresses doubt about the fundamental character of semigroups, compared with the evolution equations from which they arise. Let me say that semigroups say much more, for the following reason. Evolutionary PDEs have classical solutions only when the initial data $u_0$ is smooth enough, typically when $u_0$ belongs to the so-called domain of the generator. This result can never be used to pass from a linear context to a non-linear one via Duhamel's principle.
In other words, in order to have a well-posed Cauchy problem in Hadamard's sense, we need to invent a notion of weaker solutions; this is where semigroup theory comes into play. An important application of semigroups and monoids is the algebraic theory of formal languages, like regular languages of finite and infinite words or trees (one could argue this is more theoretical computer science than mathematics, but essentially TCS is mathematics). For example, regular languages can be characterized using finite state automata, but can also be described by homomorphisms into finite monoids. The algebraic approach simplifies many proofs (like determinization of Buchi automata for infinite words or proving that FO = LTL) and gives deeper insight into the structure of languages. Semigroups of bounded $L^{2}$ operators are very important in probability. They in fact provide one of the main ways to show the very close connection between a self-adjoint operator and a `nice' Markov process (nice can be taken to mean strong Markov, cadlag, and quasi-left continuous). So how does one get this semigroup from a Markov process? If $X_t$ is your process, let $\mu_{t}(x,A)$ be the measure with mass $\le 1$ given by $P(X_t \in A | X_0=x)$. Then $\int f(y) \mu_t(x,dy) = T_tf(x)$ gives a semigroup of bounded $L^{2}$ operators. Why are such constructions important and natural? If $f$ is your initial distribution of something (heat, for example) and $X_t$ is Brownian motion, then $T_t$ acts by letting the heat distribution $f$ diffuse the way heat should. This then gives a nice way to connect PDE and probability theory. I'd end by offering that semigroups are important, in part, because they do arise in so many places and can bridge between disciplines. There are other reasons as well. Circuit complexity. See Straubing, Howard, Finite Automata, Formal Logic, and Circuit Complexity. Progress in Theoretical Computer Science. Birkhäuser Boston, Inc., Boston, MA, 1994. If you want a research problem relating circuit complexity with (finite) semigroups, there are many in the book and papers by Straubing and others. See also Eilenberg and Schutzenberger (that is in addition to Pin's book mentioned in another answer) - about connections between finite semigroups and regular languages and automata. (Commutative) semigroups and their analysis show up in the theory of misère combinatorial games. The "misère quotient" semigroup construction gives a natural generalization of the normal-play Sprague-Grundy theory to misère play which allows for complete analysis of (many) such games. (See http://miseregames.org/ for various papers and presentations.) Though you say that $C_0$ semigroups are not really semigroups, the structure of compact semitopological semigroups plays an important role in the investigation of their asymptotic behaviour. For example, Glicksberg-DeLeeuw type decompositions or Tauberian theorems are obtained this way; see Engel-Nagel: One-Parameter Semigroups for Linear Evolution Equations, Springer, 2000, Chapter V.2. Unary (1-variable) functions mapping a set X to itself form a semigroup under composition. Cayley's Theorem (one of them) says that every semigroup is isomorphic to one of this kind. Gerhard "Ask Me About System Design" Paseman, 2011.02.18 Given a group $G$, the Block Monoid $B(G)$ consists of sequences of elements in $G$ that sum to zero. So for example, an element of $B(\mathbb{Z})$ is $(-2,-3,1,1,3)$. The monoid operation is concatenation, and the empty block is the identity element.
Given a Dedekind domain, one can take its ideal class group, and consider the block monoid over that group. Note that in the obvious way, elements of the block monoid can be irreducible or not. One can study irreducible factorization in the Dedekind domain by studying irreducible factorization in the block monoid. Why has nobody explicitly mentioned one of the most natural examples: the symmetric semigroup of a set? This is the semigroup analogue of a permutation group (http://en.wikipedia.org/wiki/Transformation_semigroup). Toric varieties in Algebraic Geometry!! Indeed, the category of normal toric varieties is equivalent to the dual of the category of finitely generated, integral semigroups.
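As a concrete illustration of the "symmetric semigroup" (full transformation semigroup) mentioned in the answers above, here is a tiny Python check of my own, not from the thread: the 27 self-maps of a 3-element set are closed under composition and composition is associative; this is the ambient object in the semigroup version of Cayley's theorem.

from itertools import product

X = range(3)
maps = list(product(X, repeat=3))                 # a map f is stored as the tuple (f(0), f(1), f(2))
compose = lambda f, g: tuple(f[g[x]] for x in X)  # (f o g)(x) = f(g(x))

closed = all(compose(f, g) in set(maps) for f in maps for g in maps)
associative = all(compose(f, compose(g, h)) == compose(compose(f, g), h)
                  for f in maps for g in maps for h in maps)
print(len(maps), "elements; closed:", closed, "; associative:", associative)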
GR9677 #6 Alternate Solutions insertphyspun 2011-02-06 13:05:22 A way to rule out answers, although not the fastest strategy: A) Wrong. The limit as gives us no loss in energy, which is just weird. If you can't remember the answer, you would at least think it should be proportional to or independent of . B) Maybe. Keeping your logic from (A), this one stays in the running. C) Wrong. As $\theta \to 0$, the plane becomes a flat surface. The block will only move at a constant velocity if there is no dissipative force (since we know there is no applied force). This implies no energy loss. The limit for this answer gives us infinite energy loss, which is crazy. D) Wrong. The limit as $\theta \to 0$ at first gives us a reasonable answer (i.e. if the track is flat and there is no applied force, the block should not lose energy if it moves at a constant velocity). However, in order to move at a constant velocity on a flat surface in the absence of an applied force, there cannot be friction. The problem states that there is friction, so this doesn't add up. E) Wrong. We know there is a non-conservative force in the problem, therefore work is being done. There must be a change in energy. So, the answer is (B). This definitely takes longer than the energy arguments, but may be faster than considering a force diagram. 99percent 2008-11-06 06:25:06 An easy way... Since the speed of the block is constant, there is no gain in the kinetic energy of the block. Thus, potential energy lost by the block = Energy dissipated by friction = mgh. Bingo..!! Comments fredluis 2019-08-08 06:59:38 Well, the total resultant force on the car is in the radial direction. Obviously you have a drag force backwards. tree trimming enterprise 2018-04-01 14:07:45 mgh at the top must appear as a potential energy at the bottom. Because this isn't so, the energy must go somewhere as thermal energy due to friction. exenGT 2012-10-10 04:24:21 Consider the total energy of the block: At top: ; At bottom: (since block moves at constant speed) But total energy of the system (block+ramp) is conserved Therefore the energy dissipated by friction . exenGT 2012-10-10 04:28:37 Consider the total energy of the block: At top: ; At bottom: (since block moves at constant speed) But total energy of the system (block+ramp) is conserved; Therefore the energy dissipated by friction. (Sorry that my previous post contains several typos; please ignore that post.)
petr1243 2010-12-28 09:08:42 I did the problem a slightly different way: A frictional force is an example of a non-conservative force, so this is a non-isolated system: + = -(f_k)L Assuming that the particle starts and ends at rest: = mgh mgLsin(theta) = - (mu_k)mgcos(theta)L Coefficient of Kinetic Friction is: mu_k = -tan(theta) The work done by the frictional force is: W = f_k*dx W = -mgsin(theta)dx W=mgLsin(theta) = mgh Niko 2010-11-07 09:15:55 Yeah... just remember that when non-conservative forces are involved, the change in mechanical energy is equal to the work done by non-conservative forces (friction in this case). The key phrase here is "constant speed" which means constant KE. The change in mechanical energy (Umech = KE + PE with KE=const) is just mgh. A tricky one to get under pressure. 99percent 2008-11-06 06:25:06 An easy way... Since the speed of the block is constant, there is no gain in the kinetic energy of the block. Thus, potential energy lost by the block = Energy dissipated by friction = mgh. Bingo..!! Saint_Oliver 2013-09-23 08:36:52 Winner justin_l 2013-10-17 22:59:50 this is the best answer and is probably the answer they are looking for. Mechanical Energy before: mgh Mechanical Energy after: 0 where did it go? into friction. Ryry013 2019-05-26 15:30:53 Slightly more accurate would be: Energy before: ; Energy after: (same velocity throughout). Then, the mgh-->0 part all went into friction, as you all said. cyberdeathreaper 2006-12-28 19:42:46 Isn't it somewhat repetitive to solve for the coefficient of kinetic friction, to plug back in to solve for friction, to calculate work? Couldn't you solve for f from your x-coordinate forces (f=mgsin(theta)), and plug that straight into your work equation (W=fl)? microcentury 2010-07-12 07:40:41 Very true. W= F*l = mg sin(theta)*L. Since h= L*sin(theta), we have W=mgh daschaich 2005-11-07 23:22:21 There's an easier way to do it - conservation of energy. Since the speed of the block is constant, its kinetic energy is the same at the top and bottom of the ramp. Therefore all its gravitational potential energy (mgh) must have been dissipated by friction. Blake7 2007-09-22 08:04:31 Excellent observation; saves a tremendous amount of time with much less risk. dcan 2008-04-09 17:11:02 This seems so obvious after seeing the answer. It's hard to get out of the crunch mode. tau1777 2008-11-05 14:49:47 all i could say after reading this was: quite beautiful. this is an excellent solution, thank you for sharing.
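A small numeric cross-check of the force-diagram route discussed in these comments (my own sketch with arbitrary test values): the constant-speed condition forces mu = tan(theta), and then the friction work mu*m*g*cos(theta)*L with L = h/sin(theta) collapses to m*g*h, independent of the angle.

import math

m, g, h = 2.0, 9.8, 1.5          # arbitrary test values
for theta_deg in (10, 25, 45, 60):
    theta = math.radians(theta_deg)
    mu = math.tan(theta)                    # constant-speed (zero net force) condition
    L = h / math.sin(theta)                 # length of the incline
    W_friction = mu * m * g * math.cos(theta) * L
    print(f"theta = {theta_deg:2d} deg:  friction work = {W_friction:.3f} J  (m*g*h = {m*g*h:.3f} J)")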
I continue to be amazed at the multitude of different contexts in which the Schur functions naturally appear. In a previous post, I defined the Schur symmetric functions combinatorially, via the formula $$s_\lambda=\sum_{|\mu|=|\lambda|}K_{\lambda\mu} m_\mu$$ where $K_{\lambda\mu}$ is the number of semistandard Young tableaux of shape $\lambda$ and content $\mu$ and $m_\mu$ is the monomial symmetric function of shape $\mu$. I also defined them as the ratio $$s_\lambda=\frac{a_{\lambda+\delta}}{a_\delta}$$ where $a_\mu$ denotes the elementary antisymmetric function of shape $\mu$. And, in another post, I pointed out that the Frobenius map sends the irreducible characters of the symmetric group $S_n$ to the Schur functions $s_\lambda$. This can be taken as a definition of the Schur functions: The Schur functions $s_\lambda$, for $|\lambda|=n$, are the images of the irreducible representations of $S_n$ under the Frobenius map. Today, I'd like to introduce an equally natural representation-theoretic definition of the Schur functions: The Schur functions $s_\lambda$, for $\lambda$ having at most $n$ parts, are the characters of the irreducible polynomial representations of the general linear group $GL_n(\mathbb{C})$. I recently read about this in Fulton's book on Young Tableaux, while preparing to give a talk on symmetric function theory in this term's new seminar on Macdonald polynomials. Here is a summary of the basic ideas.
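As a small sanity check of the two definitions recalled above, here is a sketch of my own using sympy, with the choice $\lambda=(2,1)$ in three variables: it computes $s_{(2,1)}$ once as the bialternant ratio $a_{\lambda+\delta}/a_\delta$ and once as a sum over semistandard tableaux, and confirms the two agree.

import sympy as sp
from itertools import product

x = sp.symbols('x1 x2 x3')
lam, delta = (2, 1, 0), (2, 1, 0)       # lambda padded with zeros; delta = (n-1, ..., 1, 0)

def alt(exponents):
    # antisymmetrization a_mu = det(x_j^{mu_i})
    return sp.Matrix(3, 3, lambda i, j: x[j]**exponents[i]).det()

bialternant = sp.cancel(alt([l + d for l, d in zip(lam, delta)]) / alt(delta))

# brute-force sum over SSYT of shape (2,1) with entries in {1,2,3}:
# first row (a, b) weakly increasing, first column (a, c) strictly increasing
tableau_sum = 0
for a, b, c in product(range(1, 4), repeat=3):
    if a <= b and a < c:
        tableau_sum += x[a - 1] * x[b - 1] * x[c - 1]

print(sp.expand(bialternant - tableau_sum) == 0)   # expect True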
We analyzed the time complexity of different solutions for a given problem. Then, we proved several claims using the formal definition of Big O. Given a natural number $p$, how many right triangles exist with integer-sized sides whose perimeter is $p$? How many triplets $(a,b,c)$ are there such that $a < b < c$, $a + b + c = p$, and $a^2 + b^2 = c^2$? When computing code complexity, we go over all triplets of numbers in the relevant range and count only those where the above conditions hold. There are $p-1$ options for each variable $a,b,c$, so we need $(p-1)^3$ iterations and we can check each triplet in constant time. Complexity: $O(p^3)$

def num_of_triangles_v1(p):
    cnt = 0
    for a in range(1, p):
        for b in range(1, p):
            for c in range(1, p):
                if a < b and a + b + c == p and a**2 + b**2 == c**2:
                    cnt += 1
    return cnt

If we look at a pair of values for $a, b$, we can compute $c$ directly instead of going over all possible $c$ values in $\{1,2,\ldots,p-1\}$. We now use two nested loops instead of three (and omit the condition $a + b + c = p$). There are $(p-1)^2$ iterations, each one is again computed in constant time. Complexity: $O(p^2)$

def num_of_triangles_v2(p):
    cnt = 0
    for a in range(1, p):
        for b in range(1, p):
            c = p - a - b
            if a < b and a**2 + b**2 == c**2 and c > 0:
                cnt += 1
    return cnt

Since we require $a < b$, in each iteration of the outer loop for $a$ we can define a lower bound for $b$, i.e. in each iteration we need to check $b$ values between $a+1,\ldots, p-1$ instead of $1,\ldots,p-1$. The loops are now dependent and, therefore, to compute the number of atomic operations, we take a sum over $a$ of the number of $b$ values tested. In the $i$th iteration of $a$ (that is, for $a=i$) we need to check $p - 1 - i = p - (1 + i)$ values of $b$, so the total number of iterations is: $$\sum_{a = 1}^{p-1} (p - (1 + a)) = \sum_{i = 0}^{p-2} i = \frac{(p-1)(p-2)}{2} $$ Complexity: $O(p^2)$.

def num_of_triangles_v3(p):
    cnt = 0
    for a in range(1, p):
        for b in range(a + 1, p):
            c = p - a - b
            if a**2 + b**2 == c**2 and c > 0:
                cnt += 1
    return cnt

We now further improve our efficiency by defining upper bounds for $a,b$. We use the same strategy as in v3 to count operations. First of all, since $a < b < c$ and $a + b + c = p$, clearly $a < p/3$. That is, the maximal possible value of $a$ is $p//3$ (we will iterate till $p//3 + 1$ in order to include $p//3$ in the range). Next, $b = p-a-c < p-a-b \implies 2b < (p-a)$. Thus: $b < (p-a)/2$. That is, the maximal possible value of $b$ is $(p-a)//2$ (again, we iterate till $(p-a)//2 + 1$ in order to include $(p-a)//2$ in the range). The number of iterations is therefore: $$\sum_{a = 1}^{p/3} \left(\left\lfloor\frac{p-a}{2}\right\rfloor + 1 - (a+1)\right)\approx \sum_{a = 1}^{p/3} \frac{p}{2} - \frac{a}{2} - a =\frac{p^2}{6} - \frac{3}{2}\sum_{a = 1}^{p/3}a = \frac{p^2}{6} - \frac{3}{2}(p/3 + 1)\cdot \frac{p}{6} = \frac{p^2}{6} - \frac{p^2}{12} - \frac{p}{4} = \frac{p^2}{12} - \frac{p}{4}$$ Complexity: $O(p^2)$.

def num_of_triangles_v4(p):
    cnt = 0
    for a in range(1, p//3 + 1):
        for b in range(a + 1, (p - a)//2 + 1):
            c = p - a - b
            if a**2 + b**2 == c**2:
                cnt += 1
    return cnt

We realize we have two equations in three variables, therefore there's only a single free parameter here. $a+b+c=p \implies c = p-a-b$. Substitute $c$ with $p-a-b$ in $a^2+b^2=c^2$ to get $a^2 + b^2 = (p -a -b)^2$, open and isolate $b$ to get $b = \frac{p^2-2ap}{2(p-a)}$. So what do we need to do? We loop only over $a$, but need to make sure that the resulting $b$ is integral, and that $a < b$.
Note that we do not have to calculate $c$ here. The number of iterations is $p/3$ and in each iteration we still do only a constant number of operations! Complexity: $O(p)$

def num_of_triangles_v5(p):
    cnt = 0
    for a in range(1, p//3 + 1):
        b = (p**2 - 2*p*a)/(2*(p - a))
        if int(b) == b and a < b:
            cnt += 1
    return cnt

We compare the change in running time of each of the functions as $p$ increases twofold. As expected, when $p$ increases by a factor of 2 and for large enough $p$ (so that asymptotic computations are valid): the cubical version ($O(p^3)$) runs in time which is roughly $2^3 = 8$ times longer, the quadratic versions ($O(p^2)$) run in time which is roughly $2^2 = 4$ times longer, and the linear version ($O(p)$) runs in time which is roughly $2$ times longer.

import time

def elapsed(expression, number = 1):
    ''' computes elapsed time for executing code number of times (default is 1 time).
        expression should be a string representing a Python expression. '''
    t1 = time.perf_counter()
    for i in range(number):
        x = eval(expression)
    t2 = time.perf_counter()
    return t2 - t1

print("v1, p = 240 took", elapsed("num_of_triangles_v1(240)"), "secs")
print("v2, p = 240 took", elapsed("num_of_triangles_v2(240)"), "secs")
print("v3, p = 240 took", elapsed("num_of_triangles_v3(240)"), "secs")
print("v4, p = 240 took", elapsed("num_of_triangles_v4(240)"), "secs")
print("v5, p = 240 took", elapsed("num_of_triangles_v5(240)"), "secs")
print("")
print("v1, p = 480 took", elapsed("num_of_triangles_v1(480)"), "secs")
print("v2, p = 480 took", elapsed("num_of_triangles_v2(480)"), "secs")
print("v3, p = 480 took", elapsed("num_of_triangles_v3(480)"), "secs")
print("v4, p = 480 took", elapsed("num_of_triangles_v4(480)"), "secs")
print("v5, p = 480 took", elapsed("num_of_triangles_v5(480)"), "secs")
print("")
print("Since v5 is too fast for some machines to time, we check it for input size which is x100 and x200 bigger:")
print("v5, p = 24000 took", elapsed("num_of_triangles_v5(24000)"), "secs")
print("v5, p = 48000 took", elapsed("num_of_triangles_v5(48000)"), "secs")

v1, p = 240 took 2.0098203190000277 secs
v2, p = 240 took 0.07364331099995525 secs
v3, p = 240 took 0.055745446999935666 secs
v4, p = 240 took 0.013294021999968209 secs
v5, p = 240 took 0.00012039499995353253 secs

v1, p = 480 took 17.873038491999978 secs
v2, p = 480 took 0.337060590999954 secs
v3, p = 480 took 0.3156459970000469 secs
v4, p = 480 took 0.044534067999961735 secs
v5, p = 480 took 0.00019973799999206676 secs

Since v5 is too fast for some machines to time, we check it for input size which is x100 and x200 bigger:
v5, p = 24000 took 0.01722720300006131 secs
v5, p = 48000 took 0.03454993299999387 secs

Given two functions $f(n)$ and $g(n)$, $f(n) = O(g(n))$ if and only if there exist $c > 0$ and $n_{0}\in \mathbb{R}$ such that $\forall n>n_0$, $|f(n)| \leq c\cdot|g(n)|$. When $f,g$ are positive, an equivalent condition which is sometimes easier to check is that $$f(n) = O(g(n)) \iff \limsup_{n \to \infty} \frac{f(n)}{g(n)} < \infty$$ Let $n$ denote the size of the input and $c$ denote a constant. The most common time complexities we encounter are: See also this list on Wikipedia. Transitivity: Non-symmetry:
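One small caveat about num_of_triangles_v5 above: the test int(b) == b is done in floating point, which is fine for the sizes timed here but could in principle misbehave for very large p. Here is a variant of my own that keeps the O(p) running time while using exact integer arithmetic for the divisibility test.

def num_of_triangles_v5_int(p):
    cnt = 0
    for a in range(1, p//3 + 1):
        num = p*p - 2*p*a          # b = num / den must be a positive integer
        den = 2*(p - a)
        if num % den == 0 and a < num // den:
            cnt += 1
    return cnt

print(num_of_triangles_v5_int(12))   # 1, the (3, 4, 5) triangle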
This library is a customizable 2D math rendering tool for calculators. It can be used to render 2D formulae, either from an existing structure or TeX syntax. \frac{x^7 \left[X,Y\right] + 3\left|\frac{A}{B}\right>} {\left\{\frac{a_k+b_k}{k!}\right\}^5}+ \int_a^b \frac{\left(b-t\right)^{n+1}}{n!} dt+ \left(\begin{matrix} \frac{1}{2} & 5 \\ -1 & a+b \end{matrix}\right) List of currently supported elements: fractions (\frac), subscripts and superscripts (_ and ^), delimiters (\left and \right), big operators (\sum, \prod and \int), vectors (\vec) and limits (\lim), roots (\sqrt), and matrices (\begin{matrix} ... \end{matrix}). Features that are partially implemented (and what is left to finish them): see the TODO.md file for more features to come. First specify the platform you want to use: cli is for command-line tests, with no visualization (PC); sdl2 is an SDL interface with visualization (PC); fx9860g builds the library for fx-9860G targets (calculator); fxcg50 builds the library for fx-CG 50 targets (calculator). For calculator platforms, you can use --toolchain to specify a different toolchain than the default sh3eb and sh4eb. The install directory of the library is guessed by asking the compiler; you can override it with --prefix. Example for an SDL setup: % ./configure --platform=sdl2 Then you can make the program, and if it's a calculator library, install it. You can later delete Makefile.cfg to reset the configuration, or just reconfigure as needed. % make % make install # fx9860g and fxcg50 only Before using the library in a program, a configuration step is needed. The library does not have drawing functions and instead requires that you provide some, namely: a pixel-drawing function (TeX_intf_pixel), a line-drawing function (TeX_intf_line), a text-size function (TeX_intf_size), and a text-drawing function (TeX_intf_text). The three rendering functions are available in fxlib; for monospaced fonts the fourth can be implemented trivially. In gint, the four can be defined as wrappers for dpixel(), dline(), dsize() and dtext(). The type of formulae is TeX_Env. To parse and compute the size of a formula, use the TeX_parse() function, which returns a new formula object (or NULL if a critical error occurs). The second parameter display is set to non-zero to use display mode (similar to \[ .. \] in LaTeX) or zero to use inline mode (similar to $ .. $ in LaTeX). char *code = "\\frac{x_7}{\\left\\{\\frac{\\frac{2}{3}}{27}\\right\\}^2}"; struct TeX_Env *formula = TeX_parse(code, 1); The size of the formula can be queried through formula->width and formula->height. To render, specify the location of the top-left corner and the drawing color (which will be passed to all primitives): TeX_draw(formula, 0, 0, BLACK); The same formula can be drawn several times. When it is no longer needed, free it with TeX_free(): TeX_free(formula);
Note: There was an error in the reverse formula (and maybe also with some of the values given). Frank pointed this out (see comments section below). In analytical chemistry, linear regression or a linear function is a common (maybe the most common) tool to describe the relationship between a measured signal and the concentration of an analyte. Even if the relationship is much more complex, one usually works in small ranges only, where the assumption of linearity is convenient. However, there are analytical problems which cannot be solved with this simple approach. In this short article I want to introduce and present another useful function for data evaluation on the basis of a real example. The following plot shows the response (fluorescence emission at a certain wavelength) of a pH sensor foil to different pH values. The foil consists of a pH-sensitive dye in a hydrogel matrix. As can be seen from the figure above, the response is not linear except for a small area around pH 6. On the contrary, the curve approaches asymptotic values for higher and lower pH values. This effect is easily explainable. Since the receptor is a pH-sensitive dye, its fluorescence depends on the H+ concentration. The fewer protons are bound to the receptor, the higher the fluorescence intensity. Obviously, there is a limit in both directions. For very high proton concentrations (i.e. low pH values!), all available binding sites on the dye are blocked. Therefore, lowering the pH further won't have any additional effect. On the other hand, for low proton concentrations (i.e. high pH values!) all possibly bound protons are already removed from the dye and no further protons can be removed. So, the fluorescence intensity remains constant. Now, how can we fit this sensor output? We could extract just the linear area and fit this with a linear equation. However, we don't know which points actually belong to the linear area and, therefore, it would be possible that we choose too many or too few points. Also, we don't want to exclude the two bending areas, since there is – even if the sensitivity is lower than in the linear region – still analytical information in these areas. So, we need a function which fits this S-shaped curve. Such a function is called a 'sigmoid' function. The name is derived from 'sigma' (= the letter S) and '-oid' (-like or -shaped). There are many forms of sigmoid functions [1]. A common example is the logistic function [2]: \(f(x) = \frac{1}{1 + e^{-x}}\) In our case the x is the pH value and f(x) is the fluorescence intensity F as a function of pH. If depicted, the function with the formula above will look similar to the one in Figure 2. This form of the function returns values between 0 and 1 for a given x. Hence, it is not suitable for our needs and we have to modify it a little bit. First, let us introduce some parameters: \(f(x) = \frac{a}{b + c \cdot e^{- d \cdot (x - f)}} + t\) With these parameters we can control the position, form, shape, and range of the function. One can play around with the parameters in an online function plotter [3] to get a feeling for the behaviour of the function and in order to visualize the individual effect of each parameter. All we need to do now is to find the appropriate parameter values for our problem from above. Of course, this is a bit more complicated than finding the fit for a straight line. Usually, one uses software tools, such as Origin [4], for this purpose.
If we do so, the resulting function with all parameters has the following form: Note: I replaced the parameters with values which I know work. However, keep in mind that there is always more than one good solution. Always double check your results! \(F(pH) = \frac{1.31}{0.21 + 2.96 \cdot e^{- 1.69 \cdot (pH - 4.23)}} + 0.04\) Now we know all parameters (i.e. we calibrated the sensor foil) and can use the sensor foil to measure some pH. However, since we actually measure the fluorescence intensity, we need a method to calculate the pH from this intensity. Therefore, we solve the equation for pH to get the reverse function: Note: I replaced this formula now with one which is correct (see comments of Frank below). Originally, I used Matlab because I wanted to make sure this is correct. Maybe I copied it wrong or mixed things up. Mmh. Should have just trusted my good old brain a little more! \(\begin{aligned} pH(F) &= -\ln\left( \frac{a - b \cdot (F - t)}{ c \cdot (F - t)} \right) \cdot \frac{1}{d} + f \\ &= -0.59 \cdot \ln\left( \frac{1.31 - 0.21 \cdot (F - 0.04)}{ 2.96 \cdot (F - 0.04)} \right) + 4.23 \\ &= -0.59 \cdot F' + 4.23 \end{aligned} \) Although the function looks complicated, it allows us to calculate the pH value for the respective fluorescence intensity. It should be noted that the reverse function is – of course – limited in its domain, which is obvious if it is plotted. Since the ln term in the above formula contains only constants and F as a variable, it can be substituted with a new variable F', which makes the formula look like an ordinary linear function. Indeed, the relationship between F' and pH is linear (but not the relationship between F and pH!). Substitution is not only good for presenting formulas but also for entering them in calculation programs such as Excel or OpenOffice Calc. I have observed many people having problems with entering the whole thing at once – debugging and finding typos is horrible! The following figure shows both the fitted function as well as the reverse function. Finally, we can use the sensor foil to measure the pH of unknown solutions by dropping the solution onto the foil (or putting the foil into the solution), reading out the fluorescence intensity, and calculating the corresponding pH value. That's it!

References
[1] "Sigmoid function", Wikipedia, 2018. http://en.wikipedia.org/wiki/Sigmoid_function
[2] "Logistic function", Wikipedia, 2018. http://en.wikipedia.org/wiki/Logistic_function
[3] "Function Grapher Online", 2011. http://www.walterzorn.de/en/grapher/grapher_e.htm
[4] "OriginLab - Origin and OriginPro - Data Analysis and Graphing Software". http://www.originlab.com/
I have just been playing with this and thought my solution just might help somebody else at some point. I wanted the following: table notes i.e. notes at the bottom of the tabular, within the table environment - not at the bottom of the page; automatic numbering of notes within the list of notes; automatic numbering of note markers within the table itself; numbering with small letters, to avoid any confusion with the Arabic numerals used to number footnotes and in the table and text to track content; note markers in the list of notes to be left aligned with text in the first column of the tabular. My solution involves an unholy mixture of threeparttablex with option referable: this manages the automatic numbering of the note markers, on the basis of labels inserted into the list of notes; enumitem: to customise the list of notes. This is a bit complex in terms of number of cooks responsible for the broth. To say that enumitem is used to 'customise' the list is a bit misleading. Essentially, my solution redefines it. More specifically, threeparttable provides tablenotes. threeparttablex redefines it and provides \tnotex{} and some other enhancements. enumitem is then used to redefine tablenotes again. Caveat emptor... Anyway, for what it is worth: \documentclass{article} \usepackage{enumitem,booktabs,cfr-lm} \usepackage[referable]{threeparttablex} \renewlist{tablenotes}{enumerate}{1} \makeatletter \setlist[tablenotes]{label=\tnote{\alph*},ref=\alph*,itemsep=\z@,topsep=\z@skip,partopsep=\z@skip,parsep=\z@,itemindent=\z@,labelindent=\tabcolsep,labelsep=.2em,leftmargin=*,align=left,before={\footnotesize}} \makeatother \begin{document} \begin{table} \centering\tlstyle \begin{threeparttable} \begin{tabular}{lcccc} \toprule & \multicolumn{4}{c}{Great Value}\\ \cmidrule(lr){2-5} Option & Robot 1 & Robot 2 & Robot 3 & Total\\ \midrule Develop Robot 1 brilliant eye\tnotex{tnote:robots-r1} & 5 & 78 & 54 & 56\\ Develop Robot 2 extended ears\tnotex{tnote:robots-r2} & 24 & 87 & 42 & 23\\ Develop Robot 3 brilliant eye\tnotex{tnote:robots-r3} & 0.5 & $\pi$ & 61 & $<19.3$\\ \bottomrule \end{tabular} \begin{tablenotes} \item\label{tnote:robots-r1}That is, $360^\circ$ vision, as proposed by Noddy Norris. \item\label{tnote:robots-r2}As recommended by \emph{Robot Review}. \item\label{tnote:robots-r3}That is, X-Ray vision, as proposed by \emph{Mechanical Maniacs}. \end{tablenotes} \end{threeparttable} \caption{\label{tab:robots}Total values of Jim's technological options for robot projects he thinks possible.} \end{table} \end{document}
Consider a white Gaussian noise signal $ x \left( t \right) $. If we sample this signal and compute the discrete Fourier transform, what are the statistics of the resulting Fourier amplitudes? Math tools We can do the calculation using some basic elements of probability theory and Fourier analysis. There are three elements (we denote the probability density of a random variable $X$ at value $x$ as $P_X(x)$): Given a random variable $X$ with distribution $P_X(x)$, the distribution of the scaled variable $Y = aX$ is $P_Y(y) = (1/a)P_X(y/a)$. The probability distribution of a sum of two random variables is equal to the convolution of the probability distributions of the summands. In other words, if $Z = X + Y$ then $P_Z(z) = (P_X \otimes P_Y)(z)$ where $\otimes$ indicates convolution. The Fourier transform of the convolution of two functions is equal to the product of the Fourier transforms of those two functions. In other words: $$\int dx \, (f \otimes g)(x) e^{-i k x} = \left( \int dx \, f(x) e^{-ikx} \right) \left( \int dx \, g(x) e^{-ikx} \right) \, . $$ Calculation Denote the random process as $x(t)$. Discrete sampling produces a sequence of values $x_n$ which we assume to be statistically uncorrelated. We also assume that for each $n$ $x_n$ is Gaussian distributed with standard deviation $\sigma$. We denote the Gaussian function with standard deviation $\sigma$ by the symbol $G_\sigma$ so we would say that $P_{x_n}(x) = G_{\sigma}(x)$. The discrete Fourier transform amplitudes are defined as $$X_k \equiv \sum_{n=0}^{N-1} x_n e^{-i 2 \pi n k /N} \, .$$ Focusing for now on just the real part we have $$\Re X_k = \sum_{n=0}^{N-1} x_n \cos(2 \pi n k /N) \, .$$ This is just a sum, so by rule #2 the probability distribution of $\Re X_k$ is equal to the multiple convolution of the probability distributions of the terms being summed. We rewrite the sum as $$\Re X_k = \sum_{n=0}^{N-1} y_n$$ where $$y_n \equiv x_n \cos(2\pi n k / N) \, .$$ The cosine factor is a deterministic scale factor. We know that the distribution of $x_n$ is $G_\sigma$ so we can use rule #1 from above to write the distribution of $y_n$: $$P_{y_n}(y) = \frac{1}{\cos(2 \pi n k / N)}G_\sigma \left( \frac{y}{\cos(2 \pi n k / N)} \right) = G_{\sigma c_{n,k}}(y)$$ where for brevity of notation we've defined $c_{n,k} \equiv \cos(2\pi n k / N)$. Therefore, the distribution of $\Re X_k$ is the multiple convolution over the functions $G_{\sigma c_{n,k}}$: $$P_{\Re X_k}(x) = \left( G_{\sigma c_{0,k}} \otimes G_{\sigma c_{1,k}} \otimes \cdots \right)(x) \, .$$ It's not obvious how to do the multiple convolution, but using rule #3 it's easy. Denoting the Fourier transform of a function by $\mathcal{F}$ we have $$\mathcal{F}(P_{\Re X_k}) = \prod_{n=0}^{N-1} \mathcal{F}(G_{\sigma c_{n,k}}) \, .$$ The Fourier transform of a Gaussian with width $\sigma$ is another Gaussian with width $1/\sigma$, so we get \begin{align} \mathcal{F}(P_{\Re X_k})(\nu) &= \prod_{n=0}^{N-1} G_{1/\sigma c_{n,k}}\\ &= \prod_{n=0}^{N-1} \sqrt{\frac{\sigma^2 c_{n,k}^2}{2\pi}} \exp \left[ \frac{-\nu^2}{2 (1 / \sigma^2 c_{n,k}^2)}\right] \\ &= \left( \frac{\sigma^2}{2\pi} \right)^{N/2} \left(\prod_{n=0}^{N-1}c_{n,k} \right) \exp \left[ -\frac{\nu^2}{2} \sigma^2 \sum_{n=0}^{N-1} \cos(2\pi nk/N)^2 \right] \, . \end{align} All of the stuff preceding the exponential are independent of $\nu$ and are therefore normalization factors, so we ignore them. 
The sum is just $N/2$ (for $k \neq 0$ and, when $N$ is even, $k \neq N/2$; at those indices the sum is $N$ instead), so we get \begin{align} \mathcal{F}(P_{\Re X_k}) &\propto \exp \left[ - \frac{\nu^2}{2} \sigma^2 \frac{N}{2}\right]\\ &= G_{\sqrt{2 / \sigma^2 N}} \end{align} and therefore $$P_{\Re X_k} = G_{\sigma \sqrt{N/2}} \, .$$ We have therefore computed the probability distribution of the real part of the Fourier coefficient $X_k$. It is Gaussian distributed with standard deviation $\sigma \sqrt{N/2}$. Note that the distribution is independent of the frequency index $k$, which makes sense for uncorrelated noise. By symmetry the imaginary part should be distributed exactly the same. Intuitively, we expect that adding more integration should reduce the width of the resulting noise distribution. However, we found that the standard deviation of the distribution of $X_k$ grows as $\sqrt{N}$. This is just due to our choice of normalization of the discrete Fourier transform. If we had instead normalized it like this $$X_k = \frac{1}{N} \sum_{n=0}^{N-1} x_n e^{-i 2 \pi n k /N}$$ then we would have found $$P_{\Re X_k} = G_{\sigma / \sqrt{2N}}$$ which agrees with the intuition that the noise distribution gets smaller as we add more data. With this normalization, a coherent signal would demodulate to a fixed amplitude phasor, so we recover the usual relation that the ratio of the signal to noise amplitudes scales as $\sqrt{N}$. I would like to give another take on @DanielSank's answer. We first suppose that $v_{n} \sim \mathcal{CN}(0, \sigma^{2})$ and is i.i.d. Its Discrete Fourier Transform is then: $$ V_{k} = \frac{1}{N} \sum_{n=0}^{N-1} v_{n} e^{-j 2 \pi \frac{n}{N} k} \, .$$ We want to calculate the distribution of $V_{k}$. To start, we note that since $v_{n}$ is white Gaussian noise, it is circularly symmetric, so the real and imaginary parts of its Fourier Transform will be distributed the same. Therefore, we only need to calculate the distribution of the real part and then combine it with the imaginary part. So we separate $V_{k}$ into its real and imaginary parts. We have: $$ V_{k} = \frac{1}{N} \sum_{n=0}^{N-1} v_{n} e^{-j 2 \pi \frac{n}{N} k} $$ $$ V_{k} = \frac{1}{N} \sum_{n=0}^{N-1} \big( R\{v_{n}\} + j I\{v_{n} \} \big) \cdot \big( \cos(2 \pi \tfrac{n}{N} k) - j \sin(2 \pi \tfrac{n}{N} k) \big)$$ $$ V_{k} = R\{V_{k}\}_{1} + R\{V_{k}\}_{2} + jI\{V_{k}\}_{1} + jI\{V_{k}\}_{2} $$ $$ V_{k} = R\{V_{k}\} + jI\{V_{k}\}$$ Where: $$ R\{V_{k}\} = R\{V_{k}\}_{1} + R\{V_{k}\}_{2} $$ $$ I\{V_{k}\} = I\{V_{k}\}_{1} + I\{V_{k}\}_{2} $$ And: $$ R\{V_{k}\}_{1} = \frac{1}{N} \sum_{n=0}^{N-1} R\{ v_{n} \} \cos(2 \pi \tfrac{n}{N} k) $$ $$ R\{V_{k}\}_{2} = \frac{1}{N} \sum_{n=0}^{N-1} I\{ v_{n} \} \sin(2 \pi \tfrac{n}{N} k) $$ $$ I \{V_{k}\}_{1} = - \frac{1}{N} \sum_{n=0}^{N-1} R\{ v_{n} \} \sin(2 \pi \tfrac{n}{N} k) $$ $$ I\{V_{k}\}_{2} = \frac{1}{N} \sum_{n=0}^{N-1} I\{ v_{n} \} \cos(2 \pi \tfrac{n}{N} k) $$ Now we work on deriving the distribution of $R\{V_{k}\}_{1}$ and $R\{V_{k}\}_{2}$. As in @DanielSank's answer, we define: $$ x_{n,k} = \frac{1}{N} \cos(2 \pi \tfrac{n}{N} k) R\{v_{n}\} = \frac{1}{N} c_{n,k} R\{v_{n}\}$$ Thus we can write: $$ R\{ V_{k} \}_{1} = \sum_{n=0}^{N-1} x_{n,k} $$ This allows us to easily apply the following facts about linear combinations of Gaussian random variables. Namely, we know that: When $x \sim \mathcal{CN}(0, \sigma^{2})$ then $R\{ x \} \sim \mathcal{N}(0, \frac{1}{2} \sigma^{2})$ When $x \sim \mathcal{N}(\mu, \sigma^{2})$ then $cx \sim \mathcal{N}(c \mu, c^{2} \sigma^{2})$ Together, these imply that $x_{n,k} \sim \mathcal{N}(0, \frac{c^{2}_{n,k}}{2N^{2}} \sigma^{2})$.
Now we work on the sum. We know that: When $x_{n} \sim \mathcal{N}(\mu_{n}, \sigma^{2}_{n})$ (independently) then $y = \sum_{n=0}^{N-1} x_{n} \sim \mathcal{N}(\sum_{n=0}^{N-1} \mu_{n}, \sum_{n=0}^{N-1} \sigma^{2}_{n}) $ $\sum_{n=0}^{N-1} c^{2}_{n,k} = \frac{N}{2}$ These imply that: $$ R\{V_{k}\}_{1} \sim \mathcal{N}\left(0, \sum_{n=0}^{N-1} \frac{c_{n,k}^{2}}{2N^{2}} \sigma^{2}\right) = \mathcal{N}\left(0 , \frac{N/2}{2N^{2}} \sigma^{2}\right) = \mathcal{N}\left(0, \frac{\sigma^{2}}{4N}\right)$$ So we have shown that: $$ R\{V_{k}\}_{1} \sim \mathcal{N}(0, \frac{\sigma^{2}}{4N}) $$ Now we apply the same argument to $R\{V_{k}\}_{2}$. Abusing our notation, we rewrite: $$ x_{n,k} = \frac{1}{N} \sin(2 \pi \tfrac{n}{N} k) I\{v_{n}\} = \frac{1}{N} s_{n,k} I\{v_{n}\}$$ Repeating the same argument (the Gaussian is a symmetric distribution, so any overall sign in front of these sums, such as the one in $I\{V_{k}\}_{1}$, can be ignored) gives us: $$ R\{V_{k}\}_{2} \sim \mathcal{N}(0, \frac{\sigma^{2}}{4N}) $$ since $\sum_{n=0}^{N-1} s^{2}_{n,k} = \frac{N}{2}$ as well. Therefore, since $R\{V_{k}\} = R\{V_{k}\}_{1} + R\{V_{k}\}_{2}$, we get: $$ R\{V_{k}\} \sim \mathcal{N}(0, \frac{\sigma^{2}}{4N} + \frac{\sigma^{2}}{4N}) = \mathcal{N}(0, \frac{\sigma^{2}}{2N}) $$ So we have shown that: $$ R\{V_{k}\} \sim \mathcal{N}(0, \frac{\sigma^{2}}{2N}) $$ By circular symmetry, we also know then that: $$ I\{V_{k}\} \sim \mathcal{N}(0, \frac{\sigma^{2}}{2N}) $$ So since $V_{k} = R\{V_{k}\} + jI\{V_{k}\}$, we finally arrive at: $$ V_{k} \sim \mathcal{CN}(0, \frac{\sigma^{2}}{N}) $$ Therefore taking the DFT divides the variance by the length of the DFT window -- assuming the window is rectangular of course -- which is the same result as in @DanielSank's answer.
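Both results are easy to confirm numerically. The following NumPy sketch is only a Monte-Carlo sanity check with arbitrary parameter values, not a substitute for the derivations above.

```python
# Quick Monte-Carlo check of the two results above (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
sigma, N, trials, k = 2.0, 256, 20000, 7   # pick k != 0 and k != N/2

# Result 1: real white noise, unnormalised DFT -> Var(Re X_k) = sigma^2 * N / 2
x = rng.normal(0.0, sigma, size=(trials, N))
Xk = np.fft.fft(x, axis=1)[:, k]
print(np.var(Xk.real), sigma**2 * N / 2)

# Result 2: circularly-symmetric complex noise CN(0, sigma^2), DFT scaled by 1/N
#           -> Var(Re V_k) = sigma^2 / (2N), i.e. Var(V_k) = sigma^2 / N
v = (rng.normal(0, sigma / np.sqrt(2), (trials, N))
     + 1j * rng.normal(0, sigma / np.sqrt(2), (trials, N)))
Vk = np.fft.fft(v, axis=1)[:, k] / N
print(np.var(Vk.real), sigma**2 / (2 * N))
```

The empirical variances match the predicted values to within the usual Monte-Carlo fluctuations.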
Deadline for submission of results: October 21, 2019, 11:59PM PST (extended from October 14). Presentation of awards: October 28, 2019 (at the ICCV 2019 workshop). The challenge is on the task of 6D localization of a varying number of instances of a varying number of objects (the ViVo task), defined as follows: Training Input: At training time, method $M$ learns using a training set, $T = \{T_o\}$, where $o$ is an object identifier. Training data $T_o$ may have different forms – a 3D mesh model of the object or a set of RGB-D images (synthetic or real) showing object instances in known 6D poses. Test Input: At test time, method $M$ is provided with image $I$ and list $L = [(o_1, n_1), ..., (o_m, n_m)]$, where $n_i$ is the number of instances of object $o_i$ present in image $I$. Test Output: Method $M$ produces list $E = [E_1, \dots, E_m]$, where $E_i$ is a list of $n_i$ pose estimates for instances of object $o_i$. Each estimate is given by a 3x3 rotation matrix, $\mathbf{R}$, a 3x1 translation vector, $\mathbf{t}$, and a confidence score, $s$. The ViVo task is referred to as the 6D localization problem in [2]. In the BOP paper [1], methods were evaluated on a different task – 6D localization of a single instance of a single object (the SiSo task) – which was chosen because it allowed all relevant methods to be evaluated out of the box. Since then, the state of the art has advanced and we have moved to the more challenging ViVo task. Multiple datasets are used for the evaluation. Every dataset includes 3D object models and training and test RGB-D images annotated with ground-truth 6D object poses and intrinsic camera parameters. Some datasets also include validation images – in this case, the ground-truth 6D object poses are publicly available only for the validation images, not for the test images. The 3D object models were created manually or using KinectFusion-like systems for 3D surface reconstruction [6]. The training images show individual objects from different viewpoints and are either captured by an RGB-D/Gray-D sensor or obtained by rendering the 3D object models. The test images were captured in scenes with graded complexity, often with clutter and occlusion. The datasets are provided in the BOP format. For training, method $M$ can use the provided object models and training images and can render extra training images using the object models. Not a single pixel of the test images may be used in training, nor may the individual ground-truth poses or object masks provided for the test images. The range of all ground-truth poses in the test images, which is provided in file dataset_params.py in the BOP Toolkit, is the only information about the test set that can be used during training. Only subsets of the original datasets are used, to speed up the evaluation. The subsets are defined in files test_targets_bop19.json provided with the datasets. The following awards will be presented at the 5th International Workshop on Recovering 6D Object Pose at ICCV 2019. To be considered for the awards, authors need to provide an implementation of the method (source code or a binary file with instructions) which will be validated. The error of an estimated pose $\hat{\textbf{P}}$ w.r.t. the ground-truth pose $\bar{\textbf{P}}$ of an object model $O$ is measured by three pose error functions defined below. Their implementation is available in the BOP Toolkit. Visible Surface Discrepancy (VSD) [1,2]: $\hat{S}$ and $\bar{S}$ are distance maps obtained by rendering the object model $O$ in the estimated pose $\hat{\textbf{P}}$ and the ground-truth pose $\bar{\textbf{P}}$, respectively.
As in [1,2], the distance maps are compared with the distance map $S_I$ of the test image $I$ to obtain the visibility masks $\hat{V}$ and $\bar{V}$, i.e. the sets of pixels where the model $O$ is visible in image $I$. Estimation of the visibility masks has been modified – at pixels with no depth measurements, an object is now considered visible (it was considered not visible in [1,2]). This modification allows evaluating poses of glossy objects from the ITODD dataset whose surface is not always captured by the depth sensors. $\tau$ is a misalignment tolerance. See Section 2.2 of [1] for details. Maximum Symmetry-Aware Surface Distance (MSSD) [3]: $S_O$ is a set of symmetry transformations of object model $O$ (see Section 4.2). The maximum distance is relevant for robotic manipulation, where the maximum surface deviation strongly indicates the chance of a successful grasp. The maximum distance is also less dependent on the sampling strategy of the model vertices than the average distance used in ADD/ADI [2, 5], which tends to be dominated by higher-frequency surface parts. Maximum Symmetry-Aware Projection Distance (MSPD): $\text{proj}$ is the 2D projection operation (the result is in pixels) and the meaning of the other symbols is as in MSSD. Compared to the 2D projection [4], MSPD considers object symmetries and replaces the average by the maximum distance to increase robustness against the model sampling. As MSPD does not evaluate the alignment along the optical axis (Z axis) and measures only the perceivable discrepancy, it is relevant for augmented reality applications and suitable for the evaluation of RGB-only methods. The set of potential symmetry transformations (used in MSSD and MSPD) is defined as $S'_O = \{\textbf{S}: h(O, \textbf{S}O) < \varepsilon \}$, where $h$ is the Hausdorff distance calculated between the vertices of object model $O$. The allowed deviation is bounded by $\varepsilon = \text{max}(15\,mm, 0.1d)$, where $d$ is the diameter of model $O$ (the largest distance between any pair of model vertices) and the truncation at $15\,mm$ avoids breaking the symmetries by too small details. The set of symmetry transformations $S_O$ is a subset of $S'_O$ and consists of those symmetry transformations which cannot be resolved by the model texture (decided subjectively). Set $S_O$ covers both discrete and continuous rotational symmetries. The continuous rotational symmetries are discretized such as the vertex which is the furthest from the axis of symmetry travels not more than $1\%$ of the object diameter between two consecutive rotations. The symmetry transformations are stored in files models_info.json provided with the datasets. The performance of method $M$ w.r.t. pose error function $e_{\text{VSD}}$ is measured by the average recall $\text{AR}_{\text{VSD}}$ defined as the average of the recall rates for $\tau$ ranging from $5\%$ to $50\%$ of the object diameter with a step of $5\%$, and for the threshold of correctness $\theta_{\text{VSD}}$ ranging from $0.05$ to $0.5$ with a step of $0.05$. The recall rate is the fraction of annotated object instances for which a correct object pose was estimated. A pose estimate is considered correct if $e_{\text{VSD}} < \theta_{\text{VSD}}$. Similarly, the performance w.r.t. $e_{\text{MSSD}}$ is measured by the average recall $\text{AR}_{\text{MSSD}}$ defined as the average of the recall rates for the threshold of correctness $\theta_{\text{MSSD}}$ ranging from $5\%$ to $50\%$ of the object diameter with a step of $5\%$. 
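To make the symmetry-aware maximum distance more concrete, here is a minimal Python sketch of the min-over-symmetries / max-over-vertices idea described above. It is not the BOP Toolkit implementation (which should be used for actual submissions); the function and argument names are illustrative only.

```python
# Minimal sketch of a symmetry-aware maximum surface distance (MSSD-style error);
# NOT the official BOP Toolkit code, just an illustration of the idea.
import numpy as np

def transform(R, t, pts):
    """Apply a rigid transform to an (n, 3) array of model vertices."""
    return pts @ R.T + np.asarray(t).reshape(1, 3)

def mssd(R_est, t_est, R_gt, t_gt, pts, symmetries):
    """symmetries: list of (R_sym, t_sym) model symmetry transforms, incl. identity."""
    est = transform(R_est, t_est, pts)
    errs = []
    for R_s, t_s in symmetries:
        gt = transform(R_gt, t_gt, transform(R_s, t_s, pts))
        errs.append(np.max(np.linalg.norm(est - gt, axis=1)))   # max over vertices
    return min(errs)                                            # min over symmetries
```

A pose estimate would then be counted as correct if this error is below the threshold of correctness, and the recall rates averaged over the thresholds as described above.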
The performance w.r.t. $e_{\text{MSPD}}$ is measured by $\text{AR}_{\text{MSPD}}$ defined as the average of the recall rates for $\theta_{\text{MSPD}}$ ranging from $5r\,px$ to $50r\,px$ with a step of $5r\,px$, where $r = w/640$ and $w$ is the width of the image. The performance on a dataset is measured by the average recall $\text{AR} = (\text{AR}_{\text{VSD}} + \text{AR}_{\text{MSSD}} + \text{AR}_{\text{MSPD}}) / 3$. The overall performance on the core datasets is measured by $\text{AR}_{\text{Core}}$ defined as the average of the per-dataset average recalls $\text{AR}$. In this way, each dataset is treated as a separate sub-challenge, which avoids the overall score being dominated by larger datasets. To have your method evaluated, run it on the ViVo task and submit the results in the format described below to the BOP evaluation system (the evaluation script used is publicly available in the BOP Toolkit). Each method has to use an identical set of hyper-parameters across all objects and datasets. The list of object instances for which the pose is to be estimated can be found in files test_targets_bop19.json provided with the datasets. For each object instance in the list, at least $10\%$ of the object surface is visible in the respective image [1]. Results for all test images from one dataset are saved in one CSV file, with one pose estimate per line in the following format: scene_id,im_id,obj_id,score,R,t,time scene_id, im_id and obj_id are the IDs of the scene, image and object, respectively. score is a confidence of the estimate (the range of confidence values is not restricted). R is a 3x3 rotation matrix whose elements are saved row-wise and separated by a white space (i.e. r11 r12 r13 r21 r22 r23 r31 r32 r33). t is a 3x1 translation vector (in mm) whose elements are separated by a white space (i.e. t1 t2 t3). time is the time method $M$ took to make estimates for all objects in image im_id from scene scene_id. All estimates with the same scene_id and im_id must have the same value of time. Report the wall time from the point right after the raw data (the image, 3D object models etc.) is loaded to the point when the final pose estimates are available (a single real number in seconds, -1 if not available). $\mathbf{P} = \mathbf{K} [\mathbf{R} \, \mathbf{t}]$ is the camera matrix which transforms 3D point $\mathbf{x}_m = [x, y, z, 1]'$ in the model coordinates to 2D point $\mathbf{x}_i = [u, v, 1]'$ in the image coordinates: $s\mathbf{x_i} = \mathbf{P} \mathbf{x}_m$. The camera coordinate system is defined as in OpenCV, with the camera looking along the $Z$ axis. Camera intrinsic matrix $\mathbf{K}$ is provided with the test images and might be different for each image. Example results can be found here. After the results are evaluated, the authors can decide whether to make the evaluation scores visible to the public. To be considered for the awards, authors need to provide an implementation of the method (source code or a binary file with instructions) which will be validated. For the results to be included in a publication about the challenge, a documentation of the method, including specifications of the computer used, needs to be provided through the online submission form. Without the documentation, the scores will be listed on the website but will not be considered for inclusion in the publication.
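As an illustration of the submission format, a small Python helper that assembles one CSV line is sketched below; it is not an official BOP script, and the example values are arbitrary.

```python
# Illustrative helper for writing one result line in the required format
# scene_id,im_id,obj_id,score,R,t,time (R row-wise, t in mm, time in seconds).
import numpy as np

def format_bop_row(scene_id, im_id, obj_id, score, R, t, time_s):
    R_str = " ".join(f"{v:.6f}" for v in np.asarray(R).reshape(9))   # r11 ... r33
    t_str = " ".join(f"{v:.6f}" for v in np.asarray(t).reshape(3))   # t1 t2 t3
    return f"{scene_id},{im_id},{obj_id},{score:.6f},{R_str},{t_str},{time_s:.3f}"

# Example with arbitrary values: identity rotation, 0.5 m in front of the camera.
print(format_bop_row(1, 3, 5, 0.87, np.eye(3), [10.0, -20.0, 500.0], 0.25))
```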
Tomáš Hodaň, Czech Technical University in Prague Eric Brachmann, Heidelberg University Bertram Drost, MVTec Frank Michel, Technical University Dresden Martin Sundermeyer, DLR German Aerospace Center Jiří Matas, Czech Technical University in Prague Carsten Rother, Heidelberg University [1] Hodaň, Michel et al.: BOP: Benchmark for 6D Object Pose Estimation, ECCV'18. [2] Hodaň et al.: On Evaluation of 6D Object Pose Estimation, ECCVW'16. [3] Drost et al.: Introducing MVTec ITODD - A Dataset for 3D Object Recognition in Industry, ICCVW'17. [4] Brachmann et al.: Uncertainty-Driven 6D Pose Estimation of Objects and Scenes from a Single RGB Image, CVPR'16. [5] Hinterstoisser et al.: Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes, ACCV'12. [6] Newcombe et al.: KinectFusion: Real-time dense surface mapping and tracking, ISMAR'11.
If a particle travels on a geodesic with 4-momentum $P^\mu$ in a spacetime with a Killing vector $K_\mu$ then we have a constant of motion, $K$, given by: $$K=K_\mu P^\mu$$ Using the relationships: $$P^\mu=mU^\mu$$ and $$K_\mu=g_{\mu\nu}K^\nu$$ we obtain: $$K=mg_{\mu\nu}K^\nu U^\mu$$ Let us assume that $K^\nu$ is a timelike Killing vector so that we have: $$K^\nu=(1,0,0,0)$$ Then the constant of motion gives the total energy $e$ of the particle (including gravitational energy), with the conventional sign choice $e=-K$: $$e=-mg_{00}\frac{dt}{d\tau}$$ The above argument is my generalisation from @StanLiou's answer to the question Potential Energy in General Relativity, where he gives an expression for the total energy of a particle in geodesic motion in Schwarzschild spacetime. I hope I have got the algebra correct! My question is: could this definition of total particle energy be carried over to cosmology, where one has the FRW metric? The FRW metric does not have a timelike Killing vector, so one cannot expect the total energy $e$ of a co-moving particle to be constant. But the FRW metric does have a timelike conformal Killing vector, so it seems reasonable that the particle energy should scale in some way with conformal time $\eta$. We can write the flat FRW metric in conformal co-ordinates: $$ds^2=a(\eta)^2(-d\eta^2+dx^2+dy^2+dz^2)$$ Thus we have: $$g_{00}=-a(\eta)^2$$ $$\frac{d\eta}{d\tau}=\frac{1}{a(\eta)}$$ Therefore $$e=m\ a(\eta)$$ Thus it seems that the total energy of a co-moving particle, when expressed in conformal co-ordinates, scales with the Universal scale factor $a(\eta)$. Is this reasoning valid?
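The final algebraic step can be checked symbolically. The following SymPy sketch only verifies that the comoving 4-velocity is correctly normalised and that $-m\,g_{00}\,d\eta/d\tau$ reduces to $m\,a(\eta)$; it says nothing about whether interpreting this quantity as "total energy" is physically justified, which is the actual question.

```python
# Quick symbolic check of the algebra above (conformal FRW, comoving particle).
import sympy as sp

eta, m = sp.symbols('eta m', positive=True)
a = sp.Function('a')(eta)

g00 = -a**2        # conformal FRW metric component g_{00}
U0 = 1 / a         # d(eta)/d(tau) for a comoving particle

# 4-velocity normalisation: g_{00} (U^0)^2 should equal -1
print(sp.simplify(g00 * U0**2))     # -> -1

# e = -m g_{00} d(eta)/d(tau)
print(sp.simplify(-m * g00 * U0))   # -> m*a(eta)
```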
I'm trying to implement Bayesian power priors to discount historical data. $$\pi^p(\theta|\mathbf{z_{n_0}},a_0) \propto \pi_0(\theta) * L(\theta;\mathbf{z_{n_0}})^{a_0}$$ where $\pi^p$ is the posterior, $\pi_0$ is the prior, $L$ is the likelihood function, $\theta$ are the parameters of interest, $\mathbf{z_{n_0}}$ is the historical data, and $a_0 \in [0,1]$ is the discounting factor. My problem consists of five parameters, and if $a_0 =1$, I can simulate it using an MCMC method like Gibbs sampler. There is no conjugate prior and accompanying closed-form solution available in my problem. If $a_0 = 0$, the posterior equals the prior and no updating is required. For any other value of $a_0$, I haven't come across any ways of getting the result. If conjugate priors are used, there are closed form solutions (examples in Section 5). @Florian, yes I want to estimate the posterior. Sorry for the confusion. @Björn, thanks for the suggestion. I am looking at a fixed exponent and am using R, so I will try rstan. I'll report back afterwards.
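For a fixed exponent, the power prior only changes the (unnormalised) log posterior by multiplying the historical-data log-likelihood by $a_0$, so any generic MCMC sampler can be used. The sketch below is a toy Python/NumPy illustration with a two-parameter normal model and a plain random-walk Metropolis sampler; the model, priors and data are placeholders, not the five-parameter problem described above. In Stan, the analogous approach is simply to scale the historical-data log-likelihood contribution added to the target density.

```python
# Toy power-prior sketch: a fixed a0 multiplies the historical log-likelihood
# in the unnormalised log posterior; everything below is a placeholder model.
import numpy as np

rng = np.random.default_rng(1)
z_hist = rng.normal(1.0, 1.0, size=50)        # placeholder historical data
a0 = 0.5                                      # discounting factor in [0, 1]

def log_prior(theta):                         # vague normal priors on (mu, log sigma)
    mu, log_s = theta
    return -0.5 * (mu / 10)**2 - 0.5 * (log_s / 10)**2

def log_lik(theta, data):                     # normal log-likelihood (up to a constant)
    mu, log_s = theta
    s = np.exp(log_s)
    return np.sum(-0.5 * ((data - mu) / s)**2 - np.log(s))

def log_post(theta):
    return log_prior(theta) + a0 * log_lik(theta, z_hist)   # power prior

theta = np.array([0.0, 0.0])                  # plain random-walk Metropolis
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1, size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta.copy())
print(np.mean(samples[5000:], axis=0))        # posterior means after burn-in
```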
This post describes some discriminative machine learning algorithms. Distribution of Y given X Algorithm to predict Y Normal distribution Linear regression Bernoulli distribution Logistic regression Multinomial distribution Multinomial logistic regression (Softmax regression) Exponential family distribution Generalized linear regression Distribution of X Algorithm to predict Y Multivariate normal distribution Gaussian discriminant analysis or EM Algorithm X Features conditionally independent \(p(x_1, x_2|y)=p(x_1|y) * p(x_2|y) \) Naive Bayes Algorithm Other ML algorithms are based on geometry, like the SVM and K-means algorithms. Linear Regression Below a table listing house prices by size. x = House Size (\(m^2\)) y = House Price (k$) 50 99 50 100 50 100 50 101 60 110 For the size 50\(m^2\), if we suppose that prices are normally distributed around the mean μ=100 with a standard deviation σ, then P(y|x = 50) = \(\frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{y-μ}{\sigma})^{2})\) We define h(x) as a function that returns the mean of the distribution of y given x (E[y|x]). We will define this function as a linear function. \(E[y|x] = h_{θ}(x) = \theta^T x\) P(y|x = 50; θ) = \(\frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{y-h_{θ}(x)}{\sigma})^{2})\) We need to find θ that maximizes the probability for all values of x. In other words, we need to find θ that maximizes the likelihood function L: \(L(\theta)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{n} P(y^{(i)}|x^{(i)};θ)\) Or maximizes the log likelihood function l: \(l(\theta)=log(L(\theta )) = \sum_{i=1}^{m} log(P(y^{(i)}|x^{(i)};\theta ))\)\(= \sum_{i=1}^{m} log(\frac{1}{\sigma \sqrt{2\pi}}) -\frac{1}{2} \sum_{i=1}^{n} (\frac{y^{(i)}-h_{θ}(x^{(i)})}{\sigma})^{2}\) To maximize l, we need to minimize J(θ) = \(\frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}\). This function is called the Cost function (or Energy function, or Loss function, or Objective function) of a linear regression model. It’s also called “Least-squares cost function”. J(θ) is convex, to minimize it, we need to solve the equation \(\frac{\partial J(θ)}{\partial θ} = 0\). A convex function has no local minimum. There are many methods to solve this equation: Gradient descent (Batch or Stochastic Gradient descent) Normal equation Newton method Matrix differentiation Gradient descent is the most used Optimizer (also called Learner or Solver) for learning model weights. Batch Gradient descent \(θ_{j} := θ_{j} – \alpha \frac{\partial J(θ)}{\partial θ_{j}} = θ_{j} – α \frac{\partial \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}\) α is called “Learning rate” \(θ_{j} := θ_{j} – α \frac{1}{2} \sum_{i=1}^{n} \frac{\partial (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}\) If \(h_{θ}(x)\) is a linear function (\(h_{θ} = θ^{T}x\)), then : \(θ_{j} := θ_{j} – α \sum_{i=1}^{n} (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \) Batch size should fit the size of CPU or GPU memories, otherwise learning speed will be extremely slow. When using Batch gradient descent, the cost function in general decreases without oscillations. Stochastic (Online) Gradient Descent (SGD) (use one example for each iteration – pass through all data N times (N Epoch)) \(θ_{j} := θ_{j} – \alpha (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \) This learning rule is called “Least mean squares (LMS)” learning rule. It’s also called Widrow-Hoff learning rule. Mini-batch Gradient descent Run gradient descent for each mini-batch until we pass through traning set (1 epoch). 
Repeat the operation many times. \(θ_{j} := θ_{j} - \alpha \sum_{i=1}^{20} (h_{θ}(x^{(i)}) - y^{(i)}) * x_j^{(i)} \) (here with a mini-batch size of 20). Mini-batch size should fit the size of CPU or GPU memories. When using mini-batch gradient descent, the cost function decreases quickly but with oscillations. Learning rate decay It's a technique used to automatically reduce the learning rate after each epoch. The decay rate is a hyperparameter. \(α = \frac{1}{1 + decay\_rate \cdot epoch\_num} \cdot α_0\) Momentum Momentum is a method used to accelerate gradient descent. The idea is to add an extra term to the equation to accelerate descent steps. \(θ_{j_{t+1}} := θ_{j_t} - α \frac{\partial J(θ_{j_t})}{\partial θ_j} \color{blue} {+ λ (θ_{j_t} - θ_{j_{t-1}})} \) Below is another way to write the expression: \(v(θ_{j},t) = α \cdot \frac{\partial J(θ_j)}{\partial θ_j} + λ \cdot v(θ_{j},t-1) \\ θ_{j} := θ_{j} - \color{blue} {v(θ_{j},t)}\) Nesterov Momentum is a slightly different version of the momentum method. AdaGrad AdaGrad is another method used to adapt gradient descent. The problem in this method is that the term grad_squared becomes large after running many gradient descent steps. The term grad_squared is used to accelerate gradient descent when gradients are small, and slow down gradient descent when gradients are large. RMSprop RMSprop is an enhanced version of AdaGrad. The term decay_rate is used to apply exponential smoothing on the grad_squared term. Adam optimization Adam is a combination of Momentum and RMSprop. Normal equation To minimize the cost function \(J(θ) = \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ - y)^T(Xθ - y)\), we need to solve the equation: \( \frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial \, trace(J(θ))}{\partial θ} = 0 \\ \frac{\partial \, trace((Xθ - y)^T(Xθ - y))}{\partial θ} = 0 \\ \frac{\partial \, trace(θ^TX^TXθ - θ^TX^Ty - y^TXθ + y^Ty)}{\partial θ} = 0\) \(\frac{\partial \,(trace(θ^TX^TXθ) - trace(θ^TX^Ty) - trace(y^TXθ) + trace(y^Ty))}{\partial θ} = 0 \\ \frac{\partial \,(trace(θ^TX^TXθ) - trace(θ^TX^Ty) - trace(y^TXθ))}{\partial θ} = 0\) \(\frac{\partial \,(trace(θ^TX^TXθ) - trace(y^TXθ) - trace(y^TXθ))}{\partial θ} = 0 \\ \frac{\partial \,(trace(θ^TX^TXθ) - 2\, trace(y^TXθ))}{\partial θ} = 0 \\ \frac{\partial \, trace(θθ^TX^TX)}{\partial θ} - 2\, \frac{\partial \, trace(θy^TX)}{\partial θ} = 0 \\ 2 X^TXθ - 2 X^Ty = 0 \\ X^TXθ = X^Ty \\ θ = {(X^TX)}^{-1}X^Ty\) If \(X^TX\) is singular, then we need to calculate the pseudo-inverse instead of the inverse. Newton method Approximating the second derivative by a finite difference and setting \(J'(θ_{t+1}) = 0\) (we are looking for a stationary point) gives: \(J''(θ_{t}) := \frac{J'(θ_{t+1}) - J'(θ_{t})}{θ_{t+1} - θ_{t}}\) \(\rightarrow θ_{t+1} := θ_{t} - \frac{J'(θ_{t})}{J''(θ_{t})}\) Matrix differentiation To minimize the cost function \(J(θ) = \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ - y)^T(Xθ - y)\), we need to solve the equation: \( \frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial (θ^TX^TXθ - θ^TX^Ty - y^TXθ + y^Ty)}{\partial θ} = 2X^TXθ - \frac{\partial (θ^TX^Ty)}{\partial θ} - \frac{\partial (y^TXθ)}{\partial θ} = 2X^TXθ - 2X^Ty = 0\) (Note: in matrix differentiation, \( \frac{\partial Aθ}{\partial θ} = A^T\) and \( \frac{\partial θ^TAθ}{\partial θ} = 2A^Tθ\) for symmetric A.) We can deduce \(X^TXθ = X^Ty\) and \(θ = (X^TX)^{-1}X^Ty\). Logistic Regression Below is a table that shows tumor types by size, as (x, y) pairs with x = Tumor Size (cm) and y = Tumor Type (Benign = 0, Malignant = 1): (1, 0), (1, 0), (2, 0), (2, 1), (3, 1), (3, 1). Given x, y is distributed according to the Bernoulli distribution with probability of success p = E[y|x].
\(P(y|x;θ) = p^y (1-p)^{(1-y)}\) We define h(x) as a function that returns the expected value (p) of the distribution. We will define this function as: \(E[y|x] = h_{θ}(x) = g(θ^T x) = \frac{1}{1+exp(-θ^T x)}\). g is called Sigmoid (or logistic) function. P(y|x; θ) = \(h_{θ}(x)^y (1-h_{θ}(x))^{(1-y)}\) We need to find θ that maximizes this probability for all values of x. In other words, we need to find θ that maximizes the likelihood function L: \(L(θ)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\) Or maximize the log likelihood function l: \(l(θ)=log(L(θ)) = \sum_{i=1}^{m} log(P(y^{(i)}|x^{(i)};θ ))\)\(= \sum_{i=1}^{m} y^{(i)} log(h_{θ}(x^{(i)}))+ (1-y^{(i)}) log(1-h_{θ}(x^{(i)}))\) Or minimize the \(-l(θ) = \sum_{i=1}^{m} -y^{(i)} log(h_{θ}(x^{(i)})) – (1-y^{(i)}) log(1-h_{θ}(x^{(i)})) = J(θ) \) J(θ) is convex, to minimize it, we need to solve the equation \(\frac{\partial J(θ)}{\partial θ} = 0\). There are many methods to solve this equation: Gradient descent \(θ_{j} := θ_{j} – α \sum_{i=1}^{n} (h_{θ}(x^{(i)}) – y^{(i)}) * x_j^{(i)} \) Logit function (inverse of logistic function) Logit function is defined as follow: \(logit(p) = log(\frac{p}{1-p})\) The idea in the use of this function is to transform the interval of p (outcome) from [0,1] to [0, ∞]. So instead of applying linear regression on p, we will apply it on logit(p). Once we find θ that maximizes the Likelihood function, we can then estimate logit(p) given a value of x (\(logit(p) = h_{θ}(x) \)). p can be then calculated using the following formula: \(p = \frac{1}{1+exp(-logit(h_{θ}(x)))}\) Multinomial Logistic Regression (using maximum likelihood estimation) In multinomial logistic regression (also called Softmax Regression), y could have more than two outcomes {1,2,3,…,k}. Below a table that shows tumor types by size. x = Tumor Size (cm) y = Tumor Type (Type1= 1, Type2= 2, Type3= 3) 1 1 1 1 2 2 2 2 2 3 3 3 3 3 Given x, we can define a multinomial distribution with probabilities of success \(\phi_j = E[y=j|x]\). \(P(y=j|x;\Theta) = ϕ_j \\ P(y=k|x;\Theta) = 1 – \sum_{j=1}^{k-1} ϕ_j \\ P(y|x;\Theta) = ϕ_1^{1\{y=1\}} * … * ϕ_{k-1}^{1\{y=k-1\}} * (1 – \sum_{j=1}^{k-1} ϕ_j)^{1\{y=k\}}\) We define \(\tau(y)\) as a function that returns a \(R^{k-1}\) vector with value 1 at the index y: \(\tau(y) = \begin{bmatrix}0\\0\\1\\0\\0\end{bmatrix}\), when \(y \in \{1,2,…,k-1\}\), . and \(\tau(y) = \begin{bmatrix}0\\0\\0\\0\\0\end{bmatrix}\), when y = k. We define \(\eta(x)\) as a \(R^{k-1}\) vector = \(\begin{bmatrix}log(\phi_1/\phi_k)\\log(\phi_2/\phi_k)\\…\\log(\phi_{k-1}/\phi_k)\end{bmatrix}\) \(P(y|x;\Theta) = 1 * exp(η(x)^T * \tau(y) – (-log(\phi_k)))\) This form is an exponential family distribution form. We can invert \(\eta(x)\) and find that: \(ϕ_j = ϕ_k * exp(η(x)_j)\)\(= \frac{1}{1 + \frac{1-ϕ_k}{ϕ_k}} * exp(η(x)_j)\)\(=\frac{exp(η(x)_j)}{1 + \sum_{c=1}^{k-1} ϕ_c/ϕ_k}\)\(=\frac{exp(η(x)_j)}{1 + \sum_{c=1}^{k-1} exp(η(x)_c)}\) If we define η(x) as linear function, \(η(x) = Θ^T x = \begin{bmatrix}Θ_{1,1} x_1 +… + Θ_{n,1} x_n \\Θ_{1,2} x_1 +… + Θ_{n,2} x_n\\…\\Θ_{1,k-1} x_1 +… + Θ_{n,k-1} x_n\end{bmatrix}\), and Θ is a \(R^{n*(k-1)}\) matrix. Then: \(ϕ_j = \frac{exp(Θ_j^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)}\) The hypothesis function could be defined as: \(h_Θ(x) = \begin{bmatrix}\frac{exp(Θ_1^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)} \\…\\ \frac{exp(Θ_{k-1}^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)} \end{bmatrix}\) We need to find Θ that maximizes the probabilities P(y=j|x;Θ) for all values of x. 
In other words, we need to find θ that maximizes the likelihood function L: \(L(θ)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\)\(=\prod_{i=1}^{m} \phi_1^{1\{y^{(i)}=1\}} * … * \phi_{k-1}^{1\{y^{(i)}=k-1\}} * (1 – \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}\)\(=\prod_{i=1}^{m} \prod_{c=1}^{k-1} \phi_c^{1\{y^{(i)}=c\}} * (1 – \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}\) \(=\prod_{i=1}^{m} \prod_{c=1}^{k-1} \phi_c^{1\{y^{(i)}=c\}} * (1 – \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}\) and \(ϕ_j = \frac{exp(Θ_j^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)}\) Multinomial Logistic Regression (using cross-entropy minimization) In this section, we will try to minimize the cross-entropy between Y and estimated \(\widehat{Y}\). We define \(W \in R^{d*n}\), \(b \in R^{d}\) such as \(S(W x + b) = \widehat{Y}\), S is the Softmax function, k is the number of outputs, and \(x \in R^n\). To estimate W and b, we will need to minimize the cross-entropy between the two probability vectors Y and \(\widehat{Y}\). The cross-entropy is defined as below: \(D(\widehat{Y}, Y) = -\sum_{j=1}^d Y_j Log(\widehat{Y_j})\) Example: if \(\widehat{y} = \begin{bmatrix}0.7 \\0.1 \\0.2 \end{bmatrix} \& \ y=\begin{bmatrix}1 \\0 \\0 \end{bmatrix}\) then \(D(\widehat{Y}, Y) = D(S(W x + b), Y) = -1*log(0.7)\) We need to minimize the entropy for all training examples, therefore we will need to minimize the average cross-entropy of the entire training set. \(L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)})\), L is called the loss function. If we define \(W = \begin{bmatrix} — θ_1 — \\ — θ_2 — \\ .. \\ — θ_d –\end{bmatrix}\) such as: \(θ_1=\begin{bmatrix}θ_{1,0}\\θ_{1,1}\\…\\θ_{1,n}\end{bmatrix}, θ_2=\begin{bmatrix}θ_{2,0}\\θ_{2,1}\\…\\θ_{2,n}\end{bmatrix}, θ_d=\begin{bmatrix}θ_{d,0}\\θ_{d,1}\\…\\θ_{d,n}\end{bmatrix}\) We can then write \(L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)})\) \(= \frac{1}{m} \sum_{i=1}^m \sum_{j=1}^d 1^{\{y^{(i)}=j\}} log(\frac{exp(θ_k^T x^{(i)})}{\sum_{k=1}^d exp(θ_k^T x^{(i)})})\) For d=2 (nbr of class=2), \(= \frac{1}{m} \sum_{i=1}^m 1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}} log(\frac{exp(θ_1^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(j)})}) + 1^{\{y^{(i)}=\begin{bmatrix}0 \\ 1\end{bmatrix}\}} log(\frac{exp(θ_2^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})})\) \(1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}}\) means that the value is 1 if \(y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\) otherwise the value is 0. To estimate \(θ_1,…,θ_d\), we need to calculate the derivative and update \(θ_j = θ_j – α \frac{\partial L}{\partial θ_j}\) Kernel regression Kernel regression is a non-linear model. In this model we define the hypothesis as the sum of kernels. \(\widehat{y}(x) = ϕ(x) * θ = θ_0 + \sum_{i=1}^d K(x, μ_i, λ) θ_i \) such as: \(ϕ(x) = [1, K(x, μ_1, λ),…, K(x, μ_d, λ)]\) and \(θ = [θ_0, θ_1,…, θ_d]\) For example, we can define the kernel function as : \(K(x, μ_i, λ) = exp(-\frac{1}{λ} ||x-μ_i||^2)\) Usually we select d = number of training examples, and \(μ_i = x_i\) Once the vector ϕ(X) calculated, we can use it as new engineered vector, and then use the normal equation to find θ: \(θ = {(ϕ(X)^Tϕ(X))}^{-1}ϕ(X)^Ty\) Bayes Point Machine The Bayes Point Machine is a Bayesian linear classifier that can be converted to a nonlinear classifier by using feature expansions or kernel methods as the Support Vector Machine (SVM). More details will be provided. Ordinal Regression Ordinal Regression is used for predicting an ordinal variable. 
An ordinal variable is a categorical variable for which the possible values are ordered (e.g. size: Small, Medium, Large). More details will be provided. Poisson Regression Poisson regression assumes the output variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear function: log(E[Y|X]) = log(λ) = θ^T x
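To tie the update rules above to something runnable, here is a small NumPy sketch of batch gradient descent for linear regression on synthetic house-price-style data, compared against the normal-equation solution. The data, learning rate and iteration count are arbitrary illustrative choices (the feature is standardised so that plain gradient descent converges without tuning).

```python
# Batch gradient descent  theta := theta - alpha * sum_i (h(x_i) - y_i) * x_i
# versus the normal equation theta = (X^T X)^{-1} X^T y, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
m = 200
size = rng.uniform(40, 120, m)                        # house size in m^2
price = 1.9 * size + 5 + rng.normal(0, 3, m)          # synthetic prices (k$)

xs = (size - size.mean()) / size.std()                # standardise the feature
X = np.column_stack([np.ones(m), xs])                 # add intercept column
y = price

theta = np.zeros(2)
alpha = 1e-3
for _ in range(5000):
    grad = X.T @ (X @ theta - y)                      # sum_i (h(x_i) - y_i) x_i
    theta -= alpha * grad

theta_ne = np.linalg.pinv(X.T @ X) @ X.T @ y          # pinv covers a singular X^T X
print(theta, theta_ne)                                # both ~ the least-squares fit
```

The same loop becomes logistic regression by replacing the prediction X @ theta with the sigmoid of it; the gradient expression keeps exactly the same form, as noted in the post.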
\(\dfrac{a}{b+2c+3d} + \dfrac{b}{c+2d+3a} + \dfrac{c}{d+2a+3b} + \dfrac{d}{a+2b+3c} > \dfrac{2}{3}\)

If a, b, c, d are distinct positive reals, prove the above inequality.

Note by Raushan Sharma, 3 years, 9 months ago

Have u given the inmo mock test of fiitjee yesterday

No, one of my friends gave that, and yeah, it's a question I gave from there only... Did you give that test?? How was it?

Yes I gave the test and it was really difficult....this was the only question I could completely solve....I would post the complete paper soon.

@Samarth Agarwal – Ya, I have the complete paper, but can you give the solution to this inequality, I mean how you solved it? I was trying it with Titu's Lemma, but couldn't complete it.

@Raushan Sharma – Use Titu's lemma and then Cauchy-Schwarz on \(\sqrt{a}, \sqrt{b}, \sqrt{c}, \sqrt{d}\) and \(1, 1, 1, 1\); it would give the direct result.

@Samarth Agarwal – Oh, yeah, I did it today, after I commented that. It was quite easy. Actually at first I was not expanding \((a+b+c+d)^2\). I first applied Titu's lemma and then AM-GM.

@Raushan Sharma – Can you please post the complete solution including explanation for this question??

@Saurabh Mallik – Yes, I can, but for that can you please tell me, how can we add an image in the comment??

@Raushan Sharma – Sorry, I don't know how we can upload images in comments. I think we can't upload images in comments but can upload them in solutions for given questions. Can you please write the whole solution?

Quite easy: take the cyclic sum \(\sum a^2/(ab+2ac+3ad)\) and apply Titu's lemma.
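Not a proof, but a quick numerical sanity check of the inequality is easy to run; the sampling range and sample count below are arbitrary. (At \(a=b=c=d\) each term equals \(1/6\), so the sum equals \(2/3\) exactly, which is why distinctness makes the inequality strict.)

```python
# Quick numerical sanity check (not a proof) of
#   a/(b+2c+3d) + b/(c+2d+3a) + c/(d+2a+3b) + d/(a+2b+3c) > 2/3
# for random distinct positive reals.
import numpy as np

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(100000):
    a, b, c, d = rng.uniform(0.01, 100, 4)
    lhs = (a/(b+2*c+3*d) + b/(c+2*d+3*a)
           + c/(d+2*a+3*b) + d/(a+2*b+3*c))
    worst = min(worst, lhs)
print(worst)   # stays above 2/3; it approaches 2/3 only as a, b, c, d become equal
```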
So when combining atomic orbitals to form molecular orbitals, you can either add the wave functions or subtract them. But at the same time, orbitals can exist in opposite phases (say one lobe of the p orbital is '+' and the other '-'). So what happens when you subtract two 1s orbitals of opposite phase, is that the same thing as constructively interference of the wave functions? The concept of linear combination of atomic orbitals (LCAO) to form molecular orbitals (MO) is probably best understood, while digging a little deeper into quantum chemistry. The method is an approximation that was introduced for ab initio methods like Hartree Fock. I don not want to go into too much detail, but there a some points that need to be considered before understanding what LCAO actually does. The time independent Schrödinger equation can was postulated as $$\mathbf{H}\Psi=E\Psi,$$ with the Hamilton operator $\mathbf{H}$, the wave function $\Psi$ and the corresponding energy eigenvalue(s) $E$. I will just note that we are working in the framework of the Born-Oppenheimer approximation and refer to many textbooks for more details. There is a set of rules, the wave function has to obey. It is a scalar, that can be real or complex, but the product of itself with its complex conjugated version is always positive and real. $$0\leq \Psi^*\Psi=|\Psi|^2$$ The probability of finding all $N$ electrons in all space $\mathbb{V}$ is one, hence to function is normed. $$N=\int_\mathbb{V} |\Psi(\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N)|^2 \mathrm{d}(\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N)$$ The value of the wave function has to vanish at infinity. $$0 = \lim_{\mathbf{x}\to\infty}|\Psi(\mathbf{x})|$$ The wave function has to be continuous and continuously differentiable, due to the second order differential operator for the kinetic energy $\mathbf{T}_c$ included in $\mathbf{H}$. The Pauli Principle has to be obeyed. $$\Psi(\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N) = -\Psi(\mathbf{x}_2,\mathbf{x}_1,\dots,\mathbf{x}_N)$$ Also the variational principle should hold for this approximation, stating that the expectation value for the energy of any trial wave function is larger that the energy eigenvalue of the true ground state. One of the most basic methods to approximately solve this problem is Hartree Fock. In it the trial wave function $\Phi$ is set up as a Slater determinant. $$ \Phi(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N) = \frac{1}{\sqrt{N!}} \left| \begin{matrix} \phi_1(\mathbf{x}_1) & \phi_2(\mathbf{x}_1) & \cdots & \phi_N(\mathbf{x}_1) \\ \phi_1(\mathbf{x}_2) & \phi_2(\mathbf{x}_2) & \cdots & \phi_N(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(\mathbf{x}_N) & \phi_2(\mathbf{x}_N) & \cdots & \phi_N(\mathbf{x}_N) \end{matrix} \right| $$ The expectation value of the energy in the Hartree Fock formalism is set up, using bra-ket notation as $$E =\langle\Phi|\mathbf{H}|\Phi\rangle.$$ Skipping through some major parts of the deviation of HF, well end up at an expression for the energy. $$E=\sum_i^N \langle\phi_i|\mathbf{H}^c|\phi_i\rangle +\frac12 \sum_i^N\sum_j^N \langle\phi_i|\mathbf{J}_j-\mathbf{K}_j|\phi_i\rangle$$ To find the best one-electron wave functions $\phi_i$ we introduce Lagrange multiplicators $\lambda$ minimising the energy with respect to our chosen conditions. These conditions include that the molecular orbitals are ortho normal. 
$$\langle\phi_i|\phi_j\rangle =\delta_{ij} =\left\{\begin{matrix} 0 & , \text{for}~i \neq j\\ 1 & , \text{for}~i = j\\\end{matrix}\right.$$I will again skip through most of it and just show you the end expression.$$\sum_j\lambda_{ij}|\phi_j\rangle = \mathbf{F}_i|\phi_i\rangle$$with the Fock operator set up as$$\mathbf{F}_i = \mathbf{H}^c +\sum_j (\mathbf{J}_j − \mathbf{K}_j)$$with $i\in1\cdots{}N$, the total number of electrons. We can transform these trial wavefunctions $\phi_i$ to canonical orbitals $\phi_i'$ (molecular orbitals) and obtain the pseudo eigenwertproblem $$\varepsilon_i\phi_i'=\mathbf{F}_i\phi_i'.\tag{Fock}$$ This equation is actually only well defined for occupied orbitals and these are the orbitals that give the lowest energy. In practice this formalism can be extended to include virtual (unoccupied) molecular orbitals as well. Until now we did not use any atomic orbitals at all. This is the next step to find an approximation to actually solve these still pretty complicated systems. LCAO a superposition method. In this approach we map a finite set of $k$ atomic (spin) orbitals $\chi_a$ onto another finite set of $l$ molecular (spin) orbitals $\phi_i'$. They are related towards each other via the expression \begin{align} \phi_i'(\mathbf{x}) &= c_{1,i}\chi_1(\mathbf{x}) + c_{2,i}\chi_2(\mathbf{x}) + \cdots + c_{k,i}\chi_k(\mathbf{x})\\ \phi_i'(\mathbf{x}) &= \sum_a^k c_{a,i}\chi_a(\mathbf{x})\\\tag{LCAO} \end{align} From $\text{(Fock)}$ you can see, that there will be $l$ possible equations depending on the chosen set of orbitals, in the form of $$\sum_a^k\sum_b^k c_{ia}^* c_{ib} \langle\chi_a|\mathbf{F}_i|\chi_b\rangle = \varepsilon_i \sum_a^k\sum_b^k c_{ia}^* c_{ib} \langle\chi_a|\chi_b\rangle$$ or for short there are $l$ equations $$C_{ab}F_{ab}=\varepsilon_i C_{ab}S_{ab}.$$ Since $a,b \in [1,k]$ you can gather $C_{ab}$ in a $k\times k$ matrix $\mathbb{C}$. Since $i\in[1,l]$, there will only be $l$ $\varepsilon_i$ and therefore the matrix of the Fock elements $F_{ab}$ has to be a $l\times l$ matrix $\mathbb{F}$. The whole problem reduces a matrix equation $$\mathbb{F}\mathbb{C}=\mathbb{S}\mathbb{C}\epsilon\!\!\varepsilon,\tag{work}$$ from which it is obvious, that the dimension of the involved matrices have to be the same. Hence $k=l$. Too long, didn't read The total number of elements in the finite set of atomic orbitals is equal to the total number of elements in the finite set of molecular orbitals. Any linear combination is possible, but only the orbitals that minimise the energy will be occupied. The coefficients $c$ from $\text{(LCAO)}$ will be chosen to minimise the energy of the wave function. This will always be constructive interference. This is also independent of the "original" phase of the orbital that is combined with another orbital. In other words, from $\text{(LCAO)}$ $$\phi_i'(\mathbf{x}) = c_{1,i}\chi_1(\mathbf{x}) + c_{2,i}\chi_2(\mathbf{x}) \equiv c_{1,i}\chi_1(\mathbf{x}) - c_{2,i}[-\chi_2(\mathbf{x})].$$ Add or substract the wavefunctions is the first step, then you need to normalize the new wavefunction, so its norm becomes 1. In your case, you get twice 1s wavefunction before normalization, and result in 1s afterwards. So usually there's no point to combine the same orbitals in LACO. 
Since you are talking about s-orbitals and p-orbitals only (which would be a $\sigma$ bond), there is a head-on combination of the orbitals. This means that the orbitals combine along the axes (each p orbital has both of its lobes aligned along the x-, y-, or z-axis, mathematically). The s-orbitals always combine head-on, since they are spherical. Now, if two s-orbitals combine, it would be constructive interference if both are of the same phase. If they are of opposite phases, there will be destructive interference. What you are talking about when you say "add" or "subtract" means constructive and destructive interference, respectively, when the orbitals are of the same phase; and destructive and constructive interference, respectively, when the orbitals are of different phases.
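As a purely numerical cartoon of the point made in this thread, the following Python sketch combines two 1s-like functions with the same or opposite sign and renormalises the result. The 1D exponential stand-in for a 1s orbital and all numbers are illustrative only; this is not a real LCAO/Hartree-Fock calculation.

```python
# Toy illustration of LCAO: combine two 1s-like functions centred at +/- R/2
# with the same or opposite sign, renormalise, and compare the density at the
# bond midpoint (constructive vs. destructive interference).
import numpy as np

def chi_1s(x, centre, zeta=1.0):
    """1D exponential stand-in for a 1s orbital."""
    return np.exp(-zeta * np.abs(x - centre))

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
R = 1.4
chi1, chi2 = chi_1s(x, -R/2), chi_1s(x, +R/2)

for sign, label in [(+1, "bonding (chi1 + chi2)"), (-1, "antibonding (chi1 - chi2)")]:
    phi = chi1 + sign * chi2
    phi /= np.sqrt(np.sum(phi**2) * dx)          # renormalise the combination
    mid = phi[np.argmin(np.abs(x))]**2           # density at the bond midpoint
    print(label, "density at midpoint:", mid)
```

The in-phase (added) combination piles density between the centres, while the out-of-phase (subtracted) combination has a node there, which is the constructive/destructive interference discussed above; flipping the phase of one basis function merely swaps which linear combination plays which role.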
For (quasi-compact and quasi-separated) schemes there is a categorical way to characterise quasi-coherent sheaves of finite type using purely the abelian category $\operatorname{QCoh}(X)$. In an abelian category one can define a categorically finitely generated object as an object M such that, for any directed family of subobjects $M_\alpha \subset M$ whose sum $\sum_\alpha M_\alpha$ equals $M$, there exists an index $\alpha_0$ with $M_{\alpha_0} = M$. Now, in the category of $R$-modules this is easy to see, and the argument essentially relies on the fact that any module is the colimit of its finitely generated submodules. For quasi-compact and quasi-separated schemes the only reference I could find is Daniel Murfet's excellent notes. This is Corollary 64 from here. I would like to know whether this result still holds for algebraic spaces or, more generally, for algebraic stacks. (I imagine that knowing this is essentially equivalent to proving Lemma 61 in Murfet's notes for a scheme but using the étale topology.) EDIT: Perhaps one could cook up a variant of Proposition 15.4 of Laumon-Moret-Bailly. Unfortunately the argument there seems to use at the very end that a submodule of a finite type module is of finite type, which does not hold in the non-noetherian case. EDIT2: By the way, for qcqs algebraic spaces this was already known to Raynaud-Gruson [RG, Proposition 5.7.8].
The Annals of Statistics, Volume 31, Number 5 (2003), 1491-1516.

Enriched conjugate and reference priors for the Wishart family on symmetric cones

Abstract
A general Wishart family on a symmetric cone is a natural exponential family (NEF) having a homogeneous quadratic variance function. Using results in the abstract theory of Euclidean Jordan algebras, the structure of conditional reducibility is shown to hold for such a family, and we identify the associated parameterization $\phi$ and analyze its properties. The enriched standard conjugate family for $\phi$ and the mean parameter $\mu$ are defined and discussed. This family is considerably more flexible than the standard conjugate one. The reference priors for $\phi$ and $\mu$ are obtained and shown to belong to the enriched standard conjugate family; in particular, this allows us to verify that reference posteriors are always proper. The above results extend those available for NEFs having a simple quadratic variance function. Specifications of the theory to the cone of real symmetric and positive-definite matrices are discussed in detail and allow us to perform Bayesian inference on the covariance matrix $\Sigma$ of a multivariate normal model under the enriched standard conjugate family. In particular, commonly employed Bayes estimates, such as the posterior expectation of $\Sigma$ and $\Sigma^{-1}$, are provided in closed form.

Citation
Consonni, Guido; Veronese, Piero. Enriched conjugate and reference priors for the Wishart family on symmetric cones. Ann. Statist. 31 (2003), no. 5, 1491-1516. doi:10.1214/aos/1065705116. MR2012823; Zbl 1046.62054. https://projecteuclid.org/euclid.aos/1065705116 (first available in Project Euclid: 9 October 2003).
This tutorial section covers the basic use of equations or equation systems that use different notations. It is assumed that you have worked through the basic tutorial parts. Rules The basic contract of the naming policy ‘integrate’ is as follows: Basic rule: If an equation or equation system (sub element) is added to a super equation system (super element), the sub element’s variables will get the namespace of the super element Additional rule using different notations: If a sub element is connected to a super element, and both elements use different notations, then allvariable names of the sub element have to be translated (by a connector) into the super notation. Attention: In any time the meaning of each variable has to be uniquely defined by the notation of its namespace. If different notations are used in sub and super element, each variable has to be translated. Otherwise the meaning of some variables coming from the sub element would be undefined in the notation (and therefore in the namespace ) of the super element. Result: After parsing, all variables have a top level namingaccording to the super notation in the top level namespace. ‘Integrate’ case a: All elements use the same notation. This is the standard case where no translation is necessary. It is only listed here for the sake of completeness. Figure 1: Graph of the variable interpretation Equation System itgr a Notation x Base Names: Name Description value x parameter a parameter b parameter c parameter d Indices: Name max val Description Ni value index Problem description Design variables and their corresponding values: Iteration variables and their corresponding guess values: Expected solution: Repeating the basics If you do not know how to enter the above equation system and solve the derived problem in MOSAICmodeling, then you should work through (or revisit) the tutorial parts in the basics. ‘Integrate’ case b: A connected equation uses another notation. In this case (only) one sub equation has a different notation. This is similar to the scenario where you establish your equations in your notation and use some equations from a collaborating developer who uses his own notation. Equation System itgr b Notation y Base Names: Name Description values parameter 1 parameter 2 parameter 3 parameter 4 Indices: Name Max Val Description Nj value index Collaboration Equation (\ref{eq:itgr:b:incl}) exists in a collaborating project. To avoid reimplementation the existing equation (\ref{eq:itgr:b:used}) is used here. In the other project another notation is used. To be able to use equations of the other project, a connector must be created, that translates into the notation used in this project. Equation one Notation x Base Names: Name Description values parameter 1 parameter 2 parameter3 parameter 4 Indices: Name Max val Description Ni value index Connector one Subnotation: Notation x Supernotation: Notation y Value list: Figure 2: Graph of the variable interpretation Problem description Design variables with their values: Iteration variables with guess values: Expected solution: Creating the equations and the notations As a preparation of this tutorial the basic model elements must be created: Create ‘Notation y’ and ‘Equation two’ as it is used in your own project (see eq. \ref{eq:itgr:b:eq:two:new}). Create ‘Notation x’ and ‘Equation one’ as it is present in the collaborating project (see eq. \ref{eq:itgr:b:used}). 
Creating the connector

Now it is necessary to create a connector, which is basically a list of synonymous variable names. The connector to be created is given in the specifications above but repeated here for convenience:

Connector one. Sub notation: Notation x. Super notation: Notation y. Value list:

Every connector consists of three pieces of information: the sub notation, the super notation and a variable matching list. However, there are several ways to create a connector. The standard way is to specify the sub and super notation directly and create the variable namings that need to be matched by hand. If you already have connectible elements (equations or equation systems) that you want to match, you can also load them into the Connector Editor. In this case the notations and the variable namings are extracted from the loaded elements, which usually saves much work. The stored information, however, is still only the sub notation, the super notation and the variable matching list. In this introductory example the latter case is shown.

To create the connector based on existing connectible elements do the following:

Select the Connector tab in the Editor Bar.
Activate the Set Notations tab. You see two sections, one for the Sub Notation and one for the Super Notation. Both sections contain a field to import the notation.
In the section for the Sub Notation press the [Import] button of the associated file selection area.
Select the file for 'Equation one'.

Let's take a look at the results of the last action: the file panel shows the notation used by 'Equation one'. If you look at the Sub Notation section you will find that the notation has been loaded into a detailed view panel, similar to the view in the Notation Editor.

Go back to the Set Notations tab.
Within the specification of the Super Element press the [Import] button on the right hand side of this area.
Now select the file for 'Equation two'. (Strictly, 'itgr b' is the super element. But 'Equation two' uses the same notation and contains all necessary variable names.)

The variable names of 'Equation two' and information about the corresponding notation, 'Notation y', should be present in the editor now, just as was shown for the sub element.

Now the matching of the variables must be specified.

Change to the Edit Matching tab. For the correct translation of the variables of 'Equation one' it is sufficient to specify a matching for b, c and the indexed variable. We will start with b.
Press [New], choose Sub variable Naming, and type b into the Tex Expression field. Press [Update MathML] and then [OK].
To specify the corresponding name, press [New], choose Super variable Naming, and type \beta into the Tex Expression field. Press [Update MathML] and then [OK].
Select b in the Sub Notation list and \beta in the Super Notation list and press [Match].

It is useful to test the translation results of the new connector. To do so, select the Test Connector tab and load the sub equation or equation system for which the translation should be tested. For this tutorial load 'Equation one' and look at the results.

Add another synonym for c (matched to \gamma), creating and matching the names as described in the previous step.

Now you could match the individual indexed names separately. However, this would not be very efficient. Instead you should match the indexed variable in a generic way: create a new Sub naming and a new Super naming for it and match them.

At all times you may look at the contents of the connector by selecting the View Connector tab.
Further, you may always look at the matching results in the Test Connector tab.

Save the connector as 'Connector one'.

Up to now the connector is sufficient for the above example case. To obtain a connector that is able to translate all symbols of 'Notation x' into symbols of 'Notation y' you also need to create the sub variable namings a and d and match them with the corresponding super variable namings.

Using the connector (during creation of the equation system)

Choose Equation System in the Editor Bar.
First load 'Notation y' into the Notation section of this editor.
To connect 'Equation one' press [Add].
Select 'Equation one' as usual in the Edit Connection to Element dialog but do not press [Submit] yet.
Choose the Naming Policy 'integrate'.
Still in the dialog, check the box for Use connector.
Press [Change] next to the now activated text field and choose the file for 'Connector one' that you created above.
Press [Submit].

The connector name is shown in the table row for 'Equation one'. Add 'Equation two' in the usual way without specifying any connector.

Differences when evaluating the equation system

In the Evaluate bar activate the Equation System tab and load the equation system as usual. You will see that the equations are presented in their original notation.
Specify the maximum index value (NC=2) in the Indexing tab.
Now have a look at the presentation in the Instance tab. Here, both equations are presented in the symbols corresponding to the super notation ('Notation y').
Change to the Variable Specification tab.
Select the variable that was matched to b; remember that this variable has the variable name b in the original notation.
Look at the Notation Information panel at the lower end of the editor. At the left hand side you see the symbol, and next to it an activated arrow button pointing down. Press this button [\downarrow]. You will see that the symbol changes to b, and the notation information at the right hand side shows the description for b provided by 'Notation x'. Using the arrow buttons you can change between the two namings. Pressing [T] displays the Top Level Naming, which is the naming that is actually used to distinguish the variables from each other.
Enter the variable specification given in the model and problem description and solve the problem.

'Integrate' case c: All connected equation systems use a different notation from the desired one.

This is the most general (but rarest) case. All connected sub elements use notations different from the super notation.

Equation System 'itgr c', Notation z
Base names (Name / Description): the names of Notation z, described as values, parameter 1, parameter 2, parameter 3 and parameter 4.
Indices: one value index with maximum value Nk.

Collaboration
Equations (\ref{eq:itgr:c:incl:one}) and (\ref{eq:itgr:c:incl:two}) exist in other projects. To avoid reimplementation, the existing equations (\ref{eq:itgr:b:used}) and (\ref{eq:itgr:b:eq:two:new}) are used here. Applicable connectors are used to translate between the notations.
Equation one, Notation x
Base names (Name / Description): the names of Notation x, described as values, parameter 1, parameter 2, parameter 3 and parameter 4.
Indices: i (value index), with maximum value Ni.

Connector two. Sub notation: Notation x. Super notation: Notation z. Value list:

Equation two, Notation y
Base names (Name / Description): the names of Notation y, described as values, parameter 1, parameter 2, parameter 3 and parameter 4.
Indices: one value index with maximum value Nj.

Connector three. Sub notation: Notation y. Super notation: Notation z. Value list:

Figure 3: Graph of the variable interpretation

Problem description
Design variables with values:
Iteration variables with guess values:
Expected result:

Equations and the notations
As a preparation for this tutorial the basic model elements must be present:
You may use 'Equation one' and 'Equation two' as well as 'Notation x' and 'Notation y' from the section "'Integrate' case b: A connected equation uses another notation."
Create 'Notation z' as specified above.

Creating the connectors
In the previous section it was explained how a connector can be created based on existing equations or equation systems. Now it will be explained how to create a connector based directly on the notations.

Select the Connector tab in the Editor Bar.
Activate the Set Notations tab.
In the section for the Sub Notation press the [Import] button of the associated file selection area. Select the file for 'Notation y'.
In the section for the Super Notation press the [Import] button. Select the file for 'Notation z'.
Switch to the Edit Matching tab.
Create the new sub variable namings of 'Notation y'. (For each variable naming press [New Sub], enter the latex expression in the appearing dialog, press [Update MathML] and then [OK].)
Create the new super variable namings A, B, C, D and the remaining name of 'Notation z'.
Match the variables according to the list given in the above model description for 'Connector two'.
Test the connector for 'Equation two' in the Test Connector tab.
Save the connector (Connector two).
Press [New] in the file panel of the Connector Editor.
Create 'Connector three' according to the above model description.

Using the connectors when creating the equation system
Activate the Equation System Editor and load 'Notation z'.
Press [Add]. In the Edit Connection to Element dialog load 'Equation one', use the Naming Policy 'integrate', click on Use connector and specify 'Connector two' in the corresponding file area. Press [Submit] to close the dialog.
Add 'Equation two', specifying 'Connector three' for its connection.
Save the equation system.

Differences in the evaluation
In the Evaluation Editor load the equation system and specify the maximum index value as 2 for instantiation.
In the Variable Specification tab select a variable. In the Notation Information Panel at the bottom, scroll through the different namings of this variable by using the arrow buttons. You should find the corresponding names from the sub notations as further namings; [T] brings you back to the top level naming.
Solve the simulation problem according to the data given above.
Last time we talked about string diagrams, which are a great way to draw morphisms in symmetric monoidal categories. But they're the most fun when we have the ability to take a diagram and bend its strings around, converting inputs to outputs or vice versa. And that's what 'compact closed' categories are for! A compact closed category is a symmetric monoidal category \(\mathcal{C}\) where every object \(x\) has a dual \(x^\ast\) equipped with two morphisms called the cap or unit $$ \cap_x \colon I \to x \otimes x^\ast $$and the cup or counit $$ \cup_x \colon x^\ast \otimes x \to I $$obeying two equations called the snake equations. You've seen these equations a couple times before! In Lecture 68 I was telling you about feedback in co-design diagrams: caps and cups let you describe feedback. I was secretly telling you that the category of feasibility relations was a compact closed category. In Lecture 71 I came back to this theme at a higher level of generality. Feasibility relations are just \(\mathcal{V}\)-enriched profunctors for \(\mathcal{V} = \mathbf{Bool}\), and in Lecture 71 I was secretly telling you that the category of \(\mathcal{V}\)-profunctors is always a compact closed category! But now I'm finally telling you what a compact closed category is in general. The snake equations are easiest to remember using string diagrams. In a compact closed category we draw arrows on strings in these diagrams as well as labeling them by objects. For any object \(x\) a left-pointing wire labelled \(x\) means the same as a right-pointing wire labelled \(x^\ast\). Thus, we draw the cap as This picture has no wires coming in at left, which says that the cap is a morphism from \(I\), the unit object of our symmetric monoidal category. It has two wires going out at right: the top wire, with a right-pointing arrow, stands for \(x\), while the bottom wire, with a right-pointing arrow, stands for \(x^\ast\), and together these tell us that the cap is a morphism to \(x \otimes x^\ast\). Similarly, we draw the cup as and this diagram, to the trained eye, says that the cup is a morphism from \(x^\ast \otimes x \) to \( I \). In this language, the snake equations simply say that we can straighten out a 'zig-zag': or a 'zag-zig': If we don't use string diagrams, these equations look more complicated. The first says that this composite morphism is the identity: where the unnamed isomorphisms are the inverse of the left unitor, the associator and the right unitor. The second says that this composite is the identity: where the unnamed isomorphisms are the inverse of the right unitor, the inverse of the associator, and the left unitor. These are a lot less intuitive, I think! One advantage of string diagrams is that they hide associators and unitors, yet let us recover them if we really need them. If you've faithfully done all the puzzles so far, you've proved the following grand result, which summarizes a lot of this chapter: Theorem. Suppose \(\mathcal{V}\) is a commutative quantale. Then the category \(\mathbf{Prof}_{\mathcal{V}}\), with \(\mathcal{V}\)-enriched categories as objects and \(\mathcal{V}\)-enriched profunctors as morphisms, is a compact closed category. But the most famous example of a compact closed category comes from linear algebra! It's the category \(\mathbf{FinVect}_k\), whose objects are finite-dimensional vector spaces over a field \(k\) and whose morphisms are linear maps. If you don't know about fields, you may still know about real vector spaces: that's the case \(k = \mathbb{R}\).
There's a tensor product \(V \otimes W\) of vector spaces \(V\) and \(W\), which has dimension equal to the dimension of \(V\) times the dimension of \(W\). And there's a dual \(V^\ast\) of a vector space \(V\), which is just the space of all linear maps from \(V\) to \(k\). Tensor products and dual vector spaces are very important in linear algebra. My main point here is that profunctors work a lot like linear maps: both are morphisms in some compact closed category! Indeed, the introduction of profunctors into category theory was very much like the introduction of linear algebra in ordinary set-based mathematics. I've tried to hint at this several times: the ultimate reason is that composing profunctors is a lot like multiplying matrices! This is easiest to see for the \(\mathcal{V}\)-enriched profunctors we've been dealing with. Composing these: $$ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z)$$ looks just like matrix multiplication, with \(\bigvee\) replacing addition in the field \(k\) and \(\otimes\) replacing multiplication. So it's not surprising that this analogy extends, with the opposite of a \(\mathcal{V}\)-enriched category acting like a dual vector space. If you're comfortable with tensor products and duals of vector spaces, you may want to solidify your understanding of compact closed categories by doing this puzzle: Puzzle 283. Guess what the cap and cup $$ \cap_V \colon k \to V \otimes V^\ast, \qquad \cup_V \colon V^\ast \otimes V \to k $$ are for a finite-dimensional vector space \(V\), and check your guess by proving the snake equations. Here are some good things to know about compact closed categories: Puzzle 284. Using the cap and cup, any morphism \(f \colon x \to y \) in a compact closed category gives rise to a morphism from \(y^\ast\) to \(x^\ast\). This amounts to 'turning \(f\) around' in a certain sense, and we call this morphism \(f^\ast \colon y^\ast \to x^\ast \). Write down a formula for \(f^\ast\) and also draw it as a string diagram. Puzzle 285. Show that \( (fg)^\ast = g^\ast f^\ast \) for any composable morphisms \(f\) and \(g\), and show that \( (1_x)^\ast = 1_{x^\ast} \) for any object \(x\). Puzzle 286. What is a slick way to state the result in Puzzle 285? Puzzle 287. Show that if \(x\) is an object in a compact closed category, \( (x^\ast)^\ast\) is isomorphic to \(x\).
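To see the "composition is matrix multiplication" point concretely, here is a small Python sketch for the simplest case \(\mathcal{V} = \mathbf{Bool}\) (feasibility relations), where \(\bigvee\) is logical OR and \(\otimes\) is logical AND. The particular relations `Phi` and `Psi` below are made-up illustrations, not anything from the lecture.

```python
import numpy as np

# Bool-enriched profunctors as boolean matrices:
#   (Psi Phi)(x, z) = OR over y of ( Phi(x, y) AND Psi(y, z) )
Phi = np.array([[True, False, True],     # Phi(x, y): 2 objects x, 3 objects y
                [False, True, False]])
Psi = np.array([[True, False],           # Psi(y, z): 3 objects y, 2 objects z
                [False, True],
                [True, True]])

def compose(Phi, Psi):
    """Composite profunctor: join (OR) over the middle objects of a meet (AND)."""
    nx, ny, nz = Phi.shape[0], Psi.shape[0], Psi.shape[1]
    out = np.zeros((nx, nz), dtype=bool)
    for x in range(nx):
        for z in range(nz):
            out[x, z] = any(Phi[x, y] and Psi[y, z] for y in range(ny))
    return out

print(compose(Phi, Psi))
# The same thing written as an ordinary matrix product (nonzero entries = True):
print((Phi.astype(int) @ Psi.astype(int)) > 0)
```

The second print shows the analogy literally: replace OR by addition and AND by multiplication and you are doing matrix multiplication over the booleans.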
1. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4. Journal Article.

2. Measurement of the ZZ production cross section in proton–proton collisions at $\sqrt{s}=8$ TeV using the $ZZ\to\ell^-\ell^+\ell'^-\ell'^+$ and $ZZ\to\ell^-\ell^+\nu\bar{\nu}$ decay channels with the ATLAS detector. Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1-53. A measurement of the ZZ production cross section in the $\ell^-\ell^+\ell'^-\ell'^+$ and $\ell^-\ell^+\nu\bar{\nu}$ channels ($\ell$ = e, $\mu$) in proton–proton collisions at $\sqrt{s}=8$ TeV at the Large Hadron... Subjects: Hadron-Hadron scattering (experiments) | Physical Sciences | Subatomic Physics | Natural Sciences. Journal Article.

3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the $\ell^+\ell^-b\bar{b}$, $\ell\nu b\bar{b}$, and $\nu\bar{\nu}b\bar{b}$ channels with pp collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32-52. Journal Article.

4. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. The European Physical Journal C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1-34. Journal Article.

5. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1-34. A search for heavy resonances decaying into a pair of Z bosons leading to $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$... Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article.

6. $ZZ\to\ell^+\ell^-\ell'^+\ell'^-$ cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector. Physical Review D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3. Measurements of ZZ production in the $\ell^+\ell^-\ell'^+\ell'^-$ channel in proton–proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are... Subjects: PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Couplings | Large Hadron Collider | Particle collisions | Transverse momentum | Sensors | Cross sections | Bosons | Muons | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Physical Sciences | Natural Sciences. Journal Article.

7. Measurement of the ZZ production cross section in proton–proton collisions at $\sqrt{s}=8$ TeV using the $ZZ\to\ell^-\ell^+\ell'^-\ell'^+$ and $ZZ\to\ell^-\ell^+\nu\bar{\nu}$ decay channels with the ATLAS detector. Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1-53. A measurement of the ZZ production cross section in the $\ell^-\ell^+\ell'^-\ell'^+$ and $\ell^-\ell^+\nu\bar{\nu}$ channels ($\ell$ = e, $\mu$) in... Subjects: Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment. Journal Article.

8. Measurement of the ZZ production cross section and $Z\to\ell^+\ell^-\ell'^+\ell'^-$ branching fraction in pp collisions at $\sqrt{s}=13$ TeV. Physics Letters B, ISSN 0370-2693, 12/2016, Volume 763, Issue C, pp. 280-303. Journal Article.

9. 2014, 1st edition, ISBN 9781941806401, 165. Book.

10. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector. European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4. A search for heavy resonances decaying into a pair of Z bosons leading to $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states, where $\ell$ stands for... Subjects: DISTRIBUTIONS | BOSON | WZ | DECAY | MASS | TAUOLA | TOOL | PHYSICS, PARTICLES & FIELDS | Physical Sciences | Natural Sciences. Journal Article.
Yes, a neural network plus loss function can be viewed as a function composition as you have written, and back propagation is just the chain rule applied repeatedly. Your equations $L = f(g(h(\dots u(v(\dots))\dots)))$ and $\frac{\partial L}{\partial v} = \frac{\partial f}{\partial g}\frac{\partial g}{\partial h}\dots\frac{\partial u}{\partial v}$ are useful for high level intuition. At some point you need to look at the actual forms of the functions, and that introduces a bit more complexity. For instance, the loss function for a mini-batch can be expressed in terms of the output. This is the equivalent of $L = f(g)$ for a batch or mini-batch with mean squared error loss, where I am using $J$ (for cost) in place of $L$ and the output of the neural network $\hat{y}$ in place of $g$: $$J = \frac{1}{2N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$$ The gradient of $J$ with respect to $\hat{y}$ is equivalent to your first part $\frac{\partial f}{\partial g}$; component-wise it is: $$\frac{\partial J}{\partial \hat{y}_i} = \frac{1}{N}(\hat{y}_i - y_i)$$ Many of the functions in a neural network involve sums over terms. They can be expressed as vector and matrix operations, which can make them look simpler, but you still need to have code somewhere that works through all the elements. There is one thing that the function composition view does not show well. The gradient you want to calculate is $\nabla_{\theta} J$, where $\theta$ represents all the parameters of the neural network (weights and biases). The parameters in each layer are end points of back propagation - they are not functions of anything else, and the chain rule has to stop with them. That means you have a series of "dead ends" - or possibly another way of thinking about it would be that your $\frac{\partial L}{\partial v} = \frac{\partial f}{\partial g}\frac{\partial g}{\partial h}\dots\frac{\partial u}{\partial v}$ is the "trunk" of the algorithm that links layers together, and every couple of steps there is a "branch" to calculate the gradient of the parameters that you want to change. More concretely, if you have weight parameters in each layer noted as $W^{(n)}$, and two functions for each layer (the sum over weights times inputs, and an activation function) then your example ends up looking like this progression: $$\frac{\partial L}{\partial W^{(n)}} = \frac{\partial f}{\partial g}\frac{\partial g}{\partial h}\frac{\partial h}{\partial W^{(n)}}$$ $$\frac{\partial L}{\partial W^{(n-1)}} = \frac{\partial f}{\partial g}\frac{\partial g}{\partial h}\frac{\partial h}{\partial i}\frac{\partial i}{\partial j} \frac{\partial j}{\partial W^{(n-1)}}$$ $$\frac{\partial L}{\partial W^{(n-2)}} = \frac{\partial f}{\partial g}\frac{\partial g}{\partial h}\frac{\partial h}{\partial i}\frac{\partial i}{\partial j}\frac{\partial j}{\partial k}\frac{\partial k}{\partial l} \frac{\partial l}{\partial W^{(n-2)}}$$ . . . this is the same idea but showing the goal of calculating $\nabla_{W} L$, which doesn't fit into a single chain of gradients. Notice the last term on each line does not appear in the next line. However, you can keep all the terms prior to that and re-use them in the next line - this matches the layer-by-layer calculations in many implementations of back propagation.
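To make the "trunk and branches" picture concrete, here is a minimal numpy sketch (my own illustrative code, not from the answer above; the two-layer architecture, sigmoid activation and variable names are assumptions) that propagates the shared upstream gradient once and branches off to each layer's weight gradient, with a finite-difference check of one entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: sigmoid hidden layer, linear output, MSE loss.
# Shapes: x (N, d), W1 (d, h), W2 (h, 1).
N, d, h = 8, 3, 4
x = rng.normal(size=(N, d))
y = rng.normal(size=(N, 1))
W1 = rng.normal(size=(d, h)) * 0.1
W2 = rng.normal(size=(h, 1)) * 0.1

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Forward pass (intermediates are kept for the backward pass).
z1 = x @ W1            # pre-activation of layer 1
a1 = sigmoid(z1)       # hidden activation
y_hat = a1 @ W2        # network output
J = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: the "trunk" gradient is pushed back layer by layer...
dJ_dyhat = (y_hat - y) / N          # dJ/d y_hat (component-wise)
dJ_dz2 = dJ_dyhat                   # output layer is linear
dJ_dW2 = a1.T @ dJ_dz2              # ...branch: gradient for W2
dJ_da1 = dJ_dz2 @ W2.T              # trunk continues to the hidden layer
dJ_dz1 = dJ_da1 * a1 * (1 - a1)     # through the sigmoid derivative
dJ_dW1 = x.T @ dJ_dz1               # ...branch: gradient for W1

# Finite-difference check of one entry of W1, to confirm the bookkeeping.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
Jp = 0.5 * np.mean((sigmoid(x @ W1p) @ W2 - y) ** 2)
print(dJ_dW1[0, 0], (Jp - J) / eps)   # the two numbers should be close
```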
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$.Let's denote the width of a sample by $h$ where $$h\rightarrow0$$Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a... @Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes that I don't know the identities of $\pi (n)$ well @Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$. However, you have two cases to consider. The first where $\binom{2}{p}=-1$ and $\binom{3}{p}=-1$ (In which case what does $\binom{6}{p}$ equal?) and the case where one or the other of $\binom{2}{p}$ and $\binom{3}{p}$ equals 1. Also, probably something useful for congruence, if you didn't already know: If $a_1\equiv b_1\text{mod}(p)$ and $a_2\equiv b_2\text{mod}(p)$, then $a_1a_2\equiv b_1b_2\text{mod}(p)$ Is there any book or article that explains the motivations of the definitions of group, ring , field, ideal etc. of abstract algebra and/or gives a geometric or visual representation to Galois theory ? Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician.== Life and work ==Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760 - about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son... I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying. UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton. hi there, I need to linearize nonlinear system about a fixed point. I've computed the jacobain matrix but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0 Consider the following integral: $\int 1/4*(1/(1+(u/2)^2)))dx$ Why does it matter if we put the constant 1/4 behind the integral versus keeping it inside? The solution is $1/2*\arctan{(u/2)}$. Or am I overseeing something? *it should be du instead of dx in the integral **and the solution is missing a constant C of course Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$? My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical. My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction. 
Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on. "... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.) Ignore my question. I'm coming to the realization that it's just not working how I would've hoped, so I'll just go with what I had before.
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin. Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle. Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. 
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something. this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$ but then we can replace all of those $U_i$'s with balls, incurring some fixed error In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid... @BalarkaSen what is this ok but this does confirm that what I'm trying to do is wrong haha In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas... Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
Titles and Abstracts

Elke Brendel, Truthmaker Maximalism and the Truthmaker-Paradox
Truthmaker maximalism (TMM) is the view that every true sentence has a truthmaker, where a truthmaker of a sentence σ is conceived as an entity whose mere existence necessitates the truth of σ. Peter Milne has attempted to refute TMM via a self-referential sentence M "saying" of itself that it doesn't have a truthmaker. Milne argues that M turns out to be a true sentence without a truthmaker and thus provides a counterexample to TMM. In contrast to Milne's argument, it will be shown that M leads to a provable contradiction no matter how one assesses TMM. M turns out to be a liar-like sentence that gives rise to a Truthmaker-Paradox (TP). TP is an interesting paradox in its own right, but has no significance for the still virulent question of whether TMM is a philosophically plausible account.

Andrea Cantini, From self-reference to unfoundedness
As a starting point, we reconsider the paradoxes of Russell-Zermelo vs. Burali-Forti and Mirimanoff. We then deal with Yablo's paradox by projecting it into different formal frameworks. In particular we discuss: (i) a recent attempt to understand structural aspects underlying Yablo's paradox; (ii) the same paradox in a classical theory of Frege structures and in a theory of stratified truth.

Michał Godziszewski, On some metalogical properties of Visser-Yablo sequences and potential infinity (workshop)
You can find the abstract here.

Volker Halbach, Self-reference in Arithmetic?
Many sentences in arithmetic that are of metamathematical interest are described as self-referential. In particular, the Gödel sentence is described as a sentence that claims its own unprovability, while the Henkin sentence is described as a sentence claiming its own provability. More recently philosophers have discussed whether Visser-Yablo sentences are self-referential. The notion of self-reference, however, seems blurry in these discussions; in other cases the analysis of self-reference seems just inadequate. The emphasis of the talk will be on the consequences of the analysis of self-reference for formal results in metamathematics.

Joel David Hamkins, Self-reference in computability theory and the universal algorithm
I shall give an elementary account of the universal algorithm, due to Woodin, showing how the capacity for self-reference in arithmetic gives rise to a Turing machine program e, which provably enumerates a finite set of numbers, but which can in principle enumerate any finite set of numbers, when it is run in a suitable model of arithmetic. Furthermore, the algorithm can successively enumerate any desired extension of the sequence, when run in a suitable top-extension of the universe. Thus, the algorithm sheds some light on the debate between free will and determinism, if one should imagine extending the universe into a nonstandard time scale.
An analogous result holds in set theory, where Woodin and I have provided a universal locally definable finite set, which can in principle be any finite set, in the right universe, and which can furthermore be successively extended to become any desired finite superset of that set in a suitable top-extension of that universe. Comments and inquiries can be made on the speaker's blog at http://jdh.hamkins.org/self-reference-in-the-universal-algorithm.

Deborah Kant, Set theory on both the meta and the object level (workshop)
This introductory talk is based on the observation that we use set theory to argue about set-theoretic theories and their models. Thus, in certain contexts we employ set theory on both the meta and the object level. For example, a model of ZFC can 'talk about itself', and it is possible, in ZFC, to 'talk about ZFC itself'. This situation makes it difficult to clearly delineate the object and the meta levels in set-theoretic discourses. We approach this problem by considering specific examples, and ask the question whether an identification of the meta and the object level in set theory is possible or not.

Peter Koepke, Self-Reference in Set Theory
The "Self" is a multifaceted notion in Psychology and Philosophy, central to human life and society, but also leading into bewildering conundrums. In mathematics, theories like set theory and even number theory allow versions of self-reference which lead to inconsistencies, paradoxes and incompletenesses. In my talk I shall show that prudent uses of self-reference account for strong principles. Induction, recursion, ordinal numbers and large cardinals involve substantial self-referentiality. I shall in particular consider elementary maps of models of set theory that can be applied to themselves.

Lavinia Picollo, Truth, Reference and Disquotation
I first provide intuitively appealing notions of reference, self-reference, and well-foundedness of sentences of the language of first-order Peano arithmetic extended with a truth predicate. They are intended as a tool for studying reference patterns that underlie expressions leading to semantic paradox, and thus to shed light on the debate on whether every paradox formulated in a first-order language involves self-reference or some other vicious reference pattern. I use proof-theoretic versions of the new notions to formulate sensible restrictions on the acceptable instances of the T-schema, to carry out the disquotationalist project. Since the concept of reference I put forward is proof-theoretic - i.e., it turns to the provability predicate rather than the truth predicate - and, therefore, arithmetically definable, it can be used to provide recursive axiomatizations of truth. I show the resulting systems are $\omega$-consistent and as strong as Tarski's theory of ramified truth iterated up to $\epsilon_0$.
Graham Priest, This sentence is not provable
Gödel's first Incompleteness Theorem is often phrased as: any (sufficiently strong) axiomatic arithmetic is incomplete. This is inaccurate: what Gödel's proof shows is that it is either incomplete or inconsistent. Of course, if classical logic is used, then inconsistency implies triviality; so it is natural enough to ignore the inconsistency alternative. However, this is not so if a paraconsistent logic is used. Indeed, it is now known that there are axiomatic arithmetics that are complete, inconsistent, and non-trivial. In this talk, I will explain what these theories are like, and then turn to the question of what one should make of their existence philosophically.

Lorenzo Rossi, A theory of paradoxes, with an application to some variants of Yablo's and Visser's paradoxes (workshop)
Naïveté about truth has it that for every sentence φ (of a sufficiently rich language), φ and 'φ is true' are in some sense equivalent. Naïveté about truth is known to generate a wide range of semantic paradoxes, including the Liar, the Truth-teller, Curry's Paradox and many more. In this paper, I will explore the wide range of semantic behaviours displayed by paradoxical sentences, and the resources that are required to evaluate them. I will propose a new semantic theory of paradoxes that distinguishes between four main kinds of paradoxical sentences (Liar-like paradoxes, Truth-teller-like paradoxes, Revenge paradoxes, and Non-well-founded paradoxes), and gives their semantics in one single model. At the same time, the theory offers a simple formal explication of the notions of self-reference, circularity, and well-foundedness. As an application of the proposed theory, I will investigate in more detail the semantics of (versions of) Yablo's and Visser's paradoxes. I will argue that some Yablo- and Visser-like paradoxes are circular but well-founded, while others are non-circular but non-well-founded.

Thomas Schindler, A graph-theoretic analysis of the semantic paradoxes
We introduce a framework for a graph-theoretic analysis of the semantic paradoxes. Similar frameworks for infinitary propositional languages have been investigated by Cook (2004, 2014), Rabern, Rabern, and Macauley (2013), and others. Our focus, however, will be on the language of first-order arithmetic augmented with a primitive truth predicate. Using Leitgeb's (2005) notion of semantic dependence, we assign reference graphs (rfgs) to the sentences of this language and define a notion of paradoxicality in terms of acceptable decorations of rfgs with truth values. It is shown that this notion of paradoxicality coincides with that of Kripke (1975). In order to track down the structural components of an rfg that are responsible for paradoxicality, we show that any decoration can be obtained in a three-stage process: first, the rfg is unfolded into a tree, second, the tree is decorated with truth values (yielding a dependence tree in the sense of Yablo (1982)) and third, the decorated tree is re-collapsed onto the rfg. We show that paradoxicality enters the picture only at stage three.
Due to this we can isolate two basic patterns necessary for paradoxicality. Moreover, we conjecture a solution to the characterization problem for dangerous rfgs that amounts to the claim that basically the Liar- and the Yablo-graph are the only paradoxical rfgs. Furthermore, we develop signed rfgs that allow us to distinguish between 'positive' and 'negative' reference and obtain more fine-grained versions of our results for unsigned rfgs.

Albert Visser, Varieties of Self-Referential Experience
My talk is a prolegomenon to Volker Halbach's talk. It consists of two parts. First, I discuss the usual construction of (supposedly) Self-Referential sentences in theories that interpret a modicum of arithmetic. I will briefly digress on the idea of a Gödel-numbering with built-in Self-Reference. Secondly, I will present the story of Kreisel's and Löb's discoveries concerning the Henkin sentence. What are the methodological and philosophical morals to draw from this story? We will have a look at a possible defect of Kreisel's paper even by its own internal standards. I briefly consider variations like Rosser-style Henkin sentences and $\Sigma^0_1$-truth-tellers.

Philip Welch, Higher types of recursion: revision theoretic definitions and infinite time Turing machines
The connection between the Herzbergerian flavour of the revision theory of circular definitions, and infinite time Turing machines (ITTMs), is now well understood. We study here higher type versions of these, where revisions are allowed to call on oracles that are themselves the result of revisions of this kind. In the ITTM arena, we can use ideas of Kleene's higher type recursion with ITTMs, instead of ordinary Turing machines, situated on wellfounded trees of subcomputations. We consider how low levels of determinacy in the arithmetic hierarchy can correspond to the halting problem for certain such recursions.
Permanent link: https://www.ias.ac.in/article/fulltext/joaa/040/02/0009 We report optical observations of TGSS J1054$+$5832, a candidate high-redshift ($z = 4.8 \pm 2$) steep-spectrum radio galaxy, in $r$ and $i$ bands, using the faint object spectrograph and camera mounted on the 3.6-m Devasthal Optical Telescope (DOT). The source, previously detected at 150 MHz from the Giant Metrewave Radio Telescope (GMRT) and at 1420 MHz from the Very Large Array, has a known counterpart in near-infrared bands with a $K$-band magnitude of AB 22. The source is detected in the $i$-band with AB 24.3 $\pm$ 0.2 magnitude in the DOT images presented here. The source remains undetected in the $r$-band image at a 2.5$\sigma$ depth of AB 24.4 mag over a $1.2^{\prime\prime}\times 1.2^{\prime\prime}$ aperture. An upper limit to the $i-K$ color is estimated to be $\sim$2.3, suggesting youthfulness of the galaxy with active star formation. These observations highlight the importance and potential of the 3.6-m DOT for detections of faint galaxies.
Search Now showing items 1-10 of 34 Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. 
Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
I have a function that generates possible actions, so that I would have a Q-table in form of a nested dictionary, with states (added as they occur) as keys whose values are also dictionaries of possible actions as keys and q-values as values. Is this possible? How would it affect learning? What other method can I use? (Disclaimer: I provided this suggestion to OP here as an answer to a question on Data Science Stack Exchange) Yes this is possible. Assuming you have settled on all your other decisions for Q learning (such as rewards, discount factor or time horizon etc), then it will have no logical impact on learning compared to any other approach to table building. The structure of your table has no relevance to how Q learning converges, it is an implementation detail. This choice of structure may have a performance impact in terms of how fast the code runs - the design works best when it significantly reduces memory overhead compared to using a tensor that over-specifies all possible states and actions. If all parts of the state vector could take all values in any combination, and all states had the same number of allowed actions, then a tensor model for the Q table would likely be more efficient than a hash. I want to update Q-value, but the next state is one that was never there before and has to be added to my nested dictionary with all possible actions having initial Q-values of zero; how do I update the Q-value, now that all of the actions in this next state have Q-values of zero. I assume you are referring to the update rule from single step Q learning: $$Q(s,a) \leftarrow Q(s,a) + \alpha(r + \gamma \text{max}_{a'} Q(s',a') - Q(s,a))$$ What do you do when you first visit $(s,a)$, and want to calculate $\text{max}_{a'} Q(s',a')$ for the above update, yet all of $Q(s',a') = 0$, because you literally just created them? What you do is use the zero value in your update. There is no difference whether you create entries on demand or start with a large table of zeroes. The value of zero is your best estimate of the action value, because you have no data. Over time, as the state and action pair are visited multiple times, perhaps across multiple episodes, the values from experience will back up over time steps due to the way that the update formula makes a link between states $s$ and $s'$. Actually you can use any arbitrary value other than zero. If you have some method or information from outside of your reinforcement learning routine, then you could use that. Also, sometimes it helps exploration if you use optimistic starting values - i.e. some value which is likely higher than the true optimal value. There are limits to that approach, but it's a quick and easy trick to try and sometimes it helps explore and discover the best policy more reliably.
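As a concrete illustration of the nested-dictionary idea, here is a minimal Python sketch (my own illustrative code, not from the question or answer above) where the Q-table rows are created lazily on first access with all action values at zero, and the single-step update from the formula above is applied directly.

```python
import random

# Q-table as a nested mapping: state -> {action: q_value}.
# A state's row is created on first access, with every legal action at 0.0.

def make_q_table(possible_actions):
    """possible_actions(state) -> list of actions that are legal in that state."""
    table = {}
    def q_values(state):
        if state not in table:
            table[state] = {a: 0.0 for a in possible_actions(state)}
        return table[state]
    return table, q_values

def q_update(q_values, s, a, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """Single-step Q-learning update; max over a freshly created row is simply 0."""
    target = r if done else r + gamma * max(q_values(s_next).values())
    q_values(s)[a] += alpha * (target - q_values(s)[a])

def epsilon_greedy(q_values, s, eps=0.1):
    row = q_values(s)
    if random.random() < eps:
        return random.choice(list(row))
    return max(row, key=row.get)

# Tiny usage example with made-up states and actions:
actions = lambda state: ["left", "right"]
table, q = make_q_table(actions)
q_update(q, s=(0, 0), a="right", r=1.0, s_next=(0, 1))   # (0, 1) created here, all zeros
print(epsilon_greedy(q, (0, 0)))
print(table)
```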
Following Dan Schmidt's post 5, it seems that if \\(f_{\ast}\\) has a left adjoint \\(g_{\ast}\\), then \\(f\\) must be injective. Otherwise, if \\(f\\) is not injective, there are elements \\(a_1, a_2 \in A, a_1 \neq a_2\\) such that \\(f(a_1) = b = f(a_2)\\). Then, \\(f_{\ast}\\) sends both \\(\\{a_1\\}\\) and \\(\\{a_2\\}\\) to \\(\\{b\\}\\). The left adjoint \\(g_{\ast}: PY \to PX\\) must then send \\(\\{b\\}\\) to something at most \\(\\{a_1\\} \wedge \\{a_2\\} = \emptyset\\), which must be \\(\emptyset\\) itself. But then we would have \\(\emptyset = g_{\ast}(\\{b\\}) \leq \emptyset\\) while \\(\\{b\\} \nleq f_{\ast}(\emptyset) = \emptyset\\), contradicting our hypothesis.
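The necessity argument above can be sanity-checked by brute force on a small example. The Python sketch below (illustrative only; the sets and the non-injective map f are made up) enumerates every candidate map \\(g : PY \to PX\\) and confirms that none of them satisfies the adjunction condition \\(g(S) \subseteq T \iff S \subseteq f_{\ast}(T)\\).

```python
from itertools import combinations, product

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# A non-injective f: X -> Y, as in the argument above (a1 and a2 both map to b).
X, Y = {"a1", "a2", "a3"}, {"b", "c"}
f = {"a1": "b", "a2": "b", "a3": "c"}

PX, PY = powerset(X), powerset(Y)
f_star = lambda T: frozenset(f[t] for t in T)     # direct image P(X) -> P(Y)

def is_left_adjoint(g):
    # g is left adjoint to f_star iff  g(S) <= T  <=>  S <= f_star(T)  for all S, T.
    return all((g[S] <= T) == (S <= f_star(T)) for S in PY for T in PX)

# Enumerate every function g: P(Y) -> P(X)  (8^4 = 4096 candidates here).
found = any(is_left_adjoint(dict(zip(PY, values)))
            for values in product(PX, repeat=len(PY)))
print("some left adjoint exists:", found)   # prints False for this non-injective f
```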
Let $B=(B_t)_{0\le t\le T}$ be a continuous semi-martingale and $\mathbb F=(\mathcal F_t)_{0\le t\le T}$ be its natural filtration. Denote by $\mathcal C_b(\Omega\times \mathbb R_+)$ the space of continuous bounded functions $F:\Omega\times \mathbb R_+ \to \mathbb R$, where $\Omega$ denotes the space of continuous functions defined on $[0,T]$ endowed with the uniform norm, and $\Omega\times\mathbb R_+$ is equipped with the product topology. Now, given a sequence of $\mathbb F-$stopping times $(\tau_n)_{n\ge1}$ s.t. there exists a random variable $\tau$ satisfying $$\lim_{n\to \infty}\mathbb E[F(B,\tau_n)]~~=~~\mathbb E[F(B,\tau)],~~ \mbox{ for all } F\in\mathcal C_b(\Omega \times \mathbb R_+).$$ My question is: could we show that $\tau$ is an $\mathbb F-$stopping time? If not, what about assuming that $B$ is Markov or even a Brownian motion? Thanks a lot for the reply!
Let $G$ be a connected, reductive group over an algebraically closed field $k$. Let $B$ be a Borel subgroup with maximal torus $T$ and unipotent radical $U$. Let $\Phi^+ = \Phi(B,T)$ and $\Delta$ the base of $\Phi = \Phi(G,T)$ corresponding to $\Phi^+$. A subset $\Psi$ of $\Phi^+$ is called closed if whenever $\alpha, \beta \in \Psi$, and $\alpha + \beta$ is a root, we have $\alpha + \beta \in \Psi$. If $\Psi$ is closed, then the root subgroups $U_{\alpha} : \alpha \in \Psi$ directly span a closed, connected subgroup $U_{\Psi}$ of $U$ which is normalized by $T$ and whose Lie algebra is $\bigoplus\limits_{\alpha \in \Psi} \mathfrak g_{\alpha}$. Furthermore, if $\alpha \in \Phi^+$, and $\Psi \subseteq \Phi^+$ is closed, and all roots of the form $a \alpha + b \Psi$ for $a, b \in \mathbb{Z}^+$ lie in $\Psi$, then the root subgroup $U_{\alpha}$ normalizes $U_{\Psi}$. These facts are proved in Chapter 14 of Borel, Linear Algebraic Groups. In particular, let $\Psi = \Phi^+ - \Delta$. It is easy to see that $\Psi$ is closed and normalized by every root subgroup $U_{\alpha} : \alpha \in \Phi^+$, and in particular, $U_{\Phi^+ - \Delta}$ is a normal subgroup of $U$. Moreover, it is a consequence of 8.32 in Springer's Linear Algebraic Groups that for any $x \in U_{\alpha}$ and $y \in U_{\beta}$, the commutator $xyx^{-1}y^{-1}$ lies in $U_{\Phi^+ - \Delta}$. From here one can argue that $U_{\Phi^+ - \Delta}$ contains the derived group of $U$ by producing a homomorphism $U \rightarrow \prod\limits_{\alpha \in \Delta} \mathbf G_a$ with kernel $U_{\Phi^+ - \Delta}$. My question is, is $U_{\Phi^+ - \Delta}$ exactly the derived group of $U$? For $G = \textrm{GL}_n$, this does seem to be the case.
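For the $\textrm{GL}_n$ case mentioned at the end, both containments can at least be sanity-checked numerically (this is only an illustration with random integer unitriangular matrices, not a proof): commutators of elements of $U$ vanish on the first superdiagonal, i.e. land in $U_{\Phi^+ - \Delta}$, and for $n = 3$ any prescribed $(1,3)$ entry already arises as a single commutator of elementary matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_unitriangular(n):
    """Random upper unitriangular integer matrix (an element of U for GL_n)."""
    A = np.triu(rng.integers(-3, 4, size=(n, n)), k=1)
    return A + np.eye(n, dtype=int)

def commutator(A, B):
    Ainv = np.round(np.linalg.inv(A)).astype(int)   # unitriangular => integer inverse
    Binv = np.round(np.linalg.inv(B)).astype(int)
    return A @ B @ Ainv @ Binv

# [U, U] lands in U_{Phi^+ - Delta}: the first superdiagonal of a commutator is zero.
for _ in range(100):
    C = commutator(random_unitriangular(n), random_unitriangular(n))
    assert all(C[i, i + 1] == 0 for i in range(n - 1))

# Conversely, for GL_3 every element of U_{Phi^+ - Delta} (only a (1,3) entry)
# is a single commutator: [E_12(a), E_23(b)] = E_13(ab).
def E(i, j, t, n=3):
    M = np.eye(n, dtype=int)
    M[i, j] = t
    return M

a, b = 2, -5
print(commutator(E(0, 1, a), E(1, 2, b)))   # identity plus a*b in the (1,3) slot
```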
The 3x3 matrix addition calculator uses two $3\times 3$ matrices and calculates their sum. It is an online math tool specially programmed to perform matrix addition between two $3\times 3$ matrices. Matrices are a powerful tool in mathematics, science and life. Matrices are everywhere and they have significant applications. For example, a spreadsheet such as Excel, or a written table, represents a matrix. The word "matrix" is Latin and means "womb". The term was introduced by J. J. Sylvester (an English mathematician) in 1850. The first need for matrices arose in the study of systems of simultaneous linear equations. A matrix is a rectangular array of numbers, arranged in the following way $$A=\left( \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right)=\left[ \begin{array}{cccc} a_{11} & a_{12} & \ldots&a_{1n} \\ a_{21} & a_{22} & \ldots& a_{2n} \\ \ldots &\ldots &\ldots&\ldots\\ a_{m1} & a_{m2} & \ldots&a_{mn} \\ \end{array} \right]$$ There are two notations for a matrix: parentheses or box brackets. The terms in the matrix are called its entries or its elements. Matrices are most often denoted by upper-case letters, while the corresponding lower-case letters, with two subscript indices, denote their elements. For example, matrices are denoted by $A,B,\ldots, Z$ and their elements by $a_{11}$ or $a_{1,1}$, etc. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. The size of a matrix is given by the number of rows and the number of columns it contains. A matrix with $m$ rows and $n$ columns is called an $m\times n$ matrix; in this case $m$ and $n$ are its dimensions. If a matrix consists of only one row, it is called a row matrix. If a matrix consists of only one column, it is called a column matrix. A matrix which contains only zeros as elements is called a zero matrix. Matrices $A$ and $B$ can be added if and only if their sizes are equal. Their sum is the matrix $C=A+B$ with elements $$c_{ij}=a_{ij}+b_{ij}$$ The sum matrix has the same size as the matrices $A$ and $B$. This means that each element in $C$ is equal to the sum of the elements in $A$ and $B$ that are located in corresponding places. For example, $c_{13}=a_{13}+b_{13}$. If two matrices have different sizes, their sum is not defined. It is easy to prove that $A+B=B+A$; in other words, the addition of matrices is a commutative operation. A $3\times 3$ matrix has $3$ columns and $3$ rows.
For example, the sum of two $3\times 3$ matrices $A$ and $B$ is a matrix $C$ such that $$\begin{align} C=&\left( \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{array} \right)+ \left( \begin{array}{ccc} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} &b_{32} & b_{33} \\ \end{array} \right)= \left(\begin{array}{ccc} a_{11}+b_{11}& a_{12}+b_{12}& a_{13}+b_{13} \\ a_{21}+b_{21} &a_{22}+b_{22}& a_{23}+b_{23} \\ a_{31}+b_{31} &a_{32}+b_{32} & a_{33}+b_{33} \\ \end{array}\right)\end{align}$$ For example, let us find the sum for $$A=\left( \begin{array}{ccc} 10 & 20 & 10 \\ 4 & 5 & 6 \\ 2 & 3 & 5 \\ \end{array} \right)\quad\mbox{and}\quad B=\left( \begin{array}{ccc} 3 & 2 & 4 \\ 3 & 3 & 9 \\ 4 & 4 & 2 \\ \end{array} \right)$$ Using the matrix addition formula, the sum of the matrices $A$ and $B$ is the matrix $$A+B=\left( \begin{array}{ccc} 10+3 & 20+2 & 10+4 \\ 4+3 & 5+3 & 6+9 \\ 2+4 & 3+4 & 5+2 \\ \end{array} \right)=\left( \begin{array}{ccc} 13 & 22 & 14 \\ 7 & 8 & 15 \\ 6 & 7 & 7 \\ \end{array} \right)$$ The $3\times 3$ matrix addition work with steps shows the complete step-by-step calculation for finding the sum of two $3\times3$ matrices $A$ and $B$ using the matrix addition formula. For any other matrices, just supply the elements of the $2$ matrices as real numbers and click on the GENERATE WORK button. Grade school students may use this $3\times 3$ matrix addition calculator to generate the work, verify the results of adding matrices derived by hand, or do their homework problems efficiently. Practice Problem 1 : Find the sum of matrices $$X=\left( \begin{array}{ccc} -4 & 2 & 10 \\ -7 & 15 & -6 \\ 12 & 0 & 5 \\ \end{array} \right)\quad\mbox{and}\quad Y=\left( \begin{array}{ccc} -13 & 0 & -4 \\ 13 & 0 & 0 \\ 45 & 8 & 2 \\ \end{array} \right)$$ Practice Problem 2 : Given triangle $\Delta ABC$ in the three dimensional coordinate system with $A(0,0,1),$ $B(3,6,2)$ and $C(-4,6,7)$, translate this triangle by the vector $\vec a=(1,2,4)$. The 3x3 matrix addition calculator, formula, example calculation (work with steps), real world problems and practice problems would be very useful for grade school students (K-12 education) to understand the addition of two or more matrices. Using this concept they will be able to look at real life situations and transform them into mathematical models.
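For readers who prefer code to hand calculation, here is a small C# sketch, written for this page's method rather than taken from the calculator itself, that adds two matrices element by element exactly as in the formula $c_{ij}=a_{ij}+b_{ij}$ above.

using System;

public static class MatrixAddition
{
    // Adds two matrices of equal size element by element: c[i,j] = a[i,j] + b[i,j].
    public static double[,] Add(double[,] a, double[,] b)
    {
        int rows = a.GetLength(0), cols = a.GetLength(1);
        if (rows != b.GetLength(0) || cols != b.GetLength(1))
            throw new ArgumentException("Matrices must have the same size.");

        var c = new double[rows, cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                c[i, j] = a[i, j] + b[i, j];
        return c;
    }
}

// Worked example from the text, expected result {{13,22,14},{7,8,15},{6,7,7}}:
// var sum = MatrixAddition.Add(
//     new double[,] { { 10, 20, 10 }, { 4, 5, 6 }, { 2, 3, 5 } },
//     new double[,] { { 3, 2, 4 }, { 3, 3, 9 }, { 4, 4, 2 } });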
Here is one I am having trouble following. Can anyone help me through my confusion? Our setup is a normal Bell test using entangled photons created using spontaneous parametric down conversion (PDC). Such a setup uses 2 BBO crystals oriented at 90 degrees relative to each other.
See for example Dehlinger and Mitchell's http://users.icfo.es/Morgan.Mitchell/QOQI2005/DehlingerMitchellAJP2002EntangledPhotonsNonlocalityAndBellInequalitiesInTheUndergraduateLaboratory.pdf [Broken]. 1. Say we have Alice and Bob set their polarizers at identical settings, at +45 degrees relative to the vertical. Once the individual results of Alice and Bob are examined, it will be seen (in the ideal case) that they always match (either ++ or --). According to the local realist or local hidden variables (LHV) advocate, this is "easily" explained: if you measure the same attribute of two separated particles sharing such a common origin, you will naturally always get the same answer.There is no continuing entanglement or spooky action at a distance, and conservation rules are sufficient to provide a suitable explanation. I.e. in LHV theories there is no continuing connection between spacelike separated particles that interacted in the past. The results will be 100% correlation. But that explanation does not seem reasonable to me, even in the case above in which Alice and Bob have identical settings. Here is the paradox as I see it. The source of the photon pairs is the 2 crystals. They achieve an EPR entangled state for testing by preparing a superposition of states as follows: [tex] |\psi_e_p_r\rangle = \frac {1} {\sqrt{2}} (|V\rangle _s|V\rangle _i + |H\rangle _s|H\rangle _i) [/tex] This is the standard description per QM. We already know this leads to the [tex] cos^2 \theta [/tex] relationship and the results will be 100% correlation. The local realist presumably would not accept this description as accurate because it is not complete, and violates the basic premise of any LHV theory. He has an alternate explanation, and the Heisenberg Uncertainty Principle (HUP) is not part of it. So now it appears that our experimental results are compatible with the expectations of both QM and LHV (at least when Alice and Bob have matching settings); however, they have different ways of obtaining identical predictions. But let's look deeper, because I think there is a paradox in the LHV side. 2. Suppose I remove one of the BBO crystals, say the one which produces pairs that are horizontally polarized. I have removed an element of uncertainty of the output stream, as we will now know which crystal was the source of the photon pair. Now the results of Alice and Bob no longer match in all cases, and such is predicted by the application of QM: Alice and Bob will now have matched results only 50% of the time. This follows because the resulting photon pairs emerge from the remaining BBO crystal with a vertical orientation. Each photon has a 50-50 chance of passing through the polarizer at Alice and Bob. But since there is no longer a superposition of states, Alice and Bob do not end up with correlated results. But what about our LHV theory? We should still get matching results for Alice and Bob because we are still measuring the same attribute on both photons and the conservation rule remains in effect! Yet the actual results are now matches only 50% of the time, no better than even odds. What happened to our explanation that "measuring the same attribute" gives identical results?It seems to me that the only way for a LHV to avoid the paradox is to incorporate the HUP - and maybe the projection postulate too - as a fundamental part of the theory so that it can give the same predictions as QM. 
I mean, if the LHV advocate denies there is superposition in case 1 (such denial is essentially a requirement of any LHV, right?), how does the greater knowledge of the state change anything in case 2?
Rich Trigonometry concepts: maxima (or minima) of trigonometric expressions like $a \sin \theta + b \cos \theta$. Before going through the process of finding the maxima (or minima) of trigonometric expressions, let us state the most important intermediate goal towards the solution. Interim goal for finding maxima (or minima) of trigonometric expressions: We will first consider the problem of finding the maximum value of the two-term expression $a\sin \theta + b\cos \theta$. The individual functions $\sin \theta$ and $\cos \theta$ change periodically with change of $\theta$ in exactly the same manner, except that, broadly speaking, when $\sin \theta$ reaches its maximum value, $\cos \theta$ reaches the value 0. In formal terms we say that the $\cos$ function lags behind the $\sin$ function in phase by $90^\circ$. The curves of $y=3\sin x$ and $y=2\cos x$, plotted on the same graphing layout, give an idea of the behavior of two such trigonometric terms. Though each term individually oscillates between fixed maximum and minimum values, when the values of the two terms are summed up, finding out the maximum or minimum of the resulting expression turns out to be confusing and difficult. We can see, though, that we could easily determine the maximum (or minimum) value of the sum expression from the maximum (or minimum) of an equivalent single trigonometric term, if somehow the two-term expression were transformed to a single trigonometric function with a suitably changed coefficient. This in fact is the most important intermediate goal in the process of finding the maxima or minima of ANY trigonometric expression: transform the given expression to an equivalent expression in a single trigonometric function with a suitably evaluated coefficient. While transforming the given expression, the suitable coefficient is automatically generated with the single function of $\sin$ or $\cos$. The angle base of this function will be different from the angle base of the functions in the original expression, but having a single function has the great advantage of giving its maximum value directly from trigonometric function behavior. Multiplied by the coefficient, this gives the maxima or minima of the given expression straightaway. How to transform the expression $a\sin \theta +b\cos \theta$ to an expression with a single trigonometric function: the direct path to transform the two-term expression to a single term is to use the compound angle relationship $\sin (A+B)={\sin A}\cos B + {\cos A}\sin B$. The expression to be transformed to a single-term function is $a\sin \theta + b\cos \theta$. Let us assume $a=c\cos \alpha$ and $b=c\sin \alpha$. These two assumptions automatically relate $a$ and $b$ through the common element $c$: $a^2+b^2=c^2$. Geometrically we can represent this relationship between $a$, $b$ and $c$ in a right triangle with the two perpendicular sides $a$ and $b$ and the hypotenuse $c=\sqrt{a^2+b^2}$. We are now in a position to define $\alpha$ in terms of $\tan \alpha=\displaystyle\frac{b}{a}$, so that, $a\sin \theta + b\cos \theta$ $=c\cos {\alpha} \sin \theta + c\sin {\alpha} \cos \theta$ $=c\left(\cos {\alpha} \sin \theta +\sin {\alpha} \cos \theta\right)$ $=c\sin (\theta +\alpha)$. The coefficients $a$, $b$ and $c$ are related through the right triangle with base angle $\angle \alpha$. Note: For detailed concepts on compound angle functions you may refer to our session on Trigonometry concepts part 2, compound angle functions.
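As a quick worked instance of this transformation (using the same coefficients as the two curves mentioned above), take $a=3$ and $b=2$: $$3\sin\theta + 2\cos\theta = \sqrt{3^2+2^2}\,\sin(\theta+\alpha) = \sqrt{13}\,\sin(\theta+\alpha), \qquad \tan\alpha = \frac{2}{3},$$ so the whole two-term expression is now controlled by the single function $\sin(\theta+\alpha)$ with coefficient $\sqrt{13}$.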
Finding maxima (or minima) for $a\sin \theta + b\cos \theta$: So we have achieved the first goal of transforming the two-term trigonometric expression to a single-term function. At this point it is easy to see that the maxima of $a\sin \theta +b\cos \theta$ will occur when $\sin (\theta +\alpha)$ reaches its maximum, which is 1. So the maximum value of the expression is, $\text{Max }[a\sin \theta + b\cos \theta]=c=\sqrt{a^2+b^2}$. Ultimately it is a simple expression. Minima of $a\sin \theta + b\cos \theta$: Minima of the target expression will occur simply when $\sin (\theta +\alpha)$ is minimum, which is $-1$. So, $\text{Min }[a\sin \theta + b\cos \theta]=-c=-\sqrt{a^2+b^2}$. Variation 1: Finding maxima (or minima) of $(a\cos \theta - b\sin \theta)$. For this expression we need to use the compound angle relation, $\cos (\theta+\alpha)=\cos {\theta}\cos \alpha - \sin {\theta}\sin \alpha$. We will take $a=c\cos \alpha$ and $b=c\sin \alpha$. Substituting these two in $(a\cos \theta - b\sin \theta)$, $(a\cos \theta - b\sin \theta)$ $=c\left(\cos {\theta}\cos \alpha - \sin {\theta}\sin \alpha\right)$ $=c\cos(\theta+\alpha)$. In this case also, the maximum value of $\cos(\theta +\alpha)$ is 1 and $c=\sqrt{a^2+b^2}$. Variation 2: Quadratic expressions: Finding maxima (or minima) of $2\sin^2\theta+3\cos^2\theta$. In this case the method is exactly the same insofar as the intermediate step of converting the expression to a single-function expression is concerned. The difference lies in how we carry out the all-important transformation. The target expression is transformed as, $2\sin^2\theta+3\cos^2\theta=2(\sin^2\theta+\cos^2\theta)+\cos^2\theta=2+\cos^2\theta$. Because of the convenience of using the identity $\sin^2 \theta+\cos^2 \theta=1$, we didn't have to use a compound angle relationship to transform the target expression into an expression involving a single trigonometric function. As the absolute value of both the minimum and the maximum of $\cos \theta$ is 1, squaring removes the minus sign, and the maximum value of $\cos^2 \theta$ turns out to be 1. So, $\text{Max }[2\sin^2\theta+3\cos^2\theta]$ $=\text{Max }[2+\cos^2\theta]$ $=2+\text{Max }[\cos^2\theta]$ $=2+1=3$. Minima of the quadratic expression $2\sin^2\theta+3\cos^2\theta$: The minimum occurs when the $\cos^2 \theta$ function reaches its minimum, which is 0, as negative values are squared into positive ones. Thus, $\text{Min }[2\sin^2\theta+3\cos^2\theta]$ $=\text{Min }[2+\cos^2\theta]$ $=2+\text{Min }[\cos^2\theta]$ $=2+0=2$. Variation 3: Maxima or minima of $\sin {\theta}\cos \theta$ and its powers. Here also we need to use the compound angle relationship, $\sin 2\theta=\sin (\theta +\theta)$ $=\sin {\theta}\cos\theta +\cos {\theta}\sin \theta$ $=2\sin {\theta}\cos \theta$, Or, $\sin {\theta}\cos \theta=\frac{1}{2}\sin 2\theta$. This is the guiding relationship that will determine the maxima or minima of $\sin {\theta}\cos \theta$ and its powers. In general, the maxima for all even powers of $\sin {\theta}\cos \theta$ is, $\text{Max}\left[\left(\sin {\theta}\cos \theta\right)^{2n}\right]$, where $n$ is a natural number, $= \left(\frac{1}{2}\right)^{2n}\left(\text{Max}[\sin 2\theta]\right)^{2n}$ $=\left(\frac{1}{2}\right)^{2n}\left(1\right)^{2n}$ $=\left(\frac{1}{2}\right)^{2n}$. But for evaluating the minima for all even powers of $\sin {\theta}\cos \theta$ we cannot take the minimum value of $\sin 2\theta$, which is $-1$, because it would then result in the same maxima and minima for even powers of the expression.
Instead we need to consider the value 0 of $\sin 2\theta$ for evaluating the minima of the LHS expression. Thus the minima for even powers of $\sin {\theta}\cos \theta$ is, $\text{Min}\left[\left(\sin {\theta}\cos \theta\right)^{2n}\right]=0$. On the other hand, though the maxima for all odd powers of $\sin {\theta}\cos \theta$ will be the same as the maxima for even powers, the minima for odd powers will be, $\text{Min}\left[\left(\sin {\theta}\cos \theta\right)^{2n+1}\right]$ $=\left(\frac{1}{2}\right)^{2n+1}\left(-1\right)^{2n+1}$ $=-\left(\frac{1}{2}\right)^{2n+1}$. These are all based on fundamental trigonometric concepts and, being confined to the domain of trigonometry itself, may be felt to be more attractive. But the fact remains that if, in conjunction with the above concepts, we use the artificial tool of the AM GM inequality from the algebra topic domain, many of the otherwise difficult maximizing (or minimizing) problems of this class can be solved quickly and easily. AM GM inequality: Let us first state the AM GM inequality and offer a brief proof. AM (Arithmetic Mean) GM (Geometric Mean) inequality. Arithmetic Mean or AM of the two terms of $x+y$: The AM of the two terms in the two-term expression $x+y$ (two terms for simplicity and relevance to our class of problems) is, $\text{AM}=\displaystyle\frac{x+y}{2}$. This is also known as the average of the terms: the terms are summed up and the sum is divided by the number of terms. The two terms $x$ and $y$ with the AM in between form an Arithmetic Progression or AP, where the difference between any two consecutive terms is the same. In the three-term series $x$, $\displaystyle\frac{x+y}{2}$ and $y$, the difference between the second and first terms is, $\displaystyle\frac{x+y}{2}-x=\displaystyle\frac{y-x}{2}$. Similarly the difference between the third term and the second term is, $y-\displaystyle\frac{x+y}{2}=\displaystyle\frac{y-x}{2}$, which is the same as the previous difference. Geometric Mean or GM of the two terms of $x+y$: The Geometric Mean or GM of the two terms $x$ and $y$ in the expression $x+y$ is, $\text{GM}=\sqrt{xy}$. The Geometric Mean of the two terms $x$ and $y$ forms the middle term of a Geometric Progression or GP of three terms, $x$, $\sqrt{xy}$, and $y$. These three terms form a GP as each term is multiplied by the fixed value of $\sqrt{\displaystyle\frac{y}{x}}$ to get the next term, which is the basic characteristic of a GP. Note: Verify this yourself. AM GM inequality: The frequently used inequality states, mathematically, $\text{AM} \geq \text{GM}$ (for non-negative terms, which will be the case in our use below). Let us see how this happens with the two terms of our expression, $x+y$. We have, $(x-y)^2\geq 0$, Or, $x^2-2xy+y^2 \geq 0$, Or, $x^2+2xy+y^2 -4xy \geq 0$, Or, $(x+y)^2 -4xy \geq 0$, Or, $(x+y)^2 \geq 4xy$, Or, $\displaystyle\frac{x+y}{2} \geq \sqrt{xy}$, Or, $\text{AM} \geq \text{GM}$. Only when $x=y$ does equality occur. Otherwise, $\text{AM} \gt \text{GM}$. In general, $\text{AM} \geq \text{GM}$. Let us see how this powerful but simple inequality can be used for finding maxima or minima of trigonometric expressions. As an example we will take up minimizing the expression $7 \tan \theta +8 \cot \theta$. How to use the AM GM inequality to find the minima or maxima of $7 \tan \theta +8 \cot \theta$: The expression under consideration is $7 \tan \theta +8 \cot \theta$. So the AM is, $\text{AM}=\displaystyle\frac{7 \tan \theta +8 \cot \theta}{2}$. The GM is, $\text{GM}=\sqrt{7\tan \theta \times 8\cot \theta}=\sqrt{56}$, since $\tan \theta \cot \theta=1$. By the AM GM inequality then, $\displaystyle\frac{7 \tan \theta +8 \cot \theta}{2} \geq \sqrt{56}$, Or, $7 \tan \theta +8 \cot \theta \geq 2\sqrt{56}=4\sqrt{14}$.
This inequality states that the value of the expression on the LHS can only be greater than or equal to the value on the RHS. This means the minimum value of the expression $7\tan \theta +8\cot \theta$ is $4\sqrt{14}$. The basic inequality concept has been used here. Note: The given expression doesn't have any defined maximum, as either of the terms can grow without bound (towards $\infty$). Important: This AM GM inequality method is applicable in finding maxima or minima of two-term trigonometric expressions involving a pair of mutually reciprocal functions, such as $\tan$ and $\cot$; $\cos$ and $\sec$; or $\sin$ and $\text{cosec}$. The cancelling out of the functions when evaluating the GM leaves a pure number in such situations. The pair of terms may be in equal compound angles or in equal powers. Recommendation: There are many variations to this class of problems. In general, for all such problems the overriding goal is to transform the expression to an expression involving a single trigonometric function. Wherever possible we need to use the AM GM inequality in addition, or in isolation, as required. Resources on Trigonometry and related topics: You may refer to our useful resources on Trigonometry and other related topics, especially algebra: Tutorials on Trigonometry; Trigonometry concepts part 3, maxima (or minima) of Trigonometric expressions; General guidelines for success in SSC CGL; Efficient problem solving in Trigonometry. A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques has been used for any efficient Trigonometry problem solving.
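As a numerical sanity check on the two closed-form results above (the $\sqrt{a^2+b^2}$ bound and the AM-GM minimum), here is a small C# sketch written for this note, not taken from the original article. It scans $\theta$ over $(0,\pi/2)$ and compares the observed extrema with $\sqrt{13}$ and $4\sqrt{14}$.

using System;

class TrigExtremaCheck
{
    static void Main()
    {
        double a = 3, b = 2;
        double maxSum = double.MinValue;      // should approach sqrt(a^2 + b^2) = sqrt(13)
        double minTanCot = double.MaxValue;   // should approach 4 * sqrt(14)

        for (double t = 1e-4; t < Math.PI / 2; t += 1e-5)
        {
            maxSum = Math.Max(maxSum, a * Math.Sin(t) + b * Math.Cos(t));
            minTanCot = Math.Min(minTanCot, 7 * Math.Tan(t) + 8 / Math.Tan(t));
        }

        Console.WriteLine($"max(3 sin t + 2 cos t) ~ {maxSum:F4}, sqrt(13) = {Math.Sqrt(13):F4}");
        Console.WriteLine($"min(7 tan t + 8 cot t) ~ {minTanCot:F4}, 4 sqrt(14) = {4 * Math.Sqrt(14):F4}");
    }
}

Both extrema occur inside $(0,\pi/2)$, so the scan picks them up, and the printed values agree with the formulas to the scan resolution.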
I hope this question ist still interesting for some people, as it is for me. I will first try to give a motivation for the "ghost map", i.e. the Witt polynomials, which, I think, is absent from Harder's article, and then give some historical remarks. Motivation (partly inspired by the impressive article on Witt vectors in the German wikipedia): For this forget for a moment that we know the Teichmüller representatives. Imagine we have a strict $p$-ring $A$ with (perfect) residue ring $k$, that is, for any set-theoretic section $\sigma: k \rightarrow A$ of $A \twoheadrightarrow k$, every element of A can be written as $$\sum_{i=0}^\infty \sigma(a_i) p^i$$ with unique $a_i$ in $k$. That's a set-theoretic bijection $$k^\mathbb{N} \leftrightarrow A .$$ How to describe the ring structure on the "coordinates" on the left? It suffices to describe it mod $p^{n+1}$ for every n. The most naive "coordinate map" would be $$k^{n+1} \rightarrow A/p^{n+1} A$$ $$(a_0, ..., a_n) \mapsto \sigma(a_0) + \sigma(a_1) p + ... + \sigma(a_n)p^n .$$ To this, there would correspond the naiveWitt polynomial $$NW(X_0, ..., X_{n}) = X_0 + pX_1 + ... + p^n X_n .$$ "Naive" because the above map induced by it depends on $\sigma$. (We could even do worse and choose different $\sigma$'s for each coordinate, still maintaining a bijection). Now do remember -- not the Teichmüller representatives, but the crucial fact from their construction: $a \equiv b$ mod $p \Rightarrow a^{p^i} \equiv b^{p^i}$ mod $p^{i+1}$. All possible $\sigma(a_0)$ are congruent mod $p$; we want something unique mod $p^{n+1}$, so why not write $\sigma(a_0)^{p^n}$ in the 0-th coordinate. The first coordinate will be multiplied by $p$ anyway, so we only have to raise it to the $p^{n-1}$-th power to make it unique mod $p^{n+1}$. Upshot: $$\sigma(a_0)^{p^n} + \sigma(a_1)^{p^{n-1}} p + ... + \sigma(a_n) p^n \in A/p^{n+1} A$$ is independent of $\sigma$; in a way, it is a canonical representative in $A/p^{n+1} A$ of one element in $k^{n+1}$. And it is induced by the (non-naive) Witt polynomial $$W(X_0, ..., X_n) = X_0^{p^n} + p X_1^{p^{n-1}} + ... + p^n X_n .$$ In other words: For fixed $(a_0, ..., a_n)$, whatever liftings $\sigma_i$ one might choose, evaluating the Witt polynomial at $X_i = \sigma_i(a_i)$ will give the same element in $A/p^{n+1} A$. So it looks like it might produce universal formulae for a ring structure. (Still, it is astounding that these turn out to be polynomials.)Finally, if $k$ is perfect, this coordinate map is still bijective, and we can and will normalise it. Depending on whether one considers the normalisation $(a_0, ..., a_n) \mapsto (a_0^{p^{-n}}, ..., a_n^{p^{-n}})$ or $(a_0, ..., a_n) \mapsto (a_0^{p^{-n}}, a_1^{p^{1-n}}, ..., a_n)$ to be more natural, in the limit one will look at the element $\sum_{i=0}^\infty \tau (a_i^{p^{-i}}) p^i$ or $\sum_{i=0}^\infty \tau (a_i) p^i$ as a natural representative in $A$ of the coordinates $(a_0, a_1, ...)$ -- where the map $\tau: a \mapsto \lim \sigma (a^{p^{-i}})^{p^i}$ is independent of $\sigma$ and turns out to be the Teichmüller map, which we have thus generalised by forgetting it for a while. History: The relevant papers are Hasse, F. K. Schmidt: Die Struktur diskret bewerteter Körper. Crelle 170 (1934) H. L. Schmid: Zyklische algebraische Funktionenkörper vom Grad $p^n$ über endlichem Konstantenkörper der Charakteristik $p$. Crelle 175 (1936); received 6-I-1936 Teichmüller: Über die Struktur diskret bewerteter Körper. Nachr. Ges. Wiss. 
Göttingen, 1936; received 21-II-1936 Witt: Zyklische Körper und Algebren der Charakteristik $p$ vom Grad $p^n$. Struktur diskret bewerteter Körper mit vollkommenem Restklassenkörper der Charakteristik $p$, Crelle 176 (1937), dated 22-VI-1936, received 29-VIII-1936 Teichmüller: Diskret bewertete perfekte Körper mit unvollkommenem Restklassenkörper, Crelle 176 (1937), received 5-IX-1936 (Beware, obsolete notation: "perfekt" $\sim$ complete; "(un)vollkommen" = (im)perfect) In (1), a structure theory of complete discretely valued fields had already been worked out (!), although in a more complicated way. As olli_jvn has already said, Witt was mainly working on generalising Artin-Schreier theory to what is now Artin-Schreier-Witt theory as well as constructing cyclic algebras of degree $p^n$. This is also what Schmid did in (2). This paper was discussed in an Arbeitsgemeinschaft led by Witt with participants Hasse, Teichmüller, Schmid and others. On the first page of (2), there is a note added during correction that Witt has found a "neues Kalkül" which simplifies Schmid's results. -- Hazewinkel notes (p. 5) that the Witt polynomials turn up in (3) and suggests that this might have inspired Witt; however, if one looks where they come from here, one reads (p. 155 = p. 57 in Teichmüller's Collected Works): "Tatsächlich ergibt sich das Verfahren aus einem Formalismus, den H. L. Schmid und E. Witt zu ganz anderen Zwecken aufgestellt haben." (roughly: "In fact, the procedure arises from a formalism that H. L. Schmid and E. Witt set up for entirely different purposes."), and then the Witt polynomials appear, and the summation polynomials (at least, mod p) are deduced from them. On the first page of (3), Teichmüller writes that this work was inspired by the mentioned Arbeitsgemeinschaft. In the introductions to (4) and (5), Witt and Teichmüller credit each other with realising the use of the "neues Kalkül" for the structure theory of complete discretely valued fields in the unequal characteristic case. As Hazewinkel writes (p. 9, in accordance with Witt's introduction in (4)), a decisive inspiration for Witt had been the "summation" polynomials that occurred in (2), which are constructed recursively (in building an algebra of degree $p^n$ recursively by adding Artin-Schreier-like $p$-layers), and which for $n = 1$ reduce to a plain sum. Indeed, on p. 111 of (2), there are polynomials $z_\nu$ which in today's notation would be Witt's $S_\nu - X_\nu - Y_\nu$, defined recursively with the help of a polynomial $f_\nu$ to be found on p. 112, which is nothing else than the Witt polynomial $W_\nu$ in a slightly different normalization. So presumably the timeline is: Schmid presents his paper in the Arbeitsgemeinschaft (before January 1936) $\rightarrow$ Witt finds the general Witt vector "Kalkül" (January 1936) $\rightarrow$ Witt and Teichmüller independently realise that this gives a structure theory of complete discretely valued fields with perfect residue field; Teichmüller finds sum and product polynomials (mod p) as well as Teichmüller representatives and the reduction of the general case to the case of perfect residue field (January-February 1936) $\rightarrow$ Witt works out his whole theory, Witt and Teichmüller agree to put the perfect case among all the other applications into (4), and a detailed treatment of the imperfect case into (5).
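To make the lift-independence claim from the motivation above concrete, here is a small C# check (my own illustration, not taken from any of the cited papers). For $p=2$ and $n=1$, the Witt polynomial $W(X_0,X_1)=X_0^2+2X_1$ takes a single value mod $p^{n+1}=4$ no matter which lifts of the residues $a_0,a_1$ are plugged in, while the naive polynomial $X_0+2X_1$ does not.

using System;
using System.Collections.Generic;

class WittLiftCheck
{
    const int P = 2, Mod = 4; // Mod = p^(n+1) with n = 1

    static int Witt(int x0, int x1) => (x0 * x0 + P * x1) % Mod;   // W(X0, X1) = X0^2 + 2 X1
    static int Naive(int x0, int x1) => (x0 + P * x1) % Mod;       // NW(X0, X1) = X0 + 2 X1

    static void Main()
    {
        for (int a0 = 0; a0 < P; a0++)
        for (int a1 = 0; a1 < P; a1++)
        {
            var wittValues = new HashSet<int>();
            var naiveValues = new HashSet<int>();
            for (int j = 0; j < 4; j++)      // several lifts a0 + 2j of the residue a0
            for (int k = 0; k < 4; k++)      // several lifts a1 + 2k of the residue a1
            {
                wittValues.Add(Witt(a0 + P * j, a1 + P * k));
                naiveValues.Add(Naive(a0 + P * j, a1 + P * k));
            }
            Console.WriteLine($"(a0,a1)=({a0},{a1}): Witt values {{{string.Join(",", wittValues)}}}, naive values {{{string.Join(",", naiveValues)}}}");
        }
        // Each Witt set contains exactly one value mod 4; the naive sets contain two.
    }
}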
Answer $$\tan\Big(x-\frac{\pi}{2}\Big)=\cot x$$ The statement is false. Work Step by Step $$\tan\Big(x-\frac{\pi}{2}\Big)=\cot x$$ Now, what we already know from the cofunction identities is $$\tan\Big(\frac{\pi}{2}-x\Big)=\cot x$$ Yet here, the statement involves $\tan\Big(x-\frac{\pi}{2}\Big)$, not $\tan\Big(\frac{\pi}{2}-x\Big)$. Unfortunately, the two expressions are not the same, so we need to find a way to transform $\tan\Big(x-\frac{\pi}{2}\Big)$ into the known form. Though it can be a little bit tricky, we can proceed like this: $$\tan\Big(x-\frac{\pi}{2}\Big)=\tan\Big[-\Big(\frac{\pi}{2}-x\Big)\Big]$$ And we already know from the negative-angle identities that $$\tan(-\theta)=-\tan\theta$$ Therefore, $$\tan\Big(x-\frac{\pi}{2}\Big)=-\tan\Big(\frac{\pi}{2}-x\Big)$$ $$\tan\Big(x-\frac{\pi}{2}\Big)=-\cot x$$ Since $\cot x\ne-\cot x$ (except where $\cot x=0$), $$\tan\Big(x-\frac{\pi}{2}\Big)\ne\cot x$$ Thus the statement is false.
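As a quick numerical spot-check of this conclusion (not part of the original solution), take $x=\dfrac{\pi}{4}$: $$\tan\Big(\frac{\pi}{4}-\frac{\pi}{2}\Big)=\tan\Big(-\frac{\pi}{4}\Big)=-1, \qquad \cot\frac{\pi}{4}=1,$$ so at this point $\tan\big(x-\frac{\pi}{2}\big)\neq\cot x$, while it does equal $-\cot x$, as derived above.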
We all know the basic equation of an elliptic curve is $y^2 \equiv x^3 + ax + b \pmod p$. How are the values of the constants $a$ and $b$ chosen? Suppose $p$ is a prime with $p \approx 2^{256}$; then what approach should I take while selecting the values of the constants $a$ and $b$? I have some basic knowledge of elliptic curves, EC cryptography, the EC discrete logarithm problem and EC Diffie-Hellman. I need some expert help.
The free Burnside group $G=B(2,665)=\langle a,b \mid g^{665} \rangle$ is infinite, by the work of Adyan and Novikov. Furthermore, the centralizer of any nonidentity element in $G$ is finite cyclic, and so the group is an i.c.c. group and the associated left group von Neumann algebra $LG$ is a type $II_{1}$ factor. It is a fact, due to Adyan, that this group is not amenable, so the group von Neumann algebra is not injective. A type $II_{1}$ factor $M$ with trace $\tau$ has Property $\Gamma$ if for every finite subset $\{ x_{1}, x_{2},..., x_{n} \} \subseteq M$ and each $\epsilon >0$, there is a unitary element $u$ in $M$ with $\tau (u)=0$ and $||ux_{j}-x_{j}u||_{2}<\epsilon$ for all $1 \leq j \leq n$. (Here $||T||_2=(\tau(T^{*}T))^{1/2}$ for $T\in M$.) I should mention that if a group is not inner amenable (in the usual sense), then its left group von Neumann algebra does not have property $\Gamma$. (There exist i.c.c. inner amenable groups whose group von Neumann algebras don't have $\Gamma$, as recently shown by Stefaan Vaes: http://arxiv.org/PS_cache/arxiv/pdf/0909/0909.1485v1.pdf.) My question is: Does the group von Neumann algebra $LG$ have Property $\Gamma$?
I am interested in finding the exact value (not an approximation) of "combinations without repetition" for given $n$ and $k$, that is, $\binom{n}{k}$. The brute force solution would look like this:

private static ulong Factorial(int x)
{
    ulong res = 1;
    while (x > 1) { res *= (ulong)x--; }
    return res;
}

public static int Combination0(int k, int n)
{
    k = Math.Min(k, n - k);
    if (n < 2 || k < 1) return 1;
    if (k == 1) return n;
    return (int)(Factorial(n) / (Factorial(k) * Factorial(n - k)));
}

We can slightly optimize this solution by computing $\frac{n!}{(n-k)!}$ directly as the product $\prod_{n\geq i>n-k}{i}$ instead of as a quotient of two full factorials.

private static ulong Factorial(int x, int until = 0)
{
    ulong res = 1;
    while (x > until) { res *= (ulong)x--; }
    return res;
}

public static int Combination1(int k, int n)
{
    k = Math.Min(k, n - k);
    if (n < 2 || k < 1) return 1;
    if (k == 1) return n;
    return (int)(Factorial(n, n - k) / Factorial(k));
}

But these two solutions have one significant problem - we are limited by ulong.MaxValue, which is more than $20!$, but less than $21!$. Another way to find the number of combinations, which doesn't have the previously described problem, is Pascal's triangle.

public static int Combination2(int k, int n)
{
    k = Math.Min(k, n - k);
    if (n < 2 || k < 1) return 1;
    if (k == 1) return n;
    int[] triangle = new int[k + 1];
    triangle[0] = 1;
    // expanding
    int i = 0;
    for (; i < k; i++)
    {
        for (int j = i + 1; j > 0; j--)
        {
            triangle[j] += triangle[j - 1];
        }
    }
    // progressing
    for (; i < n - k; i++)
    {
        for (int j = k; j > 0; j--)
        {
            triangle[j] += triangle[j - 1];
        }
    }
    // collapsing
    for (; i < n; i++)
    {
        int until = k - (n - i);
        for (int j = k; j > until; j--)
        {
            triangle[j] += triangle[j - 1];
        }
    }
    return triangle[k];
}

But the problem is that Combination2 is significantly slower. I would appreciate any comments and suggestions for an improvement. Update: @quasar and @henrik-hansen suggested a way to prevent overflow by calculating $\prod_{0 \leq i < k}{\frac{n-i}{i+1}}$.
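For reference, here is one possible sketch of the multiplicative approach mentioned in the update. It is my own illustration rather than the commenters' exact code: it multiplies before dividing at each step, and the division is always exact because the running value after step $i$ equals $\binom{n-k+i}{i}$, an integer.

public static ulong Combination3(int k, int n)
{
    k = Math.Min(k, n - k);
    if (k < 0) return 0;   // covers k > n
    ulong result = 1;
    for (int i = 1; i <= k; i++)
    {
        // Before this line result == C(n-k+i-1, i-1); after it result == C(n-k+i, i).
        result = result * (ulong)(n - k + i) / (ulong)i;
    }
    return result;
}

The intermediate product result * (n - k + i) can still overflow ulong for large enough inputs, just much later than the factorial versions; switching result to System.Numerics.BigInteger removes that limit at some cost in speed.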
Here is a beautiful result from numerical analysis. Given any nonsingular $n\times n$ system of linear equations $Ax=b$, an optimal Krylov subspace method like GMRES must necessarily terminate with the exact solution $x=A^{-1}b$ in no more than $n$ iterations (assuming exact arithmetic). The Cayley-Hamilton theorem provides a simple, elegant proof of this statement. To begin, recall that at the $k$-th iteration, minimum residual methods like GMRES solve the least-squares problem$$\underset{x_k\in\mathbb{R}^n}{\text{minimize }} \|Ax_k-b\|$$by picking a solution from the $k$-th Krylov subspace$$\text{subject to } x_k \in \mathrm{span}\{b,Ab,A^2b,\ldots,A^{k-1}b\}.$$If the objective $ \|Ax_k-b\|$ goes to zero, then we have found the exact solution at the $k$-th iteration (we have assumed that $A$ is full-rank). Next, observe that $x_k=(c_0 + c_1 A + \cdots + c_{k-1}A^{k-1})b=p(A)b$, where $p(\cdot)$ is a polynomial of degree at most $k-1$. Similarly, $\|Ax_k-b\|=\|q(A)b\|$, where $q(\cdot)$ is a polynomial of degree at most $k$ satisfying $q(0)=-1$. So the least-squares problem from above for each fixed $k$ can be equivalently posed as a polynomial optimization problem with the same optimal objective $$\text{minimize } \|q_k(A)b\| \text{ subject to } q_k(0)=-1,\; q_k(\cdot) \text{ is a polynomial of degree at most } k.$$Again, if the objective $\|q_k(A)b\|$ goes to zero, then GMRES has found the exact solution at the $k$-th iteration. Finally, we ask: what is a bound on $k$ that guarantees that the objective goes to zero? Well, with $k=n$, a feasible choice of $q_n(\cdot)$ is the characteristic polynomial of $A$, rescaled so that $q_n(0)=-1$ (which is possible because $A$ is nonsingular, so its characteristic polynomial does not vanish at $0$). According to Cayley-Hamilton, $q_n(A)=0$, so $\|q_n(A)b\|=0$. Hence we conclude that GMRES always terminates with the exact solution by the $n$-th iteration. This same argument can be repeated (with very minor modifications) for other optimal Krylov methods like conjugate gradients, conjugate residual / MINRES, etc. In each case, the Cayley-Hamilton theorem forms the crux of the argument.
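To make the Krylov/Cayley-Hamilton connection tangible, here is a tiny C# sketch (my own illustration, not tied to any particular GMRES implementation) for a $2\times 2$ system. Cayley-Hamilton gives $A^2-\operatorname{tr}(A)A+\det(A)I=0$, hence $A^{-1}b=\frac{1}{\det A}\big(\operatorname{tr}(A)\,b-Ab\big)$, so the exact solution already lies in the Krylov subspace $\operatorname{span}\{b,Ab\}$, i.e. termination happens within $n=2$ steps.

using System;

class CayleyHamiltonKrylov
{
    static void Main()
    {
        // A nonsingular 2x2 system A x = b.
        double[,] A = { { 4, 1 }, { 2, 3 } };
        double[] b = { 1, 2 };

        double tr = A[0, 0] + A[1, 1];
        double det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0];

        // A b
        double[] Ab = { A[0, 0] * b[0] + A[0, 1] * b[1], A[1, 0] * b[0] + A[1, 1] * b[1] };

        // Cayley-Hamilton: x = (tr(A) * b - A b) / det(A), a combination of b and A b.
        double[] x = { (tr * b[0] - Ab[0]) / det, (tr * b[1] - Ab[1]) / det };

        // Verify the residual A x - b is (numerically) zero.
        double r0 = A[0, 0] * x[0] + A[0, 1] * x[1] - b[0];
        double r1 = A[1, 0] * x[0] + A[1, 1] * x[1] - b[1];
        Console.WriteLine($"x = ({x[0]}, {x[1]}), residual = ({r0}, {r1})");
    }
}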
The Steklov problem for a compact planar region $\Omega$ is \begin{cases} \Delta u =0 &\text{in $\Omega$}, \\ \frac{\partial u}{\partial n} = \sigma u &\text{on $\partial \Omega$}, \end{cases} where $n$ is the outward unit normal along $\partial \Omega$. I am finding Steklov eigenfunctions when $\Omega$ is a region bounded by an ellipse. I know that the spectrum of the problem is $0=\sigma_0<\sigma_1\le \cdots\rightarrow \infty$. Can we obtain the eigenfunctions explicitly by separation of variables? Is there any calculation of the eigenfunctions or of the multiplicities of the eigenvalues? Or are there any numerical results about level sets of the eigenfunctions? My approach is as follows. Consider the elliptic coordinate system $(\mu, \nu)$ which is given by $x=a \cosh \mu \cos \nu, y=a \sinh \mu \sin \nu$. It is a two dimensional orthogonal coordinate system and the curves $\mu=const$, $\nu=const$ are ellipses and hyperbolas, respectively. Now we assume that $\Omega$ is bounded by the curve $\mu=\mu_0$. In this coordinate system, the Laplacian is given by \begin{align} \Delta u = \frac{1}{a^2 (\cosh^2 \mu -\cos^2 \nu)} (\frac{\partial^2 u}{\partial \mu^2}+\frac{\partial^2 u}{\partial \nu^2}). \end{align} By separation of variables, $\Delta u=0$ gives $u =\mu, \nu, \mu\nu, \cosh \alpha \mu \cos \alpha \nu, \cosh \alpha \mu \sin \alpha \nu, \sinh \alpha \mu \cos \alpha \nu, \sinh \alpha \mu \sin \alpha \nu, \\ \cosh \alpha \nu \cos \alpha \mu, \cosh \alpha \nu \sin \alpha \mu, \sinh \alpha \nu \cos \alpha \mu, \sinh \alpha \nu \sin \alpha \mu$. In addition, since the curves $\nu=const$ are orthogonal to $\partial \Omega$, we can calculate \begin{align} \frac{\partial u}{\partial n} = \left.\frac{\partial u}{\partial \mu}\right|_{\mu=\mu_0} \times \frac{1}{a\sqrt{\cosh^2 \mu_0 - \cos^2 \nu}}. \end{align} But it seems hard to deal with $\sqrt{\cosh^2 \mu_0 - \cos^2 \nu}$ and find eigenfunctions explicitly.
Abstract To any two graphs $G$ and $H$ one can associate a cell complex ${\tt Hom}(G,H)$ by taking all graph multihomomorphisms from $G$ to $H$ as cells. In this paper we prove the Lovász conjecture which states that if ${\tt Hom}(C_{2r+1},G)$ is $k$-connected, then $\chi(G)\geq k+4,$ where $r,k\in\mathbb{Z}$, $r\geq 1$, $k\geq -1$, and $C_{2r+1}$ denotes the cycle with $2r+1$ vertices. The proof requires analysis of the complexes ${\tt Hom}(C_{2r+1},K_n)$. For even $n$, the obstructions to graph colorings are provided by the presence of torsion in $H^*({\tt Hom}(C_{2r+1},K_n);\mathbb{Z})$. For odd $n$, the obstructions are expressed as vanishing of certain powers of Stiefel-Whitney characteristic classes of ${\tt Hom}(C_{2r+1},K_n)$, where the latter are viewed as $\mathbb{Z}_2$-spaces with the involution induced by the reflection of $C_{2r+1}$.
First section of Cosmography Problem 1 problem id: cs-1 Using the cosmographic parameters introduced above, expand the scale factor into a Taylor series in time. solution We can write the scale factor in terms of the present time cosmographic parameters:\[a(t)\sim 1+H_{0} \Delta t-\frac{1}{2} q_{0} H_{0}^{2} \Delta t^{2} +\frac{1}{6} j_{0} H_{0}^{3} \Delta t^{3} +\frac{1}{24} s_{0} H_{0}^{4} \Delta t^{4} +\frac{1}{120} l_{0} H_{0}^{5} \Delta t^{5} \]This decomposition describes the evolution of the Universe on the time interval $\Delta t$ directly through the measurable cosmographic parameters. Each of them describes a certain characteristic of the evolution. In particular, the sign of the deceleration parameter $q$ indicates whether the dynamics is accelerated or decelerated. In other words, a positive deceleration parameter indicates that standard gravity predominates over the other species, whereas a negative sign signals a repulsive effect which overcomes the standard attraction due to gravity. Evolution of the deceleration parameter is described by the jerk parameter $j$. In particular, a positive jerk parameter would indicate that there exists a transition time when the Universe modifies its expansion. In the vicinity of this transition the deceleration parameter tends to zero and then changes its sign. The two terms, i.e., $q$ and $j$, fix the local dynamics, but they may not be sufficient to remove the degeneracy between different cosmological models, and one will need higher terms of the decomposition. Problem 2 problem id: cs-2 Using the cosmographic parameters, expand the redshift into a Taylor series in time. solution \[1+z=\left[1+H_{0} (t-t_{0} )-\frac{1}{2} q_{0} H_{0}^{2} (t-t_{0} )^{2} +\frac{1}{3!} j_{0} H_{0}^{3} \left(t-t_{0} \right)^{3} +\frac{1}{4!} s_{0} H_{0}^{4} \left(t-t_{0} \right)^{4} +\frac{1}{5!} l_{0} H_{0}^{5} \left(t-t_{0} \right)^{5} +{\rm O}\left(\left(t-t_{0} \right)^{6} \right)\right]^{-1},\]so that\[z=H_{0} (t_{0} -t)+\left(1+\frac{q_{0} }{2} \right)H_{0}^{2} (t-t_{0} )^{2} +\cdots .\] Problem 3 problem id: cs-3 What is the reason for the statement that the cosmographic parameters are model-independent? solution The cosmographic parameters are model-independent quantities for a simple reason: these parameters are not functions of the EoS parameters $w$ or $w_{i}$ of the cosmic fluid filling the Universe in a concrete model. Problem 4 problem id: cs-4 Obtain the following relations between the deceleration parameter and Hubble's parameter $$q(t)=\frac{d}{dt}\left(\frac{1}{H}\right)-1;\,\,q(z)=\frac{1+z}{H}\frac{dH}{dz}-1;\,\,q(z)=\frac{d\ln H}{dz}(1+z)-1.$$ Problem 5 problem id: cs-5 Show that the deceleration parameter can be defined by the relation \[q=-\frac{d\dot{a}}{Hda} \] solution \[q=-\frac{d\dot{a}}{Hda} =-\frac{\ddot{a}dt}{Hda} =-\frac{\ddot{a}}{H\dot{a}} =-\frac{\ddot{a}}{aH^{2} } .\]This corresponds to the standard definition of the deceleration parameter\[q=-\frac{\ddot{a}}{aH^{2} } .\] Problem 6 problem id: cs-6 Classify models of the Universe based on the two cosmographic parameters -- the Hubble parameter and the deceleration parameter. solution When the rate of expansion never changes, and $\dot{a}$ is constant, the scale factor is proportional to time $t$, and the deceleration term is zero. When the Hubble term is constant, the deceleration term $q$ is also constant and equal to $-1$, as in the de Sitter and steady-state Universes.
In most models of the Universe the deceleration term changes in time. One can classify models of the Universe on the basis of the time dependence of these two parameters. All models can be characterized by whether they expand or contract, and whether they accelerate or decelerate.
Here is one idea. I'll start with a more specific "mathematical singularity", defined as an algorithm that can do the following in N hours or less (for all $N \geq 1$): State equivalent versions (up to notational differences) of all mathematical theorems/conjectures that humans will read and understand in N*20 years after 2018 that can be stated formally in Metamath (this is an arbitrary choice, but Metamath is general enough to include quantum logic and extensions of ZFC, so it seems like a decent place to start. Feel free to instead use Coq, Isabelle, Lean, etc. if you prefer), assuming those humans never have access to a "mathematical singularity"-capable algorithm and their mathematical community continues living and functioning intellectually in a manner similar in capacity to how it did in 2018. Of those problems, provide correct proofs (these may not be readable, that's ok) of all of those that will be solved by those humans in N*20 years. This of course does not fully capture all mathematical progress that humans will make in those years: a big component missing is "readable proofs" and concepts that can't be captured in Metamath. But it is something that is theoretically formal. I know that this doesn't include any "continual improvement"; what I am referring to here is simply a threshold such that when an algorithm passes it, I think it is sufficiently powerful to be considered "intelligent enough" that it has reached close to singularity levels of intelligence. Feel free to adjust the (20 years) constant in your head to match your preferred threshold. I'm not going to accept this answer because it is lacking "continual improvement", but I brought it up because if we can't figure out how to define it mathematically, perhaps simply having "sufficient criteria" in various domains could be a good start. Edit: I suppose that the singularity typically involves an assumption of the development of an intelligence that is superior to human society. This implies that it is capable of at least doing the things that our society does, so there is probably a good argument to be made here that "proof accessibility" and "method teachability" are vital to this problem. I mean, if we think of the current state of the field of calculus, it has gone from an arcane topic only understood by a few field experts, to now being readily accessible and teachable to high school students. While that didn't require proving any new major mathematical theorems, one could argue that much of our technological progress didn't come until advanced mathematical machinery (calculus) became accessible to a wide range of people. I was going to make an argument about how "the difference is that computers can learn quicker: they can read through massive proofs very quickly". But I suppose that depends on the architecture of whatever kind of "thing" is achieving the singularity. I.e., here is a (non-exhaustive) list of two possible outcomes: There is only one "mind" that is achieving all of this. In that case, that mind has all the knowledge it needs and it doesn't need to teach anyone to progress further, so this point is sorta irrelevant. However, I can still see an argument for "teachability" if we want to utilize this vast amount of knowledge the AI has gained in human society, if possible. There is a simulated "society" of virtual minds that are interacting with each other, that, together, achieve the mathematical singularity.
If a single "mind" in this "society" isn't able to easily use and understand the work done by another mind, then the point of "teachability" is very important to prevent individual minds from having to continually recreate the wheel, so to speak. Without our biological limitations these digital minds may have very different "teaching" methods, but I think here is the ideal additional requirement for a "mathematical singularity": These proofs must be (eventually, perhaps not until spending quite a bit of time) accessible to a graduate mathematician, via proving pdf textbooks (or other similar teaching materials) that cover the same material that human mathematical textbooks would have covered after N*20 years in a way that is accessible to the typical graduate mathematician. However we have now lost some formality in this: textbooks usually contain lots of exposition and analogies that are difficult to formally measure and may not even be relevant for the AI. Here is an alternate option that is not as good, but still close: The algorithm must present its results in a form that can be used by any other algorithm that also can achieve the "mathematical singularity" to "skip ahead" to N*20 years, and then immediately continue progress from there. However this criteria has a trivial exploit: an algorithm might as well just provide a 'save state' and a 'program' to run that save state. Conceivably any algorithm that can achieve the mathematical singularity is at least capable of executing code, so providing a 'save state' and 'program' passes this criteria without making it at all accessible (The caveat here is if it uses some sort of model of computation that requires special hardware such as quantum computing or black hole computing to prevent slowdown, but that's besides the point) I think I prefer this alternative: These proofs must be similar in length as the (formalized versions of) proofs the human academic community would have made in those 20*N years "length" is tricky here: it is possible to prove a very difficult theorem very succinctly by simply referencing a very powerful lemma. But here is one example metric: $$length(Proof) = lengthInSymbols(Proof)+\sum_{symbol \in Proof} \frac{length(symbol)}{numberOfTimesUsedInOtherProofs(symbol)}$$ Where "Other Proofs" is the set of all proofs read and understood by humans in those N*20 years, and "symbols" refers to things such as "Green's Theorem" or "$\in$". Hopefully the idea is apparent here: if something is used frequently in many proofs, it is a "common technique" that isn't vital to that proof, and thus doesn't contribute as much to the "length" of that proof. Finding a potentially more suitable metric here seems like a much more tractable problem then defining the mathematical singularity itself and I suspect this is studied elsewhere more, so I'll leave it at this for now.
This question concerns a process that iterates intersection of randomly rotated planar shapes. Start with a simply connected region $R_0$ in the plane, and let $c_0$ be the centroid of $R_0$. Rotate $R_0$ about $c_0$ by a random angle; call the result $R'_0$. Set $R_1 = R_0 \cap R'_0$. And repeat, always rotating about $c_0$, and computing $R_{i+1} = R_i \cap R'_i$. It is not difficult to see that, if $c_0 \in R_0$, then the process converges to a disk: Rotation center $c_0$ fixed to centroid of $R_0$: $\rightarrow$ disk. (Scale changes frame-to-frame.) If $c_0 \not\in R_0$, then eventually the empty set is reached. My question concerns the process where the rotation center moves each step to $c_i$, the centroid of region $R_i$. Then sometimes, even when $c_0 \not\in R_0$, the process converges to a disk: Rotation center $c_i=$ the centroid of $R_i$: $\rightarrow$ disk. (Scale changes frame-to-frame.) And sometimes, for the same shape, it leads to the empty set: Rotation center $c_i=$ the centroid of $R_i$: $\rightarrow \varnothing$ (in the 5th step, not shown). (Scale changes frame-to-frame.) Q. For the process that moves the rotation center $c_i$ to the centroid of $R_i$ at each step: What characteristics do the shapes $R_0$ possess that lead to a disk with high probability? And what characteristics lead to $\varnothing$ with high probability? For example, I believe that if $R_0$ is convex, then the process always leads to a disk (not generally the same disk as when the center is fixed at $c_0$ throughout). But I am having difficulty seeing any regularity for nonconvex $R_0$.
This is more of an extended comment than an answer, but I hope it might help clarify the problem. The OP seems to have two things in mind: 1) there are inherent errors (the $E$) in the measurements of distance and/or location, and 2) there is an unknown constant $k$ affecting all measurements of distance from the observation points $P_i$ to the object of interest $O$. I was puzzled by what this second concern meant until I came up with a possible, albeit fanciful, interpretation. It may not be at all what the OP means, in which case I hope he or she (or someone) will clarify the actual intent, but I think it at least produces a reasonably nice problem in its own right. Suppose you know the exact locations of your observation points, meaning in particular that you know the exact distance between any pair of them. Let's say you've measured these distances in kilometers. (At some point one might generalize the problem to incorporate errors in these measurements as well, but let's leave well enough alone for now.) You now want to know the location of $O$, so you dispatch surveyors from each of the observation points to measure the distances from $O$ to each $P_i$. If each surveyor were able to report the exact distance in kilometers, you'd really only need to hear from two of them in order to discern the exact location of $O$ (assuming you know which side of the line segment connecting the two observation points $O$ lies on -- if you don't know even that much, then you need three observation points, and they can't lie in a row). If the surveyors report approximate distances in kilometers, together with an estimate of the error, say $d_i\pm e_i$ (perhaps with the errors reported in meters, or even centimeters), it seems straightforward, conceptually, to draw the $n$ annuli centered at the observation points, determine if they have a non-empty intersection, enclose it in a circle of minimal radius, and use the center of that circle for the estimated location of $O$ and its radius for the uncertainty. Precisely how (or whether) one can do that efficiently, I'll leave for the experts in computational geometry. But now -- and here's where I think it gets interesting -- suppose your surveyors, who learned their skills in a foreign land, come back and say something like "The distance from $O$ to $P_i$ is $y_i$ dythrms, give or take $x_i$ frklfts." And then they vanish. You turn to your advisor from the Bureau of Standards and say, "What the heck?" She replies, "I'm told that a dythrm is the distance around the Holy Sepulchre in the kingdom where the surveyors went to school, and a frklft is the length of the Royal Sceptre. I think we can assume one is comparable to a kilometer and the other to a meter. But I'm afraid that's all I know." The question is, based on numbers that are reported in units you're not familiar with, can you nonetheless get an estimate on the location of $O$ and, as a consequence, estimate the conversion factor(s) between the surveyors' units and your own (in particular the unknown factor $k$ that tells you how many kilometers there are in a dythrm)? It's pretty clear you need to know something about the relative size of dythrms and frklfts. Otherwise you'd be left saying "For all we know a dythrm is an inch and a frklft is a parsec, and we just paid those damn surveyors a million gazebos for nothing!" It might make sense to assume that the ratio of dythrms to frklfts is known (i.e., in effect assume the surveyors report all numbers using just dythrms).
But it might be more fun to see if you can figure out the smallest ratio of frklfts to dythrms that is consistent with the reported numbers $y_i$ and $x_i$ (i.e., ask how small a frklft can be as a fraction of a dythrm and still have all the measurements make sense). One case that's relatively easy to solve is if all the measurements are exact, so there are no frklfts to worry about. In that case you only need the distances to three observation points. If the (unknown) distance in kilometers from $P_i$ to $O$ is $ky_i$ for $i=1,2,3$, and if the distances $P_1P_2$, $P_2P_3$, and $P_1P_3$ are $a$, $b$, and $c$ (also in kilometers), then the law of cosines says $$a^2 = (ky_1)^2 + (ky_2)^2 - 2(ky_1)(ky_2)\cos\alpha,$$ $$b^2 = (ky_2)^2 + (ky_3)^2 - 2(ky_2)(ky_3)\cos\beta,$$ and $$c^2 = (ky_1)^2 + (ky_3)^2 - 2(ky_1)(ky_3)\cos(\alpha+\beta),$$ where $\alpha$ and $\beta$ are the angles $\angle P_1OP_2$ and $\angle P_2OP_3$, respectively (referring to the figure in the original problem). The first two equations can be solved for $\cos\alpha$ and $\cos\beta$ in terms of $k$, and these can be substituted into the third equation after rewriting $\cos(\alpha+\beta)$ as $\cos\alpha\cos\beta-\sqrt{(1-\cos^2\alpha)(1-\cos^2\beta)}$. The result is a rather messy-looking equation (I think it's a quartic in $k^2$ with coefficients in the six distances), but whatever it is, it can certainly be solved for $k$ (and if it can't, that tells you the measurements violate a triangle inequality of some sort). My apologies for writing an overly long comment, particularly if it has nothing to do with what the OP is really asking. Mostly I was just perplexed by the meaning of an unknown scaling factor.
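As an alternative to untangling the quartic by hand, $k$ (and $O$) can also be recovered numerically. The following is a hedged sketch (the function names, the use of scipy.optimize.least_squares, and the synthetic numbers are all my own illustrative choices): with exact measurements and three observation points, a least-squares fit in the three unknowns $(O_x, O_y, k)$ should land on the same solution.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_scale(P, y):
    """P: (3, 2) array of known observation points (in kilometers);
    y: reported distances to O in the surveyors' unit (dythrms).
    Fit O = (ox, oy) and k so that |O - P_i| ~= k * y_i for all i."""
    y = np.asarray(y, dtype=float)

    def residuals(params):
        ox, oy, k = params
        return np.linalg.norm(P - np.array([ox, oy]), axis=1) - k * y

    x0 = np.append(P.mean(axis=0), 1.0)      # start at the centroid with k = 1
    sol = least_squares(residuals, x0)
    return sol.x[:2], sol.x[2]

# synthetic check with made-up numbers: three stations, a hidden O, and a hidden k
P = np.array([[0.0, 0.0], [10.0, 0.0], [3.0, 8.0]])
O_true, k_true = np.array([6.0, 5.0]), 0.7            # k = kilometers per dythrm
y = np.linalg.norm(P - O_true, axis=1) / k_true       # exact distances, in dythrms
O_est, k_est = estimate_scale(P, y)
print(O_est, k_est)   # should come back close to (6, 5) and 0.7
```

(With exact data and a reasonable starting point this recovers the true values; in principle a different local minimum is possible, which mirrors the multiple roots of the quartic.)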
Time for another gemstone from symmetric function theory! (I am studying for my Ph.D. qualifying exam at the moment, and as a consequence, the next several posts will feature yet more gemstones from symmetric function theory. You can refer back to this post for the basic definitions.) Start with a polynomial $p(x)$ that factors as $$p(x)=(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n).$$ The coefficients of $p(x)$ are symmetric functions in $\alpha_1,\ldots,\alpha_n$ – in fact, they are, up to sign, the elementary symmetric functions in $\alpha_1,\ldots,\alpha_n$. In particular, if $e_i$ denotes $\sum_{j_1<\cdots<j_i} \alpha_{j_1}\alpha_{j_2}\cdots\alpha_{j_i}$, then $$p(x)=x^n-e_1x^{n-1}+e_{2}x^{n-2}-\cdots+(-1)^n e_n.$$ (These coefficients are sometimes referred to as Vieta’s Formulas.) Since $p(\alpha_i)=0$ for all $i$, we can actually turn the equation above into a symmetric function identity by plugging in $\alpha_1,\ldots,\alpha_n$: $$\begin{eqnarray*} \alpha_1^n-e_1\alpha_1^{n-1}+e_{2}\alpha_1^{n-2}-\cdots+(-1)^n e_n&=&0 \\ \alpha_2^n-e_1\alpha_2^{n-1}+e_{2}\alpha_2^{n-2}-\cdots+(-1)^n e_n&=&0 \\ \vdots\hspace{3cm}& & \end{eqnarray*}$$ and then summing these equations: $$(\alpha_1^n+\cdots+\alpha_n^n)-e_1(\alpha_1^{n-1}+\cdots+\alpha_n^{n-1})+\cdots+(-1)^n\cdot ne_n=0$$ …and we have stumbled across another important basis of the symmetric functions, the power sum symmetric functions. Defining $p_i=\alpha_1^i+\cdots+\alpha_n^i$, every symmetric function in $\{\alpha_j\}$ can be uniquely expressed as a linear combination of products of these $p_i$'s (we write $p_\lambda=p_{\lambda_1}p_{\lambda_2}\cdots p_{\lambda_n}$.) So, we have $$p_n-e_1p_{n-1}+e_2 p_{n-2}-\cdots+(-1)^n\cdot ne_n=0.$$ This is called the $n$th Newton-Girard identity, and it gives us a recursive way of expressing the $p$'s in terms of the $e$'s. Well, almost. We have so far only shown that this identity holds when dealing with $n$ variables. However, we can plug in any number of zeros for the $\alpha_i$'s to see that the identity also holds for any number of variables $k<n$. And, if we had more than $n$ variables, we can compare coefficients of each monomial individually, which only can involve at most $n$ of the variables at a time since the equation is homogeneous of degree $n$. Setting the rest of the variables equal to zero for each such monomial will do the trick. So now we have it! The Newton-Girard identities allow us to recursively solve for $p_n$ in terms of the $e_i$'s. Wikipedia does this nicely and explains the computation, and the result is: $$p_n=\det\left(\begin{array}{cccccc} e_1 & 1 & 0 & 0 &\cdots & 0 \\ 2e_2 & e_1 & 1 & 0 & \cdots & 0 \\ 3e_3 & e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ (n-1)e_{n-1} & e_{n-2} & e_{n-3} & e_{n-4} & \ddots & 1 \\ ne_n & e_{n-1} & e_{n-2} & e_{n-3} & \cdots & e_1 \end{array}\right)$$ For instance, this gives us $p_2=e_1^2-2e_2$, which is true: $$x^2+y^2+z^2=(x+y+z)^2-2(xy+yz+zx).$$ The Wikipedia page derives a similar identity for expressing the $e$'s in terms of the $p$'s. It also does the same for expressing the complete homogeneous symmetric functions $h_n$ in terms of the $p$'s and vice versa. However, it does not explicitly express the $e$'s in terms of the $h$'s or vice versa. In the name of completeness of the Internet, let’s treat these here. Fix some number of variables $x_1,\ldots,x_n$. For any $d$, define $h_d$ to be the sum of all monomials of degree $d$ in $x_1,\ldots,x_n$.
This is clearly symmetric, and we define $$h_\lambda = h_{\lambda_1}h_{\lambda_2}\cdots h_{\lambda_k}$$ for any partition $\lambda$, as we did for the elementary symmetric functions last week. The $h_\lambda$'s, called the complete homogeneous symmetric functions, form a basis for the space of symmetric functions. It’s a fun exercise to derive the following generating function identities for $e_n$ and $h_n$: $$H(t)=\sum_n h_n t^n = \prod_{i=1}^n \frac{1}{1-x_i t}$$ $$E(t)=\sum_n e_n t^n = \prod_{i=1}^n (1+x_i t)$$ (The first requires expanding out each factor as a geometric series, and then comparing coefficients. Try it!) From these, we notice that $H(t)E(-t)=1$, and by multiplying the generating functions together and comparing coefficients, we find the identity $$h_n=h_{n-1}e_1-h_{n-2}e_2+\cdots+(-1)^{n-1}e_n$$ Just as before, this gives us a recursion for $h_n$ in terms of the $e_i$'s. With a bit of straightforward algebra, involving Cramer’s Rule, we can solve for $h_n$: $$h_n=\det\left(\begin{array}{cccccc} e_1 & 1 & 0 & 0 &\cdots & 0 \\ e_2 & e_1 & 1 & 0 & \cdots & 0 \\ e_3 & e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ e_{n-1} & e_{n-2} & e_{n-3} & e_{n-4} & \ddots & 1 \\ e_n & e_{n-1} & e_{n-2} & e_{n-3} & \cdots & e_1 \end{array}\right)$$ We can also use the same equations to solve for $e_n$ in terms of the $h_n$'s: $$e_n=\det \left(\begin{array}{cccccc} h_1 & 1 & 0 & 0 &\cdots & 0 \\ h_2 & h_1 & 1 & 0 & \cdots & 0 \\ h_3 & h_2 & h_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ h_{n-1} & h_{n-2} & h_{n-3} & h_{n-4} & \ddots & 1 \\ h_n & h_{n-1} & h_{n-2} & h_{n-3} & \cdots & h_1 \end{array}\right)$$ I find these two formulas to be more aesthetically appealing than the standard Newton-Girard formulas between the $p_n$'s and $e_n$'s, since they lack the pesky integer coefficients that appear in the first column of the matrix in the $p$-to-$e$ case. While perhaps not as mainstream, they are gemstones in their own right, and deserve a day to shine.
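As a quick sanity check (a small sympy sketch, not part of the original post), one can verify the $n=3$ instances of these determinant formulas in three variables:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
e1, e2, e3 = x + y + z, x*y + y*z + z*x, x*y*z
p3 = x**3 + y**3 + z**3

def h(d):
    """Complete homogeneous symmetric function: sum of all degree-d monomials."""
    return sum(x**a * y**b * z**c
               for a in range(d + 1) for b in range(d + 1) for c in range(d + 1)
               if a + b + c == d)

h1, h2, h3 = h(1), h(2), h(3)

P3 = sp.Matrix([[e1, 1, 0], [2*e2, e1, 1], [3*e3, e2, e1]]).det()  # Newton-Girard
H3 = sp.Matrix([[e1, 1, 0], [e2,   e1, 1], [e3,   e2, e1]]).det()  # h_3 from the e's
E3 = sp.Matrix([[h1, 1, 0], [h2,   h1, 1], [h3,   h2, h1]]).det()  # e_3 from the h's

print(sp.expand(P3 - p3), sp.expand(H3 - h3), sp.expand(E3 - e3))  # all three are 0
```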
Differential and Integral Equations, Volume 16, Number 5 (2003), 605-624.

Periodic solutions of Liénard equations at resonance

Abstract: In this paper we study the existence of periodic solutions of the second order differential equation $$ x''+f(x)x'+n^2x+\varphi(x)=p(t), \quad n\in {{\bf N}} . $$ We assume that the limits $$ \lim_{x\to\pm\infty}\varphi(x)=\varphi(\pm\infty),\quad \lim_{x\to\pm\infty}F(x)=F(\pm\infty)\quad \Big( F(x) =\int_0^xf(u)du \Big) $$ exist and are finite. We prove that the given equation has at least one $2\pi$-periodic solution provided that (for $A=\int_0^{2\pi}p(t)\sin nt\, dt$, $B=\int_0^{2\pi}p(t)\cos nt\, dt$) one of the following conditions is satisfied: $$ 2(\varphi(+\infty)-\varphi(-\infty))>\sqrt{A^2+B^2}, $$ $$ 2n(F(+\infty)-F(-\infty))>\sqrt{A^2+B^2}, $$ $$ 2(\varphi(+\infty)-\varphi(-\infty))=\sqrt{A^2+B^2}, \quad F(+\infty)\not=F(-\infty), $$ $$ 2n(F(+\infty)-F(-\infty))=\sqrt{A^2+B^2}, \quad \varphi(+\infty)\not=\varphi(-\infty). $$ On the other hand, we prove the non-existence of $2\pi$-periodic solutions provided that the inequality $$ 2(\varphi(+\infty)-\varphi(-\infty))+2n(F(+\infty)-F(-\infty))\leq\sqrt {A^2+B^2} $$ and other conditions hold. We also deal with the existence of $2\pi$-periodic solutions of the equation when $\varphi$ satisfies a one-sided sublinear condition and $F$ is bounded.

Article information: Source: Differential Integral Equations, Volume 16, Number 5 (2003), 605-624. First available in Project Euclid: 21 December 2012. Permanent link to this document: https://projecteuclid.org/euclid.die/1356060630. Mathematical Reviews number (MathSciNet): MR1973066. Zentralblatt MATH identifier: 1039.34034.

Citation: Capietto, Anna; Wang, Zaihong. Periodic solutions of Liénard equations at resonance. Differential Integral Equations 16 (2003), no. 5, 605--624. https://projecteuclid.org/euclid.die/1356060630
Consider $g(z) = (z+1)(z-1)(z-2)/z$. Then $$ \frac{g'(z)}{g(z)} = \frac{1}{z+1} - \frac{1}{z} + \frac{1}{z-1} + \frac{1}{z-2}. $$ Moreover, if $\gamma$ is any closed curve in $\Omega = \mathbb{C} \setminus ([-1, 0] \cup [1, 2])$, then it must wind around $-1$ and $0$ the same number of times, and around $1$ and $2$ the same number of times. So if $W(\gamma, z_0)$ denotes the winding number of $\gamma$ about $z_0$, then $$ \int_{\gamma} \frac{g'(z)}{g(z)} \, \mathrm{d}z = 2\pi i \left[ W(\gamma, -1) - W(\gamma, 0) + W(\gamma,1) + W(\gamma,2) \right]. $$ By the previous comment, $W(\gamma,-1) = W(\gamma, 0)$ and $W(\gamma, 1) = W(\gamma, 2)$, and so the above number is an even multiple of $2\pi i$. So $\frac{1}{2} \int_{\gamma} \frac{g'(z)}{g(z)} \, \mathrm{d}z$ is still an integer multiple of $2\pi i$. This allows us to define $F(z)$ as $$ F(z) = a \exp\left\{ \frac{1}{2} \int_{i}^{z} \frac{g'(w)}{g(w)} \, \mathrm{d}w \right\}, $$ where $a^2 = g(i)$ is chosen to satisfy $\operatorname{Re}(a) > 0$ and the integral is taken over any path in $\Omega$ joining $i$ to $z$. This is well-defined since the difference of any two such integrals is an even integer multiple of $2\pi i$, so after the factor $\frac{1}{2}$ it is an integer multiple of $2\pi i$, which is cancelled out by the exponential function. Moreover, it is easy to check that $F(z)^2 = g(z)$. So $F$ is the square root of $g$ on $\Omega$ satisfying the prescribed condition. More generally, assume that $r(z)$ is a rational function. Then its logarithmic derivative takes the form $$ \frac{r'(z)}{r(z)} = \sum_k \frac{n_k}{z - z_k}, $$ for some non-zero integers $n_k$ and points $z_k \in \mathbb{C}$. Indeed, if $n_k \geq 1$, then $z_k$ is a zero of order $n_k$; if $n_k \leq -1$, then $z_k$ is a pole of order $-n_k$. Then, on each domain $\Omega \subseteq \mathbb{C}$, an $m$-th root of $r$ is well-defined if the following condition holds: For each bounded connected component $C$ of $\mathbb{C}\setminus\Omega$, the sum of the $n_k$'s for which $z_k \in C$ is a multiple of $m$. The reasoning is essentially the same as before: Given this condition, the integral of $r'(z)/r(z)$ over any closed curve in $\Omega$ is a multiple of $2\pi i m$, and so the logarithm can be defined in $\mathbb{C} / 2\pi i m \mathbb{Z}$. So if we divide this logarithm by $m$ and compose with $\exp$, the ambiguity disappears, yielding a well-defined function whose $m$-th power equals $r(z)$. (I think this is an equivalent condition, but I do not want to delve into the technicalities I may encounter while attempting to prove the converse.)
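As a quick numerical illustration (my own sketch, not part of the answer above), one can integrate $g'/g$ along a straight path inside $\Omega$ and confirm that the resulting $F$ really squares to $g$:

```python
# Check F(z)^2 = g(z) for F(z) = a * exp( (1/2) * int_i^z g'(w)/g(w) dw ),
# a^2 = g(i), Re(a) > 0, integrating along a straight segment that stays in
# Omega = C \ ([-1,0] U [1,2]) because its imaginary part never drops below 1.
import numpy as np
from scipy.integrate import quad

g = lambda z: (z + 1) * (z - 1) * (z - 2) / z
dlog_g = lambda z: 1/(z + 1) - 1/z + 1/(z - 1) + 1/(z - 2)   # g'/g

def F(z, z0=1j):
    gamma = lambda t: z0 + t * (z - z0)           # straight path from z0 to z
    dgamma = z - z0
    integrand = lambda t: dlog_g(gamma(t)) * dgamma
    re, _ = quad(lambda t: integrand(t).real, 0.0, 1.0)
    im, _ = quad(lambda t: integrand(t).imag, 0.0, 1.0)
    a = np.sqrt(complex(g(z0)))                   # principal root, Re(a) > 0 here
    return a * np.exp(0.5 * (re + 1j * im))

z = 3 + 2j
print(F(z)**2, g(z))   # the two values agree to quadrature accuracy
```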
For a silly screen saver I'm trying to develop, I'd like to randomly generate a divergence-free 2D array of 2D vectors, and then use it to generate a line integral convolution plot. I've heard$^1$ that one way to do this is to generate random noise, and then project out the solenoidal component of its Helmholtz-Hodge decomposition. To do that, I tried using the following reasoning: A function $\mathbf{f}:\mathbb{R}^2\rightarrow\mathbb{R}^2$ has Helmholtz-Hodge decomposition$^2$ $$\mathbf{f}=\mathbf{h}+\nabla\phi+J\nabla\psi$$ where $$J=\pmatrix{0&-1\\1&0}$$ and where $\phi,\psi$ are scalar functions. For now, assume the harmonic component $\mathbf{h}$ vanishes. In Fourier space, this becomes $$\mathcal{F}\mathbf{f}=-i\mathbf{k}\hat{\phi}-iJ\mathbf{k}\hat{\psi}$$ and we can define a solenoidal projection operator on Fourier space as $$P=I-\frac{\mathbf{k}\otimes\mathbf{k}}{\mathbf{k}\cdot\mathbf{k}}$$ which projects a function onto its solenoidal component, via $$\mathcal{F}^{-1}P\mathcal{F}\mathbf{f}=J\nabla\psi.$$ I then tried to implement this in Mathematica, applying it to a $21\times 21\times 2$ random array. First I generate the random array, and apply the FFT to each of its two components: arr = RandomReal[{-1, 1}, {2, 21, 21}]; fArr = Fourier /@ arr; I then define $\mathbf{k}$ as a function of array index: k[k1_, k2_] := Mod[{k1 - 1, k2 - 1}, 21, -10]/21; Then I perform the projection on the Fourier components (the singularity at $\mathbf{k}=0$ is left alone using an If statement): dat = Transpose[ Table[If[k1 == 1 && k2 == 1, fArr[[;; , k1, k2]], fArr[[;; , k1, k2]] - k[k1, k2] (k[k1, k2].fArr[[;; , k1, k2]])/(k[k1, k2].k[k1, k2])], {k1, 21}, {k2, 21}], {2, 3, 1}]; Then I iFFT the two components: projArr = InverseFourier /@ dat; This gives a purely real array, and I would naively expect the result to be an approximation of $J\nabla\psi$. My question is: In what sense does the result approximate $J\nabla\psi$? Supposedly the Helmholtz-Hodge decomposition of 2D data is a nontrivial task: Chris Beaumont's HH_DECOMP routine is supposed to use FFTs to perform Helmholtz-Hodge decomposition, but he also says (in the comments at the top of the code) that the method seems inaccurate. Likewise, there are more complicated variational methods to perform Helmholtz-Hodge decompositions$^3$ of 2D data, which would seem to suggest that the simpler FFT method is somehow inadequate. Why? What does the FFT method get wrong? And is it wrong to assume the harmonic component vanishes for my random noise? (1): Stable Fluids, Jos Stam. (2): Feature Detection in Vector Fields Using the Helmholtz-Hodge Decomposition, Alexander Wiebel, page 12. (3): Discrete Multiscale Vector Field Decomposition, Yiying Tong.
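For comparison, here is a rough NumPy translation of the same projection (my own sketch, assuming periodic boundary conditions; the odd grid size sidesteps the Nyquist-mode subtlety). It also checks that the spectral divergence of the result vanishes to round-off:

```python
import numpy as np

n = 21
arr = np.random.uniform(-1.0, 1.0, size=(2, n, n))      # random 2D vector field
f_arr = np.fft.fft2(arr, axes=(1, 2))                    # FFT of each component

kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing='ij')
k = np.stack([kx, ky])                                   # wave vectors, shape (2, n, n)
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                           # avoid division by zero at k = 0

# P = I - k k^T / |k|^2, applied pointwise in Fourier space
k_dot_f = kx * f_arr[0] + ky * f_arr[1]
proj = f_arr - k * (k_dot_f / k2)
proj[:, 0, 0] = f_arr[:, 0, 0]                           # leave the k = 0 mode untouched

sol = np.real(np.fft.ifft2(proj, axes=(1, 2)))           # solenoidal component

# spectral divergence of the projected field should be ~ machine precision
div_hat = kx * np.fft.fft2(sol[0]) + ky * np.fft.fft2(sol[1])
print(np.max(np.abs(div_hat)))
```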
If a multi-engine aircraft suffers an engine failure while near minimum control speed (Vmc), one of the solutions is to bank up to 5 degrees into the operating engine to increase rudder effectiveness to maintain control. Why is it up to 5 degrees? What happens if the pilot banks more than 5 degrees into the operating engine? The 5 degrees of bank is to create a side slip component that offsets the skewed thrust line created by the asymmetric thrust, and the rudder input made to counteract the asymmetric thrust. You have the live engine on one wing that wants to make the airplane turn. You apply opposite rudder to stop the turn. With the rudder moment pushing sideways, you end up with a resultant thrust line that is offset, and the airplane proceeds forward with a lateral skew toward the dead engine even though you think you're going straight. By banking into the live engine, bank angle makes the airplane want to side slip toward the down wing, which is in the opposite direction to the skew effect mentioned above. The 5 degrees of bank is roughly what gives the necessary amount of side slip tendency. Close enough, in other words. The result is that you will be flying with 5 degrees of bank, but actually proceeding straight through the air. The skid ball will be offset into the bank because you are actually still in coordinated flight and the offset location of the ball is the true "centered" location. These figures are a regulatory baseline for sizing the ailerons, vertical stabilizer and rudder for an aircraft. The 5° bank limit is done to minimize the load factor on the aircraft while providing a force to counteract the rudder input required to maintain a coordinated flight path. In the event of an engine failure in a non-centerline-thrust twin or multi-engine aircraft, the operative engine is going to create a strong yawing moment about the vertical axis of the aircraft in the direction of the dead engine. Uncorrected, this results in a forward slip toward the side of the good engine and, when combined with the fuselage blanking airflow over the wing on the dead engine side, a rolling moment also develops about the longitudinal axis in the direction of the dead engine. At low speeds, combined with the high drag created by the slipping condition plus a 50% loss of total available thrust from the engine failure, this can quickly snowball into a departure from controlled flight and a crash. The typical action is to apply rudder in the direction of the good engine to counteract this forward slip. However, while the nose will be aligned with the desired flight path doing this, the actual flight path is a side slip towards the dead engine side, which creates excess drag. The only available counter to this is to bank the airplane into the direction of the good engine to counteract the rudder force using the horizontal component of lift. This results in a coordinated flight track parallel to the horizontal axis of the aircraft with a minimum amount of drag. If excessive bank angle is used to do this, the vertical component of lift is diminished, requiring a greater angle of attack to be imposed upon the wings to stay aloft. This in turn creates more induced drag. The regulations for aircraft design of light twins, therefore, dictated that, in a worst-case Vmca condition, directional control must be maintained with a bank angle NOT GREATER THAN 5°.
Harry Horlings, a former military test pilot and aviation consultant, published this excellent video on the nature of Vmc and what it means to the design and operation of aircraft. To maintain straight flight with one engine inoperative (say the right-hand engine), rudder input (nose left) is required to take out the yaw asymmetry from the engines. As rudder is deflected, it produces an aerodynamic side force (to the right), which, if left as is, would push the aircraft into a skidding turn. This would not constitute straight flight. To zero the side force, and to maintain level flight (ball centered), the only recourse is to utilize sideslip to generate an opposite aerodynamic side force. This means a sideslip nose left in our scenario, which means even more rudder nose left. As speed is decreased, increasingly larger rudder would be required. At some threshold, the rudder would be saturated and level flight would no longer be possible below this speed. But what if we relax the requirement of level flight? What if we allow a bank angle into the live engine (bank left wing down)? In this case, we are allowing a small portion of gravity, equal to $W\phi$ for small bank, to help out with the aerodynamic side force. Correspondingly, less sideslip and rudder would be needed. In fact, if sufficient bank angle is used (usually after a few deg), we can allow the aircraft to sideslip into the failed engine (nose right); a nose-right sideslip would generate a nose-left aerodynamic yawing moment, further decreasing the rudder required. By allowing banking, we can decrease the speed threshold at which the control surfaces would saturate, thus lowering the minimum control speed (Vmc). Throughout it all, rudder generates an aerodynamic rolling moment, as does sideslip, which must be countered by roll control. As bank angle is increased, the aircraft will be less rudder limited, and more roll-control limited. Under FAR 25.149 (and the old 23.149), a maximum bank angle of 5 deg is allowed for the determination of Vmc. Different aircraft will be limited differently at 5 deg bank; some may be limited by rudder, others by roll control, and still others by stall warning. For those still not convinced, please refer to the following equations, which must hold true for steady/straight flight: $$0=N_{engine}+qS_{ref}b_{ref}(C_{n_\beta}\beta+C_{n_{\delta r}}\delta r+C_{n_{\delta a}}\delta a+C_{n_{\delta s}}\delta s)$$ $$0=C_{l_\beta}\beta+C_{l_{\delta r}}\delta r+C_{l_{\delta a}}\delta a+C_{l_{\delta s}}\delta s$$ $$0=W\phi+qS_{ref}b_{ref}(C_{y_\beta}\beta+C_{y_{\delta r}}\delta r+C_{y_{\delta a}}\delta a+C_{y_{\delta s}}\delta s)$$ Even more additional information can be found in AC 25-7C Appendix 6. What would happen if you fly more than 5 deg into the live engine with OEI? Nothing much, unless you are flying at Vmc, which would be smaller than $V_2$ and $V_{REF}$. As to why 5 deg, and not 6 or 7 deg? My guess is that it's a rounded number that offers an adequate decrease in Vmc for performance, yet not so much as to introduce large lateral acceleration and a big disparity between (a high) low-weight rudder-limited OEI speed and (a low) high-weight rudder-limited OEI speed. The rudder is used as needed to prevent sideslip (as measured by a yaw string, not the slip-skid ball), and the aircraft is banked as needed to eliminate any turning tendency (heading change) due to the deflected rudder. That's really all there is to it.
Answer: Increases available rudder travel. Logically, we counter the yaw from the engine-out thrust imbalance with rudder toward the live engine. But now we have used up some of our rudder travel in that direction. Alternative: roll into the live engine. Slip into the live engine. Create side force on the Vstab/rudder without deflecting the rudder. Result: we now have more rudder travel available, BUT the plane will drift laterally off course toward the live engine. You can maintain heading by allowing the nose (thrust vector) to compensate for the drift, but now you are doing .... a forward slip! The bank-and-slip technique certainly centers the rudder a bit more. But to reduce drag (recenter the "ball") the best way is a coordinated turn into the live engine! But notice that to compensate for the lateral force vector from the live engine, the control inputs will feature a bit more rudder.
What is ICA? If we have two unique signals A and B, we can generate new signals by mixing them. These mixed signals (x) are what we are usually measuring. In order to unmix them again we need to know with what weights the source signals have been mixed. If we can get the mixing matrix (A), we can invert it to get the unmixing matrix (\( A^{-1}\), sometimes W) with which we can undo the mixing and obtain the underlying sources.

Notation: \( n \): number of samples; \( i \): number of sources; \( j \): number of recorded mixtures \( x \); \( x \): signals / recorded mixtures, \([j \times n]\); \( s \): original underlying sources, \([i \times n]\); \( A \): mixing matrix, \([j \times i]\); \( A^{-1}\) (also \( W \)): unmixing matrix, \([i \times j]\).

The ICA model: $$x = As$$ We are looking for the mixing matrix \( A\) that minimizes the dependence between the rows of the also unknown \( s \). Here we show that \( A^{-1} \) is the unmixing matrix: $$\begin{aligned} x &= As \\ A^{-1}x &= A^{-1}As \quad [\text{using } M^{-1}M = I] \\ A^{-1}x &= s \end{aligned}$$

Audio: Sounds are waves and can easily be recorded as digital signals with microphones. If you listen to two persons speaking (the sources s) at the same time, both source signals are mixed together (by A). If you record these mixed signals (x) with two microphones, you can try to disentangle the two recorded mixtures x into their original sources s.

EEG: In EEG you assume there are many different synchronously active neuronal patches (the sources). But with electrodes you always record a summation of these sources and never one source alone. ICA can be used to try to get back to those original sources.

Arbitrary Case: In a more abstract case we have two unique signals A and B. We can generate new signals by addition: $$ \begin{aligned} C &= 1\cdot A - 2\cdot B \\ D &= 1.73\cdot A + 3.41\cdot B \end{aligned}$$ We have two independent signals, \( s_{1}\) and \(s_{2}\). Mixing them results in two signals (\( x_{1} \) and \( x_{2}\)): $$ \begin{aligned} x_{1} &= \sum_{i=1}^{2}a_{1i}\, s_{i} \\ x_{1} &= a_{11} s_{1} + a_{12} s_{2} \\ x_{1} &= 1\cdot s_{1} - 2\cdot s_{2} \\ x_{2} &= 1.73\, s_{1} + 3.41\, s_{2} \end{aligned} $$ In general we have n independent signals s, and linear mixing is described by \( x = As \). In our case: $$ A = \begin{pmatrix} 1 & -2 \\ 1.73 & 3.41 \end{pmatrix}, \qquad x_j = \sum_{i=1}^{m}a_{ji}\, s_{i} \quad (\text{for } j = 1,2,\ldots, n). $$

Step 1. Demeaning: In a first step we demean the signal, that is, we subtract the mean value of each mixed signal from it. Demeaning (also called centering): $$x_{m} = x-E[x] $$ For example, if the mean amplitude of one channel of a microphone signal is 65 dB, we simply subtract this value from the channel; the mean of this channel is now 0.

Step 2. Whitening: Every mixing is part stretching (e.g. \(2A\)) and part rotation of all points in the signal (compare "mixture" and "original" in the interactive graphic).
To unmix, we therefore have to undo the stretching and undo the rotation. In the first step, the "sphering" or "whitening", we try to undo the stretching. After whitening the variables are uncorrelated (which does not imply independence!) and the mixing matrix is now orthogonal. $$ \begin{aligned} \operatorname{Cov}(N) &= \tfrac{1}{n-1}\, N N' \\ C &= \operatorname{Cov}(x_{m}) \\ V &= C^{-\frac{1}{2}} \\ \operatorname{Cov}(V x_{m}) &= \operatorname{Cov}(C^{-\frac{1}{2}} x_{m}) \\ &= \tfrac{1}{n-1}\,(C^{-\frac{1}{2}} x_{m})(C^{-\frac{1}{2}} x_{m})' \\ &= C^{-\frac{1}{2}}\,\big(\tfrac{1}{n-1}\, x_{m} x_{m}'\big)\, C^{-\frac{1}{2}} \\ &= C^{-\frac{1}{2}}\, C\, C^{-\frac{1}{2}} = I \end{aligned} $$ Multiplying, \( x_{w} = V x_{m} = C^{-\frac{1}{2}} x_{m} \) is the whitened signal.

Rotation matrices: Rotating the point \((0,1)\) clockwise by 45° results in \((\sqrt{1/2}, \sqrt{1/2})\), which is equal to the matrix multiplication $$ \begin{pmatrix} \sqrt{1/2} & \sqrt{1/2} \\ -\sqrt{1/2} & \sqrt{1/2} \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \sqrt{1/2} \\ \sqrt{1/2} \end{pmatrix}. $$

Relation of covariance and 'stretching': It is very hard to get a better intuitive understanding of covariance than reading this and this.

Step 3. Rotation: There are two main approaches to get the right rotation: maximizing statistical independence (e.g. infomax) and minimizing normality.

Maximizing statistical independence: Mutual information is closely related to statistical independence. It is a measure of how much information one can get about an outcome of B if A is already known. Minimizing mutual information maximizes independence (cf. examples).

Minimizing normality: The second approach exploits the central limit theorem (CLT). Simplified, the CLT states that the mixture (signals) of two independent distributions (sources) is often more normally distributed than the underlying signals. Therefore, if we minimize the "normal-distributedness" of the projections (the histograms above), we may get more independence. As a measure of normality, or Gaussianity, several methods are available. A simple approach would be kurtosis, a non-robust measure used to assess normality. As you can see in the visualisation, the histograms in the end are not very Gaussian at all; they seem to be uniform or reverse Gaussians.

Mutual information: You draw two cards from a deck, the first a King and the second a 7. These events share some information, as drawing a King reduces the probability of drawing another King (from 4/52 to 3/51). Therefore the mutual information (I) between the first and second drawing is not 0: \[ I(\text{Second};\text{First}) \not= 0. \] You throw two dice. The first shows a 5, the second a 6. These events are completely independent; they do not share mutual information. Knowing what one die rolled does not give you information about the other one: \[ I(\text{Second};\text{First}) = 0. \]

fastICA: The concept of negentropy can be used to estimate the non-Gaussianity of a distribution. Intuitively, entropy is high if the probability density function is even, and low if there are clusters or spikes in the pdf. Negentropy in words: it is the difference between the entropy of a normal distribution with the variance and mean of X and the entropy of the distribution X itself. Of all probability density functions with the same variance, the Gaussian has the highest entropy. Therefore negentropy is always positive, and zero only if X is itself normal.
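Putting Steps 1 and 2 into code, here is a small NumPy sketch (the two-source example and the mixing matrix are taken from the toy case above; everything else is an illustrative choice). After multiplying by $C^{-1/2}$ the mixtures are uncorrelated with unit variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
s = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])  # two non-Gaussian sources
A = np.array([[1.0, -2.0], [1.73, 3.41]])                          # mixing matrix from the example
x = A @ s                                                          # recorded mixtures

x_m = x - x.mean(axis=1, keepdims=True)      # Step 1: demeaning
C = np.cov(x_m)                              # covariance of the demeaned mixtures
vals, E = np.linalg.eigh(C)
V = E @ np.diag(vals**-0.5) @ E.T            # V = C^{-1/2}
x_w = V @ x_m                                # Step 2: whitening

print(np.round(np.cov(x_w), 6))              # approximately the identity matrix
```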
\( w_{i} \) is the i-th column of the unmixing matrix - we estimate each component iteratively. We try to minimize negentropy, using the formula below. The weight vector is normalized (cf. Limitations). If we want to estimate the 2nd, 3rd, ... source, we need to make sure the unmixing weights are orthogonal: make the weight-column vector orthogonal to the other vectors (why does this work?), normalize again, and either start from the beginning or finish.

Negentropy: $$ J(X) = H(X_{\text{Gauss}}) - H(X), $$ where \( X_{\text{Gauss}} \) is a Gaussian with the same covariance matrix as X.

1. Initialize \( w_{i} \) randomly
2. \( w_{i}^{+} = E(\phi^{'}(w_{i}^{T}X))w_{i} - E(x\phi(w_i^{T}X)) \)
3. \( w_{i} = \frac{w_{i}^{+}}{||w_{i}^{+}||} \)
4. For i=1 go to step 7, else go to step 5
5. \( w_i^+ = w_i - \sum_{j=1}^{i-1}(w_i^Tw_j)w_j \)
6. \( w_{i} = \frac{w_{i}^{+}}{||w_{i}^{+}||} \)
7. If we have extracted all sources, finish; else set \(i = i+1\) and go back to step 1

infomax: The infomax algorithm is an iterative algorithm trying to minimize mutual information. Mutual information is the amount of information that you get about the random variable X when you already know Y. If X and Y are independent, mutual information is 0. Therefore minimizing mutual information is one way to solve the ICA problem.

1. Initialize \( W(0) \) randomly
2. \( W(t+1) = W(t) + \epsilon(I - f(Wx)(Wx)^T)W(t)\)
3. If not converged, go back to step 2

Here \( f(x) \) is a nonlinear function, \( f(x) = \tanh(x)\), \( I \) is the identity matrix, and \( x \) is the original mixed signal. When the algorithm has converged, \(I-f(Wx)(Wx)^T = 0\), that is, \(f(Wx)(Wx)^T=I\), and therefore \(f(Wx)\) is orthogonal to \((Wx)^T\). The standard infomax algorithm with \(f(Y) = \tanh(Y)\) is only suitable for super-Gaussian distributions, as for sub-Gaussian distributions the function gets negative. Therefore an extension of the infomax algorithm, "extended infomax", is used, where the update term is $$(I - K_4\tanh(Y)Y^T)W(t), $$ where \(K_4\) is the sign of the kurtosis, thereby allowing the algorithm to switch between sub-Gaussian and super-Gaussian source learning.

Ambiguities and Limitations

a) It is not possible to recover the original ordering of the sources. Every permutation of the order can be matched by the weight matrix. Let \(P \) be an arbitrary permutation matrix for s: $$ x_{p} = APs. $$ There always exists a new mixing matrix $$A_{p}=AP^{-1}$$ such that $$x=A_{p}Ps.$$

b) It is not possible to recover the original scale of the sources. Every arbitrary scale \(k \) can be matched by dividing the weighting matrix by \(k \) and vice versa: $$ x = \frac{k}{k}As = \Big(\frac{1}{k}A\Big)(k\,s). $$

c) It is not possible to disentangle a mixture of two Gaussian variables. An arbitrary rotation of such a mixture will not change the shape of the histograms.

Assumptions

There are five main assumptions in ICA: (1) Each source is statistically independent of the others. (2) The mixing matrix is square and of full rank, i.e.
there must be the same number of recorded mixtures as sources and they must be linearly independent of each other. (3) There is no external noise - the model is noise-free. (4) The data are centered. (5) There is at most one Gaussian source.
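To make the fixed-point iteration concrete, here is a hedged, self-contained sketch of standard one-unit FastICA with a tanh nonlinearity and deflation (written with the usual sign convention; the synthetic data and all names are illustrative):

```python
import numpy as np

def fast_ica(x_w, n_components=2, n_iter=200, seed=1):
    """One-unit FastICA with a tanh nonlinearity and deflation, applied to
    whitened data x_w of shape (dim, n_samples)."""
    rng = np.random.default_rng(seed)
    dim, _ = x_w.shape
    W = np.zeros((n_components, dim))
    for i in range(n_components):
        w = rng.normal(size=dim)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ x_w                                   # projections w^T x
            # standard update: w+ = E[x g(w^T x)] - E[g'(w^T x)] w, with g = tanh
            w_new = (x_w * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx)**2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)             # deflation: stay orthogonal
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-9     # converged up to sign
            w = w_new
            if converged:
                break
        W[i] = w
    return W

# synthetic demo: mix two non-Gaussian sources, demean and whiten, then unmix
rng = np.random.default_rng(0)
s = np.vstack([rng.laplace(size=10_000), rng.uniform(-1, 1, size=10_000)])
x = np.array([[1.0, -2.0], [1.73, 3.41]]) @ s
x_m = x - x.mean(axis=1, keepdims=True)
vals, E = np.linalg.eigh(np.cov(x_m))
x_w = E @ np.diag(vals**-0.5) @ E.T @ x_m

W = fast_ica(x_w)
s_est = W @ x_w      # estimated sources, up to permutation, sign, and scale
```

Up to the order and scale ambiguities listed above, the rows of s_est should match the original Laplace and uniform sources.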
Here is an elementary observation (I mean “elementary” in the sense mathematicians sometimes use it: i.e. straightforward if you already understand it, otherwise gibberish) that I talked about at MIT’s category theory seminar, relating Grice’s maxims of Quantity and Quality to the mathematical notion of a Galois connection. Inscrutable Summary: For a state space W, the left adjoint of a monotone map (i.e. a left Galois connection), \(L_0\), from a set of utterances U to the poset (ordered by inclusion) \(\mathcal{P}(W)\) is the monotone map \(S_1: \mathcal{P}(W)\to U\) which takes a set of states and returns the strongest true utterance with respect to \(L_0\). This happens to be a very natural way to encode Grice’s maxims of Quality (roughly: speak truthfully) and Quantity (roughly: be as informative as possible, relative to what is relevant) simultaneously. Scrutable Explanation: Each \(w \in W\) is a state of the world, so that each element of \(\mathcal{P}(W)\) is a set of states. As usual, the simplest possible example is a reference game, where “state” just means the intended referent. Concretely, say that W = \(\{R_1, R_2, R_3\}\) as pictured below, and U = { red dress, dress, hat, silence}. Obviously arbitrary choices, but just for illustration. Say that the literal listener \(L_0\) maps an utterance u to the set of referents (i.e. states, i.e. worlds) compatible with u, mapping red dress to \(\{R_1\}\), dress to \(\{R_1, R_2\}\), hat to \(\{R_3\}\) and silence to \(\{R_1, R_2, R_3\}\). Note that we can make U a poset by defining the partial ordering on U where \(u \leq u’ \leftrightarrow L_0(u) \leq L_0(u’)\). For example, \(\mathit{dress} \leq \mathit{silence}\). Note that \(u \leq u’\) means that u is stronger than u’. It then follows (by the definition of the ordering on U) that \(L_0\) is a monotone map (i.e. a function that preserves the poset ordering) from U to \(\mathcal{P}(W)\). Galois Connections So far just definitions. Just one more: for monotone maps f and g, f is the left Galois connection of g iff: $$f(s) \leq u \leftrightarrow s \leq g(u)$$ It takes a bit of thinking to make sense of this strange definition, but the intuition is this: there’s no obvious notion of an exact inverse of g, because g might well not be surjective (or injective). But for a monotone map, there’s a notion of the best approximation of such an inverse. That approximation is f, as defined above. (In fact there are two, the left and right Galois connections, and more broadly, the left and right adjoints of a functor. A monotone map is a very simple case of a functor between very simple categories, namely posets). Maybe this direct corollary of the above definition will help: if I know g, then its left adjoint f is defined as: $$f(s) = \bigwedge(\{u : s \leq g(u)\})$$. I write \(\bigwedge(X)\) for a poset X to mean the greatest lower bound of X. OK, so now you can ask: what’s the left Galois connection of the literal listener \(L_0\)? Let’s call this left Galois connection \(S_1\), for reasons that will soon be clear. Again, note that \(L_0\) can’t just be inverted, because it’s in general not the case that for any subset s of W (i.e. element of \(\mathcal{P}(W)\)), there’s an expression which means exactly s under \(L_0\). It’s illustrative to work through an example, to see what \(S_1\) looks like. Using our case from above, what’s \(S_1(\{R_2\})\)? Well, \(S_1(\{R_2\}) = \bigwedge(\{u : \{R_2\} \leq L_0(u)\})\) = \(\bigwedge(\{dress, silence\})\) = \(dress\).
First you find all the utterances that map to supersets of \(\{R_2\}\). These are all the true utterances (Quality). Then you take the greatest lower bound (Quantity). So in other words, the definition of a left Galois connection gives you the following informative speaker \(S_1\): consider the set of all utterances compatible with your (possibly singleton) set of worlds, and choose the strongest of these. There’s something nice about how the maxims of Quality and Quantity fall out from this. We also obtain similar results for pragmatic implicatures (that I won’t sketch out here for reasons of laziness), namely that a literal speaker, in the form of a monotone map \(S_0\) from w \(\in\) W to us in \(\mathcal{P}(U)\), admits a left Galois connection \(L_1\) which returns the exhaustification (linguistics term) of the literal meaning of us. So this would model, for example, the fact that the pragmatic interpretation of a (possibly singleton) set of utterances us should give the smallest set of possible worlds that could have produced every \(u \in \mathit{us}\). The niceness of this correspondence between Galois connections and pragmatics suggests that something relatively deep is going on here, but I haven’t thought about it too much further. The sensible thing to do would be to consider the categorical generalization of Galois connections, namely adjoint functors, and to see if we get the same effect when we invert a more sophisticated functorial semantics. Summary:
* An informative speaker is a left Galois connection to a literal listener
* A pragmatic listener is a left Galois connection to a literal speaker
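To see the adjoint speaker in working form, here is a tiny sketch of the reference game above (the dictionary encoding, and using set size as a proxy for the greatest lower bound, are my own illustrative choices; the latter works here because the true utterances happen to form a chain):

```python
# literal listener L0: utterance -> set of compatible referents (states)
L0 = {
    "red dress": {"R1"},
    "dress":     {"R1", "R2"},
    "hat":       {"R3"},
    "silence":   {"R1", "R2", "R3"},
}

def S1(states):
    """Informative speaker as the left adjoint of L0: among all utterances u
    with states <= L0(u) (Quality), return the strongest one (Quantity)."""
    true_utts = [u for u, meaning in L0.items() if states <= meaning]
    return min(true_utts, key=lambda u: len(L0[u]))

print(S1({"R2"}))          # -> 'dress'      (true and strongest)
print(S1({"R1"}))          # -> 'red dress'
print(S1({"R1", "R3"}))    # -> 'silence'
```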
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as exponential of an Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true. Also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$ is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself? That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
Universality and average-case algorithm runtimes

Let $H > 0$ be a positive-definite $N \times N$ matrix, and let $x_0$ be an $N$-dimensional random vector. The power method to compute the largest eigenvalue of $H$ is given by the iteration, for $j = 1,2,\ldots$ $$y_j = x_{j-1}/\|x_{j-1}\|_2,$$ $$x_j = Hy_j,$$ $$\mu_j = x_j^* y_j.$$ Then $\mu_j \to \lambda_{\max}$, the largest eigenvalue of $H$ as $j \to \infty$. With P. Deift, we considered the case where $H = XX^*/M$ is a sample covariance matrix. Here $X \in \mathbb C^{N \times M}$ has independent entries with mean zero and variance one (and some technical tail assumptions; $X$ could be real or complex as well). Define the (random) iteration count $$ T(H,x_0,\epsilon) = \min\{ j: |\mu_j - \mu_{j-1} | < \epsilon^2 \}. $$ The algorithm is terminated when the difference of two successive iterations is small. We proved the following universal, average-case theorem for $T$: Theorem: Let $\epsilon \ll N^{-5/3 - \sigma}, \sigma > 0$ and $M \sim N/d$, $0 < d < 1$. Then there exists a distribution function $F_\beta^{\mathrm{gap}}(t)$ and a constant $c$ depending only on $d$ such that $$ \lim_{N \to \infty} \mathbb P \left( \frac{ T(H,x_0,\epsilon)}{c N^{2/3} (\log \epsilon^{-1} - \frac{2}{3} \log N)} \leq t \right) = F_\beta^{\mathrm{gap}}(t), \quad t \geq 0.$$ Furthermore, the distribution function $F_\beta^{\mathrm{gap}}(t)$ can be identified as the limiting distribution, after rescaling, of the inverse of the top gap in the real ($\beta =1$) and complex ($\beta = 2$) cases. This theorem gives both universality for and the average-case behavior of the power method. Similar results hold for the inverse power method, the QR algorithm and the Toda algorithm.
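A short NumPy sketch of the halting time $T(H, x_0, \epsilon)$ defined above, applied to a sample covariance matrix (the dimensions and tolerance are illustrative choices, not the scaling regime of the theorem):

```python
import numpy as np

def halting_time(H, x0, eps):
    """Run the power method and return the first j with |mu_j - mu_{j-1}| < eps^2."""
    x, mu_prev = x0, None
    for j in range(1, 100_000):
        y = x / np.linalg.norm(x)
        x = H @ y
        mu = np.vdot(x, y).real          # mu_j = x_j^* y_j
        if mu_prev is not None and abs(mu - mu_prev) < eps**2:
            return j
        mu_prev = mu
    return None                          # did not halt within the iteration budget

rng = np.random.default_rng(0)
N, d = 200, 0.5
M = int(N / d)
X = rng.standard_normal((N, M))          # real case, beta = 1
H = X @ X.T / M                          # sample covariance matrix
print(halting_time(H, rng.standard_normal(N), eps=1e-6))
```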
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise feeds $b,r$ back into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions.
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
2018-09-11 04:29
Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al.
The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) detectors manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...]
arXiv:1804.05449. - 13 p. Preprint - Full text

2018-08-25 06:58
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana)
The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...]
2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453

2018-08-23 11:31
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.)
Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...]
2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.)
Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...]
2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 548 (2005) 355-363

2018-08-23 11:31
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.)
The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and conduces to the increase of the leakage current of the detector, decreases the satisfactory Signal/Noise ratio, and increases the effective carrier concentration. [...]
2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE

2018-08-22 06:27
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia)
In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable mean to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...]
2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.)
A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...]
2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

2018-08-22 06:27
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main))
High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$.
In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594 Detaljerad journal - Similar records
Introduction: A few days ago I decided to analyse the symmetries of the two-thirds power law [1] and this analysis naturally led to the following kinematic sequence: where is a volume-preserving transformation and the position is updated using: \begin{equation} x_{n+1} = x_n + \dot{x}_n\cdot \Delta t + \frac{1}{2} \ddot{x}_n \cdot \Delta t^2 \end{equation} Now, in order to make sure that I decided to use the trigonometric identity: \begin{equation} \cos^2(\theta) + \sin^2(\theta) = 1 \end{equation} so that we only have to sample three random numbers, so we have: For the rest of the discussion we shall assume that and . Now, the key question I have is whether: \begin{equation} \mathbb{E}\big[\frac{\Delta y_n}{\Delta x_n}\big] = \text{Cst} \end{equation} i.e. whether the expected value of the rate of change is constant. Using a symmetry to simplify calculations: A tale of two branching processes: The following diagram, derived from the first figure, is a particularly useful method for visualising the trajectory of our stochastic sequence: If we use $\Sigma_{1}^n$ and $\Sigma_{2}^n$ to denote random variables associated with the first and second kinds of branching processes, we may simplify (1) so we have: \begin{equation} \ddot{x_n} = \ddot{x_0} \cdot c_0 \cdot \Sigma_{2}^n + \dot{x_0} \cdot d_0 \cdot \Sigma_{2}^n = q_1 \Sigma_{2}^n \end{equation} \begin{equation} \dot{x_n} = \ddot{x_0} \cdot a_0 \cdot \Sigma_{1}^n + \dot{x_0} \cdot b_0 \cdot \Sigma_{1}^n = q_2 \Sigma_{1}^n \end{equation} Similarly, we find that for $\dot{y_n}$ and $\ddot{y_n}$ we have: \begin{equation} \ddot{y_n} = \ddot{y_0} \cdot c_0 \cdot \Sigma_{2}^n + \dot{y_0} \cdot d_0 \cdot \Sigma_{2}^n = q_3 \Sigma_{2}^n \end{equation} \begin{equation} \dot{y_n} = \ddot{y_0} \cdot a_0 \cdot \Sigma_{1}^n + \dot{y_0} \cdot b_0 \cdot \Sigma_{1}^n = q_4 \Sigma_{1}^n \end{equation} Analysis of the rate of change: Given equation (3) we may deduce that: \begin{equation} \frac{\Delta y_n}{\Delta x_n} = \frac{y_{n+1}-y_n}{x_{n+1}-x_n} = \frac{\dot{y_n} \Delta t + \frac{1}{2} \ddot{y_n} \Delta t^2}{\dot{x_n} \Delta t + \frac{1}{2} \ddot{x_n} \Delta t^2} = \frac{\dot{y_n} + h\ddot{y_n}}{\dot{x_n} + h \ddot{x_n}} \end{equation} where $h = \frac{\Delta t}{2}$. Now, using equations (7), (8), (9) and (10) we find that: \begin{equation} \frac{\Delta y_n}{\Delta x_n} = \frac{\dot{y_n} + h\ddot{y_n}}{\dot{x_n} + h \ddot{x_n}} = \frac{q_4 \Sigma_{1}^n + h \cdot q_3 \Sigma_{2}^n}{q_2 \Sigma_{1}^n + h \cdot q_1 \Sigma_{2}^n} \end{equation} An experimental observation: Expected values of $\Sigma_{1}^n$ and $\Sigma_{2}^n$: It’s useful to note that, given that the matrices $M_n$ are independent and: \begin{equation} \forall n \in \mathbb{N}, \mathbb{E}[M_n] = 0 \end{equation} we may deduce that: \begin{equation} \mathbb{E}[\Sigma_{1}^n] =\mathbb{E}[\Sigma_{2}^n]= 0 \end{equation} Numerical experiments: My intuition told me from the beginning that (12) might be useful for analysing the expected value of $\frac{\Delta y_n}{\Delta x_n}$. In fact, numerical experiments suggest: \begin{equation} \frac{\Delta y_n}{\Delta x_n} \approx \frac{q_4}{q_2} \end{equation} To be precise, numerical experiments show that 100% of the time the sign of $\frac{\Delta y_n}{\Delta x_n}$ is in agreement with the sign of $\frac{q_4}{q_2}$, and more than 70% of the time these two numbers disagree with each other by less than a factor of 1.5 i.e. a 30% difference.
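To make this observation easy to play with, here is a minimal numerical sketch in Python. The update matrices from the figures are not reproduced here, so the coefficients $q_1,\dots,q_4$ and the cumulative factors $\Sigma_1^n$, $\Sigma_2^n$ are simply modelled as generic nonzero random numbers (an assumption, not the post's actual construction); the point is only to watch how $\frac{\Delta y_n}{\Delta x_n}$ tracks $\frac{q_4}{q_2}$ as $h$ shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

# One random instance of the quantities appearing in the ratio above.
# The q's and the cumulative factors Sigma_1^n, Sigma_2^n are treated as
# generic nonzero random numbers here; their exact law depends on the
# update matrices, which this sketch does not reproduce.
q1, q2, q3, q4 = rng.standard_normal(4)
sigma1 = np.prod(rng.standard_normal(10))   # stand-in for Sigma_1^n
sigma2 = np.prod(rng.standard_normal(10))   # stand-in for Sigma_2^n

target = q4 / q2
for h in [1.0, 0.1, 0.01, 0.001, 1e-4]:
    r = (q4 * sigma1 + h * q3 * sigma2) / (q2 * sigma1 + h * q1 * sigma2)
    print(f"h = {h:g}:  Delta y / Delta x = {r:+.6f}   (q4/q2 = {target:+.6f})")
```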
Analysis: If we take the limit as $h \to 0$: \begin{equation} \lim_{h \to 0} \frac{\Delta y_n}{\Delta x_n} = \lim_{h \to 0} \frac{q_4 \Sigma_1^n+h \cdot q_3 \cdot \Sigma_2^n}{q_2 \Sigma_1^n+h \cdot q_1 \cdot \Sigma_2^n} = \frac{q_4}{q_2} \end{equation} so it appears that what I observed numerically depends on $h$ being small, and it’s still not clear to me how to calculate $\mathbb{E}\big[\frac{\Delta y_n}{\Delta x_n}\big]$ directly, which was my original question. Conjecture: While I’m still looking for a closed form expression for $\mathbb{E}\big[\frac{\Delta y_n}{\Delta x_n}\big]$, my previous analysis leads me to conclude that, for any random matrices sampled i.i.d., as $h \to 0$: \begin{equation} \lim_{h \to 0} \frac{\Delta y_n}{\Delta x_n} = \frac{q_4}{q_2} \end{equation} which is a general result I didn’t expect in advance. Now, given that there is strong numerical evidence for (6) regardless of the magnitude of $h$, I wonder whether we can show: \begin{equation} \lim_{h \to 0} \frac{\Delta y_n}{\Delta x_n} = \mathbb{E}\big[\frac{\Delta y_n}{\Delta x_n} \big] \end{equation} References: D. Huh & T. Sejnowski. Spectrum of power laws for curved hand movements. 2015.
Answer $$1+\cos 2x-\cos^2x=\cos^2x$$ This equation is an identity, which can be proved by transforming the left side using the identity from Exercise 69. Work Step by Step $$1+\cos 2x-\cos^2x=\cos^2x$$ We work on the left side: $$A=1+\cos 2x-\cos^2x$$ The exercise hints at using the result from Exercise 69, which states that $$\cos 2x=\cos^2x-\sin^2x$$ That means we can replace $\cos2x$ in $A$ with $\cos^2x-\sin^2x$: $$A=1+\cos^2x-\sin^2x-\cos^2x$$ $$A=1-\sin^2 x$$ Now recall that $\cos^2\theta=1-\sin^2\theta$. So, $$A=\cos^2x$$ which equals the right side, so the equation has been verified to be an identity.
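If you like to double-check such manipulations mechanically, a quick symbolic check (a sketch using SymPy; the variable name is arbitrary) confirms that the difference of the two sides simplifies to zero:

```python
import sympy as sp

x = sp.symbols('x', real=True)
lhs = 1 + sp.cos(2 * x) - sp.cos(x)**2
rhs = sp.cos(x)**2

# If the identity holds for all x, the difference simplifies to 0
print(sp.simplify(lhs - rhs))  # expected output: 0
```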
This is the first of a series of blog posts on rendering in Static Sky. We yanked out Unity’s lighting system and replaced it with our own, super fast one that’s specialised for rendering tons of bumped dynamic lights at 60FPS. This post will talk about the hybrid lighting model that we’re using. We want to do noir, so what does that actually mean? It means a high-contrast world, with lots of light sources. Typically we’d be doing nighttime in the city. This means headlight beams, billboards lighting up the world, somebody being lit by a streetlight as they pass under it. In other words, lots of small light sources, most of which change – so there’s very little we can bake. We’re a small shop. This means that we simply do not have time for bake steps – hence we didn’t want to go lightmapped (waiting to bake is such a creativity killer). It’s important for us to remain nimble, so this was seen as a key feature. Basic Idea The solution we settled on is essentially a mix between vertex and fragment lighting. The trick is that for each vertex, we look at the lights affecting the object and figure out an ‘average light’. The vertex shader sends this interpolated (or hybrid) light into the fragment shader, where we shade it just like we would any other pixel-driven light. This lighting system mainly runs in the vertex shader. The first step is to find out which lights affect which vertex: In the image above, the left image is the shaded result, the right one shows the per-vertex influence of each light. It fades to black at the range of the light. Basically, the vertex shader figures out a color and light direction per vertex. This works really well, but what happens when the lights begin to overlap? Basically we need to average them out. In the areas where the lights overlap, it’s not possible to represent the lighting exactly with one light – so we need to approximate somehow. What I do is that while I’m adding up to build the hybrid light, I also calculate how much light is coming in from all the lights combined – assuming there’s no bump \( \underline {\vec{n} \cdot \vec{l}} \). This gives me an idea of how much light the vertex should receive in total. Then I subtract the hybrid light from the total lighting, to get a per-vertex ambient color that corrects the errors from the averaging. It’s not perfect and loses some bump, but I’d rather that than have the combined lighting be too dark in unpredictable ways. To sum up: this doesn’t work that well when lights overlap (they tend to blend together in a blur), but for many small point lights hitting large objects, it works really well. See below: Left: The sphere works well as the lights come from opposite sides. Middle: Hybrid lights on a bad day. Right: Unity built-in. The image in the middle has lost quite a bit of bump definition in the region between the lights. Let’s take a look at how we implement this. Note: This is inspired by a technique that you can read about here. Unfortunately, that process is patented, so while we really liked the idea, we had to come up with a completely different approach for implementation. Implementation I split the calculations in 2 parts: First we need to figure out the direction of our hybrid light, by averaging the incoming light. Once we have the direction, we need to figure out the color of light coming from that direction. We want to stuff the remainder into our per-vertex ambient term. Hybrid light direction In this step we go over all our lights and figure out how important they are. The goal is to figure out the direction where most of the illumination is coming from.
This means we care about attenuation, light brightness, and especially how much the light faces the vertex normal (\(\underline{\vec{l} \cdot \vec{n}}\)). When calculating facing, it’s worth noting that the most interesting bump happens at an angle to the main surface, so I use wraparound to calculate it ( \(\vec{n} \cdot \vec{l} \cdot 0.5 + 0.5\) ). So we get: $$Weight_L = Atten_L \cdot Intensity_L \cdot (\vec{n} \cdot \vec{l}_L \cdot 0.5 + 0.5)$$ For \(Intensity_L\), I add up the light’s RGB values and multiply with a fudge factor. You could use the greyscale value of the light, but I figured e.g. a highly saturated blue light was as important as a green light. It’s worth playing around with this formula in a real scenario (esp. once you get specular in). This is what I’m using for now. We multiply the weights by the normalized light directions, so the hybrid light direction becomes: $$ \vec{hybrid} = \sum_{L=1}^n \vec{l}_L \cdot Weight_L $$ Figuring out the color Now that we have the hybrid light direction, it’s time to figure out how well each source light maps to it. Basically we do $$ hybridColor = \sum_{L=1}^n \underline{\vec{l}_L\cdot \vec{hybrid}} \cdot color_L $$ The net effect of this is that only lights that actually ended up getting represented by our hybrid light direction contribute. The others are lost – we need to fix that. Ambient correction We want to calculate how much light we need to add back in. What we do is a standard diffuse calculation for each vertex, then subtract the hybrid light: $$ ambientColor = \Big(\sum_{L=1}^n \underline{\vec{l}_L \cdot \vec{n}} \cdot Atten_L \cdot color_L \Big) - hybridColor $$ We add this in as a per-vertex ambient in the pixel shader. The effect of this is that when we have a hopeless case for hybrid lighting (different colored lights coming from different directions), we still get the correct brightness in the scene – the bumps just fade away. That’s it This outlines the key concept for the hybrid lighting. In the future, I’ll look at improving the light’s falloff curves & how we get the lights into the system. In the meantime, please ask away in the comments, and I’ll do whatever I can to answer. Happy coding
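To make the three steps concrete, here is a rough CPU-side sketch of the per-vertex math in Python/NumPy. This is not the Static Sky shader code; the function name, the clamping choices and the plain RGB-sum intensity are my own assumptions, and a real implementation would live in the vertex shader.

```python
import numpy as np

def hybrid_light(normal, light_dirs, light_colors, attens):
    """Sketch of the per-vertex hybrid-light averaging described above.

    normal       : (3,) vertex normal
    light_dirs   : (L, 3) unit vectors from the vertex toward each light
    light_colors : (L, 3) RGB intensities
    attens       : (L,) attenuation factors in [0, 1]
    Returns (hybrid_dir, hybrid_color, ambient_color).
    """
    normal = normal / np.linalg.norm(normal)

    # Wrap-around facing term n.l * 0.5 + 0.5, so grazing lights still count
    facing = light_dirs @ normal * 0.5 + 0.5
    intensity = light_colors.sum(axis=1)          # simple stand-in for Intensity_L
    weights = attens * intensity * facing

    # Weighted average direction of the incoming light
    hybrid_dir = (weights[:, None] * light_dirs).sum(axis=0)
    hybrid_dir /= np.linalg.norm(hybrid_dir) + 1e-8

    # Colour: only lights that project onto the hybrid direction contribute
    proj = np.clip(light_dirs @ hybrid_dir, 0.0, 1.0)
    hybrid_color = (proj[:, None] * attens[:, None] * light_colors).sum(axis=0)

    # Ambient correction: total unbumped diffuse minus what the hybrid light gives
    ndotl = np.clip(light_dirs @ normal, 0.0, 1.0)
    total_diffuse = (ndotl[:, None] * attens[:, None] * light_colors).sum(axis=0)
    ambient_color = np.maximum(total_diffuse - hybrid_color, 0.0)

    return hybrid_dir, hybrid_color, ambient_color

# Two opposing lights hitting a vertex that faces +Z
n = np.array([0.0, 0.0, 1.0])
dirs = np.array([[0.7, 0.0, 0.7], [-0.7, 0.0, 0.7]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
cols = np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]])
att = np.array([1.0, 0.6])
print(hybrid_light(n, dirs, cols, att))
```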
I came across an exercise in Ahlfors’ Complex Analysis the other day that got me thinking. The exercise asked to prove that the complex numbers $a$, $b$, and $c$ form the vertices of an equilateral triangle if and only if $a^2+b^2+c^2=ab+bc+ca.$ It struck me as quite a nice, simple, and symmetric condition. My first instinct, in going about proving this, was to see if the condition was translation invariant, so that one of the points can be moved to the origin. Indeed, if we subtract a constant $z$ from each of $a,b,c$ the equation becomes $(a-z)^2+(b-z)^2+(c-z)^2=(a-z)(b-z)+(b-z)(c-z)+(c-z)(a-z),$ which simplifies to the original equation after expanding each term. So, we can assume without loss of generality that $a=0$, and we wish to show that $0$, $b$, and $c$ form the vertices of an equilateral triangle if and only if $b^2+c^2=bc$. To finish up, since we can multiply $b$ and $c$ by a constant scaling factor without changing the condition, we can further assume that $b=1$. So then $1+c^2=c$, implying that $c=\frac{1}{2}\pm \frac{\sqrt{3}}{2}i$. So indeed, the three points form an equilateral triangle. QED. This proof works, but is somehow deeply unsatisfying. I wanted to find a more “symmetric” proof, that didn’t involve moving one of the points to an origin and another to an axis. Such a coordinate-free condition should have a coordinate-free proof. Another way to approach it is to first manipulate the condition a bit: \begin{eqnarray*} a^2+b^2+c^2-ab-bc-ca&=& 0 \\ 2a^2+2b^2+2c^2-2ab-2bc-2ca &=& 0 \\ (a^2-2ab+b^2)+(b^2-2bc+c^2)+(c^2-2ac+a^2) &=& 0 \\ (a-b)^2+(b-c)^2+(c-a)^2 &=& 0 \end{eqnarray*} So, we have re-written the equation as a sum of three squares that equals $0$. (Too bad we’re looking for complex and not real solutions!) Setting $x=a-b$, $y=b-c$, and $z=c-a$, the condition now becomes $x^2+y^2+z^2=0$, along with the additional equation $x+y+z=0$. Notice that $x$, $y$, and $z$ are, in some sense, the sides of the triangle $abc$ as vectors. So, we are trying to show that if $x^2+y^2+z^2=0$ and $x+y+z=0$, then $x$, $y$, and $z$ all have the same distance to $0$ and are spaced $120^\circ$ apart from each other around the origin. This is starting to sound more tractable. In fact, upon closer inspection, we are simply studying the intersection of two curves in the complex projective plane. The points in this plane are equivalence classes of triples $(x:y:z)$, not all zero, up to scaling by a complex number (which is dilation and rotation, geometrically). So, we can think of the equation $x^2+y^2+z^2=0$ as a quadratic curve in this plane, and the equation $x+y+z=0$ as a line, and we simply want to know where these curves intersect. Intuitively, a quadratic curve (such as a circle or a parabola) and a line should intersect in at most 2 points. Indeed, Bezout’s theorem tells us that two complex projective curves of degrees $d$ and $e$ intersect in at most $de$ points. Here, $d$ and $e$ are $2$ and $1$, so we have at most $2$ solutions $(x:y:z)$ to the equations $x^2+y^2+z^2=0$ and $x+y+z=0$. But we already know two solutions: first, any $x,y,z$ that are equal in norm and spaced at $120^\circ$ angles around $0$ will satisfy $x+y+z=0$, since they form an equilateral triangle with its centroid at the origin. Since squaring them will double their angles, $x^2$, $y^2$, and $z^2$ are at angles of $240^\circ$ from each other, which is the same as $120^\circ$ in the opposite direction! So, $x^2+y^2+z^2=0$ as well in this case.
Moreover, such $x$, $y$, and $z$ are unique up to dilation, rotation, and reflection of the equilateral triangle (say, interchanging $x$ and $y$). But scaling or rotating $x$, $y$, and $z$ by a complex constant does not change the point $(x:y:z)$, and so the only other distinct point is formed by reflecting the triangle. (Thanks to Bryan Gillespie for this helpful geometric insight!) So we already know two distinct solutions, and by Bezout’s theorem there cannot be any others. It follows that the original triangle $abc$ must be equilateral. Somehow more satisfactorily, QED.
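A quick numerical sanity check of the identity is easy to run. The sketch below (plain Python/NumPy; the helper name is arbitrary) builds an equilateral triangle from a cube root of unity and compares it with a random, generically non-equilateral triangle:

```python
import numpy as np

rng = np.random.default_rng(1)

def condition(a, b, c):
    """|a^2 + b^2 + c^2 - (ab + bc + ca)| for complex a, b, c."""
    return abs(a*a + b*b + c*c - (a*b + b*c + c*a))

# Equilateral triangle: arbitrary centre, radius and orientation
centre = rng.standard_normal() + 1j * rng.standard_normal()
radius = 2.0
phase = np.exp(1j * rng.uniform(0, 2 * np.pi))
w = np.exp(2j * np.pi / 3)                      # primitive cube root of unity
a, b, c = (centre + radius * phase * w**k for k in range(3))
print(condition(a, b, c))                        # ~0 up to floating-point error

# A generic triangle
a, b, c = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(condition(a, b, c))                        # typically far from 0
```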
Almost certainly not. As in, "you'll have more chance of winning the Lottery than you will of killing them by this method". Antimatter annihilation of a single atom - we'll be good here and say one with a hefty nucleus like, say, iron - releases $$\left(2\ \mathrm{atoms}\right) \times \left(\frac{55.8452\ \mathrm{g}}{1\ \mathrm{mol}}\right) \times \left(\frac{1\ \mathrm{kg}}{1000\ \mathrm{g}}\right) \times \left(\frac{1\ \mathrm{mol}}{6.022 \times 10^{23}\ \mathrm{atoms}}\right) \times c^2 \approx 1.67 \times 10^{-8}\ \mathrm{J}$$ which is 16.7 nanojoules, or over 100 GeV, of energy. (The "2 atoms" factor is because you need a second atom's worth in equivalent - not necessarily in the form of a literal single atom - of ordinary matter to complete the annihilation.) The release of this will likely not be all at once, but rather will basically consist of the heavy anti-iron atom, upon teleportation to the brain center, annihilating with some lighter atom, which will cause it to explode catastrophically into a shower of lighter particles and anti-particles as well as VERY hard (100 MeV+) gamma rays from the anti-nucleon annihilations. These anti-particles will also collide with and cause similar explosions of the atoms they encounter elsewhere, producing even more showers of tertiary, quaternary, etc. ionizing particles. Essentially it's a demolition derby on an atomic scale with billions of bits of high-energy matter flying around and knocking apart everything in their wake - DNA, proteins, and more. Keep in mind that a chemical bond has energy only on the order of 1 eV, so this is enough to break on the order of 100 billion chemical bonds. Now that sounds rather extreme. But there are two things to keep in mind here: Even a single cell, if we for simplicity [and wrongly] treat it as a sphere of water 10 µm in diameter, contains about 17 trillion molecules and thus 34 trillion chemical bonds. Effectively there's only enough energy to break about 0.3% of them. Granted, that could be considerably destructive to that single cell, and thus you might expect we could at least kill one neuron with this (you cannot turn a neuron cancerous, because neurons cannot divide, though if you get something like a glial cell, then it's possible in theory, and this is a real and actually common type of brain tumor, called a glioma). However, that assumes all the particles are absorbed in the neuron, and that will almost surely not be the case, because that would mean total absorption within 5 micrometers assuming it appears dead center, and these forms of radiation are far more penetrating. The result is that maybe you break a few thousand or a few million bonds spread over the entire brain - something with maybe over $10^{24}$ atoms in it. That will be virtually unnoticeable, which is our second point. The 100 GeV of energy released here corresponds to about a hundred thousand typical 1 MeV particles of the type that naturally exist in background radiation, not taking into account the possibly increased penetration of some of the highest-energy products, which will make it even less damaging(*). As a dose to the brain tissue itself, it corresponds to (treating it like gamma, which again will not quite be right, but we just want the order of magnitude, and using 1.5 kg for the mass of a brain) around 10 nanosieverts (nSv) of dose.
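If you want to reproduce the back-of-the-envelope numbers above, the following short Python sketch redoes the arithmetic. The constants are rounded, and the 1 eV-per-bond and 1.5 kg brain figures are the same rough assumptions used in the text:

```python
# Back-of-the-envelope numbers, SI units
N_A  = 6.022e23     # Avogadro's number, 1/mol
M_Fe = 55.845e-3    # molar mass of iron, kg/mol
c    = 2.998e8      # speed of light, m/s
eV   = 1.602e-19    # J per eV

# One anti-iron atom annihilating with an equal mass of ordinary matter
m = 2 * M_Fe / N_A                 # kg of matter + antimatter converted
E = m * c**2
print(f"energy released: {E:.2e} J = {E / 1e9 / eV:.0f} GeV")   # ~1.7e-8 J, ~100 GeV

# Rough bond-breaking budget at ~1 eV per chemical bond
print(f"bonds breakable: {E / eV:.1e}")                          # ~1e11

# Dose if all of it were deposited in a 1.5 kg brain, gamma-like weighting
dose_Sv = E / 1.5
print(f"dose: {dose_Sv * 1e9:.0f} nSv")                           # ~10 nSv
```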
The average background exposure in the United States is 3.1 millisieverts (mSv) per year (cite: https://www.nrc.gov/reading-rm/doc-collections/fact-sheets/bio-effects-radiation.html) or about 99 nSv/Ms. Thus your brain picks up about this much every 0.1 Ms, or 100 ks, or a bit over a day (86.4 ks). In effect, you get an extra day-and-change's worth of normal background dosage for this stint. Very unlikely to kill, and impossible to kill "instantly". In fact such ultra-low doses may even have a protective, and not harmful, effect due to possible radiation hormesis (not sure what the evidence on this is as of now). Nonetheless, there is a potentially useful lateral angle to this that might be worth considering, and that's that if people generally have a fear of things like "antimatter" that they've seen in movies and don't necessarily understand very well except that they make things go "boom", such a thing could be a useful psychological control tactic on at least some of the population. If you want to make the threat credible, I'd suggest instead having some kind of device in the brain that creates a small artificial aneurysm. A burst aneurysm can kill very fast, and if the device can also self-destruct so as not to leave residue, it could look like a "natural" event to an unsophisticated pathologist. Such a thing might work by, for example, being placed near a suitable blood vessel and then, upon triggering, releasing some kind of chemicals that partially break down the vessel wall, weakening it and thus allowing a swelling or hernia of blood (the aneurysm) to form, which then bursts and causes massive brain damage. All the better since you can control the placement in the brain to target the areas most likely to cause death or at least major disability. (*) You might think highly-penetrating radiation is "worse" than lower-penetrating, e.g. gamma is "worse" than alpha, but this is only with regard to the fact that an external source of alpha is "better" in that it can only burn the skin, while gamma, due to being penetrating, can "burn" all tissues through the full thickness of the body uniformly, leading to radiation poisoning, essentially a "systemic radiation burn". But that's only for an external source, with the skin blocking. In fact, if the source is ingested, alpha particles are much worse, because they have much more ionizing punch per particle. Effectively you're now comparing them both on a fair playing field as full-body irradiators, and the gammas are considerably less damaging due to the fact that greater penetration means less chance of interaction. This is part of why polonium-210, and not, say, cobalt-60 [a strong, and relatively "pure", gamma emitter, and much easier (and cheaper!) to get ahold of], was used to assassinate the late Russian defector Alexander Litvinenko a few hundred megaseconds ago.
Introduction: Back in 2015, Elon Musk gave an engaging presentation on Tesla’s new home battery products as part of his vision of a solar energy future. A key part of his presentation hinged on a ‘blue square’, a section of Texas that is supposedly sufficient to cover the USA’s electricity consumption. To give the reader some idea of its size, you can fit around ten New York cities in that ‘blue square’. Thinking about Elon’s solar energy calculation led me to consider the following: Can Elon’s argument be justified by a back-of-the-envelope calculation? Is solar energy a serious option if we take into account the expected rate of globalisation and technological progress? Although I’m not American, the USA is a very important case as it represents approximately 20% of the Globe’s total energy consumption while its population represents barely 5% of the world population. If we take into account the rapid Americanisation of all countries then the second question emerges naturally. Disclaimer: I take it for granted that using fossil fuels isn’t reasonable by any measure. Elon Musk’s blue square: Let’s first note that according to the EIA, US electricity consumption was about 3758 TeraWatt Hours in 2015, which we may then convert to an average power as follows: \begin{equation} \frac{3758 \quad \text{TeraWatt hours}}{365 \cdot 24} \approx 429 \quad \text{GigaWatts} \end{equation} Now, it’s useful to note that according to the US Energy Information Administration, total energy consumption in 2015 was roughly five times the total electricity consumption in the USA. So even if Elon is right that one blue square may be sufficient for electricity consumption, it’s a good idea to note that at least five blue squares will be needed for total USA energy consumption. That’s more than fifty times the surface area of New York city! But, is Elon right? Let’s use the following formula for calculating solar energy yield: \begin{equation} Energy = A \cdot r \cdot H \cdot PR \end{equation} where $A$ is the total solar panel area in square meters, $r$ is the solar panel efficiency, $H$ is the annual average solar radiation (which varies between regions) and $PR$ is the performance ratio (usually between 0.5 and 0.9). If we assume Saudi Arabian solar radiation levels ($H \approx 2600$ kWh per square meter per year), a performance ratio of nearly 1.0, about 21% efficiency, and a blue square of roughly $10^4$ square kilometers (i.e. $10^{10}$ square meters), we have: \begin{equation} Energy = 10^{10} \cdot 0.21 \cdot 2600 \cdot 1.0 \quad \text{kWh} \approx 5.46 \quad \text{PetaWatt Hours} \end{equation} which I can convert to Watts as follows: \begin{equation} Wattage = \frac{Energy}{365 \cdot 24} = \frac{5460 \quad \text{TeraWatt Hours}}{365 \cdot 24} \approx 623 \quad \text{GigaWatts} \end{equation} which more than covers the 429 GigaWatts from the first equation, so Elon is right, or at least he isn’t wrong in a manner that is obvious. Globalisation and Technological progress: Upper-bound on potential solar power on Earth due to the sun: In order to estimate the potential solar power on Earth due to the sun we may use the Stefan-Boltzmann equation for luminosity: \begin{equation} L_o= \sigma A T^4 \end{equation} which depends on the effective temperature of the sun, its surface area and the Stefan-Boltzmann constant.
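For anyone who wants to fiddle with the assumptions, here is the blue-square estimate as a few lines of Python. The $10^4$ km$^2$ area, the Saudi-level insolation and the optimistic performance ratio are the assumptions discussed above, not measured values:

```python
# Blue-square solar yield estimate under the assumptions above
area_m2    = (100e3) ** 2     # 1e10 m^2, i.e. a 100 km x 100 km square
efficiency = 0.21             # panel efficiency r
H          = 2600             # kWh / m^2 / year, Saudi-like insolation
PR         = 1.0              # performance ratio (optimistic)

energy_kwh_per_year = area_m2 * efficiency * H * PR
energy_twh = energy_kwh_per_year / 1e9
average_power_gw = energy_twh * 1e12 / (365 * 24) / 1e9

print(f"yearly yield : {energy_twh:.0f} TWh")        # ~5460 TWh
print(f"average power: {average_power_gw:.0f} GW")   # ~620 GW vs ~429 GW consumed
```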
Now if we define 1 AU to be the average distance between the Earth and the sun, the maximum solar power available to Earth is given by: \begin{equation} P= L_o \frac{\pi R_{Earth}^2}{4 \pi (1\, AU)^2} = \sigma T^4 \big(\frac{R_s}{1\, AU}\big)^2 \pi R_{Earth}^2 \approx 174 \quad \text{PetaWatts} \end{equation} and if we take into account that only about 70% of incoming sunlight is absorbed (roughly 30% is reflected back to outer space), only about a third of the Globe is terrestrial, and the solar energy conversion efficiency is around 20%, we have: \begin{equation} \bar{P}= 0.7 \cdot 0.3 \cdot 0.2 \cdot P \approx 7300 \quad \text{TeraWatts} \end{equation} which is about four hundred times current use. This might sound like a large margin until you realise that energy consumption in developed countries has been growing exponentially during the last two hundred years. The rate of globalisation: Given that new energy infrastructure takes time to build, it’s essential to realise that you build it for the foreseeable future, and by that I mean at least the next couple of decades. Is it possible that within a few decades we might start getting dangerously close to $\bar{P}$? I would argue that we are already in trouble because we are rapidly becoming American in our energy consumption patterns. As I mentioned earlier, the USA is responsible for approximately 20% of Earth’s total energy consumption while its population represents barely 5% of the world population. So convergence to American-style economic development implies that eventually the entire world will be consuming around four times more energy per annum. Let’s denote this global energy consumption pattern by $\hat{P}$ and suppose that this event happens within a decade; then: \begin{equation} \frac{\bar{P}}{\hat{P}} \approx 100 \end{equation} so we’re only a factor of a hundred away from doomsday and we haven’t even started building solar panels seriously yet. Can things get worse? The rate of technological progress: As pointed out by Tom Murphy, a professor of Physics at UC San Diego, the rate of energy consumption in the USA has been increasing at an exponential rate, around 2.3% per year, which might not sound like much until you think about the effect of compounding. Let’s suppose that by 2030 all countries have similar energy consumption patterns. In order to figure out how much time human civilisation has left on Earth we must calculate: \begin{equation} x = \frac{\ln 100}{\ln 1.023} \approx 200 \quad \text{years} \end{equation} but that’s the result of a simple extrapolation. I’m actually much less optimistic. At a time when we have two emerging superpowers, China and India, and a declining USA that is willing to do everything it can to maintain its hegemony, I believe that we’ll witness an acceleration. Discussion: If we do go all the way with solar energy, another important factor to consider is the volume of lithium-ion battery production that would be necessary in order to supply electricity at night around the Globe. I have yet to do detailed calculations on this but this makes me even less confident that the future can be 100% solar. Yet, if not solar energy then what? I think we can actually make a case for nuclear fusion, especially if we start thinking about multi-planetary civilisations. But, I will save this analysis for another day. References: T. Murphy. Galactic-Scale Energy. 2011. D. MacKay. Sustainable Energy - Without the Hot Air. 2008. EIA. Annual Energy Outlook. 2019.
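The upper-bound and the doubling-time arithmetic can be checked the same way; the sketch below uses standard values for the solar and terrestrial constants together with the 0.7/0.3/0.2 factors assumed in the text:

```python
import numpy as np

# Upper bound on solar power intercepted by Earth (Stefan-Boltzmann)
sigma   = 5.670e-8        # W m^-2 K^-4
T_sun   = 5778            # K, effective temperature of the sun
R_sun   = 6.957e8         # m
AU      = 1.496e11        # m
R_earth = 6.371e6         # m

P = sigma * T_sun**4 * (R_sun / AU)**2 * np.pi * R_earth**2
print(f"P     ~ {P / 1e15:.0f} PW")                  # ~174 PW

P_bar = 0.7 * 0.3 * 0.2 * P                          # absorbed * land fraction * efficiency
print(f"P_bar ~ {P_bar / 1e12:.0f} TW")              # ~7300 TW

# Years until consumption reaches P_bar if it starts ~100x below and grows 2.3%/yr
years = np.log(100) / np.log(1.023)
print(f"~{years:.0f} years")                         # ~200
```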
I’ve been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others. There is one passage from Neyman’s (1952) book “Lectures and Conferences on Mathematical Statistics and Probability” (available here) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our confidence interval paper, but for those of you who have read it, you’ll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it. Neyman gets bonus points for the footnote suggesting the “eminent”, “elderly” boss is so obtuse (a reference to Fisher?) and that the young frequentists should be “remind[ed] of the glory” of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did. [Neyman is discussing using “sampling experiments” (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. \(\theta\) is a true parameter of a probability distribution to be estimated.] The sampling experiments are more easily performed than described in detail. Therefore, let us make a start with \(\theta_1 = 1\), \(\theta_2 = 2\), \(\theta_3 = 3\) and \(\theta_4 = 4\). We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating \(\theta\), each time from twelve observations, and that the true values of \(\theta\) are as above [ie, \(\theta_1,\ldots,\theta_4\)] although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for \(\theta\), one computed by the elderly Boss, the other by his young Assistant. [Formulas 21 and 22 are simply different 95% confidence procedures. Formula 21 has better frequentist properties; Formula 22 is inferior, but the Boss likes it because it is intuitive to him.] Using the first column on the first page of Tippett’s tables of random numbers and performing the indicated multiplications, we obtain the following four sets of figures. The last two lines give the assertions regarding the true value of \(\theta\) made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to \(\alpha = .95\). You will notice that in three out of the four cases considered, both assertions (the Boss’ and the Assistant’s) regarding the true value of \(\theta\) are correct and that in the last case both assertions are wrong. In fact, in this last case the true \(\theta\) is 4 while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846. Although the probability of success in estimating \(\theta\) has been fixed at \(\alpha = .95\), the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855.
Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of \(\theta\), may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of \(\theta\), made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, \(\alpha= .95\). Let us put this into more precise terms. Suppose you decide on a number \(N\) of samples which you will take and use for estimating the true value of \(\theta\). The true values of the parameter \(\theta\) may be the same in all \(N\) cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to \(\alpha = .95\). Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number \(Z(N)\) of successes in estimating \(\theta\) is the familiar binomial variable with expectation equal to \(N\alpha\) and with variance equal to \(N\alpha(1 - \alpha)\). Thus, if \(N = 100\), \(\alpha = .95\), it is rather improbable that the relative frequency \(Z(N)/N\) of successes in estimating \(\theta\) will differ from \(\alpha\) by more than \(2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042\). This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating \(\theta\) is equal to the preassigned \(\alpha\). Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving. Among other things, the sampling experiment will attract attention to the frequent difference in the precision of estimating \(\theta\) by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on \(X\), the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean \(\bar{X}\). If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other. [See footnote] For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it. Thus, as a result of observing the first sample, the Assistant asserts that \(.956 \leq \theta \leq 1.227\). On the other hand, the assertion of the Boss is far more conservative and admits the possibility that \(\theta\) may be as small as .688 and as large as 1.355.
And both assertions correspond to the same confidence coefficient, \(\alpha = .95\)! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy. Boss: “Now, how can this be true? I am to assert that \(\theta\) is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that \(\theta\) is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that \(\theta\) may be some number between .688 and .956 or between 1.227 and 1.355. Thus, the probability of \(\theta\) falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that \begin{eqnarray*} P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\ && + P\{1.227 \leq \theta \leq 1.355\}\\ &=& P\{.956 \leq \theta \leq 1.227\}.\mbox{”} \end{eqnarray*} Assistant: “But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter \(\theta\) will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to \(\alpha\).” Boss: “Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that \(.688 \leq \theta \leq 1.355\). This assertion is a success only if \(\theta\) falls within the limits indicated. Hence, the probability of success is equal to the probability of \(\theta\) falling within these limits —.” Assistant: “No, Sir, it is not. The probability you describe is the a posteriori probability regarding \(\theta\), while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, \(N = 100\) samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95.” I do hope that the Assistant will not get fired. However, if he does, I would remind him of the glory of Giordano Bruno who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —. [footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people. [end excerpt]
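Since the excerpt is precisely about learning by simulation, here is a small Monte Carlo sketch in Python in the same spirit. Neyman's actual formulae (21) and (22) are not reproduced in the excerpt, so as an assumption the sketch uses a Uniform(0, θ) model with n = 12 observations, an exact interval based on the sample maximum (the "Assistant") and an approximate CLT interval around twice the mean (the "Boss"); both should hover around 95% coverage, with the max-based intervals coming out noticeably shorter, as in the passage.

```python
import numpy as np

rng = np.random.default_rng(42)

n, alpha, trials = 12, 0.95, 100_000
# For Uniform(0, theta), (max / theta)^n is Uniform(0, 1), giving an exact interval
t_lo, t_hi = ((1 - alpha) / 2) ** (1 / n), (1 - (1 - alpha) / 2) ** (1 / n)

hits_max = hits_mean = 0
length_max = length_mean = 0.0
for _ in range(trials):
    theta = rng.uniform(1, 4)              # true value may change between cases
    x = rng.uniform(0, theta, size=n)

    # "Assistant": exact interval from the sample maximum
    lo1, hi1 = x.max() / t_hi, x.max() / t_lo
    hits_max += lo1 <= theta <= hi1
    length_max += hi1 - lo1

    # "Boss": approximate CLT interval around 2 * mean
    est = 2 * x.mean()
    se = est / np.sqrt(3 * n)
    lo2, hi2 = est - 1.96 * se, est + 1.96 * se
    hits_mean += lo2 <= theta <= hi2
    length_mean += hi2 - lo2

print(f"coverage (max-based) : {hits_max / trials:.3f}")    # ~0.95
print(f"coverage (mean-based): {hits_mean / trials:.3f}")   # ~0.94-0.95
print(f"avg length, max vs mean: {length_max / length_mean:.2f}")  # well below 1
```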
Journal of the European Mathematical Society, Volume 14, Issue 3, 2012, pp. 733–748. DOI: 10.4171/JEMS/316. Published online: 2012-03-07. On sets of vectors of a finite vector space in which every subset of basis size is a basis. Simeon Ball [1] (1) Universitat Politècnica de Catalunya, Barcelona, Spain. It is shown that the maximum size of a set ${S}$ of vectors of a $k$-dimensional vector space over ${\mathbb F}_q$, with the property that every subset of size $k$ is a basis, is at most $q+1$, if $k \leq p$, and at most $q+k-p$, if $q \geq k \geq p+1 \geq 4$, where $q=p^h$ and $p$ is prime. Moreover, for $k\leq p$, the sets $S$ of maximum size are classified, generalising Beniamino Segre's “arc is a conic” theorem. These results have various implications. One such implication is that a $k\times (p+2)$ matrix, with $k \leq p$ and entries from ${\mathbb F}_p$, has $k$ columns which are linearly dependent. Another is that the uniform matroid of rank $r$ that has a base set of size $n \geq r+2$ is representable over ${\mathbb F}_p$ if and only if $n \leq p+1$. It also implies that the main conjecture for maximum distance separable codes is true for prime fields; that there are no maximum distance separable linear codes over ${\mathbb F}_p$, of dimension at most $p$, longer than the longest Reed–Solomon codes. The classification implies that the longest maximum distance separable linear codes, whose dimension is bounded above by the characteristic of the field, are Reed–Solomon codes. Keywords: Arcs, Maximum Distance Separable Codes (MDS codes), uniform matroids. Ball, Simeon: On sets of vectors of a finite vector space in which every subset of basis size is a basis. J. Eur. Math. Soc. 14 (2012), 733–748. doi: 10.4171/JEMS/316