Trig Substitutions: Examples

In this video, we work three examples, one with $x=a \tan(\theta)$, one with $x = a\sin(\theta)$, and one with $x = a \sec(\theta)$. Worked-out solutions are written below the video.

Examples from the video.

Consider our answer above. In order to convert back into terms of $x$, we must figure out what $\sin(\theta)$ is in terms of $x$. By rewriting our original substitution we see that $\tfrac x 2=\tan\theta$. Use this to draw a right triangle, with opposite side $x$ and adjacent side $a=2$. The hypotenuse is then $\sqrt{a^2+x^2}=\sqrt{4+x^2}$. We need to find $\sin\theta$ in terms of $x$, and we see from the triangle that $\sin\theta=\frac{x}{\sqrt{x^2+4}}$.

Here we have used the methods of the last learning module to evaluate the trig integral, including the handy trig identities for $\cos^2\theta$ and $\sin(2\theta)$. (You need to know these by heart.) We look at the terms in our final answer above. We use the triangle to convert $\sin\theta\cos\theta$ back into terms of $x$. Finally, we must write $\theta$ in terms of $x$. We use our original substitution: $\frac{x}{3}=\sin\theta$ gives us $\theta=\sin^{-1}(\tfrac x 3)$.

We now convert to terms of $x$: $\sec\theta=2x$, so $\cos\theta=\frac{1}{2x}$. The triangle could therefore have adjacent side $1$ and hypotenuse $2x$, and then the opposite side is $\sqrt{4x^2-1}$. We can also use adjacent side $\frac{1}{2}$ and hypotenuse $x$. (Why?) This gives the opposite side the value $\sqrt{x^2-\frac{1}{4}}$, as in the diagram. Either way is fine. Looking at our values above, we only need to deal with $\tan\theta$, since we know $\sec\theta=2x$. From the triangle, $\tan\theta=\frac{\sqrt{x^2-1/4}}{1/2}$. If we had used the other triangle, we would get $\tan\theta=\frac{\sqrt{4x^2-1}}{1}$ -- are these the same values?
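To make the $x=a\tan\theta$ conversion concrete, here is a compact worked instance. The integrand is our own choice for illustration (the video's integrals are not reproduced in the text above), but the back-substitution via the triangle is exactly the step described:
\[
\int \frac{dx}{(4+x^2)^{3/2}}
\;\overset{x=2\tan\theta}{=}\;
\int \frac{2\sec^2\theta\,d\theta}{8\sec^3\theta}
= \frac14 \int \cos\theta\,d\theta
= \frac14 \sin\theta + C
= \frac{x}{4\sqrt{x^2+4}} + C,
\]
where the last equality reads $\sin\theta=\frac{x}{\sqrt{x^2+4}}$ off the right triangle with opposite side $x$, adjacent side $2$, and hypotenuse $\sqrt{4+x^2}$.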
I have started compiling some review materials for Math 32A and 32AH for this quarter. I will post solutions to the exercises before the review session on Thursday. As always, let me know in the comments below if anything is unclear or incorrect.

Update: I have posted the solutions to the practice problems. Also, a student pointed out a typo in the statement of the final problem in the review. Spherical coordinates should be given by $$ x = \rho \cos \theta \sin \varphi, \quad y = \rho \sin \theta \sin \varphi, \quad z = \rho \cos \varphi. $$ These changes have been made in the versions of the problems and solutions online.

Update 2: There was a mistake in the solution of problem 10 involving curvature. This has (hopefully) been fixed.
Say that $a_1a_2\ldots a_n$ and $b_1b_2\ldots b_n$ are two strings of the same length. An anagramming of two strings is a bijective mapping $p:[1\ldots n]\to[1\ldots n]$ such that $a_i = b_{p(i)}$ for each $i$. There might be more than one anagramming for the same pair of strings. For example, if $a=$`abcab` and $b=$`cabab` we have $p_1:[1,2,3,4,5]\to[4,5,1,2,3]$ and $p_2:[1,2,3,4,5] \to [2,5,1,4,3]$, among others. We'll say that the weight $w(p)$ of an anagramming $p$ is the number of cuts one must make in the first string to get chunks that can be rearranged to obtain the second string. Formally, this is the number of values of $i\in[1\ldots n-1]$ for which $p(i)+1\ne p(i+1)$. That is, it is the number of points at which $p$ does not increase by exactly 1. For example, $w(p_1) = 1$ and $w(p_2) = 4$, because $p_1$ cuts 12345 once, into the chunks 123 and 45, and $p_2$ cuts 12345 four times, into five chunks. Suppose there exists an anagramming for two strings $a$ and $b$. Then at least one anagramming must have least weight. Let's say this one is lightest. (There might be multiple lightest anagrammings; I don't care, because I am interested only in the weights.)

Question: I want an algorithm which, given two strings for which an anagramming exists, efficiently yields the exact weight of the lightest anagramming of the two strings. It is all right if the algorithm also yields a lightest anagramming, but it need not. It is a fairly simple matter to generate all anagrammings and weigh them, but there may be many, so I would prefer a method that finds light anagrammings directly.

Motivation: The reason this problem is of interest is as follows. It is very easy to make the computer search the dictionary and find anagrams, pairs of words that contain exactly the same letters. But many of the anagrams produced are uninteresting. For instance, the longest examples to be found in Webster's Second International Dictionary are: cholecystoduodenostomy / duodenocholecystostomy. The problem should be clear: these are uninteresting because they admit a very light anagramming that simply exchanges the cholecysto, duodeno, and stomy sections, for a weight of 2. On the other hand, this much shorter example is much more surprising and interesting: coastline / sectional. Here the lightest anagramming has weight 8. I have a program that uses this method to locate interesting anagrams, namely those for which all anagrammings are of high weight. But it does this by generating and weighing all possible anagrammings, which is slow.
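For reference, here is a minimal Python sketch of the generate-and-weigh approach described above, with simple pruning (the function name is mine); it enumerates anagrammings by backtracking, so it is only practical for short strings:

def lightest_weight(a: str, b: str) -> int:
    """Exact weight of the lightest anagramming of a and b, by brute force."""
    assert sorted(a) == sorted(b), "an anagramming must exist"
    n = len(a)
    best = [n]  # the weight can never exceed n - 1
    used = [False] * n

    def go(i: int, prev: int, cuts: int) -> None:
        if cuts >= best[0]:
            return  # prune: already no lighter than the best found so far
        if i == n:
            best[0] = cuts
            return
        for j in range(n):
            if not used[j] and b[j] == a[i]:
                used[j] = True
                # a cut happens whenever p does not increase by exactly 1
                go(i + 1, j, cuts + (i > 0 and j != prev + 1))
                used[j] = False

    go(0, -1, 0)
    return best[0]

print(lightest_weight("abcab", "cabab"))  # -> 1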
The Bulletin of the AMS just made this paper by Julia Mueller available online: "On the genesis of Robert P. Langlands' conjectures and his letter to Andre Weil" (hat tip +ChandanDalawat and +DavidRoberts on Google+). It recounts the story of the early years of Langlands and the first years of his mathematical career (1960-1966), leading up to his letter to Andre Weil in which he outlines his conjectures, which would become known as the Langlands program. Langlands' letter to Weil is available from the IAS.

The Langlands program is a vast net of conjectures. For example, it conjectures that there is a correspondence between
– $n$-dimensional representations of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$, and
– specific data coming from an adelic quotient-space $GL_n(\mathbb{A}_{\mathbb{Q}})/GL_n(\mathbb{Q})$.

In the case $n=1$ we have on the one hand the characters of the abelianised absolute Galois group \[ Gal(\overline{\mathbb{Q}}/\mathbb{Q})^{ab} \simeq Gal(\mathbb{Q}(\pmb{\mu}_{\infty})/\mathbb{Q}) \simeq \widehat{\mathbb{Z}}^{\ast} \] and on the other hand the connected components of the idele class space \[ GL_1(\mathbb{A}_{\mathbb{Q}})/GL_1(\mathbb{Q}) = \mathbb{A}_{\mathbb{Q}}^{\ast} / \mathbb{Q}^{\ast} = \mathbb{R}_+^{\ast} \times \widehat{\mathbb{Z}}^{\ast}. \] For $n=2$ it involves the study of Galois representations coming from elliptic curves. A gentle introduction to the general case is Mark Kisin's paper What is ... a Galois representation?.

One way to look at some of the quantum statistical systems studied via non-commutative geometry is that they try to understand the "bad" boundary of the Langlands space $GL_n(\mathbb{A}_{\mathbb{Q}})/GL_n(\mathbb{Q})$. Here, the Bost-Connes system corresponds to the $n=1$ case, the Connes-Marcolli system to the $n=2$ case. If $\mathbb{A}'_{\mathbb{Q}}$ is the subset of all adeles having almost all of their terms in $\widehat{\mathbb{Z}}_p^{\ast}$, then there is a well-defined map \[ \pi~:~\mathbb{A}'_{\mathbb{Q}}/\mathbb{Q}^{\ast} \rightarrow \mathbb{R}_+ \qquad (x_{\infty},x_2,x_3,\dots) \mapsto | x_{\infty} | \prod_p | x_p |_p. \] The inverse image of $\pi$ over $\mathbb{R}_+^{\ast}$ consists exactly of the idele classes $\mathbb{A}_{\mathbb{Q}}^{\ast}/\mathbb{Q}^{\ast}$, so we can view them as the nice locus of the horribly complicated quotient of adele-classes $\mathbb{A}_{\mathbb{Q}}/\mathbb{Q}^*$, and we can view the adele-classes as a 'closure' of the idele classes. But the fiber $\pi^{-1}(0)$ has horrible topological properties, because $\mathbb{Q}^*$ acts ergodically on it, due to the fact that $\log(p)/\log(q)$ is irrational for distinct primes $p$ and $q$. This is why it is better to view the adele-classes not as an ordinary space (one with bad topological properties), but rather as a 'non-commutative' space, because it is controlled by a non-commutative algebra, the Bost-Connes algebra. For $n=2$ there is a similar story with a 'bad' quotient $M_2(\mathbb{A}_{\mathbb{Q}})/GL_2(\mathbb{Q})$, being the closure of an 'open' nice piece which is the Langlands quotient space $GL_2(\mathbb{A}_{\mathbb{Q}})/GL_2(\mathbb{Q})$.
Abstract. For $d\ge 3$, we construct a non-randomized, fair, and translation-equivariant allocation of Lebesgue measure to the points of a standard Poisson point process in $\mathbb{R}^d$, defined by allocating to each of the Poisson points its basin of attraction with respect to the flow induced by a gravitational force field exerted by the points of the Poisson process. We prove that this allocation rule is economical in the sense that the allocation diameter, defined as the diameter $X$ of the basin of attraction containing the origin, is a random variable with a rapidly decaying tail. Specifically, we have the tail bound \[\mathbb{P}(X > R) \le C \exp\big[-c R (\log R)^{\alpha_d} \big]\] for all $R>2$, where $\alpha_d = \frac{d-2}{d}$ for $d\ge 4$; $\alpha_3$ can be taken as any number less than $-4/3$; and $C$ and $c$ are positive constants that depend on $d$ and $\alpha_d$. This is the first construction of an allocation rule of Lebesgue measure to a Poisson point process with subpolynomial decay of the tail $\mathbb{P}(X>R)$.
Chapter 7: Current phosphor device technology (Elsevier, 2004). http://www.sciencedirect.com/science/article/pii/S0169315804800100

Publisher Summary: This chapter discusses the latest devices and the phosphors currently employed in them. Many phosphors developed for particular display purposes are discussed. The chapter discusses how light sources such as incandescent and fluorescent lamps are going to become obsolete as the quality of light-emitting diodes (LEDs) improves. Phosphors will continue to be used to make LED light sources. The main advantage of the LED lamp is that it uses milliwatts of power, compared to watts of power for incandescent and fluorescent lamps. The chapter surveys present-day devices utilizing phosphors, emphasizing the recent improvements made in cathode-ray tubes (CRTs) and fluorescent lamps. The devices that depend upon thin-film deposition for their manufacture are described, as are the devices that use phosphors responding to high-energy photons such as X-rays and gamma rays. The chapter also describes scintillators, the phosphors used to detect $\alpha$, $\beta$, and $\gamma$ rays from incident sources.
In fact it is quite rare for a blog writer to put mathematical models on a blog unless the article relates to a very specialized course. But low probability hardly means never. Taking myself as an example, I would like to share some of my econometrics notes and my thoughts on research proposals with the blog's readers. I could write them in a LaTeX system, compile them to a PDF document, and provide that as an attachment for download, but I don't think most readers would prefer that. Reading directly on the page is a much more pleasant experience, especially when there are not many complicated equations. Thanks to the flexibility of the WordPress system and its countless plug-ins, the solution can be quite simple. The only thing you need is familiarity with LaTeX syntax, which, on most occasions, is not a big problem for users willing to learn it. Even if you are unlikely to type the raw code, MathType can help you convert the equations; the only work left for you is copy and paste. Searching WordPress.org with the keyword 'latex', you are likely to find the following plug-ins.

Some Plug-ins for WordPress

Among the plug-ins available, I tried the following ones on my blog. Here is a summary of each; for details, you can refer to their websites.

WP Latex. This is the official LaTeX plug-in from WordPress.com, with its default service based on wordpress.com. No cache is needed, and you can freely change the style through the CSS file in the settings. The only problem is that its original server 's.wordpress.com' is blocked in China; you have to change it to 'wordpress.com' to make it work. You can also set up your own rendering service, but I wonder whether that would violate the host's terms.

Easy Latex. Compared to the official plug-in, this one offers more flexible features. Besides colors, you can also freely change the size directly through its GUI. A cache option is provided, so readers can still read the equations on your blog even when the LaTeX service is down.

Youngwhan's Simple Latex. This is the simplest of the three to use. No option is presented besides the address of the LaTeX service. Unlike WP Latex and Easy Latex, the service for this plug-in is provided by an open-source project. Though it is convenient to set up, its flexibility is questionable.

My Choice

I finally chose the official plug-in, given the limit on the number of files on my server and the lack of free CGI support, which means I can hardly run my own rendering system or store too many cache files. For those whose server supports CGI, a self-hosted system is preferable for its stability.

An Example

Finally, I shall present an example.
This is a goodness-of-fit measure commonly seen in econometrics:
\[
R^{2}=\frac{SSE}{SST}=\frac{\left( \sum_{i=1}^{n}\left( y_{i}-\bar{y} \right)\left( \hat{y}_{i}-\bar{\hat{y}} \right) \right)^{2}}{\left( \sum_{i=1}^{n}\left( y_{i}-\bar{y} \right)^{2} \right)\left( \sum_{i=1}^{n}\left( \hat{y}_{i}-\bar{\hat{y}} \right)^{2} \right)},
\]
where $y$ stands for the observed value and $\hat{y}$ for the value estimated from a regression model such as
\[
y=\alpha_0+\alpha_1 x_1+\alpha_2 x_2+\alpha_3 x_3+\cdots+\alpha_n x_n, \qquad n\in \mathbb{Z}^{+}.
\]
The only problem left is: how do I adjust the alignment? I want the equations centered, not left-aligned.
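As a quick sanity check of the formula (a Python/numpy sketch with made-up data, just for illustration), the squared-correlation form above agrees with the usual $1-\text{RSS}/\text{TSS}$ computation for an OLS fit with an intercept:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.5, -2.0, 0.7]) + 0.3 + rng.normal(size=100)

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(len(y)), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# R^2 via the squared-correlation formula in the post
num = np.sum((y - y.mean()) * (y_hat - y_hat.mean())) ** 2
den = np.sum((y - y.mean()) ** 2) * np.sum((y_hat - y_hat.mean()) ** 2)

# R^2 via 1 - RSS/TSS
r2_rss = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(num / den, r2_rss)  # the two values agree up to floating-point error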
Following an exercise from Hopcroft and Ullman's Introduction to Automata Theory: let's define a $k$-MDFA as a two-way deterministic finite automaton with $k$ markers, similar to a two-way DFA but with the ability to place (or pick up) a marker on the input cell currently under the head, with the limitation of placing at most $k$ markers at any time during the computation, and with a single cell only able to hold at most one marker at any given time. Formally, it is a tuple $A=(Q, \Sigma, \delta, s, F)$ where $Q$ is a finite set of states, $\Sigma$ is the input alphabet, $\delta : Q \times (\Sigma \hspace{2pt}\cup \hspace{2pt} \{\vdash, \dashv\})\times \{0,1\} \rightarrow Q \times \{\textbf{L,R}\}\times \{0,1\}$ is the transition function, $s$ is the initial state, and $F$ is the set of accepting states. The input tape of the automaton for a word $w \in \Sigma^*$ is $\vdash w \dashv$. Moreover, the transition function satisfies $\delta(q,\vdash,m) = (q', R, m')$ and $\delta(q, \dashv, m) = (q', L, m')$ for all $q \in Q, \hspace{2pt} m \in \{0,1\}$ and for some $q' \in Q$ and $m' \in \{0,1\}$ (we do not allow the automaton to leave the input). If the automaton attempts to place more than $k$ markers on the tape, the computation stops with rejection.

I have proved that 1-MDFAs recognize exactly the regular sets, but I can also show that with a 2-MDFA I can recognize the language $\{ ww : w \in \Sigma^*\}$, which is not a context-free language. The second fact is fairly simple; here's an outline of how such an automaton would work (transcribed into a short Python sketch after this list):

1. Check the parity of the input's length.
2. Locate the middle of the input by placing markers at both ends of the word, and then moving them one step closer towards the middle, alternately.
3. Place one marker on the first cell of the input and the other on the first cell of the second half of the input.
4. Compare the letters under the markers. If they differ, enter the reject state; if they are the same, move both markers one step to the right and repeat. If you are about to move the right marker onto the symbol $\dashv$, enter the accept state.

As such, the languages recognized by $k$-MDFAs are clearly not contained in CFL for $k \geq 2$. My questions are: Does increasing the number of markers increase the computational power of $k$-MDFAs for $k \geq 2$? Does allowing nondeterminism increase the computational power of $k$-MDFAs? Can $k$-MDFAs recognize all context-free languages? If so, what is the minimal $k$ capable of expressing CFL? Any references are welcome; if the question shows ignorance of the subject, I apologise.
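Here is the outline above transcribed into Python; this is an ordinary program rather than a faithful simulation of the head movements, but it mirrors the marker logic step by step:

def accepts_ww(word: str) -> bool:
    n = len(word)
    if n % 2 != 0:              # step 1: reject odd-length input
        return False
    if n == 0:
        return True             # the empty word is ww with w empty
    left, right = 0, n - 1      # step 2: markers walk inward from both ends...
    while left < right - 1:     # ...one step at a time, alternately
        left += 1
        right -= 1
    i, j = 0, right             # step 3: markers on cell 1 and the first cell of the 2nd half
    while j < n:                # step 4: compare, then move both markers right
        if word[i] != word[j]:
            return False        # mismatch: reject
        i += 1
        j += 1
    return True                 # the right marker reaches the end marker: accept

print(accepts_ww("abcabc"), accepts_ww("abcaba"))  # True False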
I want to implement an algorithm to solve a heat equation, i.e. \begin{align*} \partial_t u - \Delta u = f \text{ in } \Omega\times(0,T)\\ \partial_n u = 0 \text{ on } \partial\Omega \times (0,T)\\ u(0) = u_0 \end{align*} I'm looking for some reasonable functions $f$ and $u_0$ to model a very hot area (e.g. a box in the middle of $\Omega = [0,1]^3$), which cools down slowly since the rest of the room has room temperature. Can you give me a proper example for this problem and explain the physical meaning of $f$ and $u_0$?

The problem you propose must be treated with care. The function $f$ must have zero mean in the domain $\Omega$, i.e. $$\int_\Omega{f\, dV}=0.$$ Otherwise the problem is not well posed. Physically, the terms of your equation mean: $\partial_t u$: the energy change of your system. $-\Delta u$: the diffusion of the temperature through the domain. $f$: heat sources/sinks in the domain. For the example you propose: the function $f=0$, since there are no sources/sinks, and the initial condition, given by $u_0$, can be any distribution you want. It can even be discontinuous; the only restriction is that $u_0\in L^2(\Omega)$, i.e. it must be square integrable. Finally, the BC you want is not what you have written but $$\partial_n u=-h(u-u_\infty),$$ where $h$ may be a function of $u$. This BC accounts for the heat exchange with the outside (the room): heat flows out while the body is hotter than the room, and the flux vanishes at equilibrium. Otherwise, if the flux is zero, the total energy is constant in time, so the temperature settles to a homogeneous value determined by the initial condition, and the body will never cool down to room temperature. The BC also has a physical meaning: the heat loss is proportional to the temperature difference between the hot object and its surroundings, i.e. the hotter the object is, the greater the amount of heat that is lost. For example, ice melts quicker outside the freezer than inside, because the difference $u-u_\infty$ (with $u$ the ice temperature) is greater in the first case than in the second.
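For the implementation, a minimal sketch of an explicit finite-difference scheme with the Robin boundary condition could look like this in Python (reduced to one space dimension for brevity; the grid, parameters, and names are all made up for illustration):

import numpy as np

# 1D toy version: u_t = u_xx on (0,1), f = 0,
# Robin BC  du/dn = -h (u - u_inf)  at both ends,
# u0 = a hot "box" in the middle of the domain.
nx = 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2              # explicit scheme needs dt <= dx^2 / 2 for stability
h, u_inf = 1.0, 20.0          # heat-transfer coefficient, room temperature
xs = np.linspace(0.0, 1.0, nx)
u = np.where(np.abs(xs - 0.5) < 0.1, 100.0, u_inf)   # u0: hot slab at 100 degrees

for _ in range(20000):
    # ghost nodes encoding the Robin condition (same form at both ends,
    # since the outward normal flips sign at x = 0)
    ghost_l = u[1] - 2 * dx * h * (u[0] - u_inf)
    ghost_r = u[-2] - 2 * dx * h * (u[-1] - u_inf)
    u_ext = np.concatenate([[ghost_l], u, [ghost_r]])
    u = u + dt / dx**2 * (u_ext[2:] - 2 * u_ext[1:-1] + u_ext[:-2])

print(u.min(), u.max())  # the profile relaxes toward u_inf = 20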
For each of the sets of functions below, draw a chain rule diagram for the indicated derivative and use it to write a chain rule, then evaluate your chain rule to find the derivative.

(mbTreeDiagramMath) Use tree diagrams to find chain rules. Find $\left( \frac{dg}{dt} \right)$ for $g = (a + b)^2$, $a = \sin 2t$, and $b = t^{3/2}$. Find $\left( \frac{\partial h}{\partial v} \right)_u$ for $h = \sqrt{a - b}$, $a = uv^2 - 1/v$, and $b = \frac{uv}{u + v}$. Find $\left( \frac{\partial A}{\partial B} \right)_F$ for $A(B,C)$ and $F(B,C)$. (Of course, you don't have to evaluate this derivative!)

(mbCyclic) Check the cyclic chain rule for a realistic equation of state. A possible equation of state for a gas takes the form $$pV=N k_B T \exp\left(-\frac{\alpha V}{N k_B T}\right)$$ in which $\alpha$, $N$, and $k_B$ are constants. Calculate expressions for: $$\left(\frac{\partial p}{\partial V}\right)_T\qquad\qquad \left(\frac{\partial V}{\partial T}\right)_p\qquad\qquad \left(\frac{\partial T}{\partial p}\right)_V$$ and show that these derivatives satisfy the cyclic chain rule.

(mbParamagnet) Use chain rules to solve a physics problem: paramagnetism. We have the following equations of state for the total magnetization $M$ and the entropy $S$ of a paramagnetic system: \begin{eqnarray*} M&=&N\mu\, \frac{e^{\frac{\mu B}{k_B T}} - e^{-\frac{\mu B}{k_B T}}} {e^{\frac{\mu B}{k_B T}} + e^{-\frac{\mu B}{k_B T}}}\\ S&=&Nk_B\left\{\ln 2 + \ln \left(e^{\frac{\mu B}{k_B T}}+e^{-\frac{\mu B}{k_B T}}\right) +\frac{\mu B}{k_B T} \frac{e^{\frac{\mu B}{k_B T}} - e^{-\frac{\mu B}{k_B T}}} {e^{\frac{\mu B}{k_B T}} + e^{-\frac{\mu B}{k_B T}}} \right\}\\ \end{eqnarray*} Solve for the magnetic susceptibility, which is defined as: $$\chi_B=\left(\frac{\partial M}{\partial B}\right)_T $$ Also solve for almost the same derivative, now taken with the entropy $S$ held constant: $$\left(\frac{\partial M}{\partial B}\right)_S $$ Why does this second derivative turn out to be zero? Sense-making: solve explicitly for the chain rule that allows you to evaluate $$\left(\frac{\partial M}{\partial B}\right)_S $$ using both total differentials (zapping with d) and a chain rule diagram. (Your chain rule should be the same!)
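For the equation of state in the (mbCyclic) problem, the cyclic chain rule can also be verified symbolically. Here is a hedged Python/sympy sketch (the names are mine, and implicit differentiation is used rather than solving for each variable explicitly):

import sympy as sp

p, V, T, alpha, N, kB = sp.symbols('p V T alpha N k_B', positive=True)

# implicit form F(p, V, T) = 0 of the equation of state
F = p * V - N * kB * T * sp.exp(-alpha * V / (N * kB * T))

# implicit differentiation: (dp/dV)_T = -F_V / F_p, and cyclically
dp_dV = -sp.diff(F, V) / sp.diff(F, p)
dV_dT = -sp.diff(F, T) / sp.diff(F, V)
dT_dp = -sp.diff(F, p) / sp.diff(F, T)

print(sp.simplify(dp_dV * dV_dT * dT_dp))  # -> -1, the cyclic chain rule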
Consider the Lagrangian $L= L_D + L_{KG} + L_{int}$, where the first term is the Dirac Lagrangian, the second is the Klein-Gordon Lagrangian, and the interaction term is $L_{int} = g \bar \psi \tau \cdot \phi \psi$. The interaction term describes the interaction of Dirac spinors via a scalar; this model was used to describe the scattering of nucleons via pion exchange. The question can be put in two different ways. One way is to ask whether we can construct out of this Lagrangian Feynman diagrams that give the scattering of a nucleon with a pion, or the scattering of two pions; that is, Feynman diagrams for $N\pi$ and $\pi\pi$ interactions. The other way, more general and more interesting, is to ask whether we can, from an interaction Lagrangian, derive what the possible in and out states are for which we can have Feynman diagrams or, in general, scattering. That is, can we understand which of the fields in the interaction Lagrangian can give us asymptotically free states, and which fields cannot have in and out states, so that they cannot represent external legs in a Feynman diagram, in which case our Lagrangian cannot describe scattering for these degrees of freedom? I am not sure whether this can simply be shown by proving that certain correlation functions vanish identically, for example that all the correlation functions with an odd number of $\phi$ fields are equal to zero. I don't know if the question is ill-phrased; maybe I should somehow consider the LSZ reduction formula and exhibit the in and out states specifically for this Lagrangian. Any comments with suggestions for reading and any recommendations for studying are welcome. If the post is ill-written, please ask for clarification; I am not sure how to search for this specific subject. If it is only a matter of calculations, please let me know. Thanks in advance.
Definition: Let \(n \in \mathbb{Z_+}\). Define a relation "\(\equiv\)" on \(\mathbb{Z}\) by \(a \equiv b\) (mod n) iff (if and only if) \(n \mid a-b\), for all \(a, b \in \mathbb{Z}\).

Example \(\PageIndex{1}\): Suppose \(n= 5\); then the possible remainders are \(0, 1, 2, 3,\) and \(4\) when we divide any integer by \(5\). Is \(6 \equiv 11\) (mod 5)? Yes, because \(6\) and \(11\) both belong to the same congruence/residue class, namely 1. That is to say, when 6 and 11 are divided by 5, the remainder is 1 in both cases. Is \(7 \equiv 15\) (mod 5)? No, because 7 and 15 do not belong to the same congruence/residue class: 7 has a remainder of 2, while 15 has a remainder of 0. Therefore 7 is not congruent to 15 (mod 5); that is, \(7 \not\equiv 15\) (mod 5).

Example \(\PageIndex{2}\): Clock arithmetic: \(18\) (mod 12). Solution: \(18 \equiv 6\) (mod 12), i.e., 6 pm.

Properties

Let \(n \in \mathbb{Z_+}\).

Theorem 1: For two integers \(a\) and \(b\), the following statements are equivalent: a) \(n \mid (a-b)\); b) both \(a\) and \(b\) have the same remainder when divided by \(n\); c) \(a-b= kn\) for some \(k \in \mathbb{Z}\). If any (hence all) of these hold, \(a\) and \(b\) are said to be congruent modulo \(n\), written \(a \equiv b\) (mod n). NOTE: The possible remainders modulo \(n\) are \(0, \ldots, n-1\).

Reflexive Property. Theorem 2: The relation "\(\equiv\)" on \(\mathbb{Z}\) is reflexive. Proof: Let \(a \in \mathbb{Z}\). Then \(a-a=0=(0)n\), and \(0 \in \mathbb{Z}\). Hence \(a \equiv a\) (mod n). Thus congruence modulo n is reflexive.

Symmetric Property. Theorem 3: The relation "\(\equiv\)" on \(\mathbb{Z}\) is symmetric. Proof: Let \(a, b \in \mathbb{Z}\) such that \(a \equiv b\) (mod n). Then \(a-b=kn\) for some \(k \in \mathbb{Z}\). Now \(b-a= (-k)n\) and \(-k \in \mathbb{Z}\). Hence \(b \equiv a\) (mod n). Thus the relation is symmetric.

Antisymmetric Property. Is the relation "\(\equiv\)" on \(\mathbb{Z}\) antisymmetric (with \(n\) fixed)? Counterexample: choose \(a= n+1\) and \(b= 2n+1\). Then \(a \equiv b\) (mod n) and \(b \equiv a\) (mod n), but \(a \ne b\). Thus the relation "\(\equiv\)" on \(\mathbb{Z}\) is not antisymmetric.

Transitive Property. Theorem 4: The relation "\(\equiv\)" on \(\mathbb{Z}\) is transitive. Proof: Let \(a, b, c \in \mathbb{Z}\) such that \(a \equiv b\) (mod n) and \(b \equiv c\) (mod n). Then \(a=b+kn\), \(k \in \mathbb{Z}\), and \(b=c+hn\), \(h \in \mathbb{Z}\). We shall show that \(a \equiv c\) (mod n). Consider \(a=b+kn=(c+hn)+kn=c+(hn+kn)=c+(h+k)n\), with \(h+k \in \mathbb{Z}\). Hence \(a \equiv c\) (mod n). Thus congruence modulo n is transitive.

Theorem 5: The relation "\(\equiv\)" on \(\mathbb{Z}\) is an equivalence relation.

Computational aspects:

print("integer  integer mod 5")
for i in range(30):
    print(i, " ", i % 5)
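To tie the computation back to the definition, here is a small hedged Python sketch (the function name is mine) checking that two of the characterizations in Theorem 1 agree on the examples above:

def congruent(a: int, b: int, n: int) -> bool:
    """a = b (mod n), by the divisibility definition n | (a - b)."""
    return (a - b) % n == 0

for a, b, n in [(6, 11, 5), (7, 15, 5), (18, 6, 12)]:
    same_remainder = (a % n == b % n)                    # characterization (b)
    print(a, b, n, congruent(a, b, n), same_remainder)   # the two columns agree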
When simulating liquids or solids under periodic boundary conditions, we are making two fundamental approximations. Here, we want to calculate the diffusion constant of water at room temperature ($T=300\,\text{K}$). Since we are interested in a property of bulk water, we don't need to worry about the first approximation, but we need to pay attention to the second one. The theory derived in 10.1021/jp0477147 allows us to estimate the finite-size effects on the diffusion constant $D_{pbc}(L)$, calculated under periodic boundary conditions with cell size $L$. With this information at hand, we will be able to extrapolate the results for finite cell sizes to the diffusion constant $D=\lim\limits_{L\rightarrow \infty}D_{pbc}(L)$, effectively getting rid of the second approximation.

Calculating transport properties typically requires lots of sampling. Start the MD simulation for 32 water molecules and see how far you can get (aim for at least 200 ps). Use

./get_t_sigma file.ener

to calculate the standard deviation of the temperature for your simulation as well as for the provided simulations of larger cells containing 64, 128 and 256 water molecules.

The mean squared displacement (msd) is defined as $$\text{msd}(t) = \langle |r(t+t_0)-r(t_0)|^2 \rangle$$ where the average $\langle \ldots \rangle$ runs over all particles in the system. Our simulations are not large enough to obtain reasonable statistics just from averaging over all water molecules. We therefore perform an additional average over the time $t_0$: $\text{msd}(t)$ is calculated as an average over all non-overlapping time windows of width $t$ that fit into the total simulation time $T$. We have provided a Fortran program that uses this algorithm to extract the msd from a trajectory in a .xyz file.

gfortran msd.f90 -o msd.x # compile msd.x executable
./msd.x < msd.in # check input file 'msd.in' before you run!

By default, msd.x writes the msd in units of Å$^2$ as a function of time in fs. Once you have calculated the msd, have a look at section III of the article for how to fit the diffusion constant. Use msd.x to calculate the msd, modifying msd.in as needed; msd.x may run up to 30 minutes for the largest cell. When your MD of the 32 water molecules has finished (for example on the next day), you can start fitting the diffusion constant.
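If you prefer to prototype in Python before using the Fortran program, a hedged sketch of the same non-overlapping-window average might look like this (the array layout and names are my own assumptions, and parsing of the .xyz file is omitted):

import numpy as np

def msd_nonoverlapping(traj: np.ndarray, window: int) -> float:
    """msd(t) for t = window frames, averaged over all particles and over
    all non-overlapping windows of that width in the trajectory.

    traj: unwrapped coordinates of shape (n_frames, n_particles, 3).
    """
    n_frames = traj.shape[0]
    n_windows = (n_frames - 1) // window
    sq_disp = []
    for k in range(n_windows):
        t0 = k * window
        d = traj[t0 + window] - traj[t0]      # displacement over one window
        sq_disp.append(np.sum(d**2, axis=1))  # |r(t0+t) - r(t0)|^2 per particle
    return float(np.mean(sq_disp))

Evaluating this over a range of window widths gives the msd curve whose long-time slope yields $D$ via the Einstein relation $\text{msd}(t) \approx 6Dt$ in three dimensions.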
The output power of a TV transmitter is the electric power applied to the antenna system. There are two definitions: nominal (or peak) power and thermal power. Analogue television systems put about 70% to 90% of the transmitter's power into the sync pulses; the remainder goes into transmitting the video's higher frequencies and the FM audio carrier. Digital television modulation systems are about 30% more efficient than analogue modulation systems overall.

Analogue vs digital

Analogue: The large amount of energy that sync pulses use is largely independent of the measurement system and of the efficiency of the analogue TV transmitter (most transmitters in use average around 75% efficiency). The transmission of FM audio (including stereo subcarriers) is overall only the third-largest consumer of TV transmitter power.

Digital: DVB-like transmission systems, with their groups of related carriers, are not quite as energy-efficient as 8VSB systems. 8VSB transmission systems provide only a single (pilot) carrier, which consumes about 7% of the transmitter's energy and which, under multipath conditions, can be lost, causing a signal-loss event.

Power defined in terms of voltage

The average power for a sinusoidal drive is [1] \[ P = \frac{1}{T}\int_0^T i(t) \cdot e(t)\, dt. \] For a system where the voltage and the current are in phase, the output power can be given as \[ P = \frac{1}{T \cdot R}\int_0^T e(t)^2\, dt, \] where $R$ is the resistance and $e(t)$ is the output voltage.

Nominal power of a TV transmitter

The nominal power of a TV transmitter is given as the power during the sync interval. (For the sake of simplicity, aural power is omitted.) Since the voltage during the sync interval is a fixed value, \[ P_n = \frac{E_p^2}{2\cdot R} = \frac{E^2}{R}, \] where $E_p$ is the peak output voltage during sync and $E = E_p/\sqrt{2}$ is its rms value. To measure the nominal output power, measuring devices with time constants much greater than the line time are used, so the measuring equipment registers only the highest level (the sync pulse) of a line waveform, which is 100%. This power level is the commercial power of the transmitter.

Thermal power

In analogue TV broadcasting, the video signal modulates a carrier by a kind of amplitude modulation (VSB modulation, C3F). The modulation polarity is negative; that means that the higher the level of the video signal, the lower the power of the RF signal. The lowest possible modulating signal, during the sync interval, yields 100% of the carrier (the nominal power of the transmitter). The blanking level (300 mV) yields 73% in an ideally linear transmitter; usually the figure 75% is found to be acceptable. The highest modulating signal, at white (1000 mV), yields only 10% of the carrier (the so-called residual carrier); sometimes 12.5% is used as the residual carrier. So the output power applied to the antenna system is considerably lower than the nominal power. The thermal power, which can be measured by a microwave power meter, depends on the program content as well as on the residual carrier and the sync depth.

Ratio of thermal power to nominal power

Since the program content is variable, the thermal power varies during the transmission. However, for testing purposes a standard line waveform can be applied to the transmitter.
Usually line waveforms corresponding to a 350 mV or 300 mV black image (and without field sync) are applied to the input of the transmitter. For System B, the duration of the black level at 300 mV (together with the front and back porches) is 59.3 μs, and it corresponds to 73% of the maximum voltage level. The duration of the sync pulse is 4.7 μs. The total duration of the line is 64 μs. Thus \[ P_t = \frac{E^2}{64\cdot R}\cdot \left(4.7\cdot (100\%)^2 + 59.3\cdot (73\%)^2\right) \approx 57\% \cdot \frac{E^2}{R}. \] So the maximum thermal power applied to the antenna system is 57% of the nominal power, even for an all-black scene. With normal program content this ratio may be around 25% or less.

References

1. MIT lecture
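The 57% figure follows directly from the stated waveform timings; as a quick arithmetic check (in Python, for convenience):

# thermal/nominal ratio for the System B test waveform:
# 4.7 us of sync at 100% voltage, 59.3 us of blanking at 73%, 64 us line
ratio = (4.7 * 1.00**2 + 59.3 * 0.73**2) / 64
print(f"{ratio:.1%}")  # -> 56.7%, i.e. about 57%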
... the purpose of non-dimensionalization is to "collapse" solutions onto one curve so that the solution space can be explored with fewer parameters.

That's one 'purpose' of non-dimensionalization; two others are the identification of the characteristic length, velocity, pressure, etc. scales, and the analysis of the equation under different regimes, in this case $\mathrm{Re}\ll1$ (viscosity dominated) and $\mathrm{Re}\gg1$ (convection dominated). Characteristic scales are the scales of the system which uniquely characterize the dynamics, such that all terms in the equations (including initial/boundary conditions) are $O(1)$ or smaller. You have chosen a set of scales with which you non-dimensionalize the equation, but are they characteristic scales? Well, let's put it to the test. First, consider the convectively dominated regime, for which you determine: $$\partial_t u_i + u_j \partial_j u_i = -\partial_i p + \mathrm{Re}^{-1} \partial_{jj} u_i, \qquad \qquad \qquad \text{using convective time scale}$$ In this regime, $\mathrm{Re}\gg1$, and assuming the scaling was done correctly, i.e. all non-dimensional variables are $O(1)$, we see that all terms are $O(1)$ or smaller (since $O(\mathrm{Re}^{-1})\ll O(1)$). This indicates that the scaling was properly done and that the scales are the characteristic scales in this regime. Since the viscous term is negligibly small compared to the convective term, we are allowed to disregard it completely. Note that the pressure term is of the same order as the convective term; the pressure gradient must always be of the same order as the dominant term in the equation in order to balance the terms.

Let's now consider the viscous regime where $\mathrm{Re}\ll1$; you find (after dividing by $\mathrm{Re}^{-1}$): $$\partial_t u_i + \mathrm{Re}\, u_j \partial_j u_i = - \mathrm{Re}\, \partial_i p + \partial_{jj} u_i, \qquad \qquad \text{using diffusive time scale}$$ We see here that again all terms are $O(1)$ or smaller (since $O(\mathrm{Re})\ll O(1)$). In this regime, we can neglect the convective terms, as the viscous terms are dominant. However, apparently the pressure gradient is also negligible, when it should in fact be of the same order as the viscous terms. This indicates that the proposed scaling in this regime is not correct. To resolve this problem we require a rescaling of the pressure. Let's rescale $[p]^* = \mathrm{Re}^{\alpha}[p]$, where $\alpha$ is some constant to be determined. Substituting into the equations yields: $$\partial_t u_i + \mathrm{Re}\, u_j \partial_j u_i = - \mathrm{Re}^{1+\alpha} \partial_i p^* + \partial_{jj} u_i, \qquad \qquad \text{using diffusive time scale}$$ For the pressure gradient to be of the same order as the viscous terms, we require $1+\alpha=0$, or $\alpha=-1$. We then get: $$\partial_t u_i + \mathrm{Re}\, u_j \partial_j u_i = - \partial_i p^* + \partial_{jj} u_i, \qquad \qquad \text{using diffusive time scale}$$ and the pressure scale has been rescaled to $[p]^* = \mathrm{Re}^{-1}[p] = \frac{\mu U}{L}$, which is clearly a viscous pressure scale, as is evident from the presence of the viscosity $\mu$. This is in contrast with the original pressure scale $[p] = \rho U^2$, which clearly doesn't contain a viscosity and was only appropriate in the convective regime $\mathrm{Re}\gg1$. We can therefore refer to it as a convective pressure scale.

To conclude with answering your questions: Because the equations are valid only for two different regimes, as set by your choice of scales: $\mathrm{Re}\ll1$ and $\mathrm{Re}\gg1$.
The only 'related' solution will be found for $\mathrm{Re}=1$; otherwise the solutions will be completely different. In fact, analytical solutions are generally only possible for $\mathrm{Re}\ll1$, because the equations become linear. For $\mathrm{Re}\gg1$, the equations are highly non-linear, making it possible to obtain solutions only numerically. No, see the answer to 2.
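To make the two pressure scales tangible, here is a small numeric illustration in Python (the parameter values are invented for illustration):

# convective vs. viscous pressure scales for water-like parameters (SI units)
rho, mu = 1000.0, 1.0e-3       # density [kg/m^3], dynamic viscosity [Pa s]

for U, L in [(1.0, 0.1), (1.0e-5, 1.0e-5)]:   # a fast/large flow, a slow/tiny flow
    Re = rho * U * L / mu
    p_conv = rho * U**2        # appropriate when Re >> 1
    p_visc = mu * U / L        # appropriate when Re << 1
    print(f"Re={Re:.3g}  rho U^2={p_conv:.3g} Pa  mu U / L={p_visc:.3g} Pa")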
I want to determine the relationship that must exist between the $x_i$ and $y_i$ such that $$ \frac{\partial}{\partial\theta} \prod_{i=1}^n \frac{f(x_i,\theta)}{f(y_i,\theta)} = 0, $$ where $$ f(x,\theta) = \frac{e^{-(\theta - x)}}{(1+e^{-(\theta - x)})^2}, \;\; \forall x \in {\mathbb R}, \theta \in {\mathbb R} $$ Clarification: what I'm trying to find is a condition on the $x_i$ and $y_i$ such that the derivative above (viewed as a function of $\theta$) is zero for all $\theta$ only if this condition holds. Clearly, this derivative is zero for all $\theta$ if, $\forall i\in\{1,\dots,n\}$, the condition $x_i = y_i$ holds, since then the product in the derivative expression above is identically 1. But this condition is not necessary: the derivative will be identically zero also if there is an $n$-permutation $\sigma$ such that $\forall i,\,x_i = y_{\sigma(i)}$. My problem is to prove that the derivative is identically zero (i.e. it is zero for all $\theta$) only if such a $\sigma$ exists, for given $x_i$ and $y_i$. So, hoping to have a look at the derivative above, I input this into Mathematica:

Block[{f, θ, x, y, i, n}, f[x_][θ_] := E^(-x + θ)/(1 + E^(-x + θ))^2; D[Product[f[x[i]][θ]/f[y[i]][θ], {i, n}], θ]]

...but Mathematica basically spits back the last formula (after replacing the various expressions in f):

D[Product[(E^(-x[i] + y[i])*(1 + E^(θ - y[i]))^2)/(1 + E^(θ - x[i]))^2, {i, n}], θ]

If instead of using a symbolic product (with an unspecified number of terms) I attempt the same thing with a product of three terms, namely

(f[x1][θ]/f[y1][θ]) (f[x2][θ]/f[y2][θ]) (f[x3][θ]/f[y3][θ])

...Mathematica does compute the derivative (though the resulting expression is hairy, and I can't extract any insight from it). So my first question is: how can I get Mathematica to produce the expression for the derivative in the general case? (After all, the derivative of an $n$-term product has a form that Mathematica should be able to express relatively easily.) In any case, the results I got for a three-term product were not encouraging. Of course, I really don't care for the derivative per se; rather, what I'm after are the conditions on the $x_i$ and $y_i$ that make this derivative vanish. Is there a way that Mathematica can show me the relationship between the $x_i$ and $y_i$ when this derivative is 0?
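For what it's worth, the general form of the derivative can be written down by hand via the logarithmic derivative (standard calculus, independent of Mathematica):
\[
\frac{\partial}{\partial\theta} \prod_{i=1}^n \frac{f(x_i,\theta)}{f(y_i,\theta)}
= \left( \prod_{i=1}^n \frac{f(x_i,\theta)}{f(y_i,\theta)} \right)
  \sum_{i=1}^n \left( \frac{\partial_\theta f(x_i,\theta)}{f(x_i,\theta)}
  - \frac{\partial_\theta f(y_i,\theta)}{f(y_i,\theta)} \right).
\]
Since this particular $f$ is strictly positive, the product never vanishes, so the derivative is identically zero exactly when the sum of logarithmic derivatives is identically zero.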
We report a high-statistics measurement of $\Upsilon$ production with an 800 GeV/c proton beam on hydrogen and deuterium targets. The dominance of the gluon-gluon fusion process for $\Upsilon$ production at this energy implies that the cross section ratio, $\sigma (p + d \to \Upsilon) / 2\sigma (p + p\to \Upsilon)$, is sensitive to the gluon content in the neutron relative to that in the proton. Over the kinematic region $0 < x_F < 0.6$, this ratio is found to be consistent with unity, in striking contrast to the behavior of the Drell-Yan cross section ratio $\sigma(p+d)_{DY}/2\sigma(p+p)_{DY}$. This result shows that the gluon distributions in the proton and neutron are very similar. The $\Upsilon$ production cross sections are also compared with the $p+d$ and $p+\text{Cu}$ cross sections from earlier measurements.

We report on the first observation of open charm production in neutral current deep inelastic neutrino scattering as seen in the NuTeV detector at Fermilab. The production rate is shown to be consistent with a pure gluon-$Z^{0}$ boson production model, and the observed level of charm production is used to determine the effective charm mass. As part of our analysis, we also obtain a new measurement for the proton-nucleon charm production cross section at $\sqrt{s}=38.8$ GeV.

A precise measurement of the ratio of Drell-Yan yields from an 800 GeV/c proton beam incident on hydrogen and deuterium targets is reported. Over 140,000 Drell-Yan muon pairs with dimuon mass $M_{\mu^+\mu^-} \ge 4.5$ GeV/$c^2$ were recorded. From these data, the ratio of anti-down ($\bar{d}$) to anti-up ($\bar{u}$) quark distributions in the proton sea is determined over a wide range in Bjorken-$x$. A strong $x$ dependence is observed in the ratio $\bar{d}/\bar{u}$, showing substantial enhancement of $\bar{d}$ with respect to $\bar{u}$ for $x<0.2$. This result is in fair agreement with recent parton distribution parameterizations of the sea. For $x>0.2$, the observed $\bar{d}/\bar{u}$ ratio is much nearer unity than given by the parameterizations.

The cross sections for the hadroproduction of the $\chi_1$ and $\chi_2$ states of charmonium in proton-silicon collisions at $\sqrt{s}=38.8$ GeV have been measured in Fermilab fixed-target Experiment 771. The $\chi$ states were observed via their radiative decay to $J/\psi+\gamma$, where the photon converted to $e^+e^-$ in the material of the spectrometer. The measured values of the $\chi_1$ and $\chi_2$ cross sections for $x_F>0$ are $263\pm69(\text{stat})\pm32(\text{syst})$ and $498\pm143(\text{stat})\pm67(\text{syst})$ nb per nucleon, respectively. The resulting $\sigma(\chi_1)/\sigma(\chi_2)$ ratio of $0.53\pm0.20(\text{stat})\pm0.07(\text{syst})$, although somewhat larger than most theoretical expectations, can be accommodated by the latest theoretical estimates.

Measurements of the production of high transverse momentum direct photons by a 515 GeV/c $\pi^-$ beam and 530 and 800 GeV/c proton beams in interactions with beryllium and hydrogen targets are presented. The data span the kinematic ranges of $3.5 < p_T < 12$ GeV/c in transverse momentum and 1.5 units in rapidity. The inclusive direct-photon cross sections are compared with next-to-leading-order perturbative QCD calculations and with expectations based on a phenomenological parton-$k_T$ model.

We present results on the production of high transverse momentum $\pi^0$ and $\eta$ mesons in $pp$ and $p\text{Be}$ interactions at 530 and 800 GeV/c. The data span the kinematic ranges $1 < p_T < 10$ GeV/c in transverse momentum and 1.5 units in rapidity.
The inclusive $\pi^0$ cross sections are compared with next-to-leading order QCD calculations and to expectations based on a phenomenological parton-k_T model. Measurements of the ratio of Drell-Yan yields from an 800 GeV/c proton beam incident on liquid hydrogen and deuterium targets are reported. Approximately 360,000 Drell-Yan muon pairs remained after all cuts on the data. From these data, the ratio of anti-down ($\bar{d}$) to anti-up ($\bar{u}$) quark distributions in the proton sea is determined over a wide range in Bjorken-$x$. These results confirm previous measurements by E866 and extend them to lower $x$. From these data, $(\bar{d}-\bar{u})$ and $\int(\bar{d}-\bar{u})dx$ are evaluated for $0.015<x<0.35$. These results are compared with parameterizations of various parton distribution functions, models and experimental results from NA51, NMC, and HERMES.
Deceleration parameter

$$q\ \stackrel{\mathrm{def}}{=}\ -\frac{\ddot{a}\,a}{\dot{a}^2}$$

where $a$ is the scale factor of the universe and the dots indicate derivatives with respect to proper time. The expansion of the universe is said to be "accelerating" if $\ddot{a}$ is positive (recent measurements suggest it is), and in this case the deceleration parameter will be negative. The minus sign and the name "deceleration parameter" are historical; at the time of definition $q$ was thought to be positive, but it is now believed to be negative.

The Friedmann acceleration equation can be written as

$$3\frac{\ddot{a}}{a}=-4\pi G(\rho+3p)=-4\pi G(1+3w)\rho,$$

where $\rho$ is the energy density of the universe, $p$ is its pressure, and $w$ is the equation of state of the universe. This can be rewritten as

$$q=\frac{1}{2}(1+3w)\left(1+\frac{K}{(aH)^2}\right).$$

The derivative of the Hubble parameter can be written in terms of the deceleration parameter:

$$\frac{\dot{H}}{H^2}=-(1+q).$$

Except in the speculative case of phantom energy (which violates all the energy conditions), all postulated forms of matter yield a deceleration parameter $q \geq -1$. Thus, any expanding universe should have a decreasing Hubble parameter and the local expansion of space is always slowing (or, in the case of a cosmological constant, proceeds at a constant rate, as in de Sitter space). Observations of the cosmic microwave background demonstrate that the universe is very nearly flat, so

$$q=\frac{1}{2}(1+3w).$$

This implies that the universe is decelerating for any cosmic fluid with equation of state $w$ greater than $-1/3$ (any fluid satisfying the strong energy condition does so, as does any form of matter present in the Standard Model, but excluding inflation). However, observations of distant type Ia supernovae indicate that $q$ is negative; the expansion of the universe is accelerating. This is an indication that the gravitational attraction of matter, on the cosmological scale, is more than counteracted by the negative pressure of dark energy, in the form of either quintessence or a positive cosmological constant. Before the first indications of an accelerating universe, in 1998, it was thought that the universe was dominated by dust with negligible equation of state, $w \approx 0$. This suggested that the deceleration parameter was equal to one half; the experimental effort to confirm this prediction led to the discovery of possible acceleration.
1. Check that \(\left\{\begin{pmatrix}x\\y\end{pmatrix} \middle|\, x,y \in \Re \right\} = \Re^{2}\) (with the usual addition and scalar multiplication) satisfies all of the parts in the definition of a vector space.

2. a) Check that the complex numbers \(\mathbb{C}= \left\{x+iy \mid i^{2}=-1, x,y\in \Re \right\}\) satisfy all of the parts in the definition of a vector space over \(\mathbb{C}\). Make sure you state carefully what your rules for vector addition and scalar multiplication are. b) What would happen if you used \(\mathbb{R}\) as the base field (try comparing to problem 1)?

3. a) Consider the set of convergent sequences, with the same addition and scalar multiplication that we defined for the space of sequences: \[V = \left\{f \mid f \colon \mathbb{N} \rightarrow \Re, \lim_{n \rightarrow \infty} f \in \Re \right\}\subset {\mathbb{R}^{\mathbb{N}}}\, .\] Is this still a vector space? Explain why or why not. b) Now consider the set of divergent sequences, with the same addition and scalar multiplication as before: \[V = \left\{f \mid f \colon \mathbb{N} \rightarrow \Re, \lim_{n \rightarrow \infty} f \text{ does not exist or is }\pm \infty \right\}\subset {\mathbb{R}}^{\mathbb{N}}\, .\] Is this a vector space? Explain why or why not.

4. Let \(V= \left\{\begin{pmatrix}x\\y\end{pmatrix} \middle|\, x,y \in \Re \right\} = \Re^{2}\). Propose as many rules for addition and scalar multiplication as you can that satisfy some of the vector space conditions while breaking some others.

5. Consider the set of \(2\times 4\) matrices: \[ V = \left\{ \begin{pmatrix} a & b & c & d \\ e & f & g & h \end{pmatrix} \mid a,b,c,d,e,f,g,h \in \mathbb{C} \right\} \] Propose definitions for addition and scalar multiplication in \(V\). Identify the zero vector in \(V\), and check that every matrix in \(V\) has an additive inverse.

6. Let \(P_{3}^{\Re}\) be the set of polynomials with real coefficients of degree three or less. a) Propose a definition of addition and scalar multiplication to make \(P_{3}^{\Re}\) a vector space. b) Identify the zero vector, and find the additive inverse for the vector \(-3-2x+x^{2}\). c) Show that \(P_{3}^{\Re}\) is not a vector space over \(\mathbb{C}\). Propose a small change to the definition of \(P_{3}^{\Re}\) to make it a vector space over \(\mathbb{C}\).

7. Let \(V=\{x\in \mathbb{R}\mid x>0\}=:\mathbb{R}_{+}\). For \(x,y\in V\) and \(\lambda\in \mathbb{R}\), define $$ x\oplus y = xy\, ,\qquad \lambda \otimes x = x^{\lambda}\, . $$ Prove that \((V,\oplus,\otimes,\mathbb{R})\) is a vector space.

8. The component in the \(i\)th row and \(j\)th column of a matrix can be labeled \(m^{i}_{j}\). In this sense a matrix is a function of a pair of integers. For what set \(S\) is the set of \(2\times2\) matrices the same as the set \(\Re^{S}\)? Generalize to other size matrices.

9. Show that any function in \(\Re^{\{*,\star,\# \}}\) can be written as a sum of multiples of the functions \(e_{*},e_{\star},e_{\#}\) defined by $$ e_{*} (k)= \left\{\!\! \begin{array}{ll} ~~1\, , & k=*\\ ~~0\, , & k=\star \\ ~~ 0\, , & k=\# \end{array} \right. ,~ e_{\star} (k)= \left\{\!\! \begin{array}{ll} ~~0\, , & k=*\\ ~~1\, , & k=\star \\ ~~0\, , & k=\# \end{array} \right. ,~ e_{\#} (k)= \left\{\!\! \begin{array}{ll} ~~0\, , & k=*\\ ~~0\, , & k=\star \\ ~~1\, , & k=\# \end{array} \right. $$

10. Let \(V\) be a vector space and \(S\) any set. Show that the set of all functions mapping \(S\to V\), \(\textit{i.e.}\) \(V^{S}\), is a vector space.
\(\textit{Hint:}\) first decide upon a rule for adding functions whose outputs are vectors.
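Problem 7 also lends itself to a quick numerical sanity check (evidence, not a proof). A sketch in Python; the helper names are mine, not part of the problem set:

import random

def oplus(x, y):        # proposed vector addition on R_+: x (+) y = xy
    return x * y

def otimes(lam, x):     # proposed scalar multiplication: lam (*) x = x^lam
    return x ** lam

for _ in range(1000):
    x, y, z = (random.uniform(0.1, 10.0) for _ in range(3))
    a, b = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    assert abs(oplus(x, y) - oplus(y, x)) < 1e-9                          # commutativity
    assert abs(oplus(oplus(x, y), z) - oplus(x, oplus(y, z))) < 1e-9      # associativity
    assert abs(oplus(x, 1.0) - x) < 1e-9                                  # 1 acts as the zero vector
    assert abs(otimes(a + b, x) - oplus(otimes(a, x), otimes(b, x))) < 1e-6  # (a+b)x = ax (+) bx
print("all spot checks passed")

Of course the interesting part of the problem is the proof itself, e.g. that the additive inverse of \(x\) is \(1/x\); the script only guards against misremembering an axiom.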
Linear norm (revision as of 13:21, 21 December 2017)

This is the wiki page for understanding seminorms of linear growth on a group $G$ (such as the free group on two generators). These are functions $\| \cdot \|: G \to [0,+\infty)$ that obey the triangle inequality

$$\|xy\| \leq \|x\| + \|y\| \quad (1)$$

and the linear growth condition

$$\|x^n\| = |n| \|x\| \quad (2)$$

for all $x,y \in G$ and $n \in {\bf Z}$. We use the usual group theory notations $x^y := yxy^{-1}$ and $[x,y] := xyx^{-1}y^{-1}$.

Threads

https://terrytao.wordpress.com/2017/12/16/bi-invariant-metrics-of-linear-growth-on-the-free-group/, Dec 16 2017. Bi-invariant metrics of linear growth on the free group, II, Dec 19 2017.

Key lemmas

Henceforth we assume we have a seminorm $\| \cdot \|$ of linear growth. The letters $x,y,z,w$ are always understood to be in $G$, and $i,j,n,m$ are always understood to be integers. From (2) we of course have

$$\|x^{-1}\| = \|x\| \quad (3)$$

Lemma 1. If $x$ is conjugate to $y$, then $\|x\| = \|y\|$.

Proof: By hypothesis, $x = zyz^{-1}$ for some $z$, thus $x^n = z y^n z^{-1}$, hence by the triangle inequality $n \|x\| = \|x^n\| \leq \|z\| + n \|y\| + \|z^{-1}\|$ for any $n \geq 1$. Dividing by $n$ and taking limits we conclude that $\|x\| \leq \|y\|$. Similarly $\|y\| \leq \|x\|$, giving the claim. $\Box$
I don't know the atmospheric pressure of your world, so I'll assume that it is like Earth's: 101.325 kPa. Atmosphere (101.325 kPa): 35% oxygen, 61% argon, 1.07% carbon dioxide, 0.93% arsenic, 2% other trace elements.

First

Arsenic is a deadly poison (here you can read about the symptoms and more information), and it is impossible for it to be a gas in your current atmosphere: arsenic only sublimes at about 615°C, so you would need to change the pressure and temperature of the atmosphere drastically.

Second

To calculate the partial pressure of the gases I need to know the molecular mass of the other trace elements; because I don't know them, I will replace them with N$_2$ (a really common gas).

Partial Pressure

$$ \left| \begin{array}{c|c|c|c|c|c|c}\hline\text{Gas}&\text{Mass %}&\text{g/mol}&\text{Mols}&\text{Mole fraction}&\text{Partial pressure (kPa)}&\text{At }0.66\text{ g (kPa)}\\\hline\text{O}_{2}&35\%&32&1.09&40.08\%&40.61&26.8\\\text{Ar}&61\%&39.95&1.53&55.96\%&56.7&37.42\\\text{CO}_{2}&1.07\%&44.01&0.02&0.89\%&0.9&0.6\\\text{As}&0.93\%&74.92&0.01&0.45\%&0.46&0.3\\\text{N}_{2}&2\%&28.01&0.07&2.62\%&2.65&1.75\\\hline\text{Total}&100\%&218.89&2.73&100\%&101.325&66.87\\\hline\end{array}\right| $$

High oxygen value: Humans need around 21 kPa of oxygen to "work" properly; you have double that, so your people would suffer hyperoxia. Also, when oxygen is above 50 kPa it becomes toxic; luckily your O$_2$ isn't at toxic levels, but it would be annoying for your population.

Argon asphyxia: Although argon is non-toxic, it is 38% denser than air and is therefore considered a dangerous asphyxiant gas in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless.

Argon narcosis: I don't know much about it, but I think it can cause narcosis like nitrogen does (56.7 kPa of argon is a lot; maybe it could produce some dizziness). Also, I am not sure, but xenon weakens the blood-brain barrier and this increases the probability of infections in the brain; argon and xenon are both inert, anaesthetic, narcotic gases, so maybe argon also weakens the barrier.

CO$_2$ slightly above normal: the maximum amount of CO$_2$ in air can be 1% without visible problems; at 1.5% you would die in a month. You have 1.07%, so maybe it could take years to kill you, or your body will adapt to survive.

Lethal arsenic: see above.

For more information about gases in the atmosphere you can check this answer (effect of several gases with an emphasis on O$_2$) and this answer (effect of several gases in extreme doses with an emphasis on CO$_2$ intoxication).

Your world has 0.66 g gravity and you don't indicate the pressure of your atmosphere. The information above assumes that the pressure is equal to Earth's, but I don't know if you intend the same pressure or the same amount of gas. If it's the second option then the partial pressures will be found in the last column of the table; in that case you won't have hyperoxia, but argon could still be dangerous.

Sorry, but I don't know about your additional question (I'll compensate for that with a free check of atmospheric stability!).

Calculating if gases will escape!

1) Calculation of the escape velocity: In physics, escape velocity is the minimum speed needed for an object to escape from the gravitational influence of a massive body.
Escape Velocity = $\text{v}_\text{e} = \sqrt{\frac{2\times\text{G}\times\text{M}}{\text{r}}} = \sqrt{2\times\text{g}\times\text{r}}$

Where: G is the gravitational constant ($\text{G} \approx 6.67 \times 10^{-11} \text{ m}^3 \times \text{kg}^{-1} \times \text{s}^{-2} \approx 0.0000000000667$), M is the mass of the body to be escaped (planet) in kg, r is the distance from the center of mass of the body to the object in metres, and g is the gravity in m/s$^2$.

The problem is we don't know the radius of your planet. Gravity can be calculated: $$g=\frac{m}{r^{2}}$$ where $g$ is the surface gravity of the planet, as a multiple of the Earth's, $m$ its mass, in multiples of the Earth's mass ($5.976\times 10^{24}$ kg), and $r$ its radius, expressed as a multiple of the Earth's (mean) radius (6,371 km). So, to calculate the radius I can do: $$r = \sqrt{\frac{m}{g}}$$

On an exoplanet with 0.284 Earth-mass and a surface gravity of 0.66 g (6.47 m/s$^2$): $$r = \sqrt{\frac{0.284}{0.66}} = \sqrt{0.4303} \approx 0.656$$ $$0.656 \times 6{,}371 \text{ km} \approx 4{,}180 \text{ km}$$ So: $$v_e = \sqrt{2gr} \approx 7{,}350 \text{ m/s}$$

2) Now if the RMS (root-mean-square) speed of any gas in your atmosphere is comparable to the escape velocity of the planet, then that gas will escape rapidly and will be absent (a common rule of thumb is that a gas is retained over long times only if its RMS speed stays below roughly one sixth of the escape velocity). $\text{RMS} = \text{v}_{\text{rms}}=\sqrt{\frac{3\times\text{R}\times\text{T}}{\text{M}_{\text{m}}}}$

Where: $\text{v}_{\text{rms}}$ is the root-mean-square speed in meters per second, $\text{M}_{\text{m}}$ is the molar mass of the gas in kilograms per mole, $\text{R}$ is the molar gas constant ($\text{R} = 8.3144598(48)\text{ J}\times\text{mol}^{-1}\times\text{K}^{-1}$), and $\text{T}$ is the temperature in kelvin (K = °C + 273.15). I'll use 25°C (298.15 K); I think that is the "normal" temperature used in gas calculations where it isn't specified.

$$ \left| \begin{array}{c|c|c}\hline\text{Gas}&\text{kg/mol}&\text{RMS}\\\hline\text{O}_2&0.032&482.08 \text{ m/s}\\\text{Ar}&0.039&431.45 \text{ m/s}\\\text{CO}_2&0.044&411.07 \text{ m/s}\\\text{As}&0.074&315.06 \text{ m/s}\\\text{N}_2&0.028&515.27 \text{ m/s}\\\hline \end{array}\right| $$

Your atmosphere is stable! (or at least in the short term; I don't know how to calculate the Boltzmann distribution for geological ages).
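The whole stability check can be scripted. Here is a Python sketch under the same assumptions as above (0.284 Earth masses, 0.66 g, T = 298.15 K); the one-sixth retention rule of thumb is my addition, not part of the original calculation:

import math

g_earth, R_earth = 9.81, 6.371e6            # m/s^2, m
m_rel, g_rel = 0.284, 0.66                  # mass and surface gravity, Earth-relative
r = math.sqrt(m_rel / g_rel) * R_earth      # from g = m / r^2 in Earth-relative units
v_esc = math.sqrt(2 * (g_rel * g_earth) * r)
print(f"radius = {r / 1000:.0f} km, escape velocity = {v_esc:.0f} m/s")

R_gas, T = 8.3144598, 298.15                # J/(mol K), K
for gas, M in [("O2", 0.032), ("Ar", 0.03995), ("CO2", 0.04401), ("As", 0.07492), ("N2", 0.02801)]:
    v_rms = math.sqrt(3 * R_gas * T / M)    # root-mean-square speed, m/s
    kept = v_rms < v_esc / 6                # rule of thumb for long-term retention
    print(gas, f"v_rms = {v_rms:.0f} m/s", "retained" if kept else "escapes")

All five gases come out far below one sixth of the escape velocity, consistent with the conclusion above.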
The Hamiltonian of the SYK model is \begin{equation} H = \mathcal{N}\sum_{ijkl}^N J^{ijkl} \chi_i \chi_j \chi _k \chi _l \end{equation} where $\mathcal{N}$ is some normalization to make the energy scale with $N$ and $\chi_i$ is a Majorana operator. Different reviews of the SYK model call the variables $i=1,\dots,N$ sites; others talk about $\chi$ as a vector with $N$ components, as if it were a spin-$N$ particle. In relation to this question, I don't understand whether the number of particles is conserved by this Hamiltonian. In other cases, like the Hubbard model, one gets a term like \begin{equation} H = \mathcal{N}\sum_{r,r'} J^{r,r'} a^\dagger_r a_{r'} \end{equation} where the interpretation is that the Hamiltonian destroys a particle on the site $r'$ and creates another one at site $r$. In the SYK Hamiltonian, however, since the Majorana fermions are self-adjoint, any operator can work as both a creation and an annihilation operator. This means that any term in the Hamiltonian could create four particles, destroy four particles, or anything in between. So the question is: how should we interpret the SYK Hamiltonian (or any Hamiltonian with Majorana fermions, for that matter)?
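One standard way to make the interpretation concrete (a textbook construction, not specific to any one review): pair the Majoranas into complex fermions, e.g. for even $N$,
$$c_j=\frac{\chi_{2j-1}+i\chi_{2j}}{\sqrt{2}},\qquad c_j^\dagger=\frac{\chi_{2j-1}-i\chi_{2j}}{\sqrt{2}},\qquad j=1,\dots,N/2$$
(the normalization depends on whether one takes $\chi_i^2=\tfrac12$ or $\chi_i^2=1$). Each $\chi$ is then a sum of a creation and an annihilation operator, so the quartic SYK term expands into pieces of the form $c^\dagger c^\dagger c c$ alongside $c^\dagger c^\dagger c^\dagger c$, $c^\dagger c^\dagger c^\dagger c^\dagger$, and so on. Particle number $\hat N=\sum_j c_j^\dagger c_j$ is therefore not conserved; only the fermion parity $(-1)^{\hat N}$ commutes with $H$, since every term contains an even number of Majoranas.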
Subset Product within Commutative Structure is Commutative

Theorem

Let $\struct {S, \circ}$ be a magma in which $\circ$ is commutative. Then the induced operation $\circ_\mathcal P$ on the power set $\powerset S$ is also commutative.

Proof

Let $X, Y \in \powerset S$. Then:

$\displaystyle X \circ_\mathcal P Y = \set {x \circ y: x \in X, y \in Y}$

$\displaystyle Y \circ_\mathcal P X = \set {y \circ x: x \in X, y \in Y}$

Since $\circ$ is commutative, $x \circ y = y \circ x$ for all $x \in X$ and $y \in Y$, so the two sets above are equal, from which it follows that $\circ_\mathcal P$ is commutative on $\powerset S$. $\blacksquare$
Suppose that $(\Omega,\Sigma,\mathbb P)$ is a probability space and let $Y$ and $Z$ be two real-valued, $\Sigma$-to-Borel measurable functions on this space. Suppose that the distribution of $Y$ conditional on $Z$ satisfies $$\mathbb P(Y\leq y\mid Z)=F(y-Z)\quad\text{almost surely, for each $y\in\mathbb R$},\tag{$\clubsuit$}$$ where $F:\mathbb R\to[0,1]$ is some non-decreasing (a fortiori Borel-measurable) distribution function. The form of the conditional probabilities ($\clubsuit$) leads me to formulate the following two conjectures, which I have had a hard time proving: Conjecture 1: The unconditional distribution of $Y-Z$ satisfies \begin{align*} \mathbb P(Y-Z\leq y)=F(y)\quad\text{for each $y\in\mathbb R$}; \end{align*} and Conjecture 2: The variables $Y-Z$ and $Z$ are independent. Any hints would be greatly appreciated.
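A possible line of attack (a sketch, modulo measurability details): by the tower property and ($\clubsuit$),
$$\mathbb P(Y-Z\leq y)=\mathbb E\big[\mathbb P(Y\leq y+Z\mid Z)\big]=\mathbb E\big[F\big((y+Z)-Z\big)\big]=F(y),$$
which is Conjecture 1, and for any Borel set $B$,
$$\mathbb P(Y-Z\leq y,\ Z\in B)=\mathbb E\big[\mathbf 1_{\{Z\in B\}}\,\mathbb P(Y\leq y+Z\mid Z)\big]=F(y)\,\mathbb P(Z\in B),$$
which is the factorization needed for Conjecture 2. The step that needs care is substituting the $\sigma(Z)$-measurable level $y+Z$ into ($\clubsuit$), which is only stated almost surely for each fixed $y$; one standard route is to pass through a countable dense set of levels and use right-continuity of $F$ (or work with a regular conditional distribution for $Y$ given $Z$).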
“As a geologist Farey is entitled to respect for the work which he carried out himself, although it has scarcely been noticed in the standard histories of geology.” That we still remember his name after 200 years is due to a short letter he wrote in 1816 to the editor of the Philosophical Magazine “On a curious Property of vulgar Fractions. By Mr. J. Farey, Sen. To Mr. Tilloch Sir. – On examining lately, some very curious and elaborate Tables of “Complete decimal Quotients,” calculated by Henry Goodwyn, Esq. of Blackheath, of which he has printed a copious specimen, for private circulation among curious and practical calculators, preparatory to the printing of the whole of these useful Tables, if sufficient encouragement, either public or individual, should appear to warrant such a step: I was fortunate while so doing, to deduce from them the following general property; viz. If all the possible vulgar fractions of different values, whose greatest denominator (when in their lowest terms) does not exceed any given number, be arranged in the order of their values, or quotients; then if both the numerator and the denominator of any fraction therein, be added to the numerator and the denominator, respectively, of the fraction next but one to it (on either side), the sums will give the fraction next to it; although, perhaps, not in its lowest terms. For example, if 5 be the greatest denominator given; then are all the possible fractions, when arranged, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, and 4/5; taking 1/3, as the given fraction, we have (1+1)/(5+3) = 2/8 = 1/4 the next smaller fraction than 1/3; or (1+1)/(3+2) = 2/5, the next larger fraction to 1/3. Again, if 99 be the largest denominator, then, in a part of the arranged Table, we should have 15/52, 28/97, 13/45, 24/83, 11/38, &c.; and if the third of these fractions be given, we have (15+13)/(52+45) = 28/97 the second: or (13+11)/(45+38) = 24/83 the fourth of them: and so in all the other cases. I am not acquainted, whether this curious property of vulgar fractions has been before pointed out?; or whether it may admit of any easy or general demonstration?; which are points on which I should be glad to learn the sentiments of some of your mathematical readers; and am Sir, Your obedient humble servant, J. Farey. Howland-street.” So, if we interpolate “childish addition of fractions” $\frac{a}{b} \oplus \frac{c}{d} = \frac{a+c}{b+d} $ and start with the numbers $0 = \frac{0}{1} $ and $\infty = \frac{1}{0} $ we get the binary Farey-tree above. For a fixed natural number n, if we stop the interpolation whenever the denominator of the fraction would become larger than n and order the obtained fractions (smaller or equal to one) we get the Farey sequence F(n). For example, if n=3 we start with the sequence $ \frac{0}{1},\frac{1}{1} $. The next step we get $\frac{0}{1},\frac{1}{2},\frac{1}{1} $ and the next step gives $\frac{0}{1},\frac{1}{3},\frac{1}{2},\frac{2}{3},\frac{1}{1} $ and as all the denominators of childish addition on two consecutive fractions will be larger than 3, the above sequence is F(3). A remarkable feature of the series F(n) is that if $\frac{a}{b} $ and $\frac{c}{d} $ are consecutive terms in F(n), then $\det \begin{bmatrix} a & c \\ b & d \end{bmatrix} = -1 $ and so these two fractions are the endpoints of an even geodesic in the Dedekind tessellation. A generalized Farey series is an ordered collection of fractions $\infty,x_0,x_1,\cdots,x_n,\infty $ such that $x_0 $ and $x_n $ are integers and some $x_i=0 $.
Moreover, writing $x_i = \frac{a_i}{b_i} $ we have that $\det \begin{bmatrix} a_i & a_{i+1} \\ b_i & b_{i+1} \end{bmatrix} = -1 $ A Farey code is a generalized Farey sequence consisting of all the vertices of a special polygon that lie in $\mathbb{R} \cup \{ \infty \} $ together with side-pairing information. If two consecutive terms are such that the complete geodesic between $x_i $ and $x_{i+1} $ consists of two sides of the polygon which are paired we denote this fact by [tex]\xymatrix{x_i \ar@{-}[r]_{\circ} & x_{i+1}}[/tex]. If they are the endpoints of two odd sides of the polygon which are paired we denote this by [tex]\xymatrix{x_i \ar@{-}[r]_{\bullet} & x_{i+1}}[/tex]. Finally, if they are the endpoints of a free side which is paired to another free side determined by $x_j $ and $x_{j+1} $ we denote this fact by marking both edges [tex]\xymatrix{x_i \ar@{-}[r]_{k} & x_{i+1}}[/tex] and [tex]\xymatrix{x_j \ar@{-}[r]_{k} & x_{j+1}}[/tex] with the same number. For example, for the M(12) special polygon on the left (bounded by the thick black geodesics), the only vertices in $\mathbb{R} \cup \{ \infty \} $ are $\infty,0,\frac{1}{3},\frac{1}{2},1 $. The two vertical lines are free sides and are paired, whereas all other sides of the polygon are odd. Therefore the Farey-code for this Mathieu polygon is [tex]\xymatrix{\infty \ar@{-}[r]_{1} & 0 \ar@{-}[r]_{\bullet} & \frac{1}{3} \ar@{-}[r]_{\bullet} & \frac{1}{2} \ar@{-}[r]_{\bullet} & 1 \ar@{-}[r]_{1} & \infty}[/tex] Conversely, to a Farey-code we can associate a special polygon by first taking the hyperbolic convex hull of all the terms in the sequence (the region bounded by the vertical lines and the bottom red circles in the picture on the left) and adding to it for each odd interval [tex]\xymatrix{x_i \ar@{-}[r]_{\bullet} & x_{i+1}}[/tex] the triangle just outside the convex hull consisting of two odd edges in the Dedekind tessellation (then we obtain the region bounded by the black geodesics). Again, the side-pairing of the obtained special polygon can be obtained from that of the Farey-code. This correspondence gives a natural one-to-one correspondence special polygons <---> Farey-codes. Later we will see how the Farey-code determines the group structure of the corresponding finite index subgroup of the modular group $\Gamma = PSL_2(\mathbb{Z}) $. Reference: Ravi S. Kulkarni, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math. 113 (1991) 1053-1133.
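The Farey sequences and the determinant property of consecutive terms described above are easy to experiment with. A small Python sketch (the function name is mine):

from math import gcd

def farey(n):
    # all reduced fractions a/b in [0, 1] with denominator b <= n, in increasing order
    fracs = {(a // gcd(a, b), b // gcd(a, b)) for b in range(1, n + 1) for a in range(b + 1)}
    return sorted(fracs, key=lambda ab: ab[0] / ab[1])

F5 = farey(5)
print(F5)   # [(0,1), (1,5), (1,4), (1,3), (2,5), (1,2), (3,5), (2,3), (3,4), (4,5), (1,1)]
for (a, b), (c, d) in zip(F5, F5[1:]):
    assert a * d - b * c == -1   # consecutive terms satisfy det [[a, c], [b, d]] = -1

The asserted determinant is exactly the condition quoted above for two consecutive fractions to be the endpoints of an even geodesic in the Dedekind tessellation.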
Here's a question from Hull's Options, Futures and Other Derivatives which I'd appreciate if someone helped me to clarify. The question is from the chapter "Martingales and Measures". Suppose that the price of a zero-coupon bond maturing at time T follows the process: \begin{align} \frac{dP(t,T)}{P(t,T)} = \mu_P dt + \sigma_P dW_t^{\mathbb{P}} \\ \end{align} and the price of a derivative dependent on the bond follows the process \begin{align} \frac{df}{f} = \mu_f dt + \sigma_f dW_t^{\mathbb{P}} \\ \end{align} Assume only one source of uncertainty and that f provides no income. (a) What is the forward price $F$ of $f$ for a contract maturing at time $T$? (b) What is the process followed by $F$ in a world that is forward risk neutral with respect to $P(t,T)$? (c) What is the process followed by $F$ in the traditional risk-neutral world? Now the answers are: (a) $F=\frac{f}{P(t,T)}$, and from here we can derive $F$ dynamics as: \begin{align} \frac{dF}{F} = (\mu_f-\mu_p + \sigma_P^2 - \sigma_f \sigma_P )dt + (\sigma_f -\sigma_P) dW_t^{\mathbb{P}} \\ \end{align} (b) Since $F=\frac{f}{P(t,T)}$ has the numeraire $P(t,T)$ we can expect it to be a martingale under this measure, so that the dynamics are: \begin{align} \frac{dF}{F} = (\sigma_f -\sigma_P) dW_t^{\mathbb{P}} \\ \end{align} (c) This is where I'm confused; the solution apparently is \begin{align} \frac{dF}{F} = (\mu_f-\mu_p + \sigma_P^2 - \sigma_f \sigma_P )dt + (\sigma_f -\sigma_P) dW_t^{\mathbb{P}} \\ \frac{dF}{F} = ((r + \lambda \sigma_f)-(r + \lambda \sigma_P) + \sigma_P^2 - \sigma_f \sigma_P )dt + (\sigma_f -\sigma_P) dW_t^{\mathbb{P}} \end{align} Now since we are talking about the risk-neutral world we must choose $B_t$ as the numeraire, with \begin{align} \frac{dB_t}{B_t} = (r)dt \end{align} In this chapter Hull says that if we choose $\lambda = \sigma_g$ the process of $f/g$ will become a martingale, assuming $f$ and $g$ follow the same kind of dynamics with the same source of uncertainty. So in this case we might choose $\lambda = 0$ since there is no Brownian motion in the $B_t$ dynamics. This would lead us to believe that the solution is: \begin{align} \frac{dF}{F} = (\mu_f-\mu_p + \sigma_P^2 - \sigma_f \sigma_P )dt + (\sigma_f -\sigma_P) dW_t^{\mathbb{P}} \\ \frac{dF}{F} = (\sigma_P^2 - \sigma_f \sigma_P )dt + (\sigma_f -\sigma_P) dW_t^{\mathbb{P}} \quad (1) \end{align} This last equation is the solution proposed in the Solutions Manual of the book. My problem here is that, to my understanding, if (1) gives the actual dynamics of $F$ under the traditional risk-neutral measure, then $F/B$ should be a martingale, but when I do the calculation it isn't. Is my interpretation of the question incorrect, or could the answer be wrong? Much help appreciated
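One observation that may dissolve the paradox (my reading, not from the solutions manual): the forward price $F=f/P(t,T)$ is a ratio of two asset prices, not itself the price of a traded asset, so nothing forces $F/B_t$ to be a martingale under the traditional risk-neutral measure $\mathbb Q$. What $\mathbb Q$ does force is that $f/B_t$ and $P(t,T)/B_t$ be martingales, i.e. $\mu_f=\mu_P=r$. Substituting these drifts into the dynamics of $F$ from part (a) gives exactly
$$\frac{dF}{F}=(\sigma_P^2-\sigma_f\sigma_P)\,dt+(\sigma_f-\sigma_P)\,dW_t,$$
which is equation (1). So (1) having a nonzero drift is consistent, not contradictory: $F$ is a martingale only under the $T$-forward measure of part (b), where the numeraire is the bond itself.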
Camouflaged Butterfly

Source

Problem

Chords $CD\;$ and $QT\;$ of a given circle meet at point $P.\;$ The tangents at $Q\;$ and $T\;$ cross $CD\;$ extended at $A\;$ and $B,\;$ respectively. Prove that $\displaystyle\frac{1}{AP}-\frac{1}{BP}=\frac{1}{CP}-\frac{1}{DP}.$

Solution 1

By the Power of a Point Theorem, $AC\cdot AD=AQ^2,\;$ $BD\cdot BC=BT^2,\;$ and $CP\cdot DP=PQ\cdot PT,\;$ implying $AC(AP+DP)=AQ^2\;$ and $BD(BP+CP)=BT^2.\;$ By Stewart's theorem in $\Delta QAP\;$ for the cevian $QC,$ $CQ^2\cdot AP+AC\cdot AP\cdot CP=PQ^2\cdot AC+AQ^2\cdot CP,$ so that $CQ^2\cdot AP+AC\cdot AP\cdot CP=PQ^2\cdot AC+AC(AP+DP)\cdot CP,\quad$ or $CQ^2\cdot AP=PQ^2\cdot AC+AC\cdot CP\cdot DP,\;$ or else, since $CP\cdot DP=PQ\cdot PT,\;$ $CQ^2\cdot AP=PQ^2\cdot AC+AC\cdot PQ\cdot PT,\;$ and, finally, since $PQ+PT=QT,\;$ $CQ^2\cdot AP=AC\cdot PQ\cdot QT.\;$ Similarly, $DT^2\cdot BP=BD\cdot PT\cdot QT.$ We thus have $\displaystyle\frac{QC^2}{DT^2}=\frac{BP\cdot AC\cdot PQ}{AP\cdot BD\cdot PT}.\;$ But triangles $PCQ\;$ and $PTD\;$ are similar, so that $\displaystyle\frac{QC^2}{DT^2}=\frac{PQ^2}{DP^2},\;$ or $\displaystyle\frac{PQ^2}{DP^2}=\frac{BP\cdot AC\cdot PQ}{AP\cdot BD\cdot PT},\;$ implying $\displaystyle\frac{1}{DP^2}=\frac{BP\cdot AC}{AP\cdot BD\cdot PT\cdot PQ},\;$ or $\displaystyle\frac{1}{DP^2}=\frac{BP\cdot AC}{AP\cdot BD\cdot CP\cdot DP}.\;$ Simplifying, $\displaystyle\frac{AP\cdot CP}{BP\cdot DP}=\frac{AC}{BD},\;$ i.e., $\displaystyle\frac{AP\cdot CP}{BP\cdot DP}=\frac{AP-CP}{BP-DP},\;$ which is $\displaystyle\frac{BP-DP}{BP\cdot DP}=\frac{AP-CP}{AP\cdot CP},\;$ or $\displaystyle\frac{1}{DP}-\frac{1}{BP}=\frac{1}{CP}-\frac{1}{AP},\;$ which is equivalent to the required identity $\displaystyle\frac{1}{AP}-\frac{1}{BP}=\frac{1}{CP}-\frac{1}{DP}.$

Solution 2

Draw $AQ\parallel BS.\;$ We'll use the notations as defined in the diagram below: We have $\displaystyle\frac{a}{b}=\frac{AO}{BS}=\frac{AQ}{BT}.\;$ From this $\displaystyle\left(\frac{a}{b}\right)^2=\left(\frac{AQ}{BT}\right)^2=\frac{AD\cdot AC}{BC\cdot BD}=\frac{(a+n)(a-m)}{(b+m)(b-n)},$ From this we deduce $\displaystyle\frac{(a+n)(a-m)}{a^2}=\frac{(b+m)(b-n)}{b^2}.$ Expanding both sides, the difference reduces to $(n-m)\left(\frac1a+\frac1b\right)=mn\left(\frac1{a^2}-\frac1{b^2}\right);$ cancelling the common factor $\frac1a+\frac1b$ and dividing by $mn$ shows this is the same as $\displaystyle\frac{1}{a}-\frac{1}{b}=\frac{1}{m}-\frac{1}{n},\;$ as required.

Remark

The problem above is a clear generalization of the one in Butterfly in Inscriptible Quadrilateral. Furthermore, the genuine Butterfly Theorem, in which the fact that $P\;$ is the midpoint of one of the segments, $AB,\;$ $CD,\;$ implies its being the midpoint of the other, has been generalized to exactly the same condition $\displaystyle\frac{1}{DP}-\frac{1}{BP}=\frac{1}{CP}-\frac{1}{AP},\;$ for an arbitrary $P,\;$ see, e.g., the remark at the end of Proof 8 of the Butterfly Theorem.
Butterfly Theorem and Variants Butterfly theorem 2N-Wing Butterfly Theorem Better Butterfly Theorem Butterflies in Ellipse Butterflies in Hyperbola Butterflies in Quadrilaterals and Elsewhere Pinning Butterfly on Radical Axes Shearing Butterflies in Quadrilaterals The Plain Butterfly Theorem Two Butterflies Theorem Two Butterflies Theorem II Two Butterflies Theorem III Algebraic proof of the theorem of butterflies in quadrilaterals William Wallace's Proof of the Butterfly Theorem Butterfly theorem, a Projective Proof Areal Butterflies Butterflies in Similar Co-axial Conics Butterfly Trigonometry Butterfly in Kite Butterfly with Menelaus William Wallace's 1803 Statement of the Butterfly Theorem Butterfly in Inscriptible Quadrilateral Camouflaged Butterfly General Butterfly in Pictures Butterfly via Ceva Butterfly via the Scale Factor of the Wings Butterfly by Midline Stathis Koutras' Butterfly The Lepidoptera of the Circles The Lepidoptera of the Quadrilateral The Lepidoptera of the Quadrilateral II The Lepidoptera of the Triangle Two Butterflies Theorem as a Porism of Cyclic Quadrilaterals Two Butterfly Theorems by Sidney Kung Butterfly in Complex Numbers
Problem 1 from the 2006 IMO

Problem

Let $ABC$ be a triangle with incenter $I.$ A point $P$ in the interior of the triangle satisfies $\angle PBA + \angle PCA = \angle PBC + \angle PCB.$ Show that $AP \ge AI,$ and that equality holds if and only if $P = I.$

Solution

Let angles at $A,$ $B,$ $C$ be $2\alpha,$ $2\beta,$ $2\gamma,$ respectively; $\alpha+\beta+\gamma=90^{\circ}.$ Note that $\angle PBA + \angle PCA + \angle PBC + \angle PCB=2\beta+2\gamma=180^{\circ}-2\alpha.$ Thus, the condition $\angle PBA + \angle PCA = \angle PBC + \angle PCB$ implies that $\angle PBC + \angle PCB=90^{\circ}-\alpha,$ so that $\angle BPC=90^{\circ}+\alpha,$ placing $P$ on the circumcircle $(BCI).$ As we know, $(BCI)$ is centered at the midpoint $M$ of the arc $\overset{\frown}{BC}$ opposite $A,$ meaning in part that $A,$ $I,$ $M$ are collinear and that $MP=MI.$ By the triangle inequality, $AP+MP\ge AM=AI+MI=AI+MP,$ from which $AP\ge AI,$ with equality only when $P=I.$

Acknowledgment

I am grateful to Siyoun Sung for pointing out that this 2006 IMO problem has the elegant solution described above. Siyoun Sung has observed that the problem succumbs easily based on a property of the incircle. It's edifying to compare this proof with an older one.
I want to approximate how close the greedy algorithm gets to the optimal solution for the Set Cover Problem, which I'm sure most of you are familiar with, but just in case, you can visit the link above. The problem is NP-hard, and I'm trying to find a bound on how well the greedy algorithm performs. I know it looks like a lot, but please bear with me. I pretty much did most of the work; I'm just missing that last small piece. Here is the pseudo code:

Input: $U$ - set of elements, $F$ - family of sets s.t. $\bigcup_{S\in F}S=U$
Output: $C$ - a family of sets; $C\subseteq F$ s.t. $\bigcup_{S\in C}S=U$

initially C is empty
while U is not empty do:
    choose S from F that maximizes the cover of elements in U
    add S to C
    subtract S's elements from U
return C

The algorithm is pretty straightforward, and it is easy to see that it is indeed polynomial. This is my attempt at the approximation bound:

Claim 1: In a set $U$ of $n$ objects that can be covered with $k$ sets, there has to be a set $S\in F$ whose size is at least $\frac{1}{k}n$.

Proof: Trivial. (I decided not to prove it.)

A corollary is that, given the situation described in that claim, the greedy algorithm will choose a set whose size is at least $\frac{1}{k}n$.

Claim 2: Given a universe $U$, if there exists a cover of size $k$, then after $k$ iterations the greedy algorithm will cover at least half of the elements, meaning at least $\frac{1}{2}n$ elements.

Proof: By Claim 1, in the first iteration the algorithm will cover at least $\frac{1}{k}n$ elements. Upon entering the second iteration, there are at most $n-n\frac{1}{k}$ uncovered elements, and so the greedy algorithm will cover at least an additional $\frac{1}{k}(n-n\frac{1}{k})$ elements. In general, on the $i$'th iteration, the algorithm will cover $\frac{1}{k}(n-n\frac{i-1}{k})$ elements. So after $k$ iterations: $$\sum_{i=1}^{k}\frac{1}{k}(n-n\frac{i-1}{k})=\sum_{i=0}^{k-1}\frac{1}{k}(n-n\frac{i}{k})$$ $$=\sum_{i=0}^{k-1}\frac{n}{k}-\sum_{i=0}^{k-1}\frac{ni}{k^2}=n-\frac{n}{k^2}\sum_{i=0}^{k-1}i$$ $$=n-\frac{n}{k^2}(\frac{k(k-1)}{2})=n-\frac{n}{2}(\frac{k-1}{k})\geq\frac{1}{2}n$$

OK, now this is where I need help: I know that in the first $k$ iterations, the algorithm picks at least half of the elements. After another $k$ iterations, another half of what's left is covered (meaning another $\frac{1}{4}n$). So in general, I know that the bound is $k\log n$; I just can't figure out how to formalize it. What formula represents this behaviour? $T(ki)=T(\frac{1}{2^i}n)$ and solve for $i$? It didn't work... What formula or equation should I solve to actually show that the number of iterations is bounded by $k\log n$?
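For experimenting, here is a direct Python transcription of the pseudocode (names mine). The comment records the standard way to formalize the bound: if an optimal cover has size $k$, the remaining elements can always be covered by at most $k$ of the optimal sets, so each greedy step covers at least a $1/k$ fraction of what is still uncovered; hence after $t$ steps at most $n(1-1/k)^t\leq n\,e^{-t/k}$ elements remain, and $t=k\ln n$ makes this less than 1, i.e. at most $k\ln n$ (so $O(k\log n)$) iterations suffice.

def greedy_set_cover(U, F):
    # U: iterable of elements; F: list of sets whose union covers U
    uncovered = set(U)
    C = []
    while uncovered:
        S = max(F, key=lambda s: len(s & uncovered))  # greedy choice
        C.append(S)
        uncovered -= S   # after t steps: |uncovered| <= n * (1 - 1/k)^t
    return C

print(greedy_set_cover(range(6), [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 4}]))
# [{0, 1, 2}, {3, 4, 5}]

In your notation: rather than halving per block of $k$ iterations, track the uncovered count $u_t$ directly via the recurrence $u_t \leq u_{t-1}(1-1/k)$ and unroll it.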
Theorems for and Examples of Computing Limits of Sequences

Theorem 1: Let $f$ be a function with $f(n)=a_n$ for all integers $n>0$. If $\displaystyle\lim_{x\to\infty}f(x)=L$, then $\displaystyle\lim_{n\to\infty}a_n=L$ also.

Example 1: By the theorem, since $\displaystyle\lim_{x\to\infty}\frac{1}{x^r}=0$ when $r>0$, $\displaystyle\lim_{n\to\infty}\frac{1}{n^r}=0$ when $r>0$. Learn this example.

-------------------------------------------------------------------

Example 2: Evaluate $\displaystyle\lim_{n\to\infty}\frac{2n}{3n-4}$.

Solution 2: By dividing the top and the bottom by $n$, we get $\displaystyle\lim_{n\to\infty}\frac{2}{3-\tfrac 4 n}$. By the previous example, $\tfrac 4 n$ converges to 0, so we get $\displaystyle\lim_{n\to\infty}\frac{2n}{3n-4}=\frac{2}{3}$.

-------------------------------------------------------------------

Example 3: Evaluate $\displaystyle\lim_{n\to\infty}\frac{5n}{e^n}$.

DO: To solve this limit, first compute $\displaystyle\lim_{x\to\infty}\frac{5x}{e^x}$ before reading on.

Solution 3: We use l'Hospital's Rule. $\displaystyle\lim_{x\to\infty}\frac{5x}{e^x}\underset{\fbox{$\tfrac \infty \infty\,,\, \text{l'H}$}}{=}\lim_{x\to\infty}\frac{5}{e^x}=0$. Thus $\displaystyle\lim_{n\to\infty}\frac{5n}{e^n}=0$ by Theorem 1. Notice that we cannot take a derivative of a sequence directly: a sequence is defined only at the integers, so it is not continuous, let alone differentiable. Yet, because of our theorem, we can use l'Hospital's rule on the associated (continuous) $f(x)$ and compute our limit by taking derivatives. We get the limit of the sequence by evaluating the limit of the function.

Theorem 2: $\displaystyle \lim_{n\to\infty}a_n=0$ if and only if $\displaystyle\lim_{n\to\infty}|a_n|=0$.

Warning: This is only true when the limits are equal to 0. To see why this theorem makes sense, think about the graph of a sequence such as $a_n=\frac{(-1)^n}{n}$, which converges to 0. The graph of $\left\{|a_n|\right\}$ would flip all the negative points to their positive values, giving a sequence steadily decreasing to zero.

Example 4: By Theorem 1, since $\displaystyle\lim_{x\to\infty}r^x=0$ when $0<r<1$ (remember exponential functions?), $\displaystyle\lim_{n\to\infty}r^n=0$ when $0<r<1$. By Theorem 2, this extends to $-1<r<1$. If $r>1$ or $r\le-1$, the limit diverges.
(DO: what happens when $r=1$?) Summarizing: $\displaystyle\lim_{n\to\infty}r^n \left\{\begin{array}{ll}\text{converges }&\text{ if }-1<r\le 1\\\text{diverges}&\text{otherwise}\end{array}\right.$ Learn this. We will use it frequently.

-------------------------------------------------------------------

Example 5: Let $ a_n=(-1)^n\frac{3^{n+2}}{5^n}$. We will take the absolute value of our sequence, which here simply means ignoring the alternating sign $(-1)^n$, and take a limit to see if we get 0. Rewrite to get $|a_n|=3^2\left(\frac{3}{5}\right)^n=9\left(\frac{3}{5}\right)^n$. We evaluate $\displaystyle\lim_{n\to\infty}|a_n|= \lim_{n\to\infty}\frac{3^{n+2}}{5^n}=\lim_{n\to\infty}9\left(\frac{3}{5}\right)^n=9\cdot 0=0$, by Example 4 with $r=\tfrac 3 5$, since $-1<r<1$. Therefore, $\displaystyle\lim_{n\to\infty}a_n=\lim_{n\to\infty}(-1)^n\frac{3^{n+2}}{5^n}=0$ by Theorem 2.
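A quick numerical illustration of Example 5 (a Python sketch; not part of the original page):

for n in (1, 5, 10, 20, 40):
    a_n = (-1) ** n * 3 ** (n + 2) / 5 ** n
    print(n, a_n)   # |a_n| = 9 * (3/5)^n shrinks geometrically toward 0

The printed magnitudes decay by a factor of $3/5$ per step, matching the limit computed above.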
We report the measurements of the t anti-t production cross section and of the top quark mass using 1.02 fb^-1 of p anti-p data collected with the CDFII detector at the Fermilab Tevatron. We select events with six or more jets on which a number of kinematical requirements are imposed by means of a neural network algorithm. At least one of these jets must be identified as initiated by a b-quark candidate by the reconstruction of a secondary vertex. The cross section is measured to be sigma_{tt} = 8.3 +- 1.0 (stat.) +2.0/-1.5 (syst.) +- 0.5 (lumi.) pb, which is consistent with the standard model prediction. The top quark mass of 174.0 +- 2.2 (stat.) +- 4.8 (syst.) GeV/c^2 is derived from a likelihood fit incorporating reconstructed mass distributions representative of signal and background. We present the first model-independent measurement of the helicity of $W$ bosons produced in top quark decays, based on a 1 fb$^{-1}$ sample of candidate $t\bar{t}$ events in the dilepton and lepton plus jets channels collected by the D0 detector at the Fermilab Tevatron $p\bar{p}$ Collider. We reconstruct the angle $\theta^*$ between the momenta of the down-type fermion and the top quark in the $W$ boson rest frame for each top quark decay. A fit of the resulting $\cos\theta^*$ distribution finds that the fraction of longitudinal $W$ bosons $f_0 = 0.425 \pm 0.166 \text{(stat.)} \pm 0.102 \text{(syst.)}$ and the fraction of right-handed $W$ bosons $f_+ = 0.119 \pm 0.090 \text{(stat.)} \pm 0.053 \text{(syst.)}$, which is consistent at the 30% C.L. with the standard model. We present a measurement of the t anti-t production cross section in p anti-p collisions at s**(1/2) = 1.96 TeV which uses events with an inclusive signature of significant missing transverse energy and jets. This is the first measurement which makes no explicit lepton identification requirements, so that sensitivity to W --> tau nu decays is maintained. Heavy flavor jets from top quark decay are identified with a secondary vertex tagging algorithm. From 311 pb-1 of data collected by the Collider Detector at Fermilab we measure a production cross section of 5.8 +/- 1.2 (stat.) +0.9/-0.7 (syst.) pb for a top quark mass of 178 GeV/c2, in agreement with previous determinations and standard model predictions. We present a measurement of the top quark pair production cross section in ppbar collisions at sqrt(s)=1.96 TeV using 318 pb^{-1} of data collected with the Collider Detector at Fermilab. We select ttbar decays into the final states e nu + jets and mu nu + jets, in which at least one b quark from the t-quark decays is identified using a secondary vertex-finding algorithm. Assuming a top quark mass of 178 GeV/c^2, we measure a cross section of 8.7 +- 0.9 (stat) +1.1/-0.9 (syst) pb. We also report the first observation of ttbar with significance greater than 5 sigma in the subsample in which both b quarks are identified, corresponding to a cross section of 10.1 +1.6/-1.4 (stat) +2.0/-1.3 (syst) pb. We present a measurement of the ttbar cross section using high-multiplicity jet events produced in ppbar collisions at sqrt{s}=1.96 TeV. These data were recorded at the Fermilab Tevatron collider with the D0 detector. Events with at least six jets, two of them identified as b jets, were selected from a 1 fb-1 data set. The measured cross section, assuming a top quark mass of 175 GeV/c^2, is 6.9 \pm 2.0 pb, in agreement with theoretical expectations.
We present a measurement of the $t\bar{t}$ production cross section using $194 \mathrm{pb^{-1}}$ of CDF II data using events with a high transverse momentum electron or muon, three or more jets, and missing transverse energy. The measurement assumes 100% $t\to Wb$ branching fraction. Events consistent with $t\bar{t}$ decay are found by identifying jets containing heavy flavor semileptonic decays to muons. The dominant backgrounds are evaluated directly from the data. Based on 20 candidate events and an expected background of 9.5$\pm$1.1 events, we measure a production cross section of $5.3\pm3.3^{+1.3}_{-1.0} \mathrm{pb}$, in agreement with the standard model. We measure the ttbar production cross section in ppbar collisions at sqrt{s}=1.96 TeV in the lepton+jets channel. Two complementary methods discriminate between signal and background, b-tagging and a kinematic likelihood discriminant. Based on 0.9 fb-1 of data collected by the D0 detector at the Fermilab Tevatron Collider, we measure sigma_ttbar = 7.62 +/- 0.85 pb, assuming the current world average m_t = 172.6 GeV. We compare our cross section measurement with theory predictions to determine a value for the top quark mass of 170 +/- 7 GeV. We present a measurement of the ttbar pair production cross section in ppbar collisions at sqrt(s) = 1.96 TeV utilizing approximately 425 pb-1 of data collected with the D0 detector. We consider decay channels containing two high pT charged leptons (either e or \mu) from leptonic decays of both top-daughter W bosons. These were gathered using four sets of selection criteria, three of which required that a pair of fully identified leptons (i.e., e\mu, ee, or \mu\mu) be found. The fourth approach imposed less restrictive criteria on one of the lepton candidates and required that at least one hadronic jet in each event be tagged as containing a b quark. For a top quark mass of 175 GeV, the measured cross section is 7.4 +/- 1.4 (stat) +/- 1.0 (syst) pb. We present a measurement of the top quark pair production cross section in ppbar collisions at sqrt(s)=1.96 TeV utilizing 425 pb-1 of data collected with the D0 detector at the Fermilab Tevatron Collider. We consider the final state of the top quark pair containing one high-pT electron or muon and at least four jets. We exploit specific kinematic features of ttbar events to extract the cross section. For a top quark mass of 175 GeV, we measure sigma_ttbar = 6.4 +1.3/-1.2 (stat) +/- 0.7 (syst) +/- 0.4 (lum) pb in good agreement with the standard model prediction. We present a measurement of the ttbar production cross section using events with one charged lepton and jets from ppbar collisions at a center-of-mass energy of 1.96 TeV. In these events, heavy flavor quarks from top quark decay are identified with a secondary vertex tagging algorithm. From 162 pb-1 of data collected by the Collider Detector at Fermilab, a total of 48 candidate events are selected, where 13.5 +- 1.8 events are expected from background contributions. We measure a ttbar production cross section of 5.6^{+1.2}_{-1.1} (stat.) ^{+0.9}_{-0.6} (syst.) pb. We present a measurement of the top pair production cross section in $p\bar{p}$ collisions at $\sqrt{s}$=1.96 TeV. We collect a data sample with an integrated luminosity of 194$\pm$11 pb$^{-1}$ with the CDF II detector at the Fermilab Tevatron.
We use an artificial neural network technique to discriminate between top pair production and background processes in a sample of 519 lepton+jets events, which have one isolated energetic charged lepton, large missing transverse energy and at least three energetic jets. We measure the top pair production cross section to be $\sigma_{t\bar{t}}= 6.6
So I was trying to prove that the characteristic of an integral domain is either $0$ or prime. I got stuck, so I searched for a proof and I came across the following proof online. Now I almost want to accept this proof, except for the following (possibly silly and pedantic) issue. In the above proof we know that $n \in \mathbb{N} \subseteq \mathbb{Z}$, so since $n$ is not prime, we factorize $n = m \cdot k$ (where $\cdot$ represents multiplication on the integers in the ring $(\mathbb{Z}, +, \cdot)$). Now in the above proof the following is asserted $$n(1_D) = \underbrace{1_D + \dots + 1_D}_{n \text{ times}} =0_D \implies m\cdot k(1_D) = \underbrace{\left(1_D + \dots + 1_D\right)}_{m \text{ times}} \ \bullet \underbrace{\left(1_D + \dots + 1_D\right)}_{k \text{ times}}$$ Now it seems that multiplication of $m$ and $k$ in $\mathbb{Z}$ is inducing (ring) multiplication of elements in $D$ (when I thought we'd only end up with addition of the $1_D$'s $mk$ times). Is there a reason why this happens?
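A hint at the answer (my gloss, using only the ring axioms): for any $m,k\in\mathbb N$, distributivity alone gives
$$(m\,1_D)\bullet(k\,1_D)=\Big(\underbrace{1_D+\dots+1_D}_{m\text{ times}}\Big)\bullet\Big(\underbrace{1_D+\dots+1_D}_{k\text{ times}}\Big)=\sum_{i=1}^{m}\sum_{j=1}^{k}1_D\bullet 1_D=(mk)\,1_D,$$
since expanding the product termwise produces exactly $mk$ copies of $1_D\bullet 1_D=1_D$. So you do end up with $1_D$ added to itself $mk$ times; the ring multiplication enters only through this expansion, and nothing about $\mathbb Z$ is acting beyond repeated addition.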
I agree with the last sentence of Bill Mitchell's comment. But here is something closer than the cardinals mentioned. In [1] the notion of "almost Ramsey" cardinal was coined. Such a cardinal $\kappa$ is required to be $\alpha$-Erdős for all $\alpha<\kappa$. Then $V_\kappa$ is a model of what you are after (though in fact this is still not exact, as there are many other $\gamma <\kappa$ for which this is true too). Almost Ramseys also get an outing in [2]. If you are interested in sharps alone, then, e.g. just for sharps for reals, $\kappa \rightarrow (\omega_1)^{<\omega}_{2}$ is already overkill: one really just needs, for any function $f$, homogeneous sets of arbitrarily large but countable length, all of which have the same "type". This has been investigated closely in [3]. Similar considerations would hold for sharps of other sets of ordinals. The least inner model in which every set has a sharp, $L^\#$ say, is too thin to contain any Erdős cardinals (other than trivially $\kappa(\delta)$ for $\delta< \omega_1^{L^\#}$).

[1] J. Vickers & P. Welch, "Elementary Embeddings of an inner model into the Universe", JSL, vol 66, 2001.

[2] A. Apter & P. Koepke, "Making All cardinals almost Ramsey", Archive for Math. Logic, vol 47, 2008.

[3] J. Baumgartner & F. Galvin, "Generalized Erdős cardinals and $0^\#$", Ann. of Math. Logic, vol 15, 1978.
Test for Divergence and Other Theorems Test for divergence Because the definition of a convergent series is a limit, we have the following theorems about convergent series $\displaystyle\sum_{n=1}^\infty a_n = L$ and $\displaystyle\sum_{n=1}^\infty b_n = M$. Warning: If either series is divergent, we do not have these facts. $\displaystyle\sum_{n=1}^\infty \left(a_n+b_n\right) = L+M$, $\displaystyle\sum_{n=1}^\infty \left(a_n-b_n\right) = L-M$, and $\displaystyle\sum_{n=1}^\infty \left(c\, a_n\right) = cL$. There is no similar information for $\displaystyle\sum_{n=1}^\infty\left( a_nb_n\right)$ or $\displaystyle\sum_{n=1}^\infty \left(\frac{a_n}{b_n}\right)$. This video justifies these theorems.
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs

Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class

I know 2 years ago it apparently mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra." Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like "uhhh." Then apparently this year was more of an experiment, in part from requests to make things more geometric

It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice

The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly

And then there's one weird area called the math bunker that's trickier to access; you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)

It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad

I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus

One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of

@TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students

In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework

There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice

Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure

Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book before commenting

typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome
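As a quick numerical illustration of the partial sums $\sum_{k=1}^M 1/b^{k!}$ discussed above (my own sketch, with $b=2$; note that double precision only resolves the first few terms):

b <- 2
M <- 1:6
terms <- 1 / b^factorial(M)   # 1/2, 1/4, 1/64, 1/2^24, ...
partial <- cumsum(terms)      # monotonically increasing, visibly converging
print(partial, digits = 16)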
First, to reach the ocean, is it more practical to melt down through the ice, or drill it? The Russians drilled a hole 4 km down to Lake Vostok. They used a drill. The technology they are using is perfectly capable of drilling a similar borehole on Europa.

Would it be practical to build habitats mounted on the underside of the ice, in the water, or would drilling rooms within the ice be easier? In this question I calculated the water pressure at the base of a 20 km thick layer of ice at 237 atm. Since hydrostatic pressure scales linearly with depth, at 10 km thick, pressure would still be 118 atm, equivalent to 1250 m in our ocean. Modern submarines are rated to a pressure of about 500 m. Assuming that transportation cost of materials is a significant factor (i.e. moving all that structural alloy from another moon/asteroid, then down a 10–20 km shaft), it is probably not worth making a large permanent living structure that deep. Humans need a lot of air space at 1 atm to live comfortably, and that is very expensive to make available. Also in that other post, I calculated radiation exposure. 10 m of ice is really all you need, so there isn't a lot of value in going too deep.

If melting/drilling a tunnel down to the ocean, would it be possible to passively leave this borehole open, or would the pressures cause the ice to flow together again? Ice has a viscosity. From this textbook, the viscosity is about $2\times10^{13}\ \text{Pa}\cdot\text{s}$. Loosely, this means that a stress of $2\times10^{13}\ \text{Pa}$ would make the ice flow inward at about 1 m/s. 118 atm is about $1.2\times10^{7}\ \text{Pa}$, so the imparted speed of closure will be about $\frac{1.2\times10^{7}}{2\times10^{13}}= 6\times10^{-7}\ \text{m/s}$, which is about 5 cm per day. The pressure of the ice on any structure made to hold the tunnel open would be about 12 MPa. That pressure isn't excessive, but since ice is viscous you can't just put some support struts in there. The ice will ooze around them at 5 cm a day. To put in a cylinder to maintain the size of the hole you need it to be... well, at least 10 km long. Too expensive.

The Russians drilling in Lake Vostok have similar problems. Their hole is only 4 km, but since gravity on Earth is higher, pressure is higher too, up to 350 atm at the bottom of the ice. They don't use a structure to maintain the hole; they simply melt all the ice that seeps in with a mixture of kerosene, freon, and antifreeze, and then pump it out. This solution is a bit harder on Europa. Lake Vostok itself is about −3C, while the surface temps can be as low as −89C (coldest place on Earth, incidentally). However, they don't drill in the winter, so −20 to −50C is more like what the drill team sees at the surface. The temperature of Europa's surface is −160C, but the liquid ocean would be warmer than Vostok, based on the phase diagram of water and an anticipated pressure of 12 MPa.

What kind of engineering would be required to keep the borehole open, assuming my colonists want an elevator to the surface? After drilling is complete, the hole is kept open by pumping an anti-freeze solution around its edges to melt the encroaching ice. The anti-freeze interacts with the ice, lowering its freezing point below whatever temperature the ice is at. The rate of ice encroachment is relatively small, but since the shaft is big, the amount of ice to be removed is large.
Assuming the full 5 cm per day of ice encroachment along the entire 10 km length of the shaft, and with a 4 m radius borehole, about 12,600 m$^3$ of ice must be removed every day, or roughly 12,000 tonnes of it. Fortunately, by melting the ice with anti-freeze and letting it flow to the bottom of the hole by force of gravity, the required flow rate of 0.14 $\frac{\text{m}^3}{\text{s}}$ is not unrealistic. That would take 4 standard 3" firehoses, and is about 50% more than you can get out of a single fire hydrant.

The biggest engineering challenge is removing the ice that is collapsing in the top half of the ice sheet, where the temperatures are closer to −160C than zero. No anti-freeze is going to work at those temperatures; car anti-freeze freezes at −40C, and alcohol at −110C. The anti-freeze itself will freeze. Some sort of heating system will be needed. It would be much more effective, once the borehole is dug, to maintain it from the bottom, since that is the warmer side of the ice, and since gravity will pull melted ice down into the warmer regions without need for pumping; you only have to pump antifreeze and not melted ice too. So you basically have to pump your heated anti-freeze solution from the base of the ice sheet to the top, and recover it at the bottom, separating the antifreeze out for reuse, and presumably dumping the water/ice into the ocean.

Pumping up is a big issue, due to shutoff head limits for centrifugal pumps. I deleted the math as extraneous since this post is already forever long, but, suffice to say, a centrifugal pump, which is good at high-volume pumping, will not get the pressure you need. However, any good pressure washer can get the pressure you need (3000 psi = 204 atm), and they do this with positive displacement pumps. So you will need some enormous positive displacement pumps; the flow rate has to be relatively high or your heated antifreeze will cool and freeze before it reaches the surface. Not an impossible engineering challenge, since I have seen them. If you want 200 gpm from 3000 psi reciprocating positive displacement pumps, you will need about 400 kW of electrical power, based on the pumps I've seen.

So that brings us to generating both a.) enough heat to unfreeze a 10 km hole, b.) enough power to run a 400 kW electrical load forever (for reference this is what a 100 kW diesel generator looks like), and c.) doing it without a ton of fuel. The solution with today's technology is a nuclear reactor. Fortunately, they already have them in submarines, so it's not too much to ask to bulk up the pressure hull to handle higher pressure, and install one at the bottom of the ice to keep the hole open. Though, keep in mind, you can't assemble it 10 km under the ice, so the hole has to be big enough to get the thing down there in the first place. Also, you have to replace it every 10–15 years once it runs out of fuel.

In conclusion

Most people would permanently live in habitats dug a few meters into the ice. This would give plenty of room for expansion by digging more warrens into the ice, without having to go into high pressure areas, and also keeping the colonists close to the outside world. The hole would have to be significant. A submarine-style pressure hull with reactor would have to be inserted into the hole. However, since the smallest nuclear submarine had a pressure hull about 4 m in diameter, the size doesn't have to be unreasonably large. Maybe a 7 m hole and a 6 m pressure hull with nuclear reactor and ice-melting equipment.
This could be operated remotely; it's not a threat to human life if the hole closes, as long as there are no people below it. The worst you have to do is re-drill the hole. In fact, I don't anticipate humans going down the hole at all; too dangerous. Just some construction-bots to install your ice-melter and some submarine-bots to explore. Maybe an Alvin for exploration, but you'd never want to try to dock and transfer people to the ice-melting hull at 12 MPa.
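A compact cross-check of the numbers above (my own sketch; the unit conversions, the 917 kg/m^3 ice density, and the ~65% pump efficiency are assumptions I've added, everything else comes from the answer):

# Borehole ice budget and pump power, reproducing the arithmetic above
r <- 4                      # borehole radius, m
L <- 1e4                    # shaft length, m (10 km)
creep <- 0.05               # ice encroachment, m/day (the 5 cm/day estimate)
V_day <- 2 * pi * r * L * creep          # ~12,600 m^3 of ice per day
Q_melt <- V_day / 86400                  # ~0.15 m^3/s of meltwater to move
mass_t <- V_day * 917 / 1000             # ~11,600 tonnes/day of ice

psi_to_pa  <- 6894.76                    # assumed conversion factors
gpm_to_m3s <- 6.309e-5
P_pump <- 3000 * psi_to_pa               # ~2.1e7 Pa, i.e. ~204 atm
Q_pump <- 200 * gpm_to_m3s               # ~0.0126 m^3/s per pump
P_hydraulic <- P_pump * Q_pump           # ~260 kW of hydraulic power
P_electric  <- P_hydraulic / 0.65        # ~400 kW electrical at ~65% efficiency
c(V_day = V_day, Q_melt = Q_melt, P_electric_kW = P_electric / 1000)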
Overview of Integration Methods in Space and Time

Integration is one of the most important mathematical tools, especially for numerical simulations. Partial Differential Equations (PDEs) are usually derived from integral balance equations, for example. Once a PDE needs to be solved numerically, integration most often plays an important role, too. This blog post gives an overview of the integration methods available in the COMSOL software and shows you how you can use them.

The Importance of Integrals

COMSOL uses the finite element method, which transforms the governing PDE into an integral equation — the weak form, in other words. Having a closer look at the COMSOL simulation software, you may realize that many boundary conditions are formulated in terms of integrals. A couple of examples of these are Total heat flux or Floating potential. Integration also plays a key role in postprocessing, as COMSOL provides many derived values based on integration, like electric energy, flow rate, or total heat flux. Of course, our users can also use integration in COMSOL for their own means, and here you will learn how.

Integration by Means of Derived Values

A general integral has the form

\int_{t_0}^{t_1}\int_{\Omega} F(u) \,\mathrm{d}\Omega \,\mathrm{d}t

where [t_0,t_1] is a time interval, \Omega is a spatial domain, and F(u) is an arbitrary expression in the dependent variable u. The expression can include derivatives with respect to space and time or any other derived value. The most convenient way to obtain integrals is to use the Derived Values in the Results section of the ribbon (or the Model Builder if you're not running Windows®).

How to add volume, surface, or line integrals as Derived Values.

You can refer to any available solution by choosing the corresponding data set. The Expression field is the integrand and allows for dependent or derived variables. For transient simulations, the spatial integral is evaluated at each time step. Alternatively, the settings window offers Data Series Operations, where Integration can be selected for the time domain. This results in space-time integration.

Example of Surface Integration settings with additional time integration via the Data Series Operation.

The Average is another Derived Value related to integration. It equals an integral divided by the volume, area, or length of the considered domain. The Average Data Series Operation additionally divides by the time horizon. Derived Values are very useful, but because they are only available for postprocessing, they cannot handle every type of integration. That is why COMSOL provides more powerful and flexible integration tools. We demonstrate these methods with an example model below.

Spatial and Temporal Integration for a Heat Transfer Example Model

We introduce a simple heat transfer model: a 2D aluminum unit square in the (x, y)-plane. The upper and right sides are fixed at room temperature (293.15 K), and on the left and lower boundary a General inward heat flux of 5000 W/m^2 is prescribed. A stationary solution and a time-dependent solution after 100 seconds are shown in the following figures.

Spatial Integration by Means of Component Coupling Operators

Component Coupling Operators are, for example, needed when several integrals are combined in one expression, when integrals are requested during calculation, or in cases where a set of path integrals is required. Component Coupling Operators are defined in the Definitions section of the respective component. At that stage, the operator is not evaluated yet.
Only its name and domain selection are fixed.

How to add Component Coupling Operators for later use.

For our example, we first want to calculate the spatial integral over the stationary temperature, which is given by

\int_{\Omega} T \,\mathrm{d}\Omega.

In the COMSOL software, we use an Integration operator, which is named intop1 by default.

Settings window of the Integration operator. How to evaluate the Integration operator.

In the next step, we demonstrate how an Integration operator can also be used within the model. We could, for example, ask what heating power we need to apply to obtain an average temperature of 303.15 K, which equals an average temperature increase of 10 K compared to room temperature. First, we need to compute the difference between the desired and the actual average temperature. The average is calculated by the integral over T, divided by the integral over the constant function 1, which gives the area of the domain. Fortunately, this type of calculation can easily be done with an Average operator in COMSOL. By default, such an operator is named aveop1. (Note that the average over the domain is the same as the integral for our example. That is because the domain has unit area.) The corresponding difference is given by 303.15[K]-aveop1(T). Next, we need to find the General inward heat flux on the left and lower boundary, so that the desired average temperature is achieved. To this end, we introduce an additional degree of freedom named q_hot and an additional constraint as a global equation. The General inward heat flux is replaced by q_hot.

How to add an additional degree of freedom and a global equation, which forces the average temperature to 303.15 K.

Solving this coupled system with a stationary study results in q_{hot}=5881.30 W/m^2. This value has to be prescribed as a General inward heat flux boundary condition to achieve an average temperature of 303.15 K in the whole domain.

Computing the Antiderivative by Means of Integration Coupling

A frequently asked question we receive in Support is: How can one obtain the spatial antiderivative? The following application of integration coupling answers this question. The antiderivative is the counterpart of the derivative, and geometrically, it enables the calculation of arbitrary areas bounded by function graphs. One important application is the calculation of probabilities in statistical analyses. To demonstrate this, we fix y=0 in our example and denote the antiderivative of T(x,0) by u(x). This means that \frac{\partial u}{\partial x}=T(x,0). A representation of the antiderivative is the integral

u(\bar x) = \int_0^{\bar x} T(x,0) \,\mathrm{d}x,

where we use \bar x in order to distinguish the integration and the output variable. In contrast to the integrals above, we here have a function as a result, rather than a scalar quantity. We need to include the information that for each \bar x\in[0,1] the corresponding value of u(\bar x) requires an integral to be solved. Fortunately, this is easy to set up in the COMSOL environment and requires only three ingredients, so to speak. First, a logical expression can be used to reformulate the integral as

u(\bar x) = \int_0^1 T(x,0)\cdot(x\leq\bar x) \,\mathrm{d}x.

Second, we need an integration operator that acts on the lower boundary of our example domain. Let's denote it by intop2. Third, we need to include the distinction of integration and output variable. The notation for this situation is source and destination for x and \bar x, respectively. When using an integration coupling operator, the built-in operator dest is available, which indicates that the corresponding expression does not belong to the integration variable.
More precisely, it means \bar x=dest(x) in COMSOL. Putting the logical expression and the dest operator together results in the expression T*(x<=dest(x)), which is exactly the input expression that we need for intop2. Altogether, we can calculate the antiderivative by intop2(T*(x<=dest(x))), resulting in the following plot in our example:

How to plot the antiderivative by Integration coupling, the dest operator, and a logical expression.

COMSOL provides two other integration coupling operators, namely General projection and Linear projection. These can be used to obtain a set of path integrals in any direction of the domain. In other words, integration is performed only with respect to one dimension. The result is a function of one dimension less than the domain. For a 2D example the result is a 1D function, which can be evaluated on any boundary. Some more details on how to use these operators are subject to a forthcoming blog post on component couplings.

Spatial Integration by Means of an Additional Physics Interface

The most flexible way of spatial integration is to add an additional PDE interface. Let's recall the example of the antiderivative and assume that we want to calculate the antiderivative not only for y=0. The task can be formulated in terms of the PDE

\frac{\partial u}{\partial x} = T,

with Dirichlet boundary condition u=0 on the left boundary. The easiest interface to implement this equation is the Coefficient Form PDE interface, which only needs the following few settings:

How to use an additional physics interface for spatial integration.

The dependent variable u represents the antiderivative with respect to x and is available during calculation and postprocessing. Besides flexibility, a further advantage of this method is accuracy, because the integral is not obtained as a derived value, but is part of the calculation and internal error estimation.

Temporal Integration by Means of Built-In Operators

We have already mentioned the Data Series Operations, which can be used for time integration. Another very useful method for time integration is provided by the built-in operators timeint and timeavg, for time integration and time average, respectively. They are readily available in postprocessing and are used to integrate any time-dependent expression over a specified time interval. In our example we may be interested in the temperature average between 90 seconds and 100 seconds, i.e.

\frac{1}{10\,\mathrm{s}}\int_{90\,\mathrm{s}}^{100\,\mathrm{s}} T \,\mathrm{d}t.

The following surface plot shows the resulting integral, which is a spatial function in (x,y):

How to use the built-in time integration operator timeavg.

Similar operators are available for integration on spherical objects, namely ballint, circint, diskint, and sphint.

Temporal Integration by Means of Additional Physics Interfaces

If temporal integrals have to be available in the model, you need to define them as additional dependent variables. Similar to the Coefficient Form PDE example shown above, this can be done by adding an ODE interface from the Mathematics branch. Suppose, for example, that at each time step, the model requests the time integral from start until now over the total heat flux magnitude, which measures the accumulated energy. The variable for the total heat flux is automatically calculated by COMSOL and is named ht.tfluxMag. The integral can be calculated as an additional dependent variable with a Distributed ODE, which is a subnode of the Domain ODEs and DAEs interface. The source term of this domain ODE is the integrand, as shown in the following figure.
How to use an additional physics interface for temporal integration.

What is the benefit of such a calculation? The integral can be reused in another physics interface, which may be influenced by the accumulated energy in the system. Moreover, it is now available for all kinds of postprocessing, which is more convenient and faster than built-in operators. For an example, check out the Carbon Deposition in Heterogeneous Catalysis model, where a domain ODE is used to calculate the porosity of a catalyst as a time-dependent field variable in the presence of chemical reactions.

Integration of Analytic Functions and Expressions

So far, we have shown how to integrate solution variables during calculation or in postprocessing. We have not yet covered integrals of analytic functions or expressions. To this end, COMSOL provides the built-in operator integrate(expression, integration variable, lower bound, upper bound). The expression might be any 1D function, such as sin(x). It is also possible to include additional variables, such as sin(x*y). The second argument specifies over which variable the integral is calculated. For example, integrate(sin(x*y),y,0,1) yields a function in x, because integration only eliminates the integration variable y. Note that the operator can also handle analytic functions, which need to be defined in the Definitions node of the current component.

How to add an analytic function. How to integrate over an analytic function.

Further Reading

Model downloads: Using Integration coupling operators: Acoustics of a Muffler; Using global equations for time integration: Process Control Using a PID Controller; Using global equations to satisfy constraints: Using Global Equations to Satisfy Constraints; Using domain ODEs for time integration: Capacity Fade of a Li-Ion Battery and Carbon Deposition in Heterogeneous Catalysis. Knowledge Base entry: Computing Time and Space Integrals
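An aside of my own, not from the COMSOL post: base R happens to ship a 1D quadrature routine also called integrate(), which makes a handy desk-check of the sin(x*y) example above.

# Fixing x and integrating over y in [0,1] reproduces
# integrate(sin(x*y),y,0,1) as a function of x.
f_of_x <- function(x)
  sapply(x, function(xi) integrate(function(y) sin(xi * y), 0, 1)$value)
f_of_x(c(0.5, 1, 2))
# analytic check: for x != 0 the integral is (1 - cos(x))/x
(1 - cos(c(0.5, 1, 2))) / c(0.5, 1, 2)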
Suppose we have a sequence of random variables $X_1, X_2,...$ taking values in $\left\{ 0,1\right\}$ with $\lim _{n\rightarrow \infty} \frac{\sum_{i=1}^n X_i}{n} = 0.5$; i.e., in the long run this is a fair coin toss. Design a mechanism so that for some $n$ and $k$, the distribution of $\sum_{i=n}^{n+k-1} X_i$ has a smaller variance. It is very clear that if the variables are independent then the distribution is always fixed. Therefore we focus on dependent variables; in particular we consider creating a Markov chain, where $X_{n+k}$ depends on $\frac{\sum_{i=n}^{n+k-1} X_i}{k}$. We call this dependence function $f: [0,1]\rightarrow [0,1]$, which is rotationally symmetric around $(0.5,0.5)$. In order to investigate the effectiveness of our mechanism we have to go deep into the calculations, because mean calculations won't work (we need variance!).

Consider $k=2$ with $f(x) = 0.9 - 0.8x$ and the four states $00,01,10,11$. Then the transition matrix is given by: $P = \begin{bmatrix} 0.1&0&0.5&0\\0.9&0&0.5&0\\0&0.5&0&0.9\\0&0.5&0&0.1\end{bmatrix}$ where the transition matrix assuming independence equals $P = \begin{bmatrix} 0.5&0&0.5&0\\0.5&0&0.5&0\\0&0.5&0&0.5\\0&0.5&0&0.5\end{bmatrix}$ Since $P$ represents a regular Markov chain, it has eigenvalue 1 with steady state vector $\frac{5}{28}(1,1.8,1.8,1)^T$. Compared with $(0.25,0.25,0.25,0.25)^T$, this one has a higher tendency toward the centre.

What about higher $k$? Consider the case $k=3$ with 8 states $000,001,010,011,100,101,110,111$, where $P =\begin{bmatrix} \frac{1}{10}&0&0&0&\frac{11}{30}&0&0&0\\ \frac{9}{10}&0&0&0&\frac{19}{30}&0&0&0\\ 0&\frac{11}{30}&0&0&0&\frac{19}{30}&0&0\\ 0&\frac{19}{30}&0&0&0&\frac{11}{30}&0&0\\ 0&0&\frac{11}{30}&0&0&0&\frac{19}{30}&0\\ 0&0&\frac{19}{30}&0&0&0&\frac{11}{30}&0\\ 0&0&0&\frac{19}{30}&0&0&0&\frac{9}{10}\\ 0&0&0&\frac{11}{30}&0&0&0&\frac{1}{10}\\\end{bmatrix}$ with steady state vector, according to Mathematica, $c(0.161905, 0.397403, 0.397403, 0.397403, 0.397403, 0.397403, 0.397403, 0.161905)^T$ (where $c$ is the constant making the probabilities sum to 1). Surely there is a higher tendency to stay at the centre, but it looks less centre-located than $k=2$.

Of course it's very hard to determine the variance from the Markov chain's steady state vector, as every state represents a series of random variables, but we can plot the distribution as follows. Here we sample 10000 times from $\sum_{i=1}^{100} X_i$. Here we see the black line as the standard binomial distribution, red as $k=2$, green as $k = 8$, blue as $k=25$; mathematically the curve tends to the standard binomial distribution as $k \rightarrow \infty$.

If we take a function that is closer to $f(x) = 0.5$ (take the distance in an inner product space, if you like) the result will, of course, be closer to the original distribution. For the above graph, red refers to $y = 0.9-0.8x$, green is $y = 0.8-0.6x$, while blue is $y = 0.7-0.4x$, and the difference is obvious. Convexity also affects the ability to 'force' the RV back to the mean, using the same idea (distance between functions). The variance is naturally smaller if $f(x)$ is larger approaching $0.5^-$ and smaller leaving $0.5^+$.

With some Fourier analysis we can 'force' the random variable sequence into a perfectly alternating 0 1 0 1 0 1 0 1!!! The red line refers to $f(x) = 0.5 + 0.4((1-x)^5-x^5)$, the green line is $f(x) = 0.9-0.8x$ and the blue line is $f(x) = 0.5+0.4\cos (\pi x)$. Let's demonstrate the Fourier series correlation here.
For the function $f: [0,1]\rightarrow [0,1]$ where $f(x) = 1$ for $x\in [0,0.5]$ and $f(x) = 0$ for $x\in (0.5,1]$, we have $$f(x) = \frac{1}{2} + \sum_{n=1}^{\infty}\frac{2\sin ((4n-2)\pi x)}{(2n-1)\pi}.$$ Truncating the values that exceed 1 or fall below 0, we have the following distribution by taking $1,2,3$ terms, with $k=15$: We put a large graph here because it takes great effort to separate the three lines. Zooming in gives We see (once more) that the Fourier series converges very quickly despite the occurrence of Gibbs' phenomenon here.

There's an interesting question: why does the distribution converge to something non-trivial instead of going to the impulse $\delta (\frac{n}{2})$, where the sequence is perfectly 1 0 1 0 ......? I'll leave this question to the reader as the solution is quite simple.

I don't intend to 'teach' something here, but creating a sequence of dependent random variables is a very practical topic, and the variance can play an important role. For instance, in some round-based RPG battle you certainly don't want your game to be dominated by randomness, so it makes sense to restrict the randomness so that it's about the same for both sides over most if not all short periods. Faster convergence back to the mean is the lever we use to tweak the variance. Food for thought:

1) Why does the distribution using the Fourier series as $f(x)$ not converge to an impulse, where the sequence of distributions has no variance? You may want to change the following variables to see what happens: (a) $n$, the length of the simulated sequence; (b) $k$, the length of the correlation; (c) the number of terms in the Fourier series, or the truncation; (d) the numerical precision.

2) Is it possible to calculate the variance directly from the Markov chain steady state probabilities?

3) By inverting the idea we can't create a sequence of RVs of larger variance directly; for instance, $f(x) = 0.5-0.4\cos (\pi x)$ doesn't give a sequence of larger variance. Think about whether it is possible, then see if there are practical applications for it.

4) For the Markov chain case $k=3$, why are the steady state probabilities for all 6 states except $000,111$ equal? (That looks 'unusual'.)

R code, for $f(x)$:

f = function(n, k, g) {
  # seed the sequence with k fair coin flips
  s = NULL
  for (i in 1:k) {
    s = c(s, rbinom(1, 1, 0.5))
  }
  # each further flip is Bernoulli(g(mean of the previous k flips))
  for (i in (k + 1):n) {
    prev_mean = mean(s[(i - k):(i - 1)])
    s = c(s, rbinom(1, 1, g(prev_mean)))
  }
  return(sum(s))
}
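A usage sketch of my own (not part of the original post) reproducing the red $k=2$ curve and its reduced variance:

g = function(x) 0.9 - 0.8 * x          # the dependence function f
dep = replicate(10000, f(100, 2, g))   # 10000 samples of a sum of 100 flips
ind = rbinom(10000, 100, 0.5)          # independent benchmark
c(var(dep), var(ind))                  # the dependent sums vary noticeably less
hist(dep, breaks = 30)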
J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02). Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12). The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10). Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-06). The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09). Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01). In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...

Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01). The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...

Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03). A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...

Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26). Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...

Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05). We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
ECE1254H Modeling of Multiphysics Systems. Lecture 11: Nonlinear equations. Taught by Prof. Piero Triverio

Disclaimer

Peeter's lecture notes from class. These may be incoherent and rough.

Solution of N nonlinear equations in N unknowns

We'd now like to move from solutions of nonlinear functions in one variable: \begin{equation}\label{eqn:multiphysicsL11:200} f(x^\conj) = 0, \end{equation} to multivariable systems of the form \begin{equation}\label{eqn:multiphysicsL11:20} \begin{aligned} f_1(x_1, x_2, \cdots, x_N) &= 0 \\ \vdots & \\ f_N(x_1, x_2, \cdots, x_N) &= 0 \\ \end{aligned}, \end{equation} where our unknowns are \begin{equation}\label{eqn:multiphysicsL11:40} \Bx = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \\ \end{bmatrix}. \end{equation} Form the vector \( F \) \begin{equation}\label{eqn:multiphysicsL11:60} F(\Bx) = \begin{bmatrix} f_1(x_1, x_2, \cdots, x_N) \\ \vdots \\ f_N(x_1, x_2, \cdots, x_N) \\ \end{bmatrix}, \end{equation} so that the equation to solve is \begin{equation}\label{eqn:multiphysicsL11:80} \boxed{ F(\Bx) = 0. } \end{equation} The Taylor expansion of \( F \) around point \( \Bx_0 \) is \begin{equation}\label{eqn:multiphysicsL11:100} F(\Bx) = F(\Bx_0) + \underbrace{ J_F(\Bx_0) }_{Jacobian} \lr{ \Bx - \Bx_0}, \end{equation} where the Jacobian is \begin{equation}\label{eqn:multiphysicsL11:120} J_F(\Bx_0) = \begin{bmatrix} \PD{x_1}{f_1} & \cdots & \PD{x_N}{f_1} \\ & \ddots & \\ \PD{x_1}{f_N} & \cdots & \PD{x_N}{f_N} \end{bmatrix} \end{equation}

Multivariable Newton's iteration

Given \( \Bx^k \), expand \( F(\Bx) \) around \( \Bx^k \) \begin{equation}\label{eqn:multiphysicsL11:140} F(\Bx) \approx F(\Bx^k) + J_F(\Bx^k) \lr{ \Bx - \Bx^k } \end{equation} With the approximation \begin{equation}\label{eqn:multiphysicsL11:160} 0 = F(\Bx^k) + J_F(\Bx^k) \lr{ \Bx^{k + 1} - \Bx^k }, \end{equation} then multiplying by the inverse Jacobian, and rearranging, we have \begin{equation}\label{eqn:multiphysicsL11:220} \boxed{ \Bx^{k+1} = \Bx^k - J_F^{-1}(\Bx^k) F(\Bx^k). } \end{equation} Our algorithm is

Guess \( \Bx^0, k = 0 \).
REPEAT
Compute \( F \) and \( J_F \) at \( \Bx^k \)
Solve linear system \( J_F(\Bx^k) \Delta \Bx^k = - F(\Bx^k) \)
\( \Bx^{k+1} = \Bx^k + \Delta \Bx^k \)
\( k = k + 1 \)
UNTIL converged

As with one variable, we declare convergence once all of the convergence conditions are satisfied \begin{equation}\label{eqn:multiphysicsL11:240} \begin{aligned} \Norm{ \Delta \Bx^k } &< \epsilon_1 \\ \Norm{ F(\Bx^{k+1}) } &< \epsilon_2 \\ \frac{\Norm{ \Delta \Bx^k }}{\Norm{\Bx^{k+1}}} &< \epsilon_3 \\ \end{aligned} \end{equation} Typical termination is some multiple of eps, where eps is the machine precision. This may be something like: \begin{equation}\label{eqn:multiphysicsL11:260} 4 \times N \times \text{eps}, \end{equation} where \( N \) is the "size of the problem". Sometimes we may be able to find meaningful values for the problem. For example, for a voltage problem, we may not be interested in precisions greater than a millivolt.

Automatic assembly of equations for nonlinear systems

Nonlinear circuits

We will start off considering a non-linear resistor, designated within a circuit as sketched in fig. 2. Example: diode, with \( i = g(v) \), such as \begin{equation}\label{eqn:multiphysicsL11:280} i = I_0 \lr{ e^{v/{\eta V_T}} - 1 }. \end{equation} Consider the example circuit of fig. 3.
KCLs at each of the nodes are

\( I_A + I_B + I_D - I_s = 0 \)
\( - I_B + I_C - I_D = 0 \)

Introducing the constitutive equations, this is

\( g_A(V_1) + g_B(V_1 - V_2) + g_D (V_1 - V_2) - I_s = 0 \)
\( - g_B(V_1 - V_2) + g_C(V_2) - g_D (V_1 - V_2) = 0 \)

In matrix form this is \begin{equation}\label{eqn:multiphysicsL11:300} \begin{bmatrix} g_D & -g_D \\ -g_D & g_D \end{bmatrix} \begin{bmatrix} V_1 \\ V_2 \end{bmatrix} + \begin{bmatrix} g_A(V_1) + g_B(V_1 - V_2) - I_s \\ - g_B(V_1 - V_2) + g_C(V_2) \end{bmatrix} = 0 . \end{equation} We can write the entire system as \begin{equation}\label{eqn:multiphysicsL11:320} \boxed{ F(\Bx) = G \Bx + F'(\Bx) = 0. } \end{equation} The first term, a product with a nodal matrix \( G \), represents the linear subnetwork and is filled with the stamps we are already familiar with. The second term encodes the relationships of the nonlinear subnetwork. This non-linear component has been marked with a prime to distinguish it from the complete network function that includes both linear and non-linear elements.

Observe the similarity with the stamp analysis that we did previously. With \( g_A() \) connected on one end to ground we have it only once in the resulting vector, whereas the nonlinear elements connected to two non-zero nodes in the network occur once with each sign.

Stamp for nonlinear resistor

For the non-linear circuit element of fig. 4, the stamp is

Stamp for Jacobian

\begin{equation}\label{eqn:multiphysicsL11:360} J_F(\Bx^k) = G + J_{F'}(\Bx^k). \end{equation} Here the stamp for the Jacobian, an \( N \times N \) matrix, is
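The stamp tables themselves were figures and did not survive in these notes, but the Newton iteration described above is easy to sketch in code. A minimal R version of my own (the test system is made up for illustration, and only the first two convergence checks are implemented):

# Multivariable Newton: solve J_F(x^k) dx = -F(x^k), then x^{k+1} = x^k + dx
newton <- function(Fn, Jn, x0, tol = 1e-12, maxit = 50) {
  x <- x0
  for (k in 1:maxit) {
    dx <- solve(Jn(x), -Fn(x))   # linear solve, not an explicit inverse
    x <- x + dx
    if (max(abs(dx)) < tol && max(abs(Fn(x))) < tol) break
  }
  x
}

# made-up test system: x1^2 + x2^2 = 4 and x1 = exp(x2)
Fn <- function(x) c(x[1]^2 + x[2]^2 - 4, x[1] - exp(x[2]))
Jn <- function(x) matrix(c(2 * x[1], 1,           # column 1: d/dx1
                           2 * x[2], -exp(x[2])), # column 2: d/dx2
                         nrow = 2)
newton(Fn, Jn, c(1, 1))          # converges to roughly (1.895, 0.639)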
Dedekind's Psi-function $\Psi(n)= n \prod_{p |n}(1 + \frac{1}{p})$ pops up in a number of topics:

- $\Psi(n)$ is the index of the congruence subgroup $\Gamma_0(n)$ in the modular group $\Gamma=PSL_2(\mathbb{Z})$,
- $\Psi(n)$ is the number of points in the projective line $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$,
- $\Psi(n)$ is the number of classes of $2$-dimensional lattices $L_{M \frac{g}{h}}$ at hyperdistance $n$ in Conway's big picture from the standard lattice $L_1$,
- $\Psi(n)$ is the number of admissible maximal commuting sets of operators in the Pauli group of a single qudit.

The first and third interpretation have obvious connections with Monstrous Moonshine. Conway's big picture originated from the desire to better understand the Moonshine groups, and Ogg's Jack Daniels problem asks for a conceptual interpretation of the fact that the prime numbers such that $\Gamma_0(p)^+$ is a genus zero group are exactly the prime divisors of the order of the Monster simple group. Here's a nice talk by Ken Ono: Can't you just feel the Moonshine?

For this reason it might be worthwhile to make the connection between these two concepts and the number of points of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$ as explicit as possible. Surely all of this is classical, but it is nicely summarised in the paper by Tatitscheff, He and McKay "Cusps, congruence groups and monstrous dessins". The 'monstrous dessins' from their title refers to the fact that the lattices $L_{M \frac{g}{h}}$ at hyperdistance $n$ from $L_1$ are permuted by the action of the modular group and so determine a Grothendieck dessin d'enfant. In this paper they describe the dessins corresponding to the $15$ genus zero congruence subgroups $\Gamma_0(n)$, that is, when $n=1,2,3,4,5,6,7,8,9,10,12,13,16,18$ or $25$. Here's the 'monstrous dessin' for $\Gamma_0(6)$

But one can compute these dessins for arbitrary $n$, describing the ripples in Conway's big picture, and try to figure out whether they are consistent with the Riemann hypothesis. We will get there eventually, but let's start at an easy pace and try to describe the points of the projective line $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$.

Over a field $k$ the points of $\mathbb{P}^1(k)$ correspond to the lines through the origin in the affine plane $\mathbb{A}^2(k)$ and they can be represented by projective coordinates $[a:b]$, which are equivalence classes of couples $(a,b) \in k^2- \{ (0,0) \}$ under scalar multiplication with non-zero elements in $k$, so with points $[a:1]$ for all $a \in k$ together with the point at infinity $[1:0]$. When $n=p$ is a prime number we have $\# \mathbb{P}^1(\mathbb{Z}/p\mathbb{Z}) = p+1$. Here are the $8$ lines through the origin in $\mathbb{A}^2(\mathbb{Z}/7\mathbb{Z})$

Over an arbitrary (commutative) ring $R$ the points of $\mathbb{P}^1(R)$ again represent equivalence classes, this time of pairs \[ (a,b) \in R^2~:~aR+bR=R \] with respect to scalar multiplication by units in $R$, that is \[ (a,b) \sim (c,d)~\quad~\text{iff}~\qquad \exists \lambda \in R^*~:~a=\lambda c, b = \lambda d \] For $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$ we have to find all pairs of integers $(a,b) \in \mathbb{Z}^2$ with $0 \leq a,b < n$ and $gcd(a,b)=1$, and use Cremona's trick to test for equivalence: \[ (a,b) = (c,d) \in \mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})~\quad \text{iff}~\quad ad-bc \equiv 0~mod~n \] The problem is to find a canonical representative in each class in an efficient way, because this is used a huge number of times in working with modular symbols.
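Before looking at better algorithms, here is a naive brute-force count in R using exactly this Cremona test (my own sketch; fine for small $n$, hopeless for serious modular-symbols work):

gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)

count_P1 <- function(n) {
  reps <- matrix(0, nrow = 0, ncol = 2)   # class representatives found so far
  for (a in 0:(n - 1)) for (b in 0:(n - 1)) {
    if (gcd(a, b) != 1) next              # only coprime pairs (a,b)
    is_new <- TRUE
    if (nrow(reps) > 0) for (i in 1:nrow(reps)) {
      # Cremona's trick: same class iff a*d - b*c = 0 mod n
      if ((a * reps[i, 2] - b * reps[i, 1]) %% n == 0) { is_new <- FALSE; break }
    }
    if (is_new) reps <- rbind(reps, c(a, b))
  }
  nrow(reps)
}

psi <- function(n) {                      # Dedekind Psi by trial division
  out <- n; m <- n; p <- 2
  while (m > 1) {
    if (m %% p == 0) { out <- out / p * (p + 1); while (m %% p == 0) m <- m / p }
    p <- p + 1
  }
  out
}

count_P1(12); psi(12)                     # both give 24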
Perhaps the best algorithm, for large $n$, is sketched in pages 145-146 of Bill Stein's Modular forms: a computational approach. For small $n$ the algorithm in $\S 1.3$ of the Tatitscheff, He and McKay paper suffices:

- Consider the action of $(\mathbb{Z}/n\mathbb{Z})^*$ on $\{ 0,1,…,n-1 \}=\mathbb{Z}/n\mathbb{Z}$ and let $D$ be the set of the smallest elements in each orbit,
- For each $d \in D$ compute the stabilizer subgroup $G_d$ for this action and let $C_d$ be the set of smallest elements in each $G_d$-orbit on the set of all elements in $\mathbb{Z}/n \mathbb{Z}$ coprime with $d$,
- Then $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})= \{ [c:d]~|~d \in D, c \in C_d \}$.

Let's work this out for $n=12$, which will be our running example (the smallest non-squarefree non-primepower):

- $(\mathbb{Z}/12\mathbb{Z})^* = \{ 1,5,7,11 \} \simeq C_2 \times C_2$,
- The orbits on $\{ 0,1,…,11 \}$ are \[ \{ 0 \}, \{ 1,5,7,11 \}, \{ 2,10 \}, \{ 3,9 \}, \{ 4,8 \}, \{ 6 \} \] and $D=\{ 0,1,2,3,4,6 \}$,
- $G_0 = C_2 \times C_2$, $G_1 = \{ 1 \}$, $G_2 = \{ 1,7 \}$, $G_3 = \{ 1,5 \}$, $G_4=\{ 1,7 \}$ and $G_6=C_2 \times C_2$,
- $1$ is the only number coprime with $0$, giving us $[1:0]$,
- $\{ 0,1,…,11 \}$ are all coprime with $1$, and we have trivial stabilizer, giving us the points $[0:1],[1:1],…,[11:1]$,
- $\{ 1,3,5,7,9,11 \}$ are coprime with $2$, and under the action of $\{ 1,7 \}$ they split into the orbits \[ \{ 1,7 \},~\{ 3,9 \},~\{ 5,11 \} \] giving us the points $[1:2],[3:2]$ and $[5:2]$,
- $\{ 1,2,4,5,7,8,10,11 \}$ are coprime with $3$; the action of $\{ 1,5 \}$ gives us the orbits \[ \{ 1,5 \},~\{ 2,10 \},~\{ 4,8 \},~\{ 7,11 \} \] and additional points $[1:3],[2:3],[4:3]$ and $[7:3]$,
- $\{ 1,3,5,7,9,11 \}$ are coprime with $4$, and under the action of $\{ 1,7 \}$ we get orbits \[ \{ 1,7 \},~\{ 3,9 \},~\{ 5,11 \} \] and points $[1:4],[3:4]$ and $[5:4]$,
- Finally, $\{ 1,5,7,11 \}$ are the only coprimes with $6$, and they form a single orbit under $C_2 \times C_2$, giving us just one additional point $[1:6]$.

This gives us all $24= \Psi(12)$ points of $\mathbb{P}^1(\mathbb{Z}/12 \mathbb{Z})$ (strangely, on page 43 of the T-H-M paper they use different representatives). One way to see that $\# \mathbb{P}^1(\mathbb{Z}/n \mathbb{Z}) = \Psi(n)$ comes from a consequence of the Chinese Remainder Theorem: for the prime factorization $n = p_1^{e_1} … p_k^{e_k}$ we have \[ \mathbb{P}^1(\mathbb{Z}/n \mathbb{Z}) = \mathbb{P}^1(\mathbb{Z}/p_1^{e_1} \mathbb{Z}) \times … \times \mathbb{P}^1(\mathbb{Z}/p_k^{e_k} \mathbb{Z}) \] and for a prime power $p^k$ we have canonical representatives for $\mathbb{P}^1(\mathbb{Z}/p^k \mathbb{Z})$ \[ [a:1]~\text{for}~a=0,1,…,p^k-1~\quad \text{and} \quad [1:b]~\text{for}~b=0,p,2p,3p,…,p^k-p \] which shows that $\# \mathbb{P}^1(\mathbb{Z}/p^k \mathbb{Z}) = (p+1)p^{k-1}= \Psi(p^k)$. Next time, we'll connect $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$ to Conway's big picture and the congruence subgroup $\Gamma_0(n)$.
Thorne and Żytkow's original paper on TŻOs actually opens with a comparison of TŻOs and the type of object you mention, with a white dwarf degenerate core instead of a neutron star degenerate core. They note that the equilibrium states - essentially, stable configurations - of such combinations lie near the Hayashi track (actually acting a bit like AGB stars, in some cases), indicating high metallicity, as is the case with TŻOs. These objects generate energy the same way TŻOs do: matter is accreted by the core, releasing gravitational potential energy, and the red giant envelope continues some fusion, although, of course, core fusion has been substantially disrupted by the arrival of the new degenerate core. The main difference in energy production lies in the ratios between nuclear and gravitational contributions to luminosity:$$L_{\text{nuc}}/L\approx0.99,\quad L_{\text{grav}}/L\approx0.01\quad\text{for a white dwarf core}$$$$L_{\text{nuc}}/L\approx0.04,\quad L_{\text{grav}}/L\approx0.96\quad\text{for a neutron core}$$Why the difference? $L_{\text{grav}}$ is proportional to$$\frac{GM_c}{R_cc^2}$$where $_c$ refers to values for the core. The masses and radii of neutron stars differ drastically from those of white dwarfs. This becomes less important in the case of supergiant TŻOs (i.e. $M>10 M_{\odot}$), because convection cycles "burned" nuclear fuel back outwards into the envelope, and so the energy ratios become more like those found in the case of a white dwarf core. This difference in energy production ratios also means that the objects will remain in roughly stable states for different amounts of time; red giants with white dwarf cores can survive in equilibrium at least an order of magnitude longer than TŻOs. One interesting thing to note is that TŻOs and red giants with white dwarf cores may share some of the same problems when it comes to stability. The envelopes are expected to be composed similarly and act similarly, with the potential difference in nuclear fusion rates, and so the same dynamical instabilities are possible in both cases. However, Thorne and Żytkow state that they find this possibility unlikely.
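To put rough numbers on that proportionality (my own back-of-envelope sketch; the 0.6 $M_{\odot}$/7000 km white dwarf and 1.4 $M_{\odot}$/10 km neutron star are generic textbook values, not figures from the paper):

# GM_c / (R_c c^2), the dimensionless compactness driving L_grav
G <- 6.674e-11; c_light <- 2.998e8; Msun <- 1.989e30
compactness <- function(M, R) G * M / (R * c_light^2)
wd <- compactness(0.6 * Msun, 7.0e6)   # white dwarf core: ~1e-4
ns <- compactness(1.4 * Msun, 1.0e4)   # neutron star core: ~0.2
c(wd = wd, ns = ns, ratio = ns / wd)   # the neutron core wins by ~10^3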
Difference between revisions of "Fujimura.tex"

(4 intermediate revisions by 2 users not shown)

Latest revision as of 06:46, 27 July 2009

\section{Fujimura's problem}\label{fujimura-sec}

Let $\overline{c}^\mu_n$ be the size of the largest subset of the triangular grid $$\Delta_n := \{(a,b,c)\in {\mathbb Z}^3_+ : a+b+c = n\}$$ which contains no equilateral triangles $(a+r,b,c), (a,b+r,c), (a,b,c+r)$ with $r>0$. These are upward-pointing equilateral triangles. We shall refer to such sets as 'triangle-free'. (Kobon Fujimura is a prolific inventor of puzzles, and in this puzzle asked the related question of eliminating all equilateral triangles.)

The table in Figure \ref{lowFujimura} was formed mostly by computer searches for optimal solutions. We also found human proofs for most of them (see {\tt http://michaelnielsen.org/polymath1/index.php?title=Fujimura's\_problem}).

\begin{figure} \centerline{ \begin{tabular}{l|llllllllllllll} $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13\\ \hline $\overline{c}^\mu_n$ & 1 & 2 & 4 & 6 & 9 & 12 & 15 & 18 & 22 & 26 & 31 & 35 & 40 & 46 \end{tabular} } \label{lowFujimura} \caption{Fujimura numbers} \end{figure}

For any equilateral triangle $(a+r,b,c)$, $(a,b+r,c)$ and $(a,b,c+r)$, the values of $a+2b$ at the three vertices form an arithmetic progression of length 3. A Behrend set is a finite set of integers with no arithmetic progression of length 3 (see {\tt http://arxiv.org/PS\_cache/arxiv/pdf/0811/0811.3057v2.pdf}). By looking at those triples $(a,b,c)$ with $a+2b$ inside a Behrend set, one can obtain the lower bound $\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))$. It can be shown by a 'corners theorem' of Ajtai and Szemeredi \cite{ajtai} that $\overline{c}^\mu_n = o(n^2)$ as $n \rightarrow \infty$. An explicit lower bound is $3(n-1)$, made of all points in $\Delta_n$ with exactly one coordinate equal to zero. An explicit upper bound comes from counting the triangles. There are $\binom{n+2}{3}$ triangles, and each point belongs to $n$ of them.
So you must remove at least $\binom{n+2}{3}/n = (n+2)(n+1)/6$ points to remove all triangles; since $\Delta_n$ has $\binom{n+2}{2} = (n+2)(n+1)/2$ points in total, this leaves $(n+2)(n+1)/3$ points as an upper bound for $\overline{c}^\mu_n$.
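The small values in the table can be reproduced by brute force. The sketch below (function names are mine, not from the wiki) exhaustively searches for the largest triangle-free subset of $\Delta_n$; it recovers the first entries $1, 2, 4, 6, 9$, though exhaustive search is only feasible for tiny $n$:

```python
from itertools import combinations

def delta(n):
    """The triangular grid Delta_n = {(a,b,c) in Z^3_+ : a+b+c = n}."""
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangle_free(S, n):
    """True if S contains no triangle (a+r,b,c), (a,b+r,c), (a,b,c+r) with r > 0."""
    pts = set(S)
    for m in range(n):          # coordinate sum of the 'base' point (a,b,c)
        r = n - m               # r is forced: the vertices must have sum n
        for a in range(m + 1):
            for b in range(m + 1 - a):
                c = m - a - b
                if ((a + r, b, c) in pts and (a, b + r, c) in pts
                        and (a, b, c + r) in pts):
                    return False
    return True

def fujimura(n):
    """Size of the largest triangle-free subset of Delta_n, by exhaustive search."""
    pts = delta(n)
    for size in range(len(pts), 0, -1):
        if any(triangle_free(S, n) for S in combinations(pts, size)):
            return size

print([fujimura(n) for n in range(5)])   # expect [1, 2, 4, 6, 9]
```

Beyond very small $n$ an exhaustive search is hopeless; the table above came from the smarter computer searches mentioned in the text.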
Let’s try to identify the $\Psi(n) = n \prod_{p|n}(1+\frac{1}{p})$ points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$ with the lattices $L_{M,\frac{g}{h}}$ at hyperdistance $n$ from the standard lattice $L_1$ in Conway’s big picture. Here are all $24=\Psi(12)$ lattices at hyperdistance $12$ from $L_1$ (the boundary lattices): You can also see the $4 = \Psi(3)$ lattices at hyperdistance $3$ (those connected to $1$ with a red arrow) as well as the intermediate $12 = \Psi(6)$ lattices at hyperdistance $6$. The vertices of Conway’s Big Picture are the projective classes of integral sublattices of the standard lattice $\mathbb{Z}^2=\mathbb{Z} e_1 \oplus \mathbb{Z} e_2$. Let’s say our sublattice is generated by the integral vectors $v=(v_1,v_2)$ and $w=(w_1,w_2)$. How do we determine its class $L_{M,\frac{g}{h}}$, where $M \in \mathbb{Q}_+$ is a strictly positive rational number and $0 \leq \frac{g}{h} < 1$? Here’s an example: the sublattice (the thick dots) is spanned by the vectors $v=(2,1)$ and $w=(1,4)$. Well, we try to find a base-change matrix in $SL_2(\mathbb{Z})$ such that the new second basis vector is of the form $(0,z)$. To do this take coprime $(c,d) \in \mathbb{Z}^2$ such that $cv_1+dw_1=0$ and complete with $(a,b)$ satisfying $ad-bc=1$ via Bezout to a matrix in $SL_2(\mathbb{Z})$ such that \[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} v_1 & v_2 \\ w_1 & w_2 \end{bmatrix} = \begin{bmatrix} x & y \\ 0 & z \end{bmatrix} \] then the sublattice is of class $L_{\frac{x}{z},\frac{y}{z}~mod~1}$. In the example, we have \[ \begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ 0 & 7 \end{bmatrix} \] so this sublattice is of class $L_{\frac{1}{7},\frac{4}{7}}$. Starting from a class $L_{M,\frac{g}{h}}$ it is easy to work out its hyperdistance from $L_1$: let $d$ be the smallest natural number making the corresponding matrix integral \[ d \cdot \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} u & v \\ 0 & w \end{bmatrix} \in M_2(\mathbb{Z}) \] then $L_{M,\frac{g}{h}}$ is at hyperdistance $u \cdot w$ from $L_1$. Now that we know how to find the lattice class of any sublattice of $\mathbb{Z}^2$, let us assign a class to any point $[c:d]$ of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$. As $\gcd(c,d)=1$, by Bezout we can find an integral matrix with determinant $1$ \[ S_{[c:d]} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \] But then the matrix \[ \begin{bmatrix} a\cdot n & b\cdot n \\ c & d \end{bmatrix} \] has determinant $n$. Working backwards we see that the class $L_{[c:d]}$ of the sublattice of $\mathbb{Z}^2$ spanned by the vectors $(a\cdot n,b\cdot n)$ and $(c,d)$ is at hyperdistance $n$ from $L_1$. This is how the correspondence between points of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$ and classes in Conway’s big picture at hyperdistance $n$ from $L_1$ works. Let’s do an example. Take the point $[7:3] \in \mathbb{P}^1(\mathbb{Z}/12\mathbb{Z})$ (see last time), then \[ \begin{bmatrix} -2 & -1 \\ 7 & 3 \end{bmatrix} \in SL_2(\mathbb{Z}) \] so we have to determine the class of the sublattice spanned by $(-24,-12)$ and $(7,3)$. As before we have to compute \[ \begin{bmatrix} -2 & -7 \\ 7 & 24 \end{bmatrix} \begin{bmatrix} -24 & -12 \\ 7 & 3 \end{bmatrix} = \begin{bmatrix} -1 & 3 \\ 0 & -12 \end{bmatrix} \] giving us the class $L_{[7:3]} = L_{\frac{1}{12},\frac{3}{4}}$ (remember that the second term must be taken $mod~1$).
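The recipe is completely mechanical, so here is a minimal sketch of it in code (the function names and the use of Python Fractions are mine; the algorithm is exactly the base-change procedure just described, and it assumes $v, w$ are positively oriented, as in both examples):

```python
from fractions import Fraction
from math import gcd, lcm   # lcm needs Python 3.9+

def _bezout(p, q):
    """Return (a, b) with a*p + b*q == 1, assuming gcd(p, q) == 1."""
    if q == 0:
        return (1 if p == 1 else -1, 0)
    a, b = _bezout(q, p % q)
    return b, a - (p // q) * b

def lattice_class(v, w):
    """Class L_{M, g/h} of the sublattice of Z^2 spanned by v and w:
    base-change by an SL_2(Z) matrix so the second basis vector becomes
    (0, z), then read off M = x/z and g/h = y/z mod 1."""
    (v1, v2), (w1, w2) = v, w
    g = gcd(v1, w1)
    c, d = -w1 // g, v1 // g            # coprime, with c*v1 + d*w1 == 0
    a, b = _bezout(d, -c)               # so that a*d - b*c == 1
    x, y = a * v1 + b * w1, a * v2 + b * w2
    z = c * v2 + d * w2
    return Fraction(x, z), Fraction(y, z) % 1

def hyperdistance(M, gh):
    """Hyperdistance of the class L_{M, g/h} from L_1."""
    d = lcm(M.denominator, gh.denominator)  # smallest d making the matrix integral
    return int(d * M * d)                    # u * w

print(lattice_class((2, 1), (1, 4)))      # (Fraction(1, 7), Fraction(4, 7))
print(lattice_class((-24, -12), (7, 3)))  # (Fraction(1, 12), Fraction(3, 4))
print(hyperdistance(Fraction(1, 12), Fraction(3, 4)))   # 12
```

Both worked examples from the post are reproduced, including the reduction of $\frac{3}{-12}$ to $\frac{3}{4}$ modulo $1$.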
If you do this for all points in $\mathbb{P}^1(\mathbb{Z}/12\mathbb{Z})$ (and $\mathbb{P}^1(\mathbb{Z}/6\mathbb{Z})$ and $\mathbb{P}^1(\mathbb{Z}/3 \mathbb{Z})$) you get this version of the picture we started with. You’ll spot that the preimages of a canonical coordinate of $\mathbb{P}^1(\mathbb{Z}/m\mathbb{Z})$ for $m | n$ are the very same coordinate together with ‘new’ canonical coordinates in $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$. To see that this correspondence is one-to-one and that the index of the congruence subgroup \[ \Gamma_0(n) = \{ \begin{bmatrix} p & q \\ r & s \end{bmatrix}~|~n|r~\text{and}~ps-qr=1 \} \] in the full modular group $\Gamma = PSL_2(\mathbb{Z})$ is equal to $\Psi(n)$, it is useful to consider the right action of $PGL_2(\mathbb{Q})^+$ on the classes of lattices. The stabilizer of $L_1$ is the full modular group $\Gamma$ and the stabilizer of any class is a suitable conjugate of $\Gamma$. For example, for the class $L_n$ (that is, of the sublattice spanned by $(n,0)$ and $(0,1)$, which is at hyperdistance $n$ from $L_1$) this stabilizer is \[ Stab(L_n) = \{ \begin{bmatrix} a & \frac{b}{n} \\ c\cdot n & d \end{bmatrix}~|~ad-bc = 1 \} \] and a very useful observation is that \[ Stab(L_1) \cap Stab(L_n) = \Gamma_0(n) \] This is the way Conway likes us to think about the congruence subgroup $\Gamma_0(n)$: it is the joint stabilizer of the classes $L_1$ and $L_n$ (as well as all classes in the ‘thread’ $L_m$ with $m | n$). On the other hand, $\Gamma$ acts by rotations on the big picture: it only fixes $L_1$ and maps a class to another one at the same hyperdistance from $L_1$. The index of $\Gamma_0(n)$ in $\Gamma$ is then the number of classes at hyperdistance $n$. To see that this number is $\Psi(n)$, first check that the classes at hyperdistance $p^k$, for $p$ a prime number and all $k$, form the free $(p+1)$-valent tree with root $L_1$, so there are exactly $p^{k-1}(p+1)$ classes at hyperdistance $p^k$. To get from this that the number of hyperdistance-$n$ classes is indeed $\Psi(n) = \prod_{p|n}p^{v_p(n)-1}(p+1)$ we have to use the prime factorisation of the hyperdistance (see this post). The fundamental domain for the action of $\Gamma_0(12)$ by Moebius transformations on the upper half plane must then consist of $48=2 \Psi(12)$ black or white hyperbolic triangles. Next time we’ll see how to deduce the ‘monstrous’ Grothendieck dessin d’enfant for $\Gamma_0(12)$ from it.
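The multiplicativity of $\Psi$ makes it a few lines of code to check the counts quoted above ($\Psi(3)=4$, $\Psi(6)=12$, $\Psi(12)=24$); a minimal sketch (the function name is mine):

```python
def psi(n):
    """Psi(n) = n * prod_{p | n} (1 + 1/p), the index of Gamma_0(n) in PSL_2(Z)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p + 1)   # one factor (1 + 1/p) per prime p | n
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                                # leftover prime factor
        result = result // m * (m + 1)
    return result

print([psi(n) for n in (3, 6, 12)])   # [4, 12, 24], matching the boundary counts
```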
1. Definition of a Matrix Definition: Matrix An \(m\) by \(n\) matrix is an array of numbers with \(m\) rows and \(n\) columns. Example 1 \[\begin{pmatrix} 4&5\\0&15\\-9&3 \end{pmatrix}\nonumber \] is a 3 by 2 matrix. Example 2 Consider the system of equations \[\begin{align} &2x &-y &&+3z&=5 \\ &x & &&+4z&=3 \\ &5x &-7y &&+3z&=7 \end{align}\nonumber \] Then the matrix \[\begin{pmatrix}\begin{array}{ccc|c}2&-1&3&5 \\1&0&4&3\\ 5&-7&3&7\end{array}\end{pmatrix}\nonumber \] is called the augmented matrix associated to the system of equations. Two matrices are called equal if all of their entries are the same. Two matrices are called row equivalent if one can be transformed into the other using a sequence of the three operations that we discussed earlier: Interchanging two rows. Multiplying a row by a nonzero constant. Replacing a row with the sum of that row and a constant multiple of another row. 2. Solving Linear Systems Using Matrices We can solve a linear system by writing down its augmented matrix and performing the row operations that we did last time. Example 3 Solve \[\begin{align} &2x &-y &&+z&=3 \\ &x&+y&&+z&=2 \\ & &y&&-z&=-1 \end{align}\nonumber \] Solution We write the associated augmented matrix: \[\begin{pmatrix}\begin{array}{ccc|c}2&-1&1&3 \\1&1&1&2 \\ 0&1&-1&-1\end{array}\end{pmatrix}\nonumber \] Now begin solving by performing row operations: \[ R_1 \leftrightarrow R_2 \nonumber \] \[\begin{pmatrix}\begin{array}{ccc|c}1&1&1&2 \\2&-1&1&3\\ 0&1&-1&-1\end{array}\end{pmatrix}\nonumber \] \[R_2 - 2R_1 \rightarrow R_2\nonumber \] \[\begin{pmatrix}\begin{array}{ccc|c}1&1&1&2 \\0&-3&-1&-1\\ 0&1&-1&-1\end{array}\end{pmatrix}\nonumber \] \[R_2 \leftrightarrow R_3\nonumber \] \[\begin{pmatrix}\begin{array}{ccc|c}1&1&1&2 \\0&1&-1&-1\\ 0&-3&-1&-1\end{array}\end{pmatrix} \nonumber \] \[R_1 - R_2 \rightarrow R_1, \;\; R_3 + 3R_2 \rightarrow R_3\nonumber \] \[\begin{pmatrix}\begin{array}{ccc|c}1&0&2&3 \\0&1&-1&-1\\ 0&0&-4&-4\end{array}\end{pmatrix}\nonumber \] \[-\dfrac{1}{4} R_3 \rightarrow R_3\nonumber \] \[\begin{pmatrix}\begin{array}{ccc|c}1&0&2&3 \\0&1&-1&-1\\ 0&0&1&1\end{array}\end{pmatrix}\nonumber \] \[R_1 - 2R_3 \rightarrow R_1, \;\; R_2 + R_3 \rightarrow R_2\nonumber \] \[\begin{pmatrix}\begin{array}{ccc|c}1&0&0&1 \\0&1&0&0\\ 0&0&1&1\end{array}\end{pmatrix}\nonumber \] We can now put the matrix back in equation form: \[x = 1, y = 0 \text{ and } z = 1\nonumber \] Note If we had seen a bottom row of the form \(0 \; 0 \; 0 \; a\) where \(a\) is a nonzero constant, then there would be no solution. If \(a\) had been 0 there would be infinitely many solutions. 3. Addition and Scalar Multiplication of Matrices We can only add matrices that are of the same dimensions, that is if \[A=\begin{pmatrix} 1&2\\3&4 \end{pmatrix}, \;\;\; B=\begin{pmatrix} 2&3\\4&1\\5&9 \end{pmatrix}, \;\;\; C=\begin{pmatrix} 1&3\\7&2 \end{pmatrix}\nonumber \] then only \(A + C\) makes sense.
We write \[A+C=\begin{pmatrix} 1+1&2+3\\3+7&4+2\end{pmatrix}=\begin{pmatrix} 2&5\\10&6\end{pmatrix} \nonumber \] For any matrix, we can multiply a matrix by a real number as in the following example (same \(B\) as above): \[5B=\begin{pmatrix} 10&15\\20&5\\25&45 \end{pmatrix}\nonumber \] We define the zero matrix to be the matrix with only zeros for entries. For example, the 2 by 2 zero matrix is \[ \begin{pmatrix} 0&0\\0&0 \end{pmatrix}\nonumber \] 4. Multiplication of Matrices To multiply matrices, unfortunately the definition is not the obvious one. We can only multiply matrices where the number of columns of the first matrix is the same as the number of rows of the second matrix. The best way to learn how to multiply matrices is by example: \[\text{Let}\; A=\begin{pmatrix} 3&4&2\\0&1&-2 \end{pmatrix}, \;\; \text{and}\; B=\begin{pmatrix} 7&-3\\-2&1\\0&5 \end{pmatrix}\nonumber \] \[\text{then}\;AB=\begin{pmatrix} 3(7)+4(-2)+2(0)&3(-3)+4(1)+2(5)\\0(7)+1(-2)+(-2)(0) &0(-3)+1(1)+(-2)(5) \end{pmatrix}=\begin{pmatrix} 13&5\\-2&-9 \end{pmatrix}\nonumber \] Exercise \[\text{Let}\;A=\begin{pmatrix} 1&2\\3&4 \end{pmatrix}, \;\;\; B=\begin{pmatrix} 4&2&1\\-2&0&0\\1&6&-1 \end{pmatrix}, \;\;\; C=\begin{pmatrix} 1&0\\2&1\\4&5 \end{pmatrix}, \;\;\; D=\begin{pmatrix} 3&4&0\\5&0&0\end{pmatrix},\;\;\; E=\begin{pmatrix} 3&4&2\\1&5&0\\1&-1&2\end{pmatrix}\nonumber \] Evaluate each one that makes sense: 1) \(A + B\) 2) \(4C\) 3) \(AB\) 4) \(CD\) 5) \(DC\) 6) \(B + E\) 7) \(A^3\) 5. Applications of Matrices Application 1 A) Tables and chairs are made in the Mexico plant, the Brazil plant, and the US plant. The matrix below represents the quantity made per day. \(A:\) Labor and material costs for 1997 are represented in the following matrix. \(B =\) The following year, the costs increased to \(C =\) Find the following and describe what they mean: 1) \(AB\) 2) \(C - B\) 3) \(AC\) 4) \(A(C - B)\) 5) \(365AC\) Application 2 Suppose that you have two jobs, and each contributes to two different mutual funds for retirement. The first fund pays 5% interest and the second pays 8% interest. Initially $5,000 is put into the funds and after one year there will be $5,300. If the first fund got half of the money from the first job and one third of the money from the second job, how much did each job contribute? Hint: Multiplication of matrices is the same as composition of functions. Larry Green (Lake Tahoe Community College) Integrated by Justin Marshall.
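Since the dimension rule trips up many students, here is a quick check in Python (a minimal sketch using numpy; the matrices are the \(A\) and \(B\) from the multiplication example above):

```python
import numpy as np

# A is 2x3 and B is 3x2: columns of A match rows of B, so AB is defined (2x2).
A = np.array([[3, 4, 2],
              [0, 1, -2]])
B = np.array([[7, -3],
              [-2, 1],
              [0, 5]])

print(A @ B)
# [[13  5]
#  [-2 -9]]

# BA is also defined here (3x3), but matrix multiplication is not commutative:
print(B @ A)

# Adding matrices of different shapes fails, matching the rule in Section 3:
try:
    A + B
except ValueError as e:
    print("cannot add a 2x3 matrix to a 3x2 matrix:", e)
```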
A long while ago I promised to take you from the action by the modular group $\Gamma=PSL_2(\mathbb{Z})$ on the lattices at hyperdistance $n$ from the standard orthogonal lattice $L_1$ to the corresponding ‘monstrous’ Grothendieck dessin d’enfant. Speaking of dessins d’enfant, let me point you to the latest intriguing paper by Yuri I. Manin and Matilde Marcolli, ArXived a few days ago, Quantum Statistical Mechanics of the Absolute Galois Group, on how to build a quantum system for the absolute Galois group from dessins d’enfant (more on this, I promise, later). Where were we? We’ve seen natural one-to-one correspondences between (a) points on the projective line over $\mathbb{Z}/n\mathbb{Z}$, (b) lattices at hyperdistance $n$ from $L_1$, and (c) coset classes of the congruence subgroup $\Gamma_0(n)$ in $\Gamma$. How to get from there to a dessin d’enfant? The short answer is: it’s all in Ravi S. Kulkarni’s paper, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math 113 (1991) 1053-1135. It is a complete mystery to me why Tatitscheff, He and McKay don’t mention Kulkarni’s paper in “Cusps, congruence groups and monstrous dessins”. Because all they do (and much more) is in Kulkarni. I’ve blogged about Kulkarni’s paper years ago: – In the Dedekind tessellation it was all about assigning special polygons to subgroups of finite index of $\Gamma$. – In Modular quilts and cuboid tree diagrams it went on to assign (multiple) cuboid trees to a (conjugacy class of a) finite index subgroup. – In Hyperbolic Mathieu polygons the story continued with a finite-to-one connection between special hyperbolic polygons and cuboid trees. – In Farey codes it was shown how to encode such polygons by a Farey sequence. – In Generators of modular subgroups it was shown how to get generators of the finite index subgroups from this Farey sequence. The modular group is a free product \[ \Gamma = C_2 \ast C_3 = \langle s,u~|~s^2=1=u^3 \rangle \] with lifts of $s$ and $u$ to $SL_2(\mathbb{Z})$ given by the matrices \[ S=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},~\qquad U= \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \] As a result, any permutation representation of $\Gamma$ on a set $E$ can be represented by a $2$-coloured graph (with black and white vertices) and edges corresponding to the elements of the set $E$. Each white vertex has two (or one) edges connected to it and every black vertex has three (or one). These edges are the elements of $E$ permuted by $s$ (for white vertices) and $u$ (for black ones), the order of the 3-cycle determined by going counterclockwise round the vertex. Clearly, if there’s just one edge connected to a vertex, it gives a fixed point (or 1-cycle) in the corresponding permutation. The ‘monstrous dessin’ for the congruence subgroup $\Gamma_0(n)$ is the picture one gets from the permutation $\Gamma$-action on the points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$, or equivalently, on the coset classes or on the lattices at hyperdistance $n$. Kulkarni’s paper (or the blogposts above) tell you how to get at this picture starting from a fundamental domain of $\Gamma_0(n)$ acting on the upper half-plane by Moebius transformations. Sage gives a nice image of this fundamental domain via the command FareySymbol(Gamma0(n)).fundamental_domain() Here’s the image for $n=6$: The boundary points (on the half-lines through $0$ and $1$ and on the $4$ half-circles) need to be identified, which is indicated by matching colours.
So the 2 half-lines are identified, as are the two blue (and green) half-circles (in opposite direction). To get the dessin from this, let’s first look at the interior points. A white vertex is a point in the interior where two black and two white tiles meet; a black vertex corresponds to an interior point where three black and three white tiles meet. Points on the boundary where tiles meet are coloured red, and after identification two of these reds give one white or black vertex. Here’s the intermediate picture. The two top red points are identified, giving a white vertex, as do the two reds on the blue half-circles and the two reds on the green half-circles, because after identification two black and two white tiles meet there. This then gives us the ‘monstrous’ modular dessin for $n=6$ of the Tatitscheff, He and McKay paper: Let’s try a more difficult example: $n=12$. Sage gives us the fundamental domain, from which we get the intermediate picture, and spotting the correct identifications gives us the ‘monstrous’ dessin for $\Gamma_0(12)$ from the THM-paper: In general there are several of these 2-coloured graphs giving the same permutation representation, so the obtained ‘monstrous dessin’ depends on the choice of fundamental domain. You’ll have noticed that the domain for $\Gamma_0(6)$ was symmetric, whereas the one Sage provides for $\Gamma_0(12)$ is not. This is caused by Sage using the Farey code \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_1 & \frac{1}{5} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & 1} \] One of the nice results from Kulkarni’s paper is that for any $n$ there is a symmetric Farey code, giving a perfectly symmetric fundamental domain for $\Gamma_0(n)$. For $n=12$ this symmetric code is \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & \frac{5}{6} \ar@{-}[r]_1 & 1} \] It would be nice to see whether using these symmetric Farey codes gives other ‘monstrous dessins’ than in the THM-paper. It remains to identify the edges in the dessin with the lattices at hyperdistance $n$ from $L_1$. Using the tricks from the previous post it is quite easy to check that for any $n$ the monstrous dessin for $\Gamma_0(n)$ starts off with the lattices $L_{M,\frac{g}{h}}$ (labelled $M\,\frac{g}{h}$) as below. Let’s do a sample computation showing that the action of $s$ on $L_n$ gives $L_{\frac{1}{n}}$: \[ L_n.s = \begin{bmatrix} n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} \] and then, as last time, to determine the class of the lattice spanned by the rows of this matrix we have to compute \[ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -n \end{bmatrix} \] which is class $L_{\frac{1}{n}}$. And similarly for the other edges.
Denote $A=\{0\}$, $B=\{0,1\}$. Any sequence in $\Omega:=\{A,B\}^{\mathbb N}$ determines a subset of $[0,1]$: the numbers whose $n$-th binary digit lies in the $n$-th term of the sequence. Such a subset is a continuum provided the number of $B$'s is infinite. For instance, $(AB)^\omega$ gives the set $\left\{\sum_{n=1}^\infty a_n4^{-n}\mid a_n\in\{0,1\}\right\}$, which is a Cantor set of Hausdorff dimension $\frac12$. Similarly, it is easy to show that $(AB^N)^\omega$ has Hausdorff dimension $\frac{N}{N+1}$. Now, suppose the sequence of $A$'s and $B$'s is random: i.i.d. with distribution $\bigl(\frac1{N+1}, \frac{N}{N+1}\bigr)$. It is obvious that a generic sequence will yield a Cantor set. Question. Is it true that the resulting subset of $[0,1]$ has Hausdorff dimension $\frac{N}{N+1}$ almost surely? A weaker version: is it a (Lebesgue) null set almost surely?
To show that an algebra constructed as a quotient of the tensor algebra of a vector space is nonzero, one of the main ways to go is to construct representations. We can do this for the Clifford algebra as follows. Let $V$ be a vector space over a field $k$ and $(,):V\times V \to k$ a symmetric bilinear form on $V$. The Clifford algebra (for this form) is given by $$Cl(V)= T(V)/\langle v \otimes v - (v,v)\rangle.$$We will construct a representation of the Clifford algebra on the exterior algebra $\bigwedge (V)$. For $v \in V$, define two $k$-endomorphisms of $\bigwedge(V)$ by$$ l_v(x) = v \wedge x$$and$$ \delta_v(x) = \sum_{j=1}^k (-1)^{j-1}(v,x_j)\, x_1 \wedge \dots \wedge \widehat{x_j} \wedge \dots \wedge x_k$$if $x = x_1 \wedge \dots \wedge x_k$. Then check that $l_v^2 = \delta_v^2 = 0$, and moreover that $l_v \delta_v + \delta_v l_v = (v,v) \cdot \mathrm{id}$. Extend the linear map $v \mapsto l_v + \delta_v$ to an algebra homomorphism from the tensor algebra $T(V)$ to $\mathrm{End}_k(\bigwedge(V))$. By the previous remark, this descends to a map, let's call it $\phi$, from the Clifford algebra to $\mathrm{End}_k(\bigwedge(V))$ (indeed, $(l_v + \delta_v)^2 = l_v\delta_v + \delta_v l_v = (v,v)\cdot\mathrm{id}$, so $v \otimes v - (v,v)$ acts as zero). In particular, $\phi(v)1 = v$, so $V$ injects into the Clifford algebra. Edit: I believe also that the map$$ x \mapsto \phi(x)1$$gives a linear isomorphism of the Clifford algebra with the exterior algebra. A great reference for this stuff is Chevalley's monograph, The Algebraic Theory of Clifford Algebras and Spinors.
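To make the construction concrete, here is a minimal computational sketch (all naming is mine) for $V = \mathbb{Q}^n$ with a diagonal bilinear form, representing elements of $\bigwedge V$ as dictionaries from sorted index tuples to coefficients, and checking the relation $\phi(v)^2 = (v,v)\cdot\mathrm{id}$ on a whole basis:

```python
from itertools import combinations

def wedge(i, x):
    """l_{e_i}: left exterior multiplication by the basis vector e_i."""
    out = {}
    for S, c in x.items():
        if i in S:
            continue                      # e_i wedge e_i = 0
        pos = sum(1 for j in S if j < i)  # transpositions needed to sort
        T = tuple(sorted(S + (i,)))
        out[T] = out.get(T, 0) + (-1) ** pos * c
    return out

def contract(i, q, x):
    """delta_{e_i}: interior product for the diagonal form (e_i, e_j) = q_i if i == j."""
    out = {}
    for S, c in x.items():
        for j, s in enumerate(S):
            if s == i:                    # (-1)^j matches the (-1)^{j-1} of 1-indexing
                T = S[:j] + S[j + 1:]
                out[T] = out.get(T, 0) + (-1) ** j * q[i] * c
    return out

def phi_v(v, q, x):
    """phi(v) = l_v + delta_v for v = sum_i v_i e_i, acting on x in the exterior algebra."""
    out = {}
    for i, vi in enumerate(v):
        if vi == 0:
            continue
        for part in (wedge(i, x), contract(i, q, x)):
            for S, c in part.items():
                out[S] = out.get(S, 0) + vi * c
    return {S: c for S, c in out.items() if c != 0}

# Check the Clifford relation phi(v)^2 = (v, v) * id on every basis element of /\(Q^3):
q = [1, -2, 3]                                   # diagonal form on Q^3
v = [2, 1, -1]
vv = sum(qi * vi * vi for qi, vi in zip(q, v))   # (v, v) = 4 - 2 + 3 = 5
for k in range(4):
    for S in combinations(range(3), k):
        x = {S: 1}
        assert phi_v(v, q, phi_v(v, q, x)) == {S: vv}
print("phi(v)^2 = (v,v)*id verified; (v,v) =", vv)
```

The assumption of an orthogonal basis is only for brevity; the same dictionaries-with-signs bookkeeping works for a general symmetric form.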
The construction of quasi-periodic solutions of quasi-periodic forced Schrödinger equation 1. Department of Mathematics, Nanjing University, Nanjing 210093, China 2. Department of Mathematics, Nanjing University, Nanjing 210093, China We consider the equation $iu_t=u_{xx}-mu-f(\beta t,x)|u|^2 u,$ with the boundary conditions $u(t,0)=u(t,a\pi)=0, \ -\infty < t < \infty,$ where $m$ is real and $f(\beta t,x)$ is real analytic and quasi-periodic in $t$, satisfying the non-degeneracy condition $\lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T f(\beta t,x)\,dt\equiv f_0=$ const., $\quad 0\ne f_0 \in\mathbb R,$ with $\beta\in\mathbb R^b$ a fixed Diophantine vector. Mathematics Subject Classification: Primary: 70H08, 70H12; Secondary: 37J40. Citation: Lei Jiao, Yiqian Wang. The construction of quasi-periodic solutions of quasi-periodic forced Schrödinger equation. Communications on Pure & Applied Analysis, 2009, 8 (5): 1585-1606. doi: 10.3934/cpaa.2009.8.1585
This is more of a question for the Physics stack, but I'll give it a shot, since it's fairly basic. You need to understand something before we begin. The theoretical framework we have to gauge and answer this sort of thing is called General Relativity, which was proposed by Einstein in 1915. It describes things such as gravity, black holes, or just about ... What you're describing is basically the "collapsed star" (Eng) or "frozen star" (Rus) interpretation of black holes that was common prior to the mid-to-late 1960s. It was a mistake. Suppose you are distant and stationary relative to the black hole. You will observe infalling matter asymptotically approaching the horizon, growing ever fainter as it redshifts. ... Yes, you are absolutely right: from OUR VIEWPOINT it does. From Kip Thorne's book "Black Holes and Time Warps: Einstein's Outrageous Legacy": “Like a rock dropped from a rooftop, the star’s surface falls downward (shrinks inward) slowly at first, then more and more rapidly. Had Newton’s laws of gravity been correct, this acceleration of the implosion would ... Not at all a dumb question. As you have heard, it is true that time is affected by gravity. The stronger the gravitational field, the slower time passes. If you're far from any gravitating matter, time passes "normally". But to answer your question, we must specify what is meant by "the black hole's time" (let's call the black hole $\mathrm{BH}_\mathrm{Sgr\... The easiest explanation for why the maximum distance one can see is not simply the product of the speed of light with the age of the universe is that the universe is non-static. Different things (i.e. matter vs. dark energy) have different effects on the coordinates of the universe, and their influence can change with time. A good starting point in ... (I will assume a Schwarzschild black hole for simplicity, but much of the following is morally the same for other black holes.) If you were to fall into a black hole, my understanding is that from your reference point, time would speed up (looking out to the rest of the universe), approaching infinity when approaching the event horizon. In Schwarzschild ... The answer is yes, time dilation does affect how much time an observer experiences from the big bang until the present (cosmological) time. However there is a certain set of special observers called comoving observers; these are the observers to which the Universe appears isotropic. For example we can tell the Earth is moving at about 350 km/s relative ... I think the question is referring to situating a very large mirror in space facing earth. If we were to put it several light minutes away, then events occurring opposite the mirror could be reviewed de novo with more preparation upon the warning we received upon the first light of the event arriving at earth. For example, a supernova going off in M31 might ... Would it be possible to look deep into a certain part of space and time to find some galaxy that contributed to the matter that makes up the Milky Way today? No, that's not possible. If we could do that, it'd mean that the matter traveled from there to here faster than its light got here, and matter can't travel faster through space than light does. ...
What is the difference between time and space-time? Space-time is time plus space. How does gravity affect the passage of time? The higher the gravity of a planet or star, and the closer you are to that body, the slower time passes. What is the speed of light and how does it relate to time? The speed of light is 299,792.458 km/s in vacuum, the speed at which ... The arms are $4\,\mathrm{km}\,\times\, 1.2\,\mathrm{m}$. From the LIGO webpage: The 1.2 m diameter beam tubes were created in 19-20 m-long segments, rolled into a tube with a continuous spiral weld. While a mathematically perfect cylinder will not collapse under pressure, any small imperfection in a real tube would allow it to buckle (a crushed vacuum ... When we talk about the universe, we are really talking about one of two things: The observable universe, which is everything we can possibly see. The Universe, which is everything that has ever existed, currently exists, and will exist. The observable universe has its own center, usually the Earth. It is a spherical region of everything that we can see, ... Velocity is a form of kinetic energy, while height within a gravity well is a form of potential energy. For an orbiting body, conservation of energy will keep the total energy constant. So as a planet moves away from the parent star, it loses velocity and gains potential energy. As it moves closer, it trades the potential energy back for velocity. The point ... We need to think about just where the time dilation effect occurs. By then thinking about the observations from each point of view, that is the free-falling object and the external observer, we can come to terms with just what is happening, as opposed to what appears to be happening. The experience of time: we must remember that an object moving at a certain ... Yes, we always look into the past when looking somewhere. There is for instance a mirror on the moon. When sending a laser beam to that mirror, we can detect the reflected light about 2.5 seconds later. This could be interpreted as looking 2.5 seconds into the past, when the laser was fired. Details here. Well, first things first. It's not likely to have a planet orbiting near a black hole and in significant time dilation, because the tidal effects would likely tear anything that close apart. Certainly a planet orbiting a stellar-mass black hole would need to be quite far away so as to not be torn apart, so any time dilation would be pretty small. Around ... Reason 1: Let's look at the Friedmann equations without the cosmological constant. $$ \frac{\dot{a}^2 }{a^2} = \frac{8 \pi G \rho}{3}-\frac{kc^2}{a^2}$$ The term on the LHS is just the Hubble constant squared $H^2$, which can be measured by direct measurement of the recession velocity of galaxies. The density term can be said to be a combination of $\rho_{... What makes you think that it is "obviously not 2013 on Earth"? In actual calculations, astronomers use the Julian day, which is a decimal representation of time. A Julian year is exactly 365.25 days of 86,400 seconds each. Astronomical coordinates are usually written in the J2000 epoch, which allows us to compensate for Earth's axial precession. Our ... As others have said, mathematically, a singularity is when there is an attempt to divide by zero. Take, for example, a Schwarzschild black hole. This is a black hole that has no electric charge or angular momentum; it is the simplest type of black hole. According to general relativity, gravity is the bending of spacetime. The curvature of space can be ...
There is a useful model of spacetime as a rubber sheet that is bent by masses lying on it. But it should be remembered that this is an analogy (obligatory xkcd) and most analogies fail if pushed too far. Spacetime isn't made of something that can rip. A rotating black hole, a "Kerr black hole", is stranger than a static one, as it pulls spacetime around it ... Pretty much every hydrogen atom that's in a glass of water has a proton that dates from one millionth of a second after the big bang. That's older than the cosmic microwave background, which dates from almost 400,000 years later. First, let's clear up a few misconceptions: The Hubble sphere. The speed of light as an upper limit is valid in special relativity (SR). In general relativity (GR), which must be used to describe the expansion of the Universe, although locally (i.e. where SR is a good approximation) you cannot exceed the speed of light, there is no limit to the relative ... Our day is 23 hours and 56 minutes long, and slowing by an infinitesimal (but measurable) amount each year due to tidal losses. Our day has a connection with the weather, in that the sun drives all our weather systems, so heating over each part of the globe happens every day, but aside from that, your question doesn't make much sense. Weather changes may ... In the standard model, the universe looks the same for all locations moving in the local rest frame. This includes its apparent age. You can tell if you are in the local rest frame if the expansion of galaxies around you is symmetric in all directions and the microwave background also is the same in all directions. Simply put, any civilization on any ... Space-time is not "made" of anything; it is merely a medium or coordinate system. Think about the grid lines on a map: they aren't "made" of anything, they're just a representation of the geometry of the Earth. Space-time is a concept envisaged by Einstein when he wrote his theory of Special Relativity, that the properties of space and time become ... There are different words for different aspects of space. For example, consider: length, width, and height. Other words include depth and breadth. We can speak of them as different things if we choose to, but we generally consider them to be part of a unified concept of space. Why? It's because we understand that these words just pick out measurements along ... Jonathan's answer is essentially correct, but as Rob Jeffries comments, he doesn't take into account that the Universe is expanding during the journey. The edge of the observable Universe is 47 billion lightyears (Gly) away. Even if you are a lightbeam, you cannot reach that point. The farthest you can go if departing today is roughly 5 Gpc, or 17 Gly, but ... "you would come back to where you began" That is at least doubtful. Even if the Universe has the topology of a 3-sphere, there hasn't been enough time for light to completely travel around it, and since the Universe is expanding at an accelerating rate, light would never have the time to return to its starting position. In fact it may well be that the ...
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs: basically it boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
I am reading BRAID GROUPS, FREE GROUPS, AND THE LOOP SPACE OF THE 2-SPHERE by F.R. Cohen and J. Wu, and here is an extract of the paper: (The proof is not finished yet, but I am very confused by now.) I couldn't understand the proof at all, and will really appreciate it if anyone could shed some light on it. It seems to me that the authors did not prove that Artin's representation is given by the composite $E\circ I$, and I have no idea why $P_{n+1}$ is isomorphic to the pullback. Here is some information that might be useful: Let $G$ be a group and let $\mathrm{Aut}(G)$ be the automorphism group of $G$. The holomorph $\mathrm{Hol}(G)$ of $G$ is defined as follows: As a set, $\mathrm{Hol}(G)=\mathrm{Aut}(G)\times G$; for each $x,y\in G$ and $f,g\in\mathrm{Aut}(G)$, the multiplication on $\mathrm{Hol}(G)$ is defined by $$(f,x)\cdot(g,y)=(fg, g^{-1}(x)y)\text{.}$$ The map $A:B_n\to \mathrm{Aut}(F_n)$ is Artin's representation of the braid group, where $B_n$ is the braid group, $P_n$ is the pure braid group, and $F_n$ is the free group of rank $n$. The corollary comes after the following lemma (Theorem 2.3). Here I am not asking for an alternative proof of the result, but how to understand the proof presented.
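As a quick sanity check of the holomorph multiplication rule quoted above, here is a minimal sketch (my own toy example, not from the paper) for $G = \mathbb{Z}/5$, where $\mathrm{Aut}(G) \cong (\mathbb{Z}/5)^*$ acts by multiplication; since $G$ is written additively, $g^{-1}(x)y$ becomes $g^{-1}(x)+y$:

```python
from itertools import product

# Sanity-check the holomorph multiplication (f,x).(g,y) = (fg, g^{-1}(x) y)
# for G = Z/5 written additively; an automorphism is stored as its multiplier u.
N = 5
UNITS = [1, 2, 3, 4]

def inv(u):
    """Inverse of the automorphism x -> u*x mod N."""
    return pow(u, -1, N)   # modular inverse, Python 3.8+

def mul(a, b):
    """Holomorph product (f,x).(g,y) = (f*g, g^{-1}(x) + y)."""
    (f, x), (g, y) = a, b
    return (f * g % N, (inv(g) * x + y) % N)

# Associativity of the rule, checked over all 20^3 triples:
H = list(product(UNITS, range(N)))
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in H for b in H for c in H)
print("holomorph multiplication is associative on", len(H), "elements")
```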
On the denseness of certain reciprocal power sums Xiao Jiang, Shaofang Hong Mathematical College, Sichuan University, Chengdu 610064, P. R. China By $(\mathbb{Z}^+)^{\infty}$ we denote the set of all the infinite sequences $\mathcal{S}=\{s_i\}_{i=1}^{\infty}$ of positive integers (note that the $s_i$ are not necessarily distinct and not necessarily monotonic). Let $f(x)$ be a polynomial with nonnegative integer coefficients. For any integer $n\ge 1$, one lets $\mathcal{S}_n:=\{s_1, ..., s_n\}$ and $H_f(\mathcal{S}_n):=\sum_{k=1}^{n}\frac{1}{f(k)^{s_{k}}}$. In this paper, we use a result of Kakeya to show that if $\frac{1}{f(k)}\le\sum_{i=1}^\infty\frac{1}{f(k+i)}$ holds for all positive integers $k$, then the union set $\bigcup_{\mathcal{S}\in (\mathbb{Z}^+)^{\infty}}\{ H_f(\mathcal{S}_n) \mid n\in \mathbb{Z}^+ \}$ is dense in the interval $(0,\alpha_f)$ with $\alpha_f:=\sum_{k=1}^{\infty}\frac{1}{f(k)}$. It is well known that $\alpha_{x^2+1}=\frac{1}{2}\big(\pi\frac{e^{2\pi}+1}{e^{2\pi}-1}-1\big)\approx 1.076674$. Our denseness result implies that for any sufficiently small $\varepsilon >0$, there are positive integers $n_1$ and $n_2$ and infinite sequences $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$ of positive integers such that $1-\varepsilon<H_{x^2+1}(\mathcal{S}^{(1)}_{n_1})<1$ and $1<H_{x^2+1}(\mathcal{S}^{(2)}_{n_2})<1+\varepsilon$. Finally, we conjecture that for any polynomial $f(x)$ with integer coefficients satisfying $f(m)\ne 0$ for any positive integer $m$, and for any infinite sequence $\mathcal{S}=\{s_i\}_{i=1}^\infty$ of positive integers (not necessarily increasing and not necessarily distinct), there is a positive integer $N$ such that for any integer $n$ with $n\ge N$, $H_f(\mathcal{S}_n)$ is not an integer. In particular, we guess that for any positive integer $n$, $H_{x^2+1}(\mathcal{S}_n)$ is never equal to 1. References 1. Y. G. Chen and M. Tang, On the elementary symmetric functions of 1, 1/2, ..., 1/n, Am. Math. Mon., 119 (2012), 862-867. 2. P. Erdös and I. Niven, Some properties of partial sums of the harmonic series, B. Am. Math. Soc., 52 (1946), 248-251. 3. Y. L. Feng, S. F. Hong, X. Jiang, et al., A generalization of a theorem of Nagell, Acta Math. Hung., 157 (2019), 522-536. 4. S. F. Hong and C. L. Wang, The elementary symmetric functions of reciprocals of the elements of arithmetic progressions, Acta Math. Hung., 144 (2014), 196-211. 5. S. Kakeya, On the set of partial sums of an infinite series, Proceedings of the Tokyo Mathematico-Physical Society, 2nd Series, 7 (1914), 250-251. 6. K. Kato, N. Kurokawa, T. Saito, et al., Number theory: Fermat's dream, translated from the 1996 Japanese original by Masato Kuwata, Translations of Mathematical Monographs, Vol. 186, Iwanami Series in Modern Mathematics, American Mathematical Society, 2000. 7. Y. Y. Luo, S. F. Hong, G. Y. Qian, et al., The elementary symmetric functions of a reciprocal polynomial sequence, C. R. Math., 352 (2014), 269-272. 8. T. Nagell, Eine Eigenschaft gewissen Summen, Skr. Norske Vid. Akad. Kristiania, 13 (1923), 10-15. 9. L. Theisinger, Bemerkung über die harmonische Reihe, Monatsh. Math., 26 (1915), 132-134. 10. C. L. Wang and S. F. Hong, On the integrality of the elementary symmetric functions of 1, 1/3, ..., 1/(2n-1), Math. Slovaca, 65 (2015), 957-962. 11. W. X. Yang, M. Li, Y. L. Feng, et al., On the integrality of the first and second elementary symmetric functions of $1, 1/2^{s_2}, ..., 1/n^{s_n}$, AIMS Mathematics, 2 (2017), 682-691. 12.
Q. Y. Yin, S. F. Hong, L. P. Yang, et al., Multiple reciprocal sums and multiple reciprocal star sums of polynomials are almost never integers, J. Number Theory, 195 (2019), 269-292. © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
Summer of 2012. Suddenly several "integer-as-a-service" providers spring from nowhere. They deliver "artisanal integers", integers which (they claim) are "hand-crafted and guaranteed to be unique and hella-beautiful". Are you still with me? Still in the dark? Here's my very own, freshly minted, unique integer: $420557015$. Anyone can check that this is a genuine Brooklyn integer by looking up its corresponding number-site. In our case the URL would be: http://www.brooklynintegers.com/int/420557015/. Please pause a moment to admire the hipster-style web-design and the wonderful tagline from Brooklyn Integers: "we have infinity on our side". Why do we need integer-as-a-service providers? Each of these 67 million buildings was added to the database by first creating four or more "nodes" and then grouping them together in a "way". Aaron wanted to build a catalog of all of these buildings. Because the "ways" already had a numeric ID in the OpenStreetMap database (using 32-bit integers), he generated a new numeric ID for each and every building, starting at $2^{32}$ plus one, to avoid collisions between the two databases. He met a similar problem one year later while working on a project called parallel Flickr. Here the idea is to allow individuals to run and maintain their own local copy of Flickr, consisting of their own photos, those of their close friends, as well as photos they like. Assume a fair number of people have set up their local Flickr site and the very worst happens: Flickr itself shuts down. Can we recover the (to us) relevant part of Flickr from all our local copies? Yes, that is, if all of us had the discipline to maintain the original Flickr IDs and metadata in our parallel Flickrs. A collision between two or more of our databases would only mean we share a copy of the same photo. If however (and this is infinitely more likely) we all used our own idiosyncratic system to name files, rebuilding the network from our little Flickrs would quickly become hopeless. Parallel Flickr is an exercise in how individual people can take control and responsibility for archiving the material they leave on global social media sites such as Facebook, Google+, YouTube, Instagram and the like. How can we collectively maintain generated content in case the service pulls the plug? Another example. Consider a large group of people, each of them geotagging and archiving their own tiny part of a larger collective neighbourhood. What's needed to construct the bigger picture from their individual efforts? When Aaron asked his pal Mike over coffee in San Francisco's Mission District, the reply was: "So, are you suggesting that we need something like a centralized 'integers as a service' platform?" As a gag Mike also suggested that rather than any old unique integers, there would probably be a demand for hand-crafted artisanal integers. That's how Mission Integers was born. And immediately forgotten. There already exist methods to deliver unique IDs, such as the UUID scheme. The idea was rekindled when Aaron moved to New York and started the Gowanus Heights Neighborhood Project. Here again they needed unique IDs. Because the long strings of alphanumerical characters and dashes provided by UUID are less efficient at the database layer, they needed Mission integers, or perhaps Brooklyn integers, or both.
In order to avoid collisions between these two artisanal-integer providers, Mike claimed the even numbers for Mission (as San Francisco is on the 'left hand' coast) and Aaron the odd ones for Brooklyn (New York lying on the 'right hand' coast). Remember, all of this started in order to empower individuals against the whims of global players like Facebook. And now this new system would depend on a two-party US-based monopoly? Most certainly not! Along came Dan Catt, who created London Integers, using a rather different look-and-feel: "Both Mission and Brooklyn have gone for a hipster boutique type of look, which I wanted to eschew. The London I know is dirty, gritty, beautiful and punk." London 'artisan' Integers would be multiples of $9$, and in order to avoid collisions with the others he took the maximal integer minted by both Brooklyn and Mission, added a couple of millions to it, and just started distributing integers. Someone could look at an artisan integer and work out if it was a London Integer by adding up the digits, repeatedly if necessary, to get to a single digit. If that's $9$ it's a London Integer. If it's not a London Integer then you can tell if it's Brooklyn or Mission on the odd/even front. Where's the math in all this? Ideally, integer-as-a-service providers will be set up in all major cities, and, why not, even in small communities. Nelson Minar solved the two most pressing problems arising from having multiple providers of artisanal integers: all parties producing integers should be aware of one another and honour their respective offsets, and, given an artisanal integer, one should be able to figure out where it came from. As Aaron Straup Cope put it: "Nelson did this using secret magical powers better known as 'maths'." He used the Cantor pairing to generate a unique integer $z$ corresponding to the $y$-th number produced by the $x$-th foundry: $z = \frac{(x+y)(x+y+1)}{2} + y$ Conversely, if you're given the artisanal integer $z$, you can work out its integer provider by following these rules: $w=\lfloor \frac{\sqrt{8z+1}-1}{2} \rfloor,~t = \frac{w^2+w}{2},~y=z-t,~x=w-y$ Elegant as this is, there's a serious flaw: the number of providers will always be significantly smaller than the number of integers they mint, so most integers will never be used. Here's my two-pence worth of advice to build a slightly more economic system: The $x$-th foundry should only mint multiples of $p(x)$, the $x$-th prime number. At any time $t$ one should know the product $T$ of all prime numbers of foundries in operation. In order to decide whether the $x$-th foundry can distribute its $y$-th number $n = y \times p(x)$, one computes $\gcd(n,T)$ and looks at its largest prime divisor. If this is $p(x)$, the number $n$ can safely be minted. If not, it will eventually be distributed by the foundry corresponding to that largest prime factor. With a little bit of extra work one gets a fairer system. Decompose $\gcd(n,T)=p(x_1)^{e_1} p(x_2)^{e_2} \dots p(x_k)^{e_k}$. Then $n$ will be minted by the foundry having the largest exponent. If there are $m$ equal maximal exponents $e$ corresponding to $p(x_{i_1}), p(x_{i_2}), \dots, p(x_{i_m})$, then $n$ will be minted by foundry $x_{i_j}$ where $j = e \bmod m$. A new foundry will be associated to the next prime number, will advertise its existence to the existing services (changing $T$), and will respect as initial offset the smallest prime multiple larger than the maximum integer already minted by the others.
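Nelson's Cantor-pairing scheme is a few lines of code to implement and invert (a minimal sketch; the function names are mine):

```python
from math import isqrt

def mint(x, y):
    """Cantor pairing: the unique integer for the y-th number of the x-th foundry."""
    return (x + y) * (x + y + 1) // 2 + y

def provenance(z):
    """Invert the pairing: recover (foundry x, serial y) from the integer z."""
    w = (isqrt(8 * z + 1) - 1) // 2
    t = (w * w + w) // 2
    y = z - t
    return w - y, y

print(mint(1, 2))              # 8
print(provenance(mint(1, 2)))  # (1, 2)

# Round-trip check over a small grid of foundries and serial numbers:
assert all(provenance(mint(x, y)) == (x, y)
           for x in range(50) for y in range(50))
```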
No doubt you will come up with a much cleverer idea! Please leave it in the comments. Sources: – H/T Christian Lawson-Perfect via Twitter – Aaron Straup Cope: “The “Drinking Coffee and Stealing Wifi” 2012 World Tour”
Timeline of prime gap bounds

[math]H = H_1[/math] is a quantity such that there are infinitely many pairs of consecutive primes of distance at most [math]H[/math] apart. Would like to be as small as possible (this is a primary goal of the Polymath8 project).

[math]k_0[/math] is a quantity such that every admissible [math]k_0[/math]-tuple has infinitely many translates which each contain at least two primes. Would like to be as small as possible. Improvements in [math]k_0[/math] lead to improvements in [math]H[/math]. (The relationship is roughly of the form [math]H \sim k_0 \log k_0[/math]; see the page on finding narrow admissible tuples.) More recent improvements on [math]k_0[/math] have come from solving a Selberg sieve variational problem.

[math]\varpi[/math] is a technical parameter related to a specialized form of the Elliott-Halberstam conjecture. Would like to be as large as possible. Improvements in [math]\varpi[/math] lead to improvements in [math]k_0[/math], as described in the page on Dickson-Hardy-Littlewood theorems. In more recent work, the single parameter [math]\varpi[/math] is replaced by a pair [math](\varpi,\delta)[/math] (in previous work we had [math]\delta=\varpi[/math]). These estimates are obtained in turn from Type I, Type II, and Type III estimates, as described at the page on distribution of primes in smooth moduli. In this table, infinitesimal losses in [math]\delta,\varpi[/math] are ignored.

Date | [math]\varpi[/math] or [math](\varpi,\delta)[/math] | [math]k_0[/math] | [math]H[/math] | Comments
10 Aug 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam)
14 May 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper.
21 May 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations
28 May 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math]
30 May 59,470,640 (Morrison) 58,885,998? (Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m
31 May 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math]
1 Jun 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math]
2 Jun 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math])
3 Jun 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method.
4 Jun 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples
5 Jun 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110? (Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff.
Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve 6 Jun 387,960 (Angelveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. 7 Jun 26,024? (vo8ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? (v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? (Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? 
(Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422?(Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen]) 59,451 [m=2]#? (Nielsen]) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? Sutherland Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? 
(Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380? [m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? [m=5] (Castryck) 5,511? [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional the Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
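As a quick, purely heuristic illustration of the [math]H \sim k_0 \log k_0[/math] relationship from the introduction (this snippet is my addition, not part of the original timeline), one can compare [math]k_0 \log k_0[/math] against a few of the confirmed [math](k_0, H)[/math] pairs from the table:

import math

# Heuristic comparison of k0*log(k0) against confirmed (k0, H) pairs taken
# from the table above; the true H(k0) comes from searching for narrow
# admissible k0-tuples, so agreement is only up to a modest constant factor.
pairs = [(341640, 4802222), (34429, 388284), (1783, 14950), (632, 4686), (105, 600)]
for k0, H in pairs:
    print(f"k0 = {k0:>7,}: H = {H:>9,}, k0*log(k0) = {k0 * math.log(k0):>11,.0f}")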
Defining parameters
Level: \( N \) = \( 15 = 3 \cdot 5 \)
Weight: \( k \) = \( 12 \)
Nonzero newspaces: \( 3 \)
Newforms: \( 6 \)
Sturm bound: \(192\)
Trace bound: \(1\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{12}(\Gamma_1(15))\).

                   Total   New   Old
Modular forms        96     68    28
Cusp forms           80     60    20
Eisenstein series    16      8     8

Decomposition of \(S_{12}^{\mathrm{new}}(\Gamma_1(15))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.

Label    \(\chi\)                  Newforms    Dimension   \(\chi\) degree
15.12.a  \(\chi_{15}(1, \cdot)\)   15.12.a.a   1           1
                                   15.12.a.b   2
                                   15.12.a.c   2
                                   15.12.a.d   3
15.12.b  \(\chi_{15}(4, \cdot)\)   15.12.b.a   12          1
15.12.e  \(\chi_{15}(2, \cdot)\)   15.12.e.a   40          2
Defining parameters
Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.ew (of order \(30\) and degree \(8\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 1800 \)
Character field: \(\Q(\zeta_{30})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).

                   Total   New   Old
Modular forms        64      0    64
Cusp forms            0      0     0
Eisenstein series    64      0    64

The following table gives the dimensions of subspaces with specified projective image type.

            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension     0         0         0         0
For $M \in \mathbb{Q}_+$ and $0 \leq \frac{g}{h} < 1$, $M,\frac{g}{h}$ denotes (the projective equivalence class of) the lattice\[\mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \]which we also like to represent by the $2 \times 2$ matrix\[\alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \]A subgroup $G$ of $GL_2(\mathbb{Q})$ is said to fix $M,\frac{g}{h}$ if \[ \alpha_{M,\frac{g}{h}}.G.\alpha_{M,\frac{g}{h}}^{-1} \subset SL_2(\mathbb{Z}) \] The full group of all elements fixing $M,\frac{g}{h}$ is the conjugate \[ \alpha_{M,\frac{g}{h}}^{-1}.SL_2(\mathbb{Z}).\alpha_{M,\frac{g}{h}} \] For a number lattice $N=N,0$ the elements of this group are all of the form \[ \begin{bmatrix} a & \frac{b}{N} \\ cN & d \end{bmatrix} \qquad \text{with} \qquad \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in SL_2(\mathbb{Z}) \] and the intersection with $SL_2(\mathbb{Z})$ (which is the group of all elements fixing the lattice $1=1,0$) is the congruence subgroup \[ \Gamma_0(N) = \{ \begin{bmatrix} a & b \\ cN & d \end{bmatrix}~|~ad-Nbc = 1 \} \] Conway argues that this is the real way to think of $\Gamma_0(N)$, as the joint stabilizer of the two lattices $N$ and $1$! The defining property of 24 tells us that $\Gamma_0(N)$ fixes more lattices. In fact, it fixes exactly the lattices $M,\frac{g}{h}$ such that \[ 1~|~M~|~\frac{N}{h^2} \quad \text{with} \quad h^2~|~N \quad \text{and} \quad h~|~24 \] Conway calls the sub-graph of the Big Picture on these lattices the snake of $(N|1)$. Here’s the $(60|1)$-snake (note that $60=2^2.3.5$ so $h=1$ or $h=2$ and edges corresponding to the prime $2$ are coloured red, those for $3$ green and for $5$ blue). \[ \xymatrix{& & & 15 \frac{1}{2} \ar@[red]@{-}[dd] & & \\ & & 5 \frac{1}{2} \ar@[red]@{-}[dd] & & & \\ & 15 \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 30 \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 60 \ar@[blue]@{-}[dd] \\ 5 \ar@[green]@{-}[ru] \ar@[blue]@{-}[dd] \ar@[red]@{-}[rr] & & 10 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 20 \ar@[green]@{-}[ru] \ar@[blue]@{-}[dd] & \\ & 3 \ar@[red]@{-}[rr] & & 6 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 12 \\ 1 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] & & 2 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 4 \ar@[green]@{-}[ru] & \\ & & & 3\frac{1}{2} & & \\ & & 1 \frac{1}{2} & & &} \] The sub-graph of lattices fixed by $\Gamma_0(N)$ for $h=1$, that is all number-lattices $M=M,0$ for $M$ a divisor of $N$, is called the thread of $(N|1)$. Here’s the $(60|1)$-thread \[ \xymatrix{ & 15 \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 30 \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 60 \ar@[blue]@{-}[dd] \\ 5 \ar@[green]@{-}[ru] \ar@[blue]@{-}[dd] \ar@[red]@{-}[rr] & & 10 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 20 \ar@[green]@{-}[ru] \ar@[blue]@{-}[dd] & \\ & 3 \ar@[red]@{-}[rr] & & 6 \ar@[red]@{-}[rr] & & 12 \\ 1 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] & & 2 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] & & 4 \ar@[green]@{-}[ru] & } \] If $N$ factors as $N = p_1^{e_1} p_2^{e_2} \dots p_k^{e_k}$ then the $(N|1)$-thread is the product of the $(p_i^{e_i}|1)$-threads and has a symmetry group of order $2^k$. It is generated by $k$ involutions, each one the reflexion in one $(p_i^{e_i}|1)$-thread and the identity on the other $(p_j^{e_j}|1)$-threads. In the $(60|1)$-thread these are the reflexions in the three mirrors of the figure. So, there is one involution for every divisor $e$ of $N$ such that $(e,\frac{N}{e})=1$.
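To make the divisibility conditions concrete, here is a small Python sketch (my addition, not from the original post) that enumerates the lattices $M,\frac{g}{h}$ of the $(N|1)$-snake; for $N=60$ it reproduces the sixteen vertices of the diagram above:

from math import gcd

def snake(N):
    # Lattices M,g/h fixed by Gamma_0(N): h | 24, h^2 | N, 1 | M | N/h^2,
    # with 0 <= g < h and g/h in lowest terms.
    out = []
    for h in [d for d in range(1, 25) if 24 % d == 0 and N % (d * d) == 0]:
        Mmax = N // (h * h)
        for M in [m for m in range(1, Mmax + 1) if Mmax % m == 0]:
            out.extend((M, g, h) for g in range(h) if gcd(g, h) == 1)
    return out

# For N = 60 only h = 1 and h = 2 occur, giving the twelve number-lattices
# of the (60|1)-thread plus the four extra vertices 1 1/2, 3 1/2, 5 1/2, 15 1/2.
print(sorted(snake(60)))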
For such an $e$ there are matrices, with $a,b,c,d \in \mathbb{Z}$, of the form \[ W_e = \begin{bmatrix} ae & b \\ cN & de \end{bmatrix} \quad \text{with} \quad ade^2-bcN=e \] Think of Bezout and use that $(e,\frac{N}{e})=1$. Such $W_e$ normalizes $\Gamma_0(N)$, that is, for any $A \in \Gamma_0(N)$ we have that $W_e.A.W_e^{-1} \in \Gamma_0(N)$. Also, the determinant of $W_e^2$ is equal to $e^2$, so we can write $W_e^2 = e A$ for some $A \in \Gamma_0(N)$. That is, the transformation $W_e$ (left-multiplication) sends any lattice in the thread or snake of $(N|1)$ to another such lattice (up to projective equivalence), and if we apply $W_e^2$ it fixes each such lattice (again, up to projective equivalence), so it is the desired reflexion corresponding to $e$. Consider the subgroup of $GL_2(\mathbb{Q})$ generated by $\Gamma_0(N)$ and some of these matrices $W_e,W_f,\dots$ and denote by $\Gamma_0(N)+e,f,\dots$ the quotient modulo positive scalar matrices, then \[ \Gamma_0(N) \qquad \text{is a normal subgroup of} \qquad \Gamma_0(N)+e,f,\dots \] with quotient isomorphic to some $(\mathbb{Z}/2\mathbb{Z})^l$, namely the subgroup generated by the involutions corresponding to $e,f,\dots$. More generally, consider the $(n|h)$-thread for number lattices $n=n,0$ and $h=h,0$ such that $h | n$ as the sub-graph on all number lattices $l=l,0$ such that $h | l | n$. If we denote with $\Gamma_0(n|h)$ the point-wise stabilizer of $n$ and $h$, then we have that \[ \Gamma_0(n|h) = \begin{bmatrix} h & 0 \\ 0 & 1 \end{bmatrix}^{-1}.\Gamma_0(\frac{n}{h}).\begin{bmatrix} h & 0 \\ 0 & 1 \end{bmatrix} \] and we can then denote with \[ \Gamma_0(n|h)+e,f,\dots \] the conjugate of the corresponding group $\Gamma_0(\frac{n}{h})+e,f,\dots$. If $h$ is the largest divisor of $24$ such that $h^2$ divides $N$, then Conway calls the spine of the $(N|1)$-snake the subgraph on all lattices of the snake whose distance from its periphery is exactly $\log(h)$. For $N=60$, $h=2$ and so the spine of the $(60|1)$-snake is the central piece connected with double black edges \[ \xymatrix{& & & 15 \frac{1}{2} \ar@[red]@{-}[dd] & & \\ & & 5 \frac{1}{2} \ar@[red]@{-}[dd] & & & \\ & 15 \ar@[red]@{-}[rr] \ar@[blue]@{-}[dd] & & 30 \ar@[red]@{-}[rr] \ar@[black]@{=}[dd] & & 60 \ar@[blue]@{-}[dd] \\ 5 \ar@[green]@{-}[ru] \ar@[blue]@{-}[dd] \ar@[red]@{-}[rr] & & 10 \ar@[black]@{=}[ru] \ar@[red]@{-}[rr] \ar@[black]@{=}[dd] & & 20 \ar@[green]@{-}[ru] \ar@[blue]@{-}[dd] & \\ & 3 \ar@[red]@{-}[rr] & & 6 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 12 \\ 1 \ar@[green]@{-}[ru] \ar@[red]@{-}[rr] & & 2 \ar@[black]@{=}[ru] \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 4 \ar@[green]@{-}[ru] & \\ & & & 3\frac{1}{2} & & \\ & & 1 \frac{1}{2} & & &} \] which is the $(30|2)$-thread. The upshot of all this is to have a visual proof of the Atkin-Lehner theorem which says that the full normalizer of $\Gamma_0(N)$ is the group $\Gamma_0(\frac{N}{h}|h)+$ (that is, adding all involutions) where $h$ is the largest divisor of $24$ for which $h^2|N$. Any element of this normalizer must take every lattice in the $(N|1)$-snake fixed by $\Gamma_0(N)$ to another such lattice. Thus it follows that it must take the snake to itself. Conversely, an element that takes the snake to itself must conjugate into itself the group of all matrices that fix every point of the snake, that is to say, must normalize $\Gamma_0(N)$.
But the elements that take the snake to itself are precisely those that take the spine to itself, and since this spine is just the $(\frac{N}{h}|h)$-thread, this group is just $\Gamma_0(\frac{N}{h}|h)+$.

Reference: J.H. Conway, “Understanding groups like $\Gamma_0(N)$”, in “Groups, Difference Sets, and the Monster”, Walter de Gruyter, Berlin-New York, 1996
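As a sanity check on the Bezout construction of the Atkin-Lehner matrices $W_e$ described above, the following Python sketch (my addition) builds such a matrix from a Bezout relation and verifies that its determinant equals $e$:

from math import gcd

def egcd(a, b):
    # Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

def atkin_lehner(N, e):
    # W_e = [[a*e, b], [c*N, d*e]] with det = a*d*e^2 - b*c*N = e, built from
    # a Bezout relation x*e + y*(N/e) = 1 (valid since gcd(e, N/e) = 1).
    f = N // e
    g, x, y = egcd(e, f)
    assert N % e == 0 and g == 1
    # Take a = x, b = -y, c = 1, d = 1: then a*d*e^2 - b*c*N = e*(x*e + y*f) = e.
    W = [[x * e, -y], [N, e]]
    assert W[0][0] * W[1][1] - W[0][1] * W[1][0] == e
    return W

print(atkin_lehner(60, 4))   # an Atkin-Lehner matrix W_4 for Gamma_0(60)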
A long while ago I promised to take you from the action by the modular group $\Gamma=PSL_2(\mathbb{Z})$ on the lattices at hyperdistance $n$ from the standard orthogonal lattice $L_1$ to the corresponding ‘monstrous’ Grothendieck dessin d’enfant. Speaking of dessins d’enfant, let me point you to the latest intriguing paper by Yuri I. Manin and Matilde Marcolli, ArXived a few days ago, Quantum Statistical Mechanics of the Absolute Galois Group, on how to build a quantum system for the absolute Galois group from dessins d’enfant (more on this, I promise, later). Where were we? We’ve seen natural one-to-one correspondences between (a) points on the projective line over $\mathbb{Z}/n\mathbb{Z}$, (b) lattices at hyperdistance $n$ from $L_1$, and (c) coset classes of the congruence subgroup $\Gamma_0(n)$ in $\Gamma$. How to get from there to a dessin d’enfant? The short answer is: it’s all in Ravi S. Kulkarni’s paper, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math 113 (1991) 1053-1135. It is a complete mystery to me why Tatitscheff, He and McKay don’t mention Kulkarni’s paper in “Cusps, congruence groups and monstrous dessins”. Because all they do (and much more) is in Kulkarni. I’ve blogged about Kulkarni’s paper years ago:
– In the Dedekind tessellation it was all about assigning special polygons to subgroups of finite index of $\Gamma$.
– In Modular quilts and cuboid tree diagram it went on to assign (multiple) cuboid trees to (a conjugacy class of) such a finite index subgroup.
– In Hyperbolic Mathieu polygons the story continued on a finite-to-one connection between special hyperbolic polygons and cuboid trees.
– In Farey codes it was shown how to encode such polygons by a Farey-sequence.
– In Generators of modular subgroups it was shown how to get generators of the finite index subgroups from this Farey sequence.
The modular group is a free product \[ \Gamma = C_2 \ast C_3 = \langle s,u~|~s^2=1=u^3 \rangle \] with lifts of $s$ and $u$ to $SL_2(\mathbb{Z})$ given by the matrices \[ S=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},~\qquad U= \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \] As a result, any permutation representation of $\Gamma$ on a set $E$ can be represented by a $2$-coloured graph (with black and white vertices) and edges corresponding to the elements of the set $E$. Each white vertex has two (or one) edges connected to it and every black vertex has three (or one). These edges are the elements of $E$ permuted by $s$ (for white vertices) and $u$ (for black ones), the order of the 3-cycle determined by going counterclockwise round the vertex. Clearly, if there’s just one edge connected to a vertex, it gives a fixed point (or 1-cycle) in the corresponding permutation. The ‘monstrous dessin’ for the congruence subgroup $\Gamma_0(n)$ is the picture one gets from the permutation $\Gamma$-action on the points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$, or equivalently, on the coset classes or on the lattices at hyperdistance $n$. Kulkarni’s paper (or the blogposts above) tells you how to get at this picture starting from a fundamental domain of $\Gamma_0(n)$ acting on the upper half-plane by Moebius transformations. Sage gives a nice image of this fundamental domain via the command FareySymbol(Gamma0(n)).fundamental_domain() Here’s the image for $n=6$: The boundary points (on the halflines through $0$ and $1$ and the $4$ half-circles) need to be identified, which is indicated by matching colours.
So the 2 halflines are identified, as are the two blue (and green) half-circles (in opposite direction). To get the dessin from this, let’s first look at the interior points. A white vertex is a point in the interior where two black and two white tiles meet; a black vertex corresponds to an interior point where three black and three white tiles meet. Points on the boundary where tiles meet are coloured red, and after identification two of these reds give one white or black vertex. Here’s the intermediate picture. The two top red points are identified, giving a white vertex, as do the two reds on the blue half-circles and the two reds on the green half-circles, because after identification two black and two white tiles meet there. This then gives us the ‘monstrous’ modular dessin for $n=6$ of the Tatitscheff, He and McKay paper: Let’s try a more difficult example: $n=12$. Sage gives us the fundamental domain, from which we get the intermediate picture, and after spotting the correct identifications, this gives us the ‘monstrous’ dessin for $\Gamma_0(12)$ from the THM-paper: In general there are several of these 2-coloured graphs giving the same permutation representation, so the obtained ‘monstrous dessin’ depends on the choice of fundamental domain. You’ll have noticed that the domain for $\Gamma_0(6)$ was symmetric, whereas the one Sage provides for $\Gamma_0(12)$ is not. This is caused by Sage using the Farey-code \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_1 & \frac{1}{5} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & 1} \] One of the nice results from Kulkarni’s paper is that for any $n$ there is a symmetric Farey-code, giving a perfectly symmetric fundamental domain for $\Gamma_0(n)$. For $n=12$ this symmetric code is \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & \frac{5}{6} \ar@{-}[r]_1 & 1} \] It would be nice to see whether using these symmetric Farey-codes gives other ‘monstrous dessins’ than in the THM-paper. It remains to identify the edges in the dessin with the lattices at hyperdistance $n$ from $L_1$. Using the tricks from the previous post it is quite easy to check that for any $n$ the monstrous dessin for $\Gamma_0(n)$ starts off with the lattices $L_{M,\frac{g}{h}} = M,\frac{g}{h}$ as below. Let’s do a sample computation showing that the action of $s$ on $L_n$ gives $L_{\frac{1}{n}}$: \[ L_n.s = \begin{bmatrix} n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} \] and then, as last time, to determine the class of the lattice spanned by the rows of this matrix we have to compute \[ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -n \end{bmatrix} \] which is class $L_{\frac{1}{n}}$. And similarly for the other edges.
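For readers who want to experiment, here is a small Python sketch (my addition, not the code behind the post) that computes the permutation action of $s$ and $u$ on $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$; the cycles of $s$ and $u$ are exactly the white and black vertices of the monstrous dessin:

from math import gcd
from itertools import product

def proj_points(n):
    # P^1(Z/nZ): pairs (a, b) with gcd(a, b, n) = 1, taken up to units.
    units = [u for u in range(n) if gcd(u, n) == 1]
    canon = lambda a, b: min((u * a % n, u * b % n) for u in units)
    pts = {canon(a, b) for a, b in product(range(n), repeat=2)
           if gcd(gcd(a, b), n) == 1}
    return sorted(pts), canon

def act(pt, M, n, canon):
    # Right action of M on the row vector (a, b), then canonicalize.
    a, b = pt
    return canon(a * M[0][0] + b * M[1][0], a * M[0][1] + b * M[1][1])

def cycles(pts, f):
    # Cycle decomposition of the permutation pt -> f(pt).
    seen, out = set(), []
    for p in pts:
        if p not in seen:
            cyc, q = [], p
            while q not in seen:
                seen.add(q); cyc.append(q); q = f(q)
            out.append(cyc)
    return out

n = 6
S = [[0, -1], [1, 0]]    # lift of s, order 2 in PSL_2(Z)
U = [[0, -1], [1, -1]]   # lift of u, order 3
pts, canon = proj_points(n)
white = cycles(pts, lambda p: act(p, S, n, canon))  # 2-cycles / fixed points
black = cycles(pts, lambda p: act(p, U, n, canon))  # 3-cycles / fixed points
print(len(pts), "edges;", len(white), "white and", len(black), "black vertices")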
Defining parameters
Level: \( N \) = \( 1 \)
Weight: \( k \) = \( 88 \)
Nonzero newspaces: \( 1 \)
Newforms: \( 1 \)
Sturm bound: \(7\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{88}(\Gamma_1(1))\).

                   Total   New   Old
Modular forms         8      8     0
Cusp forms            7      7     0
Eisenstein series     1      1     0

Decomposition of \(S_{88}^{\mathrm{new}}(\Gamma_1(1))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.

Label   \(\chi\)                 Newforms   Dimension   \(\chi\) degree
1.88.a  \(\chi_{1}(1, \cdot)\)   1.88.a.a   7           1
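These numbers can be cross-checked against the classical level-one dimension formula; a minimal sketch (my addition, using the standard textbook conventions, which may differ in minor ways from LMFDB's):

def dim_Mk_level1(k):
    # Classical dimension formula for M_k(SL_2(Z)), k even and non-negative:
    # floor(k/12) when k = 2 mod 12, otherwise floor(k/12) + 1.
    if k % 2 or k < 0:
        return 0
    return k // 12 if k % 12 == 2 else k // 12 + 1

k = 88
dim_M = dim_Mk_level1(k)   # 8, matching "Modular forms: 8" above
dim_S = dim_M - 1          # 7 cusp forms (one Eisenstein series at level 1)
sturm = k // 12            # Sturm bound floor(k/12) = 7, matching the table
print(dim_M, dim_S, sturm)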
I have the following problem: $$\begin{array}{ll} & \boldsymbol{x}^*(t) = \arg \min_{ \boldsymbol{x}}\text{ }g( \boldsymbol{x}) \\ \text{subject to} & \boldsymbol{A}(t)\boldsymbol{x }= \boldsymbol{B}(t)\\ & 0\leq x_i \leq x_{\max} , \quad i=1,2,\cdots,N\end{array} $$ with $g(\boldsymbol{x})$ being a strictly convex function and $\boldsymbol{x}\in \mathbb{R}^{N\times 1}$. $\boldsymbol{A}(t) \in \mathbb{R}^{M \times N}$ has full row rank with $N>M$. The elements of $\boldsymbol{A}(t)$ and $\boldsymbol{B}(t)$ are continuous (possibly smooth if needed) with respect to $t$, and the problem is feasible for every time $t$. The question now is: does strict (possibly strong if needed) convexity of $g(\boldsymbol{x})$ imply that $\boldsymbol{x}^*(t)$ is continuous? PS: I did ask a similar question previously, where I got a good start; however, I am not sure if it is enough yet. (Is the optimal solution of a convex problem continuous with respect to parameters?) I have now specified the problem a little more, and my goal is to be able to prove that it is so. Any references along with answers are greatly appreciated. Edit: Though I am not able to formalize it, my thoughts so far are that I somehow can state and use: 1) Strict convexity gives a unique minimizer $\boldsymbol{x}^*(t)$. 2) The inequality constraints restrict the solutions to a compact convex set. 3) The equality constraints define an affine solution space that changes continuously with $t$, which (or so I believe) should mean that the solutions $\boldsymbol{x}^*(t)$ change continuously as well. Possibly involving the implicit function theorem on the KKT conditions.
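Not an answer, but a numerical probe of the question can be illuminating (this sketch is my addition; A_of_t and B_of_t are made-up illustrative data, and SLSQP is just a convenient solver, nothing canonical): tracking $\boldsymbol{x}^*(t)$ along a grid of $t$ values for a strongly convex $g$, the solution path looks continuous.

import numpy as np
from scipy.optimize import minimize

# g(x) = ||x||^2 (strongly convex), A(t) x = B(t) with smoothly varying data,
# box constraints 0 <= x_i <= x_max. Hypothetical problem data for illustration.
N, x_max = 4, 2.0

def A_of_t(t):
    return np.array([[1.0, 1.0, 1.0, 1.0],
                     [1.0, np.cos(t), np.sin(t), 0.5]])

def B_of_t(t):
    return np.array([3.0, 1.5 + 0.5 * np.sin(t)])

def solve(t, x0):
    cons = {"type": "eq", "fun": lambda x: A_of_t(t) @ x - B_of_t(t)}
    res = minimize(lambda x: x @ x, x0, constraints=[cons],
                   bounds=[(0.0, x_max)] * N, method="SLSQP")
    return res.x

x0 = np.full(N, 0.5)
for t in np.linspace(0.0, 1.0, 6):
    x0 = solve(t, x0)   # warm-start from the previous solution
    print(f"t = {t:.1f}: x* = {np.round(x0, 4)}")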
In terms of interpretation, an $MA$ model simply means that the time series is a function of the error from previous periods. You might find it informative to consider plotting simple $AR(1)$ models alongside various $ARMA(1,1)$ models to develop a more intuitive understanding. For instance, the $AR(1)$ model (chosen as it is common for financial time series) $$x_{...

SOLUTION: Let $r_t$ be the log-return at time $t$, and $\hat{r}_t$ be the predicted log-return from the regression model. Initialize $loglik(0:T)=0$, $\epsilon_1=0$, $\sigma_1 = 0$, $\mu=U(0,1)*0.0001$, $\phi=U(0,1)*0.01$, $\alpha_0=U(0,1)*0.00002$, $\alpha_1=U(0,1)*0.01$, $\beta_1=0.9 + U(0,1)*0.01$, $B=10,000$. For $b$ = 1 to $B$: $\quad$ For $t$ = 2 to $T$: $ \...

The logic of your code is all right. However, the variance of the parameters is high because nobs=250 is relatively low. Increase nobs and your parameters will converge toward the parameters you specified eventually.

import statsmodels.api as sm
import numpy as np
# Parameters.
ar = np.array([.75, -.25])
ma = np.array([.65, .35])
# Simulate an ARMA ...

You want to compute the BIC (Bayesian Information Criterion) or the AIC (Akaike Information Criterion) for different (p,q) pairs. Here is a wikipedia article with information on how to interpret those criteria in practice. Here is a mathworks page with detailed instructions on how to perform this task within Matlab. Keep in mind that in practice and ...

Even though it's a straightforward extension, it took me a while (a year? yikes!); but now you can easily incorporate Bayesian ar(1) (or more generally, Bayesian regression) in joint estimation by using designmatrix = "ar(1)" as an argument to svsample. It's not well documented yet (except in the help files), but I nevertheless hope it is easy to use. From the ...

There is no guarantee that the optimization method always converges! In an introduction the author of the package recommends using the "hybrid" solver, which starts out with "solnp" and goes through the other solvers if it doesn't converge. According to him, this should at least guarantee convergence in 90% of the cases. http://unstarched.net/r-...

The Autocorrelation Function (ACF) $\rho_k=Corr(y_t,y_{t-k})$ expresses the strength of linear dependency between the $k$-lagged realizations and hence represents an important tool for identification of the lag orders of ARMA and GARCH processes: $$\rho_k:=Corr(y_t,y_{t-k})=\frac{\gamma_k}{\gamma_0},\,\,k\in\mathbb{Z}$$ where the Autocovariance $\gamma_k$ is ...

Use the ACF and PACF to determine the AR and MA parts. Use the position of the last significant value in the two plots as the AR and MA orders respectively. Or use autoarima, if Matlab has one, with AIC or BIC coefficients. AIC returns a more general model (all possible values) while BIC results in a more constrained one (simpler). Another possible solution is the EACF of Tsay and Tiao (1984), where the idea is that if the order of the AR process is known the MA can be inferred. The output is a table where the position of the top-left 0 is taken to be the order of the ARMA(p, q) model.

1. Is it correct that the coefficients are now different from the coefficients of the arima output? It seems right that the ARMA coefficients are different. Indeed, in the second model, the GARCH component will capture fluctuations that the ARMA component will not have to capture, resulting in different ARMA parameter estimates. 2. This is the acf of the ...
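For reference, the truncated statsmodels snippet above might be completed along these lines (my reconstruction, not the original answer's code):

import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)

# True parameters, same values as in the truncated snippet above.
ar = np.array([.75, -.25])
ma = np.array([.65, .35])

# arma_generate_sample wants the full lag polynomials, with the AR part
# negated: phi(B) = 1 - .75 B + .25 B^2, theta(B) = 1 + .65 B + .35 B^2.
y = arma_generate_sample(np.r_[1, -ar], np.r_[1, ma], nsample=10_000)

# With nsample large the estimates should sit close to the true values,
# which is the point the answer above makes about nobs=250 being low.
fit = ARIMA(y, order=(2, 0, 2)).fit()
print(fit.params)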
It is just a problem of how you pass time series to yuima. Just one more thing: if you want to estimate a CARMA model driven by a Brownian motion, it is better to work with log-prices instead of prices. Indeed, in the considered model, we have a non-zero probability assigned to negative values of the process. Try the following code:

require(yuima)
library(xts)
...

There is no particular issue with your polynomials. However, if you really want them to both start with a 1, you can apply a change of variable by defining: \begin{equation}Y_t = -\frac{1}{4}X_t\end{equation} Then your polynomials $\Phi_y(B)$ and $\Theta(B)$, such that \begin{equation}\Phi_y(B)Y_t=\Theta(B)Z_t,\end{equation} will both start with a $1$. It ...

To get it out of the way: you cannot ask 'what model is better' without a reference to what its use is. Do you want to test for the mean or the AR parameter to trade it? Do you want to calculate VaR? Do you want to forecast volatility over one period? Or over 1000 periods? Or higher moments? Do you want to simulate volatility over one period? Or longer? For ...

Normally distributed, and that's why the first two moments are sufficient to infer their statistical significance. Proofs are rather technical (and sometimes are not specific to time-series models) and mainly depend on:
- the estimation method employed (QMLE, Least Squares, Moment, Whittle...)
- the parameter space
- moment restrictions...
These proofs ...

Some models do use ln(r_t), like the Black–Derman–Toy and Black–Karasinski models, mainly to avoid negative interest rates in low rates / high volatility environments through the use of the log-normal distribution. Negative rates can wreak havoc in option premiums, for example. They are interest rates indeed, that we call short rates, not yields on treasuries. ...

It is a classical misunderstanding; your model is right. You always have an ACF equal to one at lag zero (and not at lag one), since if there is no lag acf = covariance(x, x_lag0) / variance(x) = variance(x) / variance(x) = 1. So you need to pay attention to the x-axis; some software displays the ACF starting at lag zero and some others from lag 1 (which makes better ...

You are confusing the cond. mean process and the cond. variance process: the autocorrelation plot of the squared returns gives you information about the cond. variance process (not the ARMA part!). So you can't draw conclusions on the mean process. The squared returns are almost always autocorrelated since volatility is known to be time-varying. You need to ...

You might be interested in this ARTICLE (published in Quantitative Finance 2016) and citations therein. The authors consider different distributions to model tails in financial time series and in particular focus on EVT/GPDs. GPDs are used to specifically model tails and hence are fitted after some threshold that separates the tail from the central region ...

For ARIMA(2,1,4) you would need to use the ARIMA model, as described here. You would call it with something like this:

ARIMA(endog, order = (2, 1, 4))

where endog is your endogenous variable and the tuple given for order follows the convention AR, Differencing, MA. For ARMA(1, 1) you could just use ARMA(endog, order = (1, 1)).

The code is correct regarding your question (and only for an AR(1)); you made a mistake because the last observation of the data set is $t-1$ and not $t$, since you are forecasting the point at time $t$. In the code: MF(i,1) is the current point forecast ($t$), and the lag-one observation (MF(i-1,1), which is $t-1$) is correctly related to the AR part. ...
To improve your model I would recommend you to take into account the intraday periodicity, i.e. the fluctuation of the exchange rate over the daily cycle. For instance we observe a strong increase in volatility around 07:00 GMT (opening of the European markets). The following image, taken from Andersen, T. G., & Bollerslev, T. (1997), illustrates it. It ...

Your summary statistics are really strange (median = 0.0000, max 431 ...). Compute the returns as follows: $\log(p_{t+1}/p_{t}) \times 100$ and run the ARMA-GARCH on that. Edit: As explanation: you need to use a stationary time series; see @Neeraj's comment.

A good rule of thumb is to "test" your models by doing forecasts and to choose the best one. Note however that your choice will be based upon the loss function you selected. If you are concerned about outliers you should (for instance) use Median Squared Errors; if you aren't, you can use Mean Squared Errors. In your particular case the Information Criteria ...
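Putting two recurring suggestions from the answers above together (compute returns as $\log(p_{t+1}/p_t) \times 100$, then choose $(p,q)$ by information criterion), a minimal sketch might look like this (my addition; prices is a made-up illustrative series, not real data):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def pick_order(y, max_p=3, max_q=3, criterion="bic"):
    # Grid-search (p, q) by information criterion. BIC penalizes extra
    # parameters more heavily than AIC, so it favors simpler models.
    best = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            try:
                fit = ARIMA(y, order=(p, 0, q)).fit()
            except Exception:
                continue  # some orders may fail to converge
            score = fit.bic if criterion == "bic" else fit.aic
            if best is None or score < best[0]:
                best = (score, p, q)
    return best

np.random.seed(1)
prices = np.cumprod(1 + 0.01 * np.random.randn(1000)) * 100  # toy price path
r = 100 * np.diff(np.log(prices))                            # log-returns, in percent
print(pick_order(r))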
Polarization Mixing Due to Feed Rotation

Revision as of 15:04, 2 July 2017

Contents
1 Explanation of Polarization Mixing
2 Absolute vs. Relative Angle of Rotation
3 Effect of an X - Y Delay
4 Another Look at X-Y Delays
5 Effect of Polarization Mixing on Observations

Explanation of Polarization Mixing

The newer 2.1-m antennas [Ants 1-8 and 12] have AzEl (azimuth-elevation) mounts (also referred to as AltAz; the terms Altitude and Elevation are used synonymously), which means that their crossed linear feeds have a constant angle relative to the horizon (the axis of rotation being at the zenith). The older 2.1-m antennas [Ants 9-11 and 13], and the 27-m antenna [Ant 14], have Equatorial mounts, which means that their crossed linear feeds have a constant angle with respect to the celestial equator, the axis of rotation being at the north celestial pole. Thus, the celestial coordinate system is tilted by the local co-latitude (complement of the latitude). This tilt results in a relative feed rotation between the 27-m antenna and the AzEl mounts, but not between the 27-m and the older equatorial mounts. This angle is called the "parallactic angle," and is given by: where is the site latitude, is the Azimuth angle [0 north], and is the Elevation angle [0 on horizon]. This function obviously changes with position on the sky, and as we follow a celestial source (e.g. the Sun) across the sky this rotation angle is continuously changing in a surprisingly complex manner, as shown in Figure 1. Note that at zero hour angle for declinations less than the local latitude (37.233 degrees at OVRO), but is at higher declinations. The crossed linear dipole feeds on all antennas are oriented with the X-feed as shown in Figure 2, at 45 degrees from the horizontal, when the antenna is pointed at 0 hour angle. This is the view as seen looking down at the feed from the dish side, although since the feeds are at the prime focus this is the same as the view projected onto the sky. At other positions, the feeds on the AzEl antennas experience a rotation by angle relative to the equatorial antennas. Because of this rotation, the normal polarization products XX, XY, YX and YY on baselines with dissimilar antennas (one AzEl and the other equatorial) become mixed. The effect of this admixture can be written by the use of Jones matrices (see Hamaker, Bregman & Sault (1996) for a complete description).
Consider antenna A whose feed orientation is rotated by , cross-correlated with antenna B with unrotated feed. The corresponding Jones matrices, acting on signal vector are: and the cross-correlation is found by taking the outer product, i.e. which relates the output polarization products to the input as where we have dropped the subscripts and complex conjugate notation for brevity. Of course, there are other effects such as unequal gains and cross-talk between feeds that are also at play, but for now we ignore those and focus only on the effect of this polarization mixing due to the parallactic angle.

Absolute vs. Relative Angle of Rotation

However, the above description fails when we consider a rotation on both antennas, so that In this case, performing the outer product gives: whereas intuitively we want something like: which becomes the identity matrix when , i.e. when the feeds on two antennas of a baseline are parallel. The difference seems to be that the earlier expression evaluates to components of X and Y in an absolute coordinate frame, whereas we are interested only in the difference in angle of the feeds in a relative coordinate frame. This choice no doubt has implications for measuring Stokes Q and U, but for solar data we are not concerned with linear polarization. One way to achieve this in the framework of Jones matrices is to form Mueller matrices from the outer-product of the rotation times the gain matrix: and then form an overall matrix where .

Effect of an X - Y Delay

Regardless of how the math is done, we expect that the result should be dependent on the difference in angle, , so as a practical solution let us simply replace with and proceed as in section 1. and the cross-correlation is found by taking the outer product, i.e. which relates the output polarization products to the input as Now consider that there is a "multi-band" delay on both antennas, and . Then (2) becomes: The result agrees with our intuition: This approach will be implemented, to see how well it does in correcting for the effects of differential feed rotation.

Another Look at X-Y Delays

Prior to doing the feed rotation correction, it is essential that any X-Y delays be measured and corrected. We have devised a calibration procedure in which we take data on a strong calibrator with the feeds parallel, then rotate the 27-m (antenna 14) feed so that they are perpendicular. For an unpolarized source, this results in signal on the XX and YY polarization channels in the first case, and on the XY and YX polarization channels in the second case. As a practical matter, this can be done on all antennas at once if a strong source is observed near 0 HA, ideally timed to start 20 min before 0 HA and completing 20 min after 0 HA. The source 2253+161 works well, as does 1229+006 (3C273). Two observations are needed:
- one with the 27-m feed unrotated (gives parallel-feed data for all dishes, if done near 0 HA). Gives strong signal in XX and YY channels. Example
- one with the 27-m feed rotated to -90 degrees (gives crossed-feed data for all dishes, if done near 0 HA). Gives strong signal in XY and YX channels. Example
Note that the feed should be rotated by -90, not 90, in order for the expressions below to be used correctly.

Background

Consider antenna-based phases on X polarization as and on Y polarization as , i.e. the Y phases are nominally the same as for X, except for a 90-degree rotation and a possible X-Y delay difference , here written as delay phase .
We are finding that this delay is a complicated function of frequency, so it is just as well to keep it in terms of phase. On a baseline , then, the four polarization terms become: We then examine the channel differences on baselines with antenna 14:

\begin{align}
\phi_{ij}(XY) - \phi_{ij}(XX) = \phi_j + d\phi_j + \pi/2 - \phi_i - \phi_j + \phi_i &= d\phi_j + \pi/2, \\
\phi_{ij}(YY) - \phi_{ij}(YX) = \phi_j + d\phi_j - \phi_i - d\phi_i - \phi_j + \phi_i + d\phi_i + \pi/2 &= d\phi_j + \pi/2, \\
\phi_{ij}(XX) - \phi_{ij}(YX) = \phi_j - \phi_i - \phi_j + \phi_i + d\phi_i + \pi/2 &= d\phi_i + \pi/2, \\
\phi_{ij}(XY) - \phi_{ij}(YY) = \phi_j + d\phi_j + \pi/2 - \phi_i - \phi_j - d\phi_j + \phi_i + d\phi_i &= d\phi_i + \pi/2,
\end{align}

Consequently, we can solve redundantly in two ways for the antenna-based delay phases: where we specifically use to emphasize that this quantity for all antennas should be the same value, because the measurements are all baselines with antenna 14. In practice, we can average the two measurements for each antenna for , and the 26 measurements for antenna 14 for , although care must be taken to do an appropriate average to take care of the phase ambiguity. One way to do this is to form unit vectors and average those, then find the phase of the average vector.

Applying the Measurements

Once we have these, we can apply corrections to each of the polarization channels, and then do the feed rotation correction. The corrections are done to data taken in a normal way, without rotating the 27-m feed, hence we expect and no phase rotation in the XY and YX measurements. This is confirmed by doing the above analysis on data taken with parallactic angle near , so that all polarization channels contain relatively strong signal. The application of the correction is: It is quite pleasing that this agrees perfectly with the analysis of the previous section, and the only difference is one of emphasis. Rather than using fixed delays and , here we are using the frequency-dependent delay-phase. Actually, there is another, not-so-trivial difference. Here I am considering making these phase corrections first, and then applying the feed-rotation correction, i.e.: where the primed quantities are the phase-corrected channel data. I have analyzed a set of observations and got the values for , but when I attempt to make the corrections to parallel and crossed data I find that these are the required corrections: where is an antenna-dependent value that is zero for antennas 1, 2, 3, 6, and 8, but is 1 for antennas 4, 5, and 7. The values for other antennas remain to be determined. I am not sure what this is, but probably it has to do with differences in the Tecom feed internal connections, which may be reversed for some antennas. When I attempt to make corrections to non-crossed data (i.e. data taken with normal observations), I find that I need to apply: So, the location of the has changed, but this could be due to the sign of the parallactic angle at the time the observations were taken, since the phase flips by 180 degrees at 0 HA. I tried applying the feed rotation correction, and it does not seem to work. Okay, I looked at feed_rot_simulation.py, and I now realize that for my tests, which were done with negative angle for the crossed-feed measurements, I should expect the phase difference between XX and XY to be , not zero. For a positive angle, the flip by would appear in YX. This means that I need a diff correction:

Effect of Polarization Mixing on Observations

See Powerpoint Presentation File:EOVSA Status Jan 2017.pptx The main effect that is noticeable in observations is that strong signals on the crossed hands (XY and YX) will appear when feeds are misaligned. When feeds are properly aligned, we expect to see only weak signals in the crossed hands, nominally zero, but in practice non-zero due to slight cross-talk between X and Y, which can be due to non-orthogonality or simply coupling between the separate channels.
Note that non-equal gains will not cause cross-talk, but can complicate efforts to untangle it. To make the observations, we observe calibrator sources at different declinations over a broad range of hour angle. The two sources observed so far are 3C84, at declination 41 degrees, and 3C273, at declination 2 degrees. We then plot the observed amplitude and phase for each of the observed polarization products [XX, XY, YX, YY]. For this demonstration, we use the baseline of Ant1-14, where Ant1 has the rotating feed and Ant14 has the non-rotating one (with respect to the celestial coordinate system). Figure 3 shows the 3C84 observation and simulation. The upper-left panel is the observed amplitude of the four polarization products during an observation from 08:30-15:00 UT, and the upper-right panel is the corresponding phase. The lower panels are the simulation amplitude and phase, where the simulation assumed constant polarization products with Amp[XX, XY, YX, YY] = [0.15, 0, 0, 0.23], and Phase[XX, XY, YX, YY] = [3.1, 0, 0, 2.4] (radians). A noise level of 0.015 rms was added. It is clear that the amplitude simulation works very well, but the phase does not have the correct character--the only deviation from constant phase is an abrupt 180-degree phase jump in XY and YX at 0 hour angle. Such phase jumps are seen in the observed data, but in addition there is a large amount of phase rotation in the observations that is not in the simulation. As a test, a simulation was done applying a phase rotation based on , as shown in Figure 4. Applying a rotation by the parallactic angle itself proved to be too small, and did not show the symmetric behavior around 0 hour angle, so the phase rotation applied in Fig. 4 is . It now looks about right, but there is a curvature in the simulation phase that is not really seen in the data. As a check, we repeated the exercise on 3C273, again applying a phase rotation of , with the result shown in Figure 5. As before, the amplitudes match quite well. For this different source, however, the measured phase variation is not symmetric about 0 hour angle, so the simulated phases do not match the observed ones. Finally, we instead apply a phase correction without the absolute value, i.e. just , with the result in Figure 6. Clearly this is "better," but still does not match the phase variation precisely.

Other Possible Reasons for the Observed Phase Variations

It has been suggested that there may be some secular change in phase not related to feed rotation, perhaps a delay error due to a baseline error, or because the Az and El axes do not cross at a common point. However, baseline errors would seem to be unlikely, because exactly the same character in the phase variations occurs on all of the AzEl antennas. And anyway a delay error is ruled out for another reason--the phase variation is not frequency dependent. Figures 7 & 8 illustrate these facts. Based on these tests, I conclude that the observed phase variations are indeed due to the relative feed rotation, but that something is missing in the above mathematical analysis or its application. One possibility is that there is some subtlety in the complex-conjugation of the Jones matrices, since in the above analysis they are entirely real. --Dgary (talk) 11:50, 22 October 2016 (UTC)

More On Axis Offset

Dr. Avinash Deshpande (Raman Research Institute, Bangalore -- Thanks to Dr.
Ananthakrishnan for contacting him) confirms that no phase rotation is expected for the parallactic correction, aside from the 180-degree phase jump at the meridian crossing. He suggests that a non-intersecting axis is more likely, and notes that my plots claiming no evidence of a delay are too hasty. It may be that the range of frequencies in Figure 8 is too small to see an evident frequency dependence that may nevertheless be there. He notes that the effect of non-intersecting axes is a phase rotation of where is the elevation angle, and is the offset distance. As a test, I applied this function, using cm (based on the apparent phase variation in the observed phases), and obtained the results in Figures 9 and 10. Although the observed phases show a bit more curvature than the simulation, this can be due to residual baseline errors, so I think it is fair to say this is a promising result. We can prove this very shortly, since the feed rotator on the 27-m antenna is soon to be working (I hope). The prediction is that rotating the 27-m feed to keep it parallel to the 2.1-m feeds on these antennas will correct the amplitudes, but the phases will still show the same behavior (since they are due to a different cause), and also that using a wider range of frequencies (which we can do, especially now that the high-frequency receiver is available) will show a frequency dependence in the amount of phase variation. --Dgary (talk) 04:55, 8 November 2016 (UTC)

Further update

On 2016 Nov 13, new observations of 3C84 were taken, and the correction for the axis offset (d = 15.2 cm) was applied, as shown in Figure 11 (at left). It appears that this correction works well, and that there is a residual baseline error on each of the antennas due to the fact that they were originally determined without the axis-offset correction. --Dgary (talk) 14:20, 15 November 2016 (UTC)
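For readers wanting to reproduce the qualitative behavior discussed above, here is a minimal numerical sketch (my own addition, not the project's feed_rot_simulation.py, and the parallactic-angle expression is the standard hour-angle/declination form, which may differ from the azimuth/elevation form used on this page):

import numpy as np

def parallactic_angle(H, dec, lat):
    # Standard formula in terms of hour angle H, declination dec, latitude lat.
    # At H = 0 this gives 0 for dec < lat and 180 deg for dec > lat, matching
    # the behavior noted in the Explanation section above.
    return np.arctan2(np.sin(H),
                      np.tan(lat) * np.cos(dec) - np.sin(dec) * np.cos(H))

def mixing_matrix(chi):
    # Jones rotation for the AzEl feed; the equatorial (27-m) feed is taken
    # as unrotated. The outer (Kronecker) product maps the true products
    # (XX, XY, YX, YY) to the measured ones, as in section 1 above.
    R = np.array([[np.cos(chi), np.sin(chi)],
                  [-np.sin(chi), np.cos(chi)]])
    return np.kron(R, np.eye(2))

lat = np.radians(37.233)   # OVRO latitude quoted above
dec = np.radians(41.0)     # 3C84
true = np.array([0.15, 0.0, 0.0, 0.23])   # amplitudes assumed in Figure 3
for H_hours in (-4, -2, 0, 2, 4):
    chi = parallactic_angle(np.radians(15 * H_hours), dec, lat)
    print(f"HA {H_hours:+d} h: chi = {np.degrees(chi):6.1f} deg,",
          np.round(mixing_matrix(chi) @ true, 3))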
Sorry if the question is trivial - are there closed form expressions or good approximations for the sum of a symmetric function taken over all integer compositions (into a given number of parts) of a number? More precisely, I'm interested in: $$ S(n,k) = \sum_{a_1+ \cdots +a_k = n, \ \ a_i \geq 1} \phi_k(a_1,\dots,a_k) $$ where $\phi_k = \prod_i a_i^p$, e.g. for $p=-2$, but I'm curious even about $p=1$. I realize that I can bound this (using the AM-GM inequality) by replacing all terms by the most (un)balanced composition, but this seems quite weak as a bound. EDITED: changed "partition" to "composition"
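Not an answer, but a brute-force sketch (my addition) is handy for experimenting. It also checks the closed form that does exist for $p=1$: since a single part has generating function $x/(1-x)^2$, we get $S(n,k) = [x^n]\,(x/(1-x)^2)^k = \binom{n+k-1}{2k-1}$.

from itertools import product
from math import comb

def S(n, k, p):
    # Brute force over all compositions a_1 + ... + a_k = n with a_i >= 1:
    # choose the first k-1 parts freely, the last part is forced.
    total = 0
    for a in product(range(1, n - k + 2), repeat=k - 1):
        last = n - sum(a)
        if last >= 1:
            term = last ** p
            for ai in a:
                term *= ai ** p
            total += term
    return total

for n, k in [(6, 3), (10, 4), (12, 5)]:
    assert S(n, k, 1) == comb(n + k - 1, 2 * k - 1)  # p = 1 closed form
    print(n, k, S(n, k, 1), S(n, k, -2))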
It seems like what you're having trouble understanding is about differential amplifiers and negative feedback in general rather than this specific circuit, so let's generalize this. The first thing we need to do is identify the feedback going from the output to the input, and then consider the circuit without it. I've highlighted the feedback network here: I'm also going to ignore C7, the compensation capacitor, which is very important but can be thought about later/separately. I'm not looking to do the full analysis here, but the small signal resistance looking into the inputs is about \$\beta R1\$, the output resistance is about 15k (and is proportional to the Early voltage of Q4), and the differential gain \$A_{diff} \approx 15\cdot\beta\$ (also going to ignore the common-mode gain for now). So then we can replace this with an equivalent looking like this: simulate this circuit – Schematic created using CircuitLab So what's the problem with this amplifier? It has a very high gain of about 1500 and a pretty high input resistance of 150k. The problem is that its properties are very dependent on the transistors. A transistor with a β=200 when cold might have a β=50 when hot. A batch of transistors will have large variations in β. You wouldn't want, for example, your speakers to be four times louder on a hot day than on a cold day. 1500 (or 3000, or 750, depending on β) is also a lot of gain. Also the output resistance is kind of high at 15k. Now that we have a simpler model, let's add the feedback loop back in. I'm going to assume that at the frequency we're interested in, C5 is an open circuit and C6 is a short circuit. simulate this circuit All we added here is a voltage divider. Since Rin=150k is much larger than the R7 it sits in parallel with, we can ignore it. So, $$V_-=V_o \frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}}$$ Then $$A_{diff}(V_+-V_-) = A_{diff}(V_{in} - V_o\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}})$$ The output current \$I_o\$ is $$I_o = \frac{V_o}{R5+R7}$$ $$V_o = A_{diff}(V_+-V_-)-R_oI_o$$ so finally we have $$V_o = A_{diff}(V_{in} - V_o\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}})-V_o\frac{R_o}{R5+R7}$$ Then move Vo to the left side, $$V_o + A_{diff}V_o\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}} + V_o\frac{R_o}{R5+R7} = A_{diff}V_{in}$$ and solve for Vo $$V_o = \frac{A_{diff}V_{in}}{(1 + \frac{R_o}{R5+R7} + A_{diff}\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}})}$$ But the \$A_{diff}\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}}\$ term (with \$A_{diff}\$ around 1500) is really big compared to 1, and compared to \$\frac{R_o}{\mathit{R5}+\mathit{R7}}\$ too. So let's ignore those terms. $$V_o = \frac{A_{diff}V_{in}}{A_{diff}\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}}}=V_{in}\frac{\mathit{R5}+\mathit{R7}}{\mathit{R7}} = V_{in}(1+\frac{\mathit{R5}}{\mathit{R7}})$$ This doesn't contain Adiff at all! Remember Adiff was heavily dependent on beta, which would have caused the amplifier to have very inconsistent properties. So by introducing feedback we've made the gain dependent almost exclusively on the value of some resistors, which can be very consistent. Now I only talked about gain here, but the same applies to distortion. Consider if the amplifier had an open-loop gain of 1000 for Vo=0V and 2000 for Vo=5V. A signal large enough to go through both points would be distorted, as the parts of the waveform near Vo=5V would be amplified twice as much as those near 0V. With the negative feedback, the closed-loop gain would change very little as Vo went above 5V; it would still be approximately 1+R5/R7.
The factor by which this is reduced is how much larger \$A_{diff}\frac{\mathit{R7}}{\mathit{R5}+\mathit{R7}}\$ was than 1 when we decided to ignore the 1 term (and the output resistance term though we can come back to that). This is the open loop gain divided by the closed loop gain.
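To make that concrete, here is a quick numeric check (my addition; the R5 and R7 values are made-up illustrative numbers, not taken from the original schematic):

# Exact closed-loop expression derived above vs. the 1 + R5/R7 approximation.
A_diff, R_o = 1500.0, 15e3
R5, R7 = 10e3, 1e3          # hypothetical feedback resistor values

beta_fb = R7 / (R5 + R7)    # feedback fraction
exact = A_diff / (1 + R_o / (R5 + R7) + A_diff * beta_fb)
approx = 1 + R5 / R7

print(f"exact gain  = {exact:.2f}")    # about 10.8
print(f"approx gain = {approx:.2f}")   # 11.00
# Loop gain A_diff * beta_fb ~ 136: the factor by which gain variations
# and distortion are suppressed, i.e. open-loop gain / closed-loop gain.
print(f"loop gain   = {A_diff * beta_fb:.1f}")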
In order to apply the Fresnel equations, the field components have to be resolved into components where either the electric field or the magnetic field is parallel to the plane of reflection. The geometry of this, with the wave vector direction \( \kcap \) and the electric and magnetic field phasors perpendicular to that direction, is sketched in fig. 1. If the incident wave is a plane wave, or equivalently a far field spherical wave, it will have the form \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:20} \BH = \inv{\mu_0} \kcap \cross \BE, \end{equation} with the field directions and wave vector directions satisfying \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:60} \Ecap \cross \Hcap = \kcap \end{equation} \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:80} \Ecap \cdot \kcap = 0 \end{equation} \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:100} \Hcap \cdot \kcap = 0. \end{equation} The key to resolving the fields into components parallel to the plane of reflection lies in the observation that the cross product of the plane normal \( \ncap \) and the incident wave vector direction \( \kcap \) lies in that plane. With \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:140} \pcap = \frac{\kcap \cross \ncap}{\Abs{\kcap \cross \ncap}} \end{equation} \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:160} \qcap = \kcap \cross \pcap, \end{equation} the field directions can be resolved into components \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:200} \BE = \lr{ \BE \cdot \pcap } \pcap + \lr{ \BE \cdot \qcap } \qcap = E_\parallel \pcap + E_\perp \qcap \end{equation} \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:220} \BH = \lr{ \BH \cdot \pcap } \pcap + \lr{ \BH \cdot \qcap } \qcap = H_\parallel \pcap + H_\perp \qcap. \end{equation} This subdivides the fields into two pairs, one with the electric field parallel to the reflection plane \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:240} \begin{aligned} \BE_1 &= \lr{ \BE \cdot \pcap } \pcap = E_\parallel \pcap \\ \BH_1 &= \lr{ \BH \cdot \qcap } \qcap = H_\perp \qcap, \end{aligned} \end{equation} and one with the magnetic field parallel to the reflection plane \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:260} \begin{aligned} \BH_2 &= \lr{ \BH \cdot \pcap } \pcap = H_\parallel \pcap \\ \BE_2 &= \lr{ \BE \cdot \qcap } \qcap = E_\perp \qcap. \end{aligned} \end{equation} This is most of what we need to proceed with the reflection and transmission analysis. The only task remaining is to determine the reflection angle. Using a pencil with the tip on the table I was able to convince myself by observation that there is always a normal plane of incidence regardless of any oblique angle that the ray hits the reflecting surface. This was, for some reason, not intuitively obvious to me. Having done that, the geometry must be reduced to what is sketched in fig. 2. Once \( \pcap \) has been determined, regardless of its orientation in the reflection plane, the component of \( \kcap \) that is normal, directed towards, the plane of reflection is \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:280} \kcap - \lr{ \kcap \cdot \pcap } \pcap, \end{equation} with (squared) length \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:300} \begin{aligned} \lr{ \kcap - \lr{ \kcap \cdot \pcap } \pcap }^2 &= 1 + \lr{ \kcap \cdot \pcap }^2 - 2 \lr{ \kcap \cdot \pcap }^2 \\ &= 1 - \lr{ \kcap \cdot \pcap }^2.
\end{aligned} \end{equation} The angle of incidence, relative to the normal to the reflection plane, follows from \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:320} \begin{aligned} \cos\theta &= \kcap \cdot \frac{ \kcap – \lr{ \kcap \cdot \pcap } \pcap }{ \sqrt{ 1 – \lr{ \kcap \cdot \pcap }^2 } } \\ &= \sqrt{ 1 – \lr{ \kcap \cdot \pcap }^2 }, \end{aligned} \end{equation} Expanding the dot product above gives \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:360} \begin{aligned} \kcap \cdot \pcap’ &= \kcap \cdot \lr{ \pcap \cross \ncap } \\ &= \frac{1}{\Abs{\kcap \cross \ncap} } \kcap \cdot \lr{ \lr{\kcap \cross \ncap} \cross \ncap }, \end{aligned} \end{equation} where \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:380} \begin{aligned} \kcap \cdot \lr{ \lr{\kcap \cross \ncap} \cross \ncap } &= k_r \epsilon_{r s t} \lr{\kcap \cross \ncap}_s n_t \\ &= k_r \epsilon_{r s t} \epsilon_{s a b} k_a n_b n_t \\ &= -k_r \delta_{r t}^{[a b]} k_a n_b n_t \\ &= -k_r n_t \lr{ k_r n_t – k_t n_r } \\ &= -1 + \lr{ \kcap \cdot \ncap}^2. \end{aligned} \end{equation} That gives \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:400} \begin{aligned} \kcap \cdot \pcap’ &= \frac{-1 + \lr{ \kcap \cdot \ncap}^2}{\sqrt{1 – \lr{ \kcap \cdot \ncap}^2} } \\ &= -\sqrt{1 – \lr{ \kcap \cdot \ncap}^2}, \end{aligned} \end{equation} or \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:420} \begin{aligned} \cos\theta &= \sqrt{ 1 – \lr{-\sqrt{1 – \lr{ \kcap \cdot \ncap}^2}}^2 } \\ &= \sqrt{ \lr{ \kcap \cdot \ncap}^2 } \\ &= \kcap \cdot \ncap. \end{aligned} \end{equation} This surprisingly simple result makes so much sense, it is an awful admission of stupidity that I went through all the vector algebra to get it instead of just writing it down directly. The end result is the reflection angle is given by \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:340} \boxed{ \theta = \cos^{-1} \kcap \cdot \ncap, } \end{equation} where the reflection plane normal should off the back surface to get the sign right. The only detail left is the vector direction of the reflected ray (as well as the direction for the transmitted ray if that is of interest). The reflected ray direction flips the sign of the normal component of the ray \begin{equation}\label{eqn:resolvingFieldsIncidentOnPlane:440} \begin{aligned} \kcap’ &= -\lr{\kcap \cdot \ncap} \ncap + \lr{ \kcap \wedge \ncap} \ncap \\ &= -\lr{\kcap \cdot \ncap} \ncap + \kcap – \lr{ \ncap \kcap} \cdot \ncap \\ &= \kcap -2 \lr{\kcap \cdot \ncap} \ncap. \end{aligned} \end{equation} Here the sign of the normal doesn’t matter since it only occurs quadratically. This now supplies everything needed for the application of the Fresnel equations to determine the reflected ray characteristics of an arbitrarily polarized incident field.
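For the numerically inclined, here is a small sketch of the geometry above (my own illustration in NumPy, not part of the original derivation): it builds \( \pcap \) and \( \qcap \), resolves a field into parallel and perpendicular components, and computes the incidence angle and reflected direction.

import numpy as np

def resolve_and_reflect(k, n, E):
    # Given unit propagation direction k, unit plane normal n, and a field
    # vector E (with E . k = 0), return the in-plane basis (p, q), the
    # parallel/perpendicular field components, the incidence angle, and
    # the reflected propagation direction.
    k, n = k / np.linalg.norm(k), n / np.linalg.norm(n)
    p = np.cross(k, n)
    p /= np.linalg.norm(p)           # lies in the reflecting plane
    q = np.cross(k, p)               # completes the (p, q, k) triad
    E_par, E_perp = E @ p, E @ q     # E = E_par * p + E_perp * q
    theta = np.arccos(abs(k @ n))    # incidence angle from the normal
    k_refl = k - 2 * (k @ n) * n     # flip the normal component of k
    return p, q, E_par, E_perp, theta, k_refl

# Example: 45-degree incidence onto the z = 0 plane.
k = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
n = np.array([0.0, 0.0, 1.0])
E = np.array([0.0, 1.0, 0.0])        # any field direction with E . k = 0
p, q, E_par, E_perp, theta, k_refl = resolve_and_reflect(k, n, E)
print(np.degrees(theta), k_refl)     # 45.0 [0.7071... 0. 0.7071...]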
No, your definition isn't very useful. A $\gamma$-approximation algorithm is an algorithm that produces a solution which is not too bad compared to the optimal solution. In contrast, under your definition you are guaranteed to produce a solution which is somewhat worse than the optimal one. Let's take as an example the maximum clique problem. Here is a $\gamma$-approximation algorithm for maximum clique under your definition: output an empty clique. This works for any $\gamma < 1$. I suspect that what you intended to capture was that $\gamma$ is the "tight" approximation ratio of the algorithm. That is, you want $\gamma$ to be the largest (supremal) value such that the solution $S$ produced by the algorithm always satisfies $|S| \geq \gamma |OPT|$. I just gave one definition of the tight approximation ratio, but here is another (equivalent) one. We say that $\gamma$ is the tight approximation ratio of the algorithm if the algorithm is a $\gamma$-approximation (under the usual definition), and for every $\epsilon > 0$ there is an instance on which the solution satisfies $|S| \leq (\gamma + \epsilon)|OPT|$. It's important here to include both parts of the definition.
Let me offer one reason and one misconception as an answer to your question. The main reason that it is easier to write (seemingly) correct mathematical proofs is that they are written at a very high level. Suppose that you could write a program like this:

function MaximumWindow(A, n, w):
    using a sliding window, calculate (in O(n)) the sums of all ...

(I am probably risking a few downvotes here, as I have no time/interest to make this a proper answer, but I find the text quoted (and the rest of the article cited) below to be quite insightful, also considering they are written by a well-known mathematician. Perhaps I can improve the answer later.) The idea, which I suppose isn't particularly distinct from ...

Allow me to start by quoting E. W. Dijkstra: "Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians." (from EWD498) Although what Dijkstra meant with 'programming' differs quite a bit from the current usage, there is still some merit in this quote. The other answers have ...

Lamport provides some ground for disagreement on the prevalence of errors in proofs in How to Write a Proof (pages 8-9): "Some twenty years ago, I decided to write a proof of the Schroeder-Bernstein theorem for an introductory mathematics class. The simplest proof I could find was in Kelley's classic general topology text. Since Kelley was writing for a ..."

Direct answer to the question: yes, there are esoteric and highly impractical PLs based on $\mu$-recursive functions (think Whitespace), but no practical programming language is based on $\mu$-recursive functions, for valid reasons. General recursive (i.e., $\mu$-recursive) functions are significantly less expressive than lambda calculi. Thus, they make a ...

"Why was the data considered to be a discrete mathematical entity rather than a continuous one?" This was not a choice; it is theoretically and practically impossible to represent continuous, concrete values in a digital computer, or actually in any kind of calculation. Note that "discrete" does not mean "integer" or something like that. "Discrete" ...

One big difference is that programs typically are written to operate on inputs, whereas mathematical proofs generally start from a set of axioms and prior-known theorems. Sometimes you have to cover multiple corner cases to get a sufficiently general proof, but the cases and their resolution are explicitly enumerated and the scope of the result is implicitly ...

Fundamentally, a logic consists of two things. Syntax is a set of rules that determine what is and is not a formula. Semantics is a set of rules that determine which formulae are "true" and which are "false". To a model theorist, this is expressed by relating formulas to the mathematical structures that they're true in; to a proof theorist, truth corresponds ...

Computers represent a piece of data as a finite number of bits (zeros and ones), and the set of all finite bit strings is discrete. You can only work with, say, real numbers if you find some finite representation for them. For example, you can say "this data corresponds to the number $\pi$", but you cannot store all digits of $\pi$ in a computer. Hence, ...
They say the problem with computers is that they do exactly what you tell them. I think this might be one of the many reasons. Notice that, with a computer program, the writer (you) is smart but the reader (CPU) is dumb. But with a mathematical proof, the writer (you) is smart and the reader (reviewer) is also smart. This means you can never afford to get ...

One issue that I think was not addressed in Yuval's answer is that it seems you are comparing different animals. Saying "the code is correct" is a semantic statement; you mean to say that the object described by your code satisfies certain properties, e.g. for every input $n$ it computes $n!$. This is indeed a hard task, and to answer it, one has to look ...

While fields such as computer science, mathematics and physics are relatively well-organized, logic has a chaotic history. Its organization is really confusing, so I think it's important to read some history to understand the dense structure of the field. The path you should choose will depend on your background and aims. What is a logic? The traditional ...

So, there are many fields of math that are relevant to the science of CS, but for programming specifically: Graph theory: this is the big one. Graphs and trees are everywhere. Networks, maps, paths in video games. Even things like solving a Rubik's cube can be modelled as a graph algorithm and solved with A*. Discrete math: aside from graph theory, knowing ...

Theoretical computer science is what theoretical computer scientists do, and mathematics is what mathematicians do. Other than that, there is no accepted definition of either. One might argue that theoretical computer science is a particular branch (or branches) of mathematics, influenced (at least originally) by the problem of efficient computation. Many ...

What is so different about writing faultless mathematical proofs and writing faultless computer code that makes it so that the former is so much more tractable than the latter? I believe that the primary reasons are idempotency (gives the same results for the same inputs) and immutability (doesn't change). What if a mathematical proof could give different ...

Here is a concrete encoding that can represent each symbol in less than 1 bit on average. First, split the input string into pairs of successive characters (e.g. AAAAAAAABC becomes AA|AA|AA|AA|BC). Then encode AA as 0, AB as 100, AC as 101, BA as 110, CA as 1110, BB as 111100, BC as 111101, CB as 111110, CC as 111111. I've not said what happens if there is ...

You don't need any math to write a Hello World or a very simple website. You will need to know some discrete mathematics and algorithm analysis to write a program that finds a route between two cities. You will need to know matrix transformations and quaternions to write a game engine. You will need to know a lot about all kinds of mathematical fields to ...

The entropy you've calculated isn't really for the specific string but, rather, for a random source of symbols that generates $A$ with probability $\tfrac{8}{10}$, and $B$ and $C$ with probability $\tfrac1{10}$ each, with no correlation between successive symbols. The calculated entropy for this distribution, $0.922$, means that you can't ...
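A quick check of the pair encoding and the entropy figure quoted above (my own sketch in Python; the distribution is the one stated in the answers):

import math

# Per-symbol source distribution from the discussion above.
p = {'A': 0.8, 'B': 0.1, 'C': 0.1}

# Entropy in bits per symbol: H = -sum p log2 p.
H = -sum(q * math.log2(q) for q in p.values())

# The pair code quoted above, mapping two symbols to a prefix-free codeword.
code = {'AA': '0', 'AB': '100', 'AC': '101', 'BA': '110',
        'CA': '1110', 'BB': '111100', 'BC': '111101',
        'CB': '111110', 'CC': '111111'}

# Expected codeword length per pair, assuming independent symbols.
exp_len = sum(p[a] * p[b] * len(code[a + b]) for a in p for b in p)

print(f"H = {H:.3f} bits/symbol")                 # ~0.922
print(f"code rate = {exp_len/2:.3f} bits/symbol") # 0.96 < 1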
Let $\mathcal{D}$ be the following distribution over $\{A,B,C\}$: if $X \sim \mathcal{D}$ then $\Pr[X=A] = 4/5$ and $\Pr[X=B]=\Pr[X=C]=1/10$. For each $n$ we can construct prefix codes $C_n\colon \{A,B,C\}^n \to \{0,1\}^*$ such that $$\lim_{n\to\infty} \frac{\operatorname*{\mathbb{E}}_{X_1,\ldots,X_n \sim \mathcal{D}}[|C_n(X_1,\ldots,X_n)|]}{n} = H(\mathcal{...

They represent continuous quantities with discrete approximations. Mostly, this is done with floating point, which is analogous to scientific notation. Essentially, they work with something like $1.xyz\times 10^k$, with some appropriate number of decimal places (and in binary, rather than decimal). It's also possible to work with some irrational numbers ...

There's no contradiction here. The first case defines the partial function $g\colon \mathbb{N}\to\mathbb{N}$ given by $$g(n) = \begin{cases}x &\text{if $x\in\mathbb{N}$ and }x^2=n\\ \text{undefined} &\text{if no such $x$ exists.}\end{cases}$$ As the text says, "the domain of $g$ is the set of perfect squares." The second case defines the ...

Your question is answered by the arithmetical hierarchy. The existence of an odd perfect number is a $\Sigma_1$ statement, and so you can test it using a $\Sigma_1$ machine, which halts iff the statement is true. The twin prime conjecture is a $\Pi_2$ statement, and so you can construct a TM with access to the halting oracle which halts iff the statement is ...

I agree with what Yuval has written, but I also have a much simpler answer: in practice, software engineers typically don't even try to check the correctness of their programs; they simply don't. They typically don't even write down the conditions that define when the program is correct. There are various reasons for it. One is that most software engineers ...

There are a lot of good answers already, but there are still more reasons math and programming aren't the same. Mathematical proofs tend to be much simpler than computer programs. Consider the first steps of a hypothetical proof:

Let a be an integer.
Let b be an integer.
Let c = a+b.

So far the proof is fine. Let's turn that into the first ...

A continuous-time Markov chain can be represented as a directed graph with constant non-negative edge weights. An equivalent representation of the constant edge weights of a directed graph with $N$ nodes is as an $N \times N$ matrix. The Markov property (that the future states depend only on the current state) is implicit in the constant edge weights (or ...

The parts that you mentioned are basic concepts of linear algebra. You cannot understand the more advanced concepts (say, eigenvalues and eigenvectors) before first understanding the basic concepts. There are no shortcuts in mathematics. Without an intuitive understanding of the concepts of span and linear independence you won't get far in linear algebra. ...

Yes. The quantum Turing machine is a mathematical formalization of a computation model for a quantum computer. See also https://en.wikipedia.org/wiki/Quantum_computing#Developments and https://en.wikipedia.org/wiki/Quantum_complexity_theory and https://en.wikipedia.org/wiki/BQP.

I like Yuval's answer, but I wanted to riff off of it for a bit. One reason you might find it easier to write math proofs might boil down to how platonic math ontology is. To see what I mean, consider the following: functions in math are pure (the entire result of calling a function is completely encapsulated in the return value, which is deterministic and ...
I see two possible points of confusion in your question, and I will address them separately.

What is meant by the title of your post: "Regular languages over a common alphabet are closed under union"? The union of $L_1, L_2$ is $\{x : x \in L_1 \lor x \in L_2\}$. Does this mean that, for any string $s \in L_1$, we also have $s \in L_2$?

What is "closure under union"? Regular ...
This was supposed to be the last blog post on distance estimated 3D fractals, but then I stumbled upon the dual number formulation, and decided it would blend in nicely with the previous post. So this blog post will be about dual numbers, and the next (and probably final) post will be about hybrid systems, heightmap rendering, interior rendering, and links to other resources.

Dual Numbers

Many of the distance estimators covered in the previous posts used a running derivative. This concept can be traced back to the original formula for the distance estimator for the Mandelbrot set, where the derivative is described iteratively in terms of the previous values: \(f'_n(c) = 2f_{n-1}(c)f'_{n-1}(c)+1\). In the previous post, we saw how the Mandelbox could be described by a running Jacobian matrix, and how this matrix could be replaced by a single running scalar derivative, since the Jacobians for the conformal transformations all have a particularly simple form (and thanks to Knighty the argument was extended to non-Julia Mandelboxes). Now, some months ago I stumbled upon automatic differentiation and dual numbers, and after having done some tests, I think this is a very nice framework to complete the discussion of distance estimators.

So what are these dual numbers? The name might sound intimidating, but the concept is very simple: we extend the real numbers with another component, much like the complex numbers: \(x = (x_r, x_d) = x_r + x_d \epsilon\), where \(\epsilon\) is the dual unit, similar to the imaginary unit i for the complex numbers. The square of the dual unit is defined as \(\epsilon * \epsilon = 0\). Now for any function which has a Taylor series, we have \(f(x+dx) = f(x) + f'(x)dx + (f''(x)/2)dx^2 + \ldots\) If we let \(dx = \epsilon\), it follows that \(f(x+\epsilon) = f(x) + f'(x)\epsilon\), because the higher order terms vanish. This means that if we evaluate our function with a dual number \(d = x + \epsilon = (x,1)\), we get a dual number back, \((f(x), f'(x))\), where the dual component contains the derivative of the function.

Compare this with the finite difference scheme for obtaining a derivative. Take a quadratic function as an example, \(f(x) = x*x\), and evaluate its derivative using a step size 'h'. This gives us the approximate derivative: \(f'(x) \approx \frac{f(x+h)-f(x)}{h} = \frac{x^2 + 2*x*h + h^2 - x^2}{h} = 2*x+h\). The finite difference scheme introduces an error, here equal to h. The error gets smaller as h gets smaller (as it converges towards the true derivative), but numerical differentiation introduces inaccuracies. Compare this with the dual number approach. For dual numbers, we have \(x*x = (x_r+x_d\epsilon)*(x_r+x_d\epsilon) = x_r^2 + (2 * x_r * x_d)\epsilon\). Thus, \(f(x_r + \epsilon) = x_r^2 + (2 * x_r)*\epsilon\). Since the dual component is the derivative, we have f'(x) = 2*x, which is the exact answer.

But the real beauty of dual numbers is that they make it possible to keep track of the derivative during the actual calculation, using forward accumulation: simply by replacing all numbers in our calculations with dual numbers, we end up with the answer together with its derivative. Wikipedia has a very nice article that explains this in more detail: Automatic Differentiation. The article also lists several arithmetic rules for dual numbers.

For the Mandelbox, we have a defining function R(p), which returns the length of p, after it has been through a fixed number of iterations of the Mandelbox formula: scale*spherefold(boxfold(z))+p.
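Before looking at the Mandelbox code, here is a minimal dual-number sketch (my own illustration in Python, not code from this post) showing the exact-derivative property described above:

# Carry (value, derivative) through the arithmetic, with eps*eps = 0.
class Dual:
    def __init__(self, re, du=0.0):
        self.re, self.du = re, du    # x_r + x_d * eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.du + o.du)

    def __mul__(self, o):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re * o.re, self.re * o.du + self.du * o.re)

def f(x):
    return x * x          # f(x) = x^2, so f'(x) = 2x

y = f(Dual(3.0, 1.0))     # seed the dual part with 1, i.e. evaluate at (x, 1)
print(y.re, y.du)         # 9.0 6.0 -- the exact derivative, no step size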
The DE is then DE = R/DR, where DR is the length of the gradient of R. R is a scalar-valued function of a vector argument. To find the gradient we need to find the derivative along the x, y, and z directions. We can do this using dual vectors, evaluating the three directions, e.g. for the x-direction, evaluate \(R(p_r + \epsilon (1,0,0))\). In practice, it is more convenient to keep track of all three dual vectors during the calculation, since we can reuse parts of the calculations. So we use a 3x3 matrix to track our derivatives during the calculation. Here is some example code for the Mandelbox:

// simply scale the dual vectors
void sphereFold(inout vec3 z, inout mat3 dz) {
    float r2 = dot(z,z);
    if (r2 < minRadius2) {
        // constant scaling: z -> (fixedRadius2/minRadius2)*z
        float temp = (fixedRadius2/minRadius2);
        z *= temp;
        dz *= temp;
    } else if (r2 < fixedRadius2) {
        // sphere inversion: z -> fixedRadius2*z/r2, whose directional
        // derivative is temp*(dz - 2*z*dot(z,dz)/r2)
        float temp = (fixedRadius2/r2);
        dz[0] = temp*(dz[0] - z*2.0*dot(z,dz[0])/r2);
        dz[1] = temp*(dz[1] - z*2.0*dot(z,dz[1])/r2);
        dz[2] = temp*(dz[2] - z*2.0*dot(z,dz[2])/r2);
        z *= temp;
    }
}

// reverse signs for dual vectors when folding
void boxFold(inout vec3 z, inout mat3 dz) {
    if (abs(z.x) > foldingLimit) { dz[0].x *= -1.0; dz[1].x *= -1.0; dz[2].x *= -1.0; }
    if (abs(z.y) > foldingLimit) { dz[0].y *= -1.0; dz[1].y *= -1.0; dz[2].y *= -1.0; }
    if (abs(z.z) > foldingLimit) { dz[0].z *= -1.0; dz[1].z *= -1.0; dz[2].z *= -1.0; }
    z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}

float DE(vec3 z) {
    // dz contains our three dual vectors,
    // initialized to the x, y, z directions.
    mat3 dz = mat3(1.0,0.0,0.0, 0.0,1.0,0.0, 0.0,0.0,1.0);
    vec3 c = z;
    mat3 dc = dz;
    for (int n = 0; n < Iterations; n++) {
        boxFold(z, dz);
        sphereFold(z, dz);
        z *= Scale;
        dz = mat3(dz[0]*Scale, dz[1]*Scale, dz[2]*Scale);
        z += c*Offset;
        dz += matrixCompMult(mat3(Offset,Offset,Offset), dc);
        if (length(z) > 1000.0) break;
    }
    return dot(z,z)/length(z*dz);
}

The 3x3 matrix dz contains our three dual vectors (they are stored as columns in the matrix: dz[0], dz[1], dz[2]). In order to update the dual numbers, we need to know how to calculate the length of z, and how to divide by the length squared (for sphere folds). Using the definition of the product for dual numbers, we have \(|z|^2 = z \cdot z = z_r^2 + (2 z_r \cdot z_d)*\epsilon\). For the length, we can use the power rule, as defined on Wikipedia: \(|z_r + z_d \epsilon| = \sqrt{z_r^2 + (2 z_r \cdot z_d)*\epsilon} = |z_r| + \frac{(z_r \cdot z_d)}{|z_r|}*\epsilon\). Using the rule for division, we can derive \(z/|z|^2 = (z_r+z_d \epsilon)/(z_r^2 + 2 z_r \cdot z_d \epsilon) = z_r/z_r^2 + \epsilon (z_d*z_r^2 - 2 z_r*(z_r \cdot z_d))/z_r^4\). Given these rules, it is relatively simple to update the dual vectors: for the sphereFold, we either multiply by a real number or use the division rule above. For the boxFold, there is both a multiplication (sign change) and a translation by a real number, which is ignored for the dual numbers. The (real) scaling factor is also trivially applied to both real and dual vectors. Then there is the addition of the original vector, where we must remember to also add the original dual vector.
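As a quick numerical sanity check of the length rule above (my own sketch, not part of the original post), the dual part of \(|z|\) should match a finite-difference derivative:

import numpy as np

# Check |z_r + z_d eps| = |z_r| + (z_r . z_d)/|z_r| eps numerically:
# the dual part should equal d/dt |z_r + t z_d| at t = 0.
rng = np.random.default_rng(1)
z_r, z_d = rng.normal(size=3), rng.normal(size=3)

dual_part = z_r @ z_d / np.linalg.norm(z_r)

h = 1e-6
fd = (np.linalg.norm(z_r + h * z_d) - np.linalg.norm(z_r)) / h

print(dual_part, fd)   # agree to ~1e-6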
Finally, using the length as derived above, we find the length of the full gradient as \(DR = \sqrt{ (z_r \cdot z_x)^2 + (z_r \cdot z_y)^2 + (z_r \cdot z_z)^2 } / |z_r|\). In the code example, the vectors are stored in a matrix, which makes a more compact notation possible: DR = length(z*dz)/length(z), leading to the final DE = R/DR = dot(z,z)/length(z*dz).

There are some advantages to using the dual numbers approach:
- Compared to the four-point Makin/Buddhi finite difference approach, the arbitrary epsilon (step distance) is avoided, which should give better numerical accuracy. It is also somewhat faster computationally.
- It is very general: it works, for example, for non-conformal cases, where running scalar derivatives fail. The images here are from a Mandelbox where a different scaling factor was applied to each direction (making it non-conformal). This is not possible to capture in a running scalar derivative.

On the other hand, the method is slower than using running scalar estimators, and it does require code changes. It should be mentioned that libraries exist for languages supporting operator overloading, such as C++. Since we find the gradient directly in this method, we can also use it as a surface normal; this is another advantage compared to the scalar derivatives, which normally use a finite difference scheme for the normals. Using the code example, the normal is:

// (Unnormalized) normal
vec3 normal = vec3(dot(z,dz[0]), dot(z,dz[1]), dot(z,dz[2]));

It should be noted that in my experiments, I found the finite difference method produced better normals than the above definition. Perhaps because it smooths them? The problem was mostly solved by backstepping a little before calculating the normal, but this again introduces an arbitrary distance step.

Now, I said the scalar method was faster, and for a fixed number of ray steps it is, but let us take a closer look at the distance estimator function. The above image shows a sliced Mandelbox. The graph in the lower right corner shows a plot of the DE function along a line (two dimensions held fixed): the blue curve is the DE function, and the red line shows the derivative of the DE function. The function is plotted for the dual number derived DE function. We can see that our DE is well-behaved here: for a consistent DE the slope can never be higher than 1, and when we move away from the side of the Mandelbox in a perpendicular direction, the derivative of the DE should be plus or minus one. Now compare this to the scalar estimated DE: here we see that the DE is less optimal; the slope is ~0.5 for this particular line graph. Actually, the slope would be close to one if we omitted the '+1' term for the scalar estimator, but then it overshoots slightly in some places inside the Mandelbox. We can also see that there are holes in our Mandelbox; this is because, for this fixed number of ray steps, we do not get close enough to the fractal surface to hit it. So even though the scalar estimator is faster, we need to crank up the number of ray steps to achieve the same quality.

Final Remarks

The whole idea of introducing dual derivatives of the three unit vectors seems to be very similar to having a running Jacobian matrix estimator, and I believe the methods are essentially identical. After all, we try to achieve the same thing: keeping a running record of how the R(p) function changes when we vary the input along the axes.
But I think the dual numbers offer a nice theoretical framework for calculating the DE, and I believe they could be more accurate and faster than finite difference four-point gradient methods. However, more experiments are needed before this can be asserted. Scalar estimators will always be the fastest, but they are probably only optimal for conformal systems; for non-conformal systems, it seems necessary to introduce terms that make them too conservative, as demonstrated by the Mandelbox example. The final part contains all the stuff that didn't fit in the previous posts, including references and links.
On the dual codes of skew constacyclic codes

1. Universidad de Concepción, Escuela de Educación, Departamento de Ciencias Básicas, Los Ángeles, Chile
2. Universidad de Concepción, Facultad de Ciencias Físicas y Matemáticas, Departamento de Matemática, Concepción, Chile

Let $\mathbb{F}_q$ be a finite field with $q$ elements and denote by $\theta : \mathbb{F}_q \to \mathbb{F}_q$ an automorphism of $\mathbb{F}_q$. In this paper, we deal with skew constacyclic codes, that is, linear codes of $\mathbb{F}_q^n$ which are invariant under the action of a semi-linear map $\phi_{\alpha,\theta} : \mathbb{F}_q^n \to \mathbb{F}_q^n$, defined by $\phi_{\alpha,\theta}(a_0, \ldots, a_{n-2}, a_{n-1}) := (\alpha\,\theta(a_{n-1}), \theta(a_0), \ldots, \theta(a_{n-2}))$ for some $\alpha \in \mathbb{F}_q \setminus \{0\}$ and $n \geq 2$. In particular, we study some algebraic and geometric properties of their dual codes and we give some consequences and research results on $1$-generator skew quasi-twisted codes and on MDS skew constacyclic codes.

Mathematics Subject Classification: Primary: 12Y05, 16Z05; Secondary: 94B05, 94B35.

Citation: Alexis Eduardo Almendras Valdebenito, Andrea Luigi Tironi. On the dual codes of skew constacyclic codes. Advances in Mathematics of Communications, 2018, 12 (4): 659-679. doi: 10.3934/amc.2018039
The angular resolution of the telescope really has no direct bearing on our ability to detect Oort cloud objects, beyond how that angular resolution affects the depth to which one can detect the light from faint objects. Any telescope can detect stars, even though their actual discs are way beyond the angular resolution of the telescope. The detection of Oort cloud objects is simply a question of detecting the (unresolved) reflected light, in exactly the same way that one detects a faint (unresolved) star. Confirmation of the Oort cloud nature of the object would then come by observing at intervals over a year or so and obtaining a very large ($>2$ arcseconds) parallax.

The question amounts to: how deep do you need to go? We can estimate this in two ways: (i) a back of the envelope calculation assuming the object reflects light from the Sun with some albedo; (ii) scaling the brightness of comets when they are distant from the Sun.

(i) The luminosity of the Sun is $L=3.83\times10^{26}\ W$. Let the distance to the Oort cloud be $D$ and the radius of the (assumed spherical) Oort object be $R$. The light from the Sun incident on the object is $\pi R^2 L/4\pi D^2$. Assume now that a fraction $f$ of this is reflected uniformly into a $2\pi$ solid angle. This latter point is an approximation: the light will not be reflected isotropically, but it will represent some average over any viewing angle. To a good approximation, as $D \gg 1$ au, we can assume that the distance from the Oort object to the Earth is also $D$. Hence the flux of light received at the Earth is $$F_{E} = f \frac{\pi R^2 L}{4\pi D^2}\frac{1}{2\pi D^2} = f \frac{R^2 L}{8\pi D^4}.$$ Putting some numbers in, let $R=10$ km and let $D= 10,000$ au. Cometary material has a very low albedo, but let's be generous and assume $f=0.1$. $$F_E = 3\times10^{-29}\left(\frac{f}{0.1}\right) \left(\frac{R}{10\ km}\right)^2 \left(\frac{D}{10^4\ au}\right)^{-4}\ Wm^{-2}$$ To convert this to a magnitude, assume the reflected light has the same spectrum as sunlight. The Sun has an apparent visual magnitude of -26.74, corresponding to a flux at the Earth of $1.4\times10^{3}\ Wm^{-2}$. Converting the flux ratio to a magnitude difference, we find that the apparent magnitude of our fiducial Oort object is 52.4.

(ii) Halley's comet is similar (10 km radius, low albedo) to the fiducial Oort object considered above. Halley's comet was observed by the VLT in 2003 with a magnitude of 28.2, at a distance of 28 au from the Sun. We can now just scale this magnitude, noting that the received flux scales as distance to the power of four, because the light must first travel out to the object and then we see it reflected back. Thus at 10,000 au, Halley would have a magnitude of $28.2 - 2.5 \log \left[(28/10^{4})^4\right] = 53.7$, in reasonable agreement with my other estimate. (Incidentally, my crude formula in (i) above suggests an $f=0.1$, $R=10\ km$ comet at 28 au would have a magnitude of 26.9. Given that Halley probably has a smaller $f$, this is excellent consistency.)

The observation of Halley by the VLT represents the pinnacle of what is possible with today's telescopes. Even the Hubble Ultra Deep Field only reached visual magnitudes of about 29. Thus a big Oort cloud object remains more than 20 magnitudes below this detection threshold!

The most feasible way of detecting Oort objects is when they occult background stars. The possibilities for this are discussed by Ofek & Nakar 2010 in the context of the photometric precision provided by Kepler.
The rate of occultations (which are of course single, unrepeatable events) was calculated to be between zero and 100 over the whole Kepler mission, depending on the size and distance distribution of the Oort objects. As far as I am aware, nothing has come of this (yet).
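For reference, a quick numerical re-check of the magnitude estimates above (my own script; constants as quoted in the answer, in SI units):

import math

L_sun = 3.83e26          # W
au = 1.496e11            # m
f, R, D = 0.1, 10e3, 1e4 * au    # albedo, radius (m), distance (m)

# Flux at Earth from the formula F_E = f R^2 L / (8 pi D^4).
F_E = f * R**2 * L_sun / (8 * math.pi * D**4)

# Apparent magnitude, using the Sun as reference:
# m = -26.74 at F = 1.4e3 W m^-2.
m = -26.74 + 2.5 * math.log10(1.4e3 / F_E)
print(f"{F_E:.2e} W m^-2, m = {m:.1f}")    # ~3e-29, m ~ 52.4

# Scaling Halley (m = 28.2 at 28 au) out to 10^4 au, with flux ~ D^-4.
m_halley = 28.2 - 2.5 * math.log10((28 / 1e4)**4)
print(f"m = {m_halley:.1f}")               # ~53.7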
General Version of the Chain Rule

The general version of the chain rule starts with a function $f(x,y)$, where $x$ and $y$ are themselves functions $x = x(s,\, t)$ and $y = y(s,\,t)$ of two other variables $s$ and $ t$, so that the composition $${\color{darkerblue}z\ = \ f(x(s, \,t), y(s, \,t))}$$ is now a function of $s$ and $ t$. The partial derivatives of $z$ become $$\frac{\partial z}{\partial s} \ = \ \frac{\partial f}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial s}\,, \qquad \frac{\partial z}{\partial t} \ = \ \frac{\partial f}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial t}\,.$$ Let's see why these formulas work. The partial derivative $\partial z/\partial s$ means "Hold $t$ fixed and treat $z$ as a (compound) function of a single variable $s$. Then take its derivative." But this means that $x$ and $y$ are also treated as functions of the single variable $s$, and we are back in the setting of the simple case of the chain rule. The derivative of $z=f(x,y)$ is $f_x$ times the derivative of $x$ plus $f_y$ times the derivative of $y$, which is precisely what our first equation is saying. The reasoning behind the second equation is similar.

The one and two variable chain rules set the pattern for more variables. If $w = f(x, \,y,\, z)$ and $$x \ = \ x(r, \,s,\, t), \qquad y \ = \ y(r, \,s, \,t), \qquad z \ = \ z(r,\, s, \,t),$$ then $$w \ = \ f(x(r, \,s, \,t), y(r,\, s, \,t), z(r, \,s, \,t))$$ is a function of $r, \,s,$ and $ t$ such that $$\frac{\partial w}{\partial r} \ = \ \frac{\partial f}{\partial x}\frac{\partial x}{\partial r} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial r} + \frac{\partial f}{\partial z} \frac{\partial z}{\partial r}\,,$$ and so on for functions $f(x_1,\, x_2,\, \ldots, \, x_n)$ of $n$ variables for any $n$.

Why do we care about such compositions? These compositions come up whenever we switch coordinate systems. In the plane, for example, both rectangular and polar coordinates are important, so often there's a need to change coordinates, writing $$x\ = \ x(r, \theta) \ = \ r \cos \theta\,, \qquad y \ = \ y(r, \theta) \ = \ r \sin \theta\,.$$ The chain rule then tells us that $$ \frac{\partial }{\partial r} f(r \cos \theta,\ r\sin \theta)\ = \ \frac{\partial f}{\partial x}\frac{\partial x}{\partial r}+ \frac{\partial f}{\partial y} \frac{\partial y}{\partial r}\ = \ \cos \theta \frac{\partial f}{\partial x} + \sin \theta \frac{\partial f}{ \partial y}\,,$$ and so on.
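Here is a quick symbolic check of the polar-coordinate formula above, using SymPy with a concrete choice of $f$ (my own example, not part of the text):

import sympy as sp

# Check d/dr f(r cos t, r sin t) = cos(t) f_x + sin(t) f_y
# for the concrete function f(x, y) = x**2 * y.
r, theta = sp.symbols('r theta')
x, y = r*sp.cos(theta), r*sp.sin(theta)

f  = lambda x, y: x**2 * y
fx = lambda x, y: 2*x*y     # partial f / partial x
fy = lambda x, y: x**2      # partial f / partial y

lhs = sp.diff(f(x, y), r)                              # derivative of the composition
rhs = sp.cos(theta)*fx(x, y) + sp.sin(theta)*fy(x, y)  # chain-rule formula

print(sp.simplify(lhs - rhs))   # 0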
Applications and Examples

Example 1: Find a series representation for $\dfrac{1}{(1-x)^2}$.

Solution 1: Using term-by-term differentiation and integration, $\begin{eqnarray} \frac{1}{(1-x)^2}&=&\frac{d}{dx} \left(\frac{1}{1-x}\right)\\ &=&\frac{d}{dx} \left(\sum_{n=0}^\infty x^n \right)\\ &=&\sum_{n=1}^\infty n\, x^{n-1}\\ &=&1+2x+3x^2+4x^3+\cdots, \end{eqnarray}$ and we have our series representation when $\lvert x\rvert<1$.

-----------------------------------------------------------------------------

Example 2: Find a series representation for $\ln(1+x)$.

Solution 2: To do this, we must find a series that we know, for which $\ln(1+x)$ is the derivative or the antiderivative. At this point, we don't know that many series; really all we know is the standard geometric series and variants of it. (Any ideas?)

After some thought, we realize $\displaystyle \frac{d}{dx}\ln (1+x)=\frac{1}{1+x}$ and $\displaystyle\frac{1}{1+x}=\sum_{n=0}^\infty(-1)^nx^n$ (from our previous work). So, when $\lvert x\rvert<1$, $\begin{eqnarray} \ln(1+x)&=&\int\frac{1}{1+x}\,dx\\ &=&\int\left(\sum_{n=0}^\infty(-1)^nx^n\right)\,dx\\ &=&\left(\sum_{n=0}^\infty(-1)^n\frac{x^{n+1}}{n+1}\right)+C\\ &=&\sum_{n=0}^\infty(-1)^n\frac{x^{n+1}}{n+1}\\ &=&x - \frac{x^2}{2} + \frac{x^3}{3} -\frac{x^4}{4}+ \cdots. \end{eqnarray}$

To see why $C=0$, plug $x=0$ into $\ln(1+x)$. Since $\ln(1+0)=0$, our series is 0 when $x=0$, so $C=0$.

-----------------------------------------------------------------------------

Example 3: Use the fact that $\displaystyle\frac{d}{dx} \tan^{-1}(x)=\frac{1}{1+x^2}$ to find a series representation for $\tan^{-1}(x)$. Do this before reading further.

Solution 3: We know how to compute $\displaystyle\frac{1}{1+x^2}=\sum_{n=0}^\infty(-x^2)^n=\sum_{n=0}^\infty(-1)^nx^{2n}$. So, when $\lvert x\rvert<1$, we have $\begin{eqnarray} \tan^{-1}(x)&=&\int\frac{1}{1+x^2}\,dx\\ &=&\int\left(\sum_{n=0}^\infty(-1)^nx^{2n}\right)\,dx\\ &=&\int \left(1-x^2+x^4-x^6+x^8-x^{10}+\cdots\right)\,dx\\ &=&C+x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots\\ &=&\left(\sum_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{2n+1}\right)+C. \end{eqnarray}$

To solve for $C$, plug $x=0$ into $\tan^{-1}(x)$. We get $\tan^{-1}(0)=0$, so our series at $x=0$ must be $0$, and hence $C=0$. Therefore $$\tan^{-1}(x)= \sum_{n=0}^\infty(-1)^n\frac{x^{2n+1}}{2n+1}=x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$$

Most calculator and computer approximations are done via series, so if we can find a series to represent a hard-to-compute function, we are happy, since series are easy to compute (especially for a computer) to any degree of accuracy you wish. The video will go through some of these examples, and will demonstrate why this is so important.

-----------------------------------------------------------------------------
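As a small numerical sketch of this point (my addition): summing the first 20 terms of the arctangent series at $x = 1/\sqrt{3}$, where the exact value is $\tan^{-1}(x) = \pi/6$.

```python
import math

# Partial sums of the arctangent series at x = 1/sqrt(3); the exact
# value is arctan(1/sqrt(3)) = pi/6.
x = 1 / math.sqrt(3)
partial = sum((-1)**n * x**(2 * n + 1) / (2 * n + 1) for n in range(20))
print(partial, math.pi / 6)   # both ~0.5235987755982988
```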
The Annals of Probability, Volume 26, Number 1 (1998), 316-345.

No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices

Abstract: Let $B_n = (1/N)T_n^{1/2}X_n X_n^* T_n^{1/2}$, where $X_n$ is $n \times N$ with i.i.d. complex standardized entries having finite fourth moment and $T_n^{1/2}$ is a Hermitian square root of the nonnegative definite Hermitian matrix $T_n$. It is known that, as $n \to \infty$, if $n/N$ converges to a positive number and the empirical distribution of the eigenvalues of $T_n$ converges to a proper probability distribution, then the empirical distribution of the eigenvalues of $B_n$ converges a.s. to a nonrandom limit. In this paper we prove that, under certain conditions on the eigenvalues of $T_n$, for any closed interval outside the support of the limit, with probability 1 there will be no eigenvalues in this interval for all $n$ sufficiently large.

Citation: Bai, Z. D.; Silverstein, Jack W. No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices. Ann. Probab. 26 (1998), no. 1, 316-345. doi:10.1214/aop/1022855421. https://projecteuclid.org/euclid.aop/1022855421
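As a quick empirical illustration (my addition, not part of the article): when $T_n$ is the identity, the limiting law is the Marchenko-Pastur distribution with support $[(1-\sqrt{y})^2, (1+\sqrt{y})^2]$ for $y = \lim n/N$, and the extreme eigenvalues of a simulated $B_n$ indeed hug these edges.

```python
import numpy as np

# Simulate B_n = (1/N) X X* with T_n = I and compare the extreme
# eigenvalues with the Marchenko-Pastur support edges.
rng = np.random.default_rng(1)
n, N = 400, 1600                       # aspect ratio y = n/N = 0.25
X = (rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))) / np.sqrt(2)
B = (X @ X.conj().T) / N               # sample covariance matrix
eigs = np.linalg.eigvalsh(B)

y = n / N
print(eigs.min(), (1 - np.sqrt(y))**2)  # min eigenvalue vs left edge 0.25
print(eigs.max(), (1 + np.sqrt(y))**2)  # max eigenvalue vs right edge 2.25
```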
The problem is the following: Let $X$ be a Baire space and $f : X \to \mathbb{R}$ continuous. Prove that every nonempty open set of $X$ has a nonempty open subset where $f$ is bounded. So, let $A \subseteq X, A \neq \emptyset$ be an open set. Take $p \in A$ and look at $f(p) \in \mathbb{R}$. Since $f$ is continuous, there is an open neighborhood $V \subseteq X$ of $p$ such that $f(V) \subseteq B_1(f(p))=(f(p)-1, f(p)+1)$. Since $A, V$ are open, $A \cap V$ is also open, $A \cap V \subseteq A,$ and $f(A \cap V) \subseteq f(V) \subseteq (f(p) -1, f(p) +1)$, so $f$ is bounded on $A \cap V$. In particular $$f(p)-2 < f(x) < f(p) + 2, \,\,\, \forall x \in A \cap V .$$ What did I do wrong? I never used the fact that $X$ is a Baire space, so I'm pretty sure I did something wrong, but I can't see it.
MeridionalHeatDiffusion

Solver for the 1D meridional heat diffusion equation on the sphere: for a temperature state variable \(T(\phi,t)\), a vertically-integrated heat capacity \(C\), and arbitrary thermal diffusivity \(D(\phi,t)\) in units of W/m2/K. The diffusivity \(D\) can be a single scalar, or optionally a vector specified at grid cell boundaries (so its length must be exactly 1 greater than the length of \(\phi\)). \(D\) can be modified by the user at any time (e.g., after each timestep, if it depends on other state variables). The heat capacity \(C\) is normally handled automatically by CLIMLAB as part of the grid specification. A fully implicit timestep is used for computational efficiency. Thus the computed tendency \(\frac{\partial T}{\partial t}\) will depend on the timestep.

The diagnostics diffusive_flux and flux_convergence are computed as described in the parent class MeridionalDiffusion. Two additional diagnostics are computed here, which are meaningful if \(T\) represents a zonally averaged temperature:

heat_transport, given by \(\mathcal{H}(\phi) = -2 \pi ~ a^2 ~ \cos\phi ~ D ~ \frac{\partial T}{\partial \phi}\) in units of PW (petawatts).

heat_transport_convergence, given by \(-\frac{1}{2 \pi ~a^2 \cos\phi} \frac{\partial \mathcal{H}}{\partial \phi}\) in units of W/m2.

Non-uniform grid spacing is supported. The state variable \(T\) may be multi-dimensional, but the diffusion will operate along the latitude dimension only.

class climlab.dynamics.meridional_heat_diffusion.MeridionalHeatDiffusion(D=0.555, use_banded_solver=False, **kwargs)

A 1D diffusion solver for Energy Balance Models. Solves the meridional heat diffusion equation
\[ C \frac{\partial T}{\partial t} = -\frac{1}{\cos\phi} \frac{\partial}{\partial \phi} \left[ -D \cos\phi \frac{\partial T}{\partial \phi} \right] \]
on an evenly-spaced latitude grid, with a state variable \(T\), a heat capacity \(C\) and diffusivity \(D\).

Assuming \(T\) is a temperature in K or degC, then the units are: \(D\) in W m-2 K-1, and \(C\) in J m-2 K-1.

\(D\) is provided as input, and can be either scalar or vector defined at latitude boundaries. \(C\) is normally handled automatically for temperature state variables in CLIMLAB.

Attributes

D
K
U
depth: Depth at grid centers (m)
depth_bounds: Depth at grid interfaces (m)
diagnostics: Dictionary access to all diagnostic variables
input: Dictionary access to all input variables
lat: Latitude of grid centers (degrees North)
lat_bounds: Latitude of grid interfaces (degrees North)
lev: Pressure levels at grid centers (hPa or mb)
lev_bounds: Pressure levels at grid interfaces (hPa or mb)
lon: Longitude of grid centers (degrees)
lon_bounds: Longitude of grid interfaces (degrees)
prescribed_flux
timestep: The amount of time over which step_forward() is integrating, in seconds.

Methods

add_diagnostic(name[, value]): Create a new diagnostic variable called name for this process and initialize it with the given value.
add_input(name[, value]): Create a new input variable called name for this process and initialize it with the given value.
add_subprocess(name, proc): Adds a single subprocess to this process.
add_subprocesses(procdict): Adds a dictionary of subprocesses to this process.
compute(): Computes the tendencies for all state variables given current state and specified input.
compute_diagnostics([num_iter]): Compute all tendencies and diagnostics, but don't update model state.
declare_diagnostics(diaglist): Add the variable names in diaglist to the list of diagnostics.
declare_input(inputlist): Add the variable names in inputlist to the list of necessary inputs.
integrate_converge([crit, verbose]): Integrates the model until model states are converging.
integrate_days([days, verbose]): Integrates the model forward for a specified number of days.
integrate_years([years, verbose]): Integrates the model by a given number of years.
remove_diagnostic(name): Removes a diagnostic from the process.diagnostics dictionary and also deletes the associated process attribute.
remove_subprocess(name[, verbose]): Removes a single subprocess from this process.
set_state(name, value): Sets the variable name to a new state value.
set_timestep([timestep, num_steps_per_year]): Calculates the timestep in seconds and calls the setter function of timestep().
step_forward(): Updates state variables with computed tendencies.
to_xarray([diagnostics]): Convert process variables to xarray.Dataset format.

property D

_update_diagnostics(newstate): This method is called each timestep after the new state is computed with the implicit solver. Daughter classes can implement this method to compute any diagnostic quantities using the new state.

_update_diffusivity()
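For orientation, here is a minimal usage sketch (my addition; it assumes a recent CLIMLAB version, and the climlab.surface_state setup is my assumption rather than part of the documentation above):

```python
import climlab
from climlab.dynamics.meridional_heat_diffusion import MeridionalHeatDiffusion

# Zonal-mean surface temperature state on a 60-point latitude grid.
state = climlab.surface_state(num_lat=60)

# Scalar diffusivity D in W/m2/K; a vector at latitude boundaries also works.
diff = MeridionalHeatDiffusion(state=state, D=0.6)

diff.integrate_years(2.0)            # repeated fully implicit timesteps
print(diff.heat_transport)           # diagnostic: meridional heat transport (PW)
print(sorted(diff.diagnostics))      # all diagnostics, incl. flux_convergence
```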
Consider the query: Find all students who have taken all courses offered in the Biology department

In the book the query is given as:

$\{t\ | \ \exists r \in student \ (r[ID] = t[ID]) \ \land \ (\forall u \in course \ (u[dept\_name]\ = \ "Biology" \ \Rightarrow \exists s \in takes\ (t[ID] = s[ID] \ \land \ s[course\_id] = u[course\_id])))\}$

Now I have two questions here:

1) $t$ is a free variable, and thus any tuple (including those not in the mentioned relations) can be represented using $t$. But how is it guaranteed that $t$ will only contain the attribute $ID$? Are we defining $t$ to contain $ID$ only, and is the way to do that by using $r[ID] = t[ID]$?

2) Why is it necessary to include $t[ID] = s[ID]$? And is the value of this $t[ID]$ the same as in $r[ID] = t[ID]$?
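To see the quantifier structure concretely, here is a small Python sketch (my addition; the toy relations are hypothetical) that mirrors the "for every Biology course there exists a matching takes-tuple" pattern of the calculus expression:

```python
# Toy relations standing in for student(ID), course(course_id, dept_name),
# and takes(ID, course_id).
student = {("S1",), ("S2",)}
course = {("BIO-101", "Biology"), ("BIO-301", "Biology"), ("CS-101", "Comp. Sci.")}
takes = {("S1", "BIO-101"), ("S1", "BIO-301"), ("S2", "BIO-101")}

answer = {
    (sid,)
    for (sid,) in student              # exists r in student with r[ID] = t[ID]
    if all((sid, cid) in takes         # exists s in takes with s[ID] = t[ID]
           for (cid, dept) in course   # for all u in course ...
           if dept == "Biology")       # ... with u[dept_name] = "Biology"
}
print(answer)  # {('S1',)}: only S1 has taken every Biology course
```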
We’ve seen that the far field electric and magnetic fields associated with a magnetic vector potential were \begin{equation}\label{eqn:dualFarField:40} \BE = -j \omega \textrm{Proj}_\T \BA, \end{equation} \begin{equation}\label{eqn:dualFarField:60} \BH = \inv{\eta} \kcap \cross \BE. \end{equation} It’s worth a quick note that the duality transformation for this, referring to [1] tab. 3.2, is \begin{equation}\label{eqn:dualFarField:100} \BH = -j \omega \textrm{Proj}_\T \BF \end{equation} \begin{equation}\label{eqn:dualFarField:120} \BE = -\eta \kcap \cross \BH. \end{equation} What does \( \BH \) look like in terms of \( \BA \), and what does \( \BE \) look like in terms of \( \BH \)? The first is \begin{equation}\label{eqn:dualFarField:140} \BH = -\frac{j \omega}{\eta} \kcap \cross \lr{ \BA - \lr{\BA \cdot \kcap} \kcap }, \end{equation} in which the \( \kcap \) crossed terms are killed, leaving \begin{equation}\label{eqn:dualFarField:160} \BH = -\frac{j \omega}{\eta} \kcap \cross \BA. \end{equation} The electric field follows again using a duality transformation, so in terms of the electric vector potential, it is \begin{equation}\label{eqn:dualFarField:180} \BE = j \omega \eta \kcap \cross \BF. \end{equation} These show explicitly that neither the electric nor the magnetic far field has any radial component, matching intuition for transverse propagation of the fields. References [1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.
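As a quick numerical sanity check (my addition, not part of the original post), the transversality and the duality relation above can be verified for a random complex vector potential and unit propagation direction:

```python
import numpy as np

# For random complex A and random unit k, check that H = -(j w / eta) k x A
# is transverse, and that E = -eta k x H recovers E = -j w (A - (A.k) k).
rng = np.random.default_rng(0)
w, eta = 2 * np.pi * 1e9, 376.73            # angular frequency, wave impedance
k = rng.normal(size=3)
k /= np.linalg.norm(k)                      # unit vector k-hat
A = rng.normal(size=3) + 1j * rng.normal(size=3)

E = -1j * w * (A - (A @ k) * k)             # -j w Proj_T A (bilinear dot product)
H = -(1j * w / eta) * np.cross(k, A)

print(np.allclose(H @ k, 0))                  # True: H has no radial component
print(np.allclose(-eta * np.cross(k, H), E))  # True: duality relation holds
```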
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle nor mean value theorem need the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused about classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved (an additional term that varies with time) in the new coordinate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (where $R$ is the shift of coordinate; $R$ is constant, and $p$ is sort of rotating.) would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet Is it possible to ever make a time machine? Please give an easy answer, a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one.
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me to another question If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
We have been exploring vectors and vector operations in three-dimensional space, and we have developed equations to describe lines, planes, and spheres. In this section, we use our knowledge of planes and spheres, which are examples of three-dimensional figures called surfaces, to explore a variety of other surfaces that can be graphed in a three-dimensional coordinate system.

Identifying Cylinders

The first surface we’ll examine is the cylinder. Although most people immediately think of a hollow pipe or a soda straw when they hear the word cylinder, here we use the broad mathematical meaning of the term. As we have seen, cylindrical surfaces don’t have to be circular. A rectangular heating duct is a cylinder, as is a rolled-up yoga mat, the cross-section of which is a spiral shape.

In the two-dimensional coordinate plane, the equation \( x^2+y^2=9\) describes a circle centered at the origin with radius \( 3\). In three-dimensional space, this same equation represents a surface. Imagine copies of a circle stacked on top of each other centered on the \(z\)-axis (Figure \(\PageIndex{1}\)), forming a hollow tube. We can then construct a cylinder from the set of lines parallel to the \(z\)-axis passing through the circle \( x^2+y^2=9\) in the \(xy\)-plane, as shown in the figure. In this way, any curve in one of the coordinate planes can be extended to become a surface.

Figure \(\PageIndex{1}\): In three-dimensional space, the graph of equation \( x^2+y^2=9\) is a cylinder with radius \( 3\) centered on the \(z\)-axis. It continues indefinitely in the positive and negative directions.

Definition: cylinders and rulings

A set of lines parallel to a given line passing through a given curve is known as a cylindrical surface, or cylinder. The parallel lines are called rulings.

From this definition, we can see that we still have a cylinder in three-dimensional space, even if the curve is not a circle. Any curve can form a cylinder, and the rulings that compose the cylinder may be parallel to any given line (Figure \(\PageIndex{2}\)).

Figure \(\PageIndex{2}\): In three-dimensional space, the graph of equation \( z=x^3\) is a cylinder, or a cylindrical surface with rulings parallel to the \(y\)-axis.

Example \( \PageIndex{1}\): Graphing Cylindrical Surfaces

Sketch the graphs of the following cylindrical surfaces.

a. \( x^2+z^2=25\)
b. \( z=2x^2−y\)
c. \( y=\sin x\)

Solution

a. The variable \( y\) can take on any value without limit. Therefore, the lines ruling this surface are parallel to the \(y\)-axis. The intersection of this surface with the \(xz\)-plane forms a circle centered at the origin with radius \( 5\) (see Figure \(\PageIndex{3}\)).

Figure \(\PageIndex{3}\): The graph of equation \( x^2+z^2=25\) is a cylinder with radius \( 5\) centered on the \(y\)-axis.

b. In this case, the equation contains all three variables, \( x, y,\) and \( z\), so none of the variables can vary arbitrarily. The easiest way to visualize this surface is to use a computer graphing utility (Figure \(\PageIndex{4}\)).

Figure \(\PageIndex{4}\)

c. In this equation, the variable \( z\) can take on any value without limit. Therefore, the lines composing this surface are parallel to the \(z\)-axis. The intersection of this surface with the \(xy\)-plane outlines curve \( y=\sin x\) (Figure \(\PageIndex{5}\)).

Figure \(\PageIndex{5}\): The graph of equation \( y=\sin x\) is formed by a set of lines parallel to the \(z\)-axis passing through curve \( y=\sin x\) in the \(xy\)-plane.
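Since part c is easiest to see with a computer, here is a minimal graphing sketch (my addition, not from the text): the cylinder \( y=\sin x\), built by letting \(z\) vary freely so the rulings are parallel to the \(z\)-axis.

```python
import numpy as np
import matplotlib.pyplot as plt

# The equation y = sin(x) is independent of z, so the surface consists of
# vertical lines (rulings) through the sine curve in the xy-plane.
x, z = np.meshgrid(np.linspace(0, 4 * np.pi, 80), np.linspace(-2, 2, 20))
y = np.sin(x)

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, z, alpha=0.5)
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()
```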
Exercise \( \PageIndex{1}\): Sketch or use a graphing tool to view the graph of the cylindrical surface defined by equation \( z=y^2\).

Hint: The variable \( x\) can take on any value without limit.

Answer: The graph is a parabolic cylinder, with rulings parallel to the \(x\)-axis.

When sketching surfaces, we have seen that it is useful to sketch the intersection of the surface with a plane parallel to one of the coordinate planes. These curves are called traces. We can see them in the plot of the cylinder in Figure \(\PageIndex{6}\).

Definition: traces

The traces of a surface are the cross-sections created when the surface intersects a plane parallel to one of the coordinate planes.

Traces are useful in sketching cylindrical surfaces. For a cylinder in three dimensions, though, only one set of traces is useful. Notice, in Figure \(\PageIndex{6}\), that the trace of the graph of \( z=\sin x\) in the \(xz\)-plane is useful in constructing the graph. The trace in the \(xy\)-plane, though, is just a series of parallel lines, and the trace in the \(yz\)-plane is simply one line.

Figure \(\PageIndex{6}\): (a) This is one view of the graph of equation \( z=\sin x\). (b) To find the trace of the graph in the \(xz\)-plane, set \( y=0\). The trace is simply a two-dimensional sine wave.

Cylindrical surfaces are formed by a set of parallel lines. Not all surfaces in three dimensions are constructed so simply, however. We now explore more complex surfaces, and traces are an important tool in this investigation.

Quadric Surfaces

We have learned about surfaces in three dimensions described by first-order equations; these are planes. Some other common types of surfaces can be described by second-order equations. We can view these surfaces as three-dimensional extensions of the conic sections we discussed earlier: the ellipse, the parabola, and the hyperbola. We call these graphs quadric surfaces.

Definition: Quadric surfaces and conic sections

Quadric surfaces are the graphs of equations that can be expressed in the form

\[Ax^2+By^2+Cz^2+Dxy+Exz+Fyz+Gx+Hy+Jz+K=0.\]

When a quadric surface intersects a coordinate plane, the trace is a conic section.

An ellipsoid is a surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}+\dfrac{z^2}{c^2}=1.\) Set \( x=0\) to see the trace of the ellipsoid in the \(yz\)-plane. To see the traces in the \(xy\)- and \(xz\)-planes, set \( z=0\) and \( y=0\), respectively. Notice that, if \( a=b\), the trace in the \(xy\)-plane is a circle. Similarly, if \( a=c\), the trace in the \(xz\)-plane is a circle and, if \( b=c\), then the trace in the \(yz\)-plane is a circle. A sphere, then, is an ellipsoid with \( a=b=c.\)

Example \( \PageIndex{2}\): Sketching an Ellipsoid

Sketch the ellipsoid \[ \dfrac{x^2}{2^2}+\dfrac{y^2}{3^2}+\dfrac{z^2}{5^2}=1.\]

Solution

Start by sketching the traces. To find the trace in the \(xy\)-plane, set \( z=0: \dfrac{x^2}{2^2}+\dfrac{y^2}{3^2}=1\) (Figure \(\PageIndex{7}\)). To find the other traces, first set \( y=0\) and then set \( x=0.\)

Figure \(\PageIndex{7}\): (a) This graph represents the trace of equation \( \dfrac{x^2}{2^2}+\dfrac{y^2}{3^2}+\dfrac{z^2}{5^2}=1\) in the \(xy\)-plane, when we set \( z=0\). (b) When we set \( y=0\), we get the trace of the ellipsoid in the \(xz\)-plane, which is an ellipse. (c) When we set \( x=0\), we get the trace of the ellipsoid in the \(yz\)-plane, which is also an ellipse.

Now that we know what traces of this solid look like, we can sketch the surface in three dimensions (Figure \(\PageIndex{8}\)).

Figure \(\PageIndex{8}\): (a) The traces provide a framework for the surface.
(b) The center of this ellipsoid is the origin.

The trace of an ellipsoid is an ellipse in each of the coordinate planes. However, this does not have to be the case for all quadric surfaces. Many quadric surfaces have traces that are different kinds of conic sections, and this is usually indicated by the name of the surface. For example, if a surface can be described by an equation of the form

\[ \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=\dfrac{z}{c}\]

then we call that surface an elliptic paraboloid. The trace in the \(xy\)-plane is an ellipse, but the traces in the \(xz\)-plane and \(yz\)-plane are parabolas (Figure \(\PageIndex{9}\)). Other elliptic paraboloids can have other orientations simply by interchanging the variables to give us a different variable in the linear term of the equation: \( \dfrac{x^2}{a^2}+\dfrac{z^2}{c^2}=\dfrac{y}{b}\) or \( \dfrac{y^2}{b^2}+\dfrac{z^2}{c^2}=\dfrac{x}{a}\).

Figure \(\PageIndex{9}\): This quadric surface is called an elliptic paraboloid.

Example \( \PageIndex{3}\): Identifying Traces of Quadric Surfaces

Describe the traces of the elliptic paraboloid \( x^2+\dfrac{y^2}{2^2}=\dfrac{z}{5}\).

Solution

To find the trace in the \(xy\)-plane, set \( z=0: x^2+\dfrac{y^2}{2^2}=0.\) The trace in the plane \( z=0\) is simply one point, the origin. Since a single point does not tell us what the shape is, we can move up the \(z\)-axis to an arbitrary plane to find the shape of other traces of the figure.

The trace in plane \( z=5\) is the graph of equation \( x^2+\dfrac{y^2}{2^2}=1\), which is an ellipse. In the \(xz\)-plane, the equation becomes \( z=5x^2\). The trace is a parabola in this plane and in any plane with the equation \( y=b\). In planes parallel to the \(yz\)-plane, the traces are also parabolas, as we can see in Figure \(\PageIndex{10}\).

Figure \(\PageIndex{10}\): (a) The paraboloid \( x^2+\dfrac{y^2}{2^2}=\dfrac{z}{5}\). (b) The trace in plane \( z=5\). (c) The trace in the \(xz\)-plane. (d) The trace in the \(yz\)-plane.

Exercise \( \PageIndex{2}\): A hyperboloid of one sheet is any surface that can be described with an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}−\dfrac{z^2}{c^2}=1\). Describe the traces of the hyperboloid of one sheet given by equation \( \dfrac{x^2}{3^2}+\dfrac{y^2}{2^2}−\dfrac{z^2}{5^2}=1.\)

Hint: To find the traces in the coordinate planes, set each variable to zero individually.

Answer: The traces parallel to the \(xy\)-plane are ellipses and the traces parallel to the \(xz\)- and \(yz\)-planes are hyperbolas. Specifically, the trace in the \(xy\)-plane is ellipse \( \dfrac{x^2}{3^2}+\dfrac{y^2}{2^2}=1,\) the trace in the \(xz\)-plane is hyperbola \( \dfrac{x^2}{3^2}−\dfrac{z^2}{5^2}=1,\) and the trace in the \(yz\)-plane is hyperbola \( \dfrac{y^2}{2^2}−\dfrac{z^2}{5^2}=1\) (see the following figure).

Hyperboloids of one sheet have some fascinating properties. For example, they can be constructed using straight lines, such as in the sculpture in Figure \(\PageIndex{11a}\). In fact, cooling towers for nuclear power plants are often constructed in the shape of a hyperboloid. The builders are able to use straight steel beams in the construction, which makes the towers very strong while using relatively little material (Figure \(\PageIndex{11b}\)).

Figure \(\PageIndex{11}\): (a) A sculpture in the shape of a hyperboloid can be constructed of straight lines. (b) Cooling towers for nuclear power plants are often built in the shape of a hyperboloid.
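For a computer-drawn view of the hyperboloid of one sheet from Exercise 2, here is a minimal graphing sketch (my addition, not from the text), using the standard parametrization \( x = a\cosh u \cos v,\ y = b\cosh u \sin v,\ z = c\sinh u\):

```python
import numpy as np
import matplotlib.pyplot as plt

# The hyperboloid of one sheet x^2/3^2 + y^2/2^2 - z^2/5^2 = 1; the
# parametrization satisfies cosh^2(u) - sinh^2(u) = 1 identically.
a, b, c = 3.0, 2.0, 5.0
u, v = np.meshgrid(np.linspace(-1.5, 1.5, 40), np.linspace(0, 2 * np.pi, 60))
x = a * np.cosh(u) * np.cos(v)
y = b * np.cosh(u) * np.sin(v)
z = c * np.sinh(u)

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, z, alpha=0.4)
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()
```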
Example \( \PageIndex{4}\): Chapter Opener: Finding the Focus of a Parabolic Reflector

Energy hitting the surface of a parabolic reflector is concentrated at the focal point of the reflector (Figure \(\PageIndex{12}\)). If the surface of a parabolic reflector is described by equation \( \dfrac{x^2}{100}+\dfrac{y^2}{100}=\dfrac{z}{4},\) where is the focal point of the reflector?

Figure \(\PageIndex{12}\): Energy reflects off of the parabolic reflector and is collected at the focal point. (credit: modification of CGP Grey, Wikimedia Commons)

Solution

Since \(z\) is the first-power variable, the axis of the reflector corresponds to the \(z\)-axis. The coefficients of \( x^2\) and \( y^2\) are equal, so the cross-section of the paraboloid perpendicular to the \(z\)-axis is a circle. We can consider a trace in the \(xz\)-plane or the \(yz\)-plane; the result is the same. Setting \( y=0\), the trace is a parabola opening up along the \(z\)-axis, with standard equation \( x^2=4pz\), where \( p\) is the focal length of the parabola. In this case, this equation becomes \( x^2=100⋅\dfrac{z}{4}=4pz\) or \( 25=4p\). So \(p\) is \( 6.25\) m, which tells us that the focus of the paraboloid is \( 6.25\) m up the axis from the vertex. Because the vertex of this surface is the origin, the focal point is \( (0,0,6.25).\)

Seventeen standard quadric surfaces can be derived from the general equation

\[Ax^2+By^2+Cz^2+Dxy+Exz+Fyz+Gx+Hy+Jz+K=0.\]

The following figures summarize the most important ones.

Figure \(\PageIndex{13}\): Characteristics of Common Quadratic Surfaces: Ellipsoid, Hyperboloid of One Sheet, Hyperboloid of Two Sheets.

Figure \(\PageIndex{14}\): Characteristics of Common Quadratic Surfaces: Elliptic Cone, Elliptic Paraboloid, Hyperbolic Paraboloid.

Example \( \PageIndex{5}\): Identifying Equations of Quadric Surfaces

Identify the surfaces represented by the given equations.

a. \( 16x^2+9y^2+16z^2=144\)
b. \( 9x^2−18x+4y^2+16y−36z+25=0\)

Solution

a. The \( x,y,\) and \( z\) terms are all squared and are all positive, so this is probably an ellipsoid. However, let’s put the equation into the standard form for an ellipsoid just to be sure. We have

\[ 16x^2+9y^2+16z^2=144. \nonumber\]

Dividing through by 144 gives

\[ \dfrac{x^2}{9}+\dfrac{y^2}{16}+\dfrac{z^2}{9}=1. \nonumber\]

So, this is, in fact, an ellipsoid, centered at the origin.

b. We first notice that the \( z\) term is raised only to the first power, so this is either an elliptic paraboloid or a hyperbolic paraboloid. We also note there are \( x\) terms and \( y\) terms that are not squared, so this quadric surface is not centered at the origin. We need to complete the square to put this equation in one of the standard forms. We have

\[ \begin{align*} 9x^2−18x+4y^2+16y−36z+25&=0 \\[5pt] 9x^2−18x+4y^2+16y+25 &=36z \\[5pt] 9(x^2−2x)+4(y^2+4y)+25 &=36z \\[5pt] 9(x^2−2x+1−1)+4(y^2+4y+4−4)+25 &=36z \\[5pt] 9(x−1)^2−9+4(y+2)^2−16+25 &=36z \\[5pt] 9(x−1)^2+4(y+2)^2 &=36z \\[5pt] \dfrac{(x−1)^2}{4}+\dfrac{(y+2)^2}{9} &=z. \end{align*}\]

This is an elliptic paraboloid centered at \( (1,−2,0).\)

Exercise \( \PageIndex{3}\)

Identify the surface represented by equation \( 9x^2+y^2−z^2+2z−10=0.\)

Hint: Look at the signs and powers of the \( x,y\), and \( z\) terms.

Answer: Hyperboloid of one sheet, centered at \( (0,0,1)\).

Key Concepts

A set of lines parallel to a given line passing through a given curve is called a cylinder, or a cylindrical surface. The parallel lines are called rulings.
The intersection of a three-dimensional surface and a plane is called a trace.
To find the trace in the \(xy\)-, \(yz\)-, or \(xz\)-planes, set \( z=0, x=0,\) or \( y=0,\) respectively.
Quadric surfaces are three-dimensional surfaces with traces composed of conic sections. Every quadric surface can be expressed with an equation of the form \[Ax^2+By^2+Cz^2+Dxy+Exz+Fyz+Gx+Hy+Jz+K=0. \nonumber\]
To sketch the graph of a quadric surface, start by sketching the traces to understand the framework of the surface.
Important quadric surfaces are summarized in Figures \(\PageIndex{13}\) and \(\PageIndex{14}\).

Glossary

cylinder: a set of lines parallel to a given line passing through a given curve
ellipsoid: a three-dimensional surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}+\dfrac{z^2}{c^2}=1\); all traces of this surface are ellipses
elliptic cone: a three-dimensional surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}−\dfrac{z^2}{c^2}=0\); traces of this surface include ellipses and intersecting lines
elliptic paraboloid: a three-dimensional surface described by an equation of the form \( z=\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}\); traces of this surface include ellipses and parabolas
hyperboloid of one sheet: a three-dimensional surface described by an equation of the form \( \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}−\dfrac{z^2}{c^2}=1;\) traces of this surface include ellipses and hyperbolas
hyperboloid of two sheets: a three-dimensional surface described by an equation of the form \( \dfrac{z^2}{c^2}−\dfrac{x^2}{a^2}−\dfrac{y^2}{b^2}=1\); traces of this surface include ellipses and hyperbolas
quadric surfaces: surfaces in three dimensions having the property that the traces of the surface are conic sections (ellipses, hyperbolas, and parabolas)
rulings: parallel lines that make up a cylindrical surface
trace: the intersection of a three-dimensional surface with a coordinate plane

Contributors

Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
The Basic Comparison Test

Suppose that $0\le a_n\le b_n$ for all $n$. Then:

1. If $\displaystyle \sum_{n=1}^\infty b_n$ converges, so does $\displaystyle \sum_{n=1}^\infty a_n$.
2. If $\displaystyle \sum_{n=1}^\infty a_n$ diverges, so does $\displaystyle \sum_{n=1}^\infty b_n$.

In 1., it suffices that the inequality holds for all $n$ larger than some finite positive $N$, and similarly for 2.

The series $\displaystyle \sum_{n=1}^\infty\frac{2^n}{3^n+1}$ converges, since $$ \frac{2^n}{3^n+1}\le \frac{2^n}{3^n} $$ and we know that $\displaystyle \sum_{n=1}^\infty\left(\frac{2}{3}\right)^n$ is a convergent geometric series, with $r=\frac23<1$.

Example 1: The video explains the test, and looks at an example.
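As a quick numerical sanity check (my addition), the partial sums of the given series do stay below those of the dominating geometric series:

```python
# Compare partial sums of sum 2^n/(3^n+1) against the geometric bound.
a = sum(2**n / (3**n + 1) for n in range(1, 60))   # the given series
b = sum((2 / 3)**n for n in range(1, 60))          # dominating geometric series
print(a, b, a <= b)   # ~1.78, ~2.0, True
```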
If you are expecting someone to solve the Kerr metric equations, you probably need to hire a professional mathematician; but if you want an approximation, we can make that happen. Let's start with some simple results and eventually work our way to advanced results.

Objective

Our objective is to calculate the maximum one-dimensional stress tensor acting in the direction along the axis tangential to the assumed spherical surface of the black hole. Since all our forces come from the gravity of the black hole, they will all be acting in this direction, so I'm not going to use any vectors.

Assumptions

$\text{M}_{b}$ is the mass of the black hole and it equals $1.99\times10^{31} \text{ kg}$ (10 times the mass of the sun). The object in question is a 1 km long, 100 m radius cylindrical rod, with the rod aligned in the direction of a radial line from the center of the black hole outwards. The object has constant density $\rho = 1$; it's just not that important right now. The object is 'suspended' with its midpoint centered at 3/4 the Schwarzschild radius of the black hole.

Schwarzschild radius

Given by $$r_s = \frac{2GM}{c^2}$$ where $G$ is the universal gravitation constant ($6.67\times10^{-11}\frac{\text{N}\cdot\text{m}^2}{\text{kg}^2}$), $M$ is the mass of the black hole, and $c$ is the speed of light ($3.00\times10^{8}\frac{\text{m}}{\text{s}}$). Therefore $$r_s = \frac{2\cdot 6.67\times10^{-11} \cdot 1.99\times10^{31}}{(3.00\times10^{8})^2} = 29500 \text{ m}. $$ Therefore, with some generous rounding, the near-hole point of our rod is at 23 km and the far-hole point is at 24 km.

Force of gravity as a function of distance from near-hole point

Let us define a coordinate system in one dimension with $l = 0$ as the near-hole point, and $l = 1000$ (in meters) as the far-hole point of our rod. We will calculate the force of gravity on each infinitesimally small slice of the rod as a function of its $l$ coordinate. The force of gravity on a mass is $$F = G\frac{m_1m_2}{r^2}.$$ The mass per unit length of the rod (the distance derivative of its mass) is the density times the cross-sectional area: $\frac{dm}{dl} = \rho \pi (100)^2$. Therefore the distance derivative of the force of gravity on a slice is $$\frac{dF_{slice}}{dl} = 6.67\times10^{-11} \frac{1.99\times10^{31}\cdot \rho \pi (100)^2}{(23000 + l)^2} = \frac{4.17\times10^{25}}{(23000 + l)^2}.$$

Integrate the distance derivative of the force of gravity

To find the net force between points $l = a$ and $l = b$, we integrate the distance derivative of the force of gravity with respect to distance from the near-hole point. $$\int_a^b \frac{4.17\times10^{25}}{(23000 + l)^2} dl = \left.\frac{-4.17\times10^{25}}{23000 + l}\right|^b_a = -4.17\times10^{25}\left(\frac{1}{23000+b}-\frac{1}{23000+a}\right)$$ Solving this for the net force on the entire rod, we get $$-4.17\times10^{25}\left(\frac{1}{23000+1000}-\frac{1}{23000+0}\right) = 7.55\times10^{19} \text{ N}.$$ Now that force has to be counteracted by a 'lift' force keeping the rod out of the black hole. For simplification let us assume that the counteracting force acts equally on each slice of the rod, so each slice of the rod from $a$ to $b$ is pulled out of the black hole with force $$F_{lift} = -7.55\times10^{19}\cdot\frac{b-a}{1000}.$$ Note the force is negative because it is acting in the direction out of the hole.

Solve for stress at any point in the rod

In this simplification, the highest gravity force will be at the lowest point closest to $l = 0$.
Therefore, the stress-causing force at any distance $x$ along this rod is the net of gravity and lift for all slices below it minus the net of gravity and lift for all slices above it. $$\begin{align}F_{net} =&\left.\frac{-4.17\times10^{25}}{23000 + l}\right|^x_0 - 7.55\times10^{19}\cdot\frac{x-0}{1000}- \left.\frac{-4.17\times10^{25}}{23000 + l}\right|^{1000}_x + 7.55\times10^{19}\cdot\frac{1000-x}{1000} \\=& 3.55\times10^{21}-\frac{8.34\times10^{25}}{23000+x} + 7.55\times10^{16}\cdot (1000-2x)\end{align}$$ (The plot of the net force along the rod is omitted here.) The maximum force is $1.61\times10^{18} \text{ N}$ at $l=500$. Stress is defined as $\sigma = \frac{\text{F}}{\text{A}}$. The cross-sectional area is $\pi(100)^2 = 31416 \text{ m}^2$, so the maximum stress is $$\sigma = \frac{1.61\times10^{18} \text{ N}}{31416 \text{ m}^2} = 5.12\times10^{13} \text{ Pa}.$$ Conclusion The calculation works and produces logical results. Stress should be zero at the ends of the rod (there is nothing to pull away from) and should be maximum in the center. The stress produced is very high, as would be expected 23 km from the center of a black hole. The required yield strength is about 51 TPa. The required material strength is probably not achievable with any known or theoretical material. I can't find anything with a yield strength over 1 TPa, much less 51.
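For readers who want to reproduce these numbers, here is a minimal Python sketch using the same rounded constants as the text (the grid step of 10 m is an arbitrary choice):

# Reproduce the headline numbers above with the text's rounded constants.
import math

G, M, c = 6.67e-11, 1.99e31, 3.00e8
print(2*G*M/c**2)                   # Schwarzschild radius, ~2.95e4 m

k = G*M*1.0*math.pi*100.0**2        # numerator of dF/dl (rho = 1), ~4.17e25
print(k*(1/23000.0 - 1/24000.0))    # net gravitational force, ~7.55e19 N

def F_net(x):                       # closed form derived above
    return 3.5505e21 - 8.34e25/(23000.0 + x) + 7.55e16*(1000.0 - 2.0*x)

F_max, x_max = max((F_net(x), x) for x in range(0, 1001, 10))
print(x_max, F_max, F_max/31416.0)  # ~500 m, ~1.6e18 N, ~5e13 Pa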
Intro.tex \section{Introduction} \subsection{Basic theorem statements} The purpose of this paper is to give the first elementary proof of the density Hales--Jewett theorem. This theorem, first proved by Furstenberg and Katznelson~\cite{FK89,FK91}, has the same relation to the Hales--Jewett theorem~\cite{HJ63} as Szemer\'edi's theorem~\cite{Sze75} has to van der Waerden's theorem~\cite{vdW27}. Before we go any further, let us state all four theorems. We shall use the notation $[k]$ to stand for the set $\{1,2,\dotsc,k\}$. If $X$ is a set and $r$ is a positive integer, then an $r$-\textit{colouring} of $X$ will mean a function $\kappa\colon X\rightarrow [r]$. A subset $Y$ of $X$ is called \textit{monochromatic} if $\kappa(y)$ is the same for every $y\in Y$. First, let us state van der Waerden's theorem and Szemer\'edi's theorem. \begin{named}{van der Waerden's Theorem} \label{thm:vdw} For every pair of positive integers $k$ and $r$ there exists $N$ such that for every $r$-colouring of $[N]$ there is a monochromatic arithmetic progression of length $k$. \end{named} \begin{named}{Szemer\'edi's Theorem} \label{thm:szem} For every positive integer $k$ and every $\delta>0$ there exists $N$ such that every subset $A\subseteq[N]$ of size at least $\delta N$ contains an arithmetic progression of length $k$. \end{named} It is usually better to focus on the \textit{density} $\abs{A}/N$ of a subset $A\subseteq [N]$ rather than on its cardinality, since this gives us a parameter that we can think of independently of $N$. Szemer\'edi's theorem is often referred to as the \textit{density version} of van der Waerden's theorem. To state the Hales--Jewett theorem, we need a little more terminology. The theorem is concerned with subsets of $[k]^n$, elements of which we refer to as \emph{points} (or \emph{strings}). Instead of looking for arithmetic progressions, the Hales--Jewett theorem looks for \emph{combinatorial lines}. A combinatorial line in $[k]^n$ is a set of $k$ points $\{x^{(1)}, \dots, x^{(k)}\}$ formed as follows: Given a line \emph{template}, which is a string $\lambda \in ([k] \cup \{\wild\})^n$, the associated combinatorial line is formed by setting $x^{(i)}$ to be the point given by changing each wildcard symbol `$\wild$' in $\lambda$ to the symbol `$i$'. For instance, when $k = 3$ and $\lambda = 1 3 \wild \wild 2 2 1 \wild$, the associated combinatorial line is the following set of $3$ points:\[\{13\bs{1}\bs{1}221\bs{1}, 13\bs{2}\bs{2}221\bs{2}, 13\bs{3}\bs{3}221\bs{3}\}.\](We exclude \emph{degenerate} combinatorial lines, those that arise from templates with no $\wild$'s. More formal definitions are given in Section~\ref{sec:defs}.)\ignore{Let $x=(x_1,\dots,x_n)$ be an element of $[k]^n$, let $W\subset[n]$ and let $j\in[k]$. Let us write $x\oplus jW$ for the sequence $(y_1,\dots,y_n)$ such that $y_i=x_i$ if $i\notin W$ and $y_i=j$ if $i\in W$. That is, we take the sequence $x$ and overwrite the terms with index in $W$ with a $j$. For example, if $k=3$, $n=5$, then \begin{equation*}(1,1,3,2,1)\oplus 2\{1,4,5\}=(2,1,3,2,2).\end{equation*}The \textit{combinatorial line} $x\oplus[k]W$ is the set $\{x\oplus jW:j\in [k]\}$. For instance, \begin{equation*}(1,1,3,2,1)\oplus[3]\{1,4,5\}=\{(1,1,3,1,1),(2,1,3,2,2),(3,1,3,3,3)\}.\end{equation*}It may seem strange that the combinatorial line $x\oplus [k]W$ does not depend on the values of $x_i$ with $i\in W$, but this turns out to be a useful convention and makes it possible to state results in a concise way.} We are now ready to state the Hales--Jewett theorem.
\begin{named}{Hales--Jewett theorem}\label{thm:hj} For every pair of positive integers $k$ and $r$ there exists a positive integer $\hj{k}{r}$ such that for every $r$-colouring of the set $[k]^n$ there is a monochromatic combinatorial line, provided $n \geq \hj{k}{r}$. \end{named} As with van der Waerden's theorem, we may consider the density version of the Hales--Jewett theorem, where the density of $A \subseteq [k]^n$ is $\abs{A}/k^n$. The following theorem was first proved by Furstenberg and Katznelson. \begin{named}{Density Hales--Jewett theorem} \label{thm:dhj} For every positive integer $k$ and real $\delta>0$ there exists a positive integer $\dhj{k}{\delta}$ such that every subset of $[k]^n$ of density at least $\delta$ contains a combinatorial line, provided $n \geq \dhj{k}{\delta}$. \end{named} We sometimes write ``DHJ($k$)'' to mean the $k$ case of this theorem.\noteryan{A bit confusing, since $\dhj{k}{\delta}$ is a number in the above theorem. Hmm.} The first nontrivial case, DHJ($2$), is a weak version of Sperner's theorem~\cite{Spe28}; we discuss this further in Section~\ref{sec:sperner}. We also remark that the Hales--Jewett theorem immediately implies van der Waerden's theorem, and likewise for the density versions. To see this, temporarily interpret $[m]$ as $\{0, 1, \dotsc, m-1\}$ rather than $\{1, 2, \dotsc, m\}$, and identify integers in $[N]$ with their base-$k$ representation in $[k]^n$.\noteryan{(we may assume $N$ is a power of $k$)} It's then easy to see that a combinatorial line in $[k]^n$ is a length-$k$ arithmetic progression in $[N]$; specifically, if the line's template is $\lambda$, with $S = \{i : \lambda_i = \star\}$, then the progression's common difference is $\sum_{i \in S} k^{n-i}$. In this paper, we give a new, elementary proof of the density Hales--Jewett theorem, achieving quantitative bounds: \begin{theorem} \label{thm:our-dhj} In the density Hales--Jewett theorem, one may take $\dhj{3}{\delta} = 2 \upuparrows O(1/\delta^3)$. For $k \geq 4$, the bound $\dhj{k}{\delta}$ we achieve is of Ackermann type.\noteryan{cop-out} \end{theorem} \noindent Here we use the notation $x \uparrow y$ for $x^y$, $x \uparrow^{(\ell)} y$ for $x \uparrow x \uparrow \dotsm \uparrow x \uparrow y$ (with $\ell$ many $\uparrow$'s), and $x \upuparrows y$ for $x \uparrow^{(y)} x$. We add that it is not too hard to obtain the following extensions to this result: (i)~a \emph{multidimensional} version, in which one finds higher-dimensional combinatorial subspaces;\ignore{We remark here that it is also possible to deduce the multidimensional Szemer\'edi theorem from the density Hales--Jewett theorem. Previously there were two known approaches: the ergodic approach and approaches that use hypergraph regularity. } (ii)~a \emph{probabilistic} version, in which one shows that a randomly chosen combinatorial line (from a suitable distribution) is in the subset with positive probability depending only on $k$ and $\delta$;\noteryan{name-check Varnavides?} (iii)~the combined probabilistic multidimensional version. Indeed, to prove Theorem~\ref{thm:our-dhj} we found it necessary to obtain these extensions in passing. See Section~\ref{sec:concepts} for more detailed statements. \subsection{Some discussion} Why is it interesting to give a new proof of this result? There are two main reasons. The first is connected with the history of results in this area.
One of the main benefits of Furstenberg's proof of Szemer\'edi's theorem was that it introduced a technique---ergodic methods---that could be developed in many directions, which did not seem to be the case with Szemer\'edi's proof. As a result, many far-reaching generalizations of Szemer\'edi's theorem were proved, and for a long time nobody could prove them in any other way than by using Furstenberg's methods. In the last few years that has changed, and a programme has developed to find new and finitary proofs of the results that were previously known only by infinitary ergodic methods. \noteryan{Citations would be good here; which are the far-reaching generalisations? Also, Tim's paper on Szemer\'edi is maybe 9 years old now\dots does that count as ``few years ?} Giving a non-ergodic proof of the density Hales--Jewett theorem was seen as a key goal for this programme: on the ergodic side it appeared to be significantly harder than Szemer\'edi's theorem, and this seemed to be reflected by certain significant combinatorial difficulties that we shall discuss later in this paper.\noteryan{Which difficulties does this refer to?} Having given a purely combinatorial proof, we are able to obtain explicit bounds for how large $n$ needs to be as a function of $\delta$ and $k$ in the density Hales--Jewett theorem (although admittedly the bounds for $k \geq 4$ are terrible and even the $k = 3$ bound is not especially good). Such bounds could not be obtained via the ergodic methods even in principle, since these proofs rely on the Axiom of Choice\noteryan{I just heard someone say this; is it true?}.\noteryan{maybe expand slightly on this, and/or cite the sister paper} A second reason is that the density Hales--Jewett theorem immediately implies Szemer\'edi's theorem, and finding a new proof of Szemer\'edi's theorem seems always to be illuminating---or at least this has been the case for the four main approaches discovered so far.\noteryan{Citations here.} In retrospect, one can add to this the fact that the proof we have discovered is, against all our expectations\noteryan{a strong phrase\dots}, \textit{simpler} than the previous approaches to Szemer\'edi's theorem; the most advanced notion needed is that of the total variation distance between discrete probability distributions. It seems that by looking at a more general problem we have removed some of the difficulty. Related to this is another surprise. We started out by trying to prove the first difficult case of the theorem, which is when $k=3$. The experience of all four of the earlier proofs of Szemer\'edi's theorem has been that interesting ideas are needed to prove results about progressions of length $3$, but significant extra difficulties arise when one tries to generalize an argument from the length-$3$ case to the general case. However, it turned out that once we had proved the case $k=3$ of the density Hales--Jewett theorem, it was straightforward to generalize the argument to the general case. We still do not have a convincing explanation of why our proof should differ from all the others in this respect.\noteryan{Perhaps we should think of one :)} \ignore{Here, briefly, is how to deduce Szemer\'edi's theorem from the density Hales--Jewett theorem. Given $k$ and $\delta$, let $n$ be such that the density Hales--Jewett theorem holds for $k$ and $\delta$ and let $N=k^n$. There is no harm in thinking of $[N]$ as the set $\{0,1,\dots,N-1\}$. 
Now given $A \subset [N]$ of density $\delta$, we can write the integers in $A$ in base $k$, thus thinking of it as a subset of density $\delta$ in~$[k]^n$ (where again, there is no harm in thinking of $[k]$ as $\{0, 1, \dots, k-1\}$). The density Hales--Jewett theorem gives a nondegenerate combinatorial line in $A$, and it is easy to see that this corresponds to a length-$k$ arithmetic progression: if the line's template is $\lambda$, with $S = \{i : \lambda_i = \star\}$, then the progression's common difference is $\sum_{i \in S} k^{n-i}$. \ignore{Thinking of $[k]$ as the set $\{0,1,\dots,k-1\}$ and $[N]$ as the set $\{0,1,\dots,N-1\}$, let $\phi:[k]^n\rightarrow[N]$ be the map that takes the sequence $(x_1,\dots,x_n)$ to the number $x_1+x_2k+x_3k^2+\dots+x_nk^{n-1}$, and note that $\phi$ is a bijection.\noteryan{Could we say here the catchier phrase, ``write the integers in $A$ in base $k$ ?} Now let $A\subset[N]$ be a set of density $\delta$. Then $\phi^{-1}(A)$ is a subset of $[k]^n$ of density $\delta$, so by the density Hales--Jewett theorem it contains a combinatorial line. It is easy to see that the image of a combinatorial line under $\phi$ is an arithmetic progression of length $k$: indeed, if the combinatorial line is $x\oplus[k]W$,\noteryan{should convert to new notation} then the first term of the arithmetic progression is $\phi(x\oplus 0W)$ and the common difference is $\sum_{i\in W}k^i$. A very similar argument shows that the Hales--Jewett theorem implies van der Waerden's theorem.}} Before we start working towards the proof of the theorem, we would like briefly to mention that it was proved in a rather unusual ``open source" way, which is why it is being published under a pseudonym. The work was done by several mathematicians, who wrote their thoughts, as they had them, in the form of blog comments.\noteryan{I would like to at least consider adding a link or statement that it was on Tim's blog. Otherwise, the only identifying part of the paper is the wiki's URL, which might make it seem like Michael Nielsen was the mastermind. Okay with you, Tim?} Anybody who wanted to could participate, and at all stages of the process the comments were fully open to anybody who was interested. This was in complete contrast to the usual way that results are proved in private and presented in a finished form. The blog comments are still available, so although this paper is a somewhat polished account of the argument, it is possible to read a record of the entire thought process that led to the proof. The participants also created a wiki, which contains sketches of the argument, links to the blog comments, and a great deal of related material. The wiki's URL is \url{http://michaelnielsen.org/polymath1/}.\\ \notetim{Somewhere we should mention Tim Austin's work and how it relates to ours. Perhaps Terry is best placed to do this.} \subsection{Outline of the paper} XXX ``table of contents paragraph to be filled in XXX
Measurement Category : 5th Class Measurement Learning Objectives Perimeter Perimeter refers to the length of the boundary line which surrounds the area occupied by a geometrical shape. Perimeters of different geometrical shapes are explained below. A. Perimeter of a Triangle A triangle has three sides. The perimeter of a triangle is the sum of all its three sides. Perimeter of the triangle \[ABC=AB+BC+CA\] B. Perimeter of a Quadrilateral Perimeter of a quadrilateral is the sum of the lengths of its four sides. In quadrilateral ABCD, perimeter \[=AB+BC+CD+DA\] C. Perimeter of a Rectangle Perimeter of a rectangle = 2 (Length + Breadth). D. Perimeter of a Square \[=\mathbf{4}\times \mathbf{side}\]. Perimeter of the square \[ABCD=4\times AB\] E. Perimeter of a Circle Perimeter of a circle \[=2\pi r\], where \[\pi =\frac{22}{7}\approx 3.14\] and r = radius of the circle. Area Every geometrical shape occupies some space. The space occupied by a geometrical shape is called the area of that shape. The shaded part in the figures above represents area. The unit of area is \[c{{m}^{2}}\] or \[{{m}^{2}}\]. Areas of different geometrical shapes are listed below. A. Area of a Triangle Area of a triangle \[=\frac{1}{2}\times \text{base}\times \text{height}\], where base is one side of the triangle and height is the length of the line segment drawn at \[90{}^\circ\] to that base. B. Area of a Rectangle Area of a rectangle \[=\text{length}\times \text{breadth}\]. Area of the rectangle \[PQRS=PQ\times QR\], where PQ is the length and QR is the breadth. C. Area of a Square Area of a square \[=\text{sid}{{\text{e}}^{2}}=\text{side}\times \text{side}\]. Area of the square \[PQRS=PQ\times PQ=P{{Q}^{2}}\]. D. Area of a Circle Area of the circle \[=\pi {{r}^{2}}\], where \[\pi =\frac{22}{7}\approx 3.14\]. Commonly Asked Questions 1. Find the perimeter of the given figure. (a) 22.45 cm (b) 23.50 cm (c) 20.15 cm (d) 15.55 cm (e) None of these Answer: (b) Solution: Perimeter of the figure \[=4\text{ }cm+3\text{ }cm+4\text{ }cm+2.5\text{ }cm+5\text{ }cm+5\text{ }cm=23.50\text{ }cm\]. 2. Find the perimeter of the following triangle. (a) 14.7 cm (b) 13.2 cm (c) 13.2 cm (d) 16.5 cm (e) None of these Answer: (a) Solution: Perimeter of the triangle PQR \[=4\text{ }cm+4.7\text{ }cm+6\text{ }cm=14.7\text{ }cm\] 3. Find the perimeter of the following quadrilateral. (a) 12 cm (b) 10 cm (c) 15 cm (d) 19 cm (e) None of these Answer: (c) Solution: Perimeter of the quadrilateral \[=PS+SR+RQ+QP=5\text{ }cm+3\text{ }cm+4\text{ }cm+3\text{ }cm=15\text{ }cm\] 4. Find the perimeter of the rectangle whose length is 12 cm and breadth is 8 cm. (a) 40 cm (b) 20 cm (c) 15 cm (d) 30 cm (e) None of these Answer: (a) Solution: Perimeter of the rectangle \[=2\left( 12+8 \right)=40\text{ }cm\]. 5. Find the perimeter of the square whose length of one side is 9 cm. (a) 32 cm (b) 31 cm (c) 36 cm (d) 15 cm (e) None of these Answer: (c) Solution: Perimeter of a square \[=4\times side=4\times 9\text{ }cm=36\text{ }cm\] 6. If the radius of a circle is 0.35 cm, find the perimeter of the circle. (a) 2.2 cm (b) 2.1 cm (c) 2.3 cm (d) 3.1 cm (e) None of these Answer: (a) Solution: Perimeter of the circle \[=2\pi r=2\times \frac{22}{7}\times 0.35\,\,cm=2.2\text{ }cm\] 7. Find the area of the triangle whose base is 75 cm and height is 80 cm.
(a) \[3000\,c{{m}^{2}}\] (b) \[1500\,c{{m}^{2}}\] (c) \[3500\,c{{m}^{2}}\] (d) \[2000\,c{{m}^{2}}\] (e) None of these Answer: (a) Solution: Area of the triangle \[=\frac{1}{2}\times b\times h=\frac{1}{2}\times 75\text{ }cm\times 80\text{ }cm=3000\text{ }c{{m}^{2}}\] 8. Find the area of the rectangle whose length is 17 cm and breadth is 15 cm. (a) \[253\,c{{m}^{2}}\] (b) \[255\,c{{m}^{2}}\] (c) \[241\text{ }c{{m}^{2}}\] (d) \[234\text{ }c{{m}^{2}}\] (e) None of these Answer: (b) Solution: Area of the rectangle \[=l\times b=17\text{ }cm\times 15\text{ }cm=255\text{ }c{{m}^{2}}\] 9. Find the area of the square whose length of each side is 21 cm. (a) \[441\,c{{m}^{2}}\] (b) \[420\,c{{m}^{2}}\] (c) \[244\,c{{m}^{2}}\] (d) \[211\,c{{m}^{2}}\] (e) None of these Answer: (a) Solution: Area of the square \[=\text{side}\times \text{side}=21\text{ }cm\times 21\text{ }cm=441\text{ }c{{m}^{2}}\] 10. Find the area of the circle whose radius is 0.28 cm. (a) \[0.2342\,c{{m}^{2}}\] (b) \[0.2251\,c{{m}^{2}}\] (c) \[0.2464\,c{{m}^{2}}\] (d) \[0.2142\,c{{m}^{2}}\] (e) None of these Answer: (c) Solution: Area of the circle \[=\pi {{r}^{2}}=\frac{22}{7}\times 0.28\text{ }cm\times 0.28\text{ }cm=0.2464\text{ }c{{m}^{2}}\] 11. Find the volume of the cuboid whose length, breadth and height are 15 cm, 13 cm and 14 cm respectively. (a) \[2507\,c{{m}^{3}}\] (b) \[2730\,c{{m}^{3}}\] (c) \[2302\,c{{m}^{3}}\] (d) \[2350\,c{{m}^{3}}\] (e) None of these Answer: (b) Solution: Volume of the cuboid \[=l\times b\times h=15\text{ }cm\times 13\text{ }cm\times 14\text{ }cm=2730\text{ }c{{m}^{3}}\] Volume In our daily life a number of things are stored in different kinds of containers. The holding capacity of a container is called its volume. Volume of a Cuboid Volume of a cuboid = length \[\times\] breadth \[\times\] height = lbh. Volume of the cuboid \[ABCDEFGH=AB\times AE\times BC\], where length = AB, breadth = AE and height = BC. Volume of a Cube Volume of a cube \[=sid{{e}^{3}}=side\times side\times side\]
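The worked answers above are easy to check with a few lines of arithmetic; here is a small illustrative Python script using pi = 22/7 as in the lesson:

# Check a few of the worked answers above, using pi = 22/7 as in the lesson.
pi = 22/7
print(2*(12 + 8))        # perimeter of 12 cm x 8 cm rectangle: 40 cm
print(4*9)               # perimeter of square with 9 cm side: 36 cm
print(2*pi*0.35)         # perimeter of circle, r = 0.35 cm: 2.2 cm
print(0.5*75*80)         # area of triangle, base 75 cm, height 80 cm: 3000 cm^2
print(pi*0.28*0.28)      # area of circle, r = 0.28 cm: 0.2464 cm^2
print(15*13*14)          # volume of cuboid: 2730 cm^3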
The Bulldozers and the Bee Contents Problem Two bulldozers, $20$ miles apart, drive towards each other, each travelling at $10$ miles per hour. A bee, which flies at $20$ miles per hour, starts at one bulldozer and heads towards the other. As soon as the bee reaches the other bulldozer, it reverses direction instantaneously and heads off at $20$ miles per hour back towards the first bulldozer. It continues to do this until the bulldozers collide, squashing the bee between them and killing her. The question is: how far does the bee fly before the collision? Solution This is frequently asked as a trick question. Let $d$ be the total distance the bee travels. Let $D_1$ be the initial separation of the bulldozers in miles. Let $d_n$ be the distance the bee travels on the $n$th leg of her journey. Let $d'_n$ be the distance that one of the bulldozers travels during the time the bee travels $d_n$. Let $D_n$ be the distance the bulldozers are apart at the start of each leg of the journey. The bee travels twice as fast as each of the bulldozers. So on each leg, $d_n = 2 d'_n$. Consider the $m$th leg of the journey. The bee travels $d_m$, and each bulldozer travels $\dfrac {d_m} 2$. The bee's distance plus the oncoming bulldozer's distance equals $D_m$. Therefore $d_m = \dfrac {2 D_m} 3$, while $d'_m = \dfrac {D_m} 3$. At the start of leg $m + 1$, both bulldozers have covered the distance $\dfrac {D_m} 3$. So at the start of leg $m + 1$, the bulldozers are $D_{m+1} = D_m - \dfrac {2 D_m} 3 = \dfrac {D_m} 3$ apart. This gives us a recurrence formula: $\displaystyle d_n = \begin{cases} \dfrac {2 D_1} 3 & : n = 1 \\ \dfrac {d_{n-1}} 3 & : n > 1 \end{cases}$ It can be seen that the answer can be calculated as the Sum of a Geometric Progression, and comes out as $20$ miles. $\blacksquare$ The Short Answer The bulldozers are travelling at $10$ mph and are $20$ miles apart. Therefore they travel $10$ miles each and collide after $1$ hour. The bee is flying at $20$ mph and therefore travels $20$ miles in that time. $\blacksquare$ Pointless quibbles Whether a bee can actually fly at $20$ miles per hour is doubtful, let alone sustain that speed for a whole hour. I may be completely wrong. This may be completely reasonable. Even if she could, she could not reverse direction instantaneously. The laws of physics are completely against it. The problem of The Bulldozers and the Bee has been phrased in several different forms. One of many apocryphal tales concerning John von Neumann is that he was asked this question. He instantly gave the answer. "So you've heard this one then? You solved it the quick way?" he was asked. "I solved it by summing an infinite geometric progression. There's a quicker way?" was the reply. The point is that there are (at least) two ways to solve the problem, and they come to the same value. That is: $\displaystyle 20 \times 2 \sum_{n \mathop \ge 1} \paren {\frac 1 3}^n = 20$
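For the sceptical, the two solutions can also be checked against each other numerically; here is a minimal Python sketch (the cutoff of $60$ legs is an arbitrary choice):

# Check the two solutions agree: sum the legs of the bee's flight and
# compare with the short answer (bee flies 20 mph for 1 hour = 20 miles).
D = 20.0                      # initial separation in miles
total = 0.0
for _ in range(60):           # 60 legs is plenty; D shrinks by 1/3 each leg
    leg = 2.0 * D / 3.0       # the bee covers 2/3 of the current separation
    total += leg
    D /= 3.0                  # the gap shrinks to 1/3 of its previous value
print(total)                  # ~20 miles, matching the short answer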
To every cyclic subgroup $\langle m \rangle $ of the Monster $\mathbb{M} $ is associated a function $f_m(\tau)=\frac{1}{q}+a_1q+a_2q^2+\ldots $ with $q=e^{2 \pi i \tau} $ and all coefficients $a_i \in \mathbb{Z} $ are characters at $m $ of a representation of $\mathbb{M} $. These representations are the homogeneous components of the so called Moonshine module. Each $f_m $ is a principal modulus for a certain genus zero congruence group commensurable with the modular group $\Gamma = PSL_2(\mathbb{Z}) $. These groups are called the moonshine groups. Conway and Norton showed that there are exactly 171 different functions $f_m $ and associated two arithmetic subgroups $F(m) \subset E(m) \subset PSL_2(\mathbb{R}) $ to them (in most cases, but not all, these two groups coincide). Whereas there is an extensive literature on subgroups of the modular group (see for instance the series of posts starting here), most moonshine groups are not contained in the modular group. So, we need a tool to describe them and here’s where Conway’s big picture comes in very handy. All moonshine groups are arithmetic groups, that is, they are subgroups $G $ of $PSL_2(\mathbb{R}) $ which are commensurable with the modular group $\Gamma = PSL_2(\mathbb{Z}) $ meaning that the intersection $G \cap \Gamma $ is of finite index in both $G $ and in $\Gamma $. Conway’s idea is to view several of these groups as point- or set-wise stabilizer subgroups of finite sets of (projective) commensurable 2-dimensional lattices. Start with a fixed two dimensional lattice $L_1 = \mathbb{Z} e_1 + \mathbb{Z} e_2 = \langle e_1,e_2 \rangle $ and we want to name all lattices of the form $L = \langle v_1= a e_1+ b e_2, v_2 = c e_1 + d e_2 \rangle $ that are commensurable to $L_1 $. Again this means that the intersection $L \cap L_1 $ is of finite index in both lattices. From this it follows immediately that all coefficients $a,b,c,d $ are rational numbers. It simplifies matters enormously if we do not look at lattices individually but rather at projective equivalence classes, that is $~L=\langle v_1, v_2 \rangle \sim L’ = \langle v’_1,v’_2 \rangle $ if there is a rational number $\lambda \in \mathbb{Q} $ such that $~\lambda v_1 = v’_1, \lambda v_2=v’_2 $. Further, we are of course allowed to choose a different ‘basis’ for our lattices, that is, $~L = \langle v_1,v_2 \rangle = \langle w_1,w_2 \rangle $ whenever $~(w_1,w_2) = (v_1,v_2).\gamma $ for some $\gamma \in PSL_2(\mathbb{Z}) $. Using both operations we can get any lattice in a specific form. For example, $\langle \frac{1}{2}e_1+3e_2,e_1-\frac{1}{3}e_2 \rangle \overset{(1)}{=} \langle 3 e_1+18e_2,6e_1-2e_2 \rangle \overset{(2)}{=} \langle 3 e_1+18 e_2,38 e_2 \rangle \overset{(3)}{=} \langle \frac{3}{38}e_1+\frac{9}{19}e_2,e_2 \rangle $ Here, identities (1) and (3) follow from projective equivalence and identity (2) from a base-change.
In general, any lattice $L $ commensurable to the standard lattice $L_1 $ can be rewritten uniquely as $L = \langle Me_1 + \frac{g}{h} e_2,e_2 \rangle $ where $M $ is a positive rational number and $0 \leq \frac{g}{h} < 1 $. Another major feature is that one can define a symmetric hyperdistance between (equivalence classes of) such lattices. Take $L=\langle Me_1 + \frac{g}{h} e_2,e_2 \rangle $ and $L’=\langle N e_1 + \frac{i}{j} e_2,e_2 \rangle $ and consider the matrix $D_{LL’} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} N & \frac{i}{j} \\ 0 & 1 \end{bmatrix}^{-1} $ and let $\alpha $ be the smallest positive rational number such that all entries of the matrix $\alpha.D_{LL’} $ are integers, then $\delta(L,L’) = det(\alpha.D_{LL’}) \in \mathbb{N} $ defines a symmetric hyperdistance which depends only on the equivalence classes of lattices (hyperdistance because the log of it behaves like an ordinary distance). Conway’s big picture is the graph obtained by taking as its vertices the equivalence classes of lattices commensurable with $L_1 $ and with edges connecting any two lattices separated by a prime number hyperdistance. Here’s part of the 2-picture, that is, only depicting the edges of hyperdistance 2. The 2-picture is an infinite 3-valent tree as there are precisely 3 classes of lattices at hyperdistance 2 from any lattice $L = \langle v_1,v_2 \rangle $ namely (the equivalence classes of) $\langle \frac{1}{2}v_1,v_2 \rangle~,~\langle v_1, \frac{1}{2} v_2 \rangle $ and $\langle \frac{1}{2}(v_1+v_2),v_2 \rangle $. Similarly, for any prime hyperdistance p, the p-picture is an infinite $(p+1)$-valent tree and the big picture is the product over all these prime trees. That is, two lattices at square-free hyperdistance $N=p_1p_2\ldots p_k $ are two corners of a k-cell in the big picture! (Astute readers of this blog (if such people exist…) may observe that Conway’s big picture did already appear here prominently, though in disguise. More on this another time). The big picture presents a simple way to look at arithmetic groups and makes many facts about them visually immediate. For example, the point-stabilizer subgroup of $L_1 $ clearly is the modular group $PSL_2(\mathbb{Z}) $. The point-stabilizer of any other lattice is a certain conjugate of the modular group inside $PSL_2(\mathbb{R}) $. For example, the stabilizer subgroup of the lattice $L_N = \langle Ne_1,e_2 \rangle $ (at hyperdistance N from $L_1 $) is the subgroup $\left\{ \begin{bmatrix} a & \frac{b}{N} \\ Nc & d \end{bmatrix}~\middle|~\begin{bmatrix} a & b \\ c & d \end{bmatrix} \in PSL_2(\mathbb{Z})~\right\} $ Now the intersection of these two groups is the modular subgroup $\Gamma_0(N) $ (consisting of those modular group elements whose lower left-hand entry is divisible by N). That is, the proper way to look at this arithmetic group is as the joint stabilizer of the two lattices $L_1,L_N $. The picture makes it trivial to compute the index of this subgroup. Consider the ball $B(L_1,N) $ with center $L_1 $ and hyper-radius N (on the left, the ball with hyper-radius 4). Then, it is easy to show that the modular group acts transitively on the boundary lattices (including the lattice $L_N $), whence the index $[ \Gamma : \Gamma_0(N)] $ is just the number of these boundary lattices. For N=4 the picture shows that there are exactly 6 of them.
In general, it follows from our knowledge of all the p-trees that the number of lattices at hyperdistance N from $L_1 $ is equal to $N \prod_{p | N}(1+ \frac{1}{p}) $, in accordance with the well-known index formula for these modular subgroups! But there are many other applications of the big picture: it gives a simple interpretation of the Hecke operators, an elegant proof of the Atkin-Lehner theorem on the normalizer of $\Gamma_0(N) $ (the whimsical source of appearances of the number 24) and of Helling’s theorem characterizing maximal arithmetical groups inside $PSL_2(\mathbb{C}) $ as conjugates of the normalizers of $\Gamma_0(N) $ for square-free N. J.H. Conway’s paper “Understanding groups like $\Gamma_0(N) $” containing all this material is a must-read! Unfortunately, I do not know of an online version.
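In practice the hyperdistance is easy to compute from the normal form $\langle Me_1+\frac{g}{h}e_2,e_2 \rangle $; here is a small Python sketch using exact rational arithmetic (encoding a lattice class by the pair $(M, g/h) $ and the test values are my own choices, for illustration):

# Hyperdistance between <M e1 + (g/h) e2, e2> and <N e1 + (i/j) e2, e2>,
# following the matrix recipe above. Requires Python 3.9+ for math.lcm.
from fractions import Fraction
from math import lcm

def hyperdistance(M, gh, N, ij):
    # D = [[M, g/h],[0,1]] . [[N, i/j],[0,1]]^{-1} is upper triangular with
    # diagonal (M/N, 1); alpha is the lcm of the denominators of its entries.
    a = Fraction(M) / Fraction(N)          # top-left entry of D
    b = Fraction(gh) - a * Fraction(ij)    # top-right entry of D
    alpha = lcm(a.denominator, b.denominator)
    return int(alpha * a * alpha)          # det(alpha * D)

print(hyperdistance(2, 0, 1, 0))                          # L_2 vs L_1 -> 2
print(hyperdistance(Fraction(1, 2), Fraction(1, 2), 1, 0))  # <(v1+v2)/2, v2> -> 2
print(hyperdistance(4, 0, 1, 0))                          # L_4 vs L_1 -> 4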
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map \[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$. An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$ and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$. The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders. Very little seems to be known about this monoid. Its units form the symmetric group $S_3$ which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers. If one relaxes the condition of $q(t) \in \mathbb{Q}(t)$ to being defined over its algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6). Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit-behaviour of the orbits \[ \{ z,q(z),q^2(z),q^3(z),\ldots \} \] for all complex numbers $z \in \mathbb{C}$. In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the orbits is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture. Let’s take again the power map $q(t)=t^n$. For a complex number $z$ lying outside the unit disc, the orbit $\{ z,z^n,z^{n^2},\ldots \}$ has limit point $\infty$ and for those lying inside the unit circle, this limit is $0$. So, here we have two Fatou domains (interior and exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle. Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature, they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect). There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps. A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$. Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is some $q^k(p)$ is periodic, but $p$ itself is not periodic. For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin.
Let’s do an example, already used by Sullivan himself: \[ q(t) = (\frac{t-2}{t})^2 \] This is a Belyi-function, and in fact a Belyi-extender as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is shown (inverse images of $\infty$ are marked with an $\ast$). The critical points $0$ and $2$ are not periodic, but they become eventually periodic: \[ 2 \rightarrow^q 0 \rightarrow^q \infty \rightarrow^q 1 \rightarrow^q 1 \] and $1$ is periodic. For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$ and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic. If we want to avoid the corresponding dynamical system being completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) is critical. Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$; then all three of them are periodic. So, the system is always completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two. Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, it might explain why it is a good idea to restrict to very special Belyi-extenders having associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
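Sullivan’s criterion can be checked mechanically for this example; below is a small Python sketch that tracks the critical orbits on the Riemann sphere (using the string 'oo' for the point at infinity is my own encoding choice):

# Track the orbits of the critical points of q(t) = ((t-2)/t)^2 on the
# Riemann sphere; 'oo' stands for the point at infinity (illustrative).
oo = 'oo'

def q(z):
    if z == oo:
        return 1.0          # q(oo) = 1
    if z == 0:
        return oo           # pole at 0
    return ((z - 2.0) / z) ** 2

for p in (2.0, 0.0):        # the two critical points
    orbit, z = [p], p
    for _ in range(5):
        z = q(z)
        orbit.append(z)
    print(orbit)            # 2 -> 0 -> oo -> 1 -> 1 -> ... eventually periodic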
tl;dr: You can teach your machine to break an arbitrary Caesar cipher by observing enough training examples, using Trust Region Policy Optimization for Policy Gradients. Full text: Imagine a world where the hammer was introduced to the public just a couple of years ago. Everyone is running around trying to apply the hammer to anything that even resembles a nail. This is the world we are living in, and the hammer is deep learning. Today I will be applying it to a task that can be solved much more easily by other means but hey, it's the Deep Learning Age! Specifically, I will teach my machine to break a simple cipher like the Caesar cipher just by looking at several (actually, a lot of) examples of English text and corresponding encoded strings. You may have heard that machines are getting pretty good at playing games, so I decided to formulate this code breaking challenge as a game. Fortunately there is the OpenAI Gym toolkit that can be used "for developing and comparing reinforcement learning algorithms". It provides some great abstractions that help us define games in terms that a computer can understand. For instance, they have a game (or environment) called "Copy-v0" with the following setup and rules: There is an input tape with some characters. You can move the cursor one step left or right along this tape. You can read symbols under the cursor and output characters one at a time to the output tape. You need to copy the input tape characters to the output tape to win. Now let's talk a bit about the hammer itself. The hottest thing on the Reinforcement Learning market right now is Policy Gradients, and specifically this flavor: Trust Region Policy Optimization. There is an amazing article from Andrej Karpathy on Policy Gradients so I will not give an introduction here. If you are new to Reinforcement Learning, just stop reading this post and go read that one. Seriously, it's so much better! Still here? Ok, I will tell you about TRPO then. TRPO is a technique for Policy Gradients optimization that produces much better results than vanilla gradient descent and even guarantees (theoretically, of course) that you get an improved policy network on every iteration. With vanilla PG you start by defining a policy network that produces scores for the actions given the current state. You then simulate hundreds and thousands of games taking actions suggested by the network and note which actions produced better results. Having this data available you can then use backpropagation to update your policy network and start all over again. The only thing that TRPO adds to this is that you solve a constrained optimization problem instead of an unconstrained one: $$ \textrm{maximize } L(\theta) \textrm{ subject to } \bar{D}_{KL}(\theta_{\textrm{old}},\theta)<\delta$$ Here \(L(\theta)\) is the loss that we are trying to optimize. It is defined as $$E_{a \sim q}[\frac{\pi_\theta(a|s_n)}{q(a|s_n)} A_{\theta_{\textrm{old}}}(s_n,a)],$$ where \(\theta\) is our weights vector, \(\pi_\theta(a|s_n)\) is the probability (score) of the selected action \(a\) in state \(s_n\) according to the policy network, \(q(a|s_n)\) is the corresponding score using the policy network from the iteration before, and \(A_{\theta_{\textrm{old}}}(s_n,a)\) is the advantage (more on it later). Running simple gradient descent on this is the vanilla Policy Gradients approach. The TRPO approach doesn't blindly descend along the gradient but takes into account the \(\bar{D}_{KL}(\theta_{\textrm{old}},\theta)<\delta\) constraint.
To make sure the constraint is satisfied we do the following. First, we approximately solve the following equation to find a search direction: $$Ax = g,$$ where A is the Fisher information matrix, \(A_{\textrm{ij}} = \frac{\partial}{\partial \theta_i}\frac{\partial}{\partial \theta_j}\bar{D}_{KL}(\theta_{\textrm{old}},\theta)\), and \(g\) is the gradient that you can get from the loss using backpropagation. This is done using the conjugate gradient algorithm. Once we have a search direction we can easily find the maximum step along this direction that still satisfies the constraint. One thing that I promised to get back to is the advantage. It is defined as $$A_\pi(s,a)= Q_\pi(s,a)−V_\pi(s),$$ where \(Q_\pi(s,a)\) is the state-action value function (the actual reward of taking an action in this state; it usually includes discounted rewards for all upcoming states) and \(V_\pi(s)\) is the value function (in our case it's just a separate network that we train to predict the value of the state). Bored enough already? I promise, it's not that scary in code. You can find the full implementation here: tilarids/reinforcement_learning_playground. Specifically, look at trpo_agent.py. You can reproduce the Caesar cipher breaking by running trpo_caesar.py. If you think it resembles wojzaremba's implementation a lot - you are right. I was copying some TRPO code from there and then rewriting it to make it more readable and also to make sure it follows the paper closely.
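To make the search-direction step concrete, here is a minimal numpy sketch of the conjugate gradient solve for Ax = g; as is typical in TRPO implementations, A is accessed only through matrix-vector products (the toy 2x2 matrix at the end stands in for the Fisher matrix and is my own example):

import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    # Approximately solve A x = g, where fvp(v) returns A v (a Fisher-vector
    # product); only matrix-vector products are needed, never A itself.
    x = np.zeros_like(g)
    r = g.copy()              # residual g - A x, with x = 0 initially
    p = g.copy()              # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = fvp(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy check with an explicit SPD matrix standing in for the Fisher matrix
A = np.array([[3.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, 1.0])
x = conjugate_gradient(lambda v: A @ v, g)
print(x, A @ x)               # A x should be close to g

Given the direction \(x\), the largest step \(\beta\) allowed by the quadratic approximation of the KL constraint is \(\beta = \sqrt{2\delta/(x^TAx)}\), which is the "maximum step along this direction" mentioned above.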
Let $\operatorname{Klein}$ denote the category of principal homogeneous bundles. An object in this category is a tuple $\mathbf Q = (Q, P; G, H; q, a, \tilde a)$, where: $G$ is a Lie group, and $H$ is a closed subgroup; $P$ is a $G$-torsor and the manifold $Q$ is diffeomorphic to the quotient $G/H$; The (left) actions $\tilde a : G \to \operatorname{Aut}(P)$ and $a : G \to \operatorname{Aut}(Q)$ are morphisms of topological groups; and $q :P \to Q$ is a principal $H$-bundle. A morphism $\Phi : \mathbf Q \to \mathbf Q'$ is described by a tuple $\Phi = (\varphi, \tilde \varphi, f)$, where $\varphi : Q \to Q'$ and $\tilde \varphi : P \to P'$ are diffeomorphisms which commute with the bundle maps $q : P \to Q$ and $q' : P' \to Q'$, and $f : G \to G'$ is a morphism of Lie groups mapping $H$ to $H'$, and which satisfies the identity $$\tilde a'_{f(g)} \circ \tilde \varphi = \tilde \varphi \circ \tilde a_g$$ for all $g \in G$. I think that this is the correct categorical description of principal homogeneous bundles; please correct me if I'm wrong. I selected the name $\operatorname{Klein}$ in homage to Felix Klein and his Erlangen Program. It seems that such a bundle $\mathbf Q$ contains all the data on its symmetries. Namely, I think that its automorphism group $\operatorname{Aut}(\mathbf Q)$ is isomorphic to its Lie group $G = G(\mathbf Q)$? It is easy to see that there is a natural map $K : G \hookrightarrow \operatorname{Aut}(\mathbf Q)$, in that each $u \in G$ corresponds to a unique automorphism $K_u \in \operatorname{Aut}(\mathbf Q)$. The morphism $K_u = (k_u, \tilde k_u, c_u)$ is defined by $$k_u(q) = a_u(q), \quad \tilde k_u(p) = \tilde a_u(p), \quad \mathrm{and} \quad c_u(g) = ugu^{-1}.$$ That is, the morphism $K_u$ acts by left-multiplication on both $Q$ and $P$, but by left-conjugation on $G$. Is this map $K$ surjective? i.e., is $\operatorname{Aut}(\mathbf Q)$ isomorphic to $G = G(\mathbf Q)$? If the answer is yes, then I think that this captures the notion of "internal symmetries" of the bundle, since these are the transformations which preserve the bundle structure. However, I know that groupoids also show up to describe symmetries in a categorical setting, and I would be interested to hear more on that point of view. How can groupoids be used to describe symmetries in this category?
Looking for some help, or at least to know if I'm going in the right direction... Are there functions $f$ and $g$ such that $f$ is $O(g)$ and $g$ is $O(f)$ and NO constants $c_1$ and $c_2$ exist for which $f(x) = c_1 \cdot g(x) + c_2$? I want to say that there are not, because the definition of big O states that if $f(x)$ is $O(g(x))$ then $|f(x)| \leq c_1 \cdot |g(x)|$ for all values $x > k$, and if $f(x) = c_1 \cdot g(x) + c_2$ then $g(x) = \frac{f(x) - c_2}{c_1}$ and thus $0 \leq f(x) \leq c_1 \cdot g(x)$ $0 \leq c_1 \cdot g(x) + c_2 \leq c_1 \frac{f(x) - c_2}{c_1}$ $0 \leq c_1 \cdot g(x) + c_2 \leq f(x) - c_2$ $0 \leq c_1 \cdot g(x) \leq f(x)$ which contradicts the initial definition of $f(x)$ being $O(g(x))$. Am I on the right path here?
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative - why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is deriving the Poisson bracket of two one-forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$; why? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$ aka functions on $M$ with values in $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$ aka bundle-homs $TM \to E$) Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate. Let $X_{i},....,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}([\frac{(\frac{X_{1}+...+X_{n}}{n})^2}{2}] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually was too bad. 
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
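For what it's worth, the $d=6$ case can be sanity-checked by brute force - a quick Python sketch that closes a set of permutations under composition (0-indexed tuples are my own encoding):

# Brute-force the subgroup generated by given permutations of {0,1,2,3}
# (a permutation is a tuple p where p[i] is the image of i). Illustrative.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens):
    group = set(gens)
    while True:
        new = {compose(a, b) for a in group for b in group} - group
        if not new:
            return group        # closed under composition, hence a subgroup
        group |= new

t = (1, 0, 2, 3)                # the transposition (1 2)
c = (1, 2, 0, 3)                # the 3-cycle (1 2 3)
print(len(generated({t, c})))   # -> 6, a copy of S_3 inside S_4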
I have an asymmetrical rotating part. It vibrates its housing and emits audible noise. I need to add weights to ensure smooth rotation. However, I am constrained in which regions I can add material. I can't exploit rotational symmetry to balance this rotor. I define a rotor as a rigid system of particles, each with mass $m_n$ and position $r_n \in \mathbb{R}^3$, rotated about $k$ (the z-axis). What formula describes a rotor that is balanced when spun at constant velocity? According to Update International, a vendor of rotor balancing systems, the problem is broken into static and couple unbalance. Here are my interpretations: Static balance When the angular velocity $\omega\neq0$, a net force acts orthogonal to $k$, through the rotor's center of mass. Static unbalance is resolved by ensuring that the center of mass $C = \dfrac{\sum m_n r_n}{\sum m_n}$ lies along $k$: $C \times \hat{k} = 0$ With the fraction cancelled: $\sum m_n r_n \times \hat{k} = 0$ Rewritten as a scalar system: $\begin{cases} \sum m_n r_{n,x} = 0 \\ \sum m_n r_{n,y} = 0 \end{cases}$ Couple unbalance When the requirements above are met, a pair of equal and opposite net forces can still act at different points along the axis. The forces are perpendicular to the axis. I'm stuck. How do point-masses give rise to couple unbalance?
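To make the static conditions concrete, here is a small Python sketch that tests a candidate mass layout against the two scalar equations above; the particle values are made up for illustration, and the product-of-inertia check at the end is the standard rigid-body quantity that detects a couple (background knowledge, not from the quoted vendor page):

# Check static balance (sum m*x = sum m*y = 0) for point masses (m, (x, y, z));
# the values below are made up for illustration.
particles = [
    (2.0, ( 1.0,  0.0, 0.0)),
    (2.0, (-1.0,  0.0, 0.0)),
    (1.0, ( 0.0,  2.0, 1.0)),
    (1.0, ( 0.0, -2.0, 3.0)),
]
sx = sum(m * r[0] for m, r in particles)
sy = sum(m * r[1] for m, r in particles)
print(sx, sy)       # both 0.0: statically balanced

# Products of inertia: nonzero values mean the off-axis masses sit at
# different heights, producing a couple when the rotor spins about z.
sxz = sum(m * r[0] * r[2] for m, r in particles)
syz = sum(m * r[1] * r[2] for m, r in particles)
print(sxz, syz)     # syz = -4.0 here: statically balanced yet couple-unbalanced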
Theorem 7.1 says that each harmonic function satisfies \begin{equation} \label{green1} u(x)=\int_{\partial\Omega}\left(\gamma (y,x)\frac{\partial u(y)}{\partial n_y}-u(y)\frac{\partial \gamma(y,x)}{\partial n_y}\right)\ dS_y, \end{equation} where \(\gamma(y,x)\) is a fundamental solution. In general, \(u\) does not satisfy the boundary condition in the above boundary value problems. Since \(\gamma=s+\phi\), see Section 7.2, where \(\phi\) is an arbitrary harmonic function for each fixed \(x\), we try to find a \(\phi\) such that \(u\) satisfies also the boundary condition. Consider the Dirichlet problem; then we look for a \(\phi\) such that \begin{equation} \label{green2} \gamma(y,x)=0,\ \ y\in\partial\Omega,\ x\in\Omega. \end{equation} Then $$ u(x)=-\int_{\partial\Omega}\ \frac{\partial \gamma(y,x)}{\partial n_y}u(y)\ dS_y,\ \ x\in\Omega. $$ Suppose that \(u\) achieves its boundary values \(\Phi\) of the Dirichlet problem, then \begin{equation} \label{green3} u(x)=-\int_{\partial\Omega}\ \frac{\partial \gamma(y,x)}{\partial n_y}\Phi(y)\ dS_y. \end{equation} We claim that this function solves the Dirichlet problem (7.3.1.1), (7.3.1.2). A function \(\gamma(y,x)\) which satisfies (\ref{green2}), and some additional assumptions, is called a Green's function. More precisely, we define a Green function as follows. Definition. A function \(G(y,x)\), \(y,\ x\in\overline{\Omega}\), \(x\not= y\), is called a Green function associated to \(\Omega\) and to the Dirichlet problem (7.3.1.1), (7.3.1.2) if for fixed \(x\in\Omega\), that is, we consider \(G(y,x)\) as a function of \(y\), the following properties hold: (i) \(G(y,x)\in C^2(\Omega\setminus\{x\})\cap C(\overline{\Omega}\setminus\{x\})\), \(\triangle_yG(y,x)=0,\ \ x\not=y\). (ii) \(G(y,x)-s(|x-y|)\in C^2(\Omega)\cap C(\overline{\Omega})\). (iii) \(G(y,x)=0\) if \(y\in\partial\Omega\), \(x\not=y\). Remark. We will see in the next section that a Green function exists at least for some domains of simple geometry. Concerning the existence of a Green function for more general domains see [13]. It is an interesting fact that we get from (i)-(iii) of the above definition two further important properties, provided \(\Omega\) is bounded, sufficiently regular and connected. Proposition 7.7. A Green function has the following properties. In the case \(n=2\) we assume diam \(\Omega<1\). (A) \(G(x,y)=G(y,x)\)\ \ (symmetry). (B) \(0<G(x,y)<s(|x-y|), \ \ x,\ y\in\Omega,\ x\not=y\). Proof. (A) Let \(x^{(1)},\ x^{(2)}\in\Omega\). Set \(B_i=B_\rho(x^{(i)})\), \(i=1,\ 2\). We assume \(\overline{B_i}\subset\Omega\) and \(B_1\cap B_2=\emptyset\).
Since \(G(y,x^{(1)})\) and \(G(y,x^{(2)})\) are harmonic in \(\Omega\setminus\left(\overline{B_1}\cup\overline{B_2}\right)\), we obtain from Green's identity, see Figure 7.4.1 for notations, \begin{eqnarray*} 0&=&\int_{\partial\left(\Omega\setminus(\overline{B_1}\cup\overline{B_2})\right)} \bigg(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})\\ && \qquad \qquad \qquad \qquad\qquad \qquad -G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\bigg) dS_y\\ &=&\int_{\partial\Omega} \left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right) dS_y\\ &+&\int_{\partial B_1} \left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right) dS_y\\ &+&\int_{\partial B_2} \left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right) dS_y. \end{eqnarray*}

The integral over \(\partial\Omega\) is zero because of property (iii) of a Green function, and \begin{eqnarray*} \int_{\partial B_1}\ \bigg(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})&-&G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\bigg) dS_y\\ &\to& G(x^{(1)},x^{(2)}),\\ \int_{\partial B_2}\ \bigg(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})&-&G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\bigg)\ dS_y\\ &\to& -G(x^{(2)},x^{(1)}) \end{eqnarray*} as \(\rho\to 0\). This follows by considerations as in the proof of Theorem 7.1.

(B) Since $$ G(y,x)=s(|x-y|)+\phi(y,x) $$ and \(G(y,x)=0\) if \(y\in\partial\Omega\) and \(x\in\Omega\), we have for \(y\in\partial\Omega\) $$ \phi(y,x)=-s(|x-y|). $$ From the definition of \(s(|x-y|)\) it follows that \(\phi(y,x)< 0\) if \(y\in\partial\Omega\). Thus, since \(\triangle_y\phi=0\) in \(\Omega\), the maximum-minimum principle implies that \(\phi(y,x)<0\) for all \(y,~x\in\Omega\). Consequently $$ G(y,x)<s(|x-y|),\ \ x,\ y\in\Omega,\ x\not=y. $$

It remains to show that $$ G(y,x)>0,\ \ x,\ y\in\Omega,\ x\not=y. $$ Fix \(x\in\Omega\) and let \(B_\rho(x)\) be a ball such that \(B_\rho(x)\subset\Omega\) for all \(0<\rho<\rho_0\). Since \(\phi(y,x)\) is bounded near \(x\) and \(s(|x-y|)\to\infty\) as \(y\to x\), there is a sufficiently small \(\rho_0>0\) such that for each \(\rho\), \(0<\rho<\rho_0\), $$ G(y,x)>0\ \ \mbox{for all}\ y\in\overline{B_\rho(x)},\ x\not=y, $$ see property (ii) of a Green function. Since \begin{eqnarray*} \triangle_y G(y,x)&=&0\ \ \mbox{in}\ \Omega\setminus\overline{B_\rho(x)}\\ G(y,x)&>&0\ \ \mbox{if}\ y\in\partial B_\rho(x)\\ G(y,x)&=&0\ \ \mbox{if}\ y\in\partial\Omega \end{eqnarray*} it follows from the maximum-minimum principle that $$ G(y,x)>0\ \ \mbox{on}\ \Omega\setminus\overline{B_\rho(x)}. $$ \(\Box\)
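As a concrete illustration of the definition (a standard example, anticipating the "domains of simple geometry" of the next section; it is not part of the proof above): for the half-space \(\Omega=\{x\in\mathbb{R}^3:\ x_3>0\}\) a Green function is obtained by reflecting the pole across the boundary plane, $$ G(y,x)=s(|x-y|)-s(|x^\ast-y|),\ \ x^\ast=(x_1,x_2,-x_3). $$ For \(y\in\partial\Omega\) we have \(|x-y|=|x^\ast-y|\), so property (iii) holds, and \(G(y,x)-s(|x-y|)\) is harmonic in \(y\in\Omega\) since \(x^\ast\notin\overline{\Omega}\), so property (ii) holds as well.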
In metric spaces, every open set is a countable union of closed sets (indeed, an open set \(U\) can be written as \(U=\bigcup_{n=1}^\infty\{x : d(x, X\setminus U)\ge 1/n\}\)). Is the converse true? Does a topological space with the property "every open set is a countable union of closed sets" have to be metrizable?

No. Take the indiscrete topology on a set with at least two points as a counterexample: its only open sets are \(\emptyset\) and the whole space, both of which are closed, but the space is not even Hausdorff.

If that were true, then any countable T$_1$-space would be metrizable, since in a T$_1$-space points are closed, so every subset of a countable space is a countable union of singletons. In fact there are lots of countable Hausdorff spaces that aren't even first countable (while every metrizable space is first countable). For example, topologize $\mathbb N$ so that $$S\text{ is closed }\iff1\in S\text{ or }\sum_{n\in S}\frac1n\lt\infty.$$
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which my multimeter measures as 4.5V), but the voltage multiplier doesn't seem to work. I first tried making a voltage doubler and it showed 9V (which is correct, I suppose), but when I try a quadrupler, for example, the voltage starts at around 6V and drops by about 0.1V per second.

Oh! I found a mistake in my wiring and fixed it. Now it shows 12V and instantly starts to go down by 0.1V per second.

But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them.

So what did the guys in the EE chat say...

The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you...

A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it.

Hi all! There is a theorem that links the imaginary and the real part of a time dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics. Who can help?

The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names...

I have a weird question: the output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I put that output into a voltage doubler, the output should be 18V, not 9V, right? Since the voltage doubler outputs DC.

I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked afterwards, and the astable multivibrator still works.

I searched the whole god damn internet, asked every god damn forum, and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh 1 billion tons... something so "simple" turns out to be hard as duck.

In Peskin's book of QFT the sum over zero point energy modes is an infinite c-number; fortunately, its experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that, experimentally, we always obtain an infinite spectrum.
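A rough way to see why the quadrupler sags so much more than the doubler under the meter's load is the standard Cockcroft–Walton regulation estimate, in which the droop grows roughly like the cube of the stage count. A minimal sketch (every numeric value below is an assumption for illustration, not a measurement of this circuit):

# Rough Cockcroft-Walton output estimate under load.
f = 671.0       # drive frequency in Hz (the astable frequency mentioned above)
C = 10e-6       # per-stage capacitance in farads (assumed)
n = 2           # stages: a quadrupler is roughly two doubler stages (assumed)
V_pk = 9.0      # peak drive voltage in volts (assumed)
I_load = 1e-3   # current drawn by the load/meter in amperes (assumed)

V_ideal = 2 * n * V_pk
# Classic regulation formula for the voltage lost from the stage capacitors.
V_droop = I_load / (f * C) * (2 * n**3 / 3 + n**2 / 2 - n / 6)
print(V_ideal - V_droop)   # about 35 V for these assumed numbers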
@AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that. They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics.

I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ...

I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array".

@ACuriousMind What confuses me is Peskin's interpretation of this infinite c-number and the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite, as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference in energy from the ground state of H".

@ACuriousMind Thank you, I understood your explanations clearly. However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level.

It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale.

According to the author, the energy difference is always infinite, given two facts: first, the ground state energy is infinite; second, the energy difference is defined by subtracting the ground state energy from a higher level energy.

@enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization.

The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e.
dividing by zero to get infinities. Also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles; overall it's an infinite constant added to the particle's energy that we throw away, just as in QFT.

@bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why experiments don't exhibit the infinities that arose in the theory.

These $e_a^2/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously; and since relativity forbids the notion of a rigid body, we have to model electrons as point particles and can't avoid these $R_{aa} = 0$ values.
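For concreteness, here is the standard one-line version of that divergence (a textbook computation, not from the chat itself): the field energy of a single point charge $e$ is $$ \int \frac{\mathbf{E}^2}{8\pi}\, dV = \int_0^\infty \frac{1}{8\pi}\left(\frac{e}{r^2}\right)^2 4\pi r^2\, dr = \int_0^\infty \frac{e^2}{2 r^2}\, dr = \infty, $$ which is the continuum counterpart of the divergent $e_a^2/R_{aa}$ terms above.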
See section 4.4 to review some basic terminology about graphs.

A graph \(G\) consists of a pair \((V,E)\), where \(V\) is the set of vertices and \(E\) the set of edges. We write \(V(G)\) for the vertices of \(G\) and \(E(G)\) for the edges of \(G\) when necessary to avoid ambiguity, as when more than one graph is under discussion. If no two edges have the same endpoints we say there are no multiple edges, and if no edge has a single vertex as both endpoints we say there are no loops. A graph with no loops and no multiple edges is a simple graph. A graph with no loops, but possibly with multiple edges, is a multigraph. The condensation of a multigraph is the simple graph formed by eliminating multiple edges, that is, removing all but one of the edges with the same endpoints. To form the condensation of a graph, all loops are also removed. We sometimes refer to a graph as a general graph to emphasize that the graph may have loops or multiple edges.

The edges of a simple graph can be represented as a set of two element sets; for example, \[(\{v_1,\ldots,v_7\},\{\{v_1,v_2\},\{v_2,v_3\},\{v_3,v_4\},\{v_3,v_5\}, \{v_4,v_5\},\{v_5,v_6\},\{v_6,v_7\}\})\] is a graph that can be pictured as in figure 5.1.1. This graph is also a connected graph: each pair of vertices \(v\), \(w\) is connected by a sequence of vertices and edges, \(v=v_1,e_1,v_2,e_2,\ldots,v_k=w\), where \(v_i\) and \(v_{i+1}\) are the endpoints of edge \(e_{i}\). The graphs shown in figure 4.4.2 are connected, but the figure could be interpreted as a single graph that is not connected.

A graph \(G=(V,E)\) that is not simple can be represented by using multisets: a loop is a multiset \(\{v,v\}=\{2\cdot v\}\) and multiple edges are represented by making \(E\) a multiset. The condensation of a multigraph may be formed by interpreting the multiset \(E\) as a set.

The degree of a vertex \(v\), \(d(v)\), is the number of times it appears as an endpoint of an edge. If there are no loops, this is the same as the number of edges incident with \(v\), but if \(v\) is both endpoints of an edge, namely, of a loop, then this contributes 2 to the degree of \(v\). The degree sequence of a graph is a list of its degrees; the order does not matter, but usually we list the degrees in increasing or decreasing order. The degree sequence of the graph in figure 5.1.2, listed clockwise starting at the upper left, is \(0,4,2,3,2,8,2,4,3,2,2\). We typically denote the degrees of the vertices of a graph by \(d_i\), \(i=1,2,\ldots,n\), where \(n\) is the number of vertices. Depending on context, the subscript \(i\) may match the subscript on a vertex, so that \(d_i\) is the degree of \(v_i\), or the subscript may indicate the position of \(d_i\) in an increasing or decreasing list of the degrees; for example, we may state that the degree sequence is \(d_1\le d_2\le \cdots\le d_n\).

Our first result, simple but useful, concerns the degree sequence.

Theorem 5.1.1 In any graph, the sum of the degree sequence is equal to twice the number of edges, that is, \[\sum_{i=1}^n d_i = 2|E|.\]

Proof. Let \(d_i\) be the degree of \(v_i\). The degree \(d_i\) counts the number of times \(v_i\) appears as an endpoint of an edge. Since each edge has two endpoints, the sum \(\sum_{i=1}^n d_i\) counts each edge twice.

An easy consequence of this theorem:

Corollary 5.1.2 The number of odd numbers in a degree sequence is even.

An interesting question immediately arises: given a finite sequence of integers, is it the degree sequence of a graph?
Clearly, if the sum of the sequence is odd, the answer is no. If the sum is even, it is not too hard to see that the answer is yes, provided we allow loops and multiple edges. The sequence need not be the degree sequence of a simple graph; for example, it is not hard to see that no simple graph has degree sequence \(0,1,2,3,4\). A sequence that is the degree sequence of a simple graph is said to be graphical. Graphical sequences have been characterized; the best-known characterization is given by this result, the Erdős–Gallai theorem:

Theorem 5.1.3 A sequence \(d_1\ge d_2\ge \ldots\ge d_n\) is graphical if and only if \(\sum_{i=1}^n d_i\) is even and for all \(k\in \{1,2,\ldots,n\}\), \[\sum_{i=1}^k d_i\le k(k-1)+\sum_{i=k+1}^n \min(d_i,k).\]

It is not hard to see that if a sequence is graphical it has the property in the theorem; it is rather more difficult to see that any sequence with the property is graphical. The condition itself is straightforward to check mechanically; see the sketch at the end of this section.

What does it mean for two graphs to be the same? Consider these three graphs: \[\eqalign{ G_1&=(\{v_1,v_2,v_3,v_4\},\{\{v_1,v_2\},\{v_2,v_3\},\{v_3,v_4\},\{v_2,v_4\}\})\cr G_2&=(\{v_1,v_2,v_3,v_4\},\{\{v_1,v_2\},\{v_1,v_4\},\{v_3,v_4\},\{v_2,v_4\}\})\cr G_3&=(\{w_1,w_2,w_3,w_4\},\{\{w_1,w_2\},\{w_1,w_4\},\{w_3,w_4\},\{w_2,w_4\}\})\cr }\] These are pictured in figure 5.1.4. Simply looking at the lists of vertices and edges, they don't appear to be the same. Looking more closely, \(G_2\) and \(G_3\) are the same except for the names used for the vertices: \(v_i\) in one case, \(w_i\) in the other. Looking at the pictures, there is an obvious sense in which all three are the same: each is a triangle with an edge (and vertex) dangling from one of the three vertices. Although \(G_1\) and \(G_2\) use the same names for the vertices, they apply to different vertices in the graph: in \(G_1\) the "dangling'' vertex (officially called a pendant vertex) is called \(v_1\), while in \(G_2\) it is called \(v_3\). Finally, note that in the figure, \(G_2\) and \(G_3\) look different, even though they are clearly the same based on the vertex and edge lists.

So how should we define "sameness'' for graphs? We use a familiar term and definition: isomorphism.

Definition 5.1.4 Suppose \(G_1=(V,E)\) and \(G_2=(W,F)\). \(G_1\) and \(G_2\) are isomorphic if there is a bijection \(f\colon V\to W\) such that \(\{v_1,v_2\}\in E\) if and only if \(\{f(v_1),f(v_2)\}\in F\). In addition, the repetition numbers of \(\{v_1,v_2\}\) and \(\{f(v_1),f(v_2)\}\) are the same if multiple edges or loops are allowed. This bijection \(f\) is called an isomorphism. When \(G_1\) and \(G_2\) are isomorphic, we write \(G_1\cong G_2\).

Each pair of graphs in figure 5.1.4 is isomorphic. For example, to show explicitly that \(G_1\cong G_3\), an isomorphism is \[\eqalign{ f(v_1)&=w_3\cr f(v_2)&=w_4\cr f(v_3)&=w_2\cr f(v_4)&=w_1.\cr }\]

Clearly, if two graphs are isomorphic, their degree sequences are the same. The converse is not true; the graphs in figure 5.1.5 both have degree sequence \(1,1,1,2,2,3\), but in one the degree-2 vertices are adjacent to each other, while in the other they are not. In general, if two graphs are isomorphic, they share all "graph theoretic'' properties, that is, properties that depend only on the graph. As an example of a non-graph theoretic property, consider "the number of times edges cross when the graph is drawn in the plane.''

In a more or less obvious way, some graphs are contained in others.

Definition 5.1.5 Graph \(H=(W,F)\) is a subgraph of graph \(G=(V,E)\) if \(W\subseteq V\) and \(F\subseteq E\).
(Since \(H\) is a graph, the edges in \(F\) have their endpoints in \(W\).) \(H\) is an induced subgraph if \(F\) consists of all edges in \(E\) with endpoints in \(W\). See figure 5.1.6. Whenever \(U\subseteq V\) we denote the induced subgraph of \(G\) on vertices \(U\) as \(G[U]\).

A path in a graph is a subgraph that is a path; if the endpoints of the path are \(v\) and \(w\) we say it is a path from \(v\) to \(w\). A cycle in a graph is a subgraph that is a cycle. A clique in a graph is a subgraph that is a complete graph.

If a graph \(G\) is not connected, define \(v\sim w\) if and only if there is a path connecting \(v\) and \(w\). It is not hard to see that this is an equivalence relation. Each equivalence class corresponds to an induced subgraph of \(G\); these subgraphs are called the connected components of the graph.
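As promised after Theorem 5.1.3, the Erdős–Gallai condition is easy to test mechanically. Here is a minimal Python sketch (the function name is ours; it is a direct transcription of the inequality, not code from this text):

# Test whether a sequence of non-negative integers is graphical,
# using the characterization of Theorem 5.1.3.
def is_graphical(seq):
    d = sorted(seq, reverse=True)        # arrange as d_1 >= d_2 >= ... >= d_n
    n = len(d)
    if sum(d) % 2 != 0:                  # odd degree sum: never graphical
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(di, k) for di in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphical([0, 1, 2, 3, 4]))     # False, as claimed in the text
print(is_graphical([1, 1, 1, 2, 2, 3]))  # True: the sequence of figure 5.1.5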
Definition:Integral Domain Axioms

\((A0)\) Closure under addition: \(\forall a, b \in D: a * b \in D\)

\((A1)\) Associativity of addition: \(\forall a, b, c \in D: \paren {a * b} * c = a * \paren {b * c}\)

\((A2)\) Commutativity of addition: \(\forall a, b \in D: a * b = b * a\)

\((A3)\) Identity element for addition: the zero: \(\exists 0_D \in D: \forall a \in D: a * 0_D = a = 0_D * a\)

\((A4)\) Inverse elements for addition: negative elements: \(\forall a \in D: \exists a' \in D: a * a' = 0_D = a' * a\)

\((M0)\) Closure under product: \(\forall a, b \in D: a \circ b \in D\)

\((M1)\) Associativity of product: \(\forall a, b, c \in D: \paren {a \circ b} \circ c = a \circ \paren {b \circ c}\)

\((D)\) Product is distributive over addition: \(\forall a, b, c \in D: a \circ \paren {b * c} = \paren {a \circ b} * \paren {a \circ c}\) and \(\paren {a * b} \circ c = \paren {a \circ c} * \paren {b \circ c}\)

\((C)\) Product is commutative: \(\forall a, b \in D: a \circ b = b \circ a\)

\((U)\) Identity element for product: the unity: \(\exists 1_D \in D: \forall a \in D: a \circ 1_D = a = 1_D \circ a\)

\((ZD)\) No proper zero divisors: \(\forall a, b \in D: a \circ b = 0_D \iff a = 0_D \lor b = 0_D\)

These criteria are called the integral domain axioms.

These can be otherwise presented as:

\((A)\) $\struct {D, *}$ is an abelian group

\((M)\) $\struct {D \setminus \set 0, \circ}$ is a monoid

\((D)\) $\circ$ distributes over $*$

\((C)\) $\circ$ is commutative
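A quick illustration of axiom \((ZD)\) (a Python sketch; the helper name is ours, not from this page): in the ring of integers modulo $n$, proper zero divisors exist exactly when $n$ is composite, so $\Z_6$ fails \((ZD)\) while $\Z_7$ satisfies it.

# Proper zero divisors of the ring of integers modulo n,
# i.e. nonzero a, b with a * b = 0 in Z_n.
def zero_divisors(n):
    return [(a, b) for a in range(1, n) for b in range(1, n) if (a * b) % n == 0]

print(zero_divisors(6))   # [(2, 3), (3, 2), (3, 4), (4, 3)] -> (ZD) fails
print(zero_divisors(7))   # [] -> no proper zero divisors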