Richard Bamler (Berkeley) [23]
Tristan Collins (Harvard) [24]
Ronan Conlon (L'Univ du Québec à Montréal) [24]
Georges Dloussky (Aix-Marseille Univ) [21]
Kento Fujita (Kyoto, RIMS) [21, 23]
Ursula Hamenstaedt (Bonn) [23]
Yoshinori Hashimoto (Univ College London) [22]
Tomoyuki Hisamoto (Nagoya) [23]
Claude LeBrun (Stony Brook) [24]
John Lott (Berkeley) [23]
Ayato Mitsuishi (Gakushuin) [21]
Kaoru Ono (Kyoto, RIMS) [22]
Egor Shelukhin (Princeton, IAS) [22]
Jake Solomon (Hebrew Univ) [22]
Cristiano Spotti (Cambridge) [21]
Jeff Viaclovsky (Wisconsin, Madison) [21]
Christopher Woodward (Rutgers) [22]

Title: "Convergence of Ricci flows with bounded scalar curvature"
It is a basic fact that the Riemannian curvature becomes unbounded at every finite-time singularity of the Ricci flow. Sesum showed that the same is true for the Ricci curvature. It has since remained a conjecture whether the scalar curvature also becomes unbounded at any singular time. In this talk I will show that, given a uniform scalar curvature bound, the Ricci flow can only degenerate on a set of codimension greater than or equal to \(4\), if at all. This result is a consequence of a structure theory for such Ricci flows, which relies on and generalizes recent work of Cheeger and Naber.

Title: "Sasaki-Einstein metrics and K-Stability"
I will discuss the connection between Sasaki-Einstein metrics, or conical Ricci-flat Kähler metrics, and the algebro-geometric notion of K-stability, and some applications to finding new Einstein metrics on the \(5\)-sphere.

Title: "New examples of gradient expanding Kähler-Ricci solitons"
A complete Kähler metric \(g\) on a Kähler manifold \(M\) is a gradient expanding Kähler-Ricci soliton if there exists a smooth real-valued function \(f:M\to\mathbb{R}\) with \(\nabla^{g}f\) holomorphic such that \(\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0.\) I will present new examples of such metrics on the total space of certain holomorphic line bundles. This is joint work with Alix Deruelle (Université Paris-Sud).

Title: "Locally conformally symplectic structures on compact non-Kähler complex surfaces"
Joint work with Vestislav Apostolov. We show that all compact surfaces with odd first Betti number admit locally conformally symplectic structures taming the complex structure (even for possible unknown surfaces in class \(VII_0\)). We introduce and study the subset \(C(S)\) (respectively, \(T(S)\)) in \(H^{1}_{dR}(S)\) of classes \(a\) for which there exists an lcK metric on \(S\) with Lee form \(\theta \in a\) (respectively, for which there exists a locally conformally symplectic form which tames \(J\), with Lee form in \(a\)). arXiv:1501.02687.

Title: "A valuative criterion for uniform K-stability of Fano manifolds" (21st, Thursday)
It is an interesting problem whether a given Fano manifold admits Kähler-Einstein metrics or not. It has recently been shown that this condition is equivalent to the condition "K-polystability", which is purely algebraic. In this talk, we mainly treat uniform K-stability of Fano manifolds, which is stronger than K-polystability. More precisely, we will give a simple necessary and sufficient condition for uniform K-stability of Fano manifolds.
Title: "K-stability of Fano manifolds with not small alpha invariants" (23rd, Saturday) I want to show in this talk that any \(n\)-dimensional Fano manifold \(X\) with \(\alpha(X)=n/(n+1)\) and \(n \geq 2\) is K-stable, where \(\alpha(X)\) is the alpha invariant of \(X\) introduced by Tian. In particular, any such \(X\) admits Kähler-Einstein metrics and the holomorphic automorphism group of \(X\) is finite. Title: "Geometric invariant of closed hyperbolic \(3\)-manifolds" Perelman's solution of the geometrization conjecture implies that "most" closed \(3\)-manifold admit a hyperbolic metric. This notion of "most" can be made precise using a notion of randomness for \(3\)-manifolds. I will describe recent results on the spectrum of the Laplace operator for such manifolds which are roughly sharp and distinguish random manifolds. I will use this to provided evidence for the validity of recent conjectures on the structure of such manifolds in the random case. Title: "Quantisation of extremal Kähler metrics" Kähler metrics with “optimal” curvature properties, such as constant scalar curvature Kähler (cscK) or Kähler-Einstein metrics, have been studied intensively in recent years. Extremal Kähler metrics, as proposed by Calabi, generalise these classes of metrics and give a unified treatment of them. It is well-known that a foundational theorem called Donaldson’s quantisation provides "finite dimensional" approximation of cscK metrics and insight into GIT (Geometric Invariant Theory) stability properties of the underlying manifold if the automorphism group is discrete, but this theorem does not naively extend to the case where the automorphism group is non-discrete. We propose in this talk a new “quantising” equation, which generalises various key results in Donaldson's quantisation when the automorphism group is no longer discrete, and can be applied more generally to extremal Kähler metrics. Title: "Stability and coercivity for toric polarized manifolds" We introduce "\(J\)-uniform K-polystability" for toric polarizations and show that it is equivalent to the natural growth condition of the K-energy. Title: "Mass in Kähler Geometry" Given a complete Riemannian manifold that looks enough like Euclidean space at infinity, physicists have defined a quantity called the “mass” which measures the asymptotic deviation of the geometry from the Euclidean model. In this lecture, I will explain a simple formula, discovered in joint work with Hajo Hein, for the mass of any asymptotically locally Euclidean (ALE) Kähler manifold. For ALE scalar-flat Kähler manifolds, the mass turns out to be a topological invariant, depending only on the underlying smooth manifold, the first Chern class of the complex structure, and the Kähler class of the metric. When the metric is actually AE (asymptotically Euclidean), our formula not only implies a positive mass theorem for Kähler metrics, but also yields a Penrose-type inequality for the mass. Title: "Ricci flow through singularities" Perelman’s Ricci flow-with-surgery involves a surgery parameter \(\delta\), which describes the scale at which surgery is performed. We show that there is a subsequential limit as \(\delta\) goes to zero, thereby partially answering a question of Perelman. The limiting object is called a singular Ricci flow. Such objects can be considered to be flows through singularities, and studied in their own right. We prove some geometric and analytical properties of such singular Ricci flows. This is joint work with Bruce Kleiner. 
Title: "Orientabilities and fundamental classes of Alexandrov spaces and its applications" Alexandrov spaces are complete metric spaces with a lower curvature bound in the sense that any geodesic triangle is not thinner than a model surface of constant curvature. Such a space naturally appears as the Gromov-Hausdorff limit of complete Riemannian manifolds whose sectional curvature are uniformly bounded from below. The orientability is very fundamental concept for manifolds. For Alexandrov spaces, the notions of orientability were considered in several ways. I will announce that such notions are equivalent. Further, I will give several applications to this result. Title: "Lagrangian Floer theory and Generation Criterion for Fukaya category" Based on joint works with K. Fukaya, Y.-G. Oh and H. Ohta and with M. Abouzaid, K. Fukaya, Y.-G. Oh, H. Ohta, I plan to explain several constructions in Lagrangian Floer theory related to generation criterion for Fukaya category. Title: "Measurements of transformations in symplectic topology" We discuss applications of quasi-morphisms and other novel tools to various ways of measuring distance between natural transformations in symplectic topology. Time permitting, we discuss recent applications of pesistent homology in this direction, and analogues of these investigations in contact topology. Title: "The space of positive Lagrangians" Specifically, a Hamiltonian isotopy class of positive Lagrangians admits a Riemannian metric of non-positive sectional curvature and a convex functional which has critical points at special Lagrangians. Geodesics are equivalent to solutions of the degenerate special Lagrangian equation. Existence of geodesics would imply uniqueness of special Lagrangians as well as a version of the strong Arnold conjecture. Weak geodesics are known to exist between positive graph Lagrangians in Euclidean space. Smooth geodesics can be constructed in Milnor fibers and toric Calabi-Yau manifolds using symmetry techniques. This talk is based partially on joint work with Y. Rubinstein and A. Yuval. Title: "Resolutions of conically singular cscK varieties" I will describe a construction of Kähler metrics with constant scalar curvature (cscK) on certain crepant resolutions of cscK varieties with isolated singularities modelled on Calabi-Yau cones. This is joint work with C. Arezzo. Title: "Deformation theory of scalar-flat Kähler ALE surfaces" I will discuss a Kuranishi-type theorem for deformations of complex structure on ALE Kähler surfaces, which will be used to prove that for any scalar-flat Kähler ALE surface, all small deformations of complex structure also admit scalar-flat Kähler ALE metrics. A local moduli space of scalar-flat Kähler ALE metrics can then be constructed which is universal up to small diffeomorphisms. I will also discuss a formula for the dimension of the local moduli space in the case of a scalar-flat Kähler ALE surface which deforms to a minimal resolution of an isolated quotient singularity. This is joint work with Jiyuan Han. Title: "Lagrangian submanifolds and minimal model transitions" The Fukaya category of Lagrangian submanifolds of a symplectic manifold is supposed to play a role in the homological mirror symmetry conjecture. But generators for Fukaya categories are known in only very few examples. 
I will describe a method for finding generators by associating Lagrangians to the minimal model program (conjecturally equivalent to Kähler-Ricci flow with surgery), which "organizes" some results of Fukaya-Oh-Ohta-Ono on toric varieties and works well for more general examples such as moduli spaces of genus zero parabolic bundles or stable marked curves. Partly joint with François Charest and Sushmita Venugopalan.
Fourier transformations: $$\phi(\vec{k}) = \left( \frac{1}{\sqrt{2 \pi}} \right)^3 \int_{r\text{ space}} \psi(\vec{r})\, e^{-i \vec{k} \cdot \vec{r}}\, d^3r$$ for momentum space and $$\psi(\vec{r}) = \left( \frac{1}{\sqrt{2 \pi}} \right)^3 \int_{k\text{ space}} \phi(\vec{k})\, e^{i \vec{k} \cdot \vec{r}}\, d^3k$$ for position space. How do we know that $\psi$ is not the Fourier transform of $\phi$, rather than the other way around? In other words, why couldn't $\psi$ carry the factor $\exp[-i\vec{k}\cdot\vec{r}]$ and $\phi$ the factor $\exp[i\vec{k}\cdot\vec{r}]$? If there were no difference in the signs, wouldn't there be a problem in the integration from $-\infty$ to $+\infty$ if the probability is asymmetric around zero? What is the physical reason that in the integral for momentum space we have $\exp[-i\vec{k}\cdot\vec{r}]$? I understand the exponent for position space, which can be explained as follows: it's the sum of all definite-momentum states of the system. But what about the integral giving the momentum-space wave function? How can we explain that integral (not mathematically)?
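One purely mathematical remark that may help frame the question (a consistency check added here, not the physical explanation being asked for): whichever function is assigned the minus sign, the two exponents must carry opposite signs, because substituting one transform into the other has to return the original function,
$$\psi(\vec{r}) = \left(\frac{1}{2\pi}\right)^{3} \int d^3k\, e^{i \vec{k} \cdot \vec{r}} \int d^3r'\, \psi(\vec{r}')\, e^{-i \vec{k} \cdot \vec{r}'} = \int d^3r'\, \psi(\vec{r}')\, \delta^{3}(\vec{r}-\vec{r}') = \psi(\vec{r}),$$
using $\int d^3k\, e^{i \vec{k} \cdot (\vec{r}-\vec{r}')} = (2\pi)^{3}\, \delta^{3}(\vec{r}-\vec{r}')$. If both exponents carried the same sign, the delta function, and with it the inversion, would fail. Which of the two integrals is called the "transform" and which the "inverse" is then fixed by the usual physical convention that a state of definite momentum $\hbar\vec{k}$ is written as the plane wave $e^{i \vec{k} \cdot \vec{r}}$ in position space.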
Good question. What you see here is the procedure known as the WKB approximation. Let's start from scratch and proceed slowly. Consider the 1D path integral given by
$$Z = \int_{-\infty}^{\infty}\exp(i\mathcal{S}/\hbar)\,\mathrm{d}x = \int_{-\infty}^\infty\exp\left(\frac{i}{\hbar}\int_0^{t_0}\mathcal{L}(x,\dot{x})\,\mathrm{d}t\right)\mathrm{d}x.$$
Let's look at the Lagrangian closely. Suppose that the Lagrangian has the form
$$\mathcal{L}(x,\dot{x}) = \frac{1}{2}m\dot{x}^2-V(x)$$
where the potential looks like two wells stitched together. For the sake of visualization, suppose the potential is
$$V(x)=\frac{1}{2}(x-L)^2(x+L)^2,$$
so that when $x=L$ or $x=-L$ the potential is $V(x)=0$; graphically this is a double well with minima at $x=\pm L$ separated by a central bump. Anyway, because of energy conservation, we know that $E=0$, but don't forget that $E=\mathcal{H}$, the Hamiltonian, and since $\mathcal{H}$ and $\mathcal{L}$ are related by
$$\mathcal{H}=\dot{x}p-\mathcal{L},$$
we can solve the above equation using $\mathcal{H}=0$; again, we can do this because $E=0$. The above equation actually comes from the Legendre transform of $\mathcal{L}$, and you can find out more about it here. Anyway, returning to the problem, we can solve the above equation to find
\begin{align}\mathcal{L} &= \dot{x}p\iff\\
\frac{1}{2}m\dot{x}^2-V(x)&=\dot{x}p \iff \\
-V(x) &= \dot{x}p - \frac{1}{2}m\dot{x}^2 \iff\\
-V(x) &= \dot{x}p - \frac{1}{2}\dot{x}(m\dot{x})\iff\\
-V(x) &= \dot{x}p - \frac{1}{2}\dot{x}p = \frac{1}{2}\dot{x}p\iff\\
-V(x) &= \frac{p^2}{2m} \implies \\
&\boxed{p = i\sqrt{2mV(x)}}\end{align}
which is imaginary. Now let's go back to the path integral and plug things in to see what happens.
\begin{align}Z &= \int_{-\infty}^{\infty}\exp(i\mathcal{S}/\hbar)\,\mathrm{d}x = \int_{-\infty}^\infty\exp\left(\frac{i}{\hbar}\int_0^{t_0}\mathcal{L}(x,\dot{x})\,\mathrm{d}t\right)\mathrm{d}x\\
&= \int_{-\infty}^\infty\exp\left(\frac{i}{\hbar}\int_0^{t_0}\left[\dot{x}p-\mathcal{H}\right]\mathrm{d}t\right)\mathrm{d}x=\int_{-\infty}^\infty\exp\left(\frac{i}{\hbar}\int_0^{t_0}\left[\dot{x}p-0\right]\mathrm{d}t\right)\mathrm{d}x\\
&=\int_{-\infty}^\infty\exp\left(\frac{i}{\hbar}\int_0^{t_0}\left[\dot{x}\,i\sqrt{2mV(x)}\right]\mathrm{d}t\right)\mathrm{d}x\\
&=\int_{-\infty}^\infty\exp\left(\frac{-1}{\hbar}\int_0^{t_0}\sqrt{2mV(x)}\,\frac{\mathrm{d}x(t)}{\mathrm{d}t}\,\mathrm{d}t\right)\mathrm{d}x\end{align}
Now from calculus, we know that the differential of $x$ is given by
$$\mathrm{d}x(t)=\frac{\mathrm{d}x(t)}{\mathrm{d}t}\,\mathrm{d}t,$$
and then letting $x(t=0)=-L$ and $x(t=t_0)=+L$ we have
$$Z = \int_{-\infty}^\infty\exp\left(\frac{-1}{\hbar}\int_{-L}^{+L}\sqrt{2mV(x)}\,\mathrm{d}x\right)\mathrm{d}x = \int_{-\infty}^{\infty}\exp\left(\frac{-1}{\hbar}\mathcal{S}_{\mathrm{classical}}\right)\mathrm{d}x$$
Boom. The classical action pops right out.
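As a small addition that is not in the original answer but follows directly from the potential chosen above: on $[-L,+L]$ we have $\sqrt{2mV(x)}=\sqrt{m}\,(L^2-x^2)$, so the remaining integral can be evaluated in closed form,
$$\mathcal{S}_{\mathrm{classical}}=\int_{-L}^{+L}\sqrt{2mV(x)}\,\mathrm{d}x=\sqrt{m}\int_{-L}^{+L}\left(L^2-x^2\right)\mathrm{d}x=\frac{4}{3}\sqrt{m}\,L^3,$$
and the tunneling amplitude between the two wells is suppressed by the factor $\exp\left(-\frac{4\sqrt{m}\,L^3}{3\hbar}\right)$, exactly the kind of exponentially small, non-perturbative quantity the WKB/instanton argument is designed to extract.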
Literature on Carbon Nanotube Research

I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growth and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!

Contents
1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
7 High-Performance Carbon Nanotube Fiber

Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials and Engineering, A334, 173-178, 2002
The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual Multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device. On this device CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young's modulus and bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% of strain. During the measurements, 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.

Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004
The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs.
Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...

The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows one to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as of hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).

Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004
In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on the 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given: <math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math> where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/(3L)</math> is given by the fiber diameter <math>d=1\,{\rm nm}</math>, the fiber migration length <math>Q</math> (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here. In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.

Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e.
lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon. In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can choose the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper. In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs. This article can be found in our archive.

Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
Q. Li et al., Advanced Materials, 18, 3160-3163, 2006 (http://www.mse.ncsu.edu/research/zhu/papers/CNT/Adv.Mat.CNTarray.pdf)
Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The long fibers, which we hope have a couple of hundred GPa of tensile strength, can hopefully be spun into the yarns that will make our SE ribbon. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10 nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7 mm! Also, in a range from 0.5 to 1.5 mm fiber length the forests grown with this method can be spun into yarns. The growth rate with this method was initially <math>60\,{\rm \mu m\ min^{-1}}</math> and could be sustained for 90 minutes. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7 mm after 2 h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This made it possible to determine that the forest grew from the base. This is in line with the in situ observations by S. Hofmann et al. (2007).

In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs.
The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle due to its inherent shape as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and so drives the growth process mechanically. Of course, for us in the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough so we can spin them into a yarn that would hold the 100 GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock. If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.

High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science, 318, 1892, 2007.
The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNTs and MWNTs that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20 mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values as the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel. They show in their results plot (Figure 3A) that typically the fibers split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper.
The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is of the order of the typical spacing between weak points. This can be seen from the fact that for the 20 mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20 mm long fiber is 20 times higher than encountering one on a 1 mm long fiber. As a conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000 km and a tensile strength of better than 3 GPa using the proposed method is enormous. This comes back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
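To make the weakest-link argument above slightly more quantitative, here is a minimal back-of-the-envelope sketch in Python (my own illustrative numbers, not taken from the paper): if weak points occur independently along the fiber at an average rate lam per metre, the probability that a fiber of length L contains none of them falls off exponentially with L.

import math

lam = 50.0  # assumed flaw density in weak points per metre (about one per 20 mm, purely illustrative)
for length_m in (0.001, 0.020, 1.0):
    # Poisson model: probability that a fiber of this length contains no weak point
    p_flawless = math.exp(-lam * length_m)
    print("L = %6.3f m  ->  P(flawless) = %.3g" % (length_m, p_flawless))

With these numbers a 100,000 km tether would have an exponent of order 5e9, so the chance of a flawless monolithic fiber is essentially zero unless the flaw density drops by many orders of magnitude, which is exactly why the text above falls back on short, interconnected fibers.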
The general trick to calculating such odds is that the probability of rolling a result that matches some criterion equals the number of possible matching rolls, divided by the total number of possible rolls. (By "roll", here, we mean a sequence of numbers obtained by rolling a certain number (e.g. 6) of a certain kind of fair dice (e.g. d6) in sequence. The important feature here is that each such roll, by itself, is equally likely, which is why the simple formula above works. If the rolls were not all equally likely, we'd have to resort to more complicated maths.)

For 6d6, the total number of possible rolls is \$6^6\$ = 46,656. (More generally, for Nd X, the total number of possible rolls is \$X^N\$.) Next, we just need to figure out in how many ways we can roll each of the results we're interested in.

Straights

For example, let's look at straights first. A straight on 6d6 obviously consists of the numbers 1, 2, 3, 4, 5 and 6, in any order. How many ways are there to order them? Well, imagine that we have six dice, each showing one of the numbers from 1 to 6, and six positions marked 1 to 6 on the table that we want to put the dice in. For the first position, we can choose any of the dice, so we have 6 choices there; for the second position, we only have five dice left, so the number of possible choices we can make for the second die is 5, giving us a total of 6 × 5 = 30 possible choices for the first two dice. Continuing in this manner, we find that the total number of different orders in which we can set down the six distinct dice is \$ 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720 \$. (Mathematicians have a specific name and a notation for such products, because they come up pretty often in math: they call them factorials, and write them by putting an exclamation point after the upper limit, as in 6! = 720.) Thus, the probability of rolling a straight on 6d6 is 6! out of \$6^6\$, or \$720 \div 46,656 \approx 0.0154 = 1.54\%\$. (For straights shorter than 6 dice, things get more complicated; see the computer results below.)

\$n\$ of a kind

What about \$n\$ of a kind? Well, it's pretty obvious that there are exactly six ways to roll 6 of a kind — either all 1, all 2, all 3, all 4, all 5 or all 6. Thus, the probability of rolling six of a kind is \$ 6 \div 6^6 \approx 0.00013 = 0.013\%\$. This is just about the rarest kind of combination you can get.

5 of a kind

For five of a kind, we clearly have six choices for the number that occurs five times, and five choices for the single mismatched roll (or vice versa; it really doesn't matter which way you count them, since the result is the same), for a total of \$6 \times 5 = 30\$ possibilities. But since we're considering ordered die rolls (which we must do, to ensure that every roll is equally likely), we also have six choices for the position of the mismatched die in the sequence, giving us a total of \$30 \times 6 = 180\$ ways to roll 5-of-a-kind on 6d6, and thus a probability of \$180 \div 6^6 \approx 0.00386 = 0.386\%\$.

4 of a kind

How about four of a kind? Again, we have six choices for the matched dice, but now there are more possibilities for the mismatched ones. We could consider the cases where the two mismatched dice are the same or different separately, but that quickly gets a bit complicated.
The easy way here is to first assign the two mismatched dice into specific positions in the sequence; we can put the first one in any of 6 positions, and the second in any of the remaining 5, for a total of 30 choices — but, since we haven't yet assigned values for those dice, they're identical, and so we need to divide by 2 to avoid counting identical positions twice (because putting the first mismatched die in position 1, and the second in position 2, gives the same result as putting the first in position 2 and the second in position 1), giving us 15 ways to place the mismatched dice into the sequence of 6 rolls. Having done that, we just need to pick arbitrary values for those two die rolls; they can be identical, but neither of them can equal the four matched dice, so we have \$5 \times 5 = 25\$ choices total here. Putting this together with the 6 choices for the matched dice, and the 15 ways of picking the positions of the mismatched dice, and we get \$6 \times 15 \times 25 = 2,250\$ ways of rolling 4-of-a-kind on 6d6, with a probability of \$2,250 \div 6^6 \approx 0.0482 = 4.82\%\$, or slightly under one in 20—a lot more likely than 5-of-a-kind.

3 of a kind

We could do the same thing for three-of-a-kind, but that gets even more complicated, mainly because it's now also possible to roll two different sets of three in a single 6d6 roll. Counting the possible combinations, in a similar manner as above, isn't really difficult as such, but it does get tedious and error-prone.

...and so on

Fortunately, we can cheat and use a computer! Since there are only about 47 thousand possible 6d6 rolls, a computer can loop through all of them in a fraction of a second, and count how many times the most common die occurs in each of them. We can also do the same for straights, counting the longest sequence of consecutive dice rolled:

Using the dice_pool() helper function (which enumerates all possible sorted outcomes of rolling Nd X dice and their respective probabilities) from this answer, here's a simple Python program to calculate the probabilities of various groups and straights:

# generate all possible sorted NdD rolls and their probabilities
# see http://en.wikipedia.org/wiki/Multinomial_distribution for the math
factorial = [1.0]

def dice_pool(n, d):
    for i in range(len(factorial), n+1):
        factorial.append(factorial[i-1] * i)
    nom = factorial[n] / float(d)**n
    for roll, den in _dice_pool(n, d):
        yield roll, nom / den

def _dice_pool(n, d):
    if d > 1:
        for i in range(0, n+1):
            pair = (d, i)
            for roll, den in _dice_pool(n-i, d-1):
                yield roll + (pair,), den * factorial[i]
    else:
        yield ((d, n),), factorial[n]

# the actual calculation and output code starts here
groups = {}
straights = {}

for roll, prob in dice_pool(6, 6):
    # find largest n-of-a-kind:
    largest = max(count for num, count in roll)
    if largest not in groups:
        groups[largest] = 0.0
    groups[largest] += prob

    # find longest straight:
    longest = length = 0
    for num, count in roll:
        if count > 0:
            length += 1
        else:
            length = 0
        if longest < length:
            longest = length
    if longest not in straights:
        straights[longest] = 0.0
    straights[longest] += prob

# print out results
for n in groups:
    print("max %d of a kind: %9.6f%%" % (n, 100*groups[n]))
for n in straights:
    print("max %d in a row: %9.6f%%" % (n, 100*straights[n]))

And here's the output:

max 1 of a kind:  1.543210%
max 2 of a kind: 61.728395%
max 3 of a kind: 31.507202%
max 4 of a kind:  4.822531%
max 5 of a kind:  0.385802%
max 6 of a kind:  0.012860%
max 1 in a row:  5.971365%
max 2 in a row: 34.615055%
max 3 in a row: 32.407407%
max 4 in a row: 17.746914%
max 5 in a row:  7.716049%
max 6 in a row:  1.543210%

Note that this output doesn't distinguish e.g. two or three pairs from a single pair, or a triple and a pair from just a triple. If you know some Python, it would not be difficult to modify the program to check for those as well. Also note that it's actually really hard to get no more than one die of each kind (since that actually requires rolling a perfect straight), and also pretty hard to get no more than one in a row (although still a lot easier than getting six of a kind, since e.g. rolling 1,1,3,3,5,5 also counts). Three in a row is also only slightly less likely than two in a row (although some of the rolls counted as three in a row by the program actually include both), but larger groups and straights show the expected downward trend in probability as the group size increases.
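As one possible way to carry out the modification mentioned above (a sketch of my own that reuses the dice_pool() generator from the program, not part of the original answer), you can classify each roll by the full multiset of its group sizes, so that for example (3, 3), i.e. two triples, and (2, 2, 2), i.e. three pairs, each get their own line:

patterns = {}
for roll, prob in dice_pool(6, 6):
    # sorted tuple of all nonzero group sizes, e.g. (4, 2) or (2, 2, 1, 1)
    signature = tuple(sorted((count for num, count in roll if count > 0), reverse=True))
    patterns[signature] = patterns.get(signature, 0.0) + prob

for signature in sorted(patterns, reverse=True):
    print("groups %-18s %9.6f%%" % (signature, 100 * patterns[signature]))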
I have a matrix of values where rows are individuals and columns are attributes. I want to extract a similarity value for every pair of individuals, and I use an RBF kernel: $$k(x_i,x_j) = \exp\left(-\gamma\|x_i-x_j\|^2\right),$$ where $\gamma = \frac{1}{(2\sigma)^2}$. Since each attribute has its own range, I suppose that a normalization step is necessary to get a sound similarity value. I divided each value in column (attribute) $i$ by the norm of column $i$, but now as output from the RBF kernel I get values very near to $1$, and I would have to use a very "high" $\gamma$ value ($\approx 500$) to spread the similarity values out between $0$ (not similar) and $1$ (similar). Is this kind of data normalization "sound" for the RBF kernel? Should I normalize the rows (individuals) rather than the columns (attributes)?
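One common alternative worth trying (a sketch of my own, not from the question; the data and variable names are made up for illustration) is to standardize each attribute to zero mean and unit variance instead of dividing by the column norm, and then to choose $\gamma$ from the data with the median heuristic, so that the squared distances are neither all tiny nor all huge:

import numpy as np

rng = np.random.default_rng(0)
# toy data: 10 individuals (rows), 4 attributes (columns) with very different ranges
X = rng.normal(size=(10, 4)) * np.array([1.0, 10.0, 100.0, 1000.0])

# standardize each column: zero mean, unit variance
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# squared Euclidean distances between all pairs of rows (individuals)
sq_dists = ((Xz[:, None, :] - Xz[None, :, :]) ** 2).sum(axis=-1)

# median heuristic: derive the bandwidth from the median pairwise squared distance
sigma2 = np.median(sq_dists[sq_dists > 0]) / 2.0
gamma = 1.0 / (2.0 * sigma2)

K = np.exp(-gamma * sq_dists)  # similarities now spread over (0, 1]
print("gamma =", gamma)
print(K.round(3))

Note that dividing a column by its norm does not remove the column mean, so an attribute with a large mean but small spread is squashed to a nearly constant column; all pairwise distances then become small and every similarity is pushed toward 1, which is consistent with what you observe. Standardizing the columns (or the rows, if you want each individual's profile shape rather than its magnitude to matter) avoids this.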
The problem of choosing subsets at random has been studied in a rather different context in mathematical economics. Suppose we choose a subset of $[0,1]$ by independently throwing a fair coin for each number. Heuristically, such a set should have measure $1/2$. For what we do is randomly choose an indicator function with pointwise expectation $1/2$. By some intuitive appeal to a law of large numbers, the sample realizations should have the same expectation. This kind of reasoning is widely used in economics. A large population is modeled by a continuum, and even when each person faces individual uncertainty, there should be no aggregate uncertainty. For the reason given by Will Sawin, the naive approach doesn't quite work. For Lebesgue measure, some intuition comes from Lusin's theorem, to the effect that every measurable function is continuous on a "large" subset. Continuity is a condition to the effect that the value at a point is closely related to the value at nearby points. If you choose independently at each value, you wouldn't expect to get a function continuous on a large set. The general tradeoff between independence and measurable sample realizations is strongly expressed in the following result of Yeneng Sun:

Proposition: Let $(I,\mathcal{I},\mu)$ and $(X,\mathcal{X},\nu)$ be probability spaces with (complete) product probability space $(I\times X,\mathcal{I}\otimes\mathcal{X},\mu\otimes\nu)$ and $f$ be a jointly measurable function from $I\times X$ to $\mathbb{R}$ such that for $\mu\otimes\mu$-almost all $(i,j)$ the functions $f(i,\cdot)$ and $f(j,\cdot)$ are independent. Then for $\mu$-almost all $i$, the function $f(i,\cdot)$ is constant.

Note that the independence condition in this result is quite weak. Sun calls it almost sure pairwise independence. But an important discovery by Sun was that if joint measurability and almost sure pairwise independence were compatible, one could obtain an exact law of large numbers for a continuum of random variables by an application of Fubini's theorem. In particular, such a law of large numbers holds for extensions of the product spaces that allow for the conclusion of Fubini's theorem to hold and still allow for nontrivial (a.s. pairwise) independent processes. He called such extensions rich Fubini extensions and gave one example of such a product space: the Loeb product of two hyperfinite Loeb spaces. So one can get natural random sets for some spaces. The reference is: The exact law of large numbers via Fubini extension and characterization of insurable risks (2006).

A systematic study of rich Fubini extensions was done by Konrad Podczeck in the paper On existence of rich Fubini extensions (2010), in which he has essentially shown that one can choose random subsets of a probability space if and only if the probability space has the following property, which he called super-atomlessness (and which is known by a lot of other names, such as saturation): for any subset $A$ with positive measure, the measure algebra of the trace on $A$ does not coincide with the measure algebra of a countably generated space. Lebesgue measure on the unit interval does not satisfy this condition, but there exist extensions of Lebesgue measure that are super-atomless.

Conclusion: One cannot obtain random Lebesgue measurable sets in a sensible way by choosing elements independently, but one can choose random sets in an extension of Lebesgue measure this way.
Why is the expectation value of momentum $\langle p \rangle$ zero for the one-dimensional ground-state wave function of an infinite square well? And why is $\langle p^2 \rangle = \frac{\hbar^2 \pi^2}{L^2}$? I am not asking for a proof. I am trying to interpret their physical meanings. I understand that $\langle p \rangle = 0$ does not imply that the momentum itself is zero, though at first I thought that way. I would really appreciate it if you can help me build a good intuition. I have seen other questions related to this matter here, but they have not given the intuition I am looking for.

To be $0$ on average, a quantity must either be always $0$, as you suggest, or else it must have both positive and negative outcomes, for instance $\{-2,-1,3\}$. Of course it is possible for a particle in $1d$ to have positive or negative values of momentum, with negative values corresponding to motion in the negative direction. In the case of a steady-state solution to the Schrödinger equation (such as the ground state of the infinite well), the distribution of positions does not change with time, so $\langle x\rangle$ does not depend on $t$; as a result, the average momentum $\langle p\rangle= m\frac{d}{dt}\langle x\rangle=0$. If your solution is not a steady-state solution then $\langle p\rangle\ne 0$ in general. On the other hand, $\langle p^2\rangle$ is the average of non-negative quantities since $p^2\ge 0$; this average cannot be negative, e.g. the average of $\{4,1,9\}$ is certainly not $0$. The precise value of $\langle p^2\rangle$ just happens to work out to $\hbar^2\pi^2/L^2$ and is basically related to the energy of your system: the ground state energy is $E_1= \pi^2\hbar^2/(2mL^2)$; in an infinite well the energy is completely kinetic. Since $E_1=\langle p^2\rangle/(2m)$, it's an easy matter to find $\langle p^2\rangle= \pi^2\hbar^2/L^2$.

So when trying to build intuition about a subject, ideally you want something experiential to help you understand it. By the statement of your question you seem to get why the math works out the way it does (please correct me if this is not the case) but don't get what it's telling you about the system. Let's see if we can construct some intuition from an experience that a lot of us have: driving. Let's say you live 5 miles away from work, which for the sake of this example takes you 20 minutes to drive to work and 20 minutes to drive back (obviously this is hardly ever the case, but we're idealizing here). On average you're driving at 15 mph to and from work. But because you spend as much time driving to work as you do driving home, I can't say I expect you to do one more than the other, so my expectation value would have to be equally weighted between both possibilities. However, that doesn't mean I can't have an idea of how fast you're going (15 mph), which is definite and positive. In terms of the particle in the box, classically the particle can be viewed as traveling toward one side of the box just as much as it does toward the other. Therefore, if we had any nonzero value we would be claiming that it would be spending more time traveling to one side than the other (though in reality "time" isn't the right way to think about it, but it's a good starting point). However, it does have a nonzero speed, so if we average that and square it, it will still be nonzero (multiply by the mass squared to get $\langle p ^{2} \rangle$).
The first quantity, $\langle p \rangle$, says more about which way the particle is moving on average, while the latter, $\langle p^{2} \rangle$, says more about how fast the particle is going. This all of course is simplifying quite a bit, but hopefully it's enough to get the gears moving.

The expectation value is nothing but an average value, so $\langle p \rangle$ is the average of the many momentum values in that state. There can be positive momenta as well as negative momenta, the sign referring to the direction of motion (in the mathematical perspective), so $\langle p \rangle = 0$; but for $p^2$ the average is not $0$.
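For readers who do want to see it, the computation behind the numbers quoted above is short (a standard textbook check, not part of the original answers). With the ground state $\psi_1(x)=\sqrt{2/L}\,\sin(\pi x/L)$ on $[0,L]$,
$$\langle p\rangle=\int_0^L \psi_1\left(-i\hbar\frac{d}{dx}\right)\psi_1\,dx=-\frac{2i\hbar\pi}{L^2}\int_0^L\sin\frac{\pi x}{L}\cos\frac{\pi x}{L}\,dx=0,$$
$$\langle p^2\rangle=\int_0^L \psi_1\left(-\hbar^2\frac{d^2}{dx^2}\right)\psi_1\,dx=\frac{\hbar^2\pi^2}{L^2}\int_0^L\psi_1^2\,dx=\frac{\hbar^2\pi^2}{L^2},$$
consistent with $E_1=\langle p^2\rangle/(2m)=\pi^2\hbar^2/(2mL^2)$: the first integrand is odd about the midpoint of the well, while the second is just the normalization integral.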
Anand, V and Ghosh, S and Ghosh, M and Rao, GM and Railkar, R and Dighe, RR (2011) Surface modification of PDMS using atmospheric glow discharge polymerization of tetrafluoroethane for immobilization of biomolecules. In: Applied Surface Science, 257 (20). pp. 8378-8384.
Kene, PS and Dighe, RR and Mahale, SD (2005) Delineation of regions in the extracellular domain of follicle-stimulating hormone receptor involved in hormone binding and signal transduction. In: American Journal Of Reproductive Immunology, 54 (1). pp. 38-48.
Gadkari, RA and Roy, S and Rekha, N and Srinivasan, N and Dighe, RR (2005) Identification of a heterodimer-specific epitope present in human chorionic gonadotrophin (hCG) using a monoclonal antibody that can distinguish between hCG and human LH. In: Journal of Molecular Endocrinology, 34 (3). pp. 879-887.
Gupta, CS and Dighe, RR (2000) Biological activity of single chain choriogonadotropin, $hCG\alpha\beta$, is decreased upon deletion of five carboxyl terminal amino acids of the $\alpha$ subunit without affecting its receptor binding. In: Journal of Molecular Endocrinology, 24 (2). pp. 157-164.
Gupta, Sen C and Dighe, RR (2000) Biological activity of single chain chorionic gonadotropin, $hCG\alpha\beta$, is decreased upon deletion of five carboxyl terminal amino acids of the $\alpha$ subunit without affecting its receptor binding. In: Journal of Molecular Endocrinology, 24 (2). pp. 157-164.
Gupta, Sen C and Dighe, RR (1999) Hyperexpression of biologically active human chorionic gonadotropin using the methylotropic yeast, Pichia pastoris. In: Journal of Molecular Endocrinology, 22 (3). pp. 273-283.
Samaddar, M and Catterall, JF and Dighe, RR (1997) Expression of biologically active beta subunit of bovine follicle-stimulating hormone in the methylotrophic yeast Pichia pastoris. In: Protein Expression and Purification, 10 (3). pp. 345-355.
Jeyakumar, M and Krishnamurthy, HN and Dighe, RR and Moudgal, NR (1997) Demonstration of complementarity between monoclonal antibodies (MAbs) to human chorionic gonadotropin (hCG) and polyclonal antibodies to luteinizing hormone hCG receptor (LH-R) and their use in better understanding hormone-receptor interaction. In: Receptors & Signal Transduction, 7 (4). pp. 299-310.
Shetty, J and Marathe, GK and Dighe, RR (1996) Specific immunoneutralization of FSH leads to apoptotic cell death of the pachytene spermatocytes and spermatogonial cells in the rat. In: Endocrinology, 137 (5). pp. 2179-2182.
Moudgal, NR and Ravindranath, N and Murthy, GS and Dighe, RR and Aravindan, GR and Martin, F (1992) Long-term contraceptive efficacy of vaccine of ovine follicle-stimulating hormone in male bonnet monkeys (Macaca radiata). In: Journal of Reproduction and Fertility, 92 (1). pp. 91-102.
Moudgal, NR and Sairam, MR and Dighe, RR (1989) Relative ability of ovine follicle stimulating hormone and its beta-subunit to generate antibodies having bioneutralization potential in nonhuman primates. In: Journal of Biosciences, 14 (2). pp. 91-100.
Dighe, RR and Moudgal, NR (1983) Use of $\alpha$- and $\delta$-subunit specific antibodies in studying interaction of hCG with Leydig cell receptors. In: Archives of Biochemistry and Biophysics, 225 (2). pp. 490-499.
Dighe, RR and Moudgal, NR (1982) Steroidogenesis in the desensitized rat corpora-lutea. In: Journal of Steroid Biochemistry, 17 (3).
Dighe, RR and Muralidhar, K and Moudgal, NR (1979) Ability of human chorionic gonadotropin beta-subunit to inhibit the steroidogenic response to lutropin.
In: Biochemical Journal, 180 (3). pp. 573-578.
Banik, B and Sasmal, PK and Roy, S and Chakravarty, AR and Majumdar, R and Dighe, RR (2014) Oxovanadium(IV) Complexes for Cellular Imaging and Photochemotherapeutic Applications. In: Journal of Biological Inorganic Chemistry, 19 (1).
Majumdar, R and Railkar, RS and Roy, S and Dighe, RR (2010) Dissecting the Glycoprotein Hormones - Receptors Interactions Using Receptor Antibodies. In: Endocrine Reviews, 31 (3, Sup). S2467.
Dighe, RR and Moudgal, NR (1981) Use of highly specific antibodies against hCG subunits to study the interaction between the subunits and receptors. In: Indian Journal of Biochemistry & Biophysics, 18 (4). p. 73.
Dighe, RR and Muralidhar, K and Moudgal, NR (1978) Ability of Beta-Subunit of Human Chorionic Gonadotropin (HCG) to Inhibit Steroidogenesis in Response to Lutropin (LH). In: Indian Journal of Biochemistry & Biophysics, 15 (2). pp. 55-55.
J. D. Hamkins, "The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme," ArXiv e-prints, 2016. (under review)

@ARTICLE{Hamkins:The-Vopenka-principle-is-inequivalent-to-but-conservative-over-the-Vopenka-scheme,
  author = {Joel David Hamkins},
  title = {The {Vop\v{e}nka} principle is inequivalent to but conservative over the {Vop\v{e}nka} scheme},
  journal = {ArXiv e-prints},
  year = {2016},
  volume = {},
  number = {},
  pages = {},
  month = {},
  note = {under review},
  abstract = {},
  keywords = {under-review},
  source = {},
  eprint = {1606.03778},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1lV},
}

Abstract. The Vopěnka principle, which asserts that every proper class of first-order structures in a common language admits an elementary embedding between two of its members, is not equivalent over GBC to the first-order Vopěnka scheme, which makes the Vopěnka assertion only for the first-order definable classes of structures. Nevertheless, the two Vopěnka axioms are equiconsistent and they have exactly the same first-order consequences in the language of set theory. Specifically, GBC plus the Vopěnka principle is conservative over ZFC plus the Vopěnka scheme for first-order assertions in the language of set theory.

The Vopěnka principle is the assertion that for every proper class $\mathcal{M}$ of first-order $\mathcal{L}$-structures, for a set-sized language $\mathcal{L}$, there are distinct members of the class $M,N\in\mathcal{M}$ with an elementary embedding $j:M\to N$ between them. In quantifying over classes, this principle is a single assertion in the language of second-order set theory, and it makes sense to consider the Vopěnka principle in the context of a second-order set theory, such as Gödel-Bernays set theory GBC, whose language allows one to quantify over classes. In this article, GBC includes the global axiom of choice. In contrast, the first-order Vopěnka scheme makes the Vopěnka assertion only for the first-order definable classes $\mathcal{M}$ (allowing parameters). This theory can be expressed as a scheme of first-order statements, one for each possible definition of a class, and it makes sense to consider the Vopěnka scheme in Zermelo-Fraenkel ZFC set theory with the axiom of choice. Because the Vopěnka principle is a second-order assertion, it does not make sense to refer to it in the context of ZFC set theory, whose first-order language does not allow quantification over classes; one typically retreats to the Vopěnka scheme in that context. The theme of this article is to investigate the precise meta-mathematical interactions between these two treatments of Vopěnka's idea.

Main Theorems. If ZFC and the Vopěnka scheme hold, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka scheme hold, but the Vopěnka principle fails. If ZFC and the Vopěnka scheme hold, then there is a class forcing extension, adding classes but no sets, in which GBC and the Vopěnka principle hold.

It follows that the Vopěnka principle VP and the Vopěnka scheme VS are not equivalent, but they are equiconsistent and indeed, they have the same first-order consequences.

Corollaries. Over GBC, the Vopěnka principle and the Vopěnka scheme, if consistent, are not equivalent. Nevertheless, the two Vopěnka axioms are equiconsistent over GBC. Indeed, the two Vopěnka axioms have exactly the same first-order consequences in the language of set theory.
Specifically, GBC plus the Vopěnka principle is conservative over ZFC plus the Vopěnka scheme for assertions in the first-order language of set theory. $$\text{GBC}+\text{VP}\vdash\phi\qquad\text{if and only if}\qquad\text{ZFC}+\text{VS}\vdash\phi$$ These results grew out of my answer to a MathOverflow question of Mike Shulman, Can Vopěnka's principle be violated definably?, inquiring whether there would always be a definable counterexample to the Vopěnka principle, whenever it should happen to fail. I interpret the question as asking whether the Vopěnka scheme is necessarily equivalent to the Vopěnka principle, and the answer is negative. The proof of the main theorem involves the concept of a stretchable set $g\subset\kappa$ for an $A$-extendible cardinal, which has the property that for every cardinal $\lambda>\kappa$ and every extension $h\subset\lambda$ with $h\cap\kappa=g$, there is an elementary embedding $j:\langle V_\lambda,\in,A\cap V_\lambda\rangle\to\langle V_\theta,\in,A\cap V_\theta\rangle$ such that $j(g)\cap\lambda=h$. Thus, the set $g$ can be stretched by an $A$-extendibility embedding so as to agree with any given $h$.
Homogenization of singular quasilinear elliptic problems with natural growth in a domain with many small holes

1. Departamento de Matemáticas, Universidad de Almería, Ctra. Sacramento s/n, La Cañada de San Urbano 04120, Almería, Spain
2. Departamento de Matemática Aplicada y Estadística, Campus Alfonso XIII, Universidad Politécnica de Cartagena, 30203, Murcia, Spain

The paper studies the homogenization of the singular quasilinear problem
$$\begin{cases} -\Delta u^\varepsilon + \dfrac{|\nabla u^\varepsilon|^2}{(u^\varepsilon)^\theta} = f(x) & \text{in } \Omega^\varepsilon,\\ u^\varepsilon = 0 & \text{on } \partial\Omega^\varepsilon, \end{cases}$$
where $\Omega^\varepsilon \subset \mathbb{R}^N$ is a domain with many small holes, $\theta \in (0,1)$, and $f$ is a given datum.

Mathematics Subject Classification: Primary: 35B09, 35B25, 35B27, 35J25, 35J60, 35J75; Secondary: 35A01, 35D30.

Citation: José Carmona, Pedro J. Martínez-Aparicio. Homogenization of singular quasilinear elliptic problems with natural growth in a domain with many small holes. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1): 15-31. doi: 10.3934/dcds.2017002
I have the following exercise: For $\epsilon \in ]0,1]$, we have the functional $$T_{\epsilon}(f)=\frac{1}{2\epsilon}\int_{-\epsilon}^{\epsilon}f(t)dt$$ and $T_0(f)=f(0).$ Show that for $\epsilon \in ]0,1]$ and $f\in E= C^1(]-1,1[)$ bounded together with its derivative, we have $$|T_{\epsilon}(f)-T_0(f)|\leq \frac{\epsilon}{2}||f||_E$$ where $||f||_E=\sup_{-1\leq t \leq 1}|f(t)|+\sup_{-1\leq t \leq 1}|f'(t)|$. My ideas for this statement: I have the idea that I should let $\epsilon$ tend to $1$. Rewrite $T_0(f)$ in some convenient way so that it involves an integral of the form $\int_{-1}^{1}|f(t)|dt$, and then apply the $\sup$ to obtain $||f||_E$. Finally, I should use this result to prove that $L^p$ is not closed in $E'.$ Thanks a lot for your suggestions.
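Not a proof, but here is a quick numerical sanity check of the claimed inequality (the test function $f$ is an arbitrary choice of mine, not part of the exercise):

```python
# Numerical sanity check of |T_eps(f) - T_0(f)| <= (eps/2) * ||f||_E
# for an arbitrarily chosen smooth test function f.
import numpy as np
from scipy.integrate import quad

def f(t):
    return np.sin(3 * t) + 0.5 * np.cos(t)

def fprime(t):
    return 3 * np.cos(3 * t) - 0.5 * np.sin(t)

# sup norms over [-1, 1], approximated on a fine grid
ts = np.linspace(-1, 1, 100001)
norm_E = np.max(np.abs(f(ts))) + np.max(np.abs(fprime(ts)))

for eps in [1.0, 0.5, 0.1, 0.01]:
    T_eps = quad(f, -eps, eps)[0] / (2 * eps)   # averaged integral
    lhs = abs(T_eps - f(0.0))                   # |T_eps(f) - T_0(f)|
    rhs = eps / 2 * norm_E                      # claimed bound
    print(f"eps={eps:5.2f}  |T_eps - T_0| = {lhs:.4e}  <=  (eps/2)||f||_E = {rhs:.4e}")
```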
Say we have $k$ samples of data, where sample $i$ is of size $n_i$ and we write it as $x_{i1}, ... , x_{in_i}$. Let the total sample size be $N$. The ANOVA model is $X_{ij} \sim N(\mu_i, \sigma^2)$ independently. The null hypothesis is that the $\mu_i$ are all equal. The alternative hypothesis is that the null hypothesis is not true. The ANOVA F-statistic is $$F = \frac{S_2/(k-1)}{S_1/(N - k)},$$ where $$S_1 = \sum_{i, j}(x_{ij} - \bar{x}_{i\bullet})^2$$ is the within samples sum of squares and $$S_2 = \sum_in_i(\bar{x}_{i\bullet} - \bar{x}_{\bullet\bullet})^2$$ is the between samples sum of squares. We know that $S_1$ and $S_2$ are independent and $S_0 = S_1 + S_2$ $(*)$, where $$S_0 = \sum_{i, j}(x_{ij} - \bar{x}_{\bullet\bullet})^2$$ is the total sum of squares. It is straightforward to show that under both the null and the alternative hypotheses, $S_1 \sim \sigma^2\chi^2_{N - k}$. Also, under the null hypothesis, the $X_{ij}$ are identically distributed, and so $S_0 \sim \sigma^2\chi^2_{N-1}$. It follows from $(*)$ that under the null hypothesis $S_2 \sim \sigma^2\chi^2_{k-1}$ and thus $F \sim F_{k-1, N-k}$. It is claimed that under the alternative hypothesis, $F$ follows a non-central $F$-distribution $F_{k-1, N-k}(\lambda)$, where $\lambda = \sum_in_i(\mu_i - \bar\mu)^2$ and $\bar\mu = \sum_in_i\mu_i/N$ — or equivalently, that $S_2$ follows a (scaled) non-central $\chi^2$ distribution, $\sigma^2\chi^2_{k-1}(\lambda)$. My tentative approach to proving this is similar to the derivation under the null hypothesis — that is, it is sufficient to prove that $S_0$ follows a (scaled) non-central $\chi^2$ distribution, $\sigma^2\chi^2_{N-1}(\lambda)$. I've shown that this would follow from a slightly more general statement, namely that if $Y_i \sim N(\mu_i, \sigma^2)$ independently (sample size $N$), then $S_0 = \sum_i(Y_i - \bar{Y})^2 \sim \sigma^2\chi^2_{N-1}(\lambda)$, where $\lambda = \sum_i(\mu_i - \bar\mu)^2$. Is this the best approach? And what is the simplest proof of the final statement above? Thanks.
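Not an answer to the distribution-theory question, but a quick simulation (my own illustration; the group means and sizes are arbitrary) that checks the claim numerically. I take $\sigma=1$ so that the noncentrality parameter equals the $\lambda=\sum_i n_i(\mu_i-\bar\mu)^2$ written above (for general $\sigma$ the usual convention divides by $\sigma^2$); the empirical tail probabilities of $F$ then match scipy's noncentral $F$ distribution.

```python
# Simulation: the ANOVA F-statistic under a fixed alternative vs. the noncentral F.
# Illustration only; sigma = 1 so that lambda = sum_i n_i (mu_i - mu_bar)^2 matches
# the usual noncentrality parameter (in general one divides by sigma^2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mus = np.array([0.0, 0.3, 0.6])     # group means under the alternative (arbitrary)
ns = np.array([20, 25, 30])         # group sizes (arbitrary)
sigma, k, N = 1.0, len(mus), ns.sum()

mu_bar = np.sum(ns * mus) / N
lam = np.sum(ns * (mus - mu_bar) ** 2) / sigma**2   # noncentrality parameter

def f_stat():
    groups = [rng.normal(m, sigma, n) for m, n in zip(mus, ns)]
    xbar = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    s1 = sum(((g - g.mean()) ** 2).sum() for g in groups)   # within-samples SS
    s2 = np.sum(ns * (xbar - grand) ** 2)                   # between-samples SS
    return (s2 / (k - 1)) / (s1 / (N - k))

fs = np.array([f_stat() for _ in range(20000)])
grid = np.array([1.0, 2.0, 3.0, 4.0])
print("empirical  P(F > x):", [(fs > x).mean().round(3) for x in grid])
print("noncentral P(F > x):", stats.ncf.sf(grid, k - 1, N - k, lam).round(3))
```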
Let me explain in detail. Let $\Psi=\Psi(x,t)$ be the wave function of a particle moving in a one-dimensional space. Is there a way of writing $\Psi(x,t)$ so that $|\Psi(x,t)|^2$ represents the probability density of finding a particle in classical mechanics (using a Dirac delta function, perhaps)?

Sure you can! This is actually a simple but very interesting result, and it is usually shown in quantum mechanics courses. It's called the Ehrenfest theorem, and I won't prove it here, but I'll copy the result from Sakurai's Modern Quantum Mechanics (1991). You can check the mathematical details there, or in many other books. If you have a Hamiltonian of the form $$H = \frac{p^2}{2\,m}+V(x)$$ you can prove that, in the Heisenberg picture, $$m \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = -\nabla V(x) .$$ If you now take the expectation value of that equation (for certain state kets), you get $$m \frac{\mathrm{d}^2\langle x \rangle}{\mathrm{d}t^2} = \frac{\mathrm{d}\langle p \rangle}{\mathrm{d}t} = -\langle \nabla V(x) \rangle .$$ This result is valid in both the Heisenberg and the Schrödinger picture. If you want to recover the classical limit, you need to say that the region where the wavefunction is significantly nonzero is much smaller than the scale of variation of the potential. In that case, you can identify the center of the wavefunction with the position of the particle, and $\langle \nabla V(x) \rangle$ turns into $\nabla V(\langle x \rangle)$. What this means, conceptually, is that the center of the wavefunction will move according to the classical laws if you can't "see" that your object/particle is not a material point, and if your potential is also classical, in that it doesn't have variations comparable to the "size" of the wavefunction.

The short answer: no, there does not exist any wavefunction in Hilbert space which reproduces classical mechanics. The classical limit of quantum mechanics is studied in some depth in Ballentine's textbook. For instance, section 14.1 is devoted to the Ehrenfest theorem, and it is shown that the theorem is neither necessary nor sufficient to define the classical regime. The paper What is the limit $\hbar \rightarrow 0$ of quantum theory? (accepted for publication in the American Journal of Physics) shows that Schrödinger's equation for a single particle moving in an external potential does not lead to Newton's equation of motion for the particle in the general case. Page 9 of this more recent article deals precisely with the question of why no wavefunction in the Hilbert space can give a classical delta-function probability.
@Arnoques Sorry, but I think there is an error in your answer: the spatial extent of the particle wavefunction must be much smaller (not larger) than the variation length-scale of the potential in order to turn $\langle \nabla V(x)\rangle$ into $\nabla V\left(\langle x\rangle \right).$ Only in this case is it possible to make a Taylor expansion of $V(X),$ because $V(X)$ is slowly varying on the domain where the wave function is not null, and you can take the expectation value: $$\nabla_i V(X) = \nabla_i V(\langle X\rangle ) + (X_j - \langle X_j\rangle )\,\nabla_j \nabla_i V(\langle X\rangle )\; +\;\text{negligible higher order terms in } (X_j - \langle X_j\rangle).$$ So $\langle \nabla_i V(X)\rangle = \nabla_i V(\langle X\rangle),$ because $\langle X_j - \langle X_j\rangle \rangle = 0\;.$

You can recover Schrödinger's equation from the path integral formulation of quantum mechanics by Feynman. In the path integral picture the classical trajectories are the stationary points of the integrand. So in the stationary phase approximation, they are the contribution of $0$-th order in $\hbar$. Of course that is not a direct relation between the Schrödinger equation and classical trajectories.

A more intuitive picture is in Arnoques' answer; an alternative and slightly more formal approach is to note that all QM equations of motion have their classical mechanics equivalent if you formulate them using commutators and then replace the commutator with the Poisson bracket ($\partial A/\partial t = [H,A]$ $\Rightarrow$ $\partial a/\partial t = \{ H,a \}_{q,p}$, if you "hide" the Planck constant). The commutator itself is of course zero in the classical case, when operators reduce to numbers. Accordingly, all general system properties map easily from QM to CM. And it may be shown (too much to copy, sorry) that a formal limit $\hbar\to0$ leads to exact equivalence between the commutator and the Poisson bracket. Concerning the wavefunction, classical motion is definite. Instead of a probability you have a definite correspondence between $t$ and $x$. Indeed, you may formulate it by saying that classically $|\Psi(x,t)|^2=\delta(x-x_c(t))$, where $x_c(t)$ is the classical trajectory. To write $\Psi$ itself, you have to treat $\hbar\to0$ carefully to avoid divergent integrals. Normally, there is no reason to do this. And technically, there is no guarantee that the limit $\hbar\to0$ of some particular quantum state is a "normal" solution of the classical problem.

This is exactly what Feynman's path integral does: many-body quantum effects reveal the classical limit at high temperature! Instead of solving the Schrödinger equation, it solves Newton's equation by splitting the atom into beads and perturbing the system effectively to find the thermal equilibrium state of the system. The method showed the isomorphism of quantum theory and classical statistical mechanics, which leads to an interesting point: imaginary time is isomorphic to the inverse of temperature (see Wick rotation). You may want to read Feynman's thesis. BTW, I personally started the topic by reading this paper: Feynman's derivation of the Schrödinger equation
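As a quick numerical illustration of the narrow-wavepacket point discussed above (my own sketch, not part of any of the answers): for a Gaussian position density of width $s$ and an anharmonic potential of my choosing, $\langle V'(x)\rangle$ approaches $V'(\langle x\rangle)$ as $s\to 0$.

```python
# Compare <V'(x)> with V'(<x>) for a Gaussian position density of width s,
# using the anharmonic potential V(x) = x^4 (both choices are mine, for illustration).
import numpy as np

def expectation_of_Vprime(x0, s, n=400001):
    x = np.linspace(x0 - 10 * s, x0 + 10 * s, n)
    rho = np.exp(-((x - x0) ** 2) / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    Vprime = 4 * x**3
    return np.trapz(rho * Vprime, x)

x0 = 1.0
for s in [1.0, 0.3, 0.1, 0.01]:
    lhs = expectation_of_Vprime(x0, s)      # <V'(x)>
    rhs = 4 * x0**3                         # V'(<x>)
    print(f"width s={s:5.2f}:  <V'(x)> = {lhs:8.4f}   V'(<x>) = {rhs:8.4f}")
```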
Four ships are located at the ocean, such that the distance between all four ships is the same. How are the ships positioned? Four ships are located at the ocean, such that the distance between all four ships is the same. Place them on the globe at the vertices of a (large) regular tetrahedron. They could also be at the vertices of a tetrahedron, with three ships near each other on the surface of the ocean and the other being a shipwreck on the ocean floor. They could all be distance 0 apart, i.e. touching each other. For example, three small ships in a triangular formation (all touching each other) on board a larger shipping vessel. If we take your phrasing very literally, "the ocean" shouldn't mean more than one ocean. My understanding of geography tells me ships on the surface of any single ocean could not be equidistant. But you never said they were floating on the surface. As @JonMarkPerry suggested, the ships should be at the points of a regular tetrahedron, but I would position two ships floating on the ocean surface and two sunken on the floor. That way they don't have to be too far apart. Inspired by @Jafe's answer... The Starship Enterprise is floating in the ocean, and inside the holodeck there is a blimp airship. Inside the balloon part of the airship there is a typical sailboat, and inside the hold of the sailboat there is a "ship-in-a-bottle". One or more of the ships is a submarine. Any regular tetrahedron in 3D (ocean) space will do. One solution on the earth globe: 3 ships on a parallel at equal distances and the fourth ship on a pole, like a spherical tetrahedron. It must fulfill: 1) $D_1 = 2\pi R_1/3$, where $D_1$ is the distance between the 3 boats on the parallel circle and $R_1$ the radius of this parallel, whose length is $2\pi R_1$. 2) $D_2 = R\,\alpha$, where $D_2$ is the distance between any of the 3 boats and the boat on the pole, $R$ the radius of the earth and $\alpha$ the angle in radians between the pole and the parallel (like a latitude, but measured from the pole rather than the equator). 3) $D_1=D_2$, as we want the same distance between all boats. Combining the previous equations with the fact that the radius of the parallel is $R_1 = R\sin(\alpha)$, we find $\alpha/\sin(\alpha)=2\pi/3$, so $\alpha = 1.947 \text{ radians} = 111.5^\circ$, which is equivalent to a latitude of $21.5$ degrees south. So if ... the 3 boats are on the parallel of latitude $21.5^\circ\,\mathrm{S}$ and the fourth is on the pole, the distances between them all are the same: $D_1 = R\,\alpha = 6370\,\mathrm{km}\times 1.947 \approx 12{,}402\,\mathrm{km}$ (see the numerical sketch after the final answer below). QED Just to add to the number of possibilities regarding alternate solutions: They are four spaceships arranged at the vertices of a tetrahedron orbiting above the ocean. If the ships are specified to be on the surface of the ocean (one ocean, not multiple), and the ocean is shaped as a normal ocean (not a sphere), we can still cheat with physics: Spatially, the ships would be equidistant if they are configured in a tetrahedron. But we need not deal only with space. If we consider distance in spacetime, we can make a similar tetrahedron where all ships are on the ocean's surface: Three of the ships are currently in a triangle pattern, equidistant, at the same time. One ship was formerly in the middle of that triangle. Let's say it was a spaceship, and it warped out.
The time at which it was at that position can be chosen so that the spacetime separation from that ship to each of the others equals the spacetime separation between any two of the other ships (which is purely spatial distance). We can thus construct a tetrahedron without requiring that a ship be above or below the surface, by using extra temporal distance to make up for what is lacking in spatial distance.
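For the spherical configuration described two answers above (three ships on a common parallel plus one at a pole), the condition $\alpha/\sin(\alpha)=2\pi/3$ can be solved numerically; a minimal sketch, using the Earth radius of 6370 km quoted in that answer:

```python
# Solve alpha / sin(alpha) = 2*pi/3 for the "three ships on a parallel plus one
# at the pole" configuration, then report the common distance and the latitude.
import math
from scipy.optimize import brentq

f = lambda a: a - (2 * math.pi / 3) * math.sin(a)   # root away from a = 0
alpha = brentq(f, 1.0, 3.0)                          # angle from the pole, in radians

R = 6370.0                                           # Earth radius in km (as above)
distance = R * alpha
latitude = math.degrees(alpha) - 90.0                # degrees south of the equator

print(f"alpha    = {alpha:.3f} rad = {math.degrees(alpha):.1f} deg from the pole")
print(f"latitude = {latitude:.1f} deg S")
print(f"distance = {distance:,.0f} km between each pair of ships")
```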
Research Open Access Published: Blow up of solutions for a class of fourth order nonlinear pseudo-parabolic equation with a nonlocal source Boundary Value Problems volume 2015, Article number: 109 (2015) Article metrics 1697 Accesses Abstract In this paper, we consider the initial boundary value problem for a fourth order nonlinear pseudo-parabolic equation with a nonlocal source. By using the concavity method, we establish a blow-up result of the solutions under suitable assumptions on the initial energy. Introduction In this article, we are concerned with the following initial boundary value problem: where \(p>0\), and Ω is a bounded domain of \(\mathbb {R}^{n}\) (\(n\geq 1\)) with a smooth boundary ∂Ω. Here, ν is the unit outward normal to ∂Ω, and \(K(x,y)\) is an integrable, real valued function such that \(K(x,y)=K(y,x)\). It is well known that this type of equations describes a variety of important physical processes, such as the analysis of heat conduction in materials with memory, viscous flow in materials with memory [1], the theory of heat and mass exchange in stably stratified turbulent shear flow [2], the non-equilibrium water-oil displacement in porous strata [3], the aggregation of populations [4–6], the velocity evolution of ion-acoustic waves in a collisionless plasma when ion viscosity is invoked [7], filtration theory [8, 9], cell growth theory [10, 11], and so on. In population dynamics theory, the nonlocal term indicates that evolution of species at a point of space does not depend only on the nearby density but also on the total amount of species due to the effects of spatial inhomogeneity; see [4]. There have also been many profound results on the existence of global solutions and asymptotic behavior of the solutions for the initial boundary value problems and the initial value problems of fourth order nonlinear pseudo-parabolic equations. In 1972, Kabanin [8] considered the following problem: where α, β, γ are positive constants. A classical solution of this mixed problem is obtained through the Fourier method in the form of a series. Conditions sufficient for uniform convergence of this series are found. In 1978, Bakiyevich and Shadrin [9] considered the following problem: where \(\alpha>0\), \(\beta\geq0\), \(\gamma>0\) are constants. They showed that the solutions of this problem are expressed through the sum of convolutions of functions \(\varphi(x)\) and \(f(t,x)\) with corresponding fundamental solutions of the problem. Zhao and Xuan [12] studied the following fourth order pseudo-parabolic equation: They obtained the existence of the global smooth solutions for the initial value problem of (1.4) and discussed the convergence behavior of solutions as \(\beta\rightarrow0\). Recently, Khudaverdiyev and Farhadova [13] discussed the following fourth order semilinear pseudo-parabolic equation: with Ionkin type non-self-adjoint mixed boundary conditions, where \(\alpha>0\) is a fixed number. They proved the local existence for a generalized solution of the mixed problem under consideration by combining generalized contracted mapping principle and Schauder’s fixed point principle and then proved the global existence for a generalized solution by means of Schauder’s stronger fixed point principle. The so-called viscous Cahn-Hilliard equation is also in a class of fourth order nonlinear pseudo-parabolic equations and can be considered as a special case of (1.5). In recent years, a lot of attention has been paid to the viscous Cahn-Hilliard equations. 
For more and deeper investigations of the stability analysis (as \(t\rightarrow\infty\)) and the asymptotic behavior of viscous Cahn-Hilliard models, we refer readers to [14, 15] and the references therein. Since the study of blow-up solutions for nonlinear parabolic equations with nonlocal sources by Levine in [16], many efforts have been devoted to the study of blow-up properties for nonlocal semilinear parabolic equations. The upper and lower bounds of the blow-up time, the blow-up rate estimate, the blow-up set, and the blow-up profile of the blow-up solutions for a variety of nonlocal semilinear parabolic equations with nonlocal source terms or nonlocal boundary conditions have been widely studied in the last few decades; we refer the readers to [17–29] and the references cited therein. Korpusov [30] considered a Sobolev type equation with a nonlocal source and obtained blow-up results under suitable conditions on the initial data and the nonlinear function. In [31], Bouziani studied the solvability of a nonlinear pseudo-parabolic equation with a nonlocal boundary condition. More results on the global well-posedness for nonlinear pseudo-parabolic equations with nonlocal source can be found in [1] and the references therein. Motivated by the above-mentioned works, we investigate the blow-up behavior of solutions of the initial boundary value problem for a fourth order nonlinear pseudo-parabolic equation with a nonlocal source (1.1). By using the concavity method, we prove a finite time blow-up result under suitable assumptions on the initial energy \(E(0)\). Preliminaries Theorem 2.1 Assume that \(p>0\) and \(u_{0}\in{H}_{0}^{2}(\Omega)\). Then there exists a \(T_{m}>0\) for which problem (1.1) has a unique local solution \(u\in {C}^{1}([0,T_{m});H_{0}^{2}(\Omega))\) satisfying for all \(v\in{H}_{0}^{2}(\Omega)\) and \(t\in[0,T_{m})\). Before stating our principal theorem, we note that the Fréchet derivative \(f_{u}\) of the nonlinear function \(f(u)=u^{p}(x,t)\int_{\Omega}K(x,y)u^{p+1}(y,t)\,dy \) is Clearly \(f_{u}\) is symmetric and bounded, so that the potential F exists and is given by Now, differentiating the identity (2.2) with respect to t, it follows that where we have used the symmetry of \(K(x,y)\). To obtain the blow-up result, we will introduce the energy function. We have Lemma 2.1 Let \(p>0\) and u be a solution of the problem (1.1). Then \(E(t)\) is a non-increasing function, that is, \(E'(t)\leq0\). Moreover, the following energy equality holds: Proof Multiplying (1.1) by \(u_{t}\) and integrating over Ω, we have Hence, from (2.3), we obtain and Integrating (2.5) from 0 to t, we find The proof of Lemma 2.1 is complete. □ Blow up of solutions Now, we will state the blow-up result for the solutions to the problem (1.1). Theorem 3.1 Assume that \(p>0\) and \(u_{0}\in{H}_{0}^{2}(\Omega)\). If \(u(x,t)\) is a solution of the problem (1.1) and the initial data \(u_{0}(x)\) satisfies then the solution of problem (1.1) blows up in finite time; that is, the maximum existence time \(T_{\max}\) of \(u(x,t)\) is finite and where \(\eta=\frac{\alpha}{m}\); \(m=(\frac{\alpha}{2}-1)\lambda_{1}\); \(2\leq\alpha\leq2p+2\); \(\lambda_{1}\) is the first eigenvalue of the operator −△ under homogeneous Dirichlet boundary conditions. Proof The proof makes use of the so-called ‘concavity method’.
Multiplying (1.1) by u and integrating over Ω, we have Hence We consider the following function: where \(\eta=\frac{\alpha}{m}\); \(m=(\frac{\alpha}{2}-1)\lambda_{1}\); \(2\leq\alpha\leq2p+2\); \(\lambda_{1}\) is the first eigenvalue of operator −△ under homogeneous Dirichlet boundary conditions. Due to the conditions (3.1), it follows that Multiplying (3.4) by \(e^{-2mt}\), we have From the last inequality above and (3.5), we obtain From what has been discussed above, we find Now we define Differentiating the identity (3.8) with respect to t, we deduce that so we have Using Schwarz’s inequality, we get and Thus, we obtain On the other hand, from (3.6), we know This implies Hence, for \(2<\beta<\alpha\) there exists a \(T_{\beta}\), such that for all \(t\geq{T}_{\beta}\) We consider the function \(G(t)^{-q}\) for \(0< q<\frac{\beta}{2}\), we see that Since a concave function must always lie below any tangent line, we see that \(G(t)^{-q}\) reaches 0 in finite time as \(t\rightarrow{T}^{-}\), where \(T>T_{\beta}\). This means or Then the desired assertion immediately follows. □ References 1. Al’shin, AB, Korpusov, MO, Siveshnikov, AG: Blow-up in Nonlinear Sobolev Type Equations. De Gruyter Series in Nonlinear Analysis and Applications, vol. 15. de Gruyter, Berlin (2011) 2. Barenblatt, GI, Bertsch, M, DalPasso, R, Ughi, M: A degenerate pseudoparabolic regularization of a nonlinear forward-backward heat equation arising in the theory of heat and mass exchange in stably stratified turbulent shear flow. SIAM J. Math. Anal. 24(6), 1414-1439 (1993) 3. Barenblatt, GI, Azorero, JG, Pablo, AD, Vazquez, JL: Mathematical model of the non-equilibrium water-oil displacement in porous strata. Appl. Anal. 65(1-2), 19-45 (1997) 4. Furter, J, Grinfeld, M: Local versus nonlocal interactions in population dynamics. J. Math. Biol. 27(1), 65-80 (1989) 5. Cantrell, RS, Consner, C: Diffusive logistic equations with indefinite weights: population models in disrupted environments. SIAM J. Math. Anal. 22(4), 1043-1064 (1991) 6. Padron, V: Effect of aggregation on population recovery modeled by a forward-backward pseudoparabolic equation. Trans. Am. Math. Soc. 356(7), 2739-2756 (2004) 7. Rosenau, P: Evolution and breaking of the ion-acoustic waves. Phys. Fluids 31(6), 1317-1319 (1988) 8. Kabanin, VA: Solution of mixed problem for fourth order equation. Differ. Uravn. 8(1), 54-61 (1972) 9. Bakiyevich, NI, Shadrin, GA: Cauchy problem for an equation in filtration theory. Sb. Trudov Mosgospedinstituta. 7, 47-63 (1978) 10. Bai, F, Elliott, CM, Gardiner, A, Spence, A, Stuart, AM: The viscous Cahn-Hilliard equation. I: computations. Nonlinearity 8(2), 131-160 (1995) 11. Elliott, CM, Stuart, AM: Viscous Cahn-Hilliard equation. II: analysis. J. Differ. Equ. 128(2), 387-414 (1996) 12. Zhao, HJ, Xuan, BJ: Existence and convergence of solutions for the generalized BBM-Burgers equations. Nonlinear Anal. TMA 28(11), 1835-1849 (1997) 13. Khudaverdiyev, KI, Farhadova, GM: On global existence for generalized solution of one-dimensional non-selfadjoint mixed problem for a class of fourth order semilinear pseudo-parabolic equations. Proc. Inst. Math. Mech. Natl. Acad. Sci. Azerb. 31, 119-134 (2009) 14. Grasselli, M, Petzeltova, H, Schimperna, G: Asymptotic behavior of a nonisothermal the viscous Cahn-Hilliard equation with inertial term. J. Differ. Equ. 239(1), 38-60 (2007) 15. Qu, CY, Cao, Y: Global existence of solutions for a viscous Cahn-Hilliard equation with gradient dependent potentials and sources. Proc. Indian Acad. Sci. 
Math. Sci. 123(4), 499-513 (2013) 16. Levine, HA: Some nonexistence and instability theorems for solutions of formally parabolic equation of the form \(Pu_{t}=-Au+\mathcal{F}(u)\). Arch. Ration. Mech. Anal. 51, 371-386 (1973) 17. Chadam, JM, Peirce, A, Yin, HM: The blow up property of solutions to some diffusion equations with localized nonlinear reactions. J. Math. Anal. Appl. 169(2), 313-328 (1992) 18. Budd, C, Dold, J, Stuart, A: Blow-up in a partial differential equation with conserved first integral. SIAM J. Appl. Math. 53(3), 718-742 (1993) 19. Wang, MX, Wang, YM: Properties of positive solutions for nonlocal reaction-diffusion problems. Math. Methods Appl. Sci. (Online) 19(4), 1141-1156 (1996) 20. Souplet, P: Blow-up in nonlocal reaction-diffusion equations. SIAM J. Math. Anal. 29(6), 1301-1334 (1998) 21. Souplet, P: Uniform blow-up profile and boundary behavior for diffusion equations with nonlocal nonlinear source. J. Differ. Equ. 153(2), 374-406 (1999) 22. Deng, WB, Liu, QL, Xie, CH: Blow up properties for a class of nonlinear degenerate diffusion equation with nonlocal source. Appl. Math. Mech. 24(11), 1362-1368 (2003) 23. Chen, YP, Liu, QL, Xie, CH: Blow-up for degenerate parabolic equations with nonlocal source. Proc. Am. Math. Soc. 132(1), 135-145 (2003) 24. Liu, QL, Chen, YP, Xie, CH: Blow-up for a degenerate parabolic equations with a nonlocal source. J. Math. Anal. Appl. 285(2), 487-505 (2003) 25. Song, JC: Lower bounds for blow-up time in a nonlocal reaction diffusion problem. Appl. Math. Lett. 24(5), 793-796 (2011) 26. Liu, DM, Mu, CL, Qiao, X: Lower bounds estimate for the blow-up time of a nonlinear nonlocal porous medium equation. Acta Math. Sci. 32(3), 1206-1212 (2012) 27. Liu, Y: Lower bounds for the blow-up time in a nonlocal reaction diffusion problem under nonlinear boundary conditions. Math. Comput. Model. 57(3-4), 926-931 (2013) 28. Fang, ZB, Yang, R, Chai, Y: Lower bounds estimate for the blow-up time for a slow diffusion equation with nonlocal source and inner absorption. Math. Probl. Eng. 2014, Article ID 764248 (2014) 29. Fang, ZB, Zhang, JY: Global and blow up solutions for the nonlinear p-Laplacian evolution equation with weighted nonlinear nonlocal boundary condition. J. Integral Equ. Appl. 26(2), 171-196 (2014) 30. Korpusov, MO, Sveshnikov, AG: Blow-up of solutions of a Sobolev-type equation with a nonlocal source. Sib. Mat. Zh. 46(3), 567-578 (2005) 31. Bouziani, A: Solvability of nonlinear pseudoparabolic equation with a nonlocal boundary condition. Nonlinear Anal. 55, 883-904 (2003) 32. Lions, JL: Quelques méthodes de résolutions des probléms aux limites non linéaires. Dunod, Paris (1969) 33. Escobedo, M, Herrero, M: A semilinear parabolic system in bounded domain. Ann. Mat. Pura Appl. 165, 315-336 (1993) Acknowledgements This work is supported by the NSF of China (11401122, 40890153), the Scientific Program (2008B080701042) of Guangdong Province. Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed to each part of this work equally and read and approved the final manuscript.
Even faster quadratic formula for the HP-41C 06-04-2014, 04:25 AM (This post was last modified: 06-04-2014 04:52 AM by Gerson W. Barbosa.) Post: #1 Even faster quadratic formula for the HP-41C This is yet another attempt at finding a shorter (but not necessary faster) quadratic formula program for the HP-41C, while preserving the stack register T (Thanks to Jeff Kearns - if it were not because of his interest, I wouldn't have taken a second look at this old thread) Code: 01 LBL 'Q x₂ is computed using this well-known property: \[x_{1}+x_{2}=-\frac{b}{a}\] Real roots only, but since the stack register T is preserved it will solve the first example for the HP-42S program here. I'd rather present it here first, before submitting it to the HP-41C Software Library, as any issue I might have overlooked would surely be pointed out soon. Thanks! Gerson. PS.: The title should read ...faster quadratic formula program... - as the formula is essentially the same, but I cannot edit it. 06-04-2014, 05:32 AM Post: #2 RE: Even faster quadratic formula program for the HP-41C Code: 00 { 22 Byte-Prgm } x₂ is computed using this well-known property: \[x_{1}\cdot x_{2}=\frac{c}{a}\] Cheers Thomas 06-04-2014, 05:59 AM (This post was last modified: 06-04-2014 06:25 AM by Gerson W. Barbosa.) Post: #3 RE: Even faster quadratic formula for the HP-41C (06-04-2014 05:32 AM)Thomas Klemm Wrote: Thomas, I fear there might be trouble when c = 0. Also, the HP-41 lacks recall arithmetic. Anyway, records exist to be broken. I won't be surprised if you or someone else comes up with a shorter (or a lower byte-count) HP-41 or 42S program. Cheers, Gerson. P.S.: On the HP-42S, replace line 10 with 10 + P.P.S.: This won't solve the case when both b and c are zero, however. 06-04-2014, 07:11 AM (This post was last modified: 06-04-2014 07:12 AM by Thomas Klemm.) Post: #4 RE: Even faster quadratic formula program for the HP-41C Quote:Also, the HP-41 lacks recall arithmetic.I tend to forget this. Quote:P.S.: On the HP-42S, replace line 10 withShould this solve the problem with the division by zero? In case of \(c=0\) this depends on the sign of \(b\). So it doesn't really matter whether we use + or -. I prefer to have \(x_1\) in register X. Cheers Thomas 06-04-2014, 07:33 AM Post: #5 RE: Even faster quadratic formula for the HP-41C The same program for a 34S is very very tiny - Pauli 06-04-2014, 08:52 AM Post: #6 RE: Even faster quadratic formula for the HP-41C 06-04-2014, 10:14 AM Post: #7 RE: Even faster quadratic formula for the HP-41C 06-04-2014, 10:22 AM Post: #8 RE: Even faster quadratic formula for the HP-41C 06-04-2014, 01:07 PM Post: #9 RE: Even faster quadratic formula for the HP-41C (06-04-2014 07:11 AM)Thomas Klemm Wrote: I re-learn this every time I come back to using my 41. Which I suppose means I never really learn it all... Very nice Gerson. Short AND sweet. --Bob Prosperi 06-08-2014, 02:55 AM Post: #10 RE: Even faster quadratic formula for the HP-41C It won't solve this one, however, unlike the HP-42S programs (mine and Thomas's). But what would CSLVQ (SSIZE8 mode only) be useful for anyway? Gerson. 06-08-2014, 03:00 AM Post: #11 RE: Even faster quadratic formula for the HP-41C (06-04-2014 08:52 AM)walter b Wrote: In another post, about two years ago, I said "nothing beats SLVQ on the WP 34S, however". Definite a killjoy, but very useful :-) (I remember having used SLVQ in a program once). Gerson. 06-08-2014, 03:41 AM (This post was last modified: 06-08-2014 03:41 AM by Gerson W. Barbosa.) 
Post: #12 RE: Even faster quadratic formula for the HP-41C Thanks, Bob! For the sake of documentation I should have mentioned that I have used the quadratic formula in this form, valid when a = -1: \[x_{1}= \frac{b}{2}+\sqrt{\left ( \frac{b}{2} \right )^{2}+c}\] Please see Allen's post in this thread from 2007. Regards, Gerson. 06-08-2014, 09:26 AM (This post was last modified: 06-08-2014 09:38 AM by Ángel Martin.) Post: #13 RE: Even faster quadratic formula for the HP-41C (06-08-2014 02:55 AM)Gerson W. Barbosa Wrote: It won't solve this one, however, unlike the HP-42S programs (mine and Thomas's). But what would CSLVQ (SSIZE8 mode only) be useful for anyway? ZQRT in the 41Z module does too ;-) 06-08-2014, 06:25 PM Post: #14 RE: Even faster quadratic formula for the HP-41C (06-08-2014 09:26 AM)Ángel Martin Wrote: (06-08-2014 02:55 AM)Gerson W. Barbosa Wrote: It won't solve this one, however, unlike the HP-42S programs (mine and Thomas's). But what would CSLVQ (SSIZE8 mode only) be useful for anyway? ...with no native complex number support, you might have said! Sorry for my ignorance! Gerson. 06-09-2014, 02:24 PM (This post was last modified: 06-09-2014 02:25 PM by Ángel Martin.) Post: #15 RE: Even faster quadratic formula for the HP-41C Yes, quadratic and cubic equations with complex coefficients and, obviously, complex results. Interestingly, I took the opportunity to review the code in the 41Z - finding a typo (a.k.a. bug) in the Spherical Hankel functions (not in the Bessel functions, those were ok). It appears that, like rust, bugs never sleep!
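For readers without a vintage HP at hand, here is a minimal Python sketch of the root-product idea ($x_1\cdot x_2 = c/a$) discussed in this thread, used in the standard way to avoid cancellation; it is not a translation of the calculator programs above and, like Gerson's HP-41 version, it handles real roots only.

```python
# Quadratic solver using the root-product trick x1 * x2 = c / a:
# compute the "safe" root first (no cancellation), then obtain the other from c / q.
# Real roots only; raises on a = 0 or a negative discriminant.
import math

def solve_quadratic(a, b, c):
    if a == 0:
        raise ValueError("not a quadratic")
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex roots (not handled here)")
    # q has the same sign as b, so b + sign(b)*sqrt(disc) never cancels
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    x1 = q / a
    x2 = c / q if q != 0 else 0.0   # q == 0 only when b == 0 and c == 0
    return x1, x2

print(solve_quadratic(1, -1e8, 1))   # widely separated roots, no loss of precision
print(solve_quadratic(1, -3, 2))     # (2.0, 1.0)
```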
Let's say that a "complete resolution of GCH" is a definable class function $F: \operatorname{Ord}\longrightarrow \operatorname{ Ord}$ such that $2^{\aleph_\alpha} = \aleph_{F(\alpha)}$ for all ordinals $\alpha$. It is known of course that $F(\alpha) = \alpha+1$ is a complete resolution of GCH (in the positive) that is relatively consistent with ZFC. I read that it's an unpublished theorem of Woodin that $F(\alpha) = \alpha+2$ is a complete resolution of GCH that is relatively consistent with ZFC plus some large cardinal hypothesis. My questions are: (1) What's the weakest known complete resolution of GCH in consistency strength other than $F(\alpha) = \alpha+1$ and what large cardinal axiom is required for it? (2) What are some other complete resolutions of GCH that are known to be consistent relative to specific large cardinal hypotheses, what are their respective large cardinal hypotheses, and how do these consistency strengths relate to one another? One candidate answer scheme might be the following: if $F$ is any (sufficiently absolute) definable function on the class of regular alephs such that $\kappa < \lambda \Rightarrow F(\kappa) \leq F(\lambda)$ and $\operatorname{cf}(F(\kappa)) > \kappa$, then ZFC + $(\forall \kappa = \operatorname{cf}(\kappa))(2^\kappa = F(\kappa))$ + SCH is consistent, where SCH is the Singular Cardinals Hypothesis or, in an equivalent form, the Gimel Hypothesis, due to Solovay, asserting $(\forall \kappa > \operatorname{cf}(\kappa))( \kappa^{\operatorname{cf}(\kappa)} = \max(2^{\operatorname{cf}(\kappa)}, \kappa^+))$, and no large cardinals are required. Knowledge of the gimel function $\gimel(\kappa) = \kappa^{\operatorname{cf}(\kappa)}$ suffices to determine cardinal exponentiation recursively (for example, see P. Komjath, V. Totik, ( Problems and Theorems in Classical Set Theory): chapter 10, problem 26, sets this out). So it is natural to explore the gimel function in greater depth. Writing a singular $\kappa$ as the limit of an increasing sequence $a$ of smaller regular cardinals leads to the observation that the deeper problem concerns the cofinality $\operatorname{cf}(([\kappa]^{\leq \lambda}, \subseteq))$ of the partial order $([\kappa]^{\leq \lambda}, \subseteq)$ for regular $\lambda < \kappa$. In this direction, one comes eventually to pcf theory, which offers an analysis of the puppet master $\operatorname{pcf}(a)$ rather than his troupe of erratic marionettes $\langle 2^\lambda : \lambda \in Card \rangle$. $\newcommand\Ord{\text{Ord}}$Easton's theorem allows us to control the continuum function on the infinite regular cardinals, and in particular, on the infinite successor cardinals, in a very flexible manner, without using any large cardinals. For example, we can have $F(\alpha+1)=\alpha+5$ for all ordinals $\alpha$, with $F(\lambda)=\lambda+1$ for all limit ordinals, and many other possibilities. There are a huge number of possibilities. Let me add more examples: If we consider the global behavior of the power function, then we have for example: (A) (Foreman-Woodin): $F$ can be such that $F(\alpha)>\alpha+\omega,$ all $\alpha$ (modulo a supercompact and infinitely many inaccessibles above it). Note that by a result of Patai, there is no $\beta>\omega$ such that $F(\alpha)=\alpha+\beta,$ all $\alpha$. Remark. In the above model, $F$ is not definable from the ground model, but we can go to intermediate submodel in which $F$ is definable. 
(B) (Cummings): $F$ can be such that $F(\alpha)=\alpha+1,$ all successor $\alpha,$ and $F(\alpha)=\alpha+2,$ all limit $\alpha$ (modulo a $\kappa+3$-strong cardinal $\kappa$. By work of Gitik-Mitchell, we need more than a $\kappa+2$-strong cardinal $\kappa$). (C) (Merimovich): Let $2\leq n < \omega.$ Then $F$ can be taken to be $F(\alpha)=\alpha+n,$ all $\alpha$ (modulo a $\kappa+n+1$-strong cardinal $\kappa$. By work of Gitik-Mitchell, we need more than a $\kappa+n$-strong cardinal $\kappa$). (D) (Friedman-G): We can have (B) or (C) just by adding a single real to a model satisfying $GCH$. More precisely, the final model can be of the form $V[R],$ where $V\models GCH$ and $R$ is a real. If we consider the local behavior of the power function, then we can say more: (E) (Gitik-Merimovich): Let $2\leq m <\omega,$ and let $\phi: \omega\to \omega$ be such that $\phi$ is increasing and $\phi(n)>n,$ for all $n$. Then we can have $F(n)=\phi(n)$ and $F(\omega)=\omega+m$ (modulo a $\kappa+m$-strong cardinal $\kappa$). (F) (Gitik): We can have $F$ defined on $\omega_1$ such that both sets $\{ \alpha<\omega_1: F(\alpha)=\alpha+2\}$ and $\{ \alpha<\omega_1: F(\alpha)=\alpha+3\}$ are stationary in $\omega_1$ (modulo suitable large cardinals. Some similar results are also proved by Gitik-Merimovich). If we avoid choice, then an Easton-like theorem is valid for all cardinals: Let $\theta(\kappa)=\sup\{\nu:$ there exists a surjection $f: \mathcal{P}(\kappa)\to \nu \}.$ It is easily seen that $\theta(\kappa)>\kappa^+$, that it is a cardinal, and that $\theta$ is increasing. The next theorem shows that these are the only restrictions that $ZF$ imposes on $\theta(\kappa)$: (G) (Fernengel-Koepke, based on an earlier result of Gitik-Koepke) Let $M$ be a ground model of $ZFC + GCH\,+$ Global Choice. In $M$, let $F$ be a function defined on the class of infinite cardinals such that i. $F(\kappa)$ is a cardinal $> \kappa^+$; ii. $\kappa < \lambda$ implies $F(\kappa)\leq F(\lambda)$. Then there is an extension $N$ of $M$ which satisfies $ZF$, preserves cardinals and cofinalities, and such that $\theta(\kappa) = F(\kappa)$ holds for all cardinals in $N$.
Bernoulli Volume 8, Number 5 (2002), 627-642. Irregular sets and central limit theorems Abstract In previous papers we have studied the asymptotic behaviour of $S_N(A;X)=(2N+1)^{-d/2}\sum_{n \in A_N}X_n$, where $X$ is a centred, stationary and weakly dependent random field, and $A_N=A \cap [-N,N]^d$, $A \subset \mathbb{Z}^d$. This leads to the definition of asymptotically measurable sets, which enjoy the property that $S_N(A;X)$ has a (Gaussian) weak limit for any $X$ belonging to a certain class. We present here an application of this technique. Consider a regression model $X_n=\varphi(\xi_n,Y_n)$, $n \in \mathbb{Z}^d$, where $X_n$ is centred, $\varphi$ satisfies certain regularity conditions, and $\xi$ and $Y$ are independent random fields; for any $m \in \mathbb{N}$ and $(y_1,\ldots,y_m)$, the central limit theorem holds for $(\varphi(\xi,y_1),\ldots,\varphi(\xi,y_m))$, but $Y$ satisfies only the strong law of large numbers as it applies to $(Y_m,Y_{m-n})_{m \in \mathbb{Z}^d}$, for any $n \in \mathbb{Z}^d$. Under these conditions, it is shown that the central limit theorem holds for $X$. Article information Source Bernoulli, Volume 8, Number 5 (2002), 627-642. Dates First available in Project Euclid: 4 March 2004 Permanent link to this document https://projecteuclid.org/euclid.bj/1078435221 Mathematical Reviews number (MathSciNet) MR2003h:60038 Zentralblatt MATH identifier 1014.60016 Citation Perera, Gonzalo. Irregular sets and central limit theorems. Bernoulli 8 (2002), no. 5, 627--642. https://projecteuclid.org/euclid.bj/1078435221
Huge cardinal

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1]

Definitions

Their formulation is similar to that of superstrong cardinals; more precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.

Elementary embedding definitions

In each case below, the requirement is that there exists a nontrivial elementary embedding $j:V\to M$ into a transitive class $M$ with critical point $\kappa$ satisfying the stated conditions.

$\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).

$\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).

$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.

$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large).

$\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$.

$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively.

Ultrahuge cardinals

A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants.

Ultrafilter definition

The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$.
A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2<\dots<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n\;(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness), then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$; that is, it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$: all members of the $\lambda_k$ sequence are.

As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that the set of those $x\subseteq\lambda$ of order-type exactly $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."

Consistency strength and size

Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when, for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, these lie as follows (top being weakest):

measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong

All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 cardinal is preceded by a stationary set of n-huge cardinals, for all n. [1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1]; in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals. In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist).
[1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge.

While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$, so $V_{\lambda_n}=H_{\lambda_n}$; and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$, and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\lambda_n$-supercompact.

$\omega$-Huge cardinals

A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$, where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to satisfy $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings.

References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009 (paperback reprint of the 2003 edition).
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
The Ratio Test

Let $\sum a_n$ be a series, and let $L=\lim_{n\to\infty}\frac{\left|a_{n+1}\right|}{\left|a_n\right|}$ (assuming the limit exists). The Ratio Test says: if $L<1$, the series converges absolutely; if $L>1$ (or $L=\infty$), the series diverges; if $L=1$, the test gives no information. Notice that the Ratio Test considers the ratio of the absolute values of the terms. As you might expect, the Ratio Test thus gives us information about whether the series $\sum a_n$ converges absolutely. Warning: There are examples with $L=1$ that converge absolutely, examples that converge conditionally, and examples that diverge. DO: Apply the Ratio Test to 1) the absolutely convergent series $\sum\frac{1}{n^2}$, 2) the conditionally convergent series $\sum-\frac{1}{n}$, and 3) the divergent series $\sum\frac{1}{n}$. 1) As $n\to\infty$, $\displaystyle\frac{\left|a_{n+1}\right|}{\left|a_n\right|}=\frac{\frac{1}{(n+1)^2}}{\frac{1}{n^2}}=\frac{n^2}{(n+1)^2}\longrightarrow 1$. 2) As $n\to\infty$, $\displaystyle\frac{\left|a_{n+1}\right|}{\left|a_n\right|}=\frac{\frac{1}{n+1}}{\frac{1}{n}}=\frac{n}{n+1}\longrightarrow 1$. 3) As $n\to\infty$, $\displaystyle\frac{\left|a_{n+1}\right|}{\left|a_n\right|}=\frac{\frac{1}{n+1}}{\frac{1}{n}}=\frac{n}{n+1}\longrightarrow 1$. We used other tests to determine the convergence/divergence of these series - the Ratio Test fails to help us with these series. DO: The only way a series could be conditionally convergent is if the Ratio Test fails for that series. Why? Review of simplification: As you work through this module, you must be able to work with ratios of factorials as well as ratios of powers. Recall that $n!=1\cdot 2\cdot 3\cdots(n-1)\cdot n$. DO: Simplify $\frac{(n+1)!}{n!}$. $\frac{(n+1)!}{n!}=\frac{1\cdot 2\cdot 3\cdots(n-1)\cdot n\cdot (n+1)}{1\cdot 2\cdot 3\cdots(n-1)\cdot n}=n+1$ after all the cancellation. DO: Simplify $\displaystyle\frac{\frac{50^{n+1}}{(n+1)!}}{\frac{50^n}{n!}}$. $\displaystyle\frac{\frac{50^{n+1}}{(n+1)!}}{\frac{50^n}{n!}}=\frac{50^{n+1}}{(n+1)!}\cdot\frac{n!}{50^n}=\frac{50^{n+1}}{50^n}\cdot\frac{n!}{(n+1)!}=50\cdot\frac{1}{n+1}=\frac{50}{n+1}$. A couple of worked out examples of the Ratio Test are contained in the video, as well as the ideas of why the Ratio Test works.
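As a quick check of the last simplification above, here is a small script (my own illustration) that evaluates the ratio $|a_{n+1}/a_n|$ for $a_n=50^n/n!$ and compares it with the exact value $50/(n+1)$; since the ratio tends to $0<1$, the Ratio Test says the series converges absolutely.

```python
# Ratio Test illustration for a_n = 50^n / n!.
# The exact ratio simplifies to 50 / (n + 1), so it tends to 0 < 1:
# the series converges absolutely (in fact it sums to e^50).
from math import factorial

def a(n):
    return 50**n / factorial(n)

for n in [1, 10, 49, 50, 100, 500]:
    ratio = a(n + 1) / a(n)
    print(f"n = {n:4d}   |a_(n+1)/a_n| = {ratio:.6f}   (exact: 50/(n+1) = {50/(n+1):.6f})")
```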
Materials: Pencil; eraser; blank A4 sheet of paper; and a ruler (also to use as a straightedge). Puzzle: Draw a square with a side length of $a=16$ cm. Then, draw eight circles inside. The circles must be $2$ cm in diameter; they must all be the same size; they must not touch each other; the distances between them must all be different (i.e. their positions must be randomised); and they must not touch the perimeter of the square. $$\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc\quad\bigcirc$$ Now, draw two points on each side of the square. Begin with the top side, and from left to right, make points $A_1$ and $B_1$. The distance between these two points is arbitrary, but neither of them can be on a corner of the square. Now, rotate the square $90^\circ$ anti-clockwise. You will have a new top side, where you can put points $A_2$ and $B_2$ from left to right. Their distance apart is also arbitrary, neither point can be on a corner of the square, and the distance between them cannot be the same as the distance between any previously placed pair of points. Continue with this method to make points $A_3$ and $B_3$, and then $A_4$ and $B_4$, making sure that the distance between the two points on each side is unique. Now, connect the points with lines in the following fashion: $$A_1\to A_2\to A_3\to A_4\to B_1\to B_2\to B_3\to B_4\to A_1$$ But ensure that the lines do not touch or intersect any of the circles! Note: You might have to place your points carefully. Aim: $$\verb|Ensure the lines do not touch nor intersect any of the circles!|$$ Edit: Removed Part 2 of the puzzle, as it is much too difficult and incorrect (or perhaps impossible) for very specific placements of the circles. Thus, I am adding a bonus. Bonus: What is the minimum value of $a$ for the specific configuration you have chosen (i.e. for how you have randomly positioned the circles)? Note that you cannot move the circles to different positions after plotting them. I used the tag connections-puzzle because you have to connect points with lines.
Example from Wikipedia: Suppose that in a particular geographic region, the mean and standard deviation of scores on a reading test are 100 points and 12 points, respectively. Our interest is in the scores of 55 students in a particular school who received a mean score of 96. We can ask whether this mean score is significantly lower than the regional mean—that is, are the students in this school comparable to a simple random sample of 55 students from the region as a whole, or are their scores surprisingly low? They then use the population sigma as the standard deviation: $\mathrm{SE} = \frac{\sigma}{\sqrt n} = \frac{12}{\sqrt{55}} = \frac{12}{7.42} = 1.62 \,\!$ This particular region may have a higher variance in scores (for example, if half of the students are from poor families and have to work instead of studying). Why do they (and other textbooks) assume here that the variance is equal to the population variance? Yes, I know that we should use a one-sample t-test, and I am OK with calculating the variance using the true known mean under $H_0$, but the question is about the Z-test, which is usually taught before the t-test. UPD Probable answer: It seems that if you have a population with $SD = x$ and you subsample a group with $SD \ne x$, then your population is not normally distributed; rather, you have a mixture of normal distributions (even if the means are the same). Correct me if I am wrong. Another problem: perhaps we do not care whether the population is normally distributed; we care that the subsample is normally distributed (if we want to make the Z-test stronger). But that is another story and has nothing in common with the assumptions of the original Z-test... UPD: Wrong again. Assume we subsample the people who did not finish school and have low scores on the reading test:

scores <- rnorm(10000, mean = 100, sd = 12)   # simulate the regional population
region_scores <- scores[scores < 90]          # truncated subsample

The distribution of region_scores is not normal any more. We cannot apply the Z-test. It seems that we can never apply the Z-test. Or, if we satisfy the test's assumptions and sample from the population randomly, we will have nothing interesting to test: our subsample will deviate in mean only randomly, and no true effect will arise. The test's assumptions are described here. Why do they want to estimate the deviation in means with random sampling from a normal population? There will be no true difference in means! I'm totally confused.
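(Not part of the original question, but as a quick numeric companion to the Wikipedia example above, here is a minimal Python sketch of the standard error and the one-sided z-test; the numbers are the ones quoted above, and the use of scipy for the p-value is my assumption.)

import math
from scipy import stats  # assumed available; only needed for the p-value

mu, sigma, n, xbar = 100, 12, 55, 96
se = sigma / math.sqrt(n)        # 12 / sqrt(55), approximately 1.62
z = (xbar - mu) / se             # approximately -2.47
p = stats.norm.cdf(z)            # one-sided p-value, approximately 0.007 ("is the school mean significantly lower?")
print(se, z, p)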
Strategy to Test Series and a Review of Tests

Notation: In this section, we will often use the following series notations: $\displaystyle\sum_{n}^\infty a_n=\sum_n a_n=\sum a_n$. All of these notations indicate that the index is $n$, but we aren't declaring where $n$ begins ($n=0$ or $n=1$ or $n=5$, etc.). As with techniques of integration, it is important to recognize the form of a series in order to decide your next steps. Although there are no hard-and-fast rules, running down the following steps (in order) may be helpful. Consider the series $\displaystyle\sum_{n}^\infty a_n$. Divergence Test: If $\displaystyle\lim_{n \to \infty} a_n \ne 0$ (or the limit does not exist), then $\displaystyle\sum_n a_n$ diverges. Integral Test: If $a_n = f(n)$, where $f(x)$ is a non-negative, non-increasing function, then $\displaystyle\sum_{n}^\infty a_n$ converges if and only if the integral $\displaystyle\int_1^\infty f(x) \,dx$ converges. Comparison Test: This applies only to positive-term series. If $a_n \le b_n$ and $\sum b_n$ converges, then $\sum a_n$ converges; if $a_n \ge b_n$ and $\sum b_n$ diverges, then $\sum a_n$ diverges. Limit Comparison Test: If $\sum a_n$ and $\sum b_n$ are positive-term series, and $\displaystyle\lim_{n \to \infty} \frac{a_n}{b_n} = L$ with $0<L<\infty$, then either $\sum a_n$ and $\sum b_n$ both converge or both diverge. Alternating Series Test: When our series is alternating, so that $\displaystyle\sum_n^\infty a_n=\sum_n^\infty(-1)^nb_n$ with $b_n>0$: if $b_{n+1} \le b_n$ and $\displaystyle\lim_{n \to \infty}b_n = 0$, then the series converges. Ratio Test: Let $L= \displaystyle{\lim_{n\to\infty} \frac{|a_{n+1}|}{|a_n|}}$. If $L < 1$, then $\sum a_n$ converges absolutely; if $L>1$ (or the limit is infinite), then $\sum a_n$ diverges; if $L=1$, the test is inconclusive. Root Test: Let $L = \displaystyle\lim_{n \to \infty}\sqrt[n]{|a_n|}$. If $L<1$, then $\sum a_n$ converges absolutely; if $L>1$, then $\sum a_n$ diverges; if $L=1$, the test is inconclusive.
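(Not part of the original review, but as an illustration of the first two steps of the strategy, here is a minimal Python sketch using sympy; the series $\sum 1/n^2$ is just an example and the names are mine.)

import sympy as sp

n, x = sp.symbols('n x', positive=True)
print(sp.limit(1 / n**2, n, sp.oo))             # expected: 0, so the Divergence Test is inconclusive
print(sp.integrate(1 / x**2, (x, 1, sp.oo)))    # expected: 1 (finite), so by the Integral Test the series converges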
It is a very nice question! The answer is yes: natural instances of the $\Delta$-system property, which hold under GCH, are in fact equivalent to the GCH. Theorem. $\Delta(\omega_2,\omega_1)$ is equivalent to CH. Proof: You've pointed out that CH implies the principle, since the hypothesis you mention for this case amounts to $\omega_1^{\lt\omega_1}<\omega_2$, which amounts to CH. So let us consider what happens when CH fails. Let $T=2^{\lt\omega}$ be the tree of all finite binary sequences, and label the nodes of $T$ with distinct natural numbers. Let $F$ be the family of subsets of $\omega$ arising as the sets of labels occurring on any of $\omega_2$ many branches through $T$. Thus, $F$ has size $\omega_2$, and any two elements of $F$ have finite intersection. I claim that this family of sets can have no $\Delta$-system of size $\omega_2$, and indeed, it can have no $\Delta$-system even with three elements. If $r$ is the root of $a$, $b$ and $c$ in $F$, then $r=a\cap b=a\cap c$, and so $a$ and $b$ branch out at the same node that $a$ and $c$ do, in which case $b$ and $c$ must agree one step longer, so $b\cap c\neq r$. QED The same idea works for higher cardinals as follows: Theorem. For any infinite cardinal $\delta$, the principle $\Delta(\delta^{++},\delta^+)$ is equivalent to $2^\delta=\delta^+$. Proof. If $2^\delta=\delta^+$, then your criterion, which amounts to $(\delta^+)^{\lt\delta^+}<\delta^{++}$, is fulfilled, and so the $\Delta$ property holds. Conversely, consider the tree $T=2^{\lt\delta}$, the binary sequences of length less than $\delta$. Let $F$ be a family of $\delta^{++}$ many branches through $T$, regarding each branch $b$ as a subset of $T$, namely the set of its initial segments. Each such branch has size $\delta$, since the tree has height $\delta$. But for the same reason as before, there can be no $\Delta$-system with even three elements, since the tree is merely binary branching, and so three distinct branches cannot have a common root. This contradicts $\Delta(\delta^{++},\delta^+)$, as desired. QED Corollary. The full GCH is equivalent to the assertion that $\Delta(\delta^{++},\delta^+)$ holds for every infinite cardinal $\delta$. Update. The same idea shows that the hypothesis you mention is optimal: one can reverse the lemma from the conclusion to the hypothesis. Theorem. The following are equivalent, for regular $\kappa$ and $\mu\lt\kappa$: 1. $\Delta(\kappa,\mu)$ 2. $\lambda^{\lt\mu}\lt\kappa$ for every $\lambda\lt\kappa$. Proof. You mentioned that 2 implies 1, and this is how one usually sees the $\Delta$-system lemma stated. For the converse, suppose that $\lambda^{\lt\mu}\geq\kappa$ for some $\lambda\lt\kappa$. Since $\kappa$ is regular and $\mu\lt\kappa$, this implies $\lambda^\eta\geq\kappa$ for some $\eta\lt\mu$. Let $T$ be the $\lambda$-branching tree $\lambda^{\lt\eta}$, which has height $\eta$. Let $F$ be a family of $\kappa$ many branches through this tree, where we think of a branch as the set of nodes in the tree that lie on it, a maximal linearly ordered subset of the tree $T$. Each such branch is a set of size $\eta$. I claim that this family has no subfamily that is a $\Delta$-system of size $\lambda^+$. The reason is that because the tree is $\lambda$-branching, if we have $\lambda^+$ many branches with a common root, then at least two of them must extend that root to the next level in the same way, a contradiction to it being a root. Thus, the failure of 2 implies the failure of 1, as desired. QED
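(Not part of the original answer, but the binary-branching obstruction in the first proof is finitely checkable. Here is a minimal Python sketch, with a truncation depth and names of my own choosing, verifying that among all triples of distinct binary strings of a fixed length, the sets of initial segments never form a three-element $\Delta$-system.)

from itertools import product, combinations

def prefixes(s):
    """The branch determined by s, viewed as its set of initial segments."""
    return {s[:i] for i in range(len(s) + 1)}

depth = 6
branches = [''.join(bits) for bits in product('01', repeat=depth)]
for a, b, c in combinations(branches, 3):
    A, B, C = prefixes(a), prefixes(b), prefixes(c)
    # a Delta-system would require all pairwise intersections (sets of common prefixes) to coincide
    assert not (A & B == A & C == B & C), (a, b, c)
print("no 3-element Delta-system among", len(branches), "branches")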
Insightful commentary on international conflict and the scholarly study thereof. Or laughably poor analysis of matters that deserve to be taken more seriously. Your call. Saturday, June 23, 2012 Measuring Military Capabilities There have been some interesting discussions about how to measure the position of the United States relative to China in the past few months (see here for an earlier take). One point that has been made a few times, particularly by Beckley, is that we put too much weight on the sheer size of a country. If you took a middling power and added 50 million or so desperately poor, illiterate, starving people, both their GDP and CINC scores (available here, under the National Material Capabilities page of Available Data Sets) would increase dramatically. Yet none of us really believes that such a nation would grow appreciably stronger as a result. It's high time someone proposed a measure that is immune to this criticism. I doubt it's perfect (in fact, I'm sure it's not), but I'd like to propose such a measure. It focuses exclusively on military components, and so I'm imaginatively calling it M.

Construction This is a little messy. Math-phobes are going to want to skip this section. But for those who are interested in the details, here's how I constructed my measure. Formally,\begin{align*}\mbox{M}_{i,t}= \ln(\mbox{milper}_{i,t})\displaystyle\left(\frac{\ln(\mbox{qual}_{i,t})}{\delta_t}\right),\end{align*}where \(\mbox{milper}_{i,t}\) is country \(i\)'s total military personnel in year \(t\), \(\mbox{qual}_{i,t}\) is \(i\)'s quality ratio (military expenditures per personnel) in year \(t\), and \(\delta_t\) is a time-varying discount factor that I constructed to adjust for changes in military technology. Specifically,\begin{align*}\delta_t = 2.2^{\left((\mbox{year}-1700)/100\right)},\end{align*}which ensures that \(\delta_t\) takes on a value fairly close to the average quality ratio among the major powers in any given year, without exhibiting the fluctuations found in the actual average. My goal was to account for the size of a military as well as its sophistication. I also wanted a measure that didn't correlate so highly with time (as GDP does) nor require that the total of capabilities in the international system always sum to 1 (as CINC does). There are various other technical concerns I had in mind that I'm glad to discuss in the comments if anyone is interested. But enough technical details. Let's look at some colorful graphs.

Validation Here's a graph of the CINC scores for the United States, United Kingdom, Russia/Soviet Union, and China from 1945 to 2007 (the last year with available data). And here's M for the same states over the same time period. I don't know about you, but it looks to me like M has more face validity. According to CINC, China has already surpassed the United States. For all the debate between "alarmists" or "declinists" on the one hand, and "Pollyannas" on the other, no one seems to believe that the US has already ceased to be the dominant military power in the international system. We don't get that from M. CINC also tells us that the UK was a second-rate power even in 1946, whereas M has the UK looking stronger even than the US in 1940 (not shown), soon to be eclipsed by their Atlantic ally during WWII, yet still quite formidable in 1946, though quickly dropping off from there.
Finally, M makes the Soviet Union look like a real challenger to the US during the Cold War, whereas CINC suggests that the Soviet Union was far weaker than the US during the early decades of the Cold War. YMMV, but all of those things make me think that M does a better job of capturing the military might of these four powers. That said, there are some patterns in M that stand out. China sees a big slump before the end of the Cold War and a smaller but still considerable one in the past few years. Do these reflect real changes in relative military capabilities, or are they signs that something is wrong with the measure? I don't know enough about China to say. Let's take a look at the European powers from 1816 to 1910. Compared to CINC, M paints a picture of a much more evenly balanced continental system, particularly after 1850 or 1860. The CINC score points to a century of fairly pronounced British dominance that only begins to break down at the very end of the century. I think there are those who would argue that CINC gets it right here, but I don't know. Do we really believe that Germany was basically a nobody until 1890 or so? That Austria was never even one of the top 3 European powers? That France was never remotely close to as strong as the UK? If anyone is more familiar than I am with 19th-century Europe, please feel free to chime in, but my sense is that M actually looks pretty good here too. At any rate, this is all very preliminary, and only speaks to face validity. I'm excited enough about this measure that I just might decide to try to validate it in other ways. If it outperforms CINC in explaining patterns of international conflict, in terms of both onset and outcome, I may start relying on it in my academic work. Maybe. For now, it's mostly just food for thought.

What M Is Not Power is multi-faceted. It's also typically defined in a manner that's damn near tautological ("the ability to get others to do what you want them to do when they would otherwise not be inclined to do so"). I was careful above not to refer to M as a measure of "power", because I'm not at all sure that it is one. No more so, at any rate, than CINC is. It's a measure of military capabilities (as opposed to a broad basket of material capabilities that includes demographic and industrial components). One might argue that a measure based solely on nuclear or naval power might be more meaningful. Or one might argue that any debate about whether the future belongs to China should reflect measures of soft power and the growing dominance of the English language and American culture; the centrality of the United States in the global economy and the role of the dollar as the international system's reserve currency; or various other concerns. I have no principled argument to offer against such claims. All I set out to do was construct a measure of military might that would draw upon the same publicly available data that scholars of international relations are used to using but might be considered superior to CINC in certain respects. And I think I have done that...though if you ask me again in a month or so, my opinion may have changed.
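(Not part of the original post, but for readers who want to play with the construction above: a minimal Python sketch of the formula for M. The pandas column names milper, milex, and year are my own assumptions about how one might hold Correlates of War-style data, and the toy numbers are made up purely for illustration.)

import numpy as np
import pandas as pd

def add_M(df):
    """Append M = ln(milper) * (ln(qual) / delta_t) to a country-year table.

    Assumed columns: 'milper' (total military personnel), 'milex' (military
    expenditures), 'year'. qual is expenditures per soldier; delta_t is the
    2.2^((year - 1700)/100) technology discount described in the post.
    """
    out = df.copy()
    qual = out['milex'] / out['milper']
    delta = 2.2 ** ((out['year'] - 1700) / 100.0)
    out['M'] = np.log(out['milper']) * (np.log(qual) / delta)
    return out

# made-up toy rows, for illustration only
toy = pd.DataFrame({'year': [1946, 2007], 'milper': [3_030_000, 1_400_000], 'milex': [4.5e10, 5.5e11]})
print(add_M(toy))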
In this post I will compute, under some simple assumptions, how much time a given amount of fluid (e.g. water) takes to go through a funnel. The result presented here is by no means a general formula but will be a good estimate in certain regimes (namely, if the stem radius is much smaller than the mouth radius and if the stem is short compared to the funnel height). The funnel is shaped like an "upside down" cone whose tip has been cut (see figure 1). It is initially completely filled with fluid. The volume of the neck itself is assumed to be negligible with respect to the total volume of the funnel. The fluid is assumed to be incompressible and the flow is assumed to be inviscid (meaning we treat the fluid as having zero viscosity). Fig. 1: A funnel with conical shape. The funnel has mouth radius $b$ and stem radius $a$. The initial fluid height is $h_0$, which is the height of the funnel itself. At time $t$, the height of the fluid is $h(t)$ and the radius of the fluid's surface is $r(t)$. It will be assumed here that the top surface of the fluid moves down very slowly. This assumption will be good as long as $h(t)$ is not too small, but will break down as $h(t) \rightarrow 0$. With this assumption, the problem becomes similar to the "orifice in a tank" problem, meaning we can treat the flow as steady. The advantage this brings is that we can use Bernoulli's equation. To be more precise, consider points $A$ and $B$ in figure 1. We can assume both points are on the same flow streamline (the dashed line), meaning: $$ \displaystyle\frac{v_A^2(t)}{2} + \frac{p_A}{\rho} + gh_A(t) = \frac{v_B^2(t)}{2} + \frac{p_B}{\rho} + gh_B(t) \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_bern} $$ where $\rho$ is the density of the fluid, $v_A(t)$ is the fluid speed at $A$, $p_A$ is the pressure at $A$ and $h_A(t)$ is the height of point $A$ (all quantities referring to $B$ are defined analogously). Since both points $A$ and $B$ are in direct contact with the surrounding air, we have $p_A = p_B = p_0$, where $p_0$ is the atmospheric pressure. Also, setting $h_B(t) = 0$ gives us $h_A(t) = h(t)$. Finally, since we assume the surface of the fluid moves down slowly, we can take $v_A(t) \approx 0$. Equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_bern} then becomes: $$ \frac{p_0}{\rho} + gh(t) = \frac{v_B^2(t)}{2} + \frac{p_0}{\rho} $$ and therefore: $$ v_B(t) = \sqrt{2gh(t)} \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_vB} $$ so the fluid comes out of the neck's orifice with speed $v_B(t) = v(t) = \sqrt{2gh(t)}$. Before we proceed, let's obtain a relation between $r(t)$ and $h(t)$. Since we assume $a$ to be small ($a \ll r(t)$ and $a \ll b$), the cross section of the cone shown in figure 1 is close to a triangle, so: $$ \displaystyle\frac{r(t)}{h(t)} = \frac{b}{h_0} \Longrightarrow r(t) = h(t)\frac{b}{h_0} $$ The total volume of fluid contained in the funnel at time $t$ is given by (as a reminder, the volume of the funnel's neck is assumed to be negligible): $$ V(t) = \displaystyle\frac{1}{3}\pi r^2(t) h(t) = \frac{\pi}{3}\left(\frac{b}{h_0}\right)^2h^3(t) \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_volume} $$ This approximation will be good as long as $r(t) \gg a$, meaning the volume of the "removed tip" of the cone is negligible with respect to the fluid volume $V(t)$. As $h(t) \rightarrow 0$, $r(t) \rightarrow a$, so this approximation becomes poor, but by that time the small amount of fluid left should not be enough to spoil the time estimate.
From equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_vB}, the volumetric rate at which the fluid goes through the orifice of the funnel is equal to: $$ \Phi(t) = A v(t) = (\pi a^2) v(t) = \pi a^2 \sqrt{2gh(t)} \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_flow_rate} $$ where $A = \pi a^2$ is the cross-sectional area of the funnel's stem. Since this flow rate is the rate at which the fluid leaves the funnel, we have (below the notation $\dot{q}$ is used to represent $dq/dt$): $$ \dot{V}(t) = -\Phi(t) \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_v_phi} $$ But from equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_volume} we have: $$ \dot{V}(t) = \frac{\pi}{3}\left(\frac{b}{h_0}\right)^2 3 h^2(t) \dot{h}(t) = \pi\left(\frac{b}{h_0}\right)^2 h^2(t) \dot{h}(t) $$ Inserting this into equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_v_phi} and using equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_flow_rate}, we get: $$ \pi\left(\frac{b}{h_0}\right)^2 h^2(t) \dot{h}(t) = -\pi a^2 \sqrt{2gh(t)} $$ Cancelling common terms on both sides of the equation above yields: $$ \left(\frac{b}{h_0}\right)^2 h^{3/2}(t) \dot{h}(t) = -a^2 \sqrt{2g} \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_hr} $$ Since: $$ h^{3/2}(t) \dot{h}(t) = \displaystyle\frac{2}{5}\frac{d}{dt}h^{5/2}(t) $$ equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_hr} can be rewritten as: $$ \displaystyle\frac{d}{dt}h^{5/2}(t) = -\frac{5}{2} \left(\frac{h_0}{b}\right)^2 a^2 \sqrt{2g} $$ Integrating both sides with respect to time yields: $$ \begin{eqnarray} \int_{t'=0}^{t'=t} \displaystyle\frac{d}{dt'}h^{5/2}(t')dt' &=& -\int_{t'=0}^{t'=t} \frac{5}{2} \left(\frac{h_0}{b}\right)^2 a^2 \sqrt{2g} dt' \nonumber\\[5pt] h^{5/2}(t')\bigg|_{t'=0}^{t'=t} &=& -\frac{5}{2} \left(\frac{h_0}{b}\right)^2 a^2 \sqrt{2g} t \nonumber\\[5pt] h^{5/2}(t) - h_0^{5/2} &=& -\frac{5}{2} \left(\frac{h_0}{b}\right)^2 a^2 \sqrt{2g} t \end{eqnarray} $$ Therefore: $$ \begin{eqnarray} \displaystyle h(t) &=& \left[ h_0^{5/2} - \frac{5}{2} \left(\frac{h_0}{b}\right)^2 a^2 \sqrt{2g} t \right]^{2/5} \\[5pt] &=& h_0 \left[ 1 - \frac{5}{2} \frac{1}{\sqrt{h_0}b^2} a^2 \sqrt{2g} t \right]^{2/5} \\[5pt] \end{eqnarray} $$ A bit more algebraic manipulation yields: $$ \boxed{ \displaystyle h(t) = h_0\left[ 1 - \frac{5}{2} \left(\frac{a}{b}\right)^2 \sqrt{\frac{2g}{h_0}} t \right]^{2/5} } \label{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_h_vs_t} $$ The time $T$ it takes for the fluid to go through the funnel is such that $h(T) = 0$, which means: $$ 1 - \displaystyle \frac{5}{2} \left(\frac{a}{b}\right)^2 \sqrt{\frac{2g}{h_0}} T = 0 \Longrightarrow \boxed{ \displaystyle T = \frac{2}{5} \left(\frac{b}{a}\right)^2 \sqrt{\frac{h_0}{2g}} } $$ Equation \eqref{post_62f54ab6114d473b6933ad5bf5a5fc88_eq_h_vs_t} can then be written in a simpler form: $$ \displaystyle h(t) = h_0\left( 1 - \frac{t}{T} \right)^{2/5} $$ Figure 2 shows a graph of $h(t)/h_0$ versus $t / T$. Fig. 2: Fluid height $h(t)/h_0$ vs. $t/T$. As the graph in figure 2 shows, the fluid's surface moves down slowly until about $t = 0.95T$. After that point, $h(t)$ starts decreasing very quickly, meaning some of the approximations we made will no longer be appropriate. But as initially expected, this should not significantly affect our computed estimate of $T$.
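(Not part of the original post: a minimal Python sketch of the two boxed formulas, with illustrative parameter values of my own choosing.)

import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2
a, b, h0 = 0.01, 0.08, 0.10   # stem radius, mouth radius, initial height (m); illustrative values only

T = (2 / 5) * (b / a) ** 2 * np.sqrt(h0 / (2 * g))   # emptying time from the boxed formula

def h(t):
    """Fluid height at time t, valid for 0 <= t <= T."""
    return h0 * (1 - t / T) ** (2 / 5)

print(f"T = {T:.2f} s")
print(f"h(0.95 T) = {h(0.95 * T) / h0:.2f} h0")   # the surface is still at roughly 30% of h0 at t = 0.95 T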
Does the Friedmann vacuum equation have a linear solution rather than an exponential one? Using natural units one can write Friedmann's equation for the vacuum as $$ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho_{vac} = L^2 \left(\frac{\rho_0}{L^4}\right) \tag{1} $$ where I define the Planck length $L=(8\pi G \hbar / 3 c^3)^{1/2}$, $\hbar = c = 1$, and $\rho_0$ is a dimensionless constant. Now let us interpret the Planck length $L$ as the linear size of the smallest region of space that can be described by general relativity. But the Weyl postulate, together with cosmological observations, also implies that space is expanding. Therefore we must have $$L = a(t) L_0\tag{2}$$ where $L_0$ is the Planck length measured at the reference time $t_0$ at which $a(t_0)=1$. Inserting Eq.(2) into Eq.(1) we find $$\left(\frac{\dot a}{a}\right)^2 = L_0^2 \left(\frac{\rho_0}{a^2L_0^4}\right)\tag{3}$$ where the Friedmann equation (3) has been rescaled in terms of the Planck length $L_0$ measured at the reference time $t_0$. Eq.(3) has a linear solution $$a(t) = \frac{t}{t_0}.$$ The scaled mass density $\rho(t)$ of the vacuum is then not constant but rather given by $$\rho(t) = \frac{\rho_0}{a^2 L_0^4} = \frac{1}{t^2 L_0^2}.$$
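(A small worked step added for the reader, consistent with the final density formula above: substituting $a(t)=t/t_0$ into Eq.(3) fixes the reference time. With $\dot a/a = 1/t$ we get $$\frac{1}{t^2} = \frac{\rho_0}{a^2 L_0^2} = \frac{\rho_0\, t_0^2}{t^2 L_0^2} \quad\Longrightarrow\quad t_0 = \frac{L_0}{\sqrt{\rho_0}},$$ and with this choice $\rho(t) = \rho_0/(a^2 L_0^4) = 1/(t^2 L_0^2)$, as stated.)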
When dealing with mathematical modeling, choosing the right scale to frame the equations can make the difference between a successful, lasting model and a poor description of reality. In today's post, we explore two important scaling procedures that arise in finance: the annualisation of returns and volatility. These are common terms in the industry and building bricks for many other metrics, so it is paramount to have their meaning and underlying assumptions straight.

Context and motivation The usual measure to assess a portfolio's profitability is the arithmetic return, whose formula you are familiar with, $$r_t = \frac{p_t}{p_{t-1}} - 1.$$ The price \(p_t\) could be the portfolio value (the total sum of its assets under management), a stock price, an interest rate index price or even a currency pair value. The frequency with which we sample prices can be daily, weekly, monthly, etc. Typically, the larger the time scale over which you measure the return, the larger this number becomes in absolute terms. To this arithmetic expression we link the so-called logarithmic return, which goes as $$ \ln \left(1 + r_t\right) = \ln \left(\frac{p_t}{p_{t-1}}\right), $$ where, in order to simplify the equations, one can define the logarithmic price, $$ P_t := \ln p_t \Rightarrow \ln \left(1 + r_t\right) = P_t - P_{t-1}. $$ This nonlinear, yet simple, transformation has many powerful applications. It is going to ease the manipulation of compound magnitudes and allow us to leverage statistical theorems.

Statistical realizations Each day, for each listed asset, the price goes up and down, generating at the end of the day a realization of the return. Starting at any date, after a year has gone by we have 260 realizations of the random process that generates the daily returns, 12 of the monthly and 52 of the weekly ones. This sample of a larger population, which grows with every day that goes by, can be described through statistics such as the mean \(\mu\) and the variance \(\sigma^2\). It is precisely in these definitions that concepts start to get messy with annualisation.

Scaling profits From a sequence of returns \(\{r_t\}\) one can compute the total return through the compounding formula, $$ 1+R_T = \frac{p_T}{p_0} = \prod_{i=1}^T(1+r_i) $$ If we wanted to compare two assets with different track records, in both length and market conditions, it would be unfair to do it through the total return, right? After all, one of them had more time to grow (and to lose). So it is an accepted practice to annualise the return, that is, to find the yearly return \(r_{260}\) that would produce the same total return at the end of the time span. The formula answering that question is a simple geometric mean, $$ 1 + r_{\tau} = (1+R_T)^{1/n} = \left(\prod_{i=1}^T(1+r_i)\right)^{1/n}, $$ whose exponent is the ratio between the number of realizations \(T\) that generated that total return and the number of realizations \(\tau\) that make up a year (there is not much consensus on this number, since each firm seems to use its own estimate of the average number of trading days in a year; we take 52 weeks x 5 weekdays, so \(\tau= 260\)): $$ n = \frac{T}{\tau} $$ So far, so good. The above formula is well known and there is not much mystery about it. But here is where the magic starts. Let's take logarithms on both sides of the annualisation equation and use their transformation properties. We get $$ \ln \left(1+r_\tau\right) = \frac{\tau}{T} \left(\sum_{i=1}^T\ln(1+r_i)\right). 
$$ Now, if we make \(\tau\rightarrow 1\), we obtain a familiar expression: the sample estimator for the mean! $$ \ln \left(1+r_1\right) = \frac{1}{T} \left(\sum_{i=1}^T\ln(1+r_i)\right) = \mu_T. $$ Here \(r_1\) is the constant daily return which ends up producing the same total return as the original realization of returns under consideration, and its logarithm \(\ln(1+r_1)\) is equal to the mean of the logarithmic returns. And for any other time span, different from a year and defined by \(\tau\) days, it is as simple as multiplying the mean by \(\tau\), $$\ln \left(1+r_{\tau} \right) = \tau \cdot \mu_T.$$ So, the mean profitability over a time span of \(\tau\) days scales linearly with \(\tau\) with respect to the mean of the distribution of log returns.

Scaling risk Let's have a look at risk, which here is understood as the variability of prices in the future. You have probably seen a formula involving a square root, but where did it come from and what is its interpretation? Random walk Scaling the returns was easy; after all, we were playing with their definition and the mathematical properties of logarithms. To do the same little game with risk, we need to introduce some strong assumptions to recover the formula known to everyone. We assume that log prices follow what is known as a random walk, $$ P_{t} = P_{t-1} + a_t, $$ where \(a_t\) is known as the innovation and is the source of the stochastic behaviour. The innovation term is modeled as a normal white noise process, that is $$ a_t \sim \mathcal{N}\left(0, \sigma^2\right). $$ If from an initial log price \(P_0\) you evolve the process for \(\tau\) steps, you get $$ P_\tau-P_0 = \sum_{i=1}^\tau a_i. $$ The gap between the final and initial log prices is the sum of the realizations of the innovations along the path. The variance of this walk is the sum of the variances of the innovations, since they are independent and identically distributed random variables, $$ Var\left(P_\tau-P_0\right) = Var\left(\sum_{i=1}^\tau a_i\right) = \sum_{i=1}^\tau Var\left(a_i\right). $$ Since we have assumed constant variance for the innovation process, we find $$ Var\left(P_\tau-P_0\right) = \tau\sigma^2, $$ which gives us the famous formula for volatility annualisation, $$ \sigma_\tau = \sqrt{\tau}\sigma. $$ So, the standard deviation of the return over \(\tau\) days scales with the square root of \(\tau\) with respect to the standard deviation of the underlying distribution of log returns. When we set \(\tau = 260\), we get the standard deviation of the yearly return. The interesting thing is that you could have used more than one year of realizations to compute that annual volatility. Why? Because the more sample realizations you use to compute the mean and the standard deviation, the closer they should get to the true population statistics. You could also compute monthly volatility with a three-year-long track record. Simply set \(\tau = 21\).

Fallacies The previous results are pretty nice and elegant, but we must put them into the context of their assumptions. To get there, we have assumed that both the mean and the variance exist, which is not a fact you can take for granted with financial figures (yes, the computer will always give you a number, but that doesn't mean it is meaningful). Secondly, provided they exist, they should be constant in time. Once again, this condition is only met when we compute them with a sufficient number of realizations, over a span of more than 10 years. And finally, returns should follow a normal distribution! 
And what's more, we have built all of the above assuming a zero mean, when it is clearly not zero (you could argue it is small, but it is actually of the same order of magnitude as the log returns themselves). To get a better estimate of monthly or annual variation one could go ahead and build more complex models which allow the mean and the variance to vary in time. That path is exciting and full of mathematical challenges which require skill to be tackled. But as the former trader Nassim Taleb warned us, you are better off creating a strategy that benefits from volatility than trying to predict it accurately.

Conclusions Today we have learned that annualising is just a particular case of scaling the location and scale parameters of a normal distribution of log returns. The first moment, the mean or expected value, scales linearly with time. The scale parameter, the standard deviation (the square root of the second central moment), scales with the square root of time. So next time you use them to compare the performance of two portfolios, funds or indices, you might want to check whether they have similar distributions of log returns through time before annualising their results; otherwise, it might be an unfair comparison. Thanks for reading!
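(Not part of the original post, but here is a minimal Python sketch of the two scaling rules applied to a daily return series; the simulated data and the 260-day convention are just the example used above.)

import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.0004, 0.01, size=3 * 260)        # simulated daily returns: 3 years of 260 trading days

log_r = np.log1p(r)                               # log returns ln(1 + r_t)
mu, sigma = log_r.mean(), log_r.std(ddof=1)       # daily mean and standard deviation

tau = 260
annual_return = np.expm1(tau * mu)                # mean scales linearly: ln(1 + r_tau) = tau * mu
annual_vol = np.sqrt(tau) * sigma                 # standard deviation scales with sqrt(tau)

print(f"annualised return: {annual_return:.2%}, annualised volatility: {annual_vol:.2%}")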
Latest revision as of 21:21, 24 June 2009 \section{Sperner's theorem} \label{sec:sperner} The first nontrivial case of Szemer\'edi's theorem is the case of arithmetic progressions of length $3$. However, for the density Hales--Jewett theorem even the case $k=2$ is interesting. DHJ($2$) follows from a basic result in extremal combinatorics: Sperner's theorem. \ignore{This result has been known for a long time. 
\noteryan{amusingly, published after vdW's theorem}} In this section we review a standard probabilistic proof of Sperner's theorem. Besides suggesting the equal-slices distribution, this proof easily gives the $k=2$ case of the probabilistic Density Hales--Jewett theorem, a key component in our proof of DHJ($3$). To investigate DHJ($2$) it is slightly more convenient to take the alphabet to be $\Omega = \{0,1\}$. Then a combinatorial line in $\{0,1\}^n$ is a pair of distinct binary strings $x$ and $y$ such that to obtain $y$ from $x$ one changes some $0$'s to $1$'s. If we think of the strings $x$ and $y$ as the indicators of two subsets $X$ and $Y$ of $[n]$, then this is saying that $X$ is a proper subset of $Y$. Therefore, when $k=2$ we can formulate the density Hales-Jewett theorem as follows: there exists $\dhj{2}{\delta}$ such that for $n \geq \dhj{2}{\delta}$, if $\calA$ is a collection of at least $\delta 2^n$ subsets of $[n]$, then there must exist two distinct sets $X,Y \in \calA$ with $X \subset Y$. In the language of combinatorics, this is saying that $\calA$ is \emph{not} an antichain. (Recall that an \textit{antichain} is a collection $\mathcal{A}$ of sets such that no set in $\mathcal{A}$ is a proper subset of any other.) Sperner's theorem gives something slightly stronger: a precise bound on the cardinality of any antichain. \begin{named}{Sperner's theorem} \label{thm:sperner}For every positive integer $n$, the largest cardinality of any antichain of subsets of $[n]$ is $\binom{n}{\lfloor n/2\rfloor}$.\end{named}As the bound suggests, the best possible example is the collection of all subsets of $[n]$ of size $\lfloor n/2\rfloor$. (It can be shown that this example is essentially unique: the only other example is to take all sets of size $\lceil n/2\rceil$, and even this is different only when $n$ is odd.) It is well known that $\binom{n}{\lfloor n/2\rfloor}2^{-n} \leq 1/\sqrt{n}$ for all $n$; hence Sperner's theorem implies that one may take $\dhj{2}{\delta} = 4/\delta^2$.\noteryan{The constant can be sharpened to $\pi/2$ for small $\delta$, of course. Also, $4/\delta^2$ is technically not an integer.} Let us present a standard probabilistic proof of Sperner's theorem (see, e.g.,~\cite{Spe90}): \begin{proof} (\emph{Sperner's theorem.}) Consider the following way of choosing a random subset of $[n]$. First, we choose, uniformly at random, a permutation $\tau$ of $[n]$. Next, we choose, uniformly at random and independently of $\tau$, an integer $s$ from the set $\{0,1,\dots,n\}$. Finally, we set $X=\{\tau(1),\dots,\tau(s)\}$ (where this is interpreted as the empty set if $s=0$). Let $\mathcal{A}$ be an antichain. Then the probability that a set $X$ that is chosen randomly in the above manner belongs to $\mathcal{A}$ is at most $1/(n+1)$, since whatever $\tau$ is, at most one of the $n+1$ sets $\{\tau(1),\dots,\tau(s)\}$ can belong to $\mathcal{A}$. However, what we are really interested in is the probability that $X\in\mathcal{A}$ if $X$ is chosen \textit{uniformly} from all subsets of $[n]$. Let us write $\eqs{2}[X]$ for the probability that we choose $X$ according to the distribution defined above, and $\unif_2[X]$ for the probability that we choose it uniformly. Then $\unif_2[X]=2^{-n}$ for every $X$, whereas $\eqs{2}[X]=\frac 1{n+1}\binom n{\abs{X}}^{-1}$, since there is a probability $1/(n+1)$ that $s = \abs{X}$, and all sets of size $\abs{X}$ are equally likely to be chosen. 
Therefore, the largest ratio of $\unif_2[X]$ to $\eqs{2}[X]$ occurs when $\abs{X}=\lfloor n/2\rfloor$ or $\lceil n/2\rceil$.\noteryan{by unimodality of binomial coefficients} In this case, the ratio is $(n+1)\binom n{\lfloor n/2\rfloor}2^{-n}$. Since $\eqs{2}(\mathcal{A})\leq 1/(n+1)$, it follows that $2^{-n}\abs{\mathcal{A}}=\unif_2(\mathcal{A})\leq\binom n{\lfloor n/2\rfloor}2^{-n}$, which proves the theorem. \end{proof} As one sees from the proof, it is very natural to consider different probability distributions on $\{0,1\}^n$, or equivalently on the set of all subsets of $[n]$. The first is the uniform distribution $\unif_2$, which is forced on us by the way the question is phrased. The second is what we called $\eqs{2}$; the reader may check that this is precisely the ``equal-slices'' distribution $\eqs{2}^n$ described in Section~\ref{sec:pdhj}. After seeing the above proof, one might take the attitude that the ``correct'' statement of Sperner's theorem is that if $\calA$ is an antichain, then $\eqs{2}(\calA) \leq 1/(n+1)$, and that the statement given above is a slightly artificial and strictly weaker consequence. \subsection{Probabilistic DHJ(\texorpdfstring{$2$}{2})} \label{sec:prob-sperner} Indeed, what the proof (essentially) establishes is the ``equal-slices'' DHJ($2$) theorem; i.e., that in Theorem~\ref{thm:edhj} one may take $\edhj{2}{\delta} = 1/\delta$.\noteryan{minus $1$, even} We say ``essentially'' because of the small distinction between the distribution $\eqs{2}^n$ used in the proof and the distribution $\ens{2}^n$ in the statement. It will be convenient in this introductory discussion of Sperner's theorem to casually ignore this. We will introduce $\ens{k}^n$ and be more careful about its distinction with $\eqs{k}^n$ in Section~\ref{sec:eqs}. To further bolster the claim that $\eqs{2}^n$ is natural in this context we will show an easy proof of the \emph{probabilistic} DHJ($2$) theorem. Looking at the statement of Theorem~\ref{thm:pdhj}, the reader will see it requires defining $\eqs{3}^n$; we will make this definition in the course of the proof.\begin{lemma} \label{lem:p-sperner} For every real $\delta>0$, every $A \subseteq \{0,1\}^n$ with $\eqs{2}^n$-density at least $\delta$ satisfies\[\Pr_{\lambda \sim \eqs{3}^n}[\lambda \subseteq A] \geq \delta^2.\]\end{lemma} \noindent Note that there is no lower bound necessary on $n$; this is because a template $\lambda \sim \eqs{3}^n$ may be degenerate. \begin{proof}As in our proof of Sperner's theorem, let us choose a permutation $\tau$ of $[n]$ uniformly at random. Suppose we now choose $s \in \{0, 1, \dotsc, n\}$ and also $t \in \{0, 1, \dotsc, n\}$ \emph{independently}. Let $x(\tau,s) \in \{0,1\}^n$ denote the string which has $1$'s in coordinates $\tau(1), \dotsc, \tau(s)$ and $0$'s in coordinates $\tau(s+1), \dotsc, \tau(n)$, and similarly define $x(\tau,t)$. These two strings both have the distribution $\eqs{2}^n$, but are not independent. A key observation is that $\{x(\tau,s), x(\tau,t)\}$ is a combinatorial line in $\{0,1\}^n$, unless $s = t$ in which case the two strings are equal. The associated line template is $\lambda \in \{0,1,\wild\}^n$, with\[\lambda_i = \begin{cases} 1 & \text{if $i \leq \min\{s,t\}$,}\\ \wild & \text{if $\min\{s,t\} < i \leq \max\{s,t\}$,} \\ 0 & \text{if $i > \max\{s,t\}$.} \end{cases} \]This gives the definition of how to draw $\lambda \sim \eqs{3}^n$ (with alphabet $\{0,1,\wild\}$). Note that $\lambda$ is a degenerate template with probability $1/(n+1)$. 
Assuming $\Pr_{x \sim \eqs{2}^n}[x \in A] \geq \delta$, our goal is to show that $\Pr[x(\tau,s), x(\tau,t) \in A ] \geq \delta^2$. But\begin{align*}\Pr[x(\tau,s), x(\tau,t) \in A] &= \Ex_{\tau} \Bigl[\Pr_{s,t} [x(\tau,s), x(\tau,t) \in A]\Bigr] & \\&= \Ex_{\tau} \Bigl[\Pr_{s} [x(\tau,s) \in A] \Pr_{t}[x(\tau,t) \in A]\Bigr] &\text{(independence of $s$, $t$)} \\&= \Ex_{\tau} \Bigl[\Pr_{s} [x(\tau,s) \in A]^2\Bigr] & \\&\geq \Ex_{\tau} \Bigl[\Pr_{s} [x(\tau,s) \in A]\Bigr]^2 & \text{(Cauchy-Schwarz)}\\&= \Pr_{\tau,s} [x(\tau,s) \in A]^2, & \end{align*}and $x(\tau,s)$ has the distribution $\eqs{2}^n$, completing the proof.\end{proof} Having proved the probabilistic DHJ($2$) theorem rather easily, an obvious question is whether we can generalize the proof to $k = 3$. The answer seems to be no; there is no obvious way to generate random length-$3$ lines in which the points are independent, or even partially independent as in the previous proof. Nevertheless, the equal-slices distribution remains important for our proof of the general case of DHJ($k$); in Section~\ref{sec:eqs} we shall introduce both $\eqs{k}^n$ and $\ens{k}^n$ and prove some basic facts about them.
Probabilists often work with Polish spaces, though it is not always very clear where this assumption is needed. Question: What can go wrong when doing probability on non-Polish spaces? One simple thing that can go wrong is purely related to the size of the space (Polish spaces all have size $\leq 2^{\aleph_0}$). When spaces are large enough, product measures become surprisingly badly behaved. Consider Nedoma's pathology: Let $X$ be a measurable space with $|X| > 2^{\aleph_0}$. Then the diagonal in $X^2$ is not measurable. We'll prove this by way of a theorem: Let $U \subseteq X^2$ be measurable. Then $U$ can be written as a union of at most $2^{\aleph_0}$ subsets of the form $A \times B$. Proof: First note that we can find some countable collection $(A_i)_{i\ge 0}$ of subsets of $X$ such that $U \in \sigma(\{A_i \times A_j:i,j\ge 0\})$, where $\sigma(\cdot)$ denotes the $\sigma$-algebra generated by the given subsets (proof: the set of $V$ for which we can find such $A_i$ is a $\sigma$-algebra containing the basis sets). For $x \in \{0, 1\}^\mathbb{N}$ define $B_x = \bigcap \{ A_i : x_i = 1 \} \cap \bigcap \{ A_i^c : x_i = 0 \}$. Consider all subsets of $X^2$ which can be written as a (possibly uncountable) union of sets of the form $B_x \times B_y$. This collection is a $\sigma$-algebra and obviously contains all the $A_i \times A_j$, so it contains $U$. But now we're done. There are at most $2^{\aleph_0}$ of the $B_x$, and each is certainly measurable in $X$, so $U$ can be written as a union of at most $2^{\aleph_0}$ subsets of the form $A \times B$. QED Corollary: The diagonal is not measurable. Evidently the diagonal cannot be written as a union of at most $2^{\aleph_0}$ rectangles, as they would all have to be single points, and the diagonal has size $|X| > 2^{\aleph_0}$. Separability is a key technical property used to avoid measure-theoretic difficulties for processes with uncountable index sets. The general problem is that measures are only countably additive and $\sigma$-algebras are closed under only countably many primitive set operations. In a variety of scenarios, uncountable collections of measure-zero events can bite you; separability ensures you can use a countable sequence as a proxy for the entire process without losing probabilistic content. Here are two examples. Weak convergence: the classical theory of weak convergence utilizes Borel-measurable maps. When dealing with some function-valued random elements, such as cadlag functions endowed with the supremum norm, Borel measurability fails to hold. See the motivation for Weak Convergence and Empirical Processes. The $J1$ topology is basically a hack which ensures the function space is separable and thereby avoids measurability issues. The parallel theory of weak convergence described in the book embraces non-measurability. Existence of stochastic processes with nice properties: a key property of Brownian motion is continuity of the sample paths. Continuity, however, is a property involving uncountably many indices. The existence of a continuous version of a process can be ensured with separable modifications. See this lecture and the one that follows. Metrizability allows us to introduce concepts such as convergence in probability. Completeness (the Cauchy convergence kind, not the null subsets kind) makes it easier to conduct analysis. 
There have already been some good responses, but I think it's worth adding a very simple example showing what can go wrong if you don't use Polish spaces. Consider $\mathbb{R}$ under its usual topology, and let $X$ be a non-Lebesgue-measurable set, e.g., a Vitali set. Using the subspace topology on $X$, the diagonal $D\subseteq\mathbb{R}\times X$, $D=\{(x,x)\colon x\in X\}$, is Borel measurable. However, its projection onto $\mathbb{R}$ is $X$, which is not Lebesgue measurable. Problems like this are avoided by keeping to Polish spaces. A measurable function between Polish spaces always takes Borel sets to analytic sets, which are, at least, universally measurable. The space $X$ in this example is a separable metrizable space, whereas Polish spaces are separable completely metrizable spaces. So things can go badly wrong if just the completeness requirement is dropped. Below is a copy of an answer I gave here https://stats.stackexchange.com/questions/2932/metric-spaces-and-the-support-of-a-random-variable/20769#20769 Here are some technical conveniences of separable metric spaces: (a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows one to define random variables in the elegant way: a random variable is the equivalence class of $X$ for the "almost surely equals" relation (note that the normed vector space $L^p$ is a set of equivalence classes). (b) The distance $d(X,X')$ between two $E$-valued r.v. $X, X'$ is measurable; in passing, this allows one to define the space $L^0$ of random variables equipped with the topology of convergence in probability. (c) Simple r.v. (those taking only finitely many values) are dense in $L^0$. And some technical conveniences of complete separable (Polish) metric spaces: (d) Existence of the conditional law of a Polish-valued r.v. (e) Given a morphism between probability spaces, a Polish-valued r.v. on the first probability space always has a copy in the second one. (f) Doob-Dynkin functional representation: if $Y$ is a Polish-valued r.v. measurable w.r.t. the $\sigma$-field $\sigma(X)$ generated by a random element $X$ in any measurable space, then $Y = h(X)$ for some measurable function $h$. We know by Ulam's theorem that a Borel measure on a Polish space is necessarily tight. If we just assume that the metric space is separable, we have that each Borel probability measure on $X$ is tight if and only if $X$ is universally measurable (that is, given a probability measure $\mu$ on the metric completion $\widehat X$, there are two measurable subsets $S_1$ and $S_2$ of $\widehat X$ such that $S_1\subset X\subset S_2$ and $\mu(S_1)=\mu(S_2)$). So a probability measure is not necessarily tight (take $S\subset [0,1]$ of inner Lebesgue measure $0$ and outer measure $1$); see Dudley's book Real Analysis and Probability. Another issue is related to tightness. We know by Prokhorov's theorem that if $(X,d)$ is Polish and if from every sequence of Borel probability measures in a family we can extract a subsequence which converges in law, then the family is necessarily uniformly tight. This may fail if we remove the assumption of "Polishness". And it may be problematic when we want results such as "$\mu_n\to \mu$ in law if and only if there is uniform tightness and convergence of finite-dimensional laws." Google "image measure catastrophe" with quotation marks. It can also be useful to have the set of Borel probability measures on $X$ (with weak* convergence, a.k.a. 
convergence in law) to be metrizable, for instance to be able to treat the convergences sequentially. For this you need the space $X$ to be separable and metrizable (see the Lévy-Prohorov metric).

Fun fact: you can find a non-separable Banach space and a Gaussian probability measure on it which gives measure $0$ to every ball of radius $1$. (In particular your intuition about notions like the "support" of a measure goes pretty badly wrong.) Consider i.i.d. standard Gaussian random variables $\xi_n$ and take as your norm $|\xi|^2 = \sup_{k\ge 0}2^{-k}\sum_{n=1}^{2^{k}} |\xi_n|^2$. This is almost surely finite by Borel-Cantelli and almost surely at least $1$ by the law of large numbers. The fact that the measure gives mass $0$ to every ball of radius $1$ is left as an exercise. This norm isn't even very exotic: if you interpret the $\xi_n$'s as Fourier coefficients, then the Banach space in question is really just the Besov space $B^{1/2}_{2,\infty}$.

As far as I remember, the projection of a measurable set may fail to be measurable, so something very natural may fail to be an event. Besides, constructing conditional probabilities as measures on sections becomes problematic. Perhaps there are more reasons, but these two are already good enough.
Let $G$ be a split semisimple real Lie group, and let $B=TU$ be a Borel subgroup with unipotent radical $U$ and Levi $T$. Fix an ordering on the roots $\Phi^+$ of $T$ in $U$, and for each root subgroup $U_{\alpha}$ of $U$, let $u_{\alpha}: \mathbb R \rightarrow U_{\alpha}$ be an isomorphism. For all $\alpha, \beta \in \Phi^+$, there exist unique real numbers $C_{\alpha,\beta,i,j}$ (depending on the $u_{\alpha}$ and the ordering) such that for all $x, y \in \mathbb R$, $$u_{\alpha}(x) u_{\beta}(y) u_{\alpha}(x)^{-1} = u_{\beta}(y) \prod\limits_{\substack{i,j>0\\ i\alpha + j \beta \in \Phi^+}} u_{i\alpha+j\beta}(C_{\alpha,\beta,i,j}x^iy^j)$$ I want to work out some examples with unipotent groups of exceptional semisimple groups, and am looking for a table of structure constants for the root system $G_2$. Does anyone know a reference where an ordering on the roots is chosen and these constants are written down?
Unitriangular matrix group:UT(3,p)

Latest revision as of 11:21, 22 August 2014

This article is about a family of groups with a parameter that is prime. For any fixed value of the prime, we get a particular group. View other such prime-parametrized groups.

Contents: 1 Definition, 2 Families, 3 Elements, 4 Arithmetic functions, 5 Subgroups, 6 Linear representation theory, 7 Endomorphisms, 8 GAP implementation, 9 External links

Definition

Note that the case $p = 2$, where the group becomes dihedral group:D8, behaves somewhat differently from the general case. We note on the page all the places where the discussion does not apply to $p = 2$.

As a group of matrices

The group is defined as the group, under matrix multiplication, of upper-triangular unipotent $3 \times 3$ matrices over the prime field $\mathbb{F}_p$:

$$\left \{ \begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \end{pmatrix} \mid a_{12},a_{13},a_{23} \in \mathbb{F}_p \right \}$$

The multiplication of the matrices with entries $(a_{12},a_{13},a_{23})$ and $(b_{12},b_{13},b_{23})$ gives the matrix with entries $(c_{12},c_{13},c_{23})$ where $c_{12} = a_{12} + b_{12}$, $c_{13} = a_{13} + b_{13} + a_{12}b_{23}$, and $c_{23} = a_{23} + b_{23}$. The identity element is the identity matrix. The inverse of the matrix with entries $(a_{12},a_{13},a_{23})$ is the matrix with entries $(-a_{12},\ -a_{13} + a_{12}a_{23},\ -a_{23})$. Note that all addition and multiplication in these definitions is happening over the field $\mathbb{F}_p$.

In coordinate form

We may define the group as the set of triples $(a_{12},a_{13},a_{23})$ over the prime field $\mathbb{F}_p$, with the multiplication law and inversion given by:

$$(a_{12},a_{13},a_{23}) (b_{12},b_{13},b_{23}) = (a_{12} + b_{12},\ a_{13} + b_{13} + a_{12}b_{23},\ a_{23} + b_{23}), \qquad (a_{12},a_{13},a_{23})^{-1} = (-a_{12},\ -a_{13} + a_{12}a_{23},\ -a_{23}).$$
The matrix corresponding to the triple $(a_{12},a_{13},a_{23})$ is:

$$\begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \end{pmatrix}$$

Definition by presentation

The group can be defined by means of the following presentation:

$$\langle x,y,z \mid [x,y] = z,\ xz = zx,\ yz = zy,\ x^p = y^p = z^p = e \rangle$$

where $e$ denotes the identity element. These commutation relations resemble Heisenberg's commutation relations in quantum mechanics, and so the group is sometimes called a finite Heisenberg group. The generators $x,y,z$ correspond to the matrices:

$$x = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad y = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, \qquad z = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Note that in the above presentation, the generator $z$ is redundant, and the presentation can thus be rewritten as a presentation with only two generators $x$ and $y$.

As a semidirect product

This group of order $p^3$ can also be described as a semidirect product of the elementary abelian group of order $p^2$ by the cyclic group of order $p$, with the following action. Denote the base of the semidirect product as ordered pairs of elements from $\mathbb{F}_p$; the generator of the acting group sends the pair $(a_{13}, a_{23})$ to $(a_{13} + a_{23}, a_{23})$. In this case, for instance, we can take the subgroup of matrices with $a_{12} = 0$ as the elementary abelian subgroup of order $p^2$, and the acting subgroup of order $p$ can be taken as the subgroup of matrices with $a_{13} = a_{23} = 0$.

Families

These groups fall in the more general family of unitriangular matrix groups. The unitriangular matrix group $UT(3,p)$ can be described as the group of unipotent upper-triangular $3 \times 3$ matrices over $\mathbb{F}_p$, which is also a $p$-Sylow subgroup of the general linear group $GL(3,p)$. This can be generalized further to $UT(3,q)$, where $q$ is a power of a prime $p$; $UT(3,q)$ is a $p$-Sylow subgroup of $GL(3,q)$. These groups also fall into the general family of extraspecial groups. For any number of the form $p^3$, there are two extraspecial groups of that order: an extraspecial group of "+" type and an extraspecial group of "-" type. $UT(3,p)$ is the extraspecial group of order $p^3$ and "+" type. The other type of extraspecial group of order $p^3$, i.e., the extraspecial group of order $p^3$ and "-" type, is a semidirect product of the cyclic group of prime-square order and the cyclic group of prime order.

Elements

Further information: element structure of unitriangular matrix group:UT(3,p)

Summary:
- number of conjugacy classes: $p^2 + p - 1$
- order: $p^3$
- conjugacy class size statistics: size 1 ($p$ times), size $p$ ($p^2 - 1$ times)
- orbits under automorphism group: Case $p = 2$: size 1 (1 conjugacy class of size 1), size 1 (1 conjugacy class of size 1), size 2 (1 conjugacy class of size 2), size 4 (2 conjugacy classes of size 2 each). Case $p$ odd: size 1 (1 conjugacy class of size 1), size $p - 1$ ($p - 1$ conjugacy classes of size 1 each), size $p^3 - p$ ($p^2 - 1$ conjugacy classes of size $p$ each)
- number of orbits under automorphism group: 4 if $p = 2$, 3 if $p$ is odd
- order statistics: Case $p = 2$: order 1 (1 element), order 2 (5 elements), order 4 (2 elements). Case $p$ odd: order 1 (1 element), order $p$ ($p^3 - 1$ elements)
- exponent: 4 if $p = 2$, $p$ if $p$ is odd

Conjugacy class structure

Note that the characteristic polynomial of all elements in this group is $(t - 1)^3$, hence we do not devote a column to the characteristic polynomial.
For reference, we consider matrices of the form: Nature of conjugacy class Jordan block size decomposition Minimal polynomial Size of conjugacy class Number of such conjugacy classes Total number of elements Order of elements in each such conjugacy class Type of matrix identity element 1 + 1 + 1 + 1 1 1 1 1 non-identity element, but central (has Jordan blocks of size one and two respectively) 2 + 1 1 , non-central, has Jordan blocks of size one and two respectively 2 + 1 , but not both and are zero non-central, has Jordan block of size three 3 if odd 4 if both and are nonzero Total (--) -- -- -- -- -- Arithmetic functions Compare and contrast arithmetic function values with other groups of prime-cube order at Groups of prime-cube order#Arithmetic functions For some of these, the function values are different when and/or when . These are clearly indicated below. Arithmetic functions taking values between 0 and 3 Function Value Explanation prime-base logarithm of order 3 the order is prime-base logarithm of exponent 1 the exponent is . Exception when , where the exponent is . nilpotency class 2 derived length 2 Frattini length 2 minimum size of generating set 2 subgroup rank 2 rank as p-group 2 normal rank as p-group 2 characteristic rank as p-group 1 Arithmetic functions of a counting nature Function Value Explanation number of conjugacy classes elements in the center, and each other conjugacy class has size number of subgroups when , when See subgroup structure of unitriangular matrix group:UT(3,p) number of normal subgroups See subgroup structure of unitriangular matrix group:UT(3,p) number of conjugacy classes of subgroups for , for See subgroup structure of unitriangular matrix group:UT(3,p) Subgroups Further information: Subgroup structure of unitriangular matrix group:UT(3,p) Note that the analysis here specifically does not apply to the case . For , see subgroup structure of dihedral group:D8. Table classifying subgroups up to automorphisms Automorphism class of subgroups Representative Isomorphism class Order of subgroups Index of subgroups Number of conjugacy classes Size of each conjugacy class Number of subgroups Isomorphism class of quotient (if exists) Subnormal depth (if subnormal) trivial subgroup trivial group 1 1 1 1 prime-cube order group:U(3,p) 1 center of unitriangular matrix group:UT(3,p) ; equivalently, given by . 
group of prime order 1 1 1 elementary abelian group of prime-square order 1 non-central subgroups of prime order in unitriangular matrix group:UT(3,p) Subgroup generated by any element with at least one of the entries nonzero group of prime order -- 2 elementary abelian subgroups of prime-square order in unitriangular matrix group:UT(3,p) join of center and any non-central subgroup of prime order elementary abelian group of prime-square order 1 group of prime order 1 whole group all elements unitriangular matrix group:UT(3,p) 1 1 1 1 trivial group 0 Total (5 rows) -- -- -- -- -- -- -- Tables classifying isomorphism types of subgroups Group name GAP ID Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup Trivial group 1 1 1 1 Group of prime order 1 1 Elementary abelian group of prime-square order 0 Prime-cube order group:U3p 1 1 1 1 Total -- Table listing number of subgroups by order Group order Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup 1 1 1 1 1 1 0 1 1 1 1 Total Linear representation theory Further information: linear representation theory of unitriangular matrix group:UT(3,p) Item Value number of conjugacy classes (equals number of irreducible representations over a splitting field) . See number of irreducible representations equals number of conjugacy classes, element structure of unitriangular matrix group of degree three over a finite field degrees of irreducible representations over a splitting field (such as or ) 1 (occurs times), (occurs times) sum of squares of degrees of irreducible representations (equals order of the group) see sum of squares of degrees of irreducible representations equals order of group lcm of degrees of irreducible representations condition for a field (characteristic not equal to ) to be a splitting field The polynomial should split completely. For a finite field of size , this is equivalent to . field generated by character values, which in this case also coincides with the unique minimal splitting field (characteristic zero) Field where is a primitive root of unity. This is a degree extension of the rationals. unique minimal splitting field (characteristic ) The field of size where is the order of mod . degrees of irreducible representations over the rational numbers 1 (1 time), ( times), (1 time) Orbits over a splitting field under the action of the automorphism group Case : Orbit sizes: 1 (degree 1 representation), 1 (degree 1 representation), 2 (degree 1 representations), 1 (degree 2 representation) Case odd : Orbit sizes: 1 (degree 1 representation), (degree 1 representations), (degree representations) number: 4 (for ), 3 (for odd ) Orbits over a splitting field under the multiplicative action of one-dimensional representations Orbit sizes: (degree 1 representations), and orbits of size 1 (degree representations) Endomorphisms Automorphisms The automorphisms essentially permute the subgroups of order containing the center, while leaving the center itself unmoved. GAP implementation GAP ID For any prime , this group is the third group among the groups of order . 
Thus, for instance, if $p = 7$, the group is described using GAP's SmallGroup function as: SmallGroup(343,3) Note that we don't need to compute $7^3 = 343$ ourselves; we can also write this as: SmallGroup(7^3,3)

As an extraspecial group

For any prime $p$, we can define this group using GAP's ExtraspecialGroup function as: ExtraspecialGroup(p^3,'+') For odd $p$, it can also be constructed as: ExtraspecialGroup(p^3,p) where the second argument indicates that it is the extraspecial group of exponent $p$. For instance, for $p = 5$: ExtraspecialGroup(5^3,5)

Other descriptions

Description | Functions used
SylowSubgroup(GL(3,p),p) | SylowSubgroup, GL
SylowSubgroup(SL(3,p),p) | SylowSubgroup, SL
SylowSubgroup(PGL(3,p),p) | SylowSubgroup, PGL
SylowSubgroup(PSL(3,p),p) | SylowSubgroup, PSL
Mr. Hobson has retired from running a stable and has invested in a more modern form of transport, trains. He has built a rail network with $n$ stations. However, he has retained his commitment to free the passenger from the burden of too many choices: from each station, a passenger can catch a train to exactly one other station. Such a journey is referred to as a leg. Note that this is a one-way journey, and it might not be possible to get back again. Hobson also offers exactly one choice of ticket, which allows a passenger to travel up to $k$ legs in one trip. At the exit from each station is an automated ticket reader (only one, so that passengers do not need to decide which to use). The reader checks that the distance from the initial station to the final station does not exceed $k$ legs. Each ticket reader must be programmed with a list of valid starting stations, but the more memory this list needs, the more expensive the machine will be. Help Hobson by determining, for each station $A$, the number of stations (including $A$) from which a customer can reach $A$ in at most $k$ legs.

The first line of input contains two integers $n$ and $k$, where $n$ ($2 \leq n \leq 5 \cdot 10^5$) is the number of stations and $k$ ($1 \leq k \le n-1$) is the maximum number of legs that may be traveled on a ticket. Then follow $n$ lines, the $i^{\text {th}}$ of which contains an integer $d_ i$ ($1 \leq d_ i \leq n$ and $d_ i \neq i$), the station which may be reached from station $i$ in one leg. Output $n$ lines, with the $i^{\text {th}}$ line containing the number of stations from which station $i$ can be reached in at most $k$ legs.

Sample Input 1:
6 2
2
3
4
5
4
3

Sample Output 1:
1
2
4
5
3
1

Sample Input 2:
5 3
2
3
1
5
4

Sample Output 2:
3
3
3
2
2
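As a sanity check on small inputs, here is a naive sketch in Python (not an efficient solution for the full constraints; the input handling is assumed): for every start station it simply walks up to $k$ legs through the network and credits each distinct station it visits.

```python
def count_reachable(n, k, nxt):
    """For each start station, walk up to k legs through the functional
    graph nxt (1-indexed) and credit every distinct station visited,
    including the start itself.  O(n*k): fine only for small inputs."""
    counts = [0] * (n + 1)
    for start in range(1, n + 1):
        seen = {start}
        counts[start] += 1            # 0 legs: the passenger is already there
        cur = start
        for _ in range(k):
            cur = nxt[cur]
            if cur in seen:           # revisiting a station: the rest of the
                break                 # walk repeats, nothing new is reached
            seen.add(cur)
            counts[cur] += 1
    return counts[1:]

# Sample Input 1: 6 stations, k = 2, d = 2 3 4 5 4 3 (index 0 unused)
print(count_reachable(6, 2, [0, 2, 3, 4, 5, 4, 3]))   # [1, 2, 4, 5, 3, 1]
```

The printed list matches Sample Output 1 above.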
A. All light sources (even lasers) are subject to a diffraction limit, so any light beam will eventually diverge with an angle $\theta$ given by $$\theta \approx \frac{\lambda}{A_T}$$ where $\lambda$ is the wavelength of the light and $A_T$ is the aperture of the light beam source (and "eventually" means for distances much greater than $A_T$). Any beam diverging with a constant angle will have an intensity following an inverse-square law, though the total beam power will be unaffected (if we can neglect light absorption and scattering). The diffraction limit can be seen as a consequence of the Heisenberg uncertainty principle: calling one transverse coordinate $x$ and applying the position-momentum uncertainty relation at the source (assuming that the inequality is saturated and the position uncertainty is equal to the aperture size), we get $$\Delta p_x^{source} \Delta x^{source} \approx \hbar$$ $$\Delta p_x^{source} \approx \frac{\hbar}{A_T}$$ Far from the source, at a distance $R \gg A_T$, the transverse position uncertainty will be dominated by the transverse momentum uncertainty at the source, giving $$\theta \approx \frac{\Delta x^{far}}{R} \approx \frac{R \Delta p_x^{source}}{p}\frac{1}{R} \approx \frac{\hbar\lambda}{A_T \hbar 2\pi} = \frac{1}{2\pi}\frac{\lambda}{A_T}$$ that differs from the previously given result by a constant, due to the approximations involved and the imprecise nature of the "deltas". A more precise treatment shows that $\theta = \lambda/A_T$ is a better approximation. B. Photons are not scattered in a perfect vacuum. And the intergalactic space, while not a perfect vacuum, is so empty that even photons originated in galaxies at billions of light years can be received. C. Yes, but you will need a receiver with a big aperture to receive this light. In more precise terms, you will need a receiver with an aperture $A_R$ given by $$A_R \approx \frac{\lambda}{A_T}R$$ where $\lambda$ is the wavelength, $A_T$ is the aperture of the transmitter and $R$ is the distance from the transmitter to the receiver.
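To put rough numbers on these formulas (the values below are illustrative assumptions, not taken from the answer): a green 532 nm laser with a 10 cm aperture, received 4.2 light-years away, would need an enormous collecting aperture to span the whole diverged beam, which is another way of seeing the inverse-square dimming.

```python
wavelength = 532e-9            # m, assumed green laser
A_T = 0.1                      # m, assumed transmitter aperture
R = 4.2 * 9.461e15             # m, assumed distance (about 4.2 light-years)

theta = wavelength / A_T       # divergence angle ~ lambda / A_T, in radians
A_R = theta * R                # receiver aperture needed to span the beam

print(f"divergence ~ {theta:.2e} rad")           # ~5.3e-06 rad
print(f"receiver aperture ~ {A_R/1e3:.2e} km")   # ~2.1e+08 km
```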
Learning Outcomes

Interpret the F probability distribution as the number of groups and the sample size change

The distribution used for the hypothesis test is a new one. It is called the F distribution, named after Sir Ronald Fisher, an English statistician. The F statistic is a ratio (a fraction). There are two sets of degrees of freedom; one for the numerator and one for the denominator. For example, if F follows an F distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then F ~ F_{4,10}.

Note: The F distribution is related to the Student's t-distribution: the square of a variable with a t-distribution on n degrees of freedom follows an F distribution with 1 and n degrees of freedom. One-Way ANOVA expands the t-test for comparing more than two groups. The scope of that derivation is beyond the level of this course.

To calculate the F ratio, two estimates of the variance are made.

Variance between samples: An estimate of σ² that is the variance of the sample means multiplied by n (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called variation due to treatment or explained variation.

Variance within samples: An estimate of σ² that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. This variance is also called the variation due to error or unexplained variation.

SS between = the sum of squares that represents the variation among the different samples

SS within = the sum of squares that represents the variation within samples that is due to chance.

To find a "sum of squares" means to add together squared quantities that, in some cases, may be weighted. MS means "mean square." MS between is the variance between groups, and MS within is the variance within groups.
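As a quick illustration of how the two degree-of-freedom parameters enter, here is a small sketch using scipy (the critical value 3.48 is just the familiar 5% cutoff for the F_{4,10} example above):

```python
from scipy import stats

df_num, df_den = 4, 10           # F ~ F_{4,10} from the example above
F = stats.f(df_num, df_den)

print(F.mean())                  # 1.25 = df_den / (df_den - 2)
print(F.sf(3.48))                # right-tail area at 3.48, roughly 0.05
```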
Calculation of Sum of Squares and Mean Square

k = the number of different groups
n_j = the size of the jth group
s_j = the sum of the values in the jth group
n = total number of all the values combined (total sample size: ∑ n_j)
x = one value: ∑ x = ∑ s_j
Sum of squares of all values from every group combined: ∑ x²
Total sum of squares: [latex]\displaystyle{S}{S}_{{\text{total}}}=\sum{x}^{{2}}-\frac{{{(\sum{x})}^{{2}}}}{{n}}[/latex]
Explained variation (between-group variability): sum of squares representing variation among the different samples: [latex]\displaystyle{S}{S}_{{\text{between}}}=\sum{[\frac{{({s}_{j})}^{{2}}}{{n}_{{j}}}]}-\frac{{(\sum{s}_{{j}})}^{{2}}}{{n}}[/latex]
Unexplained variation: sum of squares representing variation within samples due to chance: [latex]\displaystyle{S}{S}_{{\text{within}}}={S}{S}_{{\text{total}}}-{S}{S}_{{\text{between}}}[/latex]
df's for the different groups (df's for the numerator): df between = k – 1
df for the errors within samples (df's for the denominator): df within = n – k
Mean square (variance estimate) explained by the different groups: [latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}[/latex]
Mean square (variance estimate) that is due to chance (unexplained): [latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}[/latex]

MS between and MS within can be written as follows:

[latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{k}-{1}}}[/latex]

[latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{n}-{k}}}[/latex]

The one-way ANOVA test depends on the fact that MS between can be influenced by population differences among means of the several groups. Since MS within compares values of each group to its own group mean, the fact that group means might be different does not affect MS within.

The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, MS between and MS within should both estimate the same value.

Note: The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution, because it is assumed that the populations are normal and that they have equal variances.

F-Ratio or F Statistic

[latex]\displaystyle{F}=\frac{{{M}{S}_{{\text{between}}}}}{{{M}{S}_{{\text{within}}}}}[/latex]

If MS between and MS within estimate the same value (following the belief that H0 is true), then the F-ratio should be approximately equal to one. Mostly, just sampling errors would contribute to variations away from one. As it turns out, MS between consists of the population variance plus a variance produced from the differences between the samples. MS within is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, MS between will generally be larger than MS within. Then the F-ratio will be larger than one. However, if the population effect is small, it is not unlikely that MS within will be larger in a given sample.
The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the F-ratio can be written as:

F-Ratio Formula when the groups are the same size

[latex]\displaystyle{F}=\frac{{{n}\cdot{{s}_{\overline{{x}}}^{{ {2}}}}}}{{{{s}_{{\text{pooled}}}^{{2}}}}}[/latex]

where …
n = the sample size
df numerator = k – 1
df denominator = n – k
s²_pooled = the mean of the sample variances (pooled variance)
[latex]\displaystyle{{s}_{\overline{{x}}}^{{ {2}}}}[/latex] = the variance of the sample means

Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.

Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F
Factor (Between) | SS(Factor) | k – 1 | MS(Factor) = SS(Factor)/(k – 1) | F = MS(Factor)/MS(Error)
Error (Within) | SS(Error) | n – k | MS(Error) = SS(Error)/(n – k) |
Total | SS(Total) | n – 1 | |

Example

Three different diet plans are to be tested for mean weight loss. The entries in the table are the weight losses for the different plans. The one-way ANOVA results are shown in the table here.

Plan 1: n₁ = 4 | Plan 2: n₂ = 3 | Plan 3: n₃ = 3
5 | 3.5 | 8
4.5 | 7 | 4
4 | 4.5 | 3.5
3 | |

s₁ = 16.5, s₂ = 15, s₃ = 15.5

Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test.

[latex]\displaystyle{{S}{S}}_{{\text{between}}}=\sum{\left[\frac{{{({s}_{j})}^{2}}}{{{n}_{j}}}\right]}-\frac{{(\sum{{s}_{j})}^{2}}}{{n}}=\frac{{{{s}_{1}}^{2}}}{{4}}+\frac{{{{s}_{2}}^{2}}}{{3}}+\frac{{{{s}_{3}}^{2}}}{{3}}-\frac{{({s}_{1}+{s}_{2}+{s}_{3})^{2}}}{{n}}[/latex]

where n₁ = 4, n₂ = 3, n₃ = 3 and n = n₁ + n₂ + n₃ = 10

[latex]\displaystyle=\frac{{({16.5})^{2}}}{{4}}+\frac{{({15})^{2}}}{{3}}+\frac{{({15.5})^{2}}}{{3}}-\frac{{{({16.5}+{15}+{15.5})}^{2}}}{{10}}={2.2458}[/latex]

[latex]\displaystyle{{S}{S}}_{{\text{total}}}=\sum{x}^{2}-\frac{{{(\sum{x})}^{2}}}{{n}}[/latex]

[latex]\displaystyle=\left({5}^{2}+{4.5}^{2}+{4}^{2}+{3}^{2}+{3.5}^{2}+{7}^{2}+{4.5}^{2}+{8}^{2}+{4}^{2}+{3.5}^{2}\right)-\frac{{{\left({5}+{4.5}+{4}+{3}+{3.5}+{7}+{4.5}+{8}+{4}+{3.5}\right)}^{2}}}{{10}}[/latex]

[latex]\displaystyle={244}-\frac{{{47}^{2}}}{{10}}={244}-{220.9}={23.1}[/latex]

Using a Calculator

One-Way ANOVA Table: The formulas for SS(Total), SS(Factor) = SS(Between) and SS(Error) = SS(Within) are as shown previously. The same information is provided by the TI calculator hypothesis test function ANOVA in STAT TESTS (syntax is ANOVA(L1, L2, L3) where L1, L2, L3 have the data from Plan 1, Plan 2, Plan 3 respectively).

Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F
Factor (Between) | SS(Factor) = SS(Between) = 2.2458 | k – 1 = 3 groups – 1 = 2 | MS(Factor) = SS(Factor)/(k – 1) = 2.2458/2 = 1.1229 | F = MS(Factor)/MS(Error) = 1.1229/2.9792 = 0.3769
Error (Within) | SS(Error) = SS(Within) = 20.8542 | n – k = 10 total data – 3 groups = 7 | MS(Error) = SS(Error)/(n – k) = 20.8542/7 = 2.9792 |
Total | SS(Total) = 2.2458 + 20.8542 = 23.1 | n – 1 = 10 total data – 1 = 9 | |

Try it

As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments: bare soil, a commercial ground cover, black plastic, straw, compost. All plants grew under the same conditions and were the same variety.
Students recorded the weight (in grams) of tomatoes produced by each of the n = 15 plants:

Bare: n₁ = 3 | Ground Cover: n₂ = 3 | Plastic: n₃ = 3 | Straw: n₄ = 3 | Compost: n₅ = 3
2,625 | 5,348 | 6,583 | 7,285 | 6,277
2,997 | 5,682 | 8,560 | 6,897 | 7,818
4,915 | 5,482 | 3,830 | 9,230 | 8,677

Create the one-way ANOVA table. Enter the data into lists L1, L2, L3, L4 and L5. Press STAT and arrow over to TESTS. Arrow down to ANOVA. Press ENTER and enter (L1, L2, L3, L4, L5). Press ENTER. The table was filled in with the results from the calculator.

One-Way ANOVA table:

Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F
Factor (Between) | 36,648,561 | 5 – 1 = 4 | [latex]\displaystyle\frac{{{36},{648},{561}}}{{4}}={9},{162},{140}[/latex] | [latex]\displaystyle\frac{{{9},{162},{140}}}{{{2},{044},{672.6}}}={4.4810}[/latex]
Error (Within) | 20,446,726 | 15 – 5 = 10 | [latex]\displaystyle\frac{{{20},{446},{726}}}{{10}}={2},{044},{672.6}[/latex] |
Total | 57,095,287 | 15 – 1 = 14 | |

The one-way ANOVA hypothesis test is always right-tailed because larger F-values are way out in the right tail of the F-distribution curve and tend to make us reject H0.

Notation

The notation for the F distribution is F ~ F_df(num), df(denom), where df(num) = df between and df(denom) = df within. The mean of the F distribution is [latex]\displaystyle\mu=\frac{{{d}{f}{(\text{denom})}}}{{{d}{f}{(\text{denom})}-{2}}}[/latex], defined when df(denom) > 2.

References

Tomato Data, Marist College School of Science (unpublished student research)

Concept Review

Analysis of variance compares the means of a response variable for several groups. ANOVA compares the variation within each group to the variation of the mean of each group. The ratio of these two is the F statistic from an F distribution with (number of groups – 1) as the numerator degrees of freedom and (number of observations – number of groups) as the denominator degrees of freedom. These statistics are summarized in the ANOVA table.

Formula Review

[latex]\displaystyle{S}{S}_{{\text{between}}}=\sum{[\frac{{({s}_{j})}^{{2}}}{{n}_{{j}}}]}-\frac{{(\sum{s}_{{j}})}^{{2}}}{{n}}[/latex]

[latex]\displaystyle{S}{S}_{{\text{total}}}=\sum{{x}^{{2}}}-\frac{{{(\sum{x})}^{{2}}}}{{n}}[/latex]

[latex]\displaystyle{S}{S}_{{\text{within}}}={S}{S}_{{\text{total}}}-{S}{S}_{{\text{between}}}[/latex]

df between = df(num) = k – 1

df within = df(denom) = n – k

[latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}[/latex]

[latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}[/latex]

[latex]\displaystyle{F}=\frac{{{M}{S}_{{\text{between}}}}}{{{M}{S}_{{\text{within}}}}}[/latex]

F ratio when the groups are the same size: [latex]\displaystyle{F}=\frac{{{n}{{s}_{\overline{{x}}}^{{ {2}}}}}}{{{s}_{{\text{pooled}}}^{{2}}}}[/latex]

Mean of the F distribution: [latex]\displaystyle\mu=\frac{{{d}{f}{(\text{denom})}}}{{{d}{f}{(\text{denom})}-{2}}}[/latex], for df(denom) > 2

where:
k = the number of groups
n_j = the size of the jth group
s_j = the sum of the values in the jth group
n = the total number of all values (observations) combined
x = one value (one observation) from the data
[latex]\displaystyle{{s}_{\overline{{x}}}^{{ {2}}}}[/latex] = the variance of the sample means
[latex]\displaystyle{{s}_{{\text{pooled}}}^{{2}}}[/latex] = the mean of the sample variances (pooled variance)
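For readers working without a TI calculator, the sums of squares for the diet-plan example can be checked by hand in a few lines of Python, and scipy's f_oneway gives the same F statistic. This is a sketch using the data quoted above, not part of the original text.

```python
from scipy import stats

# Diet-plan data from the worked example above.
groups = [[5, 4.5, 4, 3], [3.5, 7, 4.5], [8, 4, 3.5]]

k = len(groups)                              # number of groups
n = sum(len(g) for g in groups)              # total sample size
grand_sum = sum(sum(g) for g in groups)

ss_total = sum(x * x for g in groups for x in g) - grand_sum**2 / n
ss_between = sum(sum(g)**2 / len(g) for g in groups) - grand_sum**2 / n
ss_within = ss_total - ss_between

ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
f_ratio = ms_between / ms_within

print(round(ss_between, 4), round(ss_within, 4), round(f_ratio, 4))
# 2.2458 20.8542 0.3769  -- matches the ANOVA table above

print(stats.f_oneway(*groups).statistic)     # the same F, computed by scipy
```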
Thought you knew everything about correlation? Think there’s no fooling you with the question of correlation with financial prices or returns? Well maybe, just maybe, this post will enlighten you. Correlation: the debate is on Correlation can be a controversial topic. Things can go awry when two seemingly unrelated variables appear to move in a similar pattern and are found to be correlated. Take a look here at some unusual examples. My personal favourite is the clear relationship between the age of Miss America winners and the number of murders by hot things. There’s no denying it folks, just take a look for yourselves… Although there must be similar cases with financial series (and I’d be interested to know of any) this post focuses on another tricky aspect of correlation in finance. We take a look at a typical mistake made by most finance newbies: calculating correlation with prices instead of returns. We’ve all been there. You’ve just begun your quant career and been made aware of your mistake; “you should use returns not prices for correlation”. And you accept it without a second thought and continue with your research, right? Well, now is your chance to take a closer look at that pesky correlation and prepare to be amazed. But hold on a second, why are we even interested in correlation? Correlation is the key to diversity Who hasn’t heard the phrase “diversify your portfolio”? Diversification is pretty much number one priority in financial management (after making money, of course). The concept of not putting all your eggs in one basket is not new and it makes complete sense to control risk by spreading investments. Diversifying methods vary from selecting different asset classes (funds, bonds, stocks, etc.), combining industries, or varying the risk levels of investments. And the most common and direct diversification measurement used in these methods is correlation. A simple decision From the point of view of an investor, what would you do given these possible asset investments?Your first reaction is probably “invest in assets A and B, because C doesn’t look as good”. Then after a moment, you think “but A and B look highly correlated, so maybe A and C would be better”. But how would you feel if I told you that in fact A and B are perfectly negatively correlated and A and C perfectly positively correlated? A little confused, maybe? Not buying it? Let’s put the returns in a scatter plot:That’s what I said: A and B have negative correlation and A and C positive correlation (and the points lie on exact straight lines). But your thinking: “the prices look positively correlated”. Yes, something strange is going on here. Misconceptions Don’t worry; you’re not the only one confused. Correlation, despite its apparent simplicity, is often misinterpreted even by experienced academics and investors. One misconception is that extreme values of correlation imply the movements of two series are in exact opposite directions (for -1) or the same direction (for +1). But this is not correct. Assets A and C are perfectly positively correlated. You would then often hear people say “A and C move up and down together”. But not so fast… for small positive returns of asset A (less than 1%) asset C has negative returns. Hmmm… Not as common is the belief that the magnitude of the movements is the same for series with ±1 correlation. This is also not correct. Assets A and B are perfectly negatively correlated. Some may say “B moves the same amount as A but in the opposite direction”. Nope again. 
When A moves 4% B moves close to 0%. Wait, so what did we miss? Let’s go back to basics. What is correlation? Correlation is how closely variables are related. The Pearson correlation coefficient is its most common statistic and it measures the degree of linear relationship between two variables. Its values range between -1 (perfect negative correlation) and 1 (perfect positive correlation). Zero correlation implies no relationship between variables. It is defined as the covariance between two variables, say \(X\) and \(Y\), divided by the product of the standard deviations of each. Covariance is an unbounded statistic of how the variables change together, while standard deviation is a measure of data dispersion from its average. $$\rho^{}_\mathrm{X,Y} = \frac{\mathrm{cov(X,Y)}}{\sigma^{}_\mathrm{X}\sigma^{}_\mathrm{Y}}$$ This formula can be estimated for a sample by: $$\hat{\rho}^{}_{X,Y} = \frac{\sum^T_{t=1}(x_t-\bar{x})(y_t-\bar{y})}{\sqrt{\sum^T_{t=1}(x_t-\bar{x})^2\sum^T_{t=1}(y_t-\bar{y})^2}}$$ where \(x_t\) and \(y_t\) are the values of \(X\) and \(Y\) at time \(t\). The sample means of \(X\) and \(Y\) are \(\bar{x}\) and \(\bar{y}\) respectively. Uncovering the mystery Looking carefully at this last formula we see all the bracketed terms are differences to the variable average, so correlation is a comparison of the deviations from the means and not of the variations in the raw data itself. Hence, Pearson actually measures whether the variables are above or below their average at the same time. The term \((x_t-\bar{x})(y_t-\bar{y})\) is positive if both series are above (or below) their average together (and note the denominator is always positive). So a correct statement of perfect positive correlation would be “the upward deviations from the mean of asset A returns are simultaneous to upward deviations from the mean of asset B returns, and similarly with downward deviations”. This isn’t as intuitive as the typical “asset B goes up and down with asset A” and it is certainly not as easy to visualise. It’s no wonder correlation can be misleading. Removing the mean Let’s go back to our example. The asset prices were created to follow geometric Brownian motions with a trend component and an irregular component. All three series have strong, positive, constant trend components, hence their upward random walks (A and B have the same magnitude and C has half). The irregular components are generated with the same series of random numbers but their sign, have been inverted for B. These settings ensure the extreme correlations between the series. If we create two new series E and F with trend components set to zero then the upward bias is removed in the prices but the correlation on the returns stays the same. This is because the trend component doesn’t matter in the correlation calculation since it compares deviations from the mean returns, or in other words, from the trend.The difference is that all upward returns in asset E do correspond to downward returns in asset F, and vice versa. This is like shifting the axes in the first scatter plot and centring them on the means of the series of A and B. This shifting concept can be applied to the correlation calculation by removing the means from the formula: $$\hat{\mathrm{dq}}^{}_\mathrm{X,Y} = \frac{\sum^T_{t=1}x_ty_t}{\sqrt{\sum^T_{t=1}x_t^2\sum^T_{t=1}y_t^2}}$$ Instead of comparing deviations from the series’ averages we are directly comparing the values themselves. 
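A tiny numerical sketch makes the difference concrete (the return series below are made up for illustration): two return series that are mirror images around a common positive drift have Pearson correlation exactly −1, while the mean-free formula above comes out positive because both series sit mostly above zero.

```python
import numpy as np

def pearson(x, y):
    """Standard Pearson correlation: compares deviations from each mean."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))

def uncentered(x, y):
    """The mean-free variant discussed above: compares the raw values."""
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

a = np.array([0.04, 0.01, 0.03, 0.005, 0.02])   # made-up returns
b = 0.04 - a                                    # mirror image of a

print(pearson(a, b))      # -1.0
print(uncentered(a, b))   # positive, since both series are mostly above zero
```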
Using this QuantDare formula, we have the following correlations on the asset returns: Well, it kind of makes more sense looking at the price series, but they're very different to the Pearson coefficients. But hold on a second, wasn't this post about correlation of prices and returns?

Prices vs returns

Yes, let's get back to that. Thinking about Pearson's formula, it's more likely that two price series are above or below their averages at the same time, since financial series usually share an upward bias. Due to this, price correlations tend to be positive. Also, prices are not independent. Let $P_t$ be the price of an asset at time $t$; then the time series can be written as: $$P_0, P_1, P_2, …, P_T.$$ Let $R_t$ be the return at time $t$: $R_t = P_t-P_{t-1}$. Then we can rewrite the price series as: $$P_0, P_0+R_1, P_0+R_1+R_2, …, P_0+R_1+…+R_T.$$ Imagine correlation calculated over these prices. The first return $R_1$ contributes to all the following entries and impacts every data point. On the other hand, the last return $R_T$ only contributes to one. In this way, early changes in the prices have more weight than later changes in the correlation calculation, whereas with the returns each one has equal importance. For this reason, correlation with prices is more sensitive to the number of time periods it's calculated over.

Using our asset examples, the Pearson correlation coefficients over prices are more in line with the visual perception. The magnitudes are different, but the signs coincide with the QuantDare formula on returns. This QD formula, however, doesn't work with prices: applied to (always positive) price series it always produces positive correlations, since the formula is meant for stationary series. Which correlation calculation convinces us more? Well, it all depends on the relationship you're interested in comparing. Short-term changes are better interpreted from returns correlations, whilst valuations of long-term evolutions may be improved by using prices. And if what you really want is to analyse whether two series move up and down together, then you should replace the Pearson coefficient with the QuantDare formula over the return series.

The most important thing with correlation is to really understand what is being measured and give the correct interpretation. It is such a common statistic, used by professionals and laymen alike in all kinds of fields, that it is easy to build a false confidence around its meaning and make inaccurate statements or misleading conclusions. But maybe, just maybe, this post will help to avoid future confusion and misinterpretation of this useful measure of relationship.
Let $V^{\mu}$ be a vector field defined in a Minkowski spacetime and suppose it transforms under a Lorentz transformation $V'^{\mu} = \Lambda^{\mu}_{\,\,\,\nu}V^{\nu}$. We can write this as $V'^{\mu} = (e^{i\omega})^{\mu}_{\,\,\,\nu}V^{\nu}$, I think, where $\omega$ denotes a rotation in some plane spanned by indices $\left\{\rho \sigma\right\}$, say. In 2D Euclidean space, we can write the matrix representation of $\Lambda$ as $$\begin{pmatrix} \cos \omega & \sin \omega\\-\sin \omega&\cos \omega\end{pmatrix}$$ and in Minkowski space this changes to the 'hyperbolic' rotation. Linearising the above yields $$\begin{pmatrix}1&\omega\\-\omega&1\end{pmatrix} = \text{Id} + \begin{pmatrix} 0&\omega\\-\omega&0\end{pmatrix} = \text{Id} + \omega \begin{pmatrix} 0&1\\-1&0\end{pmatrix}$$ Now compare with the more general treatment: $V'^{\mu} = \Lambda^{\mu}_{\,\,\,\nu}V^{\nu} \approx (\delta^{\mu}_{\nu} + \omega^{\mu}_{\,\,\,\nu})V^{\nu}$, where $\omega^{\mu}_{\,\,\,\nu} \equiv (\omega^{\rho \sigma} S_{\rho \sigma})^{\mu}_{\,\,\,\nu}$. In 2D, the spin matrix $S$ when acting on vectors in 2D Euclidean space is therefore the matrix multiplying $\omega$ above, which agrees with the single generator of the SO(2) group. If we continue with the general analysis, we obtain $V'^{\mu} - V^{\mu} = \omega^{\mu}_{\,\,\,\nu}V^{\nu} = \eta^{\mu \rho}\omega_{\rho \nu} V^{\nu} = \omega^{\rho \sigma} \delta^{\mu}_{\rho} \eta_{\sigma \nu} V^{\nu}$. Now using the antisymmetry of $\omega_{\rho \sigma}$ gives $$2(V'^{\mu} - V^{\mu}) = \omega^{\rho \sigma}(\delta^{\mu}_{\rho} \eta_{\sigma \nu} - \delta^{\mu}_{\sigma} \eta_{\rho \nu})V^{\nu}$$ from which we can identify $(S_{\rho\sigma})^{\mu}_{\,\,\,\nu}$ to be $\delta^{\mu}_{\rho} \eta_{\sigma \nu} - \delta^{\mu}_{\sigma} \eta_{\rho \nu}$. I am wondering how this agrees with the matrix I obtained above. Many thanks.
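As a numerical aside (not part of the question above), one can check the linearisation by exponentiating the SO(2) generator and recovering the finite rotation matrix:

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the matrix multiplying omega above
w = 0.3                                   # a finite rotation angle

rotation = np.array([[np.cos(w), np.sin(w)],
                     [-np.sin(w), np.cos(w)]])

print(np.allclose(expm(w * J), rotation))  # True: exp(w J) is the finite rotation
```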
Parallax would be the first thing to break the hoax. In our universe, "nearby" stars appear to move across the sky as compared to distant stars due to parallax and the motion of the Earth around the Sun. The effect is subtle and wasn't observed until the late 1800s to early 1900s, using something called a filar micrometer to make accurate angular measurements. While the experimenters could simulate the annual parallax of stars, if the inhabitants of your solar system achieve technology comparable to the early 1900s, they could stumble upon the daily parallax. The daily parallax due to the rotation of the Earth couldn't be simulated for everyone on Earth like the annual parallax could. For instance: Consider an astronomer at location A on the Earth. For the hoaxers to simulate the location of a fake star over a single night, the image on the display would need to move from A' to B' as the Earth rotates the astronomer from A to B. However, a second astronomer on the opposite side of the Earth would measure the star in a different position due to its nefarious displacement across the screen. The displacement (denoted $\theta_H$) could be measured relative to a known object (Jupiter, for instance). When the two astronomers meet up later and compare their observations, they would know something is wrong.

Appendix: Calculating the daily parallax

The current state-of-the-art parallax measurements have angular resolutions of around 1 milli-arcsecond (see the Hubble Telescope). The resolution of the filar micrometers of 1920s-1930s astrometry would have been around 10 milli-arcseconds. To calculate the parallax your screen incurs, let $R$ be the radius of the Earth, $H$ the distance to the hologram (screen), and $S$ the simulated distance to the fake star. The parallax angle the fake star creates across the diameter of the Earth is:$$ \theta_S = \frac{2R}{S} $$And the distance the image of the star must move across the screen (called $D$) is:$$ D = (S - H)\theta_S = \frac{(S - H)2R}{S} = \left(1 - \frac{H}{S}\right)2R $$We see that if $S$, the simulated distance to the star, is large, then $D\approx2R$; the image moves across the screen by exactly the width of the Earth. This is the worst-case scenario for the hoaxers. And lastly, the parallax produced by the motion of the image on the screen will be:$$ \theta_H = \frac{D}{H} = \left( \frac{1}{H} - \frac{1}{S} \right) 2R $$$$ \theta_H \approx \frac{2R}{H}$$ For Earth, $R=6.37\times10^6$ meters. If we take your experimenters' screen to be 3 light-days away, then $H=7.77\times10^{13}$ meters, and: $$ \theta_H = \frac{2 \cdot 6.37\times10^6}{7.77\times10^{13}} \cdot \frac{180^\circ \cdot 3600 \cdot 1000}{\pi} \approx 34 \text{ milli-arcseconds}$$ which is large enough to notice with 1900s technology. Note that Jupiter only moves around 7 milli-arcseconds a night.
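The appendix number can be reproduced directly from the formula above (using the Earth radius and the 3-light-day screen distance assumed in the answer):

```python
import math

R = 6.37e6                        # Earth radius, m
H = 3 * 86400 * 2.998e8           # 3 light-days, m (about 7.77e13)

theta_H = 2 * R / H               # radians, worst case (very distant fake star)
mas = theta_H * 180 / math.pi * 3600 * 1000
print(round(mas))                 # ~34 milli-arcseconds
```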
I was a little confused about the Casimir effect. My understanding is that in the two-plate experiment there is a force pushing the plates together because there are fewer virtual particles in between the plates than outside them. Why are there fewer particles in between the plates?

The objects whose number is lower in between the plates are not really particles per se but the different modes - different possible values of the wavelength or frequency, in particular - in which the particles may be created. If the distance between the parallel plates is $L$, then the electric field has to vanish at the boundary between the vacuum and the metals - at the plates - and this implies that the electromagnetic waves may be decomposed into standing waves. The wavelength of such photons in the $x$ direction (the direction in which the plates are separated) has to be of the form $2L/n$ for a positive integer $n$, so that a whole number of half-wavelengths fits between the plates. So only the frequencies $$f_n = n\times c / 2L$$ are possible ($c$ is the speed of light). That's needed, once again, for the electric field to vanish at the plates. Note that I am talking about the transverse electric field, and $-\nabla \Phi$ has to vanish in the directions $y,z$ because the potential $\Phi$ is constant throughout the metal. If there were no plates, the frequency $f$ could be any real number - which is equivalent to the $L\to\infty$ limit of the formula above. In this sense, the plates restrict the allowed frequencies - they reduce the number of possible values of the frequency. (For the sake of simplicity, I was assuming that the photons don't have any motion in the directions $y,z$ along the plates - in general, they do have this motion. This fact would be correctly and easily accounted for by switching from the wavelength to the wavenumbers $k$.)

The Casimir effect (plate attraction) is better understood as a Van der Waals force due to mutual polarization of neutral but extended quantum mechanical systems. Consider (find out about) two distant neutral atoms, for example. The case of plates is physically not too different from that. It is funny, but this attraction can be attributed to the virtual particles between the atoms, contrary to the OP's statement. Force is (minus) the gradient of the interaction potential energy, and calculating the possible modes between the plates helps calculate this potential energy.
In our first example, the limits of integration for $r$ do not depend on $\theta$. When our regions are more complicated than circles, we have to be more careful. The function $e^{-x^2}$ does not have an antiderivative that can be expressed in closed form, and it is impossible to compute $\int_{0}^1 e^{-x^2} dx$ exactly. However, the integral $\displaystyle{\int_{-\infty}^\infty e^{-x^2} dx}$ turns out to equal $\sqrt{\pi}.$ In the following video, we use double integrals and polar coordinates to explain this surprising result.
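For reference, the standard polar-coordinates computation behind that result is short enough to record here:
$$\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy = \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\, r\,dr\,d\theta = 2\pi \cdot \tfrac{1}{2} = \pi,$$
so $\displaystyle{\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}}$.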
I was doing a couple of problems for homework: Calculate $K_\mathrm{sp}$ of $\ce{AgI}$ at $55.0\ \mathrm{^\circ C}$ Calculate $K_\mathrm{b}$ of $\ce{NH3}$ at $36.0\ \mathrm{^\circ C}$ I have to use $\Delta G^\circ= -RT\ln K$ and $\Delta G= \Delta H-T\,\Delta S$ When I did this $\Delta G^\circ$ is positive ($89.59\ \mathrm{kJ/mol}$ and $28.037831\ \mathrm{kJ/mol}$ respectively), yet $K_\mathrm{sp}$ for $\ce{AgI}$ is $5.5\times10^{-15}$ and $K_\mathrm{b}$ for $\ce{NH3}$ is $1.8\times10^{-5}$, indicating that there are some products and the reactions do occur. Plus, $1.0\ \mathrm M$ $\ce{NH3}$ in solution has a $\mathrm{pH}$ of $11.6$ so it must react a little. According to the second law of thermodynamics, if $\Delta G$ is positive, the reaction is not spontaneous, right? But clearly, they, in fact, are to a certain extent. What is going on?
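A quick numerical check of the relation used in the question (with the quoted $\Delta G^\circ$ values, which are assumptions taken from the problem statement) shows the two observations are consistent: a positive $\Delta G^\circ$ just makes $K$ small, not zero.

```python
import math

R = 8.314                      # J/(mol K)

def K_from_dG(dG_kJ_per_mol, T_kelvin):
    """K = exp(-dG/RT): a positive standard dG gives K < 1, not K = 0."""
    return math.exp(-dG_kJ_per_mol * 1000 / (R * T_kelvin))

print(K_from_dG(89.59, 273.15 + 55.0))       # ~5.5e-15, the Ksp of AgI
print(K_from_dG(28.037831, 273.15 + 36.0))   # ~1.8e-5, the Kb of NH3
```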
A group of $N$ fugitives is running away from the police. They reach a very narrow bridge. Since it's dark and they have a single flashlight, they can only cross the bridge in pairs. The $i^{th}$ fugitive takes $T_i$ minutes to cross the bridge. When a pair of fugitives crosses the bridge, they must go at the speed of the slowest fugitive. In other words, the crossing time for the pair of fugitives $(i,j)$ is $\max\{T_i,T_j\}$. After a pair of fugitives crosses the bridge, one of those who already crossed must return with the flashlight so that another pair of fugitives can go too (this new pair can include the fugitive who just brought back the flashlight). The process goes on until everyone has crossed the bridge. If the police is $T_p$ minutes behind the fugitives, and assuming they can no longer safely escape unless all of them have crossed the bridge by the time the police gets to it, can they make it? If yes, how?

I was asked this question during a job interview and found it very interesting, so I decided to write a brute-force solver for this problem and discuss it here. The problem only becomes interesting if $N \ge 3$. Indeed, if $N = 2$ the two fugitives will make it as long as $\max\{T_1,T_2\} \le T_p$, meaning that if the slower fugitive needs longer than $T_p$ to cross the bridge, the police will arrive before they can safely flee. The case $N = 1$ is excluded as we would have a single fugitive crossing the bridge, but the statement of the problem says they should cross it (to the safe side) in pairs (so $N \ge 2$).

NOTE: readers who just wish to play with the brute-force solver should go directly to the end of this post. If you are interested in the discussion of the problem, read on!

Let's first compute the number of steps $S_N$ which must happen until all $N$ fugitives have crossed the bridge. A step can be either a pair of fugitives crossing the bridge or one fugitive returning. We will prove that: $$ \boxed{ S_N = 2N - 3 } $$ To see this, notice first that for $N = 2$ we have only one step (the two fugitives cross the bridge), so the equation above applies since $S_2 = 1$. Now assume the formula is valid for $N-1$ fugitives. Then, for $N$ fugitives, after the first two cross and one returns with the flashlight (in other words, after two steps), we have $N-1$ fugitives left to cross the bridge. From there on, there are $S_{N-1}$ steps until all fugitives have crossed. So: $$ S_N = S_{N-1} + 2 = \left[2(N-1) - 3\right] + 2 = 2N-3 $$ as we wanted to prove.

Now let's compute the theoretical maximum number of possibilities that one has to try when using brute force to solve the problem. First notice that if $N$ fugitives have not yet crossed the bridge, the number of possible pairs for crossing it is: $$ \left(\matrix{N \\ 2}\right) = \frac{N(N-1)}{2} $$ When a fugitive must return with the flashlight, and if $M$ fugitives have already crossed the bridge, then the number of choices is, well... $M$. We will therefore have the scheme shown below.
I have marked the side which has the flashlight in green; "crossed" refers to the number of fugitives which have already crossed the bridge while "must still cross" refers to the fugitives which must still cross it: Crossed Must still cross Possibilities $0$ $N$ $\frac{N(N-1)}{2}$ $2$ $N-2$ $2$ $1$ $N-1$ $\frac{(N-1)(N-2)}{2}$ $3$ $N-3$ $3$ $\vdots$ $\vdots$ $\vdots$ $N-5$ $5$ $\frac{5\cdot 4}{2} = 10$ $N-3$ $3$ $N-3$ $N-4$ $4$ $\frac{4\cdot 3}{2} = 6$ $N-2$ $2$ $N-2$ $N-3$ $3$ $\frac{3\cdot 2}{2} = 3$ $N-1$ $1$ $N-1$ $N-2$ $2$ $\frac{2 \cdot 1}{2} = 1$ $N$ $0$ $0$ (done) The maximum number of moves $P_N$ which a brute-force solver must simulate is then: $$ \begin{eqnarray} P_N & = & \displaystyle\frac{N(N-1)}{2}\cdot 2\cdot \frac{(N-1)(N-2)}{2} \ldots (N-2) \frac{3\cdot 2}{2} (N-1) \frac{2 \cdot 1}{2} \nonumber \\[5pt] & = & \left[2 \ldots (N-2)(N-1)\right]\left[\displaystyle\frac{N(N-1)}{2}\frac{(N-1)(N-2)}{2} \ldots \frac{3\cdot 2}{2} \frac{2 \cdot 1}{2}\right] \nonumber \\[5pt] & = & (N-1)! \displaystyle \frac{N(N-1)^2(N-2)^2 \ldots 3^2 \cdot 2^2 \cdot 1}{2^{N-1}} \nonumber \\[5pt] & = & (N-1)! \displaystyle \frac{N^2(N-1)^2(N-2)^2 \ldots 3^2 \cdot 2^2}{2^{N-1}N} \nonumber \\[5pt] & = & \displaystyle\frac{(N-1)!}{2^{N-1}N} (N!)^2 \end{eqnarray} $$ and therefore: $$ \boxed{ P_N = \displaystyle\frac{N!^3}{2^{N-1}N^2} }\label{post_eeb0a9ec9ceed243221955dad220b478_num_possibilities} $$ The table below shows the values of $P_N$ for some values of $N$: $N$ $P_N$ $2$ $1$ $3$ $6$ $4$ $108$ $5$ $4320$ $6$ $324000$ $7$ $40824000$ $8$ $8001504000$ From the numbers above and from equation \eqref{post_eeb0a9ec9ceed243221955dad220b478_num_possibilities}, it is clear that the computational work required to simulate all possibilities grows extremely fast ($N!$ grows much faster than both $2^{N-1}$ and $N$). This will impose a limitation on how many fugitives ($N$) we can have when trying to solve the problem using brute force. The solver can be made more efficient, however, if it uses recursion and only keeps on going down a given branch of possibilities as long as the total crossing time has not yet exceeded the police time on that branch (for the picky reader: recursion will not be a problem here as the number of fugitives will hardly exceed a dozen). Also, the solver can be made more efficient by not distinguishing between fugitives which take the same time to cross the bridge. Now, to the brute-force solver! On the text fields below, enter the times taken by each fugitive to cross the bridge and the time the police will take to reach it. The values must be separated by commas (the values already entered are the ones I received on my job interview). If two or more fugitives take the same amount of time to cross the bridge, the solver will not output solutions with the same numbers twice.
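Since the post's interactive solver isn't reproduced here, the following is a minimal stand-alone sketch of the same brute-force idea in Python (it is not the original implementation and, for simplicity, assumes the crossing times are distinct). It prunes any branch whose elapsed time already exceeds the police time or the best schedule found so far.

```python
from itertools import combinations

def solve(times, police_time):
    """Return (total_time, moves) of a fastest schedule, or None if even the
    best schedule exceeds police_time.  Moves alternate pair-crossings and
    single returns.  Assumes all crossing times are distinct."""
    best = [None]

    def rec(near, far, elapsed, moves):
        if elapsed > police_time:
            return                                    # the police arrive first
        if best[0] is not None and elapsed >= best[0][0]:
            return                                    # already worse than best
        if not near:                                  # everyone has crossed
            best[0] = (elapsed, list(moves))
            return
        for pair in combinations(near, 2):            # a pair crosses
            t = max(pair)
            new_near, new_far = near - set(pair), far | set(pair)
            if not new_near:                          # last pair: nobody returns
                rec(new_near, new_far, elapsed + t, moves + [pair])
                continue
            for back in new_far:                      # someone returns the light
                rec(new_near | {back}, new_far - {back},
                    elapsed + t + back, moves + [pair, back])

    rec(frozenset(times), frozenset(), 0, [])
    return best[0]

# Classic instance: crossing times 1, 2, 5, 10 with the police 17 minutes away.
print(solve([1, 2, 5, 10], 17))   # prints a 17-minute schedule (the optimum)
```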
Are you sure about the factor of $3$ in Koszul's formula? The computation below does not give such a factor, and it gives a stronger result. Let $X_i$ be any basis of the left-invariant vector fields (where the indices run from $0$ to $n$), with $[X_i,X_j] = c^k_{ij}X_k$, where, of course, $c^k_{ij}=-c^k_{ji}$ (and where here, as below, we use the summation convention that we sum over repeated indices in any given term if no summation is explicitly given). Let $\langle,\rangle$ be an $\mathrm{ad}$-invariant inner product, with $\langle X_k,X_l\rangle = g_{kl}$. Then $\mathrm{ad}$-invariance is just the equation$$g_{lk}c_{ij}^k + g_{jk}c_{il}^k = 0.$$If we assume that the $X_i$ are $\langle,\rangle$-orthonormal, then $g_{kl} = \delta_{kl}$, so $c^k_{ij} = - c^j_{ik}$, i.e., $c^i_{jk}$ is skew-symmetric in all its indices. For simplicity, let's agree to write $c^i_{jk} = c_{ijk}$, when the $X_i$ are orthonormal with respect to an $\mathrm{ad}$-invariant $\langle,\rangle$, which I'll assume from now on. Now consider the dual $1$-forms $\theta_i$ to the vector fields $X_i$. By the formula relating exterior derivative and Lie bracket, they satisfy$$\mathrm{d}\theta_i = -\tfrac12\,c_{ijk}\,\theta_j\wedge\theta_k= -\sum_{j<k} c_{ijk}\,\theta_j\wedge\theta_k\,.$$By definition, the Cartan form is just$$\omega = -\tfrac16\,c_{ijk}\,\theta_i\wedge\theta_j\wedge\theta_k= -\sum_{i<j<k} c_{ijk}\,\theta_i\wedge\theta_j\wedge\theta_k\,.$$ Let $\alpha = \theta_0$. Then, according to your definition, $$\omega_2 = -\sum_{0<j<k}\,c_{0jk}\,\theta_0\wedge\theta_j\wedge\theta_k= \theta_0\wedge\mathrm{d}\theta_0 = \alpha\wedge\mathrm{d}\alpha\,,$$which is a stronger result than Koszul claimed (although the factor of $3$ in his formula is mysterious), while$$\omega_3 = -\sum_{0<i<j<k}\,c_{ijk}\,\theta_i\wedge\theta_j\wedge\theta_k\,.$$ Remark: By the way, the definition that you give for 'the' Cartan form is a bit odd, because this form definitely depends on choice of the inner product, as you can see, since, if you double the metric, you'll double the Cartan form. This isn't so serious in the simple case because it's just a matter of a scale, but in the general semi-simple case, you run the risk of having many different 'Cartan forms'. However, maybe this isn't such a bad defect, since it's not possible, even in the simple case, to define 'the' Cartan form for all simple groups so that pullback to every simple subgroup $H$ of $G$ yields 'the' Cartan form of $H$. (Just look at the various $\mathrm{SU}(2)$ subgroups of $G$ when there is more than one such subgroup up to conjugacy.)
The Annals of Probability, Volume 30, Number 4 (2002), 1913-1932. Regularity of quasi-stationary measures for simple exclusion in dimension $d\geq 5$. Amine Asselah and Pablo A. Ferrari. Abstract: We consider the symmetric simple exclusion process on $\mathbb{Z}^d$, for $d\geq 5$, and study the regularity of the quasi-stationary measures of the dynamics conditioned on not occupying the origin. For each $\rho\in ]0,1[$, we establish uniqueness of the density of quasi-stationary measures in $L^2(d\nu_\rho)$, where $\nu_\rho$ is the stationary measure of density $\rho$. This, in turn, permits us to obtain sharp estimates for $P_{\nu_\rho}(\tau>t)$, where $\tau$ is the first time the origin is occupied. doi:10.1214/aop/1039548376, https://projecteuclid.org/euclid.aop/1039548376 (MR1944010, Zbl 1014.60089).
In a previous post, I described how one can use the singular value decomposition of a matrix to represent it in a compressed form. The trick was to discard information (singular values) from the original matrix to generate an "approximate" version of it. This post describes how that technique can be used to also compress images. The method is shown for educational purposes only: it is not adequate for professional image compression. I will use octave to illustrate the method. On Ubuntu/Debian, you can install octave by opening a terminal and running the following command (the module octave-image must also be installed): sudo apt-get install octave octave-image Now start octave. Images can be read in octave with the function imread: I = imread("dog.jpg"); Fig. 1: Sample picture with $425 \times 425$ pixels. The command above will read the file dog.jpg (shown on figure 1) and store it as a set of three matrices on the object $I$. Each matrix contains the amount of red, green and blue on each pixel of the original image respectively. Their entries are one-byte integers which can take any value in the range $[0,255]$. To be clear, $I(i,j,n)$ contains the amount of the color $n$ on the pixel $(i,j)$, where $n = 1,2,3$ for red, green and blue respectively (pixel $(1,1)$ is the one on the top left of the image). For a fixed $n$ and the sample image used, $I(:,:,n)$ is a $425 \times 425$ matrix. Before we actually do image compression using $I$, let's first work on a simpler example: the grayscale version of the image. Octave has a built-in function which converts colored images to grayscale: J = rgb2gray(I); The created object $J$ is a $425 \times 425$ matrix which contains the grayscale intensity of each pixel of the original image. The function defined below will be used to compress matrices in the same way as described in the previous post: # A = input matrix, N = number of singular values to keep function [Uc, Sigmac, Vc] = compress_matrix(A, N) [U, Sigma, V] = svd(A); Uc = U(:, 1:N); Sigmac = Sigma(1:N, 1:N); Vc = V(:, 1:N); end NOTE: each entry of $J$ is a one-byte integer, but each entry of $U_c$, $\Sigma_c$ or $V_c$ is a double (8 bytes). This means the total number of stored matrix entries for these three matrices is not something we can directly compare with $N_J := \textrm{size}(J) = 425^2$. However, for the purposes of this post this will not be an issue since our goal is to compress images and not matrices. Let's go ahead and compress $J$ by keeping only its $N = 50$ largest singular values: [Uc, Sigmac, Vc] = compress_matrix(J, 50); Jc = uint8(Uc * Sigmac * Vc'); The values of the matrix $J_c = U_c\Sigma_c V_c^T$ must first be converted from doubles back to one-byte integers to correctly represent grayscale intensities. The smaller the value of $N$, the less information from the original image we preserve, so discarding too many singular values might yield a compressed image with very poor quality. To visualize the compressed image, run: imshow(Jc) To visualize both the original and compressed images next to each other, run: figure subplot(1,2,1) xlabel("original") imshow(J) subplot(1,2,2) xlabel("compressed") imshow(Jc) Fig. 2: Grayscale versions of the original picture. From left to right and top to bottom, the pictures show: the original image and compressed versions with $N = 100$, $50$, $25$, $10$, $5$. Notice how the main aspects of the image are preserved even for small values of $N$. Unsurprisingly, however, the dog becomes less recognizable as $N$ is decreased. 
The question which must be asked at this point is: would the size of the generated image $J_c$ be significantly different from the original image $J$ if both were stored as JPEG files? The answer is not a clear "yes" since the JPEG compression mechanism is not the same as the one we used here and therefore it might not be able to benefit much from our "matrix compression" trick. To see what happens, let's store both $J$ and $J_c$ as JPEG files with maximum quality and check the resulting file sizes: imwrite(J, "dog-gray-original.jpg", 'Quality', 100) imwrite(Jc, "dog-gray-compressed.jpg", 'Quality', 100) Table 1 shows the size of the resulting JPEG files for different values of $N$. The size of the file dog-gray-original.jpg is $119.9\textrm{kB}$. Taking $N = 425$ means keeping all singular values; in this case, the sizes of both JPEG files ("original" and "compressed") are the same:

| $N$ | JPEG size (in kB) |
|---|---|
| $425$ | $119.9$ |
| $100$ | $118.2$ |
| $50$ | $108.3$ |
| $25$ | $93.4$ |
| $10$ | $71.9$ |
| $5$ | $59.9$ |

Table 1: Sizes of the generated JPEG images for different values of $N$.

As table 1 shows, the generated JPEG images become smaller as we discard more singular values. However, to generate an image with only $50\%$ of the size of the original, we must keep only $N = 5$ singular values. The resulting image is unfortunately quite blurred at that point (see figure 2).

Back to the original colored image

We can now proceed to compress the original colored image (see figure 1). Remember that $I(:,:,n)$ is a $425 \times 425$ matrix which contains the intensity of red, green and blue at each pixel of the original image for $n = 1, 2, 3$ respectively: N = 50 [U1c, Sigma1c, V1c] = compress_matrix(I(:,:,1), N); [U2c, Sigma2c, V2c] = compress_matrix(I(:,:,2), N); [U3c, Sigma3c, V3c] = compress_matrix(I(:,:,3), N); Ic = uint8( zeros(size(I,1), size(I,2), 3) ); Ic(:,:,1) = uint8(U1c * Sigma1c * V1c'); Ic(:,:,2) = uint8(U2c * Sigma2c * V2c'); Ic(:,:,3) = uint8(U3c * Sigma3c * V3c'); Fig. 3: Colored versions of the original picture. From left to right and top to bottom, the pictures show: the original image and compressed versions with $N = 100$, $50$, $25$, $10$, $5$. As we did above, we can compare the sizes of the original and compressed images by storing them both as JPEG files (we must store the original image too to make sure both are generated with the same JPEG quality): imwrite(I, "dog-color-original.jpg", 'Quality', 100) imwrite(Ic, "dog-color-compressed.jpg", 'Quality', 100) Table 2 shows the size of the resulting JPEG files for different values of $N$. The size of the file dog-color-original.jpg is $195.6\textrm{kB}$. Taking $N = 425$ means keeping all singular values; in this case, the sizes of both JPEG files ("original" and "compressed") are the same:

| $N$ | JPEG size (in kB) |
|---|---|
| $425$ | $195.6$ |
| $300$ | $209.2$ |
| $200$ | $233.4$ |
| $100$ | $253.8$ |
| $50$ | $248.3$ |
| $25$ | $225.1$ |
| $10$ | $175.8$ |
| $5$ | $153.5$ |

Table 2: Sizes of the generated JPEG images for different values of $N$.

Strangely, the size of the compressed image grows as we reduce the value of $N$ but starts shrinking quickly for $N \leq 50$. This indicates that the JPEG compression mechanism does not always work well with the compression technique presented in this post.
I've seen similar conclusions in many discussions: as the minibatch size gets larger, the convergence of SGD actually becomes harder/worse, for example in this paper and this answer. I've also heard of people using tricks like small learning rates or small batch sizes in the early stage to address this difficulty with large batch sizes. However, it seems counter-intuitive, since the average loss of a minibatch can be thought of as an approximation to the expected loss over the data distribution, $$\frac{1}{|X|}\sum_{x\in X} l(x,w)\approx E_{x\sim p_{data}}[l(x,w)]$$ and the larger the batch size, the more accurate this approximation is supposed to be. Why, in practice, is that not the case? Here are some of my (probably wrong) thoughts that try to explain it. The parameters of the model depend highly on each other; when the batch gets too large it will affect too many parameters at once, so that it's hard for the parameters to reach a stable inherent dependency? (Like the internal covariate shift problem mentioned in the batch normalization paper.) Or, when nearly all the parameters are responsible in every iteration, they will tend to learn redundant implicit patterns, which reduces the capacity of the model? (I mean, say, for digit classification problems some patterns should be responsible for dots, some for edges, but when this happens every pattern tries to be responsible for all shapes.) Or is it because when the batch size gets closer to the size of the training set, the minibatches can no longer be seen as i.i.d. draws from the data distribution, as there will be a large probability of correlated minibatches? Update: As pointed out in Benoit Sanchez's answer, one important reason is that large minibatches require more computation to complete one update, and most of the analyses use a fixed number of training epochs for comparison. However, this paper (Wilson and Martinez, 2003) shows that a larger batch size is still slightly disadvantageous even given enough training epochs. Is that generally the case?
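To make the "fixed number of epochs" point from the update concrete, here is a small self-contained sketch of my own (Python/NumPy, with an arbitrary synthetic regression problem, learning rate, and batch sizes; it is not taken from either of the cited papers): under the same epoch budget, a larger batch simply performs far fewer parameter updates, which is one mundane reason it can end up at a worse loss.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4096, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=4096)

def sgd(batch_size, epochs=5, lr=0.05):
    """Plain minibatch SGD on squared loss; returns the final mean squared error."""
    w = np.zeros(20)
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return np.mean((X @ w - y) ** 2)

for bs in (8, 64, 512, 4096):
    # same epoch budget for every batch size, but far fewer updates for large batches
    print(bs, sgd(bs))
```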
Reidemeister conjugacy for finitely generated free fundamental groups. Abstract: Let $X$ be a space with the homotopy type of a bouquet of $k$ circles, and let $f:X\to X$ be a map. In certain cases, algebraic techniques can be used to calculate $N(f)$, the Nielsen number of $f$, which is a homotopy invariant lower bound on the number of fixed points for maps homotopic to $f$. Given two fixed points of $f$, $x$ and $y$, and their corresponding group elements, $W_x$ and $W_y$, the fixed points are Nielsen equivalent if and only if there is a solution $z\in \pi _1(X)$ to the equation $z=W_y^{-1}f_{\sharp }(z)W_x.$ The Nielsen number is the number of equivalence classes that have nonzero fixed point index. A variety of methods for determining the Nielsen classes, each with their own restrictions on the map $f$, have been developed by Wagner, Kim, and (when the fundamental group is free on two generators) by Kim and Yi. In order to describe many of these methods with a common terminology, we introduce new definitions that describe the types of bounds on $|z|$ that can occur. The best directions for future research become clear when this new nomenclature is used. To illustrate the new concepts, we extend Wagner's ideas, regarding W-characteristic maps and maps with remnant, to two new classes of maps that have only partial remnant. We prove that for these classes of maps Wagner's algorithm will find almost all Nielsen equivalences, and the algorithm is extended to find all Nielsen equivalences. The proof that our algorithm does find the Nielsen number is complex even though these two classes of maps are restrictive. For our classes of maps (MRN maps and 2C3 maps), the number of possible solutions $z$ is at most 11 for MRN maps and 14 for 2C3 maps. In addition, the length of any solution is at most three for MRN maps and four for 2C3 maps. This makes a computer search reasonable. Many examples are included.
Difference between revisions of "NTS ABSTRACTSpring2019" (→April 4) Line 140: Line 140: {| style="color:black; font-size:100%" table border="2" cellpadding="10" width="700" cellspacing="20" {| style="color:black; font-size:100%" table border="2" cellpadding="10" width="700" cellspacing="20" |- |- − | bgcolor="#F0A0A0" align="center" style="font-size:125%" | ''' + | bgcolor="#F0A0A0" align="center" style="font-size:125%" | '''''' |- |- | bgcolor="#BCD2EE" align="center" |Hecke L-functions and $\ell$ torsion in class groups | bgcolor="#BCD2EE" align="center" |Hecke L-functions and $\ell$ torsion in class groups Revision as of 11:18, 1 April 2019 Return to [1] Contents Jan 23 Yunqing Tang Reductions of abelian surfaces over global function fields For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar. Jan 24 Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial-time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Jan 31 Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindel\"of Hypothesis, and work of Bettin, Chandee, and Radziwi\l\l. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities. Feb 7 Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU). Feb 14 Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant which parametrizes elliptic curves with two torsions (X_0(2)) has some interesting properties, some similar to that of the j-invariants, and some not. 
For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of the properties. This is joint work with Hongbo Yin and Peng Yu.

Feb 28: Brian Lawrence, Diophantine problems and a p-adic period map. Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.

March 7: Masoud Zargar, Sections of quadrics over the affine line. Abstract: Suppose we have a quadratic form $Q(x)$ in $d\geq 4$ variables over $\mathbb{F}_q[t]$ and $f(t)$ is a polynomial over $\mathbb{F}_q$. We consider the affine variety $X$ given by the equation $Q(x)=f(t)$ as a family of varieties over the affine line $\mathbb{A}^1_{\mathbb{F}_q}$. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over $\mathbb{F}_q((1/t))$. Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T. Sardari.

March 14: Elena Mantovan, p-adic automorphic forms, differential operators and Galois representations. A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach, where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross. This talk is based on joint work with Eishen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew.

March 28: Adebisi Agboola, Relative K-groups and rings of integers. Abstract: Suppose that $F$ is a number field and $G$ is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for $F$ and $G$ and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of $F$. It also implies the weak Malle conjecture on counting tame $G$-extensions of $F$ according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when $G$ is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting. While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture.

April 4: Wei-Lun Tsai, Hecke L-functions and $\ell$ torsion in class groups. Abstract: The canonical Hecke characters in the sense of Rohrlich form a set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic statistical flavor.
In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.
Difference between revisions of "Timeline of prime gap bounds" Line 756: Line 756: [http://math.mit.edu/~drew/admissible_2114964_33661442.txt 33,661,442]?# [m=3] ([http://terrytao.wordpress.com/2013/12/20/polymath8b-iv-enlarging-the-sieve-support-more-efficient-numerics-and-explicit-asymptotics/#comment-258451 Sutherland]) [http://math.mit.edu/~drew/admissible_2114964_33661442.txt 33,661,442]?# [m=3] ([http://terrytao.wordpress.com/2013/12/20/polymath8b-iv-enlarging-the-sieve-support-more-efficient-numerics-and-explicit-asymptotics/#comment-258451 Sutherland]) + + | A numerical precision issue was discovered in the earlier m=4 calculations | A numerical precision issue was discovered in the earlier m=4 calculations |} |} Revision as of 20:21, 22 December 2013 [math]H = H_1[/math] is a quantity such that there are infinitely many pairs of consecutive primes of distance at most [math]H[/math] apart. Would like to be as small as possible (this is a primary goal of the Polymath8 project). [math]k_0[/math] is a quantity such that every admissible [math]k_0[/math]-tuple has infinitely many translates which each contain at least two primes. Would like to be as small as possible. Improvements in [math]k_0[/math] lead to improvements in [math]H[/math]. (The relationship is roughly of the form [math]H \sim k_0 \log k_0[/math]; see the page on finding narrow admissible tuples.) More recent improvements on [math]k_0[/math] have come from solving a Selberg sieve variational problem. [math]\varpi[/math] is a technical parameter related to a specialized form of the Elliott-Halberstam conjecture. Would like to be as large as possible. Improvements in [math]\varpi[/math] lead to improvements in [math]k_0[/math], as described in the page on Dickson-Hardy-Littlewood theorems. In more recent work, the single parameter [math]\varpi[/math] is replaced by a pair [math](\varpi,\delta)[/math] (in previous work we had [math]\delta=\varpi[/math]). These estimates are obtained in turn from Type I, Type II, and Type III estimates, as described at the page on distribution of primes in smooth moduli. In this table, infinitesimal losses in [math]\delta,\varpi[/math] are ignored. Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments 10 Aug 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) 14 May 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. 21 May 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations 28 May 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] 30 May 59,470,640 (Morrison) 58,885,998? 
(Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m 31 May 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] 1 Jun 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] 2 Jun 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) 3 Jun 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. 4 Jun 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples 5 Jun 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110? (Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve 6 Jun 387,960 (Angelveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. 7 Jun 26,024? (vo8ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? 
(v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? (Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? 
(xfxie) 4,422?(Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen]) 59,451 [m=2]#? (Nielsen]) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? Sutherland Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380? [m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? [m=5] (Castryck) 5,511? [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional the Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
Is it possible to solve the following equation for $x$? $$4=2^{x^{x^{x^{...}}}}$$ I'm a bit confused; how do you even simplify this equation, factoring? First let us notice that $2 = x^{x^{x^{\cdots}}}$ (since $4=2^2$). Now we can see that $\log_x 2 = x^{x^{x^{\cdots}}} = 2 \implies \log_x 2 = 2 \implies 2 = x^2 \iff x = \pm \sqrt{2}$; however, we can run into issues if we use the negative square root, so we take $x = + \sqrt{2}$. $$4=2^{x^{x^{x^{...}}}}$$ Let ${x^{x^{x^{...}}}}=y\implies x^y=y$. Then $$4=2^y$$ $$2^2=2^y$$ $$y=2$$ $$x^y=y$$ $$x^2=2$$ $$x=\pm\sqrt2$$
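As a quick numerical sanity check (a Python sketch of my own, not part of the original answers), one can iterate the tower $t_{k+1} = x^{t_k}$ for $x=\sqrt{2}$ and watch it converge to $2$; the infinite tower converges for $x \le e^{1/e} \approx 1.4447$, so $\sqrt 2$ is within the convergence range, whereas $x=2$ is not.

```python
# Iterate the tower t_{k+1} = x**t_k starting from t_0 = x, for x = sqrt(2).
x = 2 ** 0.5
t = x
for _ in range(100):
    t = x ** t
print(t)   # ~2.0, so 2**t ~ 4, consistent with x = sqrt(2)
```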
The first thing that one learns in statistics is to use the sample mean, $\hat{X}$, as an unbiased estimate of the population mean, $\mu$; and pretty much the same would be true for the variance, $S^2$, as an estimate of $\sigma^2$ (leaving aside Bessel's correction for a second). From these working assumptions, and with the CLT, a great part of basic inferential statistics is taught using Gaussian and t distributions. This, in principle, seems very similar to the setup behind MLE calculations - i.e. estimating a population parameter based on the evidence of a sample. And in both cases the population parameters are really unknown. My question is whether MLE is sort of the overarching mathematical frequentist framework underpinning the assumptions in basic (introductory) inferential statistics courses. Although it makes sense to derive the hat matrix for OLS using MLE, and to verify the maximum of the likelihood with Hessian matrices, it is also true that through MLE one can "rediscover" the truth of some basic assumptions that are taken for granted in basic courses. For instance, we can derive the result that the MLE of the mean, $\mu$, of a Gaussian distribution given the observation of a sample of values ($x_1,..,x_N$) equals $\hat\mu=\frac{1}{N}\displaystyle\sum_{1}^N x_i$ - i.e. the sample mean; and the MLE of the variance, $\sigma^2$, is $\hat\sigma^2=\frac{1}{N}\displaystyle\sum_{1}^N (x_i-\hat\mu)^2$ - i.e. the sample variance. So in the end the layman's account would be that what is taught in introductory courses is really supported by a more sophisticated mathematical structure - Maximum Likelihood Estimation, elaborated by R.A. Fisher, which has its main counterpart in Bayesian statistics. MLE bypasses the need for a prior probability $p(\theta)$ of the population parameter (a prior with no support in the sample), which is needed in Bayes' calculation of the inverse probability or posterior ($p(\theta|{x_1,...x_n})$) with the equation: $p(\theta|{x_1,...x_n}) = \Large \frac{p({x_1,...x_n}|\theta)\,p(\theta)}{p({x_1,...x_n})}$. And it does so by substituting $\mathscr{L}(\theta|{x_1,...x_n})$ (the likelihood, i.e. the joint probability of the sample viewed as a function of $\theta$) for $p(\theta|{x_1,...x_n})$ and maximizing its value. So there are two general theories, one of them (MLE) barely mentioned in introductory courses, but mathematically underpinning what is taught in school.
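As a small sanity check of the claim that the Gaussian MLEs are the sample mean and the $1/N$ variance, here is a short numerical sketch (Python with NumPy/SciPy; the simulated data and starting point are arbitrary choices of mine, and this only illustrates the textbook fact, not anything specific to the question):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=500)

def nll(params):
    """Negative Gaussian log-likelihood, parametrized by (mu, log_sigma)."""
    mu, log_sigma = params
    sigma2 = np.exp(2 * log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (x - mu) ** 2 / sigma2)

res = minimize(nll, x0=[0.0, 0.0])
mu_hat, sigma2_hat = res.x[0], np.exp(2 * res.x[1])

print(mu_hat, x.mean())                          # MLE of mu matches the sample mean
print(sigma2_hat, ((x - x.mean()) ** 2).mean())  # MLE of sigma^2 matches the 1/N variance
```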
There are two aspects in this problem depending on the exact definition of the task, and, as we shall see, both are completely solved by MMA without any additional facility. The aspects are a) calculate the interpolation of the definite integral $f=\int_0^t \sqrt{1+x^3} \, dx$ b) calculate the interpolation of the antiderivative $fad=\int\sqrt{1+x^3} \, dx$ Part a) interpolation of Jim's definite integral There are no problems in interpolating your definite integral if you do it in two steps. Step 1: Calculate the definite integral (Mathematica does this exactly) f = Integrate[Sqrt[1 + x^3], {x, 0, t}, Assumptions -> t >= -1] (* -> (t^3)^(1/3) Hypergeometric2F1[-(1/2), 1/3, 4/3, -t^3] *) Step 2: Do the interpolation fi = FunctionInterpolation[f, {t, 0, 10}] (* -> InterpolatingFunction[{{0.`,10.`}},"<>"] *) And finally check the equality of f and fi: Plot[{f, fi[t]}, {t, 0, 10}] (* skip the picture *) For completeness: $Version (* -> "8.0 for Microsoft Windows (64-bit) (October 7, 2011)" *) This procedure works fine also with many other functions, e.g. integer powers other than 3 under the sqrt. Part b) interpolation of the antiderivative The antiderivative is given by the indefinite integral : fad = Integrate[Sqrt[1 + x^3], x] $\frac{2 \left(x+x^4+(-1)^{1/6} 3^{3/4} \sqrt{-(-1)^{1/6} \left((-1)^{2/3}+x\right)} \sqrt{1+(-1)^{1/3} x+(-1)^{2/3} x^2} \text{EllipticF}\left[\text{ArcSin}\left[\frac{\sqrt{-(-1)^{5/6} (1+x)}}{3^{1/4}}\right],(-1)^{1/3}\right]\right)}{5 \sqrt{1+x^3}}$ This quantity, besides having a small imaginary part fad /. x -> 1.25 (* -> 2.34276 - 1.17815*10^-16 I *) has a dicontinuity in both Re and Im: Plot[{Re[fad], Im[fad]}, {x, 0, 3}, PlotRange -> {-2, 5}, PlotLabel -> "Re and Im of the antiderivative fad of \!\(\*SqrtBox[\(1 + \ x^3\)]\)", AxesLabel -> {"x", "Re/Im[fad]"}] Discontinuities of antiderivatives are quite common, and they have been discussed in this community several times (cf. for example Mismatch between numerical and analytic evaluation of an integral) This could potentially cause a problem for FunctionInterpolate over an interval including the jump. But it is not the case here ! Rather, the interpolation returns the correct complex valued function, with just announcing - as it should do - an accuracy problem connected with the jump: fadi = FunctionInterpolation[fad, {x, 0, 5}] FunctionInterpolation::ncvb: FunctionInterpolation failed to meet the prescribed accuracy and precision goals after 6 recursive bisections near x = {1.99219}. Continuing to refine elsewhere. >> (* InterpolatingFunction[{{0.`,5.`}},"<>"] *) Visualizing both Re and Im of the interpolation Plot[{Re[fadi[t]], Im[fadi[t]]}, {t, 0, 3}, PlotRange -> {-2, 5}, PlotLabel -> "Re and Im of the interpolation of the \nantiderivative fadi of \\!\(\*SqrtBox[\(1 + x^3\)]\)", AxesLabel -> {"x", "Re/Im[fadi]"}] It shows agreement with the original function except, as expected, in the vicinity of the jumps. Summarizing part b) the interpolation of the true antiderivative is done perfectly well by Mathematica with only a justified accuracy error message due to the jump. Conclusion I have checked the complete code with a fresh kernel and can state that no problems are encountered in version 8 for both interpretations of the problem. Best regards,Wolfgang
Various weak theories of arithmetic have been partially motivated by a concern with numbers (or functions/proofs) that are feasible. This concern is sometimes connected to an interest in strictly finitistic approaches to arithmetic. While the precise account of feasibility varies across these systems, the general idea is that the theory should prove that 0 is feasible if $n$ is feasible then so is its successor $S(n)$ the feasible numbers are in some sense 'bounded' There are various ways of making this last statement precise: e.g. we can see it as a statement of the form $\exists xy \neg\exists z (z= \exp(x,y))$ stating that some fast-growing function is not total (or, as in the case of $I\Delta_{0}$, it may suffice to know that exponentiation -- and, by Parikh's Theorem, any function with superpolynomial growth -- is not provably total, so that the above is at least consistent with $I\Delta_{0}$). A different approach is to give some explicit upper bound on feasibility, and require the theory to prove $\forall x (\log_{2}\log_{2}x<10)$, as in Sazonov's $\texttt{FEAS}$ system. My question: is there any model-theoretic, or more broadly 'semantic', account of feasible numbers? Preferably, an account that would be (1) helpful in providing a clear mathematical picture of the structure of feasible numbers and/or (2) acceptable by the strict finitist's own lights? A word on the two desiderata: in the above systems, characterisations of feasibility are rather implicit, as well as very sensitive to the underlying language and proof systems. Moreover, models of those theories (when they exist) seem to fail both (1) and (2). For instance, models of $I\Delta_{0}$ where $\exp$ is not total (say, obtained via cuts of nonstandard models of PA) are not, presumably, objects to be taken seriously by the strict finitist as `concrete' objects, be it only due to their size. In addition, they hardly seem to be good models of 'intuitively feasible' numbers: their domains are basically given by (possibly nonstandard) integers bounded above by a power of some infinite nonstandard integer. The link to feasibility, or counting, or smallness, is very unclear, and it does not help building a mental picture consistent with the strict finitist's motivations. Sazonov's theory is downright inconsistent in the classical sense (i.e. if we allow unbounded proof length), so it admits no (classical) models. So: is there a serious mathematical account of 'feasibility' of this kind? Some additional remarks: Many logicians, like Gaifman, suggest a connection with vagueness, as the feasible numbers can be seen as forming a vagueset. But do we really need to resort to vagueness to provide semantics for feasible numbers? One possibility is to attempt an account in modal terms, where we imagine a Kripke frame where states are finite sets of integers representing 'the numbers we've counted to so far', and accessibility relations represent something like reaching further numbers via applying 'feasible' functions to the current (finite) domain. Of course, the Kripke frame would have infinite domain, but one could at least argue that it models feasibility in a way that gets things right 'locally', in providing an intuitive mental picture of the process of constructing numbers. But it is difficult to see how any construction of this sort could account in any way for the role played by particular notation systems or induction axioms (bounded induction). 
I understand that most strict finitists are not concerned with giving a semantic account of arithmetic; some (like Nelson) are explicit formalists, and regard `semantics' as an unnecessary, or perhaps even misleading, distraction. At the very least, the idea seems to be that feasibility depends on the notational system used. This makes good sense from a constructivist perspective; feasible numbers are not a finished collection that our formal theory describes; instead, the theory describes the rules that we can employ to 'construct' numbers. Nonetheless, there may be some intrinsic interest in the question of whether an elegant 'semantic' mathematical account of feasible numbers exists, or can be provided at all.
Consider a Dirichlet character, $\chi(n)$, and the partial sum: $$S(\chi,x)=\bigg |\sum_{n=1}^{x} \chi(n)\bigg|$$ There is a lot of work on bounding this sum when $\chi$ is a primitive character, but what can we say if $\chi$ is not primitive? More specifically, fix a primitive Dirichlet character $\chi$ and define from it a sequence of non-primitive characters $\chi_N$ by: $$\forall n,\ \chi_1(n)=\chi(n)$$ $$\forall n,\ \chi_N(n)=\chi_{N-1}(n)\cdot\chi^{P_N} (n)$$ where $\chi^{P_N}(n)$ is the principal Dirichlet character associated to the $N$-th prime number (not counting 2). (So $\chi^{P_N}(n)$ is the principal character simply defined by $\chi^{P_N}(n)=0$ if $n$ is a multiple of the $N$-th prime number $P_N$, and $1$ otherwise.) This sequence of characters is built from the original character by "removing one prime at each step". Question: how does $\max_x S(\chi_N,x)$ evolve for these characters? I would like to show that this sequence contains infinitely many characters whose $\max_x S(\chi_N,x)$ is below a fixed constant. Is that realistic? Any reference on bounding partial sums of imprimitive characters?
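This is easy to experiment with numerically. The sketch below (Python, using SymPy only for a prime iterator; the modulus 19, the quadratic character, and the cutoff 10000 are my own arbitrary choices for illustration, not part of the question) builds successively "prime-stripped" characters and prints the maximum of the partial sums up to the cutoff:

```python
from sympy import primerange

q = 19  # modulus of the primitive character chi (illustrative choice)

def chi(n):
    """Quadratic (Legendre) character mod q, via Euler's criterion."""
    if n % q == 0:
        return 0
    return 1 if pow(n, (q - 1) // 2, q) == 1 else -1

def max_partial_sum(char, limit):
    """max over x <= limit of |sum_{n <= x} char(n)|."""
    s, best = 0, 0
    for n in range(1, limit + 1):
        s += char(n)
        best = max(best, abs(s))
    return best

removed = []   # primes whose multiples get zeroed out at each step
print("no primes removed:", max_partial_sum(chi, 10_000))
for p in primerange(3, 30):
    removed.append(p)
    def chi_N(n, removed=tuple(removed)):
        return 0 if any(n % r == 0 for r in removed) else chi(n)
    print("removed", removed, ":", max_partial_sum(chi_N, 10_000))
```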
Let $A$ and $B$ ($A\subset B$) be subsets of a finite abelian group $G$. (For the sake of argument, you can take $G$ to be $\mathbb{Z}/p\mathbb{Z}$ for large $p$, say.) Write $1_S$ for the characteristic function of any subset $S\subset G$. Put the counting measure on $G$ and $\widehat{G}$, so that, for instance, $|1_S| = |S|$, where $|S|$ is the number of elements of $S$. Normalize the Fourier transform on $G$ so that it is an isometry: $|\widehat{f}|_2 = |f|_2$. What upper bounds can be given for the size of $|\widehat{1_A} \widehat{1_B}|_1$? To be precise: Cauchy-Schwarz gives $|\widehat{1_A} \widehat{1_B}|_1\leq |\widehat{1_A}|_2 |\widehat{1_B}|_2 = |1_A|_2 |1_B|_2 = \sqrt{|A| |B|}$. On the other hand, $\left|\sum_x \widehat{1_A}(x) \widehat{1_B}(x)\right| = \left|\sum_g 1_A(g) 1_B(g)\right| = \left|\sum_g 1_A(g)\right| = |A|$, suggesting there might be some room for improvement. So, the question could be made more pointed, as follows: if $|\widehat{1_A} \widehat{1_B}|_1$ is closer to $\sqrt{|A| |B|}$ than to $|A|$, what follows about $|A|$ and $|B|$? What are necessary and sufficient conditions for $|\widehat{1_A} \widehat{1_B}|_1$ to be bounded by a constant times $|A|$?
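For experimentation, the quantity in question is easy to compute with an FFT when $G=\mathbb{Z}/p\mathbb{Z}$. The following sketch (Python/NumPy; the prime, the set sizes, and the random nested pair $A\subset B$ are arbitrary illustrative choices of mine) evaluates $|\widehat{1_A} \widehat{1_B}|_1$ with the isometric normalization and prints it next to $|A|$ and $\sqrt{|A||B|}$:

```python
import numpy as np

p = 1009                                   # a prime, so G = Z/pZ
rng = np.random.default_rng(0)
B = rng.choice(p, size=200, replace=False)
A = B[:50]                                 # a nested pair A ⊂ B

def fhat(S):
    """Fourier transform of the indicator of S, normalized to be an isometry."""
    f = np.zeros(p)
    f[S] = 1.0
    return np.fft.fft(f) / np.sqrt(p)

lhs = np.sum(np.abs(fhat(A) * fhat(B)))
print(lhs, len(A), np.sqrt(len(A) * len(B)))   # compare with |A| and sqrt(|A||B|)
```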
Let $f:S\to B$ be an elliptic fibration from an integral surface $S$ to an integral curve $B$. Here I use the following definitions: a surface (resp. curve) is a $2$-dimensional (resp. $1$-dimensional) proper $k$-scheme over a fixed field $k$. A fibration has two properties: 1. $O_B = f_*O_S$; 2. all fibers of $f$ are geometrically connected. Furthermore, a fibration is elliptic if the generic fiber $S_{\eta}=f^{-1}(\eta)$ is an elliptic curve (over $k(\eta)$). Denote by $i_S: S_{\eta} \to S$ the canonical immersion. Here I'm not 100% sure, but I guess that for the structure sheaf one has $O_{S_{\eta}}= O_S \otimes_k k(\eta)$. Now the QUESTION: since $S_{\eta}$ is an elliptic curve and therefore smooth, the restriction of the Kähler differentials $\Omega^2_{S/B} \vert_{S_{\eta}}$ is invertible. My question is how to see that there exists an open neighbourhood $U \subset S$ of $S_{\eta}$ such that the restriction $\Omega^2_{S/B} \vert_U$ is still invertible? Let $Y$ be a closed Kähler manifold with $c_1(Y)=0$ in $H^2(Y,\mathbb{R})$. Let $\omega$ be a Ricci-flat Kähler form on $\mathbb{C}^m \times Y$ such that $$A^{-1} (\omega_{\mathbb{C}^m} + \omega_Y) \leq \omega \leq A (\omega_{\mathbb{C}^m} + \omega_Y),$$ for some constant $A \geq 1$, where $\omega_Y$ is a Kähler form on $Y$ and $\omega_{\mathbb{C}^m}$ is the Euclidean form on $\mathbb{C}^m$. I want to show that there is a unique choice of $\omega_Y$ such that $\text{Ric}(\omega_Y)=0$ and that there is a smooth function $f$ such that $$\omega = \omega_{\mathbb{C}^m} + \omega_Y + d f.$$ I would like to find a proof for Remark 0.5 in the following article of Claire Voisin: https://webusers.imj-prg.fr/~claire.voisin/Articlesweb/fanosymp.pdf She writes in this remark the following: Remark 0.5. A compact Kähler manifold $X$ which is rationally connected satisfies $H^2(X, {\cal O}_X) = 0$, hence is projective. I understand that a Kähler manifold with $H^2(X, {\cal O}_X) = 0$ is projective. However, I don't understand why a Kähler manifold that is rationally connected has $H^2(X, {\cal O}_X) = 0$. Indeed, the definition of rational connectedness that Voisin is using is the following: Definition 0.3. A compact Kähler manifold $X$ is rationally connected if for any two points $x, y\in X$, there exists a (maybe singular) rational curve $C\subset X$ with the property that $x\in C$, $y\in C$. So my question is the following: how does one prove this remark starting from Definition 0.3? For a compact Hermitian manifold, the metric being Kähler is equivalent to the identity $$\Delta_{\partial} = \Delta_{\overline{\partial}},$$ where $\Delta_{\partial}$ and $\Delta_{\overline{\partial}}$ are the $\partial$ and $\overline{\partial}$-Laplacians. This is discussed in this question. Is there an analogous result for the $\mathrm{d}$-Laplacian $\Delta_{\mathrm{d}}$? That is, does the identity $$\Delta_{\partial} = \Delta_{\mathrm{d}}$$ or the identity $$\Delta_{\overline{\partial}} = \Delta_{\mathrm{d}}$$ imply that a Hermitian metric is Kähler?
For any Kähler manifold $(M,h)$, with Lefschetz operators $L$ and $\Lambda$, and counting operator $H$, we have the following well-known Kähler-Hodge identities: \begin{align*} [\partial,L] = 0, && [\overline{\partial},L] = 0, && [\partial^*,\Lambda] = 0, && [\overline{\partial}^*, \Lambda] = 0, \\ [L,\partial^*] = i\overline{\partial}, && [L,\overline{\partial}^*] = - i\partial, && [\Lambda,\partial] = i\overline{\partial}^*, && [\Lambda,\overline{\partial}] = - i\partial^*. \end{align*} In the physics literature this is often referred to as a "supersymmetric algebra". Does there exist a more mathematical understanding of this object, perhaps as a Lie superalgebra? Let $(M, J, h)$ be an almost Hermitian manifold, where $J$ is an almost complex structure and $h$ is a Hermitian metric. Let $D$ be the unique $h$-connection compatible with $J$, i.e. $Dh=0=DJ$. Let $\tau$ be the torsion of $D$. If we decompose our connection $D$ into its $(1,0)$ and $(0,1)$ parts, $D=D'+D''$, then the torsion $\tau$ of $D$ is also decomposed as $\tau=\tau'+\tau''$. It is not hard to see that $\tau'=N$, the Nijenhuis tensor of $J$, which is exactly the obstruction to an integrable complex structure. What about the other part $\tau''$? Is there any geometric meaning? So far, I find that it is an obstruction to being almost Kähler, i.e. to $d\omega=0$, where $\omega := \Im h$. I mean, the following holds: $d\omega=0 \Rightarrow \tau''=0$. My question is about the converse. Is it true that $\tau''=0 \Rightarrow d\omega=0$? By the way, if $M$ is itself a Hermitian manifold, it is well known that $\tau''=\tau$, which is the exact obstruction to being Kähler. I am reading D. Joyce's book "Compact manifolds with special holonomy" and I have some problems understanding a computation on page 111, the first line in the proof of Proposition 5.4.6. More specifically, the following: Let $(M,\omega, J)$ be a compact Kähler manifold with Kähler form $\omega$ and complex structure $J$. In holomorphic coordinates $\omega$ is of the form $\omega = ig_{\alpha \overline{\beta}}dz^{\alpha} \wedge d\overline{z}^{\beta}$. Associated to the above data we have the Riemannian metric $g$, which may be written in holomorphic coordinates as $g=g_{\alpha \overline{\beta}}(dz^{\alpha}\otimes d\overline{z}^{\beta} + d\overline{z}^{\beta} \otimes dz^{\alpha})$. Associated to $g$, let $\nabla$ be the Levi-Civita connection, which also defines a covariant derivative on tensors. For a function $\phi$ on $M$ one may compute $\nabla^{k}\phi$. For example $\nabla \phi = (\nabla_{\lambda}\phi)dz^{\lambda} + (\nabla_{\overline{\lambda}}\phi)d\overline{z}^{\lambda}=(\partial_{\lambda}\phi)dz^{\lambda} + (\partial_{\overline{\lambda}}\phi)d\overline{z}^{\lambda}$ (applied to functions it is just the usual $d$) and $\nabla_{\alpha \beta}\phi = \partial_{\alpha \beta} \phi - \partial_{\gamma}\phi \Gamma^{\gamma}_{\alpha \beta}$, $\nabla_{\alpha \overline{\beta}}\phi = \partial_{\alpha \overline{\beta}}\phi$, etc. In the first sentence of the proof of Proposition 5.4.6, Joyce considers the equation $\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi) = e^{f}\det(g_{\alpha \overline{\beta}})$, where $f:M\rightarrow \mathbb{R}$ is a smooth function on $M$.
After taking the $ \log$ of this equation he obtains $ \log[\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi)] – \log[\det(g_{\alpha \overline{\beta}} )] = f$ which is obviously a globaly defined equality of functions on $ M$ . Now he takes the covariant derivative $ \nabla$ of this equation and obtains $ \nabla_{\overline{\lambda}}f = g’^{\mu \overline{\nu}}\nabla_{\overline{\lambda} \mu \overline{\nu}}\phi$ where $ g’^{\mu \overline{\nu}}$ is the inverse of the metric $ g’_{\alpha \overline{\beta}} = g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi$ (which he assumes to exists). This last step (when taking the covariant derivative) I do not understant. In my computation I have the following: When taking the covariant derivative $ \nabla_{\overline{\lambda}}$ of the equation $ \log[\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi)] – \log[\det(g_{\alpha \overline{\beta}} )] = f$ and using the formula for the derivative of the determinant I obtain $ g’^{\alpha \overline{\beta}}(\partial_{\overline{\lambda}}g_{\alpha \overline{\beta}} + \partial_{\overline{\lambda} \alpha \overline{\beta}}\phi) – g^{\alpha \overline{\beta}}(\partial_{\overline{\lambda}}g_{\alpha \overline{\beta}}) = \partial_{\overline{\lambda}}f = \nabla_{\overline{\lambda}}f$ . This is obviously different to his formula. Moreover the term $ \nabla_{\overline{\lambda}\mu \overline{\nu}}\phi$ contains not only derivatives of order $ 3$ of $ \phi$ but it also contains a term with second derivatives of $ \phi$ . My question is: Where is my mistake? Have I understood something wrong? Say $ M$ is a closed Kähler manifold and $ (V, \nabla)$ is a (say) constant Hermitian bundle on $ V$ with (say) trivial flat connection. Now $ M$ Kähler gives several distinguished classes of closed one-forms in $ \Omega^1(M, \mathrm{End}(V))$ (harmonic, holomorphic, and variations on these). I’m curious whether there is a special class of one-forms for which the connection $ \nabla + \hbar\eta$ (which is flat to second order) can be canonically deformed to a flat connection $ \nabla + \hbar\eta + O(\hbar^2)$ . Is there some condition that guarantees this? Is there a context where the deformation theory becomes easily tractable? (I am assuming that $ M$ is Kähler here because I know Hodge theory makes deformation theory works better on Kähler manifolds – if there is an answer in the more general case where $ M$ is Riemannian and $ \eta$ is harmonic, I’m also curious about that.) Inspired by this question (Isometric embedding of a real-analytic Riemannian manifold in a compact Kähler manifold) I ask the following: Suppose $ X$ is a real analytic Riemannian manifold with a totally real embedding to $ X^\mathbb C$ which is Kähler and the Kähler metric restricts to the given Riemannian metric on $ X$ . Moreover, $ X^\mathbb C$ is equipped with an antiholomorphic involution whose fixed point set is $ X$ . Does these properties determine $ X^\mathbb C$ uniquely as a germ of manifolds? I believe that’s true but I haven’t found the precise statement of this fact in Lempert, Szöke or Guillemin, Stenzel. Now let $ X$ be compact. In the answer to the cited question D.Panov said some necessary words about how to prove that $ X^\mathbb C$ can be chosen to be compact. But can $ X^\mathbb C$ be chosen is some canonical way or perhaps it is even unique? 
In fact I'm even more interested in the case when $X$ is a Kähler manifold with a totally real embedding into a hyperkähler $X^\mathbb C$ such that the restriction of the associated Kähler structure to $X$ is the given Kähler structure. Moreover, we are given an $S^1$ action which rotates the complex structures and whose fixed point set is $X$. Feix and Kaledin proved that these properties determine $X^\mathbb C$ uniquely as a germ. If $X$ is complete, can $X^\mathbb C$ be chosen to be complete? Canonically? Uniquely? As far as I understand, the last questions are far from being solved.

I have been interested in the following situation of late. Let $X$ and $Y$ be compact Kähler manifolds with $\dim_{\mathbb{C}}(Y) < \dim_{\mathbb{C}}(X)$ and let $f : X \to Y$ be a surjective holomorphic map with connected fibres. Let $S = \{ s_1, …, s_k \}$ denote the set of critical values of $f$, which is a subvariety of $Y$. I cannot find a detailed account of how bad the singular behaviour of the fibres of $f$ can be. For example, do the fibres contain $(-1)$-curves (i.e., curves with self-intersection number $-1$) or $(-2)$-curves? If anyone can provide references where I can get a better understanding of this, that would be tremendously appreciated.
1. Search for the Production of a Long-Lived Neutral Particle Decaying within the ATLAS Hadronic Calorimeter in Association with a Z Boson from pp Collisions at s=13 TeV. Physical Review Letters, ISSN 0031-9007, 04/2019, Volume 122, Issue 15, p. 151801. This Letter presents a search for the production of a long-lived neutral particle (Zd) decaying within the ATLAS hadronic calorimeter, in association with a... Confidence intervals | Standard model (particle physics) | Large Hadron Collider | Collisions | Particle decay | Measuring instruments | Luminosity | Higgs bosons | Quarks | Neutral particles | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article

2. Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two muons and two b quarks in pp collisions at 13 TeV. Physics Letters B, ISSN 0370-2693, 08/2019, Volume 795, Issue C, pp. 398-423. A search for exotic decays of the Higgs boson to a pair of light pseudoscalar particles is performed under the hypothesis that one of the pseudoscalars decays... CMS | BSM Higgs physics | Physics | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | ROOT-S=13 TEV | PHYSICS, PARTICLES & FIELDS | Analysis | Quarks | Collisions (Nuclear physics) | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article

ISSN 0370-2693, 2016. neutralino: decay | gluon gluon: fusion | experimental results | decay: exotic | photon: final state | Higgs particle: mass | Higgs particle: rare decay | CMS | Z0: associated production | CERN LHC Coll | neutralino: lifetime | branching ratio: upper limit | 8000 GeV-cms | neutralino: mass | gravitation | Higgs particle: invisible decay | Higgs particle: branching ratio | Higgs particle: hadroproduction | gravitino. Journal Article

4. Search for nonresonant Higgs boson pair production in the b(b)over-bar b(b)over-bar final state at root s=13 TeV. Journal of High Energy Physics, ISSN 1029-8479, 04/2019, Issue 4. Journal Article

5. Search for lepton flavour violating decays of the Higgs boson to $\mu\tau$ and e$\tau$ in proton-proton collisions at $\sqrt{s}=$ 13 TeV. ISSN 1029-8479, 2018. coupling: Yukawa | experimental results | CMS | Higgs particle: decay modes | CERN LHC Coll | Higgs particle: branching ratio: upper limit | Higgs particle: leptonic decay | lepton: flavor: violation | Higgs particle --> electron tau | 4/3 | Higgs particle: hadroproduction | p p: colliding beams | p p: scattering | 13000 GeV-cms | coupling constant: upper limit | Higgs particle --> muon tau. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 01/2019, Volume 122, Issue 2. Journal Article

Computer Physics Communications, ISSN 0010-4655, 05/2019, Volume 238, pp. 214-231. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23, p. 1 Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H ->... DARK-MATTER | CANDIDATES | PARTICLE | PHYSICS, MULTIDISCIPLINARY | LHC | MASS | Confidence intervals | Standard model (particle physics) | Statistical analysis | Dark matter | Searching | Particle decay | Higgs bosons | Quarks | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article

9. Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two b quarks and two $\tau$ leptons in proton-proton collisions at $\sqrt{s}=$ 13 TeV. ISSN 1873-2445, 2018. supersymmetry | boson: hadronic decay | experimental results | Higgs particle: doublet | decay: exotic | Higgs particle: rare decay | boson: pair production | CMS | bottom: 2 | bottom: pair production | CERN LHC Coll | mass: pseudoscalar | tau: pair production | branching ratio: upper limit | singlet: scalar | Higgs particle: branching ratio | Higgs particle: hadroproduction | minimal supersymmetric standard model | p p: colliding beams | p p: scattering | new particle | boson: leptonic decay | boson: pseudoscalar particle | 13000 GeV-cms. Journal Article

10. Search for rare decays of Z and Higgs bosons to J$/\psi$ and a photon in proton-proton collisions at $\sqrt{s} =$ 13 TeV. 10/2018, Eur. Phys. J. C 79 (2019) 94. A search is presented for decays of Z and Higgs bosons to a J$/\psi$ meson and a photon, with the subsequent decay of the J$/\psi$... Physics - High Energy Physics - Experiment. Journal Article

11. Searches for exclusive Higgs and Z boson decays into J/ψγ, ψ(2S)γ, and Υ(nS)γ at s=13 TeV with the ATLAS detector. Physics Letters. Section B, ISSN 0370-2693, 11/2018, Volume 786, Issue C. Journal Article

Physical Review Letters, 01/2019, Volume 122, Issue 1, p. 011801. We report on the first Belle search for a light CP-odd Higgs boson, A^{0}, that decays into low mass dark matter, χ, in final states with a single photon and... Journal Article

13. Search for Higgs boson decays to beyond-the-Standard-Model light bosons in four-lepton events with the ATLAS detector at √s=13 TeV. Journal of High Energy Physics, ISSN 1126-6708, 06/2018, Volume 2018, Issue 6, pp. 1-51. A search is conducted for a beyond-the-Standard-Model boson using events where a Higgs boson with mass 125 GeV decays to four leptons (ℓ = e or μ). This decay... Beyond Standard Model | Hadron-Hadron scattering (experiments) | Confidence intervals | Large Hadron Collider | Leptons | Searching | Decay | Luminosity | Higgs bosons | Quarks | Bosons | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences. Journal Article

14. Search for Higgs boson decays to beyond-the-Standard-Model light bosons in four-lepton events with the ATLAS detector at $\sqrt{s}=13$ TeV. Journal of High Energy Physics (Online), ISSN 1029-8479, 02/2018, Volume 2018, Issue 6. Journal Article

15. Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state of two muons and two $\tau$ leptons in proton-proton collisions at $\sqrt{s}=$ 13 TeV. 05/2018, JHEP 11 (2018) 018. A search for exotic Higgs boson decays to light pseudoscalars in the final state of two muons and two $\tau$ leptons is performed using... Physics - High Energy Physics - Experiment. Journal Article

16. Measurements of the Higgs boson production and decay rates and constraints on its couplings from a combined ATLAS and CMS analysis of the LHC pp collision data at $\sqrt{s}=7$ and 8 TeV. ISSN 1029-8479, 2016. decay rate: measured | experimental results | Higgs particle: mass | CMS | Higgs particle: coupling | Higgs particle: decay modes | top: associated production | CERN LHC Coll | vector boson: associated production | CERN Lab | ATLAS | channel cross section: measured | 7000 GeV-cms | 8000 GeV-cms | gluon: fusion | Higgs particle: branching ratio | Higgs particle: hadroproduction | p p: colliding beams | p p: scattering | Higgs particle: decay rate | vector boson: fusion. Journal Article

17. Search for lepton flavour violating decays of the Higgs boson to e tau and e mu in proton-proton collisions at sqrt(s)=8 TeV. ISSN 1873-2445, 2016. coupling: Yukawa | experimental results | CMS | CERN LHC Coll | Higgs particle: branching ratio: upper limit | Higgs particle: leptonic decay | 8000 GeV-cms | lepton: flavor: violation | Higgs particle --> electron tau | Higgs particle --> electron muon | p p: scattering | p p: colliding beams | Higgs particle: hadroproduction. Journal Article

18. Search for dark matter produced with an energetic jet or a hadronically decaying W or Z boson at $\sqrt{s}=13$ TeV. ISSN 1029-8479, 2017. experimental results | dark matter: direct detection | CMS | W: hadronic decay | CERN LHC Coll | Higgs particle: branching ratio: upper limit | coupling: axial-vector | GLAST | Z0: hadronic decay | coupling: vector | dark matter: mass: lower limit | jet: multiplicity | coupling: scalar | Higgs particle: invisible decay | background | coupling: pseudoscalar | p p: colliding beams | p p: scattering | mediation | 13000 GeV-cms | transverse momentum: missing-energy | final state: ((n)jet) | dark matter: pair production. Journal Article
Let $\iota: \mathcal{A}\subset \mathcal{C}$ be a replete, full, reflective subcategory with $F: \mathcal{C}\to \mathcal{A}$ left adjoint to $\iota$, and $\eta_X: X\to F(X)$ the canonical unit. The canonical counit $\epsilon$ is an isomorphism (because $\iota$ is full and faithful) and $F(\eta_X)$ is an isomorphism (triangle identity). Let $W:= \{ f \in |\mathcal{C}|_1 : F(f)\ \text{is an isomorphism} \}$ (in your post you can consider the saturation...). Let $\mathcal{B}\subset \mathcal{C}$ be the full subcategory of $W$-replete objects defined as in your post. Claim: $\mathcal{A}= \mathcal{B}$. Let $A\in \mathcal{A}$ and let $f: X\to Y$ be such that $F(f)$ is an isomorphism; considering the square formed by $f$, $F(f)$, $\eta_X$, $\eta_Y$, it follows from the universal property that $A\in \mathcal{B}$. Conversely, let $B\in \mathcal{B}$; we have to show that $\eta_B$ is an isomorphism. Since $\eta_B\in W$, there is a morphism $g: F(B)\to B$ with $1_B= g\circ \eta_B$, and from $(\eta_B\circ g)\circ \eta_B=1\circ \eta_B$ it follows (by the universal property) that $\eta_B\circ g=1$.

Edit: Considering the case $\mathcal{A}= \mathcal{C}[\mathcal{W}^{-1}]$ for some class of morphisms $\mathcal{W}$, suppose that we have a full embedding $\iota: \mathcal{C}[\mathcal{W}^{-1}]\to \mathcal{C}$ right adjoint to the natural functor $F: \mathcal{C}\to \mathcal{C}[\mathcal{W}^{-1}]$, and let $W$ be defined as above; then $W$ is the saturation of $\mathcal{W}$ and $\mathcal{C}[\mathcal{W}^{-1}]\cong \mathcal{C}[W^{-1}]$. Then the $W$-local objects are the $\mathcal{W}$-local objects, and this is a replete class.

Edit: About the second question, after Zhen Lin's counterexample, I can say only this. From [P], T.13.11, p. 98, given an adjunction $\langle \iota, F\rangle: \mathcal{C}\to\mathcal{A}$ with $\iota$ full and faithful, we have that $\mathcal{A}$ is equivalent to $\mathcal{C}[W^{-1}]$, where $W$ is the class of morphisms $f$ such that $F(f)$ is an isomorphism, and $W$ is also the class of morphisms $w$ such that $\mathcal{C}(w, A)$ is an isomorphism (bijection) for any $A\in\mathcal{A}$. Then if $\mathcal{A}\subset \mathcal{C}$ is the full subcategory of the $\mathcal{W}$-stable objects for some class of morphisms $\mathcal{W}$, then $W$ is the "Galois closure" of $\mathcal{W}$. Furthermore, about the existence of the adjoint $\iota$: given a left-calculable class of morphisms $\mathcal{W}$ of a category $\mathcal{C}$, let $F: \mathcal{C}\to \mathcal{C}[\mathcal{W}^{-1}]$ be the natural functor. From [P] 15.3 it follows that $F$ has a right adjoint (which is full and faithful) if and only if for any $X\in \mathcal{C}$ there exists a morphism $u_X: X\to X'$ where $X'$ is $\mathcal{W}$-stable and $u_X$ belongs to the saturation $W$ of $\mathcal{W}$, if and only if for any $X$ the category $X\downarrow W$ (the full subcategory of $X\downarrow\mathcal{C}$ of the morphisms in $W$ with $X$ as domain) has a final object (it seems to me that the proof works without the left-calculable hypothesis).

Bibliography: [P]: Theory of Categories, Nicolae Popescu & Liliana Popescu.
Why was the symbol $r$ chosen to denote Pearson's product moment correlation? From Pearson's "Notes on the history of correlation": The title of Galton's R. I. lecture was Typical Laws of Heredity in Man. Here for the first time appears a numerical measure $r$ of what is termed 'reversion' and which Galton later termed 'regression'. This $r$ is the source of our symbol for the correlation coefficient. This 1877 lecture was also printed in Nature and in the Proceedings of the Royal Institution. From page 532 of Francis Galton, 1877, Typical laws of heredity, Nature vol. 15 (via galton.org): Reversion is expressed by a fractional coefficient of the deviation, which we will write $r$. In the "reverted" parentages (a phrase whose meaning and object have already been explained) $$y = \frac{1}{r c \sqrt{\pi}} \cdot e ^{- \frac{x^2}{r^2c^2}}$$ In short, the population, of which each unit is a reverted parentage, follows the law of deviation, and has modulus, which we will write $c_2$, equal to $r c_1$.
SCM Repository: /branches/vis12/test/unicode-cheatsheet.diderot, revision 1685, 781 bytes. Sun Jan 22 15:23:36 2012 UTC by jhr: "Create a branch to implement things that we need for the Vis 2012 paper."
/* useful unicode characters for Diderot
⊛ convolution, as in field#2(3)[] F = bspln3 ⊛ load("img.nrrd"); LaTeX: \circledast is probably typical, but \varoast (with \usepackage{stmaryrd}) is slightly more legible
× cross product, as in vec3 camU = normalize(camN × camUp); LaTeX: \times
π Pi, as in real rad = degrees*π/360.0; LaTeX: \pi
∇ Del, as in vec3 grad = ∇F(pos); LaTeX: \nabla
• dot product, as in real ld = norm • lightDir; LaTeX: \bullet, although \cdot more typical for dot products
⊗ tensor product, as in tensor[3,3] Proj = identity[3] - norm⊗norm LaTeX: \otimes
∞ Infinity, as in output real val = -∞; LaTeX: \infty
*/
strand blah (int i) { output real out = 0.0; update { stabilize; } }
initially [ blah(i) | i in 0..0 ];
In this post, I will show how solving a Sudoku puzzle is equivalent to solving an integer linear programming (ILP) problem. This equivalence allows us to solve a Sudoku puzzle using any of the many freely available ILP solvers; an implementation of a solver (in Python 3) which follows the formulation described in this post can be found here.

A Sudoku puzzle is an $N \times N$ grid divided in blocks of size $m \times n$, i.e., each block contains $m$ rows and $n$ columns, with $N = mn$ since the number of cells in a block is the same as the number of rows/columns on the puzzle. The most commonly known version of Sudoku is a $9 \times 9$ grid (i.e., $N = 9$) with $3 \times 3$ blocks (i.e., $m = n = 3$). Initially, a cell can be either empty or contain an integer value in the interval $[1,N]$; non-empty cells are fixed and cannot be modified as the puzzle is solved. The rules for solving the puzzle are:

1. each integer value $k \in [1,N]$ must appear exactly once in each row
2. each integer value $k \in [1,N]$ must appear exactly once in each column
3. each integer value $k \in [1,N]$ must appear exactly once in each block.

Each rule above individually implies that every cell of the puzzle will have a number assigned to it when the puzzle is solved, and that each row/column/block of a solved puzzle will represent some permutation of the sequence $\{1, 2, \ldots, N\}$. The version of Sudoku outlined by these rules is the standard one. There are many variants of Sudoku, but I hope that after reading this post, you will realize that these variants can also be expressed as ILP problems using the same ideas presented here.

The rules above can be directly expressed as constraints of an ILP problem. Our formulation will be such that the constraints will enforce everything needed to determine a valid solution to the puzzle, and the objective function will therefore be irrelevant since any point which satisfies the constraints will represent a solution to the problem (notice, however, that some Sudoku puzzles may contain more than one solution, but I am assuming the reader will be content with finding a single one of those). Therefore, our objective function will be simply ${\bf 0}^T{\bf x} = 0$, where ${\bf 0}$ is a vector with all elements set to $0$ and ${\bf x}$ is a vector representing all variables used in the ILP formulation below.

The puzzle grid can be represented as an $N \times N$ matrix, and each grid cell can be naturally assigned a pair of indices $(i,j)$, with $i$ and $j$ representing the cell row and column respectively (see figure 1). The top-left grid cell has $(i,j) = (1,1)$, and the bottom-right one has $(i,j) = (N,N)$, with $i$ increasing downwards and $j$ increasing towards the right.

Fig. 1: A Sudoku puzzle with blocks of size $m \times n = 2 \times 3$. The cell indices $(i,j)$ are shown inside every cell, and the block indices $(I,J)$ are shown on the left and top sides of the grid respectively. Both the height (number of rows) and width (number of columns) of the puzzle are equal to $N = m n = 6$. The puzzle has $n = 3$ blocks along the vertical direction and $m = 2$ blocks along the horizontal direction.

Let us then define $N^3$ variables as follows: $x_{ijk}$ is an integer variable which is restricted to be either $0$ or $1$, with $1$ meaning the value at cell $(i,j)$ is equal to $k$, and $0$ meaning the value at cell $(i,j)$ is not $k$.
Rule (1) above, i.e., the requirement that each $k \in [1,N]$ must appear exactly once per row, can be expressed as: $$ \sum_{j=1}^N x_{ijk} = 1 \quad \textrm{ for } \quad i,k = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_row_constraint} $$ In other words, for a fixed row $i$ and a fixed $k \in [1,N]$, only a single $x_{ijk}$ will be $1$ on that row for $j = 1, 2, \ldots, N$. If the fact that the constraints above do not have any "$\leq$" is bothering you, remind yourself of the fact that $x = a$ can be expressed as $a \leq x \leq a$, which in turn is equivalent to the combination of $-x \leq -a$ and $x \leq a$, i.e., any equality constraint can be expressed as a pair of "less-than-or-equal-to" constraints like the ones we need in linear programming problems. Rule (2), i.e., the requirement that each $k \in [1,N]$ must appear exactly once per column, can be expressed as: $$ \sum_{i=1}^N x_{ijk} = 1 \quad \textrm{ for } \quad j,k = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_column_constraint} $$ Expressing rule (3), i.e., the requirement that each $k \in [1,N]$ must appear exactly once per block, is a bit more complicated. A way to simplify this task is by assigning pairs of indices $(I, J)$ to each block in a similar way as we did for cells (see figure 1): $(I,J) = (1,1)$ represents the top-left block, and $(I, J) = (n,m)$ represents the bottom-right block, with $I$ increasing downwards and ranging from $1$ to $n$ (there are $n$ blocks along the vertical direction) and $J$ increasing towards the right and ranging from $1$ to $m$ (there are $m$ blocks along the horizontal direction). Block $(I,J)$ will therefore contain cells with row indices $i = (I-1)m + 1, \ldots, Im$ and column indices $j = (J-1)n + 1, \ldots, Jn$. Therefore, rule (3) can be expressed as: $$ \sum_{i=(I-1)m + 1}^{Im} \sum_{j=(J-1)n + 1}^{Jn} x_{ijk} = 1 \quad \textrm{ for }\quad \left\{ \begin{matrix} \; I = 1,2,\ldots,n \\[5pt] \; J = 1,2,\ldots,m \\[5pt] \; k = 1,2,\ldots,N \end{matrix} \right. \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_block_constraint} $$ Notice that both equations \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_row_constraint} and \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_column_constraint} represent $N^2$ constraints each. As it turns out, equation \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_block_constraint} contains $nmN = N^2$ constraints as well. So far, our formulation does not prevent $x_{ijk}$ from being equal to $1$ for two or more distinct values $k$ at the same cell $(i,j)$. We need therefore to impose these constraints by hand: $$ \sum_{k=1}^N x_{ijk} = 1 \quad \textrm{ for } \quad i,j = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_cell_constraint} $$ Not all cells are initially empty on a Sudoku puzzle. Some cells will already contain values at the beginning, and those values are necessary so that the solution to the puzzle can be deduced logically (ideally, there should be a single valid solution). Let $\mathcal{C}$ be the set of tuples $(i,j,k)$ representing the fact that a cell $(i,j)$ contains the value $k$ at the beginning. 
We then have: $$ x_{ijk} = 1 \quad \textrm{ for } (i,j,k) \in \mathcal{C} \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_initial_puzzle_constraint} $$ Our last set of constraints limits the values which each variable $x_{ijk}$ can take: it's either $0$ or $1$ (our ILP formulation then technically defines a binary integer linear programming problem, or BILP for short): $$ 0 \leq x_{ijk} \leq 1 \quad \textrm{ for } \quad i,j,k = 1,2,\ldots,N \label{post_25ea1e49ca59de51b4ef6885dcc3ee3b_binary_constraint} $$ Since most ILP solvers allow bounds on the values which each $x_{ijk}$ can take to be set directly, this last set of constraints often does not need to be specified in the same manner as the previous ones. We now have a complete ILP formulation of a Sudoku puzzle: our goal is to minimize the objective function $f(x_{111}, \ldots, x_{ijk}, \ldots x_{NNN}) = 0$ subject to all constraints specified on equations \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_row_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_column_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_block_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_cell_constraint}, \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_initial_puzzle_constraint} and \eqref{post_25ea1e49ca59de51b4ef6885dcc3ee3b_binary_constraint}. After solving the ILP problem outlined above, the solution to the Sudoku puzzle can be constructed directly by placing, at each cell $(i,j)$, the value $k$ such that $x_{ijk} = 1$.
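To make the formulation concrete, here is a minimal sketch of how it could be set up with an off-the-shelf ILP solver. It assumes the PuLP modelling library (the post's own linked implementation may be organized differently), and the helper name `solve_sudoku` and the tiny 4×4 example puzzle are mine.

```python
import pulp

def solve_sudoku(m, n, clues):
    """Sketch of the ILP formulation above (PuLP assumed available).

    m, n  : block size (m rows, n columns per block), N = m*n
    clues : set of tuples (i, j, k), 1-based, meaning cell (i, j) holds k
    """
    N = m * n
    R = range(1, N + 1)
    # Binary variables x[i][j][k] = 1 iff cell (i, j) contains value k.
    x = pulp.LpVariable.dicts("x", (R, R, R), cat="Binary")
    prob = pulp.LpProblem("sudoku", pulp.LpMinimize)
    prob += 0, "constant objective: any feasible point is a solution"

    for k in R:
        for i in R:  # rule (1): each k exactly once per row
            prob += pulp.lpSum(x[i][j][k] for j in R) == 1
        for j in R:  # rule (2): each k exactly once per column
            prob += pulp.lpSum(x[i][j][k] for i in R) == 1
        for I in range(1, n + 1):       # rule (3): each k exactly once per block
            for J in range(1, m + 1):
                prob += pulp.lpSum(x[i][j][k]
                                   for i in range((I - 1) * m + 1, I * m + 1)
                                   for j in range((J - 1) * n + 1, J * n + 1)) == 1
    for i in R:      # exactly one value per cell
        for j in R:
            prob += pulp.lpSum(x[i][j][k] for k in R) == 1
    for (i, j, k) in clues:  # initially filled cells
        prob += x[i][j][k] == 1

    prob.solve()
    return [[next(k for k in R if pulp.value(x[i][j][k]) > 0.5) for j in R] for i in R]

# Tiny 4x4 example (m = n = 2) with a few clues; a 9x9 puzzle works the same way.
print(solve_sudoku(2, 2, {(1, 1, 1), (2, 3, 1), (3, 2, 3), (4, 4, 2)}))
```

The bound constraints of the last equation are handled here by declaring the variables as binary, which is the "set the bounds directly" shortcut mentioned above.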
Note, as I have my degree in chemistry, I denote "=>" as the next step in the equation. Also, sorry for any obvious mistakes that I made - as I mentioned, my degree is not in engineering. Thanks! I am considering a design for a PET spherical toy ball. The ball will have a radius of 0.07 m and will contain pressurized air at a pressure of 5.81 atm (588698.3 Pa). Also, the ball will include a 35 nm coating of Reduced Graphene Oxide (RGO) as a durable coating on the outside of the ball. To get the thickness of the walls of the ball, I know that the ultimate tensile strength of PET is 55 MPa. Because I want a 15% safety factor for the ball, I do the following calculation: $$(1-15\%)\cdot55 = 46.75\text{ MPa} = 46750000\text{ Pa}$$ I then use the reduced ultimate tensile strength to find the thickness of the wall using the formula for a thin-walled sphere, where $p$ is pressure, $t$ is thickness, $r$ is radius, and $\sigma$ is stress: $$\begin{gather} \sigma = \dfrac{pr}{2t} \\ 46750000 = \dfrac{588698.3 \cdot 0.07}{2t} \\ \therefore t = 4.41\times 10^{-4}\text{ m} \end{gather}$$ I can calculate the burst pressure of the sphere, where $P$ is the pressure (gauge) inside the sphere, $FS$ the factor of safety, $\sigma$ the allowable stress, and additionally $R_i$ the inner radius and $R_o$ the outer radius: $$P = \dfrac{(R_o^2-R_i^2)\sigma}{(R_i^2)FS}$$ For calculation of bursting pressure, we take $\sigma$ as the ultimate stress for a given material and put $FS=1$. Note here that I chose to express the ultimate tensile strength of PET as $5.5 \times 10^7$ Pa: $$P = \dfrac{((0.07 + 4.41 \times 10^{-4})^2 - 0.07^2)\cdot 5.5 \times 10^7}{0.07^2 \times 1} = 695183.67\text{ Pa} = 6.86\text{ atm}$$ I now want to calculate the volume of PET needed to create a sphere of the desired dimensions: $$\begin{align} V &= \dfrac{4}{3}\pi \left(\left(\dfrac{d_o}{2}\right)^3 - \left(\dfrac{d_i}{2}\right)^3\right) \\ &= \dfrac{4}{3}\pi \left(\left(\dfrac{0.140 + 4.41 \times 10^{-4}}{2}\right)^3 - \left(\dfrac{0.140}{2}\right)^3\right) \\ &= 1.36 \times 10^{-5}\text{ m}^3 \end{align}$$ I now want to calculate the volume of Reduced Graphene Oxide (RGO) to apply to the surface of the sphere: $$\begin{align} V &= \dfrac{4}{3}\pi \left(\left(\dfrac{d_o}{2}\right)^3 - \left(\dfrac{d_i}{2}\right)^3\right) \\ &= \dfrac{4}{3}\pi \left(\left(\dfrac{0.140 + 4.41 \times 10^{-4} + 35 \times 10^{-9}}{2}\right)^3 - \left(\dfrac{0.140 + 4.41 \times 10^{-4}}{2}\right)^3\right) \\ &= 1.084366 \times 10^{-9}\text{ m}^3 \end{align}$$ Can you see any mistakes in my calculations? Please do not ask why I am using PET as a material or other design decisions like that. There are good reasons for them.
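For what it's worth, the arithmetic above is easy to re-evaluate with a few lines of code. The sketch below simply recomputes the formulas exactly as they are written in the question (thin-wall stress, the burst-pressure expression, and the two shell-volume expressions); it does not judge the modelling choices, and the variable names are mine.

```python
import math

# Inputs as stated in the question
r = 0.07                        # inner radius [m]
p = 588698.3                    # internal gauge pressure [Pa] (5.81 atm)
sigma_ult = 55e6                # ultimate tensile strength of PET [Pa]
sigma_allow = 0.85 * sigma_ult  # 15% reduction -> 46.75 MPa

# Thin-walled sphere: sigma = p*r/(2t)  =>  t = p*r/(2*sigma)
t = p * r / (2 * sigma_allow)
print(f"wall thickness t = {t:.3e} m")          # ~4.41e-4 m

# Burst pressure with FS = 1 and sigma = ultimate strength
Ro, Ri = r + t, r
P_burst = (Ro**2 - Ri**2) * sigma_ult / (Ri**2 * 1.0)
print(f"burst pressure = {P_burst:.0f} Pa = {P_burst/101325:.2f} atm")  # ~6.9e5 Pa

# Shell volumes, using the same diameter-based expressions as the question
def shell_volume(d_outer, d_inner):
    return 4.0 / 3.0 * math.pi * ((d_outer / 2) ** 3 - (d_inner / 2) ** 3)

V_pet = shell_volume(0.140 + t, 0.140)                 # PET wall
V_rgo = shell_volume(0.140 + t + 35e-9, 0.140 + t)     # 35 nm RGO coating
print(f"V_PET = {V_pet:.3e} m^3, V_RGO = {V_rgo:.3e} m^3")
```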
On quasilinear elliptic equations related to some Caffarelli-Kohn-Nirenberg inequalities. 1. Departamento de Matemáticas, Universidad Autónoma de Madrid, 28049 Madrid, Spain. $$-\operatorname{div}\bigl( |x|^{-p\gamma}|\nabla u|^{p-2}\nabla u\bigr)=f(x, u)\in L^1(\Omega),\quad x\in \Omega, \qquad u(x)=0 \text{ on } \partial \Omega,$$ where $-\infty<\gamma<\frac{N-p}{p}$, $\Omega$ is a bounded domain in $\mathbb R^N$ such that $0\in\Omega$, and $f(x,u)$ is a Carathéodory function under suitable conditions that will be stated in each section. Keywords: Caffarelli-Kohn-Nirenberg inequalities, existence and uniqueness, blow-up, degenerate and singular elliptic equations, Weyl type lemmas. Mathematics Subject Classification: 35D05, 35D10, 35J20, 35J25, 35J7. Citation: B. Abdellaoui, I. Peral. On quasilinear elliptic equations related to some Caffarelli-Kohn-Nirenberg inequalities. Communications on Pure & Applied Analysis, 2003, 2 (4): 539-566. doi: 10.3934/cpaa.2003.2.539
I have previously seen the Green's function for Laplace's equation in two spatial dimensions determined using the method of images. Since then, I have learned some more Fourier analysis and have attempted this problem using Fourier machinery. I'm still in the process of learning, and would therefore appreciate it if someone could point out whether there's anything wrong with my work. Let $\mathcal{L} = \Delta = \partial_{x}^{2} + \partial_{y}^{2}$. The Green's function must satisfy $\mathcal{L}G(x, y; x_{0}, y_{0}) = \delta((x,y) - (x_{0}, y_{0}))$. Before taking the Fourier transform, we note that the Fourier transform of $\mathcal{L}$ can be written as the polynomial $-(\xi_{1}^{2} + \xi_{2}^{2})$, where $(\xi_{1}, \xi_{2})$ live in the range of the Fourier transform. Thus, if we let $z = (x,y)$, $z_{0} = (x_{0}, y_{0})$ and $\xi = (\xi_{1}, \xi_{2})$: $$\mathcal{F} \left[ \mathcal{L}G(z ; z_{0})\right] = \mathcal{F}[\delta(z - z_{0})]$$ which I have reduced to: $$\mathcal{F}(\mathcal{L})(\xi) \mathcal{F}(G)(\xi; z_{0}) = e^{-i \xi \cdot z_{0}} $$ and so we can take the inverse Fourier transform to find the Green's function: $$G(z, z_{0}) = \frac{1}{(2 \pi)^{2}} \int_{\mathbb{R}^{2}} \frac{e^{-i \xi \cdot z_{0}}}{-(\xi_{1}^{2} + \xi_{2}^{2})} e^{i \xi \cdot z} d\xi = \frac{1}{4 \pi^{2}} \int_{\mathbb{R}^{2}} \frac{e^{-i \xi \cdot (z_{0} - z)}}{-(\xi_{1}^{2} + \xi_{2}^{2})} d\xi $$ I suppose that if I use this method, it's okay to leave the solution in integral form? Or is there a straightforward way to compute this integral?
We assume that the unknown $X\in M_{m,n}$. When $A\in M_{m,m}$, $B\in M_{n,n}$ are full matrices, $B^T\otimes A$ is a full $mn\times mn$ matrix; inversion of such a matrix has complexity $O(m^3n^3)$. As we will see, using this method when $A,B$ are large matrices is a very bad idea. There are special methods to solve equations of the form $AXB+CXD=E$, where $A,C\in M_{m,m}$, $B,D\in M_{n,n}$, $E\in M_{m,n}$ are known and $X\in M_{m,n}$ is unknown. Cf. i) http://www.maths.lth.se/na/courses/NUM115/NUM115-09/sylvester.pdf which uses an algorithm that is an extension of the Bartels-Stewart method and the Hessenberg-Schur method. Originally the Bartels-Stewart algorithm was used to solve the Sylvester equation. ii) http://www.dm.unibo.it/~simoncin/matrixeq.pdf What is important is that the previous algorithms have complexity $O(n^3+m^3)$, which is much smaller than the complexity of the Kronecker product method. For example, solving a Lyapunov equation ($AX+XA^T=B$ with $n\times n$ matrices) with Bartels-Stewart has the same complexity ($\approx 40 n^3$) as finding eigenvalues and eigenvectors of an $n\times n$ matrix.
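As a rough illustration of the gap, here is a small sketch of my own comparing the two approaches on the Sylvester equation $AX + XB = Q$. It assumes SciPy's `solve_sylvester`, which implements a Bartels-Stewart-type method, and contrasts it with the naive Kronecker-product solve of the same system.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, n = 60, 40
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
Q = rng.standard_normal((m, n))

# Bartels-Stewart-type solver: O(m^3 + n^3)
X = solve_sylvester(A, B, Q)          # solves A X + X B = Q

# Naive Kronecker formulation: (I_n (x) A + B^T (x) I_m) vec(X) = vec(Q), O(m^3 n^3)
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
x_vec = np.linalg.solve(K, Q.reshape(-1, order="F"))   # column-major vec
X_kron = x_vec.reshape(m, n, order="F")

print(np.allclose(A @ X + X @ B, Q))          # True
print(np.allclose(X, X_kron, atol=1e-8))      # True, but far more expensive to obtain
```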
I know there have been answers to similar questions, using either ImportString or ToExpression, however I haven't been able to succeed in importing this. I'm using version 10 Mathematica on Linux by the way. What I want to import is a multi-line aligned equation. In this link, Workaround for Import not supporting the \leqslant TeX macro, I see some more advanced work but as a Mathematica novice I am unable to extend it to deal with my problem. Here is the attached, long equation I'm trying to import. Thanks for any help at all! P.S. I really only care about the right-hand side. P.P.S. I know it is ridiculous and if I see how to do, say, three or four lines, then I imagine it is the same for the rest. One of the problems I had is that the author used things like having a space between the symbol and the underscore for subscripts... $$ \begin{aligned} V&{}_{2,3}(\alpha_1,\beta_1,\alpha_2,\beta_2;Q)=\\ &24 (Q-4) \alpha _1^2 \alpha _2^3 \beta _2^9 \beta _1^9-6 Q \left(4 Q^2-23 Q+25\right) \alpha _1 \alpha _2^3 \beta _2^9 \beta _1^9-2 \left(61 Q^3-447 Q^2+975 Q-625\right) \alpha _1^2 \alpha _2^2 \beta _2^9 \beta _1^9\\ &+36 Q^5 \alpha _1 \alpha _2^2 \beta _2^9 \beta _1^9+18 (Q-1) Q^4 \alpha _1 \alpha _2^2 \beta _2^8 \beta _1^9-90 (Q-1) Q \alpha _1 \alpha _2^3 \beta _2^7 \beta _1^9+36 Q^3 \left(Q^2-6 Q+5\right) \alpha _1 \alpha _2^2 \beta _2^7 \beta _1^9\\ &+2 Q^3 \left(38 Q^2-253 Q+305\right) \beta _2+\left(-16 Q^3+315 Q^2-888 Q+625\right) \alpha _2 \beta _2-36 \alpha _1 \alpha _2 \beta _2 \ . \end{aligned} $$
Suppose you wish to losslessly compress data which can be expressed as a sequence of symbols (a regular text file is a classic example where the symbols are letters and punctuation marks). There are many good compression algorithms out there that solve this problem. This post will present a famous one called Huffman's algorithm, which compresses data by creating an encoding scheme which is optimal in the sense that no other encoding will generate a better compression of the data. This form of compression is surely not the only one available, but it is very easy to understand and also very important since it is used by codecs such as JPEG and MP3.

Huffman's algorithm works as follows:

1. count the number of occurrences $n_i$ of each distinct symbol $s_i$ on the data
2. for each distinct symbol $s_i$, generate a tree containing a single node with key $s_i$ and weight $n_i$
3. select the two trees with smallest root node weights and merge them under a new root node; the key of this new root node is the concatenation of the keys of the root nodes of the merged trees and its weight is the sum of their weights (so if the two chosen trees have root nodes with keys $\alpha_j$ and $\alpha_k$ and weights $w_j$ and $w_k$ respectively, we put them under a new root node with key $\alpha_j\alpha_k$ and weight $w_j + w_k$)
4. if only a single tree is left, stop; otherwise go back to step 3
5. the representation of a given symbol is then given by the bit sequence which one obtains by starting from the root node of the final tree and moving along the tree until the node containing the given symbol as key is reached: each time we need to go to the left child, we append a 0 to the bit representation, otherwise we append a 1 to it.

Although the description above is somewhat abstract, the algorithm is really simple. Figure 1 describes visually how it works.

Fig. 1: A visual description of Huffman's algorithm for some text which contains only the letters in the set {A,B,C,D}. In the first step, we merge the trees with root nodes C and D, then we merge the trees with root nodes B and CD, then finally we merge the trees with root nodes A and BCD. The produced representation of a given symbol is a bit string which is obtained by merely traversing the final tree (starting from its root node) until the node whose key is this symbol is reached. Above, we have the following symbol-to-bit-string mappings: A $\rightarrow$ 0, B $\rightarrow$ 10, C $\rightarrow$ 110, D $\rightarrow$ 111.

As mentioned above, Huffman's algorithm is optimal in the sense that no other encoding will generate a better compression of the data. Consider the following sequence with $20$ characters where $n_\textrm{A} = 12$, $n_\textrm{B} = 5$, $n_\textrm{C} = 2$ and $n_\textrm{D} = 1$ (as in figure 1): AABACAABBAABAAACABAD Below is the representation of the sequence above using ASCII codes (which is a very common encoding scheme used for storing text files on computers): 0100000101000001010000100100000101000011010000010100000101000010010000100100000101000001010000100100000101000001010000010100001101000001010000100100000101000100 And here is the representation of the same sequence using the encoding scheme from figure 1: 0010011000101000100001100100111 Huffman's representation of the original character sequence is clearly better than the ASCII one, but how much better is it?
If $L^{\textrm{A}}_i$ is the number of bits used to represent the $i$-th symbol in the alphabet {A,B,C,D} in the ASCII encoding, then the total number of bits necessary to represent the character sequence using ASCII is: $$ N^{\textrm{A}} = \sum_i L^{\textrm{A}}_i n_i = 8n_{\textrm{A}} + 8n_{\textrm{B}} + 8n_{\textrm{C}} + 8n_{\textrm{D}} = 160 $$ The equivalent quantity for the Huffman encoding is: $$ N^{\textrm{H}} = \sum_i L^{\textrm{H}}_i n_i = 1n_{\textrm{A}} + 2n_{\textrm{B}} + 3n_{\textrm{C}} + 3n_{\textrm{D}} = 31 $$ which is less than $20\%$ of the amount of space needed with the ASCII encoding scheme. The Huffman encoding uses a clever trick: it uses short bit strings to represent the characters which appear more often in the data and larger bit strings to represent the ones which occur less frequently. One interesting aspect of Huffman's algorithm is the fact that it always generates encodings which are not ambiguous, in the sense that the original data can be directly obtained from the compressed version expressed as a concatenation of bit strings. This is due to the fact that Huffman's algorithm always produces prefix codes. To clarify this point, consider this mapping: A $\rightarrow$ 0, B $\rightarrow$ 10 and C $\rightarrow$ 101. If you have the following bit sequence: 01010, what does it represent: ABB or ACA? The ambiguity here comes from the fact that the representation of B is a prefix of the representation of C. Huffman's algorithm will never yield an encoding scheme like this. An implementation of Huffman's algorithm written in Python can be found here.
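Since the post's linked Python implementation is not reproduced here, the following is a minimal independent sketch of the algorithm described above, using a heap to repeatedly merge the two lightest trees; the function name and details are mine, and the particular 0/1 assignment may differ from figure 1 even though the code lengths agree.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build the symbol -> bit-string table produced by Huffman's algorithm."""
    counts = Counter(text)
    # Each heap entry: (weight, tie-breaker, tree); a tree is a symbol or a (left, right) pair.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # two smallest root weights...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tie, (t1, t2)))  # ...merged under a new root
        tie += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left child -> append 0
            walk(tree[1], prefix + "1")   # right child -> append 1
        else:
            codes[tree] = prefix or "0"   # degenerate case: a single distinct symbol
    _, _, root = heap[0]
    walk(root, "")
    return codes

text = "AABACAABBAABAAACABAD"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
print(codes, len(encoded))   # code lengths 1, 2, 3, 3 as in figure 1; 31 bits in total
```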
I would like to compute $\zeta_{\mathbb{Q}[i]}(-1)$ - a Dedekind zeta function. Mimicking the computation for $\zeta(-1)$, we can observe that the following diverges: $$ \frac{1}{4}\sum_{ (m,n) \neq (0,0)} \sqrt{m^2 + n^2} = 1+\sqrt{2}+2 + 2 \sqrt{5} + \sqrt{8} + \dots $$ and I would like to give this infinite divergent sum a finite value along the same lines as these answers: In particular there is Abel's theorem, which I am going to misuse slightly. If $\sum a_n$ converges then $$ \lim_{x \to 1^{-}} \sum a_n x^n = \sum a_n $$ which is a statement about continuity of the infinite series in $x$. Trying to make it work here: $$ \sum_{(m,n) \neq (0,0)} \sqrt{m^2 + n^2} \;x^{\sqrt{m^2 + n^2}} = \frac{d}{dx}\Bigg[\sum_{(m,n) \neq (0,0)} x^{\sqrt{m^2 + n^2}} \Bigg]$$ This is not so helpful, as I now have a Puiseux series (what on earth is $x^{\sqrt{2}}$?) and there is no closed form. What about: $$ \sum_{(m,n) \neq (0,0)} \sqrt{m^2 + n^2} \;x^{m+n} = \frac{d}{dx}\bigg[\sum_{(m,n) \neq (0,0)} x^{m+n} \bigg]$$ This could converge as long as we have an estimate for the sum (this could be a separate strategy): $$ \sum_{m+n = N} \sqrt{m^2 + n^2} $$ Maybe zeta-function regularization is our only option. The Dedekind zeta function does have a Mellin transform: $$ \sum_{(m,n) \neq (0,0)} \sqrt{m^2 + n^2} \;e^{t\sqrt{m^2 + n^2}} = \frac{d}{dt}\Bigg[\sum_{(m,n) \neq (0,0)} e^{t\sqrt{m^2 + n^2}} \Bigg]$$ similar to what I have found, so that zeta regularization and Abel regularization are kind of the same. Note: as I've written it, $\sum \sqrt{m^2 + n^2} = \zeta_{\mathbb{Q}(i)}(-\frac{1}{2})$, which I imagine should not attain any special value :-/
The Annals of Probability, Volume 19, Number 4 (1991), 1781-1797. Existence of Probability Measures with Given Marginals. Abstract: We show that if $f$ is a probability density on $R^n$ wrt Lebesgue measure (or any absolutely continuous measure) and $0 \leq f \leq 1$, then there is another density $g$ with only the values 0 and 1 and with the same $(n - 1)$-dimensional marginals in any finite number of directions. This sharpens, unifies and extends the results of Lorentz and of Kellerer. Given a pair of independent random variables $0 \leq X,Y \leq 1$, we further study functions $0 \leq \phi \leq 1$ such that $Z = \phi(X,Y)$ satisfies $E(Z\mid X) = X$ and $E(Z\mid Y) = Y$. If there is a solution then there also is a nondecreasing solution $\phi(x,y)$. These results are applied to tomography and baseball. Article information: Source: Ann. Probab., Volume 19, Number 4 (1991), 1781-1797. First available in Project Euclid: 19 April 2007. Permanent link: https://projecteuclid.org/euclid.aop/1176990236. Digital Object Identifier: doi:10.1214/aop/1176990236. Mathematical Reviews number (MathSciNet): MR1127728. Zentralblatt MATH identifier: 0739.60001. Citation: Gutmann, Sam; Kemperman, J. H. B.; Reeds, J. A.; Shepp, L. A. Existence of Probability Measures with Given Marginals. Ann. Probab. 19 (1991), no. 4, 1781-1797. doi:10.1214/aop/1176990236. https://projecteuclid.org/euclid.aop/1176990236
There is no single number that encompasses all of the covariance information - there are 6 pieces of information, so you'd always need 6 numbers. However there are a number of things you could consider doing. Firstly, the error (variance) in any particular direction $i$ is given by $\sigma_i^2 = \mathbf{e}_i ^ \top \Sigma \mathbf{e}_i$, where $\mathbf{e}_i$ is the unit vector in the direction of interest. Now if you look at this for your three basic coordinates $(x,y,z)$ then you can see that: $\sigma_x^2 = \left[\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right]^\top\left[\begin{matrix}\sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{xz} & \sigma_{yz} & \sigma_{zz} \end{matrix}\right]\left[\begin{matrix} 1 \\ 0 \\ 0 \end{matrix}\right] = \sigma_{xx}$, $\sigma_y^2 = \sigma_{yy}$, $\sigma_z^2 = \sigma_{zz}$. So the error in each of the directions considered separately is given by the diagonal of the covariance matrix. This makes sense intuitively - if I am only considering one direction, then changing just the correlation should make no difference. You are correct in noting that simply stating: $x = \mu_x \pm \sigma_x$, $y = \mu_y \pm \sigma_y$, $z = \mu_z \pm \sigma_z$ does not imply any correlation between those three statements - each statement on its own is perfectly correct, but taken together some information (correlation) has been dropped. If you will be taking many measurements each with the same error correlation (supposing that this comes from the measurement equipment) then one elegant possibility is to rotate your coordinates so as to diagonalise your covariance matrix. Then you can present errors in each of those directions separately since they will now be uncorrelated. As to taking the "vector error" by adding in quadrature I'm not sure I understand what you are saying. These three errors are errors in different quantities - they don't cancel each other out and so I don't see how you can add them together. Do you mean error in the distance?
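A short numerical sketch of the "rotate to diagonalise" suggestion (my own illustration, using NumPy's symmetric eigendecomposition): the eigenvectors give the uncorrelated directions and the square roots of the eigenvalues give the 1-sigma errors along them. The covariance matrix below is a made-up example.

```python
import numpy as np

# A hypothetical 3x3 covariance matrix for (x, y, z); symmetric positive semi-definite.
Sigma = np.array([[4.0, 1.2, 0.3],
                  [1.2, 2.0, 0.5],
                  [0.3, 0.5, 1.0]])

# Variance along an arbitrary unit direction e: sigma_e^2 = e^T Sigma e
e = np.array([1.0, 0.0, 0.0])
print("sigma_x^2 =", e @ Sigma @ e)          # equals Sigma[0, 0]

# Rotate to principal axes: Sigma = R diag(lam) R^T with orthonormal columns of R
lam, R = np.linalg.eigh(Sigma)
print("uncorrelated 1-sigma errors:", np.sqrt(lam))   # errors along the rotated axes
print("check:", np.allclose(R @ np.diag(lam) @ R.T, Sigma))
```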
I'm working on a cosmological model in general relativity, and I need to define a tensor and assign values to it. For example, a tensor $A_{\mu\nu}$ that is a function of other tensors: $$A_{\mu\nu}=R_{\mu\alpha}g^{\mu \alpha}+G^{\mu\beta}T_{\beta\nu}$$ And I need to use this expression for $A_{\mu\nu}$ to calculate its covariant derivative. I have the components of every tensor in this expression ($T_{00}=\cos(\theta)$, $T_{11}=\sin(\theta)$, etc.), but I need to express $A_{\mu\nu}$ in this way. How can I do this in xAct? In the xCoba tutorials I didn't find anything like this.
To answer your question I'll need to do a fairly long digression on homotopy limits and colimits. Before I delve deep into the topic, let me say that there's more than one way to describe it; for example, some people like model categories, other people might prefer triangulated categories (shudder). I'll simply explain the point of view that has been most helpful for me in the past.

The language of $\infty$-categories I think that one of the best ways to understand homotopy limits and colimits is in the setting of $\infty$-categories. In fact, in the theory of $\infty$-categories homotopy limits and colimits are just like ordinary limits and colimits, and I will stop prefixing everything with the word "homotopy" from now on. My favourite catchphrase is that $\infty$-categories are just categories together with a notion of homotopy between morphisms. In fact you need a bit more: basically for every two objects $X$ and $Y$ and every $n\ge0$ you need a set $\textrm{Map}(X,Y)_n$ that morally corresponds to maps $X\times \Delta^n\to Y$ (the so-called higher homotopies) satisfying a set of compatibilities that I will not detail here (a good choice is to ask that $\textrm{Map}(X,Y)_\bullet$ form a Kan complex). To keep this concrete, examples are: Spaces, with the obvious definition of homotopy; Manifolds and embeddings, with the notion of homotopy given by isotopies; Chain complexes with the notion of chain homotopy; Categories, where a homotopy between two functors is a natural isomorphism (and higher homotopies are just chains of $n$ composable natural isomorphisms $F_0\to F_1\to\cdots\to F_n$). (The last example is of course the most relevant for your question.)

The important part about $\infty$-categories is that the notion of homotopy should seep through in our definition of commutative diagrams (so-called coherently commutative diagrams, or coherent diagrams for short). For example a commutative square $$\require{AMScd} \begin{CD}A @>{f}>> B\\@V{h}VV @VV{g}V\\C @>>{k}> D\end{CD}$$ is the datum of four objects, four maps and a homotopy between $gf$ and $kh$. Once you have set up all of this you can go on and define initial and terminal objects, limits, colimits and so on and so forth exactly like you did for ordinary categories (that is, without homotopies). These are often called "homotopy limits" and "homotopy colimits" to distinguish them from the limits and colimits computed without paying attention to the homotopies.

Ok, but what's the deal with the Čech complex anyway? Let's give an example. Suppose that $C,D$ are two categories and $F,G:C\to D$ two functors. If we ignored the homotopies the equalizer would simply be the objects $c\in C$ such that $Fc=Gc$. But we all know that this is a stupid notion. Naturally equivalent functors will give different answers, and we do not want to distinguish between naturally equivalent functors. So we use the notion of equalizer in the $\infty$-category of categories. As you can see immediately from studying the coherent diagrams of the form $*\to C\rightrightarrows D$, the objects of the equalizer in this brave new setting are objects $c\in C$ together with an isomorphism $\alpha:Fc\cong Gc$. Now that's a much better behaved notion! Similarly, you can see that if $F:\Delta^{op}\to C$ is a functor to an ordinary category we get that $\lim_{\Delta^{op}} F = \textrm{eq}(F([0])\rightrightarrows F([1]))$, so that $F(X)\to \lim F(\check U_\bullet)$ being an equivalence is just the ordinary sheaf condition. But what happens if we go one categorical level up?
Now let $F:\Delta^{op}\to \mathrm{Cat}$ be a functor to categories. Then you can see that an object of $\lim_{\Delta^{op}}F$ is an object $x\in F([0])$ together with an isomorphism $\alpha:d_0^*x\cong d_1^*x$ in $F([1])$ such that $d_0^*\alpha\circ d_2^*\alpha = d_1^*\alpha$. But this is exactly the notion of a descent datum. So the notion of sheaf in this brave new context is just the classical notion of stack. You might have noticed that we are using just a small portion of the diagram. This is because our categories do not have many "interesting" homotopies. Sets have no homotopies at all, and categories have only interesting 1-homotopies; higher homotopies are basically composable sequences of 1-homotopies. In general, the higher the order of the interesting homotopies, the more pieces of $\Delta$ we have to use (precisely, for an $n$-category you just need to go to $[n+1]$; the reason for this is basically Quillen's theorem A, as noted by Dylan Wilson in the comments). For spaces and chain complexes you need to consider the whole of $\Delta$.

Conclusions and references Ugh, that's a long answer. I hope I managed to give at least an inkling of what's going on without getting bogged down in the technical details. If technical details are what you want, however, the standard references for $\infty$-categories are Lurie's Higher Topos Theory and Higher Algebra. They are not an easy read, by anyone's standards. The model of $\infty$-categories I've used is called fibrant simplicial categories. I think it is an excellent model to develop some intuition, but it is actually quite bad to work with. Most of the people in the industry use quasicategories, which are quite pleasant to work with, but I didn't have the time to properly introduce them here.

Appendix: why can we truncate? Let $E$ be an $n$-category, that is, an $\infty$-category such that the mapping spaces $\mathrm{Map}(x,y)$ are $n$-truncated for every $x,y\in E$. Let $j:C\to D$ be a functor such that for every $d\in D$ the geometric realization $|C\times_D D_{d/}|$ is $n$-connected. Then for every functor $F:D\to E$, $\lim_D F$ exists if and only if $\lim_C Fj$ exists, and they coincide.

Lemma 1: Let $K$ be a simplicial set such that the geometric realization $|K|$ is $n$-connected. Then for every $e\in E$ the limit of the constant functor $K\to E$ at $e$ exists and coincides with $e$. Proof: $\mathrm{Map}(e',\lim_K e) \cong \lim_K \mathrm{Map}(e',e) = \mathrm{Map}(K,\mathrm{Map}(e',e))=\mathrm{Map}(e',e)$.

Lemma 2: Let $p:\tilde D\to D$ be a cartesian fibration such that for every $d\in D$ the geometric realization $|\tilde D_d|$ is $n$-connected. Then for every functor $F:D\to E$ the limit $\lim_D F$ exists if and only if the limit $\lim_{\tilde D} Fp$ exists, and they coincide. Proof: By an easy cofinality argument the right Kan extension along a cartesian fibration is obtained by computing the limits fiberwise. Then the thesis follows from Lemma 1.

Proof of the main result: Let $\tilde D\to D$ be the cartesian fibration classified by the functor $D^{op}\to \mathrm{Cat}$ given by $d\mapsto C\times_D D_{d/}$. We have a canonical functor $C\to \tilde D$ sending $c$ to $(c,jc=jc)$. A standard cofinality argument implies that the functor $C\to \tilde D$ is coinitial. Then the thesis follows from the previous lemma.
INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these random variables generate. In other words $E[Y\mid X]$ is meant to mean $E[Y\mid \sigma(X)]$. This remark may seem out of place in an "Informal Treatment", but it reminds us that our conditioning entities are collections of sets (and when we condition on a single value, then this is a singleton set). And what do these sets contain? They contain the information with which the possible values of the random variable $X$ supply us about what may happen with the realization of $Y$. Bringing in the concept of Information permits us to think about (and use) the Law of Iterated Expectations (sometimes called the "Tower Property") in a very intuitive way: The sigma-algebra generated by two random variables is at least as large as that generated by one random variable: $\sigma (X) \subseteq \sigma(X,Z)$ in the proper set-theoretic meaning. So the information about $Y$ contained in $\sigma(X,Z)$ is at least as great as the corresponding information in $\sigma (X)$. Now, as notational innuendo, set $\sigma (X) \equiv I_x$ and $\sigma(X,Z) \equiv I_{xz}$. Then the LHS of the equation we are looking at can be written $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right]$$ Describing the above expression verbally, we have: "what is the expectation of {the expected value of $Y$ given Information $I_{xz}$} given that we have available information $I_x$ only?" Can we somehow "take into account" $I_{xz}$? No - we only know $I_x$. But if we use what we have (as we are obliged by the expression we want to resolve), then we are essentially saying things about $Y$ under the expectations operator, i.e. we say "$E(Y\mid I_x)$", no more - we have just exhausted our information. Hence $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right] = E\left(Y|I_{x} \right)$$ If somebody else doesn't, I will return for the formal treatment.

A (bit more) FORMAL TREATMENT Let's see how two very important books of probability theory, P. Billingsley's Probability and Measure (3rd ed., 1995) and D. Williams' "Probability with Martingales" (1991), treat the matter of proving the "Law of Iterated Expectations": Billingsley devotes exactly three lines to the proof. Williams, and I quote, says "(the Tower Property) is virtually immediate from the definition of conditional expectation". That's one line of text. Billingsley's proof is no less terse. They are of course right: this important and very intuitive property of conditional expectation derives essentially directly (and almost immediately) from its definition; the only problem is, I suspect that this definition is not usually taught, or at least not highlighted, outside probability or measure theoretic circles. But in order to show in (almost) three lines that the Law of Iterated Expectations holds, we need the definition of conditional expectation, or rather, its defining property. Consider a probability space $(\Omega, \mathcal F, \mathbf P)$ and an integrable random variable $Y$. Let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal F$, $\mathcal G \subseteq \mathcal F$. Then there exists a function $W$ that is $\mathcal G$-measurable, is integrable and (this is the defining property) satisfies $$E(W\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal G \qquad [1]$$ where $\mathbb 1_{G}$ is the indicator function of the set $G$.
We say that $W$ is ("a version of") the conditional expectation of $Y$ given $\mathcal G$, and we write $W = E(Y\mid \mathcal G) \;a.s.$ The critical detail to note here is that the conditional expectation has the same expected value as $Y$ does, not just over the whole of $\Omega$, but on every set $G$ in $\mathcal G$. (I will now try to present how the Tower property derives from the definition of conditional expectation.) $W$ is a $\mathcal G$-measurable random variable. Consider then some sub-$\sigma$-algebra, say $\mathcal H \subseteq \mathcal G$. Then $G\in \mathcal H \Rightarrow G\in \mathcal G$. So, in an analogous manner as previously, we have the conditional expectation of $W$ given $\mathcal H$, say $U=E(W\mid \mathcal H) \;a.s.$, which is characterized by $$E(U\cdot\mathbb 1_{G}) = E(W\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [2]$$ Since $\mathcal H \subseteq \mathcal G$, equations $[1]$ and $[2]$ give us $$E(U\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [3]$$ But this is the defining property of the conditional expectation of $Y$ given $\mathcal H$. So we are entitled to write $U=E(Y\mid \mathcal H)\; a.s.$ Since we also have by construction $U = E(W\mid \mathcal H) = E\big(E[Y\mid \mathcal G]\mid \mathcal H\big)$, we have just proved the Tower property, or the general form of the Law of Iterated Expectations, in eight lines.
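As an informal numerical sanity check of the tower property (my own illustration, not part of either book's treatment), one can simulate discrete $X$, $Z$ and compare $E[E(Y\mid X,Z)\mid X]$ with $E(Y\mid X)$ group by group:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.integers(0, 3, size=n)          # X takes values 0, 1, 2
Z = rng.integers(0, 2, size=n)          # Z takes values 0, 1
Y = 2.0 * X - Z + rng.normal(size=n)    # some Y depending on both, plus noise

# Inner conditional expectation: E[Y | X, Z], estimated by group means
inner = np.zeros(n)
for x in range(3):
    for z in range(2):
        m = (X == x) & (Z == z)
        inner[m] = Y[m].mean()

# Outer step: E[ E[Y|X,Z] | X ] versus E[Y | X], again by group means
for x in range(3):
    m = (X == x)
    print(x, inner[m].mean(), Y[m].mean())   # the two columns agree up to Monte Carlo noise
```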
Those guys only make confusion. I will answer you with a very easy method you can use with piecewise functions. First of all you have two step functions, which you can easily picture in your mind as two-dimensional boxes of height $2$. Think about them as two boxes, one of which has to move towards the other. Those are two signals, and the convolution is nonzero only when they intersect. Picture it as in the following figure: Before some obnoxious arsehole points it out: this figure does not represent your problem exactly, but it's a great help in figuring it out. The first box goes from $0$ to $1$, whilst the second one goes from $\tau -3$ to $\tau$. In your case, you keep the smaller step function, that is $g$, fixed, and you make $f$, the larger step, move towards the first box. Hence you will follow several steps.

Step 1 The signals are as in the figure above: separated. In this case, their convolution is simply zero, which happens in the range $$\tau - 3 > 1$$ That is $$\tau > 4$$

Step 2 The larger box starts to overlap the smaller one, hence you will have $$\int_{\tau -3}^1 4\ dt = 16 - 4\tau$$ And this happens in the range $$\begin{cases} 0 < \tau -3 \\ 1 > \tau -3 \end{cases}$$ Which means $$3 < \tau < 4$$

Step 3 The big box passes over the small box, that is, the smaller one is totally contained in the larger one. Hence $$\int_0^1 4\ dt = 4$$ In the range $$\begin{cases} 0 > \tau -3 \\ 1 < \tau \end{cases}$$ Which means $$1 < \tau < 3$$

Step 4 Now the larger box fades away, but with a bit of it still inside the smaller box: $$\int_0^{\tau} 4\ dt = 4\tau$$ Which happens in the range $$0< \tau < 1$$

Step 5 Finally the big box goes away, and the convolution is zero again for $$\tau < 0$$

At the end you have your result: $$f*g = \begin{cases} 0 & \tau > 4 \\ 16 - 4\tau & 3<\tau < 4 \\ 4 & 1<\tau < 3 \\ 4\tau & 0 < \tau < 1 \\ 0 & \tau < 0 \end{cases}$$ This is a trapezoid, and plotting it gives exactly that trapezoidal shape, as it has to be.
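A quick numerical cross-check of the piecewise answer (my own sketch, taking $f = 2\cdot 1_{[0,3]}$ and $g = 2\cdot 1_{[0,1]}$, which is consistent with the box heights and the final trapezoid above):

```python
import numpy as np

dt = 1e-3
t = np.arange(-1.0, 6.0, dt)
f = np.where((t >= 0) & (t <= 3), 2.0, 0.0)   # the wide box, height 2 on [0, 3]
g = np.where((t >= 0) & (t <= 1), 2.0, 0.0)   # the narrow box, height 2 on [0, 1]

conv = np.convolve(f, g) * dt                  # discrete approximation of (f*g)(tau)
tau = np.arange(len(conv)) * dt + 2 * t[0]     # time axis of the full convolution

def exact(tau):
    return np.select([tau < 0, tau < 1, tau < 3, tau < 4],
                     [0.0, 4 * tau, 4.0, 16 - 4 * tau], default=0.0)

print(np.max(np.abs(conv - exact(tau))))       # small: only discretization error remains
```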
So for this one I'm having trouble isolating for y. If it's not possible, then please put it in a form with dy and the y variable on one side and dx and the x variable on the other. $$\frac{dy}{dx}-2xy=e^{x^{2}}.$$ If you have a differential equation of the form $$\frac{dy}{dx} + a(x) y(x) = b(x) \tag{1}$$ we call the equation a first-order linear ODE and we can obtain its solution using the following method. First, we multiply both sides of $(1)$ by a function $f(x)$ (called the integrating factor) and we obtain $$f y' + fay = fb \tag{2}$$ Using the product rule $(fy)' = fy' + f'y$, we can rewrite $(2)$ as $$(fy)' - f'y + fay = fb \Longrightarrow (fy)' + y \left(fa - f' \right) = fb \tag{3}$$ If it is the case that $fa - f' = 0$, then the LHS of $(3)$ is just $(fy)'$, and integrating both sides would yield an expression for $fy$. So let's solve for the $f$ that guarantees that $fa - f' = 0$. Solving this separable differential equation for $f$, we get that $$\frac{df}{dx} = f'(x) = f(x)a(x) \Longrightarrow \frac{df}{f(x)} = a(x) dx \Longrightarrow \log(f(x)) = \int a(x) dx \Longrightarrow f(x) = e^{\int a(x) dx} $$ Using the $f$ we just found, $(3)$ therefore reduces to $$(fy)' = fb \Longrightarrow fy = \int fb \: dx \Longrightarrow y = \frac{\int fb \: dx}{f}$$ Plugging in our formula for $f(x)$, we get that the solution to $(1)$ is $$y(x) = \displaystyle\frac{\displaystyle\int \left(b(x) \: e^{\int a(x) dx} \right) dx}{e^{\int a(x) dx}}$$ Now, noting that $a(x) = -2x$ and $b(x) = e^{x^2}$ in your example, we see that $$y(x) = \displaystyle\frac{\displaystyle\int \left(e^{x^2} \: e^{\int -2x dx} \right) dx}{e^{\int -2x dx}} = \frac{\displaystyle\int \left(e^{x^2} e^{-x^2}\right)dx}{e^{-x^2}} = \frac{\int e^0 dx}{e^{-x^2}} = \frac{x+c}{e^{-x^2}} \Longrightarrow \boxed{y(x) = xe^{x^2} + c e^{x^2}}$$ Hint : Multiply throughout by the integrating factor, $e^{\int -2x dx} = e^{-x^2}$. Then notice that $$\begin{align}\frac{d}{dx}(e^{-x^2}y) &= e^{-x^2}\frac{dy}{dx} - 2e^{-x^2}xy\\&=LHS\end{align}$$ Rewrite the equation like this: $$e^{-x^2}\frac{dy}{dx}-2xe^{-x^2}y=1$$ Notice that if we apply the product rule in differentiating $ye^{-x^2}$ with respect to $x$, we get exactly the left hand side. In other words, the equation is equivalent to: $$\frac{d(e^{-x^2}y)}{dx}=1$$ Integrating both sides yields: $$e^{-x^2}y=x+c\implies y=xe^{x^2}+ce^{x^2}.$$ This kind of technique can be generalised to the method of 'integrating factors', however it happens to work out nicely enough here that you can just follow your nose. Hint Because of the rhs, suppose that you define $y(x)=z(x)e^{x^2}$; then the differential equation becomes $$z'(x)=1,$$ which is quite easy to integrate for $z(x)$. I am sure that you can take it from here.
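For an independent check of the closed form, you can hand the equation to a CAS. This is a small sketch using SymPy (assuming it is installed); it is only a verification, not part of the derivations above.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The ODE from the question: y' - 2*x*y = exp(x**2)
ode = sp.Eq(y(x).diff(x) - 2*x*y(x), sp.exp(x**2))
print(sp.dsolve(ode))   # expect something equivalent to Eq(y(x), (C1 + x)*exp(x**2))

# Verify the closed form directly by substitution.
C1 = sp.symbols('C1')
sol = (C1 + x) * sp.exp(x**2)
print(sp.simplify(sol.diff(x) - 2*x*sol - sp.exp(x**2)))   # 0
```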
The factor of $1/2\pi$ is an artifact of the normalization convention being used for the momentum eigenstates. To begin to see how this is so, let us note that the choice of normalization of a Dirac-orthogonal continuous basis completely determines the form of the resolution of the identity. Writing an arbitrary state $|\psi\rangle$ in a given ``continuous" basis gives\begin{align} |\psi\rangle = \int da'\,\psi(a')|a'\rangle,\end{align}and applying $\langle a|$ to both sides gives \begin{align} \langle a|\psi\rangle = \int da' \,\psi(a')\langle a|a'\rangle.\end{align}Now if this basis is Dirac orthogonal with normalization $K(a)$:\begin{align}\langle a |a'\rangle = K(a)\,\delta(a-a')\end{align}then we find $\langle a|\psi\rangle = K(a)\psi(a)$ which gives\begin{align} |\psi\rangle = \int \frac{da'}{K(a')}\,\langle a'|\psi\rangle|a'\rangle\end{align}which implies\begin{align} \int \frac{da'}{K(a')} |a'\rangle\langle a'| = I.\end{align}Now specializing to quantum mechanics, recall a standard convention is to set\begin{align} \langle q|q'\rangle = \delta (q-q'), \qquad \langle p|p'\rangle = 2\pi\delta(p-p') \tag{DefCon1}\end{align}and therefore by the remarks above, this implies the following resolutions of the identity:\begin{align} \int dq'\, |q'\rangle\langle q'| = I, \qquad \int \frac{dp'}{2\pi} \,|p'\rangle\langle p'| = I.\end{align} Motivation for normalization conventions To try to motivate the conventional normalizations of the position and momentum eigenstates written above, suppose we were to adopt the following general normalizations:\begin{align} \langle q|q'\rangle = C(q)\,\delta(q-q'), \qquad \langle p|p'\rangle = K(p)\,\delta(p-p'),\end{align}for functions $C(q)$ and $K(p)$. Let $Q$ be the position operator, and let $P$ be the momentum operator. Recall that the canonical commutation relation $[Q,P] = iI$ can be used to show that $e^{-iaP}$ acts as a translation operator, namely that $e^{-iaP}|q\rangle = |q+a\rangle$. It follows that $\langle p|q\rangle = \langle p|e^{-iqP}|0\rangle = e^{-iqp}\langle p|0\rangle$. If we let $\langle p|0\rangle=\phi(p)$ for some undetermined function $\phi$, then in summary we have so far shown that\begin{align} \langle p|q\rangle= e^{-iqp}\phi(p)\end{align}If we now consider the quantity $\langle q|\psi\rangle$, and if we insert one resolution of the identity integrating over $p$ and the other integrating over $q'$, then we get\begin{align} \langle q|\psi\rangle = \int dq'\left[\int \frac{dp}{2\pi}e^{i(q-q')p}\frac{2\pi|\phi(p)|^2}{C(q')K(p)}\right] \langle q'|\psi\rangle\end{align}Now, in order for this equation to be true, the term in brackets must equal $\delta(q-q')$, which means that\begin{align} \frac{2\pi|\phi(p)|^2}{C(q')K(p)} = 1\end{align}Another way of saying this is that the position and momentum representations must be related by Fourier transform. As you can see, we can arbitrarily fix $C$ and $K$ to satisfy this condition because we have complete freedom to choose $\phi(p)$. The conventional choice leading to $(\text{DefCon1})$ is basically as simple as you can get because two of the degrees of freedom, namely $C(q)$ and $\phi(p)$, are set to 1, and this fixes $K(p)$ to be $2\pi$. Addendum. 
Other conventions in the literature As Qmechanic points out in his comment below, it's useful to stress that there is at least one other common convention in the literature that is just as simple as $(\text{DefCon1})$, and that is to choose\begin{align} C(q) = 1, \qquad K(p) = 1, \qquad \phi(p) = \frac{1}{\sqrt{2\pi}} \end{align}which would yield\begin{align} \langle q|q'\rangle = \delta (q-q'), \qquad \langle p|p'\rangle = \delta(p-p') , \qquad \langle p|q\rangle = \frac{1}{\sqrt{2\pi}}e^{-ipq}\tag{DefCon2}\end{align}This convention is used by Cohen-Tannoudji and Sakurai in their quantum texts. Of course, in practice, as long as one sticks to a single convention, no one is stopping her from choosing any convention she chooses, except for possibly a research advisor or physics gods.
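As a sanity check on the bookkeeping above, the following small numerical sketch (not part of the original answer) verifies that, with the convention $\langle p|q\rangle \propto e^{-ipq}$ and $\langle p|p'\rangle = 2\pi\delta(p-p')$, the factor $1/2\pi$ in the momentum resolution of the identity is exactly what Fourier inversion requires.

```python
import numpy as np

# Check the 1/(2*pi) bookkeeping with the convention <p|q> ~ exp(-i p q).
q = np.linspace(-20, 20, 4001)
p = np.linspace(-20, 20, 4001)

psi = np.exp(-q**2 / 2)                                   # test wavefunction psi(q)

# "Momentum representation": psi_tilde(p) = \int dq e^{-i p q} psi(q)
psi_tilde = np.array([np.trapz(np.exp(-1j * pp * q) * psi, q) for pp in p])

# Reconstruction: psi(q0) = \int dp/(2*pi) e^{+i p q0} psi_tilde(p)
q0 = 0.7
recon = np.trapz(np.exp(1j * p * q0) * psi_tilde, p) / (2 * np.pi)
print(abs(recon - np.exp(-q0**2 / 2)))                    # ~ 0 (quadrature error only)
```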
Suppose a spaceship is moving away from the Earth at $0.5c$. When the spaceship is one light-year away from Earth, an observer on Earth sends a photon toward the spaceship. According to the observer on Earth, the photon is traveling at $c$ and the spaceship is traveling in the same direction at $0.5c$, so it takes 2 years for the photon to reach the spaceship. From the frame of reference of someone inside the spaceship, the spaceship is not moving and the photon is traveling towards it at $c$. So it takes 1 year for the photon to reach the spaceship. So why isn't 1 year inside the spaceship equal to 2 years on Earth, i.e. why is this method of deriving the Lorentz factor invalid? So it takes 1 year for the photon to reach the spaceship This isn't correct. Assume, for simplicity, that both the Earth's clock and the spacecraft's clock read $t=0=t'$ when the spacecraft passes Earth. Then, when the photon is sent, Earth's clock reads $t=2\, \mathrm {y}$. Both observers agree on this. Also, both observers agree that the spacecraft's clock reads $t' = 3.464\, \mathrm {y}$ when the photon is received. However, due to the relativity of simultaneity, the observers don't agree on the reading of the spacecraft's clock when the photon is sent. According to the observer on Earth, the spaceship's clock reads $t' = 1.732\, \mathrm {y}$ when the photon is sent. But, according to the observer on the spacecraft, the spacecraft's clock reads $t' = 2.309\, \mathrm {y}$ when the photon is sent. And the observers don't agree on the reading of Earth's clock when the photon is received. According to the observer on Earth, the Earth's clock reads $t = 4\, \mathrm {y}$ when the photon is received. But, according to the observer on the spacecraft, the Earth's clock reads $t = 3\, \mathrm {y}$ when the photon is received. In summary, according to the observer on Earth, 2 Earth years elapse between the emission and reception events while 1.732 spacecraft years elapse. According to the observer on the spacecraft, 1.155 spacecraft years elapse between the emission and reception events while 1 Earth year elapses. From either perspective, the moving clock's elapsed time is smaller, and by the same proportion: $$\gamma = \frac{2}{1.732} = \frac{1.155}{1} = \frac{1}{\sqrt{1 - (0.5)^2}} $$ If you draw the spacetime diagram for this, the above results will be clear. The spaceship crosses your path at location $0$, at time $0$. How things appear to you: At time $2$, when the spaceship has reached location $1$, you release the photon. At time $4$, the photon and the ship both reach location $2$ and collide. How things appear to the other guy: At time $2.3$ (when your slow clock was reading $2$), you (who, along with the rest of the earth, had been traveling backward at speed $1/2$) had reached location $-1.15$, where you released your photon. Since he's never moved from location $0$, the photon reaches him $1.15$ years later, at time $3.45$ by his ship's clock. More of how things appear to you: Huh. When the photon hit the spaceship --- that is, at time 4 --- the ship's clock showed 3.45. That clock must be running slow.
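The numbers quoted above are easy to reproduce with a direct Lorentz transformation. Here is a small sketch in units with $c = 1$, using the event coordinates assumed in the answer (the ship passes Earth at the common origin, and the photon is emitted from Earth at $t = 2$).

```python
import math

# Earth frame: ship passes Earth at (t, x) = (0, 0) and moves at v = 0.5.
v = 0.5
gamma = 1 / math.sqrt(1 - v**2)

def boost(t, x):
    """Transform Earth-frame coordinates into the ship's frame (c = 1)."""
    return gamma * (t - v * x), gamma * (x - v * t)

emission  = (2.0, 0.0)   # photon leaves Earth when the ship is 1 ly away
reception = (4.0, 2.0)   # photon catches the ship 2 years later, 2 ly out

for name, (t, x) in [("emission", emission), ("reception", reception)]:
    tp, xp = boost(t, x)
    print(f"{name}: ship-frame time {tp:.3f}, ship-frame position {xp:.3f}")

# Elapsed times between the two events in each description:
print("Earth frame, Earth clock:", reception[0] - emission[0])                    # 2.000
print("Earth frame, ship clock :", (reception[0] - emission[0]) / gamma)          # 1.732
print("Ship frame,  ship clock :", boost(*reception)[0] - boost(*emission)[0])    # 1.155
print("Ship frame,  Earth clock:", (boost(*reception)[0] - boost(*emission)[0]) / gamma)  # 1.000
```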
Image processing is one of the hot topics in AI research, alongside reinforcement learning, ethics in AI and many others. A recent solution to perform ordinal regression on people's ages has been published, and in this post we apply that technique to financial data. Ranking classification is a common challenge in companies, and research hasn't stopped looking for better ways to solve it. In a previous QuantDare post, we explained the most important ways to approach this problem. In this post, we'll focus on a pointwise approach proposed in the paper “Rank-consistent Ordinal Regression for Neural Networks” by Wenzhi Cao, Vahid Mirjalili and Sebastian Raschka. In that paper, the authors tackle this problem as K-1 binary classification problems, where K is the number of positions in the ranking. More specifically, we predict whether the current observation is greater than position k in the ranking. We can calculate the final ranking position by using the following formula: \( q = 1 + \sum_{k=1}^{K-1} \hat{y}_k(x_{i}) \), where \(\hat{y}_k(x_i)\) is the binary prediction for element \(x_{i}\) at ranking position \(k\). The vector \(\hat{y}(x_i)\) has length K-1 and holds the binary indicators for each ranking position k. This approach has been used in the past too. However, ranking consistency was not guaranteed, because there was a chance that the prediction for position k could be 1 (the observation ranking position is greater than position k) while the prediction for position k-1 could be 0 (the observation ranking position is not greater than position k-1). Here's an example to illustrate this idea better: we can't say that a person who is 25 years old is above 20 years old and below 18 years old at the same time. The authors of this paper provide an improvement to the loss function that takes the previous problem into account. The original loss function, presented in the paper Ordinal Regression with Multiple Output CNN for Age Estimation by Niu et al., was categorical cross-entropy with an importance coefficient for each ranking position k. In this paper, they introduce a second part to the original equation that ensures the predicted probability for position k is greater than or equal to the probability for position k+1. This aspect is explained in detail in the original paper. For our problem, we will assign more weight to top-ranking positions so the model can focus on minimizing the error in those positions. Our weight will be defined like this: \( \Lambda_{k} = \frac{q}{y} \) Bringing the solution to financial problems In the financial world, we want to prioritize which asset to invest in, and that's where the solution above kicks in! We're going to predict the ranking position of a set of indices for each day. Let's see how we're going to formulate this problem. Our data The data we're going to work with has relative returns from closing prices for each business day for nine indices, from 7th January 1999 to 3rd March 2016. Target variable definition The target variable will be a numeric discrete ranking from 0 to K-1, where K will be equal to the number of positions in the ranking and equal to the number of indices we have. The values to rank will be the linear moving weighted average returns for the next 21 days. Closer returns will have more importance than returns further away. The higher the values, the closer we'll be to the top of the ranking (better), and the lower the values, the closer we'll be to the bottom of the ranking (worse). Each ranking position (q) will be transformed into a list of binary indicators; a small code sketch of this encoding and decoding follows, just before the worked example. 
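Here is a minimal sketch of that encoding and of the decoding formula \( q = 1 + \sum_{k} \hat{y}_k \). It uses 1-based ranking positions so that the formula and the example below line up; only NumPy is assumed, and the helper names are mine, not the paper's.

```python
import numpy as np

K = 9  # number of ranking positions (one per index in the post)

def encode_rank(q, K):
    """Rank q in {1,...,K} -> K-1 binary targets y_k = [q > k] for k = 1..K-1."""
    return (q > np.arange(1, K)).astype(int)

def decode_rank(probs, threshold=0.5):
    """K-1 predicted probabilities -> rank via q = 1 + sum_k [p_k > threshold]."""
    return 1 + int(np.sum(np.asarray(probs) > threshold))

print(encode_rank(4, K))    # [1 1 1 0 0 0 0 0], as in the example that follows
print(decode_rank([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.05]))   # 4
```

Note that the post defines the target as running from 0 to K-1, while the worked example and the \(q = 1 + \sum\) formula read most naturally with 1-based positions; the sketch follows the example.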
Here’s an example to illustrate this better: if our observation has a ranking position of 4 out of 9, the 4 position will be encoded like this [1, 1, 1, 0, 0, 0, 0, 0], meaning that exact observation ranking position is greater than the first ranking position, it’s greater than the second one, it’s greater than the third one, but it’s not greater than the following positions. Input features During my experience as a data scientist, I’ve worked with people that advocate using business knowledge, and people that want the model to learn its own features in order to avoid introducing human bias when it comes to create or select input features. I consider I belong to the first group, but this time, given we’re using neural networks and they have proven to be good at learning features by themselves, I’m going to just use the returns of each index as input and give a chance to that other point of view. Model to use We are going to use an LSTM model given the nature of our data, and we’re going to use an LSTM by each index. The reason behind using many models is that we want the model to learn from the dependency between values from a return series, and if we use data from all indices (shaping our data with K rows for each day) we’ll have consecutive values from different indices, and we don’t want that. For that reason, I will use one LSTM model by index to predict the ranking position returns for each day, feeding into the model the consecutive daily returns from that index. The model used for each index has two LSTM layers with 8 hidden units each, and a dense layer as an output layer with K-1 hidden units. The model was trained for 100 epochs, with a learning rate of 0.01 and a decreasing learning rate on the plateau that reaches a minimum learning rate of 0.0001 with the patience of 10 epochs. Simulation We’ll use from 7th January 1999 to 1st January 2012 as a training set, from 2nd January 2012 to 1st January 2014 as validation set, and from 2nd January 2014 to 1st January 2016 as a test set. Results Here you can see accuracy distribution throughout every data split by index: From the above distribution plots, we can see that training split accuracy shows lights and shadows, as for some indices the most frequent value was greater than 80% (e.g. Global) but for others, its most frequent accuracy value was between 60% and 70% (e.g. Government Bond). When it comes to checking distributions from test and validation splits, we see that there are mismatches between distributions, sometimes even testing and validation distributions, meaning that the performance of the model isn’t constant through time. In some cases we see that testing and validation distributions have their most frequent value been way below the training most frequent value (e.g. Commodities), meaning that there might be an overfitting problem. In the following plot, we aim to have a better grasp of what indices are better or worse: Here we can see that some perform better than others, but clearly Commodities and Emerging show the worst median accuracy and reach lower minimum accuracy values. Conclusions and further work This has been a first hands-on applying a technique used in an image computing problem in a financial problem. From the results we’ve seen, there’s room for improvement, not only from the results per se but for some decisions that were taken. 
Here are some ideas:
- Choose a better set of hyper-parameters: the hyper-parameters used were chosen because they have worked reasonably well on other projects with similar data, but a more rigorous procedure should be used to select those values.
- Debug the neural network: Is the neural network learning anything at all after epoch t? Are we suffering from a vanishing/exploding gradient problem? How sensitive is the model to randomness? How does requiring high decimal accuracy impact the neural network's performance? These kinds of questions should be answered too.
- Take control of what we're feeding into the model: Although it's hard to know what's going on inside a neural network, we can control (at least a little) what it's doing if we feed the model the data we want. Its performance could even improve by using more information, and it could behave more coherently with prior business knowledge!
- Pointwise or listwise?: Though here we've used a pointwise approach, anyone could argue that the unit of prediction is a ranking generated by multiple observations every day, and that the quality of that ranking is what should be optimized. In this post, we're not taking into account information from other indices to predict the ranking position of a specific index, and maybe that's valuable information.
- Improve performance with basic aspects of binary classification: because we're predicting ranking positions for each index independently, and each index could be biased towards occupying some ranking positions, we might have imbalanced classes. At the moment, we're treating every probability greater than 0.5 as a 1, but this might be refined by finding an optimal new threshold (a minimal sketch of this follows below). Another way to tackle the class imbalance problem is to modify the weight of each ranking position in the loss function, giving more weight to minority classes/label positions.
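Sticking with that last idea, here is a minimal sketch of picking one decision threshold per ranking position on the validation split instead of using a fixed 0.5. The data shapes and helper name are hypothetical stand-ins, not code from the original experiment.

```python
import numpy as np

def best_thresholds(val_probs, val_labels, grid=np.linspace(0.05, 0.95, 19)):
    """val_probs, val_labels: arrays of shape (n_samples, K-1).
    Returns one threshold per binary sub-problem, chosen to maximize accuracy."""
    n, km1 = val_probs.shape
    thresholds = np.empty(km1)
    for k in range(km1):
        accs = [(((val_probs[:, k] > t).astype(int)) == val_labels[:, k]).mean()
                for t in grid]
        thresholds[k] = grid[int(np.argmax(accs))]
    return thresholds

# Usage sketch with random stand-in data (replace with real validation outputs):
rng = np.random.default_rng(1)
probs = rng.random((500, 8))
labels = (rng.random((500, 8)) < 0.3).astype(int)
print(best_thresholds(probs, labels))
```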
Hi all. I'm having a big brainfart and I can't seem to grasp what I really think is an easy problem. The full text says: 4) For the following pairs (b, a) find y and x such that by + ax = d where d = GCD(a, b). b) (b, a) = (18 - 16i, 7 - 2i) in the ring \(\displaystyle \mathbb{Z}[i]\) I started by dividing \(\displaystyle \dfrac{18-16i}{7-2i}=\dfrac{158}{53}-\dfrac{76i}{53}\) I then rounded each of these values to the nearest Gaussian integer to get 3 - i \(\displaystyle (3-i)(7-2i)=19-13i \implies 18-16i=(3-i)(7-2i)-(1+3i)\) That gives me the quotient and remainder to "feed" to the next round of division \(\displaystyle \dfrac{7-2i}{-1-3i}=\dfrac{-1}{10}+\dfrac{23i}{10}\) Rounding these values again gives me 0 + 2i \(\displaystyle (0+2i)(-1-3i)=6-2i \implies 7-2i=(0+2i)(-1-3i)+1\) Got another quotient and non-zero remainder, so I go again \(\displaystyle \dfrac{-1-3i}{1}=1(-1-3i)+0\) The remainder is zero, so I stop here. But now I have to build back up, to figure out what x and y are. I tried using the matrix method, but it fell apart: \(\displaystyle \begin{pmatrix}1&1\\ 1&0\end{pmatrix} \cdot \begin{pmatrix}2i&1\\ 1&0\end{pmatrix} \cdot \begin{pmatrix}3-i&1\\ 1&0\end{pmatrix} \cdot \begin{pmatrix}0\\ 1\end{pmatrix} = \begin{pmatrix}1+2i\\ \:2i\end{pmatrix}\) Ought to give me my values for x and y, but it's wrong for some reason. \(\displaystyle (1+2i)(18-16i)+(2i)(7-2i)=54+34i\) Instead of giving me the GCD(18-16i, 7-2i) = 1. Every step I did matches up nicely with the in-class examples we worked... only the end result is wrong for this problem. The only thing I can maybe think of is that the examples we worked all involved only real numbers. Does this method just not work for complex numbers, or something? Any help or insight would be greatly appreciated.
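If it helps to cross-check the back-substitution, here is a small sketch of the extended Euclidean algorithm over the Gaussian integers using Python complex numbers, rounding each quotient to the nearest Gaussian integer just as in the hand computation. It is only a checking tool for the bookkeeping, not an explanation of where the matrix method went wrong; for these inputs it finds, for example, the pair \(y = -2i\), \(x = 3+6i\), and indeed \((-2i)(18-16i)+(3+6i)(7-2i)=1\).

```python
def gauss_round(z):
    """Round a complex number to the nearest Gaussian integer."""
    return complex(round(z.real), round(z.imag))

def gauss_divmod(a, b):
    """Quotient and remainder in Z[i] using the nearest-integer quotient."""
    q = gauss_round(a / b)
    return q, a - q * b

def gauss_ext_gcd(a, b):
    """Extended Euclid in Z[i]: returns (g, x, y) with a*x + b*y = g (g up to a unit)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q, rem = gauss_divmod(old_r, r)
        old_r, r = r, rem
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

b = 18 - 16j
a = 7 - 2j
g, coeff_b, coeff_a = gauss_ext_gcd(b, a)        # g = coeff_b*b + coeff_a*a
print(g, coeff_b, coeff_a, coeff_b * b + coeff_a * a)
```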
Let $X$ be a continuous variable and $Y$ a binary variable with joint distribution: $$p(x,y;\beta,\rho_1,\rho_2,\phi_1,\phi_2)=\frac{1}{Z(\beta,\rho_1,\rho_2,\phi_1,\phi_2)}\exp(-0.5 \beta x^2+1_{y=0}\rho_1 x+1_{y=1} \rho_2 x+1_{y=0} \phi_1+1_{y=1} \phi_2)$$ with $Z$ a normalising constant. The conditional of $X$ given $Y$ is a Gaussian depending on $Y$, and the conditional of $Y$ given $X$ is a Bernoulli variable whose probability $P(Y=1)$ depends on $X$. Is it possible to use Gibbs sampling?
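Since both full conditionals are standard distributions, a Gibbs sweep just alternates between them. Here is a sketch of what that would look like; the parameter values are arbitrary illustrations, and the conditionals are read off directly from the exponent of the joint density above.

```python
import numpy as np

# Gibbs sampling sketch for the joint density above.
# Full conditionals:
#   X | Y=y  ~  Normal(mean = rho_y / beta, variance = 1 / beta)
#   Y | X=x  ~  Bernoulli(p), p = sigmoid((rho2 - rho1) * x + (phi2 - phi1))
beta, rho1, rho2, phi1, phi2 = 2.0, -1.0, 1.5, 0.0, 0.3   # arbitrary illustration
rng = np.random.default_rng(0)

def gibbs(n_samples, burn_in=1000):
    x, y = 0.0, 0
    out = []
    for i in range(n_samples + burn_in):
        rho = rho2 if y == 1 else rho1
        x = rng.normal(rho / beta, 1.0 / np.sqrt(beta))
        logit = (rho2 - rho1) * x + (phi2 - phi1)
        p1 = 1.0 / (1.0 + np.exp(-logit))
        y = int(rng.random() < p1)
        if i >= burn_in:
            out.append((x, y))
    return np.array(out)

samples = gibbs(20_000)
print("P(Y=1) estimate:", samples[:, 1].mean())
print("E[X | Y=1] estimate:", samples[samples[:, 1] == 1, 0].mean())
```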
The company Arch Bridges Construction (ABC) specializes in—you guessed it—the construction of arch bridges. This classical style of bridge is supported by pillars that extend from the ground below the bridge. Arches between pillars distribute the bridge’s weight onto the adjacent pillars. The bridges built by ABC often have pillars spaced at irregular intervals. For aesthetic reasons, ABC’s bridges always have semicircular arches, as illustrated in Figure 1. However, while a bridge arch can touch the ground, it cannot extend below the ground. This makes some pillar placements impossible. Figure 1: (a) Consecutive pillars at distance $d$ are connected by a semicircular arch with radius $r = d/2$. (b) An invalid pillar placement (arches cannot extend below ground). (c) A bridge corresponding to Sample Input 1. Given a ground profile and a desired bridge height $h$, there are usually many ways of building an arch bridge. We model the ground profile as a piecewise-linear function described by $n$ key points $(x_1, y_1), (x_2, y_2), \ldots , (x_ n, y_ n)$, where the $x$-coordinate of a point is the position along the bridge, and the $y$-coordinate is the elevation of the ground above sea level at this position along the bridge. The first and last pillars must be built at the first and last key points, and any intermediate pillars can be built only at these key points. The cost of a bridge is the cost of its pillars (which is proportional to their heights) plus the cost of its arches (which is proportional to the amount of material used). So a bridge with $k$ pillars of heights $h_1, \dots , h_ k$ that are separated by horizontal distances $d_1, \dots , d_{k-1}$ has a total cost of\[ \alpha \cdot \sum _{i=1}^ k h_ i + \beta \cdot \sum _{i=1}^{k-1} d_ i^2 \] for some given constants $\alpha $ and $\beta $. ABC wants to construct each bridge at the lowest possible cost. Input: The first line of input contains four integers $n$, $h$, $\alpha $, and $\beta $, where $n$ ($2 \le n \le 10^4$) is the number of points describing the ground profile, $h$ ($1 \le h \le 10^5$) is the desired height of the bridge above sea level, and $\alpha $, $\beta $ ($1 \le \alpha , \beta \le 10^4$) are the cost factors as described earlier. Then follow $n$ lines, the $i^{\text {th}}$ of which contains two integers $x_ i$, $y_ i$ ($0 \le x_1 < x_2 < \ldots < x_ n \le 10^5$ and $0 \le y_ i < h$), describing the ground profile. Output: Output the minimum cost of building a bridge from horizontal position $x_1$ to $x_ n$ at height $h$ above sea level. If it is impossible to build any such bridge, output impossible.

Sample Input 1:
5 60 18 2
0 0
20 20
30 10
50 30
70 20

Sample Output 1:
6460

Sample Input 2:
4 10 1 1
0 0
1 9
9 9
10 0

Sample Output 2:
impossible
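The cost minimization itself is a straightforward DP over the key points; the delicate part is deciding whether the semicircular arch between two candidate pillars stays above the ground profile. Below is a rough sketch of the DP in Python, with the geometric feasibility test stubbed out: `arch_clears_ground` is a hypothetical helper you would still need to implement against the piecewise-linear profile. It is meant only to show the cost recurrence, not to be a complete or efficient accepted solution.

```python
import math

def min_bridge_cost(points, h, alpha, beta):
    """points: list of (x, y) key points; returns min cost or None if impossible.
    dp[j] = cheapest cost of a valid bridge ending with a pillar at key point j."""
    n = len(points)
    INF = math.inf
    dp = [INF] * n
    dp[0] = alpha * (h - points[0][1])          # first pillar is mandatory
    for j in range(1, n):
        for i in range(j):
            if dp[i] == INF:
                continue
            if not arch_clears_ground(points, i, j, h):
                continue
            d = points[j][0] - points[i][0]
            cost = dp[i] + alpha * (h - points[j][1]) + beta * d * d
            dp[j] = min(dp[j], cost)
    return None if dp[-1] == INF else dp[-1]

def arch_clears_ground(points, i, j, h):
    # Stub. Under the usual reading of the figure, the arch is a semicircle joining
    # the two pillar tops at deck height h, so its center is ((x_i + x_j)/2, h) and
    # its lowest point is h - (x_j - x_i)/2. A real implementation has to test the
    # circle against every ground segment between x_i and x_j, not just the key points.
    raise NotImplementedError
```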
14. Relations in Lean¶ In the last chapter, we noted that set theorists think of a binary relation \(R\) on a set \(A\) as a set of ordered pairs, so that \(R(a, b)\) really means \((a, b) \in R\). An alternative is to think of \(R\) as a function which, when applied to \(a\) and \(B\), returns the proposition that \(R(a, b)\) holds. This is the viewpoint adopted by Lean: a binary relation on a type A is a function A → A → Prop. Remember that the arrows associate to the right, so A → A → Prop really means A → (A → Prop). So, given a : A, R a is a predicate (the property of being related to A), and given a b : A, R a b is a proposition. 14.1. Order Relations¶ With first-order logic, we can say what it means for a relation to be reflexive, symmetric, transitive, antisymmetric, and so on: namespace hiddenvariable {A : Type}def reflexive (R : A → A → Prop) : Prop :=∀ x, R x xdef symmetric (R : A → A → Prop) : Prop :=∀ x y, R x y → R y xdef transitive (R : A → A → Prop) : Prop :=∀ x y z, R x y → R y z → R x zdef anti_symmetric (R : A → A → Prop) : Prop :=∀ x y, R x y → R y x → x = yend hidden We can then use the notions freely. Notice that Lean will unfold the definitions when necessary, for example, treating reflexive R as ∀ x, R x x. variable R : A → A → Propexample (h : reflexive R) (x : A) : R x x := h xexample (h : symmetric R) (x y : A) (h1 : R x y) : R y x :=h x y h1example (h : transitive R) (x y z : A) (h1 : R x y) (h2 : R y z) : R x z :=h x y z h1 h2example (h : anti_symmetric R) (x y : A) (h1 : R x y) (h2 : R y x) : x = y :=h x y h1 h2 In the command variable {A : Type}, we put curly braces around A to indicate that it is an implicit argument, which is to say, you do not have to write it explicitly; Lean can infer it from the argument R. That is why we can write reflexive R rather than reflexive A R: Lean knows that R is a binary relation on A, so it can infer that we mean reflexivity for binary relations on A. Given h : transitive R, h1 : R x y, and h2 : R y z, it is annoying to have to write h x y z h1 h2 to prove R x z. After all, Lean should be able to infer that we are talking about transitivity at x, y, and z, from the fact that h1 is R x y and h2 is R y z. Indeed, we can replace that information by underscores: variable R : A → A → Propexample (h : transitive R) (x y z : A) (h1 : R x y) (h2 : R y z) : R x z :=h _ _ _ h1 h2 But typing underscores is annoying, too. The best solution is to declare the arguments x y z to a transitivity hypothesis to be implicit as well: variable {A : Type}variable R : A → A → Propexample (h : transitive R) (x y z : A) (h1 : R x y) (h2 : R y z) : R x z :=h h1 h2 In fact, the notions reflexive, symmetric, transitive, and anti_symmetric are defined in Lean’s core library in exactly this way, so we are free to use them without defining them. That is why we put our temporary definitions of in a namespace hidden; that means that the full name of our version of reflexive is hidden.reflexive, which, therefore, doesn’t conflict with the one defined in the library. In Section 13.1 we showed that a strict partial order—that is, a binary relation that is transitive and irreflexive—is also asymmetric. Here is a proof of that fact in Lean. 
variable A : Typevariable R : A → A → Propexample (h1 : irreflexive R) (h2 : transitive R) : ∀ x y, R x y → ¬ R y x :=assume x y,assume h3 : R x y,assume h4 : R y x,have h5 : R x x, from h2 h3 h4,have h6 : ¬ R x x, from h1 x,show false, from h6 h5 In mathematics, it is common to use infix notation and a symbol like ≤ to denote a partial order. Lean supports this practice: sectionparameter A : Typeparameter R : A → A → Proplocal infix ≤ := Rexample (h1 : irreflexive R) (h2 : transitive R) : ∀ x y, x ≤ y → ¬ y ≤ x :=assume x y,assume h3 : x ≤ y,assume h4 : y ≤ x,have h5 : x ≤ x, from h2 h3 h4,have h6 : ¬ x ≤ x, from h1 x,show false, from h6 h5end The parameter and parameters commands are similar to the variable and variables commands, except that parameters are fixed within a section. In other words, if you prove a theorem about R in the section above, you cannot apply that theorem to another relation, S, without closing the section. Since the parameter R is fixed, Lean allows us to define notation for R to be used locally in the section. In the example below, having fixed a partial order, R, we define the corresponding strict partial order and prove that it is, indeed, a strict order. sectionparameters {A : Type} (R : A → A → Prop)parameter (reflR : reflexive R)parameter (transR : transitive R)parameter (antisymmR : ∀ {a b : A}, R a b → R b a → a = b)local infix ≤ := Rdefinition R' (a b : A) : Prop := a ≤ b ∧ a ≠ blocal infix < := R'theorem irreflR (a : A) : ¬ a < a :=assume : a < a,have a ≠ a, from and.right this,have a = a, from rfl,show false, from ‹a ≠ a› ‹a = a›theorem transR {a b c : A} (h₁ : a < b) (h₂ : b < c) : a < c :=have a ≤ b, from and.left h₁,have a ≠ b, from and.right h₁,have b ≤ c, from and.left h₂,have b ≠ c, from and.right h₂,have a ≤ c, from transR ‹a ≤ b› ‹b ≤ c›,have a ≠ c, from assume : a = c, have c ≤ b, from eq.subst ‹a = c› ‹a ≤ b›, have b = c, from antisymmR ‹b ≤ c› ‹c ≤ b›, show false, from ‹b ≠ c› ‹b = c›,show a < c, from and.intro ‹a ≤ c› ‹a ≠ c›end Notice that we have used suggestive names reflR, transR, antisymmR instead of h1, h2, h3 to help remember which hypothesis is which. The proof also uses anonymous have and assume, referring back to them with the French quotes, \f< anf \f>. Remember also that eq.subst ‹a = c› ‹a ≤ b› is a proof of the fact that amounts for substituting c for a in a ≤ b. You can also use the equivalent notation ‹a = c› ▸ ‹a ≤ b›, where the triangle is written \t. In Section Section 13.1, we also noted that you can define a (weak) partial order from a strict one. We ask you to do this formally in the exercises below. Here is one more example. Suppose R is a binary relation on a type A, and we define S x y to mean that both R x y and R y x holds. Below we show that the resulting relation is reflexive and symmetric. sectionparameter A : Typeparameter R : A → A → Propvariable h1 : transitive Rvariable h2 : reflexive Rdef S (x y : A) := R x y ∧ R y xexample : reflexive S :=assume x,have R x x, from h2 x,show S x x, from and.intro this thisexample : symmetric S :=assume x y,assume h : S x y,have h1 : R x y, from h.left,have h2 : R y x, from h.right,show S y x, from ⟨h.right, h.left⟩end In the exercises below, we ask you to show that S is transitive as well. In the first example, we use the anonymous assume and have, and then refer back to the have with the keyword this. In the second example, we abbreviate and.left h and and.right h as h.left and h.right, respectively. 
We also abbreviate and.intro h.right h.left with an anonymous constructor, writing ⟨h.right, h.left⟩. Lean figures out that we are trying to prove a conjunction, and figures out that and.intro is the relevant introduction principle. You can type the corner brackets with \< and \>, respectively. 14.2. Orderings on Numbers¶ Conveniently, Lean has the normal orderings on the natural numbers, integers, and so on defined already. open natvariables n m : ℕ#check 0 ≤ n#check n < n + 1example : 0 ≤ n := zero_le nexample : n < n + 1 := lt_succ_self nexample (h : n + 1 ≤ m) : n < m + 1 :=have h1 : n < n + 1, from lt_succ_self n,have h2 : n < m, from lt_of_lt_of_le h1 h,have h3 : m < m + 1, from lt_succ_self m,show n < m + 1, from lt.trans h2 h3 There are many theorems in Lean that are useful for proving facts about inequality relations. We list some common ones here. variables (A : Type) [partial_order A]variables a b c : A#check (le_trans : a ≤ b → b ≤ c → a ≤ c)#check (lt_trans : a < b → b < c → a < c)#check (lt_of_lt_of_le : a < b → b ≤ c → a < c)#check (lt_of_le_of_lt : a ≤ b → b < c → a < c)#check (le_of_lt : a < b → a ≤ b) Here the declaration at the top says that A has the structure of a partial order. There are also properties that are specific to some domains, like the natural numbers: variable n : ℕ#check (nat.zero_le : ∀ n : ℕ, 0 ≤ n)#check (nat.lt_succ_self : ∀ n : ℕ, n < n + 1)#check (nat.le_succ : ∀ n : ℕ, n ≤ n + 1) 14.3. Equivalence Relations¶ In Section 13.3 we saw that an equivalence relation is a binary relation on some domain \(A\) that is reflexive, symmetric, and transitive. We will see such relations in Lean in a moment, but first let’s define another kind of relation called a preorder, which is a binary relation that is reflexive and transitive. namespace hiddenvariable {A : Type}def preorder (R : A → A → Prop) : Prop :=reflexive R ∧ transitive Rend hidden Lean’s library provides a different formulation of preorders, so, in order to use the same name, we have to put it in the hidden namespace. Lean’s library defines other properties of relations, such as these: namespace hiddenvariables {A : Type} (R : A → A → Prop)def equivalence := reflexive R ∧ symmetric R ∧ transitive Rdef total := ∀ x y, R x y ∨ R y xdef irreflexive := ∀ x, ¬ R x xdef anti_symmetric := ∀ ⦃x y⦄, R x y → R y x → x = yend hidden You can ask Lean to print their definitions: #print equivalence#print total#print irreflexive#print anti_symmetric Building on our previous definition of a preorder, we can describe a partial order as an antisymmetric preorder, and show that an equivalence relation as a symmetric preorder. 
namespace hiddenvariable {A : Type}def preorder (R : A → A → Prop) : Prop :=reflexive R ∧ transitive Rdef partial_order (R : A → A → Prop) : Prop :=preorder R ∧ anti_symmetric Rexample (R : A → A → Prop): equivalence R ↔ preorder R ∧ symmetric R :=iff.intro (assume h1 : equivalence R, have h2 : reflexive R, from and.left h1, have h3 : symmetric R, from and.left (and.right h1), have h4 : transitive R, from and.right (and.right h1), show preorder R ∧ symmetric R, from and.intro (and.intro h2 h4) h3) (assume h1 : preorder R ∧ symmetric R, have h2 : preorder R, from and.left h1, show equivalence R, from and.intro (and.left h2) (and.intro (and.right h1) (and.right h2)))end hidden In Section 13.3 we claimed that there is yet another way to define an equivalence relation, namely, as a binary relation satisfying the following two properties: \(\forall a \; (a \equiv a)\) \(\forall {a, b, c} \; (a \equiv b \wedge c \equiv b \to a \equiv c)\) Let’s prove this in Lean. Remember that the parameters and local infix commands serve to fix a relation R and introduce the notation ≈ to denote it. (You can type ≈ as \~~.) In the assumptions reflexive (≈) and symmetric (≈), the notation (≈) denotes R. sectionparameters {A : Type} (R : A → A → Prop)local infix ≈ := Rvariable (h1 : reflexive (≈))variable (h2 : ∀ {a b c}, a ≈ b ∧ c ≈ b → a ≈ c)example : symmetric (≈) :=assume a b (h : a ≈ b),have b ≈ b ∧ a ≈ b, from and.intro (h1 b) h,show b ≈ a, from h2 thisexample : transitive (≈) :=assume a b c (h3 : a ≈ b) (h4 : b ≈ c),have c ≈ b, from h2 (and.intro (h1 c) h4),have a ≈ b ∧ c ≈ b, from and.intro h3 this,show a ≈ c, from h2 thisend 14.4. Exercises¶ Replace the sorrycommands in the following proofs to show that we can create a partial order R'out of a strict partial order R. section parameters {A : Type} {R : A → A → Prop} parameter (irreflR : irreflexive R) parameter (transR : transitive R) local infix < := R def R' (a b : A) : Prop := R a b ∨ a = b local infix ≤ := R' theorem reflR' (a : A) : a ≤ a := sorry theorem transR' {a b c : A} (h1 : a ≤ b) (h2 : b ≤ c): a ≤ c := sorry theorem antisymmR' {a b : A} (h1 : a ≤ b) (h2 : b ≤ a) : a = b := sorry end Replace the sorryby a proof. section parameters {A : Type} {R : A → A → Prop} parameter (reflR : reflexive R) parameter (transR : transitive R) def S (a b : A) : Prop := R a b ∧ R b a example : transitive S := sorry end Only one of the following two theorems is provable. Figure out which one is true, and replace the sorrycommand with a complete proof. section parameters {A : Type} {a b c : A} {R : A → A → Prop} parameter (Rab : R a b) parameter (Rbc : R b c) parameter (nRac : ¬ R a c) -- Prove one of the following two theorems: theorem R_is_strict_partial_order : irreflexive R ∧ transitive R := sorry theorem R_is_not_strict_partial_order : ¬(irreflexive R ∧ transitive R) := sorry end Complete the following proof. open nat example : 1 ≤ 4 := sorry
Notebooks.azure.com provides a FREE Jupyter notebook service: it allows you to use Jupyter notebooks (http://jupyter.org/) and the programming language Python (https://www.python.org/). The Azure Notebook service is a FREE web application at http://notebooks.azure.com that allows you to create and share Jupyter documents. The solution allows you to build interactive notebooks which contain live code, LaTeX equations, graphical visualizations and text, letting you interactively run experiments and assessments on data. It can be used in a variety of situations including data cleaning, data transformation, simulation, statistics, modeling, machine learning and much more, with none of the overhead of server, infrastructure or service management. Azure notebook service The Azure notebook service provides cloud-based Jupyter notebook environments at https://notebooks.azure.com/. Dr Harry Strange at University College London and Dr Garth Wells at the University of Cambridge are two academics I know who are both extensively using http://notebooks.azure.com within UG and PG teaching; both academics/institutions now operate dedicated Azure Jupyter Notebook libraries for their students and courses. Running and viewing the course material The way in which the institutions use Notebooks is simply by having dedicated Notebook activities for each course. Students simply click on a notebook to view it. Students can then click 'Clone and Run!', which creates their own copy of the Notebook on Azure and gives them a runnable and editable version of the activity notebooks. This allows students to freely experiment with their own copy of the activity notebooks while always being able to return to the master version at any time. Creating your first Jupyter notebook on Notebooks.azure.com Log into the Azure Jupyter Notebook server at https://notebooks.azure.com/. Go to 'Libraries' (top left of the page). Click 'New Library' to create a library (give it a suitable name, e.g. Python Notebooks). You should now be in your new library. Click 'Open in Jupyter' to get a Jupyter environment. At the top-right of the page, click New -> Python 3. You will now have a new notebook, and you're ready to start working. You should now see your new Jupyter Notebook. By default, the notebook's title is Untitled; to rename your notebook, simply click on Untitled and enter the new name. When you return to Notebooks.azure.com your notebooks are displayed; to open a notebook, simply click on its title. Running Jupyter locally or via an Azure Virtual Machine You can run Jupyter and Python locally on your own computer if you wish, or you can use the Data Science VM on Azure, which comes with Jupyter. We currently have Jupyter Notebooks available on Azure as a service, or you can have Jupyter available on a Windows Virtual Machine or a Linux Virtual Machine. What I really like about the Cambridge and UCL courses is that they are fully supporting Azure within teaching and learning and utilising the Azure-hosted Jupyter Notebooks environment at https://notebooks.azure.com/. The following are some top tips for getting started with Notebooks. These recommendations are from Dr Garth Wells at the University of Cambridge, who has provided this guidance in his Jupyter Notebooks. The materials and course resources are available under Creative Commons and MIT licenses; license details are available at https://github.com/CambridgeEngineering/PartIA-Computing-Michaelmas#license-and-copyright. 
The following top tips have been adapted from Dr Garth Wells' materials to provide a high-level insight into how to get started with Jupyter Notebooks. Editing and running notebooks Jupyter notebooks have text cells and code cells. If you double-click on part of a notebook in a Jupyter environment (see below for creating a Jupyter environment on Azure), the cell will become editable. You will see in the menu bar whether it is a text cell ('Markdown') or a code cell ('Code'). You can use the drop-down box at the top of a notebook to change the cell type. You can use Insert from the menu bar to insert a new cell. The current cell can be 'run' using shift-return (the current cell is highlighted by a bar on the left-hand side of the page). When run, a text cell will be typeset, and the code in a 'code cell' will be executed. Any output from a code cell will appear below the code. Often you will want to run all cells from the start of a notebook. You can do this with Kernel -> Restart & Run All from the notebook menu bar. In this case the cells are executed in order (first through to last). Below is a code cell: print(2 + 2) Output 4 Formatting text cells Text cells are formatted using Markdown, and using LaTeX syntax for mathematics. Make extensive use of text cells to explain what your program does, and how it does it. Use mathematical typesetting to express yourself mathematically. Markdown You can find all the details in the Jupyter Markdown documentation. Below is a brief summary. Headings Using Markdown, headings are indicated by '#': # Top level heading ## Second level heading ### Third level heading Text style The Markdown input Opening passage `A passage of text` *Some more text* appears as: Opening passage, then "A passage of text" in monospace, then "Some more text" in italics. Lists You can create a bulleted list using: - Option A - Option B to show Option A, Option B, and an enumerated list using 1. Old approach 1. New approach to show 1. Old approach, 2. New approach. Markdown resolves the list numbers for you. Code Code can be typeset using: ```python def f(x): return x*x ``` which produces def f(x): return x*x You can include images in Jupyter notebooks - see the Jupyter Markdown documentation. LaTeX Markdown cells support LaTeX syntax for typesetting mathematics. LaTeX is the leading tool for technical documents and presenting mathematics, and it is free. To typeset an inline equation, use: The term of interest in this case is $\exp(-2x) \sin(3 x^{4})$. which will appear as: 'The term of interest in this case is exp(−2x) sin(3x⁴).' For a displayed equation, from We wish to evaluate$$f(x) = \beta x^{3} \int_{0}^{2} g(x) \, dx$$when $\beta = 4$. we get: 'We wish to evaluate f(x) = βx³ ∫₀² g(x) dx when β = 4.' There are LaTeX commands for all the different mathematical symbols. If you see an example of mathematical typesetting in a notebook, you can double-click it in a Jupyter environment to see the syntax.
Consider a bounded domain $\Omega$ (with smooth boundary) in some Riemannian $n$-manifold $M^n$. Let $L$ be the operator $$ L=\Delta+V $$ where $\Delta$ is the Laplace–Beltrami operator on $M$ (so it is formally negative) and $V$ is some smooth potential (we are mostly interested in $V\geq 0$). A well-known result of Fischer-Colbrie and Schoen says that if $L$ has a positive solution in $\Omega$ then $\lambda_1(\Omega,L)$ is non-negative. That is, the first Dirichlet eigenvalue of $L$ is non-negative (i.e. with our sign convention $L \phi=-\lambda_1 \phi$ for some $\lambda_1\geq 0, \phi\neq 0$, $\phi=0$ on $\partial \Omega$). Suppose instead that we know only that $L$ has a solution $u$ in $\Omega$ such that both $\Omega^+=\lbrace u>0 \rbrace\cap \Omega \subset \Omega$ and $\Omega^-=\lbrace u<0 \rbrace\cap \Omega \subset \Omega$ are connected subsets. My question is: given this setup, is anything known about $\lambda_2(\Omega,L)$, the second Dirichlet eigenvalue? We note that, given this setup, if $n=2$ then the nodal line $\lbrace u=0 \rbrace\cap \Omega \subset \Omega$ must have a single component and be smooth. Hence, an argument with nodal lines should imply that if $u=0$ on $\partial \Omega$ then $\lambda_2(\Omega,L)\geq 0$. Also, Fischer-Colbrie and Schoen's result implies that for $n=1$, the one-dimensional problem, $\lambda_2(\Omega,L)\geq 0$.
I'm trying to use the First Isomorphism Theorem to show that $\mathbb{T}^1 \cong \mathbb{C}^*/\mathbb{R}_{>0}$ by constructing a surjective group homomorphism from the nonzero complex numbers to the circle group whose kernel is the set of positive reals. I haven't yet taken a course in complex variables, so I did some digging around and could only find a homomorphism from $\mathbb{R}^*$ to $\mathbb{T}^1$ defined by $f(\theta) = e^{i\theta}$. I don't really know what that means, so I'm struggling to construct any homomorphism $\phi : \mathbb{C}^* \to \mathbb{T}^1$, let alone one whose kernel is $\mathbb{R}_{>0}$. EDIT: Does $f(z) = \frac{z}{|z|}$ work? EDIT2: Probably not. I think the image of this is actually just $\{\pm 1\}$. I really have no idea how to work with complex numbers at all.
The first question to answer: which matrices satisfy $A^2 = I$ and $\det(A) = 1$? It suffices to note that since $A^2 - I = 0$, the minimal polynomial of $A$ must divide $x^2 - 1 = (x-1)(x + 1)$. Hence, $A$ must be diagonalizable with eigenvalues equal to $\pm 1$. Moreover, since the product of eigenvalues is equal to $\det(A)$ which is equal to $1$, the $-1$ eigenvalue must have even multiplicity. All together, we can characterize these matrices as those of the form$$A = S \pmatrix{-I_{2k}&0\\0&I_{n - 2k}}S^{-1}, \quad k = 0,1,\dots,\lfloor n/2 \rfloor$$ where $I_k$ denotes the $k \times k$ identity matrix and the matrices here are $n \times n$. The second question: which matrices satisfy $A^2 = -I$ and $\det(A) = -1$? In fact, there are no such real matrices. Because $A^2 + I = 0$, the (complex) eigenvalues of $A$ must solve $x^2 + 1 = 0$, which is to say that the eigenvalues of $A$ are $\pm i$. Because $A$ is a real matrix, its complex eigenvalues come in conjugate pairs. Thus, its determinant must have the form $\det(A) = (-i)^k(i)^k = 1 \neq -1$. An alternative explanation: the minimal polynomial $x^2 + 1$ of $A$ is an irreducible polynomial over $\Bbb R$, so the characteristic polynomial must have the form $\det(xI - A) = (x^2 + 1)^k$. Thus, we find that $A$ has even size, and that $\det(A) = \det(0I - A) = (0^2 + 1)^k = 1$. As I note in the comments above, there are no solutions satisfying $A^2 = 0$, and in fact no non-invertible solutions other than $A = 0$.
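A quick numerical illustration of the first characterization (a sketch; the particular $S$ and $k$ below are arbitrary choices): conjugating $\operatorname{diag}(-I_{2k}, I_{n-2k})$ by any invertible $S$ gives a matrix with $A^2 = I$ and $\det(A) = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 1                                   # arbitrary sizes for illustration

D = np.diag([-1.0] * (2 * k) + [1.0] * (n - 2 * k))
S = rng.normal(size=(n, n))                   # generically invertible
A = S @ D @ np.linalg.inv(S)

print(np.allclose(A @ A, np.eye(n)))          # True: A^2 = I
print(np.isclose(np.linalg.det(A), 1.0))      # True: det(A) = 1
```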
Is there a function in Mathematica that can be used to find the perturbation solution of an equation like $x^2 − 1 = \epsilon \,x$, $x − 2 = \epsilon \cosh(x)$ or $x^2 − 1 = \epsilon\, e^x$? Decide up to which power you would like to expand: pow = 4; Let's do one of the equations you mentioned as an example (bring all terms to one side and save as a single expression, eq in this case): eq = x - 2 - e Cosh[x]; Write an ansatz for the x solution with unknown coefficients: ansatz = Sum[a[i] e^i, {i, 0, pow}]; Substitute the ansatz into your expression and do a series expansion: expand = Series[eq /. x -> ansatz, {e, 0, pow}]; Solve the constraints of overall factors vanishing: vars = Table[a[i], {i, 0, pow}];sols = Solve[expand == 0, vars][[1]] // Simplify // Quiet; Finally, insert the solution into your ansatz to obtain the result: res = ansatz /. sols 2 + e Cosh[2] + e^2 Cosh[2] Sqrt[-1 + Cosh[2]^2] + 1/3 e^4 Cosh[2] Sqrt[-1 + Cosh[2]^2] (-3 + 8 Cosh[2]^2) + e^3 (-Cosh[2] + (3 Cosh[2]^3)/2) Don't forget to test numerically, whether your result is actually correct at the end: enum = 10^-10;xnum = x /.FindRoot[(eq /. e -> enum) == 0, {x, 2}, WorkingPrecision -> 60];(res /. e -> enum) - xnum -3.627605747396*10^-47 This shows that expanding to fourth order with an e= 10^-10 indeed consistently matches the result up to about 10^-50 accuracy, so the expansion was correct. Rinse and repeat for the other examples. PS: In cases where your equation admits several solutions you might have to be a little bit more careful, but the principle still stays the same. For these simple examples you could use InverseSeries. For example: InverseSeries[Series[(x^2 - 1)/x, {x, 1, 10}], e]InverseSeries[Series[(x - 2)/Cosh[x], {x, 2, 10}], e]InverseSeries[Series[(x^2 - 1)/E^x, {x, 1, 10}], e] You need to solve for e, and then do a series around the value of x when e is zero. The new in M12 function AsymptoticSolve can be used to find these perturbation expansions: AsymptoticSolve[x^2 - 1 == ϵ x, {x, 1}, {ϵ, 0, 7}] {{x -> 1 + ϵ/2 + ϵ^2/8 - ϵ^4/128 + ϵ^6/ 1024}} AsymptoticSolve[x - 2 == ϵ Cosh[x], {x, 2}, {ϵ, 0, 7}] {{x -> 2 + ϵ Cosh[2] + ϵ^2 Cosh[2] Sinh[2] + 1/2 ϵ^3 (Cosh[2]^3 + 2 Cosh[2] Sinh[2]^2) + 1/3 ϵ^4 (5 Cosh[2]^3 Sinh[2] + 3 Cosh[2] Sinh[2]^3) + 1/24 ϵ^5 (13 Cosh[2]^5 + 88 Cosh[2]^3 Sinh[2]^2 + 24 Cosh[2] Sinh[2]^4) + 1/15 ϵ^6 (47 Cosh[2]^5 Sinh[2] + 100 Cosh[2]^3 Sinh[2]^3 + 15 Cosh[2] Sinh[2]^5) + 1/720 ϵ^7 (541 Cosh[2]^7 + 7746 Cosh[2]^5 Sinh[2]^2 + 7800 Cosh[2]^3 Sinh[2]^4 + 720 Cosh[2] Sinh[2]^6)}} AsymptoticSolve[x^2 - 1 == ϵ Exp[x], {x, 1}, {ϵ, 0, 7}]AsymptoticSolve[x^2 - 1 == ϵ Exp[x], {x, -1}, {ϵ, 0, 7}] {{x -> 1 + (E ϵ)/2 + (E^2 ϵ^2)/8 + (E^3 ϵ^3)/16 + ( 13 E^4 ϵ^4)/384 + (E^5 ϵ^5)/48 + (69 E^6 ϵ^6)/ 5120 + (841 E^7 ϵ^7)/92160}} {{x -> -1 - ϵ/(2 E) + (3 ϵ^2)/(8 E^2) - (7 ϵ^3)/( 16 E^3) + (235 ϵ^4)/(384 E^4) - (121 ϵ^5)/(128 E^5) + ( 7959 ϵ^6)/(5120 E^6) - (245953 ϵ^7)/(92160 E^7)}} According to standard perturbation theory for static Hamiltonians of this type, the ground state of the system may be approximated by: \begin{align} |0\rangle_V &= |0\rangle_H - \epsilon \sum_{\alpha\neq 0} \frac{U_{\alpha 0}}{\alpha} |\alpha\rangle_H \\ \notag &\simeq N^{-\frac12} \left ( |0\rangle_H + \beta_1 |1\rangle_H + \beta_2 |2\rangle_H \right)\,, \end{align} where $N=1+\beta_1^2+\beta_2^2$, $\beta_k = -\epsilon U_{\alpha 0}/\alpha$ and $U_{\alpha 0}={}_H\langle \alpha|U| 0 \rangle_H$, $\alpha=1,2$. Hope this helps to formulate your problem.
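For comparison, here is a rough SymPy version of the same ansatz-and-match procedure, solving order by order for one of the equations in the question. It is a sketch under the stated assumptions (SymPy available, expansion truncated at fourth order), not a translation of the Mathematica code above.

```python
import sympy as sp

eps = sp.symbols('epsilon')
order = 4

# Equation x - 2 - epsilon*cosh(x) = 0, solved perturbatively around epsilon = 0.
a = [sp.Symbol(f'a{i}') for i in range(order + 1)]
ansatz = sum(a[i] * eps**i for i in range(order + 1))
expr = (ansatz - 2 - eps * sp.cosh(ansatz)).series(eps, 0, order + 1).removeO()

solution = {}
for i in range(order + 1):
    # Coefficient of eps**i, with the lower-order coefficients already substituted.
    eq_i = sp.expand(expr.subs(solution)).coeff(eps, i)
    solution[a[i]] = sp.solve(eq_i, a[i])[0]

result = sp.expand(ansatz.subs(solution))
print(result)   # 2 + epsilon*cosh(2) + epsilon**2*cosh(2)*sinh(2) + ...
```

The first few terms agree with the output of AsymptoticSolve and of the manual Series/Solve approach shown above.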
Ineffable cardinal Ineffable cardinals were introduced by Jensen and Kunen in [1] and arose out of their study of $\diamondsuit$ principles. An uncountable regular cardinal $\kappa$ is ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ is stationary. Equivalently an uncountable regular $\kappa$ is ineffable if and only if for every function $F:[\kappa]^2\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^2$ is constant [1]. This second characterization strengthens a characterization of weakly compact cardinals which requires that there exist such an $H$ of size $\kappa$. If $\kappa$ is ineffable, then $\diamondsuit_\kappa$ holds and there cannot be a slim $\kappa$-Kurepa tree [1] . A $\kappa$-Kurepa tree is a tree of height $\kappa$ having levels of size less than $\kappa$ and at least $\kappa^+$-many branches. A $\kappa$-Kurepa tree is slim if every infinite level $\alpha$ has size at most $|\alpha|$. Contents Ineffable cardinals and the constructible universe Ineffable cardinals are downward absolute to $L$. In $L$, an inaccessible cardinal $\kappa$ is ineffable if and only if there are no slim $\kappa$-Kurepa trees. Thus, for inaccessible cardinals, in $L$, ineffability is completely characterized using slim Kurepa trees. [1] Ramsey cardinals are stationary limits of completely ineffable cardinals, they are weakly ineffable, but the least Ramsey cardinal is not ineffable. Ineffable Ramsey cardinals are limits of Ramsey cardinals, because ineffable cardinals are $Π^1_2$-indescribable and being Ramsey is a $Π^1_2$-statement. The least strongly Ramsey cardinal also is not ineffable, but super weakly Ramsey cardinals are ineffable. $1$-iterable (=weakly Ramsey) cardinals are weakly ineffable and stationary limits of completely ineffable cardinals. The least $1$-iterable cardinal is not ineffable. [2, 4] Relations with other large cardinals Measurable cardinals are ineffable and stationary limits of ineffable cardinals. $\omega$-Erdős cardinals are stationary limits of ineffable cardinals, but not ineffable since they are $\Pi_1^1$-describable. [3] Ineffable cardinals are $\Pi^1_2$-indescribable [1]. Ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof) For a cardinal $κ=κ^{<κ}$, $κ$ is ineffable iff it is normal 0-Ramsey. [6] Weakly ineffable cardinal Weakly ineffable cardinals (also called almost ineffable) were introduced by Jensen and Kunen in [1] as a weakening of ineffable cardinals. An uncountable regular cardinal $\kappa$ is weakly ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ has size $\kappa$. If $\kappa$ is weakly ineffable, then $\diamondsuit_\kappa$ holds. Weakly ineffable cardinals are downward absolute to $L$. [1] Weakly ineffable cardinals are $\Pi_1^1$-indescribable. [1] Ineffable cardinals are limits of weakly ineffable cardinals. Weakly ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof) For a cardinal $κ=κ^{<κ}$, $κ$ is weakly ineffable iff it is genuine 0-Ramsey. [6] Subtle cardinal Subtle cardinals were introduced by Jensen and Kunen in [1] as a weakening of weakly ineffable cardinals. 
A uncountable regular cardinal $\kappa$ is subtle if for every for every $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ and every closed unbounded $C\subseteq\kappa$ there are $\alpha<\beta$ in $C$ such that $A_\beta\cap\alpha=A_\alpha$. If $\kappa$ is subtle, then $\diamondsuit_\kappa$ holds. Subtle cardinals are downward absolute to $L$. [1] Weakly ineffable cardinals are limits of subtle cardinals. [1] Subtle cardinals are stationary limits of totally indescribable cardinals. [1, 7] The least subtle cardinal is not weakly compact as it is $\Pi_1^1$-describable. $\alpha$-Erdős cardinals are subtle. [1] If $δ$ is a subtle cardinal, the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary.[8] for every class $\mathcal{A}$, in every club $B ⊆ δ$ there is $κ$ such that $\langle V_δ, \mathcal{A} ∩ V_δ \rangle \models \text{“$κ$ is $\mathcal{A}$-shrewd.”}$.[9] (The set of cardinals $κ$ below $δ$ that are $\mathcal{A}$-shrewd in $V_δ$ is stationary.) there is an $\eta$-shrewd cardinal below $δ$ for all $\eta < δ$.[9] Ethereal cardinal Ethereal cardinals were introduced by Ketonen in [10] (information in this section from there) as a weakening of subtle cardinals. Definition: A regular cardinal $κ$ is called etherealif for every club $C$ in $κ$ and sequence $(S_α|α < κ)$ of sets such that for $α < κ$, $|S_α| = |α|$ and $S_α ⊆ α$, there are elements $α, β ∈ C$ such that $α < β$ and $|S_α ∩ S_β| = |α|$. I.e., symbolically(?): $$κ \text{ – ethereal} \overset{\text{def}}{⟺} \left( κ \text{ – regular} ∧ \left( \forall_{C \text{ – club in $κ$}} \forall_{S : κ → \mathcal{P}(κ)} \left( \forall_{α < κ} |S_α| = |α| ∧ S_α ⊆ α \right) ⟹ \left( \exists_{α, β ∈ C} α < β ∧ |S_α ∩ S_β| = |α| \right) \right) \right)$$ Properties: Every subtle cardinal is obviously ethereal. Every ethereal cardinal is weakly inaccessible. A strongly inaccessible cardinal is ethereal if and only if it is subtle. If $κ$ is ethereal and $2^\underset{\smile}{κ} = κ$, then $♢_κ$ holds (where $2^\underset{\smile}{κ} = \bigcup \{ 2^α | α < κ \}$ is the weak power of $κ$). To be expanded. $n$-ineffable cardinal The $n$-ineffable cardinals for $2\leq n<\omega$ were introduced by Baumgartner in [11] as a strengthening of ineffable cardinals. A cardinal is $n$-ineffable if for every function $F:[\kappa]^n\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^n$ is constant. $2$-ineffable cardinals are exactly the ineffable cardinals. an $n+1$-ineffable cardinal is a stationary limit of $n$-ineffable cardinals. [11] A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$. a $1$-iterable cardinal is a stationary limit of totally ineffable cardinals. (this follows from material in [4]) Helix (Information in this subsection come from [7] unless noted otherwise.) For $k \geq 1$ we define: $\mathcal{P}(x)$ is the powerset (set of all subsets) of $x$. $\mathcal{P}_k(x)$ is the set of all subsets of $x$ with exactly $k$ elements. $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$ is regressive iff for all $A \in \mathcal{P}_k(\lambda)$, we have $f(A) \subseteq \min(A)$. $E$ is $f$-homogenous iff $E \subseteq \lambda$ and for all $B,C \in \mathcal{P}_k(E)$, we have $f(B) \cap \min(B \cup C) = f(C) \cap \min(B \cup C)$. 
$\lambda$ is $k$-subtle iff $\lambda$ is a limit ordinal and for all clubs $C \subseteq \lambda$ and regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogeneous $A \in \mathcal{P}_{k+1}(C)$. $\lambda$ is $k$-almost ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogeneous $A \subseteq \lambda$ of cardinality $\lambda$. $\lambda$ is $k$-ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogeneous stationary $A \subseteq \lambda$. $0$-subtle, $0$-almost ineffable and $0$-ineffable cardinals can be defined simply as uncountable regular cardinals, because for $k \geq 1$ all three properties imply being an uncountable regular cardinal. For $k \geq 1$, if $\kappa$ is a $k$-ineffable cardinal, then $\kappa$ is $k$-almost ineffable and the set of $k$-almost ineffable cardinals is stationary in $\kappa$. For $k \geq 1$, if $\kappa$ is a $k$-almost ineffable cardinal, then $\kappa$ is $k$-subtle and the set of $k$-subtle cardinals is stationary in $\kappa$. For $k \geq 1$, if $\kappa$ is a $k$-subtle cardinal, then the set of $(k-1)$-ineffable cardinals is stationary in $\kappa$. For $k \geq n \geq 0$, all $k$-ineffable cardinals are $n$-ineffable, all $k$-almost ineffable cardinals are $n$-almost ineffable, and all $k$-subtle cardinals are $n$-subtle.
Completely ineffable cardinal
Completely ineffable cardinals were introduced in [5] as a strengthening of ineffable cardinals. A collection $R\subseteq P(\kappa)$ is a stationary class if (i) $R\neq\emptyset$, (ii) every $A\in R$ is stationary in $\kappa$, and (iii) whenever $A\in R$ and $B\supseteq A$, then $B\in R$. A cardinal $\kappa$ is completely ineffable if there is a stationary class $R$ such that for every $A\in R$ and every $F:[A]^2\to2$, there is $H\in R$ such that $F\upharpoonright [H]^2$ is constant. Relations: Completely ineffable cardinals are downward absolute to $L$. [5] Completely ineffable cardinals are limits of ineffable cardinals. [5] There are stationarily many completely ineffable, greatly Erdős cardinals below any Ramsey cardinal. [13] The following are equivalent: [6] (1) $\kappa$ is completely ineffable; (2) $\kappa$ is coherent $<\omega$-Ramsey; (3) $\kappa$ has the $\omega$-filter property. Every completely ineffable cardinal is a stationary limit of $<\omega$-Ramsey cardinals. [6] Completely Ramsey cardinals and $\omega$-Ramsey cardinals are completely ineffable. [6] $\omega$-Ramsey cardinals are limits of completely ineffable cardinals. [2]
References
Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$. Unpublished, 1969.
Holy, Peter and Schlicht, Philipp. A hierarchy of Ramsey-like cardinals. Fundamenta Mathematicae 242:49-74, 2018.
Jech, Thomas J. Set Theory. Third edition, Springer-Verlag, Berlin, 2003. (The third millennium edition, revised and expanded.)
Gitman, Victoria. Ramsey-like cardinals. The Journal of Symbolic Logic 76(2):519-540, 2011.
Abramson, Fred and Harrington, Leo and Kleinberg, Eugene and Zwicker, William. Flipping properties: a unifying thread in the theory of large cardinals. Ann. Math. Logic 12(1):25-58, 1977.
Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018.
Friedman, Harvey M. Subtle cardinals and linear orderings. 1998.
Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms. 2014.
Rathjen, Michael. The art of ordinal analysis. 2006.
Ketonen, Jussi. Some combinatorial principles. Trans. Amer. Math. Soc. 188:387-394, 1974.
Baumgartner, James. Ineffability properties of cardinals I. In: Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, pp. 109-130. Colloq. Math. Soc. János Bolyai, Vol. 10, Amsterdam, 1975.
Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann. Pure Appl. Logic 162(11):863-902, 2011.
According to the theory of general relativity, if you compress an object of mass $M$ to a sphere with radius equal to or smaller than the Schwarzschild radius ($R_s$) for that object: $$ R_s = \displaystyle\frac{2GM}{c^2} $$ then the escape velocity for a particle at a distance $R_s$ from the center of mass of $M$ will be equal to $c$, the speed of light. Since no object can exceed the speed of light, you have then created a black hole. The surface of this sphere of radius $R_s$ is called the event horizon of the created black hole. Any particle located at or below this surface will never be able to escape the gravitational field of the (compressed) mass $M$. One very curious fact is that this same radius appears in the Newtonian theory of gravitation. In fact, let's compute the escape velocity for a particle of mass $m$ in the presence of the gravitational field of a spherical body of mass $M$. Since energy is conserved as $m$ escapes the gravitational field of $M$ (the gravitational force is conservative), we have that: $$ E_i = E_f \Longrightarrow \displaystyle\frac{1}{2}mv_i^2 - \displaystyle\frac{GMm}{r_i} = \displaystyle\frac{1}{2}mv_f^2 - \displaystyle\frac{GMm}{r_f} $$ where $v_i$ and $v_f$ are the initial and final velocities of the particle $m$, and $r_i$ and $r_f$ are the initial and final distances between $m$ and the center of mass of $M$, respectively. The escape velocity ($v_i = v_e$) which $m$ needs to have initially in order to escape the gravitational field of $M$ is such that, when it reaches "infinity", its velocity $v_f$ is zero ("infinity" here means an $r_f$ large enough that the gravitational force of $M$ on $m$ becomes negligible). If $v_f \neq 0$, then $m$ initially had more kinetic energy than necessary to escape from $M$. From these facts we have that: $$ \frac{1}{2}mv_e^2 - \displaystyle\frac{GMm}{r_i} = \frac{1}{2}m(0)^2 - \displaystyle\frac{GMm}{\infty} = 0 \Longrightarrow v_e = \sqrt{\displaystyle\frac{2GM}{r_i}} $$ If $m$ is initially at a distance $r_i = 2GM/c^2$ from $M$, then its escape velocity is: $$ v_e = \sqrt{\displaystyle\frac{2GM}{r_i}} = \sqrt{2GM \left(\displaystyle\frac{c^2}{2GM}\right)} = c $$ In other words, Newtonian gravity predicts that an object of mass $m$ situated at a distance $r_i = R_s = 2GM/c^2$ from an object of mass $M$ must have an initial velocity equal to or larger than the speed of light ($c$) to escape the gravitational field of $M$. Assuming that a photon has mass (within Newtonian gravity this would have to be the case, since in that theory only particles with mass interact gravitationally, and we know for a fact that light is bent by gravitational fields), light itself would not escape such a gravitational field. Before you jump to the conclusion that there are such things as black holes in Newtonian gravity, notice that, given enough fuel and a sufficiently powerful engine, any object (say, with mass $m$) can escape any gravitational field in a Newtonian universe. For that to happen, the object need not exceed its escape velocity at any point of its escaping trajectory; one could simply attach the mentioned engine to it and have this engine exert a force larger than the gravitational force due to $M$ at all times in the escaping process. This will get $m$ to escape the gravitational field of $M$.
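As a quick numerical illustration of the formulas above (a minimal sketch in SI units with rounded constants, not part of the original argument), one can check that the Newtonian escape velocity evaluated at $r_i = R_s$ indeed comes out as $c$:

```python
# Minimal sketch (SI units, rounded constants): Schwarzschild radius and Newtonian
# escape velocity for one solar mass.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(M):
    """R_s = 2 G M / c^2."""
    return 2.0 * G * M / c**2

def escape_velocity(M, r):
    """Newtonian escape velocity v_e = sqrt(2 G M / r)."""
    return math.sqrt(2.0 * G * M / r)

R_s = schwarzschild_radius(M_sun)
print(f"R_s for one solar mass: {R_s / 1e3:.2f} km")                             # ~2.95 km
print(f"v_e at r = R_s, in units of c: {escape_velocity(M_sun, R_s) / c:.6f}")   # ~1.0
```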
In the spirit of the classic four fours, I wonder what's the optimal set of four numbers? Your goal is to make the most consecutive integers using four digits of your choice. Pick four: $0,1,2,3,4,5,6,7,8,9$ (you can pick multiple instances of the same digit). When constructing an integer:
all of your four candidates must be used exactly once (order/placement of digits is irrelevant)
you may use basic arithmetic operations $+,-,\times,\div$ and parentheses $()$
you may use $a^b$ and $\sqrt[a]{b}$, but at the expense of 2 numbers, as you can see
you may not form new numbers, i.e. $ab$ is not allowed
If we were to use four $4$s, the best we could do would be up to $9$:
0 = 4 ÷ 4 × 4 − 4
1 = 4 ÷ 4 + 4 − 4
2 = 4 − (4 + 4) ÷ 4
3 = (4 × 4 − 4) ÷ 4
4 = 4 + 4 × (4 − 4)
5 = (4 × 4 + 4) ÷ 4
6 = (4 + 4) ÷ 4 + 4
7 = 4 + 4 − 4 ÷ 4
8 = 4 ÷ 4 × 4 + 4
9 = 4 ÷ 4 + 4 + 4
*10 = 4 ÷ √4 + 4 × √4
*10 = (44 − 4) ÷ 4
Number 10 can't be done and is an example of failing: the two starred attempts would require either expenseless roots ($\sqrt{4}$ isn't allowed, though $\sqrt[2]{4}$ is, which requires you to use both a $4$ and a $2$) or number formation, which isn't allowed either. Zero does not necessarily need to be included; you can start at either $0$ or $1$. For the purposes of freedom of puzzling, if you think you can top your solution for a chosen set of digits by starting at some other positive integer, you can add that to your answer below your initial solution. (I suspect this is unlikely.) If you want, you can extend your consecutive list to negative integers, but this is strictly optional and not necessary in any way, other than for the purposes of fulfillment and mathematical euphoria.
Example
There is an example on Puzzling.SE using digits $2,2,4,5$: but this can be expanded, since the given example uses only basic arithmetic operations, not including potentiation and roots. I also suspect it could be done better using another set. I tried this by hand and I'm stuck at number $29$ using this example set, and at number $34$ using $9,8,3,2$.
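Not part of the question, but a brute-force search along these lines is one way to compare candidate digit sets. The sketch below (hypothetical helper names; floating-point arithmetic with a small tolerance, so near-integer hits deserve a check by hand) enumerates all ways to combine four digits with $+,-,\times,\div$, $a^b$ and $\sqrt[a]{b}$ and reports the length of the consecutive run starting at $0$:

```python
# Brute-force sketch: which integers are reachable from four digits under the
# stated rules?  Powers and roots are binary operations consuming both operands.
import math

def combine(a, b):
    """All values obtainable from the ordered pair (a, b) with one allowed operation."""
    out = [a + b, a - b, a * b]
    if b != 0:
        out.append(a / b)
    # a ** b (power) and b ** (1/a) (a-th root); both consume two numbers.
    bases = [a, b]
    exps = [b] + ([1.0 / a] if a != 0 else [])
    for base, exp in zip(bases, exps):
        if abs(exp) > 20 or abs(base) > 1e6:
            continue                                   # prune absurdly large values
        if base < 0 and not float(exp).is_integer():
            continue                                   # avoid complex results
        try:
            out.append(base ** exp)
        except (OverflowError, ZeroDivisionError):
            pass
    return out

def reachable(digits, limit=100, tol=1e-9):
    """Integers in [0, limit] expressible from the given multiset of digits."""
    found = set()

    def search(values):
        if len(values) == 1:
            v = values[0]
            if math.isfinite(v) and abs(v - round(v)) < tol and 0 <= round(v) <= limit:
                found.add(int(round(v)))
            return
        for i in range(len(values)):
            for j in range(len(values)):
                if i == j:
                    continue
                rest = [values[k] for k in range(len(values)) if k not in (i, j)]
                for r in combine(values[i], values[j]):
                    search(rest + [r])

    search([float(d) for d in digits])
    return found

def run_from_zero(found):
    n = 0
    while n in found:
        n += 1
    return n - 1

for digits in [(4, 4, 4, 4), (2, 2, 4, 5), (9, 8, 3, 2)]:
    print(digits, "-> consecutive 0 ..", run_from_zero(reachable(digits)))
```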
Summary: What happens to the invariant mass of an object when it gets closer to or further from a gravitational body?
In Special Relativity, you learn that invariant mass is computed by taking the difference between energy squared and momentum squared. (For simplicity, I'm setting c = 1.) [tex] m^2 = E^2 - \vec{p}^2 [/tex] This can also be written with the Minkowski metric as: [tex] m^2 = \eta_{\mu\nu} p^\mu p^\nu [/tex] More generally, if there is a different metric (for example Schwarzschild), you would write it as: [tex] m^2 = g_{\mu\nu} p^\mu p^\nu [/tex] Now the question is, if invariant mass does not change from one metric to the other, you get the equation: [tex] 0 = (g_{\mu\nu} - \eta_{\mu\nu})p^\mu p^\nu [/tex] This seems to give unphysical results. I solved for a photon in the Schwarzschild metric, and the only physical solution available is if the Schwarzschild radius is 0. So this seems to imply that invariant mass (or the lack thereof) is not invariant under gravitational fields. Any help here would be much appreciated. Thank you.
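For what it's worth, here is a minimal numerical sketch (geometric units $G=c=1$, Schwarzschild coordinates, signature $(+,-,-,-)$, a radially moving photon); it evaluates $m^2=g_{\mu\nu}p^\mu p^\nu$ with components $p^\mu$ that satisfy the null condition in the curved metric, and the norm comes out zero for any Schwarzschild radius, which suggests the tension comes from reusing the flat-space components in both metrics rather than from any failure of invariance:

```python
# Sketch: m^2 = g_{mu nu} p^mu p^nu for a radial photon in the Schwarzschild metric.
# The components below are chosen to satisfy the null condition in these coordinates.
import numpy as np

def schwarzschild_metric(r, r_s, theta=np.pi / 2):
    """g_{mu nu} in coordinates (t, r, theta, phi), signature (+,-,-,-)."""
    f = 1.0 - r_s / r
    return np.diag([f, -1.0 / f, -r**2, -(r * np.sin(theta))**2])

r, r_s, E = 10.0, 2.0, 1.0           # sample radius, Schwarzschild radius, photon "energy"
f = 1.0 - r_s / r
p = np.array([E / f, E, 0.0, 0.0])   # radial null vector: p^r = f * p^t
g = schwarzschild_metric(r, r_s)
print(p @ g @ p)                     # ~0 up to rounding: the photon is still massless
```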
Jónsson cardinal
Jónsson cardinals are named after Bjarni Jónsson, a student of Tarski working in universal algebra. In 1962, he asked whether or not every algebra of cardinality $\kappa$ has a proper subalgebra of the same cardinality. The cardinals $\kappa$ that satisfy this property are now called Jónsson cardinals. An algebra of cardinality $\kappa$ is simply a set $A$ of cardinality $\kappa$ together with finitely many functions (each with finitely many inputs) $f_0,f_1,\ldots,f_n$ under which $A$ is closed. A subalgebra of that algebra is a set $B\subseteq A$ that is closed under each $f_k$ for $k\leq n$.
Equivalent Definitions
There are several equivalent definitions of Jónsson cardinals.
Partition Property
A cardinal $\kappa$ is Jónsson iff the partition property $\kappa\rightarrow [\kappa]_\kappa^{<\omega}$ holds, i.e., given any function $f:[\kappa]^{<\omega}\to\kappa$ we can find a subset $H\subseteq\kappa$ of order type $\kappa$ such that $f``[H]^n\neq\kappa$ for every $n<\omega$. [1]
Substructure Characterization
A cardinal $\kappa$ is Jónsson iff given any $A$ there exists an elementary substructure $\langle X,\in, X\cap A\rangle\prec\langle V_\kappa,\in,A\rangle$ with $|X|=\kappa$ and $X\cap\kappa\neq\kappa$. A cardinal $\kappa$ is Jónsson iff any structure with universe of cardinality $\kappa$ has a proper elementary substructure with universe also having cardinality $\kappa$. [1]
Embedding Characterization
A cardinal $\kappa$ is Jónsson iff for every $\theta>\kappa$ there exists a transitive set $M$ with $\kappa\in M$ and an elementary embedding $j:M\to H_\theta$ such that $j(\kappa)=\kappa$ and $\text{crit }j<\kappa$, iff for every $\theta>\kappa$ there exists a transitive set $M$ with $\kappa\in M$ and an elementary embedding $j:M\to V_\theta$ such that $j(\kappa)=\kappa$ and $\text{crit }j<\kappa$. Interestingly, if one such $\theta>\kappa$ has this property, then every $\theta>\kappa$ has this property as well.
Abstract Algebra Characterization
In terms of abstract algebra, $\kappa$ is Jónsson iff any algebra $A$ of size $\kappa$ has a proper subalgebra of size $\kappa$.
Properties
All the following facts can be found in [1]: $\aleph_0$ is not Jónsson. If $\kappa$ isn't Jónsson then neither is $\kappa^+$. If $2^\kappa=\kappa^+$ then $\kappa^+$ isn't Jónsson. If $\kappa$ is regular then $\kappa^+$ isn't Jónsson (therefore $\kappa^{++}$ is never Jónsson, and if $\kappa$ is weakly inaccessible then $\kappa^+$ is never Jónsson). A singular limit of measurables is Jónsson. The least Jónsson is either weakly inaccessible or has cofinality $\omega$. $\aleph_{\omega+1}$ is not Jónsson. It is still an open question whether there is some known large cardinal axiom that implies the consistency of $\aleph_\omega$ being Jónsson.
Relations to other large cardinal notions
Jónsson cardinals have a lot of consistency strength: Jónsson cardinals are equiconsistent with Ramsey cardinals. [2] The existence of a Jónsson cardinal $\kappa$ implies the existence of $x^\sharp$ for every $x\in V_\kappa$ (and therefore for every real number $x$, because $\kappa$ is uncountable). But in terms of size, they're (ostensibly) quite small: A Jónsson cardinal need not be regular (assuming the consistency of a measurable cardinal). Every Ramsey cardinal is inaccessible and Jónsson. [3] Every weakly inaccessible Jónsson cardinal is weakly hyper-Mahlo. [4] It's an open question whether or not every inaccessible Jónsson cardinal is weakly compact.
Jónsson successors of singulars
As mentioned above, $\aleph_{\omega+1}$ is not Jónsson (this is due to Shelah). The question is then whether it is possible for any successor of a singular cardinal to be Jónsson. Here is a (non-exhaustive) list of things known: If $0\neq\gamma<|\eta|$ then $\aleph_{\eta+\gamma+1}$ is not Jónsson. [5] If there exists a Jónsson successor of a singular cardinal then $0^\dagger$ exists. [6]
Jónsson cardinals and the core model
Weak covering holds at every Jónsson cardinal, i.e., $\kappa^{+K}=\kappa^+$ for every Jónsson cardinal. If $\kappa$ is regular Jónsson then the set of regular $\alpha<\kappa$ satisfying weak covering is stationary in $\kappa$. If we assume that there is no sharp for a strong cardinal (known as "$0^\P$ does not exist"), then: For a Jónsson cardinal $\kappa$, $A^\sharp$ exists for every $A\subseteq\kappa$.
References
Mitchell, William J. Jónsson Cardinals, Erdős Cardinals, and the Core Model. J. Symbolic Logic, 1997.
Kanamori, Akihiro. The Higher Infinite. Second edition, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings; paperback reprint of the 2003 edition.)
Shelah, Saharon. Cardinal Arithmetic. Oxford Logic Guides 29, 1994.
Tryba, Jan. On Jónsson cardinals with uncountable cofinality. Israel Journal of Mathematics 49(4), 1983.
Donder, Hans-Dieter and Koepke, Peter. On the Consistency Strength of 'Accessible' Jónsson Cardinals and of the Weak Chang Conjecture. Annals of Pure and Applied Logic, 1998.
Welch, Philip. Some remarks on the maximality of Inner Models. Logic Colloquium, 1998.
How can we evaluate the following? $$\int e^{\frac{y}{x}}\ \mathrm dy$$ An explanation of the answer would be helpful. The answer I got is $x e^{y/x}$, but I am not sure about the steps used for obtaining it.
Writing the antiderivative as an integral from $-\infty$ (which converges provided $x>0$) and substituting $t=xu$: $$\begin{align} \int_{-\infty}^{y} e^{\frac{t}{x}}\,\mathrm dt &= \int_{-\infty}^{y/x} e^{u}\,x\,\mathrm du \\ &= x\,\Bigl[e^{u}\Bigr]_{u=-\infty}^{y/x} \\ &= x\, e^{\frac{y}{x}} \end{align}$$ Since this is a task of finding a primitive, you can always check your candidate answer by taking the (partial, in your case) derivative.
First note that $$\int a^t\ \mathrm dt=\frac{a^t}{\ln a}+C$$ Therefore $$\int e^{\frac{y}{x}}\ \mathrm dy=\int \left(e^{\frac{1}{x}}\right)^y\ \mathrm dy$$ $$=\frac{\left(e^{\frac{1}{x}}\right)^y}{\ln e^{\frac{1}{x}}}+C=xe^{\frac{y}{x}}+C$$
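A quick symbolic sanity check of the result (a sketch assuming sympy is available):

```python
# Verify the antiderivative of exp(y/x) with respect to y, treating x as a constant.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
antiderivative = sp.integrate(sp.exp(y / x), y)
print(antiderivative)                   # x*exp(y/x)
print(sp.diff(x * sp.exp(y / x), y))    # exp(y/x), confirming the answer
```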
This is a talk for the New York Set Theory Seminar on March 1, 2013. This talk will be based on my recent paper with C. D. A. Evans, Transfinite game values in infinite chess. Infinite chess is chess played on an infinite chessboard. Since checkmate, when it occurs, does so after finitely many moves, this is technically what is known as an open game, and is therefore subject to the theory of open games, including the theory of ordinal game values. In this talk, I will give a general introduction to the theory of ordinal game values for open games, before diving into several examples illustrating high transfinite game values in infinite chess. The supremum of these values is the omega one of chess, denoted by $\omega_1^{\mathfrak{Ch}}$ in the context of finite positions and by $\omega_1^{\mathfrak{Ch}_{\hskip-1.5ex {\ \atop\sim}}}$ in the context of all positions, including those with infinitely many pieces. For lower bounds, we have specific positions with transfinite game values of $\omega$, $\omega^2$, $\omega^2\cdot k$ and $\omega^3$. By embedding trees into chess, we show that there is a computable infinite chess position that is a win for white if the players are required to play according to a deterministic computable strategy, but which is a draw without that restriction. Finally, we prove that every countable ordinal arises as the game value of a position in infinite three-dimensional chess, and consequently the omega one of infinite three-dimensional chess is as large as it can be, namely, true $\omega_1$.
The Wholeness Axioms
The wholeness axioms, proposed by Paul Corazza [1, 2], occupy a high place in the upper stratosphere of the large cardinal hierarchy, intended as slight weakenings of the Kunen inconsistency, but similar in spirit. The wholeness axioms are formalized in the language $\{\in,j\}$, augmenting the usual language of set theory $\{\in\}$ with an additional unary function symbol $j$ to represent the elementary embedding. The base theory ZFC is expressed only in the smaller language $\{\in\}$. Corazza's original proposal, which we denote by $\text{WA}_0$, asserts that $j$ is a nontrivial amenable elementary embedding from the universe to itself, without adding formulas containing $j$ to the separation and replacement axioms. Elementarity is expressed by the scheme $\varphi(x)\iff\varphi(j(x))$, where $\varphi$ runs through the formulas of the usual language of set theory; nontriviality is expressed by the sentence $\exists x\, j(x)\not=x$; and amenability is simply the assertion that $j\upharpoonright A$ is a set for every set $A$. Amenability in this case is equivalent to the assertion that the separation axiom holds for $\Delta_0$ formulae in the language $\{\in,j\}$. The wholeness axiom $\text{WA}$, also denoted $\text{WA}_\infty$, asserts in addition that the full separation axiom holds in the language $\{\in,j\}$. These two axioms are the endpoints of the hierarchy of axioms $\text{WA}_n$, asserting increasing amounts of the separation axiom. Specifically, the wholeness axiom $\text{WA}_n$, where $n$ is amongst $0,1,\ldots,\infty$, consists of the following: (elementarity) All instances of $\varphi(x)\iff\varphi(j(x))$ for $\varphi$ in the language $\{\in,j\}$. (separation) All instances of the Separation Axiom for $\Sigma_n$ formulae in the full language $\{\in,j\}$. (nontriviality) The axiom $\exists x\,j(x)\not=x$. Clearly, this resembles the Kunen inconsistency. What is missing from the wholeness axiom schemes, and what figures prominently in Kunen's proof, are the instances of the replacement axiom in the full language with $j$. In particular, it is the replacement axiom in the language with $j$ that allows one to define the critical sequence $\langle \kappa_n\mid n\lt\omega\rangle$, where $\kappa_{n+1}=j(\kappa_n)$, which figures in all the proofs of the Kunen inconsistency. Thus, none of the proofs of the Kunen inconsistency can be carried out with $\text{WA}$, and indeed, in every model of $\text{WA}$ the critical sequence is unbounded in the ordinals. The hierarchy of wholeness axioms is strictly increasing in strength, if consistent. [3] If $j:V_\lambda\to V_\lambda$ witnesses a rank-into-rank cardinal, then $\langle V_\lambda,\in,j\rangle$ is a model of the wholeness axiom. Axioms $\mathrm{I}_4^n$ for natural numbers $n$ (starting from $0$) are an attempt to measure the gap between $\mathrm{I}_3$ and $\mathrm{WA}$. Each of these axioms asserts the existence of a transitive model of $\mathrm{ZFC} + \mathrm{WA}$ with additional, stronger and stronger properties. $\mathrm{I}_3(\kappa)$ is equivalent to the existence of an $\mathrm{I}_4(\kappa)$-coherent set of embeddings. On the other hand, it is not known whether the $\mathrm{I}_4^n$ axioms really increase in consistency strength, nor whether it is possible in $\mathrm{ZFC}$ that $\forall_{n\in\omega}\, \mathrm{I}_4^n(\kappa) \land \neg \mathrm{I}_3(\kappa)$. [2] If the wholeness axiom is consistent with $\text{ZFC}$, then it is consistent with $\text{ZFC+V=HOD}$. [3]
References
Corazza, Paul. The Wholeness Axiom and Laver sequences. Annals of Pure and Applied Logic, pp. 157-260, October 2000.
Corazza, Paul. The gap between ${\rm I}_3$ and the wholeness axiom. Fund. Math. 179(1):43-60, 2003.
Hamkins, Joel David. The wholeness axioms and V=HOD. Arch. Math. Logic 40(1):1-8, 2001.
Let $H^{(j)}$ and $G^{(j)}$ be Banach spaces for $j\in\{1,\dots,n\}$. Call norms $\|\cdot\|_{H}$ and $\|\cdot\|_{G}$ on the algebraic tensor products $H:=\bigotimes_{j=1}^n H^{(j)}$ and $G:=\bigotimes_{j=1}^n G^{(j)}$ uniform if the operator norm satisfies$$\|\bigotimes_{j=1}^n A^{(j)}\|_{H\to G}=\prod_{j=1}^n \|A^{(j)}\|_{H^{(j)}\to G^{(j)}}.$$ Is there an extensive list of pairs of spaces with uniform crossnorms? (Preferably with a focus on function spaces; Lebesgue, Sobolev, Hoelder would already be great) Of course, the results for tensor products of well-known spaces depend on the norms that we equip these tensor products with. For example, it would be good to know if $C(\Omega_1)\otimes C(\Omega_2)$ equipped with the $C(\Omega_1\times\Omega_2)$ norm and $H^1(\Omega_3)\otimes H^1(\Omega_4)$ equipped with the $H^{1}_{\text{mix}}(\Omega_3\times\Omega_4)$ norm are uniform, and if these norms actually turn the algebraic tensor products into the spaces $C(\Omega_1\times\Omega_2)$ and $H^{1}_{\text{mix}}(\Omega_3\times\Omega_4)$, respectively (by closure). Posting any specific results instead of a reference would be appreciated too; I will keep track in the list below: If $H^{(j)}$ and $G^{(j)}$ are Hilbert spaces, then equipping $H$ and $G$ with the induced Hilbert space structure yields uniform crossnorms. The induced scalar product on $H$ is the unique bilinear extension of $$\langle \otimes_{j=1}^n f^{(j)}_1 ,\otimes_{j=1}^n f^{(j)}_2\rangle_{H}=\prod_{j=1}^n \langle f^{(j)}_1,f^{(j)}_2\rangle_{H^{(j)}}$$ (Proposition 4.127 in W. Hackbusch, "Tensor spaces and numerical tensor calculus". Springer, 2012) Equipping $H$ with the projective norm and $G$ with any crossnorm (that is, a norm that is multiplicative w.r.t the tensor product) yields uniform crossnorms. (Have no reference) Equipping $G$ with the injective norm and $H$ with any crossnorm yields uniform crossnorms (Have no reference)
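For the Hilbert-space item above, the uniformity of the induced crossnorm can at least be sanity-checked in finite dimensions, where the tensor product of operators is the Kronecker product and the operator norm is the spectral norm. A minimal numerical sketch (assuming numpy; random matrices stand in for the $A^{(j)}$):

```python
# Finite-dimensional check of ||A (x) B|| = ||A|| * ||B|| for the spectral norm,
# i.e. the Hilbert-space case of a uniform crossnorm (tensor product = Kronecker product).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # stand-in for an operator H^(1) -> G^(1)
B = rng.standard_normal((4, 2))   # stand-in for an operator H^(2) -> G^(2)

lhs = np.linalg.norm(np.kron(A, B), 2)              # ||A (x) B||
rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)   # ||A|| * ||B||
print(lhs, rhs, abs(lhs - rhs) < 1e-10)
```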
Let $\Omega\subset\mathbb{R}^2$ be bounded and convex with $\partial\Omega$ smooth. Consider the second order elliptic PDE (1) $$\begin{cases}Lu = f &\text{ on } \Omega\,,\\ u=0 &\text{ on } \partial\Omega\,,\end{cases}$$ where $f\in C^\infty(\Omega)$ satisfies $f(x) > 0$ and $f$ has no isolated critical points on $\Omega$. Then, does the solution $u$ of the above have only one critical point? I am looking for a result similar to Theorem 2 of this paper (ScienceDirect): Theorem 2: Let $\Omega$ be a bounded, strictly convex domain in $\mathbb{R}^2$ and $u\in C^3(\Omega)\cap C^1(\bar{\Omega})$ a solution to the boundary value problem \begin{align*} \Delta u &= f(u,\nabla u)\quad\text{ in }\Omega\,,\\ u=\text{const}\,,& \quad\nabla u\neq0\,,\quad\text{ on }\partial\Omega\,, \end{align*} where $f\in C^1$, $f_u\geq0$. Then $u$ has exactly one critical point in $\bar{\Omega}$ and there $\det(D^2u)>0$ holds (i.e., a global maximum or minimum). The paper claims that Theorem 2 remains true even if we replace Laplace's operator by another elliptic operator. This means, for example, that if $f$ is constant, then the solution to (1) has a unique critical point. Edit 1: Here is the specific PDE I am interested in: \begin{equation} (1+h^2y^2)\partial_{xx}u +(1+h^2x^2)\partial_{yy}u -2xyh^2\partial_{xy}u-\frac{h^2x(3+h^2\rho^2)}{1+h^2\rho^2}\partial_{x}u-\frac{h^2y(3+h^2\rho^2)}{1+h^2\rho^2}\partial_{y}u = \frac{1}{1+h^2\rho^2} \end{equation} where $h\in (0,1]$, $\rho^2=x^2+y^2$, and the domain $\Omega$ is convex, bounded, and does not contain the origin. I have verified that this PDE is elliptic for $h\in(0,1]$. Edit 2: Due to Mateusz's counterexample, I am adding the condition that if $a(x,y)$ is a coefficient of $L$, then on any simply connected subset $D$ of $\Omega$, $a$ attains its maximum and minimum on $\partial D$. The point of this is to avoid any 'humps' in the coefficient functions (since this is the case for my PDE in Edit 1).
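This does not address the general operator $L$ or the PDE in Edit 1, but a crude finite-difference sketch of the constant-$f$ Laplacian model case (Jacobi iteration on a masked grid approximating the unit disk; purely illustrative, not a proof) exhibits the single interior critical point one expects from Theorem 2:

```python
# Crude illustration: solve Delta u = -1, u = 0 on the boundary of the unit disk,
# then locate the interior point where the discrete gradient is smallest.  For this
# radial model case the exact solution is u = (1 - x^2 - y^2)/4, maximized at the origin.
import numpy as np

n = 101
xs = np.linspace(-1.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
inside = X**2 + Y**2 < 1.0                      # staircase approximation of the disk

u = np.zeros((n, n))
for _ in range(5000):                           # Jacobi iteration for -Delta u = 1
    u_new = np.zeros_like(u)
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2] + h**2)
    u = np.where(inside, u_new, 0.0)            # enforce u = 0 outside the disk

ux, uy = np.gradient(u, h)
grad2 = np.where(inside, ux**2 + uy**2, np.inf)
i, j = np.unravel_index(np.argmin(grad2), grad2.shape)
print("discrete critical point near (x, y) =",
      (round(float(X[i, j]), 3), round(float(Y[i, j]), 3)))
print("u there:", round(float(u[i, j]), 4), "(exact model value at the origin: 0.25)")
```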
My graduation thesis was about stability theorems for $C_0$-semigroups (see the Wikipedia article for the definitions: http://en.wikipedia.org/wiki/C0-semigroup). I would like to know if there is some applicability of the stability theorems I know in this field. The only applications I found for my thesis were about the Hille-Yosida theorem and some of its applications to existence and uniqueness of solutions of partial differential equations. I will not put any names to my theorems, since they may not be known to the world under the names my teachers use for them. Here are some of them: The $C_0$-semigroup $\{T(t)\}_{t \geq 0}$ is exponentially stable if and only if there exists $p \geq 1$ such that $\int_0^\infty \|T(t)\|^p\,dt <\infty$. The $C_0$-semigroup $\{T(t)\}_{t \geq 0}$ is exponentially stable if and only if it satisfies the following condition: for any $f \in \mathcal{C}$ it follows that $x_f \in \mathcal{C}$, where $x_f: \Bbb{R}_+ \to X,\ x_f(t)=\int_0^t T(t-s)f(s)\,ds$, and $\mathcal{C} = \{ f : \Bbb{R}_+ \to X \mid f \text{ continuous and bounded} \}$. The last theorem can be formulated and proved in some cases for $(L^p,L^q)$ spaces with $(p,q) \neq (1,\infty)$. A more general concept, dichotomy, can also be formulated (the space splits into two subspaces: on one of them there is stability, and on the other there is instability). All of this sounds very nice and has quite beautiful proofs, but is it applicable to some branch of applied math, such as ordinary or partial differential equations, or is it just pure math, and that's it?
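For the first criterion (a Datko-type theorem), a tiny matrix example already illustrates the link between exponential stability of $T(t)=e^{tA}$ and finiteness of $\int_0^\infty\|T(t)\|^p\,dt$. A rough numerical sketch (assuming numpy/scipy; the integral is truncated at $t=50$ only to keep the computation finite):

```python
# Rough sketch: for T(t) = exp(t A) with A Hurwitz (all eigenvalues in the open left
# half-plane), the integral of ||T(t)||^p stays small; for a non-Hurwitz A it blows up.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import expm

A_stable = np.array([[-1.0, 5.0], [0.0, -2.0]])    # eigenvalues -1, -2 (transient growth)
A_unstable = np.array([[0.1, 0.0], [0.0, -1.0]])   # eigenvalue 0.1 > 0

for name, A in (("stable", A_stable), ("unstable", A_unstable)):
    integrand = lambda t, A=A: np.linalg.norm(expm(t * A), 2) ** 2   # p = 2
    value, _ = quad(integrand, 0.0, 50.0, limit=200)
    print(f"{name}:  integral of ||T(t)||^2 dt over [0, 50]  ~ {value:.3f}")
```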
Emilio Pisanty and Eckhard Giere have already given discontinuous, piecewise constant counterexamples in their answers. Here we provide, for the fun of it, a smooth, infinitely-many-times-differentiable counterexample $f\in C^{\infty}(\mathbb{R})$ of a square integrable function $f:\mathbb{R} \to [0,1]$ that does not satisfy $\lim_{|x|\to \infty}f(x)=0$. Our counterexample is $$\tag{1} f(x)~:=~ e^{- g(x)} ~\in ~]0,1], \qquad g(x)~:=~x^4 \sin^2 x~\in ~[0,\infty[. $$ Intuitive idea: If we imagine $x$ as a time variable, then the function $f$ returns periodically to its maximum value $$\tag{2} f(x) =1 \quad\Leftrightarrow\quad g(x) =0 \quad\Leftrightarrow\quad \frac{x}{\pi}\in \mathbb{Z} ,$$ but spends most of its time close to the $x$-axis in order to be square integrable. Proof: We leave a detailed rigorous epsilon-delta mathematical proof to the reader, but a sketched heuristic proof goes like this. For each very large integer $|n|\gg 1$, define a shifted variable $$\tag{3} y~:=~x-\pi n.$$ For the fixed integer $n\in\mathbb{Z}$, always assume from now on that the $y$-variable belongs to the interval $$\tag{4} |y|~\leq~ \frac{\pi}{2}.$$ For $|y|\ll\frac{\pi}{2}$ very small, we may approximate $g(x) \approx (\pi n)^4y^2$, so that in the interval (4), we have $$\tag{5} g(x)~\lesssim~ \pi^4 |n| \quad \Leftrightarrow\quad |y| ~\lesssim~ |n|^{-\frac{3}{2}}.$$ Thus we may form a square integrable majorant function $h\geq f$ (outside a compact region on the $x$-axis) by defining $$\tag{6} h(x)~:=~\left\{\begin{array}{lcl} 1 &{\rm for}& |y| ~\lesssim~ |n|^{-\frac{3}{2}}, \\ e^{-\pi^4 |n|}&{\rm for}& |n|^{-\frac{3}{2}}~\lesssim~ |y| ~\leq~ \frac{\pi}{2}, \end{array} \right. \qquad |n|\gg 1. $$ The function $h\in {\cal L}^2(\mathbb{R})$ is square integrable on the whole $x$-axis, since $$\tag{7} \sum_{n\neq 0} |n|^{-\frac{3}{2}} ~<~ \infty$$ and $$\tag{8} \pi \sum_{n\in\mathbb{Z}}e^{-2\pi^4 |n|}~<~\infty$$ are convergent series.
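A rough numerical check of the square integrability (assuming scipy; each period is integrated separately with a quadrature break point at the peak so that the very narrow spikes are not missed, and the printed partial sums should level off):

```python
# Rough check that f(x)^2 = exp(-2 x^4 sin^2 x) has a finite integral on [0, oo):
# integrate period by period around the peaks at x = n*pi and watch the partial sums.
import numpy as np
from scipy.integrate import quad

f2 = lambda x: np.exp(-2.0 * x**4 * np.sin(x) ** 2)

total, _ = quad(f2, 0.0, 0.5 * np.pi)                # first half-period
for n in range(1, 61):
    a, b = (n - 0.5) * np.pi, (n + 0.5) * np.pi
    val, _ = quad(f2, a, b, points=[n * np.pi], limit=200)
    total += val
    if n in (10, 30, 60):
        print(f"integral of f^2 over [0, ({n}+1/2)*pi]  ~  {total:.6f}")
```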