The complex moment problem and direct and inverse spectral problems for the block Jacobi type bounded normal matrices Methods Funct. Anal. Topology 12 (2006), no. 1, 1-31 We continue to generalize the connection between the classical power moment problem and the spectral theory of selfadjoint Jacobi matrices. In this article we propose an analog of the Jacobi matrix related to the complex moment problem and to a system of polynomials orthogonal with respect to some probability measure on the complex plane. Such a matrix has a block three-diagonal structure and gives rise to a normal operator acting on a space of $l^2$ type. Using this connection we prove existence of a one-to-one correspondence between probability measures defined on the complex plane and block three-diagonal Jacobi type normal matrices. For simplicity, we investigate in this article only bounded normal operators. From the point of view of the complex moment problem, this restriction means that the measure in the moment representation (or the measure connected with the orthonormal polynomials) has compact support. Methods Funct. Anal. Topology 12 (2006), no. 1, 32-37 A variety of Banach algebras is a non-empty class of Banach algebras for which there exists a family of laws such that its elements satisfy all of the laws. Each variety has a unique core (see [3]) which is generated by it. Not every Banach algebra is a core but, in this paper, we show that for each Banach algebra there exists a cardinal number (the quantum of that Banach algebra) which measures the elevation of that Banach algebra towards bearing a core. The class of all cores has interesting properties. Also, in this paper, we shall show that each core of a variety is generated by essential elements and each algebraic law of essential elements permeates to all of the elements of all of the Banach algebras belonging to that variety, which shows the existence of considerable structures in the cores. Methods Funct. Anal. Topology 12 (2006), no. 
1, 38-56 Let H1 be a subspace in a Hilbert space H0 and let $\widetilde C(H_0,H_1)$ be the set of all closed linear relations from $H_0$ to $H_1$. We introduce a Nevanlinna type class $\widetilde R_+ (H_0,H_1)$ of holomorphic functions with values in $\widetilde C(H_0,H_1)$ and investigate its properties. In particular we prove the existence of a dilation for every function $\tau_+(\cdot)\in \widetilde R_+ (H_0,H_1)$. In what follows these results will be used for the derivation of the Krein type formula for generalized resolvents of a symmetric operator with arbitrary (not necessarily equal) deficiency indices. Methods Funct. Anal. Topology 12 (2006), no. 1, 57-73 In the present work a relationship between systems of $n$ subspaces and representations of *-algebras generated by projections is investigated. It is proved that irreducible nonequivalent *-representations of *-algebras $P_{4,\mathrm{com}}$ generate all nonisomorphic transitive quadruples of subspaces of a finite dimensional space. Methods Funct. Anal. Topology 12 (2006), no. 1, 74-81 In this paper, the author establishes the boundedness in weighted $L_p$ spaces on $\mathbb R^{n+1}$ with a parabolic metric for a large class of sublinear operators generated by parabolic Calderón-Zygmund kernels. The conditions of these theorems are satisfied by many important operators in analysis. Sufficient conditions on weighted functions $\omega$ and $\omega_1$ are given so that a certain parabolic sublinear operator is bounded from the weighted Lebesgue spaces $L_{p,\omega}(\mathbb R^{n+1})$ into $L_{p,\omega_1}(\mathbb R^{n+1})$. Methods Funct. Anal. Topology 12 (2006), no. 1, 82-100 Weakly Lagrangian pairs and Lagrangian pairs in a pair of Hilbert spaces $(H_1, H_2)$ are defined. The weakly Lagrangian pair and Lagrangian pair extensions in $(H_1, H_2)$ of a given weakly Lagrangian pair in $(H_1, H_2)$ are characterized and those extensions which are operators are identified. 
A description of all Lagrangian pair extensions in a larger pair of Hilbert spaces $(\tilde H_1, \tilde H_2)$ of a given weakly Lagrangian pair in $(H_1, H_2)$ is also given.
If you Googled this number a week ago, all you’d get were links to the paper by Melanie Wood Belyi-extending maps and the Galois action on dessins d’enfants. In this paper she says she can separate two dessins d’enfants (which couldn’t be separated by other Galois invariants) via the order of the monodromy group of the dessins inflated by a certain degree six Belyi-extender. She gets for the inflated $\Delta$ the order 19752284160000 and for the inflated $\Omega$ the order 214066877211724763979841536000000000000 (see also this post). After that post I redid the computations a number of times (as well as for other Belyi-extenders) and always found that these orders are the same for both dessins. And, surprisingly, each time the same numbers keep popping up. For example, if you take the Belyi-extender $t^6$ (power-map) then it is pretty easy to work out the generators of the monodromy group of the extended dessin. For example, there is a cycle $(1,2)$ in $x_{\Omega}$ and you have to replace it by \[ (11,12,13,14,15,16,21,22,23,24,25,26) \] and similarly for other cycles, always replace number $k$ by $k1,k2,k3,k4,k5,k6$ (these are the labels of the edges in the extended dessin corresponding to edge $k$ in the original dessin, starting to count from the ‘spoke’ of the $6$-star of $t^6$ corresponding to the interval $(0,e^{\frac{4 \pi i}{3}})$, going counterclockwise). So the edge $(0,1)$ corresponds to $k3$, and for $y$ you take the same cycles as in $y_{\Omega}$ replacing number $k$ by $k3$. Here again, you get for both extended dessins the same order of the monodromy group, and surprise, surprise: it is 214066877211724763979841536000000000000. Based on these limited calculations, it seems that the order of the monodromy group of the extended dessin only depends on the degree of the extender, and not on its precise form. 
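The inflation rule just described fits in a couple of lines. This is only my own illustrative encoding (not from the post): I represent the new edge label $kj$ as the pair (k, j) rather than a concatenated number.

```python
def inflate_cycle(cycle, d):
    """Inflate one cycle of the permutation x for the power-map extender t^d:
    each edge label k is replaced by the run k1, k2, ..., kd, encoded here
    as the pairs (k, 1), ..., (k, d)."""
    return [(k, j) for k in cycle for j in range(1, d + 1)]

# the cycle (1,2) with d = 6 becomes the 12-cycle (11,12,...,16,21,...,26):
print(inflate_cycle((1, 2), 6))
```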
I’d hazard a (probably far too optimistic) conjecture that the orders of the monodromy groups of a dessin $\Gamma$ and the extended dessin $\gamma(\Gamma)$ for a Belyi-extender $\gamma$ of degree $d$ are related via \[ \# M(\gamma(\Gamma)) = d \times (\# M(\Gamma))^d \] (or twice that number), except for trivial settings such as power-maps extending stars. Edit (August 19): In the comments Dominic shows that in “most” cases the monodromy group of $\gamma(\Gamma)$ should be the wreath product of the monodromy groups of $\gamma$ and $\Gamma$, which has order \[ \# M(\Gamma)^d \times \# M(\gamma), \] which fits in with the few calculations I did. We knew already that the order of the monodromy groups of $\Delta$ and $\Omega$ is $1814400$, and sure enough \[ 6 \times 1814400^6 = 214066877211724763979841536000000000000. \] If you extend $\Delta$ and $\Omega$ by the power map $t^3$, you get the order \[ 17919272189952000000 = 3 \times 1814400^3, \] and if you extend them with the degree 3 extender mentioned in the dessinflateurs-post you get 35838544379904000000, which is twice that number. (Edit: the order of the monodromy group of that extender is $6$, see also above.) As much as I like the Belyi-extender idea to construct new Galois invariants, I fear it’s a dead end. (Always glad to be proven wrong!)
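The orders quoted above can be checked with plain integer arithmetic. A minimal sketch (my own code; it uses nothing beyond the numbers already in the post) testing the wreath-product formula $\# M(\Gamma)^d \times \# M(\gamma)$:

```python
# order of the monodromy groups of both Delta and Omega
m = 1814400

# degree 6 extender with monodromy group of order 6 (e.g. the power map t^6):
assert 6 * m**6 == 214066877211724763979841536000000000000

# degree 3 power map t^3, whose monodromy group is cyclic of order 3:
assert 3 * m**3 == 17919272189952000000

# the other degree 3 extender (monodromy group of order 6): twice the above
assert 6 * m**3 == 35838544379904000000

print("all quoted orders match the wreath-product formula")
```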
This question builds on my former question, in which I asked whether constant symbols are necessary in first-order theories. In that question I gave a rough idea of how to get rid of these symbols, which was not completely thought through. I will try to fix this now and would be glad if someone could verify this approach. Let's say we have a first-order theory $T$ over a language $L$ with (possibly countably infinitely many) constant symbols $c_i$. We choose (possibly countably infinitely many) predicates $\varphi_i(x_1,...,x_{n_i})$ so that the $\varphi_i(c_{\sigma_i(1)},...,c_{\sigma_i(n_i)})$ make up an axiom system of $T$. The $\sigma_i$ will choose the constant symbols that are present in the $i$-th axiom and $n_i$ (possibly zero) will denote how many are in there. I want to construct a theory $T'$ over a language $L'$ from which all the constant symbols have been removed. Still, $T$ and $T'$ should have the exact same models (in some sense). I will do this by replacing the axioms $\varphi_i$. I will need some shorthand notation: I write $\vec x$ for a sequence of some of the variables $x_1,x_2,...$, where the exact names are derived from the place of use of this sequence. E.g., in $(\forall \vec x)\varphi_i(\vec x)$ the sequence will consist of the variables $x_{\sigma_i(1)},...,x_{\sigma_i(n_i)}$. I write $\vec x=\vec y$ for the statement that the two sequences agree element-wise on the variables that are contained in both of them. If a variable $x_j$ is not in both sequences, then $\vec x=\vec y$ does not mention $x_j$. I write $\Delta(\vec x)$ for the statement that all the variables of $\vec x$ are different. So $\Delta(\vec x)$ is the conjunction of the statements $x_i\not= x_j$ over all pairs of distinct variables $x_i,x_j\in\vec x$. 
Now I am going to replace the axiom $\varphi_i$ by $$(A_i)\qquad (\exists \vec x)\left[\Delta(\vec x)\wedge \varphi_i(\vec x)\quad\wedge\quad(\forall \vec y)\left[\Delta(\vec y)\wedge \varphi_i(\vec y)\rightarrow \vec x=\vec y\right]\right],$$ and for any pair $\varphi_i, \varphi_j$ of former axioms I will introduce the axiom $$(A_{ij})\qquad (\forall \vec x \forall \vec y)[\varphi_i(\vec x)\wedge\varphi_j(\vec y)\rightarrow \vec x= \vec y].$$ In the special case that we have constant symbols in $T$ but no axiom contains any of them, I will introduce the axiom $(\exists x)[x=x]$ to ensure that there is at least one element in the model. In my opinion, this reflects all the properties that the constants inherently brought into the theory. Is this correct? Does this give me an equivalent theory (in some sense, see note 2)? NOTE: I do not think we should get rid of constant symbols. My motivation is purely academic, in the sense that I want to know whether there is always an equivalent theory without constant symbols, just as there is always an equivalent theory replacing function symbols and constant symbols by relation symbols. As Giorgio and Rene pointed out in their answers to my former question, there are legitimate and useful applications for constant symbols. I do not doubt this. NOTE 2: I am not sure how to directly compare the (whole "space" of) models of these theories, as they are defined over different languages. So the first one is an $L$-structure, the second one an $L'$-structure. Is this done by comparing the categories of structures created by these two theories?
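A concrete instance may make the scheme easier to check (the example is mine, not part of the construction above): take the theory of monoids with a constant symbol $e$ and the axiom $\varphi(e)$, where $\varphi(y)$ is $(\forall x)[x\cdot y = x \wedge y\cdot x = x]$. Since there is a single constant, $\Delta$ is vacuous and the scheme produces
\[ (\exists y)\big[\varphi(y)\wedge(\forall z)[\varphi(z)\rightarrow z=y]\big], \]
which indeed holds in every monoid, because a two-sided identity is unique: if $\varphi(y)$ and $\varphi(z)$, then $y = y\cdot z = z$. So every $L'$-structure satisfying the new axiom expands uniquely to an $L$-structure modeling the original theory.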
1. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector. European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4. Journal Article.
2. Measurement of the ZZ production cross section in proton–proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector. Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1-53. "A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in proton–proton collisions at √s = 8 TeV at the Large Hadron..." Subjects: Hadron-Hadron scattering (experiments) | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences. Journal Article.
3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the ℓ+ℓ−bb̄, ℓνbb̄, and νν̄bb̄ channels with pp collisions at √s = 13 TeV with the ATLAS detector. Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32-52. Journal Article.
4. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector. The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1-34. Journal Article.
5. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector. The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1-34. "A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄..." Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article.
6. ZZ → ℓ+ℓ−ℓ′+ℓ′− cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector. Physical Review D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3. "Measurements of ZZ production in the ℓ+ℓ−ℓ′+ℓ′− channel in proton–proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are..." Subjects: PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences. Journal Article.
7. Measurement of exclusive γγ → ℓ+ℓ− production in proton–proton collisions at √s = 7 TeV with the ATLAS detector. Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242-261. Journal Article.
8. Measurement of the ZZ production cross section in proton–proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector. Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1-53. "A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in..." Subjects: Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment. Journal Article.
9. Measurement of event-shape observables in Z → ℓ+ℓ− events in pp collisions at √s = 7 TeV with the ATLAS detector at the LHC. European Physical Journal C, ISSN 1434-6044, 2016, Volume 76, Issue 7, pp. 1-40. Journal Article.
10. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector. European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4. "A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states, where ℓ stands for..." Subjects: DISTRIBUTIONS | BOSON | DECAY | MASS | TAUOLA | TOOL | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences. Journal Article.
11. Hydrophilic interaction liquid chromatography coupled with tandem mass spectrometry method for the simultaneous determination of l-valine, l-leucine, l-isoleucine, l-phenylalanine, and l-tyrosine in human serum. Journal of Separation Science, ISSN 1615-9306, 11/2015, Volume 38, Issue 22, pp. 3876-3883. "l-Valine, l-leucine, l-isoleucine, l-phenylalanine, and l-tyrosine are important proposed biomarkers for the early detection and diagnosis of type 2..." Subjects: Hydrophilic interaction liquid chromatography | Type 2 diabetes | Tandem mass spectrometry | Amino acids | AMINO-ACID-METABOLISM | CHEMISTRY, ANALYTICAL | OBESITY | PLASMA | URINE | BRANCHED-CHAIN | MS/MS | DERIVATIZATION | Tyrosine | Phenylalanine | Analysis | Phenols | Liquid chromatography | Mass spectrometry | Methods | Biomarkers | Diabetes | Chromatography | Separation | Diagnosis | Serums | Recovery. Journal Article.
Methods Funct. Anal. Topology 12 (2006), no. 2, 101-112 It is shown that every Lévy process on a locally compact group $G$ is determined by a sequence of one-dimensional Brownian motions and an independent Poisson random measure. As a consequence, we are able to give a very straightforward proof of sample path continuity for Brownian motion in $G$. We also show that every Lévy process on $G$ is of pure jump type, when $G$ is totally disconnected. Methods Funct. Anal. Topology 12 (2006), no. 2, 113-123 We define a space of holomorphic functions $O_{1}(U,E/F)$, where $U$ is an open pseudo-convex subset of $\Bbb{C}^{n}$, $E$ is a b-space and $F$ is a bornologically closed subspace of $E$, and we prove that the b-spaces $O_{1}(U,E/F)$ and $O(U,E)/O(U,F)$ are isomorphic. Methods Funct. Anal. Topology 12 (2006), no. 2, 124-130 We introduce a notion of uniform equicontinuity for sequences of functions with the values in the space of measurable operators. Then we show that all the implications of the classical Banach Principle on the almost everywhere convergence of sequences of linear operators remain valid in a non-commutative setting. Methods Funct. Anal. Topology 12 (2006), no. 2, 131-150 Let $q$ be a scalar generalized Nevanlinna function, $q\in\mathcal N_\kappa$. Its generalized zeros and poles (including their orders) are defined in terms of the function's operator representation. In this paper analytic properties associated with the underlying root subspaces and their geometric structures are investigated in terms of the local behaviour of the function. The main results and various characterizations are expressed by means of (local) moments, asymptotic expansions, and via the basic factorization of $q$. Also an inverse problem for recovering the geometric structure of the root subspace from an appropriate asymptotic expansion is solved. Methods Funct. Anal. Topology 12 (2006), no. 
2, 151-156 In this paper we study the complexity of representation theory of free products of finite-dimensional $C^*$-algebras. Methods Funct. Anal. Topology 12 (2006), no. 2, 157-169 We investigate the main spectral properties of quasi-Hermitian extensions of the minimal symmetric operator $L_{\rm min}$ generated by the differential expression $-\frac{{\rm sgn}\, x}{|x|^{\alpha}}\frac{d^2}{dx^2} \ (\alpha>-1)$ in $L^2(\mathbb R, |x|^{\alpha})$. We describe their spectra, calculate the resolvents, and obtain a similarity criterion to a normal operator in terms of boundary conditions at zero. As an application of these results we describe the main spectral properties of the operator $\frac{{\rm sgn}\, x}{|x|^\alpha}\left( -\frac{d^2}{dx^2}+c \delta \right), \, \alpha>-1$. Methods Funct. Anal. Topology 12 (2006), no. 2, 170-182 In this paper we introduce the notion of a continuous frame, which is a generalization of a discrete frame. Since a discrete frame is a special case of these frames, we expect that some of the results that occur in frame theory can be generalized to these frames. For such a generalization, after giving some basic results and theorems about these frames, we discuss the following: duals of these frames, perturbations of continuous frames, and robustness of these frames under erasure of some elements. Methods Funct. Anal. Topology 12 (2006), no. 2, 183-196 In this paper we consider the strong matrix moment problem on the real line. We obtain a necessary and sufficient condition for uniqueness and find all the solutions for the completely indeterminate case. We use M.G. Krein’s theory of representations for Hermitian operators and the technique of boundary triplets and the corresponding Weyl functions. Methods Funct. Anal. Topology 12 (2006), no. 2, 197-204 For $*$-algebras associated with extended Dynkin graphs, we investigate a set of parameters for which there exist representations. 
We give structural properties of such sets and a complete description of the set related to the graph $\tilde D_4$.
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which my multimeter measures as 4.5V) but the voltage multiplier doesn't seem to work. I first tried making a voltage doubler and it showed 9V (which is correct I suppose) but when I try a quadrupler, for example, the voltage starts from like 6V and goes down around 0.1V per second. Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec. But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them. So what did the guys in the EE chat say... The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you... A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it. Hi all! There is a theorem that links the imaginary and the real part of a time dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics; who can help? The Kramers–Kronig relations are bidirectional mathematical relations connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system. 
The relations are named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names... I have a weird question: The output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I put that output into a voltage doubler, the voltage should be 18V, not 9V, right? Since the voltage doubler will output DC. I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked it after that and the astable multivibrator works. I searched the whole god damn internet, asked every god damn forum, and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh a billion tons... something so "simple" turns out to be hard as duck. In Peskin's book on QFT the sum over zero point energy modes is an infinite c-number; fortunately, it doesn't show up experimentally, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it would follow that experimentally we always obtain an infinite spectrum. @AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that. 
They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone, likely 100+ years ago if it's classical physics. I have recently come up with a design for a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ... I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array". @ACuriousMind What confuses me is the interpretation Peskin gives to this infinite c-number and the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H". @ACuriousMind Thank you, I understood your explanations clearly. 
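The Kramers–Kronig relations quoted earlier in this conversation are easy to sanity-check numerically. The sketch below is my own demo (the Lorentz-oscillator response and all parameter values are arbitrary choices, not from the chat): it reconstructs the real part of χ(ω) = 1/(ω₀² − ω² − iγω), which is analytic in the upper half-plane, from its imaginary part.

```python
import numpy as np

w0, gamma = 1.0, 0.3                      # demo oscillator parameters
w = np.linspace(0.001, 40.0, 40000)       # positive-frequency grid
chi = 1.0 / (w0**2 - w**2 - 1j * gamma * w)

def kk_real(w, im_chi, wi, half_width=0.05):
    """Re chi(wi) from Im chi via the Kramers-Kronig principal-value integral
    Re chi(wi) = (2/pi) P int_0^inf w' Im chi(w') / (w'^2 - wi^2) dw',
    approximating the principal value by symmetrically excluding a small
    window around the singular point w' = wi."""
    dw = w[1] - w[0]
    mask = np.abs(w - wi) > half_width
    integrand = w[mask] * im_chi[mask] / (w[mask]**2 - wi**2)
    return (2.0 / np.pi) * np.sum(integrand) * dw

wi = 2.0
exact = (w0**2 - wi**2) / ((w0**2 - wi**2)**2 + (gamma * wi)**2)
approx = kk_real(w, chi.imag, wi)
print(approx, exact)   # the two agree to within a few percent
```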
However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy and the fact that this energy is not detectable experimentally, because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level. It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale. According to the author, the energy difference is always infinite, based on two facts: first, the ground state energy is infinite; second, the energy difference is defined by subtracting a higher level energy from the ground state one. @enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization. The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e. dividing by zero to get infinities; the problem stems from the fact that $R_{aa}$ can be zero due to using point particles. Overall it's an infinite constant added to the particle's energy that we throw away, just as in QFT. @bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why experiments do not show the infinities that arise in the theory. 
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
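The divergence being discussed is easy to see in a toy computation. A minimal sketch (my own numbers and function names): the pairwise Coulomb sum with the $a\neq b$ terms only is finite, while each omitted self term would contribute $e_a^2/R_{aa}$ with $R_{aa}=0$.

```python
import numpy as np

charges = np.array([1.0, -1.0, 2.0])          # e_a, Gaussian units
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0]])       # r_a

def interaction_energy(e, r):
    """U = (1/2) sum_{a != b} e_a e_b / R_ab; the divergent a == b
    self-energy terms (division by R_aa = 0) are simply dropped."""
    U = 0.0
    for a in range(len(e)):
        for b in range(len(e)):
            if a == b:
                continue
            U += 0.5 * e[a] * e[b] / np.linalg.norm(r[a] - r[b])
    return U

print(interaction_energy(charges, positions))   # finite: -1 + 1 - 2/sqrt(5)
```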
Application Of Jordan Algebra For Testing Hypotheses About Structure Of Mean Vector In Model With Block Compound Symmetric Covariance Structure, 2018 Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Góra, Application Of Jordan Algebra For Testing Hypotheses About Structure Of Mean Vector In Model With Block Compound Symmetric Covariance Structure, Roman Zmyślony, Ivan Zezula, Arkadiusz Kozioł Electronic Journal of Linear Algebra In this article the authors derive a test for the structure of the mean vector in a model with block compound symmetric covariance structure for two-level multivariate observations. One possible structure is the so-called structured mean vector, whose components remain constant over sites or over time points, so that the mean vector is of the form $\boldsymbol{1}_{u}\otimes\boldsymbol{\mu}$ with $\boldsymbol{\mu}=(\mu_1,\mu_2,\ldots,\mu_m)'\in\mathbb{R}^m$. This hypothesis is tested against the alternative of an unstructured mean vector, which can change over sites or over time points. Inertia Sets Allowed By Matrix Patterns, 2018 Saint Olaf College Inertia Sets Allowed By Matrix Patterns, Adam H. Berliner, Dale D. Olesky, Pauline Van Den Driessche Electronic Journal of Linear Algebra Motivated by the possible onset of instability in dynamical systems associated with a zero eigenvalue, sets of inertias $\sn_n$ and $\SN{n}$ for sign and zero-nonzero patterns, respectively, are introduced. For an $n\times n$ sign pattern $\mc{A}$ that allows inertia $(0,n-1,1)$, a sufficient condition is given for $\mc{A}$ and every superpattern of $\mc{A}$ to allow $\sn_n$, and a family of such irreducible sign patterns for all $n\geq 3$ is specified. All zero-nonzero patterns (up to equivalence) that allow $\SN{3}$ and $\SN{4}$ are determined, and are described by their associated digraphs. 
Some Graphs Determined By Their Distance Spectrum, 2018 Department of mathematics & statistics, McGill University, Montreal Some Graphs Determined By Their Distance Spectrum, Stephen Drury, Huiqiu Lin Electronic Journal of Linear Algebra Let $G$ be a connected graph with order $n$. Let $\lambda_1(D(G))\geq \cdots\geq \lambda_n(D(G))$ be the distance spectrum of $G$. In this paper, it is shown that the complements of $P_n$ and $C_n$ are determined by their $D$-spectrum. Moreover, it is shown that the cycle $C_n$ ($n$ odd) is also determined by its $D$-spectrum. A Tensor's Torsion, 2018 University of Nebraska-Lincoln A Tensor's Torsion, Neil Steinburg Dissertations, Theses, and Student Research Papers in Mathematics While tensor products are quite prolific in commutative algebra, even some of their most basic properties remain relatively unknown. We explore one of these properties, namely a tensor's torsion. In particular, given any finitely generated modules, M and N over a ring R, the tensor product $M\otimes_R N$ almost always has nonzero torsion unless one of the modules M or N is free. Specifically, we look at which rings guarantee nonzero torsion in tensor products of non-free modules over the ring. We conclude that a specific subclass of one-dimensional Gorenstein rings will have this property. Adviser: Roger Wiegand ... Partially-Ordered Multi-Type Algebras, Display Calculi And The Category Of Weakening Relations, 2018 Chapman University Partially-Ordered Multi-Type Algebras, Display Calculi And The Category Of Weakening Relations, Peter Jipsen, Fei Liang, M. Andrew Moshier, Apostolos Tzimoulis Mathematics, Physics, and Computer Science Faculty Articles and Research "We define partially-ordered multi-type algebras and use them as algebraic semantics for multi-type display calculi that have recently been developed for several logics, including dynamic epistemic logic [7], linear logic[10], lattice logic [11], bilattice logic [9] and semi-De Morgan logic [8]." 
Factorization In Integral Domains., 2018 University of Louisville Factorization In Integral Domains., Ryan H. Gipson Electronic Theses and Dissertations We investigate the atomicity and the AP property of the semigroup rings F[X; M], where F is a field, X is a variable and M is a submonoid of the additive monoid of nonnegative rational numbers. In this endeavor, we introduce the following notions: essential generators of M and elements of height (0, 0, 0, . . .) within a cancellative torsion-free monoid Γ. By considering the latter, we are able to determine the irreducibility of certain binomials of the form X^π − 1, where π is of height (0, 0, 0, . . .), in the monoid domain. Finally, we will consider relations between the ... Developments In Multivariate Post Quantum Cryptography., 2018 University of Louisville Developments In Multivariate Post Quantum Cryptography., Jeremy Robert Vates Electronic Theses and Dissertations Ever since Shor's algorithm was introduced in 1994, cryptographers have been working to develop cryptosystems that can resist known quantum computer attacks. This push for quantum attack resistant schemes is known as post quantum cryptography. Specifically, my contributions to post quantum cryptography have been to the family of schemes known as Multivariate Public Key Cryptography (MPKC), which is a very attractive candidate for digital signature standardization in the post quantum collective for a wide variety of applications. In this document I will be providing all necessary background to fully understand MPKC and post quantum cryptography as a whole. Then ...
On N/P-Asymptotic Distribution Of Vector Of Weighted Traces Of Powers Of Wishart Matrices, 2018 Linnaeus University, Växjö, Sweden On N/P-Asymptotic Distribution Of Vector Of Weighted Traces Of Powers Of Wishart Matrices, Jolanta Maria Pielaszkiewicz, Dietrich Von Rosen, Martin Singull Electronic Journal of Linear Algebra The joint distribution of standardized traces of $\frac{1}{n}XX'$ and of $\Big(\frac{1}{n}XX'\Big)^2$, where the matrix $X:p\times n$ follows a matrix normal distribution, is proved to be asymptotically multivariate normal under the condition $\frac{{n}}{p}\overset{n,p\rightarrow\infty}{\rightarrow}c>0$. The proof relies on calculations of asymptotic moments and cumulants obtained using a recursive formula derived in Pielaszkiewicz et al. (2015). The covariance matrix of the underlying vector is explicitly given as a function of $n$ and $p$. A Note On The Matrix Arithmetic-Geometric Mean Inequality, 2018 University of Central Florida A Note On The Matrix Arithmetic-Geometric Mean Inequality, Teng Zhang Electronic Journal of Linear Algebra This note proves the following inequality: If $n=3k$ for some positive integer $k$, then for any $n$ positive definite matrices $\bA_1,\bA_2,\dots,\bA_n$, the following inequality holds: \begin{equation*}\label{eq:main} \frac{1}{n^3} \, \Big\|\sum_{j_1,j_2,j_3=1}^{n}\bA_{j_1}\bA_{j_2}\bA_{j_3}\Big\| \,\geq\, \frac{(n-3)!}{n!} \, \Big\|\sum_{\substack{j_1,j_2,j_3=1,\\\text{$j_1$, $j_2$, $j_3$ all distinct}}}^{n}\bA_{j_1}\bA_{j_2}\bA_{j_3}\Big\|, \end{equation*} where $\|\cdot\|$ represents the operator norm. This inequality is a special case of a recent conjecture proposed by Recht and R ... Local Higher Category Theory, 2018 The University of Western Ontario Local Higher Category Theory, Nicholas Meadows Electronic Thesis and Dissertation Repository The purpose of this thesis is to give presheaf-theoretic versions of three of the main extant models of higher category theory: the Joyal, Rezk and Bergner model structures.
The construction of these model structures takes up Chapters 2, 3 and 4 of the thesis, respectively. In each of the model structures, the weak equivalences are local or ‘stalkwise’ weak equivalences. In addition, it is shown that certain Quillen equivalences between the aforementioned models of higher category theory extend to Quillen equivalences between the various models of local higher category theory. Throughout, a number of features of local higher category theory ... Webwork Problems For Linear Algebra, 2018 University of North Georgia Webwork Problems For Linear Algebra, Hashim Saber, Beata Hebda Mathematics Ancillary Materials This set of problems for Linear Algebra in the open-source WeBWorK mathematics platform was created under a Round Eleven Mini-Grant for Ancillary Materials Creation. The problems were created for an implementation of the CC-BY Lyryx open textbook A First Course in Linear Algebra. Also included as an additional file are the selected and modified Lyryx Class Notes for the textbook. Topics covered include: Linear Independence, Linear Transformations, Matrix of a Transformation, Isomorphisms, Eigenvalues and Eigenvectors, Diagonalization, and Orthogonality. Determining The Determinant, 2018 Xavier University Determining The Determinant, Danny Otero Linear Algebra No abstract provided. Dimers On Cylinders Over Dynkin Diagrams And Cluster Algebras, 2018 Louisiana State University and Agricultural and Mechanical College Dimers On Cylinders Over Dynkin Diagrams And Cluster Algebras, Maitreyee Chandramohan Kulkarni LSU Doctoral Dissertations This dissertation describes a general setting for dimer models on cylinders over Dynkin diagrams which in type A reduces to the well-studied case of dimer models on a disc. We prove that all Berenstein--Fomin--Zelevinsky quivers for Schubert cells in a symmetric Kac--Moody algebra give rise to dimer models on the cylinder over the corresponding Dynkin diagram.
We also give an independent proof of a result of Buan, Iyama, Reiten and Smith that the corresponding superpotentials are rigid using the dimer model structure of the quivers. Rank Function And Outer Inverses, 2018 Manipal University, Manipal Rank Function And Outer Inverses, Manjunatha Prasad Karantha, K. Nayan Bhat, Nupur Nandini Mishra Electronic Journal of Linear Algebra For the class of matrices over a field, the notion of `rank of a matrix' as defined by `the dimension of the subspace generated by the columns of that matrix' is folklore and cannot be generalized to the class of matrices over an arbitrary commutative ring. The `determinantal rank', defined by the size of the largest submatrix having nonzero determinant, which is the same as the column rank of the given matrix when the commutative ring under consideration is a field, was considered to be the best alternative for the `rank' in the class of matrices over a commutative ring. Even this determinantal rank and ... Correlation Matrices With The Perron Frobenius Property, 2018 Wilfrid Laurier University Correlation Matrices With The Perron Frobenius Property, Phelim P. Boyle, Thierno B. N'Diaye Electronic Journal of Linear Algebra This paper investigates conditions under which correlation matrices have a strictly positive dominant eigenvector. The sufficient conditions, from the Perron-Frobenius theorem, are that all the matrix entries are positive. The conditions for a correlation matrix with some negative entries to have a strictly positive dominant eigenvector are examined. The special structure of correlation matrices permits obtaining detailed analytical results for low-dimensional matrices. Some specific results for the $n$-by-$n$ case are also derived. This problem was motivated by an application in portfolio theory.
Simple Groups, Progenitors, And Related Topics, 2018 California State University - San Bernardino Simple Groups, Progenitors, And Related Topics, Angelica Baccari Electronic Theses, Projects, and Dissertations The foundation of the work of this thesis is based around the involutory progenitor and the finite homomorphic images found therein. This process was developed by Robert T. Curtis, who defines it as $2^{*n}{:}N = \{\pi w \mid \pi \in N,\ w \text{ a word in the symmetric generators}\}$, where $2^{*n}$ denotes a free product of $n$ copies of the cyclic group of order 2 generated by involutions. We repeat this process with different control groups and a different array of possible relations to discover interesting groups, such as sporadic, linear, or unitary groups, to name a few. Predominantly this work was produced from transitive ... Galois Theory And The Quintic Equation, 2018 Union College Galois Theory And The Quintic Equation, Yunye Jiang Honors Theses Most students know the quadratic formula for the solution of the general quadratic polynomial in terms of its coefficients. There are also similar formulas for solutions of the general cubic and quartic polynomials. In these three cases, the roots can be expressed in terms of the coefficients using only basic algebra and radicals. We then say that the general quadratic, cubic, and quartic polynomials are solvable by radicals. The question then becomes: Is the general quintic polynomial solvable by radicals? Abel was the first to prove that it is not. In turn, Galois provided a general method of determining when ... Symmetric Presentations, Representations, And Related Topics, 2018 California State University - San Bernardino Symmetric Presentations, Representations, And Related Topics, Adam Manriquez Electronic Theses, Projects, and Dissertations The purpose of this thesis is to develop original symmetric presentations of finite non-abelian simple groups, particularly the sporadic simple groups.
We have found original symmetric presentations for the Janko group $J_1$, the Mathieu group $M_{12}$, the Symplectic groups $S(3,4)$ and $S(4,5)$, a Lie type group $Suz(8)$, and the automorphism group of the Unitary group $U(3,5)$ as homomorphic images of the progenitors $2^{*60}:(2\times A_5)$, $2^{*60}:A_5$, $2^{*56}:(2^3:7)$, and $2^{*28}:(PGL(2,7):2)$, respectively. We have also discovered the groups ... The Hermitian Null-Range Of A Matrix Over A Finite Field, 2018 University of Trento The Hermitian Null-Range Of A Matrix Over A Finite Field, Edoardo Ballico Electronic Journal of Linear Algebra Let $q$ be a prime power. For $u=(u_1,\dots ,u_n), v=(v_1,\dots ,v_n)\in \mathbb {F} _{q^2}^n$, let $\langle u,v\rangle := \sum _{i=1}^{n} u_i^qv_i$ be the Hermitian form of $\mathbb {F} _{q^2}^n$. Fix an $n\times n$ matrix $M$ over $\mathbb {F} _{q^2}$. In this paper, the case $k=0$ of the set $\mathrm{Num} _k(M):= \{\langle u,Mu\rangle \mid u\in \mathbb {F} _{q^2}^n, \langle u,u\rangle =k\}$ is considered. When $M$ has coefficients in $\mathbb {F ... The Properties Of Partial Trace And Block Trace Operators Of Partitioned Matrices, 2018 Poznań University Of Technology The Properties Of Partial Trace And Block Trace Operators Of Partitioned Matrices, Katarzyna Filipiak, Daniel Klein, Erika Vojtková Electronic Journal of Linear Algebra The aim of this paper is to give the properties of two linear operators defined on non-square partitioned matrices: the partial trace operator and the block trace operator. The conditions for symmetry, nonnegativity, and positive-definiteness are given, as well as the relations of the partial trace and block trace operators with the standard trace, vectorizing and Kronecker product operators. Both the partial trace and the block trace operators can be widely used in statistics, for example in the estimation of unknown parameters under multi-level multivariate models or in the theory of experiments for the determination of optimal designs under ...
I have been trying for some time now to prove or disprove the following conjecture, to no avail: Let $S$ be a set and let $(\Sigma _n)$ be a sequence of countably generated $\sigma$-algebras on $S$ satisfying the following two conditions: $\Sigma_n\subseteq\Sigma_{n+1}$ for all $n$. If $A\in\Sigma_{n+1}$ is a union of $\Sigma_n$-atoms, then $A\in\Sigma_n$, for all $n$. Then for all $n$: If $A\in\sigma\big(\bigcup_n\Sigma_n\big)$ is a union of $\Sigma_n$-atoms, then $A\in\Sigma_n$. An atom is a minimal measurable set. In a countably generated $\sigma$-algebra, the atoms form a partition of the underlying space into points that cannot be distinguished by measurable sets. I actually have only a little intuition for the problem. If $S$ is analytic and all the $\Sigma_n$ are sub-$\sigma$-algebras of the Borel $\sigma$-algebra, both condition 2 and the conjecture are automatically satisfied, due to a result of Blackwell, so counterexamples must be somewhat unnatural.
Feynman diagrams provide a very compact and intuitive way of representing interactions between particles. These diagrams can be included in LaTeX documents thanks to a few packages. One of the older packages is feynmf, which uses MetaPost in order to generate the diagrams. More recently, a new package called TikZ-Feynman has been published which uses TikZ in order to generate Feynman diagrams. TikZ-Feynman is a LaTeX package allowing Feynman diagrams to be easily generated within LaTeX with minimal user instructions and without the need of external programs. It builds upon the TikZ package and its graph drawing algorithms in order to automate the placement of many vertices. TikZ-Feynman still allows fine-tuned placement of vertices so that even complex diagrams can be generated with ease. Currently, TikZ-Feynman is too new to have made it into ShareLaTeX's installation, but we are working to get it included soon. In the meantime, it is possible to include the package files manually in a ShareLaTeX project as shown in this template. After installing the package, the TikZ-Feynman package can be loaded with \usepackage{tikz-feynman} in the preamble. It is recommended that you also specify the version of TikZ-Feynman to use with the compat package option: \usepackage[compat=1.0.0]{tikz-feynman}. This ensures that any new versions of TikZ-Feynman do not produce any undesirable changes without warning. Feynman diagrams can be declared with the \feynmandiagram command. It is analogous to the \tikz command from TikZ and requires a final semi-colon (;) to finish the environment. For example, a simple s-channel diagram is: \feynmandiagram [horizontal=a to b] { i1 -- [fermion] a -- [fermion] i2, a -- [photon] b, f1 -- [fermion] b -- [fermion] f2, }; Let's go through this example line by line: \feynmandiagram introduces the Feynman diagram and allows for optional arguments to be given in the brackets [<options>].
In this instance, horizontal=a to b orients the algorithm's output such that the line through vertices a and b is horizontal. The second line, i1 -- [fermion] a -- [fermion] i2, declares three vertices (i1, a and i2) and connects them with edges --. Just like the \feynmandiagram command above, each edge also takes optional arguments specified in brackets [<options>]. In this instance, we want these edges to have arrows to indicate that they are fermion lines, so we add the fermion style to them. As you will see later on, optional arguments can also be given to the vertices in exactly the same way. The third line, a -- [photon] b, connects a and b with an edge styled as a photon. Since there is already a vertex labelled a, the algorithm will connect it to a new vertex labelled b. The fourth line declares the final-state vertices f1 and f2, and re-uses the previously labelled b vertex. Lastly, the final semi-colon (;) is important. The name given to each vertex in the graph does not matter. So in this example, i1, i2 denote the initial particles; f1, f2 denote the final particles; and a, b are the end points of the propagator. The only important aspect is that what we called a in line 2 is also a in line 3 so that the underlying algorithm treats them as the same vertex. The order in which vertices are declared does not matter as the default algorithm re-arranges everything. For example, one might prefer to draw the fermion lines all at once, as with the following example (note also that the way we named vertices is completely different): \feynmandiagram [horizontal=f2 to f3] { f1 -- [fermion] f2 -- [fermion] f3 -- [fermion] f4, f2 -- [photon] p1, f3 -- [photon] p2, }; As a final remark, the calculation of where vertices should be placed is usually done through an algorithm written in Lua. As a result, LuaTeX is required in order to make use of these algorithms. If LuaTeX is not used, TikZ-Feynman will default to a more rudimentary algorithm and will warn the user instead. So far, the examples have only used the photon and fermion styles.
The TikZ-Feynman package comes with quite a few extra styles for edges and vertices which are all documented in the package documentation. For example, it is possible to add momentum arrows with momentum=<text>, and in the case of end vertices, the particle can be labelled with particle=<text>. To demonstrate how they are used, we take the generic s-channel diagram from earlier and make it an electron-positron pair annihilating into muons: \feynmandiagram [horizontal=a to b] { i1 [particle=\(e^{-}\)] -- [fermion] a -- [fermion] i2 [particle=\(e^{+}\)], a -- [photon, edge label=\(\gamma\), momentum'=\(k\)] b, f1 [particle=\(\mu^{+}\)] -- [fermion] b -- [fermion] f2 [particle=\(\mu^{-}\)], }; In addition to the style keys provided by TikZ-Feynman, style keys from TikZ can be used as well: \feynmandiagram [horizontal=a to b] { i1 [particle=\(e^{-}\)] -- [fermion, very thick] a -- [fermion, opacity=0.2] i2 [particle=\(e^{+}\)], a -- [red, photon, edge label=\(\gamma\), momentum'={[arrow style=red]\(k\)}] b, f1 [particle=\(\mu^{+}\)] -- [fermion, opacity=0.2] b -- [fermion, very thick] f2 [particle=\(\mu^{-}\)], }; For a list of all the various styles that TikZ provides, have a look at the TikZ manual; it is extremely thorough and provides many usage examples. By default, the \feynmandiagram and \diagram commands use the spring layout algorithm to place all the vertices. The spring layout algorithm attempts to `spread out' the diagram as much as possible which, for most simpler diagrams, gives a satisfactory result; however in some cases, this does not produce the best diagram and this section will look at alternatives. There are three main alternatives: adding invisible edges with draw=none, using a different layout algorithm, and placing vertices manually.
The algorithm will treat these extra edges in the same way, but they are simply not drawn at the end; the underlying algorithm treats all edges in exactly the same way when calculating where to place all the vertices, and the actual drawing of the diagram (after the placements have been calculated) is done separately. Consequently, it is possible to add edges to the algorithm, but prevent them from being drawn by adding draw=none to the edge style. This is particularly useful if you want to ensure that the initial or final states remain closer together than they would have otherwise, as illustrated in the following example (note that opacity=0.2 is used instead of draw=none to illustrate where exactly the edge is located). % No invisible edge to keep the two photons together \feynmandiagram [small, horizontal=a to t1] { a [particle=\(\pi^{0}\)] -- [scalar] t1 -- t2 -- t3 -- t1, t2 -- [photon] p1 [particle=\(\gamma\)], t3 -- [photon] p2 [particle=\(\gamma\)], }; % Invisible edge ensures photons are parallel \feynmandiagram [small, horizontal=a to t1] { a [particle=\(\pi^{0}\)] -- [scalar] t1 -- t2 -- t3 -- t1, t2 -- [photon] p1 [particle=\(\gamma\)], t3 -- [photon] p2 [particle=\(\gamma\)], p1 -- [opacity=0.2] p2, }; The graph drawing library from TikZ has several different algorithms to position the vertices. By default, \diagram and \feynmandiagram use the spring layout algorithm to place the vertices. The spring layout attempts to spread everything out as much as possible which, in most cases, gives a nice diagram; however, there are certain cases where this does not work. A good example where the spring layout doesn't work is decays, where we have the decaying particle on the left and all the daughter particles on the right.
% Using the default spring layout \feynmandiagram [horizontal=a to b] { a [particle=\(\mu^{-}\)] -- [fermion] b -- [fermion] f1 [particle=\(\nu_{\mu}\)], b -- [boson, edge label=\(W^{-}\)] c, f2 [particle=\(\overline \nu_{e}\)] -- [fermion] c -- [fermion] f3 [particle=\(e^{-}\)], }; % Using the layered layout \feynmandiagram [layered layout, horizontal=a to b] { a [particle=\(\mu^{-}\)] -- [fermion] b -- [fermion] f1 [particle=\(\nu_{\mu}\)], b -- [boson, edge label'=\(W^{-}\)] c, c -- [anti fermion] f2 [particle=\(\overline \nu_{e}\)], c -- [fermion] f3 [particle=\(e^{-}\)], }; You may notice that in addition to adding the layered layout style to \feynmandiagram, we also changed the order in which we specify the vertices. This is because the layered layout algorithm does pay attention to the order in which vertices are declared (unlike the default spring layout); as a result, c--f2, c--f3 has a different meaning to f2--c--f3. In the former case, f2 and f3 are both on the layer below c, as desired; whilst the latter case places f2 on the layer above c (that is, the same layer as the one where the W-boson originates). In more complicated diagrams, it is quite likely that none of the algorithms work, no matter how many invisible edges are added. In such cases, the vertices have to be placed manually. TikZ-Feynman allows for vertices to be manually placed by using the \vertex command. The \vertex command is available only within the feynman environment (which itself is only available inside a tikzpicture). The feynman environment loads all the relevant styles from TikZ-Feynman and declares additional TikZ-Feynman-specific commands such as \vertex and \diagram. This is inspired by PGFPlots and its use of the axis environment. The \vertex command is very much analogous to the \node command from TikZ, with the notable exception that the vertex contents are optional; that is, you need not have {<text>} at the end.
In the case where {} is specified, the vertex is automatically given the particle style, and otherwise it is a usual (zero-sized) vertex. To specify where the vertices go, it is possible to give explicit coordinates, though it is probably easiest to use the positioning library from TikZ, which allows vertices to be placed relative to existing vertices. By using relative placements, it is possible to easily tweak one part of the graph and everything will adjust accordingly, the alternative being to manually adjust the coordinates of every affected vertex. Finally, once all the vertices have been specified, the \diagram* command is used to specify all the edges. This works in much the same way as \diagram (and also \feynmandiagram), except that it uses a very basic algorithm to place new nodes and allows existing (named) nodes to be included. In order to refer to an existing node, the node must be given in parentheses. This whole process of specifying the nodes and then drawing the edges between them is shown below for the muon decay: \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\mu^{-}\)}; \vertex [right=of a] (b); \vertex [above right=of b] (f1) {\(\nu_{\mu}\)}; \vertex [below right=of b] (c); \vertex [above right=of c] (f2) {\(\overline \nu_{e}\)}; \vertex [below right=of c] (f3) {\(e^{-}\)}; \diagram* { (a) -- [fermion] (b) -- [fermion] (f1), (b) -- [boson, edge label'=\(W^{-}\)] (c), (c) -- [anti fermion] (f2), (c) -- [fermion] (f3), }; \end{feynman} \end{tikzpicture} The feynmf package lets you easily draw Feynman diagrams in your LaTeX documents. All you need to do is specify the vertices, the particles and the labels, and it will automatically lay out and draw your diagram for you.
Let's start with a quick example: \begin{fmffile}{diagram} \begin{fmfgraph}(40,25) \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{fermion}{i1,v1,o1} \fmf{fermion}{i2,v2,o2} \fmf{photon}{v1,v2} \end{fmfgraph} \end{fmffile} The fmffile environment must be put around all of your Feynman diagrams. You can use a single fmffile environment for multiple diagrams, so you can put one around your whole document and forget about it. The argument to the fmffile environment tells LaTeX where to write the files that it uses to store the diagram. You can name this whatever you want, but you need to run metafont on your diagram between LaTeX runs in order for your diagram to show up (ShareLaTeX does this automatically). The fmfgraph environment starts a Feynman diagram, and the numbers in parentheses afterwards specify the width and height of the diagram. The first thing you need to do is specify your external vertices, and where they should be positioned. You can name your vertices anything you like, and say where they should be positioned with the commands \fmfleft, \fmfright, \fmftop, \fmfbottom. For example % Creates two vertices on the left called i1 and i2 \fmfleft{i1,i2} % Creates two vertices on the right called o1 and o2 \fmfright{o1,o2} You can connect up vertices with the \fmf command, which will create new vertices if you pass in names that haven't been created yet. For example % Will create a fermion line between i1 and % the newly created v1, and between v1 and o1. \fmf{fermion}{i1,v1,o1} % Will create a photon line between v1 and the newly created v2 \fmf{photon}{v1,v2} A vertex can be labelled using the \fmflabel command, which takes two arguments: the label to apply to the vertex, and the name of the vertex to apply it to. For example, in the above diagram, if we add in the following labels, we get the updated diagram below: Note that math mode can be used inside the vertex labels, as we have done above.
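Putting the pieces above together, a minimal complete document might look like the following sketch (an assumption on my part: it loads feynmf with \usepackage{feynmf}, which the excerpt does not show, and it still needs the extra MetaFont run between LaTeX runs):

```latex
\documentclass{article}
\usepackage{feynmf}  % assumed preamble line; not shown in the text above

\begin{document}
\begin{fmffile}{sdiagram}       % 'sdiagram' names the auxiliary MetaFont file
  \begin{fmfgraph*}(40,25)      % starred variant, so labels can be drawn
    \fmfleft{i1,i2}
    \fmfright{o1,o2}
    \fmf{fermion}{i1,v1,o1}
    \fmf{fermion}{i2,v2,o2}
    \fmf{photon}{v1,v2}
    \fmflabel{\(e^{-}\)}{i1}    % math mode works inside labels
  \end{fmfgraph*}
\end{fmffile}
\end{document}
```

Note the use of fmfgraph* rather than fmfgraph: the starred graph environment is the one that supports labels.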
We've seen the 'photon' and 'fermion' line styles above, but the feynmf package supports many more. The available line styles, with equivalent names grouped together, are:

gluon, curly
dbl_curly
dashes
scalar, dashes_arrow
dbl_dashes
dbl_dashes_arrow
dots
ghost, dots_arrow
dbl_dots
dbl_dots_arrow
phantom
phantom_arrow
vanilla, plain
fermion, electron, quark, plain_arrow
double, dbl_plain
double_arrow, heavy, dbl_plain_arrow
boson, photon, wiggly
dbl_wiggly
zigzag
dbl_zigzag

For more information see:
I'm a first year physics major student, and this is my first question here. It's a well known fact that ideal pendulums with the same gravitational acceleration and the same length have the same period; now I'm trying to prove it (and I might need some help with this). The goal is to make an experiment at home and compute the mass of the Earth (knowing that the period depends on the length and gravity only). I'm aware that the method is also well-known, but is it ok if I want to 'check' what I have? I mean, I really want to do this. Consider an ideal string of length $l$ and a particle of mass $m$; the angular velocity vector points out of the page, and I defined a coordinate system with radial and tangential directions. Since the particle moves in circular motion and there are only two forces acting on it, $$\vec{T}+m\vec{g}=m\vec{a}$$ becomes $$-T\hat{u_r}+mg\sin{\theta}\hat{u_t}+mg\cos{\theta}\hat{u_r}=m(\alpha l\hat{u_t}+(-\omega^2l)\hat{u_r})$$ where $\hat{u_t}$ and $\hat{u_r}$ are the two directions of my coordinate system. This way, by projecting on $\hat{u_t}$ we have $$mg\sin{\theta}=m\alpha l$$ and, solving for $\alpha$: $$\alpha=\frac{g}{l}\sin\theta$$ My question is this one: we know that $$\vec{\alpha}=\dot{\vec{\omega }}$$ So, can I do this? $$\alpha=\frac{d\omega}{dt}$$ implies $$d\omega=\alpha\,dt=\alpha\,dt\,\frac{d\theta}{d\theta}$$ and since $$\omega=\frac{d\theta}{dt}$$ is this true? $$\omega\,d\omega=\alpha\,d\theta$$ Because if so, then I can integrate this, right? $$\int_{\omega_0}^{\omega}\omega\,d\omega=\int_{\theta_0}^{\theta}\frac{g}{l}\sin{\theta}\,d\theta$$ which gives $$\frac{\omega^2}{2}-\frac{\omega_0^2}{2}=\frac{g}{l}(\cos{\theta_0}-\cos{\theta})$$ I'm not sure about these equations, but at least they are dimensionally correct. Second question: must these $\omega_0$ and $\theta_0$ values be related (that is, both taken at the very same moment)? And can we pick this moment to be that of the maximum amplitude of the pendulum?
(so $\omega_0=0$) That way I could have $\omega$ as a function of $\theta$ and maybe use that to find the period. Well, I hope this isn't annoying in any form to anyone, and I'm sorry if the question is a little long.
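For what it's worth, the energy relation in the question, taken with $\omega_0=0$ at the maximum amplitude $\theta_0$ and with $\theta$ measured from the downward vertical (so that $\omega^2 = \frac{2g}{l}(\cos\theta-\cos\theta_0)$), leads to the standard exact period $T = 4\sqrt{l/g}\,K(\sin(\theta_0/2))$, with $K$ the complete elliptic integral of the first kind. Here is a quick numerical sketch (function names are my own invention; $K$ is computed via the arithmetic-geometric mean):

```python
import math

def pendulum_period(length, g=9.81, theta0=0.5):
    """Exact period of an ideal pendulum released from rest at angle theta0.

    Uses T = 4*sqrt(l/g) * K(sin(theta0/2)), where the complete elliptic
    integral is K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))).
    """
    k = math.sin(theta0 / 2)
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:          # arithmetic-geometric mean iteration
        a, b = (a + b) / 2, math.sqrt(a * b)
    K = math.pi / (2 * a)
    return 4 * math.sqrt(length / g) * K

# For small amplitudes the period approaches the textbook 2*pi*sqrt(l/g),
# which is exactly the "same length, same g => same period" claim.
print(pendulum_period(1.0, theta0=1e-4))  # close to 2*pi*sqrt(1/9.81)
print(pendulum_period(1.0, theta0=1.0))   # noticeably longer at large amplitude
```

At small $\theta_0$ the amplitude dependence drops out, which is why the period can be used to infer $g$ and, from $g$ and the radius of the Earth, its mass.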
Type 1 - Improper Integrals with Infinite Intervals of Integration $\displaystyle\int_a^\infty f(x)\,dx$ converges if $\displaystyle\lim_{t\to\infty}\int_a^t f(x)\,dx$ exists, and diverges if the limit doesn't exist, including when it is infinite. $\displaystyle\int_{-\infty}^b f(x)\,dx$ converges if $\displaystyle\lim_{t \to -\infty} \int_t^b f(x)\, dx$ exists, and diverges if the limit doesn't exist, including when it is infinite. $\displaystyle\int_{-\infty}^\infty f(x)\,dx$ converges if, for some real number $a$, both $\displaystyle\int_a^\infty f(x)\,dx$ and $\displaystyle\int_{-\infty}^a f(x)\,dx$ converge. We'll be talking a lot more about convergence and divergence when we get to sequences and series. The following video explains improper integrals with infinite intervals of integration (type 1) and works out a number of examples.
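As a concrete illustration of the limit definition, here is a small numerical sketch (the helper name is my own): it evaluates $\int_1^\infty x^{-p}\,dx$ by substituting $x = 1/u$, which turns the improper integral into the proper integral $\int_0^1 u^{p-2}\,du$; for $p > 1$ the exact value is $1/(p-1)$.

```python
def improper_power_integral(p, n=100_000):
    """Approximate the improper integral of x^(-p) over [1, infinity) for p > 1.

    Substituting x = 1/u (so dx = -du/u^2) turns it into the proper
    integral of u^(p-2) over [0, 1], evaluated here by the midpoint rule.
    """
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** (p - 2) * h for i in range(n))

# Exact value is 1/(p - 1):
print(improper_power_integral(2.0))  # approx. 1.0
print(improper_power_integral(3.0))  # approx. 0.5
```

For $p \le 1$ the substituted integrand $u^{p-2}$ blows up too fast at $0$, mirroring the divergence of the original integral.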
Definition:Operation/N-Ary Operation Definition Let $S_1, S_2, \ldots, S_n$ be sets, and let $\mathbb U$ be a universal set. Let $\circ$ be a mapping from $S_1 \times S_2 \times \ldots \times S_n$ into $\mathbb U$. That is, suppose that: $\circ: S_1 \times S_2 \times \ldots \times S_n \to \mathbb U: \forall \tuple {s_1, s_2, \ldots, s_n} \in S_1 \times S_2 \times \ldots \times S_n: \map \circ {s_1, s_2, \ldots, s_n} \in \mathbb U$ Then $\circ$ is an $n$-ary operation. Remark An $n$-ary operation needs to be defined for all ordered tuples in $S_1 \times S_2 \times \ldots \times S_n$. Also known as An $n$-ary operation is also sometimes referred to as a finitary operation, although the latter term may also encompass multiary operations. Sources 1965: Seth Warner: Modern Algebra: $\S 18$ 1966: Richard A. Dean: Elements of Abstract Algebra: $\S 0.5$: Theorem $7$ 1972: A.G. Howson: A Handbook of Terms used in Algebra and Analysis: $\S 2$: Sets and functions: Operations
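As a concrete illustration (a hypothetical example of ours, not from the sources above): a $3$-ary operation on $\Z \times \Z \times \Z$ must assign a value to every ordered triple, e.g. a multiply-then-add map:

```python
# A 3-ary operation: defined for every ordered triple (a, b, c) of integers,
# with its value again landing in the ambient universe (here, the integers).
def mul_add(a: int, b: int, c: int) -> int:
    return a * b + c
```

Being total on all triples is exactly the requirement stated in the Remark.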
Can you find a function that satisfies both relations? $$f(n) = \Theta(g(n)), \qquad f(n) = o(g(n))$$ If the "o" is little-o, it is not possible, because of the definitions of the two symbols. $f(n) = \Theta(g(n))$ means there are two constants $c_1, c_2 > 0$ and an $n_0 \in \mathbb{N}$ such that $c_1 g(n) < f(n)$ and $f(n) < c_2 g(n)$ for $n > n_0$; in other words, the ratio $f(n)/g(n)$ eventually stays bounded between positive constants. However, the definition of little-o is $\lim_{n\to\infty}\frac{f(n)}{g(n)} = 0$: for every constant $c > 0$ there is an $n_1$ such that $f(n) < c\, g(n)$ for $n > n_1$. In particular, the ratio eventually drops below $c_1$, so no function can satisfy both relations. Also, if you mean big-O, then by the definition, if $f(n) = \Theta(g(n))$ you can always say $f(n) = O(g(n))$ too.
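A quick numeric illustration (not a proof) of the incompatibility: for $f = \Theta(g)$ the ratio $f/g$ stays pinned between positive constants, while for $f = o(g)$ it sinks toward $0$, below any fixed constant:

```python
# f(n) = 3n + 5 is Theta(n): the ratio (3n + 5)/n stays in (3, 3.5] here.
# n is o(n^2): the ratio n/n^2 = 1/n decreases toward 0.
ns = [10, 100, 1000, 10000]
theta_ratios = [(3 * n + 5) / n for n in ns]
o_ratios = [n / (n * n) for n in ns]
```

The `theta_ratios` hover near the constant 3, while the `o_ratios` shrink by a factor of 10 at each step.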
The polarization of neutral Cascade and anti-Cascade hyperons produced by 800 GeV/c protons on a BeO target at a fixed targeting angle of 4.8 mrad is measured by the KTeV experiment at Fermilab. Our result of 9.7% for the neutral Cascade polarization shows no significant energy dependence when compared to a result obtained at 400 GeV/c production energy and at twice our targeting angle. The polarization of the neutral anti-Cascade is measured for the first time and found to be consistent with zero. We also examine the dependence of polarization on transverse production momentum. We present the first measurement of the form factor ratios g1/f1 (direct axial-vector to vector), g2/f1 (second class current) and f2/f1 (weak magnetism) for the decay Xi0 -> Sigma+ e- anti-nu/e using the KTeV (E799) beam line and detector at Fermilab. From the Sigma+ polarization measured with the decay Sigma+ -> p pi0 and the e- - anti-nu/e correlation, we measure g1/f1 to be 1.32 +0.21-0.17(stat.) +/- 0.05(syst.), assuming the SU(3)f (flavor) values for g2/f1 and f2/f1. Our results are all consistent with exact SU(3)f symmetry. Inelastic and elastic $J/\psi$ photoproduction on hydrogen are investigated at a mean energy of 105 GeV. The inelastic cross section with $E_{\psi} / E_{\gamma}$ < 0.9 is significantly lower than the corresponding result for muoproduction on iron targets, but is consistent with a second-order perturbative QCD calculation. Two different nuclear-medium effects are isolated using a low three-momentum transfer subsample of neutrino-carbon scattering data from the MINERvA neutrino experiment. The observed hadronic energy in charged-current νμ interactions is combined with muon kinematics to permit separation of the quasielastic and Δ(1232) resonance processes. First, we observe a small cross section at very low energy transfer that matches the expected screening effect of long-range nucleon correlations. 
Second, additions to the event rate in the kinematic region between the quasielastic and Δ resonance processes are needed to describe the data. The data in this kinematic region also have an enhanced population of multiproton final states. Contributions predicted for scattering from a nucleon pair have both properties; the model tested in this analysis is a significant improvement but does not fully describe the data. We present the results as a double-differential cross section to enable further investigation of nuclear models. An improved description of the effects of the nuclear environment is required by current and future neutrino oscillation experiments. The largest sample ever recorded of $\bar\nu_\mu$ charged-current quasi-elastic (CCQE, $\bar\nu_\mu + p \to \mu^+ + n$) candidate events is used to produce the minimally model-dependent, flux-integrated double-differential cross section $\frac{d^{2}\sigma}{dT_\mu\, d\cos\theta_\mu}$ for $\bar\nu_\mu$ incident on mineral oil. This measurement exploits the unprecedented statistics of the MiniBooNE anti-neutrino mode sample and provides the most complete information on this process to date. Also given, to facilitate historical comparisons, are the flux-unfolded total cross section $\sigma(E_\nu)$ and the single-differential cross section $\frac{d\sigma}{dQ^2}$, on both mineral oil and on carbon by subtracting the $\bar\nu_\mu$ CCQE events on hydrogen. The observed cross section is somewhat higher than the predicted cross section from a model assuming independently-acting nucleons in carbon with canonical form factor values. The shape of the data is also discrepant with this model. These results have implications for intra-nuclear processes and can help constrain signal and background processes for future neutrino oscillation measurements. Measurements of the cross section for the reaction p+p→π0+anything have been completed.
The data cover a range of incident proton energies 50-400 GeV, π0 transverse momenta 0.3-4 GeV/c, and laboratory angles 30-275 mrad. The experiment was performed using the internal proton beam at the Fermi National Accelerator Laboratory. A lead-glass counter was used to detect photons from the decay of π0's produced by collisions in thin targets of hydrogen or carbon. Tables of the measured cross sections are presented.
Axiom:Axiomatization of 1-Based Natural Numbers

Axioms

The following axioms are intended to capture the behaviour of the ($1$-based) natural numbers $\N_{>0}$, the element $1 \in \N_{>0}$, and the operations of addition $+$ and multiplication $\times$ as they pertain to $\N_{>0}$:

$(A)$: $\exists_1 1 \in \N_{> 0}: \forall a \in \N_{> 0}: a \times 1 = a = 1 \times a$

$(B)$: $\forall a, b \in \N_{> 0}: a \times \paren {b + 1} = \paren {a \times b} + a$

$(C)$: $\forall a, b \in \N_{> 0}: a + \paren {b + 1} = \paren {a + b} + 1$

$(D)$: $\forall a \in \N_{> 0}, a \ne 1: \exists_1 b \in \N_{> 0}: a = b + 1$

$(E)$: $\forall a, b \in \N_{> 0}$: exactly one of these three holds: $a = b \lor \paren {\exists x \in \N_{> 0}: a + x = b} \lor \paren {\exists y \in \N_{> 0}: a = b + y}$

$(F)$: $\forall A \subseteq \N_{> 0}: \paren {1 \in A \land \paren {z \in A \implies z + 1 \in A} } \implies A = \N_{> 0}$

Note

The above axiom schema specifies the old-fashioned definition of the natural numbers as: $\text{The set of natural numbers} = \set {1, 2, 3, \ldots}$ as opposed to the more modern approach which defines them as: $\text{The set of natural numbers} = \set {0, 1, 2, 3, \ldots}$ In order to eliminate confusion, on $\mathsf{Pr} \infty \mathsf{fWiki}$ the set $\set {1, 2, 3, \ldots}$ will be denoted as $\N_{> 0}$ or $\N_{\ne 0}$ or $\N_{\ge 1}$. When $\N$ is used, $\N = \set {0, 1, 2, 3, \ldots}$ is to be understood.
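Axioms $(B)$ and $(C)$ define $\times$ and $+$ by recursion on the second argument. A toy Python model (our illustration, using the built-in successor $a \mapsto a + 1$ as the primitive step) makes the recursion explicit:

```python
# Addition and multiplication on the 1-based naturals, using only the
# successor step together with:
#   (C)  a + (b + 1) = (a + b) + 1
#   (B)  a * (b + 1) = (a * b) + a,  with base case (A): a * 1 = a
def add(a, b):
    return a + 1 if b == 1 else add(a, b - 1) + 1

def mul(a, b):
    return a if b == 1 else mul(a, b - 1) + a
```

Unwinding `mul(3, 4)` reproduces the chain $3 \times 4 = (3 \times 3) + 3 = \ldots = 12$ forced by the axioms.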
Yes. The idea that sunspots are slightly depressed arose as a possible explanation for the Wilson effect. The Wilson effect was discovered when the apparent shape of sunspots, as viewed from Earth, was seen to change as the Sun rotates, in a way consistent with the change in perspective of looking onto a slightly depressed region. While this isn't the only explanation for the effect, it's certainly the most prevalent. More specifically, as Solanki (2003) writes, the depressions indicate a lowering of the layer where the optical depth $\tau=1$ (keep in mind that the visible surface of the photosphere is the layer where $\tau=2/3$). There are two causes mentioned: lower temperature and magnetic effects. Sunspots are cooler than the surrounding areas, as is well known, and thus appear darker. We see the same thing at the boundaries of solar granules: cooler, darker gas sinks and lets the hotter gas (which is less dense) move upward. Additionally, the opacity $\kappa$ is temperature-dependent, which affects how far one can see into the star. Not only is temperature a factor, but so is the magnetic field. Sunspots are, at heart, a magnetic phenomenon, and thus the radial force equation is substantially different. Normally, in a star, the equation of hydrostatic equilibrium is$$\frac{\mathrm{d}P}{\mathrm{d}r}=-\rho g$$for pressure $P$, density $\rho$ and gravitational acceleration $g$. However, when the magnetic field becomes important in a sunspot on the solar surface, the force balance is$$\frac{\mathrm{d}P}{\mathrm{d}r}=\frac{B_z}{4\pi}\left(\frac{\mathrm{d}B_r}{\mathrm{d}z}-\frac{\mathrm{d}B_z}{\mathrm{d}r}\right)$$where $r$ and $z$ are the radial and vertical coordinates (note the change of coordinate system - $r$ is along the surface, and $z$ is perpendicular to it!). The force from the magnetic field implies a lower gas pressure and a greater depression.
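To see why the field matters, compare the magnetic pressure to the gas pressure at the photosphere. The numbers below are typical assumed values (a few kilogauss for a sunspot field, roughly $10^5\ \mathrm{dyn\,cm^{-2}}$ for the quiet photosphere), not figures taken from Solanki (2003):

```python
import math

B = 3000.0       # assumed sunspot field strength, gauss (~3 kG)
P_gas = 1.2e5    # assumed quiet-Sun photospheric gas pressure, dyn/cm^2

P_mag = B**2 / (8 * math.pi)   # magnetic pressure, B^2 / (8 pi), dyn/cm^2
# P_mag comes out to a few times 1e5 dyn/cm^2, comparable to or exceeding
# the gas pressure, so pressure balance forces a lower gas pressure
# (and hence a depressed tau = 1 surface) inside the spot.
```

This back-of-the-envelope estimate is consistent with the force-balance argument above.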
2016-09-11 Importance sampling of heavy-tailed iterated random functions

We consider a stochastic recurrence equation of the form $Z_{n+1} = A_{n+1} Z_n+B_{n+1}$, where $\mathbb{E}[\log A_1]<0$, $\mathbb{E}[\log^+ B_1]<\infty$ and $\{(A_n,B_n)\}_{n\in\mathbb{N}}$ is an i.i.d. sequence of positive random vectors. The stationary distribution of this Markov chain can be represented as the distribution of the random variable $Z \triangleq \sum_{n=0}^\infty B_{n+1}\prod_{k=1}^nA_k$. Such random variables appear in the analysis of probabilistic algorithms and in financial mathematics, where $Z$ would be called a stochastic perpetuity. If one interprets $-\log A_n$ as the interest rate at time $n$, then $Z$ is the present value of a bond that generates $B_n$ units of money at each time point $n$. We are interested in estimating the probability of the rare event $\{Z>x\}$ when $x$ is large; we provide a consistent simulation estimator using state-dependent importance sampling for the case where $\log A_1$ is heavy-tailed and the so-called Cramér condition is not satisfied. We show that under natural conditions our estimator is strongly efficient. Furthermore, we extend our method to the case where $\{Z_n\}_{n\in\mathbb{N}}$ is defined via the recursive formula $Z_{n+1}=\Psi_{n+1}(Z_n)$ and $\{\Psi_n\}_{n\in\mathbb{N}}$ is a sequence of i.i.d. random Lipschitz functions.

Citation: Chen, B., Rhee, C.-H., & Zwart, A. P. (2016). Importance sampling of heavy-tailed iterated random functions.
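For intuition only, here is a plain Monte Carlo sketch of the recursion. The distributional choices ($\log A \sim N(\mu, \sigma^2)$ with $\mu < 0$, $B \sim \mathrm{Uniform}(0,1)$) are ours, not the paper's, and plain Monte Carlo is exactly the method whose relative error degrades for large $x$, which is what motivates the paper's state-dependent importance sampler:

```python
import math
import random

def sample_Z(n_steps=2000, mu=-0.5, sigma=1.0, rng=None):
    # Iterate Z_{n+1} = A_{n+1} Z_n + B_{n+1} with log A ~ N(mu, sigma^2),
    # so E[log A_1] = mu < 0, and B ~ Uniform(0, 1), so E[log+ B_1] < oo.
    rng = rng or random.Random(0)
    z = 0.0
    for _ in range(n_steps):
        a = math.exp(rng.gauss(mu, sigma))
        b = rng.random()
        z = a * z + b
    return z

def estimate_tail(x, n_samples=200):
    # Naive estimator of P(Z > x); fine for moderate x, hopeless for rare events.
    rng = random.Random(1)
    return sum(sample_Z(rng=rng) > x for _ in range(n_samples)) / n_samples
```

Truncating the infinite sum at `n_steps` terms is harmless here because the products $\prod A_k$ decay geometrically in probability when $\mathbb{E}[\log A_1] < 0$.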
Examples of Iterated Integrals

Example 1: (from the video) Compute $\displaystyle\iint_R(x-y)^2\,dA$ over the region with $x$ in $[0,2]$ and $y$ in $[0,1]$. As we will soon see, by Fubini's Theorem $dA=dx\,dy$ or $dA=dy\,dx$; i.e., we can fix $x$ and integrate with respect to $y$ first, or vice versa.

Solution 1: We will compute $\displaystyle\iint_R(x-y)^2\,dA=\int_0^2\int_0^1 (x-y)^2\,dy\,dx=\int_0^2\left(\int_0^1 (x-y)^2\,dy\right)\,dx$.

DO: Integrate the parenthetical integral $\displaystyle\int_0^1 (x-y)^2\,dy$, treating $x$ as a constant, before reading more.

First we compute the antiderivative, then evaluate it (if you can do this substitution in your head, you can just evaluate the integral). Remember, $y$ is our variable and $x$ is a constant.
$\displaystyle\int (x-y)^2\,dy \overset{u\,=\,x-y,\ \ du\,=\,-dy}{=} -\int u^2\,du=-\frac{1}{3}u^3+c$, so $\displaystyle\int_0^1 (x-y)^2\,dy=-\frac{1}{3}(x-y)^3\,\Big|_{y=0}^{y=1}=-\frac{1}{3}\left((x-1)^3-(x-0)^3\right)=\frac{x^3-(x-1)^3}{3}$

DO: Integrate the outside integral, with the integrand being our answer from above, $\displaystyle\int_0^2\frac{x^3-(x-1)^3}{3}\,dx$, before reading more.

Here we can see our antiderivative (after a mental substitution $u=x-1$, where $du=dx$): $\displaystyle\int_0^2\frac{x^3-(x-1)^3}{3}\,dx=\frac{1}{3}\left(\frac{x^4}{4}-\frac{(x-1)^4}{4}\right)\Big|_0^2=\frac{1}{3}\left(\left(4-\frac{1}{4}\right)-\left(0-\frac{1}{4}\right)\right)=\frac{4}{3}$

In these two steps, one variable at a time, we have found $\displaystyle\iint_R(x-y)^2\,dA=\int_0^2\left(\int_0^1 (x-y)^2\,dy\right)\,dx=\int_0^2\frac{x^3-(x-1)^3}{3}\,dx=\frac{4}{3}$

----------------------------------------------------------------------------

Example 2: Find the volume of the solid $W$ under the hyperbolic paraboloid $$z=f(x, y) = 2+ x^2 - y^2$$ and over the square $D = [-1,1]\times[-1,1]$.

DO: Compute this integral before reading more.

Solution 2: We will compute $\displaystyle\int_{-1}^1\int_{-1}^1 (2+x^2 - y^2)\, dy\,dx$.

$\displaystyle\int_{-1}^1\left(\int_{-1}^1 (2+x^2 - y^2)\, dy\right)\,dx=\int_{-1}^1\left[\,2y +x^2y -\frac{y^3}{3}\right]_{-1}^1\,dx$ (after integration with respect to $y$, leaving $x$ constant)

$\displaystyle=\int_{-1}^1\left[(2+x^2-1/3)-(-2-x^2+1/3)\right]\,dx =\int_{-1}^1\left(\frac{10}{3} +2x^2\right)\,dx$ (and now integrate with respect to $x$)

$\displaystyle=\left(\frac{10}{3}x+\frac{2}{3}x^3\right)\Big|_{-1}^1=\frac{10}{3}+\frac{2}{3}-\left(-\frac{10}{3}-\frac{2}{3}\right)=8$
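Both worked answers are easy to double-check symbolically (a quick sketch using SymPy, our choice of tool, not part of the original page):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Example 1: iterated integral of (x - y)^2, inner over y in [0,1],
# outer over x in [0,2]; the worked answer is 4/3.
ex1 = sp.integrate((x - y)**2, (y, 0, 1), (x, 0, 2))

# Example 2: volume under z = 2 + x^2 - y^2 over [-1,1] x [-1,1];
# the worked answer is 8.
ex2 = sp.integrate(2 + x**2 - y**2, (y, -1, 1), (x, -1, 1))
```

Passing the limits in the order `(y, ...), (x, ...)` integrates with respect to $y$ first, matching the order used in the solutions.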
Difference between revisions of "Main Page" (→Unsolved questions)

Revision as of 20:41, 11 February 2009

The Problem

Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math].
A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]

The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.

Threads
* (1-199) A combinatorial approach to density Hales-Jewett (inactive)
* (200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
* (300-399) The triangle-removal approach (inactive)
* (400-499) Quasirandomness and obstructions to uniformity (final call)
* (500-599) TBA
* (600-699) A reading seminar on density Hales-Jewett (active)

A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here.

Unsolved questions

Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post.
If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose.

IP-Szemeredi (a weaker problem than DHJ)

Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any c-dense subset of the Cartesian product of an IP_d set (it is a two-dimensional point set) has a corner.

The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later.) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our c-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.

Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, a 0,1 sequence of length [math]d[/math].
It has a one-to-one mapping to [4]^d: given a point ((x_1,…,x_d),(y_1,…,y_d)), where the x_i and y_i are 0 or 1, it maps to (z_1,…,z_d), where z_i=0 if x_i=y_i=0; z_i=1 if x_i=1 and y_i=0; z_i=2 if x_i=0 and y_i=1; and finally z_i=3 if x_i=y_i=1. Any combinatorial line in [4]^d defines a square in the Cartesian product, so the density HJ implies the statement.

Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
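Jozsef's encoding is easy to make concrete (a small Python sketch of the bijection; the function names are ours):

```python
# Map ((x_1..x_d), (y_1..y_d)) with bits x_i, y_i to (z_1..z_d) in [4]^d:
# z_i = 0 if x_i = y_i = 0; 1 if (x_i, y_i) = (1, 0); 2 if (0, 1);
# 3 if x_i = y_i = 1.  Compactly: z_i = x_i + 2 * y_i.
def encode(xs, ys):
    return tuple(x + 2 * y for x, y in zip(xs, ys))

def decode(zs):
    # Inverse map, confirming the correspondence is one-to-one.
    return tuple(z % 2 for z in zs), tuple(z // 2 for z in zs)
```

Since `decode` inverts `encode` coordinate-by-coordinate, the map really is a bijection between pairs of 01-sequences and points of [4]^d.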
It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem, I think. I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of [n]. We join U\in X to V\in Y if (U,V)\in A. Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.

O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the column of six bits

[ x_i x'_i ]
[ y_i y'_i ]
[ z_i z'_i ]

is equal to one of the following:

[ 0 0 ]   [ 0 0 ]   [ 0 1 ]   [ 1 0 ]   [ 1 1 ]   [ 1 1 ]
[ 0 0 ]   [ 0 1 ]   [ 0 1 ]   [ 1 0 ]   [ 1 0 ]   [ 1 1 ]
[ 0 0 ]   [ 1 0 ]   [ 0 1 ]   [ 1 0 ]   [ 0 1 ]   [ 1 1 ]

?

McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$.
Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3.

High-dimensional Sperner

Kalai.29: There is an analogue of Sperner but with high-dimensional combinatorial spaces instead of "lines"; I do not remember the details (Kleitman? Katona? Those are the usual suspects.)

Fourier approach

Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express, using the Fourier expansion of f, the expression \int f(x)f(y)1_{x<y}, where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density by passing to subspaces. (OK, you can try it directly for the k=3 density HJ problem too, but Sperner would be easier;) This is not unrelated to the regularity philosophy.

Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in [3]^n was the basis you get by thinking of [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines.
But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of [n] and just ask that the numbers of 1s, 2s and 3s inside W are multiples of 7. DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides-type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of [3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
Linear regression is fitting a line to our data. First, let's explore linear regression when we have only one variable (one feature; one independent variable). Suppose that we have a table that describes house prices based on one feature, which is the size of the house:

Size in m² (x)   Price in $ (y)
2104             460,000
1416             232,000
1002             112,000
1721             382,000
1590             266,000
1302             201,000

Let \(m\) be the number of examples, \(x^i\) be the i-th element for variable \(x\), and \(y^i\) be the i-th element for variable \(y\). For our example, \(m=6\), \(x^2=1416\), and \(y^3=112,000\). In real cases, \(m\) is much larger than 6. We use six entries here just for clarification. The variable \(x\) is called the feature, and the variable \(y\) is called the target. We call these examples the training examples, meaning that they are used to train a machine-learning model (in our case, linear regression) so we can use that model to predict the target variable for new data. To do so, we need to calculate a function called the hypothesis (\(h_\theta(x)\)) that can be used to predict a house price knowing its size. This function has the following form: $$ h_\theta(x)=\theta_0+\theta_1x $$ Note that this equation looks like the line equation (\(y=b+mx\)) where \(m\) is the slope of the line and \(b\) is where the line intersects the y-axis. In our equation, \(x\) is our one feature, the size of the house. \(\theta_0\) and \(\theta_1\) are unknowns; we need to find the optimal values for these two parameters. \(h_\theta(x)\) represents the predicted price. We want \(h_\theta(x)\) to be a good predictor; we want its value to be as close as possible to the actual \(y\) value. Let's suppose that we have these data points: where each point represents a house, with the size of the house on the x-axis and its price on the y-axis. We want to find a line that fits these data better than any other line.
In other words, we want to find the optimal values for \(\theta_0\) and \(\theta_1\) in the following equation: $$ h_\theta(x)=\theta_0+\theta_1x $$ which represents a line, as we said earlier. We could try many values for \(\theta_0\) and \(\theta_1\) until we reach a satisfactory result, but this is not an efficient way in practice. Instead, we will use a function called the cost function with an algorithm called gradient descent to reach the optimal values for our two parameters. The cost function is used to calculate the error in our model: to know, for given values of \(\theta_0\) and \(\theta_1\), how good our hypothesis is, i.e., how close the predicted values are to the actual values. The cost function for linear regression has the following formula: $$ \text{Cost}=J(\theta)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^i)-y^i)^2 $$ So this cost function sums the squared differences between the predicted and actual values over all the training examples that we have. Our goal is to choose values for \(\theta_0\) and \(\theta_1\) that minimize the cost function. For example, let's calculate the cost function value for the training examples in the table above, assuming that \(\theta_0=1\) and \(\theta_1=2\) (these values are just arbitrary; we will see later how to get the optimal ones): $$ J(\theta)=\frac{1}{2\times6}\sum_{i=1}^6(\theta_0+\theta_1x^i-y^i)^2=\\~\\\frac{1}{12}[(1+2\times2104-460000)^2+\\~\\(1+2\times1416-232000)^2+\\~\\(1+2\times1002-112000)^2+\\~\\(1+2\times1721-382000)^2+\\~\\(1+2\times1590-266000)^2+\\~\\(1+2\times1302-201000)^2] $$ Now how do we choose the \(\theta\) values? We do so using the gradient descent algorithm, which repeats: $$ \theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta) \qquad\text{ for }j=0,1 $$ until convergence. The symbol := means that we compute the right side and assign it to the left side. \(\alpha\) is the learning rate; it controls the step size of gradient descent.
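To make the arithmetic concrete, here is a minimal Python sketch (my own, not part of the original post) of the cost function, using the six training examples from the table and the arbitrary values \(\theta_0=1\), \(\theta_1=2\):

```python
# Cost function J(theta) for univariate linear regression.
x = [2104, 1416, 1002, 1721, 1590, 1302]              # sizes in m^2
y = [460000, 232000, 112000, 382000, 266000, 201000]  # prices in $

def cost(theta0, theta1, x, y):
    """J(theta) = (1/2m) * sum_i (theta0 + theta1*x_i - y_i)^2"""
    m = len(x)
    return sum((theta0 + theta1 * xi - yi) ** 2
               for xi, yi in zip(x, y)) / (2 * m)

print(cost(1, 2, x, y))  # the value computed by hand above
```

A perfect fit drives the cost to zero; for instance `cost(0, 1, [1, 2], [1, 2])` returns `0`.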
\(\frac{\partial}{\partial\theta_j}J(\theta)\) is the partial derivative of the cost function with respect to \(\theta_j\). The gradient descent algorithm starts with random values for the thetas. It calculates the right side of the assignment for all thetas and then updates all thetas simultaneously. The updates must be simultaneous because calculating the right side of the assignment for one \(\theta\) requires the current values of the other thetas: $$ \theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}\left(\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^i)-y^i)^2\right)=\\~\\\theta_j-\alpha\frac{\partial}{\partial\theta_j}\left(\frac{1}{2m}\sum_{i=1}^m(\theta_0+\theta_1x^i-y^i)^2\right) $$ The partial derivative of the cost function gives us the direction in which gradient descent should move toward the optimal point, at which the cost function has its smallest value. The partial derivative also has a magnitude that decreases as we get close to the optimal point. So the gradient descent algorithm iterates, continuing to update the parameters (thetas) until convergence, i.e., until the cost function value is no longer decreasing. At that point, we can say that the values of the thetas are optimal, meaning they produce the smallest cost (error) value. This means that we can now use the equation: $$ h_\theta(x)=\theta_0+\theta_1x $$ to get the predicted value (\(h_\theta(x)\)) for an input value (\(x\)). Let's say that we trained the model and got the optimal values \(\theta_0 = 1\) and \(\theta_1 = 212\). Now we want to predict the price of a new house whose size is 1333 m². We apply our model (the hypothesis): $$ h_\theta(x)=\theta_0+\theta_1x = 1+212\times1333=282597 $$ So we got a predicted value of $282,597 for the house price. Multivariate linear regression Until now, we have been talking about linear regression with one input variable (one feature). In real-world problems, there are many more features.
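The update loop can be sketched in a few lines of plain Python (my own sketch; the rescaling of the raw sizes and prices is an assumption I add to keep the step size numerically stable, and is not discussed above):

```python
# Batch gradient descent for h(x) = theta0 + theta1 * x (sketch).
xs = [2104, 1416, 1002, 1721, 1590, 1302]
ys = [460000, 232000, 112000, 382000, 266000, 201000]
x = [v / 1000 for v in xs]      # size in thousands of m^2
y = [v / 100000 for v in ys]    # price in units of $100,000

theta0, theta1 = 0.0, 0.0       # start from arbitrary values
alpha = 0.1                     # learning rate
m = len(x)
for _ in range(5000):
    # partial derivatives of J with respect to theta0 and theta1
    err = [theta0 + theta1 * xi - yi for xi, yi in zip(x, y)]
    grad0 = sum(err) / m
    grad1 = sum(e * xi for e, xi in zip(err, x)) / m
    # simultaneous update: both right-hand sides use the old thetas
    theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

print(theta0, theta1)
```

With enough iterations the loop settles on the least-squares line for the rescaled data; without the rescaling, this learning rate would diverge.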
Data for multivariate linear regression might look like:

Size in m² (x1)   Bedrooms (x2)   Floors (x3)   Age in years (x4)   Price in $ (y)
2104              3               2             40                  460,000
1416              1               1             42                  232,000
1002              1               1             31                  112,000
1721              2               2             22                  382,000
1590              2               1             45                  266,000
1302              1               1             44                  201,000

Here \(x_1\) is the first feature, \(x_2\) is the second one, and so on. \(x_2^1\) in this case represents the first value of the second feature (number of bedrooms), which is 3. The number of features is denoted by \(n\); in our example, \(n\) is 4. The same techniques used with univariate linear regression (linear regression with one variable) are used with multivariate linear regression. The difference is that we just add the new input variables with their corresponding parameters. So the hypothesis function becomes: $$ h_\theta(x)=\theta_0+\theta_1x_1+\theta_2x_2+\theta_3x_3 + \cdots +\theta_nx_n $$ The cost function retains its general form: $$ \text{Cost}=J(\theta)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^i)-y^i)^2$$ but the new \(h_\theta(x)\) makes its value different. Also, gradient descent becomes: $$ \theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta) \qquad\text{ for }j=0,1,2,\cdots,n $$ Illustrated Example with Simple Python Code Let's say that we have these data points as our training data: Now, we want to fit a line to these data so we can use it to predict the output value for new data. First, let's see a bad example of such a line: This line is clearly not the best fit; all the points are on one of its sides. Let's see a better solution: This one is better but still not good enough. Let's write a simple Python code using the scikit-learn library, a great Python library that allows us to build many machine-learning models easily with a few lines of code:

# import the part that we need from scikit-learn
from sklearn.linear_model import LinearRegression

# Create the features array.
# We have one feature in our example =>
# one column in the feature array
x = [[1375], [1737], [1702], [1411], [2119],
     [1735], [1545], [1566], [1754], [1930]]

# Create the target array
y = [263928, 344085, 345505, 289701, 445548,
     348764, 328066, 319531, 372130, 383157]

# Create an instance of the LinearRegression model
model = LinearRegression()

# Fit the model to our data with one command
model.fit(x, y)

# Now we can see the values of the parameters
# theta_0 and theta_1

# theta_0 is stored in model.intercept_
print("theta_0 =", model.intercept_)  # -22610.91211103683

# theta_1 is stored in model.coef_
print("theta_1 =", model.coef_)  # [217.28837982]

When we run these lines of code, we get the values of \(\theta_0\) and \(\theta_1\): theta_0 = -22610.91211103683 theta_1 = [217.28837982] Now that we have the optimal values for our parameters, let's plot the line using these values: We can see that this line represents a better fit to the data than the previous two lines. And now we can use the equation of this line, which we talked about earlier: $$ h_\theta(x)=\theta_0+\theta_1x $$ to predict the output for new data. Let's have an example: suppose that we get a new data point, the size of a house, and we want to predict its price using our model. If the size is 2030 m², we can calculate the predicted value as follows: $$ h_\theta(2030)=-22610.91211103683+217.28837982\times2030\\~\\=418484.49892356317 $$ So the predicted price is $418,484.5. We can get the same value using the model that we built in Python:

print(model.predict([[2030]]))

This is the new data point whose target value we just predicted; it is shown in yellow: This post is highly influenced by the Machine Learning course by Prof. Andrew Ng.
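As a closing sketch (mine, not from the original post), the four-feature model from the multivariate table can be fitted the same way in scikit-learn; only the shape of the feature array changes. The extra feature values for the new 2030 m² house are made up for illustration:

```python
from sklearn.linear_model import LinearRegression

# One row per house: [size, bedrooms, floors, age]
X = [[2104, 3, 2, 40],
     [1416, 1, 1, 42],
     [1002, 1, 1, 31],
     [1721, 2, 2, 22],
     [1590, 2, 1, 45],
     [1302, 1, 1, 44]]
y = [460000, 232000, 112000, 382000, 266000, 201000]

model = LinearRegression()
model.fit(X, y)

print("theta_0 =", model.intercept_)
print("theta_1..theta_4 =", model.coef_)  # one coefficient per feature
# Predict the price of a hypothetical 2030 m^2 house with
# 3 bedrooms, 2 floors, 10 years old:
print(model.predict([[2030, 3, 2, 10]]))
```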
This is a fairly tricky concept, and I see research physicists stumbling with (analogues of) this problem with a somewhat alarming frequency. When you say the nucleus is seemingly spherical, what that really means is that the electrons interact with a spherical object (essentially a point Coulomb charge at the nuclear centre) and therefore their dynamics must be invariant under any rotation about this point. However: This does not mean that all the solutions to those dynamics must be spherically symmetric. What it does mean is that for every non-symmetric solution $S_1$ of those dynamics that points along some direction $\hat n_1$ and every other direction $\hat n_2$, there must exist a separate, equally valid, yet distinct, solution $S_2$ that points along $\hat n_2$. Thus, it's perfectly possible for an atom to exist in an anisotropic state, like, say, the $2p_z$ state of hydrogen. The only thing that symmetry requires is that we have analogous $2p_{\hat{n}}$ states along any given axis $\hat n$ as possibilities, which is obviously true. That still doesn't answer, of course, the question of which way a given atom in a $2p$ state will point, but the answer to that is simply that it depends on the situation, and you need to specify more information to say anything useful. If all you know is that a given atom is in a $2p$ state but you don't have any more information, then all you can say is that the electron is in a mixed state of all possible orientations, which is isotropic and can be written down as$$\hat{\rho}_{2p,\text{ isotropic}} = \frac{1}{3}\sum_{j=x,y,z}|2p_j⟩⟨2p_j| = \frac{1}{4\pi} \int |2p_{\hat{n}}⟩⟨2p_{\hat{n}}| \mathrm d \Omega_{\hat{n}}.$$This means that the electron is not in a pure state and therefore you cannot assign it a wavefunction, which is standard fare when you have not specified enough information to pin down a single state.
In practice, however, when you work with $2p$ states, or other anisotropic orbitals, you've normally prepared them yourself, by e.g. stimulating a transition up from the $1s$ state with a linearly polarized laser, or some other anisotropic pumping method. In those cases, the orientation will be fixed by the pumping mechanism, i.e. the symmetry gets broken by your preparation apparatus, which selects one out of the many possible orientations for the state.
Ground state solutions for the fractional Schrödinger-Poisson systems involving critical growth in $ \mathbb{R} ^{3} $ 1. College of Science, Huazhong Agricultural University, Wuhan, 430070, China 2. School of Science, East China JiaoTong University, Nanchang, 330013, China 3. School of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, China $ \begin{equation*} \begin{cases} \varepsilon^{2s}(-\Delta)^{s}u+V(x)u+\phi(x)u = K(x)f(u)+|u|^{2_{s}^{*}-2}u, \ \ & x\in \mathbb{R} ^3, \\ \varepsilon^{2s}(-\Delta)^{s}\phi = u^{2}, \ \ & x \in \mathbb{R} ^3, \end{cases} \end{equation*} $ Here $ s \in (\frac{3}{4}, 1) $, $ \varepsilon > 0 $ is a small parameter, $ V $ and $ K $ are potentials, $ 2_{s}^{*} = \frac{6}{3-2s} $ is the fractional critical Sobolev exponent, and $ f $ is a subcritical nonlinearity. Keywords: Fractional Schrödinger-Poisson system, ground state solution, concentration behavior, multiple solutions. Mathematics Subject Classification: Primary: 35J62; Secondary: 35J20, 35J60. Citation: Lun Guo, Wentao Huang, Huifang Jia. Ground state solutions for the fractional Schrödinger-Poisson systems involving critical growth in $ \mathbb{R} ^{3} $. Communications on Pure & Applied Analysis, 2019, 18 (4): 1663-1693. doi: 10.3934/cpaa.2019079
I) We interpret OP's question (v2) as essentially asking about the following. What happens L1) if the Lagrangian density does not transform, $\delta {\cal L}= 0$? L2) if the Lagrangian density transforms with a total space-time divergence, $\delta {\cal L}=\varepsilon~ d_{\mu} f^{\mu}$? Here $\delta$ denotes an infinitesimal transformation $$\tag{A} \delta\phi^{\alpha}~=~\varepsilon~ (\ldots), \qquad \delta x^{\mu}~=~\varepsilon~ (\ldots),$$ of the fields $\phi^{\alpha}$ and spacetime coordinates $x^{\mu}$. Moreover, $\varepsilon$ is an infinitesimal parameter, and the ellipsis $\ldots$ is shorthand for whatever transformation we consider. First of all, note that terminology differs from author to author. Some authors (see e.g. Ref. 1 and this Phys.SE post) call the transformation $\delta$ a symmetry and a quasi-symmetry of the Lagrangian density ${\cal L}$ in the cases L1 and L2, respectively. Other authors (see e.g. Ref. 2) speak of a strict symmetry and a symmetry, respectively. While other authors simply call $\delta$ a symmetry in both cases. The two cases L1 and L2 are not equivalent, but Noether's theorem holds in both cases: There exists in both cases a local conservation law of the form $$\tag{B} d_{\mu}J^{\mu}~\approx~ 0.$$ [Here the $\approx$ symbol means equality modulo eom.] However, in case L2, the bare Noether current (i.e. the standard formula mentioned on Wikipedia) needs to be improved with (minus) $f^{\mu}$ in order to obtain the correct full Noether current $J^{\mu}$ in eq. (B). II) Finally, as innisfree points out, instead of the Lagrangian density ${\cal L}$, one can also consider the action $$\tag{C} S~=~\int_{R}d^4x~ {\cal L},$$ where $R$ denotes a spacetime region. Often (but not always) the region $R$ is assumed to transform in accordance with the horizontal transformation $\delta x^{\mu}$. There are again two cases: S1) The action does not transform, $\delta S =0$.
S2) The action transforms with a boundary term, $\delta S =\varepsilon \int_{\partial R} d^{3}x~f $. In analogy with Section I, the transformation $\delta$ is in the two cases S1 and S2 by definition called various author-dependent variations of the phrase symmetry of the action $S$. Noether's theorem holds again in both cases. Note however that the cases L1 and L2 do not necessarily map to the cases S1 and S2, respectively. For instance, it could happen that a quasi-symmetry (L2) of the Lagrangian density $\cal L$ for certain choices of region $R$ turns into a strict symmetry (S1) of the action $S$. For an example of this phenomenon, see e.g. my Phys.SE answer here. References: 1. J.V. Jose and E.J. Saletan, Classical Dynamics: A Contemporary Approach, p. 565. 2. P.J. Olver, Applications of Lie Groups to Differential Equations, 1993.
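A standard illustration of case L2 (my own addition, not part of the original answer): take a free point particle ${\cal L}=\frac{1}{2}m\dot{q}^2$ and an infinitesimal Galilean boost $\delta q=\varepsilon t$. Then $$\delta {\cal L}~=~m\dot{q}\,\varepsilon~=~\varepsilon\,\frac{d}{dt}(mq),$$ so ${\cal L}$ has a quasi-symmetry with $f=mq$. The bare Noether current $\frac{\partial {\cal L}}{\partial \dot{q}}\frac{\delta q}{\varepsilon}~=~m\dot{q}t$ must be improved by $-f$, giving $$J~=~m\dot{q}t-mq, \qquad \frac{dJ}{dt}~=~m\ddot{q}t~\approx~0,$$ i.e. conservation holds modulo the eom $m\ddot{q}\approx 0$, exactly as in eq. (B).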
Search Now showing items 1-2 of 2 Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
A Belyi-extender (or dessinflateur) $\beta$ of degree $d$ is a quotient of two polynomials with rational coefficients \[ \beta(t) = \frac{f(t)}{g(t)} \] with the special properties that for each complex number $c$ the polynomial equation of degree $d$ in $t$ \[ f(t)-c g(t)=0 \] has $d$ distinct solutions, except perhaps for $c=0$ or $c=1$, and, in addition, we have that \[ \beta(0),\beta(1),\beta(\infty) \in \{ 0,1,\infty \} \] Let’s take for instance the power maps $\beta_n(t)=t^n$. For every $c$ the degree $n$ polynomial $t^n - c = 0$ has exactly $n$ distinct solutions, except for $c=0$, when there is just one. And, clearly, we have that $0^n=0$, $1^n=1$ and $\infty^n=\infty$. So, $\beta_n$ is a Belyi-extender of degree $n$. A cute observation is that if $\beta$ is a Belyi-extender of degree $d$, and $\beta’$ is an extender of degree $d’$, then $\beta \circ \beta’$ is again a Belyi-extender, this time of degree $d.d’$. That is, Belyi-extenders form a monoid under composition! In our example, $\beta_n \circ \beta_m = \beta_{n.m}$. So, the power-maps are a sub-monoid of the Belyi-extenders, isomorphic to the multiplicative monoid $\mathbb{N}_{\times}$ of strictly positive natural numbers. In their paper Quantum statistical mechanics of the absolute Galois group, Yuri I. Manin and Matilde Marcolli say they use the full monoid of Belyi-extenders to act on all Grothendieck’s dessins d’enfant. But, they attach properties to these Belyi-extenders which they don’t have, in general. That’s fine, as they foresee in Remark 2.21 of their paper that the construction works equally well for any suitable sub-monoid, as long as this sub-monoid contains all power-map extenders. I’m trying to figure out what the maximal mystery sub-monoid of extenders is satisfying all the properties they need for their proofs. But first, let us see what Belyi-extenders have to do with dessins d’enfant.
Look at all complex solutions of $f(t)=0$ and label them with a black dot (and add a black dot at $\infty$ if $\beta(\infty)=0$). Now, look at all complex solutions of $f(t)-g(t)=0$ and label them with a white dot (and add a white dot at $\infty$ if $\beta(\infty)=1$). Now comes the fun part. Because $\beta$ has exactly $d$ pre-images for all real numbers $\lambda$ in the open interval $(0,1)$ (and $\beta$ is continuous), we can connect the black dots with the white dots by $d$ edges (the pre-images of the open interval $(0,1)$), giving us a $2$-coloured graph. For the power-maps $\beta_n(t)=t^n$, we have just one black dot at $0$ (being the only solution of $t^n=0$), and $n$ white dots at the $n$-th roots of unity (the solutions of $t^n-1=0$). Any $\lambda \in (0,1)$ has as its $n$ pre-images the numbers $\zeta_i.\sqrt[n]{\lambda}$ with $\zeta_i$ an $n$-th root of unity, so the picture we get here is an $n$-star. Here for $n=5$: This dessin should be viewed on the 2-sphere, with the antipodal point of $0$ being $\infty$, so projecting from $\infty$ gives a homeomorphism between the 2-sphere and $\mathbb{C} \cup \{ \infty \}$. To get all information of the dessin (including possible dots at infinity) it is best to slice the sphere open along the real segments $(\infty,0)$ and $(1,\infty)$ and flatten it to form a ‘diamond’, with the upper triangle corresponding to the closed upper semisphere and the lower triangle to the open lower semisphere. In the picture above, the right hand side is the dessin drawn in the diamond, and this representation will be important when we come to the action of extenders on more general Grothendieck dessins d’enfant. Okay, let’s try to get some information about the monoid $\mathcal{E}$ of all Belyi-extenders. What are its invertible elements?
Well, we’ve seen that the degree of a composition of two extenders is the product of their degrees, so invertible elements must have degree $1$, so are automorphisms of $\mathbb{P}^1_{\mathbb{C}} - \{ 0,1,\infty \} = S^2-\{ 0,1,\infty \}$ permuting the set $\{ 0,1,\infty \}$. They form the symmetric group $S_3$ on $3$ letters and correspond to the Belyi-extenders \[ t,~1-t,~\frac{1}{t},~\frac{1}{1-t},~\frac{t-1}{t},~\frac{t}{t-1} \] You can compose these units with an extender to get another extender of the same degree where the roles of $0,1$ and $\infty$ are changed. For example, if you want to colour all your white dots black and the black dots white, you compose with the unit $1-t$. Manin and Marcolli use this and claim that you can transform any extender $\eta$ to an extender $\gamma$ by composing with a unit, such that $\gamma(0)=0, \gamma(1)=1$ and $\gamma(\infty)=\infty$. That’s fine as long as your original extender $\eta$ maps $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, but usually a Belyi-extender only maps into $\{ 0,1,\infty \}$. Here are some extenders of degree three (taken from Melanie Wood’s paper Belyi-extending maps and the Galois action on dessins d’enfants): with dessin $5$ corresponding to the Belyi-extender \[ \beta(t) = \frac{t^2(t-1)}{(t-\frac{4}{3})^3} \] with $\beta(0)=0=\beta(1)$ and $\beta(\infty) = 1$. So, a first property of the mystery Manin-Marcolli monoid $\mathcal{E}_{MMM}$ must surely be that all its elements $\gamma(t)$ map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, for they use this property a number of times, for instance to construct a monoid map \[ \mathcal{E}_{MMM} \rightarrow M_2(\mathbb{Z})^+ \qquad \gamma \mapsto \begin{bmatrix} d & m-1 \\ 0 & 1 \end{bmatrix} \] where $d$ is the degree of $\gamma$ and $m$ is the number of black dots in the dessin (or white dots for that matter). Further, they seem to believe that the dessin of any Belyi-extender must be a 2-coloured tree.
Already last time we've encountered a Belyi-extender $\zeta(t) = \frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with dessin. But then, you may argue, this extender sends all of $0,1$ and $\infty$ to $0$, so it cannot belong to $\mathcal{E}_{MMM}$. Here's a trick to construct Belyi-extenders from Belyi-maps $\beta : \mathbb{P}^1 \rightarrow \mathbb{P}^1$, defined over $\mathbb{Q}$ and having the property that there are rational points in the fibers over $0,1$ and $\infty$. Let's take an example, the 'monstrous dessin' corresponding to the congruence subgroup $\Gamma_0(2)$ with map $\beta(t) = \frac{(t+256)^3}{1728 t^2}$. As it stands, $\beta$ is not a Belyi-extender because it does not map $1$ into $\{ 0,1,\infty \}$. But we have that \[ -256 \in \beta^{-1}(0),~\infty \in \beta^{-1}(\infty),~\text{and}~512,-64 \in \beta^{-1}(1) \] (the last one follows from $(t+256)^3-1728 t^2=(t-512)^2(t+64)$). We can now pre-compose $\beta$ with the automorphism (defined over $\mathbb{Q}$) sending $0$ to $-256$, $1$ to $-64$ and fixing $\infty$ to get a Belyi-extender \[ \gamma(t) = \frac{(192t)^3}{1728(192t-256)^2} \] which maps $\gamma(0)=0,~\gamma(1)=1$ and $\gamma(\infty)=\infty$ (so it belongs to $\mathcal{E}_{MMM}$) with the same dessin, which is not a tree. That is, $\mathcal{E}_{MMM}$ can at best consist only of those Belyi-extenders $\gamma(t)$ that map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$ and whose dessin is a tree. Let me stop, for now, by asking for a reference (or counterexample) to perhaps the most startling claim in the Manin-Marcolli paper, namely that any 2-coloured tree can be realised as the dessin of a Belyi-extender!
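Both the factorization and the claimed values of $\gamma$ are easy to verify with sympy (a verification sketch of the computations above):

```python
from sympy import symbols, expand, limit, oo

t = symbols('t')

# the factorization used for the fiber of beta over 1
assert expand((t + 256)**3 - 1728*t**2 - (t - 512)**2*(t + 64)) == 0

# the rational points in the fibers of beta(t) = (t+256)^3 / (1728 t^2)
beta = (t + 256)**3 / (1728*t**2)
assert beta.subs(t, -256) == 0
assert beta.subs(t, 512) == 1 and beta.subs(t, -64) == 1

# the Belyi-extender obtained by pre-composing beta with 192t - 256
gamma = (192*t)**3 / (1728*(192*t - 256)**2)
assert gamma.subs(t, 0) == 0
assert gamma.subs(t, 1) == 1
assert limit(gamma, t, oo) == oo
```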
RL agents - implemented correctly - do not take previous rewards into account when making decisions. For instance, value functions only assess potential future reward. The state value, or expected return (aka utility) $G$, from a starting state $s$ may be defined like this: $$v(s) = \mathbb{E}_{\pi}[G_t|S_t=s] = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1}\Big|S_t=s\right]$$ where $R_t$ is the reward distribution at time $t$, and $\mathbb{E}_{\pi}$ stands for the expected value when following the policy $\pi$ for action selection. There are a few variations of this, depending on the setting and which value function you are interested in. However, all value functions used in RL look at future sums of reward from the decision point when the action is taken. Past rewards are not taken into account. An agent may still select an early high reward over a longer-term reward if:

- the choice between the two rewards is exclusive, and
- the return is higher for the early reward. This may depend on the discount factor, $\gamma$, where low values make the agent prefer more immediate rewards.

If your problem is that an agent selects a low early reward when it could ignore it in favour of something larger later, then you should check the discount factor you are using. If you want an RL agent to take a long-term view, then the discount factor needs to be close to $1.0$. The premise of your question, however, is that somehow an RL agent would become "lazy" or "complacent" because it already had enough reward. That is not an issue that occurs in RL, due to the way it is formulated. Not only are past rewards not accounted for when calculating return values from states, but there is also no mechanism in RL by which an agent receives "enough" total reward, like a creature satisfying its hunger - the maximisation is applied always, in all states.
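The effect of the discount factor can be seen in a small sketch (the reward sequences are hypothetical, just to illustrate the trade-off described above):

```python
def discounted_return(rewards, gamma):
    """G = sum_k gamma^k * R_{k+1}: only rewards AFTER the decision point count."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

# exclusive choice: a small immediate reward vs a larger delayed one
early = [5.0, 0.0, 0.0, 0.0]
late  = [0.0, 0.0, 0.0, 10.0]

# a low discount factor makes the agent prefer the immediate reward...
assert discounted_return(early, 0.5) > discounted_return(late, 0.5)
# ...while a discount factor close to 1 favours the long-term view
assert discounted_return(late, 0.99) > discounted_return(early, 0.99)
```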
There is no need to somehow decay past rewards in any memory structure, and in fact no real way to do this, as there is no data structure accumulating past rewards that any RL agent uses. You may still collect this information for displaying results or analysing performance, but the agent never uses $r_{t}$ to figure out what $a_t$ should be.

> I found that for certain applications and certain hyperparameters, if reward is cumulative, the agent simply takes a good action at the beginning of the episode, and then is happy to do nothing for the rest of the episode (because it still has a reward of R

In that case you have probably formulated the reward function incorrectly for your problem. A cumulative reward scheme (where an agent receives reward $a$ at $t=1$, then $a+b$ at $t=2$, then $a+b+c$ at $t=3$, etc.) would be quite specialist, and you have likely misunderstood how to represent the agent's goals. I suggest asking a separate question about your specific environment and your proposed reward scheme if you cannot resolve this.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Measurement of transverse energy at midrapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (American Physical Society, 2016-09) We report the transverse energy ($E_{\mathrm T}$) measured with ALICE at midrapidity in Pb-Pb collisions at ${\sqrt{s_{\mathrm {NN}}}}$ = 2.76 TeV as a function of centrality. The transverse energy was measured using ... Elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV (Springer, 2016-09) The elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity ($|y| < 0.7$) is measured in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV with ALICE at the LHC. The particle azimuthal distribution with ... Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ... D-meson production in $p$–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and in $pp$ collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2016-11) The production cross sections of the prompt charmed mesons D$^0$, D$^+$, D$^{*+}$ and D$_{\rm s}^+$ were measured at mid-rapidity in p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}=5.02$ TeV ... Azimuthal anisotropy of charged jet production in $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions (Elsevier, 2016-02) This paper presents measurements of the azimuthal dependence of charged jet production in central and semi-central $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions with respect to the second harmonic event plane, quantified ...
Particle identification in ALICE: a Bayesian approach (Springer Berlin Heidelberg, 2016-05-25) We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation ...
Find the remainder when 5^99 is divided by 13. By Fermat's little theorem, $\displaystyle 5^{12} \equiv 1 \pmod {13}$, so $\displaystyle 5^{99}=5^{12 \cdot 8}\cdot 5^{3} \equiv 5^3 \pmod{13}$. It remains to find the remainder of $\displaystyle 5^3$ on division by 13, and I leave this to you.
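For anyone wanting to check the arithmetic, Python's built-in three-argument `pow` does modular exponentiation directly:

```python
# Fermat's little theorem: 5^12 ≡ 1 (mod 13), since gcd(5, 13) = 1
assert pow(5, 12, 13) == 1
# hence 5^99 = (5^12)^8 * 5^3 ≡ 5^3 (mod 13)
assert pow(5, 99, 13) == pow(5, 3, 13)
remainder = pow(5, 99, 13)  # 125 mod 13 = 8
```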
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students, which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternatively, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$; for example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
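For what it's worth, the multiplication rule above is easy to spot-check mechanically (a sketch using exact rationals; the choice $\delta = 5$ and the sample values are arbitrary):

```python
from fractions import Fraction
from itertools import product

DELTA = Fraction(5)  # an arbitrary non-square delta, just for the check

def mult(x, y):
    """(a + b*sqrt(d))(c + d'*sqrt(d)) = (ac + bd'*delta) + (bc + ad')*sqrt(delta),
    with elements stored as pairs (rational part, coefficient of sqrt(delta))."""
    a, b = x
    c, d = y
    return (a*c + b*d*DELTA, b*c + a*d)

# spot-check associativity on all triples drawn from a few samples over Q
samples = [(Fraction(1), Fraction(2)),
           (Fraction(-3), Fraction(1, 2)),
           (Fraction(0), Fraction(7))]
for x, y, z in product(samples, repeat=3):
    assert mult(mult(x, y), z) == mult(x, mult(y, z))
```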
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible, e.g. the surreals are the largest possible field. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or deriving CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put it another way: an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples escaping every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will proceed as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else? I need to finish that book to comment. typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
TIPS FOR SOLVING QUESTIONS RELATED TO COMPOUND INTEREST: Compound Interest: The money is said to be lent on compound interest (C.I.) if the interest at the end of a year or some specified period is not paid to the lender, but is added to the principal, so that the amount at the end of this period becomes the principal for the next period. This process is repeated until the amount for the last period is obtained. After the specified period, the difference between the amount and the money borrowed (principal) is called the compound interest (C.I.) Compound Interest (C.I.) = Amount (A) - Principal (P) 1. If Principal = P, Rate = R% per annum, Amount = A, Time = n years, then Compound Interest Formula when interest is compounded Annually: \begin{aligned} Amount (A) = P\left( 1 + \frac{R}{100} \right)^n \\ \end{aligned} 2. Compound Interest Formula when interest is compounded Half Yearly: \begin{aligned} Amount (A) = P\left( 1 + \frac{R/2}{100} \right)^{2n} \\ \end{aligned} Tips: If the interest is payable half yearly, then the time is multiplied by 2 and the rate is divided by 2. 3. Compound Interest Formula when interest is compounded Quarterly: \begin{aligned} Amount (A) = P\left( 1 + \frac{R/4}{100} \right)^{4n} \\ \end{aligned} Tips: If the interest is payable quarterly, then the time is multiplied by 4 and the rate is divided by 4. 4. When interest is compounded Annually but time is a fraction, e.g. $2\frac{3}{5}$ years, then the Amount will be \begin{aligned} Amount (A) = P\left( 1 + \frac{R}{100} \right)^2 \times \left( 1 + \frac{\frac{3}{5}R}{100} \right) \end{aligned} 5. When Rates are different for different years, say $R_1\%$, $R_2\%$, $R_3\%$ for the 1st, 2nd and 3rd year respectively, then the Amount will be \begin{aligned} Amount (A) = P\left( 1+\frac{R_1}{100} \right) \left( 1+\frac{R_2}{100} \right) \left( 1+\frac{R_3}{100} \right) \end{aligned} 6. Present worth of Rs.
x due n years hence will be: \begin{aligned} \text{Present Worth} = \frac{x}{\left(1+\frac{R}{100}\right)^n} \end{aligned}
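The formulas above translate directly into code (a sketch; the function and variable names are mine):

```python
def amount_annual(P, R, n):
    """A = P(1 + R/100)^n, interest compounded annually."""
    return P * (1 + R/100)**n

def amount_half_yearly(P, R, n):
    """Rate halved, number of periods doubled."""
    return P * (1 + R/200)**(2*n)

def amount_quarterly(P, R, n):
    """Rate quartered, number of periods quadrupled."""
    return P * (1 + R/400)**(4*n)

def present_worth(x, R, n):
    """Present worth of amount x due n years hence."""
    return x / (1 + R/100)**n

# Rs. 10000 at 10% per annum for 2 years, compounded annually:
P, R, n = 10000, 10, 2
A = amount_annual(P, R, n)   # 10000 * 1.1^2 = 12100
ci = A - P                   # compound interest = amount - principal = 2100
```

Note that more frequent compounding always yields a slightly larger amount for the same nominal rate.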
I’m trying to get into the latest Manin-Marcolli paper Quantum Statistical Mechanics of the Absolute Galois Group on how to create from Grothendieck’s dessins d’enfant a quantum system, generalising the Bost-Connes system to the non-Abelian part of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$. In doing so they want to extend the action of the multiplicative monoid $\mathbb{N}_{\times}$ by power maps on the roots of unity to the action of a larger monoid on all dessins d’enfants. Here they use an idea, originally due to Jordan Ellenberg, worked out by Melanie Wood in her paper Belyi-extending maps and the Galois action on dessins d’enfants. To grasp this, it’s best to remember what dessins have to do with Belyi maps, which are maps defined over $\overline{\mathbb{Q}}$ \[ \pi : \Sigma \rightarrow \mathbb{P}^1 \] from a Riemann surface $\Sigma$ to the complex projective line (aka the 2-sphere), ramified only in $0,1$ and $\infty$. The dessin determining $\pi$ is the 2-coloured graph on the surface $\Sigma$ with as black vertices the pre-images of $0$, white vertices the pre-images of $1$ and these vertices are joined by the lifts of the closed interval $[0,1]$, so the number of edges is equal to the degree $d$ of the map. Wood considers a very special subclass of these maps, which she calls Belyi-extender maps, of the form \[ \gamma : \mathbb{P}^1 \rightarrow \mathbb{P}^1 \] defined over $\mathbb{Q}$ with the additional property that $\gamma$ maps $\{ 0,1,\infty \}$ into $\{ 0,1,\infty \}$. The upshot being that post-compositions of Belyi’s with Belyi-extenders $\gamma \circ \pi$ are again Belyi maps, and if two Belyi’s $\pi$ and $\pi’$ lie in the same Galois orbit, then so must all $\gamma \circ \pi$ and $\gamma \circ \pi’$. The crucial Ellenberg-Wood idea is then to construct “new Galois invariants” of dessins by checking existing and easily computable Galois invariants on the dessins of the Belyi’s $\gamma \circ \pi$. 
For this we need to know how to draw the dessin of $\gamma \circ \pi$ on $\Sigma$ if we know the dessins of $\pi$ and of the Belyi-extender $\gamma$. Here's the procedure: the middle dessin is that of the Belyi-extender $\gamma$ (which in this case is the power map $t \rightarrow t^4$) and the upper graph is the unmarked dessin of $\pi$. One has to replace each of the black-white edges in the dessin of $\pi$ by the dessin of the extender $\gamma$, but one must be very careful to respect the orientations on the two dessins. In the upper picture just one edge is replaced, and one has to do this for all edges in a compatible manner. Thus, a Belyi-extender $\gamma$ inflates the dessin of $\pi$ by a factor equal to the degree of $\gamma$. For this reason I prefer to call them dessinflateurs, a contraction of dessin+inflator. In her paper, Melanie Wood says she can separate dessins for which all known Galois invariants are the same, such as these two dessins, by inflating them with a suitable Belyi-extender and computing the monodromy group of the inflated dessin. This monodromy group is the permutation group generated by two elements, the first one giving the permutation on the edges obtained by walking counter-clockwise around all black vertices, the second by walking around all white vertices. For example, by labelling the edges of $\Delta$, its monodromy is generated by the permutations $(2,3,5,4)(1,6)(8,10,9)$ and $(1,3,2)(4,7,5,8)(9,10)$, and GAP tells us that the order of this group is $1814400$. For $\Omega$ the generating permutations are $(1,2)(3,6,4,7)(8,9,10)$ and $(1,2,4,3)(5,6)(7,9,8)$, giving an isomorphic group. Let's inflate these dessins using the Belyi-extender $\gamma(t) = -\frac{27}{4}(t^3-t^2)$ with corresponding dessin. It took me a couple of attempts before I got the inflated dessins correct (as I knew from Wood that this simple extender would not separate the dessins).
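The GAP computation can be reproduced with sympy's permutation groups (my own verification sketch; the generators are those listed above, shifted from the edge labels $1,\dots,10$ to 0-based indices):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# monodromy generators for Delta, edge labels shifted down by 1
delta = PermutationGroup(
    Permutation([[1, 2, 4, 3], [0, 5], [7, 9, 8]], size=10),
    Permutation([[0, 2, 1], [3, 6, 4, 7], [8, 9]], size=10))

# monodromy generators for Omega, shifted the same way
omega = PermutationGroup(
    Permutation([[0, 1], [2, 5, 3, 6], [7, 8, 9]], size=10),
    Permutation([[0, 1, 3, 2], [4, 5], [6, 8, 7]], size=10))

# GAP reports order 1814400 for Delta, and an isomorphic group for Omega
print(delta.order(), omega.order())
```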
Inflated $\Omega$ on top: Both dessins give a monodromy group of order $35838544379904000000$. Now we're ready to do serious work. Melanie Wood uses in her paper the extender $\zeta(t)=\frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with associated dessin, and says she can now separate the inflated dessins by the order of their monodromy groups. She gets for the inflated $\Delta$ the order $19752284160000$ and for the inflated $\Omega$ the order $214066877211724763979841536000000000000$. It's very easy to make mistakes in these computations, so probably I did something horribly wrong, but I get for both $\Delta$ and $\Omega$ that the order of the monodromy group of the inflated dessin is $214066877211724763979841536000000000000$. I'd be very happy if someone were able to spot the error!
Let $f(re^{i \theta} ) = \sum_{n=0}^{\infty} a_n r^n e^{i n \theta } $ where this power series has radius of convergence $R > r > 0$. I am trying to show that $$\frac{1}{2\pi} \int_0^{2 \pi} |f(r e^{ i \theta } ) |^2 \, d \theta = \sum_{n=0}^{\infty} |a_n|^2 r^{2n}.$$ Attempt: I am trying to expand $f \overline{f} = |f|^2 $ and then integrate term by term. I know $$ f \overline{f} = \left( \sum_{n=0}^{\infty} a_n r^n e^{i \theta n} \right) \left( \sum_{m=0}^{\infty} \overline{a_m} r^m e^{-i \theta m } \right) $$ but it seems cumbersome to expand this product. Is there a trick to expand this nicely?
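(The trick is the orthogonality relation $\frac{1}{2\pi}\int_0^{2\pi} e^{i(n-m)\theta}\, d\theta = \delta_{nm}$, which kills all cross terms in the double sum.) Before doing the algebra, one can at least confirm the identity numerically; here is a sketch with arbitrary sample coefficients, truncating the series to a polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=6) + 1j * rng.normal(size=6)   # sample coefficients a_0..a_5
r = 0.7

# left side: (1/2pi) * integral of |f(r e^{i theta})|^2, via a uniform grid
theta = np.linspace(0, 2*np.pi, 4096, endpoint=False)
z = r * np.exp(1j * theta)
f = sum(a[n] * z**n for n in range(len(a)))
lhs = np.mean(np.abs(f)**2)

# right side: sum of |a_n|^2 r^{2n}
rhs = sum(abs(a[n])**2 * r**(2*n) for n in range(len(a)))

assert abs(lhs - rhs) < 1e-9
```

(For a trigonometric polynomial, the uniform-grid average is exact up to floating-point error, since the grid has more points than twice the degree.)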
As you correctly guessed, the "orbit plane" is actually a surface (no thickness), so the individual atoms of the space station would, if separate, be on slightly different orbits and drift apart over (short) time; of course they are kept together by direct interaction, so the structural strength of your space station won't have any problem keeping it together. This doesn't hold true for any free-floating object, which, unless prevented, will try to follow its own orbit. This means objects "above" the "mean orbital surface" will drift to the ceiling, while those "below" (in the Earth direction) will drift toward the floor. Let us try to compute the actual forces. On the orbital plane (assumed to contain the station's centre of gravity) the two forces, gravity ($F_g = \frac{G M m}{r^2}$) and centrifugal ($F_c = m v^2/r$), are exactly equal "by definition". This enables us to easily find the relationship between the speed and radius of a circular orbit. What we need to do is a bit different: we need to find the difference in force if we change the radius "a bit", leaving the speed the same. So we have: $\Delta F = F_g^\prime - F_c^\prime = \frac{GMm}{(r+\delta)^2} - \frac{m v^2}{r+\delta} = ma$ $a = \frac{GM}{(r+\delta)^2} - \frac{v^2}{r+\delta}$ If we substitute the relevant data: $G = 6.67 \times 10^{-11}$ $M = 5.972 \times 10^{24}\ \mathrm{kg}$ $r = 42164\ \mathrm{km} = 42164000\ \mathrm{m}$ (note: $35786\ \mathrm{km}$ is the geostationary altitude above the surface; measured from the Earth's centre the orbital radius is about $42164\ \mathrm{km}$) $v = 3.07\ \mathrm{km/s} = 3070\ \mathrm{m/s}$ and assume for $\delta$ 50 floors of about $3\ \mathrm{m}$ each: $\delta = 50 \times 3\ \mathrm{m} = 150\ \mathrm{m}$ we get: $a = \frac{6.67 \times 10^{-11} \times 5.972 \times 10^{24}}{(42164000+150)^2} - 3070^2/(42164000+150) \approx 8 \times 10^{-7}\ \mathrm{m/s^2}$ Note: the constants I have are too imprecise to compute the result directly (we are speaking about $4\ \mathrm{ppm} = 4 \times 10^{-6}$ of the total), so I adjusted the radius until the acceleration at $\delta = 0$ was null, then added the displacement. Someone is welcome to cross-check my computations as I might have goofed somewhere.
From basic physics we have: $s = \frac{1}{2} a t^2$ ... $t = \sqrt{2 s / a} = \sqrt{\frac{2 \times 150}{8 \times 10^{-7}}} \approx 1.9 \times 10^4\ \mathrm{s} \approx 5\ \mathrm{h}\ 20\ \mathrm{m}$ As you see, "free-floating" objects will drift to the floor/ceiling in a matter of hours, days at most for objects nearer to the orbital plane. Note that the forces involved are quite tiny, so even the smallest friction will prevent the drift; in particular, you can expect surface tension to keep liquids plastered to whatever surface they touch rather than drifting around.
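The estimate can be reproduced in a few lines (a sketch; the constants are the usual textbook values, and here $v$ is derived from $r$ rather than adjusted by hand):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24             # Earth mass, kg
r = 42164e3              # geostationary orbital RADIUS (altitude 35786 km + Earth radius)
v = math.sqrt(G * M / r) # speed of a circular orbit at r, about 3.07 km/s
delta = 50 * 3.0         # 50 floors of 3 m each

# residual acceleration 150 m above the orbital surface, keeping the station's speed
a = G * M / (r + delta)**2 - v**2 / (r + delta)

# time to drift across delta under constant acceleration: s = (1/2) a t^2
t = math.sqrt(2 * delta / abs(a))
```

With these values the residual acceleration comes out near $8 \times 10^{-7}\ \mathrm{m/s^2}$ and the drift time is a bit over five hours.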
Methods Funct. Anal. Topology 12 (2006), no.
1, 38-56 Let $H_1$ be a subspace in a Hilbert space $H_0$ and let $\widetilde C(H_0,H_1)$ be the set of all closed linear relations from $H_0$ to $H_1$. We introduce a Nevanlinna type class $\widetilde R_+ (H_0,H_1)$ of holomorphic functions with values in $\widetilde C(H_0,H_1)$ and investigate its properties. In particular we prove the existence of a dilation for every function $\tau_+(\cdot)\in \widetilde R_+ (H_0,H_1)$. In what follows these results will be used for the derivation of the Krein type formula for generalized resolvents of a symmetric operator with arbitrary (not necessarily equal) deficiency indices. Methods Funct. Anal. Topology 12 (2006), no. 1, 57-73 In the present work a relationship between systems of $n$ subspaces and representations of *-algebras generated by projections is investigated. It is proved that irreducible nonequivalent *-representations of *-algebras $P_{4,\mathrm{com}}$ generate all nonisomorphic transitive quadruples of subspaces of a finite dimensional space. Methods Funct. Anal. Topology 12 (2006), no. 1, 74-81 In this paper, the author establishes the boundedness in weighted $L_p$ spaces on $\mathbb R^{n+1}$ with a parabolic metric for a large class of sublinear operators generated by parabolic Calderon-Zygmund kernels. The conditions of these theorems are satisfied by many important operators in analysis. Sufficient conditions on weighted functions $\omega$ and $\omega_1$ are given so that a certain parabolic sublinear operator is bounded from the weighted Lebesgue space $L_{p,\omega}(\mathbb R^{n+1})$ into $L_{p,\omega_1}(\mathbb R^{n+1})$. Methods Funct. Anal. Topology 12 (2006), no. 1, 82-100 Weakly Lagrangian pairs and Lagrangian pairs in a pair of Hilbert spaces $(H_1, H_2)$ are defined. The weakly Lagrangian pair and Lagrangian pair extensions in $(H_1, H_2)$ of a given weakly Lagrangian pair in $(H_1, H_2)$ are characterized and those extensions which are operators are identified.
A description of all Lagrangian pair extensions in a larger pair of Hilbert spaces $(\tilde H_1, \tilde H_2)$ of a given weakly Lagrangian pair in $(H_1, H_2)$ is also given.
There is a significant difference between "arbitrarily long" and "infinite". As a simple example, an integer can have an arbitrarily great magnitude; formally, for every integer, there exists a larger integer (its successor), which in turn has a successor, and so on. But all integers are finite; $\omega \notin \mathbb{N}$. (Or, if you prefer, $\infty \notin \mathbb{N}$.) Similarly, the Kleene star operator $A^*$ represents the concatenation of an arbitrarily large number of elements from $A$, but not an infinite number of elements of $A$. There is an interesting part of formal language theory which deals with sets of infinitely-long strings (ω-strings), but such strings cannot be produced by any regular expression. (You can use ω-regular expressions, which include the infinite repetition operator $A^\omega$.) But you might want to master the material you're currently studying before venturing onto Büchi automata, interesting though they may be. To answer your questions: No, there are undecidable languages consisting only of finite strings. Indeed, since such languages are undecidable, it is not necessarily even decidable whether they are finite sets. A classic example of an undecidable language is the set of (descriptions of) Turing machines which halt on every input. Note that every Turing machine (like every integer) has a finitely-long description, so the language is a set (an infinite set, in this case) of finite strings. A Turing machine is under no obligation to read its entire input. Consider the language of ω-strings which start with an $a$. This is clearly a set of infinitely-long strings, but only a single character needs to be examined to determine whether a string belongs to the language. Sure, why not. 
For example, a Turing machine trying to determine whether its input describes a Turing machine which halts on every input could simulate the described machine on every possible input, but that simulation will go on forever even if the input does describe a Turing machine which always halts. See, for example, deterministic Büchi automata. Basically, an ω-language $L$ can be deterministic if there is a language $L' \subset \mathrm{Pref}(L)$ of finite prefixes which can deterministically predict inclusion in $L$.
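The "no obligation to read the whole input" point can be made concrete with a tiny sketch. Here an ω-string is modeled as an infinite Python iterator; the encoding and the function name are my own illustration, not a standard API:

```python
from itertools import cycle

# Model an omega-string as an infinite iterator of symbols.  A recognizer
# for the omega-language "strings starting with 'a'" inspects only the
# first symbol, even though the input itself never ends.
def starts_with_a(omega_string):
    return next(iter(omega_string)) == 'a'

print(starts_with_a(cycle('ab')))  # True: 'ababab...' starts with 'a'
print(starts_with_a(cycle('ba')))  # False: 'bababa...' starts with 'b'
```

The recognizer terminates after one read even though `cycle('ab')` would yield symbols forever, which is exactly the situation described above.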
My question is about a claim on the bottom of p. 121 of the book "Neron models" by Bosch, Lutkebohmert, and Raynaud, so I will freely use the general terminology recalled in this book, but will introduce the relevant notation here. All the references are to the book mentioned above. Let $S$ be a noetherian, normal, strictly henselian local domain (most of these assumptions are likely to be irrelevant for the actual question, but I'll keep them), let $X$ be a smooth, separated, finite type $S$-scheme with $X \rightarrow S$ surjective, and let $m\colon X\times_S X \dashrightarrow X$ be an $S$-birational group law, which is assumed to be strict in the sense that there is an open subscheme $U \subset X\times_S X$ that is $X$-dense with respect to both projections and such that $m$ is defined on $U$, as are the universal right and left translations $$X \times_S X \dashrightarrow X\times_S X, \quad (x, y) \mapsto (xy, y), \\ X \times_S X \dashrightarrow X\times_S X, \quad (x, y) \mapsto (x, xy),$$ which are moreover required to be open immersions on $U$. All products being over $S$, let $\Gamma$ be the schematic image in $X^3$ of the graph of $m\colon U \rightarrow X$. As is proved in 5.3/3, each projection $q_{ij}\colon X^3 \rightarrow X^2$ is an open immersion on $\Gamma$ whose image is $X$-dense with respect to both projections. In particular, it follows that for every $S$-scheme $T$ and all $a, c \in X(T)$ there is at most one $x \in X(T)$ with $ax = c$. Suggestively denoting such $x$ by $a^{-1} c$ (if it exists), it also follows that $$ f = q_{23} \circ q_{13}^{-1}\colon X\times_S X \dashrightarrow X\times_S X, \quad (a, c) \mapsto (a^{-1}c, c) $$ is an $S$-birational map whose domain of definition and image are $X$-dense with respect to both projections. My question is: is $f$ an open immersion on its domain of definition, as is claimed on p. 121? 
The difficulty that is causing my trouble in seeing this is that even though I understand $f$ on $q_{13}(\Gamma)$, it could conceivably happen that the domain of definition of $f$ is larger, to the effect that $f$ loses direct touch with $m$. To get the conclusion from Zariski's main theorem, it would suffice to argue that $f$ is injective (on $T$-points for every $T$), but the injectivity is not clear to me either (same difficulty). What I am asking about is used in the proof of Lemma 6 on p. 126, although the full strength of the claim is seemingly not needed there. In particular, the said proof (which seems absolutely crucial for the overall goal of the book) may be salvaged even if my question turns out to have a negative answer.
Methods Funct. Anal. Topology 13 (2007), no. 3, 201-210 To extend the concepts of $p$-frame, frame for Banach spaces, and atomic decomposition, we define the concepts of $pg$-frame and $g$-frame for Banach spaces, by which each $f\in X$ ($X$ is a Banach space) can be represented by an unconditionally convergent series $f=\sum g_{i}\Lambda_{i},$ where $\{\Lambda_{i}\}_{i\in J}$ is a $pg$-frame, $\{g_{i}\}\in(\sum\oplus Y_{i}^{*})_{l_q}$ and $\frac{1}{p}+\frac{1}{q}=1$. In fact, a $pg$-frame $\{\Lambda_{i}\}$ is a kind of overcomplete basis for $X^{*}.$ We also show that every separable Banach space $X$ has a $g$-Banach frame with bounds equal to $1.$ Methods Funct. Anal. Topology 13 (2007), no. 3, 211-222 We define the $\varepsilon_{\infty}$-product of a Banach space $G$ by a quotient bornological space $E\mid F$, which we denote by $G\varepsilon_{\infty}(E\mid F)$, and we prove that $G$ is an $\mathcal{L}_{\infty}$-space if and only if the quotient bornological spaces $G\varepsilon_{\infty}(E\mid F)$ and $(G\varepsilon E)\mid (G\varepsilon F)$ are isomorphic. Also, we show that the functor $\cdot\,\varepsilon_{\infty}\,\cdot\colon \mathbf{Ban}\times\mathbf{qBan}\longrightarrow \mathbf{qBan}$ is left exact. Finally, we define the $\varepsilon_{\infty}$-product of a b-space by a quotient bornological space and we prove that if $G$ is an $\varepsilon$b-space and $E\mid F$ is a quotient bornological space, then $(G\varepsilon E)\mid (G\varepsilon F)$ is isomorphic to $G\varepsilon_{\infty}(E\mid F)$. Methods Funct. Anal. Topology 13 (2007), no. 3, 223-235 We consider a non-densely defined Hermitian contractive operator which is unitarily equivalent to its linear-fractional transformation. We show that such an operator always admits self-adjoint extensions which are also unitarily equivalent to their linear-fractional transformations. Methods Funct. Anal. Topology 13 (2007), no. 
3, 236-261 We give a survey on generalized Krein algebras $K_{p,q}^{\alpha,\beta}$ and their applications to Toeplitz determinants. Our methods originated in a 1966 paper by Mark Krein, where he showed that $K_{2,2}^{1/2,1/2}$ is a Banach algebra. Subsequently, Widom proved the strong Szego limit theorem for block Toeplitz determinants with symbols in $(K_{2,2}^{1/2,1/2})_{N\times N}$, and later two of the authors studied symbols in the generalized Krein algebras $(K_{p,q}^{\alpha,\beta})_{N\times N}$, where $\lambda:=1/p+1/q=\alpha+\beta$ and $\lambda=1$. We here extend these results to $0< \lambda <1$. The entire paper is based on fundamental work by Mark Krein, ranging from operator ideals through Toeplitz operators up to Wiener-Hopf factorization. Methods Funct. Anal. Topology 13 (2007), no. 3, 262-266 It is known that if the Euler--Lagrange variational equation is fulfilled everywhere in the classical case $C^1$, then its solution is twice continuously differentiable. The present note is devoted to the study of a similar problem for the Euler--Lagrange equation in the Sobolev space $W_{2}^{1}$. Direct theorems in the theory of approximation of Banach space vectors by exponential type entire vectors Methods Funct. Anal. Topology 13 (2007), no. 3, 267-278 For an arbitrary operator $A$ on a Banach space $X$ which is the generator of a $C_0$--group with a certain growth condition at infinity, direct theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator $A$, the rate of convergence to zero of the best approximation of $x$ by exponential type entire vectors for the operator $A$, and the $k$-module of continuity are established. The results allow one to obtain Jackson-type inequalities in a number of classic spaces of periodic functions and weighted $L_p$ spaces. Methods Funct. Anal. Topology 13 (2007), no. 
3, 279-283 We extend the result of Beurling on the closure in $H^p$ of the linear manifold $F(z)\cdot\{$polynomials of $z\}$ to the classes of entire functions of finite gamma-growth. The set of discontinuity points of separately continuous functions on the products of compact spaces Methods Funct. Anal. Topology 13 (2007), no. 3, 284-295 We solve the problem of constructing separately continuous functions on the product of compact spaces with a given set of discontinuity points. We obtain the following results. 1. For arbitrary \v{C}ech complete spaces $X$, $Y$, and a separable compact perfect projectively nowhere dense zero set $E\subseteq X\times Y$ there exists a separately continuous function $f:X\times Y\to\mathbb R$ whose set of discontinuity points coincides with $E$. 2. For arbitrary \v{C}ech complete spaces $X$, $Y$, and nowhere dense zero sets $A\subseteq X$ and $B\subseteq Y$ there exists a separately continuous function $f:X\times Y\to\mathbb R$ such that the projections of the set of discontinuity points of $f$ coincide with $A$ and $B$, respectively. We construct an example of Eberlein compacts $X$, $Y$, and nowhere dense zero sets $A\subseteq X$ and $B\subseteq Y$ such that the set of discontinuity points of every separately continuous function $f:X\times Y\to\mathbb R$ does not coincide with $A\times B$, and a $CH$-example of separable Valdivia compacts $X$, $Y$ and separable nowhere dense zero sets $A\subseteq X$ and $B\subseteq Y$ such that the set of discontinuity points of every separately continuous function $f:X\times Y\to\mathbb R$ does not coincide with $A\times B$. Methods Funct. Anal. Topology 13 (2007), no. 3, 296-300 Let a sequence $\left\{v_k \right\}^{+\infty}_{-\infty}\in l_2$, a real sequence $\left\{\lambda_k \right\}^{+\infty}_{-\infty}$ such that $\left\{\lambda_k^{-1} \right\}^{+\infty}_{-\infty}\in l_2$, and an orthonormal basis $\left\{e_k \right\}^{+\infty}_{-\infty}$ of a Hilbert space be given. 
We describe a sequence $M=\left\{\mu_k \right\}^{+\infty}_{-\infty}$, $M\cap \mathbb{R}=\varnothing$, such that the families $$ f_k = \sum\limits_{j\in\mathbb{Z}} {v_j\left(\lambda_j-\bar{\mu}_k \right)^{-1}}e_j, \quad k\in \mathbb{Z} $$ form an unconditional basis in $\mathfrak{H}$.
Methods Funct. Anal. Topology 14 (2008), no. 4, 297-301 We give some sufficient conditions under which the linear span of positive compact (resp. Dunford-Pettis, weakly compact, AM-compact) operators cannot be a vector lattice without being a sublattice of the order complete vector lattice of all regular operators. Also, some interesting consequences are obtained. On certain resolvent convergence of one non-local problem to a problem with spectral parameter in boundary condition Methods Funct. Anal. Topology 14 (2008), no. 4, 302-313 A family of non-local problems with the same finite point spectrum is given. The resolvent convergence on a dense linear subspace which gives a problem with spectral parameter in the boundary condition is considered. The spectral eigenvalue decomposition of the last problem on the half line for the Sturm-Liouville operator with trivial potential is given. Methods Funct. Anal. Topology 14 (2008), no. 4, 314-322 In the present paper we describe semiadditive functionals and establish that the construction generated by semiadditive functionals forms a covariant functor. We show that the functor of semiadditive functionals is a normal functor acting in the category of compact sets. Methods Funct. Anal. Topology 14 (2008), no. 4, 323-329 In this paper we investigate solvability of a partial integral equation in the space $L_2(\Omega\times\Omega),$ where $\Omega=[a,b]^\nu.$ We define a determinant for the partial integral equation as a continuous function on $\Omega$, and for continuous kernels of the partial integral equation we give an explicit description of the solution. Methods Funct. Anal. Topology 14 (2008), no. 4, 330-333 In the paper we consider examples of basis families $\{\cos \lambda_k t\}^\infty_1$, $\lambda_k>0$, in the space $L_2(0,\sigma)$, such that the systems $\{e^{i\lambda_kt},e^{-i\lambda_kt}\}^\infty_1$ do not form an unconditional basis in the space $L_2(-\sigma,\sigma)$. 
Generalized stochastic derivatives on parametrized spaces of regular generalized functions of Meixner white noise Methods Funct. Anal. Topology 14 (2008), no. 4, 334-350 We introduce and study Hida-type stochastic derivatives and stochastic differential operators on the parametrized Kondratiev-type spaces of regular generalized functions of Meixner white noise. In particular, we study the interconnection between stochastic integration and differentiation. Our research is based on a general approach that covers the Gaussian, Poissonian, Gamma, Pascal and Meixner cases. Methods Funct. Anal. Topology 14 (2008), no. 4, 351-360 In this paper, we define the first topological $(\sigma,\tau)$-cohomology group and examine vanishing of the first $(\sigma,\tau)$-cohomology groups of certain triangular Banach algebras. We apply our results to study the $(\sigma,\tau)$-weak amenability and $(\sigma,\tau)$-amenability of triangular Banach algebras. Representation of commutants for composition operators induced by a hyperbolic linear fractional automorphism of the unit disk Methods Funct. Anal. Topology 14 (2008), no. 4, 361-371 We describe the commutant of the composition operator induced by a hyperbolic linear fractional transformation of the unit disk onto itself in the class of linear continuous operators which act on the space of analytic functions. Two general classes of linear continuous operators which commute with such composition operators are constructed. The criteria of maximal dissipativity and self-adjointness for a class of differential-boundary operators with bounded operator coefficients Methods Funct. Anal. Topology 14 (2008), no. 4, 372-379 A class of second order differential-boundary operators acting in the Hilbert space of infinite-dimensional vector-functions is investigated. The domains of the considered operators are defined by nonstandard (e.g., multipoint-integral) boundary conditions. 
The criteria of maximal dissipativity and the criteria of self-adjointness for the investigated operators are established. Methods Funct. Anal. Topology 14 (2008), no. 4, 380-385 The paper gives a complete characterization of all unitary operators acting in some wide Hilbert spaces $A^2_\omega(\mathbb{C})$ of entire functions possessing weighted square integrable modulus over the whole finite complex plane, which exhaust the set of all entire functions. Methods Funct. Anal. Topology 14 (2008), no. 4, 386-396 A detailed analysis of sufficient conditions on a family of many-body potentials, which ensure stability, superstability or strong superstability of a statistical system, is given in the present work. An example of a superstable many-body interaction is also given.
Kakeya problem (latest revision as of 00:35, 5 June 2009)

A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.

Basic Estimates

Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.

Lower Bounds

To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\ge 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. 
The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math]. The better estimate [math]k_n\ge (9/5)^n[/math] is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements). A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao (http://front.math.ucdavis.edu/math.CO/9906097), we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math]

Upper Bounds

We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). 
Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
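The claim that [math]k_2=7[/math] is small enough to check exhaustively. The sketch below is my own brute-force verification of that one value, not code from the wiki page:

```python
from itertools import combinations, product

# Brute-force check that k_2 = 7: the smallest Kakeya set in F_3^2
# (a set containing a line {e, e+d, e+2d} for every direction d)
# has exactly 7 elements.
F = list(product(range(3), repeat=2))  # the 9 points of F_3^2

# Keep one representative per direction (d and 2d span the same lines);
# this yields (3^2 - 1)/2 = 4 directions.
directions, seen = [], set()
for d in F:
    if d == (0, 0) or d in seen:
        continue
    directions.append(d)
    seen.add(d)
    seen.add(tuple((2 * x) % 3 for x in d))

def is_kakeya(points):
    s = set(points)
    # For each direction there must be a base point e with the whole
    # line {e, e+d, e+2d} inside the set.
    return all(
        any(
            tuple((e[i] + d[i]) % 3 for i in range(2)) in s
            and tuple((e[i] + 2 * d[i]) % 3 for i in range(2)) in s
            for e in s
        )
        for d in directions
    )

k2 = min(k for k in range(1, 10)
         if any(is_kakeya(c) for c in combinations(F, k)))
print(k2)  # 7
```

The same search over [math]{\mathbb F}_3^3[/math] (27 points) is what the phrase "using a computer" above refers to, though it needs a smarter search than this naive enumeration.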
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that. In fact even our friend Max gets that. http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link, and I would have showcased it all on its own had I seen it first. The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics. That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average, and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, for y = 1/x as x approaches infinity, y approaches 0, but we don't actually USE infinity in any calculations; we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance, space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity? 
1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense; it's a philosophical issue of the nonexistence of significance, because there is nothing in zero to be significant. 2) Einstein's general relativity uses Riemann's differential geometry. In the 1840s Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean, as long as they were "locally" Euclidean. Like spheres, and torii, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics. Isn't this the same problem as previous? dx=0? 
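For what it's worth, the standard modern answer to the "what am I failing to see?" question is that the limit definition never divides zero by zero: the quotient is formed only while $\Delta x \neq 0$, and only then is the limit taken. This is the textbook definition, not anything specific to this thread:

```latex
f'(x) = \lim_{\Delta x \to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x}
% Worked for f(x) = x^2, always with \Delta x \neq 0:
\frac{(x+\Delta x)^2 - x^2}{\Delta x}
  = \frac{2x\,\Delta x + (\Delta x)^2}{\Delta x}
  = 2x + \Delta x,
% and letting \Delta x \to 0 (without ever setting it to 0) gives
f'(x) = 2x.
```

So the quotient is simplified first, and the troublesome $\Delta x$ cancels before the limit is evaluated; 0/0 never actually occurs.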
3) Fourier series link the physics of heat to the physics of the Internet, via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're decomposing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math. I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infinite-th 'a'? 4) Quantum theory is functional analysis. If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete, meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way. Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite, and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces. 
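On the "infinite-th 'a'" question: nobody ever evaluates an infinite-th coefficient. The infinite series means the limit of finite partial sums, each of which is an ordinary finite computation. A sketch using the classical square-wave series (a standard textbook example, not taken from this thread):

```python
import math

# The square wave sign(sin x) has the classical Fourier series
# (4/pi) * sum over odd k of sin(k*x)/k.  Every partial sum is a
# finite computation; the "infinite series" is just their limit.
def square_wave_partial(x, n_terms):
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

x = math.pi / 2  # the square wave equals 1 here
for n in (1, 10, 1000):
    print(n, square_wave_partial(x, n))
# the printed values approach 1 as more (finite) terms are added
```

Each coefficient has a finite index n, and convergence is a statement about these finite sums getting arbitrarily close to the target value; no "cosine of infinity" is ever needed.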
Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations." We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it." He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion. Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it. ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. 
That's what I've got against Craig. Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol 5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity.) I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity. In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: if you begin by studying the flow of heat through an iron rod, you will inexorably discover transfinite set theory. Right, because of what Max said about the continuum model vs the actual discrete.
Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light. I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat: https://www.quora.com/What-is-heat-1 https://www.quora.com/What-is-meant-by-heat https://www.quora.com/What-is-heat-in-physics https://www.quora.com/What-is-the-definition-of-heat https://www.quora.com/What-distinguishes-work-and-heat Physics is a mess. What gamma rays are depends on who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing.... I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality. It just means we're using averages rather than discrete actualities and it's close enough. I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math.
As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is. I think it means there are really no separate things, and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself. Anyway, great post! Please don't be mad. Everyone here values your presence and is intimidated by your obvious mathematical prowess. Don't take my pushback too seriously. I'd prefer if we could collaborate as colleagues rather than compete.
You say you want to understand how $\lambda$ and $\eta$ affect the cost function. If you hold the weights $w$ fixed, the equation for $C$ tells you how $\lambda$ affects the cost function, and $\eta$ doesn't affect the cost function; it only affects the sequence of steps taken by gradient descent. But the tricky thing is that the final weights $w$ depend on $\lambda,\eta$. If we choose $\lambda,\eta$ and then use gradient descent to train a set of weights, the final weights $w$ will depend on $\lambda$ and $\eta$. So, we can think of the final weights $w$ as actually being a function of $\lambda,\eta$. Similarly, the final cost $C$ is actually a function of $w,\lambda,\eta$, and since $w$ in turn depends on $\lambda,\eta$, this means that the final cost $C$ is really a function of the two variables $\lambda,\eta$. Suppose we write it as a function, to make the dependence clearer: $C(\lambda,\eta)$. Now we want to minimize $C(\lambda,\eta)$. If you want to minimize this using gradient descent, you need to be able to compute the partial derivatives ${\partial \over \partial \lambda} C(\lambda,\eta)$ and ${\partial \over \partial \eta} C(\lambda,\eta)$. Unfortunately, it is not at all clear how to write down an analytical expression for those partial derivatives. The tricky bit is that it is not at all clear how the final weights $w$ depend on $\lambda,\eta$ -- we have no nice way to write down an analytical expression for that. So, we can differentiate to get $${\partial \over \partial \lambda} C(\lambda,\eta) = {\partial \over \partial \lambda} C_0(\lambda,\eta) + {\lambda \over 2n} \sum_i 2 w_i {\partial w_i \over \partial \lambda} + {1 \over 2n} \sum_i w_i^2,$$ but how do we compute ${\partial w_i \over \partial \lambda}$? And how do we compute ${\partial \over \partial \lambda} C_0(\lambda,\eta)$?
It's not clear, as we don't have an analytical expression for the final weights $w_i$ as a function of $\lambda$; the only way we have to compute the weights is to run the gradient descent algorithm to completion. So that's the challenge with using gradient descent to optimize $\lambda,\eta$, and one reason why people often use grid search instead. I'm not saying applying gradient descent is impossible, but one would have to apply black-box methods to estimate the gradients, and it might still fail because of multiple local minima. Usually grid search is easy enough to apply, so that is what is used.
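To make the grid-search alternative concrete, here is a minimal sketch. `train` is a hypothetical stand-in for "run gradient descent to completion with these hyperparameters and report the final cost $C(\lambda,\eta)$"; here it is a made-up smooth function so the sketch runs.

```python
# Minimal grid search over (lambda, eta): evaluate the final cost on a
# grid of hyperparameter pairs and keep the best one.
import itertools

def train(lam, eta):
    # Hypothetical stand-in for a full gradient-descent run returning
    # the final cost C(lambda, eta); a toy function for illustration.
    return (lam - 0.1) ** 2 + (eta - 0.5) ** 2

lambdas = [0.01, 0.1, 1.0]
etas = [0.05, 0.5, 5.0]

# Try every (lambda, eta) pair and pick the one with the lowest cost.
best = min(itertools.product(lambdas, etas), key=lambda p: train(*p))
print(best)
```

Each grid point costs one full training run, which is why the grid is usually kept coarse and then refined around the best cell.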
FORMULAS, TIPS AND SHORTCUTS FOR SOLVING QUESTIONS RELATED TO BANKER'S DISCOUNT: Assume that a merchant A purchases goods worth, say, Rs. 1000 from another merchant B on a credit of, say, 4 months. Then B prepares a bill called a bill of exchange (also called a Hundi). On receipt of the goods, A gives an agreement by signing the bill, allowing B to withdraw the money from A's bank exactly 4 months after the date of the bill. The date exactly 4 months later is known as the nominally due date. Three more days (called grace days) are added to this date to get a date known as the legally due date. The amount given on the bill is called the Face Value (F), which is Rs. 1000 in this case. Assume that B needs this money before the legally due date. He can approach a banker or broker who pays him the money against the bill, but somewhat less than the face value. The banker deducts the simple interest on the face value for the unexpired time. This deduction is known as the Banker's Discount (BD). In other words, the Banker's Discount (BD) is the simple interest on the face value for the period from the date on which the bill was discounted to the legally due date. The present worth is the amount which, if placed at the given rate for the unexpired time, will amount to the face value at the end of that time. The interest on the present worth is called the True Discount (TD). If the banker deducted only the true discount on the face value for the unexpired time, he would not gain anything. The Banker's Gain (BG) is the difference between the banker's discount and the true discount for the unexpired time. Note: When the date of the bill is not given, grace days are not to be added.
IMPORTANT FORMULAE: Let F = Face Value of the Bill, R = Rate of Interest, T = Time in Years, BD = Banker's Discount, TD = True Discount, BG = Banker's Gain, and PW = True Present Worth \begin{aligned} \text{BD = Simple Interest on the face value of the bill for the unexpired time = }\dfrac{\text{FRT}}{100} \end{aligned} \begin{aligned} \text{PW = }\dfrac{\text{F}}{1 + \dfrac{\text{RT}}{100}} \text{ = } \dfrac{100\,\text{F}}{100 + \text{RT}} \end{aligned} \begin{aligned} \text{TD = Simple Interest on the present worth for the unexpired time = }\dfrac{\text{PW} \times \text{RT}}{100} = \dfrac{\text{FRT}}{100 + \text{RT}} \end{aligned} \begin{aligned} \text{TD = }\dfrac{\text{BD} \times 100}{100 + \text{RT}} \end{aligned} \begin{aligned} \text{PW = F } - \text{ TD} \end{aligned} \begin{aligned} \text{F = }\dfrac{\text{BD} \times \text{TD}}{\text{BD} - \text{TD}} \end{aligned} \begin{aligned} \text{BG = BD } - \text{ TD = Simple Interest on TD = }\dfrac{(\text{TD})^2}{\text{PW}} \end{aligned} \begin{aligned} \text{TD = }\sqrt{\text{PW} \times \text{BG}} \end{aligned} \begin{aligned} \text{TD = }\dfrac{\text{BG} \times 100}{\text{RT}} \end{aligned}
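As a quick numeric sanity check on these formulas (the sample values of R and T below are made up, not from the text):

```python
# Check the banker's discount identities for F = Rs. 1000,
# R = 5% per annum, T = 0.5 years (illustrative values).
F, R, T = 1000.0, 5.0, 0.5

BD = F * R * T / 100            # banker's discount: SI on face value
PW = 100 * F / (100 + R * T)    # true present worth
TD = F * R * T / (100 + R * T)  # true discount: SI on present worth
BG = BD - TD                    # banker's gain

print(BD, PW, TD, BG)
```

Note that the identities PW = F - TD and BG = TD^2 / PW hold exactly for these values, as the formula sheet claims.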
Let $INF = \{ w \in \Sigma^* \mid |L(M_w)| = \infty \}$. It is easy to show with Rice's theorem that $INF$ is not decidable ($INF$ is non-trivial because of $\emptyset$ and $\Sigma^*$). How can you show this with a reduction from the halting problem (for example)? Here are my thoughts: I had an idea of running the decider of $INF$ on its own input, but couldn't get very far. Another idea that I just had was: construct a Turing machine $M'$ that halts if the language is finite, and loops endlessly if it is infinite:

M'(w):
  for i = 0, 1, ...:
    simulate M_w on w_i

Knowing the halting problem, we cannot know whether that Turing machine will halt or not. We have supposedly reduced $HALT$ to $INF$. Is this correct? (Since we take an instance of $HALT$ ...) Can I get some feedback on my solutions?
We have solved the wave equation by using Fourier series. But it is often more convenient to use the so-called d’Alembert solution to the wave equation. While this solution can be derived using Fourier series as well, it is really an awkward use of those concepts. It is easier and more instructive to derive this solution by making the right change of variables to get an equation that can be solved by simple integration. Suppose we have the wave equation \[y_{tt}=a^2 y_{xx}.\] We wish to solve the equation (4.8.1) given the conditions \[ \left. \begin{array}{ccc} y(0,t)=y(L,t)=0 & {\rm{for ~all~}} t, \\ y(x,0)=f(x) & 0<x<L, \\ y_t(x,0)=g(x) & 0<x<L. \end{array} \right.\] 4.8.1 Change of variables We will transform the equation into a simpler form where it can be solved by simple integration. We change variables to \( \xi =x-at\), \( \eta =x+at\). The chain rule says: \[ \frac{\partial}{\partial x} = \frac{\partial \xi}{\partial x} \frac{\partial}{\partial \xi}+\frac{\partial \eta}{\partial x}\frac{\partial}{\partial \eta}= \frac{\partial}{\partial \xi}+ \frac{\partial}{\partial \eta}, \\ \frac{\partial}{\partial t}= \frac{\partial \xi}{\partial t}\frac{\partial}{\partial \xi}+\frac{\partial \eta}{\partial t}\frac{\partial}{\partial \eta}= -a \frac{\partial}{\partial \xi} + a\frac{\partial}{\partial \eta}.\] We compute \[ y_{xx}= \frac{\partial^2 y}{\partial x^2}= \left( \frac{\partial}{\partial \xi}+ \frac{\partial}{\partial \eta} \right) \left( \frac{\partial y}{\partial \xi}+ \frac{\partial y}{\partial \eta} \right)= \frac{\partial^2 y}{\partial \xi^2}+2 \frac{\partial^2 y}{\partial \xi \partial \eta}+ \frac{\partial^2 y}{\partial \eta^2}, \\ y_{tt}= \frac{\partial^2 y}{\partial t^2}= \left( -a \frac{\partial}{\partial \xi}+a \frac{\partial}{\partial \eta} \right) \left( -a \frac{\partial y}{\partial \xi}+ a \frac{\partial y}{\partial \eta} \right)= a^2 \frac{\partial^2 y}{\partial \xi^2}-2a^2 \frac{\partial^2 y}{\partial \xi \partial \eta}+a^2 \frac{\partial^2 y}{\partial \eta^2}. \] In the above computations, we used the fact from calculus that \( \frac{\partial^2 y}{\partial \xi \partial \eta}=\frac{\partial^2 y}{\partial \eta \partial \xi}\). We plug what we got into the wave equation, \[ 0=a^2 y_{xx}-y_{tt}=4a^2 \frac{\partial^2 y}{\partial \xi \partial \eta}= 4a^2 y_{ \xi \eta}.\] Therefore, the wave equation (4.8.1) transforms into \( y_{ \xi \eta} =0 \). It is easy to find the general solution to this equation by integrating twice. Keeping \( \xi\) constant, we integrate with respect to \( \eta\) first 4 and notice that the constant of integration depends on \( \xi\); for each \( \xi\) we might get a different constant of integration. We get \(y _{ \xi}=C( \xi)\). Next, we integrate with respect to \( \xi\) and notice that the constant of integration must depend on \( \eta\). Thus, \( y= \int C( \xi)d \xi+B( \eta) \). The solution must, therefore, be of the following form for some functions \(A( \xi)\) and \(B( \eta ) \) : \[ y =A( \xi)+B( \eta)= A(x-at)+B(x+at).\] The solution is a superposition of two functions (waves) travelling at speed \(a\) in opposite directions. The coordinates \(\xi\) and \(\eta\) are called the characteristic coordinates, and a similar technique can be applied to more complicated hyperbolic PDE. 4.8.2 D’Alembert’s formula We know what any solution must look like, but we need to solve for the given side conditions. We will just give the formula and see that it works. First let \( F(x)\) denote the odd extension of \( f(x)\), and let \( G(x)\) denote the odd extension of \( g(x)\). Define \[ A(x)= \frac{1}{2} F(x)- \frac{1}{2a} \int^x_0 G(s) ds,~~~~~B(x)= \frac{1}{2} F(x)+ \frac{1}{2a} \int^x_0 G(s) ds. \] We claim this \( A(x)\) and \( B(x)\) give the solution.
Explicitly, the solution is \(y(x,t)= A(x-at)+B(x+at)\) or in other words: \[ y(x,t)= \frac{1}{2}F(x-at)- \frac{1}{2a} \int_0^{x-at} G(s)ds+ \frac{1}{2}F(x+at)+ \frac{1}{2a} \int_0^{x+at} G(s)ds \\ = \frac{F(x-at)+F(x+at)}{2} + \frac{1}{2a} \int_{x-at}^{x+at} G(s)ds.\] Let us check that the d’Alembert formula really works. \[ y(x,0)= \frac{1}{2}F(x)- \frac{1}{2a} \int_0^{x} G(s)ds+ \frac{1}{2}F(x)+ \frac{1}{2a} \int_0^{x} G(s)ds =F(x).\] So far so good. Assume for simplicity \(F\) is differentiable. By the fundamental theorem of calculus we have \[ y_t(x,t)= \frac{-a}{2}F'(x-at)+ \frac{1}{2}G(x-at)+ \frac{a}{2} F'(x+at)+ \frac{1}{2}G(x+at).\] So \[ y_t(x,0)= \frac{-a}{2}F'(x)+ \frac{1}{2}G(x)+ \frac{a}{2} F'(x)+ \frac{1}{2}G(x)=G(x).\] Yay! We’re smoking now. OK, now the boundary conditions. Note that \(F(x)\) and \(G(x)\) are odd. Also \( \int_0^x G(s)ds\) is an even function of \(x\) because \(G(x)\) is odd (to see this fact, do the substitution \(s=-v\)). So \[ y(0,t)= \frac{1}{2}F(-at)- \frac{1}{2a} \int_0^{-at} G(s)ds+ \frac{1}{2}F(at)+ \frac{1}{2a} \int_0^{at} G(s)ds \\ = \frac{-1}{2}F(at)- \frac{1}{2a} \int_0^{at} G(s)ds+ \frac{1}{2}F(at)+ \frac{1}{2a} \int_0^{at} G(s)ds=0 .\] Note that \(F(x)\) and \(G(x)\) are \(2L\) periodic. We compute \[ y(L,t)= \frac{1}{2}F(L-at)- \frac{1}{2a} \int_0^{L-at} G(s)ds+ \frac{1}{2}F(L+at)+ \frac{1}{2a} \int_0^{L+at} G(s)ds \\ = \frac{1}{2}F(-L-at)- \frac{1}{2a} \int_0^{L} G(s)ds- \frac{1}{2a} \int_0^{-at} G(s)ds +\\ + \frac{1}{2}F(L+at)+ \frac{1}{2a} \int_0^{L} G(s)ds+ \frac{1}{2a} \int_0^{at} G(s)ds \\ = \frac{-1}{2}F(L+at)- \frac{1}{2a} \int_0^{at} G(s)ds+ \frac{1}{2}F(L+at)+ \frac{1}{2a} \int_0^{at} G(s)ds=0.\] And voilà, it works. Example \(\PageIndex{1}\): D’Alembert says that the solution is a superposition of two functions (waves) moving in the opposite direction at “speed” \(a\). To get an idea of how it works, let us work out an example. 
Consider the simpler setup \[ y_{tt}=y_{xx}, \\ y(0,t)=y(1,t)=0, \\ y(x,0)=f(x), \\ y_t(x,0)=0. \] Here \(f(x)\) is an impulse of height 1 centered at \(x=0.5\): \[ f(x) = \left\{ \begin{array}{ccc} 0 & {\rm{if}} & 0 \leq x < 0.45, \\ 20(x-0.45) & {\rm{if}} & 0.45 \leq x < 0.5, \\ 20(0.55-x) & {\rm{if}} & 0.5 \leq x < 0.55, \\ 0 & {\rm{if}} & 0.55 \leq x \leq 1. \end{array} \right.\] The graph of this impulse is the top left plot in Figure 4.21. Let \(F(x)\) be the odd periodic extension of \(f(x)\). Then from (4.8.8) we know that the solution is given as \[ y(x,t)= \frac{F(x-t)+F(x+t)}{2}.\] It is not hard to compute specific values of \( y(x,t)\). For example, to compute \(y(0.1,0.6)\) we notice \(x-t=-0.5\) and \(x+t=0.7\). Now \(F(-0.5)=-f(0.5)=-20(0.55-0.5)=-1\) and \(F(0.7)=f(0.7)=0\). Hence \(y(0.1,0.6)= \frac{-1+0}{2}=-0.5\). As you can see, the d’Alembert solution is much easier to actually compute and to plot than the Fourier series solution. See Figure 4.21 for plots of the solution \(y\) for several different \(t\). Figure 4.21: Plot of the d’Alembert solution for \(t=0, t=0.2, t=0.4,\) and \(t=0.6\). 4.8.3 Another way to solve for the side conditions It is perhaps easier and more useful to memorize the procedure rather than the formula itself. The important thing to remember is that a solution to the wave equation is a superposition of two waves traveling in opposite directions. That is, \[y(x,t)=A(x-at)+B(x+at).\] If you think about it, the exact formulas for \(A\) and \(B\) are not hard to guess once you realize what kind of side conditions \(y(x,t)\) is supposed to satisfy. Let us give the formula again, but slightly differently. The best approach is to do this in stages.
When \(g(x)=0\) (and hence \(G(x)=0\)) we have the solution \[ \frac{F(x-at)+F(x+at)}{2}.\] On the other hand, when \(f(x)=0\) (and hence \(F(x)=0\)), we let \[H(x)=\int_0^x G(s)ds.\] The solution in this case is \[ \frac{1}{2a}\int_{x-at}^{x+at} G(s)ds = \frac{-H(x-at)+H(x+at)}{2a}.\] By superposition we get a solution for the general side conditions (4.8.2) (when neither \(f(x)\) nor \(g(x)\) are identically zero): \[ y(x,t)= \frac{F(x-at)+F(x+at)}{2} + \frac{-H(x-at)+H(x+at)}{2a}.\] Do note the minus sign before the \(H\), and the \(a\) in the second denominator. Exercise \(\PageIndex{1}\): Check that the new formula (4.8.21) satisfies the side conditions (4.8.2). Warning: Make sure you use the odd extensions \(F(x)\) and \(G(x)\) when you have formulas for \(f(x)\) and \(g(x)\). The thing is, those formulas in general hold only for \(0<x<L\), and are not usually equal to \(F(x)\) and \(G(x)\) for other \(x\). 4We can just as well integrate with respect to \( \xi\) first, if we wish.
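The worked example from the previous section (\(a=1\), \(L=1\), and the triangular impulse) can also be checked numerically; here is a small Python sketch of the odd periodic extension and the d’Alembert formula:

```python
# Evaluate the d'Alembert solution y(x,t) = (F(x-t) + F(x+t)) / 2 for
# the triangular impulse example, where F is the odd 2L-periodic
# extension of f (here L = 1 and a = 1).
def f(x):
    if 0.45 <= x < 0.5:
        return 20 * (x - 0.45)
    if 0.5 <= x < 0.55:
        return 20 * (0.55 - x)
    return 0.0

def F(x):
    # odd extension, 2L-periodic with L = 1
    x = (x + 1.0) % 2.0 - 1.0   # reduce to [-1, 1)
    return f(x) if x >= 0 else -f(-x)

def y(x, t):
    return (F(x - t) + F(x + t)) / 2

print(y(0.1, 0.6))  # matches the hand computation of -0.5 (up to floating point)
```

The same function can be tabulated over a grid of \(x\) to reproduce the plots in Figure 4.21.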
A cylinder of radius $a$ is fixed in space. A hoop of mass $m$ and radius $b$ is in contact with the surface of the cylinder and is able to roll around it without slipping. The force of gravity acts on the system (figure below). At $t=0$, the hoop is at rest and a tangential velocity $v_0$ is imposed at the lowest point of the hoop. What must be the minimum magnitude of $v_0$ in order for the hoop to perform a complete loop around the cylinder? I have no idea what condition must be imposed in order to describe the complete loop around the cylinder and then compute $v_0$. I think it has something to do with the rolling constraint and that the center of mass (COM) of the hoop must describe a complete arc length $2\pi(b-a)$. The initial condition must be $\dot{\alpha}(t=0)=v_0/b$ (it comes from $\boldsymbol{v}_\perp=\boldsymbol{\omega}\times\boldsymbol{r}$), where $\alpha$ is the angle between a vertical axis fixed at the COM of the cylinder and the line joining it to the COM of the hoop.
I am trying to model a mass being driven by a motor (angular acceleration $\dot{\omega}$, shaft moment of inertia $J$) with belts and pulleys. This is then being compared to numerical simulations. I find a 20-30% difference in results. The system is as follows: Here, $\tau$ is the torque provided by the motor, whose shaft has moment of inertia $J$. The torque may be calculated as $\tau = J \dot{\omega}$, where the pulley (or sheave) has a diameter of $b$ and hence a radius of $b/2$. The mass being driven is $m$. The effect of gravity and any slack in the belt is neglected. A viscous force acts with a coefficient of $\nu$ (N/m/s). The belt is actually 3 ropes. However, my force balance does not reproduce the simulated response, and I am asking what went wrong. The horizontal direction is the $x$ direction and the corresponding displacement, velocity ($v$) and acceleration ($a$) of the mass are $x, \dot{x}, \ddot{x}$ respectively. Force balance, where the tension in a single rope is given by $T$: $$F = m\,a$$ $$3T - \nu \, v = m\,a$$ $$3\frac{\tau}{b/2} = m \ddot{x} + \nu \dot{x} $$ $\because$ the angular acceleration may be given as $\dot{\omega} = a/(b/2) = \ddot{x}/(b/2)$: $$ 3\frac{\tau}{b/2} = 3 \frac{J \dot{\omega}}{b/2} = 3 \frac{J \ddot{x}}{b^2/4}$$ $$\therefore m \ddot{x} + \nu \dot{x} = 3 \frac{J \ddot{x}}{b^2/4}$$ or $$ \frac{m}{3}\ddot{x} + \frac{\nu}{3} \dot{x} = \frac{4 J\ddot{x}}{b^2}$$ $$ \left[\frac{m}{3} - \frac{4 J}{b^2}\right] \ddot{x} + \frac{\nu}{3} \dot{x} = 0$$ The initial conditions for this differential equation are for position and velocity: $x(0) = 0, \dot{x}(0) = \omega b/2$. For the following physical parameters: $m=2000$ kg, $\nu=1000$ N/m/s, $\omega = 2$ rev per sec, $b=0.1$ m, $J=1$ kg m$^2$, I use Mathematica to solve the 2nd order ODE and plot the position wrt time.
Simulations run by a proprietary software package, however, return the following response: I can have my model get about 1-2% close to the proprietary model through the following ODE $$ \frac{1}{3}\left[m - \frac{4 J}{b^2}\right] \ddot{x} + \nu \dot{x} = 0$$ This ODE is different from what I derived from a force balance. What gives? What went wrong? For those interested, the Mathematica code to solve my model:

Clear[m, Ir, b, \[Nu], s, t, \[Omega]];
m = 2000.; (* mass in kg *)
Ir = 1.; (* moment of inertia of shaft in kg m^2 *)
b = 100*10^-3; (* sheave diameter in meter *)
\[Nu] = 1000.; (* viscous drag in N/m/s *)
\[Omega] = N[120/60]; (* rev per second of shaft in 1/s *)
sVal = DSolveValue[{(m/3 - (4 Ir)/(b b)) s''[t] + (\[Nu]/3) s'[t] == 0,
    s[0] == 0, s'[0] == (b \[Omega])/2}, s[t], t] // Expand
(* alternative that matches the simulation:
sVal = DSolveValue[{(1/3) (m - (4 Ir)/(b b)) s''[t] + \[Nu] s'[t] == 0,
    s[0] == 0, s'[0] == (b \[Omega])/2}, s[t], t] // Expand *)
Plot[sVal, {t, 0, 20}, PlotRange -> {{0, 20}, {0, 0.25}}, ImageSize -> Medium,
 PlotStyle -> {Thick, Black}, BaseStyle -> {FontSize -> 15}, Frame -> True,
 FrameLabel -> {"Time, t", "Position, x(t)"}, GridLines -> All]
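For what it's worth, an ODE of the form $A\ddot{x} + B\dot{x} = 0$ with $x(0)=0$, $\dot{x}(0)=v_0$ has the closed form $x(t) = \frac{v_0 A}{B}\left(1 - e^{-Bt/A}\right)$, so the Mathematica result can be cross-checked in a few lines of Python. The coefficients below are taken from the force balance as written; this sketch does not settle which ODE is the right one.

```python
import math

# Closed-form solution of A x'' + B x' = 0, x(0) = 0, x'(0) = v0:
#   x(t) = (v0 * A / B) * (1 - exp(-B * t / A))
m, J, b, nu, omega = 2000.0, 1.0, 0.1, 1000.0, 2.0

A = m / 3 - 4 * J / b**2   # coefficient from the force balance as derived
B = nu / 3
v0 = b * omega / 2         # initial belt speed

def x(t):
    return (v0 * A / B) * (1 - math.exp(-B * t / A))

print(x(20.0))  # position approaches v0 * A / B = 0.08 m
```

Plugging in the alternative coefficients $A' = \frac{1}{3}(m - 4J/b^2)$, $B' = \nu$ instead gives a different asymptote, which is one quick way to compare the two models against the simulation.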
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
Definition:Limit of Function Let $M_1 = \left({A_1, d_1}\right)$ and $M_2 = \left({A_2, d_2}\right)$ be metric spaces. Let $c$ be a limit point of $M_1$. Let $f: A_1 \to A_2$ be a mapping from $A_1$ to $A_2$ defined everywhere on $A_1$ except possibly at $c$. Let $L \in A_2$. $f \left({x}\right)$ is said to tend to the limit $L$ as $x$ tends to $c$ and is written: $f \left({x}\right) \to L$ as $x \to c$ or $\displaystyle \lim_{x \mathop \to c} f \left({x}\right) = L$ if and only if: $\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: 0 < d_1 \left({x, c}\right) < \delta \implies d_2 \left({f \left({x}\right), L}\right) < \epsilon$ That is, for every real positive $\epsilon$ there exists a real positive $\delta$ such that every point in the domain of $f$ within $\delta$ of $c$ has an image within $\epsilon$ of $L$. Equivalently: $\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: f \left({B_\delta \left({c; d_1}\right) \setminus \left\{{c}\right\}}\right) \subseteq B_\epsilon \left({L; d_2}\right)$, where: $B_\delta \left({c; d_1}\right) \setminus \left\{{c}\right\}$ is the deleted $\delta$-neighborhood of $c$ in $M_1$ and $B_\epsilon \left({L; d_2}\right)$ is the open $\epsilon$-ball of $L$ in $M_2$. Real and Complex Numbers As: the real number line $\R$ under the usual (Euclidean) metric forms a metric space, and the complex plane $\C$ under the usual metric forms a metric space, the definition holds for functions on $\R$ and $\C$. However, see the definition of the limit of a real function below: the concept of the limit of a real function has been around for a lot longer than that on a general metric space. The definition for the function on a metric space is a generalization of that for a real function, but the latter has an extra subtlety which is not encountered in the general metric space, namely: the "direction" from which the limit is approached. Let $\openint a b$ be an open real interval.
Let $f: \openint a b \to \R$ be a real function. Let $L \in \R$. Suppose that: $\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \forall x \in \R: b - \delta < x < b \implies \size {\map f x - L} < \epsilon$ That is, for every real strictly positive $\epsilon$ there exists a real strictly positive $\delta$ such that every real number in the domain of $f$, less than $b$ but within $\delta$ of $b$, has an image within $\epsilon$ of $L$. Then $\map f x$ is said to tend to the limit $L$ as $x$ tends to $b$ from the left, and we write: $\map f x \to L$ as $x \to b^-$ or $\displaystyle \lim_{x \mathop \to b^-} \map f x = L$ This is voiced: the limit of $\map f x$ as $x$ tends to $b$ from the left and such an $L$ is called: a limit from the left. Let $\Bbb I = \openint a b$ be an open real interval. Let $f: \Bbb I \to \R$ be a real function. Let $L \in \R$. Suppose that: $\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \forall x \in \Bbb I: a < x < a + \delta \implies \size {\map f x - L} < \epsilon$ That is, for every real strictly positive $\epsilon$ there exists a real strictly positive $\delta$ such that every real number in the domain of $f$, greater than $a$ but within $\delta$ of $a$, has an image within $\epsilon$ of $L$. Then $\map f x$ is said to tend to the limit $L$ as $x$ tends to $a$ from the right, and we write: $\map f x \to L$ as $x \to a^+$ or $\displaystyle \lim_{x \mathop \to a^+} \map f x = L$ This is voiced the limit of $\map f x$ as $x$ tends to $a$ from the right and such an $L$ is called: a limit from the right. Limit Let $\openint a b$ be an open real interval. Let $c \in \openint a b$. Let $f: \openint a b \setminus \set c \to \R$ be a real function. Let $L \in \R$. 
Suppose that: $\forall \epsilon \in \R_{>0}: \exists \delta \in \R_{>0}: \forall x \in \R: 0 < \size {x - c} < \delta \implies \size {\map f x - L} < \epsilon$ That is: for every (strictly) positive real number $\epsilon$, there exists a (strictly) positive real number $\delta$ such that every real number $x \ne c$ in the domain of $f$ within $\delta$ of $c$ has an image within $\epsilon$ of $L$. $\epsilon$ is usually considered as having the connotation of being "small" in magnitude, but this is a misunderstanding of its intent: the point is that (in this context) $\epsilon$ can be made arbitrarily small. Then $\map f x$ is said to tend to the limit $L$ as $x$ tends to $c$, and we write: $\map f x \to L$ as $x \to c$ or $\displaystyle \lim_{x \mathop \to c} \map f x = L$ This is voiced: the limit of $\map f x$ as $x$ tends to $c$. It can directly be seen that this definition is the same as that for a general metric space.
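To make the quantifiers concrete, here is a small numerical probe (my own example, with $f(x) = x^2$, $c = 2$, $L = 4$). It samples points near $c$ rather than proving anything, but it shows the roles of $\epsilon$ and $\delta$: for $\epsilon \le 1$ the choice $\delta = \epsilon/5$ works, since $\size{x^2 - 4} = \size{x-2}\,\size{x+2} < 5\delta$ there.

```python
# Sample points within delta of c (excluding c itself) and check that
# all of their images land within epsilon of L.
def within_limit(f, c, L, eps, delta, samples=10_000):
    return all(
        abs(f(x) - L) < eps
        for i in range(1, samples + 1)
        for x in (c - delta * i / samples, c + delta * i / samples)
        if x != c
    )

eps = 0.1
print(within_limit(lambda x: x * x, 2.0, 4.0, eps, eps / 5))  # True
```

Making $\delta$ too large (say $\delta = 1$ for the same $\epsilon$) makes the check fail, which is exactly the quantifier order at work: $\delta$ must be chosen in response to $\epsilon$.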
How to calculate proportions If there are about 100 E. coli (a bacterium) in a 20 mL water sample, about how many E. coli would be found in 1000 mL of this water? There are two ways we can solve this problem: Method 1: solving a proportion equation Step 1: 100 E. coli / 20 mL = ? / 1000 mL. This can also be written as: $$ \frac{100\;E.coli}{20\; mL} = \frac{?}{1000\; mL} $$ We usually let $x$ be the number of E. coli that we are looking for. Hence the equation above can be written as: $$ \frac{100\;E.coli}{20\; mL} = \frac{x}{1000\; mL} $$ Step 2: Cross-multiply: $$ 100\; E.coli \times 1000\; mL = 20\; mL \times x $$ Step 3: Solve the equation for x, where x is the number of E. coli we are looking for: $$ \begin{align*} x &= \frac{100\; E. coli \times 1000\; mL}{20\; mL} \cr &= 5000\; E. coli \end{align*} $$ Therefore, about 5000 E. coli can be found in 1000 mL of this water. Method 2: solving how many E. coli are in each mL Step 1: 100 E. coli / 20 mL = ? / 1 mL Step 2: To determine how many E. coli are present in 1 mL, we divide by 20, which gives: 100 E. coli / 20 mL = 5 E. coli per 1 mL Step 3: Now to determine how many E. coli are present in 1000 mL, we multiply by 1000: if there are 5 E. coli per 1 mL of water, multiply 5 by 1000 to find out how many E. coli there are in total. Therefore, there are 5000 E. coli in 1000 mL of this water. Reference: Basic Laboratory Calculations for Biotechnology by Lisa A. Seidman
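The arithmetic in both methods reduces to scaling the count by the volume ratio, which is easy to check:

```python
# Scale the measured count by the ratio of target volume to sample
# volume: 100 E. coli per 20 mL, scaled up to 1000 mL.
count, sample_ml, target_ml = 100, 20, 1000

ecoli = count * target_ml / sample_ml
print(ecoli)  # 5000.0
```

This is the same as Method 2 done in one step: (100 / 20) E. coli per mL times 1000 mL.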
I worked out the differential cross section for Bhabha scattering in the center of mass frame and I obtained the following: $$\left(\dfrac{d\sigma}{d\Omega}\right)_{CM} = \dfrac{\alpha^2}{2s} \left[\dfrac{t^2}{s^2} + \dfrac{s^2}{t^2} + u^2 \left(\dfrac{1}{s}+\dfrac{1}{t}\right)^2\right]$$ where $s$, $t$ and $u$ are the usual Mandelstam variables and $\alpha$ is the fine structure constant. I need the above equation to be in the form of the following expression: $$\left(\dfrac{d\sigma}{d\Omega}\right) = \dfrac{\alpha^2}{8E^2} \left[\dfrac{1 + \cos^4(\theta/2)}{\sin^4(\theta/2)} - \dfrac{2\cos^4(\theta/2)}{\sin^2(\theta/2)} + \dfrac{1 + \cos^2\theta}{2}\right]$$ I tried several times but for some reason I am not able to get it into the desired form. I used the identities $\cos 2\theta = 1-2\sin^2\theta = 2\cos^2\theta -1$. Can someone please help me out to prove this final expression? Thanks!
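For what it's worth, the two expressions do agree numerically under the standard high-energy (massless) CM kinematics $s = 4E^2$, $t = -s\sin^2(\theta/2)$, $u = -s\cos^2(\theta/2)$ — these substitutions are my assumption, since the question doesn't state them:

```python
import math

# Both forms of the Bhabha cross section, with massless CM kinematics assumed.
def lhs(theta, E, alpha=1.0):
    s = 4 * E * E
    t = -s * math.sin(theta / 2) ** 2
    u = -s * math.cos(theta / 2) ** 2
    return alpha**2 / (2 * s) * (t**2 / s**2 + s**2 / t**2
                                 + u**2 * (1 / s + 1 / t) ** 2)

def rhs(theta, E, alpha=1.0):
    c, sn = math.cos(theta / 2), math.sin(theta / 2)
    return alpha**2 / (8 * E * E) * ((1 + c**4) / sn**4 - 2 * c**4 / sn**2
                                     + (1 + math.cos(theta) ** 2) / 2)

for th in (0.3, 1.0, 2.0, 2.8):   # avoid theta = 0, where both diverge
    assert abs(lhs(th, 5.0) - rhs(th, 5.0)) < 1e-9 * abs(rhs(th, 5.0))
print("expressions agree numerically")
```

So the identity is right; the algebra hinges on $s + t + u = 0$, which turns $u^2(1/s + 1/t)^2$ into $u^4/(st)^2$.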
The Root Test The Root Test involves looking at $\displaystyle\lim_{n\to\infty}\sqrt[n]{\left|a_n\right|}$, hence the name. Notice: $\displaystyle\sqrt[n]{\left|a_n\right|}=\left|a_n\right|^{1/n}$, and you will see both notations. The Root Test, like the Ratio Test, is a test to determine absolute convergence (or not). While the Ratio Test is good to use with factorials, since there is that lovely cancellation of terms of factorials when you look at ratios, the Root Test is best used when there are terms to the $n^{th}$ power with no factorials. Example: Test the absolute/conditional convergence of the series $\displaystyle\sum_{n=1}^\infty(-1)^n\left(\frac{2n-7}{5n+2}\right)^n$. Solution: We run through our tests: Is this a geometric series? No, we have a function raised to the $n^{th}$ power, not a number (but we get a glimmer of something here, right?). Our terms are alternating, but the AST will not tell us whether or not we have absolute convergence. We don't want to think about integrating this expression. We could use the Ratio Test, but since our terms are raised to the $n^{th}$ power, we decide to try the Root Test.
You will see how nicely it works with powers: As $n\to\infty$, $\displaystyle\sqrt[n]{\left|a_n\right|}=\sqrt[n]{\left(\frac{2n-7}{5n+2}\right)^n}=\frac{2n-7}{5n+2}=\frac{2-\frac7n}{5+\frac2n}\longrightarrow\frac25$. Since $\frac25<1$, our series converges absolutely. It is important to consider our litany of convergence/divergence tests before doing work. It can save valuable time, as well as helping you begin to recognize which test to do when. On an exam, you will not know which module the series came from! An important limit you may need in order to use the Root Test You may have to compute the limit of sequences like $\sqrt[n]{n}$ or $\sqrt[n]{n^2}$. $\displaystyle\lim_{x\to\infty}\ln\left(x^{a/x}\right)=\lim_{x\to\infty}\frac ax\cdot\ln x \underset{\,\\ 0\cdot\infty}{=}\lim_{x\to\infty}\frac{\ln x}{\frac xa}\underset{ \,\\ \frac{\infty}{\infty},\text{ l'H}}{=}\lim_{x\to\infty}\frac{\frac 1x}{\frac 1a}=\lim_{x\to\infty}\frac ax=0$, so $\displaystyle\lim_{x\to\infty}x^{a/x}=e^0=1$; in particular, $\sqrt[n]{n}\to1$ and $\sqrt[n]{n^2}\to1$. Justification of the Root Test Notice that we can think of the relationship of $\sqrt[n]{\left|a_n\right|}$ to the terms of a geometric series, since $\sqrt[n]{r^n} = r$. The video gives a justification, using this idea, of why the Root Test works.
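The limits above can be sanity-checked numerically (a quick check of my own, not part of the original notes; moderate $n$ is used because the intermediate power underflows for very large $n$):

```python
# For a_n = (-1)^n ((2n-7)/(5n+2))^n, |a_n|^(1/n) should approach 2/5.
def root_term(n):
    return abs(((2 * n - 7) / (5 * n + 2)) ** n) ** (1.0 / n)

for n in (10, 100, 500):
    print(n, root_term(n))        # tends to 0.4 as n grows

# The auxiliary limit: n^(1/n) -> 1.
for n in (10, 1000, 10**6):
    print(n, n ** (1.0 / n))      # tends to 1
```

Note that `root_term(n)` just recovers the inner ratio $(2n-7)/(5n+2)$, which is exactly the cancellation the Root Test exploits.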
Consider the context-free grammars over the alphabet $\left \{ a, b, c \right \}$ given below. $S$ and $T$ are non-terminals. $G_{1}:S\rightarrow aSb \mid T, T \rightarrow cT \mid \epsilon$ $G_{2}:S\rightarrow bSa \mid T, T \rightarrow cT \mid \epsilon$ The language $L\left ( G_{1} \right )\cap L(G_{2})$ is Here both $G_1$ and $G_2$ generate CFLs, and CFLs are not closed under intersection, so the intersection need not be a CFL. But every regular language is a CFL. So how come it is a regular language? Anyone please explain this. @neenavath sindhu "CFLs are not closed under intersection" means that the intersection of 2 CFLs MAY or MAY NOT be a CFL. It does not mean that the intersection of 2 CFLs can never be a CFL. When we say a class is closed under something (like CFLs being closed under union), we mean the result is GUARANTEED to stay in the class. Under intersection, no string produced using the production $aSb$ in $G_1$ can equal a string produced using $bSa$ in $G_2$. So the only common derivations come from $S \rightarrow T$, $T \rightarrow cT \mid \epsilon$, which is nothing but $c^*$. Hence the intersection is REGULAR and INFINITE. So the answer is option (B). "Not closed" means the result of an intersection may or may not be a CFL. Answer: Option (B). We can also solve this question just by observing the productions, no need to think much: $G_1$ generates strings starting with $a$ and ending with $b$ (a compulsory condition whenever $S \rightarrow aSb$ is used) or containing only $c$'s; $G_2$ generates strings starting with $b$ and ending with $a$ (compulsory condition) or containing only $c$'s. The wrapped parts give 0 common strings, because a string starting with 'a' cannot also start with 'b'; what remains is the $c$-only part, which is regular. So option (B).
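A brute-force check of the accepted reasoning (enumerating both languages up to a small length bound; the closed forms $L(G_1)=\{a^n c^m b^n\}$ and $L(G_2)=\{b^n c^m a^n\}$ follow directly from the productions):

```python
def lang1(max_len):
    # L(G1) = { a^n c^m b^n : n, m >= 0 }, from S -> aSb | T, T -> cT | eps
    return {'a' * n + 'c' * m + 'b' * n
            for n in range(max_len + 1)
            for m in range(max_len + 1)
            if 2 * n + m <= max_len}

def lang2(max_len):
    # L(G2) = { b^n c^m a^n : n, m >= 0 }
    return {'b' * n + 'c' * m + 'a' * n
            for n in range(max_len + 1)
            for m in range(max_len + 1)
            if 2 * n + m <= max_len}

common = lang1(8) & lang2(8)
print(sorted(common, key=len))   # only strings of the form c^m survive
```

Up to length 8 the intersection is exactly $\{\epsilon, c, cc, \ldots\}$, consistent with $c^*$.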
I love this one. I really do. Quick Answer: I'd put the date at roughly 2.3 billion years ago, give or take. This is the date of the Great Oxygenation Event. It's when organisms (bacteria) began putting oxygen into the atmosphere in large quantities as a waste product of photosynthesis. The atmosphere before that had a lot of carbon dioxide, which wouldn't ... Earth's rainforests are definitely not the lungs of the world. Actually, they consume all (or most of) the oxygen they produce. The phytoplankton (seaweed and microscopic organisms) are truly the world's lungs - if we may say there's such a thing - being responsible for more than 50% of all the oxygen released into the atmosphere. So, answering your question.... According to this lovely image from NASA (article here), the source of onboard oxygen in current spacecraft is mainly water electrolysis. The hydrogen so produced is processed with carbon dioxide to reclaim some of the water and produce either solid carbon waste, or acetylene for propulsion. This isn't a 100% closed cycle, so you'll have to add more water ... This scenario is quite problematic for two main reasons: evaporation and peak wavelength. The black hole's lifetime is too short. We can make a rough estimate of the properties of the Hawking radiation coming from the black hole. First, let's start with the luminosity. Since $L\propto M^{-2}$, where $L$ is luminosity and $M$ is the mass of the black hole, ... What keeps my planet's water from irreversibly concentrating over time on the frigid wastes while the rest of the planet dries up? When ice piles up, it will exert pressure. The closer to the terminator, the less the ice. As a consequence, the pressure gradient will tend to push the ice sheet toward the terminator, where it will melt, returning water to the ...
Our known quantities are: Radius of the body: 50 metres. Density of the body: same as Earth's, 5515 kilograms per cubic metre. This is enough to calculate the acceleration due to gravity on the surface of the body. We multiply the density $\rho$ and the volume $V$ to get the mass, multiply it by Newton's universal gravitational constant $G$, and divide by ... We have already found exo-planets matching these criteria. For example HD_100777_b has a mass just slightly higher than Jupiter's and orbits its star at the same distance as the Earth orbits the Sun. (The star is a similar size to our sun but I didn't check the brightness so I don't know for sure if it's in the habitable zone). You can explore the known ... Your environment is quite similar to that in a globular cluster. At its densest, a globular cluster may see peak stellar number densities of $\sim1000$ stars per cubic parsec, which implies a mean separation of about 20,000 AU. This leads us to conclude that many, if not most, planets will be stripped away through encounters with other stars, leading to a ... Let's work out some factors. Luminosity: You gave the radius of the inner edge of the habitable zone as 1.976 AU and the outer edge as 2.808 AU. From this, we can calculate the luminosity of the star. There's an explanation of how to do this on Planetary Biology. The formulae are$$r_i=\sqrt{\frac{L_{\text{star}}}{1.1}}$$$$r_o=\sqrt{\frac{L_{\text{star}}}{... Yes. You can take binary or trinary star systems and swap one of the stars for a black hole and nothing changes in the orbital dynamics. Depending on the layout of the solar system, planets can orbit the stars, the black hole, or some mixture of the above. Some of those planets could be in the habitable zone (liquid water). And some of those planets could ... Librations. That is, the tidally locked planet is not in a perfectly circular orbit, and so the portion of the planet that is sun-facing is not constant.
This is because the rate of rotation is (extremely nearly) constant, but the rate of revolution around the sun changes due to the non-circular nature. For the Earth's Moon, this is only a few degrees. If ... From Hawking radiation? No. The Hawking radiation emitted is inversely proportional to the black hole's size. To make the black hole glow with enough light to be as bright as a star from Hawking radiation alone, it would need to be very small. The problem with very small black holes is they also have very short lifetimes due to the Hawking radiation ... It's possible, but heat generated by the Kelvin–Helmholtz mechanism may be too variable for complex life to develop solely as a result of this source of heat. This paper suggests that the temperature of Jupiter, when it first finished an initial phase of contraction, was quite high, at around 25000K. At this temperature, it would have a small habitable zone ... Yes, it is plausible, as the timescales over which a red dwarf is a blue dwarf are quite big, to the point where if your icy planet is distant enough it will thaw and potentially develop life. What matters is the placement of your planet and its size: if your icy planet is too small it won't retain an atmosphere, and if it's too far the temperature increase ... In both cases the sheer volume of necessary oxygen (and other elements used in human breathable air) would make it very difficult or impossible to build such a habitat, no? No, actually; I don't think so. The accepted atmospheric composition of Earth is 78% nitrogen, 21% oxygen, 0.9% argon, and 0.05% everything else, including carbon dioxide (about 0.... First matter first: to have a body in a spherical shape, you need to exceed a certain radius, dictated by the material. Most likely with 50 meters you will have a potato-shaped object. Moreover, to have a decent gravity you need more mass. Just as a reference, Ceres has a radius of 473 km, a mass of 0.00015 Earth masses and a surface gravity of 0.029 G....
You're basically talking about Venus. Or, more accurately, Venus if it had started out with a lot less water and CO2. Less water and CO2 to start with mean you never get the runaway greenhouse effect Venus has, leaving you a planet that's a lot like Earth, just dryer and hotter. Any rainfall you DID get would be at the higher latitudes, and that's where you ... Humans require an oxygen atmosphere to breathe, and require multicellular life to eat. They also require temperatures roughly similar to those found today. It has been shown through geologic methods that the oxygenation of the atmosphere occurred during the Precambrian era, reaching levels possibly high enough to support human life around 1.9 billion years ... Problem 1: The supernova. The first concern I have is one that Zeiss Ikon's answer discusses. To form a black hole, you need some sort of energetic event, likely a supernova. However, a supernova releases three extremely problematic sources of energy: High-energy photons, like gamma rays, that have the potential to strip away the atmosphere of any pre-... Age: The time a star spends on the main sequence is roughly inversely proportional to the luminosity, as given by the formula$$T \approx 10^{10} \text{ years} \cdot \left[ \frac{M}{M_{\odot}} \right] \cdot \left[ \frac{L_{\odot}}{L} \right] =10^{10} \text{ years} \times \left[\frac{M}{M_{\odot}} \right]^{-2.5}$$where $M$ and $L$ are the mass and ... Probably about 5-10 years minimum. Fallout would not be a major long-term problem; the timescale on which radiation due to fallout would present a serious danger would be less than 5 years. See this article which says: Radioactive material which takes longer than 24 hours to return to earth is called delayed or global fallout. Some of the delayed fallout ... Rogue planets can be kept warm. The key points can be found in the Wikipedia entry on rogue planets. Interstellar planets generate little heat nor are they heated by a star. In 1998, David J.
Stevenson theorized that some planet-sized objects adrift in the vast expanses of cold interstellar space could possibly sustain a thick atmosphere that would ... The closest feature we have on our planet that comes close to appearing like this is the Giant's Causeway. To obtain this you need to have magma intruding a rock, then cooling down to form the pillars, and the surrounding rocks being eroded away. But you want it made on scales like El Capitan: With the right combination of factors (gravity, volcanic activity) ... There is no consensus now on whether such a thing as a habitable zone really truly exists around red dwarfs. Flares: Most red dwarfs flare, doubling their luminosity in a matter of minutes. This shifts the zone with optimal temperature very much, very fast, and the planet may be in it, and literally five minutes later be outside of it. Flares also tend to throw a ... If there is a gap between the sections of the ring, it would allow all the atmosphere to spill through the gap, like so: The question really isn't one of how long of segments you need (the answer would be "all the way around the circle"), but one of how to prevent the atmosphere from spilling out the ends. Here are three suggestions: Do what Trump wants. ... Models suggest that a desert planet (that is to say, a planet with some polar surface water, but otherwise dominated by land) can remain habitable as close as ~0.75 AU from a star with luminosity of 1 Sol (Abe et al. 2011). This is only a touch further out than Venus's orbit, which has a semi-major axis of 0.723 AU. However, it is important to consider ... Size and Gravity: Here's a handy equation if you want to find the radius of a planet with the same surface gravity as Earth (or near it), but with a different density. If you want to change the size but keep the same gravity, you need to mess with the density. $$ r = {{g}\over{{{4\pi}\over{3}} \cdot G \cdot \rho}} $$ Where $r$ is the radius in meters, g is Earth ...
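The size-and-gravity relation quoted above is easy to sanity-check (my own numbers below): plugging in Earth's mean density should return roughly Earth's radius, and the same relation gives the tiny surface gravity of the 50 m body discussed earlier.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
rho = 5515.0        # Earth's mean density, kg/m^3

# r = g / ((4*pi/3) * G * rho): radius needed for surface gravity g.
r_earthlike = 9.81 / ((4 * math.pi / 3) * G * rho)
print(r_earthlike / 1000)   # ~6360 km, close to Earth's actual radius

# Conversely, g = (4*pi/3) * G * rho * r for the 50 m body:
g_small = (4 * math.pi / 3) * G * rho * 50.0
print(g_small)              # ~8e-5 m/s^2 -- negligible gravity
```

The formula assumes a uniform-density sphere, so it is only a first approximation for real planets.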
Make the axial tilt of the planet parallel to its orbital plane. Uranus does this already with an axial tilt of about 97 degrees off the orbital plane, though this isn't a great example because Uranus is just cold all the time. Also, it's a gas giant while the OP is implying a rocky planet. Torque required to keep the hot pole pointing towards the star: The ... Nitrogen (N₂): 60.4% — 78.08% = −18.48% N₂ (↓22%). It is used in the atmosphere to reduce the percentage of oxygen in the air (if there were too much O₂, our atmosphere could burn). It doesn't have any very important effect on life: N₂ is an inert gas, it can't burn and doesn't undergo any special reactions in the air. However, bacteria breathe N₂ and make amino acids, then make ... Google Surtsey. http://www.surtsey.is/pp_ens/gen_3.htm This was an undersea volcano that formed off the coast of Iceland. National Geographic had a series of articles on it. The water was shallow (130 m), so the island didn't have to build a huge thickness of land to get above the surface. Before breaking the surface, there were a lot of bubbles, floating ...
I was reading the paper titled "Primal-dual RNC approximation algorithms" by Rajagopalan and Vazirani. I have a problem understanding Lemma 4.1.1. They present a dual fitting based algorithm for weighted set cover. First let me set up the required concepts to clarify where I am having trouble. Suppose we have $n$ elements ($U$) and $m$ sets ($S$). Each set has a positive weight. Let $E_v$ denote the sets in which the element $v$ is present. Let $\beta =$max$_{v \in U}$ min$_{s \in E_v}$ weight(s). Let also $IP^*$ be the weight of an optimal set cover. It is easy to see that $IP^*\geq \beta$. Now assume we have an approximation algorithm for weighted set cover. What the paper is saying in the lemma is that you can do a pre-processing step before starting the approximation algorithm as follows. You can scan through the sets and add to the cover any sets that have weight $\leq \beta/n$. Since there are $n$ elements, the additional cost is at most $\beta$. Then they claim that since $\beta$ is a lower bound on $IP^*$, this cost is subsumed in the approximation. And this is the statement I did not understand.
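To make the definitions concrete, here is a tiny made-up instance (the sets and weights are illustrative, not from the paper) computing $\beta$ and the preprocessing threshold $\beta/n$:

```python
# Made-up weighted set cover instance.
weights = {'s1': 5.0, 's2': 2.0, 's3': 9.0}
sets    = {'s1': {1, 2}, 's2': {2, 3}, 's3': {3, 4}}
U = {1, 2, 3, 4}

# E_v: the sets containing element v.
E = {v: [s for s, elems in sets.items() if v in elems] for v in U}

# beta = max over elements of the cheapest set covering that element;
# any cover must pay at least this much for the hardest element, so beta <= IP*.
beta = max(min(weights[s] for s in E[v]) for v in U)
print(beta)                  # element 4 is only in s3, so beta = 9.0

# Preprocessing: all sets of weight <= beta/n can be added for total
# cost <= n * (beta/n) = beta <= IP*, i.e. within the optimum anyway.
n = len(U)
cheap = [s for s in sets if weights[s] <= beta / n]
print(beta / n, cheap)
```

The point of the claimed subsumption is the arithmetic in the comment: at most $n$ sets are added, each costing at most $\beta/n$, so the extra cost is at most $\beta \le IP^*$, which only changes the approximation ratio by an additive $1$.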
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Suppose there are 36 equally spaced points on the circumference of a circle. In how many ways can you choose 3 points such that no 2 of them are adjacent or diametrically opposite? Note by Souryajit Roy 3 years, 10 months ago Sort by: We shall use PIE. Step 1: The number of ways of selecting $3$ points from $36$ points is $\binom{36}{3} = 7140$. Step 2: The number of ways of selecting $3$ adjacent points is $36$. Step 3: The number of ways of selecting $2$ adjacent points and one not adjacent to them is $36\times 32 = 1152$. (Since there are $32$ ways to select the non-adjacent point.) Step 4: The number of ways of selecting two diametrically opposite points is $18$, and the number of ways of selecting a third point not adjacent to either of them is $30$ in each case. So the total number of ways in this step is $18 \times 30 = 540$.
Step 5: Number of ways (what we require) = Total $-$ number of ways of selecting $3$ adjacent points $-$ number of ways of selecting $2$ adjacent and one not adjacent to them $-$ number of ways of selecting two diametrically opposite points with the third not adjacent to either of them $= 7140-36-1152-540=5412$. I think there were 32 points instead of 36. I have given the RMO in the West Bengal region and there were 36 points in the problem (problem no 4)... Perhaps in your region they gave 32... basically the procedure for solving the problem is not affected when the number of points is even. For 32 objects you can have a look at my solution here. What can be the expected cutoff for RMO, around 55? The answer is 5400. Let the points be A, B and C. We can put A anywhere on the circle, giving 36 ways. Then, B and C cannot be on the two points next to A, or the point opposite it, leaving $\binom{32}{2} = 496$ ways to choose B and C. Of these, 30 configurations have B and C adjacent, and 16 have them opposite. We subtract these off to give 450 valid ways. For each valid configuration, there is one way to generate it starting with A at each vertex. Thus each configuration has been counted 3 times, so we divide by 3 to get the final answer: $36\times450/3 = 5400$. What was your answer?
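The two solutions above disagree (5412 vs 5400); a quick brute-force enumeration (mine, not from the thread) settles the count for 36 points:

```python
from itertools import combinations

n = 36
count = 0
for triple in combinations(range(n), 3):
    ok = True
    for x, y in combinations(triple, 2):
        d = (y - x) % n
        d = min(d, n - d)              # circular distance between the points
        if d == 1 or d == n // 2:      # adjacent, or diametrically opposite
            ok = False
            break
    if ok:
        count += 1
print(count)   # 5412, matching the inclusion-exclusion answer
```

This supports the PIE computation; the slip in the second solution is the count of diametrically opposite pairs among the 32 allowed points (3 such pairs are removed along with A, its neighbours, and its antipode, leaving 15, not 16).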
Ok so I've got a question after walking through the time dilation derivation that used 'light clocks' (think a beam of light bouncing back and forth between mirrors) to derive ##\delta t^\prime = \frac{\delta t}{\sqrt{1-\frac{v^2}{c^2}}}##. So my question is: could you derive the same equation if you had used atomic clocks instead? Don't actually do so here, just wanted to know if it would have led to the same relation. Thanks for your time!
Let your graph be $G=(V,E)$. Build a new graph $G'=(V',E')$, where each vertex in $G'$ corresponds to a pair $(u,v)$ of vertices in $G$. Add an edge $(u,v) \to (v,w)$ in $G'$ iff there is an edge $v \to w$ in $G$. The intended meaning of the vertex $(u,v)$ is that you are currently at vertex $v$ in $G$, and previously were at vertex $u$. Thus, you can now figure out what weight to put on the edge $(u,v) \to (v,w)$ -- namely, whatever weight you want to put on the edge $v \to w$ assuming you previously traversed the edge $u \to v$. Let $X' = \{(x,u) \in E : x \in X\}$ and $Y' = \{(u,y) \in E : y \in Y\}$. Now, find the shortest path in $G'$ from each vertex in $X'$ to each vertex in $Y'$, using an all-pairs shortest paths algorithm. From this you can infer the length of the shortest path in $G$ from some vertex in $X$ to some vertex in $Y$, while taking into account the fact that edges switch weights. If you use the Floyd-Warshall algorithm, the running time will be $O(|V'|^3)$, which is at most $O(|V|^6)$. As an optimization, the vertex set $V'$ can be taken to be $V' = \{(u,v) : u,v \in V, (u,v) \in E\}$. Then the running time of Floyd-Warshall in $G'$ will be at most $O(|E|^3)$. Given that your original graph $G$ is fairly small, this might already be feasible without further optimizations. If you want further optimizations, here's one more. You can add to $G'$ a special vertex $x^*$ for each $x \in X$, with an edge $x^* \to (x,u)$ in $G'$ for each $(x,u) \in E$ with the same weight as the edge $x \to u$ in $G$; and similarly add $y^*$ for each $y \in Y$, with an edge $(u,y) \to y^*$ of weight 0. Now find the shortest path from each $x^*$ to each $y^*$. This can be done by applying Dijkstra's algorithm $|X|$ times, once for each $x \in X$, to find the distance from $x^*$ to each other node in the graph. The total running time will be $O(|X| \cdot |E'| \log |V'|)$, which is $O(|X| \cdot |V| \cdot |E| \log |V|)$.
You can also achieve running time $O(|Y| \cdot |V| \cdot |E| \log |V|)$ by searching backwards (e.g., reversing the graph). Given that you have $|Y| \approx 300$, $|V| \approx 2000$, and $|E| \le |V|^2 = 4000000$, this might complete rapidly.
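A sketch of the Dijkstra-based variant described above (the names and the encoding of the weight model are my own; it assumes nonnegative weights, and each state $(u,v)$ means "at $v$, having arrived via the edge $u \to v$"):

```python
import heapq
from collections import defaultdict

def shortest_x_to_y(edges, pair_weight, first_weight, Y):
    """Shortest distance from X to Y when the weight of edge v->w depends
    on the previously traversed edge u->v.

    edges:        iterable of (u, v) pairs of the original graph G
    pair_weight:  dict (u, v, w) -> weight of v->w after arriving via u->v
    first_weight: dict (x, u)    -> weight of the first edge x->u, x in X
                  (this plays the role of the x* source vertices)
    """
    out = defaultdict(list)
    for u, v in edges:
        out[u].append(v)

    # Dijkstra over states (u, v), i.e. the vertices of G'.
    pq = [(w, e) for e, w in first_weight.items()]
    heapq.heapify(pq)
    done = set()
    while pq:
        d, (u, v) = heapq.heappop(pq)
        if (u, v) in done:
            continue
        done.add((u, v))
        if v in Y:                 # first settled state ending in Y is optimal
            return d
        for w in out[v]:
            if (v, w) not in done:
                heapq.heappush(pq, (d + pair_weight[(u, v, w)], (v, w)))
    return None                    # Y unreachable from X

# Tiny example: a->c directly costs 10, but a->b then b->c costs 1 + 1.
edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]
first = {('a', 'b'): 1, ('a', 'c'): 10}
pairs = {('a', 'b', 'c'): 1}
best = shortest_x_to_y(edges, pairs, first, Y={'c'})
print(best)   # 2
```

Stopping at the first settled state ending in $Y$ is safe because Dijkstra settles states in nondecreasing distance order.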
LaTeX forum ⇒ Math & Science ⇒ adding tabular in equation/math environment Information and discussion about LaTeX's math and science related features (e.g. formulas, graphs). Can you help me with the following expression please? \begin{equation} F=\left\lbrace \begin{tabular}{ll} $F_{app}\mu_{k}\sin (v) $ & $if \quad v>0$, \\ $ F_{app}$ & if$ \quad v=0 \quad and \quad F_{app}\leq F_{app} \mu_{s}$, \end{tabular} \right %\label{RRRR} \end{equation} Hi! There's a math equivalent to tabular, it's array. Here is how I would do it: I defined an operator, since that index looks like an operator (that's usually written in upright shape) and not like a product of a, p, and p. Furthermore, I treated text (if, and) as text. What do you think? Stefan
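Stefan's actual code did not survive the page extraction; a sketch of what the array-based version he describes might look like (the operator name `\app` and the exact condition are reconstructed guesses from the question, and the missing `.` after `\right` is also fixed):

```latex
\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator{\app}{app}  % upright "app" subscript, as an operator
\begin{document}
\begin{equation}
F = \left\lbrace
\begin{array}{ll}
  F_{\app}\mu_{k}\sin(v) & \text{if } v > 0,\\[2pt]
  F_{\app}               & \text{if } v = 0 \text{ and } F_{\app} \leq F_{\app}\mu_{s},
\end{array}
\right.
\label{RRRR}
\end{equation}
\end{document}
```

With amsmath loaded, the `cases` environment would also work and supplies the brace automatically; either way, the text-mode `tabular` inside math is avoided.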
Let $f: \mathbb R \to \mathbb R$ be a continuous function with a continuous derivative. In short $f \in C^1$. We know that $0<c\leq f'(x) \leq d < \infty$. We want to prove that $\exists! x_0 \in \mathbb R: f(x_0)=0$. Disclaimer: I am fully aware that there is a simple proof involving Rolle's theorem. This is not the proof I am after. I am trying to somehow invoke the Banach Fixed Point Theorem in order to show that this function has a single root. Something I tried, to give you a general idea of what I'm looking for: Define $g(x) = \frac{1}{d}(f(x)-cx)$. We have that $0 \leq g'(x) \leq 1-\frac{c}{d}<1$. Now, since $f$ is $C^1$, we also have that $g$ is $C^1$. And more specifically, $g$ is continuous and with continuous derivative on all segments $[x_1,x_2]$ of the real line. So the Lagrange mean value theorem applies: $\forall x_1,x_2 \in \mathbb R\ \exists \xi \in \mathbb R: g(x_1)-g(x_2)=g'(\xi)(x_1-x_2)$. Taking absolute values we get $|g(x_1)-g(x_2)| = |g'(\xi)||x_1-x_2| \leq |1-\frac{c}{d}||x_1-x_2|$. And so we have for all $x_1,x_2$: $|g(x_1)-g(x_2)| \leq |1-\frac{c}{d}||x_1-x_2|$, which means $g$ is a contraction. So from the Banach Fixed Point Theorem, $g$ admits a unique fixed point $g(x_0)=x_0$. The problem is: $g(x_0)=x_0$ does not imply that $f(x_0)=0$. Sure, it admits a fixed point but that does not help us prove that $f$ has a unique zero. If $g(x) = f(x)+x$ then that would have solved our problem, but no such luck. How can I use the Banach fixed point theorem in a similar way to prove $f$ has a unique zero?
Let $f:\mathbb{R}^n \to \mathbb{R}$ be continuously differentiable, $\Omega \subseteq \mathbb{R}^n$ an open and bounded set and $f = 0$ on $\partial \Omega$. Show that then there exists an $x \in \Omega$ such that $Df(x) = 0$. I don't understand what to show; isn't $Df(x) = 0$ a direct consequence of $f = 0$? An attempt: Because $x$ lies in the interior, $\exists \epsilon \gt 0$ such that the partial function $$g_j: \,]x_j - \epsilon, x_j + \epsilon[ \,\to \mathbb{R}, \quad a \mapsto f(x_1,...,x_{j-1},a,x_{j+1},...,x_d)$$ is well defined. It has an extremum at $a = x_j$, so $\frac{\partial f}{\partial x_j}(x) = 0$. This is the case for all $j \in \{1,...,d\}$, and therefore $$Df(x) = \left( \frac{\partial f}{\partial x_1}(x)\; \frac{\partial f}{\partial x_2}(x)\; ...\; \frac{\partial f}{\partial x_d}(x)\right) = 0$$ Is this attempt correct?
Here's a problem in a laser spectroscopy class I have been trying to figure out for quite some time: A typical dielectric mirror has a damage threshold of $I_\textrm{threshold} = 5\times 10^8~{\rm W/cm^2}$ for 20 nanosecond pulses. What is the smallest beam diameter ($2\omega_0$) that can be used with a 1Hz beam of 3W? Assume a gaussian temporal and spatial intensity distribution, and that the FWHM is 20ns. First, some relevant equations. Gaussian temporal intensity distribution: $$ I(t) = I_\textrm{peak}e^{-4\ln2 \frac{t^2}{\tau^2}} $$ Gaussian spatial intensity distribution: $$ I(r) = \frac{2P}{\pi \omega^2}e^{-\frac{2r^2}{\omega^2}} $$ with $$ \omega (z) = \omega_0 \sqrt{1+\left( \frac{\lambda (z-z_0)}{\pi \omega_0^2} \right)^2} $$ The way I thought of going about this problem, was to calculate the total intensity (energy) over the spatial distribution: $$ E_\textrm{spatial} = \int_{0}^{\infty} I(r)~\mathrm dr $$ And then calculate the total energy over the duration of the pulse $$ E_\textrm{pulse} = \int_{-\infty}^{\infty} I(t)~\mathrm dt $$ And use these to calculate the total spatial energy during a pulse for a circular area $A_\textrm{circle} = \pi r^2$ $$ \frac{I_\textrm{total}}{\pi r^2} = I_\textrm{threshold} $$ which when solved for the double of the diameter gives $$ 2d = \sqrt{\frac{8 I_\textrm{total}}{\pi I_\textrm{threshold}}} $$ However, I don't know what the $\omega$ is (should I also integrate over $\omega\,?$), and I am not quite sure that my way of doing this is correct. The question asks specifically for $2\omega_0$, not some $d$, so I feel I should use the definition of $\omega$ for something. I would appreciate some guidance on how to solve the problem.
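One way to organize the numbers (a sketch under my own assumptions, not necessarily the intended solution): get the pulse energy from the average power and repetition rate, get the peak power from the temporal Gaussian, and require the on-axis peak intensity $2P_\textrm{peak}/(\pi\omega_0^2)$ to stay at the threshold.

```python
import math

P_avg = 3.0          # W, average power
f_rep = 1.0          # Hz, repetition rate
tau = 20e-9          # s, FWHM of the Gaussian pulse
I_thr = 5e8          # W/cm^2, damage threshold

E_pulse = P_avg / f_rep                       # J per pulse
# For I(t) = I_peak * exp(-4 ln2 t^2/tau^2), the time integral is
# I_peak * tau * sqrt(pi / (4 ln 2)), so the peak power is:
P_peak = E_pulse / (tau * math.sqrt(math.pi / (4 * math.log(2))))
# Gaussian beam: on-axis intensity is 2P/(pi w0^2); require <= I_thr.
w0 = math.sqrt(2 * P_peak / (math.pi * I_thr))  # cm
print(2 * w0)        # smallest beam diameter 2*w0 ~ 0.85 cm
```

Under these assumptions the worst case is the beam centre at the pulse peak, which is why no spatial or temporal integral is needed at all.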
So, the question is: Find $$ \lim_{x\to 2} \left[\frac{1}{x(x-2)^2}-\frac{1}{x^2-3x+2}\right]$$ What I've tried: It's quite easy to simplify the limit and get $$ -\lim_{x\to 2} \left(\frac{x^2-x+1}{(x-2)^2(x-1)x}\right)$$ which upon putting in the value yields $$ \left(\frac{3}{0}\right)=-\infty$$ But the answer in the book is $+\infty$. If I try to find the value of the limit using the L.H.L. and R.H.L., then the values come out to be $-\infty$ and $+\infty$ respectively, indicating that the limit does not exist in the first place. Where am I going wrong?
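A quick symbolic sanity check (appended here as a check on the book's answer; SymPy is an assumption, not part of the original post) confirms that the two-sided limit is $+\infty$, which suggests the slip is in the algebraic simplification rather than in the book:

```python
import sympy as sp

x = sp.symbols('x')
expr = 1/(x*(x - 2)**2) - 1/(x**2 - 3*x + 2)

# Combine over a common denominator to inspect the simplified form
print(sp.cancel(sp.together(expr)))

# Limit from the right and from the left
print(sp.limit(expr, x, 2, dir='+'))
print(sp.limit(expr, x, 2, dir='-'))
```

Since the combined denominator contains the factor $(x-2)^2 \geq 0$, the sign near $x = 2$ is controlled by the numerator alone, so the one-sided limits agree.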
Difference between revisions of "Abstract.tex"
Revision as of 21:19, 24 June 2009
\begin{abstract}The Hales--Jewett theorem asserts that for every $r$ and every $k$ there exists $n$ such that every $r$-colouring of the $n$-dimensional grid $\{1, \dotsc, k\}^n$ contains a combinatorial line.
This result is a generalization of van der Waerden's theorem, and it is one of the fundamental results of Ramsey theory. The theorem of van der Waerden has a famous density version, conjectured by Erd\H os and Tur\'an in 1936, proved by Szemer\'edi in 1975 and given a different proof by Furstenberg in 1977. The Hales--Jewett theorem has a density version as well, proved by Furstenberg and Katznelson in 1991 by means of a significant extension of the ergodic techniques that had been pioneered by Furstenberg in his proof of Szemer\'edi's theorem. In this paper, we give the first elementary proof of the theorem of Furstenberg and Katznelson, and the first to provide a quantitative bound on how large $n$ needs to be. In particular, we show that a subset of $[3]^n$ of density $\delta$ contains a combinatorial line if $n \geq 2 \upuparrows O(1/\delta^3)$. Our proof is surprisingly\noteryan{``reasonably'', maybe} simple: indeed, it gives what is probably the simplest known proof of Szemer\'edi's theorem. \end{abstract}
Last time we revisited Robin’s theorem, which says that 5040 being the largest counterexample to the bound \[ \frac{\sigma(n)}{n \log(\log(n))} < e^{\gamma} = 1.78107... \] is equivalent to the Riemann hypothesis. This time we look at Dedekind’s Psi function \[ \Psi(n) = n \prod_{p | n}\left(1 + \frac{1}{p}\right) \] where $p$ runs over the prime divisors of $n$. It is series A001615 in the online encyclopedia of integer sequences and it starts off with 1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24, 14, 24, 24, 24, 18, 36, 20, 36, 32, 36, 24, 48, 30, 42, 36, 48, 30, 72, 32, 48, 48, 54, 48, … and here’s a plot of its first 1000 values. To understand this behaviour it is best to focus on the ‘slopes’ $\frac{\Psi(n)}{n}=\prod_{p|n}(1+\frac{1}{p})$. So, the red dots of minimal ‘slope’ $\approx 1$ correspond to the prime numbers, and the ‘outliers’ have a maximal number of distinct small prime divisors. Look at $210 = 2 \times 3 \times 5 \times 7$ and its multiples $420, 630$ and $840$ in the picture. For this reason the primorial numbers, which are the products of the first $k$ prime numbers, play a special role. This is series A002110, starting off with 1, 2, 6, 30, 210, 2310, 30030, 510510, 9699690, 223092870, … In their paper Extreme values of the Dedekind $\Psi$ function, Patrick Solé and Michel Planat show that the primorials play a similar role for Dedekind’s Psi as the superabundant numbers play for the sum-of-divisors function $\sigma(n)$. That is, if $N_k$ is the $k$-th primorial, then for all $n < N_k$ we have that the 'slope' at $n$ is strictly below that of $N_k$ \[ \frac{\Psi(n)}{n} < \frac{\Psi(N_k)}{N_k} \] which follows immediately from the fact that any $n < N_k$ can have at most $k-1$ distinct prime factors and $p \mapsto 1 + \frac{1}{p}$ is a strictly decreasing function.
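The definition translates directly into a few lines of code; a small sketch (using SymPy's `primefactors`, an assumption) that reproduces the start of A001615:

```python
from sympy import primefactors

def dedekind_psi(n):
    """Dedekind's Psi: n * prod over primes p | n of (1 + 1/p)."""
    result = n
    for p in primefactors(n):
        # result stays an integer: p divides it at every step
        result = result * (p + 1) // p
    return result

print([dedekind_psi(n) for n in range(1, 16)])
# [1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24, 14, 24, 24]
```

Note the nice coincidence at the primorial $210$: $\Psi(210) = 576$, and the 'slope' $576/210$ is the maximum among all $n \leq 210$.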
Another easy, but nice, observation is that for all $n$ we have the inequalities \[ n^2 > \phi(n) \times \Psi(n) > \frac{n^2}{\zeta(2)} \] where $\phi(n)$ is Euler’s totient function \[ \phi(n) = n \prod_{p | n}\left(1 - \frac{1}{p}\right) \] This follows at once from the definitions of $\phi(n)$ and $\Psi(n)$: \[ \phi(n) \times \Psi(n) = n^2 \prod_{p|n}\left(1 - \frac{1}{p^2}\right) > n^2 \prod_{p~\text{prime}} \left(1 - \frac{1}{p^2}\right) = \frac{n^2}{\zeta(2)} \] (and the upper bound $n^2$ holds because each factor $1 - \frac{1}{p^2}$ is smaller than $1$). But now it starts getting interesting. In the proof of his theorem, Guy Robin used a result of his Ph.D. advisor Jean-Louis Nicolas, known as Nicolas’ criterion for the Riemann hypothesis: RH is true if and only if for all $k$ we have the inequality for the $k$-th primorial number $N_k$ \[ \frac{N_k}{\phi(N_k) \log(\log(N_k))} > e^{\gamma} \] From the above lower bound on $\phi(n) \times \Psi(n)$ we have for $n=N_k$ that \[ \frac{\Psi(N_k)}{N_k} > \frac{N_k}{\phi(N_k) \zeta(2)} \] and combining this with Nicolas’ criterion we get \[ \frac{\Psi(N_k)}{N_k \log(\log(N_k))} > \frac{N_k}{\phi(N_k) \log(\log(N_k)) \zeta(2)} > \frac{e^{\gamma}}{\zeta(2)} \approx 1.08… \] In fact, Patrick Solé and Michel Planat prove in their paper Extreme values of the Dedekind $\Psi$ function that RH is equivalent to the lower bound \[ \frac{\Psi(N_k)}{N_k \log(\log(N_k))} > \frac{e^{\gamma}}{\zeta(2)} \] holding for all $k \geq 3$. The function $\Psi(n)$ also gives us the number of tiles needed in the Dedekind tessellation to describe the fundamental domain of the action of $\Gamma_0(n)$ on the upper half-plane by Moebius transformations. When $n=6$ we have $\Psi(6)=12$ and we can view its fundamental domain via these Sage commands: G=Gamma0(6) FareySymbol(G).fundamental_domain() giving us the 24 black or white tiles (note that these tiles are each fundamental domains of the extended modular group, so we have twice as many of them as for subgroups of the modular group). But there are plenty of other, seemingly unrelated, topics where $\Psi(n)$ appears.
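The double inequality $n^2 > \phi(n)\Psi(n) > n^2/\zeta(2)$ is easy to spot-check numerically; a sketch, assuming SymPy for `totient` and `primefactors`:

```python
from math import pi
from sympy import totient, primefactors

def psi(n):
    """Dedekind's Psi: n * prod over p | n of (1 + 1/p)."""
    r = n
    for p in primefactors(n):
        r = r * (p + 1) // p
    return r

zeta2 = pi ** 2 / 6   # zeta(2)

# Both inequalities are strict for every n >= 2 (n = 1 gives equality on the left)
ok = all(n * n > int(totient(n)) * psi(n) > n * n / zeta2 for n in range(2, 500))
print(ok)  # True
```

The margin in the lower bound is exactly $\prod_{p \nmid n}(1 - 1/p^2)^{-1}$, which is why numbers with many small prime factors come closest to $n^2/\zeta(2)$.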
To name just a few: The number of points on the projective line $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$. The number of lattices at hyperdistance $n$ in Conway’s big picture. The number of admissible maximal commuting sets of operators in the Pauli group for $n$ qudits. And there are explicit natural one-to-one correspondences between all these manifestations of $\Psi(n)$, tbc.
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather than ace the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, which is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
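The shortcut discussed above can also be checked symbolically without writing anything out by hand; a sketch (the pair encoding $(a, b)$ for $a + b\sqrt{\delta}$ and the use of SymPy are my own choices, not the chat participants'):

```python
import sympy as sp

a, b, c, d, e, f, delta = sp.symbols('a b c d e f delta')

def mul(x, y):
    """Multiplication rule from the chat: (p + q*sqrt(d)) * (r + s*sqrt(d))."""
    (p, q), (r, s) = x, y
    return (p*r + q*s*delta, q*r + p*s)

lhs = mul(mul((a, b), (c, d)), (e, f))   # (alpha ⊗ beta) ⊗ gamma
rhs = mul((a, b), mul((c, d), (e, f)))   # alpha ⊗ (beta ⊗ gamma)
print([sp.expand(l - r) for l, r in zip(lhs, rhs)])  # [0, 0] -- associativity holds
```

This is exactly the "use the ring properties of $\Bbb{Q}$" argument: the expansion reduces both sides to the same polynomial in $a,\dots,f,\delta$.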
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or which imply CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I have tried to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles managed to find counterexamples escaping every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$. Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
In this problem, you are asked to compute the gravitational potential from a uniform density spherical shell of total mass \(m\) and radius \(R\). In general, the gravitational potential of a massive surface \(S\) is given by \[ V(a, b, c) = - G \iint_S \frac{\rho \, dS}{\sqrt{(x - a)^2 + (y - b)^2 + (z - c)^2}} \] where \(\rho\) is the density (mass per area) of the surface. In this case, \(\rho = m / (4 \pi R^2)\). The first part of the question asks you to reduce the problem to the case where \((a, b, c) = (0, 0, r)\). We can do this by spherical symmetry: given any \((a,b,c)\) we can rotate the entire picture so that the point \((a, b, c)\) lies on the positive \(z\) axis. That is, \((a, b, c) = (0, 0, r)\) with \(r = \sqrt{a^2 + b^2 + c^2}\). Applying this rotation doesn’t change the gravitational potential because the rotated sphere is indistinguishable from the original sphere. The second part of the question asks you to set up the integral for \(V(0, 0, r)\) in spherical coordinates. In spherical coordinates (with radius \(R\)) we have \[ x = R \cos \theta \sin \varphi, \quad y = R \sin \theta \sin \varphi, \quad z = R \cos \varphi. \] You should verify that the normal vector for this parametrization satisfies \(|n| = R^2 \sin \varphi\). Notice that the denominator of the integral defining the gravitational potential is just the distance from \((a, b, c)\) to \((x, y, z)\) (where the latter is a point on the surface). In the case where \((a, b, c) = (0, 0, r)\) and \(S\) is the sphere of radius \(R\) centered at the origin, we can compute this distance using the law of cosines as indicated in the following figure: In the figure, \(d\) is the distance from the point \(P = (0,0,r)\) to a point \(Q = (x, y, z)\) lying on the sphere — that is, \(d\) is the denominator of the integral we’re trying to compute. By the law of cosines, \[ d^2 = r^2 + R^2 - 2 R r \cos \varphi.
\] Therefore, the potential is given by \[ V(0, 0, r) = \frac{- G m}{4 \pi} \int_0^{\pi} \int_{0}^{2 \pi} \frac{\sin \varphi \, d\theta \, d\varphi}{\sqrt{R^2 + r^2 - 2 R r \cos \varphi}}. \] The book suggests using the substitution \(u = R^2 + r^2 - 2 R r \cos \varphi\) to compute this integral.
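The result of the substitution can be checked numerically; a minimal pure-Python sketch with the assumed normalization \(G = m = 1\) and shell radius \(R = 1\):

```python
import math

def shell_potential(r, R=1.0, n=20000):
    """Midpoint-rule evaluation of
    V(0,0,r) = -(1/2) * integral_0^pi sin(phi) dphi / sqrt(R^2 + r^2 - 2 R r cos(phi)),
    after the theta integral contributes its factor of 2*pi."""
    h = math.pi / n
    total = sum(
        math.sin((k + 0.5) * h)
        / math.sqrt(R * R + r * r - 2 * R * r * math.cos((k + 0.5) * h))
        for k in range(n)
    )
    return -0.5 * total * h

print(shell_potential(2.0))   # outside the shell: matches a point mass, -1/r = -0.5
print(shell_potential(0.5))   # inside the shell: constant, -1/R = -1.0
```

This reproduces the classic conclusion of the exercise: outside the shell the potential is that of a point mass at the center, and inside the shell it is constant.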
We have already seen that a convenient way to describe a line in three dimensions is to provide a vector that "points to'' every point on the line as a parameter \(t\) varies, like $$\langle 1,2,3\rangle+t\langle 1,-2,2\rangle =\langle 1+t,2-2t,3+2t\rangle.$$ Except that this gives a particularly simple geometric object, there is nothing special about the individual functions of \(t\) that make up the coordinates of this vector---any vector with a parameter, like \(\langle f(t),g(t),h(t)\rangle\), will describe some curve in three dimensions as \(t\) varies through all possible values. Example 13.1.1 Describe the curves \(\langle \cos t,\sin t,0\rangle\), \(\langle \cos t,\sin t,t\rangle\), and \(\langle \cos t,\sin t,2t\rangle\). Solution As \(t\) varies, the first two coordinates in all three functions trace out the points on the unit circle, starting with \((1,0)\) when \(t=0\) and proceeding counter-clockwise around the circle as \(t\) increases. In the first case, the \(z\) coordinate is always 0, so this describes precisely the unit circle in the \(x\)-\(y\) plane. In the second case, the \(x\) and \(y\) coordinates still describe a circle, but now the \(z\) coordinate varies, so that the height of the curve matches the value of \(t\). When \(t=\pi\), for example, the resulting vector is \(\langle -1,0,\pi\rangle\). A bit of thought should convince you that the result is a helix. In the third vector, the \(z\) coordinate varies twice as fast as the parameter \(t\), so we get a stretched out helix. Both are shown in figure 13.1.1. On the left is the first helix, shown for \(t\) between 0 and \(4\pi\); on the right is the second helix, shown for \(t\) between 0 and \(2\pi\). Both start and end at the same point, but the first helix takes two full "turns'' to get there, because its \(z\) coordinate grows more slowly. Figure 13.1.1. Two helixes. 
A vector expression of the form \(\langle f(t),g(t),h(t)\rangle\) is called a vector function; it is a function from the real numbers \(\mathbb{R}\) to the set of all three-dimensional vectors. We can alternately think of it as three separate functions, \(x=f(t)\), \(y=g(t)\), and \(z=h(t)\), that describe points in space. In this case we usually refer to the set of equations as parametric equations for the curve, just as for a line. While the parameter \(t\) in a vector function might represent any one of a number of physical quantities, or be simply a "pure number'', it is often convenient and useful to think of \(t\) as representing time. The vector function then tells you where in space a particular object is at any time. Vector functions can be difficult to understand, that is, difficult to picture. When available, computer software can be very helpful. When working by hand, one useful approach is to consider the "projections'' of the curve onto the three standard coordinate planes. We have already done this in part: in example 13.1.1 we noted that all three curves project to a circle in the \(x\)-\(y\) plane, since \(\langle \cos t,\sin t\rangle\) is a two dimensional vector function for the unit circle. Example 13.1.2 Graph the projections of \(\langle \cos t,\sin t,2t\rangle\) onto the \(x\)-\(z\) plane and the \(y\)-\(z\) plane. Solution The two dimensional vector function for the projection onto the \(x\)-\(z\) plane is \(\langle \cos t, 2t\rangle\), or in parametric form, \(x=\cos t\), \(z=2t\). By eliminating \(t\) we get the equation \(x=\cos(z/2)\), the familiar curve shown on the left in figure 13.1.2. For the projection onto the \(y\)-\(z\) plane, we start with the vector function \(\langle \sin t, 2t\rangle\), which is the same as \(y=\sin t\), \(z=2t\). Eliminating \(t\) gives \(y=\sin(z/2)\), as shown on the right in figure 13.1.2. Figure 13.1.2.
The projections of \(\langle \cos t,\sin t,2t\rangle\) onto the \(x\)-\(z\) and \(y\)-\(z\) planes.
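The eliminations in example 13.1.2 can be spot-checked numerically; a small sketch sampling points along the helix:

```python
import math

# Points on the helix <cos t, sin t, 2t>: after eliminating t, the
# x-z projection should satisfy x = cos(z/2) and the y-z projection y = sin(z/2).
for k in range(200):
    t = k / 10
    x, y, z = math.cos(t), math.sin(t), 2 * t
    assert abs(x - math.cos(z / 2)) < 1e-12
    assert abs(y - math.sin(z / 2)) < 1e-12
print("projections check out: x = cos(z/2) and y = sin(z/2)")
```

Since \(z = 2t\) exactly, substituting \(t = z/2\) back in is an identity, which is all the check confirms.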
If we take neutron star material at say a density of $\sim 10^{17}$ kg/m$^{3}$ the neutrons have an internal kinetic energy density of $3 \times 10^{32}$ J/m$^{3}$. This is calculated by multiplying the number density of the neutrons $n_n$ by $3p_{f}^2/10m_n$, the average KE per fermion in a non-relativistically degenerate gas, where $p_f = (3n_n/8\pi)^{1/3}h$ is the Fermi momentum. So even in a teaspoonful (say 5 ml), there is $1.5\times10^{27}$ J of kinetic energy (more than the Sun emits in a second, or a billion or so atom bombs) and this will be released instantaneously. The energy is in the form of around $10^{38}$ neutrons travelling at around 0.1-0.2$c$. So roughly speaking it is like half the neutrons (about 250 million tonnes) travelling at 0.1$c$ ploughing into the Earth. If I have done my Maths right, that is roughly equivalent to a 40km radius near-earth asteroid hitting the Earth at 30 km/s. So, falling through the Earth is not the issue - vapourising a significant chunk of it is. Note that the beta decay of the free neutrons that dominate the neutron material is also energetic, but a slow process. On these 10 minute timescales, the neutrons could have exploded to a radius of a tenth of an au.
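A rough recomputation of the quoted numbers (rounded constants, so agreement only to within a factor of order unity is expected):

```python
h = 6.626e-34     # Planck constant, J s
m_n = 1.675e-27   # neutron mass, kg
rho = 1e17        # assumed density, kg/m^3
pi = 3.14159265

n_n = rho / m_n                            # neutron number density, 1/m^3
p_f = h * (3 * n_n / (8 * pi)) ** (1 / 3)  # Fermi momentum
u = n_n * 3 * p_f ** 2 / (10 * m_n)        # kinetic energy density, J/m^3

print(f"energy density:  {u:.2e} J/m^3")         # ~10^32 J/m^3
print(f"teaspoon (5 ml): {u * 5e-6:.2e} J")      # ~10^27 J
```

Both figures land at the same order of magnitude as those quoted above, which is the level of precision the argument needs.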
If an electron has spin and volume, then a point on its surface is rotating at constant speed, according to the angular momentum defined by the Planck constant. If this electron accelerates, then this point on the surface has to add its rotational velocity to the translational velocity of the electron, but this sum must not reach the speed of light, so as not to contradict special relativity. So it seems its spin has to slow down in the reference frame that is not moving with the electron. Is that right? I think there are two questions that are mixed together here: a question about the nature of quantum-mechanical spin, and a question about how to deal with the relative motion of subsystems when you have a system moving very close to the relativistic limit of $c$. First, spin. It's very tempting to think of an electron as a "little ball of stuff," because macroscopic bits of matter come in discrete shapes that have surfaces, and because the people who illustrate textbooks feel like they have to have some picture of an electron, and choose a little ball. But those appealing models are wrong. We don't have any evidence that an electron has a surface or an interior, the way that a water droplet or a dust mote or even a nucleus has. (The nucleus is an interesting case because nucleons participate in interactions that electrons ignore, but we won't go there for now.) The modern picture of an electron is as a quantized disturbance in a "field," where a "field" is a continuous property of spacetime. When one applies conservation laws to interactions with the electron field, it becomes parsimonious to talk about these disturbances as being associated with an intrinsic mass, charge, and angular momentum --- the same mass, charge, and angular momentum that one ascribes to the electron in the little-ball model. But little balls have an intrinsic size parameter, which the electron doesn't appear to have. If that little paragraph doesn't satisfy you, I'm not sure I can do better.
Usually we tell people that quantum-mechanical spin is like a spinning little ball, but different, and if people press the issue we sign them up for a graduate class in QFT. But let's think about the relativity side of your question, too. Here's an example from accelerator physics. Suppose you inject a group of ultra-relativistic electrons into an accelerator. (I like to use the CEBAF accelerator, where the electrons are injected into the accelerator with $\gamma = (1-v^2/c^2)^{-1/2} \approx 100$ and exit with $\gamma \approx 20\,000$.) These electrons are traveling at speeds upwards of $0.9999c$, in bunches that are about 0.3 mm long. (Usually the bunch length is measured in how many picoseconds it takes for a bunch to pass a point on the accelerator.) The part of this that's relevant to your question is what happens to those little bunches of electrons as they spend a microsecond or so traveling around the accelerator. In their rest frame, the electrons in each little bunch think they are at rest and surrounded by other electrons --- whom they hate, because they all have the same sign of electric charge and repel each other. So without special focusing magnets which accelerate the front and rear of each bunch differently, the electron bunches would spread out as the beam travels: some would go faster or slower than the average, just like the surfaces on your imaginary spinning ball. How can you have velocity dispersion in a system that's traveling at a speed experimentally indistinguishable from $c$? It works because of relativistic velocity addition. If the bunch is moving at speed $u$ relative to me, and the fastest electron in the bunch is moving at speed $v$ relative to its friends, then my measurement of the speed of the fastest electron in the bunch is $$ w = \frac{u+v}{1 + uv/c^2} $$ Part of any first course on relativity is playing with this formula to convince yourself that, if the speeds $u$ and $v$ are less than the limit $c$, then so is $w$.
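The "playing with this formula" suggested above takes only a few lines; a quick sketch:

```python
import math

c = 299792458.0  # speed of light, m/s

def add_velocities(u, v):
    """Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / c ** 2)

# Even stacking a half-c boost on a 0.9999c bunch stays below c:
print(add_velocities(0.9999 * c, 0.5 * c) / c)  # just under 1
# The textbook example: 0.5c + 0.5c gives 0.8c, not c
print(add_velocities(0.5 * c, 0.5 * c) / c)     # 0.8 (to rounding)
```

No choice of sub-luminal `u` and `v` pushes the result past `c`, which is the point of the exercise.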
You can't constrain an object's motion in its rest frame just by looking at it from a reference frame that's moving very fast. I'm pretty sure that was the main concern in your question. Presently, in the standard model of particle physics, spin is a necessary adjunct for keeping conservation of angular momentum within the framework of quantum mechanics. The historical survey given by Rob is the way it was discovered, interaction by interaction. In all particle interactions there will be angular momentum missing where fermions are taking part (see the table of particles) if a fixed spin is not assigned to the fermions (defined by having half-integral spin). The way spin has been historically assigned conserves angular momentum in all quantum mechanical interactions. This has not been falsified by any data up to now. Note that conservation laws hold in all inertial frames of special relativity, so the speed the particle is going makes no difference to the value of spin. There is no rotational velocity associated with the spin assignment; it is just a number needed to conserve angular momentum in quantum mechanical frameworks.
Formula 1: Cube – A cube is a three-dimensional figure with six square faces which meet each other at right angles. It has eight vertices and twelve edges. Let each edge of a cube be of length \(a\). Then, Volume = \(a^3\) cubic units Surface area = 6\(a^2\) sq. units Diagonal = \(\sqrt{3}a\) units Example 1 The diagonal of a cube is \(6\sqrt{3}\) cm. Find its volume and surface area. Solution: Let the edge of the cube be \(a\). \(\sqrt{3}a\) = \(6\sqrt{3}\) ⇒ a = 6. So, Volume = \(a^{3}\) = (6 x 6 x 6) \(cm^{3}\) = 216 \(cm^{3}\). Surface area = 6\(a^{2}\) = (6 x 36) \(cm^{2}\) = 216 \(cm^{2}\). Example 2 The surface area of a cube is 1734 sq. cm. Find its volume. Solution: Let the edge of the cube be a. Then, 6\(a^{2}\) = 1734 ⇒ \(a^{2}\) = 289 ⇒ a = 17 cm. ∴ Volume = \(a^{3}\) = \((17)^{3} cm^{3}\) = 4913 \(cm^{3}\). Formula 2: Cuboid (rectangular parallelepiped) – A solid body having six rectangular faces is called a cuboid. (or) A parallelepiped whose faces are rectangles is called a rectangular parallelepiped or cuboid. Let l – length, b – breadth, h – height. Then, Volume = (l x b x h) cubic units Surface area = 2(lb + bh + lh) sq. units Diagonal = \(\sqrt{l^2 + b^2 + h^2}\) units Example 1 Find the volume and surface area of a cuboid 16 m long, 14 m broad and 7 m high. Solution: Volume = (16 x 14 x 7) \(m^{3}\) = 1568 \(m^{3}\). Surface area = [2 (16 x 14 + 14 x 7 + 16 x 7)] \(m^{2}\) = 868 \(m^{2}\). Example 2 Find the length of the longest pole that can be placed in a room 12 m long, 8 m broad and 9 m high. Solution: Length of longest pole = length of the diagonal of the room = \(\sqrt{(12)^{2} + 8^{2} + 9^{2}}\) = \(\sqrt{289}\) = 17 m. Formula 3: Cylinder – A solid geometrical figure with straight parallel sides and a circular or oval cross section is called a cylinder. Let radius of base = r and height (or length) = h. Then, Volume = \(\pi r^2 h \) cubic units Curved Surface area = 2\(\pi r h\) sq.
units, Total surface area = (\(2\pi rh + 2\pi r^2\)) = 2\(\pi r(h + r)\) sq. units. Example 1: Find the volume, curved surface area and the total surface area of a cylinder with diameter of base 7 cm and height 40 cm. Solution: Volume = \(\pi r^{2}h\) = (\(\frac{22}{7}\) x \(\frac{7}{2}\) x \(\frac{7}{2}\) x 40) \(cm^{3}\) = 1540 \(cm^{3}\). Curved surface area = \(2\pi rh\) = (2 x \(\frac{22}{7}\) x \(\frac{7}{2}\) x 40) \(cm^{2}\) = 880 \(cm^{2}\). Total surface area = \(2\pi rh + 2\pi r^{2}\) = \(2\pi r (h + r)\) = [2 x \(\frac{22}{7}\) x \(\frac{7}{2}\) x (40 + 3.5)] \(cm^{2}\) = 957 \(cm^{2}\). Example 2: If the capacity of a cylindrical tank is 1848 \(m^{3}\) and the diameter of its base is 14 \(m\), then find the depth of the tank. Solution: Let the depth of the tank be \(h\) metres. Here r = 7 m, so \(\pi r^{2} h\) = 1848 ⇒ \(\frac{22}{7}\) x 7 x 7 x \(h\) = 1848 ⇒ 154\(h\) = 1848 ⇒ \(h\) = 12 m. Formula 4: Cone – A solid (3-dimensional) object with a circular flat base joined to a curved side that ends in an apex point is called a cone. Let radius of base = r and height = h. Then, Slant height, l = \(\sqrt{h^2 + r^2}\) units, Volume = \(\frac{1}{3}\pi r^2 h\) cubic units, Curved surface area = \(\pi r l\) sq. units, Total surface area = \(\pi r l + \pi r^2\) sq. units. Example 1: Find the slant height, volume, curved surface area and the whole surface area of a cone of radius 21 cm and height 28 cm. Solution: Here, \(r\) = 21 cm and \(h\) = 28 cm. ∴ Slant height, \(l\) = \(\sqrt{r^{2} + h^{2}}\) = \(\sqrt{(21)^{2} + (28)^{2}}\) = \(\sqrt{1225}\) = 35 cm. Volume = \(\frac{1}{3} \pi r^{2}h\) = (\(\frac{1}{3}\) x \(\frac{22}{7}\) x 21 x 21 x 28) \(cm^{3}\) = 12936 \(cm^{3}\). Curved surface area = \(\pi rl\) = (\(\frac{22}{7}\) x 21 x 35) \(cm^{2}\) = 2310 \(cm^{2}\). Total surface area = (\(\pi rl\) + \(\pi r^{2}\)) = \((2310 + \frac{22}{7} \times 21 \times 21) cm^{2}\) = 3696 \(cm^{2}\).
Example 2: Find the length of canvas 1.25 \(m\) wide required to build a conical tent of radius 7 metres and height 24 metres. Solution: Here, \(r\) = 7 m and \(h\) = 24 m. So, \(l\) = \(\sqrt{r^{2} + h^{2}}\) = \(\sqrt{7^{2} + (24)^2}\) = \(\sqrt{625}\) = 25 m. Area of canvas = \(\pi rl\) = (\(\frac{22}{7}\) x 7 x 25) \(m^{2}\) = 550 \(m^{2}\). ∴ Length of canvas = (\(\frac{Area}{Width}\)) = (\(\frac{550}{1.25}\)) m = 440 m. Formula 5: Sphere – A sphere is the set of all points in three-dimensional space that lie at a fixed distance (the radius) from a given point (the centre). Let the radius of the sphere be r. Then, Volume = (\(\frac{4}{3} \pi r^{3}\)) cubic units, Surface area = \(4\pi r^{2}\) sq. units. Example 1: Find the volume and surface area of a sphere of radius 10.5 cm. Solution: Volume = \(\frac{4}{3} \pi r^{3}\) = (\(\frac{4}{3}\) x \(\frac{22}{7}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\)) \(cm^{3}\) = 4851 \(cm^{3}\). Surface area = \(4 \pi r^{2}\) = (4 x \(\frac{22}{7}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\)) \(cm^{2}\) = 1386 \(cm^{2}\). Example 2: If the radius of a sphere is increased by 50%, find the increase percent in volume and the increase percent in the surface area. Solution: Let original radius = R. Then, new radius = \(\frac{150}{100}\) R = \(\frac{3R}{2}\). Original volume = \(\frac{4}{3} \pi R^{3}\), New volume = \(\frac{4}{3} \pi (\frac{3R}{2})^{3}\) = \(\frac{9 \pi R^{3}}{2}\). Increase in volume = \(\frac{9 \pi R^{3}}{2} - \frac{4 \pi R^{3}}{3}\) = \(\frac{19 \pi R^{3}}{6}\), so Increase % in volume = (\(\frac{19}{6} \pi R^{3}\) x \(\frac{3}{4 \pi R^{3}}\) x 100)% = 237.5%. Original surface area = \(4 \pi R^{2}\). New surface area = \(4 \pi (\frac{3R}{2})^{2}\) = \(9 \pi R^{2}\). Increase % in surface area = (\(\frac{5 \pi R^{2}}{4 \pi R^{2}}\) x 100)% = 125%. Formula 6: Hemisphere – In geometry, a hemisphere is exactly half of a sphere. Let the radius of a hemisphere be \(r\). Volume = \(\frac{2}{3} \pi r^3\) cubic units. Curved surface area = 2\(\pi r^2\) sq. units. Total surface area = 3\(\pi r^2 \) sq. units.
Example 1: Find the volume, curved surface area and the total surface area of a hemisphere of radius 10.5 cm. Solution: Volume = \(\frac{2}{3} \pi r^3\) = (\(\frac{2}{3}\) x \(\frac{22}{7}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\)) \(cm^{3}\) = 2425.5 \(cm^{3}\). Curved surface area = 2\(\pi r^2\) = (2 x \(\frac{22}{7}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\)) \(cm^{2}\) = 693 \(cm^{2}\). Total surface area = 3\(\pi r^2 \) = (3 x \(\frac{22}{7}\) x \(\frac{21}{2}\) x \(\frac{21}{2}\)) \(cm^{2}\) = 1039.5 \(cm^{2}\). Example 2: A hemispherical bowl of internal radius 9 cm contains a liquid. That liquid is to be filled into cylindrical bottles of diameter 3 cm and height 4 cm. How many bottles will be needed to empty the bowl? Solution: Volume of bowl = (\(\frac{2}{3} \pi\) x 9 x 9 x 9) \(cm^{3}\) = 486\(\pi\) \(cm^{3}\). Volume of 1 bottle = (\(\pi \times \frac{3}{2} \times \frac{3}{2} \times 4\)) \(cm^{3}\) = 9\(\pi\) \(cm^{3}\). Number of bottles = (\(\frac{486 \pi}{9 \pi}\)) = 54.
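The formulas above are easy to collect into code. Below is a sketch (the helper names are ours) that reproduces the worked examples; note the examples take \(\pi = \frac{22}{7}\), so results involving \(\pi\) differ very slightly from the exact values computed here:

```python
import math

# Solid-geometry formulas from the section above, as small helpers.
def cube(a):
    return {"volume": a**3, "surface": 6 * a**2, "diagonal": math.sqrt(3) * a}

def cuboid(l, b, h):
    return {"volume": l * b * h,
            "surface": 2 * (l*b + b*h + l*h),
            "diagonal": math.sqrt(l**2 + b**2 + h**2)}

def cylinder(r, h):
    return {"volume": math.pi * r**2 * h,
            "curved": 2 * math.pi * r * h,
            "total": 2 * math.pi * r * (h + r)}

def cone(r, h):
    l = math.sqrt(r**2 + h**2)                 # slant height
    return {"slant": l,
            "volume": math.pi * r**2 * h / 3,
            "curved": math.pi * r * l,
            "total": math.pi * r * (l + r)}

def sphere(r):
    return {"volume": 4 * math.pi * r**3 / 3, "surface": 4 * math.pi * r**2}

def hemisphere(r):
    return {"volume": 2 * math.pi * r**3 / 3,
            "curved": 2 * math.pi * r**2,
            "total": 3 * math.pi * r**2}

print(cube(6)["volume"])               # 216, as in Example 1 of Formula 1
print(cuboid(16, 14, 7)["surface"])    # 868, as in Example 1 of Formula 2
print(cuboid(12, 8, 9)["diagonal"])    # 17, the longest pole
print(cone(21, 28)["slant"])           # 35, the slant height
```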
We were able to solve the trigonometric equations in the previous section fairly easily, which in general is not the case. For example, consider the equation \[\label{eqn:cosinefixed} \cos\;x ~=~ x ~. \] Unfortunately there is no trigonometric identity or simple method which will help us here. Instead, we have to resort to numerical methods, which provide ways of getting successively better approximations to the actual solution(s) to within any desired degree of accuracy. There is a large field of mathematics devoted to this subject called numerical analysis. Many of the methods require calculus, but luckily there is a method which we can use that requires just basic algebra. It is called the secant method, and it finds roots of a given function \(f(x) \), i.e. values of \(x \) such that \(f(x)=0 \). A derivation of the secant method is beyond the scope of this book, but we can state the algorithm it uses to solve \(f(x)=0\): Pick initial points \(x_0 \) and \(x_1 \) such that \(x_0 < x_1 \) and \(f(x_0)\,f(x_1) < 0 \) (i.e. the solution is somewhere between \(x_0 \) and \(x_1\)). For \(n \ge 2 \), define the number \(x_n \) by \[\label{eqn:secantmethod} x_n ~=~ x_{n-1} ~-~ \dfrac{(x_{n-1} \;-\; x_{n-2})\,f(x_{n-1})}{f(x_{n-1}) \;-\; f(x_{n-2})} \] as long as \(|x_{n-1} \;-\; x_{n-2}| > \epsilon_{error} \), where \(\epsilon_{error} > 0 \) is the maximum amount of error desired (usually a very small number). The numbers \(x_0 \), \(x_1 \), \(x_2 \), \(x_3 \), \(... \) will approach the solution \(x \) as we go through more iterations, getting as close as desired. We will now show how to use this algorithm to solve the equation \(\cos\;x = x \). The solution to that equation is the root of the function \(f(x) =\cos\;x - x \). And we saw that the solution is somewhere in the interval \([0,1] \). So pick \(x_0 = 0 \) and \(x_1 = 1 \). Then \(f(0)=1 \) and \(f(1)=-0.4597 \), so that \(f(x_0)\,f(x_1) < 0 \) (we are using radians, of course). 
Then by definition, \[\nonumber \begin{align*} x_2 ~&=~ x_1 ~-~ \dfrac{(x_1 \;-\; x_0)\,f(x_1)}{f(x_1) \;-\; f(x_0)}\\ \nonumber &=~ 1 ~-~ \dfrac{(1 \;-\; 0)\,f(1)}{f(1) \;-\; f(0)}\\ \nonumber &=~ 1 ~-~ \dfrac{(1 \;-\; 0)\,(-0.4597)}{-0.4597 \;-\; 1}\\ \nonumber &=~ 0.6851~,\\ \nonumber x_3 ~&=~ x_2 ~-~ \dfrac{(x_2 \;-\; x_1)\,f(x_2)}{f(x_2) \;-\; f(x_1)}\\ \nonumber &=~ 0.6851 ~-~ \dfrac{(0.6851 \;-\; 1)\,f(0.6851)}{f(0.6851) \;-\; f(1)}\\ \nonumber &=~ 0.6851 ~-~ \dfrac{(0.6851 \;-\; 1)\,(0.0893)}{0.0893 \;-\; (-0.4597)}\\ \nonumber &=~ 0.7363 ~, \end{align*}\] and so on. Using a calculator is not very efficient and will lead to rounding errors. A better way to implement the algorithm is with a computer. Listing 6.1 below shows the code (secant.java) for solving \(\cos\;x = x \) with the secant method, using the Java programming language: Listing 6.1 Program listing for secant.java Lines 4-5 read in \(x_0 \) and \(x_1 \) as input parameters to the program. Line 6 initializes the variable that will eventually hold the solution. Line 7 sets the maximum error \(\epsilon_{error} \) to be \(1.0 \,\times\, 10^{-50} \). That is, our final answer will be within that (tiny!) amount of the real solution. Line 8 starts a loop of 9 iterations of the algorithm, i.e. it will create the successive approximations \(x_2 \), \(x_3 \), \(... \), \(x_{10} \) to the real solution, though in Line 9 we check to see if the two previous approximations differ by less than the maximum error. If they do, we stop (since this means we have an acceptable solution), otherwise we continue. Line 10 is the main step in the algorithm, creating \(x_n \) from \(x_{n-1} \) and \(x_{n-2} \). Lines 11-12 set the new values of \(x_{n-2} \) and \(x_{n-1} \), respectively. Lines 18-20 set the number of decimal places to show in the final answer to 50 (the default is 16) and then print the answer. Lines 23-24 give the definition of the function \(f(x)=\cos\;x - x \). 
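The Java listing itself is not reproduced here, so as a sketch, the same algorithm in Python (with a tolerance of \(10^{-12}\) rather than the listing's \(10^{-50}\), since ordinary 64-bit floats cannot resolve 50 decimal places):

```python
import math

def secant(f, x0, x1, eps=1e-12, max_iter=100):
    """Find a root of f between x0 and x1 by the secant method."""
    for _ in range(max_iter):
        if abs(x1 - x0) <= eps:    # successive approximations agree: stop
            break
        # x_n = x_{n-1} - (x_{n-1} - x_{n-2}) f(x_{n-1}) / (f(x_{n-1}) - f(x_{n-2}))
        x0, x1 = x1, x1 - (x1 - x0) * f(x1) / (f(x1) - f(x0))
    return x1

root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)   # ≈ 0.7390851332151607
```

Running it from \(x_0 = 0\), \(x_1 = 1\) produces the same iterates \(x_2 = 0.6851\), \(x_3 = 0.7363\), \(\dots\) computed by hand above.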
Below is the result of compiling and running the program using \(x_0 = 0 \) and \(x_1 = 1\): Since \(x=0.73908513321516... \) is the solution of \(\cos\;x = x \), you would get \(\cos\;(\cos\;x) = \cos\;x = x \), so \(\cos\;(\cos\;(\cos\;x)) = \cos\;x = x \), and so on. This number \(x \) is called an attractive fixed point of the function \(\cos\;x \). No matter where you start, you end up getting "drawn" to it. Figure 6.2.2 shows what happens when starting at \(x=0\): taking the cosine of \(0 \) takes you to \(1 \), and then successive cosines (indicated by the intersections of the vertical lines with the cosine curve) eventually "spiral" in a rectangular fashion toward the fixed point (i.e. the solution), which is the intersection of \(y=\cos\;x\) and \(y=x \). Recall from Example 5.10 in Section 5.2 that we claimed that the maximum and minimum of the function \(y=\cos\;6x + \sin\;4x \) were \(\pm\,1.90596111871578 \), respectively. We can show this by using the open-source program Octave. Octave uses a successive quadratic programming method to find the minimum of a function \(f(x) \). Finding the maximum of \(f(x) \) is the same as finding the minimum of \(-f(x) \) and then multiplying by \(-1 \) (why?). Below we show the commands to run at the Octave command prompt (\(\texttt{octave:n>}\)) to find the minimum of \(f(x) = \cos\;6x + \sin\;4x \). The command \(\texttt{sqp(3,'f')}\) says to use \(x=3 \) as a first approximation of the number \(x \) where \(f(x) \) is a minimum. The output says that the minimum occurs when \(x=2.65792064609274 \) and that the minimum is \(-1.90596111871578 \). To find the maximum of \(f(x) \), we find the minimum of \(-f(x) \) and then take its negative. The command \(\texttt{sqp(2,'f')}\) says to use \(x=2 \) as a first approximation of the number \(x \) where \(f(x) \) is a maximum. The output says that the maximum occurs when \(x=2.05446832062993\) and that the maximum is \(-(-1.90596111871578) = 1.90596111871578 \).
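The attractive-fixed-point behaviour described above is easy to reproduce; a short sketch:

```python
import math

# Repeatedly applying cos from any starting value is "drawn" to the
# attractive fixed point x = cos x ≈ 0.739085...
x = 0.0
for _ in range(100):
    x = math.cos(x)
print(x)   # ≈ 0.7390851332151607
```

The convergence is linear with ratio \(|\sin x^*| \approx 0.674\), so a hundred iterations are far more than enough for machine precision.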
Recall from Section 2.4 that Heron's formula is adequate for "typical'' triangles, but will often have a problem when used in a calculator with, say, a triangle with two sides whose sum is barely larger than the third side. However, you can get around this problem by using computer software capable of handling numbers with a high degree of precision. Most modern computer programming languages have this capability. For example, in the Python programming language (chosen here for simplicity) the \(\texttt{decimal}\) module can be used to set any level of precision. Below we show how to get accuracy up to \(50 \) decimal places using Heron's formula for the triangle in Example 2.16 from Section 2.4, by using the python interactive command shell: (Note: The triple arrow \(>>>\) is just a command prompt, not part of the code.) Notice in this case that we do get the correct answer; the high level of precision eliminates the rounding errors shown by many calculators when using Heron's formula. Another software option is Sage, a powerful and free open-source mathematics package based on Python. It can be run on your own computer, but it can also be run through a web interface: go to http://sagenb.org to create a free account, then once you register and sign in, click the New Worksheet link to start entering commands. For example, to find the solution to \(\cos\;x = x \) in the interval \([0,1] \), enter these commands in the worksheet textfield: Click the evaluate link to display the answer: \(0.7390851332151559\)
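The high-precision Heron computation described above can be sketched with Python's `decimal` module (the near-degenerate side lengths below are illustrative, not those of Example 2.16):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # 50 significant digits, as in the text

def heron(a, b, c):
    """Area of a triangle with sides a, b, c via Heron's formula."""
    a, b, c = Decimal(a), Decimal(b), Decimal(c)
    s = (a + b + c) / 2
    return (s * (s - a) * (s - b) * (s - c)).sqrt()

print(heron("3", "4", "5"))   # 6, the classic right triangle
# A nearly degenerate triangle of the kind that defeats calculators
# (two sides summing to barely more than the third; values are illustrative):
print(heron("100000", "99999.9999979", "0.0000029"))
```

Passing the sides as strings matters: converting through a float first would reintroduce exactly the rounding error the `decimal` module is meant to avoid.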
Prove that the roots of the equation \(x^5+ax^4+bx^3+cx^2+dx+e\) can't all be real if \(2a^2 < 5b\). Note by Ankit Kumar Jain 2 years, 5 months ago Sort by: Let \(x_{1}, x_{2}, \cdots, x_{5}\) be the real roots of the polynomial \(P(x)\). Then \(x_{1} + x_{2} + \cdots + x_{5} = S = -a\). Let \(\sum x_{1} \cdot x_{2} = T\). Then you can directly prove it by applying AM-GM twice. Exactly, you got it correct. By the way, there is also an algebraic proof, I mean a proof using no inequalities. I will try to prove it without inequalities.... Btw nice question. Have you made it yourself? @Rahil Sehgal – No. I have summarized your solution as solution 1 to the problem; see that. Thanks... I was thinking of posting it but a hint is actually enough. @Rahil Sehgal – If you did it differently then you can post your solution too. And as for solution 2, I am just posting it; wait..
SOLUTION 2: Lemma: For an equation \(f(x)=0\) of degree \(n\) to have all its roots real, it must be true that every derivative \(f^{(r)}(x)\), \(1 \leq r \leq n-1\), also has all its roots real. BUT THE CONVERSE IS NOT TRUE. Proof: If you see this graphically, we observe that if all the roots of \(f(x)\) of degree \(n\) are real, then the graph of the polynomial must take turns \(n-1\) times; that is, \(f^{'}(x)\) must be \(0\) for \(n-1\) values, counting multiplicity of roots. Proceeding this way leads us to the desired result. So, if the equation in the question has all its roots real, then its third derivative must also have all its roots real, i.e. \(60x^2 + 24ax + 6b = 0\) has all its roots real. \(\Rightarrow D \geq 0 \Rightarrow \boxed{2a^2 \geq 5b}\). But the condition provided in the question is contrary to what is essential for all the roots to be real. Hence, all the roots of the equation can't be real. Thank you very much (+1) :) :) @Rahil Sehgal Here is an inequality problem.. Do we need to find the discriminant (D) of the equation to prove this? If D ≥ 0 then only the roots are real. But there is no discriminant for a degree 5 equation, as far as I know. Tell me if I am wrong. @Ankit Kumar Jain – We can actually find it... See the wiki. I found this. For a polynomial \(P(x)=a_n x^n+a_{n-1} x^{n-1}+\cdots+a_1 x+a_0\) having roots \(x_1,x_2,\ldots,x_n\) (counting multiplicity), its discriminant is: \(\Delta=a_n^{2n-2}\prod_{1 \leq i < j \leq n} (x_i-x_j)^2\) @Rahil Sehgal – That is good... But will that help here? @Rahil Sehgal – What is the title of the wiki? @Ankit Kumar Jain – See this @Rahil Sehgal – See this @Rahil Sehgal – Thanks!
You can use this... Suppose that all the roots of the equation are real, then try to introduce some inequalities to get the desired result. SOLUTION 1: Suppose that the roots of the equation are \(x_1,x_2,x_3,x_4,x_5\), all \(\in \mathbb{R}\). By the AM-GM inequality, we have \(x_i^2+x_j^2 \geq 2x_ix_j\). Writing this for every pair \((i,j)\) with \(1 \leq i < j \leq 5\) and adding, each \(x_i\) appears in four pairs, so we get \(4\displaystyle \sum_{i=1}^{5} x_{i}^2 \geq 2\displaystyle \sum_{1 \leq i < j \leq 5} x_ix_j\). Adding \(8\displaystyle \sum_{1 \leq i < j \leq 5} x_ix_j\) to both sides we get \(4\left(\displaystyle \sum_{i=1}^{5} x_i\right)^2 \geq 10\displaystyle \sum_{1 \leq i < j \leq 5} x_ix_j\). Using Vieta's relations: \(\displaystyle \sum_{i=1}^{5} x_i = -a\), \(\displaystyle \sum_{1 \leq i < j \leq 5} x_ix_j = b\). Therefore, we get \(4a^2 \geq 10b\), i.e. \(\boxed{2a^2 \geq 5b}\). But the condition given in the question is just the contrary of what is essential for all roots to be real. Hence the equation can't have all its roots real under the given condition \(2a^2 < 5b\). Can you please post solution 2 (without inequalities)? This method is actually the elaboration of my solution. Oh, yes, sorry. I used the wrong word 'summarized'; it should be 'elaborated'. @Rahil Sehgal I have fixed a minor issue here... The symbol should be greater than or equal to in that box; I had earlier mentioned a strict inequality there. Thanks... I didn't notice that. @Rahil Sehgal – You can try this... Inequality is back; I have posted that today itself. And even this Algebraic Manipulation; you can see that discussion link in the contributions tab on my profile. @Ankit Kumar Jain – OK. Sure. Are you assuming that the \(x_i\) are all real roots here?
If that's not the case, then you cannot apply AM-GM at all, as \(x_i^2\) can be negative. I'm assuming this is a proof by contradiction, but for that you first need to highlight the point that the \(x_i\) are all real. Thanks!... I have edited the solution. :) @Md Zuhair @Aditya Narayan Sharma @Anirudh Sreekumar @Brian Charlesworth @Pi Han Goh Please post your solutions! It will help everyone know alternate solutions to the problem. @Tapas Mazumdar @Kushal Bose @Akshat Sharda Post your solutions guys.
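As a numerical sanity check of the two solutions above (our sketch, not part of the original thread): for any five real numbers taken as roots, Vieta's relations give \(a = -\sum x_i\) and \(b = \sum_{i<j} x_i x_j\), and the inequality \(2a^2 \geq 5b\) must hold, so \(2a^2 < 5b\) forces a non-real root.

```python
import random
from itertools import combinations

# For ANY five real roots: a = -(sum of roots), b = sum of pairwise
# products, and 2a^2 >= 5b must hold.
random.seed(1)
for _ in range(1000):
    roots = [random.uniform(-10, 10) for _ in range(5)]
    a = -sum(roots)
    b = sum(x * y for x, y in combinations(roots, 2))
    assert 2 * a**2 >= 5 * b - 1e-9   # small tolerance for float rounding
print("2a^2 >= 5b held for all 1000 samples")
```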
In the last tutorial, we witnessed the working of the Quick Sort algorithm and also analysed its worst case. As promised, we will now analyse the average case of Quick Sort and prove that it is in fact \(O(NlogN)\). But before moving forward, let us formalise the claim we are making: for every input array of length \(N\), the average running time of Quick Sort is \(O(NlogN)\) (using random pivots). As we have already stated that choosing a pivot is a crucial step in Quick Sort, we will build our proof around the Partitioning step. To avoid bad choices of pivots, we either shuffle the array first and then choose the first element as the pivot, or we choose a random pivot each time (both essentially have the same outcome). At first, let's just check the best case. You may wonder why we should bother about the best case, since an algorithm's guarantees never come from it; but if you want to check how well your algorithm can possibly do, the best case analysis tells you. So what would be the best case of Quick Sort, or rather of the Partition subroutine? Choose a pivot that partitions the array into two equal parts, i.e. choose the median. We cannot be lucky enough to choose the median as the pivot in each recursive call, but for the sake of argument let us assume that we are. In this situation, we can represent the running time of Quick Sort by the recurrence relation: \(T(N) = 2T(N/2) + C*O(N)\). We recurse twice on half of the input size and do a linear amount of work in the Partition subroutine, as the pivot gets compared with all the remaining elements once. We have seen this recurrence relation before in Merge Sort and know its solution to be \(O(NlogN)\). Now that we know how well Quick Sort can perform at best, one might suspect that the average case would be costlier than the best case. But in practice, it closely matches the best case.
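Before the formal argument, the claim can be sanity-checked empirically. Here is a sketch of the comparison-counting model used below (assuming distinct keys; the helper name is ours):

```python
import math
import random

def quicksort_comparisons(arr):
    """Randomized quicksort on distinct keys; returns the number of element
    comparisons (the partition step compares the pivot with each of the
    other len(arr) - 1 elements exactly once)."""
    if len(arr) <= 1:
        return 0
    pivot = random.choice(arr)                  # random pivot, as in the text
    less = [x for x in arr if x < pivot]
    greater = [x for x in arr if x > pivot]
    return len(arr) - 1 + quicksort_comparisons(less) + quicksort_comparisons(greater)

n, trials = 512, 40
avg = sum(quicksort_comparisons(list(range(n))) for _ in range(trials)) / trials
print(avg, 2 * n * math.log(n))   # the average stays below ~2 N ln N
```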
To prove this, we will use some probability concepts, namely sample space, random variables, expectation and linearity of expectation. We are assuming that you are familiar with these concepts. Consider an array A of input size \(N\). Then, the sample space \(\Omega\) = all possible outcomes of randomly picking pivots in Quick Sort. Let \(\sigma \in \Omega\) be a particular sequence of pivot choices and \(C(\sigma)\) = the number of comparisons between two elements made by Quick Sort (given the random choices \(\sigma\)). Now, since the running time of Quick Sort is dominated by the comparisons, we calculate the running time as the average number of (or expectation of) comparisons made by the algorithm. Thus, running time of Quick Sort = \(E[C]\). Then, to prove our claim, we will have to prove that \(E[C] = O(NlogN)\). Given that A = the input array of length N, let \(Z_i = \) the i-th smallest element of array A. For example, in the array \(A = 3, 6, 2, 1\), 3 is the 3rd smallest and 6 is the 4th smallest element. For \(\sigma \in \Omega \), let \(X_{ij}(\sigma) = \) the number of comparisons between \(Z_i\) and \(Z_j\) in Quick Sort with pivot sequence \(\sigma\). At this moment, we want you to think about the number of times comparisons between \(Z_i\) and \(Z_j\) can happen in the Quick Sort algorithm. In each Partition subroutine, every other element is compared with the pivot element. So \(Z_i\) and \(Z_j\) can only be compared if one of them is the pivot, and a second comparison cannot happen, as the pivot is excluded from the subsequent recursive calls. So we have either 0 or 1 comparisons between \(Z_i\) and \(Z_j\). Now think about our first random variable \(C(\sigma)\) in terms of \(X_{ij}(\sigma)\).
We can express it as \(C(\sigma) = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} X_{ij}(\sigma)\), so \(E[C] =E[ \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} X_{ij} ]\). Then, by linearity of expectation, we have \(E[C] = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} E[X_{ij}]\). Since \(E[X_{ij}] = 0* p[X_{ij}=0] + 1* p[X_{ij}=1] = p[X_{ij}=1] \), i.e. the probability that \(Z_i\) and \(Z_j\) get compared, \(E[C] = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} p[X_{ij}=1]\). Now we just have to find the probability of \(Z_i\) and \(Z_j\) getting compared. For this, let's fix \(Z_i\) and \(Z_j\) such that \(i < j\) and consider the set \(Z_i, Z_{i+1}, ..., Z_{j-1},Z_j\). As long as none of the elements of this set is chosen as the pivot, all of them are passed to the same recursive calls. Considering the first among \(Z_i, Z_{i+1}, ..., Z_{j-1},Z_j\) that gets chosen as a pivot: Case 1: if \(Z_i\) or \(Z_j\) gets chosen as the pivot, then the two get compared. Case 2: if one of \(Z_{i+1}, ..., Z_{j-1}\) gets chosen as the pivot, then \(Z_i\) and \(Z_j\) are never compared, as they are split between two recursive calls. So the number of equally likely pivot choices within this set is its size, i.e. \(j-i+1\), and the number of favorable events is 2 (either \(Z_i\) or \(Z_j\) gets chosen as the pivot). This gives the probability \(p[X_{ij}=1] = \frac{2}{(j-i+1)}\), thus \(E[C] = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{2}{(j-i+1)}\), i.e. \(E[C] = 2* \sum_{i=1}^{N-1} \sum_{j=i+1}^{N}\frac{1}{(j-i+1)}\). Now, if we can prove that the above sum is bounded by \(NlogN\), our claim will be validated. So let's find out how big this double summation can be.
For each fixed value of i, the inner sum is \(\sum_{j=i+1}^{N} \frac{1}{(j-i+1)} = \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \dots\), and i ranges from 1 to N-1. So \(E[C] <= 2* N*\sum_{k=2}^{N} \frac{1}{k}\). Think of rectangles having a width of 1 unit and heights of \(\frac{1}{2} , \frac{1}{3}, \frac{1}{4} \dots\) and so on; then the term \(\sum_{k=2}^{N} \frac{1}{k}\) represents the total area of all such rectangles, and it is upper bounded by the area under the curve of \(\frac{1}{x}\) from 1 to N. So \(\sum_{k=2}^{N} \frac{1}{k} <= \int_1^N \frac{1}{x} \: \mathrm{d}x\), i.e. \(\sum_{k=2}^{N} \frac{1}{k} <= logN\), so \(E[C] <= 2* N* logN = O(NlogN)\). Thus, our claim that the average number of comparisons in Quick Sort is of the order \(O(NlogN)\) is proved. Please note that this tutorial is adapted from the Coursera video lectures of Tim Roughgarden. If you want more details or the video version, you can consult the Coursera lectures.
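The integral bound in the last step can be checked numerically (a sketch; the test size \(N\) is our choice):

```python
import math

# sum_{k=2}^{N} 1/k is bounded above by the area under 1/x from 1 to N,
# which is ln N -- the bound used in the final step of the proof.
N = 100_000
tail = sum(1.0 / k for k in range(2, N + 1))
print(tail, math.log(N))   # the first number is smaller
```

In fact the sum is \(\ln N + \gamma - 1 + o(1) \approx \ln N - 0.423\), so the bound is tight up to an additive constant.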
Author: Lu, You; Luo, Rong; Schubert, Michael; Steffen, Eckhard; Zhang, Cun-Quan. Abstract: Many basic properties in Tutte's flow theory for unsigned graphs do not have their counterparts for signed graphs. However, signed graphs without long barbells in many ways behave like unsigned graphs from the point of view of flows. In this paper, we study whether some basic properties in Tutte's flow theory remain valid for this family of signed graphs. Specifically, let $(G,\sigma)$ be a flow-admissible signed graph without long barbells. We show that it admits a nowhere-zero $6$-flow and that it admits a nowhere-zero modulo $k$-flow if and only if it admits a nowhere-zero integer $k$-flow for each integer $k\geq 3$ and $k \not = 4$. We also show that each nowhere-zero positive integer $k$-flow of $(G,\sigma)$ can be expressed as the sum of some $2$-flows. For general graphs, we show that every nowhere-zero $\frac{p}{q}$-flow can be normalized in such a way that each flow value is a multiple of $\frac{1}{2q}$. As a consequence we prove the equality of the integer flow number and the ceiling of the circular flow number for flow-admissible signed graphs without long barbells. Citation: Lu, Y., Luo, R., Schubert, M., Steffen, E., & Zhang, C.-Q. (2019). Flows on signed graphs without long barbells. arXiv:1908.11004.
These are homework exercises to accompany Lebl's "Differential Equations for Engineering" Textmap. This is a textbook targeted for a one semester first course on differential equations, aimed at engineering students. Prerequisite for the course is the basic calculus sequence. Exercise 6.1.5: Find the Laplace transform of \(3+ t^5 + \sin (\pi t)\). Exercise 6.1.6: Find the Laplace transform of \(a + bt +ct^2\) for some constants \(a\), \(b\), and \(c\). Exercise 6.1.7: Find the Laplace transform of \(A \cos (\omega t) + B \sin (\omega t ) \). Exercise 6.1.8: Find the Laplace transform of \( \cos^2 (\omega t ) \). Exercise 6.1.9: Find the inverse Laplace transform of \(\dfrac{4}{s^2-9}\). Exercise 6.1.10: Find the inverse Laplace transform of \( \dfrac{2s}{s^2-1}\). Exercise 6.1.11: Find the inverse Laplace transform of \( \dfrac{1}{(s-1)^2(s+1)}\). Exercise 6.1.12: Find the Laplace transform of \(f(t)= \left\{ \begin{array}{cc} t & {\rm{if~}}t \geq 1, \\ 0 & {\rm{if~}}t < 1.\end{array} \right.\). Exercise 6.1.13: Find the inverse Laplace transform of \( \dfrac{s}{(s^2+s+2)(s+4)}\). Exercise 6.1.14: Find the Laplace transform of \(\sin \left( \omega (t-a) \right) \). Exercise 6.1.15: Find the Laplace transform of \( t \sin (\omega t) \). Hint: Several integrations by parts. Exercise 6.1.101: Find the Laplace transform of \(4(t+1)^2\). Exercise 6.1.102: Find the inverse Laplace transform of \(\dfrac{8}{s^3 (s+2)}\). Exercise 6.1.103: Find the Laplace transform of \(te^{-t}\) (Hint: integrate by parts). Exercise 6.1.104: Find the Laplace transform of \(\sin (t) e^{-t}\) (Hint: integrate by parts). Exercise 6.2.1: Verify Table 6.2. Exercise 6.2.2: Using the Heaviside function write down the piecewise function that is \(0\) for \(t<0\), \(t^2\) for \(t\) in \([0,1]\), and \(t\) for \(t>1\). Exercise 6.2.3: Using the Laplace transform solve \[ mx'' + cx'+kx =0,~~~~~~~ x(0)=a, ~~~~~~~ x'(0)=b.\] where \(m>0,c>0,k>0\), and \(c^2-4km>0\) (system is overdamped).
Exercise 6.2.4: Using the Laplace transform solve \[ mx'' + cx'+kx =0,~~~~~~~ x(0)=a, ~~~~~~~ x'(0)=b.\] where \(m>0,c>0,k>0\), and \(c^2-4km<0\) (system is underdamped). Exercise 6.2.5: Using the Laplace transform solve \[ mx'' + cx'+kx =0,~~~~~~~ x(0)=a, ~~~~~~~ x'(0)=b.\] where \(m>0,c>0,k>0\), and \(c^2=4km\) (system is critically damped). Exercise 6.2.6: Solve \(x''+x=u(t-1)\) for initial conditions \(x(0)=0\) and \(x'(0)=0\). Exercise 6.2.7: Show the differentiation of the transform property. Suppose \(\mathcal{L}\{f(t)\}=F(s)\), then show \[ \mathcal{L}\{-tf(t)\}=F'(s).\] Hint: Differentiate under the integral sign. Exercise 6.2.8: Solve \(x'''+x=t^3u(t-1)\) for initial conditions \(x(0)=1\) and \(x'(0)=0\), \(x''(0)=0\). Exercise 6.2.9: Show the second shifting property: \( \mathcal{L}\{f(t-a)u(t-a)\}=e^{-as}\mathcal{L}\{f(t)\} \). Exercise 6.2.10: Let us think of the mass-spring system with a rocket from Example 6.2.2. We noticed that the solution kept oscillating after the rocket stopped running. The amplitude of the oscillation depends on the time that the rocket was fired (for 4 seconds in the example). a) Find a formula for the amplitude of the resulting oscillation in terms of the amount of time the rocket is fired. b) Is there a nonzero time (if so what is it?) for which the rocket fires and the resulting oscillation has amplitude 0 (the mass is not moving)? Exercise 6.2.11: Define \[ f(t)= \left\{ \begin{array}{ccc} (t-1)^2 & if~1 \leq t<2, \\ 3-t & if~2 \leq t<3, \\ 0 & otherwise. \end{array} \right. \] a) Sketch the graph of \(f(t)\). b) Write down \(f(t)\) using the Heaviside function. c) Solve \(x''+x=f(t), x(0)=0,x'(0)=0\) using Laplace transform. Exercise 6.2.12: Find the transfer function for \(mx'' + cx'+kx =f(t)\) (assuming the initial conditions are zero). Exercise 6.2.101: Using the Heaviside function \(u(t)\), write down the function \[ f(t)= \left\{ \begin{array}{ccc} 0 & if~~~~~t<1, \\ t-1 & if~1 \leq t<2, \\ if~~~~~2 \leq t. 
\end{array} \right. \] Exercise 6.2.102: Solve \(x''-x=(t^2-1)u(t-1)\) for initial conditions \(x(0)=1,x'(0)=2\) using the Laplace transform. Exercise 6.2.103: Find the transfer function for \(x'+x=f(t)\) (assuming the initial conditions are zero). Exercise 6.3.1: Let \(f(t)=t^2\) for \(t \geq 0\), and \(g(t)=u(t-1)\). Compute \(f * g\). Exercise 6.3.2: Let \(f(t)=t\) for \(t \geq 0\), and \(g(t)=\sin t\) for \(t \geq 0\). Compute \(f * g\). Exercise 6.3.3: Find the solution to \( mx''+cx'+kx=f(t),~~~~~~x(0)=0,~~~~~~x'(0)=0,\) for an arbitrary function \(f(t)\), where \(m>0,c>0,k>0\), and \(c^2-4km>0\) (system is overdamped). Write the solution as a definite integral. Exercise 6.3.4: Find the solution to \( mx''+cx'+kx=f(t),~~~~~~x(0)=0,~~~~~~x'(0)=0,\) for an arbitrary function \(f(t)\), where \(m>0,c>0,k>0\), and \(c^2-4km<0\) (system is underdamped). Write the solution as a definite integral. Exercise 6.3.5: Find the solution to \( mx''+cx'+kx=f(t),~~~~~~x(0)=0,~~~~~~x'(0)=0,\) for an arbitrary function \(f(t)\), where \(m>0,c>0,k>0\), and \(c^2=4km\) (system is critically damped). Write the solution as a definite integral. Exercise 6.3.6: Solve \( x(t)=e^{-t} +\int_0^t\cos(t-\tau)x(\tau)~d\tau . \) Exercise 6.3.7: Solve \( x(t)=\cos t +\int_0^t\cos(t-\tau)x(\tau)~d\tau . \) Exercise 6.3.8: Compute \(\mathcal{L}^{-1} \left\{ \frac{s}{(s^2+4)^2}\right\}\) using convolution. Exercise 6.3.9: Write down the solution to \(x''-2x=e^{-t^2},x(0)=0,x'(0)=0\) as a definite integral. Hint: Do not try to compute the Laplace transform of \(e^{-t^2}\). Exercise 6.3.101: Let \(f(t)=\cos t\) for \(t \geq 0\), and \(g(t)=e^{-t}\). Compute \(f * g\). Exercise 6.3.102: Compute \(\mathcal{L}^{-1} \left\{ \frac{5}{s^4+s^2}\right\}\) using convolution. Exercise 6.3.103: Solve \(x''+x=\sin t, x(0)=0, x'(0)=0\) using convolution. Exercise 6.3.104: Solve \(x'''+x'=f(t), x(0)=0, x'(0)=0,x''(0)=0\) using convolution. Write the result as a definite integral. 
Exercise 6.4.1: Solve (find the impulse response) \( x'' + x' + x = \delta(t),x(0) = 0, x'(0)=0.\) Exercise 6.4.2: Solve (find the impulse response) \(x'' + 2 x' + x = \delta(t), x(0) = 0, x'(0)=0.\) Exercise 6.4.3: A pulse can come later and can be bigger. Solve \(x'' + 4 x = 4\delta(t-1), x(0) = 0, x'(0)=0.\) Exercise 6.4.4: Suppose that \(f(t)\) and \(g(t)\) are differentiable functions and suppose that \(f(t) = g(t) = 0\) for all \(t \leq 0\). Show that \[ (f * g)'(t) = (f' * g)(t) = (f * g')(t) .\] Exercise 6.4.5: Suppose that \(L x = \delta(t), x(0) = 0, x'(0) = 0\), has the solution \(x = e^{-t}\) for \(t>0\). Find the solution to \(Lx = t^2, x(0) = 0, x'(0) = 0\) for \(t > 0\). Exercise 6.4.6: Compute \(\mathcal{L}^{-1} \left\{ \frac{s^2+s+1}{s^2} \right\}\). Exercise 6.4.7 (challenging): Solve Example 6.4.3 via integrating 4 times in the \(x\) variable. Exercise 6.4.8: Suppose we have a beam of length \(1\) simply supported at the ends and suppose that force \(F=1\) is applied at \(x=\frac{3}{4}\) in the downward direction. Suppose that \(EI=1\) for simplicity. Find the beam deflection \(y(x)\). Exercise 6.4.101: Solve (find the impulse response) \(x'' = \delta(t), x(0) = 0, x'(0)=0\). Exercise 6.4.102: Solve (find the impulse response) \(x' + a x = \delta(t), x(0) = 0, x'(0)=0\). Exercise 6.4.103: Suppose that \(L x = \delta(t), x(0) = 0, x'(0) = 0\), has the solution \(x(t) = \cos(t)\) for \(t>0\). Find (in closed form) the solution to \(Lx = \sin(t), x(0) = 0, x'(0) = 0\) for \(t > 0\). Exercise 6.4.104: Compute \({\mathcal{L}}^{-1} \left\{ \frac{s^2}{s^2+1} \right\}\). Exercise 6.4.105: Compute \({\mathcal{L}}^{-1} \left\{ \frac{3 s^2 e^{-s} + 2}{s^2} \right\}\).
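For intuition on the impulse-response exercises (a sketch of my own, not part of the text): the impulse response of \(x' + ax = \delta(t)\) in Exercise 6.4.102 is \(e^{-at}\), and one can see this numerically by replacing \(\delta(t)\) with a tall, narrow rectangular pulse of unit area and integrating the ODE. The step update below is exact for piecewise-constant forcing, so the only error comes from the pulse approximation itself.

```python
import math

def impulse_response(a, t_end, eps=1e-3, dt=1e-4):
    # Approximate δ(t) by a pulse of width eps and height 1/eps, and
    # integrate x' + a x = f, x(0) = 0, using the exact one-step update
    # for piecewise-constant f:  x(t+dt) = x e^{-a dt} + (f/a)(1 − e^{-a dt}).
    n = round(t_end / dt)
    k = round(eps / dt)            # number of steps inside the pulse
    decay = math.exp(-a * dt)
    x = 0.0
    for i in range(n):
        f = (1.0 / eps) if i < k else 0.0
        x = x * decay + (f / a) * (1.0 - decay)
    return x

# Exercise 6.4.102: the impulse response should be e^{-at}.
a = 2.0
assert abs(impulse_response(a, 1.0) - math.exp(-a * 1.0)) < 1e-2
```

Shrinking `eps` drives the numerical answer toward the ideal impulse response, which is the usual way to make sense of the delta-function forcing in these exercises.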
Background: I've heard that Malliavin Calculus can be used to derive the explicit form of a delta-neutral hedge (given an SDE-driven market model). For example, page 21 here sketches how to achieve a $\Delta$-neutral hedge; but how would one achieve a $\Delta$-positive hedge? Question: Fix $T>0$. My question is how this proof strategy can be used to show the existence of a hedge $H$ which has: positive Delta throughout $[0,T]$; $H(0)=1$; $\Delta H(T)\geq \Delta \tilde{H}(T)$ for every hedge $\tilde{H}$ with $\tilde{H}(0)=1$.
Almost-period A concept from the theory of almost-periodic functions (cf. Almost-periodic function); a generalization of the notion of a period. For a uniformly almost-periodic function $f(x)$, $-\infty<x<\infty$, a number $\tau=\tau_f(\epsilon)$ is called an $\epsilon$-almost-period of $f(x)$ if for all $x$, $$|f(x+\tau)-f(x)|<\epsilon.$$ For generalized almost-periodic functions the concept of an almost-period is more complicated. For example, in the space $S_l^p$ an $\epsilon$-almost-period $\tau$ is defined by the inequality $$D_{S_l^p}[f(x+\tau),f(x)]<\epsilon,$$ where $D_{S_l^p}[f,\phi]$ is the distance between $f(x)$ and $\phi(x)$ in the metric of $S_l^p$. A set of almost-periods of a function $f(x)$ is said to be relatively dense if there is a number $L=L(\epsilon,f)>0$ such that every interval $(\alpha,\alpha+L)$ of the real line contains at least one number from this set. The concepts of a uniformly almost-periodic function and of a Stepanov almost-periodic function may be defined by requiring the existence of relatively-dense sets of $\epsilon$-almost-periods for these functions. References [1] B.M. Levitan, "Almost-periodic functions" , Moscow (1953) (In Russian) Comments For the definition of $S_l^p$ and its metric $D_{S_l^p}$ see Almost-periodic function. The Weyl, Besicovitch and Levitan almost-periodic functions can also be characterized in terms of $S_l^p$ $\epsilon$-almost-periods. These characterizations are more complicated. A good additional reference is [a1], especially Chapt. II. References [a1] A.S. Besicovitch, "Almost periodic functions" , Cambridge Univ. Press (1932) How to Cite This Entry: Almost-period. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Almost-period&oldid=32494
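The definition can be explored numerically. The sketch below (my own illustration, not from the article) estimates $\sup_x |f(x+\tau)-f(x)|$ on a finite grid for $f(x) = \sin x + \sin(\sqrt{2}\,x)$, a standard almost-periodic function: shifts built from good rational approximations of $\sqrt{2}$ (here the continued-fraction convergent $41/29$) nearly repeat both frequencies at once and so are $\epsilon$-almost-periods for small $\epsilon$.

```python
import math

def sup_shift_diff(f, tau, x_max=200.0, step=0.01):
    # Estimate sup_x |f(x + tau) − f(x)| by sampling a finite grid;
    # a true sup over the whole real line is of course not computable.
    n = int(x_max / step)
    return max(abs(f(i * step + tau) - f(i * step)) for i in range(n))

f = lambda x: math.sin(x) + math.sin(math.sqrt(2) * x)

# 41/29 ≈ sqrt(2), so tau = 29·2π shifts sin x by an exact period and
# sin(sqrt(2) x) by very nearly a whole number (41) of periods:
tau = 29 * 2 * math.pi
assert sup_shift_diff(f, tau) < 0.1    # tau is a 0.1-almost-period of f
assert sup_shift_diff(f, math.pi) > 0.5  # a generic shift is not
```

Better convergents ($99/70$, $239/169$, …) give $\epsilon$-almost-periods for smaller $\epsilon$, and they are relatively dense, in line with the characterization above.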
Let $T = (X_n:n \in \mathbb{N})$ denote a homogeneous Markov chain with state space $E=\lbrace 1, 2, 3\rbrace$ and $$\mathbb{P}(X_1=2\vert X_0=1) = \mathbb{P}(X_1=3\vert X_0=1)=\frac{1}{3}$$ as well as $$\mathbb{P}(X_1=1\vert X_0=2) = \mathbb{P}(X_1=2\vert X_0=3)=1.$$ I suspect that the transition matrix is $$T = \left( \begin{array}{ccc} 1/3 & 1/3 & 1/3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right).$$ Also, I understand that the stationary distribution is a limiting form of the Markov chain, and so from the definition of a stationary distribution we find $\pi$, the stationary distribution, by solving $\pi=\pi T$. We calculate the stationary distribution by finding left eigenvectors for the eigenvalue $1$. So, \begin{align*} \frac{1}{3}\pi_1 +\frac{1}{3}\pi_2+\frac{1}{3}\pi_3 & =\pi_1,\\ \pi_1 & =\pi_2 \hspace{1.0cm}\textrm{and}\\ \pi_2 & =\pi_3. \end{align*} So $\pi_1=\pi_2=\pi_3$ and as $\pi_1+\pi_2+\pi_3=1$, then $$\pi=\left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right).$$ Is this correct? Also, a positive recurrent Markov chain has a unique stationary distribution. How do I determine if this Markov chain is positive recurrent, and thus if the above stationary distribution is unique?
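A candidate distribution can be tested directly against the fixed-point equation $\pi = \pi T$, which sidesteps any doubt about which eigenvector equation to use (the stationary equation contracts $\pi$ against the *columns* of $T$, i.e. left multiplication by a row vector). The sketch below, using exact rational arithmetic, is my own check, not part of the post; solving $\pi = \pi T$ by hand for this matrix gives $\pi = (1/2,\, 1/3,\, 1/6)$.

```python
from fractions import Fraction

# The transition matrix written in the post (rows sum to 1):
T = [[Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
     [Fraction(1),    Fraction(0),    Fraction(0)],
     [Fraction(0),    Fraction(1),    Fraction(0)]]

def step(pi, T):
    # One step of pi ↦ pi T (row vector times matrix).
    return [sum(pi[i] * T[i][j] for i in range(3)) for j in range(3)]

def is_stationary(pi, T):
    # pi is stationary iff pi T == pi and pi is a probability vector.
    return step(pi, T) == pi and sum(pi) == 1

print(is_stationary([Fraction(1, 3)] * 3, T))                              # → False
print(is_stationary([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)], T))  # → True
```

Exact `Fraction` arithmetic makes the verdict unambiguous: the uniform vector is not fixed by $\pi \mapsto \pi T$ here, while $(1/2, 1/3, 1/6)$ is.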
In Section 4.1 we saw that one revolution has a radian measure of \(2\pi \) rad. Note that \(2\pi\) is the ratio of the circumference (i.e. total arc length) \(C \) of a circle to its radius \(r\): \[\nonumber \text{Radian measure of 1 revolution} ~=~ 2\pi ~=~ \frac{2\pi\,r}{r} ~=~ \frac{C}{r} ~=~ \frac{\text{total arc length}}{\text{radius}} \] Clearly, that ratio is independent of \(r \). In general, the radian measure of an angle is the ratio of the arc length cut off by the corresponding central angle in a circle to the radius of the circle, independent of the radius. Now suppose that we cut the angle with radian measure \(1 \) in half, as in Figure 4.2.1(b). Clearly, this cuts the arc length \(r \) in half as well. Thus, we see that \[\nonumber \begin{align} \text{Angle} ~&=~ 1~\text{radian} \quad&\Rightarrow\quad \text{arc length} ~&=~ r ~,\\ \nonumber \text{Angle} ~&=~ 2~\text{radians} \quad&\Rightarrow\quad \text{arc length} ~&=~ 2\,r ~,\\ \nonumber \text{Angle} ~&=~ \tfrac{1}{2}~\text{radian} \quad&\Rightarrow\quad \text{arc length} ~&=~ \tfrac{1}{2}\,r ~,\end{align}\] and in general, for any \(\theta \ge 0 \), \[\nonumber \text{Angle} ~=~ \theta~\text{radians} \quad \Rightarrow\quad \text{arc length} ~=~ \theta\,r ~, \] so that \[\nonumber \theta ~=~ \frac{\text{arc length}}{\text{radius}} ~~. \] Intuitively, it is obvious that shrinking or magnifying a circle preserves the measure of a central angle even as the radius changes. The above discussion says more, namely that the ratio of the length \(s \) of an intercepted arc to the radius \(r \) is preserved, precisely because that ratio is the measure of the central angle in radians (see Figure 4.2.2). We thus get a simple formula for the length of an arc: In a circle of radius \(r \), let \(s \) be the length of an arc intercepted by a central angle with radian measure \(\theta \ge 0 \). 
Then the arc length \(s \) is: \[ s ~=~ r\,\theta \label{4.4}\] Example 4.3 In a circle of radius \(r=2 \) cm, what is the length \(s \) of the arc intercepted by a central angle of measure \(\theta = 1.2 \) rad? Solution: Using Equation \ref{4.4}, we get: \[s ~=~ r\,\theta ~=~ (2)\,(1.2) ~=~ \boxed{2.4~\text{cm}} \nonumber \] Example 4.4 In a circle of radius \(r=10 \) ft, what is the length \(s \) of the arc intercepted by a central angle of measure \(\theta = 41^\circ\;\)? Solution: Using Equation \ref{4.4} blindly with \(\theta = 41^\circ \), we would get \(\;s = r\,\theta = (10)\,(41) = 410 \) ft. But this is impossible, since a circle of radius \(10 \) ft has a circumference of only \(2\pi\,(10) \approx 62.83 \) ft! Our error was in using the angle \(\theta \) measured in degrees, not radians. So first convert \(\theta =41^\circ \) to radians, then use \(s=r\,\theta\): \[\theta = 41^\circ ~=~ \frac{\pi}{180} \;\cdot\; 41 ~=~ 0.716~\text{rad} \quad\Rightarrow\quad s ~=~ r\,\theta ~=~ (10)\,(0.716) ~=~ \boxed{7.16~\text{ft}}\nonumber\] Note that since the arc length \(s \) and radius \(r \) are usually given in the same units, radian measure is really unitless, since you can think of the units canceling in the ratio \(\frac{s}{r} \), which is just \(\theta \). This is another reason why radians are so widely used. Example 4.5 A central angle in a circle of radius \(5 \) m cuts off an arc of length \(2 \) m. What is the measure of the angle in radians? What is the measure in degrees? Solution: Letting \(r=5 \) and \(s=2 \) in Equation \ref{4.4}, we get: \[\theta ~=~ \frac{s}{r} ~=~ \frac{2}{5} ~=~ \boxed{0.4~\text{rad}}\nonumber \] In degrees, the angle is: \[ \theta = 0.4~\text{rad} ~=~ \frac{180}{\pi} \;\cdot\; 0.4 ~=~ \boxed{22.92^\circ}\nonumber \] For central angles \(\theta > 2\pi \) rad, i.e. 
\(\theta > 360^\circ \), it may not be clear what is meant by the intercepted arc, since the angle is larger than one revolution and hence "wraps around'' the circle more than once. We will take the approach that such an arc consists of the full circumference plus any additional arc length determined by the angle. In other words, Equation \ref{4.4} is still valid for angles \(\theta > 2\pi \) rad. What about negative angles? In this case using \(s=r\,\theta \) would mean that the arc length is negative, which violates the usual concept of length. So we will adopt the convention of only using nonnegative central angles when discussing arc length. Example 4.6 A rope is fastened to a wall in two places \(8 \) ft apart at the same height. A cylindrical container with a radius of \(2 \) ft is pushed away from the wall as far as it can go while being held in by the rope, as in Figure 4.2.3 which shows the top view. If the center of the container is \(3 \) feet away from the point on the wall midway between the ends of the rope, what is the length \(L \) of the rope? Figure 4.2.3 Solution: We see that, by symmetry, the total length of the rope is \(\;L = 2\;(AB + \overparen{BC}) \). Also, notice that \(\triangle\,ADE \) is a right triangle, so the hypotenuse has length \(AE = \sqrt{DE^2 + DA^2} = \sqrt{3^2 + 4^2} = 5 \) ft, by the Pythagorean Theorem. Now since \(\overline{AB} \) is tangent to the circular container, we know that \(\angle\,ABE \) is a right angle. So by the Pythagorean Theorem we have \[ AB ~=~ \sqrt{AE^2 - BE^2} ~=~ \sqrt{5^2 - 2^2} ~=~ \sqrt{21} ~\text{ft}. \nonumber \] By Equation \ref{4.4} the arc \(\overparen{BC} \) has length \(BE \cdot \theta \), where \(\theta = \angle\,BEC \) is the supplement of \(\angle\,AED + \angle\,AEB \). 
So since \[\nonumber \tan\,\angle\,AED ~=~ \frac{4}{3} ~\Rightarrow~ \angle\,AED ~=~ 53.1^\circ \quad\text{and}\quad \cos\,\angle\,AEB ~=~ \frac{BE}{AE} ~=~ \frac{2}{5} ~\Rightarrow~ \angle\,AEB ~=~ 66.4^\circ ~, \] we have \[ \nonumber \theta ~=~ \angle\,BEC ~=~ 180^\circ \;-\; (\angle\,AED + \angle\,AEB) ~=~ 180^\circ \;-\; (53.1^\circ + 66.4^\circ) ~=~ 60.5^\circ ~. \] Converting to radians, we get \(\;\theta = \frac{\pi}{180} \;\cdot\; 60.5 = 1.06 \) rad. Thus, \[\nonumber L ~=~ 2\,(AB \;+\; \overparen{BC}) ~=~ 2\,(\sqrt{21} \;+\; BE \cdot \theta) ~=~ 2\,(\sqrt{21} \;+\; (2)\,( 1.06)) ~=~ \boxed{13.4 ~\text{ft}} ~. \] Example 4.7 Two pulleys, of radii \(5 \) cm and \(8 \) cm, have their centers \(15 \) cm apart and are connected by a belt, as in Figure 4.2.4. Find the total length \(L \) of the belt. Solution: First, at the center \(B \) of the pulley with radius \(8 \), draw a circle of radius \(3 \), which is the difference in the radii of the two pulleys. Let \(C \) be the point where this circle intersects \(\overline{BF} \). Then we know that the tangent line \(\overline{AC} \) to this smaller circle is perpendicular to the line segment \(\overline{BF} \). Thus, \(\angle\,ACB \) is a right angle, and so the length of \(\overline{AC} \) is \[AC ~=~ \sqrt{AB^2 - BC^2} ~=~ \sqrt{15^2 - 3^2} ~=~ \sqrt{216} ~=~ 6\,\sqrt{6}\nonumber \] by the Pythagorean Theorem. Now since \(\overline{AE} \perp \overline{EF} \) and \(\overline{EF} \perp \overline{CF} \) and \(\overline{CF} \perp \overline{AC} \), the quadrilateral \(AEFC \) must be a rectangle. In particular, \(EF = AC \), so \(EF = 6\,\sqrt{6} \). By Equation \ref{4.4} we know that \(\;\overparen{DE} = EA \cdot \angle\,DAE\; \) and \(\;\overparen{FG} = BF \cdot \angle\,GBF \), where the angles are measured in radians. 
So thinking of angles in radians (using \(\pi \) rad \(= 180^\circ\)), we see from Figure 4.2.4 that \[\nonumber \angle\,DAE ~=~ \pi \;-\; \angle\,EAC \;-\; \angle\,BAC ~=~ \pi \;-\; \frac{\pi}{2} \;-\; \angle\,BAC ~=~ \frac{\pi}{2} \;-\; \angle\,BAC ~,\] where \[\nonumber \sin\;\angle\,BAC ~=~ \frac{BC}{AB} ~=~ \frac{3}{15} ~=~ 0.2 \quad\Rightarrow\quad \angle\,BAC ~=~ 0.201~\text{rad.}\] Thus, \(\;\angle\,DAE = \frac{\pi}{2} \,-\, 0.201 = 1.37 \) rad. So since \(\overline{AE} \) and \(\overline{BF} \) are parallel, we have \(\;\angle\,ABC = \angle\,DAE = 1.37 \) rad. Thus, \(\;\angle\,GBF = \pi \,-\, \angle\,ABC = \pi \,-\, 1.37 = 1.77 \) rad. Hence, \[\nonumber L ~=~ 2\;(\overparen{DE} \;+\; EF \;+\; \overparen{FG}) ~=~ 2\;(5\;(1.37) \;+\; 6\,\sqrt{6} \;+\; 8\;(1.77)) ~=~ \boxed{71.41~\text{cm}} ~.\]
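The formula \(s = r\theta\), together with the degrees-to-radians pitfall of Example 4.4, can be packaged as a small helper (a sketch of my own, not from the text):

```python
import math

def arc_length(r, angle, degrees=False):
    """Length s = r·θ of the arc cut off by a central angle θ ≥ 0.

    The angle must be in radians; pass degrees=True to convert first
    (using it blindly on a degree measure is the error in Example 4.4).
    """
    theta = math.radians(angle) if degrees else angle
    if theta < 0:
        # Matching the convention above: only nonnegative central angles.
        raise ValueError("use a nonnegative central angle for arc length")
    return r * theta

assert abs(arc_length(2, 1.2) - 2.4) < 1e-12                 # Example 4.3
assert abs(arc_length(10, 41, degrees=True) - 7.156) < 1e-3  # Example 4.4
assert abs(arc_length(5, 0.4) - 2.0) < 1e-12                 # Example 4.5, inverted
```

Note that `arc_length(10, 41)` without `degrees=True` returns 410, reproducing the impossible answer the text warns about.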
Pseudorapidity dependence of the anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-11) We present measurements of the elliptic ($\mathrm{v}_2$), triangular ($\mathrm{v}_3$) and quadrangular ($\mathrm{v}_4$) anisotropic azimuthal flow over a wide range of pseudorapidities ($-3.5< \eta < 5$). The measurements ... Correlated event-by-event fluctuations of flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2016-10) We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus–nucleus collisions, obtained for the first time using a new analysis method based on ... Centrality dependence of $\mathbf{\psi}$(2S) suppression in p-Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}}$ = 5.02 TeV (Springer, 2016-06) The inclusive production of the $\psi$(2S) charmonium state was studied as a function of centrality in p-Pb collisions at the nucleon-nucleon center of mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV at the CERN LHC. The ... Transverse momentum dependence of D-meson production in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-03) The production of prompt charmed mesons D$^0$, D$^+$ and D$^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb–Pb collisions at the centre-of-mass energy per nucleon pair, $\sqrt{s_{\rm NN}}$ of ... Multiplicity and transverse momentum evolution of charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions at the LHC (Springer, 2016) We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle difference, $\mathrm{\Delta}\eta$ and $\mathrm{\Delta}\varphi$ respectively. ... 
Charge-dependent flow and the search for the chiral magnetic wave in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2016-04) We report on measurements of a charge-dependent flow using a novel three-particle correlator with ALICE in Pb–Pb collisions at the LHC, and discuss the implications for observation of local parity violation and the Chiral ... Pseudorapidity and transverse-momentum distributions of charged particles in proton-proton collisions at $\mathbf{\sqrt{\textit s}}$ = 13 TeV (Elsevier, 2016-02) The pseudorapidity ($\eta$) and transverse-momentum ($p_{\rm T}$) distributions of charged particles produced in proton-proton collisions are measured at the centre-of-mass energy $\sqrt{s}$ = 13 TeV. The pseudorapidity ... Differential studies of inclusive J/$\psi$ and $\psi$(2S) production at forward rapidity in Pb-Pb collisions at $\mathbf{\sqrt{{\textit s}_{_{NN}}}}$ = 2.76 TeV (Springer, 2016-05) The production of J/$\psi$ and $\psi(2S)$ was measured with the ALICE detector in Pb-Pb collisions at the LHC. The measurement was performed at forward rapidity ($2.5 < y < 4 $) down to zero transverse momentum ($p_{\rm ... Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02) The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ... Anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2016-04) We report the first results of elliptic ($v_2$), triangular ($v_3$) and quadrangular flow ($v_4$) of charged particles in Pb--Pb collisions at $\sqrt{s_{_{\rm NN}}}=$ 5.02 TeV with the ALICE detector at the CERN Large ...
The real power of $O$ notation is in formulas like this. By systematically applying valid rules of manipulation, we can harness our intuition in a fully rigorous way, without much effort (such as that of dealing with explicit inequalities). (Of course, if we were allergic to $O$ notation, or not sufficiently confident about what manipulations are valid, we could immediately “translate” any $O$-containing expression we encounter into some other notation involving sets of functions with appropriate quantifiers, but then we'd lose much of the power of $O$ notation.) Let's look at this example in detail. In this case, we start with the expression $$\frac{2a_0(2N) \ln(2N) + O(2N)}{2a_0N\ln N+O(N)}.$$(Presumably it has been specified earlier that in this context when we use $O()$ we are thinking of $N \to \infty$, not say $N \to 0$.) First, to get an intuitive sense, we “THINK BIG”: for large $N$, both the numerator and denominator are dominated by their first terms, i.e. they grow as $2a_0(2N)\ln(2N)$ and $2a_0N\ln N$ respectively. As we see a common $2a_0N$ factor, we can first divide both numerator and denominator by it. Our original expression thus becomes:$$\frac{2a_0(2N) \ln(2N) + O(2N)}{2a_0N\ln N+O(N)}\stackrel{(1)}{=} \frac{2\ln(2N) + O(2N)/(2a_0N)}{\ln N + O(N)/(2a_0N)}$$Here, the manipulation $(1)$ is simply algebra, treating the $O()$ expressions as black-boxes. Now, in the numerator we have the term $O(2N)/(2a_0N)$. It may be obvious that this is $O(1)$, and you could indeed prove it easily, but you can also use standard manipulation rules that have been proved previously. In this case, for example, we can use a combination of two such rules, which together give $f(n)O(g(n)) = O(f(n)g(n))$. (Recall that with equations containing $O$, all uses of the “$=$” sign are one-way: so we could not directly use $O(f(n)g(n)) = f(n)O(g(n))$ which is $(9.27)$ in the book.) 
So we can write $$O(2N)/(2a_0N) = \frac{1}{2a_0N} O(2N) = O\left(\frac{1}{2a_0N}2N\right) = O\left(\frac1{a_0}\right) = O(O(1)) = O(1).$$(See $(9.25)$ in the book for $O(O(f(n))) = O(f(n))$.) Similarly, for the second term in the denominator, we get $O(N)/(2a_0N) = O(1)$. So our expression becomes:$$\frac{2\ln(2N) + O(2N)/(2a_0N)}{\ln N + O(N)/(2a_0N)}\stackrel{(2)}{=} \frac{2\ln(2N) + O(1)}{\ln N + O(1)}$$using the manipulations above. This is the first equality in the question. Immediately, we can write the numerator $2\ln(2N) + O(1)$ as $2\ln N + 2\ln 2 + O(1) = 2\ln N + O(1)$, again using manipulations like $c = O(1)$ and $O(1) + O(1) = O(1)$ so $c + O(1) = O(1)$. So our expression is actually:$$\frac{2\ln(2N) + O(1)}{\ln N + O(1)}\stackrel{(3)}{=} \frac{2\ln N + O(1)}{\ln N + O(1)}$$ Now, faced with a division, it may be tempting to give up $O$-manipulation and reason with inequalities. But let's have a bit more faith in mindless manipulation. :-) We can repeat the trick of “dividing”, this time by $\ln N$, to rewrite as$$\frac{2\ln N + O(1)}{\ln N + O(1)}\stackrel{(4)}{=} \frac{2 + O(1)/\ln N}{1 + O(1)/\ln N}\stackrel{(5)}{=} \frac{2 + O\left(\frac{1}{\ln N}\right)}{1 + O\left(\frac{1}{\ln N}\right)}$$just as before. 
Finally, we can use the rule$$\frac{1}{1 + O(f(n))} = 1 + O(f(n)) \quad \text{if $f(n) = o(1)$}$$(I can't find this one in a book right now, but it's obviously useful to have in your toolkit and you can prove it yourself using $\frac{1}{1-x} = 1 + x + x^2 + x^3 + \dots$), to rewrite the expression as$$\begin{align}\frac{2 + O\left(\frac{1}{\ln N}\right)}{1 + O\left(\frac{1}{\ln N}\right)}&\stackrel{(6)}{=}\left(2 + O\left(\frac{1}{\ln N}\right)\right)\left(1 + O\left(\frac{1}{\ln N}\right)\right) \\&\stackrel{(7)}{=} 2 + 2O\left(\frac{1}{\ln N}\right) + O\left(\frac{1}{\ln N}\right) + O\left(\frac{1}{\ln N}\right)O\left(\frac{1}{\ln N}\right) \\&\stackrel{(8)}{=} 2 + O\left(\frac{1}{\ln N}\right)\end{align}$$where I skipped a few steps in the last equation because surely you get the point by now. Above, I just wrote everything out in detail for illustration, but in reality, after a bit of practice with valid $O$-manipulations (of course you have to be careful not to do anything that's not justified, but this is similar to when you learned during middle-school algebra not to write $(x+y)^2 = x^2 + y^2$ say), you can become comfortable and don't have to work things out so laboriously: you'll be able to write directly, say,$$\frac{2a_0(2N) \ln(2N) + O(2N)}{2a_0N\ln N+O(N)}= \frac{2\ln N + O(1)}{\ln N+O(1)}= \frac{2 + O\left(1/\ln N\right)}{1 + O\left(1/\ln N\right)}= 2 + O\left(\frac{1}{\log N}\right)$$pretty much as in the question. There is nothing sloppy or non-rigorous here; you're applying valid rules that have been proved. Being able to quickly do such calculations for asymptotics is the main benefit of using $O$ notation.
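One can also see the final asymptotic numerically by substituting concrete stand-ins for the hidden $O(\cdot)$ terms (the constants `a0`, `c1`, `c2` below are arbitrary assumptions chosen only for illustration; the conclusion $2 + O(1/\ln N)$ should hold for any bounded choices):

```python
import math

# Arbitrary stand-ins: model O(2N) as c1·2N and O(N) as c2·N.
a0, c1, c2 = 0.7, 3.0, -1.5

def ratio(N):
    num = 2 * a0 * (2 * N) * math.log(2 * N) + c1 * 2 * N
    den = 2 * a0 * N * math.log(N) + c2 * N
    return num / den

# The derivation says ratio(N) = 2 + O(1/ln N): so the deviation from 2,
# scaled up by ln N, should stay bounded as N grows.
for N in (10**3, 10**6, 10**9, 10**12):
    assert abs(ratio(N) - 2) * math.log(N) < 20
```

Changing `c1` and `c2` changes the hidden constant but not the $O(1/\ln N)$ decay rate, which is exactly what the notation is tracking.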
What a metric should necessarily do is give me a way to associate a frame invariant number with a given pair of spacetime events. Now, if I use a higher rank tensor field (say, for example, a tensor field of rank 3: $g_{\mu \nu \rho}$) then I can also certainly produce a frame invariant scalar out of a given displacement $\vec{A}$ in this trivial way: $ I := g_{\mu \nu \rho} A^{\mu}A^{\nu}A^{\rho}$. Since the components of the higher rank tensor and that of the displacement vector are going to transform in a covariant and a contravariant manner respectively, the cooked-up quantity is certainly a scalar. Another crucial property, which I think has more direct physical content than the previous one, is that between two inertial frames, at least one such transformation should exist that leaves at least one metric invariant, i.e., there should exist at least one combination of transformation matrix $\displaystyle\frac{\partial{x^{\alpha}}}{\partial{x^{\mu '}}}$ and metric $g_{\alpha \beta \gamma}$ that satisfies the following equation: $\displaystyle\frac{\partial{x^{\alpha}}}{\partial{x^{\mu '}}}\displaystyle\frac{\partial{x^{\beta}}}{\partial{x^{\nu '}}}\displaystyle\frac{\partial{x^{\gamma}}}{\partial{x^{\rho '}}} g_{\alpha \beta \gamma} - \delta_{\mu '}^{\alpha}\delta_{\nu '}^{\beta}\delta_{\rho '}^{\gamma} g_{\alpha \beta \gamma} =0$ The last thing I can think of that can put a restriction on the choice of a tensor as a metric is the existence of a possibility of finding a metric compatible symmetric connection field. 
Following the usual procedure of finding the expression for a metric compatible symmetric connection field, I reached the following condition (unlike the case of the usual rank-two metric, where we get a full-fledged expression) for the connection in terms of the metric: $g_{\nu \rho k_1} \Gamma^{k_1}_{\mu \lambda} - g_{\lambda \mu k_2} \Gamma^{k_2}_{\rho \nu} = \displaystyle\frac{1}{2} (\partial_{\nu}g_{\rho \lambda \mu} + \partial_{\rho}g_{\lambda \mu \nu} - \partial_{\lambda}g_{\mu \nu \rho} - \partial_{\mu}g_{\nu \rho \lambda}) $ My question: if it is possible to satisfy the two highlighted conditions, then can we use such rank-3 tensor fields (or even higher-rank tensors, with similarly produced conditions) as metric fields? PS: This is NOT a proposal for a new homemade theory of gravity (or that of anything for that matter) but rather it is just that I am trying to understand why a rank-two tensor is used in General Relativity as the metric. Thank you.
If $\alpha$ and $\beta$ are the roots of the equation $\lambda (x^{2} +x) +x+5=0,$ and $\lambda_{1}$ and $\lambda_{2}$ are two values of $\lambda$ for which $\alpha$, $\beta$ are connected by the relation $\frac{\alpha}{\beta}+\frac{\beta}{\alpha} =4,$ then what is the value of $\frac{\lambda_{1}}{\lambda_{2}}+\frac{\lambda_{2}}{\lambda_{1}}?$
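One route (sketched here with a numerical verification, not part of the problem statement): by Vieta's formulas, $\alpha+\beta = -(\lambda+1)/\lambda$ and $\alpha\beta = 5/\lambda$; the relation $\frac{\alpha}{\beta}+\frac{\beta}{\alpha}=4$ is $(\alpha+\beta)^2 = 6\alpha\beta$, which reduces to $\lambda^2 - 28\lambda + 1 = 0$, so $\lambda_1+\lambda_2 = 28$, $\lambda_1\lambda_2 = 1$, and $\frac{\lambda_1}{\lambda_2}+\frac{\lambda_2}{\lambda_1} = 28^2 - 2 = 782$.

```python
import math

# The two admissible lambdas are the roots of λ² − 28λ + 1 = 0.
l1 = (28 + math.sqrt(28**2 - 4)) / 2
l2 = (28 - math.sqrt(28**2 - 4)) / 2

# Check that for each lambda, the roots of λx² + (λ+1)x + 5 = 0
# really satisfy α/β + β/α = 4.
for lam in (l1, l2):
    disc = math.sqrt((lam + 1) ** 2 - 20 * lam)
    alpha = (-(lam + 1) + disc) / (2 * lam)
    beta = (-(lam + 1) - disc) / (2 * lam)
    assert abs(alpha / beta + beta / alpha - 4) < 1e-9

assert abs(l1 / l2 + l2 / l1 - 782) < 1e-6
```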
IMO transaction data is a better approach, because you have both sides of the trade agreeing that the price is "right." The literature tends to decompose the transaction price $P$ into a true/efficient price $P^e$ plus micro-structure noise, which I think originates from Hasbrouck '93 in the Review of Financial Studies. So you end up with something like $$P^e_t = P^e_{t-1} + \nu$$ and $$P_t = round(P^e_t + c_t Q_t, d)$$ where $\nu \sim N(0, \sigma^2_t)$, $c_t > 0$, $Q_t \in \left\{-1, 1 \right\}$, and $d$ is the tick size. Note that $c_t$ provides the spread and $Q_t$ tells you if the transaction is buyer or seller initiated (typically determined with the "Lee-Ready algorithm"). I found this particular presentation in a 2002 working paper from Engle and Russell (edit: titled Analysis of High Frequency Data); I think this is pretty standard and you can probably find a good deal of research that tries to provide $c_t = f(\cdot)$. It looks like Andersen, Bollerslev, and Diebold have a 2007 NBER working paper (edit: titled Roughing it Up: Including Jump Components in the Measurement, Modeling and Forecasting of Return Volatility) that provides a more thorough treatment of these ideas. When you're dealing with (ultra) high-frequency data you also have the problem of time to transaction. Engle has a 2000 Econometrica paper (edit: titled The Econometrics of Ultra-High-Frequency Data) in which he describes how to account for time to transaction, but he's using bid-ask midpoints, not transactions. I don't have any first-hand experience to know if using the midpoint is a bad assumption in practice, but the 2000 and 2007 papers should be a good start.
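A minimal simulation of that price decomposition (my own sketch; constant $c$ and $\sigma$ are simplifying assumptions, since the papers let $c_t$ and $\sigma_t$ vary over time):

```python
import random

def simulate_trades(n=1000, sigma=0.01, c=0.005, tick=0.01, p0=100.0, seed=7):
    """Random-walk efficient price plus a bid-ask cost c·Q, rounded
    to the tick grid: P_t = round(P^e_t + c Q_t, d)."""
    rng = random.Random(seed)
    pe, prices, sides = p0, [], []
    for _ in range(n):
        pe += rng.gauss(0.0, sigma)            # P^e_t = P^e_{t-1} + ν
        q = rng.choice((-1, 1))                # buyer/seller initiated
        p = round((pe + c * q) / tick) * tick  # discretize to the tick size
        prices.append(p)
        sides.append(q)
    return prices, sides

prices, sides = simulate_trades()
assert len(prices) == 1000 and set(sides) <= {-1, 1}
# Every simulated transaction price sits on the tick grid:
assert all(abs(p / 0.01 - round(p / 0.01)) < 1e-6 for p in prices)
```

Simulated series like this are handy for testing how much the noise terms bias a realized-volatility estimator before pointing it at real transaction data.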
1. Homework Statement From http://library.thinkquest.org/10796/index.html: "4. A 1300 kg car takes 35 s to travel around a circular road that has a radius of 40 m. ... b. What is the acceleration of the car? c. How big should the force of friction be to maintain this acceleration? d. How big should the coefficient of the force of friction be?" Given: m = 1300 kg, t = 35 s, r = 40 m. 2. Homework Equations [tex]V=\frac{2\pi r}{t}[/tex] [tex]a_c=\frac{V^2}{r}[/tex] [tex]F_c=ma_c[/tex] [tex]F_f=\mu mg[/tex] [tex]F_f=F_c[/tex] [tex]\mu=\frac{a_c}{g}=\frac{F_c}{mg}[/tex] 3. The Attempt at a Solution Plugging into the formulae is pretty elementary, so I'll spare you all that. I used the first two equations for part b, then the third for part c. Anyway, I get 1.289 m/s^2 for part b and 1675.819 N for part c, but the site's answers are 1.47 m/s^2 and 2210 N, respectively. It appears that, to have arrived at their answer for part b, they used the incorrect equation a_c=V^2/t. I'm not sure how they arrived at the answer for part c. Using either part of the last equation above (with my answers for part b or part c) gives me μ=0.132. The site's answer for part d is 0.15, but--using the same equation that I have listed above--this is consistent with its answer for part b, but not with its answer for part c. Would someone please explain what's going on here? Thanks!! P.S. You know, I feel as if, from typing this, I've pretty much assured myself that the site is wrong. (I have another fairly recent post in which I consider the same thing, and the couple of replies it's gotten seem to support the idea.) I hope that doesn't sound arrogant, but it bothers me that the site was written by high school students.
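A short script (my own check, assuming g = 9.8 m/s²) reproduces the poster's numbers and confirms that the site's part-b value matches the incorrect formula v²/t:

```python
import math

m, t, r, g = 1300.0, 35.0, 40.0, 9.8

v = 2 * math.pi * r / t   # speed around the circle
a_c = v**2 / r            # centripetal acceleration
F_c = m * a_c             # friction force needed
mu = a_c / g              # friction coefficient needed

assert abs(a_c - 1.289) < 1e-3    # poster's part b
assert abs(F_c - 1675.8) < 0.1    # poster's part c
assert abs(mu - 0.132) < 1e-3     # poster's part d
# The site's 1.47 m/s^2 matches the dimensionally wrong v²/t:
assert abs(v**2 / t - 1.47) < 5e-3
```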
Most of the permutation and combination problems we have seen count choices made without repetition, as when we asked how many rolls of three dice are there in which each die has a different value. The exception was the simplest problem, asking for the total number of outcomes when two or three dice are rolled, a simple application of the multiplication principle. Typical permutation and combination problems can be interpreted in terms of drawing balls from a box, and implicitly or explicitly the rule is that a ball drawn from the box stays out of the box. If instead each ball is returned to the box after recording the draw, we get a problem essentially identical to the general dice problem. For example, if there are six balls, numbered 1–6, and we draw three balls with replacement, the number of possible outcomes is \(6^3\). Another version of the problem does not replace the ball after each draw, but allows multiple "identical'' balls to be in the box. For example, if a box contains 18 balls numbered 1–6, three with each number, then the possible outcomes when three balls are drawn and not returned to the box is again \(6^3\). If four balls are drawn, however, the problem becomes different. Another, perhaps more mathematical, way to phrase such problems is to introduce the idea of a multiset. A multiset is like a set, except that elements may appear more than once. If \(\{a,b\}\) and \(\{b,c\}\) are ordinary sets, we say that the union \(\{a,b\}\cup\{b,c\}\) is \(\{a,b,c\}\), not \(\{a,b,b,c\}\). If we interpret these as multisets, however, we do write \(\{a,b,b,c\}\) and consider this to be different than \(\{a,b,c\}\). To distinguish multisets from sets, and to shorten the expression in most cases, we use a repetition number with each element. For example, we will write \(\{a,b,b,c\}\) as \(\{1\cdot a,2\cdot b,1\cdot c\}\). By writing \(\{1\cdot a,1\cdot b,1\cdot c\}\) we emphasize that this is a multiset, even though no element appears more than once. 
We also allow elements to be included an infinite number of times, indicated with \(\infty\) for the repetition number, like \(\{\infty\cdot a, 5\cdot b, 3\cdot c\}\). Generally speaking, problems in which repetition numbers are infinite are easier than those involving finite repetition numbers. Given a multiset \(A=\{\infty\cdot a_1,\infty\cdot a_2,\ldots,\infty\cdot a_n\}\), how many permutations of the elements of length \(k\) are there? That is, how many sequences \(x_1,x_2,\ldots,x_k\) can be formed? This is easy: the answer is \(n^k\). Now consider combinations of a multiset, that is, submultisets: Given a multiset, how many submultisets of a given size does it have? We say that a multiset \(A\) is a submultiset of \(B\) if the repetition number of every element of \(A\) is less than or equal to its repetition number in \(B\). For example, \(\{20\cdot a, 5\cdot b, 1\cdot c\}\) is a submultiset of \(\{\infty\cdot a, 5\cdot b, 3\cdot c\}\). A multiset is finite if it contains only a finite number of distinct elements, and the repetition numbers are all finite. Suppose again that \(A=\{\infty\cdot a_1,\infty\cdot a_2,\ldots,\infty\cdot a_n\}\); how many finite submultisets does it have of size \(k\)? This at first seems quite difficult, but put in the proper form it turns out to be a familiar problem. Imagine that we have \(k+n-1\) "blank spaces'', like this: Now we place \(n-1\) markers in some of these spots: This uniquely identifies a submultiset: fill all blanks up to the first \(\land\) with \(a_1\), up to the second with \(a_2\), and so on: So this pattern corresponds to the multiset \(\{1\cdot a_2,3\cdot a_3,\ldots, 1\cdot a_n\}\). Filling in the markers \(\land\) in all possible ways produces all possible submultisets of size \(k\), so there are \({k+n-1\choose n-1}\) such submultisets. Note that this is the same as \({k+n-1\choose k}\); the hard part in practice is remembering that the \(-1\) goes with the \(n\), not the \(k\). 
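The count \({k+n-1\choose n-1}\) is easy to check by brute force (a sketch, not part of the text): Python's `combinations_with_replacement` enumerates exactly the size-\(k\) submultisets of an \(n\)-element set with infinite repetition numbers.

```python
from itertools import combinations_with_replacement
from math import comb

# Count size-k submultisets of {∞·a_1, …, ∞·a_n} two ways: by direct
# enumeration and by the stars-and-bars formula C(k+n-1, n-1).
for n in range(1, 6):
    for k in range(0, 8):
        direct = sum(1 for _ in combinations_with_replacement(range(n), k))
        assert direct == comb(k + n - 1, n - 1)
```

The agreement for every small \(n\) and \(k\), including the edge case \(k=0\) (one empty submultiset), is a useful guard against the "\(-1\) goes with the \(n\)" slip mentioned above.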
$$\bullet\quad\bullet\quad\bullet$$ Summarizing the high points so far: The number of permutations of \(n\) things taken \(k\) at a time without replacement is \(\ds P(n,k)=n!/(n-k)!\); the number of permutations of \(n\) things taken \(k\) at a time with replacement is \(\ds n^k\). The number of combinations of \(n\) things taken \(k\) at a time without replacement is \({n\choose k}\); the number of combinations of \(n\) things taken \(k\) at a time with replacement is \({k+n-1 \choose k}\). $$\bullet\quad\bullet\quad\bullet$$ If \(A=\{m_1\cdot a_1, m_2\cdot a_2,\ldots,m_n\cdot a_n\}\), similar questions can be quite hard. Here is an easier special case: How many permutations of the multiset \(A\) are there? That is, how many sequences consist of \(m_1\) copies of \(a_1\), \(m_2\) copies of \(a_2\), and so on? This problem succumbs to overcounting: suppose to begin with that we can distinguish among the different copies of each \(a_i\); they might be colored differently, for example: a red \(a_1\), a blue \(a_1\), and so on. Then we have an ordinary set with \(M=\sum_{i=1}^n m_i\) elements and \(M!\) permutations. Now if we ignore the colors, so that all copies of \(a_i\) look the same, we find that we have overcounted the desired permutations. Permutations with, say, the \(a_1\) items in the same positions all look the same once we ignore the colors of the \(a_1\)s. How many of the original permutations have this property? \(m_1!\) permutations will appear identical once we ignore the colors of the \(a_1\) items, since there are \(m_1!\) permutations of the colored \(a_1\)s in a given \(m_1\) positions. So after throwing out duplicates, the number of remaining permutations is \(M!/m_1!\) (assuming the other \(a_i\) are still distinguishable). Then the same argument applies to the \(a_2\)s: there are \(m_2!\) copies of each permutation once we ignore the colors of the \(a_2\)s, so there are \(\ds {M!\over m_1!\,m_2!}\) distinct permutations. 
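All four counts in the summary can be checked against brute-force enumeration for small parameters (the choice \(n=5\), \(k=3\) is mine):

```python
from itertools import (combinations, combinations_with_replacement,
                       permutations, product)
from math import comb, factorial

n, k = 5, 3
items = range(n)

# Permutations without replacement: P(n,k) = n!/(n-k)!
assert len(list(permutations(items, k))) == factorial(n) // factorial(n - k)

# Permutations with replacement: n^k
assert len(list(product(items, repeat=k))) == n ** k

# Combinations without replacement: C(n,k)
assert len(list(combinations(items, k))) == comb(n, k)

# Combinations with replacement: C(k+n-1, k)
assert len(list(combinations_with_replacement(items, k))) == comb(k + n - 1, k)

print("all four formulas agree with enumeration for n=5, k=3")
```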
Continuing in this way, we see that the number of distinct permutations once all colors are ignored is $${M!\over m_1!\,m_2!\cdots m_n!}.$$ This is frequently written $${M\choose m_1\;\;m_2\;\ldots\; m_n},$$ called a multinomial coefficient. Here the second row has \(n\) separate entries, not a single product entry. Note that if \(n=2\) this is $${M\choose m_1\;\;m_2} = {M!\over m_1!\,m_2!}={M!\over m_1!\,(M-m_1)!}={M\choose m_1}.$$ This is easy to see combinatorially: given \(\{m_1\cdot a_1, m_2\cdot a_2\}\) we can form a permutation by choosing the \(m_1\) places that will be occupied by \(a_1\), filling in the remaining \(m_2\) places with \(a_2\). The number of permutations is the number of ways to choose the \(m_1\) locations, which is \({M\choose m_1}\). Example 1.5.1 How many solutions does \(\ds x_1+x_2+x_3+x_4=20\) have in non-negative integers? That is, how many 4-tuples \((m_1,m_2,m_3,m_4)\) of non-negative integers are solutions to the equation? We have actually solved this problem: How many submultisets of size 20 are there of the multiset \(\{\infty\cdot a_1,\infty\cdot a_2,\infty\cdot a_3,\infty\cdot a_4\}\)? A submultiset of size 20 is of the form \(\{m_1\cdot a_1,m_2\cdot a_2,m_3\cdot a_3,m_4\cdot a_4\}\) where \(\sum m_i=20\), and these are in 1–1 correspondence with the set of 4-tuples \((m_1,m_2,m_3,m_4)\) of non-negative integers such that \(\sum m_i=20\). Thus, the number of solutions is \({20+4-1\choose 20}\). This reasoning applies in general: the number of solutions to $$\sum_{i=1}^n x_i = k$$ is $${k+n-1\choose k}.$$ This immediately suggests some generalizations: instead of the total number of solutions, we might want the number of solutions with the variables \(x_i\) in certain ranges, that is, we might require that \(m_i\le x_i\le M_i\) for some lower and upper bounds \(m_i\) and \(M_i\). 
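The overcounting argument can be verified directly for a small multiset (the choice \(\{2\cdot a,3\cdot b,1\cdot c\}\) is mine):

```python
from itertools import permutations
from math import factorial

word = "aabbbc"  # the multiset {2·a, 3·b, 1·c}, so M = 6

# Count distinct arrangements by enumerating and deduplicating.
distinct = len(set(permutations(word)))

# The multinomial coefficient M!/(m_1! m_2! ... m_n!).
M = len(word)
multinomial = factorial(M)
for ch in set(word):
    multinomial //= factorial(word.count(ch))

print(distinct, multinomial)  # both 60 = 6!/(2!·3!·1!)
```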
Finite upper bounds can be difficult to deal with; if we require that \(0\le x_i\le M_i\), this is the same as counting the submultisets of \(\{M_1\cdot a_1,M_2\cdot a_2,\ldots,M_n\cdot a_n\}\). Lower bounds are easier to deal with. Example 1.5.2 Find the number of solutions to \(\ds x_1+x_2+x_3+x_4=20\) with \(x_1\ge 0\), \(x_2\ge 1\), \(x_3\ge 2\), \(x_4\ge -1\). We can transform this to the initial problem in which all lower bounds are 0. The solutions we seek to count are the solutions of this altered equation: $$ x_1+(x_2-1)+(x_3-2)+(x_4+1)=18.$$ If we set \(y_1=x_1\), \(y_2=x_2-1\), \(y_3=x_3-2\), and \(y_4=x_4+1\), then \((x_1,x_2,x_3,x_4)\) is a solution to this equation if and only if \((y_1,y_2,y_3,y_4)\) is a solution to $$ y_1+y_2+y_3+y_4=18,$$ and moreover the bounds on the \(x_i\) are satisfied if and only if \(y_i\ge 0\). Since the number of solutions to the last equation is \(18+4-1\choose 18\), this is also the number of solutions to the original equation.
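Example 1.5.2 can be confirmed by brute force; the substitution predicts \({18+4-1\choose 18}={21\choose 18}=1330\) solutions.

```python
from math import comb

# Count solutions of x1+x2+x3+x4 = 20 with
# x1 >= 0, x2 >= 1, x3 >= 2, x4 >= -1.
count = 0
for x1 in range(0, 21):
    for x2 in range(1, 21):
        for x3 in range(2, 21):
            x4 = 20 - x1 - x2 - x3
            if x4 >= -1:
                count += 1

print(count, comb(21, 18))  # both 1330
```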
We know that: $$\frac 1{2!}+\frac 1{3!}+\frac 1{4!}+\frac 1{5!}+\frac 1{6!}+\cdots =e-2\approx0.71828$$ But I am getting the above sum as $1,$ as shown below: \begin{align} S & = \frac 1{2!}+\frac 1{3!}+\frac 1{4!}+\frac 1{5!}+\frac 1{6!}+\cdots \\[10pt] & = \frac 1{2!} + \frac {3-2}{3!} +\fra... Perhaps the chaos-theory and chaotic-systems tags should be merged? The second has no official description but I can hardly imagine what the difference should be. The list of proposals on the 2016 thread that are still open: Proposal to rename the "adjoint" tag Proposal to join the "chaos theory" and "chaotic systems" tags Proposal to change the name of the "divisors" tag Proposal to make the "compactification" tag a synonym of the "compactness" tag Pr... Please merge chaotic-systems and chaos-theory. I am active in these tags and the respective field, and I fail to see a meaningful difference between them, let alone a need for a distinction. This already got 10 upvotes last year. I propose creating a relation-composition tag and making it a synonym of function-composition. I think that if composition of functions is important enough to have its own tag, then so is composition of relations. But it would probably be better to have both topics under the same tag. We definitel... Maple shows that $$ \sum_{0 \le k \le m} \frac{2^k}{(k+1)} = -i/2\pi -2\,{2}^{m} \left( 1/4\,{\it \Phi} \left( 2,1,m \right) -1 /4\,{m}^{-1}-1/2\, \left( m+1 \right) ^{-1} \right) $$ where $\Phi$ denotes Lerch's transcendent. How can we prove this? I have checked a few books but haven't got a cl... maybe it's clearer than having relations just as a synonym and mentioned in the tag-wiki...?)
I'm asked to prove that the following limit doesn't exist: $$\lim_{x \to 1} \left (\tan \left(\frac{\pi x}{2} \right ) \lfloor x \rfloor \right )$$ My attempt was to break the case into one-sided limits. So, for example, I could prove that $$\lim_{x \to 1^+} \tan \left(\frac{\pi x}{2} \right ) = -\infty$$ and $$\lim_{x \to 1^-} \tan \left(\frac{\pi x}{2} \right ) = \infty$$ and then show that $$\lim_{x \to 1^+} \left (\tan \left(\frac{\pi x}{2} \right ) \lfloor x \rfloor \right ) \neq \lim_{x \to 1^-} \left (\tan \left(\frac{\pi x}{2} \right ) \lfloor x \rfloor \right )$$ using the product rule for infinite limits. The only problem is that I'm not allowed to use the theorem about the limit of a composition of functions ($g(t)=\pi t /2$ and $f(x)=\tan{x}$) nor any theorem related to the limit of a continuous function when proving $\lim_{x \to 1^+} \tan \left(\frac{\pi x}{2} \right ) = -\infty$. I can however use the fact that $\lim_{x \to \frac{\pi}{2}^+}\tan x=-\infty$ and $\lim_{x \to \frac{\pi}{2}^-}\tan x=\infty$. Is there any way I can prove it using the basic limit arithmetic rules?
That's a great question! What you are asking about is one of the missing links between classical and quantum gravity. On their own, the Einstein equations, $ G_{\mu\nu} = 8 \pi G T_{\mu\nu}$, are local field equations and do not contain any topological information. At the level of the action principle, $$ S_{\mathrm{eh}} = \int_\mathcal{M} d^4 x \, \sqrt{-g} \, \mathbf{R} $$ the term we generally include is the Ricci scalar $ \mathbf{R} = \mathrm{Tr}[ R_{\mu\nu} ] $, which depends only on the first and second derivatives of the metric and is, again, a local quantity. So the action does not tell us about topology either, unless you're in two dimensions, where the Euler characteristic is given by the integral of the Ricci scalar: $$ \int d^2 x \, \sqrt{-g} \, \mathcal{R} = \chi $$ (modulo some numerical factors). So gravity in two dimensions is entirely topological. This is in contrast to the 4D case, where the Einstein-Hilbert action appears to contain no topological information. This should cover your first question. All is not lost, however. One can add topological degrees of freedom to 4D gravity by adding terms corresponding to various topological invariants (Chern-Simons, Nieh-Yan and Pontryagin). For instance, the Chern-Simons contribution to the action looks like: $$ S_{cs} = \int d^4 x \frac{1}{2} \left(\epsilon_{ab} {}^{ij}R_{cdij}\right)R_{abcd} $$ Here is a very nice paper by Jackiw and Pi for the details of this construction. There's plenty more to be said about topology and general relativity. Your question only scratches the surface. But there's a goldmine underneath! I'll let someone else tackle your second question. Short answer is "yes".
A Belyi-extender (or dessinflateur) $\beta$ of degree $d$ is a quotient of two polynomials with rational coefficients \[ \beta(t) = \frac{f(t)}{g(t)} \] with the special properties that for each complex number $c$ the polynomial equation of degree $d$ in $t$ \[ f(t)-c g(t)=0 \] has $d$ distinct solutions, except perhaps for $c=0$ or $c=1$, and, in addition, we have that \[ \beta(0),\beta(1),\beta(\infty) \in \{ 0,1,\infty \} \] Let’s take for instance the power maps $\beta_n(t)=t^n$. For every $c$ the degree $n$ polynomial $t^n - c = 0$ has exactly $n$ distinct solutions, except for $c=0$, when there is just one. And, clearly, we have that $0^n=0$, $1^n=1$ and $\infty^n=\infty$. So, $\beta_n$ is a Belyi-extender of degree $n$. A cute observation being that if $\beta$ is a Belyi-extender of degree $d$, and $\beta’$ is an extender of degree $d’$, then $\beta \circ \beta’$ is again a Belyi-extender, this time of degree $d.d’$. That is, Belyi-extenders form a monoid under composition! In our example, $\beta_n \circ \beta_m = \beta_{n.m}$. So, the power-maps are a sub-monoid of the Belyi-extenders, isomorphic to the multiplicative monoid $\mathbb{N}_{\times}$ of strictly positive natural numbers. In their paper Quantum statistical mechanics of the absolute Galois group, Yuri I. Manin and Matilde Marcolli say they use the full monoid of Belyi-extenders to act on all Grothendieck’s dessins d’enfant. But, they attach properties to these Belyi-extenders which they don’t have, in general. That’s fine, as they foresee in Remark 2.21 of their paper that the construction works equally well for any suitable sub-monoid, as long as this sub-monoid contains all power-map extenders. I’m trying to figure out what the maximal mystery sub-monoid of extenders is satisfying all the properties they need for their proofs. But first, let us see what Belyi-extenders have to do with dessins d’enfant. 
Look at all complex solutions of $f(t)=0$ and label them with a black dot (and add a black dot at $\infty$ if $\beta(\infty)=0$). Now, look at all complex solutions of $f(t)-g(t)=0$ and label them with a white dot (and add a white dot at $\infty$ if $\beta(\infty)=1$). Now comes the fun part. Because $\beta$ has exactly $d$ pre-images for all real numbers $\lambda$ in the open interval $(0,1)$ (and $\beta$ is continuous), we can connect the black dots with the white dots by $d$ edges (the pre-images of the open interval $(0,1)$), giving us a $2$-coloured graph. For the power-maps $\beta_n(t)=t^n$, we have just one black dot at $0$ (being the only solution of $t^n=0$), and $n$ white dots at the $n$-th roots of unity (the solutions of $t^n-1=0$). Any $\lambda \in (0,1)$ has as its $n$ pre-images the numbers $\zeta_i.\sqrt[n]{\lambda}$ with $\zeta_i$ an $n$-th root of unity, so we get here as picture an $n$-star. Here for $n=5$: This dessin should be viewed on the 2-sphere, with the antipodal point of $0$ being $\infty$, so projecting from $\infty$ gives a homeomorphism between the 2-sphere and $\mathbb{C} \cup \{ \infty \}$. To get all information of the dessin (including possible dots at infinity) it is best to slice the sphere open along the real segments $(\infty,0)$ and $(1,\infty)$ and flatten it to form a ‘diamond’, with the upper triangle corresponding to the closed upper semisphere and the lower triangle to the open lower semisphere. In the picture above, the right hand side is the dessin drawn in the diamond, and this representation will be important when we come to the action of extenders on more general Grothendieck dessins d’enfant. Okay, let’s try to get some information about the monoid $\mathcal{E}$ of all Belyi-extenders. What are its invertible elements? 
Well, we’ve seen that the degree of a composition of two extenders is the product of their degrees, so invertible elements must have degree $1$, so are automorphisms of $\mathbb{P}^1_{\mathbb{C}} - \{ 0,1,\infty \} = S^2-\{ 0,1,\infty \}$ permuting the set $\{ 0,1,\infty \}$. They form the symmetric group $S_3$ on $3$ letters and correspond to the Belyi-extenders \[ t,~1-t,~\frac{1}{t},~\frac{1}{1-t},~\frac{t-1}{t},~\frac{t}{t-1} \] You can compose these units with an extender to get another extender of the same degree where the roles of $0,1$ and $\infty$ are changed. For example, if you want to colour all your white dots black and the black dots white, you compose with the unit $1-t$. Manin and Marcolli use this and claim that you can transform any extender $\eta$ to an extender $\gamma$ by composing with a unit, such that $\gamma(0)=0, \gamma(1)=1$ and $\gamma(\infty)=\infty$. That’s fine as long as your original extender $\eta$ maps $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, but usually a Belyi-extender only maps into $\{ 0,1,\infty \}$. Here are some extenders of degree three (taken from Melanie Wood’s paper Belyi-extending maps and the Galois action on dessins d’enfants): with dessin $5$ corresponding to the Belyi-extender \[ \beta(t) = \frac{t^2(t-1)}{(t-\frac{4}{3})^3} \] with $\beta(0)=0=\beta(1)$ and $\beta(\infty) = 1$. So, a first property of the mystery Manin-Marcolli monoid $\mathcal{E}_{MMM}$ must surely be that all its elements $\gamma(t)$ map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, for they use this property a number of times, for instance to construct a monoid map \[ \mathcal{E}_{MMM} \rightarrow M_2(\mathbb{Z})^+ \qquad \gamma \mapsto \begin{bmatrix} d & m-1 \\ 0 & 1 \end{bmatrix} \] where $d$ is the degree of $\gamma$ and $m$ is the number of black dots in the dessin (or white dots for that matter). Further, they seem to believe that the dessin of any Belyi-extender must be a 2-coloured tree. 
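One can check by machine that the six degree-one maps listed above are closed under composition; a sketch with sympy (the use of `cancel` to normalize rational functions is my own device):

```python
from sympy import symbols, cancel

t = symbols('t')
units = [t, 1 - t, 1/t, 1/(1 - t), (t - 1)/t, t/(t - 1)]

# Compose every pair f(g(t)) and reduce to canonical form; the results
# should again be exactly the six units, witnessing a group of order 6.
composites = set()
for f in units:
    for g in units:
        composites.add(cancel(f.subs(t, g)))

print(len(composites))  # 6: the units form a group isomorphic to S3
```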
Already last time we’ve encountered a Belyi-extender $\zeta(t) = \frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with its dessin. But then, you may argue, this extender sends all of $0,1$ and $\infty$ to $0$, so it cannot belong to $\mathcal{E}_{MMM}$. Here’s a trick to construct Belyi-extenders from Belyi-maps $\beta : \mathbb{P}^1 \rightarrow \mathbb{P}^1$, defined over $\mathbb{Q}$ and having the property that there are rational points in the fibers over $0,1$ and $\infty$. Let’s take an example, the ‘monstrous dessin’ corresponding to the congruence subgroup $\Gamma_0(2)$, with map $\beta(t) = \frac{(t+256)^3}{1728 t^2}$. As it stands, $\beta$ is not a Belyi-extender because it does not map $1$ into $\{ 0,1,\infty \}$. But we have that \[ -256 \in \beta^{-1}(0),~\infty \in \beta^{-1}(\infty),~\text{and}~512,-64 \in \beta^{-1}(1) \] (the last one follows from $(t+256)^3-1728 t^2=(t-512)^2(t+64)$). We can now pre-compose $\beta$ with the automorphism (defined over $\mathbb{Q}$) sending $0$ to $-256$, $1$ to $-64$ and fixing $\infty$ to get a Belyi-extender \[ \gamma(t) = \frac{(192t)^3}{1728(192t-256)^2} \] which satisfies $\gamma(0)=0,~\gamma(1)=1$ and $\gamma(\infty)=\infty$ (so belongs to $\mathcal{E}_{MMM}$) with the same dessin, which is not a tree. That is, $\mathcal{E}_{MMM}$ can at best consist only of those Belyi-extenders $\gamma(t)$ that map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$ and such that their dessin is a tree. Let me stop, for now, by asking for a reference (or counterexample) to perhaps the most startling claim in the Manin-Marcolli paper, namely that any 2-coloured tree can be realised as the dessin of a Belyi-extender!
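The arithmetic behind this example is easy to verify with sympy: the factorization of the fiber over $1$, and the three prescribed values of $\gamma$.

```python
from sympy import symbols, expand, limit, oo

t = symbols('t')

# The fiber over 1: (t+256)^3 - 1728 t^2 = (t-512)^2 (t+64)
lhs = expand((t + 256)**3 - 1728*t**2)
rhs = expand((t - 512)**2 * (t + 64))
print(lhs - rhs)  # 0

# The pre-composed extender gamma(t) = (192t)^3 / (1728 (192t-256)^2)
gamma = (192*t)**3 / (1728*(192*t - 256)**2)
print(gamma.subs(t, 0), gamma.subs(t, 1), limit(gamma, t, oo))
# 0, 1, oo
```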
I’m trying to get into the latest Manin-Marcolli paper Quantum Statistical Mechanics of the Absolute Galois Group on how to create from Grothendieck’s dessins d’enfant a quantum system, generalising the Bost-Connes system to the non-Abelian part of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$. In doing so they want to extend the action of the multiplicative monoid $\mathbb{N}_{\times}$ by power maps on the roots of unity to the action of a larger monoid on all dessins d’enfants. Here they use an idea, originally due to Jordan Ellenberg, worked out by Melanie Wood in her paper Belyi-extending maps and the Galois action on dessins d’enfants. To grasp this, it’s best to remember what dessins have to do with Belyi maps, which are maps defined over $\overline{\mathbb{Q}}$ \[ \pi : \Sigma \rightarrow \mathbb{P}^1 \] from a Riemann surface $\Sigma$ to the complex projective line (aka the 2-sphere), ramified only in $0,1$ and $\infty$. The dessin determining $\pi$ is the 2-coloured graph on the surface $\Sigma$ with as black vertices the pre-images of $0$, white vertices the pre-images of $1$ and these vertices are joined by the lifts of the closed interval $[0,1]$, so the number of edges is equal to the degree $d$ of the map. Wood considers a very special subclass of these maps, which she calls Belyi-extender maps, of the form \[ \gamma : \mathbb{P}^1 \rightarrow \mathbb{P}^1 \] defined over $\mathbb{Q}$ with the additional property that $\gamma$ maps $\{ 0,1,\infty \}$ into $\{ 0,1,\infty \}$. The upshot being that post-compositions of Belyi’s with Belyi-extenders $\gamma \circ \pi$ are again Belyi maps, and if two Belyi’s $\pi$ and $\pi’$ lie in the same Galois orbit, then so must all $\gamma \circ \pi$ and $\gamma \circ \pi’$. The crucial Ellenberg-Wood idea is then to construct “new Galois invariants” of dessins by checking existing and easily computable Galois invariants on the dessins of the Belyi’s $\gamma \circ \pi$. 
For this we need to know how to draw the dessin of $\gamma \circ \pi$ on $\Sigma$ if we know the dessins of $\pi$ and of the Belyi-extender $\gamma$. Here’s the procedure. Here, the middle dessin is that of the Belyi-extender $\gamma$ (which in this case is the power map $t \rightarrow t^4$) and the upper graph is the unmarked dessin of $\pi$. One has to replace each of the black-white edges in the dessin of $\pi$ by the dessin of the extender $\gamma$, but one must be very careful in respecting the orientations on the two dessins. In the upper picture just one edge is replaced and one has to do this for all edges in a compatible manner. Thus, a Belyi-extender $\gamma$ inflates the dessin $\pi$ with factor the degree of $\gamma$. For this reason I prefer to call them dessinflateurs, a contraction of dessin+inflator. In her paper, Melanie Wood says she can separate dessins for which all known Galois invariants were the same, such as these two dessins, by inflating them with a suitable Belyi-extender and computing the monodromy group of the inflated dessin. This monodromy group is the permutation group generated by two elements, the first one giving the permutation on the edges obtained by walking counter-clockwise around all black vertices, the second by walking around all white vertices. For example, by labelling the edges of $\Delta$, its monodromy is generated by the permutations $(2,3,5,4)(1,6)(8,10,9)$ and $(1,3,2)(4,7,5,8)(9,10)$, and GAP tells us that the order of this group is $1814400$. For $\Omega$ the generating permutations are $(1,2)(3,6,4,7)(8,9,10)$ and $(1,2,4,3)(5,6)(7,9,8)$, giving an isomorphic group. Let’s inflate these dessins using the Belyi-extender $\gamma(t) = -\frac{27}{4}(t^3-t^2)$ with corresponding dessin. It took me a couple of attempts before I got the inflated dessins correct (as I knew from Wood that this simple extender would not separate the dessins). 
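The two GAP computations quoted above can be reproduced with sympy's permutation groups (this is my own re-check, not the post's GAP session); note that sympy uses 0-based points, so each cycle below is the post's cycle shifted down by one:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Dessin Delta: (2,3,5,4)(1,6)(8,10,9) and (1,3,2)(4,7,5,8)(9,10), 0-indexed:
d1 = Permutation([[1, 2, 4, 3], [0, 5], [7, 9, 8]], size=10)
d2 = Permutation([[0, 2, 1], [3, 6, 4, 7], [8, 9]], size=10)
delta = PermutationGroup(d1, d2)

# Dessin Omega: (1,2)(3,6,4,7)(8,9,10) and (1,2,4,3)(5,6)(7,9,8), 0-indexed:
o1 = Permutation([[0, 1], [2, 5, 3, 6], [7, 8, 9]], size=10)
o2 = Permutation([[0, 1, 3, 2], [4, 5], [6, 8, 7]], size=10)
omega = PermutationGroup(o1, o2)

print(delta.order(), omega.order())  # both 1814400, as in the post
```

Both groups have order $10!/2$, i.e. each is the alternating group $A_{10}$ in its action on the ten edges.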
Inflated $\Omega$ on top: Both dessins give a monodromy group of order $35838544379904000000$. Now we’re ready to do serious work. Melanie Wood uses in her paper the extender $\zeta(t)=\frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with associated dessin and says she can now separate the inflated dessins by the order of their monodromy groups. She gets for the inflated $\Delta$ the order $19752284160000$ and for inflated $\Omega$ the order $214066877211724763979841536000000000000$. It’s very easy to make mistakes in these computations, so probably I did something horribly wrong, but I get for both $\Delta$ and $\Omega$ that the order of the monodromy group of the inflated dessin is $214066877211724763979841536000000000000$. I’d be very happy if someone were able to spot the error!
For exercises 1 - 6, find the volume generated when the region between the two curves is rotated around the given axis. Use both the shell method and the washer method. Use technology to graph the functions and draw a typical slice by hand. 1) [T] Over the curve of \( y=3x,\) \(x=0,\) and \( y=3\) rotated around the \(y\)-axis. 2) [T] Under the curve of \( y=3x,\) \(x=0\), and \( x=3\) rotated around the \(y\)-axis. Answer: \(V = 54π\) units\(^3\) 3) [T] Over the curve of \( y=3x,\) \(x=0\), and \( y=3\) rotated around the \(x\)-axis. 4) [T] Under the curve of \( y=3x,\) \(x=0,\) and \( x=3\) rotated around the \(x\)-axis. Answer: \(V = 81π\) units\(^3\) 5) [T] Under the curve of \( y=2x^3,\) \(x=0,\) and \( x=2\) rotated around the \(y\)-axis. 6) [T] Under the curve of \( y=2x^3,\) \(x=0,\) and \( x=2\) rotated around the \(x\)-axis. Answer: \(V = \frac{512π}{7}\) units\(^3\) For exercises 7 - 16, use shells to find the volumes of the given solids. Note that the rotated regions lie between the curve and the \(x\)-axis and are rotated around the \(y\)-axis. 7) \( y=1−x^2,\) \(x=0,\) and \( x=1\) 8) \( y=5x^3,\) \(x=0\), and \( x=1\) Answer: \(V = 2π\) units\(^3\) 9) \( y=\dfrac{1}{x},\) \(x=1,\) and \( x=100\) 10) \( y=\sqrt{1−x^2},\) \(x=0\), and \( x=1\) Answer: \(V= \frac{2π}{3}\) units\(^3\) 11) \( y=\dfrac{1}{1+x^2},\) \(x=0\), and \( x=3\) 12) \( y=\sin x^2,\) \(x=0\), and \( x=\sqrt{π}\) Answer: \(V= 2π\) units\(^3\) 13) \( y=\dfrac{1}{\sqrt{1−x^2}},\) \(x=0\), and \( x=\frac{1}{2}\) 14) \( y=\sqrt{x},\) \(x=0\), and \( x=1\) Answer: \(V = \frac{4π}{5}\) units\(^3\) 15) \( y=(1+x^2)^3,\) \(x=0\), and \( x=1\) 16) \( y=5x^3−2x^4,\) \(x=0\), and \( x=2\) Answer: \(V= \frac{64π}{3}\) units\(^3\) For exercises 17 - 26, use shells to find the volume generated by rotating the regions between the given curve and \( y=0\) around the \(x\)-axis. 
17) \( y=\sqrt{1−x^2},\) \(x=0\), and \( x=1\) 18) \( y=x^2,\) \(x=0\), and \( x=2\) Answer: \(V = \frac{32π}{5}\) units\(^3\) 19) \( y=e^x,\) \(x=0\), and \( x=1\) 20) \( y=\ln(x),\) \(x=1\), and \( x=e\) Answer: \(V= π(e−2)\) units\(^3\) 21) \( x=\dfrac{1}{1+y^2},\) \(y=1\), and \( y=4\) 22) \( x=\dfrac{1+y^2}{y},\) \(y=0\), and \( y=2\) Answer: \(V= \frac{28π}{3}\) units\(^3\) 23) \( x=\cos y,\) \(y=0\), and \( y=π\) 24) \( x=y^3−4y^2,\) \(x=−1\), and \( x=2\) Answer: \(V= \frac{84π}{5}\) units\(^3\) 25) \( x=ye^y,\) \(x=−1\), and \( x=2\) 26) \( x=e^y\cos y,\) \(x=0\), and \( x=π\) Answer: \(V = e^ππ^2\) units\(^3\) For exercises 27 - 36, find the volume generated when the region between the curves is rotated around the given axis. 27) \( y=3−x,\) \(y=0,\) \(x=0\), and \( x=2\) rotated around the \(y\)-axis. 28) \( y=x^3\), \(y=0\), \(x=0\), and \( y=8\) rotated around the \(y\)-axis. Answer: \( V=\frac{64π}{5}\) units\(^3\) 29) \( y=x^2,\) \(y=x,\) rotated around the \(y\)-axis. 30) \( y=\sqrt{x},\) \(x=0\), and \( x=1\) rotated around the line \( x=2.\) Answer: \(V=\frac{28π}{15}\) units\(^3\) 31) \( y=\dfrac{1}{4−x},\) \(x=1,\) and \( x=2\) rotated around the line \( x=4\). 32) \( y=\sqrt{x}\) and \( y=x^2\) rotated around the \(y\)-axis. Answer: \(V=\frac{3π}{10}\) units\(^3\) 33) \( y=\sqrt{x}\) and \( y=x^2\) rotated around the line \( x=2\). 34) \( x=y^3,\) \(y=\dfrac{1}{x},\) \(x=1\), and \( y=2\) rotated around the \(x\)-axis. Answer: \(V= \frac{52π}{5}\) units\(^3\) 35) \( x=y^2\) and \( y=x\) rotated around the line \( y=2\). 36) [T] Left of \( x=\sin(πy)\), right of \( y=x\), around the \(y\)-axis. Answer: \(V \approx 0.9876\) units\(^3\) For exercises 37 - 44, use technology to graph the region. Determine which method you think would be easiest to use to calculate the volume generated when the function is rotated around the specified axis. Then, use your chosen method to find the volume. 37) [T] \( y=x^2\) and \( y=4x\) rotated around the \(y\)-axis. 
38) [T] \( y=\cos(πx),\) \(y=\sin(πx),\) \(x=\frac{1}{4}\), and \( x=\frac{5}{4}\) rotated around the \(y\)-axis. Answer: \(V = 3\sqrt{2}\) units\(^3\) 39) [T] \( y=x^2−2x,\) \(x=2,\) and \( x=4\) rotated around the \(y\)-axis. 40) [T] \( y=x^2−2x,\) \(x=2,\) and \( x=4\) rotated around the \(x\)-axis. Answer: \(V= \frac{496π}{15}\) units\(^3\) 41) [T] \( y=3x^3−2,\) \(y=x\), and \( x=2\) rotated around the \(x\)-axis. 42) [T] \( y=3x^3−2,\) \(y=x\), and \( x=2\) rotated around the \(y\)-axis. Answer: \( V = \frac{398π}{15}\) units\(^3\) 43) [T] \( x=\sin(πy^2)\) and \( x=\sqrt{2}y\) rotated around the \(x\)-axis. 44) [T] \( x=y^2,\) \(x=y^2−2y+1\), and \( x=2\) rotated around the \(y\)-axis. Answer: \( V \approx 15.9074\) units\(^3\) For exercises 45 - 51, use the method of shells to approximate the volumes of some common objects, which are pictured in accompanying figures. 45) Use the method of shells to find the volume of a sphere of radius \( r\). 46) Use the method of shells to find the volume of a cone with radius \( r\) and height \( h\). Answer: \(V = \frac{1}{3}πr^2h\) units\(^3\) 47) Use the method of shells to find the volume of the ellipse \( (x^2/a^2)+(y^2/b^2)=1\) rotated around the \(x\)-axis. 48) Use the method of shells to find the volume of a cylinder with radius \( r\) and height \( h\). Answer: \(V= πr^2h\) units\(^3\) 49) Use the method of shells to find the volume of the donut created when the circle \( x^2+y^2=4\) is rotated around the line \( x=4\). 50) Consider the region enclosed by the graphs of \( y=f(x),\) \(y=1+f(x),\) \(x=0,\) \(y=0,\) and \( x=a>0\). What is the volume of the solid generated when this region is rotated around the \(y\)-axis? Assume that the function is defined over the interval \( [0,a]\). Answer: \( V=πa^2\) units\(^3\) 51) Consider the function \( y=f(x)\), which decreases from \( f(0)=b\) to \( f(1)=0\). 
Set up the integrals for determining the volume, using both the shell method and the disk method, of the solid generated when this region, with \( x=0\) and \( y=0\), is rotated around the \(y\)-axis. Prove that both methods approximate the same volume. Which method is easier to apply? (Hint: Since \( f(x)\) is one-to-one, there exists an inverse \( f^{−1}(y)\).)
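Several of the listed answers, and the shell/disk agreement asked for in the last exercise, can be verified symbolically; in the final check the sample decreasing function \(f(x)=b(1-x)\) with \(b=2\) is my own choice, not part of the exercise set.

```python
from sympy import symbols, pi, integrate, simplify, Rational

x, y = symbols('x y', nonnegative=True)

# Exercise 2: under y = 3x, 0 <= x <= 3, shells about the y-axis.
V2 = integrate(2*pi*x*(3*x), (x, 0, 3))
assert simplify(V2 - 54*pi) == 0

# Exercise 4: same region about the x-axis, disks.
V4 = integrate(pi*(3*x)**2, (x, 0, 3))
assert simplify(V4 - 81*pi) == 0

# Exercise 6: y = 2x^3, 0 <= x <= 2, about the x-axis, disks.
V6 = integrate(pi*(2*x**3)**2, (x, 0, 2))
assert simplify(V6 - Rational(512, 7)*pi) == 0

# Exercise 51 with f(x) = b(1-x), b = 2: shells vs. disks about the y-axis.
b = 2
shell = integrate(2*pi*x*b*(1 - x), (x, 0, 1))   # shell method
disk = integrate(pi*(1 - y/b)**2, (y, 0, b))     # disk method, x = f^{-1}(y)
assert simplify(shell - disk) == 0

print("all volume checks pass")
```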
The problem is pretty simple, and its solution is too. But for some reason my suggested solution with functions doesn't work, and I can't figure out why: The first car leaves from point $A$ towards point $B$, and the second car leaves from $B$ to $A$. The distance between $A$ and $B$ is $720$ km. The second car is twice as fast as the first car. After $3$ hours of driving the distance left between the cars is $270$ km. Find their speed. The straightforward way to solve it is: Let $x =$ speed of the first car. $$\begin{align}3x + 270 +3\times2x &= 720\\ x&=50\end{align}$$ I tried to solve it with functions and 2 equations, but that doesn't work. What am I missing here: $f$ is the distance function for the first car, and $g$ for the second car. $$\begin{align}f(t) &= xt\\ g(t) &= 720 -2xt\end{align}$$ 1st equation: $f(t)=g(t) = $meeting distance for a specific $t$ $$\begin{align}xt &= 720 -2xt\\ 3xt &= 720\\ x &= 720/3t\end{align}$$ 2nd equation: After $3$ hours the distance between them is $270$ km $$\begin{align}g(t+3) -f(t+3) &= 270\\ 720 -2x(t+3) -x(t+3) &= 270\\ 450 &= 3x(t+3)\end{align}$$ $$\begin{align}450 &= \frac{720(t+3)}{t}\\ &= \frac{(720t+2160)}{t} \\ 450t &= 720t+2160\\ 270t &= -2160\\ t &= -8\end{align}$$ Which is of course the wrong answer. How would you solve it with the distance functions? It's just more intuitive for me to get to the solution this way, even though it takes longer. Thanks!
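A quick symbolic check of the straightforward solution (at $t=3$ the two distances covered plus the remaining $270$ km account for the full $720$ km):

```python
from sympy import symbols, solve

x = symbols('x', positive=True)

# After 3 hours: the first car has gone 3x, the second 3*(2x),
# and 270 km remain between them.
speed = solve(3*x + 270 + 3*(2*x) - 720, x)
print(speed)  # [50]
```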
I'm going to focus on one aspect of the question that I think has not been fully appreciated: How can one best convey to beginners—without algebra—the flipping of denominator fractions... what would convince a novice ... Many of the ingredients of this answer are already present in some of the other answers to this question, but are rearranged here in a ... Questions of this nature were addressed in some part by New Math in the late 50s and 60s. They were also used as a point of criticism by its opponents. Most notable is Morris Kline's book Why Johnny Can't Add, in which he writes relevantly in Chapter 1: Evidently the class is not doing too well and so the teacher tries a simpler question. "Is 7 a number?"... The other answers are good, and/but some of those points are reinforced by a diagnostic that has ever-more been important to me: is intention being communicated? And a different, more linguistic point mentioned in other answers: toleration of ambiguity, allowing context (where intent is adequately clear) to disambiguate. Thus, yes, it is perverse to entrap/... Here are a few comments, and then an attempt at two succinct answers. In particular, I will try to answer this using a measurement interpretation, and then again with an equal sharing interpretation. I prefer the former, but include the latter for completeness. Comments: Some of the key terms in unpacking this are measurement, equal sharing, and missing ... Instead of presenting "cancelling" as an arbitrary rule (which is often how students have seen it — or at least how they learned it — before), explain it in a way that shows what's actually going on. So, if we have $\frac{5x + 5}{5}$, instead of just "cancelling the 5's" (which sounds kind of arbitrary and mysterious), write out several steps, like: $$\frac{... I think there's a countervailing issue that the book you're describing is trying to deal with. 
I teach college students, so I don't know whether the particular approach it's taking is age appropriate for a 6th grader, but it is trying to solve a real problem. What I would do about this is introduce them as separate topics, probably spaced far apart in time. Let ... One reason is that mathematics was not handed down by the gods fully formed and unambiguous. It is a human construction over a very long time, and mathematical notation even more so. Any time a notation doesn't "work" for all possible contexts, it's an opportunity for us to talk about this human side of mathematics, and about the pros and cons of notation ... I find this diagram helpful when relating the two: It comes from a model curriculum unit on Rates and Ratios for 6th graders (you can see them all here after registering), and I have found this particular graphic very helpful with math content professional development with 6th grade math teachers. I have no experience teaching fractions, but I think moving away from using the divide symbol makes things easier. It doesn't get used at university level (but exponents start being used, so there are still two notations). I would do $$3\div\frac{2}{7}= \frac{3}{\frac{2}{7}}= \frac{3}{\frac{2}{7}} \times \frac{7}{7}= \frac{3\times 7}{\frac{2}{7}\... First, I would have them really understand equivalent fractions. There are a lot of ways to write the number represented by the fraction $\frac23$. We can call it $\frac23,\frac46,\frac{20}{30},\frac{-2}{-3},$ etc., etc. Similarly, there are many ways to write the number represented by the fraction $\frac45$: as $\frac8{10},\frac{20}{25},$ etc. Once that's ... The obvious (to me) source of difficulty is that fractions are just plain complicated, more so than almost anything else in elementary education. You have to operate with a pair of numbers, instead of a single one, and you have to keep the order straight. Adding is quite complicated in its own right. Things are further complicated by rules about least ... 
When manipulating fractions, students quickly get comfortable with the idea that to combine two fractions they have to manipulate them to get the denominators the same. Multiplying by 2/2 or 3/3, etc. doesn't change the fraction's value, obviously. I use the method below to unfraction (<< is there a word for this?) the denominator - $$\frac{3}{\frac{2}{7}}=\...

Comparing fractions only works when the whole is the same size. Here are a few examples to help the 7-year-old understand what happens when things aren't the same size: 1/2 will be greater than 2/4 when you compare 1/2 of a watermelon and 2/4 of an apple. 1/2 will be less than 2/4 when you compare 2/4 of a watermelon and 1/2 of an apple. If you take 1/2 ...

Manipulation: The mixed fraction form makes subtraction and negative values easier to parse. $2\frac{1}{2} - 1\frac{1}{4}$ is substantially easier to read, write, and understand at this stage than either $2+\frac{1}{2} - (1+\frac{1}{4})$ or $2+\frac{1}{2} - 1-\frac{1}{4}$. Students at this age have not encountered the distributive property (or even brackets?...

I would say that $\frac12$ is a number and that "$\frac12$" represents (or names) that number. I should therefore not say things like "the denominator of $\frac12$ is 2", because the denominator depends not only on the number but on its name. I confess, however, to being rather careless about such things most of the time.

I feel like I always post the same thing in these threads, but this again sounds like an issue of blocking vs interleaving. In this case, the textbook may have started interleaving different problems a little bit too early, but generally it should be introduced earlier than feels intuitively right. That's because the algorithms for $\frac{1}{6}\times 90$ ...

Rigid criteria for simplification seem to me largely a bad idea if they are not motivated by contextual considerations.
The idea that $\sqrt{2}/2$ should be preferred to $1/\sqrt{2}$ struck me as unmotivated when I was a student, and now seems to me problematic to motivate. The situation is different with respect to writing rational numbers or rational ...

There are many reasons why fractions are so hard for students to learn. Mostly, they're taught gibberish and assessed according to such gibberish. Example 1: You are a 12-year-old student who has learned that "a fraction is part of a whole, such as part of a pizza". So when you look at $\frac{2}{3}\times\frac{7}{5}$, you now must multiply pizza slices by ...

You can find a discussion of these considerations in: Chapin, S., & Johnson, A. (2006). Math matters: Grade K-8 understanding the math you teach. Sausalito: Math Solutions Publications. Specifically, pp. 99-131, Chapter 5, Fractions. Let me just quote from Teaching Fractions on p. 131: Fractional numbers are a rich part of mathematics. However, ...

"Fraction Bars" may be helpful to you. It's a virtual space for exploring bar-shaped fraction models. Fraction Bars Page. On that page you'll find a link to launch the software as well as a software guide which includes some appropriate activities. A version of this software has been used in classes to prepare 7th grade teachers in Georgia to (among other ...

This depends on what meaning you give to fractions. One approach is to give fractions the meaning of shares or portions of some unit, e.g. a rectangle. Multiplication mostly corresponds to the word "of". Examples: $\frac{5}{8}=\frac{5}{8}\cdot 1$ is five eighths of the unit. $\frac{2}{3}\cdot\frac{3}{4}$ means two thirds of $\frac{3}{4}$. Show a rectangle, ...

This is strictly context dependent. Consider the following sentence: Is a banana a fruit? Well, is it? The answer, of course, depends on what exactly you're talking about. It can be interpreted in two ways: 1. Is the physical object known as a banana a fruit? In this case, yes, it obviously is. But consider this interpretation: 2. Is the string of ...
First, a disclaimer: I am a mathematician, and not a math educator (at least, not beyond tutoring, and teaching algebra, statistics and some calculus as a grad student); thus, my answer is going to be colored by the experience of someone who has learned a lot more math than I have taught. The answer depends on what you mean by "higher math". If by higher ...

Not sure about paper references. One reason why people don't understand fractions is that they are seemingly illogical. You score one basket out of three: 1/3. A little while later you try again and score 1/2. Clearly you have scored 2/5 of your shots? In many ways this is the correct answer. So why shouldn't $\frac {1}{3}+\frac {1}{2}=\frac {2}{5}$? People ...

I suspect the main trick will be going from division by unit fractions to division by non-unit fractions. I would begin by considering that you have 4 groups of a certain size, and you want to know how many groups you'll have if you make a new set of groups $\frac{1}{7}$ the size of the current groups. This can be simplified further by considering each group ...

I would say you're doing your student a disservice if you were to seriously disallow a negative denominator. A fraction is simply a ratio of two integers (where the denominator is not allowed to be zero). I disagree with @yoniLavi that we never need such fractions. Since division by negative numbers makes sense, such a fraction with a negative denominator ...

You could try keeping 30 as the denominator throughout, that is, observing that $\frac{1}{3} = \frac{10}{30}$ and $\frac{1}{5} = \frac{6}{30}$, so the portion of the class that doesn't study chemistry is $$\frac{10}{30} + \frac{6}{30} = \frac{16}{30},$$ and the portion that does study chemistry is $$\frac{30}{30} - \frac{16}{30} = \frac{14}{30}.$$ Once they ...
I think this problem is much easier to handle if you refer throughout to the number of students who study each subject, rather than the fraction, only expressing the final answer as a fraction at the very end. That is: $\frac{1}{3}$ of the $30$ students study Biology — this is $10$ students, because $\frac{1}{3} \times 30 = 10$. $\frac{1}{5}$ of the $30$ ...

We have two cookies. We divide them into pieces of 1/2 cookie each and end up with four pieces. Thus 2 divided by 1/2 equals 4. We have two cookies. We take 1/2 of the collection, which is one cookie. Thus 2 multiplied by 1/2 equals 1. Each of those examples can be criticized. In the first example, one could claim that it shows that 2 divided by 4 equals 1/...

If you have some apples and chop them into seven pieces, how many pieces do you have? Well, seven times as many as you had apples, of course! What if you pair those pieces two-by-two? Well, okay, you now have half as many pairs. ("X divided by Y" means "how many Ys can you get out of what you started with".) (Did it matter that we used seven? Well, no, ...
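As an aside (not from any of the answers above), several of the manipulations discussed in this thread can be checked mechanically with Python's `fractions` module:

```python
from fractions import Fraction

# 3 ÷ (2/7): "multiplying by 7/7" turns the denominator into a whole number
assert Fraction(3) / Fraction(2, 7) == Fraction(3 * 7, 2) == Fraction(21, 2)

# Equivalent fractions: many names for the same number
assert Fraction(2, 3) == Fraction(4, 6) == Fraction(20, 30) == Fraction(-2, -3)

# The class problem: if 1/3 study one subject and 1/5 another,
# the remaining portion studies chemistry
chem = 1 - (Fraction(1, 3) + Fraction(1, 5))
assert chem == Fraction(7, 15) == Fraction(14, 30)
```

Since `Fraction` keeps exact rational values, each equality above is checked symbolically rather than with floating-point rounding.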
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs.

Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather than ace the class. I know 2 years ago it apparently mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Čech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric.

It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building).

It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad.

I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; basically it boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of.

@TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students.

In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...

"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$." This took me way longer than it should have.

Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector (x,y), and how will the magnitude of that vector be changed?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2.

Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$.

Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.

Hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$.

I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D

Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them.

One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set). theorem ...
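As an aside (not from the chat), the multiplication rule quoted above can be spot-checked for associativity by representing $a+b\sqrt{\delta}$ as a coefficient pair and multiplying with the stated formula:

```python
# Sketch: elements a + b*sqrt(delta) as coefficient pairs (a, b),
# multiplied with the rule from the chat:
#   (a,b) ⊗ (c,d) = (a*c + b*d*delta, b*c + a*d)
def mul(p, q, delta):
    a, b = p
    c, d = q
    return (a * c + b * d * delta, b * c + a * d)

delta = 2
alpha, beta, gamma = (1, 2), (3, 4), (5, 6)

left = mul(mul(alpha, beta, delta), gamma, delta)   # (α ⊗ β) ⊗ γ
right = mul(alpha, mul(beta, gamma, delta), delta)  # α ⊗ (β ⊗ γ)
assert left == right  # both equal (215, 164)
```

This is only a numerical spot check on sample values, of course, not the ring-theoretic proof discussed above, but it is a quick sanity check on the formula.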
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...)

Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field.

It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field?

Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's incompleteness theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement.

Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or from which CH can be derived; thus, if your set of axioms contains those, then you can decide the truth value of CH in that system.

@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put it another way: an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?

If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.

My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book.

The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...

Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus, given $s$ transcendental, minimising $|P(s)|$ will proceed as follows:

The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.

In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$

Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework.

There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'...

and neither Rolle's theorem nor the mean value theorem needs the axiom of choice.

Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anyways, I need to finish that book to comment further.

typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome

nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
Let $BG$ be the classifying space of a topological group $G$.

If $G$ is any compact group and $H$ is a closed subgroup of $G$, then the inclusion map $i:H\rightarrow G$ induces \begin{equation*} G/H\rightarrow BH\rightarrow BG. \end{equation*} Is this a fiber bundle?

If $G$ is any compact group and $H$ is a closed subgroup of $G$, then the inclusion map $i:H\rightarrow G$ induces \begin{equation*} G/H\rightarrow BH\rightarrow BG. \end{equation*} Is this a fibration?

If $G$ is any compact group and $N$ is a closed normal subgroup of $G$, then the quotient map $\pi :G\rightarrow G/N$ induces \begin{equation*} BN\rightarrow BG\rightarrow B\left( G/N\right). \end{equation*} Is this a fiber bundle?

If $G$ is any compact group and $N$ is a closed normal subgroup of $G$, then the quotient map $\pi :G\rightarrow G/N$ induces \begin{equation*} BN\rightarrow BG\rightarrow B\left( G/N\right). \end{equation*} Is this a fibration?
Image Dimensions

Latest revision as of 10:55, 20 May 2013

Disclaimer: This page's content is not official and not guaranteed to be free of mistakes. At the moment, it's even only a sum of personal thoughts to cast a bit of light onto Synfig's image dimensions handling.

Describing the fields of the Canvas Properties Dialog

The user accesses the image dimensions in the Canvas Properties Dialog.

The 'Other' tab

Here some properties can simply be locked (so that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well).

The 'Image' tab

Obviously here the image dimensions can be set. There seem to be basically three groups of fields to edit:

The on-screen size(?): The fields Width and Height tell synfigstudio how many pixels the image shall cover at a zoom level of 100%.

The physical size: The physical width and height should tell how big the image is on some physical media. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this on exporting/rendering images.

The mysterious Image Area: Given as two points (upper-left and lower-right corner) which also define the image span (Pythagoras: $\text{span}=\sqrt{\Delta x^2 + \Delta y^2}$). The unit seems to be not pixels but units, which are at 60 pixels each (see Unit System). If the ratio of the image size and image area dimensions is off, circles, for example, will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is being rendered. This might be useful when one has to deal with non-square output pixels.

Effects of the Image Area

(Image: Non_square_pixels.png. Note the different scales on the rulers: although the image is clearly 400x300 pixels big on screen, the rulers say it is only 400x200, which is what the Image Area values say.)

(Image: Non_square.gif, source file Non square.sifz. Note how the rectangle becomes a square and an elongated rectangle again as it rotates.)

Somehow the image area setting seems to be saved when copy&pasting between images, see also bug #2116947.
Possible intended effects of out-of-ratio image areas

As mentioned above, different ratios might be needed when the output needs to be specified in pixels, but those pixels are not squares. That might happen for several kinds of media, such as videos encoded in some PAL formats or for DVDs. For further reading, look at Wikipedia.

Still, it is probably consensus that the image, as shown on screen while editing, should look as closely as possible like when viewed by the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles.

Feature wishlist to simplify working across documents

See also the explanation by dooglus on the synfig-dev mailing list.
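As an illustrative aside (not part of the original wiki page; the function and its names are hypothetical, and only the 60-pixels-per-unit figure and the 400x300 vs 400x200 example come from the text above), the way a mismatched image size and image area produce non-square pixels can be sketched:

```python
# Hypothetical sketch of how image size vs. image area yields a
# pixel aspect ratio. 60 px per unit is the default mentioned above.
PX_PER_UNIT = 60

def pixel_aspect(width_px, height_px, top_left, bottom_right):
    # top_left, bottom_right: corners of the image area, in units
    span_w_px = abs(bottom_right[0] - top_left[0]) * PX_PER_UNIT
    span_h_px = abs(top_left[1] - bottom_right[1]) * PX_PER_UNIT
    # scale factor per axis: area pixels per on-screen pixel
    sx = span_w_px / width_px
    sy = span_h_px / height_px
    return sx / sy  # 1.0 means square pixels, circles stay circles
```

For the example in the screenshots, a 400x300 on-screen image whose image area spans 400x200 pixels' worth of units gives an aspect of 1.5, i.e. visibly non-square pixels.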
How To Make Deep Learning Models That Don't Suck

So you've watched all the tutorials. You now understand how a neural network works. You've built a cat and dog classifier. You tried your hand at a half-decent character-level RNN. You're just one pip install tensorflow away from building the terminator, right?

Wrong.

A very important part of deep learning is finding the right hyperparameters. These are numbers that the model cannot learn. In this article, I'll walk you through some of the most common (and important) hyperparameters that you'll encounter on your road to the #1 spot on the Kaggle leaderboards. In addition, I'll also show you some powerful algorithms that can help you choose your hyperparameters wisely.

Hyperparameters in Deep Learning

Hyperparameters can be thought of as the tuning knobs of your model. A fancy 7.1 Dolby Atmos home theatre system with a subwoofer that produces bass beyond the human ear's audible range is useless if you set your AV receiver to stereo. Similarly, an inception_v3 with a trillion parameters won't even get you past MNIST if your hyperparameters are off.

So now, let's take a look at the knobs to tune before we get into how to dial in the right settings.

Learning Rate

Arguably the most important hyperparameter, the learning rate, roughly speaking, controls how fast your neural net "learns". So why don't we just amp this up and live life in the fast lane? Not that simple. Remember, in deep learning, our goal is to minimize a loss function. If the learning rate is too high, our loss will start jumping all over the place and never converge. And if the learning rate is too small, the model will take way too long to converge, as illustrated above.

Momentum

Since this article focuses on hyperparameter optimization, I'm not going to explain the whole concept of momentum. But in short, the momentum constant can be thought of as the mass of a ball that's rolling down the surface of the loss function.
The heavier the ball, the quicker it falls. But if it's too heavy, it can get stuck or overshoot the target.

Dropout

If you're sensing a theme here, I'm now going to direct you to Amar Budhiraja's article on dropout. But as a quick refresher, dropout is a regularization technique proposed by Geoff Hinton that randomly sets activations in a neural network to 0 with a probability of \(p\). This helps prevent neural nets from overfitting (memorizing) the data as opposed to learning it. \(p\) is a hyperparameter.

Architecture — Number of Layers, Neurons Per Layer, etc.

Another (fairly recent) idea is to make the architecture of the neural network itself a hyperparameter. Although we generally don't make machines figure out the architecture of our models (otherwise AI researchers would lose their jobs), some new techniques like Neural Architecture Search have implemented this idea with varying degrees of success. If you've heard of AutoML, this is basically how Google does it: make everything a hyperparameter and then throw a billion TPUs at the problem and let it solve itself.

But for the vast majority of us who just want to classify cats and dogs with a budget machine cobbled together after a Black Friday sale, it's about time we figured out how to make those deep learning models actually work.

Hyperparameter Optimization Algorithms

Grid Search

This is the simplest possible way to get good hyperparameters. It's literally just brute force. Try it in a notebook

The Algorithm: Try out a bunch of hyperparameters from a given set of hyperparameters, and see what works best.

The Pros: It's easy enough for a fifth grader to implement. Can be easily parallelized.

The Cons: As you probably guessed, it's insanely computationally expensive (as all brute force methods are).

Should I use it: Probably not. Grid search is terribly inefficient. Even if you want to keep it simple, you're better off using random search.

Random Search

It's all in the name — random search searches.
Randomly. Try it in a notebook

The Algorithm: Try out a bunch of random hyperparameters from a uniform distribution over some hyperparameter space, and see what works best.

The Pros: Can be easily parallelized. Just as simple as grid search, but with a bit better performance, as illustrated below:

The Cons: While it gives better performance than grid search, it is still just as computationally intensive.

Should I use it: If trivial parallelization and simplicity are of utmost importance, go for it. But if you can spare the time and effort, you'll be rewarded big time by using Bayesian optimization.

Bayesian Optimization

Unlike the other methods we've seen so far, Bayesian optimization uses knowledge of previous iterations of the algorithm. With grid search and random search, each hyperparameter guess is independent. But with Bayesian methods, each time we select and try out different hyperparameters, the surrogate inches toward perfection.

The ideas behind Bayesian hyperparameter tuning are long and detail-rich. So to avoid too many rabbit holes, I'll give you the gist here. But be sure to read up on Gaussian processes and Bayesian optimization in general, if that's the sort of thing you're interested in.

Remember, the reason we're using these hyperparameter tuning algorithms is that it's infeasible to actually evaluate multiple hyperparameter choices individually. For example, let's say we wanted to find a good learning rate manually. This would involve setting a learning rate, training your model, evaluating it, selecting a different learning rate, training your model from scratch again, re-evaluating it, and so the cycle continues. The problem is, "training your model" can take up to days (depending on the complexity of the problem) to finish. So you would only be able to try a few learning rates by the time the paper submission deadline for the conference turns up. And what do you know, you haven't even started playing with the momentum. Oops.
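Before diving into the Bayesian machinery, the grid and random search strategies described earlier are worth seeing in code. Here is a minimal sketch (my own, not code from this article; `objective` is a hypothetical stand-in for "train the model and return its validation loss"):

```python
import itertools
import random

def objective(lr, momentum):
    # Toy stand-in for "train a model, return validation loss";
    # the true optimum here is lr=0.01, momentum=0.9.
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2

# Grid search: brute-force every combination from fixed candidate lists.
lrs = [0.001, 0.01, 0.1]
moms = [0.8, 0.9, 0.99]
best_grid = min(itertools.product(lrs, moms), key=lambda p: objective(*p))

# Random search: sample the same budget of points from continuous ranges
# (log-uniform for the learning rate, uniform for momentum).
random.seed(0)
candidates = [(10 ** random.uniform(-3, -1), random.uniform(0.8, 0.99))
              for _ in range(9)]
best_rand = min(candidates, key=lambda p: objective(*p))
```

Note that both approaches evaluate a fixed budget of independent guesses; neither uses the results of earlier trials to choose later ones, which is exactly the gap Bayesian optimization fills.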
The Algorithm: Bayesian methods attempt to build a function (more accurately, a probability distribution over possible functions) that estimates how good your model might be for a certain choice of hyperparameters. By using this approximate function (called a surrogate function in the literature), you don't have to go through the set, train, evaluate loop too many times, since you can just optimize the hyperparameters against the surrogate function.

As an example, say we want to minimize this function (think of it like a proxy for your model's loss function):

The surrogate function comes from something called a Gaussian process (note: there are other ways to model the surrogate function, but I'll use a Gaussian process). Like I mentioned, I won't be doing any math-heavy derivations, but here's what all that talk about Bayesians and Gaussians boils down to:

$$ \mathbb{P} (F_n(X)|X_n) = \frac{e^{-\frac12 F_n^T \Sigma_n^{-1} F_n}}{\sqrt{(2\pi)^n |\Sigma_n|}} $$

Which, admittedly, is a mouthful. But let's try to break it down.

The left-hand side tells you that a probability distribution is involved (given the presence of the fancy-looking \( \mathbb{P} \)). Looking inside the brackets, we can see that it's a probability distribution of \( F_n(X) \), which is some arbitrary function. Why? Because remember, we're defining a probability distribution over all possible functions, not just a particular one. In essence, the left-hand side says that the probability that the true function that maps hyperparameters to the model's metrics (like validation accuracy, log likelihood, test error rate, etc.) is \( F_n(X) \), given some sample data \(X_n\), is equal to whatever's on the right-hand side.

Now that we have the function to optimize, we optimize it.
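To make the surrogate idea concrete, here is a bare-bones NumPy sketch (my own, not this article's code) of Gaussian-process regression: given a few evaluated hyperparameter points, it computes the posterior mean and variance over a grid of candidates, which is what Bayesian optimization then consults to decide what to try next:

```python
import numpy as np

def rbf(a, b, length=0.3):
    # Squared-exponential (RBF) kernel between two 1-D point sets
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression: mean = K*^T K^-1 y,
    #                         var  = diag(K** - K*^T K^-1 K*)
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mean = Ks.T @ K_inv @ y_train
    var = np.diag(Kss - Ks.T @ K_inv @ Ks)
    return mean, var

# A few "expensive" evaluations of the unknown objective
x_train = np.array([0.1, 0.4, 0.9])
y_train = np.sin(3 * x_train)        # toy stand-in for model loss
x_test = np.linspace(0, 1, 50)
mean, var = gp_posterior(x_train, y_train, x_test)

# Greedy choice of the next point to try: lowest surrogate mean
x_next = x_test[np.argmin(mean)]
```

A real implementation would pick the next point by maximizing an acquisition function such as expected improvement (balancing the mean against the variance) rather than the greedy argmin used here, and would use a Cholesky solve instead of an explicit matrix inverse.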
Here's what the Gaussian process will look like before we start the optimization process:

Use your favorite optimizer of choice (the pros like maximizing expected improvement), but somehow, just follow the signs (or gradients) and before you know it, you'll end up at your local minima. After a few iterations, the Gaussian process gets better at approximating the target function:

Regardless of the method you used, you have now found the `argmin` of the surrogate function. And surprise, surprise, those arguments that minimize the surrogate function are (an estimate of) the optimal hyperparameters! Yay. The final result should look like this:

Use these "optimal" hyperparameters to do a training run on your neural net, and you should see some improvement. But you can also use this new information to redo the whole Bayesian optimization process, again, and again, and again. Feel free to run the Bayesian loop however many times you want, but be wary. You are actually computing stuff. Those AWS credits don't come for free, you know. Or do they… Try it in a notebook

The Pros: Bayesian optimization gives better results than both grid search and random search.

The Cons: It's not as easy to parallelize.

Should I Use It: In most cases, yes! The only exceptions would be if:

- You're a deep learning expert and you don't need the help of a measly approximation algorithm.
- You have access to vast computational resources and can massively parallelize grid search and random search.
- You're a frequentist/anti-Bayesian statistics nerd.

An Alternate Approach To Finding A Good Learning Rate

In all the methods we've seen so far, there's one underlying theme: automate the job of the machine learning engineer. Which is great and all; until your boss gets wind of this and decides to replace you with 4 RTX Titan cards. Huh. Guess you should have stuck to manual search. But do not despair, there is active research in the field of making researchers do less and simultaneously get paid more.
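The "expected improvement" acquisition that the pros like has a simple closed form when the surrogate is Gaussian. Here's a sketch for a minimization problem; the candidate means and standard deviations are made-up numbers standing in for what a Gaussian process posterior would supply:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    # Standard normal PDF.
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def expected_improvement(mean, std, best, xi=0.01):
    """EI for minimization: how much we expect to beat `best` at a
    point where the surrogate predicts N(mean, std**2). `xi` trades
    off exploration vs. exploitation."""
    if std <= 0.0:
        return 0.0
    z = (best - mean - xi) / std
    return (best - mean - xi) * norm_cdf(z) + std * norm_pdf(z)

# Three hypothetical candidate points (posterior mean, posterior std),
# with the best observed loss so far equal to 1.0:
candidates = [(0.5, 0.1), (0.9, 0.5), (1.2, 0.01)]
best_so_far = 1.0
scores = [expected_improvement(m, s, best_so_far) for m, s in candidates]
next_point = max(range(len(candidates)), key=lambda i: scores[i])
```

The acquisition rewards both low predicted loss (the first candidate) and high uncertainty (the second), while a confidently bad point (the third) scores near zero; you then evaluate the winner for real and refit the surrogate.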
And one of the ideas that has worked extremely well is the learning rate range test, which, to the best of my knowledge, first appeared in a paper by Leslie Smith. The paper is actually about a method for scheduling (changing) the learning rate over time. The LR (learning rate) range test was a gold nugget that the author just casually dropped on the side.

When you're using a learning rate schedule that varies the learning rate from a minimum to maximum value, such as cyclic learning rates or stochastic gradient descent with warm restarts, the author suggests linearly increasing the learning rate after each iteration from a small to a large value (say, 1e-7 to 1e-1), evaluating the loss at each iteration, and plotting the loss (or test error or accuracy) against the learning rate on a log scale. Your plot should look something like this:

As marked on the plot, you'd then set your learning rate schedule to bounce between the minimum and maximum learning rate, which are found by looking at the plot and trying to eyeball the region with the steepest gradient.

Here's a sample LR range test plot (DenseNet trained on CIFAR10) from our Colab notebook:

As a rule of thumb, if you're not doing any fancy learning rate schedule stuff, just set your constant learning rate to an order of magnitude lower than the minimum value on the plot. In this case that would be roughly 1e-2.

The coolest part about this method, other than that it works really well and spares you the time, mental effort, and compute required to find good hyperparameters with other algorithms, is that it costs virtually no extra compute. While the other algorithms, namely grid search, random search, and Bayesian optimization, require you to run a whole project tangential to your goal of training a good neural net, the LR range test is just executing a simple, regular training loop, and keeping track of a few variables along the way.
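The whole test really is just a training loop with a ramped learning rate. Here's a toy sketch on a one-parameter "model" so it runs anywhere; the quadratic loss, the LR bounds, and the exponential ramp (log-spaced so the log-x plot is evenly sampled) are all illustrative assumptions, and in a real run `loss`/`grad` would come from your network and one minibatch per iteration:

```python
import math

# Toy stand-in for a training loop: one parameter w, loss (w - 3)^2,
# plain SGD. A real range test sweeps a neural net's minibatch loss.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def lr_range_test(lr_min=1e-7, lr_max=10.0, n_iters=100):
    w = 0.0
    mult = (lr_max / lr_min) ** (1.0 / (n_iters - 1))  # exponential ramp
    lr, history = lr_min, []
    for _ in range(n_iters):
        history.append((lr, loss(w)))
        w -= lr * grad(w)              # one SGD step at the current lr
        if not math.isfinite(loss(w)) or loss(w) > 1e6:
            break                      # loss exploded: the test is over
        lr *= mult
    return history

history = lr_range_test()
# Plot loss vs. lr on a log-x axis and eyeball the steepest-descent region.
```

Running it shows the characteristic shape from the plot above: the loss barely moves at tiny learning rates, drops sharply through a sweet spot, and then climbs (or explodes) once the learning rate gets too large.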
Here's the type of convergence speed you can expect when using an optimal learning rate (from the example in the notebook):

The LR range test has been implemented by the team at fast.ai, and you should definitely take a look at their library to implement the LR range test (they call it the learning rate finder) as well as many other algorithms with ease.

For The More Sophisticated Deep Learning Practitioner

If you're interested, there's also a notebook written in pure PyTorch that implements the above. This might give you a better understanding of the behind-the-scenes training process. Check it out here.

Save Yourself The Effort

Of course, all these algorithms, as great as they are, don't always work in practice. There are many more factors to consider when training neural nets, such as how you're going to preprocess your data, define your model, and actually get a computer powerful enough to run the darn thing.

Nanonets provides easy-to-use APIs to train and deploy custom deep learning models. It takes care of all of the heavy lifting, including data augmentation, transfer learning and, yes, hyperparameter optimization! Nanonets makes use of Bayesian search on their vast GPU clusters to find the right set of hyperparameters without the need for you to worry about blowing cash on the latest graphics card or debugging an "out of bounds for axis 0" error. Once it finds the best model, Nanonets serves it on their cloud for you to test the model using their web interface or to integrate it into your program using 2 lines of code.

Say goodbye to less-than-perfect models.

Conclusion

In this article, we've talked about hyperparameters and a few methods of optimizing them. But what does it all mean? As we try harder and harder to democratize AI technology, automated hyperparameter tuning is probably a step in the right direction. It allows regular folks like you and me to build amazing deep learning applications without a math PhD.
While you could argue that making models hungry for computing power leaves the very best models in the hands of those that can afford said computing power, cloud services like AWS and Nanonets help democratize access to powerful machines, making deep learning far more accessible.

But more fundamentally, what we're actually doing here is using math to solve more math. Which is interesting not only because of how meta that sounds, but also because of how easily it can be misinterpreted. We certainly have come a long way from the era of punch cards and trace tables to an age where we optimize functions that optimize functions that optimize functions. But we are nowhere close to building machines that can "think" on their own.

And that's not discouraging, not in the least, because if humanity can do so much with so little, imagine what the future holds, when our visions become something that we can actually see.

And so we sit, on a cushioned mesh chair, staring at a blank terminal screen, every keystroke giving us a sudo superpower that can wipe the disk clean. And so we sit, we sit there all day, because the next big breakthrough might be just one pip install away.

Lazy to code? Don't want to spend on compute resources? Head over to Nanonets and start building a model now!