Every possible reaction in chemistry proceeds so as to attain stability. In physics, the alignment of an electric dipole in an external electric field, and every other physical system (at least those I study in high school), attains stability at the lowest energy state. But why is it so?

To answer your question, you should first understand when a system is most stable. First, it shouldn't have a tendency to move or change state, so it should be under equilibrium conditions, i.e. the net force should be zero. We know that $$F = - \frac{dU}{dx}$$ Putting $F=0$, we get $$\frac{dU}{dx}=0 \tag{1}$$ Second, it should be able to maintain that equilibrium condition by itself. This can be tested by displacing the system by a small distance $\delta x$. If the force on the system then becomes opposite to the direction of $\delta x$, we can say that the system has a tendency to return to its original equilibrium position. An example of this would be a ball kept at the bottom of a spherical valley. Displace the ball a little towards the right, and the net force on it acts towards the left, bringing it back to its original position. You will realise that I have just described a stable equilibrium condition. What this shows is that it is the stable equilibrium condition in which the system is most stable. From the above description, the small displacement $\delta x$ and the net extra force $\delta F$ should be in opposite directions: $$\delta F = \frac{dF}{dx} \delta x + \mathcal{O}(\delta x^2) \approx \frac{dF}{dx} \delta x$$ which gives as stability condition $$\frac{dF}{dx} < 0$$ which implies $$-\frac{d^2U}{dx^2}<0$$ $$\frac{d^2U}{dx^2}>0\tag{2}$$ From $(1)$ and $(2)$ it is evident that the graph of $U$ should have a minimum at the stable equilibrium condition, i.e. the potential energy should be minimum when a system attains maximum stability.

Roughly: because $F=-\vec{\nabla} U$, with $U$ some potential energy (it could be an effective potential energy), if you aren't at a minimum of the potential, your system isn't in equilibrium. Edit: Can you see that 1. and 2. are stable equilibria? In chemistry your effective potential energy is a function called the Gibbs free energy.

A system that is in thermal contact with its environment will tend towards both a lower energy state and a higher entropy state. Basically, the energy of the system + environment is fixed, but energy will flow between the two until they are in a state of maximum entropy. It might be more informative to ask why systems tend towards increased entropy. What happens is that all the states that the system + environment can occupy with fixed total energy have equal probability of being occupied. This is called the fundamental postulate of statistical mechanics. Now there are many such states for which the system has some particular energy $E$. The value of $E$ that corresponds to the greatest number of states is therefore the most likely.

I'll try to explain with the help of a classical example. Take the situations in the picture above. What you're interested in are the first two cases. The unstable state of equilibrium is such a state that when you slightly displace the ball, it departs from its original position. Being at the top of the hill, it has an excess of potential energy (be it gravitational, electric, etc.) that can be converted into work. However, in stable equilibrium, if you displace the ball it will always return to its original position. It was from the start in a state with minimum energy.
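To make the two conditions concrete, here is a small sketch (my own addition, with an illustrative potential of my choosing) that classifies the equilibria of $U(x) = x^4 - 2x^2$ using conditions $(1)$ and $(2)$:

```python
# A minimal sketch (not from the original answer): classify the equilibria
# of a one-dimensional potential U(x) with conditions (1) and (2) above.
# The double-well potential U(x) = x^4 - 2x^2 is an illustrative choice.
import sympy as sp

x = sp.symbols("x", real=True)
U = x**4 - 2 * x**2

dU = sp.diff(U, x)       # condition (1): dU/dx = 0 at equilibrium
d2U = sp.diff(U, x, 2)   # condition (2): d^2U/dx^2 > 0 for stability

for x0 in sp.solve(dU, x):
    kind = "stable" if d2U.subs(x, x0) > 0 else "unstable (or marginal)"
    print(f"x = {x0}: {kind} equilibrium")
# x = 0 is unstable (a local maximum of U);
# x = -1 and x = 1 are stable (minima of U).
```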
Well, chemical reactions almost always require heat (energy) to take place, and almost always release heat upon reacting, so by that logic the state in which elements are unable to keep reacting is a state with insufficient energy or, in other words, the lowest energy state (or we should probably say a "lower energy state" than the one required for reactions).

Resolving (decreasing) energy gradients to lower realized potential states is what drives every process on many levels, including evolution. www.intothecool.com Here is a super cool paper that shows the process across universal, biological and socio-economic domains: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=185965 If you look around using a constrained Google search for filetype:pdf you can find it free. That being said, the lowest potential state is likely a lattice at absolute zero with a perfectly isomorphic electromagnetic state. Technically, anything less than all matter reaching that state is just a local equilibrium point awaiting a lower (more broadly contextualized) state, i.e. ultimate heat death.
2019-09-20 08:41 Search for the $^{73}\mathrm{Ga}$ ground-state doublet splitting in the $\beta$ decay of $^{73}\mathrm{Zn}$ / Vedia, V (UCM, Madrid, Dept. Phys.) ; Paziy, V (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Swierk) ; Walters, W B (Maryland U., Dept. Chem.) ; Aprahamian, A (Notre Dame U.) ; Bernards, C (Cologne U. ; Yale U. (main)) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Bucher, B (Notre Dame U. ; LLNL, Livermore) ; Chiara, C J (Maryland U., Dept. Chem. ; Argonne, PHY) et al. The existence of two close-lying nuclear states in $^{73}$Ga has recently been experimentally determined: a $1/2^-$ spin-parity for the ground state was measured in a laser spectroscopy experiment, while a $J^{\pi} = 3/2^-$ level was observed in transfer reactions. This scenario is supported by Coulomb excitation studies, which set a limit for the energy splitting of 0.8 keV. [...] 2017 - 13 p. - Published in: Phys. Rev. C 96 (2017) 034311

2019-09-20 08:41 Search for shape-coexisting $0^+$ states in $^{66}$Ni from lifetime measurements / Olaizola, B (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Warsaw) ; Poves, A (Madrid, Autonoma U.) ; Nowacki, F (Strasbourg, IPHC) ; Aprahamian, A (Notre Dame U.) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Cal-González, J (UCM, Madrid, Dept. Phys.) ; Ghiţa, D (Bucharest, IFIN-HH) ; Köster, U (Laue-Langevin Inst.) et al. The lifetime of the $0_3^+$ state in $^{66}$Ni, two neutrons below the $N=40$ subshell gap, has been measured. The transition $B(E2;0_3^+ \rightarrow 2_1^+)$ is one of the most hindered E2 transitions in the Ni isotopic chain and it implies that, unlike $^{68}$Ni, there is a spherical structure at low excitation energy. [...] 2017 - 6 p. - Published in: Phys. Rev. C 95 (2017) 061303

2019-09-17 07:00 Laser spectroscopy of neutron-rich tin isotopes: A discontinuity in charge radii across the $N=82$ shell closure / Gorges, C (Darmstadt, Tech. Hochsch.) ; Rodríguez, L V (Orsay, IPN) ; Balabanski, D L (Bucharest, IFIN-HH) ; Bissell, M L (Manchester U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Garcia Ruiz, R F (Leuven U. ; CERN ; Manchester U.) ; Georgiev, G (Orsay, IPN) ; Gins, W (Leuven U.) ; Heylen, H (Heidelberg, Max Planck Inst. ; CERN) et al. The change in mean-square nuclear charge radii $\delta \left\langle r^{2} \right\rangle$ along the even-$A$ tin isotopic chain $^{108-134}$Sn has been investigated by means of collinear laser spectroscopy at ISOLDE/CERN using the atomic transitions $5p^2\,{}^1S_0 \rightarrow 5p6s\,{}^1P_1$ and $5p^2\,{}^3P_0 \rightarrow 5p6s\,{}^3P_1$. With the determination of the charge radius of $^{134}$Sn and corrected values for some of the neutron-rich isotopes, the evolution of the charge radii across the $N=82$ shell closure is established. [...] 2019 - 7 p. - Published in: Phys. Rev. Lett. 122 (2019) 192502

2019-09-17 07:00 Radioactive boron beams produced by isotope online mass separation at CERN-ISOLDE / Ballof, J (CERN ; Mainz U., Inst. Kernchem.) ; Seiffert, C (CERN ; Darmstadt, Tech. U.) ; Crepieux, B (CERN) ; Düllmann, Ch E (Mainz U., Inst. Kernchem. ; Darmstadt, GSI ; Helmholtz Inst., Mainz) ; Delonca, M (CERN) ; Gai, M (Connecticut U. LNS Avery Point Groton) ; Gottberg, A (CERN) ; Kröll, T (Darmstadt, Tech. U.) ; Lica, R (CERN ; Bucharest, IFIN-HH) ; Madurga Flores, M (CERN) et al. We report on the development and characterization of the first radioactive boron beams produced by the isotope mass separation online (ISOL) technique at CERN-ISOLDE. Despite the long history of the ISOL technique, which exploits thick targets, boron beams have up to now not been available. [...] 2019 - 11 p. - Published in: Eur. Phys. J. A 55 (2019) 65

2019-09-17 07:00 Inverse odd-even staggering in nuclear charge radii and possible octupole collectivity in $^{217,218,219}\mathrm{At}$ revealed by in-source laser spectroscopy / Barzakh, A E (St. Petersburg, INP) ; Cubiss, J G (York U., England) ; Andreyev, A N (York U., England ; JAEA, Ibaraki ; CERN) ; Seliverstov, M D (St. Petersburg, INP ; York U., England) ; Andel, B (Comenius U.) ; Antalic, S (Comenius U.) ; Ascher, P (Heidelberg, Max Planck Inst.) ; Atanasov, D (Heidelberg, Max Planck Inst.) ; Beck, D (Darmstadt, GSI) ; Bieroń, J (Jagiellonian U.) et al. Hyperfine-structure parameters and isotope shifts for the 795-nm atomic transitions in $^{217,218,219}$At have been measured at CERN-ISOLDE, using the in-source resonance-ionization spectroscopy technique. Magnetic dipole and electric quadrupole moments, and changes in the nuclear mean-square charge radii, have been deduced. [...] 2019 - 9 p. - Published in: Phys. Rev. C 99 (2019) 054317

2019-09-17 07:00 Investigation of the $\Delta n = 0$ selection rule in Gamow-Teller transitions: The $\beta$-decay of $^{207}$Hg / Berry, T A (Surrey U.) ; Podolyák, Zs (Surrey U.) ; Carroll, R J (Surrey U.) ; Lică, R (CERN ; Bucharest, IFIN-HH) ; Grawe, H ; Timofeyuk, N K (Surrey U.) ; Alexander, T (Surrey U.) ; Andreyev, A N (York U., England) ; Ansari, S (Cologne U.) ; Borge, M J G (CERN ; Madrid, Inst. Estructura Materia) et al. Gamow-Teller $\beta$ decay is forbidden if the number of nodes in the radial wave functions of the initial and final states is different. This $\Delta n=0$ requirement plays a major role in the $\beta$ decay of heavy neutron-rich nuclei, affecting the nucleosynthesis through the increased half-lives of nuclei on the astrophysical $r$-process pathway below both $Z=50$ (for $N>82$) and $Z=82$ (for $N>126$). [...] 2019 - 5 p. - Published in: Phys. Lett. B 793 (2019) 271-275

2019-09-14 06:30 Precision measurements of the charge radii of potassium isotopes / Koszorús, Á (KU Leuven, Dept. Phys. Astron.) ; Yang, X F (KU Leuven, Dept. Phys. Astron. ; Peking U., SKLNPT) ; Billowes, J (Manchester U.) ; Binnersley, C L (Manchester U.) ; Bissell, M L (Manchester U.) ; Cocolios, T E (KU Leuven, Dept. Phys. Astron.) ; Farooq-Smith, G J (KU Leuven, Dept. Phys. Astron.) ; de Groote, R P (KU Leuven, Dept. Phys. Astron. ; Jyvaskyla U.) ; Flanagan, K T (Manchester U.) ; Franchoo, S (Orsay, IPN) et al. Precision nuclear charge radii measurements in the light-mass region are essential for understanding the evolution of nuclear structure, but their measurement represents a great challenge for experimental techniques. At the Collinear Resonance Ionization Spectroscopy (CRIS) setup at ISOLDE-CERN, a laser frequency calibration and monitoring system was installed and commissioned through the hyperfine spectra measurement of $^{38-47}$K. [...] 2019 - 11 p. - Published in: Phys. Rev. C 100 (2019) 034304

2019-09-12 09:23 Evaluation of high-precision atomic masses of $A \sim 50$-$80$ and rare-earth nuclides measured with ISOLTRAP / Huang, W J (CSNSM, Orsay ; Heidelberg, Max Planck Inst.) ; Atanasov, D (CERN) ; Audi, G (CSNSM, Orsay) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cakirli, R B (Istanbul U.) ; Herlert, A (FAIR, Darmstadt) ; Kowalska, M (CERN) ; Kreim, S (Heidelberg, Max Planck Inst. ; CERN) ; Litvinov, Yu A (Darmstadt, GSI) ; Lunney, D (CSNSM, Orsay) et al. High-precision mass measurements of stable and beta-decaying nuclides $^{52-57}$Cr, $^{55}$Mn, $^{56,59}$Fe, $^{59}$Co, $^{75,77-79}$Ga, and the lanthanide nuclides $^{140}$Ce, $^{140}$Nd, $^{160}$Yb, $^{168}$Lu, $^{178}$Yb have been performed with the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. The new data are entered into the Atomic Mass Evaluation and improve the accuracy of masses along the valley of stability, strengthening the so-called backbone. [...] 2019 - 9 p. - Published in: Eur. Phys. J. A 55 (2019) 96

2019-09-05 06:35 Nuclear charge radii of $^{62-80}$Zn and their dependence on cross-shell proton excitations / Xie, L (Manchester U.) ; Yang, X F (Peking U., SKLNPT ; Leuven U.) ; Wraith, C (Liverpool U.) ; Babcock, C (Liverpool U.) ; Bieroń, J (Jagiellonian U.) ; Billowes, J (Manchester U.) ; Bissell, M L (Manchester U. ; Leuven U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Filippin, L (U. Brussels (main)) et al. Nuclear charge radii of $^{62-80}$Zn have been determined using collinear laser spectroscopy of bunched ion beams at CERN-ISOLDE. The subtle variations of the observed charge radii, both within one isotope and along the full range of neutron numbers, are found to be well described in terms of proton excitations across the $Z=28$ shell gap, as predicted by large-scale shell model calculations. [...] 2019 - 5 p. - Published in: Phys. Lett. B 797 (2019) 134805

2019-09-04 06:18 Electromagnetic properties of low-lying states in neutron-deficient Hg isotopes: Coulomb excitation of $^{182}$Hg, $^{184}$Hg, $^{186}$Hg and $^{188}$Hg / Wrzosek-Lipska, K (Warsaw U., Heavy Ion Lab ; Leuven U.) ; Rezynkina, K (Leuven U. ; U. Strasbourg) ; Bree, N (Leuven U.) ; Zielińska, M (Warsaw U., Heavy Ion Lab ; IRFU, Saclay) ; Gaffney, L P (Liverpool U. ; Leuven U. ; CERN ; West Scotland U.) ; Petts, A (Liverpool U.) ; Andreyev, A (Leuven U. ; York U., England) ; Bastin, B (Leuven U. ; GANIL) ; Bender, M (Lyon, IPN) ; Blazhev, A (Cologne U.) et al. The neutron-deficient mercury isotopes serve as a classical example of shape coexistence, whereby at low energy near-degenerate nuclear states characterized by different shapes appear. The electromagnetic structure of the even-mass $^{182-188}$Hg isotopes was studied using safe-energy Coulomb excitation of neutron-deficient mercury beams delivered by the REX-ISOLDE facility at CERN. [...] 2019 - 23 p. - Published in: Eur. Phys. J. A 55 (2019) 130
Missing observations can cause a bias in the VPC and hamper its diagnostic value. This mini case-study shows why missing censored data, or censored data replaced by the LOQ, can cause a bias in the VPC, and how Monolix handles censored data to prevent this bias. It also explains the bias resulting from non-random dropout, and how this can be corrected with Simulx.

Censored data and VPC

This section explains the different cases that can occur when some data are censored. If the censored data are marked in the dataset, they are handled by Monolix in a way that prevents bias in diagnostic plots such as the VPC. However, the bias remains if the censored data are missing.

Missing censored data

The figures below show what happens if censored data are missing from the dataset. The figures on the left and middle focus on two individuals in a PK dataset, with their observed concentrations (on the left) or predicted concentrations (in the middle) over time as blue dots, and with similar measurement times. We assume that a good model has been estimated on this data. Three simulated replicates are represented on the same plot in the middle, although percentiles are calculated on each replicate separately in order to generate the prediction intervals on the VPC. In case of missing censored data, the empirical percentiles represented by the blue lines on the VPC do not decrease much at high times, because only the individuals with high enough observations (like the orange individual, and unlike the green individual) contribute to the percentiles. The simulations for the VPC follow the same measurement times as the initial dataset; however, if the variability is unexplained, the predictions for all the individuals have the same prediction distributions, so the number of predicted observations that contribute to the prediction intervals is the same for each percentile. This results in a discrepancy between observed and simulated data, which can be seen in red.

Explained inter-individual variability

This bias is reduced if most of the variability in the predictions is explained by some covariate effects in the model. In that case the variability of the predicted concentrations at high times will match the variability in the measurement times, and the discrepancy is reduced and can disappear. The diagnostic value of the VPC is improved, although the shape of the percentiles in the VPC is still affected by missing observations.

Censored data as LOQ

If the censored observations are not missing, but are replaced by the LOQ value, a strong bias appears in the VPC. The bias affects the empirical percentiles, as seen below, because the censored observations should actually be lower than the LOQ.

Simulated censored data

In Monolix the bias shown above is prevented by replacing the censored observations in the diagnostic plots by samples from the conditional distribution \(p(y^{BLQ} | y^{non BLQ}, \hat{\psi}, \hat{\theta})\), where \(\hat{\theta}\) and \(\hat{\psi}\) are the estimated population and individual parameters. This corrects the shape of the empirical percentiles in the VPC, as seen on the figure below. More information on handling censored data in Monolix is available here.
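As a rough illustration of this imputation idea (a sketch only, not Monolix's internal implementation; the normal residual-error model and all names below are assumptions):

```python
# Hypothetical sketch: impute a BLQ observation by sampling from the
# individual-predicted distribution truncated above at the LOQ.
# The normal residual-error model and the variable names are assumptions,
# not Monolix internals.
from scipy.stats import truncnorm

def impute_blq(pred, sigma, loq):
    """Sample y ~ N(pred, sigma^2) conditioned on y < loq."""
    b = (loq - pred) / sigma            # standardized upper truncation bound
    return truncnorm.rvs(-float("inf"), b, loc=pred, scale=sigma)

# Example: predicted concentration 0.8, residual sd 0.3, LOQ = 1.0
print(impute_blq(0.8, 0.3, 1.0))        # always below 1.0
```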
Dropout and VPC

The same kind of bias can also occur when the dataset is affected by non-random dropout events. In this section we take an example of tumor growth data. We assume that the data have been fitted with a good model, and are affected by non-random dropout, because individuals with a high tumor size quit the study or die. This means that no more observations of the tumor size are available after the time of dropout, represented by the dashed vertical lines. A bias appears in the VPC, caused by the missing censored observations. This is because there are more missing observations in the dataset for high tumor sizes, which is not the case in simulated predictions from the model, unless the inter-individual variability is well explained by some covariate effects.

Correcting the bias from dropout

Correcting this bias requires providing additional measurement times after the dropouts, which cannot be done automatically in Monolix without making strong assumptions about the design of the dataset. However, it can be attempted with simulations in Simulx, with additional measurement times chosen by the user to be consistent with the dataset, and some post-processing in R. The correction also requires modeling the dropout with a time-to-event model that should depend on the tumour size. After estimating the joint tumor growth and time-to-event model in Monolix, it becomes possible to predict the time of dropout for each individual. We will use these predicted dropout times for the post-processing in R.

The figure below shows the main steps for correcting the bias from dropout in a VPC. The VPC (on the bottom of the figure) can be regenerated with new simulations in Simulx (shown on the top of the figure) and with some plotting functions of ggplot to display the empirical percentiles and prediction intervals. Two simulations that can be corrected are colored in light orange and in light green. First, the light green simulation comes from an individual that had a late dropout in the initial dataset, so it has late prediction times. However, in this case the predicted tumor size is high, and the predicted dropout, visible as the light green dashed line, occurs before the last prediction time. If this individual were real, it would not have been possible to measure this observation. (Note that for Monolix or Simulx the dropout is just an event: it cannot have any effect on the predictions of the size model.) Second, the light orange simulation comes from the orange individual, so it has missing predictions at high times. However, in this case the predicted tumor size is small and the predicted dropout time is quite high, higher than the missing predictions. So if this individual were real, it would have been possible to measure an additional observation here.

Step 1 (left): the VPC is displayed without post-processing: it has the same bias as in Monolix.

Step 2 (middle): To get predictions that are more representative of the dataset, the prediction marked in red from the light green simulation is removed before plotting the VPC, along with all the individual predictions that occur after the corresponding predicted dropout. This first correction is seen in the middle and strongly reduces the bias of the VPC (a sketch of this post-processing is shown after this list).

Step 3 (right): With Simulx it is possible to change the design structure for the prediction times defined in the outputs of the simulations. The time marked in red for the light orange simulation is added to the output before performing the simulations for the VPC, and the same is done for all individuals that have missing observations in the dataset. Combining the pre-processing of the outputs before the simulations from Step 3 and the post-processing of the simulations from Step 2 to remove the predictions occurring after a dropout gives a VPC that is not biased by a spurious discrepancy (VPC3).
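A minimal sketch of the Step 2 post-processing, assuming the simulated observations have been exported to a table with columns id, time and y, and that each simulated individual has a predicted dropout time (all column names and the layout are hypothetical, not a fixed Simulx format):

```python
# Hypothetical post-processing sketch for Step 2: drop simulated
# observations that occur after the individual's predicted dropout time.
import pandas as pd

sims = pd.DataFrame({
    "id":   [1, 1, 1, 2, 2, 2],
    "time": [0, 10, 20, 0, 10, 20],
    "y":    [5.0, 8.0, 12.0, 5.0, 6.0, 7.0],
})
dropout_time = pd.Series({1: 15.0, 2: 30.0}, name="t_dropout")

merged = sims.join(dropout_time, on="id")
kept = merged[merged["time"] <= merged["t_dropout"]].drop(columns="t_dropout")
print(kept)  # individual 1 loses its t=20 observation; individual 2 keeps all
```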
The shape of the percentiles in the VPC is still affected by the dropouts, but the diagnostic value of the VPC is retained. The difficulty of the approach in Step 3 is to extrapolate the design of the dataset in a way that makes sense. This cannot be done in Monolix; it has to be done in R by the user, because it requires making assumptions about the design, and it might be problematic when the design structure is complex. For example, in the case of repeated doses, doses that are missing from the dataset because they would occur after a dropout will also have to be included in the new design. Furthermore, the user has to extrapolate the measurement times from the dataset, which are not necessarily the same for all individuals, while keeping the same measurement density for the new observations as for the rest of the observations, to avoid introducing other biases in the VPC.

The example shown on this page, along with the R script to correct the VPC, can be downloaded here. This example is based on simulated data from the PDTTE model developed in "Desmée, S, Mentré, F, Veyrat-Follet, C, Sébastien, B, Guedj, J (2017). Using the SAEM algorithm for mechanistic joint models characterizing the relationship between nonlinear PSA kinetics and survival in prostate cancer patients. Biometrics, 73, 1:305-312." We assume for the sake of the example that the level of PSA is a marker of the tumour size.
The Annals of Statistics, Volume 12, Number 4 (1984), 1467-1487.

Tail Estimates Motivated by Extreme Value Theory

Abstract: An estimate of the upper tail of a distribution function which is based on the upper $m$ order statistics from a sample of size $n$ (where $m \rightarrow \infty$ and $m/n \rightarrow 0$ as $n \rightarrow \infty$) is shown to be consistent for a wide class of distribution functions. The empirical mean residual life of the log-transformed data and the sample $1 - m/n$ quantile play a key role in the estimate. The joint asymptotic behavior of the empirical mean residual life and the sample $1 - m/n$ quantile is determined, and rates of convergence of the estimate to the tail are derived.

Citation: Davis, Richard; Resnick, Sidney. Tail Estimates Motivated by Extreme Value Theory. Ann. Statist. 12 (1984), no. 4, 1467-1487. doi:10.1214/aos/1176346804. https://projecteuclid.org/euclid.aos/1176346804. MR760700; Zbl 0555.62035. Subjects: Primary 62G05 (Estimation); Secondary 62G30 (Order statistics; empirical distribution functions), 62F12 (Asymptotic properties of estimators).
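For intuition only (my own illustration, not code from the paper): for Pareto-type tails, the empirical mean residual life of the log-transformed data above the sample $1-m/n$ quantile is the classical Hill estimate of the tail index, and a tail estimate of the kind the abstract describes can be sketched as follows.

```python
# Illustrative sketch only: a Hill-type tail estimate built from the upper
# m order statistics, in the spirit described by the abstract. This is a
# standard construction, not code from the paper.
import numpy as np

def tail_estimate(sample, m, x):
    """Estimate P(X > x) for x beyond the sample (1 - m/n) quantile."""
    xs = np.sort(sample)
    n = len(xs)
    u = xs[n - m - 1]                         # sample (1 - m/n) quantile
    top = xs[n - m:]                          # upper m order statistics
    gamma = np.mean(np.log(top) - np.log(u))  # mean residual life of log data
    return (m / n) * (x / u) ** (-1.0 / gamma)

rng = np.random.default_rng(1)
data = rng.pareto(2.0, 10_000) + 1.0          # P(X > x) = x^(-2) for x >= 1
print(tail_estimate(data, m=500, x=10.0))     # ~1e-2, versus the true 10^(-2)
```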
Suppose that we have a thin plate, so thin that it's practically 2-dimensional. Such a plate is called a lamina. If the density were a constant, finding the total mass of the lamina would be easy: we would just multiply the density by the area. When the density isn't constant, we need to integrate instead. The mass of a little box of area $dA$ around the point $(x,y)$ is essentially $\rho(x,y) dA$. For the total mass of the lamina, we add up the boxes and take a limit to get $$M \ = \ \iint_D \rho(x,y) dA.$$ This integral can be done in rectangular coordinates, polar coordinates, or by whatever method you prefer.

The center of mass of a body is a weighted average of the positions of the particles inside. Since a box of area $dA$ at position $(x,y)$ has a mass that's a fraction $\rho(x,y) dA/M$ of the total, the center of mass of our lamina is at position $(\bar x, \bar y)$, where \begin{eqnarray*} \bar x & = & \frac{1}{M} \iint_D x \rho(x,y) dA \\ \bar y & = & \frac{1}{M} \iint_D y \rho(x,y) dA \end{eqnarray*}

The moment of inertia of an object indicates how hard it is to rotate. For a point particle, the moment of inertia is $I=mr^2$, where $m$ is the mass of the particle and $r$ is the distance from the particle to the axis of rotation. The moment of inertia of an object with many pieces is the sum of the moments of inertia of its pieces. The following video explains what the moment of inertia means physically, and how we can calculate it. Let's imagine that we're rotating around the origin, so $r^2=x^2+y^2$. Since the moment of inertia of a little box of size $dA$ at position $(x,y)$ is $(x^2+y^2) \rho(x,y) dA$, the moment of inertia of the entire lamina is $$ I = \iint_D (x^2+y^2) \rho(x,y) dA.$$
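As a worked example (my own choice of density and region, not from the page): for a lamina on the unit square with density $\rho(x,y)=1+x$, the mass, center of mass, and moment of inertia about the origin follow directly from the three formulas above.

```python
# Illustrative example (hypothetical density and region): mass, center of
# mass, and moment of inertia of a lamina via iterated integrals.
import sympy as sp

x, y = sp.symbols("x y")
rho = 1 + x                                    # assumed density on [0,1]x[0,1]

dd = lambda f: sp.integrate(f, (x, 0, 1), (y, 0, 1))  # double integral over D

M = dd(rho)                                    # total mass
xbar, ybar = dd(x * rho) / M, dd(y * rho) / M  # center of mass
I = dd((x**2 + y**2) * rho)                    # moment of inertia about origin

print(M, xbar, ybar, I)   # 3/2, 5/9, 1/2, 13/12
```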
Exercise 8.1.1: Sketch the phase plane vector field for: a) \(x'=x^2, ~~y'=y^2\), b) \(x'=(x-y)^2, ~~y'=-x\), c) \(x'=e^y,~~ y'=e^x\).

Exercise 8.1.2: Match systems 1) \(x'=x^2\), \(y'=y^2\), 2) \(x'=xy\), \(y'=1+y^2\), 3) \(x'=\sin(\pi y)\), \(y'=x\), to the vector fields below. Justify. a) b) c)

Exercise 8.1.3: Find the critical points and linearizations of the following systems. a) \(x'=x^2-y^2\), \(y'=x^2+y^2-1\), b) \(x'=-y\), \(y'=3x+yx^2\), c) \(x'=x^2+y\), \(y'=y^2+x\).

Exercise 8.1.4: For the following systems, verify they have a critical point at \((0,0)\), and find the linearization at \((0,0)\). a) \(x'=x+2y+x^2-y^2\), \(y'=2y-x^2\) b) \(x'=-y\), \(y'=x-y^3\) c) \(x'=ax+by+f(x,y)\), \(y'=cx+dy+g(x,y)\), where \(f(0,0) = 0\), \(g(0,0) = 0\), and all first partial derivatives of \(f\) and \(g\) are also zero at \((0,0)\), that is, \(\frac{\partial f}{\partial x}(0,0) = \frac{\partial f}{\partial y}(0,0) = \frac{\partial g}{\partial x}(0,0) = \frac{\partial g}{\partial y}(0,0) = 0\).

Exercise 8.1.5: Take \(x'=(x-y)^2\), \(y'=(x+y)^2\). a) Find the set of critical points. b) Sketch a phase diagram and describe the behavior near the critical point(s). c) Find the linearization. Is it helpful in understanding the system?

Exercise 8.1.6: Take \(x'=x^2\), \(y'=x^3\). a) Find the set of critical points. b) Sketch a phase diagram and describe the behavior near the critical point(s). c) Find the linearization. Is it helpful in understanding the system?

Exercise 8.1.101: Find the critical points and linearizations of the following systems. a) \(x'=\sin(\pi y)+(x-1)^2\), \(y'=y^2-y\), b) \(x'=x+y+y^2\), \(y'=x\), c) \(x'=(x-1)^2+y\), \(y'=x^2+y\).

Exercise 8.1.102: Match systems 1) \(x'=y^2\), \(y'=-x^2\), 2) \(x'=y\), \(y'=(x-1)(x+1)\), 3) \(x'=y+x^2\), \(y'=-x\), to the vector fields below. Justify. a) b) c)

Exercise 8.1.103: The idea of critical points and linearization works in higher dimensions as well. You simply make the Jacobian matrix bigger by adding more functions and more variables. For the following system of 3 equations find the critical points and their linearizations: \(x' = x + z^2,\\ y' = z^2-y, \\ z' = z+x^2.\)

Exercise 8.1.104: Any two-dimensional non-autonomous system \(x'=f(x,y,t)\), \(y'=g(x,y,t)\) can be written as a three-dimensional autonomous system (three equations). Write down this autonomous system using the variables \(u\), \(v\), \(w\).

Exercise 8.2.1: For the systems below, find and classify the critical points; also indicate whether the equilibria are stable, asymptotically stable, or unstable. a) \(x'=-x+3x^2\), \(y'=-y\) b) \(x'=x^2+y^2-1\), \(y'=x\) c) \(x'=ye^x\), \(y'=y-x+y^2\)

Exercise 8.2.2: Find the implicit equations of the trajectories of the following conservative systems. Next find their critical points (if any) and classify them. a) \(x''+ x+x^3 = 0\) b) \(\theta''+\sin \theta = 0\) c) \(z''+ (z-1)(z+1) = 0\) d) \(x''+ x^2+1 = 0\)

Exercise 8.2.3: Find and classify the critical point(s) of \(x' = -x^2\), \(y' = -y^2\).

Exercise 8.2.4: Suppose \(x'=-xy\), \(y'=x^2-1-y\). a) Show there are two spiral sinks at \((-1,0)\) and \((1,0)\). b) For any initial point of the form \((0,y_0)\), find the trajectory. c) Can a trajectory starting at \((x_0,y_0)\) where \(x_0 > 0\) spiral into the critical point at \((-1,0)\)? Why or why not?

Exercise 8.2.5: In the example \(x'=y\), \(y'=y^3-x\) show that for any trajectory, the distance from the origin is an increasing function. Conclude that the origin behaves like a spiral source.
Hint: Consider \(f(t) = {\bigl(x(t)\bigr)}^2 + {\bigl(y(t)\bigr)}^2\) and show it has positive derivative.

Exercise 8.2.6: Suppose \(f\) is always positive. Find the trajectories of \(x''+f(x') = 0\). Are there any critical points?

Exercise 8.2.7: Suppose that \(x' = f(x,y)\), \(y' = g(x,y)\). Suppose that \(g(x,y) > 1\) for all \(x\) and \(y\). Are there any critical points? What can we say about the trajectories as \(t\) goes to infinity?

Exercise 8.2.101: For the systems below, find and classify the critical points. a) \(x'=-x+x^2\), \(y'=y\) b) \(x'=y-y^2-x\), \(y'=-x\) c) \(x'=xy\), \(y'=x+y-1\)

Exercise 8.2.102: Find the implicit equations of the trajectories of the following conservative systems. Next find their critical points (if any) and classify them. a) \(x''+ x^2 = 4\) b) \(x''+ e^x = 0\) c) \(x''+ (x+1)e^x = 0\)

Exercise 8.2.103: The conservative system \(x''+x^3 = 0\) is not almost linear. Classify its critical point(s) nonetheless.

Exercise 8.2.104: Derive an analogous classification of critical points for equations in one dimension, such as \(x'= f(x)\), based on the derivative. A point \(x_0\) is critical when \(f(x_0) = 0\) and almost linear if in addition \(f'(x_0) \not= 0\). Figure out whether the critical point is stable or unstable depending on the sign of \(f'(x_0)\). Explain. Hint: see Ch. 1.6.

Exercise 8.3.1: Take the damped nonlinear pendulum equation \(\theta '' + \mu \theta' + (\frac{g}{L}) \sin \theta = 0\) for some \(\mu > 0\) (that is, there is some friction). a) Suppose \(\mu = 1\) and \(\frac{g}{L} = 1\) for simplicity; find and classify the critical points. b) Do the same for any \(\mu > 0\) and any \(g\) and \(L\), but such that the damping is small, in particular \(\mu^2 < 4(\frac{g}{L})\). c) Explain what your findings mean, and whether they agree with what you expect in reality.

Exercise 8.3.2: Suppose the hares do not grow exponentially, but logistically. In particular consider \[x' = (0.4-0.01y)x - \gamma x^2, ~~~~~ y' = (0.003x-0.3)y .\] For the following two values of \(\gamma\), find and classify all the critical points in the positive quadrant, that is, for \(x \geq 0\) and \(y \geq 0\). Then sketch the phase diagram. Discuss the implication for the long term behavior of the population. a) \(\gamma=0.001\), b) \(\gamma=0.01\).

Exercise 8.3.3: a) Suppose \(x\) and \(y\) are positive variables. Show \(\frac{y x}{e^{x+y}}\) attains a maximum at \((1,1)\). b) Suppose \(a,b,c,d\) are positive constants, and also suppose \(x\) and \(y\) are positive variables. Show \(\frac{y^a x^d}{e^{cx+by}}\) attains a maximum at \((\frac{d}{c},\frac{a}{b})\).

Exercise 8.3.4: Suppose that for the pendulum equation we take a trajectory giving the spinning-around motion, for example \(\omega = \sqrt{\frac{2g}{L} \cos \theta + \frac{2g}{L} + \omega_0^2}\). This is the trajectory where the lowest angular velocity is \(\omega_0\). Find an integral expression for how long it takes the pendulum to go all the way around.

Exercise 8.3.5: [challenging] Take the pendulum, and suppose the initial position is \(\theta = 0\). a) Find the expression for \(\omega\) giving the trajectory with initial condition \((0,\omega_0)\). Hint: Figure out what \(C\) should be in terms of \(\omega_0\). b) Find the critical angular velocity \(\omega_1\), such that for any higher initial angular velocity, the pendulum will keep going around its axis, and for any lower initial angular velocity, the pendulum will simply swing back and forth.
Hint: When the pendulum doesn't go over the top, the expression for \(\omega\) will be undefined for some \(\theta\)s. c) What do you think happens if the initial condition is \((0,\omega_1)\), that is, the initial angle is 0 and the initial angular velocity is exactly \(\omega_1\)?

Exercise 8.3.101: Take the damped nonlinear pendulum equation \(\theta '' + \mu \theta' + (\frac{g}{L}) \sin \theta = 0\) for some \(\mu > 0\) (that is, there is friction). Suppose the friction is large, in particular \(\mu^2 > 4 (\frac{g}{L})\). a) Find and classify the critical points. b) Explain what your findings mean, and whether they agree with what you expect in reality.

Exercise 8.3.102: Suppose we have the predator-prey system where the foxes are also killed at a constant rate \(h\) (\(h\) foxes killed per unit time): \(x' = (a-by)x\), \(y' = (cx-d)y - h\). a) Find the critical points and the Jacobian matrices of the system. b) Put in the constants \(a=0.4\), \(b=0.01\), \(c=0.003\), \(d=0.3\), \(h=10\). Analyze the critical points. What do you think this says about the forest?

Exercise 8.3.103: [challenging] Suppose the foxes never die. That is, we have the system \(x' = (a-by)x\), \(y' = cxy\). Find the critical points and notice they are not isolated. What will happen to the population in the forest if it starts at some positive numbers? Hint: Think of the constant of motion.

Exercise 8.4.1: Show that the following systems have no closed trajectories. a) \(x'=x^3+y\), \(y'=y^3+x^2\), b) \(x'=e^{x-y}\), \(y'=e^{x+y}\), c) \(x'=x+3y^2-y^3\), \(y'=y^3+x^2\).

Exercise 8.4.2: Formulate a condition for a 2-by-2 linear system \({\vec{x}\,}' = A \vec{x}\) to not be a center using the Bendixson-Dulac theorem. That is, the theorem says something about certain elements of \(A\).

Exercise 8.4.3: Explain why the Bendixson-Dulac Theorem does not apply for any conservative system \(x''+h(x) = 0\).

Exercise 8.4.4: A system such as \(x'=x\), \(y'=y\) has solutions that exist for all time \(t\), yet there are no closed trajectories or other limit cycles. Explain why the Poincaré-Bendixson Theorem does not apply.

Exercise 8.4.5: Differential equations can also be given in different coordinate systems. Suppose we have the system \(r' = 1-r^2\), \(\theta' = 1\) given in polar coordinates. Find all the closed trajectories and check whether they are limit cycles and, if so, whether they are asymptotically stable or not.

Exercise 8.4.101: Show that the following systems have no closed trajectories. a) \(x'=x+y^2\), \(y'=y+x^2\), b) \(x'=-x\sin^2(y)\), \(y'=e^x\), c) \(x'=xy\), \(y'=x+x^2\).

Exercise 8.4.102: Suppose an autonomous system in the plane has a solution \(x=\cos(t)+e^{-t}\), \(y=\sin(t)+e^{-t}\). What can you say about the system (in particular about limit cycles and periodic solutions)?

Exercise 8.4.103: Show that the limit cycle of the Van der Pol oscillator (for \(\mu > 0\)) cannot lie completely in the set where \(- \sqrt{\frac{1+\mu}{\mu}} < x < \sqrt{\frac{1+\mu}{\mu}}\).

Exercise 8.4.104: Suppose we have the system \(r' = \sin(r)\), \(\theta' = 1\) given in polar coordinates. Find all the closed trajectories.
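To check answers to exercises like 8.1.3, a computer algebra system can find critical points and linearizations; here is a short sketch (my own, not part of the exercise set) applied to the system of Exercise 8.1.3 b).

```python
# A sketch of finding critical points and linearizations symbolically,
# here applied to x' = -y, y' = 3x + y*x^2 (Exercise 8.1.3 b).
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = -y
g = 3 * x + y * x**2

crit = sp.solve([f, g], [x, y], dict=True)       # critical points
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)],
               [sp.diff(g, x), sp.diff(g, y)]])  # Jacobian matrix

for p in crit:
    A = J.subs(p)
    print(p, A.tolist(), A.eigenvals())
# The only critical point is (0,0); the linearization x' = -y, y' = 3x
# has eigenvalues ±i√3, so the linearization is a center.
```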
Momentum is the generator of spatial translations, even in classical physics. Anyway, you can find a derivation here or in Sakurai's book Modern Quantum Mechanics. They are more or less the same and go like this:

The translation operator is the operator $T(a)$ such that $$T(a) \mid x \rangle = \mid x+a\rangle$$ From the definition it follows that the adjoint of $T$ performs a backwards translation: $$T^\dagger(a) \mid x \rangle = \mid x-a\rangle$$ Of course, we must require that if we translate and then translate back, the state is unchanged: $$T^\dagger(a) T(a) \mid x \rangle = \mid x \rangle$$ from which it follows that $T$ must be unitary: $T^\dagger=T^{-1}$. Any unitary operator can be written in the form $$T(a) =e^{-iKa}$$ with $K$ hermitian. Now you will find that the eigenstates of $K$ in the position basis are plane waves: $$\langle x \mid k \rangle = \psi_k(x) \sim e^{ikx}$$ Now (and this is the crucial passage), the de Broglie hypothesis comes into play: $$p = \hbar k$$ so that $$T(a)=e^{-iPa/\hbar}$$ And with some math (the passages are in the paper I linked) you can show that $$P \psi(x) = \langle x \mid P \mid \psi \rangle = - i \hbar \frac{\partial \psi}{\partial x}$$

The de Broglie hypothesis is not strictly necessary. For example, Sakurai observes that for an infinitesimal translation you have $$T(dx) = 1-i K \, dx$$ and that in classical mechanics the generating function of the infinitesimal translation $$x'=x+dx, \qquad p'=p$$ is $$F(x,p')=x p'+ p \, dx$$ where $xp'$ is the generating function of the identity transformation. From the similarity between $F(x,p')$ and $T(dx)$ he then speculates that $K$ is related to momentum, and since $K \, dx$ must be dimensionless we must have $$K=\frac{P}{\text{constant with dimensions of an action}}$$ It turns out from experiments that this constant is exactly $\hbar$.
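A quick numerical illustration of the main claim (my own, not from the linked derivation): diagonalizing $K=-i\,\partial_x$ with the FFT, the operator $e^{-iKa}$ really does translate a wave function by $a$.

```python
# Numerical illustration: exp(-i K a) with K = -i d/dx translates a wave
# function by a. We diagonalize K with the FFT, so exp(-i K a) becomes
# multiplication by exp(-i k a) in k-space.
import numpy as np

N, L, a = 512, 40.0, 3.0
xg = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

psi = np.exp(-xg**2)                          # a Gaussian centered at 0
shifted = np.fft.ifft(np.exp(-1j * k * a) * np.fft.fft(psi))

# The result matches the Gaussian centered at x = a:
print(np.max(np.abs(shifted - np.exp(-((xg - a) ** 2)))))  # ~1e-15
```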
Upload the pdf of RMO 2013 and discuss problems under this thread ONLY AFTER COMPLETION of the exam (i.e. 4 pm). Best of luck to all of you, buddies.

Note by Priyatam Roy, 5 years, 10 months ago

Sort by:

CRMO-Paper 3

I took this paper. I was sitting behind Sreejato, and I got only questions 1, 2, and 4 right. I hope I make the cutoff!

Isn't RMO a contest held in India? That's the impression that I had at least. Edit: looking at the problems, I have to say problems 3 and 5 sure are tough.

@Michael Tong – Yes, RMO is specifically for Indians. And problem 5 is pretty easy if you can find the recursion.

@Sreejato Bhattacharya – Hey... Sreejato, in which class do you study?

@Michael Tong – Well, the 3rd one is simple. Menelaus' theorem, you see... For Maharashtra region... :P

@Kshitij Khandelwal – We are talking about CRMO Paper 3. I thought you live in the USA. :O

@Sreejato Bhattacharya – But I saw Cody entering ISI :P

@Soham Chanda – Have you been dreaming when you gave RMO? xD

@Sreejato Bhattacharya – Dreaming that I had enough time to write up the solutions :|

Oh! How I wish that you were seated next to me, so that I could have a good shot >>>> Counterattack :p

Downvoted.

Any idea about the cutoff of paper 3?

That's what I got! The problems weren't very hard, but the time limit (3 hours) put pressure on me. Apparently I solved the first four problems, but I won't get full credit for the third one, as I couldn't complete it (lack of time! >:( ). I had the fifth question on scratch, but didn't get the time to present it carefully. :'(

@Sreejato Bhattacharya – Solved the fifth one now; only 4 lines more and my solution would have been complete >.< Time was really a factor, and that I did not have practice in solution writing was really a major fault on my side.

@Soham Chanda – The fifth one was completely solved in my rough work. Unfortunately I won't get marks for it. -.-

@Sreejato Bhattacharya – :| You could have at least shown the recursive relation and given an outline of the proof.

@Soham Chanda – I had only like 1 minute left when I solved #4. :(

@Sreejato Bhattacharya – Sreejato... why are you being a smart aleck... :P You'll get through anyway... don't stress.

@Sagnik Saha – There's no guarantee :(

It was quite easy though. *Mumbai RMO Paper* https://brilliant.org/discussions/thread/an-rmo-problem-2/

There has to be a 2013 problem or solution. Therefore note that $2013 = 3 \times 11 \times 61$.

True dat.

You were right!

About what..........exactly? Mine didn't have any 2013-related problem.

You can find the question papers and solutions here. I'm creating subthreads for CRMO 1-4.

CRMO-Paper 1

CRMO-Paper 2

I got 3 questions correct in CRMO paper 2.
Anyone else get CRMO paper 2?

I got this paper :) Felt it was easy after finishing the test and finding the solutions (at home).

CRMO-Paper 4

Was there any question common in CRMO 1, 2, 3 or 4? I don't think so.

Nope... all were miles away...

Mine was CRMO Paper 2... And there is a 2013-related problem...

RMO Maharashtra and Goa region question paper and answers are not available online. They never upload it!

Well then we can type the questions and start the discussion.

Well, the Maharashtra paper was easier than the other papers, I hope so. Really, it was easier than expected...

R U really going to type the whole paper?? I have already typed it as a new discussion.

@Aditya Raut – Will see about that.

Mine was CRMO 1...

Mine too... How many were you able to solve... and from which region are you... I was able to solve only 1, 2 and 5... and yeah, the 6th one partly.

I was able to solve 1, 3 and 4 and the rest was quite tough... time limitation... :P

Please upload the paper of your respective regions: http://olympiads.hbcse.tifr.res.in/subjects/mathematics/previous-question-papers-and-solutions

Well, I was able to solve 4 questions completely but I was unable to solve the last question due to lack of time, so I want to know what's the cutoff for the Delhi region.

I am in class 9 (going to class 10 this year). I will be giving Pre-RMO and hopefully RMO this year; can someone please give me some tips on how to prepare and what to prepare for both? Do suggest some books also, please...

This RMO paper is tough, but much easier than the previous years!!
Let's suppose that we are aware of the integral $$\int_{C_k} \prod_{i=1}^k x_i^{\alpha_i-1} dx_i = \frac{\prod_{i=1}^k \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^k \alpha_i\right)} \tag{1}$$ (see here for a proof). It turns out that the only thing we need is to rewrite the product properly: $$P(k)=\prod_{i=1}^k\prod_{j=i}^k (x_i x_j)^{\alpha_{ij}-1} \;=\; ?$$

Let's try the easy example $k=3$ (assuming the $\alpha_{ij}$ are symmetric). We get $$(x_1x_1)^{\alpha_{11}-1}(x_1x_2)^{\alpha_{12}-1}(x_1x_3)^{\alpha_{13}-1}(x_2x_2)^{\alpha_{22}-1}(x_2x_3)^{\alpha_{23}-1}(x_3x_3)^{\alpha_{33}-1}$$ which equals $$x_1^{2\alpha_{11}+\alpha_{12}+\alpha_{13}-4}x_2^{2\alpha_{22}+\alpha_{12}+\alpha_{23}-4}x_3^{2\alpha_{33}+\alpha_{13}+\alpha_{23}-4}$$ or $$P(3)=\prod_{i=1}^3 x_i^{2\alpha_{ii}+\sum_{j\neq i}^3\alpha_{ij}-4}.$$ It is obvious (or can be shown by induction) that one can generalize this to arbitrary $k$: $$P(k)=\prod_{i=1}^k x_i^{\alpha_{ii}-1+\sum_{j=1}^k(\alpha_{ij}-1)}$$ Plugging this expression into our integral (using $\prod_i b_i\prod_i a_i=\prod_i a_i b_i$) we see that it is given by $$\int_{C_k} \prod_{i=1}^k dx_i\prod_{j=i}^k (x_i x_j)^{\alpha_{ij}-1}=\int_{C_k} \prod_{i=1}^k dx_i\, x_i^{\alpha_{ii}-1+\sum_{j=1}^k(\alpha_{ij}-1)}$$ which allows us to just use $(1)$, replacing $\alpha_i$ by $\alpha_{ii}+\sum_{j=1}^k(\alpha_{ij}-1)$: $$\int_{C_k} \prod_{i=1}^k dx_i\prod_{j=i}^k (x_i x_j)^{\alpha_{ij}-1}=\frac{\prod_{i=1}^k \Gamma\left(\alpha_{ii}+\sum_{j=1}^k(\alpha_{ij}-1)\right)}{\Gamma\left(\sum_{i=1}^k \left[\alpha_{ii}+\sum_{j=1}^k(\alpha_{ij}-1)\right]\right)}$$

Edit: It appears that in my original answer I implicitly assumed that the $\alpha_{ij}$ are symmetric, $\alpha_{ij}=\alpha_{ji}$. If this is not the case, one has to modify the answer as follows: replace $\sum_{j\neq i}^k\alpha_{ij}$ by $\sum_{j\neq i}^k\left(\Theta(i-j)\alpha_{ij}+\Theta(j-i)\alpha_{ji}\right)$, where $\Theta(x)$ is Heaviside's step function. Please note also that this approach generalizes to integrals like $$\int_{C_k} \prod_{i=1}^k dx_i\prod_{i_1\le i_2\le\cdots\le i_n} (x_{i_1} x_{i_2}\cdots x_{i_n})^{\alpha_{i_1 i_2\cdots i_n}-1}$$
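As a quick sanity check (my own, and assuming $C_k$ denotes the standard simplex $x_i\ge 0$, $\sum_i x_i = 1$, which is an assumption about the unstated domain): for $k=2$ the base integral $(1)$ reduces to the Beta function, which sympy confirms.

```python
# Sanity check of equation (1) for k = 2, assuming C_k is the simplex
# x_i >= 0, sum x_i = 1, so x_2 = 1 - x_1 and the integral is Beta(a, b).
import sympy as sp

x = sp.symbols("x")
a, b = sp.symbols("a b", positive=True)

lhs = sp.integrate(x**(a - 1) * (1 - x)**(b - 1), (x, 0, 1))
rhs = sp.gamma(a) * sp.gamma(b) / sp.gamma(a + b)
print(sp.simplify(sp.expand_func(lhs) - rhs))   # 0
```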
TL;DR - Not practical due to high energy demands.

Let's examine the power of a photonic thruster using two wavelengths, one long (radio/UHF at 3 GHz, 0.1 meter wavelength) and one short (x-rays at 300 PHz, 1 nm wavelength). The energy of a photon ($E$) is defined by $E = \frac{hc}{\lambda}$ while the momentum ($p$) is $p = \frac{E}{c} = \frac{h}{\lambda}$. $h$ is Planck's constant $6.626\times10^{-34} \text{J}\cdot\text{s}$, and $c$ is the speed of light $3.00\times10^8\frac{\text{m}}{\text{s}}$. For a 1.0 nm photon, $E = 2.0\times10^{-16}\text{J}$ and $p = 6.6\times10^{-25}\frac{\text{kg}\cdot\text{m}}{\text{s}}$.

Let's compare a theoretical x-ray photonic engine with a Hall effect thruster (which have flown in space since the 70s). A generic Hall effect thruster requires input power of 2 kW and generates thrust of 100 millinewtons. Assuming a 1000 kg satellite, acceleration will be $a = \frac{F}{m} = \frac{0.1\text{N}}{1000\text{kg}}=0.0001\frac{\text{m}}{\text{s}^2}$. Therefore in 1 day (86400 s) of thrusting it can take a 1000 kg satellite from rest to a speed of $v_f = v_i + a\cdot t = 0 + 0.0001\frac{\text{m}}{\text{s}^2}\cdot 86400\,\text{s} = 8.64 \frac{\text{m}}{\text{s}}$. Not too much acceleration, but it only took 2 kW of power.

Now let's try to get our photonic propulsor to match that speed. The momentum change required to bring a 1000 kg satellite to 8.64 $\frac{\text{m}}{\text{s}}$ is 8640 $\frac{\text{kg}\cdot\text{m}}{\text{s}}$. If each photon's departure from the engine imparts $6.6\times10^{-25}\frac{\text{kg}\cdot\text{m}}{\text{s}}$, then $n = \frac{8640 \frac{\text{kg}\cdot\text{m}}{\text{s}}}{6.6\times10^{-25}\frac{\text{kg}\cdot\text{m}}{\text{s}}} = 1.30\times10^{28}$ photons are needed, requiring an energy of $1.30\times10^{28} \cdot 2.0\times10^{-16}\text{J} = 2.59\times10^{12}\text{J}$. Divided by a day, that works out to 30 MW just for the energy that needs to be imparted to the photons, assuming a perfectly efficient engine.

So there you see the problem. A photonic thruster will require (at perfect efficiency) about 4 orders of magnitude more energy than a current-technology ion thruster to generate the same thrust. I would assume that future ion engines will be even more efficient. Also notice that the ratio between momentum and energy in a photon is constant (the speed of light), so the theoretical maximum efficiency of the thruster does not change with wavelength. So a photonic thruster is not practical with real physics; this explains why photon-related thrust proposals involve solar sails, where the sun is giving the photons the energy.

If you want to use 'maybe' physics, then you could say that there exists a way to generate photons that does not require energy generation in the form of electricity. Some exotic interaction of dark matter/dark energy/anti-matter etc. Or you could just say it's magic. In that case the photonic thruster would work, but watch out for whatever you are pointing that thruster at. If the photons pack 30 MW at 100 mN, then they will be worth 39 TW if they match the 130 kN of an F-15's twin engines. That is more of a death ray than a transportation system.
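The arithmetic above is easy to verify; here is a short script (my own) reproducing the numbers.

```python
# Reproducing the back-of-the-envelope numbers from the answer above.
h, c = 6.626e-34, 3.00e8        # Planck's constant [J s], speed of light [m/s]
lam = 1e-9                      # 1 nm x-ray photon

E = h * c / lam                 # energy per photon   ~2.0e-16 J
p = h / lam                     # momentum per photon ~6.6e-25 kg m/s

m, F, t = 1000.0, 0.1, 86400.0  # satellite mass, Hall thruster force, one day
dv = (F / m) * t                # 8.64 m/s after a day of thrusting
n = m * dv / p                  # ~1.3e28 photons for the same momentum change
P = n * E / t                   # ~30 MW of photon power, vs 2 kW for the Hall
print(f"{dv=:.2f} m/s, photons={n:.2e}, power={P/1e6:.0f} MW")
```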
Global existence and asymptotic stability in a competitive two-species chemotaxis system with two signals

Institut für Mathematik, Universität Paderborn, Warburger Str. 100, 33098 Paderborn, Germany

$$\left\{ \begin{array}{ll} u_t = \Delta u - \chi_1 \nabla \cdot (u\nabla v) + \mu_1 u(1 - u - a_1 w), & x \in \Omega,\ t > 0,\\ v_t = \Delta v - v + w, & x \in \Omega,\ t > 0,\\ w_t = \Delta w - \chi_2 \nabla \cdot (w\nabla z) + \mu_2 w(1 - w - a_2 u), & x \in \Omega,\ t > 0,\\ z_t = \Delta z - z + u, & x \in \Omega,\ t > 0, \end{array} \right.$$

posed in a bounded domain $\Omega\subset\mathbb{R}^n$. Global existence of bounded solutions is established for $n=2$. For $n\geq 2$, if $a_1<1$, $a_2<1$ and $\frac{\mu_1}{\chi_1^2}$, $\frac{\mu_2}{\chi_2^2}$ are sufficiently large, any solution with $u\not\equiv0\not\equiv w$ converges to a coexistence steady state; if $a_1\geq 1$, $a_2<1$ and $\frac{\mu_2}{\chi_2^2}$ is sufficiently large, any solution with $w\not\equiv0$ converges to $(0,1,1,0)$ as $t\to\infty$.

Keywords: Multi-species chemotaxis, boundedness, logistic source, Lotka-Volterra competition, stability.

Mathematics Subject Classification: Primary: 35K35; Secondary: 35A01, 35B40, 35B35, 35Q92, 92C17, 92D40.

Citation: Tobias Black. Global existence and asymptotic stability in a competitive two-species chemotaxis system with two signals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4): 1253-1272. doi: 10.3934/dcdsb.2017061
Here is one I am having trouble following. Can anyone help me through my confusion?

Our setup is a normal Bell test using entangled photons created using spontaneous parametric down-conversion (PDC). Such a setup uses 2 BBO crystals oriented at 90 degrees relative to each other. See for example Dehlinger and Mitchell's http://users.icfo.es/Morgan.Mitchell/QOQI2005/DehlingerMitchellAJP2002EntangledPhotonsNonlocalityAndBellInequalitiesInTheUndergraduateLaboratory.pdf [Broken].

1. Say we have Alice and Bob set their polarizers at identical settings, at +45 degrees relative to the vertical. Once the individual results of Alice and Bob are examined, it will be seen (in the ideal case) that they always match (either ++ or --). According to the local realist or local hidden variables (LHV) advocate, this is "easily" explained: if you measure the same attribute of two separated particles sharing such a common origin, you will naturally always get the same answer. There is no continuing entanglement or spooky action at a distance, and conservation rules are sufficient to provide a suitable explanation. I.e. in LHV theories there is no continuing connection between spacelike-separated particles that interacted in the past. The results will be 100% correlated.

But that explanation does not seem reasonable to me, even in the case above in which Alice and Bob have identical settings. Here is the paradox as I see it. The source of the photon pairs is the 2 crystals. They achieve an EPR entangled state for testing by preparing a superposition of states as follows:

[tex] |\psi_{epr}\rangle = \frac{1}{\sqrt{2}} (|V\rangle_s |V\rangle_i + |H\rangle_s |H\rangle_i) [/tex]

This is the standard description per QM. We already know this leads to the [tex] \cos^2 \theta [/tex] relationship and the results will be 100% correlated. The local realist presumably would not accept this description as accurate because it is not complete, and it violates the basic premise of any LHV theory. He has an alternate explanation, and the Heisenberg Uncertainty Principle (HUP) is not part of it. So now it appears that our experimental results are compatible with the expectations of both QM and LHV (at least when Alice and Bob have matching settings); however, they have different ways of obtaining identical predictions. But let's look deeper, because I think there is a paradox on the LHV side.

2. Suppose I remove one of the BBO crystals, say the one which produces pairs that are horizontally polarized. I have removed an element of uncertainty from the output stream, as we will now know which crystal was the source of the photon pair. Now the results of Alice and Bob no longer match in all cases, and this is predicted by QM: Alice and Bob will now have matched results only 50% of the time. This follows because the resulting photon pairs emerge from the remaining BBO crystal with a vertical orientation. Each photon has a 50-50 chance of passing through the polarizer at Alice and at Bob, but since there is no longer a superposition of states, Alice and Bob do not end up with correlated results.

But what about our LHV theory? We should still get matching results for Alice and Bob, because we are still measuring the same attribute on both photons and the conservation rule remains in effect! Yet the actual results are now matches only 50% of the time, no better than even odds. What happened to our explanation that "measuring the same attribute" gives identical results? It seems to me that the only way for an LHV to avoid the paradox is to incorporate the HUP - and maybe the projection postulate too - as a fundamental part of the theory so that it can give the same predictions as QM.

I mean, if the LHV advocate denies there is superposition in case 1 (such denial is essentially a requirement of any LHV, right?), how does the greater knowledge of the state change anything in case 2?
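The two cases can be checked directly against the quantum description. This is a hedged sketch of mine (not from the thread): it computes the probability that Alice's and Bob's results match when both polarizers sit at the same angle, for the two-crystal entangled state and for the single-crystal product state; the state vectors and the match_prob helper are my own names.

    import numpy as np

    H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    def match_prob(psi, theta):
        """P(both pass) + P(both blocked), polarizers both at angle theta from vertical."""
        passv  = np.cos(theta) * V + np.sin(theta) * H   # pass state of the polarizer
        blockv = -np.sin(theta) * V + np.cos(theta) * H  # orthogonal (blocked) state
        p_pass  = abs(np.kron(passv,  passv)  @ psi) ** 2
        p_block = abs(np.kron(blockv, blockv) @ psi) ** 2
        return p_pass + p_block

    epr     = (np.kron(V, V) + np.kron(H, H)) / np.sqrt(2)  # both crystals present
    one_bbo = np.kron(V, V)                                  # one crystal removed

    print(match_prob(epr, np.pi / 4))      # 1.0: perfect correlation
    print(match_prob(one_bbo, np.pi / 4))  # 0.5: coin-flip matches

The entangled state gives matches with probability 1 at any common angle, while the vertical product state gives exactly 1/2 at 45 degrees, which are the two outcomes discussed above.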
The pigeonhole principle is so obvious to me that I am not able to think of a proof based on the axioms of natural numbers. Can anyone please explain its proof, clearly mentioning the axioms?

It can be proved by induction; the crucial axiom is that the natural numbers are well-ordered, that is, any nonempty set of natural numbers has a least element. For a proof of the pigeonhole principle, see my answer to this other question. (Strictly speaking, what's proved there isn't exactly the pigeonhole principle; instead, it's the statement "There is no injection from a set of $n$ elements to a set of $m$ elements, if $n>m$", but it's easy to get from this to the pigeonhole principle.)

Here is a non-numeric version of the pigeonhole principle that may interest you. The formal proof is too long to post here (several hundred lines; see "The Pigeonhole Principle" at my math blog). Let $P$ be the (non-empty) set of pigeons, and $H$ the set of holes. Suppose each pigeon is put in a hole. Suppose further that there are more pigeons than holes, i.e. that no function mapping holes to pigeons is surjective (onto). Then at least two pigeons will be put in the same hole. More formally:

$\exists a: a \in P$ (there is at least one pigeon)
$\implies \forall f: [f: P \to H$ (each pigeon is put in a hole)
$\land\ \forall g: [[g: H \to P] \implies g \text{ is not surjective}]$ (there are more pigeons than holes)
$\implies \exists b,c,d: [b\in P \land c\in P \land d\in H \land [b\ne c \land f(b)=d \land f(c)=d]]]$ (at least two pigeons are in the same hole).
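For fixed sizes, the no-injection statement is a finite claim and can be checked exhaustively; a small brute-force sketch (my addition, not from the linked answer):

    from itertools import product

    def exists_injection(n, m):
        """Brute-force: is there an injective map from n pigeons to m holes?"""
        return any(len(set(f)) == n                  # all pigeons in distinct holes
                   for f in product(range(m), repeat=n))

    # No injection exists whenever n > m (the pigeonhole principle), e.g.:
    for n, m in [(2, 1), (3, 2), (4, 3), (5, 3)]:
        assert not exists_injection(n, m)
    print("verified: some hole always holds two pigeons when n > m")

This of course checks only finitely many instances; the induction on well-ordered naturals is what turns it into a theorem for all $n > m$.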
The change-of-variables formula with 3 (or more) variables is just like the formula for two variables. If we do a change of variables $\Phi$ from coordinates $(u,v,w)$ to coordinates $(x,y,z)$, then the Jacobian is the determinant $$\frac{\partial(x,y,z)}{\partial(u,v,w)} \ = \ \left | \begin{matrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} & \frac{\partial x}{\partial w} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} & \frac{\partial y}{\partial w} \\ \frac{\partial z}{\partial u} & \frac{\partial z}{\partial v} & \frac{\partial z}{\partial w} \end{matrix} \right |,$$ and the volume element is $$dV \ = \ dx\,dy\,dz \ = \ \left | \frac{\partial(x,y,z)}{\partial(u,v,w)}\right | du\,dv\,dw.$$ After rectangular (aka Cartesian) coordinates, the two most common and useful coordinate systems in 3 dimensions are cylindrical and spherical coordinates.
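As a quick check of the 3-variable formula, the spherical-coordinate Jacobian can be computed symbolically; a sketch using sympy (the variable names and the polar-angle convention are my choices):

    import sympy as sp

    rho, theta, phi = sp.symbols('rho theta phi', positive=True)

    # Spherical coordinates, with theta the polar angle measured from the z-axis
    x = rho * sp.sin(theta) * sp.cos(phi)
    y = rho * sp.sin(theta) * sp.sin(phi)
    z = rho * sp.cos(theta)

    J = sp.Matrix([x, y, z]).jacobian(sp.Matrix([rho, theta, phi]))
    print(sp.simplify(J.det()))   # rho**2*sin(theta)

So $dV = \rho^2 \sin\theta \, d\rho\, d\theta\, d\phi$, the familiar spherical volume element.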
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09): Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02): The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02): Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08): The measurement of prompt D-meson production as a function of multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03): The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2016-03): Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...

Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07): The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03): The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-09): The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb-Pb collisions at 2.76 TeV (Elsevier, 2016-12): We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05): Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I have a method for you that will help you find valid solutions (matrices) for many possible values of $m,n$. However, it is not a complete answer to your question: it can try to find a matrix for a particular value of $m,n$, but it might fail, and if it fails you've learned nothing; my method cannot prove that no such matrix exists. The method is based upon the following observation:

Theorem. If we have a valid $m_1\times n_1$ matrix $X_1$ that meets all your requirements (for parameters $m_1,n_1$) and a valid $m_2\times n_2$ matrix $X_2$ that meets all your requirements (for parameters $m_2,n_2$), then we can find a valid $m\times n$ matrix that meets all your requirements (for parameters $m,n$), where $m=m_1+m_2$ and $n=n_1+n_2$.

Proof. Use the following matrix: $$X = \begin{pmatrix} 0 &X_1 \\ X_2 &Z \end{pmatrix},$$ where $Z$ is arbitrary. Suppose $Xy=0$, where $y \in \{-1,0,1\}^n$, and write $y_1$ for the last $n_1$ coefficients of $y$. The first $m_1$ coefficients of $Xy$ equal $X_1 y_1$; since these vanish and $X_1 y_1 = 0$ implies $y_1 = 0$, the last $n_1$ coefficients of $y$ are zero. Thus, letting $y_2$ be the restriction of $y$ to its first $n_2$ coefficients, the last $m_2$ coefficients of $Xy$ reduce to $X_2 y_2 = 0$. But this implies $y_2 = 0$, i.e., $y=0$. In other words, if $Xy=0$, then $y=0$. This proves that $X$ is a valid matrix.

Now this lets us find many values of $m,n$ where it is possible to find a valid matrix $X$. In particular, seed things with some small matrices for various small values of $m,n$ (using any convenient method); then you can derive some larger values of $m,n$ that also have such a matrix. Here are some observations that will help you identify seed values $m,n$ where such a matrix $X$ exists:

First, a trivial observation: if $n \le m$, it is easy to find a valid matrix $X$: just use the identity matrix (if $n<m$, fill in the extra rows arbitrarily). No need for integer linear programming. So this problem is only interesting when $n>m$.

Second, if $n$ is small enough, you can express this as a SAT instance and apply an off-the-shelf SAT solver. In particular, you can construct a SAT instance where the $x_{i,j}$ are the variables. For each possible non-zero vector $y \in \{-1,0,1\}^n$, you add a complicated constraint enforcing the requirement that $Xy \ne 0$. (You'll need $m$ adders, each of which adds up to $n$ 0-or-1 values, and then a comparison to test whether the results of all $m$ adders are all zero or not.) The SAT instance will be of exponential size, with more than $3^n$ constraints, so this is only helpful for very small values of $n$, but it will still help you construct some values of $m,n$ where you can find a valid matrix $X$. In this way, I would expect that, for each $n\le 8$ (or so), you can probably find the largest value of $m$ such that there exists a valid $m\times n$ matrix.

Third, you can use bcorso's answer to handle all cases where $n=m+1$ (there is always a valid solution, for $m\ge 3$).

Now once you have those seed values, you can use the Theorem above to find additional values of $m,n$ where such a matrix exists. As I stated above, this is not a complete solution, but it might help you solve your problem at least some of the time.
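As a sketch of the theorem's construction (my own code, not the answerer's): given two valid seed matrices as numpy arrays, the block matrix can be assembled directly, and validity can be brute-force checked for small $n$.

    import numpy as np
    from itertools import product

    def combine(X1, X2):
        """Assemble X = [[0, X1], [X2, Z]] as in the theorem (Z taken as zeros)."""
        m1, n1 = X1.shape
        m2, n2 = X2.shape
        top = np.hstack([np.zeros((m1, n2)), X1])
        bottom = np.hstack([X2, np.zeros((m2, n1))])
        return np.vstack([top, bottom])

    def is_valid(X):
        """Check X y != 0 for every nonzero y in {-1,0,1}^n (exponential in n!)."""
        n = X.shape[1]
        return all(np.any(X @ np.array(y))
                   for y in product((-1, 0, 1), repeat=n) if any(y))

    # Two identity seeds (the trivial n <= m case) combine into a valid 4x4:
    X = combine(np.eye(2), np.eye(2))
    print(is_valid(X))   # True

The exhaustive check is exactly the co-NP-hard part discussed below, which is why it is only usable for small $n$.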
For general $m,n$, I doubt that there's any straightforward formulation of this as a polynomial-size integer linear program (unless $\text{NP} = \text{NP}^\text{co-NP}$, or the polynomial hierarchy collapses, or something like that, which is not expected to hold; or unless you use some special knowledge about the solution to this problem). Just telling whether a candidate value of $X$ is indeed a valid solution to this problem is $\text{co-NP}$-complete; see https://cstheory.stackexchange.com/q/20277/5038. In other words, recognizing a valid solution can't be done in polynomial time (as far as we know). This means that the problem of finding a valid solution is in $\text{NP}^\text{co-NP}$. In contrast, integer linear programming is in $\text{NP}$. Therefore, without using some special knowledge about this problem, I don't think you can find a generic reduction from your problem to integer linear programming unless $\text{NP}^\text{co-NP} = \text{NP}$ (something that most complexity theorists believe is not likely to hold).

Don't take this too seriously. I'm not trying to prove a formal theorem or anything like that; I'm just trying to give some weak evidence why this problem does not look like it has a straightforward, generic formulation as an instance of integer linear programming.

Of course, you can probably get a formulation as an integer linear program with a number of constraints that is exponential (say, in $m$), similar to how we got a SAT instance. It'll be uglier, because expressing the constraint that some vector (namely, $Xy$) is not identically zero is ugly in ILP, but it's doable. See, e.g., Express boolean logic operations in zero-one integer linear programming (ILP). However, I'm not sure this will be any better than the SAT-based method. If I were implementing this, I would start by trying the SAT-based method, because (if you use a suitable front end, like STP) I think it will be easier to implement and might work just as well or better than an ILP-based formulation.
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02): The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...

Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02): In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows one to select events with the same centrality ...

First measurement of jet mass in Pb-Pb and p-Pb collisions at the LHC (Elsevier, 2018-01): This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...

First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06): The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...

D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03): The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...

Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05): We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...

Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02): The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...

$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03): An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...

J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01): We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16): Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
A long while ago I promised to take you from the action of the modular group $\Gamma=PSL_2(\mathbb{Z})$ on the lattices at hyperdistance $n$ from the standard orthogonal lattice $L_1$ to the corresponding 'monstrous' Grothendieck dessin d'enfant. Speaking of dessins d'enfant, let me point you to the latest intriguing paper by Yuri I. Manin and Matilde Marcolli, arXived a few days ago: Quantum Statistical Mechanics of the Absolute Galois Group, on how to build a quantum system for the absolute Galois group from dessins d'enfant (more on this, I promise, later).

Where were we? We've seen natural one-to-one correspondences between (a) points on the projective line over $\mathbb{Z}/n\mathbb{Z}$, (b) lattices at hyperdistance $n$ from $L_1$, and (c) coset classes of the congruence subgroup $\Gamma_0(n)$ in $\Gamma$. How to get from there to a dessin d'enfant?

The short answer is: it's all in Ravi S. Kulkarni's paper, "An arithmetic-geometric method in the study of the subgroups of the modular group", Amer. J. Math 113 (1991) 1053-1135. It is a complete mystery to me why Tatitscheff, He and McKay don't mention Kulkarni's paper in "Cusps, congruence groups and monstrous dessins", because all they do (and much more) is in Kulkarni.

I've blogged about Kulkarni's paper years ago:
– In The Dedekind tessellation it was all about assigning special polygons to subgroups of finite index of $\Gamma$.
– In Modular quilts and cuboid tree diagrams it went on to assign (multiple) cuboid trees to a (conjugacy class of) such finite-index subgroup.
– In Hyperbolic Mathieu polygons the story continued with a finite-to-one connection between special hyperbolic polygons and cuboid trees.
– In Farey codes it was shown how to encode such polygons by a Farey sequence.
– In Generators of modular subgroups it was shown how to get generators of the finite-index subgroups from this Farey sequence.

The modular group is a free product \[ \Gamma = C_2 \ast C_3 = \langle s,u~|~s^2=1=u^3 \rangle \] with lifts of $s$ and $u$ to $SL_2(\mathbb{Z})$ given by the matrices \[ S=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},~\qquad U= \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \]

As a result, any permutation representation of $\Gamma$ on a set $E$ can be represented by a $2$-coloured graph (with black and white vertices) whose edges correspond to the elements of the set $E$. Each white vertex has two (or one) edges connected to it and every black vertex has three (or one). These edges are the elements of $E$ permuted by $s$ (for white vertices) and $u$ (for black ones), the order of the 3-cycle determined by going counterclockwise round the vertex. Clearly, if there's just one edge connected to a vertex, it gives a fixed point (or 1-cycle) in the corresponding permutation.

The 'monstrous dessin' for the congruence subgroup $\Gamma_0(n)$ is the picture one gets from the permutation $\Gamma$-action on the points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$, or equivalently, on the coset classes or on the lattices at hyperdistance $n$. Kulkarni's paper (or the blogposts above) tells you how to get at this picture starting from a fundamental domain of $\Gamma_0(n)$ acting on the upper half-plane by Moebius transformations.

Sage gives a nice image of this fundamental domain via the command

FareySymbol(Gamma0(n)).fundamental_domain()

Here's the image for $n=6$: the boundary points (on the half-lines through $0$ and $1$ and on the $4$ half-circles) need to be identified, which is indicated by matching colours.
So the 2 half-lines are identified, as are the two blue (and the two green) half-circles, in opposite directions. To get the dessin from this, let's first look at the interior points. A white vertex is a point in the interior where two black and two white tiles meet; a black vertex corresponds to an interior point where three black and three white tiles meet. Points on the boundary where tiles meet are coloured red, and after identification two of these reds give one white or black vertex.

Here's the intermediate picture. The two top red points are identified, giving a white vertex, as do the two reds on the blue half-circles and the two reds on the green half-circles, because after identification two black and two white tiles meet there. This then gives us the 'monstrous' modular dessin for $n=6$ of the Tatitscheff, He and McKay paper.

Let's try a more difficult example: $n=12$. Sage gives us the fundamental domain, from which we get the intermediate picture, and spotting the correct identifications then gives us the 'monstrous' dessin for $\Gamma_0(12)$ from the THM-paper.

In general there are several of these 2-coloured graphs giving the same permutation representation, so the obtained 'monstrous dessin' depends on the choice of fundamental domain. You'll have noticed that the domain for $\Gamma_0(6)$ was symmetric, whereas the one Sage provides for $\Gamma_0(12)$ is not. This is caused by Sage using the Farey code \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_1 & \frac{1}{5} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & 1} \]

One of the nice results from Kulkarni's paper is that for any $n$ there is a symmetric Farey code, giving a perfectly symmetric fundamental domain for $\Gamma_0(n)$. For $n=12$ this symmetric code is \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & \frac{5}{6} \ar@{-}[r]_1 & 1} \] It would be nice to see whether using these symmetric Farey codes gives other 'monstrous dessins' than those in the THM-paper.

It remains to identify the edges of the dessin with the lattices at hyperdistance $n$ from $L_1$. Using the tricks from the previous post it is quite easy to check that for any $n$ the monstrous dessin for $\Gamma_0(n)$ starts off with the lattices $L_{M,\frac{g}{h}}$ as below. Let's do a sample computation showing that the action of $s$ on $L_n$ gives $L_{\frac{1}{n}}$: \[ L_n.s = \begin{bmatrix} n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} \] and then, as last time, to determine the class of the lattice spanned by the rows of this matrix we have to compute \[ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -n \end{bmatrix} \] which is class $L_{\frac{1}{n}}$. And similarly for the other edges.
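As a hedged aside (my addition, not from the original post): besides fundamental_domain(), the FareySymbol object should also expose the Farey fractions and the resulting generators. The method names below are my recollection of the Sage API and should be checked against the Sage documentation.

    # Run inside Sage, not plain Python. FareySymbol(...) is quoted in the post;
    # .fractions() and .generators() are assumed method names.
    F = FareySymbol(Gamma0(12))
    print(F.fractions())     # the Farey sequence encoding the domain
    print(F.generators())    # generators of Gamma0(12) read off the pairings
    F.fundamental_domain()   # the picture discussed in the post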
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs.

Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only need to pass the classes rather than ace them. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra." Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Čech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric.

It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building).

It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad.

I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. There seem to be a lot of Mickey Mouse projects that don't help people much, but more and more people seem to do more serious things, and that seems to have become a bonus. One of my professors used the phrase to describe a bunch of REUs: it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of.

@TedShifrin I think universities have been ostensibly a game of credentialism for a long time; they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students.

In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \;\text{ s.t. }\; t > T \implies \| x(t) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
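As an aside on the quoted definition, here is a hedged numerical illustration of mine for the scalar system $x' = -x$, whose origin is asymptotically stable: for each $\varepsilon$ one can exhibit a $T$ explicitly.

    import numpy as np

    def x(t, x0=1.0):
        return x0 * np.exp(-t)        # exact solution of x' = -x

    for eps in [1e-1, 1e-3, 1e-6]:
        T = -np.log(eps)              # with x0 = 1, ||x(t)|| < eps for all t > T
        ts = np.linspace(T + 1e-9, T + 50, 1000)
        assert np.all(np.abs(x(ts)) < eps)
        print(f"eps = {eps:g}: T = {T:.2f} works")

Note the quoted condition is only the attractivity half; full asymptotic stability also requires Lyapunov stability of the equilibrium.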
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework.

There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'...

...and neither Rolle's theorem nor the mean value theorem needs the axiom of choice.

Thus under finitism we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book before commenting.

typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome
Indeed, in quantum field theory $e$ has the interpretation of a coupling constant, as well as of an electric charge. What I believe you are describing is Compton scattering, $e^- \gamma \to e^- \gamma$. We have that $$\frac14 \sum_{\mathrm{spins}} |\mathcal M|^2 = -2e^4 \left(\frac{u}{s} + \frac{s}{u} \right)$$ after simplification (in the massless limit), where $s,u$ are the usual Mandelstam variables. Now, as an example, the cross section takes the form $$\frac{\mathrm d\sigma}{\mathrm d \cos\theta} = -\frac{e^4}{16\pi s} \left( \frac{u}{s} + \frac{s}{u} \right).$$ The differential cross section goes as the amplitude squared and is thus proportional to the probability. So, knowing the kinematic variables, we can in principle deduce $e$ from the outcome of such a scattering experiment.

It should be noted as well that $e$ has a non-zero beta function, which implies it flows to different values as you change the energy scale of the process; thus there is no single fixed value of $e$. However, these changes are small, and outside high-energy physics one can stick with the classical value.

As for units, there is no disagreement. $[m\bar\psi \psi] = 4$ in terms of mass dimension; obviously $[m] = 1$, so $[\psi] = \frac32$, and $[A_\mu] = 1$. Thus in natural units, in four dimensions, $[e] = 0$. The Planck charge goes as $\sqrt{\hbar c/k_e}$, so in units wherein $\hbar=c=1$ it is easy to see that the coupling constant is dimensionless. If you reinstate the factors of $\hbar, c$ in any QFT formula, you will find $e$ becomes a charge again.
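To make the "deduce $e$ from the outcome" point concrete, here is a small sketch of mine evaluating the quoted formula. It assumes the standard massless $2\to2$ parametrization $u = -s(1+\cos\theta)/2$ (my addition, not stated in the answer) and natural units with $e^2 = 4\pi\alpha$.

    import numpy as np

    # Evaluate dsigma/dcos(theta) = -(e^4 / (16 pi s)) * (u/s + s/u)
    alpha = 1 / 137.036
    e2 = 4 * np.pi * alpha        # e^2 in natural units
    s = 1.0                       # pick a scale; the result scales as 1/s

    cos_t = np.linspace(-0.999, 0.999, 5)      # avoid the u -> 0 endpoint
    u = -s * (1 + cos_t) / 2
    dsig = -(e2**2 / (16 * np.pi * s)) * (u / s + s / u)
    for c, d in zip(cos_t, dsig):
        print(f"cos(theta) = {c:+.3f}: dsigma/dcos(theta) = {d:.3e}")

Since $u < 0$ in the physical region, the bracket is negative and the overall sign makes the cross section positive, with the expected rise toward backward scattering as $u \to 0$.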
The Integral Test

Picture infinitely many rectangles of width 1 and height $a_n$, so that the area of the $n^{th}$ rectangle is $a_n$. Then the series $\displaystyle\sum_{n=1}^\infty a_n$ is equal to the sum of the areas of these infinitely many rectangles, and that total area can be compared with the area under the curve of a decreasing function $f$ with $f(n)=a_n$. Summary: either both the integral and the series converge, or both diverge.

From our work with improper integrals, you may have seen that the improper integral $\displaystyle\int_1^\infty\frac{1}{x^p}\,dx$ converges if $p>1$ and diverges if $p\le 1$. By using the integral test, we therefore get our $p$-series test, which is extremely useful, especially when used to find comparable series for the comparison tests.
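Since the test is about comparing a sum with an integral, a quick numerical sketch (not part of the original page; plain Julia) makes the dichotomy visible:

# Sketch: compare partial sums of the p-series with ∫_1^N x^{-p} dx.
function psum(p, N)
    s = 0.0
    for n in 1:N
        s += 1.0 / n^p
    end
    return s
end

for p in (0.5, 1.0, 2.0)
    N = 10^6
    integral = p == 1.0 ? log(N) : (N^(1 - p) - 1) / (1 - p)
    println("p = $p: partial sum = ", psum(p, N), ", integral = ", integral)
end
# For p = 2 the sum stabilizes near π²/6 ≈ 1.6449; for p ≤ 1 both the sum and
# the integral keep growing with N, matching the p-series test.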
tl;dr They reported a condition number, but not necessarily the right condition number for the matrix, because there is a difference, and it is specific to the matrix and the right-hand side vector.

If you look at the documentation for *getrs, it says the forward error bound is $$ \frac{\|x-x_0\|_\infty}{\|x\|_\infty} \lesssim \mathrm{cond}(A,x)u \leq \mathrm{cond}(A)u. $$ Here $\mathrm{cond}(A,x)$ is not quite the usual condition number $\kappa_\infty(A)$, but rather $$ \mathrm{cond}(A,x) = \frac{\| |A^{-1}||A||x| \|_\infty}{\|x\|_\infty}, \qquad \mathrm{cond}(A) = \| |A^{-1}||A| \|_\infty. $$ (Inside the norms these are component-wise absolute values.) See, for example, Iterative refinement for linear systems and LAPACK by Higham, or Higham's Accuracy and Stability of Numerical Algorithms (7.2).

For your example, I took a pseudospectral differential operator for a similar problem with $n=128$, and there is in fact a big difference between $\||A^{-1}||A|\|_\infty$ and $\kappa_\infty(A)$: I computed $7\times 10^3$ and $2.6\times 10^7$. That is enough to explain the observation that this happens for all right-hand sides, because the orders of magnitude roughly match what is seen in Table 3.1 (3-4 orders better errors). This doesn't work when I try the same for just a random ill-conditioned matrix, so it has to be a property of $A$.

An explicit example for which the two condition numbers don't match, which I took from Higham (7.17, p.124), due to Kahan, is $$ \begin{pmatrix}2&-1&1\\-1&\epsilon&\epsilon\\1&\epsilon&\epsilon\end{pmatrix}, \qquad \begin{pmatrix}2+2\epsilon\\-\epsilon\\\epsilon\end{pmatrix}. $$ Another example I found is just the plain Vandermonde matrix on [1:10] with random $b$. I went through MatrixDepot.jl and some other ill-conditioned matrices also produce this type of result, like triw and moler.

Essentially, what's going on is that when you analyze the stability of solving linear systems with respect to perturbations, you first have to specify which perturbations you are considering. When solving linear systems with LAPACK, this error bound considers component-wise perturbations in $A$, but no perturbation in $b$. So this is different from the usual $\kappa(A) = \|A^{-1}\|\|A\|$, which considers normwise perturbations in both $A$ and $b$.

Consider (as a counterexample) also what would happen if you didn't make the distinction. We know that using iterative refinement with double precision (see link above) we can get the best possible forward relative error of $O(u)$ for those matrices with $\kappa(A)\ll 1/u$. If linear systems could never be solved to accuracy better than $\kappa(A)u$, how would refining solutions possibly work?

P.S. It matters that ?getrs says the computed solution is the true solution of $(A + E)x = b$ with a perturbation $E$ in $A$, but no perturbation in $b$. Things would be different if perturbations were allowed in $b$.

Edit: To show more directly, in code, that this is not a fluke or a matter of luck, but rather the (unusual) consequence of two condition numbers being very different for some specific matrices, i.e., $$ \mathrm{cond}(A,x) \approx \mathrm{cond}(A) \ll \kappa(A). $$
using LinearAlgebra, Printf, MatrixDepot

function main2(m=128)
    A = matrixdepot("chebspec", m)^2
    A[1,:] .= 0; A[end,:] .= 0
    A[1,1] = A[end,end] = 1
    best, worst = Inf, -Inf
    for k = 1:2^5
        b = randn(m)
        x = A \ b
        x_exact = Float64.(big.(A) \ big.(b))
        err = norm(x - x_exact, Inf) / norm(x_exact, Inf)
        best, worst = min(best, err), max(worst, err)
    end
    @printf "Best relative error: %.3e\n" best
    @printf "Worst relative error: %.3e\n" worst
    @printf "Predicted error κ(A)*ε: %.3e\n" cond(A, Inf)*eps()
    @printf "Predicted error cond(A)*ε: %.3e\n" opnorm(abs.(inv(A))*abs.(A), Inf)*eps()
end

julia> main2()
Best relative error: 2.156e-14
Worst relative error: 2.414e-12
Predicted error κ(A)*ε: 8.780e-09
Predicted error cond(A)*ε: 2.482e-12

Edit 2: Here is another example of the same phenomenon, where the different condition numbers unexpectedly differ by a lot. This time, $$ \mathrm{cond}(A, x) \ll \mathrm{cond}(A) \approx \kappa(A). $$ Here $A$ is the 10×10 Vandermonde matrix on $1:10$; when $x$ is chosen randomly, $\mathrm{cond}(A,x)$ is noticeably smaller than $\kappa(A)$, and the worst case $x$ is given by $x_i = i^a$ for some $a$.

function main4(m=10)
    A = matrixdepot("vand", m)
    F = lu(A)
    F_big = lu(big.(A))
    AA = abs.(inv(A))*abs.(A)
    for k = 1:12
        # b = randn(m)        # good case
        b = (1:m).^(k-1)      # worst case
        x, x_exact = F \ b, F_big \ big.(b)
        err = norm(x - x_exact, Inf) / norm(x_exact, Inf)
        predicted = norm(AA*abs.(x), Inf)/norm(x, Inf)*eps()
        @printf "relative error[%2d] = %.3e (predicted cond(A,x)*ε = %.3e)\n" k err predicted
    end
    @printf "predicted κ(A)*ε = %.3e\n" cond(A)*eps()
    @printf "predicted cond(A)*ε = %.3e\n" opnorm(AA, Inf)*eps()
end

Average case (almost 9 orders of magnitude better error):

julia> T.main4()
relative error[1] = 6.690e-11 (predicted cond(A,x)*ε = 2.213e-10)
relative error[2] = 6.202e-11 (predicted cond(A,x)*ε = 2.081e-10)
relative error[3] = 2.975e-11 (predicted cond(A,x)*ε = 1.113e-10)
relative error[4] = 1.245e-11 (predicted cond(A,x)*ε = 6.126e-11)
relative error[5] = 4.820e-12 (predicted cond(A,x)*ε = 3.489e-11)
relative error[6] = 1.537e-12 (predicted cond(A,x)*ε = 1.729e-11)
relative error[7] = 4.885e-13 (predicted cond(A,x)*ε = 8.696e-12)
relative error[8] = 1.565e-13 (predicted cond(A,x)*ε = 4.446e-12)
predicted κ(A)*ε = 4.677e-04
predicted cond(A)*ε = 1.483e-05

Worst case ($a=1,\ldots,12$):

julia> T.main4()
relative error[ 1] = 0.000e+00 (predicted cond(A,x)*ε = 6.608e-13)
relative error[ 2] = 1.265e-13 (predicted cond(A,x)*ε = 3.382e-12)
relative error[ 3] = 5.647e-13 (predicted cond(A,x)*ε = 1.887e-11)
relative error[ 4] = 8.895e-74 (predicted cond(A,x)*ε = 1.127e-10)
relative error[ 5] = 4.199e-10 (predicted cond(A,x)*ε = 7.111e-10)
relative error[ 6] = 7.815e-10 (predicted cond(A,x)*ε = 4.703e-09)
relative error[ 7] = 8.358e-09 (predicted cond(A,x)*ε = 3.239e-08)
relative error[ 8] = 1.174e-07 (predicted cond(A,x)*ε = 2.310e-07)
relative error[ 9] = 3.083e-06 (predicted cond(A,x)*ε = 1.700e-06)
relative error[10] = 1.287e-05 (predicted cond(A,x)*ε = 1.286e-05)
relative error[11] = 3.760e-10 (predicted cond(A,x)*ε = 1.580e-09)
relative error[12] = 3.903e-10 (predicted cond(A,x)*ε = 1.406e-09)
predicted κ(A)*ε = 4.677e-04
predicted cond(A)*ε = 1.483e-05

Edit 3: Another example is the Forsythe matrix, which is a perturbed Jordan block of any size, of the form $$ A = \begin{pmatrix} 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \\ \epsilon&0&0&0 \end{pmatrix}. $$ This has $\|A\|=1$, $\|A^{-1}\|=\epsilon^{-1}$, so $\kappa_\infty(A) = \epsilon^{-1}$, but $|A^{-1}| = A^{-1} = |A|^{-1}$, so $\mathrm{cond}(A) = 1$.
And as can be verified by hand, solving systems of linear equations like $Ax=b$ with pivoting is extremely accurate, despite the potentially unbounded $\kappa_\infty(A)$. So this matrix too will yield unexpectedly precise solutions.

Edit 4: Kahan matrices are also like this, with $\mathrm{cond}(A)\ll \kappa(A)$:

A = matrixdepot("kahan", 48)
κ, c = cond(A, Inf), opnorm(abs.(inv(A))*abs.(A), Inf)
@printf "κ=%.3e c=%.3e ratio=%g\n" κ c (c/κ)

κ=8.504e+08 c=4.099e+06 ratio=0.00482027
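To make the gap concrete for the 3×3 Kahan example quoted above, here is a small sketch of mine (modern Julia; the value of ε is an arbitrary choice for illustration):

# Sketch: compute cond(A,x) = || |A⁻¹||A||x| ||_∞ / ||x||_∞, cond(A), and κ_∞(A)
# for Higham's ε-example, to see the different condition numbers directly.
using LinearAlgebra

eps_ = 1e-8
A = [2.0 -1.0 1.0; -1.0 eps_ eps_; 1.0 eps_ eps_]
b = [2 + 2eps_, -eps_, eps_]
x = A \ b

Ainv   = inv(A)
condAx = norm(abs.(Ainv) * abs.(A) * abs.(x), Inf) / norm(x, Inf)
condA  = opnorm(abs.(Ainv) * abs.(A), Inf)
kappa  = cond(A, Inf)
println("cond(A,x) = $condAx, cond(A) = $condA, κ_∞(A) = $kappa")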
This came up in another thread. I thought I'd make a few notes about them. It should be spelled Painleve, I can't edit the typo in the title :-(

Google finds http://www.physics.umd.edu/grt/taj/776b/hw1soln.pdf, which, being a homework solution, will probably disappear soon. The line element is:

[tex] -dT^2 + \left( dr + \sqrt{\frac{2M}{r}}\,dT \right)^2 + r^2 \left( d\theta^2 + \sin^2\theta\, d\phi^2 \right) [/tex]

The following coordinate transformation will map PG coordinates into Schwarzschild coordinates:

[tex] t = T - 2\sqrt{2Mr} + 4M\,\mathrm{arctanh}\left( \sqrt{\frac{r}{2M}} \right) [/tex]

[add] arctanh(x) is defined only for x<1; since arctanh(x) = 1/2 (ln(1+x)-ln(1-x)), more work needs to be done to deal with the sign issues that arise when making x > 1.

The metric is not a function of T, therefore

[itex]u_0 = g_{0i} u^i = (-1+2M/r)\,dT/d\tau + \sqrt{2M/r}\,dr/d\tau[/itex] = constant,

a conserved energy-like quantity of the orbit. Here [itex]u^i[/itex] is the 4-velocity [itex](dT/d\tau, dr/d\tau, d\theta/d\tau, d\phi/d\tau)[/itex].

Similarly, since the metric is not a function of [itex]\phi[/itex],

[itex]u_3 = g_{3i} u^i = r^2 \sin^2\theta \, d\phi/d\tau[/itex] = constant,

representing a conserved angular momentum-like quantity of the orbit.

Generally, the orbit will be taken to be in the equatorial plane, [itex]\theta = \pi/2[/itex], and the above two conserved quantities plus the metric equation will be sufficient to calculate orbital motion.
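A quick numerical sanity check of these quantities (my addition, not part of the original post): for a "raindrop" falling radially from rest at infinity one has dT/dτ = 1 and dr/dτ = -√(2M/r) in PG coordinates, and both the normalization u·u = -1 and the conserved energy u_0 = -1 should come out exactly, at every r.

# Sketch: check u·u = -1 and u_0 = -1 for the radial raindrop in PG coordinates.
M = 1.0
for r in (2.5, 5.0, 50.0, 5000.0)
    uT, ur = 1.0, -sqrt(2M/r)
    g_TT, g_Tr, g_rr = -(1 - 2M/r), sqrt(2M/r), 1.0
    norm2 = g_TT*uT^2 + 2g_Tr*uT*ur + g_rr*ur^2   # u·u, should be -1
    u0    = g_TT*uT + g_Tr*ur                      # conserved energy component
    println("r = $r: u·u = $norm2, u_0 = $u0")
end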
Sperner's theorem

Statement of the theorem

Sperner's theorem as originally stated is a result about set systems. Suppose that you want to find the largest collection [math]\mathcal{A}[/math] of subsets of [math][n][/math] such that no set in [math]\mathcal{A}[/math] is a proper subset of any other. Then the best you can do is choose all the sets of some fixed size, and of course the best size to pick is [math]\lfloor n/2\rfloor[/math], since the binomial coefficient [math]\binom nm[/math] is maximized when [math]m=\lfloor n/2\rfloor.[/math]

Sperner's theorem is closely related to the density Hales-Jewett theorem: in fact, it is nothing other than DHJ(2) with the best possible bound. To see this, we associate each set [math]A\subset[n][/math] with its characteristic function (that is, the sequence that is 0 outside A and 1 in A). If we have a pair of sets [math]A\subset B,[/math] then the two sequences form a combinatorial line in [math][2]^n.[/math] For example, if n=6 and A and B are the sets [math]\{2,3\}[/math] and [math]\{2,3,4,6\}[/math], then we get the combinatorial line that consists of the two points 011000 and 011101, which we can denote by 011*0* (so the wildcard set is [math]\{4,6\}[/math]).

Proof of the theorem

There are several proofs, but perhaps the most enlightening is a very simple averaging argument that proves a stronger result. Let [math]\mathcal{A}[/math] be a collection of subsets of [n]. For each k, let [math]\delta_k[/math] denote the density of [math]\mathcal{A}[/math] in the kth layer of the cube: that is, the number of sets in [math]\mathcal{A}[/math] of size k divided by [math]\binom nk.[/math] The equal-slices measure of [math]\mathcal{A}[/math] is defined to be [math]\delta_0+\dots+\delta_n.[/math]

Now the equal-slices measure of [math]\mathcal{A}[/math] is easily seen to be equal to the following quantity. Let [math]\pi[/math] be a random permutation of [n], let [math]U_0,U_1,U_2,\dots,U_n[/math] be the sets [math]\emptyset, \{\pi(1)\},\{\pi(1),\pi(2)\},\dots,[n],[/math] and let [math]\mu(\mathcal{A})[/math] be the expected number of the sets [math]U_i[/math] that belong to [math]\mathcal{A}.[/math] These are the same by linearity of expectation and the fact that the probability that [math]U_k[/math] belongs to [math]\mathcal{A}[/math] is [math]\delta_k.[/math]

Therefore, if the equal-slices measure of [math]\mathcal{A}[/math] is greater than 1, then the expected number of sets [math]U_k[/math] in [math]\mathcal{A}[/math] is greater than 1, so there must exist a permutation for which it is at least 2, and that gives us a pair of sets with one contained in the other. To see that this implies Sperner's theorem, one just has to make the simple observation that a set with equal-slices measure at most 1 must have cardinality at most [math]\binom n{\lfloor n/2\rfloor}.[/math] (If n is odd, so that there are two middle layers, then it is not quite so obvious that to have an extremal set you must pick one or other of the layers, but this is the case.) This stronger version of the statement is called the LYM inequality.

Multidimensional version

The following proof is a variant of the Gunderson-Rodl-Sidorenko result. Its parameters are a little worse, but the proof is a little simpler.

Proposition 1: Let [math]A \subseteq \{0,1\}^n[/math] have density [math]\delta[/math]. Let [math]Y_1, \dots, Y_d[/math] be a partition of [math][n][/math] with [math]|Y_i| \geq r[/math] for each [math]i[/math].
If [math]\delta^{2^d} - \frac{d}{\sqrt{\pi r}} \gt 0, [/math] (1) then [math]A[/math] contains a nondegenerate combinatorial subspace of dimension [math]d[/math], with its [math]i[/math]th wildcard set a subset of [math]Y_i[/math].

Proof: Let [math]C_i[/math] denote a random chain from [math]0^{|Y_i|}[/math] up to [math]1^{|Y_i|}[/math], thought of as residing in the coordinates [math]Y_i[/math], with the [math]d[/math] chains chosen independently. Also, let [math]s_i, t_i[/math] denote independent Binomial[math](|Y_i|, 1/2)[/math] random variables, [math]i \in [d][/math]. Note that [math]C_i(s_i)[/math] and [math]C_i(t_i)[/math] are (dependent) uniform random strings in [math]\{0,1\}^{Y_i}[/math]. We write, say, [math](C_1(s_1), C_2(t_2), C_3(t_3), \dots, C_d(s_d)) [/math] (2) for the string in [math]\{0,1\}^n[/math] formed by putting [math]C_1(s_1)[/math] into the [math]Y_1[/math] coordinates, [math]C_2(t_2)[/math] into the [math]Y_2[/math] coordinates, etc. Note that each string of this form is also uniformly random, since the chains are independent.

If all [math]2^d[/math] strings of the form in (2) are simultaneously in [math]A[/math] then we have a [math]d[/math]-dimensional subspace inside [math]A[/math] with wildcard sets that are subsets of [math]Y_1, \dots, Y_d[/math]. All [math]d[/math] dimensions are nondegenerate iff [math]s_i \neq t_i[/math] for all [math]i[/math]. Since [math]s_i[/math] and [math]t_i[/math] are independent Binomial[math](|Y_i|, 1/2)[/math]'s with [math]|Y_i| \geq r[/math], we have [math]\Pr[s_i = t_i] \leq \frac{1}{\sqrt{\pi r}}.[/math]

Thus to complete the proof, it suffices to show that with probability at least [math]\delta^{2^d}[/math], all [math]2^d[/math] strings of the form in (2) are in [math]A[/math]. This is easy: writing [math]f[/math] for the indicator of [math]A[/math], the probability is [math]\mathbf{E}_{C_1, \dots, C_d} \left[\mathbf{E}_{s_1, \dots, t_d}[f(C_1(s_1), \dots, C_d(s_d)) \cdots f(C_1(t_1), \dots, C_d(t_d))]\right].[/math] Since [math]s_1, \dots, t_d[/math] are independent, the inside expectation-of-a-product can be changed to a product of expectations. [THIS STEP IS WRONG, I THINK -- Ryan] But for fixed [math]C_1, \dots, C_d[/math], each string of the form in (2) has the same distribution. Hence the above equals [math]\mathbf{E}_{C_1, \dots, C_d} \left[\mathbf{E}_{s_1, \dots, s_d}[f(C_1(s_1), \dots, C_d(s_d))]^{2^d}\right].[/math] By Jensen (or repeated Cauchy-Schwarz), this is at least [math]\left(\mathbf{E}_{C_1, \dots, C_d} \mathbf{E}_{s_1, \dots, s_d}[f(C_1(s_1), \dots, C_d(s_d))]\right)^{2^d}.[/math] But this is just [math]\delta^{2^d}[/math], since [math](C_1(s_1), \dots, C_d(s_d))[/math] is uniformly distributed. []

As an aside:

Corollary 2: If [math]A \subseteq \{0,1\}^n[/math] has density [math]\Omega(1)[/math], then [math]A[/math] contains a nondegenerate combinatorial subspace of dimension at least [math]\log_2 \log n - O(1)[/math].

If we are willing to sacrifice significantly more probability, we can find a [math]d[/math]-dimensional subspace randomly.

Corollary 3: In the setting of Proposition 1, assume [math]\delta \lt 2/3[/math] and [math] r \geq \exp(4 \ln(1/\delta) 2^d). [/math] (3) Suppose we choose a random nondegenerate [math]d[/math]-dimensional subspace of [math][n][/math] with wildcard sets [math]Z_i \subseteq Y_i[/math]. By this we mean choosing, independently for each [math]i[/math], a random nondegenerate combinatorial line within [math]\{0,1\}^{Y_i}[/math], uniformly from the [math]3^r - 2^r[/math] possibilities.
Then this subspace is entirely contained within [math]A[/math] with probability at least [math]3^{-dr}[/math]. This follows immediately from Proposition 1: having [math]r[/math] as in (3) achieves (1), hence the desired nondegenerate combinatorial subspace exists, and we pick it with probability [math]1/(3^r-2^r)^d[/math].

We can further conclude:

Corollary 4: Let [math]A \subseteq \{0,1\}^n[/math] have density [math]\delta \lt 2/3[/math] and let [math]Y_1, \dots, Y_d[/math] be disjoint subsets of [math][n][/math] with each [math]|Y_i| \geq r[/math], [math] r \geq \exp(4 \ln(1/\delta) 2^d). [/math] Choose a nondegenerate combinatorial subspace at random by picking uniformly nondegenerate combinatorial lines in each of [math]Y_1, \dots, Y_d[/math], and filling in the remaining coordinates outside of the [math]Y_i[/math]'s uniformly at random. Then with probability at least [math]\exp(-r^{O(1)})[/math], this combinatorial subspace is entirely contained within [math]A[/math].

This follows because for a random choice of the coordinates outside the [math]Y_i[/math]'s, there is a [math]\delta/2[/math] chance that [math]A[/math] has density at least [math]\delta/2[/math] over the [math]Y[/math] coordinates. We then apply the previous corollary, noting that [math]\exp(-r^{O(1)}) \ll (\delta/2)3^{-dr}[/math], even with [math]\delta[/math] replaced by [math]\delta/2[/math] in the lower bound demanded of [math]r[/math].

Strong version

An alternative argument deduces the multidimensional Sperner theorem from the density Hales-Jewett theorem. We can think of [math][2]^n[/math] as [math][2^k]^{n/k}.[/math] If we do so and apply DHJ(2^k) and translate back to [math][2]^n,[/math] then we find that we have produced a k-dimensional combinatorial subspace. This is obviously a much more sophisticated proof, since DHJ(2^k) is a very hard result, but it gives more information, since the wildcard sets turn out to have the same size.

A sign that this strong version is genuinely strong is that it implies Szemerédi's theorem. For instance, suppose you take as your set [math]\mathcal{A}[/math] the set of all sequences such that the number of 0s plus the number of 1s in even places plus twice the number of 1s in odd places belongs to some dense set in [math][3n].[/math] Then if you have a 2D subspace with both wildcard sets of size d, one wildcard set consisting of odd numbers and the other of even numbers (which this proof gives), then this implies that in your dense set of integers you can find four integers of the form a, a+d, a+2d, a+d+2d, which is an arithmetic progression of length 4.

One can also prove the above strong form of Sperner's theorem by using the multidimensional Szemerédi theorem, which has combinatorial proofs. (reference!) It states that large dense high-dimensional grids contain corners. Given a dense subset of [math][2]^n,[/math] denoted by [math]\mathcal{A}[/math], we can suppose that the elements of [math]\mathcal{A}[/math] have size about [math]\frac{n}{2}\pm C\sqrt{n}.[/math] Take a random permutation of [n]. An element of [math]\mathcal{A}[/math] is "[math]d-[/math]nice" after the permutation if it consists of [math]d[/math] intervals, each of length between [math]\frac{n}{2d}\pm C\sqrt{n}/2,[/math] and each interval begins at position [math]id[/math] for some [math]0\leq i\lt \frac{n}{d}.[/math] (Suppose that [math]d[/math] divides [math]n[/math].) Any [math]d-[/math]nice set can be represented as a point in a [math]d-[/math]dimensional [math][C\sqrt{n}]^d[/math] cube.
The sets represented by the vertices of an axis-parallel [math]d-[/math]dimensional cube in [math][C\sqrt{n}]^d[/math] form a subspace with equal-sized wildcard sets. Finding a cube is clearly more difficult than finding a corner, but its existence in dense sets also follows from the multidimensional Szemerédi theorem. All we need is to show that the expected number of [math]d-[/math]nice elements is [math]c\sqrt{n}^d[/math], where c only depends on the density of [math]\mathcal{A}[/math]. For a typical [math]m-[/math]element subset of [math]\mathcal{A}[/math] the probability that it is [math]d-[/math]nice after the permutation is about [math]\binom{n}{m}^{-1}\sqrt{n}^{d-1}.[/math] The sum over elements of [math]\mathcal{A}[/math] with size between [math]\frac{n}{2}\pm C\sqrt{n}[/math] gives that the expected number of [math]d-[/math]nice elements is [math]c\sqrt{n}^d,[/math] so there is a cube if n is large enough.

Further remarks

The k=3 generalisation of the LYM inequality is the hyper-optimistic conjecture. Sperner's theorem is also related to the Kruskal-Katona theorem.
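As a tiny sanity check of the basic theorem (my addition, in Julia), one can brute-force all families of subsets of [4] and confirm that the maximum antichain size is indeed binomial(4,2) = 6:

# Sketch: brute-force Sperner's theorem for n = 4.  Subsets of [n] are bitmasks
# 0..2^n-1; a family is a bitmask over those 2^n subsets.  We keep families in
# which no member is a proper subset of another and report the largest found.
function max_antichain(n)
    sets = 0:(2^n - 1)
    proper_subset(a, b) = (a & b) == a && a != b
    best = 0
    for fam in 0:(2^(2^n) - 1)
        members = [s for s in sets if isodd(fam >> s)]
        ok = all(!proper_subset(a, b) && !proper_subset(b, a)
                 for a in members, b in members if a < b)
        ok && (best = max(best, length(members)))
    end
    return best
end
println(max_antichain(4), " == binomial(4,2) = ", binomial(4, 2))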
Abstract

In this paper we answer a question of J. Bourgain which was motivated by questions of A. Bellow and H. Furstenberg. We show that the sequence $\{ n^{2}\}_{n=1}^{\infty}$ is $L^{1}$-universally bad. This implies that it is not true that, given a dynamical system $(X ,\Sigma, \mu, T)$ and $f\in L^{1}(\mu)$, the ergodic means \[ \frac{1}{N}\sum _{n=1}^{N}f(T^{n^{2}}(x)) \] converge almost surely.
What is the rationale of equation 7 in this paper? Could you please provide a step-by-step demonstration of this equality?

This equation is unrelated to the Heston model. It is simply the value of a European call under a constant-coefficient geometric Brownian motion, i.e. the Black and Scholes (1973) model. Here $\nu$ is the constant variance (the squared volatility) and $\mu$ is the risk-neutral drift of the asset. For a stock you could for example have $\mu = r - q$, where $r$ is the risk-free interest rate and $q$ is the dividend yield. You find a derivation of this formula in almost any introductory book on continuous-time finance - e.g. Shreve's "Stochastic Calculus for Finance II" or Musiela and Rutkowski's "Martingale Methods in Financial Modelling".

Note that there is one mistake in the formula: the authors forget to discount the call price. In their notation it should read \begin{equation} C_0 = \frac{1}{2} \color{red}{e^{-r \left( T - t_0 \right)}} \left( S_0 e^{\mu \left( T - t_0 \right)} \mathrm{erfc} \left( -\frac{d_+}{\sqrt{2}} \right) - K \mathrm{erfc} \left( -\frac{d_-}{\sqrt{2}}\right) \right), \end{equation} where \begin{equation} d_\pm = \frac{1}{\sqrt{\nu \left( T - t_0 \right)}} \left( \ln \left( \frac{S_0}{K} \right) + \left( \mu \pm \frac{1}{2} \nu \right) \left( T - t_0 \right) \right). \end{equation}

Usually you find this formula expressed in terms of the cumulative normal distribution function $\mathcal{N}(x)$ instead of the complementary error function $\mathrm{erfc}(x)$. The connection between the two is \begin{eqnarray} \frac{1}{2} \mathrm{erfc} \left( -\frac{d_\pm}{\sqrt{2}} \right) & = & \frac{1}{2} \left( 1 - \mathrm{erf} \left( -\frac{d_\pm}{\sqrt{2}} \right) \right)\\ & = & \frac{1}{2} \left( 1 + \mathrm{erf} \left( \frac{d_\pm}{\sqrt{2}} \right) \right)\\ & = & \mathcal{N} \left( d_\pm \right). \end{eqnarray}

You then get the more familiar expression \begin{equation} C_0 = e^{-r \left( T - t_0 \right)} \left( S_0 e^{\mu \left( T - t_0 \right)} \mathcal{N} \left( d_+ \right) - K \mathcal{N} \left( d_- \right) \right). \end{equation}
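For completeness, a short numerical sketch (mine, in Julia) of the corrected, discounted formula, using the erfc/Φ identity above; the parameter values are arbitrary:

# Sketch of the corrected formula; needs SpecialFunctions for erfc.
using SpecialFunctions

Φ(x) = 0.5 * erfc(-x / sqrt(2))   # standard normal CDF via erfc, as derived above

# ν plays the role of the variance here, as in the quoted d± formula.
function call_price(S0, K, r, μ, ν, τ)
    d₊ = (log(S0 / K) + (μ + ν / 2) * τ) / sqrt(ν * τ)
    d₋ = (log(S0 / K) + (μ - ν / 2) * τ) / sqrt(ν * τ)
    return exp(-r * τ) * (S0 * exp(μ * τ) * Φ(d₊) - K * Φ(d₋))
end

# With μ = r (no dividends) this reduces to the textbook Black-Scholes price:
println(call_price(100.0, 100.0, 0.05, 0.05, 0.04, 1.0))  # σ = 20%, 1y ATM ≈ 10.45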
Metric

distance on a set $X$

A function $\rho$ with non-negative real values, defined on the Cartesian product $X\times X$ and satisfying for any $x, y, z\in X$ the conditions:

1) $\rho(x,y)=0$ if and only if $x = y$ (the identity axiom);

2) $\rho(x,y) + \rho(y,z) \geq \rho(x,z)$ (the triangle axiom);

3) $\rho(x,y) = \rho(y,x)$ (the symmetry axiom).

A set on which it is possible to introduce a metric is called metrizable (cf. Metrizable space). A set provided with a metric is called a metric space.

Examples.

1) On any set there is the discrete metric: $\rho(x,y) = 0$ if $x=y$ and $\rho(x,y) = 1$ otherwise.

2) In the space $\mathbb{R}^n$ various metrics are possible, among them the Euclidean metric $\rho(x,y) = \left(\sum_i (x_i-y_i)^2\right)^{1/2}$ and the uniform metric $\rho(x,y) = \max_i |x_i-y_i|$.

3) In a Riemannian space a metric is defined by a metric tensor, or a quadratic differential form (in some sense, this is an analogue of the first metric of example 2)). For a generalization of metrics of this type see Finsler space.

4) In function spaces on a (countably) compact space there are also various metrics; for example, the uniform metric $\rho(f,g) = \sup_x |f(x)-g(x)|$ (an analogue of the second metric of example 2)), and the integral metric $\rho(f,g) = \int |f(x)-g(x)|\,dx$.

5) In normed spaces a metric is defined by the norm: $\rho(x,y) = \|x-y\|$.

6) In the space of closed subsets of a metric space there is the Hausdorff metric.

If, instead of 1), one requires only that $\rho(x,x)=0$ (so that $\rho(x,y)=0$ is possible for distinct $x$ and $y$), one obtains a pseudo-metric.

A metric (and even a pseudo-metric) makes the definition of a number of additional structures on the set possible. First of all a topology (see Topological space), and in addition a uniformity (see Uniform space) or a proximity (see Proximity space) structure. The term metric is also used to denote more general notions which do not have all the properties 1)–3); such are, for example, an indefinite metric, a symmetry on a set, etc.

Comments

Potentially, any metric space has a second metric naturally associated: the intrinsic or internal metric $\rho_{int}$. Potentially, because the definition may give $\rho_{int}(x,y)=\infty$ for some pairs of points. One defines the length (which may be $\infty$) of a continuous path $f$ as $\lim_{\epsilon\to 0} l_\epsilon(f)$, where $l_\epsilon(f)$ is the infimum of all finite sums $\sum_i \rho(f(t_i),f(t_{i+1}))$ with $\{t_i\}$ a finite subset of the parameter interval which is an $\epsilon$-net (cf. Metric space) and is listed in the natural order. Then $\rho_{int}(x,y)$ is the infimum of the lengths of paths $f$ with $f(0)=x$, $f(1)=y$, but $\rho_{int}(x,y)=\infty$ if there is no such path of finite length. No reasonable topological restriction on the space suffices to guarantee that the intrinsic "metric" (or écart) will be finite-valued. If $\rho_{int}$ is finite-valued, suitable compactness conditions will assure that minimum-length paths, i.e. paths from $x$ to $y$ of length $\rho_{int}(x,y)$, exist. When every pair of points is joined by a path (non-unique, in general) of length $\rho(x,y)$, the metric is often called convex. (This is much weaker than the surface theorists' convex metric.)
The main theorem in this area is that every locally connected metric continuum admits a convex metric [a1], [a2].

References

[a1] R.H. Bing, "Partitioning a set", Bull. Amer. Math. Soc. 55 (1949), pp. 1101–1110.

[a2] E.E. Moïse, "Grille decomposition and convexification", Bull. Amer. Math. Soc. 55 (1949), pp. 1111–1121.
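As a small illustration of the axioms (my addition, not part of the encyclopedia entry), one can spot-check the triangle axiom for the Euclidean and uniform metrics on $\mathbb{R}^3$ by random sampling:

# Sketch: sample random triples and test ρ(x,y) + ρ(y,z) ≥ ρ(x,z)
# for the Euclidean and uniform (sup) metrics of example 2).
using LinearAlgebra

function check_triangle(trials=10_000)
    ρ₂(x, y) = norm(x - y, 2)      # Euclidean metric
    ρ∞(x, y) = norm(x - y, Inf)    # uniform metric
    for _ in 1:trials
        x, y, z = randn(3), randn(3), randn(3)
        ρ₂(x, y) + ρ₂(y, z) ≥ ρ₂(x, z) - 1e-12 || return false
        ρ∞(x, y) + ρ∞(y, z) ≥ ρ∞(x, z) - 1e-12 || return false
    end
    return true
end
println(check_triangle())   # expected: true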
Let $\phi : \mathbb{Q}_{>0} \to \mathbb{Z}$ be the group morphism defined by $\phi(p) = p$ for $p$ a prime number. It follows that $\phi(\prod_i p_i^{n_i}) = \sum_i n_i p_i$, with $p_i$ a prime number and $n_i \in \mathbb{Z}$.

Let $v: \mathbb{Q}_{>0} \to \mathbb{N}$ be the map defined by $v(\prod_i p_i^{n_i}) = \sum_i \vert n_i \vert$ (with $i \neq j \Rightarrow p_i \neq p_j$). The map $v$ is not really a valuation; nevertheless $v = \sum_p \vert v_p \vert$, with $v_p$ the $p$-adic valuation.

Let $\mathcal{K} = \{ r \in \mathbb{Q}_{>0} \ \vert \ r=\prod_i p_i^{n_i} \text{ and } \sum_i n_i p_i = 0 \} = \ker (\phi)$, a subgroup of $\mathbb{Q}_{>0}$.

Definition: An element $r \in \mathcal{K}$ is called irreducible if $r \neq 1$ and if: $$ r = \prod_i r_i \text{ (with } r_i \in \mathcal{K}) \Rightarrow \exists i \text{ such that } v(r_i) \ge v(r). $$

Warning: The notion of irreducible defined above is different from the notion of "irreducible fraction".

Example: Let $(p,p+2)$ be twin primes; then $r=\frac{2p}{p+2}$ is an irreducible element of $\mathcal{K}$ and $v(r) = 3$.

Question: Let $r \in \mathcal{K}$ be an irreducible element. Is it true that $v(r) \in \{ 3,4 \}$?

Application: The group $\mathcal{K}$ is generated by its irreducible elements, so (modulo a positive answer) it is generated by the elements $r$ with $v(r) \in \{ 3,4 \}$.

Edit (14/07/14): It's a Goldbach-type problem: two days after having posted my question, I found a proof using the Goldbach conjecture (see my answer below). So an alternative question could be: Is it possible to answer my question without using the Goldbach conjecture, or is it equivalent to it?
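A brute-force sketch (my addition) that exhibits small-$v$ kernel elements. It only searches exponent vectors supported on at most three primes and does not test irreducibility, so it illustrates the objects, not the question itself. It assumes the Primes.jl package for the prime list.

# Sketch: find r = p^a q^b s^c in K = ker(φ), i.e. a·p + b·q + c·s = 0,
# with v(r) = |a|+|b|+|c| ≤ 4.  E.g. (2,3,5) with (1,1,-1) gives r = 6/5.
using Primes

ps = primes(50)
found = Any[]
for i in eachindex(ps), j in eachindex(ps), k in eachindex(ps)
    i < j < k || continue
    p, q, s = ps[i], ps[j], ps[k]
    for a in -3:3, b in -3:3, c in -3:3
        (a, b, c) == (0, 0, 0) && continue
        abs(a) + abs(b) + abs(c) ≤ 4 || continue
        a*p + b*q + c*s == 0 && push!(found, (p => a, q => b, s => c))
    end
end
println(length(found), " relations with v ≤ 4, e.g. ", found[1:min(5, end)])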
I need the following "color interpolation lemma". Actually I know a way to prove it, but I'm not very satisfied with that proof.

Lemma. Let $G=(V,E)$ be a (properly) colored graph with colors $1, \dots, n$, and let $V = V_1 \cup \dots \cup V_n$ be the corresponding partition of the vertices. Let $\mu$ be a weighting on the vertices such that each color has total weight $\mu(V_j) = 1$. Suppose $\delta>0$ is such that:

every vertex $v$ has weight $\mu(v) \le \delta$;

if $v$ has color $i$ then, for every color $j>i$, the total weight of vertices of color $j$ adjacent to $v$ is at most $(1+\delta)\mu(v)$.

Then, for every probability vector $t = (t_1, \dots, t_n)$, we can find a nonnegative weighting $\mu_t \le \mu$, depending continuously on $t$, such that the following properties are satisfied:

The support of $\mu_t$ is totally disconnected, that is, no edge of $G$ joins two vertices of positive $\mu_t$-weight.

If $P_t$ denotes the set of the vertices $v$ such that $\mu_t(v) = \mu(v)$, then for each color $i$ we have $$t_i - C \delta \le \mu(V_i\cap P_t) \le t_i,$$ where $C>0$ is a constant (say, $C=10$).

Here's a sketch of proof: We order the vertices in a way compatible with the color ordering. Given $t=(t_1,\dots,t_n)$, let's define $\mu_t$. We distribute the mass $t_1$ among the vertices of color 1, starting from the "lower" vertices. Then we distribute the mass $t_2$ among the vertices of color 2 that are not adjacent to the already charged vertices of color 1. (The $1+\delta$ condition guarantees that not much of the weight $t_2$ will be wasted.) And so on... except that extra care is needed to assure continuity of $\mu_t$ with respect to $t$. Some bookkeeping is needed to avoid, say, charging much a 2-colored vertex adjacent to a 1-colored one which is "about to" be charged. So the proof gets pretty messy. It is possible to write an "algorithmic" proof (a toy version of the greedy step is sketched below), but then it becomes a nightmare to read.

Question. Is there a clean proof for this lemma? Maybe an extremely clever formula? Alternatively (wishful thinking), is there a known reference?
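Here is the toy version of the greedy step mentioned above (my sketch; it deliberately ignores the continuity requirement, which is the hard part, and the graph data are made up for illustration):

# Sketch: greedy distribution of mass t_i over color classes, skipping vertices
# adjacent to already charged ones, so the support is totally disconnected.
function greedy_mu_t(adj, color, μ, t)
    n  = maximum(color)
    μt = zeros(length(μ))
    for i in 1:n
        remaining = t[i]
        for v in sort(findall(==(i), color))          # "lower" vertices first
            remaining ≤ 0 && break
            any(μt[w] > 0 for w in adj[v]) && continue # keep support disconnected
            μt[v] = min(μ[v], remaining)
            remaining -= μt[v]
        end
    end
    return μt
end

# Toy data: 4 vertices, single edge 1-2, colors [1,2,1,2], μ(V_i) = 1 each.
adj   = [[2], [1], Int[], Int[]]
color = [1, 2, 1, 2]
μ     = [0.5, 0.5, 0.5, 0.5]
println(greedy_mu_t(adj, color, μ, [0.6, 0.4]))   # -> [0.5, 0.0, 0.1, 0.4]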
Here we first give a step-by-step solution of the basic question 1, and then turn to other questions.

Integral representation of $S(p,q,x)$

Writing $$k^{-q}=\frac{1}{\Gamma (q)}\int_0^{\infty } e^{-k \;t} t^{q-1} \, dt\tag{s1}$$ and observing the formula for the generating function of the generalized harmonic number $$\sum _{k=1}^{\infty } z^k H_k^{(p)}=\frac{\text{Li}_p(z)}{1-z}\tag{s2}$$ we can write (4) as $$S(p,q,x) =\frac{1}{\Gamma (q)}\int_0^{\infty } \frac{t^{q-1} \text{Li}_p\left(e^{-t} x\right)}{1-e^{-t} x} \, dt\tag{s3} $$

Hence $S(p,q,x)$ has the form of a Mellin transformation, defined as $$M(f(t),t,q) = \int_{0}^\infty t^{q-1} f(t) \, dt$$ $$S(p,q,x) =\frac{1}{\Gamma (q)} M(f(p,x,t),t,q)\tag{s3a} $$ with the kernel $$f(p,x,t) =\frac{ \text{Li}_p\left(e^{-t} x\right)}{1-e^{-t} x} \tag{s3b}$$

For $x = 1$ and $p=1$ this simplifies to $$S(1,q) =\frac{1}{\Gamma (q)} M(f(t),t,q)\tag{s4} $$ The kernel simplifies to $$f(t) = \frac{-\log(1-e^{-t})}{1-e^{-t}}\tag{s4a} $$ Notice that for $t\gg 1$ this kernel approaches the kernel for the $\zeta$-function: $$f(t \to \infty) =f_{\zeta}(t) = \frac{1}{e^{t}-1}\tag{s5} $$

The method will now be illustrated by examining the simplified expression (s4). In order to find singularities in $q$, we split the integral in (s4) into two parts $F=\int_0^1 f \, dt$ and $G=\int_1^\infty f \, dt$, and notice that the integral $G$ is always convergent, so that $G$ is holomorphic in $q$. The singularities must therefore come from $F$, in particular from the vicinity of $t=0$ of the integration. Hence they can be found by expanding the integrand of $F$ into a series about $t=0$.

Singularities of $S(1,q)$

To lowest order in $t$ the integrand (s4a) is given by $$t^{q-1} \left(-\frac{\log (t)}{2}-\frac{\log (t)}{t}+\frac{1}{2}\right)$$ Integrating over $t$ from $0$ to $1$ and taking into account the $\Gamma$ function gives $$F_0 = \frac{1}{2 \Gamma (q)}\left( \frac{1}{q^2}+\frac{1}{q}+\frac{2}{(q-1)^2}\right) = \frac{1}{2 \Gamma(q) q^2}+\frac{1}{2 \Gamma(q) q}+\frac{1}{\Gamma(q) (q-1)^2} \\=\frac{1}{2 \Gamma(q+1) q}+\frac{1}{2 \Gamma(q+1)}+\frac{1}{\Gamma(q) (q-1)^2} $$ From this we can easily identify the following basic singularities: a double pole at $q=1$ with residue $r=1$, and a simple pole at $q=0$ with residue $r=\frac{1}{2}$. This is in contrast to the zeta function, which has a simple pole at $q=1$ with residue $r=1$ and no singularity at $q=0$.

The next order gives for the integrand $$t^{q-1} \left(t \left(\frac{5}{24}-\frac{\log (t)}{12}\right)-\frac{\log (t)}{2}-\frac{\log (t)}{t}+\frac{1}{2}\right)$$ which after integrating and taking into account the $\Gamma$ function gives $$F_1 = F_0 + \frac{5}{24 (q+1) \Gamma (q)}+\frac{1}{12 (q+1)^2 \Gamma (q)}$$ A new pole appears here at $q=-1$. It comes from the last term and the observation that $(q+1)^2 \Gamma (q)=\frac{(q+1)^2 \Gamma (q+1)}{q}=\frac{(q+1) \Gamma (q+2)}{q}$, which goes $\to -(q+1)$ for $q\to -1$. The last but one term is regular at $q=-1$. In summary, we have found a new simple pole at $q=-1$ with residue $r=-\frac{1}{12}$.

Continuing this procedure leads to the following structure of the poles besides the basic ones: $S(1,q)$ has simple poles at negative odd integers $q=-(2k-1)$. Their residues turn out to be $$r(k) =- \frac{B_{2 k}}{2 k}$$ where $B_{n}$ is the n-th Bernoulli number.
Here are the first few pole locations and residues $$\left(\begin{array}{cc} -1 & -\frac{1}{12} \\ -3 & \frac{1}{120} \\ -5 & -\frac{1}{252} \\ -7 & \frac{1}{240} \\ -9 & -\frac{1}{132} \\ -11 & \frac{691}{32760} \\ -13 & -\frac{1}{12} \\\end{array}\right)$$ For comparison: $\zeta(q)$ has just one simple pole $\frac{1}{q-1}$ in the whole complex $q$-plane.

$S(p,q)$ for $p\gt1$

For a partial answer to question 3, the same method can be applied for $p\gt 1$. The results for $p=1$ through $p=4$ are written, for each $p$, as a list of poles, their possible multiplicity, and their residues. The list starts with the pole at $q=1$ and proceeds in the direction of the negative real $q$ axis. The last entry is the general expression from that point on, where for each $p$ we let $k=1,2,3,...$

$p=1\; \left( (1^2 , 1), (0, \frac{1}{2} ), ( -(2 k-1) , -\frac{B_{2 k}}{2 k}) \right)$

$p=2\; \left((1, \zeta(2)),(0,-1), (-1,\frac{1}{2}),(-2k, -B_{2 k})\right)$

$p=3\; \left((1, \zeta(3)),(0,0), (-1,-\frac{1}{2}),(-2,\frac{1}{2}),(-(2k+1), -(k+\frac{1}{2})B_{2 k})\right)$

$p=4\; \left((1, \zeta(4)),(0,0), (-1,0),(-2,-\frac{1}{3}),(-3,\frac{1}{2}),(-(2k+2), -\frac{1}{3}(k+1)(2k+1)B_{2 k})\right)$

Observations

The only double pole appears for the case $p=1$ at $q=1$; all other poles are simple ones. With increasing $p$, an increasing gap appears between the pole at $q=1$ and the next pole on the negative real $q$-axis. This corresponds to the fact that the generalized harmonic number $H_k^{(p)}$ approaches $1$ for large $p$, which in turn means that $S(p,q)$ approaches $\zeta(q)$, which has only one pole at $q=1$. The residue of the pole of $S(p,q)$ at $q=1$ is $\zeta(p)$, which for $p\to\infty$ goes to $1$, as it is with $\zeta(q)$.

Singularities of $S(1,q)$ using the asymptotic expansion of $H_{k}$

A much simpler way to find the pole structure of $S(1,q)$ consists in using the asymptotic expansion $$H_k = \log(k) +\gamma + \frac{1}{2k} - \sum_{m\ge 1} \frac{B_{2m}}{2m k^{2m}}$$ Inserting this in the definition of (1) and interchanging the summations gives $$S(1,q) = \sum_{k\ge 1}\frac{\log(k)}{k^q} +\gamma \sum_{k\ge 1}\frac{1}{k^q}+ \frac{1}{2}\sum_{k\ge 1}\frac{1}{k^{q+1}} - \sum_{m\ge 1} \frac{B_{2m}}{2m} \sum_{k\ge 1}\frac{1}{ k^{2m+q}}\\= -\zeta'(q) +\gamma \zeta(q) + \frac{1}{2}\zeta(q+1) - \sum_{m\ge 1} \frac{B_{2m}}{2m} \zeta(2m+q)$$ All we need to know is that $\zeta(q)$ has a simple pole at $q=1$ with residue $1$. The term $-\zeta'(q)$ then obviously has a double pole (the derivative of the simple pole) at $q=1$, behaving as $\frac{1}{(q-1)^2}$. The second term has a simple pole at $q=1$ with residue $\gamma$, which did not appear previously, and which I consider therefore to be "spurious". Third term: pole at $(1+q)=1$, i.e. $q=0$, with residue $\frac{1}{2}$. Fourth term: poles at $2m+q=1$, i.e. $q=1-2m$ ($=-1, -3, -5, ...$), with residues $-\frac{B_{2m}}{2m}$. Summing up: except for the term with $\gamma$, we find the previously obtained pole structure.
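A numerical spot check of this decomposition at $q=3$ (my addition, in Julia): the truncated $m$-sum is asymptotic rather than convergent, so agreement to a few decimal places is the best one should expect.

# Sketch: compare Σ H_k k^{-3} with -ζ'(3) + γζ(3) + ζ(4)/2 - Σ_m B_{2m}/(2m) ζ(2m+3),
# truncating the asymptotic m-sum after three terms.  Needs SpecialFunctions.
using SpecialFunctions

direct = let s = 0.0, H = 0.0
    for k in 1:10^6              # keep k^3 within Int64 range
        H += 1 / k
        s += H / k^3
    end
    s
end

ζ′(q; h=1e-6) = (zeta(q + h) - zeta(q - h)) / 2h     # central difference
B = Dict(2 => 1/6, 4 => -1/30, 6 => 1/42)            # Bernoulli numbers B_2, B_4, B_6
γ = Base.MathConstants.γ
approx = -ζ′(3.0) + γ*zeta(3.0) + zeta(4.0)/2 -
         sum(B[2m]/(2m) * zeta(2m + 3.0) for m in 1:3)
println("direct = $direct, truncated formula = $approx")
# The two agree to about three decimals; the discrepancy is the size of the
# first omitted (m = 4) term of the asymptotic series, not a roundoff effect.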
Taking the equation $x^2-y^2-z^2=1$ and using ContourPlot3D:

ContourPlot3D[x^2 - y^2 - z^2 == 1, {x, -3, 3}, {y, -3, 3}, {z, -3, 3}]

Yields the proper image. Then I made substitutions and put it in spherical coordinate form. Note: Mathematica uses $\theta$ for the angle from the positive z-axis and $\phi$ for the angle of rotation in the xy-plane (or around the z-axis). $$\begin{align*} x^2-y^2-z^2&=1\\ (\rho\sin\theta\cos\phi)^2-(\rho\sin\theta\sin\phi)^2-(\rho\cos\theta)^2&=1\\ \rho^2(\sin^2\theta\cos^2\phi-\sin^2\theta\sin^2\phi-\cos^2\theta)&=1\\ \rho^2(\sin^2\theta\cos 2\phi-\cos^2\theta)&=1 \end{align*}$$ Which gives: $$\rho=\sqrt{\frac{1}{\sin^2\theta\cos2\phi-\cos^2\theta}}$$ Now I gave SphericalPlot3D a chance:

SphericalPlot3D[Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, π/4, 3 π/4}, {ϕ, -π/4, π/4}]

But look at the image: Yuk! Any thoughts?

Great Answer from Simon Rochester

But there are still a couple of weird things going on that I don't understand. Suppose we define our region function such that $0<r<7$.

SphericalPlot3D[Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, 0, π}, {ϕ, 0, 2 π},
 MaxRecursion -> 4, PlotRange -> {-3, 3},
 RegionFunction -> Function[{x, y, z, θ, ϕ, r}, 0 < r < 7]]

Look what happens. Weird!

Secondly, consider the contour plot of $x^2-y^2=1$.

ContourPlot[x^2 - y^2 == 1, {x, -3, 3}, {y, -3, 3},
 Epilog -> {Red, Dashed, Line[{{-3, -3}, {3, 3}}], Line[{{-3, 3}, {3, -3}}]},
 Axes -> True, AxesLabel -> {"x", "y"}]

Thus, you can see why I picked $\{\phi,-\pi/4,\pi/4\}$ for the right branch. Similarly, consider the contour plot of $x^2-z^2=1$.

ContourPlot[x^2 - z^2 == 1, {x, -3, 3}, {z, -3, 3},
 Epilog -> {Red, Dashed, Line[{{-3, -3}, {3, 3}}], Line[{{-3, 3}, {3, -3}}]},
 Axes -> True, AxesLabel -> {"x", "z"}]

You can see why I picked $\{\theta,\pi/4,3\pi/4\}$ for the right branch. Thus, the domain for the right branch is $\{(\theta,\phi): \pi/4<\theta<3\pi/4\ \text{and}\ -\pi/4<\phi<\pi/4\}$. Yet:

SphericalPlot3D[Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)], {θ, π/4, 3 π/4}, {ϕ, -π/4, π/4},
 MaxRecursion -> 4, PlotRange -> {-3, 3},
 RegionFunction -> Function[{x, y, z, θ, ϕ, r}, 0 < r < 5]]

Still some strange stuff happening on the edges.

An answer to Simon Rochester's question in his latest comment

Consider:

func[θ_, ϕ_] = Sqrt[1/(Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2)];
denom[θ_, ϕ_] = (Sin[θ]^2 Cos[2 ϕ] - Cos[θ]^2);
Show[
 SphericalPlot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ], 10], {θ, π/4, 3 π/4}, {ϕ, -π/4, π/4},
  PlotPoints -> 30, PlotRange -> {-3, 3},
  RegionFunction -> Function[{x, y, z, θ, ϕ, r}, denom[θ, ϕ] > 0]],
 SphericalPlot3D[If[denom[θ, ϕ] > 0, func[θ, ϕ], 10], {θ, π/4, 3 π/4}, {ϕ, 3 π/4, 5 π/4},
  PlotPoints -> 30, PlotRange -> {-3, 3},
  RegionFunction -> Function[{x, y, z, θ, ϕ, r}, denom[θ, ϕ] > 0]]
]

Which produces this image: Note the increase in meshes because of the restriction to the domain.
Abstract

This work shows how to encode inductive types using recursion schemes. Unlike the Church-Böhm-Berarducci encoding, which can encode inductive types without a fixpoint, recursion schemes require the fixpoint constructor in the typechecker core in order to express this encoding. We will use the cubical type checker from Mörtberg et al. You may want to try this in Agda, Idris, Coq, or any other MLTT prover.

Fixpoint

The core fixpoint reflection type is parametrized by a functor and has only one constructor, with the value of this functor applied to the fixpoint itself.

data fix (F: U -> U) = Fix (point: F (fix F))

We also need functions for projecting and embedding values to/from the fixpoint.

unfix (F: U -> U): fix F -> F (fix F) = split
  Fix f -> f

embed (F: U -> U): F (fix F) -> fix F = \(x: F (fix F)) -> Fix x

F-Algebra

F-algebras give us a categorical understanding of recursive types. Let $F : C \rightarrow C$ be an endofunctor on a category $C$. An F-algebra is a pair $(A, \varphi)$, where A is an object and $\varphi : F\ A \rightarrow A$ is a morphism in the category $C$. The object A is the carrier and the functor F is the signature of the algebra. Reversing arrows gives us an F-coalgebra.

Initial Algebra

An F-algebra $(\mu F, in)$ is the initial F-algebra if for any F-algebra $(C, \varphi)$ there exists a unique arrow $\llparenthesis \varphi \rrparenthesis : \mu F \rightarrow C$; the arrow $f = \llparenthesis \varphi \rrparenthesis$ is called a catamorphism. Similarly, an F-coalgebra $(\nu F, out)$ is the terminal F-coalgebra if for any F-coalgebra $(C, \phi)$ there exists a unique arrow $\llbracket \phi \rrbracket : C \rightarrow \nu F$, and $f = \llbracket \phi \rrbracket$ is called an anamorphism.

Example of Initial Algebra

The data type of $List$s over a given set $A$ can be represented as the initial algebra $(\mu L_A, in)$ of the functor $L_A(X) = 1 + (A \times X)$. Denote $\mu L_A = List(A)$. The constructor functions $nil: 1 \rightarrow List(A)$ and $cons: A \times List(A) \rightarrow List(A)$ are defined by $nil = in \circ inl$ and $cons = in \circ inr$, so $in = [nil,cons]$.

Catamorphism

The catamorphism is known as the generalized version of fold; it folds (consumes) instances of inductive datatypes. Assume we have fmap defined somewhere else.

fmap (A B: U) (F: U -> U): (A -> B) -> F A -> F B = undefined

Then cata is defined as follows:

cata (A: U) (F: U -> U) (alg: F A -> A) (f: fix F): A
  = alg (fmap (fix F) A F (cata A F alg) (unfix F f))

Inductive

Let's rewrite the fix data type as an interface structure along with its fold:

ind (F: U -> U) (A: U): U
  = (in_: F (fix F) -> fix F)
  * (in_rev: fix F -> F (fix F))
  * (fold_: (F A -> A) -> fix F -> A)
  * Unit

Then an instance of this type class would be:

inductive (F: U -> U) (A: U): ind F A = (embed F,unfix F,cata A F,tt)

Anamorphism

The anamorphism is used to build instances of coinductive data types and represents a generic stream unfold.

ana (A: U) (F: U -> U) (coalg: A -> F A) (a: A): fix F
  = Fix (fmap A (fix F) F (ana A F coalg) (coalg a))

Coinductive

All arrows are reversed: in is out, fold is unfold.
coind (F: U -> U) (A: U): U = (out_: fix F -> F (fix F)) * (out_rev: F (fix F) -> fix F) * (unfold_: (A -> F A) -> A -> fix F) * Unit

Then an instance of this type class would be:

coinductive (F: U -> U) (A: U): coind F A = (unfix F,embed F,ana A F,tt)

Inductive List Nat

Here is an example of the inductive encoding of list nat:

> inductive list
EVAL: (\(A : U) -> (embed F,(unfix F,(cata A F,tt)))) (F = (\(A : U) -> list))

> inductive list nat
EVAL: ((\(x : F (fix F)) -> Fix x) (F = (\(A : U) -> list)), (unfix (\(A : U) -> list),((\(alg : Pi \(_ : F A) -> A) -> \(f : fix F) -> alg (fmap (fix F) A F (cata A F alg) (unfix F f))) (A = nat, F = (\(A : U) -> list)),tt)))

Coinductive Stream Nat

Here is an example of the coinductive encoding of stream nat:

> coinductive stream nat
EVAL: (unfix (\(A : U) -> stream),((\(x : F (fix F)) -> Fix x) (F = (\(A : U) -> stream)),((\(coalg : Pi \(_ : A) -> F A) -> \(a : A) -> Fix (fmap A (fix F) F (ana A F coalg) (coalg a))) (A = nat, F = (\(A : U) -> stream)),tt)))

Hylomorphism

The hylomorphism combines an algebra with a coalgebra; it could be taken as an axiom, since all other recursion schemes are derivable from it. More generally, (co)-inductive types could be represented as di-algebras.

hylo (A B: U) (F: U -> U) (alg: F B -> B) (coalg: A -> F A) (a: A): B = alg (fmap A B F (hylo A B F alg coalg) (coalg a))

Prelude

First we need to set up an inductive tuple type for para and an either type for the apomorphism.

data tuple (A B: U) = pair (a: A) (b: B)
data either (A B: U) = left (a: A) | right (b: B)

either_ (A B C: U): (A -> C) -> (B -> C) -> (either A B) -> C
  = \(b: A -> C) -> \(c: B -> C) -> split@(either A B -> C) with
    left x -> b x
    right y -> c y

fst (A B: U): tuple A B -> A = split pair a b -> a
snd (A B: U): tuple A B -> B = split pair a b -> b

Primitive Recursion

Paramorphism

para (A: U) (F: U -> U) (psi: F (tuple (fix F) A) -> A) (f: fix F): A = psi (fmap (fix F) (tuple (fix F) A) F (\(m: fix F) -> pair m (para A F psi m)) (unfix F f))

Apomorphism

apo (A: U) (F: U -> U) (coalg: A -> F (either (fix F) A)) (a: A): fix F = Fix (fmap (either (fix F) A) (fix F) F (\(x: either (fix F) A) -> either_ (fix F) A (fix F) (idfun (fix F)) (apo A F coalg) x) (coalg a))

Gapomorphism

gapo (A B: U) (F: U -> U) (coalg: A -> F A) (coalg2: B -> F (either A B)) (b: B): fix F = Fix ((fmap (either A B) (fix F) F (\(x: either A B) -> either_ A B (fix F) (\(y: A) -> ana A F coalg y) (\(z: B) -> gapo A B F coalg coalg2 z) x) (coalg2 b)))

Morphisms on (Co)-Initial Objects

data freeF (F: U -> U) (A B: U) = ReturnF (a: A) | BindF (f: F B)
data cofreeF (F: U -> U) (A B: U) = CoBindF (a: A) (f: F B)
data free (F: U -> U) (A: U) = Free (_: fix (freeF F A))
data cofree (F: U -> U) (A: U) = CoFree (_: fix (cofreeF F A))

unfree (A: U) (F: U -> U): free F A -> fix (freeF F A) = split Free a -> a
uncofree (A: U) (F: U -> U): cofree F A -> fix (cofreeF F A) = split CoFree a -> a

Histomorphism

histo (A: U) (F: U -> U) (f: F (cofree F A) -> A) (z: fix F): A
  = extract A F ((cata (cofree F A) F (\(x: F (cofree F A)) -> CoFree (Fix (CoBindF (f x) ((fmap (cofree F A) (fix (cofreeF F A)) F (uncofree A F) x)))))) z) where
  extract (A: U) (F: U -> U): cofree F A -> A = split CoFree f -> unpack_fix f where
    unpack_fix: fix (cofreeF F A) -> A = split Fix f -> unpack_cofree f where
      unpack_cofree: cofreeF F A (fix (cofreeF F A)) -> A = split CoBindF a f -> a

Futumorphism

futu (A: U) (F: U -> U) (f: A -> F (free F A)) (a: A): fix F
  = Fix (fmap (free F A) (fix F) F (\(z: free F A) -> w z) (f a)) where
  w: free F A -> fix F = split Free x -> unpack x where
unpack_free: freeF F A (fix (freeF F A)) -> fix F = split ReturnF x -> futu A F f x BindF g -> Fix (fmap (fix (freeF F A)) (fix F) F (\(x: fix (freeF F A)) -> w (Free x)) g) unpack: fix (freeF F A) -> fix F = split Fix x -> unpack_free x Chronomorphism chrono (A B: U) (F: U -> U) (f: F (cofree F B) -> B) (g: A -> F (free F A)) (a: A): B = histo B F f (futu A F g a) Appendix Metamorphism meta (A B: U) (F: U -> U) (f: A -> F A) (e: B -> A) (g: F B -> B) (t: fix F): fix F = ana A F f (e (cata B F g t)) Mutumorphism mutu (A B: U) (F: U -> U) (f: F (tuple A B) -> B) (g: F (tuple B A) -> A) (t: fix F): A = g (fmap (fix F) (tuple B A) F (\(x: fix F) -> pair (mutu B A F g f x) (mutu A B F f g x)) (unfix F t)) Zygomorphism zygo (A B: U) (F: U -> U) (g: F A -> A) (alg: F (tuple A B) -> B) (f: fix F): B = snd A B (cata (tuple A B) F (\(x: F (tuple A B)) -> pair (g(fmap (tuple A B) A F (\(y: tuple A B) -> fst A B y) x)) (alg x)) f) Prepromorphism prepro (A: U) (F: U -> U) (nt: F(fix F) -> F(fix F)) (alg: F A -> A) (f: fix F): A = alg (fmap (fix F) A F (\(x: fix F) -> prepro A F nt alg (cata (fix F) F (\(y: F(fix F)) -> Fix (nt(y))) x)) (unfix F f)) Postpromorphism postpro (A: U) (F: U -> U) (nt : F(fix F) -> F(fix F)) (coalg: A -> F A) (a: A): fix F = Fix(fmap A (fix F) F (\(x: A) -> ana (fix F) F (\(y: fix F) -> nt(unfix F y)) (postpro A F nt coalg x)) (coalg a)) The code is here.
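For readers who want to experiment outside a proof assistant, here is a minimal sketch of fix, cata and ana transcribed into Python for the list functor. This is my own illustration, not part of the cubical development above; the names ListF, Nil, Cons and downfrom are invented for the example.

# Sketch (mine): fix, cata and ana for the list functor L_A(X) = 1 + (A x X).
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

A = TypeVar('A')
B = TypeVar('B')

@dataclass
class Nil:                         # the "1" summand of the functor
    pass

@dataclass
class Cons(Generic[A, B]):         # the "A x X" summand
    head: A
    tail: B

ListF = Union[Nil, Cons]

@dataclass
class Fix:                         # Fix wraps F (fix F), here ListF
    unfix: ListF

def fmap(f: Callable, x: ListF) -> ListF:
    # Functorial action: apply f to the recursive position only.
    return x if isinstance(x, Nil) else Cons(x.head, f(x.tail))

def cata(alg: Callable, t: Fix):
    """Fold: alg (fmap (cata alg) (unfix t))."""
    return alg(fmap(lambda s: cata(alg, s), t.unfix))

def ana(coalg: Callable, seed) -> Fix:
    """Unfold: Fix (fmap (ana coalg) (coalg seed))."""
    return Fix(fmap(lambda s: ana(coalg, s), coalg(seed)))

# Build the list [4, 3, 2, 1, 0] with ana, then sum it with cata.
downfrom = lambda n: Nil() if n == 0 else Cons(n - 1, n - 1)
total = cata(lambda x: 0 if isinstance(x, Nil) else x.head + x.tail,
             ana(downfrom, 5))
print(total)                       # prints 10

Composing the two calls as cata(alg, ana(coalg, seed)) computes the same thing as the hylomorphism above; a fused implementation like hylo avoids materializing the intermediate fixpoint.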
Why does the following DFA have (to have) the state $b_4$? Shouldn't states $b_1,b_2,b_3$ already cover "exactly two 1s"? Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state?

$b_4$ is what is called a trap state, that is, a state that exists just so that all possible transitions are explicitly represented, even those that do not lead to a final state. It doesn't change the language that is being defined, and can be omitted for the sake of brevity.

$b_4$ exists to cover the entire alphabet ($\{0,1\}$, in this case) for each state. While this is not strictly necessary under every definition, conventions differ on whether the transition function must be total. By showing the complete graph, it is more obvious that a third '1' in your input string permanently moves you out of the accept state $b_3$.

The formal definition of a DFA is $M = (Q, \Sigma, \delta, q_0, F)$, where $Q$ is the finite set of states, $\Sigma$ is the alphabet, $\delta$ is the transition function, $q_0 \in Q$ is the start state, and $F \subseteq Q$ is the set of final states. Note that $\delta \colon Q \times \Sigma \to Q$ is specified to be a function, i.e., it has to be defined for all states and symbols. The graphical depiction of the DFA is complete in this sense with $b_4$. Often such dead states are simply omitted for the sake of clarity of the diagram; the reader is surely capable of adding them if required.

Answering your question, I have to say (sadly) that it depends. It depends on the definition of a DFA that you are using, because there appears to be no consensus on a single definition. For example, I use the definition of the DFA where $\delta$ is a function. The next question is: is $\delta$ a total function or a partial function? Personally, when I use the term function I am referring to total functions by default, but someone can disagree with me. More importantly, when I studied the definition of a DFA, my teacher told me that $\delta$ is a total function. Summarizing: I use a particular definition of a DFA where the $b_4$ state has to exist. I can skip drawing it for the sake of laziness or clarity, but I know it exists. Finally, to answer your question more precisely, we would have to know which definition of a DFA you use.

Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state?

The state $b_4$ means that if a word $\sigma$ has more than two 1s it will never reach an accepting state, so $\sigma\notin L = \{w\,|\, w \text{ contains exactly two 1s}\}$.
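To make the trap-state behaviour concrete, here is a small Python sketch of my own (the state names b1-b4 follow the question, with b1 the start state and b3 the accepting "exactly two 1s" state):

# A total DFA over {0,1} accepting words with exactly two 1s.
# b4 is the trap state: after a third 1, every transition stays in b4.
delta = {
    ('b1', '0'): 'b1', ('b1', '1'): 'b2',
    ('b2', '0'): 'b2', ('b2', '1'): 'b3',
    ('b3', '0'): 'b3', ('b3', '1'): 'b4',
    ('b4', '0'): 'b4', ('b4', '1'): 'b4',   # no way out of the trap
}

def accepts(word: str) -> bool:
    state = 'b1'                          # start state
    for symbol in word:
        state = delta[(state, symbol)]    # total: defined for every pair
    return state == 'b3'                  # exactly two 1s seen

print(accepts('0101'))   # True: two 1s
print(accepts('0111'))   # False: the third 1 moved us into b4

Dropping b4 would leave delta partial; the language is unchanged, but the run would have to abort (here, raise a KeyError) instead of following an explicit transition.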
Maybe it could be called book-errata? I cannot give many examples of discussions from math.SE offhand, but here is at least one example: https://math.stackexchange.com/questions/34641/find-limit-of-unknown-function This is an example from a different forum: http://www.sosmath.com/CBB/viewtopic.php?p=181367 I guess questions (and answers) revealing mistakes in books will appear here occasionally. (Perhaps even without this being the original intent.) A related (interesting) link: https://mathoverflow.net/questions/3038/errata-database/3040#3040 Although I have tag-creating privileges, I've never done this before and I wanted to ask the opinion of other members first. EDIT: Here's a recent post of this type: Showing $\sum\limits^N_{n=1}\left(\prod\limits_{i=1}^n b_i \right)^\frac1{n}\le\sum\limits^N_{n=1}\left(\prod\limits_{i=1}^n a_i \right)^\frac1{n}$?
When we make the minimal substitution \begin{equation*} p^\mu\rightarrow p^\mu+\frac{e}{c}A^\mu \end{equation*} the four-potential $A^\mu$ must be proportional to $1/e$ in order to ensure the whole term has units of momentum. However, in the Maxwell equation \begin{equation*} \partial^\nu\partial_\nu A^\mu=\frac{j^\mu}{c} \end{equation*} it seems that $A^\mu$ must be proportional to $e$, in order to account for the factor of $e$ in $j^\mu$. Can anyone tell me what is going on here? What am I missing?

You're just missing that there is something proportional to $e$ also in the four-potential $A_\mu$, since $A_\mu=(\phi,\vec{A})$, where $\phi$ and $\vec{A}$ are the scalar and vector potentials. The dimensions are OK. We have that $cp_\mu$ and $eA_\mu$ are energies; $j_\mu/c$ is a charge density. Then, if you multiply both sides of the last equation by $e$, we have $$ \partial_\nu\partial^\nu eA^\mu = \frac{ej^\mu}{c} $$ and from the left member we see that both sides must be energies divided by square meters. We check that this is true for the r.h.s.: it is a squared charge density and, keeping in mind that $e^2/r$ is an energy, it is then an energy divided by square meters.

This is a result of people using different systems of units in different parts of the physics course. Rewriting in SI, $$p^\mu \mapsto p^\mu + e A^\mu,$$ $$\partial^\nu \partial_\nu A^\mu = \mu_0 j^\mu,$$ we can see that the vacuum permeability $\mu_0$ appears. One could say this has the potential to cancel two units of $e$: in Ampère's force law, for example, $\mu_0 I_1 I_2$ appears on one side and only mechanical quantities on the other. The discrepancy vanishes. Or, better, you can easily find the exact units of all the quantities present here. Some are easy to remember or rederive from basic definitions, $$\begin{aligned}\ [p] &= \mathrm{Js/m}, \\ [e] &= \mathrm{As}, \\ [j] &= \mathrm{A/m^2}, \\ [\mu_0] &= \mathrm{Vs/(Am)} = \mathrm{J/(A^2m)}, \\ \end{aligned}$$ and $\partial^\nu \partial_\nu$, being a second derivative with respect to the coordinates, contributes $1/\mathrm{m}^2$. From either equation one can easily derive $$[A] = \mathrm{J/(Am)}.$$
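As a quick sanity check of my own, in SI units and complementing the answers above: using $[e]=\mathrm{A\,s}$ and $[A^\mu]=\mathrm{V\,s/m}$, $$[eA] = \mathrm{A\,s}\cdot\frac{\mathrm{V\,s}}{\mathrm{m}} = \frac{\mathrm{(V\,A)\,s^2}}{\mathrm{m}} = \frac{\mathrm{J\,s}}{\mathrm{m}} = \frac{\mathrm{kg\,m}}{\mathrm{s}} = [p],$$ so $eA^\mu$ carries units of momentum as it stands, with no extra factor of $1/e$ required of the potential itself.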
Suppose that all stars in this galaxy were born in a single major-merger burst event about 10 Gyr ago. From this original burst, I want to compute the fraction of stellar mass still surviving as stars on the main sequence. For this, I have to use a Salpeter IMF, and a star formation range between 0.1 and 120 solar masses. What I have done is start from the Salpeter IMF: $$\Phi(m)\,\text{d}m=\Phi_{0}\,m^{-2.35}\,\text{d}m$$ with $\Phi_{0}$ a normalization constant. From this, I integrate from $m_{1}=0.1\,\text{M}_{\odot}$ to $m_{2}=120\,\text{M}_{\odot}$: $$N(0.1<m<120) = \int_{0.1}^{120}\,\Phi(m)\,\text{d}m = \Phi_{0}\,\bigg[\dfrac{0.1^{-1.35}-120^{-1.35}}{1.35}\bigg]$$ This result depends on the value of $\Phi_{0}$ and I don't know how to deal with it in order to get $N(0.1<m<120)$. Moreover, it seems that I have to take into account the age of the major-merger burst event (10 Gyr). From these two principles, how could I calculate the fraction of stars surviving on the main sequence? Any help is welcome. Regards
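One way to proceed, sketched below in Python (my own outline, not from the question): note first that any fraction is a ratio of two IMF integrals, so the unknown normalization $\Phi_0$ cancels. For the age, a commonly assumed main-sequence lifetime scaling is $t_{\rm MS}\approx 10\,(m/\text{M}_{\odot})^{-2.5}$ Gyr, so after 10 Gyr only stars below the turnoff mass of roughly $1\,\text{M}_{\odot}$ are still on the main sequence; the exponent $-2.5$ is an assumption of this sketch.

# Sketch (mine): surviving stellar-mass fraction for a Salpeter IMF,
# Phi(m) = Phi0 * m**-2.35, born in a single burst 10 Gyr ago.
# Assumption: t_MS ~ 10 Gyr * (m/Msun)**-2.5, i.e. turnoff mass ~ 1 Msun.

def mass_integral(m_lo, m_hi, slope=-2.35):
    """Integral of m * Phi(m) dm, up to the normalization Phi0 (it cancels)."""
    a = slope + 2.0                      # integrand is m**(slope + 1)
    return (m_hi**a - m_lo**a) / a

m_min, m_max = 0.1, 120.0                # star-formation range [Msun]
t_burst = 10.0                           # age of the burst [Gyr]
m_turnoff = (t_burst / 10.0) ** (-1.0 / 2.5)   # ~1 Msun

frac = mass_integral(m_min, m_turnoff) / mass_integral(m_min, m_max)
print(f"mass fraction still on the MS: {frac:.2f}")   # ~0.6

Counting stars instead of mass amounts to replacing the exponent slope + 2 by slope + 1 in the same ratio.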
Abstract

If $A$ is a finite subset of a free group with at least two noncommuting elements, then $|A\cdot A\cdot A|\geq\frac{|A|^2}{(\log |A|)^{O(1)}}$. More generally, the same conclusion holds in an arbitrary virtually free group, unless $A$ generates a virtually cyclic subgroup. The central part of the proof of this result is carried out by estimating the number of collisions in multiple products $A_1\cdot\ldots\cdot A_k$. We include a few simple observations showing that in this “statistical” context the analogue of the fundamental Plünnecke-Ruzsa theory looks particularly simple and appealing.
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago

BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$, will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?

*******

Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get

curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24'

But pasting the link into the browser does not do what I expected it to.

*******

If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:

I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:

One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...

In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...

@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love your feedback and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago

I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool.

@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago

We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issue page!

@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac \qvar{x} \qvar{y} $ is enough to match it.

@MartinSleziak As for the query link, it needs more explanation. Technologically, what Google is using there is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would alternatively use an HTTP POST request with the query JSON-encoded. This makes development much easier because JSON is rich-structured and it is easy to separate math keywords.

@MartinSleziak Right now there are two workarounds for the "query link" problem you raised. The first is to use the browser back/forward buttons to navigate among the query history.

@MartinSleziak The second is to use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it would be helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve it later. (It just needs some extra effort, though.)

@MartinSleziak Yes, if you search for \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after the exact match.

@MartinSleziak Approach0 plans to add a "Symbol Pad" just like the ones www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.

@MartinSleziak Yes, you can: Greek letters are tokenized to the same thing as normal alphabet letters.

@MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow keys to move the cursor to the rightmost position and hit '^' so the editor goes into upper-bound edit mode.

@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from Math Stack Exchange. This is a very small number, but I will index more posts/pages when search-engine efficiency and relevance are tuned.

@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish.

@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.

So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago

@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago

@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.)
— Martin Sleziak 57 mins ago

"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc." while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago

@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main site. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way. If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link the search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.

I saw this for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender).

@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - May Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the supposed names of the three kings. As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation.

It seems that they also have other interpretations in Poland. "A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."

BTW in the village where I come from, the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion - mostly in aid of poorer children in other countries.

A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance, if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.

On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
In this section, we study Stokes’ theorem, a higher-dimensional generalization of Green’s theorem. This theorem, like the Fundamental Theorem for Line Integrals and Green’s theorem, is a generalization of the Fundamental Theorem of Calculus to higher dimensions. Stokes’ theorem relates a vector surface integral over surface S in space to a line integral around the boundary of S. Therefore, just as the theorems before it, Stokes’ theorem can be used to reduce an integral over a geometric object S to an integral over the boundary of S. In addition to allowing us to translate between line integrals and surface integrals, Stokes’ theorem connects the concepts of curl and circulation. Furthermore, the theorem has applications in fluid mechanics and electromagnetism. We use Stokes’ theorem to derive Faraday’s law, an important result involving electric fields.

Stokes’ Theorem

Stokes’ theorem says we can calculate the flux of \( curl \,\vecs{F}\) across surface \(S\) by knowing information only about the values of \(\vecs{F}\) along the boundary of \(S\). Conversely, we can calculate the line integral of vector field \(\vecs{F}\) along the boundary of surface \(S\) by translating to a double integral of the curl of \(\vecs{F}\) over \(S\).

Let S be an oriented smooth surface with unit normal vector \(\vecs{N}\). Furthermore, suppose the boundary of \(S\) is a simple closed curve \(C\). The orientation of \(S\) induces the positive orientation of C if, as you walk in the positive direction around C with your head pointing in the direction of \(\vecs{N}\), the surface is always on your left. With this definition in place, we can state Stokes’ theorem.

Theorem \(\PageIndex{1}\): Stokes’ Theorem

Let \(S\) be a piecewise smooth oriented surface with a boundary that is a simple closed curve \(C\) with positive orientation (Figure \(\PageIndex{1}\)). If \(\vecs{F}\) is a vector field with component functions that have continuous partial derivatives on an open region containing \(S\), then \[\int_C \vecs{F} \cdot d \vecs{r} = \iint_S curl \, \vecs{F} \cdot d\vecs{S}. \label{Stokes1}\]

Suppose surface S is a flat region in the xy-plane with upward orientation. Then the unit normal vector is \(\vecs{k}\) and surface integral \[\iint_S curl \, \vecs{F} \cdot d\vecs{S}\] is actually the double integral \[\iint_S curl \, \vecs{F} \cdot \vecs{k} \, dA.\] In this special case, Stokes’ theorem gives \[\int_C \vecs{F} \cdot d\vecs{r} = \iint_S curl \, \vecs{F} \cdot \vecs{k} \, dA.\] However, this is the circulation form of Green’s theorem, which shows us that Green’s theorem is a special case of Stokes’ theorem. Green’s theorem can only handle surfaces in a plane, but Stokes’ theorem can handle surfaces in a plane or in space.

The complete proof of Stokes’ theorem is beyond the scope of this text. We look at an intuitive explanation for the truth of the theorem and then see a proof of the theorem in the special case that surface S is a portion of a graph of a function, and S, the boundary of S, and \(\vecs{F}\) are all fairly tame.

Proof

First, we look at an informal proof of the theorem. This proof is not rigorous, but it is meant to give a general feeling for why the theorem is true. Let S be a surface and let D be a small piece of the surface so that D does not share any points with the boundary of S. We choose D to be small enough so that it can be approximated by an oriented square E. Let D inherit its orientation from S, and give E the same orientation.
This square has four sides; denote them \(E_l, \, E_r, \, E_u\), and \(E_d\) for the left, right, up, and down sides, respectively. On the square, we can use the circulation form of Green’s theorem: \[\int_{E_l+E_d+E_r+E_u} \vecs{F} \cdot d \vecs{r} = \iint_E curl \, \vecs{F} \cdot \vecs{N} \, dS = \iint_E curl \, \vecs{F} \cdot d\vecs{S}.\] To approximate the flux of the curl over the entire surface, we add the values of the flux on the small squares approximating small pieces of the surface (Figure \(\PageIndex{2}\)). By Green’s theorem, the flux of the curl across each approximating square is a line integral over its boundary. Let F be an approximating square (not to be confused with the vector field \(\vecs{F}\)) with an orientation inherited from S and whose right side coincides with \(E_l\) (so F is to the left of E). Let \(F_r\) denote the right side of \(F\); then, \(E_l = - F_r\). In other words, the right side of \(F\) is the same curve as the left side of E, just oriented in the opposite direction. Therefore, \[\int_{E_l} \vecs{F} \cdot d\vecs{r} = - \int_{F_r} \vecs{F} \cdot d\vecs{r}. \nonumber\] As we add up all the fluxes over all the squares approximating surface S, line integrals \[\int_{E_l} \vecs{F} \cdot d \vecs{r}\] and \[ \int_{F_r} \vecs{F} \cdot d\vecs{r}\] cancel each other out. The same goes for the line integrals over the other three sides of E. These three line integrals cancel out with the line integral of the lower side of the square above E, the line integral over the left side of the square to the right of E, and the line integral over the upper side of the square below E (Figure \(\PageIndex{3}\)). After all this cancelation occurs over all the approximating squares, the only line integrals that survive are the line integrals over sides approximating the boundary of S. Therefore, the sum of all the fluxes (which, by Green’s theorem, is the sum of all the line integrals around the boundaries of approximating squares) can be approximated by a line integral over the boundary of S. In the limit, as the areas of the approximating squares go to zero, this approximation gets arbitrarily close to the flux.

Let’s now look at a rigorous proof of the theorem in the special case that S is the graph of function \(z = g(x,y)\), where x and y vary over a bounded, simply connected region D of finite area (Figure \(\PageIndex{4}\)). Furthermore, assume that \(g\) has continuous second-order partial derivatives. Let C denote the boundary of S and let C′ denote the boundary of D. Then, D is the “shadow” of S in the plane and C′ is the “shadow” of C. Suppose that S is oriented upward. The counterclockwise orientation of C is positive, as is the counterclockwise orientation of \(C'\). Let \(\vecs{F}(x,y,z) = \langle P,Q,R \rangle\) be a vector field with component functions that have continuous partial derivatives. We take the standard parameterization of \(S \, : \, x = x, \, y = y, \, z = g(x,y)\). The tangent vectors are \(t_x = \langle 1,0,g_x \rangle\) and \(t_y = \langle 0,1,g_y \rangle\), and therefore \(t_x \times t_y = \langle -g_x, \, -g_y, \, 1 \rangle\). Then \[\iint_S curl \, \vecs{F} \cdot d\vecs{S} = \iint_D [- (R_y - Q_z)z_x - (P_z - R_x)z_y + (Q_x - P_y)] \, dA, \nonumber\] where the partial derivatives are all evaluated at \((x,y,g(x,y))\), making the integrand depend on x and y only. Suppose \(\langle x (t), \, y(t) \rangle, \, a \leq t \leq b\) is a parameterization of \(C'\). Then, a parameterization of C is \(\langle x (t), \, y(t), \, g(x(t), \, y(t))\rangle, \, a \leq t \leq b\).
Armed with these parameterizations, the Chain rule, and Green’s theorem, and keeping in mind that P, Q, and R are all functions of x and y, we can evaluate line integral \[ \begin{align*} \int_C \vecs{F} \cdot d \vecs{r} &= \int_a^b (Px'(t) + Qy'(t) + Rz'(t)) \, dt \\[4pt] &= \int_a^b \left[Px'(t) + Qy'(t) + R\left(\dfrac{\partial z}{\partial x} \dfrac{dx}{dt} + \dfrac{\partial z}{\partial y} \dfrac{dy}{dt}\right) \right] dt \\[4pt] &= \int_a^b \left[ \left(P + R \dfrac{\partial z}{\partial x} \right) x' (t) + \left(Q + R \dfrac{\partial z}{\partial y} \right) y'(t) \right] dt \\[4pt] &= \int_{C'} \left(P + R \dfrac{\partial z}{\partial x} \right)\, dx + \left(Q + R \dfrac{\partial z}{\partial y} \right) \, dy \\[4pt] &= \iint_D \left[ \dfrac{\partial}{\partial x} \left( Q + R \dfrac{\partial z}{\partial y} \right) - \dfrac{\partial}{\partial y} \left(P + R \dfrac{\partial z}{\partial x} \right) \right] \, dA \\[4pt] & =\iint_D \left(\dfrac{\partial Q}{\partial x} + \dfrac{\partial Q}{\partial z} \dfrac{\partial z}{\partial x} + \dfrac{\partial R}{\partial x} \dfrac{\partial z}{\partial y} + \dfrac{\partial R}{\partial z}\dfrac{\partial z}{\partial x} \dfrac{\partial z}{\partial y} + R \dfrac{\partial^2 z}{\partial x \partial y} \right) - \left(\dfrac{\partial P}{\partial y} + \dfrac{\partial P}{\partial z} \dfrac{\partial z}{\partial y} + \dfrac{\partial R}{\partial z} \dfrac{\partial z}{\partial y} \dfrac{\partial z}{\partial x} + R \dfrac{\partial^2 z}{\partial y \partial x} \right) \, dA. \end{align*} \]

By Clairaut’s theorem, \[\dfrac{\partial^2 z}{\partial x \partial y} = \dfrac{\partial^2 z}{\partial y \partial x} \nonumber\] Therefore, four of the terms disappear from this double integral, and we are left with \[\iint_D [- (R_y - Q_z)z_x - (P_z - R_x) z_y + (Q_x - P_y)] \, dA, \nonumber\] which equals \[\iint_S curl \, \vecs{F} \cdot d\vecs{S}.\] \(\Box\)

We have shown that Stokes’ theorem is true in the case of a function with a domain that is a simply connected region of finite area. We can quickly confirm this theorem for another important case: when vector field \(\vecs{F}\) is a conservative field. If \(\vecs{F}\) is conservative, the curl of \(\vecs{F}\) is zero, so \[\iint_S curl \, \vecs{F} \cdot d\vecs{S} = 0.\] Since the boundary of S is a closed curve and the line integral of a conservative field over a closed curve vanishes, the integral \[\int_C \vecs{F} \cdot d\vecs{r}\] is also zero.

Example \(\PageIndex{1}\): Verifying Stokes’ Theorem for a Specific Case

Verify that Stokes’ theorem is true for vector field \(\vecs{F}(x,y,z) = \langle -z,x,0 \rangle\) and surface S, where S is the hemisphere, oriented outward, with parameterization \(r(\phi, \theta) = \langle \sin \phi \, \cos \theta, \, \sin \phi \, \sin \theta, \, \cos \phi \rangle, \, 0 \leq \theta \leq \pi, \, 0 \leq \phi \leq \pi\) as shown in Figure \(\PageIndex{5}\).

Solution

Let C be the boundary of S. Note that C is a circle of radius 1, centered at the origin, sitting in plane \(y = 0\). With the outward orientation of S, the induced positive orientation of C is given by the parameterization \(\langle \cos t, \, 0, \, -\sin t \rangle, \, 0 \leq t \leq 2\pi\). By the definition of the line integral, \[ \begin{align*} \int_C \vecs{F} \cdot d \vecs{r} &= \int_0^{2\pi} \langle \sin t, \, \cos t, \, 0 \rangle \cdot \langle - \sin t, \, 0, \, -\cos t \rangle \, dt \\[4pt] &= \int_0^{2\pi} -\sin^2 t \, dt \\[4pt] &= -\pi.
\end{align*}\]

By the definition of the surface integral, \[ \begin{align*} \iint_S \, curl \, \vecs{F} \cdot d\vecs{S} &= \iint_D curl \, \vecs{F} (r (\phi,\theta)) \cdot ( t_{\phi} \times t_{\theta}) \, dA \\[4pt] &= \iint_D \langle 0, -1, 1 \rangle \cdot \langle \cos \theta \, \sin^2 \phi, \, \sin \theta \, \sin^2 \phi, \, \sin \phi \, \cos \phi \rangle \, dA \\[4pt] &= \int_0^{\pi} \int_0^{\pi} (\sin \phi \, \cos \phi - \sin \theta \, \sin^2 \phi ) \, d\phi \, d\theta \\[4pt] &= - \dfrac{\pi}{2} \int_0^{\pi} \sin \theta \, d\theta = -\pi,\end{align*}\] since \(\int_0^{\pi} \sin\phi\cos\phi \, d\phi = 0\) and \(\int_0^{\pi} \sin^2\phi \, d\phi = \pi/2\). Therefore, we have verified Stokes’ theorem for this example.

Exercise \(\PageIndex{1}\)

Verify that Stokes’ theorem is true for vector field \(\vecs{F}(x,y,z) = \langle y,x,-z \rangle \) and surface S, where S is the upwardly oriented portion of the graph of \(f(x,y) = x^2 y\) over a triangle in the xy-plane with vertices \((0,0), \, (2,0)\), and \((0,2)\).

Hint

Calculate the double integral and line integral separately.

Answer

Both integrals give \(-\dfrac{136}{45}\).
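As an independent spot-check of my own (not part of the text), both sides of the example can be confirmed symbolically, for instance with sympy:

# Check (mine): both sides of Stokes' theorem for F = <-z, x, 0> on the
# outward-oriented hemisphere y >= 0 evaluate to -pi.
import sympy as sp

t, phi, theta = sp.symbols('t phi theta')

# Line integral over C with r(t) = <cos t, 0, -sin t>:
# the integrand F(r(t)) . r'(t) simplifies to -sin(t)**2.
line = sp.integrate(-sp.sin(t)**2, (t, 0, 2*sp.pi))

# Surface integral: curl F = <0, -1, 1> dotted with t_phi x t_theta.
integrand = sp.sin(phi)*sp.cos(phi) - sp.sin(theta)*sp.sin(phi)**2
flux = sp.integrate(integrand, (phi, 0, sp.pi), (theta, 0, sp.pi))

print(line, flux)   # prints: -pi -pi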
TIPS FOR SOLVING QUESTIONS RELATED TO PERCENTAGE:

Percentage: A fraction whose denominator is 100 is called a percentage. The numerator of the fraction is called the rate percent.

1. To express x% as a fraction, we have: \begin{aligned} x\% = \frac{x}{100}; \quad\text{thus } 30\% = \frac{30}{100} = \frac{3}{10}. \end{aligned}

2. To express a fraction as a percentage, we have: \begin{aligned} \frac{a}{b} = \left(\frac{a}{b}\times100\right)\% \end{aligned}

3. If A is R% more than B, then B is less than A by \begin{aligned} \left[ \frac{R}{(100+R)}\times 100 \right]\% \end{aligned}

4. If A is R% less than B, then B is more than A by \begin{aligned} \left[ \frac{R}{(100-R)}\times 100 \right]\% \end{aligned}

5. If the price of a commodity increases by R%, then the reduction in consumption so as not to increase the expenditure is: \begin{aligned} \left[ \frac{R}{(100+R)}\times 100 \right]\% \end{aligned}

6. If the price of a commodity decreases by R%, then the increase in consumption so as not to decrease the expenditure is: \begin{aligned} \left[ \frac{R}{(100-R)}\times 100 \right]\% \end{aligned}

7. Let the population of a town be P now and suppose it increases at the rate of R% per annum; then: \begin{aligned} 1. & \text{ Population after n years = }P\left(1+\frac{R}{100}\right)^n \\ 2.& \text{ Population n years ago =} \frac{P}{\left(1+\frac{R}{100}\right)^n} \\ \end{aligned}

8. Let the present value of a machine be P and suppose it depreciates at the rate of R% per annum. 1. Value of the machine after n years = \begin{aligned} P\left(1-\frac{R}{100}\right)^n \end{aligned} 2. Value of the machine n years ago = \begin{aligned} \frac{P}{\left(1-\frac{R}{100}\right)^n} \\ \end{aligned}

9. For two successive changes of x% and y%, net change = \begin{aligned} \left(x +y + \frac{xy}{100}\right)\%\\ \end{aligned}
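The rules above translate directly into code; here is a small Python transcription of my own (the function names are illustrative) covering rules 3, 4, 7, 8, and 9:

# Percentage rules as reusable helpers.
def pct_less_when_more(R):          # rule 3: A is R% more than B
    return R / (100 + R) * 100

def pct_more_when_less(R):          # rule 4: A is R% less than B
    return R / (100 - R) * 100

def population_after(P, R, n):      # rule 7.1: growth at R% per annum
    return P * (1 + R / 100) ** n

def value_after_depreciation(P, R, n):   # rule 8.1: depreciation at R%
    return P * (1 - R / 100) ** n

def net_change(x, y):               # rule 9: two successive changes
    return x + y + x * y / 100

print(pct_less_when_more(25))       # 20.0: 25% more one way is 20% less back
print(net_change(10, -10))          # -1.0: +10% then -10% is a net -1%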
Analogues and Generalizations of the Pythagorean Theorem

The Pythagorean Theorem is one of the most fundamental results of Mathematics. Using the theorem we define what's known as the Euclidean distance. This notion of distance extends to spaces with a scalar product - Hilbert spaces. Proofs ##13, 17, and 18 gave us plane generalizations of the theorem. Below I consider an analogue that holds in the $3$-dimensional space $\mathbb{R}^{3}.$ The statement leads to the definition of the Euclidean distance in $\mathbb{R}^{3}.$ Afterwards, there is an additional and unexpected analogue of the theorem in $\mathbb{R}^{3}.$

It's convenient to think of the Pythagorean Theorem as defining the length of the diagonal in a rectangle when its two sides are given. Now consider a rectangular parallelepiped with sides $a,$ $b,$ and $c.$ Its space diagonal serves as the hypotenuse of the right triangle formed by the edge $c$ and the diagonal of the face $ab.$ The latter, by the Pythagorean Theorem, equals $\sqrt{a^{2}+b^{2}}.$ Applying the theorem a second time gives the length of the diagonal as $\sqrt{a^{2}+b^{2}+c^{2}}.$

When we moved from a $2$-dimensional space to a $3$-dimensional space, the formula for the diagonal of a shape built on orthogonal segments remained virtually the same, except that the number of terms grew from $2$ to $3,$ as appropriate. However, in both cases the squared quantities were lengths of line segments. The Pythagorean Theorem has an analogue where the squared quantities are areas of triangles. The theorem applies to a special kind of tetrahedron in which the three edges emanating from one of the vertices are mutually perpendicular. One can obtain such a pyramid by cutting a corner off a parallelepiped. Let $p,$ $q,$ $r$ denote the three mutually perpendicular edges. Let's introduce the areas $A,B,C$ of the faces that house the right angles, and let $D$ be the area of the remaining face. We have $A = qr/2,$ $B = rp/2,$ $C = pq/2.$ What I want to show is that $A^{2} + B^{2} + C^{2} = D^{2}.$

Let $a$ be the edge shared by the faces of areas $A$ and $D,$ so that $a^{2} = q^{2} + r^{2}.$ Draw a plane through $p$ perpendicular to $a.$ Then both $k$ (the altitude of face $A$ to the side $a$) and $h$ (the altitude of face $D$ to the side $a$) will be perpendicular to $a.$ We find that $D = ha/2,$ while $h^{2} = k^{2} + p^{2}.$ Also $A = ak/2.$ Therefore,

$\begin{align} 4D^{2} &= a^{2}h^{2}\\ &= a^{2}(k^{2} + p^{2})\\ &= 4A^{2} + a^{2}p^{2}\\ &= 4A^{2} + (r^{2} + q^{2})p^{2}\\ &= 4A^{2} + (rp)^{2} + (pq)^{2}\\ &= 4A^{2} + 4B^{2} + 4C^{2} \end{align}$

Q.E.D.

Oops, I almost forgot the Cosine Law, which is a clear generalization of the Pythagorean Theorem. For a triangle with sides $a,$ $b,$ and $c$ and angle $C$ opposite the side $c,$ one has $c^{2} = a^{2} + b^{2} - 2ab\cdot \cos(C),$ which, in turn, admits a generalization to higher dimensional spaces.
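This area analogue is known as de Gua's theorem, and the identity is easy to spot-check numerically. Below is a small Python sketch of my own, with arbitrarily chosen edge lengths:

# Numeric check of A^2 + B^2 + C^2 = D^2 for a corner tetrahedron
# with mutually perpendicular edges p, q, r.
import math

p, q, r = 2.0, 3.0, 5.0            # any positive edge lengths
A, B, C = q*r/2, r*p/2, p*q/2      # areas of the three right-angle faces

# Area of the oblique face D via Heron's formula on its side lengths.
x, y, z = math.hypot(q, r), math.hypot(r, p), math.hypot(p, q)
s = (x + y + z) / 2
D = math.sqrt(s*(s - x)*(s - y)*(s - z))

print(A*A + B*B + C*C, D*D)        # the two numbers agree (up to rounding)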
Let $M$ be an $n$-dimensional Riemannian manifold without boundary. Suppose we have an isometry $\tau_{x}: M \to M$ such that $\tau_{x}(x)=o$, for a fixed point $o$ in $M$. My question is: how can I understand $\tau_{x}$? The first time I saw this, I thought $\tau_{x}$ was of the form $\tau_{x}(y)=y-x$ (so that $\tau_{x}(x)=x-x=o$), but this doesn't make sense, because $M$ is not a vector space. Then I thought: if $M=\mathbb{S}^{1}\subset\mathbb{R}^{2}$, define the polar angle of $x$ as $\theta_{x}$ with $\theta_{x}\in[0,2\pi)$. Now, if we take the fixed point $o=(1,0)\in\mathbb{S}^{1}$, we have something like $\tau_{\theta_{x}}(\theta_{y})=e^{i(\theta_{y}-\theta_{x})}$. But this is more complicated for a general Riemannian manifold. Now, if $\tau_{x}: M \to M$ is an isometry, then it is a smooth map of smooth manifolds. Given some $p\in M$, the differential of $\tau_{x}$ at $p$ is a linear map, $$d\tau_{x}\vert_{p}:T_{p}M\to T_{\tau_{x}(p)}M,$$ from the tangent space of $M$ at $p$ to the tangent space of $M$ at $\tau_{x}(p)$. The differential is given by $$d\tau_{x}(X)(f)\vert_{p}=X(f\circ \tau_{x})\vert_{p}.$$ Here $X\in T_{p}M$, so $X$ is a derivation, and $f$ is a smooth real-valued function on $M$. Any idea will be appreciated. Thanks!
The LHC program is in excellent shape; it will continue its journey through a series of upgrades up to its highest luminosity (sensitivity) until 2035. As of today, after the Higgs (H) discovery, the accumulated data do not reveal any sign of a new particle. With increasing sensitivity, its potential for discovery will improve, and new particles may show up that would dramatically change the course of high-energy physics (HEP). However, the LHC has inherent weaknesses that will limit its sensitivity no matter how high the integrated luminosity is or how long the collider runs. Take, for example, the measurement of the Higgs couplings to fermions or bosons. These couplings are sensitive to the imprint of possible new particles beyond the Standard Model (SM). The side figure here has been at the center of many discussions at the last Linear Collider workshop in Morioka, December 2016. I'll try to describe it here in some detail.

Particle collisions and decays are quantum phenomena and therefore probabilistic. The larger the number of events produced, the lower the statistical (random) error: the relative statistical error on a count of $N$ events scales as $1/\sqrt{N}$. A smaller error leads to a better sensitivity to unaccounted-for effects. This is true for all measurements at colliders and even beyond, for any measurement whatsoever, quantum or classical. Aside from the statistical error there is another type of error, called systematics. Systematic errors are more difficult to estimate, as they are due to multiple effects, such as unaccounted-for or merely approximated effects, limited theoretical precision, event selection and reconstruction errors, detector simulation biases including acceptance, and more. The systematic errors do not significantly decrease with the number of events, and measurements are ultimately limited by them.

The LHC, the Large Hadron Collider at CERN, Geneva (Switzerland), has been running at full blow in 2016, accumulating 40 fb⁻¹, and promises to break new records in the coming years, reaching its ultimate sensitivity to new physics with up to 3000 fb⁻¹ of accumulated luminosity. That will take time (~18 years) and money. But CERN says it is dedicated to using the LHC to its last drop, as expected from a responsible organization.

What are the inherent precision limitations of the LHC? All colliders and HEP experiments have limitations: detector precision in space and time, detector spatial acceptance, collider luminosity, ... They may have different values depending on the technology used, but they are always present. However, pp colliders have a specific limitation not showing up at lepton colliders: the proton itself. The proton is not an elementary particle but a pack of confined quarks ($q$), anti-quarks ($\bar{q}$) and gluons ($g$), also called partons. At these energies, the interest is more in the collisions of its basic constituents than in the proton itself as a whole. For example, interesting processes are $gg \rightarrow H$, $qq \rightarrow qqH$, ... To finely study these interactions, both the initial state (here $gg$ or $qq$) and the final state ($H$ or $qqH$), together with the H decay products, must be known precisely. Concerning the final states, the burden is on the detectors, and the precision keeps improving with the technology. But for the initial states it is a different matter, as the type of the initial partons is not directly measurable, and neither are their relative energy, momentum and spin state. This raises significant ambiguities and errors. The uncertainty due to the composite proton is a substantial part of the total systematic error occurring at pp colliders.
In addition, the rest of the proton's insides (the other $q$, $g$) may also interact, making the recorded event a superposition of different processes (the underlying event): a very complicated picture, adding to the systematic errors. One more thing: the increase in luminosity is in part due to a higher proton density in the colliding bunches. The probability of getting more than one interaction in a given bunch crossing becomes large, and the pile-up (the superposition of several collisions from different protons in the recorded event) may reach 140. Picking out the event of interest and making the correct identification and attribution of each track, namely succeeding in correctly reconstructing the event, becomes a difficult task prone to additional errors. And last but not least, the theoretical calculations that have to deal with this complex initial state are also quite involved, especially when precision requires accounting for higher-order corrections. The theoretical uncertainties contribute substantially to the global systematic errors. Proton collisions give rise to a large number of possible interactions and final states. Some of them, although of a different nature, may mimic the studied final state. These spurious events, called background, come in numbers often much larger than those of the studied process. Reducing them requires highly sophisticated selection algorithms, inevitably introducing additional systematic errors. After the Higgs boson discovery, which has been a fantastic achievement for the theory, the machine, the experiments and the analysis, it is now essential to measure all its parameters with high precision: mass, the various decays and couplings, and so on. However, the limitations discussed above make pp colliders unfit for the challenge.

Effect of the systematic and statistical errors on the measurement of the Higgs couplings

The graph shown at the last Linear Collider Workshop (LCWS), held in Morioka in December 2016, sheds some light on what should be done next to improve the precision. The picture is quite busy, but I will explain the various items here. The vertical axis gives the precision, in %, of the measurement of 7 coupling constants ($\kappa$) from the decays of the Higgs boson. The Higgs boson is a very unstable particle: as soon as it is produced, it decays into pairs of $Z$, $W$, $q\bar{q}$… Any discrepancy between the measured couplings and the Standard Model prediction would signal the onset of a different mechanism or the contribution of a new particle. For each decay, the various colors correspond to different conditions for two colliders. The dark and light green bars are for the pp machine LHC, the red and yellow bars are for the e+e- collider ILC, and the blue ones are for LHC and ILC data combined. The LHC light green bars include the current irreducible errors from the theoretical calculations. The dark green bars assume a smaller theoretical error that could be obtained if more extensive calculations involving additional complex subprocesses (higher-order processes, foreseen in the coming years but not fully guaranteed [2]) are performed. (Fig. 1: Precision on the Higgs couplings to Z, W, b quark… [1].) The legend caption gives, in TeV or GeV, the collider energy and, in fb⁻¹, the integrated luminosity that will be reached by the LHC in 2035 (18 years) and by the ILC. The larger the fb⁻¹, the longer the data-taking time. The red bars correspond to 8 years of data taking; the yellow bars would need 12 more years, so ~20 years in total.
The conclusion is clear: e+e- colliders provide a factor of 10 (or more) better precision than pp colliders like the LHC for Z, W, b, c, t, if one assumes the currently achieved theoretical precision (namely, comparing the light green and yellow bars). There is a notable exception: the coupling to γ. This is essentially due to the very small probability of the Higgs decaying into two photons ($H \rightarrow \gamma \gamma$) 3. Here we see the value of the complementarity of the two types of colliders: neither of them alone provides a good accuracy, but combining the data drastically improves the figure. The e+ and e- being elementary particles, the collisions do not suffer from the limitations the pp colliders have. The initial state is well measured and unambiguous (even the particle spin direction can be selected), the theoretical calculations are more manageable, the background is much smaller, and there is no pile-up and no underlying event. The systematic errors are therefore dramatically smaller. The limitation here comes from the statistical error: the luminosity and the Higgs production rate are much lower. Overall, however, the figure shows that the e+e- colliders deliver a much better precision. Does one need this high accuracy? High precision on the coupling measurements is needed to probe possible deviations from the Standard Model. As said, a tiny discrepancy would signal the existence of new particles or mechanisms, a guide toward a much more global and unified theory. These “beyond the Standard Model” theories are many (see a few in the table below). Experimental measurements are needed to single out the one selected by Nature. But how precise should the measurements be? At LCWS it was recalled, as demonstrated for example in arXiv:1206.3560, that the expected effects on these couplings from beyond-SM models like Minimal Supersymmetry are of the order of 1-3%: unreachable at the LHC, but fully accessible to e+e- colliders. Table 1: Required precision on the Higgs couplings to vector bosons ($\Delta hVV$, $V=Z, W$) or quarks ($\Delta H\bar{t}t$, $\Delta H\bar{b}b$) to be sensitive to three possible extensions of the Standard Model. This is just one example of the advantage of e+e- over pp physics for precision physics; many other examples (Higgs self-coupling, top physics, …) will be discussed in this blog. History shows that high-energy physics has evolved by alternating between e+e- and pp colliders/accelerators; both have tremendous value. Proton-proton colliders have shown themselves to be discovery machines (the $Sp\bar{p}S$ for the discovery of the Z and W, the Tevatron for the top quark, the LHC for the Higgs), but precision is the realm of e+e- colliders. LEP provided a precise analysis of the Z and W and brought the Standard Model to the high-level status it has today. With two “recently” discovered but poorly known particles, the top quark and the Higgs boson, and no other particle showing up (hence no hint at which energy to aim a future pp discovery machine), there is a clear incentive to turn to precision physics, namely to e+e- colliders, in sync with the LHC running to its term.
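As a back-of-the-envelope companion to the pile-up discussion earlier, a small sketch; treating the number of interactions per bunch crossing as Poisson-distributed is a standard simplification, and the mean values below are illustrative:

```python
from math import exp, factorial

# Assumption (mine): interactions per bunch crossing ~ Poisson(mu).
def poisson_pmf(k, mu):
    return exp(-mu) * mu**k / factorial(k)

for mu in [1, 25, 140]:           # 140 ~ the pile-up figure quoted above
    p_multi = 1.0 - poisson_pmf(0, mu) - poisson_pmf(1, mu)
    print(f"mu = {mu:3d}: P(>= 2 interactions) = {p_multi:.6f}")

# Already at mu = 25 essentially every recorded crossing is a
# superposition of several collisions.
```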
References:
“The Physics Case for the International Linear Collider”, Snowmass Higgs working group report.
The LCWS timetable (most presentations are available), the LCWS presentation from Roman Poeschl, and the summary talk of Marcel Vos.
See also: Cosmos Magazine about the “Next king Collider”.

1. The error is proportional to √N/N, where N is the number of events, as long as the event distribution is “normal”, like a Gaussian: for N = 100 events the error is 10%; for N = 10000 events it is 1%. ↩
2. This can actually be traced back by analyzing the final elements of the interaction and relying on the proton composition probabilities obtained from other measurements, as long as the event is complete (no invisible particles). ↩
3. The γ having no mass, the Higgs does not couple to it directly; this process occurs only through higher-order corrections, which are rarer than those at leading order. ↩
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Abstract. Let $\mathbb{M}$ be a compact $C^\infty$-smooth Riemannian manifold of dimension $n$, $n\geq 3$, and let $\varphi_\lambda$ denote a Laplace eigenfunction on $\mathbb{M}$ corresponding to the eigenvalue $\lambda$: $\Delta_M \varphi_\lambda + \lambda \varphi_\lambda = 0$. We show that $$H^{n-1}(\{ \varphi_\lambda=0\}) \leq C \lambda^{\alpha},$$ where $\alpha>1/2$ is a constant, which depends on $n$ only, and $C>0$ depends on $\mathbb{M}$. This result is a consequence of our study of zero sets of harmonic functions on $C^\infty$-smooth Riemannian manifolds. We develop a technique of propagation of smallness for solutions of elliptic PDE that allows us to obtain local upper bounds for the volume of the nodal sets in terms of the frequency and the doubling index. We also obtain partial positive answers to the question: is the frequency additive in some sense?
Abstract. Let $u$ be a harmonic function in the unit ball $B(0,1) \subset \mathbb{R}^n$, $n \geq 3$, such that $u(0)=0$. Nadirashvili conjectured that there exists a positive constant $c$, depending on the dimension $n$ only, such that $$H^{n-1}(\{u=0 \} \cap B) \geq c.$$ We prove Nadirashvili's conjecture as well as its counterpart on $C^\infty$-smooth Riemannian manifolds. The latter yields the lower bound in Yau's conjecture. Namely, we show that for any compact $C^\infty$-smooth Riemannian manifold $M$ (without boundary) of dimension $n$, there exists $c>0$ such that for any Laplace eigenfunction $\varphi_\lambda$ on $M$ corresponding to the eigenvalue $\lambda$, the following inequality holds: $c \sqrt \lambda \leq H^{n-1}(\{\varphi_\lambda =0\})$.
Suppose that $M$ is a compact Riemannian manifold and that $\gamma$ is a closed path in $M$ which is assumed to be continuous but not necessarily piecewise smooth. Must the free homotopy class of $\gamma$ necessarily contain at least one closed geodesic, or can that only be shown under the additional assumption that $\gamma$ is piecewise smooth? Furthermore, how can it be shown that the free homotopy class of $\gamma$ must contain at least one closed geodesic of minimal length? That is, how can we know that the infimum of the lengths of the closed curves in the homotopy class is actually attained? Every continuous path in a Riemannian manifold is homotopic to a piecewise-smooth path. Intuitively, small segments of $\gamma$ can be smoothed by a homotopy (one that fixes the endpoints of the segment); since the image of $\gamma$ is compact, we can break the image into finitely many sufficiently small pieces and smooth each piece, obtaining a piecewise-smooth curve. In detail: Let $\gamma:[a, b] \to M$ be continuous. For each $t$ in $[a, b]$, let $U(t)$ be the exponential image of a ball centered at $\gamma(t)$ (sufficiently small that the image is contractible in $M$), then use compactness to extract a finite subcovering. For convenience, call these sets "coordinate balls". Next, inductively construct a sequence of coordinate balls whose first element contains $\gamma(a)$, and such that "each overlaps the next". Precisely, put $u_{1} = a$ and pick a coordinate ball $U_{1}$ containing $\gamma(a)$. Assuming balls $(U_{j})_{j=1}^{m}$ have been chosen and $\gamma(b)$ is not in $U_{m}$, let $$ u_{m+1} = \inf \{u > u_{m} : \gamma(u) \not\in U_{m}\} $$ be "the first subsequent time when $\gamma$ leaves $U_{m}$", and pick a coordinate ball $U_{m+1}$ containing $\gamma(u_{m+1})$. This process terminates with a finite covering $(U_{j})_{j=1}^{N}$ since the image of $\gamma$ is covered by finitely many coordinate balls. Put $t_{0} = a$ and $t_{N} = b$. For $1 \leq j < N$, choose $t_{j}$ so that $\gamma(t_{j}) \in U_{j} \cap U_{j+1}$. By construction, $\gamma([t_{j-1}, t_{j}]) \subset U_{j}$ for each $j = 1, \dots, N$. Since $U_{j}$ is a coordinate ball, $\gamma|_{[t_{j-1}, t_{j}]}$ is homotopic in $U_{j}$ (with endpoints fixed) to a smooth path. (For example, transfer the path to a Euclidean ball by the exponential map and perform a straight-line homotopy with a line segment.) Consequently, $\gamma$ itself is homotopic (with endpoints fixed, if it matters) to a piecewise-smooth path.
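A minimal numerical sketch of the inductive covering step, my own illustration for a concrete closed plane curve; the manifold structure, exponential maps, and the smoothing homotopy itself are elided, and a sampling grid stands in for the infimum:

```python
import numpy as np

def gamma(t):
    # a continuous closed curve in R^2 (a wobbly circle)
    return np.array([np.cos(t) + 0.2 * np.cos(7 * t),
                     np.sin(t) + 0.2 * np.sin(7 * t)])

r = 0.5                                  # radius of each "coordinate ball"
ts = np.linspace(0.0, 2 * np.pi, 5000)   # sampling grid standing in for [a, b]
centers = [ts[0]]                        # u_1 = a
i = 0
while True:
    c = gamma(ts[i])
    j = i + 1
    # advance to the first sampled time when gamma leaves the ball around c
    while j < len(ts) and np.linalg.norm(gamma(ts[j]) - c) <= r:
        j += 1
    if j == len(ts):                     # the last ball covers the tail
        break
    centers.append(ts[j])                # u_{m+1}
    i = j

print(f"covered the curve with {len(centers)} coordinate balls of radius {r}")
```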
Ellipses can be elegantly described in four ways. The simplest description of an ellipse is as a squashed or stretched circle. Start with the unit circle $x^2 + y^2 =1$, and stretch it by a factor of $a$ in the $x$ direction and $b$ in the $y$ direction to get $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$ The points $(\pm a, 0)$ and $(0, \pm b)$ are called vertices. If $a > b > 0$, then the major axis is the line segment from $(-a,0)$ to $(a,0)$ and the semi-major axis is the line segment from the origin to $(a,0)$. Likewise, the minor axis runs from $(0,-b)$ to $(0,b)$ and the semi-minor axis runs from the origin to $(0,b)$. If $b > a > 0$, then the major and semi-major axes are vertical and the minor and semi-minor axes are horizontal. For now we'll stick with the case that $a > b$, so that the ellipse is short and fat. The origin is the center of the ellipse. This fact gives elliptical rooms amazing acoustic properties. If you whisper at one focus of such a room, the sound waves from your voice will bounce off the walls and converge at the other focus -- that's why it is called a focus. The same goes for light reflecting off elliptical mirrors. To understand the amazing fact, let's convert the equation $L_1 + L_2 = 2a$ (where $L_1$ and $L_2$ are the distances from a point on the ellipse to the two foci $(\pm c, 0)$) to rectangular coordinates: \begin{eqnarray*} L_1 + L_2 & = & 2a \cr\cr L_1 & = & 2a-L_2 \cr \cr \sqrt{(x+c)^2+y^2} & = & 2a -\sqrt{(x-c)^2 + y^2} \cr\cr (x+c)^2 + y^2 & = & 4a^2 + (x-c)^2 + y^2 - 4a \sqrt{(x-c)^2 + y^2}\cr\cr 4a\sqrt{(x-c)^2 + y^2}&=& 4a^2-4cx \cr \cr a \sqrt{(x-c)^2 + y^2} &=& a^2-cx \cr \cr a^2(x-c)^2+ a^2 y^2 &=& a^4+c^2x^2 -2a^2cx \cr \cr a^2x^2 + a^2c^2 -2a^2cx + a^2 y^2 &=& a^4 + c^2x^2 -2a^2cx \cr \cr (a^2-c^2)x^2 + a^2 y^2 &=& a^2(a^2-c^2) \cr \cr b^2 x^2 + a^2 y^2 &=& a^2b^2 \cr \cr \frac{x^2}{a^2} + \frac{y^2}{b^2} &=& 1,\end{eqnarray*} where we have used the fact that $b^2=a^2-c^2$. That's a long and messy calculation for a simple and elegant result. You should be able to construct the equation of an ellipse given any two of $a$, $b$ and $c$, since you can get the third from $c^2=a^2-b^2.$
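A quick numerical check of the focal property $L_1 + L_2 = 2a$, purely illustrative:

```python
import numpy as np

a, b = 5.0, 3.0
c = np.sqrt(a**2 - b**2)              # focal distance, c^2 = a^2 - b^2
t = np.linspace(0, 2 * np.pi, 1000)
x, y = a * np.cos(t), b * np.sin(t)   # parametrize the ellipse

L1 = np.hypot(x + c, y)               # distance to the focus (-c, 0)
L2 = np.hypot(x - c, y)               # distance to the focus (+c, 0)
assert np.allclose(L1 + L2, 2 * a)
print("L1 + L2 = 2a holds at every sampled point")
```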
I’m thrilled to join everyone at the best-named math blog. I am just home from Combinatorial Link Homology Theories, Braids, and Contact Geometry at ICERM in Providence, Rhode Island. The conference was aimed at students and non-experts, with a focus on introducing open problems and computational techniques. Videos of many of the talks are available at ICERM’s site. (Look under “Programs and Workshops,” then “Summer 2014”.) One of the highlights of the workshop was the ‘Computational Problem Session’ MC’d by John Baldwin, with contributions from Rachel Roberts, Nathan Dunfield, Johanna Mangahas, John Etnyre, Sucharit Sarkar, and András Stipsicz. Each spoke for a few minutes about open problems with a computational bent. I’ve done my best to relate all the problems in order, with references and some background. Any errors are mine. Corrections and additions are welcome!

Rachel Roberts: Contact structures and foliations

Eliashberg and Thurston showed that a codimension-one foliation of a three-manifold can be $C^0$-approximated by a contact structure (as long as it is not the product foliation on $S^2 \times S^1$). Vogel showed that, with a few other restrictions, any two approximating contact structures lie in the same isotopy class. In other words, for any closed, oriented three-manifold there is a map $\phi$ from taut, oriented foliations to contact structures modulo isotopy. Geography: What is the image of $\phi$? Botany: What do the fibers of $\phi$ look like? The image of $\phi$ is known to be contained within the space of weakly symplectically fillable and universally tight contact structures. Etnyre showed that if one removes “taut”, then $\phi$ is surjective. Etnyre and Baldwin showed that $\phi$ doesn’t “see” universal tightness.

L-spaces and foliations

A priori, the rank of the Heegaard Floer homology group associated to a rational homology three-sphere $Y$ is bounded below by the order of its first ordinary homology group: $\operatorname{rk} \widehat{HF}(Y) \geq |H_1(Y;\mathbb{Z})|$. An L-space is a rational homology three-sphere for which equality holds. Conjecture: $Y$ is an L-space if and only if it does not contain a taut, oriented foliation. Ozsváth and Szabó showed that L-spaces do not contain such foliations. Kazez and Roberts proved that the theorem applies to a broad class of foliations, and perhaps all foliations. The classification of L-spaces is incomplete, and we are led to the following. Question: How can one prove the (non-)existence of such a foliation? Existing methods are either ad hoc or difficult (e.g. show that the fundamental group of the manifold does not act non-trivially on a simply-connected (but not necessarily Hausdorff!) one-manifold). Roberts suggested that Agol and Li’s algorithm for detecting “Reebless” foliations via laminar branched surfaces may be useful here, although the algorithm is currently impractical.

Nathan Dunfield: What do random three-manifolds look like?

First of all, how does one pick a random three-manifold? There are countably many compact three-manifolds (because there are countably many finite simplicial complexes, or because there are countably many rational surgeries on the countably many links in $S^3$, or because…), so there is no uniform probability distribution on the set of compact orientable three-manifolds. To dodge this issue, we first consider random objects of bounded complexity, then study what happens as we relax the bound. (A cute, more modest example: the probability that two random integers are relatively prime is $6/\pi^2$. 1) Fix a genus $g$ and write $\mathrm{Mod}(\Sigma_g)$ for the mapping class group of the oriented surface of genus $g$. Pick some generators of $\mathrm{Mod}(\Sigma_g)$. Let $w$ be a random word of length $\ell$ in the chosen generators.
We can associate a unique closed, orientable three-manifold to $w$ by identifying the boundaries of two genus-$g$ handlebodies via the mapping class represented by $w$. Question: How is your favorite invariant distributed for random 3-manifolds of genus $g$? How does it behave as $\ell \to \infty$? Experiment! (Ditto for knots, links, and their invariants.) Metaquestion: Show that your favorite conjecture about some class of three-manifolds or links holds with positive probability. For example, Challenge: Show that a random three-manifold is not an L-space, has left-orderable fundamental group, admits a taut foliation, and admits a tight contact structure. These methods can also be used to prove more traditional-sounding existence theorems. Perhaps you’d like to show that there is a three-manifold of every genus satisfying some condition. It suffices to show that a random three-manifold of fixed genus satisfies the condition with positive probability! For example, Theorem (Lubotzky-Maher-Wu, 2014): For any integers $n$ and $g$ with $g \geq 2$, there exist infinitely many closed hyperbolic three-manifolds which are integral homology spheres with Casson invariant $n$ and Heegaard genus $g$.

Johanna Mangahas: What do generic mapping classes look like?

Here are two sensible ways to study random elements of bounded complexity in a finitely-generated group. Fix a generating set and look at all words of length $N$ or less in those generators and their inverses (the word ball). Or fix a generating set and the associated Cayley graph, and look at all vertices within distance $N$ of the identity (the Cayley ball). A property of elements in a group is generic if a random element has the property with probability tending to 1 as $N \to \infty$, so the meaning of “generic” differs with the meaning of “random.” For example, consider the group $G = \langle a, b \rangle \oplus \mathbb{Z}$ with generating set $\{(a,0), (b,0), (id,1)\}$. The property “is zero in the second coordinate” is generic for the first notion but not the second. So we are stuck/blessed with two different notions of genericity. Recall that the mapping class group of a surface is the group of orientation-preserving homeomorphisms modulo isotopy. Thurston and Nielsen showed that a mapping class $f$ falls into one of three categories. Finite order: $f^n = \mathrm{id}$ for some $n$. Reducible: $f$ fixes some finite set of simple closed curves. Pseudo-Anosov: there exists a transverse pair of measured foliations which $f$ stretches by $\lambda$ and $1/\lambda$. The first two classes are easier to define, but the third is generic. Question: Are pseudo-Anosov mapping classes generic in the second sense? The braid group on $n$ strands can be understood as the mapping class group of the disk with $n$ punctures. But the braid group is not just a mapping class group; it admits an invariant left-order and a Garside structure. Tetsuya Ito gave a great minicourse on both of these structures! Question′: Can one leverage these additional structures to answer genericity questions about the braid group?

Fast algorithms for the Nielsen-Thurston classification

Question: Is there a polynomial-time algorithm for computing the Thurston-Nielsen classification of a mapping class? Matthieu Calvez has described an algorithm to classify braids in time quadratic in the length of the candidate braid. The algorithm is not yet implementable because it relies on knowledge of a constant depending on the index $n$ of the braid. These numbers come from a theorem of Masur and Minsky and are thus difficult to compute. These difficulties, as well as the power of the Garside structure and other algorithmic approaches, are described in Calvez’s linked paper.
Challenge: Implement Calvez’s algorithm, perhaps partially, without knowing these constants. Question: Mark Bell is developing Flipper, which implements a classification algorithm for mapping class groups of surfaces. How fast are such algorithms in practice? 2

John Etnyre: Contactomorphism and isotopy of unit cotangent bundles

For background on all matters symplectic and contact, see Etnyre’s notes. Let $M$ be a manifold of any (!) dimension. The total space of the cotangent bundle $T^*M$ is naturally symplectic: it supports the Liouville one-form $\lambda$, characterized by $\alpha^*\lambda = \alpha$ for any one-form $\alpha$ on $M$ (here $\alpha$ is regarded as a section of the canonical projection $\pi: T^*M \to M$). The form $d\lambda$ is symplectic on $T^*M$. Inside the cotangent bundle is the unit cotangent bundle $ST^*M$. (This is not a vector bundle!) The form $\lambda$ restricts to a contact structure on $ST^*M$. Fact: If the manifolds $M$ and $N$ are diffeomorphic, then their unit cotangent bundles $ST^*M$ and $ST^*N$ are contactomorphic. Hard question: In which dimensions greater than two is the converse true? This question is attributed to Arnol’d, perhaps incorrectly. The converse is known to be true in dimensions one and two, and also in the case that $M$ is the three-sphere (exercise!). Tractable (?) question: Does the contactomorphism type of unit cotangent bundles distinguish lens spaces from each other? Also intriguing is the relative version of this construction. Let $L$ be an embedded (or immersed with transverse self-intersections) submanifold of $M$. Define the unit cosphere bundle $\Lambda_L$ of $L$ to be the set of unit covectors based at points of $L$ which annihilate the tangent spaces of $L$. You can think of it as the boundary of the normal bundle to $L$. It is a Legendrian submanifold of the unit cotangent bundle $ST^*M$. Fact: If $L$ is isotopic to $L'$, then $\Lambda_L$ is Legendrian isotopic to $\Lambda_{L'}$. Relative question: Under what conditions is the converse true? Etnyre noted that contact homology may be a useful tool here. Lenny Ng’s “A Topological Introduction to Knot Contact Homology” has a nice introduction to this problem and the tools to potentially solve it.

Sucharit Sarkar: How many Szabó spectral sequences are there, really?

Ozsváth and Szabó constructed a spectral sequence from the Khovanov homology of a link to the Heegaard Floer homology of the double cover of $S^3$ branched over that link. (There are more adjectives in the proper statement.) This relates two homology theories which are defined very differently. Challenge: Construct an algorithm to compute the Ozsváth-Szabó spectral sequence. Sarkar suggested that bordered Heegaard Floer homology may be useful here. Alternatively, one could study another spectral sequence, combinatorially defined by Szabó, which also seems to converge to the Heegaard Floer homology of the branched double cover. Question: Is Szabó’s spectral sequence isomorphic to the Ozsváth-Szabó spectral sequence? Again, the bordered theory may be useful here. Lipshitz, Ozsváth, and D. Thurston have constructed a bordered version of the Ozsváth-Szabó spectral sequence which agrees with the original under a pairing theorem. If the answer is “yes”, then Szabó’s spectral sequence should have more structure. This was the part of Sarkar’s research talk which was unfortunately scheduled after the problem session; I hope to return to it in a future post (!). Question: Can Szabó’s spectral sequence be defined over a two-variable polynomial ring? Is there an action of the dihedral group on the spectral sequence?

András Stipsicz: Knot Floer Smörgåsbord

Link Floer homology was spawned from Heegaard Floer homology but can also be defined combinatorially via grid diagrams. Lenny Ng explained this in the second part of his minicourse.
However you define it, the theory assigns to a link $L$ a bigraded $\mathbb{F}_2[U]$-module $HFK^-(L)$. From this group one can extract the numerical concordance invariant $\tau(L)$. Defining the theory over $\mathbb{Z}$ or over $\mathbb{Z}/p$ instead, one can define analogous invariants. Question: Are these invariants distinct from $\tau$? Harder question: Does $HFK^-(L)$, defined over $\mathbb{Z}$, have torsion for some $L$? (From a purely algebraic perspective, a “no” to the first question suggests a “no” to this one.) Stipsicz noted that there are complexes of modules for which the answer is yes, but those complexes are not known to be the complexes of any link. Speaking of which, “a shot in the dark”: Characterize those modules which appear as $HFK^-(L)$. In another direction, Stipsicz spoke earlier about a family of smooth concordance invariants $\Upsilon_K(t)$. These were constructed from link Floer homology by Ozsváth, Stipsicz, and Szabó. Earlier, Hom constructed the smooth concordance invariant $\varepsilon$. Both invariants can be used to show that the smooth concordance group contains a $\mathbb{Z}^\infty$ summand, but their fibers are not the same: Hom produced a knot on which one of the two invariants vanishes while the other does not. Conversely: Is there a knot exhibiting the opposite pattern of vanishing? Stipsicz closed the session by waxing philosophical: “When I was a child we would get these problems like ‘Jane has 6 pigs and Joe has 4 pigs’ and I used to think these were stupid. But now I don’t think so. Sit down, ask, do calculations, answer. That’s somehow the method I advise. Do some calculations, or whatever.”

1. An analogous result holds for arbitrary number fields; I make no claims about the cuteness of such generalizations. ↩

2. An old example: the simplex algorithm from linear programming runs in exponential time in the worst case, but is fast in practice. ↩
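On the coprimality aside quoted in the random-three-manifolds section, a quick Monte Carlo sanity check; sampling uniformly from a large finite range is my own stand-in for the limiting density:

```python
import random
from math import gcd, pi

random.seed(0)
trials, hits = 200_000, 0
for _ in range(trials):
    # "random integers" approximated by uniform draws from [1, 10^6]
    if gcd(random.randint(1, 10**6), random.randint(1, 10**6)) == 1:
        hits += 1

print(f"empirical: {hits / trials:.4f}, 6/pi^2 = {6 / pi**2:.4f}")
```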
Prior to working on the assignment, you'd better check out the corresponding course material: we suggest that you first read the articles (quiz questions are based on them); if something is not clear, watch the corresponding lecture. Solutions will be discussed during a live YouTube session on September 28. You can get up to 10 credits (the points from the web form, 15 max, will be scaled to a maximum of 10 credits). For discussions, please stick to ODS Slack, channel #mlcourse_ai_news, pinned thread #quiz1_fall2019. Question 1. Which of these problems does not fall into the 3 main types of ML tasks: classification, regression, and clustering? Question 2. Maximal possible entropy is achieved when all states are equally probable (prove it yourself for a system with 2 states with probabilities $p$ and $1-p$). What's the maximal possible entropy of a system with N states? (Here all logs are with base 2.) Question 3. In the Topic 3 article's toy example with 20 balls, what's the information gain of splitting the 20 balls into 2 groups based on the condition X <= 8? Question 4. In a toy binary classification task, there are $d$ features $x_1 \ldots x_d$, but the target $y$ depends only on $x_1$ and $x_2$: $y = [\frac{x_1^2}{4} + \frac{x_2^2}{9} \leq 16]$, where $[\cdot]$ is an indicator function. All of the features $x_3 \ldots x_d$ are noisy, i.e. they do not influence the target at all. Obviously, machine learning algorithms should perform almost perfectly in this task, where the target is a simple function of the input features. If we train sklearn's DecisionTreeClassifier for this task, which parameters have a crucial effect on accuracy (crucial meaning that if these parameters are set incorrectly, accuracy can drop significantly)? Select all that apply (to get credit, you need to select all that apply; there are no partially correct answers). max_features criterion min_samples_leaf max_depth Question 5. Load the iris data with sklearn.datasets.load_iris. Train a decision tree on this data, specifying the parameters max_depth=4 and random_state=17 (all other arguments shall be left unchanged). Use all available 150 instances to train the tree (do not perform a train/validation split). Visualize the fitted decision tree; see Topic 3 for examples. Let's call a leaf in the tree pure if it contains instances of only one class. How many pure leaves are there in this tree? Question 6. There are 7 jurors in the courtroom. Each of them individually can correctly determine whether the defendant is guilty or not with 80% probability. How likely is the jury to reach a correct verdict jointly if the decision is made by majority voting? Question 7. In Topic 5, part 2, section 2, "Comparison with Decision Trees and Bagging", we show how bagging and Random Forest improve classification accuracy as compared to a single decision tree. Which of the following is a better explanation of the visual difference between the decision boundaries built by a single decision tree and those built by ensemble models? Question 8. Random Forest learns a coefficient for each input feature, which shows how much this feature influences the target. True/False? Question 9. Suppose we fit RandomForestRegressor to predict the age of a customer (an actual real-world task, good for targeting ads), and the maximal age seen in the dataset is 98 years. Is it possible that for some customer in the future the model predicts his/her age to be 105 years? Question 10.
Select all statements supporting advantages of Random Forest over decision trees (some statements might be true but not describe Random Forest's advantages; don't select those).
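For Question 5, a minimal sketch of the described setup, using only the parameters stated in the question; it fits and draws the tree but deliberately stops short of counting the pure leaves:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=4, random_state=17)
tree.fit(iris.data, iris.target)     # all 150 instances, no split

plt.figure(figsize=(14, 8))
plot_tree(tree, feature_names=iris.feature_names,
          class_names=list(iris.target_names), filled=True)
plt.show()                           # now count the pure leaves yourself
```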
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($\nu_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{⁎+}$ and $D^{s+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ( p T ). At central rapidity, ( |y|<0.8 ), ALICE can reconstruct J/ ψ via their decay into two electrons down to zero ...
In this section, we examine how to solve nonhomogeneous differential equations. The terminology and methods are different from those we used for homogeneous equations, so let’s start by defining some new terms.

General Solution to a Nonhomogeneous Linear Equation

Consider the nonhomogeneous linear differential equation \[a_2(x)y″+a_1(x)y′+a_0(x)y=r(x). \nonumber\] The associated homogeneous equation \[a_2(x)y″+a_1(x)y′+a_0(x)y=0 \nonumber\] is called the complementary equation. We will see that solving the complementary equation is an important step in solving a nonhomogeneous differential equation.

Definition: particular solution

A solution \(y_p(x)\) of a differential equation that contains no arbitrary constants is called a particular solution to the equation.

GENERAL SOLUTION TO A NONHOMOGENEOUS EQUATION

Let \(y_p(x)\) be any particular solution to the nonhomogeneous linear differential equation \[a_2(x)y″+a_1(x)y′+a_0(x)y=r(x).\] Also, let \(c_1y_1(x)+c_2y_2(x)\) denote the general solution to the complementary equation. Then, the general solution to the nonhomogeneous equation is given by \[y(x)=c_1y_1(x)+c_2y_2(x)+y_p(x).\]

Proof

To prove \(y(x)\) is the general solution, we must first show that it solves the differential equation and, second, that any solution to the differential equation can be written in that form. Substituting \(y(x)\) into the differential equation, we have \[\begin{align}a_2(x)y″+a_1(x)y′+a_0(x)y & =a_2(x)(c_1y_1+c_2y_2+y_p)″+a_1(x)(c_1y_1+c_2y_2+y_p)′ \nonumber \\ & \;\;\;\; +a_0(x)(c_1y_1+c_2y_2+y_p) \nonumber \\ & =[a_2(x)(c_1y_1+c_2y_2)″+a_1(x)(c_1y_1+c_2y_2)′+a_0(x)(c_1y_1+c_2y_2)] \nonumber \\ & \;\;\;\; +a_2(x)y_p″+a_1(x)y_p′+a_0(x)y_p \nonumber \\ & =0+r(x) \\ & =r(x). \nonumber \end{align} \nonumber \] (The bracketed term vanishes because \(c_1y_1+c_2y_2\) solves the complementary equation, and the remaining terms equal \(r(x)\) because \(y_p\) solves the nonhomogeneous equation.) So \(y(x)\) is a solution. Now, let \(z(x)\) be any solution to \(a_2(x)y″+a_1(x)y′+a_0(x)y=r(x).\) Then \[\begin{align*}a_2(x)(z−y_p)″+a_1(x)(z−y_p)′+a_0(x)(z−y_p) & =(a_2(x)z″+a_1(x)z′+a_0(x)z) \nonumber \\ & \;\;\;\;−(a_2(x)y_p″+a_1(x)y_p′+a_0(x)y_p) \nonumber \\ & =r(x)−r(x) \nonumber \\ & =0, \nonumber \end{align*} \nonumber \] so \(z(x)−y_p(x)\) is a solution to the complementary equation. But \(c_1y_1(x)+c_2y_2(x)\) is the general solution to the complementary equation, so there are constants \(c_1\) and \(c_2\) such that \[z(x)−y_p(x)=c_1y_1(x)+c_2y_2(x). \nonumber \] Hence, we see that \[z(x)=c_1y_1(x)+c_2y_2(x)+y_p(x). \nonumber \]

Example \(\PageIndex{1}\): Verifying the General Solution

Given that \(y_p(x)=x\) is a particular solution to the differential equation \(y″+y=x,\) write the general solution and check by verifying that the solution satisfies the equation.

Solution

The complementary equation is \(y″+y=0,\) which has the general solution \(c_1 \cos x+c_2 \sin x.\) So, the general solution to the nonhomogeneous equation is \[y(x)=c_1 \cos x+c_2 \sin x+x. \nonumber \] To verify that this is a solution, substitute it into the differential equation. We have \[y′(x)=−c_1 \sin x+c_2 \cos x+1 \nonumber \] and \[y″(x)=−c_1 \cos x−c_2 \sin x. \nonumber \] Then \[\begin{align*} y″(x)+y(x) & =−c_1 \cos x−c_2 \sin x+c_1 \cos x+c_2 \sin x+x \nonumber \\ & =x. \nonumber \end{align*} \] So, \(y(x)\) is a solution to \(y″+y=x\).

Exercise \(\PageIndex{1}\)

Given that \(y_p(x)=−2\) is a particular solution to \(y″−3y′−4y=8,\) write the general solution and verify that the general solution satisfies the equation.

Hint

Find the general solution to the complementary equation.
Answer \[y(x)=c_1e^{−x}+c_2e^{4x}−2\] In the preceding section, we learned how to solve homogeneous equations with constant coefficients. Therefore, for nonhomogeneous equations of the form \(ay″+by′+cy=r(x)\), we already know how to solve the complementary equation, and the problem boils down to finding a particular solution for the nonhomogeneous equation. We now examine two techniques for this: the method of undetermined coefficients and the method of variation of parameters. Undetermined Coefficients The method of undetermined coefficients involves making educated guesses about the form of the particular solution based on the form of \(r(x)\). When we take derivatives of polynomials, exponential functions, sines, and cosines, we get polynomials, exponential functions, sines, and cosines. So when \(r(x)\) has one of these forms, it is possible that the solution to the nonhomogeneous differential equation might take that same form. Let’s look at some examples to see how this works. Example \(\PageIndex{2}\): Undetermined Coefficients When \(r(x)\) Is a Polynomial Find the general solution to \(y″+4y′+3y=3x\). Solution The complementary equation is \(y″+4y′+3y=0\), with general solution \(c_1e^{−x}+c_2e^{−3x}\). Since \(r(x)=3x\), the particular solution might have the form \(y_p(x)=Ax+B\). If this is the case, then we have \(y_p′(x)=A\) and \(y_p″(x)=0\). For \(y_p\) to be a solution to the differential equation, we must find values for \(A\) and \(B\) such that \[\begin{align} y″+4y′+3y & =3x \nonumber \\ 0+4(A)+3(Ax+B) & =3x \nonumber \\ 3Ax+(4A+3B) &=3x. \nonumber \end{align} \nonumber \] Setting coefficients of like terms equal, we have \[\begin{align*} 3A & =3 \\ 4A+3B & =0. \end{align*} \] Then, \(A=1\) and \(B=−\frac{4}{3}\), so \(y_p(x)=x−\frac{4}{3}\) and the general solution is \[y(x)=c_1e^{−x}+c_2e^{−3x}+x−\frac{4}{3}. \nonumber \] In Example \(\PageIndex{2}\), notice that even though \(r(x)\) did not include a constant term, it was necessary for us to include the constant term in our guess. If we had assumed a solution of the form \(y_p=Ax\) (with no constant term), we would not have been able to find a solution. (Verify this!) If the function \(r(x)\) is a polynomial, our guess for the particular solution should be a polynomial of the same degree, and it must include all lower-order terms, regardless of whether they are present in \(r(x)\). Example \(\PageIndex{3}\): Undetermined Coefficients When \(r(x)\) Is an Exponential Find the general solution to \(y″−y′−2y=2e^{3x}\). Solution The complementary equation is \(y″−y′−2y=0\), with the general solution \(c_1e^{−x}+c_2e^{2x}\). Since \(r(x)=2e^{3x}\), the particular solution might have the form \(y_p(x)=Ae^{3x}.\) Then, we have \(y_p′(x)=3Ae^{3x}\) and \(y_p″(x)=9Ae^{3x}\). For \(y_p\) to be a solution to the differential equation, we must find a value for \(A\) such that \[\begin{align*} y″−y′−2y &=2e^{3x} \\ 9Ae^{3x}−3Ae^{3x}−2Ae^{3x} & =2e^{3x} \\ 4Ae^{3x} & =2e^{3x}. \end{align*} \] So, \(4A=2\) and \(A=1/2\). Then, \(y_p(x)=(\frac{1}{2})e^{3x}\), and the general solution is \[y(x)=c_1e^{−x}+c_2e^{2x}+\dfrac{1}{2}e^{3x}. \nonumber\] Exercise \(\PageIndex{3}\) Find the general solution to \(y″−4y′+4y=7 \sin t− \cos t.\) Hint Use \(y_p(t)=A \sin t+B \cos t \) as a guess for the particular solution. Answer \[y(t)=c_1e^{2t}+c_2te^{2t}+ \sin t+ \cos t \] In the previous checkpoint, \(r(x)\) included both sine and cosine terms.
However, even if \(r(x)\) included a sine term only or a cosine term only, both terms must be present in the guess. The method of undetermined coefficients also works with products of polynomials, exponentials, sines, and cosines. Some of the key forms of \(r(x)\) and the associated guesses for \(y_p(x)\) are summarized in Table \(\PageIndex{1}\).

\(r(x)\) | Initial guess for \(y_p(x)\)
\(k\) (a constant) | \(A\) (a constant)
\(ax+b\) | \(Ax+B\) (Note: The guess must include both terms even if \(b=0\).)
\(ax^2+bx+c\) | \(Ax^2+Bx+C\) (Note: The guess must include all three terms even if \(b\) or \(c\) are zero.)
Higher-order polynomials | Polynomial of the same order as \(r(x)\)
\(ae^{λx}\) | \(Ae^{λx}\)
\(a \cos βx+b \sin βx\) | \(A \cos βx+B \sin βx\) (Note: The guess must include both terms even if either \(a=0\) or \(b=0\).)
\(ae^{αx} \cos βx+be^{αx} \sin βx\) | \(Ae^{αx} \cos βx+Be^{αx} \sin βx\)
\((ax^2+bx+c)e^{λx}\) | \((Ax^2+Bx+C)e^{λx}\)
\((a_2x^2+a_1x+a_0) \cos βx+(b_2x^2+b_1x+b_0) \sin βx\) | \((A_2x^2+A_1x+A_0) \cos βx+(B_2x^2+B_1x+B_0) \sin βx\)
\((a_2x^2+a_1x+a_0)e^{αx} \cos βx+(b_2x^2+b_1x+b_0)e^{αx} \sin βx\) | \((A_2x^2+A_1x+A_0)e^{αx} \cos βx+(B_2x^2+B_1x+B_0)e^{αx} \sin βx\)

Keep in mind that there is a key pitfall to this method. Consider the differential equation \(y″+5y′+6y=3e^{−2x}\). Based on the form of \(r(x)\), we guess a particular solution of the form \(y_p(x)=Ae^{−2x}\). But when we substitute this expression into the differential equation to find a value for \(A\), we run into a problem. We have \[y_p′(x)=−2Ae^{−2x} \nonumber\] and \[y_p''=4Ae^{−2x}, \nonumber\] so we want \[\begin{align*} y″+5y′+6y &=3e^{−2x} \nonumber \\ 4Ae^{−2x}+5(−2Ae^{−2x})+6Ae^{−2x} &=3e^{−2x} \nonumber \\ 4Ae^{−2x}−10Ae^{−2x}+6Ae^{−2x} &=3e^{−2x} \nonumber \\ 0 & =3e^{−2x}, \nonumber \end{align*}\] which is not possible. Looking closely, we see that, in this case, the general solution to the complementary equation is \(c_1e^{−2x}+c_2e^{−3x}.\) The exponential function in \(r(x)\) is actually a solution to the complementary equation, so, as we just saw, all the terms on the left side of the equation cancel out. We can still use the method of undetermined coefficients in this case, but we have to alter our guess by multiplying it by \(x\). Using the new guess, \(y_p(x)=Axe^{−2x}\), we have \[y_p′(x)=A(e^{−2x}−2xe^{−2x}) \nonumber\] and \[y_p''(x)=−4Ae^{−2x}+4Axe^{−2x}. \nonumber\] Substitution gives \[\begin{align}y″+5y′+6y & =3e^{−2x} \nonumber \\(−4Ae^{−2x}+4Axe^{−2x})+5(Ae^{−2x}−2Axe^{−2x})+6Axe^{−2x} &=3e^{−2x} \nonumber\\−4Ae^{−2x}+4Axe^{−2x}+5Ae^{−2x}−10Axe^{−2x}+6Axe^{−2x} &=3e^{−2x} \nonumber \\ Ae^{−2x} &=3e^{−2x}.\nonumber \end{align}\] So, \(A=3\) and \(y_p(x)=3xe^{−2x}\). This gives us the following general solution: \[y(x)=c_1e^{−2x}+c_2e^{−3x}+3xe^{−2x}. \nonumber\] Note that if \(xe^{−2x}\) were also a solution to the complementary equation, we would have to multiply by \(x\) again, and we would try \(y_p(x)=Ax^2e^{−2x}\). PROBLEM-SOLVING STRATEGY: METHOD OF UNDETERMINED COEFFICIENTS Solve the complementary equation and write down the general solution. Based on the form of \(r(x)\), make an initial guess for \(y_p(x)\). Check whether any term in the guess for \(y_p(x)\) is a solution to the complementary equation. If so, multiply the guess by \(x.\) Repeat this step until there are no terms in \(y_p(x)\) that solve the complementary equation. Substitute \(y_p(x)\) into the differential equation and equate like terms to find values for the unknown coefficients in \(y_p(x)\).
Add the general solution to the complementary equation and the particular solution you just found to obtain the general solution to the nonhomogeneous equation. Exercise \(\PageIndex{3}\) Find the general solution to the following differential equations. \(y″−5y′+4y=3e^x\) \(y″+y′−6y=52 \cos 2t \) Hint Use the problem-solving strategy. Answer a \(y(x)=c_1e^{4x}+c_2e^x−xe^x\) Answer b \(y(t)=c_1e^{−3t}+c_2e^{2t}−5 \cos 2t+ \sin 2t\) Variation of Parameters Sometimes, \(r(x)\) is not a combination of polynomials, exponentials, or sines and cosines. When this is the case, the method of undetermined coefficients does not work, and we have to use another approach to find a particular solution to the differential equation. We use an approach called the method of variation of parameters. To simplify our calculations a little, we are going to divide the differential equation through by \(a,\) so we have a leading coefficient of 1. Then the differential equation has the form \[y″+py′+qy=r(x),\] where \(p\) and \(q\) are constants. If the general solution to the complementary equation is given by \(c_1y_1(x)+c_2y_2(x)\), we are going to look for a particular solution of the form \[y_p(x)=u(x)y_1(x)+v(x)y_2(x).\] In this case, we use the two linearly independent solutions to the complementary equation to form our particular solution. However, we are assuming the coefficients are functions of \(x\), rather than constants. We want to find functions \(u(x)\) and \(v(x)\) such that \(y_p(x)\) satisfies the differential equation. We have \[\begin{align*}y_p & =uy_1+vy_2 \\ y_p′ & =u′y_1+uy_1′+v′y_2+vy_2′ \\ y_p″ &=(u′y_1+v′y_2)′+u′y_1′+uy_1″+v′y_2′+vy_2″. \end{align*}\] Substituting into the differential equation, we obtain \[\begin{align*}y_p″+py_p′+qy_p & =[(u′y_1+v′y_2)′+u′y_1′+uy_1″+v′y_2′+vy_2″] \\ &\;\;\;\;+p[u′y_1+uy_1′+v′y_2+vy_2′]+q[uy_1+vy_2] \\ &=u[y_1″+py_1′+qy_1]+v[y_2″+py_2′+qy_2] \\ & \;\;\;\; +(u′y_1+v′y_2)′+p(u′y_1+v′y_2)+(u′y_1′+v′y_2′). \end{align*}\] Note that \(y_1\) and \(y_2\) are solutions to the complementary equation, so the first two terms are zero. Thus, we have \[(u′y_1+v′y_2)′+p(u′y_1+v′y_2)+(u′y_1′+v′y_2′)=r(x).\] If we simplify this equation by imposing the additional condition \(u′y_1+v′y_2=0\), the first two terms are zero, and this reduces to \(u′y_1′+v′y_2′=r(x)\). So, with this additional condition, we have a system of two equations in two unknowns: \[\begin{align*} u′y_1+v′y_2 &= 0 \\u′y_1′+v′y_2′ &=r(x). \end{align*}\] Solving this system gives us \(u′\) and \(v′\), which we can integrate to find \(u\) and \(v\). Then, \(y_p(x)=u(x)y_1(x)+v(x)y_2(x)\) is a particular solution to the differential equation. Solving this system of equations is sometimes challenging, so let’s take this opportunity to review Cramer’s rule, which allows us to solve the system of equations using determinants. RULE: CRAMER’S RULE The system of equations \[\begin{align*} a_1z_1+b_1z_2 &=r_1 \\[4pt] a_2z_1+b_2z_2 &=r_2 \end{align*}\] has a unique solution if and only if the determinant of the coefficients is not zero. In this case, the solution is given by \[z_1=\dfrac{\begin{array}{|ll|}r_1 & b_1 \\ r_2 & b_2 \end{array}}{\begin{array}{|ll|}a_1 & b_1 \\ a_2 & b_2 \end{array}} \; \; \; \; \; \text{and} \; \; \; \; \; z_2= \dfrac{\begin{array}{|ll|}a_1 & r_1 \\ a_2 & r_2 \end{array}}{\begin{array}{|ll|}a_1 & b_1 \\ a_2 & b_2 \end{array}}. \label{cramer}\]
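Variation of parameters is easy to mechanize with a computer algebra system. The following SymPy sketch applies Cramer's rule to the system above for the equation \(y″+y=\tan x\); the right-hand side \(\tan x\) is a hypothetical choice of mine (picked because undetermined coefficients cannot handle it), and the printed form of the answer may vary with the SymPy version.

import sympy as sp

x = sp.symbols("x")
y1, y2, r = sp.cos(x), sp.sin(x), sp.tan(x)   # y1, y2 solve the complementary equation y'' + y = 0

# Wronskian = determinant of the coefficient matrix in Cramer's rule
W = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]]).det()
u_prime = sp.simplify(-y2 * r / W)   # Cramer's rule on u'y1 + v'y2 = 0, u'y1' + v'y2' = r
v_prime = sp.simplify(y1 * r / W)
u = sp.integrate(u_prime, x)
v = sp.integrate(v_prime, x)
y_p = sp.simplify(u * y1 + v * y2)
print(y_p)                                            # a particular solution
print(sp.simplify(sp.diff(y_p, x, 2) + y_p - r))      # should print 0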
We can also solve this problem by projecting $f(x) = -\dfrac{1}{1+x^2}$ on the linear space spanned by all linear functions defined on $[0,1]$, where we use the inner product $\langle f,g\rangle=\int_0^1f(x) g(x) dx$. The normalized constant function $e_0(x) = 1$ can be taken to be one basis vector of the space of linear functions. The function $h(x) = x$ is linearly independent from $e_0(x)$, but it is not orthogonal to it. Using the Gram–Schmidt process we can find the correct basis vector as follows. We subtract from $h(x)$ its component in the direction of $e_0(x)$ and then we normalize the result. We put: $$g(x) = h(x) - \langle h,e_0\rangle e_0(x) = x - \int_0^1 x dx = x - \frac{1}{2}$$ Normalizing $g(x)$ then gives us the other basis vector $e_1(x)$ of the space of the linear functions: $$e_1(x) = \frac{g(x)}{\sqrt{\langle g,g\rangle}} = \frac{x-\frac{1}{2}}{\sqrt{\int_0^1 \left(x-\frac{1}{2}\right)^2 dx }} = 2\sqrt{3}\left(x-\frac{1}{2}\right)$$ The projection of $f(x)$ on the linear space spanned by the linear functions is then: $$\langle f,e_0 \rangle e_0(x) + \langle f,e_1 \rangle e_1(x) = 3\log(2) -\pi +\left( \frac{3\pi}{2}-6\log(2)\right)x $$
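For readers who want to check the arithmetic, here is a short SymPy sketch of the same projection (the symbol names are mine; the printed polynomial should match the result above up to term ordering):

import sympy as sp

x = sp.symbols("x")
f = -1 / (1 + x**2)
e0 = sp.Integer(1)                             # normalized constant basis function on [0, 1]
e1 = 2 * sp.sqrt(3) * (x - sp.Rational(1, 2))  # Gram-Schmidt result from above

def inner(u, v):
    # inner product <u, v> = integral of u*v over [0, 1]
    return sp.integrate(u * v, (x, 0, 1))

proj = sp.expand(inner(f, e0) * e0 + inner(f, e1) * e1)
print(proj)   # -> 3*log(2) - pi + x*(3*pi/2 - 6*log(2)), up to ordering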
Examples 5-7 In the previous Examples 3 and 4, the way we specified $D$ and $R$ suggested how to write the integral as an iterated integral. Sometimes conditions are best interpreted graphically before deciding whether to evaluate as Type I or Type II. Solution 5: $D$ is enclosed by the straight line $y = x$ and the parabola $y = x^2$ as shown here. To determine the limits of integration we first need to find the points of intersection of $y = x$ and $y = x^2$. These occur when $x^2 = x$. This means $x(x-1)=0$, so $x = 0,\, 1$. Treating $D$ as a Type I region, we fix $x$ between $x=0$ and $x=1$, and integrate with respect to $y$ along the black vertical line, getting the iterated integral $\displaystyle I \ = \ \int_0^1\left(\int_{x^2}^{x}\, (3x + 4y)\, dy\right)\,dx$: Evaluate $I$. Example 6: The region $D$ that is shown here is Type I but is not Type II: Fixing $x$ and integrating first with respect to $y$ along the vertical black line makes good sense because then we have the same curve $f_1(x)$ along the bottom and $f_2(x)$ along the top: $$ D \ = \ \Bigl\{\,(x,\,y) : f_1(x) \le y \le f_2(x),\ \ a \le x \le b\,\Bigl\}$$ for suitable choices of $a,\, b$ and functions $f_1(x),\, f_2(x)$ giving us: $$ \iint_D\, f(x,\,y)\, dA = \int_a^b \left(\int_{f_1(x)}^{f_2(x)}\, f(x,\,y)\, dy\right) dx\,.$$ But if we had chosen to fix $y$, then the integral with respect to $x$ would sometimes split into two parts, as is shown with the red horizontal lines. This would make evaluating this integral more complicated -- for one thing, there would be more than one integral. Example 7: Similarly, the region $D$ shown here is Type II but not Type I. Fixing $y$ and integrating first with respect to $x$ along the horizontal black line makes good sense because then $$ D \ = \ \Bigl\{\,(x,\,y) : g_1(y) \le x \le g_2(y),\ \ c \le y \le d\,\Bigl\}$$ for suitable choices of $c,\, d$ and functions $g_1(y),\, g_2(y)$. In this case $$ \iint_D\, f(x,\,y)\, dxdy = \int_c^d \left(\int_{g_1(y)}^{g_2(y)}\, f(x,\,y)\, dx\right) dy\,.$$ But if we had chosen to fix $x$, then the integral with respect to $y$ would sometimes split into two parts as shown by the red vertical lines. Again, this would make the integral(s) more complicated.
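As a quick numerical check of Solution 5, here is a short SymPy sketch (standard SymPy calls; the value 31/60 follows from integrating the inner result $5x^2 - 3x^3 - 2x^4$ over $[0,1]$):

import sympy as sp

x, y = sp.symbols("x y")
# Type I region of Solution 5: x fixed in [0, 1], y running from x^2 up to x
inner = sp.integrate(3 * x + 4 * y, (y, x**2, x))   # -> 5*x**2 - 3*x**3 - 2*x**4
I = sp.integrate(inner, (x, 0, 1))
print(inner, I)                                     # -> ..., 31/60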
This case study is a detailed modeling and simulation workflow on a TTE data set. It is recommended to first read the Introduction to TTE data and the library of models in Monolix. Another case study on time-to-event data is also available: Case study on veteran lung cancer data set. In this case study, we develop a model for the NCCTG survival data set and study the effect of covariates. Introduction Data set visualization Modeling with Monolix Simulations with Simulx Conclusion Downloads Introduction The North Central Cancer Treatment Group (NCCTG) data set records the survival of 228 patients with advanced lung cancer, together with assessments of the patients' performance status, measured both by the physician and by the patients themselves. The goal of the study was to determine whether the patients' self-assessment could provide prognostic information complementary to the physician's assessment. This data set was originally presented and analyzed in: Loprinzi et al. (1994). Prospective evaluation of prognostic variables from patient-completed questionnaires. North Central Cancer Treatment Group. Journal of Clinical Oncology, 12(3), 601–607. In this case study, we will test several parametric models to capture this data set and evaluate the prognostic performance of the recorded covariates. Data set visualization The data set contains 228 patients, including 63 patients that are right censored (patients that left the study before their death). The original data set has been reformatted according to the data set formatting guidelines and includes both the starting time and the time of death or drop out. To assess the importance of self performance assessment versus the physician's assessment, the following covariates have been recorded: ecogPH: ECOG (Eastern Cooperative Oncology Group) performance status assessed by the physician, on a scale from 0 (fully active) to 5 (dead). karnoPH: Karnofsky performance status, assessed by the physician, on a scale from 0 (dead) to 100 (completely healthy). karnoPAT: Karnofsky performance status, assessed by the patient. sex: sex of the patient (F for female, M for male). age: age of the patient (years). We first use Monolix to visualize the data set. After having opened Monolix, we start a new project and load the data (the covariates must be assigned to COV for continuous covariates or CAT for categorical covariates). Before using the data visualization feature of Monolix, we must indicate that the data is TTE data. This is done via the choice of a structural model, which defines an event as output. For the moment, we can just select any model from the TTE model library via the model selection window that opens when clicking on “model file”. We next click on the data visualization button, next to the Data button. The Kaplan–Meier (KM) estimate of the survival curve as well as the mean number of events curve appear in the figure window. In the “Settings”, we can choose to remove the mean number of events curve and display the censored data. In the “Stratify” part, we can split the KM curve according to categorical covariates, or categorized continuous covariates (groups can be changed), in order to visually check the impact of the covariates. At first sight, it seems that sex, ECOG, karnoPH and karnoPAT influence the survival, while age does not.
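As a side note on what the KM plot shows, the estimator itself is simple to reproduce. Below is a minimal Python sketch (my own helper, not Monolix code), with times in days, event = 1 for a death and event = 0 for a right-censored patient, and deaths processed before censorings at tied times:

import numpy as np

def kaplan_meier(times, events):
    # sort by time, with deaths before censorings at ties (the usual convention)
    order = np.lexsort((1 - np.asarray(events), np.asarray(times)))
    t = np.asarray(times, float)[order]
    e = np.asarray(events, int)[order]
    n_at_risk = len(t)
    S, curve = 1.0, []
    for ti, ei in zip(t, e):
        if ei:                          # a death: S drops by the factor (1 - 1/n_at_risk)
            S *= 1.0 - 1.0 / n_at_risk
            curve.append((ti, S))
        n_at_risk -= 1                  # deaths and censorings both shrink the risk set
    return curve

print(kaplan_meier([10, 20, 20, 35], [1, 1, 0, 1]))   # -> [(10, 0.75), (20, 0.5), (35, 0.0)]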
[Figures: Kaplan–Meier curves stratified by sex, age, ecogPH, karnoPH and karnoPAT] Note that ECOG, karnoPH and karnoPAT are all performance scores and that they are probably strongly correlated. Modeling with Monolix We would now like to develop a model for this data set, and next analyze which covariates have the highest prognostic performance. Within the MonolixSuite, only parametric models are possible (no non-parametric or semi-parametric approaches). Structural model We start with the development of the structural model. The typical shape of common parametric survival models has been presented in Part 1 of the TTE case study, together with the library of common TTE models. From the KM curve visualization, the Weibull, log-logistic, gamma or Gompertz models could be appropriate. We will try each in turn, choosing the files with extensions “_singleEvent.txt” from the library. For models with 2 parameters, it is common to assume that the shape parameter is the same for all individuals, while the scale parameter can vary from individual to individual. This inter-individual variability can be captured via the effect of covariates and/or via random effects (such models are often called frailty models). Following this strategy, the random effects are disabled for the shape parameters p, s, k and alpha (by clicking on the corresponding diagonal element of the variance-covariance matrix) and kept for the scale parameter Te. We also keep the default log-normal distributions, to ensure positive values of the parameters. In each model, the scale parameter is the characteristic time Te. Using the KM data visualization, we choose 300 as initial value, i.e. the value for which the survival is around 50%. For the shape parameter, we use 1 as initial value, according to the typical value chart given on the page of the TTE model library. We next launch the estimation tasks for each structural model in turn: estimation of the population parameters, estimation of the standard errors via the Fisher Information Matrix, estimation of the individual parameters (conditional mean), estimation of the log-likelihood, and generation of the graphics. The performance of each structural model can then be assessed and compared using the TTE graphic and the log-likelihood values. After each run, in the Time to Event data graphic, one can calculate the 90% prediction interval for the KM curve given the model (in Settings > Prediction interval) and overlay it on top of the empirical KM curve. It can be interpreted in the same way as a VPC. A summary is presented below: From a visual point of view, the exponential, log-logistic and gamma models can be excluded as they do not capture the shape of the KM curve. The Weibull and Gompertz models are satisfactory, with a slight preference for the Gompertz model, as indicated by the BIC values. We thus choose the Gompertz model as structural model. Covariate model We next investigate if considering covariates on the Te parameter can help explain its inter-individual variability. For didactic purposes, we will perform the covariate search by hand and stepwise. In Monolix, using a backward covariate search approach is especially powerful. Indeed, after having calculated the s.e., a Wald test is performed to test the significance of each covariate. Thus it is sufficient to estimate the parameters of the model including all covariate relationships to get a p-value for each relationship, without having to estimate the submodels with one covariate relationship less.
Following this strategy, we will estimate the model with all available covariates on Te and stepwise remove the least significant relationship, until all remaining covariates are significant. The AIC and BIC will also be monitored in parallel. We thus add all covariates on the Te parameter in the “Covariate model” section, which corresponds to the following model for Te: $$T_{e,i}=T_{e,\textrm{pop}}e^{\beta_{\textrm{sex}}[\textrm{if sex=M}] + \beta_{\textrm{age}}\times\textrm{age} + \beta_{\textrm{ecogPH}}\times\textrm{ecogPH} + \beta_{\textrm{karnoPH}}\times\textrm{karnoPH} + \beta_{\textrm{karnoPAT}}\times\textrm{karnoPAT}} e^{\eta_i}$$ The table below summarizes the stepwise covariate removal, based on the p-values and AIC/BIC: The models with (sex, ecogPH, karnoPAT) and (sex, ecogPH) have similar AIC and BIC values. Yet the model with (sex, ecogPH, karnoPAT) has a high condition number (around 300), so we prefer the (sex, ecogPH) model. Final model Our final model includes a Gompertz structural model and the covariates sex and ecogPH on the scale parameter Te. The model improvement when karnoPAT is included in addition to ecogPH is very small, indicating that a self-assessment of the performance status by the patient permits only a slightly better prognosis, compared to using the physician's ECOG performance status evaluation only. In the original study, which included more patients and different types of cancer, the value of patient self-assessment was higher. The estimated parameters are below. The r.s.e. are reasonable. The 90% prediction interval for the KM curve shows that the data is properly captured. The other graphics do not hint at any model mis-specification. For a given sex and a given ecogPH score, we can easily calculate analytically the typical hazard and the associated Gompertz model survival. From the survival function, we calculate the probability to survive at least 6 months (180 days), 1 year (365 days), or 2 years (730 days) depending on the sex and ECOG score: The probability density function of the death event can also be easily calculated analytically as the product of the hazard and the survival functions, for a given sex and ECOG score. Below we show the probability of death with respect to time for three different cases. The plot can also be interpreted as the distribution of death times in each of the three sub-populations. Simulations using Simulx Simulx is part of the mlxR package. To run the scripts, mlxR version >= 3.3.1 is required. We would like to simulate three new patient cohorts (with different covariates compared to the original data set). The three cohorts have the following characteristics: cohort 1: 50% male / 50% female, good ECOG scores (0 or 1) cohort 2: 50% male / 50% female, poorer ECOG scores (2 or 3) cohort 3: 90% male / 10% female, good ECOG scores (0 or 1) To simulate these three cohorts, we create three groups, each defined via a data frame of the individuals' covariates, and pass them as a simulx input argument. The simulation is done via the simulx function, which returns an R object containing the result of the simulation (time of death for each individual). We can then pass this object to kmplotmlx to obtain the Kaplan–Meier survival curves.
For cohort 1, the R code reads:
# defining the path to the mlxtran project file
project.file <- "./monolix_project/43_gompertz_all_nokarnoPH_noage_nokarnoPAT.mlxtran"
#========== defining group 1
# covariate data frame for group 1, with column id and one column per covariate
cov1 <- data.frame(id = 1:228,
                   sex = c(rep("F", 114), rep("M", 114)),
                   ecogPH = rbinom(n = 228, size = 1, prob = 0.5),
                   age = NaN,      # unused in the model but must be present
                   karnoPH = NaN,  # unused in the model but must be present
                   karnoPAT = NaN) # unused in the model but must be present
#============= calling simulx for group 1
res1 <- simulx(project = project.file, parameter = cov1)
We then combine the output objects into one and plot with:
#============= plotting the KM survival curve
group.labels <- c("50% M/50% F, ECOG 0-1", "50% M/50% F, ECOG 2-3", "90% M/10% F, ECOG 0-1")
kmplotmlx(EvRes, labels = group.labels, facet = FALSE) +
  xlab("Time (days)") + ylab("Survival") +
  theme(legend.justification = c(1, 1), legend.position = c(0.9, 0.9))
We obtain the following prediction: Because the time of death is a random variable, several simulations of the same population will lead to different survival curves. We can assess the uncertainty of the survival curve by doing replicates. This can be done very easily by adding an argument nrep:
res1 <- simulx(project = project.file, parameter = cov1, nrep = 100)
We then obtain a prediction interval for the three survival curves: Conclusion The MonolixSuite is a powerful tool for modeling and simulation of time-to-event data via a parametric approach. Covariates, frailty models and censoring can easily be incorporated. Built-in statistical tests and diagnostic plots (in particular the “visual predictive check for TTE data”) render the model development process straightforward. With the lung cancer data set of this case study, the risk of death increases over time (Gompertz distribution of death times). Sex and the ECOG performance score are significant covariates and thus prognostic factors. Performance scores assessed by patients rather than physicians can also serve as prognostic factors, but their utility in addition to the physician's measured score is small. Thanks to the parametric formulation of the model, survival probabilities depending on the sex and ECOG score can easily be computed. In addition, simulations of cohorts with combinations of covariates can be performed using Simulx, and the uncertainty of the resulting survival curves can be visualized.
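To make the earlier remark about analytical survival probabilities concrete, here is a rough Python sketch using the textbook Gompertz hazard $h(t) = a e^{bt}$. Note the hedges: the Monolix library parameterizes the hazard through the characteristic time Te and a shape parameter instead, and the parameter values below are hypothetical placeholders, not the fitted estimates from this case study.

import numpy as np

a, b = 1e-3, 5e-3   # hypothetical hazard level (1/day) and shape (1/day)

def gompertz_survival(t):
    # S(t) = exp(-(a/b) * (exp(b*t) - 1)) for the hazard h(t) = a*exp(b*t)
    return np.exp(-(a / b) * np.expm1(b * t))

for days in (180, 365, 730):   # 6 months, 1 year, 2 years
    print(days, round(float(gompertz_survival(days)), 3))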
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2−b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity, use of the inference rule MP seems to be necessary.
But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti? @AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms). This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n-1}$ to be zero, because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Let's start by tracing the probabilities with a single alien: At the 1st step, he might be at positions $+1$ or $-1$ with $1/2$ probability each. At the 2nd step, $-2$ ($1/4$), $0$ ($2/4$) or $+2$ ($1/4$). At the 3rd step, $-3$ ($1/8$), $-1$ ($3/8$), $+1$ ($3/8$), $+3$ ($1/8$). At the 4th step, $-4$ ($1/16$), $-2$ ($4/16$), $0$ ($6/16$), $+2$ ($4/16$), $+4$ ($1/16$). At the 5th step, $-5$ ($1/32$), $-3$ ($5/32$), $-1$ ($10/32$), $+1$ ($10/32$), $+3$ ($5/32$), $+5$ ($1/32$). At the 6th step, $-6$ ($1/64$), $-4$ ($6/64$), $-2$ ($15/64$), $0$ ($20/64$), $+2$ ($15/64$), $+4$ ($6/64$), $+6$ ($1/64$). At the 7th step, $-7$ ($1/128$), $-5$ ($7/128$), $-3$ ($21/128$), $-1$ ($35/128$), $+1$ ($35/128$), $+3$ ($21/128$), $+5$ ($7/128$), $+7$ ($1/128$). This can easily be seen when you build a triangle with the number 1 at the top, filling each row below it with the sum of the two numbers above (Pascal's triangle). Each line has a position from $-7$ to $+7$ (the horizontal positions) and the probability of the alien being in that position is expressed as $\frac{x}{2^y}$, where $x$ is the number in the table for the step $y$. Blanks should be considered as $x = 0$. Now, let's see where the two aliens might meet: [a] At the 5th step with $+5$ and $-5$, coming from $+4$ and $-4$, with a probability of $\frac{1}{32} \times \frac{1}{32} = \frac{1}{1024}$. Note that: At the 6th step with $+6$ and $-6$, coming from $+5$ and $-5$, they would already have met at the 5th step; [b] At the 6th step with $+6$ and $-4$, coming from $+5$ and $-3$. This gives a probability of $\frac{1}{64} \times \frac{5}{64} = \frac{5}{4096}$; [c] The opposite of [b]; [d] At the 7th step, having the 5th as $+5$ and $-3$, 6th as $+6$ and $-2$ and 7th as $+7$ and $-3$; [e] At the 7th step, having the 5th as $+5$ and $-3$, 6th as $+4$ and $-4$ and 7th as $+5$ and $-5$; [f] At the 7th step, having the 5th as $+3$ and $-3$, 6th as $+4$ and $-4$ and 7th as $+5$ and $-5$; [g] the opposite of [d]; [h] the opposite of [e]. For cases d, e, g and h, the probabilities of reaching the 5th step are calculated as $\frac{1}{32} \times \frac{5}{32} = \frac{5}{1024}$ for each. For case f, the probability of reaching the 5th step is $\frac{5}{32} \times \frac{5}{32} = \frac{25}{1024}$. Each of the cases d to h depends on exactly two further steps by each of the aliens, so... ... we multiply each of those probabilities by $\frac{1}{16}$, because four specific moves must happen (two by each alien), each with probability $\frac{1}{2}$. This gives the probabilities as: $ P = p_a + p_b + p_c + (p_d + p_e + p_f + p_g + p_h) \times \frac{1}{16}$ However... Given that $p_b = p_c$ and that $p_d = p_e = p_g = p_h$, then $p_b + p_c = 2 p_b$ and $p_d + p_e + p_g + p_h = 4 p_d$, and we can simplify the formula to: $P = p_a + 2 p_b + (4 p_d + p_f) \times \frac{1}{16}$ Now, we can solve this as... $$ \begin{align} P & = p_a + 2 p_b + (4 p_d + p_f) \times \frac{1}{16} \\ & = \frac{1}{1024} + 2 \times \frac{5}{4096} + 4 \times \frac{5}{1024} \times \frac{1}{16} + \frac{25}{1024} \times \frac{1}{16} \\ & = \frac{16}{16384} + \frac{40}{16384} + \frac{20}{16384} + \frac{25}{16384} \\ & = ... \end{align}$$ And the final result is... $$P = \frac{101}{16384}$$ And a final curiosity: The aliens are always separated by an even number of meters. This also ensures that if they meet, they will collide and not pass through each other, swapping their positions.
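A brute-force check of the final value is easy, assuming (as in the reasoning above) that the aliens start 10 m apart, step $\pm 1$ m simultaneously and independently, and collide when they occupy the same point within 7 steps; the Python sketch below enumerates all joint paths:

from itertools import product

STEPS, GAP = 7, 10   # 7 time steps; aliens start 10 m apart (assumption from the setup above)

hits = 0
for moves in product((-1, 1), repeat=2 * STEPS):   # all 4**7 = 16384 joint paths
    a, b = 0, GAP
    for i in range(STEPS):
        a += moves[2 * i]                          # alien 1's step
        b += moves[2 * i + 1]                      # alien 2's step
        if a == b:                                 # even separation: they can only meet, never cross
            hits += 1
            break
print(hits, "/", 4 ** STEPS)                       # -> 101 / 16384, matching the analysis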
I have this code:
S = (Sqrt[2]/2)*{{1 + Conjugate[δ], 0}, {0, 1 - Conjugate[δ]}} (* Suppose a+b=1 and δ=((a-b)/(a+b))\[Conjugate] *)
k = (1/Sqrt[2])*{{S[[1, 1]] + S[[2, 2]]}, {S[[1, 1]] - S[[2, 2]]}, {2 S[[1, 2]]}} // Simplify
Subscript[T, 0] = Dot[k, ConjugateTranspose[k]]
Subscript[T, 0] // MatrixForm
Subscript[T, 0] // TraditionalForm
$$\left( \begin{array}{ccc} 1 & \delta & 0 \\ \delta ^* & \delta \delta ^* & 0 \\ 0 & 0 & 0 \\ \end{array} \right)$$ As you see, at the end the product of $\delta$ and $\delta^*$ is not printed as $|\delta|^2$ but as $\delta\delta^*$. Someone told me in one of my questions that this is because: It seems that you did not instruct Mma that δ∗ is a conjugated value of δ. Using simply a conjugate symbol is not enough. You should use Conjugate[δ] instead and then apply ComplexExpand So far I have tried several ways, like using the UpSetDelayed operator at the beginning of the code:
δ\[Conjugate] ^:= Conjugate[δ]
or using:
ComplexExpand[Subscript[T, 0], δ, TargetFunctions -> {Abs, Conjugate}]
But I couldn't change anything. Following the first answer posted to the question, I wrote:
FullSimplify[Subscript[T, 0]] // TraditionalForm
$$\left(\begin{array}{ccc} 1 & \delta & 0 \\ \delta ^* & \left| \delta \right| ^2 & 0 \\ 0 & 0 & 0 \\\end{array}\right)$$ But when I continue the code and apply the same trick on another matrix, the trick doesn't work!
R[ψ_] := {{1, 0, 0}, {0, Cos[2 ψ], Sin[2 ψ]}, {0, -Sin[2 ψ], Cos[2 ψ]}}
T[ψ_] := Dot[R[ψ], Subscript[T, 0], Transpose[R[ψ]]]
FullSimplify[T[ψ]] // TraditionalForm
$$\left( \begin{array}{ccc} 1 & \delta (\cos (2 \psi )) & -\delta (\sin (2 \psi )) \\ \delta ^* (\cos (2 \psi )) & \delta \delta ^* \left(\cos ^2 (2 \psi )\right) & -\frac{1}{2} \delta \delta ^* (\sin (4 \psi )) \\ -\delta ^* (\sin (2 \psi )) & -\frac{1}{2} \delta \delta ^* (\sin (4 \psi )) & \delta \delta ^* \left(\sin ^2 (2 \psi )\right) \\ \end{array} \right)$$
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2015-06) We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...
I'm trying to learn about linear congruences of the form $ax \equiv b \pmod{m}$. In my book, it's written that if $\gcd(a, m) = 1$ then there must exist an integer $a'$ which is an inverse of $a \pmod{m}$. I'm trying to solve this example: $$3x \equiv 4 \pmod 7$$ First I noticed $\gcd(3, 7) = 1$. Therefore, there must exist an integer which is the multiplicative inverse of $3 \pmod 7$. According to Bezout's Theorem, if $\gcd(a, m) = 1$ then there are integers $s$ and $t$ such that $sa+tm=1$, where $s$ is the multiplicative inverse of $a\pmod{m}$. Using that theorem: $\begin{align}7 = 3\cdot2 +1\\7 - 3\cdot2 = 1 \\-2\cdot3 + 7 = 1\end{align}$ $s=-2$ in the above equation, so $-2$ is the inverse of $3 \pmod{7}$. The book says that the next step to solve $3x \equiv 4 \pmod{7}$ is to multiply by $-2$ on both sides. By doing that I get: $\begin{align}-2\cdot3x \equiv -2\cdot4 \pmod 7\\-6x\equiv -8 \pmod 7\end{align}$ What should I do after that? I have been working on this problem for hours. Thanks :)
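(For what it's worth, the computation finishes quickly from here: since $-6 \equiv 1 \pmod 7$, the congruence $-6x \equiv -8 \pmod 7$ reduces to $x \equiv -8 \equiv 6 \pmod 7$, and indeed $3 \cdot 6 = 18 \equiv 4 \pmod 7$. A short Python sketch of the same inverse-based method:)

a, b, m = 3, 4, 7

inv = pow(a, -1, m)     # modular inverse (Python 3.8+); here 5, which is -2 mod 7
x = (inv * b) % m       # x = 6
print(x, (a * x) % m)   # -> 6 4, confirming 3*6 = 18 = 4 (mod 7)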
A blog by Sebastian Liem and Erwin Poeze – ViriCiti Labs ViriCiti provides insights into electric, CNG, diesel, hybrid, and hydrogen vehicles, as well as their charging infrastructure. We can, therefore, provide full insight into energy management, maintenance, route operations, and flexible charging for mixed fleets. Currently, we are working on the service ‘Smart Driving’, a tool assisting drivers to drive in an energy efficient, comfortable, and safe way. One of the core inputs of Smart Driving is the measurement of the vehicle's acceleration, which is done via the DataHub — ViriCiti's onboard hardware unit — which has a built-in 3D accelerometer. Before the acceleration signals can be useful to us, however, the coordinate system of the DataHub must be aligned with the vehicle. Only then can we interpret the acceleration in the x-direction (driving direction) as braking or accelerating, and only then can we interpret rotation around the z-direction (downward direction) as the vehicle turning. The DataHub is installed by our customers, and we at ViriCiti do not necessarily know the orientation of the DataHub in relation to the vehicle. The coordinate systems of the DataHub and the vehicle are therefore not necessarily aligned, as depicted in Figure 1. Figure 1: Vehicle's coordinate system ($x_v$, $y_v$, $z_v$) is not aligned with that of the DataHub ($x_d$, $y_d$, $z_d$) To overcome this misalignment, we have developed a method to automatically determine the DataHub's orientation, enabling us to align its coordinate system with the vehicle's. This method has two steps; in the first step, we align the z-axes of the DataHub and vehicle. Secondly, we align the x- and y-axes. The principle is the same for both steps: we measure the acceleration in a known vehicle state where we know the forces acting on the vehicle, meaning that we know the acceleration vector in the vehicle's coordinate system. In the first step, the vehicle must be stationary and positioned on a level surface. In this situation, only gravity affects the vehicle and the acceleration is $a_v = (0, 0, g)$ in the vehicle's coordinate system. As the DataHub is misaligned, it measures some other acceleration $a_d$. Knowing how $a_v$ is expressed in the coordinate system of the DataHub, we can find a coordinate transformation that aligns the z-axes of the vehicle's and DataHub's coordinate systems. In the z-aligned coordinate system $x'_d, y'_d, z'_d$ (where $z'_d = z_v$), the x- and y-axes can still be misaligned, but this is solved in the second step. In this step, the vehicle should be braking while driving in a straight line. We then derive that $a_v = (-a_x, 0, g)$, which we use to find the coordinate transformation from the z-aligned DataHub coordinate system to the vehicle's coordinate system. Combining the coordinate transformations from steps 1 and 2, we have a coordinate transformation that aligns the measurements from the DataHub with the vehicle. This allows us to use the acceleration to describe the movements of the vehicle. In the two following sections, we provide details on how each step works. Figure 2: Step 1, rotation to align the z-axes of the vehicle and DataHub ($z_v$ and $z_d$). Step 2, rotation around the aligned z-axis to align the x- and y-axes. Step 1: Using Gravity In this first step, we need to determine when the vehicle is stationary and standing on a level surface. Determining if the vehicle is stationary is quite straightforward, as we have direct access to the vehicle speed.
Determining if the vehicle is on a level surface, however, is more difficult, as there is no sensor reading readily available. Instead, we rely on statistics taken from many samples — no city is all uphill, after all, and with a sufficient number of samples the average models a level surface. The measured acceleration $a_d$ in the DataHub's coordinate system expressed in the vehicle's coordinate system should be $a_v = (0, 0, g)$. We can now find a transformation matrix $R$ so that $$a_v = R \cdot a_d.$$ We use $R$ to denote the matrix because we know it should be a rotation matrix; it should preserve the origin, the norm, as well as the orientation of the acceleration signal. Note that $R$ doesn't fully transform the vector from the DataHub's coordinate system to that of the vehicle. It does, however, align their z-axes or, equivalently, the xy-planes. There is more than one way to construct a rotation matrix – Euler angles and quaternions are two popular approaches. We found these two methods to be unnecessarily complex and numerically error-prone for our purposes. Instead we took a step back and used the theory of rotations: $SO(3)$, the group of 3D Euclidean rotations. We will sketch, with no claims of rigor, the method we settled on. $SO(3)$ is a Lie group which has corresponding Lie algebra $\mathcal{so}(3)$. If we can express our rotation in this algebra we can generate the rotation matrix. To do this we chose the basis $L = [L_x, L_y, L_z]$ for $\mathcal{so}(3)$ $$L_x =\begin{pmatrix}0 & 0 & 0 \\0 & 0 & -1 \\0 & 1 & 0 \\\end{pmatrix},\quad L_y =\begin{pmatrix}0 & 0 & 1 \\0 & 0 & 0 \\-1 & 0 & 0 \\\end{pmatrix},\quad L_z =\begin{pmatrix}0 & -1 & 0 \\1 & 0 & 0 \\0 & 0 & 0 \\\end{pmatrix}$$ With this basis we can identify a rotation by angle $\theta$ around some unit vector $u$ with the element $\theta u \cdot L \in \mathcal{so}(3)$. With this description of the rotation in the Lie algebra we use the exponential map to generate the actual rotation matrix. The map is defined using the matrix exponential series $$\mathcal{so}(3) \to SO(3); \quad \theta u \cdot L \to R = e^{\theta u \cdot L } = I + \theta u \cdot L + \frac{1}{2!}(\theta u \cdot L )^2 + \ldots.$$ The infinite series has an analytical solution because $u \cdot L$ is skew-symmetric, meaning $(u \cdot L )^3 = -u \cdot L$. The higher order terms simplify to one $u \cdot L$ term and one $(u \cdot L )^2$ term. We get $$R = I + \left[\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \ldots\right] u \cdot L + \left[ \frac{\theta^2}{2!} - \frac{\theta^4}{4!} + \frac{\theta^6}{6!} - \ldots \right](u \cdot L)^2,$$ which, after recognizing the trigonometric series, lets us write $$R = I + \sin \theta \, u \cdot L + (1 - \cos \theta) (u \cdot L)^2,$$ which is Rodrigues' rotation formula. With this formula we can return to our problem of rotating $a_d$ onto $a_v$. We rotate in the plane spanned by the two vectors, i.e. around the cross-product $v = a_d \times a_v$. The $\cos \theta$ and $\sin \theta$ can be found using the geometric meaning of the dot and cross-products respectively. We identify $$u = \frac{v}{||v||}, \quad \cos \theta = \frac{a_v \cdot a_d}{||a_v|| ||a_d||}, \quad \sin \theta = \frac{||v||}{ ||a_v|| ||a_d||}.$$ And with $a_d$ measured and $a_v$ assumed, we have our rotation matrix $R$ which aligns the z-axes of the DataHub and that of the vehicle. Step 2: Braking With the z-axes aligned, the next step is to align the x- and y-axes. We do this by identifying a scenario where the acceleration in the xy-plane is known to be only in the x-direction.
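The rotation construction derived above (and reused in step 2) is compact enough to sketch in NumPy; the function name and the demo vectors below are mine, not ViriCiti's code:

import numpy as np

def align_rotation(a_d, a_v):
    # rotation R with R @ a_d parallel to a_v: R = I + sin(t) K + (1 - cos(t)) K^2
    a_d = np.asarray(a_d, float) / np.linalg.norm(a_d)
    a_v = np.asarray(a_v, float) / np.linalg.norm(a_v)
    v = np.cross(a_d, a_v)                        # rotation axis, v = a_d x a_v
    sin_t, cos_t = np.linalg.norm(v), float(a_d @ a_v)
    if sin_t < 1e-12:
        # already aligned (the exactly antiparallel case would need a dedicated 180-degree turn)
        return np.eye(3)
    u = v / sin_t
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])            # u . L, the skew-symmetric generator
    return np.eye(3) + sin_t * K + (1.0 - cos_t) * (K @ K)

# demo: a tilted gravity measurement is rotated back onto the z-axis
print(align_rotation([0.5, 0.1, 9.7], [0.0, 0.0, 9.81]) @ [0.5, 0.1, 9.7])   # -> ~[0, 0, 9.713]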
When the vehicle is braking in a straight line, the acceleration is $a_v = (-a_{x;v}, 0, g)$ in the vehicle coordinate system. Having measured the acceleration in the DataHub's coordinate system and rotated this acceleration vector to the aligned xy-planes of the vehicle and DataHub during the braking event, we can then find the rotation matrix to completely align the DataHub with its vehicle. To detect a braking event we simply observe if the speed is rapidly diminishing. While braking, the vehicle must maintain the same driving direction (within some tolerance); to enforce this, we use the circular dispersion of the acceleration samples taken during the braking event. Only samples that meet a maximum dispersion threshold are accepted, so we can be sure that the vehicle is indeed braking in a sufficiently straight line. We combine samples from a number of braking events for our final measurement $a_d$, with which we find the rotation matrix using the method outlined in step 1. Results By composing the rotation matrices from step 1 and step 2 we achieve the alignment of the DataHub and the vehicle. In Figure 3 we see the difference between the acceleration signals of the unaligned DataHub and one that is aligned properly. As one would expect, the x- and y-components of the acceleration (blue and orange) are nearly zero when the vehicle is stationary, between timesteps 750 and 1100. Figure 3: x, y, z acceleration in the coordinate system of the DataHub (left) and the same signals transformed to the vehicle coordinate system (right). The process of DataHub alignment is fully automated and currently runs on a selection of vehicles. In the near future we will roll out this new feature to all vehicles equipped with a DataHub, and the users will have access to the acceleration signals, useful for, e.g., brake tests of new buses. In the meantime, we continue to develop Smart Driving using the acceleration signals.
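As a footnote to the braking detector described above: the straight-line test can be sketched in the same spirit. Circular dispersion is $1 - \bar R$, with $\bar R$ the mean resultant length of the sampled in-plane acceleration directions; the 0.2 threshold below is a made-up placeholder, not ViriCiti's value:

import numpy as np

def is_straight_braking(ax, ay, max_dispersion=0.2):
    # heading of each in-plane acceleration sample
    theta = np.arctan2(ay, ax)
    # mean resultant length; close to 1 when all samples point the same way
    r_bar = np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))
    return (1.0 - r_bar) <= max_dispersion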
Learning Objectives Use long division to divide polynomials. Use synthetic division to divide polynomials. The exterior of the Lincoln Memorial in Washington, D.C., is a large rectangular solid with length 61.5 meters (m), width 40 m, and height 30 m. We can easily find the volume using elementary geometry. \[\begin{align*} V&=l \; {\cdot} \; w \; {\cdot} \; h \\ &=61.5 \; {\cdot} \; 40 \; {\cdot} \; 30 \\ &=73,800 \end{align*}\] So the volume is 73,800 cubic meters (\(m^3\)). Suppose we knew the volume, length, and width. We could divide to find the height. \[\begin{align*} h&=\dfrac{V}{l{\cdot}w} \\&=\dfrac{73,800}{61.5{\cdot}40} \\ &=30 \end{align*}\] As we can confirm from the dimensions above, the height is 30 m. We can use similar methods to find any of the missing dimensions. We can also use the same method if any or all of the measurements contain variable expressions. For example, suppose the volume of a rectangular solid is given by the polynomial \(3x^4−3x^3−33x^2+54x\). The length of the solid is given by \(3x\); the width is given by \(x−2\). To find the height of the solid, we can use polynomial division, which is the focus of this section. Using Long Division to Divide Polynomials We are familiar with the long division algorithm for ordinary arithmetic. We begin by dividing into the digits of the dividend that have the greatest place value. We divide, multiply, subtract, include the digit in the next place value position, and repeat. For example, let’s divide 178 by 3 using long division. Another way to look at the solution is as a sum of parts. This should look familiar, since it is the same method used to check division in elementary arithmetic. \[\begin{align*} \text{dividend}&=(\text{divisor}{\cdot}\text{quotient})+\text{remainder} \\ 178&=(3{\cdot}59)+1 \\ &=177+1 \\ &=178\end{align*}\] We call this the Division Algorithm and will discuss it more formally after looking at an example. Division of polynomials that contain more than one term has similarities to long division of whole numbers. We can write a polynomial dividend as the product of the divisor and the quotient added to the remainder. The terms of the polynomial division correspond to the digits (and place values) of the whole number division. This method allows us to divide two polynomials. For example, if we were to divide \(2x^3−3x^2+4x+5\) by \(x+2\) using the long division algorithm, it would look like this: We have found \[\dfrac{2x^3−3x^2+4x+5}{x+2}=2x^2−7x+18−\dfrac{31}{x+2}\] or \[2x^3−3x^2+4x+5=(x+2)(2x^2−7x+18)−31\] We can identify the dividend, the divisor, the quotient, and the remainder. Writing the result in this manner illustrates the Division Algorithm. The Division Algorithm The Division Algorithm states that, given a polynomial dividend \(f(x)\) and a non-zero polynomial divisor \(d(x)\) where the degree of \(d(x)\) is less than or equal to the degree of \(f(x)\), there exist unique polynomials \(q(x)\) and \(r(x)\) such that \[f(x)=d(x)q(x)+r(x)\] \(q(x)\) is the quotient and \(r(x)\) is the remainder. The remainder is either equal to zero or has degree strictly less than \(d(x)\). If \(r(x)=0\), then \(d(x)\) divides evenly into \(f(x)\). This means that, in this case, both \(d(x)\) and \(q(x)\) are factors of \(f(x)\). Set up the division problem. Determine the first term of the quotient by dividing the leading term of the dividend by the leading term of the divisor. Multiply the answer by the divisor and write it below the like terms of the dividend.
Subtract the bottom binomial from the top binomial. Bring down the next term of the dividend. Repeat steps 2–5 until reaching the last term of the dividend. If the remainder is non-zero, express it as a fraction using the divisor as the denominator. Example \(\PageIndex{1}\): Using Long Division to Divide a Second-Degree Polynomial Divide \(5x^2+3x−2\) by \(x+1\). Solution The quotient is \(5x−2\). The remainder is 0. We write the result as \[\dfrac{5x^2+3x−2}{x+1}=5x−2\] or \[5x^2+3x−2=(x+1)(5x−2)\] Analysis This division problem had a remainder of 0. This tells us that the dividend is divided evenly by the divisor, and that the divisor is a factor of the dividend. Example \(\PageIndex{2}\): Using Long Division to Divide a Third-Degree Polynomial Divide \(6x^3+11x^2−31x+15\) by \(3x−2\). Solution There is a remainder of 1. We can express the result as: \[\dfrac{6x^3+11x^2−31x+15}{3x−2}=2x^2+5x−7+\dfrac{1}{3x−2}\] Analysis We can check our work by using the Division Algorithm to rewrite the solution. Then multiply. \[(3x−2)(2x^2+5x−7)+1=6x^3+11x^2−31x+15\] Notice, as we write our result, the dividend is \(6x^3+11x^2−31x+15\) the divisor is \(3x−2\) the quotient is \(2x^2+5x−7\) the remainder is \(1\) \(\PageIndex{2}\) Divide \(16x^3−12x^2+20x−3\) by \(4x+5\). Solution \(4x^2−8x+15−\dfrac{78}{4x+5}\) Using Synthetic Division to Divide Polynomials As we’ve seen, long division of polynomials can involve many steps and be quite cumbersome. Synthetic division is a shorthand method of dividing polynomials for the special case of dividing by a linear factor whose leading coefficient is 1. To illustrate the process, recall the example at the beginning of the section. Divide \(2x^3−3x^2+4x+5\) by \(x+2\) using the long division algorithm. The final form of the process looked like this: There is a lot of repetition in the table. If we don’t write the variables but, instead, line up their coefficients in columns under the division sign and also eliminate the partial products, we already have a simpler version of the entire problem. Synthetic division carries this simplification even a few more steps. Collapse the table by moving each of the rows up to fill any vacant spots. Also, instead of dividing by 2, as we would in division of whole numbers, then multiplying and subtracting the middle product, we change the sign of the “divisor” to −2, multiply and add. The process starts by bringing down the leading coefficient. We then multiply it by the “divisor” and add, repeating this process column by column, until there are no entries left. The bottom row represents the coefficients of the quotient; the last entry of the bottom row is the remainder. In this case, the quotient is \(2x^2−7x+18\) and the remainder is −31. The process will be made clearer in Example \(\PageIndex{3}\). Synthetic Division Synthetic division is a shortcut that can be used when the divisor is a binomial in the form \(x−k\). In synthetic division, only the coefficients are used in the division process. Write \(k\) for the divisor. Write the coefficients of the dividend. Bring the lead coefficient down. Multiply the lead coefficient by \(k\). Write the product in the next column. Add the terms of the second column. Multiply the result by \(k\). Write the product in the next column. Repeat steps 5 and 6 for the remaining columns. Use the bottom numbers to write the quotient. 
The number in the last column is the remainder and has degree 0, the next number from the right has degree 1, the next number from the right has degree 2, and so on. Example \(\PageIndex{3}\): Using Synthetic Division to Divide a Second-Degree Polynomial Use synthetic division to divide \(5x^2−3x−36\) by \(x−3\). Solution Begin by setting up the synthetic division. Write \(k\) and the coefficients. Bring down the lead coefficient. Multiply the lead coefficient by \(k\). Continue by adding the numbers in the second column. Multiply the resulting number by \(k\). Write the result in the next column. Then add the numbers in the third column. The result is \(5x+12\). The remainder is 0. So \(x−3\) is a factor of the original polynomial. Analysis Just as with long division, we can check our work by multiplying the quotient by the divisor and adding the remainder. \[(x−3)(5x+12)+0=5x^2−3x−36\] Example \(\PageIndex{4}\): Using Synthetic Division to Divide a Third-Degree Polynomial Use synthetic division to divide \(4x^3+10x^2−6x−20\) by \(x+2\). Solution The binomial divisor is \(x+2\) so \(k=−2\). Add each column, multiply the result by −2, and repeat until the last column is reached. The result is \(4x^2+2x−10\). The remainder is 0. Thus, \(x+2\) is a factor of \(4x^3+10x^2−6x−20\). Analysis The graph of the polynomial function \(f(x)=4x^3+10x^2−6x−20\) in Figure \(\PageIndex{2}\) shows a zero at \(x=k=−2\). This confirms that \(x+2\) is a factor of \(4x^3+10x^2−6x−20\). Example \(\PageIndex{5}\): Using Synthetic Division to Divide a Fourth-Degree Polynomial Use synthetic division to divide \(−9x^4+10x^3+7x^2−6\) by \(x−1\). Solution Notice there is no x-term. We will use a zero as the coefficient for that term. The result is \(−9x^3+x^2+8x+8+\frac{2}{x−1}\). \(\PageIndex{5}\) Use synthetic division to divide \(3x^4+18x^3−3x+40\) by \(x+7\). Solution \(3x^3−3x^2+21x−150+\frac{1,090}{x+7}\) Using Polynomial Division to Solve Application Problems Polynomial division can be used to solve a variety of application problems involving expressions for area and volume. We looked at an application at the beginning of this section. Now we will solve that problem in the following example. Example \(\PageIndex{6}\): Using Polynomial Division in an Application Problem The volume of a rectangular solid is given by the polynomial \(3x^4−3x^3−33x^2+54x\). The length of the solid is given by \(3x\) and the width is given by \(x−2\). Find the height of the solid. Solution There are a few ways to approach this problem. We need to divide the expression for the volume of the solid by the expressions for the length and width. Let us create a sketch as in Figure \(\PageIndex{3}\). We can now write an equation by substituting the known values into the formula for the volume of a rectangular solid. \[\begin{align*} V&=l{\cdot}w{\cdot}h \\ 3x^4−3x^3−33x^2+54x&=3x{\cdot}(x−2){\cdot}h \end{align*}\] To solve for \(h\), first divide both sides by \(3x\). \[\dfrac{3x{\cdot}(x−2){\cdot}h}{3x}=\dfrac{3x^4−3x^3−33x^2+54x}{3x}\] \[(x-2)h=x^3-x^2-11x+18\] Now solve for \(h\) using synthetic division. \[h=\dfrac{x^3−x^2−11x+18}{x−2}\] The quotient is \(x^2+x−9\) and the remainder is 0. The height of the solid is \(x^2+x−9\). \(\PageIndex{6}\) The area of a rectangle is given by \(3x^3+14x^2−23x+6\). The width of the rectangle is given by \(x+6\). Find an expression for the length of the rectangle. 
Solution \(3x^2−4x+1\) Key Equations Division Algorithm \(f(x)=d(x)q(x)+r(x)\) where \(d(x){\neq}0\) Key Concepts Polynomial long division can be used to divide a polynomial by any polynomial with equal or lower degree. The Division Algorithm tells us that a polynomial dividend can be written as the product of the divisor and the quotient added to the remainder. Synthetic division is a shortcut that can be used to divide a polynomial by a binomial in the form \(x−k\). Polynomial division can be used to solve application problems, including area and volume. Footnotes 1 National Park Service. "Lincoln Memorial Building Statistics." http://www.nps.gov/linc/historycultu...statistics.htm. Accessed 4/3/2014 Glossary Division Algorithm given a polynomial dividend \(f(x)\) and a non-zero polynomial divisor \(d(x)\) where the degree of \(d(x)\) is less than or equal to the degree of \(f(x)\), there exist unique polynomials \(q(x)\) and \(r(x)\) such that \(f(x)=d(x)q(x)+r(x)\) where \(q(x)\) is the quotient and \(r(x)\) is the remainder. The remainder is either equal to zero or has degree strictly less than \(d(x)\). synthetic division a shortcut method that can be used to divide a polynomial by a binomial of the form \(x−k\)
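Synthetic division is mechanical enough to automate. Here is a short sketch in Python (ours, not part of the text) that mirrors the bring-down, multiply, and add steps described above:

def synthetic_division(coeffs, k):
    """Divide a polynomial by (x - k) using synthetic division.
    coeffs lists the dividend's coefficients from highest to lowest
    degree, with missing terms written as 0. Returns (quotient, remainder)."""
    out = [coeffs[0]]                  # bring the lead coefficient down
    for c in coeffs[1:]:
        out.append(c + k * out[-1])    # multiply by k, add down the column
    return out[:-1], out[-1]

# Example 3: (5x^2 - 3x - 36) / (x - 3)
print(synthetic_division([5, -3, -36], 3))        # ([5, 12], 0), i.e. 5x + 12
# Example 5: note the 0 coefficient standing in for the missing x-term
print(synthetic_division([-9, 10, 7, 0, -6], 1))  # ([-9, 1, 8, 8], 2)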
Answer: B Explanation: Required Average = \(\frac {67 \times 2 + 35 \times 2 + 6 \times 3}{2 + 2 + 3}\) = \(\frac {134 + 70 + 18}{7}\) = \(\frac {222}{7}\) = 31\(\frac {5}{7}\) years Q2. A grocer has sales of Euro 6435, Euro 6927, Euro 6855, Euro 7230 and Euro 6562 for 5 consecutive months. How much sale must he have in the sixth month so that he gets an average sale of Euro 6500? Answer: A Explanation: Total sale for 5 months = Euro (6435 + 6927 + 6855 + 7230 + 6562) = Euro 34009. Required sale = Euro [\((6500 \times 6) – 34009\)] = Euro (39000 – 34009) = Euro 4991. Q3. The average of 20 numbers is zero. Of them, at the most, how many may be greater than zero? Answer: D Explanation: Average of 20 numbers = 0, so the sum of the 20 numbers = \(0 \times 20 = 0\). It is quite possible that 19 of these numbers are positive; if their sum is a, then the \({20}^{th}\) number is (−a). Q4. The average weight of 8 persons increases by 2.5 kg when a new person comes in place of one of them weighing 65 kg. What might be the weight of the new person? Answer: C Explanation: Total weight increased = \((8 \times 2.5)\) kg = 20 kg. Weight of new person = (65 + 20) kg = 85 kg Q5. The captain of a cricket team of 11 members is 26 years old and the wicket-keeper is 3 years older. If the ages of these two are excluded, the average age of the remaining players is one year less than the average age of the whole team. What is the average age of the team? Answer: A Explanation: Let the average age of the whole team be x years. Then \(11x − (26 + 29) = 9(x − 1)\), so \(11x − 9x = 55 − 9 = 46\), \(2x = 46\), \(x = 23\). So, the average age of the team is 23 years. Answer: B Explanation: Sum of the present ages of husband, wife and child = \((27 \times 3 + 3 \times 3)\) years = 90 years. Sum of the present ages of wife and child = \((20 \times 2 + 5 \times 2)\) years = 50 years. Husband’s present age = (90 – 50) years = 40 years. Q2. A car owner buys petrol at Euro 7.50, Euro 8 and Euro 8.50 per liter for three successive years. What approximately is the average cost per liter of petrol if he spends Euro 4000 each year? Answer: A Explanation: Total quantity of petrol = \((\frac {4000}{7.50} + \frac {4000}{8} + \frac {4000}{8.50})\) litres = 4000 \((\frac {2}{15} + \frac {1}{8} + \frac {2}{17})\) litres = \((\frac {76700}{51})\) litres. Total amount spent = Euro \((3 \times 4000)\) = Euro 12000. Average cost = Euro \(\frac {12000 \times 51}{76700}\) = Euro 6120/767 = Euro 7.98 Q3. In Jessica’s opinion, her weight is greater than 65 kg but less than 72 kg. Her brother does not agree with Jessica and he thinks that Jessica’s weight is greater than 60 kg but less than 70 kg. Her mother’s view is that her weight cannot be greater than 68 kg. If all of them are correct in their estimation, what is the average of the different probable weights of Jessica? Answer: A Explanation: Let Jessica’s weight be X kg. According to Jessica, 65 < X < 72. According to Jessica’s brother, 60 < X < 70. According to Jessica’s mother, X <= 68. The integer values satisfying all the above conditions are 66, 67 and 68. Therefore, Required Average = \((\frac {66 + 67 + 68}{3}) = (\frac {201}{3})\) = 67 kg Q4. 3 years ago, the average of a family of 5 members was 17 years. A baby having been born, the average age of the family is the same today. 
The present age of the baby is: Answer: B Explanation: Total age of 5 members, 3 years ago = \((17 \times 5)\) years = 85 years. Total age of those 5 members now = \((85 + 3 \times 5)\) years = 100 years. Total age of 6 members now = \((17 \times 6)\) years = 102 years. Age of the baby = (102 – 100) years = 2 years Q5. Of the four numbers, the first is twice the second, the second is one-third of the third and the third is 5 times the fourth. The average of the numbers is 24.75. The largest of these numbers is: Answer: D Explanation: Let the fourth number be a. Then the third number = 5a, the second number = \(\frac{5a}{3}\) and the first = \(\frac{10a}{3}\). So \(a + 5a + \frac{5a}{3} + \frac{10a}{3} = 24.75 \times 4 = 99\), which gives \(11a = 99\), so a = 9. So, the numbers are 9, 45, 15 and 30, and the largest is 45. Answer: B Explanation: Let the ratio be k : 1. Then \(k \times 16.4 + 1 \times 15.4 = (k + 1) \times 15.8\), so (16.4 – 15.8)k = 15.8 – 15.4, giving k = \(\frac {0.4}{0.6} = \frac {2}{3}\) Q2. The average price of 10 books is Rs. 12 while the average price of 8 of these books is Rs. 11.75. Of the remaining two books, if the price of one book is 60 % more than the price of the other, what is the price of each of these two books? Answer: A Explanation: Total price of the two books = \(Rs. [(12 \times 10) – (11.75 \times 8)]\) = Rs. (120 – 94) = Rs. 26. Let the price of one book be Rs. x. Then, the price of the other book = Rs. (x + 60 percent of x) = \(x + \frac {3}{5} x = \frac {8}{5} x\). So \(x + \frac {8}{5} x = \frac {13}{5} x = 26\), giving x = 10. The prices of the two books are Rs. 10 and Rs. 16 Q3. The average age of 30 boys in a class is equal to 14 years. When the age of the class teacher is included the average becomes 15 years. Find the age of the class teacher? Answer: C Explanation: Total ages of 30 boys = \(14 \times 30\) = 420 years. Total age when the class teacher is included = \(15 \times 31\) = 465 years. Age of class teacher = 465 – 420 = 45 years Q4. The average of marks obtained by 120 candidates in a certain examination is 35. If the average marks of passed candidates are 39 and that of failed candidates is 15, what is the number of candidates who passed the examination? Answer: B Explanation: Let the number of passed candidates be a. Then total marks \(\Rightarrow 120 \times 35 = 39a + (120 – a) \times 15\), so 4200 = 39a + 1800 – 15a, 24a = 2400, a = 100 Q5. The average of 11 results is 50. If the average of the first 6 results is 49 and that of the last 6 is 52, find the sixth result? 
Answer: B Explanation: The total of 11 results = \(11 \times 50\) = 550 The total of first 6 results = \(6 \times 49\) = 294 The total of last 6 results = \(6 \times 52\) = 312 The sixth result is common to both: Sixth result = 294 + 312 – 550 = 56
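The overlap trick in Q5 (the sixth result is the only one counted in both six-result sums) is easy to verify numerically. A quick Python sketch, ours:

total_11 = 11 * 50   # 550
first_6 = 6 * 49     # 294
last_6 = 6 * 52      # 312
# the sixth result appears in both partial sums, so it equals the excess
print(first_6 + last_6 - total_11)   # 56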
Posted by Harish Cherukuri on 15 Oct 2008 01:06, last edited by Helmuti_pdorf on 16 Oct 2008 09:13 A brief tutorial on using math symbols and equations in wikidot In the following writeup, I explain the wiki syntax for entering mathematical symbols and equations. A rudimentary knowledge of typesetting math with $\LaTeX$ will make the writeup much easier to follow. Inline Equations In $\LaTeX$, in-line math commands begin and end with $. In wikidot, one enters inline math by enclosing the $\LaTeX$ commands, including the $ symbols, with [[ and ]]. So, for example, if you want the in-line equation $\alpha + \beta = \gamma$, you would type the following: [[$ \alpha + \beta = \gamma$]]. Note that in $\LaTeX$, you would type just $ \alpha + \beta = \gamma$ without the double square brackets. Alternatively, you can click on the in-line equation button in the wikidot editor to enter inline equations. When you click on this button, the editor will insert the string [[$ insert LaTeX equation here $]] at the current position of the cursor. All you then have to do is replace "insert LaTeX equation here" with $\LaTeX$ commands. Display Math (Equations) Single Line Equations In $\LaTeX$, one can use the equation environment to display single-line equations. For example, \begin{equation} S_{ij,j} + \rho b_i = 0\end{equation} would display $$S_{ij,j} + \rho b_i = 0 \qquad (1)$$ Note the equation number displayed on the right. If you want to reference this number anywhere in the text, you can use the \label tag. In $\LaTeX$, you would include this in the equation environment as follows: \begin{equation}\label{equil} S_{ij,j} + \rho b_i = 0\end{equation} which would appear as $$S_{ij,j} + \rho b_i = 0 \qquad (2)$$ The equation number can now be referenced with \ref{equil} or, even better, \eqref{equil}. Note that the text (tag) used to label equations (here, "equil") can be anything; of course, you would want to use a different tag for each equation. So, suppose that we would like to reference the equation number in a follow-up text: Note that, since $i$ is a free index, Equation \eqref{equil} represents three equations (equilibrium equations). which would appear as "Note that, since $i$ is a free index, equation (2) represents three equations (equilibrium equations)." In Wikidot, typing equations is made considerably simpler. The simplest way is to click on the "square root of x" button in the edit menu. This action inserts the necessary tags, and all that you need to be concerned about is the $\LaTeX$ syntax for the actual equations, i.e., the stuff between the \begin{equation} and \end{equation} environment tags. Let's look at an example. In wikidot math syntax, equation (2) can be entered by first clicking on the mathematical expression button (the "square root of x" button), which inserts the following code: [[math]]insert LaTeX equation here[[/math]] We then replace the "insert LaTeX equation here" by our equation: [[math equil]] S_{ij,j} + \rho b_i = 0[[/math]] Note that I also added the optional "equil" label to the [[math]] block tag. The label can then be used to reference equation numbers. Again, wikidot provides an easy way to reference equations by providing the "Eq. (n)" button in its edit menu. Upon clicking on this button, you will be given a list of all the labels present in the current document and you can choose the one that you want to autoreference. Multiline Equations If you have equations that span multiple lines or if you want to have numbered multiline equations, you can use the align environment or the split construction in $\LaTeX$. 
\begin{equation} \begin{split} \mathbf{T n} &= \left[T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j \right] n_k \mathbf{e}_k \\ & = T_{ij} n_k \left(\mathbf{e}_i \otimes \mathbf{e}_j\right) \mathbf{e}_k \\ & = T_{ij} n_j \mathbf{e}_i \end{split}\end{equation} which would be displayed as $$\begin{split} \mathbf{T n} &= \left[T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j \right] n_k \mathbf{e}_k \\ & = T_{ij} n_k \left(\mathbf{e}_i \otimes \mathbf{e}_j\right) \mathbf{e}_k \\ & = T_{ij} n_j \mathbf{e}_i \end{split} \qquad (3)$$ The \\ at the end of each line represents a line break for equations. The & before the = sign in each line says that all the lines should be aligned with the = sign as a reference. In Wikidot, you would simply type the following: [[math]] \begin{split} \mathbf{T n} &= \left[T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j \right] n_k \mathbf{e}_k \\ & = T_{ij} n_k \left(\mathbf{e}_i \otimes \mathbf{e}_j\right) \mathbf{e}_k \\ & = T_{ij} n_j \mathbf{e}_i \end{split}[[/math]] Arrays If you want matrices to be displayed, you can use the $\LaTeX$ array environment. For example, to get $$\left[ \begin{array}{ccc} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{array}\right] \qquad (4)$$ one would type [[math]]\left[ \begin{array}{ccc} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{array}\right][[/math]] Let's dissect the above code. \left[ and \right] produce the large square brackets around the matrix. The matrix contents are output using the array environment. Each matrix component is separated by & and each row is separated by \\. Each letter in {ccc} represents a column; c means that all the entries in that column are centered. If you want left or right alignment for any of the columns, you would replace the corresponding c by l or r. The number of letters in the braces immediately following \begin{array} gives the number of columns. Similarly, if you want the following equation $$\left\{ \begin{array}{c} t_1 \\ t_2 \\ t_3 \end{array} \right\} = \left[ \begin{array}{ccc} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{array} \right] \left\{ \begin{array}{c} n_1 \\ n_2 \\ n_3 \end{array} \right\} \qquad (5)$$ you would use the following code: [[math]] \left\{ \begin{array}{c} t_1 \\ t_2 \\ t_3 \end{array} \right\} = \left[ \begin{array}{ccc} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{array} \right] \left\{ \begin{array}{c} n_1 \\ n_2 \\ n_3 \end{array} \right\}[[/math]] Note that in the above, to get curly braces, we used \left\{ and \right\} instead of \left{ and \right}. The reason for this is that { and } have special meaning in $\LaTeX$ and therefore, to typeset the braces, you have to use \{ and \}. This is what allows the special meaning to be "escaped".
Alright, this is my first answer on Stack Exchange, so let me know if I've made any grave mistakes in my analysis. These numbers are definitely back-of-the-envelope calculations, but they do give a good picture of what's going on in your ship. TL;DR Yes, it will rotate the ship, but it's nothing that the ship's thrusters can't compensate for if your ship is designed to even move at all. In the context of your story, I wouldn't worry about it. Long answer: What we need to find is the maximum impulse your ship can impart onto the projectile without exceeding the maximum corrective torque that can be produced by your thrusters. This is because any torque put on the ship is going to cause a change in trajectory, even if a basically infinitesimal one. We'll set up an equilibrium between the torque produced by the impulse of the railgun and the torque provided by the thrusters. I'm also going to assume that your thrusters are mounted at the edges of your ship, but you can change the variables to match whatever configuration you want. Just to get an idea of the magnitude of the forces you're talking about, using the definition of torque $\tau = I\alpha$ ($I$ is your moment of inertia), the angular acceleration caused by your railgun is going to be 3.2 x 10^-11 times the reaction force of the railgun, based on the dimensions and mass of your ship. Here's the calculation: $$\tau = I\alpha$$$$I = \frac{m(a^2 + b^2)}{12}$$ assuming that your ship is a rectangular plate (thickness doesn't affect the distribution of mass [and moment of inertia] in this calculation), so $$Fr = \frac{m(a^2 + b^2)}{12}\alpha$$$$F = \frac{m(a^2 + b^2)}{12r}\alpha$$$$F = \frac{(3.18 * 10^7 kg)((60m)^2 + (300m)^2)}{12(8m)}\alpha$$$$F = 3.1 * 10^{10}\alpha$$$$\alpha = 3.2 * 10^{-11}F$$ (Note: this constant of proportionality does have units, but they're not important if we're just using this as a ratio between force applied and angular acceleration.) A factor of 3.2 x 10^-11 is really small. To put a 0.1 degree per sec^2 angular acceleration on your ship (a pretty minor acceleration), you need 5.31 x 10^7 Newtons of force. I think the highest-thrust rocket engine we've built, the F-1, produces 6.6 x 10^6 Newtons of thrust. You'd need the equivalent of 8 of those just to get your 0.1 degrees per sec^2 angular acceleration. And that's just the force the railgun would have to make. Let's say your railgun has a muzzle velocity of 5 km/s (over Mach 14!) and accelerates your projectiles over the whole 300m length of your ship. From basic kinematic equations, this means your projectile takes 0.12s to get from the back to the front of your ship when starting at rest. The projectile's acceleration is 41,700 m/s^2, and to find the reaction force due to this, just multiply the mass times the acceleration. We'll assume 1000kg (and yes, a 1000kg metal rod going at Mach 14 will do a lot of damage). This gives a force of 4.17x10^7 Newtons, which is pretty close to the force required to produce a 0.1 degree per sec^2 angular acceleration. Now, the torque produced by your corrective engines will have to exactly match the torque produced by your railgun to keep the ship from gaining angular velocity. This is where the width of your ship comes in: the radius from the center of mass to the edge of your ship is 30m, and your railgun is at 8m. The ratio between the two is 3.75. This means that your engines can be 3.75 times as weak (or 0.26 times as strong) as your railgun. 
In this case, the engines will have to put out 1.11 x 10^7 Newtons of thrust, or about two F-1 engines at full power. On a ship your size, this isn't unreasonable. Anyways, all that calculation was just to show you that although the railgun puts out a (much more than literal) ton of reaction force, the ship's thrusters have to be at least one or two orders of magnitude stronger than this just to get the ship moving at any reasonable speed. Just to get your massive 35,000-ton ship accelerating at 10 m/s^2, you need 3.18x10^8 Newtons of thrust, so correcting for railgun blasts should be well within the tolerance of what your maneuvering engines can provide. I hope this helps.
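The torque balance above is easy to reproduce. A minimal sketch in Python (ours; the ship dimensions and railgun figures are the ones assumed in the answer):

import math

m_ship = 3.18e7                  # ship mass, kg
a, b = 60.0, 300.0               # plate dimensions, m
r_gun, r_thruster = 8.0, 30.0    # lever arms, m

I = m_ship * (a**2 + b**2) / 12  # rectangular-plate moment of inertia

v_muzzle, barrel = 5000.0, 300.0          # m/s, m
t_launch = 2 * barrel / v_muzzle          # 0.12 s from rest, constant accel
F_recoil = 1000.0 * v_muzzle / t_launch   # 1000 kg projectile -> ~4.17e7 N

alpha = F_recoil * r_gun / I                   # angular acceleration, rad/s^2
F_corrective = F_recoil * r_gun / r_thruster   # thrust canceling the torque
print(math.degrees(alpha), F_corrective)       # ~0.08 deg/s^2, ~1.11e7 N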
EDIT: The answer now applies to arbitrary topologies, using an idea by Pietro Majer from the comments. Proposition: There are no topologies $\tau_0,\tau_1$ on $\mathbb R$ such that $f\colon\mathbb R\to\mathbb R$ is uniformly continuous in the Euclidean metric iff $f\colon(\mathbb R,\tau_0)\to(\mathbb R,\tau_1)$ is continuous. Proof: $\tau_1$ cannot be indiscrete (lest every function be uniformly continuous), hence we can fix a $\tau_1$-closed set $F$ and points $a\in F$, $b\notin F$. For every Euclidean closed set $A$ and $c>0$, let $f_c(x)=a+c\operatorname{dist}(x,A)$. Then $f_c$ is uniformly continuous, hence continuous from $(\mathbb R,\tau_0)$ to $(\mathbb R,\tau_1)$, hence the $\tau_0$-closed set $f_c^{-1}(F)$ includes $A$ and excludes all points at Euclidean distance $(b-a)/c$ from $A$. The intersection of such sets over all $c$ is just $A$. This shows that $A$ is $\tau_0$-closed, i.e., $\tau_0$ refines the Euclidean topology. Let $f\colon\mathbb R\to\mathbb R$ be a Euclidean-continuous but not uniformly continuous function, such as $f(x)=x^2$. For every $n>0$, $f_n=f\restriction[-n,n]$ can be extended to a uniformly continuous function on $\mathbb R$. By assumption, this function is continuous from $(\mathbb R,\tau_0)$ to $(\mathbb R,\tau_1)$, hence $f_n$ is continuous from $([-n,n],\tau_0)$ to $(\mathbb R,\tau_1)$. Since $\tau_0$ refines the Euclidean topology, every point has a $\tau_0$-open neighbourhood included in some $[-n,n]$, thus $f=\bigcup_nf_n$ is continuous from $(\mathbb R,\tau_0)$ to $(\mathbb R,\tau_1)$. However, it is not uniformly continuous in the Euclidean metric, a contradiction.
I think that the question is sufficiently precise if we think of a realistic meaning of the word “inconsistent”. Even nowadays, for non-logicians the adjective “inconsistent” doesn't really mean “containing a contradiction” (this is only the obvious meaning given by modern Mathematical Logic), but rather it means not acceptable by a large or important part of the scientific community. Even nowadays, some of our works in some parts of modern Mathematics are not accepted as sufficiently rigorous by other parts. These works are hence perceived only as insufficiently precise “ways of arguing”. Therefore, these “foreign argumentations” are perceived as potentially inconsistent, and need a different reformulation to be accepted. I know of relationships of this type between some parts of Geometry and Analysis, to mention only an example. It is the same problem occurring in the relationships between (some parts of) Physics and Mathematics, because these two disciplines are really completely different “games”: in Physics the most important achievement is the existence of a dialectic between formulas and a part of nature, even if the related Mathematics lacks formal clarity and is hence not accepted by several mathematicians. Analogously, early calculus was consistent inasmuch as the community accepted these “ways of arguing” and discovered statements which could be verified as true by a dialogue with other parts of knowledge: Physics and geometrical intuition in primis. Since in the early calculus the formal component (in the modern sense of manipulation of symbols, without a reference to intuition) was surely weak, the dialectic between proofs and intuition was surely stronger (I mean statistically, in the distribution of 17th century mathematicians). In my opinion, this is the reason for the discovery of true statements, even if the related proofs are perceived as “weak” nowadays. Once the great triumvirate Cantor, Dedekind, and Weierstrass decided that it was time to make a step further, the notion of “inconsistent” changed for this important part of the community and hence, sooner or later, for all the others. Also from the point of view of rules of inference, the consistency of early calculus has to be understood in the sense of a dialectic between different parts of knowledge and acceptance by the related scientific community. Therefore, in this sense, in my opinion early calculus is as consistent as our (and the future) calculus. I agree with Joel that “we are not in a qualitatively different situation”: probably in the near future all proofs will be computer assisted, in the sense that all the missing steps will be checked by a computer (whose software will be verified, once again, by a large part of the community) and we will only need to provide the main steps. Necessarily, articles will change in nature and, I hope, they will be more focused on those ideas and intuitions thanks to which we were able to create the results we are presenting. Therefore, young students in the future will probably read our papers with disgust, saying: “how were they able to understand how all these results were created? These papers seem like phone books: def, lem, thm, cor, def, lem, thm, cor... without any explanation of discovery rules and several missing formal steps!”. Finally, I think that only formally, but not conceptually, this early calculus may look similar to NSA or SDG. 
In my opinion, one of the main reasons for the lack of diffusion of NSA is that its techniques are perceived as “voodoo” by all those modern mathematicians (the majority) who ground their work in the dialogue between formal mathematics and informal intuition. Too frequently, the lack of intuition is too strong in both theories. For example, for a person like Cauchy, what is the intuitive meaning of the standard part of the sine of an infinite number (NSA)? For people like Bernoulli, what is the intuitive meaning of properties like $x\le0$ and $x\ge0$ for every infinitesimal, and $\neg\neg\exists h$ such that $h$ is infinitesimal (but not necessarily there exists an infinitesimal; SDG)? Moreover, as soon as discontinuous functions appeared in the calculus, the natural reaction of almost every working mathematician (of the 17th century and of nowadays) looking at the microaffinity axiom is not to change Logic by switching to the intuitionistic one, but to change this axiom by inserting a restriction on the quantifier “for every $f:R\longrightarrow R$”. The apparently inconsistent argumentation of setting $h\ne0$ and finally $h=0$ can be faithfully formalized using classical calculus rather than these theories of infinitesimals. We can say that $f:R\longrightarrow R$ (here $R$ is the usual Archimedean real field) is differentiable at $x$ if there exists a function $r:R\times R\longrightarrow R$ such that $f(x+h)=f(x)+h\cdot r(x,h)$ and such that $r$ is continuous at $h=0$. It is easy to prove that this function $r$ is unique. Therefore, we can assume $h\ne0$, we can calculate freely to discover the unique form of the function $r(x,h)$ for $h\ne0$ and, in the final formula, set $h=0$, because $r$ is clearly continuous for all the examples of functions of the early calculus. This is called the Fermat-Reyes method, and it works also for generalized functions like Schwartz distributions (and hence for an isomorphic copy of the space of all the continuous functions). Moreover, in my opinion, both Cauchy and Bernoulli would have perfectly understood this method and the related intuition. On the contrary, they would not have been able to understand all the intuitive inconsistencies they could easily find both in NSA and SDG.
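To make the Fermat-Reyes method concrete, here is the classical first example worked out (our illustration, not part of the original answer). Take $f(x)=x^2$: $$f(x+h)=(x+h)^2=x^2+h\,(2x+h),\qquad\text{so } r(x,h)=2x+h \text{ for } h\ne 0.$$ Since $r$ is visibly continuous at $h=0$, the derivative is $f'(x)=r(x,0)=2x$. One computes freely with $h\ne0$ and only at the very end sets $h=0$, inside the continuous function $r$: exactly the early-calculus gesture, now made rigorous.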
In this post we would like to give a possible new approach to the second part of Hilbert's 16th problem. First we give a short introduction: A quadratic system is a polynomial vector field on the plane of the form $$(X_{\alpha})\;\;\;\;\;\;\;\;\;\;\begin{cases}\dot x=P_{\alpha}(x,y)\\ \dot y=Q_{\alpha}(x,y) \end{cases}$$ where $P_{\alpha},Q_{\alpha}$ are polynomials of degree $2$, parametrized by a $10$-tuple $\alpha$ of coefficients. A center is a singularity of this vector field which is surrounded by a band of closed orbits. All quadratic vector fields with a center are classified: they correspond to a finite number of algebraic conditions on $\alpha$, see “Integrability of plane quadratic vector fields”, Expos. Math. (1990), 3-25. We denote these algebraic conditions by $Cent(\alpha)=0$. Question: Is there a family of (polynomial) Riemannian metrics $g_{\alpha}$ on the punctured plane (after removing singularities), with Gaussian curvature $\kappa_{\alpha}$, with the following properties: the solutions of $X_{\alpha}$, after a possible reparametrization, are geodesics of $g_{\alpha}$; $\kappa_{\alpha}$ is not zero except on a finite number of algebraic curves transverse to $X_{\alpha}$; and $\kappa_{\alpha}$ is identically zero if $Cent(\alpha)=0$? The question, in particular its last part, is well behaved and consistent, since a quadratic system cannot have a center and a limit cycle simultaneously. If the answer were yes, then $H(2)$, the maximum number of limit cycles of a quadratic system, would be finite. This question is already discussed in the comment-conversation of the following post:
I am implementing a finite difference method for solving the diffusive-advective equation: $$u_t + v \cdot u_x = D\cdot u_{xx}$$ ($v$, $D$ are constants). Planning to use the operator splitting method (see below), I am now focusing on the advective part. I chose the staggered leapfrog method (from the Numerical Recipes book) $$ u^{n+1}_j = u^{n-1}_j - \frac{v\cdot\Delta t }{\Delta x}(u^n_{j+1} - u^n_{j-1}) $$ because it should be Courant-stable in the case of conserved flux; this turns out to be the case, since it works perfectly using periodic conditions. The problems show up when trying to change the boundaries. My question, in brief, is: how does one explicitly implement these conditions in such a method? Below, I am going to provide more details on my failed attempts. No-flux conditions Let us start from a more general case: as is pointed out here for the diffusion-advection equation, Robin conditions are required to achieve closed boundaries $$ v\cdot u - Du_x = 0 $$ I tried to discretize them: $$ v\cdot u_j - \frac{D}{\Delta x} (u_j - u_{j-1}) = 0 $$ However, when D=0 the only condition remaining is that $$ v\cdot u = 0 $$ and it is unclear to me how to interpret this: imposing u=0 would result in the (failing) strategy for absorbing conditions (see below), while v is constant (and I don't understand where I could impose that it equal zero). Absorbing conditions The leapfrog method is meant for flux-conserving situations. In fact, imposing u(j=0,n) = 0 u(jmax,n) = 0 creates artificial instabilities. The leapfrog update has three parts: one for the previous (two steps before) value at the point, one increasing this value proportionally to the uphill point, the other reducing it proportionally to the downhill one. This is also what one would expect from a physical point of view: some quantity enters, some goes out. The reason why the simple imposition of null boundaries fails is now evident: if the downhill point is null, there is no reducing term, so the point before the last explodes; the point before it then has a big reducing term, so it goes to zero; the point before that has a small reducing term, and so on, resulting in an alternation of increasing and decreasing terms (see figure 1). Operator splitting As explained in the Numerical Recipes book (cited above), an equation like $$ u_t = \mathcal{L} u $$ where a decomposition into linear operators $$ \mathcal{L}=\sum^m_{i}\mathcal{L}_i $$ is possible can be solved by applying, for each step, the sequence $$ \mathcal{U}_1(\mathcal{U}_2(\dots\mathcal{U}_m(u^n)\dots)) = u^{n+1} $$ provided that each operator $\mathcal{U}_i$ solves for a term in the sum and is stable. Even if I know alternatives are possible which treat the diffusion-advection equation as a whole, before digging into them I would like to resolve this doubt of mine.
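For reference, here is a minimal sketch (ours) of the advective step with the periodic boundaries that do work, in Python/NumPy; the startup step and parameter names are illustrative:

import numpy as np

def advect_leapfrog(u0, v, dx, dt, nsteps):
    """Staggered leapfrog for u_t + v u_x = 0 with periodic boundaries.
    Stable for |v dt / dx| <= 1. Leapfrog needs two time levels, so the
    very first step is a one-off centered Euler startup step."""
    C = v * dt / dx                          # Courant number
    u_prev = u0.copy()
    u = u0 - 0.5 * C * (np.roll(u0, -1) - np.roll(u0, 1))   # startup
    for _ in range(nsteps - 1):
        # np.roll implements the periodic wrap-around of u_{j+1}, u_{j-1}
        u_next = u_prev - C * (np.roll(u, -1) - np.roll(u, 1))
        u_prev, u = u, u_next
    return u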
When Is Triangle Equilateral: Miguel Ochoa Sanchez's Criterion Problem Let $P\;$ be a point on the incircle of $\Delta ABC.\;$ Let $x,y,z\;$ be the distances of $P\;$ to the vertices $A,B,C.\;$ Prove that if $\displaystyle \frac{x^2}{bc}+\frac{y^2}{ca}+\frac{z^2}{ab}=\frac{5}{4}$ then the triangle is equilateral. Proof First observe that the identity in the problem is equivalent to $\displaystyle ax^2+by^2+cz^2=\frac{5}{4}abc.\;$ Now, using barycentric coordinates, the circle with center $M(u,v,w),\;$ $u+v+w=1,\;$ and radius $r,\;$ has the equation $ux^2+vy^2+wz^2 = r^2+a^2vw+b^2wu+c^2uv.$ The incircle is the circle centered at the incenter, with $\displaystyle u=\frac{a}{a+b+c},\;$ $\displaystyle v=\frac{b}{a+b+c},\;$ $\displaystyle w=\frac{c}{a+b+c},\;$ and radius the inradius $r.\;$ With this in mind, the incircle is described by $\displaystyle \begin{align}\frac{1}{a+b+c}\left(ax^2+by^2+cz^2\right)&=r^2+\frac{1}{(a+b+c)^2}(a^2bc+ab^2c+abc^2)\\ &=r^2+\frac{abc}{a+b+c}. \end{align}$ Thus the problem reduces to showing that $\displaystyle r^2(a+b+c)+abc=\frac{5}{4}abc,$ which is the same as $4r^2(a+b+c)=abc.$ However, $abc=4RS,\;$ where $R\;$ is the circumradius and $S\;$ the area of $\Delta ABC.\;$ If $s\;$ is the semiperimeter of the triangle then $rs=S\;$ and $4r^2(a+b+c)=8r^2s=8rS,\;$ meaning that the identity is equivalent to $2r=R.\;$ By Euler's inequality, $R\ge 2r,\;$ with equality only for the equilateral triangle, which completes the proof. Acknowledgment The problem, due to Miguel Ochoa Sanchez, has been kindly posted by Leo Giugiuc at the CutTheKnotMath facebook page. Alexander Bogomolny
The Multi-Armed Platform Problem Or why pizzas and loans create the same conundrum Introduction Our life as investors (or investment advisors) would be much, much simpler if we had no choice in the kind of assets we buy (but probably no ‘raison d’être’). In reality, choices are plenty, if only because there are so many people eager to attract money. Some of these options are good, some others not so much. The question is, once we’ve tried investing in an origination platform and it generates good returns, should we keep pouring money into it, or should we still look for an even better opportunity? Faced with multiple choices, one has to find an equilibrium between ‘exploit’ and ‘explore’. Let’s say you found a nice little Italian restaurant in your town that makes the best Neapolitan-style pizza you’ve ever tasted. Should you keep going there (exploit) or keep trying other places as well (explore)? It’s likely that the next place you’ll try won’t be as good. But on the other hand, there may be an even better place somewhere, and you’ll never know if you keep eating the same pizzas, again and again. Systematic Exploring One simple, systematic method is to alternate exploitation and exploration. For instance, one time out of two, you go back to the usual pizza place. The other time, you try something new. If a new place becomes your favorite, you now go there half the time, and put your old favorite back in the pool of random places to try in the future. This algorithm is an example of an ‘epsilon-greedy’ strategy, ‘epsilon’ being when you try something new, and ‘greedy’ when you go back to the place providing the highest rewards so far. This can be formalized: the probability \( p_i(k+1)\) of picking choice i amongst n after k trials that gave an average return \(u_j(k) \) for choice j is: \[ p_i(k+1) = \left\{ \begin{array}{ll} 1 - \epsilon + \frac{\epsilon}{n} & \quad \text{if }i = \text{argmax}_j u_j(k) \\ \frac{\epsilon}{n} & \quad \text{otherwise} \end{array} \right. \] In our example of trying a new restaurant one time out of two, epsilon equals 0.5. This algorithm is both quite simple and shockingly efficient. One drawback, though, is that it is extremely sensitive to the value of epsilon. How often to explore versus exploit has a large impact on results and may depend upon the number of options and the duration of the experiment. At one extreme, imagine there are only two restaurants in your town: if epsilon is high, you may end up going to your least preferred restaurant quite often! At the other end, if there are 24,000 restaurants, chances are pretty high that there’s a better restaurant somewhere, and an epsilon too low means that you’re going to your ‘usual’ way too often 1. It also makes sense that there should be less and less exploration as time progresses. After all, if the restaurants provide a constant experience, there is no point in exploring anymore once you’ve tried them all. But even in a less extreme scenario, one may consider the results to become more trustworthy over time. If the Italian place was the first restaurant you ever tried in town, the fact that you like it doesn’t mean much. Inversely, if after 20 years and hundreds of different places, it’s still your favorite, then maybe it’s not worth exploring other options anymore 2. Second, with less time left until the end of the experiment, it becomes more rational to prioritize exploitation over exploration. 
Would you choose a restaurant at random for the very last dinner out in a city you lived in for 10 years, or would you rather go to your old-time favorite one last time 3? Reducing the frequency of ‘exploration’ versus ‘exploitation’ over time is called an epsilon-decreasing strategy. The Secretary Problem A similar issue is tackled by the so-called ‘Secretary Problem’. Imagine you’re hiring a secretary. You’re interviewing multiple candidates. Considering that you can hire a candidate right after the interview or must pass on that one forever, when should you hire the best candidate you found so far? This requires determining a stopping rule, such as ‘After passing on the first x-1 candidates, I’ll hire the next candidate who is the best so far’. For instance, with 3 candidates, if we decide to hire the first one, the probability of having selected the best is 1/3. Similarly, if we decide to wait for candidate 3 to make a hiring decision, we don’t have much choice left, so the probability to hire the best one is simply the probability that the best one was the third, or 1/3 again. But if we decide to pass on the first one, then hire #2 if that candidate is better than #1, or hire #3 if #2 is not better than #1, then our probability of having selected the best candidate is 3/6 (3 permutations out of 6). Another way to look at this: there are 3 cases out of 6 in which we do NOT hire the best candidate: if the best was the first one (2 permutations, #1 > #2 > #3 and #1 > #3 > #2), so we didn’t hire fast enough, or if candidate #2 is better than #1 but still not as good as #3 (1 permutation, #3 > #2 > #1), so we hired too early. Interestingly, it shows that when interviewing at least 3 candidates, you should ALWAYS pass on the first one, however amazing that candidate is! Generalizing this, we find that the probability of choosing the best candidate amongst n by passing on the first k-1 is: \[ P(n,k) = \frac{k-1}{n} \sum_{i=k}^n \frac{1}{i-1} \] As n tends to infinity, we can compute the optimum cut-off. The optimal strategy is to reject out of hand the first 37% of the candidates, then select the first one who is the best candidate so far. Elegantly, the probability to find the best candidate this way is also equal to 37%. Incidentally, you may want to remember that method and percentage next time you’re looking for an apartment, a job, or even the love of your life… You shall systematically pass on the first 37% of your options, however great they are! The Secretary Problem relies on two conditions that are not applicable to origination platforms, or even pizza restaurants. First, that you can immediately assess quality. Maybe the pizza you ordered wasn’t the best choice. The first loan you invested in on a platform defaulting does not mean all its loans should default. The second assumption is that quality remains constant. But restaurants change, and so do credit risk assessments. Another way to look at it is that the reward is somewhat random. We can observe some rewards, but we still don’t know exactly what the underlying distribution functions are. Which relates to another well-known mathematical problem… The Multi-Armed Bandit Problem Imagine a row of several slot machines. When played, each machine provides a random reward, based on secret probabilities specific to that machine. The multi-armed bandit problem aims to determine which machines a gambler should play, and how many times, in order to maximize his rewards. 
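To make the setting concrete, here is a minimal sketch (ours) of a Bernoulli bandit together with the epsilon-greedy strategy formalized earlier; the payout probabilities and parameters are made up:

import random

class Bandit:
    """A slot machine paying 1 with a hidden probability p."""
    def __init__(self, p):
        self.p = p
    def play(self):
        return 1.0 if random.random() < self.p else 0.0

def epsilon_greedy(bandits, epsilon=0.1, plays=10000):
    n = [0] * len(bandits)      # times each arm was played
    u = [0.0] * len(bandits)    # average reward observed per arm
    for _ in range(plays):
        if random.random() < epsilon:    # explore: any arm at random
            i = random.randrange(len(bandits))
        else:                            # exploit: best arm so far
            i = max(range(len(bandits)), key=lambda j: u[j])
        r = bandits[i].play()
        n[i] += 1
        u[i] += (r - u[i]) / n[i]        # incremental mean update
    return u, n

print(epsilon_greedy([Bandit(0.4), Bandit(0.55), Bandit(0.5)]))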
This problem has been studied since World War II, because its applications go way beyond gambling, such as clinical trials or resource allocation. While it was originally thought to be intractable (it was even suggested to entice the Germans to work on it as a way of wasting their time), an optimal strategy was published in 1979, bearing the name of ‘Gittins index’. Interestingly, some non-optimal solutions turn out to be both easier to apply, and able to generate higher returns in real-life conditions. Such a solution is called ‘Boltzmann Exploration’, or the ’Softmax method’. To show off at your next dinner party (or scare people away), you could mention that the Softmax function is a “gradient-log-normalizer of a categorical probability distribution”. Given average returns u observed for each machine, the probability to play machine i after k plays is: \[ p_i(k+1) = \frac{e^\frac{u_i(k)}{T}}{\sum_{j=1}^n e^\frac{u_j(k)}{T}}, i = 1…n\] Where T is a ‘temperature’ parameter that controls the randomness of the choice. As T goes to 0, the algorithm becomes purely greedy. As T goes to infinity, the algorithm chooses uniformly at random. The interesting aspect, compared to the epsilon-greedy strategy, is that exploration also depends upon the returns observed so far amongst non-optimal choices. It makes you much less likely to give a second chance to a restaurant where the experience was abysmal. Another option is to start by giving the same probability of play to all arms. Then, progressively increase the probability of playing the arm with the highest average gain. Such an algorithm is called a ‘Pursuit’ algorithm. The probability of playing machine i after k plays is: \[ p_i(k+1) = \left\{ \begin{array}{ll} p_i(k) + \beta \big(1 - p_i(k) \big) & \quad \text{if }i = \text{argmax}_j u_j(k) \\ p_i(k) + \beta \big(0 - p_i(k) \big) & \quad \text{otherwise} \end{array} \right. \] Where \( \beta \) is the learning rate, between 0 and 1. Both algorithms described above make sense, intuitively. One of their drawbacks is that they do not take into account the variance of each machine. As a player, you would probably react very differently when playing an arm that returns either \$0 or \$1, or an arm that gives you 50¢ every time, although the average gain is the same. The lower the variance, the more confident we can be that the results we observed are meaningful. An algorithm addressing that is ‘UCB1-Tuned’. Initially, each arm is played once. At time k, the algorithm picks the arm j(k) such that: \[ j(k) = \text{argmax}_{i=1…n} \Bigg( u_i + \sqrt{\frac{\ln(k)}{n_i} \text{min}\big( \frac{1}{4}, V_i(n_i)\big)}\Bigg) \] where \(u_i\) is the average gain of arm i and \(n_i\) the number of times it has been played so far. \(V_i\) is a measure of the variance, such that: \[ V_i(k) = \sigma_i^2(k) + \sqrt{\frac{2\,\ln(k)}{n_i(k)}} \] It looks fancy, so it must perform better, right? Well, it doesn’t. As demonstrated by Vermorel and Mohri as well as Kuleshov and Precup, a simple heuristics-based system such as the Softmax algorithm generates at least 50% less regret than UCB1-Tuned. But this algorithm is still not exactly what we need. The multi-armed bandit problem is ‘all or nothing’. It doesn’t allow hedging bets by splitting a dollar gambled between multiple machines, as we would do with a real portfolio allocation. Instead of choosing which place to dine tonight, imagine allocating a monthly restaurant budget across multiple places. 
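Before moving on to budget splitting, here is a minimal sketch (ours) of the Softmax choice rule above, where u holds the observed average returns and T the temperature; as T shrinks the rule becomes greedy, and as T grows it approaches a uniform random choice:

import math, random

def softmax_choice(u, T):
    """Pick an arm with probability proportional to exp(u_i / T)."""
    weights = [math.exp(ui / T) for ui in u]
    x = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if x <= acc:
            return i
    return len(u) - 1   # guard against floating-point rounding

# arms averaging 2%, 5% and 4% returns; moderate temperature
print(softmax_choice([0.02, 0.05, 0.04], T=0.01))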
Multiple players A simple approach is to consider multiple players, playing multiple bandits. The sum to ‘gamble’ is divided between the players. Then, each of them decides which machine to play, following the Softmax algorithm described above. Imagine deciding in advance the five next restaurants you’ll go to. Because the algorithm is probabilistic, the choices will be nicely allocated following your exploitation / exploration trade-off. Every time a restaurant has been tried or a machine played, we update the corresponding expected return, which will in turn influence the choice of the next player. Even more flexibly, we can imagine that instead of distributing resources equally amongst multiple players, we divide them by the smallest amount we can commit. For instance, \$50 for a restaurant dinner or \$25 on a Lending Club note. Then we distribute those amounts based on the probability computed by the Softmax algorithm. In practice, for each unit of minimum commitment, we draw an independent and identically distributed random number \( 0 \leq \varepsilon \leq 1\) and allocate that amount to the first choice I amongst n that satisfies: \[ \varepsilon \leq \frac{\sum_{i=1}^I e^\frac{u_i(k)}{T}}{\sum_{j=1}^n e^\frac{u_j(k)}{T}} \] Expected returns are constantly updated, but the amounts cannot be re-allocated once they’ve been committed. Imagine that for dining out, you would decide at the beginning of the month what your budget for each restaurant will be. You wouldn’t change it, even if you discovered a new gem a few days later. But if you get extra budget after that, you will allocate it with the updated expected returns. Getting the Temperature The one parameter that remains to be determined is T, the ‘temperature’ of our algorithm. Intuitively, the more certain we are of the results, the less we should explore. One can imagine updating T as: \[ T_{t+1} = \alpha \cdot | f(t+1) - f(t) | + (1-\alpha) \cdot T_{t}\] Where \( \alpha \) is a sensitivity constant and \( f() \) is a margin of error or a measure of riskiness. For instance, \( f(t) \) could be the latest difference in expected returns, the standard deviation between returns since inception, or the average of the variances calculated in the UCB1-Tuned algorithm. This will ensure that our allocation algorithm self-adapts. For instance, if a restaurant we tried before were to provide a surprisingly bad or good experience, or if the returns of an origination platform were becoming increasingly disappointing, our algorithm would naturally start exploring other options more. Conclusion Ian Fleming once said ‘never say no to adventure, or you’ll live a very dull life’, but according to Euripides, ‘Chance fights ever on the side of the prudent’. Hopefully, a little math can help in finding the balance between the two. Emmanuel Marot December 8, 2016
A search is reported for massive resonances decaying into a quark and a vector boson (W or Z), or two vector bosons (WW, WZ, or ZZ). The analysis is performed on an inclusive sample of multijet events corresponding to an integrated luminosity of 19.7 inverse femtobarns, collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the CMS detector at the LHC. The search uses novel jet-substructure identification techniques that provide sensitivity to the presence of highly boosted vector bosons decaying into a pair of quarks. Exclusion limits are set at a confidence level of 95% on the production of: (i) excited quark resonances q* decaying to qW and qZ for masses less than 3.2 TeV and 2.9 TeV, respectively, (ii) a Randall-Sundrum graviton G[RS] decaying into WW for masses below 1.2 TeV, and (iii) a heavy partner of the W boson W' decaying into WZ for masses less than 1.7 TeV. For the first time mass limits are set on W' to WZ and G[RS] to WW in the all-jets final state. The mass limits on q* to qW, q* to qZ, W' to WZ, G[RS] to WW are the most stringent to date. A model with a "bulk" graviton G[Bulk] that decays into WW or ZZ bosons is also studied. We have used 19 pb$^{-1}$ of data collected with the Collider Detector at Fermilab to search for new particles decaying to dijets. We exclude at 95% confidence level models containing the following new particles: axigluons with mass between 200 and 870 GeV, excited quarks with mass between 80 and 570 GeV, and color octet technirhos with mass between 320 and 480 GeV. The inclusive jet differential cross section has been measured for jet transverse energies, $E_T$, from 15 to 440 GeV, in the pseudorapidity region 0.1$\leq | \eta| \leq $0.7. The results are based on 19.5 pb$^{-1}$ of data collected by the CDF collaboration at the Fermilab Tevatron collider. The data are compared with QCD predictions for various sets of parton distribution functions. The cross section for jets with $E_T>200$ GeV is significantly higher than current predictions based on $O(\alpha_s^3)$ perturbative QCD calculations. Various possible explanations for the high-$E_T$ excess are discussed. We present a search for new heavy particles, $X$, which decay via $X \rightarrow WZ \to e\nu +jj$ in $p{\bar p}$ collisions at $\sqrt{s} = 1.8$ TeV. No evidence is found for production of $X$ in 110 pb$^{-1}$ of data collected by the Collider Detector at Fermilab. Limits are set at the 95% C.L. on the mass and the production of new heavy charged vector bosons which decay via $W'\to WZ$ in extended gauge models as a function of the width, $\Gamma(W')$, and the mixing factor between the $W'$ and the Standard Model $W$ bosons. Results are presented from analyses of jet data produced in $p\bar{p}$ collisions at $\sqrt{s} = 630$ and 1800 GeV collected with the D0 detector during the 1994-95 Fermilab Tevatron Collider run. We discuss details of detector calibration, and jet selection criteria in measurements of various jet production cross sections at $\sqrt{s} = 630$ and 1800 GeV. The inclusive jet cross sections, the dijet mass spectrum, the dijet angular distributions, and the ratio of inclusive jet cross sections at $\sqrt{s} = 630$ and 1800 GeV are compared to next-to-leading-order QCD predictions. The order $\alpha_s^3$ calculations are in good agreement with the data. We also use the data at $\sqrt{s} = 1800$ GeV to rule out models of quark compositeness with a contact interaction scale less than 2.2 TeV at the 95% confidence level. 
We present the first model-independent measurement of the helicity of $W$ bosons produced in top quark decays, based on a 1 fb$^{-1}$ sample of candidate $t\bar{t}$ events in the dilepton and lepton plus jets channels collected by the D0 detector at the Fermilab Tevatron $p\bar{p}$ Collider. We reconstruct the angle $\theta^*$ between the momenta of the down-type fermion and the top quark in the $W$ boson rest frame for each top quark decay. A fit of the resulting $\cos\theta^*$ distribution finds that the fraction of longitudinal $W$ bosons $f_0 = 0.425 \pm 0.166 \hbox{(stat.)} \pm 0.102 \hbox{(syst.)}$ and the fraction of right-handed $W$ bosons $f_+ = 0.119 \pm 0.090 \hbox{(stat.)} \pm 0.053 \hbox{(syst.)}$, which is consistent at the 30% C.L. with the standard model. We report the first observation of diffractive $J/\psi(\to \mu^+\mu^-)$ production in $\bar pp$ collisions at $\sqrt{s}$=1.8 TeV. Diffractive events are identified by their rapidity gap signature. In a sample of events with two muons of transverse momentum $p_T^{\mu}>2$ GeV/$c$ within the pseudorapidity region $|\eta|<$1.0, the ratio of diffractive to total $J/\psi$ production rates is found to be $R_{J/\psi}= [1.45\pm 0.25]\%$. The ratio $R_{J/\psi}(x)$ is presented as a function of $x$-Bjorken. By combining it with our previously measured corresponding ratio $R_{jj}(x)$ for diffractive dijet production, we extract a value of $0.59\pm 0.15$ for the gluon fraction of the diffractive structure function of the proton. We determine the fraction of events with double parton (DP) scattering in a single $p\bar{p}$ collision at $\sqrt{s}=1.96$ TeV in samples of photon + 3 jet and photon + b/c jet + 2 jet events collected with the D0 detector and corresponding to an integrated luminosity of about 8.7 fb$^{-1}$. The DP fractions and effective cross sections ($\sigma_{\rm eff}$) are measured for both event samples using the same kinematic selections. The measured DP fractions range from 0.21 to 0.17, with effective cross sections in the photon + 3 jet and photon + b/c jet + 2 jet samples of $\sigma_{\rm eff}^{\rm incl} = 12.7 \pm 0.2 \hbox{(stat)} \pm 1.3 \hbox{(syst)}$ mb and $\sigma_{\rm eff}^{\rm HF} = 14.6 \pm 0.6 \hbox{(stat)} \pm 3.2 \hbox{(syst)}$ mb, respectively. We measure the ratio of cross sections, $\sigma(p\bar{p} \to Z + b\,\text{jet})/\sigma(p\bar{p} \to Z + \text{jet})$, for associated production of a Z boson with at least one jet. The ratio is also measured as a function of the jet transverse momentum, jet pseudorapidity, Z boson transverse momentum, and the azimuthal angle between the Z boson and the closest jet for events with at least one b jet. These measurements use data collected by the D0 experiment in Run II of Fermilab's Tevatron $p\bar{p}$ Collider at a center-of-mass energy of 1.96 TeV, and correspond to an integrated luminosity of 9.7 fb$^{-1}$. The results are compared to predictions from next-to-leading order calculations and various Monte Carlo event generators. We present the first combined measurement of the rapidity and transverse momentum dependence of dijet azimuthal decorrelations, based on the recently proposed quantity $R_{\Delta \phi}$. The variable $R_{\Delta \phi}$ measures the fraction of the inclusive dijet events in which the azimuthal separation of the two jets with the highest transverse momenta is less than a specified value for the parameter $\Delta \phi_{\rm max}$. The quantity $R_{\Delta \phi}$ is measured in $p\bar{p}$ collisions at $\sqrt{s}=1.96\,$TeV, as a function of the dijet rapidity interval, the total scalar transverse momentum, and $\Delta \phi_{\rm max}$. 
The measurement uses an event sample corresponding to an integrated luminosity of 0.7 fb$^{-1}$ collected with the D0 detector at the Fermilab Tevatron Collider. The results are compared to predictions of a perturbative QCD calculation at next-to-leading order in the strong coupling with corrections for non-perturbative effects. The theory predictions describe the data, except in the kinematic region of large dijet rapidity intervals and large $\Delta \phi_{\rm max}$.

We present measurements of the differential cross section $d\sigma/dp_{T}^{\gamma}$ for the associated production of a $c$-quark jet and an isolated photon with rapidity $|y^{\gamma}| < 1.0$ and transverse momentum $30 < p_{T}^{\gamma} < 300$ GeV. The $c$-quark jets are required to have $|y^{jet}| < 1.5$ and $p_{T}^{jet} > 15$ GeV. The ratio of differential cross sections for photon + c and photon + b production as a function of $p_{T}^{\gamma}$ is also presented. The results are based on data corresponding to an integrated luminosity of 8.7 fb$^{-1}$ recorded with the D0 detector at the Fermilab Tevatron $p\bar{p}$ Collider at $\sqrt{s} = 1.96$ TeV. The obtained results are compared to next-to-leading order perturbative QCD calculations using various parton distribution functions, to predictions based on the $k_{T}$-factorization approach, and to predictions from the Sherpa and Pythia Monte Carlo event generators.

This paper presents distributions of topological observables in inclusive three- and four-jet events produced in pp collisions at a centre-of-mass energy of 7 TeV with a data sample collected by the CMS experiment corresponding to a luminosity of 5.1 fb$^{-1}$. The distributions are corrected for detector effects, and compared with several event generators based on two- and multi-parton matrix elements at leading order. Among the considered calculations, MadGraph interfaced with pythia6 displays the overall best agreement with data.

We report measurements of the inclusive transverse momentum $p_T$ distribution of centrally produced $K^0_S$, $K^*(892)$, and $\phi(1020)$ mesons up to $p_T = 10$ GeV/$c$ in minimum-bias events, and of $K^0_S$ and $\Lambda$ particles up to $p_T = 20$ GeV/$c$ in jets with transverse energy between 25 GeV and 160 GeV, in $\bar{p}p$ collisions. The data were taken with the CDF II detector at the Fermilab Tevatron at $\sqrt{s} = 1.96$ TeV. We find that as $p_T$ increases, the $p_T$ slopes of the three mesons ($K^0_S$, $K^*$, and $\phi$) are similar, and the ratio of $\Lambda$ to $K^0_S$ as a function of $p_T$ in minimum-bias events becomes similar to the fairly constant ratio in jets at $p_T \sim 5$ GeV/$c$. This suggests that the particles with $p_T \gtrsim 5$ GeV/$c$ in minimum-bias events are from soft jets, and that the $p_T$ slope of particles in jets is insensitive to light quark flavor (u, d, or s) and to the number of valence quarks. We also find that for $p_T \lesssim 4$ GeV relatively more $\Lambda$ baryons are produced in minimum-bias events than in jets.
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02). The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02). Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08). The measurement of prompt D-meson production as a function of multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03). The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2016-03). Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...

Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07). The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03). The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-09). The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb-Pb collisions at 2.76 TeV (Elsevier, 2016-12). We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16$ GeV/$c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05). Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
(no solution shared, part of this Coursera specialization) In this task we will use SOCR data containing information about the height and weight of 25 thousand teenagers.

[1]. If you haven't installed the Seaborn library yet, execute conda install seaborn in the terminal. (Seaborn isn't part of Anaconda; it provides convenient high-level functionality for data visualization.)

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

Read the data about height and weight into a Pandas DataFrame:

data = pd.read_csv('../../data/weights_heights.csv', index_col='Index')

The first thing you should do after reading the data is to look at the first records. This helps to catch data-reading errors (for example, when a wrong separator leaves you with 1 column instead of 10, with 9 commas inside the column name). It also allows a closer look at the data and the features and their nature (numerical, categorical, etc.).

Then we should plot histograms of the feature distributions. This can also help to understand the nature of a feature (a power-law distribution, a standard one, or something else). A histogram can help us find values that are unlike all the others: outliers. It is convenient to plot histograms using the plot method of a Pandas DataFrame with the option kind='hist'.

Example. Let's plot the histogram of the teenagers' height distribution. We use the method plot of the DataFrame data with the option y='Height' (the feature whose distribution we want to plot).

data.plot(y='Height', kind='hist', color='red', title='Height (inch.) distribution');

[2]. Look at the first 5 rows using the head method of the Pandas DataFrame. Plot the histogram of the weight distribution using the plot method of the Pandas DataFrame. Make the histogram green and add a title.

# Your code here
# Your code here

One of the most effective methods of basic data analysis is mapping pairwise dependencies of features. We make $m \times m$ plots ($m$ is the number of features) with histograms of the feature distributions on the diagonal and scatter plots of pairs of features off the diagonal. We can do this using the scatter_matrix method of a Pandas DataFrame or pairplot of the Seaborn library.

To illustrate this method we add a third feature: the body mass index (BMI). To do this we use the apply method of a Pandas DataFrame and Python's lambda functions.

def make_bmi(height_inch, weight_pound):
    METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
    return (weight_pound / KILO_TO_POUND) / (height_inch / METER_TO_INCH) ** 2

data['BMI'] = data.apply(lambda row: make_bmi(row['Height'], row['Weight']), axis=1)

[3]. Create the picture that contains the pairwise dependencies of the features 'Height', 'Weight' and 'BMI'. You should use the pairplot method of the Seaborn library.

# Your code here

During basic analysis you often have to investigate the dependency of a numerical feature on a categorical one (for example, the dependency between salary and employee sex). In this case we can use boxplots from the Seaborn library. A box plot is a compact way to show real-valued statistics (median and quartiles) for different values of a categorical feature. It also helps to find outliers: observations whose values are very different from the others.

[4]. Create a new feature weight_category in the DataFrame data that takes 3 values: 1 if the weight is less than 120 pounds, 3 if the weight is greater than or equal to 150 pounds, and 2 otherwise. Create a boxplot showing the dependency between height and weight category. Use the boxplot method of the Seaborn library and the apply method of the Pandas DataFrame. (A sketch for tasks [2] and [3] follows; task [4] continues right after it.)
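One possible way to fill in tasks [2] and [3] (a sketch only, since the notebook explicitly says no solution is shared; the column names 'Height' and 'Weight' come from the dataset above, and the green colour and title are the requirements stated in task [2]):

# Task [2]: first 5 rows and a green weight histogram
print(data.head())
data.plot(y='Weight', kind='hist', color='green',
          title='Weight (pounds) distribution');

# Task [3]: pairwise dependencies of the three features
sns.pairplot(data[['Height', 'Weight', 'BMI']]);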
Add titles "Height" to y axis and "Weight category" to x axis. def weight_category(weight): pass # Your code heredata['weight_cat'] = data['Weight'].apply(weight_category)# Your code here [5]. Create scatter plot of dependencies between height and weight using plot method for Pandas DataFrame with option kind='scatter'. Add title to the figure. # Your code here In basic case the task of real value prediction by other features (regression task) can be solved using squared error minimization. [6]. Create function computing squared error of dependency approximation between height $y$ and weight $x$ using straight line $y = w_0 + w_1 * x$ by two parameters $w_0$ and $w_1$:$$error(w_0, w_1) = \sum_{i=1}^n {(y_i - (w_0 + w_1 * x_i))}^2 $$Where $n$ is number of observations in dataset, $y_i$ and $x_i$ are height and weight of $i$th person in dataset. # Your code here So we are solving the task how to draw a straight line through the points cloud corresponding to observations in our dataset in space of features "Height" and "Weight" to minimize function[6]. Let's start with drawings some lines and make sure they transfer dependencies from height to weight. [7]. On plot from [5] Problem 1 draw two straight lines corresponding to values of parameters $w_0, w_1) = (60, 0.05)$ and ($w_0, w_1) = (50, 0.16)$. Use plot method from matplotlib.pyplot and linspace method from NumPy library. Add the titles to axes and plot. # Your code here Squared error function minimization is very easy task because of the function's convex nature. There are many optimization methods for this problem. Let's look at dependency between error function and the first parameter (slope of the straight line) if the second parameter (absolute term) is fixed. [8]. Plot dependency between error function calculated in [6] and $w_1$ parameter when $w_0$ = 50. Add the titles to axes and plot. # Your code here Now we can find the slope of the straight line approximating dependency between height and weight when coefficient is fixed $w_0 = 50$ using optimization method. [9]. Using minimize_scalar method from scipy.optimize find the minimum of the function[6] for parameter value $w_1$ in range [-5,5]. Draw on plot [5] Problem 1 the straight line corresponding to the values of parameters ($w_0$, $w_1$) = (50, $w_1\_opt$) where $w_1\_opt$ is optimal value of parameter $w_1$ that was found in [8]. # Your code here # Your code here When you analyze multidimensional data, you often want to get intuitive understanding about data nature using visualization. It is impossible to plot the data when you have more than 3 features. It is better to choose 2 or 3 principal components from data and represent them in plane or volume. Let's have a look how Python can draw 3D figures on example of function $z(x,y) = sin(\sqrt{x^2+y^2})$ for values of $x$ и $y$ from interval [-5,5] with step 0.25 from mpl_toolkits.mplot3d import Axes3D Create objects of type matplotlib.figure.Figure (picture) and matplotlib.axes._subplots.Axes3DSubplot (axes). fig = plt.figure()ax = fig.gca(projection='3d') # get current axis# Create NumPy arrays with data points on X and Y axes.# Use meshgrid method creating matrix of coordinates# By vectors of coordinates. Set needed function Z(x, y).X = np.arange(-5, 5, 0.25)Y = np.arange(-5, 5, 0.25)X, Y = np.meshgrid(X, Y)Z = np.sin(np.sqrt(X**2 + Y**2))# Finally use *plot_surface* method of type object# Axes3DSubplot. Add titles to axes.surf = ax.plot_surface(X, Y, Z)ax.set_xlabel('X')ax.set_ylabel('Y')ax.set_zlabel('Z')plt.show() [10]. 
[10]. Create a 3D plot of the error function calculated in [6] against the parameters $w_0$ and $w_1$. Add the title "Intercept" to the $x$ axis, "Slope" to the $y$ axis, and "Error" to the $z$ axis.

# Your code here
# Your code here
# Your code here

[11]. Find the minimum of the function from [6] using the minimize method from scipy.optimize, for parameter values $w_0$ in the range [-100, 100] and $w_1$ in the range [-5, 5]. The starting point is $(w_0, w_1) = (0, 0)$. Use the L-BFGS-B optimization method (the method option of minimize). On the plot from task [5], draw the straight line corresponding to the optimal values of the parameters $w_0$ and $w_1$ that were found. Add titles to the axes and the plot.

# Your code here
# Your code here
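A sketch for tasks [10] and [11], reusing the hypothetical sq_error helper from the sketch above (on older matplotlib versions fig.gca(projection='3d'), as in the demo, also works):

# task [10]: 3D surface of the error over (w0, w1)
W0, W1 = np.meshgrid(np.linspace(-100, 100, 50), np.linspace(-5, 5, 50))
E = np.vectorize(sq_error)(W0, W1)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(W0, W1, E)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()

# task [11]: joint minimization with L-BFGS-B
from scipy.optimize import minimize
res = minimize(lambda w: sq_error(w[0], w[1]), x0=(0, 0),
               method='L-BFGS-B', bounds=[(-100, 100), (-5, 5)])
w0_opt, w1_opt = res.x
data.plot(x='Weight', y='Height', kind='scatter', title='Fitted line')
xs = np.linspace(data['Weight'].min(), data['Weight'].max(), 100)
plt.plot(xs, w0_opt + w1_opt * xs, color='red')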
The total wave function for a particle is made by a combination of a spin part (say $\chi$) and a position part (let's say $\psi$), so that we have:

$$\Psi_t(x_i,s_i)=\chi (s_i) \, \psi (x_i)$$

When you consider bosons, you want to ensure that the total wave function is symmetric. As such, you have one of two cases: a symmetric spin wave function together with a symmetric spatial wave function, or a spin and a spatial wave function that are both anti-symmetric. So, for bosons (+ sign for symmetric and - for anti-symmetric):

$$\Psi_t = \chi_+ \, \psi_+$$

or

$$\Psi_t = \chi_- \, \psi_-$$

Your final wave function is then a combination of those two cases. For fermions, the same idea applies, but remember that fermions have an anti-symmetric total wave function, so the two accepted wave functions are:

$$\Psi_t = \chi_- \, \psi_+$$

or

$$\Psi_t = \chi_+ \, \psi_-$$

Now say you consider a system of two spin-$\frac{1}{2}$ particles. There are four spin states, which we separate into two categories: the singlet state and the triplet states. For the singlet state (the anti-symmetric state), you have:

$$|\chi\rangle = \frac{1}{\sqrt{2}}[|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle]$$

As for the triplet states (all symmetric states), they are:

$$|\chi\rangle = \frac{1}{\sqrt{2}}[|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle] \\ |\chi\rangle= |\uparrow\uparrow\rangle \\ |\chi\rangle= |\downarrow\downarrow\rangle$$

You can then see that it is possible to get extra degeneracy via the spin part (you have more than one state that gives a symmetric spin part). For extra reference, you can check out Sakurai's or Zettili's textbook.
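A quick numerical illustration of the symmetry claim (my addition, not part of the answer above): representing the two-spin states as vectors in the basis {|↑↑>, |↑↓>, |↓↑>, |↓↓>}, the exchange (SWAP) operator flips the sign of the singlet but leaves the triplets unchanged.

import numpy as np

# SWAP exchanges the two particles: |ab> -> |ba>
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

up_down = np.array([0, 1, 0, 0])   # |up, down>
down_up = np.array([0, 0, 1, 0])   # |down, up>
singlet  = (up_down - down_up) / np.sqrt(2)
triplet0 = (up_down + down_up) / np.sqrt(2)

print(np.allclose(SWAP @ singlet, -singlet))    # True: anti-symmetric
print(np.allclose(SWAP @ triplet0, triplet0))   # True: symmetric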
I hope I'm asking this at the right place. This pertains to actuarial exam MFE/3F on Financial Economics. If $\sigma$ is "volatility" and $\Omega$ the elasticity of the option, one formula that is taught in this course is

$$\sigma_{\text{option}} = \sigma_{\text{stock}} \cdot |\Omega|\text{,}$$

where "option" means a call or a put. Finan (Proposition 31.1, pp. 234-235) proves this statement. My question is: does this formula make an implicit assumption that the Black-Scholes assumptions have to hold?
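For concreteness, here is a sketch of how the formula is typically applied when the Black-Scholes model is assumed (all parameter values below are made up for illustration; only the standard Black-Scholes call price and delta are used):

import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price and delta (no dividends)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return price, norm.cdf(d1)

S, K, T, r, sigma = 100, 105, 0.5, 0.05, 0.30   # illustrative values
V, delta = bs_call(S, K, T, r, sigma)
omega = delta * S / V                 # elasticity of the call
sigma_option = sigma * abs(omega)     # volatility of the option
print(omega, sigma_option)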
In principle, yes, it would work. However, there are two huge practical issues that would probably make it much easier to just fly to a distant star and then come back, or go and orbit a black hole for a bit.

The first has already been mentioned: it would be very difficult to apply the vibrations in such a way that the person isn't immediately liquidized. Under normal circumstances a human being can't take more than a few $g$'s of acceleration, even with a G-suit, and this is nowhere near enough to accelerate them to near the speed of light in a fraction of a second, which is what you'd need to do repeatedly in order to do what you're suggesting. As Rod Vance says in a comment, it could in principle be done with a time-varying gravitational field, which would apply exactly the same acceleration to every part of the body, so there wouldn't be any stress.

However, then you'd run into the second issue: the energy it would take. Accelerating a person up to $0.99c$ means changing their kinetic energy by

$$\Delta E = \frac{mc^2}{\sqrt{1-v^2/c^2}}-mc^2 \approx 4\times 10^{19}\:\mathrm{J}.$$

You'd need to do this many times a second (if you did it only once per second, the person would fly almost to the Moon and back on every vibration), so you'd need a total power of at least, let's say, $10^{24}\:\mathrm{W}$. The current total power generated by humans on Earth is around $2\times 10^{12}\:\mathrm{W}$, which is nowhere near enough. The Sun puts out around $4\times 10^{26}\:\mathrm{W}$, so I guess it might be possible using a Dyson sphere, but this would take a huge fraction of an advanced space-faring civilisation's power output just to send one person forward in time. It's difficult to imagine how you could dispose of the waste heat this would generate.

In principle you don't need to use all this energy, because you could recover the person's kinetic energy on every stroke, store it, and then turn it back into kinetic energy going the other way. But again it's very difficult to imagine how to do this. Though in a sense, this is what happens when you orbit a heavy object, so maybe orbiting a small black hole is the best way to do this after all.
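The quoted figure is easy to reproduce (a quick sanity check; the 70 kg body mass is my assumption):

import math

c = 299_792_458.0            # speed of light, m/s
m = 70.0                     # kg, roughly one person
v = 0.99 * c
gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
dE = (gamma - 1.0) * m * c ** 2   # relativistic kinetic energy
print(f"{dE:.2e} J")              # ~3.8e19 J, i.e. about 4e19 J as quoted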
A search for new physics is performed based on all-hadronic events with large missing transverse momentum produced in proton-proton collisions at $\sqrt{s} = 13$ TeV. The data sample, corresponding to an integrated luminosity of 2.3 inverse femtobarns, was collected with the CMS detector at the CERN LHC in 2015. The data are examined in search regions of jet multiplicity, tagged bottom quark jet multiplicity, missing transverse momentum, and the scalar sum of jet transverse momenta. The observed numbers of events in all search regions are found to be consistent with the expectations from standard model processes. Exclusion limits are presented for simplified supersymmetric models of gluino pair production. Depending on the assumed gluino decay mechanism, and for a massless, weakly interacting, lightest neutralino, lower limits on the gluino mass from 1440 to 1600 GeV are obtained, significantly extending previous limits.

A search for top squark pair production in pp collisions at $\sqrt{s}=13$ TeV is performed using events with a single isolated electron or muon, jets, and a large transverse momentum imbalance. The results are based on data collected in 2016 with the CMS detector at the LHC, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. No significant excess of events is observed above the expectation from standard model processes. Exclusion limits are set in the context of supersymmetric models of pair production of top squarks that decay either to a top quark and a neutralino or to a bottom quark and a chargino. Depending on the details of the model, we exclude top squarks with masses as high as 1120 GeV. Detailed information is also provided to facilitate theoretical interpretations in other scenarios of physics beyond the standard model.

A search for new phenomena is performed in final states containing one or more jets and an imbalance in transverse momentum in pp collisions at a centre-of-mass energy of 13 TeV. The analysed data sample, recorded with the CMS detector at the CERN LHC, corresponds to an integrated luminosity of 2.3 fb$^{-1}$. Several kinematic variables are employed to suppress the dominant background, multijet production, as well as to discriminate between other standard model and new physics processes. The search provides sensitivity to a broad range of new-physics models that yield a stable weakly interacting massive particle. The number of observed candidate events is found to agree with the expected contributions from standard model processes, and the result is interpreted in the mass parameter space of fourteen simplified supersymmetric models that assume the pair production of gluinos or squarks and a range of decay modes. For models that assume gluino pair production, masses up to 1575 and 975 GeV are excluded for gluinos and neutralinos, respectively. For models involving the pair production of top squarks and compressed mass spectra, top squark masses up to 400 GeV are excluded.

Results are reported from a search for physics beyond the standard model in proton-proton collisions at a center-of-mass energy of $\sqrt{s} = 13$ TeV. The search uses a signature of a single lepton, large jet and bottom quark jet multiplicities, and a high sum of large-radius jet masses, without any requirement on the missing transverse momentum in an event. The data sample corresponds to an integrated luminosity of 35.9 fb$^{-1}$ recorded by the CMS experiment at the LHC.
No significant excess beyond the prediction from standard model processes is observed. The results are interpreted in terms of upper limits on the production cross section for $R$-parity violating supersymmetric extensions of the standard model using a benchmark model of gluino pair production, in which each gluino decays promptly via $\tilde{g} \to tbs$. Gluinos with a mass below 1610 GeV are excluded at 95% confidence level.

Results are reported from a search for supersymmetric particles in proton-proton collisions in the final state with a single lepton, multiple jets, including at least one b-tagged jet, and large missing transverse momentum. The search uses a sample of proton-proton collision data at $\sqrt{s} = 13$ TeV recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The observed event yields in the signal regions are consistent with those expected from standard model backgrounds. The results are interpreted in the context of simplified models of supersymmetry involving gluino pair production, with gluino decay into either on- or off-mass-shell top squarks. Assuming that the top squarks decay into a top quark plus a stable, weakly interacting neutralino, scenarios with gluino masses up to about 1.9 TeV are excluded at 95% confidence level for neutralino masses up to about 1 TeV.

A search for new long-lived particles decaying to leptons is presented using proton-proton collisions produced by the LHC at $\sqrt{s} = 8$ TeV. Data used for the analysis were collected by the CMS detector and correspond to an integrated luminosity of 19.7 fb$^{-1}$. Events are selected with an electron and muon with opposite charges that both have transverse impact parameter values between 0.02 and 2 cm. The search has been designed to be sensitive to a wide range of models with nonprompt $e$-$\mu$ final states. Limits are set on the "displaced supersymmetry" model, with pair production of top squarks decaying into an $e$-$\mu$ final state via $R$-parity-violating interactions.
The results are the most restrictive to date on this model, with the most stringent limit being obtained for a top squark lifetime corresponding to $c\tau = 2$ cm, excluding masses below 790 GeV at 95% confidence level.

Results are reported from a search for supersymmetric particles in proton-proton collisions in the final state with a single, high transverse momentum lepton; multiple jets, including at least one b-tagged jet; and large missing transverse momentum. The data sample corresponds to an integrated luminosity of 2.3 fb$^{-1}$ at $\sqrt{s}=13$ TeV, recorded by the CMS experiment at the LHC. The search focuses on processes leading to high jet multiplicities, such as gluino pair production with $\tilde{\mathrm{g}}\to \mathrm{t}\overline{\mathrm{t}}\tilde{\chi}_1^0$. The quantity $M_J$, defined as the sum of the masses of the large-radius jets in the event, is used in conjunction with other kinematic variables to provide discrimination between signal and background and as a key part of the background estimation method. The observed event yields in the signal regions in data are consistent with those expected for standard model backgrounds, estimated from control regions in data. Exclusion limits are obtained for a simplified model corresponding to gluino pair production with three-body decays into top quarks and neutralinos. Gluinos with a mass below 1600 GeV are excluded at a 95% confidence level for scenarios with low $\tilde{\chi}_1^0$ mass, and neutralinos with a mass below 800 GeV are excluded for a gluino mass of about 1300 GeV. For models with two-body gluino decays producing on-shell top squarks, the excluded region is only weakly sensitive to the top squark mass.

A search for supersymmetry in the context of general gauge-mediated (GGM) breaking with the lightest neutralino as the next-to-lightest supersymmetric particle and the gravitino as the lightest is presented. The data sample corresponds to an integrated luminosity of 36 inverse picobarns recorded by the CMS experiment at the LHC. The search is performed using events containing two or more isolated photons, at least one hadronic jet, and significant missing transverse energy. No excess of events at high missing transverse energy is observed. Upper limits on the signal cross section for GGM supersymmetry between 0.3 and 1.1 pb at the 95% confidence level are determined for a range of squark, gluino, and neutralino masses, excluding supersymmetry parameter space that was inaccessible to previous experiments.

A search for physics beyond the standard model in final states with at least one photon, large transverse momentum imbalance, and large total transverse event activity is presented. Such topologies can be produced in gauge-mediated supersymmetry models in which pair-produced gluinos or squarks decay to photons and gravitinos via short-lived neutralinos. The data sample corresponds to an integrated luminosity of 35.9 inverse femtobarns of proton-proton collisions at $\sqrt{s} = 13$ TeV recorded by the CMS experiment at the LHC in 2016. No significant excess of events above the expected standard model background is observed. The data are interpreted in simplified models of gluino and squark pair production, in which gluinos or squarks decay via neutralinos to photons.
Gluino masses of up to 1.50-2.00 TeV and squark masses up to 1.30-1.65 TeV are excluded at 95% confidence level, depending on the neutralino mass and branching fraction.

A search for new physics is presented in final states with two oppositely charged leptons (electrons or muons), jets identified as originating from b quarks, and missing transverse momentum ($p_T^{\rm miss}$). The search uses proton-proton collision data at $\sqrt{s}=13$ TeV amounting to 35.9 fb$^{-1}$ of integrated luminosity collected using the CMS detector in 2016. Hypothetical signal events are efficiently separated from the dominant $t\bar{t}$ background with requirements on $p_T^{\rm miss}$ and transverse-mass variables. No significant deviation is observed from the expected background. Exclusion limits are set in the context of simplified supersymmetric models with pair-produced top squarks. For top squarks decaying exclusively to a top quark and a neutralino, exclusion limits are placed at 95% confidence level on the mass of the lightest top squark up to 800 GeV and on the lightest neutralino up to 360 GeV. These results, combined with searches in the single-lepton and all-jet final states, raise the exclusion limits up to 1050 GeV for the lightest top squark and up to 500 GeV for the lightest neutralino. For top squarks undergoing a cascade decay through charginos and sleptons, the mass limits reach up to 1300 GeV for top squarks and up to 800 GeV for the lightest neutralino. The results are also interpreted in a simplified model with a dark matter (DM) particle coupled to the top quark through a scalar or pseudoscalar mediator. For light DM, mediator masses up to 100 (50) GeV are excluded for scalar (pseudoscalar) mediators. The result for the scalar mediator achieves some of the most stringent limits to date in this model.

A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance $p_T^{\rm miss}$ in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb$^{-1}$. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the $p_T^{\rm miss}$, the scalar sum of jet transverse momenta, and the $m_{T2}$ mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results.

A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton-proton collisions at a center-of-mass energy of 13 TeV. The data correspond to an integrated luminosity of 35.9 fb$^{-1}$ and were recorded at the LHC with the CMS detector in 2016.
The analysis characterizes signal-like events by categorizing the data into various signal regions based on the number of jets, the number of b-tagged jets, and the missing transverse momentum. No significant excess of events is observed with respect to the expectations from standard model processes. Limits are placed on the gluino and top squark pair production cross sections using several simplified models of supersymmetric particle production with gauge-mediated supersymmetry breaking. Depending on the model and the mass of the next-to-lightest supersymmetric particle, the production of gluinos with masses as large as 2120 GeV and the production of top squarks with masses as large as 1230 GeV are excluded at 95% confidence level.

A search is presented for additional neutral Higgs bosons in the $\tau\tau$ final state in proton-proton collisions at the LHC. The search is performed in the context of the minimal supersymmetric extension of the standard model (MSSM), using the data collected with the CMS detector in 2016 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes production of the Higgs boson in association with b quarks. No significant deviation above the expected background is observed. Model-independent limits at 95% confidence level (CL) are set on the product of the branching fraction for the decay into $\tau$ leptons and the cross section for the production via gluon fusion or in association with b quarks. These limits range from 18 pb at 90 GeV to 3.5 fb at 3.2 TeV for gluon fusion, and from 15 pb (at 90 GeV) to 2.5 fb (at 3.2 TeV) for production in association with b quarks. In the m$_{\text{h}}^{\text{mod+}}$ scenario these limits translate into a 95% CL exclusion of $\tan\beta > 6$ for neutral Higgs boson masses below 250 GeV, where $\tan\beta$ is the ratio of the vacuum expectation values of the neutral components of the two Higgs doublets. The 95% CL exclusion contour reaches 1.6 TeV for $\tan\beta = 60$.
A useless state in a finite automaton is one from which no path leads to a final state; no (part of a) string is ever accepted through such a state. Theoretically, the algorithm to determine the useful states is trivial: let $G$ be the set of good (useful) states and let $\Omega$ be the set of all states. Initialize $G$ with all final states. Check all states in $\Omega\setminus G$ for those that have a transition to a state in $G$ and add them to $G$. Repeat until nothing is added to $G$ any more. A straightforward implementation mimicking the above, however, can be quite costly, looping over states and transitions over and over again. The number of passes over $\Omega \setminus G$ is bounded by the depth of the automaton. In a degenerate automaton containing transitions $a_0\to a_1 \to\dots\to a_n\to f$, where $f$ is a single final state, plus an additional transition $a_n\to u$ such that $u$ is useless, there would be around $n$ passes if you always loop over the $a_i$ in order of increasing $i$. But if you loop in decreasing order of $i$, moving newly found good states into $G$ immediately, a single pass would suffice. This may, however, lead to other degenerate situations (I am guessing). I am looking for an algorithm to mark useful states, or remove useless states, that is not recursive (to prevent stack overflow, since I am looking at FAs with millions of states) and as efficient as possible. Extra question: are there theoretical limits known for this algorithm?
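On the extra question: this is just graph reachability on the reversed transition graph, so it is solvable in time linear in the size of the automaton, $O(|Q| + |\delta|)$, and no algorithm can do better in general, since every transition may have to be inspected. A non-recursive worklist sketch in Python (the input representation, with states, final states, and (source, label, target) triples, is my assumption):

from collections import defaultdict, deque

def useful_states(states, finals, transitions):
    # reverse adjacency: target -> list of sources
    rev = defaultdict(list)
    for src, _label, dst in transitions:
        rev[dst].append(src)
    good = set(finals)       # G, initialized with the final states
    work = deque(finals)     # explicit queue instead of recursion
    while work:
        q = work.popleft()
        for p in rev[q]:
            if p not in good:    # each state enters the queue at most once
                good.add(p)
                work.append(p)
    return good

Each transition is examined at most once (when its target is popped), which is exactly the single-pass behaviour the question hopes for, regardless of the order in which states are numbered.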
Equivalence of Definitions of Complex Natural Logarithm

Theorem

Let $z = r e^{i \theta}$ be a complex number expressed in exponential form such that $z \ne 0$. The following two definitions agree.

Definition 1: the complex natural logarithm of $z \in \C_{\ne 0}$ is the multifunction defined as $\map \ln z := \set {\map \ln r + i \paren {\theta + 2 k \pi}: k \in \Z}$.

Definition 2: the complex natural logarithm of $z$ is the multifunction defined as $\map \ln z := \set {w \in \C: e^w = z}$.

Proof

Let $z = r e^{i \theta}$ such that $z \ne 0$. Let $F = \set {\ln r + i \theta + 2 k \pi i: k \in \Z}$ and let $G = \set {w \in \C: e^w = z}$. We will demonstrate that $F = G$.

Definition 1 implies Definition 2

Let $w = x + i y$ such that $w \in F$. Then $x + i y = \ln r + i \theta + 2 k \pi i$ for some $k \in \Z$, so $x = \ln r$ and $y = \theta + 2 k \pi$. Then:

\begin{align}
e^w &= e^{x + i y} \\
&= e^x e^{i y} && \text{Exponential of Sum} \\
&= e^{\ln r} e^{i \paren {\theta + 2 k \pi}} && \text{Definition of } w \\
&= e^{\ln r} e^{i \theta} && \text{Periodicity of Complex Exponential Function} \\
&= r e^{i \theta} && \text{Exponential of Natural Logarithm} \\
&= z && \text{Definition of } z
\end{align}

Thus $w \in G$, and so $F \subseteq G$. $\Box$

Definition 2 implies Definition 1

Let $w \in G$. By definition, $e^w = z = r e^{i \theta}$. By Exponential of Natural Logarithm and Exponential of Sum:

$r e^{i \theta} = e^{\ln r} e^{i \theta} = e^{\ln r + i \theta}$

Hence $e^w = e^{\ln r + i \theta}$. By the Periodicity of the Complex Exponential Function, $e^a = e^b$ holds if and only if $a - b = 2 k \pi i$ for some $k \in \Z$. Thus $w = \ln r + i \theta + 2 k \pi i$ for some $k \in \Z$. Therefore $w \in F$, and so $G \subseteq F$. $\Box$

So by definition of set equality: $F = G$ $\blacksquare$
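A quick illustrative cross-check of the equality (my addition, using Python's cmath): every element of $F$ exponentiates back to $z$.

import cmath
import math

z = 2 * cmath.exp(1j * 0.7)       # r = 2, theta = 0.7
r, theta = abs(z), cmath.phase(z)
for k in range(-2, 3):
    w = math.log(r) + 1j * (theta + 2 * k * math.pi)
    assert cmath.isclose(cmath.exp(w), z)  # e^w = z for every k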
Neural Networks

From Ufldl

This "neuron" is a computational unit that takes as input <math>x_1, x_2, x_3</math> (and a +1 intercept term), and outputs <math>h_{W,b}(x) = f(W^Tx) = f(\sum_{i=1}^3 W_{i}x_i +b)</math>, where <math>f : \Re \mapsto \Re</math> is called the '''activation function'''. In these notes, we will choose <math>f(\cdot)</math> to be the sigmoid function:

[[Image:Sigmoid_Function.png|400px||Sigmoid activation function.]]
[[Image:Tanh_Function.png|400px||Tanh activation function.]]

The <math>\tanh(z)</math> function is a rescaled version of the sigmoid, and its output range is <math>[-1,1]</math> instead of <math>[0,1]</math>. Note that unlike parts of CS229, we are not using the convention here of <math>x_0=1</math>. Instead, the intercept term is handled separately by the parameter <math>b</math>.

A neural network is put together by hooking together many of our simple neurons, so that the output of a neuron can be the input of another. For example, here is a small neural network:

In this figure, we have used circles to also denote the inputs to the network. The circles labeled +1 are called '''bias units''', and correspond to the intercept term. The leftmost layer of the network is called the '''input layer''', and the rightmost layer the '''output layer''' (which, in this example, has only one node).

In the sequel, we also let <math>z^{(l)}_i</math> denote the total weighted sum of inputs to unit <math>i</math> in layer <math>l</math>, including the bias term (e.g., <math>z_i^{(2)} = \sum_{j=1}^n W^{(1)}_{ij} x_j + b^{(1)}_i</math>), so that <math>a^{(l)}_i = f(z^{(l)}_i)</math>.

If we allow the activation function <math>f(\cdot)</math> to apply to vectors in an element-wise fashion (i.e., <math>f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]</math>), then we can write the equations above more compactly as:

<math>\begin{align}
z^{(2)} &= W^{(1)} x + b^{(1)} \\
a^{(2)} &= f(z^{(2)}) \\
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
h_{W,b}(x) &= a^{(3)} = f(z^{(3)})
\end{align}</math>

More generally, recalling that we also use <math>a^{(1)} = x</math> to denote the values from the input layer, then given layer <math>l</math>'s activations <math>a^{(l)}</math>, we can compute layer <math>l+1</math>'s activations <math>a^{(l+1)}</math> as:

<math>\begin{align}
z^{(l+1)} &= W^{(l)} a^{(l)} + b^{(l)} \\
a^{(l+1)} &= f(z^{(l+1)})
\end{align}</math>

By organizing our parameters in matrices and using matrix-vector operations, we can take advantage of fast linear algebra routines to quickly perform calculations in our network.

We have so far focused on one example neural network, but one can also build neural networks with other architectures (meaning patterns of connectivity between neurons), including ones with multiple hidden layers. The most common choice is an <math>n_l</math>-layered network where layer <math>1</math> is the input layer, layer <math>n_l</math> is the output layer, and each layer <math>l</math> is densely connected to layer <math>l+1</math>. In this setting, to compute the output of the network, we can successively compute all the activations in layer <math>L_2</math>, then layer <math>L_3</math>, and so on, up to layer <math>L_{n_l}</math>, using the equations above. This is one example of a '''feedforward''' neural network, since the connectivity graph does not have any directed loops or cycles.

Neural networks can also have multiple output units. For example, here is a network with multiple output units. (The inputs <math>x_i</math> might describe a patient, and the different outputs <math>y_i</math>'s might indicate presence or absence of different diseases.)
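A compact numpy rendering of the forward pass just described (the weights below are random placeholders; the shapes follow the <math>W^{(l)}</math>, <math>b^{(l)}</math> convention above, here for an illustrative 3-4-1 network):

import numpy as np

def f(z):
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation

rng = np.random.default_rng(0)
# W[l] maps layer l to layer l+1; b[l] is the intercept of layer l+1
W = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
b = [np.zeros(4), np.zeros(1)]

def forward(x):
    a = x                      # a^{(1)} = x
    for Wl, bl in zip(W, b):
        z = Wl @ a + bl        # z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)}
        a = f(z)               # a^{(l+1)} = f(z^{(l+1)})
    return a                   # h_{W,b}(x)

print(forward(np.array([1.0, 2.0, 3.0])))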
In this section we explore the relationship between the derivative of a function and the derivative of its inverse. For functions whose derivatives we already know, we can use this relationship to find derivatives of inverses without having to use the limit definition of the derivative. In particular, we will apply the formula for derivatives of inverse functions to trigonometric functions. This formula may also be used to extend the power rule to rational exponents.

The Derivative of an Inverse Function

Note: The Inverse Function Theorem is an "extra" for our course, but can be very useful. There are other methods to derive (prove) the derivatives of the inverse trigonometric functions. Be sure to see the Table of Derivatives of Inverse Trigonometric Functions.

We begin by considering a function and its inverse. If \(f(x)\) is both invertible and differentiable, it seems reasonable that the inverse of \(f(x)\) is also differentiable. Figure \(\PageIndex{1}\) shows the relationship between a function \(f(x)\) and its inverse \(f^{−1}(x)\). Look at the point \((a,f^{−1}(a))\) on the graph of \(f^{−1}(x)\) having a tangent line with a slope of

\[(f^{−1})′(a)=\dfrac{p}{q}.\]

This point corresponds to a point \((f^{−1}(a),a)\) on the graph of \(f(x)\) having a tangent line with a slope of

\[f′(f^{−1}(a))=\dfrac{q}{p}.\]

Thus, if \(f^{−1}(x)\) is differentiable at \(a\), then it must be the case that

\[(f^{−1})′(a)=\dfrac{1}{f′(f^{−1}(a))}.\]

Figure \(\PageIndex{1}\): The tangent lines of a function and its inverse are related; so, too, are the derivatives of these functions.

We may also derive the formula for the derivative of the inverse by first recalling that \(x=f(f^{−1}(x))\). Then by differentiating both sides of this equation (using the chain rule on the right), we obtain

\[1=f′(f^{−1}(x))(f^{−1})′(x).\]

Solving for \((f^{−1})′(x)\), we obtain

\[(f^{−1})′(x)=\dfrac{1}{f′(f^{−1}(x))}.\]

We summarize this result in the following theorem.

Inverse Function Theorem

Let \(f(x)\) be a function that is both invertible and differentiable. Let \(y=f^{−1}(x)\) be the inverse of \(f(x)\). For all \(x\) satisfying \(f′(f^{−1}(x))≠0\),

\[\dfrac{dy}{dx}=\dfrac{d}{dx}(f^{−1}(x))=(f^{−1})′(x)=\dfrac{1}{f′(f^{−1}(x))}.\]

Alternatively, if \(y=g(x)\) is the inverse of \(f(x)\), then

\[g′(x)=\dfrac{1}{f′(g(x))}.\]

Example \(\PageIndex{1}\): Applying the Inverse Function Theorem

Use the inverse function theorem to find the derivative of \(g(x)=\dfrac{x+2}{x}\). Compare the resulting derivative to that obtained by differentiating the function directly.

Solution

The inverse of \(g(x)=\dfrac{x+2}{x}\) is \(f(x)=\dfrac{2}{x−1}\). Since \(g′(x)=\dfrac{1}{f′(g(x))}\), begin by finding \(f′(x)\). Thus, \(f′(x)=\dfrac{−2}{(x−1)^2}\) and \(f′(g(x))=\dfrac{−2}{(g(x)−1)^2}=\dfrac{−2}{\left(\dfrac{x+2}{x}−1\right)^2}=−\dfrac{x^2}{2}\). Finally, \(g′(x)=\dfrac{1}{f′(g(x))}=−\dfrac{2}{x^2}\). We can verify that this is the correct derivative by applying the quotient rule to \(g(x)\) to obtain \(g′(x)=−\dfrac{2}{x^2}\).

Exercise \(\PageIndex{1}\)

Use the inverse function theorem to find the derivative of \(g(x)=\dfrac{1}{x+2}\). Compare the result with that obtained by differentiating \(g(x)\) directly.

Hint: Use the preceding example as a guide.

Answer: \(g′(x)=−\dfrac{1}{(x+2)^2}\)

Example \(\PageIndex{2}\): Applying the Inverse Function Theorem

Use the inverse function theorem to find the derivative of \(g(x)=\sqrt[3]{x}\).

Solution

The function \(g(x)=\sqrt[3]{x}\) is the inverse of the function \(f(x)=x^3\).
Since \(g′(x)=\dfrac{1}{f′(g(x))}\), begin by finding \(f′(x)\). Thus,

\[f′(x)=3x^2\]

and

\[f′(g(x))=3(\sqrt[3]{x})^2=3x^{2/3}.\]

Finally,

\[g′(x)=\dfrac{1}{3x^{2/3}}=\dfrac{1}{3}x^{−2/3}.\]

Exercise \(\PageIndex{2}\)

Find the derivative of \(g(x)=\sqrt[5]{x}\) by applying the inverse function theorem.

Hint: \(g(x)\) is the inverse of \(f(x)=x^5\).

Answer: \(g′(x)=\dfrac{1}{5}x^{−4/5}\)

From the previous example, we see that we can use the inverse function theorem to extend the power rule to exponents of the form \(\dfrac{1}{n}\), where \(n\) is a positive integer. This extension will ultimately allow us to differentiate \(x^q\), where \(q\) is any rational number.

Extending the Power Rule to Rational Exponents

The power rule may be extended to rational exponents. That is, if \(n\) is a positive integer, then

\[\dfrac{d}{dx}(x^{1/n})=\dfrac{1}{n} x^{(1/n)−1}.\]

Also, if \(n\) is a positive integer and \(m\) is an arbitrary integer, then

\[\dfrac{d}{dx}(x^{m/n})=\dfrac{m}{n}x^{(m/n)−1}.\]

Proof

The function \(g(x)=x^{1/n}\) is the inverse of the function \(f(x)=x^n\). Since \(g′(x)=\dfrac{1}{f′(g(x))}\), begin by finding \(f′(x)\). Thus, \(f′(x)=nx^{n−1}\) and \(f′(g(x))=n(x^{1/n})^{n−1}=nx^{(n−1)/n}\). Finally, \(g′(x)=\dfrac{1}{nx^{(n−1)/n}}=\dfrac{1}{n}x^{(1−n)/n}=\dfrac{1}{n}x^{(1/n)−1}\). To differentiate \(x^{m/n}\) we must rewrite it as \((x^{1/n})^m\) and apply the chain rule. Thus,

\[\dfrac{d}{dx}(x^{m/n})=\dfrac{d}{dx}\left((x^{1/n})^m\right)=m(x^{1/n})^{m−1}⋅\dfrac{1}{n}x^{(1/n)−1}=\dfrac{m}{n}x^{(m/n)−1}.\] □

Example \(\PageIndex{3}\): Applying the Power Rule to a Rational Power

Find the equation of the line tangent to the graph of \(y=x^{2/3}\) at \(x=8\).

Solution

First find \(\dfrac{dy}{dx}\) and evaluate it at \(x=8\). Since \(\dfrac{dy}{dx}=\dfrac{2}{3}x^{−1/3}\) and \(\left.\dfrac{dy}{dx}\right|_{x=8}=\dfrac{1}{3}\), the slope of the tangent line to the graph at \(x=8\) is \(\dfrac{1}{3}\). Substituting \(x=8\) into the original function, we obtain \(y=4\). Thus, the tangent line passes through the point \((8,4)\). Substituting into the point-slope formula for a line, we obtain the tangent line

\[y=\dfrac{1}{3}x+\dfrac{4}{3}.\]

Exercise \(\PageIndex{3}\)

Find the derivative of \(s(t)=\sqrt{2t+1}\).

Hint: Use the chain rule.

Answer: \(s′(t)=(2t+1)^{−1/2}\)

Derivatives of Inverse Trigonometric Functions

We now turn our attention to finding derivatives of inverse trigonometric functions. These derivatives will prove invaluable in the study of integration later in this text. The derivatives of inverse trigonometric functions are quite surprising in that their derivatives are actually algebraic functions. Previously, derivatives of algebraic functions have proven to be algebraic functions and derivatives of trigonometric functions have been shown to be trigonometric functions. Here, for the first time, we see that the derivative of a function need not be of the same type as the original function.

Example \(\PageIndex{4}\): Derivative of the Inverse Sine Function

Use the inverse function theorem to find the derivative of \(g(x)=\sin^{−1}x\).

Solution

Since for \(x\) in the interval \(\left[−\dfrac{π}{2},\dfrac{π}{2}\right]\), \(f(x)=\sin x\) is the inverse of \(g(x)=\sin^{−1}x\), begin by finding \(f′(x)\). Since \(f′(x)=\cos x\) and \(f′(g(x))=\cos(\sin^{−1}x)=\sqrt{1−x^2}\), we see that

\[g′(x)=\dfrac{d}{dx}(\sin^{−1}x)=\dfrac{1}{f′(g(x))}=\dfrac{1}{\sqrt{1−x^2}}.\]

Analysis

To see that \(\cos(\sin^{−1}x)=\sqrt{1−x^2}\), consider the following argument. Set \(\sin^{−1}x=θ\).
In this case, \(\sin θ=x\) where \(−\dfrac{π}{2}≤θ≤\dfrac{π}{2}\). We begin by considering the case where \(0<θ<\dfrac{π}{2}\). Since \(θ\) is an acute angle, we may construct a right triangle having acute angle \(θ\), a hypotenuse of length \(1\) and the side opposite angle \(θ\) having length \(x\). From the Pythagorean theorem, the side adjacent to angle \(θ\) has length \(\sqrt{1−x^2}\). This triangle is shown in Figure \(\PageIndex{2}\). Using the triangle, we see that \(\cos(\sin^{−1}x)=\cos θ=\sqrt{1−x^2}\).

Figure \(\PageIndex{2}\): Using a right triangle having acute angle \(θ\), a hypotenuse of length \(1\), and the side opposite angle \(θ\) having length \(x\), we can see that \(\cos(\sin^{−1}x)=\cos θ=\sqrt{1−x^2}\).

In the case where \(−\dfrac{π}{2}<θ<0\), we make the observation that \(0<−θ<\dfrac{π}{2}\) and hence \(\cos(\sin^{−1}x)=\cos θ=\cos(−θ)=\sqrt{1−x^2}\). Now if \(θ=\dfrac{π}{2}\) or \(θ=−\dfrac{π}{2}\), then \(x=1\) or \(x=−1\), and since in either case \(\cos θ=0\) and \(\sqrt{1−x^2}=0\), we have \(\cos(\sin^{−1}x)=\cos θ=\sqrt{1−x^2}\). Consequently, in all cases, \(\cos(\sin^{−1}x)=\sqrt{1−x^2}\).

Example \(\PageIndex{5}\): Applying the Chain Rule to the Inverse Sine Function

Apply the chain rule to the formula derived in the example above to find the derivative of \(h(x)=\sin^{−1}(g(x))\), and use this result to find the derivative of \(h(x)=\sin^{−1}(2x^3)\).

Solution

Applying the chain rule to \(h(x)=\sin^{−1}(g(x))\), we have

\[h′(x)=\dfrac{1}{\sqrt{1−(g(x))^2}}g′(x).\]

Now let \(g(x)=2x^3\), so \(g′(x)=6x^2\). Substituting into the previous result, we obtain

\[h′(x)=\dfrac{1}{\sqrt{1−4x^6}}⋅6x^2=\dfrac{6x^2}{\sqrt{1−4x^6}}.\]

Exercise \(\PageIndex{4}\)

Use the inverse function theorem to derive the derivative of \(g(x)=\tan^{−1}x\).

Hint: The inverse of \(g(x)\) is \(f(x)=\tan x\). Use Example \(\PageIndex{4}\) as a guide.

Answer: \(g′(x)=\dfrac{1}{1+x^2}\)

The derivatives of the remaining inverse trigonometric functions may also be found by using the inverse function theorem. These formulas are provided in the following theorem.

Table of Derivatives of Inverse Trigonometric Functions

\[\dfrac{d}{dx}\sin^{−1}x=\dfrac{1}{\sqrt{1−x^2}}\]
\[\dfrac{d}{dx}\cos^{−1}x=\dfrac{−1}{\sqrt{1−x^2}}\]
\[\dfrac{d}{dx}\tan^{−1}x=\dfrac{1}{1+x^2}\]
\[\dfrac{d}{dx}\cot^{−1}x=\dfrac{−1}{1+x^2}\]
\[\dfrac{d}{dx}\sec^{−1}x=\dfrac{1}{|x|\sqrt{x^2−1}}\]
\[\dfrac{d}{dx}\csc^{−1}x=\dfrac{−1}{|x|\sqrt{x^2−1}}\]

Example \(\PageIndex{6}\): Applying Differentiation Formulas to an Inverse Tangent Function

Find the derivative of \(f(x)=\tan^{−1}(x^2)\).

Solution

Let \(g(x)=x^2\), so \(g′(x)=2x\). Substituting into the formula for the derivative of the inverse tangent, we obtain \(f′(x)=\dfrac{1}{1+(x^2)^2}⋅(2x)\). Simplifying, we have \(f′(x)=\dfrac{2x}{1+x^4}\).

Example \(\PageIndex{7}\): Applying Differentiation Formulas to an Inverse Sine Function

Find the derivative of \(h(x)=x^2\sin^{−1}x\).

Solution

By applying the product rule, we have \(h′(x)=2x\sin^{−1}x+\dfrac{x^2}{\sqrt{1−x^2}}\).

Exercise \(\PageIndex{5}\)

Find the derivative of \(h(x)=\cos^{−1}(3x−1)\).

Hint: Use the formula for the derivative of the inverse cosine with \(g(x)=3x−1\).

Answer: \(h′(x)=\dfrac{−3}{\sqrt{6x−9x^2}}\)

Example \(\PageIndex{8}\): Applying the Inverse Tangent Function

The position of a particle at time \(t\) is given by \(s(t)=\tan^{−1}\left(\dfrac{1}{t}\right)\) for \(t≥\dfrac{1}{2}\). Find the velocity of the particle at time \(t=1\).

Solution

Begin by differentiating \(s(t)\) in order to find \(v(t)\). Thus,

\[v(t)=s′(t)=\dfrac{1}{1+\left(\dfrac{1}{t}\right)^2}⋅\dfrac{−1}{t^2}.\]
Simplifying, we have \(v(t)=−\dfrac{1}{t^2+1}\). Thus, \(v(1)=−\dfrac{1}{2}\).

Exercise \(\PageIndex{6}\)

Find the equation of the line tangent to the graph of \(f(x)=\sin^{−1}x\) at \(x=0\).

Hint: \(f′(0)\) is the slope of the tangent line.

Answer: \(y=x\)

Key Concepts

The inverse function theorem allows us to compute derivatives of inverse functions without using the limit definition of the derivative. We can use the inverse function theorem to develop differentiation formulas for the inverse trigonometric functions.

Key Equations

Inverse function theorem: \((f^{−1})′(x)=\dfrac{1}{f′(f^{−1}(x))}\) whenever \(f′(f^{−1}(x))≠0\) and \(f(x)\) is differentiable.

Power rule with rational exponents: \(\dfrac{d}{dx}(x^{m/n})=\dfrac{m}{n}x^{(m/n)−1}\)

Derivative of inverse sine function: \(\dfrac{d}{dx}\sin^{−1}x=\dfrac{1}{\sqrt{1−x^2}}\)

Derivative of inverse cosine function: \(\dfrac{d}{dx}\cos^{−1}x=\dfrac{−1}{\sqrt{1−x^2}}\)

Derivative of inverse tangent function: \(\dfrac{d}{dx}\tan^{−1}x=\dfrac{1}{1+x^2}\)

Derivative of inverse cotangent function: \(\dfrac{d}{dx}\cot^{−1}x=\dfrac{−1}{1+x^2}\)

Derivative of inverse secant function: \(\dfrac{d}{dx}\sec^{−1}x=\dfrac{1}{|x|\sqrt{x^2−1}}\)

Derivative of inverse cosecant function: \(\dfrac{d}{dx}\csc^{−1}x=\dfrac{−1}{|x|\sqrt{x^2−1}}\)

Contributors

Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
This was supposed to be the last blog post on distance estimated 3D fractals, but then I stumbled upon the dual number formulation, and decided it would blend in nicely with the previous post. So this blog post will be about dual numbers, and the next (and probably final) post will be about hybrid systems, heightmap rendering, interior rendering, and links to other resources.

Dual Numbers

Many of the distance estimators covered in the previous posts used a running derivative. This concept can be traced back to the original formula for the distance estimator for the Mandelbrot set, where the derivative is described iteratively in terms of the previous values:

\(f'_n(c) = 2f_{n-1}(c)f'_{n-1}(c)+1\)

In the previous post, we saw how the Mandelbox could be described by a running Jacobian matrix, and how this matrix could be replaced by a single running scalar derivative, since the Jacobians for the conformal transformations all have a particularly simple form (and thanks to Knighty the argument was extended to non-Julia Mandelboxes). Now, some months ago I stumbled upon automatic differentiation and dual numbers, and after having done some tests, I think this is a very nice framework to complete the discussion of distance estimators.

So what are these dual numbers? The name might sound intimidating, but the concept is very simple: we extend the real numbers with another component, much like the complex numbers:

\(x = (x_r, x_d) = x_r + x_d \epsilon\)

where \(\epsilon\) is the dual unit, similar to the imaginary unit i for the complex numbers. The square of the dual unit is defined as \(\epsilon * \epsilon = 0\). Now for any function which has a Taylor series, we have:

\(f(x+dx) = f(x) + f'(x)dx + (f''(x)/2)dx^2 + \dots\)

If we let \(dx = \epsilon\), it follows that:

\(f(x+\epsilon) = f(x) + f'(x)\epsilon\)

because the higher order terms vanish. This means that if we evaluate our function with a dual number \(d = x + \epsilon = (x,1)\), we get a dual number back, (f(x), f'(x)), where the dual component contains the derivative of the function.

Compare this with the finite difference scheme for obtaining a derivative. Take a quadratic function as an example and evaluate its derivative, using a step size 'h':

\(f(x) = x*x\)

This gives us the approximate derivative:

\(f'(x) \approx \frac{f(x+h)-f(x)}{h} = \frac{x^2 + 2xh + h^2 - x^2}{h} = 2x+h\)

The finite difference scheme introduces an error, here equal to h. The error gets smaller as h gets smaller (as it converges towards the true derivative), but numerical differentiation introduces inaccuracies. Compare this with the dual number approach. For dual numbers, we have:

\(x*x = (x_r+x_d\epsilon)*(x_r+x_d\epsilon) = x_r^2 + (2 x_r x_d)\epsilon\)

Thus,

\(f(x_r + \epsilon) = x_r^2 + (2 x_r)\epsilon\)

Since the dual component is the derivative, we have f'(x) = 2x, which is the exact answer.

But the real beauty of dual numbers is that they make it possible to keep track of the derivative during the actual calculation, using forward accumulation. Simply by replacing all numbers in our calculations with dual numbers, we end up with the answer together with its derivative. Wikipedia has a very nice article that explains this in more detail: Automatic Differentiation. The article also lists several arithmetic rules for dual numbers.

For the Mandelbox, we have a defining function R(p), which returns the length of p after it has been through a fixed number of iterations of the Mandelbox formula: scale*spherefold(boxfold(z))+p.
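To make the forward-accumulation idea concrete before diving into the GLSL, here is a toy scalar dual-number class in Python (my illustration, not part of the original post):

class Dual:
    def __init__(self, re, du=0.0):
        self.re, self.du = re, du       # real part, dual part

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.du + o.du)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.re * o.re, self.re * o.du + self.du * o.re)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)   # evaluate at x = 3 with dual part 1
y = x * x            # f(x) = x^2
print(y.re, y.du)    # 9.0 6.0, i.e. f(3) = 9 and f'(3) = 6 exactly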
The DE is then DE = R/DR, where DR is the length of the gradient of R. R is a scalar-valued vector function. To find the gradient we need to find the derivative along the x, y, and z directions. We can do this using dual vectors and evaluating the three directions, e.g. for the x-direction, evaluate \(R(p_r + \epsilon (1,0,0))\). In practice, it is more convenient to keep track of all three dual vectors during the calculation, since we can reuse part of the calculations. So we use a 3×3 matrix to track our derivatives during the calculation. Here is some example code for the Mandelbox:

// simply scale the dual vectors
void sphereFold(inout vec3 z, inout mat3 dz) {
	float r2 = dot(z,z);
	if (r2 < minRadius2) {
		float temp = (fixedRadius2/minRadius2);
		z *= temp;
		dz *= temp;
	} else if (r2 < fixedRadius2) {
		float temp = (fixedRadius2/r2);
		// derivative of temp*z, where temp = fixedRadius2/r2 itself depends on z
		dz[0] = temp*(dz[0] - z*2.0*dot(z,dz[0])/r2);
		dz[1] = temp*(dz[1] - z*2.0*dot(z,dz[1])/r2);
		dz[2] = temp*(dz[2] - z*2.0*dot(z,dz[2])/r2);
		z *= temp;
	}
}

// reverse signs for dual vectors when folding
void boxFold(inout vec3 z, inout mat3 dz) {
	if (abs(z.x)>foldingLimit) { dz[0].x*=-1; dz[1].x*=-1; dz[2].x*=-1; }
	if (abs(z.y)>foldingLimit) { dz[0].y*=-1; dz[1].y*=-1; dz[2].y*=-1; }
	if (abs(z.z)>foldingLimit) { dz[0].z*=-1; dz[1].z*=-1; dz[2].z*=-1; }
	z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
}

float DE(vec3 z) {
	// dz contains our three dual vectors,
	// initialized to the x, y, z directions.
	mat3 dz = mat3(1.0,0.0,0.0, 0.0,1.0,0.0, 0.0,0.0,1.0);
	vec3 c = z;
	mat3 dc = dz;
	for (int n = 0; n < Iterations; n++) {
		boxFold(z,dz);
		sphereFold(z,dz);
		z *= Scale;
		dz = mat3(dz[0]*Scale, dz[1]*Scale, dz[2]*Scale);
		z += c*Offset;
		dz += matrixCompMult(mat3(Offset,Offset,Offset), dc);
		if (length(z)>1000.0) break;
	}
	return dot(z,z)/length(z*dz);
}

The 3×3 matrix dz contains our three dual vectors (they are stored as columns in the matrix: dz[0], dz[1], dz[2]). In order to calculate the dual numbers, we need to know how to calculate the length of z, and how to divide by the length squared (for sphere folds). Using the definition of the product for dual numbers, we have: \(|z|^2 = z \cdot z = z_r^2 + (2 z_r \cdot z_d)\epsilon\) For the length, we can use the power rule, as defined on Wikipedia: \(|z_r + z_d \epsilon| = \sqrt{z_r^2 + (2 z_r \cdot z_d)\epsilon} = |z_r| + \frac{z_r \cdot z_d}{|z_r|}\epsilon\) Using the rule for division, we can derive: \(z/|z|^2 = (z_r+z_d \epsilon)/(z_r^2 + 2 z_r \cdot z_d \epsilon) = z_r/z_r^2 + \epsilon (z_d z_r^2 - 2 z_r (z_r \cdot z_d))/z_r^4\) Given these rules, it is relatively simple to update the dual vectors: for the sphereFold, we either multiply by a real number or use the division rule above. For the boxFold, there is both a multiplication (sign change) and a translation by a real number, which is ignored for the dual numbers. The (real) scaling factor is also trivially applied to both real and dual vectors. Then there is the addition of the original vector, where we must remember to also add the original dual vector.
Finally, using the length rule as derived above, we find the length of the full gradient as: \(DR = \sqrt{(z_r \cdot z_x)^2 + (z_r \cdot z_y)^2 + (z_r \cdot z_z)^2}/|z_r|\) In the code example, the vectors are stored in a matrix, which makes a more compact notation possible: DR = length(z*dz)/length(z), leading to the final DE = R/DR = dot(z,z)/length(z*dz). There are some advantages to using the dual numbers approach: Compared to the four-point Makin/Buddhi finite difference approach, the arbitrary epsilon (step distance) is avoided – which should give better numerical accuracy. It is also slightly faster computationally. It is very general – e.g. it works for non-conformal cases, where running scalar derivatives fail. The images here are from a Mandelbox where a different scaling factor was applied to each direction (making the transformations non-conformal). This is not possible to capture in a running scalar derivative. On the other hand, the method is slower than using running scalar estimators. And it does require code changes. It should be mentioned that libraries exist for languages supporting operator overloading, such as C++. Since we find the gradient directly in this method, we can also use it as a surface normal – this is also an advantage compared to the scalar derivatives, which normally use a finite difference scheme for the normals. Using the code example, the normal is: // (Unnormalized) normal vec3 normal = vec3(dot(z,dz[0]),dot(z,dz[1]),dot(z,dz[2])); It should be noted that in my experiments, I found the finite difference method produced better normals than the above definition. Perhaps because it smooths them? The problem was partly solved by backstepping a little before calculating the normal, but this again introduces an arbitrary step distance. Now, I said the scalar method was faster – and for a fixed number of ray steps it is – but let us take a closer look at the distance estimator function: The above image shows a sliced Mandelbox. The graph in the lower right corner shows a plot of the DE function along a line (two dimensions held fixed): the blue curve is the DE function, and the red line shows the derivative of the DE function. The function is plotted for the dual number derived DE function. We can see that our DE is well-behaved here: for a consistent DE the slope can never be higher than 1, and when we move away from the side of the Mandelbox in a perpendicular direction the derivative of the DE should be plus or minus one. Now compare this to the scalar estimated DE: Here we see that the DE is less optimal – the slope is ~0.5 for this particular line graph. Actually, the slope would be close to one if we omitted the '+1' term for the scalar estimator, but then it overshoots slightly in some places inside the Mandelbox. We can also see that there are holes in our Mandelbox – this is because for this fixed number of ray steps, we do not get close enough to the fractal surface to hit it. So even though the scalar estimator is faster, we need to crank up the number of ray steps to achieve the same quality. Final Remarks The whole idea of introducing dual derivatives of the three unit vectors is very similar to having a running Jacobian matrix estimator – and I believe the methods are essentially identical. After all, we are trying to achieve the same thing: keeping a running record of how the R(p) function changes when we vary the input along the axes.
But I think the dual numbers offer a nice theoretical framework for calculating the DE, and I believe they could be more accurate and faster than finite difference four-point gradient methods. However, more experiments are needed before this can be asserted. Scalar estimators will always be the fastest, but they are probably only optimal for conformal systems – for non-conformal systems, it seems necessary to introduce terms that make them too conservative, as demonstrated by the Mandelbox example. The final part contains all the stuff that didn't fit in the previous posts, including references and links.
Before we go on to solving differential equations using power series, it would behoove you to go back to your calculus notes and review power series. There is one topic that was a small detail in first-year calculus, but will be a main issue for solving differential equations. This is the technique of changing the index. Example \(\PageIndex{1}\) Change the index and combine the power series \[ \sum_{n=1}^\infty n\,a_n\,x^{n+1} + \sum_{n=0}^\infty a_n\,x^n.\nonumber \] Solution There are two issues here: the first is that the powers of \(x\) are different, and the second is that the summations begin at different values. To make the powers of \(x\) the same we perform the substitution \[u = n + 1, \;\;\; n = u - 1. \nonumber\] Notice that when \(n = 1\), \(u = 2\), and when \(n\) is infinity so is \(u\). We can write \[ \sum_{n=1}^{\infty} {n\,a_n\,x^{n+1}} = \sum_{u=2}^{\infty} {(u-1)\,a_{u-1}\,x^u}. \nonumber\] Since \(u\) is a dummy index, we can rename it \(n\) to get \[ \sum_{u=2}^{\infty} (u-1)\,a_{u-1}\,x^u = \sum_{n=2}^{\infty} (n-1)\,a_{n-1}\,x^n. \nonumber\] We now need to find \[ \sum_{n=2}^{\infty} (n-1)\,a_{n-1}\,x^n + \sum_{n=0}^{\infty} a_n \,x^n. \nonumber\] The trouble now is that the starting values are different for the two series. We can pull the first two terms out of the second series to get \[ \sum_{n=0}^{\infty} a_{n}\,x^n = a_0 + a_1\,x + \sum_{n=2}^{\infty} a_n\, x^n. \nonumber \] Putting this together we get \[\begin{align*} \sum_{n=2}^{\infty} (n-1)\, a_{n-1}\,x^n + a_0 + a_1\,x + \sum_{n=2}^{\infty} a_n\, x^n &= a_0 + a_1\,x + \sum_{n=2}^{\infty} \left[ (n-1)\, a_{n-1} + a_n \right] x^n. \end{align*} \] Contributors Larry Green (Lake Tahoe Community College) Integrated by Justin Marshall.
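If you want to double-check an index shift like this, a truncated symbolic comparison works well. Here is a short sketch (my addition, using sympy; the cutoff N is an arbitrary choice):

import sympy as sp

x = sp.symbols('x')
a = sp.IndexedBase('a')
N = 8  # arbitrary truncation order

# the reindexed first series: sum n*a_n*x^(n+1) == sum (n-1)*a_(n-1)*x^n
lhs = sum(n*a[n]*x**(n+1) for n in range(1, N))
rhs = sum((n-1)*a[n-1]*x**n for n in range(2, N+1))
assert sp.expand(lhs - rhs) == 0

# the combined series from the example
total = lhs + sum(a[n]*x**n for n in range(0, N+1))
combined = a[0] + a[1]*x + sum(((n-1)*a[n-1] + a[n])*x**n for n in range(2, N+1))
assert sp.expand(total - combined) == 0
print("both identities check out up to order", N)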
How to solve $n$ for $n^n = 2^c$? What's the numerical method? I don't get this. For $c=1000$, $n$ should be approximately $140$, right?

Alpha sometimes goes off into the complex plane when what you want is only the reals. I agree with you and get about 140.222. If you ask it to solve $n \ln(n) = 1000 \ln(2)$ you get what you want.

Hint: Consider this. Hint 2: First take $\log$ on both sides. And explicitly: the solution to your question is given by $$ n = e^{W(c\log 2)} = \frac{c\log 2}{W(c\log 2)}. $$ For $c=1000$, this gives $n \approx 140.2217$. The function $W$ is standard (ProductLog in Wolfram Mathematica). EDIT: For large $c$, a rough but very simple approximation to the solution $n$ of $n^n = 2^c$ can be obtained as follows (cf. this, also for improvement of the approximation): $$ n \approx (c\log 2)[\log(c\log 2)]^{1/\log(c\log 2) - 1}. $$ For example, for $c=1000$ this gives $n \approx 141.2083$, not far from the exact value of about $140.2217$.

Yes, taking $\log_2$ of both sides gives you: $$n \log_2(n) = c$$ You can use Newton's method to solve this: $$x_0 = c$$ $$x_{k+1} = x_k - \frac{x_k \log(x_k) - c \log(2)}{1 + \log(x_k)}$$ where now "log" is the natural logarithm. This gives the solution $n \approx x_4 = 140.221667$. Starting with a better $x_0$, like $x_0 = c/\log(c)$, gives you even faster convergence. With c=1000 or c=1000000, the value $x_3$ is correct with an error of $10^{-8}$.
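As a sketch of the Newton iteration from the last answer (my Python translation, not the answerer's code; the tolerance and starting guess are choices, the better start $c/\log c$ is taken from the answer):

import math

def solve_n(c, tol=1e-12):
    # Solve n*log(n) = c*log(2) by Newton's method,
    # i.e. find n with n^n = 2^c.
    target = c * math.log(2)
    x = c / math.log(c)
    while True:
        step = (x * math.log(x) - target) / (1 + math.log(x))
        x -= step
        if abs(step) < tol:
            return x

print(solve_n(1000))   # ~140.22166...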
This notebook demonstrates a few capabilities of SageMath in computations regarding Kerr spacetime. More precisely, it focuses on the Killing tensor $K$ found by Walker & Penrose [Commun. Math. Phys. 18, 265 (1970)]. The employed differential geometry tools have been developed within the SageManifolds project (version 1.3, as included in SageMath 8.3). Click here to download the notebook file (ipynb format). To run it, you must start SageMath within the Jupyter notebook, via the command sage -n jupyter. NB: a version of SageMath at least equal to 8.2 is required to run this notebook:

version()
'SageMath version 8.3, Release Date: 2018-08-03'

First we set up the notebook to display mathematical objects using LaTeX rendering:

%display latex

To speed up the computations, we ask for running them in parallel on 8 cores:

Parallelism().set(nproc=8)

We declare the Kerr spacetime (or more precisely the Boyer-Lindquist domain of Kerr spacetime) as a 4-dimensional Lorentzian manifold:

M = Manifold(4, 'M', latex_name=r'\mathcal{M}', structure='Lorentzian')
print(M)

4-dimensional Lorentzian manifold M

Let us declare the Boyer-Lindquist coordinates via the method chart(), the argument of which is a string expressing the coordinates names, their ranges (the default is $(-\infty,+\infty)$) and their LaTeX symbols:

BL.<t,r,th,ph> = M.chart(r't r:(0,+oo) th:(0,pi):\theta ph:(0,2*pi):\phi')
print(BL) ; BL

Chart (M, (t, r, th, ph))

The 2 parameters $m$ and $a$ of the Kerr spacetime are declared as symbolic variables:

var('m, a', domain='real')

We get the (yet undefined) spacetime metric by

g = M.metric()

The metric is set by its components in the coordinate frame associated with Boyer-Lindquist coordinates, which is the current manifold's default frame:

rho2 = r^2 + (a*cos(th))^2
Delta = r^2 - 2*m*r + a^2
g[0,0] = -(1-2*m*r/rho2)
g[0,3] = -2*a*m*r*sin(th)^2/rho2
g[1,1], g[2,2] = rho2/Delta, rho2
g[3,3] = (r^2+a^2+2*m*r*(a*sin(th))^2/rho2)*sin(th)^2
g.display()

A matrix view of the components with respect to the manifold's default vector frame:

g[:]

The list of the non-vanishing components:

g.display_comp()

The Levi-Civita connection $\nabla$ associated with $g$:

nabla = g.connection() ; print(nabla)

Levi-Civita connection nabla_g associated with the Lorentzian metric g on the 4-dimensional Lorentzian manifold M

Let us verify that the covariant derivative of $g$ with respect to $\nabla$ vanishes identically:

nabla(g).display()

The default vector frame on the spacetime manifold is the coordinate basis associated with Boyer-Lindquist coordinates:

M.default_frame() is BL.frame()
BL.frame()

Let us consider the first vector field of this frame:

xi = BL.frame()[0] ; xi
print(xi)

Vector field d/dt on the 4-dimensional Lorentzian manifold M

The 1-form associated to it by metric duality is

xi_form = xi.down(g) ; xi_form.display()

Its covariant derivative is

nab_xi = nabla(xi_form) ; print(nab_xi) ; nab_xi.display()

Tensor field of type (0,2) on the 4-dimensional Lorentzian manifold M

Let us check that the Killing equation is satisfied:

nab_xi.symmetrize() == 0

Similarly, let us check that $\frac{\partial}{\partial\phi}$ is a Killing vector:

chi = BL.frame()[3] ; chi
nabla(chi.down(g)).symmetrize() == 0

k = M.vector_field(name='k')
k[:] = [(r^2+a^2)/(2*rho2), -Delta/(2*rho2), 0, a/(2*rho2)]
k.display()

el = M.vector_field(name='el', latex_name=r'\ell')
el[:] = [(r^2+a^2)/Delta, 1, 0, a/Delta]
el.display()

Let us check that $k$ and $\ell$ are null vectors:

g(k,k).expr()
g(el,el).expr()

Their scalar product is $-1$:
g(k,el).expr()

Note that the scalar product (with respect to the metric $g$) can also be computed by means of the method dot:

k.dot(el).expr()

Let us evaluate the "acceleration" of $k$, i.e. $\nabla_k k$:

acc_k = nabla(k).contract(k)
acc_k.display()

We check that $k$ is a pregeodesic vector, i.e. that $\nabla_k k = \kappa_k k$ for some scalar field $\kappa_k$:

for i in [0,1,3]: show(acc_k[i] / k[i])

kappa_k = acc_k[[0]] / k[[0]]
kappa_k.display()

acc_k == kappa_k * k

Similarly let us evaluate the "acceleration" of $\ell$:

acc_l = nabla(el).contract(el)
acc_l.display()

Hence $\ell$ is a geodesic vector.

uk = k.down(g)
ul = el.down(g)

The Walker-Penrose Killing tensor $K$ is then formed as $$ K = \rho^2 (\underline{\ell}\otimes \underline{k} + \underline{k}\otimes \underline{\ell}) + r^2 g $$

K = rho2*(ul*uk + uk*ul) + r^2*g
K.set_name('K')
print(K)

Tensor field K of type (0,2) on the 4-dimensional Lorentzian manifold M

K.display_comp()

DK = nabla(K)
print(DK)

Tensor field nabla_g(K) of type (0,3) on the 4-dimensional Lorentzian manifold M

DK.display_comp()

Let us check that $K$ is a Killing tensor:

DK.symmetrize().display()

Equivalently, we may write, using index notation:

DK['_(abc)'].display()
Neural Networks
From Ufldl

Line 11: Line 11:
  This "neuron" is a computational unit that takes as input <math>x_1, x_2, x_3</math> (and a +1 intercept term), and
− outputs <math>h_{W,b}(x) = f(W^Tx) = f(\sum_{i=1}^3 W_{i}x_i +b)</math>, where <math>f : \Re \mapsto \Re</math> is
+ outputs <math>h_{W,b}(x) = f(W^Tx) = f(\sum_{i=1}^3 W_{i}x_i +b)</math>, where <math>f : \Re \mapsto \Re</math> is
  called the '''activation function'''. In these notes, we will choose
  <math>f(\cdot)</math> to be the sigmoid function:

Line 31: Line 31:
− [[Image:Sigmoid_Function.png|400px|
+ [[Image:Sigmoid_Function.png|400px||Sigmoid activation function.]]
− [[Image:Tanh_Function.png|400px|
+ [[Image:Tanh_Function.png|400px||Tanh activation function.]]
  The <math>\tanh(z)</math> function is a rescaled version of the sigmoid, and its output range is
  <math>[-1,1]</math> instead of <math>[0,1]</math>.
− Note that unlike
+ Note that unlike (parts of) CS229, we are not using the convention
  here of <math>x_0=1</math>. Instead, the intercept term is handled separately by the parameter <math>b</math>.

Line 52: Line 53:
  A neural network is put together by hooking together many of our simple
− neurons,
+ neurons, so that the output of a neuron can be the input of another. For
  example, here is a small neural network:

Line 58: Line 59:
  In this figure, we have used circles to also denote the inputs to the network. The circles
− labeled
+ labeled "+1" are called '''bias units''', and correspond to the intercept term.
  The leftmost layer of the network is called the '''input layer''', and the
  rightmost layer the '''output layer''' (which, in this example, has only one

Line 94: Line 95:
  In the sequel, we also let <math>z^{(l)}_i</math> denote the total weighted sum of inputs to unit <math>i</math> in layer <math>l</math>,
− including the bias term (e.g., <math>z_i^{(2)} = \sum_{j=1}^n W^{(1)}_{ij} x_j + b^{(1)}_i</math>), so that
+ including the bias term (e.g., <math>z_i^{(2)} = \sum_{j=1}^n W^{(1)}_{ij} x_j + b^{(1)}_i</math>), so that
  <math>a^{(l)}_i = f(z^{(l)}_i)</math>.
Line 101: Line 102:
  to apply to vectors in an element-wise fashion (i.e.,
  <math>f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]</math>), then we can write
− compactly as:
+ more compactly as:
  :<math>\begin{align}

Line 109: Line 110:
  h_{W,b}(x) &= a^{(3)} = f(z^{(3)})
  \end{align}</math>
− More generally, recalling that we also use <math>a^{(1)} = x</math> to also denote the values from the input layer,
+ More generally, recalling that we also use <math>a^{(1)} = x</math> to denote the values from the input layer,
  then given layer <math>l</math>'s activations <math>a^{(l)}</math>, we can compute layer <math>l+1</math>'s activations <math>a^{(l+1)}</math> as:
  :<math>\begin{align}

Line 117: Line 118:
  By organizing our parameters in matrices and using matrix-vector operations, we can take
  advantage of fast linear algebra routines to quickly perform calculations in our network.
  We have so far focused on one example neural network, but one can also build neural
− networks with other
+ networks with other '''architectures''' (meaning patterns of connectivity between neurons), including ones with multiple hidden layers.
− The most common choice is a <math>n_l</math>-layered network
+ The most common choice is a <math>n_l</math>-layered network
  where layer <math>1</math> is the input layer, layer <math>n_l</math> is the output layer, and each
  layer <math>l</math> is densely connected to layer <math>l+1</math>. In this setting, to compute the
  output of the network, we can successively compute all the activations in layer
  <math>L_2</math>, then layer <math>L_3</math>, and so on, up to layer <math>L_{n_l}</math>, using
  the equations above. This is one
− example of a
+ example of a '''feedforward''' neural network, since the connectivity graph
  does not have any directed loops or cycles.
  Neural networks can also have multiple output units. For example, here is a network

Line 139: Line 142:
  patient, and the different outputs <math>y_i</math>'s might indicate presence or absence
  of different diseases.)
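The forward-propagation equations edited in this diff are easy to mirror in code. Here is a small illustrative sketch (my addition, in Python/numpy; the layer sizes and random weights are arbitrary, not from the notes):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# tiny network: 3 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = np.array([1.0, 2.0, 3.0])
a1 = x                    # a^(1) = x
z2 = W1 @ a1 + b1         # z^(2) = W^(1) a^(1) + b^(1)
a2 = sigmoid(z2)          # a^(2) = f(z^(2))
z3 = W2 @ a2 + b2         # z^(3) = W^(2) a^(2) + b^(2)
h = sigmoid(z3)           # h_{W,b}(x) = a^(3) = f(z^(3))
print(h)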
Now showing items 1-10 of 33 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Yesterday, there was an interesting post by John Baez at the n-category cafe: The Riemann Hypothesis Says 5040 is the Last. The 5040 in the title refers to the largest known counterexample to a bound for the sum-of-divisors function \[ \sigma(n) = \sum_{d | n} d = n \sum_{d | n} \frac{1}{d} \] In 1983, the French mathematician Guy Robin proved that the Riemann hypothesis is equivalent to the statement that \[ \frac{\sigma(n)}{n \log(\log(n))} < e^{\gamma} = 1.78107... \] for all $n > 5040$. The other known counterexamples to this bound are the numbers 3,4,5,6,8,9,10,12,16,18,20,24,30,36,48,60,72,84,120,180,240,360,720,840,2520. In Baez' post there is a nice graph of this function made by Nicolas Tessore, with 5040 indicated with a grey line towards the right and the other counterexamples jumping over the bound 1.78107… Robin's theorem has a remarkable history, starting in 1915 with good old Ramanujan writing a part of his thesis on "highly composite numbers" (numbers having more divisors than any smaller number). His PhD. adviser Hardy liked the results but called them "in the backwaters of mathematics", and most of it was not published at the time of Ramanujan's degree ceremony in 1916, due to paper shortage in WW1. When Ramanujan's paper "Highly Composite Numbers" was first published in 1988 in 'The lost notebook and other unpublished papers', it became clear that Ramanujan already had part of Robin's theorem. Ramanujan states that if the Riemann hypothesis is true, then for $n_0$ large enough we must have for all $n > n_0$ that \[ \frac{\sigma(n)}{n \log(\log(n))} < e^{\gamma} = 1.78107... \] When Jean-Louis Nicolas, Robin's PhD. adviser, read Ramanujan's lost notes he noticed that there was a sign error in Ramanujan's formula which prevented him from seeing Robin's theorem. Nicolas: "Soon after discovering the hidden part, I read it and saw the difference between Ramanujan's result and Robin's one. Of course, I would have bet that the error was in Robin's paper, but after recalculating it several times and asking Robin to check, it turned out that there was an error of sign in what Ramanujan had written." If you are interested in the full story, read the paper by Jean-Louis Nicolas and Jonathan Sondow: Ramanujan, Robin, Highly Composite Numbers, and the Riemann Hypothesis. What's the latest on Robin's inequality? An arXiv search for Robin's inequality shows a flurry of activity. For starters, it has been verified for all numbers smaller than $10^{10^{13}}$… It has been verified, unconditionally, for certain classes of numbers: all odd integers $> 9$, and all numbers not divisible by a 25-th power of a prime. Rings a bell? Here's another hint: according to Xiaolong Wu in A better method than t-free for Robin's hypothesis, one can replace the condition of 'not divisible by an N-th power of a prime' by 'not divisible by an N-th power of 2'. Further, he claims to have an (as yet unpublished) argument that Robin's inequality holds for all numbers not divisible by $2^{42}$. So, where should we look for counterexamples to the Riemann hypothesis? What about the orders of huge simple groups? The order of the Monster group is too small to be a counterexample (yet, it is divisible by $2^{46}$).
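To see the counterexample list concretely, here is a quick brute-force check in Python (my addition; the naive divisor sum is fine at this scale, and the search cutoff 6000 is arbitrary):

import math

def sigma(n):
    """Sum of divisors of n (naive, fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

bound = math.exp(0.57721566490153286)   # e^gamma = 1.78107...

violations = [n for n in range(3, 6000)
              if sigma(n) / (n * math.log(math.log(n))) >= bound]
print(violations)   # ends ..., 2520, 5040 -- and 5040 is the last one found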
Here I will go over the last post at a more leisurely pace, focussing on a couple of far more trivial examples. Here's the goal: we want to assign a quiver-superpotential to any subgroup of finite index of the modular group. So fix such a subgroup $\Gamma'$ of the modular group $\Gamma=PSL_2(\mathbb{Z})$ and consider the associated permutation representation of $\Gamma$ on the left-cosets $\Gamma/\Gamma'$. As $\Gamma \simeq C_2 \ast C_3$ this representation is determined by the action of the order 2 and order 3 generators of the modular group. There are a number of combinatorial gadgets to control the subgroup $\Gamma'$ and the associated permutation representation: (generalized) Farey symbols and dessins d'enfants. Recall that the modular group acts on the upper half-plane (the 'hyperbolic plane') by Moebius transformations, so to any subgroup $\Gamma'$ we can associate a fundamental domain for its restricted action. The dessins and the Farey symbols give us a particular choice of these fundamental domains. Let us consider the two most trivial subgroups of all: the modular group itself (so $\Gamma/\Gamma$ is just one element and therefore the associated permutation representation is just the trivial representation) and the unique index two subgroup $\Gamma_2$ (so there are two cosets $\Gamma/\Gamma_2$ and the order 2 generator interchanges these two while the order 3 generator acts trivially on them). The fundamental domains of $\Gamma$ (left) and $\Gamma_2$ (right) are depicted below. In both cases the fundamental domain is bounded by the thick black (hyperbolic) edges. The left domain consists of two hyperbolic triangles (the upper domain has $\infty$ as the third vertex) and the right domain has 4 triangles. In general, if the subgroup $\Gamma'$ has index n, then its fundamental domain will consist of $2n$ hyperbolic triangles. Note that these triangles are part of the Dedekind tessellation, so they really depict the action of $PGL_2(\mathbb{Z})$, and any $\Gamma$-hyperbolic triangle consists of one black and one white triangle in Dedekind's coloring. We will indicate the color of a triangle by a black circle if the corresponding triangle is black. Of course, the bounding edges of the fundamental domain need to be identified and the Farey symbol is a notation device to clarify this. The Farey symbols of the above domains are [tex]\xymatrix{\infty \ar@{-}[r]_{\circ} & 0 \ar@{-}[r]_{\bullet} & \infty}[/tex] and [tex]\xymatrix{\infty \ar@{-}[r]_{\bullet} & 0 \ar@{-}[r]_{\bullet} & \infty}[/tex] respectively. In both cases this indicates that the two bounding edges on the left are to be identified, as are the two bounding edges on the right (so, in particular, after identification $\infty$ coincides with $0$). Hence, after identification, the $\Gamma$ domain consists of two triangles on the vertices ${ 0,i,\rho }$ (where $\rho=e^{2 \pi i/6}$) (the blue dots) sharing all three edges; the $\Gamma_2$ domain consists of 4 triangles on the 4 vertices ${ 0,i,\rho,\rho^2 }$ (the blue dots). In general we have three types of vertices: cusps (such as 0 or $\infty$), even vertices (such as $i$, where there are 4 hyperbolic edges in the Dedekind tessellation) and odd vertices (such as $\rho$ and $\rho^2$, where there are 6 hyperbolic edges in the tessellation). Another combinatorial gadget assigned to the fundamental domain is the cuboid tree diagram or dessin.
It consists of all odd and even vertices on the boundary of the domain, together with all odd and even vertices in the interior. These vertices are then connected by the hyperbolic edges connecting them. If we color the even vertices red and the odd ones blue, we have the indicated dessins for our two examples (the green pictures). A half-edge is an edge connecting a red and a blue vertex in the dessin, and we number all half-edges. So, the $\Gamma$-dessin has 1 half-edge whereas the $\Gamma_2$-dessin has two (in general, the number of these half-edges is equal to the index of the subgroup). Observe also that every triangle has exactly one half-edge as one of its three edges. The dessin gives all the information needed to calculate the permutation representation on the coset-set $\Gamma/\Gamma'$: the action of the order 2 generator of $\Gamma$ is given by taking for each internal red vertex the two-cycle $~(a,b)$, where a and b are the numbers of the two half-edges connected to the red vertex, and the action of the order 3 generator is given by taking for every internal blue vertex the three-cycle $~(c,d,e)$, where c, d and e are the numbers of the three half-edges connected to the blue vertex in counter-clockwise ordering. Our two examples above are a bit too simplistic to view this in action. There are no internal blue vertices, so the action of the order 3 generator is trivial in both cases. For $\Gamma$ there is also no internal red vertex, whence this is indeed the trivial representation, whereas for $\Gamma_2$ there is one internal red vertex, so the action of the order 2 generator is given by $~(1,2)$, which is indeed the permutation representation on $\Gamma/\Gamma_2$. In general, if the index of the subgroup $\Gamma'$ is n, then we call the subgroup of the symmetric group on n letters $S_n$ generated by the action-elements of the order 2 and order 3 generators the monodromy group of the permutation representation (or of the subgroup). In the trivial cases here, the monodromy groups are the trivial group (for $\Gamma$) and $C_2$ (for $\Gamma_2$). As a safety check let us work out all these concepts in the next simplest examples, those of some subgroups of index 3. Consider the Farey symbols [tex]\xymatrix{\infty \ar@{-}[r]_{\circ} & 0 \ar@{-}[r]_{\circ} & 1 \ar@{-}[r]_{\circ} & \infty}[/tex] and [tex]\xymatrix{\infty \ar@{-}[r]_{\circ} & 0 \ar@{-}[r]_{1} & 1 \ar@{-}[r]_{1} & \infty}[/tex] In these cases the fundamental domain consists of 6 triangles with the indicated vertices (the blue dots). The distinction between the two is that in the first case, one identifies the two edges of the left, resp. bottom, resp. right boundary (so, in particular, 0, 1 and $\infty$ are identified), whereas in the second one identifies the two edges of the left boundary and identifies the edges of the bottom with those of the right boundary (here, 0 is identified with $\infty$, but also $1+i$ is identified with $\frac{1}{2}+\frac{1}{2}i$). In both cases the dessin seems to be the same (and given by the picture on the right). However, in the first case all three red vertices are distinct, hence there are no internal red vertices, whereas in the second case we should identify the bottom and right-hand red vertices, which then become an internal red vertex of the dessin!
Hence, if we order the three green half-edges 1, 2, 3 starting with the bottom one and counting counter-clockwise, we see that in both cases the action of the order 3 generator of $\Gamma$ is given by the 3-cycle $~(1,2,3)$. The action of the order 2 generator is trivial in the first case, while given by the 2-cycle $~(1,2)$ in the second case. Therefore, the monodromy group is the cyclic group $C_3$ in the first case and the symmetric group $S_3$ in the second case. Next time we will associate a quiver to these vertices and triangles as well as a cubic superpotential, which will then allow us to define a noncommutative algebra associated to any subgroup of the modular group. The monodromy group of the situation will then reappear as a group of algebra automorphisms of this noncommutative algebra!
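These two monodromy groups are easy to verify computationally. A small sketch (my addition, using sympy's permutation groups; note that sympy permutations act on {0,1,2} rather than on the half-edge labels {1,2,3}):

from sympy.combinatorics import Permutation, PermutationGroup

# order-3 generator acts as the 3-cycle (1,2,3); the order-2 generator
# is trivial in the first case and the 2-cycle (1,2) in the second
three_cycle = Permutation([1, 2, 0])                 # (0 1 2), i.e. (1 2 3)
first = PermutationGroup([three_cycle])
second = PermutationGroup([three_cycle, Permutation([1, 0, 2])])  # plus (0 1)

print(first.order())    # 3 -> the cyclic group C3
print(second.order())   # 6 -> the symmetric group S3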
Very roughly how many proteins (chains) are synthesized in the human body per hour under normal conditions? (And how many ribosomes does the human body have?)

Regarding protein synthesis rate, here's an attempt at an estimate from bioenergetics: ATP turnover in the human body is considered to be about 100 mol/day. Protein synthesis is estimated to require about 1/4 of ATP consumption in a mammalian cell, and one amino acid elongation requires about 5 ATP, so about 5 mol amino acids are elongated per day, which translates to $1.2 \cdot 10^{23}$ amino acids per hour. An average human protein is about 400 amino acids, which gives about $3 \cdot 10^{20}$ proteins per hour.${}^1$ I think this should be correct within an order of magnitude at least. (Unless I screwed up at some step in the calculation, it's getting late over here :) The most uncertain factor is probably the fractional ATP demand for protein synthesis. ${}^1$ This is correct under the following assumptions. Consider that the proteome has some (any) length distribution $p(n)$, where $n = 1,2,3 \dots$ is the protein length in amino acids, and $\sum_n p(n) = 1$. Let $r(n)$ be the total rate of synthesis of all proteins of length $n$. If we assume that this rate is proportional to protein abundance, $r(n) = t\ p(n)$, where $t = \sum_n r(n)$ is the total protein synthesis rate, then the total rate of amino acids per unit time is $r_{AA} = \sum_n n\ r(n) = t \sum_n n\ p(n) = t\ \mu$ where $\mu$ is the average protein length. Hence the sought total protein synthesis rate is $t = r_{AA} / \mu$.

For your first question I think @Michael_A has explained why this question can't be answered. And for the second one, I quote from bscb.org: Ribosomes are found 'free' in the cytoplasm or bound to the endoplasmic reticulum (ER) to form rough ER. In a mammalian cell there can be as many as 10 million ribosomes (a single cell of E. coli contains about 20,000 ribosomes, and this accounts for about 25% of the total cell mass). Several ribosomes can be attached to the same mRNA strand; this structure is called a polysome. Ribosomes have only a temporary existence. When they have synthesised a polypeptide the two sub-units separate and are either re-used or broken up.
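The arithmetic in this estimate is easy to reproduce. A sketch in Python (my addition; all parameter values are the answer's rough assumptions, not measured facts):

AVOGADRO = 6.022e23

atp_mol_per_day = 100        # ATP turnover, mol/day (assumption from the answer)
frac_for_protein = 0.25      # fraction of ATP spent on protein synthesis
atp_per_aa = 5               # ATP per amino acid elongation
mean_protein_len = 400       # average protein length in amino acids

aa_per_hour = atp_mol_per_day * frac_for_protein / atp_per_aa * AVOGADRO / 24
proteins_per_hour = aa_per_hour / mean_protein_len
print(f"{aa_per_hour:.1e} amino acids/h, {proteins_per_hour:.1e} proteins/h")
# ~1.3e+23 amino acids/h, ~3.1e+20 proteins/h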
I am trying to find a solution on Mathematica to $ \partial_t u-\partial_{xx}u= \cos(2\pi x), \; 0<x<1,\; t>0 $ $ \partial_x u(0,t)=\partial_x u(1,t)=0, \; t>0 $ $ u(x,0) = \sin^2(3\pi x), \; 0 \leq x \leq 1 $ to be able to check my answers when working them out by hand. I have tried several methods, two of which I will list here:

Method 1

sol = NDSolve[{D[u[t, x], {t, 1}] - D[u[t, x], {x, 2}] == Cos[2 \[Pi] x],
    Derivative[0, 1][u][t, 0] == 0, u[0, x] == Sin[3 \[Pi] x]^2},
   u, {x, 0, 1}, {t, 0, 10}];

This gives me an output of an interpolating function. Is there any way to get an explicit answer?

Method 2

However, I do not think I am using the right functions to solve this. DSolve, NDSolve? Any suggestions would be very welcome. I am trying to read up on their website how to solve PDEs.

eqn = D[u[x, t], t] - D[D[u[x, t], x], x] == Cos[2 \[Pi] x];
BC1 = D[u[x, t], t] /. x -> 0 == 0;
BC2 = D[u[x, t], t] /. x -> 1 == 0;
IC = u[x, 0] == Sin[3 \[Pi] x]^2;
DSolve[{eqn, BC1, BC2, IC}, u[x, t], {x, t}]

Thank you
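Not part of the question, but one language-agnostic way to cross-check hand calculations for this PDE is a quick explicit finite-difference scheme. Here is a sketch in Python (my addition; the grid size, final time, ghost-point treatment of the Neumann conditions, and the comparison solution are my own choices/derivation):

import numpy as np

# explicit scheme for u_t - u_xx = cos(2*pi*x), u_x(0,t) = u_x(1,t) = 0,
# u(x,0) = sin(3*pi*x)^2
J = 100
x = np.linspace(0.0, 1.0, J + 1)
dx = x[1] - x[0]
dt = 0.4 * dx**2               # stability requires dt <= dx^2/2
u = np.sin(3 * np.pi * x)**2
src = np.cos(2 * np.pi * x)

t = 0.0
while t < 0.1:
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    lap[0]  = 2*(u[1]  - u[0])  / dx**2    # ghost point enforcing u_x(0)=0
    lap[-1] = 2*(u[-2] - u[-1]) / dx**2    # ghost point enforcing u_x(1)=0
    u = u + dt * (lap + src)
    t += dt

# separation-of-variables solution for comparison (my derivation, using
# sin^2(3*pi*x) = 1/2 - cos(6*pi*x)/2):
exact = (0.5 + (1 - np.exp(-4*np.pi**2*t))/(4*np.pi**2)*np.cos(2*np.pi*x)
         - 0.5*np.exp(-36*np.pi**2*t)*np.cos(6*np.pi*x))
print(np.max(np.abs(u - exact)))   # should be small, O(dx^2)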
The Erdos-Rado sunflower lemma
The problem

A sunflower (a.k.a. Delta-system) of size [math]r[/math] is a family of sets [math]A_1, A_2, \dots, A_r[/math] such that every element that belongs to more than one of the sets belongs to all of them. A basic and simple result of Erdos and Rado asserts that

Erdos-Rado Delta-system theorem: There is a function [math]f(k,r)[/math] so that every family [math]\cal F[/math] of [math]k[/math]-sets with more than [math]f(k,r)[/math] members contains a sunflower of size [math]r[/math].

(We denote by [math]f(k,r)[/math] the smallest integer that suffices for the assertion of the theorem to be true.) The simple proof giving [math]f(k,r)\le k! (r-1)^k[/math] can be found here. The best known general upper bound on [math]f(k,r)[/math] (in the regime where [math]r[/math] is bounded and [math]k[/math] is large) is [math]\displaystyle f(k,r) \leq D(r,\alpha) k! \left( \frac{(\log\log\log k)^2}{\alpha \log\log k} \right)^k[/math] for any [math]\alpha \lt 1[/math], and some [math]D(r,\alpha)[/math] depending on [math]r,\alpha[/math], proven by Kostochka in 1996. The objective of this project is to improve this bound, ideally to obtain the Erdos-Rado conjecture [math]\displaystyle f(k,r) \leq C^k [/math] for some [math]C=C(r)[/math] depending on [math]r[/math] only. This is known for [math]r=1,2[/math] (indeed we have [math]f(k,r)=1[/math] in those cases) but remains open for larger r.

Variants and notation

Given a family F of sets and a set S, the star of S is the subfamily of those sets in F containing S, and the link of S is obtained from the star of S by deleting the elements of S from every set in the star. (We use the terms link and star because we do want to consider eventually hypergraphs as geometric/topological objects.) We can restate the delta system problem as follows: f(k,r) is the maximum size of a family of k-sets such that the link of every set A does not contain r pairwise disjoint sets. Let f(k,r;m,n) denote the largest cardinality of a family of k-sets from {1,2,…,n} such that the link of every set A of size at most m-1 does not contain r pairwise disjoint sets. Thus f(k,r) = f(k,r;k,n) for n large enough.

Conjecture 1: [math]f(k,r;m,n) \leq C_r^k n^{k-m}[/math] for some [math]C_r[/math] depending only on r.
This conjecture implies the Erdos-Rado conjecture (set m=k). The Erdos-Ko-Rado theorem asserts that [math]f(k,2;1,n) = \binom{n-1}{k-1}[/math] (1) when [math]n \geq 2k[/math], which is consistent with Conjecture 1. More generally, Erdos, Ko, and Rado showed [math]f(k,2;m,n) = \binom{n-m}{k-m}[/math] when [math]n[/math] is sufficiently large depending on k,m. The case of smaller n was treated by several authors culminating in the work of Ahlswede and Khachatrian. Erdos conjectured that [math]f(k,r;1,n) = \max( \binom{rk-1}{k}, \binom{n}{k} - \binom{n-r+1}{k} )[/math] for [math]n \geq rk[/math], generalising (1), and again consistent with Conjecture 1. This was established for k=2 by Erdos and Gallai, and for r=3 by Frankl (building on work by Luczak-Mieczkowska).

A family of k-sets is balanced (or k-colored) if it is possible to color the elements with k colors so that every set in the family is colorful.

Reduction (folklore): It is enough to prove the Erdos-Rado Delta-system conjecture for the balanced case. Proof: Divide the elements into k color classes at random and take only the colorful sets. The expected number of surviving colorful sets is (k!/k^k)|F|.

Hyperoptimistic conjecture: The maximum size of a balanced collection of k-sets without a sunflower of size r is (r-1)^k. Disproven for [math]k=3,r=3[/math]: set [math]|V_1|=|V_2|=|V_3|=3[/math] and use ijk to denote the 3-set consisting of the i^th element of V_1, j^th element of V_2, and k^th element of V_3. Then 000, 001, 010, 011, 100, 101, 112, 122, 212 is a balanced family of 9 3-sets without a 3-sunflower (a brute-force check of this family appears after the bibliography below).

Small values

Below is a collection of known constructions for small values, taken from Abbott-Exoo. Boldface stands for a matching upper bound (and best known upper bounds are planned to be added to other entries). Also note that for [math]k[/math] fixed we have [math]f(k,r)=r^k+o(r^k)[/math] from Kostochka-Rödl-Talysheva.

r\k |  2  |   3  |   4  |   5   |   6    | ...k
 3  |  6  |  20  |  54- | 160-  |  600-  | ~3.16^k
 4  | 10  |  38- | 114- | 380-  | 1444-  | ~3.36^k
 5  | 20  |  88- | 400- | 1760- | 8000-  | ~4.24^k
 6  | 27  | 146- | 730- | 3942- | 21316- | ~5.26^k

Threads

Polymath10: The Erdos Rado Delta System Conjecture, Gil Kalai, Nov 2, 2015. Inactive
Polymath10, Post 2: Homological Approach, Gil Kalai, Nov 10, 2015. Inactive
Polymath 10 Post 3: How are we doing?, Gil Kalai, Dec 8, 2015. Inactive
Polymath10-post 4: Back to the drawing board?, Gil Kalai, Jan 31, 2016. Active

Erdos-Ko-Rado theorem (Wikipedia article)
Sunflower (mathematics) (Wikipedia article)
What is the best lower bound for 3-sunflowers? (Mathoverflow)

Bibliography

Edits to improve the bibliography (by adding more links, Mathscinet numbers, bibliographic info, etc.) are welcome!

On set systems not containing delta systems, H. L. Abbott and G. Exoo, Graphs and Combinatorics 8 (1992), 1–9.
On finite Δ-systems, H. L. Abbott and D. Hanson, Discrete Math. 8 (1974), 1-12.
On finite Δ-systems II, H. L. Abbott and D. Hanson, Discrete Math. 17 (1977), 121-126.
Intersection theorems for systems of sets, H. L. Abbott, D. Hanson, and N. Sauer, J. Comb. Th. Ser. A 12 (1972), 381–389.
Hodge theory for combinatorial geometries, Karim Adiprasito, June Huh, and Erick Katz
The Complete Nontrivial-Intersection Theorem for Systems of Finite Sets, R. Ahlswede, L. Khachatrian, Journal of Combinatorial Theory, Series A 76, 121-138 (1996).
On set systems without weak 3-Δ-subsystems, M. Axenovich, D. Fon-Der-Flaass, A. Kostochka, Discrete Math. 138 (1995), 57-62.
Intersection theorems for systems of finite sets, P. Erdős, C. Ko, R. Rado, The Quarterly Journal of Mathematics.
Oxford. Second Series 12 (1961), 313–320. Intersection theorems for systems of sets, P. Erdős, R. Rado, Journal of the London Mathematical Society, Second Series 35 (1960), 85–90. On the Maximum Number of Edges in a Hypergraph with Given Matching Number, P. Frankl An intersection theorem for systems of sets, A. V. Kostochka, Random Structures and Algorithms, 9 (1996), 213-221. Extremal problems on Δ-systems, A. V. Kostochka On Systems of Small Sets with No Large Δ-Subsystems, A. V. Kostochka, V. Rödl, and L. A. Talysheva, Comb. Probab. Comput. 8 (1999), 265-268. On Erdos' extremal problem on matchings in hypergraphs, T. Luczak, K. Mieczkowska Intersection theorems for systems of sets, J. H. Spencer, Canad. Math. Bull. 20 (1977), 249-254.
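The balanced counterexample to the hyperoptimistic conjecture above is small enough to verify by brute force. Here is a sketch (my addition, in Python; the encoding (part, index) just keeps the three color classes disjoint):

from itertools import combinations

def is_sunflower(sets):
    """Every element in more than one set must lie in all of them,
    i.e. the petals are pairwise disjoint outside the common core."""
    core = set.intersection(*sets)
    petals = [s - core for s in sets]
    return all(a.isdisjoint(b) for a, b in combinations(petals, 2))

# the 9 balanced 3-sets from the text; digit i in position p means
# "i-th element of V_p", encoded as the pair (p, i)
words = ["000", "001", "010", "011", "100", "101", "112", "122", "212"]
family = [{(p, int(ch)) for p, ch in enumerate(w)} for w in words]

has3 = any(is_sunflower(t) for t in combinations(family, 3))
print(has3)   # False -> no sunflower of size 3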
Let us define $f(x)=(\tan x)^{\sin 2x}$ for $x \in (0, \frac{\pi}{2})$. Please help me prove that $f$ attains its lower bound at only one point $x_1$, and attains its upper bound at only one point $x_2$ of the domain. Calculate $x_1 + x_2$. What I have done: I've checked the derivative, but it looks horrible. The key to finding the extrema is to calculate the $x$ that satisfy $\cos 2x \cdot \ln (\tan x) = -1$ I don't have any idea how to do this. All help and hints are appreciated. Thanks in advance!
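Not a proof, but for a numerical sanity check one can locate the two solutions of $\cos 2x \cdot \ln(\tan x) = -1$ directly (a sketch, assuming sympy; the starting points 0.3 and 1.2 come from a quick sign scan of the expression):

import sympy as sp

x = sp.symbols('x')
eq = sp.cos(2*x)*sp.log(sp.tan(x)) + 1   # = 0 at the critical points

x1 = sp.nsolve(eq, x, 0.3)   # root near 0.29
x2 = sp.nsolve(eq, x, 1.2)   # root near 1.28
print(x1, x2, x1 + x2)       # the sum is ~1.5708 = pi/2

Numerically the two roots sum to $\pi/2$, which is exactly what the substitution $x \mapsto \frac{\pi}{2}-x$ suggests: it sends $\tan x \mapsto \cot x$ and $\cos 2x \mapsto -\cos 2x$, leaving the equation invariant.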
Introduction Occasion definition in a data set Cross over study Occasions with washout Occasions without washout Multiple levels of occasions

Objectives: learn how to take into account inter-occasion variability (IOV).

Projects: iov1_project, iov1_Evid_project, iov2_project, iov3_project, iov4_project

A simple model consists of splitting the study into K time periods or occasions and assuming that individual parameters can vary from occasion to occasion but remain constant within occasions. Then, we can try to explain part of the intra-individual variability of the individual parameters by piecewise-constant covariates, i.e., occasion-dependent or occasion-varying (varying from occasion to occasion and constant within an occasion) ones. The remaining part must then be described by random effects. We will need some additional notation to describe this new statistical model. Let

\(\psi_{ik}\) be the vector of individual parameters of individual i for occasion k, where \(1\leq i \leq N\) and \(1\leq k \leq K\).
\({c}_{ik}\) be the vector of covariates of individual i for occasion k. Some of these covariates remain constant (gender, group treatment, ethnicity, etc.) and others can vary (weight, treatment, etc.).

Let \(\psi_i = (\psi_{i1}, \psi_{i2}, \ldots , \psi_{iK})\) be the sequence of K individual parameters for individual i. We also need to define:

\(\eta_i^{(0)}\), the vector of random effects which describes the random inter-individual variability of the individual parameters,
\(\eta_{ik}^{(1)}\), the vector of random effects which describes the random intra-individual variability of the individual parameters in occasion k, for each \(1\leq k \leq K\).

Here and in the following, the superscript (0) is used to represent inter-individual variability, i.e., variability at the individual level, while superscript (1) represents inter-occasion variability, i.e., variability at the occasion level for each individual. The model now combines these two sequences of random effects:

\(h(\psi_{ik}) = h(\psi_{\rm pop}) + \beta(c_{ik} - c_{\rm pop}) + \eta_i^{(0)} + \eta_{ik}^{(1)}\)

Remark: Individuals do not need to share the same sequence of occasions: the number of occasions and the times defining the occasions can differ from one individual to another. There are two ways to define occasions in a data set:

Explicitly, using an OCCASION column. It is possible to have, in a data set, one or several columns with the column-type OCCASION. It corresponds to the same subject (ID should remain the same) but under different circumstances, i.e. occasions. For example, if the same subject has two successive different treatments, it should be considered as the same subject with two occasions. The OCC columns can contain only integers.
Implicitly, using an EVID column. If there is an EVID column with a value 4, then Monolix defines a washout and creates an occasion. Thus, if there are several times where EVID equals 4 for a subject, it will create the same number of occasions. Notice that if EVID equals 4 only once at the beginning, only one occasion will be defined and no inter-occasion variability would be possible.

There are three kinds of occasions:

Cross-over study: In that case, data is collected for each patient during two independent treatment periods of time, and there is an overlap in the time definitions of the periods. A column OCCASION can be used to identify the period. An alternative way is to define an EVID column, starting all occasions with EVID equals 4.
Both types of definition will be presented in the iov1 example.

Occasions with washout: In that case, data is collected for each patient during one period and there is no overlap between the periods. The time is increasing, but the dynamical system (i.e. the compartments) is reset when the second period starts. In particular, EVID=4 indicates that the system is reset (washout), for example when a new dose is administered.

Occasions without washout: In that case, data is collected for each patient during one period and there is no overlap between the periods. The time is increasing and we want to differentiate periods in terms of occasions without any reset of the dynamical system. Multiple doses are administered to each patient, and each period of time between successive doses is defined as a statistical occasion. A column OCCASION is therefore necessary in the data file to define it.

iov1_project (data = 'iov1_data.txt', model = 'lib:oral1_1cpt_kaVk.txt')

In this example, PK data is collected for each patient during two independent treatment periods of time (each one starting at time 0). A column OCCASION is used to identify the study. This column is defined using the reserved keyword OCCASION. Then, the model associated to the individual parameters is as presented below. First, to define the variability of each parameter on each level, you just have to go to the desired level, and you will see the associated random effects on that level. On the figure above, we see that all parameters have variability on the ID level, which means that all parameters have inter-individual variability. On the figure below, we see the OCC level. In the presented case, only the volume V has inter-study variability and thus inter-occasion variability. Thus, it is the only parameter having variability on the occasion level. In terms of covariates, we then see two parts as displayed below:

the covariates associated to the level ID (in green). These are all the covariates that are constant for each subject.
the covariates associated to the level OCC (in blue). These are all the covariates that are constant within each occasion but not across occasions for a subject.

In the presented case, the treatment TRT varies for each individual. It contains inter-occasion information and is thus displayed with the occasion level. On the other hand, the SEX is constant for each subject. It contains inter-individual information but no inter-occasion information. It is therefore displayed with the ID level.

What is the impact? Covariates can be associated to a parameter if and only if their level of variability is coherent with the level of variability of the parameter. In the presented case, TRT has inter-occasion variability. It can only be used with the parameter V, which has inter-occasion variability. The two other parameters have only inter-individual variability and can therefore not use this TRT information. The interface is greyed out and the user cannot add this covariate to the parameters ka and Cl. SEX has only inter-individual variability. It can therefore be associated to any parameter that has inter-individual variability.

The population parameters now include the standard deviations of the random effects for the 2 levels of variability (omega is used for IIV and gamma for IOV).

Two important features are proposed in the plots. Firstly, in the individual fits, you can split or merge the occasions. When split is done, the name of the subject-occasion is the name of the subject, #, and the name of the occasion.
Secondly, you can use the occasion to split the plots.

iov1_Evid_project(data = ‘iov1_Evid_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’)

Another way to describe this cross over study is to use EVID=4, as explained in the data set definition. In that example, each EVID=4 record creates a washout and a new occasion.

iov2_project(data = ‘iov2_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’)

Time is increasing in this example, but the dynamical system (i.e., the compartments) is reset when the second period starts. The EVID column provides information about events concerning dose administration. In particular, EVID=4 indicates that the system is reset (washout) when a new dose is administered. Monolix automatically proposes to define the treatment periods (between successive resets) as statistical occasions and to introduce IOV, as we did in the previous example. We can display the individual fit by splitting each occasion for each individual, or by merging the different occasions in a unique plot for each individual.

Remark: If you are modeling PK data as in this example, the washout implies that the occasions are independent. The computation is therefore much faster, as we do not have to compute predictions between occasions.

iov3_project(data = ‘iov3_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’)

Multiple doses are administered to each patient. We consider each period of time between successive doses as a statistical occasion. A column OCCASION is therefore necessary in the data file. We can color the observed data by occasion to have a better representation. The model for IIV and IOV can then be defined as usual. The plot of individual fits allows us to check that the predicted concentration is now continuous over the different occasions for each individual.

iov4_project(data = ‘iov4_data.txt’, model = ‘lib:oral1_1cpt_kaVk.txt’)

We can easily extend such an approach to multiple levels of variability. In this example, columns P1 and P2 define embedded occasions. They are both defined as occasions. We then define a statistical model for each level of variability.
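To make the implicit EVID-based convention concrete, here is a hedged sketch (Python with pandas) of how an occasion index could be derived from EVID=4 rows. This is our own illustration of the rule described above, not Monolix's internal implementation; the column names ID, TIME, and EVID follow the examples, and the data values are invented.

```python
import pandas as pd

def add_occasions(df: pd.DataFrame) -> pd.DataFrame:
    """Derive an OCC column from EVID=4 rows, per subject (hypothetical helper).

    Each EVID=4 row (washout/reset) starts a new occasion. If a subject's only
    EVID=4 is at the very start, every row falls in the same occasion, so no
    inter-occasion variability is possible for that subject.
    """
    df = df.sort_values(["ID", "TIME"]).copy()
    # Cumulative count of resets within each subject.
    resets = df.groupby("ID")["EVID"].transform(lambda e: (e == 4).cumsum())
    # Normalize so the first occasion of each subject is numbered 1.
    df["OCC"] = (resets - resets.groupby(df["ID"]).transform("min") + 1).astype(int)
    return df

# Tiny illustrative data set (values invented for the example):
data = pd.DataFrame({
    "ID":   [1, 1, 1, 1, 1, 1],
    "TIME": [0, 1, 2, 3, 4, 5],
    "EVID": [4, 0, 0, 4, 0, 0],   # the second EVID=4 resets the system
    "DV":   [0.0, 5.1, 3.2, 0.0, 4.8, 2.9],
})
print(add_occasions(data))        # OCC comes out as 1,1,1,2,2,2
```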
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?

@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).

@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...

Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...

@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.

@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now

@yo' that's not the issue. With the laptop I lose access to the company network, and anything I need from there during the next two months, such as the email address of payroll etc., needs to be 100% collected first

@yo' I'm sorry, I explain too badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.

@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work

@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.

@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can, I am sure, tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.

@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things

@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]

@JosephWright I'm just exploring things myself “for fun”.
I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)

@Manuel I guess I'd favour (slightly) the current set up even if starting today, as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)

@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild". I don't see any way of having a macro that by default doesn't expand.

@JosephWright it has a series of footnotes for different types of footnotey thing; quick eye over the code, I think by default it has 10 of them, but duplicates for minipages as latex footnotes do. The mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts, and more if the user declares a new footnote series

@JosephWright I was thinking while writing the mail so have not tried it yet, but given that the new \newinsert takes from the float list, I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them; would need a few checks but should only be a line or two of code.

@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ...

The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial.

This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...

I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $$ie(P_A+P_B)^{\mu}$$ External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv...

As I am studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers for some of them. Can you suggest a good app for studying the history of these scientists?

I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption of continuity. My question is whether it would be possib...

@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.

Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of infinite length to reach the event horizon from a hovering ship." From physics.stackexchange.com/questions/480767/…

You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring, 3 hours ago

So in Q is for Quantum there's a box called PETE that has a 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white, and the same with a black ball.

@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (the possible outcomes, I suppose). For example a white ball coming into a PETE box will have the output misty WB (it can come out as white or black). But the misty of a black ball is W-B or -WB (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?

@AbhasKumarSinha intriguing/impressive! would like to hear more!
:) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...

@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.

@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's Quantum Fields and Strings: A Course For Mathematicians, but I haven't read it myself

@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/headquarters? where? don't see anything relevant on Google yet on "mitsubishi renderer", do you have a link for that?

@ACuriousMind that's exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and they're all part of the mystery/complexity/inscrutability of QM. actually it's QM experts that don't fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...

When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.

@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that. And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers

@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think you'll like it, thought of you when I found it... Kurzgesagt, optimistic nihilism: youtube.com/watch?v=MBRqu0YOH14

The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. == Method == === One handed === One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...

@vzn I don't want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have a constant hunger to widen my view on the world.

@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm for submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"

I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money

It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge

Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction? I just want to have an overview, as I might have the possibility of doing a master's thesis on the subject. I looked around a bit and it sounds cool, but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
The concepts we used to find the arc length of a curve can be extended to find the surface area of a surface of revolution. Surface area is the total area of the outer layer of an object. For objects such as cubes or bricks, the surface area of the object is the sum of the areas of all of its faces. For curved surfaces, the situation is a little more complex.

Let \(f(x)\) be a nonnegative smooth function over the interval \([a,b]\). We wish to find the surface area of the surface of revolution created by revolving the graph of \(y=f(x)\) around the \(x\)-axis, as shown in the following figure.

As we have done many times before, we are going to partition the interval \([a,b]\) and approximate the surface area by calculating the surface area of simpler shapes. We start by using line segments to approximate the curve, as we did earlier in this section. For \(i=0,1,2,…,n\), let \(P=\{x_i\}\) be a regular partition of \([a,b]\). Then, for \(i=1,2,…,n,\) construct a line segment from the point \((x_{i−1},f(x_{i−1}))\) to the point \((x_i,f(x_i))\). Now, revolve these line segments around the \(x\)-axis to generate an approximation of the surface of revolution, as shown in the following figure.

Notice that when each line segment is revolved around the axis, it produces a band. These bands are actually pieces of cones (think of an ice cream cone with the pointy end cut off). A piece of a cone like this is called a frustum of a cone.

To find the surface area of the band, we need to find the lateral surface area, \(S\), of the frustum (the area of just the slanted outside surface of the frustum, not including the areas of the top or bottom faces). Let \(r_1\) and \(r_2\) be the radii of the wide end and the narrow end of the frustum, respectively, and let \(l\) be the slant height of the frustum as shown in the following figure.

We know the lateral surface area of a cone is given by

\[\text{Lateral Surface Area } =πrs,\]

where \(r\) is the radius of the base of the cone and \(s\) is the slant height (Figure \(\PageIndex{7}\)).

Since a frustum can be thought of as a piece of a cone, the lateral surface area of the frustum is given by the lateral surface area of the whole cone less the lateral surface area of the smaller cone (the pointy tip) that was cut off (Figure \(\PageIndex{8}\)). The cross-sections of the small cone and the large cone are similar triangles, so we see that

\[ \dfrac{r_2}{r_1}=\dfrac{s−l}{s}.\]

Solving for \(s\), we get

\[\begin{align*} \dfrac{r_2}{r_1}&=\dfrac{s−l}{s} \\ r_2s &=r_1(s−l) \\ r_2s&=r_1s−r_1l \\ r_1l&=r_1s−r_2s \\ r_1l&=(r_1−r_2)s \\ s&=\dfrac{r_1l}{r_1−r_2}. \end{align*}\]

Then the lateral surface area (SA) of the frustum is

\[\begin{align*} S&= \text{(Lateral SA of large cone)}− \text{(Lateral SA of small cone)} \\[4pt] &=πr_1s−πr_2(s−l) \\[4pt] &=πr_1\left(\dfrac{r_1l}{r_1−r_2}\right)−πr_2\left(\dfrac{r_1l}{r_1−r_2}−l\right) \\[4pt] &=\dfrac{πr_1^2l}{r_1−r_2}−\dfrac{πr_1r_2l}{r_1−r_2}+πr_2l \\[4pt] &=\dfrac{πr_1^2l}{r_1−r_2}−\dfrac{πr_1r_2l}{r_1−r_2}+\dfrac{πr_2l(r_1−r_2)}{r_1−r_2} \\[4pt] &=\dfrac{πr_1^2l}{r_1−r_2}−\dfrac{πr_1r_2l}{r_1−r_2}+\dfrac{πr_1r_2l}{r_1−r_2}−\dfrac{πr_2^2l}{r_1−r_2} \\[4pt] &=\dfrac{π(r_1^2−r_2^2)l}{r_1−r_2}=\dfrac{π(r_1−r_2)(r_1+r_2)l}{r_1−r_2} \\[4pt] &= π(r_1+r_2)l. \end{align*}\]
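Before moving on, a quick numerical sanity check of \(S=π(r_1+r_2)l\) may be reassuring. The sketch below, with arbitrary radii and slant height, compares the formula against the difference of the two cones' lateral areas used in the derivation.

```python
import math

# Sanity check of the frustum formula S = pi*(r1 + r2)*l (values arbitrary).
r1, r2, l = 3.0, 1.0, 2.0

s = r1 * l / (r1 - r2)            # slant height of the full cone, from the derivation
large = math.pi * r1 * s          # lateral area of the full cone
small = math.pi * r2 * (s - l)    # lateral area of the cut-off tip

print(large - small)              # 25.132741...
print(math.pi * (r1 + r2) * l)    # 25.132741... -- the two agree
```

Note also that setting \(r_2=0\) recovers the cone formula \(πr_1l\), as it should.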
Let's now use this formula to calculate the surface area of each of the bands formed by revolving the line segments around the \(x\)-axis. A representative band is shown in the following figure.

Note that the slant height of this frustum is just the length of the line segment used to generate it. So, applying the surface area formula, we have

\[\begin{align*} S&=π(r_1+r_2)l \\ &=π(f(x_{i−1})+f(x_i))\sqrt{(Δx)^2+(Δy_i)^2} \\ &=π(f(x_{i−1})+f(x_i))Δx\sqrt{1+\left(\dfrac{Δy_i}{Δx}\right)^2}. \end{align*}\]

Now, as we did in the development of the arc length formula, we apply the Mean Value Theorem to select \(x^∗_i∈[x_{i−1},x_i]\) such that \(f′(x^∗_i)=Δy_i/Δx.\) This gives us

\[S=π(f(x_{i−1})+f(x_i))Δx\sqrt{1+(f′(x^∗_i))^2}.\]

Furthermore, since \(f(x)\) is continuous, by the Intermediate Value Theorem, there is a point \(x^{**}_i∈[x_{i−1},x_i]\) such that \(f(x^{**}_i)=\tfrac{1}{2}[f(x_{i−1})+f(x_i)]\), so we get

\[S=2πf(x^{**}_i)Δx\sqrt{1+(f′(x^∗_i))^2}.\]

Then the approximate surface area of the whole surface of revolution is given by

\[\text{Surface Area} ≈\sum_{i=1}^n 2πf(x^{**}_i)Δx\sqrt{1+(f′(x^∗_i))^2}.\]

This almost looks like a Riemann sum, except we have functions evaluated at two different points, \(x^∗_i\) and \(x^{**}_i\), over the interval \([x_{i−1},x_i]\). Although we do not examine the details here, it turns out that because \(f(x)\) is smooth, if we let \(n→∞\), the limit works the same as a Riemann sum even with the two different evaluation points. This makes sense intuitively. Both \(x^∗_i\) and \(x^{**}_i\) are in the interval \([x_{i−1},x_i]\), so it makes sense that as \(n→∞\), both \(x^∗_i\) and \(x^{**}_i\) approach \(x\). Those of you who are interested in the details should consult an advanced calculus text.

Taking the limit as \(n→∞,\) we get

\[ \begin{align*} \text{Surface Area} &=\lim_{n→∞}\sum_{i=1}^n 2πf(x^{**}_i)Δx\sqrt{1+(f′(x^∗_i))^2} \\[4pt] &=∫^b_a 2πf(x)\sqrt{1+(f′(x))^2}\,dx. \end{align*}\]

As with arc length, we can conduct a similar development for functions of \(y\) to get a formula for the surface area of surfaces of revolution about the \(y\)-axis. These findings are summarized in the following theorem.

Surface Area of a Surface of Revolution

Let \(f(x)\) be a nonnegative smooth function over the interval \([a,b]\). Then, the surface area of the surface of revolution formed by revolving the graph of \(f(x)\) around the \(x\)-axis is given by

\[\text{Surface Area}=∫^b_a 2πf(x)\sqrt{1+(f′(x))^2}\,dx.\]

Similarly, let \(g(y)\) be a nonnegative smooth function over the interval \([c,d]\). Then, the surface area of the surface of revolution formed by revolving the graph of \(g(y)\) around the \(y\)-axis is given by

\[\text{Surface Area}=∫^d_c 2πg(y)\sqrt{1+(g′(y))^2}\,dy.\]

Example \(\PageIndex{4}\): Calculating the Surface Area of a Surface of Revolution

1. Let \(f(x)=\sqrt{x}\) over the interval \([1,4]\). Find the surface area of the surface generated by revolving the graph of \(f(x)\) around the \(x\)-axis. Round the answer to three decimal places.

Solution

The graph of \(f(x)\) and the surface of rotation are shown in the following figure.

We have \(f(x)=\sqrt{x}\). Then \(f′(x)=1/(2\sqrt{x})\) and \((f′(x))^2=1/(4x).\) Then,

\[\begin{align*} \text{Surface Area} &=∫^b_a 2πf(x)\sqrt{1+(f′(x))^2}\,dx \\[4pt] &=∫^4_1 2π\sqrt{x}\sqrt{1+\dfrac{1}{4x}}\,dx \\[4pt] &=∫^4_1 2π\sqrt{x+\dfrac{1}{4}}\,dx. \end{align*}\]

Let \(u=x+1/4.\) Then \(du=dx\). When \(x=1, u=5/4\), and when \(x=4, u=17/4.\) This gives us

\[\begin{align*} ∫^4_1 2π\sqrt{x+\dfrac{1}{4}}\,dx &= ∫^{17/4}_{5/4}2π\sqrt{u}\,du \\[4pt] &= 2π\left[\dfrac{2}{3}u^{3/2}\right]\Big|^{17/4}_{5/4} \\[4pt] &=\dfrac{π}{6}\left[17\sqrt{17}−5\sqrt{5}\right]≈30.846. \end{align*}\]
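As a cross-check of this result, the surface-area integral can be evaluated numerically. The following sketch assumes SciPy is available; the function names are our own.

```python
import math
from scipy.integrate import quad

# Numerical check of the example: revolve f(x) = sqrt(x) on [1, 4] about the x-axis.
f = lambda x: math.sqrt(x)
fp = lambda x: 1.0 / (2.0 * math.sqrt(x))   # f'(x)

integrand = lambda x: 2 * math.pi * f(x) * math.sqrt(1 + fp(x) ** 2)
area, _ = quad(integrand, 1, 4)

exact = (math.pi / 6) * (17 * math.sqrt(17) - 5 * math.sqrt(5))
print(area, exact)   # both approximately 30.846, matching the closed form
```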
Portfolio Health and Potential Uses

When we released our mobile application, LendingRobot Dashboard, many users might have noticed that the first screen to appear after logging in displays your portfolio's health. In this article, we'll give an explanation of portfolio health, how it is calculated, and its potential use in gauging portfolio performance. We consider Lending Club as our example platform.

We start by presenting our equation for portfolio health:

\[Note \space Health = \left(1-\frac{Days\space Late}{120}\right) * 100\%\]

\[Portfolio \space Health = \sum \limits_{i=1}^{n}Note \space Health_{i} * \omega_{i}\]

where \(n\) is the number of ongoing (not paid or charged off) notes in the portfolio, \(Note \space Health_{i}\) is the individual health of note \(i\), and \(\omega_{i}\) is a weight calculated as the remaining outstanding principal for note \(i\) divided by the portfolio's total remaining outstanding principal.

Notice in the equation that if a note is 0 days late then it has a health of 100%, whereas if it is 120 days late it has a health of 0%. The note health decreases linearly as days late increases from 0 to 120. Also, it is possible for notes to be more than 120 days late when they have the status of default but have not charged off yet; we count them as being 120 days late and having 0% note health.

Let's clarify this with some examples. First, imagine that you have a portfolio of only one note with the following characteristics:

| Note | Term | Subgrade | Age (months) | Status | Days Late | Note Health (%) | Remaining Outstanding Principal ($) | Weight (\(\omega\)) | Weight-Adjusted Health (%) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 36 | D5 | 18 | Grace | 3 | 97.5 | 20 | 1 | 97.5 |

| | Note 1 | Total Ongoing Portfolio |
|---|---|---|
| Remaining Outstanding Principal ($) | 20 | 20 |
| \(\omega\) | 1 | 1 |

The portfolio health in this case is 97.5%, exactly the same as the sole note's weight-adjusted health, because the entire portfolio is comprised of that single note.

Now consider a hypothetical portfolio consisting of 6 notes with the following characteristics (weights are rounded):

| Note | Term | Subgrade | Age (months) | Status | Days Late | Note Health (%) | Remaining Outstanding Principal ($) | Weight (\(\omega\)) | Weight-Adjusted Health (%) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 36 | D5 | 18 | Grace | 3 | 97.5 | 20 | .278 | 27.11 |
| 2 | 36 | B1 | 30 | Current | 0 | 100 | 8 | .111 | 11.11 |
| 3 | 36 | G3 | 12 | Late | 87 | 27.5 | 21 | .292 | 8.03 |
| 4 | 36 | C4 | 8 | Default | 143 | 0 | 23 | .318 | 0.00 |
| 5 | 36 | F2 | 50 | Paid | 0 | 100 | 0 | N/A | N/A |
| 6 | 36 | E5 | 60 | Charged Off | 0 | 0 | 19 | N/A | N/A |

| | 1 | 2 | 3 | 4 | 5 | 6 | Total Ongoing Portfolio |
|---|---|---|---|---|---|---|---|
| Remaining Outstanding Principal ($) | 20 | 8 | 21 | 23 | 0 | 19 | 72 |
| \(\omega\) | .278 | .111 | .292 | .319 | N/A | N/A | 1 |

The portfolio health for this portfolio is 46.25%, the sum of the weight-adjusted health column excluding N/A values. The process is: find every ongoing note's health, weight it by the note's share of remaining outstanding principal, and sum the resulting weight-adjusted healths. Note that notes 5 and 6 are not included in the portfolio health calculation because they are not ongoing notes. A minimal sketch of this computation is given below.

So aside from being an indication of how late your ongoing portfolio of notes is, how else can portfolio health be used? You could use it as an indication of how your current investment strategy is performing. Consider the second sample portfolio above, where notes 1 through 4 are included in the portfolio health calculation. We can analyze the historical data we have about Lending Club loans, find an expected days late based on loans that match the characteristics of those in your portfolio, and from the expected days late calculate an expected portfolio health.
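To make the weighting explicit, here is a minimal sketch of the calculation in Python. The status strings and the tuple layout are our own framing for illustration, not part of LendingRobot's API.

```python
def note_health(days_late: float) -> float:
    """Note health in percent, capped at 120 days late
    (a defaulted note that has not yet charged off counts as 120 days late)."""
    return (1 - min(days_late, 120) / 120) * 100

def portfolio_health(notes) -> float:
    """Principal-weighted average health of the ongoing notes.

    `notes` holds (status, days_late, outstanding_principal) tuples;
    'Paid' and 'Charged Off' notes are excluded, as in the tables above.
    """
    ongoing = [n for n in notes if n[0] not in ("Paid", "Charged Off")]
    total = sum(principal for _, _, principal in ongoing)
    return sum(note_health(d) * p / total for _, d, p in ongoing)

# The six-note example portfolio from the tables above:
notes = [
    ("Grace", 3, 20), ("Current", 0, 8), ("Late", 87, 21),
    ("Default", 143, 23), ("Paid", 0, 0), ("Charged Off", 0, 19),
]
print(round(portfolio_health(notes), 2))
# Prints 46.22 with exact weights; the 46.25 quoted in the text comes from
# the rounded weights shown in the table.
```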
If your portfolio health is greater than the expected portfolio health, it indicates that your portfolio is "not as late" as we'd expect based on historical data and is performing better than average. Here's a snippet of fictitious data for comparison with the example portfolio above:

| Term | Subgrade | Age (months) | Expected Days Late | Expected Note Health (%) | Weight (\(\omega\)) | Expected Weight-Adjusted Health (%) |
|---|---|---|---|---|---|---|
| 36 | D5 | 18 | 8 | 93.3 | .278 | 25.94 |
| 36 | B1 | 30 | 1 | 99.2 | .111 | 11.01 |
| 36 | G3 | 12 | 40 | 66.6 | .292 | 19.45 |
| 36 | C4 | 8 | 50 | 58.3 | .318 | 18.54 |

In this case, we have an expected portfolio health of 74.94%. We can see that our example portfolio's health (46.25%) is worse than expected, largely due to notes 3 and 4, where the actual days late are much greater than expected. This could be an indication that whatever rule or investment strategy is picking 36-month G3 and C4 notes might warrant reconsideration. Thus, we believe portfolio health has potential in gauging your portfolio and investment strategy performance.

Justin Hsi, July 21, 2016