In Bishop's Pattern Recognition and Machine Learning, he states that exponential family distributions over $\mathbf{x}$ given parameters $\boldsymbol{\eta}$ can be written as $p(\mathbf{x} | \boldsymbol{\eta}) = h(\mathbf{x})g(\boldsymbol{\eta})\exp\{\boldsymbol{\eta}^\top\mathbf{u}(\mathbf{x})\}$. He gives the natural parameters for the Bernoulli distribution with probability of success $\mu$ as: $$\eta = \ln(\frac{\mu}{1 - \mu})$$ $$u(x) = x$$ $$h(x) = 1$$ $$g(\eta) = \frac{1}{1 + \exp{(\eta)}}$$ I was working this out myself and found $$\boldsymbol{\eta} = [\ln\mu,\: \ln(1 - \mu)]^\top$$ $$\mathbf{u}(x) = [x, \: 1-x]^\top$$ $$h(\mathbf{x}) = 1$$ $$g(\boldsymbol{\eta}) = 1$$ could also work. How do you know when you've found the natural parameters and is there a rule preventing you from adding arbitrary scaling constants into the expressions for $\boldsymbol{\eta}$, $\mathbf{u}$, $h$, and $g$?
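As a quick sanity check (my own sketch, not from Bishop), both parameterisations do reproduce the Bernoulli pmf $\mu^x(1-\mu)^{1-x}$, which illustrates why the decomposition into $h$, $g$, $\boldsymbol{\eta}$, $\mathbf{u}$ cannot be unique:

```python
import math

def bernoulli_pmf(x, mu):
    # Reference pmf: mu^x * (1 - mu)^(1 - x)
    return mu**x * (1 - mu)**(1 - x)

def bishop_form(x, mu):
    # Bishop: eta = ln(mu/(1-mu)), u(x) = x, h(x) = 1, g(eta) = 1/(1+exp(eta))
    eta = math.log(mu / (1 - mu))
    return 1.0 * (1 / (1 + math.exp(eta))) * math.exp(eta * x)

def two_param_form(x, mu):
    # Alternative: eta = [ln mu, ln(1-mu)], u(x) = [x, 1-x], h = g = 1
    eta = (math.log(mu), math.log(1 - mu))
    u = (x, 1 - x)
    return math.exp(eta[0] * u[0] + eta[1] * u[1])

for mu in (0.2, 0.5, 0.9):
    for x in (0, 1):
        p = bernoulli_pmf(x, mu)
        assert abs(bishop_form(x, mu) - p) < 1e-12
        assert abs(two_param_form(x, mu) - p) < 1e-12
```

Both forms pass, so agreement with the pmf alone cannot single out "the" natural parameters; the difference is in how the normalisation is split between $g$ and the exponent.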
I find the Frobenius Method quite beautiful, and I would like to be able to apply it. In particular there are three questions in my text book that I have attempted. In each question my limited understanding has stopped me. Only one of these questions (the last) is assigned homework. The rest are examples I found interesting*. 1) $ L[y] = xy'' + 2xy' +6e^xy = 0 $ (1) The Wikipedia article begins by saying that the Frobenius method is a way to find solutions for ODEs of the form $ x^2y'' + xP(x)y' + Q(x)y = 0 $ To put (1) into that form I might multiply across by $x$, giving me $ x^2y'' + x[2x]y' + [6xe^x]y = 0 $ (2) But is that OK? The first step in the method seems to be dividing by $ x^2 $, so can't I just leave the equation in its original form? I'll assume I can. Now we let $ y_1 = \sum _{n=0}^{\infty} a_n x^{r+n} $ then, $ y_1' = \sum _{n=0}^{\infty} (r+n)a_nx^{r+n-1} $ and, $ y_1'' = \sum _{n=0}^{\infty} (r+n)(r+n-1)a_nx^{r+n-2} $ Substituting into (2) we get, $ x\sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n-2} + 2x\sum _{n=0}^{\infty}(r+n)a_nx^{r+n-1} + 6e^x\sum _{n=0}^{\infty}a_nx^{r+n} = 0 $ But now what? I am aware that $ 6e^x = 6\sum _{n=0}^{\infty}x^n/n! $, but my powers stop there. Can I multiply the two series together? I would have to multiply each term in one series by every term in the other, and I don't know how to deal with that. The text provides no worked examples in which P(x) or Q(x) are not polynomials... so for now my work stops here. 2) $ L[y] = x(x-1)y'' + 6x^2y' + 3y = 0 $ Again, I will leave the question in its original form, rather than try to get that $x^2$ in front (I realise I am not checking that each singular point is a regular singular point, but checking the answer in the back of the book, x = 1 and x = 0 are indeed regular singular points). With two regular singular points, I expect I will get 2 sets of answers: one near x = 1 and the other near x = 0. Is it enough to just proceed with one case and then the next? 
I will assume so, and begin with the case close to x = 0. Again, letting $ y_1 = \sum _{n=0}^{\infty} a_n x^{r+n} $, and taking the appropriate derivatives, we find by substitution, $ x(x-1)\sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n-2} + 6x^2\sum _{n=0}^{\infty}(r+n)a_nx^{r+n-1} + 3\sum _{n=0}^{\infty}a_nx^{r+n} = 0 $ $ x^2\sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n-2} - x\sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n-2} + 6x^2\sum _{n=0}^{\infty}(r+n)a_nx^{r+n-1} + 3\sum _{n=0}^{\infty}a_nx^{r+n} = 0 $ $ \sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n} - \sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=0}^{\infty}6(r+n)a_nx^{r+n+1} + \sum _{n=0}^{\infty}3a_nx^{r+n} = 0 $ We shift the indexes of the above sums, so that everything will be in terms of the same power of x. $ \sum _{n=1}^{\infty}(r+n-1)(r+n-2)a_{n-1}x^{r+n-1} - \sum _{n=0}^{\infty}(r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=2}^{\infty}6(r+n-2)a_{n-2}x^{r+n-1} + \sum _{n=1}^{\infty}3a_{n-1}x^{r+n-1} = 0 $ We synchronise the indexes in order to group like terms, by extracting early terms from each series, $ r(r-1)a_0x^r + \sum _{n=2}^{\infty}(r+n-1)(r+n-2)a_{n-1}x^{r+n-1} - r(r-1)a_0x^{r-1} - r(r+1)a_1x^r - \sum _{n=2}^{\infty}(r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=2}^{\infty}6(r+n-2)a_{n-2}x^{r+n-1} + 3a_0x^{r} + \sum _{n=2}^{\infty}3a_{n-1}x^{r+n-1} = 0 $ Rearranging, we get $ r(r-1)a_0x^r - r(r-1)a_0x^{r-1} - r(r+1)a_1x^r + 3a_0x^{r} + \sum _{n=2}^{\infty}(r+n-1)(r+n-2)a_{n-1}x^{r+n-1} - \sum _{n=2}^{\infty}(r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=2}^{\infty}6(r+n-2)a_{n-2}x^{r+n-1} + \sum _{n=2}^{\infty}3a_{n-1}x^{r+n-1} = 0 $ At this point I expect the indicial equation to emerge, and I expect it to be similar to an Euler Equation. That is, I expect a polynomial that I can solve to get two 'exponents at the singularity'. Unfortunately, I cannot see an indicial equation and am at a loss to know precisely why. 
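In case it helps to cross-check the exponents at the singularity independently (a sketch of my own, not the book's method): write the equation in the form $y'' + p(x)y' + q(x)y = 0$; the indicial equation at a regular singular point $x_0=0$ is $r(r-1) + p_0 r + q_0 = 0$, where $p_0 = \lim_{x\to0} x\,p(x)$ and $q_0 = \lim_{x\to0} x^2 q(x)$.

```python
import math

# x(x-1)y'' + 6x^2 y' + 3y = 0, in standard form y'' + p y' + q y = 0:
#   p(x) = 6x/(x-1)  (after cancelling one factor of x),  q(x) = 3/(x(x-1)).
# The exponents at the regular singular point x0 = 0 solve
#   r(r-1) + p0*r + q0 = 0,  p0 = lim x*p(x),  q0 = lim x^2*q(x).
p0 = 6 * 0**2 / (0 - 1)   # x*p(x) = 6x^2/(x-1), continuous at x = 0, so just evaluate
q0 = 3 * 0 / (0 - 1)      # x^2*q(x) = 3x/(x-1), continuous at x = 0

# Solve the quadratic r^2 + (p0 - 1)*r + q0 = 0
disc = (p0 - 1)**2 - 4 * q0
roots = sorted([(1 - p0 - math.sqrt(disc)) / 2, (1 - p0 + math.sqrt(disc)) / 2])
print(roots)   # [0.0, 1.0] -> indicial roots r = 0 and r = 1
```

So for this equation the indicial polynomial is just $r(r-1)$, which is something to compare against whatever emerges from the grouped series above.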
3) $ L[y] = xy'' + y = 0 $ Finally we come to the assigned question, which I have been able to manipulate into an almost final form. Again, letting $ y_1 = \sum _{n=0}^{\infty} a_n x^{r+n} $, taking derivatives, and substituting into L, we get $ x\sum _{n=0}^{\infty} (r+n)(r+n-1)a_nx^{r+n-2} + \sum _{n=0}^{\infty} a_n x^{r+n} = 0 $ $ \sum _{n=0}^{\infty} (r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=0}^{\infty} a_n x^{r+n} = 0 $ Now shifting indexes, $ \sum _{n=0}^{\infty} (r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=1}^{\infty} a_{n-1} x^{r+n-1} = 0 $ and extracting the $ 0^{th} $ term of the first sum, $ r(r-1)a_0x^{r-1} + \sum _{n=1}^{\infty} (r+n)(r+n-1)a_nx^{r+n-1} + \sum _{n=1}^{\infty} a_{n-1} x^{r+n-1} = 0 $ $ r(r-1)a_0x^{r-1} + \sum _{n=1}^{\infty}[(r+n)(r+n-1)a_n + a_{n-1}]x^{r+n-1} = 0 $ And voilà! We have an indicial equation with solutions $r_1 = 1$ and $r_2 = 0$, and a recurrence relation. From my text I expect that $y_1 = |x|^{r_1}[1+\sum _{n=1}^{\infty}a_nx^n]$ and, since $ r_1 - r_2 \in \mathbb{Z} $, $y_2 = ay_1\ln|x| + |x|^{r_2}[1 + \sum _{n=1}^{\infty}b_nx^n]$. To find $a_n$ we observe the recurrence relation with $ r = r_1 = 1 $, $ (r+n)(r+n-1)a_n + a_{n-1} = 0 $, i.e. $ a_n = -a_{n-1}/n(n+1) $ so, $ a_1 = -a_0/(1\cdot2) $ $ a_2 = -a_1/(2\cdot3) = a_0/(3!\,2!) $ $ a_3 = -a_2/(3\cdot4) = -a_0/(4!\,3!) $ and in general, $ a_n = (-1)^na_0/n!(n+1)! $ so we have (taking $a_0 = 1$) $ y_1 = \sum _{n=0}^{\infty} (-1)^nx^{n+1}/n!(n+1)! $ Not so easily done with $ r = r_2 = 0 $, I'm afraid... since the relation becomes $ a_n = -a_{n-1}/n(n-1) $, which means we can't have $a_1$ for fear of division by zero. Nevertheless, starting at n = 2 we get, $ a_2 = -a_1/(2\cdot1) $ $ a_3 = -a_2/(3\cdot2) = a_1/(3!\,2!) $ $ a_4 = -a_3/(4\cdot3) = -a_1/(4!\,3!) $ and in general, $ a_n = (-1)^{n-1}a_1/n!(n-1)! $ so we have $ y_2 = ay_1\ln|x| + 1 + \sum _{n=1}^{\infty} (-1)^{n-1}a_1x^{n}/n!(n-1)! $ Which I feel may not be correct... and even if it is, how should one man solve for $a$ in a single lifetime? 
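To gain confidence in the $r_1 = 1$ series, one can check numerically (my own sketch; $a_0 = 1$ and $x > 0$ assumed, so $|x| = x$) that a truncation of $y_1 = \sum_{n\ge0} (-1)^n x^{n+1}/(n!\,(n+1)!)$ really satisfies $xy'' + y = 0$:

```python
from math import factorial

N = 25  # truncation order; more than enough for x <= 2

def y1(x):
    # Partial sum of y1(x) = sum_{n>=0} (-1)^n x^(n+1) / (n! (n+1)!)
    return sum((-1)**n * x**(n + 1) / (factorial(n) * factorial(n + 1))
               for n in range(N))

def y1pp(x):
    # Termwise second derivative: d^2/dx^2 x^(n+1) = (n+1) n x^(n-1)
    return sum((-1)**n * (n + 1) * n * x**(n - 1) / (factorial(n) * factorial(n + 1))
               for n in range(1, N))

for x in (0.5, 1.0, 2.0):
    residual = x * y1pp(x) + y1(x)
    assert abs(residual) < 1e-9, residual
print("x y1'' + y1 vanishes to truncation accuracy")
```

The residual is only the tail beyond the truncation, which is tiny because the coefficients decay like $1/(n!\,(n+1)!)$.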
Thanks everyone for looking at this. I want to stress that I am not just a student looking for help with his homework: I would really like to understand this method because it appeals to me. I particularly like the way we extract the indicial expression from the sums in order to synchronise them. That is so cool. And how you get one recurrence relation that you can use for both solutions: neat. PS sorry if my LaTeX is not perfect; I'm just getting started with it. *Questions taken from "Elementary Differential Equations and Boundary Value Problems" by William E. Boyce and Richard C. DiPrima (9th ed.), section 5.6, p. 290
2019-09-04 12:06 Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019 2019-08-15 17:39 LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC Long Shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. 
In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-08-15 17:36 Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-02-12 14:01 XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that have provided the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018 2019-01-21 09:59 Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] 
LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018 2019-01-15 14:22 Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018 2018-12-20 16:31 Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. 
Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018 2018-12-14 16:02 The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{\text{eq}}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
Here are some thoughts about this question - but not a definitive answer: I don't believe a definitive answer exists. In general, error propagation is founded on the assumption that the distribution of the errors is Gaussian, and that the error is small compared to the value of the quantity. In that case, a simple propagation of errors is possible. For example, assuming that the error is small, you should be able to take the derivative of your function with respect to each of the variables - then you can use that to determine the error in the result. For example, your case of $$F =\sqrt{A^2+B^2}$$ Derivatives with respect to A, B: $$\frac{\partial F}{\partial A} = \frac{A}{\sqrt{A^2+B^2}}\\\frac{\partial F}{\partial B} = \frac{B}{\sqrt{A^2+B^2}}$$ If the error in A is $\Delta A$, and the error in B is $\Delta B$, then the total expected error is obtained by adding the individual contributions in quadrature: $$\Delta F = \frac{\sqrt{(A\Delta A)^2 + (B\Delta B)^2}}{\sqrt{A^2+B^2}}$$ However, the moment you state that your distribution is not symmetrical, the situation changes. If you have a sufficient number of variables with "small but non-Gaussian" error distributions, the central limit theorem tells us that the result will nonetheless be Gaussian distributed: in that case you can compute the standard deviation of the (non-Gaussian) individual distributions, and use those as a surrogate in your error propagation calculation. But if you have a small number of variables (like in your example), AND the distribution is not Gaussian, then there is no method I'm aware of to solve the question analytically. It can, however, be addressed with a simple Monte Carlo simulation. In a Monte Carlo simulation, you sample the distributions of your input variables, and transform them according to the formula; you can then plot the resulting output distribution, and compute its shape etc. 
The upper and lower limits of the output can in principle be computed by setting the input variables to their extreme values (this is sometimes done for "worst case analysis"); but it is rare that that gives you any really useful insights, since error distributions most often describe something stochastic rather than deterministic (which means that an upper limit is almost never "hard"). And as I said - the moment you have more than a small number of variables with similar weights, the output distribution will start to look Gaussian.
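To make the Monte Carlo suggestion concrete, here is a minimal sketch; the central values and the asymmetric triangular error distributions below are invented for illustration, not taken from the question:

```python
import math
import random
import statistics

# Monte Carlo error propagation for F = sqrt(A^2 + B^2).
# The inputs A ~ 5.0 and B ~ 3.0 with skewed triangular errors
# are purely illustrative assumptions.
random.seed(1)

def sample_A():
    return random.triangular(4.8, 5.5, 5.0)   # (low, high, mode): skewed right

def sample_B():
    return random.triangular(2.9, 3.4, 3.0)

samples = sorted(math.hypot(sample_A(), sample_B()) for _ in range(100_000))

median = samples[len(samples) // 2]
lo = samples[int(0.16 * len(samples))]        # 16th percentile
hi = samples[int(0.84 * len(samples))]        # 84th percentile
print(f"F = {median:.3f}  (+{hi - median:.3f} / -{median - lo:.3f})")
print("stdev =", statistics.stdev(samples))
```

Reporting the empirical 16th/84th percentiles gives asymmetric error bars directly, which is exactly what the quadrature formula cannot provide for skewed inputs.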
No, it is not possible. In particular, consider that, were we to number the stakes from $0$ through $2015$ going evenly around the circle, then a string going from $a$ to $b$ is parallel to one going from $c$ to $d$ if and only if $a+b\equiv c+d\pmod{2016}$. This may be seen by noting that there is a reflection taking stake $a$ to stake $b$ which also takes stake $c$ to stake $d$. Two sets of parallel lines under such a labeling are shown below: Thus, we may reduce the problem to the following: Is there a permutation $s_0,\ldots,s_{2015}$ of $0,\ldots,2015$ such that the values $s_n+s_{n+1}$ are distinct mod $2016$ for every $n$ between $0$ and $2015$ (inclusively, taking the last value to be $s_{2015}+s_0$)? And the answer is "no", because there are only $2016$ equivalence classes mod $2016$, meaning each equivalence class would have to contain exactly one value $s_{n}+s_{n+1}$. However, that means that $$1008\equiv\sum_{i=0}^{2015}i\equiv \sum_{n=0}^{2015}(s_n+s_{n+1})\equiv 2\sum_{n=0}^{2015}s_n\equiv 2\sum_{i=0}^{2015}i\equiv2\cdot 1008\equiv 0\pmod{2016}$$which is false, since $2\cdot 1008\equiv 0 \not\equiv 1008\pmod{2016}$. Note that we used, in our second to last step, that $s_n$ enumerates the equivalence classes mod $2016$. (Note that the equivalence $1008\equiv\sum_{i=0}^{2015}i\pmod{2016}$ comes from a pairing argument: each summand $i$ may be paired with its opposite $-i$, making a total of $0$, with the exception of the two $i$ such that $i\equiv-i$ - which are $0$ and $1008$. So the sum is $0+1008=1008$.)
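The obstruction can also be checked by brute force for small cases (my own sketch; the stake counts other than $2016$ are only for illustration, but the same parity argument rules out every even $n$):

```python
from itertools import permutations

def has_distinct_adjacent_sums(n):
    # Is there a cyclic arrangement s_0..s_{n-1} of 0..n-1 whose adjacent
    # sums s_i + s_{i+1} (indices mod n) are pairwise distinct mod n?
    for perm in permutations(range(1, n)):
        s = (0,) + perm          # fix s_0 = 0; rotations are equivalent
        sums = {(s[i] + s[(i + 1) % n]) % n for i in range(n)}
        if len(sums) == n:
            return True
    return False

print(has_distinct_adjacent_sums(3))  # True: e.g. 0,1,2 gives sums 1,0,2 mod 3
print(has_distinct_adjacent_sums(4))  # False, matching the parity argument
print(has_distinct_adjacent_sums(6))  # False
```

Odd cases can succeed because there $2\sum i \equiv \sum i$; for even $n$ the two sides always disagree by $n/2$.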
2019-01-23 09:13 nuSTORM at CERN: Feasibility Study / Long, Kenneth Richard (Imperial College (GB)) The Neutrinos from Stored Muons, nuSTORM, facility has been designed to deliver a definitive neutrino-nucleus scattering programme using beams of $\bar{\nu}_e$ and $\bar{\nu}_\mu$ from the decay of muons confined within a storage ring. The facility is unique: it will be capable of storing $\mu^\pm$ beams with a central momentum of between 1 GeV/c and 6 GeV/c and a momentum spread of 16%. [...] CERN-PBC-REPORT-2019-003.- Geneva : CERN, 2019 - 150. 2019-01-15 15:35 Report from the LHC Fixed Target working group of the CERN Physics Beyond Colliders forum / Barschel, Colin (CERN) ; Bernhard, Johannes (CERN) ; Bersani, Andrea (INFN e Universita Genova (IT)) ; Boscolo Meneguolo, Caterina (Universita e INFN, Padova (IT)) ; Bruce, Roderik (CERN) ; Calviani, Marco (CERN) ; Carassiti, Vittore (Universita e INFN, Ferrara (IT)) ; Cerutti, Francesco (CERN) ; Chiggiato, Paolo (CERN) ; Ciullo, Giuseppe (Universita e INFN, Ferrara (IT)) et al. Several fixed-target experiments at the LHC are being proposed and actively studied. Splitting of beam halo from the core by means of a bent crystal, combined with a second bent crystal after the target, has been suggested in order to study magnetic and electric dipole moments of short-lived particles. [...] CERN-PBC-REPORT-2019-001.- Geneva : CERN, 2019 Fulltext: PDF; 2018-12-18 14:08 Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report / Beacham, J. (Ohio State U., Columbus (main)) ; Burrage, C. (U. Nottingham) ; Curtin, D. (Toronto U.) ; De Roeck, A. (CERN) ; Evans, J. (Cincinnati U.) ; Feng, J.L. (UC, Irvine) ; Gatto, C. (INFN, Naples ; NIU, DeKalb) ; Gninenko, S. (Moscow, INR) ; Hartin, A. (U. Coll. London) ; Irastorza, I. (U. Zaragoza, LFNAE) et al. The Physics Beyond Colliders initiative is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and scientific infrastructures through projects complementary to the LHC and other possible future colliders. These projects will target fundamental physics questions in modern particle physics. [...] arXiv:1901.09966; CERN-PBC-REPORT-2018-007.- Geneva : CERN, 2018 - 150 p. Fulltext: PDF; 2018-12-17 18:05 PBC technology subgroup report / Siemko, Andrzej (CERN) ; Dobrich, Babette (CERN) ; Cantatore, Giovanni (Universita e INFN Trieste (IT)) ; Delikaris, Dimitri (CERN) ; Mapelli, Livio (Universita e INFN, Cagliari (IT)) ; Cavoto, Gianluca (Sapienza Universita e INFN, Roma I (IT)) ; Pugnat, Pierre (Lab. des Champs Magnet. Intenses (FR)) ; Schaffran, Joern (Deutsches Elektronen-Synchrotron (DE)) ; Spagnolo, Paolo (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Ten Kate, Herman (CERN) et al. Goal of the technology WG set by PBC: exploration and evaluation of possible technological contributions of CERN to non-accelerator projects possibly hosted elsewhere: survey of suitable experimental initiatives and their connection to and potential benefit to and from CERN; description of identified initiatives and how their relation to the unique CERN expertise is facilitated. CERN-PBC-REPORT-2018-006.- Geneva : CERN, 2018 - 31. 
Fulltext: PDF; 2018-12-14 16:17 AWAKE++: The AWAKE Acceleration Scheme for New Particle Physics Experiments at CERN / Gschwendtner, Edda (CERN) ; Bartmann, Wolfgang (CERN) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Calviani, Marco (CERN) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Damerau, Heiko (CERN) ; Depero, Emilio (ETH Zurich (CH)) ; Doebert, Steffen (CERN) ; Gall, Jonathan (CERN) et al. The AWAKE experiment reached all planned milestones during Run 1 (2016-18), notably the demonstration of strong plasma wakes generated by proton beams and the acceleration of externally injected electrons to multi-GeV energy levels in the proton-driven plasma wakefields. During Run 2 (2021-2024) AWAKE aims to demonstrate the scalability and the acceleration of electrons to high energies while maintaining the beam quality. [...] CERN-PBC-REPORT-2018-005.- Geneva : CERN, 2018 - 11. 2018-12-14 15:50 Particle physics applications of the AWAKE acceleration scheme / Wing, Matthew (University of London (GB)) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Depero, Emilio (ETH Zurich (CH)) ; Gall, Jonathan (CERN) ; Gninenko, Sergei (Russian Academy of Sciences (RU)) ; Gschwendtner, Edda (CERN) ; Hartin, Anthony (University of London (GB)) ; Keeble, Fearghus Robert (University of London (GB)) et al. The AWAKE experiment had a very successful Run 1 (2016-18), demonstrating proton-driven plasma wakefield acceleration for the first time, through the observation of the modulation of a long proton bunch into micro-bunches and the acceleration of electrons up to 2 GeV in 10 m of plasma. The aims of AWAKE Run 2 (2021-24) are to have high-charge bunches of electrons accelerated to high energy, about 10 GeV, maintaining beam quality through the plasma and showing that the process is scalable. [...] CERN-PBC-REPORT-2018-004.- Geneva : CERN, 2018 - 11. Fulltext: PDF; 2018-12-13 13:21 Summary Report of Physics Beyond Colliders at CERN / Jaeckel, Joerg (CERN) ; Lamont, Mike (CERN) ; Vallee, Claude (Centre National de la Recherche Scientifique (FR)) Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and its scientific infrastructure in the next two decades through projects complementary to the LHC, HL-LHC and other possible future colliders. These projects should target fundamental physics questions that are similar in spirit to those addressed by high-energy colliders, but that require different types of beams and experiments. [...] arXiv:1902.00260; CERN-PBC-REPORT-2018-003.- Geneva : CERN, 2018 - 66 p. Fulltext: PDF; PBC summary as submitted to the ESPP update in December 2018: PDF;
I want to implement a recursive least squares algorithm, but I can't get it to work. In case you don't know what the recursive least squares algorithm is: it is just ordinary least squares, but arranged as an online estimator that updates the estimated mathematical model every iteration. So let's say we have a process $H(s)$ (transfer function notation) and we are giving the process a constant signal $u(t) = 1$ for $t = [0, 15]$ seconds. We don't know the process $H(s)$, but we know the input $u(t)$ and the output $y(t)$. We also know that the process contains acceleration, and I will grant us the information that $H(s)$ is a second order process. We give the process $H(s)$ the $u(t)$ signal and measure the output $y(t)$. The template of the second order process can be described as: $$G(s) = \frac{K}{s^2 + as + b}$$ But notice that we have collected the input and output signals with a sampling rate of one sample per $0.1$ time units. So we need to treat this as a discrete process: $$G_d(z) = \frac{a_0z + a_1}{b_0z^2 + b_1z + 1}$$ Remember that $z^2$ corresponds to a shift of $(k+2)$ and $z$ to $(k+1)$. We can expand $G_d(z)$ into a difference equation: $$b_0y(k+2) + b_1y(k+1) + y(k) = a_0u(k+1) + a_1u(k)$$ Then we can rearrange this into: $$y(k) = -b_0y(k+2) - b_1y(k+1) + a_0u(k+1) + a_1u(k)$$ The signals $u(k)$ and $y(k)$ are known because we have measured them both, and the coefficient of $y(k)$ has been normalised to 1. What we don't know is $b_0, b_1, a_0, a_1$, and that is why we are using least squares. Imagine that we have a lot of measurements. 
$$y(0) = -b_0y(2) - b_1y(1) + a_0u(1) + a_1u(0)$$ $$y(1) = -b_0y(3) - b_1y(2) + a_0u(2) + a_1u(1)$$ $$y(2) = -b_0y(4) - b_1y(3) + a_0u(3) + a_1u(2)$$ $$\vdots$$ $$y(k) = -b_0y(k+2) - b_1y(k+1) + a_0u(k+1) + a_1u(k)$$ Then we can express those like: $$b = Ax$$ Where $A$ is: $$A = \begin{bmatrix} -y(2) & -y(1) & u(1) & u(0) \\ -y(3) & -y(2) & u(2) & u(1) \\ -y(4) & -y(3) & u(3) & u(2) \\ \vdots & \vdots & \vdots & \vdots \\ -y(k+2) & -y(k+1) & u(k+1) & u(k) \end{bmatrix}$$ And $x$ is: $$x = [b_0, b_1, a_0, a_1]^T$$ and $b$ is: $$b = [y(0), y(1), y(2), \dots, y(k)]^T$$ We know $b$ and $A$, and we need to find $x$. A simple answer is: $$x = (A^TA)^{-1}A^Tb$$ That's how we can do least squares! So now back to the Recursive Least Squares algorithm. We first need to identify $\phi(k)$ as $$\phi^T(k) = [-y(k+2),\; -y(k+1),\; u(k+1),\; u(k)]$$ And our estimated coefficients $\hat{\theta}$ as: $$\hat{\theta}^T = [b_0, b_1, a_0, a_1]$$ Now we set up the Recursive Least Squares algorithm as: $$P(k) = P(k-1) - P(k-1)\phi(k)\phi^T(k)P(k-1)[1+\phi^T(k)P(k-1)\phi(k)]^{-1}$$ $$\hat{\theta}(k) = \hat{\theta}(k-1) + P(k)\phi(k)[y(k) - \phi^T(k)\hat{\theta}(k-1)]$$ Where $P(k-1)$ and $\hat{\theta}(k-1)$ are just the past values. $P(k)$ is also called the "updater". Before we start the algorithm, we need to set some initial conditions for $P$ and $\hat{\theta}$. For $P$ we set the value as: $$P = cI$$ Where $c$ is a large number, say 1000, and $I$ is the identity matrix. For $\hat{\theta}$ we set: $$\hat{\theta} = 0$$ You can read more here about Recursive Least Squares. Anyway! I have made an algorithm in MATLAB/Octave for estimating $\hat{\theta}$. 
% Time and input
t = 0:0.1:15;
u = linspace(1, 1, length(t));
% Continuous-time process
G = tf([1], [1 2 3]);
% Turn it to discrete with sampling time 0.1
Gd = c2d(G, 0.1) % 0.1 seconds sample time
% Simulate the process
[y, t] = lsim(Gd, u, t);
% Initial theta
Theta = [0; 0; 0; 0];
% Initial P
c = 1000; % A large number
I = eye(length(Theta));
P = c*I;
% Estimation
n = length(u) - 2; % 2 because (k_max + 2) does not exist.
for k = 1:n
  phi = [-y(k+2); -y(k+1); u(k+1); u(k)];
  % Update P
  P = P - P*phi*phi'*P/(1 + phi'*P*phi);
  % Update Theta
  Theta = Theta + P*phi*(y(k) - phi'*Theta);
end
% Theta
Theta

And Theta becomes: Theta = -0.477237 -0.578289 -0.010080 -0.010080 That means: $$y(k) = -b_0y(k+2) - b_1y(k+1) + a_0u(k+1) + a_1u(k)$$ $b_0 = -0.477237$, $b_1 = -0.578289$, $a_0 = -0.010080$, $a_1 = -0.010080$, $$-0.477237y(k+2) - 0.578289y(k+1) + y(k) = -0.010080u(k+1) - 0.010080u(k)$$ Which will be: $$G_d(z) = \frac{-0.010080z - 0.010080}{-0.477237z^2 - 0.578289z + 1}$$ And this will give a very unstable process, because the poles of $G_d(z)$ are $-2.17510$ and $0.96336$. But writing $G_d(z)$ as: $$G_d(z) = \frac{0.010080z + 0.010080}{0.477237z^2 + 0.578289z + 1}$$ results in stable poles: $$-0.6059 + 1.3147i\\ -0.6059 - 1.3147i$$ But the simulation doesn't look great at all. Question: This does not sound right to me. Have I done something wrong in the estimation? Edit: A more theoretical explanation for $P(k)$ is that $P(k)$ can be expressed as: $$P(k) = [\Phi^T(k)\Phi(k)]^{-1}$$ because in $$b = Ax$$ $b$ is equivalent to $y(k)$ and $A$ is equivalent to $\Phi(k)$, which gives us: $$\hat{\theta} = [\Phi^T(k)\Phi(k)]^{-1}\Phi^T(k)y(k)$$ And everybody knows the formula for least squares with matrices: $$x = (A^TA)^{-1}A^Tb$$ Edit2: Here is a modification of the RLS, which gives a better step answer of the transfer function. 
% Time and input
t = 0:0.1:15;
u = linspace(1, 1, length(t));
% Continuous-time process
G = tf([1], [1 2 3]);
% Turn it to discrete with sampling time 0.1
Gd = c2d(G, 0.1) % 0.1 seconds sample time
% Simulate the process
[y, t] = lsim(Gd, u, t);
% Initial theta
Theta = [0; 0; 0; 0; 0];
% Initial P
c = 1000; % A large number
I = eye(length(Theta));
P = c*I;
% Estimation
n = length(u);
for k = 1:n
  if k <= 3
    phi = 0;
  else
    phi = [-y(k-1); -y(k-2); -y(k-3); u(k-1); u(k-2)];
    % Update P
    P = P - P*phi*phi'*P/(1 + phi'*P*phi);
    % Update Theta
    Theta = Theta + P*phi*(y(k) - phi'*Theta);
  end
end
% Theta
Theta
Gd2 = tf([Theta(4) Theta(5)], [1 Theta(1) Theta(2) Theta(3)]);
Gd2.sampleTime = 0.1;
figure(2)
step(Gd2, 15);
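For comparison, here is a compact sketch of the same RLS update in Python/NumPy, run on synthetic data from a difference equation in past outputs, as in the Edit2 formulation. The "true" coefficients, the white-noise input, and the signal length are arbitrary choices for illustration, not taken from the question:

```python
import numpy as np

# Recursive least squares for
#   y(k) = -b1*y(k-1) - b2*y(k-2) + a1*u(k-1) + a2*u(k-2)
# on noiseless synthetic data; true_theta is an illustrative assumption.
rng = np.random.default_rng(0)
true_theta = np.array([-1.6, 0.64, 0.2, 0.1])   # [b1, b2, a1, a2]

N = 200
u = rng.standard_normal(N)                       # persistently exciting input
y = np.zeros(N)
for k in range(2, N):
    y[k] = (-true_theta[0] * y[k-1] - true_theta[1] * y[k-2]
            + true_theta[2] * u[k-1] + true_theta[3] * u[k-2])

theta = np.zeros(4)
P = 1000.0 * np.eye(4)                           # large initial "covariance"
for k in range(2, N):
    phi = np.array([-y[k-1], -y[k-2], u[k-1], u[k-2]])
    # P(k) = P(k-1) - P(k-1) phi phi' P(k-1) / (1 + phi' P(k-1) phi)
    P = P - np.outer(P @ phi, phi @ P) / (1.0 + phi @ P @ phi)
    # theta(k) = theta(k-1) + P(k) phi (y(k) - phi' theta(k-1))
    theta = theta + P @ phi * (y[k] - phi @ theta)

print(theta)   # converges toward true_theta on noiseless data
```

Note the regressor here contains only past samples, so it is causal; with the $(k+2)$-indexed regressor of the first attempt, the model is written "backwards", which is one thing worth ruling out when the estimated poles come out strange.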
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
When we go swimming, we feel a little weightless in the water. The reason for this is that liquids exert an upward force on objects submerged in them. This force is known as thrust and is a consequence of the difference in pressure a liquid exerts at different heights. As we submerge an object deeper into a liquid (considering it is fully submerged), the pressure exerted by the liquid keeps on increasing, but the thrust force remains the same. What is Archimedes' Principle? Archimedes' principle states that: The upward buoyant force that is exerted on a body immersed in a fluid, whether partially or fully submerged, is equal to the weight of the fluid that the body displaces, and acts in the upward direction at the center of mass of the displaced fluid. The value of the thrust force is given by the Archimedes law, which was discovered by Archimedes of Syracuse, Greece. When an object is partially or fully immersed in a liquid, the apparent loss of weight is equal to the weight of the liquid displaced by it. If you look at the figure, the weight due to gravity is opposed by the thrust provided by the fluid. The object inside the liquid only feels the total force acting on it as its weight. Because the actual gravitational force is decreased by the liquid's upthrust, the object feels as though its weight is reduced. The apparent weight is thus given by: Apparent weight = Weight of object (in air) - Thrust force (buoyancy) Archimedes' principle tells us that this loss of weight is equal to the weight of the liquid the object displaces. If the object has a volume of V, then it displaces a volume V of the liquid when it is fully submerged. If only a part of the volume is submerged, the object can only displace that much liquid. Archimedes Principle Formula In simple form, the Archimedes law states that the buoyant force on an object is equal to the weight of the fluid displaced by the object. 
Mathematically, it is written as: $$F_b = \rho \times g \times V$$ where $F_b$ is the buoyant force, $\rho$ is the density of the fluid, $V$ is the submerged volume, and $g$ is the acceleration due to gravity. Archimedes Principle Derivation The mass of the liquid displaced is: $$\text{Mass} = \text{Density} \times \text{Volume} = \rho \times V$$ This is because density ($\rho$) is defined as $$\text{Density}, \rho = \frac{\text{Mass}}{\text{Volume}} = \frac{M}{V}$$ Thus the weight of the displaced liquid is: $$\text{Weight} = \text{Mass} \times \text{Acceleration due to gravity}$$ $$W = M \times g = \rho \times V \times g$$ Thus from Archimedes' principle, we can write: Apparent loss of weight = weight of liquid displaced = $\rho \times V \times g$ Thus the thrust force is: $$\text{Thrust} = \rho \times V \times g$$ where $\rho$ is the density of the liquid and $V$ is the volume of liquid displaced. The thrust force is also called the buoyant force because it is responsible for making objects float. Thus, this equation is also called the law of buoyancy. Archimedes Principle Applications Following are some applications of Archimedes' principle: Submarine: A submarine stays submerged because it has a component called a ballast tank, which allows water to enter; when the tank is filled, the weight of the submarine becomes greater than the buoyant force, keeping it in position under water. Hot-air balloon: A hot-air balloon rises and floats in mid-air because the buoyant force on it is greater than its weight, since the hot air inside is less dense than the surrounding air. When the buoyant force becomes less than the balloon's weight, it starts to descend. This is done by varying the quantity of hot air in the balloon. Hydrometer: A hydrometer is an instrument used for measuring the relative density of liquids. A hydrometer contains lead shot, which makes it float vertically in the liquid. The lower the hydrometer sinks, the lower the density of the liquid. Archimedes Principle Experiment You can try an Archimedes principle experiment at home. Take a mug filled with water to the brim and place it in an empty bowl. 
Now take any solid object you like and measure its weight using a spring balance; note this down. Keeping the object attached to the spring balance, submerge it in the water (just make sure the spring balance itself is not submerged) and note down the weight shown. You will notice that it is less. Some water will be displaced into the bowl; collect this water and weigh it. You will find that the weight of the water is exactly equal to the loss of weight of the object!

Archimedes Principle Examples

Q1. Calculate the resulting buoyant force if a steel ball of radius 6 cm is immersed in water. Assume the density of steel is 7900 kg·m⁻³.

Ans: Given, radius of steel ball = 6 cm = 0.06 m.

Volume of steel ball, V = \(\frac{4}{3}\pi r^{3}\) = \(\frac{4}{3}\pi (0.06)^{3}\), so V = 9.05 × 10⁻⁴ m³.

Density of water, ρ = 1000 kg·m⁻³; acceleration due to gravity, g = 9.8 m·s⁻².

From the Archimedes principle formula,

F_b = ρ × g × V = (1000 kg·m⁻³)(9.8 m·s⁻²)(9.05 × 10⁻⁴ m³)

∴ F_b = 8.87 N

Q2. Calculate the density of a floating body that is 95% submerged in water. The density of water is 1000 kg·m⁻³.

Ans: Given, density of water, ρ = 1000 kg·m⁻³.

For a floating body, its weight equals the buoyant force:

V_b × ρ_b × g = ρ × g × V

where ρ, g, and V are the density of water, the acceleration due to gravity, and the volume of water displaced, and V_b and ρ_b are the volume and density of the body.

Rearranging the equation, \(\rho _{b}=\frac{V\rho }{V_{b}}\). Since 95% of the body is immersed, V = 0.95 × V_b, so

∴ ρ_b = 950 kg·m⁻³
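The two worked examples can be checked in a few lines of Python (a sketch; the function name `buoyant_force` is just for illustration):

```python
import math

def buoyant_force(volume_m3, fluid_density=1000.0, g=9.8):
    """Archimedes' principle: F_b = rho * g * V, in newtons."""
    return fluid_density * g * volume_m3

# Q1: steel ball of radius 6 cm, fully submerged in water
r = 0.06                                 # radius in metres
V = 4.0 / 3.0 * math.pi * r ** 3         # ~9.05e-4 m^3
F_b = buoyant_force(V)                   # ~8.87 N

# Q2: a floating body 95% submerged displaces 0.95 of its own volume,
# so its density is 0.95 times the density of water
rho_body = 0.95 * 1000.0                 # 950 kg/m^3
```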
Let us define the compound Poisson process $$Y_{N_t}=a+\sum_{n=1}^{N_t}X_n$$ where $X_n \sim f(x)$ with $\langle X_n \rangle > 0$ for all $n$, and $N_t$ is an independent Poisson random variable of parameter $\lambda t$. I want to study the survival probability above $0$ at infinite time, $\mathbb{P}(Y_{N_t}\geq0, \; \forall t\;|\;a)=S(a)$. This turns out to be very similar to the Cramér–Lundberg model of ruin / collective risk theory: $$\tilde{Y}_{N_t}=a+ct-\sum_{n=1}^{N_t}\tilde{X}_n \quad \text{where } \langle \tilde{X}_n \rangle > 0$$ for which we know some asymptotic estimates (cf. the intro here) of the ruin probability (the complement of survival). I wonder if we could define $$X_n=cT_n-\tilde{X}_n \quad \text{where } T_n \sim \lambda e^{-\lambda t},\; t>0$$ to use the result of the Cramér–Lundberg model for the random walk (which looks different because the $T_n$ are certainly not independent of $N_t$). This question is a reformulation of some aspects of these related questions: 1 and 2, where we have set up a renewal equation for $S(a)$. I find in this paper a quite different renewal equation for the risk model (p. 67), which suggests that the solution is different, of course. But I would expect the same asymptotics. Any comment about this would be appreciated. EDIT: The function $f$ can be any distribution, but I am first interested in pdfs of the form $f=\sum_{i=0}^n a_i \delta_{+c_i}+b_i \delta_{-c_i}$. The motivation for this would be to estimate the first moment of the cdf in $a$ from $f$.
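Since no closed form is available in general, the survival probability can at least be probed by simulation. Below is a rough Monte Carlo sketch (finite horizon, so only an approximation of $S(a)$; the two-point jump law and all parameter values are purely illustrative):

```python
import random

def survival_prob(a, jump, lam=1.0, horizon=200.0, trials=2000, seed=0):
    """Monte Carlo estimate of P(Y_t >= 0 for all t <= horizon | Y_0 = a).

    Jumps arrive as a Poisson process of rate lam; each jump adds jump(rng),
    an i.i.d. draw with positive mean.  The finite horizon makes this only
    an approximation of the t -> infinity survival probability S(a)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        y, t = a, 0.0
        ok = y >= 0
        while ok:
            t += rng.expovariate(lam)      # waiting time to the next jump
            if t >= horizon:
                break                      # survived up to the horizon
            y += jump(rng)
            ok = y >= 0
        survived += ok
    return survived / trials

# a two-point jump law f = 0.7 * delta_{+1} + 0.3 * delta_{-1} (mean +0.4)
two_point = lambda rng: 1.0 if rng.random() < 0.7 else -1.0
```

As a sanity check, survival should improve with the initial capital $a$.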
Answer

$W=135\ J$

Work Step by Step

We know that $W=\int \vec{F}\cdot d\vec{r}$. Thus, along the given path, we find:

$W = \int_0^3 cxy\,dx+\int_0^6 d\,dy$

$W = \int_0^3 cx(ax^2-bx)\,dx+6\times15$

$W=135\ J$
This question is cross-posted from math.stackexchange.com, where it did not (yet?) get any answers despite a +100 bounty. Consider a wave equation $$\frac{\partial^2 u}{\partial t^2} = c(x)^2 \frac{\partial^2 u}{\partial x^2} \tag{1}$$ In frequency domain this becomes an ODE: $$-\omega^2 u = c(x)^2 \frac{\partial^2 u}{\partial x^2} \tag{2}$$ We can solve (2) analytically when $c(x)$ is constant, then the solution is $\exp\left(\pm\frac{i \omega}{c} x\right)$. We can also solve it analytically if $c(x)$ is piecewise constant: write in every region-of-constant-$c$ a linear combination of leftward and rightward propagating waves, impose the appropriate continuity conditions on the interfaces between regions with different values of $c$, solve for the coefficients of the linear combinations. Suppose now that $c(x)$ is some arbitrary function. We can still write it as a limit of piecewise constant functions: $$c(x)=\lim_{\Delta\rightarrow 0^+} \sum_{i=-\infty}^{\infty} \left\{\begin{matrix}c(i\Delta) & x\in [i\Delta,(i+1)\Delta[ \\ 0 & \text{otherwise}\end{matrix}\right.$$ Let us call these piecewise constant approximations to $c$, $c_{\Delta}$ $$c_{\Delta}(x)=\sum_{i=-\infty}^{\infty} \left\{\begin{matrix}c(i\Delta) & x\in [i\Delta,(i+1)\Delta[ \\ 0 & \text{otherwise}\end{matrix}\right.$$ We can analytically solve, for any strictly positive $\Delta$, $$-\omega^2 u_{\Delta}= c_{\Delta}(x)^2 \frac{\partial^2 u_{\Delta}}{\partial x^2}$$ Question Is $\lim_{\Delta\rightarrow 0^+} u_{\Delta} = u$? In other words, can we approximate general solutions of (2) by substituting in a piecewise constant approximation of $c(x)$, and then solving analytically?
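As a numerical sanity check (not a proof), one can integrate the frequency-domain ODE for a smoothly varying $c$ and for its staircase approximations $c_\Delta$, and watch the values converge as $\Delta \to 0$. The profile $c(x) = 1 + x/2$, the frequency, and the RK4 integrator below are all just illustrative choices:

```python
import math

def solve_endpoint(c, omega=20.0, x_end=1.0, n=20000):
    """RK4-integrate u'' = -(omega / c(x))**2 * u on [0, x_end]
    with u(0) = 1, u'(0) = 0; returns u(x_end)."""
    h = x_end / n
    u, v, x = 1.0, 0.0, 0.0
    def f(x, u, v):
        k = (omega / c(x)) ** 2
        return v, -k * u
    for _ in range(n):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = f(x + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = f(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return u

c_smooth = lambda x: 1.0 + 0.5 * x        # an illustrative smooth profile

def staircase(c, delta):
    """The piecewise constant approximation c_Delta from the question."""
    return lambda x: c(math.floor(x / delta) * delta)

u_true = solve_endpoint(c_smooth)
errs = [abs(solve_endpoint(staircase(c_smooth, d)) - u_true)
        for d in (0.1, 0.01, 0.001)]
```

The endpoint error should shrink as $\Delta$ decreases, consistent with (but of course not proving) an affirmative answer.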
At this week's labs we will implement the AdaBoost algorithm. The AdaBoost algorithm takes a class of “weak” classifiers and boosts their performance by combining them into a “strong” classifier, with adaptive weights assigned to each of the weak classifiers (AdaBoost stands for “adaptive boosting”). This way even relatively simple classifiers can be boosted to practically interesting accuracy levels. In AdaBoost, the classification function $H(x)$ has the form$$H(x) = \mbox{sign}(F(x))$$$$F(x)=\sum^T_{t=1}\alpha_t h_t(x) $$where $h_t(x)\;: \mathcal{X} \to \{-1, +1\}$ are the selected weak classifiers and $\alpha_t$ are the mixing weights. The training algorithm itself is relatively simple:

The AdaBoost Algorithm

Input: Training data $\{(x_1,y_1),\dotsc,(x_n,y_n)\}$; $x_i \in \mathcal{X}, y_i \in \{-1, +1\}$, number of iterations $T$, a set of weak classifiers $\mathcal{H}$

Initialise training data weights: $D_1(i) = 1/n$ for $i = 1,\dots,n$. For $t = 1$ to $T$:

1. Select the weak classifier $h_t \in \mathcal{H}$ with the lowest weighted error $\epsilon_t = \sum_{i=1}^{n} D_t(i)\,[h_t(x_i) \neq y_i]$.
2. If $\epsilon_t \geq 1/2$, stop.
3. Compute its weight $\alpha_t = \frac{1}{2}\ln\left(\frac{1-\epsilon_t}{\epsilon_t}\right)$.
4. Update the training data weights: $D_{t+1}(i) = D_t(i)\exp(-\alpha_t y_i h_t(x_i))/Z_t$, where $Z_t$ is a normalisation constant such that $\sum_{i=1}^{n} D_{t+1}(i) = 1$.

Output: “Strong” classifier $H(x)=\mbox{sign}(F(x)) = \mbox{sign}(\sum^T_{t=1}\alpha_t h_t(x))$

In this assignment, we will use the AdaBoost learning algorithm to train a digit classifier. The input to the algorithm will be a set of 13×13 grayscale images of digits and their corresponding labels (0-9). We will select one of the digits as the positive class ($y_i = +1$) and the rest of the images will form the negative part of the training set ($y_i = -1$). The task is then to distinguish the selected digit from the rest of the digits. We are also free to choose the set of weak classifiers, $\mathcal{H}$. Ideally, we would like them to be discriminative enough for the given task, but they do not necessarily need to be “too strong”. In our case, possible weak classifiers include local binary patterns (LBP), various histogram-based features, some edge-based features, and so on. We may use single-threshold decisions, build decision trees, or use some form of regression.
Note that this is the strength of the AdaBoost algorithm – you are free to combine various weak classifiers and the algorithm determines which ones are useful for the task. Our dataset is rather small and simple, so to see the effect of combining weak classifiers into a stronger one, we will keep the weak classifiers intentionally rather simple. Each weak classifier $h_{r,c}(I; \theta, p)\in\mathcal{H}$ is based on the intensity value of one pixel and is parametrised by four numbers: its coordinates in the image $I$, row $r$ and column $c$; an intensity threshold $\theta\in \mathbb{R}$; and a parity flag $p \in \{+1, -1\}$ indicating whether the positive class is above or below the threshold. Each such weak classifier takes the image intensity at position $(r, c)$, compares it with the threshold $\theta$ and returns the class using the parity:$$h_{r,c}(I; \theta, p) = \mbox{sign}(p \cdot (I(r,c) - \theta))$$Note that we have 13×13=169 weak classifiers, which are further parametrised by the threshold and parity. This gives us a large variety of weak classifiers while keeping them rather weak and simple (much stronger weak classifiers could easily be used if we considered relations between pixels, like the above-mentioned LBPs). When searching for the best weak classifier in step 1 of the AdaBoost learning algorithm, we are thus searching over the $\theta$ and $p$ parameters of each of the 169 “template” classifiers and selecting the one with the lowest weighted error.

To fulfil this assignment, you need to submit these files (all packed in a single .zip file) into the upload system: answers.txt, assignment_09.m, adaboost.m, adaboost_classify.m, compute_error.m, error_evolution.png, classification.png, weak_classifiers.png

Start by downloading the template of the assignment.

(Python development is experimental – use at your own risk. The standard is to use Matlab; you may skip the following info if you follow the standard rules.) General information for Python development:
For Python, submit adaboost.ipynb and adaboost.py, implementing the functions adaboost, adaboost_classify and compute_error. Use the template of the assignment. When preparing the archive file for the upload system, do not include any directories; the files have to be in the archive file root.

Function interfaces:

[strong_class, wc_error, upper_bound] = adaboost(X, y, num_steps) – runs num_steps rounds of boosting (selecting each weak classifier with findbestweak) and returns the strong classifier strong_class, the weighted errors wc_error of the selected weak classifiers, and the upper_bound on the training error.

errors = compute_error(strong_class, X, y) – computes the classification errors on the data X.

classify = adaboost_classify(strong_class, X) – classifies the data X with the strong classifier.

Fill the correct answers into your answers.txt file.

Implement the AdaBoost classifier for the data described in the lecture slides [2]. Each weak classifier is a threshold/parity decision on a radial line from the data center (the thresholds are actually what is drawn in the slides). Discretise the angles of the directions of the radial lines to a reasonable number (e.g. every 5 degrees). Feel free to use the functions generate_data.m, show_data.m, showclassif.m and vis_boundary.m available in the template.

[1] short support text on AdaBoost
[2] AdaBoost lecture
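As a language-agnostic illustration of the training loop above (this is not the assignment's Matlab template; the function names and the tiny 1-D dataset in the example are purely illustrative), a minimal AdaBoost with threshold/parity stumps might look like:

```python
import numpy as np

def fit_stump(X, y, D):
    """Exhaustively pick the best threshold/parity stump over all features.

    X: (n, d) array of features (e.g. flattened pixel intensities),
    y: (n,) labels in {-1, +1}, D: (n,) non-negative weights summing to 1.
    Returns (feature_index, theta, parity, weighted_error)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for theta in np.unique(X[:, j]):
            for p in (1, -1):
                pred = np.where(p * (X[:, j] - theta) >= 0, 1, -1)
                err = D[pred != y].sum()
                if err < best[3]:
                    best = (j, float(theta), p, err)
    return best

def adaboost(X, y, T):
    """Train a strong classifier as a weighted vote of stumps."""
    n = len(y)
    D = np.full(n, 1.0 / n)                 # D_1(i) = 1/n
    strong = []
    for _ in range(T):
        j, theta, p, err = fit_stump(X, y, D)
        if err >= 0.5:                      # no weak classifier beats chance
            break
        err = max(err, 1e-12)               # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(p * (X[:, j] - theta) >= 0, 1, -1)
        D = D * np.exp(-alpha * y * pred)
        D = D / D.sum()                     # normalisation constant Z_t
        strong.append((alpha, j, theta, p))
    return strong

def adaboost_classify(strong, X):
    """Evaluate H(x) = sign(sum_t alpha_t h_t(x))."""
    F = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
            for a, j, t, p in strong)
    return np.sign(F)
```

On data that no single stump can classify (e.g. an interval in 1-D), the boosted vote does better than any individual weak classifier.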
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x $$ and $$ f \circ g = 1_y. $$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are exactly the bijections, and there are lots of them! One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \quad \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \quad \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
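Puzzles 145 and 146 can be made concrete with finite sets, representing a function as a Python dict (a small sketch; the codomain here is taken to be the dict's set of values):

```python
def compose(g, f):
    """Composition in Set: (g after f)(x) = g(f(x)), with dicts as functions."""
    return {x: g[f[x]] for x in f}

def inverse(f):
    """Inverse of f viewed as a map onto its set of values.

    Returns None when f is not injective (then no two-sided inverse exists)."""
    inv = {v: k for k, v in f.items()}
    return inv if len(inv) == len(f) else None

f = {1: 'a', 2: 'b', 3: 'c'}     # a bijection {1,2,3} -> {'a','b','c'}
g = inverse(f)
id_dom = compose(g, f)           # should be the identity on {1,2,3}
id_cod = compose(f, g)           # should be the identity on {'a','b','c'}
h = {1: 'a', 2: 'a'}             # not injective, hence not an isomorphism
```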
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all these facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy: to modern computers: But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
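The formula \( g(b) = \bigvee \{a \in A : \; f(a) \le b \} \) from earlier in this lecture can be tried out on a small chain in Python (purely illustrative; in a total order the join of a finite nonempty set is just its maximum):

```python
def right_adjoint(f, A, B):
    """g(b) = join of { a in A : f(a) <= b }; on a finite chain, that's max.

    Assumes { a in A : f(a) <= b } is nonempty for every b in B."""
    return {b: max(a for a in A if f(a) <= b) for b in B}

A = range(0, 11)                  # the chain 0 <= 1 <= ... <= 10
B = range(0, 21)
f = lambda a: 2 * a               # monotone and join-preserving
g = right_adjoint(f, A, B)

# the defining property of the adjunction: f(a) <= b  iff  a <= g(b)
galois = all((f(a) <= b) == (a <= g[b]) for a in A for b in B)
```

Here `g` comes out as integer halving, the familiar right adjoint of doubling on the natural numbers.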
There are many words and sentences in mathematics that I basically completely don't understand, including the words "Koszul" and "derived". But rather than ask for a complete description of such words, I will ask about a particular example, and hope that an MOer can spell out concretely what's going on. (Of course, links to good write-ups of Koszul, etc., are always welcome.) There is a relatively easy theorem, probably due to Chevalley and Eilenberg, with many generalizations due to Koszul: Let $L$ be a (finite-dimensional) vector space (in characteristic $0$); let $[1] L^*$ denote the dual vector space, "shifted" to a graded vector space supported in degree $1$; and let $([1]L^*)^{\vee \bullet}$ denote the free graded-commutative algebra generated by $[1]L^*$ (I impose the Koszul rule for signs, so that as an algebra $([1]L^*)^{\vee \bullet} = (L^*)^{\wedge\bullet}$ is classically an alternating algebra). Then to give a Lie algebra structure to $L$ is the same as giving to $([1]L^*)^{\vee \bullet}$ a square-zero degree-$1$ derivation. The construction is (contravariantly) functorial and full and faithful, and so embeds the category of Lie algebras fully-faithfully into the (opposite to the) category of commutative dgas. Here are two examples. I will call my characteristic-$0$ field $\mathbb R$. The one-dimensional Lie algebra corresponds to the dga $\mathbb R \overset 0 \to \mathbb R \xi$, where $\xi$ is the coordinate function on the one-dimensional vector space, and the algebra is graded-commutative, so $\xi^2 = 0$. The two-dimensional non-commutative Lie algebra corresponds to the dga $\mathbb R \overset 0 \to (\mathbb R \xi \oplus \mathbb R \upsilon) \overset{\xi \mapsto 0, \, \upsilon \mapsto \xi\upsilon}\longrightarrow \mathbb R\xi\upsilon$. Now, once we're in the land of dgas, it makes sense to talk about their (co?)homology. 
In the above examples, the cohomologies of the one- and two-dimensional Lie algebras are isomorphic as algebras; the isomorphism on cohomology is given in one direction by the "abelianization" map from the two-dimensional Lie algebra to the one-dimensional Lie algebra. Question: How should I interpret "geometrically" the fact that the cohomologies agree? An idea that I've heard is that somehow dgas-up-to-? should correspond to some notion of "stack". Now, it is certainly not the case that "point mod one-dimensional" and "point mod two-dimensional" present the same stack. So that's not quite what's going on. Perhaps the problem is that these algebras are not the same as ${\rm A}_\infty$ algebras; we should probably add the word "${\rm A}_\infty$" to the list of words I don't really know. If this is the answer, I hope that some MOer will spell it out. Another idea that I've heard is that the passage "Lie algebra to dga to cohomology" remembers exactly the "derived category of representations of the Lie algebra". Again, I don't really know what that is, but that's OK. Somehow, a representation of a Lie algebra should be a "quasicoherent sheaf on point mod the Lie algebra", and I know that people like "derived categories of quasicoherent sheaves". So should I understand Koszul duality as remembering not all the data of some "stack-like object" but just some "derived data"? If so, again I hope that some MOer will spell it out pedantically in this example.
Hello MO World I'm working on a paper involving embedding your favourite measure-preserving transformation into a topological model (think Krieger generator theorem: embedding in a full shift) and have run into a question about entropy and Bowen balls. Setup: $T$ is a homeomorphism from a compact metric space $X$ to itself. Definition: $B(x,n,\delta)=\lbrace y\colon d(T^ix,T^iy) < \delta\text{ for $0\le i < n$}\rbrace$ is a Bowen $(n,\delta)$-ball around $x$. Let $\mu$ be an ergodic invariant measure for $T$ and let $A$ be a set of positive measure. What can be said about the functions $$ f_n(x)=\frac{\mu(A\cap B(x,n,\delta))}{\mu(B(x,n,\delta))} $$ as $n\to\infty$ for fixed $\delta$? Notice that if $X$ is a shift space, then the $B(x,n,\delta)$ partition the space and the $f_n$ are just conditional expectations with respect to that partition. In that case, $\int f_n \ d\mu(x)$ is equal to $\mu(A)$ for all $n$. This is the kind of conclusion I'd like to find in the general case. One interpretation of the question is that I'm asking: if you pick a point $x$ according to the measure $\mu$ and then pick a second point $y$ according to the restriction of $\mu$ to the Bowen ball around $x$, then is the distribution of $y$ `similar' to $\mu$? For a non-dynamical example where the resampled distribution is not the same, consider the set $\lbrace 1,2,3\rbrace$ with the usual distance in $\mathbb R$. If you pick a point $x$, and then sample a point $y$ in the 1.5 ball around $x$, you're more likely to get to 2 than you are to 1 or 3. If anyone has ideas, or has seen something similar, I'd really like to hear about it...
Your clicks are coming from two sources: the wavetable and the hardware. If you are looking to create a simple little instrument that produces tones by playing back a wavetable, then there is no way to fix the size of the "playback window" to reproduce tones without clicks. The frequencies that compose the chromatic scale follow a geometric progression that achieves a doubling of the frequency in 12 steps. The frequency for each tone is produced by multiplying the previous tone by something like 1.0595. That factor is worked out as follows: take $A_F = 440Hz$ and its octave at $A_O = 880Hz$. Each one of the tones is $A_{n+1} = A_n \cdot r$, so $A_O = A_F \cdot r^{12}$. But we know that $A_O = A_F \cdot 2$. Therefore, $r^{12} = 2$ and $r \approx 1.059463094359$. So, you might fit $A_F, A_O$ neatly in a wavetable, but one period of $A_F \sharp$ is now $\approx 1.0594$ times shorter, so it no longer fits in a whole number of samples. Not only is there no integer there, but even if you could reproduce the tones in neat windows you would get problems with the tempo, the duration of each note in a musical piece, because one period of $A_F$ lasts longer than one period of $A_F \sharp$. The "classic" way around this is to set up a high-resolution wavetable of a single sinusoid which is then read back at "different speeds", with interpolation. So, $A_F$ is read back at $r^0 = 1$ speed (and practically no interpolation) but $A_F \sharp$ is read back at $r^{1} \approx 1.0594$ speed, with a bit of interpolation when the "needle" has to land between known samples. BUT! This has nothing to do with your output stream. In other words, you have to set up (or adopt) a stream system where samples are continuously being pushed to the sound card and you have a "global" sense of time. Once upon a time, you did this by setting up three pointers (that's in C, C++) to three buffers. Let's call them $H,D,L$. You set up the sound card to record into $H$ and play back $L$, and while that was going on, your main code was processing buffer $D$.
When the sound card called its callback to signify that the recording had ended, you would "circularly switch" the buffers around so that they now pointed to $L \rightarrow D, D \rightarrow H, H \rightarrow silence$ and re-set up the sound card to do the same job while you were now processing the $D$ which is now your freshly recorded $H$. Those buffers had a length, say for instance 1000 samples, and the task then became finding a sweet spot that balanced buffer length against responsiveness. But as you can see, in this system samples are being pushed out to the sound card continuously in blocks of $bufferLength$ samples and you don't really care if what is being played back fits the buffer. All you do is iterate your models (e.g. the wavetable), produce a buffer of samples, freeze the models, play back that buffer and start again from the top. Now, obviously, the above includes real-time input from audio sources too. If all you are doing is a tiny little synth, you could have two buffers and switch between them, working on one while the other is being played back. But even if you did this with some cheap off-the-shelf sound card in the market, you would still get clicks, although you had worked so hard to perfectly align the zeroes between blocks. This is because of something called latency. In other words, just because your program called some Operating System Application Programming Interface function that said "Record NOW!" (or "Play NOW!"), it doesn't mean that this command will propagate through the driver and eventually the audio hardware and recording will start NOW. Recording (and playback) will commence some $NOW! + latency$ units of time later, and in off-the-shelf sound cards, this used to be an awfully long time. So awful that you could hear the silence between calls.
So, the solution there was to move over to something like ASIO, which eventually packaged this whole workflow into a complete solution, all the way from the software calls to the hardware. As an example, an 8-track external sound card can do 48kHz stereo playback/recording over buffers of 256 samples (with absolutely no clicks). This means that sound is processed in blocks that last approximately 5ms. The human ear begins resolving sounds as distinct when they are apart by more than 40ms, so this 5ms is really safe. It means that we can plug an electric instrument into the sound card and process it in real time, as if it went through an effects box, and the performer will not notice any difference between what they play and what they hear.

If you tried to make $H,D,L$ the size of 256 and tried to do the same with an off-the-shelf (relatively popular) sound card, first of all you would g/et/ /an/awf/ul/am//ount/of/cli/k/s// (one every 5ms), and secondly, the driver and operating system would be so overwhelmed by the amount of function calls that soon you would be greeted with seg faults, blue screens, system freezes, reprepreprepreprepreprepreprepreprereprepreprepreprepreprepeated waves out of synsynsynsynsynsynsynsynsynsynreprepresync and other amusing side effects.

So, bottom line: keep working on the DSP side of your synth, but make sure that your sound hardware is up to the task as well if you really want to get rid of the clicks. Hope this helps.
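The buffer arithmetic above is easy to verify; a small sketch, assuming the 48 kHz / 256-sample figures quoted:

```python
# Block duration for a 256-sample buffer at 48 kHz, versus the ~40 ms
# threshold at which the ear starts resolving events as distinct.
SAMPLE_RATE = 48_000
BUFFER_LEN = 256

block_ms = 1000.0 * BUFFER_LEN / SAMPLE_RATE  # ~5.33 ms per block
callbacks_per_s = SAMPLE_RATE / BUFFER_LEN    # 187.5 driver callbacks/second
```

That callback rate is also why a naive API with high per-call latency falls over at small buffer sizes: the driver must complete a full round trip 187-plus times per second.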
In both cases, there appears to be a confusion of terminology between common and technical uses.

We commonly use methane and propane for cooking (and home heating), but not ethane. I would expect ethane to be suitable for this, being in between the two, but I've never heard of anyone using it for this purpose. Why is that?

In reality, anyone using natural gas as a cooking fuel is likely cooking with both $\ce{CH4}$ and $\ce{C2H6}$. From the above-linked Wikipedia page (emphasis added):

Natural gas is a naturally occurring hydrocarbon gas mixture consisting primarily of methane, but commonly including varying amounts of other higher alkanes, and sometimes a small percentage of carbon dioxide, nitrogen, hydrogen sulfide, or helium.

EngineeringToolbox.com reports the following representative composition ranges (probably in percent by volume?) of natural gas: $$\text{Composition (%)} \\\begin{array}{ccccccccc}\hline& \ce{CO2} & \ce{CO} & \ce{CH4} & \ce{C2H6} & \ce{H2} & \ce{H2S} & \ce{O2} & \ce{N2} \\\hline\text{Min} & 0 & 0 & 82 & 0 & 0 & 0 & 0 & 0.5 \\\text{Max} & 0.8 & 0.45 & 93 & 15.8 & 1.8 & 0.18 & 0.35 & 8.4\\\hline\end{array}$$ Given that $\ce{CH4}$ is by far the major constituent of natural gas, it is sensible that it is referred to commonly by the term methane, even if it is often actually a mixture of methane, ethane, and trace higher hydrocarbons.

On a related note, why is butane used for cigarette lighters and basically nothing else (in ordinary life, I mean)?

Per the Wikipedia page for liquefied petroleum gas, linked in Mithoron's comment, most of what is commonly referred to as propane or butane is actually a mix of $\ce{C3H8}$ and $\ce{C4H10}$ in varying ratios (emphasis added):

Liquefied petroleum gas or liquid petroleum gas (LPG or LP gas), also referred to as simply propane or butane, are flammable mixtures of hydrocarbon gases used as fuel in heating appliances, cooking equipment, and vehicles. ... Varieties of LPG bought and sold include mixes that are mostly propane ($\ce{C3H8}$), mostly butane ($\ce{C4H10}$) and, most commonly, mixes including both propane and butane. In the northern hemisphere winter, the mixes contain more propane, while in summer, they contain more butane.

So, Mithoron is right: $\ce{C4H10}$ is used in much more than just cigarette lighters; it's just that common usage happens to apply the term butane in this context. As a further note, I would guess the primary rationale for using different mixes of $\ce{C3H8}$/$\ce{C4H10}$ deals with the vapor pressures of the two gases. The energy densities $\eta$ of the liquefied gases, approximated as ${-\Delta H_c^\circ\rho \over \mathrm{MW}}$, are nearly equal: $$\begin{array}{ccccc}\hline\text{Quantity} & \text{Units} & \ce{C3H8} & n\text{-}\ce{C4H10} & iso\text{-}\ce{C4H10} \\\hline\Delta H_c^\circ & \mathrm{kJ\over mol} & -2202^1 & -2878^2 & -2869^3\\\mathrm{MW} & \mathrm{g\over mol} & 44 & 58 & 58 \\\rho & \mathrm{g\over mL} & 0.58^4 & 0.604^5 & 0.56^6 \\\hline\eta & \mathrm{MJ\over L} & 29.0 & 30.0 & 27.7 \\\hline\end{array}$$ Thus, roughly comparable energy value is obtained per volume of each, and there is little reason to favor one or the other on this basis. Practically, the lower limit of acceptable vapor pressure is that which provides sufficient flow of gaseous hydrocarbon to the point of combustion. The upper limit is more or less defined by the strength of the container and plumbing.
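The $\eta$ row of the table can be recomputed directly from the three rows above it; a small Python check, with the values copied from the table:

```python
# eta = -dHc * rho / MW; units work out as
# (kJ/mol) * (g/mL) / (g/mol) = kJ/mL = MJ/L.
fuels = {
    "C3H8":      {"dHc": -2202, "MW": 44, "rho": 0.58},
    "n-C4H10":   {"dHc": -2878, "MW": 58, "rho": 0.604},
    "iso-C4H10": {"dHc": -2869, "MW": 58, "rho": 0.56},
}
eta = {name: -f["dHc"] * f["rho"] / f["MW"] for name, f in fuels.items()}
```

The computed values round to 29.0, 30.0 and 27.7 MJ/L, matching the table.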
Consider the following vapor pressure data, calculated from fitted equations (sources: propane | n-butane | iso-butane): $$\text{Vapor Pressure (atm)} \\\begin{array}{cccc}\hline & 0~^\circ\mathrm C& 25~^\circ\mathrm C & 38~^\circ\mathrm C \\\hline\ce{C3H8} & 4.7 & 9.3 & 12.8 \\n\text{-}\ce{C4H10} & 1.0 & 2.4 & 3.5 \\iso\text{-}\ce{C4H10} & 1.5 & 3.4 & 4.9 \\\hline\end{array}$$ Cigarette lighters (especially disposable plastic ones) presumably do actually use butane-rich fuel mixes, so as not to approach or exceed the mechanical limits of the lightweight, portable containers. As well, the temperature at point of use is somewhat better controlled, as even on cold days the heat from the user's hand is likely to maintain the butane vapor pressure high enough to provide sufficient gas flow. Finally, as noted in a comment by A.K., lighters are generally charged with iso-butane, which is sensible as it is the isomer exhibiting modestly higher vapor pressures. For applications where metal-walled containers are feasible (grilling, automotive fuel, etc.), however, structural considerations are less important and the higher deliverable pressure from propane becomes advantageous. In hot summer months, though, I would assume the higher fraction of butane is used so as to mitigate the fairly dramatic increase in vapor pressure of pure propane with increasing temperature.

$^1$ Wikipedia, "Propane (data page)"
$^2$ Wikipedia, "Butane (data page)"
$^3$ Wikipedia, "Isobutane (data page)"
$^4$ Engineering Toolbox, "Chemical, Physical and Thermal Properties of Propane Gas - $\ce{C3H8}$"
$^5$ Engineering Toolbox, "Chemical, Physical and Thermal Properties of n-Butane"
$^6$ AeroPres, "Physical Properties" datasheet (PDF link)
There exist a number of robust estimators of scale. A notable example is the median absolute deviation, which relates to the standard deviation as $\sigma = \mathrm{MAD}\cdot1.4826$. In a Bayesian framework there exist a number of ways to robustly estimate the location of a roughly normal distribution (say, a normal contaminated by outliers); for example, one could assume the data is distributed as a t distribution or a Laplace distribution.

Now my question: What would a Bayesian model for measuring the scale of a roughly normal distribution in a robust way be, robust in the same sense as the MAD or similar robust estimators? As is the case with the MAD, it would be neat if the Bayesian model could approach the SD of a normal distribution in the case when the distribution of the data actually is normal.

edit 1: A typical example of a model that is robust against contamination/outliers when assuming the data $y_i$ is roughly normal is using a t distribution like: $$y_i \sim \mathrm{t}(m, s,\nu)$$ where $m$ is the mean, $s$ is the scale, and $\nu$ is the degrees-of-freedom. With suitable priors on $m, s$ and $\nu$, $m$ will be an estimate of the mean of $y_i$ that is robust against outliers. However, $s$ will not be a consistent estimate of the SD of $y_i$, as $s$ depends on $\nu$. For example, if $\nu$ were fixed at 4.0 and the model above were fitted to a huge number of samples from a $\mathrm{Norm}(\mu=0,\sigma=1)$ distribution, then $s$ would be around 0.82. What I'm looking for is a model that is robust, like the t model, but for the SD instead of (or in addition to) the mean.

edit 2: Here follows a coded example in R and JAGS of how the t-model mentioned above is more robust with respect to the mean.
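For reference, here is a minimal Python illustration (separate from the question's R code) of the MAD-based estimate behaving consistently on normal data while resisting contamination:

```python
import random
import statistics

def mad_sd(x):
    """MAD scaled by 1.4826: a consistent estimate of the SD for normal data."""
    m = statistics.median(x)
    return 1.4826 * statistics.median(abs(v - m) for v in x)

random.seed(1)
# Purely normal data: the scaled MAD should approach the true SD of 1.
clean = [random.gauss(0, 1) for _ in range(100_000)]

# Contaminated data, mirroring the flavor of the question's example.
contaminated = [random.gauss(10, 10) for _ in range(10_000)] + \
               [random.gauss(100, 100) for _ in range(1_000)]
```

On `contaminated`, the ordinary sample SD is blown up by the outliers while the scaled MAD stays near the scale of the bulk of the data.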
# generating some contaminated data
y <- c(rnorm(100, mean=10, sd=10),
       rnorm(10, mean=100, sd=100))

#### A "standard" normal model ####
library(rjags)

model_string <- "model{
  for(i in 1:length(y)) {
    y[i] ~ dnorm(mu, inv_sigma2)
  }
  mu ~ dnorm(0, 0.00001)
  inv_sigma2 ~ dgamma(0.0001, 0.0001)
  sigma <- 1 / sqrt(inv_sigma2)
}"

model <- jags.model(textConnection(model_string), list(y = y))
mcmc_samples <- coda.samples(model, "mu", n.iter=10000)
summary(mcmc_samples)

### The quantiles of the posterior of mu
##  2.5%   25%   50%   75% 97.5%
##   9.8  14.3  16.8  19.2  24.1

#### A (more) robust t-model ####
model_string <- "model{
  for(i in 1:length(y)) {
    y[i] ~ dt(mu, inv_s2, nu)
  }
  mu ~ dnorm(0, 0.00001)
  inv_s2 ~ dgamma(0.0001, 0.0001)
  s <- 1 / sqrt(inv_s2)
  nu ~ dexp(1/30)
}"

model <- jags.model(textConnection(model_string), list(y = y))
mcmc_samples <- coda.samples(model, "mu", n.iter=1000)
summary(mcmc_samples)

### The quantiles of the posterior of mu
##  2.5%   25%   50%   75% 97.5%
##  8.03  9.35  9.99 10.71 12.14
2019-09-20 08:41 Search for the $^{73}\mathrm{Ga}$ ground-state doublet splitting in the $\beta$ decay of $^{73}\mathrm{Zn}$ / Vedia, V (UCM, Madrid, Dept. Phys.) ; Paziy, V (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Swierk) ; Walters, W B (Maryland U., Dept. Chem.) ; Aprahamian, A (Notre Dame U.) ; Bernards, C (Cologne U. ; Yale U. (main)) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Bucher, B (Notre Dame U. ; LLNL, Livermore) ; Chiara, C J (Maryland U., Dept. Chem. ; Argonne, PHY) et al. The existence of two close-lying nuclear states in $^{73}$Ga has recently been experimentally determined: a $1/2^-$ spin-parity for the ground state was measured in a laser spectroscopy experiment, while a $J^{\pi} = 3/2^-$ level was observed in transfer reactions. This scenario is supported by Coulomb excitation studies, which set a limit for the energy splitting of 0.8 keV. [...] 2017 - 13 p. - Published in : Phys. Rev. C 96 (2017) 034311

2019-09-20 08:41 Search for shape-coexisting 0$^+$ states in $^{66}$Ni from lifetime measurements / Olaizola, B (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Warsaw) ; Poves, A (Madrid, Autonoma U.) ; Nowacki, F (Strasbourg, IPHC) ; Aprahamian, A (Notre Dame U.) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Cal-González, J (UCM, Madrid, Dept. Phys.) ; Ghiţa, D (Bucharest, IFIN-HH) ; Köster, U (Laue-Langevin Inst.) et al. The lifetime of the 0$_3^+$ state in $^{66}$Ni, two neutrons below the $N=40$ subshell gap, has been measured. The transition $B(E2;0_3^+ \rightarrow 2_1^+)$ is one of the most hindered E2 transitions in the Ni isotopic chain and it implies that, unlike $^{68}$Ni, there is a spherical structure at low excitation energy. [...] 2017 - 6 p. - Published in : Phys. Rev. C 95 (2017) 061303

2019-09-17 07:00 Laser spectroscopy of neutron-rich tin isotopes: A discontinuity in charge radii across the $N=82$ shell closure / Gorges, C (Darmstadt, Tech. Hochsch.) ; Rodríguez, L V (Orsay, IPN) ; Balabanski, D L (Bucharest, IFIN-HH) ; Bissell, M L (Manchester U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Garcia Ruiz, R F (Leuven U. ; CERN ; Manchester U.) ; Georgiev, G (Orsay, IPN) ; Gins, W (Leuven U.) ; Heylen, H (Heidelberg, Max Planck Inst. ; CERN) et al. The change in mean-square nuclear charge radii $\delta \left \langle r^{2} \right \rangle$ along the even-A tin isotopic chain $^{108-134}$Sn has been investigated by means of collinear laser spectroscopy at ISOLDE/CERN using the atomic transitions $5p^2\ {}^1S_0 \rightarrow 5p6s\ {}^1P_1$ and $5p^2\ {}^3P_0 \rightarrow 5p6s\ {}^3P_1$. With the determination of the charge radius of $^{134}$Sn and corrected values for some of the neutron-rich isotopes, the evolution of the charge radii across the $N=82$ shell closure is established. [...] 2019 - 7 p. - Published in : Phys. Rev. Lett. 122 (2019) 192502

2019-09-17 07:00 Radioactive boron beams produced by isotope online mass separation at CERN-ISOLDE / Ballof, J (CERN ; Mainz U., Inst. Kernchem.) ; Seiffert, C (CERN ; Darmstadt, Tech. U.) ; Crepieux, B (CERN) ; Düllmann, Ch E (Mainz U., Inst. Kernchem. ; Darmstadt, GSI ; Helmholtz Inst., Mainz) ; Delonca, M (CERN) ; Gai, M (Connecticut U. LNS Avery Point Groton) ; Gottberg, A (CERN) ; Kröll, T (Darmstadt, Tech. U.) ; Lica, R (CERN ; Bucharest, IFIN-HH) ; Madurga Flores, M (CERN) et al. We report on the development and characterization of the first radioactive boron beams produced by the isotope mass separation online (ISOL) technique at CERN-ISOLDE.
Despite the long history of the ISOL technique, which exploits thick targets, boron beams have up to now not been available. [...] 2019 - 11 p. - Published in : Eur. Phys. J. A 55 (2019) 65

2019-09-17 07:00 Inverse odd-even staggering in nuclear charge radii and possible octupole collectivity in $^{217,218,219}\mathrm{At}$ revealed by in-source laser spectroscopy / Barzakh, A E (St. Petersburg, INP) ; Cubiss, J G (York U., England) ; Andreyev, A N (York U., England ; JAEA, Ibaraki ; CERN) ; Seliverstov, M D (St. Petersburg, INP ; York U., England) ; Andel, B (Comenius U.) ; Antalic, S (Comenius U.) ; Ascher, P (Heidelberg, Max Planck Inst.) ; Atanasov, D (Heidelberg, Max Planck Inst.) ; Beck, D (Darmstadt, GSI) ; Bieroń, J (Jagiellonian U.) et al. Hyperfine-structure parameters and isotope shifts for the 795-nm atomic transitions in $^{217,218,219}$At have been measured at CERN-ISOLDE, using the in-source resonance-ionization spectroscopy technique. Magnetic dipole and electric quadrupole moments, and changes in the nuclear mean-square charge radii, have been deduced. [...] 2019 - 9 p. - Published in : Phys. Rev. C 99 (2019) 054317

2019-09-17 07:00 Investigation of the $\Delta n = 0$ selection rule in Gamow-Teller transitions: The $\beta$-decay of $^{207}$Hg / Berry, T A (Surrey U.) ; Podolyák, Zs (Surrey U.) ; Carroll, R J (Surrey U.) ; Lică, R (CERN ; Bucharest, IFIN-HH) ; Grawe, H ; Timofeyuk, N K (Surrey U.) ; Alexander, T (Surrey U.) ; Andreyev, A N (York U., England) ; Ansari, S (Cologne U.) ; Borge, M J G (CERN ; Madrid, Inst. Estructura Materia) et al. Gamow-Teller $\beta$ decay is forbidden if the number of nodes in the radial wave functions of the initial and final states is different. This $\Delta n=0$ requirement plays a major role in the $\beta$ decay of heavy neutron-rich nuclei, affecting the nucleosynthesis through the increased half-lives of nuclei on the astrophysical $r$-process pathway below both $Z=50$ (for $N>82$) and $Z=82$ (for $N>126$). [...] 2019 - 5 p. - Published in : Phys. Lett. B 793 (2019) 271-275

2019-09-14 06:30 Precision measurements of the charge radii of potassium isotopes / Koszorús, Á (KU Leuven, Dept. Phys. Astron.) ; Yang, X F (KU Leuven, Dept. Phys. Astron. ; Peking U., SKLNPT) ; Billowes, J (Manchester U.) ; Binnersley, C L (Manchester U.) ; Bissell, M L (Manchester U.) ; Cocolios, T E (KU Leuven, Dept. Phys. Astron.) ; Farooq-Smith, G J (KU Leuven, Dept. Phys. Astron.) ; de Groote, R P (KU Leuven, Dept. Phys. Astron. ; Jyvaskyla U.) ; Flanagan, K T (Manchester U.) ; Franchoo, S (Orsay, IPN) et al. Precision nuclear charge radii measurements in the light-mass region are essential for understanding the evolution of nuclear structure, but their measurement represents a great challenge for experimental techniques. At the Collinear Resonance Ionization Spectroscopy (CRIS) setup at ISOLDE-CERN, a laser frequency calibration and monitoring system was installed and commissioned through the hyperfine spectra measurement of $^{38-47}$K. [...] 2019 - 11 p. - Published in : Phys. Rev. C 100 (2019) 034304

2019-09-12 09:23 Evaluation of high-precision atomic masses of A ∼ 50-80 and rare-earth nuclides measured with ISOLTRAP / Huang, W J (CSNSM, Orsay ; Heidelberg, Max Planck Inst.) ; Atanasov, D (CERN) ; Audi, G (CSNSM, Orsay) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cakirli, R B (Istanbul U.) ; Herlert, A (FAIR, Darmstadt) ; Kowalska, M (CERN) ; Kreim, S (Heidelberg, Max Planck Inst. ; CERN) ; Litvinov, Yu A (Darmstadt, GSI) ; Lunney, D (CSNSM, Orsay) et al.
High-precision mass measurements of stable and beta-decaying nuclides $^{52-57}$Cr, $^{55}$Mn, $^{56,59}$Fe, $^{59}$Co, $^{75, 77-79}$Ga, and the lanthanide nuclides $^{140}$Ce, $^{140}$Nd, $^{160}$Yb, $^{168}$Lu, $^{178}$Yb have been performed with the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. The new data are entered into the Atomic Mass Evaluation and improve the accuracy of masses along the valley of stability, strengthening the so-called backbone. [...] 2019 - 9 p. - Published in : Eur. Phys. J. A 55 (2019) 96

2019-09-05 06:35 Nuclear charge radii of $^{62-80}$Zn and their dependence on cross-shell proton excitations / Xie, L (Manchester U.) ; Yang, X F (Peking U., SKLNPT ; Leuven U.) ; Wraith, C (Liverpool U.) ; Babcock, C (Liverpool U.) ; Bieroń, J (Jagiellonian U.) ; Billowes, J (Manchester U.) ; Bissell, M L (Manchester U. ; Leuven U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Filippin, L (U. Brussels (main)) et al. Nuclear charge radii of $^{62-80}$Zn have been determined using collinear laser spectroscopy of bunched ion beams at CERN-ISOLDE. The subtle variations of observed charge radii, both within one isotope and along the full range of neutron numbers, are found to be well described in terms of the proton excitations across the $Z=28$ shell gap, as predicted by large-scale shell model calculations. [...] 2019 - 5 p. - Published in : Phys. Lett. B 797 (2019) 134805
So we are calculating the loss $$J(\theta) = -\frac{1}{T}\sum_{t=1}^T\sum_{-m \leq j \leq m,\, j \neq 0} \log P(w_{t+j}|w_t;\theta)$$ and to do this we need to calculate $$P(o|c) = \frac{\exp(u_o^Tv_c)}{\sum_{w \in V} \exp(u_w^Tv_c)},$$ which is computationally inefficient, since the sum runs over the whole vocabulary $V$. To solve this we could use hierarchical softmax and construct a tree based on word frequency. However, I am having trouble understanding how we get the probability based on the word frequency. And what exactly is the backprop step when using hierarchical softmax?
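For the first part of the question, a minimal sketch of one common construction may help: build a Huffman tree over word frequencies, then define $P(w|c)$ as a product of sigmoids along the root-to-leaf path, one per internal node. The vocabulary, dimensions and vectors below are made up for illustration:

```python
import heapq
import math
import random
from itertools import count

def build_huffman_codes(freqs):
    """Huffman tree from word frequencies; returns
    {word: [(internal_node_id, branch_bit), ...]} ordered root -> leaf."""
    tie = count()
    heap = [(f, next(tie), [w]) for w, f in freqs.items()]
    heapq.heapify(heap)
    codes = {w: [] for w in freqs}
    node = count()
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        n = next(node)                      # new internal node
        for w in left:
            codes[w].insert(0, (n, 0))      # left branch
        for w in right:
            codes[w].insert(0, (n, 1))      # right branch
        heapq.heappush(heap, (f1 + f2, next(tie), left + right))
    return codes

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hs_prob(codes, node_vecs, v_c, word):
    """P(word | context): one sigmoid per internal node on the path,
    so evaluation costs O(log |V|) instead of O(|V|)."""
    p = 1.0
    for n, bit in codes[word]:
        s = sigmoid(sum(u * v for u, v in zip(node_vecs[n], v_c)))
        p *= s if bit == 0 else 1.0 - s
    return p

random.seed(0)
freqs = {"the": 50, "cat": 20, "sat": 10, "on": 8, "mat": 5}
codes = build_huffman_codes(freqs)
dim = 8
node_vecs = {n: [random.gauss(0, 1) for _ in range(dim)]
             for path in codes.values() for n, _ in path}
v_c = [random.gauss(0, 1) for _ in range(dim)]
total = sum(hs_prob(codes, node_vecs, v_c, w) for w in freqs)  # sums to 1
```

Frequent words end up near the root (short codes), which is where the frequency information enters. For the backprop step: the gradient of $-\log P(w|c)$ only involves $v_c$ and the internal-node vectors on the path to $w$, so each update touches $O(\log|V|)$ parameters rather than the full output matrix.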
$\newcommand{\real}{\mathbb R}\newcommand{\field}{\mathbb F}\newcommand{\cx}{\mathbb C}\newcommand{\ip}[2]{\left< #1,#2\right>}$We need to dive into the mathematics of vector spaces and inner products in order to understand what a vector means and what it means to take a scalar product of two vectors. There is a long post ahead, so bear with me even though you may think that the maths is too abstract and has nothing to do with QM. In fact, without understanding the abstract concept of vector spaces it is almost impossible to understand QM thoroughly.

Vector Spaces

Let's turn to linear algebra and think for a second about what we mean by a vector space $V$ over a field $(F,+ , \cdot , 0 ,1)$. It consists of a set $V$, a field $F$, a scalar multiplication $ \odot : F \times V \to V$, $(\lambda , v) \mapsto \lambda \odot v$ and a vector addition $ \oplus : V \times V \to V $, $ (v,w) \mapsto v \oplus w $, with the following properties:

$(V, \oplus, \tilde 0)$ is an abelian group

$\forall v,w \in V$ and $\forall \lambda \in F: \; \; \lambda \odot (v \oplus w) = \lambda \odot v \oplus \lambda \odot w$

$ \forall v \in V$ and $\forall \lambda, \mu \in F: \; \; (\lambda + \mu) \odot v = \lambda \odot v \oplus \mu \odot v $

$ \forall v \in V$ and $\forall \lambda, \mu \in F: \; \; (\lambda \cdot \mu) \odot v = \lambda \odot (\mu \odot v) $

$\forall v \in V: \; \; 1 \odot v = v$

Note that the summation in the third axiom takes place in $F$ on the left-hand side and in $V$ on the right-hand side. So a vector space can really be anything with these properties. Normally people don't distinguish between $\odot$ and $\cdot$ or $\oplus$ and $+$ because they behave similarly. For the sake of being absolutely clear I'll nevertheless make this distinction. If it bothers you, replace $\oplus \to +$ and $\odot \to \cdot$ in your head. The elements of the set $V$ are called vectors, and I didn't need to mention a thing called a basis to define them.
Basis and Generators

A basis of a vector space is then defined to be a set $B \subseteq V$ with the following two properties: $$\forall B' \subseteq B\; \text{ with }\; \# B' < \infty: \; \; \bigoplus_{b \in B'} \lambda_b \odot b = \tilde 0 \implies \lambda_b = 0$$ where $\lambda_b \in F$, and $$ \forall v \in V, \; \exists B' \subseteq B \; \text{ with }\; \# B' < \infty:\;\; v = \bigoplus_{b \in B'} \lambda_b \odot b$$ for some $\lambda_b \in F$. A set $U \subseteq V$ with the first property is called a linearly independent set, and a set $T \subseteq V$ with the second property is called a generator of the vector space. Namely, you have $$B \text{ is a basis} :\iff B \text{ is a generator of the vector space and linearly independent}$$ In a linear algebra class it is shown that all the bases of $V$ have the same cardinality. We therefore define the dimension of $V$ to be $\mathrm{dim}(V)=\#B$ for $B$ a basis of $V$.

Representation of a vector as a tuple

If your vector space is finite dimensional, i.e. if $\mathrm{dim}(V)< \infty$, then you can simplify the above conditions to: $$\bigoplus_{b\in B} \lambda_b \odot b = \tilde 0 \implies \lambda_b = 0 \; \text{ and } \; \forall v\in V: \; \; v= \bigoplus_{b\in B} \lambda_b \odot b$$ In a linear algebra course it is taught that every vector space (finite or infinite dimensional) $V$ has a basis, and that for a chosen basis the scalars $\lambda_b$ are unique. A basis is called an ordered basis if you order the elements of your basis in a tuple. If your vector space is finite dimensional and $B$ is an ordered basis, you can define a vector space isomorphism $\iota_B$ $$\iota_B : V \to F^n, \; v = \bigoplus_{b \in B} \lambda_b \odot b \mapsto (\lambda_{b_1}, \dots, \lambda_{b_n}) $$ where $b_i \in B$ $\forall i = 1 \dots n$ and $n = \#B$. There you can see the components of the vector $v$ as the numbers $\lambda_{b_i} \in F$.
Note that even though the representation of a vector $v \in V $ as an n-tuple is unique for a given basis, there is no unique representation of the vector $v \in V$ across all bases. As an example we will consider the set $V:=\real^2$ as an $F:=\real$ vector space, where $\oplus$ and $\odot$ are defined in the following fashion. I'll denote the elements of $V$ with square brackets and their representations with normal brackets to avoid confusion. $$[a,b] \oplus [c,d] = [a+c, b+d]$$ and $$\lambda \odot [a,b] = [\lambda \cdot a, \lambda \cdot b]$$ for all $a,b,c,d \in F$ and $[a,b],[c,d] \in V$. I leave you to check the vector space axioms. Note that while writing $[a,b] \in \real^2$ I'm not referring to any basis. It is merely the definition of being in $\real^2$ that lets me write $[a,b]$. Now let $B=\{b_1,b_2\}$ with $b_1 = [1,0]$ and $b_2=[0,1]$. Note that $\lambda \odot b_1 \oplus \mu \odot b_2 = [0,0] \implies \mu= \lambda =0 $, so $B$ is linearly independent. Furthermore $[a,b] = a \odot b_1 \oplus b \odot b_2$, $\forall [a,b] \in V $, as you can easily check. Thus the isomorphism $\iota_B$ is well-defined and you get $$\iota_B([a,b]) = (a,b) \;\; \forall [a,b] \in V$$ However, for another basis $C=\{c_1, c_2\}$ with $c_1 =[-1,0]$ and $c_2=[0,2]$ you get: $$\iota_C([a,b])=(-a,b/2) \;\; \forall [a,b] \in V$$ as you can easily check. Note that in this example your vectors are $[a,b]\in \real^2$, which exist as elements of $\real^2$ independent of any basis.

Another example of an $n$-dimensional vector space could be the solutions of a homogeneous linear differential equation of degree $n$. You should play with it: choose some basis and represent them as tuples. Note that in this case your vectors are functions, which solve that particular differential equation. In the case of an infinite dimensional vector space things get a little bit difficult, but to understand the basis concept finite dimensional vector spaces are the best way to go.
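A tiny numeric sketch of the two coordinate maps $\iota_B$ and $\iota_C$ from this example (the 2x2 case, solved by Cramer's rule):

```python
def coords(v, b1, b2):
    """Coordinates (x1, x2) of v = x1*b1 + x2*b2 in the basis {b1, b2},
    via Cramer's rule for the 2x2 system."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    x1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    x2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (x1, x2)

v = [3.0, 4.0]                       # the vector itself, basis-free
in_B = coords(v, [1, 0], [0, 1])     # (3.0, 4.0)
in_C = coords(v, [-1, 0], [0, 2])    # (-3.0, 2.0), matching iota_C = (-a, b/2)
```

Same vector, two different tuples: the representation changes with the basis while $v$ itself does not.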
Inner/Scalar product

Let $V$ be a vector space and $\field \in \{\real, \cx\}$ be either the real or the complex numbers. Furthermore, I'll now replace $\oplus \to +$ and $\odot \to \cdot$ to make my point more clear. A scalar product is a function $\ip \cdot \cdot: V \times V \to \field$, $(v,w) \mapsto \ip vw$ with the following properties:

$\forall v_1,v_2,w \in V, \; \forall \lambda \in \field: \;\; \ip{v_1 + \lambda v_2} w = \ip{v_1} w + \lambda^* \ip{v_2}w $

$\forall v,w_1,w_2 \in V, \; \forall \lambda \in \field: \;\; \ip{v}{ w_1 + \lambda w_2} = \ip{v}{ w_1} +\lambda \ip{v}{w_2}$

$\forall v,w \in V: \;\;\ip vw = \ip wv^*$

$\forall v \in V\setminus\{0\}:\;\; \ip vv \in \real_{>0}$

Again, note that $\ip \cdot\cdot$ could be any function with these properties. Furthermore, I didn't need a basis of $V$ to define the scalar product, so it cannot possibly depend on the basis chosen. A vector space with a scalar product is called an inner product space. As an example take the polynomials of degree $\leq n$, defined on the interval $I=[0,1]\subset\real$. You can easily show that $$V:=\{P:I \to \real \, | \, P \text{ is a polynomial and degree of } P \leq n\}$$ is a vector space. Furthermore, we define $$\forall P,Q \in V:\;\; \ip PQ_1 := \int_0^1 P(x) \cdot Q(x) \, \mathrm d x $$ Note that this function is a valid scalar product on $V$. However, I can also define $$\forall P= p_n x^n + \dots + p_0 , \,Q = q_n x^n + \dots + q_0 \in V: \;\; \ip PQ_2 := p_nq_n + \dots + p_0 q_0 $$ which is also a nice definition of a scalar product. Again, I'm not referring to any basis as I write $P= p_n x^n + \dots + p_0$; it is just the definition of being in $V$. It is clear that there is no unique scalar product defined on a particular vector space.

Representation of $\ip \cdot \cdot$ wrt a Basis

Now if I choose an ordered basis $B$ of $V$ then I can simplify my life a little bit. Let $v\in V$ with $v= \sum_{i=1}^n v_i b_i$ and $w\in V$ with $w = \sum_{j=1} ^n w_j b_j$.
Then I can write: $$\ip vw = \ip{\sum_{i=1}^n v_i b_i}{\sum_{j=1} ^n w_j b_j} = \sum_{i=1}^n\sum_{j=1}^n v_i^* w_j \ip {b_i} {b_j}$$ You see now that this looks like the matrix product $\iota_B(v)^\dagger \cdot A \cdot\iota_B(w) $, where $A=(a_{ij})$ with $a_{ij}:=\ip{b_i}{b_j}$. Note that this representation of $\ip\cdot \cdot$ depends on the basis chosen. Having chosen a basis, however, you can just do the matrix multiplication to get the scalar product. For the above example with polynomials and the inner product $\ip \cdot \cdot _2$, if you choose the basis to be $b_i = x^i$ then you get $A = \mathrm{diag}(1, \dots, 1)$, the identity matrix.

Hilbert Space

A Hilbert space is an inner product space which, as a metric space, is complete with respect to the metric induced by the scalar product. This means your norm is defined by $\lVert v \rVert := \sqrt{\ip vv}$ and the metric is defined by $d(v,w):= \lVert v-w \rVert$. For what it means for a space to be complete you can check the Wikipedia article.

Upshot

There is a very clear distinction between the vector itself, as an element of the set $V$, and its representation as a tuple in $F^n$ with respect to a basis $B$ of $V$. The vector itself exists regardless of any basis, whereas the representation of the vector is basis dependent. The same goes for the scalar product and its representation. In physics one automatically assumes the standard basis for $\real^n$, which has vectors with zeros everywhere except for one component that is equal to one, and calculates everything in the representation of vectors without specifying any basis, which in turn creates the illusion that a vector is its components and that a vector without its components is unimaginable. Since a vector space could be almost anything, it is really hard to imagine what a vector is without referring to its components. The best way to do that, in my opinion, is to accept the fact that a vector is just an element of the set $V$ and nothing more.
For example, if you write $$\left|\psi \right> = c_1 \left|+ \right> + c_2 \left| - \right>$$ you merely say that $\left| \psi \right>$ can be written as a linear combination of $\left| \pm \right>$. So this does not refer to any basis. However, the moment you write $$\left|\psi \right> = c_1 \left|+ \right> + c_2 \left| - \right> = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$ first of all a mathematician dies, because $\left|\psi \right> = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$ doesn't make any sense: $\left|\psi \right> \in \mathcal H$, whereas $\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \in \cx^2$. Regardless of that, $\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$ refers to an ordered basis of $\mathcal H$, namely the basis $B=\{\left|+ \right>, \left|- \right> \}$, and this representation depends on the basis that you've chosen.
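To close the loop on the polynomial example from earlier: the representation $\langle v,w\rangle = \iota_B(v)^\dagger A\,\iota_B(w)$ can be checked numerically. A small sketch for $\langle\cdot,\cdot\rangle_1$ in the basis $b_i = x^i$, where the Gram matrix entries are $a_{ij} = \int_0^1 x^{i+j}\,\mathrm dx = 1/(i+j+1)$ (the Hilbert matrix):

```python
from fractions import Fraction

# Gram matrix of <P,Q>_1 = integral_0^1 P(x)Q(x) dx in the basis b_i = x^i.
n = 3
A = [[Fraction(1, i + j + 1) for j in range(n + 1)] for i in range(n + 1)]

def ip1(p, q):
    """<P,Q>_1 computed as coords(P)^T . A . coords(Q),
    where p, q are coefficient tuples (p_0, ..., p_n)."""
    return sum(p[i] * A[i][j] * q[j]
               for i in range(n + 1) for j in range(n + 1))

# P(x) = x and Q(x) = x^2: the integral of x^3 over [0,1] is 1/4.
P = [0, 1, 0, 0]
Q = [0, 0, 1, 0]
```

For $\langle\cdot,\cdot\rangle_2$ in the same basis, $A$ would be the identity and `ip1` would reduce to the plain dot product of the coefficient tuples.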
There is a book by Byckling and Kajantie, "Particle kinematics", discussing in particular (Chapter V) the kinematics of $2\to 3$ cross-sections. It can easily be found on the internet. I just want to ask those who have read it about the phase space evaluation. I follow their idea of representing the phase space $$ d\Phi_{3} = \frac{d^{3}\mathbf p_{1}}{2E_{1}}\frac{d^{3}\mathbf p_{2}}{2E_{2}}\frac{d^{3}\mathbf p_{3}}{2E_{3}}\delta(p_{a}+p_{b}-p_{1}-p_{2}-p_{3}) $$ for the process $p_{a}+p_{b} \to p_{1}+p_{2}+p_{3}$ in terms of the kinematic invariants $$ \tag 0 s_{1} = (p_{1}+p_{2})^{2},\quad s_{2} = (p_{2}+p_{3})^{2}, \quad t_{1} = (p_{a}-p_{1})^{2}, \quad t_{2} = (p_{b}-p_{3})^{2} $$ (and one angle, which is typically trivial and therefore irrelevant). In general, it is not hard to express all of the scalar products of the 4-momenta through these invariants, and it is possible to write the $2\to 3$ cross-section in the form $$ \tag 1 \sigma \sim \frac{1}{s}\int |M|^{2}d\Phi_{3} \sim \frac{1}{s^{2}}\int \frac{ds_{2}dt_{1}dt_{2}ds_{1}}{\sqrt{-\Delta_{4}}}|M(s,s_{1},s_{2},t_{1},t_{2})|^{2}, \quad s = (p_{a}+p_{b})^{2} $$ where $M$ is the matrix element of the process and $\Delta_{4}$ is the Gram determinant of 4 independent vectors combined from $p_{a},...,p_{3}$.

What I don't understand is the following. At the end of V.4 (p. 118) of the book the authors speak about the freedom in labeling the initial and final particles (say, this one corresponds to $p_{1}$, this one to $p_{2}$, and so on). They say that for a particular process the choice of labels is dictated by the form of the matrix element, which sounds reasonable. Many of the following chapters are based on permutations of particle labels, which I don't completely understand.

Suppose now the Standard Model Z-mediated process $\gamma + f \to f + l + \bar{l} $, where $f$ denotes an arbitrary fermion and $l$ an arbitrary lepton.
There are two diagrams given in the picture: If I fix the numeration of the particles on one of the diagrams, then on the other one two labels will be permuted. The problem (at least in my head) arises because the authors of the book calculated the limits of integration for $(1)$ based on a fixed numeration, as in the left diagram. The question is: Can I fix the numeration for one of the diagrams (say, for the left one) and then just use the expressions for the integration limits for all of the kinematic invariants? Or do I need to do something more complicated: undo the permutation $2 \leftrightarrow 3$, so that $s_{1} = (p_{1}+p_{2})^{2}\to (p_{1}+p_{3})^{2}$ and $t_{2} = (p_{b}-p_{3})^{2} \to (p_{b}- p_{2})^{2}$, and re-express these quantities in terms of $(0)$? In the latter case, what do I do with the interference between the two diagrams? The problem exists because if I do the former, the calculated cross-section differs from the correct one (which can be found in the literature). An alternative question: do other books or articles exist that discuss the evaluation of $2\to 3$ cross-sections in detail?
So in your proposal, the balloons themselves provide only tensile strength. The resistance to compression comes from the pressure of the air inside the chambers. Let $R$ be the inside radius of the balloon shell, and let $a$ be the thickness of the shell. (So the outside radius is $R+a$.) Now consider what happens when the radius shrinks to $R-dR$. The outside radius shrinks to $R+a-dR$, meaning that the device now displaces $4\pi(R+a)^2dR$ less air. This is energetically favourable, leading to a reduction of energy of $4\pi(R+a)^2dR \cdot 1$atm. On the other hand, the air inside the chambers now occupies $4\pi((R+a)^2-R^2)dR$ less volume, which is energetically unfavourable. If $P$ is the pressure inside the chambers, then the energy increases by $4\pi((R+a)^2-R^2)dR \cdot P$. Thus, the overall change in energy is: $$\Delta E = 4\pi dR \left( P((R+a)^2-R^2) - 1\text{atm}(R+a)^2 \right)$$ If $\Delta E < 0$, the device will be crushed down to a smaller size by atmospheric pressure. Thus, in order to have the device be stable, the following inequality must be satisfied. $$\Delta E = 4\pi dR \left( P((R+a)^2-R^2) - 1\text{atm}(R+a)^2 \right) \geq 0$$ Therefore: $$P \geq 1\text{atm} \frac{(R+a)^2}{(R+a)^2-R^2}$$ Now let's compute the mass of this thing. Suppose that the density of air at atmospheric pressure is $\rho$. The total mass of air displaced by your balloon will be: $$\frac{4\pi}{3}\rho(R+a)^3$$ On the other hand, the mass of the balloon, ignoring the mass of the balloon fabric, will be: $$\rho \frac{P}{1\text{atm}} \frac{4\pi}{3} ((R+a)^3-R^3)$$ (Since density is proportional to pressure.) 
This is equal to: $$\frac{4\pi}{3} \rho ((R+a)^3-R^3) \frac{(R+a)^2}{(R+a)^2-R^2}$$ $$=\frac{4\pi}{3} \rho (R+a)^3 \frac{(R+a)^3-R^3}{(R+a)^3} \frac{(R+a)^2}{(R+a)^2-R^2}$$ $$=\frac{4\pi}{3} \rho (R+a)^3 \frac{(R+a)^3-R^3}{(R+a)^3-R^2(R+a)}$$ Since $R$ and $a$ are both positive, we have: $$\frac{(R+a)^3-R^3}{(R+a)^3-R^2(R+a)} > 1$$ Thus, the balloon must be more massive than the air that it displaces. Preventing atmospheric pressure from simply crushing the balloon requires a very high $P$, which means that the air in the shell has a high density. This high density adds enough mass to prevent the balloon from feeling any lift. If you use helium instead of ordinary air, then you can use a smaller value for $\rho$ in the equation for the mass of the balloon, so you might get lift. However, it is clear from the equations that you are best off making $R=0$, i.e., creating an ordinary helium balloon with no vacuum inside.
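A quick numerical check of the argument above (my own sketch; the values of $R$ and $a$ are arbitrary) confirms that the minimum stable pressure always forces the mass ratio above 1:

```python
from math import pi

ATM = 101325.0  # Pa
RHO = 1.225     # kg/m^3, air density at roughly 1 atm (assumed value)

def min_pressure(R, a):
    """Minimum chamber pressure (Pa) so that atmospheric pressure
    cannot shrink the shell: P >= 1 atm * (R+a)^2 / ((R+a)^2 - R^2)."""
    return ATM * (R + a)**2 / ((R + a)**2 - R**2)

def mass_ratio(R, a):
    """Mass of the pressurised air in the shell divided by the mass of
    air displaced, at the minimum stable pressure."""
    P = min_pressure(R, a)
    shell_air = RHO * (P / ATM) * (4 * pi / 3) * ((R + a)**3 - R**3)
    displaced = RHO * (4 * pi / 3) * (R + a)**3
    return shell_air / displaced

for R, a in [(1.0, 0.01), (1.0, 0.1), (1.0, 1.0)]:
    print(R, a, min_pressure(R, a) / ATM, mass_ratio(R, a))
```

The printed ratio exceeds 1 for every geometry tried, matching the algebraic inequality: a thin shell ($a \ll R$) needs an enormous pressure, a thick shell wastes volume, and neither escapes the bound.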
I am trying to find a solution for this problem: we should compare circular motion with uniform deceleration. The task is to find out whether it is better to make a turn or to brake when you face a wall in front of your car. A turn means a 90° quarter circle, so that you do not hit the wall. Another important piece of information is that the acceleration you experience in the turn should equal the braking deceleration. What I tried to do: the circumference of the circle is $u = 2\pi r$. I face the wall head-on, so the distance $s$ to the wall equals the radius: $s = r$. For the turn, the arc length of a 90° turn is $s(u, 90°)=\frac{\pi r}{2}$, and from $s(u)=vt$ the time is $t=\frac{\pi r}{2v}$. However, I need $r$, because this is the distance to the wall: $r=\frac{2vt}{\pi}$. Then I described the braking case, stopping at $v=0\,\frac{m}{s}$: $s=\frac{v_0^2}{2a}$. Now I wanted to check how the equations behave as $a$, respectively $v$, grows. As I said before, the two accelerations need to be the same: $\lim_{a \rightarrow \infty} \frac{v_0^2}{2a}=0$ and $\lim_{v \rightarrow \infty} \frac{2vt}{\pi}=\infty$. One goes to $0$, the other to $\infty$. Is there any way to express where it does not matter whether you turn or brake?
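For what it's worth, under one common reading of the problem (the centripetal acceleration $v^2/r$ of the turn equals the braking deceleration $a$, which is how I interpret "the acceleration used to turn should equal the negative acceleration"), the comparison has a clean closed form: the minimum turning radius $r = v^2/a$ is exactly twice the braking distance $v_0^2/(2a)$, so braking always wins and there is no crossover point. A sketch:

```python
def braking_distance(v, a):
    """Distance needed to stop from speed v at constant deceleration a."""
    return v**2 / (2 * a)

def turning_radius(v, a):
    """Minimum turn radius if the centripetal acceleration v^2/r
    may not exceed the available deceleration a."""
    return v**2 / a

# arbitrary sample values (m/s, m/s^2) -- the ratio is 2 for every choice
v, a = 20.0, 8.0
print(braking_distance(v, a), turning_radius(v, a))
```

Since $r/s_{\text{brake}} = 2$ independently of $v$ and $a$, the two limits you computed never meet: turning always needs twice the distance that braking does.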
Nice question/homework! Stated another way: $\mathsf{NP}$ is not closed under Cook reductions (assuming $\mathsf{P}\neq \mathsf{NP}$). How about $\mathsf{NP}\cap\mathsf{coNP}$? Is $\mathsf{NP}\cap\mathsf{coNP}$ closed under Cook reductions? Is the only reason that $\mathsf{NP}$ is not closed under Cook reductions that it is not closed under complement, so that if we take those $\mathsf{NP}$ problems whose complements are also in $\mathsf{NP}$, we get around the problem? For example, if we have a Cook reduction from a problem to Factoring, would that mean that the problem is in $\mathsf{NP}\cap\mathsf{coNP}$? An oracle doesn't just allow us to ask about membership and non-membership in the oracle set; it allows us to ask a very complicated list of questions, where each question can depend on the answers to the previous ones. Let's look at a problem $Q \in \mathsf{P^{NP \cap coNP}}$. We know that there is a polynomial-time algorithm $M$ and a set $A\in \mathsf{NP \cap coNP}$ such that $M^A$ solves the problem $Q$. Is $Q$ in $\mathsf{NP}$? Hint: when does $M^A$ accept an input $x$? Don't read the answer below if this is an assignment; the answer is not difficult, you should be able to solve it if you spend a few hours on it, and spending those hours is what makes you learn. Let's look at the execution of $M^A$ on an input $x$. $M$ will make a number of queries to $A$ during its computation, will receive YES and NO answers, and will finally accept or reject. If we could compute the answers to the queries in polynomial time, we would have shown that the problem is in $\mathsf{P}$: we would simulate the algorithm $M$, and whenever it asked a query to $A$ we would compute the answer, give it to $M$, and continue with the simulation. But $A\in\mathsf{NP \cap coNP}$, and we don't know how to compute the answers to the queries in polynomial time. But we don't need to do this in polynomial time! We can just guess the answers to the queries and verify our guesses in polynomial time.
To be able to verify both positive and negative answers we need $A\in\mathsf{NP\cap coNP}$; this is why the argument does not work for arbitrary $A \in\mathsf{NP}$, which would allow verifying only positive answers. Let $V^{YES}$ and $V^{NO}$ be two polynomial-time verifiers for membership and non-membership in $A$. Consider the verifier $V(x,y)$ which works as follows:

I. check that $y$ consists of
1. a string $c$ (computation of $M$ on $x$),
2. a list of strings $q_1,\ldots,q_m$ (queries to the oracle in $c$),
3. a list of strings $a_1,\ldots,a_m$ (answers to the queries from the oracle),
4. a list of strings $w_1,\ldots,w_m$ (certificates/proofs/witnesses for the correctness of the query answers),

II. check that the list of queries $q_1,\ldots,q_m$ contains all oracle queries in $c$,

III. check that the computation $c$ is an accepting computation of $M$ on $x$ if the answers to the queries are $a_1,\ldots,a_m$,

IV. for all $1\leq i \leq m$, check that if $a_i=YES$ then $V^{YES}$ accepts $(q_i,w_i)$, and if $a_i=NO$ then $V^{NO}$ accepts $(q_i,w_i)$.

All of these steps can be checked in polynomial time. So we have a verifier for YES answers of $M^A$. Furthermore, note that if $M^A$ accepts $x$, then there is a $y$ satisfying these conditions which has polynomial size: the computation of a polynomial-time machine is of polynomial size, and the number of queries and the sizes of all queries are also polynomial in the input. Moreover, the sizes of the certificates for the queries are bounded by some polynomial in the sizes of the queries, so they are again polynomial in the size of the input. In short, we have a polynomial-time verifier with polynomial-size certificates for $M^A$. This completes the proof that $Q \in \mathsf{NP}$. A similar argument shows that $Q\in\mathsf{coNP}$. So $Q\in\mathsf{NP\cap coNP}$. In other words, $\mathsf{P^{NP\cap coNP}} \subseteq \mathsf{NP\cap coNP}$.
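The structure of the verifier $V$ can be sketched in code. Everything below is a toy illustration of the guess-and-verify idea, not the general construction: the hypothetical oracle language $A$ is "binary strings with an even number of 1s" (trivially in $\mathsf{NP\cap coNP}$), its certificates are trivial, and the machine $M$ simply queries $x$ and its reverse.

```python
def V_yes(q, w):
    # verifier for membership in A (toy A: even number of 1s)
    return w == "even" and q.count("1") % 2 == 0

def V_no(q, w):
    # verifier for non-membership in A
    return w == "odd" and q.count("1") % 2 == 1

def oracle_queries(x):
    # the queries our toy machine M makes on input x
    return [x, x[::-1]]

def M(x, answers):
    # deterministic M, consulting the supplied oracle answers:
    # accept iff every query was answered YES
    return all(answers[q] for q in oracle_queries(x))

def V(x, y):
    """Verifier for "M^A accepts x": the certificate y guesses the
    oracle answers and supplies a witness for each guess (steps I-IV)."""
    answers, witnesses = y
    for q in oracle_queries(x):
        if q not in answers or q not in witnesses:
            return False
        check = V_yes if answers[q] else V_no
        if not check(q, witnesses[q]):
            return False          # step IV: a guessed answer is uncertified
    return M(x, answers)          # step III: accepting run under these answers

print(V("1010", ({"1010": True, "0101": True},
                 {"1010": "even", "0101": "even"})))   # a correctly certified accept
```

The key point the toy preserves: a wrong guessed answer can never be certified, because each answer must pass either $V^{YES}$ or $V^{NO}$, and for $A\in\mathsf{NP\cap coNP}$ exactly one of them has an accepting witness for each query.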
Category:Metric Subspaces This category contains results about Metric Subspaces. Let $\left({A, d}\right)$ be a metric space. Let $H \subseteq A$. Let $d_H: H \times H \to \R$ be the restriction $d \restriction_{H \times H}$ of $d$ to $H$. That is, let $\forall x, y \in H: d_H \left({x, y}\right) = d \left({x, y}\right)$. Then $d_H$ is the metric induced on $H$ by $d$ or the subspace metric of $d$ (with respect to $H$). The metric space $\left({H, d_H}\right)$ is called a metric subspace of $\left({A, d}\right)$. Pages in category "Metric Subspaces" The following 8 pages are in this category, out of 8 total.
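As an informal computational illustration (not part of the ProofWiki page): the metric axioms are universally quantified statements, so they automatically survive restriction to any subset $H$. A small Python check, using the standard metric on the reals and an arbitrary finite $H$:

```python
import itertools

def d(x, y):
    # a metric on A = the reals: the standard absolute-difference metric
    return abs(x - y)

H = [0.0, 1.5, 3.0, 7.0]   # an arbitrary subset H of A

def d_H(x, y):
    # the subspace metric: simply d restricted to H x H
    assert x in H and y in H
    return d(x, y)

# The metric axioms hold on H because they held on all of A:
for x, y, z in itertools.product(H, repeat=3):
    assert d_H(x, y) >= 0                              # non-negativity
    assert (d_H(x, y) == 0) == (x == y)                # identity of indiscernibles
    assert d_H(x, y) == d_H(y, x)                      # symmetry
    assert d_H(x, z) <= d_H(x, y) + d_H(y, z)          # triangle inequality
print("all metric axioms verified on H")
```

This is exactly why the definition needs no proof obligations beyond the restriction itself: $(H, d_H)$ inherits every axiom from $(A, d)$.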
Timeline of prime gap bounds

(Table columns: Date; [math]\varpi[/math] or [math](\varpi,\delta)[/math]; [math]k_0[/math]; [math]H[/math]; Comments.)

Aug 10 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) May 14 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. May 21 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations May 28 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] May 30 59,470,640 (Morrison) 58,885,998? (Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m May 31 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] Jun 1 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] Jun 2 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) Jun 3 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. Jun 4 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples Jun 5 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110?
(Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve Jun 6 387,960 (Angeltveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. Jun 7 26,024? (v08ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830?
(Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? (v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? (Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? 
(Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? 
(Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422? (Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen) 59,451 [m=2]#? (Nielsen) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? (Sutherland) Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland) Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380?
[m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? [m=5] (Castryck) 5,511? [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 309,954? [EH] [m=5] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland) 1,512,832,950? [m=4] (Sutherland) 2,186,561,568#? [m=4] (Sutherland) 131,161,149,090#? [m=5] (Sutherland) Dec 24 474,320? [EH] [m=4] (Sutherland) 1,497,901,734? [m=4] (Sutherland) Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54# (Nielsen) 270# (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266? [EH] [m=4] (Sutherland) Jan 28 395,106? [m=2] (Sutherland) Jan 29 3 [GEH] (Nielsen) 6 [GEH] (Nielsen) A new idea of Maynard exploits GEH to allow for cutoff functions whose support extends beyond the unit cube Feb 9 Jan 29 results confirmed here Feb 17 53?# (Nielsen) 264?# (Clark-Jarvis) Managed to get the epsilon trick to be computationally feasible for medium k Feb 22 51?# (Nielsen) 252?# (Clark-Jarvis) More efficient matrix computation allows for higher degrees to be used Mar 4 Jan 6 computations confirmed Apr 14 50?# (Nielsen) 246?# (Clark-Jarvis) A 2-week computer calculation! Apr 17 35,410? 
[m=2]* (xfxie) 398,646? [m=2]* (Sutherland) 25,816,462? [m=3]* (Sutherland) 1,541,858,666? [m=4]* (Sutherland) 84,449,123,072? [m=5]* (Sutherland) Redoing the m=2,3,4,5 computations using the confirmed MPZ estimates rather than the unconfirmed ones Apr 18 398,244? [m=2]* (Sutherland) 1,541,183,756? [m=4]* (Sutherland) 84,449,103,908? [m=5]* (Sutherland) Apr 28 398,130? [m=2]* (Sutherland) 1,526,698,470? [m=4]* (Sutherland) 83,833,839,882? [m=5]* (Sutherland) May 1 81,973,172,502? [m=5] (Sutherland) 2,165,674,446#? [m=4] (Sutherland) 130,235,143,908#? [m=5] (Sutherland) faster admissibility testing May 3 1,460,493,420? [m=4] (Sutherland) 80,088,836,006? [m=5] (Sutherland) 1,488,227,220?* [m=4] (Sutherland) 81,912,638,914?* [m=5] (Sutherland) 2,111,605,786?# [m=4] (Sutherland) 127,277,395,046?# [m=5] (Sutherland) Fast admissibility testing for Hensley-Richards tuples May 3 3,393,468,735? [m=5] (de Grey) 2,113,163?# [m=3] (de Grey) 105,754,479?# [m=4] (de Grey) 5,274,206,963?# [m=5] (de Grey) Improved hillclimbing; also confirmation of previous k values May 4 79,929,339,154? [m=5] (Sutherland) 2,111,597,632?# [m=4] (Sutherland) 126,630,432,986?# [m=5] (Sutherland) May 5 32,285,928?# [m=3] (Sutherland) May 9 1,460,485,532? [m=4] (Sutherland) 79,929,332,990? [m=5] (Sutherland) 1,488,222,198?* [m=4] (Sutherland) 81,912,604,302?* [m=5] (Sutherland) 2,111,417,340?# [m=4] (Sutherland) 126,630,386,774?# [m=5] (Sutherland) Legend: ? - unconfirmed or conditional ?? 
- theoretical limit of an analysis, rather than a claimed record
* - is majorized by an earlier but independent result
# - bound does not rely on Deligne's theorems
[EH] - bound is conditional on the Elliott-Halberstam conjecture
[GEH] - bound is conditional on the generalized Elliott-Halberstam conjecture
[m=N] - bound on intervals containing N+1 consecutive primes, rather than two
strikethrough - values relied on a computation that has now been retracted

See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
I was going to post a comment (especially since it's an old question), but it became too long. That National Application Note AN-1852 (which is located at TI now) describes in great detail the reasons for the inclusion of the op-amp to begin with. It provides two totally different services to the circuit. First, it provides a low-impedance 512 mV bias to ... As others have noted, there's no problem as such with having both sides of an LED at +9V. It'll just be off, just as if both sides were at 0V. I do see a couple of other potential issues with your circuit, however: your voltage notation is confusing. Assuming that the triangle symbols in your circuit represent ground, which is at 0V by definition, all ... Assuming a buck converter that's 100% efficient, yes, you will be able to draw 1A from the output. In practice, buck converters can be very efficient, often over 90%, so you'll still be able to get around 900mA output, which is rather better than the 500mA you'd get with a linear regulator. An important part of the buck converter system is its input filter ... It's been a while and no one has decided to close your question, so I'll write something short. I think you are talking about this schematic: simulate this circuit – Schematic created using CircuitLab. With KVL, starting at the lower left corner (at the ground symbol) and working clockwise, I get: $$0\:\text{V} + 5\:\text{V}-I\cdot R_1-V_{Z_1}=0\:\... The classic 4-resistor difference amplifier will do what you want: simulate this circuit – Schematic created using CircuitLab. This circuit keeps the difference between Vout and Vref equal to the difference between V+ and V-, effectively adding Vref to V+ if V- is grounded. The opamp tries to keep its two input terminals at the same potential using ... The total resistance for 200 mA at 20V is 100 Ohm. The wires would have to be quite long or thin. But as an answer: I guess you want to make some kind of heating.
If you plan PWM control, a high enough switching frequency should guarantee that the temperature of the wire stays within a certain range, even though the current alternates, being 0 mA part of the time and the rest ... Based strictly on the graph you provided: the curve intersects 1.2V at about 16 hours, by my reading, and hits zero at about 25.5 hours. That's about 63% of the lifetime, or 63% of the coulombs. Without trying to calculate the area under the curve exactly, we can observe that it's monotone decreasing, and therefore the joules available before 1.2V are more than ... It is hard to answer your exact question, since you want information on a scenario that has no commercial relevance, so companies are not going to test almost-drained batteries just to know how much energy can be harvested from them. Those discharge curves they publish are used to design battery-powered products: the curves allow one to estimate how long a ... Generally, using a nearly flat AA/AAA battery with a boost converter is not going to give you much, especially if the thing you're powering requires substantial power. When the battery goes down, not only does its voltage decrease, its internal resistance increases as well, sometimes dramatically. So the battery may show, say, 1.1V when measured with a ... AWG 8 cable has 2.06Ω/km, so using 5m of it (or 0.005km) would be equivalent to 0.01Ω. With a 12A current this equates to 0.12V of drop per cable, and 0.24V with both sides of the cable. However, connectors can also add a significant amount of resistance, so make sure that you are using the right connectors and that they are terminated properly. 0.6V at 12A ... Remember that the "discharge" pin (pin 7) is essentially a copy of the output pin, except that it is open-collector — it can only sink current, not source it.
But you could use it to control your LED without affecting the rest of your circuit: simulate this circuit – Schematic created using CircuitLab. For a quick and simple solution (if you don't mind an inversion of the LED function), you could just add a pull-up resistor to +5V, then connect the LED between that resistor and the 555's output pin. For a more modest LED current use a resistor of 270 ohms or more. From the NE555 datasheet: we can see that with a 5 V supply we can expect around 3.3 V at the output when loaded with 100 mA. Your LED of course consumes less current, but that will not increase the voltage by much. So the behavior you see is to be expected. I expect that even without the LED the voltage at the output will not reach 5 V! That's a ... Calculate the resistance of each based on their rated power and voltage: P = V^2/R, e.g. the first one is rated 60W, 220V. Once you know the resistance of each, you can work out the resulting series circuit when the 380V is applied across the string. Then, P = I^2R for each. simulate this circuit – Schematic created using CircuitLab. simulate this ... Remember that for an NMOS transistor to conduct you must bring the gate voltage above the source voltage, and we usually expect the difference to be at least equal to the transistor's threshold voltage if we want significant current. The sources of your two NMOS transistors are connected to the NAND gate output, so the NAND gate output (the sources) must be at ...
The more current, the more voltage, the more reactive power. In contrast to a resistor, neither the reactive power nor the induced voltage is lost to the circuit as heat, so you can do neat things with it. The simplest example is the autotransformer. Look that up. A more complicated ... We can design the heat removal from the Zener. Shall we do so? We'll assume the leads of the Zener are copper, and are 1mm square. Yes, they are likely round, but I'll let you insert a square-to-round correction factor. Copper, in the default thickness of PCB foil, which at 1 ounce/square foot is 1.4 mils or 35 microns, has a thermal resistance of 70 degree ... Probably the simplest solution would be a zener diode in series with the load, chosen to drop just the amount of voltage you want dropped (not the voltage you want to get). No resistor is required. In your case, a zener of around 11V would do. The zener will need an appropriate wattage rating, as it will be dropping up to about 1.3W. simulate this ... You're using a linear regulator which simply "burns off" the excess voltage. The current does not change and remains the same, so you can draw up to 1.25 A at the output of the regulator. So after the regulator you're limited to 5 V at 1.25 A, so 6.25 W. When you draw that 6.25 W there is 12 V - 5 V = 7 V at 1.25 A, meaning 7 V * 1.25 A = 8.75 W dissipated in ... You are able to use a fixed voltage drop of about 11 Vdc at about 120 mA. This is fairly easy. simulate this circuit – Schematic created using CircuitLab. The transistor is a Darlington device in a TO-220 package and has a reasonable gain of greater than 1000. The Vbe drop is about 1.2V. Choose the appropriate Zener diode for the desired voltage ... In a linear voltage regulator such as the LM117, all the voltage drop × current is turned into heat. That's about 9W in your case.
You can draw 1.25 A at 5 V from the 5V output. If you wanted to draw more current on the 5V side than is supplied on the 12V side and produce less heat, you would have to use a switching regulator. There are some manufacturers which produce ... The LEDs at the end of the strip tend not to follow suit because of the 5V drop across a 5m strip. It would appear they are not receiving the proper data. I connect both ends of the strip to the same PSU. Also, if using an Arduino or Raspberry Pi you can limit the amount of current the LEDs will use. You'll find it in the library you are using or on the spec ... The ground connection MUST be common to all devices involved*. Without a continuous circuit there can be no current flow. The ULN200X is effectively just a group of Darlington open-collector transistors in a package. You can "safely enough" drive each section with a correct voltage input (varies with the X in 200X) and then drive outputs that the Darlington ... You can drive the ULN2003A with 3.3V outputs; they don't put any voltage onto the driving circuit. Of course, if you mess up the circuit or the grounding, your chances of ruining the Pi are much higher than if you use 3.3V only. You need series resistors for the LEDs and you should make sure that the maximum total current can be supplied by the 5V supply. ... Although negative resistance is veiled in mystery, it is in fact quite a simple concept. It can easily be explained by analyzing the voltage drops across resistances. The positive resistor subtracts its voltage drop from the input voltage, thus decreasing the current, while the (S-shaped) negative resistor adds its voltage drop to the input voltage, thus ... This is a variation on Dave Tweed's circuit.
If you use a BJT you can get much lower voltage loss in the common-base (or -gate) amplifier -- like less than one volt. You probably want to look around for a "super beta" transistor, or maybe a Darlington, and you want to be careful about supplying adequate base voltage -- the 0.7V that we all think about ... As @PeterJennings said, the power dissipated by the polyswitch when it is conducting is a function of current. The polyswitch has its own resistance, which works against the current to generate a voltage and, hence, heat. The switch then trips when it gets hot. The 240V part of the rating is the voltage the switch can withstand when it is off. At a ... The quick answer is that the power is not dissipated in the fuse but in the load. If, for the sake of argument, the fuse has an internal resistance of 0.1 ohm, then at its full rated current it will be dissipating I^2R = 1.6W and experience a voltage drop of IR = 0.4V. The rest of the voltage, be it 15.6V or 239.6V, is across the load it is protecting. Well, first of all we are trying to analyze the following circuit: simulate this circuit – Schematic created using CircuitLab. Finding first \$\text{R}_\text{th}\$ we need to look at: simulate this circuit. Now, we see that: $$\text{R}_\text{th}=\frac{\text{R}_4\cdot\left(\frac{\text{R}_1\text{R}_2}{\text{R}_1+\text{R}_2}+\text{R}_3\right)}{\text{R}... Assuming that the only problem is the voltage (the DRV_OUT can otherwise handle the solenoid current), you could use the big brother of the common logic level conversion circuit: simulate this circuit – Schematic created using CircuitLab. Otherwise, you'll have to use your 2-transistor circuit — but be sure to add a pullup to Q101's gate, ... There's no reason to drive the GPIO pins high at all.
Just configure them as open-drain — drive the pin low when you want to activate the corresponding "button", and tristate it otherwise. As long as the fob voltage is not higher than the MCU voltage, it will be fine. And running the fob at 3.3 V should be fine, too. Since DC motors are also generators, the reverse voltage rises proportionally with the RPM. So the higher the RPM, the higher the reverse voltage -> less current. Also, the ambient temperature is most probably not stable, which leads to fluctuating current measurement results. The voltage is zero after the resistor because you have defined it to be zero -- it is the circuit common (or ground). If you are talking about the RLC circuit on the front page, it is because the L and C are energy storage elements that are charged by the voltage source before the start of the simulation. The simulation begins at the opening of the switch. ... The situation is very simple: we know that the voltage at point A is 7 volts higher than the voltage at point C, and also that the voltage at point B is 1.8V higher than the voltage at point C. Therefore we can easily find the V_AB voltage. Can you do it? Short Answer: Yes, you can use the comparator with a voltage divider. Just make sure you choose an appropriate reference voltage and hysteresis level. However, it may be better to use an appropriate charge controller. You should not use a GPIO input in place of a comparator. Not only is there no way to determine the exact threshold voltage, it will draw an ... Using a battery that is not brand new, so its loaded voltage is 8.0V, and using a 3.2V blue LED, the current is (8V - 3.2V)/220 ohms = 21.8mA. No problem when most 5mm blue LEDs have a maximum continuous current of 30mA. Here is how this works ----- simulate this circuit – Schematic created using CircuitLab I picked a 180 ohm resistor, because at 20 ohms/volt we are guaranteed a 50 milliamp intersection at the left axis.
Thus we scale 20 ohms/volt by 9 volts, finding a 180 ohm resistor provides that current. Note other LEDs will have lower forward voltage, moving ... Firstly, I would recommend not using a voltage divider to get your different output voltages. You should look into using stacked zener diodes with a current limiting resistor on top of them. This would only be good if the different voltages are going into a high impedance load as well. If you don't have a high impedance load with your current ... Now that we've gotten your setup explained in the comments, we can explain your problems. Switching power supplies and breadboards don't mix. Breadboards have parasitic capacitance between pin rows, and parasitic inductance in the rows - not to mention in the wires. The contacts are also often weak and don't hold well - the resistance varies just by ... The V1P8DIG and V1P8ANA provide 1.8V to the digital and analog sections respectively. They are internal LDOs that drop the main VCC to around 1.8V, which the IC needs to function. If you already have 1.8V then the LDOs don't work, so you'll bypass them. My guess is the LDOs are provided because most people use 3.3V for Vcc; you would be less likely to ... In general, a 60 amp branch circuit should be constructed using components: conductors, connectors, and outlet devices that will be protected by a 60 amp circuit breaker of a type that is suitable for that service. All branch circuit components should be certified by an independent testing laboratory in that regard. The standards that apply to the ... It means that Vos is referenced to the ground pin, which usually doesn't need to be defined. The Vdmk signal is not referenced to ground; it's referenced to the OS signal (which I think is also Vos). This could also mean that both are a sum of some kind.
Either way, the Vos/OS signal goes from 1.5 to 3.5V. In general, set up your KVL and KCL equations with the assumptions that quantities have a "higher" voltage on the "+" end of the resistor or voltage source and that currents are positive in the direction of the arrow. By being consistent with this, your units should always make sense (positive voltages across resistors, etc) as long as you did your systems ... Because you get a negative \$V_O\$ value, which means that the voltage at the right side of the top resistor is higher by 11.9 volts than the left side. This also means that in "reality" you have this situation: And you can find the voltage across the current source by doing KVL. Your first question is a bit unclear. The marking for \$V_O\$ means that there is a voltage named "\$V_O\$" that is the voltage across the resistor, and it is assumed to have a higher voltage on the left side of the resistor. For the second question, I assume that you mean the polarity of the voltage across the current source. Whenever you are looking for a ... The stall current will vary according to \$I_{stall}=\dfrac{V_{supply}}{R_{terminal}}\$, but the nominal current is unchanged, as it is the current that can continuously flow through the winding without creating excess heat. When changing the voltage supply, the affected parameters are: no load speed, nominal speed, and no load current. All others remain the same.
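The LED arithmetic quoted above is just Ohm's law applied across the series resistor; a quick sanity check (values taken from the answer, helper name my own):

```python
def led_current_ma(v_supply, v_forward, r_ohms):
    """Series-resistor LED current in milliamps, from Ohm's law."""
    return (v_supply - v_forward) / r_ohms * 1000.0

# 8.0 V loaded battery, 3.2 V blue LED, 220 ohm resistor -> about 21.8 mA,
# safely under a typical 30 mA continuous maximum for a 5 mm blue LED.
print(round(led_current_ma(8.0, 3.2, 220), 1))
```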
Search Now showing items 1-10 of 15 A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ... OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ... OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ... 
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation ($I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
Review Questions User-defined functions Q07.01 Write a function called ft_to_in() which converts feet to inches. Note the conversion factor is 1 foot = 12 inches. Convert 6 feet into inches using your function. Q07.02 Write a function called m_to_ft() which converts meters to feet. Note the conversion factor is 1 meter = 3.28084 feet. Convert 5,000 meters into feet using your function. Q07.03 Use the functions in questions Q07.01 and Q07.02 to convert 2 meters into inches. Q07.04 Write a function that calculates the area of a circle based on a circle's radius. The formula for the area of a circle is A = \pi r^2 where A is area and r is radius. Use your function to calculate the area of a circle with a radius of 5. Q07.05 Write a function that converts degrees Celsius (C) to degrees Fahrenheit (F). The formula to convert between the two temperature scales is F = \frac{9}{5} C + 32. Convert 100 degrees C to degrees F using your function. Q07.06 Write a function that converts Kelvin temperature (K) to degrees Celsius (C). The formula to convert between the two temperature scales is C = K - 273.15. Q07.07 Use the functions in questions Q07.05 and Q07.06 to convert the temperature at Standard Temperature and Pressure (STP) of 273.15 K into degrees F. Q07.08 Use the functions in questions Q07.05 and Q07.06 to convert the temperature at absolute zero, 0 K, into degrees Celsius and degrees Fahrenheit. Q07.09 Write a function called hp_to_kw() which converts horsepower (hp) into kilowatts (kW). Note the conversion factor is 1 hp = 0.7457 kW. Convert the horsepower of the average horse, 14.9 hp, into kilowatts (kW). Q07.10 Write a function called fun_logic() that accepts three boolean variables as its input (a, b, and c). The output of fun_logic() will be a single boolean variable that is only True when either a is True, or a, b, and c are all False. Q07.11 Write a function called pn() that takes in a single number and outputs a string.
If the input number is negative, output the string 'negative'. If the input number is positive, output 'positive'. Otherwise, output 'neither'. Test your function with a negative number, a positive number, and 0. Functions with multiple arguments Q07.20 Write a function called cyl_v() that calculates the volume V of a cylinder based on cylinder height h and cylinder radius r. The formula for the volume of a cylinder is below: Use your function cyl_v() to calculate the volume of a cylinder with height = 2.7 and radius = 0.73. Q07.21 The universal gas law states that the pressure P and temperature T of a volume V of gas with number of particles n are related by the equation below, where R is the universal gas constant. Write a function called gas_v() to calculate the volume V of a gas based on pressure P, temperature T, number of particles n, and universal gas constant R. Use your function to find the volume of gas with the following parameters: Q07.22 Most professional bakers weigh their ingredients, while many home bakers use measurements like cups and tablespoons to measure out ingredients. Create a function that takes two arguments: ingredient and cups. The function will output the number of grams of the specified ingredient. For example, a skeleton outline of your function might look like:

def bake_conv(ingredient, cups):
    <code>
    return grams

Your function needs to accept the following ingredients: 'flour' and 'sugar'. The conversion factor for flour is 1 cup flour = 128 grams flour. The conversion factor for sugar is 1 cup sugar = 200 grams sugar. Use your function to convert a recipe that calls for 3 cups of flour and a quarter cup of sugar.
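One possible way to flesh out the bake_conv() skeleton above (the conversion factors come from the problem statement; the error handling for unknown ingredients is my own choice):

```python
def bake_conv(ingredient, cups):
    """Convert cups of 'flour' or 'sugar' into grams."""
    grams_per_cup = {'flour': 128, 'sugar': 200}  # factors from the problem
    if ingredient not in grams_per_cup:
        raise ValueError("unknown ingredient: " + ingredient)
    grams = cups * grams_per_cup[ingredient]
    return grams

# The recipe in the problem: 3 cups of flour, a quarter cup of sugar.
print(bake_conv('flour', 3))     # 384
print(bake_conv('sugar', 0.25))  # 50.0
```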
Q07.23 The gravitational force between two celestial bodies (planets, moons, stars, etc.) is calculated according to: where F_g is the gravitational force, M is the mass of one of the celestial bodies, m is the mass of the other celestial body, r is the distance between the two celestial bodies, and G is the universal gravitational constant, G = 6.67408 \times 10^{-11} m^3 kg^{-1}s^{-2}. (a) Write a function called grav_force() that accepts the mass of two celestial bodies and outputs the gravitational force produced.

Celestial Body | Mass | Distance from sun
Sun | 1.989\times10^{30} kg | 0
Earth | 5.98\times10^{24} kg | 149.6 \times 10^9 m
Mars | 6.42\times10^{23} kg | 228 \times 10^9 m

(b) Use your function grav_force() and the table above to calculate the gravitational force between the Earth and the Sun. (c) Use your function grav_force() and the table above to calculate the gravitational force between Mars and the Sun. Q07.24 Write a function called add3() that takes in 3 numbers and returns their sum. Only a single output variable (the sum of the three numbers) should be returned. Test your function by writing code beneath the function definition that calls the function with input you create. Q07.25 Write a function called asq() that takes in two variables, A and B. Have asq() output 3 values: the sum, the difference, and the quotient of A and B as defined below: sum = A + B, difference = A - B, quotient = A/B. Your function should accept two input arguments and output three values. Functions with default arguments Q07.30 (a) Rewrite the function in problem Q07.21 called gas_v() with the default values n = 6.02 \times 10^{23}, R = 8.314 and P = 101,325. (b) Use your modified function gas_v() to calculate the volume of a gas at T = 500 K using the default arguments. (c) Use your modified function gas_v() to calculate the volume of a gas at T = 500 K, under half the pressure, P = 101,325/2.
Q07.31 In engineering mechanics, the tensile stress \sigma applied to a solid cylinder is equal to the tensile force on the cylinder F divided by the cylinder's cross-sectional area A according to the formula below: The standard diameter d of a cylinder pulled in tension in a tensile test using the ASTM D8 standard is d = 0.506 inches. Use the formula for stress \sigma and area A above to write a function called stress() that calculates stress \sigma based on force F and diameter d. Use d = 0.506 as the default diameter, but allow the user to specify a different diameter if they want. Use your stress() function to calculate the tensile stress \sigma in a cylinder with the default diameter and a tensile force F = 12,000. Q07.32 One way to calculate how much an investment will be worth is to use the Future Value formula: where FV is the future value, I_0 is the initial investment, r is the yearly rate of return, and n is the number of years you plan to invest. (a) Write a function called future_value() which accepts an initial investment I_0 and a number of years n and calculates the future value FV. Include r = 0.05 as the default yearly rate of return. (b) Use your future_value() function to calculate the future value of an initial investment of 2000 dollars over 30 years with the default yearly rate of return. (c) Use your future_value() function to calculate the future value of the same initial investment of 2000 dollars over 30 years, but with a rate of return of 8% (0.08). (d) Use your future_value() function to determine, when 2000 dollars is invested over 30 years, how much more you make if the rate of return is 10% (0.10) instead of 5% (0.05). Q07.33 Write a function called s() that takes in 3 variables, a, b, and c. Have your function s() output the sum of a, b, and c. If no value is passed in for c, set c to the default value of 100. Nested Functions Q07.36 In mechanical engineering, there are a couple of different units of stress.
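A sketch of Q07.32's future_value(), assuming the standard compound-interest form FV = I_0 (1 + r)^n (the formula itself does not survive in the text above, but the parameter descriptions match this form):

```python
def future_value(i0, n, r=0.05):
    """Future value of initial investment i0 after n years at yearly rate r.

    Assumes the standard compounding formula FV = I0 * (1 + r)**n.
    """
    return i0 * (1 + r) ** n

fv_default = future_value(2000, 30)                   # part (b): default 5% rate
fv_8pct = future_value(2000, 30, r=0.08)              # part (c): 8% rate
extra = future_value(2000, 30, r=0.10) - fv_default   # part (d): 10% vs 5%
print(round(fv_default, 2), round(fv_8pct, 2), round(extra, 2))
```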
Units of stress include: Pascals (Pa), Mega Pascals (MPa), pounds per square inch (psi), and kilopounds per square inch (ksi). (a) Write a function called pa_to_mpa to convert between Pascals (Pa) and Mega Pascals (MPa). The conversion factor is 1 MPa = 10^6 Pa. (b) Write a function called mpa_to_ksi to convert between Mega Pascals (MPa) and kilopounds per square inch (ksi). The conversion factor is 1 ksi = 6.89476 MPa. (c) Write a function called ksi_to_psi to convert between kilopounds per square inch (ksi) and pounds per square inch (psi). The conversion factor is 1000 psi = 1 ksi. (d) Combine the three functions pa_to_mpa, mpa_to_ksi, ksi_to_psi into a single function pa_to_psi. Do this by calling the other functions as part of the pa_to_psi function, not by rewriting the same code you wrote in parts (a), (b), and (c). (e) Convert 2,500 Pa into psi using your pa_to_psi function. Functions in other files Q07.40 Create a separate file called greetings.py. Inside of greetings.py include the code:

def hi():
    print("Hi!")

Import your newly created file and run the function greetings.hi(). Q07.41 Create a separate file called greetings.py. Inside of greetings.py include the code:

def hello(name):
    print("Hello " + name)

Import your newly created file and run the function greetings.hello() with your name as an input argument. Q07.42 Create a separate file called areas.py. Inside of areas.py include the code:

def triangle(base, height):
    area = 0.5*base*height
    print("Triangle Area: ", area)

def rectangle(length, width):
    area = length*width
    print("Rectangle Area: ", area)

Import your newly created file and run the functions areas.triangle() and areas.rectangle() with the same two input arguments: 2 and 3. Errors, Explanations, and Solutions For the sections of code below, run the lines of code. Then explain the error in your own words. Below your error explanation, rewrite and run an improved section of code that fixes the error.
Q07.80 Run the code below and explain the error. Rewrite the code and run it error free.

def add_me(num)
    return num + 2

add_me(1)

Q07.81 Run the code below and explain the error. Rewrite the code and run it error free.

def add_you[num]:
    return num + 2

add_you(2)

Q07.82 Run the code below and explain the error. Rewrite the code and run it error free.

def my_func():
    print('yup')

my_func('yup')

Q07.83 Run the code below and explain the error. Rewrite the code and run it error free.

def nothing():

nothing()

Q07.84 Run the code below and explain the error. Rewrite the code and run it error free.

def plus(2+2):
    return 4

nothing()

Q07.85 Run the code below and explain the error. Rewrite the code and run it error free.

def first_a(a):
    return a[0]

first_a(1)
Because you are dealing with a normal-normal model, it's not too hard to work out analytically what's going on. Now the standard argument for "diffuse" priors is usually $\frac{1}{\sigma}$ for variance parameters ("jeffreys" prior). But you will be able to see that if you were to use the jeffreys prior for both parameters, you would have an improper posterior. Note, however, that the main justification for using the jeffreys prior is that it applies to a scale parameter, and you can show for your model that neither parameter sets the scale of the problem. Consider the marginal model, with $\theta_{i}$ integrated out. It is a well-known result that if you integrate a normal against another normal, you get a normal. So we can skip the integration and just work out the expectation and variance. We then get: $$E(y_{i}|\mu\sigma\sigma_{\theta})=E\left[E(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]=E\left[\theta_{i}|\mu\sigma\sigma_{\theta}\right]=\mu$$$$V(y_{i}|\mu\sigma\sigma_{\theta})=E\left[V(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]+V\left[E(y_{i}|\mu\sigma\sigma_{\theta}\theta_{i})\right]=\sigma^{2}+\sigma_{\theta}^{2}$$ And hence we have the marginal model: $$(y_{i}|\mu\sigma\sigma_{\theta})\sim N(\mu,\sigma^{2}+\sigma_{\theta}^{2})$$ And this does show an identifiability problem with this model - the data cannot distinguish between the two variances; it can only give information about their sum. You may have been able to see this intuitively. For example, we can always take $\theta_{i}=y_{i}$ for all $i$ and hence this will set $\sigma=0$. Alternatively, we can set $\theta_{i}=\mu$ for all $i$ and this will set $\sigma_{\theta}=0$. Both of these scenarios will be indistinguishable by the data - in the sense that if I were to generate two data sets, one from the first case and one from the second (but ensured that $\sigma^{2}+\sigma_{\theta}^{2}$ was the same in both cases), you would not be able to tell which data set came from which case.
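The indistinguishability claim is easy to check by simulation; here is a sketch (numpy, my own parameter choices) that generates the two extreme cases with the same $\sigma^{2}+\sigma_{\theta}^{2}$ and shows they have the same marginal mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau = 1.0, np.sqrt(2.0)   # tau^2 = sigma^2 + sigma_theta^2 held fixed at 2
n = 100_000

# Case 1: sigma = 0, sigma_theta = tau  (equivalent to theta_i = y_i)
y1 = rng.normal(mu, tau, n)   # all variance comes from the theta_i
# Case 2: sigma_theta = 0, sigma = tau  (equivalent to theta_i = mu)
y2 = rng.normal(mu, tau, n)   # all variance is observation noise

# Both samples are draws from the same marginal N(mu, tau^2):
print(y1.mean(), y1.var(), y2.mean(), y2.var())
```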
This suggests that it is fundamentally the sum that sets the scale, and so we should apply the jeffreys prior to the parameter $\tau^{2}=\sigma^{2}+\sigma_{\theta}^{2}$. Now suppose that $\tau^{2}$ was known; I would have thought a non-informative choice of prior for $\sigma^{2}$ would be uniform between $0$ and $\tau^{2}$ (for a more informative choice I would use a re-scaled beta distribution over this range). So we have the prior: $$p(\tau^{2},\sigma^{2})\propto\frac{1}{\tau^{2}}\frac{I(0<\sigma^{2}<\tau^{2})}{\tau^{2}}$$ If we make the change of variables $(\sigma^{2},\tau^{2})\to(\sigma,\sigma_{\theta})$, so that $\tau^{2}=\sigma^{2}+\sigma_{\theta}^{2}$, we then get: $$p(\sigma_{\theta},\sigma)\propto\frac{1}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}\left|\frac{\partial\sigma^{2}}{\partial\sigma}\frac{\partial\tau^{2}}{\partial\sigma_{\theta}}-\frac{\partial\sigma^{2}}{\partial\sigma_{\theta}}\frac{\partial\tau^{2}}{\partial\sigma}\right| \propto\frac{2\sigma\sigma_{\theta}}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}$$ Note that the non-identifiability is preserved in this prior because it is symmetric in its arguments. Another not-so-obvious symmetry is that if you were to integrate out either one of the variance parameters you would be left with the jeffreys prior for the other one: $$\int_{0}^{\infty}\frac{2\sigma\sigma_{\theta}}{(\sigma^{2}+\sigma_{\theta}^{2})^{2}}d\sigma=\frac{1}{\sigma_{\theta}}$$ Hence, all you are required to input is the prior range for one of the parameters, as this will stop you from getting into trouble with improper priors. Call this $0<L_{\sigma}<\sigma<U_{\sigma}<\infty$.
It is then easy to sample from the joint density using the inverse CDF method, for we have: $$F_{\sigma}(x)=\frac{\log\left(\frac{x}{L_{\sigma}}\right)}{\log\left(\frac{U_{\sigma}}{L_{\sigma}}\right)}\implies F^{-1}_{\sigma}(p)=\frac{U_{\sigma}^{p}}{L_{\sigma}^{p-1}}$$$$F_{\sigma_{\theta}|\sigma}(y|x)=1-\frac{x^{2}}{y^{2}+x^{2}}\implies F^{-1}_{\sigma_{\theta}|\sigma}(p|x)=x\sqrt{\frac{p}{1-p}}$$ So you sample two independent uniform random variables $q_{1b},q_{2b}$, and then your random value of $\sigma^{(b)}=U_{\sigma}^{q_{1b}}L_{\sigma}^{1-q_{1b}}$ and your random value of $\sigma^{(b)}_{\theta}=U_{\sigma}^{q_{1b}}L_{\sigma}^{1-q_{1b}}\sqrt{\frac{q_{2b}}{1-q_{2b}}}$. Combine this with the usual flat prior for $-\infty<L_{\mu}<\mu<U_{\mu}<\infty$ generated by a third random uniform variable $\mu^{(b)}=L_{\mu}+q_{3b}(U_{\mu}-L_{\mu})$ and you have all the ingredients to do monte carlo posterior simulation - note that this is much better than "Gibbs sampling" because each simulation is independent, so no need to wait for convergence (and also less need for a large number of simulations) - and you are dealing with proper priors - so divergence is impossible (however some moments may or may not exist, but all quantiles exist).
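The inverse-CDF recipe above translates directly into code; a sketch (numpy, with bounds $L_{\sigma}, U_{\sigma}, L_{\mu}, U_{\mu}$ of my own choosing):

```python
import numpy as np

def sample_prior(n, l_sig, u_sig, l_mu, u_mu, seed=0):
    """Independent prior draws of (mu, sigma, sigma_theta) via the inverse CDFs."""
    rng = np.random.default_rng(seed)
    q1, q2, q3 = rng.uniform(size=(3, n))
    sigma = u_sig ** q1 * l_sig ** (1 - q1)        # F_sigma^{-1}: log-uniform on (L, U)
    sigma_theta = sigma * np.sqrt(q2 / (1 - q2))   # F^{-1}(p|sigma) = sigma*sqrt(p/(1-p))
    mu = l_mu + q3 * (u_mu - l_mu)                 # flat prior on (L_mu, U_mu)
    return mu, sigma, sigma_theta

mu, sig, sig_t = sample_prior(10_000, l_sig=0.1, u_sig=10.0, l_mu=-5.0, u_mu=5.0)
```

Each draw is independent, so there is no burn-in or convergence diagnostic to worry about, exactly as the answer notes.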
You look at ACF and PACFs of the differenced series. This is because these are tools for looking at stationary processes. You mention that the undifferenced values have a mean increasing over time... right off the bat that process can't be stationary. Looking at the ACF and PACF will help you distinguish between some sort of noise and an ARMA(p,q) process. If you're looking at financial time series, it would be unsurprising to see ACF and PACF values that look nonsignificant--white noise is a pretty common model. Also keep in mind that stationarity can be broken in more ways than the mean function depending on time. Edit: Valerie, what Richard is referring to might happen if your true model is something like $y_t = \beta_0 + \beta_1 t + \epsilon_t$ where $\epsilon_t$ is iid or white noise. In this case, if you incorrectly difference your series, you will have $\bigtriangledown y_t = \beta_1 + \epsilon_t - \epsilon_{t-1}$. Then, looking at the ACF plot, it will look like you have an MA(1) model. This will help you determine whether the series is trend-stationary or difference-stationary, or in other words, whether it has a deterministic trend or a stochastic trend.
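The overdifferencing effect described in the edit is easy to reproduce numerically; a sketch (numpy only, my own parameter values) showing that differencing a trend-plus-white-noise series induces a lag-1 autocorrelation near $-0.5$, the MA(1) signature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
t = np.arange(n)
y = 1.0 + 0.05 * t + rng.normal(size=n)  # trend-stationary: beta0 + beta1*t + eps_t

dy = np.diff(y)                          # (incorrectly) differenced series
dy = dy - dy.mean()
lag1 = (dy[:-1] * dy[1:]).sum() / (dy * dy).sum()
print(lag1)                              # close to -0.5, as the MA(1) form predicts
```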
To do this analysis, one needs data up to 20000 psi (1400 bars) on either enthalpy vs temperature and pressure (a graph) or compressibility factor Z vs temperature and pressure. The only graphs I have found of the former type go up to only 100 bars, which is a factor of 14 too low. However, I have found data on the compressibility factor Z at temperatures up to 300 K and pressures up to 1000 bars: https://cds.cern.ch/record/1444601/files/978-1-4419-9979-5_BookBackMatter.pdf Although this is still a factor of 1.4 too low, it might provide some idea of the temperature rise that might be expected in the valve. So here is how the data would be used. The effect of pressure on enthalpy (per mole) of gas is given by $$\left(\frac{\partial H}{\partial P}\right)_T=V-T\left(\frac{\partial V}{\partial T}\right)_P\tag{1}$$ For a real gas, the equation of state in terms of the compressibility factor Z=Z(P,T) is given by $$PV=ZRT\tag{2}$$ If we substitute Eqn. 2 into Eqn. 1, we obtain: $$\left(\frac{\partial H}{\partial P}\right)_T=-\frac{RT^2}{P}\left(\frac{\partial Z}{\partial T}\right)_P\tag{3}$$ Integrating Eqn. 3 between P=0 and arbitrary P at constant temperature yields the so-called Residual Enthalpy $H^R$: $$H^R(P,T)=-RT^2\int_0^P{\left(\frac{\partial Z}{\partial T}\right)_{P'}\frac{dP'}{P'}}=-RT^2\frac{\partial}{\partial T}\left(\int_0^P{(Z(T,P')-1)\frac{dP'}{P'}}\right)\tag{4}$$ where P' is a dummy variable of integration. If the final pressure coming out of the valve is low (so that the gas exiting the valve is in the ideal gas region), we can write: $$\Delta H=-H^R+C_p\Delta T=0$$ where, for a monoatomic gas like Helium, $C_p=\frac{5}{2}R$. Therefore, $$\Delta T=-\frac{2}{5}T^2\frac{\partial}{\partial T}\left(\int_0^P{(Z(T,P')-1)\frac{dP'}{P'}}\right)\tag{5}$$ This expression would be evaluated using the data presented in the reference above. If a reference can be found with data going out to 1400 bar, that would be even better.
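Equation 5 can be evaluated numerically from a tabulated $Z(T,P)$ using a trapezoid integral in $P$ and a central difference in $T$. A sketch (the grid and helper name are my own, not real helium data; with $Z \equiv 1$ the ideal-gas limit correctly gives $\Delta T = 0$):

```python
import numpy as np

def delta_t_eq5(T_grid, P_grid, Z, T0):
    """Evaluate Eq. 5: dT = -(2/5) * T0^2 * d/dT [ integral (Z-1) dP'/P' ].

    T_grid: ascending temperatures (K); P_grid: ascending pressures (> 0);
    Z[i, j] = Z(T_grid[i], P_grid[j]).  The contribution from 0 to P_grid[0]
    is neglected (Z -> 1 there), and d/dT is a central difference at T0,
    which must be an interior point of T_grid.
    """
    integrand = (Z - 1.0) / P_grid                 # (Z-1)/P' at each (T, P')
    # trapezoid rule along the pressure axis, one integral per temperature
    I = (0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(P_grid)).sum(axis=1)
    i = np.searchsorted(T_grid, T0)                # index of T0 in the grid
    dIdT = (I[i + 1] - I[i - 1]) / (T_grid[i + 1] - T_grid[i - 1])
    return -(2.0 / 5.0) * T0 ** 2 * dIdT

# Sanity check: an ideal gas (Z = 1 everywhere) shows no temperature change.
T_grid = np.linspace(280.0, 320.0, 5)
P_grid = np.linspace(1.0, 1000.0, 50)
dT = delta_t_eq5(T_grid, P_grid, np.ones((T_grid.size, P_grid.size)), T0=300.0)
```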
Achieving desired tolerance of a Taylor polynomial on desired interval This third question is usually the most difficult, since it requires both estimates and adjustment of the number of terms in the Taylor expansion: Given a function, given a fixed point, given an interval around that fixed point, and given a required tolerance, find how many terms must be used in the Taylor expansion to approximate the function to within the required tolerance on the given interval. For example, let's get a Taylor polynomial approximation to $e^x$ which is within $0.001$ on the interval $[-{1\over 2},+{1\over 2}]$. We use $$e^x=1+x+{x^2\over 2!}+{x^3\over 3!}+\ldots+{x^n\over n!}+{e^c\over (n+1)!}x^{n+1}$$ for some $c$ between $0$ and $x$, and where we do not yet know what we want $n$ to be. It is very convenient here that the $n$th derivative of $e^x$ is still just $e^x$! We want to choose $n$ large enough to guarantee that $$\left|{e^c\over (n+1)!}x^{n+1}\right|\le 0.001$$ for all $x$ in that interval (without knowing anything too detailed about what the corresponding $c$'s are!). The error term is estimated as follows, by thinking about the worst-case scenario for the sizes of the parts of that term: we know that the exponential function is increasing along the whole real line, so in any event $c$ lies in $[-{1\over 2},+{1\over 2}]$ and $$|e^c|\le e^{1/2}\le 2$$ (where we've not been too fussy about being accurate about how big the square root of $e$ is!).
And for $x$ in that interval we know that $$|x^{n+1}|\le \left({1\over 2}\right)^{n+1}$$ So we want to choose $n$ large enough to guarantee that $$\left|{e^c\over (n+1)!} ({1\over 2})^{n+1}\right|\le 0.001$$ Since $$\left|{e^c\over (n+1)!} ({1\over 2})^{n+1}\right|\le {2\over (n+1)!}\left({1\over 2}\right)^{n+1}$$ we can be confident of the desired inequality if we can be sure that $${2\over (n+1)!}\left({1\over 2}\right)^{n+1}\le 0.001$$ That is, we want to ‘solve’ for $n$ in the inequality $${2\over (n+1)!}\left({1\over 2}\right)^{n+1}\le 0.001$$ There is no genuine formulaic way to ‘solve’ for $n$ to accomplish this. Rather, we just evaluate the left-hand side of the desired inequality for larger and larger values of $n$ until (hopefully!) we get something smaller than $0.001$. So, trying $n=3$, the expression is $${2\over (3+1)!}\left({1\over 2}\right)^{3+1}={1\over 12\cdot 16}$$ which is more like $0.01$ than $0.001$. So just try $n=4$: $${2\over (4+1)!}\left({1\over 2}\right)^{4+1}={1\over 60\cdot 32}\approx 0.00052$$ which is better than we need. The conclusion is that we needed to take the Taylor polynomial of degree $n=4$ to achieve the desired tolerance along the whole interval indicated. Thus, the polynomial $$1+x+{x^2\over 2!}+{x^3\over 3!}+{x^4\over 4!}$$ approximates $e^x$ to within $0.00052$ for $x$ in the interval $[-{1\over 2},{1\over 2}]$. Yes, such questions can easily become very difficult. And, as a reminder, there is no real or genuine claim that this kind of approach to polynomial approximation is ‘the best’. Exercises Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $e^x$ to within $0.001$ on the interval $[-1,+1]$. Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $\cos x$ to within $0.001$ on the interval $[-1,+1]$.
Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $\cos x$ to within $0.001$ on the interval $[{ -\pi \over 2 },{ \pi \over 2 }]$. Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $\cos x$ to within $0.001$ on the interval $[-0.1,+0.1]$. Approximate $e^{1/2}=\sqrt{e}$ to within $.01$ by using a Taylor polynomial with remainder term, expanded at $0$. (Do NOT add up the finite sum you get!) Approximate $\sqrt{101}=(101)^{1/2}$ to within $10^{-15}$ using a Taylor polynomial with remainder term. (Do NOT add up the finite sum you get! One point here is that most hand calculators do not easily give 15 decimal places. Hah!)
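The trial-and-error search in the worked example above (plugging in $n=3$, then $n=4$) is mechanical enough to automate; a small sketch:

```python
from math import factorial

def terms_needed(bound=2.0, half_width=0.5, tol=0.001):
    """Smallest n with bound/(n+1)! * half_width**(n+1) <= tol.

    bound is the crude estimate for |e^c| on the interval (e^{1/2} <= 2 above),
    half_width the half-width of the interval around the expansion point.
    """
    n = 0
    while bound / factorial(n + 1) * half_width ** (n + 1) > tol:
        n += 1
    return n

print(terms_needed())  # 4, matching the worked example for e^x on [-1/2, 1/2]
```

The first exercise is the same search with half_width = 1 and the bound for $|e^c|$ adjusted to the larger interval.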
Your notation is pretty standard. The sign bit is obvious to anyone, so I won't address that further. The mantissa is in hidden-bit notation (the "1." is assumed, except in the unique case of an exact zero, which you didn't discuss.) The exponent is in the usual "excess" notation (some bias is added, so that a stored/packed exponent of zero represents the smallest allowable exponent.) Start with the number you have as \$v\$ (this is a real number, not necessarily an integer) and the number of mantissa bits available as \$m=4\$ and the number of exponent bits available as \$e=3\$. Assuming that \$v\ne 0\$, apply the following logic:

1. Set up a new variable as \$p=0\$.
2. If \$v\$ is positive, set a new variable \$s=0\$; otherwise set \$s=1\$ and set \$v=\:\mid v \:\mid\$.
3. While \$v\lt 2^m\$, set \$p=p-1\$ and \$v=2\cdot v\$.
4. While \$v\ge 2^{m+1}\$, set \$p=p+1\$ and \$v=\frac{v}{2}\$.
5. Set up a new variable as \$f=\lfloor v\rfloor\$.
6. If \$\left(v-f\right)\ge \frac{1}{2}\$, set \$f=f + 1\$.
7. If \$f = 2^{m+1}\$, then set \$p=p+1\$ and \$f=\frac{f}2\$.
8. Set up a new variable as \$x=p+m+2^{e-1}-1\$.
9. If \$x\lt 0\$ or \$x\ge 2^e\$, then signal an error.
10. It must be that \$2^m\le f\lt 2^{m+1}\$, so set \$f=f-2^m\$.

At this point, you find the result as \$\left<[\;s\;]_1\quad[\;f\;]_m\quad[\;x\;]_e\right>\$, with the subscripts indicating the number of bits for each field. (If \$v=0\$ then the entire result has all bits as zero.) (Note that I've buried the exponent's excess value in the algorithm because it is almost always handled this way.) Steps 1 and 2 handle the sign bit and prepare for the remaining steps; steps 3 and 4 perform value normalization; steps 5 and 6 handle value rounding; step 7 deals with the rare case where the rounding required a re-normalization step; step 8 computes the excess exponent notation and step 9 validates that result; and finally step 10 removes the hidden bit from the mantissa prior to packing the floating point notation word.
I just dashed this off quickly and I'll gladly correct any uncovered algorithm errors.
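For concreteness, here is a sketch of those steps in Python (my own transcription, with the same variable names and the same \$m=4\$, \$e=3\$ defaults; treat it as illustrative rather than authoritative):

```python
def pack_float(v, m=4, e=3):
    """Pack a real number v into (sign, mantissa-without-hidden-bit,
    excess exponent) following the numbered steps above, with m
    mantissa bits and e exponent bits."""
    if v == 0:
        return (0, 0, 0)                 # exact zero: all bits zero
    p = 0                                # step 1
    s = 0 if v > 0 else 1                # step 2
    v = abs(v)
    while v < 2 ** m:                    # step 3: scale small values up
        p -= 1
        v *= 2
    while v >= 2 ** (m + 1):             # step 4: scale large values down
        p += 1
        v /= 2
    f = int(v)                           # step 5: integer part
    if v - f >= 0.5:                     # step 6: round to nearest
        f += 1
    if f == 2 ** (m + 1):                # step 7: rounding overflowed
        p += 1
        f //= 2
    x = p + m + 2 ** (e - 1) - 1         # step 8: excess exponent
    if x < 0 or x >= 2 ** e:             # step 9: range check
        raise OverflowError("exponent out of range")
    f -= 2 ** m                          # step 10: drop the hidden bit
    return (s, f, x)

print(pack_float(1.0))    # (0, 0, 3): 1.0000 * 2**0 with bias 3
print(pack_float(-1.5))   # (1, 8, 3): mantissa 1000 encodes 1.1000
```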
Prove that the poset of upper sets on a discrete poset (see Example 1.25) on a set \(X\) is simply the power set \(P(X)\).

An upper set in P is a subset U of P satisfying the condition that if p ∈ U and p ≤ q, then q ∈ U. Because in a discrete poset we only have order between an element and itself, for every element p ∈ X any subset of X containing p will be in the set of all upper sets of X. Because we are talking about any p in X, we have all non-empty subsets of X. We can also add the empty subset, because upper sets are defined in terms of some conditions on elements, and without elements, the condition is automatically true. In other words, the set of upper sets is the set of all subsets of X - the power set PX. The exercise does not limit us in how the order on the poset of upper sets should be defined, but the authors probably meant something like in 1.38, "We can give the set U an order by letting U ≤ V if U is contained in V," which is for subsets reflexive and transitive.

Start with the empty set.
If \( \emptyset \in U \), then so are each of the discrete elements of \( X \). If each of the discrete elements are in the upper set, so are each of the unique pairs, and so on. Finally, the "largest" upper set must be \( X \) itself. This collection is the power set \( PX \). But why would \( \emptyset \) be permitted as an upper set? I understand it to be present in the power set, so it must be allowed somehow.

\(X\) is discrete, so \(x \leq x, \forall x \in X\) and \(x \nleq y\) iff \(x \neq y\). Consider any subset of \(X\), \(U \subseteq X\). Since, for all \(u \in U\), the only thing related to \(u\) is itself, and \(u \in U\), \(U\) is an upper set of \(X\). It follows that the power set \(PX\) of all subsets of \(X\) forms a poset of upper sets.

Is a key insight for this proof that the upper set U can contain elements that cannot be compared to the smallest element in U? Asking because it is almost easier to answer the question: given all possible subsets of a discrete preorder P, show that they are all 'upper sets' of some element of P.
I think the trick is that there may be multiple "smallest" elements. In the discrete case, they're all smallest -- and all incomparable. You might be assuming that upper sets are always principal, which is not necessarily true. In the discrete case, \(\{x, y\}\) (for distinct \(x, y\)) is an upper set, but it is not principal.

Thx @JonathanCastello. What is the meaning of 'principal', is it similar to 'the only one'? I am finding it somewhat challenging to reconcile the book's def of upper set:

“If p is an element then so is anything bigger"

with this case of incomparable members.

Not exactly. It means "generated by just one element".
We say \(U\) is the principal upper set of \(p\) if and only if \(U = \{ q\, : \, p \leq q \}\). I've seen this written as \(\uparrow\{p\}\), \(\{p\}^u\), \(\{p\}^{\tiny \triangle}\), and sometimes \(\uparrow p \) or the like. Similarly, in abstract algebra, we say \(I\) is the principal ideal generated by \(a\) in a ring \(R\) if and only if \(I = \{ a \cdot r \, : \, r \in R\}\).

One problem with internalizing upper sets on a discrete preorder rests precisely on what you quoted, Vladislav:

“If p is an element then so is anything bigger"

On the discrete preorder, nothing is ever bigger. So this condition is vacuously true, always.

Thank you. With that, and reading Thrina's explanation above, I now understand why there are as many upper sets in X as there are elements in X's power set. Essentially, any non-empty subset of X is an upper set of some element in X. I initially would say that if a set contains at least one element that is not bigger than x -- then that set is not an upper set of x.
But it seems to be an incorrect negation of "If p is an element then so is anything bigger". The number of upper sets of a discrete set, then, should be \(2^n - 1 \) (where n is the number of elements in the discrete set), not \(2^n \)?
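The count can also be checked by brute force (a sketch of mine, not from the thread): on a discrete poset the upper-set condition is vacuous, so every subset qualifies, the empty set included.

```python
from itertools import chain, combinations

def upper_sets(elements, leq):
    """All subsets U with: p in U and p <= q together imply q in U."""
    subsets = chain.from_iterable(
        combinations(list(elements), r) for r in range(len(elements) + 1))
    return [set(U) for U in subsets
            if all(q in U for p in U for q in elements if leq(p, q))]

X = [1, 2, 3]
discrete = lambda p, q: p == q            # discrete order: only x <= x
ups = upper_sets(X, discrete)
print(len(ups))                           # 8 = 2**3: every subset, empty set included
```

Swapping in the usual order on integers (`lambda p, q: p <= q`) cuts the count down, since most subsets then fail the upper-set condition.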
Given two free semicirculars X_1 and X_2 and a projection h in the von Neumann algebra generated by X_1, how does one show that the von Neumann algebra generated by {X_1, hX_2(1-h)} is a factor? It is easy to show that the two elements in the generating set are free. But I am unable to see what kind of an object hX_2(1-h) is. It appears in the definition of interpolated free group factor in Radulescu's paper (pre-print 1991) on random matrices, amalgamated free products and subfactors of free group factors of non-integer index.

Let me first point out that $X_1$ and $Y=h X_2 (1-h)$ are not freely independent. This is most easily seen if $h$ has trace 1/2, in which case $Y$ has range and support projections $h$ and $(1-h)$, respectively. But since the support and range projections of $Y$ belong to $W^*(Y)$, it would follow from the assumption that $Y$ and $X_1$ are free that actually $h$ and $X_1$ are free. But this is not possible, since they commute. Now to your question of factoriality. Here is a sketch of the proof. Let us assume for definiteness that $\tau(h) \geq \tau (1-h)$ (otherwise, switch $Y$ and $Y^\ast$). You can then verify that $Y^\ast Y$ and $YY^\ast$ have free Poisson distributions (with different parameters) and that the spectrum of $YY^\ast$ has no atoms. It follows that if you consider the polar decomposition of $Y = V |Y|$, then $V$ is a partial isometry with domain projection $1-h$ and range projection $\leq h$. Using this, you can see that $W^*(X_1,Y)$ is a factor iff $N=h W^*(X_1,Y)h $ is a factor. But $N$ is generated by $hX_1h$ and $Y^\ast Y$; you can prove that these elements are freely independent (in $N$). This either uses a random matrix model (see e.g. Voiculescu's book on free random variables for the proof of the compression formula for free group factors), or can be done directly using operator-valued semicircular systems.
Thus $N = W^*(hX_1h) * W^*(Y^\ast Y)$, which is a free product of two abelian von Neumann algebras, one of which is diffuse and the other not the complex numbers. You can then get factoriality (see references in Ueda's paper http://arxiv.org/abs/1011.5017).

One way of thinking about the operator $$ Y=hX_{2}(1-h) $$ is to work with the random matrix models. More specifically, the operators $X_{1}$ and $X_{2}$ can be thought of as the limit as $n\to\infty$ of two independent $n\times n$ Hermitian random matrices where the upper triangular parts are formed by i.i.d. Gaussian random variables of zero mean and standard deviation $1/\sqrt{n}$. Then $Y$ is the limit of the upper right corner of $X_{2}$. For example, let us assume (for notation simplicity only) that $\tau(h)=1/2$; then by the previous argument you can think of $X_{1}$ and $Y$ as: \[ X_{1} = \begin{pmatrix} x & z \\ z^{*} & y \end{pmatrix} \] \[ Y = \begin{pmatrix} 0 & w \\ 0 & 0 \end{pmatrix} \] where $x$ and $y$ are semicircular operators and $z$ and $w$ are circular operators and all of them are free. This will help you to understand the operator $Y$ and deduce all you need (joint moments, factoriality, etc) from the von Neumann algebra generated by $\{X_{1},Y\}$. Note that as Dima is showing you, the elements $X_{1}$ and $Y$ are not free over the algebra of complex numbers. However, represented as two by two matrices of operators as above they are free over the algebra $M_{2}(\mathbb{C})$.
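The random-matrix picture is easy to probe numerically (a rough sketch of mine; the normalization and names are my assumptions, not from the paper). With $\tau(h)=1/2$, freeness gives $\tau(YY^\ast)=\tau(h)\tau(1-h)\tau(X_2^2)=\tfrac14$, and $Y^2=0$ exactly because $(1-h)h=0$; both show up already at finite $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                                   # matrix size (even)

def gue(n):
    """Hermitian Gaussian matrix normalized so (1/n) tr X^2 is about 1."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / (2 * np.sqrt(n))

X2 = gue(n)
h = np.diag([1.0] * (n // 2) + [0.0] * (n // 2))   # projection with trace 1/2
Y = h @ X2 @ (np.eye(n) - h)                        # upper-right corner of X2

print(np.allclose(Y @ Y, 0))              # True: (1-h)h = 0 forces Y^2 = 0
m1 = np.trace(Y @ Y.conj().T).real / n
print(m1)                                 # close to tau(h) tau(1-h) = 1/4
```

So $Y$ behaves like a strictly "off-diagonal corner" operator: nilpotent of order 2, with $|Y|^2$ carrying the free Poisson spectral data the answer refers to.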
I am interested in showing continuity/boundedness of the weak solution to the following PDE: \begin{align*} 0 &= \mathbf{q} + \mathbf{\nabla}u && \quad x\in \Omega,\\ 0 &= \mathbf{\nabla} \cdot \mathbf{q} && \quad x\in \Omega,\\ 0 &= u && \quad x\in \partial \Omega_D,\\ g &= \mathbf{q}\cdot \mathbf{\eta} &&\quad x\in \partial\Omega_N. \end{align*} How can I show that the norm of the weak solution to this problem is bounded by the Neumann data? In other words, how can I show there exists a $C$ dependent only on the domain so that $$ \| \mathbf{q}\|_{H^{\mathrm{div}}(\Omega)} \le C \| g \|_{H^{-1/2}(\partial\Omega_N)} $$ I would especially appreciate references to papers or books. If this is handled in any of the standard references (Grisvard, or Gilbarg and Trudinger) or the like, and I have missed it, could you tell me specifically where this is handled? To the mods: I posted this question earlier today on math.stackexchange. I have taken it down from there as I think it is more appropriate here.
Let's say we have a GARCH($1,1$) process specified as follows: $y_t = \epsilon_t \sqrt h_t, \quad \epsilon_t \sim N(0,1) \quad \text{i.i.d.}$ $h_t = a_0 + a_1 y^2_{t-1} + b_1 h_{t-1}.$ If we were trying to estimate the parameters $\Theta = (a_0, a_1, b_1)$, and we have a sample $\{y_k\}^n_{k=1}$, would it make a difference if we estimated the parameters from $\{y_k\}^n_{k=1}$ or $100\times\{y_k\}^n_{k=1}$? I basically want to know which (if any) parameters would also scale. I am rescaling the $y$ for numerical stability.
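A quick simulation sketch (mine, not from any reference) makes the scaling explicit: if $y_t \mapsto c\,y_t$ then $h_t \mapsto c^2 h_t$, so only $a_0$ rescales (by $c^2$) while $a_1$ and $b_1$ are invariant. Feeding the same innovation sequence to both parameterizations reproduces the scaled path exactly:

```python
import numpy as np

def garch_path(a0, a1, b1, eps):
    """Simulate y_t = eps_t * sqrt(h_t), h_t = a0 + a1*y_{t-1}^2 + b1*h_{t-1},
    starting from the unconditional variance (assumes a1 + b1 < 1)."""
    h = a0 / (1 - a1 - b1)
    ys = []
    for e in eps:
        y = e * np.sqrt(h)
        ys.append(y)
        h = a0 + a1 * y ** 2 + b1 * h
    return np.array(ys)

rng = np.random.default_rng(1)
eps = rng.normal(size=1000)
c = 100.0
y = garch_path(0.1, 0.05, 0.9, eps)
y_scaled = garch_path(c ** 2 * 0.1, 0.05, 0.9, eps)   # only a0 is rescaled
print(np.allclose(y_scaled, c * y))                    # True
```

So when fitting to $100\times\{y_k\}$ you should expect $\hat a_0$ to come out roughly $10^4$ times larger, with $\hat a_1$ and $\hat b_1$ essentially unchanged.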
One thing to watch for, since you didn't clearly indicate it: the MLE is the maximum of the absolute values $|X_k|$, not simply the maximum of the $X_k$. Taking the derivative of the log of this likelihood function isn't valid since it would lead to the conclusion that $\frac{-n}{2\theta} = 0$, which isn't helpful in finding the MLE $\hat{\theta}$ of $\theta$. Let's step back a bit. Instead of blindly saying "take the derivative, set it equal to zero", let's look at the problem we're trying to solve with it. We're trying to find a maximum - and if you recall from your calculus class, the standard way to find a maximum is to look for critical points. The zeros of the derivative are critical points, but they're not the only critical points. In addition to them, we also have to consider any points at which the derivative fails to exist, and the boundary of the domain. What does that give us here? As you already found, the derivative doesn't have any zeros. The domain for $\theta$ is $(0,\infty)$; as $\theta\to\infty$, the probability goes to zero so that's not the maximum. As $\theta\to 0$, our formula $(2\theta)^{-n}$ goes to $\infty$. That seems unlikely; there must be a problem here that means the formula doesn't apply. Before we go any farther, we need to resolve this issue. Resolving this issue is simple in principle; the formula $f(x;\theta)=\frac1{2\theta}$ is only valid for $|x|\le \theta$, and the probability density is zero if $|x|>\theta$. There's more than one way to write this down - both the piecewise definition $f(x;\theta) = \begin{cases}\frac1{2\theta}& -\theta\le x\le\theta\\ 0&\text{otherwise}\end{cases}$ and the indicator function form $f(x;\theta)=\frac1{2\theta}\mathbf{1}_{[-\theta,\theta]}(x)$ are valid. So then, the problem as $\theta\to 0$ is resolved: once we've gone closer in than some of the observed $x$-values, some of the $f(X_k;\theta)$ we're putting into the product are zero, and thus the whole product is zero.
Well, that's the boundary dealt with, and we still haven't found anywhere that could be a maximum. There's only one possible critical point left to try: places where the derivative fails to exist. The pieces that go into our density function are smooth, so the only way the derivative can fail to exist is if we switch formulas. That happens when $|X_k|=\theta$ for some $k$. Looking closer, if $\theta=|X_k|$ and $|X_k|<|X_j|$ for some other $j$, the term $f(X_j;\theta)$ will be zero in an interval around $\theta$; that rules out $|X_k|$ as a potential maximum. The only option left, then, is $\hat{\theta}=\max_k |X_k|$, the largest of them. The likelihood jumps from zero to $(2\hat{\theta})^{-n}$ there, and then decays as $\theta$ continues to increase. ... the question I have is about how to use this information for further steps of determining things like variance and bias of $\hat{\theta}$. Now that we have our estimator, we shift perspectives. This is no longer a parameter estimation problem; it's a problem of finding things out about a known probability distribution. Given a fixed $\theta$ and $n$, what does $\hat{\theta}$ look like? This distribution is pretty well known; it's an order statistic. The easiest handle we can get is the cumulative probability distribution: $\hat{\theta} \le x$ if and only if $|X_k|\le x$ for each $k$. Those events are independent and each has probability $\frac{x}{\theta}$ (for $0\le x\le\theta$), so the cumulative distribution function is$$G(x) =P(\hat{\theta}\le x) = \begin{cases} 0 & x\le 0\\ \frac{x^n}{\theta^n}& 0\le x\le \theta\\ 1& x\ge\theta\end{cases}$$Now that we have a cumulative distribution function $G$, we can differentiate it to find the density $g(x)$. Then things like the mean $E(\hat{\theta})=\int_{\mathbb{R}}xg(x)\,dx$, the variance $E(\hat{\theta}^2)-(E(\hat{\theta}))^2$, and other measures are routine calculations.
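As a sanity check on those routine calculations (a sketch of mine): integrating against $g(x)=nx^{n-1}/\theta^n$ gives $E(\hat\theta)=\frac{n}{n+1}\theta$ and $\operatorname{Var}(\hat\theta)=\frac{n\theta^2}{(n+2)(n+1)^2}$, which a quick Monte Carlo confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 5, 200_000

# n draws from Uniform(-theta, theta), repeated over many trials
X = rng.uniform(-theta, theta, size=(trials, n))
theta_hat = np.abs(X).max(axis=1)          # the MLE: largest |X_k|

mean_exact = n / (n + 1) * theta           # from integrating x * g(x)
var_exact = n * theta ** 2 / ((n + 2) * (n + 1) ** 2)

print(theta_hat.mean(), mean_exact)        # both about 1.667
print(theta_hat.var(), var_exact)          # both about 0.079
```

The bias $E(\hat\theta)-\theta = -\theta/(n+1)$ is negative, as expected: the maximum of $|X_k|$ can never exceed $\theta$.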
I'm trying to calculate the probability current for a scattering problem. The potential is $V = V_0 > 0$ in $x>0$, with $E>V_0$. So I have in the region $x \le 0$: $$\psi = \exp(ikx) + R \exp(-ikx)$$ And in $x>0$ $$\psi = T \exp(i \kappa x)$$ I am trying to calculate the probability current, $j = \frac{-i\hbar}{2m} (\bar{\psi}\psi' - \bar{\psi}'\psi)$, in each region and show that it is equal. However when I calculate the probability current in $x<0$, I get: $$\bar{\psi} = \exp(-ikx) + \bar{R}\exp(ikx)$$ $$\psi' = ik\exp(ikx) -ikR\exp(-ikx)$$ $$\bar{\psi}' = -ik\exp(-ikx) + ik\bar{R}\exp(ikx)$$ $$\psi = \exp(ikx) + R\exp(-ikx)$$ So: $$\bar{\psi} \psi' = ik -ikR\exp(-2ikx) + \bar{R}ik\exp(2ikx) - ikR\bar{R}$$ $$\bar{\psi}' \psi = -ik -ikR\exp(-2ikx) +ik\bar{R}\exp(2ikx) +ik R\bar{R}$$ Hence, $$j = \frac{\hbar k}{2m} (1 + R \exp(-2ikx) - \bar{R} \exp(2ikx) - R\bar{R})$$ And in the region $x>0$: $$j = \frac{\kappa \hbar}{2m}(T\bar{T}).$$ I can show that (by imposing continuity conditions at the boundary): $$k(1-|R|^2) = \kappa |T|^2$$ So I would expect the first probability current to just be: $\frac{\hbar k}{2m}(1-|R|^2)$. Any help with this issue is much appreciated! I'm pretty sure I'm making some stupid mistake somewhere, but it's very frustrating as I cannot find it! Thanks
Let $f : \mathbb{S}^n \to \mathbb{S}^n$ be a continuous function. From homology, the degree $\deg_1(f)$ of $f$ can be defined as the integer $d$ such that $f_* : x \mapsto d \cdot x$, where $f$ induces the morphism $f_* : \mathbb{Z} \simeq H_n(\mathbb{S}^n) \to \mathbb{Z} \simeq H_n(\mathbb{S}^n)$. From topological degree, the degree $\deg_2(f)$ of $f$ can be defined as $\deg(\tilde{f},B^{n+1},0)$, where $\deg$ is the Brouwer degree as defined in the book Topological Degree Theory and Applications or in the paper An Elementary Analytic Theory of the Degree of Mapping in n-Dimensional Space and $\tilde{f} : \overline{B}^{n+1} \to \mathbb{R}^{n+1}$ is a continuous extension of $f$ on the unit closed ball of $\mathbb{R}^{n+1}$. (The integer $\deg_2(f)$ does not depend on the choice of $\tilde{f}$.) Question: Do $\deg_1$ and $\deg_2$ coincide? I think it is true, but probably I do not have enough knowledge about homology theory to find a rigorous proof... Added: Let $\Omega \subset \mathbb{R}^n$ be a bounded open set, $f : \overline{\Omega} \to \mathbb{R}^n$ be a continuous function $C^1$ on $\Omega$ and $p \notin f(\partial \Omega)$ a regular value of $f$. The Brouwer degree of $f$ is defined by $$\deg(f,\Omega,p)= \sum\limits_{x \in f^{-1}(p)} \mathrm{sign}(J_f(x))$$ where $J_f$ is the Jacobian determinant of $f$. It can be shown that $\deg(g,\Omega,q)=\deg(f,\Omega,p)$ if $q$ is a regular value of a function $g \in C^1(\overline{\Omega})$ such that $\sup\limits_{x \in \overline{\Omega}} \|f(x)-g(x)\|$ and $\|p-q\|$ are sufficiently small.
Using Sard's theorem and the Stone-Weierstrass approximation theorem, we can define the Brouwer degree of a continuous function $f: \overline{\Omega} \to \mathbb{R}^n$ with respect to a bounded open set $\Omega \subset \mathbb{R}^n$ and a point $p \notin f(\partial \Omega)$ by $$ \deg(f,\Omega,p)=\lim\limits_{k \to + \infty} \deg(f_k,\Omega,p_k)$$ where $(f_k)$ is a sequence in $C^1(\overline{\Omega})$ converging uniformly to $f$ and $p_k$ is a regular value of $f_k$ for each $k$, with $p_k \to p$.
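As a concrete instance of the sign-sum formula (an illustrative example of mine, not taken from the question): take $n=1$, $\Omega=(-2,2)$, $f(x)=x^2-1$, and $p=0$.

```latex
% f^{-1}(0) = \{-1, 1\} and J_f(x) = f'(x) = 2x, so
\deg(f,\Omega,0)
  = \operatorname{sign}(f'(-1)) + \operatorname{sign}(f'(1))
  = -1 + 1
  = 0.
% Consistently with degree 0: values slightly above 0 are attained
% twice with opposite orientations, and values below -1 not at all.
```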
in nearly all code examples I've seen of a VAE, the loss functions are defined as follows (this is tensorflow code, but I've seen similar for theano, torch etc. It's also for a convnet, but that's also not too relevant, just affects the axes the sums are taken over):

```python
# latent space loss. KL divergence between latent space distribution and unit gaussian, for each batch.
# first half of eq 10. in https://arxiv.org/abs/1312.6114
kl_loss = -0.5 * tf.reduce_sum(1 + log_sigma_sq - tf.square(mu) - tf.exp(log_sigma_sq), axis=1)

# reconstruction error, using pixel-wise L2 loss, for each batch
rec_loss = tf.reduce_sum(tf.squared_difference(y, x), axis=[1,2,3])

# or binary cross entropy (assuming 0...1 values)
y = tf.clip_by_value(y, 1e-8, 1-1e-8) # prevent nan on log(0)
rec_loss = -tf.reduce_sum(x * tf.log(y) + (1-x) * tf.log(1-y), axis=[1,2,3])

# sum the two and average over batches
loss = tf.reduce_mean(kl_loss + rec_loss)
```

However the numeric range of kl_loss and rec_loss are very dependent on latent space dims and input feature size (e.g. pixel resolution) respectively. Would it be sensible to replace the reduce_sum's with reduce_mean to get per z-dim KLD and per pixel (or feature) LSE or BCE? More importantly, how do we weight latent loss with reconstruction loss when summing together for the final loss? Is it just trial and error? or is there some theory (or at least rule of thumb) for it? I couldn't find any info on this anywhere (including the original paper). The issue I'm having, is that if the balance between my input feature (x) dimensions and latent space (z) dimensions is not 'optimum', either my reconstructions are very good but the learnt latent space is unstructured (if x dimensions is very high and reconstruction error dominates over KLD), or vice versa (reconstructions are not good but learnt latent space is well structured if KLD dominates).
I'm finding myself having to normalise reconstruction loss (dividing by input feature size), and KLD (dividing by z dimensions), and then manually weighting the KLD term with an arbitrary weight factor (the normalisation is so that I can use the same or similar weight independent of dimensions of x or z). Empirically I've found around 0.1 to provide a good balance between reconstruction and structured latent space which feels like a 'sweet spot' to me. I'm looking for prior work in this area. Upon request, maths notation of above (focusing on L2 loss for reconstruction error) $$\mathcal{L}_{latent}^{(i)} = -\frac{1}{2} \sum_{j=1}^{J}(1+\log (\sigma_j^{(i)})^2 - (\mu_j^{(i)})^2 - (\sigma_j^{(i)})^2)$$ $$\mathcal{L}_{recon}^{(i)} = \sum_{k=1}^{K}(y_k^{(i)}-x_k^{(i)})^2$$ $$\mathcal{L}^{(m)} = \frac{1}{M}\sum_{i=1}^{M}(\mathcal{L}_{latent}^{(i)} + \mathcal{L}_{recon}^{(i)})$$ where $J$ is the dimensionality of latent vector $z$ (and corresponding mean $\mu$ and variance $\sigma^2$), $K$ is the dimensionality of the input features, $M$ is the mini-batch size, the superscript $(i)$ denotes the $i$th data point and $\mathcal{L}^{(m)}$ is the loss for the $m$th mini-batch.
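For concreteness, here is one way to write the normalized, weighted loss just described, as a NumPy sketch (my own; the 0.1 default encodes the empirical 'sweet spot' above, not a theoretically derived value):

```python
import numpy as np

def vae_loss(x, y, mu, log_sigma_sq, kl_weight=0.1):
    """Dimension-normalized VAE loss: per-z-dim KL plus per-pixel L2,
    with an arbitrary weight on the KL term (0.1 found empirically)."""
    # mean over latent dims instead of sum -> per-z-dim KL divergence
    kl = -0.5 * np.mean(
        1 + log_sigma_sq - mu ** 2 - np.exp(log_sigma_sq), axis=1)
    # mean over pixels/channels instead of sum -> per-pixel L2 error
    rec = np.mean((y - x) ** 2, axis=(1, 2, 3))
    # weighted sum, averaged over the batch
    return np.mean(rec + kl_weight * kl)
```

Because both terms are per-dimension averages, the same `kl_weight` transfers more reasonably across image resolutions and latent sizes than the raw summed version.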
I'm analysing classroom interaction data. I'm using a simple quantitative index counting the number of interactions of a certain kind. I want to know if it significantly changed between years 1 and 2, where I used method A to teach, and year 3, where I used method B. I thus have $x_1$ and $x_2$ from the first two years to compare against a single value $x_3$ for the third year. Is it correct to use a $t$-test to check if $x_3$ is significantly different from $x_1$ and $x_2$ by checking if, following a $t$ distribution with 1 degree of freedom, the probability of getting a value at least as extreme as $x_3$ is lower than some $p$-value? That is, whether this holds: $$P_{t(df=1)}\left(X \geq \frac{x_3 - \bar{x}}{s}\right) \leq 0.05,$$ where $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$ and $s=\sqrt{\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2}$, so in this case where $n=2$ we would have $\bar{x} = \frac{x_1 + x_2}{2}$ and $s=\sqrt{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2}$. Is this correct, or shall I rather use $\frac{x_3 - \bar{x}}{s/\sqrt{2}}$ instead of $\frac{x_3 - \bar{x}}{s}$; i.e., divide by the standard error of the mean instead of the sample standard deviation? (I am aware that this makes the assumption that the population follows a normal distribution.)
I need help with a passage from a paper that I don't quite understand. Let $M$ be a module over the ring $\mathbb C\{t\}$ of convergent power series. We want to show that $M$ is torsion-free, i.e., if $\psi(t)\cdot m = 0$ for some non-zero $\psi \in \mathbb C\{t\}$ then $m = 0$. What the author is actually showing, if I understand correctly, is that $t\cdot m = 0$ implies $m = 0$, for every $m$. This is of course necessary, but is it sufficient? So the question is: Let $M$ be a $\mathbb C\{t\}$-module. Suppose $t\cdot m = 0$ implies $m=0$. Is then $M$ torsion-free?
Wed, 18 Sep 2019

Suppose you have a bottle that contains !!N!! whole pills. Each day you select a pill at random from the bottle. If it is a whole pill you eat half and put the other half back in the bottle. If it is a half pill, you eat it. How many half-pills can you expect to have in the bottle the day after you break the last whole pill?

Let's write !!E(N)!! for the expected number of half-pills. It's easily seen that !!E(N) = 0, 1, \frac32!! for !!N=0,1,2!!, and it's not hard to calculate that !!E(3) = \frac{11}{6}!!. For larger !!N!! it's easy to use Monte Carlo simulation, and find that !!E(30)!! is almost exactly !!4!!. But it's also easy to use dynamic programming and compute that $$E(30) = \frac{9304682830147}{2329089562800}$$ exactly, which is a bit less than 4, only !!3.994987!!. Similarly, the dynamic programming approach tells us that $$E(100) = \frac{14466636279520351160221518043104131447711}{2788815009188499086581352357412492142272}$$ which is about !!5.187!!. (I hate the term “dynamic programming”. It sounds so cool, but then you find out that all it means is “I memoized the results in a table”. Ho hum.)

As you'd expect for a distribution with a small mean, you're much more likely to end with a small number of half-pills than a large number. In this graph, the red line shows the probability of ending with various numbers of half-pills for an initial bottle of 100 whole pills; the blue line for an initial bottle of 30 whole pills, and the orange line for an initial bottle of 5 whole pills. The data were generated by this program.

The !!E!! function appears to increase approximately logarithmically. It first exceeds !!2!! at !!N=4!!, !!3!! at !!N=11!!, !!4!! at !!N=31!!, and !!5!! at !!N=83!!. The successive ratios of these !!N!!-values are !!2.75, 2.81,!! and !!2.68!!. So we might guess that !!E(N)!! first exceeds 6 around !!N=228!! or so, and indeed !!E(226) < 6 < E(227)!!.
So based on purely empirical considerations, we might guess that $$E(N) \approx \frac{\log{\frac{15}{22}N}}{\log 2.75}.$$ (The !!\frac{15}{22}!! is a fudge factor to get the curves to line up properly.) I don't have any theoretical justification for this, but I think it might be possible to get a bound. I don't think modeling this as a Markov process would work well. There are too many states, and it is not ergodic.

[ Addendum 20190919: Ben Handley informs me that !!E(n)!! is simply the harmonic numbers, !!E(n) = \sum_{k=1}^n \frac1k!!. I feel a little foolish that I did not notice that the first four terms matched. The appearance of !!E(3)=\frac{11}6!! should have tipped me off. Thank you, M. Handley. ]

[ Addendum 20190920: I was so fried when I wrote this that I also didn't notice that the denominator I guessed, !!2.75!!, is almost exactly !!e!!. ]

[ Addendum 20191004: More about this ]
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y . $$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
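Puzzle 148 invites something concrete. Here is a minimal sketch (all schema, table, and element names below are invented for illustration, not part of the lecture) of two set-valued functors on a one-arrow schema, together with a componentwise bijection checked against the single naturality square:

```python
# Sketch: a free category on the graph  Employee --dept--> Department,
# two "databases" F, G (functors into Set), and a candidate natural
# isomorphism alpha given componentwise.  All data here is invented.

# Functor F: objects -> sets, plus its action on the generating arrow.
F_objects = {"Employee": {"alice", "bob"}, "Department": {"math", "cs"}}
F_dept = {"alice": "math", "bob": "cs"}          # F(dept)

# Functor G: an isomorphic copy using different element names.
G_objects = {"Employee": {"a", "b"}, "Department": {"MATH", "CS"}}
G_dept = {"a": "MATH", "b": "CS"}                # G(dept)

# Candidate natural transformation alpha: one function per object.
alpha = {
    "Employee":   {"alice": "a", "bob": "b"},
    "Department": {"math": "MATH", "cs": "CS"},
}

def is_bijection(f, dom, cod):
    # f is a bijection dom -> cod: total, into cod, and injective.
    return set(f) == dom and set(f.values()) == cod and len(set(f.values())) == len(f)

def naturality_square_commutes():
    # For the generating arrow dept: Employee -> Department, require
    # alpha_Department . F(dept) == G(dept) . alpha_Employee.
    return all(alpha["Department"][F_dept[e]] == G_dept[alpha["Employee"][e]]
               for e in F_objects["Employee"])

natural_iso = (all(is_bijection(alpha[x], F_objects[x], G_objects[x]) for x in F_objects)
               and naturality_square_commutes())
print(natural_iso)
```

The two "databases" are not equal - they store different element names - but the componentwise bijections match them up compatibly with the schema's arrow, which is exactly what the natural isomorphism expresses.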
In this review, we present a couple of the more important Multivariable Calculus methods commonly used in STAT 414, mainly for Exam 4 and the Final Exam. While this is not a complete review, you should use this to refresh your memory and guide you to where you need to spend time reviewing. As always, practice is key! First, multivariable calculus involves functions of several variables. For simplicity, we focus on functions of two variables. You can find information on the web or in other texts to review in more detail, if you need. Partial Derivatives Let's begin with Partial Derivatives. Suppose we have the function \(f(x,y)\). The partial derivative with respect to \(x\) would be \[f_x(x,y)=\lim_{h\rightarrow 0} \frac{f(x+h, y)-f(x,y)}{h}\] Similarly, the partial derivative of \(f(x,y)\) with respect to \(y\) would be \[f_y(x,y)=\lim_{h\rightarrow 0} \frac{f(x, y+h)-f(x,y)}{h}\] The notation for partial derivatives is not the same for all texts. You should be able to recognize the different forms. The notation, for example, for the partial derivative of $f(x,y)$, with respect to $x$, could be denoted as: \[f_x(x,y)=\frac{\partial}{\partial x}f(x, y)=\frac{\partial f}{\partial x}\] Derivatives of Multivariable Functions Video (Khan Academy) Double Integrals Integrating over regions will be important in STAT 414. Suppose we have the function \(f(x,y)\); the double integral over the region \(R\) would be: \[\int \int_R f(x,y)\; dx dy\] Consider the rectangular region defined by \(a\le x\le b\) and \(c\le y\le d\), or \(R=[a,b]\times[c, d]\). Then the iterated integral would be: \( \int_c^d \left[\int_a^b f(x,y) dx\right] dy=\int_a^b \left[\int_c^d f(x,y) dy\right] dx \) When the region is not rectangular, things can get complicated. It is important to draw out the support space and consider the region when building these double integrals. Double Integrals Video (Khan Academy) Example C.4.1 Find the partial derivatives of \(f(x,y)=\dfrac{1}{x^2}-xy+6\sqrt{y}\). First, let's find the partial derivative with respect to \(x\). To do this, we consider \(y\) as a constant.
\[\dfrac{\partial f}{\partial x}=-\dfrac{2}{x^3}-y\] Now, let's find \(\dfrac{\partial f}{\partial y}\). \[\dfrac{\partial f}{\partial y}=\dfrac{3}{\sqrt{y}}-x\] Example C.4.2 Integrate \(f(x,y)=24xy\), defined for \(0<x<1\), \(0<y<1\), and \(x+y<1\), over the region where \(x+y<\dfrac{1}{2}\). \begin{align*} \int_0^{1/2}\int_{0}^{1/2-y} 24xy\; dx \; dy &=\int_0^{1/2} 12y^3-12y^2+3y\; dy\\ & = 3y^4-4y^3+\dfrac{3}{2}y^2\Big|_0^{1/2}\\&=3\left(\dfrac{1}{2}\right)^4-4\left(\dfrac{1}{2}\right)^3+\dfrac{3}{2}\left(\dfrac{1}{2}\right)^2\\ & = \dfrac{1}{16} \end{align*}
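The value \(\dfrac{1}{16}\) in Example C.4.2 is easy to sanity-check numerically. The sketch below uses a plain midpoint Riemann sum (no libraries assumed; the function name and grid size are arbitrary choices):

```python
# Numerical sanity check of Example C.4.2: the double integral of 24xy
# over the triangle x, y > 0, x + y < 1/2 should come out near 1/16.

def double_integral(f, n=400):
    h = 0.5 / n                      # grid spacing on [0, 1/2]
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h            # midpoint of cell i in the x direction
        for j in range(n):
            y = (j + 0.5) * h        # midpoint of cell j in the y direction
            if x + y < 0.5:          # restrict to the triangular region
                total += f(x, y) * h * h
    return total

approx = double_integral(lambda x, y: 24 * x * y)
print(approx)   # close to 1/16 = 0.0625
```

Drawing the support region first, as recommended above, is what tells you the inner limits run from \(0\) to \(\tfrac{1}{2}-y\).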
Section 3.5.2 in The Elements of Statistical Learning is useful because it puts PLS regression in the right context (of other regularization methods), but is indeed very brief, and leaves some important statements as exercises. In addition, it only considers the case of a univariate dependent variable $\mathbf y$. The literature on PLS is vast, but can be quite confusing because there are many different "flavours" of PLS: univariate versions with a single DV $\mathbf y$ (PLS1) and multivariate versions with several DVs $\mathbf Y$ (PLS2), symmetric versions treating $\mathbf X$ and $\mathbf Y$ equally and asymmetric versions ("PLS regression") treating $\mathbf X$ as independent and $\mathbf Y$ as dependent variables, versions that allow a global solution via SVD and versions that require iterative deflations to produce every next pair of PLS directions, etc. etc. All of this has been developed in the field of chemometrics and stays somewhat disconnected from the "mainstream" statistical or machine learning literature. The overview paper that I find most useful (and that contains many further references) is: For a more theoretical discussion I can further recommend: A short primer on PLS regression with univariate $y$ (aka PLS1, aka SIMPLS) The goal of regression is to estimate $\beta$ in a linear model $y=X\beta + \epsilon$. The OLS solution $\beta=(\mathbf X^\top \mathbf X)^{-1}\mathbf X^\top \mathbf y$ enjoys many optimality properties but can suffer from overfitting. Indeed, OLS looks for $\beta$ that yields the highest possible correlation of $\mathbf X \beta$ with $\mathbf y$. If there are a lot of predictors, then it is always possible to find some linear combination that happens to have a high correlation with $\mathbf y$. This will be a spurious correlation, and such $\beta$ will usually point in a direction explaining very little variance in $\mathbf X$. Directions explaining very little variance are often very "noisy" directions.
If so, then even though the OLS solution performs great on training data, it will perform much worse on testing data. In order to prevent overfitting, one uses regularization methods that essentially force $\beta$ to point into directions of high variance in $\mathbf X$ (this is also called "shrinkage" of $\beta$; see Why does shrinkage work?). One such method is principal component regression (PCR) that simply discards all low-variance directions. Another (better) method is ridge regression that smoothly penalizes low-variance directions. Yet another method is PLS1. PLS1 replaces the OLS goal of finding $\beta$ that maximizes correlation $\operatorname{corr}(\mathbf X \beta, \mathbf y)$ with an alternative goal of finding $\beta$ with length $\|\beta\|=1$ maximizing covariance $$\operatorname{cov}(\mathbf X \beta, \mathbf y)\sim\operatorname{corr}(\mathbf X \beta, \mathbf y)\cdot\sqrt{\operatorname{var}(\mathbf X \beta)},$$ which again effectively penalizes directions of low variance. Finding such $\beta$ (let's call it $\beta_1$) yields the first PLS component $\mathbf z_1 = \mathbf X \beta_1$. One can further look for the second (and then third, etc.) PLS component that has the highest possible covariance with $\mathbf y$ under the constraint of being uncorrelated with all the previous components. This has to be solved iteratively, as there is no closed-form solution for all components (the direction of the first component $\beta_1$ is simply given by $\mathbf X^\top \mathbf y$ normalized to unit length). When the desired number of components is extracted, PLS regression discards the original predictors and uses PLS components as new predictors; this yields some linear combination of them $\beta_z$ that can be combined with all $\beta_i$ to form the final $\beta_\mathrm{PLS}$. Note that: If all PLS1 components are used, then PLS will be equivalent to OLS.
So the number of components serves as a regularization parameter: the lower the number, the stronger the regularization. If the predictors $\mathbf X$ are uncorrelated and all have the same variance (i.e. $\mathbf X$ has been whitened), then there is only one PLS1 component and it is equivalent to OLS. Weight vectors $\beta_i$ and $\beta_j$ for $i\ne j$ are not going to be orthogonal, but will yield uncorrelated components $\mathbf z_i=\mathbf X \beta_i$ and $\mathbf z_j=\mathbf X \beta_j$. All that being said, I am not aware of any practical advantages of PLS1 regression over ridge regression (while the latter does have lots of advantages: it is continuous and not discrete, has an analytical solution, is much more standard, allows kernel extensions and analytical formulas for leave-one-out cross-validation errors, etc. etc.). Quoting from Frank & Friedman: RR, PCR, and PLS are seen in Section 3 to operate in a similar fashion. Their principal goal is to shrink the solution coefficient vector away from the OLS solution toward directions in the predictor-variable space of larger sample spread. PCR and PLS are seen to shrink more heavily away from the low spread directions than RR, which provides the optimal shrinkage (among linear estimators) for an equidirection prior. Thus PCR and PLS make the assumption that the truth is likely to have particular preferential alignments with the high spread directions of the predictor-variable (sample) distribution. A somewhat surprising result is that PLS (in addition) places increased probability mass on the true coefficient vector aligning with the $K$th principal component direction, where $K$ is the number of PLS components used, in fact expanding the OLS solution in that direction. They also conduct an extensive simulation study and conclude (emphasis mine): For the situations covered by this simulation study, one can conclude that all of the biased methods (RR, PCR, PLS, and VSS) provide substantial improvement over OLS.
[...] In all situations, RR dominated all of the other methods studied. PLS usually did almost as well as RR and usually outperformed PCR, but not by very much. Update: In the comments @cbeleites (who works in chemometrics) suggests two possible advantages of PLS over RR: An analyst can have an a priori guess as to how many latent components should be present in the data; this will effectively allow to set a regularization strength without doing cross-validation (and there might not be enough data to do a reliable CV). Such an a priori choice of $\lambda$ might be more problematic in RR. RR yields one single linear combination $\beta_\mathrm{RR}$ as an optimal solution. In contrast, PLS with e.g. five components yields five linear combinations $\beta_i$ that are then combined to predict $y$. Original variables that are strongly inter-correlated are likely to be combined into a single PLS component (because combining them together will increase the explained variance term). So it might be possible to interpret the individual PLS components as some real latent factors driving $y$. The claim is that it is easier to interpret $\beta_1, \beta_2,$ etc., as opposed to the joint $\beta_\mathrm{PLS}$. Compare this with PCR, where one can also see it as an advantage that individual principal components can potentially be interpreted and assigned some qualitative meaning.
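To make the primer concrete, here is a hedged NumPy sketch on synthetic data: the first PLS1 direction (just $\mathbf X^\top \mathbf y$ normalized, as stated above), a one-component PLS fit, and ridge and OLS for comparison. This is an illustration only, not a full PLS implementation (no deflation, no further components):

```python
import numpy as np

# Synthetic, centered data (all values here are arbitrary choices).
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
X -= X.mean(axis=0)
beta_true = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
y -= y.mean()

# First PLS1 direction: X^T y normalized to unit length.
beta1 = X.T @ y
beta1 /= np.linalg.norm(beta1)
z1 = X @ beta1                            # first PLS component

# One-component PLS prediction: regress y on z1 alone.
y_hat_pls1 = z1 * (z1 @ y) / (z1 @ z1)

# Ridge, for comparison: smoothly shrinks low-variance directions.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# OLS (no shrinkage), for reference.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Since the one-component fit lives inside the column space of $\mathbf X$, its training residual can never beat the OLS residual; the hope, as discussed above, is that the shrunken fit generalizes better.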
Here's a problem I need some help with: Let $ f $ be a twice differentiable function in the closed interval $ [-1, 1] $ and $ f''(x) \geq 0 $ for all $ x \in [-1, 1] $. Show that $$ \int^1 _{-1} f(x)dx \geq 2f(0)$$ When does the equality hold? There was a small hint given to apply the Mean Value Theorem and the fact that $f'(x)$ is growing in that interval. I've got a very vague idea how to apply the MVT. This is what I've done so far $$ \int_{-1} ^1 f(x)\,dx = F(1) - F(-1) = (1 -(-1))f(\xi) = 2f(\xi)$$ using the MVT. But I've got no idea how to show the inequality to be true and why $f(0)$ in particular. Actually I'm not sure if I'm on the right track to begin with. PS. I'd prefer some hints to a complete solution at first.
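Not a proof, of course, but the claimed inequality is easy to spot-check numerically on a few convex functions (a throwaway sketch using a crude midpoint rule; the helper name and grid size are arbitrary):

```python
# Sanity check of  integral_{-1}^{1} f(x) dx >= 2 f(0)
# for some functions with f'' >= 0 on [-1, 1].
import math

def integrate(f, a=-1.0, b=1.0, n=10_000):
    # Midpoint Riemann sum of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

convex_examples = [lambda x: x * x, math.exp, lambda x: abs(x) ** 3, math.cosh]
for f in convex_examples:
    assert integrate(f) >= 2 * f(0) - 1e-9
print("inequality holds for all examples")
```

For instance $f(x)=x^2$ gives $\int_{-1}^1 x^2\,dx = \tfrac23 \ge 0 = 2f(0)$, consistent with the claim.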
Say $A$ is a square matrix over an algebraically closed field. Say $m$ is the minimal polynomial and $p$ is the characteristic polynomial. Of course C-H implies that $m|p$. Conversely, if we can show $m|p$ then C-H follows; the question is whether one can give a "simple", "elementary" or "straightforward" proof that $m|p$. Note. What I really want is a proof such that I feel I actually understand the whole thing. Hence in particular no Jordan form allowed. Edit. An answer has appeared that shows $m|p$ in a very simple way - simply demolishes what I wrote below. Edit. When I posted this it was an honest question that I didn't know the answer to. I think I got it; if anyone wants to say they believe the argument below (or not) that would be great. First, it's clear that linear factors of $m$ must divide $p$: If $m(\lambda)=0$ then $p(\lambda)=0$. Indeed, write $m(t)=(t-\lambda)r(t)$; then $(A-\lambda)r(A)=0$. Minimality of $m$ shows that $r(A)\ne0$, hence $A-\lambda$ is not invertible, hence $p(\lambda)=0$. If we could show that $(t-\lambda)^k|m$ implies $(t-\lambda)^k|p$ we'd be set. Some possible progress on that, first restricted to a simple special case: If $t^2|m(t)$ then $\dim(\ker(A^2))\ge 2$. Proof: Say $X=K^n$ is the underlying vector space. Say $m(t)=t^2q(t)$. Let $$Y=q(A)X,$$ $$B=A|_Y.$$ Then $Y\subset\ker(A^2)$. Say $d=\dim(Y)$. Now $B^2=0$, and it follows easily that $B^d=0$. But $B\ne0$, hence $d\ge2$. Similarly If $(t-\lambda)^k|m$ then $\dim(\ker(A-\lambda)^k)\ge k$. So we only need If $\dim(\ker(A-\lambda)^k)\ge k$ then $(t-\lambda)^k|p$. Which I gather is true, but only by hearsay; I'm sort of missing what it "really means" to say $t^2|p$. Wait, I think I got it.
Say $$m(t)=(t-\lambda)^kq(t),$$ $$q(\lambda)\ne0.$$ The "kernel lemma" shows that $$X=\ker((A-\lambda)^k)\oplus\ker(q(A))=X_1\oplus X_2.$$ Each $X_j$ is $A$-invariant, so we can define $$B_j=A|_{X_j}.$$ Since similar matrices have the same determinant we can use any basis we like in calculating the determinant $p(t)$; if we use a basis compatible with the decomposition $X=X_1\oplus X_2$ it's clear that $$p_A=p_{B_1}p_{B_2},$$ so we need only show that $$p_{B_1}(t)=(t-\lambda)^k.$$ In fact it's actually enough to show $(t-\lambda)^k|p_{B_1}$, and that's clear: Lemma. If $B$ is a $d\times d$ nilpotent matrix then $p_B(t)=t^d$. Proof: We're still assuming $K$ is algebraically closed; $B$ cannot have a non-zero eigenvalue. So if $d=\dim(\ker((A-\lambda)^k))$ then $$p_{B_1}(t)=(t-\lambda)^d;$$ we've already shown that $d\ge k$, so $(t-\lambda)^k|p$. Hmm. Maybe that doesn't look all that simple. It's nonetheless the sort of thing I wanted, because I can give a one-line summary making it at least comprehensible: One-line summary: Since $m$ splits, the kernel lemma (a simple consequence of the fact that $K[t]$ is a PID) shows that $A$ is the direct sum of operators $B_j$ such that $B_j-\lambda_j$ is nilpotent. So it's enough to prove C-H for nilpotent operators, which is not hard.
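None of this replaces the argument, but Cayley-Hamilton itself is easy to spot-check numerically. A small NumPy sketch (the matrix is an arbitrary example of mine) evaluates the characteristic polynomial at $A$ by Horner's scheme:

```python
import numpy as np

# An arbitrary example matrix with eigenvalues 2, 2, 3 (and a
# nontrivial nilpotent part at eigenvalue 2).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Characteristic polynomial coefficients, highest degree first.
# For this A: t^3 - 7 t^2 + 16 t - 12.
coeffs = np.poly(A)

# Evaluate p(A) by Horner's scheme with matrix arguments.
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(3)

print(np.linalg.norm(P))   # ~0, as Cayley-Hamilton predicts
```

Up to floating-point error the result is the zero matrix, i.e. $p(A)=0$.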
Although the OP did not respond, I am answering this to showcase the method I proposed (and indicate what statistical intuition it may contain). First, it is important to distinguish on which entity the constraint is imposed. In a deterministic optimization setting, there is no such issue: there is no "true value", and an estimator of it. We just have to find the optimizer. But in a stochastic setting, there are conceivably two different cases: a) "Estimate the parameter given a sample that has been generated by a population that has a non-negative mean" (i.e. $\theta \ge 0$) and b) "Estimate the parameter under the constraint that your estimator cannot take negative values" (i.e. $\hat \theta \ge 0$). In the first case, imposing the constraint is including prior knowledge on the unknown parameter. In the second case, the constraint can be seen as reflecting a prior belief on the unknown parameter (or some technical, or "strategic", limitation of the estimator). The mechanics of the solution are the same, though: The objective function (the log-likelihood augmented by the non-negativity constraint on $\theta$) is $$\tilde L(\theta|\mathbf{x})=-\frac n2 \ln(2\pi)-\frac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2 +\xi\theta,\qquad \xi\ge 0 $$ Given concavity, the f.o.c. is also sufficient for a global maximum. We have $$\frac {\partial}{\partial \theta}\tilde L(\theta|\mathbf{x})=\sum_{i=1}^{n}(x_i-\theta) +\xi = 0 \Rightarrow \hat \theta = \bar x+\frac{\xi}{n} $$ 1) If the solution lies in an interior point ($\Rightarrow \hat \theta >0$), then $\xi=0$ and so the solution is $\{\hat \theta= \bar x>0,\; \xi^*=0\}$. 2) If the solution lies on the boundary ($\Rightarrow \hat \theta =0$) then we obtain the value of the multiplier at the solution, $\xi^* = -n\bar x$, and so the full solution is $\{\hat \theta= 0,\; \xi^*=-n\bar x\}$.
But since the multiplier must be non-negative, this necessarily implies that in this case we would have $\bar x\le 0$. (There is nothing special about setting the constraint to zero. If say the constraint was $\theta \ge -2$, then if the solution lay on the boundary, $\hat \theta = -2$, it would imply, in order for the multiplier to have a non-negative value, that $\bar x \le -2$.) So, if the optimizer is $0$, what are we facing here? If we are in "constraint type-a", i.e. we have been told that the sample comes from a population that has a non-negative mean, then with $\hat \theta =0$ chances are that the sample may not be representative of this population. If we are in "constraint type-b", i.e. we had the belief that the population has a non-negative mean, with $\hat \theta =0$ this belief is questioned. (This is essentially an alternative way to deal with prior beliefs, outside the formal Bayesian approach.) Regarding the properties of the estimator, one should carefully distinguish this constrained estimation case from the case where the true parameter lies on the boundary of the parameter space.
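The case analysis above collapses to the closed form $\hat\theta=\max(\bar x, 0)$. A small sketch (synthetic data; the seed, sample size, and grid bounds are all arbitrary) compares that closed form against a brute-force maximization of the log-likelihood over the admissible region:

```python
import random

# Synthetic sample whose population mean is negative, so the sample
# mean is likely (not certain) to be negative too.
random.seed(1)
x = [random.gauss(-0.3, 1.0) for _ in range(50)]
xbar = sum(x) / len(x)

# Closed form from the KKT analysis above.
theta_hat = max(xbar, 0.0)

def loglik(theta):
    # Normal log-likelihood in theta, up to additive constants.
    return -0.5 * sum((xi - theta) ** 2 for xi in x)

# Brute-force check over a fine grid of admissible (non-negative) theta.
grid = [i / 1000 for i in range(0, 3001)]   # theta in [0, 3], step 0.001
theta_grid = max(grid, key=loglik)

print(theta_hat, theta_grid)
```

Because the log-likelihood is concave, the grid maximizer lands at the grid point nearest $\max(\bar x,0)$, agreeing with the KKT answer to within the grid resolution.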
I'm reading through Lancaster & Blundell's Quantum Field Theory for the Gifted Amateur and have got to Chapter 17 on calculating propagators. In their equation 17.23 they derive the expression for the free Feynman propagator for a scalar field to be $$\Delta\left(x,y\right)=\int\frac{d^{4}p}{\left(2\pi\right)^{4}}e^{-ip\cdot\left(x-y\right)}\frac{i}{\left(p^{0}\right)^{2}-E_{\boldsymbol{p}}^{2}+i\epsilon}$$ where ##p^0=E## represents an energy that is not on the mass shell, so that in general ##p^{0}\ne E_{\boldsymbol{p}}=\sqrt{\boldsymbol{p}^{2}+m^{2}}##. I'm able to follow their derivation (I think), but then in Exercise 17.4, they ask us to show that the Feynman propagator for the quantum simple harmonic oscillator with spring constant ##m\omega_{0}^{2}## is given by $$\tilde{G}\left(\omega\right)=\frac{i}{m\left(\omega^{2}-\omega_{0}^{2}+i\epsilon\right)}$$ It seems to me that the energy of the harmonic oscillator in its "one-particle" state is ##\omega_0##, and the general energy (off the mass shell) is given by ##\omega##, so that the position-space propagator would be given by $$G\left(x,y\right)=\int\frac{d^{4}p}{\left(2\pi\right)^{4}}e^{-ip\cdot\left(x-y\right)}\frac{i}{\omega^{2}-\omega_{0}^{2}+i\epsilon}$$ From there, we can read off the momentum-space Fourier component as $$\tilde{G}\left(\omega\right)=\frac{i}{\omega^{2}-\omega_{0}^{2}+i\epsilon}$$ I can't figure out where the extra factor of ##m## in the denominator comes from. Introducing the extra ##m## seems to mess up the units as well, since their general expression for the momentum-space propagator is $$\tilde{\Delta}\left(p\right)=\frac{i}{\left(p^{0}\right)^{2}-E_{\boldsymbol{p}}^{2}+i\epsilon}$$ I'm guessing I'm missing something simple (since pretty well all the exercises in the book aren't too complex once you understand the principles), but I just can't see it.
Institute of Mathematical Statistics Lecture Notes - Monograph Series Multivariate sequential analysis with linear boundaries Abstract Let $\{S_n=(X_n,W_n)\}_{n\ge0}$ be a random walk with $X_n\in\mathbb{R}$ and $W_n\in\mathbb{R}^m$. Let $\tau=\tau_a=\inf\{n:X_n>a\}$. The main results presented are two-term asymptotic expansions for the joint distribution of $S_\tau$ and $\tau$ and the marginal distribution of $h(S_\tau/a,\tau/a)$ in the limit $a\to\infty$. These results are used to study the distribution of $t$-statistics in sequential experiments with sample size $\tau$, and to remove bias from confidence intervals based on Anscombe's theorem. Chapter information Source Recent Developments in Nonparametric Inference and Probability: Festschrift for Michael Woodroofe (Beachwood, Ohio, USA: Institute of Mathematical Statistics, 2006) Dates First available in Project Euclid: 28 November 2007 Permanent link to this document https://projecteuclid.org/euclid.lnms/1196284053 Digital Object Identifier doi:10.1214/074921706000000608 Mathematical Reviews number (MathSciNet) MR2409064 Zentralblatt MATH identifier 1268.62094 Rights Copyright © 2006, Institute of Mathematical Statistics Citation Keener, Robert. Multivariate sequential analysis with linear boundaries. Recent Developments in Nonparametric Inference and Probability, 58--79, Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2006. doi:10.1214/074921706000000608. https://projecteuclid.org/euclid.lnms/1196284053
Math.SE report 2015-06 [ This page originally held the report for April 2015, which has moved. It now contains the report for June 2015. ] Is “smarter than” a transitive relationship? concerns a hypothetical "is smarter than" relation with the following paradoxical-seeming property: most X's are smarter than most Y's, but most Y's are such that it is not the case that most X's are smarter than it. That is, if !!\mathsf Mx.\Phi(x)!! means that most !!x!! have property !!\Phi!!, then we want both $$\mathsf Mx.\mathsf My.S(x, y)$$ and also $$\mathsf My.\mathsf Mx.\lnot S(x, y).$$ “Most” is a little funny here: what does it mean? But we can pin it down by supposing that there are an infinite number of !!x!!es and !!y!!s, and agreeing that most !!x!! have property !!P!! if there are only a finite number of exceptions. For example, everyone should agree that most positive integers are larger than 7 and that most prime numbers are odd. The jargon word here is that we are saying that a subset contains “most of” the elements of a larger set if it is cofinite. There is a model of this property, and OP reports that they asked the prof if this was because the "smarter than" relation !!S(x,y)!! could be antitransitive, so that one might have !!S(x,y), S(y,z)!! but also !!S(z,x)!!. The prof said no, it's not because of that, but the OP wanted to argue that it's that anyway. But no, it's not because of that; there is a model that uses a perfectly simple transitive relation, and the nontransitive thing is nothing but a distraction. (The model maps the !!x!!es and !!y!!s onto numbers, and says !!x!! is smarter than !!y!! if its number is bigger.) Despite this, OP couldn't give up the idea that the model exists because of intransitive relations. It's funny how sometimes people get stuck on one idea and can't let go of it. How to generate a random number between 1 and 10 with a six-sided die? was a lot of fun and attracted several very good answers.
Top-scoring is Jack D'Aurizio's, which proposes a completely straightforward method: roll once to generate a bit that selects !!N=0!! or !!N=5!!, and then roll again until you get !!M\ne 6!!, and the result is !!N+M!!. But several other answers were suggested, including two by me, one explaining the general technique of arithmetic coding, which I'll probably refer back to in the future when people ask similar questions. Don't miss NovaDenizen's clever simplification of arithmetic coding, which I want to think about more, or D'Aurizio's suggestion that if you threw the die into a V-shaped trough, it would land with one edge pointing up and thus select a random number from 1 to 12 in a single throw. Interesting question: Is there an easy-to-remember mapping from edges to numbers from 1–12? Each edge is naturally identified by a pair of distinct integers from 1–6 that do not add to 7. The oddly-phrased Category theory with objects as logical expressions over !!{\vee,\wedge,\neg}!! and morphisms as? asks if there is a standard way to turn logical expressions into a category, which there is: you put an arrow from !!A\to B!! for each proof that !!A!! implies !!B!!; composition of arrows is concatenation of proofs, and identity arrows are empty proofs. The categorial product, coproduct, and exponential then correspond to !!\land, \lor,!! and !!\to!!. This got me thinking though. Proofs are properly not lists, they are trees, so it's not entirely clear what the concatenation operation is. For example, suppose proof !!X!! concludes !!A!! at its root and proof !!Y!! assumes !!A!! in more than one leaf. When you concatenate !!X!! and !!Y!! do you join all the !!A!!'s, or what? I really need to study this more. Maybe the Lambek and Scott book talks about it, or maybe the Goldblatt Topoi book, which I actually own. I somehow skipped most of the Cartesian closed category stuff, which is an oversight I ought to correct.
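That two-stage die method is easy to simulate; here's a quick sketch of mine, just to check uniformity empirically:

```python
import random

def roll():
    # One roll of a fair six-sided die.
    return random.randint(1, 6)

def uniform_1_to_10():
    # First roll selects the half: 1-3 -> N = 0, 4-6 -> N = 5.
    n = 0 if roll() <= 3 else 5
    # Re-roll until we get something other than 6; that's 1..5 uniformly.
    m = roll()
    while m == 6:
        m = roll()
    return n + m

random.seed(0)
counts = [0] * 11
for _ in range(100_000):
    counts[uniform_1_to_10()] += 1
# Each of 1..10 should appear about 10,000 times.
```

Each outcome is (half) × (non-six roll), i.e. probability $\tfrac12\cdot\tfrac15=\tfrac1{10}$, and the simulated counts come out flat.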
In Why is the Ramsey`s theorem a generalization of the Pigeonhole principle I gave what I thought was a terrific answer, showing how Ramsey's graph theorem and the pigeonhole principle are both special cases of Ramsey's hypergraph theorem. This might be my favorite answer of the month. It got several upvotes, but OP preferred a different answer, with fewer details. There was a thread a while back about theorems which are generalizations of other theorems in non-obvious ways. I pointed out the Yoneda lemma was a generalization of Cayley's theorem from group theory. I see that nobody mentioned the Ramsey hypergraph theorem being a generalization of the pigeonhole principle, but it's closed now, so it's too late to add it. In Why does the Deduction Theorem use Union? I explained that the English word and actually has multiple meanings. I know I've seen this discussed in elementary logic texts but I don't remember where. Finally, Which is the largest power of natural number that can be evaluated by computers? asks if it's possible for a computer to calculate !!7^{120000000000}!!. The answer is yes, but it's nontrivial and you need to use some tricks. You have to use the multiplying-by-squaring trick, and for the squarings you probably want to do the multiplication with DFT. OP was dissatisfied with the answer, and seemed to have some axe to grind, but I couldn't figure out what it was. [Other articles in category /math/se] permanent link
We know that $2^4 = 4^2$ and $(-2)^{-4} = (-4)^{-2}$. Is there another pair of integers $x, y$ ($x\neq y$) which satisfies the equality $x^y = y^x$? This is a classic (and well-known) problem. The general solution of $x^y = y^x$ is given by $$\begin{align*}x &= (1+1/u)^u \\ y &= (1+1/u)^{u+1}\end{align*}$$ It can be shown that if $x$ and $y$ are rational, then $u$ must be an integer. For every integer $n$, $x = y = n$ is a solution. So assume $x \neq y$. Suppose $n^m = m^n$. Then $n^{1/n} = m^{1/m}$. Now the function $x \mapsto x^{1/x}$ reaches its maximum at $e$, and is otherwise monotone. Thus (assuming $n < m$) we must have $n < e$, i.e. $n = 1$ or $n = 2$. If $n = 1$ then $n^m = 1$ and so $m = 1$, so it's a trivial solution. If $n = 2$ then $n^m$ is a power of $2$, and so (since $m > 0$) $m$ must also be a power of $2$, say $m = 2^k$. Then $n^m = 2^{2^k}$ and $m^n = 2^{2k}$, so that $2^k = 2k$ or $2^{k-1} = k$. Now $2^{3-1} > 3$, and so an easy induction shows that $k \leq 2$. If $k = 1$ then $n = m$, and $k = 2$ corresponds to $2^4 = 4^2$. EDIT: Up till now we considered $n,m>0$. We now go over all other cases. The solution $n = m = 0$ is trivial (whatever value we give to $0^0$). If $n=0$ and $m \neq 0$ then $n^m = 0$ whereas $m^n = 1$, so this is not a solution. If $n > 0$ and $m < 0$ then $0 < n^m \leq 1$ whereas $|m^n| \geq 1$. Hence necessarily $n^m = 1$ so that $n = 1$. It follows that $m^1 = 1^m = 1$. In particular, there's no solution with opposite signs. If $n,m < 0$ then $(-1)^m (-n)^m = n^m = m^n = (-1)^n (-m)^n$, so that $n,m$ must have the same parity. Taking inverses, we get $(-n)^{-m} = (-m)^{-n}$, so that $-n,-m$ is a solution for positive integers. The only non-trivial positive solution $2,4$ yields the only non-trivial negative solution $-2,-4$. Say $x^y = y^x$, and $x > y > 0$. Taking logs, $y \log x = x \log y$; rearranging, $(\log x)/x = (\log y)/y$. Let $f(x) = (\log x)/x$; then this is $f(x) = f(y)$.
Now, $f^\prime(x) = (1-\log x)/x^2$, so $f$ is increasing for $x<e$ and decreasing for $x>e$. So if $x^y = y^x$ has a solution, then $x > e > y$. So $y$ must be $1$ or $2$. But $y = 1$ doesn't work. $y=2$ gives $x=4$. (I've always thought of this as the ``standard'' solution to this problem and I'm a bit surprised nobody has posted it yet.) If $x > 0 > y$, then $0 < x^y < 1$ and $y^x$ is an integer, so there are no such solutions. If $0 > x > y$, then $x^y = y^x$ implies $x$ and $y$ must have the same parity. Also, taking reciprocals, $x^{-y} = y^{-x}$. Then $(-x)^{-y} = (-y)^{-x}$ since $x$ and $y$ have the same parity. (The number of factors of $-1$ we introduced to each side differs by $x-y$, which is even.) So solving the problem where $x$ and $y$ are negative reduces to solving it when $x$ and $y$ are positive. Although this thing has already been answered, here is a shorter proof. Because $x^y = y^x$ is symmetric, we first demand that $x>y$. Then we proceed simply this way: $ x^y = y^x $ $ x = y^{\frac x y } $ $ \frac x y = y^{\frac x y -1} $ $ \frac x y -1 = y^{\frac x y -1} - 1 $ Now we expand the rhs into its well-known exponential series: $ \frac x y -1 = \ln(y)*(\frac x y -1) + \frac {(\ln(y)*(\frac x y -1))^2}{2!} + ... $ Here, by the assumption $x>y$, the lhs is positive, so if $ \ln(y) \ge 1 $ we would have lhs $\lt$ rhs. Thus $ \ln(y) $ must be smaller than $1$, and the only integer $y>1$ whose log is smaller than $1$ is $y=2$, so the only possibility is $y = 2$ and we are done. [update] Well, after having determined $y=2$, the same procedure can be used to show that, after manually checking $x=3$ (impossible) and $x=4$ (possible), no $x>4$ can be chosen.
We ask for $x=4^{1+\delta}$, $\delta > 0$, inserting the value $2$ for $y$: $ 4^{(1+\delta)\cdot 2}=2^{4^{(1+\delta)}} $ Take the log to base 2: $ (1+\delta)\cdot 4=4^{(1+\delta)} $ $ \delta =4^{\delta} - 1 $ $ \delta = \ln(4)\cdot\delta + \frac { (\ln(4)\cdot\delta)^2 }{2!} + \ldots $ $ 0 = (\ln(4)-1)\cdot\delta + \frac { (\ln(4)\cdot\delta)^2 }{2!} + \ldots $ Because $ \ln(4)-1 >0 $, this can only be satisfied if $ \delta =0 $. So indeed the only solution, assuming $x>y$, is $ (x,y) = (4,2)$. [end update] I've collected some references; feel free to add more of them. (Some of them are taken from other answers. And, of course, some of them can contain further interesting references.) Online: On Torsten Sillke's page: http://www.mathematik.uni-bielefeld.de/~sillke/PUZZLES/x%5Ey-x%5Ey (Wayback Machine) Wikipedia: Equation $x^y=y^x$ Papers: Michael A. Bennett and Bruce Reznick: Positive Rational Solutions to $x^y = y^{mx}$: A Number-Theoretic Excursion, The American Mathematical Monthly, Vol. 111, No. 1 (Jan., 2004), pp. 13-21; available at jstor, arxiv or at the author's homepage. Marta Sved: On the Rational Solutions of $x^y = y^x$, Mathematics Magazine, Vol. 63, No. 1 (Feb., 1990), pp. 30-33; available at jstor. It is mentioned here that this problem appeared in the 1960 Putnam Competition (for integers). F. Gerrish: 76.25 $a^{b}=b^{a}$: The Positive Integer Solution, The Mathematical Gazette, Vol. 76, No. 477 (Nov., 1992), p. 403. Jstor link. Solomon Hurwitz: On the Rational Solutions of $m^n=n^m$ with $m\ne n$, The American Mathematical Monthly, Vol. 74, No. 3 (Mar., 1967), pp. 298-300. jstor. Joel Anderson: Iterated Exponentials, The American Mathematical Monthly, Vol. 111, No. 8 (Oct., 2004), pp. 668-679. jstor. Books: Titu Andreescu, Dorin Andrica, Ion Cucurezeanu: An Introduction to Diophantine Equations: A Problem-Based Approach, Springer, New York, 2010. Page 209. Searches: The reason I've added this is that it can be somewhat tricky to search for a formula or an equation.
So any idea which could help in finding further references may be of interest. Well, I finally found an answer relating to some number theory, I suppose! Assume that $x={p_1}^{\alpha _ 1}.{p_2}^{\alpha _ 2}...{p_k}^{\alpha _ k}$. It is clear that $y$'s prime factors are the same as $x$'s, but with different powers, i.e.: $y={p_1}^{\beta _ 1}.{p_2}^{\beta _ 2}...{p_k}^{\beta _ k}$ Substituting into the first equation we get: ${({p_1}^{\alpha _ 1}.{p_2}^{\alpha _ 2}...{p_k}^{\alpha _ k})}^y={({p_1}^{\beta _ 1}.{p_2}^{\beta _ 2}...{p_k}^{\beta _ k})}^x$ i.e.: ${p_1}^{{\alpha _ 1}y}.{p_2}^{{\alpha _ 2}y}...{p_k}^{{\alpha _ k}y}={p_1}^{{\beta _ 1}x}.{p_2}^{{\beta _ 2}x}...{p_k}^{{\beta _ k}x}$ Since the powers ought to be equal, we know that for each $1\le i \le k$ we have ${\alpha_i}y={\beta_i}x$, i.e. ${\alpha_i}/{\beta_i}=x/y$. Considering that the equation is symmetric, we can assume that $x \ge y$; but then we have ${\alpha_i}/{\beta_i} = x/y \ge 1$, hence ${\alpha_i} \ge {\beta_i}$. Assume this obvious, easy-to-prove theorem: Theorem #1. Consider $x,y \in \mathbb{N}$ such that $x={p_1}^{\alpha _ 1}.{p_2}^{\alpha _ 2}...{p_k}^{\alpha _ k}$ and $y={p_1}^{\beta _ 1}.{p_2}^{\beta _ 2}...{p_k}^{\beta _ k}$. Then $y\mid x$ if and only if ${\beta_i}\le{\alpha_i}$ for each $1\le i \le k$. Using Theorem #1 we get that $y|x$, i.e. $x=yt$. Substituting into the main equation we get: $x^y=y^x \to ({yt})^y=y^{({yt})} \to yt=y^t$ Now we must find the answers to the equation $yt=y^t$. For $t=1$ it is obvious that for every $y \in \mathbb{N}$ the equation is valid, so one answer is $x=y$. Yet again, for $t=2$ we must have $2y=y^2$, i.e. $y=2$, and we can conclude that $x=4$ (using the equation $x=yt$); so another answer is $x=4 \land y=2$ (or vice versa). We show that for $t\ge3$ the equation is not valid anymore. If $t\ge3$ then $y\gt2$ (for $y=1$ the equation forces $t=1$, and $y=2$ would force $2t=2^t$, which fails for every $t\ge3$); we prove that under these conditions the inequality $y^t \gt yt$ holds.
$y^t={(y-1+1)}^t={(y-1)}^t+...+\binom{t}{2} {(y-1)}^2 + \binom{t}{1}(y-1) +1 \gt \binom{t}{2} {(y-1)}^2 + t(y-1) +1$ But we have $y-1\gt1$, so: $y^t \gt \binom{t}{2} {(y-1)}^2 + t(y-1) +1 \gt \binom{t}{2} + t(y-1) +1 = \frac {t(t-1)}{2} -t +1 +yt= \frac {(t-2)(t-1)}{2} + yt \gt yt,$ where the last step uses $t\ge3$. So it is proved that the equation does not hold for $t\ge3$. $\bullet$ P.S.: The equation is solved here for positive integers, yet the solution for all the integers is quite the same! (It took me an hour to write this all; hope you like my solution.)
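For what it's worth, the case analysis above is easy to sanity-check by machine (a check, not a proof; the search range $[-6,6]$ is an arbitrary choice of mine):

```python
from fractions import Fraction
from math import log, isclose

# Exhaustive exact search for integer solutions of x^y = y^x with x != y,
# over small nonzero integers (zero is excluded to avoid 0**negative).
found = set()
candidates = [n for n in range(-6, 7) if n != 0]
for x in candidates:
    for y in candidates:
        if x < y and Fraction(x) ** y == Fraction(y) ** x:
            found.add((x, y))

# The parametric family x = (1+1/u)^u, y = (1+1/u)^(u+1) from above:
# x^y = y^x is checked in the equivalent form y*ln(x) = x*ln(y).
def pair(u):
    base = 1 + 1 / u
    return base ** u, base ** (u + 1)

family_ok = all(isclose(y * log(x), x * log(y)) for x, y in map(pair, range(1, 8)))
```

Exact `Fraction` arithmetic avoids any floating-point doubt in the integer search; only the parametric family is tested in floats.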
I know that this seems to be a pretty easy question but for some reason I can’t find any explanation for the following equation in any high school text book or on the internet. So according to my answers sheet this equation is valid for the state of equilibrium of an object with the mass $m$ hanging on a metal spring. $$ m\cdot g = D \cdot \hat y \implies \dfrac D m = \dfrac {g}{\hat y} \tag{II} $$ The given formula is to be used in order to find the spring constant. With $mg$ standing for the gravitational force, $D$ for the spring constant and $\hat y$ for the maximum displacement. But shouldn’t the displacement be zero at the state of equilibrium? Is this just an assumption or can you mathematically derive this equation? I would be grateful, if someone could give me a plausible explanation for this simple equation.
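(Not an official answer, but to make the bookkeeping concrete: $\hat y$ in (II) is the elongation at equilibrium measured from the spring's *unloaded* length, so it is not zero; the displacement that is zero at equilibrium is measured from the new rest position. A tiny computation with made-up numbers, where $m$ and $\hat y$ are assumed values, not given data:)

```python
# Hooke's law balanced against gravity at the static rest position: m*g = D*y_hat.
# The numbers below are illustrative assumptions, not data from the problem.
g = 9.81           # m/s^2
m = 0.100          # kg (assumed hanging mass)
y_hat = 0.049      # m  (assumed elongation, measured from the spring's
                   #     natural length, NOT from the new equilibrium point)
D = m * g / y_hat  # spring constant in N/m, here about 20 N/m
```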
Some remarks on boundary operators of Bessel extensions

1. Department of Statistics, University of Auckland, Private Bag 92019, Victoria Street West, Auckland 1142, New Zealand
2. Department of Applied Mathematics, National Chiao Tung University, 1001 Ta Hsueh Road, Hsinchu, Taiwan
3. National Center for Theoretical Sciences, National Taiwan University, No. 1 Sec. 4 Roosevelt Rd, Taipei, 106, Taiwan

$\begin{align*}Δ_x u(x, y) +\frac{1-2s}{y} \frac{\partial u}{\partial y}(x, y)+\frac{\partial^2 u}{\partial y^2}(x, y)&=0 &&\text{for }x∈\mathbb{R}^d, y>0, \\ u(x, 0)&=f(x) &&\text{for }x∈\mathbb{R}^d.\end{align*}$

$s=k ∈ \mathbb{N}$

Keywords: Boundary operator, Littlewood-Paley extension, Bessel functions, functional calculus, Laplacian.

Mathematics Subject Classification: Primary: 35J70; Secondary: 47D03, 33C10.

Citation: Jesse Goodman, Daniel Spector. Some remarks on boundary operators of Bessel extensions. Discrete & Continuous Dynamical Systems - S, 2018, 11 (3) : 493-509. doi: 10.3934/dcdss.2018027
I tried to approach it from the Leibniz formula for determinants $$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma_i}.$$ There are $n!$ terms in this sum. Alice will have $\frac{n^2+1}{2}$ moves whereas Bob has $\frac{n^2-1}{2}$ moves. There are $n^2$ variables (matrix entries). Each of them, taken alone, appears in $(n-1)!$ terms of this summation. Whenever Bob puts a zero in any entry of the matrix in his first move, $(n-1)!$ of these terms go to zero. For instance, consider a $5 \times 5$ matrix, so there are 120 terms. In his first move, whenever Bob makes any matrix entry zero, he zeros out 24 of these terms. In his second move, he has to pick the matrix entry which appears least often in the 24 terms already zeroed out. There can be multiple such matrix entries. In fact, it can be seen that there is surely another matrix entry appearing in 24 still non-zero terms of the above sum. Since $n$ is odd in this case, the last move will always be Alice's. Because of that, one doesn't have to bother about these terms summing to zero. What Bob has to do if he wants to win is to make sure he touches (in effect, zeroes) each of these 120 terms at least once. In the $n=5$ case, he has 12 moves. In these 12 moves he has to make sure that he zeros out all 120 terms. In one sense, it means that he has to kill an average of at least 10 terms per move. I looked at the $n=3$ case: Bob has 4 moves there and there are 6 terms; he can zero out all of them in 3 moves. He has to make sure that Alice doesn't get hold of all the matrix entries in any single one of the 120 terms, because then it will be non-zero, and since the last move is hers, Bob won't be able to zero it out, so she will win. As per the above explanation, in the $5 \times 5$ case he just has to average killing 10 terms per move, which seems quite easy to do. I feel this method is a bit easy to generalize and many really clever people in here can do it.
EDIT---------------- In response to @Ross Millikan, I tried to look at solving the $5 \times 5$ case; this is the approach. Consider a $5 \times 5$ matrix with its entries filled in by the English alphabet row-wise, so that the matrix of interest is \begin{align}\begin{bmatrix}a & b & c & d& e \\ f& g & h &i& j \\k& l& m& n& o \\ p& q& r& s& t\\ u& v& w& x& y \end{bmatrix}\end{align} Without loss of generality (WLOG), let Alice pick up $a$ (making any entry zero is advantageous for her). Let's say Bob picks up $b$ (again WLOG, picking up any entry is the same). This helps Bob zero out 24 of the total 120 terms. Alice has to pick an entry in this first row itself, otherwise she will be at a disadvantage (since then Bob gets to pick 3 entries in total from the first row and gets 72 terms zeroed out). So concerning the first row: Alice picks 3 of its entries, Bob picks 2 of them (say $b$ and $d$), and hence he zeros out 48 of the total 120 terms. Now note that the next move is Bob's. Let us swap the second column and first column. This doesn't change the determinant other than its sign. Look at the modified matrix \begin{align}\begin{bmatrix}0 & \otimes & \otimes & 0 & \otimes \\ g & f & h &i& j \\l& k& m& n& o \\ q& p& r& s& t\\ v& u& w& x& y \end{bmatrix}\end{align} where $0$ is put in entries Bob has modified and $\otimes$ has been put in entries modified by Alice. Now, in the first column, let's say Bob gets hold of $g$ and $q$, and Alice gets hold of $l$ and $v$. Again, Alice has to do this; any other move will put her at a disadvantage. Bob has made 4 moves already, the next move is his, and now the matrix will look like \begin{align}\begin{bmatrix}0 & \otimes & \otimes & 0 & \otimes \\ 0 & f & h &i& j \\ \otimes & k & m& n& o \\ 0 & p& r& s& t\\ \otimes & u& w& x& y \end{bmatrix}\end{align} Now we are left with the lower-right $4 \times 4$ matrix, Bob is left with 8 moves, and the first move is his. Comparing this with the $4 \times 4$ case, it looks intuitively like Bob should win.
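The counting claims used throughout (the Leibniz sum has $n!$ terms, and a fixed entry appears in exactly $(n-1)!$ of them) can be checked directly:

```python
from itertools import permutations

n = 5
perms = list(permutations(range(n)))   # one permutation per term of the Leibniz sum
total = len(perms)                     # n! = 120 terms for n = 5

# A term contains the entry (i, j) exactly when sigma(i) = j, so the
# number of terms containing a fixed entry should be (n-1)! = 24.
i, j = 0, 0
containing = sum(1 for sigma in perms if sigma[i] == j)
```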
Abel problem To find, in a vertical plane $(s,\tau)$, a curve such that a material point moving along it under gravity from rest, starting from a point with ordinate $x$, will meet the $\tau$-axis after a time $T=f(x)$, where the function $f(x)$ is given in advance. The problem was posed by N.H. Abel in 1823, and its solution involves one of the first integral equations — the Abel integral equation — which was also solved. In fact, if $\omega$ is the angle formed by the tangent of the curve being sought with the $\tau$-axis, then $$\frac{ds}{d\tau}=-\sqrt{2g(x-s)}\sin\omega.$$ Integrating this equation between $0$ and $x$ and putting $$\frac1{\sin\omega}=\phi(s),$$ one obtains (after absorbing the constant factor $\sqrt{2g}$ into the given function $f$) the integral equation $$\int\limits_0^x\frac{\phi(s)ds}{\sqrt{x-s}}=f(x)$$ for the unknown function $\phi(s)$, the determination of which makes it possible to find the equation of the curve being sought. The solution of the equation introduced above is: $$\phi(x)=\frac1\pi\left[\frac{f(0)}{\sqrt x}+\int\limits_0^x\frac{f'(\tau)d\tau}{\sqrt{x-\tau}}\right].$$ References [1] N.H. Abel, "Solutions de quelques problèmes à l'aide d'intégrales définies", Oeuvres complètes, nouvelle éd., 1, Grondahl & Son, Christiania (1881) pp. 11–27 (Edition de Holmboe) Comments In the case that $f(x)=\mathrm{const}$, this is the famous tautochrone problem first solved by Chr. Huyghens, who showed that this curve is then a cycloid. References [a1] A.J. Jerri, "Introduction to integral equations with applications", M. Dekker (1985) pp. Sect. 2.3 [a2] H. Hochstadt, "Integral equations", Wiley (1973) [a3] B.L. Moiseiwitsch, "Integral equations", Longman (1977) How to Cite This Entry: Abel problem. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Abel_problem&oldid=43529
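As an illustration of the inversion formula (my own example, not part of the original entry): for $f(x)=\sqrt x$ the formula gives the constant $\phi\equiv\tfrac12$, since $\int_0^x\frac{d\tau}{\sqrt{\tau(x-\tau)}}=\pi$, and a crude midpoint quadrature confirms the forward direction numerically:

```python
from math import sqrt

# For f(x) = sqrt(x), the solution formula yields phi = 1/2 identically.
# Forward check: the Abel integral of phi = 1/2 should reproduce sqrt(x).
# Midpoint rule: the 1/sqrt(x - s) singularity at s = x is integrable and
# is never sampled exactly, since nodes sit at cell midpoints.
def abel_forward(phi, x, n=200_000):
    h = x / n
    return sum(phi * h / sqrt(x - (k + 0.5) * h) for k in range(n))

x = 1.0
approx = abel_forward(0.5, x)   # should be close to f(1) = sqrt(1) = 1
```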
My question is as follows. Do the Chern classes as defined by Grothendieck for smooth projective varieties coincide with the Chern classes as defined with the aid of invariant polynomials and connections on complex vector bundles (when the ground field is $\mathbf{C}$)? I suppose GAGA is involved here. Could anybody give me a reference where this is shown as detailed as possible? Or is the above not true? Some background on my question: Let $X$ be a smooth projective variety over an algebraically closed field $k$. For any integer $r$, let $A^r X$ be the group of cycles of codimension $r$ rationally equivalent to zero. Let $AX=\bigoplus A^r X$ be the Chow ring. Grothendieck proved the following theorem on Chern classes. There is a unique "theory of Chern classes", which assigns to each locally free coherent sheaf $\mathcal{E}$ on $X$ an $i$-th Chern class $c_i(\mathcal{E})\in A^i(X)$ and satisfies the following properties: C0. It holds that $c_0(\mathcal{E}) = 1$. C1. For an invertible sheaf $\mathcal{O}_X(D)$ on $X$, we have that $c_1(\mathcal{O}_X(D)) = [D]$ in $A^1(X)$. C2. For a morphism of smooth quasi-projective varieties $f:X\longrightarrow Y$ and any positive integer $i$, we have that $f^\ast(c_i(\mathcal{E})) =c_i(f^\ast(\mathcal{E}))$. C3. If $$0\longrightarrow \mathcal{E}^\prime \longrightarrow \mathcal{E} \longrightarrow \mathcal{E}^{\prime\prime} \longrightarrow 0$$ is an exact sequence of vector bundles on $X$, then $c_t(\mathcal{E}) = c_t(\mathcal{E}^\prime)c_t(\mathcal{E}^{\prime\prime})$ in $A(X)[t]$. So that's how it works in algebraic geometry. Now let me sketch the complex analytic case. Let $E\longrightarrow X$ be a complex vector bundle. We are going to associate certain cohomology classes in $H^{even}(X)$ to $E$. The outline of this construction is as follows. Step 1. We choose a connection $\nabla^E$ on $E$; Step 2. We construct closed even graded differential forms with the aid of $\nabla^E$; Step 3. 
We show that the cohomology classes of these differential forms are independent of $\nabla^E$. Let us sketch this construction. Let $k= \textrm{rank}(E)$. Let us fix an invariant polynomial $P$ on $\mathfrak{gl}_k(\mathbf{C})$, i.e. $P$ is invariant under conjugation by $\textrm{GL}_k(\mathbf{C})$. Let us fix a connection $\nabla^E$ on $E$. We denote its curvature by $R^E = (\nabla^E)^2$. One shows that $$R^E \in \mathcal{C}^\infty(X,\Lambda^2(T^\ast X)\otimes \textrm{End}(E)).$$ That is, $R^E$ is a $2$-form on $X$ with values in $\textrm{End}(E)$. Define $$P(E,\nabla^E) = P(-R^E/{2i\pi}).$$ (This is well-defined.) The Chern-Weil theorem now says that: The even graded form $P(E,\nabla^E)$ is a smooth complex differential form which is closed. The cohomology class of $P(E,\nabla^E)$ is independent of the chosen connection $\nabla^E$ on $E$. Choosing $P$ suitably, we get the Chern classes of $E$ (by definition). These are cohomology classes. In order for one to show the equivalence of these "theories" one is forced to take the leap from the Chow ring to the cohomology ring. How does one choose $P$? You just take $P(B) = \det(1+B)$ for a matrix $B$. Motivation: If one shows the equivalence of these two theories one gets "two ways" of "computing" the Chern character.
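Concretely, with $P(B)=\det(1+B)$ and the convention $P(E,\nabla^E)=P(-R^E/2i\pi)$ used above, the total Chern form expands as follows (standard Chern–Weil formulas; other sources differ by factors of $\pm i/2\pi$ in their conventions):

```latex
\begin{aligned}
\det\!\Bigl(I - \tfrac{R^E}{2i\pi}\Bigr)
  &= 1 + c_1(E,\nabla^E) + c_2(E,\nabla^E) + \cdots,\\
c_1(E,\nabla^E) &= -\tfrac{1}{2i\pi}\,\operatorname{tr} R^E,\\
c_2(E,\nabla^E) &= \tfrac{1}{8\pi^2}\Bigl[\operatorname{tr}\bigl(R^E\wedge R^E\bigr)
  - \bigl(\operatorname{tr} R^E\bigr)\wedge\bigl(\operatorname{tr} R^E\bigr)\Bigr].
\end{aligned}
```

Here $c_j(E,\nabla^E)$ is the degree-$2j$ piece, obtained from the $j$-th elementary symmetric function of the eigenvalues of $-R^E/2i\pi$.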
In the theory of electromagnetism (EM), why is the principle of superposition true? Can we read it off from Maxwell's equations directly? Does it have any limit of applicability, or is it a fundamental property of nature? The principle of superposition comes from the fact that the equations you solve most of the time are made of linear operators (just like the derivative). So as long as you are using these operators you can write something like $$ \mathcal L\cdot \psi = 0$$ where $\mathcal L$ is a linear operator and, let's say, $\psi$ is a function that depends on the coordinates that $\mathcal L$ is acting on. The superposition principle is the same as saying that $$\mathcal L \cdot \left(\sum_i \psi_i \right) = \mathcal L\cdot\psi_1 + \mathcal L\cdot\psi_2 + ... = 0$$ holds. An example where this fails would be a nonlinear operator $\mathcal N$, for which $$\mathcal N \cdot\left(\sum_i \psi_i\right) \neq \mathcal N\cdot\psi_1 + \mathcal N\cdot\psi_2 + ...$$ in general (note that squaring a linear operator does not break linearity: for $\mathcal L=\nabla$, the operator $\mathcal L^2$ is the Laplacian in Euclidean space, and equality does hold for it). So then, the question is whether Maxwell's equations are linear. And they are, because they are made up of this kind of operator. For instance, Gauss's law for two different electric fields can be written as one: $$ \nabla \cdot \vec{E}_1 = \rho_1/\varepsilon \quad; \quad \nabla \cdot \vec{E}_2 = \rho_2/\varepsilon $$ $$ \nabla \cdot \overbrace{\left(\vec{E}_1 + \vec{E}_2\right)}^{\vec{E}} = \frac{\overbrace{\rho_1 + \rho_2}^{\rho_T}}{\varepsilon} \Rightarrow \boxed{\nabla \cdot\vec E = \frac{\rho_T}{\varepsilon}} $$ just because $\nabla$ is a linear operator. It is true up to very high field strengths. For too-high strengths the field itself is not stable: it can create real pairs. It is like a limit on the field strength in a capacitor; the capacitor dielectric can break. EDIT: The classical Maxwell equations are indeed linear, so the principle of superposition is built into them. But dielectric breakdown can be introduced too, as a resistance depending on the field strength.
Thus one can make the Maxwell equations non-linear starting from some threshold strength. In fact, dielectric breakdown, or capacitor discharge due to cold electron emission (a classical nonlinearity), occurs "well before" the creation of electron-positron pairs in vacuum. Within the realm of Maxwell's equations, the principle of superposition is exactly true because Maxwell's equations are linear in both the sources and the fields. So if you have two solutions to Maxwell's equations for two different sets of sources, then the sum of those two solutions will be a solution to the case where you add together the two sets of sources. This will only break down when Maxwell's equations break down, for example, when the field strengths are so high that pair production becomes significant. In that case the quantum field theory of Quantum Electrodynamics (QED) would be required. Now, quantum theories are also linear, at least as far as the quantum wave function is concerned; however, the probabilities that quantum theories predict depend on the magnitude of the wave function, so in that sense they are nonlinear, and therefore superposition will not apply to the results. While the first part of the question has been answered satisfactorily, everybody seems to confuse the unconditional linearity of the Maxwell equations with the often observed linearity of the constitutive relations for the material law. The field of nonlinear optics is concerned with the behavior of light in nonlinear media, where the constitutive relations are no longer linear. However, the superposition principle is already violated if even a single electron gets accelerated by the field. So nonlinear media are nothing exotic, even if most media are well described by linear constitutive relations for small field strengths. The superposition principle is a troublemaker. The problem also comes from the definition of the field. In the beginning, the electric field is defined with a test charge.
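To make the operator-linearity argument concrete, here is a toy finite-difference check (my own illustration, with arbitrary smooth fields): a discrete divergence applied to the sum of two fields equals the sum of the divergences, because differencing is linear.

```python
import random

h = 1e-3  # finite-difference step

def div(E, p):
    # Central-difference divergence of a vector field E: R^3 -> R^3 at point p.
    s = 0.0
    for k in range(3):
        up, dn = list(p), list(p)
        up[k] += h
        dn[k] -= h
        s += (E(up)[k] - E(dn)[k]) / (2 * h)
    return s

# Two arbitrary (made-up) smooth fields and their pointwise sum.
E1 = lambda p: [p[0] ** 2, p[1] * p[2], -p[2]]
E2 = lambda p: [p[1], -p[0] * p[2], p[0] + p[2] ** 2]
Esum = lambda p: [a + b for a, b in zip(E1(p), E2(p))]

random.seed(0)
p = [random.uniform(-1, 1) for _ in range(3)]
lhs = div(Esum, p)           # div(E1 + E2)
rhs = div(E1, p) + div(E2, p)  # div(E1) + div(E2)
```

The two sides agree to floating-point rounding; no discretization error enters the comparison, since linearity holds exactly for the difference quotients themselves.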
If the test charge exists and there are two fields, we can add them up. We say this is the superposition principle. Up to this point everything is OK. But the problem is that we later extended the concept. We claim that even when the test charge is not there, the field still exists, the same as if the test charge were present. Who knows whether the electric field is there or not once the test charge is removed? There are two kinds of theories, which debated this problem for 100 years. On one side are Faraday and Maxwell: they claim the field is still there even when the test charge is removed; the field is a real substance. The other side is the theory of action-at-a-distance — Weber's electromagnetic theory, the Schwarzschild-Tetrode-Fokker action principle, the absorber theory of Wheeler and Feynman. These theories claim that the field is not real; there is only the action between at least two charges! The latter has no such problem with the Maxwell equations and superposition. But the theory of action at a distance is too complicated compared to the field theory of the Maxwell equations. Action-at-a-distance could not win the war against Maxwell's theory. However, they are correct at least about superposition: only if a test charge is there can you superpose the two fields. Without a test charge the field is not defined, so how can you superpose two fields? Feynman noticed this problem, but he did not find a way to correct Maxwell's theory. Instead, he decided to just give up Maxwell's theory. He created QED. In QED the problem is partially solved by renormalization and second quantization. But the problem was still not thoroughly solved by him. This difficulty is now overcome by the "mutual energy principle" and "self-energy principle". The mutual energy flow is produced by the retarded wave and the advanced wave. The mutual energy principle tells us that the photon's energy is transferred only by the mutual energy flow. The self-energy principle tells us that the self-energy flow does not transfer any energy.
It is interesting that the new theory belongs to Maxwell's field theory. The mutual energy principle and self-energy principle claim the field is still a real substance. The new theory has absorbed all the advantages of action-at-a-distance and the absorber theory, and then updated Maxwell's field theory. Anyway, in the new theory of the mutual energy principle and self-energy principle, the superposition principle is avoided. The Maxwell equations need the superposition principle: without superposition, even if the Maxwell equations are correct for a single charge, it still cannot be proved that they are correct for N charges. The mutual energy principle does not need the superposition principle. Any particle, for example a photon or an electron, consists of 4 waves and 6 energy flows instead of one wave and one energy flow. Please see my publication for the details: http://www.openscienceonline.com/journal/archive2?journalId=726&paperId=4042
My variable is alpha, \( \alpha \), but I defined it as al_pha since Maxima doesn't like variables that are already defined, and many of those are Greek variables. I just broke it up phonetically. I set alpha to be a constant by al_pha: 30*%pi/180 (I think using just "pi" works too. It does, it just leaves it in symbolic form. See screen shots.) The colon ( : ) provides the capability to define the variable al_pha in Maxima. I also multiplied by pi and divided by 180 in order to convert al_pha from degrees (30) into radians, since I am going to be dealing with trigonometric functions. Next, I define my function lamb_da as lamb_da(x):= (csc(x))^2 Note that I used "x" even though I want the angle alpha. We will call lamb_da for al_pha later. So the colon ( : ) followed by the equal sign ( = ) — that is, := — provides a function definition in Maxima. Also note that in order to square cosecant in Maxima I had to wrap it in parentheses and then square. I also could have done csc(x)*csc(x), but who wants to do that for powers of 2 or higher?! Screen shots: Notice the difference: using %pi and pi gives 4 and \( \csc ^2 \left( \dfrac{\pi}{6}\right) \), respectively. I then add more to lamb_da once I know it is working properly. I can check it with KAlgebra. I found this very good pdf where I discovered how to define a function and parameter: resources.eun.org/xplora/Maxima_Xplora.pdf Also this can be found in the Maxima documentation here: 7.6 Assignment operators - http://maxima.sourceforge.net/docs/manual/en/maxima_7.html#SEC41
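As a cross-check outside Maxima (plain Python, not Maxima syntax), the same computation should give 4:

```python
import math

# Reproduce the wxMaxima result: with al_pha = 30 degrees in radians,
# csc(al_pha)^2 should be exactly 4, since sin(pi/6) = 1/2.
al_pha = 30 * math.pi / 180
lamb_da = lambda x: (1 / math.sin(x)) ** 2   # csc(x)^2; Python has no csc()
value = lamb_da(al_pha)                      # ~4.0 up to floating-point rounding
```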
Wed, 15 Nov 2017 [ Credit where it is due: This was entirely Darius Bacon's idea. ] In connection with “Recognizing when two arithmetic expressions are essentially the same”, I had several conversations with people about ways to normalize numeric expressions. In that article I observed that while everyone knows the usual associative law for addition $$ (a + b) + c = a + (b + c)$$ nobody ever seems to mention the corresponding law for subtraction: $$ (a+b)-c = a + (b-c).$$ And while everyone “knows” that subtraction is not associative because $$(a - b) - c ≠ a- (b-c)$$ nobody ever seems to observe that there is a corresponding law here too: $$(a - b) + c = a - (b - c).$$ This asymmetry is kind of a nuisance, and suggests that a more symmetric notation might be better. Darius Bacon suggested a simple change that I think is an improvement: write the negation of !!a!! as the postfix !!a\star!!, so that !!a-b!! becomes !!a+b\star!!. The !!\star!! operation obeys the following elegant and simple laws: $$\begin{align} a\star\star & = a \\ (a+b)\star & = a\star + b\star \end{align} $$ Once we adopt !!\star!!, we get a huge payoff: The negation of !!a+b\star!! is $$(a+b\star)\star = a\star + b{\star\star} = a\star +b.$$ We no longer have the annoying notational asymmetry between !!a-b!! and !!-b + a!! where the plus sign appears from nowhere. Instead, one is !!a+b\star!! and the other is !!b\star+a!!, which is obviously just the usual commutativity of addition. The !!\star!! is of course nothing but a synonym for multiplication by !!-1!!. But it is a much less clumsy synonym. !!a\star!! means !!a\cdot(-1)!!, but with less inkjunk. In conventional notation the parentheses in !!a(-b)!! are essential and if you lose them the whole thing is ruined. But because !!\star!! is just a special case of multiplication, it associates with multiplication and division, so we don't have to worry about parentheses in !!(a\star)b = a(b\star) = (ab)\star!!. They are all equal to just !!ab\star!!.
and you can drop the parentheses or include them or write the terms in any order, just as you like, just as you would with !!abc!!. The surprising associativity of subtraction is no longer surprising, because $$(a + b) - c = a + (b - c)$$ is now written as $$(a + b) + c\star = a + (b + c\star)$$ so it's just the usual associative law for addition; it is not even disguised. The same happens for the reverse associative laws for subtraction that nobody mentions; they become variations on $$ \begin{align} (a + b\star) + c\star & = a + (b\star + c\star) \\ & = a + (b+c)\star \end{align} $$ and such like. The !!\star!! is faster to read and faster to say. Instead of “minus one” or “negative one” or “times negative one”, you just say “star”. The !!\star!! is just a number, and it behaves like a number. Its role in an expression is the same as any other number's. It is just a special, one-off notation for a single, particularly important number. Open questions: Curious footnote: While I was writing up the draft of this article, it had a reminder in it: “How did you and Darius come up with this?” I went back to our email to look, and I discovered the answer was: I wish I could take more credit, but there it is. Hmm, maybe I will take credit for inspiring Darius! That should be worth at least fifty percent, perhaps more. [ This article had some perinatal problems. It escaped early from the laboratory, in a not-quite-finished state, so I apologize if you are seeing it twice. ]
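For fun, the laws translate directly into code (a throwaway model of mine; since most languages have no postfix operators, !!\star!! becomes a prefix function here):

```python
# Toy model of the postfix-star notation: star(a) is just -a, so the
# laws in the post become ordinary identities about negation.
def star(a):
    return -a

a, b, c = 7, 3, 11

law1 = star(star(a)) == a                   # a** = a
law2 = star(a + b) == star(a) + star(b)     # (a+b)* = a* + b*
# Subtraction rendered as addition of a starred term: (a-b)-c = a + b* + c*.
law3 = (a - b) - c == a + star(b) + star(c)
# The "surprising" associative law (a+b)-c = a+(b-c), now just associativity of +.
law4 = (a + b) - c == a + (b + star(c))
```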
so I found out that the SR-71 Blackbird is using what's called a "turboramjet", and I find the idea a bit appealing since they say that with such an engine the Blackbird is more fuel efficient at its top speed. I know that the mechanism for such an engine is extremely complicated, but what's on my mind is not making an engine like the Blackbird's; it's more about having a direct supersonic fresh airflow to the afterburner to keep the fuel consumption slightly lower, thanks to more oxygen keeping the afterburner burning effectively (because I heard somewhere that afterburners burn 3 times more fuel than dry thrust; I hope this can reduce it to only twice or even 1.5 times the original dry-thrust fuel consumption with the same performance). Is it possible to do such a thing? What would be the main problem with such an engine/design? Can it produce more thrust or be more efficient in fuel consumption? And for the sake of calculations, let's say the turbofan engine in question is the GE F-414-EPE, on afterburner, at supersonic speed. The J-58 took compressed air from the compressor at stage 4 and piped it directly into the flow aft of the turbine. This cooled the exhaust stream that entered the afterburner, so the starting temperature there was lower and the density higher, which increases efficiency and thrust. Note that the intake of the SR-71 slowed the flow down to Mach 0.4, so all internal flow was subsonic. Only when the hot exhaust gas in the afterburner expanded again would the flow speed increase to supersonic speeds again. This is only possible because the intake would already compress the air by a factor of nearly 40 when flying at Mach 3.2. This precompression scales with $$p_0 = p_{\infty}\cdot\frac{(1.2\cdot Ma^2)^{3.5}}{\left(1+\frac{7}{6}\cdot(Ma^2-1)\right)^{2.5}}$$ so the precompression is much lower (less than 6) for a typical F-414 maximum speed of Mach 1.8. ($Ma$ = Mach number, $p_0$ = ram pressure, $p_{\infty}$ = atmospheric pressure).
The gains in efficiency scale with the ratio of starting and exhaust temperatures (measured from absolute zero), so the increase in efficiency is much lower than what you seem to hope for. Make sure that flow speed at the entry of the afterburner is decidedly subsonic. Supersonic flow at the start of the afterburner section will only delay ignition until the flow exits through the nozzle. All afterburners have rings inside, called flameholders, which cause local separated flow, so some burning gas is always present to ignite the newly arriving fuel-air mixture.
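A quick calculation makes the scaling concrete. I am reading the formula above as the standard Rayleigh pitot (normal-shock) total-pressure ratio for $\gamma = 1.4$; note that a real multi-shock inlet like the SR-71's recovers considerably more total pressure than a single normal shock, which is why the Mach 3.2 value below comes out lower than the "nearly 40" quoted for the actual aircraft:

```python
# Normal-shock (Rayleigh pitot) total-pressure ratio for gamma = 1.4:
# p0/p_inf = (1.2*Ma^2)^3.5 / ((7*Ma^2 - 1)/6)^2.5
# This is my reading of the formula in the answer above, not an exact quote.
def ram_ratio(Ma):
    return (1.2 * Ma ** 2) ** 3.5 / ((7 * Ma ** 2 - 1) / 6) ** 2.5

r18 = ram_ratio(1.8)  # F-414-class top speed: under 5, "much lower"
r32 = ram_ratio(3.2)  # SR-71 cruise: behind a SINGLE normal shock only;
                      # the real multi-shock inlet recovers far more
```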
Partial Answer The trick that usually solves this type of challenging problem is finding an invariant. First, let us say $(i,j)$ represents the $j$th space in the $i$th row. We say the $i$th row is the one with $i-1$ spots. We should label each cell with a number. For any given position, we can sum the values of each circle. We want to create the labels in such a way that no move can cause the sum to increase. Let's say $$f(i, j)=x^{i}$$ for some constant $x$. Right now I'm only thinking of preserving equality on vertical movement. Horizontal movement will clearly only decrease the sum for any positive $x$. To preserve equality on vertical movement, we need to satisfy $x^2+x=1$, so we find $x=\frac{-1+\sqrt 5}{2}$. Let $A$ be the sum of $f(i,j)$ over all spaces $(i,j)$. Then we have \begin{align*}A&=1\left(\sum_{i=0}^\infty x^i\right)+x\left(\sum_{i=0}^\infty x^i\right)+x^2\left(\sum_{i=0}^\infty x^i\right)+\dots\\&=\left(\sum_{i=0}^\infty x^i\right)^2\\&=\frac{1}{(1-x)^2}\end{align*} Now, let's sum after the $n$th row. We'll call this $B_n$. We have \begin{align*}B_n&=nx^n\left(\sum_{i=0}^\infty x^i\right)+x^nA\\&=\frac{n(1-x)x^n+x^n}{(1-x)^2}\end{align*} We want to know the first time $B_n\le 1$, and we can calculate that this is $n=7$. So it is unsolvable if we remove $7$ or more rows. So the final answer is no more than $6$ rows. This is only an upper bound for the final answer because I haven't shown that it is possible if we remove $6$ rows.
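As a sanity check on that last claim, the closed form for $B_n$ can be evaluated numerically (a small Python sketch, not part of the original argument):

```python
from math import sqrt

# x is the positive root of x^2 + x = 1 (the reciprocal of the golden ratio)
x = (sqrt(5) - 1) / 2

def B(n: int) -> float:
    # B_n = (n*(1-x)*x^n + x^n) / (1-x)^2, as derived above
    return (n * (1 - x) * x**n + x**n) / (1 - x) ** 2

# B_n is decreasing in n; find the first n with B_n <= 1
first = next(n for n in range(1, 50) if B(n) <= 1)
print(first)  # -> 7
```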
Question: When $\pu{60 g}$ of $\ce{C60}$ is combusted in a bomb calorimeter that has a water jacket containing $\pu{300.0 g}$ of water, the temperature of the water increases by $\pu{10 ^\circ C}$. Assuming that the specific heat of water is $\pu{4.18 J g^{-1} K^{-1}}$, estimate $\Delta E$ of combustion per mole of $\ce{C60}$. Answer: $\pu{-150.5 kJ/mol}$ What I have tried:$$\begin{align}Q &= m \times c \times \Delta T \\Q &= 300 \times 4.18 \times 10\end{align}$$ That's the only equation I know and it doesn't seem to work for this problem.
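For what it's worth, the heat equation above does get the stated answer once it is combined with a mole conversion — the step that seems to be missing. A quick check (my own working, assuming the molar mass of $\ce{C60}$ is $60 \times 12 = \pu{720 g/mol}$ and ignoring the calorimeter's own heat capacity):

```python
# Heat absorbed by the water jacket: Q = m * c * dT
q = 300.0 * 4.18 * 10.0        # J
moles = 60.0 / 720.0           # mol of C60 burned (molar mass assumed 720 g/mol)
delta_e = -q / moles / 1000.0  # kJ/mol; negative because combustion releases heat
print(round(delta_e, 1))       # -> -150.5
```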
My textbook, Gettys's Physics (Italian language edition), says that the Lorenz gauge choice uses the magnetic vector potential $$\mathbf{A}(\mathbf{x},t):=\frac{\mu_0}{4\pi}\int \frac{\mathbf{J}(\mathbf{y},t-c^{-1}\|\mathbf{x}-\mathbf{y}\|)}{\|\mathbf{x}-\mathbf{y}\|}d^3y $$ which is said to be such that$$\nabla^2\mathbf{A}(\mathbf{x},t)-\varepsilon_0\mu_0\frac{\partial^2 \mathbf{A}(\mathbf{x},t)}{\partial t^2}=-\mu_0\mathbf{J}(\mathbf{x},t)$$How is this identity derived? If $\mathbf{J}$ does not depend on time and is assumed, as often done in physics, to be of class $C_c^2(\mathbb{R}^3)$, I know, as proved here, that $$\nabla^2\mathbf{A}(\mathbf{x})=-\mu_0\mathbf{J}(\mathbf{x}),$$ consistently with the above equation, but I am not able to derive the general case. I heartily thank any answerer. So the best way to view this is that you're going to derive $\vec A(\vec x, t)$ from the equations that you want, first and foremost. So we have to start with what equations we want. What equations do we want? The Maxwell equations are:$$\begin{array}{ll} \nabla\cdot E=\rho/\epsilon_0 & ~~~\nabla \times E = -\dot B\\ \nabla\cdot B = 0 &~~~ \nabla \times B =\mu_0 J + \mu_0\epsilon_0 \dot E \end{array}$$And our idea is that we see $\nabla\cdot B = 0$ and we know that this means $B = \nabla\times A$ for some $A$. Now looking at $\nabla \times E = -\nabla \times \dot A $ we see that actually $\nabla \times (E + \dot A) = 0$ which means that there must also be a scalar potential $E + \dot A = -\nabla \phi$ for some $\phi$. Now we ask, "to what extent was that choice of $A, \phi$ free, and to what extent were we constrained?" And the answer is that we have to preserve the fields $E$ and $B$. We know that we preserve $B$ if we add any $\nabla \psi$ to $A$ because the curl of a gradient is zero. But what does that do to our equation for $E$?
It gives us $E + \dot A + \nabla \dot\psi = -\nabla \phi.$ Conclusion: we can add any $\nabla \psi$ to $A$ but only if we also subtract $\dot \psi$ from $\phi$, so that we preserve $E$ as well. This ability to add $\psi$ is called the gauge freedom and it is analogous to having a freedom to choose a constant of integration; $A$ and $\phi$ are in some sense integrations of $E$ and $B$. Now we have two more equations above that we haven't automatically guaranteed by the above construction. Using the identity $\nabla \times (\nabla \times X) = \nabla (\nabla \cdot X) - \nabla^2 X$ we can simplify these considerably.$$-\nabla^2 \phi - \nabla \cdot \dot A = \rho/\epsilon_0, \\\nabla (\nabla \cdot A) - \nabla^2 A =\mu_0 J - \mu_0\epsilon_0 \big(\nabla \dot \phi + \ddot A\big).$$The first thing we see is the emergence of some operator $\Box X = \mu_0\epsilon_0 \ddot X - \nabla^2 X,$ and the next thing is some sort of potential field $\lambda = \nabla \cdot A + \mu_0 \epsilon_0 \dot \phi.$ Combining these into the above two equations gives:$$\Box \phi = \rho/\epsilon_0 + \dot \lambda, \\\Box A =\mu_0 J - \nabla \lambda.$$Okay now remember what I said about the gauge freedom? We can add $\nabla \psi$ to $A$ if we also subtract $\dot\psi$ from $\phi$? What does that do to $\lambda$ as defined above? It changes $\lambda\to\lambda - \Box\psi.$ It turns out that this exactly guarantees that the equations don't change (Exercise: prove this, then reflect on why that has to be true.) We can effectively use this to replace $\lambda$ with whatever we want. For example in the Coulomb gauge, we can first imagine that we solve for some arbitrary $\phi(\vec x,~ t)$ and some arbitrary $\vec A(\vec x,~ t),$ so $\lambda(\vec x,~t)$ is some complicated mess.
But if we first solve $\Box \psi = \lambda - \mu_0 \epsilon_0 \dot \phi$ the resulting gauge transform maps $\lambda \to \mu_0 \epsilon_0 \dot\phi$ and the above equation becomes $-\nabla^2 \phi = \rho/\epsilon_0,$ solved with the standard Coulomb potential (hence this choice is known as the Coulomb gauge). You might object that this creates instantaneous action-at-a-distance, but remember that $E$ is not $-\nabla \phi$ in general; it is $-\nabla\phi - \dot A.$ The instantaneous action-at-a-distance in $\phi$ is balanced out by instantaneous action-at-a-distance in $A$ to keep the fields all right. So the "equations that we want" explicitly contain this no-action-at-a-distance and come when we solve $\Box \psi = \lambda$ to gauge-transform $\lambda\to 0.$In fact the two equations combine together into a "four-vector" expression: $\Box (\phi/c,~ A) = \mu_0 (c \rho, ~ J).$ And those are the equations that we want. (In college it took me a long time to grapple with the question "what happens if the equations of motion take the system to a different $\lambda$?" which turns out to be a pointless question. Recall that we're solving for the fields at all times.) Note that this last step of choosing the gauge basically says "you can always do this," but it doesn't quite construct it for you: our procedure right now is very good for analytical theory but very clumsy for practical theory: "find the fields, guess some $A$, figure out some $\phi$, calculate $\lambda$, solve for the $\psi$ of your dreams, use that to correct $A$ and $\phi$ whose equation no longer will have $\lambda:$ hooray, you're finally in a mathematically pretty place." We want to turn this process around: "take your $\rho, J,$ calculate the right $\phi, A$ in this mathematically pretty place, now use it to get the right $E, B$." Then we become a lean, mean theory machine! Now how do we solve these equations? 
Okay, we know that in 3D space we could solve expressions of the form $\nabla^2 \alpha = -\beta$ by the solution you provided, $$\alpha(\vec r) = \frac{1}{4\pi} ~ \int_V d^3 r' ~\frac{\beta(\vec r')}{|\vec r - \vec r'|}.$$ There is a nice way to interpret this in terms of Dirac $\delta$-functions called the "Green's functions" approach. Our source equation involves a linear differential operator and therefore obeys the principle of superposition, so if you know its response $\alpha = f(\vec r, \vec r')$ to a point stimulus $\beta = \delta_3(\vec r - \vec r')$ — this response is the so-called "Green's function" — then, since the actual driver is just a superposition $\beta(\vec r) = \int d^3r' ~\delta_3(\vec r - \vec r') \beta(\vec r'),$ by linearity the general solution is the weighted superposition of these Green's functions, $\int d^3r'~\beta(\vec r')~f(\vec r, \vec r').$ So the given function $1/({4\pi|\vec r - \vec r'|})$ is the $\alpha$ you calculate when $\beta(\vec r) = \delta_3(\vec r - \vec r')$ is the 3D Dirac $\delta$-function. You can see this by recognizing the Laplacian as the divergence of a gradient; this describes a field whose divergence is 0 except at one point $\vec r'$ where the divergence suddenly spikes to infinity in such a way that the surface integral bounding the point has magnitude 1. We know that that requires an inverse-square field which goes like $(\vec r - \vec r')/|\vec r - \vec r'|^3$ and we see that that sort of thing would come out (up to sign) as the gradient of $1/|\vec r - \vec r'|,$ and the $4 \pi$ comes from the fact that the surface area of the sphere is naturally $4 \pi r^2$.
So now we come to the idea of $\Box = c^{-2} \partial_t^2 - \nabla^2$ and we want to do the same thing, but we know that $\Box \alpha = 0$ is solved by a superposition of traveling waves, $\alpha(\vec r, t) = \sum_i f_i(\vec r - \vec u_i~t)$ where for all $i$ we know that $|\vec u_i| = c.$ If we imagine a $\delta$-function in space and time, we imagine that it would have to produce a spherical shell propagating in all directions, attenuating as it goes, and so we guess something like $\alpha(\vec r, t) = \delta(t - t' - |\vec r - \vec r'|/c) / (4 \pi |\vec r - \vec r'|).$ This guess turns out to be exactly correct, hence the general solution is the superposition over both coordinates: $$ \alpha(\vec r, t) = \int dt' ~\int d^3 r'~ \delta\left(t - t' - \frac{|\vec r - \vec r'|}c\right)~\frac{\beta(\vec r', t')}{4\pi|\vec r - \vec r'|}.$$Performing the integral over $t'$ you can just use the definition of the $\delta$-function to replace t' with $t - |\vec r - \vec r'|/c$, your expression above. Alternatively, just apply $\Box$ to your expression. This is the "guess and check" method of solving an equation; I've given you the Green's function reason why you should guess this expression as the solution to the equation; now all you have to do is check it and if it's correct then it's correct! For this you're simply doing a divergence of a gradient, where it's useful to know that $\nabla$ obeys the normal product rule for derivatives with $\nabla f\big(|\vec r - \vec r'|\big) = f'\big(|\vec r - \vec r'|\big) ~ (\vec r - \vec r')/|\vec r - \vec r'|.$ We have to say very carefully that $\dot \beta$ is the derivative of $\beta$ with respect to its second argument overall which we'll call $\tau$ for short. 
So you start with: $$\nabla \left(\frac{\beta(\vec r', \tau)}{4\pi|\vec r - \vec r'|}\right) = \frac{\vec r - \vec r'}{|\vec r - \vec r'|} \left(\frac{\dot \beta(\vec r', \tau)}{4\pi|\vec r - \vec r'|} - \frac{\beta(\vec r', \tau)}{4\pi|\vec r - \vec r'|^2}\right),$$ and then do the same with the divergence.
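The spherical-wave guess above is also easy to verify symbolically. For a spherically symmetric field the Laplacian satisfies $\nabla^2\alpha = \frac{1}{r}\partial_r^2(r\alpha)$, so away from the source point it suffices to check that $u = r\alpha = f(t - r/c)$ solves the 1D wave equation. A SymPy sketch (my own check, assuming `sympy` is available):

```python
import sympy as sp

r, t, c = sp.symbols('r t c', positive=True)
f = sp.Function('f')

# u = r*alpha for the outgoing spherical wave alpha = f(t - r/c)/r,
# with f an arbitrary (twice-differentiable) profile
u = f(t - r / c)

# 1D wave operator u_tt - c^2 u_rr should vanish identically
box_u = sp.diff(u, t, 2) - c**2 * sp.diff(u, r, 2)
print(sp.simplify(box_u))  # -> 0
```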
Orthogonal Vectors Two vectors, x and y, are orthogonal if their dot product is zero. For example \[ e \cdot f = \begin{pmatrix} 2 & 5 & 4 \end{pmatrix} * \begin{pmatrix} 4 \\ -2 \\ 5 \end{pmatrix} = 2*4 + (5)*(-2) + 4*5 = 8-10+20 = 18\] Vectors e and f are not orthogonal. \[ g \cdot h = \begin{pmatrix} 2 & 3 & -2 \end{pmatrix} * \begin{pmatrix} 4 \\ -2 \\ 1 \end{pmatrix} = 2*4 + (3)*(-2) + (-2)*1 = 8-6-2 = 0\] However, vectors g and h are orthogonal. Orthogonality can be thought of as an extension of perpendicularity to higher dimensions. Let \(x_1, x_2, \ldots , x_n\) be m-dimensional vectors. Then a linear combination of \(x_1, x_2, \ldots , x_n\) is any m-dimensional vector that can be expressed as \[ c_1 x_1 + c_2 x_2 + \ldots + c_n x_n \] where \(c_1, \ldots, c_n\) are all scalars. For example: \[x_1 =\begin{pmatrix} 3 \\ 8 \\ -2 \end{pmatrix}, x_2 =\begin{pmatrix} 4 \\ -2 \\ 3 \end{pmatrix}\] \[y =\begin{pmatrix} -5 \\ 12 \\ -8 \end{pmatrix} = 1*\begin{pmatrix} 3 \\ 8 \\ -2 \end{pmatrix} + (-2)* \begin{pmatrix} 4 \\ -2 \\ 3 \end{pmatrix} = 1*x_1 + (-2)*x_2\] So y is a linear combination of \(x_1\) and \(x_2\). The set of all linear combinations of \(x_1, x_2, \ldots , x_n\) is called the span of \(x_1, x_2, \ldots , x_n\). In other words \[ span(\{x_1, x_2, \ldots , x_n \} ) = \{ v| v= \sum_{i = 1}^{n} c_i x_i , c_i \in \mathbb{R} \} \] A set of vectors \(x_1, x_2, \ldots , x_n\) is linearly independent if none of the vectors in the set can be expressed as a linear combination of the other vectors. Another way to think of this is that a set of vectors \(x_1, x_2, \ldots , x_n\) is linearly independent if the only solution to the equation below is \(c_1 = c_2 = \ldots = c_n = 0\), where \(c_1 , c_2 , \ldots , c_n \) are scalars, and \(0\) is the zero vector (the vector where every entry is 0). \[ c_1 x_1 + c_2 x_2 + \ldots + c_n x_n = 0 \] If a set of vectors is not linearly independent, then they are called linearly dependent.
Example M.5.1 \[ x_1 =\begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix}, x_2 =\begin{pmatrix} 4 \\ -2 \\ 2 \end{pmatrix}, x_3 =\begin{pmatrix} 6 \\ 8 \\ -2 \end{pmatrix} \] Does there exist a nonzero vector c such that \[ c_1 x_1 + c_2 x_2 + c_3 x_3 = 0? \] To answer the question above, let: \begin{align} 3c_1 + 4c_2 +6c_3 &= 0,\\ 4c_1 -2c_2 + 8c_3 &= 0,\\ -2c_1 + 2c_2 -2c_3 &= 0 \end{align} Solving the above system of equations shows that the only possible solution is \(c_1 = c_2 = c_3 = 0\). Thus \(\{ x_1 , x_2 , x_3 \}\) is linearly independent. One way to solve the system of equations is shown below. First, subtract (4/3) times the 1st equation from the 2nd equation: \[-\frac{4}{3}(3c_1 + 4c_2 +6c_3) + (4c_1 -2c_2 + 8c_3) = -\frac{22}{3}c_2 = 0 \Rightarrow c_2 = 0 \] Then add the 1st equation to 3 times the 3rd equation, and substitute in \(c_2 = 0\): \[ (3c_1 + 4c_2 +6c_3) + 3*(-2c_1 + 2c_2 -2c_3) = -3c_1 + 10 c_2 = -3c_1 = 0 \Rightarrow c_1 = 0 \] Now, substituting both \(c_1 = 0\) and \(c_2 = 0\) into equation 2 gives \[ 4c_1 -2c_2 + 8c_3 = 8c_3 = 0 \Rightarrow c_3 = 0 \] So \(c_1 = c_2 = c_3 = 0\), and \(\{ x_1 , x_2 , x_3 \}\) is linearly independent. Example M.5.2 \[ x_1 =\begin{pmatrix} 1 \\ -8 \\ 8 \end{pmatrix}, x_2 =\begin{pmatrix} 3 \\ -2 \\ 4 \end{pmatrix}, x_3 =\begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix} \] In this case \(\{ x_1 , x_2 , x_3 \}\) is linearly dependent, because if \(c = (-1, 1, -2)^T\), then \[ Xc = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} -1 \\ 1\\ -2 \end{pmatrix} = -1 \begin{pmatrix} 1 \\ -8\\ 8 \end{pmatrix}+ 1 \begin{pmatrix} 3 \\ -2\\ 4 \end{pmatrix} - 2 \begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix} = \begin{pmatrix} -1*1 +1*3-2*1 \\ -1*-8+1*-2-2*3 \\ -1*8+1*4-2*-2 \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \] Norm of a vector or matrix The norm of a vector or matrix is a measure of the "length" of said vector or matrix.
For a vector x, the most common norm is the \(\mathbf{L_2}\) norm, or Euclidean norm. It is defined as \[ \| x \| = \| x \|_2 = \sqrt{ \sum_{i=1}^{n} x_i^2 } \] Another common vector norm is the \(\mathbf{L_1}\) norm, also called the Manhattan norm or Taxicab norm: \[ \| x \|_1 = \sum_{i=1}^{n} |x_i| \] A third is the Maximum norm, also called the Infinity norm: \[ \| x \|_\infty = \max( |x_1| ,|x_2|, \ldots ,|x_n|) \] The most commonly used matrix norm is the Frobenius norm. For an m × n matrix A, the Frobenius norm is defined as: \[ \| A \| = \| A \|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} a_{i,j}^2 } \] Quadratic Form of a Vector The quadratic form of the vector x associated with an n × n matrix A is \[ x^T A x = \sum_{i = 1}^{n} \sum_{j=1}^{n} a_{i,j} x_i x_j \] A matrix A is Positive Definite if for any non-zero vector x, the quadratic form of x and A is strictly positive. In other words, \(x^T A x > 0\) for all nonzero x. A matrix A is Positive Semi-Definite or Non-negative Definite if for any vector x, the quadratic form of x and A is non-negative. In other words, \(x^T A x \geq 0\) for all x. Similarly, a matrix A is Negative Definite if for any non-zero vector x, \(x^T A x < 0\), and Negative Semi-Definite or Non-positive Definite if \(x^T A x \leq 0\) for all x.
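The examples in this section can be cross-checked numerically. The sketch below (my own, assuming NumPy is available) verifies the orthogonality dot products, tests the independence in Example M.5.1 via matrix rank, computes the three vector norms, and tests positive definiteness through eigenvalues:

```python
import numpy as np

# Dot products from the orthogonality examples
e, f = np.array([2, 5, 4]), np.array([4, -2, 5])
g, h = np.array([2, 3, -2]), np.array([4, -2, 1])
print(e @ f, g @ h)  # e.f is nonzero; g.h == 0, so g and h are orthogonal

# Example M.5.1: columns are x1, x2, x3; full column rank <=> linearly independent
X = np.array([[3, 4, 6],
              [4, -2, 8],
              [-2, 2, -2]])
print(np.linalg.matrix_rank(X))  # 3, so c1 = c2 = c3 = 0 is the only solution

# Norms of x = (3, 4): L2 = 5, L1 = 7, L-infinity = 4
x = np.array([3.0, 4.0])
print(np.linalg.norm(x), np.linalg.norm(x, 1), np.linalg.norm(x, np.inf))

# A symmetric matrix is positive definite iff all its eigenvalues are > 0
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: eigenvalues are 1 and 3
```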
Definition:Cardinality

Definition
The cardinality of a set $S$ is written $\card S$.
Let $S$ be a finite set. The cardinality $\card S$ of $S$ is the number of elements in $S$. That is, if:
$S \sim \N_{< n}$
where $\N_{< n} = \set {0, 1, \ldots, n - 1}$ is the initial segment of the natural numbers below $n$, then we define:
$\card S = n$
Let $S$ be an infinite set. The cardinality $\card S$ of $S$ can be indicated as:
$\card S = \infty$
It means that $\card S$ is at least $\aleph_0$ (aleph null).

Cardinality of Natural Numbers
Each natural number, regarded as a set, has itself as its cardinality. That is:
$\forall n \in \N: \card n := n$

Also known as
Some sources cut through all the complicated language and call it the size. Some sources use $\map {\#} S$ (or a variant) to denote set cardinality. This notation has its advantages in certain contexts, and is used on occasion on this website. Others use $\map C S$, but this is easy to confuse with other uses of the same or similar notation. A clear but relatively verbose variant is $\Card \paren S$ or $\operatorname{card} \paren S$. Further notations are $\map n A$ and $\overline A$.

Examples
Let:
$S_1 = \set {-1, 0, 1}$
$S_2 = \set {x \in \Z: 0 < x < 6}$
$S_3 = \set {x^2 - x: x \in S_1}$
$S_4 = \set {X \in \powerset {S_2}: \card X = 3}$
$S_5 = \powerset \O$
Then the cardinalities are:
$\card {S_1} = 3$, $\card {S_2} = 5$, $\card {S_3} = 2$, $\card {S_4} = 10$, $\card {S_5} = 1$

Also see
Results about cardinality can be found here.

Sources
1915: Georg Cantor: Contributions to the Founding of the Theory of Transfinite Numbers... (previous) ... (next): First Article: $\S 1$: The Conception of Power or Cardinal Number: $(3)$ 1964: Steven A.
Gaal: Point Set Topology... (previous) ... (next): Introduction to Set Theory: $2$. Set Theoretical Equivalence and Denumerability 1966: Richard A. Dean: Elements of Abstract Algebra... (previous) ... (next): $\S 0.2$ 1968: A.N. Kolmogorov and S.V. Fomin: Introductory Real Analysis... (previous) ... (next): $\S 2.5$: The power of a set 1977: Gary Chartrand: Introductory Graph Theory... (previous) ... (next): Appendix $\text{A}.2$: Cartesian Products and Relations 1982: P.M. Cohn: Algebra Volume 1(2nd ed.) ... (previous) ... (next): Chapter $1$: Sets and mappings: $\S 1.3$: Mappings 1994: Martin J. Osborne and Ariel Rubinstein: A Course in Game Theory... (previous) ... (next): $1.7$: Terminology and Notation 1996: John F. Humphreys: A Course in Group Theory... (previous) ... (next): Chapter $5$: Cosets and Lagrange's Theorem: Proposition $5.8$ Notation 1996: H. Jerome Keisler and Joel Robbin: Mathematical Logic and Computability... (previous) ... (next): Appendix $\text{A}.6$: Cardinality 2008: David Nelson: The Penguin Dictionary of Mathematics(4th ed.) ... (previous) ... (next): Entry: cardinal number (cardinality) 1972: A.G. Howson: A Handbook of Terms used in Algebra and Analysis... (previous) ... (next): $\S 4$: Number systems $\text{I}$: A set-theoretic approach 2005: René L. Schilling: Measures, Integrals and Martingales... (previous) ... (next): $2.3$ 2008: Paul Halmos and Steven Givant: Introduction to Boolean Algebras... (previous) ... (next): Appendix $\text{A}$: Set Theory: Cardinality 2012: M. Ben-Ari: Mathematical Logic for Computer Science(3rd ed.) ... (previous) ... (next): Appendix $\text{A}.5$: Definition $\text{A}.25$
Search Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
We have the SampleLeft function in lattice trapdoors as Algorithm $\textbf{SampleLeft}(A,M_1,T_A,u,\sigma)$: $\textbf{Input}$: a rank $n$ matrix $A$ in $\mathbb{Z}^{n×m}_q$ and a matrix $M_1$ in $\mathbb{Z}^{n×m_1}_q$, a "short" basis $T_A$ of $\Lambda^{\perp}_q(A)$ and a vector $u \in \mathbb{Z}^n_q$, a Gaussian parameter $\sigma > \|\widetilde{T_A}\|\cdot\omega(\sqrt{\log(m+m_1)})$ $\textbf{Output}$: Let $F_1 := (A \mid M_1)$. The algorithm outputs a vector $e \in \mathbb{Z}^{m+m_1}$ sampled from a distribution statistically close to $\mathcal{D}_{\Lambda^{u}_q(F_1),\sigma}$ such that $F_1 \cdot e = u$. My question is: can we sample a vector $e$ using the above algorithm (or making some changes) such that $e \cdot F_1 = u$ holds? The dimensions of the matrices change accordingly.
Let $(X, \mathcal{A}, \mu)$ be a measure space, $(f_n)$ a sequence of integrable functions that converges in measure to some integrable function $f$. Assume further that there is some $g : X \rightarrow [0,\infty]$ integrable such that $|f_n|\leq g$ and $|f|\leq g$. Then $(f_n) \rightarrow f$ in mean. I am not sure how to prove this. Since $(f_n)\rightarrow f$ in measure, every subsequence of $(f_n)$ admits some subsequence that converges to $f$ almost everywhere. I can prove that such a subsequence converges in mean, since convergence a.e. plus being dominated by an integrable function as above implies convergence in mean. Indeed, if $(f_n)\rightarrow f$ a.e., then $(f_n-f)\rightarrow 0$ a.e. Also, $$|f_n - f|\leq |f_n| + |f| \leq 2g$$ and therefore appealing to the Dominated Convergence Theorem we obtain that $\int_X |f_n -f|d\mu \rightarrow 0$, so $(f_n)\rightarrow f$ in mean. The problem is then in showing that if every subsequence of $(f_n)$ has a further subsequence converging to $f$ in mean, then $(f_n) \rightarrow f$ in mean, and this is the part I need some help with. Thank you!
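The missing piece is the standard subsequence principle for convergence in a (pseudo)metric; a sketch of the argument (my own, not from the post):

```latex
\textbf{Lemma.} If every subsequence of $(f_n)$ has a further subsequence
converging to $f$ in mean, then $f_n \to f$ in mean.

\textbf{Proof sketch.} Suppose not. Then there is $\varepsilon > 0$ and a
subsequence $(f_{n_k})$ with $\int_X |f_{n_k} - f|\,d\mu \ge \varepsilon$
for all $k$. But $(f_{n_k})$ still converges to $f$ in measure and is
dominated by $g$, so it has a further subsequence $(f_{n_{k_j}})$
converging to $f$ a.e.; by dominated convergence,
$\int_X |f_{n_{k_j}} - f|\,d\mu \to 0$, contradicting the lower bound
$\varepsilon$. \qed
```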
I'm trying to implement an equation into a programming language which doesn't have functions for integrals. However as it's many years since I've had any math exercise I'm having some trouble understanding how I can simplify the following equation. $$\mathrm{erf}(x) = \frac{1}{\sqrt{\pi}} \int_{-x}^x e^{-t^2} dt.$$ As an example would it be correct to refactor the equation as follows? $$\frac{1}{\sqrt{\pi}} \cdot \left(x \cdot e^{-x^2}+(-x) \cdot e^{x^2}\right)$$ Please forgive me if I'm completely off target! As I said, it's been quite a few years since I've had integrals.
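The proposed refactoring will not work, since $e^{-t^2}$ has no elementary antiderivative; one practical route is to evaluate the Maclaurin series $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\sum_{n\ge 0} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)}$ instead. A Python sketch (the function name and term count are my own choices, not from the post):

```python
import math

def erf_series(x: float, terms: int = 30) -> float:
    """Approximate erf(x) via its Maclaurin series.

    erf(x) = 2/sqrt(pi) * sum_{n>=0} (-1)^n x^(2n+1) / (n! * (2n+1));
    about 30 terms give near machine precision for |x| <= 2.
    """
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total

print(erf_series(1.0))  # close to math.erf(1.0) ~ 0.8427
```

For languages without `math.erf`, the same loop carries over directly; for large |x| a rational approximation (e.g. Abramowitz & Stegun style) converges faster.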
I am reading through some topological definitions. In my book it is stated that $id_M:(M,\tau_d)\rightarrow(M,\tau_h), x\mapsto x$ is a homeomorphism, where $(M,d)$ is a metric space and $h(x,y)=\frac{d(x,y)}{1+d(x,y)}$ is a metric on $M$. A homeomorphism between topological spaces is a map which is bijective, continuous, and whose inverse map is also continuous. I would really appreciate it if you could help me show these properties.
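In case it helps, here is a sketch of the standard argument (mine, not from the book); the key point is that $d$ and $h$ generate the same open balls up to a radius conversion:

```latex
\textbf{Sketch.} The map $id_M$ is clearly bijective. For continuity in both
directions it suffices to show $\tau_d = \tau_h$. Note that
$t \mapsto \frac{t}{1+t}$ is strictly increasing on $[0,\infty)$, with
inverse $s \mapsto \frac{s}{1-s}$ on $[0,1)$. Hence for $0 < \varepsilon < 1$,
\[
  h(x,y) < \frac{\varepsilon}{1+\varepsilon} \iff d(x,y) < \varepsilon,
\]
so every open $d$-ball contains an open $h$-ball with the same centre and
vice versa. Therefore the two topologies coincide, both $id_M$ and its
inverse are continuous, and $id_M$ is a homeomorphism. \qed
```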
In a recent research paper, I formalised some of my theorems in the proof assistant Isabelle, but left some of them as just proved 'by hand'. I found it helpful to use a different 'QED' symbol depending on which method I had used. Below is an enlarged version of the 'three cubes' symbol in the snippet… Continue reading The Isabelle logo as end-of-proof symbol If $latex f$ and $latex g$ are functions, what does $latex f*g$ mean? Definition 1. A separation algebra (citation) is a triple $latex (M, \circ, u)$, where $latex M$ is a set, $latex \circ$ has type $latex M \times M \rightharpoonup M$ and is commutative, associative and cancellative, and $latex u\in M$ is the unit… Continue reading A star operator for functions Abstract Separation Logic (citation) is based on a partial composition operator, $latex \circ$, which has the following type: $latex \circ : \Sigma \times \Sigma \rightharpoonup \Sigma$ We shall concentrate henceforth on the simplest incarnation of this operator: disjoint union. Here are two useful theorems about disjoint union. Theorem 1. If $latex a\subseteq a'$ and $latex… Continue reading A comment on ‘Abstract Separation Logic’ Here's an interesting little theorem I came across in number theory. By instantiating $latex n$, one can generate an unlimited number of exercises for maths students! Theorem. When two numbers are raised to the same odd power, the sum is divisible by the sum of the two numbers; that is: $latex a+b\mid a^{2n+1}+b^{2n+1}$ for any natural… Continue reading A little theorem about odd powers The johnproof style file provides an environment for typesetting proofs. Features include: It automatically handles page breaks, for those occasions when a long proof splits over multiple pages. It deals with assumptions and case splits. It assigns a label to each step in the proof, for referencing.
It indicates the scope of each assumption using… Continue reading A proof environment for LaTeX We write $latex \forall x\in\mathbb{R}\ldotp x^2 \ge 0$ to express that the square of any real number is non-negative. We write $latex \exists x\in\mathbb{Z} \ldotp x \ge 0 \land x \le 0$ to express that there exists an integer that is both non-negative and non-positive. We write $latex \lambda x:\mathsf{int} \ldotp x+1$ to denote the… Continue reading A new notation for building sets?
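The little theorem about odd powers quoted above is easy to spot-check numerically (a quick Python sketch of my own, not from the posts):

```python
# Check that a + b divides a^(2n+1) + b^(2n+1) for a sample of naturals.
# (Algebraically this holds because substituting a = -b into
# a^(2n+1) + b^(2n+1) gives 0, so a + b is a factor.)
for a in range(1, 10):
    for b in range(1, 10):
        for n in range(0, 5):
            assert (a ** (2 * n + 1) + b ** (2 * n + 1)) % (a + b) == 0
print("all checks passed")
```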
We know that if $\lim\limits_{x\rightarrow a}{y(x)}=A$ and $\lim\limits_{x\rightarrow a}{w(x)}=B$ then $\lim\limits_{x\rightarrow a}{\left(y(x)\times w(x)\right)}=AB$. Now $\lim\limits_{x\rightarrow \infty}{\dfrac1{f(x)}}=\dfrac1m$ (as $m$ is non-zero). Consider the functions $y(x)=\dfrac{1}{f(x)}$ and $w(x)=f(x)\times g(x)$. Now, as the limits of both $y(x)$ and $w(x)$ exist, by considering the limit of $y(x)\times w(x)\color{grey}{=g(x)}$, you can prove that $g(x)$ approaches the limit $\dfrac{L}{m}$. About the second question: As you might have noticed, before defining the limit, we impose the condition that $L$ is real. I want to avoid things like $\lim{f(x)}=\infty$ and claiming that $\lim f(x)$ exists. (We write $f(x)\rightarrow +\infty$ as $x\rightarrow a$ instead of saying the $\lim$ equals $\infty$.) Now this becomes tricky! Although it looks like the previous proof works for this one too, it is not so. It cannot be used, as "the product of the limits equals the limit of the product" is applicable only if the limits of both functions 'exist'; the existence of the limit of $f(x)$ is a problem here, so we have to resort to a more elementary technique. We say that a function $f(x)$ approaches $\infty$ if for every $M\in \mathbb{R}$ we can find an $N\in \mathbb{R}$ such that $$x\ge N \implies f(x)\ge M.$$ Now as $f(x)g(x)$ approaches $L$, we can say that for every $\varepsilon^{'} =\varepsilon \cdot M -L$ there exists an $N_1$ such that $x>N_1 \implies -\varepsilon<\frac{L-\varepsilon^{'}}{f(x)}<g(x)<\frac{L+\varepsilon^{'}}{f(x)} <\varepsilon \tag{1}$ And thus $g(x)$ approaches $0$. (For proving (1) you will have to assume that $L\ge0$; for the case $L<0$ you can consider $\varepsilon^{'}=\varepsilon \cdot M +L$ and arrive at the same inequality. Also, division by $f(x)$ is valid since we can find an $N_2$ such that for all $x>N_2$, $f(x)$ is greater than $0$.)
The conditional variational principle for maps with the pseudo-orbit tracing property
1. School of Mathematical Sciences, Anhui University, Hefei 230601, Anhui, China
2. School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University, Nanjing 210023, Jiangsu, China
3. Center of Nonlinear Science, Nanjing University, Nanjing 210093, Jiangsu, China
Let $(X,d,f)$ be a topological dynamical system, where $(X,d)$ is a compact metric space and $f:X \to X$ is a continuous map. For $n \in \mathbb{N}$ and $x \in X$, define the empirical measure $\mathscr{E}_n(x) = \frac{1}{n}\sum\limits_{i = 0}^{n-1}\delta_{f^ix}$, where $\delta_y$ denotes the Dirac measure at $y$, and let $V(x)$ denote the set of limit points of the sequence $\mathscr{E}_n(x)$. For a subset $I$ of $\mathscr M_{\rm inv}(X,f)$, the space of $f$-invariant measures, set $\Delta_{sub}(I): = \left\{ {x \in X:V(x)\subset I} \right\}$ and $\Delta_{cap}(I): = \left\{ {x \in X:V(x)\cap I \neq \emptyset } \right\}$.
Mathematics Subject Classification: Primary: 37B40; Secondary: 37C45.
Citation: Zheng Yin, Ercai Chen. The conditional variational principle for maps with the pseudo-orbit tracing property. Discrete & Continuous Dynamical Systems - A, 2019, 39 (1): 463-481. doi: 10.3934/dcds.2019019
what does it mean by becoming extensional in the first place? The axiom of extensionality relates to what it means for two functions to be equal. Specifically, extensionality says: $f = g \iff \forall x \ldotp f(x) = g(x)$ That is, functions are equal if they map equal inputs to equal outputs. By this definition, quicksort and mergesort are equal, even if they don't have the same implementations, because they behave the same as functions. How does it become extensional What's missing is the rule of definitional equality for functions. It usually looks like this: $\frac{\Gamma, (x : U) \vdash (f x) = (g x):V}{\Gamma \vdash f = g: (x : U) \to V}\text{(Fun-DefEq)}$ That is, two functions are definitionally equal when they produce equal results when applied to an abstract variable. This is similar in spirit to the way we typecheck polymorphic functions: you make sure it holds for all values by making sure it holds for an abstract value. We get extensionality when we combine the two: if two functions always produce the same result, we should be able to find some equality proof $P$ such that $\Gamma,(x: U) \vdash P:Id_V(f x, g x)$ i.e. the proof that the two functions always produce the same result. But, if we combine this with the rule $\text{(Id-DefEq)}$, then any time two functions are extensionally equal (i.e., we can find the proof term $P$), they are also definitionally equal. This is in stark contrast to an intensional system, where two functions are equal if and only if their bodies are syntactically identical. So mergesort and quicksort are intensionally different, but extensionally the same. The rule $\text{(Id-DefEq)}$ means that extensional equality is baked into the type system: if you have a type constructor $T : ((x : U) \to V) \to \mathsf{Set}$, then you can use a value of type $T\ f$ in a context expecting $T\ g$ if $f$ and $g$ map equal inputs to equal outputs.
Again this is not true in an intensional system, where $f$ and $g$ might be incompatible if they're syntactically different. Does the above mean that we purposefully drop the proof that M and N are equal and just consider them to be equal definitionally (like a presumption)? It's even a bit stronger than that. It's saying that $M$ and $N$ are definitionally equal any time there exists some proof that they are propositionally equal. So on the one hand, if you have a propositional proof that two values are equal, you can forget that proof and say that they are definitionally equal. But on the other hand, if you are trying to prove that two values are definitionally equal (as a dependent type checking algorithm would), then you cannot say that they are not equal unless you are certain that no proof $P$ exists. This is why it is undecidable.
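To make the contrast concrete, here is a hedged Python sketch (the function names are mine): two syntactically different sorting functions that an intensional theory distinguishes but extensionality identifies. A finite spot-check like this is of course not a proof of extensional equality, which quantifies over all inputs.

```python
# Two intensionally different functions: different bodies, same input/output
# behaviour.  Extensionality identifies them; an intensional theory does not.

def sort_builtin(xs):
    return sorted(xs)

def sort_insertion(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# A finite spot-check (a real extensionality proof would quantify over all
# inputs; here we merely sample a few):
for xs in [[], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]:
    assert sort_builtin(xs) == sort_insertion(xs)
```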
This question is based on two answers I saw. I want to know if my reasoning is correct in each case. For me, $A$ is always a commutative ring with $1 \ne 0$. In the first answer this proposition was given: Let $f_1,\dots,f_t\in A[X]$ be nonzero. If there is $0 \ne g\in A[X]$ such that $gf_1=\cdots=gf_t=0$, then there is $0 \ne a\in A$ such that $af_1=\cdots=af_t=0$. Assuming the case $t = 1$ which is McCoy's theorem, I think I can prove this as follows: Proof: Let $\displaystyle F := \sum_{i=1}^t f_i X^{\sum_{j < i} (1 + \deg f_j)}$. Then $gf_1 = \ldots = gf_t = 0 \implies gF = 0 \implies aF = 0$ for some $0 \ne a \in A \implies af_1 = \ldots = af_t = 0$. The last implication is because every coefficient of each $f_i$ appears exactly once as a coefficient of $F$.$\newcommand{\N}{\mathbb{N}}$ Question 1: Is the proof above correct? Moving on to the next question, in the other answer this proposition was given: (*) Let $A$ be a commutative $\N$-graded ring. Let $f \in A$ be a zerodivisor. Then there is some $0 \ne a \in A$ homogeneous such that $af = 0$. I think the proof given in that answer has a fatal flaw, which was pointed out as a comment but with no response. Specifically $\deg f_ig < \deg g$ may not be true. For polynomial rings in $1$ variable it's ok because $f_i = a_ix^i$ where $a_i$ has degree $0$ and $x^i$ is a nonzerodivisor, so $\deg a_ig < \deg g$ and $a_ig \ne 0$. So my second question is: Question 2: Is (*) true? If not, what is a counterexample? One note is that (*) is true if $A$ is Noetherian (since then every zerodivisor is in some associated prime, which is the annihilator of a homogeneous element). So a counterexample would have to be non-Noetherian.
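As a sanity check on the degree-shift trick in the proof of Question 1, here is a small Python sketch (the helper `build_F` is my own illustrative name): polynomials are encoded as coefficient lists, and the construction places the coefficients of each $f_i$ in disjoint slots of $F$, which is what makes the last implication work.

```python
# Sanity check of the degree-shift construction: encode each f_i as its list
# of coefficients (index = exponent) and build F = sum_i f_i * X^{shift_i},
# where shift_i = sum_{j<i} (1 + deg f_j).  Every coefficient of every f_i
# then occupies its own slot of F, appearing exactly once.

def build_F(polys):
    shift = 0
    F = []
    for p in polys:
        F.extend([0] * (shift - len(F)))  # pad up to the current shift
        F.extend(p)                       # coefficients of p, shifted
        shift += 1 + (len(p) - 1)         # advance by 1 + deg p
    return F

f1 = [1, 2]        # 1 + 2X       (deg 1)
f2 = [3, 0, 4]     # 3 + 4X^2     (deg 2)
F = build_F([f1, f2])
# Coefficients of f1 occupy slots 0..1, those of f2 occupy slots 2..4:
assert F == [1, 2, 3, 0, 4]
```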
Using Green's theorem to find area Typically we use Green's theorem as an alternative way to calculate a line integral $\dlint$. If, for example, we are in two dimensions, $\dlc$ is a simple closed curve, and $\dlvf(x,y)$ is defined everywhere inside $\dlc$, we can use Green's theorem to convert the line integral into a double integral. Instead of calculating the line integral $\dlint$ directly, we calculate the double integral \begin{align*} \iint_\dlr \left(\pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y}\right) dA \end{align*} Can we use Green's theorem to go the other direction? If we are given a double integral, can we use Green's theorem to convert the double integral into a line integral and calculate the line integral? If we are given the double integral \begin{align*} \iint_\dlr f(x,y) dA, \end{align*} we can use Green's theorem only if there happens to be a vector field $\dlvf(x,y)$ so that \begin{align*} f(x,y) = \pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y}. \end{align*} However, we haven't learned any method to find such a vector field $\dlvf$. So, we aren't likely to use Green's theorem in this direction very often. There is one important exception to this rule, however, and that is when we are using a double integral to calculate the area of a region $\dlr$. The area of a region $\dlr$ is equal to the double integral of $f(x,y)=1$ over $\dlr$: \begin{align*} \text{Area of $\dlr$} = \iint_\dlr dA = \iint_\dlr 1\, dA. \end{align*} If $f(x,y) =1$, it is easy to find a vector field $\dlvf$ so that \begin{align*} \pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y} = f(x,y) = 1. \end{align*} There are many such vector fields $\dlvf$, but we'll pick the vector field $\dlvf(x,y) = (-y/2, x/2)$. You can confirm that indeed $\displaystyle \pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y} =1$.
In summary, if $\dlc$ is a counterclockwise oriented simple closed curve that bounds a region where you can apply Green's theorem, the area of the region $\dlr$ bounded by $\dlc = \partial \dlr$ is \begin{align*} \text{Area of $\dlr$} = \dlint = \frac{1}{2} \int_\dlc x\,dy - y\,dx, \end{align*} where $\dlvf(x,y) = (-y/2,x/2)$. Example 1 Use Green's Theorem to calculate the area of the disk $\dlr$ of radius $r$ defined by $x^2+y^2 \le r^2$. Solution: Since we know the area of the disk of radius $r$ is $\pi r^2$, we had better get $\pi r^2$ for our answer. The boundary of $\dlr$ is the circle of radius $r$. We can parametrize it in a counterclockwise orientation using \begin{align*} \dllp(t) = (r\cos t, r\sin t), \quad 0 \le t \le 2\pi. \end{align*} Then \begin{align*} \dllp'(t) = (-r\sin t, r\cos t), \end{align*} and, by Green's theorem, \begin{align*} \text{area of } \dlr &= \iint dA\\ &= \frac{1}{2} \int_\dlc x\,dy - y\,dx\\ &= \frac{1}{2} \int_0^{2\pi} [(r\cos t)(r\cos t) - (r\sin t)(-r\sin t)]dt \\ &= \frac{1}{2} \int_0^{2\pi} r^2 (\sin^2t+\cos^2t) dt = \frac{r^2}{2}\int_0^{2\pi} dt = \pi r^2. \end{align*} Thankfully, our answer agrees with what we know it should be. Example 2 Calculate the area of the region $D$ bounded by the curve $\dlc$ parametrized by $\dllp(t)=\sin 2t\,\vc{i} +\sin t\,\vc{j}$ for $0 \le t \le \pi$. Area inside sinusoidal curve. The curve $\dlc$ parameterized by $\dllp(t)=(\sin 2t,\sin t)$ for $0 \le t \le \pi$ is the counterclockwise oriented boundary of a region $D$. One can calculate the area of $D$ using Green's theorem and the vector field $\dlvf(x,y)=(-y,x)/2$.
Solution: We'll use Green's theorem to calculate the area bounded by the curve. Since $\dlc$ is a counterclockwise oriented boundary of $D$, the area is just the line integral of the vector field $\dlvf(x,y) = \frac{1}{2}(-y,x)$ around the curve $\dlc$ parametrized by $\dllp(t)$. To integrate around $\dlc$, we need to calculate the derivative of the parametrization $\dllp'(t)=2\cos 2t \,\vc{i}+\cos t\,\vc{j}$. Then, by Green's theorem, we turn the double integral defining the area into a line integral and calculate the area. \begin{align*} \text{area of } \dlr &= \iint dA\\ &= \dlint = \plint{0}{\pi}{\dlvf}{\dllp}\\ &= \frac{1}{2} \int_0^{\pi} ( -\sin t, \sin 2t) \cdot ( 2\cos 2t,\cos t)dt\\ &= \frac{1}{2} \int_0^{\pi} ( -\sin t, 2 \sin t \cos t ) \cdot (2\cos^2 t - 2\sin^2 t,\cos t)dt\\ &= \int_0^{\pi} \sin^3t dt =\int_0^{\pi} (\sin t-\sin t\,\cos^2t)\, dt\\ &= \Bigl[-\cos t -\frac{1}{3}\cos^3 t\Bigr]_0^{\pi} = 1+\frac{1}{3}=\frac{4}{3} \end{align*}
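As a hedged numerical cross-check of Example 2, the following Python snippet evaluates $\frac{1}{2}\int_0^\pi (x\,y' - y\,x')\,dt$ for the parametrization above with a plain trapezoid rule; it should land on the exact answer $4/3$.

```python
# Numerical check of Example 2: area = (1/2) * integral_0^pi (x y' - y x') dt
# for x(t) = sin(2t), y(t) = sin(t).  Plain trapezoid rule, no libraries.
import math

def integrand(t):
    x, y = math.sin(2 * t), math.sin(t)
    dx, dy = 2 * math.cos(2 * t), math.cos(t)
    return 0.5 * (x * dy - y * dx)

N = 100_000
h = math.pi / N
area = (integrand(0.0) + integrand(math.pi)) / 2
area += sum(integrand(k * h) for k in range(1, N))
area *= h
assert abs(area - 4 / 3) < 1e-6   # matches the exact answer 4/3
```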
Basic integration formulas The fundamental use of integration is as a continuous version of summing. But, paradoxically, often integrals are computed by viewing integration as essentially an inverse operation to differentiation. (That fact is the so-called Fundamental Theorem of Calculus.) The notation, which we're stuck with for historical reasons, is as peculiar as the notation for derivatives: the integral of a function $f(x)$ with respect to $x$ is written as $$\int f(x)\;dx$$ The remark that integration is (almost) an inverse to the operation of differentiation means that if $${d\over dx}f(x)=g(x)$$ then $$\int g(x)\;dx=f(x)+C$$ The extra $C$, called the constant of integration, is really necessary, since after all differentiation kills off constants, which is why integration and differentiation are not exactly inverse operations of each other. Since integration is almost the inverse operation of differentiation, recollection of formulas and processes for differentiation already tells the most important formulas for integration: \begin{align*}\int x^n\; dx &= {1\over n+1}x^{n+1}+C &\hbox{ unless $n=-1$ }\\\int e^x \;dx&= e^x+C \\ \int {1\over x} \;dx&= \ln x+C \\\int \sin x\;dx&=-\cos x+C \\\int \cos x\;dx&= \sin x + C\\ \int \sec^2 x\;dx&=\tan x+C \\ \int {1\over 1+x^2} \; dx&=\arctan x+C\end{align*} And since the derivative of a sum is the sum of the derivatives, the integral of a sum is the sum of the integrals:$$ \int f(x)+g(x)\;dx=\int f(x)\;dx+\int g(x)\;dx$$And, likewise, constants ‘go through’ the integral sign:$$\int c\cdot f(x)\;dx=c\cdot \int f(x)\;dx$$ For example, it is easy to integrate polynomials, even including terms like $\sqrt{x}$ and more general power functions. The only thing to watch out for is terms $x^{-1}={1\over x}$, since these integrate to $\ln x$ instead of a power of $x$.
So $$\int 4x^5-3x+11-17\sqrt{x}+{3\over x}\;dx= {4x^6\over 6}-{3x^2\over 2}+11x-{ 17x^{3/2} \over 3/2 }+3\ln x+C$$ Notice that we need to include just one ‘constant of integration’. Other basic formulas obtained by reversing differentiation formulas: \begin{align*} \int a^x \;dx&= {a^x\over \ln a}+C \\ \int \log_a x\;dx&={1\over \ln a}\cdot{1\over x}+C \\ \int { 1 \over \sqrt{1-x^2 }} \; dx&=\arcsin x+C\\ \int { 1 \over x\sqrt{x^2-1 }} \; dx&=\hbox{ arcsec}\, x+C \end{align*} Sums of constant multiples of all these functions are easy to integrate: for example, $$\int 5\cdot 2^x-{ 23 \over x\sqrt{x^2-1 }}+5x^2\;dx= {5\cdot 2^x\over \ln 2}-23\,\hbox{arcsec}\,x+{5x^3\over 3}+C$$ Exercises $\int 4x^3-3\cos x+{ 7 \over x }+2\;dx=?$ $\int 3x^2+e^{2x}-11+\cos x\,dx=?$ $\int \sec^2 x\,dx=?$ $\int { 7 \over 1+x^2 }\; dx=?$ $\int 16x^7-\sqrt{x}+{ 3 \over \sqrt{x }}\; dx=?$ $\int 23 \sin x-{ 2 \over \sqrt{1-x^2 }}\; dx=?$
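These formulas can be spot-checked numerically. As an illustrative sketch, the Python snippet below integrates the earlier example $4x^5-3x+11-17\sqrt{x}+\frac{3}{x}$ over $[1,2]$ with a trapezoid rule and compares against the antiderivative found above.

```python
# Numerical check of the worked example: integrate the integrand over [1, 2]
# with the trapezoid rule and compare against the antiderivative
# F(x) = (2/3)x^6 - (3/2)x^2 + 11x - (34/3)x^(3/2) + 3 ln x  (constant dropped).
import math

def f(x):
    return 4 * x**5 - 3 * x + 11 - 17 * math.sqrt(x) + 3 / x

def F(x):
    return (2/3) * x**6 - 1.5 * x**2 + 11 * x - (34/3) * x**1.5 + 3 * math.log(x)

a, b, N = 1.0, 2.0, 100_000
h = (b - a) / N
numeric = (f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, N))
numeric *= h
assert abs(numeric - (F(b) - F(a))) < 1e-6
```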
I'm currently looking into survival analysis regression models and can't quite wrap my head around the following: Say I want to model time to occurrence of a cardiovascular event and use age (as opposed to follow-up time) as the underlying time scale. It seems clear to me that it doesn't make sense to include age as a covariate in the model, but what about (2-way) interaction terms between age and another risk factor, say smoking or blood pressure? Would it make sense to include such interaction terms as covariates (also in light of the usual advice to always include both of an interaction's constituent main effects, which in this case wouldn't seem to make much sense for the age main effect)? Also, assuming it would be possible to include such interaction terms, should I treat age as a time-varying covariate (i.e., re-calculate the hazard in different years of the follow-up based on different values of age) or should I use the baseline age value only? Thanks! EDIT: As an example of the age * covariate interaction problem described above, consider the model described in this quote from the supplementary material of Hajifathalian et al. (2015) (excluding the references made in this quote): We used sex- and cohort-stratified Cox proportional hazards regression to estimate the coefficients of the risk function, similar to previous cohort pooling projects. The risk predictors (smoking, diabetes, serum total cholesterol and systolic blood pressure) were measured at baseline. Participants’ age was used as the time scale, which allows age-specific cardiovascular disease (CVD) rates to vary across cohorts, and hence across populations to which the risk score is applied. This parameterization allows the risk equation to be recalibrated to each country, and each sex and age group within it, by replacing the age-sex-specific CVD rates from the pooled cohorts with those from the application country. 
We included interaction terms between all risk factors and age and, as described in the main paper, between sex and diabetes, and sex and smoking based on previous evidence. The Cox model is given as: $\lambda_i(t)=\lambda_{0,k}(t)\exp\left(\sum^4_{l=1}\beta_lX_{i,l}+\sum^4_{l=1}\delta_l t X_{i,l}+\sum^4_{l=3}\gamma_l\,\mathrm{sex}_i X_{i,l}\right) \tag{1}$ where subscript $i$ denotes individual participants in the cohorts; $k$ denotes sex-cohort combination; $l$ denotes risk factors in the risk prediction equation (1: systolic blood pressure, 2: total cholesterol, 3: diabetes, and 4: smoking); $t$ denotes age in years; $X_{i,l}$ is the level of the risk factor $l$ for individual $i$ at baseline; $\beta_l$ is the log hazard ratio (HR) for the main effect of each risk factor, $\delta_l$ is the coefficient for linear interaction between risk factor $l$ and age, and $\gamma_l$ is the coefficient for interaction between diabetes and smoking, and sex; and $\lambda_{0,k}(t)$ is the age-specific hazard of CVD at the average level of risk factors for participants of sex-cohort $k$. $\lambda_{0,k}(t)$ is conceptually equivalent to an age-specific version of the $S_0(t)$ parameter in the specification used by D’Agostino and colleagues in the Framingham Risk Score. This formulation of the risk prediction equation does not need a coefficient for age although it includes interaction terms between age and other risk factors as described below.
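To see how such age interactions play out, here is a minimal Python sketch of the linear predictor in equation (1); all coefficient values are made up for illustration. With a nonzero $\delta_l$, the hazard ratio for risk factor $l$ at age $t$ is $\exp(\beta_l + \delta_l t)$, so it varies along the age time scale even though age itself has no main effect.

```python
# Sketch of the linear predictor in equation (1), with made-up coefficients.
# beta[l] is the main effect of risk factor l, delta[l] its interaction with
# age t; the hazard ratio for factor l at age t is exp(beta[l] + delta[l]*t).
import math

beta  = {"sbp": 0.020, "chol": 0.150, "diabetes": 0.600, "smoking": 0.700}
delta = {"sbp": 0.000, "chol": -0.002, "diabetes": -0.010, "smoking": -0.008}

def log_relative_hazard(x, t):
    """x maps factor name -> baseline level; t is age (the time scale)."""
    return sum((beta[l] + delta[l] * t) * x[l] for l in x)

smoker = {"sbp": 0, "chol": 0, "diabetes": 0, "smoking": 1}
# With a negative age interaction, smoking's hazard ratio shrinks with age:
hr_at_50 = math.exp(log_relative_hazard(smoker, 50))
hr_at_70 = math.exp(log_relative_hazard(smoker, 70))
assert hr_at_70 < hr_at_50
```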
Population and Parameters Section Population A population is any large collection of objects or individuals, such as Americans, students, or trees about which information is desired. Parameter A parameter is any summary number, like an average or percentage, that describes the entire population. The population mean \(\mu\) (the greek letter "mu") and the population proportion p are two different population parameters. For example: We might be interested in learning about \(\mu\), the average weight of all middle-aged female Americans. The population consists of all middle-aged female Americans, and the parameter is µ. Or, we might be interested in learning about p, the proportion of likely American voters approving of the president's job performance. The population comprises all likely American voters, and the parameter is p. The problem is that 99.999999999999... % of the time, we don't — or can't — know the real value of a population parameter. The best we can do is estimate the parameter! This is where samples and statistics come into play. Samples and statistics Sample A sample is a representative group drawn from the population. Statistic A statistic is any summary number, like an average or percentage, that describes the sample. The sample mean, \(\bar{x}\), and the sample proportion \(\hat{p}\) are two different sample statistics. For example: We might use \(\bar{x}\), the average weight of a random sample of 100 middle-aged female Americans, to estimate µ, the average weight of all middle-aged female Americans. Or, we might use \(\hat{p}\), the proportion in a random sample of 1000 likely American voters who approve of the president's job performance, to estimate p, the proportion of all likely American voters who approve of the president's job performance. Because samples are manageable in size, we can determine the actual value of any statistic. We use the known value of the sample statistic to learn about the unknown value of the population parameter.
Example S.1.1 What was the prevalence of smoking at Penn State University before the 'no smoking' policy? Section The main campus at Penn State University has a population of approximately 42,000 students. A research question is "what proportion of these students smoke regularly?" A survey was administered to a sample of 987 Penn State students. Forty-three percent (43%) of the sampled students reported that they smoked regularly. How confident can we be that 43% is close to the actual proportion of all Penn State students who smoke? The population is all 42,000 students at Penn State University. The parameter of interest is p, the proportion of students at Penn State University who smoke regularly. The sample is a random selection of 987 students at Penn State University. The statistic is the proportion, \(\hat{p}\), of the sample of 987 students who smoke regularly. The value of the sample proportion is 0.43. Example S.1.2 Are the grades of college students inflated? Section Let's suppose that there exists a population of 7 million college students in the United States today. (The actual number depends on how you define "college student.") And, let's assume that the average GPA of all of these college students is 2.7 (on a 4-point scale). If we take a random sample of 100 college students, how likely is it that the sampled 100 students would have an average GPA as large as 2.9 if the population average was 2.7? The population is all 7 million college students in the United States today. The parameter of interest is µ, the average GPA of all college students in the United States today. The sample is a random selection of 100 college students in the United States. The statistic is the mean grade point average, \(\bar{x}\), of the sample of 100 college students. The value of the sample mean is 2.9. Example S.1.3 Is there a linear relationship between birth weight and length of gestation? 
Section Consider the relationship between the birth weight of a baby and the length of its gestation: The dashed line summarizes the (unknown) relationship —\(\mu_Y = \beta_0+\beta_1x\)— between birth weight and gestation length of all births in the population. The solid line summarizes the relationship —\(\hat{y} = \hat{\beta}_0+\hat{\beta}_1x\)— between birth weight and gestation length in our random sample of 32 births. The goal of linear regression analysis is to use the solid line (the sample) in hopes of learning about the dashed line (the population). Next... Confidence intervals and hypothesis tests Section There are two ways to learn about a population parameter. 1) We can use confidence intervals to estimate parameters. "We can be 95% confident that the proportion of Penn State students who have a tattoo is between 5.1% and 15.3%." 2) We can use hypothesis tests to test and ultimately draw conclusions about the value of a parameter. "There is enough statistical evidence to conclude that the mean normal body temperature of adults is lower than 98.6 degrees F." We review these two methods in the next two sections.
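As a concrete sketch of the confidence-interval approach, here is the usual normal-approximation 95% interval for the smoking example above ($\hat{p} = 0.43$, $n = 987$), computed in Python:

```python
# A 95% normal-approximation confidence interval for the smoking example
# (p-hat = 0.43, n = 987), using the familiar p-hat +/- 1.96*SE formula.
import math

p_hat, n = 0.43, 987
se = math.sqrt(p_hat * (1 - p_hat) / n)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI: ({lower:.3f}, {upper:.3f})")   # roughly (0.399, 0.461)
```

So we can be 95% confident that the true proportion of smokers is within about three percentage points of the sample value.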
If we idealize the scenario enough, this is a simple exercise in differential equations, so let's get to work. First, we know that its initial speed is $150 \text{ m/s}$, but that is by no means its final speed - obviously, the bb slows down as it travels through air! Let's suppose that the moment the bb exits the barrel, it is no longer being pushed (as Steevan pointed out). So, the only force acting on it is air resistance. So the question is, why does the bb slow down significantly with distance traveled - we can determine this exactly, assuming the model is correct. Now, the model you are using (apparently) for air resistance is given as $$F_d = \frac{1}{2} pv^2C_DA.$$ We want to see how the velocity changes as a function of distance! But we know Newton's second law, so we can write that $$F = m \frac{dv}{dt} = m \frac{dv}{dx} \frac{dx}{dt} = m v' v$$ where $v$ is now a function of distance (this uses the chain rule - hope you're comfortable with that!). Now, we can write our differential equation: $$mv'v = -\frac{1}{2} pv^2 C_DA.$$ Note - there is a negative sign there because the force opposes the direction of motion. That is, the force points backwards, and the particle has a positive (forward) velocity. Simplifying, we get $$v' = -\frac{1}{2m} pC_DAv.$$ Now this is a simple differential equation to solve: we separate variables, i.e. $\frac{v'}{v} = -\frac{1}{2m}pC_DA,$ and then doing some more chain rule magic, we end up with $$\frac{dv}{v} = -\frac{1}{2m}pC_DA \, dx.$$ Now we can integrate both sides and find our solution: $$\int_{v(0)}^{v(x)} \frac{dv}{v} = -\frac{1}{2m} pC_DA \int_0^x dx,$$or$$v(x) = v(0)\exp{\left(-\frac{1}{2m} pC_DA x\right)}.$$Finally, we can plug in the initial condition, that at $x=0$, the speed is $150 \text{ m/s}$: $$v(x) = (150 \text{ m/s}) \exp{\left(-\frac{1}{2m} pC_DA x\right)}.$$ Then, for a numerical answer, you may want to plug in your known constants. Unfortunately, for this you need to know the mass of the bb!
For the sake of argument, let's assume a mass of $0.12 \text{ g}$, the most common mass for airsoft bbs, according to Wiki - Airsoft Pellets. So, we can now calculate the speed of the bb as it travels, knowing that $\frac{1}{2} pC_D A = 0.00817 \text{ g/m}$! So now we have a function for velocity: $$v(x) = (150 \text{ m/s}) \exp{(-0.0681x)}.$$ For example, to find the distance at which the speed drops by half, we would solve $$75 \text{ m/s} = (150 \text{ m/s}) \exp{(-0.0681x)},$$ which yields a distance of approximately 10 meters. Now you see why the bb slows down significantly with distance - it's exponential decay, which tends to decrease the quantity a large amount at first, with the amount of decrease decreasing over time (or in this case, distance).
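The arithmetic above is easy to check in a few lines of Python (a sketch using the same constants as the text):

```python
# Check of the numbers above: v(x) = 150 * exp(-k x), where
# k = (1/(2m)) * p * C_D * A = 0.00817 g/m / 0.12 g = 0.0681 per meter.
import math

k = 0.00817 / 0.12          # 1/m
v0 = 150.0                  # m/s

def v(x):
    return v0 * math.exp(-k * x)

x_half = math.log(2) / k    # distance at which the speed halves
print(round(x_half, 1))     # about 10.2 m, matching the ~10 m in the text
assert abs(v(x_half) - 75.0) < 1e-9
```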
Living in the world today requires giving up a lot of control. We give up control of our financial lives to banks and unaccountable credit bureaus. We give up control of our most intimate data to use online services like Facebook, Amazon, and Google. We give up control of our elections to voting-system companies who run opaque and unauditable elections. We even give up some control over our understanding of the world through exposure to false or misleading news stories. But it doesn’t have to be this way. Cryptography as a discipline provides us with some of the tools necessary to regain some of this control over our resources and data while reducing the need to trust unaccountable actors. One such tool is verifiable computation. A verifiable computation is a computation that produces an output along with a proof certifying something about that output. Until very recently, verifiable computation has been mostly theoretical, but recent developments in zk-SNARK constructions have helped make it practical. Verifiable computation makes it possible for you to be confident about exactly what other people are doing with your data. For example, it enables Online services that are forced to be transparent about what data of yours they have and how they are using it. Auditable elections that protect the privacy of your vote. Publishing stories that provably come from a reliable source, without leaking what that source is. Sending money to others without yielding control of your account or your privacy. Verifiable computation is powered by zk-SNARKs. Right now, however, programming directly with SNARKs is comparable to writing machine code by hand, and trusting “SNARK machine code” is a lot like trusting a compiled binary without the source code. To fulfill the promise of verifiable computing as a tool for returning control and agency to individuals, their operation has to be made as transparent as possible.
We can help accomplish that goal by making the properties that verifiable computations prove specifiable in languages that are as close as possible to the informal, intuitive properties we have in our minds. That way, individuals can trust the easy-to-understand high-level specifications, rather than opaque “SNARK machine code”. In this post, I’ll describe how we at O(1) Labs are helping to bridge this gap and solve the transparency problem with our language Snarky for specifying verifiable computation. Verifiable computation As mentioned above, a verifiable computation is a computation that computes an output along with a proof certifying something about that output. For example, A government could compute the winner of an election and prove that they counted all the votes correctly. An advertising service could compute an ad to serve to you and prove that the ad was generated without using personal data (i.e., your race, income, political views, etc.) A website could compute a news-feed to send to you and prove that it was generated without access to your personal data (and thus free of targeted ads, content, etc.) A journalist could compute a story containing a quote from a source and prove that the quote came from a reliable source (without revealing which one). Verifiable elections For this post, let’s focus on the example of a verifiable election. One place where people are clamoring for accountability is of course in how pizza toppings are chosen in group settings. (Heads up: I kept this example simple for exposition, which means there are a few flaws. I take no accountability for your pizza election.) So let’s imagine you and your friends are trying to decide on what pizza topping to get (either pepperoni or mushroom) and you’d like to vote on a topping while keeping your votes as secret as possible. Let’s say everyone trusts Alice to keep votes secret. She’ll act as the “government” by collecting everyone’s votes.
But everyone also knows Alice loves mushroom pizza, which means we don’t necessarily trust her to run a fair election. So we’ll develop a scheme that gives everyone assurance that the election was run correctly (i.e., that each person’s vote was included and that the majority vote was computed correctly). Using zk-SNARKs, we can write a verifiable computation which Alice can run to compute the majority vote and prove that it was computed correctly. Moreover, using the “zk” or zero-knowledge part of zk-SNARKs, she can do so in such a way that everyone can trust the result without learning any information about individuals’ votes. zk-SNARKs, technically Simplifying a bit, zk-SNARK constructions give us the following ability. Say we have a set of polynomials \(p_1, \dots, p_k\) in variables \(x_1, \dots, x_n, y_1, \dots, y_m\). For given \(\alpha_1, \dots, \alpha_n\), if we know \(\beta_1, \dots, \beta_m\) such that \[ p_i(\alpha_1, \dots, \alpha_n, \beta_1, \dots, \beta_m) = 0 \] we can produce a piece of data \(\pi\) which somehow certifies our knowledge of such \(\beta_i\)s which has the following two properties: 1. Zero-knowledge: \(\pi\) does not expose any information about \(\beta_1, \dots, \beta_m\) 2. Succinctness: \(\pi\) is very small (concretely, a few hundred bytes) and can be checked quickly (concretely, in milliseconds). Such a set of \(\beta\)s is called a satisfying assignment, and the collection of polynomials \(p_1, \dots, p_k\) is called a constraint system. It turns out that such constraint systems are universal in the following sense. Given any (non-deterministic) verifiable computation, we can construct a constraint system so that the existence of a satisfying assignment is equivalent to the correctness of the computation. So, it seems that zk-SNARKs gives us exactly what we want. Namely, a means to prove correctness of computations while hiding private information and saving parties from having to redo the computation themselves. Back to elections With these SNARKs in hand, let’s return to our election example.
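Before returning to the election, here is a toy Python illustration of a constraint system and a satisfying assignment; the polynomials are chosen purely for illustration (real SNARK constraint systems are rank-1 constraints over a large prime field):

```python
# A toy "constraint system" in the sense described above.  The public inputs
# (alphas) say: a1 has a factorization whose factors sum to a2.  The witness
# (betas) is the pair of factors.  A satisfying assignment makes every
# polynomial p_i vanish.

def constraints(alphas, betas):
    a1, a2 = alphas
    b1, b2 = betas
    return [b1 * b2 - a1,   # p_1: the betas multiply to a1
            b1 + b2 - a2]   # p_2: the betas sum to a2

def is_satisfying(alphas, betas):
    return all(p == 0 for p in constraints(alphas, betas))

# Knowing the witness (2, 3) "proves" knowledge of a factorization of 6
# whose factors sum to 5:
assert is_satisfying((6, 5), (2, 3))
assert not is_satisfying((6, 5), (1, 6))   # 1*6 = 6, but 1+6 = 7 != 5
```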
The voting scheme will be as follows:

- Each voter \(i\) chooses a vote \(v_i\) and a nonce \(b_i\). They publish a commitment \(h_i = H(b_i, v_i)\), where \(H\) is some collision-resistant hash function.
- They send \((b_i, v_i)\) to the government.
- The government computes the majority vote \(w\) and publishes it along with a SNARK proving “For each \(i\), I know \((b_i, v_i)\) such that \(H(b_i, v_i) = h_i\) and \(w\) is the majority vote of the \(v_i\)”.
- Voters verify the SNARK on their own against the public set of commitments \((h_1, \dots, h_n)\).

The zero-knowledge property of the SNARK ensures that no votes are revealed to anyone except the government. So, to realize this scheme in practice, all we need to do is to encode the above statement as a constraint system. Here it is:

Great, we’re done! Er – well, maybe not. The trouble is that it’s basically impossible for anyone to verify that this constraint system does actually enforce the above property. I could have just chosen it at random, or maliciously. In fact it doesn’t actually enforce the property: I deleted a bunch of constraints to make this page load faster.

The situation is similar to programming in general: one doesn’t want to have to trust a compiled binary, because it is difficult to verify that it is doing what one expects it to do. Instead, we write programs in high-level languages that are easier for people to verify, and then compile them to assembly. Here, in order for it to be convincing that a constraint system actually does what one expects it to do, one would like it to be the result of running a trusted compiler on a high-level program that is more easily seen to be equivalent to the claim one wants to prove.

Toward a programming language for verifiable computation

We’ll now describe Snarky, our OCaml DSL for verifiable computation. It’s a high-level language for describing verifiable computations so that their correctness is more transparent.
First we describe the programming model of Snarky and then explain in more depth how this model is realized.

Requests

The basic programming model is as follows. A “verifiable computation” will be an otherwise pure computation augmented with the ability to do the following two things:

- Pause execution to ask its environment to provide it with a value, and then resume execution using that value.
- Assert that a constraint holds among some values, terminating with an exception if the constraint does not hold.

To get a feel for the model, let’s see our election computation rendered in a pseudocode version of Snarky.

```
winner(commitments):
  votes = List.map commitments (fun commitment ->
    (nonce, vote) = request (Open_ballot commitment)
    assert (H(nonce, vote) = commitment)
    return vote)
  pepperoni_count = count votes (fun v -> v = Pepperoni)
  pepperoni_wins = pepperoni_count > commitments.length / 2
  return (if pepperoni_wins then Pepperoni else Mushroom)
```

This is intended to define a function winner that takes as input a list of commitments and returns the majority vote of a set of votes corresponding to those commitments (assuming it doesn’t terminate with an exception). It obtains the corresponding votes by mapping over the commitments, for each one requesting that the environment provide it with such a vote (and nonce) and asserting that the provided vote and nonce do in fact correspond to the commitment.

If winner(commitments) is run in an environment in which it terminates without an assertion failure and outputs w, we know that there were votes corresponding to commitments such that the majority vote was w. Snarky gives us a way to prove statements like this about computations.
\(\newcommand{\tild}{\widetilde}\) Namely, given a verifiable computation \(P\) (i.e., a computation that makes some requests for values and assertions of constraints), Snarky lets us compile \(P\) into a constraint system \(\tild{P}\) such that the following two are equivalent:

- Some environment can provide \(P\) with values to answer each request such that \(P\) executes without an assertion failure.
- Some environment can produce a satisfying assignment to \(\tild{P}\).

In our case, the requests are for openings to each of the vote commitments, and the assertions check the correctness of the openings. So, reiterating, if Alice can prove winner(cs) = w for some commitments cs and winner w, she will have proved “I know a set of votes votes corresponding to the commitments cs such that the majority vote of votes is w”.

Snarky concretely

Let’s take a look at what the above example actually looks like in Snarky:

```ocaml
let winner commitments =
  let%bind votes =
    Checked.List.mapi commitments ~f:(fun i commitment ->
      let%bind nonce, vote =
        request Ballot.Opened.typ (Open_ballot i)
      in
      let%map () =
        hash_ballot (nonce, vote)
        >>= Ballot.Closed.assert_equal commitment
      in
      vote)
  in
  let%bind pepperoni_count =
    count votes ~f:(fun v -> Vote.(v = var Pepperoni))
  in
  let half = constant (Field.of_int (List.length commitments / 2)) in
  let%bind pepperoni_wins = pepperoni_count > half in
  Vote.(if_ pepperoni_wins ~then_:(var Pepperoni) ~else_:(var Mushroom))
```

There’s a bit of noise caused by the harsh realities of OCaml’s monad syntax, but overall it is quite close to our pseudocode. We

1. Map over the commitments, requesting our environment to open them.
2. Compute the number of votes for pepperoni.
3. If the number of pepperoni votes is greater than half the votes, return pepperoni as the winner, and otherwise return mushroom.

Handling requests

We must provide a mechanism for handling requests made by verifiable computations to pass in requested values (similar to the way we write exception handlers).
In Snarky, this looks like the following, where ballots : Ballot.Opened.t array is the array of opened ballots that the government has access to.

The request/handler model has a few nice features. In particular,

- It allows one to program in a direct style by pretending one has magical access to requested values.
- It makes a clear distinction between values that are directly computed and non-deterministically chosen values that need to be constrained (with assertions) to ensure correctness.
- It makes testing verifiable computations simple, as one can set up “malicious” handlers that provide values other than the intended ones. The purpose here is to check that the assertions made by the computation do in fact rule out all values besides the intended ones.

Wrapping up

Snarky helps us bridge the gap between the high-level properties we want to prove using verifiable computations and the low-level constraint systems we need to provide to SNARK constructions. It brings the promise of accountability and control over personal data through verifiable computing one step closer to practicality. The code is available on GitHub, and we at O(1) Labs are using it in the development of our new cryptocurrency protocol, which aims to power the examples described above and more.
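The request/handler model can be imitated outside of Snarky. The following is my own rough Python sketch (not O(1) Labs code): a generator pauses on `yield` to request a value from its environment and uses `assert` for constraints, while a driver loop plays the role of the handler. `H` stands in for a real hash function, and all names are illustrative.

```python
def H(nonce, vote):
    return hash((nonce, vote))  # placeholder for a collision-resistant hash

def winner(commitments):
    """The election computation: request openings, constrain, tally."""
    votes = []
    for c in commitments:
        nonce, vote = yield ("Open_ballot", c)  # request; resumes with a value
        assert H(nonce, vote) == c              # constraint on the response
        votes.append(vote)
    pepperoni = sum(v == "Pepperoni" for v in votes)
    return "Pepperoni" if pepperoni > len(commitments) / 2 else "Mushroom"

def run(computation, handler):
    """Drive a computation, answering each request via the handler."""
    try:
        request = next(computation)
        while True:
            request = computation.send(handler(request))
    except StopIteration as done:
        return done.value

# The "government" environment: it knows the openings behind each commitment.
opened = {H(b, v): (b, v)
          for b, v in [(1, "Pepperoni"), (2, "Pepperoni"), (3, "Mushroom")]}
result = run(winner(list(opened)), lambda req: opened[req[1]])
print(result)  # Pepperoni
```

A “malicious” handler that returns a wrong opening trips the `assert`, which mirrors the testing story above: the assertions, not the handler, are what guarantee correctness.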
A "field of sets" is a collection $\mathcal{F}$ of subsets of a given set $X$ which is closed under binary unions, binary intersections, and complementation. Alas, it is not a "field" in the sense of abstract algebra. It is a "subalgebra" of the Boolean algebra of the power set of $X$. Alas, here as well, "Boolean algebra" and "subalgebra" do not have the meaning they have in abstract algebra/ring theory: a "Boolean algebra" is not really an algebra; rather, it is a special kind of lattice. Namely, a Boolean algebra is a set $A$ together with two binary operations, $\wedge$ (meet) and $\vee$ (join), one unary operation ${}^c$ (complementation), and two nullary operations (distinguished elements) $0$ and $1$. We require $\vee$ and $\wedge$ to be associative, commutative, and to distribute over each other (this is already different from the case of an "algebra" in the ring-theoretic sense, in which only multiplication distributes over sums, and not the other way around); we also have two rules of "absorption" that further describe how $\wedge$ and $\vee$ interact: $x\wedge(x\vee y) = x$ and $x\vee(x\wedge y) = x$. Finally, we require $x\vee x^{c}=1$ and $x\wedge x^{c}=0$. The typical example of a Boolean algebra/Boolean lattice is the lattice of subsets of a given set $X$, with $\wedge$ corresponding to intersection, $\vee$ to union, ${}^c$ to complementation, $0$ to $\emptyset$, and $1$ to $X$. Another example is propositional calculus, with $\wedge$ corresponding to conjunction, $\vee$ to disjunction, ${}^{c}$ to negation, $1$ to the class of tautologies, and $0$ to the class of contradictions. The reason for calling it an algebra, in defiance of the meaning of "algebra" from ring theory, is historical: Boole talked about the "algebra of thought" or "algebra of logic". Now, any Boolean algebra gives rise to a ring. We define $+$ by "symmetric difference": $x+y = (x\wedge y^{c})\vee(x^{c}\wedge y)$.
If you do this, then you get an abelian group, with identity element $0$ and with every element being its own inverse (that is, every nontrivial element is of additive order $2$). We define $*$ by $x*y=x\wedge y$. The unity of this ring is the $1$ from the Boolean lattice. This ring is an algebra over the field of $2$ elements, since it has characteristic $2$ and a unity. You may ask: will this ring be a field? The only time it is a field is when the original Boolean algebra was the trivial algebra, with $0$ and $1$ the only elements. For if $x$ is any other element, $0\neq x$, $x\neq 1$, then looking at the order induced by the lattice structure ($x\preceq y$ if and only if $x\wedge y = x$) you have $0\prec x \prec 1$, and $x\wedge y \preceq x$ for all $y$. Therefore, since $x$ is strictly smaller than $1$, so is $x*y$ for any $y$, so $x$ cannot have a multiplicative inverse in the corresponding ring.
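The construction above is easy to check by brute force on a small example. Here is a quick computational sketch (operator names are mine) over the power set of a three-element set, verifying the additive-order-2 property, the unity, and distributivity:

```python
from itertools import chain, combinations

# x + y := symmetric difference, x * y := intersection, on the power set of X.
X = frozenset({1, 2, 3})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

add = lambda a, b: a ^ b   # (a AND b^c) OR (a^c AND b): symmetric difference
mul = lambda a, b: a & b   # a AND b: intersection
zero, one = frozenset(), X

for a in subsets:
    assert add(a, a) == zero        # every element is its own additive inverse
    assert mul(a, one) == a         # 1 (the whole set X) is the unity
    for b in subsets:
        for c in subsets:
            # multiplication distributes over symmetric difference
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
print("ring laws hold on the power set of X")
```

The exhaustive check over all $8^3$ triples is cheap here; of course it is a sanity check on one example, not a proof of the general statement in the answer.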
Interferometers

Big telescopes are expensive, and doubling the size of a telescope increases the cost by a much bigger factor. In addition, bigger telescopes suffer from deforming under their own weight, so there is a limit to the useful size of a single telescope. There is, however, a way to combine telescopes so that the effective size of a telescope is equal - in one sense - to the distance between the telescopes. Combining telescopes in this way is called interferometry.

The light-gathering power of a telescope is proportional to its area. If we combine telescopes, the light-gathering power of the set of telescopes considered as a single instrument is proportional to the sum of the areas. The resolving power of a telescope with a primary lens/mirror of diameter \[D\] using electromagnetic radiation of wavelength \[\lambda\] is given by the Rayleigh criterion \[\theta \simeq \frac{1.22 \lambda}{D}\], so we can resolve objects an angular distance \[\theta\] apart using an interferometer of baseline \[D\] at this wavelength.

Radio waves have very long wavelengths. Light emitted from distant objects is red-shifted into the radio part of the spectrum in reaching us. This means that we can only really study these objects at radio wavelengths, and must use interferometers with a large baseline \[D\] to be able to resolve them and determine the source of the radiation.
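As a numerical illustration of the Rayleigh criterion, here is a short sketch; the wavelength, dish size, and baseline below are made-up illustrative values, not from the text:

```python
import math

# Rayleigh criterion: theta ~ 1.22 * lambda / D. For an interferometer, D is
# the baseline between telescopes rather than a single dish diameter.

def resolution_rad(wavelength_m, diameter_m):
    return 1.22 * wavelength_m / diameter_m

# 21 cm radio waves: a single 100 m dish versus a 1000 km baseline.
single_dish = resolution_rad(0.21, 100.0)
interferometer = resolution_rad(0.21, 1.0e6)

print(single_dish)                    # ~2.6e-3 rad
print(single_dish / interferometer)   # ~10000: resolution scales with baseline
```

The light-gathering power, by contrast, still only adds up the collecting areas of the individual dishes, as the text notes.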
What does 1/k represent regarding Newton's Law of Cooling?

I know k represents the cooling constant. I think the inverse of k is the time taken for the liquid to cool from its maximum temperature to the surrounding temperature. Any clarification would be most appreciated.

What does 1/k represent regarding Newton's Law of Cooling?

In Newton's law of cooling, the constant $k$ appears in most solutions schematically as$^\dagger$ $$T(t)\sim e^{-kt}$$ Clearly, the argument of the exponential must be dimensionless, hence the constant $k$ has dimensions $$[k]=\frac{1}{[\mathrm{time}]}$$ The constant $k$ is a measure of the rate of cooling, with the same dimensions as a frequency. Its reciprocal $1/k$ is the corresponding time constant: the time over which the temperature difference from the surroundings falls by a factor of $e$ — not the time to cool all the way to the surrounding temperature, which is only approached asymptotically.

$\dagger$ We have assumed the simplest formulation of Newton's law of cooling, wherein the system of differential equations is linear, with constant coefficients.
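A quick numeric check of the time-constant interpretation; the temperatures and the value of $k$ below are made up for illustration:

```python
import math

# Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k * t).
# After t = 1/k, the temperature *difference* has fallen to 1/e of its
# initial value; T never exactly reaches T_env.
T0, T_env, k = 90.0, 20.0, 0.1   # illustrative values, k in 1/seconds

def T(t):
    return T_env + (T0 - T_env) * math.exp(-k * t)

ratio = (T(1 / k) - T_env) / (T0 - T_env)
print(ratio)   # 1/e ~ 0.3679
```

So $1/k$ marks a characteristic decay time of the temperature difference, not a finishing time.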
Math.SE report 2015-04

[ Notice: I originally published this report at the wrong URL. I moved it so that I could publish the June 2015 report at that URL instead. If you're seeing this for the second time, you might want to read the June article instead. ]

A lot of the stuff I've written in the past couple of years has been on Mathematics StackExchange. Some of it is pretty mundane, but some is interesting. I thought I might have a little meta-discussion in the blog and see how that goes. These are the noteworthy posts I made in April 2015.

Languages and their relation : help is pretty mundane, but interesting for one reason: OP was confused about a statement in a textbook, and provided a reference, which OPs don't always do. The text used the symbol !!\subset_\ne!!. OP had interpreted it as meaning !!\not\subseteq!!, but I think what was meant was !!\subsetneq!!.

I dug up a copy of the text and groveled over it looking for the explanation of !!\subset_\ne!!, which is not standard. There was none that I could find. The book even had a section with a glossary of notation, which didn't mention !!\subset_\ne!!. Math professors can be assholes sometimes.

Is there an operation that takes !!a^b!! and !!a^c!!, and returns !!a^{bc}!! is more interesting. First off, why is this even a reasonable question? Why should there be such an operation? But note that there is an operation that takes !!a^b!! and !!a^c!! and returns !!a^{b+c}!!, namely, multiplication, so it's plausible that the operation that OP wants might also exist.

But it's easy to see that there is no operation that takes !!a^b!! and !!a^c!! and returns !!a^{bc}!!: just observe that although !!4^2=2^4!!, the putative operation (call it !!f!!) should take !!f(2^4, 2^4)!! and yield !!2^{4\cdot4} = 2^{16} = 65536!!, but it should also take !!f(4^2, 4^2)!! and yield !!4^{2\cdot2} = 4^4 = 256!!. So the operation is not well-defined. And you can take this even further: !!2^4!!
can be written as !!e^{4\log 2}!!, so !!f!! should also take !!f(e^{2\log 4}, e^{2\log 4})!! and yield !!e^{4(\log 4)^2} \approx 2180.37!!.

The key point is that the representation of a number, or even an integer, in the form !!a^b!! is not unique. (Jargon: "exponentiation is not injective".) You can compute !!a^b!!, but having done so you cannot look at the result and know what !!a!! and !!b!! were, which is what !!f!! needs to do.

But if !!f!! can't do it, how can multiplication do it when it multiplies !!a^b!! and !!a^c!! and gets !!a^{b+c}!!? Does it somehow know what !!a!! is? No, it turns out that it doesn't need !!a!! in this case. There is something magical going on there, ultimately related to the fact that if some quantity is increasing by a factor of !!x!! every !!t!! units of time, then there is some !!t_2!! for which it is exactly doubling every !!t_2!! units of time. Because of this there is a marvelous group homomorphism $$\log: \langle \Bbb R^+, \times\rangle \to \langle \Bbb R ,+\rangle$$ which can change multiplication into addition without knowing what the base numbers are.

In that thread I had a brief argument with someone who thinks that operators apply to expressions rather than to numbers. Well, you can say this, but it makes the question trivial: you can certainly have an "operator" that takes expressions !!a^b!! and !!a^c!! and yields the expression !!a^{bc}!!. You just can't expect to apply it to numbers, such as !!16!! and !!16!!, because those numbers are not expressions in the form !!a^b!!. I remembered the argument going on longer than it did; I originally ended this paragraph with a lament that I wasted more than two comments on this guy, but looking at the record, it seems that I didn't. Good work, Mr. Dominus.

how 1/0.5 is equal to 2? wants a simple explanation. Very likely OP is a primary school student. The question reminds me of a similar question, asking why the long division algorithm is the way it is.
Each of these is a failure of education to explain what division is actually doing. The long division answer is that long division is an optimization for repeated subtraction; to divide !!450\div 3!! you want to know how many shares of three cookies each you can get from !!450!! cookies. Long division is simply a notation for keeping track of removing !!100!! shares, leaving !!150!! cookies, then !!5\cdot 10!! further shares, leaving none.

In this question there was a similar answer. !!1/0.5!! is !!2!! because if you have one cookie, and want to give each kid a share of !!0.5!! cookies, you can get out two shares. Simple enough. I like division examples that involve giving cookies to kids, because cookies are easy to focus on, and because the motivation for equal shares is intuitively understood by everyone who has kids, or who has been one.

There is a general pedagogical principle that an ounce of examples is worth a pound of theory. My answer here is a good example of that. When you explain the theory, you're telling the student how to understand it. When you give an example, though, if it's the right example, the student can't help but understand it, and when they do they'll understand it in their own way, which is better than if you told them how.

How to read a cycle graph? is interesting because hapless OP is asking for an explanation of a particularly strange diagram from Wikipedia. I'm familiar with the eccentric Wikipedian who drew this, and I was glad that I was around to say "The other stuff in this diagram is nonstandard stuff that the somewhat eccentric author made up. Don't worry if it's not clear; this author is notorious for that."

In Expected number of die tosses to get something less than 5, OP calculated as follows: The first die roll is a winner !!\frac23!! of the time. The second roll is the first winner !!\frac13\cdot\frac23!! of the time. The third roll is the first winner !!\frac13\cdot\frac13\cdot\frac23!! of the time.
Summing the series !!\sum_n \frac23\left(\frac13\right)^{n-1}n!! we eventually obtain the answer, !!\frac32!!. The accepted answer does it this way also. But there's a much easier way to solve this problem. What we really want to know is: how many rolls before we expect to have seen one good one? And the answer is: the expected number of winners per die roll is !!\frac23!!, expectations are additive, so the expected number of winners per !!n!! die rolls is !!\frac23n!!, and so we need !!n=\frac32!! rolls to expect one winner. Problem solved!

I first discovered this when I was around fifteen, and wrote about it here a few years ago. As I've mentioned before, this is one of the best things about mathematics: not that it works, but that you can do it by whatever method occurs to you and you get the same answer. This is where mathematics pedagogy goes wrong most often: it prescribes that you must get the answer by method X, rather than that you must get the answer by hook or by crook. If the student uses method Y, and it works (and if it is correct) that should be worth full credit.

Bad instructors always say "Well, we need to test to see if the student knows method X." No, we should be testing to see if the student can solve problem P. If we are testing for method X, that is a failure of the test or of the curriculum. Because if method X is useful, it is useful because for some problems, it is the only method that works. It is the instructor's job to find one of these problems and put it on the test. If there is no such problem, then X is useless and it is the instructor's job to omit it from the curriculum. If Y always works, but X is faster, it is the instructor's job to explain this, and then to assign a problem for the test where Y would take more time than is available.

I see now I wrote the same thing in 2006.
It bears repeating. I also said it again a couple of years ago on math.se itself in reply to a similar comment by Brian Scott:

If the goal is to teach students how to write proofs by induction, the instructor should damned well come up with problems for which induction is the best approach. And if even then a student comes up with a different approach, the instructor should be pleased. ... The directions should not begin [with "prove by induction"]. I consider it a failure on the part of the instructor if he or she has to specify a technique in order to give students practice in applying it.
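The die-toss expectation discussed above is easy to check numerically; this quick simulation (my own sketch) agrees with both the series and the !!1/p!! shortcut:

```python
import random

# Expected number of d6 rolls until a value less than 5: success probability
# is p = 2/3 per roll, so the expected count is 1/p = 3/2.

def rolls_until_success(rng):
    n = 0
    while True:
        n += 1
        if rng.randint(1, 6) < 5:   # 1..4 are "winners": p = 4/6 = 2/3
            return n

rng = random.Random(0)              # fixed seed for reproducibility
trials = 100_000
mean = sum(rolls_until_success(rng) for _ in range(trials)) / trials
print(mean)                         # close to 1.5
```

Whichever method occurs to you — summing the series, linearity of expectation, or brute-force simulation — the answer comes out the same, which is rather the point of the post.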
Cherry-picking? [Study Assessment]

» 3. The within-subject standard deviation of test and reference products will be compared, and the upper limit of the 90% confidence interval for the test-to-reference ratio of the within-subject variability should be ≤ 2.5.
» […] 90% confidence interval for the test-to-reference ratio of the within-subject variability ≤ 2.5 were not meet the criteria for all PK variables (Cmax, AUCt and inf).

Failed to demonstrate BE due to the higher within-subject variability of the test product. Full stop.

» Exercises, Observations and Analysis:

What do you mean by „Exercises”? Since the study failed are you asking for a recipe to cherry-pick?

» 1. We have taken subjects who have completed at least 2R or 2T in Reference Scaled Average Bio equivalence calculation (existing study).

That’s my interpretation as well. Although only the calculation of $s_{WR}$ is given in Step 1 of the guidance, by analogy the same procedure should be applicable for $s_{WT}$.

» 2. We have done the exercise who have completed all four treatments and did the statistical calculation- still failing on the same criteria marginally.

Leaving cherry-picking aside: By doing so, you drop available information. One should always use all. The more data you have, the more accurate/precise an estimate will be. Have a look at the formula to calculate the 100(1–α) CI of $\sigma_{WT}/\sigma_{WR}$:$$\left(\frac{s_{WT}/s_{WR}}{\sqrt{F_{\alpha/2,\nu_1,\nu_2}}},\frac{s_{WT}/s_{WR}}{\sqrt{F_{1-\alpha/2,\nu_1,\nu_2}}}\right)$$We have two different degrees of freedom ($\nu_1$, $\nu_2$), the first associated with $s_{WT}$ and the second with $s_{WR}$.

» 3. It was observed that if the “SWT” value should be closed to SWR value or lower, then 90% CI for the test-to-reference ratio of the within-subject variability ≤ 2.5 will meet the criteria.

Of course.

» 1. Which Reference Scaled Average Bioequivalence approach is acceptable in regulatory?

Yes.
» Approach 2: Subjects who completed four period will be consider for SWR & SWT calculation.

No.

» or both.

Which one will you pick at the end if one passes and the other one fails? The passing one, right? The FDA will love that. Be aware that the FDA recalculates every study. BTW, how would you describe that in the SAP?

» 2. which are the factors adding variability to SWT?

That’s product-related. The idea behind the FDA’s reference-scaling for NTIDs is not only to narrow the limits but also to prevent products with higher variability than the reference’s from entering the market.

» 3. Whether same formulation can be taken for the repeat bio-study with some clinical restrictions? If yes then what are the clinical factor to be considered?

As I wrote above, the failure to show BE was product-related. If you introduce clinical restrictions* in order to reduce within-subject variability – due to randomization – both products will be affected in the same way and $s_{WT}/s_{WR}$ will be essentially the same as in the failed study. Reformulate.

PS: I changed the category of your post yesterday and you changed it back today. Wrong. Don’t test my patience – your problems are definitely study-specific (see the Policy for a description of categories).

* Which ones are you thinking about? I don’t see how it could be possible to reduce within-subject variability by any means. Given, chaining volunteers to their beds might help.

Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes

Complete thread: NTI drug Bioequivalence study Statistical approach – Sukalpa Biswas 2019-08-06 09:15 [Study Assessment]
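As an aside, once the two F critical values are looked up (from a table or statistical software, using the degrees of freedom discussed in the thread), the quoted CI formula is plain arithmetic. A sketch follows; the standard deviations and F values below are invented for illustration, not taken from any guidance:

```python
import math

# 90% CI for sigma_WT / sigma_WR from the point estimate s_WT / s_WR and the
# two F critical values F_{alpha/2, nu1, nu2} and F_{1-alpha/2, nu1, nu2}.
# All numeric inputs in the demo are made up.

def ci_ratio(s_wt, s_wr, f_upper_crit, f_lower_crit):
    ratio = s_wt / s_wr
    return ratio / math.sqrt(f_upper_crit), ratio / math.sqrt(f_lower_crit)

lower, upper = ci_ratio(1.2, 1.0, 4.0, 0.25)   # point estimate 1.2
print(lower, upper)   # 0.6 2.4 -> this example passes, since upper <= 2.5
```

The regulatory criterion in the thread looks only at the upper limit: the comparison passes when it is ≤ 2.5.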
In geometry, you come across different types of figures with properties that set them apart from one another. One common figure among them is a triangle. A triangle is a closed figure, a polygon, with three sides. It has 3 vertices and its 3 sides enclose 3 interior angles of the triangle. The sum of the three interior angles in a triangle is always 180 degrees. The most common types of triangles that we study are equilateral, isosceles, scalene and right angled triangles. In this section, we will talk about the right angled triangle, also called right triangle, and the formulas associated with it.

A right triangle is one in which the measure of any one of the interior angles is 90 degrees. It is to be noted here that since the sum of interior angles in a triangle is 180 degrees, only 1 of the 3 angles can be a right angle. If the other two angles are equal, that is 45 degrees each, the triangle is called an isosceles right angled triangle. However, if the other two angles are unequal, it is a scalene right angled triangle. The most common application of right angled triangles can be found in trigonometry. In fact, the relation between its angles and sides forms the basis for trigonometry.

Formulas

\(\text{Area of a right triangle} = \frac{1}{2} bh\)

where b and h refer to the base and height of the triangle respectively.

\(\text{Perimeter of a right triangle} = a+b+c\)

where a, b and c are the measures of its three sides.

Pythagoras Theorem defines the relationship between the three sides of a right angled triangle. Thus, if the measure of two of the three sides of a right triangle is given, we can use the Pythagoras Theorem to find the third side.

\(\text{Hypotenuse}^{2} = \text{Perpendicular}^{2} + \text{Base}^{2}\)

In the figure given above, ∆ABC is a right angled triangle which is right angled at B. The side opposite to the right angle, that is the longest side, is called the hypotenuse of the triangle. In ∆ABC, AC is the hypotenuse. Angles A and C are the acute angles.
We name the other two sides (apart from the hypotenuse) as the ‘base’ or ‘perpendicular’ depending on which of the two angles we take as the basis for working with the triangle.

Derivation: Consider a right angled triangle ABC which has B as 90 degrees and AC as the hypotenuse. Now rotate the triangle 180 degrees about the midpoint of its hypotenuse, so that a rectangle ABCD with width h and length b is formed. You already know that the area of a rectangle is given as the product of its length and width. Hence, area of the rectangle ABCD = b × h.

As you can see, the area of the right angled triangle ABC is nothing but one-half of the area of the rectangle ABCD. Thus,

\(\text{Area of } \Delta ABC = \frac{1}{2} \text{ Area of rectangle } ABCD\)

Hence, the area of a right angled triangle, given its base b and height h, is \(\frac{1}{2} bh\).

Solved Examples:

Question 1: The length of two sides of a right angled triangle is 5 cm and 8 cm. Find:

- Length of its hypotenuse
- Perimeter of the triangle
- Area of the triangle

Solution: Given,

One side a = 5 cm
Other side b = 8 cm

Using Pythagoras theorem, the length of the hypotenuse is

\(Hypotenuse^{2} = Perpendicular^{2} + Base^{2}\)

\(c^{2} = a^{2} + b^{2}\)

\(c^{2} = 5^{2} + 8^{2}\)

\(c = \sqrt{25+64} = \sqrt{89} = 9.43 \text{ cm}\)

Perimeter of the right triangle = a + b + c = 5 + 8 + 9.43 = 22.43 cm

\(\text{Area of a right triangle} = \frac{1}{2} bh\)

Here, area of the right triangle = \(\frac{1}{2} (8 \times 5) = 20 \text{ cm}^{2}\)

Question 2: The perimeter of a right angled triangle is 32 cm. Its height and hypotenuse measure 10 cm and 13 cm respectively. Find its area.

Solution: Given,

Perimeter = 32 cm
Hypotenuse a = 13 cm
Height b = 10 cm
Third side, c = ?

We know that perimeter = a + b + c

32 cm = 13 + 10 + c

Therefore, c = 32 – 23 = 9 cm

\(\text{Area} = \frac{1}{2} bh = \frac{1}{2} (9 \times 10) = 45 \text{ cm}^{2}\)

To solve more problems on the topic and for video lessons, download Byju’s – The Learning App.
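The calculations in Question 1 can be reproduced in a few lines; this sketch uses the same leg lengths as the example above:

```python
import math

# Question 1: legs a = 5 cm and b = 8 cm of a right triangle.
a, b = 5.0, 8.0
c = math.hypot(a, b)          # Pythagoras: c = sqrt(a^2 + b^2)
perimeter = a + b + c
area = 0.5 * a * b            # half of the b-by-h rectangle, as derived above

print(round(c, 2))            # 9.43
print(round(perimeter, 2))    # 22.43
print(area)                   # 20.0
```

`math.hypot` computes the hypotenuse directly and avoids writing out the squares and square root by hand.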
I have a problem understanding how phasors work and I'll use a problem from a recent exam to illustrate the misunderstanding. Note: Underlined variables such as \$\underline{V}\$ are considered here to be complex numbers, while non-underlined variables are the magnitudes of the complex numbers. Here's the text:

We have a voltage generator \$E=2\sqrt{7} \mbox{ } V\$ with angular frequency \$\omega=10^6 \mbox{ } s^{-1}\$ and internal resistance \$R_g=0.5\sqrt{3} \mbox{ } k\Omega\$ connected to a parallel connection of impedance \$Z\$ and coil \$L\$. The current is \$I=I_1=I_2=4 \mbox{ } mA\$. Calculate the complex value of \$\underline{Z}\$ and the inductance of \$L\$.

Here's the picture of the circuit:

So from phasor theory I know that phasors represent complex numbers in the form \$\underline{I}=I_e e^{j\psi_0}\$, where \$I_e\$ is the effective value of the current and \$\psi_0\$ is the initial phase of the current. So on the phasor diagram, the line would be of length \$I_e\$ and at an angle \$\psi_0\$ to the phase axis. Next, I know that on a line which only has a resistor the voltage and current are in phase, and on the phasor diagram they would be on the same line. If we have a line which has an inductor, the current will lag the voltage by \$\frac{\pi}{2}\$. If we have a capacitor, the current will lead the voltage by \$\frac{\pi}{2}\$. Next, on a phasor diagram the currents in a circuit should form a closed figure. If we don't have any starting phases in a circuit, we can set the phase of one element to zero, proclaim it the reference phase and calculate the rest of the phases with respect to it.

So according to my reasoning, I can set the phase of the generator to zero and get \$\underline{E}=2\sqrt{7} e^{j0}\mbox{ } V\$, and now I have the complex voltage. Next, I know the effective current \$I\$ and the resistance \$R_g\$, so I can calculate the voltage drop across the resistor and this way get the voltage which the impedance \$Z\$ and the inductance \$L\$ see.
So \$R_gI=2\sqrt{3}\mbox{ }V\$ and the impedance and the inductance see the voltage \$U_1=\sqrt{7} \mbox{ }V\$. Next, since the resistor is in this case ideal, \$U_1\$ is in phase with \$E\$. Next, I know that the current \$I_1\$ can be obtained from the following equation:

\$\underline{I}_1=\frac{\underline{U_1}}{\underline{Z}}=\frac{U_1 e^{j0}}{Ze^{j\phi}}=\frac{U_1}{Z}e^{-j\phi}\$

From this I can get \$Z\$. Next, I know that the impedance \$\underline{Z_l}\$ of the line on which the inductor sits is \$\underline{Z_l}=j\omega L\$, and I know that

\$\underline{I_2}=\frac{\underline{U_1}}{\underline{Z_l}}=\frac{U_1 e^{j0}}{\omega L e^{j\pi/2}}=\frac{U_1}{\omega L} e^{-j\pi/2}\$

From this I can get \$L\$. For the currents \$I_1\$ and \$I_2\$ to have the same effective value, \$\phi\$ needs to be \$\frac{\pi}{2}\$ and the impedance \$Z\$ needs to be mostly capacitive. In this case the current \$I\$ needs to be in phase with the voltage \$E\$, but it isn't, because if it were, the effective value would be determined by the resistance \$R_g\$ and it would be different. So since the two currents coming out of the current \$I\$ have the same effective value, I concluded that the three currents need to form a triangle with sides of the same length, like in this picture:

In that case the angle of one of the currents needs to be \$\frac{\pi}{3}\$ and of the other current it needs to be \$-\frac{\pi}{3}\$. In that case the current \$\underline{I}\$ is in phase with the voltage \$\underline{U_1}\$, \$\underline{I_2}\$ lags by \$\frac{\pi}{3}\$, while \$\underline{I_1}\$ is moved forward by \$\frac{\pi}{3}\$. However, in that case the current \$\underline{I_2}\$ isn't lagging by \$\frac{\pi}{2}\$, which it should, because the only component on its branch is an inductor. So my problem is that I have a bunch of rules and I can't determine when they should be applied, as I have shown.
Can anyone explain to me where my reasoning is wrong? For example, in the inductor branch, in which cases will the current lag behind the voltage by \$\frac{\pi}{2}\$ and in which by some other amount? When can I rely on the voltage and current on a purely resistive branch being in phase? According to Kirchhoff's current law, the sum of currents in a node should be zero, so on a phasor diagram the currents for that node should form a closed figure, but in this case they don't. In my books this isn't clearly explained and most problems in our problem collections don't have detailed solutions.

Correct solutions for this specific problem are \$\underline{Z}=250(\sqrt{3}-j) \mbox{ } \Omega\$ and \$L=0.5 \mbox{ } mH\$.
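One way to keep such rules straight is to let complex arithmetic do the bookkeeping: the phase relationships are not separate rules but consequences of \$\underline{I}=\underline{U}/\underline{Z}\$, and the \$\pi/2\$ lag of an inductor current is always relative to the voltage across that same branch, not to the source EMF. A sketch with Python's built-in complex numbers (the values are illustrative, not the exam's):

```python
import cmath

# Phasors as complex numbers: I = U / Z. For an ideal inductor Z = j*omega*L,
# so the branch current lags the voltage *across that branch* by pi/2.
omega, L = 1.0e6, 0.5e-3
U1 = 1.0 + 0j                 # voltage across the branch, reference phase 0
Z_L = 1j * omega * L          # inductor impedance, argument +pi/2
I2 = U1 / Z_L

lag = cmath.phase(U1) - cmath.phase(I2)
print(lag)   # pi/2 ~ 1.5708, regardless of the rest of the circuit
```

If the branch voltage itself is shifted relative to the source (as when the generator's internal resistance carries a phase-shifted total current), the inductor current shifts with it, but the 90-degree gap to its own branch voltage is preserved.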
Let $f\in L^2(\mathbb{R}^d)$ be such that $\nabla f$ (in the distributional sense) coincides with an $L^2$-function outside of $0$. In which dimensions $d$ do we automatically have $\nabla f\in (L^2(\mathbb{R}^d))^d$? For $d=1$ the statement is false, e.g. the Heaviside function provides a counterexample. For $d\ge3$, I can prove the statement: we have equality of $\nabla f$ with some $L^2$-function when tested against test functions supported away from $0$. In order to extend this result to arbitrary test functions, I want to use a family of cutoff functions $\phi_\epsilon\colon \mathbb{R}^d\rightarrow [0,1]$ with support in $B(0,\epsilon)$. The crucial point that would allow passing to the limit $\epsilon \rightarrow 0$ is that $$ \nabla\phi_\epsilon \rightarrow 0 \quad \text{weakly in } L^2. $$ For that I considered $\psi(t)=\exp(-t^2/(1-t^2))$ and $\phi_\epsilon(x)=\psi(\vert x \vert/\epsilon)$, which is the standard way of producing bumps. Then $$ \vert \nabla\phi_\epsilon(x)\vert \lesssim 1/\epsilon, \quad \text{supp}(\nabla \phi_\epsilon)\subset B(0,\epsilon), $$ hence for any $L^2$-function $g$ we have $$ \Big\vert\int g\, \nabla \phi_\epsilon\Big\vert \lesssim \frac{1}{\epsilon}\int_{B(0,\epsilon)} \vert g\vert \le \frac{1}{\epsilon} \Vert g\Vert_{L^2}\cdot \vert B(0,\epsilon)\vert^{1/2} = O(\epsilon^{d/2 -1}), $$ thus the result follows if $d \ge 3$. Question: Is the result true for $d=2$, or does there exist a counterexample?
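The scaling driving this argument, $\Vert\nabla\phi_\epsilon\Vert_{L^2}\asymp \epsilon^{d/2-1}$, can be sanity-checked numerically. The sketch below is purely illustrative (a crude midpoint rule, not part of any proof): it integrates $\vert\nabla\phi_\epsilon\vert^2$ radially (up to the constant surface area of $S^{d-1}$) for the same $\psi$ as above, and confirms that halving $\epsilon$ scales the squared norm by $2^{-(d-2)}$, so the norm decays for $d=3$ but stays of order one for $d=2$, which is exactly why $d=2$ is the borderline case:

```python
import math

def psi_prime(t):
    # derivative of psi(t) = exp(-t^2 / (1 - t^2)) on [0, 1), zero beyond
    if t >= 1.0:
        return 0.0
    u = 1.0 - t * t
    return -2.0 * t / (u * u) * math.exp(-t * t / u)

def grad_norm_sq(eps, d, n=20000):
    # ||grad phi_eps||_{L^2}^2 up to the surface-area constant of S^{d-1}:
    # integral over r in (0, eps) of (psi'(r/eps)/eps)^2 * r^(d-1) dr
    h = eps / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * h  # midpoint rule
        total += (psi_prime(r / eps) / eps) ** 2 * r ** (d - 1) * h
    return total

# halving eps scales the squared norm by 2^-(d-2)
for d in (2, 3):
    ratio = grad_norm_sq(0.5, d) / grad_norm_sq(1.0, d)
    print(d, ratio)  # d=2: ~1 (no decay); d=3: ~0.5
```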
I know that a constant current doesn't create an induced EMF, but I do not know why a changing one does. Is there a brief/easy way to understand it, or is it just the way it is? You can understand this through Faraday's law: $$ \varepsilon = - \frac{\Delta \Phi_B}{\Delta t}$$ Here $\varepsilon$ is the EMF, and $\Phi_B$ is the magnetic flux, which is the product of the magnetic field $\bf B$ and the area $\bf A$ crossed by $\bf B$; $t$ is time, and $\Delta$ represents a change. So the equation says that an EMF is produced by magnetic flux that changes over time. Furthermore, in the special case you mentioned, $\bf B$ is produced by the electric current. So, assuming there's no change in $\bf A$, the only way to produce $\varepsilon$, i.e. the EMF, is to have a $\bf B$ that changes over time, and in order to get a $\bf B$ that changes over time you need an electric current that changes over time too. So that's how it's explained!
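The same point can be sketched numerically with the self-inductance form $\varepsilon = -L\,\frac{\Delta I}{\Delta t}$: a linearly rising current gives a constant nonzero EMF, while a constant current gives zero. The inductance value and the current waveforms below are made-up numbers, chosen only for illustration:

```python
def emf(current, t, L=0.2, dt=1e-6):
    """Induced EMF, epsilon = -L * dI/dt, with dI/dt taken numerically.
    L = 0.2 H is an assumed, illustrative inductance."""
    return -L * (current(t + dt) - current(t)) / dt

ramp = lambda t: 3.0 * t   # current rising at 3 A/s
steady = lambda t: 3.0     # constant 3 A

print(emf(ramp, 0.5))      # about -0.6 V: a changing current induces an EMF
print(emf(steady, 0.5))    # 0.0: a steady current induces none
```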
Now showing items 1-10 of 21 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment at the Large Hadron Collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV (American Physical Society, 2015-06) We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...