Using the addition formula for the sine function I have managed to reduce this to a simpler form: $$\sum \frac{\cos \frac{2n\pi }{3}}{2^{n}}$$ It is obvious here that it passes the $n$-th term convergence test. But what next? I have applied Cauchy's root test, this is the result: $$\lim_{n\rightarrow \infty }\sqrt[n]{\left|\frac{\cos \frac{2n\pi }{3}}{2^{n}}\right|}$$ Since the numerator is bounded, I have gotten that the limit is $\frac{1}{2}$, which in turn means that the series is convergent. Is my reasoning behind this correct?

Going your way: $$\limsup_{n\to\infty}\sqrt[n]{\left|\frac{\cos\frac{2n\pi}3}{2^n}\right|}=\lim_{n\to\infty}\frac{\sqrt[n]1}2=\frac12<1,$$ and the series converges absolutely and thus converges.

Hint: Use the comparison with the geometric series $$ \frac{\left|\sin \frac{\left( 3-4n \right)\pi }{6}\right| }{2^{n}}\le\left(\frac12\right)^n.$$
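As a numerical sanity check, the partial sums can be compared against the closed form of the underlying geometric series (a quick sketch, assuming the sum starts at $n=1$):

```python
import cmath
import math

# cos(2*n*pi/3)/2^n is the real part of z^n with z = exp(2i*pi/3)/2,
# and |z| = 1/2 < 1, so the series converges absolutely.
z = cmath.exp(2j * math.pi / 3) / 2
closed_form = (z / (1 - z)).real          # sum of z^n for n = 1, 2, 3, ...

partial = sum(math.cos(2 * n * math.pi / 3) / 2 ** n for n in range(1, 60))
print(partial, closed_form)
```

The partial sums settle quickly, as expected from the geometric bound $(1/2)^n$ in the hint.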
LHCb Collaboration; Aaij, R; Adeva, B; Adinolfi, M; Anderson, J; Bernet, R; Bowen, E; Bursche, A; Chiapolini, N; Chrzaszcz, M; Dey, B; Elsasser, C; Graverini, E; Lionetto, F; Lowdon, P; Mauri, A; Müller, K; Serra, N; Steinkamp, O; Storaci, B; Straumann, U; Tresch, M; Vollhardt, A; et al. (2015). Search for the decay $B_s^0 \to \overline{D}^{0} f_{0}(980)$. Journal of High Energy Physics, 2015(8):5. Abstract A search for $B_s^0 \to \overline{D}^{0} f_{0}(980)$ decays is performed using $3.0\, {\rm fb}^{-1}$ of $pp$ collision data recorded by the LHCb experiment during 2011 and 2012. The $f_{0}(980)$ meson is reconstructed through its decay to the $\pi^{+}\pi^{-}$ final state in the mass window $900\, {\rm MeV}/c^{2} < m(\pi^{+}\pi^{-}) < 1080\, {\rm MeV}/c^{2}$. No significant signal is observed. The first upper limits on the branching fraction of $\mathcal{B}(B_s^0 \to \overline{D}^{0} f_{0}(980)) < 3.1\,(3.4) \times 10^{-6}$ are set at $90\,\%$ ($95\,\%$) confidence level.
Dividing the numerator and denominator of the integrand by $\gamma + \kappa$ gives$$\int_{0}^{x}\frac{a(e^{\gamma u}-1)du}{e^{\gamma u}-1+a\gamma}$$where $a=\frac2{\gamma + \kappa}.$ Breaking this into $2$ integrals,$$=\frac a\gamma\int_{0}^x\frac{\gamma e^{\gamma u}du}{e^{\gamma u}-1+a\gamma}-a\int_0^x\frac{du}{e^{\gamma u}-1+a\gamma}$$ For the first, substitute $e^{\gamma u}-1=t$ and you're left with$$\frac{a}{\gamma}\int \frac{dt}{t+a\gamma}$$Do the same substitution for the second one (so $du=\frac{dt}{\gamma(t+1)}$), and the integral being subtracted becomes$$\frac{a}{\gamma}\int \frac{dt}{(t+1)(t+a\gamma)}$$The first one is now standard, and the second can be done by partial fractions. Edit: For the second integral, partial fractions isn't mandatory. The second integral can be written as$$\frac{a}{\gamma(a\gamma-1)}\int \frac{(t+a\gamma)-(t+1)}{(t+1)(t+a\gamma)}\,dt$$$$=\frac{a}{\gamma(a\gamma-1)}\int \frac{dt}{t+1}-\frac{a}{\gamma(a\gamma-1)}\int \frac{dt}{t+a\gamma}$$Both of which are standard.
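The substitution steps can be verified numerically (a sketch with arbitrary illustrative values for $\gamma$, $\kappa$, $x$, and a hand-rolled Simpson rule to keep it dependency-free):

```python
import math

# Illustrative (hypothetical) parameter values; the identity holds for any
# gamma, kappa > 0.
gamma, kappa, x = 0.5, 1.3, 2.0
a = 2 / (gamma + kappa)

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# Original integral in u
orig = simpson(lambda u: a * (math.exp(gamma * u) - 1)
                         / (math.exp(gamma * u) - 1 + a * gamma), 0, x)

# After t = exp(gamma*u) - 1 (so dt = gamma*(t+1)*du), split into two parts
T = math.exp(gamma * x) - 1
part1 = simpson(lambda t: (a / gamma) / (t + a * gamma), 0, T)
part2 = simpson(lambda t: (a / gamma) / ((t + 1) * (t + a * gamma)), 0, T)
print(orig, part1 - part2)
```

The two printed values agree, confirming the change of variables and the split into the two standard integrals.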
I would like to show $\lim_{n \rightarrow \infty}\left(\frac{n - 1}{n}\right)^n = 1/e$. I know the argument typically goes like this: Let $y = \left(\frac{n - 1}{n}\right)^n$. Then $\ln(y) = n\ln \left(\frac{n - 1}{n}\right)$. Taking the limit as $n \rightarrow \infty$, we have an indeterminate product of the form $\infty\cdot0$. I think ideally I would like to use L'Hôpital's Rule, so the issue is getting this into the correct form to apply it. I don't think the simplification $n\ln(n - 1) - n\ln(n)$ helps any. But if we can establish that $\lim_{n\rightarrow\infty}\ln(y) = -1$, then using the identity $y = e^{\ln(y)}$, we'd arrive at the desired result. Alternatively, could one use the definition of $e$? This might not help, but $e = \lim_{n \rightarrow \infty}(1 + \frac{1}{n})^n = \lim_{n \rightarrow \infty}(\frac{n + 1}{n})^n$, which looks similar to what we have, but not quite.
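Not a proof, but a quick numerical check that both $y\to 1/e$ and $\ln y\to-1$:

```python
import math

# (1 - 1/n)^n should approach 1/e, and n*ln((n-1)/n) should approach -1
for n in (10, 1000, 10 ** 6):
    y = (1 - 1 / n) ** n
    log_y = n * math.log((n - 1) / n)
    print(n, y, log_y)
```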
7.3 Holt-Winters’ seasonal method Holt (1957) and Winters (1960) extended Holt’s method to capture seasonality. The Holt-Winters seasonal method comprises the forecast equation and three smoothing equations — one for the level \(\ell_t\), one for the trend \(b_t\), and one for the seasonal component \(s_t\), with corresponding smoothing parameters \(\alpha\), \(\beta^*\) and \(\gamma\). We use \(m\) to denote the frequency of the seasonality, i.e., the number of seasons in a year. For example, for quarterly data \(m=4\), and for monthly data \(m=12\). There are two variations to this method that differ in the nature of the seasonal component. The additive method is preferred when the seasonal variations are roughly constant through the series, while the multiplicative method is preferred when the seasonal variations are changing proportionally to the level of the series. With the additive method, the seasonal component is expressed in absolute terms in the scale of the observed series, and in the level equation the series is seasonally adjusted by subtracting the seasonal component. Within each year, the seasonal component will add up to approximately zero. With the multiplicative method, the seasonal component is expressed in relative terms (percentages), and the series is seasonally adjusted by dividing through by the seasonal component. Within each year, the seasonal component will sum up to approximately \(m\). Holt-Winters’ additive method The component form for the additive method is: \[\begin{align*} \hat{y}_{t+h|t} &= \ell_{t} + hb_{t} + s_{t+h-m(k+1)} \\ \ell_{t} &= \alpha(y_{t} - s_{t-m}) + (1 - \alpha)(\ell_{t-1} + b_{t-1})\\ b_{t} &= \beta^*(\ell_{t} - \ell_{t-1}) + (1 - \beta^*)b_{t-1}\\ s_{t} &= \gamma (y_{t}-\ell_{t-1}-b_{t-1}) + (1-\gamma)s_{t-m}, \end{align*}\] where \(k\) is the integer part of \((h-1)/m\), which ensures that the estimates of the seasonal indices used for forecasting come from the final year of the sample. 
The level equation shows a weighted average between the seasonally adjusted observation \((y_{t} - s_{t-m})\) and the non-seasonal forecast \((\ell_{t-1}+b_{t-1})\) for time \(t\). The trend equation is identical to Holt’s linear method. The seasonal equation shows a weighted average between the current seasonal index, \((y_{t}-\ell_{t-1}-b_{t-1})\), and the seasonal index of the same season last year (i.e., \(m\) time periods ago). The equation for the seasonal component is often expressed as \[ s_{t} = \gamma^* (y_{t}-\ell_{t})+ (1-\gamma^*)s_{t-m}. \] If we substitute \(\ell_t\) from the smoothing equation for the level of the component form above, we get \[ s_{t} = \gamma^*(1-\alpha) (y_{t}-\ell_{t-1}-b_{t-1})+ [1-\gamma^*(1-\alpha)]s_{t-m}, \] which is identical to the smoothing equation for the seasonal component we specify here, with \(\gamma=\gamma^*(1-\alpha)\). The usual parameter restriction is \(0\le\gamma^*\le1\), which translates to \(0\le\gamma\le 1-\alpha\). Holt-Winters’ multiplicative method The component form for the multiplicative method is: \[\begin{align*} \hat{y}_{t+h|t} &= (\ell_{t} + hb_{t})s_{t+h-m(k+1)} \\ \ell_{t} &= \alpha \frac{y_{t}}{s_{t-m}} + (1 - \alpha)(\ell_{t-1} + b_{t-1})\\ b_{t} &= \beta^*(\ell_{t}-\ell_{t-1}) + (1 - \beta^*)b_{t-1} \\ s_{t} &= \gamma \frac{y_{t}}{(\ell_{t-1} + b_{t-1})} + (1 - \gamma)s_{t-m} \end{align*}\] Example: International tourist visitor nights in Australia We apply Holt-Winters’ method with both additive and multiplicative seasonality to forecast quarterly visitor nights in Australia spent by international tourists. Figure 7.6 shows the data from 2005, and the forecasts for 2016–2017. The data show an obvious seasonal pattern, with peaks observed in the March quarter of each year, corresponding to the Australian summer. 
aust <- window(austourists, start=2005)
fit1 <- hw(aust, seasonal="additive")
fit2 <- hw(aust, seasonal="multiplicative")
autoplot(aust) +
  autolayer(fit1, series="HW additive forecasts", PI=FALSE) +
  autolayer(fit2, series="HW multiplicative forecasts", PI=FALSE) +
  xlab("Year") +
  ylab("Visitor nights (millions)") +
  ggtitle("International visitors nights in Australia") +
  guides(colour=guide_legend(title="Forecast"))

Table 7.3: Applying Holt-Winters’ method with additive seasonality.

            \(t\)   \(y_t\)   \(\ell_t\)   \(b_t\)   \(s_t\)   \(\hat{y}_t\)
2004 Q1     -3                                        9.70
2004 Q2     -2                                       -9.31
2004 Q3     -1                                       -1.69
2004 Q4      0              32.26        0.70         1.31
2005 Q1      1    42.21     32.82        0.70         9.50     42.66
2005 Q2      2    24.65     33.66        0.70        -9.13     24.21
2005 Q3      3    32.67     34.36        0.70        -1.69     32.67
2005 Q4      4    37.26     35.33        0.70         1.69     36.37
⋮
2015 Q1     41    73.26     59.96        0.70        12.18     69.05
2015 Q2     42    47.70     60.69        0.70       -13.02     47.59
2015 Q3     43    61.10     61.96        0.70        -1.35     59.24
2015 Q4     44    66.06     63.22        0.70         2.35     64.22

Forecasts:
            \(h\)   \(\hat{y}_{T+h|T}\)
2016 Q1      1       76.10
2016 Q2      2       51.60
2016 Q3      3       63.97
2016 Q4      4       68.37
2017 Q1      5       78.90
2017 Q2      6       54.41
2017 Q3      7       66.77
2017 Q4      8       71.18

Table 7.4: Applying Holt-Winters’ method with multiplicative seasonality.

            \(t\)   \(y_t\)   \(\ell_t\)   \(b_t\)   \(s_t\)   \(\hat{y}_t\)
2004 Q1     -3                                        1.24
2004 Q2     -2                                        0.77
2004 Q3     -1                                        0.96
2004 Q4      0              32.49        0.70         1.02
2005 Q1      1    42.21     33.51        0.71         1.24     41.29
2005 Q2      2    24.65     33.24        0.68         0.77     26.36
2005 Q3      3    32.67     33.94        0.68         0.96     32.62
2005 Q4      4    37.26     35.40        0.70         1.02     35.44
⋮
2015 Q1     41    73.26     58.57        0.66         1.24     72.59
2015 Q2     42    47.70     60.42        0.69         0.77     45.62
2015 Q3     43    61.10     62.17        0.72         0.96     58.77
2015 Q4     44    66.06     63.62        0.75         1.02     64.38

Forecasts:
            \(h\)   \(\hat{y}_{T+h|T}\)
2016 Q1      1       80.09
2016 Q2      2       50.15
2016 Q3      3       63.34
2016 Q4      4       68.18
2017 Q1      5       83.80
2017 Q2      6       52.45
2017 Q3      7       66.21
2017 Q4      8       71.23

The applications of both methods (with additive and multiplicative seasonality) are presented in Tables 7.3 and 7.4 respectively. Because both methods have exactly the same number of parameters to estimate, we can compare the training RMSE from both models. In this case, the method with multiplicative seasonality fits the data best. 
This was to be expected, as the time plot shows that the seasonal variation in the data increases as the level of the series increases. This is also reflected in the two sets of forecasts; the forecasts generated by the method with the multiplicative seasonality display larger and increasing seasonal variation as the level of the forecasts increases compared to the forecasts generated by the method with additive seasonality. The estimated states for both models are plotted in Figure 7.7. The small value of \(\gamma\) for the multiplicative model means that the seasonal component hardly changes over time. The small value of \(\beta^{*}\) for the additive model means the slope component hardly changes over time (check the vertical scale). The increasing size of the seasonal component for the additive model suggests that the model is less appropriate than the multiplicative model. Holt-Winters’ damped method Damping is possible with both additive and multiplicative Holt-Winters’ methods. A method that often provides accurate and robust forecasts for seasonal data is the Holt-Winters method with a damped trend and multiplicative seasonality: \[\begin{align*} \hat{y}_{t+h|t} &= \left[\ell_{t} + (\phi+\phi^2 + \dots + \phi^{h})b_{t}\right]s_{t+h-m(k+1)} \\ \ell_{t} &= \alpha(y_{t} / s_{t-m}) + (1 - \alpha)(\ell_{t-1} + \phi b_{t-1})\\ b_{t} &= \beta^*(\ell_{t} - \ell_{t-1}) + (1 - \beta^*)\phi b_{t-1} \\ s_{t} &= \gamma \frac{y_{t}}{(\ell_{t-1} + \phi b_{t-1})} + (1 - \gamma)s_{t-m}. \end{align*}\] Example: Holt-Winters method with daily data The Holt-Winters method can also be used with daily data, where the seasonal period is \(m=7\), and the appropriate unit of time for \(h\) is in days. Here, we generate daily forecasts for the last five weeks of the hyndsight data, which contains the daily pageviews on the Hyndsight blog for one year starting April 30, 2014. 
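The damped multiplicative recursions above can be sketched in a few lines of Python (a minimal illustration, not the R forecast package used in this chapter; the crude initialization and the parameter values in the example are arbitrary, not fitted):

```python
import math

def hw_damped_multiplicative(y, m, alpha, beta, gamma, phi, h):
    """Damped-trend, multiplicative-seasonality Holt-Winters forecasts.

    y: observations, m: seasonal period, h: forecast horizon.
    Crude initialization: level = mean of the first season, trend = 0,
    seasonal indices = first-season values divided by that level.
    """
    level = sum(y[:m]) / m
    trend = 0.0
    season = [v / level for v in y[:m]]
    for t in range(m, len(y)):
        s_tm = season[t % m]                       # s_{t-m}
        new_level = alpha * (y[t] / s_tm) + (1 - alpha) * (level + phi * trend)
        new_trend = beta * (new_level - level) + (1 - beta) * phi * trend
        season[t % m] = gamma * (y[t] / (level + phi * trend)) + (1 - gamma) * s_tm
        level, trend = new_level, new_trend
    forecasts, damp = [], 0.0
    for j in range(1, h + 1):
        damp += phi ** j                           # phi + phi^2 + ... + phi^j
        forecasts.append((level + damp * trend) * season[(len(y) + j - 1) % m])
    return forecasts

# Hypothetical daily series with weekly seasonality (m = 7) and a mild trend
y = [(10 + 0.05 * t) * (1 + 0.3 * math.sin(2 * math.pi * t / 7)) for t in range(70)]
fc = hw_damped_multiplicative(y, m=7, alpha=0.3, beta=0.1, gamma=0.1, phi=0.98, h=14)
```

Each loop iteration applies the four component equations in order; the forecast loop accumulates the damped trend sum and picks the most recently updated seasonal index for each future period.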
Clearly the model has identified the weekly seasonal pattern and the increasing trend at the end of the data, and the forecasts are a close match to the test data. Bibliography Holt, C. E. (1957). Forecasting seasonals and trends by exponentially weighted averages (O.N.R. Memorandum No. 52). Carnegie Institute of Technology, Pittsburgh USA. https://doi.org/10.1016/j.ijforecast.2003.09.015 Winters, P. R. (1960). Forecasting sales by exponentially weighted moving averages. Management Science, 6, 324–342. https://doi.org/10.1287/mnsc.6.3.324
Unlike, say, Leo Corry’s similarly titled A Brief History of Numbers, this book is not intended as a historical account of how various kinds of numbers came to be discovered and accepted. Although it does have some historical content, this book is primarily intended as a collection of vignettes and anecdotes about various kinds of numbers. Its style is informal, and prerequisites are minimal; a good understanding of high school mathematics will take a reader through most of the book, though on some occasions limits, infinite series, derivatives and integrals are mentioned. Indeed, it says something about the general tone of this book that the library at Iowa State University shelves it in its “leisure” collection, rather than with the mathematics books. In keeping with the elementary nature of this book, mathematical subtleties are generally ignored, and proofs are often omitted; if you want to see, for example, a development of the real numbers by Dedekind cuts, you’ll need to look elsewhere. But there are a lot of stories told here, as the author starts with the integers and works his way up through real and complex numbers, and then to some more esoteric kinds of numbers, including hyperreal numbers, quaternions, and p-adic numbers. After an introductory chapter, there is a fairly long chapter on the integers. The material in this chapter ranges from amusing little arithmetic facts such as \(\begin{align*}1 + 2 &= 3,\\4 + 5 + 6 &= 7 + 8,\\9 + 10 + 11 + 12 &= 13 + 14 + 15,\\16 + 17 + 18 + 19 + 20 &= 21 + 22 + 23 + 24,\end{align*}\) and so on, to discussions of various kinds of prime numbers (Fermat, Mersenne, Double Mersenne, Sophie Germain, Wilson, Twin and many others) to composite numbers (e.g., highly composite numbers, perfect numbers) to number sequences (Fibonacci, Padovan, etc.), and more besides. 
There are discussions of various kinds of very large numbers, as well as discussions of individual numbers (such as 4 or 7) with amusing properties (in the case of 4, the author notes that every positive integer can be written as the sum of 4 squares; in the case of 7, the author discusses the unsettled question of whether there is any \(n>7\) with the property that \(n! + 1\) is a perfect square). The next chapter is on the real numbers. The author begins by discussing irrational and transcendental numbers and stating a few facts about them. He then looks at continued fractions and also sequences of numbers obtained by iteration, thereby allowing him to talk briefly about (among other things) chaos theory and various sequences converging to \(\sqrt2\). The chapter ends with two sections, the first talking about various specific rational numbers (including congruent numbers) and the second talking about various specific irrational ones (\(e\) and \(\pi\) are of course discussed, but so are a number of other ones as well). Following this, there is a relatively short chapter on complex numbers. Beginning with a brief account of their history, the author touches on such topics as the geometric representation of complex numbers, Euler’s formula, the fundamental theorem of algebra, the curiosities of complex exponentiation, and Gaussian integers. There is also a fairly lengthy discussion of the Riemann Hypothesis (pitched at a somewhat higher level than much of the rest of the book). I am still wondering whether the author’s sentence “Riemann’s zeta function occupies a prime place in mathematics” is a deliberate play on words or a happy accident. Finally, in a chapter that, like the discussion of the Riemann Hypothesis, may in large part not be as readily accessible to the target audience of high school students and lay people as is the rest of the book, the author discusses such “unusual” numbers as hyperreal numbers, dual numbers, quaternions and p-adic numbers. 
The level of mathematical detail varies with the topic being discussed. As noted earlier, in many — actually, most — cases, results are just stated as fact, without proof. However, in those cases where a simple argument to show something can be given, the author gives it: he proves, for example, that there are infinitely many primes, that \(\sqrt2\) is irrational, and that there exist two irrational numbers \(\alpha\) and \(\beta\) with the property that \(\alpha^\beta\) is rational. (This last result, in case you haven’t seen it before, is particularly easy and elegant: consider \(\sqrt2^{\sqrt2}\). If this number is rational, take \(\alpha=\beta=\sqrt2\). If it is not rational, take \(\alpha\) to be this number, and \(\beta\) to be \(\sqrt2\). Of course, using more advanced results, one can show that \(\sqrt2^{\sqrt2}\) is in fact irrational, but the beauty of the proof above is that it does not depend on knowledge of this fact.) A reader who is tantalized by some of the stories mentioned in this book — for example, the brief reference to legislation in the state of Indiana that purported to establish by law a value of \(\pi\) — can search online for more detail, or consult the reasonably good bibliography provided by the author. The books listed in the bibliography are generally of the “popular”, rather than technical, kind, so should be accessible to any reader of this book. (The author, by the way, says that the Indiana legislation establishes two different values of \(\pi\), but opinions vary; the legislation is so poorly written that some people have discerned many more values inherent in it. See, for example, Singmaster’s article “The Legal Values of Pi,” in the June 1985 Mathematical Intelligencer, in which he concludes that there are six different values inherent in the legislation itself, and three other values in earlier writing by the author of that bill.) 
Though the author has done a good job in assembling a large collection of facts and interesting stories, nobody can think of everything, and there are, I think, some examples of missed opportunities in the text. More specifically: Constructible numbers don’t appear. The set of these numbers is a field that is strictly between the rational numbers and the algebraic numbers, and plays a central role in the solution of some of the famous construction problems of antiquity. These numbers can be described, and their role discussed, without recourse to a great deal of mathematical background. Cardinal and ordinal numbers are also not discussed. These form two interesting and unusual sets of numbers and there are certainly all sorts of interesting stories connected to them. Another benefit of discussing cardinal numbers is that it can then be stated that the set of algebraic numbers is countable. This, together with the fact that the set of real numbers is not, allows the reader to see a beautiful example of how mathematicians can prove something exists without giving a single specific example of that thing. Although Gaussian integers are mentioned, the author stops short of discussing other quadratic extensions of the integers. Doing so would allow a relatively easy discussion of how uniqueness of factorization into primes is not always the case in number systems. The role of complex numbers in elementary Euclidean geometry could have been discussed in somewhat more detail. (Numerous examples are given in Complex Numbers from A to … Z, by Andreescu and Andrica.) Although octonions are mentioned in the book, they are not discussed in any detail. Particularly in view of the fact that some physicists think that they may be the key to a unified field theory (see, e.g., The Geometry of the Octonions by Dray and Manogue), it might have been useful to spend a few pages on them. 
This is not a textbook: there are no exercises, and there are certainly very few courses being offered which match up with the topic coverage. The book is simply intended to — and does — convey to high school students or other interested laypeople some new and interesting things about numbers. It’s also not a bad book for faculty members to have on their shelves; though most professionals won’t learn a great deal of new things from this book (although I picked up some interesting facts that I was not previously aware of), they may find the book a useful collection of interesting information with which to spice up a lecture. Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.
People don't immediately compress because the body is more or less a pressure vessel. It's not a very good pressure vessel for dealing with vacuum, but it's something. It's your body's resistance to pressure that lets you do things like spray bodily fluids (from your mouth, or from your bladder, or from your arteries). When my wife was in labor with my son, one of the sensors monitoring her reported the internal pressure during her contractions in pascals (though you'll have to find an obstetrician to tell you what a typical uterine contraction pressure is, because I was mostly paying attention to other things at the time). Gases don't have this constraint. Nitrogen and oxygen at room temperature have mean molecular velocities following\begin{align}\frac12 mv^2 &= \frac32 kT \\\frac vc &= \sqrt\frac{3kT}{mc^2} \approx \sqrt\frac{3\cdot 25\,\mathrm{meV}}{30\,\mathrm{GeV}} \\v &\approx 1.5\times 10^{-6}c \approx 450\,\mathrm{m/s} \approx 1000\,\mathrm{mph}\end{align}When you depressurize a gas volume at room temperature, this is the average speed of the gas that moves into the vacuum. If you depressurize an airlock by letting the air flow through a 2 m$^2$ doorway, you have a lot of momentum to carry things along with you. Emilio Pisanty gives a nice example in a comment: if your airlock is the size of a bathroom (~ 20 m$^3$) and it's depressurized rapidly, so that all the air is "suddenly" moving away from the door at a thermal speed, the momentum of the air is comparable to the momentum of a car. You have a good intuition for what it's like to be hit by a car (hint: it sucks, even at low speed); getting thrown out the door of the airlock is a totally plausible outcome, but not a certainty. Keep in mind, if you start to calculate things, that the kinetic energy available, $p^2/2m$, goes way up if you take the momentum of a car collision and put it into a few kilograms of air. Note that in a real airlock, you'd change the pressure slowly. 
If you were on a spacecraft where air was a precious resource, you would probably even pump the air from the airlock into the spacecraft rather than throwing it away.
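The back-of-the-envelope thermal speed above can be reproduced directly (a quick sketch; the mass and temperature are round illustrative values for N$_2$ near room temperature):

```python
import math

# (1/2) m v^2 = (3/2) k T  =>  v = sqrt(3 k T / m)
k_B = 1.380649e-23        # J/K
T = 293.0                 # K (illustrative room temperature)
m_N2 = 28 * 1.66054e-27   # kg, molecular mass of N2

v = math.sqrt(3 * k_B * T / m_N2)
print(f"v ~ {v:.0f} m/s")  # a few hundred m/s, consistent with the estimate above
```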
Gravitation: Acceleration due to Gravity

The variation of acceleration due to gravity with height is $g_h=\frac{GM}{\left(R+h\right)^2}$. The fractional decrease in $g$ at small altitude is $\frac{\Delta g}{g}=\frac{2h}{R}$, which is valid only for $h\ll R$. Acceleration due to gravity below the surface of the earth is $g_d = g(1- d/R)$, where $d$ is the depth from the surface. At the centre of the earth the acceleration due to gravity is $0$. The acceleration due to gravity at a height $h$ and at a depth $d$ are equal when $d = 2h$. When the earth rotates with an angular velocity $\omega$, the value at latitude angle $\varphi$ is $g_\varphi = g - R\omega^2\cos^2\varphi$. The value of $g$ at the equator depends on the angular velocity of the earth; the value of $g$ at the poles is independent of the rotation of the earth. The value of $g$ at the equator would become zero if the angular velocity of rotation increased to about 17 times its present value. The value of $g$ gradually increases from the equator to the poles. The value of $g$ is slightly more at the location of mineral deposits, and slightly less on the top of a mountain and also inside mines. If $g_1$, $g_2$, $g_3$ are the accelerations due to gravity on the surface of the earth, on the top of a mountain, and inside a mine, then $g_1 > g_2$ and $g_1 > g_3$.

1. Consider a body of mass $m$ lying on the surface of the earth; the gravitational force on the body is $F=\frac{GMm}{R^{2}}$.
2. Acceleration due to gravity: $g=\frac{GM}{R^{2}}$.
3. Acceleration due to gravity at height $h$ from the surface of the earth: $g'=\frac{GM}{\left(R+h\right)^2}$.
4. As we go above the surface of the earth, the value of $g$ decreases because $g'\propto\frac{1}{r^{2}}$.
5. Acceleration due to gravity at depth $d$ from the surface of the earth: $g'=\frac{4}{3}\pi \rho G\left(R-d\right)$.
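A quick numerical illustration of the height and depth formulas, including the $d=2h$ equality (standard approximate values for $g$ and $R$ are assumed):

```python
# g falls off as GM/(R+h)^2 above the surface and as g(1 - d/R) below it
g0 = 9.81        # m/s^2 at the surface
R = 6.371e6      # m, mean radius of the earth

def g_height(h):
    return g0 * R ** 2 / (R + h) ** 2     # exact inverse-square form

def g_depth(d):
    return g0 * (1 - d / R)               # linear decrease with depth

h = 1000.0
print(g_height(h), g_depth(2 * h))        # approximately equal when d = 2h
print(g_depth(R))                         # zero at the centre of the earth
```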
What are feasible options for an equivalent of Shamir Secret Sharing using small tables, preferably usable with pen-and-paper? We want to share a secret $K$ into $n\ge2$ shares, so that $m$ shares ($2\le m\le n$) are necessary to reconstruct the secret, and less than $m$ shares reveal no information about $K$. For $m=n$ (e.g. 2-out-of-2), we can use $n-1$ shares $S_i$ ($1\le i<n$) of uniformly random independent bits, and another $S_0$ that is the bitwise-XOR of the secret $K$ and the other shares: $S_{0,j}=K_j\oplus\left(\displaystyle\bigoplus_{i=1}^{n-1}S_{i,j}\right)$. The secret is recomposed by bitwise-XOR of the shares: $K_j=\displaystyle\bigoplus_{i=0}^{n-1}S_{i,j}$. This is easily extended to $2^k$ symbols, e.g. octal or hexadecimal. For 2-out-of-3, we can use a ternary system, a first share $S_0$ of random trits, and two shares $S_i$ ($i\in\{1,2\}$) defined by $S_{i,j}=S_{0,j}+i\,K_j\bmod3$. The secret can be recomposed from any two shares, as: $$K_j\,=\,S_{1,j}-S_{0,j}\bmod3\,=\,S_{2,j}-S_{1,j}\bmod3\,=\,S_{0,j}-S_{2,j}\bmod3$$ The tables for addition (used for encoding) and subtraction (used for decoding) are:

    | 0 1 2        | 0 1 2
----+------    ----+------
+ 0 | 0 1 2    - 0 | 0 1 2
+ 1 | 1 2 0    - 1 | 2 0 1
+ 2 | 2 0 1    - 2 | 1 2 0

This is easily extended to base $3^k$; e.g. for $k=2$

    | 0 1 2 3 4 5 6 7 8        | 0 1 2 3 4 5 6 7 8
----+------------------    ----+------------------
+ 0 | 0 1 2 3 4 5 6 7 8    - 0 | 0 1 2 3 4 5 6 7 8
+ 1 | 1 2 0 4 5 3 7 8 6    - 1 | 2 0 1 5 3 4 8 6 7
+ 2 | 2 0 1 5 3 4 8 6 7    - 2 | 1 2 0 4 5 3 7 8 6
+ 3 | 3 4 5 6 7 8 0 1 2    - 3 | 6 7 8 0 1 2 3 4 5
+ 4 | 4 5 3 7 8 6 1 2 0    - 4 | 8 6 7 2 0 1 5 3 4
+ 5 | 5 3 4 8 6 7 2 0 1    - 5 | 7 8 6 1 2 0 4 5 3
+ 6 | 6 7 8 0 1 2 3 4 5    - 6 | 3 4 5 6 7 8 0 1 2
+ 7 | 7 8 6 1 2 0 4 5 3    - 7 | 5 3 4 8 6 7 2 0 1
+ 8 | 8 6 7 2 0 1 5 3 4    - 8 | 4 5 3 7 8 6 1 2 0

See this for $k=3$ used for the 26 letters and space. 
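The 2-out-of-3 trit scheme above can be sketched in a few lines (Python purely to illustrate the pen-and-paper arithmetic; note that all three valid pairs use the same rule $K = S_{i+1} - S_i \bmod 3$ with indices taken cyclically):

```python
import secrets

def share_2_of_3(K):
    """Split a list of trits K (each 0, 1, or 2) into three shares.
    Any two shares recover K; any single share is uniformly random."""
    S0 = [secrets.randbelow(3) for _ in K]             # random trits
    S1 = [(s + k) % 3 for s, k in zip(S0, K)]          # S1 = S0 + K  (mod 3)
    S2 = [(s + 2 * k) % 3 for s, k in zip(S0, K)]      # S2 = S0 + 2K (mod 3)
    return S0, S1, S2

def recover(Si, Snext):
    """Recover K from share i and share (i+1) mod 3:
    K = S1 - S0 = S2 - S1 = S0 - S2  (mod 3)."""
    return [(b - a) % 3 for a, b in zip(Si, Snext)]

K = [0, 1, 2, 2, 0, 1]
S0, S1, S2 = share_2_of_3(K)
```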
As long as conciseness of the shares is not an issue, we can directly encode binary with the ternary system, octal with $k=2$, hex with $k=3$, base64 with $k=4$. What about other $m$-out-of-$n$ schemes? In particular 2-out-of-4, 2-out-of-5, 3-out-of-4?
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either $$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$ The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes. On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$). However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one. So, has such a proof (with an upper bound on $c$) been published? 
If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme? Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis $$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$ Or is there an entangled counterfeiting strategy that does better? Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$). Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
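Both per-qubit success probabilities mentioned in Update 2 are easy to check numerically (a sketch; each strategy is modelled as "measure the qubit in a fixed basis and emit two copies of the observed basis state"):

```python
import math

# Wiesner's four single-qubit money states (all real, so plain tuples suffice)
s2 = 1 / math.sqrt(2)
states = [(1.0, 0.0), (0.0, 1.0), (s2, s2), (s2, -s2)]

def success(basis):
    """Per-qubit success probability of: measure in `basis`, output two
    copies of the observed basis state.  For each state psi this is
    sum_b |<b|psi>|^2 * (|<psi|b>|^2)^2 = sum_b |<b|psi>|^6, averaged
    over the four equally likely states."""
    return sum(sum(abs(b[0] * p[0] + b[1] * p[1]) ** 6 for b in basis)
               for p in states) / len(states)

computational = [(1.0, 0.0), (0.0, 1.0)]
t = math.pi / 8
rotated = [(math.cos(t), math.sin(t)), (math.sin(t), -math.cos(t))]
print(success(computational), success(rotated))   # both give 5/8
```

Both the computational-basis strategy and the $\pi/8$-rotated strategy come out to exactly $5/8$ per qubit, matching the $(5/8)^n$ figure in the question.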
This is a homework problem for a class that ended 2 years ago; I'm learning it by myself. Consider a directed graph $D=(V,A)$, $s,t\in V$, $A=\{a_1,\ldots,a_n\}$. Let $P=\{p_1,\ldots,p_m\}$ be the set of all simple paths from $s$ to $t$. There is a capacity function $c:A\to \mathbb{R}^+$. First we are asked to express max $s$-$t$ flow as a linear program, letting each path $p_j$ in $P$ be associated with the variable $x_j$; there could be an exponential number of variables. The formulation: $$ \begin{align} \text{Maximize:} & \sum_{j}x_j\\ \text{subject to:} & \sum_{a_i \in p_j}{x_j} \leq c(a_i) &\text{ for } 1 \le i \le n, \\ & x_j\geq 0 &\text{ for } 1 \leq j \leq m\\ \end{align} $$ The dual is therefore: $$ \begin{align} \text{Minimize:} & \sum_{i} c(a_i) y_i\\ \text{subject to:} & \sum_{a_i\in p_j} y_i \geq 1 &\text{ for } 1 \le j \le m, \\ & y_i\geq 0 &\text{ for } 1 \leq i \leq n\\ \end{align} $$ Assuming we have an optimal solution for the dual, the problem asks us to use complementary slackness to show there exists a formulation of the primal with only a polynomial number of paths which also attains an optimal solution. How is this done?
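A tiny worked instance of the path formulation may help fix ideas, solved here with scipy.optimize.linprog (a hypothetical three-arc graph; linprog minimizes, so the objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: arcs a1=(s,a) cap 2, a2=(a,t) cap 1, a3=(s,t) cap 2.
# Simple s-t paths: p1 = s->a->t (uses a1, a2), p2 = s->t (uses a3).
caps = np.array([2.0, 1.0, 2.0])
A = np.array([[1, 0],      # A[i][j] = 1 iff arc a_i lies on path p_j
              [1, 0],
              [0, 1]])

# maximize sum_j x_j  ==  minimize -sum_j x_j,  s.t.  A x <= caps, x >= 0
res = linprog(c=[-1, -1], A_ub=A, b_ub=caps, bounds=[(0, None)] * 2,
              method="highs")
max_flow = -res.fun
print(max_flow)    # the cut {a2, a3} has capacity 1 + 2 = 3
```

The optimal primal here uses one unit on $p_1$ and two units on $p_2$; note that only a small number of paths carry flow, which is the phenomenon the complementary-slackness argument generalizes.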
First of all, look at the power of what you call background noise: it's between -150 dB and -120 dB relative to your peak's power, which means that the magnitude of these FFT points is somewhere in $[10^{-7.5};10^{-6}]$ of the main peak magnitude. That screams "numerical inaccuracy" all over the place: assuming your FFT was implemented using 32-bit IEEE 754 floating point numbers, you'd have to realize that for each point of the FFT output, there's at least an FFT-length number of additions taking place with a very finite numerical precision. Now your FFT has an enormous length; $1323000\approx 1.3\cdot 10^6$ additions bring the most stable algorithm to its knees. For a bit of detail on error analysis of commonly used FFT algorithms, refer to [1]. Have a look at the DFT matrix to understand the concept of how these values come to be: for the frequency of your spectral peak, signal-period ($\frac{1}{f_\text{signal}}$) spaced samples are added up; since they always have the same value (definition of "period"), the result is large; for all the others, they should cancel each other out. For example, the bin at $2\cdot f_\text{signal}$ always adds up the same absolute value, but with alternating sign. For other bins, you just take values from the sine and add them up. What happens now is that due to limited floating point math, when you add up a large and a small number, and then subtract the large number again, you won't get the same small number you had – accuracy reduces with magnitude! Now, the FFT is an algorithm that is pretty resilient in that respect (because it doesn't try to add up all FFT-length values at once), but there's only so much stability that an algorithm can achieve with a given machine $\epsilon$. I would have expected a more even noise, or rather a linear shaped noise. Why that? If anything, you're observing a process where the imperfections of the FFT are correlated with the input signal. Something like that won't have a "white" PSD!
Remember, the PSD of a process is the Fourier transform of its autocorrelation; in your case, it seems there's more autocorrelation at higher frequencies. Do your understanding a favor and add a bit of as-good-as-feasible white Gaussian noise to the input, maybe with a variance of $N=10^{-3}$ of the sine's amplitude, and do your mag-squared FFT again. You'll see the noise floor where you expect it, and it will be flat, because the (dominant) source of noise is then actually white, and not strongly correlated. Also, your $1.3\cdot10^6$-point FFT really isn't all that helpful, unless you actually need a frequency resolution of $\frac{f_\text{sample}}{N}=\frac{4.41\cdot 10^4}{1.323\cdot10^6}=33\,\text{mHz}$ (which I dare to doubt). Try with shorter pieces of your signal! [1] Tasche, M., and H. Zeuner, Roundoff error analysis for fast trigonometric transforms, in Handbook of Analytic-Computational Methods in Applied Mathematics, G. Anastassiou (ed.), Chapman & Hall/CRC, Boca Raton, 2000, 357–406.
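The last two suggestions can be seen concretely in a small sketch. The numbers here are my own (a bin-centred sine near 673 Hz, a $2^{16}$-point FFT, noise standard deviation $10^{-3}$) and are not from the original question:

```python
import numpy as np

fs = 44100.0
n = 1 << 16                  # a much shorter FFT than 1.3e6 points
k = 1000                     # put the sine exactly on bin k (no leakage)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * (k * fs / n) * t)

rng = np.random.default_rng(0)
clean = np.abs(np.fft.rfft(x)) ** 2
noisy = np.abs(np.fft.rfft(x + 1e-3 * rng.standard_normal(n))) ** 2

def floor_db(p):
    # Median bin power relative to the peak, in dB.
    return 10 * np.log10(np.median(p) / p.max())

print(floor_db(clean))  # numerical-noise floor: hundreds of dB down
print(floor_db(noisy))  # flat white-noise floor, roughly -100 dB
```

With float64 the purely numerical floor sits far below anything audible; the added white noise produces the flat, correctly placed floor described above.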
The motivation for this question comes from J. Cohen's result; at the prime $p=2$ his result says that any element in ${_2\pi_*^s}$ can be written as a (higher) Toda bracket of $2,\eta,\nu,\sigma$, modulo indeterminacy in choosing various extension and coextension maps; it has to be said that this is mainly based on the way that he defines the higher brackets, as I understand it. Now, for a moment, let's think of Toda bracketing as a specific kind of homotopy operation. Then, how much of ${_2\pi_*}$ can be obtained by applying various homotopy operations to $2,\eta,\nu,\sigma$? As another motivation, consider the Kahn-Priddy theorem, or rather the delooped version of it (see "The Homology of the James-Hopf maps" by Kuhn); it asserts that there is a map $$t:S^1\times QD_2S^1\to QS^1$$ inducing an epimorphism on ${_2\pi_*}$. Consequently, in dimensions $>1$ we may 'capture' ${_2\pi_*}QS^1\simeq{_2\pi_{*+1}^s}$ just by $_2\pi_*^sD_2S^1$, that is, any element of $\pi_{*+1}^s$ may be written as $\alpha^*(1)$, defined by the stable composition $$S^{n+1} \stackrel{\alpha}{\rightarrow} D_2S^1\stackrel{D_21}{\rightarrow} D_2S^1\stackrel{t}{\to} S^1$$ where $\alpha$ varies in ${_2\pi_*^s}D_2S^1={_2\pi_*^s}\Sigma\mathbb{R}P$. Now, I wonder whether such classical theorems can be interpreted in such a way as to show that we can generate ${_2\pi_*^s}$ by homotopy operations, possibly coming from $D_2^iS^n$ with $D_2^i=D_2D_2^{i-1}$ and $n>-1$. I wonder if anything on this is known, maybe in the land of $E_\infty$ operations on homotopy groups, something that we can do sums with!
Data Mining - (Feature|Attribute) Extraction Function Table of Contents 1 - About This function is useful for reducing the dimensionality of high-dimensional data (i.e., you get fewer columns). Applicable for: latent semantic analysis, data decomposition and projection, and pattern recognition. We project the p predictors into an M-dimensional subspace, where M < p. This is achieved by computing M different linear combinations, or projections, of the variables. Then these M projections are used as predictors to fit a linear regression model by least squares. Dimension reduction is a way of finding combinations of variables, extracting the important combinations of variables, and then using those combinations as the features in regression. Approaches: principal components regression (PCR) and partial least squares (PLS). 2 - Articles Related 3 - Procedure 3.1 - Linear Combinations Let <math>Z_1, Z_2, \dots, Z_m</math> represent m linear combinations of the original p predictors (m < p). <MATH> Z_m = \sum_{j=1}^p \phi_{mj} x_j </MATH> where: <math>\phi_{mj}</math> is a constant and <math>x_j</math> are the original predictors. We require m < p because if m equals p, dimension reduction just gives back least squares on the raw data.
3.2 - Model Fitting Then the following linear model can be fitted using ordinary least squares: <MATH> y_i = \theta_0 + \sum_{m=1}^M \theta_m z_{im} + \epsilon_i </MATH> where: <math>i = 1, \dots, n</math> This model can be thought of as a special case of the original linear regression model because (with a little bit of algebra): <MATH> \sum_{m=1}^M \theta_m z_{im} = \sum_{m=1}^M \theta_m ( \sum_{j=1}^p \phi_{mj} x_{ij} ) = \sum_{j=1}^p (\sum_{m=1}^M \theta_m \phi_{mj} ) x_{ij} = \sum_{j=1}^p \beta_j x_{ij} </MATH> The last term is just a linear combination of the original predictors (<math>x</math>), where the linear combination involves the coefficients <MATH> \beta_j = \sum_{m=1}^M \theta_m \phi_{mj} </MATH> Dimension reduction thus fits a linear model through the definition of new <math>z</math>'s that are linear in the original <math>x</math>'s, where the <math>\beta_j</math>'s must take a very specific form. In a way, it's similar to ridge and the LASSO: it's still least squares, still a linear model in all the variables, but there's a constraint on the coefficients. We're not constraining the RSS; we're constraining the form of the <math>\beta_j</math> coefficients. Dimension reduction aims to win the bias-variance trade-off by producing a simplified model with low bias and also low variance relative to plain vanilla least squares on the original features. 4 - Feature Extraction vs Ridge Regression Ridge regression looks really different from the dimension reduction methods (principal components regression and partial least squares), but it turns out that mathematically these ideas are all very closely related. Principal components regression, for example, is just a discrete version of ridge regression: ridge regression continuously shrinks the variables, whereas principal components does it in a more choppy sort of way.
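A short numerical sketch of the formulas above, using principal-component regression as the dimension-reduction method. The data are synthetic and the loadings come from a plain numpy SVD rather than any particular library routine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, M = 200, 10, 3          # n samples, p predictors, M components (M < p)
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# PCR: the constants phi_mj are the top-M right singular vectors of X.
Xc = X - X.mean(axis=0)
U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
Phi = Vt[:M]                   # (M, p): the loadings phi_mj
Z = Xc @ Phi.T                 # (n, M): the new predictors z_im

# Fit theta_m by ordinary least squares on Z, then map back to beta_j.
theta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
beta = Phi.T @ theta           # beta_j = sum_m theta_m phi_mj
print(beta.shape)              # (10,): a constrained linear model in the x's
```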
5 - Algorithm Fourier transforms; wavelet transforms. 6 - Example Examples of feature extractors: unigrams, bigrams, unigrams and bigrams, and unigrams with part-of-speech tags. Given demographic data about a set of customers, group the attributes into general characteristics of the customers.
A priori bounds and existence result of positive solutions for fractional Laplacian systems School of Applied Mathematics, Xiamen University of Technology, 600 Ligong Road, Xiamen 361024, China $\left\{\begin{array}{ll}(-\triangle)^{\frac{\alpha}{2}}u+\sum^{N}_{i = 1}b_{i}(x)\frac{\partial u}{\partial x_{i}}+C(x)u = f(x,v), \;\;x\in \Omega,\\(-\triangle)^{\frac{\beta}{2}}v+\sum^{N}_{i = 1}c_{i}(x)\frac{\partial v}{\partial x_{i}}+D(x)v = g(x,u),\;\; x\in \Omega,\\u>0, v>0, \;\; x\in \Omega,\\u = 0, v = 0, \;\; x\in \mathbb R^{N}\setminus \Omega,\end{array}\right.$ where $\Omega \subset \mathbb R^{N}$ is a bounded domain, $\alpha \in (1,2)$, $\beta \in (1,2)$, and $N>\max\{\alpha, \beta\}$. Mathematics Subject Classification: Primary: 35B09, 35B45, 35J05. Citation: Lishan Lin. A priori bounds and existence result of positive solutions for fractional Laplacian systems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (3): 1517-1531. doi: 10.3934/dcds.2019065
The family of decays mediated by $b \to s \ell^+ \ell^-$ transitions provides a rich laboratory to search for effects of physics beyond the Standard Model. In recent years LHCb has found hints of deviations from theoretical predictions both in the rates and angular distributions of such processes. In addition, hints of lepton flavour non-universality have been seen when comparing $B^+ \to K^+\mu^+\mu^-$ and $B^+ \to K^+e^+e^-$ decay rates, with the so-called $R_K$ ratio. Similar observables in different decays, such as $R_{K^\ast} = \mathrm{BR}(B^0 \to K^{\ast 0}\mu^+\mu^-) / \mathrm{BR}(B^0 \to K^{\ast 0}e^+e^-)$ and others, can also be measured by LHCb, thus providing further avenues to test lepton flavour universality.
The latest results from LHCb in this sector will be presented.
This is a typical Euclidean descent. The set $M$ of all positive common multiples of $\,a,b\,$ is closed under positive subtraction, i.e. $\,m> n\in M$ $\Rightarrow$ $\,a,b\mid m,n\,\Rightarrow\, a,b\mid m\!-\!n\,\Rightarrow\,m\!-\!n\in M.\,$ Therefore, further, $\,M\,$ is closed under mod, i.e. remainder, since it arises by repeated subtraction, i.e. $\ m\ {\rm mod}\ n\, =\, m-qn = ((m-n)-n)-\cdots-n.\,$ Therefore it follows that the least positive $\,\ell\in M\,$ divides every $\,m\in M,\,$ else $\ 0\ne m\ {\rm mod}\ \ell\, $ would be an element of $\,M\,$ smaller than $\,\ell,\,$ contra minimality of $\,\ell.\,$ Thus the least common multiple $\,\ell\,$ divides every common multiple $\,m.$ Remark $\ $ The key structure exploited in the proof is abstracted out in the Lemma below. Lemma $\ \ $ Let $\,\rm S\ne\emptyset \,$ be a set of integers $>0$ closed under subtraction $> 0,\,$ i.e. for all $\rm\,n,m\in S, \,$ $\rm\ n > m\ \Rightarrow\ n-m\, \in\, S.\,$ Then every element of $\rm\,S\,$ is a multiple of the least element $\rm\:\ell = \min\, S.$ Proof ${\bf\ 1}\,\ $ If not there is a least nonmultiple $\rm\,n\in S,\,$ contra $\rm\,n-\ell \in S\,$ is a nonmultiple of $\rm\,\ell.$ Proof ${\bf\ 2}\,\rm\,\ \ S\,$ closed under subtraction $\rm\,\Rightarrow\,S\,$ closed under remainder (mod), when it is $\ne 0,$ since mod may be computed by repeated subtraction, i.e. 
$\rm\, a\ mod\ b\, =\, a - k b\, =\, a-b-b-\cdots -b.\,$ Thus $\rm\,n\in S\,$ $\Rightarrow$ $\rm\, (n\ mod\ \ell) = 0,\,$ else it is $\rm\,\in S\,$ and smaller than $\rm\,\ell,\,$ contra minimality of $\rm\,\ell.$ Remark $\ $ In a nutshell, two applications of induction yield the following inferences $\ \ \rm\begin{eqnarray} S\ closed\ under\ {\bf subtraction} &\:\Rightarrow\:&\rm S\ closed\ under\ {\bf mod} = remainder = repeated\ subtraction \\&\:\Rightarrow\:&\rm S\ closed\ under\ {\bf gcd} = repeated\ mod\ (Euclid's\ algorithm) \end{eqnarray}$ Interpreted constructively, this yields the extended Euclidean algorithm for the gcd. The Lemma describes a fundamental property of natural number arithmetic whose essence will become clearer when one studies ideals of rings (viz. $\,\Bbb Z\,$ is Euclidean $\Rightarrow$ PID).
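The chain of inferences can be watched in action with a toy implementation (my own illustration, not from the original answer): repeated subtraction computes remainders, and iterating that is Euclid's algorithm, producing the least element of the subtraction-closed set generated by $\{m, n\}$:

```python
def gcd_sub(m, n):
    # The set S generated by {m, n} under positive subtraction has least
    # element gcd(m, n); repeated subtraction is exactly the descent above.
    while m != n:
        if m > n:
            m -= n
        else:
            n -= m
    return m

print(gcd_sub(252, 105))  # 21

# The lcm argument in miniature: the least common multiple l divides
# every common multiple of a and b.
a, b = 12, 18
l = a * b // gcd_sub(a, b)        # l = 36, the least element of M
assert all(m % l == 0 for m in range(1, 1000) if m % a == 0 and m % b == 0)
```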
Let us consider a closed Riemannian surface $(\Sigma,h)$ and a compact Riemannian manifold $(N,g)$ with dimension greater than $3$. If we are given a sequence of harmonic maps $u_n:(\Sigma,h) \rightarrow (N,g)$ with bounded energy, i.e. $$E(u_n)=\int_\Sigma \vert du_n\vert^2 \, dv < C,$$ it is well known that we have an energy identity, that is to say there exist a harmonic map $u^\infty : (\Sigma,h) \rightarrow (N,g)$ and some bubbles, i.e. harmonic maps $\omega_i:\mathbb{C} \rightarrow (N,g)$, such that $$\lim_n E(u_n)= E(u^\infty)+\sum_{i} E(\omega_i).$$ My question is: does $i\geq 1$ really occur? When $\Sigma=\hat{\mathbb{C}}$, the answer is clearly yes, considering $$u_n(z)=(z,nz).$$ But here, the fact that the conformal group of $\hat{\mathbb{C}}$ is not compact seems to be crucial. So is there an example of bubbling when $\Sigma$ is not $\hat{\mathbb{C}}$, especially in genus bigger than $2$?
Well, what is “abstract” analytic number theory? The book’s back cover provides an answer: The three part treatment [given by Knopfmacher] applies classical analytic number theory [methods] to a wide variety of mathematical subjects not usually treated in an arithmetical way. The first part deals with arithmetical semigroups and algebraic enumeration problems. Part Two addresses arithmetical semigroups with analytical properties of classical type; and the final part explores analytical properties of other arithmetical systems. Aha! We’re getting closer. What about some samplings from each of the three parts? Fine: here is a pair of titles: §2 of Chapter 1 is titled “Categories satisfying theorems of the Krull-Schmidt type,” and §1 of Chapter 2 is titled “The Dirichlet algebra of an arithmetical semigroup.” Not your usual analytic number theory fare. But then in §7 of the second chapter Knopfmacher discusses “\(\zeta\)-formulae” and we find stuff like this: “A function \(f \in \mathrm{Dir}(G)\) is a \(\mathrm{PIM}\)-function if and only if \(f\) possesses a generalized \(\zeta\)-function.” Here \(G\) is an arithmetical semigroup, \(\mathrm{Dir}(G)\) is the algebra of arithmetical functions on \(G\) (these being functions from \(G\) to the complex numbers: c’est tout) and \(\mathrm{PIM}\) (or \(\mathrm{PIM}(G)\)) stands for the set of prime-independent functions among all the multiplicative functions on \(G\). So, yes, the forest is getting denser, but there is certainly some connection discernible with classical notions: we find on p. 59, indeed, that if \(G\) is the semigroup of positive integers and \(f\) is the identity, then the ensuing \(\zeta\)-function is Riemann’s. All right, what about Part Two? Well, Chapter 6 is enticingly titled “The Abstract Prime Number Theorem,” and we find the lead role in the play appearing already on p.
154, to the effect that if \(G\) is an arithmetical semigroup, such that, with \(N_G(x)\) denoting the number of elements in \(G\) having norm at most \(x\), there exist positive \(A\), \(\delta\), and \(0\leq \eta <\delta\), with \(N_G(x) = Ax^\delta + O(x^\eta)\) as \(x\to\infty\), then \(\pi_G(x) \sim x^\delta/[\delta(\log x)]\): a very familiar business, given that, of course, \(\pi_G(x)\) counts primes in the expected way. The more things change the more they stay the same, at least sort of. Part Two goes on to a (for me, at least) particularly tantalizing discussion of Fourier analysis: on p. 206, Knopfmacher throws Ramanujan expansions into the mix. And then Part Three deals with some relative esoterica like counterparts of earlier results for arithmetical additive semigroups and arithmetical formations. I will just mention these without further elaboration, as the point is already made, I think: Knopfmacher’s book is not going to be everyone’s cup of tea, given the severity of the level of abstraction he presents, but it generalizes a very classical and fecund subject, viz. the vaunted analytic theory of numbers of Riemann and Dirichlet, and before them, Euler and Gauss. That said, however, the generalizations Knopfmacher presents are exciting in their own right, even as they evince welcome connections with and parallels to their classical precursors. To have these results available in such contexts as the theory of arithmetical semigroups is a boon indeed. Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
Problem 616 Suppose that $p$ is a prime number greater than $3$. Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$. (c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$. Problem 613 Let $m$ and $n$ be positive integers such that $m \mid n$. (a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective. (d) Determine the group structure of the kernel of $\phi$. Problem 612 Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by the functions $\sin^2(x)$ and $\cos^2(x)$. (a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$. (b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$. Problem 611 An $n\times n$ matrix $A$ is called orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices. Consider the subset \[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$. Problem 607 Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let \[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\] where \begin{align*} p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\ p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3. \end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Problem 606 Let $V$ be a vector space and $B$ be a basis for $V$.
Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$. After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form \[\begin{bmatrix} 1 & 0 & 2 & 1 & 0 \\ 0 & 1 & 3 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) What is the dimension of $V$? (b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$? (The Ohio State University, Linear Algebra Midterm) Problem 605 Let $T:\R^2 \to \R^3$ be a linear transformation such that \[T\left(\, \begin{bmatrix} 3 \\ 2 \end{bmatrix} \,\right) =\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \text{ and } T\left(\, \begin{bmatrix} 4\\ 3 \end{bmatrix} \,\right) =\begin{bmatrix} 0 \\ -5 \\ 1 \end{bmatrix}.\] (a) Find the matrix representation of $T$ (with respect to the standard basis for $\R^2$). (b) Determine the rank and nullity of $T$. (The Ohio State University, Linear Algebra Midterm) Problem 604 Let \[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) Find a basis for the null space $\calN(A)$. (b) Find a basis of the range $\calR(A)$. (c) Find a basis of the row space for $A$. (The Ohio State University, Linear Algebra Midterm) Problem 603 Let $C[-2\pi, 2\pi]$ be the vector space of all continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the functions \[f(x)=\sin^2(x) \text{ and } g(x)=\cos^2(x)\] in $C[-2\pi, 2\pi]$. Prove or disprove that the functions $f(x)$ and $g(x)$ are linearly independent.
(The Ohio State University, Linear Algebra Midterm) Problem 601 Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$.
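Problem 616 above can be sanity-checked numerically before writing a proof. The prime $p = 11$ is my own arbitrary choice; it satisfies the hypothesis of part (c), since $-1$ is not a square mod $11$:

```python
p = 11
G = set(range(1, p))
S = {x * x % p for x in G}

# (a) closure under multiplication: S is a subgroup of G.
assert all((a * b) % p in S for a in S for b in S)

# (b) the index [G : S].
print(len(G) // len(S))  # 2

# (c) exactly one of a, -a is a square, for every a in G.
assert all((a in S) != ((p - a) in S) for a in G)
```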
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row? Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
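The determinant question a few messages up can at least be checked numerically: expanding along any row gives the same value. This is a toy check with a matrix of my own choosing, not a proof:

```python
import numpy as np

def det_cof(A, row=0):
    # Laplace (cofactor) expansion along a chosen row.
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, row, axis=0), j, axis=1)
        total += (-1) ** (row + j) * A[row, j] * det_cof(minor)
    return total

A = np.array([[2., 1., 0.], [1., 3., 4.], [0., 5., 6.]])
print([det_cof(A, r) for r in range(3)])  # the same value for every row
```

The actual proof goes through the characterization of the determinant as the unique multilinear, alternating, normalized function of the rows, as mentioned in the chat.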
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Electronic Devices Semiconductor Diode, Application of Junction Diode as a Rectifier A p-n junction can be formed during crystal growth in a pure semiconductor. At the junction, free electrons from the n-region migrate towards the p-region and holes in the p-region migrate towards the n-region. This process is known as DIFFUSION. Due to diffusion, positive ions are left behind in the n-region and negative ions in the p-region near the junction. These ions are immobile. Due to the immobile ions on either side of the junction, an internal electric field is formed at the junction, directed from the n-region to the p-region. The region near the p-n junction in which no free charge carriers remain, due to the recombination of electrons and holes, is called the DEPLETION LAYER. If V is the barrier potential and d is the thickness of the depletion layer, then the electric field intensity across the junction is \tt E=\frac{V}{d}, directed from the n-side to the p-side. Circuit symbol of p-n junction diode Due to their very small size, p-n junction diodes are used in microcircuits. The movement of holes and electrons towards the junction and their recombination reduces the width of the charge-depleted region. A.C. resistance of the diode: \tt R_{a.c}=\frac{\Delta V}{\Delta I} Avalanche breakdown in reverse bias is due to the breaking of covalent bonds as a result of collisions of accelerated electrons and holes with the valence electrons. Zener breakdown in reverse bias is due to the breaking of covalent bonds directly by the strong electric field at the junction. A light-emitting diode (LED) is operated in forward bias. LEDs are used as photo-luminescent panels in road signs, indicator lights, etc. Applications of Junction diode Solar cells are used in calculators. Solar arrays generate electricity. A Zener diode is a properly doped p-n junction diode which is operated in the breakdown region in reverse-bias mode. A Zener diode has a sharp breakdown voltage in the reverse-bias condition. 
This voltage is called the Zener voltage (V_z). Silicon is preferred over germanium for constructing Zener diodes, due to its higher thermal stability and current capability. More electron-hole pairs are created due to the strong electric field at the junction at the Zener voltage, which increases the reverse current without a change in voltage. A Zener diode is used as a voltage regulator. \tt I_{L}=\frac{V_{o}}{R_{L}}=\frac{V_{z}}{R_{L}}=constant Voltage across the series resistance: V = input voltage − Zener voltage, i.e. V = V_i − V_z. Current through the series resistance (R): \tt I=\frac{V}{R}=\frac{V_{i}-V_{z}}{R} Current through the Zener diode: I_Z = I − I_L. 1. dc Current Gain It is defined as the ratio of the collector current (I_C) to the base current (I_B). \beta_{dc} = \frac{I_{C}}{I_{B}} 2. ac Current Gain It is defined as the ratio of the change in collector current (ΔI_C) to the change in base current (ΔI_B). \beta_{ac} = \frac{\Delta I_{C}}{\Delta I_{B}} 3. Voltage Gain It is defined as the ratio of the output voltage to the input voltage. 
A_{v} = \frac{V_{0}}{V_{i}} = -\beta_{ac} \times \frac{R_{0}}{R_{i}}
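The Zener-regulator relations above (I_L = V_z/R_L, I = (V_i − V_z)/R, I_Z = I − I_L) can be checked with a quick numeric sketch; all component values here are hypothetical, chosen only for illustration:

```python
# Numeric sketch of the Zener-regulator relations (hypothetical values:
# 12 V input, 6 V Zener, series R = 100 ohm, load R_L = 1 kohm).
V_i, V_z, R, R_L = 12.0, 6.0, 100.0, 1000.0

I_L = V_z / R_L      # load current: I_L = V_z / R_L = 6 mA
V = V_i - V_z        # drop across the series resistor: 6 V
I = V / R            # current through the series resistor: 60 mA
I_Z = I - I_L        # current through the Zener diode: 54 mA

print(round(I_Z, 6))  # 0.054
```

Note that the load current stays fixed by V_z and R_L, so any change in input voltage is absorbed by the Zener current, which is exactly the regulating action described above.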
Let M be some manifold, and TM the tangent bundle. Let $\gamma : [a,b] \to M$ be a smooth curve on M defined on an interval of $\mathbb{R}$. Let $J$ be another interval in $\mathbb{R}$ containing 0. A 'deformation of $\gamma(t)$ with fixed endpoints' is a curve $\overline{\gamma}:[a,b]\times J \to M : (t,\epsilon) \mapsto \overline{\gamma}_{\epsilon}(t)$ such that $\overline{\gamma}_{0}(t)=\gamma(t), \forall t \in [a,b]$ $\overline{\gamma}_{\epsilon}(a) = \gamma(a)$ and $\overline{\gamma}_{\epsilon}(b) = \gamma(b)$ for all $\epsilon \in J$ Let L be a Lagrangian, i.e. a smooth map $L : TM \to \mathbb{R} : (p,\dot{p}) \mapsto L(p,\dot{p})$. For $M = \mathbb{R}^n$ it is simple to prove that $\gamma$ fulfills the variational principle $$\left. \frac{d}{d\epsilon} \right |_{\epsilon=0} \int_a^b L(\overline{\gamma}_{\epsilon}(t),\dot{\overline{\gamma}}_{\epsilon}(t)) dt = 0$$ for every deformation of $\gamma$, if and only if $\gamma$ satisfies the Euler-Lagrange equations $$\frac{d}{dt}\frac{\partial L}{\partial \dot{p}}(\gamma(t),\dot{\gamma}(t)) - \frac{\partial L}{\partial {p}}(\gamma(t),\dot{\gamma}(t)) = 0.$$ My question In many references (any book on geometric mechanics), it is stated that this equivalence holds on any manifold, not just the Euclidean case. And in several places (for example Marsden and Ratiu's book on geometric mechanics) I have seen it stated that this can be proved in coordinates. However, this is only done for the case where $\gamma$ is contained in a single chart. I am trying to prove, or looking for a reference that proves, the general case. Preferably in coordinates, or in a relatively 'simple' intrinsic way. Can anyone help with this? My attempt Say we want to prove the following direction; let $\gamma : [a,b]\to M$ fulfill the Euler-Lagrange equation. I.e. it fulfills the equation in every chart. We want to show that the variation of the integral is 0. 
Choose a cover of M, and let $\gamma$ be covered by 3 charts, as in the figure below (copied from the book 'Geometric mechanics and symmetry' by Holm et al). Then its deformations (for small enough $\epsilon$) are also covered by these charts. Choose one such deformation. Then we can split it into three subcurves defined on the intervals $[a,t_1],[t_1,t_2],[t_2,b]$, respectively, such that each is contained in a single chart. Likewise, we can split up the integral into three integrals \begin{align} \left. \frac{d}{d\epsilon} \right |_{\epsilon=0} \int_a^b L(\overline{\gamma}_{\epsilon}(t),\dot{\overline{\gamma}}_{\epsilon}(t)) dt =& \left. \frac{d}{d\epsilon} \right |_{\epsilon=0} \int_a^{t_{1}} L(\overline{\gamma}_{\epsilon}(t),\dot{\overline{\gamma}}_{\epsilon}(t)) dt \\ &+ \left. \frac{d}{d\epsilon} \right |_{\epsilon=0} \int_{t_{1}}^{t_{2}} L(\overline{\gamma}_{\epsilon}(t),\dot{\overline{\gamma}}_{\epsilon}(t)) dt \\ &+ \left. \frac{d}{d\epsilon} \right |_{\epsilon=0} \int_{t_{2}}^b L(\overline{\gamma}_{\epsilon}(t),\dot{\overline{\gamma}}_{\epsilon}(t)) dt. \end{align} In each integral, we can use the coordinates of the suitable chart. However, for each such curve/deformation in $\mathbb{R}^n$, the endpoints will not be fixed, except at a and b. From the proof of the equivalence on $M = \mathbb{R}^n$, one can deduce that if an arbitrary deformation $\overline{g} : [T_1,T_2]\times J \to \mathbb{R}^n$ (not necessarily with fixed endpoints) fulfills the E-L equations, then \begin{align*} \left. \frac{d}{d\epsilon} \right |_{\epsilon=0} \int_{T_1}^{T_2} L(\overline{g}_{\epsilon}(t),\dot{\overline{g}}_{\epsilon}(t)) dt = \left[ \frac{\partial L}{\partial \dot{p}}(g(t),\dot{{g}}(t)) \cdot \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \overline{g}_{\epsilon}(t)\right]_{T_1}^{T_2} \end{align*} This can be used on the previous equation to get \begin{align} \left. 
\frac{d}{d\epsilon} \right |_{\epsilon=0} \int_a^b L(\overline{\gamma}_{\epsilon}(t),\dot{\overline{\gamma}}_{\epsilon}(t)) dt =& \left[ \frac{\partial L'}{\partial \dot{p}}(\gamma'(t),\dot{\gamma}'(t)) \cdot \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \overline{\gamma}'_{\epsilon}(t)\right]_{a}^{t_1} \\ &+ \left[ \frac{\partial L''}{\partial \dot{p}}(\gamma''(t),\dot{\gamma}''(t)) \cdot \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \overline{\gamma}''_{\epsilon}(t)\right]_{t_1}^{t_2} \\ &+ \left[ \frac{\partial L'''}{\partial \dot{p}}(\gamma'''(t),\dot{\gamma}'''(t)) \cdot \left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \overline{\gamma}'''_{\epsilon}(t)\right]_{t_2}^{b} \end{align} where the prime notation indicates that in each term of the sum we use a different coordinate representation of L, $\gamma$ and $\overline{\gamma}$, since they belong to different charts. In the case where $M = \mathbb{R}^n$ we can use a single chart, so the sum telescopes. But on a general manifold, the sum does not necessarily telescope due to the different coordinate maps. Is there a way to fix this?
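As a sanity check on the $\mathbb{R}^n$ case invoked above, here is a small numerical sketch (my own illustration, not from the question): for $M = \mathbb{R}$ and $L(p,\dot p) = \tfrac12 \dot p^2$, the line $\gamma(t) = t$ satisfies the Euler-Lagrange equation, so the first variation of the action should vanish for a fixed-endpoint deformation such as the hypothetical choice $\overline{\gamma}_\epsilon(t) = t + \epsilon\sin(\pi t)$ on $[0,1]$:

```python
import numpy as np

# Sanity check (illustrative, M = R): for L(p, pdot) = pdot**2 / 2 the curve
# gamma(t) = t solves the Euler-Lagrange equation, so the first variation of
# the action vanishes for gamma_eps(t) = t + eps * sin(pi * t), which fixes
# the endpoints gamma_eps(0) = 0 and gamma_eps(1) = 1 for every eps.

def action(eps, n=20000):
    t = np.linspace(0.0, 1.0, n + 1)
    gamma_dot = 1.0 + eps * np.pi * np.cos(np.pi * t)  # d/dt gamma_eps(t)
    y = 0.5 * gamma_dot**2                             # Lagrangian along the curve
    dt = t[1] - t[0]
    return dt * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoidal rule

h = 1e-4
first_variation = (action(h) - action(-h)) / (2 * h)  # d/d(eps) at eps = 0
print(abs(first_variation) < 1e-6)  # True
```

Here the action works out to $\tfrac12 + \epsilon^2\pi^2/4$, so the $\epsilon$-derivative at $0$ is exactly zero, which the central difference confirms up to floating-point noise.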
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I tried using a very specific counterexample here where I select a surjective function for which the compositions are equal but the functions within are not. This is probably off-base, but it's what I've got so far. Assume $f \circ g = f \circ h$. Consider the surjective function $f:\mathbb{R} \rightarrow \mathbb{R}$ given by $f(x) = x\sin(x)$. Should I prove this is surjective before proceeding? Now take $g \neq h$ given by $g(x) = 0$ and $h(x) = 2\pi$. Can I choose these constant functions? Do I need to define domains and codomains? $(f \circ g)(x) = f(g(x)) = f(0) = 0 \cdot \sin(0) = 0$ $(f \circ h)(x) = f(h(x)) = f(2\pi) = 2\pi \cdot \sin(2\pi) = 0$ Observe that $f \circ g = f \circ h$ yet $g \neq h$. Thus we have given a counterexample to disprove the statement, so surjectivity of $f$ is not a sufficient condition for the statement to be true. I understand the proof completely now and understand I have it correct, thank you for your responses.
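For what it's worth, the counterexample can be checked numerically (a quick sketch, not part of the proof itself):

```python
import math

# Numeric check of the counterexample: f(x) = x*sin(x) with the constant
# functions g = 0 and h = 2*pi. We have g != h, yet f o g and f o h agree
# everywhere, since f(0) = 0 and f(2*pi) = 2*pi*sin(2*pi) = 0.
f = lambda x: x * math.sin(x)
g = lambda x: 0.0
h = lambda x: 2.0 * math.pi

samples = [-3.0, -1.0, 0.0, 0.5, 2.0, 10.0]
print(g(1.0) != h(1.0))                                      # True
print(all(abs(f(g(x)) - f(h(x))) < 1e-12 for x in samples))  # True
```

The tiny tolerance is only there because `math.sin(2*math.pi)` is not exactly zero in floating point.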
8.7 ARIMA modelling in R

How does auto.arima() work? The auto.arima() function in R uses a variation of the Hyndman-Khandakar algorithm (Hyndman & Khandakar, 2008), which combines unit root tests, minimisation of the AICc and MLE to obtain an ARIMA model. The arguments to auto.arima() provide for many variations on the algorithm. What is described here is the default behaviour.

Hyndman-Khandakar algorithm for automatic ARIMA modelling The default procedure uses some approximations to speed up the search. These approximations can be avoided with the argument approximation=FALSE. It is possible that the minimum AICc model will not be found due to these approximations, or because of the use of a stepwise procedure. A much larger set of models will be searched if the argument stepwise=FALSE is used. See the help file for a full description of the arguments.

Choosing your own model If you want to choose the model yourself, use the Arima() function in R. There is another function arima() in R which also fits an ARIMA model. However, it does not allow for the constant \(c\) unless \(d=0\), and it does not return everything required for other functions in the forecast package to work. Finally, it does not allow the estimated model to be applied to new data (which is useful for checking forecast accuracy). Consequently, it is recommended that Arima() be used instead.

Modelling procedure When fitting an ARIMA model to a set of (non-seasonal) time series data, the following procedure provides a useful general approach.

1. Plot the data and identify any unusual observations.
2. If necessary, transform the data (using a Box-Cox transformation) to stabilise the variance.
3. If the data are non-stationary, take first differences of the data until the data are stationary.
4. Examine the ACF/PACF: Is an ARIMA(\(p,d,0\)) or ARIMA(\(0,d,q\)) model appropriate?
5. Try your chosen model(s), and use the AICc to search for a better model.
6. Check the residuals from your chosen model by plotting the ACF of the residuals, and doing a portmanteau test of the residuals. If they do not look like white noise, try a modified model.
7. Once the residuals look like white noise, calculate forecasts.

The Hyndman-Khandakar algorithm only takes care of steps 3–5. So even if you use it, you will still need to take care of the other steps yourself. The process is summarised in Figure 8.11.

Example: Seasonally adjusted electrical equipment orders We will apply this procedure to the seasonally adjusted electrical equipment orders data shown in Figure 8.12. The time plot shows some sudden changes, particularly the big drop in 2008/2009. These changes are due to the global economic environment. Otherwise there is nothing unusual about the time plot and there appears to be no need to do any data adjustments. There is no evidence of changing variance, so we will not do a Box-Cox transformation. The data are clearly non-stationary, as the series wanders up and down for long periods. Consequently, we will take a first difference of the data. The differenced data are shown in Figure 8.13. These look stationary, and so we will not consider further differences. The PACF shown in Figure 8.13 is suggestive of an AR(3) model. So an initial candidate model is an ARIMA(3,1,0). There are no other obvious candidate models. We fit an ARIMA(3,1,0) model along with variations including ARIMA(4,1,0), ARIMA(2,1,0), ARIMA(3,1,1), etc. Of these, the ARIMA(3,1,1) has a slightly smaller AICc value. The ACF plot of the residuals from the ARIMA(3,1,1) model shows that all autocorrelations are within the threshold limits, indicating that the residuals are behaving like white noise. A portmanteau test returns a large p-value, also suggesting that the residuals are white noise. Forecasts from the chosen model are shown in Figure 8.15. 
If we had used the automated algorithm instead, we would have obtained an ARIMA(3,1,0) model using the default settings, but the ARIMA(3,1,1) model if we had set approximation=FALSE. Understanding constants in R A non-seasonal ARIMA model can be written as \[\begin{equation} \tag{8.4} (1-\phi_1B - \cdots - \phi_p B^p)(1-B)^d y_t = c + (1 + \theta_1 B + \cdots + \theta_q B^q)\varepsilon_t, \end{equation}\] or equivalently as \[\begin{equation} \tag{8.5} (1-\phi_1B - \cdots - \phi_p B^p)(1-B)^d (y_t - \mu t^d/d!) = (1 + \theta_1 B + \cdots + \theta_q B^q)\varepsilon_t, \end{equation}\] where \(c = \mu(1-\phi_1 - \cdots - \phi_p )\) and \(\mu\) is the mean of \((1-B)^d y_t\). R uses the parameterisation of Equation (8.5). Thus, the inclusion of a constant in a non-stationary ARIMA model is equivalent to inducing a polynomial trend of order \(d\) in the forecast function. (If the constant is omitted, the forecast function includes a polynomial trend of order \(d-1\).) When \(d=0\), we have the special case that \(\mu\) is the mean of \(y_t\). By default, the Arima() function sets \(c=\mu=0\) when \(d>0\) and provides an estimate of \(\mu\) when \(d=0\). It will be close to the sample mean of the time series, but usually not identical to it as the sample mean is not the maximum likelihood estimate when \(p+q>0\). The argument include.mean only has an effect when \(d=0\) and is TRUE by default. Setting include.mean=FALSE will force \(\mu=c=0\). The argument include.drift allows \(\mu\ne0\) when \(d=1\). For \(d>1\), no constant is allowed as a quadratic or higher order trend is particularly dangerous when forecasting. The parameter \(\mu\) is called the “drift” in the R output when \(d=1\). There is also an argument include.constant which, if TRUE, will set include.mean=TRUE if \(d=0\) and include.drift=TRUE when \(d=1\). If include.constant=FALSE, both include.mean and include.drift will be set to FALSE. 
If include.constant is used, the values of include.mean=TRUE and include.drift=TRUE are ignored. The auto.arima() function automates the inclusion of a constant. By default, for \(d=0\) or \(d=1\), a constant will be included if it improves the AICc value; for \(d>1\) the constant is always omitted. If allowdrift=FALSE is specified, then the constant is only allowed when \(d=0\). Plotting the characteristic roots (This is a more advanced section and can be skipped if desired.) We can re-write Equation (8.4) as \[\phi(B) (1-B)^d y_t = c + \theta(B) \varepsilon_t\] where \(\phi(B)= (1-\phi_1B - \cdots - \phi_p B^p)\) is a \(p\)th order polynomial in \(B\) and \(\theta(B) = (1 + \theta_1 B + \cdots + \theta_q B^q)\) is a \(q\)th order polynomial in \(B\). The stationarity conditions for the model are that the \(p\) complex roots of \(\phi(B)\) lie outside the unit circle, and the invertibility conditions are that the \(q\) complex roots of \(\theta(B)\) lie outside the unit circle. So we can see whether the model is close to the boundaries of invertibility or stationarity by a plot of the roots in relation to the complex unit circle. It is easier to plot the inverse roots instead, as they should all lie within the unit circle. This is easily done in R. For the ARIMA(3,1,1) model fitted to the seasonally adjusted electrical equipment index, we obtain Figure 8.16. The three red dots in the left hand plot correspond to the roots of the polynomial \(\phi(B)\), while the red dot in the right hand plot corresponds to the root of \(\theta(B)\). They are all inside the unit circle, as we would expect because R ensures the fitted model is both stationary and invertible. Any roots close to the unit circle may be numerically unstable, and the corresponding model will not be good for forecasting. 
The auto.arima() function is even stricter, and will not select a model with roots close to the unit circle either. Bibliography Hyndman, R. J., & Khandakar, Y. (2008). Automatic time series forecasting: The forecast package for R. Journal of Statistical Software, 27(1), 1–22. https://doi.org/10.18637/jss.v027.i03
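The inverse-root check described above is easy to sketch outside R as well. Here is a minimal NumPy version with hypothetical AR coefficients $\phi = (0.6, 0.2)$, using the fact that the inverse roots of $\phi(B) = 1 - \phi_1 B - \phi_2 B^2$ are the roots of $z^2 - \phi_1 z - \phi_2$:

```python
import numpy as np

# Sketch of the inverse-root stationarity check (hypothetical AR coefficients
# phi = (0.6, 0.2)). For phi(B) = 1 - 0.6 B - 0.2 B^2 the inverse roots are
# the roots of z**2 - 0.6*z - 0.2; the AR part is stationary when they all
# lie strictly inside the unit circle.
phi = [0.6, 0.2]
inv_roots = np.roots([1.0] + [-p for p in phi])
print(all(abs(r) < 1 for r in inv_roots))  # True
```

The same transformation applied to the MA polynomial $\theta(B)$ gives the inverse roots whose modulus must be below one for invertibility.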
Answer $\theta $ lies in the Third Quadrant or Quadrant-III. Work Step by Step The trigonometric ratios are as follows: $\sin \theta =\dfrac{y}{r} \\ \cos \theta =\dfrac{x}{r} \\ \tan \theta =\dfrac{y}{x}\\ \csc \theta =\dfrac{r}{y} \\ \sec \theta =\dfrac{r}{x} \\ \cot \theta =\dfrac{x}{y}$ where, $ r=\sqrt {x^2+y^2}$ It has been seen that both $ x $ and $ y $ are negative; this implies that the angle $\theta $ lies in the Third Quadrant or Quadrant-III.
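The sign reasoning in the work step can be written as a small helper function (illustrative only): the signs of $\sin\theta = y/r$ and $\cos\theta = x/r$ (with $r > 0$) determine the quadrant.

```python
# Illustrative helper: the signs of sin(theta) = y/r and cos(theta) = x/r
# (with r > 0) pick out the quadrant of theta.
def quadrant(sin_t, cos_t):
    if sin_t > 0 and cos_t > 0:
        return 1  # x > 0, y > 0
    if sin_t > 0 and cos_t < 0:
        return 2  # x < 0, y > 0
    if sin_t < 0 and cos_t < 0:
        return 3  # x < 0, y < 0
    return 4      # x > 0, y < 0

print(quadrant(-0.5, -0.5))  # 3  (both x and y negative: Quadrant III)
```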
8.4 Moving average models Rather than using past values of the forecast variable in a regression, a moving average model uses past forecast errors in a regression-like model.\[ y_{t} = c + \varepsilon_t + \theta_{1}\varepsilon_{t-1} + \theta_{2}\varepsilon_{t-2} + \dots + \theta_{q}\varepsilon_{t-q},\]where \(\varepsilon_t\) is white noise. We refer to this as an MA(\(q\)) model, a moving average model of order \(q\). Of course, we do not observe the values of \(\varepsilon_t\), so it is not really a regression in the usual sense. Notice that each value of \(y_t\) can be thought of as a weighted moving average of the past few forecast errors. However, moving average models should not be confused with the moving average smoothing we discussed in Chapter 6. A moving average model is used for forecasting future values, while moving average smoothing is used for estimating the trend-cycle of past values. Figure 8.6 shows some data from an MA(1) model and an MA(2) model. Changing the parameters \(\theta_1,\dots,\theta_q\) results in different time series patterns. As with autoregressive models, the variance of the error term \(\varepsilon_t\) will only change the scale of the series, not the patterns. It is possible to write any stationary AR(\(p\)) model as an MA(\(\infty\)) model. For example, using repeated substitution, we can demonstrate this for an AR(1) model: \[\begin{align*} y_t &= \phi_1y_{t-1} + \varepsilon_t\\ &= \phi_1(\phi_1y_{t-2} + \varepsilon_{t-1}) + \varepsilon_t\\ &= \phi_1^2y_{t-2} + \phi_1 \varepsilon_{t-1} + \varepsilon_t\\ &= \phi_1^3y_{t-3} + \phi_1^2\varepsilon_{t-2} + \phi_1 \varepsilon_{t-1} + \varepsilon_t\\ &\text{etc.} \end{align*}\] Provided \(-1 < \phi_1 < 1\), the value of \(\phi_1^k\) will get smaller as \(k\) gets larger. So eventually we obtain \[ y_t = \varepsilon_t + \phi_1 \varepsilon_{t-1} + \phi_1^2 \varepsilon_{t-2} + \phi_1^3 \varepsilon_{t-3} + \cdots, \] an MA(\(\infty\)) process. 
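The repeated-substitution argument above can be checked numerically: simulate an AR(1) series and compare it with a truncated MA($\infty$) sum of past errors (a sketch with the arbitrary choice $\phi_1 = 0.6$ and a fixed random seed):

```python
import numpy as np

# Numeric check of the repeated-substitution argument: an AR(1) series with
# phi_1 = 0.6 (arbitrary choice) matches a truncated MA(infinity) sum of
# past errors once phi_1**K is negligible.
rng = np.random.default_rng(0)
phi = 0.6
n = 500
eps = rng.standard_normal(n)

y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]   # AR(1) recursion, y_0 = 0

# y_t ~= sum_{k=0}^{K} phi**k * eps_{t-k}; the truncation error is O(phi**K)
K = 60
t = n - 1
approx = sum(phi**k * eps[t - k] for k in range(K + 1))
print(abs(approx - y[t]) < 1e-9)  # True
```

With $|\phi_1| < 1$ the truncated weights $\phi_1^k$ decay geometrically, which is exactly why the MA($\infty$) representation converges.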
The reverse result holds if we impose some constraints on the MA parameters. Then the MA model is called invertible. That is, we can write any invertible MA(\(q\)) process as an AR(\(\infty\)) process. Invertible models are not simply introduced to enable us to convert from MA models to AR models. They also have some desirable mathematical properties. For example, consider the MA(1) process, \(y_{t} = \varepsilon_t + \theta_{1}\varepsilon_{t-1}\). In its AR(\(\infty\)) representation, the most recent error can be written as a linear function of current and past observations: \[\varepsilon_t = \sum_{j=0}^\infty (-\theta)^j y_{t-j}.\] When \(|\theta| > 1\), the weights increase as lags increase, so the more distant the observations the greater their influence on the current error. When \(|\theta|=1\), the weights are constant in size, and the distant observations have the same influence as the recent observations. As neither of these situations make much sense, we require \(|\theta|<1\), so the most recent observations have higher weight than observations from the more distant past. Thus, the process is invertible when \(|\theta|<1\). The invertibility constraints for other models are similar to the stationarity constraints. For an MA(1) model: \(-1<\theta_1<1\). For an MA(2) model: \(-1<\theta_2<1,~\) \(\theta_2+\theta_1 >-1,~\) \(\theta_1 -\theta_2 < 1\). More complicated conditions hold for \(q\ge3\). Again, R will take care of these constraints when estimating the models.
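The invertibility condition can likewise be illustrated numerically: for an MA(1) with $|\theta| < 1$, the truncated AR($\infty$) sum $\sum_j (-\theta)^j y_{t-j}$ recovers the most recent error (a sketch with the arbitrary choice $\theta = 0.5$):

```python
import numpy as np

# Numeric check of invertibility: for an MA(1) with theta = 0.5 (|theta| < 1),
# the truncated AR(infinity) sum recovers the most recent error eps_t.
rng = np.random.default_rng(1)
theta = 0.5
n = 400
eps = rng.standard_normal(n)

y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = eps[t] + theta * eps[t - 1]   # MA(1): y_t = eps_t + theta*eps_{t-1}

# eps_t ~= sum_{j=0}^{J} (-theta)**j * y_{t-j}; truncation error is O(theta**J)
J = 40
t = n - 1
recovered = sum((-theta)**j * y[t - j] for j in range(J + 1))
print(abs(recovered - eps[t]) < 1e-9)  # True
```

Trying the same reconstruction with $|\theta| > 1$ would make the weights $(-\theta)^j$ grow without bound, which is the failure mode the text describes.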
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why it would be something natural to come up with. The only places I have used it is deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. 
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? 
You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative. 
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle-homs $TM \to E$) Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a level-$0$ exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ wrt a vector field on $M$ means: applying the connection Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. 
We define $P(V,q)$ as the subgroup of units of $Cl(V,q)$ generated by the elements $v$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ generated by those with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\ldots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ. 
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
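The list of subgroup orders can be double-checked by brute force: take the generating sets named above, close them under composition, and count. A small sketch of my own (permutations act on $\{0,1,2,3\}$, so the transposition $(1\,2)$ becomes the tuple `(1, 0, 2, 3)`):

```python
from itertools import permutations  # not strictly needed; kept for experimenting

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def generated(gens):
    # closure of the generating set under composition (a subgroup, since S4 is finite)
    ident = tuple(range(4))
    group, frontier = {ident}, {ident}
    while frontier:
        new = {compose(a, g) for a in frontier for g in gens} - group
        group |= new
        frontier = new
    return group

t  = (1, 0, 2, 3)         # transposition (1 2)
c3 = (1, 2, 0, 3)         # 3-cycle (1 2 3)
c4 = (1, 2, 3, 0)         # 4-cycle (1 2 3 4)

subgroups = {
    1:  generated([tuple(range(4))]),
    2:  generated([t]),
    3:  generated([c3]),
    4:  generated([c4]),
    6:  generated([t, c3]),          # a copy of S3
    8:  generated([c4, (2, 1, 0, 3)]),   # dihedral: 4-cycle plus the transposition (1 3)
    12: generated([c3, (1, 0, 3, 2)]),   # A4: 3-cycle plus a double transposition
    24: generated([t, c4]),          # S4 itself
}
for d, H in subgroups.items():
    assert len(H) == d, (d, len(H))
print(sorted(len(H) for H in subgroups.values()))  # → [1, 2, 3, 4, 6, 8, 12, 24]
```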
First, the equivalence theorem refers to S-matrix elements rather than off-shell n-point functions, or their generator $Z[j]$, which are generally different. What you have to study is the LSZ formula that gives the relation between S-matrix elements and expectation values of time-ordered products of fields (off-shell n-point functions, what one gets after taking derivatives of $Z[j]$ and setting $j=0$). You will see that even though these time-ordered products are different, the S-matrix elements are equal just because the residues of these products in the relevant poles are "equal" (they are strictly equal if the matrix elements of the fields between vacuum and one-particle states ($\langle p|\phi|0\rangle$) are equal; if they are not equal, but both of them are different from zero, one can trivially adapt the LSZ formula to give the same results). Second, the generating functional \begin{equation}Z[j]=\int \mathcal{D}[\phi] \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j(x)\phi(x) \}}\end{equation} is not valid for all action functionals $S$. I will illustrate this with a quantum-mechanical example; the generalization to quantum field theory is trivial. The key point is to notice that the "fundamental" path integral is the phase-space or Hamiltonian path integral, that is, the path integral before integrating out momenta. 
Suppose an action $S[q]=\int L (q, \dot q) \, dt=\int {\dot q^2\over 2}-V(q)\, dt$, then the generating functional of n-point functions is: $$Z[j]\sim\int \mathcal{D}[q] \exp{\{iS(q)+i\int dt\hspace{0.2cm} j(t)q(t) \}}$$ The Hamiltonian that is connected with the action above is $H(p,q)={p^2\over 2}+V(q)$ and the phase-space path integral is:$$Z[j]\sim \int \mathcal{D}[q]\mathcal{D}[ p] \exp{\{i\int p\dot q - H(p,q)\;dt+i\int dt\hspace{0.2cm} j(t)q(t) \}}$$Now, if one performs a change of coordinates $q=x+G(x)$ in the Lagrangian:$$\tilde L(x,\dot x)=L(x+G(x), \dot x(1+G'(x)))={1\over 2}\dot x^2 (1+G'(x))^2-V(x+G(x))$$the Hamiltonian is: $$\tilde H={\tilde p^2\over 2(1+G'(x))^2}+V\left( x+G(x)\right)$$where the momentum is $\tilde p={d\tilde L\over d\dot x }=\dot x \; (1+G'(x))^2$. A change of coordinates implies a change in the canonical momentum and the Hamiltonian. And now the phase-space path integral is:$$W[j]\sim \int \mathcal{D}[x]\mathcal{D}[\tilde p] \exp{\{i\int \tilde p\dot x - \tilde H(\tilde p,x)\;dt+i\int dt\hspace{0.2cm} j(t)x(t) \}}\,,$$as you were probably expecting. However, when one integrates out the momentum, one obtains the Lagrangian version of the path integral:$$W[j]\sim\int \mathcal{D}[x]\;(1+G'(x)) \exp{\{iS[x+G(x)]+i\int dt\hspace{0.2cm} j(t)x(t) \}}$$where $(1+G'(x))$ is just $\det {dq\over dx}$. Thus, your second equation is wrong (if one assumes that the starting kinetic term is the standard one) since the previous determinant is missing. This determinant cancels the determinant in your last equation. Nonetheless, $Z[j]\neq W[j]$, since changing the integration variable in the first equation of this answer gives$$Z[j]\sim\int \mathcal{D}[x]\;(1+G'(x)) \exp{\{iS[x+G(x)]+i\int dt\hspace{0.2cm} j(t)(x(t)+G(x)) \}}$$which does not agree with $W[j]$ due to the term $j(t)(x(t)+G(x))$. 
So both generating functionals of n-point functions are different (but the difference is not the Jacobian), although they give the same S-matrix elements, as I wrote in the first paragraph. Edit: I will clarify the questions in the comments. Let $I=S(\phi)$ be the action functional in Lagrangian form and let's assume that the Lagrangian generating functional is given by$$Z[j]=\int \mathcal{D}[\phi] \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j\phi \}}$$ Obviously, we may change the integration variable $\phi$ without changing the integral. So, if $\phi\equiv \chi + G(\chi)$, one obtains: $$Z[j]=\int \mathcal{D}[\chi]\,\det(1+G'(\chi)) \exp{\{iS(\chi +G(\chi))+i\int d^4x\hspace{0.2cm} j(\chi + G(\chi))\}}$$ If we want to use this generating functional in terms of the field variable $\chi$, the determinant is crucial. If we had started with the action $S'(\chi)=S(\chi +G(\chi))=I$ — without knowing the existence of the field variable $\phi$ — we would have derived the following Lagrangian version of the generating functional:$$Z'[j]=\int \mathcal{D}[\chi]\,\det(1+G'(\chi)) \exp{\{iS'(\chi )+i\int d^4x\hspace{0.2cm}j \chi\}}$$Note that $Z'[j]\neq Z[j]$ (but $Z[j=0]=Z'[j=0]$) and therefore the off-shell n-point functions are different. If we want to see whether these generating functionals give rise to the same S-matrix elements, we can, as always, perform a change of integration variable without changing the functional integral. Let's make the inverse change, that is, $\chi\equiv\phi+F(\phi)$:$$Z'[j]=\int \mathcal{D}[\phi]\, \det(1+F'(\phi)) \det(1+G'(\chi)) \exp{\{iS'(\phi+F(\phi) )+i\int d^4x\hspace{0.2cm} j(\phi + F(\phi))\}}=\int \mathcal{D}[\phi]\, \exp{\{iS(\phi)+i\int d^4x\hspace{0.2cm} j(\phi + F(\phi))\}}$$ So one has to insert the n-point functions connected with $Z[j]$ and $Z'[j]$ into the LSZ formula and analyze whether they give rise to the same S-matrix elements, even though they are different n-point functions. 
(Related question: Scalar Field Redefinition and Scattering Amplitude)This post imported from StackExchange Physics at 2014-03-31 22:22 (UCT), posted by SE-user drake
From the set of numbers $\Bbb S=\{0,1,\dots,m\}$, how many distinct $3\times 3$ unimodular matrices parametrized by $(a,b,c,d,e,f)\in\Bbb S^6$ of the following type can one form? \begin{bmatrix} a^2 &ab &b^2\\ c^2 &cd &d^2\\ e^2 &ef &f^2\\ \end{bmatrix} Is it at least $3m^{2+\beta}$ for some $\beta>0$ when $m\gg0$? From a comment below, the determinant is $$(ad-bc)(af-be)(cf-de).$$ So how many $3$-tuples of $2\times 2$ matrices of the following type, with determinants simultaneously $\pm1$, with entries from $\Bbb S$? $$\begin{bmatrix} a &c\\ b &d \end{bmatrix}\quad \begin{bmatrix} c &e\\ d &f \end{bmatrix}\quad \begin{bmatrix} e &a\\ f &b \end{bmatrix}$$ An example matrix: \begin{bmatrix} 1 &1 &1\\ 9 &6 &4\\ 4 &2 &1\\ \end{bmatrix} has determinant $-1$. Update: As determined below by Kantelope and Neil Strickland, rough asymptotics seem to be at least $3m^2$. Could this be improved to $3m^{2+\beta}$ for some $\beta>0$ when $m\gg0$?
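For small $m$ the count can be brute-forced directly from the determinant. A sketch of my own (all names mine), using the example matrix above as a check:

```python
from itertools import product

def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def squared_matrix(a, b, c, d, e, f):
    # the matrix [[a^2, ab, b^2], [c^2, cd, d^2], [e^2, ef, f^2]]
    return ((a * a, a * b, b * b), (c * c, c * d, d * d), (e * e, e * f, f * f))

def count_unimodular(m):
    # brute-force count of (a, ..., f) in {0, ..., m}^6 with |det| = 1
    return sum(
        abs(det3(squared_matrix(*t))) == 1
        for t in product(range(m + 1), repeat=6)
    )

print(det3(squared_matrix(1, 1, 3, 2, 2, 1)))  # → -1  (the example matrix)
```

For $m=1$ the count is $6$: the three rows must be $(1,0,0)$, $(0,0,1)$, $(1,1,1)$ in some order.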
FAQs and troubleshooting The load carrier rim is not visible in the depth image One common case for missing the rim in depth images takes place when one of the edges of the load carrier is parallel to the baseline of the stereo system. It can easily be determined if this is the case by slightly rotating the load carrier and observing whether the orientation has an effect on the appearance of the object in the depth image. The load carrier is not detected or not detected robustly Is the load carrier fully visible in both the left and right camera image? Does the load carrier rim appear in the depth image? (cfr. Setting up the scene) Is the load carrier inside the region of interest (if specified)? Are the dimensions of the load carrier model correct? (cfr. Configuring the load carrier) The manufacturer's dimensions might not be completely accurate. If the load carrier is not detected, we recommend double-checking the configured dimensions. Additionally, one can try increasing the load_carrier_model_tolerance parameter (e.g. to the maximum value). The load carrier is not placed on a horizontal surface By default, ItemPick assumes that the load carrier is located on a horizontal surface. If that's not the case, one needs to provide the load carrier orientation as a prior with the load carrier model. This is currently not possible in the Web GUI's ItemPick panel. Instead, it can be set via the REST-API interface. Two sample cases are shown in Fig. 24 for the external pose frame (left) and the camera pose frame (right). In the left case of Fig. 24, the load carrier (tilted-load-carrier-ext) is rotated by an angle \(\theta\) around the \(y\) axis of the external coordinate system. The load carrier orientation is given by the following quaternion: \(\left[0, \sin(\theta/2), 0, \cos(\theta/2)\right]\). Request to the REST-API for configuring tilted-load-carrier-ext Here we make the assumption that tilted-load-carrier-ext has the same dimensions as my-load-carrier-1 (cfr. 
Configuring the load carrier) and that the angle \(\theta\) is 30 deg. To trigger the set_load_carrier service via the REST-API for tilted-load-carrier-ext, one needs to send a PUT request to the URL http://<rc-visard-ip>/api/v1/nodes/rc_itempick/services/set_load_carrier, where <rc-visard-ip> should be replaced by the actual IP of the rc_visard. The PUT body should include the following data, in JSON: { "args": { "load_carrier": { "id": "tilted-load-carrier-ext", "outer_dimensions": { "x": 0.4, "y": 0.3, "z": 0.22 }, "inner_dimensions": { "x": 0.37, "y": 0.27, "z": 0.215 }, "pose_frame": "external", "pose": { "orientation": { "x": 0, "y": 0.25882, "z": 0, "w": 0.96593 } } } } } In the right case of Fig. 24, the load carrier (tilted-load-carrier-cam) is parallel to the image plane. Its orientation in the camera coordinate system is \(\left[\sqrt{2}/2, -\sqrt{2}/2, 0, 0\right]\). Request to the REST-API for configuring tilted-load-carrier-cam Here we make the assumption that tilted-load-carrier-cam has the same dimensions as my-load-carrier-1 (cfr. Configuring the load carrier). To trigger the set_load_carrier service via the REST-API for tilted-load-carrier-cam, one needs to send a PUT request to the URL http://<rc-visard-ip>/api/v1/nodes/rc_itempick/services/set_load_carrier, where <rc-visard-ip> should be replaced by the actual IP of the rc_visard. The PUT body should include the following data, in JSON: { "args": { "load_carrier": { "id": "tilted-load-carrier-cam", "outer_dimensions": { "x": 0.4, "y": 0.3, "z": 0.22 }, "inner_dimensions": { "x": 0.37, "y": 0.27, "z": 0.215 }, "pose_frame": "camera", "pose": { "orientation": { "x": 0.70711, "y": -0.70711, "z": 0, "w": 0 } } } } } The load carrier is deformed One can try increasing the load_carrier_model_tolerance parameter (e.g. to the maximum value). For significantly deformed load carriers, the detection algorithm might not provide reliable results. This can for example be the case for cardboard boxes after several uses. 
An alternative for such cases is to fix the load carrier placement and manually select a region of interest inside the load carrier. The load carrier floor is detected as load carrier content This means that either the load carrier \(z\) inner dimension is too large or the reconstruction of the load carrier floor is noisy. To improve the detection result as shown in Fig. 25, two options are available: Decrease the load carrier \(z\) inner dimension Increase the load_carrier_crop_distance parameter (recommended for noisy data) Objects on the load carrier floor are not detected as load carrier content The load_carrier_crop_distance parameter is too large. There are multiple load carriers of the same type in the scene In the current implementation, ItemPick detects one load carrier with each compute_grasps or detect_load_carriers request. If there are multiple load carriers of the same type in the scene, we recommend specifying one or more regions of interest, each one including one load carrier instance. My load carrier doesn’t move. How do I speed up my application? There are no grasps detected on the objects Do objects appear in the depth image? (an additional pattern projector might be needed) Do the workpieces appear in the depth image without holes? (cfr. Configuring image parameters) Are the workpieces inside the region of interest (if specified)? Is the load carrier detected (if specified)? Is the object smaller than the cluster_maximum_dimension value? There are too many grasps on one single object The clustering_surface_max_rmse and cluster_maximum_curvature parameters should be increased.
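The quaternion in the tilted-load-carrier-ext request above can be generated programmatically instead of copied by hand. The following sketch is not from the manual (sending the PUT with an HTTP client is left out); it only rebuilds the request body for a rotation by \(\theta\) around the \(y\) axis:

```python
import json
import math

# rotation by theta around the y axis → quaternion [0, sin(theta/2), 0, cos(theta/2)]
theta = math.radians(30)
orientation = {
    "x": 0.0,
    "y": round(math.sin(theta / 2), 5),   # sin(15°) = 0.25882
    "z": 0.0,
    "w": round(math.cos(theta / 2), 5),   # cos(15°) = 0.96593
}
payload = {
    "args": {
        "load_carrier": {
            "id": "tilted-load-carrier-ext",
            "outer_dimensions": {"x": 0.4, "y": 0.3, "z": 0.22},
            "inner_dimensions": {"x": 0.37, "y": 0.27, "z": 0.215},
            "pose_frame": "external",
            "pose": {"orientation": orientation},
        }
    }
}
print(json.dumps(payload))  # body for the PUT to .../services/set_load_carrier
```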
I need to find the order of the minimum $k = k(n)$ such that the probability of having at least one $k$-clique in a random graph $G(n, \frac{1}{2})$ is $\mathcal{O}(\frac{1}{n})$. $X_k$ is the random variable which counts the number of $k$-cliques in a random graph. I already know $E[X_k] = \binom{n}{k}(\frac{1}{2})^{\binom{k}{2}}$. I don't know exactly how to find the exact value of $k$. I know $P[X_k \geq 1] \leq E[X_k]$ by Markov's inequality but I'm not sure that this is helpful. Thanks in advance The value is $2\log_2 n - 2\log_2\log_2 n + O(1)$. Let $k_0$ be the maximal value such that the expected number of cliques of size $k_0$ is at least 1. A boring calculation shows that $k_0 = 2\log_2 n - 2\log_2\log_2 n + O(1)$. It is a classical result that with high probability, $G(n,1/2)$ contains a $(k_0-1)$-clique. This implies that $k > k_0-1$. On the other hand, it is known that the expected number of cliques of size $k_0+1+C$ is $O((\log n/n)^C)$ for any constant $C$ (this is because the expected number of cliques drops by $\Theta(\log n/n)$ near $k_0$). This shows that $k \leq k_0+3$. We conclude that $k_0 \leq k \leq k_0+3$. You can probably decrease the length of this interval by 1, following the proof of the classical result, which shows concentration of the clique number on two values rather than three.
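The "boring calculation" for $k_0$ is easy to probe numerically: work with $\log_2 E[X_k] = \log_2\binom{n}{k} - \binom{k}{2}$ and find the largest $k$ where it is non-negative. A sketch of my own (function names are mine):

```python
import math

def log2_expected(n, k):
    # log2 of E[X_k] = C(n, k) * 2^{-C(k, 2)}; logs avoid astronomically large numbers
    return math.log2(math.comb(n, k)) - math.comb(k, 2)

def k0(n):
    # largest k with E[X_k] >= 1, i.e. log2 E[X_k] >= 0
    k = 1
    while log2_expected(n, k + 1) >= 0:
        k += 1
    return k

n = 1000
estimate = 2 * math.log2(n) - 2 * math.log2(math.log2(n))
print(k0(n), round(estimate, 2))  # → 15 13.3  (the gap is the O(1) term)
```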
I would sincerely appreciate it if anyone could tell me how to solve for $g(x)$, defined by the following functional equation: $h(t) = \int_0^t f(2t-x)g(x)dx$ for $0\leq t < \infty$, where: $f(x)$ is a known function (actually a probability density function defined on $[0,\infty)$ with finite first and second order moments), $h(t)$ is a known function, and $g(x)$, the unknown function, is a pdf defined on $[0,\infty)$ with finite first and second order moments. Assume $f(x)=g(x)=h(x)=0$ when $x<0$ and $f(x) \geq 0$, $g(x)\geq 0$, and $h(x)\geq 0$ when $x \geq 0$. I understand basic convolution stuff (Laplace transforms etc.) but have minimal exposure to functional analysis. The formulation looks somewhat similar to a convolution but is not exactly the same. Also it seems related to the Wiener-Hopf integral equation but I am unfamiliar with it. A numerical solution is OK. But any kind of analytic insight or closed-form solution under a specific class of functions would be very beneficial. In particular I am interested in the case when $f(t)$ follows a (truncated) Gaussian distribution. Any textbook/web link/paper recommendation is highly appreciated. Many thanks!
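Since a numerical solution is acceptable: this is a first-kind Volterra-type equation, so one naive approach is to discretize with the trapezoid rule and solve by forward substitution. Everything below is my own sketch (names included); it assumes $f(t)\neq 0$ on the grid and that $g(0)$ is supplied, e.g. from $g(0)=h'(0)/f(0)$:

```python
import math

def solve_g(f, h, T, n, g0):
    """Sketch: recover g on [0, T] from h(t) = ∫₀ᵗ f(2t - x) g(x) dx.

    Trapezoidal discretization on n steps, solved by forward substitution.
    Assumes f(t) != 0 on (0, T] and g0 = g(0).
    """
    dt = T / n
    t = [i * dt for i in range(n + 1)]
    g = [g0]
    for i in range(1, n + 1):
        # h(t_i) ≈ dt * (½ f(2t_i) g_0 + Σ_{0<j<i} f(2t_i - t_j) g_j + ½ f(t_i) g_i)
        s = 0.5 * f(2 * t[i]) * g[0]
        s += sum(f(2 * t[i] - t[j]) * g[j] for j in range(1, i))
        g.append((h(t[i]) / dt - s) / (0.5 * f(t[i])))
    return t, g
```

As a check, $f(x)=e^{-x}$, $g(x)=e^{-x}$ gives $h(t)=te^{-2t}$, and the scheme recovers $g$ on $[0,1]$ essentially exactly (the integrand happens to be constant in $x$, so the trapezoid rule is exact here). First-kind Volterra discretizations can be ill-conditioned in general, so treat this only as a starting point.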
Continuity and Differentiability Exponential and Logarithmic Functions The exponential function with positive base $b > 1$ is the function $y = f(x) = b^x$. Some of the salient features of the exponential functions: (i) Domain of the exponential function is $\Bbb R$, the set of all real numbers. (ii) Range of the exponential function is the set of all positive real numbers. (iii) The point $(0, 1)$ is always on the graph of the exponential function (this is a restatement of the fact that $b^0 = 1$ for any real $b > 1$). (iv) The exponential function is ever increasing; i.e., as we move from left to right, the graph rises. (v) For very large negative values of $x$, the exponential function is very close to $0$. In other words, in the second quadrant, the graph approaches the $x$-axis (but never meets it). Let $b > 1$ be a real number. Then we say the logarithm of $a$ to base $b$ is $x$ if $b^x = a$. The logarithm of $a$ to base $b$ is denoted by $\log_b a$. Thus $\log_b a = x$ if $b^x = a$. Some of the important observations about the logarithm function to any base $b > 1$ are listed below: (i) We cannot make a meaningful definition of the logarithm of non-positive numbers and hence the domain of the log function is $\Bbb R^+$. (ii) The range of the log function is the set of all real numbers. (iii) The point $(1, 0)$ is always on the graph of the log function. (iv) The log function is ever increasing, i.e., as we move from left to right the graph rises. (v) For $x$ very near to zero, the value of $\log x$ can be made smaller than any given real number. In other words, in the fourth quadrant the graph approaches the $y$-axis (but never meets it). Derivative of exponential functions: • $\frac{d}{dx}(e^{x})=e^{x}$ • $\frac{d}{dx}(a^{x})=a^{x} \log a$ Derivative of logarithmic functions: • $\frac{d}{dx}(\log x)=\frac{1}{x}$ • $\frac{d}{dx}\left(\log_{a}{x}\right)=\frac{1}{x\cdot \log_{e}{a}}$
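These derivative rules are easy to sanity-check with a central finite difference (my own snippet, not part of the original notes):

```python
import math

def numderiv(f, x, h=1e-6):
    # central difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

a, x = 3.0, 1.5
assert abs(numderiv(math.exp, x) - math.exp(x)) < 1e-6            # d/dx e^x = e^x
assert abs(numderiv(math.log, x) - 1 / x) < 1e-6                  # d/dx log x = 1/x
assert abs(numderiv(lambda t: a**t, x) - a**x * math.log(a)) < 1e-4   # d/dx a^x = a^x log a
assert abs(numderiv(lambda t: math.log(t, a), x) - 1 / (x * math.log(a))) < 1e-6
```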
Home > Time-dependent CP-violation sensitivity at the Belle II experiment BELLE2-TALK-DRAFT-2018-061 BELLE2-TALK-DRAFT-2018-062 BELLE2-TALK-CONF-2018-050 Fernando Abudinén 15 June 2018 SPP2018 Abstract: Time dependent CP-violation phenomena are a powerful tool to precisely measure fundamental parameters of the Standard Model and search for New Physics. The Belle II experiment is a substantial upgrade of the Belle detector and will operate at the SuperKEKB energy-asymmetric e^+ e^- collider. The accelerator has already successfully completed the first phase of commissioning in 2016 and first electron positron collisions in Belle II are expected for April 2018. The design luminosity of SuperKEKB is 8 \times 10^{35} cm^{-2}s^{-1} and the Belle II experiment aims to record 50 ab^{-1} of data, a factor of 50 more than the Belle experiment. This dataset will greatly improve the present knowledge, particularly on the CKM angles \beta and \alpha by measuring a wide spectrum of B-meson decays, including many with neutral particles in the final state. In this talk we will present estimates of the sensitivity to \beta in the golden channels B\to c\bar cs and in the penguin-dominated modes B^0\to\eta’ K^0, \phi K^0, K_S\pi^0(\gamma). A study for the time-dependent analysis of B^0\to\pi^0\pi^0, relevant for the measurement of \alpha, and feasible only in the clean environment of an e^+ e^- collider, will also be given. Keyword(s): time-dependent CPV Note: 20 min
(2) is the only correct answer. Merge sort makes $\Theta(n \log n)$ comparisons. Here, we are comparing strings of length $n$, and comparing two length-$n$ strings takes $\Theta(n)$ time in the worst case. Therefore, the running time is $O(n^2 \log n)$. It is easy to arrange a set of inputs that makes Merge sort take this long. For instance, consider an input that contains $n$ identical strings; or $n$ strings that all start with the same common prefix (where this common prefix has length $n/2$, say). Then each comparison will take $\Theta(n)$ time, and Merge sort will do $\Theta(n \log n)$ comparisons, for a total of $\Theta(n^2 \log n)$ running time. Consequently, (2) is correct, and none of the other answers are correct. Your reasoning is incorrect. In general, merge sort makes $\Theta(n\log n)$ comparisons, and runs in $\Theta(n \log n)$ time if each comparison can be done in $O(1)$ time. Therefore, if for some reason we were promised that comparing two strings could be done $O(1)$, all the options would be correct: they would all give an upper bound on the running time of merge sort. $O$ is used to give upper bounds, so $O(n^2)$ is also a correct upper bound for any algorithm whose running time is $O(n \log n)$ (since $n\log n \leq n^2$). In this case, the problem statement didn't make that promise, and in the worst case comparing two strings can take $\Theta(n)$ time, so answers (1), (3), and (4) are not correct. There's an important difference between $O$-notation, $\Theta$-notation, and $\Omega$-notation; they are not equivalent. The problem statement doesn't say anything about in-place merge-sort, so I'm not sure why you're bringing that up in your answer. That seems irrelevant. Anyway, while the standard merge sort algorithm is not in-place, there exist ways to do in-place merge-sort with the same asymptotic running time as the standard merge sort algorithm.
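To make the comparison count concrete, here is a toy merge sort instrumented to count string comparisons (my own sketch, not from the answer). On $n$ identical length-$n$ strings it performs $(n/2)\log_2 n$ comparisons for $n$ a power of two, and each comparison scans the whole string:

```python
def merge_sort(a, counter):
    # counter is a 1-element list accumulating the number of string comparisons
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1              # one string comparison, Θ(n) characters worst case
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

strings = ["a" * 64] * 64            # n = 64 identical strings of length 64
counter = [0]
merge_sort(strings, counter)
print(counter[0])  # → 192, i.e. (n/2)·log2 n comparisons, each costing 64 character reads
```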
Lemma. (Transitivity)"$\leq_p$" is a transitive relation on languages, i.e., if $L_1 \leq_p L_2$ and $L_2 \leq_p L_3$, then $L_1 \leq_p L_3$. Proof.By definition, there are poly-time functions $f$ and $g$ such that $x \in L_1 \Leftrightarrow f(x) \in L_2$ and $y \in L_2 \Leftrightarrow g(y) \in L_3$, thus $x \in L_1 \Leftrightarrow f(x) \in L_2 \Leftrightarrow g(f(x)) \in L_3$. Obviously, $g(f(\cdot))$ is poly-time (since $|f(x)|$ is polynomial in $|x|$). This lemma proves that "$\leq_p$" is transitive, but how would I prove that it is not antisymmetric?
Let $X_i$ be the result of the $i$-th die. Thesis: $$P\left(\sum_{i=1}^{N} X_i = k\right)\text{ is equal to the coefficient of $z^k$ in }\left(\frac{z^1+\ldots +z^6}{6}\right)^N. \quad\quad\quad(1) $$ Proof by induction: Base of induction: $P(X_1 = k) = \frac{1}{6}$ for $1 \leq k \leq 6$ and $0$ otherwise. Indeed this is the case for $\left(\frac{z^1+\ldots+z^6}{6}\right)^1$. Assumption of induction: thesis (1) holds for $1, 2, \ldots, N$. Hypothesis of induction: thesis (1) holds for $N+1$. Step of induction: We know that$$ \left(\frac{z^1+\ldots+z^6}{6}\right)^{N+1} = \left(\frac{z^1+\ldots+z^6}{6}\right)^N\left(\frac{z^1+\ldots+z^6}{6}\right) \quad\quad\quad(2)$$By the assumption we know that the first part of the right-hand side represents the probability distribution of the sum of $N$ dice and the second part represents a single die. Using the notation I've used in another answer we can observe that the right-hand side can be written as $W_{X_1 + \ldots + X_N} \cdot W_{X_{N+1}}$ and that equals $W_{X_1 + \ldots + X_N + X_{N+1}}$ by the formulas I have derived there. But the definition of this polynomial is $$ W_{X_1 + \ldots + X_N + X_{N+1}}(z) = \sum_k P(X_1 + \ldots + X_N + X_{N+1} = k) z^k $$ so the coefficient at $z^k$ is $P(X_1 + \ldots + X_N + X_{N+1} = k)$, which is precisely the induction hypothesis, and that completes the induction step. By the method of induction that completes the proof of (1) for all $N \geq 1$. Afterword: It is important that different dice get different $X$-es, because this way you say that those results are independent. Having just one $X$ would make it possible to derive that you have only 6 possible answers: $N, 2N, 3N, \ldots, 6N$, as it would mean just taking the very same result of the single die $N$ times. (Notation $W_{X+X}$ in one of the previous comments is just wrong and shouldn't have happened. That ought to be $W_{X_1 + X_2}$ naturally.) 
In conclusion, I think I overdid it a little, but I guess in this case more explicit is better than less explicit, so please bear with me. Also there may be some typos, so watch out!
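The induction can also be cross-checked numerically for small $N$: convolve the single-die polynomial $N$ times and compare coefficient-by-coefficient against brute-force enumeration of all rolls. A sketch of mine, using exact rational arithmetic:

```python
from itertools import product
from fractions import Fraction

def dice_poly(N):
    # coefficients of ((z^1 + ... + z^6)/6)^N, by repeated convolution
    poly = {0: Fraction(1)}
    for _ in range(N):
        new = {}
        for k, c in poly.items():
            for face in range(1, 7):
                new[k + face] = new.get(k + face, Fraction(0)) + c / 6
        poly = new
    return poly

N = 3
poly = dice_poly(N)

# brute force: P(sum = s) over all 6^N equally likely outcomes
counts = {}
for rolls in product(range(1, 7), repeat=N):
    s = sum(rolls)
    counts[s] = counts.get(s, 0) + 1
brute = {s: Fraction(c, 6 ** N) for s, c in counts.items()}

assert poly == brute                     # coefficient of z^k equals P(sum = k)
assert poly[3] == Fraction(1, 216)       # only (1,1,1) gives a sum of 3
```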
Suppose $A,A_1,\ldots,A_{n-2}$ (resp. $B$) are (resp. is) real positive-definite (resp. arbitrary) symmetric $n\times n$ matrices and denote by $D(\cdot,\ldots,\cdot)$ the mixed discriminant. We have the following well-known Aleksandrov-Fenchel inequality \begin{equation}\label{e} D(A,B,A_1,\ldots,A_{n-2})^2\geq D(A,A,A_1,\ldots,A_{n-2})D(B,B,A_1,\ldots,A_{n-2}), \end{equation} with equality iff $A$ and $B$ are proportional. Now my question comes: does this inequality and equality case still hold if we assume these matrices are complex Hermitian matrices rather than real symmetric ones? I guess this is the case but I am not able to find a reference. The standard textbook of Schneider only treats the real symmetric case. Many thanks in advance!
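One can at least probe the Hermitian case numerically before hunting for a reference. For $n=2$ the mixed discriminant can be read off from determinants, since $\det(A+B) = D(A,A) + 2D(A,B) + D(B,B)$ with $D(A,A)=\det A$. The sketch below (all names mine) tests random Hermitian $2\times 2$ pairs:

```python
import random

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mixed(A, B):
    # for 2x2 matrices: det(A + B) = det A + 2 D(A, B) + det B
    S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
    return (det2(S) - det2(A) - det2(B)) / 2

def random_hermitian(rng):
    a, d = rng.uniform(-2, 2), rng.uniform(-2, 2)
    z = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    return [[complex(a), z], [z.conjugate(), complex(d)]]

rng = random.Random(0)
violations = 0
for _ in range(500):
    A = random_hermitian(rng)
    # make A positive definite: shift the diagonal (Gershgorin bound)
    shift = abs(A[0][0]) + abs(A[1][1]) + 2 * abs(A[0][1]) + 0.1
    A[0][0] += shift; A[1][1] += shift
    B = random_hermitian(rng)
    lhs = mixed(A, B).real ** 2
    rhs = (det2(A) * det2(B)).real
    if lhs < rhs - 1e-9:
        violations += 1
print(violations)  # → 0, consistent with the inequality holding in the Hermitian 2x2 case
```

Of course this is only evidence for $n=2$, not a proof for general Hermitian matrices.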
1. Homework Statement Find the inertia tensor for a uniform, thin hollow cone, such as an ice cream cone, of mass M, height h, and base radius R, spinning about its pointed end. 2. Homework Equations [itex]I_{zz} = \sum m(x^{2}+y^{2})[/itex] [itex]\rho = \sqrt{x^{2}+y^{2}}[/itex] 3. The Attempt at a Solution I first tried to think of this as a bunch of little rings, [itex]area = 2\pi\rho\, dz[/itex]. As rho changes with height, I defined rho as [itex]\rho=\frac{z R}{h}[/itex], but how would I set up this integral? My book shows a few examples of doing it with a solid but I don't know how to do it with an area. So far I have [itex]\int \frac{2\pi R^{3}}{h^{3}} z^{3}dz[/itex] I know that [itex]I_{zz}= \frac{MR^{2}}{2}[/itex], I just need help getting started
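A possible setup, treating the cone as a stack of rings (my own working, so double-check it): a ring at height $z$ has radius $\rho = zR/h$ and slant width $dl = \sqrt{1+R^2/h^2}\,dz$, and the surface density is $\sigma = M/(\pi R\sqrt{R^2+h^2})$ (mass over lateral area). Then

```latex
\begin{align*}
dm &= \sigma\, 2\pi\rho\, dl
    = \sigma\, 2\pi \frac{zR}{h}\sqrt{1+\frac{R^2}{h^2}}\, dz, \\
I_{zz} &= \int \rho^2\, dm
    = 2\pi\sigma \left(\frac{R}{h}\right)^{3}\sqrt{1+\frac{R^2}{h^2}} \int_0^h z^3\, dz
    = \frac{\pi\sigma}{2}\, R^3 \sqrt{h^2+R^2}
    = \frac{M R^2}{2},
\end{align*}
```

which matches the expected answer; the key point is that the area element of a ring on the cone carries the slant factor $\sqrt{1+R^2/h^2}$, not just $dz$.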
In this topic https://physics.stackexchange.com/questions/129417/what-is-pseudo-tensor one answer was the following: The action of parity on a tensor or pseudotensor depends on the number of indices it has (i.e. its tensor rank): - Tensors of odd rank (e.g. vectors) reverse sign under parity. - Tensors of even rank (e.g. scalars, linear transformations, bivectors, metrics) retain their sign under parity. - Pseudotensors of odd rank (e.g. pseudovectors) retain their sign under parity. - Pseudotensors of even rank (e.g. pseudoscalars) reverse sign under parity. But I don't understand one thing. Is that statement only for Euclidean three dimensions? I attempted to understand it myself, and these are my thoughts. A pseudotensor is determined as: $$\hat{P}^{i_1\ldots i_q}_{\,j_1\ldots j_p} = (-1)^A A^{i_1} {}_{k_1}\cdots A^{i_q} {}_{k_q} B^{l_1} {}_{j_1}\cdots B^{l_p} {}_{j_p} P^{k_1\ldots k_q}_{l_1\ldots l_p} $$ where ##(-1)^A = \mathrm{sign}(\det(A^{i_q} {}_{k_q})) = \pm{1}## Let's consider a pseudovector in Euclidean three dimensions. Then the parity transformation ##(A^{i} {}_{k})## is \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} And ##(-1)^A=-1## Let's consider a pseudovector in Euclidean four dimensions. Then ##(A^{i} {}_{k})## is \begin{pmatrix} -1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 &0 \\ 0 & 0 & 0 & -1 \end{pmatrix} And ##(-1)^A=1## Let's consider a pseudovector in Minkowski space. Then ##(A^{i} {}_{k})## is \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 &0 \\ 0 & 0 & 0 & -1 \end{pmatrix} And ##(-1)^A=-1## Am I right?
Residuals for Fitted Point Process Model

Given a point process model fitted to a point pattern, compute residuals.

Usage

# S3 method for ppm
residuals(object, type="raw", …, check=TRUE, drop=FALSE,
          fittedvalues=NULL, new.coef=NULL, dropcoef=FALSE, quad=NULL)

Arguments

object The fitted point process model (an object of class "ppm") for which residuals should be calculated.

type String indicating the type of residuals to be calculated. Current options are "raw", "inverse", "pearson" and "score". A partial match is adequate.

… Ignored.

check Logical value indicating whether to check the internal format of object. If there is any possibility that this object has been restored from a dump file, or has otherwise lost track of the environment where it was originally computed, set check=TRUE.

drop Logical value determining whether to delete quadrature points that were not used to fit the model. See quad.ppm for explanation.

fittedvalues Vector of fitted values for the conditional intensity at the quadrature points, from which the residuals will be computed. For expert use only.

new.coef Optional. Numeric vector of coefficients for the model, replacing coef(object). See the section on Modified Residuals below.

dropcoef Internal use only.

quad Optional. Data specifying how to re-fit the model. A list of arguments passed to quadscheme. See the section on Modified Residuals below.

Details

This function computes several kinds of residuals for the fit of a point process model to a spatial point pattern dataset (Baddeley et al, 2005). Use plot.msr to plot the residuals directly, or diagnose.ppm to produce diagnostic plots based on these residuals.

The argument object must be a fitted point process model (object of class "ppm"). Such objects are produced by the maximum pseudolikelihood fitting algorithm ppm. This fitted model object contains complete information about the original data pattern.
Residuals are attached both to the data points and to some other points in the window of observation (namely, to the dummy points of the quadrature scheme used to fit the model). If the fitted model is correct, then the sum of the residuals over all (data and dummy) points in a spatial region \(B\) has mean zero. For further explanation, see Baddeley et al (2005).

The type of residual is chosen by the argument type. Current options are

"raw": the raw residuals $$ r_j = z_j - w_j \lambda_j $$ at the quadrature points \(u_j\), where \(z_j\) is the indicator equal to 1 if \(u_j\) is a data point and 0 if \(u_j\) is a dummy point; \(w_j\) is the quadrature weight attached to \(u_j\); and $$\lambda_j = \hat\lambda(u_j,x)$$ is the conditional intensity of the fitted model at \(u_j\). These are the spatial analogue of the martingale residuals of a one-dimensional counting process.

"inverse": the `inverse-lambda' residuals (Baddeley et al, 2005) $$ r^{(I)}_j = \frac{r_j}{\lambda_j} = \frac{z_j}{\lambda_j} - w_j $$ obtained by dividing the raw residuals by the fitted conditional intensity. These are a counterpart of the exponential energy marks (see eem).

"pearson": the Pearson residuals (Baddeley et al, 2005) $$ r^{(P)}_j = \frac{r_j}{\sqrt{\lambda_j}} = \frac{z_j}{\sqrt{\lambda_j}} - w_j \sqrt{\lambda_j} $$ obtained by dividing the raw residuals by the square root of the fitted conditional intensity. The Pearson residuals are standardised, in the sense that if the model (true and fitted) is Poisson, then the sum of the Pearson residuals in a spatial region \(B\) has variance equal to the area of \(B\).

"score": the score residuals (Baddeley et al, 2005) $$ r^{(S)}_j = (z_j - w_j \lambda_j) x_j $$ obtained by multiplying the raw residuals \(r_j\) by the covariates \(x_j\) for quadrature point \(j\). The score residuals always sum to zero.

The result of residuals.ppm is a measure (object of class "msr").
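The three simplest residual types can be illustrated outside R. The following Python sketch (an illustration of the formulas above, not the spatstat implementation) evaluates them at a list of quadrature points:

```python
import math

def ppm_residuals(z, w, lam, kind="raw"):
    """z[j]: 1 for data points, 0 for dummy points; w[j]: quadrature weights;
    lam[j]: fitted conditional intensity at quadrature point j."""
    out = []
    for zj, wj, lj in zip(z, w, lam):
        if kind == "raw":
            out.append(zj - wj * lj)          # r_j = z_j - w_j * lambda_j
        elif kind == "inverse":
            out.append(zj / lj - wj)          # raw residual divided by lambda_j
        elif kind == "pearson":
            out.append(zj / math.sqrt(lj) - wj * math.sqrt(lj))
    return out

# one data point and one dummy point, fitted intensity 2 and weight 0.5 at both
print(ppm_residuals([1, 0], [0.5, 0.5], [2, 2]))  # raw: [0.0, -1.0]
```

Note that at the data point the raw residual vanishes exactly when the weighted fitted intensity matches the indicator, which is the "mean zero" property described above.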
Use plot.msr to plot the residuals directly, or diagnose.ppm to produce diagnostic plots based on these residuals. Use integral.msr to compute the total residual.

By default, the window of the measure is the same as the original window of the data. If drop=TRUE then the window is the domain of integration of the pseudolikelihood or composite likelihood. This only matters when the model object was fitted using the border correction: in that case, if drop=TRUE the window of the residuals is the erosion of the original data window by the border correction distance rbord.

Value

An object of class "msr" representing a signed measure or vector-valued measure (see msr). This object can be plotted.

Modified Residuals

Sometimes we want to modify the calculation of residuals by using different values for the model parameters. This capability is provided by the arguments new.coef and quad.

If new.coef is given, then the residuals will be computed by taking the model parameters to be new.coef. This should be a numeric vector of the same length as the vector of fitted model parameters coef(object).

If new.coef is missing and quad is given, then the model parameters will be determined by re-fitting the model using a new quadrature scheme specified by quad. Residuals will be computed for the original model object using these new parameter values. The argument quad should normally be a list of arguments in name=value format that will be passed to quadscheme (together with the original data points) to determine the new quadrature scheme. It may also be a quadrature scheme (object of class "quad") to which the model should be fitted, or a point pattern (object of class "ppp") specifying the dummy points in a new quadrature scheme.

References

Baddeley, A., Turner, R., Moller, J. and Hazelton, M. (2005) Residual analysis for spatial point processes. Journal of the Royal Statistical Society, Series B 67, 617--666.

Baddeley, A., Moller, J. and Pakes, A.G.
(2008) Properties of residuals for spatial point processes. Annals of the Institute of Statistical Mathematics 60, 627--649.

See Also

Aliases

residuals.ppm

Examples

# NOT RUN {
fit <- ppm(cells, ~x, Strauss(r=0.15))
# Pearson residuals
rp <- residuals(fit, type="pe")
rp

# simulated data
X <- rStrauss(100,0.7,0.05)
# fit Strauss model
fit <- ppm(X, ~1, Strauss(0.05))
res.fit <- residuals(fit)
# check that total residual is 0
integral.msr(residuals(fit, drop=TRUE))

# true model parameters
truecoef <- c(log(100), log(0.7))
res.true <- residuals(fit, new.coef=truecoef)
# }

Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
L # 1

Show that
$$\frac{1}{\log_a(abc)}+\frac{1}{\log_b(abc)}+\frac{1}{\log_c(abc)}=1.$$

It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Last edited by krassi_holmz (2006-03-09 02:44:53)

IPBLE: Increasing Performance By Lowering Expectations.

L # 2

If
$$\frac{\log x}{b-c}=\frac{\log y}{c-a}=\frac{\log z}{a-b},$$
show that $xyz=1$.

Let log x = x', log y = y', log z = z'. Then: x' + y' + z' = 0. Rewriting in terms of x' gives:

IPBLE: Increasing Performance By Lowering Expectations.

Well done, krassi_holmz!

L # 3

If x²y³ = a and log(x/y) = b, then what is the value of (log x)/(log y)?

log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b)/(log a - 2b).

Last edited by krassi_holmz (2006-03-10 20:06:29)

IPBLE: Increasing Performance By Lowering Expectations.

Very well done, krassi_holmz!
L # 4

Prove that $\log_2 3 > \log_3 5$ without using a calculator or log tables.

You are not supposed to use a calculator or log tables for L # 4. Try again!

Last edited by JaneFairfax (2009-01-04 23:40:20)

No, I didn't. I remembered the values.

You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again: no calculators or log tables to be used (directly or indirectly) at all!!

Last edited by JaneFairfax (2009-01-06 00:30:04)

Hi ganesh, for L # 1: since $\log_b a = 1/\log_a b$, we have $$\frac{1}{\log_a(abc)}+\frac{1}{\log_b(abc)}+\frac{1}{\log_c(abc)} = \log_{abc}a+\log_{abc}b+\log_{abc}c = \log_{abc}(abc) = 1.$$

Best Regards
Riad Zaidan

Hi ganesh, for L # 2: I think that the following proof is easier. Assume $\frac{\log x}{b-c}=\frac{\log y}{c-a}=\frac{\log z}{a-b}=t$. So $\log x=t(b-c)$, $\log y=t(c-a)$, $\log z=t(a-b)$. So $\log x+\log y+\log z=tb-tc+tc-ta+ta-tb=0$. So $\log(xyz)=0$, so $xyz=1$. Q.E.D.

Best Regards
Riad Zaidan

Gentlemen, thanks for the proofs. Regards.

$\log_2(16) = \log_2 \left( \frac{64}{4} \right) = \log_2(64) - \log_2(4) = 6 - 2 = 4$, $\log_2(\sqrt[3]{4}) = \frac{1}{3} \log_2(4) = \frac{2}{3}$.
L # 4

I don't want a method that relies on defining certain functions, taking derivatives, noting concavity, etc.

Change of base turns the claim $\log_2 3 > \log_3 5$ into $1 > \log_3 5 \cdot \log_3 2$: each side is positive, and multiplying by the positive denominator keeps the direction of the alleged inequality the same. On the right-hand side, the first factor ($\log_3 2$) is a positive number less than 1, while the second factor ($\log_3 5$) is a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms. Because of (log A)B = B(log A) = log(A^B), I may turn this into: I need to show that $\log_3 5 < \tfrac{3}{2}$ and $\log_3 2 < \tfrac{2}{3}$. Then 1 (on the left-hand side) will be greater than the value on the right-hand side, and the truth of the original inequality will be established.

I want to show $\log_3 5 < \tfrac{3}{2}$. Raise a base of 3 to each side: $5 < 3^{3/2}$. Each side is positive, and I can square each side: $25 < 27$, which is true.

-----------------------------------------------------------------------------------

Then I want to show that when 2 is raised to a number equal to (or less than) 1.5, it is less than 3, i.e. $2^{3/2} < 3$ (equivalently $\log_3 2 < \tfrac{2}{3}$). Each side is positive, and I can square each side: $8 < 9$, which is true.

Last edited by reconsideryouranswer (2011-05-27 20:05:01)

Signature line: I wish I had a more interesting signature line.

Hi reconsideryouranswer,

This problem was posted by JaneFairfax. I think it would be appropriate that she verify the solution.

Hi all, I saw this post today and saw the problems on logs. Well, they are not bad, they are good. But you can also try these problems here by me (Credit: to a book): http://www.mathisfunforum.com/viewtopic … 93#p399193

Practice makes a man perfect.
There is no substitute to hard work. All of us do not have equal talents but everybody has equal opportunities to build their talents. - APJ Abdul Kalam

JaneFairfax, here is a basic proof of L # 4:

For all real a > 1, y = a^x is a strictly increasing function.

$\log_2 3$ versus $\log_3 5$
$2\log_2 3$ versus $2\log_3 5$
$\log_2 9$ versus $\log_3 25$

Since $2^3 = 8 < 9$, we have $\log_2 9 > 3$. Since $25 < 27 = 3^3$, we have $\log_3 25 < 3$.

So the left-hand side is greater than the right-hand side, because its logarithm is the larger number.
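For readers who do allow a calculator, here is a quick numeric check of L # 3 and L # 4 (a check, not a proof; the sample values x = 4, y = 2 are my own choice):

```python
import math

# L # 3: if x^2 * y^3 = a and log(x/y) = b, then log x / log y = (log a + 3b)/(log a - 2b)
x, y = 4.0, 2.0
a = x**2 * y**3          # a = 128
b = math.log10(x / y)    # b = log 2
lhs = math.log10(x) / math.log10(y)
rhs = (math.log10(a) + 3 * b) / (math.log10(a) - 2 * b)
assert abs(lhs - rhs) < 1e-12   # both sides equal 2 for this choice of x, y

# L # 4: log_2 3 > log_3 5  (roughly 1.585 vs 1.465)
assert math.log(3, 2) > math.log(5, 3)
```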
Let $\ \mathbf N = \{1\ 2\ \ldots\}\ $ be the set of natural numbers. Let $\ f : \mathbf N\rightarrow\mathbf N\ $ be an arbitrary function, and $\ \forall_{n\in\mathbf N}\, F(n)\ :=\ \max_{k = 1\ldots n}\, f(k)$. Let's assume that, with respect to a fixed universal Turing machine, there exists at least one algorithm which computes $\ f,\ $ and let $\ ||f||_A(n)\ $ be the number of Turing operations which compute $\ f(n)\ $ by algorithm $\ A$. By a polynomial $\ \mathbf N\rightarrow\mathbf N\ $ I mean a function which differs from a (true real) polynomial by less than 1 for almost all natural numbers $\ n\in\mathbf N$.

DEFINITION 1 Function $\ f\ $ is called a fast counter $\ \Leftarrow:\Rightarrow\ $ there exists an algorithm $\ A\ $ and a polynomial $\ p : \mathbf N\rightarrow \mathbf N\ $ such that $$\forall_{n\in\mathbf N}\ \ ||f||_A(n)\ \le\ \frac{p(n)}{n!}\cdot F(n) $$

DEFINITION 2 Function $\ f\ $ is called a slow counter $\ \Leftarrow:\Rightarrow\ $ for every algorithm $\ A\ $ there exists a polynomial $\ q : \mathbf N\rightarrow \mathbf N\ $ such that $$\forall_{n\in\mathbf N}\ \ ||f||_A(n)\ \ge\ \frac{F(n)}{n!\cdot q(n)}$$

DEFINITION 3 Function $\ f\ $ is called an algorithmic counter $\ \Leftarrow:\Rightarrow\ \ f\ $ is both a fast and a slow counter.

QUESTION Let $\ pos(n)\ $ be the number of all partial orders on the integer interval $\ \{0\ \ldots\ n\!-\!1\}.\ $ Is the function $\ pos\ $ an algorithmic counter? A similar question holds for the number of quasi-orders (i.e. of topologies).
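For orientation, $pos(n)$ can be computed by brute force for tiny $n$; the values below agree with the known counts of labelled posets (1, 3, 19 for n = 1, 2, 3, OEIS A001035). This is only an illustrative Python sketch of the definition, not an efficient counter:

```python
from itertools import product

def pos(n):
    """Count partial orders on {0, ..., n-1} by brute force (exponential in n^2)."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    count = 0
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, b in zip(pairs, bits) if b}
        rel |= {(i, i) for i in range(n)}  # reflexivity is forced
        # antisymmetry: no distinct i, j with both (i,j) and (j,i)
        if any((j, i) in rel for (i, j) in rel if i != j):
            continue
        # transitivity: (i,j) and (j,k) imply (i,k)
        if any((i, k) not in rel
               for (i, j) in rel for (j2, k) in rel if j == j2):
            continue
        count += 1
    return count

print([pos(n) for n in range(1, 4)])  # [1, 3, 19]
```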
1. Homework Statement

What is the electric field at a point on the central axis of a solid, uniformly charged cylinder of radius R and length h?

2. Homework Equations

Well, I've set up the triple integral and have gotten to this point: $$\bar{E}_{z}=\frac{2\pi\rho}{4\pi\epsilon_{0}}\int^{h/2}_{-h/2}\left[1-\frac{z'-z}{\sqrt{(z'-z)^{2}+R^{2}}}\right]dz$$ where ##z'## is the location of the test point. When I integrate this I get a constant term and a mess of logarithms. I know that the field should be 0 when ##z' = 0##, but it doesn't check out. What's wrong?
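A numeric sanity check supports the suspicion. With the bracketed integrand exactly as written, a midpoint-rule quadrature (hypothetical Python, my own helper names) gives h rather than 0 at z' = 0, which is consistent with the integral not checking out:

```python
def integrand(z, zp, R):
    # bracketed term from the post: 1 - (z' - z) / sqrt((z' - z)^2 + R^2)
    return 1 - (zp - z) / ((zp - z) ** 2 + R ** 2) ** 0.5

def bracket_integral(zp, R, h, n=20000):
    # midpoint rule over z in [-h/2, h/2]
    dz = h / n
    return sum(integrand(-h / 2 + (k + 0.5) * dz, zp, R) for k in range(n)) * dz

print(bracket_integral(0.0, 1.0, 2.0))  # ≈ 2.0, i.e. h, not 0
```

At z' = 0 the (z' - z) piece is odd in z and cancels over the symmetric interval, so the constant 1 is what survives.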
Tagged: abelian group

Abelian Group Problems and Solutions. The other popular topics in Group Theory are:

Problem 616 Suppose that $p$ is a prime number greater than $3$. Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$. (c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.

If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order

Problem 575 Let $G$ be a finite group of order $2n$. Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$. Then prove that $H$ is an abelian normal subgroup of odd order.

Problem 497 Let $G$ be an abelian group. Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$. Also determine whether the statement is true if $G$ is a non-abelian group.

Problem 434 Let $R$ be a ring with $1$. A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.) (a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator. (b) Determine all the irreducible $\Z$-modules.

Problem 420 In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.

Problem. Let $G$ be a finite abelian group of order $n$.
If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.

Problem 343 Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$. Let $\Aut(N)$ be the group of automorphisms of $N$. Suppose that the orders of the groups $G/N$ and $\Aut(N)$ are relatively prime. Then prove that $N$ is contained in the center of $G$.
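Several of these statements lend themselves to quick numeric spot checks. For instance, Problem 616 for small primes, sketched in Python (a check for specific p, not a proof; helper name is my own):

```python
def check_squares(p):
    """Spot-check Problem 616 for the multiplicative group (Z/p)*, p prime > 3."""
    G = set(range(1, p))
    S = {(x * x) % p for x in G}
    # (a) closure under multiplication suffices for a subgroup of a finite group
    closed = all((a * b) % p in S for a in S for b in S)
    # (b) the index [G : S]
    index = len(G) // len(S)
    # (c) -1 is p-1 mod p; if -1 is not a square, each a has a in S or -a in S
    if (p - 1) in S:
        dichotomy = True  # hypothesis of (c) fails; nothing to check
    else:
        dichotomy = all(a in S or (p - a) in S for a in G)
    return closed, index, dichotomy

print(check_squares(7))   # (True, 2, True)
print(check_squares(11))  # (True, 2, True)
```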
Category: Group Theory

Group Theory Problems and Solutions. Popular posts in Group Theory are:

Problem 625 Let $G$ be a group and let $H_1, H_2$ be subgroups of $G$ such that $H_1 \not \subset H_2$ and $H_2 \not \subset H_1$. (a) Prove that the union $H_1 \cup H_2$ is never a subgroup of $G$. (b) Prove that a group cannot be written as the union of two proper subgroups.

Problem 613 Let $m$ and $n$ be positive integers such that $m \mid n$. (a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective. (d) Determine the group structure of the kernel of $\phi$.
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$ even if there is no polynomial over the field with Galois group $S_n$

Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period, but upon a Google search I don't see the correct sort of equation when I enter "variable period cosine"

@LucasHenrique I hate them, I tend to find algebraic proofs are more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$

I enjoyed developing the Lebesgue integral though. I thought that was cool

But since every singleton except 0 is open, and the union of open sets is open, it follows all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these 3 classes of intervals as a base which then intersect to give the nonzero singletons?

uh wait a sec... ... I need arbitrary intersection to produce singletons from open intervals... hmm...

0 does not even have a nbhd, since any set containing 0 is closed. I have no idea how to deal with points having an empty nbhd

o wait a sec... the open set of any topology must contain the whole set itself, so I guess the nbhd of 0 is $\Bbb{R}$

Btw, looking at this picture, I think the alternate name for this class of topologies, called the British rail topology, is quite fitting (with the help of this WfSE to interpret of course mathematica.stackexchange.com/questions/3410/…). Since, as Leaky has noticed, every point is closest to 0 other than itself, to get from A to B, go to 0. The null line is then like a railway line which connects all the points together in the shortest time

So going from a to b directly is no more efficient than going from a to 0 and then 0 to b

hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$

$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$

so the distance of travel depends on where the starting point is. If the starting point is 0, then distance only increases linearly for every unit increase in the value of the destination. But if the starting point is nonzero, then the distance increases quadratically

Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination, then, say the travelling speed is 3 ms-1, for every meter forward the actual distance covered at 3 ms-1 decreases (as illustrated by the shrinking open ball of fixed radius). Only when travelling via the origin does such a quadratic penalty in travelling distance not apply

More interesting things can be said about slight generalisations of this metric:

Hi, looking at a graph isomorphism problem from the perspective of eigenspaces of the adjacency matrix, it gets a geometrical interpretation: the question of whether two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ... To test if two sets of points differ by rotation, I thought to describe them as an intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then a generalization of the characteristic polynomial would allow us to test if our sets differ by rotation ...

1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, having the well known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...

Any alg geom guys on? I know zilch about alg geom to even start analysing this question

Meanwhile I am going to analyse the SR metric later using open balls after the chat proceeds a bit

To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric of Semi-Riemannian manifolds.
In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — balu Apr 13 at 18:24

grr, thought I could get some more intuition in SR by using open balls

tbf there's actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two. The first two statements go like this: Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$. And all it has in the proof is the assertion that the above is clearly true.

I've a mesh specified as a half edge data structure; more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, and I was wondering if there's some well k...

Consider $a,b$ both irrational and the interval $[a,b]$

Assuming the axiom of choice and CH, I can define an $\aleph_1$ enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$

It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However the rationals are countable, thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint

This means we can only have countably many disjoint open intervals such that some irrationals are not in the union, but uncountably many of them will be

If I consider an open cover of the rationals in [0,1], the sum of whose lengths is less than $\epsilon$, and then I consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals. One way for an irrational number $\alpha$ to be in this new set is b...
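The positive-semidefiniteness claim quoted a few messages above can at least be spot-checked numerically. A Python sketch (my own helper names) verifies one direction: the cosine matrix built from three actual unit vectors passes a principal-minors PSD test, while an unrealizable angle triple (all pairwise angles $\pi$) fails it:

```python
def gram(vectors):
    """Gram matrix of unit vectors: entries are cosines of pairwise angles."""
    return [[sum(x * y for x, y in zip(u, v)) for v in vectors] for u in vectors]

def is_psd_3x3(M, tol=1e-12):
    """A symmetric 3x3 matrix is PSD iff all principal minors are non-negative."""
    minors = [M[i][i] for i in range(3)]
    minors += [M[i][i] * M[j][j] - M[i][j] ** 2
               for i in range(3) for j in range(3) if i < j]
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] ** 2)
           - M[0][1] * (M[0][1] * M[2][2] - M[1][2] * M[0][2])
           + M[0][2] * (M[0][1] * M[1][2] - M[1][1] * M[0][2]))
    minors.append(det)
    return all(m >= -tol for m in minors)

# three unit vectors -> their cosine matrix must be PSD
vs = [(1.0, 0.0, 0.0), (0.6, 0.8, 0.0), (0.0, 0.6, 0.8)]
print(is_psd_3x3(gram(vs)))  # True
# angles a = b = c = pi are impossible for three unit vectors
print(is_psd_3x3([[1, -1, -1], [-1, 1, -1], [-1, -1, 1]]))  # False
```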
Suppose you take an open interval I of length 1, divide it into countable sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals. Since all the rationals are covered, it seems that the sub-intervals (if they don't overlap) are separated by at most a single irrat...

(For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs)

Let $\lambda^*$ be the Lebesgue outer measure

We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with the open cover $(-a,a)$ for some $a \in [0,1]$ and then noting there are nested open intervals whose infimum tends to zero.

We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consisting of overlapping open intervals, then subtract away the overlapping open intervals to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…

We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$.

We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$ and thus deduce $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$

However, what I am interested in here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets.
This then becomes the focus of the investigation to be written out below:

We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages. First denote an enumeration of the rationals as follows: $\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:

Actually wait: since, as the sequence grows, any rational of the form $\frac{p}{q}$ where $|p-q| > 1$ will be somewhere in between two consecutive terms of the sequence $\{\frac{n+1}{n+2}-\frac{n}{n+1}\}$, and the latter tends to zero as $n \to \infty$, it follows all intervals will have an infimum of zero

However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the union of them all is nonzero. Need to figure out how this works...

Let's say that for $N$ clients, Lotta will take $d_N$ days to retire. For $N+1$ clients, clearly Lotta will have to make sure all of the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously that client won't feel mistreated anymore. But all the first $N$ clients are now mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. And therefore we have the recurrence $d_{N+1} = 2d_N + 1$, where $d_1 = 1$. Yet we have $1 \to 2 \to 1$, which has $3 = d_2 \neq 2^2$ steps.
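The recurrence $d_{N+1} = 2d_N + 1$ with $d_1 = 1$ closes to $d_N = 2^N - 1$, which is consistent with the $d_2 = 3$ count above (a quick Python check):

```python
def days(n):
    # d_1 = 1, d_{N+1} = 2 * d_N + 1
    d = 1
    for _ in range(n - 1):
        d = 2 * d + 1
    return d

print([days(n) for n in range(1, 6)])                  # [1, 3, 7, 15, 31]
print(all(days(n) == 2**n - 1 for n in range(1, 10)))  # True
```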
3.1 Some simple forecasting methods

Some forecasting methods are extremely simple and surprisingly effective. We will use the following four forecasting methods as benchmarks throughout this book.

Average method

Here, the forecasts of all future values are equal to the average (or “mean”) of the historical data. If we let the historical data be denoted by \(y_{1},\dots,y_{T}\), then we can write the forecasts as \[ \hat{y}_{T+h|T} = \bar{y} = (y_{1}+\dots+y_{T})/T. \] The notation \(\hat{y}_{T+h|T}\) is a short-hand for the estimate of \(y_{T+h}\) based on the data \(y_1,\dots,y_T\).

Naïve method

For naïve forecasts, we simply set all forecasts to be the value of the last observation. That is, \[ \hat{y}_{T+h|T} = y_{T}. \] This method works remarkably well for many economic and financial time series. Because a naïve forecast is optimal when data follow a random walk (see Section 8.1), these are also called random walk forecasts.

Seasonal naïve method

A similar method is useful for highly seasonal data. In this case, we set each forecast to be equal to the last observed value from the same season of the year (e.g., the same month of the previous year). Formally, the forecast for time \(T+h\) is written as \[ \hat{y}_{T+h|T} = y_{T+h-m(k+1)}, \] where \(m=\) the seasonal period, and \(k\) is the integer part of \((h-1)/m\) (i.e., the number of complete years in the forecast period prior to time \(T+h\)). This looks more complicated than it really is. For example, with monthly data, the forecast for all future February values is equal to the last observed February value. With quarterly data, the forecast of all future Q2 values is equal to the last observed Q2 value (where Q2 means the second quarter). Similar rules apply for other months and quarters, and for other seasonal periods.
Drift method

A variation on the naïve method is to allow the forecasts to increase or decrease over time, where the amount of change over time (called the drift) is set to be the average change seen in the historical data. Thus the forecast for time \(T+h\) is given by \[ \hat{y}_{T+h|T} = y_{T} + \frac{h}{T-1}\sum_{t=2}^T (y_{t}-y_{t-1}) = y_{T} + h \left( \frac{y_{T} -y_{1}}{T-1}\right). \] This is equivalent to drawing a line between the first and last observations, and extrapolating it into the future.

Examples

Figure 3.1 shows the first three methods applied to the quarterly beer production data.

# Set training data from 1992 to 2007
beer2 <- window(ausbeer, start=1992, end=c(2007,4))
# Plot some forecasts
autoplot(beer2) +
  autolayer(meanf(beer2, h=11), series="Mean", PI=FALSE) +
  autolayer(naive(beer2, h=11), series="Naïve", PI=FALSE) +
  autolayer(snaive(beer2, h=11), series="Seasonal naïve", PI=FALSE) +
  ggtitle("Forecasts for quarterly beer production") +
  xlab("Year") + ylab("Megalitres") +
  guides(colour=guide_legend(title="Forecast"))

In Figure 3.2, the non-seasonal methods are applied to a series of 200 days of the Google daily closing stock price.

autoplot(goog200) +
  autolayer(meanf(goog200, h=40), series="Mean", PI=FALSE) +
  autolayer(rwf(goog200, h=40), series="Naïve", PI=FALSE) +
  autolayer(rwf(goog200, drift=TRUE, h=40), series="Drift", PI=FALSE) +
  ggtitle("Google stock (daily ending 6 Dec 2013)") +
  xlab("Day") + ylab("Closing Price (US$)") +
  guides(colour=guide_legend(title="Forecast"))

Sometimes one of these simple methods will be the best forecasting method available; but in many cases, these methods will serve as benchmarks rather than the method of choice. That is, any forecasting methods we develop will be compared to these simple methods to ensure that the new method is better than these simple alternatives. If not, the new method is not worth considering.
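The four benchmark formulas translate directly into code. The following Python sketch is illustrative only (it is not the forecast package's implementation, and the function names are my own):

```python
def mean_forecast(y, h):
    """Average method: every forecast is the historical mean."""
    return [sum(y) / len(y)] * h

def naive_forecast(y, h):
    """Naive method: every forecast equals the last observation."""
    return [y[-1]] * h

def snaive_forecast(y, h, m):
    """Seasonal naive: index T + j - m(k+1), with k = integer part of (j-1)/m."""
    T = len(y)
    return [y[T + j - m * ((j - 1) // m + 1) - 1] for j in range(1, h + 1)]

def drift_forecast(y, h):
    """Drift: extrapolate the line through the first and last observations."""
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return [y[-1] + j * slope for j in range(1, h + 1)]

y = [2, 4, 6, 8, 3, 5, 7, 9]          # two "years" of quarterly data
print(snaive_forecast(y, 4, 4))        # [3, 5, 7, 9]: last same-quarter values
print(drift_forecast([2, 4, 6], 2))    # [8.0, 10.0]
```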
On Monday, Celestalon kicked off the official Alpha Theorycrafting season by posting a Theorycrafting Discussion thread on the forums. And he was kind enough to toss a meaty chunk of information our way about Resolve, the replacement for Vengeance. Resolve: Increases your healing and absorption done to yourself, based on Stamina and damage taken (before avoidance and mitigation) in the last 10 sec. In today’s post, I want to go over the mathy details about how Resolve works, how it differs from Vengeance, and how it may (or may not) fix some of the problems we’ve discussed in previous blog posts. Mathemagic Celestalon broke the formula up into two components: one from stamina and one from damage taken. But for completeness, I’m going to bolt them together into one formula for resolve $R$: $$ R =\frac{\rm Stamina}{250~\alpha} + 0.25\sum_i \frac{D_i}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t_i )}{10} \right )$$ where $D_i$ is an individual damage event that occurred $\Delta t_i$ seconds ago, and $\alpha$ is a level-dependent constant, with $\alpha(100)=261$. The sum is carried out over all damaging events that have happened in the last 10 seconds. The first term in the equation is the stamina-based contribution, which is always active, even when out of combat. There’s a helpful buff in-game to alert you to this: My premade character has 1294 character sheet stamina, which after dividing by 250 and $\alpha(90)=67$, gives me 0.07725, or about 7.725% Resolve. It’s not clear at this point whether the tooltip is misleadingly rounding down to 7% (i.e. using floor instead of round) or whether Resolve is only affected by the stamina from gear. The Alpha servers went down as I was attempting to test this, so we’ll have to revisit it later. We’ve already been told that this will update dynamically with stamina buffs, so having Power Word: Fortitude buffed on you mid-combat will raise your Resolve. 
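As a sanity check on that arithmetic, the stamina term is a one-liner. A Python sketch using the numbers quoted above (α(90) = 67 and 1294 character sheet stamina):

```python
def stamina_resolve(stamina, alpha):
    """Baseline Resolve from the first term: Stamina / (250 * alpha)."""
    return stamina / (250 * alpha)

r = stamina_resolve(1294, 67)   # premade character at level 90
print(f"{r:.5%}")               # about 7.725% Resolve, matching the buff tooltip
```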
Once you’re in combat and taking damage, the second term makes a contribution: I’ve left this term in roughly the form Celestalon gave, even though it can obviously be simplified considerably by combining all of the constants, because this form does a better job of illustrating the behavior of the mechanic. Let’s ignore the sum for now, and just consider an isolated damage event that does $D$ damage: $$0.25\times\frac{D}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t )}{10} \right )$$ The 0.25 just moderates the amount of Resolve you get from damaging attacks. It’s a constant multiplicative factor that they will likely tweak to achieve the desired balance between baseline (stamina-based) Resolve and dynamic (damage-based) Resolve. The factor of $D/{\rm MaxHealth}$ means that we’re normalizing the damage by our max health. So if we have 1000 health and take an attack that deals 1000 damage (remember, this is before mitigation), this term gives us a factor of 1. Avoided auto-attacks also count here, though instead of performing an actual damage roll the game just uses the mean value of the boss’s auto-attack damage. Again, nothing particularly complicated here, it just makes Resolve depend on the percentage of your health the attack would have removed rather than the raw damage amount. Also note that we’ve been told that dynamic health effects from temporary multipliers (e.g. Last Stand) aren’t included here, so we’re not punished for using temporary health-increasing cooldowns. The term in parentheses is the most important part, though. In the instant the attack lands, $\Delta t=0$ and the term in parentheses evaluates to $2(10-0)/10 = 2.$ So that attack dealing 1000 damage to our 1000-health tank would give $0.25\times 1 \times 2 = 0.5,$ or 50% Resolve. However, one second later, $\Delta t = 1$, so the term in parentheses is only $2(10-1)/10 = 1.8$, and the amount of resolve it grants is reduced to 45%. 
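That decay is easy to tabulate. A Python sketch of the single-event term, using the worked example above (the function name is mine):

```python
def event_resolve(damage, max_health, dt):
    """Resolve granted by one damage event that landed dt seconds ago.
    Events 10 or more seconds old contribute nothing."""
    if dt >= 10:
        return 0.0
    return 0.25 * (damage / max_health) * (2 * (10 - dt) / 10)

print(event_resolve(1000, 1000, 0))   # 50% Resolve the instant the hit lands
print(event_resolve(1000, 1000, 1))   # 45% one second later
print(event_resolve(1000, 1000, 10))  # fully decayed after ten seconds
```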
The amount of Resolve granted continues to linearly decrease as time passes, and by the time ten seconds have elapsed it’s reduced to zero. Each attack is treated independently, so to get our total Resolve from all damage taken we just have to add up the Resolve granted by every attack we’ve taken, hence the sum in my equation. You may note that the time-average of the term in parentheses is 1, which is how we get the advertised “averages to ~Damage/MaxHealth” that Celestalon mentioned in the post. In that regard, he’s specifically referring to just the part within the sum, not the constant factor of 0.25 outside of it. So in total, your average Resolve contribution from damage is 25% of Damage/MaxHealth. Comparing to Vengeance Mathematically speaking, there’s a world of difference between Resolve and Vengeance. First and foremost is the part we already knew: Resolve doesn’t grant any offensive benefit. We’ve talked about that a lot before, though, so it’s not territory worth re-treading. Even in the defensive component though, there are major differences. Vengeance’s difference equation, if solved analytically, gives solutions that are exponentials. In other words, provided you were continuously taking damage (such that it didn’t fall off entirely), Vengeance would decay and adjust to your new damage intake rather smoothly. It also meant that damage taken at the very beginning of an encounter was still contributing some amount of Vengeance at the very end, again, assuming there was no interruption. And since it was only recalculated on a damage event, you could play some tricks with it, like taking a giant attack that gave you millions of Vengeance and then riding that wave for 20 seconds while your co-tank takes the boss. Resolve does away with all of that. 
It flat-out says “look, the only thing that matters is the last 10 seconds.” The calculation doesn’t rely on a difference equation, meaning that when recalculating, it doesn’t care what your Resolve was at any time previously. And it forces a recalculation at fixed intervals, not just when you take damage. As a result, it’s much harder to game than Vengeance was. Celestalon’s post also outlines a few other significant differences: No more ramp-up mechanism No taunt-transfer mechanism Resolve persists through shapeshifts Resolve only affects self-healing and self-absorbs The lack of ramp-up and taunt-transfer mechanisms may at first seem like a problem. But in practice, I don’t think we’ll miss either of them. Both of these effects served offensive (i.e. threat) and defensive purposes, and it’s pretty clear that the offensive purposes are made irrelevant by definition here since Resolve won’t affect DPS/threat. The defensive purpose they served was to make sure you had some Vengeance to counter the boss’s first few hits, since Vengeance had a relatively slow ramp-up time but the boss’s attacks did not. However, Resolve ramps up a lot faster than Vengeance does. Again, this is in part thanks to the fact that it isn’t governed by a difference equation. The other part is because it only cares about the last ten seconds. To give you a visual representation of that, here’s a plot showing both Vengeance and Resolve for a player being attacked by a boss. The tank has 100 health and the boss swings for 30 raw damage every 1.5 seconds. Vengeance is shown in arbitrary units here since we’re not interested in the exact magnitude of the effect, just in its dynamic properties. I’ve also ignored the baseline (stamina-based) contribution to Resolve for the same reason. 
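The Resolve half of that plot is easy to re-create. Here is a toy Python version of the same setup (100-health tank, 30 raw damage every 1.5 seconds, recalculation every 0.5 seconds, stamina term ignored); this is my own reconstruction, not the game's code:

```python
def resolve_at(t, hit_times, damage, max_health):
    """Sum the contributions of every hit from the last 10 seconds."""
    total = 0.0
    for th in hit_times:
        dt = t - th
        if 0 <= dt < 10:
            total += 0.25 * (damage / max_health) * (2 * (10 - dt) / 10)
    return total

hits = [i * 1.5 for i in range(21)]    # boss swings every 1.5 s for 30 s
ticks = [i * 0.5 for i in range(61)]   # Resolve recalculated every 0.5 s
curve = [resolve_at(t, hits, 30, 100) for t in ticks]

peak = max(curve[20:])   # steady-state crest, reached after ~10 s of combat
print(round(peak, 4))    # 0.5775, i.e. a ~58% healing modifier at the crest
```

Plotting `curve` against `ticks` reproduces the sawtooth: Resolve jumps at each swing and bleeds off linearly between swings.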
As a final note, while the blog post says that Resolve is recalculated every second, it seemed like it was updating closer to every half-second when I fooled with it on alpha, so these plots use 0.5-second update intervals. Changing to 1-second intervals doesn’t significantly change the results (they just look a little more fragmented). The plot very clearly shows the 50% ramp-up mechanism and slow decay-like behavior of Vengeance. Note that while the ramp-up mechanism gets you to 50% of Vengeance’s overall value at the first hit (at t=2.5 seconds), Resolve hits this mark as soon as the second hit lands (at 4.0 seconds) despite not having any ramp-up mechanism. Resolve also hits its steady-state value much more quickly than Vengeance does. By definition, Resolve gets there after about 10 seconds of combat (t=12.5 seconds). But with Vengeance, it takes upwards of 30-40 seconds to even approach the steady-state value thanks to the decay effect (again, a result of the difference equation used to calculate Vengeance). Since most fights involve tank swaps more frequently than this, it meant that you were consistently getting stronger the longer you tanked a boss. This in turn helped encourage the sort of “solo-tank things that should not be solo-tanked” behavior we saw in Mists. This plot assumes a boss who does exactly 30 damage per swing, but in real encounters the boss’s damage varies. Both Vengeance and Resolve adapt to mimic that change in the tank’s damage intake, but as you could guess, Resolve adapts much more quickly. If we allow the boss to hit for a random amount between 20 and 40 damage: You can certainly see the similar changes in both curves, but Resolve reacts quickly to each change while Vengeance changes rather slowly. One thing you’ve probably noticed by now is that the Resolve plot looks very jagged (in physics, we might call this a “sawtooth wave”). This happens because of the linear decay built into the formula. 
It peaks in the instant you take the attack – or more accurately, in the instant that Resolve is recalculated after that attack. But then every time it’s recalculated it linearly decreases by a fixed percent. If the boss swings in 1.5-second intervals, then Resolve will zig-zag between its max value and 85% of its max value in the manner shown. The more frequently the boss attacks, the smoother that zig-zag becomes; conversely, a boss with a long swing timer will cause a larger variation in Resolve. This is apparent if we adjust the boss’s swing timer in either direction: It’s worth noting that every plot here has a new randomly-generated sequence of attacks, so don’t be surprised that the plots don’t have the same profile as the original. The key difference is the size of the zig-zag on the Resolve curve. I’ve also run simulations where the boss’ base damage is 50 rather than 30, but apart from the y-axis having large numbers there’s no real difference: Note that even a raw damage of 50% is pretty conservative for a boss – heroic bosses in Siege have frequently had raw damages that were larger than the player’s health. But it’s not clear if that will still be the case with the new tanking and healing paradigm that’s been unveiled for Warlords. If we make the assumption that raw damage will be lower, then these rough estimates give us an idea of how large an effect Resolve will be. If we guess at a 5%-10% baseline value from stamina, these plots suggest that Resolve will end up being anywhere from a 50% to 200% modifier on our healing. In other words, it has the potential to double or triple our healing output with the current tuning numbers. Of course, it’s anyone’s guess as to whether those numbers are even remotely close to what they’ll end up being by the end of beta. Is It Fixed Yet? If you look back over our old blog posts, the vast majority of our criticisms of Vengeance had to do with its tie-in to damage output. 
Those have obviously been addressed, which leaves me worrying that I’ll have nothing to rant about for the next two or three years. But regarding everything else, I think Resolve stands a fair chance of addressing our concerns. One of the major issues with Vengeance was the sheer magnitude of the effect – you could go from having 50k AP to 600k AP on certain bosses, which meant your abilities got up to 10x more effective. Even though that’s an extreme case, I regularly noted having over 300k AP during progression bosses, a factor of around 6x improvement. Resolve looks like it’ll tamp down on that some. Reasonable bosses are unlikely to grant a multiplier larger than 2x, which will be easier to balance around. It hasn’t been mentioned specifically in Celestalon’s post, but I think it’s a reasonable guess that they will continue to disable Resolve gains from damage that could be avoided through better play (i.e. intentionally “standing in the bad”). If so, there will be little (if any) incentive to take excess damage to get more Resolve. Our sheer AP scaling on certain effects created situations where this was a net survivability gain with Vengeance, but the lower multiplier should make that impossible with Resolve. While I still don’t think it needs to affect anything other than active mitigation abilities, the fact that it’s a multiplier affecting everything equally rather than a flat AP boost should make it easier to keep talents with different AP coefficients balanced (Eternal Flame and Sacred Shield, specifically). And we already know that Eternal Flame is losing its Bastion of Glory interaction, another change which will facilitate making both talents acceptable choices. All in all, I think it’s a really good system, if slightly less transparent. It’s too soon to tell whether we’ll see any unexpected problems, of course, but the mechanic doesn’t have any glaring issues that stand out upon first examination (unlike Vengeance). 
I still have a few lingering concerns about steady-state threat stability between tanks (ironically, due to the removal of Vengeance), but that is the sort of thing which will become apparent fairly quickly during beta testing, and at any rate shouldn’t reflect on the performance of Resolve.
Prove that $\sum_{n=1}^{n}\dfrac{1}{n^2}<2$. Feel free to post your innovative methods! I have posted mine below!

Note by Adarsh Kumar, 3 years, 11 months ago

We basically have to prove that $\sum_{n=2}^{n}\dfrac{1}{n^2}<1$.

Motivation of proof: the first idea that struck me was the use of telescoping series. If I could write $\dfrac{1}{n^2}<\text{something}$ and $\dfrac{1}{(n-1)^2}<\text{another something}$, then adding the terms of the L.H.S. would give the required expression, while adding the terms of the R.H.S. would give a telescoping series. The next thing that came to my mind was that $n^2>n(n-1)$ (we take the greater-than sign so that the inequality reverses when we take reciprocals). Since $\dfrac{1}{n(n-1)}=\dfrac{1}{n-1}-\dfrac{1}{n}$, we get a telescoping series like
this:

$$\dfrac{1}{2^2}<\dfrac{1}{1}-\dfrac{1}{2}$$
$$\dfrac{1}{3^2}<\dfrac{1}{2}-\dfrac{1}{3}$$
$$\vdots$$
$$\dfrac{1}{n^2}<\dfrac{1}{n-1}-\dfrac{1}{n}$$

Adding, we get:
$$\sum_{n=2}^{n}\dfrac{1}{n^2}<1-\dfrac{1}{n}<1.$$
Hence proved. And done!

I like your motivations haha :P

From where did you get this motivation?

I am sorry, but I don't understand the meaning of your comment. Could you please explain?

@Adarsh Kumar – Simplification: source of the question = ?

@Akshat Sharda – Arihant Mathematical Olympiads.

Well, yours is the simplest way. For the sake of mentioning, one can also write a proof by induction for this.

How do you do a proof by induction (that is fundamentally different from his approach)?

@Calvin Lin – Oops, although induction is tempting at first look, it isn't the best way to go about it. I haven't found an inductive proof yet 😕.

@Karthik Venkata – Right. The inductive proof that I know is to show that for $n \geq 2$, $\sum_{i=1}^n \frac{1}{i^2} < 2 - \frac{1}{n}$. This is similar to what Adarsh did.

From the Basel problem we get $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}<2$. Although this is probably not intended, it still works as a good proof. We prove the Basel problem: Euler's original derivation of the value $\frac{\pi^2}{6}$ essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series. Recall the Taylor series expansion of the sine function:
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
Dividing through by $x$, we have
$$\frac{\sin(x)}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots$$
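The telescoping bound is also easy to spot-check numerically; this is not a proof, just reassurance (a small Python check):

```python
# Partial sum from n = 2 to N, compared against the telescoping bound 1 - 1/N,
# and the full sum from n = 1 compared against 2.
N = 1000
tail = sum(1 / n**2 for n in range(2, N + 1))
print(tail, 1 - 1 / N)   # the tail sits well under the telescoping bound
print(1 + tail)          # the full partial sum stays below 2
```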
Using the Weierstrass factorization theorem, it can also be shown that the left-hand side is the product of linear factors given by its roots, just as we do for finite polynomials:
$$\frac{\sin(x)}{x} = \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots$$
If we formally multiply out this product and collect all the $x^2$ terms (we are allowed to do so because of Newton's identities), we see that the $x^2$ coefficient of $\sin(x)/x$ is
$$-\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.$$
But from the original infinite series expansion of $\sin(x)/x$, the coefficient of $x^2$ is $-1/3! = -1/6$. These two coefficients must be equal; thus
$$-\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.$$
Multiplying both sides of this equation by $-\pi^2$ gives the sum of the reciprocals of the positive square integers:
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.$$

You are right, it wasn't intended, but if you write this you are expected to prove it too.

Fine.

Thanks a lot for adding the proof!

Isn't this Riemann zeta(2)? Then the value will be 1.64 only.

Yes, it is, but that isn't the intended proof, or else you will have to prove how you found the value of $\zeta(2)$.

Oh, so that's why a proof is needed! Btw my method was just like yours.
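For anyone who wants to see the Basel value emerge, the partial sums can be checked directly (a small Python check; convergence is slow, with error roughly 1/N):

```python
import math

# Partial sum of the Basel series versus pi^2/6; the remainder after N
# terms is about 1/N, so N = 100_000 puts us within ~1e-5 of the limit.
N = 100_000
s = sum(1 / n**2 for n in range(1, N + 1))
print(s, math.pi**2 / 6)
```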
I'm interested in the following function: $$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$ where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following: $$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ where the $x$-th bit is set to 1, $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
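The function Ψ(2) at the top of this exchange can be probed numerically with a small sieve. A rough Python sketch (my own; π(n) is the usual prime-counting function, and only the first few terms are checked against the expansion quoted above):

```python
def prime_pi_table(N):
    """pi(n) for n = 0..N via a simple sieve of Eratosthenes."""
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, N + 1, p):
                is_prime[q] = False
    pi, count = [0] * (N + 1), 0
    for n in range(N + 1):
        count += is_prime[n]
        pi[n] = count
    return pi

N = 100_000
pi = prime_pi_table(N)
# Each value m = pi(n) is repeated once per n in a prime gap, exactly as in
# the expansion 1 + 1/2^2 + 1/2^2 + 1/3^2 + 1/3^2 + 1/3^2 + ...
partial = sum(1 / pi[n] ** 2 for n in range(2, N + 1))
print(partial)   # slowly growing partial sum of Psi(2)
```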
Is that often part of the definition, or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, which is a convergent geometric bound, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\,ds$, for all $n \geq 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$, for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: $t\in [0,1/2]$. Consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
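For the iterated-integral question above, a useful special case: for g ≡ 1 one gets g_n(t) = t^(n-1)/(n-1)! in closed form, so n!·g_n(t) = n·t^(n-1), which visibly dies for t ≤ 1/2; the same bound |g_n(t)| ≤ ‖g‖∞ · t^(n-1)/(n-1)! handles general g. A numeric illustration (my own example, not a full proof):

```python
# For g(t) = 1: g_1 = 1 and g_{n+1}(t) = integral of g_n from 0 to t,
# hence g_n(t) = t**(n-1) / (n-1)! and n! * g_n(t) = n * t**(n-1).
def n_factorial_g_n(n, t):
    return n * t ** (n - 1)

for n in (5, 20, 50):
    print(n, n_factorial_g_n(n, 0.5))   # shrinks rapidly toward 0
```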
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of some independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional obtained directly, by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome, with digitsum($z$) = digitsum($x$). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with Fourier series being divergent at a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
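On the "deeper principle" question: this is the linear variational (Rayleigh–Ritz) method, and the roots of the secular determinant are precisely the eigenvalues of the projected problem, i.e. the stationary values of the functional over the ansatz space. A tiny Python illustration on a 2×2 matrix of my own choosing:

```python
import math

# Stationary values of c^T A c subject to c^T c = 1 are eigenvalues of A,
# i.e. the roots of the secular determinant det(A - E*I) = 0.
A = [[2.0, 1.0],
     [1.0, 2.0]]

# Characteristic polynomial of a 2x2 matrix: E^2 - tr(A)*E + det(A)
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
roots = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(roots)   # the permissible stationary values of the functional

def rayleigh(c):
    """Rayleigh quotient c^T A c / c^T c for this particular A."""
    Ac = [A[0][0]*c[0] + A[0][1]*c[1], A[1][0]*c[0] + A[1][1]*c[1]]
    return (c[0]*Ac[0] + c[1]*Ac[1]) / (c[0]**2 + c[1]**2)

# The quotient evaluated at the eigenvectors hits exactly those roots.
print(rayleigh((1, 1)), rayleigh((1, -1)))
```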
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
Some Basic Concepts in Chemistry

Stoichiometry, Empirical and Molecular Formulae, and Percentage Composition

Chemical formula (percentage composition):

\tt Mass \ percent = \frac{mass \ of \ element \ in \ a \ compound}{molar \ mass \ of \ that \ compound}\times 100

\tt \% \ by \ mass \ \Rightarrow \left(w/w\right) = \frac{mass\ of \ solute}{mass \ of \ solution} \times 100

\tt \% \ mass \ by \ volume \ \Rightarrow \left(w/v\right) = \frac{mass\ of \ solute}{volume \ of \ solution} \times 100

\tt ppt \left(parts \ per \ thousand\right) = \frac{Parts \ of \ solute}{Parts \ of \ solution} \times 10^{3}

\tt ppm \left(parts \ per \ million\right) = \frac{Parts \ of \ solute}{Parts \ of \ solution} \times 10^{6}

\tt ppb \left(parts \ per \ billion\right) = \frac{Parts \ of \ solute}{Parts \ of \ solution} \times 10^{9}

Oxidation is addition of 'O' or removal of 'H' from a compound.
\tt 4Na + O_{2} \rightarrow 2Na_{2}O

Reduction is addition of 'H' or removal of 'O' from a compound.
\tt CuO + H_{2} \rightarrow Cu + H_{2}O

Oxidation number: the number of electrons donated or accepted by an atom during the formation of a molecule represents its oxidation number. e.g., an FeCl₃ molecule consists of one Fe³⁺ and three Cl⁻.

Valency and oxidation number: valency represents the combining capacity of an element; the oxidation number of an atom represents the charge present on it, which may be positive or negative.

Balancing an equation in acid medium: add H₂O to balance 'O' atoms; add H⁺ to balance 'H' atoms.

Balancing an equation in basic medium: add H₂O to balance 'O' atoms; to balance 'H' atoms, add H₂O and the same number of OH⁻ ions on the opposite side.
1. \tt Percentage \ of \ element = \frac{Mass \ of \ element}{Molecular \ mass}\times 100 = \frac{B}{M} \times 100, where M = mass of compound and B = mass of element

2. Molecular formula = n × empirical formula

3. \tt n = \frac{molecular \ weight \ of \ compound}{empirical \ formula \ weight \ of \ compound}
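Formulas 1-3 in action on a standard worked example (glucose; atomic masses are rounded, so treat the numbers as approximate; this illustration is mine, not from the source notes):

```python
# Percentage of an element (formula 1) and molecular formula from the
# empirical formula (formulas 2 and 3), using water and glucose as examples.
H, C, O = 1.008, 12.011, 15.999   # approximate atomic masses

water = 2 * H + O                 # molar mass of H2O, ~18.0
pct_O_in_water = O / water * 100  # mass percent of oxygen in water
print(round(pct_O_in_water, 1))   # roughly 88.8 %

empirical_CH2O = C + 2 * H + O    # empirical formula weight of CH2O, ~30
molecular_weight = 180.16         # experimental molar mass of glucose
n = round(molecular_weight / empirical_CH2O)
print(n)                          # n = 6, so the molecular formula is C6H12O6
```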
Statistics involves a lot of mathematics, so one of the nice things about report-generation systems for R like Rmarkdown is that it makes it easy to include nicely-formatted equations by using the LaTeX syntax. So, if we want to include the density function of the Gaussian Normal distribution: $$ \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$ we can just add the following markup to the document: \[ \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \] Creating that markup can be a little tricky, but it generally follows naturally from the mathematics, and it's much easier than other methods like including a screenshot of the equation. Still, a new R package makes it even easier: the mathpix package will convert an image containing an equation to its LaTeX markup equivalent. That image might be a photo of a handwritten equation on paper or a whiteboard, or even a "stattoo". The resulting LaTeX isn't quite perfect: it mistakes the proportionality symbol for the Greek letter alpha (a mistake I've seen a few typesetters make). With that correction, the rendered equation — used for Bayesian inference — looks like: $$ p ( \theta | y) \propto p (y | \theta)\, p (\theta) $$ The mathpix package was created by Jonathan Carroll and is an interface to the Mathpix API. (It's recommended you sign up for a free API key if you intend to use this package regularly.) The package is available now on CRAN, and you can find more details on its Github page, linked below. Github (jonocarroll): mathpix : Query the mathpix API to convert math images to LaTeX
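One nice habit when transcribing equations (by hand or via mathpix) is to evaluate them once to make sure they compute what you expect. A quick Python check of the density above: at x = μ it should equal 1/(σ√(2π)), about 0.3989 for σ = 1:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the Gaussian normal distribution."""
    coef = 1 / (sigma * math.sqrt(2 * math.pi))
    return coef * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

print(normal_pdf(0.0))   # peak of the standard normal, ~0.3989
```

A sign slip in the exponent (such as the double negative this post originally contained) would make the "density" blow up away from the mean, which this check catches immediately.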
Again, I have a question! Let $E = \mathcal{C}([0;1])$ with the $\|\cdot\|_{\infty}$ norm, and let $S : E \rightarrow E$ be defined by $S(u)(x) = \int_{0}^{x}u(t)\,\mathrm{dt}$ for all $u \in E$. I have already shown that $S$ is injective but not surjective, and that for every sequence $(f_n)_n \in E^{\mathbb{N}}$ such that $\exists M>0 \; \forall n \in \mathbb{N},\ \|f_n\|_\infty \leq M$, there exists a subsequence $(S(f_{\phi(n)}))_n$ which converges in $E$. Now, I would like to find all the $\lambda \in \mathbb{R}$ such that $S-\lambda Id$ is a bijection. I have already found that, whatever $\lambda \in \mathbb{R}$, the map is injective. So, I have to find for which $\lambda$ it is surjective. So, let $v \in E$ be such that $v(x) = (S-\lambda Id)(u)(x)$ for some $u \in E$. Then $v(x)+\lambda u(x) = \int_{0}^{x}u(t)\,\mathrm{dt}$ for all $x \in [0;1]$. If I suppose $u,v \in C^1([0;1])$, for example, I get the relation $u(x) - \lambda u'(x) = v'(x)$ with $u(0) = -\frac{1}{\lambda}v(0)$ (taking $\lambda \neq 0$, since $S$ is not surjective). And then I have to find all the $\lambda \in \mathbb{R}$ such that, for every $v \in C^1([0;1])$, this equation has a solution in $E$. My first problem is that I can't figure out how to find all those $\lambda \in \mathbb{R}$. I think it works for all $\lambda \in \mathbb{R^{*}}$ (because it is a linear differential equation of the first order), but that seems odd to me. And here is my second problem: I tried to show that if $S-\lambda Id$ is surjective from $C^1([0;1])$ to $C^1([0;1])$, then it is surjective from $E$ to $E$. To do that, I wanted to use the fact that $\overline{C^1([0;1])} = C([0;1])$ and the fact that $S-\lambda Id$ is continuous. So, let $u \in E$ and let $(u_n)_n \in (C^1)^{\mathbb{N}}$ be such that $u_n \rightarrow u$. Let $(v_n)_n \in (C^1)^{\mathbb{N}}$ be such that $(S-\lambda Id)(v_n) = u_n$ for all $n \in \mathbb{N}$.
As $(u_n)_n$ has a convergent subsequence (because the sequence is bounded), if $(S(v_n))_n$ also admits a convergent subsequence, then by the continuity of $S-\lambda Id$ I can conclude. But I'm stuck, because I don't have any argument to say that $(S(v_n))_n$ has a convergent subsequence, and so I can't conclude. Though, it would be enough that $(v_n)_n$ be bounded (for $\|.\|_\infty$), but why would it be? Any help would be appreciated! :) Thank you!
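For the first problem, here is a sketch of where the computation can go (my addition, to be checked): for $\lambda \neq 0$ there is an explicit candidate for the preimage which needs no differentiability assumption on $v$. Expanding the resolvent as a Neumann series, $(S-\lambda Id)^{-1} = -\frac{1}{\lambda}\sum_{k\geq 0}\lambda^{-k}S^k$ with $S^k v(x)=\int_0^x \frac{(x-t)^{k-1}}{(k-1)!}\,v(t)\,\mathrm{dt}$ for $k\geq 1$, suggests

$$u(x) = -\frac{1}{\lambda}v(x) - \frac{1}{\lambda^2}\int_0^x e^{(x-t)/\lambda}\,v(t)\,\mathrm{dt},$$

which is continuous for every $v \in E$, and substituting it back into $\int_0^x u(t)\,\mathrm{dt} - \lambda u(x) = v(x)$ verifies the identity directly. If correct, this would mean $S-\lambda Id$ is surjective (hence bijective) for every $\lambda \neq 0$, consistent with the Volterra operator having spectrum $\{0\}$.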
In a previous question of mine, I asked how efficient the Least Significant Digit first radix sort algorithm is for sorting 32-bit integers. It turns out that the bounds are:

Time: $\Theta\left(\frac{32}{k}(n+2^k)\right)$
Space: $\Theta(2n + 2^k)$

I have read various articles online about the efficiency of sorting the numbers by looking at the first $k$ most significant digits, applying counting sort, and recursively looking at the next $k$ digits of every new group that was just created. Most of the articles suggest that the MSD approach is actually as efficient as the LSD approach and sometimes even more efficient because it is more cache efficient. I tried to do the time and space analysis to see if that is the case, at least theoretically.

MSD is recursive, so in every level we have to work with $n$ elements in total, because the union of all the groups of that level is going to give you the input set. However, we also have to take into account the buckets, and in this case the total number of bucket arrays that we will need is going to be logarithmic. We know that the bucket has $2^k$ elements and now we need to find the total number of levels. In the first level, we have a problem of size $n$; in the second level, a problem of size $\frac{n}{2^k}$; in the $i$-th level, a problem of size $\frac{n}{2^{ik}}$. When does the problem size become 1? When $\frac{n}{2^{ik}} = 1$, so when $i = \frac{\log_{2}n}{k}$. We have $\frac{\log_{2}n}{k}$ levels, and in every level we need a new bucket array as well as the input, so the space is $\Theta\left(2n + 2^k \frac{\log_{2}n}{k}\right)$. We have $2n$ because during the counting phase we need that extra temporary array.

Now what about the time? The problem with the time is that for every sub problem generated, we will have to scan through the entire bucket array to find the next sub problems! In every level, however, we will only have to spend $\Theta(n)$ time for reading from and writing to the input.
However I am really stuck at this point. In the first level, we have one problem, in the second level $2^k$ problems.. so in total we have $\sum_{i=1}^{\frac{\log_{2}n}{k}}2^{ik}$ sub problems?? So the time is: $\Theta(2n + 2^k\sum_{i=1}^{\frac{\log_{2}n}{k}}2^{ik})$ This looks very bad compared to LSD radix sort... Am I doing something wrong here?
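For concreteness, here is a minimal sketch of the recursive MSD scheme described above (my own illustration, not from the referenced articles): non-negative 32-bit keys, $k$ bits per level, a fresh set of $2^k$ buckets per recursive call.

```python
def msd_radix_sort(arr, k=8, shift=None):
    """Recursive MSD radix sort for non-negative 32-bit integers.

    Looks at k bits per level: distribute into 2^k buckets by the current
    digit, then recurse into each bucket on the next k bits. A sketch for
    illustration, not a cache-tuned implementation.
    """
    if shift is None:
        shift = 32 - k          # start with the k most significant bits
    if len(arr) <= 1 or shift < 0:
        return list(arr)        # trivially sorted, or all 32 bits consumed
    mask = (1 << k) - 1
    buckets = [[] for _ in range(1 << k)]
    for x in arr:
        buckets[(x >> shift) & mask].append(x)
    out = []
    for b in buckets:           # every call scans all 2^k buckets
        out.extend(msd_radix_sort(b, k, shift - k))
    return out
```

Note how each call scans all $2^k$ buckets even when most are empty; that per-call overhead is exactly the $2^k$ term multiplying the number of sub problems in the analysis above.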
I am quite new to differential equations and I have to solve the following: $a(t)+b(t)C(t)+s\,a(t)C'(t)=0$, where $s$ is some constant. I have read about differential equations and at this moment my main difficulty is that $C'(t)$ is multiplied by $s\,a(t)$ (the examples of differential equations I have seen so far have $s\,a(t)=1$). Does anyone have a hint about this equation (maybe it turns out to be a "famous" equation and I am not aware of it) and how to solve it?

What you did in your comment is correct, up to a sign. Dividing by $s\,a(t)$ and multiplying by the integrating factor $\exp K(t)$, where $K(t)$ is an antiderivative of $\frac{b(t)}{s\,a(t)}$, gives $$\exp{K(t)}\frac{b(t)C(t)}{sa(t)}+\exp{K(t)}C'(t)=-\exp{K(t)}s^{-1}$$ After that $$\left(\exp{K(t)}C(t)\right)'=-\exp{K(t)}s^{-1}$$ Integrate both sides: $$\exp{K(t)}C(t)=-s^{-1}\int\exp{K(t)}\,dt+D\\ C(t)=-s^{-1}\exp{(-K(t))}\int\exp{K(t)}\,dt+D\exp{(-K(t))}$$ where $D$ is a constant.
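As a sanity check of the closed form, here is a small sympy sketch for the special case $a(t)=b(t)=1$, $s=1$ (so $K(t)=t$); these particular choices are illustrative assumptions, not from the original problem:

```python
import sympy as sp

t = sp.symbols('t')
C = sp.Function('C')

# Special case a(t) = b(t) = 1, s = 1: the ODE  a + b*C + s*a*C' = 0
# becomes  1 + C(t) + C'(t) = 0, with K(t) = t.
ode = sp.Eq(1 + C(t) + sp.diff(C(t), t), 0)
sol = sp.dsolve(ode, C(t))

# The formula above gives C(t) = -exp(-t)*∫exp(t)dt + D*exp(-t) = -1 + D*exp(-t);
# verify that dsolve's general solution satisfies the original equation.
residual = sp.simplify(1 + sol.rhs + sp.diff(sol.rhs, t))
print(sol)
print(residual)  # 0
```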
The myth of the finite pressuremeter geometry correction Introduction Soil strength and stiffness properties are obtained from pressuremeter tests using analyses that depend on solutions for the expansion of an infinitely long cylindrical cavity. Real pressuremeters have length to diameter ratios between 3 and 10. Studies using finite element methods have indicated that this finite pressuremeter geometry leads to significant over-estimation of the shear strength. This paper tests the finite element results and shows that they do not predict the shear stress-strain response observed in real tests in the field. The conclusion is that end effects associated with finite pressuremeter geometry are of negligible significance for the derivation of material strength. Background Several papers have reported on the consequences of finite pressuremeter geometry for the derivation of material strength and stiffness parameters. Two of the more recent are considered here, Houlsby & Carter (1993) and Shuttle & Jefferies (1995). Houlsby & Carter (1993) present the results of simulated undrained pressuremeter tests in a finite element model of simple elastic/perfectly plastic soil. The simulated tests were run for differing length to diameter ratios, depths below ground level and for common values of rigidity index in the model soil. Gibson & Anderson (1961) give the solution for an undrained cylindrical expansion in simple elastic/perfectly plastic soil. 
For a current radius \(a\) at the cavity, the total pressure \(P\) being applied is ...[1]

$$P=P_0+S_u\left\{1+\ln \left[I_r\left(1 - \left(\frac{a_0}{a}\right)^2\right)\right]\right\}$$

where
\(P_0\) is the cavity reference pressure
\(I_r\) is the rigidity index \(G/S_u\)
\(G\) is the shear modulus
\(S_u\) is the undrained shear strength
\(a_0\) is the initial radius of the cavity

The Houlsby & Carter tests indicate that the smaller the length to diameter ratio, the greater the stress that must be applied to achieve a given plastic deformation. The difference in stress is expressed as a correction factor whose magnitude depends on a number of variables. The corrections given in Houlsby & Carter are derived by following conventional engineering practice. The finite element tests are plotted in terms of total pressure against the natural log of the current volumetric strain and the slope of the most linear part of this plot gives the undrained shear strength directly. Houlsby & Carter take the slope between 2% and 5% cavity strain and express the correction factor as the ratio of the derived shear strength to the input shear strength. The choice of strain range is important because the effective length to diameter ratio reduces as the diameter of the plastic zone increases, so that other strain ranges give different results. The conclusion of Houlsby & Carter is that the undrained shear strength is greatly affected by finite pressuremeter geometry. It is reported that a pressuremeter with a length to diameter ratio of 6 in material with a rigidity index of 200 would over-estimate the shear strength by more than 20%. The strain dependent nature of this error is acknowledged but not quantified because it is accounted for by the specified working method. Shuttle & Jefferies (1995) repeats some of the work of Houlsby & Carter using different finite element code.
Similar results are obtained although this is not immediately obvious from the manner in which the data are presented. The consequences of finite geometry are expressed in Shuttle & Jefferies as an overshoot correction \(\beta\) where at an instant in a test: ...[2]

$$S_{\text{u(true)}}= \beta S_{\text{u(measured)}}$$

\(\beta\) is related to the rigidity index and is strain dependent, being a function of the current diameter of the plastic zone. Shuttle & Jefferies give the following approximation for \(\beta\): ...[3]

$$\beta = 1.241 - 0.05 [\ln (I_r \epsilon_c)]$$

where \(\epsilon_c\) is cavity strain in %. \(\beta\) is used within the context of a curve comparison procedure where \(S_u\) is an input parameter for generating a theoretical curve to fit as closely as possible the full measured curve. For a pressuremeter test in material with a rigidity index of 200, the effect of \(\beta\) is less than 10% at 5% cavity strain.

Both studies provide corrections for pressuremeters with a length to diameter ratio (L/D) of 6 (the ratio for a Cambridge Self Boring Pressuremeter) in material with a rigidity index typical of what might be discovered in London clay (Table 1).

Table 1 - Variation of \(S_u\)(measured) / \(S_u\)(true) for L/D of 6 at about 5% cavity strain

Source of Data                   Method   \(I_r\)   \(I_r\)   \(I_r\)
[1] Houlsby & Carter (1993)      [a]      1.152     1.248     1.420
[2] Shuttle & Jefferies (1995)   [b]      1.062     1.095     1.153
[3] Shuttle & Jefferies (1995)   [a]      1.174     1.278     1.389
[4] Houlsby & Carter (1995)      [a]      1.149     1.246     1.415

Notes:
Method [a] - produce a plot of total pressure against Ln volumetric strain and take the slope between 2% and 5% cavity strain.
Method [b] - calculate the overshoot correction \(\beta\).

At first sight there seems to be conflict about the magnitude of the correction between the two studies, but the difference is due to how the simulated tests are interpreted. If equations [1] and [3] are combined, then Shuttle & Jefferies ‘geometry affected’ pressuremeter tests can be derived.
If these tests are then interpreted using the method specified by Houlsby & Carter, correction factors similar to those reported by Houlsby & Carter are obtained. This has been done here, and is shown in the third row of Table 1. Later work by Shuttle (quoted in Houlsby & Carter 1995) along these lines gives results even closer to those reported by Houlsby & Carter (1993) - compare rows one and four of Table 1. For the purposes of this paper the Shuttle & Jefferies overshoot correction described by equation [3] is the most convenient way to access the finite geometry results, but the table indicates that the two sets of simulated tests produce similar data. Alternative ways of utilising that data give different corrections.

The comparison with field tests

It is not possible to test the magnitude of the corrections from field data. However, if the predictions of the finite element studies are valid then the strain dependent nature of the finite geometry effect will give pressuremeter shear stress curves that show a strain hardening response. The degree of this response will depend on a number of factors, but for a given pressuremeter the significant variable will be the rigidity index. Presenting the finite element results on semi-log axes masks the extent of strain hardening behaviour. The effect is better demonstrated by drawing the shear stress versus strain curve using the subtangent or Palmer (1972) method. Figure 1 is an example. A pressure versus strain curve for an infinite length pressuremeter was created using equation [1] with a rigidity index of 200, an undrained shear strength of 1 and a cavity reference pressure of 0. An overshoot correction \(\beta\) was derived for every increment of strain using a version of equation [3] optimised for an \(I_r\) value of 200, and the corresponding pressure versus strain curve for a pressuremeter with a length to diameter ratio of 6 was derived.
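The construction just described is straightforward to reproduce numerically. The following sketch is my reconstruction: the treatment of cavity strain as $a/a_0 - 1$ in percent, and the reading of $S_u/\beta$ as the apparent strength at each strain, are assumptions rather than details from the paper.

```python
import numpy as np

# Inputs used for Figure 1: rigidity index 200, Su = 1, reference pressure 0
Ir, Su, P0 = 200.0, 1.0, 0.0

eps_c = np.linspace(0.5, 10.0, 100)        # cavity strain in %
a0_over_a = 1.0 / (1.0 + eps_c / 100.0)    # a/a0 = 1 + eps_c/100

# Equation [1]: Gibson & Anderson infinite-length response
P_inf = P0 + Su * (1.0 + np.log(Ir * (1.0 - a0_over_a**2)))

# Equation [3]: Shuttle & Jefferies overshoot correction
beta = 1.241 - 0.05 * np.log(Ir * eps_c)

# Su(true) = beta * Su(measured), so the finite-length (L/D = 6) curve is
# read here as the infinite-length curve with apparent strength Su / beta
P_finite = P0 + (Su / beta) * (1.0 + np.log(Ir * (1.0 - a0_over_a**2)))
```

At 5% cavity strain this gives $\beta \approx 0.90$, i.e. an apparent strength some 10% or so above the input value, of the same order as Table 1 row [2].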
These curves were then analysed by the subtangent method to give the data plotted in Figure 1. The infinite length data gives the simple elastic/perfectly plastic form assumed by the Gibson & Anderson solution. However an obvious strain hardening response is apparent in the curve for the finite length pressuremeter. Fig. 1 The subtangent analysis applied to the finite length problem If the finite element results are relevant then tests in the field must show this response but they do not. Figure 2 is the shear stress:strain response derived from four self bored pressuremeter (SBP) tests in the same borehole in London clay. The instrument has a length to diameter ratio of 6. The tests lie between 10 and 17 metres below ground level. These tests have been chosen because of their freedom from defects such as obvious cracking at large strain which would be indicated by apparent strain softening. They are good examples of pressuremeter tests in the material for which the Gibson & Anderson analysis was developed. The data has been normalised by the derived shear strength to make the curves comparable with the data in figure 1, and the finite length pressuremeter data from figure 1 is also plotted in figure 2. There are difficulties using conventional spreadsheet facilities to calculate the local slope of measured pressure:strain curves but the scatter here is reasonably small and the shapes of the curves are clear. There is no evidence of apparent shear stress increasing with strain, if anything the reverse. There are some indications of a peak strength near the origin up to 1.2 times the ultimate strength, and some signs that the pressuremeter curves have been affected by the taking of unload/reload cycles. Figure 3 shows the same data plotted in terms of total pressure versus the natural log of the volumetric strain. In this example the total pressure has been normalised by subtracting the reference pressure \(P_0\) before dividing by the derived shear strength. 
The pressure scale starts from 1, in effect the yield stress, so that only the plastic loading is seen and the intercept on the strain axis gives \(\ln (1/I_r)\). The intercept on the normalised pressure axis gives the limit pressure in the form of the so-called pressuremeter constant (6.3 for an \(I_r\) of 200).

Fig. 2 Subtangent analysis of four SBP tests in London clay

Although short sections of the field data do indicate some local strain hardening, a glance along the plotted points, especially at large strain, shows that the assumption of perfect plasticity is not unreasonable. The local strain hardening is due to the influence of unload/reload cycles on the loading curve, and is a transitory effect. This is indicated by the manner in which a line projected back through the points at large strain passes close to the initial points near the yield stress. The shear stress:strain response predicted for a length to diameter ratio of 6 and an \(I_r\) of 200 is quite different.

Fig. 3 Gibson & Anderson style analysis applied to the same data plotted in figure 2

Conclusion

An attack on the validity of the finite geometry correction based on the absence of a strain hardening response might seem indirect, not to say superficial, but it is fundamental. There is no special magic about the length to diameter ratio of the instrument - it is the length to diameter ratio of the expanding cavity that is significant. Because this ratio is reducing all the time the test is in progress, the strain hardening response is a necessary sign that a correction is needed. Conversely, as demonstrated here, the absence of such behaviour is sufficient evidence by itself to show that there can be no important error in the pressuremeter estimates of shear strength due to end effects. The finite element tests are not wrong - indeed, some effort has been expended demonstrating that different researchers using different code obtain similar data.
However, as far as tests in real soil are concerned, the results of the finite element tests are irrelevant. Although this conclusion comes from a handful of field tests from one type of instrument in one type of soil, it is strongly suspected that other pressuremeters in other soils are much less affected by finite geometry than has been supposed. The evidence from field tests in sand, for example, is that the angle of internal friction determined at the initial yield is usually similar to the value derived at the end of the test at about 10% cavity strain. Although this author has little experience of the Ménard pressuremeter, inspection of the examples of such tests in the papers by Gibson & Anderson and by Palmer shows no evidence of strain hardening. Indeed, it is difficult to imagine how these two classic analyses for undrained strength could have been demonstrated if the corrections for end effects are of the magnitude suggested by the finite element tests. It is not difficult to think of reasons why the finite element results are misleading. The finite element soil is described by a linear elastic characteristic. Real soils invariably have a non-linear elastic response. The consequences for the test are dramatic. Non-linear stiffness means that in the expanding cavity, soil that is yielding is conditioned by a significantly lower stiffness than soil which is behaving elastically. The ‘ends’ of the expanding cavity where a small axial component might be expected are much stiffer than the linear elastic model suggests. It is possible to imagine circumstances where end effects could be demonstrated. Clean sand of uniform particle size rained into a calibration chamber might respond in a manner that approximates to linear elastic. Tests in such material (Fahey 1980) do show some measurable effect due to pressuremeter length to diameter ratios, but it is by no means clear that strength is over-estimated.
Outside of this contrived situation, non-linear elastic behaviour probably compensates for the influence of end effects. The ‘myth’ in the title of this paper refers as much to the model as it does to the concept.
Occam’s Razor

“Among competing hypotheses, the one with the fewest assumptions should be selected.”

Question: What is a “complex” vs “simple” hypothesis?

Answer 1: A “simple” model is one where $\theta$ has few non-zero parameters, i.e. only a few features are relevant.

Answer 2: A “simple” model is one where $\theta$ is almost uniform, i.e. few features are significantly more relevant than the others.

Regularization is the process of penalizing model complexity during training.

Regularization

Models w/ high variance have a tendency to overfit the data, i.e. to learn the noise. One method to combat this is called regularization: you add a term to your cost function that penalizes large weights, which in effect penalizes model complexity during training.

Equation 1 is your standard optimization problem, often solved by gradient descent. The first term is the standard residual measure that we are trying to minimize. The second term is the interesting part: the regularization. The coefficient $\lambda$ (sometimes $\alpha$) is simply a parameter we have to tune. The interesting part is the norm of the weight vector: this is called the L2 norm.

L2 Norm

Consider the vectors $a=(0.5, 0.5)$ and $b=(-1,0)$. We can compute the L1 and L2 norms. As you can see, the two vectors are equivalent with respect to the L1 norm; however, they are different w/ respect to the L2 norm. This is because squaring the numbers punishes large values more than small values.

Oftentimes equation 1 is called “Tikhonov regularization” in academia, or Ridge in machine learning circles. For example, it is implemented as Ridge Regression in scikit-learn. Ridge regression really wants small values in all slots of $\theta$, whereas solving the L1 version doesn’t care whether the values are large or not.

Analysis

So we have stated that L2 regularization helps to remove variance across the weights. Let’s take a look at that in practice by comparing it to an unregularized linear regression.
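Before the regression comparison, the L1/L2 claim about $a$ and $b$ can be verified directly (a quick numpy illustration):

```python
import numpy as np

a = np.array([0.5, 0.5])
b = np.array([-1.0, 0.0])

# Equivalent under the L1 norm...
print(np.linalg.norm(a, 1), np.linalg.norm(b, 1))  # 1.0 1.0
# ...but not under the L2 norm: squaring punishes the concentrated vector b
print(np.linalg.norm(a, 2), np.linalg.norm(b, 2))  # ~0.707 vs 1.0
```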
```python
# Code source: Gaël Varoquaux
# Modified by Alex Egg 12/15/16
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import linear_model

X_train = np.c_[.5, 1].T
y_train = [.5, 1]
X_test = np.c_[0, 2].T

np.random.seed(0)

classifiers = dict(ols=linear_model.LinearRegression(),
                   ridge=linear_model.Ridge(alpha=.1))

fignum = 1
for name, clf in classifiers.items():
    fig = plt.figure(fignum, figsize=(4, 3))
    plt.clf()
    plt.title(name)
    ax = plt.axes([.12, .12, .8, .8])

    for _ in range(6):
        # add variance to training data
        this_X = .1 * np.random.normal(size=(2, 1)) + X_train
        clf.fit(this_X, y_train)

        ax.plot(X_test, clf.predict(X_test), color='.5')
        ax.scatter(this_X, y_train, s=3, c='.5', marker='o', zorder=10)

    clf.fit(X_train, y_train)
    ax.plot(X_test, clf.predict(X_test), linewidth=2, color='blue')
    ax.scatter(X_train, y_train, s=30, c='r', marker='+', zorder=10)

    ax.set_xticks(())
    ax.set_yticks(())
    ax.set_ylim((0, 1.6))
    ax.set_xlabel('X')
    ax.set_ylabel('y')
    ax.set_xlim(0, 2)
    fignum += 1

plt.show()
```

Figure 1: Left is OLS regression sans regularization. You can see that the random variance added to the training data is exaggerated in the fitted regression lines. Right is the OLS regression with ridge regularization. You can see that the random variance introduced into the training data has a smaller effect on the regressions. “Due to the few points in each dimension and the straight line that linear regression uses to follow these points as well as it can, noise on the observations will cause great variance as shown in the first plot. Every line’s slope can vary quite a bit for each prediction due to the noise induced in the observations. Ridge regression is basically minimizing a penalized version of the least-squared function. The penalizing shrinks the value of the regression coefficients.
Despite the few data points in each dimension, the slope of the prediction is much more stable and the variance in the line itself is greatly reduced, in comparison to that of the standard linear regression.”

Visualizing Regression

There is a compelling visualization and geometric argument for L1 and L2 regularization (ISLR Figure 6.7).

Figure 2: On the left is Lasso and right is Ridge

There are 2 coefficients in this model, where $\hat{\beta}$ is the least squares estimate on the 2 variables, or the RSS minimum. As you move up the contours the RSS increases. The blue area is the constraint region, which is a circle defined by the sum of squares. In ridge regression, you have a budget on the total sum of squares of the betas, so the budget is the radius of the circle. Therefore the ridge problem says: find the first place these contours hit the constraint region. In other words, find the smallest RSS you can get within the budget defined by this circle; this is where the sum of squares of $\beta_1$, $\beta_2$ is less than the budget. And since it is a circle, you’d have to be very lucky to hit exactly the place where one or the other is 0.

Now, consider Lasso. Everything is the same as ridge except the constraint equation is now the sum of the absolute values. So rather than a circle, it’s a diamond. So in this picture, I’ve hit this corner, and now I get a place where $\hat{\beta_1}$ is 0. So in other words, to summarize, the absolute value is going to give a constraint region that has sharp corners. In high dimensions, you have edges and corners. And along an edge or a corner, if you hit there, you get a 0. So this is, geometrically, why you get sparsity in the Lasso.

Takeaways

One measure of model complexity is the number of features. Another measure of model complexity is the size of the weights. Regularization is a method to control model complexity, where Lasso and Ridge address those issues respectively.
If you are fitting a regression using a linear method, then depending on your data you should probably use a regularizer: either L1 (Lasso), which prefers models where only a few features are relevant, or L2 (Ridge), which prefers models where no single feature is dramatically more relevant than the others.

Permalink: regularization
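To see the geometric argument play out, here is a small synthetic experiment (my own sketch; the data set and alpha values are arbitrary assumptions): a target that depends on only two of ten features, fit with Lasso and Ridge.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
# Only the first two features matter; the other eight are pure noise
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

# Lasso's diamond-shaped constraint produces exact zeros (sparsity);
# Ridge's circular constraint merely shrinks coefficients toward zero
print(np.sum(lasso.coef_ == 0.0))  # most of the 8 noise features
print(np.sum(ridge.coef_ == 0.0))  # typically none
```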
berylium? really? okay then...

toroidalet wrote: I Undertale hate it when people Emoji movie insert keywords so people will see their berylium page.

A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.

When xq is in the middle of a different object's apgcode. "That's no ship!"

Airy Clave White It Nay

When you post something and someone else posts something unrelated and it goes to the next page. Also when people say that things that haven't happened to them trigger them.

"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett

drc wrote: "The speed is actually" posts

Huh. I've never seen a c/posts spaceship before.

Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace!

Gamedziner wrote: What's wrong with them?

It could be solved with a simple PM rather than an entire post. An exception is if it's contained within a significantly large post.

I hate it when people post rule tables for non-totalistic rules. (Yes, I know some people are on mobile, but they can just generate them themselves. [citation needed])

OK this is a very niche one that I hadn't remembered until a few hours ago.
You know in some arcades they give you this string of cardboard tickets you can redeem for stuff, usually meant for kids. The tickets fold beautifully, perfectly packed, if you order them one right, one left - zigzagging. When people fold them randomly in any direction, giving a clearly low density packing with loads of strain, I just think: omg why on Earth would you do that?! Surely they'd have realised by now? It's not that crazy to realise? Surely there is a clear preference for having them well packed; nobody would prefer an unwieldy mess?!

Also when I'm typing anything and I finish writing it and it just goes to the next line or just goes to the next page. Especially when the punctuation mark at the end brings the last word down one line. This also applies to writing in a notebook: I finish writing something but the very last thing goes to a new page.

... you were referencing me before i changed it, weren't you? because I had fit both of those.

A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc.
When i want to rotate a hexagonal file but golly refuses because for some reason it calculates hexagonal patterns on a square grid and that really bugs me because if you want to show that something has six sides you don't show it with four and it makes more sense to have the grid be changed to hexagonal but I understand Von Neumann because no shape exists (that I know of) that has 4 corners and no edges but COME ON WHY?! WHY DO YOU REPRESENT HEXAGONS WITH SQUARES?! In all seriousness this bothers me and must be fixed or I will SINGLEHANDEDLY eat a universe.

EDIT: possibly this one.

EDIT 2: IT HAS BEGUN.

Last edited by 83bismuth38 on September 19th, 2017, 8:25 pm, edited 1 time in total.

Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.

$$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce

oh okay yeah of course sure

A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc.
in their signature. but really though, i wouldn't have cared.

When someone gives a presentation to a bunch of people and you know that they're getting the facts wrong. Especially if this is during the Q&A section.

When you watch a boring video in class but you understand it perfectly and then at the end your classmates don't get it so the teacher plays the boring video again.

when scientists decide to send a random guy into a black hole hovering directly above Earth for no reason at all. hit; that random guy was me.

When I see a "one-step" organic reaction that occurs in an exercise book for senior high school and simply takes place under "certain circumstance" like the one marked "?" here but fail to figure out how it works even if I have prepared for our provincial chemistry olympiad. EDIT: In fact it's not that hard. Just do a Darzens reaction, then hydrolysis and decarboxylate.

Current status: outside the continent of cellular automata. Specifically, not on the plain of life. An awesome gun firing cool spaceships:

Code: Select all
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry
2bo$2o$o$obo$b2o!

When there's a rule with a decently common puffer but it can't interact with itself

"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett

When that oscillator is just not sparky enough.

When you're sooooooo close to a thing you consider amazing but miss...

People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things have been already known, despite repeated posts about where to find such resources (e.g. jslife, wiki, Life lexicon, etc.).

People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive").

Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades.

Posts where the quoted text is substantially longer than added text. Especially "me too" posts.

People whose signatures are longer than the actual text of their posts.

People whose signatures include graphics or pattern files, especially ones that are just human-readable text.

Improper grammar, spelling, and punctuation (although I've gotten used to that; long-term use of the internet has made me rather fluent in typo, both reading and writing). Imperfect English is not unreasonable from people for whom English is not a primary language, but from English speakers, it is a symptom of sloppiness that can also manifest in other areas.
People whose signatures are longer than the actual text of their posts. People whose signatures include graphics or pattern files, especially ones that are just human-readable text. Improper grammar, spelling, and punctuation (although I've gotten used to that; long-term use of the internet has made me rather fluent in typo, both reading and writing). Imperfect English is not unreasonable from people for whom English is not a primary language, but from English speakers, it is a symptom of sloppiness that can also manifest in other areas. Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X That's G U S T A V O right theremniemiec wrote:People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things have been already known, despit repeated posts about where to find such resources (e.g. jslife, wiki, Life lexicon, etc.). People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive"). Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades. Also, when you walk into a wall slowly and carefully but you hit your teeth on the wall and it hurts so bad. Airy Clave White It Nay
MathRevolution wrote: The \(2\) lines \(x+2y=3, 2x+py=q\) have infinitely many points of intersection in the xy-plane. Which of the following IS the value of \(p\)? \(A. 0\) \(B. 1\) \(C. 2\) \(D. 3\) \(E. 4\) \(? = p\) From the question stem, we know both lines (each represented by one of the equations) must coincide (*), hence: \(\left\{ \begin{gathered} \,x + 2y = 3\,\,\,\left( { \cdot 2} \right) \hfill \\ 2x + py = q \hfill \\ \end{gathered} \right.\,\,\,\,\,\, \sim \,\,\,\,\,\,\left\{ \begin{gathered} \,2x + 4y = 6 \hfill \\ 2x + py = q \hfill \\ \end{gathered} \right.\,\,\,\,\,\,\mathop \Rightarrow \limits^{\left( * \right)} \,\,\,\,\,\,\,? = p = 4\,\,\,\,\,\,\,\left( {{\text{and}}\,\,q = 6} \right)\) This solution follows the notations and rationale taught in the GMATH method. Regards, fskilnik. _________________ Fabio Skilnik :: GMATH method creator (Math for the GMAT) Our high-level "quant" preparation starts here: https://gmath.net
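As a quick numerical sanity check (a sketch of my own, not part of the original solution): two lines coincide exactly when their coefficient triples are proportional, so scaling the first equation to match the x-coefficient of the second forces the values of p and q.

```python
# Line 1: x + 2y = 3  ->  coefficients (1, 2, 3)
# Line 2: 2x + py = q ->  coefficients (2, p, q)
# Coincident lines have proportional coefficient triples.
a1, b1, c1 = 1, 2, 3
k = 2 / a1   # scale factor forced by matching the x-coefficients
p = k * b1   # 4.0
q = k * c1   # 6.0
print(p, q)  # 4.0 6.0
```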
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
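For the particle-velocity question at the top of this excerpt, the interval follows from the roots of v(t): the parabola opens upward, so v is negative strictly between its roots. A minimal sketch:

```python
# Particle moves left exactly when v(t) = 3t^2 - 12t + 9 < 0.
# Factoring gives v(t) = 3(t - 1)(t - 3), negative between the roots.
import math

a, b, c = 3, -12, 9
disc = b * b - 4 * a * c               # 36
r1 = (-b - math.sqrt(disc)) / (2 * a)  # smaller root: 1.0
r2 = (-b + math.sqrt(disc)) / (2 * a)  # larger root: 3.0
print(r1, r2)  # 1.0 3.0 -> moving left for 1 < t < 3
```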
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
5.1 The linear model Simple linear regression In the simplest case, the regression model allows for a linear relationship between the forecast variable \(y\) and a single predictor variable \(x\): \[ y_t = \beta_0 + \beta_1 x_t + \varepsilon_t. \] An artificial example of data from such a model is shown in Figure 5.1. The coefficients \(\beta_0\) and \(\beta_1\) denote the intercept and the slope of the line respectively. The intercept \(\beta_0\) represents the predicted value of \(y\) when \(x=0\). The slope \(\beta_1\) represents the average predicted change in \(y\) resulting from a one unit increase in \(x\). Notice that the observations do not lie on the straight line but are scattered around it. We can think of each observation \(y_t\) as consisting of the systematic or explained part of the model, \(\beta_0+\beta_1x_t\), and the random “error”, \(\varepsilon_t\). The “error” term does not imply a mistake, but a deviation from the underlying straight line model. It captures anything that may affect \(y_t\) other than \(x_t\). Example: US consumption expenditure Figure 5.2 shows time series of quarterly percentage changes (growth rates) of real personal consumption expenditure, \(y\), and real personal disposable income, \(x\), for the US from 1970 Q1 to 2016 Q3. A scatter plot of consumption changes against income changes is shown in Figure 5.3 along with the estimated regression line \[ \hat{y}_t=0.55 + 0.28x_t. \] (We put a “hat” above \(y\) to indicate that this is the value of \(y\) predicted by the model.) The equation is estimated in R using the tslm() function: We will discuss how tslm() computes the coefficients in Section 5.2. The fitted line has a positive slope, reflecting the positive relationship between income and consumption. 
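The book estimates this equation with R's tslm(); as a hedged illustration of the same least-squares idea, here is a pure-Python sketch on invented data whose "true" line mimics the fitted coefficients above (the data and sample size are my own assumptions):

```python
import random

random.seed(1)

# Invented data: the "true" line y = 0.55 + 0.28 x plus noise
# (coefficients chosen only to echo the example in the text).
n = 400
x = [random.gauss(0.7, 1.0) for _ in range(n)]
y = [0.55 + 0.28 * xi + random.gauss(0.0, 0.1) for xi in x]

xbar = sum(x) / n
ybar = sum(y) / n
# Least-squares estimates for simple regression:
#   slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar
print(b0, b1)  # close to 0.55 and 0.28
```

With enough data and small noise the estimates land close to the coefficients used to generate the data, which is all a fitted regression line is doing.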
The slope coefficient shows that a one unit increase in \(x\) (a 1 percentage point increase in personal disposable income) results on average in 0.28 units increase in \(y\) (an average increase of 0.28 percentage points in personal consumption expenditure). Alternatively the estimated equation shows that a value of 1 for \(x\) (the percentage increase in personal disposable income) will result in a forecast value of \(0.55 + 0.28 \times 1 = 0.83\) for \(y\) (the percentage increase in personal consumption expenditure). The interpretation of the intercept requires that a value of \(x=0\) makes sense. In this case when \(x=0\) (i.e., when there is no change in personal disposable income since the last quarter) the predicted value of \(y\) is 0.55 (i.e., an average increase in personal consumption expenditure of 0.55%). Even when \(x=0\) does not make sense, the intercept is an important part of the model. Without it, the slope coefficient can be distorted unnecessarily. The intercept should always be included unless the requirement is to force the regression line “through the origin”. In what follows we assume that an intercept is always included in the model. Multiple linear regression When there are two or more predictor variables, the model is called a multiple regression model. The general form of a multiple regression model is\[\begin{equation} y_t = \beta_{0} + \beta_{1} x_{1,t} + \beta_{2} x_{2,t} + \cdots + \beta_{k} x_{k,t} + \varepsilon_t, \tag{5.1}\end{equation}\]where \(y\) is the variable to be forecast and \(x_{1},\dots,x_{k}\) are the \(k\) predictor variables. Each of the predictor variables must be numerical. The coefficients \(\beta_{1},\dots,\beta_{k}\) measure the effect of each predictor after taking into account the effects of all the other predictors in the model. Thus, the coefficients measure the marginal effects of the predictor variables. 
Example: US consumption expenditure Figure 5.4 shows additional predictors that may be useful for forecasting US consumption expenditure. These are quarterly percentage changes in industrial production and personal savings, and quarterly changes in the unemployment rate (as this is already a percentage). Building a multiple linear regression model can potentially generate more accurate forecasts as we expect consumption expenditure to not only depend on personal income but on other predictors as well. Figure 5.5 is a scatterplot matrix of five variables. The first column shows the relationships between the forecast variable (consumption) and each of the predictors. The scatterplots show positive relationships with income and industrial production, and negative relationships with savings and unemployment. The strength of these relationships are shown by the correlation coefficients across the first row. The remaining scatterplots and correlation coefficients show the relationships between the predictors. Assumptions When we use a linear regression model, we are implicitly making some assumptions about the variables in Equation (5.1). First, we assume that the model is a reasonable approximation to reality; that is, the relationship between the forecast variable and the predictor variables satisfies this linear equation. Second, we make the following assumptions about the errors \((\varepsilon_{1},\dots,\varepsilon_{T})\): they have mean zero; otherwise the forecasts will be systematically biased. they are not autocorrelated; otherwise the forecasts will be inefficient, as there is more information in the data that can be exploited. they are unrelated to the predictor variables; otherwise there would be more information that should be included in the systematic part of the model. It is also useful to have the errors being normally distributed with a constant variance \(\sigma^2\) in order to easily produce prediction intervals. 
Another important assumption in the linear regression model is that each predictor \(x\) is not a random variable. If we were performing a controlled experiment in a laboratory, we could control the values of each \(x\) (so they would not be random) and observe the resulting values of \(y\). With observational data (including most data in business and economics), it is not possible to control the value of \(x\), we simply observe it. Hence we make this an assumption.
Thermal Properties of Matter

Heat Transfer

The phenomenon of heat transfer without the actual displacement of the particles of the medium is called conduction. The phenomenon of heat transfer that takes place with the actual motion of the particles is called convection. The phenomenon of heat transfer that takes place without any medium is called radiation.

The state in which heat flows from the hot end to the cold end of a conductor, with no heat absorbed at any section, is called the steady state.

The coefficient of thermal conductivity is the quantity of heat flowing per unit time through a conductor of unit cross-sectional area and unit length when the temperature difference between its ends is 1 ºC: K=\frac{Q\,\ell}{A\left(\theta_{1}-\theta_{2}\right)t}

The thermal resistance of a body is a measure of its opposition to the flow of heat through it: R=\frac{\theta_1-\theta_2}{\left(Q/t\right)} (or) R = \ell/(KA)

Thermal diffusivity is the ratio of the coefficient of thermal conductivity to the thermal capacity (heat capacity) per unit volume.

Conduction takes place in solids only, whereas convection takes place in liquids and gases. Natural convection carries heat from the bottom to the top, while forced convection may take place in any direction. Radiation is the phenomenon of transfer of heat without the necessity of a material medium.

View the Topic in this video From 13:07 To 41:47
1. The amount of heat transmitted through a conductor is given by Q=\frac{kA\,\Delta T\,t}{l}

2. H=KA\frac{T_{C}-T_{D}}{L} The constant of proportionality K is called the thermal conductivity of the material.

3. For a body with emissivity e, the relation modifies to H=e\sigma A \left({T^{4}-T_s^4}\right)

4. For a body which is a perfect radiator, the energy emitted per unit time (H) is given by H = \sigma A T^{4}
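The conduction formula above can be evaluated directly; a minimal sketch with illustrative numbers (the copper-rod values are my assumptions, not from the text):

```python
# Heat conducted through a rod: Q = k * A * (T1 - T2) * t / l
# Illustrative values: copper, k = 385 W/(m K).
k = 385.0            # thermal conductivity, W/(m K)
A = 1.0e-4           # cross-sectional area, m^2
T1, T2 = 100.0, 0.0  # end temperatures, deg C
l = 0.5              # length of the rod, m
t = 60.0             # duration, s

Q = k * A * (T1 - T2) * t / l  # heat conducted, J (~462 J here)
R = l / (k * A)                # thermal resistance R = l/(kA), K/W
print(Q, R)
```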
Problem 131 Let $V$ be the following subspace of the $4$-dimensional vector space $\R^4$. \[V:=\left\{ \quad\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} \in \R^4 \quad \middle| \quad x_1-x_2+x_3-x_4=0 \quad\right\}.\] Find a basis of the subspace $V$ and its dimension. Problem 129 Let $G$ be a group and $H$ and $K$ be subgroups of $G$. For $h \in H$, and $k \in K$, we define the commutator $[h, k]:=hkh^{-1}k^{-1}$. Let $[H,K]$ be a subgroup of $G$ generated by all such commutators. Show that if $H$ and $K$ are normal subgroups of $G$, then the subgroup $[H, K]$ is normal in $G$. Problem 125 Let $S$ be the following subset of the 3-dimensional vector space $\R^3$. \[S=\left\{ \mathbf{x}\in \R^3 \quad \middle| \quad \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, x_1, x_2, x_3 \in \Z \right\}, \] where $\Z$ is the set of all integers. Determine whether $S$ is a subspace of $\R^3$. Problem 121 Let $A$ be an $m \times n$ real matrix. Then the null space $\calN(A)$ of $A$ is defined by \[ \calN(A)=\{ \mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{0}_m\}.\] That is, the null space is the set of solutions to the homogeneous system $A\mathbf{x}=\mathbf{0}_m$. Prove that the null space $\calN(A)$ is a subspace of the vector space $\R^n$. (Note that the null space is also called the kernel of $A$.) Problem 120 Suppose that $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_r$ are linearly dependent $n$-dimensional real vectors. For any vector $\mathbf{v}_{r+1} \in \R^n$, determine whether the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_r, \mathbf{v}_{r+1}$ are linearly independent or linearly dependent. Problem 119 Let $\mathbf{a}$ and $\mathbf{b}$ be fixed vectors in $\R^3$, and let $W$ be the subset of $\R^3$ defined by \[W=\{\mathbf{x}\in \R^3 \mid \mathbf{a}^{\trans} \mathbf{x}=0 \text{ and } \mathbf{b}^{\trans} \mathbf{x}=0\}.\] Prove that the subset $W$ is a subspace of $\R^3$.
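For Problem 131, solving the constraint for $x_1 = x_2 - x_3 + x_4$ leaves three free coordinates, so $\dim V = 3$. A small programmatic sanity check (this particular basis is one valid choice among many):

```python
# V = {x in R^4 : x1 - x2 + x3 - x4 = 0}. Setting each free coordinate
# (x2, x3, x4) to 1 in turn and solving for x1 gives a basis of size 3.
basis = [
    (1, 1, 0, 0),   # x2 = 1
    (-1, 0, 1, 0),  # x3 = 1
    (1, 0, 0, 1),   # x4 = 1
]

def in_V(v):
    x1, x2, x3, x4 = v
    return x1 - x2 + x3 - x4 == 0

# Coordinates 2-4 of the three vectors form the identity pattern,
# so they are linearly independent; each also satisfies the constraint.
print(all(in_V(v) for v in basis), len(basis))  # True 3
```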
Let $\sigma$ be the time at which a nearest-neighbor random walk started at $1$, with probability $p>1/2$ of moving left, reaches $0$. Let $\sigma'$ be an independent copy of $\sigma$. Let $(X_k)_1^\infty$ be iid unit Exponential random variables, and let $(Y_k)_1^\infty$ be iid Exponentials with mean $v$ (i.e. $P(Y_1 \geq t) = e^{-t/v}$). We are interested in whether there is a closed form in terms of $p$ and $v$ for the probability $$P \left( \sum_1^\sigma X_k < \sum_1^{\sigma '} Y_k \right).$$ Conditioning on the value of either sum gives a messy expression that isn't obvious how to simplify. A reformulation of the problem is to think of this as a race to reach 0 by two continuous time random walks with rates $1$ and $1/v$. Using the memoryless property, the probability the rate-1 walk advances at a jump time is $q=v/(1+v)$. Otherwise the rate-$1/v$ walk advances. Let $Z(r,q)$ be the number of successes before $r$ failures occur in iid trials with success probability $q$ (i.e. negative binomial). If we think of the rate-$1$ walk advancing as a success, we can rewrite the above probability as $$P( Z(\sigma,q) > \sigma').$$ Condition on the values of $\sigma$ and $\sigma'$ and use the distribution for a negative binomial to write this as $$\sum_{i,j \geq 0} C_i C_j p^2(p(1-p))^{i+j} \sum_{k\geq 2i+1} \binom{2j +k}{k} q^k (1-q)^{2j+1} .$$ Here $C_i$ is the $i$th Catalan number. It does not look easy to evaluate exactly. We are happier with this though because it is easier to numerically approximate (though we would prefer a closed form).
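No closed form is offered above, but the probability is easy to estimate by direct simulation; a hedged Monte Carlo sketch (the parameter values $p = 0.75$, $v = 3$ are arbitrary choices of mine):

```python
import random

random.seed(42)

def hitting_time(p):
    """Steps for a walk started at 1, left-step probability p, to hit 0."""
    pos, steps = 1, 0
    while pos > 0:
        pos += -1 if random.random() < p else 1
        steps += 1
    return steps

def race_probability(p, v, trials=20000):
    """Estimate P(sum_{1..sigma} X_k < sum_{1..sigma'} Y_k) by simulation."""
    wins = 0
    for _ in range(trials):
        s, s2 = hitting_time(p), hitting_time(p)
        x = sum(random.expovariate(1.0) for _ in range(s))       # mean-1 Exp
        y = sum(random.expovariate(1.0 / v) for _ in range(s2))  # mean-v Exp
        if x < y:
            wins += 1
    return wins / trials

est = race_probability(p=0.75, v=3.0)
print(est)  # roughly the chance the mean-1 clock finishes first
```

For these parameters the estimate sits in the neighborhood of the single-step race probability $q = v/(1+v) = 0.75$, which is a useful cross-check on the negative-binomial reformulation.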
Asteroids vary enormously in their mass, but let's take a mass of $10^{15}$ kg as a starting point. This would have a Schwarzschild radius of: $$ r_s = \frac{2GM}{c^2} \approx 1.5 \times 10^{-12}\,\text{m} $$ and a Hawking temperature of: $$ T = \frac{\hbar c^3}{8\pi G M k_B} \approx 1.2 \times 10^8 \, \text{K} $$ To get the power radiated by the black hole we use the Stefan-Boltzmann law: $$ P = A\,J = 4\pi r_s^2 \sigma T^4 \approx 360 \, \text{W} $$ And finally the peak wavelength of the radiation would be given by the Wien displacement law: $$ \lambda_\text{max} = \frac{b}{T} \approx 0.024 \, \text{nm} $$ If we're trying to detect the black hole the mass doesn't help because it would behave the same as the countless other medium mass asteroids in the asteroid belt, so the question is whether we could detect it by the X-rays it emitted. And I must confess that I have no idea how sensitive the current generation of X-ray telescopes is. However we could work out the number of photons per unit area that would be received by a satellite orbiting the Earth. The photon energy is: $$ E = \frac{hc}{\lambda} $$ And if we divide our power of $360$ W by this we get about $4 \times 10^{16}$ photons per second emitted. The asteroid belt is around $3$ AU from the Sun, so the closest approach to Earth would be about $2$ AU. Dividing our photon emission rate by the area of a sphere with a radius of $2$ AU gives us the photons per square metre per second at the Earth: $$ N = \frac{4 \times 10^{16}}{4 \pi (2\, \text{AU})^2} \approx 3.8 \times 10^{-8} $$ Which is a bit disappointing really. It's hard to see any satellite detecting a source that only emits one photon per square metre every 26 million seconds or so. I think we'd have to say that we have little chance of detecting the black hole.
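The whole chain of estimates can be scripted; a sketch using standard CODATA-style constants (note that with these constants the Hawking temperature for $10^{15}$ kg evaluates to about $1.2\times10^{8}$ K):

```python
import math

# Physical constants (SI), rounded CODATA-style values.
G = 6.674e-11; c = 2.998e8; hbar = 1.055e-34
kB = 1.381e-23; sigma = 5.670e-8; h = 6.626e-34
b_wien = 2.898e-3   # Wien displacement constant, m K
AU = 1.496e11       # astronomical unit, m

M = 1e15                                        # black-hole mass, kg
r_s = 2 * G * M / c**2                          # Schwarzschild radius
T = hbar * c**3 / (8 * math.pi * G * M * kB)    # Hawking temperature
P = 4 * math.pi * r_s**2 * sigma * T**4         # Stefan-Boltzmann power
lam = b_wien / T                                # Wien peak wavelength
E = h * c / lam                                 # photon energy at the peak
rate = P / E                                    # photons emitted per second
flux = rate / (4 * math.pi * (2 * AU)**2)       # photons / m^2 / s at 2 AU
print(r_s, T, P, flux)
```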
In a Dumas bulb a volatile substance is introduced. After a few minutes when the liquid has evaporated, the bulb is sealed. It is known that the initial weight of the bulb with air is $12.0468~\mathrm{g}$, $12.4528~\mathrm{g}$ with the volatile substance and $350.6264~\mathrm{g}$ with water. Calculate the molar mass of the substance if all these measurements were done at $25~^\circ\mathrm{C}$ and $1~\mathrm{atm}$ of pressure. The correct answer is $29~\mathrm{g/mol}$. What I've done is the following. I've constructed a system of two equations to find the mass of the bulb and the volume it can contain. $m_b+\rho_\mathrm{air}V = 12.0468~\mathrm{g}$ $m_b+\rho_\mathrm{water}V = 350.6264~\mathrm{g}$ The solutions are $m_b=11.6144~\mathrm{g}$ and $V=0.339012~\mathrm{m^3}$ With this I can find the molar mass of the volatile substance by knowing that its true mass was $m=0.8384~\mathrm{g}$ and the volume it occupied was $0.339~\mathrm{m^3}$ and using the ideal gas equation. The ideal gas law can be used to say $M=\frac{m}{pV}RT=0.060~\mathrm{g/mol}$ Which is wrong. I've done all sorts of other things and can't get $29~\mathrm{g/mol}$. Maybe this is the craziest thing I've done until now.
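For what it's worth, here is a numeric sketch of the usual textbook route (the densities of water and air at 25 °C are my assumed values, not given in the question). The bulb volume comes out near 0.34 L, not 0.34 m³, and the simple textbook treatment, which takes the difference of the two weighings as the vapour mass, reproduces the quoted 29 g/mol:

```python
# Assumed densities at 25 C (not stated in the question):
rho_water = 0.99705   # g/mL
rho_air = 0.001184    # g/mL
R = 0.082057          # L atm / (mol K)
T = 298.15            # K
P = 1.0               # atm

# Bulb volume from the air-filled vs water-filled weighings:
V_mL = (350.6264 - 12.0468) / (rho_water - rho_air)
V_L = V_mL / 1000.0   # about 0.34 L (i.e. ~340 mL, NOT 0.34 m^3)

# Simple textbook route: treat the weighing difference as the vapour mass.
m_vap = 12.4528 - 12.0468      # 0.4060 g
M = m_vap * R * T / (P * V_L)  # molar mass, g/mol, ~29
print(V_L, M)
```

Two caveats, both mine: the 0.060 in the question comes from mixing litres and cubic metres, and whether one should instead use the buoyancy-corrected vapour mass (the difference plus the displaced air, about 0.81 g, giving roughly 58 g/mol) is a separate subtlety; the quoted answer of 29 g/mol follows the simple route above.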
Setting

Definition: $\mathcal{M} \models T$ is existentially closed if whenever $\mathcal{N} \models T$, $\mathcal{N} \supseteq \mathcal{M}$, and $\mathcal{N}\models \exists \bar{v} \phi(\bar{v},\bar{a})$, where $\bar{a} \in \mathbb{M}$ and $\phi$ is quantifier free, then $\mathcal{M} \models \exists \bar{v} \phi(\bar{v},\bar{a})$.

I would like to show that if $T$ is $\forall\exists$-axiomatizable, then $T$ has an existentially closed model. From now on we assume that if $\mathcal M \models T$, then there is $\mathcal N \supseteq \mathcal M$ existentially closed with $|\mathbb{N}| = |\mathbb{M}| + |\mathcal L| + \aleph_0$. Suppose that $T$ has an infinite model that is not existentially closed; then I want to prove $T$ has a non-existentially-closed model of cardinality $\kappa$ for any infinite cardinal $\kappa \ge |\mathcal{L}|$. Finally, I would like to show that if $T$ is $\kappa$-categorical for some infinite $\kappa \ge |\mathcal{L}|$ and axiomatized by $\forall\exists$-sentences, then all models of $T$ are existentially closed.
Let's say I want to generate correlated random variables. I understand that I can use Cholesky decomposition of the correlation matrix to obtain the correlated values. If $C$ is the correlation matrix, then we can do the Cholesky decomposition: $LL^{T}=C$ Then I can easily generate correlated random variables: $LX=Y$, where $X$ are uncorrelated values and $Y$ are correlated values. If I want two correlated random variables then $L$ is: $L = \left[ {\begin{array}{*{20}c} 1 & 0 \\ \rho & {\sqrt {1 - \rho ^2 } } \\ \end{array}} \right] $ I understand that this works, but I don't really understand why... My question is: Why does this work?
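The short answer: for $Y = LX$ with uncorrelated, unit-variance $X$, $\operatorname{Cov}(Y) = L\,\operatorname{Cov}(X)\,L^{T} = L I L^{T} = C$, so $Y$ has exactly the target correlation matrix. A quick empirical check of the $2\times 2$ case in pure Python (the value $\rho = 0.6$ is an arbitrary choice of mine):

```python
import math
import random

random.seed(0)

rho = 0.6
# Cholesky factor of [[1, rho], [rho, 1]]: L = [[1, 0], [rho, sqrt(1-rho^2)]]
l21 = rho
l22 = math.sqrt(1.0 - rho * rho)

n = 100_000
y1, y2 = [], []
for _ in range(n):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)  # uncorrelated pair
    y1.append(x1)                    # first row of Y = L X
    y2.append(l21 * x1 + l22 * x2)   # second row of Y = L X

# Empirical correlation of (y1, y2) should be close to rho.
m1 = sum(y1) / n; m2 = sum(y2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / n
s1 = math.sqrt(sum((a - m1) ** 2 for a in y1) / n)
s2 = math.sqrt(sum((b - m2) ** 2 for b in y2) / n)
r = cov / (s1 * s2)
print(r)  # ~0.6
```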
Statistics - (Logit|Logistic) (Function|Transformation)

1 - About

Never below 0, never above 1, and a smooth transition in between. <MATH> \operatorname{logistic}(x) = \frac{\displaystyle e^{x}}{\displaystyle 1+ e^{x}} </MATH> where <math> e \approx 2.71828</math> is the scientific constant, the base of the exponential (Euler's number). The values have to lie between 0 and 1 because: $e$ to anything is positive; as the denominator is bigger than the numerator, the ratio is always greater than 0; and when <math>x</math> gets very large, the ratio approaches 1. Used to normalize? The natural log of the odds is called the log-odds or logit; the function above is its inverse, the logistic function.

2 - Articles Related

3 - Logistic function

The logistic function (the inverse of the logit) asymptotically approaches 0 as the input approaches negative infinity and 1 as the input approaches positive infinity. Since the results are bounded by 0 and 1, they can be directly interpreted as a probability. The logistic function: <MATH> \frac{1}{1 + e^{-z}} </MATH>
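A minimal sketch of the two functions (the names logistic/logit are mine, matching the discussion above); they are inverses of each other, and the logistic output always lands in (0, 1):

```python
import math

def logistic(z):
    """Logistic (inverse-logit): maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    """Log-odds: inverse of the logistic function on (0, 1)."""
    return math.log(p / (1.0 - p))

print(logistic(0.0))         # 0.5
print(logit(logistic(2.0)))  # round trip back to ~2.0
```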
5.3 Evaluating the regression model The differences between the observed \(y\) values and the corresponding fitted \(\hat{y}\) values are the training-set errors or “residuals” defined as, \[\begin{align*} e_t &= y_t - \hat{y}_t \\ &= y_t - \hat\beta_{0} - \hat\beta_{1} x_{1,t} - \hat\beta_{2} x_{2,t} - \cdots - \hat\beta_{k} x_{k,t} \end{align*}\] for \(t=1,\dots,T\). Each residual is the unpredictable component of the associated observation. The residuals have some useful properties including the following two: \[ \sum_{t=1}^{T}{e_t}=0 \quad\text{and}\quad \sum_{t=1}^{T}{x_{k,t}e_t}=0\qquad\text{for all $k$}. \] As a result of these properties, it is clear that the average of the residuals is zero, and that the correlation between the residuals and the observations for the predictor variable is also zero. (This is not necessarily true when the intercept is omitted from the model.) After selecting the regression variables and fitting a regression model, it is necessary to plot the residuals to check that the assumptions of the model have been satisfied. There are a series of plots that should be produced in order to check different aspects of the fitted model and the underlying assumptions. We will now discuss each of them in turn. ACF plot of residuals With time series data, it is highly likely that the value of a variable observed in the current time period will be similar to its value in the previous period, or even the period before that, and so on. Therefore when fitting a regression model to time series data, it is common to find autocorrelation in the residuals. In this case, the estimated model violates the assumption of no autocorrelation in the errors, and our forecasts may be inefficient — there is some information left over which should be accounted for in the model in order to obtain better forecasts. 
The forecasts from a model with autocorrelated errors are still unbiased, and so are not “wrong”, but they will usually have larger prediction intervals than they need to. Therefore we should always look at an ACF plot of the residuals. Another useful test of autocorrelation in the residuals designed to take account of the regression model is the Breusch-Godfrey test, also referred to as the LM (Lagrange Multiplier) test for serial correlation. It is used to test the joint hypothesis that there is no autocorrelation in the residuals up to a certain specified order. A small p-value indicates there is significant autocorrelation remaining in the residuals. The Breusch-Godfrey test is similar to the Ljung-Box test, but it is specifically designed for use with regression models. Histogram of residuals It is always a good idea to check whether the residuals are normally distributed. As we explained earlier, this is not essential for forecasting, but it does make the calculation of prediction intervals much easier. Example Using the checkresiduals() function introduced in Section 3.3, we can obtain all the useful residual diagnostics mentioned above. Figure 5.8 shows a time plot, the ACF and the histogram of the residuals from the multiple regression model fitted to the US quarterly consumption data, as well as the Breusch-Godfrey test for jointly testing up to 8th order autocorrelation. (The checkresiduals() function will use the Breusch-Godfrey test for regression models, but the Ljung-Box test otherwise.) The time plot shows some changing variation over time, but is otherwise relatively unremarkable. This heteroscedasticity will potentially make the prediction interval coverage inaccurate. The histogram shows that the residuals seem to be slightly skewed, which may also affect the coverage probability of the prediction intervals.
The autocorrelation plot shows a significant spike at lag 7, but it is not quite enough for the Breusch-Godfrey test to be significant at the 5% level. In any case, the autocorrelation is not particularly large, and at lag 7 it is unlikely to have any noticeable impact on the forecasts or the prediction intervals. In Chapter 9 we discuss dynamic regression models used for better capturing information left in the residuals. Residual plots against predictors We would expect the residuals to be randomly scattered without showing any systematic patterns. A simple and quick way to check this is to examine scatterplots of the residuals against each of the predictor variables. If these scatterplots show a pattern, then the relationship may be nonlinear and the model will need to be modified accordingly. See Section 5.8 for a discussion of nonlinear regression. It is also necessary to plot the residuals against any predictors that are not in the model. If any of these show a pattern, then the corresponding predictor may need to be added to the model (possibly in a nonlinear form). Example The residuals from the multiple regression model for forecasting US consumption plotted against each predictor in Figure 5.9 seem to be randomly scattered. Therefore we are satisfied with these in this case.

df <- as.data.frame(uschange)
df[,"Residuals"] <- as.numeric(residuals(fit.consMR))
p1 <- ggplot(df, aes(x=Income, y=Residuals)) + geom_point()
p2 <- ggplot(df, aes(x=Production, y=Residuals)) + geom_point()
p3 <- ggplot(df, aes(x=Savings, y=Residuals)) + geom_point()
p4 <- ggplot(df, aes(x=Unemployment, y=Residuals)) + geom_point()
gridExtra::grid.arrange(p1, p2, p3, p4, nrow=2)

Residual plots against fitted values A plot of the residuals against the fitted values should also show no pattern. If a pattern is observed, there may be “heteroscedasticity” in the errors which means that the variance of the residuals may not be constant.
If this problem occurs, a transformation of the forecast variable such as a logarithm or square root may be required (see Section 3.2.) Example Continuing the previous example, Figure 5.10 shows the residuals plotted against the fitted values. The random scatter suggests the errors are homoscedastic. Outliers and influential observations Observations that take extreme values compared to the majority of the data are called outliers. Observations that have a large influence on the estimated coefficients of a regression model are called influential observations. Usually, influential observations are also outliers that are extreme in the \(x\) direction. There are formal methods for detecting outliers and influential observations that are beyond the scope of this textbook. As we suggested at the beginning of Chapter 2, becoming familiar with your data prior to performing any analysis is of vital importance. A scatter plot of \(y\) against each \(x\) is always a useful starting point in regression analysis, and often helps to identify unusual observations. One source of outliers is incorrect data entry. Simple descriptive statistics of your data can identify minima and maxima that are not sensible. If such an observation is identified, and it has been recorded incorrectly, it should be corrected or removed from the sample immediately. Outliers also occur when some observations are simply different. In this case it may not be wise for these observations to be removed. If an observation has been identified as a likely outlier, it is important to study it and analyse the possible reasons behind it. The decision to remove or retain an observation can be a challenging one (especially when outliers are influential observations). It is wise to report results both with and without the removal of such observations. Example Figure 5.11 highlights the effect of a single outlier when regressing US consumption on income (the example introduced in Section 5.1). 
In the left panel the outlier is only extreme in the direction of \(y\), as the percentage change in consumption has been incorrectly recorded as -4%. The red line is the regression line fitted to the data which includes the outlier, compared to the black line which is the line fitted to the data without the outlier. In the right panel the outlier is now also extreme in the direction of \(x\), with the 4% decrease in consumption corresponding to a 6% increase in income. In this case the outlier is extremely influential, as the red line now deviates substantially from the black line.

Spurious regression

More often than not, time series data are “non-stationary”; that is, the values of the time series do not fluctuate around a constant mean or with a constant variance. We will deal with time series stationarity in more detail in Chapter 8, but here we need to address the effect that non-stationary data can have on regression models.

For example, consider the two variables plotted in Figure 5.12. These appear to be related simply because they both trend upwards in the same manner. However, air passenger traffic in Australia has nothing to do with rice production in Guinea. Regressing non-stationary time series can lead to spurious regressions. The output of regressing Australian air passengers on rice production in Guinea is shown in Figure 5.13. High \(R^2\) and high residual autocorrelation can be signs of spurious regression. Notice these features in the output below. We discuss the issues surrounding non-stationary data and spurious regressions in more detail in Chapter 9.

Cases of spurious regression might appear to give reasonable short-term forecasts, but they will generally not continue to work into the future.

aussies <- window(ausair, end=2011)
fit <- tslm(aussies ~ guinearice)
summary(fit)
#> 
#> Call:
#> tslm(formula = aussies ~ guinearice)
#> 
#> Residuals:
#>    Min     1Q Median     3Q    Max 
#> -5.945 -1.892 -0.327  1.862 10.421 
#> 
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)    -7.49       1.20   -6.23  2.3e-07 ***
#> guinearice     40.29       1.34   30.13  < 2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 3.24 on 40 degrees of freedom
#> Multiple R-squared:  0.958, Adjusted R-squared:  0.957 
#> F-statistic:  908 on 1 and 40 DF,  p-value: <2e-16
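The spurious-regression effect is easy to reproduce outside of R. The short Python sketch below (fake, made-up data; numpy only) regresses two unrelated upward-trending series on each other and recovers a misleadingly high R², illustrating the first warning sign mentioned above:

```python
import numpy as np

# Two unrelated series that both trend upwards, plus independent noise.
rng = np.random.default_rng(42)
t = np.arange(100.0)
passengers = 0.5 * t + rng.normal(scale=2.0, size=100)   # fake "air traffic"
rice = 0.02 * t + rng.normal(scale=0.1, size=100)        # fake "rice production"

# Ordinary least squares of passengers on rice.
slope, intercept = np.polyfit(rice, passengers, 1)
fitted = intercept + slope * rice
resid = passengers - fitted

# R^2 is high purely because both series trend in the same direction.
r2 = 1 - np.sum(resid**2) / np.sum((passengers - passengers.mean())**2)
print(round(r2, 2))
```

The common trend dominates the noise, so the fit looks excellent even though neither variable has anything to do with the other.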
How can the old school “much less (or greater)” – << and >> – symbols be made? I haven't succeeded with http://detexify.kirelabs.org. Note I don't want the common \ll & \gg symbols.

TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems.

The mathabx package provides this glyph as \lll (and correspondingly \ggg). However, mathabx changes a lot of symbols. If you want only these two you may easily adapt the code from Importing a Single Symbol From a Different Font

\documentclass{article}
\DeclareFontFamily{U}{matha}{\hyphenchar\font45}
\DeclareFontShape{U}{matha}{m}{n}{
    <5> <6> <7> <8> <9> <10> gen * matha
    <10.95> matha10
    <12> <14.4> <17.28> <20.74> <24.88> matha12
}{}
\DeclareSymbolFont{matha}{U}{matha}{m}{n}
\DeclareMathSymbol{\Lt}{3}{matha}{"CE}
\DeclareMathSymbol{\Gt}{3}{matha}{"CF}
\begin{document}
$a \ll b \Lt c \Gt d \gg e$
\end{document}

EDIT I was reminded in comments that \lll and \ggg are defined also in amssymb (and other math font packages) to do something else. The names \Lt, \Gt (from stix, unicode-math & others) avoid such clashes. The nice answer by Davislor also shows how to import these symbols from the stix fonts, though in that case I'd switch fonts completely. In a Computer Modern setting I find the mathabx symbols more suitable.

If you use stix, here is a version that creates a newcommand and rotates the symbols to give what you want.

\documentclass{article}
\usepackage{graphicx}
\usepackage{stix}
\newcommand{\lWedge}{\mathbin{\rotatebox[origin=c]{90}{$\Wedge$}}}
\newcommand{\gWedge}{\mathbin{\rotatebox[origin=c]{-90}{$\Wedge$}}}
\begin{document}
$\lWedge$ $\gWedge$ $\Wedge$ $\Vee$
\end{document}

EDIT: Lt and Gt exist, so no need to rotate.

These characters are in Unicode, ⪡ as U+2AA1 and ⪢ as U+2AA2. They have the macro names \Lt and \Gt in unicode-math (q.v. pages 74–75), the stix package (p.
16), the stix2 package (p. 16), and at least one other. The OpenType fonts XITS Math, Stix Two Math, Asana Math, GFS Neohellenic Math and Cambria Math all contain the glyphs, and it is possible in unicode-math to fill in only those characters with a command such as:

\setmathfont[range={`⪡,`⪢}, Scale=MatchUppercase]{Asana Math}

A code sample that works with LuaLaTeX or XeLaTeX:

\documentclass[varwidth, preview]{standalone}
\usepackage{unicode-math}
\setmainfont{STIX Two Text}
\setmathfont{STIX Two Math}
\begin{document}
\( \delta \Lt h \Gt \epsilon \)
\end{document}

And one that works with any LaTeX engine:

\documentclass[varwidth, preview]{standalone}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{stix}
\begin{document}
\( \delta \Lt h \Gt \epsilon \)
\end{document}

Should you be forced to use a legacy NFSS font package that does not contain these symbols, you might be able to load \usepackage{stix} before the other font packages. But, when another package is incompatible (perhaps because you run out of LaTeX math alphabets) and you need to declare them with \DeclareMathSymbol, you can do it with a bit of reverse-engineering. 
The following template declares \Lt and \Gt for use with newtxmath:

\documentclass[varwidth, preview]{standalone}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{newtxtext}
\usepackage{newtxmath}
\DeclareFontEncoding{LS1}{}{}
\DeclareFontSubstitution{LS1}{stix}{m}{n}
\DeclareSymbolFont{stixsymbb}{LS1}{stixbb}{m}{n}
\DeclareMathSymbol{\Lt}{\mathrel}{stixsymbb}{"F1}
\DeclareMathSymbol{\Gt}{\mathrel}{stixsymbb}{"F2}
\begin{document}
\( \delta \Lt h \Gt \epsilon \)
\end{document}

Note that this is a contrived example, and the practical code you would use to get newtx fonts plus all symbols in stix is in fact:

\usepackage{stix}
\usepackage{newtxtext}
\usepackage{newtxmath}

You can additionally insert the following lines to be able to use the UTF-8 characters directly within your document, if you are not using unicode-math (which does that out of the box):

\usepackage{newunicodechar}
\newunicodechar{⪡}{\ensuremath{\Lt}}
\newunicodechar{⪢}{\ensuremath{\Gt}}

For completeness: the boisik package also defines \Lt and \Gt, but (as of 2018) is only available as a bitmap font. It is therefore probably not an acceptable option. As pointed out in another answer, mathabx contains similar glyphs, but declares them as \lll and \ggg. This is a serious design flaw and will cause incorrect output when migrating to a different stylesheet. In the standard AMS fonts, unicode-math and several other common packages, \lll is ⋘, triple less than, and \ggg is ⋙, triple greater than. I would recommend unicode-math as the first option and stix as the second. If you want to load the glyphs from mathabx, I would use campa’s answer, revised to change the names to the more standard \Lt and \Gt.
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that after taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?

Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...

So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.

Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers. 
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...

Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism.

@AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again.

O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
If all of the continental crust on Earth were eroded down, smoothed out, and distributed evenly across the whole planet, filling in all of the ocean basins and displacing the water therein, how deep would the resulting world-wide ocean be?

I found this here, so it's not technically my answer, but here you go: The total volume of the oceans is 1.3 billion cubic kilometers (http://en.wikipedia.org/wiki/Ocean#Physical_properties). The surface area of the Earth is 510,072,000 square kilometers (http://en.wikipedia.org/wiki/Earth). Dividing the volume by the surface area, we get a depth of 2.5 kilometers. The wiki links do have those numbers and the math seems to check out (although rounded), so there you have it :)

I agree with the conclusions of the current two answers, but thought a different analysis might be interesting. Since the ocean is a hollow sphere, its depth isn't given exactly by volume/surface area. However, the relationship is still simple:

$V_{ocean} = \frac43 \pi (R_{ocean}^3 - R_{earth}^3)$
$R_{earth} = 6371 \mbox{ km}$
$V_{ocean} = 1.332 \times 10^9 \mbox{ km}^3$
$R_{ocean} = 6373.61 \mbox{ km}$

So the ocean's depth will be 2.61 km, or about 0.04% of the radius of the Earth - hence the similarity to the approximation using $\frac{V}{SA}$. Plug in better estimates for the average radius of the Earth to get more accuracy.

If all of the dry land that lies above sea level were to be pushed into the ocean, the ocean would rise less than 300 meters. Wikipedia summarizes the current division of land and sea as follows:

510,072,000 km² (196,940,000 sq mi)
148,940,000 km² land (57,510,000 sq mi; 29.2%)
361,132,000 km² water (139,434,000 sq mi; 70.8%)

The mean height of land above sea level is 0.840 km. So the volume of land above the current sea level is 148,940,000 × 0.84 = 125,109,600 km³. Reshaping that volume so that it covers the entire earth surface, it would have a height of 125,109,600 / 510,072,000 = 0.245 km. 
(Of course the displaced land would sink to the bottom, and the water would then cover the entire surface.) So the ocean would rise by 245 meters. How deep would the current ocean be? It would be 245 meters deeper at any given point than it currently is. Its average depth (about 2.5 km) would change by less than 10%.
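The three calculations above (flat approximation, spherical shell, and land-pushed-into-the-sea) are easy to check numerically; this sketch just redoes the arithmetic with the figures quoted in the answers:

```python
import math

R_EARTH = 6371.0            # mean Earth radius, km
V_OCEAN = 1.332e9           # ocean volume, km^3
SURFACE = 510_072_000.0     # Earth surface area, km^2

# Flat approximation: volume divided by surface area.
depth_flat = V_OCEAN / SURFACE

# Spherical shell: solve V = (4/3)*pi*(R_out^3 - R_earth^3) for R_out.
r_out = (R_EARTH**3 + 3 * V_OCEAN / (4 * math.pi)) ** (1 / 3)
depth_shell = r_out - R_EARTH

# Land pushed into the sea: mean land height 0.84 km over 148.94e6 km^2.
land_volume = 148_940_000.0 * 0.84
rise = land_volume / SURFACE

print(depth_flat, depth_shell, rise)  # ~2.61 km, ~2.61 km, ~0.245 km
```

As the second answer notes, the ocean is such a thin shell relative to the Earth's radius that the flat and spherical answers agree to within a few meters.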
L # 1

Show that 1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = 1.

It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Last edited by krassi_holmz (2006-03-09 02:44:53)
IPBLE: Increasing Performance By Lowering Expectations.

L # 2

If log(x)/(b-c) = log(y)/(c-a) = log(z)/(a-b), show that xyz = 1.

Let log x = x', log y = y', log z = z'. Then: x' + y' + z' = 0. Rewriting in terms of x' gives:

Well done, krassi_holmz!

L # 3

If x²y³ = a and log (x/y) = b, then what is the value of (log x)/(log y)?

log a = 2 log x + 3 log y
b = log x - log y
log a + 3b = 5 log x
log a - 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b)/(log a - 2b).

Last edited by krassi_holmz (2006-03-10 20:06:29)

Very well done, krassi_holmz! 
L # 4

You are not supposed to use a calculator or log tables for L # 4. Try again!

Last edited by JaneFairfax (2009-01-04 23:40:20)

No, I didn't. I remember.

You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again: no calculators or log tables to be used (directly or indirectly) at all!!

Last edited by JaneFairfax (2009-01-06 00:30:04)

Hi ganesh
For L # 1: since log_b(a) = 1/log_a(b) and log_a(a) = 1, we have
1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = log_abc(a) + log_abc(b) + log_abc(c) = log_abc(abc) = 1.
Best Regards Riad Zaidan

Hi ganesh
For L # 2 I think that the following proof is easier: Assume log(x)/(b-c) = log(y)/(c-a) = log(z)/(a-b) = t. So log(x) = t(b-c), log(y) = t(c-a), log(z) = t(a-b). So log(x) + log(y) + log(z) = tb - tc + tc - ta + ta - tb = 0. So log(xyz) = 0, so xyz = 1. Q.E.D.
Best Regards Riad Zaidan

Gentlemen, thanks for the proofs. Regards.

\log_2(16) = \log_2 \left ( \frac{64}{4} \right ) = \log_2(64) - \log_2(4) = 6 - 2 = 4, \quad \log_2(\sqrt[3]4) = \frac {1}{3} \log_2 (4) = \frac {2}{3}. 
L # 4

I don't want a method that will rely on defining certain functions, taking derivatives, noting concavity, etc.

Change of base: Each side is positive, and multiplying by the positive denominator keeps whatever direction of the alleged inequality the same direction: On the right-hand side, the first factor is equal to a positive number less than 1, while the second factor is equal to a positive number greater than 1. These facts are by inspection combined with the nature of exponents/logarithms. Because of (log A)B = B(log A) = log(A^B), I may turn this into: I need to show that

Then

Then 1 (on the left-hand side) will be greater than the value on the right-hand side, and the truth of the original inequality will be established.

I want to show

Raise a base of 3 to each side: Each side is positive, and I can square each side:

-----------------------------------------------------------------------------------

Then I want to show that when 2 is raised to a number equal to (or less than) 1.5, then it is less than 3. Each side is positive, and I can square each side:

Last edited by reconsideryouranswer (2011-05-27 20:05:01)
Signature line: I wish I had a more interesting signature line.

Hi reconsideryouranswer,
This problem was posted by JaneFairfax. I think it would be appropriate she verify the solution.

Hi all, I saw this post today and saw the probs on log. Well, they are not bad, they are good. But you can also try these problems here by me (Credit: to a book): http://www.mathisfunforum.com/viewtopic … 93#p399193

Practice makes a man perfect. 
There is no substitute to hard work. All of us do not have equal talents but everybody has equal opportunities to build their talents. - APJ Abdul Kalam

JaneFairfax, here is a basic proof of L # 4:

For all real a > 1, y = a^x is a strictly increasing function.

log(base 2)3 versus log(base 3)5
2*log(base 2)3 versus 2*log(base 3)5
log(base 2)9 versus log(base 3)25

2^3 = 8 < 9, so log(base 2)9 > 3.
3^3 = 27 > 25, so log(base 3)25 < 3.

So, the left-hand side is greater than the right-hand side, because its logarithm is a larger number.
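The integer-power argument above is the real proof; as a quick numerical sanity check of L # 4 (taking the problem, as stated above, to be comparing log₂3 with log₃5):

```python
import math

lhs = math.log(3, 2)   # log base 2 of 3, about 1.585
rhs = math.log(5, 3)   # log base 3 of 5, about 1.465

# The doubling trick from the proof: compare log2(9) with log3(25).
# 9 > 2**3 forces log2(9) > 3, while 25 < 3**3 forces log3(25) < 3.
assert 9 > 2**3 and 25 < 3**3
print(lhs > rhs)  # True
```

No tables needed for the proof itself, of course; the powers 8, 9, 25, 27 do all the work.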
$$\int \frac{\sin x}{\sqrt{2-\cos^2x}}dx$$ I have tried many trig identities but it seems that none have made a way to the solution.

$$\int \frac{\sin x}{\sqrt{2-\cos^2x}}dx=\int \frac{\sin x}{\sqrt{1+\sin^2x}}dx$$

$$\int \frac{\sin x}{\sqrt{2-\cos^2x}}dx=\int \frac{\sin x}{\sqrt{2-\frac{1+\cos 2x}{2}}}dx=\int \frac{\sin x}{\sqrt{\frac{3-\cos 2x}{2}}}dx$$
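For what it's worth, the integral can also be finished directly with the substitution $u = \cos x$ (a standard step, not part of the original answers):

```latex
% With u = \cos x, \; du = -\sin x\,dx:
\int \frac{\sin x}{\sqrt{2-\cos^2 x}}\,dx
  = -\int \frac{du}{\sqrt{2-u^2}}
  = -\arcsin\frac{u}{\sqrt{2}} + C
  = -\arcsin\!\left(\frac{\cos x}{\sqrt{2}}\right) + C.
```

Differentiating the result recovers $\sin x/\sqrt{2-\cos^2 x}$, confirming the antiderivative.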
Is there a closed form of the following sum? $$\sum^{m}_{j=0}\frac{(-1)^{j}{m \choose j}}{n+jk}$$ I figure it should but the binomial is throwing me off. Any help would be greatly appreciated. The given sum equals $$ \int_{0}^{1}\sum_{j=0}^{m}\binom{m}{j}(-1)^j x^{jk+n-1}\,dx =\int_{0}^{1}x^{n-1}(1-x^k)^m\,dx$$ and by the substitution $x=z^{1/k}$ and Euler's Beta function this equals $\frac{\Gamma(m+1)\,\Gamma\left(\frac{n}{k}\right)}{k\,\Gamma\left(1+m+\frac{n}{k}\right)}.$ Assuming $n, k, m$ are positive integers, you could write this as $$\log \left(\dfrac{\prod_{j \text{ even}} (n+jk)^{{m \choose j}}}{\prod_{j \text{ odd}} (n+jk)^{{m \choose j}}}\right)$$ For example, if $m=4$ it is $$\log \left( {\frac {n \left( 2\,k+n \right) ^{6} \left( 4\,k+n \right) }{ \left( k+n \right) ^{4} \left( 3\,k+n \right) ^{4}}} \right) $$ Thus this is the log of a rational function of $n$ and $k$. Numerator and denominator of that rational function are both of total degree $2^{m-1}$. I don't see how this could be simplified any further. There is a technique for this that has appeared on several occasions on MSE which I am not able to locate at this time. 
We introduce $$f(z) = (-1)^m \frac{m!}{n+zk} \prod_{q=0}^m \frac{1}{z-q}.$$ We suppose that $z=-n/k$ is not an integer from the range $[0, m].$ We then obtain $$\mathrm{Res}_{z=j} f(z) = (-1)^m \frac{m!}{n+jk} \prod_{q=0}^{j-1} \frac{1}{j-q} \prod_{q=j+1}^m \frac{1}{j-q} \\ = (-1)^m \frac{m!}{n+jk} \frac{1}{j!} \frac{(-1)^{m-j}}{(m-j)!} \\ = (-1)^j \frac{1}{n+jk} {m\choose j}.$$ It follows that $$S = \sum_{j=0}^m \mathrm{Res}_{z=j} f(z)$$ and since residues sum to zero this means that $$S = -\mathrm{Res}_{z=-n/k} f(z) - \mathrm{Res}_{z=\infty} f(z).$$ Now the residue at infinity is zero since $\lim_{R\to\infty} 2\pi R/R^{m+2} = 0$ or more formally through $$-\mathrm{Res}_{z=0} \frac{1}{z^2} f(1/z) = - \mathrm{Res}_{z=0} \frac{1}{z^2} (-1)^m \frac{m!}{n+k/z} \prod_{q=0}^m \frac{1}{1/z-q} \\ = - \mathrm{Res}_{z=0} \frac{1}{z^2} (-1)^m \frac{z\times m!}{zn+k} \prod_{q=0}^m \frac{z}{1-qz} \\ = - \mathrm{Res}_{z=0} z^m (-1)^m \frac{m!}{zn+k} \prod_{q=0}^m \frac{1}{1-qz} = 0.$$ This leaves the contribution from $z=-n/k$ and we get $$-\mathrm{Res}_{z=-n/k} \frac{1}{k} (-1)^{m} \frac{m!}{z+n/k} \prod_{q=0}^m \frac{1}{z-q} \\ = (-1)^{m+1} \frac{m!}{k} \prod_{q=0}^m \frac{1}{-n/k-q} \\ = (-1)^{m+1} \times m! \times k^{m} \prod_{q=0}^m \frac{1}{-n-qk} \\ = m! \times k^{m} \prod_{q=0}^m \frac{1}{n+qk}.$$ We recall Melzak's identity $$f\left(x+y\right)=x\dbinom{x+n}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\dbinom{n}{k}\frac{f\left(y-k\right)}{x+k},\, x,y\in\mathbb{R},\, x\neq-k $$ where $f $ is an algebraic polynomial up to degree $n $. So taking $f\left(z\right)\equiv1$ and $x=n/k$ we have $$\frac{1}{k}\sum_{j=0}^{m}\dbinom{m}{j}\frac{\left(-1\right)^{j}}{j+n/k}=\color{red}{\frac{1}{n\dbinom{n/k+m}{m}}}.$$
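All of these derivations land on the same closed form, $\sum_{j=0}^{m}(-1)^{j}\binom{m}{j}/(n+jk)=m!\,k^{m}\prod_{q=0}^{m}(n+qk)^{-1}$, which can be checked with exact rational arithmetic; the sketch below verifies it on a small grid of parameters:

```python
from fractions import Fraction
from math import comb, factorial

def lhs(m, n, k):
    """The alternating binomial sum, computed exactly."""
    return sum(Fraction((-1)**j * comb(m, j), n + j * k) for j in range(m + 1))

def rhs(m, n, k):
    """The closed form m! * k^m / ((n)(n+k)...(n+mk))."""
    prod = 1
    for q in range(m + 1):
        prod *= n + q * k
    return Fraction(factorial(m) * k**m, prod)

for m in range(6):
    for n in range(1, 5):
        for k in range(1, 5):
            assert lhs(m, n, k) == rhs(m, n, k)
print("identity holds on the grid tested")
```

Using `Fraction` avoids any floating-point doubt: the two sides match exactly, not just approximately.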
The problem I have lately been working on is Project Euler 231: The prime factorisation of binomial coefficients.

The binomial coefficient \$ ^{10}C_3 = 120 \$. \$ 120 = 2^3 × 3 × 5 = 2 × 2 × 2 × 3 × 5 \$, and \$ 2 + 2 + 2 + 3 + 5 = 14 \$. So the sum of the terms in the prime factorisation of \$^{10}C_3\$ is 14. Find the sum of the terms in the prime factorisation of \$ ^{20000000}C_{15000000} \$.

Naive approach

Using some built-in functions I was able to hack together a brute-force solution fairly quickly:

from primefac import primefac

def slow_factors(n, k):
    total = 0
    nom = range(n - k+1, n+1)
    denom = range(2, k+1)
    for num in nom:
        total += sum(primefac(num))
    for num in denom:
        total -= sum(primefac(num))
    return total

This solution uses that: \$ ^n C_k = \binom{n}{k} = \frac{n!}{k!(n-k)!} = \frac{\overbrace{n \cdot (n-1) \cdots (n-k+1)}^{k \text{ times}}}{k\cdot (k-1) \cdots 2} \$

For a concrete example of my algorithm: \$ \binom{10}{3} = \frac{10!}{3!(10-3)!} = \frac{10 \cdot 9 \cdot 8}{3 \cdot 2 \cdot 1} \$

These are respectively the numerator (nom) and denominator (denom) in my code. From here I simply calculate the prime factorization of every number in the numerator. The answer is this minus the sum of the prime factorization of the denominator.

I then realized that \$ \binom{20000000}{15000000} \$ is incredibly large! Wolfram Alpha says that it is approximately \$10^{4884377}\$, far bigger than \$9000! \approx 10^{31681}\$. So I was in need of a smarter solution.

Some improvements

The first improvement uses that: \$ \binom{n}{k} = \binom{n}{n-k} \$ We compute values \$n \cdot (n-1) \cdots\$ until \$k\$. However because of the symmetry we could also compute \$n \cdot (n-1) \cdots\$ until \$n-k\$. We choose based on which gives us the fewest values to compute:

k = min(k, n - k)

It's also inefficient because it calculates the prime factorization of each number separately. My new idea was to make a dict containing a list of sums of the factors under some LIMIT. 
Then for new numbers I would divide them by primes until I found a match in my list. This is what the factors function does. Now my code can calculate \$ \binom{200000}{150000} \$ in a little over 2 minutes. Since this is 100 times smaller than what was asked for in the problem, I estimated that my code would use well over 3 hours to finish. This is of course still too slow. Are there any other obvious speed improvements I have forgotten? I really think my idea is good, although my code surely could be improved.

Question

I'm not looking for feedback on the function slow_factors; this is only included to test the accuracy of the faster function. I'm mainly looking for speed improvements, as well as smarter approaches.

from primefac import primefac, isprime
from collections import Counter
from primesieve import generate_n_primes as primes

LIMIT = 10**4
PRIMS = primes(1000)
sum_factors = Counter()
for i in range(LIMIT):
    sum_factors[i] = sum(primefac(i))

def factors(num):
    primelist = []
    for prime in PRIMS:
        while num % prime == 0:
            primelist.append(prime)
            num //= prime
        if sum_factors[num] > 0:
            val = sum_factors[num]
            return num, val, primelist
    return num, sum(primefac(num)), primelist

def factor_lst(lst):
    total = 0
    for num in lst:
        if sum_factors[num] > 0:
            total += sum_factors[num]
        elif isprime(num) == True:
            sum_factors[num] = num
            total += num
        else:
            # Reiterates over all the newly found nums
            in_factors, val, primelist = factors(num)
            for p in primelist:
                in_factors *= p
                val += p
            sum_factors[in_factors] = val
            total += val
    return total

def binom_factor(n, k):
    k = min(k, n - k)
    nom = xrange(n - k+1, n+1)
    denom = xrange(2, k+1)
    nom_sum = factor_lst(nom)
    denom_sum = factor_lst(denom)
    return nom_sum - denom_sum

def slow_factors(n, k):
    total = 0
    k = min(k, n - k)
    nom = range(n - k+1, n+1)
    denom = range(2, k+1)
    for num in nom:
        total += sum(primefac(num))
    for num in denom:
        total -= sum(primefac(num))
    return total

if __name__ == '__main__':
    print binom_factor(76430, 4321)
    print slow_factors(76430, 4321)
    power = 4
    print binom_factor(20*10**power, 15*10**power)
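Since the question invites smarter approaches: a standard way to avoid factoring any individual number at all is Legendre's formula, which gives the exponent of a prime \$p\$ in \$n!\$ as \$\sum_{i\ge1}\lfloor n/p^i\rfloor\$; the exponent of \$p\$ in \$\binom{n}{k}\$ is then that for \$n!\$ minus those for \$k!\$ and \$(n-k)!\$. This is a hedged sketch of that idea (not the poster's code, and written in Python 3), needing only a single sieve up to \$n\$:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(sieve[i * i :: i])
    return [i for i in range(2, n + 1) if sieve[i]]

def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    total = 0
    while n:
        n //= p
        total += n
    return total

def binom_prime_sum(n, k):
    """Sum of the prime factorisation (with multiplicity) of C(n, k)."""
    return sum(
        p * (legendre(n, p) - legendre(k, p) - legendre(n - k, p))
        for p in primes_up_to(n)
    )

print(binom_prime_sum(10, 3))  # the worked example: 2+2+2+3+5 = 14
```

Each prime needs only O(log_p n) divisions, so the whole computation is dominated by the sieve rather than by repeated trial division.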
The Annals of Statistics Ann. Statist. Volume 19, Number 3 (1991), 1639-1650. Generalizations of James-Stein Estimators Under Spherical Symmetry Abstract This paper is primarily concerned with extending the results of Stein to spherically symmetric distributions. Specifically, when $X \sim f(\|X - \theta\|^2)$, we investigate conditions under which estimators of the form $X + ag(X)$ dominate $X$ for loss functions $\|\delta - \theta\|^2$ and loss functions which are concave in $\|\delta - \theta\|^2$. Additionally, if the scale is unknown we investigate estimators of the location parameter of the form $X + aVg(X)$ in two different settings. In the first, an estimator $V$ of the scale is independent of $X$. In the second, $V$ is the sum of squared residuals in the usual canonical setting of a generalized linear model when sampling from a spherically symmetric distribution. These results are also generalized to concave loss. The conditions for domination of $X + ag(X)$ are typically (a) $\|g\|^2 + 2\nabla \circ g \leq 0$, (b) $\nabla \circ g$ is superharmonic and (c) $0 < a < 1/pE_0(1/\|X\|^2)$, plus technical conditions. Article information Source Ann. Statist., Volume 19, Number 3 (1991), 1639-1650. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176348267 Digital Object Identifier doi:10.1214/aos/1176348267 Mathematical Reviews number (MathSciNet) MR1126343 Zentralblatt MATH identifier 0741.62058 JSTOR links.jstor.org Citation Brandwein, Ann Cohen; Strawderman, William E. Generalizations of James-Stein Estimators Under Spherical Symmetry. Ann. Statist. 19 (1991), no. 3, 1639--1650. doi:10.1214/aos/1176348267. https://projecteuclid.org/euclid.aos/1176348267
Just trying to understand a few concepts first. If H is a Hamiltonian operator, then H is characteristic of the system, and changes from system to system? Moreover, if you have some wavefunction f, then would <f|H|f> be the energy of the state corresponding to f? Suppose [itex]\psi _k[/itex] are solutions to the time independent Schrödinger equation: [tex]H(\psi _k) = E_k \psi _k[/tex] Is it true that [itex]\frac{\partial }{\partial t}\psi _k = 0[/itex]? Is it also true that: [tex]H\left (\exp \left (\frac{-iE_k (t - t_0)}{\hbar }\right )\psi _k\right ) = \exp \left (\frac{-iE_k (t-t_0)}{\hbar }\right )H(\psi _k)[/tex] Also, the dynamics of the system are described by the wave function that satisfies: [tex]H(\psi (t)) = i\hbar \frac{\partial }{\partial t}\psi (t)[/tex] and it's not as though that's a definition for H, right? Next, is a Hamiltonian Hermitian if and only if its eigenfunctions span the Hilbert space? Is the previous sentence at least partially true, if not entirely? Now, some problems: ------------- Assume that H, A and B are Hermitian operators on a finite-dimensional Hilbert space. 1. Show that if [H,A] = 0, then A and H have a complete set of eigenfunctions in common, i.e. there exists a basis [itex]\{\psi _{\alpha i}\}[/itex] for the Hilbert space such that: [tex]H\psi _{\alpha i} = E_{\alpha }\psi _{\alpha i},\ A\psi _{\alpha i} = a_i\psi _{\alpha i}[/tex] Since the operators are Hermitian, I think it suffices to show that for some choice of eigenfunctions, every eigenfunction of H is an eigenfunction of A. I don't really know how to do this. I've already figured that if f is an eigenfunction for A corresponding to value a, then: HAf = Haf = aHf Since HA = AH, it's also true that: AHf = aHf, so Hf is an eigenfunction of A corresponding to the same eigenvalue as f. Can we pick the f so that Hf = Ef? 2. 
Show that if A and B are both symmetries of H and [A,B] = 0, then it is possible to construct a basis for the Hilbert space with a triple set of labels [itex]\psi _{\alpha i j}[/itex] such that: [tex]H\psi _{\alpha i j} = E_{\alpha }\psi _{\alpha i j},\ A\psi _{\alpha i j} = a_i\psi _{\alpha i j},\ B\psi _{\alpha i j} = b_j\psi _{\alpha i j}[/tex] I'm guessing that the eigenfunctions of H span the Hilbert space (as I asked before, does this follow from the fact that it's Hermitian?), so [H,A] = [H,B] = 0. So from the previous problem, we know that each pair of operators has a basis in common. Although I don't know how to show that they can all share the same basis. I think if I knew how to do question 1, I might know how to do this one. 3. Show that if A and B are both symmetries of H but [A,B] != 0, then it is possible to construct a basis for the Hilbert space with two labelling indices [itex]\{\psi _{\alpha i}\}[/itex] such that: [tex]H\psi _{\alpha i} = E_{\alpha }\psi _{\alpha i},\ A\psi _{\alpha i} = a_i\psi _{\alpha i},\ B\psi _{\alpha i} = \sum _j \psi _{\alpha j}M_{ji}(B)[/tex] where M(B) is a complex matrix. .... No idea for this one.
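Not a substitute for the proof asked for in question 1, but the claim is easy to illustrate numerically: build two Hermitian matrices that are diagonal in the same (randomly chosen) orthonormal basis, confirm they commute, and check that the basis vectors are simultaneous eigenvectors. The matrices and eigenvalues below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary Q via QR decomposition of a complex Gaussian matrix.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)

# H and A share the eigenbasis Q; note H has a degenerate eigenvalue.
E = np.diag([1.0, 1.0, 2.0, 3.0])   # "energies" E_alpha
a = np.diag([5.0, 6.0, 7.0, 8.0])   # eigenvalues a_i of A
H = Q @ E @ Q.conj().T
A = Q @ a @ Q.conj().T

# They commute, and each column of Q is an eigenvector of both.
comm = np.linalg.norm(H @ A - A @ H)
ok = all(
    np.allclose(H @ Q[:, i], E[i, i] * Q[:, i])
    and np.allclose(A @ Q[:, i], a[i, i] * Q[:, i])
    for i in range(4)
)
print(comm < 1e-10, ok)
```

The degenerate E block is the interesting part: within the E = 1 eigenspace of H, the eigenvectors of H are not unique, and it is A (with distinct eigenvalues 5 and 6 there) that picks out the shared basis — exactly the freedom the "Can we pick the f" question is about.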
Hi, I reinstalled Cadabra, but now the previous notebooks no longer work. It looks like something breaks at the "Depends" property. I use "Depends" on indices like {\dot{#}, \bar{#}}::Symbol; {\alpha, \beta, \gamma, \delta}::Indices(chiral, position=fixed); {\dalpha, \dbeta, \dgamma, \ddelta}::Indices(antichiral, position=fixed); followed by \theta{#}::Depends{\alpha, \beta, \gamma, \delta, \dalpha, \dbeta, \dgamma, \ddelta}; which gives me an error like RuntimeError: Depends: \prod lacks property Coordinate, Derivative, Accent or Indices. In 2.x, make sure to write dependence on a derivative as A::Depends(\partial{#}), note the '{#}'. I haven't yet reached the point where I previously had the problem with the other "accented" symbols, but I will check whether that works now.
Advances in Differential Equations, Volume 19, Number 11/12 (2014), 1043-1066. Maximal regularity for evolution equations governed by non-autonomous forms. Abstract: We consider a non-autonomous evolutionary problem \[ \dot{u} (t)+\mathcal A(t)u(t)=f(t), \quad u(0)=u_0 \] where the operator $\mathcal A(t)\colon V\to V^\prime$ is associated with a form $\mathfrak{a}(t,.,.)\colon V\times V \to \mathbb R$ and $u_0\in V$. Our main concern is to prove well-posedness with maximal regularity, which means the following. Given a Hilbert space $H$ such that $V$ is continuously and densely embedded into $H$ and given $f\in L^2(0,T;H)$, we are interested in solutions $u \in H^1(0,T;H)\cap L^2(0,T;V)$. We do prove well-posedness in this sense whenever the form is piecewise Lipschitz-continuous and satisfies the square root property. Moreover, we show that each solution is in $C([0,T];V)$. The results are applied to non-autonomous Robin boundary conditions, and maximal regularity is used to solve a quasilinear problem. Article information: First available in Project Euclid: 18 August 2014. Permanent link: https://projecteuclid.org/euclid.ade/1408367288. Mathematical Reviews number (MathSciNet): MR3250762. Zentralblatt MATH identifier: 1319.35106. Subjects: Primary 35K90 (abstract parabolic equations); Secondary 35K50, 35K45 (initial value problems for second-order parabolic systems), 47D06 (one-parameter semigroups and linear evolution equations; see also 34G10, 34K30). Citation: Arendt, Wolfgang; Dier, Dominik; Laasri, Hafida; Ouhabaz, El Maati. Maximal regularity for evolution equations governed by non-autonomous forms. Adv. Differential Equations 19 (2014), no. 11/12, 1043-1066.
Answer a) 390 N b) 7.7 m/s, directed toward the second base Work Step by Step a) Frictional force = 0.49 $\times$ mg = 0.49 $\times$ 81 $\times$ 9.8 $\approx$ 390 N a = $\frac{F_{net}}{m}$ $\approx$ $\frac{-390}{81}$ $\approx$ -4.8 m/$s^{2}$ b) u = v - at u = 0 - (-4.8)(1.6) u $\approx$ 7.7 m/s, directed toward the second base
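The arithmetic can be checked in a few lines (values taken from the problem statement):

```python
mu, m, g, t = 0.49, 81.0, 9.8, 1.6

F = mu * m * g       # kinetic friction force: 0.49 * 81 * 9.8
a = -F / m           # deceleration while sliding
u = 0 - a * t        # initial speed, from v = u + a*t with final v = 0

assert round(F) == 389        # reported as approximately 390 N
assert round(u, 2) == 7.68    # reported as approximately 7.7 m/s
```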
Tagged: trace of a matrix Problem 505 Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix. Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\] Using the formula, calculate the inverse matrix of $\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$. Problem 391 (a) Is the matrix $A=\begin{bmatrix} 1 & 2\\ 0& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 1& 2 \end{bmatrix}$? (b) Is the matrix $A=\begin{bmatrix} 0 & 1\\ 5& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ 4& 3 \end{bmatrix}$? (c) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 0& 2 \end{bmatrix}$? (d) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ -1& 4 \end{bmatrix}$? Problem 389 (a) A $2 \times 2$ matrix $A$ satisfies $\tr(A^2)=5$ and $\tr(A)=3$. Find $\det(A)$. (b) A $2 \times 2$ matrix $A$ has two parallel columns and $\tr(A)=5$. Find $\tr(A^2)$. (c) A $2\times 2$ matrix $A$ has $\det(A)=5$ and positive integer eigenvalues. What is the trace of $A$? (Harvard University, Linear Algebra Exam Problem) Problem 79 Let $V$ be the set of all $n \times n$ diagonal matrices whose traces are zero. That is, \begin{equation*} V:=\left\{ A=\begin{bmatrix} a_{11} & 0 & \dots & 0 \\ 0 &a_{22} & \dots & 0 \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{bmatrix} \quad \middle| \quad \begin{array}{l} a_{11}, \dots, a_{nn} \in \C,\\ \tr(A)=0 \\ \end{array} \right\} \end{equation*} Let $E_{ij}$ denote the $n \times n$ matrix whose $(i,j)$-entry is $1$ and zero elsewhere. (a) Show that $V$ is a subspace of the vector space $M_n$ over $\C$ of all $n\times n$ matrices. (You may assume without a proof that $M_n$ is a vector space.)
(b) Show that the matrices \[E_{11}-E_{22}, \, E_{22}-E_{33}, \, \dots,\, E_{n-1\, n-1}-E_{nn}\] are a basis for the vector space $V$. (c) Find the dimension of $V$. Problem 69 Let $F$ and $H$ be $n\times n$ matrices satisfying the relation \[HF-FH=-2F.\] (a) Find the trace of the matrix $F$. (b) Let $\lambda$ be an eigenvalue of $H$ and let $\mathbf{v}$ be an eigenvector corresponding to $\lambda$. Show that there exists a positive integer $N$ such that $F^N\mathbf{v}=\mathbf{0}$. Problem 46 Let $A$ be an $n\times n$ matrix such that $A^k=I_n$, where $k\in \N$ and $I_n$ is the $n \times n$ identity matrix. Show that the trace of $(A^{-1})^{\trans}$ is the conjugate of the trace of $A$. That is, show that $\tr((A^{-1})^{\trans})=\overline{\tr(A)}$. Problem 34 (a) Let \[A=\begin{bmatrix} a_{11} & a_{12}\\ a_{21}& a_{22} \end{bmatrix}\] be a matrix such that $a_{11}+a_{12}=1$ and $a_{21}+a_{22}=1$. Namely, the sum of the entries in each row is $1$. (Such a matrix is called a (right) stochastic matrix; it is also termed a probability matrix, transition matrix, substitution matrix, or Markov matrix.) Then prove that the matrix $A$ has an eigenvalue $1$. (b) Find all the eigenvalues of the matrix \[B=\begin{bmatrix} 0.3 & 0.7\\ 0.6& 0.4 \end{bmatrix}.\] (c) For each eigenvalue of $B$, find the corresponding eigenvectors. Problem 19 Let $A=(a_{i j})$ and $B=(b_{i j})$ be $n\times n$ real matrices for some $n \in \N$. Then answer the following questions about the trace of a matrix. (a) Express $\tr(AB^{\trans})$ in terms of the entries of the matrices $A$ and $B$. Here $B^{\trans}$ is the transpose matrix of $B$. (b) Show that $\tr(AA^{\trans})$ is the sum of the squares of the entries of $A$. (c) Show that if $A$ is a nonzero symmetric matrix, then $\tr(A^2)>0$.
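A quick numerical check of the Problem 505 formula on the example matrix given there (here $A = \begin{bmatrix}1&1\\1&1\end{bmatrix}$, which is singular with $\tr(A)=2$):

```python
# Problem 505: for singular A with tr(A) != -1, (I+A)^{-1} = I - A/(1+tr A).
# With A = [[1,1],[1,1]], I + A = [[2,1],[1,2]] and the formula gives I - A/3.
A = [[1.0, 1.0], [1.0, 1.0]]
t = A[0][0] + A[1][1]                     # tr(A) = 2
inv = [[1 - A[0][0] / (1 + t), -A[0][1] / (1 + t)],
       [-A[1][0] / (1 + t), 1 - A[1][1] / (1 + t)]]
IplusA = [[1 + A[0][0], A[0][1]],
          [A[1][0], 1 + A[1][1]]]

# multiply: the product should be the identity matrix
prod = [[sum(inv[i][k] * IplusA[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```

The computed inverse is $\begin{bmatrix}2/3&-1/3\\-1/3&2/3\end{bmatrix}$, matching the direct calculation.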
I am new to reconstruction and interpolation. From what I understand, interpolation using cubic B-splines can be viewed in two ways: 1) through construction of a linear system of equations whose solution uniquely determines the interpolating cubic spline, and 2) in a filter/convolution type framework. My question concerns the latter. When viewing the problem as filter/convolution, the reconstruction for a uniformly sampled data set is given by something like: \begin{align} \sum_i s(i)*h(i) \beta_3 (x - i) \end{align} where $s(i)$ is the sampled signal, $h(i)$ is an interpolation filter, and $\beta_3$ is a third order B-spline (or cubic spline obtained by 3 convolutions of the box function). The reason this filter is needed is to remove the high frequency "copies" of the signal in the frequency domain by: $S(\omega)H(\omega)$. My question is the following: Is this filter $H(\omega)$ uniquely determined based on the fact that I am using cubic B-splines? Or can I just take it to be the box function? \begin{align} H(\omega) = \left\{ \begin{array}{c} 1, |\omega| < \omega_{max} \\ 0, |\omega| \geq \omega_{max}\\ \end{array} \right. \end{align} I came across one possible explanation here: but am unsure if it is exactly related to my problem since in their sum, they only have $c(l)$, whereas I have $s(i)*h(i)$.
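To make the two viewpoints concrete, here is a small sketch showing that the naive choice $c = s$ (i.e. no prefilter) fails to interpolate the samples, while inverting the sampled cubic B-spline kernel ($\beta_3(0)=2/3$, $\beta_3(\pm 1)=1/6$) succeeds; boundary extension is ignored for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
s = rng.normal(size=N)   # arbitrary samples

# the sampled cubic B-spline kernel as a tridiagonal matrix:
# (B c)[n] = sum_k c[k] * beta3(n - k), with beta3(0)=2/3, beta3(+-1)=1/6
B = (np.diag(np.full(N, 2/3))
     + np.diag(np.full(N - 1, 1/6), 1)
     + np.diag(np.full(N - 1, 1/6), -1))

# using the samples directly as coefficients does NOT interpolate
assert not np.allclose(B @ s, s)

# the correct coefficients apply the inverse of the sampled kernel
c = np.linalg.solve(B, s)
assert np.allclose(B @ c, s)
```

This is why the prefilter is tied to the B-spline choice: its role is to undo the smoothing of the sampled $\beta_3$ kernel, which an ideal box low-pass does not do.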
Ugh, I just lost my post, but the short version is that, on top of Igor's answer, it is easy to prove this using Edmonds' characterization of the perfect matching polytope, which implies that putting weight 1/k on every edge will give you a vector in the polytope. From this fact the matching-coveredness is straightforward. EDIT: Edmonds proved that a vector (i.e. an edge-weighting $w(e)$) is in the perfect matching polytope (i.e. the convex hull of incidence vectors of perfect matchings) if and only if the following hold: 1) Every edge has weight in $[0,1]$. 2) Every set $S$ of vertices with odd size has $\sum_{e\in\delta(S)} w(e) \geq 1$, where $\delta(S)$ is the set of edges with exactly one endpoint in $S$. 3) Every vertex $v$ satisfies $\sum_{e\in\delta(\{v\})} w(e) = 1$. It is an easy exercise to show that these conditions are necessary, but as Edmonds proved, they are also sufficient. This implies immediately that if $G$ is a $(k-1)$-edge-connected graph that is $k$-regular, the vector with every edge getting weight $1/k$ is in the perfect matching polytope of $G$ (in other words, $G$ is fractionally $k$-edge-colourable). (For condition 2, note that a degree count in a $k$-regular graph gives $|\delta(S)| \equiv k|S| \pmod 2$, so for odd $S$ the bound $|\delta(S)| \geq k-1$ improves to $|\delta(S)| \geq k$ by parity, and the weight across the cut is at least $1$.) Since the weight vector is nonzero everywhere, every edge must be contained in at least one perfect matching. (Again in other words, since only perfect matchings can be used to fractionally $k$-edge-colour a $k$-regular graph, every edge must be in a perfect matching.)
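As a concrete sanity check of the conclusion, here is a stdlib-only brute-force sketch verifying that the Petersen graph (3-regular and 3-edge-connected, so it satisfies the hypotheses with k = 3) is matching-covered:

```python
from itertools import combinations  # not strictly needed; kept for tinkering

# Petersen graph: outer 5-cycle on 0..4, inner pentagram on 5..9, plus spokes
edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
edges |= {frozenset((5 + i, 5 + (i + 2) % 5)) for i in range(5)}
edges |= {frozenset((i, i + 5)) for i in range(5)}

def has_perfect_matching(verts, edges):
    """Brute-force search, fine at this size: match the smallest vertex
    with each available neighbour and recurse."""
    if not verts:
        return True
    v = min(verts)
    for e in edges:
        if v in e and e <= verts:
            if has_perfect_matching(verts - e, edges):
                return True
    return False

# matching-covered: every edge extends to a perfect matching
for e in edges:
    assert has_perfect_matching(frozenset(range(10)) - e, edges)
```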
12.2 Time series of counts All of the methods discussed in this book assume that the data have a continuous sample space. But often data comes in the form of counts. For example, we may wish to forecast the number of customers who enter a store each day. We could have 0, 1, 2, … customers, but we cannot have 3.45693 customers. In practice, this rarely matters provided our counts are sufficiently large. If the minimum number of customers is at least 100, then the difference between a continuous sample space \([100,\infty)\) and the discrete sample space \(\{100,101,102,\dots\}\) has no perceivable effect on our forecasts. However, if our data contains small counts \((0, 1, 2, \dots)\), then we need to use forecasting methods that are more appropriate for a sample space of non-negative integers. Such models are beyond the scope of this book. However, there is one simple method which gets used in this context, that we would like to mention. It is “Croston’s method”, named after its British inventor, John Croston, and first described in Croston (1972). Actually, this method does not properly deal with the count nature of the data either, but it is used so often that it is worth knowing about. With Croston’s method, we construct two new series from our original time series by noting which time periods contain zero values, and which periods contain non-zero values. Let \(q_i\) be the \(i\)th non-zero quantity, and let \(a_i\) be the time between \(q_{i-1}\) and \(q_i\). Croston’s method involves separate simple exponential smoothing forecasts on the two new series \(a\) and \(q\). Because the method is usually applied to time series of demand for items, \(q\) is often called the “demand” and \(a\) the “inter-arrival time”.
If \(\hat{q}_{i+1|i}\) and \(\hat{a}_{i+1|i}\) are the one-step forecasts of the \((i+1)\)th demand and inter-arrival time respectively, based on data up to demand \(i\), then Croston’s method gives \[\begin{align} \hat{q}_{i+1|i} & = (1-\alpha)\hat{q}_{i|i-1} + \alpha q_i, \tag{12.1}\\ \hat{a}_{i+1|i} & = (1-\alpha)\hat{a}_{i|i-1} + \alpha a_i. \tag{12.2} \end{align}\] The smoothing parameter \(\alpha\) takes values between 0 and 1 and is assumed to be the same for both equations. Let \(j\) be the time of the last observed positive observation. Then the \(h\)-step ahead forecast for the demand at time \(T+h\) is given by the ratio \[ \hat{y}_{T+h|T} = \hat{q}_{j+1|j}/\hat{a}_{j+1|j}. \] There are no algebraic results allowing us to compute prediction intervals for this method, because the method does not correspond to any statistical model (Shenstone & Hyndman, 2005). The croston() function produces forecasts using Croston’s method. It simply uses \(\alpha=0.1\) by default, and \(\ell_0\) is set to be equal to the first observation in each of the series. This is consistent with the way Croston envisaged the method being used. Example: lubricant sales Several years ago, we assisted an oil company with forecasts of monthly lubricant sales. One of the time series is shown in the table below. The data contain small counts, with many months registering no sales at all, and only small numbers of items sold in other months.

Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
  1  |  0  |  2  |  0  |  1  |  0  | 11  |  0  |  0  |  0  |  0  |  2  |  0
  2  |  6  |  3  |  0  |  0  |  0  |  0  |  0  |  7  |  0  |  0  |  0  |  0
  3  |  0  |  0  |  0  |  3  |  1  |  0  |  0  |  1  |  0  |  1  |  0  |  0

There are 11 non-zero demand values in the series, denoted by \(q\). The corresponding inter-arrival series \(a\) is also shown in the following table.

\(i\) | 1 | 2 |  3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
\(q\) | 2 | 1 | 11 | 2 | 6 | 3 | 7 | 3 | 1 |  1 |  1
\(a\) | 2 | 2 |  2 | 5 | 2 | 1 | 6 | 8 | 1 |  3 |  2

Applying Croston’s method gives the demand forecast 2.750 and the arrival forecast 2.793.
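These forecasts are easy to reproduce; here is a sketch in Python (`croston_forecast` is an illustrative name, not the R function):

```python
def croston_forecast(y, alpha=0.1):
    """Croston's method: simple exponential smoothing on the non-zero
    demands q and on the inter-arrival times a; forecast = q_hat / a_hat."""
    q, a, gap = [], [], 0
    for v in y:
        gap += 1
        if v != 0:
            q.append(v)
            a.append(gap)
            gap = 0

    def ses(x):
        level = x[0]                # l0 = first observation, as in the text
        for v in x:
            level = (1 - alpha) * level + alpha * v
        return level

    q_hat, a_hat = ses(q), ses(a)
    return q_hat, a_hat, q_hat / a_hat

# the 36 monthly lubricant sales from the table above
y = [0, 2, 0, 1, 0, 11, 0, 0, 0, 0, 2, 0,
     6, 3, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0,
     0, 0, 0, 3, 1, 0, 0, 1, 0, 1, 0, 0]
q_hat, a_hat, fc = croston_forecast(y)
assert (round(q_hat, 3), round(a_hat, 3), round(fc, 3)) == (2.75, 2.793, 0.985)
```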
So the forecast of the original series is \(\hat{y}_{T+h|T} = 2.750 / 2.793 = 0.985\). In practice, the R function croston() does these calculations for you. An implementation of Croston’s method with more facilities (including parameter estimation) is available in the tsintermittent package for R. Forecasting models that deal more directly with the count nature of the data are described in Christou & Fokianos (2015). Bibliography Christou, V., & Fokianos, K. (2015). On count time series prediction. Journal of Statistical Computation and Simulation, 85(2), 357–373. https://doi.org/10.1080/00949655.2013.823612 Croston, J. D. (1972). Forecasting and stock control for intermittent demands. Operational Research Quarterly, 23(3), 289–303. https://doi.org/10.2307/3007885 Shenstone, L., & Hyndman, R. J. (2005). Stochastic models underlying Croston’s method for intermittent demand forecasting. Journal of Forecasting, 24(6), 389–402. https://robjhyndman.com/publications/croston/
So basically $dS_t=\mu S_tdt+\sigma S_tdW_t$ and $\mu=r-\frac12\sigma^2$. I have just been thinking about this latter equation. It is very interesting because it ties together the risk-free rate, volatility and asset drift. I always like to look at an equation from some simple perspective, for example assuming that something is huge or very small or 0, and watching how it impacts the other variables. This is a good approach for remembering dependencies. So looking at this latter equation, the first thing to note is the negative sign on the volatility term. This is OK when trying to explain why the VIX is an index of fear and why "investors" don't like increases in volatility. But in macroeconomic theory, an increasing risk-free rate translates to increased demand for bonds and decreased demand for stocks, so their prices drop - this is quite real in today's market - when US Treasury yields rise, stocks go down and vice versa. So this does not seem to agree with the equally fundamental relation $\mu=r-\frac12\sigma^2$. How do you interpret this fact?
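As a sanity check on what the relation actually says, a small Monte Carlo sketch (parameter values are arbitrary choices): under risk-neutral GBM dynamics, $E[S_T]$ grows at rate $r$, while $E[\ln S_T]$ drifts at $r - \tfrac12\sigma^2$; the equation is a statement about log-price drift, not about how rates move stock demand.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, r, sigma, T, n = 100.0, 0.03, 0.4, 1.0, 400_000

# exact GBM terminal values: S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# price level grows at r ...
assert abs(ST.mean() / (S0 * np.exp(r * T)) - 1) < 0.01
# ... while the log-price drifts at r - sigma^2/2
assert abs(np.log(ST).mean() - (np.log(S0) + (r - 0.5 * sigma**2) * T)) < 0.01
```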
Let's start by writing the definition of the equilibrium constant for some general reaction: $$\sum_i^{n_\text{reac}} a_iA_i = \sum_j^{n_\text{prod}}b_jB_j$$ The above is a general reaction, where $a_i$ and $b_j$ are the stoichiometric coefficients and $A_i$ and $B_j$ are the species. For this reaction we can write the equilibrium constant: $$K=\frac{\prod_j^{n_\text{prod}}\frac{B_j^{b_j}}{U_j^{\circ}}}{\prod_i^{n_\text{reac}}\frac{A_i^{a_i}}{U_i^{\circ}}}$$ Here $\prod$ is the symbol that denotes a product over a range. The $U_i^\circ$ are standard units, to ensure that our equilibrium constant is well defined. Now if we look at your reaction: $$\mathrm{H_2O\left( l \right)} \rightleftharpoons \mathrm{H_2O\left( g \right)}$$ we can from the above equations identify that $a_1 = 1$, $b_1=1$, $A_1=\left[ \mathrm{H_2O} \right]$ and $B_1=p_\mathrm{H_2O}$. We can thus write our equilibrium constant as: $$K=\frac{\frac{p_\mathrm{H_2O}}{1\text{ bar}}}{\frac{\left[ \mathrm{H_2O} \right]}{1\text{ M}}}$$ As you might note, I have also inserted the $U$s. For pressures $U=1\text{ bar}$ and for concentrations $U=1\text{ M}$. Now we need one last ingredient. You might have learned that the activity of a liquid is $1$; this means that we have to set $\frac{\left[ \mathrm{H_2O} \right]}{1\text{ M}}=1$, which is due to the definition of activity. If interested, I would suggest taking a look at this. As our final result we can see that: $$K=\frac{p_\mathrm{H_2O}}{1\text{ bar}}$$ I.e. we have that the equilibrium constant equals the partial pressure of water in standard units.
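As an illustrative numerical application (the vapor pressure value is an assumed, approximate literature number, not from the answer above): at 25 °C the vapor pressure of water is about 0.0317 bar, so $K \approx 0.0317$ and the standard Gibbs energy of vaporization comes out near +8.6 kJ/mol.

```python
import math

R, T = 8.314, 298.15       # J/(mol K), K
p_h2o = 0.0317             # bar; approximate vapor pressure of water at 25 C

K = p_h2o / 1.0            # divided by the standard unit U = 1 bar
dG = -R * T * math.log(K)  # standard Gibbs energy of vaporization, J/mol

assert 8000 < dG < 9000    # roughly +8.6 kJ/mol
```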
In his book "All of Statistics", Prof. Larry Wasserman presents the following Example (11.10, page 188). Suppose that we have a density $f$ such that $f(x)=c\,g(x)$, where $g$ is a known (nonnegative, integrable) function, and the normalization constant $c>0$ is unknown. We are interested in those cases where we can't compute $c=1/\int g(x)\,dx$. For example, it may be the case that $f$ is a pdf over a very high-dimensional sample space. It is well known that there are simulation techniques that allow us to sample from $f$, even though $c$ is unknown. Hence, the puzzle is: How could we estimate $c$ from such a sample? Prof. Wasserman describes the following Bayesian solution: let $\pi$ be some prior for $c$. The likelihood is $$ L_x(c) = \prod_{i=1}^n f(x_i) = \prod_{i=1}^n \left(c\,g(x_i)\right) = c^n \prod_{i=1}^n g(x_i) \propto c^n \, . $$ Therefore, the posterior $$ \pi(c\mid x) \propto c^n \pi(c) $$ does not depend on the sample values $x_1,\dots,x_n$. Hence, a Bayesian can't use the information contained in the sample to make inferences about $c$. Prof. Wasserman points out that "Bayesians are slaves of the likelihood function. When the likelihood goes awry, so will Bayesian inference". My question for my fellow stackers is: Regarding this particular example, what went wrong (if anything) with Bayesian methodology? P.S. As Prof. Wasserman kindly explained in his answer, the example is due to Ed George.
Question: An infinite cyclic group has exactly two generators. Answer: Suppose $G=\langle a\rangle$ is an infinite cyclic group. If $b=a^{n}\in G$ is a generator of $G$, then as $a\in G$, $a=b^{m}={(a^{n})}^{m}=a^{nm}$ for some $m\in Z$. $\therefore$ We have $a^{nm-1}=e.$ (We know that the cyclic group $G=\langle a\rangle$ is infinite if and only if $0$ is the only integer for which $a^{0}=e$.) So we have $nm-1=0\Rightarrow nm=1.$ As $n$ and $m$ are integers, we have $n=1$ or $n=-1.$ Now, $n=1$ gives $b=a$, which is already a generator, and $n=-1$ gives $$H=\langle a^{-1}\rangle =\{(a^{-1})^{j}\mid j\in Z\} =\{a^{k}\mid k\in Z\}=G$$ That is, $a^{-1}$ is also a generator of $G$. My question is: am I approaching this question correctly?
For a city we have simplified its weather forecasting as follows. If it rains, then the probability of rain the next day is $0.2$. If it is sunny, then the probability of a sunny day the next day is $0.7$. The vector $$x_{k}=\begin{bmatrix}\text{probability of sunny weather at day } k \\ \text{probability of rainy weather at day } k\end{bmatrix}$$ holds the probabilities of sunny and rainy weather. From the probabilities at day $k$ we get, at day $k+1$, $$x_{k+1}=\begin{bmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{bmatrix} x_k$$ What is the probability that it rains on a random day? I have a hint that I can assume that $x_0 = [1\;0]^T$. So \begin{align} x_{0+1}&=\begin{bmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{bmatrix} \begin{bmatrix} 1 \\ 0\end{bmatrix} = \begin{bmatrix} 0.7 \\ 0.3 \end{bmatrix} \\ x_{1+1}&=\begin{bmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{bmatrix} \begin{bmatrix} 0.7 \\ 0.3\end{bmatrix} = \begin{bmatrix} 0.73 \\ 0.27 \end{bmatrix} \end{align} Should I continue this for $k\to \infty$ or use some other method?
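Continuing the iteration does work: it converges to the stationary vector, i.e. the fixed point of $x = Px$, which you can also find directly by solving that linear system. A quick sketch in pure Python:

```python
P = [[0.7, 0.8],
     [0.3, 0.2]]

x = [1.0, 0.0]   # start from a sunny day; the limit is the same from any start
for _ in range(60):                       # power iteration; the second
    x = [P[0][0] * x[0] + P[0][1] * x[1], # eigenvalue is -0.1, so this
         P[1][0] * x[0] + P[1][1] * x[1]] # converges very fast

# the stationary distribution solves x = P x: sunny 8/11, rainy 3/11
assert abs(x[0] - 8 / 11) < 1e-12
assert abs(x[1] - 3 / 11) < 1e-12
```

So the long-run probability of rain on a random day is 3/11, about 0.273, consistent with the iterates 0.3, 0.27, ... computed above.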
I saw a concept on the Internet that says "the strength of genetic drift is inversely proportional to the population size". I don't know why they are inversely proportional? Can somebody explain? Thank you all! Plane Crash Analogy 4 people in a plane crash In a small aeroplane, there are 2 people that wear a blue shirt and 2 people that wear a green shirt. The plane crashes, and half of the people die. The 2 survivors are those wearing the green shirt… well, nothing so surprising! 400 people in a plane crash In a very big aeroplane, there are 200 people that wear a blue shirt and 200 people that wear a green shirt. The plane crashes, and half of the people die. The 200 survivors are those wearing the green shirt… This is quite surprising! Genetic Drift The same logic applies to genetic drift. Genetic drift is caused by events that modify the reproductive success of individuals in a random way (independently of their genotype). We usually refer to this as random sampling. At each generation, individuals are randomly chosen to reproduce, and some genotypes might just happen to be chosen more often than others at a given draw (= at a given generation). Genetic drift pushes allele frequencies slightly away from what would otherwise be predicted. According to the Wright-Fisher model, the frequency of alleles (of a bi-allelic gene) in a haploid population (to make it easier) in the next time step is given by: $$p' = \frac{p \cdot W_A}{p \cdot W_A + (1-p) \cdot W_a}$$ where $p$ is the frequency of an allele at time $t$ and $p'$ is the frequency at time $t+1$. $W_A$ is the fitness of the genotype whose frequency is $p$ and $W_a$ is the fitness of the genotype whose frequency is $1-p$. If the population is infinite, the predictions of this equation are exactly correct. Now suppose meteorites fall and half of the individuals get killed. The probability of getting killed by a meteorite obviously does not depend on genetic predisposition; it is a question of chance!
If you look at a population of 1 million individuals, half of them having the genotype $A$ and the other half having the genotype $a$, it is very unlikely that more than 60% of all individuals that get killed are of the same genotype. Therefore, the meteorites won't change the genotype frequencies much. If you look at a population of 4 individuals, 2 of which are $a$ and 2 are $A$, and 2 of them get killed by a meteorite, then you have a probability of one half that the two survivors are of the same genotype and that the genotype frequencies will have changed drastically. Genetic drift refers to these changes in allele frequency which are due to random events (such as meteorites), and the strength of genetic drift indeed depends on the population size for probabilistic reasons. The greater the population size, the lower the strength (or relative importance) of genetic drift. How to model genetic drift There are three famous models of genetic drift that all lead to very similar expectations. I shall just name them here, but I will not develop the underlying mathematics. Wright-Fisher model of genetic drift Not to be confused with the Wright-Fisher model of selection written above. It models genetic drift as a random sample of the previous generation and hence uses a binomial distribution. Moran model It is based on a birth-death model (a type of Markov model). Kimura's diffusion equation model It is an extension of the above two models to the case of continuous time. Genetic drift refers to changes in allele frequencies that are due to random sampling effects, and not selection. If you sample alleles from a finite population (e.g. caused by the fact that only some individuals in the population reproduce each year), the resulting frequencies will deviate from the original frequencies due to random chance. If your sample is large (a large population) this deviation will be small, and if your sample is small the effect will be substantial.
This is exactly the same process as flipping a coin. If you flip a coin 10 times there is a large probability that the outcome will deviate a lot from a 50% heads/tails split (the outcome will follow the binomial distribution). This is analogous to the deviation in allele frequency due to genetic drift. If you flip the coin 10000 times the outcome will deviate little from the expected 50% distribution of heads/tails. Therefore the effect is described as inversely proportional. There exist several models of genetic drift, but one way to look at it is through how the variance in allele frequency changes over time, which can be described by: $$ V_t \approx pq(1-e^{-t/2N_e}) $$ where $p$ and $q$ are the allele frequencies and $N_e$ is the effective population size. Alternatively, the change in heterozygosity ($H$) over time due to drift is described by (Crow & Kimura, 1970): $$ H_t = H_{t_0}\left( 1-\frac{1}{2N_e} \right)^t$$ In both these functions, the inverse relationship between genetic drift and population size is seen clearly (population size appears in a denominator in both equations). The concept of genetic drift is closely related to the founder effect, but in that case, the sampling is only done once, when a small subpopulation of individuals establishes a new population.
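The coin-flip analogy is easy to simulate directly (a sketch; the population sizes, starting frequency, and replicate count are arbitrary choices):

```python
import random
import statistics

random.seed(0)

def drift_sd(N, p0=0.5, reps=1000):
    """SD of allele frequency after one generation of Wright-Fisher
    sampling: draw 2N allele copies from a population at frequency p0."""
    freqs = []
    for _ in range(reps):
        k = sum(random.random() < p0 for _ in range(2 * N))
        freqs.append(k / (2 * N))
    return statistics.pstdev(freqs)

# theory predicts sd = sqrt(p*q / (2N)): drift weakens as N grows
sd_small = drift_sd(10)     # expected near sqrt(0.25/20)  = 0.112
sd_big = drift_sd(1000)     # expected near sqrt(0.25/2000) = 0.011
assert sd_small > 3 * sd_big
```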
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
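For the particle question at the top: since $v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3)$, the particle moves left exactly where $v < 0$, namely on the interval $(1, 3)$. A quick numeric check of the sign pattern:

```python
def v(t):
    return 3 * t**2 - 12 * t + 9   # = 3*(t - 1)*(t - 3)

# sample the sign of v on a fine grid: v < 0 (moving left) only on (1, 3)
moving_left = [t / 100 for t in range(0, 501) if v(t / 100) < 0]
assert 1 < min(moving_left) and max(moving_left) < 3

# spot checks of the three sign regions
assert v(0.5) > 0 and v(2) < 0 and v(4) > 0
```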
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD)... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
I'm working on a masters thesis and need to calculate the mass inertia matrix ($M$), the Coriolis/centrifugal matrix ($C$), and the gravity vector ($G$) in the equation $M\ddot{\theta} + C\dot{\theta} + G = \tau$ (to get the dynamic model). I'm trying to implement PD control in MATLAB, but I'm unsure if I need to recalculate $M$, $C$, $G$ for every orientation of the robot as it moves. I assume so, but I don't want to code it and then find out I didn't need to. Please advise. You haven't included a diagram of your robot, so I don't know how big or complex these matrices are to calculate, but I would argue that you might not need to constantly recalculate the values if your controller isn't sensitive to the parameter changes. At the master's level, I would hope you have a good grasp on dynamics and kinematics and would be able to know what the "best" and "worst" case scenarios are for loading on your joint(s). For example, an arm that is fully extended would provide maximal torque to a shoulder joint, and an arm that is curled up would provide the minimal torque to the shoulder joint. Calculate your matrices at the nominal or average use case, then calculate your gains for that case. Then, calculate your matrices at the minimum and maximum use cases and compare performance with the nominal gains. This is referred to as a sensitivity analysis. If you can prove that there is a "negligible" difference in performance between the min/max cases when you use the nominal control gains, then you're in the clear. If you fail this test (if there is a "non-negligible" difference in performance) then you need to recalculate the system parameters as you go in order to maintain satisfactory performance. A couple notes: Be careful with PD control - the integral term I is what eliminates steady-state error. Derivative control is highly sensitive to noise.
If you're going to go through the effort of modelling the system ("plant"), then you may want to consider using a more advanced control mechanism like a state feedback controller. On that note, also be careful with state feedback controllers - they are for use with Linear, Time-Invariant (LTI) systems. LTI means that your model must be linear and the only thing that is allowed to change with respect to time is the state vector. I would suggest you look at LQR control for nonlinear systems. Finally, with regards to calculating the new matrices for every orientation, I would suggest you read more about Jacobian matrices. The Jacobian is basically a vector of vectors (matrix) that gives the mathematical expression for how a particular value changes when the underlying parameters change. It's a matrix of partial differentials, and you can use it to find how each of your control matrices changes when things like the joint angles change. A change of XX percent on a particular joint causes a YY percent change in your matrix, etc.
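As a sketch of the extended-vs-curled sensitivity check described above (a hypothetical two-link planar arm with unit masses and lengths; all parameter values are assumptions, not from the question):

```python
import math

# hypothetical 2-link planar arm: unit masses/lengths, COM at mid-link,
# uniform-rod link inertias
m1 = m2 = 1.0
l1 = l2 = 1.0
lc1 = lc2 = 0.5
I1 = I2 = m1 * l1 ** 2 / 12

def shoulder_inertia(q2):
    # M[0][0] of the standard 2R inertia matrix; it depends only on the
    # elbow angle q2, through the cos(q2) coupling term
    return (I1 + I2 + m1 * lc1 ** 2
            + m2 * (l1 ** 2 + lc2 ** 2 + 2 * l1 * lc2 * math.cos(q2)))

M_extended = shoulder_inertia(0.0)      # arm straight: worst case for the shoulder
M_curled = shoulder_inertia(math.pi)    # arm folded back: best case
assert M_extended > M_curled            # about 2.67 vs 0.67 here
```

If gains tuned at a nominal pose keep acceptable performance across this whole inertia range, there is no need to recompute $M$, $C$, $G$ at every control step.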
Weinberg 5.9.34 "[...] Using this together with Eq. 5.9.23 gives the general antisymmetric tensor field for massless particles of helicity ##\pm 1## in the form ##f_{\mu\nu} = \partial_{ [ \mu } a_{ \nu ] }##. Note that this is a tensor even though ##a_{\mu}## is not a 4-vector." Not a four vector? So the vector potential in the development that follows is not a vector, not Lorentz invariant, and most significantly, not generally covariant in this universe. If ##a## is not a vector in the construction of a Lagrangian, either the action is not a scalar or the charge-current density is not a tensor, or both. If we brush this under the carpet, a Lagrangian constructed to conserve charge is either not the Lagrangian of a conserved quantity (##dj \neq 0##) or the Lagrangian density is frame dependent, or both. Is this later resolved?