$$\alpha\,\mathrm{A}+\beta\,\mathrm{B}+\cdots\rightleftharpoons \rho\,\mathrm{R}+\sigma\,\mathrm{S}+\cdots$$ The equilibrium constant for this reaction can be calculated using the following formula: $$\mathrm K_c = \frac{{{[\mathrm{R}]}}^\rho {{[\mathrm{S}]}}^\sigma \cdots } {{{[\mathrm{A}]}}^\alpha {{[\mathrm{B}]}}^\beta \cdots}$$ In this formula for the equilibrium constant of a reaction, why are the concentrations of R and S (and likewise A and B) multiplied? I understand the equilibrium constant to be a ratio of the amounts of the products to the amounts of the reactants. But if that is the case, wouldn't a formula like $\mathrm K_c = \frac{[\mathrm{R}] + [\mathrm{S}] + \cdots }{[\mathrm{A}] + [\mathrm{B}] + \cdots}$ make more sense? I suspect this question really stems from not understanding the meaning of the equilibrium constant.
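To make the product form concrete, here is a small numeric sketch for a hypothetical reaction A + 2B ⇌ C with invented equilibrium concentrations; it simply evaluates the defining formula, and the squared [B] term shows where the stoichiometric coefficients enter as exponents rather than as sums:

```python
def equilibrium_constant(products, reactants):
    """products / reactants: lists of (concentration, stoichiometric coefficient)."""
    k = 1.0
    for conc, coeff in products:
        k *= conc ** coeff       # each product contributes [X]^coeff as a factor
    for conc, coeff in reactants:
        k /= conc ** coeff       # each reactant divides by [X]^coeff
    return k

# Hypothetical reaction A + 2B <=> C with invented equilibrium
# concentrations (mol/L): [A] = 0.50, [B] = 0.20, [C] = 0.40.
kc = equilibrium_constant(products=[(0.40, 1)],
                          reactants=[(0.50, 1), (0.20, 2)])
print(kc)  # 0.40 / (0.50 * 0.20**2) ≈ 20
```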
The root locus is a parametric plot. It shows how the closed-loop poles move in the complex plane as the gain $k$ varies. For every point on the plot, both the phase condition (1) and the amplitude condition (2) have to hold. $$G_0(s) = k \cdot \frac{\prod^q_{i=1}(s-s_{0i})}{\prod^n_{i=1}(s-s_{i})}$$ In your case, $s_{01} = s_{02} = 2$, $s_1 = 0$ and $s_2 = s_3 = 20$. There are a few rules for constructing the root locus by hand, after you have added the locations of the zeros and poles to the plot:

symmetry Everything is symmetric to the real axis.

start and end points Every line/curve/branch (whatever you want to call it) of the plot starts in a pole and ends in a zero or at infinity. You have $q=2$ zeros and $n=3$ poles. That means 1 branch will go to infinity and the other two will connect poles with zeros.

asymptotes There are $n-q = 1$ asymptotes that the branches converge towards for very large gains. The asymptotes have angles of $$\phi = \frac{\pi + l\cdot 2\pi}{n-q}$$ with $l=0, 1, ..., n-q-1$, and all asymptotes meet in the point $$s=\frac{\sum^n_{i=1}s_i - \sum^q_{i=1}s_{0i}}{n-q}$$ in your case $s=(0+20+20-2-2)/1=36$, which is admittedly only of real help if you have more than one asymptote. For the single asymptote of your system, that means $\phi=\pi$. From this rule you know that the one branch that goes to infinity will do so by approaching the asymptote at angle $\pi$, which means it goes off to the left of the plot.

real axis This is one of those rules where you think it might as well be part of a horoscope (at least I did), because it sounds as arbitrary and random as how much the constellation of planets at your birth has an influence on your life. Points of the real axis can be part of the plot. Those points that are part of the plot have an odd number of poles and zeros to their right. Wait, what? To work out the result for your plot, start from $+\infty$ on the real axis and go to $-\infty$.
The rightmost "thing" in your plot is the double pole at 20. Points of the real axis to the right of it have no poles or zeros to their right, so we conclude that the real axis to the right of 20 is not part of the plot. This means that the 2 branches coming out of the double pole at 20 cannot leave along the real axis. Going further to the left: between 2 and 20 there are 2 poles to the right (an even number), and between 0 and 2 there are 2 poles and 2 zeros to the right (4 in total, again even), so no point of the real axis from 0 to $+\infty$ is part of the plot. Note that this also means that the branches ending in the zeros will not arrive there along the real axis, because the real axis around 2 is not part of the plot. To the left of 0, you now have all 5 poles and zeros to the right (an odd number), which means every point of the real axis to the left of 0 is part of the plot.

real axis break-in/breakaway points The branches split or join at points where the following condition holds: $$\begin{align}\sum^q_{i=1}\frac{1}{s-s_{0i}} &= \sum^n_{i=1}\frac{1}{s-s_{i}}\\2 \cdot \frac{1}{s-2} &= 2 \cdot \frac{1}{s-20} + \frac{1}{s}\\\frac{2}{s-2} &= \frac{3s - 20}{s(s-20)}\\2s(s-20) &= (3s - 20)(s-2)\\2s^2-40s&=3s^2-26s+40\\0&=s^2+14s+40\end{align}$$ which gives $s_1 = -4$ and $s_2 = -10$. At these two points, there is a split or a join. Now which one is the split and which one is the join? Remember that this is a parametric plot and the branches grow as the value of $k$ grows. You start with what is essentially a split at 20 (the double pole as starting point), which means you first have to encounter a join in order to split again later and arrive at 2 (the double zero). We can conclude that the point with the lower value of $k$ is the join and the other is the split.
To calculate the value of $k$ at any given point, the amplitude condition (2) can be used: $$\begin{align}k_{-4} &= \frac{|(-4-20)^2(-4)|}{|(-4-2)^2|}\\&=\frac{|(-24)^2(-4)|}{|(-6)^2|}\\&=\frac{24^2 \cdot 4}{6^2}\end{align}$$ $$\begin{align}k_{-10} &= \frac{|(-10-20)^2(-10)|}{|(-10-2)^2|}\\&= \frac{|(-30)^2(-10)|}{|(-12)^2|}\\&= \frac{30^2 \cdot 10}{12^2}\end{align}$$ OK, which one is bigger? $$\begin{align}k_{-4} &\overset{?}{\lessgtr}k_{-10}\\\frac{24^2 \cdot 4}{6^2} &\lessgtr \frac{30^2 \cdot 10}{12^2}\\\frac{24^2 \cdot 4}{6^2} \cdot \frac{2^2}{2^2}&\lessgtr \frac{30^2 \cdot 10}{12^2}\\\frac{24^2 \cdot 4 \cdot 2^2}{12^2} &\lessgtr \frac{30^2 \cdot 10}{12^2}\\24^2 \cdot 4 \cdot 2^2&\lessgtr 30^2 \cdot 10\\3^2 \cdot 2^{10}&\lessgtr 5^3 \cdot 3^2 \cdot 2^3\\2^7&\lessgtr 5^3\\128 &> 125\\k_{-4} &> k_{-10}\end{align}$$ $k_{-4}$ is the split and $k_{-10}$ is the join. Of course you could check that more quickly with a calculator, but you said dungeons & dragons style.

(1) That is, $$\sum^q_{i=1}\Phi_{0i} - \sum^n_{i=1}\Phi_{i} = (2l+1)\pi$$ with $l = 0, 1, 2, ...$ and $\Phi_x$ being the angle between a horizontal line drawn to the right of the respective pole or zero and the point in question. This is just FYI.

(2) That is, $$\frac{|\prod^q_{i=1}(s-s_{0i})|}{|\prod^n_{i=1}(s-s_{i})|} = \frac{1}{|k|}$$
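The hand calculation above can be cross-checked numerically; a small sketch (numpy assumed available), computing the break points from $s^2+14s+40=0$ and the gains from the amplitude condition:

```python
# Break points solve sum 1/(s - s_0i) = sum 1/(s - s_i), which reduced
# above to s^2 + 14s + 40 = 0; the gain at a real point s follows from
# the amplitude condition |k| = prod|s - p_i| / prod|s - z_i|.
import numpy as np

zeros = [2.0, 2.0]           # double zero at s = 2
poles = [0.0, 20.0, 20.0]    # pole at 0, double pole at 20

break_points = np.roots([1, 14, 40])   # roots of s^2 + 14s + 40

def gain(s):
    num = np.prod([abs(s - p) for p in poles])
    den = np.prod([abs(s - z) for z in zeros])
    return num / den

k_split, k_join = gain(-4.0), gain(-10.0)
print(np.sort(np.real(break_points)), k_split, k_join)  # [-10. -4.], 64.0 > 62.5
```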
Abbreviation: Frm

Definition: A frame is a structure $\mathbf{A}=\langle A, \bigvee, \wedge, e, 0\rangle$ of type $\langle\infty, 2, 0, 0\rangle$ such that $\langle A, \bigvee, 0\rangle$ is a complete join-semilattice with $0=\bigvee\emptyset$, $\langle A, \wedge, e\rangle$ is a meet-semilattice with identity, and $\wedge$ distributes over arbitrary joins: $x\wedge(\bigvee Y)=\bigvee_{y\in Y}(x\wedge y)$.

Morphisms: Let $\mathbf{A}$ and $\mathbf{B}$ be frames. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(\bigvee X)=\bigvee h[X]$ for all $X\subseteq A$ (hence $h(0)=0$), $h(x \wedge y)=h(x) \wedge h(y)$, and $h(e)=e$.
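As a concrete illustration (my own example, not part of the page above): the powerset of a finite set, with union as the complete join, intersection as the meet, the full set as $e$ and the empty set as $0$, is a frame. The distributive law can be checked by brute force:

```python
# Finite sanity check: verify x ∧ (∨ Y) = ∨_{y in Y} (x ∧ y) on the
# powerset of {0, 1, 2}, for all two-element families Y and for the
# empty family (whose join is 0 = the empty set).
from itertools import combinations

base = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(base) + 1)
           for c in combinations(base, r)]

def join(sets):
    """Arbitrary join = union; the join of the empty family is the empty set."""
    out = frozenset()
    for s in sets:
        out |= s
    return out

law_holds = all(
    x & join([y1, y2]) == join([x & y1, x & y2])
    for x in subsets for y1 in subsets for y2 in subsets
) and all(x & join([]) == join([]) for x in subsets)
print(law_holds)  # True
```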
Question The angle between the two diagonals of a cube is (a) 30° (b) 45° (c) \[\cos^{- 1} \left( \frac{1}{\sqrt{3}} \right)\] (d) \[\cos^{- 1} \left( \frac{1}{3} \right)\]

Solution (d) \[\cos^{- 1} \left( \frac{1}{3} \right)\] Let $a$ be the length of an edge of the cube and let one corner be at the origin, as shown in the figure. Clearly, OP, AR, BS, and CQ are the diagonals of the cube. Consider the diagonals OP and AR. The direction ratios of OP and AR are proportional to $a-0, a-0, a-0$ and $0-a, a-0, a-0$, i.e. $a, a, a$ and $-a, a, a$, respectively. Let $\theta$ be the angle between OP and AR. Then, \[\cos \theta = \frac{a \times (- a) + a \times a + a \times a}{\sqrt{a^2 + a^2 + a^2}\sqrt{\left( - a \right)^2 + a^2 + a^2}}\] \[ \Rightarrow \cos \theta = \frac{- a^2 + a^2 + a^2}{\sqrt{3 a^2}\sqrt{3 a^2}} = \frac{a^2}{3a^2} = \frac{1}{3} \] \[ \Rightarrow \theta = \cos^{- 1} \left( \frac{1}{3} \right) \] Similarly, the angles between the other pairs of diagonals are equal to \(\cos^{- 1} \left( \frac{1}{3} \right)\), as the angle between any two diagonals of a cube is the same.
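The dot-product computation above can be sanity-checked numerically; a small sketch with numpy, taking edge length $a = 1$:

```python
# Direction vectors of the cube diagonals OP and AR (edge length a = 1):
# O(0,0,0) -> P(1,1,1) and A(1,0,0) -> R(0,1,1).
import numpy as np

op = np.array([1.0, 1.0, 1.0])
ar = np.array([-1.0, 1.0, 1.0])

cos_theta = op @ ar / (np.linalg.norm(op) * np.linalg.norm(ar))
print(cos_theta)  # ≈ 1/3
```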
Z-Scores The standard normal distribution is a normal distribution of standardized values called z-scores. A z-score is measured in units of the standard deviation. Definition: Z-Score If \(X\) is a normally distributed random variable and \(X \sim N(\mu, \sigma)\), then the z-score is: \[z = \dfrac{x-\mu}{\sigma} \label{zscore}\] The z-score tells you how many standard deviations the value \(x\) is above (to the right of) or below (to the left of) the mean, \(\mu\). Values of \(x\) that are larger than the mean have positive \(z\)-scores, and values of \(x\) that are smaller than the mean have negative \(z\)-scores. If \(x\) equals the mean, then \(x\) has a \(z\)-score of zero. For example, if the mean of a normal distribution is five and the standard deviation is two, the value 11 is three standard deviations above (or to the right of) the mean. The calculation is as follows: \[ \begin{align*} x &= \mu + (z)(\sigma) \\[5pt] &= 5 + (3)(2) = 11 \end{align*}\] The z-score is three. Since the mean for the standard normal distribution is zero and the standard deviation is one, the transformation in Equation \ref{zscore} produces the distribution \(Z \sim N(0, 1)\). The value \(x\) comes from a normal distribution with mean \(\mu\) and standard deviation \(\sigma\). Example \(\PageIndex{1}\) Suppose \(X \sim N(5, 6)\). This says that \(X\) is a normally distributed random variable with mean \(\mu = 5\) and standard deviation \(\sigma = 6\). Suppose \(x = 17\). Then (via Equation \ref{zscore}): \[z = \dfrac{x-\mu}{\sigma} = \dfrac{17-5}{6} = 2 \nonumber\] This means that \(x = 17\) is two standard deviations (2\(\sigma\)) above or to the right of the mean \(\mu = 5\). The standard deviation is \(\sigma = 6\). Notice that: \(5 + (2)(6) = 17\) (The pattern is \(\mu + z \sigma = x\)) Now suppose \(x = 1\). 
Then: \[z = \dfrac{x-\mu}{\sigma} = \dfrac{1-5}{6} = -0.67 \nonumber\] (rounded to two decimal places) This means that \(x = 1\) is \(0.67\) standard deviations (\(–0.67\sigma\)) below or to the left of the mean \(\mu = 5\). Notice that: \(5 + (–0.67)(6)\) is approximately equal to one (This has the pattern \(\mu + (–0.67)\sigma = 1\)) Summarizing, when \(z\) is positive, \(x\) is above or to the right of \(\mu\), and when \(z\) is negative, \(x\) is below or to the left of \(\mu\). Or, when \(z\) is positive, \(x\) is greater than \(\mu\), and when \(z\) is negative, \(x\) is less than \(\mu\). Exercise \(\PageIndex{1}\) What is the \(z\)-score of \(x\), when \(x = 1\) and \(X \sim N(12, 3)\)? Answer \(z = \dfrac{1-12}{3} \approx -3.67\) Example \(\PageIndex{2}\) Some doctors believe that a person can lose five pounds, on the average, in a month by reducing his or her fat intake and by exercising consistently. Suppose weight loss has a normal distribution. Let \(X =\) the amount of weight lost (in pounds) by a person in a month. Use a standard deviation of two pounds. \(X \sim N(5, 2)\). Fill in the blanks. a. Suppose a person lost ten pounds in a month. The \(z\)-score when \(x = 10\) pounds is \(z = 2.5\) (verify). This \(z\)-score tells you that \(x = 10\) is ________ standard deviations to the ________ (right or left) of the mean _____ (What is the mean?). b. Suppose a person gained three pounds (a negative weight loss). Then \(z =\) __________. This \(z\)-score tells you that \(x = -3\) is ________ standard deviations to the __________ (right or left) of the mean. Answers a. This \(z\)-score tells you that \(x = 10\) is 2.5 standard deviations to the right of the mean five. b. \(z = \dfrac{-3-5}{2} = -4\). This \(z\)-score tells you that \(x = -3\) is 4 standard deviations to the left of the mean five. Now suppose the random variables \(X\) and \(Y\) have the following normal distributions: \(X \sim N(5, 6)\) and \(Y \sim N(2, 1)\). If \(x = 17\), then \(z = 2\). (This was previously shown.) If \(y = 4\), what is \(z\)? 
\[z = \dfrac{y-\mu}{\sigma} = \dfrac{4-2}{1} = 2 \nonumber\] where \(\mu = 2\) and \(\sigma = 1\). The \(z\)-score for \(y = 4\) is \(z = 2\). This means that four is two standard deviations to the right of the mean. Therefore, \(x = 17\) and \(y = 4\) are both two (of their own) standard deviations to the right of their respective means. The z-score allows us to compare data that are scaled differently. To understand the concept, suppose \(X \sim N(5, 6)\) represents weight gains for one group of people who are trying to gain weight in a six week period and \(Y \sim N(2, 1)\) measures the same weight gain for a second group of people. A negative weight gain would be a weight loss. Since \(x = 17\) and \(y = 4\) are each two standard deviations to the right of their means, they represent the same standardized weight gain relative to their means. Exercise \(\PageIndex{2}\) Fill in the blanks. Jerome averages 16 points a game with a standard deviation of four points. \(X \sim N(16, 4)\). Suppose Jerome scores ten points in a game. The \(z\)–score when \(x = 10\) is \(-1.5\). This score tells you that \(x = 10\) is _____ standard deviations to the ______ (right or left) of the mean ______ (What is the mean?). Answer 1.5, left, 16 The Empirical Rule If \(X\) is a random variable and has a normal distribution with mean \(\mu\) and standard deviation \(\sigma\), then the Empirical Rule says the following: About 68% of the \(x\) values lie between –1\(\sigma\) and +1\(\sigma\) of the mean \(\mu\) (within one standard deviation of the mean). About 95% of the \(x\) values lie between –2\(\sigma\) and +2\(\sigma\) of the mean \(\mu\) (within two standard deviations of the mean). About 99.7% of the \(x\) values lie between –3\(\sigma\) and +3\(\sigma\) of the mean \(\mu\) (within three standard deviations of the mean). Notice that almost all the \(x\) values lie within three standard deviations of the mean. 
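The standardizations used in the examples above can be sketched in a few lines (illustrative only): $x = 17$ under $N(5, 6)$ and $y = 4$ under $N(2, 1)$ both standardize to $z = 2$.

```python
def z_score(value, mu, sigma):
    """How many standard deviations `value` lies above (+) or below (-) the mean."""
    return (value - mu) / sigma

zx = z_score(17, 5, 6)   # X ~ N(5, 6)
zy = z_score(4, 2, 1)    # Y ~ N(2, 1)
print(zx, zy)  # 2.0 2.0 — the same standardized position on different scales
```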
The \(z\)-scores for +1\(\sigma\) and –1\(\sigma\) are +1 and –1, respectively. The \(z\)-scores for +2\(\sigma\) and –2\(\sigma\) are +2 and –2, respectively. The \(z\)-scores for +3\(\sigma\) and –3\(\sigma\) are +3 and –3, respectively. The empirical rule is also known as the 68-95-99.7 rule. Figure \(\PageIndex{1}\) Example \(\PageIndex{3}\) The mean height of 15 to 18-year-old males from Chile from 2009 to 2010 was 170 cm with a standard deviation of 6.28 cm. Male heights are known to follow a normal distribution. Let \(X =\) the height of a 15 to 18-year-old male from Chile in 2009 to 2010. Then \(X \sim N(170, 6.28)\). a. Suppose a 15 to 18-year-old male from Chile was 168 cm tall from 2009 to 2010. The \(z\)-score when \(x = 168\) cm is \(z =\) _______. This \(z\)-score tells you that \(x = 168\) is ________ standard deviations to the ________ (right or left) of the mean _____ (What is the mean?). b. Suppose that the height of a 15 to 18-year-old male from Chile from 2009 to 2010 has a \(z\)-score of \(z = 1.27\). What is the male’s height? The \(z\)-score (\(z = 1.27\)) tells you that the male’s height is ________ standard deviations to the __________ (right or left) of the mean. Answers a. –0.32, 0.32, left, 170 b. 177.98, 1.27, right Exercise \(\PageIndex{3}\) Use the information in Example \(\PageIndex{3}\) to answer the following questions. a. Suppose a 15 to 18-year-old male from Chile was 176 cm tall from 2009 to 2010. The \(z\)-score when \(x = 176\) cm is \(z =\) _______. This \(z\)-score tells you that \(x = 176\) cm is ________ standard deviations to the ________ (right or left) of the mean _____ (What is the mean?). b. Suppose that the height of a 15 to 18-year-old male from Chile from 2009 to 2010 has a \(z\)-score of \(z = –2\). What is the male’s height? The \(z\)-score (\(z = –2\)) tells you that the male’s height is ________ standard deviations to the __________ (right or left) of the mean. Answer Solve the equation \(z = \dfrac{x-\mu}{\sigma}\) for \(x\). 
a. \(x = \mu+ (z)(\sigma)\); \(z = \dfrac{176-170}{6.28} \approx 0.96\). This z-score tells you that \(x = 176\) cm is 0.96 standard deviations to the right of the mean, 170 cm. b. Solve the equation \(z = \dfrac{x-\mu}{\sigma}\) for \(x\): \(x = \mu+ (z)(\sigma)\), so \(x = 170 + (–2)(6.28) = 157.44\) cm. The \(z\)-score (\(z = –2\)) tells you that the male’s height is two standard deviations to the left of the mean. Example \(\PageIndex{4}\) From 1984 to 1985, the mean height of 15 to 18-year-old males from Chile was 172.36 cm, and the standard deviation was 6.34 cm. Let \(Y =\) the height of 15 to 18-year-old males from 1984 to 1985. Then \(Y \sim N(172.36, 6.34)\). The mean height of 15 to 18-year-old males from Chile from 2009 to 2010 was 170 cm with a standard deviation of 6.28 cm. Male heights are known to follow a normal distribution. Let \(X =\) the height of a 15 to 18-year-old male from Chile in 2009 to 2010. Then \(X \sim N(170, 6.28)\). Find the z-scores for \(x = 160.58\) cm and \(y = 162.85\) cm. Interpret each \(z\)-score. What can you say about \(x = 160.58\) cm and \(y = 162.85\) cm? Answer The \(z\)-score (Equation \ref{zscore}) for \(x = 160.58\) is \(z = –1.5\). The \(z\)-score for \(y = 162.85\) is \(z = –1.5\). Both \(x = 160.58\) and \(y = 162.85\) deviate the same number of standard deviations from their respective means and in the same direction. Exercise \(\PageIndex{4}\) In 2012, 1,664,479 students took the SAT exam. The distribution of scores in the verbal section of the SAT had a mean \(\mu = 496\) and a standard deviation \(\sigma = 114\). Let \(X =\) a SAT exam verbal section score in 2012. Then \(X \sim N(496, 114)\). Find the \(z\)-scores for \(x_{1} = 325\) and \(x_{2} = 366.21\). Interpret each \(z\)-score. What can you say about \(x_{1} = 325\) and \(x_{2} = 366.21\)? Answer The z-score (Equation \ref{zscore}) for \(x_{1} = 325\) is \(z_{1} = –1.5\). The z-score (Equation \ref{zscore}) for \(x_{2} = 366.21\) is \(z_{2} = –1.14\). 
Student 2 scored closer to the mean than Student 1 and, since they both had negative \(z\)-scores, Student 2 had the better score. Example \(\PageIndex{5}\) Suppose \(x\) has a normal distribution with mean 50 and standard deviation 6. About 68% of the \(x\) values lie between \(-1\sigma = (-1)(6) = -6\) and \(1 \sigma = (1)(6) = 6\) of the mean 50. The values \(50 - 6 = 44\) and \(50 + 6 = 56\) are within one standard deviation of the mean 50. The \(z\)-scores are –1 and +1 for 44 and 56, respectively. About 95% of the \(x\) values lie between \(-2 \sigma = (–2)(6) = –12\) and \(2 \sigma = (2)(6) = 12\) of the mean 50. The values \(50 – 12 = 38\) and \(50 + 12 = 62\) are within two standard deviations of the mean 50. The \(z\)-scores are –2 and +2 for 38 and 62, respectively. About 99.7% of the \(x\) values lie between \(–3 \sigma = (–3)(6) = –18\) and \(3 \sigma = (3)(6) = 18\) of the mean 50. The values \(50 – 18 = 32\) and \(50 + 18 = 68\) are within three standard deviations of the mean 50. The \(z\)-scores are –3 and +3 for 32 and 68, respectively. Exercise \(\PageIndex{5}\) Suppose \(X\) has a normal distribution with mean 25 and standard deviation five. Between what values of \(x\) do 68% of the values lie? Answer Between 20 and 30. Example \(\PageIndex{6}\) From 1984 to 1985, the mean height of 15 to 18-year-old males from Chile was 172.36 cm, and the standard deviation was 6.34 cm. Let \(Y =\) the height of 15 to 18-year-old males in 1984 to 1985. Then \(Y \sim N(172.36, 6.34)\). a. About 68% of the \(y\) values lie between what two values? These values are ________________. The \(z\)-scores are ________________, respectively. b. About 95% of the \(y\) values lie between what two values? These values are ________________. The \(z\)-scores are ________________, respectively. c. About 99.7% of the \(y\) values lie between what two values? These values are ________________. The \(z\)-scores are ________________, respectively. 
Answer a. About 68% of the values lie between 166.02 and 178.70. The \(z\)-scores are –1 and 1. b. About 95% of the values lie between 159.68 and 185.04. The \(z\)-scores are –2 and 2. c. About 99.7% of the values lie between 153.34 and 191.38. The \(z\)-scores are –3 and 3. Exercise \(\PageIndex{6}\) The scores on a college entrance exam have an approximate normal distribution with mean \(\mu = 52\) points and a standard deviation \(\sigma = 11\) points. a. About 68% of the \(y\) values lie between what two values? These values are ________________. The \(z\)-scores are ________________, respectively. b. About 95% of the \(y\) values lie between what two values? These values are ________________. The \(z\)-scores are ________________, respectively. c. About 99.7% of the \(y\) values lie between what two values? These values are ________________. The \(z\)-scores are ________________, respectively. Answer a About 68% of the values lie between the values 41 and 63. The \(z\)-scores are –1 and 1, respectively. Answer b About 95% of the values lie between the values 30 and 74. The \(z\)-scores are –2 and 2, respectively. Answer c About 99.7% of the values lie between the values 19 and 85. The \(z\)-scores are –3 and 3, respectively. Summary A \(z\)-score is a standardized value. Its distribution is the standard normal, \(Z \sim N(0, 1)\). The mean of the \(z\)-scores is zero and the standard deviation is one. If \(z\) is the z-score for a value \(x\) from the normal distribution \(N(\mu, \sigma)\), then \(z\) tells you how many standard deviations \(x\) is above (greater than) or below (less than) \(\mu\). 
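The 68-95-99.7 percentages can also be checked by simulation; an illustrative sketch using the \(N(50, 6)\) distribution of Example 5 (the sample size and seed are arbitrary choices of mine):

```python
# Draw a large sample from N(mu, sigma) and measure the fraction of
# values within 1, 2 and 3 standard deviations of the mean.
import random

random.seed(0)
mu, sigma, n = 50.0, 6.0, 200_000
draws = [random.gauss(mu, sigma) for _ in range(n)]

fracs = {k: sum(abs(x - mu) <= k * sigma for x in draws) / n for k in (1, 2, 3)}
print(fracs)  # close to {1: 0.683, 2: 0.954, 3: 0.997}
```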
Formula Review \(Z \sim N(0, 1)\) \(z =\) a standardized value (\(z\)-score) mean = 0; standard deviation = 1 To find the \(k\)th percentile of \(X\) when the \(z\)-score is known: \(k = \mu + (z)\sigma\) \(z\)-score: \(z = \dfrac{x-\mu}{\sigma}\) \(Z =\) the random variable for z-scores; \(Z \sim N(0, 1)\) Glossary Standard Normal Distribution a continuous random variable (RV) \(X \sim N(0, 1)\); when \(X\) follows the standard normal distribution, it is often noted as \(Z \sim N(0, 1)\). \(z\)-score the linear transformation of the form \(z = \dfrac{x-\mu}{\sigma}\); if this transformation is applied to any normal distribution \(X \sim N(\mu, \sigma)\), the result is the standard normal distribution \(Z \sim N(0,1)\). If this transformation is applied to any specific value \(x\) of the RV with mean \(\mu\) and standard deviation \(\sigma\), the result is called the \(z\)-score of \(x\). The \(z\)-score allows us to compare data that are normally distributed but scaled differently. References “Blood Pressure of Males and Females.” StatCrunch, 2013. Available online at http://www.statcrunch.com/5.0/viewre...reportid=11960 (accessed May 14, 2013). “The Use of Epidemiological Tools in Conflict-affected populations: Open-access educational resources for policy-makers: Calculation of z-scores.” London School of Hygiene and Tropical Medicine, 2009. Available online at http://conflict.lshtm.ac.uk/page_125.htm (accessed May 14, 2013). “2012 College-Bound Seniors Total Group Profile Report.” CollegeBoard, 2012. Available online at http://media.collegeboard.com/digita...Group-2012.pdf (accessed May 14, 2013). “Digest of Education Statistics: ACT score average and standard deviations by sex and race/ethnicity and percentage of ACT test takers, by selected composite score ranges and planned fields of study: Selected years, 1995 through 2009.” National Center for Education Statistics. 
Available online at http://nces.ed.gov/programs/digest/d...s/dt09_147.asp (accessed May 14, 2013). Data from the San Jose Mercury News. Data from The World Almanac and Book of Facts. “List of stadiums by capacity.” Wikipedia. Available online at https://en.wikipedia.org/wiki/List_o...ms_by_capacity (accessed May 14, 2013). Data from the National Basketball Association. Available online at www.nba.com (accessed May 14, 2013). Contributors Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.
How to show the sequence $x_n = (1 + \frac{x}{n})^{n}$ is bounded above by $e^x$? Note: I'm not supposed to use any differentiation techniques if possible, since we technically "don't know" them yet. As can be deduced, I am trying to show that the sequence $x_n = (1 + \frac{x}{n})^{n}$ is convergent. I have to arrive at this conclusion using the monotone convergence theorem. So we are given by definition that $$e^{x} = \lim_{n \rightarrow \infty} \Bigg(1 + \frac{x}{n} \Bigg)^{n}$$ I think I figured out how to show the sequence is monotonically increasing. My problem is showing that it is bounded. So one idea I thought of was to apply the binomial theorem: $$\Bigg(1 + \frac{x}{n}\Bigg)^{n} = \sum_{k = 0}^{n} \binom{n}{k} \Bigg(\frac{x}{n}\Bigg)^{k}$$ and then, since this is a finite quantity, I would compare it to $e^x$: $$\Bigg(1 + \frac{x}{n}\Bigg)^{n} = \sum_{k = 0}^{n} \binom{n}{k} \Bigg(\frac{x}{n}\Bigg)^{k} < e^{x} = \lim_{n \rightarrow \infty} \Bigg(1 + \frac{x}{n} \Bigg)^{n} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots $$ But I can't seem to get the finite binomial expansion into a comparable form. Questions: 1) Is this a correct approach? 2) If it is, how can I rewrite the binomial expansion to work in my favor?
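Not a proof, of course, but a quick numeric sanity check of the claimed monotonicity and bound, for the sample value $x = 2$ (my own choice):

```python
# Check that (1 + x/n)^n increases with n and stays below e^x for x = 2.
import math

x = 2.0
terms = [(1 + x / n) ** n for n in range(1, 50)]

increasing = all(a < b for a, b in zip(terms, terms[1:]))
bounded = all(t < math.exp(x) for t in terms)
print(increasing, bounded)  # True True
```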
The Annals of Probability, Volume 43, Number 2 (2015), 639-681.

Critical two-point functions for long-range statistical-mechanical models in high dimensions

Abstract We consider long-range self-avoiding walk, percolation and the Ising model on $\mathbb{Z}^{d}$ that are defined by power-law decaying pair potentials of the form $D(x)\asymp|x|^{-d-\alpha}$ with $\alpha>0$. The upper-critical dimension $d_{\mathrm{c}}$ is $2(\alpha\wedge2)$ for self-avoiding walk and the Ising model, and $3(\alpha\wedge2)$ for percolation. Let $\alpha\ne2$ and assume certain heat-kernel bounds on the $n$-step distribution of the underlying random walk. We prove that, for $d>d_{\mathrm{c}}$ (and the spread-out parameter sufficiently large), the critical two-point function $G_{p_{\mathrm{c}}}(x)$ for each model is asymptotically $C|x|^{\alpha\wedge2-d}$, where the constant $C\in(0,\infty)$ is expressed in terms of the model-dependent lace-expansion coefficients and exhibits crossover between $\alpha<2$ and $\alpha>2$. We also provide a class of random walks that satisfy those heat-kernel bounds.

Article information First available in Project Euclid: 2 February 2015. Permanent link: https://projecteuclid.org/euclid.aop/1422885572. Digital Object Identifier: doi:10.1214/13-AOP843. Mathematical Reviews number (MathSciNet): MR3306002. Zentralblatt MATH identifier: 1342.60162.

Subjects Primary: 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]. 82B20: Lattice systems (Ising, dimer, Potts, etc.) and systems on graphs. 82B27: Critical phenomena. 82B41: Random walks, random surfaces, lattice animals, etc. [See also 60G50, 82C41]. 82B43: Percolation [See also 60K35].

Citation Chen, Lung-Chi; Sakai, Akira. Critical two-point functions for long-range statistical-mechanical models in high dimensions. Ann. Probab. 43 (2015), no. 2, 639-681. doi:10.1214/13-AOP843. https://projecteuclid.org/euclid.aop/1422885572
In general, a Hilbert-Schmidt operator $M$ is one for which \begin{equation} \sum_{i=1}^{\infty} \|M \varphi_{i}\|^2 \end{equation} is finite for any orthonormal system $\{\varphi_i: i\in \mathbb{N}\}$. Now, let $(M_k)$ be a sequence of Hilbert-Schmidt operators. We call it a uniform sequence of Hilbert-Schmidt operators if for every $\rho>0$ there is an $n \in \mathbb{N}$ such that \begin{equation} \sum_{i=n}^{\infty} \|M_k \varphi_{i}\|^2 \leq \rho \end{equation} for all $k \in \mathbb{N}$. Further remarks: If the observation operator $H$ depends on $k$, i.e. $H = H_k$, we need a sequence $\lambda_{j}^{\ast} > 0$ such that the singular values $\lambda_{j}^{(k)}$ of $H_k$ satisfy \begin{equation} \lambda_{j}^{(k)} \geq \lambda_{j}^{\ast} \end{equation} for all $j, k \in \mathbb{N}$.
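A finite-dimensional illustration may help build intuition (my own sketch, not from the text): for a matrix, the sum $\sum_i \|M\varphi_i\|^2$ does not depend on which orthonormal basis $\{\varphi_i\}$ is used, and equals the squared Frobenius (Hilbert-Schmidt) norm.

```python
# Compare sum_i ||M phi_i||^2 over the standard basis and over a random
# orthonormal basis obtained from a QR factorization.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))

standard = np.eye(5)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # columns form an orthonormal basis

hs_standard = sum(np.linalg.norm(M @ standard[:, i]) ** 2 for i in range(5))
hs_random = sum(np.linalg.norm(M @ Q[:, i]) ** 2 for i in range(5))
print(np.isclose(hs_standard, hs_random))  # True; both equal ||M||_F^2
```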
Find a Basis of the Subspace Spanned by Four Polynomials of Degree 3 or Less Problem 607 Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let \[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\] where \begin{align*} p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\ p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3. \end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Solution. (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. Let $B=\{1, x, x^2, x^3\}$ be the standard basis for $\calP_3$. With respect to the basis $B$, the coordinate vectors of the given polynomials are \begin{align*} \mathbf{v}_1:=[p_1(x)]_B&=\begin{bmatrix} 1 \\ 3 \\ 2 \\ -1 \end{bmatrix}, &\mathbf{v}_2:=[p_2(x)]_B=\begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \end{bmatrix}\\[6pt] \mathbf{v}_3:=[p_3(x)]_B&=\begin{bmatrix} 0 \\ 1 \\ 1 \\ -1 \end{bmatrix}, &\mathbf{v}_4:=[p_4(x)]_B=\begin{bmatrix} 3 \\ 8 \\ 0 \\ 8 \end{bmatrix}. \end{align*} Let $T=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ be the set of these coordinate vectors. We find a basis for $\Span(T)$ among the vectors in $T$ by the leading 1 method. We form the matrix whose column vectors are the vectors in $T$ and apply elementary row operations as follows. \begin{align*} \begin{bmatrix} 1 & 0 & 0 & 3 \\ 3 &1 & 1 & 8 \\ 2 & 0 & 1 & 0 \\ -1 & 1 & -1 & 8 \end{bmatrix} \xrightarrow{\substack{R_2-3R_1\\R_3-2R_1\\R_4+R_1}} \begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 &1 & 1 & -1 \\ 0 & 0 & 1 & -6 \\ 0 & 1 & -1 & 11 \end{bmatrix} \xrightarrow{R_4-R_2}\\[6pt] \begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 &1 & 1 & -1 \\ 0 & 0 & 1 & -6 \\ 0 & 0 & -2 & 12 \end{bmatrix} \xrightarrow{\substack{R_2-R_3\\R_4+2R_3}} \begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 &1 & 0 & 5 \\ 0 & 0 & 1 & -6 \\ 0 & 0 & 0 & 0 \end{bmatrix}. 
\end{align*} The first three columns of the reduced row echelon form matrix contain the leading 1’s. Thus, $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a basis for $\Span(T)$. It follows that $Q:=\{p_1(x), p_2(x), p_3(x)\}$ is a basis for $\Span(S)$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. Note that $p_4(x)$ is not in the basis $Q$. The fourth column of the matrix in reduced row echelon form of part (a) gives the coefficients of the linear combination: \[p_4(x)=3p_1(x)+5p_2(x)-6p_3(x).\] Thus, the coordinate vector of $p_4(x)$ with respect to the basis $Q$ is \[[p_4(x)]_Q=\begin{bmatrix} 3 \\ 5 \\ -6 \end{bmatrix}.\] Comment. This is one of the midterm 2 exam problems for Linear Algebra (Math 2568) in Autumn 2017. In part (b), some students stopped after obtaining the linear combination $p_4(x)=3p_1(x)+5p_2(x)-6p_3(x)$. You must read the problem carefully. You are asked to find the coordinate vector of $p_4(x)$ with respect to $Q$.
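The row reduction above can be cross-checked numerically; a sketch with numpy (not part of the original solution) verifying both the independence of $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ and the coordinates of $p_4(x)$:

```python
import numpy as np

# Coordinate vectors of p1..p4 in the standard basis {1, x, x^2, x^3}.
v1 = np.array([1, 3, 2, -1], dtype=float)
v2 = np.array([0, 1, 0, 1], dtype=float)
v3 = np.array([0, 1, 1, -1], dtype=float)
v4 = np.array([3, 8, 0, 8], dtype=float)

A = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(A)          # 3 => v1, v2, v3 are independent

# Coordinates of p4 in the basis Q solve A @ coords = v4 (exactly, here).
coords, *_ = np.linalg.lstsq(A, v4, rcond=None)
print(rank, coords)  # rank 3; coords ≈ [3, 5, -6]
```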
NOTE: This is a working document that is regularly being edited and added to. In the late 1950's and early 1960's, Wolfgang Paul and collaborators were developing novel methodologies for storing and manipulating ions through the use of alternating radio frequency electric fields. Their methodologies utilized quadrupolar fields, which by definition have potentials that scale with the square of x, y and z in a cartesian coordinate system: $$\phi(x,y,z) = A(λx^2 + σy^2 + γz^2) + C$$ Because of the Laplace equation/condition, the Laplacian of the potential must equal zero (at least in the absence of any charges): $$\nabla^2\phi(x,y,z) = 2A(λ + σ + γ) = 0$$ $$λ + σ + γ = 0$$ This condition can be satisfied in an infinite number of ways, with various values of the coefficients λ, σ and γ. One of the most well-known solutions utilizes the combination λ=1, σ=-1 and γ=0 to give a potential of: $$\phi(x,y) = A(x^2 - y^2) + C$$ This is the basis of the potential inside the now famous quadrupole mass filter (QMF). This potential is best generated via four rods of hyperbolic cross section arranged around a center axis. The "x electrodes" have the form: $$x^2 - y^2 = r_0^2$$ and the "y electrodes" have the form: $$y^2 - x^2 = r_0^2$$ In order to determine what the coefficients 'A' and 'C' are, we must consider the boundary conditions of the electrode surfaces themselves. For example, assume that there is a potential $\phi_0$ applied between the two rod sets, with $\frac{\phi_0}{2}$ applied to the x-electrodes and an opposite potential $\frac{-\phi_0}{2}$ applied to the y-electrodes.
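As a quick numerical aside, the chosen combination (λ=1, σ=-1, γ=0) can be checked by confirming that the resulting potential φ(x,y) = A(x² − y²) has a vanishing Laplacian everywhere. A minimal Python sketch; the value of A and the sample points are arbitrary illustrations:

```python
# Numerical check that phi(x, y) = A*(x^2 - y^2) satisfies Laplace's equation.
# The amplitude A and the sample points below are arbitrary illustrative values.

def phi(x, y, A=1.0):
    return A * (x**2 - y**2)

def laplacian(f, x, y, h=1e-3):
    """Central-difference approximation of d2f/dx2 + d2f/dy2 at (x, y)."""
    d2x = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    d2y = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return d2x + d2y

# The +2A from the x-term and the -2A from the y-term cancel at every point.
for (x, y) in [(0.1, 0.2), (0.5, -0.3), (1.0, 1.0)]:
    assert abs(laplacian(phi, x, y)) < 1e-6  # zero up to rounding error
```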
The equal and opposite voltages cancel out in the center of the rods, giving a value of C=0 in the potential: $$\phi(x,y) = A(x^2 - y^2)$$ The innermost point of the x-electrodes is located at $x=r_0$ and $y=0$, and has a potential of $\frac{\phi_0}{2}$: $$\phi(r_0,0) = \frac{\phi_0}{2} = Ax^2 = Ar_0^2$$ This gives a value of A: $$A = \frac{\phi_0}{2r_0^2}$$ A similar examination of the potential on the y-electrodes gives the same value for A. As such, the general potential in the QMF has the form: $$\phi(x,y) = \frac{\phi_0(x^2 - y^2)}{2r_0^2}$$ Now that we have determined the form of the potential as a function of x and y, we can consider what happens with the potentials that are usually applied to QMFs. These potentials have both a static DC component and a time varying RF component: $$\phi_0 = 2(U + VcosΩt)$$ Remember that $\phi_0$ is the difference in potential between the rods, which have equal and opposite potentials applied. This means that the DC component (2U) is split, such that one pair of rods has a DC component of +U and the other pair has -U. Likewise, the RF component (2VcosΩt) is split between the rods; this is accomplished by applying sinusoids of equal and opposite amplitude (V and -V) to the rod pairs. With all these considerations applied, one set of rods has the potential +(U + VcosΩt) applied while the other has the potential -(U + VcosΩt) applied. Ion Motion in a Quadrupolar Field Let's consider the motion of an ion in the x dimension, specifically with an ion on the x-axis (y=0).
The potential along this axis is: $$\phi(x,0) = \frac{\phi_0 x^2}{2r_0^2}$$ The electric field along the x-axis at this point is the negative derivative of the potential: $$E_x = -\frac{d\phi}{dx} = -\frac{\phi_0 x}{r_0^2}$$ The force acting on an ion of charge +e along this axis is determined by multiplying the electric field by the charge: $$F_x = eE_x = \frac{-e\phi_0 x}{r_0^2}$$ The classic equation F=ma can be applied here: $$F_x = ma = m\frac{d^2x}{dt^2} = \frac{-e\phi_0 x}{r_0^2}$$ Now we can insert the usual form of $\phi_0 = 2(U + VcosΩt)$: $$m\frac{d^2x}{dt^2} = \frac{-2e(U + VcosΩt) x}{r_0^2}$$ In preparation for what's to come, this equation is slightly rearranged: $$m\frac{d^2x}{dt^2} = -\left(\frac{2eU}{r_0^2} + \frac{2eVcosΩt}{r_0^2}\right) x$$ $$\frac{d^2x}{dt^2} + \left(\frac{2eU}{mr_0^2} + \frac{2eVcosΩt}{mr_0^2}\right) x = 0$$ Interestingly, this equation of ion motion has the form of the "Mathieu equation", which is: $$\frac{d^2u}{d\xi^2} + \left(a_u - 2q_u\cos 2\xi \right)u=0$$ In order to make the equation of ion motion fit the Mathieu equation format exactly, we make the following substitutions: $$u=x$$ $$\xi = \frac{\Omega t}{2}$$ $$a_x = \frac{8eU}{mr_0^2 \Omega^2}$$ $$q_x = \frac{-4eV}{mr_0^2 \Omega^2}$$ Similar considerations can be made for ion motion along the y-axis of the QMF, again providing an equation of motion that takes the form of a Mathieu equation with the values: $$a_y = \frac{-8eU}{mr_0^2 \Omega^2}$$ $$q_y = \frac{4eV}{mr_0^2 \Omega^2}$$ Since the equation of ion motion along the x-axis of the QMF fits the form of the Mathieu equation, we can utilize known characteristics of said equation to think about ion trajectories inside the mass filter. The solutions to the Mathieu equation can be characterized as either "bounded" or "unbounded", based on how their amplitudes evolve over time. There are only certain combinations of $a_u$ and $q_u$ that provide bounded/stable solutions.
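The Mathieu parameters above are easy to evaluate for a concrete singly charged ion. The sketch below uses only the formulas derived here; the instrument values (field radius, drive frequency, voltages) are illustrative assumptions, not numbers taken from this document:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def mathieu_aq(mz, U, V, r0, f):
    """Mathieu parameters (a_x, q_x) for a singly charged positive ion.

    mz : ion mass in Da (charge +e), U : DC voltage in V,
    V  : RF amplitude in V, r0 : field radius in m, f : drive frequency in Hz.
    Implements a_x = 8eU/(m r0^2 Omega^2) and q_x = -4eV/(m r0^2 Omega^2)
    from the derivation above.
    """
    m = mz * AMU
    omega = 2 * math.pi * f
    denom = m * r0**2 * omega**2
    return 8 * E_CHARGE * U / denom, -4 * E_CHARGE * V / denom

# Illustrative operating point (assumed values): r0 = 4 mm, 1 MHz drive,
# m/z 100 ion, U = 50 V DC, V = 300 V RF amplitude.
a_x, q_x = mathieu_aq(mz=100, U=50.0, V=300.0, r0=4e-3, f=1.0e6)
```

Note that both parameters scale as 1/m, which is why ions of different m/z spread out along a line through the origin of the (q, a) plane, as discussed below.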
As such, there are only certain values of $a_x$, $q_x$, $a_y$, and $q_y$ that provide stable ion trajectories. As ions fly down the long z-axis of a QMF, they must have stable trajectories in both the x and y dimensions so that they don't strike the metal rods and neutralize. When thinking about these aspects of ion trajectories, it is common to use a "stability diagram" that helps visualize the values of a and q that allow ions to be stable in both the x and y dimensions: Now let us consider how ions of different m/z values are arranged within the stability diagram. We begin by considering the case when only RF is applied, with no DC difference between the rods (U=0). Since there is no DC applied, the "a-values" for ions of any m/z are zero $(a_x=a_y=0)$. However, since the "q-values" of the ions are m/z-dependent, ions of different masses spread out along the a=0 axis, with the heaviest ions to the left and the lightest ions to the right. The amplitude of the RF voltage dictates how far to the right ions are pushed. Higher RF amplitudes increase the q-values of all ions, pushing ions to the right in the stability diagram. If the RF amplitude is increased enough, it eventually pushes ions out of the region of stability (with the lightest ions reaching instability first). Once a DC difference is applied between the rods, ions obtain a-values which are inversely proportional to their m/z. Depending on how far ions are pushed up the a-axis, they can exit the region of stability: In the right of the preceding figure, it is apparent that if appropriate RF and DC voltages are applied to the QMF's rods, then only a small subset of m/z values will reside within the region of stability. Notice how only one of our representative ions falls within the region of stability, falling just inside its apex. Meanwhile, ions of both lower and higher m/z values are outside of the stability region.
If such conditions were used on a QMF, the heavier and lighter ions would quickly become unstable and strike the rods, while the ions just inside the apex would be successfully transmitted through the filter thanks to their stable trajectories. This is the essence of how QMFs perform mass analysis. A beam of ions with various m/z values continuously enters the filter while the RF and DC voltages are ramped together. If the ratio of U and V is held constant, then ions are effectively pushed along a single line within the stability diagram, continually shifting up and to the right. If the U/V ratio is set properly, then ions will pass just under the apex of the stability diagram, ensuring good discrimination between nearby m/z values. If the U/V ratio is set too high, then ions simply never pass through the stability region and therefore aren't detected. Lastly, if the U/V ratio is set too low, then ions do not approach the apex of the stability boundary, meaning that a relatively wide range of m/z values are stable at any given time, causing mass resolution to degrade. Coming Soon
Mass Resolution
Pre/Post Filters
RF-only QMF
Triple Quadrupoles
Because the particle apparently doesn't exist, the most likely prediction for the 2016 dataset was that it disappears completely. It did. And even though CMS was against the publication of its new diphoton paper after it was for it ;-), we saw the paper with the new graphs in time, as I discussed in the previous blog post. I encourage all particle phenomenologists who have mostly completed a model explaining the \(750\GeV\) bump not to send it to the arXiv. Instead, submit it to the competing viXra.org – it should be an easy process – for you to have a nice, arXiv-like URL and for the viXra amateur scientists to have a nice company, competition, and perhaps inspiration. (Well, most of them won't get inspired because they believe that they are brighter than you are LOL.) Alternatively, you may just change the title and a few words, submit to arXiv and conferences, and pretend that your paper didn't depend on the diphoton excess. ;-) There isn't anything to be excited about in the diphoton channel now. The new largest current bumps \(620,900,1300\GeV\) of CMS are small and disjoint with the small but largest \(975\GeV\) bump that ATLAS will probably show tonight. Update 4pm: See the new ATLAS plots, a press release, and the paper. As the previous blog post mentioned, the highest new significance seen by the CMS is an excited quark (see Page 5, Figure 2) whose mass is almost exactly \(2.0\TeV\) and whose excess only appears in one bin of width of \(70\GeV\) or so. But locally, it's a cool 3.7-sigma excess, assuming a low \(f\sim 0.1\), which is a cubic coupling constant of the excited quarks to the SM gauge fields, and that still translates to a 2.84-sigma excess (see page 7) globally. Can you see an ATLAS talk on this channel? A related ATLAS paper based on the 2015 data sees nothing around \(2\TeV\). Just for the fun of it, imagine that ATLAS will announce the same 3.7-sigma (locally) excess at the mass \(2\TeV\) soon. It is unlikely.
But I still have the freedom to speculate. What would it imply? First, the combined local significance would be \(3.7\times \sqrt{2}\approx 5.2\) sigma. Even when you take care of the 30 bins to compute the global significance, it would be some 4.6 sigma. Not bad. Despite the disappointing experience with the \(750\GeV\) diphoton excess, people could start to write lots of papers attempting to explain this possible signal. What theories could you invent? Quarks could be composite (a bound state of several point-like particles). There may be preons and other kinds of substructure. But all these models are rather contrived and unnatural and they have problems with the right spectrum of particles, viable flavor-changing processes etc. If you search for an "excited quark" and "string theory", you will find e.g. this paper by Blumenhagen, Deser, and Lüst from 2010. Already on page 2, they tell you that a very natural explanation for an excited quark \(q^*\) or an excited gluon \(g^*\) is simply an excited string with the string scale equal to\[ M_{q^*,g^*} = M_s = \sqrt{\frac{1}{\alpha'}} = 2.0\TeV. \] For your convenience and excitement, I included the precise value of the string scale measured by the LHC. Now, string models with this accessibly low string scale can't be old-fashioned heterotic string models or any vacua explaining the Standard Model as closed strings. They have to be all about open strings – and these open strings have to be stuck on branes within a much larger compactification manifold. As sketched by Arkani-Hamed, Dimopoulos, and Dvali (ADD) in 1998 ("old large dimensions").
You may literally think that the quark state \(\ket{q}\) is an open string state of a low-lying vibration of an open string whose one end point is stuck at one D-brane stack, the other on another D-brane stack, and the whole open string – which doesn't want to grow too long because it costs energy – is therefore confined to the vicinity of the intersection of the two D-brane stacks. The \(2.0\TeV\) excited string would probably be the state similar to\[ \ket{q^*} = \alpha_{-1}^{\mu} \ket{q},\quad \mu\in\{x,y\} \] I've simply added the lowest non-trivial string oscillator Fourier mode of \(X^{x/y}(\sigma,\tau)\) because I wanted to keep all the internal quantum numbers as well as the statistics. But the addition of this \(\alpha^\mu_{-1}\) oscillator to an open string simply increases \(m^2\) by\[ \Delta (m^2) = \frac{1}{\alpha'}. \] Cool. So there could be additional excitations of these open strings. Their masses would be very close to \(\sqrt{n}\times 2.0\TeV\) for integer values of \(n\in\ZZ\). So the following one would be about \(2.8\TeV\). However, there could be new objects where \(n\) is a fractional number, a multiple of one-half or (because of the omnipresence of \(\ZZ_3\) orbifolds in simple enough orbifold compactifications) one-third or one-sixth. You have the homework to list all interesting, low enough values of \(\sqrt{p/6}\times 2.0\TeV\) for \(p\in\ZZ\). For example, the lightest \(p=1\) massive state could be \(\sqrt{1/6}\times 2.0\TeV\sim 816\GeV\). Different levels of this kind could contain different exotic states. With an increasing \(p\), the spacing between the levels shrinks. So if you build a \(100\TeV\) Mao collider, you may in principle probe the excited string states up to \(p\sim 1000\) or more. If one could know the list of all particle species at each level, he could probably extract the appropriate orbifold compactification (assuming it would be simple enough) uniquely. 
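The homework values \(\sqrt{p/6}\times 2.0\TeV\) are straightforward to tabulate. A short sketch, with no physics input beyond the \(2.0\TeV\) string scale quoted above:

```python
import math

M_S_TEV = 2.0  # string scale in TeV, as quoted in the text

def level_mass(p, denominator=6):
    """Mass in TeV of the level with m^2 = (p/denominator)/alpha'."""
    return math.sqrt(p / denominator) * M_S_TEV

# Tabulate the first dozen fractional levels.
levels = [(p, level_mass(p)) for p in range(1, 13)]
# p = 1 gives about 0.816 TeV, p = 6 recovers the 2.0 TeV state,
# and p = 12 gives about 2.83 TeV (the sqrt(2) level).
```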
Let me point out that the closed strings need left-moving and right-moving oscillators. The pair \(\alpha_{-1}^\kappa \tilde \alpha_{-1}^\lambda\) of creation oscillators adds \(4/ \alpha'\) to the closed string's \(m^2\), about \(16\TeV^2\). The first excited closed string sits at \(m\sim 4\TeV\). But the division by \(\sqrt{6}\) etc. may be plausible here, too. The \(2.0\TeV\) excited quark probably doesn't really exist. But even speculating about possible explanations of such conceivable discoveries from the LHC shows how well-motivated, predictive, and interesting the stringy explanations of such effects are relative to things you could think about if you knew nothing about string theory. Note that if \[ \alpha' = \frac{1}{(2\TeV)^2} \] and \(g_s\sim 0.1\), you get \[ G_4 \sim\frac{ g_s^2(\alpha')^4 }{V^6} \] which implies \(V\sim 10^6\sqrt{\alpha'}\) or so. Assuming six large dimensions, their common radius would be a bit longer than the radius of the proton. There might even be a (holographic style?) reason why the QCD scale is linked to the Kaluza-Klein scale. These possibilities seem unlikely but imagine how terribly far-reaching the consequences would be. The LHC could easily start to discover the detailed shape of extra dimensions and excited strings.
skills to develop To see a complete linear correlation and regression analysis, in a practical setting, as a cohesive whole In the preceding sections numerous concepts were introduced and illustrated, but the analysis was broken into disjoint pieces by sections. In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence. In general educators are convinced that, all other factors being equal, class attendance has a significant bearing on course performance. To investigate the relationship between attendance and performance, an education researcher selects for study a multiple section introductory statistics course at a large university. Instructors in the course agree to keep an accurate record of attendance throughout one semester. At the end of the semester \(26\) students are selected at random. For each student in the sample two measurements are taken: \(x\), the number of days the student was absent, and \(y\), the student’s score on the common final exam in the course. The data are summarized in Table \(\PageIndex{1}\).

Absences \(x\) | Score \(y\) | Absences \(x\) | Score \(y\)
2 | 76 | 4 | 41
7 | 29 | 5 | 63
2 | 96 | 4 | 88
7 | 63 | 0 | 98
2 | 79 | 1 | 99
7 | 71 | 0 | 89
0 | 88 | 1 | 96
0 | 92 | 3 | 90
6 | 55 | 1 | 90
6 | 70 | 3 | 68
2 | 80 | 1 | 84
2 | 75 | 3 | 80
1 | 63 | 1 | 78

A scatter plot of the data is given in Figure \(\PageIndex{1}\). There is a downward trend in the plot which indicates that on average students with more absences tend to do worse on the final examination. Figure \(\PageIndex{1}\): Plot of the Absence and Exam Score Pairs The trend observed in Figure \(\PageIndex{1}\) as well as the fairly constant width of the apparent band of points in the plot makes it reasonable to assume a relationship between \(x\) and \(y\) of the form \[y=\beta _1x+\beta _0+\varepsilon\] where \(β_1\) and \(β_0\) are unknown parameters and \(\varepsilon\) is a normal random variable with mean zero and unknown standard deviation \(\sigma\).
Note carefully that this model is being proposed for the population of all students taking this course, not just those taking it this semester, and certainly not just those in the sample. The numbers \(β_1\), \(β_0\), and \(\sigma\) are parameters relating to this large population. First we perform preliminary computations that will be needed later. The data are processed in Table \(\PageIndex{2}\).

\(x\) | \(y\) | \(x^2\) | \(xy\) | \(y^2\) | \(x\) | \(y\) | \(x^2\) | \(xy\) | \(y^2\)
2 | 76 | 4 | 152 | 5776 | 4 | 41 | 16 | 164 | 1681
7 | 29 | 49 | 203 | 841 | 5 | 63 | 25 | 315 | 3969
2 | 96 | 4 | 192 | 9216 | 4 | 88 | 16 | 352 | 7744
7 | 63 | 49 | 441 | 3969 | 0 | 98 | 0 | 0 | 9604
2 | 79 | 4 | 158 | 6241 | 1 | 99 | 1 | 99 | 9801
7 | 71 | 49 | 497 | 5041 | 0 | 89 | 0 | 0 | 7921
0 | 88 | 0 | 0 | 7744 | 1 | 96 | 1 | 96 | 9216
0 | 92 | 0 | 0 | 8464 | 3 | 90 | 9 | 270 | 8100
6 | 55 | 36 | 330 | 3025 | 1 | 90 | 1 | 90 | 8100
6 | 70 | 36 | 420 | 4900 | 3 | 68 | 9 | 204 | 4624
2 | 80 | 4 | 160 | 6400 | 1 | 84 | 1 | 84 | 7056
2 | 75 | 4 | 150 | 5625 | 3 | 80 | 9 | 240 | 6400
1 | 63 | 1 | 63 | 3969 | 1 | 78 | 1 | 78 | 6084

Adding up the numbers in each column in Table \(\PageIndex{2}\) gives \[\sum x=71,\quad \sum y=2001,\quad \sum x^2=329,\quad \sum xy=4758,\quad \sum y^2=161511\] Then \[SS_{xx}=\sum x^2-\frac{1}{n}\left ( \sum x \right )^2=329-\frac{1}{26}(71)^2=135.1153846\\ SS_{xy}=\sum xy-\frac{1}{n}\left ( \sum x \right )\left ( \sum y \right )=4758-\frac{1}{26}(71)(2001)=-706.2692308\\ SS_{yy}=\sum y^2-\frac{1}{n}\left ( \sum y \right )^2=161511-\frac{1}{26}(2001)^2=7510.961538\] and \[\bar{x}=\frac{\sum x}{n}=\frac{71}{26}=2.730769231\; \; and\; \; \bar{y}=\frac{\sum y}{n}=\frac{2001}{26}=76.96153846\] We begin the actual modelling by finding the least squares regression line, the line that best fits the data.
Its slope and \(y\)-intercept are \[\hat{\beta _1}=\frac{SS_{xy}}{SS_{xx}}=\frac{-706.2692308}{135.1153846}=-5.227156278\] \[\hat{\beta _0}=\bar{y}-\hat{\beta _1}\bar{x}=76.96153846-(-5.227156278)(2.730769231)=91.23569553\] Rounding these numbers to two decimal places, the least squares regression line for these data is \[\hat{y}=-5.23 x+91.24\] The goodness of fit of this line to the scatter plot, the sum of its squared errors, is \[SSE=SS_{yy}-\hat{\beta _1}SS_{xy}=7510.961538-(-5.227156278)(-706.2692308)=3819.181894\] This number is not particularly informative in itself, but we use it to compute the important statistic \[S_\varepsilon =\sqrt{\frac{SSE}{n-2}}=\sqrt{\frac{3819.181894}{24}}=12.11988495\] The statistic \(S_\varepsilon\) estimates the standard deviation \(\sigma\) of the normal random variable \(\varepsilon\) in the model. Its meaning is that among all students with the same number of absences, the standard deviation of their scores on the final exam is about \(12.1\) points. Such a large value on a \(100\)-point exam means that the final exam scores of each sub-population of students, based on the number of absences, are highly variable. The size and sign of the slope \(\hat{\beta _1}=-5.23\) indicate that, for every class missed, students tend to score about \(5.23\) fewer points on the final exam on average. Similarly for every two classes missed students tend to score on average \(2\times 5.23=10.46\) fewer points on the final exam, or about a letter grade worse on average. Since \(0\) is in the range of \(x\)-values in the data set, the \(y\)-intercept also has meaning in this problem. It is an estimate of the average grade on the final exam of all students who have perfect attendance. The predicted average of such students is \(\hat{\beta _0}=91.24\). Before we use the regression equation further, or perform other analyses, it would be a good idea to examine the utility of the linear regression model.
We can do this in two ways: 1) by computing the correlation coefficient \(r\) to see how strongly the number of absences \(x\) and the score \(y\) on the final exam are correlated, and 2) by testing the null hypothesis \(H_0: \beta _1=0\) (the slope of the population regression line is zero, so \(x\) is not a good predictor of \(y\)) against the natural alternative \(H_a: \beta _1<0\) (the slope of the population regression line is negative, so final exam scores \(y\) go down as absences \(x\) go up). The correlation coefficient \(r\) is \[r=\frac{SS_{xy}}{\sqrt{SS_{xx}SS_{yy}}}=\frac{-706.2692308}{\sqrt{(135.1153846)(7510.961538)}}=-0.7010840977\] a moderate negative correlation. Turning to the test of hypotheses, let us test at the commonly used \(5\%\) level of significance. The test is \[H_0: \beta _1=0\\ vs.\\ H_a: \beta _1<0\; \; @\; \; \alpha =0.05\] From Figure 7.1.6, with \(df=26-2=24\) degrees of freedom \(t_{0.05}=1.711\), so the rejection region is \((-\infty ,-1.711]\). The value of the standardized test statistic is \[t=\frac{\hat{\beta _1}-B_0}{S_\varepsilon /\sqrt{SS_{xx}}}=\frac{-5.227156278-0}{12.11988495/\sqrt{135.1153846}}=-5.013\] which falls in the rejection region. We reject \(H_0\) in favor of \(H_a\). The data provide sufficient evidence, at the \(5\%\) level of significance, to conclude that \(β_1\) is negative, meaning that as the number of absences increases average score on the final exam decreases. As already noted, the value \(\hat{\beta _1}=-5.23\) gives a point estimate of how much one additional absence is reflected in the average score on the final exam. For each additional absence the average drops by about \(5.23\) points. We can widen this point estimate to a confidence interval for \(β_1\). At the \(95\%\) confidence level, from Figure 7.1.6 with \(df=26-2=24\) degrees of freedom, \(t_{\alpha /2}=t_{0.025}=2.064\).
The \(95\%\) confidence interval for \(β_1\) based on our sample data is \[\hat{\beta _1}\pm t_{\alpha /2}\tfrac{S_\varepsilon }{\sqrt{SS_{xx}}}=-5.23\pm 2.064\tfrac{12.11988495}{\sqrt{135.1153846}}=-5.23\pm 2.15\] or \((-7.38,-3.08)\). We are \(95\%\) confident that, among all students who ever take this course, for each additional class missed the average score on the final exam goes down by between \(3.08\) and \(7.38\) points. If we restrict attention to the sub-population of all students who have exactly five absences, say, then using the least squares regression equation \(\hat{y}=-5.23x+91.24\) we estimate that the average score on the final exam for those students is \[\hat{y}=-5.23(5)+91.24=65.09\] This is also our best guess as to the score on the final exam of any particular student who is absent five times. A \(95\%\) confidence interval for the average score on the final exam for all students with five absences is \[\begin{align*} \hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} &= 65.09\pm (2.064)(12.11988495)\sqrt{\frac{1}{26}+\frac{(5-2.730769231)^2}{135.1153846}}\\ &= 65.09\pm 25.01544254\sqrt{0.0765727299}\\ &= 65.09\pm 6.92 \end{align*}\] which is the interval \((58.17,72.01)\). This confidence interval suggests that the true mean score on the final exam for all students who are absent from class exactly five times during the semester is likely to be between \(58.17\) and \(72.01\). If a particular student misses exactly five classes during the semester, his score on the final exam is predicted with \(95\%\) confidence to be in the interval \[\begin{align*} \hat{y_p}\pm t_{\alpha /2}S_\varepsilon \sqrt{1+\frac{1}{n}+\frac{(x_p-\bar{x})^2}{SS_{xx}}} &= 65.09\pm 25.01544254\sqrt{1.0765727299}\\ &= 65.09\pm 25.96 \end{align*}\] which is the interval \((39.13,91.05)\). This prediction interval suggests that this individual student’s final exam score is likely to be between \(39.13\) and \(91.05\). 
Whereas the \(95\%\) confidence interval for the average score of all students with five absences gave real information, this interval is so wide that it says practically nothing about what the individual student’s final exam score might be. This is an example of the dramatic effect that the presence of the extra summand \(1\) under the square root sign in the prediction interval can have. Finally, the proportion of the variability in the scores of students on the final exam that is explained by the linear relationship between that score and the number of absences is estimated by the coefficient of determination, \(r^2\). Since we have already computed \(r\) above we easily find that \[r^2=(-0.7010840977)^2=0.491518912\] or about \(49\%\). Thus although there is a significant correlation between attendance and performance on the final exam, and we can estimate with fair accuracy the average score of students who miss a certain number of classes, nevertheless less than half the total variation of the exam scores in the sample is explained by the number of absences. This should not come as a surprise, since there are many factors besides attendance that bear on student performance on exams.
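The hand computations of this section can be reproduced in a few lines. The sketch below re-derives the slope, intercept, SSE, and correlation coefficient from the raw data of Table \(\PageIndex{1}\):

```python
# Absence/score pairs (x, y) from Table 1.
data = [(2, 76), (7, 29), (2, 96), (7, 63), (2, 79), (7, 71), (0, 88),
        (0, 92), (6, 55), (6, 70), (2, 80), (2, 75), (1, 63), (4, 41),
        (5, 63), (4, 88), (0, 98), (1, 99), (0, 89), (1, 96), (3, 90),
        (1, 90), (3, 68), (1, 84), (3, 80), (1, 78)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
# Corrected sums of squares, as in the text.
ss_xx = sum(x * x for x, _ in data) - sx**2 / n
ss_yy = sum(y * y for _, y in data) - sy**2 / n
ss_xy = sum(x * y for x, y in data) - sx * sy / n

slope = ss_xy / ss_xx                  # beta_1 hat
intercept = sy / n - slope * (sx / n)  # beta_0 hat
sse = ss_yy - slope * ss_xy            # sum of squared errors
r = ss_xy / (ss_xx * ss_yy) ** 0.5     # correlation coefficient
```

The computed values agree with the worked ones above: slope about \(-5.23\), intercept about \(91.24\), \(r\approx -0.701\), and \(r^2\approx 0.49\).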
Two matrices $A$ and $B$ are similar if there exists a nonsingular (invertible) matrix $S$ such that\[S^{-1}BS=A.\] A matrix $A$ is diagonalizable if $A$ is similar to a diagonal matrix. Namely, $A$ is diagonalizable if there exist a nonsingular matrix $S$ and a diagonal matrix $D$ such that\[S^{-1}AS=D.\] Some useful facts are If $S$ and $T$ are invertible matrices, then we have\[(TS)^{-1}=S^{-1}T^{-1}.\](Note the order of the product.) A matrix is nonsingular if and only if its determinant is nonzero. Problem. Show that if a matrix $A$ is diagonalizable and a matrix $B$ is similar to $A$, then $B$ is also diagonalizable. Proof. Since the matrix $A$ is diagonalizable, there exist a nonsingular matrix $S$ and a diagonal matrix $D$ such that\[S^{-1}AS=D. \tag{*}\]Also, since $B$ is similar to $A$, there exists a nonsingular matrix $T$ such that\[T^{-1}BT=A. \tag{**}\] Inserting the expression of $A$ from (**) into the equality (*), we obtain\begin{align*}D&=S^{-1}(T^{-1}BT)S\\&=(S^{-1}T^{-1})B(TS)\\&=(TS)^{-1}B(TS) \tag{***}.\end{align*} Now let us put $U:=TS$. Then the matrix $U$ is nonsingular.(This is because we have\[\det(U)=\det(TS)=\det(T)\det(S)\neq 0\]since $T$ and $S$ are nonsingular matrices, hence their determinants are not zero.) Therefore from (***) we have\[D=U^{-1}BU,\]where $D$ is a diagonal matrix and $U$ is a nonsingular matrix.Thus $B$ is a diagonalizable matrix.
Electrostatic Potential and Capacitance Electric Potential Electric potential is the amount of work done in bringing a unit positive charge from infinity to a given point: \[V=\frac{W}{Q}\] Potential at a point at distance $R$ from a charge $Q$: \[V=\frac{1}{4\pi\epsilon_{0}}\cdot\frac{Q}{R}\] Electric potential is a scalar quantity. The relation between $V$ and $E$: \[E=-\frac{dV}{dr}\] \[V_{B}-V_{A}=-\int_{A}^{B} \overline{E}\cdot\overline{dr}\] If the potential at any point is zero, it is called a zero potential point. For unlike charges $Q_{1}$ and $-Q_{2}$ separated by a distance $d$, $P_{1}$ and $P_{2}$ are zero potential points as shown. At $P_{1}$: \[\frac{Q_{1}}{x}=\frac{Q_{2}}{d-x}\] At $P_{2}$: \[\frac{Q_{1}}{y}=\frac{Q_{2}}{d+y}\]
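Solving the two zero-potential conditions for the distances gives $x = Q_1 d/(Q_1+Q_2)$ between the charges and $y = Q_1 d/(Q_2-Q_1)$ outside them (the external point exists only when $Q_2 > Q_1$). A quick numeric check of both conditions, with arbitrary illustrative charges:

```python
def internal_zero(q1, q2, d):
    """Distance x from Q1 (toward -Q2) where V = 0, from Q1/x = Q2/(d - x)."""
    return q1 * d / (q1 + q2)

def external_zero(q1, q2, d):
    """Distance y from Q1 (away from -Q2) where V = 0, from Q1/y = Q2/(d + y).

    Exists only when Q2 > Q1."""
    if q2 <= q1:
        raise ValueError("external zero-potential point requires Q2 > Q1")
    return q1 * d / (q2 - q1)

def potential(q1, q2, r1, r2, k=8.9875517923e9):
    """Potential of +Q1 at distance r1 and -Q2 at distance r2 (SI units)."""
    return k * (q1 / r1 - q2 / r2)

# Illustrative values: Q1 = 1 nC, Q2 = 2 nC, separation d = 3 m.
q1, q2, d = 1e-9, 2e-9, 3.0
x = internal_zero(q1, q2, d)   # between the charges
y = external_zero(q1, q2, d)   # on the far side of Q1
assert abs(potential(q1, q2, x, d - x)) < 1e-9
assert abs(potential(q1, q2, y, d + y)) < 1e-9
```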
I'm trying to catch up on Algebraic Geometry for a potential Masters project. I'm just now trying to get my head around determining the projective closure, and wanted to check my reasoning. So in the exercise underway at the moment I've been given a set of zeroes $Z(yt-x^2, xy-zt, y^2-xz)$ in $\mathbb{P}^3$ - the question being to show that this is, in fact, the Zariski closure of the zeroes $U$ of the affine twisted cubic in $\mathbb{A}^3$. I can show that if you identify $\mathbb{A}^3$ with the affine chart for $\mathbb{P}^3$ at $t \neq 0$ then you get the right zeroes, so $U \subset Z$. I'm trying to follow the argument for why $Z$ is $\bar{U}$. So where I've got to is that it's to do with the points in $Z$ that aren't necessarily in $U$ - which will be any 'points at infinity' that aren't in the affine chart (i.e. not $(0:0:0:1)$) but happen to be in $Z$, right? Of which there is only one - $(0:0:1:0)$ (all the others don't give zeroes). Here I reach a logical block that I can't think around. So the projective closure of $U$ is $U \cup \{\infty\}$ where ${\infty}$ is that particular point. Do I just rely on the fact that a single point is closed in $\mathbb{P}^3$ and $U$ is closed in $\mathbb{A}^3$ and so as a union of closed sets $Z$ is closed? That feels wrong - like I shouldn't be invoking closures in two different topological spaces. I have some notes that point in this direction but they're a bit terse and ambiguous.
I don't understand how to do this, considering the contrapositive: if $x$ is rational then $x^3+3x+3$ is rational. For example, the cube root of $2$ is irrational, but I am trying to prove that it is rational. Suffices to show the following: Suppose $x$ is rational. Then $f(x)=x^3+3x+3$ is also rational. We claim that $A(x)=x^3$ is rational if $x$ is rational. Indeed, the set of rationals is closed under multiplication, and $x^3 = x \times x \times x$. Likewise, as 3 is rational, it follows that $B(x) = 3x$ is rational if $x$ is rational. However, for each $x$ we note that $f(x)=A(x)+B(x)+3$. Then if $x$ is rational then $f(x)=A(x)+B(x)+3$ is the sum of 3 rational numbers $A(x),B(x)$ and 3. As the set of rationals is closed under addition, it follows that $f(x)$ is rational as well. A more pedestrian proof (following @Mike's first step): Suppose $x$ is rational, and the quotient of two integers $x = \frac{a}{b}.$ Then \begin{align} f(x) &= x^3 + 3x + 3 \\[8pt] &= \left(\frac{a}{b}\right)^3 + 3\frac{a}{b} + 3\frac{1}{1} \\[8pt] &= \frac{a^3}{b^3} + 3\frac{a}{b}\frac{b^2}{b^2} + 3\frac{1}{1}\frac{b^3}{b^3} \\[8pt] &= \frac{a^3}{b^3} + 3\frac{ab^2}{b^3} + \frac{3b^3}{b^3} \\[8pt] &= \frac{a^3 + 3ab^2 + 3b^3}{b^3} \end{align} which is a quotient of two integers (namely, $a^3 + 3ab^2 + 3b^3$ and $b^3$). Hence $f(x)$ is rational as well.
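The closure argument can be sanity-checked with exact rational arithmetic; the sketch below uses Python's `fractions` module to confirm both the closure claim and the explicit quotient formula.

```python
from fractions import Fraction

def f(x):
    return x**3 + 3 * x + 3

# With x = a/b, the pedestrian proof gives f(x) = (a^3 + 3ab^2 + 3b^3) / b^3.
x = Fraction(2, 3)  # a = 2, b = 3
assert f(x) == Fraction(2**3 + 3 * 2 * 3**2 + 3 * 3**3, 3**3)
assert isinstance(f(x), Fraction)  # the result is still an exact rational
```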
The Hessian matrix $\{\partial_i \partial_j f \}$ of a function $f:\mathbb{R}^n \to \mathbb{R}$ depends on the coordinate system you choose. If $x_1,\cdots,x_n$ and $y_1,\cdots,y_n$ are two sets of coordinates (say, in some open neighborhood of a manifold), then $\frac{\partial f(y(x))}{\partial x_i} = \sum_{k} \frac{\partial f}{\partial y_k} \frac{\partial y_k}{\partial x_i}$. Differentiating again, this time with respect to $x_j$, we get $\frac{\partial^2 f(y(x))}{\partial x_i \partial x_j} = \sum_{k} \sum_{l} \frac{\partial^2 f}{\partial y_k \partial y_l} \frac{\partial y_l}{\partial x_j} \frac{\partial y_k}{\partial x_i}+\sum_{k}\frac{\partial f(y(x))}{\partial y_k}\frac{\partial^2 y_k}{\partial x_i \partial x_j}$. At a critical point, the second term goes away, so we will consider such a case. In other words, if the derivative is a differential $1$-form, i.e. $\sum_{i} \frac{\partial f}{\partial x_i} dx_i$, a section of the cotangent bundle, then the second derivative should be $\sum_{k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} dy_k \otimes dy_l$. This makes sense since $dy_k=\sum_{i} \frac{\partial y_k}{\partial x_i} dx_i$ and $dy_l=\sum_{j} \frac{\partial y_l}{\partial x_j} dx_j$, meaning that $\sum_{k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} dy_k \otimes dy_l = \sum_{k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} \left(\sum_{i} \frac{\partial y_k}{\partial x_i} dx_i\right) \otimes \left(\sum_{j} \frac{\partial y_l}{\partial x_j} dx_j\right) = \sum_{i,j,k,l} \frac{\partial^2 f(y(x))}{\partial y_k \partial y_l} \frac{\partial y_k}{\partial x_i} \frac{\partial y_l}{\partial x_j}\, dx_i \otimes dx_j = \sum_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j}\, dx_i \otimes dx_j$ at a critical point, making it coordinate independent. Note that I did not use exterior powers, I used tensor powers, since I wanted to actually find a way to make sense of second derivatives, rather than having $d^2=0$.
This means the Hessian should be a rank $2$ tensor ((2,0) or (0,2), I can't remember which, but definitely not (1,1)). Does this make sense? Can we then express the third, etc., derivative as a tensor? More interestingly, how can this help us make sense of Taylor's formula? Can we come up with a coordinate-free Taylor series of a function at a point on a manifold? EDIT: And in general, if the first $n$ derivatives vanish, then the $(n+1)$-st derivative should be a rank $n+1$ tensor, right?
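None of this is in the original post, but the transformation law at a critical point can be checked numerically. Below is a sketch with a made-up example: $f(y) = y_1^2 + y_1 y_2 + 2y_2^2$ (critical point at $y=0$), a nonlinear coordinate change $y(x)$ with $y(0)=0$, and finite-difference Hessians; at the critical point the Hessian in $x$-coordinates equals $J^\top H_y J$, where $J$ is the Jacobian of the coordinate change, exactly as the tensor law predicts:

```python
import numpy as np

def f(y):
    # A function with a critical point at y = 0.
    return y[0]**2 + y[0]*y[1] + 2*y[1]**2

def y_of_x(x):
    # A nonlinear change of coordinates with y(0) = 0.
    return np.array([2*x[0] + x[1] + x[0]*x[1],
                     x[0] - x[1] + x[1]**2])

def hessian(g, x0, h=1e-3):
    # Central finite-difference Hessian of g at x0.
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i]*h, np.eye(n)[j]*h
            H[i, j] = (g(x0 + e_i + e_j) - g(x0 + e_i - e_j)
                       - g(x0 - e_i + e_j) + g(x0 - e_i - e_j)) / (4*h**2)
    return H

x0 = np.zeros(2)
H_x = hessian(lambda x: f(y_of_x(x)), x0)  # Hessian in x-coordinates
H_y = hessian(f, np.zeros(2))              # Hessian in y-coordinates
J = np.array([[2.0, 1.0], [1.0, -1.0]])    # Jacobian dy/dx at x = 0

# At the critical point the inhomogeneous (second-derivative-of-y) term
# vanishes, so the Hessian transforms as a rank-2 tensor: H_x = J^T H_y J.
print(np.allclose(H_x, J.T @ H_y @ J, atol=1e-4))  # True
```

Away from a critical point the same check fails, because the $\sum_k \frac{\partial f}{\partial y_k}\frac{\partial^2 y_k}{\partial x_i \partial x_j}$ term no longer vanishes.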
ISSN: 1556-1801 eISSN: 1556-181X Networks & Heterogeneous Media, March 2013, Volume 8, Issue 1. Special issue dedicated to Hiroshi Matano on the occasion of his 60th birthday: Part II Abstract: Professor Hiroshi Matano was born in Kyoto, Japan, on July 28th, 1952. He studied at Kyoto University, where he prepared his doctoral thesis under the supervision of Professor Masaya Yamaguti. He obtained his first academic position as a research associate at the University of Tokyo. He then moved to Hiroshima University in 1982 and came back to Tokyo in 1988. He has been a Professor at the Graduate School of Mathematical Sciences at the University of Tokyo since 1991. Abstract: We provide formal matched asymptotic expansions for ancient convex solutions to MCF. The formal analysis leading to the solutions is analogous to that for the generic MCF neck pinch in [1]. For any $p, q$ with $p+q=n$, $p\geq1$, $q\geq2$ we find a formal ancient solution which is a small perturbation of an ellipsoid. For $t\to-\infty$ the solution becomes increasingly astigmatic: $q$ of its major axes have length $\approx\sqrt{2(q-1)(-t)}$, while the other $p$ axes have length $\approx \sqrt{-2t\log(-t)}$. We conjecture that an analysis similar to that in [2] will lead to a rigorous construction of ancient solutions to MCF with the asymptotics described in this paper. Abstract: We analyze the evolution of multi-dimensional normal graphs over the unit sphere under volume preserving mean curvature flow and derive a non-linear partial differential equation in polar coordinates. Furthermore, we construct finite difference numerical schemes and present numerical results for the evolution of non-convex closed plane curves under this flow, to observe that they become convex very fast. Abstract: A reaction-diffusion equation with nonlinear boundary condition is considered in a two-dimensional infinite strip.
Existence of waves in the bistable case is proved by the Leray-Schauder method. Abstract: The deep quench obstacle problem $$ \textbf{(DQ)} \qquad \left\{ \begin{array}{l} \frac{\partial u}{\partial t}=\nabla \cdot M(u) \nabla w, \\ w + \epsilon^2 \Delta u + u \in \partial \Gamma(u), \end{array} \right. $$ for $(x,t) \in \Omega \times (0,T)$, models phase separation at low temperatures. In (DQ), $\epsilon>0,$ $\partial \Gamma(\cdot)$ is the sub-differential of the indicator function $I_{[-1,1]}(\cdot),$ and $u(x,t)$ should satisfy $\nu \cdot \nabla u=0$ on the ``free boundary'' where $u=\pm 1$. We shall assume that $u$ is sufficiently smooth to make these notions well-defined. The problem (DQ) corresponds to the zero temperature ``deep quench'' limit of the Cahn--Hilliard equation. We focus here on a degenerate variant of (DQ) in which $M(u)=1-u^2,$ as well as on a constant mobility non-degenerate variant in which $M(u)=1.$ Although historically more emphasis has been placed on models with non-degenerate mobilities, degenerate mobilities capture some of the underlying physics more accurately. In the present paper, a careful numerical study is undertaken, utilizing a variety of benchmarks as well as new upper bounds for coarsening, in order to clarify evolutionary properties and to explore the differences in the two variant models. Abstract: The insulin signaling pathway propagates a signal from receptors in the cell membrane to the nucleus via numerous molecules some of which are transported through the cell in a partially stochastic way. These different molecular species interact and eventually regulate the activity of the transcription factor FOXO, which is partly responsible for inhibiting the growth of organs.
It is postulated that FOXO partially governs the plasticity of organ growth with respect to insulin signalling, thereby preserving the full function of essential organs at the expense of growth of less crucial ones during starvation conditions. We present a mathematical model of this reacting and directionally-diffusing network of molecules and examine the predictions resulting from simulations. Abstract: This paper deals with the existence of traveling fronts for the reaction-diffusion equation: $$ \frac{\partial u}{\partial t} - \Delta u =h(u,y) \qquad t\in \mathbb{R}, \; x=(x_1,y)\in \mathbb{R}^N. $$ We first consider the case $h(u,y)=f(u)-\alpha g(y)u$ where $f$ is of KPP or bistable type and $\lim_{|y|\rightarrow +\infty}g(y)=+\infty$. This equation comes from a model in population dynamics in which there is spatial spreading as well as phenotypic mutation of a quantitative phenotypic trait that has a locally preferred value. The goal is to understand spreading and invasions in this heterogeneous context. We prove the existence of a threshold value $\alpha_0$ and of a nonzero asymptotic profile (a stationary limiting solution) $V(y)$ if and only if $\alpha<\alpha_0$. When this condition is met, we prove the existence of a traveling front. This allows us to completely identify the behavior of the solution of the parabolic problem in the KPP case. We also study here the case where $h(u,y)=f(u)$ for $|y|\leq L_1$ and $h(u,y) \approx - \alpha u$ for $|y|>L_2\geq L_1$. This equation provides a general framework for a model of cortical spreading depressions in the brain. We prove the existence of a traveling front if $L_1$ is large enough and the non-existence if $L_2$ is too small. Abstract: We consider a homogenization problem for the magnetic Ginzburg-Landau functional in domains with a large number of small holes.
We establish a scaling relation between sizes of holes and the magnitude of the external magnetic field when the multiple vortices pinned by holes appear in nested subdomains and their homogenized density is described by a hierarchy of variational problems. This stands in sharp contrast with homogeneous superconductors, where all vortices are known to be simple. The proof is based on the $\Gamma$-convergence approach applied to a coupled continuum/discrete variational problem: continuum in the induced magnetic field and discrete in the unknown finite (quantized) values of multiplicity of vortices pinned by holes. Abstract: We consider a simplified 1-dimensional PDE-model describing the effect of contact inhibition in growth processes of normal and abnormal cells. Varying the value of a significant parameter, numerical tests suggest two different types of contact inhibition between the cell populations: the two populations move with constant velocity and exhibit spatial segregation, or they stop moving and regions of coexistence are formed. In order to understand the different mechanisms, we prove that there exists a segregated traveling wave solution for a unique wave speed, and we present numerical results on the ``stability'' of the segregated waves. We conjecture the existence of a non-segregated standing wave for certain parameter values. Abstract: The aim of this paper is first to find interactions between compartments of hosts in the Ross-Macdonald Malaria transmission system. So, to make this association clearer, we introduce the concordance measure and then Kendall's tau and Spearman's rho. Moreover, since the population compartments are dependent, we compute their conditional distribution function using the Archimedean copula.
Secondly, we partition the vector population into several dependent parts, conditional on the fecundity and the transmission parameters, and we show that we can divide the vector population by using $p$-th quantiles and test the independence between the subpopulations of susceptibles and infecteds. Third, we calculate the $p$-th quantiles with the Poisson distribution. Fourth, we introduce the proportional risk model of Cox into the Ross-Macdonald model with the copula approach to find the relationship between survival functions of compartments. Abstract: In this note we analyze a spatially structured SI epidemic model with vertical transmission, a logistic effect on vital dynamics and a density dependent incidence. The dynamics of the underlying system of ordinary differential equations are first shown to exhibit an infinite number of heteroclinic orbits connecting the trivial equilibrium with an interior equilibrium. Our mathematical study of the corresponding reaction-diffusion system is concerned with travelling wave solutions. Based on a detailed study of the center-unstable manifold around the interior equilibrium, we are able to prove the existence of an infinite number of travelling wave solutions connecting the trivial equilibrium and the interior equilibrium. Abstract: We consider pulse-like localized solutions for reaction-diffusion systems on a half line and impose various boundary conditions at one end of it. It is shown that the movement of a pulse solution with the homogeneous Neumann boundary condition is completely opposite from that with the Dirichlet boundary condition. As general cases, Robin type boundary conditions are also considered. Introducing one parameter connecting the Neumann and the Dirichlet boundary conditions, we clarify the transition of motions of solutions with respect to boundary conditions.
Abstract: The primary visual cortex (V1) can be partitioned into fundamental domains or hypercolumns consisting of one set of orientation columns arranged around a singularity or ``pinwheel'' in the orientation preference map. A recent study on the specific problem of visual textures perception suggested that textures may be represented at the population level in the cortex as a second-order tensor, the structure tensor, within a hypercolumn. In this paper, we present a mathematical analysis of such interacting hypercolumns that takes into account the functional geometry of local and lateral connections. The geometry of the hypercolumn is identified with that of the Poincaré disk $\mathbb{D}$. Using the symmetry properties of the connections, we investigate the spontaneous formation of cortical activity patterns. These states are characterized by tuned responses in the feature space, which are doubly-periodically distributed across the cortex. Abstract: A stochastic modulation of the safety distance can reduce traffic jams. It is found that the effect of random modulation on congestive flow formation depends on the spatial correlation of the noise. Jam creation is suppressed for highly correlated noise. The results demonstrate the advantage of heterogeneous performance of the drivers in time as well as individually. This opens the possibility for the construction of technical tools to control traffic jam formation. Abstract: In this paper, we explain in simple PDE terms a famous result of Bramson about the logarithmic delay of the position of the solutions $u(t,x)$ of Fisher-KPP reaction-diffusion equations in $\mathbb{R}$, with respect to the position of the travelling front with minimal speed. Our proof is based on the comparison of $u$ to the solutions of linearized equations with Dirichlet boundary conditions at the position of the minimal front, with and without the logarithmic delay. 
Our analysis also yields the large-time convergence of the solutions $u$ along their level sets to the profile of the minimal travelling front. Abstract: The Gierer-Meinhardt system is a mathematical model describing the process of hydra regeneration. This system has a stationary solution with a stripe pattern on a rectangular domain, but numerical results suggest that such a stripe pattern is unstable. In [8], Kolokolnikov et al. proved the existence of a positive eigenvalue, which is called an unstable eigenvalue, for a stationary solution with a stripe pattern by the NLEP method, which implies the instability of the stripe pattern. In addition, the uniqueness of the unstable eigenvalue was shown under some technical assumptions in [8]. In this paper, we prove the existence and uniqueness of an unstable eigenvalue by using the SLEP method without any extra conditions. We also prove the existence of a single-spike solution in one dimension. Abstract: A reaction diffusion system with a distributed time delay is proposed for virus spread on bacteria immobilized on an agar-coated plate. A distributed delay explicitly accounts for a virus latent period of variable duration. The model allows the number of virus progeny released when an infected cell lyses to depend on the duration of the latent period. A unique spreading speed for virus infection is established and traveling wave solutions are shown to exist. Abstract: We adapt (ray-based) geometrical optics approaches to encompass the formal asymptotic analysis of front propagation in a Fisher-KPP equation with slowly varying spatial inhomogeneities. The wavespeed is shown to be selected by two distinct (and fully constructive) mechanisms, depending on whether the source term is an increasing or decreasing function of the spatial variable. Canonical inner problems, analogous to those arising in the geometrical theory of diffraction, are formulated to give refined expressions for the wavefront location.
Additional phenomena, notably the initiation of new fronts and the transitions that occur when the source term is a non-monotonic function of space, are shown to be amenable to the same asymptotic approaches. Abstract: It is well known that a competition-diffusion system has a one-dimensional traveling front. This paper studies traveling front solutions of pyramidal shapes in a competition-diffusion system in $\mathbb{R}^N$ with $N\geq 2$. By using a multi-scale method, we construct a suitable pair of a supersolution and a subsolution, and find a pyramidal traveling front solution between them. Abstract: In analogy to the analysis of minimal conditions for the formation of diffusion driven instabilities in the sense of Turing, in this paper minimal conditions for a class of kinetic equations with mass conservation are discussed, whose solutions show patterns with a characteristic wavelength. The related linearized systems are analyzed, and the minimal number of equations is derived, which is needed for specific patterns to occur.
In this letter, we introduce a novel sufficient condition for the stability of acyclic directed networks subject to time- and state-dependent disturbances. Such a condition, which essentially characterizes the behavior of the network in terms of the solutions of its unperturbed dynamics, is applied to address the problem of designing distributed control protocols for vehicle platooning. Specifically,... In a companion paper, the authors have proposed a new algorithm for the partial stochastic realization of vector discrete-time processes from finite covariance data, based on a nonlinear generalization of the classical Yule-Walker equations. In particular the algorithm provides solutions of the covariance matching problem for periodic ARMA models on a finite interval. In this letter, we provide a... We consider distributed control of double-integrator networks, where agents are subject to stochastic disturbances. We study performance of such networks in terms of coherence, defined through an $\mathcal {H}_{2}$ norm metric that represents the variance of nodal state fluctuations. Specifically, we address known performance limitations of the standard consensus protocol, which cause this variance...
We consider the problem of constructing control policies that are robust against distribution errors in the model parameters of Markov decision processes. The Wasserstein metric is used to model the ambiguity set of admissible distributions. We prove the existence and optimality of Markov policies and develop convex optimization-based tools to compute and analyze the policies. Our methods, which are... This letter aims for a simple and accessible explanation as to why oscillations naturally arise due to tradeoffs in feedback systems, and how these can be aggravated by delays and unstable poles and zeros. Such results have been standard for decades using frequency domain methods, which yield a rich variety of familiar “waterbed” tradeoffs. While almost trivial for control experts, frequency domain... A reduction approach on the discrete-time equivalent model of a nonlinear input delayed system is proposed to design a sampled-data stabilizing feedback. Global asymptotic stability of the feedback system is so achieved by solving the problem over the reduction state. Stabilization of the reduced dynamics is obtained through input-Lyapunov matching. Connections with prediction-based methods are established... The mathematical theory of nonlinear cooperative control relies heavily on notions from graph theory and passivity theory. A general analysis result is known about cooperative control of maximally equilibrium-independent systems, relating steady-states of the closed-loop system to network optimization theory. However, until now only analysis results have been proven, and there is no known synthesis... 
Given $\epsilon \in (0,1)$, a probability measure $\mu$ on $\Omega \subset \mathbb{R}^{p}$ and a semialgebraic set $K \subset X \times \Omega$, we consider the feasible set $X^{*}_{\epsilon} = \{x \in X : \mathrm{Prob}[(x,\omega) \in K] \geq 1-\epsilon\}$ associated with a chance-constraint. We provide... The idea of creating the Letters first came up in Spring 2015, during my term as President of the IEEE Control Systems Society (CSS). Even if the possibility of launching a CSS online journal devoted to brief papers had been discussed in the past, it was only recently that two important issues came up, demanding an immediate and effective response. Building upon recent insights on the effect of sensor mobility on the detection performance of mobile radiation sensors, this letter analyzes the impact of stochastic noise in the motion of the sensors on their decision-making accuracy. A stochastic optimal control law for the sensors is designed to maximize decision-making performance. Numerical simulations indicate that noisy sensor motion significantly... This letter addresses the consensus problem of multi-agent systems for a static undirected communication topology. It is known that for a static undirected graph, the convergence rate of the consensus protocol depends on the second smallest eigenvalue of the graph Laplacian. The fastest convergence rate can be achieved when the communication topology is given by a complete graph which is costly in... In this letter, we study a channel scheduling problem for a class of networked nonlinear control systems. The controller, sensors, and actuators of a group of dynamically decoupled nonlinear subsystems are connected through a digital communication channel, and due to the limited capacity, only one sensor and actuator can communicate with the controller at each time instant. To alleviate the negative...
This letter studies remote state estimation under denial-of-service (DoS) attacks. A sensor transmits its local estimate of an underlying physical process to a remote estimator via a wireless communication channel. A DoS attacker is capable of interfering with the channel and degrading the remote estimation accuracy. Considering the tactical jamming strategies played by the attacker, the sensor adjusts its... We revisit the linear programming approach to deterministic, continuous time, infinite horizon discounted optimal control problems. In the first part, we relax the original problem to an infinite-dimensional linear program over a measure space and prove equivalence of the two formulations under mild assumptions, significantly weaker than those found in the literature until now. The proof is based... This letter introduces an efficient first-order method based on the alternating direction method of multipliers (ADMM) to solve semidefinite programs arising from sum-of-squares (SOS) programming. We exploit the sparsity of the coefficient matching conditions when SOS programs are formulated in the usual monomial basis to reduce the computational cost of the ADMM algorithm. Each iteration of our algorithm... Battery short-term electrical impedance behavior varies between linear, linear time-varying, or nonlinear at different operating conditions. Data-based electrical impedance modeling techniques often model the battery as a linear time-invariant system at all operating conditions. In addition, these techniques require extensive and time consuming experimentation. Often due to sensor failures during... We present stability criteria for equilibria of a class of linear complementarity systems, subjected to discrete and distributed delay. We present necessary and sufficient conditions for local exponential stability, inferred from the spectrum location of a corresponding system of delay differential algebraic equations.
Subsequently, we obtain sufficient LMI-based conditions for global asymptotic stability... In this letter, we consider chance-constrained decision problems with a specific structure: on one hand, we assume that some prior information about the unknown parameters of the decision problem is known, in the form of samples; on the other hand, we assume that it is possible to gather further information regarding the true value of these parameters via measurements. We specialize the scenario approach... In this letter, we investigate a class of slow-fast systems for which the classical model order reduction technique based on singular perturbations does not apply due to the lack of a normally hyperbolic critical manifold. We show, however, that there exists a class of slow-fast systems that after a well-defined change of coordinates have a normally hyperbolic critical manifold. This allows the use...
So here is the last of this series explaining the expressions on Newton’s clock: We are now up to 8. Let’s look at\[ \mathop{\prod}\limits_{{k}{=}{0}}\limits^{1}{{(}{2}{k}{+}{2}{)}} \] This is another excellent example of how concise the words in maths can be. The symbol “𝚷” is the capital version of 𝜋 which corresponds to the English “P”. The “P” here stands for “product” which is the result of multiplying two or more numbers. The expression on the clock means: “Take the expression 2k + 2 and successively replace the k with the number at the bottom of the 𝚷 symbol (in this case “0”), evaluate the expression to get a number, then increment k by 1 and repeat until you reach the number at the top of the 𝚷 symbol (in this case “1”). Then multiply all these numbers together.” You can see why maths expressions are much more concise than English. So to evaluate this expression, we first replace the k with 0, then work out 2(0) + 2. This equals 2. Now increment the k by 1 to get 1, then work out 2(1) + 2. This equals 4. Since k is now at the number at the top of 𝚷, we are done increasing k. Now multiply these numbers together. 2 × 4 = 8 which is the correct number at this position on the clock. Most of you now know what\[ \sqrt{81} \] means. It means “what number multiplied by itself equals 81?” The answer, of course, is 9, as 9 × 9 = 81. The next hour is\[ {\log}_{2}1024 \] The basics of this expression have already been explained for position “2” on the clock. This expression is asking the question “what does the exponent of 2 have to be so that 2 raised to that exponent equals 1024?”. Hopefully, the answer is “10”, and it is, because $2^{10} = 1024$. Now let’s look at $B_{16}$. Remember when I explained position “7” on the clock: $0111_2$? That was a number in the base 2 system of counting. Another common base used with computers is the base 16 counting system. We are familiar with the base 10 counting system that has 10 symbols used to count with: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
The base 16 system needs 16 symbols. So what is used after 9 is reached? Well, we resort to the letters of the alphabet. The numbers up to 9 in base 16 correspond to the same numbers in base 10. The next number in base 16 is “A” which corresponds to 10 in base 10 and the next number is “B” which is 11 in base 10. So $B_{16} = 11$. Finally, the last expression:\[ \mathop{\sum}\limits_{{i}{=}{1}}\limits^{3}{{(}{3}{i}{-}{2}{)}} \] The Σ symbol is the Greek capital “sigma” and corresponds to the English “S”. This letter stands for “Sum” which is the addition of two or more numbers. This expression is just like the one in position “8” on the clock except that you add the resulting numbers together instead of multiplying them. So starting at i = 1, 3(1) – 2 = 1, 3(2) – 2 = 4, 3(3) – 2 = 7, and we are done as i now equals the number on top of the Σ. So now add these numbers together: 1 + 4 + 7 = 12. It is now high noon and that completes Newton’s clock.
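To wrap up the series, here is a quick check of the clock's last few expressions (my own sketch in Python, not part of the original post):

```python
import math

# Position 8: the product ∏_{k=0}^{1} (2k + 2) = 2 × 4
print(math.prod(2*k + 2 for k in range(0, 2)))  # 8

# Position 9: the square root of 81
print(math.isqrt(81))  # 9

# Position 10: log base 2 of 1024, since 2^10 = 1024
print(int(math.log2(1024)))  # 10

# Position 11: B in base 16 is 11 in base 10
print(int("B", 16))  # 11

# Position 12: the sum Σ_{i=1}^{3} (3i − 2) = 1 + 4 + 7
print(sum(3*i - 2 for i in range(1, 4)))  # 12
```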
Beloved community, Does the following series converge? $$\sum_{n=0}^\infty \left(\sqrt[n]{n} - \sqrt[n+1]{n+1}\right)$$ According to Wolfram Alpha, it does by the Comparison Test. However, after thinking about it long and hard, I still haven't found any series to compare it to. Many thanks in advance! :)
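Not part of the original question, but a quick numeric experiment hints at what is going on (note $n=0$ gives the undefined $\sqrt[0]{0}$, so the sum effectively starts at $n=1$): the series telescopes, so the $N$-th partial sum is exactly $1-\sqrt[N+1]{N+1}$, which tends to $0$:

```python
# Partial sums of sum_{n=1}^{N} (n^(1/n) - (n+1)^(1/(n+1)))
def partial_sum(N):
    return sum(n**(1/n) - (n+1)**(1/(n+1)) for n in range(1, N+1))

for N in (10, 1000, 100000):
    # Telescoping: the partial sum collapses to 1 - (N+1)^(1/(N+1)),
    # and (N+1)^(1/(N+1)) -> 1, so the series converges to 0.
    print(N, partial_sum(N), 1 - (N+1)**(1/(N+1)))
```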
I would like to apply the known version of the conjectural formula (11) on page 10 of the paper Number theory and dynamical Lefschetz trace formula. Disclaimer: I do not have a complete understanding of this formula, but I can get a sketch of it. I just know that both sides of the formula are not numbers but distributions. I also know the meaning of the ingredients of the formula. I wish to apply it to find an appropriate application in limit cycle theory. To start this way, I have three precise questions: Question 1: On the right hand side of formula (11), is it not necessary to assume that there are only a finite number of non-degenerate periodic orbits $\gamma$ (as they appear in the sum $\sum$ on the right side of the formula)? Is this implicitly included in the assumptions of that formula? I think that even the non-degeneracy assumption on periodic orbits does not easily imply the finiteness of such periodic orbits. Is it not possible that a sequence of non-degenerate periodic orbits accumulates on a non-periodic orbit which is a kind of complicated and strange attractor? Question 2: (This question is completely different from the previous one, but there is some motivation from this post and also this post Lifting a Quadratic System to a non Vanishing vector field on $S^3$.) A polynomial vector field on the plane gives us an analytic vector field $X$ on $S^2$. Put $\tilde{X}$ for the obvious lifting of $X$ to $S^2\times S^1$, with $\tilde{X}=X+\partial/\partial{\theta}$. Is there a quadratic polynomial vector field on $\mathbb{R}^2$ with Poincaré compactification $X$ such that $\tilde{X}$ on $S^2\times S^1$ does not admit a 2-dimensional transversal foliation which is invariant under the flow of $\tilde{X}$? The reason we consider quadratic systems: every quadratic system is a geodesible vector field, but this is not the case for higher-degree polynomial vector fields.
On the other hand, in dimension 2, if a vector field $X$ is geodesible then there is a transversal field $Y$ with $[X,Y] \parallel Y$; this implies that orbits of $Y$ are invariant under the flow of $X$. So this makes us a little hopeful that the product vector field $\tilde{X}=X+\partial/\partial \theta$ admits a transversal 2-dimensional foliation which is invariant under the flow of $\tilde{X}$. Existence of such a transversal foliation is the key condition in the paper we linked in the first lines of this post. The reason we consider $S^2\times S^1$ rather than $S^3$: the lifting of the simplest vector field $X=0$ to the Hopf vector field on $S^3$ does not admit a transversal foliation. Question 3: When we lift a vector field $X$ to a non-vanishing vector field $\tilde{X}$ on $S^3$, $S^2\times S^1$ or $T^1 S^2$, it is possible that the preimage of a closed orbit, which is an invariant torus, does not contain any closed orbit, so we lose our closed orbits; please see the comment by Sebastian Goette in this post. With the terminology of the linked paper, suppose we have a 3-manifold foliated by 2-dimensional leaves which is compatible with a flow $X$. Is there an analogue of formula (11) on page 10 of the paper linked in the first lines of this post whose right side depends on the invariant tori of $X$ as well as the closed orbits of $X$?
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite-dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where each $\rho_i$ is a finite-dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, but $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
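The division-box description at the top of this exchange is just the Euclidean algorithm; a minimal sketch (my own, not from the chat):

```python
def gcd(a, b):
    # Division box: a = b*q + r with 0 <= r < b.
    q, r = divmod(a, b)
    if r == 0:        # check for r = 0: b divides a, so gcd is b
        return b
    return gcd(b, r)  # otherwise feed (b, r) back into the box

print(gcd(252, 198))  # 18
```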
Well, assuming that the paper is all correct (or at least correct to a reasonable extent). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at the endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also in $w=-\frac{u'}{u}P$). @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, and Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ appears in the denominator.
ISSN: 1937-5093 eISSN: 1937-5077 Kinetic & Related Models June 2011, Volume 4, Issue 2 Issue on thermomechanics and phase change Guest Editors: Alain Miranville and Ulisse Stefanelli Abstract: A relativistic kinetic Fokker-Planck equation that has been recently proposed in the physical literature is studied. It is shown that, in contrast to other existing relativistic models, the one considered in this paper is invariant under Lorentz transformations in the absence of friction. A similar property (invariance by Galilean transformations in the absence of friction) is verified in the non-relativistic case. In the first part of the paper some fundamental mathematical properties of the relativistic Fokker-Planck equation are established. In particular, it is proved that the model is compatible with the finite propagation speed of particles in relativity. In the second part of the paper, two non-linear relativistic mean-field models are introduced. One is obtained by coupling the relativistic Fokker-Planck equation to the Maxwell equations of electrodynamics, and is therefore of interest in plasma physics. The other mean-field model couples the Fokker-Planck dynamics to a relativistic scalar theory of gravity (the Nordström theory) and is therefore of interest in gravitational physics. In both cases the existence of steady states for all possible prescribed values of the mass is established. In the gravitational case this result is better than for the corresponding non-relativistic model, the Vlasov-Poisson-Fokker-Planck system, for which existence of steady states is known only for small mass. Abstract: We study a kinetic mean-field equation for a system of particles with different sizes, in which particles are allowed to coagulate only if their sizes sum up to a prescribed time-dependent value.
We prove well-posedness of this model, study the existence of self-similar solutions, and analyze the large-time behavior mostly by numerical simulations. Depending on the parameter $k_0$, which controls the probability of coagulation, we observe two different scenarios: For $k_0>2$ there exist two self-similar solutions to the mean-field equation, of which one is unstable. In numerical simulations we observe that for all initial data the rescaled solutions converge to the stable self-similar solution. For $k_0<2$, however, no self-similar behavior occurs as the solutions converge in the original variables to a limit that depends strongly on the initial data. We prove rigorously a corresponding statement for $k_0\in (0,1/3)$. Simulations for the cross-over case $k_0=2$ are not completely conclusive, but indicate that, depending on the initial data, part of the mass evolves in a self-similar fashion whereas another part of the mass remains in the small particles. Abstract: In this work, we extend the micro-macro decomposition-based numerical schemes developed in [3] to the collisional Vlasov-Poisson model in the diffusion and high-field asymptotics. In doing so, we first write the Vlasov-Poisson model as a system that couples the macroscopic (equilibrium) part with the remainder part. A suitable discretization of this micro-macro model enables us to derive an asymptotic preserving scheme in the diffusion and high-field asymptotics. In addition, two main improvements are presented: on the one hand, a self-consistent electric field is introduced, which induces a specific discretization in the velocity direction, and represents a wide range of applications in plasma physics.
On the other hand, as suggested in [30], we introduce a suitable reformulation of the micro-macro scheme which leads to an asymptotic preserving scheme with the following property: it degenerates into an implicit scheme for the diffusion limit model when $\varepsilon\rightarrow 0$, which makes it free from the usual diffusion constraint $\Delta t=O(\Delta x^2)$ in all regimes. Numerical examples are used to demonstrate the efficiency and the applicability of the schemes for both regimes. Abstract: In this paper we take an idea presented in a recent paper by Carlen, Carvalho, Le Roux, Loss, and Villani ([3]) and push it one step forward to find an exact estimate of the entropy production. The new estimate essentially proves that Villani's conjecture is correct, or more precisely that a much worse bound on the entropy production is impossible in the general case. Abstract: We establish local-in-time validity of the Boltzmann equation in the presence of an external force deriving from a $C^2$ potential. Abstract: The kinetic flux vector splitting (KFVS) scheme, when used for quantum Euler equations, as was done by Yang et al. [22], requires the integration of the quantum Maxwellian (Bose-Einstein or Fermi-Dirac distribution), giving a numerical flux much more complicated than the classical counterpart. As a result, a nonlinear 2-by-2 system that connects the macroscopic quantities temperature and fugacity with density and internal energy needs to be inverted by iterative methods at every spatial point and every time step. In this paper, we propose to use a simple classical KFVS scheme for the quantum hydrodynamics based on the key observation that the quantum and classical Euler equations share the same form if the (quantum) internal energy rather than temperature is used in the flux.
This motivates us to use a classical Maxwellian, one that depends on the internal energy rather than the temperature, instead of the quantum one in the construction of the scheme, yielding a KFVS which is purely classical. This greatly simplifies the numerical algorithm and reduces the computational cost. The proposed schemes are tested on a 1-D shock tube problem for the Bose and Fermi gases in both classical and nearly degenerate regimes. Abstract: In this paper we focus on the initial value problem of the semi-linear plate equation with memory in multi-dimensions $(n\geq1)$, the decay structure of which is of regularity-loss type. By using Fourier transform and Laplace transform, we obtain the fundamental solutions and thus the solution to the corresponding linear problem. Appealing to the point-wise estimate in the Fourier space of solutions to the linear problem, we get estimates and properties of solution operators, by exploiting which decay estimates of solutions to the linear problem are obtained. Also, by introducing a set of time-weighted Sobolev spaces and using the contraction mapping theorem, we obtain the global-in-time existence and the optimal decay estimates of solutions to the semi-linear problem under a smallness assumption on the initial data. Abstract: We consider the classical Vlasov-Poisson system in three space dimensions in the electrostatic case. For smooth solutions starting from compactly supported initial data, an estimate on velocities is derived, showing an upper bound with a growth rate no larger than $(t\ln t)^{6/25}$. As a consequence, a decay estimate is obtained for the electric field in the $L^\infty$ norm. Abstract: The main concern of the present paper is to analyze a sheath formed near the surface of a material in contact with a plasma. Here, for the formation of the sheath, the Bohm criterion requires that the velocity of the positive ions be greater than a certain physical constant.
The behavior of positive ions in plasma is governed by the Euler-Poisson equations. Mathematically, the sheath is regarded as a special stationary solution. We first show that the Bohm criterion gives a sufficient condition for the existence of the stationary solution by using phase plane analysis. Then it is shown that the stationary solution is time-asymptotically stable provided that the initial perturbation is sufficiently small in a weighted Sobolev space. Moreover, we obtain the convergence rate of the time-global solution towards the stationary solution, subject to the decay rate of the initial perturbation. These theorems are proved by a weighted energy method.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let $E$ be a set of finite outer measure and $F$ a collection of closed, bounded intervals that cover $E$ in the sense of Vitali. Show that there is a countable disjoint collection $\{I_k\}_{k=1}^\infty$ of intervals in $F$ for which $m^*\left[E \sim \bigcup_{k=1}^{\infty}I_k\right] = 0$. This is the same as the Vitali Covering Lemma except that there, for each $\epsilon > 0$, $m^*\left[E \sim \bigcup_{k=1}^{\infty}I_k\right] < \epsilon$. In that proof, my book (Royden, 4th) uses Theorem 11: Let $E$ be any set of real numbers. Then each of the following four assertions is equivalent to the measurability of $E$. i) For each $\epsilon > 0$, there is an open set $O$ containing $E$ for which $m^*(O \sim E)<\epsilon$. ii) There is a $G_\delta$ set $G$ containing $E$ for which $m^*(G \sim E)=0$. iii), iv) etc. Would this proof boil down to i) -> ii)?
I'm trying to follow the proof in Wikipedia that the PNT is equivalent to the assertion $\psi(x)\sim x$, by proving that $\psi(x)\sim\pi(x)\log x$, which it claims is a very simple proof. One direction of inequality is an actual bound, $\psi(x)\le\pi(x)\log x$, but the other inequality has a fuzz factor: $$\psi(x) \ge \sum_{x^{1-\epsilon}\le p\le x} \log p\ge\sum_{x^{1-\epsilon}\le p\le x}(1-\epsilon)\log x=(1-\epsilon)(\pi(x)+O(x^{1-\epsilon}))\log x.$$ But this doesn't actually complete the proof, because we want $\psi(x)\ge(1-\epsilon)\pi(x)\log x$ without the fuzz factor. If we take large enough $x$ and use $\epsilon/2$ in the above equation we get $$\psi(x)\ge(1-\epsilon/2)\pi(x)\log x+Ax^{1-\epsilon/2}\log x,$$ so it is sufficient to prove that $Ax^{1-\epsilon/2}\le\frac{\epsilon}2\pi(x)$ for sufficiently large $x$, i.e. $x^{1-\epsilon/2}\in o(\pi(x)),$ and although I am sure there is a proof of this, it's not so simple that the proof can be completely omitted, at least as far as I can see. Is there an easy proof to be found here? The only one I am seeing is Chebyshev's weak version of the PNT, $\frac x{\log x}\in O(\pi(x))$, which takes some significant work to prove.
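For what it's worth, the two quantities in the claimed equivalence can be compared numerically; here is a rough sketch (helper names are mine) computing $\psi(x)$ and $\pi(x)\log x$ with a simple sieve:

```python
from math import log

def primes_upto(x):
    """Primes up to x via the sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, x + 1) if sieve[p]]

def psi(x):
    """Chebyshev psi: sum of log p over all prime powers p^k <= x."""
    total = 0.0
    for p in primes_upto(x):
        pk = p
        while pk <= x:
            total += log(p)
            pk *= p
    return total

x = 10_000
pi_x = len(primes_upto(x))
# psi(x) is close to x, and sits below the easy bound pi(x) * log(x)
print(psi(x), pi_x * log(x))
```

This only illustrates the asymptotics at one modest value of $x$, of course; it proves nothing about the error term.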
I tried to use the commands defined here: Can I change all math output to use monospaced text? to change the font in the math environment. However, the following code does not compile: \everymath{\mathtt{\xdef\tmp{\fam\the\fam\relax}\aftergroup\tmp}}\everydisplay{\mathtt{\xdef\tmp{\fam\the\fam\relax}\aftergroup\tmp}} \begin{align*} T = \{ \alpha : A \subseteq B,\\ \beta : B \subseteq C\\ \} \end{align*} It compiles correctly in normal math mode (without using "align" and line breaks), or without the commands to change the font. How can I modify the font used inside the align environment to typewriter? (Similar to: \newenvironment where all text is typewriter (like \texttt) , but in math mode) I would like to write something like: \begin{ttalign*} T = \{ \alpha : A \subseteq B,\\ \beta : B \subseteq C\\ \} \end{ttalign*} where the environment "ttalign" behaves likes "align", but with the typewriter font instead of the normal font.
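Not an answer to the \everymath question itself, but as a low-tech fallback for this particular display you can apply \mathtt to the Latin letters by hand (Greek letters and relation symbols are not affected by math alphabets anyway); this is only an illustration, and it compiles with plain amsmath:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
\mathtt{T} = \{\, & \alpha : \mathtt{A} \subseteq \mathtt{B},\\
                  & \beta : \mathtt{B} \subseteq \mathtt{C} \,\}
\end{align*}
\end{document}
```

The drawback is obvious: \mathtt must be repeated on every letter, so it does not scale to a real ttalign environment.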
Kinetic & Related Models September 2011, Volume 4, Issue 3 Issue on new trends in direct, inverse, and control problems for evolution equations Abstract: A Gaussian beam method is presented for the analysis of the energy of the high-frequency solution to the mixed problem of the scalar wave equation in an open and convex subset $\Omega$ of $\mathbb{R}^n$, with initial conditions compactly supported in $\Omega$, and Dirichlet or Neumann type boundary condition. The transport of the microlocal energy density along the broken bicharacteristic flow at the high-frequency limit is proved through the use of Wigner measures. Our approach consists first in computing explicitly the Wigner measures under an additional control of the initial data, allowing us to approximate the solution by a superposition of first-order Gaussian beams. The results are then generalized to standard initial conditions. Abstract: We study a particle model for a simple system of partial differential equations describing, in dimension $d\geq 2$, a two-component mixture where light particles move in a medium of absorbing, fixed obstacles; the system consists of a transport and a reaction equation coupled through pure absorption collision terms. We consider a particle system where the obstacles, of radius $\varepsilon$, become inactive at a rate related to the number of light particles travelling in their range of influence at a given time, and the light particles are instantaneously absorbed the first time they meet the physical boundary of an obstacle; elements belonging to the same species do not interact among themselves. We prove the convergence (a.s. w.r.t.
the product measure associated with the initial datum for the light particle component) of the densities describing the particle system to the solution of the system of partial differential equations in the asymptotics $ a_n^d n^{-\kappa}\to 0$ and $a_n^d \varepsilon^{\zeta}\to 0$, for $\kappa\in(0,\frac 12)$ and $\zeta\in (0,\frac12 - \frac 1{2d})$, where $a_n^{-1}$ is the effective range of the obstacles and $n$ is the total number of light particles. Abstract: A thermal plasma is studied, accounting for both impact ionization and an electromagnetic field. This plasma problem is modeled based on a system of Boltzmann-type transport equations. Electron-neutral collisions are assumed to be much more frequently elastic than inelastic, completing previous investigations of thermal plasmas [4]-[6]. A viscous hydrodynamic/diffusion limit is derived in two stages, doing a Hilbert expansion and using the Chapman-Enskog method. The resultant viscous fluid model is characterized by two temperatures and non-equilibrium ionization. Its diffusion coefficients depend on the magnetic field, and can be computed explicitly. Abstract: A non-self-similar change of coordinates provides improved matching asymptotics of the solutions of the fast diffusion equation for large times, compared to already known results, in the range for which Barenblatt solutions have a finite second moment. The method is based on relative entropy estimates and a time-dependent change of variables which is determined by second moments, and not by the scaling corresponding to the self-similar Barenblatt solutions, as is usually done. Abstract: Moment methods are classical approaches that approximate the mesoscopic radiative transfer equation by a system of macroscopic moment equations. An expansion in the angular variables transforms the original equation into a system of infinitely many moments. The truncation of this infinite system is the moment closure problem.
Many types of closures have been presented in the literature. In this note, we demonstrate that optimal prediction, an approach originally developed to approximate the mean solution of systems of nonlinear ordinary differential equations, can be used to derive moment closures. To that end, the formalism is generalized to systems of partial differential equations. Using Gaussian measures, existing linear closures can be re-derived, such as $P_N$, diffusion, and diffusion correction closures. This provides a new perspective on several approximations done in the process and gives rise to ideas for modifications to existing closures. Abstract: The Spitzer-Härm regime arising in plasma physics leads asymptotically to a nonlinear diffusion equation for the electron temperature. In this work we propose a hierarchy of models intended to retain more features of the underlying modeling based on kinetic equations. These models are of non-local type. Nevertheless, owing to energy discretization they can lead to coupled systems of diffusion equations. We make the connection between the different models precise and bring out some mathematical properties of the models. A numerical scheme is designed for the approximate models, and simulations validate the proposed approach. Abstract: The asymptotic limit of the nonlinear Schrödinger-Poisson system with general WKB initial data is studied in this paper. It is proved that the current, defined by the smooth solution of the nonlinear Schrödinger-Poisson system, converges to the strong solution of the incompressible Euler equations plus a term of fast singular oscillating gradient vector fields when both the Planck constant $\hbar$ and the Debye length $\lambda$ tend to zero. The proof involves homogenization techniques, theories of symmetric quasilinear hyperbolic systems and elliptic estimates, and the key point is to establish the uniformly bounded estimates with respect to both the Planck constant and the Debye length.
Abstract: Navier-Stokes equations for compressible quantum fluids, including the energy equation, are derived from a collisional Wigner equation, using the quantum entropy maximization method of Degond and Ringhofer. The viscous corrections are obtained from a Chapman-Enskog expansion around the quantum equilibrium distribution and correspond to the classical viscous stress tensor with particular viscosity coefficients depending on the particle density and temperature. The energy and entropy dissipations are computed and discussed. Numerical simulations of a one-dimensional tunneling diode show the stabilizing effect of the viscous correction and the impact of the relaxation terms on the current-voltage characteristics. Abstract: This paper studies a Boltzmann transport equation with several electron-phonon scattering mechanisms, which describes the charge transport in semiconductors. The electric field is coupled to the electron distribution function via Poisson's equation. Both the parabolic and the quasi-parabolic band approximations are considered. The steady state behaviour of the electron distribution function is investigated by a Monte Carlo algorithm. More precisely, several nonlinear functionals of the solution are calculated that quantify the deviation of the steady state from a Maxwellian distribution with respect to the wave-vector. On the one hand, the numerical results illustrate known theoretical statements about the steady state and indicate directions for further studies. On the other hand, the nonlinear functionals provide tools that can be used in the framework of Monte Carlo algorithms for detecting regions in which the steady state distribution has a relatively simple structure, thus providing a basis for domain decomposition methods.
I need to evaluate $\sum_{n = -\infty}^{\infty} J_0(\alpha n) z^{-n}$ in closed form, where $z$ is complex variable and $J_0()$ is the zeroth order Bessel function of the first kind. How do I evaluate this summation? This is not a full answer, but maybe it will get you somewhere. You can prove (or maybe already know) that in fact $J_0(-z)=J_0(z)$. So we can split up the sum as $$ \underset{n=-\infty}{\overset{\infty}{\sum}}J_0(an)z^{-n}=\underset{n=-\infty}{\overset{-1}{\sum}}J_0(an)z^{-n}+\underset{n=0}{\overset{\infty}{\sum}}J_0(an)z^{-n} = \underset{n=1}{\overset{\infty}{\sum}}J_0(an)z^{n}+\underset{n=0}{\overset{\infty}{\sum}}J_0(an)z^{-n}$$ $$ = J_0(0)+\underset{n=1}{\overset{\infty}{\sum}}J_0(an)(z^n+z^{-n}) = 1+\underset{n=1}{\overset{\infty}{\sum}}J_0(an)(z^n+z^{-n})$$ I am not sure if this will lead to something useful, but it may simplify things somewhat.
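One can at least check the rearrangement numerically on the unit circle; here is a quick sketch (the truncation level $N$, the value of $a$, and the series implementation of $J_0$ are my own choices):

```python
import cmath
from math import factorial

def J0(x, terms=40):
    """Zeroth-order Bessel function of the first kind via its power series."""
    return sum((-1) ** m * (x / 2) ** (2 * m) / factorial(m) ** 2
               for m in range(terms))

a = 0.7
z = cmath.exp(0.3j)          # a point on the unit circle
N = 25

# two-sided partial sum vs. the folded form using J0(-x) = J0(x)
two_sided = sum(J0(a * n) * z ** (-n) for n in range(-N, N + 1))
folded = 1 + sum(J0(a * n) * (z ** n + z ** (-n)) for n in range(1, N + 1))

print(abs(two_sided - folded))   # agreement to floating-point accuracy
```

This only checks the algebraic regrouping of symmetric partial sums; convergence of the full two-sided series on the unit circle is a separate question, since $J_0(an)$ decays only like $n^{-1/2}$.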
Here's an inelegant but straightforward, elementary solution. It begins with the case $n=1$. Generally, a non-central chi-squared distribution arises as the sum of squares of independent Normal distributions, each with unit variance, but with varying and possibly non-zero means. (When all means are zero, we obtain the usual chi-squared distribution.) For $n=1$ there's just one such Normal distribution involved, which is convenient to express as the sum of its mean $\mu$ and a standard Normal variable $X$. Thus, we are concerned with finding the variance of $Y=(X+\mu)^2$. By definition, this is $$\operatorname{Var}(Y) = E[Y^2] - E[Y]^2 = E[(X+\mu)^4] - E[(X+\mu)^2]^2.$$ We can read these expectations directly off the moment generating function for $X+\mu$, which is $$\psi_{X+\mu}(t) = e^{\mu t} \psi_X(t) = e^{\mu t} e^{t^2/2} = e^{\mu t + t^2/2}$$ as demonstrated at https://stats.stackexchange.com/a/176814/919, for instance. The Maclaurin series for $\psi_{X+\mu}$ is $$\psi_{X+\mu}(t) = 1 + \mu t + \frac{1+\mu^2}{2!}t^2 + \frac{3\mu+\mu^3}{3!}t^3 + \frac{3+6\mu^2+\mu^4}{4!}t^4 + \cdots.$$ The numerators of the fractions are the corresponding moments, whence $$E[(X+\mu)^4] = 3+6\mu^2+\mu^4$$ and $$E[(X+\mu)^2]^2 = (1 + \mu^2)^2,$$ yielding $$\operatorname{Var}(Y) = 3+6\mu^2+\mu^4 - (1 + \mu^2)^2 = 2(1 + 2\mu^2).\tag{1}$$ The hard part has been done, because the general case $n\ge 1$ reduces to this one. The underlying geometric idea is that we can rotate the coordinate system so that the first coordinate is parallel to the vector of means $(\mu_1,\mu_2,\ldots,\mu_n)$ and the other coordinates are orthogonal to it. Because a rotation is a linear transformation, the coordinates continue to have a jointly multinormal distribution. Because rotations are orthogonal, the new variables continue to have unit variances. All the "noncentrality," however, has been located in the first coordinate alone because the remaining variables are standard Normal (their means are all zero).
These contours represent the underlying multivariate Normal distributions. The non-central chi-squared distribution is that of their squared distance to the origin. The distributions are the same in both panels, because a rotation around the origin does not change the distances to it. However, only the first coordinate in the rotated (right hand) panel has a nonzero mean; the remaining coordinates have standard Normal distributions. I'll illustrate by doing the algebra for $n=2$. Let's index $Y$, $X$, and $\mu$ with $i=1,2$, where the $X_i$ are independent standard normal variables and $Y_i=\mu_i+X_i$. The variable $Y=Y_1^2 + Y_2^2$ has a noncentral chi-squared distribution with noncentrality parameter $\mu_1^2 + \mu_2^2$. Indeed, writing $\mu^2 = \mu_1^2 + \mu_2^2$ and assuming it is nonzero, we may perform a little algebra (completing the square) to obtain $$\eqalign{Y &= (\mu_1+X_1)^2 + (\mu_2+X_2)^2 \\&= (\mu_1^2+\mu_2^2) + 2\mu_1X_1 + 2\mu_2X_2 + X_1^2 + X_2^2\\&= \left(\mu + \frac{\mu_1}{\mu}X_1 + \frac{\mu_2}{\mu}X_2\right)^2 + \left(\frac{\mu_2}{\mu}X_1 - \frac{\mu_1}{\mu}X_2\right)^2 \\&=(\mu + Z_1)^2 + Z_2^2}$$ where $$Z_1 = \frac{\mu_1}{\mu}X_1 + \frac{\mu_2}{\mu}X_2$$ and $$Z_2 = \frac{\mu_2}{\mu}X_1 - \frac{\mu_1}{\mu}X_2.$$ The $Z_i$ are linear combinations of independent standard Normal variables. Their means therefore are $0$ and their variances are the sum of squares of the coefficients, both of which are $$\operatorname{Var}(Z_i) = \left(\pm\frac{\mu_1}{\mu}\right)^2 + \left(\frac{\mu_2}{\mu}\right)^2 = \frac{\mu_1^2 + \mu_2^2}{\mu^2}=1.$$ Moreover, a similar calculation shows the $Z_i$ are uncorrelated. Since they are jointly Normal, they must be independent. 
Accordingly, $$\operatorname{Var}(Y) = \operatorname{Var}((\mu + Z_1)^2) + \operatorname{Var}(Z_2^2).$$ Our previous formula $(1)$ applies directly to both terms (the noncentrality parameter in the second term is $0$), giving $$\operatorname{Var}(Y) = 2(1+2\mu^2) + 2(1 + 2(0^2)) = 2(2 + 2\mu^2).$$ The formula for general $n$ works out the same way (use induction or apply linear algebra), giving the general formula $$\operatorname{Var}(Y) = 2(n + 2\mu^2)$$ where now $\mu^2 = \mu_1^2 + \mu_2^2 + \cdots + \mu_n^2$ is the noncentrality parameter.
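A quick Monte Carlo sanity check of the final formula $\operatorname{Var}(Y) = 2(n + 2\mu^2)$ (the sample size, the particular means, and the helper names are mine):

```python
import random

random.seed(0)

def sample_Y(mus):
    """One draw of Y = sum (mu_i + X_i)^2 with X_i independent standard Normal."""
    return sum((m + random.gauss(0, 1)) ** 2 for m in mus)

mus = [1.0, -2.0, 0.5]          # arbitrary means; n = 3
n = len(mus)
mu2 = sum(m * m for m in mus)   # noncentrality parameter, here 5.25

N = 200_000
draws = [sample_Y(mus) for _ in range(N)]
mean = sum(draws) / N
var = sum((y - mean) ** 2 for y in draws) / (N - 1)

# theory: E[Y] = n + mu^2 = 8.25 and Var(Y) = 2(n + 2 mu^2) = 27
print(mean, var)
```

With 200,000 draws the sample variance lands within a fraction of a percent of 27.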
Given the system: $$ \dot{r} = -\mu r + r^3, \\ \dot{\theta} = r $$ There is clearly a single equilibrium at $r=0$. The Jacobian is then: $$ \begin{pmatrix} -\mu + 3r^2 & 0 \\ 1 & 0 \end{pmatrix}$$ Setting $r=0$ and finding the eigenvalues I get $\lambda = 0$, $\lambda = -\mu$. The problem statement says "show that a subcritical Hopf bifurcation occurs at the parameter value $\mu = 0$". I don't see how a Hopf bifurcation appears here when all my eigenvalues are real, and I am failing to interpret $\lambda = 0$.
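For intuition, the radial equation can be integrated numerically: for $\mu>0$ the circle $r=\sqrt{\mu}$ is a periodic orbit (since $\dot\theta = r > 0$ there) that repels nearby trajectories, which is the signature of a subcritical Hopf bifurcation. A rough Euler sketch (the step size, time horizon, and cutoff are my own choices):

```python
def simulate_r(r0, mu, dt=1e-3, t_max=20.0, r_cap=2.0):
    """Euler-integrate the radial equation r' = -mu*r + r**3.

    Returns the final radius, stopping early once r exceeds r_cap
    (trajectories outside the unstable cycle blow up in finite time).
    """
    r, t = r0, 0.0
    while t < t_max and r < r_cap:
        r += dt * (-mu * r + r ** 3)
        t += dt
    return r

mu = 0.25                       # unstable cycle at r = sqrt(mu) = 0.5
inside = simulate_r(0.4, mu)    # starts inside the cycle -> decays to 0
outside = simulate_r(0.6, mu)   # starts outside the cycle -> escapes
print(inside, outside)
```

Starting at $r_0 = 0.4$ the radius decays toward the origin, while $r_0 = 0.6$ escapes past the cutoff, bracketing the unstable cycle at $r = 0.5$.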
So one thing that I find really interesting is that if I have a vector $\vec V = V_x \hat i + V_y \hat j$ its length is just: $$ |V|^2 = V_x^2 + V_y^2 $$ That is all well and good, but then if I transform the vector into a new basis, I can rewrite the vector in terms of a covariant basis as: $$ \vec V = V^1 \vec b_1 + V^2 \vec b_2 $$ Now of course, it is obvious that since $\vec b_1 $ and $\vec b_2$ are not necessarily orthonormal, $|V| \ne \sqrt{(V^1)^2 + (V^2)^2}$; that is all well and good: Contravariant Basis Now the usual way this goes is that we then define a new set of basis vectors: we define $b^1$ to be orthogonal to all $b_i$ with $i\ne 1$, but we define, strangely, that $b_1 \cdot b^1 = 1$. Then we rinse and repeat for all the other vectors. We can then represent $\vec V$ in terms of this new basis directly as: $$ \vec V = V_1 \vec b^1 + V_2 \vec b^2 $$ Now this is also fine, but then something totally out of the blue happens: The Dot Product If we take the dot product of these two representations, we can get an alternative formula for the length: $$ |V| ^2 = (V^1 \vec b_1 + V^2 \vec b_2) \cdot (V_1 \vec b^1 + V_2 \vec b^2) = V_x^2 + V_y^2 $$ Now I can verify this by calculation, but I now realise that I have absolutely no understanding of why this should be true. Any help would be most appreciated :) I don't see the connection here: why does defining this new basis with the rule that $b_j \cdot b^k = \delta_{jk}$ lead to such an elegant formula for length?
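A small numerical illustration may help; here is a sketch (the basis and the vector are chosen arbitrarily) that builds the dual basis for a non-orthonormal pair in the plane and checks that the mixed contraction reproduces the Cartesian length:

```python
# A deliberately non-orthonormal covariant basis in the plane
b1 = (1.0, 0.0)
b2 = (1.0, 1.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Dual (contravariant) basis: columns of the inverse of the matrix
# whose rows are b1, b2 (i.e. rows of the inverse transpose)
det = b1[0] * b2[1] - b1[1] * b2[0]
B1 = ( b2[1] / det, -b2[0] / det)   # here (1, -1)
B2 = (-b1[1] / det,  b1[0] / det)   # here (0, 1)

# defining relations b_j . b^k = delta_jk
assert abs(dot(b1, B1) - 1) < 1e-12 and abs(dot(b1, B2)) < 1e-12
assert abs(dot(b2, B2) - 1) < 1e-12 and abs(dot(b2, B1)) < 1e-12

V = (3.0, 4.0)                      # the vector in Cartesian form
V1, V2 = dot(V, B1), dot(V, B2)     # contravariant components
V_1, V_2 = dot(V, b1), dot(V, b2)   # covariant components

# the mixed contraction recovers the Cartesian |V|^2 = 3^2 + 4^2 = 25
print(V1 * V_1 + V2 * V_2)
```

Here $V^1 V_1 + V^2 V_2 = (-1)(3) + (4)(7) = 25$, matching $V_x^2 + V_y^2$ even though the individual components look nothing like the Cartesian ones.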
To motivate Sobolev spaces, let me pose a model problem. Let $\Omega$ be a smooth, bounded domain in ${\Bbb R}^n$ and let $f$ be a $C^\infty$ function on $\Omega$. Prove that there exists a $C^2$ function $u$ satisfying $-\Delta u = f$ in $\Omega$ and $u = 0$ on the boundary of $\Omega$. As far as PDE's go, this is the tamest of the tame: it's a second-order, constant-coefficient elliptic PDE with a smooth right-hand side and a smooth boundary. Should be easy, right? It certainly can be done, but you'll find it's harder than you might think. Imagine replacing the PDE with something more complicated like $-\text{div}(A(x)\nabla u) = f$ for some $C^1$ uniformly positive definite matrix-valued function $A$. Proving even existence of solutions is a nightmare. Such PDE's come up all the time in the natural sciences, for instance representing the equilibrium distribution of heat (or stress, concentration of impurities, ...) in an inhomogeneous, anisotropic medium. Proving the existence of weak solutions to such PDE's in Sobolev spaces is incredibly simple: once all the relevant theoretical machinery has been worked out, the existence, uniqueness, and other useful properties of the solutions can be proven in only a couple of lines. The reason Sobolev spaces are so effective for PDE's is that Sobolev spaces are Banach spaces, and thus the powerful tools of functional analysis can be brought to bear. In particular, the existence of weak solutions to many elliptic PDE's follows directly from the Lax-Milgram theorem. So what is a weak solution to a PDE? In simple terms, you take the PDE, multiply by a suitably chosen${}^*$ test function, and integrate over the domain. For my problem, for instance, a weak formulation would be to say that $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$.
We often want to use integration by parts to simplify our weak formulation so that the order of the highest derivative appearing in the expression goes down: you can check that in fact $\int_\Omega \nabla v\cdot \nabla u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$. Note the logic. You begin with a smooth solution to your PDE, which a priori may or may not exist. You then derive from the PDE a certain integral equation which is guaranteed to hold for all suitable test functions $v$. You then define $u$ to be a weak solution of the PDE if the integral equation holds for all test functions $v$. By construction, every classical solution to the PDE is a weak solution. Conversely, you can show that if $u$ is a $C^2$ weak solution, then $u$ is a classical solution.${}^\dagger$ Showing the existence of solutions in a Sobolev space is easy, but proving that they have enough regularity to be classical solutions (that is, that they are continuously differentiable up to some order: 2, in our case) often requires very lengthy and technical proofs.${}^\$$ (The Sobolev embedding theorems you mention in your post are one of the key tools: they establish that if you have enough weak derivatives in a Sobolev sense, then you are also guaranteed to have a certain number of classical derivatives. The downside is you have to work in a Sobolev space $W^{k,p}$ where $p$ is larger than the dimension of the space, $n$. This is a major bummer since we like to work in $W^{k,2}$, which is a Hilbert space and thus has much nicer functional-analytic tools. Alternatively, if you show that your function is in $W^{k,2}$ for every $k$, then it is guaranteed to lie in $C^\infty$.) All of what I've written kind of dances around the central question of why Sobolev spaces are so useful and why all of these functional-analytic tools work for Sobolev spaces but not for spaces like $C^2$. In a sentence: completeness is really, really important.
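To make the model problem concrete, here is a sketch for the 1-D analogue $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ (the grid size and test data are my own choices); the standard 3-point scheme below is essentially what a piecewise-linear Galerkin discretization of the weak form reduces to on a uniform grid:

```python
from math import pi, sin

# Solve -u'' = f on (0,1), u(0) = u(1) = 0, with f = pi^2 sin(pi x),
# whose exact solution is u(x) = sin(pi x).
n = 200                         # number of interior grid points
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]
f = [pi ** 2 * sin(pi * xi) for xi in x]

# Tridiagonal system from the 3-point Laplacian: (-1, 2, -1) / h^2,
# solved by the Thomas algorithm (forward elimination + back substitution).
a = [-1.0] * n                  # sub-diagonal
b = [2.0] * n                   # diagonal
c = [-1.0] * n                  # super-diagonal
d = [h * h * fi for fi in f]    # right-hand side

for i in range(1, n):
    w = a[i] / b[i - 1]
    b[i] -= w * c[i - 1]
    d[i] -= w * d[i - 1]
u = [0.0] * n
u[-1] = d[-1] / b[-1]
for i in range(n - 2, -1, -1):
    u[i] = (d[i] - c[i] * u[i + 1]) / b[i]

err = max(abs(ui - sin(pi * xi)) for ui, xi in zip(u, x))
print(err)                      # discretization error, O(h^2)
```

The discrete solve always succeeds because the tridiagonal matrix is the finite-dimensional shadow of the coercive bilinear form $\int u'v'\,dx$; proving that such approximations converge to an honest solution is exactly where completeness enters.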
Often in analysis, when we want to show a solution to something exists, it's much easier to construct a bunch of approximate solutions and then show those approximations converge to a bona fide solution. But without completeness, there might not be a solution (a priori, at least) for them to converge to. As a much simpler example, think of the intermediate value theorem. $f(x) = x^2-2$ has $f(2) = 2$ and $f(0) = -2$, so there must exist a zero (namely $\sqrt{2}$) in $(0,2)$. This conclusion fails over the rationals, however, since the rationals are not complete: $\sqrt{2} \notin {\Bbb Q}$. In fact, one way to define the Sobolev spaces is as the completion of $C^\infty$ (or $C^k$ for $k$ large enough) under the Sobolev norms.${}^\%$ I don't have the space here to answer your questions (1) and (2) directly, as answering them in detail really requires spinning out a whole theory. Most graduate textbooks on PDE's should have answers with all the details spelled out. (Evans is the standard reference, although he doesn't include potential theory, so he doesn't answer (1), directly at least.) Hopefully this answer at least motivates why Sobolev spaces are the "appropriate space to look for solutions to PDEs". ${}^*$ Depending on the boundary conditions of the PDE, our test functions may need to be zero on the boundary or not. Additionally, to make the functional analysis nice, we often want our test functions to be taken from the same Sobolev space as we seek solutions in. This usually poses no problem, as we may begin by taking our test functions to be $C^\infty$ and use certain approximation arguments to extend to all functions in a suitable Sobolev space. ${}^\dagger$ Apply integration by parts to recover $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$ and apply the fundamental lemma of the calculus of variations. ${}^\$$ Take a look at a regularity proof for elliptic equations in your advanced PDE book of choice.
${}^\%$ You might ask why complete in Sobolev norm, not some simpler norm like $L^p$? Unfortunately, the $L^p$ completion of $C^\infty$ is $L^p$, and there are functions in $L^p$ which you can't define any sensible weak or strong derivative of. Thus, in order to define a complete normed space of differentiable functions, the derivative has to enter the norm (which is why the Sobolev norms are important, and in some sense natural.)
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ... Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
Let $X_1, X_2,\ldots$ be a random sample from the probability density function, for $0<\mu<\infty$, $0<\alpha < 1$, $$f(x;\mu,\alpha)= \begin{cases}\frac{1}{\Gamma(\alpha)}(x-\mu)^{\alpha-1}e^{-(x-\mu)}, & x>\mu \\ 0, & \text{otherwise} \end{cases}$$ Does a maximum likelihood estimator exist for the parameters $\alpha$ and $\mu$? By observation, this looks like a gamma distribution shifted by $\mu$. Also, I know that we can usually obtain the MLE by taking the product of the PDF from $1$ to $n$ and differentiating with respect to the parameter. But I do not know the criteria that a parameter needs to satisfy in order to have a maximum likelihood estimator.
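One useful numerical experiment (my own sketch, with made-up data, not part of the question): for $0<\alpha<1$ the factor $(x_{(1)}-\mu)^{\alpha-1}$ blows up as $\mu$ approaches the sample minimum from below, so the likelihood in $\mu$ is unbounded and no maximizer exists. This can be seen directly by evaluating the log-likelihood:

```python
import math

def log_likelihood(mu, xs, alpha):
    # log L(mu, alpha) = sum[(alpha-1) ln(x - mu) - (x - mu)] - n ln Gamma(alpha)
    n = len(xs)
    return (sum((alpha - 1) * math.log(x - mu) - (x - mu) for x in xs)
            - n * math.lgamma(alpha))

xs = [1.5, 2.0, 3.0]   # made-up sample; min(xs) = 1.5
alpha = 0.5            # any 0 < alpha < 1 shows the same behavior

# As mu -> min(xs) from below, the (alpha - 1) ln(x_(1) - mu) term -> +infinity,
# so the log-likelihood increases without bound: no MLE for mu exists here.
for mu in [1.0, 1.4, 1.499, 1.4999999]:
    print(mu, log_likelihood(mu, xs, alpha))
```

The printed values increase monotonically as $\mu$ creeps toward $\min(x_i)$, which is exactly the failure of the existence criterion being asked about.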
I'm currently trying to learn Bayesian statistics, but I keep losing time trying to figure out what exactly is meant by the notation. Could someone answer the following for me? Let's say $X \sim N(\mu,\sigma^2)$. (1) I'm trying to calculate a posterior distribution $p(\mu\mid X) \propto p(X\mid\mu)p(\mu)$. So my understanding is that $p(\mu\mid X)$ is the probability distribution of the parameter $\mu$ given the data $X$. What then does $p(\mu\mid X,\sigma^2)$ mean exactly? My guess is that it is the probability distribution of the parameter $\mu$ given the data $X$ and assuming that $\sigma^2$ is fixed. Is that correct? (2) Following up from (1), for $p(\mu\mid X,\sigma^2)$, what does the posterior function transform into? Is it $p(\mu\mid X,\sigma^2) \propto p(X\mid\mu,\sigma^2)p(\mu,\sigma^2)$? If it is different, how does the likelihood function really change? My understanding is that the likelihood function is based on the way the data are distributed and not on the parameters conditioned on. Is there a difference? (3) A similar question regarding the prior: if we are told that $p(\mu) \propto 1$, is there a difference between $p(\mu)$ and $p(\mu,\sigma^2)$? Any clarification would be greatly appreciated!
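Regarding (1) and (2): when $\sigma^2$ is treated as a known constant, the normal likelihood with a normal prior on $\mu$ is conjugate, and $p(\mu\mid X,\sigma^2)$ is again normal with a closed form. A small sketch (the prior hyperparameters `mu0`, `tau2` and the data are illustrative assumptions, not anything from the question):

```python
def posterior_mu(data, sigma2, mu0, tau2):
    # p(mu | X, sigma^2) ∝ p(X | mu, sigma^2) p(mu), with sigma^2 fixed.
    # Normal prior N(mu0, tau2) x normal likelihood -> normal posterior.
    n = len(data)
    prec = 1.0 / tau2 + n / sigma2                 # posterior precision
    mean = (mu0 / tau2 + sum(data) / sigma2) / prec
    return mean, 1.0 / prec                        # posterior mean, variance

mean, var = posterior_mu([1.0, 1.0, 1.0, 1.0], sigma2=1.0, mu0=0.0, tau2=1.0)
# mean = (0 + 4)/5 = 0.8, var = 1/5
```

Note how conditioning on $\sigma^2$ just means every density in Bayes' rule carries it as a fixed quantity; the likelihood is still a function of $\mu$ for the observed data.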
Suppose that the matrix $A$ is diagonalizable by an orthogonal matrix $Q$. The orthogonality of the matrix $Q$ means that we have\[Q^{\trans}Q=QQ^{\trans}=I, \tag{*}\]where $Q^{\trans}$ is the transpose matrix of $Q$ and $I$ is the $n\times n$ identity matrix. Since $Q$ diagonalizes the matrix $A$, we have\[Q^{-1}AQ=D,\]where $D$ is a diagonal matrix. Equivalently, we have\[A=QDQ^{-1} \tag{**}.\]Taking the transpose of both sides, we obtain\begin{align*}A^{\trans}&=(QDQ^{-1})^{\trans}\\&=(Q^{-1})^{\trans}D^{\trans} Q^{\trans}\\&=(Q^{-1})^{\trans}D Q^{\trans} \text{ since } D \text{ is diagonal.}\tag{***}\end{align*} By (*), we observe that the inverse matrix of $Q$ is the transpose $Q^{\trans}$, that is, $Q^{-1}=Q^{\trans}$. It follows from this observation and (***) that we have\[A^{\trans}=QDQ^{-1}.\](Note that $(Q^{-1})^{\trans}=Q^{\trans \trans}=Q$ and $Q^{\trans}=Q^{-1}$.) Comparing this with (**), we obtain\[A^{\trans}=A,\]and hence $A$ is a symmetric matrix.
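The argument above is easy to check numerically. Here is a small sketch (the rotation angle and diagonal entries are illustrative choices): build an explicitly orthogonal $Q$, form $A = QDQ^{\trans} = QDQ^{-1}$, and confirm that $A$ comes out symmetric.

```python
import numpy as np

theta = 0.3   # any angle gives an orthogonal 2x2 rotation matrix
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # Q^T Q = I
D = np.diag([2.0, 5.0])                           # diagonal matrix

A = Q @ D @ Q.T   # Q diagonalizes A, and Q^{-1} = Q^T by orthogonality

assert np.allclose(Q.T @ Q, np.eye(2))  # (*): Q is orthogonal
assert np.allclose(A, A.T)              # A is symmetric, as proved above
```

The eigenvalues of the resulting $A$ are exactly the diagonal entries of $D$, as the diagonalization $Q^{-1}AQ = D$ demands.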
Standard finite difference formulas are usable to numerically compute a derivative under the expectation that you have function values $f(x_k)$ at evenly spaced points, so that $h \equiv x_{k+1} - x_k$ is a constant. What if I have unevenly spaced points, so that $h$ now varies from one pair of adjacent points to the next? Obviously I can still compute a first derivative as $f'(x) \approx \frac{1}{h_k}[f(x_{k+1}) - f(x_k)]$ with $h_k = x_{k+1} - x_k$, but are there numerical differentiation formulas at higher orders and accuracies that can adapt to variation in the grid size? J.M.'s comment is right: you can find an interpolating polynomial and differentiate it. There are other ways of deriving such formulas; typically, they all lead to solving a Vandermonde system for the coefficients. This approach is problematic when the finite difference stencil includes a large number of points, because the Vandermonde matrices become ill-conditioned. A more numerically stable approach was devised by Fornberg, and is explained more clearly and generally in a second paper of his. Here is a simple MATLAB script that implements Fornberg's method to compute the coefficients of a finite difference approximation for any order derivative with any set of points. For a nice explanation, see Chapter 1 of LeVeque's text on finite difference methods. A bit more on FD formulas: Suppose you have a 1D grid. If you use the whole set of grid points to determine a set of FD formulas, the resulting method is equivalent to finding an interpolating polynomial through the whole grid and differentiating that. This approach is referred to as spectral collocation. Alternatively, for each grid point you could determine a FD formula using just a few neighboring points. This is what is done in traditional finite difference methods. As mentioned in the comments below, using finite differences of very high order can lead to oscillations (the Runge phenomenon) if the points are not chosen carefully.
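Fornberg's recursion is short enough to sketch directly. The following is my own Python transcription (variable names mine), so treat it as a sketch to check against Fornberg's paper rather than a reference implementation; it returns weights for derivatives of order $0$ through $m$ at a point $z$, on an arbitrary (possibly uneven) grid:

```python
import numpy as np

def fd_weights(z, x, m):
    """Weights c[j, k] such that sum_j c[j, k] * f(x[j]) approximates
    the k-th derivative of f at z, for k = 0..m, on the arbitrary
    (possibly unevenly spaced) grid x."""
    n = len(x)
    c = np.zeros((n, m + 1))
    c1, c4 = 1.0, x[0] - z
    c[0, 0] = 1.0
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i, k] = c1 * (k * c[i-1, k-1] - c5 * c[i-1, k]) / c2
                c[i, 0] = -c1 * c5 * c[i-1, 0] / c2
            for k in range(mn, 0, -1):
                c[j, k] = (c4 * c[j, k] - k * c[j, k-1]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c

# On the even grid [-1, 0, 1] this recovers the familiar central differences:
w = fd_weights(0.0, np.array([-1.0, 0.0, 1.0]), 2)
# w[:, 1] = [-0.5, 0, 0.5]  (first derivative), w[:, 2] = [1, -2, 1]  (second)
```

Because no Vandermonde system is solved, the recursion stays well-conditioned even for wide stencils, which is exactly the advantage mentioned above.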
This addresses your question and shows the formula you are looking for, for the second derivative. Higher-order derivatives follow the same pattern. The above answers are great in terms of giving you a code to use, but aren't as good in terms of theory. If you want to delve deeper into interpolating polynomials, take a look at this theoretical treatment with a few concrete examples: Singh, Ashok K., and B. S. Bhadauria. "Finite difference formulae for unequal sub-intervals using lagrange’s interpolation formula." International Journal of Mathematics and Analysis 3.17 (2009): 815-827. (Link to PDF) The authors use Lagrangian interpolation (see the Wikipedia article) to calculate 3-point, 4-point and 5-point interpolating polynomials, as well as their first, second and third derivatives. They have expressions for the truncation error as well, which is important to consider when using any finite difference scheme. They also have the generic formula for calculating interpolating polynomials using N points. Lagrangian interpolating polynomials are useful because they and their derivatives can be very accurate in the domain you are interpolating, and they do not assume an even grid spacing. Due to the nature of Lagrangian interpolating polynomials, you can never have more orders of derivatives than you have grid points. I think this answers your question well because the paper I cited has formulae for arbitrarily high-order finite difference schemes, which by nature are for uneven grids and are limited only by the number of grid points you include in your stencil. The paper also has a generic formula for the truncation error, which will help you evaluate the Lagrangian interpolating polynomial scheme against other schemes you might be considering. The authors' paper should give the same results as Fornberg's method. Their contribution is really just tallying a few examples and giving an estimate of the error, which you may find useful.
I found both the paper I cited and Fornberg's work to be useful for my own research. I found this paper on finite difference formulas with unequal sub-intervals. I'm going to use this instead of interpolation. Once I type all the formulas out, I'll post them here. The simplest method is to use finite difference approximations. A simple two-point estimation is to compute the slope of a nearby secant line through the points $(x, f(x))$ and $(x+h, f(x+h))$. Choosing a small number $h$, $h$ represents a small change in $x$, and it can be either positive or negative. The slope of this line is $$\frac{f(x+h)-f(x)}{h}.$$ This expression is Newton's difference quotient. The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to $h$. As $h$ approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of $f$ at $x$ is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line.
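The "approximately proportional to $h$" claim is easy to see with a tiny experiment (my own example function, chosen so the error is exactly $h$ for the forward quotient):

```python
def forward_diff(f, x, h):
    # Newton's difference quotient: slope of the secant through
    # (x, f(x)) and (x + h, f(x + h))
    return (f(x + h) - f(x)) / h

f = lambda x: x * x   # true derivative at x = 1 is 2
for h in [0.1, 0.01, 0.001]:
    err = abs(forward_diff(f, 1.0, h) - 2.0)
    print(h, err)     # for f(x) = x^2 the quotient is 2 + h, so err == h
```

Shrinking $h$ by a factor of ten shrinks the error by the same factor, which is the first-order accuracy of the one-sided quotient; centered differences would improve this to $O(h^2)$.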
There are $2^4=16$ total possible outcomes, of which only one outcome gives rise to all heads. Thus the probability that all coins land heads is $1/16$. Solution (2) Consider the event that the first coin is heads. In this case, there are in total $2^3=8$ possible outcomes for the rest of the coins (2nd, 3rd, and 4th). Hence, the probability that all coins land heads given that the first coin is heads is $1/8$. Solution (3) Let $H$ be the event that all coins land heads. Let $F$ be the event that at least one coin lands heads. Then the required conditional probability is given by\begin{align*}P(H \mid F) &= \frac{P(H \cap F)}{P(F)}.\end{align*}The complement $F^c$ of $F$ is the event that all coins land tails, whose probability $P(F^c)$ is $1/16$, just like part (1). Hence\[P(F) = 1 - P(F^c) = 1 - \frac{1}{16} = \frac{15}{16}.\] It follows that\begin{align*}P(H \mid F) &= \frac{P(H \cap F)}{P(F)}\\[6pt]&= \frac{P(H)}{P(F)}\\[6pt]&= \frac{1/16}{15/16}\\[6pt]&= \frac{1}{15}.\end{align*}
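Solution (3) can be double-checked by brute-force enumeration of the sample space, since there are only $2^4 = 16$ outcomes (a quick sketch, not part of the original solution):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=4))            # all 2^4 = 16 outcomes
at_least_one_head = [o for o in outcomes if "H" in o]   # the event F, 15 outcomes
all_heads = [o for o in at_least_one_head if o == ("H",) * 4]   # H ∩ F

p = Fraction(len(all_heads), len(at_least_one_head))
# p == Fraction(1, 15), matching the conditional-probability computation
```

Counting outcomes directly like this is a useful sanity check whenever the sample space is small and uniform.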
Journal of Modern Dynamics, July 2008, Volume 2, Issue 3 (ISSN: 1930-5311, eISSN: 1930-532X) Abstract: Professor Michael Brin of the University of Maryland endowed an international prize for outstanding work in the theory of dynamical systems and related areas. The prize is given biennially for specific mathematical achievements that appear as a single publication or a series thereof in refereed journals, proceedings or monographs. Abstract: The present note is occasioned by the award to Giovanni Forni of the inaugural Michael Brin Prize in Dynamical Systems. The award reflects the profound contributions to dynamical systems by Giovanni Forni. The existence of the award reflects the extraordinary generosity of Michael and Eugenia Brin, who have provided funds for many mathematical and scientific activities, including the Brin Prize. Abstract: We introduce a class of continuous maps $f$ of a compact topological space $I$ admitting inducing schemes and describe the tower constructions associated with them. We then establish a thermodynamic formalism, i.e., describe a class of real-valued potential functions $\varphi$ on $I$ which admit a unique equilibrium measure $\mu_\varphi$ minimizing the free energy for a certain class of invariant measures. We also describe ergodic properties of equilibrium measures, including decay of correlations and the Central Limit Theorem. Our results apply to certain maps of the interval with critical points and/or singularities (including some unimodal and multimodal maps) and to potential functions $\varphi_t=-t\log|df|$ with $t\in(t_0, t_1)$ for some $t_0<1 < t_1$. In the particular case of $S$-unimodal maps we show that one can choose $t_0<0$ and that the class of measures under consideration consists of all invariant Borel probability measures.
Abstract: We prove that the group of Hamiltonian automorphisms of a symplectic $4$-manifold $(M,\omega)$, Ham$(M,\omega)$, contains only finitely many conjugacy classes of maximal compact tori with respect to the action of the full symplectomorphism group Symp$(M,\omega)$. We also consider the set of conjugacy classes of $2$-tori in Ham$(M,\omega)$ with respect to Hamiltonian conjugation and show that its finiteness is equivalent to the finiteness of the symplectic mapping class group $\pi_{0}$(Symp$(M,\omega)$). Finally, we extend to rational and ruled manifolds a result of Kedra which asserts that if $(M,\omega)$ is a simply connected symplectic $4$-manifold with $b_{2}\geq 3$, and if $(\widetilde{M},\widetilde{\omega}_{\delta})$ denotes a symplectic blow-up of $(M,\omega)$ of small enough capacity $\delta$, then the rational cohomology algebra of the Hamiltonian group Ham($\widetilde{M},\widetilde{\omega}_{\delta}$) is not finitely generated. Our results are based on the fact that in a symplectic $4$-manifold endowed with any tamed almost complex structure $J$, exceptional classes of minimal symplectic area are $J$-indecomposable. Abstract: We show that there exist minimal interval-exchange transformations with an ergodic measure whose Hausdorff dimension is arbitrarily small, even 0. We will also show that in particular cases one can bound the Hausdorff dimension between $\frac{1}{2r+4}$ and $\frac{1}{r}$ for any $r$ greater than 1. Abstract: Denote by $\Gamma$ the set of pointwise good sequences: sequences of real numbers $(a_k)$ such that for any measure-preserving flow $(U_t)_{t\in\mathbb R}$ on a probability space and for any $f\in L^\infty$, the averages $\frac{1}{n} \sum_{k=1}^{n} f(U_{a_k}x)$ converge almost everywhere. We prove the following two results. 1. If $f: (0,\infty)\to\mathbb R$ is continuous and if $(f(ku+v))_{k\geq 1}\in\Gamma$ for all $u, v>0$, then $f$ is a polynomial on some subinterval $J\subset (0,\infty)$ of positive length. 2.
If $f: [0,\infty)\to\mathbb R$ is real analytic and if $(f(ku))_{k\geq 1}\in\Gamma$ for all $u>0$, then $f$ is a polynomial on the whole domain $[0,\infty)$. These results can be viewed as converses of Bourgain's polynomial ergodic theorem which claims that every polynomial sequence lies in $\Gamma$. Abstract: We prove that the displacement energy of a stable coisotropic submanifold is bounded away from zero if the ambient symplectic manifold is closed, rational and satisfies a mild topological condition. Abstract: We give examples of finitely presented groups containing elements with irrational (in fact, transcendental) stable commutator length, thus answering in the negative a question of M. Gromov. Our examples come from 1-dimensional dynamics and are related to the generalized Thompson groups studied by M. Stein, I. Liousse and others. Abstract: Let $\alpha_0$ be an affine action of a discrete group $\Gamma$ on a compact homogeneous space $X$ and $\alpha_1$ a smooth action of $\Gamma$ on $X$ which is $C^1$-close to $\alpha_0$. We show that under some conditions, every topological conjugacy between $\alpha_0$ and $\alpha_1$ is smooth. In particular, our results apply to Zariski-dense subgroups of $SL_d(\mathbb{Z})$ acting on the torus $\mathbb{T}^d$ and Zariski-dense subgroups of a simple noncompact Lie group $G$ acting on a compact homogeneous space $X$ of $G$ with an invariant measure.
This week¶ This week's paper is Active Perception in Adversarial Scenarios using Maximum Entropy Deep Reinforcement Learning. The idea is that an agent interacting with another agent can learn to assess the threat it may pose. It does this by actively testing the opponent agent's behavior, and does not assume the opponent's behavior remains stationary. It uses Bayesian filtering to update its belief about the disposition of the opponent, and that's why this paper caught my eye. I'm on a Bayesian kick lately. To summarize, the contribution here is the development of a scalable, robust active perception method for scenarios where a potentially adversarial opponent may be actively hostile to the intent recognition activity; it extends and outperforms POMDP methods. I'm a bit short on time this week, so I apologize for the amount of jargon and the unusually high level of confusion. Problem setup¶ We model the active perception problem as a planning problem, defined by the tuple $\langle S,A^a,A^o,T,O,R,b_0,\gamma \rangle$, where $S=\langle S^o,S^p \rangle$ is the state of the world, consisting of the set of observable states $S^o$ and the set of partially observable states $S^p$; $A^a$ is the set of actions of the autonomous agent; $A^o$ is the set of actions of the opponent (we further assume that, regardless of its intention, the opponent has the same set of observable actions; otherwise, an intention would be easily identifiable once an action uniquely corresponding to that intention is observed); $T:S \times A^a \times A^o \rightarrow \Delta_S$ is the transition probability, where $\Delta_{\bullet}$ denotes the space of probability distributions over the space $\bullet$; $O: S \times A^a \rightarrow \Delta_{A^o}$ is the observation probability; $R: S \times A^a \times A^o \rightarrow \mathbb{R}$ is the reward function; $b_0$ is the prior probability of the opponent being an adversary; and $\gamma$ is the discount factor.
Further, the opponent is assumed to be either neutral (merely self-interested, in a known way) or hostile (goal-directed, as defined by a known MDP), with bounded rationality (it may not be able to take the optimal action), and it is likely to behave deceptively. Notice that the actual behavior of the opponent is known if its disposition is known, which to my mind may or may not be a reasonable assumption, depending on the setting. Since I've had AI safety on the brain lately, it strikes me as unrealistic in a situation where your opponent is smarter than you are. It may be more realistic in settings where everyone has the same goal and it's relatively clear how anyone would try to achieve it if they didn't have to deal with other agents. The authors' adversarial model is interesting. ($\lambda$ is the parameter to $\pi^o$ that specifies whether the agent is neutral: $\lambda=0$, or adversarial: $\lambda=1$): We use the following equations to model an adversarial agent's policy $\pi^o$: $$ \begin{align} \pi^o(a^o_t|s_t,\lambda=1;\alpha,\beta)= & \text{argmin}_{\pi \in \Delta} \{\mathbb{KL}(\pi|\pi^{\text{MDP}}_{\alpha}) +\beta \mathbb{KL}(\pi|\pi^o(\cdot|s_t,\lambda=0)) \} \\ \pi^{\text{MDP}}_{\alpha}(a_t^o|s_t,\lambda=1)= & \, e^{\alpha Q(s_t,a_t^o)}/Z(s_t) \end{align} $$ The thing to take away from this is that both rationality and deception are tunable parameters. The rationality of the opponent is controlled by the temperature parameter $\alpha$, by adjusting how well the opponent makes use of the optimal Q function. The degree to which the opponent is deceptive is controlled by $\beta$, which adjusts how much the KL-divergence of the existing policy from the neutral policy affects the opponent's search for an optimal strategy. Bayesian filtering¶ We maintain a belief $b_t(\lambda)$ over the hidden variable by Bayesian filtering. As I mentioned, I'm rather short on time today, so I must apologize again for not actually spending the time to explain this.
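The Boltzmann-rational part of the model, $\pi^{\text{MDP}}_{\alpha} \propto e^{\alpha Q}$, is worth seeing concretely. A minimal sketch (Q-values are made-up numbers of mine, not from the paper) showing how $\alpha$ interpolates between random and optimal play:

```python
import numpy as np

def boltzmann_policy(q, alpha):
    # pi_MDP(a | s) ∝ exp(alpha * Q(s, a)); alpha is the rationality
    # (temperature) parameter in the paper's adversary model.
    z = np.exp(alpha * (q - np.max(q)))   # shift by max for numerical stability
    return z / z.sum()

q = np.array([1.0, 2.0, 3.0])   # illustrative Q-values for three actions
boltzmann_policy(q, 0.0)        # alpha = 0: uniform, no rationality at all
boltzmann_policy(q, 50.0)       # large alpha: nearly deterministic argmax
```

The deception term then pulls this policy toward the neutral one via the $\beta$-weighted KL penalty, trading expected return for indistinguishability.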
For now, suffice it to say that the opponent is either neutral ($\lambda=0$) or hostile ($\lambda=1$), and how your agent reacts to it depends very much on which one of those it believes it is playing against. Bayesian filtering will allow it to make the most of the evidence available, so it can use its best guess as it trains. We define a hybrid belief-state dependent reward to balance exploration and safety \begin{equation} \begin{aligned} r(b_t,s_t,a^a_t)&=-H(b_t)+r(s_t,a^a_t)\\ &=b\log b+(1-b)\log(1-b)+r(s_t,a^a_t), \end{aligned} \label{eq6} \end{equation} where we use the shorthand $b$ to denote $b_t(\lambda=1)$, the belief that the opponent is an adversary; and $r(s_t,a^a_t)$ is the state-dependent reward. This reward balances exploration behavior and safety. The negative entropy reward $-H(b_t)$ can be interpreted as maximizing the expected logarithm of the true positive rate (TPR) and true negative rate (TNR). The state-dependent reward $r(s_t,a^a_t)$ depends both on the observable state and the partially observable intent state $\lambda$, as well as the action of the autonomous agent. This reward is used to ensure safety. For instance, some actions could be dangerous to the neutral [opponent], which are discouraged by a large negative reward. Our agent is trained using Soft-Q Learning while values of $\lambda$ are varied, with corresponding opponent behavior. Interestingly, in the case study section the authors mention that the actual adversary models were not always provided in the learning phase. The active perception agent has to identify the hidden intent while being robust to this model uncertainty, which is challenging. Parting thoughts¶ I admit to being a bit confused by this paper. The authors claim to do Bayesian filtering, but it's not an explicit feature of the algorithm. In fact, they seem to be sampling $\lambda$ for use in training by using only $b_0$, their prior probability for their belief state. Perhaps it's a typo.
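Since I didn't explain the filtering step, here is the bare mechanics for the binary hidden variable $\lambda$ (my own sketch; the likelihood numbers are invented for illustration): the belief update is just Bayes' rule applied to each observed opponent action.

```python
def update_belief(b, lik_adv, lik_neutral):
    # One step of Bayesian filtering on the binary hidden variable lambda:
    # b is the current belief P(lambda = 1) that the opponent is an adversary;
    # lik_* are the likelihoods P(observed action | lambda) under each model.
    num = lik_adv * b
    return num / (num + lik_neutral * (1.0 - b))

b = 0.5                                              # prior b_0 (illustrative)
b = update_belief(b, lik_adv=0.8, lik_neutral=0.2)   # -> 0.8
b = update_belief(b, lik_adv=0.8, lik_neutral=0.2)   # belief rises further
```

Each observation that is more probable under the adversarial model than the neutral one pushes $b$ toward 1, and the entropy term $-H(b_t)$ in the reward encourages actions whose outcomes discriminate sharply between the two models.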
They also seem to claim that the two models of the opponent behavior must be known, but then they mention they're not available during the learning phase in their case study. Drop me a line if this makes sense to you.
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... Production of $\pi^0$ and $\eta$ mesons up to high transverse momentum in pp collisions at 2.76 TeV (Springer, 2017-05) The invariant differential cross sections for inclusive $\pi^{0}$ and $\eta$ mesons at midrapidity were measured in pp collisions at $\sqrt{s}=2.76$ TeV for transverse momenta $0.4<p_{\rm T}<40$ GeV/$c$ and $0.6<p_{\rm ... Linear and non-linear flow modes in Pb-Pb collisions at $\sqrt{s_{\rm NN}} =$ 2.76 TeV (Elsevier, 2017-10) The second and the third order anisotropic flow, $V_{2}$ and $V_3$, are mostly determined by the corresponding initial spatial anisotropy coefficients, $\varepsilon_{2}$ and $\varepsilon_{3}$, in the initial density ...
Production of muons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Elsevier, 2017-07) The production of muons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV was studied for $2<p_{\rm T}<16$ GeV/$c$ with the ALICE detector at the CERN LHC. The measurement was performed ... Measurement of deuteron spectra and elliptic flow in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV at the LHC (Springer, 2017-10) The transverse momentum ($p_{\rm T}$) spectra and elliptic flow coefficient ($v_2$) of deuterons and anti-deuterons at mid-rapidity ($|y|<0.5$) are measured with the ALICE detector at the LHC in Pb-Pb collisions at ... Measuring K$^0_{\rm S}$K$^{\rm \pm}$ interactions using Pb-Pb collisions at ${\sqrt{s_{\rm NN}}=2.76}$ TeV (Elsevier, 2017-11) We present the first ever measurements of femtoscopic correlations between the K$^0_{\rm S}$ and K$^{\rm \pm}$ particles. The analysis was performed on the data from Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV measured ... Measurement of D-meson production at mid-rapidity in pp collisions at ${\sqrt{s}=7}$ TeV (Springer, 2017-08) The production cross sections of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$, ${\rm D^{*+}}$ and ${\rm D_s^+}$ were measured at mid-rapidity in proton-proton collisions at a centre-of-mass energy $\sqrt{s}=7$ TeV ... Charged-particle multiplicity distributions over a wide pseudorapidity range in proton-proton collisions at √s = 0.9, 7, and 8 TeV (Springer, 2017-12-09) We present the charged-particle multiplicity distributions over a wide pseudorapidity range (−3.4<η<5.0 ) for pp collisions at √s=0.9,7 , and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector ...
So now I’m up to “4” on Newton’s clock: So the expression\[ {\left({2\sin\frac{\mathit{\pi}}{2}}\right)}^{2} \] uses the sine function, which has been talked about in many posts before. Only this time, it is using radian measure of angles instead of degrees. If your calculator is in degree mode, you can substitute 90° in place of 𝜋/2 to get the same answer. The sine of 𝜋/2 radians or 90° is 1. So in the brackets we have 2 × 1 = 2. 2² = 4, hence its position on the clock. Now let’s look at\[ \sqrt[3]{125} \] This is the cube root of 125. This expression is asking the question: “What number multiplied by itself 3 times equals 125?”. The answer to that is 5 because 5 × 5 × 5 = 125. So once again, the clock does not lie. Now let’s look at 3! This is pronounced “3 factorial”. The factorial of a number is that number multiplied successively by every positive whole number less than it. So 5! = 5 × 4 × 3 × 2 × 1 = 120. So 3! = 3 × 2 × 1 = 6. Factorials are used a lot in probability. I have touched on this before but perhaps there is another future post here. Now let’s look at $0111_2$. We are very familiar with the decimal way of counting. This system is a base 10 system because we use 10 distinct digits (symbols) to count: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When we run out of digits, like when we count up to 9, we add another digit to the left of the number and reset the original digit to 0: 10. We then successively increase the right digit until we get to 9 again, then we increase the left digit by 1 and start over: 20, 21, … . There are other number systems based on numbers other than ten. Computers are composed of switches based on two states, on or off. We mathematically say that off is 0 and on is 1. Computers essentially count with just 0's and 1's: a base 2 system. Counting in base 2 is done exactly as we do in base 10, we just have fewer digits to work with.
So if we start counting we get 0, 1, but we've run out of digits, so we add a digit to the left and start again: 0, 1, 10, 11. Run out of digits again, so add another digit and start over: 0, 1, 10, 11, 100, 101, 110, 111. If you are keeping track, 111 in base 2 is equal to 7 in base 10. It is a convention to subscript a number with its base when dealing with other base systems, so $0111_2$ means 7 in base 10. The leading 0 doesn't add to the value, but in computer maths, base 2 numbers are typically written 4 digit places at a time.
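All four clock positions are quick to check programmatically; here's a short Python sketch (mine, not part of the original post) verifying each expression:

```python
import math

# 4 o'clock: (2 sin(pi/2))^2 -- sin of pi/2 radians (= 90 degrees) is 1
four = (2 * math.sin(math.pi / 2)) ** 2

# 5 o'clock: the cube root of 125, since 5 * 5 * 5 = 125
five = round(125 ** (1 / 3))

# 6 o'clock: 3! = 3 * 2 * 1
six = math.factorial(3)

# 7 o'clock: 0111 in base 2 -- int() ignores the leading zero, as expected
seven = int('0111', 2)

print(four, five, six, seven)  # -> 4.0 5 6 7
```

Note the `round()` on the cube root: floating-point exponentiation gives a value a hair away from exactly 5.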
Hubble's Law, when written in this form,$$v = H_0D,$$means: if $D$ is the current distance of a galaxy, and $H_0$ the Hubble constant, then $v$ is the current recession velocity of the galaxy. So it tells you what the recession velocity of a galaxy is right now, not what it was in the past. Basically, the Hubble Law is a consequence of the cosmological principle, i.e. that the universe on large scales is isotropic and homogeneous. This means that the expansion of the universe can be described by a single function of time, the so-called scale factor $a(t)$, so that the distance to a faraway galaxy increases over time as$$D(t) = a(t)D_c,$$where $D_c$ is a constant, called the co-moving distance to the galaxy; $D(t)$ is known as its proper distance. Also, the present-day value of $a(t)$ is set to 1 by convention, i.e. $a(t_0)=1$, so that $D(t_0) = D_c$. If we take the derivative, then$$v(t) = \dot{D}(t) = \dot{a}(t)D_c = \frac{\dot{a}(t)}{a(t)}D(t) = H(t)D(t),$$with $v(t)$ called the recession velocity and $H(t)=\dot{a}/a$ the Hubble parameter. This is the general version of Hubble's Law at cosmological time $t$, which at the present day takes the familiar form$$v = H_0D,$$where $v=v(t_0)$, $H_0=H(t_0)$ and $D=D(t_0)$. But in this form, Hubble's Law isn't very useful: it's a purely theoretical relation, because the recession velocity of a galaxy cannot be directly observed, nor does it say anything about the expansion of the universe in the past. It only tells us how fast a galaxy is moving from us right now, if you know its current distance. However, there's a related quantity that we can observe, namely the redshift $z$ of a galaxy, which is the change in wavelength of its photons as they travel through the expanding space:$$1 + z = \frac{\lambda_\text{ob}}{\lambda_\text{em}},$$ with $\lambda_\text{em}$, $\lambda_\text{ob}$ the emitted and observed wavelength respectively. 
Unlike the recession velocity, the redshift does give us information about the past, because the redshift of a photon accumulates over time, during its journey from the source galaxy to us. By comparing the redshifts of two galaxies, we can deduce information about the expansion rate in the past: suppose we observe two galaxies with distances $D_1 > D_2$ and redshifts $z_1 > z_2$, which emitted their light at times $t_1$, $t_2$ respectively. Then the difference in redshift $z_1-z_2$ will tell you how much the universe expanded in the time interval $[t_1,t_2]$. In other words, if the expansion of the universe were decelerating, we'd see that the redshift of distant galaxies accumulated a lot in the distant past, when the expansion rate of the universe was high. However, observations showed that the expansion of the universe first decelerated and then started to accelerate again (the transition from deceleration to acceleration occurred when the universe was about 7.7 billion years old). This means that there was a time when the expansion rate was at a minimum, during which the redshift of photons increased less. The relation between $v$ and $z$ is determined by the cosmological model. In the Standard Model, it can be shown that the observational version of the Hubble Law looks like this:$$H_0D = c\int_0^z\frac{\text{d}z'}{\sqrt{\Omega_{R,0}(1+z')^4 + \Omega_{M,0}(1+z')^3 + \Omega_{K,0}(1+z')^2 + \Omega_{\Lambda,0}}},$$where $\Omega_{R,0}$, $\Omega_{M,0}$ and $\Omega_{\Lambda,0}$ are the fraction of radiation, matter and dark energy in the present-day universe, and $\Omega_{K,0} = 1 - \Omega_{R,0} - \Omega_{M,0} -\Omega_{\Lambda,0}$ describes the curvature of space. Early observations and inflation models suggested that the curvature of space is close to zero, which would mean that $\Omega_{M,0}\approx 1$ if there's no dark energy (the contribution of radiation is negligible). 
On the other hand, dynamical studies of galaxy clusters indicated that $\Omega_{M,0}\approx 0.3$. Furthermore, models without dark energy led to a 'cosmic age' paradox: the calculated age of the universe in these models was less than the age of the oldest observed stars (see Krauss 1995 for a review). These issues were resolved in 1998 when two teams applied Hubble's Law to a sample of supernovae, comparing their distance and redshift, which offered clear evidence for dark energy, with $\Omega_{M,0}\approx 0.3$ and $\Omega_{\Lambda,0}\approx 0.7$, and a Hubble constant $H_0\approx 65\;\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$. These values have been further refined by CMB observations. The effect of dark energy can be seen in this figure: Here, I've set $\Omega_{R,0}=0$ and $H_0 = 67.3\;\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$ (the most recently obtained value). The red curve is a model with dark energy. As you can see, for a given distance the corresponding redshift is much lower than in models without dark energy, i.e. without acceleration. Extra info It's interesting to examine these models in more detail. Once the values of the cosmological parameters are fixed, the evolution of the universe can be calculated. In particular, the cosmic time can be written as a function of the scale factor:$$t(a) = \frac{1}{H_0}\int_0^a\frac{a'\text{d}a'}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a' + \Omega_{K,0}\,a'^2 + \Omega_{\Lambda,0}\,a'^4}},$$which can be inverted to yield $a(t)$, and thus also $\dot{a}(t)$ (see also this post and this post). The age of the universe is $t_0=t(1)$, and we find that $t_0 =$ 14.0, 11.8, and 9.7 billion years for $(\Omega_{M,0},\Omega_{\Lambda,0})= (0.3,\,0.7), (0.3,\,0.0), (1.0,\,0.0)$ respectively. In other words, dark energy increases the age of the universe (which also solves the age paradox previously mentioned). This is a crucial point, as I will show below.
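The age integral $t_0 = t(1)$ is easy to evaluate numerically. Here's a sketch (my own code, not from the original answer) that applies Simpson's rule after the substitution $a = u^2$, which makes the integrand smooth at $a=0$; it assumes $\Omega_{R,0}=0$ and $H_0 = 67.3\;\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$, the value implied by the quoted 14.0 Gyr age:

```python
import math

def age_gyr(om_m, om_l, h0=67.3, steps=10000):
    """Age of the universe t0 = t(a=1) in Gyr, assuming Omega_R = 0.

    Evaluates t(1) = (1/H0) Int_0^1 a da / sqrt(Om_M a + Om_K a^2 + Om_L a^4)
    with the substitution a = u^2, so the integrand is smooth at u = 0.
    """
    om_k = 1.0 - om_m - om_l

    def f(u):  # transformed integrand: 2 u^2 / sqrt(Om_M + Om_K u^2 + Om_L u^6)
        return 2.0 * u * u / math.sqrt(om_m + om_k * u * u + om_l * u ** 6)

    # composite Simpson's rule on [0, 1]
    n = steps if steps % 2 == 0 else steps + 1
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3.0

    hubble_time_gyr = 977.8 / h0  # 1/H0 in Gyr, for H0 in km/s/Mpc
    return hubble_time_gyr * integral

for om_m, om_l in [(0.3, 0.7), (0.3, 0.0), (1.0, 0.0)]:
    print((om_m, om_l), round(age_gyr(om_m, om_l), 1))
```

Running this reproduces the three quoted ages (about 14.0, 11.8, and 9.7 Gyr), including the sanity check that the flat matter-only model gives $t_0 = \tfrac{2}{3}H_0^{-1}$.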
Furthermore, there's a simple relation between the redshift of light and the scale factor: if a photon is emitted at a time $t_\text{em}$, then its redshift will accumulate as$$z(t) = \frac{a(t)}{a(t_\text{em})} - 1,$$so that its present-day observed redshift is $z = 1/a(t_\text{em})-1$ (see wikipedia for a derivation). In other words, the observed redshift of a photon tells us when it was emitted. Let's apply this to a particular galaxy. Suppose we have a galaxy at a present-day distance $D = 10$ billion lightyears. We then have the following situation: The first graph shows the proper distance of the galaxy and its light $D(t)=a(t)D$ as a function of lookback time $t_0-t$. In all three cases, $D(t)=0$ corresponds with the 'big bangs' of these models. The change from dotted lines to solid lines indicate the moment $t_\text{em}$ at which the galaxies emitted the photons that we observe today; the dashed lines are the paths of those photons. In all three models, the photons were emitted about 7 billion years ago. But the corresponding scale factors $a(t_\text{em})$ are very different: $a(t_\text{em})=0.54,\,0.48,\,0.43$ for the red, blue, green models respectively. This is a direct consequence of the different age of the universe in the three cases. This immediately explains the redshifts shown in the graph below: the present-day redshift of the light is $z=0.86,\,1.1,\,1.3$ in the respective models, i.e. the observed redshift is much lower in the dark energy model. Although it's not very clear, the red curve of $D(t)$ has an inflection point about 6 billion years ago, corresponding with the moment when $\ddot{a}=0$ and the expansion of the 'red' universe began accelerating. This is much clearer in the top right graph, showing the recession velocity $v(t)=\dot{a}(t)D$. In all three cases, $v(t)$ was much higher in the past, which means that the expansion has been decelerating. 
But in the dark energy case, $v(t)$ reached a minimum value around 6 billion years ago, and began to increase again. This is the effect of recent acceleration due to dark energy. However, note that $v(t)$ is much lower in the dark energy universe. Again, this is a consequence of the age of the universe in the models: it took 14 billion years for $a(t)$ to increase from 0 to 1 in the red model, while it took only 9.7 billion years in the green model. As a consequence, $\dot{a}$ is much lower in the former case. Finally, the last graph shows the Hubble parameter $H(t)=\dot{a}/a$, showing that even in the accelerating universe $H(t)$ decreases. To summarise, the influence of dark energy determines the redshift, proper distance and the recession velocity over time, but it's not really its effect on the accelerating expansion that's important, but rather its effect on the age of the universe. As a final note, the proper distance of a galaxy isn't directly observed. It can be derived from its so-called luminosity distance (comparing the apparent brightness and intrinsic luminosities of supernovae). So we should actually compare the evolution of a galaxy with a fixed present-day luminosity distance in different models rather than a fixed proper distance, but this doesn't change the argument.
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
I'm looking for something synonymous to tabbing, but for math mode. I read that alignat* should do the trick, but this behaves differently it seems. For example:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{alignat*}{2}
S_4^{t} &= \{&\langle rdi, r12, rax' \rangle \mid \langle rdi, r12, rax \rangle \in S_4~\wedge \\
&& rax' = \{ rax \mid rax > 0 \} \}\\
\end{alignat*}
\end{document}

The second line appears to be right aligned, whereas I would like the & markers to be aligned with each other and then all text left aligned (like in tabbing). Is anyone able to fix this code, or suggest an alternative?
Determine the Quotient Ring $\Z[\sqrt{10}]/(2, \sqrt{10})$ Problem 487 Let\[P=(2, \sqrt{10})=\{a+b\sqrt{10} \mid a, b \in \Z, 2|a\}\]be an ideal of the ring\[\Z[\sqrt{10}]=\{a+b\sqrt{10} \mid a, b \in \Z\}.\]Then determine the quotient ring $\Z[\sqrt{10}]/P$. Is $P$ a prime ideal? Is $P$ a maximal ideal? We prove that the ring $\Z[\sqrt{10}]/P$ is isomorphic to the ring $\Zmod{2}$. We define the map $\Psi:\Z[\sqrt{10}] \to \Zmod{2}$ by sending $a+b\sqrt{10}$ to $\bar{a}=a \pmod 2 \in \Zmod{2}$. The map $\Psi$ is a ring homomorphism. To see this, let $a+b\sqrt{10}, c+d\sqrt{10} \in \Z[\sqrt{10}]$. We have\begin{align*}\Psi\left( (a+b\sqrt{10})(c+d\sqrt{10}) \right) &=\Psi\left(ac+10bd+(ad+bc)\sqrt{10}\right)\\&=ac+10bd \pmod{2}=ac \pmod{2}\\&=\Psi(a+b\sqrt{10}) \Psi(c+d\sqrt{10}).\end{align*} We also have\begin{align*}\Psi\left( (a+b\sqrt{10})+(c+d\sqrt{10}) \right) &=\Psi\left( a+c+(b+d)\sqrt{10} \right) \\&=a+c \pmod{2}\\&=\Psi(a+b\sqrt{10})+\Psi(c+d\sqrt{10}).\end{align*} Therefore the map $\Psi$ is a ring homomorphism. Since $\Psi(0)=\bar{0}$ and $\Psi(1)=\bar{1}$, the map $\Psi:\Z[\sqrt{10}] \to \Zmod{2}$ is surjective. We have $\Psi(a+b\sqrt{10})=\bar{0}$ if and only if $a$ is even. Thus, the kernel of the homomorphism $\Psi$ is\[\ker(\Psi)=\{a+b\sqrt{10} \mid a, b \in \Z, 2|a\}=P.\] In summary, the map $\Psi:\Z[\sqrt{10}] \to \Zmod{2}$ is a surjective ring homomorphism with kernel $P$. Hence by the first isomorphism theorem, we have\[\Z[\sqrt{10}] /P \cong \Zmod{2}\]as we claimed. Since $\Zmod{2}$ is a field, the ideal $P$ is a maximal ideal, and in particular $P$ is a prime ideal. Prove the Ring Isomorphism $R[x,y]/(x) \cong R[y]$ Let $R$ be a commutative ring.
Consider the polynomial ring $R[x,y]$ in two variables $x, y$.Let $(x)$ be the principal ideal of $R[x,y]$ generated by $x$.Prove that $R[x, y]/(x)$ is isomorphic to $R[y]$ as a ring.Proof.Define the map $\psi: R[x,y] \to […] Every Prime Ideal of a Finite Commutative Ring is MaximalLet $R$ be a finite commutative ring with identity $1$. Prove that every prime ideal of $R$ is a maximal ideal of $R$.Proof.We give two proofs. The first proof uses a result of a previous problem. The second proof is self-contained.Proof 1.Let $I$ be a prime ideal […] Three Equivalent Conditions for a Ring to be a FieldLet $R$ be a ring with $1$. Prove that the following three statements are equivalent.The ring $R$ is a field.The only ideals of $R$ are $(0)$ and $R$.Let $S$ be any ring with $1$. Then any ring homomorphism $f:R \to S$ is injective.Proof. […]
I'm doing the following exercise: Consider $$ f(x)= \begin{cases} \displaystyle\frac{1-(\cos(x))^3}{x^2}, \quad \text{if } x\neq 0\\ \displaystyle\frac{3}{2}, \quad \text{if } x= 0\\ \end{cases} $$ Calculate $f(x_0)$, with $x_0=0.000011$, with a C program (single and double precision) and with a pocket calculator. Identify the cause of the error. Justify the differences and the magnitude of the errors. I know that there's cancellation in the numerator and there are rounding errors in every operation ($\cos(x)$, $(\cos(x))^3$, $1-(\cos(x))^3$, and dividing that quantity by $x^2$). The calculator gives me $0$, and the correct answer is very near to $3/2$. This is because a pocket calculator can store only 9 significant digits, and $1-(\cos(x_0))^3$ is $0$ for a calculator. But how can I measure the magnitude of the errors when there are no rounding errors on the input? The quantity given by $x_0$ is exact and the computer writes this quantity correctly in single and double precision, so there are only rounding errors inside every operation. Can I measure the magnitude by using the propagation of errors formula? This formula says: $$|\Delta f(x)|=|f'(x)|\cdot|\Delta x|.$$ Thanks!
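To see the cancellation concretely, here's a short Python sketch (my own, standing in for the C program) that evaluates the formula naively in double precision, simulates single precision by rounding every intermediate result to a 32-bit float, and compares against an algebraically stable rewrite using $1-\cos^3 x = (1-\cos x)(1+\cos x+\cos^2 x)$ together with $1-\cos x = 2\sin^2(x/2)$:

```python
import math
import struct

def to_f32(v):
    # round a double to the nearest IEEE-754 single-precision value
    return struct.unpack('f', struct.pack('f', v))[0]

def f_naive(x):
    return (1.0 - math.cos(x) ** 3) / x ** 2

def f_naive_single(x):
    # every intermediate rounded to float32, as a C program using 'float' would
    x = to_f32(x)
    c = to_f32(math.cos(x))
    c3 = to_f32(to_f32(c * c) * c)
    num = to_f32(1.0 - c3)           # cos(x) rounds to 1.0f, so this is 0.0
    return to_f32(num / to_f32(x * x))

def f_stable(x):
    s = math.sin(x / 2.0)
    c = math.cos(x)
    return 2.0 * s * s * (1.0 + c + c * c) / x ** 2

x0 = 0.000011
print(f_naive_single(x0))  # 0.0 -- the calculator's answer
print(f_naive(x0))         # close to 1.5, but only a handful of digits are trustworthy
print(f_stable(x0))        # 1.5 to nearly full double precision
```

The single-precision run returns exactly 0 because $\cos(x_0)\approx 1 - 6\times 10^{-11}$ rounds to 1.0f (float32 resolution near 1 is about $6\times 10^{-8}$), which is the same failure mode as the 9-digit calculator.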
I face some trouble solving Maxwell's equations inside a cylinder with perfect conductor boundaries (in 3D). We work with cylindrical coordinates $(r, \phi, z)$ and we make the assumption that the fields have a sinusoidal "$e^{i\omega t}$" time dependence. Note that we have a $\phi$ symmetry. First, and in any coordinate system, by taking the curl of one equation and injecting it into the other, we reduce Maxwell's equations to the following, $$ \nabla\times\nabla\times E = -\partial_t^2 E = \omega^2 E $$ In vacuum, the curl-curl identity leads to, $$ \nabla\times\nabla\times E = \nabla(\nabla \cdot E) - \nabla^2 E = - \nabla^2 E $$ where $\nabla^2 E$ is the Laplacian operator applied to each coordinate. Now, in cylindrical coordinates, we can only do this for the $z$-coordinate, since in that case we get the (Helmholtz) wave equation, $$ \nabla^2 E_z = -\omega^2 E_z $$ For the other coordinates, the change of coordinates introduces extra terms, such as (for the $\phi$-coordinate) $\frac{E_r}{r^2} - \frac{2}{r^2}\frac{\partial E_\phi}{\partial\phi}$. Then, a tedious step consists in performing a separation of variables, which leads us quite easily to the solution for each separated variable and also to the Bessel differential equation, whose solution is the Bessel function. Together with the boundary conditions we can get the solution according to $z$, but what about the other coordinates?
Under the assumptions that the probability of getting a girl or a boy is equal (50%), and that a male can only put his thang on his partner female (no multiple wives), neither ruler will get significantly more boys than the other (in a long period of time). A Priori Proof: Barbarian King In regex: (G*)B. (The probability tree, omitted here, goes from left to right; Z = initial state; G = girl; B = boy.) The probability of getting 1 boy and 0 girls is 1/2. The probability of getting 1 boy and 1 girl is 1/4. The probability of getting 1 boy and 2 girls is 1/8. The probability of getting 1 boy and N girls is $1/2^{N+1}$. To find the average difference between boys and girls under the Barbarian King's rule, we need the following: $$\lim_{N\to \infty} \frac{\sum_{0}^{N}(\mbox{BoyCount}-\mbox{GirlCount})\cdot\mbox{probability}}{N}$$ $$\lim_{N\to \infty} \frac{\sum_{0}^{N}\frac{1-N}{2^{N+1}}}{N} = 0$$ Check this wolfram solution to this equation. (Sorry Barbarian King) This means that in the long run (approaching $\infty$), there is no ($0$) significant difference between the number of boys and girls under the Barbarian King's rule. Councillor In regex: (B*)G. (Again, the probability tree goes from left to right; Z = initial state; G = girl; B = boy.) The probability of getting 0 boys and 1 girl is 1/2. The probability of getting 1 boy and 1 girl is 1/4. The probability of getting 2 boys and 1 girl is 1/8. The probability of getting N boys and 1 girl is $1/2^{N+1}$. To find the average difference between boys and girls under the Councillor's rule, we need the following: $$\lim_{N\to \infty} \frac{\sum_{0}^{N}(\mbox{BoyCount}-\mbox{GirlCount})\cdot\mbox{probability}}{N}$$ $$\lim_{N\to \infty} \frac{\sum_{0}^{N}\frac{N-1}{2^{N+1}}}{N} = 0$$ Check this wolfram solution to this equation. (Sorry Councillor) This means that in the long run (approaching $\infty$), there is no ($0$) significant difference between the number of boys and girls under the Councillor's rule.
A Posteriori Proof : Open your JavaScript console (F12->Console or Ctrl+Shift+J) and copy and paste the below code.

function makeBabies(couples, simulations) {
  var kingAdvantage = 0;
  var councillorAdvantage = 0;
  var kingBabies = {boys: 0, girls: 0};
  var councillorBabies = {boys: 0, girls: 0};
  for (var j = 0; j < simulations; j++) {
    for (var i = 0; i < couples; i++) {
      // Barbarian King: keep having children until the first boy
      while (Math.random() < 0.5) {
        kingBabies.girls++;
      }
      kingBabies.boys++;
      // Councillor: keep having children until the first girl
      while (Math.random() < 0.5) {
        councillorBabies.boys++;
      }
      councillorBabies.girls++;
    }
    kingAdvantage += (kingBabies.boys - kingBabies.girls);
    councillorAdvantage += (councillorBabies.boys - councillorBabies.girls);
    // reset simulation
    kingBabies = {boys: 0, girls: 0};
    councillorBabies = {boys: 0, girls: 0};
  }
  // average
  kingAdvantage /= simulations;
  councillorAdvantage /= simulations;
  console.log("Barbarian King advantage : " + kingAdvantage);
  console.log("Councillor advantage : " + councillorAdvantage);
}

Type makeBabies(<couples>,<simulations>) and replace <couples> with the number of couples that you want in your simulation, and <simulations> with the number of times you want to run the same test. Results : makeBabies(100,10000) = 0.0488, -0.0228 makeBabies(1000,10000) = 0.3019, -0.1117 makeBabies(10000,10000) = -2.47, -0.4471 10000 simulations each time, with increasing number of couples (100, 1000, 10000). As you can see, the average advantage of boys over girls in both methods is very insignificant. (Most of the time < 1) You can try out the code and run more simulations if you want.
This week¶ This week's paper, Large-scale traffic signal control using machine learning: some traffic flow considerations, caught my eye for several reasons. First, traffic signal control is relevant to my own group's work involving microservice and network traffic management. Second, the authors use cellular automaton rule 184 as their traffic model, which is actually the first time I've seen a cellular automaton used for something serious since A New Kind of Science, despite that book's claim about the likely broad usefulness of simple programs for complex purposes. Lastly, the authors find that supervised learning and random search outperform deep reinforcement learning for high occupancies of the traffic flow network, For occupancies > 75% during training, DRL policies perform very poorly for all traffic conditions, which means that DRL methods cannot learn under highly congested conditions. and they recommend that practitioners throw away congested data! Our findings imply that it is advisable for current DRL methods in the literature to discard any congested data when training, and that doing this will improve their performance under all traffic conditions. I also have to admit that I've thought to myself, waiting at empty intersections for a light to turn green, that I could just solve this problem with DRL. If I'm wrong, that would be very interesting and surprising. Considerations in a nutshell¶ The introduction and background are well summarized in their last paragraph: In summary, most recent studies focus on developing effective and robust multi-agent DRL algorithms to achieve coordination among intersections. The number of intersections in those studies are usually limited, thus their results might not apply to large open network. Although the signal control is indeed a continuing problem, it has been always modeled as an episodic process.
From the perspective of traffic considerations, expert knowledge has only been incorporated in down-scaling the size of the control problem or designing novel reward functions for DRL algorithm. Few studies have tested their methods given different traffic demands, or shed lights on the learning performance under different traffic conditions, especially the congestion regimes. To fill the gap, our study will treat the large-scale traffic control as a continuing problem and extend classical RL algorithm to fit it. More importantly, noticing the lack of traffic considerations on learning performance, we will train DRL policies under different density levels and explore the results from a traffic flow perspective. Set up¶ Traffic¶ This is elementary cellular automaton (CA) rule 184. Elementary cellular automata operate on a binary vector, producing a new binary vector in each step that's a function of the previous one. For each entry in the previous vector, the new value of the corresponding entry in the resulting vector depends on the previous entry and its neighbors to the left and right. There are 256 possible rules with this formulation, and this picture is of the 184th rule set when ordered in the natural way. Rule 184 can be thought of as a flow of cars along a lane of traffic. Cars move forward (right) by one cell each step only if there is an open space in front of them, otherwise they wait for one to open up. 
Here's an example:

```python
def rule_184(lane):
    l = [False] + lane + [False]  # pad
    return [(l[i-1] and not l[i]) or (l[i] and l[i+1]) for i in range(1, len(l)-1)]

def show(t, lane):
    print(f't{t}:\t', ' '.join(['🚘' if i else '_' for i in lane]))

ti = [True, True, True, True, True, False, False, True, False, False, False, False, False, False, False]
for i in range(7):
    show(i, ti)
    ti = rule_184(ti)
```

t0: 🚘 🚘 🚘 🚘 🚘 _ _ 🚘 _ _ _ _ _ _ _
t1: 🚘 🚘 🚘 🚘 _ 🚘 _ _ 🚘 _ _ _ _ _ _
t2: 🚘 🚘 🚘 _ 🚘 _ 🚘 _ _ 🚘 _ _ _ _ _
t3: 🚘 🚘 _ 🚘 _ 🚘 _ 🚘 _ _ 🚘 _ _ _ _
t4: 🚘 _ 🚘 _ 🚘 _ 🚘 _ 🚘 _ _ 🚘 _ _ _
t5: _ 🚘 _ 🚘 _ 🚘 _ 🚘 _ 🚘 _ _ 🚘 _ _
t6: _ _ 🚘 _ 🚘 _ 🚘 _ 🚘 _ 🚘 _ _ 🚘 _

The cellular automaton simulates a lane of traffic, and the authors wire two of these lanes up between each adjacent traffic light to create a grid network. The network is laid out on a torus, so there are no boundaries. The signalized network corresponds to a homogeneous grid network of bidirectional streets, with one lane per direction of length $n = 5$ cells between neighboring traffic lights. The connecting links to form the torus are shown as dashed directed links; we have omitted the cells on these links to avoid clutter. Each segment has n = 5 cells; an additional cell has been added downstream of each segment to indicate the traffic light color. Cars arriving at a green traffic light choose a random "direction" in which to continue. Green lights are on for a minimum of three steps. Learning¶ Each traffic signal is managed by an agent, which has two actions it can take at any time step: turn the light red/green for the North-South approaches, or the opposite. The state observable by each agent is an $8\times n$ matrix of bits corresponding to the four incoming and four outgoing CA vectors, and the output is the probability of turning the light red for the North-South approaches. Only one neural net is actually trained, and used by all agents, since there's no reason for them to be different in this formulation.
For the DRL agent, the reward is the incremental average flow per lane (not the average flow per lane), which the authors mention is lower-variance. The authors use a custom infinite-horizon variant of REINFORCE they call REINFORCE-TD. Experiments¶ The authors use a longest-queue-first (LQF) greedy algorithm as their baseline for comparison, which services the lane with the longest queue length at all times. Random policies¶ They begin by randomly reinitializing the parameters of the neural network, and discover that ~15% of random policies are competitive (that is, they can outperform LQF for some traffic densities). They also note a previously undiscovered pattern that "all policies, no matter how bad, are best when the density exceeds approximately 75%." How odd. Supervised learning policies¶ They then train a policy with supervised learning, and surprisingly, with only the two obvious extreme examples, the resulting policy is near-optimal. DRL policies¶ Policies trained with constant demand and random initial parameters $\theta$. The label in each diagram gives the iteration number and the constant density value. First column: NS red probabilities of the extreme states, $\pi(s1)$ in dashed line and $\pi(s2)$ in solid line. The remaining columns show the flow-density diagrams obtained at different iterations, and the last column shows the iteration producing the highest flow at $k = 0.5$, if not reported on an earlier column. Finally, they run two experiments with DRL policies, as described above. These policies seem to do rather poorly in general compared to random search and supervised learning, and as density increases, they stop learning much of anything. We conjecture that this result is a consequence of a property of congested urban networks and has nothing to do with the algorithm to train the DRL policy. I'm skeptical. See my parting thoughts.
The other experiments the authors perform just confirm that average flow per lane does worse than incremental average flow per lane. Parting thoughts¶ In the end, I'm way more interested in the experimental setup of this paper than the conclusions. As usual, I learned a ton, and I may actually use rule 184 as a model for traffic flow on something. Isn't it obvious, given their problem formulation, that the agents can't learn under conditions of congestion, since it means their input is essentially whited out? I would be more impressed with the conclusion if a neural net with complete visibility had trouble learning with congestion. It also seems to me extremely suggestive that a supervised policy can learn from only two examples, and I would very much like to see if the major conclusions of this paper explode with a more realistic network topology. Queueing theory contains all sorts of counterintuitive surprises, and it seems likely to me that their results are more indicative of one of those surprises, rather than some deep fact about DRL's ability to manage urban congestion. It's interesting that they formulate the problem as a continuing one, against the prevailing trend in the traffic signal control literature. I agree with them: even if you get to a state where there's no traffic, that's a function of the demand, not of the agent's choices. I bring this up because I too have found that it's really quite important to recognize an infinite-horizon problem when you have one, or else your agent learns to rack up debts until the end of the artificial episode when all is "forgiven". It's fascinating that all random policies, no matter how bad, are best around 75% congestion. I have been admonished to avoid scheduling myself at more than 70% capacity to avoid the ringing effect. I wonder if this is an empirical vindication of that...
I have a gaussian noise $\nu(t)$ with variance $\sigma^2$. After an FFT I get $X(\omega)$. If I now do the IFFT on $X(\omega)$, can I say that the result is still a gaussian noise of variance $\sigma^2$? What is the effect of zero padding on the FFT? How does the zero padding affect the statistics of the noise after FFT and IFFT? Thanks. Think about both questions separately. First of all, the (I)FFT is just an implementation of the (I)DFT, so I'm going to generalize all this to the DFT. Does the zero-padded IDFT retain variance? Parseval's theorem says power out = constant factor · power in, and the power of the zero-padded sequence is the energy of that sequence divided by its length – and that length is larger than the original length, whereas the energy stayed the same. Does the zero-padded IDFT retain gaussianness? Long story short: yes. This follows from the fact that the DFT is effectively a large sum of sufficiently identically distributed random variables. What you need is independence (which would imply whiteness), but as mentioned below, most people mean "WGN" when they say "GN". Other effects? When people say "Gaussian noise" they often mean "white Gaussian noise", but white noise has a constant PSD, and you explicitly made it so that your noise realization's Fourier transform is anything but constant (it comes in a boxcar shape), so you obviously lose the whiteness.
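Both points are easy to check numerically. Here's a small sketch (mine, using NumPy) showing that the plain FFT→IFFT round trip returns the noise unchanged, while zero-padding (done here in the time domain; the same energy argument applies to padding $X(\omega)$) spreads the same energy over more samples and thus lowers the mean power:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
x = rng.standard_normal(N)            # white Gaussian noise, variance ~1

# Plain round trip: IFFT(FFT(x)) returns x up to rounding, so variance is unchanged
y = np.fft.ifft(np.fft.fft(x)).real

# Zero-padding keeps the energy but doubles the length, so the mean power
# (energy / length) is halved, consistent with Parseval's theorem
xp = np.concatenate([x, np.zeros(N)])
power = np.mean(np.abs(np.fft.ifft(np.fft.fft(xp))) ** 2)
print(np.mean(x ** 2), power)         # the second value is half the first
```

A histogram of the padded-and-transformed samples would still look Gaussian, which is the "retains gaussianness" half of the answer; only the variance bookkeeping changes.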
I think it can be solved with a simple algorithm: Take two $x_i$, $x_j$, if they exist, such that $x_i-x_j>1$ Replace them, respectively, with $x_i-1$ and $x_j+1$ Repeat from point 1 We'll show that the algorithm is finite and that it makes $S=\sum_{k=1}^n x_k^{\alpha}$ smaller at each run. Let's demonstrate the latter: after the substitution, we still have $x_1+x_2+\cdots+x_n=m$ and, since $x^\alpha$ is a strictly convex function and $(x_i,x_j)$ majorizes $(x_i-1,x_j+1)$, we can apply Karamata's inequality to obtain$$(x_i-1)^{\alpha}+(x_j+1)^{\alpha} < x_i^\alpha+x_j^\alpha$$Hence,$$S'=\sum_{k=1,k\neq i,j}^n x_k^{\alpha}+(x_i-1)^{\alpha}+(x_j+1)^{\alpha} < \sum_{k=1,k\neq i,j}^n x_k^{\alpha}+x_i^\alpha+x_j^\alpha=S$$For the first statement, consider the quantity $I=\sum_{k=1}^n x_k^2$: applying what we found before with $\alpha=2$, we see that $I$ gets strictly smaller at each run; it is an integer and it is bounded below, given the initial conditions on the $x_k$, so the algorithm must terminate at some point. Last but not least, we need to show that, independently of the choice of $x_i$ and $x_j$, the algorithm gives us a fixed $n$-tuple, without regard to the order of the $x_k$s. The algorithm can terminate if and only if, for a certain integer $t$, all the $x_k$s are either equal to $t$ or to $t+1$. Let's say there are $h$ of them (with $h \leq n$) equal to $t+1$ and $n-h$ equal to $t$: we have that$$m=h(t+1)+(n-h)t=nt+h$$That is the Euclidean division of $m$ by $n$, which is unique. Hence, if we denote its quotient by $q$ and its remainder by $r$, the only possible minimal $n$-tuple is the one composed of $r$ numbers equal to $q+1$ and the others equal to $q$. The final solution is thus what @kamran was guessing.
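The procedure runs directly as written. A small sketch (the pairing strategy — always take a current maximum and minimum — is my choice for concreteness; by the uniqueness argument any pair with gap greater than 1 reaches the same multiset):

```python
def balance(xs):
    """Apply (x_i, x_j) -> (x_i - 1, x_j + 1) while some gap exceeds 1."""
    xs = list(xs)
    while max(xs) - min(xs) > 1:
        i = xs.index(max(xs))   # a largest element
        j = xs.index(min(xs))   # a smallest element
        xs[i] -= 1
        xs[j] += 1
    return sorted(xs)

# m = 17, n = 5: 17 = 5*3 + 2, so the minimizer is three 3's and two 4's.
result = balance([13, 1, 1, 1, 1])
```

The loop terminates by the decreasing-$I$ argument, and the final multiset is forced by the Euclidean division $m = nq + r$.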
Kp and Kc. \(K_p\) and \(K_c\) are equilibrium constants of an ideal gaseous mixture. \(K_p\) is the equilibrium constant used when equilibrium amounts are expressed as partial pressures (in atmospheres), and \(K_c\) is the equilibrium constant used when equilibrium concentrations are expressed in molarity. For a general chemical reaction aA + bB ⇋ cC + dD where a moles of reactant A and b moles of reactant B give c moles of product C and d moles of product D. Consider the example 2A (g) + B (g) ⇋ 2C (g) All in the gas phase. \(K_p\) is given by \(K_{p}=\frac{P_{C}^{2}}{P_{A}^{2}\,P_{B}}\) … (1) Ideal Gas Equation Each of these ideal gas molecules behaves similarly. So for each of them, PV = nRT. On rearranging we get \(P=\frac{n}{V}RT\), i.e. each partial pressure equals the molar concentration times RT. Substituting these in equation (1)\(\Rightarrow K_{p}=\frac{\left [ C \right ]^{2}\left ( RT \right )^{2}}{\left [ A \right ]^{2}\left ( RT \right )^{2}\left [ B \right ]\left ( RT \right )}\) \(\Rightarrow K_{p}=\frac{\left [ C \right ]^{2}}{\left [ A \right ]^{2}\left [ B \right ]}\times \frac{\left ( RT \right )^{2}}{\left ( RT \right )^{2}\left ( RT \right )}\) On canceling like terms and substituting \(K_{c}=\frac{\left [ C \right ]^{2}}{\left [ A \right ]^{2}\left [ B \right ]}\) we get \(\Rightarrow K_{p}=\frac{K_{c}}{RT}\) or \(K_{p}=K_{c}\left ( RT \right )^{-1}\) In general, \(K_{p}=K_{c}\left ( RT \right )^{\Delta n}\) where Δn represents the change in the number of moles of gas molecules. [That is, Δn = moles of gaseous products – moles of gaseous reactants.] When the change in the number of moles of gas molecules is zero, that is Δn = 0, \(K_p = K_c\).
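A numeric sanity check of the Δn relation for the example reaction above (the values of Kc and T are made up for illustration; R is the gas constant in L·atm·mol⁻¹·K⁻¹):

```python
R = 0.08206   # gas constant, L·atm/(mol·K)
T = 300.0     # temperature in K, arbitrary choice

# 2A(g) + B(g) <=> 2C(g): delta_n = gaseous product moles - reactant moles
delta_n = 2 - (2 + 1)

Kc = 50.0                       # hypothetical concentration-based constant
Kp = Kc * (R * T) ** delta_n    # general relation K_p = K_c (RT)^{delta n}
```

For this reaction Δn = −1, so Kp = Kc/(RT), matching the derivation; with Δn = 0 the two constants would coincide.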
I actually have to prove the following: $\mathbf{NL} \subseteq \mathbf{NC_2}$ I have the following approach: I will prove that $\mathbf{PATH} = \{\langle D, s, t\rangle \mid D \text{ is a directed graph with a path from vertex } s \text{ to } t\} \in \mathbf{NC_2}$. I will show that $\mathbf{NC_2}$ is closed under log-space reductions, i.e.: $$(1): B \in \mathbf{NC_2} \hbox{ and } A \le_l B \Longrightarrow A \in \mathbf{NC_2}$$ where $\le_l$ is the logspace reduction defined as $$A \le_l B :\Longleftrightarrow (\exists M \hbox{ TM}, \forall x)[x \in A \Longleftrightarrow M(x) \in B]$$ ($M$ is a TM which runs in logarithmic space). Since $\mathbf{PATH}$ is an $\mathbf{NL}$-complete problem, the proof will be done. Proving the first part was easy; I am stuck at the second part and have no idea how to proceed. Any help?
fskilnik wrote: GMATH practice exercise (Quant Class 18) Last Monday N female executives (N>1) received M male managers (M>1) for a business meeting. If every person shook hands exactly once with every other person in the meeting, what is the difference between the total number of shaking hands and the number of shaking hands among the female executives only? (1) M < 11 (2) M(M+2N) = 65 \(m,n\,\, \ge \,\,2\,\,\,{\rm{ints}}\,\,\,\,\left( * \right)\) \(? = C\left( {m + n,2} \right) - C\left( {n,2} \right) = {{\left( {m + n} \right)\left( {m + n - 1} \right)} \over 2} - {{n\left( {n - 1} \right)} \over 2}\) \(? = \frac{{m\left( {m + n - 1} \right) + nm + n\left( {n - 1} \right) - n\left( {n - 1} \right)}}{2} = \,\,\frac{{m\left( {m + 2n - 1} \right)}}{2}\,\,\,\,\, \Leftrightarrow \,\,\,\,\boxed{\,? = \frac{{m\left( {m + 2n - 1} \right)}}{2}\,}\) \(\left( 1 \right)\,\,m < 11\,\,\,\left\{ \matrix{ \,{\rm{Take}}\,\,\left( {m,n} \right) = \left( {2,2} \right)\,\,\,\, \Rightarrow \,\,\,{\rm{?}}\,\,{\rm{ = }}\,\,{\rm{5}} \hfill \cr \,{\rm{Take}}\,\,\left( {m,n} \right) = \left( {2,3} \right)\,\,\,\, \Rightarrow \,\,\,{\rm{?}}\,\, \ne \,\,{\rm{5}}\, \hfill \cr} \right.\) \(\left( 2 \right)\,\,m\left( {m + 2n} \right) = 65 = 5 \cdot 13\,\,\,\,\,\mathop \Rightarrow \limits^{\left( * \right)\,\,{\rm{and}}\,\,\left( {**} \right)} \,\,\,\,\left( {m,m + 2n} \right) = \left( {5,13} \right)\,\,\,\,\,\, \Rightarrow \,\,\,\,\,{\rm{SUFF}}.\) \(\left( {**} \right)\,\,\,m > m + 2n\,\,\,\,\, \Rightarrow \,\,\,n < 0\,\,\,\,\,\,{\rm{impossible}}\) The correct answer is (B). We follow the notations and rationale taught in the GMATH method. Regards, Fabio. _________________ Fabio Skilnik :: GMATH method creator (Math for the GMAT) Our high-level "quant" preparation starts here: https://gmath.net
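The closed form boxed in the post, and the unique factorization used in statement (2), are both easy to verify by brute force (a quick sketch):

```python
from math import comb

def handshake_diff(m, n):
    """Total handshakes among m+n people minus those among the n executives."""
    return comb(m + n, 2) - comb(n, 2)

# matches the boxed closed form m(m + 2n - 1)/2 for all small m, n >= 2
closed_form_ok = all(handshake_diff(m, n) == m * (m + 2 * n - 1) // 2
                     for m in range(2, 30) for n in range(2, 30))

# statement (2): m(m + 2n) = 65 with m, n >= 2 forces (m, n) = (5, 4)
solutions = [(m, n) for m in range(2, 66) for n in range(2, 66)
             if m * (m + 2 * n) == 65]
```

The search confirms that statement (2) pins down a single pair, so the difference is determined (it equals 30), while statement (1) leaves it free — consistent with answer (B).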
Here is a short tale about Linear Discriminant Analysis (LDA) as a reply to the question. When we have one variable and $k$ groups (classes) to discriminate by it, this is ANOVA. The discrimination power of the variable is $SS_\text{between groups} / SS_\text{within groups}$, or $B/W$. When we have $p$ variables, this is MANOVA. If the variables are uncorrelated both in the total sample and within groups, then the above discrimination power, $B/W$, is computed analogously and could be written as $trace(\bf{S_b})$$/trace(\bf{S_w})$, where $\bf{S_w}$ is the pooled within-group scatter matrix (i.e. the sum of $k$ $p \times p$ SSCP matrices of the variables, each centered about the respective group's centroid); $\bf{S_b}$ is the between-group scatter matrix $=\bf{S_t}-\bf{S_w}$, where $\bf{S_t}$ is the scatter matrix for the whole data (the SSCP matrix of the variables centered about the grand centroid). (A "scatter matrix" is just a covariance matrix without the division by sample size − 1.) When there is some correlation between the variables - and usually there is - the above $B/W$ is expressed by $\bf{S_w^{-1} S_b}$, which is not a scalar anymore but a matrix. This is simply because there are $p$ discriminative variables hidden behind this "overall" discrimination and partly sharing it. Now, we may want to submerge in MANOVA and decompose $\bf{S_w^{-1} S_b}$ into new and mutually orthogonal latent variables (their number is $min(p,k-1)$) called discriminant functions or discriminants - the 1st being the strongest discriminator, the 2nd being next behind it, etc. Just like we do in Principal component analysis. We replace the original correlated variables by uncorrelated discriminants without loss of discriminative power. Because each next discriminant is weaker and weaker, we may accept a small subset of the first $m$ discriminants without great loss of discriminative power (again, similar to how we use PCA).
This is the essence of LDA as a dimensionality reduction technique (LDA is also a Bayes' classification technique, but that is an entirely separate topic). LDA thus resembles PCA. PCA decomposes "correlatedness", LDA decomposes "separatedness". In LDA, because the above matrix expressing "separatedness" isn't symmetric, a by-pass algebraic trick is used to find its eigenvalues and eigenvectors$^1$. The eigenvalue of each discriminant function (a latent variable) is its discriminative power $B/W$ that I was talking about in the first paragraph. Also, it is worth mentioning that discriminants, albeit uncorrelated, are not geometrically orthogonal as axes drawn in the original variable space. Some potentially related topics that you might want to read: LDA is MANOVA "deepened" into analysing latent structure and is a particular case of Canonical correlation analysis (exact equivalence between them as such). How LDA classifies objects and what Fisher's coefficients are. (I link only to my own answers currently, as I remember them, but there are many good and better answers from other people on this site as well.) $^1$ LDA extraction phase computations are as follows. The eigenvalues ($\bf L$) of $\bf{S_w^{-1} S_b}$ are the same as those of the symmetric matrix $\bf{(U^{-1})' S_b U^{-1}}$, where $\bf U$ is the Cholesky root of $\bf{S_w}$: an upper-triangular matrix whereby $\bf{U'U=S_w}$. As for the eigenvectors of $\bf{S_w^{-1} S_b}$, they are given by $\bf{V=U^{-1} E}$, where $\bf E$ are the eigenvectors of the above matrix $\bf{(U^{-1})' S_b U^{-1}}$. (Note: $\bf U$, being triangular, can be inverted - using a low-level language - faster than with a standard generic "inv" function of packages.) The described workaround eigendecomposition of $\bf{S_w^{-1} S_b}$ is realized in some programs (in SPSS, for example), while in other programs there is a "quasi ZCA-whitening" method which, being just a little slower, gives the same results and is described elsewhere.
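The footnote's Cholesky by-pass can be sketched in a few lines of NumPy. The scatter matrices below are random, made-up stand-ins for $\bf S_w$ and $\bf S_b$, just to exercise the algebra:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
Sw = A.T @ A + 3.0 * np.eye(3)     # stand-in pooled within-group scatter (SPD)
B = rng.normal(size=(2, 3))
Sb = B.T @ B                       # stand-in between-group scatter

# Cholesky root: upper-triangular U with U'U = Sw
U = np.linalg.cholesky(Sw).T
Uinv = np.linalg.inv(U)

# the symmetric matrix (U^{-1})' Sb U^{-1} shares eigenvalues with Sw^{-1} Sb
L, E = np.linalg.eigh(Uinv.T @ Sb @ Uinv)
V = Uinv @ E                       # eigenvectors of Sw^{-1} Sb
```

The point of the trick is that `eigh` on a symmetric matrix is cheaper and numerically better behaved than a general eigensolver on the non-symmetric $\bf S_w^{-1} S_b$.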
To summarize it here: obtain the ZCA-whitening matrix for $\bf{S_w}$ - the symmetric square root $\bf S_w^{-1/2}$ (done through an eigendecomposition); then the eigendecomposition of $\bf S_w^{-1/2} S_b S_w^{-1/2}$ (which is a symmetric matrix) yields the discriminant eigenvalues $\bf L$ and eigenvectors $\bf A$, whereby the discriminant eigenvectors $\bf V= S_w^{-1/2} A$. The "quasi ZCA-whitening" method can be rewritten to work via a singular value decomposition of the casewise dataset instead of working with the $\bf S_w$ and $\bf S_b$ scatter matrices; that adds computational precision (which is important in near-singularity situations), but sacrifices speed. OK, let's turn to the statistics usually computed in LDA. The canonical correlations corresponding to the eigenvalues are $\bf \Gamma = \sqrt{L/(L+1)}$. Whereas the eigenvalue of a discriminant is the $B/W$ of the ANOVA of that discriminant, the canonical correlation squared is the $B/T$ (T = total sum of squares) of that ANOVA. If you normalize (to SS=1) the columns of the eigenvectors $\bf V$, then these values can be seen as the direction cosines of the rotation of axes-variables into axes-discriminants; so with their help one can plot discriminants as axes on the scatterplot defined by the original variables (the eigenvectors, as axes in that variables' space, are not orthogonal). The unstandardized discriminant coefficients or weights are simply the scaled eigenvectors $\bf {C}= \it \sqrt{N-k} ~\bf V$. These are the coefficients of linear prediction of the discriminants by the centered original variables. The values of the discriminant functions themselves (discriminant scores) are $\bf XC$, where $\bf X$ is the centered original variables (the input multivariate data with each column centered). Discriminants are uncorrelated. And when computed by the formula just above, they also have the property that their pooled within-class covariance matrix is the identity matrix.
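The ZCA route can be sketched the same way. With made-up stand-in scatter matrices it reproduces the eigenvalues of $\bf S_w^{-1}S_b$, and the canonical correlations then follow from $\bf \Gamma = \sqrt{L/(L+1)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
Sw = A.T @ A + 3.0 * np.eye(3)     # stand-in within-group scatter (SPD)
B = rng.normal(size=(2, 3))
Sb = B.T @ B                       # stand-in between-group scatter

# symmetric square root Sw^{-1/2} via eigendecomposition of Sw
w, Q = np.linalg.eigh(Sw)
Sw_inv_half = Q @ np.diag(w ** -0.5) @ Q.T

L, Avec = np.linalg.eigh(Sw_inv_half @ Sb @ Sw_inv_half)
V = Sw_inv_half @ Avec             # discriminant eigenvectors

L = np.clip(L, 0.0, None)          # clip tiny negative rounding noise
gamma = np.sqrt(L / (L + 1))       # canonical correlations
```

Since each eigenvalue is a ratio $B/W \ge 0$, the corresponding canonical correlation $\sqrt{L/(L+1)}$ always lands in $[0, 1)$, matching the $B/T$ interpretation.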
Optional constant terms, accompanying the unstandardized coefficients and allowing one to un-center the discriminants if the input variables had nonzero means, are $\bf {C_0} \it = -\sum^p diag(\bar{X}) \bf C$, where $diag(\bar{X})$ is the diagonal matrix of the $p$ variables' means and $\sum^p$ is the sum across the variables. In standardized discriminant coefficients, the contribution of variables to a discriminant is adjusted for the fact that the variables have different variances and might be measured in different units: $\bf {K} \it = \sqrt{diag \bf (S_w)} \bf V$ (where $diag \bf (S_w)$ is the diagonal matrix with the diagonal of $\bf S_w$). Despite being "standardized", these coefficients may occasionally exceed 1 (so don't be confused). If the input variables were z-standardized within each class separately, the standardized coefficients equal the unstandardized ones. The coefficients may be used to interpret the discriminants. Pooled within-group correlations ("structure matrix", sometimes called loadings) between the variables and the discriminants are given by $\bf R= \it diag \bf (S_w)^{-1} \bf S_w V$. Correlations are insensitive to collinearity problems and constitute an alternative guide (besides the coefficients) in assessing variables' contributions and in interpreting the discriminants. See the complete output of the extraction phase of the discriminant analysis of iris data here. Read this nice later answer which explains a bit more formally and in more detail the same things as I did here. This question deals with the issue of standardizing data before doing LDA.
Homework Statement A yo-yo is placed on a conveyor belt accelerating at ##a_C = 1 m/s^2## to the left. The end of the rope of the yo-yo is fixed to a wall on the right. The moment of inertia is ##I = 200 kg \cdot m^2##. Its mass is ##m = 100kg##. The radius of the outer circle is ##R = 2m## and the radius of the inner circle is ##r = 1m##. The coefficient of static friction is ##0.4## and the coefficient of kinetic friction is ##0.3##. Find the initial tension in the rope and the angular acceleration of the yo-yo. Homework Equations ##T - f = ma## ##\tau_P = -fr## ##\tau_G = Tr## ##I_P = I + mr^2## ##I_G = I + mR^2## ##a = \alpha R## First off, I was wondering if the acceleration of the conveyor belt can be considered a force. And I'm not exactly sure how to use Newton's second law if the object the forces act on is itself on an accelerating surface. Also, I don't know whether it rolls with or without slipping. I thought I could use ##a_C = \alpha R## for the angular acceleration, but the acceleration of the conveyor belt is not the only source of acceleration, since the friction and the tension also play a role. I can't find a way to combine these equations to get the answer.
Once we have identified two variables that are correlated, we would like to model this relationship. We want to use one variable as a predictor or explanatory variable to explain the other variable, the response or dependent variable. In order to do this, we need a good relationship between our two variables. The model can then be used to predict changes in our response variable. A strong relationship between the predictor variable and the response variable leads to a good model. Figure 9. Scatterplot with regression model. Definition: simple linear regression A simple linear regression model is a mathematical equation that allows us to predict a response for a given predictor value. Our model will take the form of \(\hat y = b_0+b_1x\) where b0 is the y-intercept, b1 is the slope, x is the predictor variable, and ŷ is an estimate of the mean value of the response variable for any value of the predictor variable. The y-intercept is the predicted value for the response (y) when x = 0. The slope describes the change in y for each one-unit change in x. Let’s look at this example to clarify the interpretation of the slope and intercept. Example \(\PageIndex{1}\): A hydrologist creates a model to predict the volume flow for a stream at a bridge crossing with a predictor variable of daily rainfall in inches. Answer \[\hat y = 1.6 +29 x \nonumber\] The y-intercept of 1.6 can be interpreted this way: On a day with no rainfall, there will be 1.6 gal./min. of water flowing in the stream at that bridge crossing. The slope tells us that if it rained one inch that day, the flow in the stream would increase by an additional 29 gal./min. If it rained 2 inches that day, the flow would increase by an additional 58 gal./min. Example \(\PageIndex{2}\): What would be the average stream flow if it rained 0.45 inches that day?
Answer \[\hat y= 1.6 + 29x = 1.6 + 29(0.45) = 14.65 \text{ gal./min} \nonumber\] The Least-Squares Regression Line (shortcut equations) The equation is given by $$\hat y = b_0+b_1x$$ where \(b_1 = r\left ( \dfrac {s_y}{s_x} \right )\) is the slope and \(b_0=\bar y -b_1\bar x\) is the y-intercept of the regression line. An alternate computational equation for the slope is: $$b_1 = \dfrac {\sum xy - \dfrac {(\sum x)(\sum y)}{n}} {\sum x^2 - \dfrac {(\sum x)^2}{n}} = \dfrac {S_{xy}}{S_{xx}}$$ This simple model is the line of best fit for our sample data. The regression line does not go through every point; instead it balances the difference between all data points and the straight-line model. The difference between the observed data value and the predicted value (the value on the straight line) is the error or residual. The criterion to determine the line that best describes the relation between two variables is based on the residuals. $$Residual = Observed – Predicted$$ For example, if you wanted to predict the chest girth of a black bear given its weight, you could use the following model. Chest girth = 13.2 + 0.43 weight The predicted chest girth of a bear that weighed 120 lb. is 64.8 in. Chest girth = 13.2 + 0.43(120) = 64.8 in. But a measured bear chest girth (observed value) for a bear that weighed 120 lb. was actually 62.1 in. The residual would be 62.1 – 64.8 = -2.7 in. A negative residual indicates that the model is over-predicting. A positive residual indicates that the model is under-predicting. In this instance, the model over-predicted the chest girth of a bear that actually weighed 120 lb. Figure 10. Scatterplot with regression model illustrating a residual value. This random error (residual) takes into account all unpredictable and unknown factors that are not included in the model. An ordinary least squares regression line minimizes the sum of the squared errors between the observed and predicted values to create a best fitting line.
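The two slope formulas above (the correlation form and the computational "shortcut" form) always agree, which is easy to check on made-up data (the rainfall/flow pairs below are invented for illustration, not the textbook's data):

```python
from statistics import mean, stdev
from math import isclose

x = [0.0, 0.2, 0.5, 1.0, 1.3]     # rainfall (in), hypothetical
y = [1.5, 7.0, 16.5, 31.0, 39.0]  # flow (gal/min), hypothetical
n = len(x)

# computational ("shortcut") form
Sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
Sxx = sum(a * a for a in x) - sum(x) ** 2 / n
Syy = sum(b * b for b in y) - sum(y) ** 2 / n
b1 = Sxy / Sxx
b0 = mean(y) - b1 * mean(x)       # y-intercept: b0 = y-bar - b1 * x-bar

# correlation form: b1 = r * (s_y / s_x)
r = Sxy / (Sxx * Syy) ** 0.5
b1_alt = r * stdev(y) / stdev(x)
```

The intercept formula also guarantees the fitted line passes through the point of means \((\bar x, \bar y)\).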
The differences between the observed and predicted values are squared to deal with the positive and negative differences. Coefficient of Determination After we fit our regression line (compute b0 and b1), we usually wish to know how well the model fits our data. To determine this, we need to think back to the idea of analysis of variance. In ANOVA, we partitioned the variation using sums of squares so we could identify a treatment effect as opposed to the random variation that occurred in our data. The idea is the same for regression. We want to partition the total variability into two parts: the variation due to the regression and the variation due to random error. And we are again going to compute sums of squares to help us do this. Suppose the total variability in the sample measurements about the sample mean is denoted by \(\sum (y_i - \bar y)^2\), called the sums of squares of total variability about the mean (SST). The squared difference between the predicted value \(\hat y\) and the sample mean is denoted by \(\sum (\hat {y_i} - \bar y)^2\), called the sums of squares due to regression (SSR). The SSR represents the variability explained by the regression line. Finally, the variability which cannot be explained by the regression line is called the sums of squares due to error (SSE) and is denoted by \(\sum (y_i - \hat {y_i})^2\). SSE is the sum of the squared residuals. SST = SSR + SSE \(\sum (y_i - \bar y)^2\) = \(\sum (\hat {y_i} - \bar y)^2\) + \(\sum (y_i - \hat {y_i})^2\) Figure 11. An illustration of the relationship between the mean of the y’s and the predicted and observed value of a specific y. The sums of squares and mean sums of squares (just like ANOVA) are typically presented in the regression analysis of variance table. The ratio of the mean sums of squares for the regression (MSR) and mean sums of squares for error (MSE) forms an F-test statistic used to test the regression model.
The relationship between these sums of squares is defined as $$Total \ Variation = Explained \ Variation + Unexplained \ Variation$$ The larger the explained variation, the better the model is at prediction. The larger the unexplained variation, the worse the model is at prediction. A quantitative measure of the explanatory power of a model is \(R^2\), the Coefficient of Determination: $$R^2 = \dfrac {Explained \ Variation}{Total \ Variation}$$ The Coefficient of Determination measures the percent variation in the response variable (y) that is explained by the model. Values range from 0 to 1. An \(R^2\) close to zero indicates a model with very little explanatory power. An \(R^2\) close to one indicates a model with more explanatory power. The Coefficient of Determination and the linear correlation coefficient are related mathematically. $$R^2 = r^2$$ However, they have two very different meanings: r is a measure of the strength and direction of a linear relationship between two variables; R2 describes the percent variation in “y” that is explained by the model. Residual and Normal Probability Plots Even though you have determined, using a scatterplot, correlation coefficient and R2, that x is useful in predicting the value of y, the results of a regression analysis are valid only when the data satisfy the necessary regression assumptions. The response variable (y) is a random variable while the predictor variable (x) is assumed non-random or fixed and measured without error. The relationship between y and x must be linear, given by the model \(\hat y = b_0 + b_1x\). The random error terms \(\varepsilon\) are independent, have a mean of 0 and a common variance \(\sigma^2\) (independent of x), and are normally distributed. We can use residual plots to check for a constant variance, as well as to make sure that the linear model is in fact adequate.
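The partition SST = SSR + SSE and the identity R² = r² both fall out of any least-squares fit; here is a small check on invented data:

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # made-up, roughly linear
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# least-squares slope and intercept
b1 = (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
      / sum((a - xbar) ** 2 for a in xs))
b0 = ybar - b1 * xbar
pred = [b0 + b1 * a for a in xs]

SST = sum((b - ybar) ** 2 for b in ys)           # total
SSR = sum((p - ybar) ** 2 for p in pred)         # explained by regression
SSE = sum((b - p) ** 2 for b, p in zip(ys, pred))  # residual
R2 = SSR / SST

# linear correlation coefficient
r = (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
     / (sum((a - xbar) ** 2 for a in xs) * sum((b - ybar) ** 2 for b in ys)) ** 0.5)
```

Both identities hold exactly (up to floating-point rounding) for the least-squares line; they fail for any other line through the data, which is one way to see why the least-squares criterion is special.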
A residual plot is a scatterplot of the residuals (observed – predicted values) versus the predicted (fitted) values. The center horizontal axis is set at zero. One property of the residuals is that they sum to zero and have a mean of zero. A residual plot should be free of any patterns, and the residuals should appear as a random scatter of points about zero. A residual plot with no appearance of any patterns indicates that the model assumptions are satisfied for these data. Figure 12. A residual plot. A residual plot that has a “fan shape” indicates a heterogeneous variance (non-constant variance). The residuals tend to fan out or fan in as the error variance increases or decreases. Figure 13. A residual plot that indicates a non-constant variance. A residual plot that tends to “swoop” indicates that a linear model may not be appropriate. The model may need higher-order terms of x, or a non-linear model may be needed to better describe the relationship between y and x. Transformations on x or y may also be considered. Figure 14. A residual plot that indicates the need for a higher-order model. A normal probability plot allows us to check that the errors are normally distributed. It plots the residuals against the expected values of the residuals as if they had come from a normal distribution. Recall that when the residuals are normally distributed, they will follow a straight-line pattern, sloping upward. This plot is not unusual and does not indicate any non-normality with the residuals. Figure 15. A normal probability plot. This next plot clearly illustrates a non-normal distribution of the residuals. Figure 16. A normal probability plot, which illustrates non-normal distribution. The most serious violations of normality usually appear in the tails of the distribution because this is where the normal distribution differs most from other types of distributions with a similar mean and spread.
Curvature in either or both ends of a normal probability plot is indicative of non-normality.
Below I expand a little bit on the point in Peter's answer by trying to carry out the quantifier removal for more than a constant number of steps, to see where it fails and whether anything can be salvaged from such an attempt. Let's try to amplify $\mathsf{P}=\mathsf{NP}$ more than a constant number of times. Assume that $\mathsf{P}=\mathsf{NP}$. Then there is a polynomial-time machine that solves Ext-Circuit-SAT (is there a satisfying extension for a given circuit and a partial assignment to its inputs?). More formally, we have a polytime algorithm $A$ with polynomial running time $p(n)\in\rm{poly}(n)$ s.t. Given a Boolean circuit $\varphi$ and a partial assignment $\tau$ to the inputs, $A$ returns "yes" if there is an extension of $\tau$ that satisfies $\varphi$, and returns "no" otherwise. To go beyond a constant number of removals, we need to do the quantifier removal effectively. We can do this because the Cook-Levin theorem is a constructive theorem; in fact, it gives a polynomial-time algorithm $Cook$ s.t. Given a DTM $M$ receiving two inputs, and three unary numbers $n$, $m$, and $t$, $Cook(M, n, m, t)$ returns a Boolean circuit of size $O(t^2)$ that simulates $M$ on inputs of length $(n,m)$ for $t$ steps. Let's try to use these to extend the argument for $\mathsf{P}=\mathsf{PH}$ to obtain an algorithm solving TQBF (actually TQBCircuit, i.e. the Totally Quantified Boolean Circuit problem). The idea of the algorithm is as follows: we repeatedly use $Cook$ on $A$ to remove the quantifiers from a given quantified circuit. There are linearly many quantifiers, so we hope to get a polynomial-time algorithm (we have an algorithm with polynomially many steps using the polynomial-time subroutine $Cook$). At the end of this process of quantifier elimination we will have a quantifier-free circuit which can be evaluated in polynomial time (the Circuit Value problem is in $\mathsf{P}$; let $CV$ be a polynomial-time algorithm for computing the circuit value of a given circuit).
However, we will see that this idea does not work (for the same reason pointed out by Peter). The resulting algorithm looks polynomial time: we have polynomially many steps, and each step is polynomial-time computable. However, this is not correct; the algorithm is not polynomial time. Using polynomial-time subroutines in a polynomial-time algorithm does yield a polynomial-time algorithm — but only if the values passed around remain of polynomial size. The problem is that in general this does not need to be true if the values returned by the subroutines are not of polynomial size in the original input, while we treat assignment of the returned values as if it were free. (In the TM model we have to read the output of any polynomial-time subroutine bit by bit.) Here the size of the value returned by the algorithm $Cook$ keeps increasing (it can be a power of the size of the input given to it; the exact power depends on the running time of $A$ and is around $p^2(|input|)$, so since we know that $A$ cannot run in less than linear time, $|output|$ is at least $|input|^2$). The problem is similar to the simple code below: Given $x$, Let $n = |x|$, Let $y = x$, For $i$ from $1$ to $n$ do Let $y = y^{|y|}$, (i.e. the concatenation of $|y|$ copies of $y$) Return $y$ Each time we execute $y = y^{|y|}$ we square the size of $y$. After $n$ executions we will have a $y$ of size $n^{2^n}$, obviously not a polynomial in the size of the input. Let's assume that we only consider quantified formulas with $k(n)$ quantifier alternations (where $n$ is the total size of the quantified formula). Assume that $A$ runs in time $p$ (e.g. linear time, which is not ruled out so far), and that we have maybe a more efficient $Cook$ algorithm outputting a smaller circuit of size $l(t)$ in place of $t^2$; then we get an algorithm for such formulas that runs in time $(l\circ p)^{O(k)}(n)=\underbrace{l(p(l(p(\dots(l(p(n)))))))}_{O(k)\mbox{ compositions}}$.
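The blow-up in the toy program above is easy to see by tracking only the length of $y$ (storing the actual string would be hopeless):

```python
n = 8
size = n                  # |y| starts at |x| = n
history = [size]
for _ in range(n):
    size = size * size    # y = y^{|y|} squares the length
    history.append(size)

# after k steps the length is n**(2**k); after n steps it is n**(2**n)
```

Even for `n = 8` the final length is 8^256, a number with over 200 digits: the per-step operation is cheap to describe, but the intermediate values are what destroy the polynomial bound.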
Even in the case that both $l$ and $p$ were linear (but with total coefficient $a\geq 2$), we would get an algorithm which runs in time $\Omega(n2^{k(n)})$, and if $k(n) = \Theta(n)$ it would be $\Omega(n2^n)$, similar to the brute-force algorithm (and even this was based on assuming that Cook-Levin can be performed on algorithms yielding circuits of linear size in the running time of the algorithm).
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. " so Ive got a small bottle that I filled up with salt. I put it on the scale and it's mass is 83g. I've also got a jup of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? so: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators than I suggested in the comments of the post are fine but I additionally would like to control the width of the Poisson Distribution (much like we can do for the normal distribution using variance). Do you know that this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$? 
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is that I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with the zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else; I'm not sure Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on I personally do not know much about postmodernist philosophy, so I shall not comment on it myself I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
I want to draw the Bode plot of this transfer function: $$G(p) = {K \over p \space (1+0.1p) \space (1+0.05p)}$$ But I don't know what to do with that K (static gain) -- I've only drawn TFs with known gain. The shape of the function is exactly the same for all values of K (assuming you're drawing a Bode plot). Different values of K just mean a translation of the graph upwards for higher values or downwards for lower values. OK, let's do some math to explain more explicitly what has been said in some comments to your question. Let's rewrite G as another TF multiplied by K: $$ G(p) = K \cdot G_n(p) $$ where $$ G_n(p) = \dfrac{1} {p \cdot (1+0.1p) \cdot (1+0.05p)} $$ is the normalized (with respect to K) TF. Let's define the logarithmic (dB) amplitude response of the system this way: $$ A_{(dB)}(\omega) = 20 \log_{10} \left| G(j\omega) \right| $$ We see easily that: $$ A_{(dB)}(\omega) = \\[1em] = 20 \log_{10} \left| K \cdot G_n(j\omega) \right| = \\[1em] = 20 \log_{10} \left| K \right| + 20 \log_{10} \left| G_n(j\omega) \right| = \\[1em] = K_{(dB)} + A_{n(dB)}(\omega) $$ Where \$A_{n(dB)}\$ is the amplitude response relative to the normalized TF and \$K_{(dB)}\$ is the constant K expressed in dB: \begin{align*} A_{n(dB)}(\omega) &= 20 \log_{10} \left| G_n(j\omega) \right| \\[1em] K_{(dB)} &= 20 \log_{10} \left| K \right| \end{align*} From that you can see that the only difference in the amplitude Bode plot between the original and the normalized TF is just a vertical shift, so the corner frequencies of both plots will remain the same. Here is an LTspice simulation that shows the situation in practice: Of course I had to choose a value for K (100 = 40 dB), but you can easily see that any change to K will just change the amount of the vertical shift.
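The vertical-shift argument can also be checked numerically (a sketch using the pole/corner values of the transfer function above):

```python
import math

def gain_db(K, w):
    """|G(jw)| in dB for G(p) = K / (p (1+0.1p) (1+0.05p))."""
    # |1 + 0.1*jw| = hypot(1, 0.1*w), and similarly for the other factor.
    mag = K / (w * math.hypot(1, 0.1 * w) * math.hypot(1, 0.05 * w))
    return 20 * math.log10(mag)

# Changing K from 1 to 100 shifts the curve by 20*log10(100) = 40 dB at
# every frequency: the shape (and the corner frequencies) is unchanged.
shifts = [gain_db(100, w) - gain_db(1, w) for w in (0.1, 1, 10, 100)]
```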
If two rays are not parallel in the start, how can they become parallel at the instant when they strike the lens of a telescope? If they don't become parallel, why do we consider them to be, in the ray diagrams of telescopes? The answer is because our ken (field of view) subtends an extremely small angle at the source. Even though the source may emit over a wide angular range, we can only receive a small angular range of that light if we have a limited aperture instrument and our distance from the source is large compared with the aperture. Suppose we look at Alpha Centauri through a 1 meter diameter aperture. Then the range of angles present in the rays that reach us if Alpha Centauri were a true point would be: $$\frac{1\text{ meter}}{4.1\times10^{16}\text{ meters}} \approx 2.5\times10^{-17}\text{ radians}$$ The path difference between a central and edge ray would be: $$\sqrt{(4.1\times10^{16})^2 + 1^2} - 4.1\times10^{16} \approx 1.25\times10^{-17}{\rm m}$$ or less than one hundredth of an atomic nucleus. Even when we take account of the fact that the star is an extended source, the range of angles is roughly the star's angular subtense at our position. This is still an extremely small number that has no bearing whatsoever on the diffraction of visible light. In the limiting case, consider that the object and your lens are finite in size and infinitely far apart. Then each appears as a point when viewed from the other. Two rays passing from the object to your lens would then follow the same path and would thus be parallel.
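The two estimates above are easy to reproduce (a quick numerical cross-check using the quoted distance to Alpha Centauri, about 4.1e16 m):

```python
import math

D = 4.1e16   # distance to Alpha Centauri, m
a = 1.0      # aperture diameter, m

# Angular spread of rays reaching a 1 m aperture from a point source:
angle = a / D  # ~2.4e-17 rad

# Path difference between a central and an edge ray.  The direct form
# sqrt(D^2 + a^2) - D underflows in floating point, so use the
# algebraically equivalent stable form a^2 / (sqrt(D^2 + a^2) + D),
# which is ~ a^2 / (2D).
path_diff = a**2 / (math.sqrt(D**2 + a**2) + D)  # ~1.2e-17 m
```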
Does there exist a definition for matrix exponentiation? If we have, say, a positive integer $B$, one can define $A^B$ as follows: $$\prod_{n = 1}^B A$$ We can define exponentials of fractions as a power of a radical, and we even have the following definition of the exponential: $$e^z = \sum_{n = 0}^\infty \frac{z^n}{n!}$$ which comes from the Taylor series for the function $\exp(z)$. Now, a problem seems to arise when we attempt to calculate $\exp(A)$, where $A$ is an $n \times n$ (square) matrix. We cannot define it as multiplying a matrix "a matrix number of times", as this makes no sense. The only reasonable definition that could work is the latter one (the infinite series): $$e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \frac{A^4}{4!} + \cdots$$ which requires only matrix powers with integer exponents. We know that the series for $e^x$ converges absolutely for all complex numbers, but do we know that this is true for matrices? Can this "matrix sum" diverge, and are there ways to test divergence/convergence when a matrix is applied? Or is this concept of "matrix divergence" not well defined? Thanks.
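For what it's worth, the series does converge for every square matrix, since each term is dominated in norm by $\|A\|^n/n!$. A small self-contained sketch (not from the original post) comparing the partial sums against the exact exponential of a diagonal matrix, where the series reduces to scalar exponentials:

```python
def mat_mul(A, B):
    """Naive square-matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """Partial sum of exp(A) = sum_{k>=0} A^k / k!."""
    n = len(A)
    identity = [[float(i == j) for j in range(n)] for i in range(n)]
    result = [row[:] for row in identity]
    term = [row[:] for row in identity]
    for k in range(1, terms):
        term = mat_mul(term, A)                       # A^k numerator
        term = [[x / k for x in row] for row in term]  # divide by k to build k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# For diag(1, 2) the series should give diag(e, e^2) with zero off-diagonals.
E = mat_exp([[1.0, 0.0], [0.0, 2.0]])
```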
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 33, Number 4 (1962), 1272-1280. A Characterization of the Wishart Distribution Abstract It is known that if $X$ and $Y$ are independent random variables having a Gamma distribution with parameters $(\theta, n)$ and $(\theta, m)$, i.e., with density function $p(x, \theta, n) = \frac{\theta^{n/2}x^{n/2 - 1}e^{-(\frac{1}{2})\theta x}}{2^{n/2}\Gamma(n/2)},\quad 0 < x, \theta; 1 \leqq n,$ then $X + Y$ and $X/(X + Y)$, or equivalently $X/Y$, are statistically independent. Lukacs [1] proved that this independence property characterizes the Gamma distribution, namely, if $X$ and $Y$ are two nondegenerate positive random variables, and if $X + Y$ is independent of $X/(X + Y)$, or equivalently of $X/Y$, then $X$ and $Y$ have a Gamma distribution with the same scale parameter. In the present paper we present an extension to the case where $U$ and $V$ are symmetric positive definite matrices having a Wishart distribution. A number of difficulties are encountered in the generalization. First, there is no natural extension of a ratio, and we consider $Z = W^{-1}U{W'}^{-1}$, where the "square root" $W = (U + V)^{\frac{1}{2}}$ is any factorization $WW' = (U + V)$. In the matrix case $Z$ is not a function of $V^{-\frac{1}{2}}UV^{-\frac{1}{2}}$ as was true in one dimension, and indeed if $U$ and $V$ are independent random matrices having a Wishart distribution, $U + V$ and $V^{-\frac{1}{2}}UV^{-\frac{1}{2}}$ need not be statistically independent, depending on which square root is chosen. This aspect will be treated in another paper. In the univariate case it is relatively straightforward to generate differential equations by differentiating under the expectation sign, but this is no longer true since the elements of $(U + V)^{\frac{1}{2}}$ do not bear a simple relation to the elements of $(U + V)$, and it is this point which leads to the difficulties in the proof. The characterization theorem is stated in Section 2.
In Section 3 the differential equation is set up, and is solved in Section 4. The authors are indebted to Martin Fox for his comments and suggestions. Article information Source: Ann. Math. Statist., Volume 33, Number 4 (1962), 1272-1280. Dates: First available in Project Euclid: 27 April 2007. Permanent link to this document: https://projecteuclid.org/euclid.aoms/1177704359 Digital Object Identifier: doi:10.1214/aoms/1177704359 Mathematical Reviews number (MathSciNet): MR141186 Zentralblatt MATH identifier: 0111.34202 Citation: Olkin, Ingram; Rubin, Herman. A Characterization of the Wishart Distribution. Ann. Math. Statist. 33 (1962), no. 4, 1272--1280. doi:10.1214/aoms/1177704359.
I don't know how your book defines the ring of fractions, but the right (i.e. the most general possible) way of doing things with rings of fractions is this: If $M$ is an $R$-module and $1 \in D \subseteq R$ is a subset of $R$ which is multiplicatively closed, then we define $D^{-1} M$ as follows. Define the following equivalence relation over $D \times M$: $(d_1, m_1) \sim (d_2, m_2)$ if there exists $d \in D$ such that $d(d_1 m_2 - d_2 m_1) = 0$. You can check that this defines an equivalence relation over $D \times M$ (the details are okay to work out, nothing hard), and in the case where $M = R$ and $R$ is an integral domain, you can get rid of the condition 'there exists a $d \in D$ such that' because it is not necessary. Defining the addition as usual over $D^{-1} M$, this makes $D^{-1}M$ into an abelian group. In particular, since $R$ is an $R$-module over itself, we can define $D^{-1}R$. Considering the particular case of $D^{-1}R$ alone first, we can define multiplication as $\frac{r_1}{d_1} \frac{r_2}{d_2} = \frac{r_1 r_2}{d_1 d_2}$. This makes $D^{-1}R$ into a ring, so now we can say that the scalar multiplication $\frac{r}{d} \frac{m}{d'} = \frac{rm}{dd'}$ makes $D^{-1}M$ into a $D^{-1}R$-module. (I used the letter $D$ because it stands for denominators. The letter $S$ is probably just used because of the alphabet...) In this kind of generality, if you want to make $D^{-1}R$ into an integral domain, you need to make some assumptions on $D$. For instance, note that the set $D$ of all non-zerodivisors is a multiplicatively closed subset of $R$ which contains $1$. This means that if $\frac a1 = \frac b1$, there exists $d \in D$ such that $d(a-b) = 0$; but $d$ is not a zero divisor, so $a-b = 0$ and $a = b$, hence the remark that $f$ is a monomorphism in this case. If $D$ contains a zero divisor, you can have some fraction $\frac r1 = \frac 01$ without having $r=0$, because this equation only means that there exists $d \in D$ such that $rd = 0$.
The equivalence class of $\frac 01$ consists precisely of those $r \in R$ that are annihilated by some $d \in D$; any such nonzero $r$ is a zero divisor. In an integral domain there are no zero divisors, so by the above remark the map $f$ is an embedding of $R$ into its quotient field $D^{-1}R$ (where $D$ is the set of non-zero elements, which is also the set of non-zerodivisors in this case). Note that $D^{-1}R$ is a field because a fraction $\frac rd$ with $r \neq 0$ is never equivalent to $\frac 0{d'}$ for any $d'$, hence we can take its inverse to be $\frac dr$. Hope that helps! Feel free to ask any questions about the details.
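A tiny concrete illustration of the "D contains a zero divisor" phenomenon (my own example, not from the original answer): localize $R = \mathbb{Z}/6\mathbb{Z}$ at $D = \{1, 2, 4\}$, which is multiplicatively closed and contains the zero divisors $2$ and $4$. Then $\frac 31 = \frac 01$ even though $3 \neq 0$:

```python
# Localization of Z/6Z at D = {1, 2, 4}.  Fractions are pairs (d, m),
# read as m/d, with (d1, m1) ~ (d2, m2) iff some d in D kills the
# cross-difference d1*m2 - d2*m1.
MOD = 6
D = {1, 2, 4}  # multiplicatively closed, contains 1

def eq(f1, f2):
    """Equivalence relation on D x M from the ring-of-fractions construction."""
    d1, m1 = f1
    d2, m2 = f2
    return any(d * (d1 * m2 - d2 * m1) % MOD == 0 for d in D)

# 3/1 ~ 0/1: take d = 2, since 2 * 3 = 6 = 0 in Z/6Z.
print(eq((1, 3), (1, 0)))  # True
# 1/1 is NOT equivalent to 0/1: no d in D kills 1.
print(eq((1, 1), (1, 0)))  # False
```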
Let $f\colon[0,1]\to \mathbb{R^2}$ be continuous such that $f(0)=f(1)$. I want to find a 1-Lipschitz function $g : [0,a]\to f([0,1])$ such that $g(0)=g(a)$ and $g$ is surjective ($a>0$). I had the following idea using the total variation of $f$: Denote $V_T(f) = \sup \left\lbrace\sum_{i=1}^n \lVert f(x_i) - f(x_{i-1})\rVert_2 \ \biggm| \ n\in\mathbb{N}, \ 0=x_0<x_1<\dots<x_n=1\right\rbrace$. Suppose moreover that $V_T(f)<+\infty$. Define $g:[0,V_T(f)]\to f([0,1])$ such that $g(x) = f\left(\frac{x}{V_T(f)}\right)$. Then $g(0)=g(V_T(f))$ and $g$ is surjective. But I cannot prove that $\Vert g(x_1)-g(x_2)\Vert_2\leq |x_1-x_2|$. Maybe it is not true; in that case, is there another definition of $g$ that would be suitable?
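Indeed, linear rescaling of the argument is not enough (the curve may move fast on some subintervals and slowly on others). The standard remedy, sketched here as a hint rather than a full answer, is to reparametrize by arc length:

```latex
% Arc-length reparametrization sketch: let s(t) = V_T(f|_{[0,t]}) be the
% variation of f up to time t; s is continuous, nondecreasing, s(0) = 0,
% s(1) = V_T(f).  Define g on [0, V_T(f)] by g(s(t)) = f(t).  This is well
% defined: s(t_1) = s(t_2) forces f to be constant on [t_1, t_2].
% Then for t_1 < t_2,
%   \| g(s(t_1)) - g(s(t_2)) \|_2 = \| f(t_1) - f(t_2) \|_2
%                                 \le V_T\!\left(f|_{[t_1, t_2]}\right)
%                                 = s(t_2) - s(t_1),
% so g is 1-Lipschitz, surjective onto f([0,1]), and g(0) = g(V_T(f)).
```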
Establishing the type of distribution, sample size, and known or unknown standard deviation can help you figure out how to go about a hypothesis test. However, there are several other factors you should consider when working out a hypothesis test. Rare Events Suppose you make an assumption about a property of the population (this assumption is the null hypothesis). Then you gather sample data randomly. If the sample has properties that would be very unlikely to occur if the assumption is true, then you would conclude that your assumption about the population is probably incorrect. (Remember that your assumption is just an assumption—it is not a fact and it may or may not be true. But your sample data are real and the data are showing you a fact that seems to contradict your assumption.) For example, Didi and Ali are at a birthday party of a very wealthy friend. They hurry to be first in line to grab a prize from a tall basket that they cannot see inside because they will be blindfolded. There are 200 plastic bubbles in the basket and Didi and Ali have been told that there is only one with a $100 bill. Didi is the first person to reach into the basket and pull out a bubble. Her bubble contains a $100 bill. The probability of this happening is \(\frac{1}{200} = 0.005\). Because this is so unlikely, Ali is hoping that what the two of them were told is wrong and there are more $100 bills in the basket. A "rare event" has occurred (Didi getting the $100 bill), so Ali doubts the assumption about only one $100 bill being in the basket. Using the Sample to Test the Null Hypothesis Use the sample data to calculate the actual probability of getting the test result, called the \(p\)-value. The \(p\)-value is the probability that, if the null hypothesis is true, the results from another randomly selected sample will be as extreme as or more extreme than the results obtained from the given sample.
A large \(p\)-value calculated from the data indicates that we should not reject the null hypothesis. The smaller the \(p\)-value, the more unlikely the outcome, and the stronger the evidence is against the null hypothesis. We would reject the null hypothesis if the evidence is strongly against it. Draw a graph that shows the \(p\)-value. The hypothesis test is easier to perform if you use a graph because you see the problem more clearly. Example \(\PageIndex{1}\) Suppose a baker claims that his bread height is more than 15 cm, on average. Several of his customers do not believe him. To persuade his customers that he is right, the baker decides to do a hypothesis test. He bakes 10 loaves of bread. The mean height of the sample loaves is 17 cm. The baker knows from baking hundreds of loaves of bread that the standard deviation for the height is 0.5 cm and the distribution of heights is normal. The null hypothesis could be \(H_{0}: \mu \leq 15\). The alternate hypothesis is \(H_{a}: \mu > 15\). The words "is more than" translate as a "\(>\)", so "\(\mu > 15\)" goes into the alternate hypothesis. The null hypothesis must contradict the alternate hypothesis. Since \(\sigma\) is known (\(\sigma = 0.5\) cm), the distribution for the population is known to be normal with mean \(\mu = 15\) and standard deviation \[\dfrac{\sigma}{\sqrt{n}} = \frac{0.5}{\sqrt{10}} \approx 0.16. \nonumber\] Suppose the null hypothesis is true (the mean height of the loaves is no more than 15 cm). Then is the mean height (17 cm) calculated from the sample unexpectedly large? The hypothesis test works by asking how unlikely the sample mean would be if the null hypothesis were true. The graph shows how far out the sample mean is on the normal curve. The \(p\)-value is the probability that, if we were to take other samples, any other sample mean would fall at least as far out as 17 cm. The \(p\)-value, then, is the probability that a sample mean is the same as or greater than 17 cm
when the population mean is, in fact, 15 cm. We can calculate this probability using the normal distribution for means. Figure \(\PageIndex{1}\) \(p\text{-value} = P(\bar{x} > 17)\), which is approximately zero. A \(p\)-value of approximately zero tells us that it would be highly unlikely to observe a sample mean of 17 cm if the population mean height were really 15 cm. That is, almost 0% of all samples of loaves would have a mean height of at least 17 cm purely by CHANCE had the population mean height really been 15 cm. Because the outcome of 17 cm is so unlikely (meaning it is happening NOT by chance alone), we conclude that the evidence is strongly against the null hypothesis (the mean height is at most 15 cm). There is sufficient evidence that the true mean height for the population of the baker's loaves of bread is greater than 15 cm. Exercise \(\PageIndex{1}\) A normal distribution has a standard deviation of 1. We want to verify a claim that the mean is greater than 12. A sample of 36 is taken with a sample mean of 12.5. \(H_{0}: \mu \leq 12\) \(H_{a}: \mu > 12\) The \(p\)-value is 0.0013. Draw a graph that shows the \(p\)-value. Answer: \(p\text{-value} = 0.0013\) Figure \(\PageIndex{2}\) Decision and Conclusion A systematic way to decide whether to reject or not reject the null hypothesis is to compare the \(p\)-value and a preset or preconceived \(\alpha\) (also called a "significance level"). A preset \(\alpha\) is the probability of a Type I error (rejecting the null hypothesis when the null hypothesis is true). It may or may not be given to you at the beginning of the problem. When you make a decision to reject or not reject \(H_{0}\), do as follows: If \(\alpha > p\text{-value}\), reject \(H_{0}\). The results of the sample data are significant. There is sufficient evidence to conclude that \(H_{0}\) is an incorrect belief and that the alternative hypothesis, \(H_{a}\), may be correct. If \(\alpha \leq p\text{-value}\), do not reject \(H_{0}\).
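The exercise's \(p\)-value of 0.0013 can be reproduced with Python's standard library (a sketch; the original text presumably used a calculator or tables):

```python
from statistics import NormalDist

# Under H0: mu = 12, the mean of n = 36 draws with sigma = 1 is normally
# distributed with mean 12 and standard error 1/sqrt(36) = 1/6.
sampling_dist = NormalDist(mu=12, sigma=1 / 6)

# p-value = P(x-bar >= 12.5) under H0 (this is z = 3 standard errors out)
p_value = 1 - sampling_dist.cdf(12.5)
print(round(p_value, 4))  # 0.0013
```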
The results of the sample data are not significant. There is not sufficient evidence to conclude that the alternative hypothesis, \(H_{a}\), may be correct. When you "do not reject \(H_{0}\)", it does not mean that you should believe that \(H_{0}\) is true. It simply means that the sample data have failed to provide sufficient evidence to cast serious doubt on the truthfulness of \(H_{0}\). Conclusion: After you make your decision, write a thoughtful conclusion about the hypotheses in terms of the given problem. Example \(\PageIndex{2}\) When using the \(p\)-value to evaluate a hypothesis test, it is sometimes useful to use the following memory device: If the \(p\)-value is low, the null must go. If the \(p\)-value is high, the null must fly. This memory aid relates a \(p\)-value less than the established alpha (the \(p\) is low) with rejecting the null hypothesis and, likewise, relates a \(p\)-value higher than the established alpha (the \(p\) is high) with not rejecting the null hypothesis. Fill in the blanks. Reject the null hypothesis when ______________________________________. The results of the sample data _____________________________________. Do not reject the null hypothesis when __________________________________________. The results of the sample data ____________________________________________. Answer Reject the null hypothesis when the \(p\)-value is less than the established alpha value. The results of the sample data support the alternative hypothesis. Do not reject the null hypothesis when the \(p\)-value is greater than the established alpha value. The results of the sample data do not support the alternative hypothesis. Exercise \(\PageIndex{2}\) It’s a Boy Genetics Labs claim their procedures improve the chances of a boy being born.
The results for a test of a single population proportion are as follows: \(H_{0}: p = 0.50, H_{a}: p > 0.50\) \(\alpha = 0.01\) \(p\text{-value} = 0.025\) Interpret the results and state a conclusion in simple, non-technical terms. Answer Since the \(p\)-value is greater than the established alpha value (the \(p\)-value is high), we do not reject the null hypothesis. There is not enough evidence to support It’s a Boy Genetics Labs' stated claim that their procedures improve the chances of a boy being born. Chapter Review When the probability of an event occurring is low, and it happens, it is called a rare event. Rare events are important to consider in hypothesis testing because they can inform your willingness to reject or not reject a null hypothesis. To test a null hypothesis, find the \(p\)-value for the sample data and graph the results. When deciding whether or not to reject the null hypothesis, keep these two rules in mind: if \(\alpha > p\text{-value}\), reject the null hypothesis; if \(\alpha \leq p\text{-value}\), do not reject the null hypothesis. Glossary Level of Significance of the Test probability of a Type I error (rejecting the null hypothesis when it is true). Notation: \(\alpha\). In hypothesis testing, the Level of Significance is called the preconceived \(\alpha\) or the preset \(\alpha\). \(p\)-value the probability that an event will happen purely by chance assuming the null hypothesis is true. The smaller the \(p\)-value, the stronger the evidence is against the null hypothesis. Contributors Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.
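The decision rule above is mechanical enough to state in code (an illustrative helper, not part of the original chapter):

```python
def decide(alpha, p_value):
    """Compare a preset significance level with a computed p-value."""
    if alpha > p_value:
        return "reject H0"       # results are significant
    return "do not reject H0"    # insufficient evidence against H0

# It's a Boy Genetics Labs exercise: alpha = 0.01, p-value = 0.025
print(decide(0.01, 0.025))  # do not reject H0
```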
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ... Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
J/ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... 
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
This is the students’ version of the page. Log in above for the teachers’ version. Part 1 – Adding and subtracting fractions Between each question and the next, only one aspect is changed. Can you see how this affects the answer in each case? Click the “New questions” button for a new set of randomly generated questions. Click “show all answers” to show all answers at once, or click on each individual question to show answers one at a time. Teachers: log in to access the following: Worksheet (with space for student work) Handout (slides with exercises only; 4 per page for reduced printing) See Teacher resources under Part 3 for a card sort covering all four operations with fractions Part 2 – Multiplying fractions Here is a sequence of calculations: \(12 \times 8 = 96 \) \(12 \times 4 = 48 \) \(12 \times 2 = 24 \) \(12 \times 1 = 12 \) What are the next three calculations in this sequence? Consider \(\frac{4}{4} \times \frac{1}{3}\). This is equivalent to \(1 \times \frac{1}{3}\) so we would expect the product to be \(\frac{1}{3}\), as shown in the applet’s initial configuration. Tick the box to show the vertical splits. Start reducing the numerator of the first fraction while holding everything else constant for a visualisation of \(\frac{3}{4} \times \frac{1}{3}, \frac{2}{4} \times \frac{1}{3},\) and \( \frac{1}{4} \times \frac{1}{3}\). Continue to play around with this applet! Part 3 – Dividing fractions 1) Here is a sequence of calculations: \(40 \times 8 = 320 \) \(40 \times 4 = 160 \) \(40 \times 2 = 80 \) \(40 \times 1 = 40 \) What are the next three calculations in this sequence? 
2) Here is a different sequence of calculations: \(40 \div 8 = 5 \) \(40 \div 4 = 10 \) \(40 \div 2 = 20 \) \(40 \div 1 = 40 \) What are the next three calculations in this sequence? N10a – Converting terminating decimals into fractions and vice versa N10b – Converting recurring decimals into fractions and vice versa N11a – Identifying and working with fractions in ratio problems N12a – Interpreting fractions and percentages as operators A4g – Adding and subtracting algebraic fractions A4h – Multiplying and dividing algebraic fractions R3a – Expressing one quantity as a fraction of another P8a – Tree diagrams
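The pattern in both sequences (halving a factor halves the product; halving a divisor doubles the quotient) can be checked with Python's exact `Fraction` type (an illustrative aside, not part of the original page):

```python
from fractions import Fraction

# Continuing 40 x 2 = 80, 40 x 1 = 40 by halving the second factor:
assert 40 * Fraction(1, 2) == 20
# Continuing 40 / 2 = 20, 40 / 1 = 40 by halving the divisor:
assert 40 / Fraction(1, 2) == 80

# The applet example: 4/4 x 1/3 = 1/3, and shrinking the first
# numerator shrinks the product proportionally.
assert Fraction(4, 4) * Fraction(1, 3) == Fraction(1, 3)
assert Fraction(1, 4) * Fraction(1, 3) == Fraction(1, 12)
```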
Introduction Drawing inferences from A/B tests is an integral part of many data scientists' work. Often, we hear about the frequentist (classical) approach, where we specify the alpha and beta rates and see if we can reject the null hypothesis in favor of the alternative hypothesis. On the other hand, Bayesian inference uses Bayes' Theorem to update the probability that a hypothesis is true as more evidence becomes available. In this blog post, we are going to use R to follow the example in [1] and extend it with a sensitivity analysis to observe the impact of tweaking the priors on the findings. [1] has a great discussion on the advantages and disadvantages of Frequentist vs. Bayesian that I'd recommend reading. My main takeaways are that: Bayesian is often criticised for having a subjective prior, which we will examine in the sensitivity analysis section Frequentist is criticised for yielding different p-values under different experimental set-ups, which we will examine in the next section under stopping rules "... for any decision rule, there is a Bayesian decision rule which is, in a precise sense, at least as good as a rule" – it doesn't hurt for a data scientist to gain another perspective in making inferences Case Study Background The objective of the experiment is to check if a coin is biased. Suppose the person who conducts the experiment (let's call him the researcher) is not the same person who performs the analysis of the results (let's call him the analyst). The researcher has two ways to stop the experiment (stopping rules): Toss the coin 6 times and report the number of heads Toss the coin until the first tail appears The researcher reports HHHHHT and his stopping rule to the analyst. However, the analyst has forgotten what the stopping rule was. Frequentist Approach The Frequentist analyst sets up the hypothesis: \(H_0: \theta = 0.5, H_A : \theta > 0.5\) Binomial Distribution Under stopping rule (1), the number of heads follows a Binomial distribution. 
More formally, Observed Heads ~ Bin(6, 0.5)

# Binomial distribution
n = 6
num_heads = c(1:n)
pmf_binom <- dbinom(num_heads, size=n, prob=0.5)
plot(num_heads, pmf_binom, type="h", main = "Prob mass function of a Binomial distribution")
# The following two lines are equivalent
1 - pbinom(q=4, size=6, prob=0.5)
pmf_binom[5] + pmf_binom[6]

Prob(5 or 6 heads in 6 tosses) = 0.1094. Therefore, we fail to reject the null hypothesis at the 0.05 significance level. Geometric Distribution Under stopping rule (2), the number of heads flipped before the first tail appears follows a Geometric distribution. Number_heads_before_1st_tail ~ Geometric(0.5)

# Geometric distribution (number of "failures" before the first success)
num_fails = c(0:10)
pmf_geom = dgeom(x = num_fails, prob=0.5)
sum(pmf_geom)
plot(num_fails, pmf_geom, type = "h", main = "Prob mass function of a Geometric dist.")
# The following two lines are equivalent
1 - pgeom(q=4, prob=0.5)
1 - sum(pmf_geom[1:5])

P(it takes at least 5 heads before the 1st tail) = 0.0313. Therefore, we reject the null hypothesis at the 0.05 significance level. Notice how the same data leads to opposite conclusions. Under the frequentist approach, the stopping rule, which decides the distribution of the random variable, must be specified before the experiment. Bayesian Approach We want to estimate theta, which is defined as the true probability that the coin comes up heads. We use a Beta distribution as the conjugate prior. In order not to lose the focus of the case study, we introduce the Beta distribution in the appendix. As the prior, let's say theta follows a Beta(3,3) distribution, which is fairly flat around 0.5. This suggests that the analyst believes the coin is fair, but uses (3,3) as an indication of his uncertainty. We will study the impact of changing these two parameters in the Sensitivity Analysis section. For now, let's go with: Theta_prior ~ Beta(3,3) During the experiment, we have 6 flips, of which 5 are heads and 1 is tails. 
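Both frequentist p-values can be double-checked in a few lines of Python (an independent re-computation of the numbers quoted above, not part of the original post):

```python
from math import comb

# Stopping rule (1): P(5 or 6 heads in 6 fair tosses) = 7/64
p_binomial = sum(comb(6, k) for k in (5, 6)) / 2**6
print(p_binomial)  # 0.109375, quoted as 0.1094

# Stopping rule (2): the first 5 tosses must all come up the same
# (non-stopping) face, which for a fair coin has probability 0.5^5.
p_geometric = 0.5**5
print(p_geometric)  # 0.03125, quoted as 0.0313
```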
Let’s fill in the following table:

Item        | Prior | Experiment | Posterior
Heads       | 3     | 5          | 8
Tails       | 3     | 1          | 4
Total flips | 6     | 6          | 12

# Bayesian approach
theta = seq(from=0, to=1, by=.01)
plot(theta, dbeta(theta,8,4), type="l", ylim = c(0,6), col = "red", lwd = 2,
     ylab = "Prob. Density Function", main = "Prob. Density Function")
lines(theta, dbeta(theta,3,3), type="l", col = "green", lwd = 2)
lines(theta, dbeta(theta,5,1), type="l", col = "blue", lwd = 2)
abline(v=0.5, col='grey')
legend("topright", legend = c("Posterior", "Prior", "Experiment"),
       col = c("red", "green", "blue"), bty = "n", text.col = "black",
       horiz = FALSE, inset = c(0.1, 0.1), lty = 1, lwd = 2)
1 - pbeta(0.5, 8, 4)

\(P(\theta > 0.5 | data) = 0.89 \), i.e., 0.89 is the area under the red curve, to the right of 0.5. In the next section, we investigate the impact of changing the shape of the prior distribution on posterior probabilities. Sensitivity Analysis of the Impact of the Prior Distribution on Posterior Probabilities How would changing the prior distribution from Beta(3,3) affect the posterior probability that theta > 0.5? In this section, we are going to change the variance and the expected value of the distribution as part of the sensitivity analysis. \( mean = \frac{\alpha}{\alpha + \beta};\) \( variance = \frac{\alpha\beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)}\) (1) Changing the variance – When we inject a stronger prior (lower variance) that the coin is fair, the posterior probability is reduced from 0.89 to 0.84. 
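The posterior probability 0.89 can be cross-checked without R, using the identity between the Beta CDF with integer parameters and a binomial tail, P(Beta(a,b) ≤ x) = P(Binomial(a+b−1, x) ≥ a) (a quick verification, not from the original post):

```python
from math import comb

def beta_sf_half(a, b):
    """P(theta > 0.5) for theta ~ Beta(a, b) with integer a, b."""
    n = a + b - 1
    # P(Beta(a,b) <= 0.5) = P(Binomial(n, 0.5) >= a)
    lower_tail = sum(comb(n, k) for k in range(a, n + 1)) / 2**n
    return 1 - lower_tail

# Posterior Beta(8, 4) from prior Beta(3, 3) plus 5 heads / 1 tail:
print(round(beta_sf_half(8, 4), 2))  # 0.89
```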
Distribution | Mean | Variance | \(P(\theta > 0.5 | data)\)
Beta(1,1) | 0.5 | 0.083 | 0.94
Beta(2,2) | 0.5 | 0.05 | 0.91
Beta(3,3) | 0.5 | 0.036 | 0.89
Beta(5,5) | 0.5 | 0.023 | 0.84

## Sensitivity Analysis - change the variance
par(mfrow = c(2,2))
alpha_prior = 5
beta_prior = 5
alpha_expt = 5
beta_expt = 1
alpha_post = alpha_prior + alpha_expt
beta_post = beta_prior + beta_expt
title = paste0("Prior Beta(", alpha_prior, ",", beta_prior, ")")

# Bayesian Approach
theta = seq(from=0, to=1, by=.01)
plot(theta, dbeta(theta, alpha_post, beta_post), type="l", ylim = c(0,6),
     col = "red", lwd = 2, ylab = "Prob. Density Function", main = title)
lines(theta, dbeta(theta, alpha_prior, beta_prior), type="l", col = "green", lwd = 2)
lines(theta, dbeta(theta, alpha_expt, beta_expt), type="l", col = "blue", lwd = 2)
abline(v=0.5, col='grey')

# Prior Mean
alpha_prior / (alpha_prior + beta_prior)
# Prior Variance
(alpha_prior * beta_prior) / ((alpha_prior + beta_prior)^2 * (alpha_prior + beta_prior + 1))
# P(theta > 0.5 | data)
1 - pbeta(0.5, alpha_post, beta_post)

Above: Effect of changing prior variance whilst keeping mean constant. Green: Prior; Red: Posterior; Blue: Experiment.

(2) Changing the mean – Similarly, and as expected, if we inject the prior that the coin is biased towards tails when the experiment is biased towards heads, we are less confident that the coin is biased towards heads. Given the mean and variance, we need to compute alpha and beta; thankfully, a Stack Overflow post helps us do that. For simplicity, we round alpha and beta to the nearest integer, so the realised mean and variance might differ slightly from the targets.
par(mfrow = c(2,2))
mean = 0.7
variance = 0.036
alpha_prior = ((1-mean)/variance - 1/mean) * mean^2
beta_prior = alpha_prior * (1/mean - 1)
alpha_prior = round(alpha_prior, 0)
beta_prior = round(beta_prior, 0)
alpha_expt = 5
beta_expt = 1
alpha_post = alpha_prior + alpha_expt
beta_post = beta_prior + beta_expt
title = paste0("Prior Beta(", alpha_prior, ",", beta_prior, ")")

# Bayesian Approach
theta = seq(from=0, to=1, by=.01)
plot(theta, dbeta(theta, alpha_post, beta_post), type="l", ylim = c(0,6),
     col = "red", lwd = 2, ylab = "Prob. Density Function", main = title)
lines(theta, dbeta(theta, alpha_prior, beta_prior), type="l", col = "green", lwd = 2)
lines(theta, dbeta(theta, alpha_expt, beta_expt), type="l", col = "blue", lwd = 2)
abline(v=0.5, col='grey')

# Prior Mean
alpha_prior / (alpha_prior + beta_prior)
# Prior Variance
(alpha_prior * beta_prior) / ((alpha_prior + beta_prior)^2 * (alpha_prior + beta_prior + 1))
# P(theta > 0.5 | data)
1 - pbeta(0.5, alpha_post, beta_post)

Distribution | Mean | Variance | \(P(\theta > 0.5 | data)\)
Beta(2,3) | 0.4 | 0.04 | 0.83
Beta(3,3) | 0.5 | 0.036 | 0.89
Beta(3,2) | 0.6 | 0.04 | 0.94
Beta(3,1) | 0.7 | 0.038 | 0.98

Above: Effect of changing prior mean, keeping variance constant. Green: Prior; Red: Posterior; Blue: Experiment.

Conclusion

In conclusion, we have demonstrated the Bayesian perspective on A/B testing on small samples. We saw that the stopping rule is critical in establishing the p-value in the frequentist approach, whereas the stopping rule is not considered in the Bayesian approach. The Bayesian approach also gives a probability that a hypothesis is true, given the prior and the experimental results. Lastly, we also observed how the posterior probability is affected by the mean and variance of the prior distribution.

Appendix – Beta distribution

The beta distribution is a family of continuous probability distributions defined on the interval [0,1], parametrized by two positive shape parameters, denoted by α and β.
There are three reasons why the beta distribution is great for Bayesian inference:

The interval [0,1] makes it suitable for representing probabilities.

It has the nice property that the posterior distribution is also a beta distribution. To be clear, the prior distribution refers to the distribution we believe theta follows before we observe any data, whilst the posterior distribution refers to the distribution we believe theta follows after we observe some samples.

We can specify a large range of beliefs by changing α and β – the probability density function of theta, given α and β, is:

\( f(\theta; \alpha, \beta) = \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha, \beta)} \)

From the above equation, we see that α and β control the shape of the distribution, and indeed they are known as shape parameters. Let’s plug some values into R and observe the difference in shapes. The expected value is α / (α+β).

Color | α | β | Mean = α / (α+β)
Black | 0.5 | 0.5 | 0.50
Red (uniform) | 1 | 1 | 0.50
Blue | 3 | 3 | 0.50
Yellow | 5 | 5 | 0.50

theta = seq(from=0, to=1, by=.01)
plot(theta, dbeta(theta,0.5,0.5), type="l", ylim = c(0,3), col = "black", lwd = 2,
     ylab = "Prob. Density Function")
lines(theta, dbeta(theta,1,1), type="l", col = "red", lwd = 2)
lines(theta, dbeta(theta,3,3), type="l", col = "blue", lwd = 2)
lines(theta, dbeta(theta,5,5), type="l", col = "yellow", lwd = 2)

Notice how the mean of all four distributions is the same at 0.5, yet very different shapes can be specified. This is what we meant by a large range of beliefs being expressible using the beta distribution.
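As a cross-check of the main result, the posterior tail probability can be reproduced outside R; here is a minimal sketch in Python using SciPy's beta distribution, mirroring the R call `1-pbeta(0.5, 8, 4)`:

```python
from scipy.stats import beta

# Posterior after a Beta(3,3) prior and 5 heads / 1 tail: Beta(3+5, 3+1) = Beta(8,4)
posterior = beta(8, 4)

# P(theta > 0.5 | data): area under the posterior density to the right of 0.5
p_biased = posterior.sf(0.5)
print(round(p_biased, 2))  # 0.89
```

The survival function `sf` is just `1 - cdf`, so this is term-for-term the same computation as the R version.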
Parentheses and brackets are very common in mathematical formulas. You can easily control the size and style of brackets in LaTeX; this article explains how.

Here's how to type some common math braces and parentheses in LaTeX:

Type | LaTeX markup | Renders as
Parentheses; round brackets | (x+y) | \((x+y)\)
Brackets; square brackets | [x+y] | \([x+y]\)
Braces; curly brackets | \{ x+y \} | \(\{ x+y \}\)
Angle brackets | \langle x+y \rangle | \(\langle x+y\rangle\)
Pipes; vertical bars | |x+y| | \(\displaystyle| x+y |\)
Double pipes | \|x+y\| | \(\| x+y \|\)

The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example:

\[ F = G \left( \frac{m_1 m_2}{r^2} \right) \]

Notice that to insert the parentheses or brackets, the \left and \right commands are used. Even if you are using only one bracket, both commands are mandatory. \left and \right can dynamically adjust the size, as shown by the next example:

\[ \left[ \frac{ N } { \left( \frac{L}{p} \right) - (m+n) } \right] \]

When writing multi-line equations with the align, align* or aligned environments, the \left and \right commands must be balanced on each line and on the same side of &. Therefore the following code snippet will fail with errors:

\begin{align*}
y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \\
& \quad + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right)
\end{align*}

The solution is to use "invisible" brackets to balance things out, i.e. adding a \right. at the end of the first line, and a \left. at the start of the second line after &:

\begin{align*}
y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \right. \\
& \quad \left. + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right)
\end{align*}

The size of the brackets can also be controlled explicitly: the commands \big, \Big, \bigg and \Bigg produce delimiters of progressively larger size. For a complete list of delimiters and sizes see the reference guide.
LaTeX markup | Renders as
\big( \Big( \bigg( \Bigg( | \(\displaystyle\big( \; \Big( \; \bigg( \; \Bigg(\)
\big] \Big] \bigg] \Bigg] | \(\displaystyle\big] \; \Big] \; \bigg] \; \Bigg]\)
\big\{ \Big\{ \bigg\{ \Bigg\{ | \(\displaystyle\big\{ \; \Big\{ \; \bigg\{ \; \Bigg\{\)
\big\langle \Big\langle \bigg\langle \Bigg\langle | \(\displaystyle\big\langle \; \Big\langle \; \bigg\langle \; \Bigg\langle\)
\big\rangle \Big\rangle \bigg\rangle \Bigg\rangle | \(\displaystyle\big\rangle \; \Big\rangle \; \bigg\rangle \; \Bigg\rangle\)
\big| \Big| \bigg| \Bigg| | \(\displaystyle\big| \; \Big| \; \bigg| \; \Bigg|\)
\big\| \Big\| \bigg\| \Bigg\| | \(\displaystyle\big\| \; \Big\| \; \bigg\| \; \Bigg\|\)
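As a usage sketch (not part of the original reference table), the manual sizes can be nested so that outer delimiters stay visually dominant:

```latex
\[ \Bigg[ \bigg( \Big\{ \big\langle x+y \big\rangle \Big\} \bigg) \Bigg] \]
```

Unlike \left/\right, these commands pick a fixed size, so they also work across line breaks in multi-line equations without needing invisible brackets.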
The solution below is based on the one sent in by Barinder of Langley Grammar School. We had quite a number of correct, well laid out solutions to this problem this month, including those from Roy of Allerton High School, Dan (no address given) and Calum of Wayland High School.

Although definitely not in proportion, this diagram makes the problem seem a lot easier. The question is asking for the length of the arc I have coloured red. To get this, I decided to find the angle $ \theta $ on the diagram, and use the equation

Arc Length $ = \frac {\theta}{360} \times 2 \pi r $

where $ \theta $ is measured in degrees and $r$ is the radius.

$\angle OAB = 90^\circ$, since it is where a tangent and a radius of a circle meet - a standard circle theorem. Thus, the triangle AOB can be drawn as follows:

We can now use trigonometry to find $ \theta $:
$$\begin{align*} \cos \theta &= \frac {6367000}{6367025} = 0.99999607 \\ \theta &= \cos^{-1} (0.99999607) = 0.1606^\circ \mbox{ (4 d.p.)}\end{align*}$$

Substituting this into the equation for the arc length of a circle gives the length required:

Arc Length $= \frac {0.1606}{360} \times 2\pi r = 0.000446 \times 2 \times \pi \times 6367000 = 17842.3 \mbox{ m} \approx 17.8 \mbox{ km}$

The cliffs of Dover

For this next part, we are given the arc length, since this corresponds to the distance between England and France. The diagram is therefore:

This is essentially the reverse of the previous question. We need to find the angle $\alpha$ first, and to do this, we consider the arc length of the sector OAD of the circle:

Arc Length $ = \frac {\alpha}{360} \times 2 \pi r$

So $32000 = \frac {\alpha}{360} \times 2 \times \pi \times 6367000$, which rearranges to $ \alpha = \frac{32000 \times 360}{2 \times \pi \times 6367000} = 0.288^\circ \mbox{ (3 d.p.)}$

Since we now have the angle $\alpha$, we can consider the triangle AOB:

$ \cos \alpha = \frac {6367000}{6367000 + h} $, so $ 6367000 + h = \frac {6367000}{\cos (0.288^\circ)} $.
So $ 6367000 + h = 6367080.415 $, giving $ h = 6367080.415 - 6367000 = 80.415 $ m. Thus, the cliffs of Dover are about $80.4$ metres high.

Clare's distance

In this case, Clare has found the distance AB, shown in blue on this copy of the original diagram. Using Pythagoras' Theorem,

$AB^2+6367000^2=6367025^2$
$\Rightarrow AB^2+40538689000000=40539007350625$
$\Rightarrow AB^2=318350625$
$\Rightarrow AB=17842.3828...\approx17842.4$ metres

This is very similar to the red distance, which we found to be $17842.3$ metres - they only differ by about $5$ cm!

To explain why these distances are so similar, a more representative diagram is helpful. Really, the lighthouse is very small compared to the Earth, and we found that $\theta=0.1606^\circ$, so the triangle should be very skinny! In the diagram below, the lighthouse is still too big relative to the Earth, as the angle is still several degrees (rather than $0.1606^\circ$), but you can see that the blue and red distances must be similar. What happens as the lighthouse gets even smaller relative to the Earth - or as $\theta$ gets even closer to $0$?
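The whole calculation condenses into a few lines of code; a sketch in Python, using the radius and heights from the solution above:

```python
import math

R = 6_367_000         # Earth's radius in metres (value used in the solution)
h_lighthouse = 25     # lighthouse height, so OB = R + 25 = 6,367,025 m

# Angle at the centre: cos(theta) = R / (R + h), here in radians
theta = math.acos(R / (R + h_lighthouse))

arc = R * theta                                    # the red distance
chord = math.sqrt((R + h_lighthouse)**2 - R**2)    # the blue distance (Pythagoras)
print(round(arc, 1), round(chord, 1))              # both about 17842 m

# Cliffs of Dover: given arc length 32 km, solve for the height h
alpha = 32_000 / R                                 # arc = R * alpha (radians)
h_cliffs = R / math.cos(alpha) - R
print(round(h_cliffs, 1))                          # about 80.4 m
```

Working in radians avoids the 360/2π conversions, but the numbers agree with the degree-based working above.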
Regression Intercept Confidence Interval gives a range of plausible values for the intercept of a regression line and is used to check the reliability of the estimated intercept.

${R = \beta_0 \pm t(\frac{1-\alpha}{2}, n-k-1) \times SE_{\beta_0} }$

Where −

${\beta_0}$ = Regression intercept.

${k}$ = Number of predictors.

${n}$ = Sample size.

${SE_{\beta_0}}$ = Standard error of the intercept.

${\alpha}$ = Confidence level (e.g. 0.99 for a 99% interval).

${t}$ = t-value.

Problem Statement: Compute the regression intercept confidence interval for the following data: the number of predictors (k) is 1, the regression intercept ${\beta_0}$ is 5, the sample size (n) is 10 and the standard error ${SE_{\beta_0}}$ is 0.15.

Solution: Let us consider the case of a 99% confidence interval.

Step 1: Compute the t-value for ${ \alpha = 0.99}$.

${ = t(\frac{1-\alpha}{2}, n-k-1) \\[7pt] = t(\frac{1-0.99}{2}, 10-1-1) \\[7pt] = t(0.005,8) \\[7pt] = 3.3554 }$

Step 2: Lower limit of the regression intercept:

${ = \beta_0 - t(\frac{1-\alpha}{2}, n-k-1) \times SE_{\beta_0} \\[7pt] = 5 - (3.3554 \times 0.15) \\[7pt] = 5 - 0.50331 \\[7pt] = 4.49669 }$

Step 3: Upper limit of the regression intercept:

${ = \beta_0 + t(\frac{1-\alpha}{2}, n-k-1) \times SE_{\beta_0} \\[7pt] = 5 + (3.3554 \times 0.15) \\[7pt] = 5 + 0.50331 \\[7pt] = 5.50331 }$

As a result, the 99% confidence interval for the regression intercept is ${(4.49669, \; 5.50331)}$.
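The three steps can be checked programmatically; a sketch in Python using SciPy's Student-t distribution (the variable names are ours, not part of the original problem):

```python
from scipy.stats import t

beta0, se, n, k = 5, 0.15, 10, 1   # values from the problem statement
conf = 0.99
df = n - k - 1                     # 8 degrees of freedom

# Two-tailed critical value: t such that P(|T| > t) = 1 - conf
t_crit = t.ppf(1 - (1 - conf) / 2, df)
print(round(t_crit, 4))            # about 3.3554

lower = beta0 - t_crit * se
upper = beta0 + t_crit * se
print(round(lower, 5), round(upper, 5))   # about 4.49669 and 5.50331
```

`ppf` is the inverse CDF, so `t.ppf(0.995, 8)` is the 99.5th percentile, which leaves 0.5% in each tail.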
Al-Zamil, Qusay and Montaldi, James (2010) Generalized Dirichlet to Neumann operator on invariant differential forms and equivariant cohomology. [MIMS Preprint] PDF DNoperator1.pdf Download (140kB) Abstract In a recent paper, Belishev and Sharafutdinov consider a compact Riemannian manifold $M$ with boundary $\partial M$. They define a generalized Dirichlet to Neumann (DN) operator $\Lambda$ on all forms on the boundary and they prove that the real additive de Rham cohomology structure of the manifold in question is completely determined by $\Lambda$. This shows that the DN map $\Lambda$ inscribes into the list of objects of algebraic topology. In this paper, we suppose $G$ is a torus acting by isometries on $M$. Given $X$ in the Lie algebra of $G$ and the corresponding vector field $X_M$ on $M$, one defines Witten's inhomogeneous coboundary operator $d_{X_M} = d+\iota_{X_M}$ on invariant forms on $M$. The main purpose is to adapt Belishev and Sharafutdinov's boundary data to invariant forms in terms of the operator $d_{X_M}$ and its adjoint $\delta_{X_M}$. In other words, we define an operator $\Lambda_{X_M}$ on invariant forms on the boundary which we call the $X_M$-DN map and using this we recover the long exact $X_M$-cohomology sequence of the topological pair $(M,\partial M)$ from an isomorphism with the long exact sequence formed from our boundary data. We then show that $\Lambda_{X_M}$ completely determines the free part of the relative and absolute equivariant cohomology groups of $M$ when the set of zeros of the corresponding vector field $X_M$ is equal to the fixed point set $F$ for the $G$-action. In addition, we partially determine the mixed cup product (the ring structure) of $X_M$-cohomology groups from $\Lambda_{X_M}$. These results explain to what extent the equivariant topology of the manifold in question is determined by the $X_M$-DN map $\Lambda_{X_M}$. 
Finally, we illustrate the connection between Belishev and Sharafutdinov's boundary data on $\partial F$ and ours on $\partial M$.

Item Type: MIMS Preprint
Uncontrolled Keywords: Algebraic topology, equivariant topology, manifolds with boundary, equivariant cohomology, cup product (ring structure), group actions, Dirichlet to Neumann operator.
Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 35 Partial differential equations; > 55 Algebraic topology; > 58 Global analysis, analysis on manifolds
Depositing User: Dr James Montaldi
Date Deposited: 03 Oct 2010
Last Modified: 08 Nov 2017 18:18
URI: http://eprints.maths.manchester.ac.uk/id/eprint/1528
mrtaurho "The mathematician does not study pure mathematics because it is useful; he studies it because he delights in it and he delights in it because it is beautiful" - Georg Cantor Contact: mrtaurho[at]gmail[dot]com My favorite Theorem so far: Let $f(x)$ be an analytic function with a MacLaurin Expansion of the form $$f(x)=\sum_{k=0}^{\infty}\frac{\phi(k)}{k!}(-x)^k$$then the Mellin Transform of this function is given by $$\int_0^{\infty}x^{s-1}f(x)dx=\Gamma(s)\phi(-s)$$ Some contributions I am proud of: https://math.stackexchange.com/questions/3048010/how-to-show-that-prod-r-1n-gamma-left-frac-rn1-right-sqrt/3048032#3048032 https://math.stackexchange.com/questions/3056890/a-series-for-the-golden-ratio/3056902#3056902 https://math.stackexchange.com/questions/3029789/proving-im-operatornameli-2-sqrt-i-sqrt-2-1-frac34g-frac18-pi-ln-sqrt/3061998#3061998 https://math.stackexchange.com/questions/3057155/is-int-0-infty-frac-sin-yys1dy-gamma-s-sin-frac-pi-s2-for/3057177#3057177 https://math.stackexchange.com/questions/3042291/int-0-infty-frac11-xr-dx-frac1r-gamma-left-fracr-1/3042481#3042481 https://math.stackexchange.com/questions/3027576/integral-of-ln-tanhx/3027618#3027618 https://math.stackexchange.com/questions/3003880/integral-int-a-infty-frac-arctanxbx2cdx/3004022#3004022 https://math.stackexchange.com/questions/3106051/why-is-catalans-constant-g-important/3106088#3106088 https://math.stackexchange.com/questions/3220645/how-to-find-int-0-pi-2-pi-x-4x2-log1-tan-x-mathrm-dx/3220860#3220860 And even some of my own question I would call interesting $($ ^^$)$: https://math.stackexchange.com/questions/2943752/show-that-int-01-frac-operatornameli-31-z-sqrtz1-zdz-frac-pi3 https://math.stackexchange.com/questions/3051228/show-that-sum-limits-n-1-infty-frac1n2-sum-limits-n-1-infty-frac https://math.stackexchange.com/questions/3039874/evaluate-int-01-left-logx-log1-x-operatornameli-2x-right-left-fr 
https://math.stackexchange.com/questions/2942630/show-that-int-01-frac-lnx1xdx-frac12-int-01-frac-ln-x1-xdx
I was reading about energy usage in batteries and don't quite understand why it is measured in different units than home electrical usage. An ampere-hour does not include a measure of volts. But my understanding is that a battery has a constant voltage (1.5V, 9V, ...) just as much as home electrical usage (120V, 220V, ...). So I don't see why they have different units by which they are measured.

\$kW \cdot h\$ are a measure of energy, for which grid customers are billed, and usually show up on your invoice in easily understood numbers (0-1000, not 0-1 or very large numbers; ranges which, unfortunately, confuse many people).

\$A \cdot h\$ are a measure of electrical charge. A battery (or capacitor) can store more or less a certain amount of charge regardless of its operating conditions, whereas its output energy can change. If the voltage curve for a battery in certain operating conditions is known (circuit, temperature, lifetime), then its output energy is also known, but not otherwise, though you can come up with some pretty good estimates. To convert from \$A \cdot h\$ to \$kW \cdot h\$ for a constant voltage source, multiply by that voltage; for a changing voltage and/or current source, integrate over time:

$$ E_{kW \cdot h} = \frac{1}{1000}\int_{t_1}^{t_2} \! I(t)E(t)\,dt ~;~~E~[V],~I~[A],~{t_{1,2}}~[h]$$

A note about battery voltage: rated battery voltage is "nominal". A fully charged 12 volt lead acid battery actually starts out around ~14.4 volts and drops off as you draw energy from it. The actual battery voltage depends on a number of factors not limited to state of charge, battery age, load profile, chemistry, etc. For instance, a lithium ion battery of 3.7V (nominal) may start out at 4.15 volts and diminish to ~2.7 volts before requiring recharge.
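The conversion integral above is easy to approximate numerically; a sketch in Python, where the discharge profile is entirely made up for illustration (real curves come from datasheets):

```python
# Numerically integrate I(t)*E(t) to get the energy delivered, per the formula above.
# The linear voltage-sag profile below is a hypothetical example, not measured data.

def battery_energy_wh(voltage, current, hours):
    """Trapezoidal integration of instantaneous power over time (result in W*h)."""
    total = 0.0
    for i in range(len(hours) - 1):
        p0 = voltage[i] * current[i]
        p1 = voltage[i + 1] * current[i + 1]
        total += 0.5 * (p0 + p1) * (hours[i + 1] - hours[i])
    return total

# Hypothetical Li-ion cell: voltage sags from 4.15 V to 2.7 V over a 5 h discharge
hours   = [0, 1, 2, 3, 4, 5]
voltage = [4.15, 3.9, 3.75, 3.6, 3.3, 2.7]
current = [0.26] * 6            # constant 260 mA draw

wh = battery_energy_wh(voltage, current, hours)
charge_ah = 0.26 * 5            # constant current: charge is just I * t
print(round(wh, 2), charge_ah)  # about 4.67 Wh delivered from 1.3 Ah of charge
```

Note that the charge (A·h) falls straight out of the current, while the energy (W·h) depends on the whole voltage curve, which is exactly the asymmetry the answer describes.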
Watt-hours (or kW·h) are an indicator of the energy storage capacity of the battery, whereas amp-hours refer to how many amps you can draw from a battery at full charge for an hour before it is no longer capable of providing that level of flow (perhaps at or above the rated voltage?). They are closely related, but not equivalent. Some batteries are designed more for high current draw devices, whereas others are designed to last a long time for lower current draw devices.

Appended: Now that I look at my cell phone battery, I notice that it has all three ratings printed on it. It is a lithium-ion battery whose nominal voltage rating is 3.7V. Its energy capacity is marked as 4.81 watt-hours. Its electric charge rating is 1300 milliamp-hours. This seems to indicate that Energy = Voltage × Electric Charge (at least in terms of the battery ratings), though I think this equation is hiding the fact that there is an integration of P = VI going on and that V is more like an average value than a constant, which probably gives a pretty good approximation.

The way a battery works, the total coulombs it can push around falls out more directly than the total energy it can store. The voltage is not constant. It varies by state of charge, for one, and the relationship between the two can be quite different between battery chemistries. All this is to say that A·h is more relevant to battery manufacturers than W·h or joules. Joules can of course be relevant to circuit design, so this information is available, just not included in the two-second sound bite called the amp-hour rating. Battery datasheets can get quite complex. As with most things, there are a host of tradeoffs, and thorough information is more than a single number. If you do have to pick just two numbers to quickly characterize a battery, volts and amp-hours are as good as any, and are what the industry has converged on.

A battery's voltage changes over its lifetime.
The current is set by the circuit it is connected to. As the current is a known value and can be predicted, while the voltage cannot, the units are in the value that can be predicted. Your electricity supply is a constant voltage and can be predicted.

One factor not yet mentioned is that because batteries have a certain amount of internal resistance, drawing more current will cause the voltage to sag. Suppose, hypothetically, that a particular battery that's been discharged a certain amount will supply 12 volts when supplying 10mA, or 10 volts when supplying 100mA. Drawing 10mA from the battery for 10 hours will discharge it about as much as drawing 100mA for an hour, but in the former scenario the battery would have supplied 20% more "useful" energy. Key point: a larger fraction of the energy in a battery will be lost when trying to drain it quickly than when trying to drain it slowly.

Power lines also have a certain level of resistance, and similar factors may apply, but the line voltage reaching a residential customer's meter is generally not appreciably affected by that customer's usage. A power company could supply one amp at 105 volts using 20% less energy (per unit time) than would be required to supply one amp at 126 volts. If customers were billed per amp-hour, the power companies would have an incentive to supply their energy at the lowest possible voltage. Billing per kWh means the customer's billable usage will be proportional to the amount of energy the power company has to generate to supply it. Incidentally, some devices (e.g. induction motors) will often draw less current at higher voltages (while doing the same amount of work), while other devices like incandescent lamps and heaters will draw more current at higher voltages (while producing substantially more light and heat).

Simply, an "amp-hour" is not a scientific or SI unit.
Amp·hr is a rating that battery manufacturers use, but because one ampere = one coulomb per second, multiplying by one hour makes the time factors cancel, and the result is simply 1 A·h = 3600 coulombs of charge, with no time factor involved. So a bit of smoke and mirrors from the battery manufacturers. If you want to really know how your battery is going to perform, you will have to look a little deeper than taking the word of the sales people!
Based on an original page posted by Nick Grassly on the H1N1 pandemic website.

The basic reproduction number of the swine influenza epidemic, \( R_{0} \), can be estimated from its initial rate of spread. If we assume roughly exponential growth, then the basic reproduction number is related to the growth rate by the so-called Lotka-Euler estimating equation:

\[ \frac{1}{R_{0}}=\int_{0}^{\infty} w(\tau)\, e^{-r\tau}\, d\tau \]

where \( r \) is the rate of exponential growth and \( w(\tau) \) the generation time distribution [1]. The generation time distribution can be thought of as the probability density function describing the distribution of times between successive infections in a chain of transmission. The estimate of the basic reproduction number is therefore dependent not just on an estimate of \( r \), but also on a good estimate of the generation time distribution [2]. In the case of swine influenza the generation time distribution is unclear, but data appear consistent with seasonal influenza, which has a mean generation time of approximately 3 days.

Analytical solutions for \( R_{0} \) can be derived for different assumed generation time distributions using the Lotka-Euler estimating equation (the integral is essentially a moment generating function evaluated at \( -r \)). If we assume a generation time distribution that follows the gamma distribution, then

\[ R_{0}=\left(1+\frac{r}{b}\right)^{a} \]

where \( a \) and \( b \) are the parameters of the gamma distribution (\( a = m^{2}/s^{2} \) and \( b = m/s^{2} \), where \( m \) and \( s \) are the mean and standard deviation of the distribution respectively). Estimates of \( R_{0} \) based on the estimates of \( r \) reported by Andrew Rambaut are given in Table 1. Obviously \( r \) can also be estimated from epidemiological case data, and this may give different results.
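The closed-form gamma result is easy to evaluate numerically; a sketch in Python (the function names are ours; the values reproduce Table 1 below within rounding):

```python
def r0_gamma(r, mean, sd):
    """R0 for a gamma-distributed generation time (closed-form Lotka-Euler result)."""
    a = mean**2 / sd**2      # gamma shape parameter
    b = mean / sd**2         # gamma rate parameter
    return (1 + r / b)**a

def r0_exponential(r, mean):
    """Special case a = 1 (exponential generation time, i.e. the SIR model)."""
    return 1 + r * mean

print(round(r0_gamma(0.053, 3, 2), 2))      # about 1.17
print(round(r0_exponential(0.053, 3), 2))   # about 1.16
```

The exponential case reduces to the familiar SIR relation \( R_{0} = 1 + rm \), since an exponential generation time is a gamma distribution with \( a = 1 \).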
\( r \) (per day, with 95% HPD) | Generation time distribution (parameters in days) | \( R_{0} \)
0.053 (0.0014, 0.12) | gamma (\( m=3 \), \( s=2 \)) | 1.17 (1.00 - 1.40)
0.053 (0.0014, 0.12) | exponential (\( m=3 \)) (i.e. SIR model) | 1.16 (1.00 - 1.36)

Table 1 | Estimates of \( R_{0} \) from the coalescent growth rate \( r \) for the early period of pandemic H1N1.

Citations

1. Wallinga J, Lipsitch M. (2007) How generation intervals shape the relationship between growth rates and reproductive numbers. Proc Roy Soc Lond B 274: 599-604.
2. Grassly NC, Fraser C. (2008) Mathematical models of infectious disease transmission. Nat Rev Microbiol 6: 477-487.
How does spin arise in (relativistic or not) quantum mechanics? What are particles in the first place? And what precisely is this property that we call "spin"?

A modern way of arriving at the notion of particles, that may be more transparent than the historical version you summarize in your post, is the way they are introduced in Weinberg's textbook on Quantum Field Theory. Take the Hilbert space $\mathcal{H}$ of your quantum theory, and take the group $G$ of symmetries of your spacetime. To ensure that an experiment will give the same results if I move it around in spacetime, rotate it, or take it with me on a cruise, there should exist a unitary representation $U$ of $G$ on $\mathcal{H}$. Ie. for any $g \in G$, there should exist a unitary operator $U(g)$ on $\mathcal{H}$, with:$$U(1) = \text{id}_{\mathcal{H}} \;\&\; U(g.h) = U(g) U(h).$$

We can then decompose this representation $\mathcal{H}, U$ into simpler representations. This means writing $\mathcal{H}$ as a direct sum of smaller Hilbert spaces:$$\mathcal{H} = \bigoplus_k \mathcal{H}_k$$with each $\mathcal{H}_k$ being stabilized by all $U(g)$ for $g \in G$, so that $\mathcal{H}_k, U_k := \left. U \right|_{\mathcal{H}_k}$ is itself a unitary representation of $G$. A representation that does not contain any smaller representation is called an irreducible representation, or irrep, and the simplest irreps are the ones that hold quantum states with just one particle (refer to Weinberg for what "simplest" exactly means here). So each (simple) irrep of the symmetry group $G$ that can be found in our theory $\mathcal{H}$ is what we define as a particle species.

Integer spins

Good, so if we want to know which particle species are physically possible, we just need to know what the "simplest" representations of our group of symmetries are. Fortunately, a full classification of those is known for the Poincaré or Galilean group.
The way it is constructed would be too long to be reproduced here, but again it can be found in great detail in Weinberg for the Poincaré group (brief accounts of both the Poincaré and the Galilean case can be found on wikipedia). The bottom line is that, for physically-admissible massive particles, they have the form:$$\mathcal{H}_k = \text{Span} \left\{ \left| \vec{p}, m \right\rangle \middle| \vec{p} \in \mathbb{R}^3, m \in \mathbb{Z}, -s_k \leq m \leq +s_k \right\}$$with the non-negative integer $s_k$ being what is called the spin of this particle species $k$ and $\vec{p}$ being its momentum. The momentum determines how the particle transforms under a spatial translation:$$U_k(\text{translation by } \vec{\alpha}) \left| \vec{p}, m \right\rangle = e^{i \vec{\alpha}.\vec{p}} \left| \vec{p}, m \right\rangle$$(just as in the good old momentum representation of QM). Its spin number determines how it transforms under a rotation:$$U_k(\text{rotation by } \vec{\theta}) \left| \vec{p}, m \right\rangle = \sum_{m^\prime} R^{(s_k)}_{mm^\prime}(\vec{\theta}) \left| R(\vec{\theta}) \vec{p}, m^\prime \right\rangle$$with $R(\vec{\theta})$ the usual $3\times 3$ rotation matrix acting on $\vec{p}$ and $R^{(s_k)}$ an irrep of the rotation group $\mathcal{SO}(3)$.

The intuition behind this form of $U_k$ is that the spin $s_k$ captures the way the particle may be affected by a rotation beyond the obvious rotation of its momentum $\vec{p}$. The classical analogy here is that of a rigid body, which changes not only its position but also its orientation under a rotation. The reason the spin is an integer is that the irreps of $\mathcal{SO}(3)$ are labeled by integers: spin-0 is the trivial representation $R^{(0)}(\vec{\theta}) = \text{id}, \forall \vec{\theta} \in \mathbb{R}^3$ on a 1-dimensional vector space, spin-1 is the usual representation by $3 \times 3$ matrices, and so on.

Half-integer spins

But now there is a twist (figuratively and mathematically...).
As mentioned in an earlier comment by ACuriousMind, and as explained in great detail in the linked thread, the overall phase of a quantum state is not physically measurable. This means that we can get away with less than a strict unitary representation of $G$ on $\mathcal{H}$, and still ensure that all experimental results are invariant under $G$! Namely, we can replace:$$U(g\cdot h) = U(g) U(h) \text{ by } U(g\cdot h) = e^{i \varphi(g,h)} U(g) U(h)$$with the phase factors $\varphi(g,h)$ satisfying suitable consistency relations. Such unitary representations "up to additional phase factors" are called projective representations. If one goes through the math, one finds that for the Poincaré/Galilean group this gives a few additional possible irreps, corresponding to particle species with half-integer spin. They correspond to projective representations of the rotation group in which a rotation of $2\pi$ has a non-trivial (albeit non-detectable) action on the quantum state:$$U_k(\text{rotation by } 2\pi) \left| \vec{p}, m \right\rangle = - \left| \vec{p}, m \right\rangle$$

Observable signature?

But wait! If this extra minus sign is not physically detectable anyway, how do we know that some particles have half-integer spin? This has to do with the properties of quantum measurements, which will reveal the spectrum (aka. eigenvalues) of the measured observable. We cannot directly observe that the quantum state vector transforms under a projective representation, but we can determine it indirectly because it gets imprinted in the spectrum of the angular-momentum operator.

What about classical (non-quantum) mechanics?

Take a classical mechanical system, say, for concreteness, a system of rigid bodies possibly interacting via conservative forces. The phase space of such a system carries a non-projective representation $T$ of the Galilean group (we can check this by writing it explicitly).
But this representation is not a linear representation (at best it may be an affine representation, since translations act, well, by translations). So spin in the above sense does not immediately make sense. Instead, we can do classical statistical physics for this system: ie. write a field equation for a probability distribution $\rho$ on the phase space (which can be seen as the classical counterpart of a quantum mechanical wave-function). The space $\mathcal{P}$ of such probability distributions naturally carries a linear representation $U$ of $G$ defined by:$$\forall g \in G,\; [U(g) \rho](x) = \rho\big(T(g^{-1})x\big)$$which is, again, a non-projective representation (strictly speaking, admissible probability distributions are positive and normalized, but we can study their spin properties by working in the vector space they span: this is analogous to considering the whole Hilbert space in quantum mechanics, although actual quantum states should be normalized).

So, what would a "half-integer-spin mode" for such a system be? According to the previously explained definition of spin, that would be a half-integer-spin irrep $\mathcal{P}_k \subseteq \mathcal{P}, U_k := \left. U \right|_{\mathcal{P}_k}$ appearing in the decomposition of $\mathcal{P}, U$. Can such a $\mathcal{P}_k$ exist? No! Indeed, if it did, we would have a distribution $\rho \in \mathcal{P}_k \setminus \{0\}$ such that$$U(\text{rotation by } 2\pi) \rho = U_k (\text{rotation by } 2\pi) \rho = -\rho,$$but, since $U$ is a non-projective representation, we already know that $U(\text{rotation by } 2\pi) \rho = \rho$. A similar argument can be applied for example to the classical electromagnetic field: the space of solutions of Maxwell's equations carries a non-projective linear representation of the Poincaré group (one could say: by historical definition of the latter).

What about a thermodynamical system?
Suppose I take a large number of mechanical bodies interacting via conservative forces (say molecules) and take the thermodynamical limit to derive effective equations for some macroscopic variable (e.g. their density). Could such an equation exhibit half-integer-spin modes? I.e. could its space of solutions carry a projective representation of G? Let us do a thought experiment: Take two rigorously identical boxes containing this thermodynamical system and perform the exact same experiment on them, except that the second one is first subjected to a full $2\pi$-rotation (very slowly, so as not to perturb any (local) thermodynamical equilibrium). Because the underlying microscopic theory carries a non-projective representation of G, the two experiments should give the exact same result! Note that in arguments of this kind, one has to be very careful. The thermodynamic limit can do funny things to the symmetries of a system. This is known as symmetry-breaking: while the space of solutions of the underlying microscopic theory may be invariant under a certain group $G$, a given thermodynamical phase may have less symmetry because it fails to explore the full solution space (keyword: ergodicity, or more precisely lack thereof). But such a mechanism cannot turn a non-projective representation into a projective one: since a $2\pi$-rotation brings me back to the exact same microscopic configuration from which I started, I am guaranteed not to land in a different thermodynamical phase. Can we cheat? Suppose I come up with a mathematical description of some physically valid classical system in which, for technical reasons, I choose to introduce some auxiliary, non-measurable quantity (e.g. a complex phase). Since the auxiliary quantity is non-measurable, I can let it transform in whatever way is mathematically convenient. In this way, I may arrive at a description of a classical system which exhibits a projective representation.
But still the original physical system will not exhibit any observable half-integer-spin behavior. As the truly measurable quantities have to be invariant under a full $2\pi$-rotation, there should exist a basic description of the same system that refrains from introducing any auxiliary quantities and carries a non-projective representation. Computing experimental predictions using this basic description, no half-integer spins should show up. TL;DR: This is crucially different from the quantum mechanical case discussed above, in which you can hide a projective representation, so as to preserve $2\pi$-rotation-invariance, while nevertheless retaining some observable signature. Bonus: Does Wick-rotating a quantum equation give a thermodynamical equation? I do not think Wick rotation should be thought of as some kind of magic transformation to turn a QM equation into a thermodynamical one. There is a connection between quantum field theory on 3+1d Minkowski spacetime and statistical field theory in 4d Euclidean space. But statistical physics (the study of the probability distribution over (field) configurations) is not quite the same as thermodynamics (the derivation of effective equations for macroscopic variables in the large-number-of-particles limit). I suspect the appearance of the heat equation as the complex-time Schrödinger equation is more a coincidence, coming from the fact that, well, there are only so many linear PDEs you can write with a certain order in space and time derivatives. If you would like to investigate the Wick-rotated Dirac equation anyway, I guess a good place to start would be the Wick-rotated gamma matrices. You will get a field equation carrying a projective representation of the 4d Euclidean group, sure.
But Wick-rotating a physically valid quantum equation does not a priori guarantee any particular physical relevance for the resulting equation: in fact, such an equation cannot describe any actual physical system, if only because, as pointed out by flippiefanus, we do not live in 4d Euclidean space ;-).
The Annals of Applied Probability, Volume 26, Number 2 (2016), 722-759. The mean Euler characteristic and excursion probability of Gaussian random fields with stationary increments Abstract Let $X=\{X(t),t\in\mathbb{R}^{N}\}$ be a centered Gaussian random field with stationary increments and $X(0)=0$. For any compact rectangle $T\subset\mathbb{R}^{N}$ and $u\in\mathbb{R}$, denote by $A_{u}=\{t\in T:X(t)\geq u\}$ the excursion set. Under $X(\cdot)\in C^{2}(\mathbb{R}^{N})$ and certain regularity conditions, the mean Euler characteristic of $A_{u}$, denoted by $\mathbb{E}\{\varphi(A_{u})\}$, is derived. By applying the Rice method, it is shown that, as $u\to\infty$, the excursion probability $\mathbb{P}\{\sup_{t\in T}X(t)\geq u\}$ can be approximated by $\mathbb{E}\{\varphi(A_{u})\}$ such that the error is exponentially smaller than $\mathbb{E}\{\varphi(A_{u})\}$. This verifies the expected Euler characteristic heuristic for a large class of Gaussian random fields with stationary increments. Article information Received: November 2012. Revised: December 2014. First available in Project Euclid: 22 March 2016. Permanent link: https://projecteuclid.org/euclid.aoap/1458651818. DOI: 10.1214/15-AAP1101. MathSciNet: MR3476623. Zentralblatt MATH: 1339.60055. Citation Cheng, Dan; Xiao, Yimin. The mean Euler characteristic and excursion probability of Gaussian random fields with stationary increments. Ann. Appl. Probab. 26 (2016), no. 2, 722-759. doi:10.1214/15-AAP1101.
I'm trying to typeset some axiomatic logic proofs in list form. I have tried to define a new environment such that I could combine the proof environment of the amsthm package with a table, resulting in the ugly picture underneath. What I would like to achieve is that the lines of the proof are automatically numbered (I've tried to do that with a counter), that they have the same indentation as an enumeration, that the descriptions are nicely aligned (that's why I tried to use a table) and that the QED symbol is placed properly. How can I do that?

\documentclass{article}
\usepackage{array,amsthm,amssymb,amsmath}
\newcounter{rowcount}
\newenvironment{listproof}
  {\setcounter{rowcount}{0}%
   \begin{proof}\mbox{}\\\\
   \begin{tabular}{@{\stepcounter{rowcount}(\alph{rowcount})\hspace*{\tabcolsep}}ll}}
  {\end{tabular}\end{proof}}
\newtheorem{thm}{Theorem}
\newcommand{\necc}{\ensuremath{\mathbin{\Box}}}
\newcommand{\limpl}{\ensuremath{\mathbin{\rightarrow}}}
\newcommand{\theo}{\ensuremath{\mathrel{\vdash}}}
\begin{document}
\begin{thm}
  $\theo\phi\limpl\psi \implies \theo\necc\phi\limpl\necc\psi$.
\end{thm}
\begin{listproof}
  $\phi\limpl\psi$ & given\\
  $\necc(\phi\limpl\psi)$ & N, a\\
  $\necc(\phi\limpl \psi)\limpl(\necc \phi\limpl\necc\psi)$ & $[\phi/p, \psi/q]\,$K\\
  $\necc{\phi}\limpl\necc{\psi}$ & MP, b, c\\
\end{listproof}
\end{document}
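One possible direction (a sketch only, assuming the enumitem package; the `listproof` variant and the helper `\pline` below are my own hypothetical names): let an enumerate produce the (a), (b), ... labels and indentation, and align the justifications with a fixed-width box instead of a tabular, so `\qedhere` can place the QED symbol on the last line.

```latex
\documentclass{article}
\usepackage{amsthm,amssymb,amsmath,enumitem}
% Sketch: enumerate supplies numbering and indentation; a fixed-width
% \makebox reserves space for the formula so justifications line up.
\newenvironment{listproof}
  {\begin{proof}\leavevmode
   \begin{enumerate}[label=(\alph*),itemsep=0pt]}
  {\end{enumerate}\end{proof}}
% \pline{formula}{justification}: hypothetical helper macro.
\newcommand{\pline}[2]{\item \makebox[0.5\linewidth][l]{$#1$} #2}
\begin{document}
\begin{listproof}
  \pline{\phi\rightarrow\psi}{given}
  \pline{\Box(\phi\rightarrow\psi)}{N, a}
  \pline{\Box(\phi\rightarrow\psi)\rightarrow(\Box\phi\rightarrow\Box\psi)}{$[\phi/p, \psi/q]\,$K}
  \pline{\Box\phi\rightarrow\Box\psi}{MP, b, c \qedhere}
\end{listproof}
\end{document}
```

Placing `\qedhere` inside the last item keeps the QED symbol on the final proof line rather than on a line of its own.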
According to Maxwell's equations, magnetic fields are divergence-free: $\nabla \cdot \mathbf{B} = 0$. If I understand this correctly, this means that magnetic field lines do not start or end. How can we reconcile this with magnetic reconnection? One must be very careful in making the step from $\nabla\cdot\mathbf{B}=0$ to a statement such as "magnetic field lines do not start or end". Consider the field in the region of an X-point type magnetic null (in two dimensions). Take a 'volume' (i.e. an area) centred on the null point, and look at the field lines through the bounding curve. No matter how small you make the volume, you will see an equal number of field lines of equal strength entering and leaving the volume. At the point of reconnection (in an idealised case) the field lines 'start' and 'end' at an infinitesimal point. Even in the limit that your volume for the purposes of the calculation tends to zero (which defines the scalar field of divergence), you will still have equal flux 'into' and 'out of' that volume. Note the sentence in this source, where it is stated that "[f]an field lines and spine field lines are notable exceptions to the general tenet that field lines have no beginning or ending – it seems that certain field lines terminate at null points." There is however, as discussed above, no violation of the condition that the field be divergence-free. Edit: With the amount of attention this post is getting, I feel I should add a couple of points of clarification. In no sense am I saying that any such thing as a 'magnetic monopole' exists at a reconnecting X-point. In the resistive MHD picture, at an infinitesimal spatial point and for an infinitesimal time, magnetic field lines essentially lose their identity when they pass through the reconnection region. It makes no sense to talk about 'tracing' a field line across the X-point as we normally do when we plot maps of field lines. 
All we can say for sure is that the flux into and the flux out of a sufficiently small (formally infinitesimal) volume around the X-point are equal, satisfying $\nabla\cdot\mathbf{B}=0$. The sense in which the field lines 'terminate' at the reconnection is a corollary to this; because we can't identify any particular path which carries us smoothly across the X-point along a particular field line, we're forced to admit a discontinuity. This is why MHD equilibrium solvers, for example, use certain computational tricks to 'skirt round' the X-point in a given configuration rather than modelling the field all the way to the discontinuity. The foregoing discussion is valid only as long as the resistive MHD picture is valid; once we get down to scales comparable to the electron gyro-radius, the whole thing requires a self-consistent kinetic approach. The statement that $\nabla\cdot\mathbf B=0$ indicates that there are no magnetic monopoles, so there isn't a "starting" or "ending" point for field lines, is mostly correct. So this must mean that magnetic field lines either (1) form closed loops, (2) extend to infinity, or (3) intersect the domain boundary (wall, stellar surface, etc.). So the "starting & ending points" issue is nuanced beyond what you've stated. With reconnection, we can usually assume the middle option: field lines extend to infinity (though invoking that they intersect the boundary is just as valid). For those unaware, magnetic reconnection is when magnetic field lines pointing in opposite directions pinch together (reconnect) and form new lines: (source) To model this, one needs to modify the ideal MHD equations (this is because if we assume $\mathbf B\parallel\delta\mathbf x$, where $\delta\mathbf x$ is some displacement of field lines, it will remain so for all time $t$).
For typical plasmas, one uses Faraday's law in conjunction with the Lorentz force to model the magnetic evolution,$$\frac{\partial \mathbf B}{\partial t}=-\nabla\times\mathbf E=\nabla\times\mathbf u\times\mathbf B\tag{1}$$But when considering magnetic reconnection, the conductivity isn't assumed to be infinite, so we have to use Ohm's law and add the current density, $\mathbf J\sim\nabla\times\mathbf B$:$$\frac{\partial \mathbf B}{\partial t}=\nabla\times\left(\mathbf u\times\mathbf B+\eta\nabla\times\mathbf B\right)\tag{2}$$where $\eta$ is the magnetic diffusivity. So now the magnetic field can diffuse, rather than simply moving along the flow; this is what allows for reconnection to occur in the plasma. However, because the divergence of the curl of any vector is identically zero, $\nabla\cdot\nabla\times\mathbf A=0$, both (1) and (2) satisfy the divergence-free condition. Magnetic reconnection comes from a cartoon picture of what magnetic field line motion may portray. This is not based on any physical law. Field lines are not real entities - just a means to display the lines of force when a magnetic field is present in space. Field line motion is non-unique also, which is a fundamental flaw for people relying on field line motion to understand physics. That is why magnetic reconnection is so misleading. It is unphysical to cut and join magnetic field lines. The exact equation for describing the time rate of change of B is not what is given in Eq. (2) by Kyle Kanos. The generalized Ohm's law is far more complicated than representing all the other missing terms in the simplified Ohm's law by a scalar resistivity. One should learn more about how magnetic reconnection is entrenched in space plasma physics - see, e.g., the following publications (note that Hannes Alfvén is a Nobel Laureate and is the one who invented MHD): Alfvén, H., On frozen-in field lines and field-line reconnection, J. Geophys.
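As a toy illustration of the diffusive term in Eq. (2) (my own sketch, not from the answers above): with $\mathbf u = 0$ the induction equation reduces to a pure diffusion equation $\partial_t B = \eta\,\partial_x^2 B$, which can be stepped explicitly for a Harris-sheet-like profile $B_y(x)=\tanh(x)$, the oppositely directed field configuration usually drawn in reconnection cartoons:

```python
import numpy as np

# 1-D resistive diffusion of B_y(x) = tanh(x): the current sheet
# broadens as oppositely directed field annihilates across x = 0.
def diffuse_field(nx=201, eta=1.0, dt=1e-4, steps=500):
    x = np.linspace(-5.0, 5.0, nx)
    dx = x[1] - x[0]
    B = np.tanh(x)                     # oppositely directed field regions
    for _ in range(steps):
        lap = np.zeros_like(B)
        lap[1:-1] = (B[2:] - 2 * B[1:-1] + B[:-2]) / dx**2
        B = B + eta * dt * lap         # explicit Euler step; boundaries fixed
    return x, B

x, B = diffuse_field()
# The central gradient of B is now smaller than the initial slope of 1:
# the field "diffuses" instead of being frozen into the (static) flow.
```

In ideal MHD ($\eta = 0$) the profile would be frozen in place; the finite $\eta$ is what lets the topology change.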
Res., 81, 4019-4021, 1976; Alfvén, H., Electrical currents in cosmic plasmas, Rev. Geophys., 15, 271-284, 1977; Akasofu, S.-I., Auroral substorms as an electrical discharge phenomenon, Progress in Earth and Planetary Science, 2:20, doi:10.1186/s40645-015-0050-9, 2015; Lui, A. T. Y., Comparison of current disruption and magnetic reconnection, Geosci. Lett., 2:14, doi:10.1186/s40562-015-0031-2, 2015.
Another of the uses of the \(F\) distribution is testing two variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be the same. A supermarket might be interested in the variability of check-out times for two checkers. To perform an \(F\) test of two variances, it is important that the following are true: The populations from which the two samples are drawn are normally distributed. The two populations are independent of each other. Unlike most other tests in this book, the \(F\) test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, the test can give higher \(p\text{-values}\) than it should, or lower ones, in ways that are unpredictable. Many texts suggest that students not use this test at all, but in the interest of completeness we include it here. Suppose we sample randomly from two independent normal populations. Let \(\sigma^{2}_{1}\) and \(\sigma^{2}_{2}\) be the population variances and \(s^{2}_{1}\) and \(s^{2}_{2}\) be the sample variances. Let the sample sizes be \(n_{1}\) and \(n_{2}\). Since we are interested in comparing the two sample variances, we use the \(F\) ratio: \[F = \dfrac{\left[\dfrac{(s_{1})^{2}}{(\sigma_{1})^{2}}\right]}{\left[\dfrac{(s_{2})^{2}}{(\sigma_{2})^{2}}\right]}\] \(F\) has the distribution \[F \sim F(n_{1} - 1, n_{2} - 1)\] where \(n_{1} - 1\) are the degrees of freedom for the numerator and \(n_{2} - 1\) are the degrees of freedom for the denominator.
If the null hypothesis is \(\sigma^{2}_{1} = \sigma^{2}_{2}\), then the \(F\) Ratio becomes \[F = \dfrac{\left[\dfrac{(s_{1})^{2}}{(\sigma_{1})^{2}}\right]}{\left[\dfrac{(s_{2})^{2}}{(\sigma_{2})^{2}}\right]} = \dfrac{(s_{1})^{2}}{(s_{2})^{2}}.\] The \(F\) ratio could also be \(\dfrac{(s_{2})^{2}}{(s_{1})^{2}}\). It depends on \(H_{a}\) and on which sample variance is larger. If the two populations have equal variances, then \(s^{2}_{1}\) and \(s^{2}_{2}\) are close in value and \(F = \dfrac{(s_{1})^{2}}{(s_{2})^{2}}\) is close to one. But if the two population variances are very different, \(s^{2}_{1}\) and \(s^{2}_{2}\) tend to be very different, too. Choosing \(s^{2}_{1}\) as the larger sample variance causes the ratio \(\dfrac{(s_{1})^{2}}{(s_{2})^{2}}\) to be greater than one. If \(s^{2}_{1}\) and \(s^{2}_{2}\) are far apart, then \[F = \dfrac{(s_{1})^{2}}{(s_{2})^{2}}\] is a large number. Therefore, if \(F\) is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if \(F\) is much larger than one, then the evidence is against the null hypothesis. A test of two variances may be left, right, or two-tailed. Example \(\PageIndex{1}\) Two college instructors are interested in whether or not there is any variation in the way they grade math exams. They each grade the same set of 30 exams. The first instructor's grades have a variance of 52.3. The second instructor's grades have a variance of 89.9. Test the claim that the first instructor's variance is smaller. (In most colleges, it is desirable for the variances of exam grades to be nearly the same among instructors.) The level of significance is 10%. Answer Let 1 and 2 be the subscripts that indicate the first and second instructor, respectively. \(n_{1} = n_{2} = 30\).
\(H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}\) and \(H_{a}: \sigma^{2}_{1} < \sigma^{2}_{2}\) Calculate the test statistic: By the null hypothesis \(\sigma^{2}_{1} = \sigma^{2}_{2}\), the \(F\) statistic is: \[F = \dfrac{\left[\dfrac{(s_{1})^{2}}{(\sigma_{1})^{2}}\right]}{\left[\dfrac{(s_{2})^{2}}{(\sigma_{2})^{2}}\right]} = \dfrac{(s_{1})^{2}}{(s_{2})^{2}} = \dfrac{52.3}{89.9} = 0.5818\] Distribution for the test: \(F_{29,29}\) where \(n_{1} - 1 = 29\) and \(n_{2} - 1 = 29\). Graph: This test is left-tailed. Draw the graph, labeling and shading appropriately. Figure \(\PageIndex{1}\) Probability statement: \(p\text{-value} = P(F < 0.5818) = 0.0753\) Compare \(\alpha\) and the \(p\text{-value}\): \(\alpha = 0.10\), so \(\alpha > p\text{-value}\). Make a decision: Since \(\alpha > p\text{-value}\), reject \(H_{0}\). Conclusion: With a 10% level of significance, from the data, there is sufficient evidence to conclude that the variance in grades for the first instructor is smaller. Press STAT and arrow over to TESTS. Arrow down to D:2-SampFTest. Press ENTER. Arrow to Stats and press ENTER. For Sx1, n1, Sx2, and n2, enter \(\sqrt{52.3}\), 30, \(\sqrt{89.9}\), and 30. Press ENTER after each. Arrow to σ1: and <σ2. Press ENTER. Arrow down to Calculate and press ENTER. F = 0.5818 and p-value = 0.0753. Do the procedure again and try Draw instead of Calculate. Exercise \(\PageIndex{1}\) The New York Choral Society divides male singers up into four categories from highest voices to lowest: Tenor1, Tenor2, Bass1, Bass2. In the table are heights of the men in the Tenor1 and Bass2 groups. One suspects that taller men will have lower voices, and that the variance of height may go up with the lower voices as well. Do we have good evidence that the variances of the heights of singers in these two groups (Tenor1 and Bass2) are different?
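The worked example can be reproduced in a few lines. This sketch assumes SciPy is available and uses its F-distribution CDF for the left-tail probability:

```python
from scipy import stats

# Left-tailed F test for H0: sigma1^2 = sigma2^2 vs Ha: sigma1^2 < sigma2^2,
# with sample variances 52.3 and 89.9 and n1 = n2 = 30, as in the example.
F = 52.3 / 89.9                     # test statistic under H0
df1, df2 = 30 - 1, 30 - 1           # numerator / denominator degrees of freedom
p_value = stats.f.cdf(F, df1, df2)  # P(F(29, 29) < 0.5818), the left tail
print(F, p_value)                   # F ≈ 0.5818, p ≈ 0.0753 as in the text
```

Since the p-value (≈ 0.0753) is below α = 0.10, the calculation reproduces the textbook decision to reject \(H_{0}\).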
Tenor1 Bass2 | Tenor1 Bass2 | Tenor1 Bass2
69 72 | 67 72 | 68 67
72 75 | 70 74 | 67 70
71 67 | 65 70 | 64 70
66 75 | 72 66 | 69 76
74 70 | 68 72 | 74 72
68 75 | 71 71 | 72 64
68 74 | 66 74 | 73 70
75 68 | 72 66 | 72
Answer The histograms are not as normal as one might like. Plot them to verify. However, we proceed with the test in any case. Subscripts: \(\text{T1} =\) tenor 1 and \(\text{B2} =\) bass 2 The standard deviations of the samples are \(s_{\text{T1}} = 3.3302\) and \(s_{\text{B2}} = 2.7208\). The hypotheses are \(H_{0}: \sigma^{2}_{\text{T1}} = \sigma^{2}_{\text{B2}}\) and \(H_{a}: \sigma^{2}_{\text{T1}} \neq \sigma^{2}_{\text{B2}}\) (two-tailed test) The \(F\) statistic is \(1.4894\) with 20 and 25 degrees of freedom. The \(p\text{-value}\) is \(0.3430\). If we assume alpha is 0.05, then we cannot reject the null hypothesis. We have no good evidence from the data that the heights of Tenor1 and Bass2 singers have different variances (despite there being a significant difference in mean heights of about 2.5 inches). Chapter Review The F test for the equality of two variances rests heavily on the assumption of normal distributions. The test is unreliable if this assumption is not met. If both distributions are normal, then the ratio of the two sample variances is distributed as an F statistic, with numerator and denominator degrees of freedom that are one less than the sample sizes of the corresponding two groups. A test of two variances hypothesis test determines if two variances are the same. The distribution for the hypothesis test is the \(F\) distribution with two different degrees of freedom. Assumptions: The populations from which the two samples are drawn are normally distributed. The two populations are independent of each other.
Formula Review \(F\) has the distribution \(F \sim F(n_{1} - 1, n_{2} - 1)\) \(F = \dfrac{\dfrac{s^{2}_{1}}{\sigma^{2}_{1}}}{\dfrac{s^{2}_{2}}{\sigma^{2}_{2}}}\) If \(\sigma_{1} = \sigma_{2}\), then \(F = \dfrac{s^{2}_{1}}{s^{2}_{2}}\) Contributors Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.
Trigonometric functions From JSXGraph Wiki Revision as of 10:35, 23 June 2009 by A WASSERMANN The well known trigonometric functions can be visualized on the circle of radius 1. See http://en.wikipedia.org/wiki/Trigonometric_functions for the definitions. Tangent: [math]\tan x = \frac{\sin x}{\cos x}[/math] Cotangent: [math]\cot x = \frac{\cos x}{\sin x}[/math] Secant: [math]\sec x = \frac{1}{\cos x}[/math] Cosecant: [math]\csc x = \frac{1}{\sin x}[/math]
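A minimal numerical check of the four definitions above (a sketch; the evaluation point x = π/3 is chosen arbitrarily):

```python
import math

# The four derived trigonometric functions from the definitions above.
x = math.pi / 3
tan = math.sin(x) / math.cos(x)   # tan x = sin x / cos x
cot = math.cos(x) / math.sin(x)   # cot x = cos x / sin x
sec = 1 / math.cos(x)             # sec x = 1 / cos x
csc = 1 / math.sin(x)             # csc x = 1 / sin x
print(tan, cot, sec, csc)         # tan(pi/3) = sqrt(3) ≈ 1.732, sec(pi/3) = 2
```

By construction, tan x · cot x = 1 wherever both are defined, which makes a convenient sanity check.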
Let $A$ be a finite alphabet. For a given language $L \subseteq A^{\ast}$ the syntactic monoid $M(L)$ is a well-known notion in formal language theory. Furthermore, a monoid $M$ recognizes a language $L$ iff there exists a morphism $\varphi : A^{\ast} \to M$ such that $L = \varphi^{-1}(\varphi(L))$. Then we have the nice result: A monoid $M$ recognizes $L \subseteq A^{\ast}$ if $M(L)$ is a homomorphic image of a submonoid of $M$ (written as $M(L) \prec M$). The above is usually stated in the context of regular languages, and then the above monoids are all finite. Now suppose we substitute $A^{\ast}$ with an arbitrary monoid $N$, and we say that a subset $L \subseteq N$ is recognized by $M$ if there exists a morphism $\varphi : N \to M$ such that $L = \varphi^{-1}(\varphi(L))$. Then we still have that if $M$ recognizes $L$, then $M(L) \prec M$ (see S. Eilenberg, Automata, Machines and Languages, Volume B), but does the converse hold? In the proof for $A^{\ast}$ the converse is proven by exploiting the property that if $N = \varphi(M)$ for some morphism $\varphi : M \to N$ and $\psi : A^{\ast} \to N$ is also a morphism, then we can find $\rho : A^{\ast} \to M$ such that $\varphi(\rho(u)) = \psi(u)$ holds, simply by choosing some $\rho(x) \in \varphi^{-1}(\psi(x))$ for each $x \in A$ and extending this to a morphism from $A^{\ast}$ to $M$. But this does not work for arbitrary monoids $N$, so I expect the above converse to be false then. And if it is false, for what kind of monoids besides $A^{\ast}$ is it still true, and have those monoids received any attention in the research literature?
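For a concrete (hypothetical, illustrative) instance of recognition in the $A^{\ast}$ case: the language of words over $\{a,b\}$ with an even number of $a$'s is recognized by $\mathbb{Z}/2\mathbb{Z}$ via the morphism counting $a$'s modulo 2, and $L = \varphi^{-1}(\varphi(L))$ can be checked by brute force on short words:

```python
from itertools import product

# phi: {a,b}* -> Z/2Z, the morphism sending a -> 1, b -> 0.
def phi(word):
    return sum(1 for c in word if c == 'a') % 2

L_image = {0}                      # phi(L): the even-parity class

def in_L(word):
    return phi(word) in L_image    # membership via L = phi^{-1}(phi(L))

# Check against the direct definition on all words of length <= 3.
words = [''.join(p) for n in range(4) for p in product('ab', repeat=n)]
assert all(in_L(w) == (w.count('a') % 2 == 0) for w in words)
```

The key step quoted from the proof — lifting $\psi$ through a surjection by choosing preimages of the generators — is exactly what fails for a general source monoid $N$, since $N$ need not be free.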
Wave energy converters in coastal structures Introduction Fig 1: Construction of a coastal structure. Coastal works along European coasts are composed of very diverse structures. Many coastal structures are ageing and facing problems of stability, sustainability and erosion. Moreover climate change and especially sea level rise represent a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights that are greater than the dykes were designed to withstand, in particular all the structures built in shallow water where the depth imposes the maximal amplitude because of wave breaking. This necessary adaptation will be costly but will provide an opportunity to integrate converters of sustainable energy in the new maritime structures along the coasts and in particular in harbours. This initiative will contribute to the reduction of the greenhouse effect. Produced energy can be directly used for the energy consumption in the harbour area and will reduce the carbon footprint of harbours by feeding the docked ships with green energy. Nowadays these ships use their motors to produce electric power on board even when they are docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future harbours with zero emissions. Contents Wave energy and wave energy flux For regular water waves, the time-mean wave energy density E per unit horizontal area on the water surface (J/m²) is the sum of the kinetic and potential energy density per unit horizontal area. The potential energy density is equal to the kinetic energy density [1], each contributing half of the time-mean wave energy density E, which is proportional to the wave height squared according to linear wave theory [1]: (1) [math]E= \frac{1}{8} \rho g H^2[/math] where [math]\rho[/math] is the water density, g the gravitational acceleration and [math]H[/math] the wave height of regular water waves. As the waves propagate, their energy is transported. The energy transport velocity is the group velocity.
As a result, the time-mean wave energy flux per unit crest length (W/m) perpendicular to the wave propagation direction is equal to [1]: (2) [math] P= Ec_{g}[/math] with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently, on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep and shallow water, and at intermediate depths ([math]\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2}[/math]). Application for wave energy converters For regular waves in deep water: [math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math] The time-mean wave energy flux per unit crest length is used as one of the main criteria to choose a site for wave energy converters. For real seas, whose waves are random in height, period (and direction), the spectral parameters have to be used. [math]H_{m0} [/math], the spectral estimate of significant wave height, is based on the zero-order moment of the spectral function: [math]H_{m0} = 4 \sqrt{m_0} [/math] Moreover the wave period is derived as follows [2]: [math]T_e = \frac{m_{-1}}{m_0} [/math] where [math]m_n[/math] represents the spectral moment of order n. An equation similar to that describing the power of regular waves is then obtained [2]: [math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math] If local data ([math]H_{m0}, T_e [/math]) are available for a sea state, through in-situ wave buoys for example, satellite data or numerical modelling, the last equation giving the wave energy flux [math]P_{w1}[/math] gives a first estimation. Averaged over a season or a year, it represents the maximal energetic resource that can be theoretically extracted from wave energy.
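A quick numerical application of the deep-water formulas above (a sketch; the wave height, period and sea-water density are illustrative values, not taken from the article):

```python
import math

# Deep-water estimate: energy density E (Eq. 1), group velocity c_g,
# and time-mean energy flux P_w1 for regular waves of H = 2 m, T = 10 s.
rho, g = 1025.0, 9.81         # sea-water density (kg/m^3), gravity (m/s^2)
H, T = 2.0, 10.0              # illustrative wave height (m) and period (s)

E = rho * g * H**2 / 8                      # J/m^2
c_g = g * T / (4 * math.pi)                 # deep-water group velocity (m/s)
P = rho * g**2 * H**2 * T / (32 * math.pi)  # flux per metre of crest (W/m)
print(round(E), round(c_g, 2), round(P / 1000, 1))  # 5028 7.81 39.2
```

Note that the flux formula is exactly [math]P = Ec_g[/math] with the deep-water group velocity substituted in, so a 2 m, 10 s swell carries roughly 39 kW per metre of crest.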
If the directional spectrum of sea state variance F (f,[math]\theta[/math]) is known, with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used: [math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta)\, df\, d\theta[/math] Fig 2: Time-mean wave energy flux along West European coasts [3]. It can easily be shown that equations (5) and (6) reduce to (4) under the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth. From the TOMAWAC model, the nearshore wave atlas ANEMOC along the coasts of Europe and France, based on numerical modelling of the wave climate over 25 years, has been produced [4]. Using equation (4), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). This equation (4) still presents some limits, like the definition of the bounds of the integration. Moreover, the objective to get data on the wave energy near coastal structures in shallow or intermediate water requires the use of numerical models that are able to represent the physical processes of wave propagation, like refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides and diffraction by islands. The wave energy flux is therefore usually calculated for water depths greater than 20 m. This maximal energetic resource calculated in deep water will be limited in the coastal zone: at low tide by wave breaking; at high tide in storm events when the wave height exceeds the maximal operating conditions; by screening effects due to the presence of capes, spits, reefs, islands, ...
Technologies According to the International Energy Agency (IEA), more than a hundred systems of wave energy conversion are in development in the world. Among them, many can be integrated in coastal structures. Evaluations based on objective criteria are necessary in order to sort these systems and to determine the most promising solutions. Criteria are in particular: the converter efficiency: the aim is to estimate the energy produced by the converter. The efficiency gives an estimate of the number of kWh that is produced by the machine but not the cost. the converter survivability: the capacity of the converter to survive in extreme conditions. The survivability gives an estimate of the cost, considering that the weaker the extreme loads are in comparison with the mean load, the smaller the cost. Unfortunately, few data are available in the literature. In order to determine the characteristics of the different wave energy technologies, it is necessary to classify them first into four main families [3]. An interesting result is that the maximum average wave power [math]P_{abs} [/math] (W) that a point absorber can absorb from the waves does not depend on its dimensions [5]. It is theoretically possible to absorb a lot of energy with only a small buoy. It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave the capture (or absorption) width [math]L_{max}[/math] (m) is as follows [5]: [math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math] Fig 4: Upper limit of mean wave power absorption for a heaving point absorber. where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). However, an optimally damped buoy responds efficiently only to a relatively narrow band of wave periods.
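The point-absorber limit quoted above can be evaluated for a typical swell (an illustrative sketch; the period T = 10 s is my own choice, and the deep-water wavelength λ = gT²/2π is used):

```python
import math

# Maximum capture width of a heaving point absorber, L_max = lambda / (2*pi),
# evaluated for an illustrative deep-water swell of period T = 10 s.
g, T = 9.81, 10.0
lam = g * T**2 / (2 * math.pi)   # deep-water wavelength, about 156 m
L_max = lam / (2 * math.pi)      # maximum capture width, about 25 m
print(round(lam, 1), round(L_max, 1))
```

So even a buoy much smaller than 25 m across could in principle absorb the power carried by a 25 m stretch of wave crest, which is the counter-intuitive result [5] the text highlights.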
Babarit and Hals [6] derive this upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in putting wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20-30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC with a capture width between 30-50 m. In order to complete these theoretical results and to describe the efficiency of the WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per meter of wave front multiplied by a relevant dimension B (m). [math]\eta = \frac{P_{abs}}{P_{w}B} [/math] The choice of the dimension B will depend on the working principle of the WEC. Most of the time, it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimates of this ratio [math]\eta[/math] are given in [6]: 33 % for OWC, 13 % for overtopping devices, 9-29 % for heaving buoys, 20-41 % for pitching devices. For energy converted to electricity, one must moreover take into account the energy losses in the other components of the system. Civil engineering Never forget that the energy conversion is only a secondary function for the coastal structure. The primary function of the coastal structure is still protection. It is necessary to verify whether integration of WEC modifies the performance criteria of overtopping and stability, and to assess the consequences for the construction cost. Integration of WEC in coastal structures will always be easier for a new structure than for an existing one. In the latter case, it requires some knowledge of the existing coastal structures.
Solutions differ according to the sea state but also to the type of structure (rubble mound breakwaters, caisson breakwaters with typically vertical sides). Some types of WEC are better suited to some types of coastal structures. Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro). Environmental impact Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, a sand deposit can occur. Ecosystems can also be altered by the change of hydrodynamics and by the acoustic noise generated by the machines. Fig 6: Finistere area and locations of the six sites (google map). Study case: Finistere area The Finistere area is an interesting study case because it is located in the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area, with a very indented coastline, moreover gathers many commercial, fishing and yachting ports. The area produces only a small fraction of the electricity it consumes and is located far from electricity power plants. There is therefore a need for renewable energy that is produced locally, an issue that is particularly important on islands. The production of electricity from wave energy will show seasonal variations: the wave energy flux is indeed larger in winter than in summer. Consumption peaks in winter due to the heating of buildings, but consumption in summer is also strong due to the arrival of tourists. Six sites were selected (see figure 7) for a preliminary study of the wave energy flux and the capacity for integration of wave energy converters. The wave energy flux is expected to be in the range of 1 – 10 kW/m. The length of each breakwater exceeds 200 meters. The wave power along each structure is therefore estimated at between 200 kW and 2 MW. Note that there exist much longer coastal structures, for example at Cherbourg (France) with a length of 6 kilometres.
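The 200 kW – 2 MW range quoted above is simply the flux per metre of wave front multiplied by the structure length; a minimal check:

```python
# Back-of-envelope estimate of the wave power arriving along a breakwater:
# total power = flux per metre of wave front x structure length.
flux_range_kw_per_m = (1.0, 10.0)   # expected nearshore flux (kW/m), from the text
length_m = 200.0                    # minimum breakwater length considered

low = flux_range_kw_per_m[0] * length_m    # 200 kW
high = flux_range_kw_per_m[1] * length_m   # 2000 kW = 2 MW
print(f"incident wave power: {low:.0f} kW to {high/1000:.0f} MW")
```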
(1) Roscoff (300 meters) (2) Molène (200 meters) (3) Le Conquet (200 meters) (4) Esquibien (300 meters) (5) Saint-Guénolé (200 meters) (6) Lesconil (200 meters) Fig.7: Finistere area, the six coastal structures and their length (google map). The wave power flux along a structure depends on local parameters: the water depth at the structure toe, the presence of capes, the direction of the waves and the orientation of the coastal structure. See figure 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires Lighthouse. These measurements show that structures well oriented towards westerly waves should be chosen in priority. Peaks of consumption often occur with low temperatures in winter, coming with winds from east-north-east directions. Structures well oriented towards easterly waves could therefore also be interesting, even if their mean production is weak. Fig 8: Wave measurements at the Pierres Noires Lighthouse. Conclusion Wave energy converters (WEC) in coastal structures can be considered a land-based renewable energy. The expected energy can be compared with that of onshore wind farms, but not with offshore wind farms, whose number and power are much larger. As a land-based system, maintenance will be easy. Besides energy production, the advantages of such systems are: a “zero emission” port, industrial tourism, and the testing of WECs for future offshore installations. Acknowledgement This work is in progress in the frame of the national project EMACOP funded by the French Ministry of Ecology, Sustainable Development and Energy. See also Waves Wave transformation Groynes Seawall Seawalls and revetments Coastal defense techniques Wave energy converters Shore protection, coast protection and sea defence methods Overtopping resistant dikes References Mei C.C. (1989) The applied dynamics of ocean surface waves. Advanced Series on Ocean Engineering. World Scientific Publishing Ltd. Vicinanza D., Cappietti L., Ferrante V. and Contestabile P.
(2011) Estimation of the wave energy along the Italian offshore. Journal of Coastal Research, Special Issue 64, pp. 613-617. Mattarolo G., Benoit M. and Lafon F. (2009) Wave energy resource off the French coasts: the ANEMOC database applied to the energy yield evaluation of wave energy. 10th European Wave and Tidal Energy Conference (EWTEC 2009), Uppsala (Sweden). Benoit M. and Lafon F. (2004) A nearshore wave atlas along the coasts of France based on the numerical modeling of wave climate over 25 years. 29th International Conference on Coastal Engineering (ICCE 2004), Lisbon (Portugal), pp. 714-726. De O. Falcão A.F. (2010) Wave energy utilization: A review of the technologies. Renewable and Sustainable Energy Reviews, Volume 14, Issue 3, April 2010, pp. 899-918. Babarit A. and Hals J. (2011) On the maximum and actual capture width ratio of wave energy converters. 11th European Wave and Tidal Energy Conference (EWTEC 2011), Southampton (UK).
Note that, by definition, the projections $P_n$ converge strongly to $I$. Let $r\in\mathbb N$ (to be determined later), and define$$Q_n=P_{n+r}-P_n. $$The projections $Q_n$ are finite-rank, and pairwise orthogonal. Let $$S=\sum_n Q_nTQ_n,\ \ \ \ K=T-S.$$Let us check first that $SP_n=P_nS$ for all $n$. It is obvious that $SQ_n=Q_nS$ for all $n$. We have $$SP_n-P_nS=-(SQ_n-Q_nS)+SP_{n+r}-P_{n+r}S=SP_{n+r}-P_{n+r}S.$$Repeating the argument, we get $$SP_n-P_nS=SP_{n+kr}-P_{n+kr}S,\ \ \ k\in\mathbb N. $$As $P_n\nearrow I$, we get $SP_n-P_nS=0$. As for $K$, we have $$K=T-\sum_n Q_nTQ_n=\sum_n TQ_n-Q_nTQ_n=\sum_n (TQ_n-Q_nT)Q_n.$$Also,$$\|TQ_n-Q_nT\|\leq\|TP_n-P_nT\|+\|TP_{n+r}-P_{n+r}T\|\leq\frac1{2^{n+1}}+\frac1{2^{n+r+1}},$$so$$\tag1\left\|\sum_{n=m+1}^\infty (TQ_n-Q_nT)Q_n\right\|\leq\sum_{n=m+1}^\infty \|TQ_n-Q_nT\|\leq\sum_{n=m+1}^\infty \left(\frac1{2^{n+1}}+\frac1{2^{n+r+1}}\right)\xrightarrow[m\to\infty]{}0.$$The estimate $(1)$ shows that $K$ is of the form $\sum_{n=1}^m(TQ_n-Q_nT)Q_n$, which is finite-rank, plus an arbitrarily small operator; that is, $K$ is a limit of finite-rank operators, and thus compact. Finally, using $(1)$ with $m=0$, we get $$\|K\|\leq\sum_{n=1}^\infty \left(\frac1{2^{n+1}}+\frac1{2^{n+r+1}}\right)=\frac12+\frac1{2^{r+1}}.$$So any $r\geq3$ will give us $\|K\|<1$. As a final note, a very small tweak of the argument allows one to get $\|K\|<\varepsilon$ for any fixed $\varepsilon>0$.
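As a quick numerical sanity check of the closed form in the last display (not part of the proof), one can sum the series for a few values of $r$:

```python
# Partial sums of sum_{n>=1} (2^-(n+1) + 2^-(n+r+1)), which should equal
# 1/2 + 2^-(r+1); e.g. r = 3 gives 1/2 + 1/16 = 0.5625 < 1.
def bound(r, terms=60):
    return sum(2.0**-(n + 1) + 2.0**-(n + r + 1) for n in range(1, terms + 1))

for r in (1, 3, 5):
    print(r, bound(r), 0.5 + 2.0**-(r + 1))
```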
The key to my questions was the wonderful book "Linear Algebraic Groups" by Armand Borel (specifically, page 57). First, a lemma (1 of 2): if $M$ is a (not necessarily algebraic) subgroup of an algebraic group $G$, then $G(M) = \overline{M}$ (the closure is in the Zariski topology). Proof of lemma 1: $G(M)$ is closed and contains $M$, so it contains $\overline{M}$. Conversely, $\overline{M}$ is a group: it contains the identity since $M$ does. It is closed under inverses, since $x \mapsto x^{-1}$ is a homeomorphism of the big group $G$, which shows $\overline{M}^{-1} = \overline{M^{-1}} = \overline{M}$. And it is closed under products: $x \in M \implies x\overline{M} = \overline{xM} = \overline{M} \implies M\overline{M} = \overline{M}$. We need to show $\overline{M}\,\overline{M} \subseteq\overline{M}$, so take $y\in \overline{M}$. We have $My\subseteq \overline{M} \implies \overline{M}y=\overline{My}\subseteq \overline{M} \implies \overline{M}\,\overline{M} \subseteq \overline{M}$. Being a closed subgroup containing $M$, $\overline{M}$ contains $G(M)$, and the two are equal. $\blacksquare$ First I'll show that $G(U)$ and $G(S)$ commute. The abstract groups generated by $U$ and $S$, $\langle U \rangle = \{ U^k \mid k \in \mathbb{Z} \}$ and $\langle S \rangle = \{ S^k \mid k \in \mathbb{Z} \}$, commute since $U,S$ commute (I mean that the commutator group $(\langle U \rangle, \langle S \rangle)$ is trivial). It will follow from the following lemma, applied to $M=\langle U \rangle, N= \langle S \rangle$, that $G(S)$ and $G(U)$ commute: Lemma 2: If $M,N\subseteq G$ are subgroups of $G$, then the commutator subgroups $(M,N)$ and $(\overline{M},\overline{N})$ have the same closure. (In our case, $(M,N)=\{e\}$, which is closed already.) Proof of lemma: Consider the algebraic map $c:G\times G \to G$ defined by $c(x,y)=xyx^{-1}y^{-1}$. $M \times N$ is dense in $\overline{M} \times \overline{N}$, so $c(M \times N)$ is dense in $c(\overline{M} \times \overline{N})$, which in turn shows $G(c(M \times N)) = G(c(\overline{M} \times \overline{N}))$.
But by lemma 1, those two groups are the closures of $c(M \times N)$ and $c(\overline{M} \times \overline{N})$. $\blacksquare$ The fact that $G(S)$ and $G(U)$ commute shows that the map $(s,u) \mapsto su$ from $G(S) \times G(U)$ to $G$ is indeed a homomorphism, and the image of a homomorphism of algebraic groups is an algebraic group too (a well-known fact), so $G(S)G(U)$ is a subgroup. So $G(S)G(U)$ is a commutative algebraic group, it contains $S,U$, and it is evidently the minimal such group containing both $S$ and $U$. So $G(S,U)=G(S)G(U)$ and we're done. One direction is trivial: $R(G(A))$ contains $R(A)$ and is an algebraic group, so it must contain the smallest algebraic group containing $R(A)$, which is $G(R(A))$ by definition. The second inclusion is as follows: $R^{-1}(G(R(A)))$ is closed (as the preimage of a closed set under a continuous map) and is a subgroup containing $A$ (since $R(A) \subseteq G(R(A))$), so it must contain $G(A)$, the smallest algebraic subgroup containing $A$. Hence $R(G(A)) \subseteq R(R^{-1}(G(R(A)))) \subseteq G(R(A))$. Note: the more general equality $R(G(M))=G(R(M))$ holds when $M$ is any subset of $G$, with the same proof.
Hint $\ {\rm mod}\ 13\!:\ \dfrac{41}7 \equiv \dfrac{28}7 = 4\ \ $ by $\ \ 41\equiv 41\!-\!13 = 28$ Alternatively $\ \dfrac{41}{7}\equiv\dfrac{(-2)(-1)}{-6}\equiv \dfrac{-2}{-2}\dfrac{12}3\equiv 4\ \ $ by $\ \ \begin{eqnarray}41&&\equiv\ \ 2\\ 7 &&\equiv -6\end{eqnarray}$ Alternatively $\ \dfrac{41}{7}\equiv \dfrac{2}7\equiv \dfrac{4}{14}\equiv \dfrac{4}1\ $ by Gauss's Algorithm. Such twiddling (adding/subtracting the modulus from numerator or denominator till things divide or factor nicely) works quite well for small numbers (more generally we can use Inverse Reciprocity to make the quotient exact). For larger numbers one can invert the denominator by the Extended Euclidean Algorithm, or by Gauss's algorithm if the modulus is prime. Beware $\ $ The use of fractions in modular arithmetic is valid only when the denominator is invertible, i.e. coprime to the modulus. Otherwise the quotient need not be unique, for example mod $\rm\:10,\:$ $\rm\:4\,x\equiv 2\:$ has solutions $\rm\:x\equiv 3,8,\:$ so the "fraction" $\rm\:x \equiv 2/4\pmod{10}\,$ cannot designate a unique solution of $\,4x\equiv 2.\,$ Indeed, the solution is $\rm\:x\equiv 1/2\equiv 3\pmod 5,\,$ which requires canceling $\,2\,$ from the modulus too, since $\rm\:10\:|\:4x-2\iff5\:|\:2x-1.\:$ Generally the grade-school rules of fraction arithmetic apply universally (i.e. in all rings) where the denominators are invertible. This fundamental property will be clarified conceptually when one learns in university algebra about the universal properties of fraction rings and localizations.
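These manipulations are easy to sanity-check by machine; the snippet below (Python, using the built-in three-argument `pow` for the modular inverse) verifies both the computed value and the cautionary example:

```python
# Sanity checks of the modular "fraction" manipulations above.
# pow(a, -1, m) (Python 3.8+) computes the modular inverse when gcd(a, m) = 1.
inv7 = pow(7, -1, 13)
assert (41 * inv7) % 13 == 4          # 41/7 = 4 (mod 13)

# Non-invertible denominator: 4x = 2 (mod 10) has TWO solutions, x = 3 and x = 8,
# so "2/4 (mod 10)" is not well defined.
sols = [x for x in range(10) if (4 * x) % 10 == 2]
print(sols)   # [3, 8]
```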
PGFplots allows you to use all the usual functions from TikZ within its {axis} environment. You have access to the coordinate system through axis cs, so that \node at (axis cs: 3, 4) {}; places a node at the (x, y) coordinate (3, 4). In version 1.11, axis cs became the default coordinate system used by TikZ within the {axis} environments, so you need not specify axis cs every time and can instead just type \node at (3, 4) {};. I provide below two very similar ways of drawing what (I think) you want. Both of them plot the two relevant curves (x and 6/(x - 5)), but the first one also uses the x-axis as the number line whilst the second one places the number line above the plot. Version 1: All in One This solution uses the one set of axes to both display the appropriate equations for the inequality and label the part of the number line for which the inequality holds true:

\documentclass{amsart}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{thmtools}
\declaretheoremstyle[
  headfont=\normalfont\bfseries,
  numbered=unless unique,
  bodyfont=\normalfont,
  spaceabove=1em plus 0.75em minus 0.25em,
  spacebelow=1em plus 0.75em minus 0.25em,
  qed={\rule{1.5ex}{1.5ex}},
]{solstyle}
\declaretheorem[
  style=solstyle,
  title=Solution,
  refname={solution,solutions},
  Refname={Solution,Solutions}
]{solution}
\usepackage{enumitem}
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{compat=1.13}
\begin{document}
\begin{enumerate}[label=\bfseries\arabic*)]
\item Determine the solution set to
  \begin{equation*}
    \frac{6}{x - 5} \geq x .
  \end{equation*}
  Graph the solution set on the real number line.
  \begin{solution}
    We first observe that there is a singularity at \(x = 5\), so we consider the regions above and below \(5\) separately:
    \begin{description}
    \item[\(\boldsymbol{x > 5}\)] Over this interval, the denominator is always greater than zero.
As a result, multiplying both sides by \(x-5\) we obtain:
      \begin{align*}
        & 6 \geq x^{2} - 5x \\
        \Leftrightarrow & 0 \geq x^{2} - 5x - 6 = (x-6)(x+1)
      \end{align*}
      Over the given domain, \(x+1\) is always positive; therefore, we must have that \(x-6 \leq 0\) and conclude that the inequality is satisfied only for \(5 < x \leq 6\).
    \item[\(\boldsymbol{x < 5}\)] Over this interval, the denominator is always less than zero. As a result, multiplying both sides by \(x-5\) flips the inequality and we obtain:
      \begin{align*}
        & 6 \leq x^{2} - 5x \\
        \Leftrightarrow & 0 \leq x^{2} - 5x - 6 = (x-6)(x+1)
      \end{align*}
      Over the given domain, \(x-6\) is always negative; therefore, we must have that \(x+1 \leq 0\) and conclude that the inequality is satisfied only for \(x \leq -1\).
    \end{description}
    The two relevant curves for this inequality are plotted below with the appropriate domain marked in red along the \(x\)-axis:
    \begin{center}
      \begin{tikzpicture}
        \begin{axis}[
            width=\linewidth,
            height=0.7\linewidth,
            axis lines=middle,
            xlabel=\(x\),
            ylabel=\(y\),
            xlabel style={at={(ticklabel* cs:1)},anchor=west},
            ylabel style={at={(ticklabel* cs:1)},anchor=south},
            domain=-5:10,
            samples=501,
            restrict y to domain=-10:16,
            clip=false,
          ]
          \addplot [blue] {6/(x - 5)} node [above, pos=0.95, font=\footnotesize] {\(y=\dfrac{6}{x-5}\)};
          \addplot [latex-latex] {x} node[anchor=west, pos=1, font=\footnotesize]{\(y=x\)};
          \draw [dashed, latex-latex] (5,\pgfkeysvalueof{/pgfplots/ymin}) -- (5, \pgfkeysvalueof{/pgfplots/ymax}) node [pos=0.05, below, sloped, font=\footnotesize] {\(x=5\)};
          \fill [blue] (-1, -1) circle [radius=2pt] node [anchor=north, font=\footnotesize] {\((-1, -1)\)};
          \fill [blue] (6, 6) circle [radius=2pt] node [anchor=west, font=\footnotesize] {\((6, 6)\)};
          \draw [-latex, red, very thick] (-1, 0) -- (\pgfkeysvalueof{/pgfplots/xmin}, 0);
          \draw [red, very thick] (5, 0) -- (6, 0);
          \fill [black] (-1, 0) circle [radius=2pt];
          \draw [draw=black, fill=white] (5, 0) circle [radius=2pt];
          \fill [black] (6, 0) circle [radius=2pt];
        \end{axis}
      \end{tikzpicture}
    \end{center}
  \end{solution}
\end{enumerate}
\end{document}

Version 2: Number line on top If you want to have the number line separate from the axis (as you intended in the original question), you basically had it all right:

\begin{center}
  \begin{tikzpicture}
    \begin{axis}[
        name=plot1,
        width=\linewidth,
        height=11em,
        axis x line=middle,
        axis y line=none,
        clip=false,
        domain=-5:10,
        axis line style={latex-latex},
      ]
      \addplot [draw=none] {0};
      \draw [-latex, red, very thick] (-1, 0) -- (\pgfkeysvalueof{/pgfplots/xmin}, 0);
      \draw [red, very thick] (5, 0) -- (6, 0) node [above, pos=0] {\(5\)} node [above, pos=1] {\(6\)};
      \fill [black] (-1, 0) circle [radius=2pt] node [red, above] {\(-1\)};
      \draw [draw=black, fill=white] (5, 0) circle [radius=2pt];
      \fill [black] (6, 0) circle [radius=2pt];
    \end{axis}
    \begin{axis}[
        at=(plot1.south),
        anchor=north,
        width=\linewidth,
        height=0.7\linewidth,
        axis lines=middle,
        xlabel=\(x\),
        ylabel=\(y\),
        xlabel style={at={(ticklabel* cs:1)},anchor=west},
        ylabel style={at={(ticklabel* cs:1)},anchor=south},
        domain=-5:10,
        samples=501,
        restrict y to domain=-10:16,
        clip=false,
      ]
      \addplot [blue] {6/(x - 5)} node [above, pos=0.95, font=\footnotesize] {\(y=\dfrac{6}{x-5}\)};
      \addplot [latex-latex] {x} node[anchor=west, pos=1, font=\footnotesize]{\(y=x\)};
      \draw [dashed, latex-latex] (5,\pgfkeysvalueof{/pgfplots/ymin}) -- (5, \pgfkeysvalueof{/pgfplots/ymax}) node [pos=0.05, below, sloped, font=\footnotesize] {\(x=5\)};
      \fill [blue] (-1, -1) circle [radius=2pt] node [anchor=north, font=\footnotesize] {\((-1, -1)\)};
      \fill [blue] (6, 6) circle [radius=2pt] node [anchor=west, font=\footnotesize] {\((6, 6)\)};
    \end{axis}
  \end{tikzpicture}
\end{center}

Extra Notes Firstly, I took the liberty of cleaning up your example, making use of environments such as enumerate and description, and creating a solution environment to take care of the formatting for you automatically.
Although having \textbf{1) } and \vskip1em does work, it isn't really the best way to use LaTeX. You should write what you mean instead of writing what you want to see. That is, instead of \textbf{1) }, \textbf{2) }, have an enumerated list; and instead of \textbf{Solution: } ... \rule{1.5ex}{1.5ex}, have a {solution} environment. The advantage of writing what you mean is that if you want to change the way solutions look, you can do it in one place instead of having to go through your whole document and change every instance. A few other small things: For some reason, the {axis} environment seems to require at least one \addplot command. I suspect that this is because it needs it to calculate the range of both axes even if xmin, xmax, ymin and ymax are all specified. Since I don't want to actually plot anything for the number line, I used \addplot [draw=none] {0};. I can't seem to find any mention of this requirement in the PGFplots documentation. When PGFplots calculates the positioning of all the labels, it seems to require a minimum height. When drawing the number line, I initially used height=0pt, but this resulted in errors, so instead I used height=11em. This has the additional benefit that I no longer need to adjust the plot1.south coordinate, as the vertical height of the baseline is enough. Instead of declaring samples and domain with every \addplot call, I declare these properties for the whole axis. This makes the code a little cleaner and also ensures that all the plots are drawn over the whole domain (for example, I'd rather not have the line y=x stop half-way). If that is intended behaviour though, having \addplot [domain=-5:0] {x}; will override the axis-wide domain. Similar to the previous note, having restrict y to domain in the {axis} options makes that setting apply to every \addplot command in that environment. In addition, restrict y to domain discards points which are outside of the specified domain.
You don't need to plot 6/(x - 5) in two separate \addplot calls, because any values which end up outside of the specified y domain are automatically discarded. With regards to the two previous points, think of domain and restrict y to domain as setting the overall viewport for the whole graph; PGFplots will then figure out what to draw. I use \pgfkeysvalueof{/pgfplots/xmin} (and analogous) in order to obtain the value of xmin, ymin and ymax instead of hard-coding them. This means that if I want to change where the y-axis starts and stops, the asymptote line will automatically adjust. Instead of using \addplot to draw the line x=5, I use explicit coordinates. This is mostly because I found the PGFplots behaviour to be slightly inconsistent sometimes. Instead of using \addplot coordinates{-1,-1}; to draw a single point, I used one of the basic TikZ commands. Firstly, we aren't really plotting another curve, but instead annotating it, so \addplot already doesn't feel like what we need. Additionally, having the extra \addplot command will mess with legend entries and the plot style cycle, which is why your initial plot had various shapes and colours despite you not specifying them. I chose width=\linewidth so that the plot fills the width of the current line. As for height=0.7\linewidth, it is arbitrary (I could have used height=5cm), but the rationale for using \linewidth is that if I change the formatting of the document, the aspect ratio of the plot width and height remains the same and it is always guaranteed to take up the width of the line. As for the 0.7 in particular, I typically use 0.62 because that ensures the plot follows the golden ratio, but in the particular case of this graph I thought it looked a little too squashed, so instead I used 0.7.
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment." So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force. For the buoyancy, do I use: density of water * volume of water displaced * gravitational acceleration? So: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
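For the bottle-and-salt question, the balance condition reduces to mass = water density × submerged volume; here is a sketch with an assumed bottle volume, since the actual volume isn't given in the question:

```python
# Sketch of the buoyancy balance discussed above (the bottle volume is assumed).
# Neutral buoyancy: m * g = rho_water * V_bottle * g  =>  m = rho_water * V_bottle.
rho_water = 1.0      # g/cm^3, fresh water
V_bottle = 60.0      # cm^3, an ASSUMED bottle volume (not given in the question)
m_current = 83.0     # g, measured mass of bottle + salt

m_neutral = rho_water * V_bottle        # mass at which weight equals buoyancy
salt_to_remove = m_current - m_neutral  # 23 g under these assumptions
print(f"remove {salt_to_remove:.0f} g of salt")
```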
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts in the fields he tried to comment on I personally do not know much about postmodernist philosophy, so I shall not comment on it myself I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
vignettes/ctstm.Rmd Continuous time state transition models (CTSTMs) simulate trajectories for patients between mutually exclusive health states. These models can be parameterized using multi-state models, which are generalizations of survival models with more than two states. Transitions between health states \(r\) and \(s\) for patient \(i\) with treatment \(k\) at time \(t\) are governed by hazard functions, \(\lambda_{rs}(t|x_{ik})\), that can depend on covariates \(x_{ik}\). Different assumptions can be made about the time scales used to determine the hazards. In a “clock forward” (i.e., Markov) model, time \(t\) refers to time since entering the initial health state. Conversely, in a “clock reset” (i.e., semi-Markov) model, time \(t\) refers to time since entering the current state \(r\), meaning that time resets to 0 each time a patient enters a new state. While state occupancy probabilities in “clock forward” models can be estimated analytically using the Aalen-Johansen estimator, state occupancy probabilities in “clock reset” models can only be computed in a general fashion using individual patient simulation. hesim provides support for individual-level CTSTMs (iCTSTMs) which can simulate either “clock forward” or “clock reset” models. Discounted costs and quality-adjusted life-years (QALYs) are computed using the continuous time present value of a flow of state values—\(q_{hik}(t)\) for utility and \(c_{m, hik}(t)\) for the \(m\)th cost category—that depend on the health state of a patient on a given treatment strategy at a particular point in time. Discounted QALYs and costs given a model starting at time \(0\) with a time horizon of \(T\) are then given by, \[ \begin{aligned} QALYs_{hik} &= \int_{0}^{T} q_{hik}(t)e^{-rt}dt, \\ Costs_{m, hik} &= \int_{0}^{T} c_{m, hik}(t)e^{-rt}dt, \end{aligned} \] where \(r\) is the discount rate.
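As a numerical illustration of these present-value integrals (a Python sketch, not hesim code; the utility value, discount rate, and horizon are assumed, and utility is taken constant over time so a closed form exists):

```python
import math

# Discounted QALYs for a constant utility q over [0, T]:
# integral of q * exp(-r*t) dt = q * (1 - exp(-r*T)) / r.
q, r, T = 0.8, 0.03, 20.0   # assumed utility, discount rate, horizon (years)

closed = q * (1 - math.exp(-r * T)) / r   # closed-form value

# Trapezoidal quadrature of the same integral as a cross-check
n = 10000
h = T / n
trap = sum(0.5 * h * (q * math.exp(-r * h * i) + q * math.exp(-r * h * (i + 1)))
           for i in range(n))

print(round(closed, 4), round(trap, 4))
```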
The reversible illness-death model is a commonly used state transition model (see figure below) with 3 health states and 4 transitions. In this example, we will use 3 generic health states: (1) Healthy, (2) Sick, and (3) Dead. The following 4 transitions are possible: Healthy to Sick, Healthy to Dead, Sick to Healthy, and Sick to Dead. In general, the transitions of a multi-state model can be characterized with an H x H transition matrix, where \(H\) is the number of health states: a square matrix whose (r, s) element is a positive integer if a transition from r to s is possible and NA otherwise. A 3 x 3 transition matrix is appropriate for the reversible illness-death model.

##         Healthy Sick Dead
## Healthy      NA    1    2
## Sick          3   NA    4
## Dead         NA   NA   NA

In a cost-effectiveness analysis, the treatment strategies of interest and the characteristics of the target population must be specified in addition to the selected model structure. We will consider a simple case with two treatment strategies and a heterogeneous population of 1000 patients who differ by age and gender. The model contains 3 health states (2 of which are non-death states).

library("hesim")
library("data.table")

strategies <- data.table(strategy_id = c(1, 2))
n_patients <- 1000
patients <- data.table(patient_id = 1:n_patients,
                       age = rnorm(n_patients, mean = 45, sd = 7),
                       female = rbinom(n_patients, size = 1, prob = .51))
states <- data.table(state_id = c(1, 2),
                     state_name = c("Healthy", "Sick")) # Non-death health states
hesim_dat <- hesim_data(strategies = strategies,
                        patients = patients,
                        states = states)

CTSTMs can be parameterized by fitting statistical models in R or by storing the parameters from a model fit outside R, as described in the introduction to hesim. Either a single joint model encompassing all transitions can be estimated, or separate models can be estimated for each possible transition. In the introduction we considered a joint model; here, we will fit separate models.
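The transition-matrix convention can be mimicked outside R as well; this Python sketch (not part of hesim) mirrors the R transition matrix above, with None playing the role of NA:

```python
# Reversible illness-death model: element (r, s) is a transition number if the
# transition is possible, None otherwise (mirroring R's NA convention).
states = ["Healthy", "Sick", "Dead"]
tmat = [[None, 1,    2],
        [3,    None, 4],
        [None, None, None]]

# Number of possible transitions (analogous to max(tmat, na.rm = TRUE) in R)
n_trans = max(x for row in tmat for x in row if x is not None)
print(n_trans)   # 4
```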
A number of parametric and flexibly parametric approaches are available (as described in more detail in the params_surv() documentation); we will illustrate with Weibull models. We will begin by fitting a “clock reset” model using flexsurvreg().

library("flexsurv")
n_trans <- max(tmat, na.rm = TRUE) # Number of transitions
wei_fits_cr <- vector(length = n_trans, mode = "list")
for (i in 1:length(wei_fits_cr)){
  wei_fits_cr[[i]] <- flexsurv::flexsurvreg(Surv(years, status) ~ factor(strategy_id),
                                            data = ctstm3_exdata$transitions,
                                            subset = (trans == i),
                                            dist = "weibull")
}
wei_fits_cr <- flexsurvreg_list(wei_fits_cr)

“Clock forward” models are fit in a similar fashion by specifying both the starting ( Tstart) and stopping ( Tstop) times associated with each transition.

wei_fits_cf <- vector(length = n_trans, mode = "list")
for (i in 1:length(wei_fits_cf)){
  wei_fits_cf[[i]] <- flexsurv::flexsurvreg(Surv(Tstart, Tstop, status) ~ factor(strategy_id),
                                            data = ctstm3_exdata$transitions,
                                            subset = (trans == i),
                                            dist = "weibull")
}
wei_fits_cf <- flexsurvreg_list(wei_fits_cf)

The most straightforward way to assign utility and cost values to health states is with a stateval_tbl(). For example, we can specify the mean and standard error of utilities by health state (implying that utility values do not vary by treatment strategy or patient) and that a beta distribution will be used to randomly sample utility values for the probabilistic sensitivity analysis (PSA).

utility_tbl <- stateval_tbl(data.table(state_id = states$state_id,
                                       mean = ctstm3_exdata$utility$mean,
                                       se = ctstm3_exdata$utility$se),
                            dist = "beta",
                            hesim_data = hesim_dat)
head(utility_tbl)

##    state_id mean        se
## 1:        1 0.65 0.1732051
## 2:        2 0.85 0.2000000

Drug and medical costs can be specified in a similar fashion. Drug costs are assumed to be known with certainty and to vary by treatment strategy, whereas medical costs are assumed to vary by health state and to follow a gamma distribution.
drugcost_tbl <- stateval_tbl(data.table(strategy_id = strategies$strategy_id,
                                        est = ctstm3_exdata$costs$drugs$costs),
                             dist = "fixed",
                             hesim_data = hesim_dat)
medcost_tbl <- stateval_tbl(data.table(state_id = states$state_id,
                                       mean = ctstm3_exdata$costs$medical$mean,
                                       se = ctstm3_exdata$costs$medical$se),
                            dist = "gamma",
                            hesim_data = hesim_dat)

The economic model consists of a model for disease progression and models for assigning utility and cost values to health states. Since we are performing a PSA, we must specify the number of times to sample the parameters. We begin by constructing the model for health state transitions, which is a function of input data (i.e., covariates) and a fitted multi-state model (or a parameter object). When separate multi-state models are fit by transition, the input data consist of one observation for each treatment strategy and patient combination (for joint models, one observation for each treatment strategy, patient, and transition combination). It can be created easily by using the expand() function to expand the hesim_data object created above.

##    strategy_id patient_id      age female
## 1:           1          1 25.23180      1
## 2:           1          2 43.90158      0
## 3:           1          3 39.60841      0
## 4:           1          4 51.95736      0
## 5:           1          5 45.81387      1
## 6:           1          6 38.12804      0

“Clock reset” and “clock forward” transition models are created by combining the fitted models and input data with the transition matrix, the desired number of PSA samples, the timescale of the model, and the starting age of each patient in the simulation (by default, patients are assumed to live no longer than age 100 in the individual-level simulation).

transmod_cr <- create_IndivCtstmTrans(wei_fits_cr, transmod_data,
                                      trans_mat = tmat, n = n_samples,
                                      clock = "reset",
                                      start_age = patients$age)
transmod_cf <- create_IndivCtstmTrans(wei_fits_cf, transmod_data,
                                      trans_mat = tmat, n = n_samples,
                                      clock = "forward",
                                      start_age = patients$age)

It is a good idea to evaluate the assumptions underlying multi-state models.
hesim can help facilitate these analyses since hazards ( $hazard()), cumulative hazards ( $cumhazard()), and state probabilities ( $stateprobs()) can be easily computed. As an illustration, we will predict hazards using the maximum likelihood estimates of the Weibull model for a single patient ( patient_id = 1). To do so, we create new transition models based on a subset of the dataset transmod_data used above.

# Predict hazards
transmod_data_pat1 <- transmod_data[patient_id == 1]
predict_haz <- function(fits, clock){
  transmod_pat1 <- create_IndivCtstmTrans(fits, transmod_data_pat1,
                                          trans_mat = tmat, clock = clock,
                                          point_estimate = TRUE)
  haz <- transmod_pat1$hazard(t = seq(0, 20, 1))
  title_clock <- paste(toupper(substr(clock, 1, 1)),
                       substr(clock, 2, nchar(clock)), sep = "")
  haz[, clock := title_clock]
  return(haz[, ])
}

We then plot the predicted hazards by treatment strategy and timescale.

# Plot hazards
library("ggplot2")

## Registered S3 methods overwritten by 'ggplot2':
##   method         from 
##   [.quosures     rlang
##   c.quosures     rlang
##   print.quosures rlang

haz <- rbind(predict_haz(wei_fits_cr, "reset"),
             predict_haz(wei_fits_cf, "forward"))
haz[, trans_name := factor(trans, levels = 1:4,
                           labels = c("Healthy -> Sick", "Healthy -> Dead",
                                      "Sick -> Healthy", "Sick -> Dead"))]
ggplot(haz[t > 0],
       aes(x = t, y = hazard, col = clock, linetype = factor(strategy_id))) +
  geom_line() +
  facet_wrap(~trans_name) +
  xlab("Years") + ylab("Hazard") +
  scale_linetype_discrete(name = "Strategy") +
  scale_color_discrete(name = "Clock") +
  theme_bw()

While the hazards from the healthy state are similar between the “clock forward” and “clock reset” approaches, they differ significantly in the sick state. Treatment effects (i.e., the hazard ratios between treatment strategies 1 and 2) are also largest in the sick state. Additional analyses should be conducted as well.
For instance, the hazards for treatment strategy 1 (the reference treatment strategy) could be assessed by comparing the Weibull model's predictions with predictions from non-parametric (i.e., the Kaplan-Meier estimator) or semi-parametric (i.e., Cox) models. This can be performed using mstate, which can predict cumulative hazards and state probabilities from non-parametric and semi-parametric models. Furthermore, the Weibull model's proportional hazards assumption should be tested using standard techniques such as plots of log time vs. the log cumulative hazard, inclusion of time-dependent covariates, and tests of the Schoenfeld residuals.

Mean-only models (see params_mean()) can be created directly from the utility and cost tables, since they do not include covariates and therefore do not require input data.

Now that the necessary transition, utility, and cost models have been created, we combine them to create separate economic models based on the "clock reset" and "clock forward" transition models, respectively.

Disease progression can be simulated using the $sim_disease() method. In the individual-level simulation, unique trajectories through the multi-state model are simulated for each patient, treatment strategy, and PSA sample. Patients transition from an old health state entered at time time_start to a new health state at time time_stop.

# "Clock reset"
econmod_cr$sim_disease()
head(econmod_cr$disprog_)

##    sample strategy_id patient_id from to final time_start time_stop
## 1:      1           1          1    1  2     0   0.000000 1.0291345
## 2:      1           1          1    2  3     1   1.029135 6.3478413
## 3:      1           1          2    1  3     1   0.000000 4.1999090
## 4:      1           1          3    1  3     1   0.000000 0.2143922
## 5:      1           1          4    1  2     0   0.000000 1.4682744
## 6:      1           1          4    2  1     0   1.468274 1.6410123

State occupancy probabilities at different time points are computed using $sim_stateprobs(). First, we simulate state probabilities for the "clock reset" model.
econmod_cr$sim_stateprobs(t = seq(0, 20, 1/12))

We can then compare state probabilities between the competing treatment strategies.

# Short function to add a state name variable to a data.table
add_state_name <- function(x){
  x[, state_name := factor(state_id, levels = 1:nrow(tmat),
                           labels = colnames(tmat))]
}

# Short function to create a state probability "dataset" for plotting
summarize_stprobs <- function(stateprobs){
  x <- stateprobs[, .(prob_mean = mean(prob)),
                  by = c("strategy_id", "state_id", "t")]
  add_state_name(x)
}

# Plot of state probabilities
stprobs_cr <- summarize_stprobs(econmod_cr$stateprobs_)
ggplot(stprobs_cr, aes(x = t, y = prob_mean, col = factor(strategy_id))) +
  geom_line() +
  facet_wrap(~state_name) +
  xlab("Years") + ylab("Probability in health state") +
  scale_color_discrete(name = "Strategy") +
  theme(legend.position = "bottom") +
  theme_bw()

Next, we compare the state probabilities from the "clock reset" and "clock forward" models.

econmod_cf$sim_stateprobs(t = seq(0, 20, 1/12))
stprobs_cf <- summarize_stprobs(econmod_cf$stateprobs_)

# Compare "clock forward" and "clock reset" cases
stprobs <- rbind(data.table(stprobs_cf, clock = "Forward"),
                 data.table(stprobs_cr, clock = "Reset"))
ggplot(stprobs[strategy_id == 1], aes(x = t, y = prob_mean, col = clock)) +
  geom_line() +
  facet_wrap(~state_name) +
  xlab("Years") + ylab("Probability in health state") +
  scale_color_discrete(name = "Clock") +
  theme(legend.position = "bottom") +
  theme_bw()

The probabilities are generally quite similar, implying that the choice of timescale has a small impact on the results. This is not unexpected given that patients spend considerably more time in the healthy state and the predicted hazard rates are very similar in the healthy state.

QALYs (and life-years) are simulated using $sim_qalys(). By default, mean QALYs are computed by treatment strategy, health state, and PSA sample (the by_patient option can be used to compute aggregated QALYs at the patient level).
Here, we use the "clock reset" model to compute both undiscounted QALYs (dr = 0) and QALYs discounted at 3%.

##    sample strategy_id state_id dr     qalys      lys
## 1:      1           1        1  0 3.2320541 7.482911
## 2:      1           1        2  0 0.5590484 1.445154
## 3:      1           2        1  0 3.6215944 8.384782
## 4:      1           2        2  0 0.4811626 1.243817
## 5:      2           1        1  0 3.8180763 5.744700
## 6:      2           1        2  0 1.3636947 1.386749

We summarize the simulated QALYs by computing means by treatment strategy and health state across the PSA samples.

qalys_summary <- econmod_cr$qalys_[, .(mean = mean(qalys)),
                                   by = c("strategy_id", "state_id", "dr")]
add_state_name(qalys_summary)
ggplot(qalys_summary[dr == .03],
       aes(x = factor(strategy_id), y = mean, fill = state_name)) +
  geom_bar(stat = "identity") +
  scale_fill_discrete(name = "") +
  xlab("Strategy") + ylab("Mean QALYs") +
  theme_bw()

Costs are computed in the same way as QALYs, except that they are computed by category. We use the "clock reset" model and a 3% discount rate.

econmod_cr$sim_costs(dr = 0.03)
head(econmod_cr$costs_)

##    sample strategy_id state_id   dr category     costs
## 1:      1           1        1 0.03     Drug 28898.278
## 2:      1           1        2 0.03     Drug  5660.795
## 3:      1           2        1 0.03     Drug 63918.216
## 4:      1           2        2 0.03     Drug  9707.357
## 5:      2           1        1 0.03     Drug 23189.995
## 6:      2           1        2 0.03     Drug  5521.021

As with QALYs, we summarize costs by computing means (now by treatment strategy and category) across the PSA samples.

library("scales")
costs_summary <- econmod_cr$costs_[dr == .03, .(mean = mean(costs)),
                                   by = c("strategy_id", "category")]
ggplot(costs_summary, aes(x = factor(strategy_id), y = mean, fill = category)) +
  geom_bar(stat = "identity") +
  scale_fill_discrete(name = "Category") +
  scale_y_continuous(label = scales::dollar_format()) +
  xlab("Strategy") + ylab("Mean costs") +
  theme_bw()

Once costs and QALYs have been computed, a cost-effectiveness analysis can be performed. The $summarize() method creates a "cost-effectiveness" object with mean costs and QALYs computed for each PSA sample.
The icea() and icea_pw() functions can then be used for cost-effectiveness analysis, as described here.
Well done Robert of Madras College, St Andrew's, Scotland and Andrei of School No. 205, Bucharest, Romania for your solutions to this problem. In both parts of this question we consider the limiting case of a process which is repeated infinitely often, and things are not what they might seem to be.

(a) In a square $ABCD$ with sides of length 1 unit, a path is drawn from $A$ to the opposite corner $C$ so that all the steps in the path are either parallel to $AB$ or parallel to $BC$ (and are not necessarily equal steps). If we draw paths of this sort, putting in more and more steps, the length of the path is always the same.

The steps parallel to $AB$ together must stretch all the way across from $A$ to $B$, and the steps parallel to $BC$ together must stretch all the way up from $A$ to $D$. Irrespective of the number of small steps, a point moving on any path of this type moves a total of 1 unit parallel to $AB$ and a total of 1 unit parallel to $BC$, hence a total of 2 units altogether. With more and more steps the path gets closer and closer to the diagonal, so you might expect the length to converge to $\sqrt 2$. Surprisingly, the length is always 2 units and not even close to $\sqrt 2$ units.

(b) Now consider the graphs of $y={1\over 2^n}\sin 2^nx$ for $n=1, 2, 3, \ldots$ and $0\leq x \leq 2\pi$. As $n$ tends to infinity the graphs oscillate more and more and get closer and closer to the $x$ axis. We have to prove that the length of the curve from $x=0$ to $x=2\pi$ is the same for all values of $n$. The hint says we don't need to calculate the length of the path here and should think about scale factors.

The graph of $G_n:\ y={1\over 2^n}\sin 2^nx$ from $x=0$ to $x=\pi$ is similar to the graph of $G_{n-1}:\ y={1\over 2^{n-1}}\sin 2^{n-1}x$ from $x=0$ to $x=2\pi$, but scaled down by a linear scale factor of $1/2$, so this piece of $G_n$ is half the length of $G_{n-1}$. However, $G_n$ repeats this piece twice periodically between $x=0$ and $x=2\pi$, so the two pieces together have the same length as $G_{n-1}$.
This shows that all these graphs on $0\leq x \leq 2\pi$ have the same length, although as $n\rightarrow \infty$ the graphs get closer and closer to the $x$ axis, so you might suppose that the length converges to $2\pi$. Surprisingly, the length is always the same and much more than $2\pi$.
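The constancy of the length in part (b) can also be checked numerically (this check is my addition, not part of the submitted solutions): a polygonal approximation to each curve on $[0, 2\pi]$ should give the same value for every $n$, and a value well above $2\pi$.

```python
import math

def curve_length(n, samples=100000):
    """Polygonal approximation to the length of y = sin(2^n x) / 2^n
    on the interval [0, 2*pi]."""
    total = 0.0
    prev_x, prev_y = 0.0, 0.0
    for i in range(1, samples + 1):
        x = 2 * math.pi * i / samples
        y = math.sin((2 ** n) * x) / (2 ** n)
        total += math.hypot(x - prev_x, y - prev_y)  # chord length of one segment
        prev_x, prev_y = x, y
    return total

# All three lengths agree to within discretization error, and each
# exceeds 2*pi, matching the conclusion above.
lengths = [curve_length(n) for n in (1, 2, 3)]
```

The common value is the integral $\int_0^{2\pi}\sqrt{1+\cos^2(2^n x)}\,dx$, which the substitution $u = 2^n x$ shows is independent of $n$ (the integrand has period $\pi$).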
With $\hat i$ a unit vector, a definition of simple harmonic motion might go like this. The motion of a particle is simple harmonic if:

the acceleration of the particle, $\vec a$, is proportional to the displacement of the particle from a fixed point, $\vec x$;

the acceleration of the particle is always directed towards the fixed point.

From the first statement, $\vec a \propto \vec x \Rightarrow a\,\hat i \propto x \,\hat i$, where $a$ and $x$ are the components of acceleration and displacement in the $\hat i$ direction. From the second statement, $\vec a = -c \, \vec x \Rightarrow a\, \hat i = c \, x\,(-\hat i)$, where $c$ is a constant which must be positive.

How might one ensure that $c$ is positive? By defining $c=\Omega^2$, where $\Omega$ is a constant, which results in the equation $a = - \Omega^2 \,x$. Doing this also means that there is a simple relationship between the constant $\Omega$ and the period of the oscillation $T$, namely $T = \frac{2 \pi}{\Omega}$.
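The relation $T = 2\pi/\Omega$ can be verified numerically by integrating $a = -\Omega^2 x$ directly. The sketch below (my own; the integrator and function name are not from the text) uses a velocity Verlet step and measures the time between two successive upward zero crossings of $x(t)$:

```python
import math

def simulate_period(omega, x0=1.0, dt=1e-4):
    """Integrate a = -omega**2 * x with velocity Verlet and estimate the
    period from two successive upward zero crossings of x(t)."""
    x, v, t = x0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2:
        a = -omega ** 2 * x
        x_new = x + v * dt + 0.5 * a * dt * dt
        a_new = -omega ** 2 * x_new
        v += 0.5 * (a + a_new) * dt  # velocity Verlet velocity update
        t += dt
        if x < 0.0 <= x_new:  # upward zero crossing between t - dt and t
            frac = -x / (x_new - x)  # linear interpolation for the crossing time
            crossings.append(t - dt + frac * dt)
        x = x_new
    return crossings[1] - crossings[0]

period = simulate_period(omega=2.0)  # x(t) = cos(2t), so the period is pi
```

With $\Omega = 2$ the measured period agrees with $2\pi/\Omega = \pi$ to within the integration error.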
Let $B$ be the $3\times 3$ matrix whose columns are the vectors $\mathbf{x},\mathbf{y}, \mathbf{z}$, that is,\[B=[\mathbf{x} \mathbf{y} \mathbf{z}].\] Then we have\[AB=\begin{bmatrix}1 & 0 & 1 \\0 &1 &1 \\1 & 0 & 1\end{bmatrix}.\] It follows that\[\det(A)\det(B)=\det(AB)=\begin{vmatrix}1 & 0 & 1 \\0 &1 &1 \\1 & 0 & 1\end{vmatrix}=0.\](If two rows of a matrix are equal, then its determinant is zero. Alternatively, you may compute the determinant by the cofactor expansion along the second column.)

Note that the column vectors of $B$ are linearly independent, and hence $B$ is a nonsingular matrix. Thus $\det(B)\neq 0$. Therefore the determinant of $A$ must be zero.

Alternatively, we have\[A\mathbf{x}+A\mathbf{y}=A\mathbf{z}.\]It follows that\[A(\mathbf{x}+\mathbf{y}-\mathbf{z})=\mathbf{0}.\] Since the vectors $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are linearly independent, the linear combination $\mathbf{x}+\mathbf{y}-\mathbf{z} \neq \mathbf{0}$. Hence the matrix $A$ is singular, and the determinant of $A$ is zero. (Recall that a matrix $A$ is singular if and only if there exists a nonzero vector $\mathbf{v}$ such that $A\mathbf{v}=\mathbf{0}$.)
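The first argument can be illustrated numerically. The concrete $B$ below is my own choice of linearly independent columns $\mathbf{x},\mathbf{y},\mathbf{z}$ (the original problem does not specify them); any nonsingular $B$ leads to the same conclusion.

```python
import numpy as np

# The matrix AB computed in the solution: its 1st and 3rd rows are equal,
# so det(AB) = 0 (also, its 1st + 2nd columns equal the 3rd).
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# A hypothetical choice of linearly independent columns x, y, z (det(B) = 7).
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

# Recover the unique A satisfying AB = M.
A = M @ np.linalg.inv(B)

# det(A) * det(B) = det(M) = 0 and det(B) != 0, so det(A) must vanish.
assert abs(np.linalg.det(M)) < 1e-12
assert abs(np.linalg.det(A)) < 1e-9
```

This is only a sanity check of the determinant identity $\det(A)\det(B)=\det(AB)$, not a proof.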
With the set of parameters available to you, you cannot do this. If you have the actual track instead of the desired track, you will be able to calculate the wind.

The simplest way to do this is using vector math. There are three vectors to consider:

ground speed vector $\vec{V_{gs}}$

air speed vector $\vec{V_{as}}$

wind speed vector $\vec{V_{ws}}$

$\vec{V_{gs}} = \vec{V_{as}} + \vec{V_{ws}}$

$\vec{V_{ws}} = \vec{V_{gs}} - \vec{V_{as}}$

I assume the actual track angle ($\phi$) and the heading ($\psi$) are with respect to true North. The north component of your air speed is then $V_{as} \cdot \cos(\psi)$ and the east component is $V_{as} \cdot \sin(\psi)$. For the ground speed the decomposition is $V_{gs} \cdot \cos(\phi)$ (north) and $V_{gs} \cdot \sin(\phi)$ (east). Therefore:

$$\begin{bmatrix} V_{ws,north}\\ V_{ws,east} \end{bmatrix} = \begin{bmatrix} V_{gs} \cdot \cos(\phi) - V_{as} \cdot \cos(\psi) \\ V_{gs} \cdot \sin(\phi) - V_{as} \cdot \sin(\psi) \end{bmatrix}$$

You now have the north and east components of the wind vector. These can be converted to a speed and direction, but I leave that last part up to you. Don't forget that wind direction is usually reported as the direction from which the wind is coming.

To find the wind speed from the north and east components, use the root of the sum of the squares:

$V_{ws}=\sqrt{V_{ws,north}^2 + V_{ws,east}^2}$

The wind direction can be found from $\tan^{-1} \left(\frac {V_{ws,north}}{V_{ws,east}}\right)$. Note that this gives a division by zero for winds exactly from the north or south. In a computer implementation the atan2 function can be used instead; it prevents the division by zero and returns the direction over the full range of the circle instead of a semicircle:

wind_dir = atan2(-wind_north, -wind_east)

This gives the direction from which the wind is coming, in radians.
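A minimal sketch of the whole computation (the function name and the degree-based interface are my own, not from the answer). Note one deliberate change: the answer's atan2(-north, -east) returns a math-convention angle measured from the east axis, whereas this sketch calls atan2 on the (east, north) pair so that the result is a compass bearing measured clockwise from true north:

```python
import math

def wind_from_track(v_as, heading_deg, v_gs, track_deg):
    """Solve V_ws = V_gs - V_as in north/east components.
    Angles are in degrees true; returns (wind speed, direction the wind
    is coming FROM, as a compass bearing in degrees true)."""
    psi = math.radians(heading_deg)  # heading
    phi = math.radians(track_deg)    # actual track angle
    ws_north = v_gs * math.cos(phi) - v_as * math.cos(psi)
    ws_east = v_gs * math.sin(phi) - v_as * math.sin(psi)
    speed = math.hypot(ws_north, ws_east)
    # Negating both components gives the "from" direction;
    # atan2(east, north) yields a bearing clockwise from north.
    direction = math.degrees(math.atan2(-ws_east, -ws_north)) % 360.0
    return speed, direction

# Example: heading/track 000 deg, TAS 100, GS 80 -> 20 kt headwind from the north.
speed, direction = wind_from_track(100.0, 0.0, 80.0, 0.0)
```

For a sanity check, a pure tailwind (heading and track 090, GS 20 above TAS) comes out as wind from 270 degrees.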
Abbreviation: All

An allegory is an expanded category $\mathbf{M}=\langle M,\circ,\text{dom},\text{rng},\text{id},\vee,\wedge,^\smile\rangle$ such that

$...$ is …: $...$

$...$ is …: $...$

Remark: This is a template. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be allegories. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a functor $F:A\rightarrow B$ that also preserves the new operations: $h(x ... y)=h(x) ... h(y)$

An allegory is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …

$...$ is …: $axiom$

$...$ is …: $axiom$

Example 1:

Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[...]] subvariety

[[...]] expansion

[[...]] supervariety

[[...]] subreduct