The Annals of Probability, Volume 29, Number 1 (2001), 92-122.

Power-law corrections to exponential decay of connectivities and correlations in lattice models

Abstract: Consider a translation-invariant bond percolation model on the integer lattice which has exponential decay of connectivities, that is, the probability of a connection $0 \leftrightarrow x$ by a path of open bonds decreases like $\exp\{-m(\theta)|x|\}$ for some positive constant $m(\theta)$ which may depend on the direction $\theta = x/|x|$. In two and three dimensions, it is shown that if the model has an appropriate mixing property and satisfies a special case of the FKG property, then there is at most a power-law correction to the exponential decay: there exist $A$ and $C$ such that $\exp\{-m(\theta)|x|\} \ge P(0 \leftrightarrow x) \ge A|x|^{-C} \exp\{-m(\theta)|x|\}$ for all nonzero $x$. In four or more dimensions, a similar bound holds with $|x|^{-C}$ replaced by $\exp\{-C(\log |x|)^2\}$. In particular, the power-law lower bound holds for the Fortuin-Kasteleyn random cluster model in two dimensions whenever the connectivity decays exponentially, since the mixing property is known to hold in that case. Consequently a similar bound holds for correlations in the Potts model at supercritical temperatures.

Article information
Source: Ann. Probab., Volume 29, Number 1 (2001), 92-122.
Dates: First available in Project Euclid: 21 December 2001
Permanent link to this document: https://projecteuclid.org/euclid.aop/1008956323
Digital Object Identifier: doi:10.1214/aop/1008956323
Mathematical Reviews number (MathSciNet): MR1825143
Zentralblatt MATH identifier: 1034.82005
Subjects: Primary: 60K35 Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]. Secondary: 82B20 Lattice systems (Ising, dimer, Potts, etc.) and systems on graphs; 82B43 Percolation [See also 60K35]
Citation: Alexander, Kenneth S. Power-law corrections to exponential decay of connectivities and correlations in lattice models. Ann. Probab. 29 (2001), no. 1, 92-122. doi:10.1214/aop/1008956323. https://projecteuclid.org/euclid.aop/1008956323
TADM2E 2.21

First we should look at the dominance relations given in the book, as they help in determining the ordering for the cases involving $\sqrt{n}$.

1. True
2. False; $\sqrt{n}$ dominates $\log n$.
3. True
4. False
5. True
6. True
7. False

Edit: I'm not sure why (7) is False. It's possible that I'm mistaken, but I think that $$ n^{-1/2} = \dfrac{n}{\sqrt{n}} = \sqrt{n} $$ Since $\sqrt{n}$ dominates $\log n$: $$ \log n = O(\sqrt{n}) $$ Then 7 should be True.
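One quick way to sanity-check the identity used in that edit is to evaluate both sides numerically (plain Python):

```python
import math

# does n**(-1/2) really equal n / sqrt(n)?
for n in [4, 100, 10**6]:
    print(n, n**-0.5, n / math.sqrt(n), math.sqrt(n))

# n**-0.5 evaluates to 1/sqrt(n), while n/sqrt(n) evaluates to sqrt(n),
# so the two expressions are different functions
```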
The domain for x, y, z is the real numbers.

i) $\forall x \exists y (y^2<x)$ FALSE; counterexample: $x=0$, since $y^2<0$ has no solution.

ii) $\forall x \exists y (y^3<x)$ TRUE, since cubes take all real values.

iii) $\forall x\exists y \forall z ((y>0) \land ((z^2<y) \rightarrow (z^2+1<x^4)))$ (Not sure how exactly x, y, z relate to each other, but from my current understanding it is False.) Update: it's false because the conclusion of the if-then statement, $(z^2+1<x^4)$, can fail while the hypothesis holds: take $x=0$; then for any $y>0$ the antecedent $z^2<y$ is satisfied by $z=0$, but $z^2+1<x^4$ becomes $1<0$, which is false.

iv) $\exists y \forall x (y^2<x)$ FALSE; counterexample: any $x \le 0$ fails, since $y^2 \ge 0$.

Please let me know if my answers are right or wrong; if wrong, please provide an explanation. For iii) please provide an explanation in either case. Thanks!
For a project I'm looking for continuous distributions which have a somewhat simple closed form for the upper-truncation expectation ($E[X \mid X>c]$). Here are two examples I've found so far:

Exponential distribution ($F(x)=1-e^{-\lambda x}$): $c+1/\lambda$

Uniform distribution on $(a,b)$: $\frac{c+b}{2}$

Assuming positive $c$:

The logistic distribution with mean 0 and scale parameter $s$ has truncated expectation $$-ck + s(1+k)\log(1+k),\text{ where }k=e^{c/s}$$

The Laplace distribution with mean 0 and scale parameter $b$ has truncated expectation $$b+c,$$ since for $c \ge 0$ the conditional distribution of $X-c$ given $X>c$ is exponential with mean $b$.
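These closed forms are easy to spot-check by simulation; here is a quick Monte Carlo sketch with numpy (the parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, c = 2.0, 0.7

# exponential: E[X | X > c] should be c + 1/lambda
x = rng.exponential(1 / lam, 10**6)
print(x[x > c].mean(), c + 1 / lam)

# Laplace with mean 0, scale b: for c > 0, E[X | X > c] should be b + c
b = 1.5
y = rng.laplace(0.0, b, 10**6)
print(y[y > c].mean(), b + c)
```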
A standard deviation is a number that tells us to what extent a set of numbers lie apart. A standard deviation can range from 0 to infinity. A standard deviation of 0 means that the numbers in a list are all equal: they don't lie apart to any extent at all.

Standard Deviation - Example

Five applicants took an IQ test as part of a job application. Their scores on three IQ components are shown below. Now, let's take a close look at the scores on the 3 IQ components. Note that all three have a mean of 100 over our 5 applicants. However, the scores on iq_verbal lie closer together than the scores on iq_math. Furthermore, the scores on iq_spatial lie further apart than the scores on the first two components. The precise extent to which a number of scores lie apart can be expressed as a number. This number is known as the standard deviation.

Standard Deviation - Results

In real life, we obviously don't visually inspect raw scores in order to see how far they lie apart. Instead, we simply have some software calculate them for us (more on that later). The table below shows the standard deviations and some other statistics for our IQ data. Note that the standard deviations confirm the pattern we saw in the raw data.

Standard Deviation and Histogram

Right, let's make things a bit more visual. The figure below shows the standard deviations and the histograms for our IQ scores. Note that each bar represents the score of 1 applicant on 1 IQ component. Once again, we see that the standard deviations indicate the extent to which the scores lie apart.

Standard Deviation - More Histograms

When we visualize data on just a handful of observations as in the previous figure, we easily see a clear picture. For a more realistic example, we'll present histograms for 1,000 observations below. Importantly, these histograms have identical scales: for each histogram, one centimeter on the x-axis corresponds to some 40 'IQ component points'. Note how the histograms allow for rough estimates of standard deviations. 'Wider' histograms indicate larger standard deviations; the scores (x-axis) lie further apart. Since all histograms have identical surface areas (corresponding to 1,000 observations), higher standard deviations are also associated with 'lower' histograms.

Standard Deviation - Population Formula

So how does your software calculate standard deviations? Well, the basic formula is $$\sigma = \sqrt{\frac{\sum(X - \mu)^2}{N}}$$ where \(X\) denotes each separate number, \(\mu\) denotes the mean over all numbers and \(\sum\) denotes a sum. In words, the standard deviation is the square root of the average squared difference between each individual number and the mean of these numbers. Importantly, this formula assumes that your data contain the entire population of interest (hence "population formula"). If your data contain only a sample from your target population, see below.

Population Formula - Software

You can use this formula in Google Sheets, OpenOffice and Excel by typing =STDEVP(...) into a cell. Specify the numbers over which you want the standard deviation between the parentheses and press Enter. The figure below illustrates the idea. Oddly, the population standard deviation formula does not seem to exist in SPSS.

Standard Deviation - Sample Formula

Now for something challenging: if your data are (approximately) a simple random sample from some (much) larger population, then the previous formula will systematically underestimate the standard deviation in this population.
A (nearly) unbiased estimator for the population standard deviation is obtained by using $$S = \sqrt{\frac{\sum(X - \overline{X})^2}{n -1}}$$ Strictly speaking, dividing by \(n-1\) makes \(S^2\) an unbiased estimator of the population variance; \(S\) itself still slightly underestimates the population standard deviation, but far less so than the population formula does on a sample. Regarding calculations, the big difference with the first formula is that we divide by \(n -1\) instead of \(n\). Dividing by a smaller number results in a (slightly) larger outcome, which compensates for the aforementioned underestimation (exactly so for the variance). For large sample sizes, however, the two formulas have virtually identical outcomes. In Google Sheets, OpenOffice and MS Excel, the STDEV function uses this second formula. It is also the (only) standard deviation formula implemented in SPSS.

Standard Deviation and Variance

A second number that expresses how far a set of numbers lie apart is the variance. The variance is the squared standard deviation. This implies that, like the standard deviation, the variance has a population as well as a sample formula. In principle, it's awkward that two different statistics basically express the same property of a set of numbers. Why don't we just discard the variance in favor of the standard deviation (or the reverse)? The basic answer is that the standard deviation has more desirable properties in some situations, and the variance in others.
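Both formulas are available in most numeric libraries as well; for instance, in Python's numpy the ddof argument selects the divisor \(N - \mathrm{ddof}\):

```python
import numpy as np

scores = np.array([90, 95, 100, 105, 110])
print(np.std(scores, ddof=0))  # population formula, divides by N   (like STDEVP)
print(np.std(scores, ddof=1))  # sample formula, divides by n - 1   (like STDEV)
```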
I'll write down the idea of the method of Lagrange multipliers here as a note. For simplicity, the number of variables is 3. Suppose a function $\psi: \Omega \to \mathbb{R}$ is differentiable in each variable and defines the constraint $\psi(x,y,z)=0$. If a function $f : \mathbb{R}^3 \to \mathbb{R}$ attains a constrained extremum at $p=(x_0, y_0, z_0)\in \Omega \subseteq \mathbb{R}^3$ and $\nabla \psi(p) \neq 0$, then the following equations hold for some constant $\lambda\in\mathbb{R}$: $\displaystyle \frac{\partial f}{\partial x}(p)-\lambda\frac{\partial \psi}{\partial x}(p) =\frac{\partial f}{\partial y}(p)-\lambda\frac{\partial \psi}{\partial y}(p)=\frac{\partial f}{\partial z}(p)-\lambda\frac{\partial \psi}{\partial z}(p)=0$
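As an illustration, the resulting system can be solved symbolically; the sketch below uses sympy, with an example choice of $f$ and $\psi$ (extremizing $x+y+z$ on the unit sphere) that is not part of the note itself:

```python
import sympy as sp

x, y, z, lam = sp.symbols("x y z lam", real=True)
f = x + y + z                      # function to extremize (illustrative choice)
psi = x**2 + y**2 + z**2 - 1       # constraint psi = 0 (unit sphere)

# grad f - lambda * grad psi = 0, together with the constraint itself
eqs = [sp.diff(f, v) - lam * sp.diff(psi, v) for v in (x, y, z)] + [psi]
print(sp.solve(eqs, [x, y, z, lam], dict=True))
# the two solutions x = y = z = +-1/sqrt(3) are the constrained max and min
```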
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let's say we took the radon example and took annual measurements. Let's ignore the floor part of the problem too, and just think about this model: radon_{i,c,t} = \mu_t + \alpha_{c,t} + \epsilon_{i,c,t}, so that \mu_t reflects the changing mean across counties over time, and \alpha_{c,t} captures changing county offsets. We want to encode our prior assumption that the \alpha_{c,t} won't change much year-to-year. What is the best way to do this? I've thought of a couple options, but neither is perfect.

Model \mu_t and \alpha_{c,t} as Gaussian random walks. We can constrain \alpha_{c,0} to be centered around zero using the init argument. The issue here: is modeling the walks with drift=0 enough to keep the mean of \alpha_{c,t} from drifting over time? My intuition is that the mean of \alpha_{c,t} could drift in one direction while \mu_t drifts in the opposite direction.

Model \alpha_{c,t} as an MvNormal. This ensures that each year will have mean zero, but using the covariance matrix to encode our assumption of minimal year-to-year change seems awkward.

A related question: if this were a single-year model with no time variation, we would use a non-centered parameterization for \alpha_{c}. Is there a corresponding non-centered parameterization for time-varying \alpha_{c,t}?

My next step is to generate some simulated data and run some tests, but I thought I'd ask first to see how other folks have solved this problem.
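One possible sketch of a non-centered time-varying parameterization, assuming a PyMC3-style API (the dimensions, priors, and data names below are all hypothetical): scale iid standard-normal increments by a learned step size and cumulatively sum them, so the random-walk structure lives in deterministic transforms rather than in the prior itself.

```python
import pymc3 as pm
import theano.tensor as tt

n_counties, n_years = 85, 10  # hypothetical sizes

with pm.Model() as model:
    # shared yearly mean, as a Gaussian random walk
    mu_t = pm.GaussianRandomWalk("mu_t", sigma=0.5, shape=n_years)

    # non-centered county offsets: standard-normal increments,
    # scaled by a learned step size and cumulatively summed over years
    sigma_alpha = pm.HalfNormal("sigma_alpha", 1.0)  # year-to-year wiggle
    z = pm.Normal("z", 0.0, 1.0, shape=(n_counties, n_years))
    alpha = pm.Deterministic("alpha", sigma_alpha * tt.cumsum(z, axis=1))

    sigma_obs = pm.HalfNormal("sigma_obs", 1.0)
    # county_idx, year_idx, radon_obs would come from the data, e.g.:
    # pm.Normal("y", mu_t[year_idx] + alpha[county_idx, year_idx],
    #           sigma_obs, observed=radon_obs)
```

Here the first column of alpha is N(0, sigma_alpha), which plays the role of the init constraint; it does not by itself rule out the joint drift described above (a per-year sum-to-zero constraint across counties would be one way to address that).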
Ex.10.2 Q1 Circles Solution - NCERT Maths Class 10 Question From a point \(Q,\) the length of the tangent to a circle is \(\text{24 cm} \) and the distance of \(Q\) from the centre is \(\text{25 cm.} \) The radius of the circle is (A) \(\text{7 cm}\) (B) \(\text{12 cm}\) (C) \(\text{15 cm}\) (D) \(\text{24.5 cm}\) Text Solution What is Known? (i) Length of the tangent from the point \(Q,\) i.e. \(PQ,\) is \(\text{24 cm}\) (ii) Distance of \(Q\) from the centre \(O,\) i.e. \(OQ,\) is \(\text{25 cm.}\) What is Unknown? The radius of the circle. Reasoning: The tangent at any point of a circle is perpendicular to the radius through the point of contact. Steps: \(\therefore \; \Delta \,{OPQ}\) is right-angled at \(P.\) By the Pythagorean theorem, \[\begin{align} {O Q} ^ { 2 } &= {O P} ^ { 2 } + {P Q} ^ { 2 } \\ 25 ^ { 2 } &= r ^ { 2 } + 24 ^ { 2 } \\ r ^ { 2 } &= 25 ^ { 2 } - 24 ^ { 2 } \\ & = 625 - 576 \\ r ^ { 2 } &= 49 \\ r& = \pm 7 \end{align}\] Since a radius cannot be negative, \(r = + 7 \; \rm{cm}.\) Hence the correct option is (A).
Table of Contents:
Chemical Equilibrium
Ionic Equilibrium – Degree of Ionization and Dissociation
Equilibrium Constant – Characteristics and Applications
Le Chatelier's Principle on Equilibrium
Solubility and Solubility Product
Acid and Base
pH Scale and Acidity
pH and Solutions
Hydrolysis, Salts, and Types
Buffer Solutions

pH and Solutions

Water itself ionizes and has a hydrogen-ion concentration of $10^{-7}$ mol/L. So, for pH near 6-8, the hydrogen-ion contribution of water must also be included.

Mixture of Two Strong Acids

A strong acid is completely ionized, so the hydrogen-ion concentration equals the acid concentration. In a mixture, the hydrogen-ion concentration is the total moles of hydrogen ion divided by the total volume. Say $N_1, V_1$ are the strength and volume of the first strong acid and $N_2, V_2$ of the second acid. The moles of hydrogen ion from acid 1 are $N_1V_1$ and from acid 2 are $N_2V_2$. Total hydrogen ion $= N_1V_1 + N_2V_2$; total volume of solution $= V_1 + V_2$.
$$[H^+] = \frac{N_1V_1+N_2V_2}{V_1+V_2}$$
from which pH can be calculated.

Mixture of Two Strong Bases

Strong bases are completely ionized, so the hydroxide-ion concentration equals the base concentration. In a mixture, the hydroxide-ion concentration is the total moles of hydroxide ion divided by the total volume. Say $N_1, V_1$ are the strength and volume of the first strong base and $N_2, V_2$ of the second base. The moles of hydroxide ion from base 1 are $N_1V_1$ and from base 2 are $N_2V_2$. Total hydroxide ion $= N_1V_1 + N_2V_2$; total volume of solution $= V_1 + V_2$.
$$[OH^-] = \frac{N_1V_1+N_2V_2}{V_1+V_2}, \qquad [H^+] = \frac{10^{-14}}{[OH^{-}]}$$
from which pH can be calculated.

Mixture of a Strong Acid and a Strong Base

On mixing, neutralization takes place; the resulting solution may be acidic or basic depending on the concentrations. Say $N_1, V_1$ are the strength and volume of the strong acid and $N_2, V_2$ of the base. If $N_1V_1 > N_2V_2$, the resulting solution will be acidic, with $$[H^+] = \frac{N_1V_1-N_2V_2}{V_1+V_2}$$ If $N_1V_1 < N_2V_2$, the resulting solution will be basic, with $$[OH^-] = \frac{N_2V_2-N_1V_1}{V_1+V_2}$$

Weak Acid

Weak acids ionize partly, and Ostwald's dilution law can be applied to calculate pH.
$$HA\rightleftharpoons H^{+}+A^{-}$$
Initial concentration (mol/L): $C$, $0$, $0$. At equilibrium (mol/L): $C(1-\alpha)$, $C\alpha$, $C\alpha$. So the acid ionization constant is
$$K_a=\frac{[H^+][A^-]}{[HA]}=\frac{(C\alpha)(C\alpha)}{C(1-\alpha)}=\frac{C\alpha^{2}}{1-\alpha}$$
(i) For very weak electrolytes, since $\alpha \lll 1$, $(1-\alpha) \approx 1$, so $K_a=C\alpha^{2}$ and $\alpha = \sqrt{\frac{K_a}{C}}=\sqrt{K_aV}$ (where $V = 1/C$ is the dilution).
(ii) The concentration of $H^+$ ion is $C\alpha = \sqrt{CK_a}=\sqrt{\frac{K_a}{V}}$.
(iii) $pH=-\log\sqrt{CK_a}=\frac{1}{2}(-\log K_a -\log C)$; that is, $pH=\frac{1}{2}(pK_a -\log C)$. Increasing dilution increases both ionization and pH.

Mixture of a strong acid and a weak monoprotic acid

Let $C_1$ and $C_2$ be the concentrations of the strong and weak acids. If $\alpha$ is the degree of dissociation of the weak acid in the mixture, the hydrogen-ion concentration is $[H^+] = C_1 + C_2\alpha$. The degree of dissociation of the weak acid will be less than in the pure acid because of the higher $[H^+]$ from the strong acid (the common-ion effect). If $[H^+]$ is less than $10^{-6}$ mol/L, the hydrogen-ion contribution from water must also be added.

Mixture of two Weak Monoprotic Acids

Say the two weak acids $HA_1$ and $HA_2$ have concentrations $C_1, C_2$ and degrees of ionization $\alpha_1$ and $\alpha_2$.
At equilibrium (mol/L): $[HA_1] = C_1(1-\alpha_1)$, $[HA_2] = C_2(1-\alpha_2)$, $[H^+] = C_1\alpha_1+ C_2\alpha_2$, $[A_1^-] = C_1\alpha_1$, $[A_2^-] = C_2\alpha_2$. So, from $K_a=\frac{[H^+][A^-]}{[HA]}$:
$$K_{a1}=\frac{C_1\alpha_1(C_1\alpha_1+C_2\alpha_2)}{C_1(1-\alpha_1)}, \qquad K_{a2}=\frac{C_2\alpha_2(C_1\alpha_1+C_2\alpha_2)}{C_2(1-\alpha_2)}$$
Since the $\alpha$'s are small,
$$K_{a1} \approx (C_1\alpha_1+ C_2\alpha_2)\,\alpha_1, \qquad K_{a2} \approx (C_1\alpha_1+ C_2\alpha_2)\,\alpha_2$$
$$[H^+] = C_1\alpha_1+C_2\alpha_2 = \sqrt{C_1 K_{a1}+ C_2 K_{a2}}$$
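For convenience, the formulas above translate directly into a small calculator; this sketch (plain Python, with illustrative concentrations in mol/L and volumes in L) computes the pH of a strong-acid mixture and of a dilute weak acid:

```python
import math

def ph_strong_acid_mixture(n1, v1, n2, v2):
    """pH of two mixed strong acids: [H+] = (N1*V1 + N2*V2) / (V1 + V2)."""
    h = (n1 * v1 + n2 * v2) / (v1 + v2)
    return -math.log10(h)

def ph_weak_acid(c, ka):
    """pH = (1/2)(pKa - log C), valid when alpha << 1."""
    return 0.5 * (-math.log10(ka) - math.log10(c))

print(ph_strong_acid_mixture(0.1, 0.05, 0.01, 0.05))  # ~1.26
print(ph_weak_acid(0.1, 1.8e-5))                      # 0.1 M acetic acid: ~2.87
```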
Some convenient notational framework

For the formal handling I propose a slightly different notation:

two resistors with $r_1$ and $r_2$ Ohm in series: $ r_1 \oplus r_2$
two resistors with $r_1$ and $r_2$ Ohm in parallel: $ r_1 || r_2$

For convenience (reduction of parentheses) we assume that "$||$" binds stronger than "$ \oplus $".

two or more equal resistors in series: $ \,^n r $
two or more equal resistors in parallel: $ \,_n r $

where if the iteration number $n=1$ we have $ \,^1 r= \,_1 r = r$. We let the notations $ \,^n r$ and $ \,_n r$ bind stronger than $||$ and $\oplus$. Of course, numerically $\,^n r $ with a resistor of $r=1 \; \Omega$ evaluates to $ n \cdot r = n \; \Omega $, and $\, _n r$ evaluates to $1/n \cdot r = 1/n \;\Omega$.

Some basic algebraic rules on equal resistors are $ \,^m r \oplus \,^n r = \,^{n+m} r $ and $ \, _m r|| \, _n r = \,_{n+m} r $, $ \,_m ( \,^n r) = \,^n ( \,_m r) = \,_m ^n r \quad $ where also $\,_1 ^n r = \, ^nr \quad $ and $\,_n ^1 r = \, _nr \quad $ and finally $ \,_n ^m r || \,_p ^q r = \,_{qn+pm} ^{m q} r \qquad $ and $ \qquad \,_n ^m r \oplus \,_p ^q r = \,_{n p} ^{mp + qn} r $

For some partial construct $ \, ^a_b r $, the upper bound for the number of needed resistors is $a \cdot b$, and for a concatenation of such constructs, their sum. By expansion of $ \, ^a_b r $ into concatenated subconstructs that number can in most cases be reduced; a standard (=safe) algorithm is the representation of $\frac ab $ by its continued fraction; for sometimes even better solutions see the examples below.

A nice conjugacy: $ \,^m r || \,^n r = {1 \over \, _m r \oplus \, _n r} $

Some examples show possible improvements over continued-fraction solutions

Three examples:

Let $x=5/6 = {2 + 3\over 2 \cdot 3}$ and of course we always have $r=1 \text{ Ohm} $. The continued fraction (="Euclidean"?) gives $\text{contfrac}(x) = [0,1,5]$, so we have $x = 1/(1 + 1/5) $ and thus $$x = r || \, ^5 r $$ and we need $6$ resistors, which is not optimal. A better configuration is $ 5/6 = 1/2 + 1/3 = \,_ 2 r \oplus \, _3 r$, so we need $5$ resistors.

Let $x=8/15 = {3+5\over 3 \cdot 5}$ and again $r=1 \text{ Ohm} $. The continued fraction gives $\text{contfrac}(x) = [0,1,1,7]$, so we have $x = 1/(1 + 1/(1+1/7)) $ and thus $$x = r||(r \oplus \, _7r ) $$ and we need $9$ resistors, which is not optimal. A better configuration is $ 8/15 = 1/3 + 1/5 = \,_3 r \oplus \, _5 r $, so we need $8$ resistors.

Let $x=103/165 = {3 \cdot 5 + 5\cdot 11 + 11 \cdot 3\over 3 \cdot 5 \cdot 11} $. The continued fraction gives $\text{contfrac}(x) = [0, 1, 1, 1, 1, 1, 20]$, so we have $x = 1/(1 + 1/(1+ 1/(1+1/(1+1/20)))) $ and thus $$x = r || ( r \oplus ( r || ( r \oplus r || \,^{20} r )))$$ and we need $25$ resistors, which is not optimal. A better configuration is $ x = 1/3 + 1/5 + 1/11 = \,_3 r \oplus \, _5 r \oplus \, _{11} r$, so we need only $19$ resistors. Moreover, I get with $$ x= \, _3 r \oplus \, _3 r || ( \,^2 r \oplus \, _3 r || \, ^2 r ) $$ an even better solution with only $13$ resistors... Hmmm. It is the following sum of two shorter and much smoother continued fractions: $$ x = \frac13 + {1 \over 3 + {1\over 2+ {1\over 3+\frac12} } } = [0;3] + [0;3,2,3,2] $$

Update: Method for creating the full tree of resistor configurations

One remark on the example trees in the other answers. Let $$R_1 = [r] $$ be the 1-element vector of possible configurations with one resistor. Then let $$R_2 = [\,^2 r, \, _2 r] $$ be the two-element vector of possible configurations of two resistors.
We can understand that result as the concatenation of the vector product $ R_1 \times_\oplus R_1 $ one time with the operation $ \oplus$ and one time $ R_1 \times_{||} R_1 $ with the operation $||$. So fully expanded we have $$ R_2 = [ r \oplus r , r || r ]$$ with the already given two resulting elements. This latter definition lets us generalize to get a full tree: let analogously $$\hat R_3 = [R_2 \times_\oplus R_1 , R_2 \times_{||} R_1] $$ Then let us assume that this is sorted and that duplicate elements are removed so each occurs only once, such that effectively $$ \begin{array}{rl}R_3 &= [\,^2 r \oplus r, \, _2 r \oplus r, \, ^2 r || r, \, _2 r||r ] \quad \text{or}\\R_3 &= [\,^3 r, \, ^3_2 r, \, ^2_3 r, \, ^1_3 r ] = [\frac31, \frac32 , \frac23 , \frac13 ] \end{array} $$ The key to the tree is now that for $R_4$ all possible combinations must be collected, which means in full $$ \begin{array} {rl} \hat R_4 &= [ & R_3 \times_\oplus R_1 , R_3 \times_{||} R_1 , \\ & & R_2 \times_\oplus R_2 , R_2 \times_{||} R_2 ] \\\end{array}$$ and $R_4$ is then the sorted and shortened version of $\hat R_4$, with the elements $$ R_4 = [4, \frac52, \frac53, \frac43, 1, \frac34, \frac35, \frac25, \frac14] $$ The difference to earlier answers (with a sketch of the partial tree, for instance by @zwim) is that there we have only $$ \begin{array} {rl} \hat R_4 &= [ & R_3 \times_\oplus R_1 , R_3 \times_{||} R_1 ] \\\end{array}$$ and so the second part ($R_2 \times R_2$) is missing there. Of course $$ \begin{array} {rl} \hat R_5 &= [ & R_4 \times_\oplus R_1 , R_4 \times_{||} R_1 , \\ & & R_3 \times_\oplus R_2 , R_3 \times_{||} R_2 ] \\ \hat R_6 &= [ & R_5 \times_\oplus R_1 , R_5 \times_{||} R_1 , \\ & & R_4 \times_\oplus R_2 , R_4 \times_{||} R_2 , \\ & & R_3 \times_\oplus R_3 , R_3 \times_{||} R_3 ] \\\end{array}$$ and so on.... I'll spare myself drawing the (complicated) tree of partial constructions; someone else might do that. But it is to be noted that in this full tree various results occur multiple times, and we can search for the occurrence with the smallest number of required resistors. Note, for instance, that in $R_4$ we find the element $1r$, which has there the configuration $ (r \oplus r) || (r \oplus r) = \,^2 r || \,^2 r$ using 4 resistors, but of course that element occurs already in $R_1$ and has its earliest occurrence there.

Least number: I don't see how one could formalize that earliest occurrence in the tree / that "least number of resistors needed for some result" any further...

[appendix] After we remove from $R_m$ all elements which occur in an earlier $R_k, k \lt m$, to get $T_m$, we get the set of cardinalities $ [@T_1,@T_2,@T_3,...] = [ 1, 2, 4, 8, 20, 42, 102, 250, 610, 1486, 3710, 9228, 23050, 57718,...] $ which has already been found by @martin and is also a sequence in the OEIS: see "A051389 Rational resistances requiring at least n 1-ohm resistors in series or parallel (2006)", which provides links to more related information (a recent update was from 20 Mar, just a couple of days ago). An ordered list of all reachable fractional resistance values given by those vectors $T_k$ (with duplicates that need a higher number of resistors removed) shows the following picture (ironically I had written "transistors" instead of "resistors", but that's caused by my old-time education in electronics...)
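The full tree described here is also straightforward to enumerate by brute force; the following sketch (plain Python, exact arithmetic via fractions) builds each $R_n$ from all splits $R_k \times R_{n-k}$ and should reproduce the cardinalities $[1, 2, 4, 8, 20, \ldots]$ of A051389 for small $n$:

```python
from fractions import Fraction

def resistor_values(max_n):
    """R[n] = set of resistances buildable from exactly n unit resistors."""
    R = {1: {Fraction(1)}}
    for n in range(2, max_n + 1):
        R[n] = set()
        for k in range(1, n // 2 + 1):
            for a in R[k]:
                for b in R[n - k]:
                    R[n].add(a + b)              # series combination
                    R[n].add(a * b / (a + b))    # parallel combination
    return R

R = resistor_values(6)
seen = set()
for n in sorted(R):          # count values whose earliest occurrence is at n
    new = R[n] - seen
    print(n, len(new))       # 1, 2, 4, 8, 20, 42
    seen |= R[n]
```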
I saw that two random independent vectors are approximately orthogonal in high-dimensional space. How can I prove this? And is there an intuitive explanation? Thank you.

There are many ways to interpret this question, depending on how you "randomly" choose your vectors. Here's one example. Take the set of vectors $v\in\{-1,1\}^N$, where the coordinates of $v$ are chosen independently to be $+1$ half the time and $-1$ half the time. Then an easy calculation shows that the expected value of $$\frac{|v_1\cdot v_2|^2}{|v_1|^2|v_2|^2} \quad\text{is equal to}\quad\frac{1}{N}.$$ In other words, the expected value of $\cos^2\theta$, where $\theta$ is the angle between two randomly chosen vectors in this set, is $\frac{1}{N}$, which shows that randomly chosen vectors are reasonably orthogonal to one another if $N$ is large. One can compute the higher moments $\cos^{2k}\theta$ to get further information. Other random distributions may be computed similarly, for example the set of vectors whose coordinates are independently distributed in the interval $[-1,1]$. I don't really have a good intuitive explanation, because I have trouble visualizing $N$-dimensional space. (By "intuitive", I'm assuming you mean "geometrically intuitive.") For those who have voted to close this question, please don't. It does require some clarification to answer carefully, but it is an important principle in, for example, lattice-based cryptography.

The results from a large number of repeated trials can be viewed as a random point in a high-dimensional space, so we can use our intuition about probability in the long run as a substitute for our inability to visualize geometry in high dimensions. The "trend towards the mean" after many repeated trials of the same experiment explains why you should expect such phenomena only in high dimensions. I will focus on the intuition rather than a rigorous proof, since you asked about both and I think it's good to see why you should expect this type of result before reading a proof about it.

If you flip a fair coin a large number of times you would expect on average to have about as many heads as tails. Assigning "1" to an outcome of heads and "0" to an outcome of tails, for a sequence of $n$ independent coin flips $(x_1,\ldots,x_n)$ where $n$ is large, you'd expect the average $(\sum x_i)/n$ to be very close to $1/2$. To recenter the data around $0$ instead of $1/2$, apply the affine transformation $y_i = 2x_i -1$. Then $y_i = 1$ or $-1$, and $(y_1,y_2,\ldots,y_n)$ is a random sequence from $\{\pm 1\}$ with average value $(\sum y_i)/n = 2(\sum x_i)/n - (\sum 1)/n = 2(\sum x_i)/n -1$ very close to $2(1/2) -1 = 0$ when $n$ is very large. Everything I wrote above goes through if you replace the discrete $x_i$ in $\{0,1\}$ with uniformly chosen $x_i$ in $[0,1]$, so $y_i = 2x_i-1$ becomes a uniform random variable in $[-1,1]$. A vector $(y_1,\ldots,y_n)$ with coordinates chosen randomly (independently and uniformly) in $[-1,1]$ is, for large $n$ (not small $n$ like $2$ or $3$!), very likely to have the average of the coordinates nearly equal to $0$.

Now we make a connection to geometry. The average $(\sum y_i)/n$ can be viewed as the dot product of the vectors $\mathbf y = (y_1,\ldots,y_n)$ and $\mathbf v = (1/n,\ldots,1/n)$, so $\mathbf y \cdot \mathbf v$ is most likely small when $n$ is large.
Intuitively you might think "small dot product means they are nearly orthogonal," but the dot product involves the lengths of the vectors, not only the angle between them, so a dot product might be small because one of the vectors is small. And indeed $\mathbf v$ has length $1/\sqrt{n}$, which tends to $0$ as $n \rightarrow \infty$. Maybe the angle between $\mathbf y$ and $\mathbf v$ doesn't matter? A little algebra will help us focus on the angle instead of the lengths. Rewrite the dot product $\mathbf y\cdot \mathbf v$ as $(\mathbf y/\sqrt{n})\cdot \mathbf u$ where $\mathbf u = (1/\sqrt{n},\ldots,1/\sqrt{n})$ is a specific unit vector in $\mathbf R^n$. Since $||\mathbf y|| \leq \sqrt{n}$, and equality is certainly possible, the vector $\mathbf y/\sqrt{n}$ is a "random" vector in the unit ball of $\mathbf R^n$. Therefore our probabilistic intuition suggests that a random vector in the unit ball of $\mathbf R^n$ is very likely to have a small dot product with the specific unit vector $\mathbf u$. But geometrically what makes $\mathbf u$ so special? We can transform one unit vector to any other unit vector by a rotation (orthogonal transformation), and rotations do not change distances or dot products, so we are led to expect a random vector in the unit ball of $\mathbf R^n$ and a random unit vector in $\mathbf R^n$ to have a small dot product when $n$ is large. The unit ball and the unit sphere are different objects, but the following claim will let us put them on more of an equal footing in high dimensions. Claim: most of the unit ball in $\mathbf R^n$ consists of vectors near the boundary when $n$ is large. To explain the claim, pick a small $\varepsilon > 0$ and divide the unit ball in $\mathbf R^n$ into $\{\mathbf w : ||\mathbf w|| \leq 1-\varepsilon\}$ and $\{\mathbf w : 1-\varepsilon < ||\mathbf w|| \leq 1\}$. The second set is all the vectors in the unit ball that are near the boundary, and the first set has $n$-dimensional volume $(1-\varepsilon)^nb_n$, where $b_n$ is the volume of the $n$-dimensional unit ball. It turns out that even though the volume $2^n$ of the $n$-dimensional cube $[-1,1]^n$ explodes for large $n$, the unit-ball volume $b_n$ is bounded above as $n$ varies (I don't think this critical fact is intuitive; the value of $b_n$ is maximized at $n = 5$ and actually tends to $0$ as $n$ gets large), so for fixed $\varepsilon > 0$ that first set has volume tending to $0$ as $n$ gets large, because $(1-\varepsilon)^n \rightarrow 0$ as $n \rightarrow \infty$ no matter how ultra-tiny $\varepsilon$ is (you'd just better not make $\varepsilon$ depend on $n$). Thus, if you pick $\varepsilon$ and then make $n$ big enough, the bulk of the unit ball in $\mathbf R^n$ is in the second set. This completes the justification of the claim (including showing what it really means). By the claim, in high dimensions a random vector in the unit ball is likely to be near the unit sphere, so we can say that picking a random vector in the unit ball is essentially the same as picking a random unit vector when we are in high dimensions (certainly not in two or three dimensions). We can finally interpret our original intuition that the average $(\sum y_i)/n$ is likely to be near $0$ when $n$ is large as saying that two random unit vectors in $\mathbf R^n$ are likely to have a small dot product, and for unit vectors (or just vectors bounded away from the origin), having a small dot product means the vectors are nearly orthogonal. Pick two random unit vectors.
After picking the first vector, switch to a coordinate system in which this is the first basis vector. The probability distribution for the second vector is assumed to be uniform among all directions, and so should assign an average magnitude of $\approx 1/\sqrt{n}$ to each component, making the dot product between the two vectors $\approx 1/\sqrt{n}$ as well. You have to split up the "total allotted length" of $1$ among increasingly many directions, only one of which counts toward non-orthogonality.

I think there is a stronger statement for large $n$. If you have $n$ unit vectors (thought of as $n$ points chosen uniformly at random on the sphere in $\mathbf{R}^n$), and put them into the rows of an $n \times n$ matrix, the matrix will be nearly orthogonal. So you can choose $n$ vectors at a time, not just 2! You can test this by filling an $n \times n$ matrix with draws from a normal distribution with mean zero and standard deviation $1/\sqrt{n}$. Each row will be approximately unit length, distributed uniformly at random near the sphere, and the whole matrix will be nearly orthogonal.

The key term here is the measure concentration phenomenon, explored in the work of Vitaly Milman and others. This refers to the fact that on a high-dimensional sphere most of the volume is concentrated near the equator. If one assumes that one's probability distribution is uniform with respect to area, then this phenomenon is equivalent to the statement that a second vector chosen at random will be close to orthogonal to the original one. This is essentially a development of Yemon Choi's comment.

One way to come at this is to try to stretch your intuition even farther, toward the Johnson-Lindenstrauss lemma, which says that while we can only fit $n$ orthogonal vectors into $\mathbb{R}^n$, we can easily (randomly) fit exponentially many almost-orthogonal vectors. The precise statement of the lemma includes the result you mention as a special case, so it's not really fair to say it provides the intuition. Rather, I mention it because if you can fit it into your toolbox for thinking about high dimensions it will make other things more intuitive.
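A quick numeric illustration of the $1/\sqrt{n}$ scaling discussed above, drawing unit vectors via the rotation-invariant Gaussian construction (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [3, 30, 300, 3000]:
    # normalized Gaussian vectors are uniform on the unit sphere
    u = rng.normal(size=n); u /= np.linalg.norm(u)
    v = rng.normal(size=n); v /= np.linalg.norm(v)
    print(n, abs(u @ v), 1 / np.sqrt(n))  # |cos(theta)| is typically ~ 1/sqrt(n)
```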
It is well known that the Van der Pol equation $$\begin{cases} \dot x=y-(x^3-x)\\\dot y=-x \end{cases}$$ has no algebraic limit cycle. In light of this fact, we ask a related question of the following form: consider the Lienard equation $$\begin{cases} \dot x=y-F(x)\\ \dot y=-x\end{cases}$$ where $F(x)$ is a polynomial with real coefficients. Is there an example of a Lienard equation as above with two distinct limit cycles $\gamma_1, \gamma_2$ and a polynomial or rational map $g:\mathbb{R}^2 \to \mathbb{R}^2$ such that $g(\gamma_1)=\gamma_2$?
It is easier to think about the flatness problem in the other direction: from the past to the present. First, we need to introduce some quantities: $$E \equiv \frac{H}{H_0},\qquad \bar{\Omega} \equiv \Omega E^2, \qquad \bar{\Omega}_k \equiv \Omega_k E^2, \qquad \Omega_k \equiv -\frac{k}{a^2H^2},$$ where $H_0$ refers to the Hubble constant today, i.e., $H_0 \equiv H(a_0)$, with $a_0$ representing the scale factor today. In these variables $\bar\Omega + \bar\Omega_k = E^2$, $\bar\Omega(a_0) = \Omega(a_0)$ and $\bar\Omega_k(a_0) = \Omega_k(a_0)$. It is easy to see that $\Omega \propto \rho$, and therefore, for a fluid with arbitrary constant equation of state $w$, $$\bar\Omega_w = \Omega_{w0}\left(\frac{a_0}{a}\right)^{3(1+w)},\qquad \Omega_{w0} \equiv \Omega_w(a_0).$$ If the only component of the universe is the $w$ fluid, then $\Omega = \Omega_w$. To simplify further, let's choose a dust-like fluid $\Omega_m$ with $w = 0$ (however, the argument below is valid for any combination of components with equations of state larger than $-1/3$). Now, let's say that initially $\Omega$ is very close to one, that is, for an initial value of the scale factor $a_i$ we have $$\Omega_{mi} \equiv \Omega_m(a_i) = 1-10^{-5}, \qquad \Omega_{ki} \equiv \Omega_k(a_i) = 10^{-5}.$$ In terms of these initial densities we have $$\Omega_m = \frac{\Omega_{mi}(a_i/a)^{3}}{\Omega_{mi}(a_i/a)^{3} + \Omega_{ki}(a_i/a)^{2}} = \frac{1}{1+\alpha a / a_i}, \quad \Omega_k = \frac{\Omega_{ki}(a_i/a)^{2}}{\Omega_{mi}(a_i/a)^{3} + \Omega_{ki}(a_i/a)^{2}} = \frac{\alpha a / a_i}{1+\alpha a / a_i},$$ where we defined $\alpha = \Omega_{ki} / \Omega_{mi}$. Putting these initial conditions around the nucleosynthesis epoch, we have $a_i \approx 10^{-10}a_0$ and consequently $$\Omega_{m0} \approx \frac{1}{1+\alpha 10^{10}} \approx 10^{-5}, \qquad \Omega_{k0} \approx \frac{\alpha 10^{10}}{1+\alpha 10^{10}} \approx 1.$$ The situation is completely reversed: in this case today we would have a universe completely dominated by curvature. To avoid this, we would need $\alpha \ll 10^{-10}$, i.e. $\Omega_{ki} \ll 10^{-10}$. Now we can answer your questions:

"No matter which value $\Omega(t_0)$ we would measure, it would be arbitrarily close to 1 for small enough $t$."

That's true as long as the matter content of the universe has an equation of state larger than $-1/3$. However, that's exactly the problem: since the matter content tends to dominate in the past, the ratio $\Omega_k/\Omega$ is led dynamically to zero, even if $\vert\Omega_{k0}\vert > \vert\Omega_0\vert$ today. This means that if we want to put initial conditions in the past, we need to fine-tune $\Omega_{ki}$ to a very small value in order to obtain the observed value today, $\Omega_{k0} \approx 0$ (in the example above, to get $\Omega_{k0} \approx 10^{-5}$ we need $\Omega_{ki} \approx 10^{-15}$!). Nevertheless, this is not a problem per se. What happens is that we have an implicit expectation that the initial conditions should be natural/common (maybe an implicit use of the mediocrity principle), and the dynamics above show that only a very narrow interval of the possible initial conditions for $\Omega_{ki}$ could explain the current state of the universe. Finally, this has nothing to do with the Planck time or quantum gravity; the problem arises even if you choose to put initial conditions around the nucleosynthesis epoch.

"$\rho(t_0)$ and $k$ can be set independently."
"So increasing the density a little bit doesn't have any influence on the value of $\Omega$ at all (it just changes $H$). So the statement 'the matter density has to be precisely fine-tuned to make the universe (that) flat as we measure it today' doesn't make any sense, since matter doesn't affect the spatial curvature."

That's not true. You are right to say that if one varies $\rho(t_0)$ and $k$ independently, a shift in the value of $\rho$ modifies the value of $H$ (due to the Friedmann equation, which is actually a constraint equation and not a dynamical one). But this certainly affects $\Omega$. To see this, we can write the Friedmann equation in terms of the barred densities: $$E^2 = \bar{\Omega} + \bar\Omega_k,$$ remembering that $\bar\Omega \propto \rho$. Thus a shift in $\rho$ is equivalent to a shift in $\bar\Omega$: if our new density is given by $\bar\Omega \to \bar\Omega+\Delta$, then $E^2 \to E^2 + \Delta$ and consequently $$\Omega = \frac{\bar\Omega}{E^2} \to \frac{\bar\Omega+\Delta}{E^2+\Delta}, \qquad \vert1-\Omega\vert \to \left\vert\frac{1-\Omega}{1+\Delta/E^2}\right\vert.$$

"The quantity $1−\Omega(t)$ seems to be arbitrary. Couldn't I also arbitrarily define $b(t)=ke^{−1/t}$? Now $b$ needs to be even smaller than $\Omega$ for small $t$ if we measure $k$ to be small nowadays. Does this then create a more severe flatness problem?"

This quantity is not arbitrary; it is a direct consequence of the Friedmann equation written in terms of $\Omega$ and $\Omega_k$, i.e., $\Omega + \Omega_k = 1$ (yes, this is the Friedmann equation). This expression gives the exact relation between the energy content of the universe and the absolute value of the spatial curvature.

"So in conclusion: I don't see any problem as long as direct properties of the universe like $a(t)$ do not depend on the initial conditions $\rho(t_0)$ and $k$ in an unstable way."

Yes, that's actually one way to frame the problem. The flat solution is unstable: any small departure from an exactly flat solution leads to a curvature-dominated evolution at some point in the future (unless you have a matter component with $w<-1/3$, in which case it dominates instead; the most famous example of this is the cosmological constant).
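To see the instability numerically, here is a small sketch of the closed forms above (numpy assumed; the $10^{-5}$ initial curvature fraction is the one from the example):

```python
import numpy as np

# alpha = Omega_ki / Omega_mi, from the nucleosynthesis-era initial conditions
alpha = 1e-5 / (1 - 1e-5)

a_over_ai = np.logspace(0, 10, 6)   # from a_i up to a_0 ~ 1e10 * a_i
Omega_m = 1.0 / (1.0 + alpha * a_over_ai)
Omega_k = alpha * a_over_ai / (1.0 + alpha * a_over_ai)

for A, om, ok in zip(a_over_ai, Omega_m, Omega_k):
    print(f"a/a_i = {A:.0e}:  Omega_m = {om:.3e},  Omega_k = {ok:.3e}")
# the tiny initial curvature fraction grows to dominate by a/a_i ~ 1e10
```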
Because the slab extends infinitely in the xy plane, the electric field lies only along the z direction. The differential form of Gauss's law for such a one-dimensional electric field is $$\frac{dE}{dz}=\frac{\rho(z)}{\epsilon_0}$$ This can be integrated, using the boundary condition that the electric field must be zero at large distances from the slab, because the slab is electrically neutral.

A more elementary way of solving the problem is to divide the slab into infinitesimally thin layers of thickness $dz$. The electric field of each layer is uniform, independent of distance from the layer, and is $dE=\frac{d\sigma}{2\epsilon_0}$, pointing away from the layer on each side for +ve surface charge density $d\sigma=\rho(z)dz$. The total electric field at any point inside or outside of the slab is found by superposition of the fields from every such layer in the slab. Note that the total electric field at any point due to all layers which are closer to the centre plane $z=0$ is zero. This is because the charge density is anti-symmetric: for every layer of +ve area charge density on one side of $z=0$ there is a layer of -ve charge with the same magnitude of area density on the other side of $z=0$. The electric fields of these two layers cancel out at points which are outside of the two layers, in the same way that the total electric field is zero outside of a parallel-plate capacitor (if the plate dimensions are very much bigger than the distance from them). From this observation you can see that the electric field outside of the slab is zero, because all layers in the slab are closer to the centre plane.

The simplest way of getting the field inside the slab is to apply Gauss's law using a "pill box" Gaussian surface which has one face A of area S at the surface of the slab $z=a$ (where $E(a)=0$) and the other face B at distance $|z|<a$ from the centre plane. The other face(s) of the pill box are parallel to the z direction, so the electric flux through them is zero. There is no electric flux through face A; the flux through face B satisfies $E(z)S=\frac{q}{\epsilon_0}$, where $$q=S\int_a^z \rho(z')\,dz'$$ is the total charge inside the pill box.

See Electric field in a non-uniformly charged sheet. An answer identical to mine is given in the duplicate question Finding the electric field of a NON uniform slab?
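As a quick numeric check of this construction, assume for illustration an anti-symmetric density $\rho(z)=\rho_0 z/a$ (this particular $\rho$ is my choice, not from the question); integrating $dE/dz=\rho/\epsilon_0$ from one face, where $E=0$, should give $E=0$ at the other face as well:

```python
import numpy as np

eps0 = 8.854e-12            # vacuum permittivity, F/m
a, rho0 = 0.1, 1e-6         # illustrative half-thickness (m) and density scale (C/m^3)

z = np.linspace(-a, a, 20001)
rho = rho0 * z / a          # assumed anti-symmetric charge density

# E(z) = (1/eps0) * integral_{-a}^{z} rho(z') dz', with E(-a) = 0 outside
E = np.cumsum(rho) * (z[1] - z[0]) / eps0
print(E[-1])                # ~0: the field also vanishes at the far face
print(E[len(z) // 2])       # extremum at z = 0: about -(rho0 * a) / (2 * eps0)
```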
I've just finished designing my own laboratory power supply unit. At first I wanted it to be 0-3 A current-limited, ~1-30 V, but I've changed my mind about it: I think I might need more accuracy at the lower current levels instead of higher currents. For ease of design I decided that it'll be powered from a laptop charger. The choice of charger and the LT3080 for current precision made it 0-1 A, 1-20 V. The whole thing is monitored and controlled by a microcontroller. About the whole project idea: this is a hobby project. I know that one can buy a better bench PSU for less, but besides having a power supply I want to build it. I have some concerns about parts of the circuit, namely:

Does the control of the ST1S14 look right? I found this kind of voltage-controlled dc-dc converter somewhere on the Internet and I think I understand how it works, but I've never built anything like it: will it really work? I'm aware that there will be some offset from the set voltage and that's OK in this application, since this is just a preregulator (or there's an option to overcome this in software). Also, is the BCR169 a good transistor here? I used it just because it's cheap and seems to be fast enough.

For the current limiting there are 2 digipots: 5k and 50k. This gives 0-1 A regulation with a theoretical resolution of \$1A \cdot \frac{\frac{5k\Omega}{256}}{50k\Omega}\approx0.4mA\$ (plus errors, plus wiper resistance offset). Could current limiting be done better without digipots, using a DAC and some opamps?

Finally, the programming port. There's a USBasp connector on sheet 3 and I want to use it while the board is powered externally; is it enough to connect all pins but VCC?

Is there anything else you see here that doesn't seem right? Are there any flaws with the schematic readability (there are everywhere, but how can I improve it)? Here is the schematic:

Update: fixed the ADC buffering opamps. Changed them to MCP6001 running from VREF (3 V reference). Changed the shunt resistors to a single 1 W 500 m\$\Omega\$ resistor.

Update: I've rethought the preregulator control part and came up with a way to drive it with analog feedback. This makes things a bit less complicated. I tested it in LTspice (with some LT dc-dc ICs) and it appears to be working.
Ex.7.1 Q9 Coordinate Geometry Solution - NCERT Maths Class 10 Question If \(Q \;(0, 1)\) is equidistant from \(P \;(5, -3)\) and \(R\; (x, 6)\), find the values of \(x\). Also find the distances \(QR\) and \(PR\). Text Solution Reasoning: The distance between two points can be measured using the distance formula, which is given by: Distance Formula \(\begin{align} = \sqrt {{{\left( {{{\text{x}}_{\text{1}}} - {{\text{x}}_{\text{2}}}} \right)}^2} + {{\left( {{{\text{y}}_{\text{1}}} - {{\text{y}}_{\text{2}}}} \right)}^2}} .\end{align}\) What is known? The \(x\) and \(y\) co-ordinates of the points \(P\), \(Q\) and \(R\) between which the distance is to be measured. What is unknown? The value of \(x\) and the distances \(QR\) and \(PR\). Steps: Since \(Q\; (0, 1)\) is equidistant from \(P\; (5, -3)\) and \(R \;(x, 6)\), \(\begin{align}PQ = QR\end{align}\). We know that the distance between two points is given by the distance formula, \[\begin{align}\sqrt {{{\left( {{{\text{x}}_1} - {{\text{x}}_2}} \right)}^2} + {{\left( {{{\text{y}}_1} - {{\text{y}}_2}} \right)}^2}} \qquad \qquad...\;{\text{Equation}}\,(1)\end{align}\] Hence, applying the distance formula to \(\begin{align}PQ = QR,\end{align}\) we get \[\begin{align}\sqrt {{{(5 - 0)}^2} + {{( - 3 - 1)}^2}} &= \sqrt {{{(0 - x)}^2} + {{(1 - 6)}^2}} \\\sqrt {{{(5)}^2} + {{( - 4)}^2}} &= \sqrt {{{( - x)}^2} + {{( - 5)}^2}} \end{align}\] By squaring both sides, \[\begin{align}\;\;\;\;\;25 + 16 &= {{\text{x}}^2} + 25\\\;\;16 &= {{\text{x}}^2}\\\;\;{\text{x}} &= \pm 4\end{align}\] Therefore, point \(R\) is \((4, 6)\) or \((-4, 6)\). Case (1): when point \(R\) is \((4, 6)\), the distance between \(P \;(5, -3)\) and \(R\; (4, 6)\) can be calculated using the distance formula as \[\begin{align}PR &= \sqrt {{{(5 - 4)}^2} + {{( - 3 - 6)}^2}} \\ &= \sqrt {{1^2} + {{( - 9)}^2}} \\ &= \sqrt {1 + 81} \\ &= \sqrt {82}\end{align}\] and the distance between \(Q\; (0, 1)\) and \(R\; (4, 6)\) as \[\begin{align}QR &= \sqrt {{{(0 - 4)}^2} + {{(1 - 6)}^2}} \\ &= \sqrt {{{( - 4)}^2} + {{( - 5)}^2}} \\ &= \sqrt {16 + 25} \\& = \sqrt {41}\end{align}\] Case (2): when point \(R\) is \((-4, 6)\), the distance between \(P\; (5, -3)\) and \(R\; (-4, 6)\) can be calculated using the distance formula as \[\begin{align}PR& = \sqrt {{{(5 - ( - 4))}^2} + {{( - 3 - 6)}^2}} \\& = \sqrt {{{(9)}^2} + {{( - 9)}^2}} \\ &= \sqrt {81 + 81} \\& = 9\sqrt 2 \end{align}\] and the distance between \(Q \;(0, 1)\) and \(R\; (-4, 6)\) as \[\begin{align}QR& = \sqrt {{{(0 - ( - 4))}^2} + {{(1 - 6)}^2}} \\& = \sqrt {{{(4)}^2} + {{( - 5)}^2}} \\ &= \sqrt {16 + 25} \\&= \sqrt {41} \end{align}\]
Probability Seminar

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton

Title: Survival and extinction of epidemics on random graphs with general degrees

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University

Title: When particle systems meet PDEs

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how partial differential equations (PDEs) play a role in understanding the systems.

Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

February 14, Timo Seppäläinen, UW-Madison

Title: Geometry of the corner growth model

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH

Title: On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process: a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue

Title: Functional Limit Laws for Recurrent Excited Random Walks

Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison

Title: Harmonic Analysis on GL_n over finite fields, and Random Walks

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G.
For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison

Title: Outliers in the spectrum for products of independent random matrices

Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded-rank deterministic error. However, the bounded-rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M

Title: Stabilization of Diffusion Limited Aggregation in a Wedge

Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
Hello, I have a problem calculating the position of a pendulum as a function of $\theta$. For example: $\theta (t)$ is a function of time which returns the angle made by the pendulum at a particular instant with respect to its equilibrium position. So, $$ T = \dfrac 12 m l^2 \dot \theta^2 $$ $$ U = - mgl \cos \theta $$ $$ L(\theta, \dot \theta) = \dfrac 12 m l^2 \dot \theta^2 + mgl \cos \theta $$ Using the Euler-Lagrange equation, $$ \dfrac d{dt} \left ( \dfrac{\partial L}{\partial \dot \theta}\right) - \dfrac{\partial L}{\partial \theta} = 0 $$ we get $$ \boxed{\ddot \theta =- \dfrac gl \sin \theta} $$ which is the equation of motion. But most of the derivations I've seen/read go this way: $$ \ddot \theta = -\dfrac gl \theta \quad \dots \quad (\text{as } \sin \theta \approx \theta \text{ for } \theta \rightarrow 0) \tag{*} $$ $$ \theta (t) = \cos \left ( \sqrt{\dfrac gl} t \right) $$ because it satisfies $(*)$.

So, I have 2 questions here.

1. Other solutions of the second-order differential equation exist, like $\theta (t) = \sin \left( \sqrt{\dfrac gl}t \right)$ or linear combinations of sine and cosine. So why do we choose only that one? One could argue that the cosine matches a pendulum released from rest at its maximum angle, so it makes sense to accept that one. But in the general case, when we solve the Lagrangian and get the equation of motion in differential form, there are tons of complex situations possible. How can you determine which kind of solution is needed?

2. How can we solve the second-order differential equation $\ddot \theta = - \dfrac gl \sin \theta$ and get an exact formula?

Thanks :)
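On question 2: the exact solution involves Jacobi elliptic functions, so in practice one often integrates numerically. A quick sketch with scipy, comparing the full equation against the small-angle cosine solution for an illustrative release angle $\theta_0 = 0.2$ rad (my choice, not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 9.81        # g/l, with l = 1 m assumed for illustration
theta0 = 0.2           # small initial angle, released from rest

def rhs(t, s):
    # s = [theta, theta_dot]; full nonlinear pendulum equation
    return [s[1], -g_over_l * np.sin(s[0])]

t = np.linspace(0, 5, 200)
sol = solve_ivp(rhs, (0, 5), [theta0, 0.0], t_eval=t, rtol=1e-9)
approx = theta0 * np.cos(np.sqrt(g_over_l) * t)

print(np.max(np.abs(sol.y[0] - approx)))  # small for small theta0
```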
To demodulate a phase-shift keyed signal, of which BPSK is the simplest, you have to recover the carrier frequency, phase, and symbol timing. Bursty signals: some signals are bursty and provide a known data sequence called a preamble or mid-amble (depending on whether it shows up at the beginning or middle of the burst). Demodulators can use a matched ... First of all, the definitions are different. Phase delay: (the negative of) phase divided by frequency. Group delay: (the negative of) the first derivative of phase vs frequency. In words that means: phase delay is the phase angle at this point in frequency, and group delay is the rate of change of the phase around this point in frequency. When to use one or the other really ... You aren't doing anything wrong, but you also aren't thinking carefully about what you should expect to see, which is why you're surprised at the result. For question 1, your conjecture is close, but you actually have things backwards; it's numerical noise that's plaguing your second one, not your first one. Pictures may help. Here are plots of the magnitude ... Suppose that a linear filter has impulse response $h(t)$ and frequency response/transfer function $H(f) = \mathcal F [h(t)]$, where $H(f)$ has the property that $H(-f) = H^*(f)$ (conjugacy constraint). Now, the response of this filter to complex exponential input $x(t) = e^{j2\pi f t}$ is $$y(t) = H(f)e^{j2\pi f t} = |H(f)|e^{j(2\pi f t + \angle H(f))}$$ ... They don't both measure how much a sinusoid is delayed. Phase delay measures exactly that. Group delay is a little more complicated. Picture a short sine wave with an amplitude envelope applied to it so that it fades in and fades out, say, a Gaussian multiplied by a sinusoid. This envelope has a shape to it, and in particular, it has a peak that represents ... If you want the shifted output of the IFFT to be real, the phase twist/rotation in the frequency domain has to be conjugate symmetric, as well as the data. This can be accomplished by adding an appropriate offset to your complex exp()'s exponent, for the given phase slope, so that the phase of the upper (or negative) half, modulo $2\pi$, mirrors the lower ... One-dimensional version: the one-dimensional version that you list won't work. When there is a large enough shift in images (more than one or two pixels in real-world images), there will be nothing relating the column pixels. For an example of this, try: I5 = rand(100,100)*255; I6 = zeros(100,100); I6(11:100,22:100) = I5(1:90,1:79); so that we have I5: ... For those who still cannot see the difference, here is a simple example. Take a long transmission line with a simple sine signal with an amplitude envelope, $v(t)$, at its input: $$v(t) \cdot \sin(\omega t)$$ If you measure this signal at the end of the transmission line, it might come out something like this: $$ v(t-\tau_g) \cdot \sin(\omega t + \phi) = v(t-\tau_g)...$$ Phase noise and frequency noise are not two different noise sources; they are artifacts of the same noise, and it is just a matter of what units you want to use. Frequency and phase are directly related, as frequency is phase changing with time, so if you have one you will always have the other; frequency and phase are related by derivatives and integrals: the ... Nice question!
It uses one of my favorite trig identities (which can also be used to show that quadrature modulation is actually simultaneous amplitude and phase modulation). The impulse response of the system described above is given by: [equation and block diagram not preserved in this excerpt] Phase (or carrier) recovery for BPSK can be done over the entire sequence using the information from every sample. Here are common approaches to doing carrier recovery. Frequency doubling (squaring): if you square a BPSK signal (multiply it by itself) a strong tone will be created at twice your carrier frequency. The squaring operation strips the data ... Compare $y= \sin(\omega t)$ to $y=\sin(\omega t-\phi)$ as shown in the plot below. $\phi$ is the additional phase term in radians, where $\omega$ represents the frequency in radians per second. Thus the phase term shifts a sinusoid along the horizontal axis. So at a given frequency this will result in a time delay for that frequency, although it should be noted that a ... The book's formula is right. Let $$H(w) = 1 - r e^{j(\theta - w)} = [1-r \cos(\theta - w)] + j [-r \sin(\theta - w)]$$ Since the group delay $\tau$ is the negative of the derivative of the phase of $H(w)$, we first define the phase as: $$\phi(w) = \tan^{-1}\left( \frac{-r \sin(\theta - w)}{1-r \cos(\theta - w)} \right)$$ Using the derivative rule for the ... What you need is carrier phase synchronization. This is a complicated topic with many different approaches. The approach that you'll choose could depend on things like the following. Data-aided versus blind: does the underlying sequence contain any known data (e.g. a training or sync sequence of some kind) that you can use to divine the phase offset? Or do you have to ... Your question is tricky because it is hard to define what "discarding the phase information" means without specifying a practical way of doing it; but then we encounter problems/artifacts which are specific to this particular method. Since you mentioned the STFT, let us assume phase information is suppressed by doing an STFT, keeping the magnitudes, setting to 0 ... Answers about using cross-correlation are correct. But if your input and output signals are sinusoids you can use a simpler and faster method. Input: $y_1=a\sin(\omega t)$. Output: $y_2=b\sin(\omega t+\phi)$. Multiply input and output: $y_1 y_2=ab\sin(\omega t)\sin(\omega t+\phi)=\frac{ab}{2}(\cos\phi - \cos(2\omega t+\phi))$. Eliminate the high-frequency ... The group delay of a filter is defined as minus the change in the phase response with respect to frequency. If the phase response of a filter is $\Phi(\omega)$, the corresponding group delay $\tau_g$ is given by: \begin{equation}\tau_g = -\frac{d\Phi(\omega)}{d\omega}\end{equation} In Matlab code, the group delay of a 4th order Butterworth filter can be ... (a Python equivalent is sketched below). I think Wikipedia can more than adequately answer your question: http://en.wikipedia.org/wiki/Phase_(waves) However, in brief, the phase term describes the relationship between a waveform and a fixed reference point in time. For example, the sinusoid $\sin(\omega t)$ is zero at $t = 0$ whereas $\sin(\omega t - \phi)$ is zero at $t = \phi/\omega$. $\phi$ could be ... If you have a signal $$f[n]=\cos(\Omega_0n)$$ and you apply a time shift of $n_0$ you get $$f[n+n_0]=\cos(\Omega_0(n+n_0))=\cos(\Omega_0n+\Omega_0n_0)=\cos(\Omega_0n+\phi)$$ where $\phi=\Omega_0n_0$ is the phase shift. The other way around, if you have a phase shift of $\phi$, this is not always equivalent to a time shift of the original signal: $$g[n]=...
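The truncated excerpt above mentions computing the group delay of a 4th-order Butterworth filter in Matlab; a rough Python equivalent using scipy.signal (the filter order and cutoff are assumed for illustration):

from scipy.signal import butter, group_delay

b, a = butter(4, 0.2)        # 4th-order lowpass Butterworth, normalized cutoff 0.2 (assumed)
w, gd = group_delay((b, a))  # w in rad/sample, gd in samples
print(w[:3], gd[:3])

Plotting gd against w shows the frequency dependence of the group delay discussed in the neighboring excerpts.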
This is a slightly tedious but nevertheless straightforward exercise in computing the derivative of a function: $$\begin{align}\tau(\omega)&=-\frac{d\phi(\omega)}{d\omega}=-\frac{d}{d\omega}\arctan(f(\omega))\tag{1}\end{align}$$ with $$f(\omega)=\frac{r\sin(\omega-\theta)}{1-r\cos(\omega-\theta)}\tag{2}$$ From $(1)$ we have $$\tau(\omega)=-\frac{f'(\... There are several reasons why the two results don't match: the coefficients of the FIR Hilbert transformer are wrong; the FIR Hilbert transformer is too short to even come close to the performance of the FFT-based implementation; the frequency of the input signal is too low for the FIR Hilbert transformer to perform properly. A FIR Hilbert transformer always ... You're probably better off skipping the $5u(t)$ idea and going straight to a periodic input, probably a sinusoid. A phase-locked loop is for signals that have a phase to lock onto, whereas $u(t)$ doesn't have a phase. The Stanford reference looks pretty good. Start by comparing that to your feedback system: note that the phase detector is a ... Modern FFT libraries, such as FFTW and Apple's Accelerate framework, can do non-power-of-2 FFTs very efficiently, as long as all the prime divisors of the composite length are fairly small (2, 3, 5, etc.). A power of 2 makes it simpler (about 1 page of source code) if you have to code your own FFT for some reason, or are otherwise constrained as to max program ... This is indeed correct. There are a few things you could do to make it a better graph: label the x-axis; label the y-axis; use real physical units where applicable. Phase is measured either in radians or in degrees. These are VERY different things, so proper labeling with units helps to clarify what you are using. Similarly if f is a frequency in Hz (or kHz or ... The z-domain transfer function of the system is the z-transform of the system impulse response, so start by taking the Z transform of $h[n]$: $$H[z] = -z + 1 + 2z^{-1} + 2z^{-2} + z^{-3} -z^{-4}$$ You may be able to massage this into a nicer form, but that isn't necessary. Next, to get the frequency response, replace $z$ with $e^{jw}$. So this ... There is nothing inherently 'magical' about performing a power-of-2 DFT, other than the fact that doing so allows one to perform the DFT in $O(N\log N)$ instead of $O(N^2)$. So the power-of-2 DFT (the algorithm that does this is known as the FFT) allows you to simply speed up your DFT computation by a huge factor. I apply the fft ... If you are looking for a frequency-independent delay applied to any given input signal by the filter (apart from amplifying and attenuating certain frequency components), then you won't be able to find it, because there is no such delay. As you can see in your plots, group delay and phase delay are generally frequency dependent. Furthermore, for general input ... I'm not sure if I understand your problem, but I'll give it a try. If you have a complex frequency response $$H(\omega)=|H(\omega)|e^{j\phi(\omega)}\tag{1}$$ where $\phi(\omega)$ is the phase, then you probably know that for a given frequency $\omega_0$, the frequency response might as well be written as $$H(\omega_0)=|H(\omega_0)|e^{j(\phi(\omega_0)+2k\...
First fix the following notations: $AF:=$ the axiom of foundation $ZFC^{-}:=ZFC\setminus \left\lbrace AF \right\rbrace $ $G:=$ the proper class of all sets $V:=$ the proper class of von Neumann's cumulative hierarchy $L:=$ the proper class of Gödel's cumulative hierarchy $G=V:~\forall x~\exists y~(ord(y) \wedge ``x\in V_{y}")$ $G=L:~\forall x~\exists y~(ord(y) \wedge ``x\in L_{y}")$ Almost all of the $ZFC$ axioms have the same "nature" in some sense. They are "generating" new sets which form the world of mathematical objects $G$. In other words, they are complicating our mathematical chess by increasing its playable and legitimate pieces. But $AF$ is an exception in $ZFC$. It is "simplifying" our mathematical world by removing some sets which are innocently accused of being "ill-founded". $AF$ even regulates $G$ by $V$ and says $G=V$. So it is "miniaturizing" the "real" size of $G$ by the "very small" cumulative hierarchy $V$, just as the assumption of the constructibility axiom $G=L$ does. In fact, "minimizing" the size of the mathematical universe is the ontological "nature" of all constructibility-type axioms like $G=W$, where $W$ is an arbitrary cumulative hierarchy. But in the opposite direction, the large cardinal axioms say a different thing about $G$. We know that any large cardinal axiom stronger than "$0^{\sharp}$ exists" implies $G\neq L$. This illustrates the "nature" of large cardinal axioms. They implicitly say the universe of mathematical objects is too big and is "not" reachable by cumulative hierarchies. So it is obvious that any constructibility-type axiom such as $AF$ imposes a limitation on the height of the large cardinal tree. One of these serious limitations is the Kunen inconsistency theorem in $ZFC^{-}+AF$. Theorem (Kunen inconsistency): There is no non-trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle $ (or equivalently $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle$). The proof has two main steps as follows: Step (1): By induction on von Neumann's "rank" in $V$ one can prove that any non-trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$ has a critical point $\kappa$ on $Ord$. Step (2): By iterating $j$ on this critical point one can find an ordinal $\alpha$ such that $j[\alpha]=\lbrace j(\beta)~|~\beta \in \alpha \rbrace \notin V (=G)$, which is a contradiction. Now, in the absence of $AF$, we must notice that the Kunen inconsistency theorem splits into two distinct statements and the original proof fails in both of them. Statement (1): (Strong version of Kunen inconsistency) There is no non-trivial elementary embedding $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle$. Statement (2): (Weak version of Kunen inconsistency) There is no non-trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$. In statement (1), step (1) collapses because without $AF$ we do not have a "rank notion" on $G$ and the induction makes no sense. So we cannot find any critical point on $Ord$ for $j$ by "this method". In statement (2), step (2) fails because without $AF$ we don't know $G=V$, and so $j[\alpha]\notin V$ is not a contradiction. But it is clear that in $ZFC^{-}$ the original proof of the Kunen inconsistency theorem works for both of the following propositions: Proposition (1): There is no elementary embedding $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle $ with a critical point on $Ord$.
Proposition (2): Every non-trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$ has a critical point on $Ord$. Now the main questions are: Question (1): Is the statement "There is a non-trivial elementary embedding $j:\langle V,\in\rangle\longrightarrow \langle V,\in\rangle$" an acceptable large cardinal axiom in the absence of $AF$ ($G=V$)? What about other statements obtained by replacing $V$ with an arbitrary cumulative hierarchy $W$? (In this case, don't limit the definition of a cumulative hierarchy by the condition $W_{\alpha +1}\subseteq P(W_{\alpha})$.) Note that such statements are very similar to the statement "$0^{\sharp}$ exists", which is equivalent to the existence of a non-trivial elementary embedding $j:\langle L,\in\rangle\longrightarrow \langle L,\in\rangle$ and could be an "acceptable" large cardinal axiom in the "absence" of $G=L$. So if the answer to question (1) is positive, we can go "beyond" the weak version of Kunen inconsistency by removing $AF$ from $ZFC$, and so we can find a family of "Reinhardt shape" cardinals corresponding to any cumulative hierarchy $W$ by an argument similar to proposition (2), depending on the "good behavior" of the "rank notion" in $W$. Question (2): Is $AF$ necessary to prove the "strong" version of the Kunen inconsistency theorem? In other words, is the statement "$Con(ZFC)\longrightarrow Con(ZFC^{-}+ \exists$ a non-trivial elementary embedding $j:\langle G,\in\rangle\longrightarrow \langle G,\in\rangle)$" true? It seems that to go beyond the Kunen inconsistency it is not necessary to remove $AC$, which possibly "harms" our powerful tools and changes the "natural" behavior of objects. It simply suffices to omit $AF$'s limit on the largeness of "Cantor's heaven" and his "set-theoretic intuition". Anyway, the whole of set theory is not the study of $L$, $V$ or any other cumulative hierarchy, and there are many objects "outside" these realms. For example, without the limitation of $G=L$ we can see more large cardinals that are "invisible" in the small "scope" of $L$. In the same way, without the limitation of $AF$ we can probably discover more stars in the mathematical universe outside the scope of $V$. Furthermore, we can produce more interesting models and universes, and so we can play an extremely exciting mathematical chess beyond inconsistency, beyond imagination!
On the Commutability of Homogenization and Linearization in Finite Elasticity Abstract We consider a family of non-convex integral functionals $$\frac{1}{h^2}\int_\Omega W(x/\varepsilon,{\rm Id}+h\nabla g(x))\,\,{\rm d}x,\quad g\in W^{1,p}({\Omega};{\mathbb R}^n),$$ where $W$ is a Carathéodory function periodic in its first variable and non-degenerate in its second. We prove under suitable conditions that the $\Gamma$-limits corresponding to linearization ($h\to 0$) and homogenization ($\varepsilon\rightarrow 0$) commute, provided $W$ is minimal at the identity and admits a quadratic Taylor expansion at the identity. Moreover, we show that the homogenized integrand, which is determined by a multi-cell homogenization formula, has a quadratic Taylor expansion with a quadratic term that is given by the homogenization of the second variation of $W$.
We make a few comments only. $1.$ Note that $2\pi$ is a period of $\sin x$, or, equivalently, $1$ is a period of $\sin(2\pi x)$. But $\sin x$ has many other periods, such as $4\pi$, $6\pi$, and so on. However, $\sin x$ has no (positive) period shorter than $2\pi$. $2.$ If $p$ is a period of $f(x)$, and $H$ is any function, then $p$ is a period of $H(f(x))$. So in particular, $2\pi$ is a period of $\sin^2 x$. However, $\sin^2 x$ has a period which is smaller than $2\pi$, namely $\pi$. Note that $\sin(x+\pi)=-\sin x$, so $\sin^2(x+\pi)=\sin^2 x$. It turns out that $\pi$ is the shortest period of $\sin^2 x$. $3.$ For sums and products, the general situation is complicated. Let $p$ be a period of $f(x)$ and let $q$ be a period of $g(x)$. Suppose that there are positive integers $a$ and $b$ such that $ap=bq=r$. Then $r$ is a period of $f(x)+g(x)$, and also of $f(x)g(x)$. So for example, if $f(x)$ has $5\pi$ as a period, and $g(x)$ has $7\pi$ as a period, then $f(x)+g(x)$ and $f(x)g(x)$ each have $35\pi$ as a period. However, even if $5\pi$ is the shortest period of $f(x)$ and $7\pi$ is the shortest period of $g(x)$, the number $35\pi$ need not be the shortest period of $f(x)+g(x)$ or $f(x)g(x)$. We already had an example of this phenomenon: the shortest period of $\sin x$ is $2\pi$, while the shortest period of $(\sin x)(\sin x)$ is $\pi$. Here is a more dramatic example. Let $f(x)=\sin x$, and $g(x)=-\sin x$. Each function has smallest period $2\pi$. But their sum is the $0$-function, which has every positive number $p$ as a period! $4.$ If $p$ and $q$ are periods of $f(x)$ and $g(x)$ respectively, then any common multiple of $p$ and $q$ is a period of $H(f(x), g(x))$ for any function $H(u,v)$, in particular when $H$ is addition and when $H$ is multiplication. So the least common multiple of $p$ and $q$, if it exists, is a period of $H(f(x),g(x))$. However, it need not be the smallest period. $5.$ Periods can exhibit quite strange behaviour. For example, let $f(x)=1$ when $x$ is rational, and let $f(x)=0$ when $x$ is irrational. Then every positive rational $r$ is a period of $f(x)$. In particular, $f(x)$ is periodic but has no shortest period. $6.$ Quite often, the sum of two periodic functions is not periodic. For example, let $f(x)=\sin x+\cos 2\pi x$. The first term has period $2\pi$, the second has period $1$. The sum is not periodic. The problem is that $1$ and $2\pi$ are incommensurable. There do not exist positive integers $a$ and $b$ such that $(a)(1)=(b)(2\pi)$.
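A quick numerical spot-check of points 2 and 3 (a sketch; the grid and tolerance are arbitrary choices):

import numpy as np

x = np.linspace(0.0, 20.0, 2001)

def is_period(f, p, tol=1e-9):
    # crude numerical check that f(x + p) == f(x) on a sample grid
    return np.max(np.abs(f(x + p) - f(x))) < tol

print(is_period(lambda t: np.sin(t)**2, np.pi))    # True: pi is a period of sin^2(x)
f = lambda t: np.sin(2*t/5)                        # smallest period 5*pi
g = lambda t: np.sin(2*t/7)                        # smallest period 7*pi
print(is_period(lambda t: f(t) + g(t), 35*np.pi))  # True: 35*pi is a period of the sum

Such a check can only confirm that a candidate number is (numerically) a period; it cannot certify that it is the shortest one.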
Within this example, we'll consider a resistor network consisting of $N$ repetitive units. Each unit has two resistors of magnitude $R$ and $12R$, respectively. The units are connected to a battery of voltage $V_0$, as shown in the figure below. The goal of this example is to calculate the total current through the circuit, provided by the battery. The following image explains the applied notations and variables. When solving a problem like this, it is often useful to look at special scenarios, if available, before one tries to solve the general problem. We will now look at two such simplified problems. First, consider the special case when $N=1$. The circuit then looks like the following. This reduced problem is easily solved by defining $R_{eff}$, the effective resistance of the circuit connected to the battery. This implies that we may represent the circuit by the following diagram, where $R_{eff} = R+12R=13R$. Then, by Ohm's law,$$ I_{1,1} = \frac{V_0}{13R} = \frac{1}{13} \frac{\textrm{V}}{\Omega} \approx 0.0769 \textrm{A} $$ As another special scenario, we will consider the case when the number of units $N$ goes to infinity. This scenario is not so trivial. Obviously, we are providing more options for the current to flow as $N\to\infty$. Hence, we expect the resistance to decrease in this limit. Take a minute to think about how you would solve it before you read on. Again, we can consider an effective resistance $R_{eff}$ and represent the whole circuit as before (last image). Now, take the circuit described above with infinitely many repetitive units and add one more unit to it, so that it can be represented by the following circuit diagram. Since $N$ is infinitely large, adding one more unit should not change the effective resistance $R_{eff}$ of the whole circuit. In other words, $12R$ and $R_{eff}$ are in parallel in the above sketch and the resistance of the entire circuit $R_{total}$ is still $R_{eff}$. By this argument, we can relate $R_{eff}$ to itself by$$ R_{total} = R_{eff} = R + \frac{1}{\frac{1}{12R} + \frac{1}{R_{eff}}} $$ By solving this quadratic equation for $R_{eff}$ and demanding that $R_{eff}$ be positive, we obtain$$ R_{eff} = 4R = 4\Omega $$ (a numerical check of this self-consistency argument is sketched at the end of this example). Then, by Ohm's law, we find $$ I_{1,1} = \frac{V_0}{4R} = \frac{1}{4} \frac{\textrm{V}}{\Omega} \approx 0.25 \textrm{A} $$ Now, we turn to the more general case when $1<N<\infty$. To solve it, we will set up a system of $N$ equations and $N$ unknowns that we will formulate as a matrix problem. It will then be solved using Python. The unknowns will be the $N$ voltages $V_i$, $i=1,\ldots,N$. To obtain the $N$ equations in $N$ unknowns, we first apply Ohm's law to all resistors in the circuit. This yields$$ I_{i,1} = \frac{V_{i-1} - V_i}{R}, \quad i=1,\ldots,N, $$ for the $N$ resistors of resistance $R$ and$$ I_{i,2} = \frac{V_i}{12R}, \quad i=1,\ldots,N, $$ for the $N$ resistors with resistance $12R$. The next step is to eliminate the currents $I_{i,1}$ and $I_{i,2}$ for $i=1,\ldots,N$ in the equations above. To do so, we turn to the principle of conservation of current, namely that the sum of all currents flowing into a node in a circuit diagram is equal to the sum of all currents flowing out of it. This statement equates to saying that no charges are created or destroyed in a node and is often referred to as Kirchhoff's current law.
For the nodes labelled $V_i$, where $i=1,\ldots,N-1$, we get$$ I_{i,1} = I_{i,2} + I_{i+1,1} $$ For the last node, labelled $V_N$, we get$$ I_{N,1} = I_{N,2} $$ Substituting the earlier expressions for $I_{i,1}$ and $I_{i,2}$ into each of these two equations separately (the upper one first), we find for the first node, $i=1$,$$ \frac{25}{12R} V_1 - \frac{1}{R}V_2 = \frac{V_0}{R} $$ and for the nodes labelled $V_i$, with $i=2,\ldots,N-1$,$$ -\frac{1}{R}V_{i-1} + \frac{25}{12R}V_i - \frac{1}{R} V_{i+1} = 0 $$ and then for the lower equation,$$ -\frac{1}{R}V_{N-1} + \frac{13}{12R}V_N = 0 $$ Counting the number of equations, we see that these three last expressions contain $N$ equations in total. This is exactly the amount we need to determine all $N$ voltages $V_i$, $i=1,\ldots,N$, uniquely. Moreover, these equations can be formulated as a matrix problem $\mathcal{A}\boldsymbol{V}=\boldsymbol{b}$ (note the $13/12R$ entry in the last row, which comes from the last-node equation):$$ \begin{bmatrix} 25/12R & -1/R & 0 & \dots & 0 \\ -1/R & 25/12R & -1/R & \dots & 0 \\ \vdots& \ddots &\ddots& \ddots& \vdots \\ 0 & \dots &-1/R & 25/12R & -1/R \\ 0 & \dots & 0 & -1/R & 13/12R \end{bmatrix} \cdot \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_{N-1} \\ V_N \end{bmatrix} = \begin{bmatrix} V_0/R \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix} $$ Now, finding the unknown voltages amounts to solving this matrix equation $\mathcal{A}\boldsymbol{V}=\boldsymbol{b}$ for the unknown voltage vector $\boldsymbol{V}$. The matrix $\mathcal{A}$ and the vector $\boldsymbol{b}$ are given by the resistances within the circuit and the voltage $V_0$. Subsequently, we can calculate the total current by Ohm's law,$$ I_{1,1} = \frac{V_0 - V_1}{R} $$ In Python, the first thing we need to do is define $V_0$, $R$ and $N$.

R = 1.0   # Resistance [Ohm]
V0 = 1.0  # Applied voltage [V]
N = 10    # Number of repetitive units [dimensionless]

Then, we need to set up the matrix $\mathcal{A}$ and the vector $\boldsymbol{b}$ of the matrix equation. We start by initializing them both with zeros before filling them, and also define two variables, $a=25/12R$ and $c=-1/R$, to simplify the filling.

# We use numpy arrays for increased efficiency
# of matrix operations and functionality.
import numpy as np

A = np.zeros((N,N))  # Matrix of dimension NxN
b = np.zeros(N)      # Vector of length N
a = 25.0/(12*R)      # interior diagonal entry
c = -1.0/R           # off-diagonal entry

Next, we set up $\boldsymbol{b}$ and $\mathcal{A}$ row by row.

# the b-vector is all zeros except for the first entry. Thus,
b[0] = V0/R
# Set up the first row
A[0,0] = a
A[0,1] = c
# Set up the last row; its diagonal entry is 13/(12R), from the last-node equation
A[N-1,N-1] = 13.0/(12*R)
A[N-1,N-2] = c
# Set up all other rows
# NB: on Python 2.7 (or lower) you might want to use 'xrange()'
# instead of 'range()' depending on the size of 'N'.
for row in range(1,N-1):
    A[row,row-1] = c
    A[row,row  ] = a
    A[row,row+1] = c
# You may want to print A, b to see if they were initialized correctly:
# print(A, b)

Then we can solve the system of equations by using the built-in Numerical Python linear algebra solver.

Voltages = np.linalg.solve(A,b)
print("Voltages = ", Voltages)
I11 = (V0 - Voltages[0])/R
print("\nI_11 = ", I11)

For $N = 10$ this yields $I_{1,1} \approx 0.2486$. We see that as $N$ becomes large (in practice already around $N \approx 15$), $I_{1,1}$ approaches the limit we found analytically as $N\to\infty$, that is, $I_{1,1}\to 1/4$.
You should note that, even though the built-in solver we use in this example is implemented to be efficient, it is a general solver which does not take advantage of the fact that $\mathcal{A}$ is a sparse matrix for this problem. If $\mathcal{A}$ becomes really large, this solver would eventually become very slow as it unnecessarily iterates over all the zeros. As an alternative, one can use the built-in functionality of the Scientific Python sparse linear algebra package. However, this module requires $\mathcal{A}$ to be stored in a specific (sparse) manner.

from scipy import sparse
from scipy.sparse.linalg import spsolve

# Since our matrix only has non-zero values on
# the main diagonal, the upper (super-) diagonal and the lower (sub-) diagonal,
# this is in fact all we need to tell Python.

# Create the sparse matrix
sup_diag = np.ones(N)*c
sub_diag = np.ones(N)*c
the_diag = np.ones(N)*a
the_diag[-1] = 13.0/(12*R)  # last-node equation, as in the dense setup

# Offsets: -1 0 1
all_diags = [sub_diag, the_diag, sup_diag]
offsets = np.array([-1, 0, 1])
csc = "csc"  # storage format in which the matrix is kept

# Define the SPARSE matrix
A_sparse = sparse.spdiags(all_diags, offsets, N, N, format=csc)
# print(A_sparse.todense())  # prints the SPARSE matrix in a DENSE (normal NxN) format.

Voltages = spsolve(A_sparse, b)
print("Voltages = ", Voltages)
I11 = (V0 - Voltages[0])/R
print("\nI_11 = ", I11)

The sparse solver returns the same voltages as the dense one, and hence the same $I_{1,1} \approx 0.2486$ for $N = 10$. Now, try to compare the running times of the sparse solver with the standard solver as you increase $N$ to $10, 100, 1000, \ldots$ (sketches of this comparison and of the earlier fixed-point check follow below).
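Two quick follow-ups, as promised above. First, the fixed-point argument for the infinite ladder can be checked by brute force: start from the $N=1$ value $13R$ and repeatedly attach one unit; after each pass the variable holds $R_{total}$ for the next $N$, and the values converge to $4R$ (a sketch with $R = 1\,\Omega$ as before):

R = 1.0
Reff = 13*R               # exact total resistance for N = 1
for k in range(2, 31):    # after this pass, Reff is R_total for N = k units
    Reff = R + 1.0/(1.0/(12*R) + 1.0/Reff)
print(Reff)               # close to 4.0, so I_11 -> V0/(4R) = 0.25 A

The deviation from $4R$ shrinks by roughly a factor $9/16$ per added unit, which is why $I_{1,1}$ is already close to $1/4$ for $N$ around 15. Second, a sketch of the suggested timing comparison (the sizes are assumed and the exact numbers are machine-dependent; the helper simply rebuilds the system from above for a given $N$):

import time
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def build(N, R=1.0, V0=1.0):
    # rebuild the tridiagonal system from above for a given N
    a, c = 25.0/(12*R), -1.0/R
    main = np.ones(N)*a
    main[-1] = 13.0/(12*R)          # last-node equation
    off = np.ones(N-1)*c
    A_sp = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csc")
    b = np.zeros(N)
    b[0] = V0/R
    return A_sp.toarray(), A_sp, b

for N in (10, 100, 1000, 2000):
    A, A_sp, b = build(N)
    t0 = time.perf_counter()
    np.linalg.solve(A, b)
    t1 = time.perf_counter()
    spsolve(A_sp, b)
    t2 = time.perf_counter()
    print(N, "dense: %.2e s" % (t1 - t0), "sparse: %.2e s" % (t2 - t1))

For a tridiagonal system the dense solve costs $O(N^3)$ while the sparse solve is $O(N)$, so the gap widens quickly as $N$ grows.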
Infinite time blow-up of many solutions to a general quasilinear parabolic-elliptic Keller-Segel system Institut für Mathematik, Universität Paderborn, Warburger Str. 100, 33098 Paderborn, Germany $$ \begin{align} & u_t = \nabla \cdot \big((u+1)^{m-1}\nabla u\big) - \nabla \cdot \big(u(u+1)^{\sigma-1}\nabla v\big) \\ & \ 0 = \Delta v - v + u \end{align} $$ in $\Omega \subset \mathbb{R}^N$, $N\ge 3$; the results distinguish the parameter regimes $\sigma \le 0$, $\sigma < m-\frac{N-2}{N}$ and $\sigma > m-\frac{N-2}{N}$. Mathematics Subject Classification: Primary: 92C17, 35Q92, 35A01, 35K55. Citation: Johannes Lankeit. Infinite time blow-up of many solutions to a general quasilinear parabolic-elliptic Keller-Segel system. Discrete & Continuous Dynamical Systems - S. doi: 10.3934/dcdss.2020013
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, $|y| \le 0.8$, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
D-meson nuclear modification factor and elliptic flow measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb-Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb-Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ... Net-baryon fluctuations measured with ALICE at the CERN LHC (Elsevier, 2017-11) First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ...
So I've been at this for about 3-4 hours now. It is a homework assignment (well, part of a question which I've already completed). We did not learn this in class. All work is shown below. An atom in an excited state of $4.9\ \mathrm{eV}$ emits a photon and ends up in the ground state. The lifetime of the excited state is $1.2 \times 10^{-13}\ \mathrm{s}$. (b) What is the spectral line width (in wavelength) of the photon? So let's look at what I have done so far. I have done the following: $$\Delta E \, \Delta t = \frac{\hbar}{2},$$ but $$E = h f,$$ so $$\Delta f = \frac{1}{4\pi \Delta t}.$$ But if I take $\Delta f$ and convert it into wavelength using $\lambda f = c$, it gives me the wrong answer. I've tried MANY variations of the above formulas. The correct answer is $0.142\ \mathrm{nm}$. Can anyone give me a hint?
Answer The ladder goes up the wall a distance of 9.35 meters above the top of the fire truck. Work Step by Step We can convert the angle to decimal degrees: $\theta = 43^{\circ}50' = (43+\frac{50}{60})^{\circ} = 43.83^{\circ}$ We can find the height $d$: $\frac{d}{h} = \sin\theta$ $d = h\,\sin\theta$ $d = (13.5~m)\,\sin(43.83^{\circ})$ $d = 9.35~m$ The ladder goes up the wall a distance of 9.35 meters above the top of the fire truck.
Many inapproximability factors are $2^{n^\epsilon}$. I don't know why the base is 2 instead of 3, 4, etc. To be precise, given a problem, theorem 1 says: it is NP-hard to approximate the problem to within $2^{n^\epsilon}$ for any $0 \leq \epsilon < 1$; theorem 2 says: it is NP-hard to approximate the problem to within $3^{n^\delta}$ for any $0 \leq \delta < 1$. Can we say theorem 2 is tighter? I know that given $\delta < 1$, we can find $\epsilon < 1$ such that $3^{n^\delta} \in \mathcal O(2^{n^\epsilon})$. However, the definition of the approximation factor is not asymptotic. We would need $3^{n^\delta} \leq 2^{n^\epsilon}$ for all $n$ if we want to say the two theorems are actually the same.
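To make the asymptotic containment explicit (a worked step, not a resolution of the question): $$3^{n^\delta} = 2^{(\log_2 3)\, n^\delta} \leq 2^{n^\epsilon} \iff (\log_2 3)\, n^\delta \leq n^\epsilon \iff n^{\epsilon - \delta} \geq \log_2 3,$$ so for any $\epsilon > \delta$ the inequality holds once $n \geq (\log_2 3)^{1/(\epsilon-\delta)}$, i.e. for all sufficiently large $n$ but not necessarily for every $n$, which is exactly the gap pointed out above.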
I'm working on the following question. I'm having trouble with the solution presented in the textbook, specifically the weak convergence part. Let $f \in L^p(\mathbb{R}^N)$, $1<p<\infty$, and for a real $\alpha>0$ define $f_n(x) = n^\alpha f(nx)$ for $n = 1,2,3,\dots$. Does $\{f_n\}$ converge strongly? Does it converge weakly? It can be shown that for $\alpha < N/p$ the sequence converges strongly (to zero, since $\|f_n\|_p = n^{\alpha - N/p}\|f\|_p$); otherwise it does not converge strongly. Clearly, for $\alpha < N/p$ the sequence also converges weakly (I got this far). Now consider $\alpha = N/p$. Suppose $f$ vanishes for $|x| >K$... (some analysis to conclude weak convergence). For arbitrary $f$, given $\epsilon > 0$ and $\phi \in L^q(\mathbb{R}^N)$, let $g$ be a function vanishing outside a compact set such that $\|f-g\|_p \leq \frac{\epsilon}{2\|\phi\|_q}$. Why do they consider compactly supported $f$ first? How do we get that estimate between $f$ and the compactly supported $g$?
Finite element simulation of a hyperelastic material with nonlinear Kirchhoff-St. Venant constitutive law. The material is stretched upwards for one second and then released, inducing the free vibration shown. Roundoff errors eventually build up enough to perturb the material laterally. The mesh was generated with gmsh. Running ./transient.exe sam with the code at https://github.com/smsolivier/FEM will produce the data for the above video. The visualization was done with LLNL's Visit. P2P1 triangles were used to discretize the Stokes Equations. Lid Driven Cavity The velocity magnitude and vectors for the Lid Driven Cavity problem. The top wall has a fixed velocity of 1 and the sides and bottom have no slip boundary conditions. Flow Over a Cylinder The velocity magnitude, vectors, and streamlines for Stokes Flow across a cylinder. The code can be found here. Building the stokes.cpp executable and running: ./stokes.exe cavity and ./stokes.exe flow will generate the data for the plots shown above. My arbitrary order FEM code (found here) was used to solve the Wave Equation with dissipation: $$ \frac{\partial^2 u}{\partial t^2} + a \frac{\partial u}{\partial t} = c \nabla^2 u \,, $$ where \(u\) is the \(z\) displacement of the wave. \( a\) controls the dissipation of the wave over time and \(c\) controls the wave propagation speed. The second derivative in time was handled with a finite difference scheme: $$ \frac{\partial^2 u}{\partial t^2} \approx \frac{u_{i+1} – 2u_i + u_{i-1}}{\Delta t^2} \,. $$ Standard time integration procedures can now be applied. In the video below, the second order Crank-Nicolson method and a fourth order FEM discretization were used for time and space, respectively. The video shows the relaxation of the initial sinusoidal perturbation over time in an L-shaped domain. The Mandelbrot set is the set of complex numbers, \(c\), such that the series $$ z_{n+1} = z_n^2 + c $$ (starting from \(z_0 = 0\)) does not diverge as \(n\rightarrow \infty\). These values of \(c\) can then be plotted in the complex plane to produce some interesting images. As each pixel in the image is an independent calculation, this is a good problem for getting familiar with parallel computing (a minimal serial sketch follows below). The following images were generated with the code at https://github.com/smsolivier/Mandelbrot.
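For reference, a minimal serial version of the escape-time computation (the grid bounds and iteration cap are arbitrary choices; the linked repository presumably parallelizes the pixel loop):

import numpy as np

def mandelbrot(width=800, height=600, max_iter=100):
    # escape-time iteration z -> z^2 + c over a grid of c in the complex plane
    x = np.linspace(-2.5, 1.0, width)
    y = np.linspace(-1.25, 1.25, height)
    c = x[None, :] + 1j*y[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0      # points that have not escaped yet
        z[mask] = z[mask]**2 + c[mask]
        counts += mask
    return counts                    # iteration counts; plot these to see the set

counts = mandelbrot()

Each entry of counts depends only on its own value of c, which is what makes the per-pixel computation embarrassingly parallel.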
This is a good question, but at first blush it is hard to answer. This is because there is always an ambiguity in where you put the meaningful definitions - you can put it in the force constant, or you can put it in the units for the magnetic charges. For magnetic monopoles, the first place one naturally turns to is to the magnetic Gauss law, which would be modified to the form $\nabla\cdot\mathbf B\propto \rho_\mathrm{m}$ ... except that you don't really have a way to fix that proportionality constant. By symmetry with respect to the electric Gauss law ($\nabla\cdot\mathbf E=\rho_\mathrm e/\varepsilon_0$) you'd hope that would be $\mu_0$, but that's some hardy guessing. What you do want, very much, is for magnetic monopoles to have the same relationship to magnetic dipoles as electric monopoles have with electric dipoles - and magnetic dipoles we know how to handle. If you take this as your basis, then, you can postulate* a magnetic field of the form $$\mathbf B=\frac{k_1q_\mathrm{m}}{r^2}\hat{\mathbf r}, \tag 1$$with some as-yet-to-be-determined constant $k_1$, for a magnetic monopole of magnetic charge $q_\mathrm{m}$. The key physical input is requiring a pair of opposite magnetic charges a distance $d$ apart to have a magnetic dipole moment of $$m=q_\mathrm{m}d,$$as in the dipole case, and this fixes the units of the magnetic charge: you know that this magnetic dipole (made from two opposite magnetic monopoles) must be exactly equivalent to a standard magnetic dipole: a current $I$ in a loop of area $A$, with magnetic dipole moment $m=IA$. This requires you to have $[q_\mathrm{m}]=\mathrm{A\,m}$. From here you can get the dimensionality of the magnetic field constant:$$[k_1]=[Br^2/q_\mathrm{m}]=[BL/I]=\left[\frac{\mu_0I}{L}\frac{L}{I}\right]=[\mu_0]=\mathrm{N/A^2}.$$However, it doesn't tell you the numeric value of this constant, for which you need to go back to your initial physical input - the equivalence of opposite pairs of magnetic monopoles with current loops. In particular, if each magnetic monopole has a magnetic field as in $(1)$, then it follows that a pair of them, with opposite magnetic charges and separated by a distance $d$ along an axis $\hat{\mathbf u}$, must have the magnetic field$$\mathbf B=\frac{k_1q_\mathrm{m}d}{r^3}\left(3(\hat{\mathbf u}\cdot\hat{\mathbf r})\hat{\mathbf r}-\hat{\mathbf u}\right).$$This contrasts with the magnetic field of a current-loop magnetic dipole with magnetic dipole moment $\mathbf m$$$\mathbf B=\frac{\mu_0/4\pi}{r^3}\left(3({\mathbf m}\cdot\hat{\mathbf r})\hat{\mathbf r}-{\mathbf m}\right),$$which needs to be equivalent under the identification $\mathbf m=q_\mathrm{m} d\:\hat{\mathbf u}$, and this completely fixes the magnetic-field constant at$$k_1=\frac{\mu_0}{4\pi},$$i.e.$$\mathbf B=\frac{\mu_0}{4\pi}\frac{q_\mathrm{m}}{r^2}\hat{\mathbf r}$$for a point magnetic monopole of magnetic charge $q_\mathrm{m}$. Similarly, this fixes the magnetic Gauss law to the naive$$\nabla\cdot\mathbf B=\mu_0\rho_\mathrm m$$(where $\rho_\mathrm m$ is of course the volumetric density of magnetic charge). OK, so that's a lot of work - and we're nowhere near the force that you asked about. The reason for this is that we can only get out of the formalism what we put in, and we have only specified how magnetic monopoles should produce magnetic fields, but not how they should react to them. 
To get that from the formalism, we need to give it more information, and here again we are constrained in that an opposed-point-monopoles magnetic dipole needs to feel exactly the same force that a current-loop magnetic dipole of the same magnetic moment does. Similarly to the above, we can postulate* that an external magnetic field $\mathbf B$ will produce a force $$\mathbf F=k_2q_\mathrm m\mathbf B$$ on a point magnetic monopole of magnetic charge $q_\mathrm m$, and see what happens. Given this postulate, the same algebra that worked for electric dipoles implies that if you have two magnetic monopoles of opposite magnetic charges $q_\mathrm m$ a distance $d$ apart along the unit vector $\hat{\mathbf u}$, then the torque on them exerted by an external uniform magnetic field $\mathbf B$ will be $$\boldsymbol{\tau} = (k_2q_\mathrm md\:\hat{\mathbf u})\times\mathbf B,$$ which contrasts with $\boldsymbol{\tau} = \mathbf m\times\mathbf B$ for a usual current-loop magnetic dipole of magnetic dipole moment $\mathbf m$. This equivalence then forces our second constant to be $$k_2=1.$$ Similarly, you can check that this force will give identical expressions for the force on a magnetic dipole in a non-uniform external magnetic field $\mathbf B(\mathbf r)$, regardless of whether it is made up from opposing magnetic monopoles or from a small current loop. With this, then, we can just connect the two main laws - how magnetic monopoles produce magnetic fields and how they react to them - to get the answer we're after. If you have two magnetic monopoles of magnetic charges $q_{\mathrm m,1}$ and $q_{\mathrm m,2}$, separated by a distance $r$ along the separation vector $\hat{\mathbf{r}}_{1\to2}$ pointing from $1$ to $2$, then the magnetic force exerted by magnetic monopole $1$ on magnetic monopole $2$ is $$\mathbf F=\frac{\mu_0}{4\pi} \frac{q_{\mathrm{m},1}q_{\mathrm{m},2}}{r^2}\hat{\mathbf{r}}_{1\to2}.$$ As expected, like poles repel each other (e.g. two point north poles repel). Here the magnetic charges are measured in ampere meters, and the magnetic charges can be calibrated by observing the interaction with a moving electric charge: if you have a point magnetic monopole of magnetic charge $q_\mathrm{m}$ and an electric charge $q_\mathrm{e}$ separated by a distance $r$ along the unit separation vector $\hat{\mathbf{r}}_\mathrm{m\to e}$, with the electric charge moving at velocity $\mathbf{v}_\mathrm{e}$, then the electric charge will be subject to a force $$\mathbf F=\frac{\mu_0}{4\pi} \frac{q_{\mathrm{m}}q_\mathrm{e}}{r^2} \mathbf{v}_\mathrm{e} \times \hat{\mathbf{r}}_\mathrm{m\to e}.$$ This gives you an 'anchor' for the unit of magnetic charge - it's not free-floating, and it's completely tied to the unit of electric charge. * Note that these are also additional impositions on the form of the produced magnetic field and the response to external fields. However, both postulates are reasonable things to assume: if the relations were substantially different, then we wouldn't want to call those objects magnetic monopoles. Moreover, it should be essentially possible to get to those forms using very little more than symmetry considerations.
Chandra, Poonam and Ray, Alak and Bhatnagar, Sanjay (2004) Synchrotron aging and the radio spectrum of SN 1993J. [Preprint] Abstract We combine the GMRT low frequency radio observations of SN 1993J with the VLA high frequency radio data to get a near simultaneous spectrum around day 3200 since explosion. The low frequency measurements of the supernova determine the turnover frequency and flux scale of the composite spectrum and help reveal a steepening in the spectral index, $\Delta \alpha \sim 0.6$, in the optically thin part of the spectrum. This is the first observational evidence of a break in the radio spectrum of a young supernova. We associate this break with the phenomenon of synchrotron aging of radiating electrons. From the break in the spectrum we calculate the magnetic field in the shocked region independent of the equipartition assumption between energy density of relativistic particles and magnetic energy density. We determine the ratio of these two energy densities and find that this ratio is in the range: $8\times 10^{-6}-5\times 10^{-4}$. We also predict the nature of the evolution of the synchrotron break frequency with time, with competing effects due to diffusive Fermi acceleration and adiabatic expansion of the radiative electron plasma. Item Type: Preprint Additional Information: Astrophys. J. 604 (2004) L97-L100 Department/Centre: Division of Physical & Mathematical Sciences > Joint Astronomy Programme Depositing User: Ramnishath A Date Deposited: 31 Jul 2004 Last Modified: 19 Sep 2010 04:13 URI: http://eprints.iisc.ac.in/id/eprint/750
In evaluating the vacuum structure of quantum field theories you need to find the minima of the effective potential, including perturbative and nonperturbative corrections where possible. In supersymmetric theories, you often see the claim that the Kähler potential is the suitable quantity of interest (as the superpotential does not receive quantum corrections). For simplicity, let's consider just the case of a single chiral superfield: $\Phi(x,\theta)=\phi(x)+\theta^\alpha\psi_\alpha(x) + \theta^2 f(x)$ and its complex conjugate. The low-energy action functional that includes the Kähler and superpotential is $$ S[\bar\Phi,\Phi] = \int\!\!\!\mathrm{d}^8z\;K(\bar\Phi,\Phi) + \int\!\!\!\mathrm{d}^6z\;W(\Phi) + \int\!\!\!\mathrm{d}^6\bar{z}\;\bar{W}(\bar\Phi) $$ Keeping only the scalar fields and no spacetime derivatives, the components are $$\begin{align} S[\bar\Phi,\Phi]\big|_{\text{eff.pot.}} = &\int\!\!\!\mathrm{d}^4x\Big(\bar{f}f\,\frac{\partial^2K(\bar\phi,\phi)}{\partial\phi\partial{\bar\phi}} + f\,W'(\phi) + \bar{f}\, \bar{W}'(\bar\phi)\Big) \\ \xrightarrow{f\to f(\phi)} -\!&\int\!\!\!\mathrm{d}^4x\Big(\frac{\partial^2K(\bar\phi,\phi)}{\partial\phi\partial{\bar\phi}}\Big)^{-1}|W'(\phi)|^2 =: -\!\int\!\!\!\mathrm{d}^4x \ V(\bar\phi,\phi) \end{align}$$ where in the second line we solve the (simple) equations of motion for the auxiliary field. The vacua are then the minima of the effective potential $V(\bar\phi,\phi)$. However, if you read the old (up to mid 80s) literature on supersymmetry they calculate the effective potential using all of the scalars in the theory, i.e. the Coleman-Weinberg type effective potential using the background/external fields $\Phi(x,\theta)=\phi(x) + \theta^2 f(x)$. This leads to an effective potential $U(\bar\phi,\phi,\bar{f},f)$ which is more than quadratic in the auxiliary fields, so clearly not equivalent to calculating just the Kähler potential. The equivalent superfield object is the Kähler potential + auxiliary fields' potential, as defined in "Supersymmetric effective potential: Superfield approach". It can be written as $$ S[\bar\Phi,\Phi] = \int\!\!\!\mathrm{d}^8z\;\big(K(\bar\Phi,\Phi) + F(\bar\Phi,\Phi,D^2\Phi,\bar{D}^2\bar{\Phi})\big) + \int\!\!\!\mathrm{d}^6z\;W(\Phi) + \int\!\!\!\mathrm{d}^6\bar{z}\;\bar{W}(\bar\Phi) $$ where $F(\bar\Phi,\Phi,D^2\Phi,\bar{D}^2\bar{\Phi})$ is at least cubic in $D^2\Phi,\bar{D}^2\bar{\Phi}$. The projection to low-energy scalar components of the above gives the effective potential $U(\bar\phi,\phi,\bar{f},f)$ that is in general non-polynomial in the auxiliary fields and so clearly harder to calculate and work with than the quadratic result given above. So my question is: when did this shift to calculating only the Kähler potential happen, and is there a good reason you can ignore the corrections of higher order in the auxiliary fields?
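As a toy illustration of the displayed formula for $V$ (an assumed example, not taken from the question): for the canonical Kähler potential $K=\bar\Phi\Phi$ and a Wess-Zumino superpotential $W=\frac{m}{2}\Phi^2+\frac{g}{3}\Phi^3$, the potential $V=(K_{\phi\bar\phi})^{-1}|W'(\phi)|^2$ reduces to $|m\phi+g\phi^2|^2$. A short symbolic check:

import sympy as sp

phi, phibar = sp.symbols('phi phibar')
m, g = sp.symbols('m g', positive=True)

K = phibar*phi                  # canonical Kaehler potential
W = m*phi**2/2 + g*phi**3/3     # Wess-Zumino superpotential (assumed example)
Kpp = sp.diff(K, phi, phibar)   # Kaehler metric; equals 1 here
Wp = sp.diff(W, phi)
Wpbar = Wp.subs(phi, phibar)    # conjugate of W'(phi) for real m, g (phibar treated as independent)

V = sp.expand(Wp*Wpbar/Kpp)
print(V)  # equals (m*phi + g*phi**2)*(m*phibar + g*phibar**2), i.e. |m*phi + g*phi**2|^2

The point of the toy example is that corrections to $K$ enter $V$ only through the inverse Kähler metric, which is exactly why the corrections that are higher order in the auxiliary fields are the ones in question.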
Is the 2-D elevation wave spectrum (as a function of wavenumber and direction, with units of $m^4$) always positive? If so, why would that be the case? Yes, the wave variance or energy spectrum, directional or non-directional, is positive-definite, as @aretxabaleta said in the comment. In linear water-wave theory, the surface elevation is described as a linear superposition of sinusoids: $$ \eta(t) = \sum_{i=1}^{N}a_i \sin(f_i t + \phi_i) $$ where $a_i$, $f_i$ and $\phi_i$ are the amplitude, frequency and phase, respectively, of each wave component $i$. The most commonly used wave spectrum is the wave variance spectrum. Wave variance is: $$ \langle\eta^2\rangle = \dfrac{1}{2}\sum_{i=1}^{N}a_i^2 = \sigma^2 $$ and the wave variance spectrum $F(f)$ is defined such that: $$ F(f)\Delta{f}=\dfrac{a_i^2}{2} $$ In the limit of $N \rightarrow \infty$ (continuous spectrum), the following holds: $$ \int_{0}^{\infty}F(f)df = \sigma^2 $$ Being quadratic, both the wave variance (the spectrum integral) and the individual discrete spectrum components are positive-definite. Note that so far we implied a non-directional frequency spectrum, i.e. a spectrum defined in frequency space. It can also be defined in wavenumber $k$ space, and the following holds: $$ \int_{0}^{\infty}F(k)dk = \int_{0}^{\infty}F(f)df = \sigma^2 $$ $$ F(k)\Delta{k} = F(f)\Delta{f} $$ $$ F(k) = F(f)c_g $$ where $c_g$ is the group velocity of an individual component. The non-directional spectrum is simply an integral of the directional spectrum over all directions: $$ \int_{0}^{\infty}F(k)dk = \int_{0}^{2\pi}\int_{0}^{\infty}F(k,\theta)dkd\theta $$ Be careful about units here. All spectrum integrals must come out in $m^2$. Thus, $F(k)$ has units of $m^3$ and $F(k,\theta)$ has units of $m^3$ $rad^{-1}$. If you are considering a polar (spectral bins scaling with $k\theta$ instead of $\theta$) directional wavenumber spectrum such that: $$ \int_{0}^{2\pi}\int_{0}^{\infty}F(k,\theta)k\ dk\ d\theta = \sigma^2 $$ then $F(k,\theta)$ has units of $m^4$ $rad^{-1}$.
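A quick numerical sanity check of the variance identity above; a minimal sketch with arbitrary illustrative amplitudes, (angular) frequencies and phases:

```python
import numpy as np

# Arbitrary illustrative wave components
a   = np.array([1.0, 0.5, 0.25])   # amplitudes [m]
w   = np.array([0.3, 0.7, 1.9])    # angular frequencies [rad/s]
phi = np.array([0.3, 1.7, 2.9])    # phases [rad]

# Long record so that time averages converge
t = np.linspace(0.0, 20000.0, 2_000_001)
eta = sum(ai * np.sin(wi * t + pi_) for ai, wi, pi_ in zip(a, w, phi))

print(eta.var())         # sample variance of the elevation record
print((a**2 / 2).sum())  # sum of a_i^2 / 2 -- should agree closely
```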
EDIT: I notice that the link is hidden, but this post is made with reference to THIS PAPER I'm trying to solve quite an old problem (once again): to find the distance between a point (in 3d space) and an ellipse, or actually the furthest point on said ellipse (but the algorithm is similar, so...). The first step is obviously a change of coordinate system so that I have the projection of the point on the plane of the ellipse, and the center of the ellipse at the origin. Once that has happened, the problem is transformed from a 3d problem to a 2d problem, and I can follow a paper on it. Now I'm having a lot of trouble understanding the paper, especially why the paper takes its long-winded approach. Then the ellipse is defined by: $$\left(\frac{x_0}{e_0}\right)^2 + \left(\frac{x_1}{e_1}\right)^2 = 1$$ And the point: $$\mathbf{Y} = (y_0, y_1)$$ Now (following also the obvious steps given at page 3 of the paper, equations 4 & 5) I can parametrize the ellipse as: $$\mathbf{X}(\theta) = (e_0 \cos(\theta), e_1 \sin(\theta)) \qquad \theta \in [0,2\pi)$$ The (squared) distance formula is obvious: $$F(\theta) = |\mathbf{X}(\theta) - \mathbf{Y}|^2$$ Now if the distance is maximized/minimized, so is the squared distance. And furthermore, the derivative is zero at that point (the logical step to equation 5): $$F'(\theta) = 2(\mathbf{X}(\theta) - \mathbf{Y}) \cdot \mathbf{X}'(\theta) = 0$$ Now the paper deviates from what I would take as the next steps. The paper goes to great lengths stating that this means the distance vector must be perpendicular to the tangent of the ellipse (logical), and uses this information with the general equation of the ellipse to find the coordinates $(x_0, x_1)$ of the closest point. However, to do so, the paper still has to take a numerical approach in the final step. Now looking at the above derivative ($F'(\theta)$), there is only a single unknown and a single equation. There are multiple roots, corresponding to (local) maxima and minima. But a simple analysis easily shows in which of the 4 quadrants one has to look, and in that quadrant only a single minimum will be found. So writing out the derivative: $$\begin{pmatrix}e_0 \cos(\theta) - y_0 \\ e_1 \sin(\theta) - y_1\end{pmatrix} \cdot \begin{pmatrix}- e_0 \sin(\theta) \\ e_1 \cos(\theta)\end{pmatrix} = 0$$ $$(e_0 \cos(\theta) - y_0) (- e_0 \sin(\theta)) + (e_1 \sin(\theta) - y_1) ( e_1 \cos(\theta) ) = 0$$ $$-e_0^2 \sin(\theta) \cos(\theta) + y_0 e_0 \sin(\theta) + e_1^2 \sin(\theta)\cos(\theta) - y_1 e_1 \cos(\theta) = 0$$ $$-\frac{e_0^2}{2} \sin(2 \theta) + y_0 e_0 \sin(\theta) + \frac{e_1^2}{2} \sin(2 \theta) - y_1 e_1 \cos(\theta) = 0$$ Which can be solved numerically in the quadrant where it lies. There are only 4 edge cases. If $y_0 = y_1 = 0$, the point is the center of the ellipse and the solution is trivial. If $y_1 = 0$, the point is on the short axis (by definition), and the minimum is simply at either $\theta = 0$ or $\theta = \pi$ (depending on the sign). If $y_0 = 0$, the point is on the long axis. If $\mathbf{Y}$ is before the focal point of the ellipse there are potentially two equal solutions, and either of those is correct. If $\mathbf{Y}$ is past the focal point there is only a single solution at either $\theta = \pm \frac{\pi}{2}$. Now this is a completely different approach than the one the paper I linked uses. Is the above method correct? Did I miss something? Why does the paper use a completely different approach? Is it faster to compute?
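A minimal sketch of the root-finding step described above (my function names; instead of pre-selecting the quadrant, this version scans for sign changes of $F'(\theta)$ and lets the caller pick the minimizing root):

```python
import numpy as np
from scipy.optimize import brentq

def stationary_thetas(e0, e1, y0, y1):
    """Roots of F'(theta) = 0 for the ellipse (e0*cos t, e1*sin t) and point (y0, y1)."""
    def dF(t):
        # (X(t) - Y) . X'(t), written out as in the post
        return (-(e0 * np.cos(t) - y0) * e0 * np.sin(t)
                + (e1 * np.sin(t) - y1) * e1 * np.cos(t))

    ts = np.linspace(0.0, 2 * np.pi, 1000)
    vals = dF(ts)
    # Bracket each sign change, then refine with Brent's method
    return [brentq(dF, ts[i], ts[i + 1])
            for i in range(len(ts) - 1)
            if vals[i] * vals[i + 1] < 0]

# Example usage: pick the root with the smallest distance (illustrative values)
e0, e1, y0, y1 = 2.0, 1.0, 3.0, 1.5
dist = lambda t: np.hypot(e0 * np.cos(t) - y0, e1 * np.sin(t) - y1)
roots = stationary_thetas(e0, e1, y0, y1)
print(min(roots, key=dist), min(map(dist, roots)))
```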
Negate and simplify the quantified statement: $$\forall_x[p(x) \to \neg q(x)] $$ My answer: $ \neg\forall_x[p(x) \to \neg q(x)] \tag 1$ $ \exists_x\neg [p(x) \to \neg q(x)] \tag 2$ $ \exists _x[\neg p(x)\leftrightarrow \neg(\neg q(x))] \tag 3$ $ \exists _x[\neg p(x) \leftrightarrow q(x)] \tag 4$ My answer is not correct. I believe I have made a mistake (I am unsure how to deal with the implication symbol), and hence clarification would be much appreciated.
This question relates to a discussion on another message board. Euclid's proof of the infinitude of primes is an indirect proof (a.k.a. proof by contradiction, reductio ad absurdum, modus tollens). My understanding is that Intuitionists reject such proofs because they rely on the Law of the Excluded Middle, which they don't accept. Does there exist a direct and constructive proof of the infinitude of primes? Due to a widely propagated historical error, it is commonly misbelieved that Euclid's proof was by contradiction. This is false. Euclid's proof was in fact presented in the obvious constructive fashion. See Hardy and Woodgold's Intelligencer article[1] for a detailed analysis of the history (based in part on many sci.math discussions [2]). A variant that deserves to be much better known is the following folklore one-line proof that there are infinitely many prime integers $\rm\qquad\qquad N\ (N+1)\ $ has a larger set of prime factors than does $\rm\:N.$ because $\rm\,N+1>1\,$ is coprime to $\,\rm N\,$ so it has a prime factor which does not divide $\rm\,N.$ Curiously, Ribenboim believes this proof to be of recent vintage, attributing it to Filip Saidak. But I recall seeing variants published long ago. Does anyone know its history? For even further variety, here is a version of Euclid's proof reformulated into infinite descent form. If there are only finitely many primes, then given any prime $\rm\:p\:$ there exists a smaller prime, namely the least factor $> 1$ of $\rm\ 1 + $ product of all primes $\rm \ge p\:.$ It deserves to be much better known that Euclid's constructive proof generalizes very widely - to any fewunit ring, i.e. any ring having fewer units than elements - see my proof here. $ $ The key idea is that Euclid's construction of a new prime generalizes from elements to ideals, i.e. given some maximal ideals $\rm\ P_1,\ldots,P_k\:,\: $ a simple pigeonhole argument employing $\rm CRT$ deduces that $\rm\ 1+P_1\:\cdots\:P_k\ $ contains a nonunit, which lies in some maximal ideal which, by construction, is comaximal (so distinct) from the initial max ideals $\rm\:P_1,\ldots,P_k\:.$ [2] Note: Although the article [1] makes no mention of such, it appears to have strong roots in frequent sci.math discussions - in which the first author briefly participated. A Google groups search in the usenet newsgroup sci.math for "Euclid plutonium" will turn up many long discussions on various misinterpretations of Euclid's proof. Your question is predicated on a common misconception. In fact Euclid's proof is thoroughly constructive: it gives an algorithm which, upon being given as input any finite set of prime numbers, outputs a prime number which is not in the set. Added: For a bit more on mathematical issues related to the above algorithm, see Problem 6 here. (This is one of the more interesting problems on the first problem set of an advanced undergraduate number theory course that I teach from time to time.) Euclid's theorem is intuitionistic. Given any finite set S of primes, their product plus one is not divisible by any of the primes and hence is divisible by some prime not in S. This gives a concrete upper bound on the n-th prime as well -- though of course it's astronomical. Here's a simple direct proof that not only shows there is an infinite number of primes, but gives a lower bound on how many primes are less than $N$. The idea comes from Erdős. Let $\pi(N)$ be the number of primes $\leq N$ for a positive integer N. 
Any positive integer $\leq N$ can be written in the form $$B^2{p_1}^{e_1} {p_2}^{e_2} ... {p_{\pi(N)}}^{e_{\pi(N)}}$$ where the $e_i$'s are 0 or 1 and $B\leq \sqrt N$. There are at most $2^{\pi(N)}\sqrt N$ possible numbers of this form, and so we have $$2^{\pi(N)}\sqrt{N}\geq N$$ which gives us $$\pi(N)\geq {\log{N}\over{2 \log 2}}$$ which is clearly unbounded. This idea is used in Erdős' nice proof that $\sum {1\over p}$ diverges, although that is a proof by contradiction and so would not satisfy the intuitionist school. It goes like this. Assume the sum converges. Then there is some $k$ such that ${\sum_{i \geq k+1}} {1\over p_i} < 1/2$. Call the primes $\leq p_k$ the "small" primes, and the primes $\geq p_{k+1}$ the "big" primes. We can divide the positive integers $\leq N$, for arbitrary $N$, into two groups: those that are divisible by a "big" prime, and those that are not. An upper limit on the number divisible by a "big" prime is $N/2$ (this comes from the assumption that the sum of the reciprocals of the big primes is $< 1/2$), and an upper limit on the number not divisible by a "big" prime is $2^k\sqrt N$. Since those two categories include all the positive integers $\leq N$, we must have $N/2+2^k \sqrt N \geq N$. However, that is not true for sufficiently large $N$, and so we have our contradiction, and the series diverges. Although this is an old post, it is instructive to cite the following study on constructive mathematics, which clears up some common misconceptions held by people trained exclusively in classical mathematics: for example, the difference between proof by negation (valid in constructive mathematics) and proof by contradiction (not valid in constructive mathematics in general). Abstract. On the odd day, a mathematician might wonder what constructive mathematics is all about. They may have heard arguments in favor of constructivism but are not at all convinced by them, and in any case they may care little about philosophy. A typical introductory text about constructivism spends a great deal of time explaining the principles and contains only trivial mathematics, while advanced constructive texts are impenetrable, like all unfamiliar mathematics. How then can a mathematician find out what constructive mathematics feels like? What new and relevant ideas does constructive mathematics have to offer, if any? I shall attempt to answer these questions. Therein you can find an explanation of why Euclid's proof is valid in constructive mathematics, as it is a proof by negation.
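A minimal sketch of the constructive content of Euclid's argument discussed in this thread: given any finite set of primes, produce a prime outside it. Trial division stands in for "the least factor $> 1$":

```python
def least_factor(n):
    """Smallest factor > 1 of n (necessarily prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def new_prime(primes):
    """Given a finite set of primes, return a prime not in it."""
    n = 1
    for p in primes:
        n *= p
    # n + 1 > 1 is coprime to every p in the set, so its least
    # factor is a prime outside the set
    return least_factor(n + 1)

# Iterating the construction enumerates infinitely many primes:
S = set()
for _ in range(8):
    S.add(new_prime(S))
print(sorted(S))   # [2, 3, 5, 7, 13, 43, 53, 6221671]
```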
Hi, can someone provide me some self-study reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown. Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters one needs to consider to simulate them. I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) in 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams. @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network on some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs. Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes. orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others. Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction, superimposed on a spacetime that is frame-dragged in the anticlockwise direction, will result in a spacetime with no frame drag?
(one possible physical scenario where I can envision this occurring is when two massive rotating objects with opposite angular velocities are on course to merge) Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. My knowledge of quantum mechanics is still poor. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment, for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie . But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying. My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of the code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group at my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online (for free)?
Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks. @Kaumudi.H Hacky way: the 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the university's server - which means remotely running another environment - I found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
I am looking for a version of Suslin's Stability Theorem for Chevalley groups. The version of the theorem for $G=SL_n({\mathbb Z}[x_1, \dots , x_m])$ states that if $n\ge m+2$, then the elementary matrices generate $G$. I think by embedding many copies of $SL_n$, one can prove a similar statement, but probably with a bad bound, for all Chevalley groups over ${\mathbb Z}[x_1, \dots , x_m]$. I was wondering if a sharp bound has already been worked out in the literature. I am somewhat puzzled by your version of the "Suslin stability theorem". What you are referring to is a combination of the usual stabilization for $SK_1$ and an estimate for the stable rank of integer polynomials. Suslin's theorem states something much stronger; in particular, Corollary 6.6 in his paper On the structure of the special linear group over polynomial rings is as follows (note that it doesn't depend on the number of variables): Let $A$ be a regular ring such that $SK_1(A)=0$ (for example, the ring of integers in an algebraic number field). Then $SL_r(A[x_1,\ldots,x_n])$ is generated by elementary matrices for $r\geqslant\max(3,\dim A+2)$. Corollary 7.10 extends this result to rings of the form $A[x_1^\pm,\ldots,x_k^\pm,x_{k+1},\ldots,x_n]$ for $A$ regular. For other Chevalley groups the situation is complicated. One has the stability theorems for $K_1(\Phi)$ in terms of stable rank (or its ramifications such as absolute stable rank or $\Lambda$-stable rank), but they give pretty bad bounds for polynomial rings. There is, however, the following version of Suslin's theorem for the symplectic group in the paper On symplectic groups over polynomial rings by F. Grunewald, J. Mennicke and L. Vaserstein: Let $A$ be a locally principal ring. For an integer $m\geqslant0$ put $R=A[x_1,\ldots,x_m]$. Then $Sp_{2n}(R)=Sp_{2n}(A)\cdot Ep_{2n}(R)$ for any $n\geqslant2$. By a locally principal ring they mean a commutative ring such that its localization at any maximal ideal is a principal ideal ring. For a Euclidean ring $A$ this gives $K_1(\mathsf{C}_\ell,R)=0$. As a byproduct they also prove a stronger version of Suslin's theorem for $SL$ and a locally principal ring. They also have a version for Laurent polynomial rings and claim that by using stability theorems for $K_1$ as in M. Stein's papers one can prove the same results for classical simple algebraic groups of relative rank $\geqslant2$, but the latter has never been written up in full detail. UPDATE 14.02.2019: The proof for all simply-connected Chevalley groups is given by A. Stavrova in Chevalley groups of polynomial rings over Dedekind domains. Namely, she proves the following theorem: Let $R$ be a locally principal ring, and let $G$ be a Chevalley–Demazure group scheme of isotropic rank $\geq2$. Then $G(R[x_1,\ldots,x_n])=G(R)E(R[x_1,\ldots,x_n])$ for any $n\geq1$. If $R$ is a Dedekind ring of arithmetic type (for example, $R=\mathbb{Z}$), it follows that $G(R[x_1,\ldots,x_n])=E(R[x_1,\ldots,x_n])$. I don't know what happens over $\mathbb{Z}$, but for reductive groups over a field, it turns out that there is no analogue of Suslin's theorem in general. Indeed, consider the following result (due to A. Stavrova, Homotopy invariance of non-stable K_1-functors, http://arxiv.org/abs/1111.4664): For a split reductive group $G$ over a ring $R$, let me denote by $K^G_1(R)$ the quotient of $G(R)$ by the subgroup $E(R)$ of elementary matrices (defined via a Chevalley presentation, and which turns out to be a normal subgroup).
For any field $k$, any $G$ such that every semi-simple normal subgroup of $G$ is of rank at least $2$ (e.g. $G$ simple of rank at least $2$), and any $n\geq 0$, we have $$ K_1^G(k[X_1,\ldots,X_n])\simeq K^G_1(k) $$ So the obstruction for elementary matrices to generate is stable. This does not contradict Suslin's result because, as is well known, $K^{SL_n}_1(k)$ is trivial (as is the case for all semi-simple simply-connected split groups).
On the LUQ decomposition The algorithm implemented in luq (see reference given below) computes bases for the left/right null spaces of a sparse matrix $A$. Unfortunately, as far as I can tell, there seems to be no thorough discussion of this particular algorithm in the literature. In place of a reference, let us clarify how/why it works and test it a bit. The luq routine inputs an $m$-by-$n$ matrix $A$ and outputs an $m$-by-$m$ invertible matrix $L$, an $n$-by-$n$ invertible matrix $Q$ , and an $m$-by-$n$ upper trapezoidal matrix $U$ such that: (i) $A=LUQ$ and (ii) the pivot-less columns/rows of $U$ are zero vectors. For example, $$\underbrace{\begin{pmatrix} 1 & 1 \\1 & 1 \end{pmatrix}}_A = \underbrace{\begin{pmatrix}1 & 0 \\1 & 1\end{pmatrix}}_L \underbrace{\begin{pmatrix}1 & 0 \\0 & 0\end{pmatrix}}_U \underbrace{\begin{pmatrix}1 & 1 \\0 & 1\end{pmatrix}}_Q$$ Point (ii) allows one to construct bases for the left/right null spaces of $A$. Bases for Left/Right Null Spaces of $A$ Let $r = \operatorname{Rank}(A)$. Suppose we can compute the exact $LUQ$ decomposition of $A$ as described above. Then, The $n-r$ columns of $Q^{-1}$ corresponding to the pivotless columns of $U$ are a basis for the null space of $A$. This follows from the fact that $\operatorname{null}(A) = \operatorname{null}(A Q^{-1}) = \operatorname{null}(L U)$ and that the pivotless columns of $U$ are zero vectors by construction. The $m-r$ rows of $L^{-1}$ corresponding to the pivotless rows of $U$ are a basis for the left null space of $A$. This follows from the fact that $\operatorname{null}(A^T) = \operatorname{null}((L^{-1} A)^T) = \operatorname{null}( (U Q)^T)$ and that the pivotless rows of $U$ are zero vectors by construction. LUQ Algorithm Assume that $m \ge n$. (If $m < n$, then the lu command mentioned below outputs a slightly different $PA=LU$ factorization. Otherwise the LUQ decomposition is almost the same, and so, we omit this case.) Given an $m$-by-$n$ matrix $A$, the LUQ decomposition calls MATLAB command lu with partial (i.e., just row) pivoting. lu implements a variant of the LU decomposition that inputs $A$ and outputs: $m$-by-$m$ permutation matrix $P$; $m$-by-$n$ lower trapezoidal matrix $\tilde L$ with ones on the diagonal; and, $n$-by-$n$ upper triangular matrix $\tilde U$ such that $PA = \tilde L \tilde U$. Write:$$\tilde U = \begin{bmatrix} \tilde U_{11} & \tilde U_{12} \\0 & \tilde U_{22} \end{bmatrix}$$ where $\tilde U_{11}$ has nonzero diagonal entries, and hence, is invertible. Also, let $e_i$ denote unit $m$-vectors equal to $1$ in the $i$th component and zero otherwise. The algorithm then builds:$$L = P^T \begin{bmatrix} \tilde L & e_{n+1} & \cdots & e_m \end{bmatrix} $$which is an $m \times m$ invertible matrix, and$$U = \begin{bmatrix} \tilde U_{11} & 0 \\0 & \tilde U_{22} \\0 & 0 \end{bmatrix}$$ which is upper trapezoidal, and$$Q = \begin{bmatrix} I & \tilde U_{11}^{-1} \tilde U_{12} \\0 & I \end{bmatrix}$$which is an $n$-by-$n$ invertible matrix. To summarize, we obtain:$$A = L \begin{bmatrix} \tilde U_{11} & 0 \\0 & \tilde U_{22} \\0 & 0 \end{bmatrix} Q $$ For the most part, that is all the algorithm does. However, if there are any nonzero entries in $\tilde U_{22}$, then the algorithm will call luq again with input matrix containing all of the nonzero entries of $\tilde U_{22}$. This last step introduces more zeros into $U$ and modifies the invertible matrices $L$ and $Q$. 
To understand this last step, it helps to consider a simple input to luq like $$A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$ The first call to luq with this input trivially gives $U=A$ with $L$ and $Q$ being the $3$-by-$3$ identity matrices. Since $U$ has nonzero entries, a second call is made to luq with input $1$, which outputs $L=U=Q=1$. This second decomposition is incorporated into the first one by making the second column of $L$ the first one and moving all the other columns to the right of it, and similarly, moving the third row of $Q$ to the first row and moving all the other rows below it. This yields $$A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$ To be sure, consider another simple example $$A = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & a & 0 & b \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & c \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$ where $a,b,c$ are nonzero reals. In the first pass through luq the algorithm again sets $U=A$ and $L$, $Q$ equal to the $5$-by-$5$ identity matrices. Since $U=\tilde U_{22}$ has nonzero elements, luq is called again with input matrix $$B = \begin{pmatrix} a & b \\0 & c \end{pmatrix}$$ This is incorporated into the first decomposition by permuting $L$ and $Q$ as shown: $$A = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\1 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 \\0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a & b & 0 & 0 & 0 \\ 0 & c & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\0 & 0 & 0 & 0 & 1 \\1 & 0 & 0 & 0 & 0 \\0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 \end{pmatrix} $$ In general, the columns of $L$ and the rows of $Q$ are permuted so that the zero columns/rows of $\tilde U_{22}$ are moved to the end of the matrix. An LUQ decomposition is then performed on this nonzero sub-block. A full explanation would be notation-heavy (requiring index sets for the zero/nonzero elements) and not much easier to understand than the code itself. Simple Test In reality, the algorithm computes an approximate LUQ decomposition and approximate bases, i.e., with rounding errors. These rounding errors might be significant if some of the nonzero singular values of $A$ are too small for the algorithm to detect. Here is a MATLAB script file that tests the luq code. The script is a slight modification of the demo file that the software comes with. I modified the original file so that it inputs a sparse, random, rectangular, rank-deficient matrix and outputs bases for the left/right null spaces of this input matrix. Here is a sample output from this demo file.
elapsed time = 0.011993 seconds
Input matrix: size = 10000x500
true right null space dimension = 23
true left null space dimension = 9523
Output:
estimated right null space dimension = 23
estimated left null space dimension = 9523
error in basis for right null space = 0
error in basis for left null space = 2.2737e-13
"Extreme" Test This example is adapted from Gotsman and Toledo [2008].
Consider the $(n+1)$-by-$n$ matrix:$$A_1 = \begin{pmatrix} 1 & & & & \\-1 & 1 & & & \\\vdots & -1 & \ddots & & \\\vdots & & \ddots & 1 & \\-1 & -1 & \cdots & -1 & 1 \\0.5 & 0.5 & \cdots & 0.5 & 0.5 \end{pmatrix}$$and in terms of this matrix, define the block diagonal matrix:$$A = \begin{bmatrix} A_1 & 0 \\0 & A_2 \end{bmatrix}$$ where $A_2$ is an $n$-by-$n$ random symmetric positive definite matrix whose eigenvalues are all equal to one except $3$ are zero and one is $10^{-8}$. With this input matrix and $n=1000$, we obtain the following sample output. elapsed time = 1.1092 the matrix: size of A = 2001x2000 true rank of A = 1997 true right null space dimension = 3 true left null space dimension = 4 results: estimated right null space dimension = 3 estimated left null space dimension = 4 error in basis for right null space = 9.2526e-13 error in basis for left null space = 5.9577e-14 Remark There is an option in the luq code to use LU factorization with complete (i.e., row and column) pivoting $PAQ=LU$. The resulting $U$ matrix in the $LUQ$ factorization may better reflect the rank of $A$ in more ill-conditioned problems, but there is an added cost to doing column pivoting. Reference Kowal, P. [2006]. "Null space of a sparse matrix." https://www.mathworks.com/matlabcentral/fileexchange/11120-null-space-of-a-sparse-matrix Gotsman, C., and S. Toledo [2008]. "On the computation of null spaces of sparse rectangular matrices." SIAM Journal on Matrix Analysis and Applications, (30)2, 445-463.
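As a tiny concrete sanity check of the null-space recipe above, here is the 2-by-2 example from the introduction verified in plain NumPy (not the luq code itself):

```python
import numpy as np

A = np.array([[1., 1.], [1., 1.]])
L = np.array([[1., 0.], [1., 1.]])
U = np.array([[1., 0.], [0., 0.]])
Q = np.array([[1., 1.], [0., 1.]])

assert np.allclose(L @ U @ Q, A)   # the factorization holds

Qinv = np.linalg.inv(Q)
Linv = np.linalg.inv(L)

# Column 2 of U is pivotless -> column 2 of Q^{-1} spans null(A)
print(A @ Qinv[:, 1])    # [0. 0.]
# Row 2 of U is pivotless -> row 2 of L^{-1} spans the left null space
print(Linv[1, :] @ A)    # [0. 0.]
```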
Ex.14.3 Q6 Statistics Solution - NCERT Maths Class 10 Question \(100\) surnames were randomly picked up from a local telephone directory and the frequency distribution of the number of letters in the English alphabets in the surnames was obtained as follows: Number of letters \(1 - 4\) \(4 - 7\) \(7 - 10\) \(10 - 13\) \(13 - 16\) \(16 - 19\) Number of surnames \(6\) \(30\) \(40\) \(16\) \(4\) \(4\) Determine the median number of letters in the surnames. Find the mean number of letters in the surnames. Also, find the modal size of the surnames. Text Solution What is known? The frequency distribution of the number of letters in the English alphabets for \(100\) surnames. What is unknown? The median and mean number of letters in the surnames, and the modal size of the surnames. Reasoning: We will find the mean by the step-deviation method. Mean, \(\overline{x} = a + \left( {\frac{{\sum {{f_i}{u_i}} }}{{\sum {{f_i}} }}} \right) \times h\) The modal class is the class with the highest frequency. Mode \( = l + \left( {\frac{{{f_1} - {f_0}}}{{2{f_1} - {f_0} - {f_2}}}} \right) \times h\) where \(h\) is the class size, \(l\) the lower limit of the modal class, \(f_1\) the frequency of the modal class, \(f_0\) the frequency of the class preceding the modal class, and \(f_2\) the frequency of the class succeeding the modal class. The median class is the class having cumulative frequency \((cf)\) just greater than \(\frac{n}{2}\). Median \( = l + \left( {\frac{{\frac{n}{2} - cf}}{f}} \right) \times h\) where \(h\) is the class size, \(n\) the number of observations, \(l\) the lower limit of the median class, \(f\) the frequency of the median class, and \(cf\) the cumulative frequency of the class preceding the median class. Steps: To find the median: Number of letters Frequency \(f_i\) Cumulative frequency \((cf)\) \(1 - 4\) \(6\) \(6\) \(4 - 7\) \(30\) \(6 + 30 = 36\) \(7 - 10\) \(40\) \(36 + 40 = 76\) \(10 - 13\) \(16\) \(76 + 16 = 92\) \(13 - 16\) \(4\) \(92 + 4 = 96\) \(16 - 19\) \(4\) \(96 + 4 = 100\) Total \(n = 100\) From the table, it can be observed that \(n = 100 \Rightarrow \frac{n}{2} = 50\). The cumulative frequency \((cf)\) just greater than \(50\) is \(76\), belonging to class \(7 - 10\). Therefore: median class \(= 7 - 10\), class size \(h = 3\), lower limit of median class \(l = 7\), frequency of median class \(f = 40\), cumulative frequency of class preceding median class \(cf = 36\). Median \( = l + \left( {\frac{{\frac{n}{2} - cf}}{f}} \right) \times h\) \[\begin{array}{l} = 7 + \left( {\frac{{50 - 36}}{{40}}} \right) \times 3\\ = 7 + \frac{{14}}{{40}} \times 3\\ = 7 + \frac{{21}}{{20}}\\ = 7 + 1.05\\ = 8.05 \end{array}\] To find the mean: Class mark \({x_i} = \frac{{{\text{Upper class limit }} + {\text{ Lower class limit}}}}{2}\). Taking assumed mean \(a = 11.5\): Number of letters Number of surnames \(f_i\) Class mark \(x_i\) \(d_i = x_i - a\) \(u_i = \frac{d_i}{h}\) \(f_i u_i\) \(1 - 4\) \(6\) \(2.5\) \(-9\) \(-3\) \(-18\) \(4 - 7\) \(30\) \(5.5\) \(-6\) \(-2\) \(-60\) \(7 - 10\) \(40\) \(8.5\) \(-3\) \(-1\) \(-40\) \(10 - 13\) \(16\) \(11.5\) \(0\) \(0\) \(0\) \(13 - 16\) \(4\) \(14.5\) \(3\) \(1\) \(4\) \(16 - 19\) \(4\) \(17.5\) \(6\) \(2\) \(8\) Total \(100\) \(-106\) From the table, we obtain \(\Sigma f_i u_i = -106\), \(\Sigma f_i = 100\), and class size \(h = 3\). \[\begin{align} \operatorname{Mean}, \overline{x} &= a + \left(\frac{\sum f_{i} u_{i}}{\sum f_{i}}\right) h \\ &= 11.5 + \left(\frac{-106}{100}\right) \times 3 \\ &= 11.5 - 3.18 \\ &= 8.32 \end{align}\] To find the mode: Number of letters Number of surnames \(1 - 4\) \(6\) \(4 - 7\) \(30\) \(7 - 10\) \(40\) \(10 - 13\) \(16\) \(13 - 16\) \(4\) \(16 - 19\) \(4\) Total \(n = 100\) From the table, it can be observed that the maximum class frequency is \(40\), belonging to class interval \(7 - 10\). Therefore: class size \(h = 3\), modal class \(= 7 - 10\), lower limit of modal class \(l = 7\), frequency of modal class \(f_1 = 40\), frequency of class preceding the modal class \(f_0 = 30\), frequency of class succeeding the modal class \(f_2 = 16\). \[\begin{align} \text{Mode} &= l + \left(\frac{f_{1} - f_{0}}{2 f_{1} - f_{0} - f_{2}}\right) \times h \\ &= 7 + \left[\frac{40 - 30}{2(40) - 30 - 16}\right] \times 3 \\ &= 7 + \frac{10}{34} \times 3 \\ &= 7 + \frac{30}{34} \\ &= 7.88 \end{align}\] Therefore, the median and mean number of letters in the surnames are \(8.05\) and \(8.32\) respectively, while the modal size of the surnames is \(7.88\).
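The three computations can be reproduced in a few lines; a minimal sketch with the table's values hard-coded:

```python
edges = [1, 4, 7, 10, 13, 16, 19]   # class boundaries
freq  = [6, 30, 40, 16, 4, 4]       # number of surnames per class
n, h = sum(freq), 3                 # n = 100, class size h = 3

# Median: locate the class whose cumulative frequency first reaches n/2
cf = 0
for i, fi in enumerate(freq):
    if cf + fi >= n / 2:
        break
    cf += fi
median = edges[i] + (n / 2 - cf) / freq[i] * h          # 8.05

# Mean via class marks (equivalent to the step-deviation method)
marks = [(edges[j] + edges[j + 1]) / 2 for j in range(len(freq))]
mean = sum(fj * xj for fj, xj in zip(freq, marks)) / n  # 8.32

# Mode: modal class is the one with maximum frequency (interior here)
m = freq.index(max(freq))
f1, f0, f2 = freq[m], freq[m - 1], freq[m + 1]
mode = edges[m] + (f1 - f0) / (2 * f1 - f0 - f2) * h    # ~7.88

print(median, mean, round(mode, 2))
```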
Cosmology, or at least basic cosmology, models the world as perfectly homogeneous and isotropic. Everything in the study of the evolution of the Universe actually works out fine if this is at least true at all times on the largest scales, and observations confirm it pretty much is. But at smaller scales, our Universe does not appear like a featureless, homogeneous soup, but as an immense tree of fractal complexity and divisions and subdivisions. Superclusters, clusters, galaxies, stars, planets, mountains and craters. And then there's life on Earth, an incredible variety of desperate machines grasping for survival. And on the top of the tree (for what we know), human civilizations, our actions and thoughts and the information we produce and consume. All of this complexity is concentrated in minuscule oases of useful information lost in a huge, huge blackness filled only by what is essentially thermal noise. A pretty picture, but doesn't that sound... wrong? Is that supposed to happen? On the face of it, all of this complexity arises and continues to exist because of an intricate array of physical phenomena and interactions on which life piggybacks. The varied chemistry of carbon allows for the extraordinary machineries of biology, fluid mechanics drives the weather and the tectonic cycle, electromagnetism and the theory of conductivity is the foundation for all of electronics, computers, and the Internet. We could list thousands of examples, but you get the idea. As any person who "fucking loves science" will tell you, we are made of stardust, in the sense that our existence is based on many natural phenomena in a mechanical Universe. You would assume that is a satisfying explanation: many complex elements and interactions, thus great complexity in the results. However, all of these interactions naturally push towards death and decay. Organic matter burns and turns into vapour and dissolves into the atmosphere. Atomic bombs explode. Light bounces around and loses coherence. Hard drives break. Information scrambles. It's just the second law of thermodynamics. And the "complexity" of the interactions never really matters - a triple pendulum or the entire standard model in a box both tend to the same result: thermal equilibrium, and nothing happening. In schools we teach two essential lessons of both life and science: that everything in existence is the product of a chaotic and uncaring Universe, and that anything worth something requires care and work to be preserved, and would otherwise disassemble in time. But we don't point out the apparent contradiction between the two for some reason. At least not from the physics, even though this is very much a physics issue. Creationists (to the extent that they actually exist and aren't an elaborate practical joke by the US on the rest of the world) like to argue that it is impossible for life on Earth to have spontaneously arisen, because it would mean a transition from "disorder" to "order", or more precisely a spontaneous creation of new useful information, in violation of the second law of thermodynamics. This very abstract conceptualization of life as a decrease in entropy, as a transition from equilibrium to non-equilibrium, is not wrong and is actually the view Schrödinger tried to push in "What is Life?". The obvious mistake, as the fucking lover of fucking science will make very sure to point out, is forgetting about the massive thermonuclear reactor in the sky. 
The Earth is not a closed system and receives constant input of "useful" energy from the Sun. This "negative entropy" (more precisely free energy) drives the Earth system and keeps it out of equilibrium. Alternatively, the Earth still has to cool its core and there's a bit of free energy being provided to the crust from the temperature difference between the core and the coldness of empty space, and a few organisms can live on that. None of these things can last forever, but for now they work. However, this begs the question. If all our free energy comes from the Sun, then... who made the Sun? What brought it all out of equilibrium? The solar system started out as a relatively featureless, less differentiated protostellar cloud of hydrogen and helium. It was dead. Now it's neatly organized in differentiated bodies, a nuclear furnace forging heavy elements, a bunch of random planets with different composition and varied moons, and one even has life and roads and bucatini all'amatriciana. That is a potential violation of the second law. Hydrogen in a tank does not do that - it does not form tiny solar systems with tiny people, it fills the tank homogeneously and stays dead in all aspects. In fact, no substance does that. Essentially every system I can think of, when isolated in a small tank, will eventually "die" and decompose and reach thermal equilibrium. So why does a lot of hydrogen in space do the opposite? The reason has to do with the unusual thermodynamic properties of gravity and gravitationally-dominated systems. Consider a gas of particles (which might be molecules, or stars, or galaxies!) interacting only gravitationally. Then the total internal energy of the system is the kinetic + gravitational potential energy: $$ U = U_k + U_G $$ However, also recall that for the gravitational interaction the virial theorem implies $$ U_G = - 2 U_k $$ (at least as a time-average) so that you can simplify $$ U = - U_k \,.$$ Now, there is a common myth that kinetic energy is proportional to the temperature - we don't need such a strong (and occasionally false) statement. We just need to know that it is an increasing function of temperature. More precisely, you could write \( U_k(T,P)\) or \( U_k(T,V)\) and it would be an increasing function of \( T \) in either case. This means that for a gravitational system, the heat capacity either at constant pressure or volume is negative: $$ \frac{\partial U}{\partial T} |_P < 0 $$ $$ \frac{\partial U}{\partial T} |_V < 0 $$ That is, a gravitational system gets colder when you give it energy, and gets hotter when you take it away. Note this also holds if there are additional interactions other than gravitational $$ U = U_k + U_G + U_\text{other} $$ provided the gravitational potential is large enough compared to the other forces. This changes everything. Let me just review briefly how normal thermodynamic systems always push towards equilibrium. Imagine a system has some type of inhomogeneity in the form of a temperature difference between two subsystems C and H, and let's say C is colder and H is hotter. Heat flows spontaneously from the hotter to the colder system, and that is always true. As C receives heat, and like most systems has positive heat capacity, it gets a bit hotter. Conversely, H is donating heat and gets colder. The temperature difference is reduced and the subsystems walk back into equilibrium. However, if the heat capacity is negative, this doesn't work. Heat still moves from H to C, but H gets hotter and C gets colder. 
All inhomogeneities are amplified and the system moves away from equilibrium. Thus gravitational systems possess the essential ingredient for the creation of life. But still, the second law of thermodynamics seems to be broken and that is always a bad sign. The 2nd law is a very, very general statement; it could be simplified as just the idea that the information in an imprecise or coarse-grained description of reality can only degrade, not improve. There just aren't any exceptions to this. How does gravity manage to reduce entropy? It is commonly claimed that the Universe after the Big Bang was in a "low-entropy" state, while what awaits it in the far-future (heat death?) is a "high-entropy" state, and this difference actually gives the arrow of time, and is the reason we define it such that the Big Bang is in the past and the heat death in the future. It is true that there is an entropy difference between these two extremities, and that the total entropy increases monotonically between these two values. But the specific entropy in the early Universe is not particularly low compared to what we experience here on Earth - the early Universe was a homogeneous state in thermal equilibrium. If you consider the particles that compose a human body, the entropy they have now is much, much lower than the same particles (to the extent that this makes sense) right after the recombination of hydrogen, simply because the former is a complex, ordered system, in fact a living being composed of literally tens of trillions of little working machines, capable of holding information and performing calculations and reasoning, while the latter is just... noise. So there is a decrease of entropy. It is gravity that is able to create such lower and lower-entropy states through gravitational collapse. If a gravitational system has a typical size \( R \), then its potential energy will go as \( U_G \sim - 1/R\) which means \( \frac{dU}{dR} > 0 \). Intuitive: if you give energy to a gravitationally-bound system, orbits get wider. Since we already know the heat capacity is negative, this means \( \frac{dT}{d R} < 0 \), and that the more it shrinks, the hotter it gets. Like a satellite in orbit: if you take orbital energy away from it, it grazes the atmosphere and starts burning up. However, entropy decreases as the system shrinks - a normal behaviour for usual systems but unexpected considering the weird thing we saw with the temperature. Ignoring pressure for now, the first law implies $$ dS = dU/T = \frac{1}{T} \left(\frac{dU}{dR}\right) dR \Rightarrow \frac{dS}{dR} > 0 $$ So gravitational collapse violates the second law of thermodynamics and should never happen. But we are forgetting about the bizarre property of spontaneous divergence from equilibrium of gravitational systems that I introduced before: they can (and will in time) split into subsystems of ever-increasing temperature difference. We know which one is the one getting hotter and hotter: it's the part that collapses. The one getting colder and colder is "radiation": actual radiation (gravitational or electromagnetic) escaping the gravitating mass or just expelled matter that has become gravitationally unbound. "Radiation" can move in a very large space and is not constrained by gravity to satisfy the conclusion $$ -2U_k = U_G $$ of the virial theorem, and thus carries a very large entropy.
So a mass can collapse, but it needs to sacrifice part of itself to store a very large entropy, so that the part that continues collapsing can have a decreased entropy while still satisfying the second law of thermodynamics as a whole. While this is not an unusual or unseen occurrence (e.g.: I can reduce the entropy of a deck of cards by sorting it, but the minimum computations necessary in my brain will increase entropy by a larger amount), gravity is special and unique among all physical phenomena in that it does this spontaneously. Thus the early Universe collapses gravitationally expelling "radiation", and the process is repeated at various scales. We therefore find little pockets of very low entropy in a very large high-entropy expanse. (Black holes are an exception, but for the sake of simplicity let's put them aside). It explains the tree of increasing complexity and the giant cold expanse of sparse thermal garbage. You can imagine life without the chemistry, nuclear physics, condensed matter physics we know. You can imagine intelligence made of plasma or computers made of dancing stars in clusters. If you have a powerful enough imagination, every cog in the architecture of life is replaceable. Except for one: gravity. Only gravity can reduce entropy and create complexity where it doesn't already exist. It is, ultimately, the origin of anything worthwhile in the Universe. So, if you really need to worship something, or need a mantra, or are stupid enough to want a tattoo of a physics formula, let it be this one: $$ F = -\frac{GM_1 M_2}{r^2} $$ because it's the closest thing to a loving God physics is going to give us. Now, with this said, why the heat death? Well, the reason is that in all of this I meant gravity gravity, that is the long-range interaction between masses governed by Newton's law. Not all of general relativity and the range of phenomena it predicts. In particular in the presence of dark energy / cosmological constant, the acceleration of mass \( M_1\) due to mass \( M_2\) is corrected as $$ \ddot {\vec r} = - \frac{GM_2}{r^2} + \frac{\Lambda c^2}{3} r $$ so, a repulsive force which grows linearly with distance. The thing with the virial theorem applied to an interaction potential that goes as \(r^2\) gives \(U_k = U_\Lambda\) and so this means a positive contribution to the heat capacity. If dark energy is dominant, and it will be in the not-so-far future, then it undoes the creative property of gravity. In a dark-energy dominated system, we move towards thermal equilibrium instead of away from it. The natural order according to which all things spontaneously degrade and die is recovered, and all things will spontaneously degrade and die.
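(For one concrete instance of the negative heat capacity argued for above, take the special case the essay deliberately avoids relying on: a self-gravitating monatomic ideal gas, where kinetic energy is proportional to temperature. Then
$$ U_k = \tfrac{3}{2} N k_B T, \qquad U = -U_k = -\tfrac{3}{2} N k_B T \quad\Longrightarrow\quad \frac{\partial U}{\partial T} = -\tfrac{3}{2} N k_B < 0, $$
which is the textbook result for virialized star clusters.)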
Tokyo Journal of Mathematics Tokyo J. Math. Volume 37, Number 1 (2014), 61-72. Infinitesimal Deformations and Brauer Group of Some Generalized Calabi--Eckmann Manifolds Abstract Let $X$ be a compact connected Riemann surface. Let $\xi_1: E_1\longrightarrow X$ and $\xi_2: E_2\,\longrightarrow X$ be holomorphic vector bundles of rank at least two. Given these together with a $\lambda \in {\mathbb C}$ with positive imaginary part, we construct a holomorphic fiber bundle $S^{\xi_1,\xi_2}_{\lambda}$ over $X$ whose fibers are the Calabi--Eckmann manifolds. We compute the Picard group of the total space of $S^{\xi_1,\xi_2}_{\lambda}$. We also compute the infinitesimal deformations of the total space of $S^{\xi_1,\xi_2}_{\lambda}$. The cohomological Brauer group of $S^{\xi_1,\xi_2}_{\lambda}$ is shown to be zero. In particular, the Brauer group of $S^{\xi_1,\xi_2}_{\lambda}$ vanishes. Article information Source Tokyo J. Math., Volume 37, Number 1 (2014), 61-72. Dates First available in Project Euclid: 28 July 2014 Permanent link to this document https://projecteuclid.org/euclid.tjm/1406552431 Digital Object Identifier doi:10.3836/tjm/1406552431 Mathematical Reviews number (MathSciNet) MR3264514 Zentralblatt MATH identifier 1330.14026 Subjects Primary: 14F22: Brauer groups of schemes [See also 12G05, 16K50] Secondary: 32Q55: Topological aspects of complex manifolds 32G05: Deformations of complex structures [See also 13D10, 16S80, 58H10, 58H15] Citation BISWAS, Indranil; MJ, Mahan; THAKUR, Ajay Singh. Infinitesimal Deformations and Brauer Group of Some Generalized Calabi--Eckmann Manifolds. Tokyo J. Math. 37 (2014), no. 1, 61--72. doi:10.3836/tjm/1406552431. https://projecteuclid.org/euclid.tjm/1406552431
(5 Marks) Dec-2018 Subject Fluid Mechanics 2 Topic Compressible Flow Difficulty Medium The Mach number is defined as the square root of the ratio of the inertia force of a flowing fluid to the elastic force. $\therefore \text{Mach Number} = M = \sqrt{\frac{\text{Inertia force}}{\text{Elastic force}}} = \sqrt{\frac{\rho A V^2}{KA}}$ $M = \sqrt{\frac{V^2}{K/\rho}} = \frac{V}{\sqrt{K/\rho}} = \frac{V}{C}$ i.e. $M = \frac{V}{C}$ (since $\sqrt{K/\rho} = C$, the velocity of sound in the fluid) $M = \frac{\text{Velocity of fluid or body moving in fluid}}{\text{Velocity of sound in fluid}}$ For compressible fluid flow, the Mach number is an important non-dimensional parameter. On the basis of the Mach number, the flow is classified as: Subsonic flow: a flow is said to be subsonic if the Mach number is less than 1, i.e. the velocity of flow ($V$) is less than the velocity of the sound wave ($C$). Sonic flow: a flow is said to be sonic if the Mach number is equal to 1, i.e. the velocity of flow is equal to the velocity of the sound wave. Supersonic flow: a flow is said to be supersonic if the Mach number is greater than 1, i.e. the velocity of flow is greater than the velocity of the sound wave.
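A trivial sketch of the definition and classification above; the numeric values are illustrative assumptions only ($K$ is the bulk modulus in Pa, $\rho$ the density in kg/m³):

```python
import math

def mach_number(V, K, rho):
    """M = V / C, with C = sqrt(K / rho) the speed of sound in the fluid."""
    return V / math.sqrt(K / rho)

def regime(M):
    # Exact equality is only meaningful for idealized values
    if M < 1:
        return "subsonic"
    if M == 1:
        return "sonic"
    return "supersonic"

# Example with air-like values (illustrative only): C ~ 341.6 m/s
M = mach_number(V=500.0, K=1.4e5, rho=1.2)
print(M, regime(M))   # ~1.46 supersonic
```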
Your task is to take an array of numbers and a real number and return the value at that point in the array. Arrays start at \$\pi\$ and are counted in \$\pi\$ intervals. Thing is, we're actually going to interpolate between elements given the "index". As an example:

Index: 1π    2π    3π    4π    5π    6π
Array: [ 1.1,  1.3,  6.9,  4.2,  1.3,  3.7 ]

Because it's \$\pi\$, we have to do the obligatory trigonometry, so we'll be using cosine interpolation using the following formula: \${\cos(i \mod \pi) + 1 \over 2} * (\alpha - \beta) + \beta\$ where: \$i\$ is the input "index" \$\alpha\$ is the value of the element immediately before the "index" \$\beta\$ is the value of the element immediately after the "index" \$\cos\$ takes its angle in radians Example Given [1.3, 3.7, 6.9], 5.3: Index 5.3 is between \$1\pi\$ and \$2\pi\$, so 1.3 will be used for before and 3.7 will be used for after. Putting it into the formula, we get: \${\cos(5.3 \mod \pi) + 1 \over 2} * (1.3 - 3.7) + 3.7\$ Which comes out to 3.165 Notes Input and output may be in any convenient format You may assume the input number is greater than \$\pi\$ and less than array length * \$\pi\$ You may assume the input array will be at least 2 elements long. Your result must have at least two decimal places of precision, be accurate to within 0.05, and support numbers up to 100 for this precision/accuracy. (single-precision floats are more than sufficient to meet this requirement) Happy Golfing!
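An ungolfed reference implementation in Python, my reading of the spec with the 1-based π indexing from the example:

```python
import math

def interpolate(arr, i):
    """Cosine-interpolate arr at 'index' i, where element k sits at k*pi."""
    k = int(i // math.pi)                # element immediately before is arr[k-1]
    alpha, beta = arr[k - 1], arr[k]     # values before and after the index
    return (math.cos(i % math.pi) + 1) / 2 * (alpha - beta) + beta

print(interpolate([1.3, 3.7, 6.9], 5.3))   # 3.165... (matches the worked example)
```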
Let's work over $\mathbb{C}$. Consider the following commutative diagram \begin{array}{ccc} E_1 & \xrightarrow{f} & E_2\\ \downarrow{\pi} & & \downarrow{\pi}\\ P_1 & \xrightarrow{g} & P_2 \end{array} Here $E_1=E_2=E$ is an elliptic curve, $P_1=P_2=\mathbb{P}^1$, and $f$ is the multiplication map defined by $f(x)=2x$. Let $\sigma:E \to E$ denote the involution $\sigma(x)=-x$ and let $\pi:E\to E/\langle\sigma\rangle=\mathbb{P}^1$ denote the quotient by the involution. Since $\sigma\circ f=f\circ \sigma$, we have the induced endomorphism $g$ making the diagram commute. With the diagram above, I have the following questions: Question 1 (Base change). Is $E_1=E_2\times_{P_2}P_1$? My reason: Let $B:=E_2\times_{P_2}P_1$. Then $B\to P_1$ is a (flat) double cover, so $B$ has at most 2 irreducible components. Note that $B\to E_2$ is a finite cover, so each component of $B$ is not rational and hence elliptic, and further $B$ is an irreducible elliptic curve. Note that $E_1$ factors through $B$ and $\pi$ is a double cover, so the induced map $E_1\to B$ is an isomorphism. Fact (Pullback of the relative sheaf of differentials). A) $\Omega_{E_1/E_2}=0$ (unramified) B) $\Omega_{P_1/P_2}\neq 0$ (ramified) C) $\Omega_{E_1/E_2}=\pi^*\Omega_{P_1/P_2}$ See GTM 52, Hartshorne, Chapter II, Proposition 8.10. Question 2 (Faithfully flat). Does $\pi^*\mathcal{F}=0\Rightarrow \mathcal{F}=0$ hold for any coherent sheaf $\mathcal{F}$ on $P_1$? My reason: $\pi$ is flat and surjective and hence faithfully flat. Main Question 3 (Contradiction). Question 2 and Facts A) and C) imply $\Omega_{P_1/P_2}=0$, which contradicts Fact B). Where am I wrong? Question 4. What is $g^*(x)$ as a rational function, where we assume $\mathbb{C}(x)$ is the function field of $\mathbb{P}^1$? I know it depends on $\pi$, but is $\frac{x^2}{(x+1)^2(x-1)^2}$ possible?
As already mentioned by TheSimpliFire, you cannot simply change from real to complex variables without making any changes within your solution. I will present a solution which does not rely on complex analysis at all. Recently, reading this post dealing with related integrals, I have finally found a way to evaluate your integral. However, the crucial part we need is precisely the Mellin transform of the sine function, which is given by $$\mathcal M_s\{\sin(x)\}~=~\int_0^\infty x^{s-1}\sin(x)\mathrm dx~=~\Gamma(s)\sin\left(\frac{\pi s}2\right)\tag1$$ Here $\Gamma(z)$ denotes the Gamma Function. There are different possible ways to show this relation; for myself I prefer using Ramanujan's Master Theorem, as is done here for example $($just substitute $s$ by $-s$$)$, but I will leave this out for now since it is not our concern here. Note that we can use a variation of Feynman's Trick, i.e. differentiation under the integral sign. Before applying this technique we may rewrite the RHS of $(1)$ in the following way $$\Gamma(s)\sin\left(\frac{\pi s}2\right)=\Gamma(s)\sin\left(\frac{\pi s}2\right)\frac{2\cos\left(\frac{\pi s}2\right)}{2\cos\left(\frac{\pi s}2\right)}=\frac1{{2\cos\left(\frac{\pi s}2\right)}}\Gamma(s)\sin(\pi s)=\frac\pi2\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}$$ Here we used Euler's Reflection Formula. Even though the new form seems to be more complicated, in the context of taking derivatives it actually prevents us from running into indeterminate expressions which are harder to deal with. Anyway, differentiating w.r.t. $s$ leads us to \begin{align*}\frac{\mathrm d}{\mathrm ds}\int_0^\infty x^{s-1}\sin(x)\mathrm dx&=\frac{\mathrm d}{\mathrm ds}\frac\pi2\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}\\\int_0^\infty \frac{\partial}{\partial s}x^{s-1}\sin(x)\mathrm dx&=\frac\pi2\frac{\mathrm d}{\mathrm ds}\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}\\\int_0^\infty x^{s-1}\log(x)\sin(x)\mathrm dx&=\frac\pi2\left[\frac1{\cos\left(\frac{\pi s}2\right)}\frac{-(-1)\Gamma'(1-s)}{\Gamma^2(1-s)}+\frac1{\Gamma(1-s)}\frac{-\frac\pi2\sin\left(\frac{\pi s}2\right)}{\cos^2\left(\frac{\pi s}2\right)}\right]\\\int_0^\infty x^{s-1}\log(x)\sin(x)\mathrm dx&=\frac\pi2\left[\frac1{\cos\left(\frac{\pi s}2\right)}\frac{\psi^{(0)}(1-s)}{\Gamma(1-s)}-\frac\pi2\frac1{\Gamma(1-s)}\frac{\sin\left(\frac{\pi s}2\right)}{\cos^2\left(\frac{\pi s}2\right)}\right]\end{align*} Now we are basically done. Since every occurring term is defined at $s=0$, we can simply plug in this value. Utilizing the fact that the Digamma Function $\psi^{(0)}(z)$ is closely related to the Euler-Mascheroni Constant, we can deduce that $$\int_0^\infty x^{0-1}\log(x)\sin(x)\mathrm dx=\frac\pi2\left[\underbrace{\frac1{\cos\left(\frac{\pi\cdot0}2\right)}\frac{\psi^{(0)}(1-0)}{\Gamma(1-0)}}_{=-\gamma}-\underbrace{\frac\pi2\frac1{\Gamma(1-0)}\frac{\sin\left(\frac{\pi\cdot0}2\right)}{\cos^2\left(\frac{\pi\cdot0}2\right)}}_{=0}\right]$$ $$\therefore~\int_0^\infty \frac{\log(x)\sin(x)}x\mathrm dx~=~-\frac{\gamma\pi}2$$
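As a numerical sanity check (my own addition, not part of the derivation), the oscillatory integral can be evaluated with mpmath's quadosc and compared against $-\gamma\pi/2 \approx -0.9064$:

from mpmath import mp, quadosc, log, sin, pi, euler, inf

mp.dps = 30  # working precision in decimal digits
val = quadosc(lambda x: log(x) * sin(x) / x, [0, inf], period=2 * pi)
print(val)              # numerical value of the integral
print(-euler * pi / 2)  # closed form for comparison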
The Annals of Statistics Ann. Statist. Volume 8, Number 3 (1980), 457-487. Minimum Chi-Square, not Maximum Likelihood! Abstract The sovereignty of MLE is questioned. Minimum $\chi^2_\lambda$ yields the same estimating equations as MLE. For many cases, as illustrated in the examples presented (and, as further algorithmic exploration in progress may show, perhaps for all cases), minimum $\chi^2_\lambda$ estimates are available. In this sense minimum $\chi^2$ is the basic principle of estimation. The criterion of asymptotic sufficiency, which has been called "second order efficiency", is rejected as a criterion of goodness of estimate as against some loss function such as the mean squared error. The relation between MLE and sufficiency is not assured, as illustrated by an example in which MLE yields $\infty$ as the estimate for samples that have different values of the sufficient statistic. Other examples are cited in which minimal sufficient statistics exist but the MLE is not sufficient. The view is advanced that statistics is a science, not mathematics or philosophy (inference), and as such requires that any claimed attributes of the MLE be testable by a Monte Carlo experiment. Article information Source Ann. Statist., Volume 8, Number 3 (1980), 457-487. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176345003 Digital Object Identifier doi:10.1214/aos/1176345003 Mathematical Reviews number (MathSciNet) MR568715 Zentralblatt MATH identifier 0456.62023 JSTOR links.jstor.org Citation Berkson, Joseph. Minimum Chi-Square, not Maximum Likelihood!. Ann. Statist. 8 (1980), no. 3, 457--487. doi:10.1214/aos/1176345003. https://projecteuclid.org/euclid.aos/1176345003
1. Journal of High Energy Physics, ISSN 1029-8479, 01/2010, Volume 2010, Issue 1, pp. 1 - 63
A combination is presented of the inclusive deep inelastic cross sections measured by the H1 and ZEUS Collaborations in neutral and charged current unpolarised...
Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Lepton-Nucleon Scattering | Physics | Elementary Particles, Quantum Field Theory | Lepton-nucleon scattering | LOW Q | PROTON COLLISIONS | F | DETECTOR | LOW X | EP SCATTERING | DEEP-INELASTIC SCATTERING | F-2 STRUCTURE-FUNCTION | E(+)P SCATTERING | PHYSICS, PARTICLES & FIELDS | High Energy Physics - Experiment | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article

2. Combination of measurements of inclusive deep inelastic e(+/-)p scattering cross sections and QCD analysis of HERA data
European Physical Journal C, ISSN 1434-6044, 12/2015, Volume 75, Issue 12, p. 580
A combination is presented of all inclusive deep inelastic cross sections previously published by the H1 and ZEUS collaborations at HERA for neutral and...
LUMINOSITY | LOW Q | PARTON DISTRIBUTIONS | EVOLUTION | HEAVY QUARKS | POSITRON-PROTON COLLISIONS | LOW X | EP SCATTERING | F-2 STRUCTURE-FUNCTION | E(+)P SCATTERING | PHYSICS, PARTICLES & FIELDS | High Energy Physics - Phenomenology | High Energy Physics - Experiment | Nuclear Experiment | Nuclear Theory | Physics | violation [scaling] | deep inelastic scattering | inclusive reaction | beam [electron] | HERA, ZEUS, H1, Inclusive DIS, QCD | quantum chromodynamics | beam [p] | coupling constant | production [charm] | Phenomenology | gluon | hadronization | High Energy Physics | parametrization | DESY HERA Stor | Engineering (miscellaneous) | strong coupling | [PHYS.HPHE]Physics [physics]/High Energy Physics - Phenomenology [hep-ph] | [PHYS.NEXP]Physics [physics]/Nuclear Experiment [nucl-ex] | production [jet] | Physics and Astronomy (miscellaneous) | experimental results | deep inelastic scattering [electron p] | Experiment | Bjorken | distribution function [parton] | QCD analysis; HERA data | [PHYS.NUCL]Physics [physics]/Nuclear Theory [nucl-th] | correlation | scattering | charm | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | charged current | polarization [beam] | energy [beam] | electroweak interaction
Journal Article

3. Physics Letters B, ISSN 0370-2693, 11/2009, Volume 681, Issue 5, pp. 391 - 399
A measurement of elastic deeply virtual Compton scattering using and collision data recorded with the H1 detector at HERA is presented. The analysed data...
ELECTROPRODUCTION | NUCLEON | GENERALIZED PARTON DISTRIBUTIONS | PHYSICS, MULTIDISCIPLINARY | IMPACT PARAMETER SPACE | CALORIMETER | Physics | High Energy Physics - Experiment | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article

4. Physics Letters B, ISSN 0370-2693, 11/2011, Volume 705, Issue 1-2, pp. 52 - 58
Journal Article

5. Measurement of the inclusive e±p scattering cross section at high inelasticity y and of the structure function F_L
The European Physical Journal C, ISSN 1434-6044, 3/2011, Volume 71, Issue 3, pp. 1 - 50
A measurement is presented of the inclusive neutral current e±p scattering cross section using data collected by the H1 experiment at HERA during the years...
Measurement Science and Instrumentation | Nuclear Physics, Heavy Ions, Hadrons | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PERTURBATION-THEORY | LOW Q | STRUCTURE-FUNCTION F-2 | LOW-X | LEADING ORDER | EP SCATTERING | PARTON DENSITIES | FAST SIMULATION | PROTON STRUCTURE-FUNCTION | LIQUID ARGON CALORIMETER | PHYSICS, PARTICLES & FIELDS | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article

6. Physics Letters B, ISSN 0370-2693, 06/2008, Volume 663, Issue 5, pp. 382 - 389
A search for first generation excited neutrinos is performed using the full data sample collected by the H1 experiment at HERA at a centre-of-mass energy of...
CALIBRATION | PHYSICS, MULTIDISCIPLINARY | ELECTRONS | CURRENT CROSS-SECTIONS | COUPLINGS | COLLIDERS | HADRON-COLLISIONS | ENERGY EP COLLISIONS | LIQUID ARGON CALORIMETER | SCATTERING | LEPTON PRODUCTION | Collisions (Nuclear physics) | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article

7. Combination of measurements of inclusive deep inelastic e±p scattering cross sections and QCD analysis of HERA data: H1 and ZEUS Collaborations
The European Physical Journal C, ISSN 1434-6044, 12/2015, Volume 75, Issue 12, pp. 1 - 98
A combination is presented of all inclusive deep inelastic cross sections previously published by the H1 and ZEUS collaborations at HERA for neutral and...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article

8. The European Physical Journal C, ISSN 1434-6044, 04/2001, Volume 20, Issue 1, pp. 29 - 49
A measurement is presented of dijet and 3-jet cross sections in low-$|t|$ diffractive deep-inelastic scattering interactions of the type $ep \rightarrow eXY$...
LARGE-RAPIDITY-GAP | PARTON DISTRIBUTIONS | VIRTUAL PHOTON | QUANTUM CHROMODYNAMICS | POMERON STRUCTURE | FRACTURE FUNCTIONS | SCATTERING EVENTS | EP COLLISIONS | DIJET CROSS-SECTIONS | HARD-SCATTERING | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment
Journal Article

9. Measurement of multijet production in $ep$ collisions at high $Q^2$ and determination of the strong coupling $\alpha_s$
European Physical Journal C, ISSN 1434-6044, 2015, Volume 75, Issue 2, pp. 1 - 48
Journal Article

10. European Physical Journal C, ISSN 1434-6044, 2010, Volume 70, Issue 3, pp. 823 - 874
The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid...
Measurement Science and Instrumentation | Nuclear Physics, Heavy Ions, Hadrons | Quantum Field Theories, String Theory | Physics | Astronomy, Astrophysics and Cosmology | Elementary Particles, Quantum Field Theory | MONTE-CARLO | LIBRARY | PARTON | EVENT | TILE CALORIMETER | PHYSICS, PARTICLES & FIELDS | Usage | Software | Collisions (Nuclear physics) | Synthetic training devices | Infrastructure (Economics) | Models | Detectors | Simulation | Large Hadron Collider | Particle collisions | Computer simulation | Infrastructure | Hadrons | Packages | Computer programs | Instrumentation and Detectors | High Energy Physics - Experiment | PARTICLE ACCELERATORS | MATHEMATICS AND COMPUTING | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article

11. Measurement of multijet production in $ep$ collisions at high $Q^2$ and determination of the strong coupling $\alpha_s$
The European Physical Journal C, ISSN 1434-6044, 2/2015, Volume 75, Issue 2, pp. 1 - 48
Inclusive jet, dijet and trijet differential cross sections are measured in neutral current deep-inelastic scattering for exchanged boson virtualities $150 <$...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
Perturbed fractional eigenvalue problems
1. Department of Mathematics, University of Craiova, 200585 Craiova, Romania
2. "Simion Stoilow" Institute of Mathematics of the Romanian Academy, 010702 Bucharest, Romania
3. Department of Mathematics and Computer Science, University Politehnica of Bucharest, 060042 Bucharest, Romania
4. "Simion Stoilow" Institute of Mathematics of the Romanian Academy, 010702 Bucharest, Romania
Let $Ω\subset\mathbb{R}^N$ ($N≥2$) be a bounded domain with Lipschitz boundary. For each $p∈(1,∞)$ and $s∈(0,1)$ we denote by $(-Δ_p)^s$ the fractional $(s,p)$-Laplacian operator. In this paper we study the existence of nontrivial solutions for a perturbation of the eigenvalue problem $(-Δ_p)^s u=λ|u|^{p-2}u$ in $Ω$, $u=0$ in $\mathbb{R}^N\backslash Ω$, with a fractional $(t,q)$-Laplacian operator in the left-hand side of the equation, where $t∈(0,1)$ and $q∈(1,∞)$ are such that $s-N/p=t-N/q$. We show that nontrivial solutions of the perturbed eigenvalue problem exist if and only if the parameter $λ$ is strictly larger than the first eigenvalue of the $(s,p)$-Laplacian.
Keywords: Perturbed eigenvalue problem, non-local operator, variational methods, fractional Sobolev space.
Mathematics Subject Classification: Primary: 35P30; Secondary: 49J35, 47J30, 46E35.
Citation: Maria Fărcăşeanu, Mihai Mihăilescu, Denisa Stancu-Dumitru. Perturbed fractional eigenvalue problems. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12): 6243-6255. doi: 10.3934/dcds.2017270
Nodal bubble-tower solutions to radial elliptic problems near criticality
1. Departamento de Ingeniería Matemática, Universidad de Chile, Casilla 170, Correo 3, Santiago, Chile
We consider the slightly subcritical problem
$ -\Delta u =|u|^{\frac{4}{N-2} -\varepsilon} u \quad \text{in } B $
where $B$ is the unit ball in $\mathbb{R}^N$, $N\ge 3$, under zero Dirichlet boundary conditions. We construct radial solutions with $k$ nodal regions which resemble a superposition of "bubbles" of different signs and blow-up orders, concentrating around the origin. A dual phenomenon is described for the slightly supercritical problem
$ -\Delta u =|u|^{\frac{4}{N-2} +\varepsilon} u \quad \text{in } \mathbb{R}^N \setminus B $
under Dirichlet and fast vanishing-at-infinity conditions.
Mathematics Subject Classification: Primary: 35J25, 35J20; Secondary: 35B3.
Citation: Andrés Contreras, Manuel del Pino. Nodal bubble-tower solutions to radial elliptic problems near criticality. Discrete & Continuous Dynamical Systems - A, 2006, 16 (3): 525-539. doi: 10.3934/dcds.2006.16.525
Here is one way to make the connection you are hinting at more rigorous. Throughout this answer, $\log$ denotes only the real logarithm (defined for positive real numbers only, so there is no ambiguity in how it is interpreted). Recall that $$|e^w| = e^{\operatorname{Re} w}, \qquad w \in \mathbb{C}.$$If there were an annulus $A$ centered at $0$ and an analytic function $g$ on $A$ with the property that$$\operatorname{Re} g(z) = \log |z|, \qquad z \in A,$$then the function $h$ on $A$ given by $h(z) = e^{g(z)}/z$ for all $z \in A$ would be analytic on $A$ and would satisfy$$|h(z)| = \frac{|e^{g(z)}|}{|z|} = \frac{e^{\operatorname{Re} g(z)}}{|z|} = \frac{e^{\log |z|}}{|z|} = 1, \qquad z \in A.$$It would follow from this that$$\overline{h(z)} = \frac{|h(z)|^2}{h(z)} = \frac{1}{h(z)},$$being a quotient of analytic functions, is also analytic on $A$. It easily follows from the Cauchy-Riemann equations that $h$ must be constant on $A$ (generally, whenever both a function and its complex conjugate are analytic on a connected open set, the function must be constant on that set). So there is $\omega \in \mathbb{C}$ with $|\omega| = 1$ and $$e^{g(z)} = \omega z, \qquad z \in A.$$Choosing a real number $s$ with $\omega = e^{is}$, it follows that the analytic function $L$ on $A$ given by $L(z) = g(z) - is$ satisfies$$e^{L(z)} = z, \qquad z \in A.$$To summarize: beginning with an analytic function having $\log |z|$ as its real part on $A$, we have constructed an analytic branch of the logarithm on $A$. So if we happen to know that no such thing exists, we have our contradiction, and are done. It is, of course, generally true that if $g$ and $k$ are analytic functions on a connected open set $G$ and $\operatorname{Re} g(z) = \operatorname{Re} k(z)$ holds for all $z \in G$, then there must be a real constant $C$ with $g(z) = k(z) + iC$. (Proof: consider the Cauchy-Riemann equations for the difference $g - k$ to deduce that $g - k$ must be constant, and clearly the constant must have zero real part.) So the intuition that an analytic function having $\log |z|$ as its real part must be "essentially the same thing" as a branch of the logarithm is correct. But in translating this intuition into a nonexistence proof, one must be careful, as you probably sensed when you were asking this question. (Working hastily and without careful thought, one might, for example, try to prove what you want by applying the result just mentioned with $G$ equal to the annulus, and one of the functions $g$ or $k$ equal to a branch of the logarithm--- conveniently forgetting for a moment that the result being appealed to concerns analytic functions on $G$, and there isn't an analytic logarithm on the annulus.) To get an actual proof along these lines, one must proceed more carefully: let $A$ be the annulus, and let $k(z)$ denote e.g. the principal branch of the logarithm. Then $k$ is analytic on the connected open set $G = A \setminus (-\infty,0]$. If $g$ is an analytic function on $A$ satisfying $\operatorname{Re} g(z) = \log |z|$ for all $z \in A$, the result of the previous paragraph implies that there is a real constant $C$ with the property that $g(z) = k(z) + iC$ holds for all $z \in G$ (not all $z \in A$). Hence $k(z) = g(z) - iC$ holds for all $z \in G$, and as $g$ is continuous on all of $A$, we deduce from this equation that for any $c$ in the nonempty set $A \cap (-\infty,0)$, the limit $\lim_{z \to c, z \in G} k(z)$ exists (and is $g(c) - iC$).
This contradicts the fact, obvious from the explicit formulas for $k$, that $k$ has a jump discontinuity at every point on the negative real axis.
Usually, the Stirling numbers of the first kind are defined as the coefficients of the rising factorial: $(*)\ \prod_{i=0}^{n-1}(x+i) = \sum_{i=0}^{n} S(n,i) x^i$. With this definition, a recursive relation for $S(n,i)$ can be derived, and it can be shown that it coincides with the recursive relation for the number of permutations of an $n$-set with $i$ cycles; since they also have the same initial conditions, the two quantities coincide. 1) Is there any possibility to do it the other way around, i.e., define $S(n,i)$ combinatorially and then show that $\prod_{i=0}^{n-1}(x+i) = \sum_{i=0}^{n} S(n,i) x^i$ holds for $x \in \mathbb{N}$ by some combinatorial argument, and thus that it is a polynomial identity? (For Stirling numbers of the second kind, it is possible: it can be shown combinatorially that $n^k = \sum_{i=0}^{k} \binom{n}{i} i! S_2(k,i)$, since $n^k$ counts functions from $[k]$ to $[n]$.) 2) Additionally: equating coefficients in $(*)$ shows that $S(n,i)$ is the elementary symmetric polynomial in $n$ variables of degree $n-i$ evaluated at $(0,1,\cdots,n-1)$. Is there a combinatorial interpretation of this?
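Not an answer, but a quick numerical sketch (in Python, with names of my own choosing) that checks identity $(*)$ for small $n$, building $S(n,i)$ from the cycle-counting recurrence $S(n,i)=S(n-1,i-1)+(n-1)S(n-1,i)$:

def stirling_first(n, i):
    # unsigned Stirling numbers of the first kind via the standard recurrence
    if n == 0:
        return 1 if i == 0 else 0
    if i == 0:
        return 0
    return stirling_first(n - 1, i - 1) + (n - 1) * stirling_first(n - 1, i)

def rising_factorial(x, n):
    prod = 1
    for k in range(n):
        prod *= (x + k)
    return prod

for n in range(1, 6):
    for x in range(1, 6):
        assert rising_factorial(x, n) == sum(stirling_first(n, i) * x**i for i in range(n + 1))
print("identity (*) verified for small n and x")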
I know that the spectral radius is $\rho(A) = \max_l |\lambda_l| = S_{\max}^2$ and that the Frobenius norm is $\|A\|_F = \sqrt{\operatorname{tr}(A^*A)} = \left(\sum_{k}S_k^2\right)^{1/2}$, which means I want to find the matrix $A$ for which the following is true: $$ \|A\|_F = \sqrt{\operatorname{tr}(A^*A)} = \left(\sum_{k}S_k^2\right)^{1/2} = S_{\max}^2 $$ So is the spectral radius equal to the Frobenius norm if $A$ is a square matrix whose largest eigenvalue is equal to $1$ in absolute value? (The $S_k$ are the singular values.)
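As a quick numerical illustration (my own example, not from the question): for a rank-one orthogonal projection $A = uu^*$ with $\|u\|=1$, the only nonzero eigenvalue and the only nonzero singular value both equal $1$, so $\rho(A) = \|A\|_F = 1$:

import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=4)
u /= np.linalg.norm(u)       # unit vector
A = np.outer(u, u)           # rank-one projection, eigenvalues {1, 0, 0, 0}

spectral_radius = max(abs(np.linalg.eigvals(A)))
frobenius = np.linalg.norm(A, 'fro')
print(spectral_radius, frobenius)  # both 1.0 (up to rounding)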
A simple illustration of the trapezoid rule for definite integration:$$ \int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right). $$ First, we define a simple function and sample it between 0 and 10 at 200 points

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return (x - 3) * (x - 5) * (x - 7) + 85

x = np.linspace(0, 10, 200)
y = f(x)

Choose a region to integrate over and take only a few points in that region

a, b = 1, 8  # the left and right boundaries
N = 5  # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)

Plot both the function and the area below it in the trapezoid approximation

plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)dx$",
         horizontalalignment='center', fontsize=20);

Compute the integral both at high accuracy and with the trapezoid approximation

from __future__ import print_function
from scipy.integrate import quad

integral, error = quad(f, a, b)
integral_trapezoid = sum((xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1])) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)

The integral is: 565.2499999999999 +/- 6.275535646693696e-12
The trapezoid approximation with 5 points is: 559.890625
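As an aside (not part of the original notebook), NumPy ships the composite trapezoid rule as a one-liner, which makes a handy cross-check of the manual sum:

print(np.trapz(yint, xint))  # 559.890625, matching the result above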
I'm reading through Lancaster & Blundell's Quantum Field Theory for the Gifted Amateur and have got to Chapter 17 on calculating propagators. In their equation 17.23 they derive the expression for the free Feynman propagator for a scalar field to be $$\Delta\left(x,y\right)=\int\frac{d^{4}p}{\left(2\pi\right)^{4}}e^{-ip\cdot\left(x-y\right)}\frac{i}{\left(p^{0}\right)^{2}-E_{\boldsymbol{p}}^{2}+i\epsilon}$$ where ##p^0=E## represents an energy that is not on the mass shell, so that in general ##p^{0}\ne\sqrt{E_{\boldsymbol{p}}^{2}+m^{2}}##. I'm able to follow their derivation (I think), but then in Exercise 17.4, they ask us to show that the Feynman propagator for the quantum simple harmonic oscillator with spring constant ##m\omega_{0}^{2}## is given by $$\tilde{G}\left(\omega\right)=\frac{i}{m\left(\omega^{2}-\omega_{0}^{2}+i\epsilon\right)}$$ It seems to me that the energy of the harmonic oscillator in its "one-particle" state is ##\omega_0##, and the general energy (off the mass shell) is given by ##\omega##, so that the position-space propagator would be given by $$G\left(x,y\right)=\int\frac{d^{4}p}{\left(2\pi\right)^{4}}e^{-ip\cdot\left(x-y\right)}\frac{i}{\omega^{2}-\omega_{0}^{2}+i\epsilon}$$ From there, we can read off the momentum-space Fourier component as $$\tilde{G}\left(\omega\right)=\frac{i}{\left(\omega^{2}-\omega_{0}^{2}+i\epsilon\right)}$$ I can't figure out where the extra factor of ##m## in the denominator comes from. Introducing the extra ##m## seems to mess up the units as well, since their general expression for the momentum-space propagator is $$\tilde{\Delta}\left(p\right)=\frac{i}{\left(p^{0}\right)^{2}-E_{\boldsymbol{p}}^{2}+i\epsilon}$$ I'm guessing I'm missing something simple (since pretty well all the exercises in the book aren't too complex once you understand the principles), but I just can't see it.
In this Notebook we explore the Lorenz system of differential equations:$$ \begin{aligned} \dot{x} & = \sigma(y-x) \\ \dot{y} & = \rho x - y - xz \\ \dot{z} & = -\beta z + xy \end{aligned} $$ This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters ($\sigma$, $\beta$, $\rho$) are varied. First, we import the needed things from IPython, NumPy, Matplotlib and SciPy.

%matplotlib inline

from IPython.html.widgets import interact, interactive
from IPython.display import clear_output, display, HTML

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation

We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation ($\sigma$, $\beta$, $\rho$), the numerical integration (N, max_time) and the visualization (angle).

def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
    fig = plt.figure()
    ax = fig.add_axes([0, 0, 1, 1], projection='3d')
    ax.axis('off')

    # prepare the axes limits
    ax.set_xlim((-25, 25))
    ax.set_ylim((-35, 35))
    ax.set_zlim((5, 55))

    def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
        """Compute the time-derivative of a Lorenz system."""
        x, y, z = x_y_z
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    # Choose random starting points, uniformly distributed from -15 to 15
    np.random.seed(1)
    x0 = -15 + 30 * np.random.random((N, 3))

    # Solve for the trajectories
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t) for x0i in x0])

    # choose a different color for each trajectory
    colors = plt.cm.jet(np.linspace(0, 1, N))

    for i in range(N):
        x, y, z = x_t[i,:,:].T
        lines = ax.plot(x, y, z, '-', c=colors[i])
        plt.setp(lines, linewidth=2)

    ax.view_init(30, angle)
    plt.show()

    return t, x_t

Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.

t, x_t = solve_lorenz(angle=0, N=10)

Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.

w = interactive(solve_lorenz, angle=(0.,360.), N=(0,50), sigma=(0.0,50.0), rho=(0.0,50.0))
display(w)

The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments:

t, x_t = w.result
w.kwargs

After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in $x$, $y$ and $z$.

xyz_avg = x_t.mean(axis=1)
xyz_avg.shape

Creating histograms of the average positions (across different trajectories) shows that on average the trajectories swirl about the attractors.

plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$')

plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$')
Young's modulus is defined as $Y = \frac{\sigma}{\epsilon}$, where $\sigma$ is the stress, defined as $\sigma = F/A$, and $\epsilon$ is the strain, defined as $\epsilon = \frac{\Delta L}{L_0}$. In this case we require the stress, thus re-arranging the first equation gives $\sigma = \epsilon Y$. After plugging in the definition of $\epsilon$ we then arrive at $\sigma = \frac{\Delta L}{L_0}Y$. Now, as you already worked out, $\Delta L = L_0\alpha\Delta T$. Therefore, after plugging this in, we arrive at the required result: $\sigma = \frac{L_0\alpha\Delta T}{L_0}Y = \alpha\Delta T Y$
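To put a number on this (my own illustrative values, not from the question): for steel, $\alpha \approx 1.2\times10^{-5}\ \mathrm{K^{-1}}$ and $Y \approx 200\ \mathrm{GPa}$, so a $50\ \mathrm{K}$ temperature rise in a fully constrained bar gives

alpha = 1.2e-5   # 1/K, thermal expansion coefficient of steel (typical value)
Y = 200e9        # Pa, Young's modulus of steel (typical value)
dT = 50.0        # K, temperature change

sigma = alpha * dT * Y
print(f"thermal stress = {sigma/1e6:.0f} MPa")  # 120 MPa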
Hydrogenic Wavefunctions: A hydrogenic wavefunction, $\psi_{nlm}(\boldsymbol{r})$, can be written as the product$$\psi_{nlm}(r,\theta,\phi) = R_{nl}(r) Y_{lm}(\theta,\phi)$$where: $R_{nl}$ is a radial wavefunction $Y_{lm}$ is a spherical harmonic $n$ is the principal quantum number, $n = 1,2,3,\dots$ $l$ is the angular quantum number, $l = 0\,(\mathrm s), 1\,(\mathrm p), 2\,(\mathrm d),\dots,n-1$ $m$ is the magnetic quantum number, $m = -l, -l+1,\dots,l-1,l$ Probability/Electron Density The probability density, $\rho$, is equal to the square modulus of the wavefunction:$$\begin{align}\rho_{nlm}(\boldsymbol{r}) &= |\psi_{nlm}(\boldsymbol{r})|^2 \\\rho_{nlm}(r,\theta,\phi)&= |R_{nl}(r)|^2 \cdot |Y_{lm}(\theta,\phi)|^2\end{align}$$ The spherical harmonics $Y_{lm}$ have angular nodes through the nucleus for all values of $l$ and $m$ except $l=m=0$, so we can say that for all $l>0$ orbitals, i.e. all orbitals except s-orbitals, the nucleus has zero probability density. S-orbitals Consider s-orbitals; in atomic units, and substituting in the solution of the radial wavefunctions in terms of generalized Laguerre polynomials $L^\alpha_k$:$$\begin{align}\rho_{n00}(\boldsymbol{r})&= |R_{n0}(r)|^2 \cdot |Y_{00}(\theta,\phi)|^2\\&= |R_{n0}(r)|^2 \frac{1}{4\pi} \\\rho_{n00}(r)&= \frac{Z^3}{\pi n^5} \exp\left(-Zr/n\right) L^1_{n-1}\left(2Zr/n\right) \\&= A(n) \exp\left(-Zr/n\right) L^1_{n-1}\left(2Zr/n\right)\end{align}$$ For all values of $n$, $L^1_{n-1}\left(0\right) = n$, so we know that the nucleus never has zero probability density for s-orbitals; it has probability density$$\rho_{n00}(r=0) = \frac{Z^3}{\pi n^4}$$ Stationary Points But we can differentiate with respect to $r$ to find stationary points in the radial probability density. $$\begin{align}\frac{d}{dr}\rho_{n00}(r)&= A(n) \frac{d}{dr} \exp\left(-Zr/n\right) L^1_{n-1}\left(2Zr/n\right) \\&= A(n) \exp\left(-Zr/n\right)\left[ -\frac{Z}{n} L^1_{n-1}\left(2Zr/n\right) + \frac{d}{dr}L^1_{n-1}\left(2Zr/n\right) \right] \\\text{if}\;n=1: \frac{d}{dr}\rho_{100}(r)&= -\frac{Z^4}{\pi n^6} \exp\left(-Zr/n\right) \\&< 0 \;\forall\;r\end{align}$$Hence the probability density is strictly decreasing away from the nucleus. So, as you previously assumed, the maximum of the probability density for the 1s orbital is at the nucleus. $n>1$: $$\begin{align}\text{if}\;n>1 : \frac{d}{dr}\rho_{n00}(r)&= A(n) \exp\left(-Zr/n\right)\left[ -\frac{Z}{n} L^1_{n-1}\left(2Zr/n\right) + \frac{d}{dr}L^1_{n-1}\left(2Zr/n\right) \right] \\&= A(n) \exp\left(-Zr/n\right)\left[ -\frac{Z}{n} L^1_{n-1}\left(2Zr/n\right) - L^2_{n-2}\left(2Zr/n\right) \right]\end{align}$$ Setting the expression above equal to zero and solving for $r$ gives the positions of the radial nodes and the maxima in between for a given value of $n$. I am not aware of a general method/expression in $n$. As you asked about the nucleus, we can also consider the point $r=0$: $$\begin{align}\text{if}\;n>1 : \frac{d}{dr}\rho_{n00}(r)&= - \frac{Z^3}{\pi n^5} \left[\frac{Z}{n} L^1_{n-1}\left(0\right) + L^2_{n-2}\left(0\right) \right] \\&= - \frac{Z^3}{\pi n^5} \left[\frac{Z}{n} n +\frac{n^2-n}{2}\right] \\&= - \frac{Z^4}{\pi n^5} - \frac{Z^3 \left(n-1\right)}{2\pi n^4} < 0\end{align}$$ Conclusion/TL;DR Hence the nucleus is always a local maximum of the probability density for an s-orbital.
You would expect the exponential decay in the radial wavefunction to dominate over the Laguerre polynomial term, so you would expect local maxima further out to be smaller than the maximum at the nucleus, and hence that the point of highest probability density is at the nucleus. This makes physical sense: classically, the negatively charged electron wants to be as close as possible to the positively charged nucleus to minimise its potential energy, but the quantum mechanical nature of the electron prevents it from tumbling directly into the nucleus.
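A quick numerical sketch of this conclusion (my own check, assuming SciPy is available; it uses the standard textbook normalization $R_{n0}(r)=\sqrt{(2Z/n)^3\,(n-1)!/(2n\,n!)}\;e^{-Zr/n}L^1_{n-1}(2Zr/n)$ in atomic units, so the polynomial enters squared):

import numpy as np
from scipy.special import eval_genlaguerre, factorial

def rho_ns(r, n, Z=1.0):
    # probability density |psi_n00|^2 = R_n0(r)^2 / (4*pi), atomic units
    norm = np.sqrt((2 * Z / n)**3 * factorial(n - 1) / (2 * n * factorial(n)))
    R = norm * np.exp(-Z * r / n) * eval_genlaguerre(n - 1, 1, 2 * Z * r / n)
    return R**2 / (4 * np.pi)

r = np.linspace(0, 0.4, 5)
for n in (1, 2, 3):
    print(n, rho_ns(r, n))  # each row decreases away from r = 0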
I specify a function in terms of an integral and then try to evaluate it with Simplify. However, the answer is not really simplified to what it should be.

assumptions := {l > 0, Element[NN, Integers], NN > 0, b > 0, a > 0, l > b, a > l, α > 0, β > 0};
Φ = Function[x, If[x < l, (NN*π*α*b^2*(l^2 ArcCos[x/l] - x Sqrt[l^2 - x^2]))/(a^2*β), 0]];
Px = Function[x, If[0 <= x <= a, 1/a, 0]];
FΦ = Function[φ, Simplify[Integrate[If[Φ[x] <= φ, 1, 0] Px[x], {x, 0, a}], assumptions]];
Simplify[FΦ[0], assumptions]

However, it is easy to see that it should simplify: first to $\frac{1}{a} \int_0^a \text{If}\left[\text{If}\left[x<l,\frac{\text{NN} \pi \alpha b^2 \left(l^2 \text{ArcCos}\left[\frac{x}{l}\right]-x \sqrt{l^2-x^2}\right)}{a^2 \beta },0\right]\leq 0,1,0\right] \, dx$, then to $\frac{1}{a} \int_0^l \text{If}\left[\frac{\text{NN} \pi \alpha b^2 \left(l^2 \text{ArcCos}\left[\frac{x}{l}\right]-x \sqrt{l^2-x^2}\right)}{a^2 \beta }\leq 0,1,0\right] \, dx + \frac{1}{a} \int_l^a \text{If}\left[ 0 \leq 0,1,0\right] \, dx$, and finally to $ \frac{1}{a} \int_0^l 0 \, dx + \frac{1}{a} \int_l^a 1 \, dx$ $=\frac{a-l}{a}$. How can I get Mathematica to simplify it correctly?
Let $c_{q}(n)=\displaystyle \sum_{\substack{a=1,\dots,q\\ (a,q)=1}} e\left(\frac{an}{q}\right)$ be the Ramanujan sum and consider the following series: $$\displaystyle\sum_{q>r} \frac{\mu^{2}(q)c_{q}(n)}{\varphi^{2}(q)}$$ Is it possible to find an upper bound explicit in $n$ and $r$? I want to find a bound of the form $\frac{n}{\varphi(n)}\frac{C}{r}$, with $C>0$. I tried to do this by myself, but in all my attempts I was reduced to finding an explicit bound for a sum that is convergent but not absolutely convergent, and so doesn't have an Euler product (which in fact diverges). Thanks in advance for any suggestion.
Ex.13.5 Q5 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question An oil funnel made of tin sheet consists of a \(10\,\rm{cm}\) long cylindrical portion attached to a frustum of a cone. If the total height is \(22 \,\rm{cm,}\) the diameter of the cylindrical portion is \(8\,\rm{cm}\) and the diameter of the top of the funnel is \(18\,\rm{cm,}\) find the area of the tin sheet required to make the funnel (see Fig. 13.25). Text Solution What is Known? Length of the cylindrical part is \(10\,\rm{cm}\) and its diameter is \(8 \,\rm{cm}\). Diameter of the top of the funnel is \(18\, \rm{cm}\) and the total height of the funnel is \(22\,\rm{cm.}\) What is Unknown? Area of the tin sheet required to make the funnel. Reasoning: The funnel is open at the top and at the bottom, i.e. at the junction of the frustum of the cone and the cylindrical part. Therefore, Area of tin sheet required to make the funnel \(=\) CSA of frustum of cone \(+\) CSA of the cylinder. We will find the CSA of the frustum by using the formulae: CSA of frustum of a cone \( = \pi \left( {{r_1} + {r_2}} \right)l\) Slant height, \(l = \sqrt {{h^2} + {{\left( {{r_1} - {r_2}} \right)}^2}} \) where \(r_1\), \(r_2\), \(h\) and \(l\) are the radii, height and slant height of the frustum of the cone respectively. CSA of the cylinder \( = 2\pi rh\), where \(r\) and \(h\) are the radius and height of the cylinder respectively. Steps: Height of the funnel, \(H = 22cm\) Height of cylindrical part, \(h = 10cm\) Height of frustum of cone, \({h_1} = 22cm - 10cm = 12cm\) Radius of top part of frustum of cone, \(\begin{align}{r_1} = \frac{{18cm}}{2} = 9cm\end{align}\) Radius of lower part of frustum of cone, \(\begin{align}{r_2} = \frac{{8cm}}{2} = 4cm\end{align}\) Radius of cylindrical part, \({r_2} = 4cm\) Slant height of frustum, \(\begin{align}l = \sqrt {{h_1}^2 + {{\left( {{r_1} - {r_2}} \right)}^2}}\end{align} \) \[\begin{align}&= \sqrt {{{\left( {12cm} \right)}^2} + {{\left( {9cm - 4cm} \right)}^2}} \\&= \sqrt {144c{m^2} + 25c{m^2}} \\&= \sqrt {169c{m^2}} \\&= 13cm\end{align}\] Area of tin sheet required to make the funnel \(=\) CSA of frustum of cone \(+\) CSA of the cylinder \[\begin{align}&= \pi \left( {{r_1} + {r_2}} \right)l + 2\pi {r_2}h\\&= \pi \left[ {\left( {{r_1} + {r_2}} \right)l + 2{r_2}h} \right]\\&= \frac{{22}}{7}\left[ {\left( {9cm + 4cm} \right) \times 13cm + 2 \times 4cm \times 10cm} \right]\\&= \frac{{22}}{7}\left[ {169c{m^2} + 80c{m^2}} \right]\\&= \frac{{22}}{7} \times 249c{m^2}\\&= \frac{{5478}}{7}c{m^2}\\&= 782\frac{4}{7}c{m^2}\end{align}\]
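A quick computational cross-check of the arithmetic (a sketch of mine, using exact fractions with the same \(\pi \approx 22/7\) convention as the solution):

from fractions import Fraction
import math

r1, r2 = 9, 4              # cm, radii of the frustum (top and bottom)
h_frustum, h_cyl = 12, 10  # cm, heights of frustum and cylinder
l = math.hypot(h_frustum, r1 - r2)   # slant height = 13.0 cm

pi = Fraction(22, 7)
area = pi * ((r1 + r2) * int(l) + 2 * r2 * h_cyl)
print(area, "=", float(area), "cm^2")  # 5478/7 ≈ 782.57 cm^2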
=====Separation algebras=====

Abbreviation: **SepAlg**

====Definition====
A \emph{separation algebra} is a [[generalized separation algebra]] such that

$\cdot$ is \emph{commutative}: $x\cdot y = y\cdot x$.

I.e., a separation algebra is a cancellative commutative partial monoid.

==Morphisms==
Let $\mathbf{A}$ and $\mathbf{B}$ be cancellative partial monoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism:
$h(e)=e$ and
if $x\cdot y\ne *$ then $h(x \cdot y)=h(x) \cdot h(y)$.

====Examples====
Example 1:

====Basic results====

====Properties====

^[[Classtype]] |first-order |
^[[Equational theory]] | |
^[[Quasiequational theory]] | |
^[[First-order theory]] | |
^[[Locally finite]] | |
^[[Residual size]] | |
^[[Congruence distributive]] | |
^[[Congruence modular]] | |
^[[Congruence $n$-permutable]] | |
^[[Congruence regular]] | |
^[[Congruence uniform]] | |
^[[Congruence extension property]] | |
^[[Definable principal congruences]] | |
^[[Equationally def. pr. cong.]] | |
^[[Amalgamation property]] | |
^[[Strong amalgamation property]] | |
^[[Epimorphisms are surjective]] | |

====Finite members====

$\begin{array}{lr}
f(1)= &1\\
f(2)= &2\\
f(3)= &3\\
f(4)= &8\\
f(5)= &13\\
f(6)= &39\\
f(7)= &120\\
f(8)= &507\\
f(9)= &\\
f(10)= &\\
\end{array}$

====Subclasses====
[[Generalized effect algebras]]

[[Generalized pseudo-effect algebras]]

====Superclasses====
[[Generalized separation algebra]]

====References====
Strong Dirichlet boundary conditions are imposed by providing a list of DirichletBC objects. The class documentation provides the syntax; this document explains the mathematical formulation of the boundary conditions in Firedrake, and their implementation. To understand how Firedrake applies strong (Dirichlet) boundary conditions, it is necessary to write the variational problem to be solved in residual form: find \(u \in V\) such that: \[F(u; v) = 0 \quad \forall v \in V.\] This is the natural form of a nonlinear problem. A linear problem is frequently written: find \(u \in V\) such that: \[a(u, v) = L(v) \quad \forall v \in V.\] However, this form can trivially be rewritten in residual form by defining: \[F(u; v) = a(u, v) - L(v).\] In the general case, \(F\) will always be linear in \(v\) but may be nonlinear in \(u\). When we impose a strong (Dirichlet, essential) boundary condition on \(u\), we are substituting the constraint: \[u = g \quad \text{on } \Gamma_D\] for the original equation on \(\Gamma_D\), where \(\Gamma_D\) is some subset of the domain boundary. To impose this constraint, we first split the function space \(V\): \[V = V_0 \oplus V_\Gamma,\] where \(V_\Gamma\) is the space spanned by those functions in the basis of \(V\) which are non-zero on \(\Gamma_D\), and \(V_0\) is the space spanned by the remaining basis functions (i.e. those basis functions which vanish on \(\Gamma_D\)). In Firedrake we always have a nodal basis for \(V\), \(\phi_V = \{\phi_i\}\), and we will write \(\phi^0\) and \(\phi^\Gamma\) for the subsets of that basis which span \(V_0\) and \(V_\Gamma\) respectively. We can similarly write \(v\in V\) as \(v_0+v_\Gamma\) and use the linearity of \(F\) in \(v\): \[F(u; v) = F(u; v_0) + F(u; v_\Gamma).\] If we impose a Dirichlet condition over \(\Gamma_D\) then we no longer impose the constraint \(F(u; v_\Gamma)=0\) for any \(v_\Gamma\in V_\Gamma\). Instead, we need to impose a term which is zero when \(u\) satisfies the boundary conditions, and non-zero otherwise. So we define: \[F_\Gamma(u; \phi_i) = u_i - g_i \quad \forall \phi_i \in \phi^\Gamma,\] where \(g_i\) indicates the evaluation of \(g(x)\) at the node associated with \(\phi_i\). Note that the stipulation that \(F_\Gamma(u; v)\) must be linear in \(v\) is sufficient to extend the definition to any \(v\in V_\Gamma\). This means that the full statement of the problem in residual form becomes: find \(u\in V\) such that: \[\hat F(u; v) = F(u; v_0) + F_\Gamma(u; v_\Gamma) = 0 \quad \forall v = v_0 + v_\Gamma \in V.\] The system of equations will be solved by a gradient-based nonlinear solver, of which a simple and illustrative example is a Newton solver. Firedrake applies this solution strategy to linear equations too, although in that case only one iteration of the nonlinear solver will ever be required or executed. We write \(u = u_i\phi_i\) as the current iteration of the solution and write \(\mathrm{U}\) for the vector whose components are the coefficients \(u_i\). Similarly, we write \(u^*\) for the next iterate and \(\mathrm{U}^*\) for the vector of its coefficients. Then a single step of Newton is given by: \[\mathrm{U}^* = \mathrm{U} - J^{-1}\mathrm{F}(u),\] where \(\mathrm{F}(u)_i = \hat F(u; \phi_i)\) and \(J\) is the Jacobian matrix defined by the Gâteaux derivative of \(F\): \[J(u; v, \tilde u) = \lim_{\varepsilon\to 0}\frac{\hat F(u + \varepsilon \tilde u; v) - \hat F(u; v)}{\varepsilon}.\] The actual Jacobian matrix is given by: \[J_{ij} = J(u; \phi_i, \phi_j),\] where \(\phi_i\), \(\phi_j\) are the ith and jth basis functions of \(V\). Our definition of the modified residual \(\hat F\) produces some interesting results for the boundary condition rows of \(J\): \[J_{ij} = \delta_{ij} \quad \forall \phi_i \in \phi^\Gamma.\] In other words, the rows of \(J\) corresponding to the boundary condition nodes are replaced by the corresponding rows of the identity matrix. Note that this does not depend on the value that the boundary condition takes, only on the set of nodes to which it applies.
This means that if, as in Newton's method, we are solving the system: \[J\,\delta\mathrm{U} = -\mathrm{F}(u), \qquad \mathrm{U}^* = \mathrm{U} + \delta\mathrm{U},\] then we can immediately write that part of the solution corresponding to the boundary condition rows: \[\delta\mathrm{U}_i = g_i - u_i \quad \forall\, \phi_i \in \phi^\Gamma.\] Based on this, define: \[\hat{\mathrm{U}}^\Gamma_i = \begin{cases} g_i - u_i & \phi_i \in \phi^\Gamma\\ 0 & \text{otherwise.}\end{cases}\] Next, let's consider a 4-way decomposition of \(J\). Define: \[\left(J^{00}\right)_{ij} = \begin{cases} J_{ij} & \phi_i, \phi_j \in \phi^0\\ 0 & \text{otherwise,}\end{cases} \qquad \left(J^{\Gamma\Gamma}\right)_{ij} = \begin{cases} J_{ij} & \phi_i, \phi_j \in \phi^\Gamma\\ 0 & \text{otherwise,}\end{cases}\] and similarly for the mixed blocks \(J^{0\Gamma}\) and \(J^{\Gamma 0}\). Clearly we may write: \[J = J^{00} + J^{0\Gamma} + J^{\Gamma 0} + J^{\Gamma\Gamma}.\] As an illustration, assume in some example that the boundary nodes are numbered first in the global system, followed by the remaining nodes. Then (disregarding parts of the matrices which are zero), we can write: \[J = \begin{pmatrix} J^{\Gamma\Gamma} & J^{\Gamma 0}\\ J^{0\Gamma} & J^{00} \end{pmatrix} = \begin{pmatrix} \mathrm{I} & 0\\ J^{0\Gamma} & J^{00} \end{pmatrix}.\] Note again that this is merely illustrative: the decomposition of \(J\) works in exactly the same way for any numbering of the nodes. Using forward substitution, this enables us to rewrite the linear system as: \[\left(J^{00} + J^{\Gamma\Gamma}\right)\delta\mathrm{U} = -\mathrm{F}(u) - J^{0\Gamma}\hat{\mathrm{U}}^\Gamma.\] We can now make two observations. First, the matrix \(J^{00} + J^{\Gamma\Gamma}\) preserves the symmetry of \(J\). That is to say, if \(J\) has any of the following properties, then \(J^{00} + J^{\Gamma\Gamma}\) will too:
symmetry
positive (semi-)definiteness
skew-symmetry
diagonal dominance
Second, if the initial value of \(u\) passed into the Newton iteration satisfies the Dirichlet boundary conditions, then \(\hat{\mathrm{U}}^\Gamma=0\) at every stage of the algorithm. Hence the system to be solved at each iteration is: \[\left(J^{00} + J^{\Gamma\Gamma}\right)\delta\mathrm{U} = -\mathrm{F}(u).\] A similar argument applies to other nonlinear solution algorithms such as line search Newton. Both linear and nonlinear PDEs are solved in residual form in Firedrake using the PETSc SNES interface. In the case of linear systems, a single step of Newton is employed. In the following we will use F for the residual Form and J for the Jacobian Form. In both cases these forms do not include the Dirichlet boundary conditions. Additionally u will be the solution Function. Strong boundary conditions are applied as follows: Before the solver starts, the initial value u provided by the user is modified at the boundary condition nodes to satisfy the boundary conditions. Each time the solver assembles the Jacobian matrix, the following happens. J is assembled using modified indirection maps in which the boundary condition node indices have been replaced by negative values. PETSc interprets these negative indices as an instruction to drop the corresponding entry. The result is the matrix \(J^{00}\). The boundary node row diagonal entries of J are set to 1. This produces the matrix \(J^{00} + J^{\Gamma\Gamma}\). Each time the solver assembles the residual, the following happens. F is assembled using unmodified indirection maps, taking no account of the boundary conditions. This results in an assembled residual which is correct on the non-boundary condition nodes but contains spurious values in the boundary condition entries. The entries of F corresponding to boundary condition nodes are set to zero. Linear systems (i.e. systems in which the matrix is pre-assembled) are solved with boundary conditions as follows: When the user calls assemble(a) to assemble the bilinear form a, no actual assembly takes place. Instead, Firedrake returns a Matrix object that records the fact that it is intended to be assembled from a. At the solve() call, Firedrake determines which boundary conditions to apply in the following priority order: first, boundary conditions supplied to the solve() call. If no boundary conditions are supplied to the solve() call, then any boundary conditions applied when assemble() was called on A are used, as are any boundary conditions subsequently added with apply(). In the linear system case, the Jacobian Form is a. Using this and the boundary conditions, Firedrake assembles and solves: \[\left(A^{00} + A^{\Gamma\Gamma}\right)\mathrm{U} = \hat{\mathrm{b}},\] where \(\hat{\mathrm{b}}\) carries the boundary values \(g_i\) in the boundary condition entries and the forward-substituted right-hand side \(\mathrm{b}^0 - A^{0\Gamma}\hat{\mathrm{b}}^\Gamma\) in the remaining entries.
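In practice, all of this is hidden behind a few lines of user code. As a minimal sketch (mine, not from this page; it assumes a current Firedrake installation and follows the public API for DirichletBC and solve):

from firedrake import *

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = inner(grad(u), grad(v)) * dx   # bilinear form
L = f * v * dx                     # linear form

# g = 0 on the whole boundary; this is the DirichletBC object the text describes
bc = DirichletBC(V, Constant(0.0), "on_boundary")

uh = Function(V)
solve(a == L, uh, bcs=[bc])        # boundary rows handled as explained above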
You not only can, but also must treat symbols for units by the ordinary rules of algebra, since unit symbols are mathematical entities and not abbreviations. The value of a quantity is expressed as the product of a number and a unit. That number is called the numerical value of the quantity expressed in this unit. This relation may be expressed in the form $$Q = \left\{ Q \right\} \cdot \left[ Q \right]$$ where $Q$ is the symbol for the quantity, $\left[ Q \right]$ is the symbol for the unit, and $\left\{ Q \right\}$ is the symbol for the numerical value of the quantity $Q$ expressed in the unit $\left[ Q \right]$. For example, the mass of a sample is $$m = 100\ \mathrm g$$ Here, $m$ is the symbol for the quantity mass, $\mathrm g$ is the symbol for the unit gram (a unit of mass), and $100$ is the numerical value of the mass expressed in grams. Thus, the value of the mass is $100\ \mathrm g$. It is important to distinguish between the quantity $Q$ itself and the numerical value $\left\{ Q \right\}$ of the quantity expressed in a particular unit $\left[ Q \right]$. The value of a particular quantity $Q$ is independent of the choice of unit $\left[ Q \right]$, although the numerical value $\left\{ Q \right\}$ will be different for different units. For example, changing the unit for the mass in the previous example from the gram to the kilogram, which is $10^3$ times the gram, leads to a numerical value which is $10^{-3}$ the numerical value of the mass expressed in grams, whereas the value of the mass stays the same. $$m = 100\ \mathrm g = 0.100\ \mathrm{kg}$$ Since symbols for units are mathematical entities, both the numerical value and the unit may be treated by the ordinary rules of algebra. For example, the equation $m = 100\ \mathrm g$ may equally be written $$m/\mathrm g = 100$$ It is often convenient to label the axes of a graph in this way, so that the tick marks are labelled only with numbers. The quotient of a quantity and a unit may also be used in this way for the heading of a column in a table, so that the entries in the table are all simply numbers. Performing mathematical operations on quantities is called quantity calculus. Quantities are multiplied and divided by one another according to the rules of algebra, resulting in new quantities. The quotient of two quantities, $Q_1$ and $Q_2$, satisfies the relation$$\begin{align}\frac{Q_1}{Q_2} &= \frac{ \left\{ Q_1 \right\} \cdot \left[ Q_1 \right] }{ \left\{ Q_2 \right\} \cdot \left[ Q_2 \right] } \\[6pt]&= \frac{ \left\{ Q_1 \right\} }{ \left\{ Q_2 \right\} } \cdot \frac{ \left[ Q_1 \right] }{ \left[ Q_2 \right] }\end{align}$$Thus, the quotient $\left\{ Q_1 \right\}/\left\{ Q_2 \right\}$ is the numerical value $\left\{ Q_1/Q_2 \right\}$ of the quantity $Q_1/Q_2$, and the quotient $\left[ Q_1 \right]/\left[ Q_2 \right]$ is the unit $\left[ Q_1/Q_2 \right]$ of the quantity $Q_1/Q_2$.
For example, assuming a volume of $V = 0.127\ \mathrm{l}$, the density $\rho$ of the above-mentioned sample is$$\begin{align}\rho &= \frac{m}{V} \\[6pt]&= \frac{ 0.100\ \mathrm{kg} }{ 0.127\ \mathrm{l} } \\[6pt]&= \frac{ 0.100 }{ 0.127 } \cdot \frac{ \mathrm{kg} }{ \mathrm{l} } \\[6pt]&= 0.79\ \mathrm{kg/l}\end{align}$$ Similarly, the product of two quantities, $Q_1$ and $Q_2$, satisfies the relation$$\begin{align}Q_1 \cdot Q_2 &= \left( \left\{ Q_1 \right\} \cdot \left[ Q_1 \right] \right) \cdot \left( \left\{ Q_2 \right\} \cdot \left[ Q_2 \right] \right) \\[6pt]&= \left\{ Q_1 \right\}\left\{ Q_2 \right\} \cdot \left[ Q_1 \right] \left[ Q_2 \right]\end{align}$$ Thus, the product $\left\{ Q_1 \right\}\left\{ Q_2 \right\}$ is the numerical value $\left\{ Q_1Q_2 \right\}$ of the quantity $Q_1Q_2$, and the product $\left[ Q_1 \right]\left[ Q_2 \right]$ is the unit $\left[ Q_1Q_2 \right]$ of the quantity $Q_1Q_2$. For example, considering the standard acceleration of free fall $g_\mathrm n = 9.80665\ \mathrm{m/s^2}$, the weight $F_\mathrm g$ of the above-mentioned sample is$$\begin{align}F_\mathrm g &= m \cdot g_\mathrm n \\[6pt]&= 0.100\ \mathrm{kg} \times 9.80665\ \frac{\mathrm m}{\mathrm{s^2}} \\[6pt]&= 0.100 \times 9.80665 \times \mathrm{kg} \cdot \frac{\mathrm m}{\mathrm{s^2}} \\[6pt]&= 0.98\ \frac{\mathrm {kg\ m}}{\mathrm{s^2}} \\[6pt]&= 0.98\ \mathrm{N}\end{align}$$ In forming products and quotients of unit symbols, the normal rules of algebraic multiplication or division apply. For example, the expansion work $W$ at constant pressure $p = 100\,000\ \mathrm{Pa} = 100\,000\ \mathrm{kg\ m^{-1}\ s^{-2}}$ associated with a volume change of $\Delta V = 0.5\ \mathrm{m^3}$ is $$\begin{align}W &= p \cdot \Delta V \\[6pt]&= 100\,000\ \frac{\mathrm{kg}}{\mathrm{m\ s^{2}}} \times 0.5\ \mathrm{m^3}\\[6pt]&= 50\,000\ \frac{\mathrm{kg}}{\mathrm{m\ s^{2}}}\cdot\mathrm{m^3}\\[6pt]&= 50\,000\ \frac{\mathrm{kg\ m^2}}{\mathrm{s^{2}}}\\[6pt]&= 50\,000\ \mathrm J\end{align}$$ Two or more quantities cannot be added or subtracted unless they belong to the same kind. The expression for a sum or difference shall either be written as the sum or difference of expressions for the quantities$$l=12\ \mathrm m-7\ \mathrm m$$or parentheses shall be used to combine the numerical values, placing the common unit symbol after the complete numerical value$$l=\left(12-7\right)\ \mathrm m$$but it is not permissible to write$$l=12-7\ \mathrm m\quad\color{red}{\small\text{(wrong!)}}$$For the same reason, quantities on each side of an equal sign in an equation must be of the same kind$$\begin{align}m_\text{total} &= m_1+m_2 \\1.8\ \mathrm{kg} &= 1.5\ \mathrm{kg}+0.3\ \mathrm{kg}\end{align}$$However, quantities of the same kind do not necessarily have the same unit.$$250\ \mathrm g = 0.250\ \mathrm{kg}$$$$10\ \mathrm{m/s} = 36\ \mathrm{km/h}$$In any case, quantities on each side of an equal sign in an equation must not be of different kinds. $$1\ \mathrm{mol} = 22.414\ \mathrm l\quad\color{red}{\small\text{(wrong!)}}$$
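The same bookkeeping can be delegated to software. The sketch below uses the third-party Python library pint (an assumption of this example, not something the text above prescribes) to reproduce the density and weight calculations:

```python
# Sketch using the pint library (assumed installed via `pip install pint`)
# to mirror the quantity-calculus examples above.
import pint

ureg = pint.UnitRegistry()

m = 100 * ureg.gram              # m = 100 g
V = 0.127 * ureg.liter           # V = 0.127 l

rho = (m / V).to("kg/l")         # numerical values and units combine
print(rho)                       # ~0.79 kilogram / liter

g_n = 9.80665 * ureg("m/s^2")    # standard acceleration of free fall
F_g = (m * g_n).to("newton")     # weight
print(F_g)                       # ~0.98 newton
```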
I probably have an answer to this problem, but I'm still not sure that it works. It is not important to "find" the two paths; the only important thing is to "know" whether they exist or not. I don't think that this is an NP-complete problem. So, take the adjacency matrix $\textbf A$. We can assume it is filled with 0/1 values (0 = no edge; 1 = there is an edge). Let's use the following algebra with 3 values (0, 1, 2), where everything works as usual except: $2+\text{<something>} = 2$; $2\cdot\text{<whatever greater than 0>} = 2$. So, if there are two paths of the same length from $i$ to $j$, I expect that there is a value $p$ such that $(\textbf A^p)_{i,j} = 2$. Let $n$ be the number of vertices in the graph (or, let's say, $\textbf A$ has dimension $n\times n$). I don't know the value of $p$, but if I iterate $\textbf A$ by multiplying it with itself at most $n^2$ times, I should find the answer (so, $p<n^2$): I check $\textbf A$, then I check $\textbf A^2$, then I check $\textbf A^3$, and so on. Here is my argument: if the two paths are simple paths, it works; I have to iterate at most $n$ times. If there is at least one nested cycle, or there is a path with two cycles, then I have to find the least common multiple (LCM); $n^2$ is certainly a large enough bound, so in fewer than $n^2$ iterations I should find them. If the two paths are two distinct paths, each with one cycle, then it's more or less like finding a solution to the equation $\alpha + \beta m = \gamma + \delta k$, where $m$ and $k$ are the lengths of these two distinct cycles. Matrix multiplication $\textbf A^q$, as defined above, answers the question "is there a path from $i$ to $j$ whose length is $q$?" So, if $(\textbf A^q)_{i,j}$ is greater than $1$, it means that there is more than one path of length $q$ leading from $i$ to $j$. By iterating the matrix $n^2$ times we pass through all the possible combinations of $\delta$ and $\beta$. Indeed, $LCM(a,b)$ is defined as $(a\cdot b)/GCD(a,b)$, and no cycle can be longer than $n$. I stop iterating once I find $(\textbf A^p)_{i,j} = 2$. Am I wrong?
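For what it's worth, here is a small sketch of the scheme described above (my own code and naming, not from any standard library): matrix powers over the saturating semiring $\{0,1,2\}$, where 2 means "two or more paths".

```python
# Saturating semiring operations: anything >= 2 is capped at 2.
def sat_add(a, b):
    return min(a + b, 2)

def sat_mul(a, b):
    return min(a * b, 2)

def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s = sat_add(s, sat_mul(A[i][k], B[k][j]))
            C[i][j] = s
    return C

def two_paths_same_length(A, i, j):
    """Check A^1 .. A^(n^2) for a saturated (i, j) entry."""
    n = len(A)
    P = A
    for _ in range(n * n):  # p < n^2, as argued in the question
        if P[i][j] == 2:
            return True
        P = mat_mul(P, A)
    return False
```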
Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
De Bruijn-Newman constant

For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as [math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math] or [math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math] In the notation of [KKL2009], one has [math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math] De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:

- Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
- Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
- Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].

[math]t=0[/math]

When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives [math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math] for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.

[math]t\gt0[/math]

For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3].
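Returning to the defining integral, one can evaluate [math]H_t[/math] on the real axis numerically by truncating the sum for [math]\Phi[/math] and the integral. The sketch below is my own illustration, not Polymath15 production code; the truncation parameters nmax and umax are ad hoc choices justified by the super-exponential decay of [math]\Phi[/math].

```python
# Rough numerical sketch of H_t(x) for real x.
import numpy as np
from scipy.integrate import quad

def Phi(u, nmax=10):
    n = np.arange(1, nmax + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, x, umax=6.0):
    val, _ = quad(lambda u: np.exp(t * u * u) * Phi(u) * np.cos(x * u),
                  0.0, umax, limit=200)
    return val

# Sanity check: H_0 should vanish near twice the first zeta zero,
# x = 2 * 14.1347..., since z is a zero of H_0 iff 1/2 + iz/2 is a
# nontrivial zero of zeta.
print(H(0.0, 28.2695), H(0.0, 20.0))
```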
In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2]. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math]. The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODEs [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum is interpreted in a principal value sense, and excluding those times at which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as [math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] where the dependence on [math]t[/math] has been omitted for brevity. In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic [math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math] as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that [math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math] as [math]k \to +\infty[/math]. See asymptotics of H_t for asymptotics of the function [math]H_t[/math].

Threads

Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.

Other blog posts and online discussion

Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

Wikipedia and other references

Bibliography

[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke Math. J. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics 222 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T.
Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
Edge at \(m_{\rm inv}=79\GeV\) in dilepton events

There were virtually no deviations from the Standard Model at the LHC a month ago. But during the last month, there has been an explosion of so far small excesses. Off-topic: Have you ever played 2048? Flash 2048. A pretty good game. Cursor arrow keys. One day after Tommaso Dorigo presented another failed attempt to mock the supersymmetric phenomenologists, he was forced to admit that "his" own CMS collaboration had found another intriguing excess – namely in the search for edge effects in dilepton events. The detailed data may be seen in Konstantinos Theofilatos' slides shown at ICNFP2014 in Κολυμβάρι, Greece (if the city name sounds Greek to you, it's Kolimvari; Czech readers may call the place Kolín-Vary) or at a CMS website. The paper hasn't been released yet but it's already cited in the thesis by Marco-Andrea Buchmann (no idea about his or her sex, but he or she looks like a boy) whose chapter 5 (page 49) is dedicated to this search (see also the reference [95] over there). On page ii, Buchmann mentions a significance of the edge at 2.96, almost 3 sigma. What's going on? Supersymmetry is expected to preserve, at least approximately, the multiplicative conservation law for the R-parity. The known Standard Model particles have positive R-parity; their elusive superpartners have negative R-parity. We collide known and boring protons only, so the initial R-parity is positive (even). So if superpartners are produced, they have to be produced in pairs. Each of the superpartners may decay, and the process of decay may have several steps. Either missing energy is carried away (neutrinos or, at the end, the lightest superpartner), or charged leptons are released. Charged leptons are "clean" and not too contaminated by the hadronic junk that the LHC constantly produces in the proton-proton collisions. In the decay chain, two charged leptons, e.g. one \(e^\pm\) and one \(\mu^\pm\), may be released as a superpartner decays to lighter superpartners via the chain \[ X\mathop{\to}_{e^\pm} Y\mathop{\to}_{\mu^\pm} Z. \] Their invariant mass can't be arbitrarily high. The maximum value is equal to the difference between the masses of \(X\) and \(Z\), i.e. \(m_X-m_Z\). (The actual search looks for same-flavor, opposite-sign leptons, i.e. \(e^+e^-\) and \(\mu^+\mu^-\).) The reason why this is really the maximum invariant mass of the charged leptons is that it is clearly achieved if the two leptons' momenta are parallel in the spacetime, along with everything else. If they are not parallel, the invariant mass is lower than it could be. So dileptons resulting from the decay of the superpartner \(X\) with an invariant mass \(m_{\rm inv}(e^\pm \mu^\pm)\) above a threshold (edge) don't exist at all. On the other hand, lots of dilepton events are predicted "right beneath" the edge, because they correspond to relatively slowly moving leptons and there's a lot of phase space over there: the invariant mass only depends slowly on the speeds if the speeds are low. To make things clear, one would like to see something like this: The chart was produced by a computer simulation. The green dotted curve shows the shape of the expected edge at \(m_{\rm inv}=65\GeV\), and you may see that the shape is matched by the observed (but simulated) events, the black curve.
The actual observed curves from CMS look much less spectacular, but the mild edge located at \(m_{\rm inv} = 78.7\pm 1.4\GeV\) (Buchmann's thesis) was actually quantified to have significance 2.6 (the Greek guy) or 2.96 (Buchmann) sigma, formally corresponding to something like 99% or 99.7% certainty that new physics is there. Of course, a 99% certainty is nothing convincing in particle physics, and most things one may be 99% certain about are almost certainly flukes or errors. (The reason for the discrepancy is really the same as the reason why the sleeping beauty knows that the probability of "tails" is still \(P=1/2\). She wakes up after "heads" twice as often but she knows that; she knows that the heads – and similarly excesses at the LHC – are "overreported". So if she wants to have an idea about the actual probability of the new physics or heads, she must correct her estimates for this overreporting. When she does so, she knows that heads still have \(P=1/2\), less than the naive \(P=2/3\), and new physics given the excess is much less likely than the formally calculated and often incorrectly interpreted 99%.) But it's surely another fluctuation that can keep one excited. If you have been searching for masses in \({\rm GeV}\) in this blog post, you must have seen that the relevant edge was at \(78.7\GeV\) or so. If superpartners exist, this value could be the mass difference between two superpartners (or other new particles?) waiting to be discovered.
I've seen other posts (Inner automorphisms form a normal subgroup of $\operatorname{Aut}(G)$) about the topic of $\operatorname{Inn}(G) \simeq G/Z(G)$, but what I want to ask is a detail. When we assert that $\operatorname{Inn}(G) \simeq G/Z(G)$, we want to say that there exists some isomorphism $F$ such that: $$F: G/Z(G) \rightarrow \operatorname{Inn}(G), \quad F(gz) = \tau_g$$ where $g \in G, z \in Z(G)$ and $\tau_g(h) = ghg^{-1}$ for all $h \in G$. But is this $F$ really an isomorphism? I'm not so sure, because there is not ONE $\tau_g$ for ONE $gz$: what we have is a total of $|Z(G)|$ (the order of $Z(G)$) elements of the form $gz$ for just ONE $\tau_g$, since $\tau_g = \tau_{gz}$ whenever $z \in Z(G)$.
The simplest forcing to add a dominating function is Hechler forcing $\newcommand{\D}{\mathbb{D}}\D$. In set-theoretic circles, conditions in $\D$ are pairs $(s,f)$ where $s$ is a finite sequence of natural numbers and $\newcommand{\N}{\mathbb{N}}f:\N\to\N$; extension is defined by $(s,f) \leq_{\D} (t,g)$ if $t \supseteq s$, $g \geq f$, and $t(n) \geq f(n)$ for $|s| \leq n \lt |t|$. A $\D$-generic filter $G$ defines a function $g = \bigcup \lbrace s : (s,f) \in G\rbrace$ which eventually dominates every ground model function. Since the statement you're trying to force is localized, in the sense that you only want $g$ to dominate all total $X$-computable functions, you can get away with an index-based variant of Hechler forcing. In that case, conditions of $\D_X$ are pairs $(s,i)$ where $s$ is a (coded) finite sequence of natural numbers and $i$ is an index for a total $X$-computable function $\varphi_i^X$; extension is defined by $(s,i) \leq_{\D_X} (t,j)$ if $(s,\varphi_i^X) \leq_{\D} (t,\varphi_j^X)$ in the sense described above. A $\D_X$-generic filter defines a function $g$ as above which eventually dominates every total $X$-computable function. Note that we cannot expect $\D_X$ conditions to form a set since "$\varphi_i^X$ is total" is a $\Pi^0_2(X)$-complete statement. This is not a major problem since generics are constructed externally and we understand what "$\varphi_i^X$ is total" means from outside the ground model. Note that if the set of conditions exists in the ground model, then $\D_X$ is just a variation on Cohen forcing. However, in general, the ground model will have a very different perception of $\D_X$ and the generic will be quite different from a plain Cohen generic set. To see that $\D_X$ preserves $\Sigma^0_1$-induction, first show that if some extension $(t,j) \geq_{\D_X} (s,i)$ forces a $\Sigma^0_1$-statement (which may use a fixed ground model set parameter in addition to the generic function $g$) then there is another extension $(u,i) \geq (s,i)$ that also forces the same $\Sigma^0_1$-statement. It follows from this that if $A(x)$ is a $\Sigma^0_1$ statement in the forcing language, then the set $$\lbrace x \in \N : (s,i) \nVdash \lnot A(x)\rbrace$$ is actually $\Sigma^0_1$-definable over the ground model. By $\Sigma^0_1$-induction in the ground model, this set, if nonempty, has a minimal element $x_0$, and there is an extension $(t,j) \geq (s,i)$ (even with $j = i$) such that $$(t,j) \Vdash A(x_0) \land (\forall x \lt x_0)\lnot A(x).$$ This shows that it is dense to either force $\forall x \lnot A(x)$ or to force that there is a minimal $x$ that satisfies $A(x)$. Therefore, forcing with $\D_X$ preserves $\Sigma^0_1$-induction. The use of the indexed variant $\D_X$ instead of the full second-order forcing $\D$ is very useful here, since $\D$ can be quite devastating to weak subsystems of second-order arithmetic. Indeed, if the ground model satisfies arithmetic comprehension, then every $\Pi^1_1$ statement becomes $\Sigma^0_2$ in the generic extension. So forcing with $\D$ will not preserve systems weaker than $\Pi^1_1\text{-}\mathsf{CA}_0$ containing $\mathsf{ACA}_0$. The index-based variant $\D_X$ is not so devastating since it is equivalent to Cohen forcing over any model of $\mathsf{ACA}_0$.
Experimental Mathematics
Experiment. Math.
Volume 17, Issue 3 (2008), 375-383.
Subrings of the Asymptotic Hecke Algebra of Type $H_4$
Abstract
The structure of the subring $J^{\Gamma \cap \Gamma^{-1}}$ of the asymptotic Hecke algebra is described for $\Gamma$ a left cell of the Coxeter group of type $H_4$. A small set of generators over $\mathbb{Z}$ is produced. The subalgebras spanned by a subset of the basis $\left\{t_x\right\}_{x\in \Gamma\cap\Gamma^{-1}}$ are determined.
Article information
Source
Experiment. Math., Volume 17, Issue 3 (2008), 375-383.
Dates
First available in Project Euclid: 19 November 2008
Permanent link to this document
https://projecteuclid.org/euclid.em/1227121390
Mathematical Reviews number (MathSciNet)
MR2455708
Zentralblatt MATH identifier
1196.20003
Subjects
Primary: 20C08: Hecke algebras and their representations
Citation
Alvis, Dean. Subrings of the Asymptotic Hecke Algebra of Type $H_4$. Experiment. Math. 17 (2008), no. 3, 375--383. https://projecteuclid.org/euclid.em/1227121390
Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Here are the schematic and naming conventions used (schematic created in CircuitLab).

Charge and discharge bounds

You probably know this already, but the theory of operation is as follows: When the THRESHOLD pin's voltage (and \$C_1\$'s voltage) reaches \$2/3 \cdot V_{CC}\$, the output goes \$\text{LOW}\$, and the DISCHARGE pin becomes a short to ground. \$C_1\$ is then discharged through \$R_2\$ to ground. When the TRIGGER pin's voltage (and \$C_1\$'s voltage) falls to \$1/3\cdot V_{CC}\$, the output goes \$\text{HIGH}\$, the DISCHARGE pin becomes open-circuited, and \$C_1\$ charges toward \$V_{CC}\$ through \$R_1\$ and \$R_2\$. The \$\text{HIGH}\$ period is then the time the capacitor takes to charge from \$1/3\$ to \$2/3\$ of the supply through the equivalent resistance \$R_{tot} = R_1 + R_2\$. The \$\text{LOW}\$ period is the time the capacitor takes to discharge from \$2/3\$ to \$1/3\$ of the supply through \$R_2\$.

The series RC circuit charge-discharge formula

The charge-discharge formula of an RC circuit is: $$V_C(t) = \Delta V(1 - e^{-t/RC}) + V_I$$ where \$V_I\$ is the initial voltage across the capacitor, and \$\Delta V\$ is the difference between the initial voltage \$V_I\$ and the steady-state voltage the circuit would reach in the limit. Often it is assumed that \$V_I = 0\$, and the formula takes the more well-known form: $$ V_C(t) = V_{\text{terminal}}(1 - e^{-t/RC}) $$

Application to the 555 timer's astable mode circuit

Only the charging part will be discussed here; the discharging part of the period follows the same principles. During the charging part of the cycle, the capacitor charges toward \$V_{CC}\$ (the level it would reach if the 555 never switched) starting from \$1/3 \cdot V_{CC}\$. It is trivial to see that \$V_I = 1/3\cdot V_{CC}\$ and \$\Delta V = 2/3 \cdot V_{CC}\$. Taking \$t = 0\$ as the moment the capacitor begins to charge, the \$\text{HIGH}\$ period is the \$t\$ such that \$V_C(t) = 2/3 \cdot V_{CC}\$, as when this is satisfied, the capacitor begins its discharge, and the output becomes \$\text{LOW}\$. $$\begin{align}2/3 \cdot V_{CC}(1 - e^{-t/R_{tot}C}) + 1/3\cdot V_{CC} = 2/3\cdot V_{CC}& \Rightarrow 2/3(1 - e^{-t/R_{tot}C}) + 1/3 = 2/3 \\& \Rightarrow 1 - e^{-t/R_{tot}C} = 1/2 \\& \Rightarrow e^{-t/R_{tot}C} = 1/2 \\& \Rightarrow -t/(R_{tot}C) = \ln(1/2) \\& \Rightarrow t = \ln(2) \cdot R_{tot}C \approx 0.693 \cdot R_{tot}C\end{align}$$ Mutatis mutandis, the discharge part of the cycle follows the same line of reasoning.
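Putting numbers on this, here is a small sketch (my own helper, assuming the astable formulas derived above) that computes the two half-periods and the resulting frequency:

```python
# Sketch: astable 555 timing from the formulas derived above.
import math

LN2 = math.log(2)  # ~0.693

def astable_555(r1, r2, c):
    """r1, r2 in ohms, c in farads; returns (t_high, t_low, freq)."""
    t_high = LN2 * (r1 + r2) * c   # charge through R1 + R2
    t_low = LN2 * r2 * c           # discharge through R2
    return t_high, t_low, 1.0 / (t_high + t_low)

# Example component values (made up): R1 = 10k, R2 = 47k, C = 100 nF
print(astable_555(10e3, 47e3, 100e-9))
```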
If we have an equation we want to solve such as $\sqrt{x} = 3$, we can say something such as "square both sides" or "raise both sides to the power of 2" to arrive at $x = 9$. So $3 \rightarrow 3^2$ with this action. Is there any standard terminology which can be used for taking a number (or each side of an equation) and using it as the exponent of some base, like $2 \rightarrow 3^2$? For example, consider the following equation and solution: \begin{align}\log_5 x & = 3 \\ 5^{\log_5 x} & = 5^{3} \tag{*}\label{} \\ x & = 125 \end{align} How should I describe the step $(*)$ labeled here? The way I currently describe something like this is "take each side as an exponent of base 5". I suppose I could say something like "raise 5 to the power of both sides", but I want to avoid this, as the direct object of the sentence is not the thing I am starting with. That is, conceptually I am not doing any action to the number $5$; rather, I am doing something to the numbers $\log_5 x$ and $3$, so these should be the direct object(s) of the sentence.
Let $S^n$ be the standard unit $n$-sphere, embedded in Euclidean space as $S^n = \{ x \in {\mathbb{R}}^{n+1} | \| x \| = 1 \}$. Define geodesic distance as $d(x, y) = \arccos (x \cdot y)$, where $\cdot$ is the Euclidean dot product. My lecture notes introduce geodesic distance on page 11, but they're quite hand-wavy about the proof that distance is subadditive. To establish the triangle inequality, let $x, y, z \in S^{n}$, $\theta = \arccos(\langle x, y \rangle)$ and $\varphi = \arccos(\langle y, z \rangle)$. Then we can write $x = y \cos \theta + u \sin \theta$ and $z = y \cos \varphi + v \sin \varphi$ where $u$ and $v$ are unit vectors orthogonal to $y$. An easy calculation now gives $\langle x, z \rangle = \cos \theta \cos \varphi + \langle u, v \rangle \sin \theta \sin \varphi$. Concretely, how do I find $u$ and $v$? What is this "easy calculation"?
We know that quantum tunneling is the reason behind several natural phenomena, such as alpha decay and thermonuclear fusion inside stars. Can it influence chemical reactions by tunnelling a species through the activation barrier? If so, how does it influence the kinetics and the fraction of molecules taking part in the reaction? The probability of tunnelling at an energy $E$ is given by $p(E)\approx e^{-bA\sqrt{m}}$ where $A$ is proportional to the area of the potential energy barrier above energy $E$, i.e. the top part of the potential barrier, $m$ is the mass, and $b$ collects some constants ($\pi$, $\hbar$, etc.). Thus, for a given mass and energy, if the barrier is narrow, so that $A$ is small, tunnelling is more likely than if the barrier is wide. At a given energy, for the same barrier, if the mass is large, tunnelling is small. Thus we tend to see tunnelling only with H and D, and not with Cl atoms, for example. Tunnelling is also important in electron transfer reactions. As there is not just a single energy in a reaction but a distribution of energies, according to the Boltzmann distribution, it is necessary to modify the expression above to average over the energy, but the basic result is the same, which is that the reaction rate constant is reduced by the factor $p(E)$. [If the potential energy barrier is $V(x)$ then $\displaystyle p(E)=\exp\left(-\frac{2\sqrt{2m}}{\hbar}\int_{x_1}^{x_2}\sqrt{V(x)-E}\;dx\right)$ where $x_{1,2}$ are the points either side of the barrier with energy $E$.] Quantum tunneling in chemical reactions will only play a role if the collision energy and the spread in the collision energy of the reaction are very low. Just like in nucleosynthesis, the tunneling probability of a particle through a barrier only gets an appreciable value if the collision energy is very close to a quasi-bound state of the reaction complex (as in the triple alpha process). At this energy, this results in a large increase of the cross section (a scattering resonance), and thus the reaction rate (which is the product of the cross section and the relative velocity of the reaction partners). At high temperature, however, you can no longer speak of a collision of a single energy because different collision energies are mixed (to form a wave packet, if you like). This mixing of collision energies results in a mixing of the cross sections and washes out the sharp resonance of the single reaction channel. Resonances do play a role in astrochemistry, where reactions happen at very low temperature and have been observed in the laboratory under controlled conditions, but are in general not important under normal conditions.
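To see the mass effect quantitatively, here is a sketch (my own toy numbers, not from the answer above) of the WKB transmission through a rectangular barrier, comparing hydrogen and deuterium:

```python
# Sketch: WKB tunnelling probability through a rectangular barrier,
# exp(-2 * kappa * L) with kappa = sqrt(2 m (V0 - E)) / hbar.
# Barrier height and width below are made-up illustrative values.
import math

HBAR = 1.054571817e-34   # J s
EV = 1.602176634e-19     # J per eV
AMU = 1.66053906660e-27  # kg

def wkb_rect(m, v0_ev, e_ev, width_m):
    kappa = math.sqrt(2 * m * (v0_ev - e_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# 0.5 eV barrier, 0.5 angstrom wide, particle well below the top:
for name, m in [("H", 1 * AMU), ("D", 2 * AMU)]:
    print(name, wkb_rect(m, v0_ev=0.5, e_ev=0.0, width_m=0.5e-10))
# Doubling the mass drops the probability by roughly three orders of
# magnitude here, consistent with tunnelling being visible mainly
# for H and D.
```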
From: Robert Waldmann, Brad DeLong To: Interested Parties Date: 2019-04-12 This note tries to dot some of the i's and cross some of the t's in Olivier Blanchard's excellent, thoughtful, and provocative American Economic Association Presidential Address "Public Debt and Low Interest Rates" https://www.aeaweb.org/aea/2019conference/program/pdf/14020_paper_etZgfbDr.pdf. One of his conclusions is that, in the near-canonical Diamond (1965) Overlapping-Generations model he considers, whether a marginal increase in public debt raises welfare—whether an economy is "dynamically inefficient"—depends not just on the relationship of the rate of interest on safe government bonds to the growth rate but on both that interest rate's and the expected rate of profit's relationship to the economy's growth rate. We believe that there is a distinction between two possible definitions of "dynamic efficiency": (1) that a marginal issuance of debt, by itself, raises welfare; and (2) that some feasible policy package involving a reduction in the capital stock raises welfare. Blanchard is correct that (1) depends on the relationship of both the rate of interest on safe government bonds and the expected rate of profit to the economy's growth rate. But (2) depends only on the relationship of the rate of interest on safe government bonds to the economy's growth rate. In Blanchard's analysis of the near-canonical Diamond (1965) Overlapping-Generations model, issuing government debt directly shifts consumption away from the young, who buy the debt to use it as a savings vehicle, and to the old, who sell the debt and spend the proceeds. Blanchard's correct conclusion is that this direct-transfer effect will be welfare-raising as long as the economy is dynamically inefficient, defined as having a safe rate of interest less than its growth rate. So far, so good. In Blanchard's analysis, a second effect springs from government debt's role in crowding out capital investment. This crowding-out of investment has a factor-price effect: it raises profits received by old wealthholders and diminishes wages earned by younger workers, and so it also transfers from the young, who earn less in wages, to the old, whose profits are increased because the firms they own pay less in wages. Blanchard concludes, correctly, that this effect will be welfare-raising only under a stronger condition than that required for the direct-transfer effect: not just the safe rate of interest on government bonds but the risky rate of profit on capital investment must be lower than the economy's growth rate. Our point is that this factor-price effect is the inverse of a wage subsidy funded by a tax on profits. It thus can be neutralized by a government that taxes profits and provides a wage subsidy. Thus there is a composite policy—the issuance of debt and a profits tax-financed wage subsidy—that reduces the economy's capital stock and is welfare-raising whenever the safe interest rate is less than the economy's growth rate. There is thus a sense in which the risky profit rate is irrelevant: in which an economy characterized by the semi-canonical Diamond (1965) overlapping-generations model is dynamically inefficient—has "too much" capital—whenever the safe interest rate is less than the economy's growth rate. In a world with no stochastic shocks, there is dynamic efficiency so long as $ R > 1 + g $. If $ 1 + g > R $, there is dynamic inefficiency, in the sense that a Pareto improvement can be achieved through a reduction in the stock of capital. Simply have the government issue and then roll over debt, and spend the money on useful things.
In the semi-canonical Diamond model: However, when there is a wedge between the (risky) rate of profit and the (safe) rate of interest, there are at least two real interest rates. Which of these rates matters for dynamic inefficiency? This matters, for, as noted by Blanchard, in the United States typically the safe interest rate $ R^f $ has been less than $ 1 + g $, but the risky profit rate $ R $ has been greater than $ 1 + g $. It turns out that it is the safe rate that matters. The presence of a profit rate $ R > 1 + g $ along with an interest rate $ 1 + g > R^f $ means that achieving a Pareto improvement via a reduction in the capital stock is not trivially simple—as it is in the no-wedge case—but such a policy exists. For simplicity, assume trend growth $ g = 0 $. Furthermore, assume $ {R^f}_t < 1 $ for every t. (Note that this is a strong assumption: $ R^f $ varies with the capital stock $ K $, and this requires that there be no state of the world in which the capital stock is so low that the economy becomes—temporarily—dynamically efficient.) Consider a government that issues and rolls over debt $ D $, paying each period's safe rate $ {R^f}_t < 1 $, and taxes or transfers to keep $ D $ constant: Since $ {R^f}_t < 1 $ for all t, the government transfers wealth to its citizens each period—a Pareto improvement. Moreover, $ D $ crowds out K, so there is reduced production. Consumers lose $ {R_t}{\Delta}K_t $, where $ {\Delta}K $ is the reduction in $ K $ due to $ D $, and gain $ {R^f}_tD $. Since they have a revealed preference for $ {R^f}_tD $, this is a Pareto improvement as well. However, as Blanchard stresses, this reduction $ {\Delta}K $ in the capital stock also has a general equilibrium effect on factor prices: a higher rate of profit $ r $ and lower wages $ W $. Blanchard demonstrates that the sign of this effect on the welfare, evaluated after the resolution of period t-1 uncertainty but before the resolution of period t uncertainty, of the median generation, which works with median K using median technology, is the same as the sign of: $ E_{t-1}(R_t - 1) $. It is this third effect that leads Blanchard to conclude that dynamic inefficiency depends not just on the (safe) interest rate relative to the growth rate but also on the (risky) profit rate relative to the growth rate. This is correct if the only policy levers available to the government are the issuing and redemption of debt and associated lump-sum taxes and transfers. However, if the government can also impose a profits tax and a wage subsidy, then the third effect exists only if its existence pleases the government. If the existence of this general-equilibrium factor-price effect does not please the government, it can neutralize it. It is a feasible (balanced budget) tax-and-transfer policy to ensure that citizens face the same $ W_t $ and $ R_t $ that they would have faced under zero-debt laissez-faire. In this case effects (1) and (2) are still present: citizens still get a subsidy each period, and citizens still have access to a high-value safe saving vehicle in which they chose to invest. This is a Pareto improvement associated with a reduced capital stock. The existence of this Pareto improvement shows that a sufficient condition for dynamic inefficiency is $ {R^f}_t < 1 $ for all t. More generally, if $ E({R^f}_t) < 1 $, the government can increase money-metric welfare averaged over generations.
OK, the point (if any) of this note is to worry about the complexity of the (always feasible) tax and transfer policy described above. Use the notation: We require: $ (1-\tau_t){R^*}_t = R_t $ So: $ (1-\tau_t) = \frac{R_t}{{R^*}_t} $ Assume: $ Y_t = A_t(K_{t-1})^{\alpha} $ Here it is important that the shock is a stochastic Hicks-neutral technology shock. This means that $ \frac{R_t}{{R^*}_t} = \left(\frac{K_{t-1}}{{K^*}_{t-1}}\right)^{\alpha-1} $. The ratio does not depend on $ A_t $ because the shocks are Hicks-neutral. (In addition, to get a simple formula for $ {R^f}_t $, I also need that the utility from consumption when old exhibits constant relative risk aversion: for example, $ u(C_o) = \ln(C_o) $. This implies that $ {R^f}_t $ depends only on $ K_t $: it is a constant times $ K_t^{\alpha-1} $. The safe rate is lower than the expected value of the risky return, so the constant is less than $ {\alpha}E(A_t) $.) $ K_{t-1} $ is known by agents at time $ t-1 $ (because they are choosing it). Assume $ {K^*}_{t-1} $ can be figured out, and is, by the extremely arithmetically-inclined state and its citizens. The state must do this period-by-period to find the $ {\tau}_t $ sequence which guarantees a Pareto improvement (a toy version of this computation is sketched below)... There is a special case in which the required $ {\tau}_t $ is constant, and the effect of debt and of the profits-tax-and-wage-subsidy on the expected welfare of a generation, taken when the generation is young, is constant. Consider: $ U = C_y + \ln(C_o) $ In this case, all variation in $ Y_t $ is absorbed by $ C_{yt} $. Then everything is very simple. There are closed-form solutions. Blanchard: https://tinyurl.com/20190119b-delong Waldmann:
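Here is the toy version of that computation (my own sketch, with made-up numbers; I read the starred variables as the debt economy and the unstarred ones as laissez-faire, which is one consistent reading of the note):

```python
# Sketch: with Y = A * K**alpha, the profits-tax rate that makes the
# after-tax profit rate in the debt economy (capital K_star) equal to
# the laissez-faire rate at capital K. All numbers are made up.
alpha = 0.3

def profit_rate(A, K):
    return alpha * A * K ** (alpha - 1)   # marginal product of capital

def tau(K, K_star):
    # (1 - tau) * R_star = R  =>  1 - tau = (K / K_star)**(alpha - 1)
    return 1 - (K / K_star) ** (alpha - 1)

A = 1.0
K = 1.0             # laissez-faire capital stock (made up)
K_star = 0.9 * K    # capital after debt crowds some out
R, R_star = profit_rate(A, K), profit_rate(A, K_star)
t = tau(K, K_star)
assert abs((1 - t) * R_star - R) < 1e-12   # the required identity holds
print(R, R_star, t)                        # tau > 0: profits are taxed
```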
Ex.9.1 Q2 Some Applications of Trigonometry Solution - NCERT Maths Class 10 Question A tree breaks due to storm and the broken part bends so that the top of the tree touches the ground making an angle \(30^{\circ}\) with it. The distance between the foot of the tree to the point where the top touches the ground is \(8\,\rm{m}\). Find the height of the tree. Text Solution What is Known? (i) The broken part of the tree bends and touches the ground, making an angle of \(30^{\circ}\) with the ground. (ii) The distance from the foot of the tree to the point where the top touches the ground is \(8\,\rm{m.}\) What is Unknown? Height of the tree Reasoning: (i) Height of the tree \( = AB + AC.\) (ii) The trigonometric ratio which involves \(AB, BC\) and \(\angle C\) is \(\tan \theta\), from which \(AB\) can be found. (iii) The trigonometric ratio which involves \(AB, AC\) and \(\angle C\) is \(\sin \theta\), from which \(AC\) can be found. (iv) Distance between the foot of the tree and the point where the top touches the ground \(= BC = 8\,\rm{ m}\) Steps: In \(\Delta ABC\), \[\begin{align}\tan C &=\frac{A B}{B C} \\ \tan 30^{\circ} &=\frac{A B}{8} \\ \frac{1}{\sqrt{3}} &=\frac{A B}{8} \\ A B &=\frac{8}{\sqrt{3}} \mathrm{m} \end{align}\] \[\begin{align}\sin C &= \frac{AB}{AC}\\\sin 30^{\circ} &= \frac{8/\sqrt{3}}{AC}\\\frac{1}{2} &= \frac{8}{\sqrt{3}} \times \frac{1}{AC}\\AC &= \frac{8}{\sqrt{3}} \times 2\\AC &= \frac{16}{\sqrt{3}}\end{align}\] \[\begin{align}\text{Height of tree} &= AB + AC\\&= \frac{8}{\sqrt{3}} + \frac{16}{\sqrt{3}}\\&= \frac{24}{\sqrt{3}}\\&= \frac{24}{\sqrt{3}} \times \frac{\sqrt{3}}{\sqrt{3}}\\&= \frac{24\sqrt{3}}{3}\\&= 8\sqrt{3}\end{align}\] Height of tree \( =8 \sqrt{3} \rm{m}\)
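A quick numerical check of the result (a sketch using Python's math module):

```python
# Numerical check: AB = 8/sqrt(3), AC = 16/sqrt(3), total = 8*sqrt(3).
import math

AB = 8 / math.sqrt(3)            # standing part, from tan 30 = AB/8
AC = 16 / math.sqrt(3)           # broken part, from sin 30 = AB/AC
height = AB + AC
print(height, 8 * math.sqrt(3))  # both ~13.856 m
```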
This was given as a homework problem, but I have already submitted the assignment. I'd like to resolve it at this point for my own satisfaction. Given that $L_1$ is a linear language and $L_2$ is a regular language, show that $L=L_1L_2$ is a linear language. Recall that a linear grammar $G=(\Sigma, V, P, \sigma)$ has productions $A\to yBz$ for some $y,z \in \Sigma^*$ and $A,B \in V$. I use the theorem that every regular language can be represented by a right linear grammar. Then I use the theorem that every right linear grammar is the reverse of a left linear grammar (being a little careful about what I mean by reverse)... $L(rev(G))=rev(L(G))$... Next, each left linear grammar is the reverse of a regular language, but the reverse of a regular language is regular, so left linear grammars also represent regular languages. So our productions in $L_2$ are of the form $x \to Ca \mid a$ for some $C \in V_{L_2}$ and $a \in \Sigma_{L_2}$. Now on to the show... What we are looking for is $L = L_1.L_2$, with $L$ linear (to show). So this has the form $S \to yBzCa \mid yBzaa$. So far so good: the second production is linear and within our expectations for set inclusion. I'm having a devil of a time reducing $yBzCa$, however... If I introduce $V\to BzC$, that linearizes $S$, but $V$ is not linear... If I give $T\to z$ to get $V\to BTC$, I'm not much better off. If I use $V_1\to Bz$ (OK, linear!) but then $V\to V_1C$ (not linear). What is the piece of the puzzle I'm missing? I have a suspicion that my woes are because I failed to have a production $B\implies^*a$ for some terminal $a \in \Sigma_{L_1}$, but I haven't observed that in the definitions thus far... and further, unless $B$ only goes to a terminal, I'm in the same mess (if $B\to t$ where $t \in \Sigma_{L_2} \cup \{\epsilon\}$, then I think I'm finished, but how do I justify it?)
Is there a simple example of a Boolean function $f:\{0,1\}^m \to \{0,1\}$ that we know can be computed by a polynomial-size circuit, but cannot be computed by any polynomial-size monotone circuit? Ideally, I'd be especially interested in a simple example where there is an exponential gap between the size of the smallest monotone circuit and the size of the smallest (not necessarily monotone) circuit. Razborov's famous result shows that the $k$-clique function requires a monotone circuit of size $\Omega(n^k)$. In particular, we treat $x$ as the adjacency matrix of a graph and define $f(x)=1$ if the graph has a $k$-clique or 0 otherwise. However, this doesn't appear to provide a large gap between monotone circuit complexity and ordinary circuit complexity: the clique function can be computed by a monotone circuit of size $O(n^k)$, and the best known non-monotone circuit has size $O(n^{\omega k/3}) \approx O(n^{0.8k})$, where $\omega = 2.373...$ is the constant for matrix multiplication. This is not a very large gap: $O(n^k)$ vs $O(n^{0.8k})$. In particular, for choices of $k$ where the non-monotone complexity is polynomial, so is the monotone complexity. Based on Razborov's result, Tardos has defined a function $\varphi$ that does have this exponential gap. However, $\varphi$ is fairly complex to even define: one must first introduce the Lovász number of a graph, then discuss how to approximate this number in polynomial time using linear programming, then show how to adjust the result to obtain the definition of the resulting function $\varphi$. In other words, the Tardos $\varphi$ function is not particularly simple to define. So is there a suitable example that is simple/easy to define, and has a large gap between its monotone and non-monotone circuit complexity?
I have been trying to understand the Discrete-Time Fourier Series (NOT Transform). It is defined as $$a_k = \sum_{n=0}^{N-1} x[n]e^{jk\frac{2\pi}{N}n}$$ where $N$ is the period, an integer (by definition), and $n$ goes from $0 \dots N-1$. Fair enough. But when the period of the wave is $N = 0.2$ seconds, then how do we define $n$, as $N-1$ is negative? I think here we should take $N$ as 200 milliseconds, and then loop $n$ from $0 \dots 199$. But then the terms $x[n]$ (the samples) can be defined over either 200 milliseconds OR $2\times10^8$ nanoseconds and so on. So for the 200 ms case, we need 200 samples. But let's say I have taken 100 samples distributed equally over the 200 ms period. Now how do we define $x[n]$ in this case? Should I do it like this: $x[0]=1, x[1]=0, x[2]=1,\dots$ That is, set alternate values to zero? Because in the above equation, I need 200 values of $x$. Or better (I think): define $1\ \text{sec} = 500\,ss$, where $ss$ is a hypothetical new unit of time; in that case the period is $N=0.2\times 500\,ss=100\,ss$ (with the condition that the final value is an integer). In this case we can loop over 100 samples easily without worrying about setting alternate values. Can anyone kindly help me with this or with my thinking above? Best Regards,
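For what it's worth, one concrete rendering of the second option is to treat $N$ as the number of samples per period, a dimensionless integer, which sidesteps the units question entirely. A sketch (my own, using numpy; the signal is a made-up example and follows the sign convention of the formula above):

```python
# Sketch: discrete Fourier series coefficients with N = samples per
# period (a dimensionless integer) rather than a time in seconds.
import numpy as np

N = 100                               # samples in one 0.2 s period
n = np.arange(N)
x = np.sin(2 * np.pi * 5 * n / N)     # made-up periodic signal

def dfs_coeff(x, k):
    N = len(x)
    n = np.arange(N)
    return np.sum(x * np.exp(1j * k * 2 * np.pi * n / N))

print(dfs_coeff(x, 5))   # picks out the 5-cycles-per-period component
```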
This answer is based on a comment by J. Kynčl. It gives a bound of roughly $\rho_{min}(S) \leq \left(\frac{1}{1.074}\right)^d$ which is already exponentially decreasing, but perhaps still far from the optimum. Thus, a better bound would still be of interest. Let $AB$ be one of the longest edges of $S$. Without loss of generality, at least half of the remaining vertices of $S$ are not farther from $B$ than from $A$. Let $V_1, \dots, V_k$ be such vertices ($k \geq (d-1)/2$) and $U_1, \dots, U_{\ell}$ be the remaining vertices (closer to $A$ than to $B$). As J. Kynčl points out, the angles $V_iAB$ are at most $60^\circ$. We also need that the angles $U_iAB$ are at most $90^\circ$ since $BU_i$ is at most as long as $AB$. Let $h$ be the hyperplane perpendicular to $AB$ passing through $A$ and $h^+$ be the halfspace containing $B$ with the boundary hyperplane $h$. Let $C$ be the cone with apex $A$ determined by $S$ (that is, our task is to determine which fraction of $C$ belongs to a ball $B(A,\varepsilon)$ with center $A$ and small radius $\varepsilon$). From the discussion above it follows that $C$ is fully contained in $h^+$. Furthermore, let $\kappa$ be the affine $(k+1)$-space determined by $A$, $B$, $V_1$, $\dots$, $V_k$. We also need another $(k+1)$-dimensional cone $C_{60}$, which is formed by all points $X$ in $\kappa$ such that the angle $XAB$ is at most $60^\circ$. From the discussion above it follows that $V_i \in C_{60}$, and consequently $C \cap \kappa \subseteq C_{60}$. It is not too difficult to compute that $\hbox{vol}_{k+1}(B(A,\varepsilon) \cap C_{60})/\hbox{vol}_{k+1}(B(A,\varepsilon)) \leq \left(\frac{\sqrt 3}2\right)^{k+1}$, where $\hbox{vol}_{k+1}$ is the $(k+1)$-dimensional volume in $\kappa$. (See the left picture below.) Now, let us consider a small enough ball $B(A, \varepsilon)$ and let us estimate $\hbox{vol}(B(A, \varepsilon) \cap C)/ \hbox{vol}(B(A, \varepsilon))$. For this let us consider a $(k+1)$-space $\kappa'$ parallel with $\kappa$. The task is to show that $$(*) \hskip{2cm} \frac{\hbox{vol}_{k+1}(B(A, \varepsilon) \cap C \cap \kappa')}{\hbox{vol}_{k+1}(B(A, \varepsilon) \cap \kappa')} \leq \left(\frac{\sqrt 3}2\right)^{k+1}.$$ As soon as we show $(*)$ we get the same bound on $\hbox{vol}(B(A, \varepsilon) \cap C)/ \hbox{vol}(B(A, \varepsilon))$ by the Fubini theorem. In order to show $(*)$, let us first realize that $C \cap \kappa'$ is either empty or it equals $(C \cap \kappa) + Y$, where $Y$ is the intersection point of $\kappa'$ and the $\ell$-dimensional cone determined by $A$ and $U_1, \dots, U_\ell$ (here, for simplicity, we assume that $A$ is the origin). Thus, in particular, $Y \in h^+$ and $C \cap \kappa' \subseteq Y + C_{60}$. (See the right picture above.) The final step is thus to show that $\hbox{vol}_{k+1}(B(A, \varepsilon) \cap (Y + C_{60})) \leq \hbox{vol}_{k+1}(B(A, \varepsilon) \cap (Z + C_{60}))$ where $Z$ is the center of $B(A, \varepsilon) \cap \kappa'$. This inequality is best shown by the picture below. (First shift $Y$ to $Y'$ on $h$. Then bound $\hbox{vol}_{k+1}(B(A, \varepsilon) \cap (Y' + C_{60}))$ by decomposing $B(A, \varepsilon) \cap (Y' + C_{60})$ into two parts as on the middle and right picture.) Finally, $\hbox{vol}_{k+1}(B(A, \varepsilon) \cap (Z + C_{60}))/\hbox{vol}_{k+1}(B(A, \varepsilon)) \leq \left(\frac{\sqrt 3}2\right)^{k+1}$ as in the case of $C_{60}$. This gives the final bound$$\rho_{min}(S) \leq \left(\frac{\sqrt 3}2\right)^{(d+1)/2} \leq \left(\frac{1}{1.074}\right)^d.$$
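A quick numeric check of the final constant (my own sketch): since $(\sqrt3/2)^{1/2} \approx 0.9306 \leq 1/1.074 \approx 0.9311$, the last inequality holds for every $d \geq 1$.

```python
# Check the final bound's constant over a range of dimensions.
import math

for d in range(1, 200):
    lhs = (math.sqrt(3) / 2) ** ((d + 1) / 2)
    rhs = (1 / 1.074) ** d
    assert lhs <= rhs
print("bound holds for d = 1..199")
```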
Now showing items 1-1 of 1 Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
There ought to be a diagram showing how the angle $\theta$ is defined. Nevertheless, the boundary condition which you have been given is confusing. The diagram above shows a vertical plane through the centre of the spherical electrode. There is cylindrical symmetry here, so a cylindrical co-ordinate system is the obvious choice. If the plane is rotated through any azimuthal angle $\theta$ about the vertical axis $z$ through the centre of the sphere, then all measurements (potentials, current densities, etc) would be the same at all points with the same cylindrical co-ordinates $\rho, z$. So there could be no difference in potential or current density around any circle of radius $\rho$ which is centred on and perpendicular to the $z$ axis. Inside the conducting electrode there is no resistance (infinite conductivity) so there could be a current of constant density around such a circle inside the electrode, without there being any change in potential around any azimuthal circle : $$J_{\theta}=\text{constant}$$ Inside the regions of finite conductivity 1 & 2, there could not be any current around an azimuthal circle, because there would have to be a change in potential between the start and the end points, which are the same, so this is impossible :$$J_{1\theta}=J_{2\theta}=0$$ In all three regions the boundary condition applies for all values of $\theta$ and particular values of $\rho, z$. There is nothing special about $\theta=\frac12 \pi$. The boundary condition given in the book is confusing. It suggests that there is something special about the azimuthal angle $\theta=\frac12\pi$ but there is nothing in the problem which supports this. Perhaps a spherical polar system is being used. If so, and if $\phi$ is the polar angle, then $\phi=\frac12 \pi$ defines the horizontal plane through the centre of the sphere. The above boundary condition is still true for this value of $\phi$ and all values of $r$ : $$J_{1\theta}(\phi=\frac12 \pi)=J_{2\theta}(\phi=\frac12 \pi)=0$$ But there is nothing special about this plane. The boundary condition applies for all other planes perpendicular to the axis, and all values of $\phi$ and $r$ provided that $\rho=r\sin\phi$ and $z=r\cos\phi$ are constant.
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. 
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also in $w=-\frac{u'}{u}P$). @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, for which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ appears in the denominator.
I have an issue with the following problem: Two quadratic equations have real roots $\alpha$ and $\beta$ such that $$\alpha - \beta = 3$$ and $$\alpha \beta = 2(\alpha + \beta).$$ Find the two possible quadratic equations that satisfy these conditions. Since $\alpha$ is larger than $\beta$, in the general solution to the quadratic equation, familiarly: $$x = \frac{-b\pm \sqrt{b^2-4ac}}{2a}$$ $\alpha$ needs to be the larger solution, so it must be of the form: $$\alpha = \frac{-b - \sqrt{b^2-4ac}}{2a}$$ since it's the bigger root, and $\beta$ is therefore: $$\beta = \frac{-b + \sqrt{b^2-4ac}}{2a}$$ However, I need to find the two quadratic equations that have these roots. If I play around a bit with the properties of the roots, I can work out some things like: $$\alpha - \beta = 3$$ $$\frac{1}{2a}\left((-b - \sqrt{b^2-4ac}) - (-b + \sqrt{b^2-4ac})\right) = 3$$ $$ - 2\sqrt{b^2-4ac} = 6a$$ $$4b^2 - 16ac = 36a^2$$ $$b = \frac{\sqrt{16a(2a - c)}}{2}$$ And the next property: $$\alpha \beta = 2(\alpha + \beta)$$ $$\frac{-b - \sqrt{b^2-4ac}}{2a} \cdot \frac{-b + \sqrt{b^2-4ac}}{2a} = 2\left(\frac{1}{2a}\left(-b - \sqrt{b^2-4ac} + (-b) + \sqrt{b^2-4ac}\right)\right)$$ $$\frac{4c}{4a}=\frac{-2b}{a}$$ $$4c=\sqrt{16a(2a - c)}$$ $$16c^2 = 32a^2 - 16ac$$ $$0= 2a^2-ac -c^2$$ $$(a-c)(2a+c)=0$$ $$a= c$$ $$2a = -c$$ I'm going to assume these will represent both quadratics, so I'll use $a = c$ first. $$b = \frac{\sqrt{16a(2a - c)}}{2}$$ $$b = \frac{\sqrt{16c(2c - c)}}{2}$$ $$b = \frac{\sqrt{16c^2}}{2}$$ $$b= \pm\ 2c$$ I'll use $b = 2c$ for equation one. $$\frac{1}{2a}\left((-b - \sqrt{b^2-4ac}) - (-b + \sqrt{b^2-4ac})\right) = 3$$ $$\frac{1}{2c}\left((-2c - \sqrt{{4c}^2-4c^2}) - (-2c + \sqrt{{4c}^2-4c^2})\right) = 3$$ $$c=0$$ Obviously, I blew it, since that implies $a = b = c = 0$. I'm not sure if what I did wrong was some technical errors or the wrong approach in itself. I'd appreciate any guidance on this.
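For what it's worth, one way to sanity-check the problem is via the root sum $s=\alpha+\beta$ and product $p=\alpha\beta$: the conditions give $(\alpha-\beta)^2 = s^2-4p = 9$ and $p=2s$, so $s^2-8s-9=0$, and each solution $(s,p)$ yields a monic quadratic $x^2-sx+p$. A small sympy sketch of this check (our own, not part of the original attempt):

```python
import sympy as sp

s, p, x = sp.symbols('s p x')
# alpha - beta = 3  =>  (alpha - beta)**2 = s**2 - 4p = 9,
# alpha*beta = 2(alpha + beta)  =>  p = 2s
solutions = sp.solve([sp.Eq(s**2 - 4*p, 9), sp.Eq(p, 2*s)], [s, p])
for si, pi in solutions:
    quad = sp.expand(x**2 - si*x + pi)   # monic quadratic with root sum si and product pi
    print(quad, sp.solve(quad, x))
# -> x**2 + x - 2     [-2, 1]   (alpha = 1, beta = -2)
#    x**2 - 9*x + 18  [3, 6]    (alpha = 6, beta = 3)
```

Both root pairs satisfy $\alpha-\beta=3$ and $\alpha\beta=2(\alpha+\beta)$, so, up to an overall constant factor, the two quadratics are $x^2+x-2$ and $x^2-9x+18$.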
Essay and Opinion A new graph density Version 1, released on 08 April 2015 under a Creative Commons Attribution 4.0 International License Authors' affiliations Laboratoire Electronique, Informatique et Image (Le2i). CNRS : UMR6306 - Université de Bourgogne - Arts et Métiers ParisTech Keywords Community discovery, Density, Graph properties, Graph theory, Metric spaces Abstract For a given graph $G$ we propose a non-classical definition of its true density: $\rho(G) = \mathcal{M}ass (G) / \mathcal{V}ol (G)$, where the $\mathcal{M}ass$ of the graph $G$ is the total mass of its links and nodes, and $\mathcal{V}ol (G)$ is a size-like graph characteristic, defined as a function from all graphs to $\mathbb{R} \cup \{\infty\}$. We show how the graph density $\rho$ can be applied to evaluate communities, i.e. “dense” clusters of nodes. Background and motivation Take a simple graph $G = (V, E)$ with $n$ nodes and $m$ links. The standard definition of graph density, i.e. the ratio between the number of its links and the number of all possible links between $n$ nodes, is not very suitable when we are talking about the true density in the physical sense. More precisely, by “the true density” we mean: $\rho(G) = \mathcal{M}ass (G) / \mathcal{V}ol (G) \,,$ where the $\mathcal{M}ass$ of the graph $G$ equals the total mass of its links and nodes, and $\mathcal{V}ol$ is a size-like characteristic of $G$. Consider again the usual graph density: $D = \frac{2 m}{n \left( n - 1 \right)}$. Rewriting $D$ in the “mass divided by volume” form, one obtains the following definitions of graph mass and volume: \begin{align*} \mathcal{M}ass_D (G) &= 2 m \,, \\ \mathcal{V}ol_D (G) &= n \left( n - 1 \right) ; \end{align*} Note that $\mathcal{V}ol_D (G)$ depends only on the number of nodes, so it is a very rough estimate of the actual graph volume. Moreover, any function of the number of nodes (and the number of links) will give somewhat strange results, because we neglect the actual graph structure in this way. In the next section of this article we give a formal definition of the actual graph volume. For the moment, just take a look at Fig. 1, where different graphs with 6 nodes and 6 links are shown. Intuitively, graph $C$ is larger (more voluminous) than $B$ and $A$. But it is not clear which graph is larger: $A$ or $B$. True graph density $\mathcal{M}ass (G)$ It seems a good idea to define $\mathcal{M}ass(G)$ as the total mass of its nodes and links. The simplest way is to assume that the mass of one link (or node) equals $1$. \begin{equation} \tag{MASS} \mathcal{M}ass(G) = n + m \label{eq:mass} \end{equation} $\mathcal{V}ol (G)$ We cannot use any classical measure (e.g. Lebesgue-like) to define the volume of a graph $G$, because all measures are additive. Let us explain why additivity is bad. Observing that $G$ is the union of its links and nodes, and assuming that the volume of a link (node) equals one, we obtain: \[ \mathcal{C}lassical\mathcal{V}ol(G) = n + m \,,\] where $m$ is the number of links in $G$, and $n$ equals the number of nodes. The graph structure disappears again, and we should find “another definition of volume”. A clever person can develop a notion of “volume” for any given metric space. Since any graph can be regarded as a metric space, we can use this as a solution to our problem. Here we briefly describe how Feige in his paper [2] defined the volume of a finite metric space $(S,d)$ of $n$ points.
A function $\phi : S \to \mathbb{R}^{n-1}$ is a contraction if for every $u,v \in S$, $d_{\mathbb{R}} \big( \phi (u), \phi (v) \big) \le d(u,v)$, where $d_{\mathbb{R}}$ denotes the usual Euclidean distance between points in $\mathbb{R}^{n-1}$. Feige's volume $\mathit{Vol} \big( (S,d) \big)$ is the maximum $(n-1)$-dimensional Euclidean volume of a simplex that has the points of $\{\phi(s) \mid s \in S \}$ as vertices, where the maximum is taken over all contractions $\phi : S \to \mathbb{R}^{n-1}$. Sometimes, in order to calculate Feige's volume, we need to modify the original metric. Abraham et al. studied Feige-like embeddings in depth in [1]. Another approach is to find a good mapping $g : S \to \mathbb{R}^{n-1}$, trying to preserve the original distances as much as possible, and to calculate $\mathit{Vol} \big( (S,d ) \big)$ as the volume of the convex envelope that contains all of $\{g(s) \mid s \in S\}$. The interested reader can refer to Matoušek's book [3], which gives a good introduction to such embeddings. But we should note that not all finite metric spaces can be embedded into Euclidean space with exact preservation of distances. In this paper we choose another approach: instead of doing approximate embeddings, we compute the “volume” directly. First of all, let us introduce some natural properties that must be satisfied by the graph volume. A graph volume is a function from the set of all graphs $\mathcal{G}$ to $\mathbb{R} \cup \{\infty\}$: \[ \mathcal{V}ol : \mathcal{G} \to \mathbb{R} \cup \{\infty\} \,,\] Note that our volume has no such parameter as dimension. The absence of dimension allows us to directly compare the volumes of any two graphs. Let the volume of any complete graph be equal to $1$: \begin{equation} \tag{I} \mathcal{V}ol (K_x) = 1 \label{eq:I} \end{equation} Then, for any disconnected graph, denoted by $G_{\bullet^\bullet_\bullet}$, let the volume be equal to infinity: \begin{equation} \tag{II} \mathcal{V}ol (G_{\bullet^\bullet_\bullet}) = \infty \label{eq:II} \end{equation} Intuitively, here one can make an analogy with a gas. Since gas molecules are “not connected”, they fill an arbitrarily large container in which they are placed. When we add a new edge between two existing vertices, the new volume (after the edge addition) cannot be greater than the original volume: \begin{equation} \tag{III} \mathcal{V}ol (G) \ge \mathcal{V}ol (G + e) \label{eq:III} \end{equation} When we add a new vertex $v^1$ with degree $1$, the new volume cannot be less than the original one: \begin{equation} \tag{IV} \mathcal{V}ol (G) \le \mathcal{V}ol (G + v^1) \label{eq:IV} \end{equation} For a given graph $G = (V, E)$ the eccentricity $\epsilon(v)$ of a node $v$ equals the greatest distance between $v$ and any other node of $G$: \[ \epsilon(v) = \max_{u \in V}{d(v,u)} \,, \] where $d(v,u)$ denotes the length of a shortest path between $v$ and $u$. Finally, we define the volume of a graph $G$ as the geometric mean of all eccentricities: \begin{equation} \mathcal{V}ol (G) = \sqrt[|V|]{\prod_{v \in V} \epsilon(v)} \tag{VOLUME} \label{eq:volume} \end{equation} Obviously properties \ref{eq:I}, \ref{eq:II} and \ref{eq:III} hold for this definition, but \ref{eq:IV} remains to be proved or disproved. Reconsidering the graphs from Fig. 1, we have $\mathcal{V}ol(A) = \sqrt[6]{3^3 2^3} \approx 2.45$, $ \mathcal{V}ol(B) = \sqrt[6]{3^3 2^3} \approx 2.45$ and $\mathcal{V}ol(C) = \sqrt[6]{3^6} = 3$.
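Before turning to applications, here is a minimal computational sketch of these definitions (our own illustration; `networkx` is used only for shortest-path eccentricities, and the function names are ours):

```python
import math
import networkx as nx

def graph_mass(G):
    # (MASS): total mass = number of nodes + number of links
    return G.number_of_nodes() + G.number_of_edges()

def graph_volume(G):
    # (VOLUME): geometric mean of the node eccentricities; a disconnected
    # graph has infinite eccentricities, hence infinite volume (property II)
    if not nx.is_connected(G):
        return math.inf
    ecc = nx.eccentricity(G)   # dict: node -> greatest shortest-path distance
    return math.prod(ecc.values()) ** (1.0 / G.number_of_nodes())

def true_density(G):
    vol = graph_volume(G)
    return 0.0 if math.isinf(vol) else graph_mass(G) / vol

# Sanity checks against the text: Vol(K_x) = 1 (property I), and a 6-cycle
# has all eccentricities equal to 3, so Vol = 3 (graph C of Fig. 1).
print(graph_volume(nx.complete_graph(5)))   # -> 1.0
print(graph_volume(nx.cycle_graph(6)))      # -> 3.0
```

Note that the density $\rho$ of a disconnected graph comes out as $0$, consistent with the gas analogy above.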
Possible applications Quality of communities Consider two graphs $A$ and $B$. We say that $A$ is better than $B$ if and only if $\rho(A) > \rho(B)$. Using this notion one can define the quality of a graph partition. The volume of finite metric spaces Our approach can be applied to calculate the “volume” of any finite metric space $(S,d)$: \[ \mathcal{V}ol \big( (S,d) \big) = \sqrt[|S|]{\prod_{s \in S} \epsilon(s)} \,,\] where $\epsilon(s) = \max_{p \in S}{d(s,p)} $. References I. Abraham, Y. Bartal, O. Neiman, and L. J. Schulman, Volume in general metric spaces, in Proceedings of the 18th Annual European Conference on Algorithms: Part II, ESA'10, Berlin, Heidelberg, 2010, Springer-Verlag, pp. 87–99. U. Feige, Approximating the bandwidth via volume respecting embeddings, Journal of Computer and System Sciences, 60 (2000), pp. 510–539. J. Matoušek, Lectures on Discrete Geometry, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2002.
In reading about the inversion of sucrose I came across the equation for the reaction rate constant below: $$k = \left(\frac{2.303}{t}\right) \log\left[\frac{\alpha(0) - \alpha(\infty)}{\alpha(t) - \alpha(0)}\right] \tag{5}$$ What is the logic behind the arrangement of the rotation differences in the logarithmic term? I.e., conceptually, why is $\alpha(0)$ the minuend in the numerator and the subtrahend in the denominator? I have seen other quantities like volume or pressure represented in this form, but I don't understand the reason why reacting sugar would be presented this way.
We know that quantum tunneling is the reason behind several natural phenomena, like alpha decay and thermonuclear fusion inside stars. How can it influence chemical reactions, by tunnelling a species through the activation energy barrier? If so, how does it influence the kinetics and the fraction of molecules taking part in the reaction? The probability of tunnelling at an energy $E$ is given by $p(E)\approx e^{-bA\sqrt{m}}$, where $A$ is proportional to the area of the potential energy barrier above energy $E$, i.e. the top part of the potential barrier, $m$ is the mass, and $b$ collects constants such as $\pi, \hbar$, etc. Thus, for a given mass and energy, if the barrier is narrow, so that $A$ is small, tunnelling is more likely than if the barrier is wide. At a given energy, for the same barrier, if the mass is large tunnelling is small. Thus we tend to see tunnelling only with H and D, and not with Cl atoms, for example. Tunnelling is also important in electron transfer reactions. As there is not just a single energy in a reaction but a distribution of energies, according to the Boltzmann distribution, it is necessary to modify the expression above to average over the energy, but the basic result is the same, which is that the reaction rate constant is reduced by the factor $p(E)$. [If the potential energy barrier is $V(x)$ then $\displaystyle p(E)=\exp\left(-\frac{2\sqrt{2m}}{\hbar}\int_{x_1}^{x_2}\sqrt{V(x)-E}\,dx\right)$ where $x_{1,2}$ are the points on either side of the barrier with energy $E$.] Quantum tunneling in chemical reactions will only play a role if the collision energy and the spread in the collision energy of the reaction are very low. Just like in nucleosynthesis, the tunneling probability of a particle through a barrier only reaches an appreciable value if the collision energy is very close to a quasi-bound state of the reaction complex (as in the triple-alpha process). At this energy, this results in a large increase of the cross section (a scattering resonance), and thus of the reaction rate (which is the product of the cross section and the relative velocity of the reaction partners). At high temperature, however, you can no longer speak of a collision of a single energy, because different collision energies are mixed (to form a wave packet, if you like). This mixing of collision energies results in a mixing of the cross sections and washes out the sharp resonance of the single reaction channel. Resonances do play a role in astrochemistry, where reactions happen at very low temperature and have been observed in the laboratory under controlled conditions, but they are in general not important under normal conditions.
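To illustrate the strong mass and width dependence quantitatively, here is a minimal numerical sketch (our own illustration with arbitrary, merely plausible parameters in atomic units, not data from the answer above) that evaluates the bracketed WKB expression for an inverted-parabola barrier:

```python
import numpy as np

hbar = 1.0                # atomic units (assumption for illustration)
V0, a = 0.02, 2.0         # barrier height (~0.5 eV) and half-width (bohr); illustrative values
E = 0.5 * V0              # collision energy: half the barrier height

def wkb_p(m, npts=20_000):
    """p(E) = exp(-(2/hbar) * integral of sqrt(2m(V(x)-E)) dx) between the
    classical turning points, for V(x) = V0 * (1 - (x/a)**2)."""
    xt = a * np.sqrt(1.0 - E / V0)                 # turning points at +-xt
    x = np.linspace(-xt, xt, npts)
    integrand = np.sqrt(np.maximum(2.0 * m * (V0 * (1.0 - (x / a) ** 2) - E), 0.0))
    action = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))  # trapezoid rule
    return np.exp(-2.0 * action / hbar)

# rough nuclear masses in units of the electron mass
for label, m in [("H", 1836.0), ("D", 3672.0), ("Cl", 64260.0)]:
    print(f"{label:2s}: p(E) ~ {wkb_p(m):.2e}")
```

The output drops by several orders of magnitude from H to D and becomes utterly negligible for Cl, which is exactly the point made above: tunnelling matters essentially only for the lightest nuclei.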
Here is yet another problem I can't seem to do by myself... I am supposed to prove that $$\sum_{n \le x} \frac{\varphi(n)}{n^2}=\frac{\log x}{\zeta(2)}+\frac{\gamma}{\zeta(2)}-A+O \left(\frac{\log x}{x} \right),$$ where $\gamma$ is the Euler-Mascheroni constant and $A= \sum_{n=1}^\infty \frac{\mu(n) \log n}{n^2}$. I might be close to solving it, but what I end up with doesn't seem quite right. So far I've got: $$ \begin{align*} \sum_{n \le x} \frac{\varphi(n)}{n^2} &= \sum_{n \le x} \frac{1}{n^2} \sum_{d \mid n} \mu(d) \frac{n}{d} \\ &= \sum_{n \le x} \frac{1}{n} \sum_{d \le x/n} \frac{\mu(d)}{d^2} \\ &= \sum_{n \le x} \frac{1}{n} \left( \sum_{d=1}^\infty \frac{\mu(d)}{d^2}- \sum_{d>x/n} \frac{\mu(d)}{d^2} \right) \\ &= \sum_{n \le x} \frac{1}{n} \left( \frac{1}{\zeta(2)}- \sum_{d>x/n} \frac{\mu(d)}{d^2} \right) \\ &= \frac{1}{\zeta(2)} \left(\log x + \gamma +O(x^{-1}) \right) - \sum_{n \le x} \frac{1}{n}\sum_{d>x/n} \frac{\mu(d)}{d^2}. \end{align*} $$ So, I suppose my main problem is the rightmost sum; I have no idea what to do with it! I'm not sure where $A$ comes into the picture either. I tried getting something useful out of $$ \begin{align*} & \sum_{n \le x} \frac{1}{n} \left( \frac{1}{\zeta(2)}- \sum_{d>x/n} \frac{\mu(d)}{d^2} \right) +A-A \\ &= \left( \frac{1}{\zeta(2)} \left(\log x + \gamma +O(x^{-1}) \right) - A \right) - \sum_{n \le x} \frac{1}{n}\sum_{d>x/n} \frac{\mu(d)}{d^2} + A, \end{align*} $$ but I quickly realized that I had no clue what I was doing. Any help would be much appreciated. I tried this by switching the sums at the beginning so $\displaystyle\sum_{n\le x}\frac{\phi(n)}{n^2}=\sum_{d\le x}\frac{\mu(d)}{d^2}\sum_{q\le\frac{x}{d}}\frac{1}{q}=\sum_{d\le x}\frac{\mu(d)}{d^2} \left (\log\left(\frac{x}{d}\right)+C+O\left(\frac{d}{x}\right)\right)$ using Thm 3.2(a) of Apostol p.55 (where $C$ is the Euler constant). Then use $\displaystyle\sum_{d\le x}\frac{\mu(d)}{d^2}=\frac{1}{\zeta(2)}+O\left(\frac{1}{x}\right)$ Apostol p.61. Then use $\displaystyle\sum_{d\le x}\frac{\mu(d)\log d}{d^2}=A-\sum_{d>x}\frac{\mu(d)\log d}{d^2}$. This last sum is $\displaystyle O\left(\sum_{d>x}\frac{\log d}{d^2}\right)$ and then use: $0<\displaystyle \sum_{d>x}\frac{\log d}{d^2}=\sum_{d>x}\frac{\log d}{d^\frac{1}{2}}\cdot\frac{1}{d^\frac{3}{2}}<\frac{\log x}{x^\frac{1}{2}}\sum_{d>x}\frac{1}{d^\frac{3}{2}}$ and Thm 3.2(c) p.55 for the error term $\displaystyle O\left(\frac{\log x}{x}\right)$ and the $A$ in the question. You were indeed almost there. All that's left to do is just switch the order of summation on the last sum. I won't fill in all the details but here's a start: $$\begin{align} \sum_{1 \leq n \leq x} ~\sum_{d > x/n} \frac{1}{n} \frac{\mu(d)}{d^2} &= \sum_{d \geq 2} \frac{\mu(d)}{d^2}\sum_{\frac{x}{d}< n \leq x} \frac{1}{n} \end{align}$$ Noticing that $$ \sum_{\frac{x}{d} < n \leq x}\frac{1}{n} = \sum_{n \leq x}\frac{1}{n} - \sum_{n \leq x/d}\frac{1}{n} = \log d + O\left(\frac{d}{x}\right) $$ should do it. To check that the error terms behave correctly, let's see where we're at.
We (okay, You) have shown: $$\sum_{n \le x} \frac{\varphi(n)}{n^2}=\frac{\log x}{\zeta(2)}+\frac{\gamma}{\zeta(2)}-A + \sum_{d \geq 2} \frac{\mu(d)}{d^2} \cdot O\left( \frac{d}{x} \right).$$ It remains to show that $$ \sum_{d \geq 2} \frac{\mu(d)}{d^2} \cdot O\left( \frac{d}{x} \right) = \sum_{d \geq 2} \frac{\mu(d)}{d} O\left( \frac{1}{x} \right) = O\left(\frac{\log x}{x} \right)\tag{$\ast$} $$ (Summing the $O(d/x)$ terms absolutely already explains the $\log x$: the range $2 \le d \le x$ contributes $O\left(\frac{1}{x}\sum_{d \le x}\frac{1}{d}\right)=O\left(\frac{\log x}{x}\right)$, while in the tail $d > x$ the inner harmonic sum is only $O(\log x)$, so it contributes $O\left(\log x\sum_{d>x}\frac{1}{d^2}\right)=O\left(\frac{\log x}{x}\right)$ as well.) We can actually do a little better if the implied constants are handled with care. First, we use that $$ \sum_{n \geq 1} \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}. $$ If you have not seen this before this should be justified. It follows from the formula for $\mu(n)$, when you write the Euler product for the sum. In turn this implies that $$ \sum_{n \geq 1} \frac{\mu(n)}{n} = \lim_{s \to 1^+} \sum_{n \geq 1} \frac{\mu(n)}{n^s} = \lim_{s \to 1^+} \frac{1}{\zeta(s)} = 0. $$ This is nice since we then get that $$ \sum_{d \geq 2} \frac{\mu(d)}{d} = \sum_{d \geq 1} \frac{\mu(d)}{d} - 1 = -1. $$ This proves $(\ast)$. Consider the Dirichlet series related to the problem at hand: $$ g(s) = \sum_{n=1}^\infty \frac{\varphi(n)}{n^{2+s}} = \frac{\zeta(s+1)}{\zeta(s+2)} $$ We can now recover the behavior of $A(x) = \sum_{n \le x} \frac{\varphi(n)}{n^2}$ (not to be confused with the constant $A$) by employing Perron's formula, using $c > 0$: $$ A(x) = \frac{1}{2 \pi i} \int_{c - i \infty}^{c + i \infty} \frac{\zeta(z+1)}{\zeta(z+2)} \frac{x^z}{z} \mathrm{d} z = \mathcal{L}^{-1}_z\left( \frac{\zeta(z+1)}{\zeta(z+2)} \frac{1}{z} \right)(\log x) $$ For large $x$, the main contribution comes from the pole of the ratio of zeta functions at $z = 0$. Using $$ \frac{\zeta(z+1)}{\zeta(z+2)} \sim \frac{1}{\zeta(2)} \left( \frac{1}{z} + \gamma - \frac{\zeta^\prime(2)}{\zeta(2)}\right) + O(z) $$ Since $\mathcal{L}^{-1}_z\left( \frac{1}{z^{n+1}} \right)(s) = \frac{s^n}{n!}$ we have $$ A(x) = \frac{\log x}{\zeta(2)} + \left( \frac{\gamma}{\zeta(2)} - \frac{\zeta^\prime(2)}{\zeta(2)^2} \right) + \mathcal{L}^{-1}_z\left( \frac{\zeta(z+1)}{\zeta(z+2)} \frac{1}{z} - \frac{1}{\zeta(2) z^2} - \frac{\gamma - \zeta^\prime(2)/\zeta(2)}{\zeta(2) z} \right)(\log x) $$ Notice that $\frac{\zeta^\prime(s)}{\zeta(s)^2} = \sum_{n=1}^\infty \frac{\mu(n) \log(n)}{n^s}$ (differentiate $\frac{1}{\zeta(s)} = \sum_{n \ge 1} \frac{\mu(n)}{n^s}$ term by term), hence $\frac{\zeta^\prime(2)}{\zeta(2)^2} = A$. It remains to be shown that the remainder term is small. Rather belatedly (for the benefit of anyone who stumbles across this page in future), while I like apatch's answer, there is a slight issue with the step to prove $$\left|\sum_{d>x}\frac{\mu(d)\log d}{d^2}\right| \leq \sum_{d>x}\frac{\log d}{d^2} = O\left(\frac{\log x}{x}\right).$$ What follows is now a summary of my post about this question, and the answer contained therein. Specifically, you can't say $$\sum_{d>x}\frac{\log d}{d^2} = \sum_{d>x}\frac{\log d}{d^\frac{1}{2}}\cdot\frac{1}{d^\frac{3}{2}}<\frac{\log x}{x^\frac{1}{2}}\sum_{d>x}\frac{1}{d^\frac{3}{2}}$$ because $\frac{\log x}{\sqrt{x}}$ only reaches its maximum around $x\approx 7.39$, well above the $x>2$ condition stated in the question. Instead, you need to approximate the sum by the integral $$\sum_{d>x}\frac{\log d}{d^2} \leq \frac{\log x}{x^2} + \int_x^\infty \frac{\log t}{t^2}dt$$ and then (e.g.) solve the integral using parts.
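As a numerical sanity check on the constant term, here is a small self-contained sketch (our own, not from any of the answers) that compares $\sum_{n\le x}\varphi(n)/n^2$ with $\frac{\log x}{\zeta(2)}+\frac{\gamma}{\zeta(2)}-A$, computing $\mu$ and $\varphi$ from a smallest-prime-factor sieve and approximating $A$ by a truncated sum:

```python
import math

N = 200_000                       # sieve limit; assumed large enough for a rough check
spf = list(range(N + 1))          # smallest prime factor of each integer
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:               # p is prime
        for q in range(p * p, N + 1, p):
            if spf[q] == q:
                spf[q] = p

def mu_phi(n):
    """Return (mu(n), phi(n)) by reading off the prime factorization."""
    mu, phi, m = 1, 1, n
    while m > 1:
        p, k = spf[m], 0
        while m % p == 0:
            m //= p
            k += 1
        phi *= (p - 1) * p ** (k - 1)
        mu = 0 if k > 1 else -mu
    return mu, phi

zeta2 = math.pi ** 2 / 6
gamma = 0.5772156649015329
A = sum(mu_phi(n)[0] * math.log(n) / n ** 2 for n in range(2, N + 1))

x = 100_000
lhs = sum(mu_phi(n)[1] / n ** 2 for n in range(1, x + 1))
rhs = math.log(x) / zeta2 + gamma / zeta2 - A
print(lhs, rhs, lhs - rhs)        # difference should be of order log(x)/x ~ 1e-4
```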
I'm solving a problem involving a Fermi gas. There is a specific sum I cannot figure my way around. A set of equidistant levels, indexed by $m=0,1,2 \ldots$, is populated by spinless fermions with population numbers $\nu_m =0 $ or $1$. I need to compute the following sum over the set of all possible configurations $\{ \nu_l \}$: $Q(\beta,\beta_c) = \sum_{\{ \nu_l \}} \sum_{l} \prod_m \exp({\beta_c \, l \, \nu_l}-{ [ \beta \, m + i \phi] \, \nu_m} )$. Any hints on how to deal with this are appreciated. This is not homework, it is a research problem. It is known that $\beta >0$, $\beta_c>0$, and $\phi \in [0; 2 \pi ]$. EDIT: corrected with the complex phase (the sum is coming from a generating function)
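Since the binding of the $\sum_l$ and $\prod_m$ in the displayed formula is ambiguous, a brute-force evaluation over small truncations may help in testing candidate closed forms numerically. The following is our own exploratory sketch, not a derivation: it truncates to $M$ levels and reads the exponent literally as summed over $m$ inside the product (so the $l$-dependent term is counted once per level); if a different grouping is intended, only the `exponent` line changes.

```python
import cmath
from itertools import product

def Q(beta, beta_c, phi, M=10):
    """Brute-force sum over all 2**M occupation configurations nu in {0,1}^M,
    with the level indices l, m truncated to 0..M-1."""
    total = 0j
    for nu in product((0, 1), repeat=M):
        for l in range(M):
            exponent = sum(beta_c * l * nu[l] - (beta * m + 1j * phi) * nu[m]
                           for m in range(M))
            total += cmath.exp(exponent)
    return total

print(Q(beta=1.0, beta_c=0.3, phi=0.5))
```

Comparing the output for a few truncations $M$ against any conjectured closed form is a quick way to rule candidate interpretations in or out.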
Ex.12.3 Q14 Areas Related to Circles Solution - NCERT Maths Class 10 Question \({AB}\) and \({CD}\) are respectively arcs of two concentric circles of radii \(\text{21 cm}\) and \(\text{7 cm}\) with centre \({O}\) (see Figure). If \(\angle {AOB} = 30^\circ \), find the area of the shaded region. Text Solution What is known? \({AB}\) and \({CD}\) are arcs of two concentric circles of radii \(\text{21 cm}\) and \(\text{7 cm}\) respectively, with centre \({O}\). \(\angle {AOB} = 30^\circ \) What is unknown? Area of the shaded region. Reasoning: Area of the shaded region \(= \) Area of sector \({ABO}\, - \) Area of sector \({CDO}\). Areas of sectors \({ABO}\) and \({CDO}\) can be found by using the formula for the area of a sector of angle \(\theta\), \(\begin{align} \frac{\theta }{{{360}^\circ }} \times \pi {r^2}\end{align}\), where \({r}\) is the radius of the circle and \(\theta\) is the angle in degrees. For both sectors \({ABO}\) and \({CDO}\), \(\theta = {30^\circ}\), with radii \(\text{21 cm}\) and \(\text{7 cm}\) respectively. Steps: Radius of the sector \(ABO\), \( \rm{}R = OB = 21\,\rm{}cm\). Radius of the sector \(CDO\), \(\rm{}r = OD = 7\,\rm{}cm\). For both the sectors \(ABO\) and \(CDO\), the angle \( \theta = {30^\circ}\). Area of shaded region \(= \) Area of sector \({ABO}\; – \) Area of sector \({CDO}\) \[\begin{align}&= \frac{\theta }{{360}^\circ} \times \pi {R^2} - \frac{\theta}{{360}^\circ } \times \pi {r^2}\\&= \frac{\theta}{{360}^\circ} \times \pi \left( {{R^2} - {r^2}} \right)\\ &= \frac{{30}^\circ}{{360}^\circ} \times \frac{22}{7}\left( {{{\left( {21\,\rm{cm}} \right)}^2} - {{\left( {7\,\rm{cm}} \right)}^2}} \right)\\&= \frac{1}{12} \times \frac{22}{7} \times \left( {441\,\rm{cm}^2 - 49\,\rm{cm}^2} \right)\\&= \frac{11}{42} \times 392\,\rm{cm}^2\\&= \frac{308}{3}\,\rm{cm}^2\end{align}\]
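For readers who want to check the arithmetic exactly, here is a one-line verification with rational numbers (our own addition, using the same \(\pi \approx \frac{22}{7}\) as the solution):

```python
from fractions import Fraction

# (theta/360) * pi * (R^2 - r^2)  with theta = 30, pi ~ 22/7, R = 21, r = 7
area = Fraction(30, 360) * Fraction(22, 7) * (21**2 - 7**2)
print(area)   # -> 308/3, i.e. the shaded area is 308/3 cm^2 (about 102.67 cm^2)
```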
Ex.6.3 Q5 Triangles Solution - NCERT Maths Class 10 Question \(S\) and \(T\) are points on sides \(PR\) and \(QR\) of \(\Delta PQR\) such that \( \angle P = \angle RTS\). Show that \(\Delta RPQ \sim \Delta RTS\). Diagram Text Solution Reasoning: If two angles of one triangle are respectively equal to two angles of another triangle, then the two triangles are similar. This is referred to as the \(AA\) criterion for two triangles. Steps: In \(\Delta RPQ\) and \(\Delta RTS\): \[\begin{align}\angle R P Q&=\angle R T S \quad \text { (given) } \\ \angle P R Q&=\angle T R S \quad \text { (common angle) } \\ \Rightarrow\qquad \Delta R P Q &\sim \Delta R T S \quad \because \text { (AA criterion) }\end{align}\]
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Lemma 13.1 of CLRS proves that the height of a red-black tree with $n$ nodes satisfies $$h(n) \leq 2\log_2(n+1)$$ There's a subtle step I don't understand. Property 4, stated at the beginning of the chapter, says: If a node is red, then both its children are black. And because of this property it is later stated: According to property 4, at least half the nodes on any simple path from the root to a leaf, not including the root, must be black. Consequently, the black-height of the root must be at least $h/2$. I can intuitively agree, but as an exercise I'd like to prove it, and I can't figure out how to actually do it. Why is that property true? I'm not even able to set up the problem; the only thing I could think of was that if I have $r + b = h$ nodes on such a path, where $r$ is the number of red nodes and $b$ is the number of black nodes, I can have a total of $$ k = \frac{h!}{r!\,b!} $$ arrangements. And I'd like to prove from here that if $b < r$ then I have a contradiction, but I probably need something more than this. Any help?
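Not a full proof, but here is a concrete way to see the counting claim. Property 4 says a red node has black children, so along any root-to-leaf path no two consecutive nodes are red. The following brute-force sketch (our own; not from CLRS) enumerates every red/black colouring of a path of $h$ nodes with no two consecutive reds and confirms that the number of black nodes is always at least $\lfloor h/2 \rfloor$, the minimum being attained by the alternating colouring red, black, red, black, ...:

```python
from itertools import product

def no_two_reds(colors):
    # property 4 restricted to one root-to-leaf path (root excluded, as in CLRS)
    return all(not (c1 == 'r' and c2 == 'r') for c1, c2 in zip(colors, colors[1:]))

for h in range(1, 16):
    fewest_blacks = min(c.count('b') for c in product('rb', repeat=h) if no_two_reds(c))
    assert fewest_blacks >= h // 2
    print(h, fewest_blacks)   # minimum number of blacks over all valid colourings
```

Turning the observation into a proof is then a pairing argument: among any two consecutive nodes on the path at least one is black, so $h$ nodes contain at least $\lfloor h/2 \rfloor$ black ones, and counting the black NIL leaf as well gives a black-height of at least $h/2$.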
The amsmath package provides a handful of options for displaying equations. You can choose the layout that best suits your document, even if the equations are really long, or if you have to include several equations in the same line. The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example: \begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation} You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equations into smaller pieces; these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character & to set the points where the equations are vertically aligned. This is a simple step; if you use LaTeX frequently, you surely already know it. In the preamble of the document include the code: \usepackage{amsmath} To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document. \begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation} The beautiful equation \ref{eu_eqn} is known as the Euler equation. For equations longer than a line use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not. \begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*} Split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example check the introduction of this document. If there are several equations that you need to align vertically, the align environment will do it: usually the binary operators (>, < and =) are the ones aligned for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example: \begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*} Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by a &; also that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually. If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here, as in the short example below.
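For instance, a minimal gather example in the same style as the snippets above (our own illustration, not from the original page):

```latex
\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}
```

Each equation is centered on its own line; there are no & alignment points, and dropping the asterisk numbers every line.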