Question: Is there a rule of thumb for setting a non-diagonal covariance matrix for your Metropolis-Hastings proposal distribution? References are appreciated. Background: Say I have some posterior distribution I am interested in obtaining samples from, $p(\theta|y)$. I choose an initial proposal distribution for a Metropolis-Hastings algorithm. It is $$ q(\theta^*|\theta) = \text{N}(\theta^*; \theta, \text{diag}[\sigma_1^2, \ldots, \sigma_p^2]). $$ Notice that the covariance matrix is diagonal. I choose the elements of this diagonal covariance matrix according to some other rule of thumb (e.g. $\sigma_i \overset{set}{=} 2.38/\sqrt{d}$). It runs for a while, but then it gets stuck. I assume this is because the elements of $\theta$ are correlated. Is there any justification for setting $q$'s correlation matrix to the sample correlation matrix? Here's a pairwise scatterplot of all the iterations (plot omitted). Edit: I just opened my copy of a Bayesian textbook, and it describes something similar to what I suggested. It's a little light on references, though, so I am primarily interested in references here.
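A common heuristic along these lines is the adaptive Metropolis idea of Haario et al.: scale the empirical covariance of past draws by $2.38^2/d$ and use that as the proposal covariance. Below is a minimal two-stage sketch on a made-up correlated Gaussian target; the target, run lengths, and seed are illustrative assumptions, not from the question.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
# toy correlated Gaussian target, standing in for p(theta | y)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def log_post(th):
    return -0.5 * th @ Sigma_inv @ th

def metropolis(n, prop_cov):
    """Random-walk Metropolis with N(0, prop_cov) increments."""
    L = np.linalg.cholesky(prop_cov)
    th, lp = np.zeros(d), log_post(np.zeros(d))
    chain = np.empty((n, d))
    accepted = 0
    for i in range(n):
        prop = th + L @ rng.standard_normal(d)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            th, lp = prop, lp_prop
            accepted += 1
        chain[i] = th
    return chain, accepted / n

# stage 1: pilot run with the diagonal rule-of-thumb proposal
pilot, _ = metropolis(5000, (2.38**2 / d) * np.eye(d))

# stage 2: proposal covariance proportional to the pilot sample covariance
chain, rate = metropolis(5000, (2.38**2 / d) * np.cov(pilot.T))
print("acceptance rate with tuned proposal:", round(rate, 2))
```

The pilot run only has to mix well enough to give a usable covariance estimate; in a full adaptive scheme the covariance is updated on the fly, with diminishing adaptation to preserve ergodicity.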
You are not right: Le Châtelier's principle plays a very important role. Let's have a look at the reactions involving disodium monohydrogen phosphate, as it is an amphoteric substance and therefore a buffer on its own:\begin{align}\ce{Na2HPO4 + H2O &<=> 2Na+ + H3O+ + PO4^3- \\Na2HPO4 + H2O &<=> 2Na+ + {}^{-}OH + H2PO4^- \\}\end{align}To a small extent, there will also be the formation of phosphoric acid, but we'll ignore that for the moment. Let's ignore the counter ion and formulate the equilibrium constants:\begin{align}\ce{HPO4^2- + H2O &<=> H3O+ + PO4^3-} &K_1 &=\frac{c(\ce{PO4^3-})\,c(\ce{H3O+})}{c(\ce{HPO4^2-})\, c(\ce{H2O})} &= K_\mathrm{a}\cdot c(\ce{H2O})\\\ce{HPO4^2- + H2O &<=> {}^{-}OH + H2PO4^-} &K_2 &=\frac{c(\ce{H2PO4^-})\,c(\ce{{}^{-}OH})}{c(\ce{HPO4^2-})\, c(\ce{H2O})}&= K_\mathrm{b}\cdot c(\ce{H2O})\\\end{align} The whole reaction can be described with a coupled equilibrium constant:$$K = K_\mathrm{a}\cdot K_\mathrm{b}\cdot c^2(\ce{H2O})$$ At all times in a pure solution there will be phosphate, monohydrogen phosphate, dihydrogen phosphate and maybe even phosphoric acid in solution. Changing any of the concentrations of these species will shift the equilibrium. The same equations can be applied to the sodium dihydrogen phosphate system; the initial concentrations of course will be different. Depending on the ratio of the two solutions, the pH will also change. The buffer capacity depends not only on the overall concentration, but also on whether it is near the ideal buffer point. Overall, polyprotic acids are very complicated systems, and how the buffer behaves depends solely on your understanding of the word "excellent". There are many fine-tuned buffer systems for various purposes; this website has quite a few neat tables.
For an Alexandrov space $M$ with curvature bounded from below, the isoperimetric profile $v \to I_M(v)$, defined for every $v\in (0,V(M))$ (the volume of $M$ might be infinite), is given by $$ I_M(v)=\inf\{A(\partial D): V(D)=v,\ D \subset\subset M\}, $$ where $D$ varies over relatively compact open subsets of $M$. Given any $v\in (0,V(M))$, is there a subset $D$ with $A(\partial D)=I_M(v)$? And is there any description and regularity theorem for $D$ and $\partial D$? If there are no results for general $n$, then what about the two-dimensional case?
I want to write the following constraint: If A=1 and B <= m then C=1 (where A and C are binary, m is a constant and B is continuous). My solution requires some kind of hack in the model, a Big-$M$ value, which is pretty common, but also a tolerance $\varepsilon$, which is not very desirable. I hope I am not missing something easy here, but I reckon there is no way around it. To model a condition of the form $(X \wedge Y) \to Z$ for binary $X, Y, Z$, you can use $$\tfrac12(X + Y) \leq Z + \tfrac12.$$ We can verify this by inserting: $X=Y=1$ yields $1$ on the left-hand side; thus $Z$ has to be at least $\frac12$, and since it is binary, it must be $1$. If either $X$ or $Y$ is $0$ (or both), we have at most $\frac12$ on the left-hand side, and in this case $Z$ might be either $0$ or $1$. We also need to express the condition $B \leq m$ as a binary variable in order to use the template above. This can be done by introducing a new binary variable $B'$ which must assume the value $1$ iff $B \leq m$. We also need a Big-$M$ value which must be greater than any value that $B$ can assume (note that for numerical reasons in practice, $M$ should not be too large). Moreover, we have $\varepsilon$, a small tolerance value to get a strict $>$-inequality (in practice you would choose the difference to the next greater number from $m$ that can be represented exactly on your machine). We now add the constraints $$B \leq m + (1-B')M\tag{1}$$ $$B \geq m + \varepsilon - B'M\tag{2}$$$(1)$ guarantees that $B' = 0$ if $B > m$, and $(2)$ guarantees that $B' = 1$ if $B \leq m$. We verify this again: If $B \leq m$, $(1)$ and $(2)$ can be satisfied with $B' = 1$, but not with $0$. If $B > m$, we need $B' = 0$ to satisfy $(1)$, and then $(2)$ is satisfied since $B$ is at least $\varepsilon$ larger than $m$. Putting it all together, we obtain your constraint $$ \tfrac12(A + B') \leq C + \tfrac12,\\ B \leq m + (1-B')M,\\ B \geq m + \varepsilon - B'M. $$
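The case analysis above can be checked mechanically. The brute-force sketch below (constants $m$, $M$, $\varepsilon$ chosen arbitrarily for illustration) enumerates candidate values and confirms that every point satisfying the three linear constraints also satisfies the implication:

```python
import itertools

M, eps, m = 100.0, 1e-6, 10.0

def feasible(A, B, Bp, C):
    """The three linear constraints from the answer."""
    return (0.5 * (A + Bp) <= C + 0.5
            and B <= m + (1 - Bp) * M
            and B >= m + eps - Bp * M)

# Every feasible (A, B, B', C) must respect: A=1 and B<=m  =>  C=1
for A in (0, 1):
    for B in (-5.0, m - 1, m, m + 1, 50.0):
        for Bp, C in itertools.product((0, 1), repeat=2):
            if feasible(A, B, Bp, C) and A == 1 and B <= m:
                assert C == 1, (A, B, Bp, C)
print("implication holds for all feasible points")
```

Note how the tolerance does its job: at $B = m$ exactly, constraint $(2)$ rules out $B'=0$, which in turn forces $C=1$ via the first constraint.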
I) In this answer we would like to relax the conventional definition of a conservative force to include e.g. the Lorentz force. II) The standard definition of a conservative force is given on Wikipedia (October 2013) roughly as follows: A force field ${\bf F}={\bf F}({\bf r})$ is called a conservative force if it meets any of these three equivalent conditions: The force can be written as the negative gradient of a potential $U=U({\bf r})$: $$\tag{1} {\bf F} ~=~ - {\bf \nabla} U. $$ Equivalently, condition (1) means that the one-form $\phi:={\bf F}\cdot \mathrm{d}{\bf r}$ is exact: $\phi=-\mathrm{d}U$, where the exterior derivative is $\mathrm{d}:=\mathrm{d}{\bf r}\cdot{\bf \nabla}$. The position space is simply connected and the curl of ${\bf F}$ is zero: $$\tag{2} {\bf \nabla} \times {\bf F} ~=~ {\bf 0}. $$ Equivalently, condition (2) means that the one-form $\phi:={\bf F}\cdot \mathrm{d}{\bf r}$ is closed: $\mathrm{d}\phi=0$. There is zero net work $W$ done by the force ${\bf F}$ when moving a particle through a closed curve ${\rm r}: S^1 \to \mathbb{R}^3$ that starts and ends in the same position: $$\tag{3} W ~\equiv~ \oint_{S^1} \!\mathrm{d}s~ {\bf F}({\bf r}(s)) \cdot {\bf r}^{\prime}(s) ~=~ 0. $$ We stress that the parameter $s$ does not have to be actual time $t$. In fact time $t$ doesn't enter conditions (1-3) at all. The curve in condition (3) could be any virtual loop. In particular, the curve and its parametrization $s$ do in principle not have to reflect how an actual point particle would travel along a trajectory in a certain pace determined by some equations of motion, let alone move forward in time. III) Now recall that a velocity dependent potential $U=U({\bf r},{\bf v},t)$ of a force ${\bf F}$ by definition satisfies $$\tag{4} {\bf F}~=~\frac{\mathrm d}{\mathrm dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}, \qquad {\bf v}~=~\dot{\bf r},$$ cf. Ref. 1. 
Next define the potential part of the action as $$\tag{5} S_{\rm pot}[{\bf r}]~:=~\int_{t_i}^{t_f} \!\mathrm{d}t~U({\bf r}(t),\dot{\bf r}(t),t),$$ and note that eq. (4) can be rewritten with the help of a functional derivative as $$\tag{6} F_i(t)~=~-\frac{\delta S_{\rm pot}}{\delta x^i(t)}, \qquad i~\in~\{1,2,3\}. $$ Technically, at this point we need to impose pertinent boundary conditions (BC) (e.g. Dirichlet BC) at the initial and final times, $t_i$ and $t_f$, respectively, in order for the functional derivative (6) to exist. These BC are implicitly assumed from now on. We dismiss the possibility that one would like to call a force with explicit time dependence a conservative force. Let us therefore drop explicit time dependence from now on. However, see this Phys.SE post. IV) Seen in the light that velocity dependent potentials (4) are extremely useful in Lagrangian formulations, it is tempting to generalize the notion of a conservative force in the following non-standard way: A velocity dependent force field ${\bf F}={\bf F}({\bf r},{\bf v})$ is called a conservative force if it meets any of these three equivalent conditions: The force can be written as the negative functional gradient of a potential action $S_{\rm pot}[{\bf r}]=\int_{t_i}^{t_f} \!\mathrm{d}t~U({\bf r}(t),\dot{\bf r}(t))$: $$\tag{1'} {\bf F} ~=~ -\frac{\delta S_{\rm pot}}{\delta {\bf r}} ~\equiv~\frac{\mathrm d}{\mathrm dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}} . $$ Equivalently, condition (1') means that the one-form $\Phi:=\int_{t_i}^{t_f}\!\mathrm{d}t~ F_i(t)\mathrm{d}x^i(t)$ is exact in path space: $\Phi=-\mathrm{d}S_{\rm pot}$, where the exterior derivative is $\mathrm{d}:=\int_{t_i}^{t_f}\!\mathrm{d}t~ \mathrm{d}x^i(t)\frac{\delta}{\delta x^i(t)}$. The position space is simply connected and the force ${\bf F}$ satisfies a closedness condition wrt.
functional derivatives: $$\tag{2'} \frac{\delta F_i(t)}{\delta x^j(t^{\prime})} ~=~[(i,t) \longleftrightarrow (j,t^{\prime})]. $$ Equivalently, condition (2') means that the one-form $\Phi:=\int_{t_i}^{t_f}\!\mathrm{d}t~ F_i(t)\mathrm{d}x^i(t)$ is closed in path space: $\mathrm{d}\Phi=0$. The equivalent Helmholtz conditions [2] wrt. partial and total derivatives read $$ \frac{\partial F_i}{\partial x^j} -\frac{1}{2}\frac{\mathrm d}{\mathrm dt}\frac{\partial F_i}{\partial v^j} ~=~[i \longleftrightarrow j], \qquad \frac{\partial F_i}{\partial v^j}~=~-[i \longleftrightarrow j].$$ The following integral (3') over a two-cycle ${\rm r}: S^2 \to \mathbb{R}^3$ always vanishes: $$\tag{3'} \oint_{S^2}\!\mathrm{d}t \wedge \mathrm{d}s~ {\bf F}({\bf r}(t,s),\dot{\bf r}(t,s)) \cdot {\bf r}^{\prime}(t,s) ~=~ 0. $$ Here a dot and a prime mean differentiation wrt. $t$ and $s$, respectively. With this definition (1'-3') of a conservative force, e.g. the Lorentz force and the Coriolis force become conservative forces, while the friction force ${\bf F}=-k {\bf v}$ stays a non-conservative force, cf. this and this Phys.SE answer. It should be said that there are straightforward generalizations of conditions (1'-3'): Firstly, one may allow the force ${\bf F}={\bf F}({\bf r}, {\bf v}, {\bf a}, {\bf j},\ldots)$ to depend on acceleration, jerk, etc. Secondly, one can generalize to generalized positions $q^i$, generalized velocities $\dot{q}^i$, and generalized forces $Q_i$, etc. Finally, let us mention that this construction (1'-3') is in spirit related to the inverse problem for Lagrangian mechanics. References: H. Goldstein, Classical Mechanics, Chapter 1. H. Helmholtz, Ueber die physikalische Bedeutung des Prinzips der kleinsten Wirkung, J. für die reine u. angewandte Math. 100 (1887) 137.
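As a concrete sanity check of the claim that the Lorentz force is conservative in this generalized sense, one can verify symbolically that the velocity-dependent potential $U=-q\,{\bf v}\cdot{\bf A}$ reproduces the Lorentz force via eq. (4). A sketch in sympy, assuming a uniform magnetic field along $z$ in the symmetric gauge and a vanishing scalar potential:

```python
import sympy as sp

t, q, B = sp.symbols('t q B')
x, y, z = (sp.Function(s)(t) for s in 'xyz')
v = [c.diff(t) for c in (x, y, z)]

# symmetric-gauge vector potential for uniform B along z; scalar potential = 0
A = [-B * y / 2, B * x / 2, 0]
U = -q * sum(vi * Ai for vi, Ai in zip(v, A))   # U = -q v.A

# F_i = d/dt (dU/dv_i) - dU/dx_i, i.e. eq. (4)
F = [sp.simplify(U.diff(vi).diff(t) - U.diff(ci))
     for ci, vi in zip((x, y, z), v)]

# Lorentz force q v x B with B = (0, 0, B)
lorentz = [q * v[1] * B, -q * v[0] * B, 0]
assert all(sp.simplify(Fi - Li) == 0 for Fi, Li in zip(F, lorentz))
print("U = -q v.A reproduces the Lorentz force")
```

The trick of differentiating with respect to `Derivative(x(t), t)` as if it were an independent symbol is the same pattern used by sympy's `euler_equations` helper.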
For $N$ samples of normally distributed data $X_i \sim \mathcal{N}(\mu,\sigma^2)$ with known $\sigma$, the $1-\alpha$ confidence interval for the sample mean $\bar{X}$ is $$ \left[\bar{X} - z_{\alpha/2}\frac{\sigma}{\sqrt{N}}, \bar{X} + z_{\alpha/2}\frac{\sigma}{\sqrt{N}}\right], $$ where $z_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution. In particular, it is clear that the length of the confidence interval decreases at the rate $N^{-1/2}$, and so the accuracy of the sample mean increases as the sample size increases. On the other hand, let $a$ be the $\alpha/2$ quantile and $b$ the $1 - \alpha/2$ quantile of the chi-squared distribution with $N-1$ d.o.f. Then the $1 - \alpha$ confidence interval for the sample variance $S^2$ is $$ \left[\frac{(N-1)S^2}{b},\frac{(N-1)S^2}{a}\right]. $$ It is not obvious to me that this interval's length decreases with increasing $N$, as I would expect it to. Only through simulation was I able to verify that it does so empirically (here I set $\sigma = 0.15$; plot omitted). I'd really like to be able to show something like, "for N > 1000, the margin of error for $S^2$ is ___", but I don't see how the margin of error is easily extracted for the sample variance, as it is for the sample mean, and I'm not sure how to show the decrease in interval length analytically. Any thoughts are appreciated.
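For what it's worth, the shrinkage can be checked directly from the chi-squared quantiles rather than by simulating data. A short sketch, reusing $\sigma=0.15$ from the question (the other choices are illustrative):

```python
import numpy as np
from scipy import stats

def var_ci_length(n, s2=0.15**2, alpha=0.05):
    """Length of the 1-alpha CI for sigma^2, given sample variance s2 and size n."""
    a = stats.chi2.ppf(alpha / 2, n - 1)      # lower chi^2 quantile
    b = stats.chi2.ppf(1 - alpha / 2, n - 1)  # upper chi^2 quantile
    return (n - 1) * s2 / a - (n - 1) * s2 / b

for n in (10, 100, 1000, 10000):
    print(n, var_ci_length(n))
```

Analytically, the normal approximation to the chi-squared quantiles gives $a,b \approx (N-1) \mp z_{\alpha/2}\sqrt{2(N-1)}$, so the interval length behaves like $2\sqrt{2}\,z_{\alpha/2}\,S^2/\sqrt{N-1}$, i.e. it decreases at the same $N^{-1/2}$ rate as for the mean.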
Find all positive integers $m$ and $n$ with $m\ne n$ that satisfy the equation $~~~~~~~\displaystyle \sqrt[m]{m} ~=~ \sqrt[n]{n}$ $2^{1/2} = 4^{1/4}$ is the only solution. If you analyse the function $f(x)=x^{1/x}$, you'll find it has a maximum between $x=2$ and $x=3$ (actually at $x=e$) and is strictly decreasing afterwards, with asymptote at $1$. Therefore any such solution pair must consist of one number less than $3$ and one that is $3$ or larger. The smaller number cannot be $1$, since $f(1)=1$ and the function never returns to $1$ afterwards. This leaves only the pair $2$ and $4$. First of all, there are the trivial solutions where $m=n$. Then we have $\{m,n\}=\{2,4\}$. I claim these are all. Note first of all that we're looking for multiple $x$ where $x^{1/x}$ has the same value; equivalently, where $\frac{\log x}x$ does. It's easy to show that this function is increasing from 0 to $e$, and decreasing thereafter; in particular, given any pair $\{m,n\}$ one of the numbers must be less than $e$ and one greater. But the only positive integers below $e$ are 1 and 2. We have already seen what to do with 2; and $1^{1/1}=1$ is smaller than $n^{1/n}$ for any other positive integer $n$.
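A quick numeric check of the shape argument used in both answers (nothing here beyond their claims):

```python
import math

# the claimed solution pair
assert math.isclose(2 ** 0.5, 4 ** 0.25)

# f(x) = x^(1/x) rises up to x = e and falls afterwards
f = lambda x: x ** (1 / x)
assert f(1) < f(2) < f(math.e) > f(3) > f(4)

# f(2) == f(4), so {2, 4} is the integer pair straddling e
print(f(2), f(4))
```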
Let $n>2$ be even. Consider a compact Riemannian manifold $(M^n,g)$ and denote by $P_g$ the critical GJMS operator. Recall that $P_g$ is conformally invariant, i.e. $$P_{\tilde g}=e^{-nu}P_g$$ if $\tilde g=e^{2u}g$ for some $u$. Let $H$ be given by the following expression $$H(x,y)=c_n \log\left(\frac{1}{r}\right)f(r)$$ where $r=d_g(x,y)$ is the geodesic distance from $x$ to $y$ and $f(r)$ is a positive decreasing function with $f(r)=1$ in a neighborhood of $r=0$ and $f(r)=0$ for $r\geq r_{inj}$, the injectivity radius. From Lee-Parker, "The Yamabe problem" (Theorem 5.1), there exists a metric $\tilde g$ conformal to $g$ such that $$ |\tilde g|=1+O(r^m)$$ for some $m$ big enough. In coordinates we have the following expression for the Laplace-Beltrami operator $$\Delta_{\tilde g, y}v=\frac{1}{\sqrt{|\tilde g|}}\partial_i(\tilde g^{ij}\sqrt{|\tilde g|}\partial_j v).$$ In normal conformal coordinates one has $$\tilde g^{ij}=\delta_{ij}+O(r^2),$$ $$\partial_i \tilde g^{ij}=O(r).$$ Question: prove that, working in this coordinate system, one has $$|P_{\tilde g} H(x,y)|\leq C r^{2-n}$$ for $r\leq Cr_{inj}$. This is a step in the proof of Lemma 2.1 in Ndiaye, "Constant Q-curvature metrics in arbitrary dimension". It is not clear to me how to do the computation to prove the estimate, nor why one needs conformal normal coordinates instead of geodesic coordinates. A proof for $n=4$ would be much appreciated too. In this case $$P_g v=(-\Delta_g)^2 v+\textrm{div}_g\left(\frac{2}{3} R_g g-2\operatorname{Ric}_g\right)dv$$ where $R_g$ denotes the scalar curvature and $\operatorname{Ric}_g$ the Ricci curvature. For $n>4$ one has $$P_g v=(-\Delta_g)^{n/2} v+\text{l.o.t.}$$
Summations and Series are an important part of discrete probability theory. We provide a brief review of some of the series used in STAT 414. While it is important to recall these special series, you should also take the time to practice. For more in depth review, there are links to Khan Academy. Summations First, it is important to review the notation. The symbol, \(\sum\), is a summation. Suppose we have the sequence, \(a_1, a_2, \cdots, a_n\), denoted \(\{a_n\}\), and we want to sum all their values. This can be written as \[\sum_{i=1}^n a_i\] Here are some special sums: \(\sum_{i=1}^n i=1+2+\cdots+n=\frac{n(n+1)}{2}\) \(\sum_{i=1}^n i^2=1^2+2^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}\) The Binomial Theorem: It is possible to expand any power of \(x+y\) to the sum \[(x+y)^n=\sum_{i=0}^n {n \choose i} x^{n-i}y^i\] where \[{n\choose i}=\frac{n(n-1)(n-2)\cdots(n-i+1)}{i!}=\frac{n!}{(n-i)!i!}\] Examples using the Binomial Theorem Video, (Khan Academy). Series When \(n\) is a finite number, the value of the sum can be easily determined. How do we find the sum when the sequence is infinite? For example, suppose we have an infinite sequence, \(a_1, a_2, \cdots\). The infinite series is denoted: \[S=\sum_{i=1}^\infty a_i\] For infinite series, we consider the partial sums. Some partial sums are \[\begin{align*} & S_1=\sum_{i=1}^1 a_i=a_1 \\ & S_2=\sum_{i=1}^2 a_i=a_1+a_2 \\ & S_3=\sum_{i=1}^3 a_i=a_1+a_2+a_3\\ & \vdots\\ & S_n=\sum_{i=1}^n a_i=a_1+a_2+\cdots+a_n \end{align*}\] An infinite series converges and has sum S if the sequence of partial sums, \(\{S_n\}\), converges to S. Thus, if \[S=\lim_{n\rightarrow \infty} S_n\] then the series converges to S. If \(\{S_n\}\) diverges, then the series diverges. Review Convergence and Divergence of Series Video, (Khan Academy). These are some of the special series used in STAT 414. It would be helpful to review more than what is listed below. 
Geometric series A geometric series has the form \[S=\sum_{k=1}^\infty a r^{k-1}=a+ar+ar^2+ar^3+\cdots\] where \(a\neq 0\). A geometric series converges to \(\frac{a}{1-r}\) if \(|r|<1\), but diverges if \(|r|\ge1\). More examples and Explanation of the Geometric Series Video, (Khan Academy). A special case of the geometric series is \[\frac{1}{1-x}=1+x+x^2+x^3+\cdots\] for \(-1<x<1\). The Taylor (or Maclaurin) series of \(e^x\): The series \[\sum_{i=0}^\infty \frac{x^i}{i!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots\] converges to \(e^x\) for every real number \(x\). Review for the Taylor (or Maclaurin) Series Video, (Khan Academy). Example C.1 \[S=\frac{1}{3}-\frac{1}{6}+\frac{1}{12}-\frac{1}{24}+\cdots=\sum_{x=0}^{\infty} \frac{1}{3(-2)^x}\] This is a geometric series with \(a=\frac{1}{3}\) and \(r=-\frac{1}{2}\). Therefore, it converges to \[\frac{a}{1-r}=\frac{\frac{1}{3}}{1+\frac{1}{2}}=\frac{2}{9}\]
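Example C.1 can be double-checked numerically with exact rational partial sums (a quick sketch, not part of the course notes):

```python
from fractions import Fraction

# partial sums of S = sum_{x>=0} 1/(3*(-2)^x)
s = Fraction(0)
for x in range(20):
    s += Fraction(1, 3 * (-2) ** x)

# after 20 terms the partial sum is within (1/2)^20 of the limit 2/9
print(float(s))   # ~0.2222
```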
Main Page The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. [math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Useful background materials Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (final call) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (inactive) (500-599) Possible proof strategies (active) (600-699) A reading seminar on density Hales-Jewett (active) (700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (arriving at station) We are also collecting bounds for Fujimura's problem. Here are some unsolved problems arising from the above threads. Here is a tidy problem page. Bibliography Density Hales-Jewett H. Furstenberg, Y. 
Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. Behrend-type constructions M. Elkin, "An Improved Construction of Progression-Free Sets ", preprint. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. Triangles and corners M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299 I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318 J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR 2047239
RANT - I hate problems that don't use significant figures properly. So is the answer supposed to be +/- 1 ml, +/- 0.1 ml, or +/- 0.01 ml? First, your overall approach is right. There are three parts to solving the problem: (1) Calculate pHs (2) Calculate how many moles NaOH (3) Calculate how many ml of 0.1 molar solution. So the problem is that there are four phosphate species, $\ce{H3PO4, H2PO4^-, HPO4^{2-},}$ and $\ce{PO4^{3-}}$. The precision needed for the answer is somewhat of a mystery. So what simplifying assumptions are valid? I'm just going to guess that the volume of NaOH should be accurate to 0.01 ml. So let's look at the problem another way. I start off with 20.00 ml of 0.1000 molar solution of phosphoric acid and titrate, 0.01 ml at a time, with a solution of 0.1000 molar sodium hydroxide. As each aliquot of NaOH is added, the pH is measured. So: (1) when I have added exactly 10.00 ml I have effectively made sodium dihydrogenphosphate. I add 2 to this pH to get the target pH. (2) I look to see the volume of NaOH that gives the target pH and subtract 10.00 ml. SIDEBAR Just looking at the problem I'm guessing a few ml. Let's say 5.00 ml. Measuring the NaOH to 0.01 ml would then mean 1 part in 500 precision (2 parts per thousand). Let's say that the initial $[\ce{H+}]$ is $3.47\times 10^{-5}$. 1 part in 347 is about right. But a pH of 4.46 only has two significant figures, that is 1 part in 46, which is too little precision for my guess of about 2 parts per thousand being needed. So the pH should be noted as 4.460. Looking at Wikipedia we can see that for phosphoric acid pKa1 = 2.148, pKa2 = 7.198, and pKa3 = 12.319. (1) Given these pKa's we can't do better than 2 ppt. (2) Only a small part of the $\ce{H2PO4^-}$ will ionize to $\ce{HPO4^{2-}}$ and $\ce{H^+}$. Thus the initial pH will be lower than 7.198 but quite above 2.148. (3) Given (2), enough $\ce{H3PO4}$ might form to lower the pH when calculating to 2 ppt. 
This is right on the edge for 2 ppt precision... (4) The species $\ce{PO4^{3-}}$ can safely be ignored in all the calculations, i.e. $\ce{[PO4^{3-}]} \ll 0.1000$ molar. 1.0 Quick and dirty method... To avoid solving a 3rd or 4th order polynomial... 1.1 Calculate pHs The reaction of interest is: $\ce{H2PO4^- <=> H+ + HPO4^{2-}}\quad\quad K_{a2} = 6.339 \times 10^{-8}$ We'll let $\ce{[H+] = [HPO4^{2-}]}$ and assume that $\ce{[H2PO4^-]} = 0.1000$, so $6.339 \times 10^{-8} = \dfrac{\ce{[H+]^2}}{0.1000}$, giving $\ce{[H+]} = 7.96 \times 10^{-5}$ and pH = 4.099. Therefore end pH = 4.099 + 2.000 = 6.099. Checks: $\ce{[HPO4^{2−}]}$ of $7.96\times10^{−5}$ is just less than $1\times10^{-4}$, so the assumption that $\ce{[HPO4^{2−}]}$ is not appreciable compared to $\ce{[H2PO4^−]} \approx 0.1000$ is just ok. We've also assumed that an insignificant amount of $\ce{H3PO4}$ will form, which isn't true... $\dfrac{\ce{[H3PO4]}}{\ce{[H2PO4^−]}} = \dfrac{\ce{[H^+]}}{7.11\times 10^{-3}} = 0.0112$ Da fix We have 3 species of phosphate: 0.0008 $\times \ce{[H2PO4^{−}]}$ = $\ce{[HPO4^{2−}]}$ 1.0000 $\times \ce{[H2PO4^{−}]}$ = $\ce{[H2PO4^{−}]}$ 0.0112 $\times \ce{[H2PO4^{−}]}$ = $\ce{[H3PO4]}$ $\text{--------}$ 1.0120 total Scaling so that total phosphate is 0.1000 molar, we have for the concentrations 0.0001 = $[\ce{HPO4^{2−}}]$ 0.0988 = $[\ce{H2PO4^{−}}]$ 0.001107 = $[\ce{H3PO4}]$ We rearrange the equilibrium for the reaction $\ce{H3PO4 <=> H^+ + H2PO4^-}$ to solve for $\ce{[H^+]}$: $\ce{[H^+]} = \dfrac{\text{K}_{a1}\ce{[H3PO4]}}{\ce{[H2PO4^-]}} = 7.97 \times 10^{-5}$ so everything now checks... 
1.2 Calculate how many moles NaOH Given: (1) $\ce{[H^+]} = 7.96\times 10^{-7}$ (2) 0.1 molar = $\ce{[H3PO4] + [H2PO4^-] + [HPO4^{2-}] + [PO4^{3-}]}$ Assume: (1) $\ce{[H3PO4]} \approx 0$ (2) $\ce{[PO4^{3-}]} \approx 0$ (3) the change in moles of $\ce{H^+}(\text{aq})$ is insignificant From $6.339\times10^{−8} = \dfrac{\ce{[H+][HPO4^{2-}]}}{\ce{[H2PO4^-]}}$ we get $\ce{[HPO4^{2-}] =} \dfrac{6.339\times10^{−8}\,\ce{[H2PO4^-]}}{\ce{[H^+]}} = 0.0796\,\ce{[H2PO4^-]}$ We also know that $0.1000 = \ce{[H2PO4^-] + [HPO4^{2-}] = 1.0796[H2PO4^-]}$ so $\ce{[H2PO4^-] = 0.09263}$ and $\ce{[HPO4^{2-}] = 0.00737}$. Since $\ce{H2PO4^- + OH^- -> HPO4^{2-} + H2O}$, moles NaOH = $(0.020 \text{ liters})(0.00737) = 1.47\times10^{-4}$ 1.3 Calculate how many ml of 0.1 molar solution. liters 0.1 molar NaOH = $\dfrac{1.47\times10^{-4}}{0.1000} = 1.47\times10^{-3}$ ml 0.1 molar NaOH = 1.47
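The arithmetic in sections 1.1-1.3 can be reproduced in a few lines. This sketch uses only the pKa2 value quoted above and makes the same quick-and-dirty simplifying assumptions as the answer (no "Da fix" correction for $\ce{H3PO4}$):

```python
import math

# pKa2 for phosphoric acid, as quoted from Wikipedia in the answer
Ka2 = 10 ** -7.198
c0 = 0.1000                       # mol/L phosphate after making NaH2PO4

# 1.1 initial pH of the NaH2PO4 solution: [H+]^2 = Ka2 * c0
H0 = math.sqrt(Ka2 * c0)
pH0 = -math.log10(H0)

# target pH is two units higher
H_target = 10 ** -(pH0 + 2)

# 1.2 fraction converted to HPO4^2- at the target pH
ratio = Ka2 / H_target            # [HPO4^2-]/[H2PO4^-]
hpo4 = c0 * ratio / (1 + ratio)
moles_naoh = 0.020 * hpo4         # 20.00 ml of solution

# 1.3 volume of 0.1000 molar NaOH
ml_naoh = 1000 * moles_naoh / 0.1000
print(round(pH0, 3), round(ml_naoh, 2))   # -> 4.099 1.47
```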
I'm looking at Lakens' (2017) primer, where he tests the hypothesis that there is a difference between two groups $X_1$ and $X_2$ of magnitude $\Delta=E[X_1]-E[X_2] \neq 0$ by subtracting $\Delta$ from the difference between the sample means $d=M_1-M_2$, and then he computes the $t$ value and $p$ value based on this difference score with a procedure analogous to Welch's t-test (Eqs 3 and 4 on p. 357). This looks wrong to me. IMO, to determine the p value, one should be using the non-central $t$ distribution with non-centrality parameter based on $\Delta$ and a t value based on $d$. This is because, under the hypothesis, $d$ and hence the test statistic $t=d/h$ (where we would substitute $h=\sqrt{s_1^2/n_1+s_2^2/n_2}$ following Welch) does not have a symmetric distribution, because $E[X_1]-E[X_2]=\Delta\neq 0$. Subtracting $\Delta$ from $d$ merely shifts the distribution, but it can't make it symmetric, since $\Delta$ is not a random variable. Another CrossValidated answer claims that "Using the zero centered T distribution here would test the hypothesis assuming that X1 - X2 - 3 is symmetric about zero" (X1 - X2 - 3 corresponds to $X_1-X_2-\Delta$). I don't see how it is possible to obtain such a symmetric test statistic distribution unless $E[X_1]=E[X_2]$, which under the hypothesis is not the case ($E[X_1]-E[X_2]=\Delta\neq 0$). So, which is the correct procedure to conduct an equivalence test? Literature Lakens, D. (2017). Equivalence tests: a practical primer for t tests, correlations, and meta-analyses. Social Psychological and Personality Science, 8(4), 355-362.
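One way to probe the disagreement is by simulation: draw data with a true mean difference of exactly $\Delta$, form the shifted Welch statistic $(d-\Delta)/h$, and inspect its distribution. The parameters below (Δ = 3, unequal variances and sample sizes) are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
delta, n1, n2, reps = 3.0, 40, 50, 20000

# draw data with E[X1] - E[X2] exactly equal to delta
x1 = rng.normal(delta, 2.0, (reps, n1))
x2 = rng.normal(0.0, 3.0, (reps, n2))

d = x1.mean(axis=1) - x2.mean(axis=1)
h = np.sqrt(x1.var(axis=1, ddof=1) / n1 + x2.var(axis=1, ddof=1) / n2)
t_vals = (d - delta) / h          # Lakens-style shifted Welch statistic

# if the shift re-centres the statistic, these should be ~0 and ~0.5
print(np.mean(t_vals), np.mean(t_vals > 0))
```

Since $d$ has expectation exactly $\Delta$, the numerator $d-\Delta$ has mean zero and, under normality, is independent of the variance estimates, which is why the shifted statistic behaves approximately like a central Welch $t$; whether that settles the question is for the answers to argue.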
List of readability tests The Dale–Chall formula Edgar Dale, a professor of education at Ohio State University, was one of the first critics of Thorndike's vocabulary-frequency lists. He claimed that they did not distinguish between the different meanings that many words have. He created two new lists of his own. One, his "short list" of 769 easy words, was used by Irving Lorge in his formula. The other was his "long list" of 3,000 easy words, which were understood by 80 percent of fourth-grade students. In 1948, he incorporated this list in a formula which he developed with Jeanne S. Chall, who was to become the founder of the Harvard Reading Laboratory. To apply the formula: Select several 100-word samples throughout the text. Compute the average sentence length in words (divide the number of words by the number of sentences). Compute the percentage of words NOT on the Dale–Chall word list of 3,000 easy words. Compute this equation: Raw Score = 0.1579 PDW + 0.0496 ASL + 3.6365 Where: Raw Score = uncorrected reading grade of a student who can answer one-half of the test questions on a passage. PDW = percentage of difficult words not on the Dale–Chall word list. ASL = average sentence length. Finally, to compensate for the "grade-equivalent curve," apply the following chart for the Final Score: raw score 4.9 and below is Grade 4 and below; 5.0 to 5.9 is Grades 5–6; 6.0 to 6.9 is Grades 7–8; 7.0 to 7.9 is Grades 9–10; 8.0 to 8.9 is Grades 11–12; 9.0 to 9.9 is Grades 13–15 (college); 10 and above is Grades 16 and above. [1] Correlating 0.93 with comprehension as measured by reading tests, the Dale–Chall formula is the most reliable formula and is widely used in scientific research. Go to the Okapi Web site for a computerized version of this formula. In 1995, Dale and Chall published a new version of their formula with an upgraded word list. 
[2] Fog Formula [math] \mbox{Fog grade} = \frac{ \mbox{words} }{ \mbox{sentences} } + \frac{ \mbox{affixes}-\mbox{Personal Pronouns} }{ \frac{ \mbox{words} }{ \mbox{sentences} } } [/math] Gunning Fog The Gunning Fog, sometimes called the Fog index, is a formula developed by Robert Gunning. It was first published in his book The Technique of Clear Writing in 1952. It became popular due to the ease with which the score can be calculated without a calculator. The formula has been criticized because it uses only sentence length. The critics argue that texts created with the formula will use shorter sentences without using simpler words. However, this criticism confuses prediction of difficulty with production of prose (writing). The role of readability tests is to predict difficulty; writing better prose is quite another matter. As discussed in prose difficulty, sentence length is an index of syntactical difficulty. [3] Formula [math] \mbox{Gunning Fog grade} = 0.4 \times \left [ \frac{ \mbox{words} }{ \mbox{sentences} } + \left ( 100 \times \frac{ \mbox{hard words} }{\mbox{words}} \right ) \right ] [/math] Where: words is the number of words, sentences is the number of sentences, and hard words is the number of words with 3 or more syllables (excluding endings) which are not names or compound words. Spache The Spache method compares words in a text to a list of words which are familiar in everyday writing. The words that are not on the list are called unfamiliar. The number of words per sentence is counted. This number and the percentage of unfamiliar words are put into a formula. The result is a reading age. Someone of this age should be able to read the text. It is designed to work on texts for children in primary education, grades 1st to 7th. 
Formula [math] \mbox{Spache grade} = \left ( 0.141 \times \frac{ \mbox{words} }{ \mbox{sentences} }\right )+ \left ( 0.086 \times \frac{ \mbox{unfamiliar words} }{ \mbox{words} } \right ) + 0.839 [/math] In 1974 Spache revised his formula to: [math] \mbox{Spache grade (revised)} = \left ( 0.121 \times \frac{ \mbox{words} }{ \mbox{sentences} }\right )+ \left ( 0.082 \times \frac{ \mbox{unfamiliar words} }{ \mbox{words} } \right ) + 0.659 [/math] Coleman-Liau Index Formula The calculations are performed in two steps. The first step finds the Estimated Cloze Percentage. The second step calculates the actual grade. [math] \begin{array}{lcl} \mbox{ECP} = 141.8401 - \left ( 0.214590 \times \mbox{characters} \right ) + \left ( 1.079812 \times \mbox{sentences} \right )\\ \mbox{CLI} = \left ( -27.4004 \times \frac{\mbox{ECP}}{100} \right ) + 23.06395 \end{array} [/math] A simpler version also exists that is not as accurate: [math] \mbox{CLI} = \left ( 5.88 \times \frac{\mbox{characters}}{\mbox{words}} \right ) - \left ( 29.5 \times \frac{ \mbox{sentences} }{ \mbox{words} } \right ) - 15.8 [/math] Automated Readability Index Formula [math] \mbox{ARI} = 4.71 \times \frac{ \mbox{letters} }{ \mbox{words} } + 0.50 \times \frac{ \mbox{words} }{ \mbox{sentences} } - 21.43 [/math] SMOG The SMOG formula is a way of estimating the difficulty of writing. It was developed by G. Harry McLaughlin in 1969 to make calculations as simple as possible. It has a high correlation (0.985, i.e. $r^2 \approx 0.97$) between the score and the actual grade at which students were able to fully understand the piece of writing. Like Gunning Fog, the formula uses words which have 3 or more syllables as an indicator of hardness; these words are said to be polysyllabic. 
Formula

The original formula was given for a 30-sentence sample:

[math] \mbox{SMOG grade} = 1.0430 \sqrt{ \mbox{hard words in 30 sentences} } \ + 3.1291 [/math]

This can be adjusted to work with any number of sentences:

[math] \mbox{SMOG grade} = 1.0430 \sqrt{ \mbox{hard words} \times \frac{30}{ \mbox{sentences} } } \ + 3.1291 [/math]

McLaughlin also created directions for an approximate version which can be done with just mental math:

Count the number of words with 3 or more syllables, excluding names, in a set of 30 sentences.
Take the square root of the nearest perfect square.
Add 3 to get the estimated SMOG grade.

References

Dale, E. and Chall, J. S. 1948. "A formula for predicting readability". Educational Research Bulletin, Jan. 21 and Feb. 17, 27:1-20, 37-54.
Chall, J. S. and Dale, E. 1995. Readability Revisited: The New Dale-Chall Readability Formula. Cambridge, MA: Brookline Books.
Klare, G. R. 1963. The Measurement of Readability. Iowa State University Press, Ames, IA.
Senter, R. J. and Smith, E. A. (November 1967). Automated Readability Index. Wright-Patterson Air Force Base. p. iii. AMRL-TR-6620. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD0667273. Retrieved 2012-03-18.
DuBay, W. H. (2004). "The principles of readability". Costa Mesa, CA: Impact Information. http://eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/1b/bf/46.pdf. Retrieved 2008-01-10.
Spache, G. (1953). "A new readability formula for primary-grade reading materials". The Elementary School Journal 53(7): 410-413. http://links.jstor.org/sici?sici=0013-5984(195303)53%3A7%3C410%3AANRFFP%3E2.0.CO%3B2-D. Retrieved 2008-01-10.
Coleman, M. and Liau, T. L. (1975). "A computer readability formula designed for machine scoring". Journal of Applied Psychology 60(2): 283-284.
The CRC Handbook of Chemistry and Physics [1] provides data for the solubility of $CO_2$ in $H_2O$, originally from reference [2]. The reported data are mole ratio solubilities $\chi$ of the gas at a partial pressure of one atmosphere (101.325 kPa), but can readily be converted into Henry's law solubilities in other units: $$H_{CP} = 1000 \delta_{H_2O} \chi /M_{H_2O} = 997 \times 6.15 \times 10^{-4}/18 = 0.034\ \mathrm{mol/(L\ atm)}$$ or, in molal and bar units, $$H_{mP} = 1000\chi /(1.013\ \mathrm{(bar/atm)}\ M_{H_2O})=1000\times 6.15 \times 10^{-4}/(18 \times 1.013) = 0.034\ \mathrm{mol/(kg\ bar)}$$ Reference [2] is one of the original sources of the data tabulated in Wikipedia, compiled by Rolf Sander in reference [3]. Other references in [3] provide similar values of $H_{CP} \approx 0.034\ \mathrm{mol/(L\ atm)}$. The value of $H_{CP}$ in Wikipedia is therefore consistent with the solubility reported in the CRC Handbook. The article by Prasetyo and Hofer [4], referred to in the OP, cites the CRC Handbook as the source of the following reported experimental value (see Table 4): $$\Delta_{solv} G^o = 0.24\ \mathrm{kcal/mol}$$ However, from $H_{mP} = 0.034\ \mathrm{mol/(kg\ bar)}$ (or $H_{CP} = 0.034\ \mathrm{mol/(L\ atm)}$) we obtain $$\Delta_{solv} G^o = -RT\ln\left(H_{mP} \frac{P^o}{m^o}\right)= 2.0\ \mathrm{kcal/mol}$$ which is far larger than the value reported in the article. To attempt further progress, note that comparing standard free energies of solvation requires awareness of the associated reference states. Prasetyo and Hofer indicate (pg 6474) that their choice of standard state is P° = 1 bar, m° = 1 mol/kg, and T = 298.15 K. The experimental values Prasetyo and Hofer compare to simulation results (see Table 4) presumably refer to this standard state. But this is approximately the same as the molar standard state associated with the data reported in Wikipedia (P° = 1 atm, c° = 1 M, 298.15 K). Certainly the energies computed in the two scales don't differ significantly, as shown above.
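The unit conversions and the free-energy estimate above are easy to check numerically. A minimal sketch using the values quoted in the text (997 g/L for the density of water, 18 g/mol for its molar mass):

```python
import math

R = 1.987204e-3   # gas constant, kcal/(mol K)
T = 298.15        # K
chi = 6.15e-4     # mole ratio solubility of CO2 at 1 atm (CRC data)

H_cp = 997 * chi / 18.0             # mol/(L atm)
H_mp = 1000 * chi / (18.0 * 1.013)  # mol/(kg bar)

# Standard free energy of solvation implied by the Henry constant
# (standard state P° = 1 bar, m° = 1 mol/kg):
dG = -R * T * math.log(H_mp)        # kcal/mol
print(round(H_cp, 3), round(H_mp, 3), round(dG, 1))  # 0.034 0.034 2.0
```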
I therefore suspect that a different standard state was used in their calculation, or free energies for different (chemical) processes are being reported. It is worth adding that the value $\Delta_{solv} G^o = 0.24 kcal/mol$ is encountered in other articles that also reference the 75th Edition of the CRC Handbook. References [1] The CRC Handbook of Chemistry and Physics, 85th Edition, section 8-87, SOLUBILITY OF SELECTED GASES IN WATER, L. H. Gevantman. [2] R. Crovetto, Evaluation of Solubility Data for the System CO2-H2O, J. Phys. Chem. Ref. Data, 20, 575, 1991. [3] R. Sander: Compilation of Henry's law constants (version 4.0) for water as solvent, Atmos. Chem. Phys., 15, 4399-4981 (2015). [4] N. Prasetyo and T.S. Hofer J. Chem. Theory Comput. 2018, 14, 12, 6472-6483
Mon, 24 Sep 2018

A long time ago, I wrote up a blog article about how to derive the linear regression formulas from first principles. Then I decided it was not of general interest, so I didn't publish it. (Sometime later I posted it to math stack exchange, so the effort wasn't wasted.)

The basic idea is, you have some points !!(x_i, y_i)!!, and you assume that they can be approximated by a line !!y=mx+b!!. You let the error be a function of !!m!! and !!b!!:

$$\varepsilon(m, b) = \sum (mx_i + b - y_i)^2$$

and you use basic calculus to find !!m!! and !!b!! for which !!\varepsilon!! is minimal. Bing bang boom.

I knew this for a long time but it didn't occur to me until a few months ago that you could use basically the same technique to fit any other sort of curve. For example, suppose you think your data is not a line but a parabola of the type !!y=ax^2+bx+c!!. Then let the error be a function of !!a, b,!! and !!c!!:

$$\varepsilon(a,b,c) = \sum (ax_i^2 + bx_i + c - y_i)^2$$

and again minimize !!\varepsilon!!. You can even get a closed form, as you can with ordinary linear regression.

I especially wanted to try fitting hyperbolas to data that I expected to have a Zipfian distribution. For example, take the hundred most popular names for girl babies in Illinois in 2017. Is there a simple formula which, given an ordinal number like 27, tells us approximately how many girls were given the 27th most popular name that year? ("Scarlett"? Seriously?)

I first tried fitting a hyperbola of the form !!y = c + \frac ax!!. We could, of course, take !!y_i' = \frac 1{y_i}!! and then try to fit a line to the points !!\langle x_i, y_i'\rangle!! instead. But this will distort the measurement of the error. It will tolerate gross errors in the points with large !!y!!-coordinates, and it will be extremely intolerant of errors in points close to the !!x!!-axis. This may not be what we want, and it wasn't what I wanted.
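For the straight-line case, the closed form that falls out of the calculus is only a few lines of code. A minimal sketch (an illustration, not the unpublished article's code):

```python
def fit_line(points):
    # Closed-form least-squares fit of y = m*x + b.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

print(fit_line([(0, 1), (1, 3), (2, 5)]))  # (2.0, 1.0): the line y = 2x + 1
```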
So I went ahead and figured out the Zipfian regression formulas: $$ \begin{align} a & = \frac{HY-NQ}D \\ c & = \frac{HQ-JY}D \end{align} $$ Where: $$\begin{align} H & = \sum x_i^{-1} \\ J & = \sum x_i^{-2} \\ N & = \sum 1\\ Q & = \sum y_ix_i^{-1} \\ Y & = \sum y_i \\ D & = H^2 - NJ \end{align} $$ When I tried to fit this to some known hyperbolic data, it worked just fine. For example, given the four points !!\langle1, 1\rangle, \langle2, 0.5\rangle, \langle3, 0.333\rangle, \langle4, 0.25\rangle!!, it produces the hyperbola $$y = \frac{1.00018461538462}{x} - 0.000179487179486797.$$ This is close enough to !!y=\frac1x!! to confirm that the formulas work; the slight error in the coefficients is because we used !!\bigl\langle3, \frac{333}{1000}\bigr\rangle!! rather than !!\bigl\langle3, \frac13\bigr\rangle!!. Unfortunately, these formulas did not fit the name data well. I think maybe I need to be using some hyperbola with more parameters, maybe something like !!y = \frac a{x-b} + c!!. In the meantime, here's a trivial script for fitting !!y = \frac ax + c!! hyperbolas to your data: [ Addendum 20180925: Shreevatsa R. asked a related question on StackOverflow and summarized the answers. The problem is more complex than it might first appear. Check it out. ]
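A minimal sketch of such a script, directly implementing the closed-form formulas for !!a!! and !!c!! with the !!H, J, N, Q, Y, D!! sums defined above (an illustration, not the original script):

```python
def fit_hyperbola(points):
    # Closed-form least-squares fit of y = a/x + c.
    H = sum(1 / x for x, _ in points)
    J = sum(x ** -2 for x, _ in points)
    N = len(points)
    Q = sum(y / x for x, y in points)
    Y = sum(y for _, y in points)
    D = H * H - N * J
    a = (H * Y - N * Q) / D
    c = (H * Q - J * Y) / D
    return a, c

# Exact points on y = 1/x recover a = 1, c = 0 (up to rounding):
print(fit_hyperbola([(1, 1), (2, 0.5), (3, 1 / 3), (4, 0.25)]))
```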
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content. bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm triple poster Code: Select all #C A period 8 oscillator that was found in 1972. #C http://www.conwaylife.com/wiki/index.php?title=Roteightor x = 14, y = 14, rule = 23/3 bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o 3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo! fun bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm someone who likes this Code: Select all #C A period 8 oscillator that was found in 1972. #C http://www.conwaylife.com/wiki/index.php?title=Roteightor x = 14, y = 14, rule = 23/3 bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o 3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo! fun bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm someone who probably broke the record for the most posts in a row Code: Select all #C A period 8 oscillator that was found in 1972. #C http://www.conwaylife.com/wiki/index.php?title=Roteightor x = 14, y = 14, rule = 23/3 bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o 3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo! fun bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm someone who is still doing this Code: Select all #C A period 8 oscillator that was found in 1972. #C http://www.conwaylife.com/wiki/index.php?title=Roteightor x = 14, y = 14, rule = 23/3 bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o 3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo! fun bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm someone who will stop now Code: Select all #C A period 8 oscillator that was found in 1972. 
#C http://www.conwaylife.com/wiki/index.php?title=Roteightor x = 14, y = 14, rule = 23/3 bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o 3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo! fun fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm Someone who posted 7 times in a row I like making rules A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Someone who responded to someone who was wrong about breaking the record for most posts in a row (see here ). x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Moosey Posts: 2486 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact: A person who is being responded to by a person who is writing what this link goes to I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" testitemqlstudop Posts: 1186 Joined: July 21st, 2016, 11:45 am Location: in catagolue Contact: Somebody who used a self-reference instead of a previous-person-reference when the reference relations are relaxed. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Somebody who I replied to. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce testitemqlstudop Posts: 1186 Joined: July 21st, 2016, 11:45 am Location: in catagolue Contact: Somebody who made the same aforementioned error that I attempted to correct by the second-previous post. 
PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode this post before reading the post itself. Last edited by PkmnQ on May 19th, 2019, 5:42 am, edited 1 time in total. Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who can't use the new paste rle feature of LifeViewer. Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who likes to size stack every now and then. Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who has posted 4 times in a row, including this one Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who is going for 12 posts (5) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! 
i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who is wondering why he did this (6) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who is already past the halfway mark (7) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who is running out of descriptions for himself (8) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Alternating rule (9) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who wants to have a profile picture (10) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! 
i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who is almost done (11) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Someone who has posted 12 times in a row and is now done Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf testitemqlstudop Posts: 1186 Joined: July 21st, 2016, 11:45 am Location: in catagolue Contact: PkmnQ wrote:this post before reading the post itself. Someone who either doesn't have or chooses to ignore the rules of English Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A person that is trick work to. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2)
When writing an array assignment in an algorithm, $a[i]\gets v$ is conventionally used. Is the notation $a_{i}\gets v$ also used? If the algorithm uses a struct or equivalent, some notation is required for property access. On some occasions $p(s)$ is used, where $p$ is the property. Using $s \rightarrow p$ clashes with the assignment arrow. Using $s_{p}$ looks like $p$ is an index with $p\in \mathbb{Z}^{+}$. Should $s["p"]$ be used? What is the conventional notation for property access in an algorithm? Is there any reference where I can check different notations for these two scenarios?
In lambda calculus there are three types of reduction:

$\alpha$-renaming
$\beta$-reduction
$\eta$-conversion

The use of $\eta$ in $\eta$-conversion seems rather strange to me. Since they already used $\alpha$ and $\beta$, I would expect the pattern to continue with $\gamma$, the third Greek letter, but $\eta$ is used instead. Why is $\eta$ used in $\eta$-conversion?
Suppose $X$ is an infinite set and let $\mathcal{T}_{cf} = \{ U\subset X\ |\ X\setminus U\ \mathrm{is\ finite\ or}\ U=\emptyset\}$. Let $p$ be an element that is not in $X$ and let $Y=X\cup \{p\}$. Topologize $Y$ by declaring a set $U\subset Y$ to be open if and only if either $U\subset X$, or $p \in U$ and $X\setminus U$ is finite. $Y$ is the Alexandrov compactification, also sometimes called the one-point compactification, of the discrete space $(X,\mathcal{P}(X))$. I leave it to you to verify that $Y$ is a compact Hausdorff space. Let $x_0$ be a fixed member of $X$ and define $f:Y\rightarrow X$ as follows:$$f(a) = \begin{cases} x & \mathrm{if}\ a=x\in X \\ x_0 & \mathrm{if} \ a=p. \end{cases}$$ $f$ is the desired continuous surjection. Let $V$ be an open set of $X$, then:$$f^{-1}(V) = \begin{cases} V & \mathrm{if}\ x_0\not\in V \\ V\cup\{p\} & \mathrm{if} \ x_0 \in V. \end{cases}$$ If $x_0 \not\in V$, then $f^{-1}(V)$ is trivially open in $Y$, so we need only check the other case. Without loss of generality, we may assume that $\emptyset \subsetneq V\subsetneq X$; then $X\setminus V$ must be a non-empty finite set, say $X\setminus V= \{x_1, \ldots , x_n\}$. Consequently $Y\setminus f^{-1}(V) = Y\setminus (V\cup \{p\})= X\setminus V$ is finite, so $f^{-1}(V)$ contains $p$ and has finite complement; it is therefore open in $Y$, and thus $f$ is continuous.
The corrections made to the measurements described in step 16 of the Frequency Discriminator test setup procedure are somewhat inscrutable. For those who wish to understand the rationale behind them, I provide the mathematical justification in this message. Those who don't like or can't be bothered with math are encouraged to skip the remainder of this message.

Warning: Algebra ahead

HP product note 11729C-2 provides the mathematical justification for the corrections specified in step 16. However, this justification is not isolated to a single section of the product note; most of the material is found on pgs 16, 24-27 and 40. One complication that arises is that the product note includes some corrections required when using a swept-tuned spectrum analyzer, corrections that are not applicable to an FFT spectrum analyzer. Specifically, the Noise Bandwidth of analog HP spectrum analyzers is used for one correction; whereas the Effective Noise Bandwidth correction of the FFT windowing function is already applied by the PicoScope 6 software and requires no further correction. In addition, a correction factor for the "log-shaping and detection circuitry of an analog spectrum analyzer" is applied. Again, this is not applicable to FFT spectrum analyzers. Consequently, the correction procedure given in the test setup description elides these steps, and the mathematics justifying their use is modified to eliminate terms corresponding to them. In the derivation of the equation for the frequency discriminator constant, the final result is: \$\nu(t)=K_{d}\varphi(t)\$, where \$\nu(t)\$ is the voltage output of the Phase Detector after low-pass filtering, \$K_{d}\$ is the (frequency) discriminator constant, and \$\varphi(t)\$ is the instantaneous frequency corresponding to the output voltage.
In the HP product note, this is presented in a slightly different form: \$\Delta V = K_{d}\Delta f\$, where \$\Delta V\$ is the change in output voltage and \$\Delta f\$ is the change in instantaneous frequency. These are mathematically equivalent formulations. The equations above are time domain descriptions, whereas the spectrum measurements are in the frequency domain. Consequently, additional notation is necessary to identify these measurements. (Subsequent page references are citations to the HP product note.) On page 6, two symbols are defined to identify the relevant spectral data. First, \$S_{v}(f_{m})\$ identifies the "power spectral density of the voltage fluctuations out of the detection system" at the offset frequency \$f_{m}\$. \$S_{\Delta f}(f_{m})\$ is the spectral density of the frequency fluctuations at the offset frequency \$f_{m}\$. Thus, \$S_{v}(f_{m})\$ represents the power spectral density of the signal \$\nu(t)\$ (this is what is measured by the low frequency spectrum analyzer during an experiment) and \$S_{\Delta f}(f_{m})\$ represents the frequency spectral density of the signal \$\varphi(t)\$. While discussing symbols representing spectral densities, it is convenient to mention another quantity that plays an important role in the characterization of phase noise, \$\mathscr{L}(f_{m})=\frac{P_{SSB}(f_{m})}{P_{Carrier}}\$. \$P_{Carrier}\$ is the power of the (oscillator) carrier signal. \$P_{SSB}(f_{m})\$ is the single side-band power of a phase modulation sideband at the offset frequency \$f_{m}\$. When discussing phase noise, most specifications provide values of \$\mathscr{L}(f_{m})\$, so there is a requirement to convert the spectra measured by the frequency discriminator into the form \$\mathscr{L}(f_{m})\$.
On page 6, the relationship between \$S_{\Delta f}(f_{m})\$ and \$\mathscr{L}(f_{m})\$ is derived: \$\mathscr{L}(f_{m}) = \frac{S_{\Delta f}(f_{m})}{2f_{m}^2}\$. Spectral densities are continuous functions of frequency. When the set of frequencies associated with a power measurement is countable, this is not true and the result is referred to as a spectrum. Since the measurements by a frequency discriminator test setup quantize frequency, phase noise characterizations based on measurement deal with spectra rather than spectral densities. We continue to use the notation \$S_{v}(f_{m})\$, \$S_{\Delta f}(f_{m})\$, and \$\mathscr{L}(f_{m})\$ to identify the spectra arising from quantization of the identically named spectral densities. Phase noise is canonically described as arising from FM modulations of the carrier signal by stochastic (noise) processes within the oscillator. The products of these processes add linearly to create the total phase noise spectrum or spectral density. Consider the value associated with \$S_{v}(f_{m})\$, for a particular offset frequency \$f_{m}\$. This is the power of the \$f_{m}\$ component of the spectrum, and the total phase noise spectrum is mathematically equivalent to a sum of spectra, where each comprises a single tone spectrum for the frequency \$f_{m}\$ (m ranging over all values for \$S_{v}(f_{m})\$). To be clear, the single tone spectra are not those produced by the noise processes, which generally create multi-tone spectra. The single tone spectra are a mathematical decomposition useful when considering how to measure the discriminator constant. In particular, measurement of the system response to a single tone input contains all the information needed to compute the discriminator constant. This is the objective of the calibration steps described in the test setup procedure presented previously, specifically in steps 3-5.
The mathematical justification for the corrections specified in step 16 is found on page 40. It begins with the casual assertion that for m < 0.2 rad, where m is the modulation index of the modulation: \$\frac{P_{ssb}}{P_{carrier}} = \frac{m^2}{4}\$ I searched for hours on the internet to find a justification for this result without success (it may be that it exists in some textbook, of which I do not have a copy). Finally, I found enough information to derive it. On this web page, it is noted that for sufficiently small FM modulation indices (which on other web pages is given as m < 0.2), the Bessel J coefficients are: \$J_{0} = 1\$, \$J_{1} = \frac{m}{2}\$, and \$J_{n} = 0, n>1\$. As an aside, the correct constraint on the modulation index is m < 0.2, not m < 0.2 rad, since the modulation index is defined as: \$ m = \frac{\Delta f_{peak}}{f_{m}}\$, where \$\Delta f_{peak}\$ is the peak frequency deviation of the FM modulation and \$f_{m}\$ is the FM rate. This is a unitless ratio. The constraint m < 0.2 rad is appropriate for Phase Modulation, and I found several references on the internet where it is erroneously cited for FM modulation. In the calibration procedure (step 3), the FM rate is set to 1 kHz and the frequency deviation to 100 Hz. This yields a modulation index of 0.1, which satisfies the given constraint. In a slide presentation available on the internet (on slide "Angle and Pulse Modulation - 7"), the total power \$P_{T}\$ of an FM modulated signal with carrier power \$P_{C}\$ is given as: \$P_{T} = P_{C}({J_{0}}^2 + 2({J_{1}}^2+{J_{2}}^2+ ...))\$ Noting the values of \$J_{i}\$ when the constraint m < 0.2 holds and substituting into this equation: \$P_{T} = P_{C}(1 + 2(\frac{m^2}{4})) = P_{C} + P_{C}\frac{m^2}{2}\$ In the last expression to the right of the equal sign, the first term represents the carrier power and the second term represents the power of the double sideband. The single sideband power is 1/2 of this, i.e., \$P_{ssb} = P_{C}\frac{m^2}{4}\$.
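The small-index approximation can be checked numerically against the Bessel series itself. A sketch using a truncated power series for J_n (not a library routine); the exact single-sideband-to-carrier power ratio is (J_1/J_0)^2, which the sketch compares with m^2/4 at the calibration index m = 0.1:

```python
from math import factorial

def bessel_j(n, x, terms=20):
    # Truncated power series for the Bessel function of the first kind J_n(x).
    return sum((-1) ** k / (factorial(k) * factorial(k + n)) * (x / 2) ** (2 * k + n)
               for k in range(terms))

m = 0.1  # modulation index from the calibration step (100 Hz dev / 1 kHz rate)
exact = (bessel_j(1, m) / bessel_j(0, m)) ** 2  # exact SSB/carrier power ratio
approx = m ** 2 / 4                             # small-index approximation
print(exact, approx)  # agree to better than 1%
```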
Dividing this expression by \$P_{C}\$ (aka \$P_{Carrier}\$) yields the assertion made at the beginning of page 40. Given \$\frac{P_{ssb}}{P_{carrier}} = \frac{m^2}{4}\$, we can substitute the values measured during calibration in steps 3-5 and develop an expression for the single-sideband to carrier power ratio in terms of these values. First, to ensure clarity, the derivation on page 40 identifies the calibration values as \$\Delta f_{peak_{cal}}\$ and \$f_{m_{cal}}\$ and uses them to express the modulation index, \$m = \frac{{\Delta f_{peak_{cal}}}}{f_{m_{cal}}}\$, so: \$\frac{P_{ssb}}{P_{carrier}} = \frac{m^2}{4} = \frac{1}{4}\frac{{(\Delta f_{peak_{cal}})}^2}{(f_{m_{cal}})^2}\$ At this point it is useful to mention that, so far, we have not been dealing with values expressed in logarithmic units. Rather, the values used in the expressions are in linear units. This is important in the next step of the derivation. The measurement of P-cal and Delta_SB-cal on the spectrum analyzer generally will be made in logarithmic units (e.g. dBm). To use these values in the derivation, we must express them in linear units. To retain clarity, the distinction between these values in linear and logarithmic units is made by appending them with either [Lin] or [dBm]. Thus, from this point, P-cal in linear units (e.g. milliwatts) is represented by the symbol \$P_{cal}[Lin]\$ and in logarithmic units by the symbol \$P_{cal}[dBm]\$. Similarly, Delta_SB-cal in linear units is represented by \$\Delta SB_{cal}[Lin]\$ and in logarithmic units by \$\Delta SB_{cal}[dBm]\$. The [Lin] and [dBm] notation applies to the other symbols as well. In step 4 of the test setup procedure, the difference between the carrier and sideband power is measured by subtracting the former from the latter (this assumes the SSA3032X is displaying results in dBm). In other words, \$P_{ssb}[dBm] - P_{carrier}[dBm] = \Delta SB_{cal}[dBm]\$.
In linear units the subtraction becomes division, and therefore: \$\frac{P_{ssb}[Lin]}{P_{carrier}[Lin]} = \Delta SB_{cal}[Lin]\$. Consequently, we can write: \$\frac{P_{ssb}}{P_{carrier}} = \frac{m^2}{4} = \frac{1}{4}\frac{{(\Delta f_{peak_{cal}})}^2}{(f_{m_{cal}})^2} = \Delta SB_{cal}[Lin]\$ The last equality can be re-expressed as: \$\Delta {f}^2_{peak_{cal}} = 4 {f}^2_{m_{cal}} 10^{{\frac{\Delta SB_{cal}[dBm]}{10}}}\$, since \$10^{{\frac{\Delta SB_{cal}[dBm]}{10}}} = \Delta SB_{cal}[Lin]\$ The derivation of the equation for the frequency discriminator constant specifies voltage amplitudes for the oscillator (DUT), \$V_{DUT-AMP}\$, and the reference signal, \$V_{R-AMP}\$. However, it fails to indicate whether these amplitudes are peak-to-peak values or RMS values. This follows the formulation in Appendix A (page 34), on which the derivation is based, which also does not indicate whether peak-to-peak or RMS voltages are meant. On page 40, however, the discriminator constant is defined implicitly as: \$K_{d} = \frac{\Delta V_{rms}}{\Delta f_{rms}}\$ So far, we have derived the equivalent expression for \$\Delta {f}^2_{peak_{cal}}\$, not \$\Delta {f}^2_{rms_{cal}}\$. This is easily fixed, since \$\Delta {f}_{peak_{cal}} = \sqrt{2} \Delta {f}_{rms_{cal}}\$ and therefore: \$\Delta {f}^2_{rms_{cal}} = 2 {f}^2_{m_{cal}} 10^{{\frac{\Delta SB_{cal}[dBm]}{10}}}\$ Substituting this expression into the definition of \$K_{d}\$ yields: \$K^2_{d} = \frac{\Delta V^2_{rms}}{2 {f}^2_{m_{cal}} 10^{{\frac{\Delta SB_{cal}[dBm]}{10}}}} = \frac{P_{cal}[Lin]}{2 {f}^2_{m_{cal}} 10^{{\frac{\Delta SB_{cal}[dBm]}{10}}}}\$, since \$P_{cal}[Lin]\$ is the response expressed as power (\$\Delta V^2_{rms}\$) to the calibration input. Applying \$10 log_{10}()\$ to both sides of the equation re-expresses it in terms of dB: \$2K_{d}[dBm] = P_{cal}[dBm] - (\Delta SB_{cal}[dBm] + 20 log_{10}(f_{m_{cal}})+3dB)\$ Note: There is a mistake on page 40, which is corrected in the equation given above.
On page 40, the left hand side of the equals sign is given as \$K_{d}[dBm]\$, rather than \$2K_{d}[dBm]\$. It turns out that this mistake is cancelled out by an error in the equation given for \$S_{\Delta f}(f_{m})\$ on page 16, which should be: \$S_{\Delta f}(f_{m}) = S_{v}(f_{m}) - 2K_{d}\$ I am deliberately leaving off the units in this equation, since as stated on page 16, \$S_{\Delta f}(f_{m})\$ is in units of [dBHz/Hz], whereas both \$S_{v}(f_{m})\$ and \$K_{d}\$ are given in units of [dBm]. How one gets a quantity in [dBHz/Hz] by subtracting two quantities in [dBm] is beyond my comprehension. In fact the whole document is riddled with equations that combine units in such a way as to be completely baffling. Anyway, the desired final result is \$\mathscr{L}(f_{m})\$, and this is expressed in terms of \$S_{\Delta f}(f_{m})\$ on page 7: \$\mathscr{L}(f_{m}) = S_{\Delta f}(f_{m}) - 20 log_{10}(\frac{f_{m}}{1 Hz}) - 3 dB\$ \$\;\;\;\;\;\;\;\;\;= S_{\Delta f}(f_{m}) - 20 log_{10}(f_{m}) - 3 dB\$, where \$20 log_{10}(f_{m})\$ in the last expression to the right of the equal sign is written without the explicit reference to its units. Substituting the equation for \$S_{\Delta f}(f_{m})\$, and into that the equation for \$2K_{d}\$, gives: \$\mathscr{L}(f_{m}) = S_{v}(f_{m}) - (P_{cal}[dBm] - (\Delta SB_{cal}[dBm]\$ \$\;\;\;\;\;\;\;\;\;\;\;\;\; + 20 log_{10}(f_{m_{cal}})+3dB)) - 20 log_{10}(f_{m}) - 3 dB\$ \$\;\;\;\;\;\;\;\;\;= S_{v}(f_{m}) - P_{cal}[dBm] + \Delta SB_{cal}[dBm] - 20 log_{10}(\frac{f_{m}}{f_{m_{cal}}})\$ Recalling that \$S_{v}(f_{m})\$ is what is measured by the low frequency spectrum analyzer during an experiment, the last equation to the right of the equal sign justifies the corrections made in step 16 (except adding 10 dBm, which is already justified in the description of the step).
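The final correction formula is straightforward to evaluate once the calibration values are in hand. A minimal sketch; the measurement values below are hypothetical placeholders for illustration, not numbers from the product note:

```python
import math

# Hypothetical example values -- substitute your own measurements.
S_v = -90.0       # measured baseband spectrum at f_m, dBm
P_cal = -10.0     # response to the calibration tone, dBm
dSB_cal = -46.0   # sideband-to-carrier ratio from step 4, dB
f_m_cal = 1e3     # calibration FM rate, Hz
f_m = 10e3        # offset frequency of interest, Hz

# L(f_m) = S_v(f_m) - P_cal + dSB_cal - 20*log10(f_m / f_m_cal)
L_fm = S_v - P_cal + dSB_cal - 20 * math.log10(f_m / f_m_cal)
print(L_fm)  # -146.0 (dBc/Hz, before the separate +10 dBm adjustment)
```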
Learning Objectives

To get a simple overview of the origin of color and magnetism in complex ions.

Electromagnetic radiation is a form of energy that is produced by oscillating electric and magnetic disturbances, or by the movement of electrically charged particles traveling through a vacuum or matter. Electron radiation is released as photons, which are bundles of light energy that travel at the speed of light as quantized harmonic waves. This energy is grouped by wavelength into the categories of the electromagnetic spectrum, and has certain characteristics, including amplitude, wavelength, and frequency (Figure \(\PageIndex{1}\)). General properties of all electromagnetic radiation include:

Electromagnetic radiation can travel through empty space, while most other types of waves must travel through some sort of substance. For example, sound waves need either a gas, solid, or liquid to pass through to be heard.
The speed of light (\(c\)) is always a constant (\(2.99792458 \times 10^8\ m\ s^{-1}\)).
Wavelengths (\(\lambda\)) are measured between the distances of either crests or troughs.

The energy of a photon is expressed by Planck's law in terms of the frequency (\(\nu\)) of the photon \[E=h\nu \label{24.5.1}\] Since \(\lambda \nu =c\) for all light, Planck's law can also be expressed in terms of the wavelength of the photon \[E = h\nu = \dfrac{hc}{\lambda} \label{24.5.2}\] If white light is passed through a prism, it splits into all the colors of the rainbow (Figure \(\PageIndex{2}\)). Visible light is simply a small part of an electromagnetic spectrum most of which we cannot see - gamma rays, X-rays, infra-red, radio waves and so on. Each of these has a particular wavelength, ranging from \(10^{-16}\) meters for gamma rays to several hundred meters for radio waves. Visible light has wavelengths from about 400 to 750 nm (1 nanometer = \(10^{-9}\) meters).
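Equation \(\ref{24.5.2}\) is easy to evaluate numerically. A minimal sketch converting wavelength to photon energy in electron-volts (the constants are the exact SI defined values):

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    # E = h*c / lambda, converted from joules to eV
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(400))  # ≈ 3.10 eV (violet edge of the visible range)
print(photon_energy_ev(750))  # ≈ 1.65 eV (red edge)
```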
Example \(\PageIndex{1}\): Blue Color of Copper (II) Sulfate in Solution If white light (ordinary sunlight, for example) passes through copper(II) sulfate solution, some wavelengths in the light are absorbed by the solution. Copper(II) ions in solution absorb light in the red region of the spectrum. The light which passes through the solution and out the other side will have all the colors in it except for the red. We see this mixture of wavelengths as pale blue (cyan). The diagram gives an impression of what happens if you pass white light through copper(II) sulfate solution. Working out what color you will see is not easy if you try to do it by imagining "mixing up" the remaining colors. You would not have thought that all the other colors apart from some red would look cyan, for example. Sometimes what you actually see is quite unexpected. Mixing different wavelengths of light doesn't give you the same result as mixing paints or other pigments. You can, however, sometimes get some estimate of the color you would see using the idea of complementary colors. Origin of Colors The process of absorption involves the excitation of the valence electrons in the molecule, typically from the low-lying level called the Highest Occupied Molecular Orbital (HOMO) into a higher-lying state called the Lowest Unoccupied Molecular Orbital (LUMO). When this HOMO-LUMO transition (Figure \(\PageIndex{3}\)) involves the absorption of visible light, the sample is colored. The HOMO-LUMO energy difference \[\Delta E = E_{LUMO} - E_{HOMO} \label{24.5.3A}\] depends on the nature of the molecule and can be connected to the wavelength of the light absorbed \[\Delta E = h\nu = \dfrac{hc}{\lambda} \label{24.5.3B}\] Equation \(\ref{24.5.3B}\) is the most important equation in the field of light-matter interactions (spectroscopy).
As Example \(\PageIndex{1}\) demonstrated, when white light passes through or is reflected by a colored substance, a characteristic portion of the mixed wavelengths is absorbed. The remaining light will then assume the complementary color to the wavelength(s) absorbed. This relationship is demonstrated by the color wheel shown below. Here, complementary colors are diametrically opposite each other (Figure \(\PageIndex{5}\)). Thus, absorption of 420-430 nm light renders a substance yellow, and absorption of 500-520 nm light makes it red. Green is unique in that it can be created by absorption close to 400 nm as well as absorption near 800 nm. Colors directly opposite each other on the color wheel are said to be complementary colors. Blue and yellow are complementary colors; red and cyan are complementary; and so are green and magenta. Mixing together two complementary colors of light will give you white light. What this all means is that if a particular color is absorbed from white light, what your eye detects by mixing up all the other wavelengths of light is its complementary color. Copper(II) sulfate solution is pale blue (cyan) because it absorbs light in the red region of the spectrum and cyan is the complementary color of red (Table \(\PageIndex{1}\)).

Color | Wavelength (nm) | ΔE = HOMO-LUMO gap (eV)
UV | 100 - 400 | 12.4 - 3.10
Violet | 400 - 425 | 3.10 - 2.92
Blue | 425 - 492 | 2.92 - 2.52
Green | 492 - 575 | 2.52 - 2.15
Yellow | 575 - 585 | 2.15 - 2.12
Orange | 585 - 647 | 2.12 - 1.92
Red | 647 - 700 | 1.92 - 1.77
Near IR | 700 - 10,000 | 1.77 - 0.12

If the compound absorbs in one region of the spectrum, it appears with the opposite (complementary) color, since all of the absorbed color has been removed. For example: The Origin of Color in Complex Ions We often casually talk about the transition metals as being those in the middle of the Periodic Table where d orbitals are being filled, but these should really be called d block elements rather than transition elements (or metals).
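The table above follows from \(\Delta E = hc/\lambda\); with \(hc \approx 1239.84\) eV·nm, a quick conversion plus a band lookup (band edges are taken from the table; the function names are my own) looks like:

```python
def homo_lumo_gap_ev(wavelength_nm):
    """Gap energy in eV for an absorbed wavelength, using hc ~ 1239.84 eV*nm."""
    return 1239.84 / wavelength_nm

# Band edges (name, lower nm, upper nm) from the table above
BANDS = [("UV", 100, 400), ("violet", 400, 425), ("blue", 425, 492),
         ("green", 492, 575), ("yellow", 575, 585), ("orange", 585, 647),
         ("red", 647, 700), ("near IR", 700, 10000)]

def absorbed_band(wavelength_nm):
    for name, lo, hi in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return None

# Copper(II) sulfate absorbs near 650 nm: a red absorption, gap just under 2 eV
```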
The definition of a transition metal is one which forms one or more stable ions which have incompletely filled d orbitals. Zinc, with the electronic structure \([Ar]3d^{10}4s^{2}\), does not count as a transition metal whichever definition you use. In the metal, it has a full 3d level. When it forms an ion, the 4s electrons are lost - again leaving a completely full 3d level. At the other end of the row, scandium (\([Ar]3d^{1}4s^{2}\)) does not really count as a transition metal either. Although there is a partially filled d level in the metal, when it forms its ion, it loses all three outer electrons. The \(Sc^{3+}\) ion does not count as a transition metal ion because its 3d level is empty. Example \(\PageIndex{3}\): Hexaaqua Metal Ions The diagrams show the approximate colors of some typical hexaaqua metal ions, with the formula \([M(H_2O)_6]^{n+}\). The charge on these ions is typically 2+ or 3+. Non-transition metal ions Transition metal ions The corresponding transition metal ions are colored. Some, like the hexaaquamanganese(II) ion (not shown) and the hexaaquairon(II) ion, are quite faintly colored - but they are colored. So, what causes transition metal ions to absorb wavelengths from visible light (causing color) whereas non-transition metal ions do not? And why does the color vary so much from ion to ion? This is discussed in the next sections. Magnetism The magnetic moment of a system measures the strength and the direction of its magnetism. The term itself usually refers to the magnetic dipole moment. Anything that is magnetic, like a bar magnet or a loop of electric current, has a magnetic moment. A magnetic moment is a vector quantity, with a magnitude and a direction. An electron has an electron magnetic dipole moment, generated by the electron's intrinsic spin property, making it an electric charge in motion. There are many different magnetic forms, including paramagnetism, diamagnetism, ferromagnetism, and anti-ferromagnetism. Only the first two are introduced below.
Paramagnetism Paramagnetism refers to the magnetic state of an atom with one or more unpaired electrons. The unpaired electrons are attracted by a magnetic field due to the electrons' magnetic dipole moments. Hund's Rule states that electrons must occupy every orbital singly before any orbital is doubly occupied. This may leave the atom with many unpaired electrons. Because unpaired electrons can spin in either direction, they display magnetic moments in any direction. This capability allows paramagnetic atoms to be attracted to magnetic fields. Diatomic oxygen, \(O_2\), is a good example of paramagnetism (described via molecular orbital theory). The following video shows liquid oxygen attracted into a magnetic field created by a strong magnet: A chemical demonstration of the paramagnetism of oxygen, as shown by the attraction of liquid oxygen to a magnet. Carleton University, Ottawa, Canada. As shown in the video, molecular oxygen (\(O_2\)) is paramagnetic and is attracted to the magnet. In contrast, molecular nitrogen, \(N_2\), has no unpaired electrons and is diamagnetic (this concept is discussed below); it is therefore unaffected by the magnet. There are some exceptions to the paramagnetism rule; these concern some transition metals, in which the unpaired electron is not in a d-orbital. Examples of these metals include \(Sc^{3+}\), \(Ti^{4+}\), \(Zn^{2+}\), and \(Cu^+\). These metals are not defined as paramagnetic: they are considered diamagnetic because all d-electrons are paired. Paramagnetic compounds sometimes display bulk magnetic properties due to the clustering of the metal atoms. This phenomenon is known as ferromagnetism, but this property is not discussed here. Diamagnetism Diamagnetic substances are characterized by paired electrons—except in the previously-discussed case of transition metals, there are no unpaired electrons.
According to the Pauli Exclusion Principle, which states that no two identical electrons may take up the same quantum state at the same time, the electron spins are oriented in opposite directions. This causes the magnetic fields of the electrons to cancel out; thus there is no net magnetic moment, and the atom cannot be attracted into a magnetic field. In fact, diamagnetic substances are weakly repelled by a magnetic field, as demonstrated with the pyrolytic carbon sheet in Figure \(\PageIndex{6}\). Figure \(\PageIndex{6}\): Levitating pyrolytic carbon: A small (~6 mm) piece of pyrolytic graphite levitating over a permanent neodymium magnet array (5 mm cubes on a piece of steel). Note that the poles of the magnets are aligned vertically and alternate (two with north facing up, and two with south facing up, diagonally). Image used with permission from Wikipedia. How to Tell if a Substance is Paramagnetic or Diamagnetic The magnetic form of a substance can be determined by examining its electron configuration: if it shows unpaired electrons, then the substance is paramagnetic; if all electrons are paired, the substance is diamagnetic. This process can be broken into four steps: Find the electron configuration Draw the valence orbitals Look for unpaired electrons Determine whether the substance is paramagnetic (one or more unpaired electrons) or diamagnetic (all electrons paired) Example \(\PageIndex{4}\): Chlorine atoms Are chlorine atoms paramagnetic or diamagnetic? SOLUTION Step 1: Find the electron configuration For Cl atoms, the electron configuration is \([Ne]3s^{2}3p^{5}\) Step 2: Draw the valence orbitals Ignore the core electrons and focus on the valence electrons only. Step 3: Look for unpaired electrons There is one unpaired electron. Step 4: Determine whether the substance is paramagnetic or diamagnetic Since there is an unpaired electron, Cl atoms are paramagnetic (though only weakly).
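The four-step check can be automated for simple main-group cases. Here is a sketch (the helper names are my own) that counts unpaired electrons subshell by subshell using Hund's rule:

```python
ORBITALS = {"s": 1, "p": 3, "d": 5, "f": 7}

def unpaired_electrons(subshells):
    """Count unpaired electrons given [(subshell_letter, n_electrons), ...].

    Hund's rule: each orbital in a subshell is singly occupied before
    any orbital is doubly occupied.
    """
    total = 0
    for letter, n in subshells:
        m = ORBITALS[letter]
        total += n if n <= m else 2 * m - n
    return total

def is_paramagnetic(subshells):
    return unpaired_electrons(subshells) > 0

# Cl valence shell 3s^2 3p^5: one unpaired electron -> paramagnetic
# Zn valence shell 4s^2 3d^10: no unpaired electrons -> diamagnetic
```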
Example 2: Zinc Atoms Step 1: Find the electron configuration For Zn atoms, the electron configuration is \([Ar]3d^{10}4s^{2}\) Step 2: Draw the valence orbitals Step 3: Look for unpaired electrons There are no unpaired electrons. Step 4: Determine whether the substance is paramagnetic or diamagnetic Because there are no unpaired electrons, Zn atoms are diamagnetic. References Petrucci, Ralph H. General Chemistry: Principles and Modern Applications. 9th ed. Upper Saddle River: Pearson Prentice Hall, 2007. Print. Sherman, Alan, Sharon J. Sherman, and Leonard Russikoff. Basic Concepts of Chemistry. 5th ed. Boston, MA: Houghton Mifflin Company, 1992. Print.
Question Bradley (2005, Section 3.1) states: Theorem 3.1 Suppose $X := (X_k; k \in Z)$ is a strictly stationary, finite state Markov chain. Then the following five statements are equivalent: X is irreducible and aperiodic. X is mixing (in the ergodic-theoretic sense). $\alpha(n) \rightarrow 0$ as $n \rightarrow \infty$. $\psi(n) \rightarrow 0$ as $n \rightarrow \infty$. $\rho^*(n) \rightarrow 0$ as $n \rightarrow \infty$. Does the theorem hold if $X$ is merely a semi-Markov chain? What conditions might apply? Likewise, does the theorem hold if $X$ is merely a regenerative process? Again, what conditions might apply? In particular, is it enough that $X$ is aperiodic and positive recurrent? It seems to me that with both semi-Markov chains and regenerative processes, the process has 'resetting' properties that could enable strong mixing. Why I am asking Let $Z_t$ be an alternating renewal process that is aperiodic and positive recurrent. Choose $\delta t > 0$, put $t_k = k \cdot \delta t$ for each integer $k \geq 0$, and then put $X_k = Z_{t_k}$ for each $k$. I am studying a sequence $S_n = \sum_{k=1}^n a_{nk} X_k$ where $a_{nk}$ are numeric constants. My hypothesis (supported by experiments) is that $S_n / \sqrt{\text{Var}(S_n)}$ converges in distribution to $\mathcal{N}(0,1)$ as $n \rightarrow \infty$. To this end, I am seeking to invoke Peligrad (1996), Corollary 2.1. Put $\xi_{ni} = a_{ni} X_i$ for each $n$ and $1 \leq i \leq n$. Define $$ \bar{\rho}^{*}_{nk} = \sup \rho(\sigma(\xi_{ni}, i \in T), \sigma(\xi_{nj}, j \in S)) $$ where the supremum is over nonempty $T,S \subset \{1,\dots,n\}$ with dist$(T,S) \geq k$, and $$ \bar{\rho}_{k}^{*} = \sup_{n}\bar{\rho}^{*}_{nk} $$ Peligrad's result requires that $\{X_k\}$ is strongly mixing and that $ \lim_{k\rightarrow\infty} \bar{\rho}_{k}^{*} < 1$ (among other conditions).
I can establish that $\{X_k\}$ is strongly mixing ($\alpha$-mixing) by noting that $Z_t$ is aperiodic, positive recurrent, and regenerative, and thereby invoking Glynn (1982, Theorem 6.3.i). But I am having trouble establishing that the condition on $\bar{\rho}_{k}^{*}$ holds in my system. References Bradley, Richard C. (2005), Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions. Probability Surveys 2, 107-144. doi: 10.1214/154957805100000104 Glynn, Peter W. (1982), Some New Results in Regenerative Process Theory. Technical Report 60, July. DTIC: ADA119153 Peligrad, Magda (1996), On the Asymptotic Normality of Sequences of Weak Dependent Random Variables, J Theoretical Probability 9(3), 703-715.
You still need to show that: $\oplus_S$ has a neutral element. $\oplus_S$ is associative. If $0_S$ is the aforementioned additive neutral element, that $\oplus_S$ has opposites, id est, that for all $a$ there is $b$ such that $a\oplus_S b= 0_S$ and $b\oplus_S a=0_S$. If your definition of ring requires it (most of them do), that $\otimes_S$ has a neutral element as well. Some would say that you also need to check "closedness under operations", which would technically mean showing that $$\forall a,b\in S,\ (a\oplus_S b\in S\wedge a\otimes_S b\in S)$$ This is reminiscent of one of the verifications that you need to do when you check that something is a subring of something else. In this case, it would be an abuse of terminology, because clearly $\oplus_S$ and $\otimes_S$ should already be functions with codomain $S$. What should be checked beforehand in this case is that $\oplus_S$ and $\otimes_S$ are two well-defined functions $S\times S\to S$. What does that mean? Describing it in general is a bit abstract, but somewhere you have a predicate which describes what property, say, $a\oplus_S b$ is meant to satisfy with respect to $a,b\in S$. Call that property $P(a,b;y)$ (and the definition somehow goes like "$a\oplus_S b$ is the one element satisfying $P(a,b; a\oplus_Sb)$"). Then you must prove that, for any $a,b\in S$, there is exactly one element $y$ in $S$ such that $P(a,b;y)$ is true. Id est: $$\forall a,b\in S,((\exists y\in S,\ P(a,b;y))\wedge (\forall w,y\in S, (P(a,b;w)\wedge P(a,b;y)\longrightarrow w=y)))$$
Generating networks with a desired second order motif frequency One way to add structure beyond the Erdös-Rényi random network (ER random network) is to generate ensembles of networks with a prescribed degree distribution. Here we describe another approach: modulating the frequencies of small network motifs in directed graphs. Network motifs Network motifs are patterns of connections within a network. For example, the simplest motif consists of a single edge from one node to another. Moving up one step in complexity, we can look at motifs involving two edges. If we restrict ourselves to connected motifs (excluding the motif with two edges connecting distinct pairs of nodes), there are four motifs with two edges, which we can label as the reciprocal, convergent, divergent, and chain motifs. One can characterize a network by counting the relative frequency of these five motifs. Here, each motif represents a pattern of the one or two edges shown without regard to the presence or absence of any of the edges not shown. For example, the following four node network has six edges, so the single-edge motif occurs six times. The total possible number of connections is $4\cdot 3 = 12$, so the network contains 50% of the possible edges. As highlighted on the right, the network contains two reciprocal motifs, five chain motifs, two convergent motifs, and two divergent motifs. The total number of possible reciprocal motifs is $4 \cdot 3/2 = 6$, so the network contains 1/3 of the possible reciprocal motifs. The total number of possible chain motifs is $4 \cdot 3 \cdot 2 = 24$, so the network contains $5/24$ of the possible chain motifs. For both convergent and divergent motifs, the total possible number is $4 \cdot 3 \cdot 2/2 = 12$; the network contains 1/6 of the possible convergent and 1/6 of the possible divergent motifs. 
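These motif counts can be computed directly from an adjacency matrix. A sketch (my own function, using the convention that A[i][j] = 1 means an edge from node i to node j, with no self-loops):

```python
def motif_counts(A):
    """Count single-edge, reciprocal, convergent, divergent, and chain motifs.

    A is a square 0/1 adjacency matrix (list of lists) with zero diagonal;
    A[i][j] = 1 means an edge from node i to node j.
    Chains i -> j -> k require i != k.
    """
    n = len(A)
    outdeg = [sum(A[i]) for i in range(n)]
    indeg = [sum(A[i][j] for i in range(n)) for j in range(n)]
    edges = sum(outdeg)
    recip = sum(A[i][j] and A[j][i] for i in range(n) for j in range(i + 1, n))
    conv = sum(d * (d - 1) // 2 for d in indeg)
    div = sum(d * (d - 1) // 2 for d in outdeg)
    # paths i->j->k through middle node j, minus the two i = k cases
    # contributed by each reciprocal pair
    chain = sum(indeg[j] * outdeg[j] for j in range(n)) - 2 * recip
    return edges, recip, conv, div, chain
```

For example, on the three-node network with edges 0→1, 1→0, and 1→2 this returns 3 edges, 1 reciprocal, 0 convergent, 1 divergent, and 1 chain motif.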
Generating networks with given expected motif frequencies One can view the ER random network as a probability distribution parametrized by the expected relative frequency of the single-edge motif, which is the same as the connection probability. We could specify that, for any pair of nodes $i$ and $j$, the probability of the motif containing a connection from $j$ to $i$ is \begin{gather} \Pr(A_{ij}=1)=E(A_{ij})=p, \label{probedge} \end{gather} where the expected value $E(\cdot)$ is computed over the probability distribution of the adjacency matrix, equation (1) of the random network page. In the ER random network, the second order connectivity motifs will occur, too. But we cannot independently specify their frequencies. Since all edges occur independently with probability $p$, any pair of edges (such as the two-edge motifs) must occur with probability $p^2$. (The probability that two independent events both occur is the product of their individual probabilities.) Given that real-world network motif frequencies differ from the ER prediction, we'd like to generate networks where the two-connection motifs occur at frequencies other than $p^2$. We introduce four more parameters, $\alpha_{\text{recip}}$, $\alpha_{\text{conv}}$, $\alpha_{\text{div}}$, and $\alpha_{\text{chain}}$, through which to specify the frequencies of the reciprocal, convergent, divergent, and chain motifs, respectively. We define each $\alpha$ to indicate deviation from the ER model, so if an $\alpha$ is zero, the corresponding motif will appear with probability $p^2$. The definitions of the four $\alpha$ parameters in terms of the probability of the respective motif are as follows: \begin{align*} \Pr(A_{ij}=1 \text{ and } A_{ji}=1) &= p^2(1+\alpha_{\text{recip}}) & \Pr(A_{ij}=1 \text{ and } A_{ik}=1) &= p^2(1+\alpha_{\text{conv}})\\ \Pr(A_{ij}=1 \text{ and } A_{kj}=1) &= p^2(1+\alpha_{\text{div}}) & \Pr(A_{ij}=1 \text{ and } A_{jk}=1) &= p^2(1+\alpha_{\text{chain}}) \end{align*} As we manipulate the expected frequencies of the second order connection motifs, we are still holding the expected frequency of the single-edge motif at $p$. We cannot increase the frequency of any two-edge motif by adding additional edges, or that would increase the frequency of the single-edge motif.
Instead, if we would like to increase the frequency of the reciprocal motif, for example, we need to increase the probability that both $A_{ij}$ and $A_{ji}$ are one and simultaneously increase the probability that both $A_{ij}$ and $A_{ji}$ are zero (while decreasing the probability that one edge occurs without the other). We must do this in a way that keeps the probability of all single-edge motifs at $p$. We can express the fact that the $\alpha$'s simply increase the coordination between their edges by writing them in terms of the covariances between edges. Using equation \eqref{probedge} for the single-edge probability, one can rewrite the definitions of the $\alpha$'s as normalized covariances. \begin{align*} \alpha_{\text{recip}} &= \frac{\text{cov}(A_{ij},A_{ji})}{E(A_{ij})E(A_{ji})} &\alpha_{\text{conv}}&=\frac{\text{cov}(A_{ij},A_{ik})}{E(A_{ij})E(A_{ik})}\\ \alpha_{\text{div}}&=\frac{\text{cov}(A_{ij},A_{kj})}{E(A_{ij})E(A_{kj})} &\alpha_{\text{chain}}&=\frac{\text{cov}(A_{ij},A_{jk})}{E(A_{ij})E(A_{jk})} \end{align*} For ER random networks, it is a simple matter to generate a network with a given edge probability, as we can generate the random number for each edge independently using equation \eqref{probedge}. However, when we break this independence by adding correlations among the edges through nonzero $\alpha$'s, the process of generating a network with the required statistics is more subtle. It turns out one can define a random network model based on these statistics, the SONET (second order network) model, and sample networks with different values of the $\alpha$'s. In this way, we obtain a probability distribution for the network parametrized by the first order statistic $p$ and the second order statistics $\alpha_{\text{recip}}$, $\alpha_{\text{conv}}$, $\alpha_{\text{div}}$, and $\alpha_{\text{chain}}$.
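To see why reciprocal correlations, at least, are easy to realize: when only $\alpha_{\text{recip}}$ is nonzero, each unordered node pair can be sampled independently from the joint distribution of $(A_{ij}, A_{ji})$ implied by the covariance above. This is my own illustration of that special case, not the general SONET sampling algorithm (which must handle all four correlations jointly):

```python
import random

def sample_reciprocal_network(n, p, alpha_recip, seed=0):
    """Sample an n-node adjacency matrix with edge probability p and only
    alpha_recip nonzero.

    Joint law per unordered pair {i, j}:
      P(1,1) = p^2 (1 + alpha),  P(1,0) = P(0,1) = p - P(1,1),
      P(0,0) = 1 - 2p + P(1,1).
    These must all be valid probabilities for the given p and alpha.
    """
    rng = random.Random(seed)
    p11 = p * p * (1 + alpha_recip)
    p10 = p - p11
    assert 0 <= p11 <= 1 and p10 >= 0 and 1 - 2 * p + p11 >= 0
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            u = rng.random()
            if u < p11:
                A[i][j] = A[j][i] = 1
            elif u < p11 + p10:
                A[i][j] = 1
            elif u < p11 + 2 * p10:
                A[j][i] = 1
    return A
```

Because distinct pairs are sampled independently, the convergent, divergent, and chain covariances remain zero, while each directed edge still has marginal probability $p$.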
I'm trying to determine whether this matrix is diagonalizable: $$A = \pmatrix{\sqrt3&0&-1\\0&-1&0\\1&0&0}$$ The characteristic polynomial is: $$-\lambda^3 + (\sqrt3-1)\lambda^2 + (\sqrt3-1)\lambda -1 = 0$$ And so we get eigenvalues: $$\lambda_1=-1,$$ $$\lambda_2=\frac{\sqrt3}{2}+\frac{i}{2},$$ $$\lambda_3=\frac{\sqrt3}{2}-\frac{i}{2}$$ I know that if A has 3 distinct eigenvalues (i.e. the characteristic polynomial has 3 distinct roots), then A will be diagonalizable, since we'll have 3 linearly independent eigenvectors. So how do we know if the 3 eigenvalues are "distinct" or not?
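A quick numerical check (plain Python with complex arithmetic; my own throwaway script) confirms both that these are roots of the characteristic polynomial and that the three roots are pairwise distinct, so $A$ is diagonalizable over $\mathbb{C}$:

```python
import math

def char_poly(lam):
    """The characteristic polynomial computed above."""
    s = math.sqrt(3)
    return -lam**3 + (s - 1) * lam**2 + (s - 1) * lam - 1

eigs = [-1 + 0j, (math.sqrt(3) + 1j) / 2, (math.sqrt(3) - 1j) / 2]

# Each candidate is a root...
assert all(abs(char_poly(lam)) < 1e-12 for lam in eigs)
# ...and they are pairwise distinct: lambda_1 is real, while the other two
# are a complex-conjugate pair with nonzero imaginary part.
assert all(abs(a - b) > 1e-9 for k, a in enumerate(eigs) for b in eigs[k + 1:])
```

Distinctness here is immediate by inspection as well: $\lambda_2$ and $\lambda_3$ have imaginary parts $\pm 1/2$, and $\lambda_1 = -1$ is real, so no two eigenvalues coincide.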
Search Now showing items 1-2 of 2 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
The climb rate depends on the excess power which is available after drag has been subtracted from net thrust. If the airplane stays at the same polar point while climbing, it needs to accelerate in order to compensate for the decrease in air density. Therefore, besides drag, this acceleration work also needs to be subtracted before the remaining thrust can be used for climbing. First let's clarify terms: x$_g$, y$_g$, z$_g$ : Earth-fixed coordinate system x$_f$, y$_f$, z$_f$ : Airplane-fixed coordinate system x$_k$, y$_k$, z$_k$ : Kinetic coordinate system where x is the direction of movement L$\;\;$ : Lift D$\;\;$ : Drag T$\;\;$ : Thrust m$\;\:$ : mass $\alpha\;\;$ : Angle of attack (between the x-axes of the airplane-fixed and kinetic coordinate systems) $\gamma\;\;$ : Flight path angle (between the x-axes of the earth-fixed and kinetic coordinate systems) $\sigma\;\:$ : Thrust angle relative to the airplane-fixed coordinate system $v_{\infty}$ : Airspeed The polar point should be the one for optimum climb speed. There is also one for optimum climb angle, but this simplification is justified. It also helps to make the math easier, since propeller aircraft climb best at the polar point where minimum power is required to maintain flight. This is at$$c_L = \sqrt{3\cdot c_{D0}\cdot AR\cdot\pi\cdot\epsilon}$$with $c_L\;\;$: Lift coefficient $c_{D0}$ : Zero-lift drag coefficient $AR$ : Wing aspect ratio $\epsilon\;\;$ : Wing efficiency factor The zero-lift drag coefficient of propeller aircraft is around 0.025 to 0.04, with the high value for fixed-gear aircraft and the lower for those with retractable gear. It increases slightly with altitude due to the decrease of the Reynolds number from the drop in temperature. Here you need to pick a value which is appropriate for each specific aircraft. Staying at the same polar point also means that weight will influence only the speed at which the aircraft climbs best, not the lift coefficient.
The speed $v$ will change with the square root of the weight difference, because$$v = \sqrt{\frac{m\cdot g}{\frac{\rho}{2}\cdot S_{ref}\cdot c_L}}$$with $S_{ref}$ being the reference area of the aircraft and $\rho$ the air density. Next, the correction term $C$ for acceleration. It depends on the local speed of sound, the gas constant for humid air $R_h$ and the temperature gradient (lapse rate $\Gamma$) of the atmosphere. This answer explains in detail how it is calculated, and I repeat here only the result for standard atmospheric conditions:$$C = 1 - 0.13335\cdot Ma^2 + \frac{(1+0.2\cdot Ma^2)^{3.5}-1}{(1+0.2\cdot Ma^2)^{2.5}}$$with $Ma$ being the ratio between flight speed and local speed of sound. Now your climb speed $v_z$ becomes $$v_z = \frac{v}{C}\cdot \sin\gamma = \frac{v}{C}\cdot\frac{T\cdot \cos(\sigma)-D}{m\cdot g} = \frac{P\cdot\eta_{prop}\cdot \cos(\sigma) - D\cdot v}{C\cdot m\cdot g}$$with $\eta_{prop}$ the propeller efficiency and $P$ the engine brake power at the given altitude and throttle setting. This leaves a bunch of unknown variables in order to correctly calculate the climb rate: engine power aircraft zero-lift drag coefficient propeller efficiency Therefore, it will be best to look up the possible climb speeds at several altitudes and power settings from each POH and to interpolate between those values. Or you settle for an approximation and use rule-of-thumb values for the unknown parameters. for $\epsilon$ assume 0.8 for $\sigma$ assume zero for $c_{D0}$ assume 0.026 at low and 0.03 at high altitude for retracted gear and 0.035 at low and 0.04 at high altitude for fixed gear. for $D$ use $\left(c_{D0} + \frac{c_L^2}{AR\cdot\pi\cdot\epsilon} \right) \cdot\frac{\rho\cdot v^2\cdot S_{ref}}{2}$ for $\eta_{prop}$ use 0.75 for a fixed-pitch and 0.8 for a constant speed prop. for normally aspirated engines reduce power proportionally with density.
For turbocharged engines assume constant power up to their critical height and reduce power in proportion to density above that. Let the users of your program set the throttle setting themselves. Where you have performance charts available, compare your results with published figures and tweak the variables such that you get a good fit. For example, look at the published optimum climb speed and adjust $c_{D0}$ until your result, taken from the optimum lift coefficient, agrees. And so on. This should give you very usable results.
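Putting the pieces together, here is a minimal calculator using the rule-of-thumb values above (all parameter names are my own; $P$ is brake power in watts, already corrected for altitude, and $\sigma$ is assumed zero):

```python
import math

def climb_rate(mass_kg, S_ref, AR, P_watt, rho, a_sound=340.3,
               eps=0.8, cD0=0.03, eta_prop=0.8, g=9.81):
    """Rate of climb (m/s) at the minimum-power polar point."""
    cL = math.sqrt(3 * cD0 * AR * math.pi * eps)            # best-climb c_L
    v = math.sqrt(mass_kg * g / (0.5 * rho * S_ref * cL))   # speed at that c_L
    Ma = v / a_sound
    # acceleration-correction term C for the standard atmosphere
    C = (1 - 0.13335 * Ma**2
         + ((1 + 0.2 * Ma**2)**3.5 - 1) / (1 + 0.2 * Ma**2)**2.5)
    cD = cD0 + cL**2 / (AR * math.pi * eps)                 # drag polar
    D = cD * 0.5 * rho * v**2 * S_ref
    return (P_watt * eta_prop - D * v) / (C * mass_kg * g)

# e.g. a light four-seater at sea level (hypothetical numbers):
# climb_rate(1000, 16.2, 7.3, 120e3, 1.225) -> a few m/s
```

As the answer says, treat the output as a first approximation and tune $c_{D0}$ and $\eta_{prop}$ against the POH figures for each specific aircraft.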
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy: to modern computers: But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
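By the way, the adjunctions \( \vee \dashv \Delta \) and \( \Delta \dashv \wedge \) can be machine-checked on a small example. Here's a brute-force verification (my own throwaway script, not part of the lectures) on the powerset of a three-element set, ordered by inclusion, where join is union and meet is intersection:

```python
from itertools import combinations

def powerset(base):
    """All subsets of base, as frozensets."""
    items = list(base)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

P = powerset({0, 1, 2})

# join adjunction: a v a' <= b  iff  a <= b and a' <= b
join_adjunction = all(
    ((a | a2) <= b) == (a <= b and a2 <= b)
    for a in P for a2 in P for b in P
)

# meet adjunction: b <= a ^ a'  iff  b <= a and b <= a'
meet_adjunction = all(
    (b <= (a & a2)) == (b <= a and b <= a2)
    for a in P for a2 in P for b in P
)
```

Of course this checks only one finite poset; the puzzles ask for the general proofs.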
Search results 11–20 (all peer-reviewed journal articles):

11. Measurement of the production and lepton charge asymmetry of $W$ bosons in Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=2.76$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 2015, Vol. 75(1), p. 23. [Hyper Article en Ligne (CCSd)]
12. Jet energy measurement and its systematic uncertainty in proton-proton collisions at $\sqrt{s}=7$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 2015, Vol. 75, p. 17. [Hyper Article en Ligne (CCSd)]
13. Measurement of Higgs boson production in the diphoton decay channel in $pp$ collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 2014, Vol. 90, p. 112015. [Hyper Article en Ligne (CCSd)]
14. Search for new phenomena in events with a photon and missing transverse momentum in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector. Physical Review D, 2015, Vol. 91(1), p. 012008. [Hyper Article en Ligne (CCSd)]
15. Search for the $b\bar{b}$ decay of the Standard Model Higgs boson in associated $(W/Z)H$ production with the ATLAS detector. Journal of High Energy Physics, 2015, Vol. 2015(1), p. 69. [Hyper Article en Ligne (CCSd)]
16. Measurement of the $WW+WZ$ cross section and limits on anomalous triple gauge couplings using final states with one lepton, missing transverse momentum, and two jets with the ATLAS detector at $\sqrt{s}=7$ TeV. Journal of High Energy Physics, 2015, Vol. 2015(1), p. 049. [Hyper Article en Ligne (CCSd)]
17. Measurement of the $t\bar{t}$ production cross-section as a function of jet multiplicity and jet transverse momentum in 7 TeV proton-proton collisions with the ATLAS detector. Journal of High Energy Physics, 2015, Vol. 1, p. 020. [Hyper Article en Ligne (CCSd)]
18. Search for $H \to \gamma\gamma$ produced in association with top quarks and constraints on the Yukawa coupling between the top quark and the Higgs boson using data taken at 7 TeV and 8 TeV with the ATLAS detector. Physics Letters B, 2015, Vol. 740, pp. 222-242. [Hyper Article en Ligne (CCSd)]
19. Measurement of Higgs boson production in the diphoton decay channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D (Particles, Fields, Gravitation and Cosmology), 2014, Vol. 90(11). [SwePub (National Library of Sweden)]
20. Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at $\sqrt{s}=8$ TeV with the ATLAS detector. Physical Review D (Particles, Fields, Gravitation and Cosmology), 2015, Vol. 91(1). [SwePub (National Library of Sweden)]
Archive: Subtopics: Comments disabled Thu, 26 Jul 2018 There are well-known tests if a number (represented as a base-10 numeral) is divisible by 2, 3, 5, 9, or 11. What about 7? Let's look at where the divisibility-by-9 test comes from. We add up the digits of our number !!n!!. The sum !!s(n)!! is divisible by !!9!! if and only if !!n!! is. Why is that? Say that !!d_nd_{n-1}\ldots d_0!! are the digits of our number !!n!!. Then $$n = \sum 10^id_i.$$ The sum of the digits is $$s(n) = \sum d_i$$ which differs from !!n!! by $$\sum (10^i-1)d_i.$$ Since !!10^i-1!! is a multiple of !!9!! for every !!i!!, every term in the last sum is a multiple of !!9!!. So by passing from !!n!! to its digit sum, we have subtracted some multiple of !!9!!, and the residue mod 9 is unchanged. Put another way: $$\begin{align} n &= \sum 10^id_i \\ &\equiv \sum 1^id_i \pmod 9 \qquad\text{(because $10 \equiv 1\pmod 9$)} \\ &= \sum d_i \end{align} $$ The same argument works for the divisibility-by-3 test. For !!11!! the analysis is similar. We add up the digits !!d_0+d_2+\ldots!! and !!d_1+d_3+\ldots!! and check if the sums are equal mod 11. Why alternating digits? It's because !!10\equiv -1\pmod{11}!!, so $$n\equiv \sum (-1)^id_i \pmod{11}$$ and the sum is zero only if the sum of the positive terms is equal to the sum of the negative terms. The same type of analysis works similarly for !!2, 4, 5, !! and !!8!!. For !!4!! we observe that !!10^i\equiv 0\pmod 4!! for all !!i>1!!, so all but two terms of the sum vanish, leaving us with the rule that !!n!! is a multiple of !!4!! if and only if !!10d_1+d_0!! is. We could simplify this a bit: !!10\equiv 2\pmod 4!! so !!10d_1+d_0 \equiv 2d_1+d_0\pmod 4!!, but we don't usually bother. Say we are investigating !!571496!!; the rule tells us to just consider !!96!!. The "simplified" rule says to consider !!2\cdot9+6 = 24!! instead. It's not clear that that is actually easier. This approach works badly for divisibility by 7, because !!10^i\bmod 7!! 
is not simple. It repeats with period 6. $$\begin{array}{c|cccccccccccc} i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline 10^i\bmod 7 & 1 & 3 & 2 & 6 & 4 & 5 & 1 & 3 & 2 & 6 & 4 & 5 \end{array} $$ The rule we get from this is: Take the units digit. Add three times the tens digit, twice the hundreds digit, six times the thousands digit… (blah blah blah) and the original number is a multiple of !!7!! if and only if the sum is also. For example, considering !!12345678!! we must calculate $$\begin{align} 12345678 & \Rightarrow & 3\cdot1 + 1\cdot 2 + 5\cdot 3 + 4\cdot 4 + 6\cdot 5 + 2\cdot6 + 3\cdot 7 + 1\cdot 8 & = & 107 \\\\ 107 & \Rightarrow & 2\cdot1 + 3\cdot 0 + 1\cdot7 & = & 9 \end{align} $$ and indeed !!12345678\equiv 107\equiv 9\pmod 7!!. My kids were taught the practical divisibility tests in school, or perhaps learned them from YouTube or something like that. Katara was impressed by my ability to test large numbers for divisibility by 7 and asked how I did it. At first I didn't think about my answer enough, and just said “Oh, it's not hard, just divide by 7 and look at the remainder.” (“Just count the legs and divide by 4.”) But I realized later that there are several tricks I was using that are not obvious. First, she had never learned short division. When I was in school I had been tormented extensively with long division, which looks like this: This was all Katara had been shown, so when I said “just divide by 7” this is what she was thinking of. But you only need long division for large divisors. For simple divisors like !!7!!, I was taught short division, an easier technique: Yeah, I wrote 4 when I meant 3. It doesn't matter, we don't care about the quotient anyway. But that's one of the tricks I was using that wasn't obvious to Katara: we don't care about the quotient anyway, only the remainder.
So when I did this in my head, I discarded the parts of the calculation that were about the quotient, and only kept the steps that pertained to the remainder. The way I was actually doing this sounded like this in my mind: 7 into 12 leaves 5. 7 into 53 leaves 4. 7 into 44 leaves 2. 7 into 25 leaves 4. 7 into 46 leaves 4. 7 into 47 leaves 5. 7 into 58 leaves 2. The answer is 2. At each step, we consider only the leftmost part of the number, starting with !!12!!. !!12\div 7 !! has a remainder of 5, and to this 5 we append the next digit of the dividend, 3, giving 53. Then we continue in the same way: !!53\div 7!! has a remainder of 4, and to this 4 we append the next digit, giving 44. We never calculate the quotient at all. I explained the idea with a smaller example, like this: Suppose you want to see if 1234 is divisible by 7. It's 1200-something, so take away 700, which leaves 500-something. 500-what? 530-something. So take away 490, leaving 40-something. 40-what? 44. Now take away 42, leaving 2. That's not 0, so 1234 is not divisible by 7. This is how I actually do it. For me this works reasonably well up to 13, and after that it gets progressively more difficult until by 37 I can't effectively do it at all. A crucial element is having the multiples of the divisor memorized. If you're thinking about the mod-13 residue of 680-something, it is a big help to know immediately that you can subtract 650. A year or two ago I discovered a different method, which I'm sure must be ancient, but is interesting because it's quite different from the other methods I described. Suppose that the final digit of !!n!! is !!b!!, so that !!n=10a+b!!. Then !!-2n = -20a-2b!!, and this is a multiple of !!7!! if and only if !!n!! is. But !!-20a\equiv a\pmod7 !!, so !!a-2b!! is a multiple of !!7!! if and only if !!n!! is. This gives us the rule: To check if !!n!! is a multiple of 7, chop off the last digit, double it, and subtract it from the rest of the number.
Repeat until the answer becomes obvious. For !!1234!! we first chop off the !!4!! and subtract !!2\cdot4!! from !!123!! leaving !!115!!. Then we chop off the !!5!! and subtract !!2\cdot5!! from !!11!!, leaving !!1!!. This is not a multiple of !!7!!, so neither is !!1234!!. But with !!1239!!, which is a multiple of !!7!!, we get !!123-2\cdot 9 = 105!! and then !!10-2\cdot5 = 0!!, and we win. In contrast to the other methods in this article, this method does not tell you the remainder: each step replaces !!n!! by a number congruent to !!-2n\pmod 7!!, so divisibility is preserved but the actual residue is scrambled along the way. There are some shortcuts in this method too. If the final digit is !!7!!, then rather than doubling it and subtracting 14 you can just chop it off and throw it away, going directly from !!10a+7!! to !!a!!. If your number is !!10a+8!! you can subtract !!7!! from it to make it easier to work with, getting !!10a+1!! and then going to !!a-2!! instead of to !!a-16!!. Similarly when your number ends in !!9!! you can go to !!a-4!! instead of to !!a-18!!. And on the other side, if it ends in !!4!! it is easier to go to !!a-1!! instead of to !!a-8!!. But even with these tricks it's not clear that this is faster or easier than just doing the short division. It's the same number of steps, and it seems like each step is about the same amount of work. Finally, I once wowed Katara on an airplane ride by showing her this: To check !!1429!! using this device, you start at ⓪. The first digit is !!1!!, so you follow one black arrow, to ①, and then a blue arrow, to ③. The next digit is !!4!!, so you follow four black arrows, back to ⓪, and then a blue arrow which loops around to ⓪ again. The next digit is !!2!!, so you follow two black arrows to ② and then a blue arrow to ⑥. And the last digit is 9 so you then follow 9 black arrows to ① and then stop. If you end where you started, at ⓪, the number is divisible by 7. This time we ended at ①, so !!1429!! is not divisible by 7. But if the last digit had been !!1!! instead, then in the last step we would have followed only one black arrow from ⑥ to ⓪, before we stopped, so !!1421!!
is a multiple of 7. This probably isn't useful for mental calculations, but I can imagine that if you were stuck on a long plane ride with no calculator and you needed to compute a lot of mod-7 residues for some reason, it could be quicker than the short division method. The chart is easy to construct and need not be memorized. The black arrows obviously point from !!n!! to !!n+1!!, and the blue arrows all point from !!n!! to !!10n!!. I made up a whole set of these diagrams and I think it's fun to see how the conventional divisibility rules turn up in them. For example, the rule for divisibility by 3 that says just add up the digits: Or the rule for divisibility by 5 that says to ignore everything but the last digit:
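For readers who want to play with these rules, here is a quick sketch (mine, not from the post) of the four mod-7 techniques in Python. The function names are made up; the automaton function follows the arrow chart literally (a black arrow is +1, a blue arrow is ×10, both mod 7):

```python
def digit_weight_residue(n):
    """Weighted-digit rule: 10^i mod 7 cycles through 1, 3, 2, 6, 4, 5."""
    weights = [1, 3, 2, 6, 4, 5]
    s = 0
    for i, d in enumerate(reversed(str(n))):   # i = 0 is the units digit
        s += weights[i % 6] * int(d)
    return s % 7

def short_division_residue(n):
    """Remainder-only short division: keep the running remainder, drop the quotient."""
    r = 0
    for d in str(n):                           # leftmost digit first
        r = (10 * r + int(d)) % 7
    return r

def automaton_residue(n):
    """Walk the arrow chart: black arrows add 1, blue arrows multiply by 10."""
    digits = [int(c) for c in str(n)]
    state = 0
    for d in digits[:-1]:
        state = (state + d) % 7                # d black arrows
        state = (state * 10) % 7               # one blue arrow
    return (state + digits[-1]) % 7            # last digit: black arrows, then stop

def chop_and_double_divisible(n):
    """Chop off the last digit, double it, subtract; repeat until obvious."""
    n = abs(n)
    while n >= 70:
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0
```

Note that the short division and the arrow chart are really the same Horner-style evaluation of the digits mod 7; the chop-and-double rule is the odd one out, since each step multiplies the residue by −2 and so only preserves divisibility, not the remainder itself.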
Mathematics > Probability

Title: Some properties of non-linear fractional stochastic heat equations on bounded domains (Submitted on 4 May 2016 (v1), last revised 16 Dec 2016 (this version, v2))

Abstract: Consider the following stochastic partial differential equation, \begin{equation*} \partial_t u_t(x)= \mathcal{L}u_t(x)+ \xi\sigma (u_t(x)) \dot F(t,x), \end{equation*} where $\xi$ is a positive parameter and $\sigma$ is a globally Lipschitz continuous function. The stochastic forcing term $\dot F(t,x)$ is white in time but possibly colored in space. The operator $\mathcal{L}$ is a non-local operator. We study the behaviour of the solution with respect to the parameter $\xi$, extending the results in \cite{FoonNual} and \cite{Bin}.

Submission history: From: Erkan Nane. [v1] Wed, 4 May 2016 15:48:18 GMT (12kb). [v2] Fri, 16 Dec 2016 04:20:21 GMT (10kb).
De Bruijn-Newman constant (revision as of 16:09, 12 February 2018)

For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes.
One can also express [math]H_t[/math] in a number of different forms, such as [math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math] or [math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math] In the notation of [KKL2009], one has [math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math] De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:

* Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
* Rigorous asymptotics that show that [math]H_t(x+iy) \neq 0[/math] whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
* Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].

[math]t=0[/math] When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function.
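The defining integral converges so fast ([math]\Phi(u)[/math] decays like [math]\exp(-\pi e^{4u})[/math]) that it can be evaluated by naive quadrature. The following sketch is not part of the wiki page; the truncation choices (u ≤ 4, n ≤ 30) and step count are ad hoc but more than sufficient in double precision. It brackets the first real zero of [math]H_0[/math] near z = 2 × 14.1347…, i.e. twice the height of the first zero of the Riemann zeta function:

```python
import math

def phi(u, nmax=30):
    """The super-exponentially decaying function Phi(u) in the definition of H_t."""
    total = 0.0
    for n in range(1, nmax + 1):
        x = math.pi * n * n * math.exp(4 * u)
        if x > 700:  # exp(-x) underflows to 0.0; all later terms are smaller still
            break
        total += (2 * math.pi ** 2 * n ** 4 * math.exp(9 * u)
                  - 3 * math.pi * n ** 2 * math.exp(5 * u)) * math.exp(-x)
    return total

def H(t, z, upper=4.0, steps=4000):
    """H_t(z) = int_0^oo e^{t u^2} Phi(u) cos(zu) du, by the trapezoidal rule.

    The integrand is even in u (so its odd derivatives vanish at u = 0) and is
    numerically zero well before u = 4, which makes the trapezoidal rule
    converge faster than any power of the step size here.
    """
    h = upper / steps
    s = 0.5 * phi(0.0)  # u = 0 endpoint; e^{t*0} = cos(0) = 1
    for k in range(1, steps):
        u = k * h
        s += math.exp(t * u * u) * phi(u) * math.cos(z * u)
    return s * h
```

Since [math]H_0(z) = \frac{1}{8}\xi(\frac12 + \frac{iz}{2})[/math], the value H(0, 0) should come out to ξ(1/2)/8 ≈ 0.0621, and H(0, z) should change sign between z = 28.0 and z = 28.6, which makes a handy sanity check.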
Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives [math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math] for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real. [math]t\gt0[/math] For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2]. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math]. The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details.
Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as [math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] where the dependence on [math]t[/math] has been omitted for brevity. In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic [math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math] as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that [math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math] as [math]k \to +\infty[/math]. See asymptotics of H_t for asymptotics of the function [math]H_t[/math]. Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018. Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018. Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Wikipedia and other references Bibliography [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S.
Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height (2004).
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics 222 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function, second edition, edited and with a preface by D. R. Heath-Brown, The Clarendon Press, Oxford University Press, New York, 1986.
As is well known, Dirac introduced the creation and annihilation operators for the quantum harmonic oscillator ($\hat{a}^\dagger$ and $\hat{a}$ respectively), which are now part of every first course in quantum mechanics due to their far-reaching importance and simplicity. $$\hat{a}^\dagger=\sqrt{\frac{m\omega}{2\hbar}}\left[\hat{x}-\frac{i}{m\omega}\hat{p}\right],\,\,\,\hat{a}=\sqrt{\frac{m\omega}{2\hbar}}\left[\hat{x}+\frac{i}{m\omega}\hat{p}\right]$$ However, in English we say "a-dagger" for the creation operator, and the word "dagger" is certainly more associated with destruction (killing, stabbing, etc.) than it is with "creation". This is almost always an unspoken source of confusion for native English-speaking students of physics. Given that this notation was originally introduced by an Englishman, how come we haven't simply swapped the definitions of $\hat{a}^\dagger$ and $\hat{a}$, then called $\hat{a}^\dagger$ the annihilation operator and $\hat{a}$ the creation operator — a redefinition that would have no effect on the physics but a substantial effect on a student's ability to remember these operators? Is there a reason why Dirac chose this language-awkward definition?
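Whatever one thinks of the name, the dagger itself just denotes the Hermitian adjoint (conjugate transpose). As a quick illustration (my own sketch, not part of the question), one can build the standard number-basis matrices, where $\hat{a}|n\rangle = \sqrt{n}\,|n-1\rangle$, take the dagger literally as a transpose, and watch $[\hat{a},\hat{a}^\dagger]=1$ hold on every level except the top one, where the truncation bites:

```python
import math

N = 8  # truncate the oscillator Hilbert space to |0>, ..., |N-1>

# <m| a |n> = sqrt(n) when m = n - 1, else 0: "a" lowers (annihilates) a quantum.
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(N)] for m in range(N)]

# The dagger is the conjugate transpose; all entries here are real.
adag = [[a[n][m] for n in range(N)] for m in range(N)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

aad, ada = matmul(a, adag), matmul(adag, a)
comm = [[aad[i][j] - ada[i][j] for j in range(N)] for i in range(N)]
# comm is the identity except comm[N-1][N-1] = -(N-1): an artifact of
# truncation, since a^dagger|N-1> has nowhere to go in the finite space.
```

In this basis $\hat{a}^\dagger$ really does create: its entry adag[1][0] = 1 maps $|0\rangle$ to $|1\rangle$, regardless of what the symbol evokes.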
Klaus Warzecha's answer pretty much answers your question. But I know that this subject is easier to understand if supported by some pictures. That's why I will take the same route as Klaus at explaining the concept behind why the absorption in conjugated systems is shifted to higher wavelengths, but I will provide some pictures on the way. In a conjugated carbon chain or ring system you can think of the $\ce{C}$ atoms as $\text{sp}^{2}$-hybridized. So, each carbon has 3 $\text{sp}^{2}$ orbitals which it uses to form $\sigma$ bonds and 1 $\text{p}$ orbital which is used to form $\pi$ bonds. It is the $\text{p}$ orbitals that are responsible for the conjugation, and their combinations according to the LCAO model are the interesting part, since the HOMO and LUMO of the system will be among the molecular orbitals formed from the conjugated $\text{p}$ orbitals. For a start take ethene, the simplest $\pi$-system, being comprised of only 2 carbon atoms. When you combine two atomic orbitals you get two molecular orbitals. These result from combining the $\text{p}$ orbitals either in-phase or out-of-phase. The in-phase combination is lower in energy than the original $\text{p}$ orbitals and the out-of-phase combination is higher in energy than the original $\text{p}$ orbitals. The in-phase combination accounts for the bonding molecular orbital ($\pi$), whilst the out-of-phase combination accounts for the antibonding molecular orbital ($\pi^{*}$).
Now, what happens when you lengthen the conjugated system by combining two ethene fragments? You get to butadiene. Butadiene has two $\pi$ bonds and so four electrons in the $\pi$ system. Which molecular orbitals are these electrons in? Since each molecular orbital can hold two electrons, only the two molecular orbitals lowest in energy are filled. Let's have a closer look at these orbitals. In $\Psi_1$, the lowest-energy bonding orbital, the electrons are spread out over all four carbon atoms (above and below the plane) in one continuous orbital. There is bonding between all the atoms. The other two electrons are in $\Psi_2$. This orbital has bonding interactions between carbon atoms 1 and 2, and also between 3 and 4, but an antibonding interaction between carbons 2 and 3. Overall, in both the occupied $\pi$ orbitals there are electrons between carbons 1 and 2 and between 3 and 4, but the antibonding interaction between carbons 2 and 3 in $\Psi_2$ partially cancels out the bonding interaction in $\Psi_1$. This explains why all the bonds in butadiene are not the same and why the middle bond is more like a single bond while the end bonds are double bonds. If we look closely at the coefficients on each atom in orbitals $\Psi_1$ and $\Psi_2$, it can be seen that the bonding interaction between the central carbon atoms in $\Psi_1$ is greater than the antibonding one in $\Psi_2$. Thus butadiene does have some double bond character between carbons 2 and 3, which explains why there is the slight barrier to rotation about this bond. You can construct the molecular orbitals of butadiene by combining the molecular orbitals of the two ethene fragments in-phase and out-of-phase.
This method of construction also shows why the HOMO-LUMO gap of butadiene is smaller than that of ethene. The molecular orbital $\Psi_2$, which is the HOMO of butadiene, is the out-of-phase combination of two ethene $\pi$ orbitals, which are the HOMO of ethene. Thus, the HOMO of butadiene is higher in energy than the HOMO of ethene. Furthermore, the molecular orbital $\Psi_3$, which is the LUMO of butadiene, is the in-phase combination of two ethene $\pi^{*}$ orbitals, which are the LUMO of ethene. Thus, the LUMO of butadiene is lower in energy than the LUMO of ethene. It follows that the HOMO-LUMO energy gap is smaller in butadiene than in ethene and thus butadiene absorbs light with longer wavelengths than ethene. If you continue to lengthen the $\pi$ system by adding more ethene fragments you will see that the HOMO and LUMO are getting closer and closer together the longer the $\pi$ system becomes.
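This trend can be made quantitative with the simple Hückel model (my addition, not part of the original answer): for a linear chain of $N$ conjugated carbons the $\pi$ orbital energies have the closed form $E_j = \alpha + 2\beta\cos\bigl(j\pi/(N+1)\bigr)$, $j = 1,\dots,N$, and the HOMO-LUMO gap computed from it shrinks steadily from ethene to longer polyenes:

```python
import math

def huckel_gap(n_carbons):
    """HOMO-LUMO gap, in units of |beta|, for a linear polyene of N p orbitals.

    Orbital energies: E_j = alpha + 2*beta*cos(j*pi/(N+1)).  With beta < 0
    the j = 1 orbital is lowest; the N pi electrons fill j = 1 .. N/2, so
    the HOMO is j = N/2 and the LUMO is j = N/2 + 1.
    """
    assert n_carbons % 2 == 0, "closed-shell polyene expected"
    energy = lambda j: -2.0 * math.cos(j * math.pi / (n_carbons + 1))  # alpha = 0
    homo, lumo = n_carbons // 2, n_carbons // 2 + 1
    return energy(lumo) - energy(homo)

# Ethene, butadiene, hexatriene, octatetraene:
gaps = [huckel_gap(n) for n in (2, 4, 6, 8)]
```

For ethene the gap is exactly $2|\beta|$, and by octatetraene it has already dropped to about $0.69|\beta|$; within this idealized model the gap would close entirely in the infinite-chain limit (real polyenes level off because of bond alternation, which plain Hückel theory ignores).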
It looks like you're new here. If you want to get involved, click one of these buttons! In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! 
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
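None of this requires anything fancy: the adjunction between \( \vee \) and \( \Delta \) can be checked by brute force on a small example. Here is a sketch of mine (not part of the lectures) using the divisors of 12 ordered by divisibility, where the join of two divisors is their least common multiple:

```python
from math import gcd

# Divisors of 12, ordered by divisibility: a <= b  iff  a divides b.
A = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0
join = lambda a, b: a * b // gcd(a, b)   # least common multiple

# Check that join is monotone:
# a <= a' and b <= b' implies a v b <= a' v b'.
for a in A:
    for a2 in A:
        for b in A:
            for b2 in A:
                if leq(a, a2) and leq(b, b2):
                    assert leq(join(a, b), join(a2, b2))

# Check the adjunction  a v a' <= b  iff  a <= b and a' <= b,
# i.e. v : A x A -> A is left adjoint to the diagonal.
for a in A:
    for a2 in A:
        for b in A:
            assert leq(join(a, a2), b) == (leq(a, b) and leq(a2, b))

print("join is monotone and left adjoint to the diagonal")
```

Replacing lcm by gcd and flipping the inequality checks the dual statement, that \( \wedge \) is right adjoint to \( \Delta \), on the same poset.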
NUCLEAR PHYSICS B, ISSN 0550-3213, 05/1998, Volume 519, Issue 1-2, pp. 19 - 36
Using a sample of 10^8 triggered events, produced in pi(-)-Cu interactions at 350 GeV/c, we have identified 26 beauty events. The estimated background in this...
Subjects: BOTTOM PRODUCTION | total cross section measurement | beauty-meson hadroproduction | TRIGGER | DETECTOR | PHYSICS | WA92 | TRACK | PHYSICS, PARTICLES & FIELDS
Journal Article

Journal of High Energy Physics, ISSN 1029-8479, 01/2017, Volume 2017, Issue 8
Abstract: A test of lepton universality, performed by measuring the ratio of the branching fractions of the B0 → K*0 μ+μ− and B0 → K*0 e+e− decays, R...
Subjects: scattering [p p] | Hadron-Hadron scattering (experiments) | Lepton universality | Rare decay | High Energy Physics - Experiment | pair production [electron] | semileptonic decay [B0] | meson | B0 --> K0 muon+ muon- | universality [lepton] | (muon+ muon-) [mass spectrum] | LHCb | branching ratio [B0] | experimental results | Nuclear and particle physics. Atomic energy. Radioactivity | CERN LHC Coll | hadronic decay [K0] | phantom | B physics | data analysis method | 7000 GeV-cms, 8000 GeV-cms | ratio [branching ratio] | K0 --> K+ pi | Branching fraction | pair production [muon] | B0 --> K0 electron positron | colliding beams [p p] | (electron positron) [mass spectrum] | statistical analysis | mass spectrum [dilepton]
Journal Article

Physics Letters B, ISSN 0370-2693, 04/2017, Volume 767, Issue C, p. 110
Journal Article

64. Measurement of exclusive γγ→ℓ+ℓ− production in proton–proton collisions at √s = 7 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242 - 261
Journal Article

JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 08/2017, Volume 2017, Issue 8
A test of lepton universality, performed by measuring the ratio of the branching fractions of the B0 → K*0 μ+μ− and B0 → K*0 e+e− decays, R_K*0, ...
Subjects: B physics | Branching fraction | Hadron-Hadron scattering (experiments) | Rare decay | TOOL | PHYSICS, PARTICLES & FIELDS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article

66. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1 - 34
Journal Article

67. Search for Contact Interactions in Dimuon Events from pp Collisions at √s = 7 TeV with the ATLAS Detector
Physical Review D, ISSN 1550-7998, 2011, Volume 84, Issue 1, p. 011101(R); Phys. Rev. D 84 (2011) 011101(R)
A search for contact interactions has been performed using dimuon events recorded with the ATLAS detector in proton-proton...
Subjects: Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article

68. Measurement of forward tt̄, W + bb̄ and W + cc̄ production in pp collisions at √s = 8 TeV
Physics Letters B, ISSN 0370-2693, 04/2017, Volume 767, pp. 110 - 120
The production of tt̄, W + bb̄ and W + cc̄ is studied in the forward region of proton-proton collisions collected at a...
Subjects: PRODUCTION CROSS-SECTION | ROOT-S=7 | DECAY | ASTRONOMY & ASTROPHYSICS | JETS | PHYSICS, NUCLEAR | PP COLLISIONS | PHYSICS, PARTICLES & FIELDS | Jets in large-Q2 scattering | Models beyond the standard model | High Energy Physics | Heavy quarkonia | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Nuclear and High Energy Physics | Experiment | LHCb | Hadron-induced high- and super-high-energy interactions (energy > 10 GeV): Inclusive production with identified hadrons | High Energy Physics - Experiment | Hadron-Hadron Scattering | B physics | Flavor physics | Physics
Journal Article

69. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1 - 34
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄...
Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article

Physical Review Letters, ISSN 0031-9007, 08/2013, Volume 111, p. 191801
Journal Article

71. Observation of the decay B_s^0 → η_c φ and evidence for B_s^0 → η_c π+π−
Journal of High Energy Physics, ISSN 1029-8479, 02/2017, Volume 21
Journal Article

72. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton-proton collisions at √s = 13 TeV with the ATLAS detector
European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states, where ℓ stands for...
Subjects: DISTRIBUTIONS | BOSON | DECAY | MASS | TAUOLA | TOOL | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article

73. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in...
Subjects: Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment
Journal Article

Journal of High Energy Physics, ISSN 1029-8479, 04/2015, Volume 1504, Issue 4
An angular analysis of the B0 → K*0 e+e− decay is performed using a data sample, corresponding to an integrated luminosity of 3.0 fb−1, collected by...
Subjects: Polarization | B physics | Rare decay | Hadron-Hadron Scattering | Flavour Changing Neutral Currents | TOOL | PHYSICS, PARTICLES & FIELDS | Astrophysics | Neutral currents | Nuclear and High Energy Physics | 12.15.Mm | Experiment | Earth and Planetary Astrophysics | High Energy Physics - Experiment | Models beyond the standard model | Polarization in interactions and scattering | High Energy Physics | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | LHCb | 13.88.+e | 12.60.-i
Journal Article

Journal of High Energy Physics, ISSN 1029-8479, 2/2013, Volume 2013, Issue 2, pp. 1 - 20
A measurement of the cross-section for pp → Z → e+e− is presented using data at √s = 7 TeV corresponding to an integrated luminosity of 0.94 fb−1. The...
Subjects: Electroweak interaction | QCD | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
Recall the following properties of the Fourier transform: If $x(t)$ is an even function, then its Fourier transform $X(\omega)$ is purely real. If $x(t)$ is an odd function, then its Fourier transform $X(\omega)$ is purely imaginary. Thus, we can think of the real and imaginary parts of the Fourier transform of a zero-mean Gaussian random process $x(t)$ as the Fourier transforms of two separate inputs $x_e(t)$ and $x_o(t)$: the components of the process that have even and odd symmetry, respectively. We split the process into these components as follows: $$x_e(t) = \frac{x(t)+x(-t)}{2}$$ $$x_o(t) = \frac{x(t)-x(-t)}{2}$$ Note that $x(t) = x_e(t) + x_o(t)$. Moving through this step by step: Your observation was that $\text{Re}\{X(\omega)\}$ and $\text{Im}\{X(\omega)\}$ (the real and imaginary parts of the process's DFT) are uncorrelated. Using the Fourier transform properties mentioned above, we can deduce that $\text{Re}\{X(\omega)\} = X_e(\omega)$ and $\text{Im}\{X(\omega)\} = X_o(\omega)$ (the latter up to a factor of $j$, since the transform of the odd part is purely imaginary); the real and imaginary parts of $X(\omega)$ are none other than the Fourier transforms of $x_e(t)$ and $x_o(t)$, respectively. Therefore, your observation is equivalent to saying that $X_e(\omega)$ and $X_o(\omega)$ are uncorrelated. Since the Fourier transform is a one-to-one mapping between the time and frequency domains, I posit that the lack of correlation between $X_e(\omega)$ and $X_o(\omega)$ would imply a lack of correlation between $x_e(t)$ and $x_o(t)$ as well. What is the correlation between $x_e(t)$ and $x_o(t)$? Simple: $$\begin{align}\mathbb{E}(x_e(t)x_o(t)) &= \mathbb{E}\left(\left(\frac{x(t)+x(-t)}{2}\right)\left(\frac{x(t)-x(-t)}{2}\right)\right) \\&= \mathbb{E}\left(\frac{1}{4}\left(x^2(t) - x^2(-t)\right)\right) \\&= \frac{1}{4} \left(\mathbb{E}(x^2(t)) - \mathbb{E}(x^2(-t))\right) \\&= \frac{1}{4} \left(\sigma^2 - \sigma^2\right) \\&= 0\end{align}$$ As expected, the even and odd components are uncorrelated.
So, to summarize, I would say the following: The real and imaginary components of a Fourier transform correspond to the individual Fourier transforms of the even and odd components of the input function. For a zero-mean Gaussian random process, these even and odd components are uncorrelated. Therefore, their Fourier transforms (the real and imaginary components that you asked about) are also uncorrelated. Edit: To address your followup: If $x(t)$ is Gaussian, then its even and odd components $x_e(t)$ and $x_o(t)$ are as well, due to the property that any weighted sum of Gaussian random variables is also Gaussian. If $x_e(t)$ and $x_o(t)$ are Gaussian random processes, then their Fourier transforms $X_e(\omega)$ and $X_o(\omega)$ are as well. This follows from the same property as the previous statement; if you look at the transform, you're computing a weighted sum of a bunch of Gaussian random variables. If $X_e(\omega)$ and $X_o(\omega)$ are Gaussian, and they are uncorrelated with one another (as described above), then they are also independent. This is a property of jointly Gaussian random variables.
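As a numerical sanity check (my own addition, not part of the original answer), one can estimate the correlation between the real and imaginary parts of a single DFT bin of white Gaussian noise; the bin index and sample sizes below are arbitrary choices:

```python
import cmath
import random

def dft_bin(x, k):
    """k-th DFT coefficient of the sequence x."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))

random.seed(0)
N, k, trials = 32, 5, 2000          # arbitrary bin and sizes
re, im = [], []
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(N)]   # zero-mean Gaussian process
    X = dft_bin(x, k)
    re.append(X.real)
    im.append(X.imag)

# Sample correlation between Re{X(k)} and Im{X(k)}.
mr = sum(re) / trials
mi = sum(im) / trials
cov = sum((a - mr) * (b - mi) for a, b in zip(re, im)) / trials
var_r = sum((a - mr) ** 2 for a in re) / trials
var_i = sum((b - mi) ** 2 for b in im) / trials
corr = cov / (var_r * var_i) ** 0.5
print(round(corr, 3))   # close to 0, as the argument above predicts
```

The estimated correlation hovers near zero (within Monte Carlo noise), consistent with the even/odd decomposition argument.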
I have a sphere of radius $r$ that has a small point-like object moving at a constant velocity inside it. Each time it hits the side it bounces in a random direction, but always away from the tangent plane at the contact point. The longest distance it can travel is through the center to the other side, $2r$. If it bounces at $45^\circ$ ($\alpha = \pi/4$), it will hit the sphere at a point on the intersection of the sphere with the tangent plane moved to the center. The distance would be $2r \cos(\alpha)$. I assume this works for all bounce directions with $0 < \alpha \leq \pi$. I need to find the average distance traveled between bounces, but I am somewhat at a loss as to how to find the solution. I assume it's an integral of some type? This should be a fixed value, a simple multiple of the radius, the same no matter the value of $r$, and I would be happy with that number. But I would be very happy to get a detailed method, as I would like to be able to express the angle of reflection in terms of a curve. As the question is asked above, that curve is a flat line (uniform distribution): $\alpha = \operatorname{random}(0\;\text{to}\;\pi)\,p(x)$ where $p(x) = 1$, but $p(x)$ could be any function. Is this possible? Update: To clarify, the angle of reflection $\alpha_x$ is measured from the axis that is the line from the point of contact through the center, and satisfies $0 < \alpha_x \leq \pi$, where $\pi$ means traveling through the center to the opposite side. The second angle, around that axis, can be $0 < \alpha_y < 2\pi$. Sorry, I have very little knowledge of the vernacular used to express math concepts. The random function should return an evenly distributed random value.
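For what it's worth, here is a quick Monte Carlo sketch of mine, under one particular reading of "evenly distributed": the bounce direction is drawn uniformly over the inward hemisphere of solid angle. Under that assumption the average chord length between bounces comes out to exactly $r$:

```python
import random

random.seed(1)
r = 1.0                      # unit sphere; the answer scales linearly with r
p = (r, 0.0, 0.0)            # start on the sphere surface
total, bounces = 0.0, 100_000

for _ in range(bounces):
    # Isotropic random direction, flipped inward if necessary.
    while True:
        d = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        norm = (d[0]**2 + d[1]**2 + d[2]**2) ** 0.5
        if norm > 1e-12:
            break
    d = tuple(c / norm for c in d)
    dot = sum(a * b for a, b in zip(p, d))
    if dot > 0:              # pointing outward: flip it inward
        d, dot = tuple(-c for c in d), -dot
    # For |p| = r = 1, the chord to the next wall hit has length L = -2 (p . d).
    L = -2.0 * dot
    total += L
    p = tuple(a + L * b for a, b in zip(p, d))

print(total / bounces)       # approaches r (here 1.0)
```

Analytically, averaging the chord length $2r\cos\theta$ (with $\theta$ the angle from the inward normal) against the hemisphere measure $\sin\theta\,d\theta\,d\phi/2\pi$ gives exactly $r$, matching the simulation. A different bounce-angle distribution, your $p(x)$, would just change the weight in that integral.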
While it's apparently some form of time series, if there are multiple observations at a single time point, it might be a little unusual to call it a time series. One difficulty I see is: how do I relate (say) 2 observations at time $\tau$ to 3 observations at time $\tau-s$? What's the model? Does the expectation at time $\tau$ relate to the average at time $\tau-s$, or what? With single-observation-at-a-time data, there are certainly ways to write continuous-time autocorrelated models. A common example is the Ornstein-Uhlenbeck process. See also the Vasicek model, a particular example of its use. It's possible to write an unevenly-sampled observational model based on the O-U process in the form (if $s,t$ are consecutive observation times): $\hspace{1cm}(y_t-\mu) = e^{-\phi(t-s)} (y_s-\mu) + n_{t-s}\,,\quad$ where $\:n_u\sim N\!\left(0,\frac{\sigma^2}{2\phi} (1-e^{-2\phi u})\right)$ (I think!), which when $t-s=1$ (i.e. the usual regular-time-interval situation) corresponds to an AR(1) with coefficient $e^{-\phi}$.
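To illustrate (my own sketch, with parameter values chosen arbitrarily): the transition density above lets you simulate the O-U process exactly at irregular observation times, and the stationary variance $\sigma^2/(2\phi)$ comes out as expected:

```python
import math
import random

random.seed(2)
mu, phi, sigma = 0.0, 1.0, math.sqrt(2.0)   # stationary variance = sigma^2/(2*phi) = 1
y = mu                                       # start at the mean
ys = []
for _ in range(200_000):
    dt = random.uniform(0.1, 2.0)            # irregular gaps between observations
    decay = math.exp(-phi * dt)              # e^{-phi (t - s)}
    sd = math.sqrt(sigma**2 / (2 * phi) * (1 - decay**2))
    y = mu + decay * (y - mu) + random.gauss(0.0, sd)
    ys.append(y)

m = sum(ys) / len(ys)
var = sum((v - m) ** 2 for v in ys) / len(ys)
print(round(var, 2))    # near sigma^2 / (2*phi) = 1
```

Holding `dt` fixed at 1 reduces this to an ordinary AR(1) recursion with coefficient $e^{-\phi}$.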
Let's assume that we have $X_1,...,X_n\sim^{iid} Exp(\theta)$. The LRT is of the form $\Lambda(X,\theta_0,\theta_1)\propto e^{\left(\frac{1}{\theta_0}-\frac{1}{\theta_1}\right)\sum X_i}$. When $\theta_0<\theta_1$, the LRT is increasing in the sufficient statistic $T(X)=\sum X_i$. Why the focus on the monotonicity of the LRT w.r.t. $\sum X_i$? Is it because then we can write the Neyman-Pearson test as $T(X)\in \text{Rej.Region}_{\alpha}$, and we usually know the distribution of $T(X)$? I ask this because it would also seem important to focus on $\Lambda(X,\theta,\theta_1)$ being monotone w.r.t. $\theta$. Then, testing the hypothesis $H_0: \theta=\theta_0 \ vs \ H_1:\theta>\theta_0$, we could also test $H_0: \theta\leq\theta_0 \ vs \ H_1:\theta>\theta_0$, maintaining the size of the test. For this example the Neyman-Pearson test for the simple hypothesis is: reject $H_0$ if $\sum X_i > k_{\alpha}$. Let's call it $\phi_{simple}(X)$. To extend to the composite hypothesis, we notice $$Pr_{\theta}(\sum X_i > k_{\alpha})=Pr_{\theta}(\Lambda(X,\theta,\theta_1) > K_{\alpha})\leq Pr_{\theta_0}(\Lambda(X,\theta_0,\theta_1) > K_{\alpha})\leq\alpha \quad \forall \theta\leq \theta_0$$ so, due to the monotonicity of $\Lambda$, $\phi_{simple}(X)$ still has the same size. Now, by the Neyman-Pearson theorem, under the simple hypothesis, we have that $E_{\theta_1}(\phi(X))\leq E_{\theta_1}(\phi_{simple}(X)) \quad \forall \theta_1> \theta_0$, for tests $\phi(X)$ of size $\alpha$. For the composite hypothesis, the $\Theta_1$ set is exactly the same, so our $\phi_{simple}$ should be UMP for the composite hypothesis. Couldn't we extend this reasoning to other examples, and make it a theorem? The only practical problem would be knowing the distribution of the LRT, I think.
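A small simulation (mine; the constants are arbitrary) illustrates the key step: the cutoff $k_\alpha$ calibrated at $\theta_0$ gives a test whose rejection probability is at most $\alpha$ for every $\theta \le \theta_0$, and exceeds $\alpha$ for $\theta > \theta_0$:

```python
import random

random.seed(3)
n, theta0, alpha = 10, 1.0, 0.05

def sum_of_exponentials(theta):
    # Exp(theta) parameterized by its mean theta.
    return sum(random.expovariate(1.0 / theta) for _ in range(n))

# Estimate k_alpha: the (1 - alpha) quantile of sum X_i under theta0.
sums0 = sorted(sum_of_exponentials(theta0) for _ in range(100_000))
k_alpha = sums0[int((1 - alpha) * len(sums0))]

def rejection_rate(theta, reps=100_000):
    return sum(sum_of_exponentials(theta) > k_alpha for _ in range(reps)) / reps

size_at_null = rejection_rate(theta0)     # about alpha
size_below = rejection_rate(0.7)          # well below alpha: size is preserved
power_above = rejection_rate(1.5)         # above alpha: the test has power
print(size_at_null, size_below, power_above)
```

This is exactly the stochastic-ordering fact the displayed inequality uses: $\sum X_i$ is stochastically increasing in $\theta$, so calibrating at the boundary $\theta_0$ controls the size over the whole composite null.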
Search for dissertations about: "renewal processes"
Showing result 1 - 5 of 64 Swedish dissertations containing the words renewal processes.

University dissertation from Västerås: Mälardalen University
Abstract: In this thesis we investigate a model of nonlinearly perturbed continuous-time renewal equation. Some characteristics of the renewal equation are assumed to have non-polynomial perturbations; more specifically, they can be expanded with respect to a non-polynomial asymptotic scale.

University dissertation from Göteborg: Chalmers University of Technology
Abstract: This paper deals with a generalization of the class of renewal processes with absolutely continuous life length distribution, obtained by allowing a random environment to modulate the stochastic intensity of the renewal process. The random environment is a birth and death process with a finite state space.

3. Understanding regional renewal and industry cluster emergence processes within the Swedish periphery
University dissertation from Umeå: Handelshögskolan vid Umeå universitet
Abstract: There are many insightful writings revealing that regions within industrialised nations are able to renew their local business environments through building and supporting industry clusters. Such knowledge stems from research based on how to maintain and develop successful industry clusters located within central regions.

University dissertation from Västerås: Mälardalen University
Abstract: This thesis deals with a model of nonlinearly perturbed continuous-time renewal equation with nonpolynomial perturbations.
The characteristics, namely the defect and moments, of the distribution function generating the renewal equation are assumed to have expansions with respect to a non-polynomial asymptotic scale: $\{\varphi_{\mathbf{n}}(\varepsilon) = \varepsilon^{\mathbf{n} \cdot \mathbf{w}}, \mathbf{n} \in \mathbf{N}_0^k\}$ as $\varepsilon \to 0$, where $\mathbf{N}_0$ is the set of non-negative integers, $\mathbf{N}_0^k \equiv \mathbf{N}_0 \times \cdots \times \mathbf{N}_0$, $1 \leq k < \infty$, with the product being taken $k$ times, and $\mathbf{w}$ is a $k$-dimensional parameter vector that satisfies certain properties.

University dissertation from Uppsala: Matematiska institutionen
Abstract: This thesis, consisting of three papers and a summary, studies topics in the theory of stochastic processes related to long-range dependence. Much recent interest in such probabilistic models has its origin in measurements of Internet traffic data, where typical characteristics of long memory have been observed.
Sometimes, especially in introductory courses, the instructor will try to keep things "focused" in order to promote learning. Still, it's unfortunate that the instructor couldn't respond in a more positive and stimulating way to your question. These reactions do occur at $\ce{sp^2}$-hybridized carbon atoms; they are often just energetically more costly, and therefore somewhat less common. Consider what happens when a nucleophile reacts with a carbonyl compound: the nucleophile attacks the carbonyl carbon atom in an $\ce{S_{N}2}$ manner. The electrons in the C-O $\pi$-bond can be considered as the leaving group, and a tetrahedral intermediate is formed with a negative charge on oxygen. It is harder to do this with a carbon-carbon double bond (energetically more costly) because you would wind up with a negative charge on carbon (instead of oxygen), which is energetically less desirable because of the relative electronegativities of carbon and oxygen. If you look at the Michael addition reaction, the 1,4-addition of a nucleophile to the carbon-carbon double bond in an $\alpha,\beta$-unsaturated carbonyl system, this could be viewed as an $\ce{S_{N}2}$ attack on a carbon-carbon double bond, but again, it is favored (lower in energy) because you create an intermediate with a negative charge on oxygen. $\ce{S_{N}1}$ reactions at $\ce{sp^2}$ carbon are well documented. Solvolysis of vinyl halides in very acidic media is an example. The resultant vinylic carbocations are actually stable enough to be observed using NMR spectroscopy. The picture below helps explain why this reaction is so much more difficult (energetically more costly) than the more common solvolysis of an alkyl halide. In the solvolysis of the alkyl halide we produce a traditional carbocation with an empty p orbital. In the solvolysis of the vinyl halide we produce a carbocation with the positive charge residing in an $\ce{sp^2}$ orbital.
Placing positive charge in an $\ce{sp^2}$ orbital is a higher-energy situation than placing it in a p orbital. Electrons prefer to be in orbitals with more s character, since the more s character an orbital has, the lower its energy; conversely, in the absence of electrons, an orbital prefers to have high p character and to mix its remaining s character into other bonding orbitals that do contain electrons, lowering their energy.
Let me combine all the ideas I find to prove that $\forall m \ge 1,\ H_m < \log(2m+1)$. As noted by @mihaild this is sufficient for the proof of the theorem. We proceed by induction. For $m = 1$, we have $1 < \log(3) = 1.09\ldots$, which holds. For $m > 1$, we assume $H_m < \log(2m+1)$ and would like to show $H_{m+1} < \log(2(m+1)+1)$. We have $H_{m+1} = H_m + \frac 1 {m+1} < \log(2m+1) + {?}$. To fill this placeholder we note that $\log(2m+1) + \log(x) = \log(2(m+1)+1)$ when $x = \frac{2m+3}{2m+1}$, so it suffices to prove that $\frac 1 {m+1} < \log\left(1+\frac 2 {2m+1}\right)$. But this is true since the function $f(x) = (x+1) \log\left(1+ \frac 2 {2x+1}\right) - 1 > 0$ on $\left]\frac{-1} 2,+\infty\right[$. To see this for the cases we are interested in, it is useful to remember that $e$ is the unique number for which $\left(1+\frac 1 {x}\right)^{x} < e < \left(1+ \frac 1 {x}\right)^{x+1}$ for all $x > 0$ (see Wikipedia). Application to Polya's inequality: In fact, what is needed to conclude the proof of Polya's inequality is, for $k \ge 2$: 1. If $k$ is odd then $H_{\frac{k-1} 2} < \log(k)$. 2. If $k$ is even then $H_{\frac k 2 - 1} + \frac 1 k < \log(k)$. For 1., the proof follows easily from the above, since $\log\left(2 \cdot \frac{k-1} 2 + 1\right) = \log(k)$. For 2., one notes $H_{\frac k 2 - 1} < \log\left(2\left(\frac k 2 - 1\right)+1\right) = \log(k-1)$ by the above. Then $\log(k-1) + \frac 1 k < \log(k)$ since the function $x\log\left(1+\frac 1 {x-1}\right) - 1$ is again positive on $]1,+\infty[$. The same inequality as before is used to establish this.
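A quick numeric sanity check of the inequality $H_m < \log(2m+1)$ (with $\log$ the natural logarithm):

```python
import math

H = 0.0
for m in range(1, 10_001):
    H += 1.0 / m                      # H_m = 1 + 1/2 + ... + 1/m
    assert H < math.log(2 * m + 1)    # the claimed bound

# The gap H_m - log(2m+1) tends to gamma - log(2), which is about -0.116,
# so the inequality holds with room to spare as m grows.
print(round(H - math.log(2 * 10_000 + 1), 4))
```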
Remember that, in an ideal gas, there are no interactions between the particles, hence, the particles do not exert forces on each other. However, particles do experience a force when they collide with the walls of the container. Let us assume that each collision with a wall is elastic. Let us assume that the gas is in a cubic box of length \(a\) and that two of the walls are located at \(x = 0\) and at \(x = a\). Thus, a particle moving along the \(x\) direction will eventually collide with one of these walls and will exert a force on the wall when it strikes it, which we will denote as \(F_x\). Since every action has an equal and opposite reaction, the wall exerts a force \(-F_x\) on the particle. According to Newton’s second law, the force \(-F_x\) on the particle in this direction gives rise to an acceleration via \[-F_x = ma_x = m\dfrac{\Delta v_x}{\Delta t} \label{2.1}\] Here, \(\Delta t\) represents the time interval between collisions with the same wall of the box. In an elastic collision, all that happens to the velocity is that it changes sign. Thus, if \(v_x\) is the velocity in the \(x\) direction before the collision, then \(-v_x\) is the velocity after, and \(\Delta v_x = -v_x - v_x = -2v_x\), so that \[-F_x = -2m\dfrac{v_x}{\Delta t} \label{2.2}\] In order to find \(\Delta t\), we recall that the particles move at constant speed. Thus, a collision between a particle and, say, the wall at \(x = 0\) will not change the particle’s speed. Before it strikes this wall again, it will proceed to the wall at \(x = a\) first, bounce off that wall, and then return to the wall at \(x = 0\). The total distance in the \(x\) direction traversed is \(2a\), and since the speed in the \(x\) direction is always \(v_x\), the interval \(\Delta t = \dfrac{2a}{v_x}\).
Consequently, the force is \[-F_x = -\dfrac{mv_x^2}{a} \label{2.3}\] Thus, the force that the particle exerts on the wall is \[F_x = \dfrac{mv_x^2}{a} \label{2.4}\] The mechanical definition of pressure is \[P = \dfrac{\langle F \rangle}{A} \label{2.5}\] where \(\langle F \rangle\) is the average force exerted by all \(N\) particles on a wall of the box of area \(A\). Here \(A = a^2\). If we use the wall at \(x = 0\) we have been considering, then \[P = \dfrac{N \langle F_x \rangle}{a^2} \label{2.6}\] because we have \(N\) particles hitting the wall. Hence, \[P = \dfrac{N m \langle v_x^2 \rangle}{a^3} \label{2.7}\] From our study of the Maxwell-Boltzmann distribution, we found that \[\langle v_x ^2 \rangle = \dfrac{k_B T}{m} \label{2.8}\] Hence, since \(a^3 = V\), \[P = \dfrac{N k_B T}{V} = \dfrac{n R T}{V} \label{2.9}\] which is the ideal gas law. The ideal gas law is an example of an equation of state. One way to visualize any equation of state is to plot the so-called isotherms, which are graphs of \(P\) vs. \(V\) at fixed values of \(T\). For the ideal-gas equation of state \(P = nRT/V\), some of the isotherms are shown in the figure below (left panel). If we plot \(P\) vs. \(T\) at fixed volume (called the isochores), we obtain the plot in the right panel. What is important to note, here, is that an ideal gas can exist only as a gas. It is not possible for an ideal gas to condense into some kind of “ideal liquid”. In other words, a phase transition from gas to liquid can be modeled only if interparticle interactions are properly accounted for. Note that the ideal-gas equation of state can be written in the form \[\dfrac{P V}{n R T} = \dfrac{P \bar{V}}{R T} = \dfrac{P}{\rho R T} = 1 \label{2.10}\] where \(\bar{V} = V/n\) is called the molar volume.
Unlike \(V\), which increases as the number of moles increases (an example of what is called an extensive quantity in thermodynamics), \(\bar{V}\) does not exhibit this dependence and, therefore, is called intensive. The quantity \[Z = \dfrac{P V}{n R T} = \dfrac{P \bar{V}}{R T} = \dfrac{P}{\rho R T} \label{2.11}\] is called the compressibility factor of the gas. In an ideal gas, if we “compress” the gas by increasing \(P\), the density \(\rho\) must increase as well so as to keep \(Z = 1\). For a real gas, \(Z\), therefore, gives us a measure of how much the gas deviates from ideal-gas behavior. Figure \(\PageIndex{2}\): A graph of the compressibility factor (Z) vs. pressure shows that gases can exhibit significant deviations from the behavior predicted by the ideal gas law. Image used with permission from Openstax (CC-BY) Figure \(\PageIndex{2}\) shows a plot of \(Z\) vs. \(P\) for several real gases and for an ideal gas. The plot shows that for sufficiently low pressures (hence, low densities), each gas approaches ideal-gas behavior, as expected. Extensive and Intensive Properties in Thermodynamics Before we discuss ensembles and how we construct them, we need to introduce an important distinction between different types of thermodynamic properties that are used to characterize the ensemble. This is the distinction between extensive and intensive properties. Thermodynamics always divides the universe into a “system” and its “surroundings” with a “boundary” between them. The unity of all three of these is the thermodynamic “universe”. Now suppose we allow the system to grow in such a way that both the number of particles and the volume grow with the ratio \(N/V\) remaining constant. Any property that increases as we grow the system in this way is called an extensive property. Any property that remains the same as we grow the system in this way is called intensive. Examples of extensive properties are \(N\) (trivially), \(V\) (trivially), energy, free energy,...
Examples of intensive properties are pressure \(P\), density \(\rho = N/V\), molar heat capacity, temperature, ....
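Returning to the kinetic derivation above, a quick numerical check (a sketch of mine, not part of the text, with an argon-like mass chosen arbitrarily): sampling \(v_x\) from the Maxwell-Boltzmann distribution and evaluating Equation \ref{2.7} reproduces the ideal-gas pressure of Equation \ref{2.9}:

```python
import math
import random

random.seed(4)
kB = 1.380649e-23        # J/K
T = 300.0                # K
m = 6.63e-26             # kg, roughly the mass of one argon atom
N = 100_000              # number of sampled particles
V = 1.0e-3               # m^3, arbitrary box volume

# One velocity component v_x is Gaussian with variance kB*T/m (Eq. 2.8).
s = math.sqrt(kB * T / m)
mean_vx2 = sum(random.gauss(0.0, s) ** 2 for _ in range(N)) / N

P_kinetic = N * m * mean_vx2 / V      # Eq. 2.7: P = N m <v_x^2> / V
P_ideal = N * kB * T / V              # Eq. 2.9: P = N kB T / V
print(P_kinetic / P_ideal)            # close to 1, i.e. Z = 1 for an ideal gas
```

The ratio printed is exactly the compressibility factor \(Z\) of Equation \ref{2.11} for this model, and it sits at 1 up to sampling noise.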
This article describes the scoring system in Arcaea in detail, as well as various other mechanics. Timing Like other rhythm games, points are given depending on how accurate and precise your timing is when hitting notes. Names are given to the different levels of timing judgement. The points per note of a song are computed from its note count as: $ \text{Points per Note} = \dfrac{10,000,000}{\text{Number of Notes}} $ For example, if a song has 1,000 notes, each note will be worth a maximum of 10,000 points. For every shiny PURE, you will gain 10,001 points; for every PURE, you will gain 10,000 points; for every FAR, you will gain 5,000 points. Since each shiny PURE awards 1 bonus point, the maximum possible score on a song is 10,000,000 plus the number of notes. For example, Sheriruth [FTR] has 1,151 notes, so its maximum possible score is 10,001,151. After the song ends, the game adds up the points awarded for all the notes in the chart to form your final score. This score determines the grade of your play. If the score is higher than the current recorded score, it is saved as your high score. Unlike many other rhythm games, combos have no effect on your score. Theoretically, it is possible to get over 10,000,000 points without a Pure Memory. For example, if a chart has 2,500 notes, then 2,499 shiny PUREs + 1 FAR gives a score of 10,000,499. A minimum of $ \Bigl\lceil0.5\times(1+\sqrt{20000001})\Bigr\rceil=2237 $ notes is necessary. Since there are currently no songs with a max combo of 2237 or greater, this remains impossible in practice. Result Summary When you select the PURE count on the result screen, additional details are displayed on the right. 1 — Shiny PURE notes hit. 2 — FAR EARLY notes hit, followed by PURE EARLY notes hit in parentheses. 3 — FAR LATE notes hit, followed by PURE LATE notes hit in parentheses.
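The scoring rule above can be sketched as a small function. How the game handles fractional per-note values is not specified in the text, so rounding the final total is an assumption here; the two worked examples from the text come out exactly either way.

```python
def arcaea_score(total_notes, shiny_pure, pure=0, far=0):
    """Total score: per-note awards plus 1 bonus point per shiny PURE.

    The per-note value is 10,000,000 / total_notes; a PURE earns the full
    value, a FAR earns half, and a shiny PURE earns the full value plus 1.
    (Rounding of fractional per-note values is an assumption, not a
    documented game rule.)
    """
    v = 10_000_000 / total_notes
    raw = (shiny_pure + pure) * v + far * (v / 2)
    return round(raw) + shiny_pure

# Sheriruth [FTR]: 1,151 notes, all shiny PURE -> its maximum score.
# A 2,500-note chart: 2,499 shiny PUREs + 1 FAR still clears 10,000,000.
```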
For example, in the above image, there are: 631 PURE 594 Shiny 37 Regular 19 LATE PURE 18 EARLY PURE 1 FAR 1 LATE FAR 0 EARLY FAR Grades After playing a song, you'll get a grade based on your score. A higher grade overwrites the currently recorded one; a lower grade never does. From high to low, the grade score ranges are: 9,800,000 or above; 9,500,000 - 9,799,999; 9,200,000 - 9,499,999; 8,900,000 - 9,199,999; 8,600,000 - 8,899,999; 0 - 8,599,999. Titles After every play, a title is obtained. Much like grades, higher titles automatically overwrite lower titles, and lower titles can never overwrite higher titles. From highest to lowest, the titles are awarded under the following conditions: Hit all notes with PURE timing. Hit all notes with at least 1 FAR timing. Complete a song with the HARD effect active. Complete a song with at least 70% Recollection Rate (or fill the Recollection Gauge at least 7 times with a CHUNITHM skill). Complete a song with at least 70% Recollection Rate with the EASY or OVERFLOW effect active. This title only aids in fulfilling song unlock criteria, and does not add to the cleared song count. Complete a song with less than 70% Recollection Rate with any skill (or reach 0% Recollection Rate with the HARD effect active). Potential See Potential. Reward and Currency See Currency.
Calculate the pH at the equivalence point for the titration of $\pu{0.130 M}$ methylamine ($\ce{CH3NH2}$) with $\pu{0.130 M}$ $\ce{HCl}$. The $K_\mathrm{b}$ of methylamine is ${5.0 \cdot 10^{–4}}$. So I started with the equation: $$\ce{HCl + CH3NH2 <=> CH3NH3+ + Cl-}$$ and then I knew that $$\mathrm{pH} = \mathrm{p}K_\mathrm{a} + \log \left(\frac{\ce{[base]}}{\ce{[acid]}} \right)$$ So, I put ${\log \left(\frac{0.130}{0.130}\right) = \log 1 = 0}$ and then added that to the $\mathrm{p}K_\mathrm{a}$, which I got from the equations $$K_\mathrm{a} = \frac{K_\mathrm{w}}{K_\mathrm{b}} \quad \rightarrow \quad \mathrm{p}K_\mathrm{a} = -\log(K_\mathrm{a})$$ However, after I plugged those in to get a pH, it turned out to be wrong, and the comments said that when titrating a weak base with a strong acid, the volume has doubled at the equivalence point and the concentrations are halved. Why is this? I now know that my original equation was wrong and it should be $$\ce{CH3NH3+ <=> H+ + CH3NH2}$$ and from there I should make an ICE table with the concentration of $\pu{0.0650 M}$.
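Following the comments, the numeric answer can be sketched quickly: equal volumes of equal-concentration solutions are mixed at the equivalence point, so $[\ce{CH3NH3+}] = \pu{0.0650 M}$, and the usual weak-acid approximation $[\ce{H+}] \approx \sqrt{K_\mathrm{a} c}$ finishes the job (a sketch; the approximation is valid here since $K_\mathrm{a} c \ll c^2$).

```python
import math

Kw, Kb = 1e-14, 5.0e-4
Ka = Kw / Kb          # 2.0e-11 for the conjugate acid CH3NH3+
c = 0.130 / 2         # equal volumes mixed -> concentration halved

# Weak-acid approximation for the ICE table: [H+] = sqrt(Ka * c)
h = math.sqrt(Ka * c)
pH = -math.log10(h)   # comes out near 5.94
```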
Revision as of 07:17, 13 February 2009 The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. [math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Useful background materials Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide. Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (inactive) (500-599) Possible proof strategies (active) (600-699) A reading seminar on density Hales-Jewett (active) We are also collecting bounds for Fujimura's problem. Here are some unsolved problems arising from the above threads. Bibliography M. Elkin, "An Improved Construction of Progression-Free Sets", preprint. H. Furstenberg, Y.
Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
Consider the following axiomatic definition of a field: A field is a set $F$ together with two binary operations $+$ and $\cdot$ on $F$ such that $(F,+)$ is an Abelian group with identity $0$ and $(F\setminus\{0\},\cdot)$ is an Abelian group with identity $1$, and the following left-distributive law holds: $$a\cdot(b+c)=(a\cdot b)+(a\cdot c)\quad\forall a,b,c\in F.$$ I want to show that $0\cdot x=0$ for any $x\in F$ using these, and only these, field axioms. I can prove that $x\cdot 0=0$ using left-distributivity, but multiplication with $0$ is not necessarily commutative a priori [that $(F\setminus\{0\},\cdot)$ is an Abelian group does not say anything about multiplication with $0$]. Any hint would be appreciated. To elaborate on my point, let me prove that $x\cdot 0=0$ for any $x\in F$: \begin{align*} 0+0=&\,0\\ \Downarrow&\,\\ x\cdot(0+0)=&\,x\cdot0\\ \Downarrow&\,\text{(left-distributivity)}\\ (x\cdot 0)+(x\cdot 0)=&\,x\cdot 0\\ \Downarrow&\,\\ [(x\cdot 0)+(x\cdot 0)]+[-(x\cdot 0)]=&\,x\cdot 0+[-(x\cdot 0)]\\ \Downarrow&\,\\ (x\cdot0)+\{(x\cdot0)+[-(x\cdot0)]\}=&\,0\\ \Downarrow&\,\\ (x\cdot0)+0=&\,0\\ \Downarrow&\,\\ x\cdot0=&\,0. \end{align*} My problem is that I would need to exploit right-distributivity to show that $0\cdot x=0$, but right-distributivity does not follow immediately from the axioms.
I am doing some research into the movement of robots executing a given algorithm, and have come up with the following recursive equations that describe the movement of the robots at each iteration: \begin{equation} \theta_i = \frac{3}{4}\theta_{i-1} \end{equation} \begin{equation} L_i=\frac{1}{2}\sqrt{L_{i-1}^2+A_{i-1}^2} \end{equation} \begin{equation} A_i=L_i\tan\theta_i \end{equation} where the initial conditions are $\theta_1=45^{\circ}$, $\;L_1=\frac{1}{2}d$, and $A_1=\frac{1}{2}d$. It is clear that the first recurrence relation can simply be replaced with the following closed-form solution: \begin{equation} \theta_i = \big(\frac{3}{4}\big)^{i-1}\;\theta_1 \end{equation} But my problem is trying to obtain closed-form solutions to the other two equations, which are defined in terms of each other. Is there a way to do this?
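Even without a closed form, the coupled recurrences are straightforward to iterate numerically; the sketch below uses $d = 1$, so $L_1 = A_1 = 1/2$. Note also that substituting $A_{i-1} = L_{i-1}\tan\theta_{i-1}$ into the second recurrence gives $L_i = \tfrac{1}{2}L_{i-1}/\cos\theta_{i-1}$, which at least decouples $L$ from $A$.

```python
import math

def iterate(steps, d=1.0):
    """Iterate the three coupled recurrences, returning (theta, L, A) per step."""
    theta = math.radians(45.0)   # theta_1 = 45 degrees
    L = d / 2                    # L_1 = d/2
    A = d / 2                    # A_1 = L_1 * tan(45 deg) = d/2
    history = [(theta, L, A)]
    for _ in range(steps - 1):
        theta = 0.75 * theta                     # theta_i = (3/4) theta_{i-1}
        L = 0.5 * math.sqrt(L * L + A * A)       # L_i from old L, A
        A = L * math.tan(theta)                  # A_i from new L, theta
        history.append((theta, L, A))
    return history

hist = iterate(5)
```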
Coloring.tex \section{The coloring Hales-Jewett problem}\label{coloring-sec} The Hales-Jewett theorem states that for every $k$ and every $r$ there exists an $n = HJ(k,r)$ such that if you colour the elements of $[k]{}^n$ with $r$ colours, then there must be a combinatorial line with all its points of the same colour. This is a consequence of the Density Hales-Jewett theorem, since there must be a set of density at least $r^{-1}$ of elements of $[k]{}^n$ all of whose elements have the same colour. It also follows from the Graham-Rothschild theorem [\cite{}]. By iterating the Hales-Jewett theorem, one can also show that one of the colour classes contains an $m$-dimensional combinatorial subspace, if $n$ is sufficiently large depending on $k$, $r$ and $m$. A related idea is that of the van der Waerden number $W(k,r)$ [\cite{waerden}], which is the length of the longest sequence $1,2,\ldots,n$ which can be $r$-coloured without a monochromatic arithmetic progression of length $k$. These numbers were used in this project to construct $r$-colourings of $k^{m}$ without combinatorial lines, and so provide new lower bounds for $HJ(k,r)$. This method is not new but the lower bounds appear to be new \cite{beck},\cite{shelah}. \begin{lemma} $HJ(k,r) \ge \lfloor (W(k,r)-1)/(k-1)\rfloor$ \end{lemma} \begin{proof} Let $m = \lfloor (W(k,r)-1)/(k-1) \rfloor$. Suppose we have a van der Waerden colouring of $1, 2, \ldots, W(k,r)$. Give each point $(a_1,a_2,\ldots,a_m) \in [k]{}^m$ the colour that corresponds to $1+\sum (a_j-1)$. The points of a combinatorial line give sums that form an arithmetic progression, and will therefore not be monochromatic. \end{proof} A recent table of lower bounds for the van der Waerden numbers can be found at \cite{heule}. These numbers were used to produce the following table of lower bounds for colouring Hales-Jewett numbers.
\begin{figure}[tb] \centerline{\begin{tabular}{l|lllll} $k\backslash r$ & 2 & 3 & 4 & 5 & 6 \\ \hline 3 & & 13 & 37 & 84 & 103 \\ 4 & 11 & 97 & 349 & 751 & 3259 \\ 5 & 59 & 302 & 2609 & 6011 & 14173 \\ 6 & 226 & 1777 & 18061 & 49391 & 120097 \\ 7 & 617 & 7309 & 64661 & & \\ 8 & 1069 & 34057 & & & \\ 9 & 3389 & & & & \end{tabular}} \caption{Lower bounds for colouring Hales-Jewett numbers} \label{HJcolour}\end{figure} A related problem arises if we colour the elements of $[k]{}^n$ with $r$ colours and look for a geometric line. Call the numbers associated with this problem $n = M(k,r)$. There is a relationship between the $HJ(k,r)$ and the $M(k,r)$. We can start with a colouring of $[k]{}^n$ free of combinatorial lines and get colourings of $[2k-1]{}^n$ and $[2k]{}^n$ free of geometric lines. We send each point of the original space to a set of points in the second space. When we go from $[k]{}^n$ to $[2k-1]{}^n$ we send coordinates equal to $k$ to $k$, and for the rest, if the coordinate is equal to $i$, we send it either to $2k-i$ or to $i$. So in this mapping a point can have from one to $2^{k}$ images. When we go from $[k]{}^n$ to $[2k]{}^n$, if the coordinate is equal to $i$, we send it either to $2k-i$ or to $i$. So in this mapping each point has $2^{k}$ images. In both of these colourings the preimage of a geometric line is a combinatorial line, so a combinatorial-line-free colouring becomes a geometric-line-free colouring, and we get that $M(2k-1,r)$ and $M(2k,r)$ are both greater than or equal to $HJ(k,r)$. We can then use the lower bound $2^{k}/k^{2}$ for $HJ(k,2)$ \cite{beck}. This improves on the previous method of computing the $M(k,r)$. It also improves the previous lower bound for $M(k,2)$ from $2^{k/4}/3(k^{4})$ to $2^{k/2-3}/(k/2)^{2}$ if $k$ is even, and $2^{k/2-3}/((k+1)/2)^{2}$ if $k$ is odd. The previous method used the sum of the squares of the coordinates and looked for sequences free of quadratic progressions \cite{beck}.
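As a sanity check, the lemma's bound reproduces several entries of the table when fed known van der Waerden numbers. The exact values $W(3,3)=27$, $W(4,2)=35$ and $W(3,4)=76$ used below are standard but are not stated in the text, so this is only a spot check.

```python
def hj_lower_bound(k, W):
    # Lemma: HJ(k, r) >= floor((W(k, r) - 1) / (k - 1))
    return (W - 1) // (k - 1)

# With W(3,3) = 27:  HJ(3,3) >= 13   (table entry k=3, r=3)
# With W(4,2) = 35:  HJ(4,2) >= 11   (table entry k=4, r=2)
# With W(3,4) = 76:  HJ(3,4) >= 37   (table entry k=3, r=4)
```

For larger $k$ and $r$ only lower bounds on $W(k,r)$ are known, which is why the table is built from the tables of van der Waerden lower bounds cited above.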
I fit a binary logistic regression model with a single categorical variable, for which I received a coefficient. When I added further categorical predictor variables, the coefficient of the original categorical variable changed. What is the reason for this? That is, why is the coefficient of a predictor variable sensitive to the presence of other predictor variables in the model? Actually, this question deserves special consideration. This is because even if your predictors are orthogonal, addition of more predictors to a model that are predictive of the outcome will increase the logistic regression coefficients of variables that are already present in the model. By increase, I mean move them further away from zero. The reason for this is simple. In logistic regression, the error is fixed and assumed to have a variance of $\pi^2/3$. If you create a better model for the outcome, the better model doesn't reduce the variance of the error, it increases the variance of the linear predictor on the logit scale. This causes an inflation in your coefficients. Mathematically, in logistic regression, you're estimating: \begin{equation} y^*/\sigma = X\frac{\beta}{\sigma} + e \end{equation} $y^*$ is the continuous latent variable underlying the observed binary 0,1 and $e \sim \mathrm{Logistic}(0, 1)$. If all the determinants of $y^*$ are in the model, $\sigma$, the scaling factor of the logistic distribution is 1. If any of the predictors of $y^*$ are missing, $\sigma$ is less than one such that as you include more determinants of the underlying latent variable, your coefficients will be guaranteed to increase if they are orthogonal and non-zero. If they are not orthogonal, how they will change also depends on the traditional mechanics of omitted variable bias. 
A simple simulation of the problem:

set.seed(124)
coefs <- t(replicate(1500, {
  # generate two independent random variables
  xb <- rbinom(200, 1, .5)
  xc <- rnorm(200)
  # generate outcome from both predictors;
  # we care about xb, so give xc a large coefficient
  # to simulate the effect of many missing predictors
  y <- rbinom(200, 1, 1 / (1 + exp(-(.25 * xb + xc))))
  # retrieve binary coefficient in model with:
  c(coef(glm(y ~ xb, binomial))["xb"],        # just binary predictor
    coef(glm(y ~ xb + xc, binomial))["xb"])   # both predictors
}))
# get average coefficient from each set of models
colMeans(coefs)
#        xb        xb
# 0.2083103 0.2562395

As one can see, the coefficient from the model without the continuous predictor is deflated, while the one in the model with both predictors is close to the .25 value we assigned it. This problem is described in the econometrics/sociology literature as unobserved heterogeneity. It plagues logistic and probit regression models. With these models, the extent to which the change in coefficients is caused by confounding is not clear. Traditional omitted variable bias is explained simply on Wikipedia. See this article for a simple-to-read explanation of the unobserved heterogeneity process in logistic regression. Unless your predictors are orthogonal, adding a new predictor to a model will always change the coefficient estimates on existing predictors. This holds for any model.
The following quote is from the book "The Art of Computer Programming": (..) sentence would presumably be used only if either $j$ or $k$ (not both) has exterior significance. In most cases we will use notation (2) only when the sum is finite; that is, when only a finite number of values $j$ satisfy $R(j)$ and have $a_j \neq 0$. If an infinite sum is required, for example $$\sum_{j=1}^{\infty} a_j = \sum_{j \geq 1}a_j = a_1 + a_2 + a_3+\cdots$$ with infinitely many nonzero terms, the techniques of calculus must be employed; the precise meaning of (2) is then $$\qquad\qquad \sum_{R(j)} a_j = \left(\lim_{n\rightarrow\infty} \sum_{R(j) \atop 0\leq j \leq n} a_j\right) + \left(\lim_{n\rightarrow\infty} \sum_{R(j) \atop 0\leq j \leq n} a_j\right),\qquad\qquad(3)$$ (...) And why are they exactly the same? I showed a math professor and he thinks they're labelled wrong but couldn't figure it out. I don't even get why there are two. Sorry if the question isn't clear enough. I'm referring to the two infinite sums. As far as (2) is concerned, he is only referring to what is on the left-hand side of the equation with the two infinite sums. It's just a way of representing any possible series. So essentially my question is: how does making any series go to infinity make two of them? Or am I misunderstanding? Also, I tried to post this in Math Stack Exchange and it wouldn't let me, so I came here since it's from the book The Art of Computer Programming.
Cohomology is a stronger invariant than homology because it can be equipped with a ring structure. To be precise, if one starts with the singular cohomology groups $H^\bullet(-; R)$ with coefficients in a ring $R$, then there is a map $$H^n(X; R) \times H^m(X; R) \stackrel{\smile}{\to} H^{m+n}(X; R)$$ which is graded commutative - that is, $\alpha \smile \beta = (-1)^{mn} \beta \smile \alpha$ where $\alpha \in H^n(X; R)$ and $\beta \in H^m(X; R)$. This makes $\bigoplus_i H^i(X; R)$ into a graded commutative ring, which we denote as $H^*(X; R)$, known as the graded cohomology ring of $X$ with coefficients in $R$. Now, the cup product map above can be defined in several ways. One standard way to define it is as $$H^n(X; R) \times H^m(X; R) \to H^{n+m}(X \times X; R) \to H^{n+m}(X; R)$$ where the first map in the composition is induced from the Eilenberg-Zilber map $C^n(X; R) \times C^m(Y; R) \to C^{n+m}(X \times Y; R)$ and the second is induced from the diagonal inclusion $\Delta : X \hookrightarrow X \times X$. Originally, Poincare defined this for smooth manifolds using transversality. That is, if $M$ is a smooth $n$-manifold, and $N, L$ are submanifolds of $M$ of codimension $k$ and $l$ respectively, then $[N], [L]$ represent homology classes in $H_{n-k}(M; \Bbb R)$ and $H_{n-l}(M; \Bbb R)$ respectively. Assume further that $N$ and $L$ are transverse. Under Poincare duality, $[N]$ corresponds to $[N]^*$ in $H^k( M; \Bbb R)$ and $[L]$ corresponds to $[L]^*$ in $H^l(M; \Bbb R)$. Poincare then defined the cup product of these two classes as $[N]^* \smile [L]^* := [N \pitchfork L]^*$, the dual of the class represented by $N \pitchfork L$.$(\star)$ However, there is a much easier definition one can see in standard algebraic topology books (e.g., Hatcher), which goes as follows.
Take a $k$-cochain $\psi$ in $C^k(X; R)$, an $l$-cochain $\varphi$ in $C^l(X; R)$, and then define the $(k+l)$-cochain $\psi \smile \varphi$ by $$(\psi \smile \varphi)(\sigma) := \psi(\sigma|_{[v_0, \cdots, v_k]}) \varphi(\sigma|_{[v_k, \cdots, v_{k+l}]})$$ where $\sigma : \Delta^{k+l} \to X$ is a singular $(k+l)$-simplex. One can easily see that this cochain-level map $C^k(X; R) \times C^l(X; R) \to C^{k+l}(X; R)$ descends to cohomology. This is precisely the cup product. All the definitions above are equivalent. The first definition was discovered by Eilenberg & Zilber in this paper, the second by Poincare in Analysis Situs. But who discovered the third? Remark $(\star)$: It is not true, however, that every singular cycle of a smooth manifold can be represented by a submanifold; see Thom's paper. Although, this is a high-dimensional phenomenon and the first counterexample appears at dimension $7$.
$\Phi$, or the golden ratio, is basically $\frac{a+b}{a}=\frac{a}{b}$. The silver ratio corresponds to a similar idea: $\frac{2a+b}{a}=\frac{a}{b}$. I've read on Wikipedia that both of these ratios are well known and have some appearance in nature (not the mystical hogwash stuff). In addition, there is the relation to the Fibonacci and Pell numbers respectively. Funny enough, I stumbled on these ratios originally by myself just messing around with numbers. I also noticed that the silver ratio minus 1 is exactly $\sqrt{2}$ (which is how I came to be fooling around with this). However, I don't seem to find anything about further manipulation of this ratio, such as: $\frac{3a+b}{a}=\frac{a}{b}$, $\frac{4a+b}{a}=\frac{a}{b}$ and so forth. Do these ratios have any special properties that are found in nature? Perhaps called the copper ratio or something?
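For what it's worth, the general pattern $\frac{na+b}{a}=\frac{a}{b}$ can be solved directly: setting $x = a/b$ turns the left side into $n + 1/x$, giving a quadratic. The resulting family is usually called the metallic means, and the $n=3$ member is sometimes called the bronze ratio. A short derivation:

```latex
\frac{na+b}{a} = \frac{a}{b}
\quad\Longrightarrow\quad n + \frac{1}{x} = x
\quad\Longrightarrow\quad x^2 - nx - 1 = 0
\quad\Longrightarrow\quad x = \frac{n + \sqrt{n^2 + 4}}{2}.
```

For $n=1$ this gives the golden ratio $\frac{1+\sqrt{5}}{2}$, and for $n=2$ it gives $1+\sqrt{2}$, which is why the silver ratio minus 1 is exactly $\sqrt{2}$.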
The simplest answer I can think of: because we can make a finite look-up table for answers. In other words, if you have a finite language $L = \{w_0, \ldots, w_{k-1}\}$, consider the following algorithm: Input: $x$ $L \leftarrow [w_0, \ldots, w_{k-1}]$ for $i$ from 0 to k-1 do $~~$ if $x = L[i]$ $~~$$~~$ accept reject Now, the problem with your argument is that you are thinking of $M$ as input in one place and not thinking of it as input in another place. If $M$ is not part of the input then the machine does not decide the halting problem but only a fixed instance of it, so the algorithm for deciding it doesn't imply that the halting problem is decidable. Note that we don't need to construct (uniformly in $M$) the algorithm deciding the halting of a given machine $M$; we only need to show that it exists, i.e. the process of finding the machine that decides whether $M$ halts or not does not need to be a computable process (and it can't be; if it could, then your argument would go through). On the other hand, if we assume that $M$ is a part of the input, then the set is not finite anymore. It is essentially $A = \{\langle M,b\rangle \mid \text{$b=0$ and $M$ halts}\}$. What you are doing is taking a slice of this set, $B_M = \{ w \in A \mid \mathsf{fst}(w)=M \}$, which is finite and therefore decidable. But the decidability of all slices of a set does not imply the set itself is decidable. We need to be able to decide membership in $A$ uniformly in $M$, i.e. we need a single algorithm that works for all $M$, not a different algorithm for each $M$.
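The look-up-table decider above can be written out directly. This is a sketch; the particular finite language used is an arbitrary example.

```python
def make_decider(L):
    """Return a decider for the finite language L: a hard-coded look-up table."""
    table = frozenset(L)          # finite, fixed when the "machine" is built
    def decide(x):
        return x in table         # accept iff x equals some w_i
    return decide

decide = make_decider({"aba", "bb", ""})
```

The point of the argument is exactly that `table` is baked into the machine: such a decider exists for every finite language, even though there need not be any computable way to produce it uniformly from a description of the language.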
I am learning about the binomial coefficient and counting. We define: $$ (1+x)^m = \sum^m_{n=0} \begin{pmatrix} m \\ n \end{pmatrix} x^n $$ where the coefficients of the powers $x^n$ represent the number of ways of choosing $n$ objects from a set of $m$. Is there a way to convert this into a probability distribution by dividing each coefficient by $$ \sum^m_{n=0} \begin{pmatrix} m \\ n \end{pmatrix}? $$ Example We have a set of three objects, $\{a, b , c\}$: we can draw 1 object three ways $(a + b + c)$, two objects three ways $(ab + ac + bc)$ and three objects one way $(abc)$. Then, $$ (1+x)^3 = 1x^0 + 3x^1 + 3x^2 + 1x^3 $$ If we divide each coefficient by the sum of the coefficients (in this case 8), then they represent probabilities(?). Can we then say that $(1+x)^m$ is the generator for a distribution function $$ p_n = \frac{1}{\sum^m_{n=0} \begin{pmatrix}m \\ n \end{pmatrix}} \begin{pmatrix} m \\ n \end{pmatrix} $$ such that $$ \frac{(1+x)^m}{\sum^m_{n=0} \begin{pmatrix}m \\ n \end{pmatrix}} = \sum_n p_n x^n $$
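A quick numerical check of the proposed normalization (a sketch): since the coefficients sum to $2^m$, the resulting $p_n$ are nonnegative and sum to 1, and they are exactly the probabilities of the Binomial$(m, 1/2)$ distribution.

```python
from math import comb

def p(m):
    """Normalize the binomial coefficients of (1+x)^m into probabilities."""
    total = sum(comb(m, n) for n in range(m + 1))   # equals 2**m
    return [comb(m, n) / total for n in range(m + 1)]

# The {a, b, c} example: coefficients 1, 3, 3, 1 divided by 8
probs = p(3)
```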
This scenario is explicitly handled by Gordan's theorem, which states$$\text{either} \quad\exists x \in \mathbb{R}_+^m\setminus\{0\} \centerdot Ax = 0,\quad\text{or}\quad\exists y\in\mathbb{R}^n\centerdot A^\top y > 0,$$ where $\mathbb{R}_+$ denotes nonnegative reals.(Like Farkas's Lemma, this is a "Theorem of Alternatives"; furthermore, it can be proved from Farkas's lemma.) A nice way to prove this is, as in Theorem 2.2.6 of Borwein & Lewis's text "Convex Analysis and Nonlinear Optimization", to consider the related optimization problem$$\inf_y \quad\underbrace{\ln\left(\sum_{i=1}^m \exp(y^\top A \textbf e_i)\right)}_{f(y)};$$as stated in that theorem, $f(y)$ is unbounded below iff there exists $y$ so that $A^\top y > 0$. As such, this also gives an unconstrained optimization problem you can plug into your favorite solver to determine which of the two scenarios you are in. Alternatively, you can explicitly solve for either the primal variables $x$ or the dual variables $y$ by considering a similar max entropy problem (i.e.$\inf_y\sum_i \exp(y^\top A\textbf{e}_i)$, which approaches 0 iff the desired $y$ exists) or its dual (you can find this in the above book, as well as papers by the same authors). Anyway, considering Gordan's theorem, your condition on the columns (which can be written $\textbf{1}^\top A = 0$) has no relationship to the question at hand. In one of your comments you mentioned wanting to generate these matrices. To pick positive examples, fix a satisfying $x$, and construct rows $b_i'$ by first getting some $b_i$ and setting $b_i' := b_i - (x^\top b_i)x / (x^\top x)$; to pick negative examples, by Gordan's theorem, choose some nonzero $y$, and then consider adding to $A$ a column $a_i$, including it if it satisfies $a_i^\top y > 0$.
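The behavior of $f(y)$ in the two alternatives can be seen on tiny examples. This is a sketch using plain NumPy; the matrices `A1` and `A2` are illustrative toy cases, not from the text.

```python
import numpy as np

def f(A, y):
    """f(y) = ln(sum_i exp(y^T A e_i)), summed over the columns of A."""
    z = A.T @ y
    zmax = z.max()                           # stable log-sum-exp
    return zmax + np.log(np.exp(z - zmax).sum())

# Alternative 1: A has a nonzero nonnegative null vector (x = (1, 1)),
# and f(y) = ln(e^y + e^{-y}) is bounded below by ln 2, attained at y = 0.
A1 = np.array([[1.0, -1.0]])

# Alternative 2: A^T y > 0 is solvable, and f(y) = ln 2 + y is
# unbounded below, as in the Borwein-Lewis characterization.
A2 = np.array([[1.0, 1.0]])
```

Minimizing `f` with a solver and checking whether the objective keeps decreasing is then the numerical test for which alternative holds.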
Range of a matrix The range of an m × n matrix A is the span of the n columns of A. In other words, for \[ A = [ a_1 \; a_2 \; a_3 \; \ldots \; a_n ] \] where \(a_1 , a_2 , a_3 , \ldots , a_n\) are m-dimensional vectors, \[ range(A) = R(A) = span(\{a_1, a_2, \ldots , a_n \} ) = \{ v \mid v = \sum_{i = 1}^{n} c_i a_i , c_i \in \mathbb{R} \} \] The dimension (number of linearly independent columns) of the range of A is called the rank of A. So if the 6 × 3 matrix B has a 2-dimensional range, then \(rank(B) = 2\). For example, \[C =\begin{pmatrix} 1 & 4 & 1\\ -8 & -2 & 3\\ 8 & 2 & -2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}\] C has a rank of 3, because the columns \(x_1\), \(x_2\) and \(x_3\) are linearly independent. Nullspace The nullspace of an m \(\times\) n matrix A is the set of all n-dimensional vectors that A maps to the m-dimensional zero vector (the vector where every entry is 0). This is often denoted as \[N(A) = \{ v \mid Av = 0 \}\] The dimension of the nullspace of A is called the nullity of A. So if the 6 \(\times\) 3 matrix B has a 1-dimensional nullspace, then \(nullity(B) = 1\). The range and nullspace of a matrix are closely related. In particular, for an m \(\times\) n matrix A, \[\{w \mid w = u + v, u \in R(A^T), v \in N(A) \} = \mathbb{R}^{n}\] \[R(A^T) \cap N(A) = \{0\}\] This leads to the rank--nullity theorem, which says that the rank and the nullity of a matrix sum to the number of columns of the matrix. To put it into symbols: \[A \in \mathbb{R}^{m \times n} \Rightarrow rank(A) + nullity(A) = n\] For example, if B is a 4 \(\times\) 3 matrix and \(rank(B) = 2\), then from the rank--nullity theorem one can deduce that \[rank(B) + nullity(B) = 2 + nullity(B) = 3 \Rightarrow nullity(B) = 1\] Projection The projection of a vector x onto the vector space J, denoted by Proj(x, J), is the vector \(v \in J\) that minimizes \(\vert x - v \vert\).
Often, the vector space J one is interested in is the range of a matrix A, and the norm used is the Euclidean norm. In that case \[Proj(x,R(A)) = \{ v \in R(A) \mid \vert x - v \vert_2 \leq \vert x - w \vert_2 \;\forall w \in R(A) \}\] In other words, \[Proj(x,R(A)) = \operatorname{argmin}_{v \in R(A)} \vert x - v \vert_2\]
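These definitions are easy to check numerically with NumPy (a sketch; the matrix C is the example above, and the rank-2 matrix B is an illustrative choice whose range is the xy-plane in \(\mathbb{R}^3\)):

```python
import numpy as np

C = np.array([[ 1.0,  4.0,  1.0],
              [-8.0, -2.0,  3.0],
              [ 8.0,  2.0, -2.0]])

rank = np.linalg.matrix_rank(C)
nullity = C.shape[1] - rank          # rank--nullity theorem

def proj_onto_range(A, x):
    """Euclidean projection of x onto range(A):
    Proj(x, R(A)) = A c, where c minimizes |x - A c|_2 (least squares)."""
    c, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ c

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
p = proj_onto_range(B, np.array([2.0, 3.0, 4.0]))   # drops the z-component
```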
Math Contents Introduction TeX was designed for ease of typesetting books that contain mathematics. As ConTeXt is built on top of TeX, it inherits all those features. In addition to these, ConTeXt adds a lot of macros to make the typesetting of mathematics easier. Because typesetting mathematics follows different rules than typesetting normal text, TeX uses something called "math mode", where some characters get a different meaning to enable a simple syntax for complicated formulas. Simple Math Typesetting mathematics can be divided into two parts: inline math (mathematical formulas set within ordinary paragraphs as part of the text) and display math (mathematics set on lines by themselves, often with equation numbers). Inline math consists of maths that is typed within a sentence. There are two ways of typing inline math. The TeX way is to surround what you want to type within $ ... $. Thus, the above will be typed as Pythagoras formula, stating $a^2 + b^2 = c^2$ was one of the first trigonometric results ConTeXt also provides an alternative way of typing the same result. Instead of dollars, you can write the material for maths inside \mathematics. Thus, an alternate way to type the above is, Pythagoras formula, stating \mathematics{a^2 + b^2 = c^2} was one of the first trigonometric results Choose the method that suits your style. ((I do not know if there are pros and cons of $..$ vs \mathematics{}. If someone knows, then please elaborate -- aditya )) The famous result (once more) is given by \startformula c^2 = a^2 + b^2. \stopformula This, when typeset, produces the following: Numbering Formulae The famous result (once more) is given by \placeformula \startformula c^2 = a^2 + b^2. \stopformula This, when typeset, produces the following: The \placeformula command is optional, and produces the equation number; leaving it off produces an unnumbered equation.
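For orientation, a minimal complete document combining the inline and display styles above might look like this (a sketch, not taken from the wiki page itself):

```tex
\starttext
Pythagoras' formula, stating $a^2 + b^2 = c^2$, was one of the first
trigonometric results. Stated once more as a numbered display formula:

\placeformula
\startformula
c^2 = a^2 + b^2.
\stopformula
\stoptext
```

Compiling this with `context` produces one paragraph with the inline formula, followed by the numbered display formula.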
Not so Simple Maths ConTeXt's base mathematics support is built on the mathematics support in plain TeX, thus allowing quite complicated formulas. (There are also some additional macros, such as the \text command for text-mode notes within math.) For instance: A more complicated equation: \placeformula \startformula {{\theta_{\text{\CONTEXT}}}^2 \over x+2} = \pmatrix{a_{11}&a_{12}&\ldots&a_{1n}\cr a_{21}&a_{22}&\ldots&a_{2n}\cr \vdots&\vdots&\ddots&\vdots\cr a_{n1}&a_{n2}&\ldots&a_{nn}\cr} \pmatrix{b_1 \cr b_2 \cr \vdots \cr b_n} + \sum_{j=1}^\infty z^j \left( \sum_{\scriptstyle n=1 \atop \scriptstyle n \ne j}^\infty Z_j^n \right) \stopformula which produces ConTeXt provides a wrapper around plain TeX's \pmatrix. The above can be typeset in a more ConTeXt-like way as A more complicated equation: \definemathmatrix[pmatrix][left={\left(\,},right={\,\right)}] \placeformula \startformula {{\theta_{\text{\CONTEXT}}}^2 \over x+2} = \startpmatrix \NC a_{11} \NC a_{12} \NC \ldots \NC a_{1n} \NR \NC a_{21} \NC a_{22} \NC \ldots \NC a_{2n} \NR \NC \vdots \NC \vdots \NC \ddots \NC \vdots \NR \NC a_{n1} \NC a_{n2} \NC \ldots \NC a_{nn} \NR \stoppmatrix \startpmatrix b_1 \NR b_2 \NR \vdots \NR b_n \NR \stoppmatrix + \sum_{j=1}^\infty z^j \left( \sum_{\scriptstyle n = 1 \atop \scriptstyle n \ne j}^\infty Z_j^n \right) \stopformula MathAlignment is covered on a separate page. Sub-Formula Numbering As mentioned above, formulas can be numbered using the \placeformula command. This command (and the related \placesubformula command) has an optional argument which can be used to produce sub-formula numbering. For example: Examples: \placeformula{a} \startformula c^2 = a^2 + b^2 \stopformula \placesubformula{b} \startformula c^2 = a^2 + b^2 \stopformula What's going on here is simpler than it might appear at first glance.
Both \placeformula and \placesubformula produce equation numbers with the optional tag added at the end; the sole difference is that the former increments the equation number first, while the latter does not (and thus can be used for the second and subsequent formulas that use the same formula number but presumably have different tags). This is sufficient for cases where the standard ConTeXt equation numbers suffice, and where only one equation number is needed per formula. However, there are many cases where this is insufficient, and \placeformula defines \formulanumber and \subformulanumber commands, which provide hooks to allow the use of ConTeXt-managed formula numbers with plain TeX equation numbering. These, when used within a formula, simply return the formula number in properly formatted form, as can be seen in this simple example with plain TeX's \eqno. Note that the optional tag is inherited from \placeformula. More examples: \placeformula{c} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \eqno{\formulanumber} \stopformula In order for this to work properly, we need to turn off ConTeXt's automatic formula number placement; thus the \let command to empty \doplaceformulanumber, which must be placed after the start of the formula. In many practical examples, however, this is not necessary; ConTeXt redefines \displaylines and \eqalignno to do this automatically. For more control over sub-formula numbering, \formulanumber and \subformulanumber have an optional argument parallel to that of \placeformula, as demonstrated in this use of plain TeX's \eqalignno, which places multiple equation numbers within one formula. Yet more examples: \placeformula \startformula \eqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula Note that both \formulanumber and \subformulanumber can be used within the same formula, and the formula number is incremented as expected. 
Also, if an optional argument is specified in both \placeformula and \formulanumber, the latter takes precedence.

More examples, for a left-located equation number:

\setupformulas[location=left] \placeformula{d} \startformula \let\doplaceformulanumber\empty c^2 = a^2 + b^2 \leqno{\formulanumber} \stopformula

and

\placeformula \startformula \leqalignno{c^2 &= a^2 + b^2 &\formulanumber{a} \cr a^2 + b^2 &= c^2 &\subformulanumber{b} \cr d^2 &= e^2 &\formulanumber\cr} \stopformula

-- 23:46, 15 Aug 2005 (CEST) Prinse Wang

List of Formulas

You can have a list of the formulas contained in a document by using \placenamedformula instead of \placeformula. Only the formulas written with \placenamedformula are put in the list, so that you can control precisely the content of the list. Example:

\subsubject{List of Formulas} \placelist[formula][criterium=text,alternative=c] \subsubject{Formulas} \placenamedformula[one]{First listed Formula} \startformula a = 1 \stopformula \endgraf \placeformula \startformula a = 2 \stopformula \endgraf \placenamedformula{Second listed Formula}{b} \startformula a = 3 \stopformula \endgraf

Gives:

Other Methods

There are two different math modules on CTAN, nath and amsl, and there is also a new math module in the distribution. ConTeXt now has built-in support for Math_structures. It is also possible to use most LaTeX equations in ConTeXt with a relatively small set of supporting definitions. The "native" ConTeXt way of math is MathML, an application of XML - rather verbose but mighty.
Tue, 25 Jun 2019

I don't have a good catalog in my head of basic theorems of category theory. Every time I try to think about category theory, I get stuck on really basic lemmas like “can I assume that a product !!1×A!! is canonically isomorphic to !!A!!?” Or “Suppose !!f:A→B!! is both monic and epic. Must it be an isomorphism?” Then I get sidetracked trying to prove those lemmas, or else I assume they are true, go ahead, and even if I'm right I start to lose my nerve.

So for years I've wanted to make up a book of every possible basic theorem of category theory, in order from utterly simple to most difficult, and then prove every theorem. Then I'd know all the answers, or if I didn't, I could just look in the book. There would be a chapter on products with a long list of plausible-seeming statements: and each one would either be annotated, Snopes-style, with “

On Sunday I thought I'd give it a shot, and I started with: This seems very plausible, because how could the product possibly work if its left-hand component couldn't contain any possible element of !!A!!? I struggled with this for rather a long time but I just got more and more stuck. To prove that !!π_1!! is an epimorphism I need to have !!g,h:A→C!! and then show that !!g ∘ π_1 = h ∘ π_1!! only when !!g=h!!. But !!π_1!! being a projection arrow doesn't help with this, because products are all about arrows

And there's no hope that I could get any leverage by introducing some arrows into !!A!! and !!B!!, because there might not be any.

I eventually gave up and looked it up online, and discovered that, in fact, the claim is not true in general. It's not even true in

So, uh, victory, I guess? I set out to prove something that is false, so

And I can make lemonade out of the lemons. I couldn't prove the theorem, and my ideas about why not were basically right.
Now I ought to be able to look carefully at what additional tools I might be able to use to make the proof go through anyway, and those then become part of the statement of the theorem, which then would become something like “

Mon, 24 Jun 2019

I just finished I enjoyed it a lot. It has a story, but at the center instead of a big problem there is a character, Rose Gann. (To

The other side of the book is that it is a very pointed satire of the social-climbing literary circles in which Maugham traveled. Another such satire is his short story

The book is quasi-autobiographical, the way

Parts of

Davies wrote an essay about Maugham, which I suppose I've read, but I don't remember what he said.

Thu, 20 Jun 2019

A couple of days ago my sleepy brain mashed up Clarke's Law: and Hanlon's Razor: to produce these words of wisdom:

Any sufficiently advanced software is indistinguishable from malice.

(Why yes, I

This sounded like something Bryan Horstmann-Allen would have said, so with his permission, I am naming it after him. BDHA's Law? BDHA's Razor? No!

Mon, 17 Jun 2019

My big work project is called “Greenlight”. It's a Git branch merging service. After you've pushed a remote branch, say

Greenlight analyzes the branch to see if it touches any sensitive code that requires signoffs. If so it contacts the correct people on Slack, and asks them to review it. Once they have approved it, Greenlight rebases the branch onto the current

A user, Locksher, complained last week that it didn't do what he had expected. He had a Git

With Greenlight, this check wasn't done, because Locksher never pushed to

Locksher asked if it was possible to have Greenlight “respect local hooks”. Once I understood what he wanted, my first suggestion was that he wrap the

and get exactly the desired behavior. I said that if Locksher wanted to implement this, I would include it in the standard client, or alternatively I would open a ticket to implement it myself, eventually.
Locksher suggested instead that the

I didn't have time then to answer in detail, so I just said:

Here's what I said to him once I did have time to answer in detail:

I will elaborate a little on the main items 1–2, that different people might have different ideas about what it means to “respect” a local hook. Consider Locksher's specific request, for

So then I would have to add an escape hatch for Zubi, so that everyone who

Nah.

Sat, 08 Jun 2019

I have pondered category theory periodically for the past 35 years, but not often enough to really stay comfortable with it. Today I was pondering again. I wanted to prove that !!1×A \cong A!! and I was having trouble. I eventually realized my difficulty: my brain had slipped out of category theory mode so that the theorem I was trying to prove did not even make sense.

In most of mathematics, !!1\times A!! would denote some specific entity and we would then show that that entity had a certain property. For example, in set theory we might define !!1\times A!! to be some set, say the set of all Kuratowski pairs !!\langle \varnothing, a\rangle!! where !!a\in A!!:

$$ 1×A =_{\text{def}} \{ z : \exists a\in A : z = \{\{\varnothing\}, \{\varnothing, a\}\} \} $$

and then we would explicitly construct a bijection !!f:A\leftrightarrow 1×A!!:

$$ f(a) = \{\{\varnothing\}, \{\varnothing, a\}\}$$

In category theory, this is not what we do. Everything is less concrete. !!\times!! looks like an operator, one that takes two objects and yields a third. It is not. !!1\times A!! does not denote any particular entity with any particular construction. (Nor does !!1!!, for that matter.) Instead, it denotes an unspecified entity, which happens to have a certain universal property, perhaps just one of many such entities with that property, and there is no canonical representative of the class. It's a mistake to think of it as a single thing, and it's also a mistake to ask the question the way I did ask it. You can't show that !!1×A!!
has any specific property, because it's not a specific thing. All you can do is show that anything with the one required universal property must also have the other property. We should rephrase the question like this:

Maybe a better phrasing is:

The notation is still misleading, because it looks like !!1×A!! denotes the result of some operation, and it isn't. We can do a little better:

That's it, that's the real theorem.

It seems at first to be more difficult — where do we start? But it's actually easier! Because now it's enough to simply prove that !!A!! itself is a product of !!1!! and !!A!!, which is easily done: its projection morphisms are evidently the unique morphism !!A\to 1!! and !!{\text{id}}_A!!. And by a previous theorem that all products are isomorphic, any other product, such as !!B!!, must be isomorphic to this one, which is !!A!! itself.

(We can similarly imagine that any theorem that mentions !!1!! is preceded by the phrase “Let !!1!! be some terminal object.”)

Fri, 07 Jun 2019

A little while ago I wrote:

I just remembered that good mirror technology is perhaps too recent for disco balls to have been at Versailles. Hmmm.

Early mirrors were made of polished metal or even stone, clearly unsuitable. Back-silvering of glass wasn't invented until the mid-19th century. Still, a disco ball is a very forgiving application of mirrors. For a looking-glass you want a large, smooth, flat mirror with no color distortion. For a disco ball you don't need any of those things. Large sheets of flat glass were unavailable before the invention of float glass at the end of the 19th century, but for a disco ball you don't need plate glass, just little shards, leftovers even. The 17th century could produce mirrors by gluing metal foil to the back of a piece of glass, so I wonder why they didn't. They wouldn't have been able to spotlight it, but they certainly could have hung it under an orbiculum. Was there a technological limitation, or did nobody happen to think of it?
[ Addendum: I think the lack of good spotlighting is the problem here. ]

[ Addendum: Apparently, nobody but me has ever used the word “orbiculum”. I don't know how I started using it, but it seems that the correct word for what I meant is oculus. ]

https://en.wikipedia.org/wiki/Oculus
Difference between revisions of "Degree of irreducible representation divides order of group"

Latest revision as of 13:03, 14 October 2018

This article gives the statement, and possibly proof, of a constraint on numerical invariants that can be associated with a finite group. It states a result of the form that one natural number divides another: specifically, the degree of an irreducible linear representation divides the order of the group.

This fact is related to: linear representation theory

Statement

Let G be a finite group and φ an irreducible linear representation of G over an algebraically closed field of characteristic zero (or, more generally, over any splitting field of characteristic zero for G). Then, the degree of φ divides the order of G.
Related facts

Other facts about degrees of irreducible representations

Further information: Degrees of irreducible representations

* Degree of irreducible representation divides index of center
* Degree of irreducible representation divides index of abelian normal subgroup
* Order of inner automorphism group bounds square of degree of irreducible representation
* Number of irreducible representations equals number of conjugacy classes
* Sum of squares of degrees of irreducible representations equals order of group

Similar fact about irreducible projective representations

Breakdown for a field that is not algebraically closed

Let G be the cyclic group of order three and take the field to be R, the field of real numbers. Then, there are two irreducible representations of G over R: the trivial representation, and a two-dimensional representation given by the action by rotation by multiples of 2π/3. The two-dimensional representation has degree 2, and this does not divide the order of the group, which is 3.

We still have the following results:

* Degree of irreducible representation over reals divides twice the group order
* Degree of irreducible representation over any field divides product of order and Euler totient function of exponent
* Degree of irreducible representation of nontrivial finite group is strictly less than order of group
* Maximum degree of irreducible real representation is at most twice maximum degree of irreducible complex representation

Facts used

The table below lists key facts used directly and explicitly in the proof. Fact numbers as used in the table may be referenced in the proof. This table need not list facts used indirectly, i.e., facts that are used to prove these facts, and it need not list facts used implicitly through assumptions embedded in the choice of terminology and language.

Fact 1 (Character orthogonality theorem). The part relevant for us is: for an irreducible representation with character χ over a splitting field of characteristic zero, Σ_{g ∈ G} χ(g) χ(g)* = |G|, where * denotes complex conjugation. Used in Step (1): equation setup that we then tinker with.

Fact 2 (Size-degree-weighted characters are algebraic integers). For an irreducible linear representation φ of a finite group G over an algebraically closed field of characteristic zero (or more generally, over any splitting field), with character χ, a conjugacy class c in G and an element g ∈ c, the number |c| χ(g)/χ(1) (with 1 denoting the identity element of the group) is an algebraic integer. Used in Step (3) to show certain parts of an expression are algebraic integers; relies on algebraic number theory and linear representation theory.

Fact 3 (Characters are algebraic integers). Used in Step (4) to show certain parts of an expression are algebraic integers; relies on basic linear representation theory.

Proof

Given: A finite group G, an irreducible linear representation φ of G over a splitting field of characteristic zero for G, with character χ and degree d. Note that d equals χ(1), the value of χ at the identity element of G.

To prove: d divides the order of G.

Proof:

1. Σ_c |c| χ(c) χ(c)* = |G|, where the sum is over all conjugacy classes c of G, and χ(c) denotes the value of χ at any element of c. [By Fact (1), since φ is irreducible over a splitting field of characteristic zero with character χ. The factor |c| appears because the |c| elements of each conjugacy class contribute equal terms in the full statement of the orthogonality theorem.]

2. Σ_c (|c| χ(c)/d) χ(c)* = |G|/d. [Divide both sides of Step (1) by d.]

3. Each |c| χ(c)/d is an algebraic integer. [By Fact (2), since φ is irreducible over a splitting field of characteristic zero with character χ.]

4. Each χ(c)* is an algebraic integer. [By Fact (3), since χ is a character, and the complex conjugate of an algebraic integer is also an algebraic integer.]

5. Σ_c (|c| χ(c)/d) χ(c)* is an algebraic integer. [Steps (3), (4): the set of algebraic integers forms a ring, so a finite sum of products of algebraic integers is an algebraic integer.]

6. |G|/d is an algebraic integer. [Steps (2), (5): by Step (5), the left side of Step (2) is an algebraic integer, hence so is the right side.]

7. |G|/d is a positive integer, so d divides |G|. [Step (6): both |G| and d are positive integers, hence their quotient is a positive rational number. The only way a rational number can be an algebraic integer is if it is an integer, hence the conclusion.]
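As a quick numerical illustration (not part of the proof), both the divisibility constraint and the sum-of-squares fact listed above can be checked against the well-known complex irreducible character degrees of a few small groups. The degree lists below are standard values entered by hand, not computed:

```python
# Known complex irreducible character degrees of some small groups,
# as (group order, list of irreducible degrees).
groups = {
    "S3": (6,  [1, 1, 2]),
    "D4": (8,  [1, 1, 1, 1, 2]),
    "Q8": (8,  [1, 1, 1, 1, 2]),
    "A4": (12, [1, 1, 1, 3]),
    "S4": (24, [1, 1, 2, 3, 3]),
}

for name, (order, degrees) in groups.items():
    # Every irreducible degree divides the group order ...
    assert all(order % d == 0 for d in degrees), name
    # ... and the squares of the degrees sum to the group order.
    assert sum(d * d for d in degrees) == order, name
    print(name, "ok")
```

The second assertion is the "sum of squares of degrees equals order of group" fact from the related-facts list, which also shows each degree is at most the square root of the order.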
Difference between revisions of "Moser-lower.tex"

Revision as of 01:47, 21 June 2009

\section{Lower bounds for the Moser problem}\label{moser-lower-sec}

In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ 2 entries (see Section \ref{notation-sec} for definition) is a Moser set, so that \begin{equation}\label{cin} c'_{n,3}\geq \vert S_{i,n}\vert \end{equation} holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq (C-o(1)) 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$; in fact \eqref{cin} gives \eqref{cpn3} with $C := \sqrt{\frac{9}{4\pi}}$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$.
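As an illustrative check (not from the paper's source), the sphere sizes behind \eqref{cin} are simple to compute: $S_{i,n}$ fixes which $n-i$ coordinates equal 2 and lets the remaining $i$ coordinates be 1 or 3. The following Python sketch reproduces the values $12, 24, 80, 240$ quoted above; the function names are ours:

```python
from math import comb

def sphere_size(n, i):
    # |S_{i,n}|: choose the n-i coordinates equal to 2, i.e. comb(n, i)
    # placements of the remaining i coordinates, each of which is 1 or 3.
    return comb(n, i) * 2**i

def komlos_bound(n):
    # The bound \eqref{cin} with the choice i = floor(2n/3).
    return sphere_size(n, (2 * n) // 3)

print([komlos_bound(n) for n in range(3, 7)])  # [12, 24, 80, 240]
```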
Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or semispheres, or by applying elementary results from coding theory. Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and that $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$. As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound \begin{equation}\label{cn3-low} c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}. \end{equation} It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$. Asymptotically, Stirling's formula and \eqref{cn3-low} then give the lower bound \eqref{cpn3} with $C = \frac{3}{2} \times \sqrt{\frac{9}{4\pi}}$, which is asymptotically $50\%$ better than the bound \eqref{cin}. The work of Chv\'{a}tal \cite{chvatal1} already contained a refinement of this idea, which we here translate into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$ and minimal distance $d$. Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right).
\end{equation} With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }} Generally, $A(n,1)=2^n, A(n,2)=2^{n-1}, A(n-1,2e-1)=A(n,2e), A(n,d)=2$, if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html \textbf{include to references? or other book with explicit values of $A(n,d)$ } For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344.
\end{array} \] With $k=3$ \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With $k=4$ \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\] It should be pointed out that these bounds are even numbers, so that $c'_{4,3}=43$ shows that one cannot generally expect that this lower bound gives the optimum. The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that using Stirling's formula and explicit bounds on $A(n,d)$ the best possible value known to date of the constant $C$ in equation \eqref{cpn3} can be worked out, but we refrain from doing this here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$, Chv\'{a}tal \cite{chvatal1} proved that the expression on the right hand side of \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$, so that the refinement described above gains a constant factor over the initial construction only. For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}.
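The bounds \eqref{cn3-low} and \eqref{cnchvatal} can be recomputed mechanically. The following Python sketch (with a partial table of $A(n,d)$ values copied from above; the function names are ours) reproduces the numbers quoted in this subsection:

```python
from math import comb

# Semisphere bound \eqref{cn3-low}: c'_{n,3} >= max_i binom(n+1, i) * 2^(i-1).
def semisphere_bound(n):
    return max(comb(n + 1, i) * 2 ** (i - 1) for i in range(1, n + 1))

assert [semisphere_bound(n) for n in range(2, 7)] == [6, 16, 40, 120, 336]

# A partial table of A(n, d) values, copied from the table above.
A = {(1, 1): 2, (2, 1): 4, (2, 2): 2, (3, 1): 8, (3, 2): 4, (3, 3): 2,
     (4, 1): 16, (4, 2): 8, (4, 3): 2, (5, 1): 32, (5, 2): 16, (5, 3): 4,
     (6, 1): 64, (6, 2): 32, (6, 3): 8, (7, 3): 16, (7, 4): 8}

# Chvatal's bound \eqref{cnchvatal} for a fixed k.
def chvatal_bound(n, k):
    return sum(comb(n, j) * A[(n - j, k - j + 1)] for j in range(k + 1))

assert chvatal_bound(4, 2) == 42
assert chvatal_bound(5, 2) == 124
assert chvatal_bound(6, 2) == 344
assert chvatal_bound(7, 3) == 960
```

Note that `semisphere_bound` takes a maximum over all $i$ rather than plugging in a closed-form optimal $i$, which sidesteps any rounding subtlety in the choice of $i$.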
A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following: Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$: \begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize} This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above. A modification of the construction in \eqref{cn3-low} leads to a slightly better lower bound. Observe that if $B \subset \Delta_n$, then the set $A_B := \bigcup_{\vec a \in B} \Gamma_{a,b,c}$ is a Moser set as long as $B$ does not contain any ``isosceles triangles'' $(a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c)$ for any $r,s \geq 0$ not both zero; in particular, $B$ cannot contain any ``vertical line segments'' $(a+r,b,c+r), (a,b+2r,c)$. An example of such a set is provided by selecting $0 \leq i \leq n-3$ and letting $B$ consist of the triples $(a, n-i, i-a)$ when $a \neq 0 \mod 3$, $(a,n-i-1,i+1-a)$ when $a \neq 1 \mod 3$, $(a,n-i-2,i+2-a)$ when $a=0 \mod 3$, and $(a,n-i-3,i+3-a)$ when $a=2 \mod 3$.
Asymptotically, this set occupies about two thirds of the spheres $S_{n,i}$, $S_{n,i+1}$ and one third of the spheres $S_{n,i+2}, S_{n,i+3}$ and (setting $i$ close to $n/3$) gives a lower bound \eqref{cpn3} with $C = 2 \times \sqrt{\frac{9}{4\pi}}$, which is thus superior to the previous constructions. An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}: \begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline n & lower bound & n & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_n$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure} More complete data, including the list of optimisers, can be found at {\tt http://abel.math.umu.se/~klasm/Data/HJ/}. This indicates that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. The lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}. \begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure} Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$.
If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set, as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=10$ case, the set $$B = \{(0 0 10),(0 2 8 ),(0 3 7 ),(0 4 6 ),(1 4 5 ),(2 1 7 ),(2 3 5 ), (3 2 5 ),(3 3 4 ),(3 4 3 ),(4 4 2 ),(5 1 4 ),(5 3 2 ),(6 2 2 ), (6 3 1 ),(6 4 0 ),(8 1 1 ),(9 0 1 ),(9 1 0 ) \}$$ generates the lower bound $c'_{10,3} \geq 24786$ given above (and, up to reflection $a \leftrightarrow c$, is the only such set that does so); but by adding the following twelve elements from $\Gamma_{5,0,5}$ one can increase the lower bound slightly to $24798$: $1111133333$, $1111313333$, $1113113333$, $1133331113$, $1133331131$, $1133331311$, $3311333111$, $3313133111$, $3313313111$, $3331111133$, $3331111313$, $3331111331$. However, we have been unable to locate a lower bound which is asymptotically better than \eqref{cpn3}. Indeed, any method based purely on the $A_B$ construction cannot do asymptotically better than the previous constructions: \begin{proposition} Let $B \subset \Delta_n$ be such that $A_B$ is a Moser set. Then $|A_B| \leq (2 \sqrt{\frac{9}{4\pi}} + o(1)) \frac{3^n}{\sqrt{n}}$. \end{proposition} \begin{proof} By the previous discussion, $B$ cannot contain any pair of the form $(a,b+2r,c), (a+r,b,c+r)$ with $r>0$. In other words, for any $-n \leq h \leq n$, $B$ can contain at most one triple $(a,b,c)$ with $c-a=h$. From this and \eqref{cn3}, we see that $$ |A_B| \leq \sum_{h=-n}^n \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b!
c!}.$$ From the Chernoff inequality (or the Stirling formula computation below) we see that $\frac{n!}{a! b! c!} \leq \frac{1}{n^{10}} 3^n$ unless $a,b,c = n/3 + O( n^{1/2} \log^{1/2} n )$, so we may restrict to this regime, which also forces $h = O( n^{1/2}\log^{1/2} n)$. If we write $a = n/3 + \alpha$, $b = n/3 + \beta$, $c = n/3+\gamma$ and apply Stirling's formula $n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n}$, we obtain $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) - (\frac{n}{3}+\beta) \log (1 + \frac{3\beta}{n} ) - (\frac{n}{3}+\gamma) \log (1 + \frac{3\gamma}{n} ) ).$$ From Taylor expansion one has $$ -(\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) = -\alpha - \frac{3}{2} \frac{\alpha^2}{n} + o(1)$$ and similarly for $\beta,\gamma$; since $\alpha+\beta+\gamma=0$, we conclude that $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{2n} (\alpha^2+\beta^2+\gamma^2) ).$$ If $c-a=h$, then $\alpha^2+\beta^2+\gamma^2 = \frac{3\beta^2}{2} + \frac{h^2}{2}$. Thus we see that $$ \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!} \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{4n} h^2 ).$$ Using the integral test, we thus have $$ |A_B| \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \int_\R \exp( - \frac{3}{4n} x^2 )\ dx.$$ Since $\int_\R \exp( - \frac{3}{4n} x^2 )\ dx = \sqrt{\frac{4\pi n}{3}}$, we obtain the claim. \end{proof} \subsection{Higher $k$ values} One can consider subsets of $[k]^n$ that contain no geometric lines. Section \ref{moser-lower-sec} has considered the case $k=3$. Let $c'_{n,k}$ be the greatest number of points in $[k]^n$ with no geometric line. For example, $c'_{n,3} = c'_n$. We have the following lower bounds: $c'_{n,4} \ge \binom{n}{n/2}2^n$.
The set of points with $a$ $1$s, $b$ $2$s, $c$ $3$s and $d$ $4$s, where $a+d$ has the constant value $n/2$, contains no geometric line, because the points at the ends of a geometric line have larger values of $a$ or $d$ than the points in its middle. One can show a lower bound that, asymptotically, is twice as large as $\binom{n}{n/2}2^n$. Take all points with $a$ $1$s, $b$ $2$s, $c$ $3$s and $d$ $4$s for which either $a+d = q$ or $q-1$ and $a$ and $b$ have the same parity, or $a+d = q-2$ or $q-3$ and $a$ and $b$ have opposite parity (here $q$ is a suitable fixed value, close to $n/2$). This includes half the points of four adjacent layers, and therefore may include $(1+o(1))\binom{n}{n/2}2^{n+1}$ points. We also have a DHJ(3)-like lower bound for $c'_{n,5}$, namely $c'_{n,5} \geq 5^{n-O(\sqrt{\log n})}$. Consider points with $a$ $1$s, $b$ $2$s, $c$ $3$s, $d$ $4$s and $e$ $5$s. For each point, take the value $a+e+2(b+d)+3c$. The first three points in any geometric line give values that form an arithmetic progression of length three. Select a set of integers with no arithmetic progression of length 3, and select all points whose value belongs to that set; there will be no geometric line among those points. By Behrend's construction, it is possible to choose these points with density $\exp(-O(\sqrt{\log n}))$.
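For very small $n$ the values in Figure \ref{nlow-moser} can be checked by exhaustive search. The following script is an illustrative aside, not part of the constructions above: it enumerates geometric lines in $[3]^n$ via templates with wildcards ({\tt 'x'} running $1,2,3$ and {\tt 'y'} running $3,2,1$) and then brute-forces the largest line-free set.

```python
from itertools import product

def geometric_lines(n):
    """All geometric lines in [3]^n, each as a frozenset of 3 points.

    A line is specified by a template over {1, 2, 3, 'x', 'y'} containing
    at least one wildcard; 'x' runs through 1,2,3 while 'y' runs 3,2,1.
    """
    lines = set()
    for tpl in product((1, 2, 3, 'x', 'y'), repeat=n):
        if 'x' not in tpl and 'y' not in tpl:
            continue
        pts = frozenset(
            tuple(u if c == 'x' else v if c == 'y' else c for c in tpl)
            for u, v in ((1, 3), (2, 2), (3, 1)))
        lines.add(pts)
    return lines

def moser(n):
    """c'_{n,3} by exhaustive search; feasible only for n <= 2."""
    pts = list(product((1, 2, 3), repeat=n))
    lines = geometric_lines(n)
    best = 0
    for mask in range(1 << len(pts)):
        S = {p for i, p in enumerate(pts) if mask >> i & 1}
        if len(S) > best and not any(l <= S for l in lines):
            best = len(S)
    return best
```

This confirms $c'_{1,3}=2$ and $c'_{2,3}=6$; beyond $n=2$ the $2^{3^n}$ subsets are out of reach for brute force.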
What constitutes a proof in a system like this is a derivation, which is a tree of rule applications. The above translation function is defined by (structural) recursion over that tree. Note that this is why the premises in the rules are labeled with $\delta$; it is intended to be mnemonic for "derivation". I'm guessing, but $\mu$ and $\nu$ appear to need to be sequences of lambda terms, while $\phi$ is a specific lambda term (or, equivalently, a single-element sequence of lambda terms). $\Gamma$ is a sequence of formulas, and $\mu$ has a lambda term for each element of that sequence. Constructs like $x,\mu$ correspond to sequence extension, extending the sequence $\mu$ with the (particular) lambda term $x$ on the left, and symmetrically for $\mu,x$. The parenthesis notation, e.g. $\Gamma(C)$ and $\mu(\dots)$, corresponds to (consistently) operating on an element at an arbitrary location in the sequence. So, for example, an instance of the $\bullet R$ case would look like:$$\left|\cfrac{\cfrac{\delta}{X,A,B,Y,Z\Rightarrow W}}{{X,A\bullet B,Y,Z\Rightarrow W}}\right|_{t_X,t_{AB},t_Y,t_Z} = \left|\cfrac{\delta}{X,A,B,Y,Z\Rightarrow W}\right|_{t_X,\pi_1(t_{AB}),\pi_2(t_{AB}),t_Y,t_Z}$$ A translation of a complete derivation would look like: $$\begin{align}\left|\cfrac{\cfrac{A\Rightarrow A \qquad B \Rightarrow B}{A,B\Rightarrow A\bullet B}}{\cfrac{A\Rightarrow (A\bullet B)/B}{\Rightarrow ((A\bullet B)/B)/A}}\right|& = \lambda a\left|\cfrac{\cfrac{A\Rightarrow A \qquad B \Rightarrow B}{A,B\Rightarrow A\bullet B}}{A\Rightarrow (A\bullet B)/B}\right|_a \\& = \lambda a\lambda b\left|\cfrac{A\Rightarrow A \qquad B \Rightarrow B}{A,B\Rightarrow A\bullet B}\right|_{a,b} \\& = \lambda a\lambda b(\left|A\Rightarrow A\right|_a, \left|B \Rightarrow B\right|_b) \\& = \lambda a\lambda b(a, b)\end{align}$$
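To make the recursion concrete, here is a toy Python rendering of the translation for just the axiom, $\bullet R$, and $/R$ cases. The encoding of derivations as nested tuples and the string-based term syntax are my own inventions for illustration, not notation from the question.

```python
from itertools import count

fresh = (f"x{i}" for i in count())  # supply of fresh variable names

def translate(deriv, env):
    """Map a derivation tree to a lambda term (as a string).

    deriv is a nested tuple: ('ax',) for an axiom A => A;
    ('pairR', left, right, k) for the (bullet R) rule, where the first k
    antecedent formulas go to the left premise; ('slashR', body) for (/R).
    env holds one term per antecedent formula, like the mu subscript above.
    """
    tag = deriv[0]
    if tag == 'ax':                 # |A => A|_x = x
        (x,) = env
        return x
    if tag == 'pairR':              # |G,D => A.B|_{mu,nu} = (|G=>A|_mu, |D=>B|_nu)
        _, left, right, k = deriv
        return f"({translate(left, env[:k])}, {translate(right, env[k:])})"
    if tag == 'slashR':             # |G => C/B|_mu = lam b. |G,B => C|_{mu,b}
        _, body = deriv
        v = next(fresh)
        return f"lam {v}. {translate(body, env + [v])}"
    raise ValueError(f"unknown rule {tag}")

# The worked derivation of => ((A.B)/B)/A from above:
d = ('slashR', ('slashR', ('pairR', ('ax',), ('ax',), 1)))
```

Running `translate(d, [])` yields `lam x0. lam x1. (x0, x1)`, matching the $\lambda a\lambda b(a, b)$ computed by hand above.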
This category contains definitions related to Power Series. Related results can be found in Category:Power Series. Let $\xi \in \R$ be a real number. Let $\sequence {a_n}$ be a sequence in $\R$. The series $\displaystyle \sum_{n \mathop = 0}^\infty a_n \paren {x - \xi}^n$, where $x \in \R$ is a variable, is called a power series in $x$ about the point $\xi$. Let $\xi \in \C$ be a complex number. Let $\sequence {a_n}$ be a sequence in $\C$. The series $\displaystyle \sum_{n \mathop = 0}^\infty a_n \paren {z - \xi}^n$, where $z \in \C$ is a variable, is called a (complex) power series in $z$ about the point $\xi$.
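As a quick illustration of the definition (my own aside, not part of the category page), the partial sums of a power series about $\xi$ are easy to evaluate numerically; taking $a_n = 1/n!$ and $\xi = 0$ recovers the series for $e^x$.

```python
import math

def partial_sum(a, xi, x, N):
    """Partial sum sum_{n=0}^{N-1} a(n) * (x - xi)^n of a power series about xi."""
    return sum(a(n) * (x - xi) ** n for n in range(N))

# a_n = 1/n! about xi = 0 gives the exponential series; evaluate at x = 1
approx = partial_sum(lambda n: 1 / math.factorial(n), 0.0, 1.0, 30)
```

With 30 terms the value agrees with $e$ to within floating-point accuracy.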
I would like to know if we can model a binary outcome with time series predictors. For example, let's say $Y$ is binary, and $X_1, X_2, X_3,\ldots,X_n$ are historical snapshots of the same predictor variable over time periods $1,\ldots,n$. I am interested in predicting $Y$ while accounting for the autocorrelation among $X_1,\ldots,X_n$ and any seasonality among $X_1,\ldots,X_n$. Your question sounds very much like you are interested in discrete time event history analysis (aka discrete time survival analysis, aka a logit hazard model) to answer the question of whether and when an event will occur. For example, equation 1 gives the logit hazard, where discrete time periods up to period $T$ are indicated by $d_{1}, \dots, d_{T}$, and you may condition your model on $p$ predictors $X_{1}, \dots, X_{p}$. This gives you a hazard estimate as in equation 2. These equations specify a conditional hazard function with a fully discrete parameterization of time, although you could instead specify a conditional hazard function that is constant over time, or is a linear or polynomial function of time period, or even a hybrid of polynomial functions of period plus some discrete time indicators. Your predictors can be constant over time or time-varying, so I see no reason why you could not also include lagged or differenced functions of the predictors to model autocorrelation. $\mathrm{logit}\left(h\left(t,{X_{1t},\dots,X_{pt}}\right)\right) = \alpha_{1}d_{1} + \dots + \alpha_{T}d_{T} + \beta_{1}X_{1t} + \dots + \beta_{p}X_{pt}$ $\hat{h}\left(t,{X_{1t},\dots,X_{pt}}\right) = \frac{e^{\hat{\alpha}_{1}d_{1} + \dots + \hat{\alpha}_{T}d_{T} + \hat{\beta}_{1}X_{1t} + \dots + \hat{\beta}_{p}X_{pt}}}{1 + e^{\hat{\alpha}_{1}d_{1} + \dots + \hat{\alpha}_{T}d_{T} + \hat{\beta}_{1}X_{1t} + \dots + \hat{\beta}_{p}X_{pt}}}$ One need not use a logit hazard model (indeed one could use probit, complementary log-log, robit, etc. binomial link functions).
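As a small numeric sketch of equation 2 (these are my own helper functions, not from any package), once $\alpha$ and $\beta$ are in hand the hazard and the implied survival probability are straightforward:

```python
import math

def hazard(t, x, alpha, beta):
    """Logit-hazard of equation 2: probability the event occurs in period t
    (given survival so far), with per-period intercepts alpha and
    predictor values x observed at period t."""
    eta = alpha[t - 1] + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-eta))

def survival(T, xs, alpha, beta):
    """Probability of surviving (no event) through period T, with xs[t-1]
    holding the (possibly time-varying) predictor values for period t."""
    p = 1.0
    for t in range(1, T + 1):
        p *= 1.0 - hazard(t, xs[t - 1], alpha, beta)
    return p
```

In practice $\hat\alpha$ and $\hat\beta$ come from an ordinary logistic regression fit to a person-period data set, with one row per subject per period at risk.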
If you are using Stata, see also the dthaz package by typing net describe dthaz, from(https://alexisdinno.com/stata).

References

Singer, J. and Willett, J. (1993). It’s about time: Using discrete-time survival analysis to study duration and the timing of events. Journal of Educational and Behavioral Statistics, 18(2):155–195.

Singer, J. D. and Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. Oxford University Press, New York, NY.
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?

1. Reflexivity holds.
2. For any \(a, b, c \in \mathbb{R}\), \(a \le b\) and \(b \le c\) implies \(a \le c\).
3. For any \(a, b \in \mathbb{R}\), \(a \le b\) and \(b \le a\) implies \(a = b\).
4. For any \(a, b \in \mathbb{R}\), we have either \(a \le b\) or \(b \le a\).

So, yes.

Perhaps, due to our interest in things categorical, we can enjoy seeing (instead of via Cauchy sequence methods) the order of the (extended) real line as the [Dedekind–MacNeille completion of the rationals](https://en.wikipedia.org/wiki/Dedekind%E2%80%93MacNeille_completion#Examples). Matthew has told us interesting things about it [before](https://forum.azimuthproject.org/discussion/comment/16714/#Comment_16714). Hausdorff, on his part, in the book I mentioned [here](https://forum.azimuthproject.org/discussion/comment/16154/#Comment_16154), [says](https://books.google.es/books?id=M_skkA3r-QAC&pg=PA85&dq=each+everywhere+dense+type&hl=en&sa=X&ved=0ahUKEwjLkJao-9DaAhWD2SwKHVrkBcIQ6AEIKTAA#v=onepage&q=each%20everywhere%20dense%20type&f=false) that any total order which is dense and has no \((\omega,\omega^*)\) [gaps](https://en.wikipedia.org/wiki/Hausdorff_gap) has the real line embedded in it. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the [hyperreal numbers](https://en.wikipedia.org/wiki/Hyperreal_number) give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)

That's an interesting question, Jonathan.

Jonathan Castello wrote:

> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can’t speak to the gaps condition though, and it’s just plausible that they’re isomorphic at the level of mere posets rather than ordered fields.)

In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited. First, we can observe that \(|\mathbb{R}| = |^\ast \mathbb{R}|\). This is because \(^\ast \mathbb{R}\) embeds \(\mathbb{R}\) and is constructed from countably infinitely many copies of \(\mathbb{R}\) by taking a [quotient algebra](https://en.wikipedia.org/wiki/Quotient_algebra) modulo a free ultrafilter. We have been talking about quotient algebras and filters in a couple of other threads. Next, observe that all [unbounded dense linear orders](https://en.wikipedia.org/wiki/Dense_order) of cardinality \(\aleph_0\) are isomorphic. This is due to a rather old theorem credited to Georg Cantor. Next, apply the [Morley categoricity theorem](https://en.wikipedia.org/wiki/Morley%27s_categoricity_theorem). From this we have that all unbounded dense linear orders with cardinality \(\kappa \geq \aleph_0\) are isomorphic. This is referred to in model theory as \(\kappa\)-categoricity.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.

**Puzzle MD 1:** Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.

Hi Matthew, nice application of the categoricity theorem! One question if I may.
You said:

> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.

But in my understanding the lattice and poset structure is inter-translatable as in [here](https://en.wikipedia.org/wiki/Lattice_(order)#Connection_between_the_two_definitions). Can two lattices be isomorphic and their associated posets not?

(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)

> Can two lattices be isomorphic and their associated posets not?

If two lattices are isomorphic preserving *infima* and *suprema*, i.e. *limits*, then they are order isomorphic. The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive. From model theory we have two maps \(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \) and \(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \) such that:

- if \(x \leq_{\mathbb{R}} y\) then \(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\)
- if \(p \leq_{^\ast \mathbb{R}} q\) then \(\psi(p) \leq_{\mathbb{R}} \psi(q)\)
- \(\psi(\phi(x)) = x\) and \(\phi(\psi(p)) = p\)

Now consider \(\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\). The hyperreals famously violate the [Archimedean property](https://en.wikipedia.org/wiki/Archimedean_property). Because of this, \(\bigwedge_{^\ast \mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\) does not exist. On the other hand, if we consider \( \bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\), that *does* exist by the completeness of the real numbers (as it is bounded below by \(\psi(0)\)).
Hence $$\bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\right)$$ So \(\psi\) *cannot* be a complete lattice homomorphism, even though it is part of an order isomorphism. However, just to complicate matters, I believe that \(\phi\) and \(\psi\) are a mere *lattice* isomorphism, preserving finite meets and joins.
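Puzzle MD 1 invites Cantor's back-and-forth argument, which can even be run mechanically. In this sketch (my own code, not from the thread) both orders are \(\mathbb{Q}\) under two different enumerations; `partner` uses density and unboundedness to choose images, so the finite matching stays order-preserving at every stage:

```python
import random
from fractions import Fraction

def partner(x, matched):
    """Pick an image for x consistent with the finite partial isomorphism
    `matched` (a list of (a, b) pairs), using density/unboundedness of Q."""
    lo = max((b for a, b in matched if a < x), default=None)
    hi = min((b for a, b in matched if a > x), default=None)
    if lo is None and hi is None:
        return Fraction(0)
    if lo is None:
        return hi - 1          # the order is unbounded below
    if hi is None:
        return lo + 1          # the order is unbounded above
    return (lo + hi) / 2       # the order is dense

def back_and_forth(enum_a, enum_b, rounds):
    pairs = []
    for _ in range(rounds):
        a = next(x for x in enum_a if all(x != p for p, _ in pairs))   # "forth"
        pairs.append((a, partner(a, pairs)))
        b = next(y for y in enum_b if all(y != q for _, q in pairs))   # "back"
        pairs.append((partner(b, [(q, p) for p, q in pairs]), b))
    return sorted(pairs)

# two different enumerations of (a chunk of) Q
rng = random.Random(0)
A = [Fraction(n, d) for d in range(1, 6) for n in range(-10, 11)]
B = list(A)
rng.shuffle(A)
rng.shuffle(B)
iso = back_and_forth(A, B, 8)
```

The returned list of pairs is strictly increasing in both coordinates, i.e. a finite piece of an order isomorphism; iterating forever (with genuine enumerations of both orders) exhausts both sides, which is the content of Cantor's theorem.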
I apologize if this question is well-known, but I was unable to find it mentioned anywhere. There exists a bug which moves around in $r$-space. The bug begins at the origin of this $r$-space. If the bug is at the center of a regular $r$-simplex (all of which are oriented in exactly the same direction) with radial length $1$, then the bug moves to one vertex of the simplex, chosen at random with equal probabilities for each vertex. Call each move of the bug a step. My question is: what is the probability, as a function of $r$, that there exists a number $n>0$ such that just after the $n$th step the bug is at the origin? An equivalent question is: given an infinite sequence of digits, each digit chosen uniformly at random from the integers $0$ through $b$ inclusive, what is the probability, as a function of $b+1$, that there exists a point in the sequence such that, among the digits up to that point, all $b+1$ digits occur equally often? Some work I have done has provided me with a solution that is not in closed form: $$-\sum_{k\in A}\left(\prod_{i=1}^{k_{length}}\frac{-(r k_i)!}{(r^{k_i} (k_i!))^r}\right)$$ where $A$ is the set of all finite sequences of distinct integers and $k_{length}$ is the length of the sequence $k$. Unfortunately, I cannot remember how I obtained this result; if I recreate it, I will edit this question. I also have a lower bound of $0.7$ for $r=2$ as calculated by Mathematica. EDIT: I'm starting to doubt my expression above as the answer.
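Not an answer, but the digit formulation is easy to probe by simulation. This is my own sketch, with `k` standing for the number of distinct digits (i.e. $b+1$); balance can only occur at times divisible by `k`:

```python
import random

def ever_balanced(k, horizon, rng):
    """Draw digits uniformly from {0,...,k-1}; return True if at some time
    up to `horizon` all k digits have appeared equally often."""
    counts = [0] * k
    for t in range(1, horizon + 1):
        counts[rng.randrange(k)] += 1
        if t % k == 0 and len(set(counts)) == 1:
            return True
    return False

def estimate(k, horizon=3000, trials=300, seed=0):
    """Monte Carlo estimate of P(balance occurs within `horizon` steps)."""
    rng = random.Random(seed)
    hits = sum(ever_balanced(k, horizon, rng) for _ in range(trials))
    return hits / trials

p3 = estimate(3)  # three distinct digits
```

For three digits the vector of count differences performs a walk on a two-dimensional lattice, so the finite-horizon estimate is only a lower bound on the true probability, and it creeps upward slowly as the horizon grows.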
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...

Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...

Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...

Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...

Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...

Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ...

Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-$p_T$ trigger particles ($8 < p_T < 15$ GeV/$c$) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...

Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Quasirandomness

Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density. Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit. These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
Examples of quasirandomness definitions

Bipartite graphs

Let X and Y be two finite sets and let [math]f:X\times Y\rightarrow [-1,1].[/math] Then f is defined to be c-quasirandom if [math]\mathbb{E}_{x,x'\in X}\mathbb{E}_{y,y'\in Y}f(x,y)f(x,y')f(x',y)f(x',y')\leq c.[/math] Since the left-hand side is equal to [math]\mathbb{E}_{x,x'\in X}(\mathbb{E}_{y\in Y}f(x,y)f(x',y))^2,[/math] it is always non-negative, and the condition that it should be small implies that [math]\mathbb{E}_{y\in Y}f(x,y)f(x',y)[/math] is small for almost every pair [math]x,x'.[/math] If G is a bipartite graph with vertex sets X and Y and [math]\delta[/math] is the density of G, then we can define [math]f(x,y)[/math] to be [math]1-\delta[/math] if xy is an edge of G and [math]-\delta[/math] otherwise. We call f the balanced function of G, and we say that G is c-quasirandom if its balanced function is c-quasirandom. It can be shown that if H is any fixed graph and G is a large quasirandom graph, then the number of copies of H in G is approximately what it would be in a random graph of the same density as G.

Subsets of finite Abelian groups

If A is a subset of a finite Abelian group G and A has density [math]\delta,[/math] then we define the balanced function f of A by setting [math]f(x)=1-\delta[/math] when [math]x\in A[/math] and [math]f(x)=-\delta[/math] otherwise. Then A is c-quasirandom if and only if f is c-quasirandom, and f is defined to be c-quasirandom if [math]\mathbb{E}_{x,a,b\in G}f(x)f(x+a)f(x+b)f(x+a+b)\leq c.[/math] Again, we can prove positivity by observing that the left-hand side is a sum of squares. In this case, it is [math]\mathbb{E}_{a\in G}(\mathbb{E}_{x\in G}f(x)f(x+a))^2.[/math] If G has odd order, then it can be shown that a quasirandom set A contains approximately the same number of triples [math](x,x+d,x+2d)[/math] as a random subset A of the same density.
However, it is decidedly not the case that A must contain approximately the same number of arithmetic progressions of higher length (regardless of torsion assumptions on G). For that one must use "higher uniformity".

Hypergraphs

Subsets of grids

A function f from [math][n]^2[/math] to [math][-1,1][/math] is c-quasirandom if the "sum over rectangles" is at most c. The sum over rectangles is [math]\mathbb{E}_{x,y,a,b}f(x,y)f(x+a,y)f(x,y+b)f(x+a,y+b)[/math]. Again, it is easy to show that this sum is non-negative by expressing it as a sum of squares. And again, one defines a subset [math]A\subset[n]^2[/math] to be c-quasirandom if it has a balanced function that is c-quasirandom. If A is a c-quasirandom set of density [math]\delta[/math] and c is sufficiently small, then A contains roughly the same number of corners as a random subset of [math][n]^2[/math] of density [math]\delta.[/math]

A possible definition of quasirandom subsets of [math][3]^n[/math]

As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n.
(So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
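For the bipartite definition above, the left-hand side is cheap to compute exactly: [math]\mathbb{E}_{x,x'\in X}\mathbb{E}_{y,y'\in Y}f(x,y)f(x,y')f(x',y)f(x',y')[/math] equals the squared Frobenius norm of [math]FF^T[/math] divided by [math]|X|^2|Y|^2,[/math] where F is the matrix of values of f. The following small script is an illustrative aside, not part of this article:

```python
def quasirandomness(F):
    """E_{x,x',y,y'} f(x,y) f(x,y') f(x',y) f(x',y') for f given as a
    list of rows F; equals ||F F^T||_F^2 / (|X|^2 |Y|^2)."""
    X, Y = len(F), len(F[0])
    total = 0.0
    for row1 in F:
        for row2 in F:
            inner = sum(a * b for a, b in zip(row1, row2))
            total += inner * inner
    return total / (X * X * Y * Y)

def balanced(adj):
    """Balanced function of a bipartite graph given as a 0/1 matrix:
    1 - delta on edges, -delta on non-edges."""
    delta = sum(map(sum, adj)) / (len(adj) * len(adj[0]))
    return [[v - delta for v in row] for row in adj]
```

Two sanity checks: the constant function f = 1 gives exactly 1, and the balanced function of the complete bipartite graph is identically 0, so its quasirandomness parameter is 0.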
Bohr's theory of the atom precedes quantum theory but does describe the Balmer and other lines in atomic hydrogen. I've outlined the calculation so that you can see where the Bohr radius comes from. He assumed that the electron orbits the nucleus in a circle of radius $r$ determined by the balance of centripetal acceleration and Coulomb attraction towards the nucleus. For an electron of mass $m_e$ and speed $v$ this is $$\frac{m_ev^2}{r}=\frac{e^2/(4\pi\epsilon _0)}{r^2}$$ where $e$ is the charge on the electron and $4\pi\epsilon_0$ puts us into SI units ($\epsilon_0$ is the permittivity of free space). The total energy is the sum of kinetic and potential parts $$E=\frac{m_ev^2}{2}-\frac{e^2/(4\pi\epsilon _0)}{r}$$ and using the first equation gives $$E=-\frac{e^2/(4\pi\epsilon _0)}{2r}$$ Bohr had to make the assumption that electrons circling in an orbit do not radiate energy, which is contrary to classical electrodynamics, but he then needed to determine which orbits were allowed. He assumed that the angular momentum is quantised in units of Planck's constant $h$ with a (principal) quantum number $n$. This gives $$ m_evr = n\frac{h}{2\pi}$$ which when combined with the first equation gives $$ r=a_0n^2$$ which is the equation you ask about, and $a_0$ is conventionally the symbol for the Bohr radius, equal to $0.0529$ nm. It is calculated as $$a_0 = \frac{\hbar^2}{(e^2/(4\pi\epsilon_0))\,m_e}$$ where $\hbar = h/2\pi$. Re-expressing the energy using $a_0$ gives $$E=-\frac{e^2/(4\pi\epsilon_0)}{2a_0}\frac{1}{n^2}$$ which is the famous Bohr equation and explains the energy levels in hydrogen atoms and so the transitions between them.
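Plugging SI constants into these formulas reproduces the quoted numbers. This is a back-of-the-envelope script (CODATA values hard-coded), not a precision calculation:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

k = e**2 / (4 * math.pi * eps0)   # Coulomb factor e^2/(4 pi eps0)
a0 = hbar**2 / (k * m_e)          # Bohr radius, m

def E_n(n):
    """Bohr energy level in eV: E = -(e^2/(4 pi eps0)) / (2 a0 n^2)."""
    return -k / (2 * a0 * n**2) / e

r3 = a0 * 3**2                    # radius of the n = 3 orbit, via r = a0 n^2
```

This gives $a_0 \approx 5.29\times 10^{-11}$ m $= 0.0529$ nm and $E_1 \approx -13.6$ eV, the hydrogen ground-state energy.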
Let $X_k$ be $\mathbb{P}^2$ blown up at $k$ points (where $k$ is $0$ to $8$). Let $\beta \in H_2(X_k, \mathbb{Z})$ be the homology class given by $$ \beta := n L + m_1 E_1 + \ldots + m_k E_k $$ where $L$ is the homology class of a line and $E_i$ are the exceptional divisors. Also, define $$ \delta_{\beta} := \langle c_1(TX_k), ~\beta\rangle-1= 3n + m_1 + \ldots + m_k-1. $$ Let $N_{\beta}$ be the number of genus zero curves in the class $\beta$ passing through $\delta_{\beta}$ generic points. $\textbf{Questions:} $ I have two questions. In their paper Kontsevich and Manin give a recursive formula to compute $N_{\beta}$ (page $29$). 1) Is it known that the numbers $N_{\beta}$ that one gets from their formula are actually the enumerative numbers (i.e. they are actually the honest count of curves through the right number of generic points)? A priori, Gromov–Witten invariants need not be enumerative, and I suspect the formula given by Kontsevich and Manin is for the genus zero GW invariants. In particular, on page $26$ of their paper (second last paragraph), they make the remark "We expect that $N_{\beta}$ counts the number of rational curves in the homology class $\beta$ passing through $\delta_{\beta}$ points, at least in unobstructed problems." This remark seems to suggest that at the time of writing the paper they did not know if the numbers are actually enumerative. Is this presently known (i.e. are genus zero GW invariants on del Pezzo surfaces enumerative)? The answer is yes for $\mathbb{P}^2$. 2) My second question is how does one actually compute $N_{\beta}$ using their recursive formula? One needs enough initial conditions for the recursion. On page $29$ (just after they state the formula) they say that $N_{\beta}$ is ``expected'' to be one for all indecomposable $\beta$.
This seems to imply that $$N_{3L-E_1-E_2-\ldots- E_8} = 1.$$ But as observed by Mark in this post it seems that this number ought to be the same as the number of rational planar cubics through $8$ generic points, i.e. $12$. So what have I misunderstood here?
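For question 2 in the $k=0$ case (no blown-up points, so $\beta = nL$), the recursion specializes to the well-known Kontsevich recursion for $N_d$, the number of rational plane curves of degree $d$ through $3d-1$ general points, which needs only the initial condition $N_1 = 1$. A sketch (my own transcription of that special case, not the general del Pezzo formula):

```python
from math import comb
from functools import lru_cache

# Kontsevich's recursion for rational plane curves: N_1 = 1 and, for d > 1,
# N_d = sum over dA + dB = d of
#       N_dA * N_dB * dA^2 * dB * (dB*C(3d-4, 3dA-2) - dA*C(3d-4, 3dA-1)).
@lru_cache(maxsize=None)
def N(d):
    if d == 1:
        return 1  # exactly one line through two points
    total = 0
    for dA in range(1, d):
        dB = d - dA
        total += (N(dA) * N(dB) * dA**2 * dB
                  * (dB * comb(3*d - 4, 3*dA - 2) - dA * comb(3*d - 4, 3*dA - 1)))
    return total

print([N(d) for d in range(1, 5)])  # [1, 1, 12, 620]
```

In particular $N_3 = 12$, the count of rational planar cubics through 8 generic points mentioned above.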
Is it wrong to say that "density functional" means that the electron density is a function of the orbitals (wave function) of all electrons in 3 dimensions, and if so, why? In general, a functional $F$ is a mapping from an arbitrary set $\mathcal{X}$ of functions to the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$:$$F : \mathcal{X} \to \mathbb{R}$$or$$F : \mathcal{X} \to \mathbb{C}.$$ For example, if you take $\mathcal{X}$ to be the set of polynomials with real coefficients, you can define a functional $F$ as $$ F[f] = \int_0^1f(x)\,dx $$ i.e. your functional $F$ takes a polynomial function $f\in\mathcal{X}$ (for example $f(x)=3x+1/2$) as an argument and returns a scalar (2 for $f(x)=3x+1/2$, as you can easily verify). A density functional is simply a functional $F[f]$ where the argument $f$ is the electron density $\rho(\vec{r})$ (i.e. a density functional is a functional of the electron density). For example, Hohenberg and Kohn showed that the energy $\epsilon$ of a quantum system is a functional of the density$$\epsilon=E[\rho].$$This means that when you plug the electron density of your system $\rho(\vec{r})$ into the energy functional $E[\rho]$ you get a number $\epsilon$, which is the energy of your system. The whole energy functional is not known explicitly, but some of its components are known. For example, for the external potential energy we have$$V[\rho] = \int v(\vec{r})\rho(\vec{r})d\vec{r}$$and for the Coulomb interaction between electrons we have$$J[\rho] = \frac{1}{2}\iint \frac{\rho(\vec{r})\rho(\vec{r}')}{|\vec{r}-\vec{r}'|}\,d\vec{r}d\vec{r}' $$which are clearly functionals of the electron density.
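As a concrete illustration of the example functional $F[f]=\int_0^1 f(x)\,dx$, here is a quick numerical sketch (trapezoidal quadrature, which happens to be exact for the linear $f$ used here):

```python
# The functional F maps a function f to a single scalar: F[f] = integral of f
# over [0, 1], approximated here by the composite trapezoidal rule.
def F(f, steps=1000):
    h = 1.0 / steps
    return sum((f(i * h) + f((i + 1) * h)) / 2 * h for i in range(steps))

print(F(lambda x: 3 * x + 0.5))  # the scalar F returns for f(x) = 3x + 1/2
```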
Electron density, is defined as the probability density of finding any of the $n$ electrons with arbitrary spin at some point $\vec{r}_{1}$ in space, $$ \newcommand{\el}{_\mathrm{el}} \newcommand{\dif}{\mathrm{d}} \rho(\vec{r}_{1}) := n \sum_{m_{s1}} \sum_{m_{s2}} \dotsi \sum_{m_{sn}} \iint \dotsi \int | ψ\el(\vec{q}_{1}, \vec{q}_{2}, \dotsc, \vec{q}_{n}) |^{2} \dif \vec{r}_{2} \dif \vec{r}_{3} \dotsb \dif \vec{r}_{n} \, . $$ From the first Hohenberg–Kohn theorem, it is known that electronic energy is a functional of electron density, $$ E\el = E\el[\rho(\vec{r}_{1})] \, . $$ i.e. electronic energy $E\el$ is a function that takes another function, namely, electron density $\rho(\vec{r}_{1})$, as its input argument and returns a scalar value (real number). So, density functional is a functional of electron density that returns (possibly approximate) electronic energy or a part of it, if $E\el$ is subdivided into parts.
I think that the OP is asking a more specific question than whether or not a surface has a connection that is not metric or not torsion free. It seems that the OP is assuming that the surface $M$ comes equipped with an immersion $\mathbf{r}:M\to\mathbb{E}^3$ into (oriented) Euclidean $3$-space and is asking whether, using the data of the immersion $\mathbf{r}$, it is possible to define, in a canonical way, a connection that has torsion and/or is not metric compatible. His question includes the argument that the usual induced connection associated to a given $\mathbf{r}$ discussed in all curves-and-surfaces books is both compatible with the induced metric and torsion-free. Now, it's true that the only canonical connection induced by $\mathbf{r}$ that uses at most second-order information from $\mathbf{r}$ at a point is the Levi-Civita connection. However, there are other canonical connections definable using $\mathbf{r}$ that use higher order information, and these need be neither torsion-free nor compatible with any metric (let alone the induced metric), at least for the general immersion. (Obviously, any canonical formula using higher order information will just produce the Levi-Civita connection when applied to an immersion whose image is either a plane or a sphere.) Example: Given an immersion $\mathbf{x}:M\to\mathbb{E}^3$, there is an associated mean curvature function $H$ that, unfortunately, depends on a choice of orientation of the surface $M$; it switches sign if one reverses the orientation of $M$ (always assuming, of course, that the target space $\mathbb{E}^3$ is oriented). However, the $1$-form $\eta = \ast dH$ is independent of a choice of orientation of the surface, since both $H$ and $\ast$ reverse sign when one reverses orientation.
Let $\nabla$ be the Levi-Civita connection on $M$ associated to the metric induced on $M$ by the immersion $\mathbf{x}$, and define a second connection $\tilde\nabla$ on $M$ by the formula$$\tilde\nabla_XY = \nabla_XY + \eta(X)Y$$Then $\tilde\nabla$ is a connection canonically associated to $\mathbf{x}$ (whose local formula depends on third order derivatives of $\mathbf{x}$). One computes (using the fact that the torsion of $\nabla$ vanishes) that$$T^{\tilde\nabla}(X,Y) = \tilde\nabla_XY - \tilde\nabla_YX - [X,Y] = \eta(X)Y - \eta(Y)X,$$so the torsion of $\tilde\nabla$ vanishes if and only if $\eta=0$, i.e., $H$ is locally constant. Meanwhile, it is easy to compute that the curvatures of the two connections are related by$$R^{\tilde\nabla}(X,Y)Z = R^{\nabla}(X,Y)Z + d\eta(X,Y)\ Z,$$so $\tilde\nabla$ does not even have a parallel $2$-form, let alone a parallel metric, unless $d\eta=0$, i.e., unless $H$ is (locally) a harmonic function on the surface. Thus, in general, $\tilde\nabla$ is neither torsion-free nor metric compatible.
With holiday season coming up I decided to make some cinnamon stars. That was fun (and the result tasty), but my inner nerd cringed when I put the first tray of stars in the box and they would not fit in one layer: Almost! Is there a way they could have fit? How well can we tile stars, anyway? Given that these are regular six-pointed stars, we could certainly use the well-known hexagon tilings as an approximation, like so: Messed up the one to the upper right, whoops. But is this optimal? There's plenty of room between the tips. For this consideration, let us restrict ourselves to rectangular boxes and six-pointed, regular stars, i.e. there are thirty degrees (or $\frac{\pi}{6}$) between each tip and its neighbouring nook. The stars are characterised by the inner radius $r_i$ and outer radius $r_o$: [source] Note that we have hexagons for $r_i = \frac{\sqrt{3}}{2} \cdot r_o$ and hexagrams for $r_i = \frac{1}{\sqrt{3}} \cdot r_o$. I think it's reasonable to consider these the extremes (for cookies) and restrict ourselves to the range in between, i.e. $\frac{r_i}{r_o} \in \Bigl[\frac{1}{\sqrt{3}}, \frac{\sqrt{3}}{2}\Bigr]$. My cookies have $r_i \approx 17\,\mathrm{mm}$ and $r_o \approx 25\,\mathrm{mm}$, ignoring imperfections -- I was going for taste, not form for once! What is an optimal tiling for stars as characterised above? If there is no static best tiling, is there an algorithm to find a good one efficiently?
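As a starting point for comparing tilings, the area of such a star has a simple closed form: the 12-gon decomposes into 12 triangles with sides $r_i$, $r_o$ and included angle $\pi/6$, giving area $12\cdot\frac12 r_i r_o \sin\frac{\pi}{6} = 3\,r_i r_o$. A sketch verifying this with the shoelace formula and computing the density of the naive hexagon-tiling packing (star area over the area of its circumscribing hexagon):

```python
import math

# Vertices of a regular six-pointed star: 12 points alternating between the
# tip radius r_o and the nook radius r_i, spaced pi/6 apart.
def star_vertices(r_i, r_o):
    return [((r_o if k % 2 == 0 else r_i) * math.cos(k * math.pi / 6),
             (r_o if k % 2 == 0 else r_i) * math.sin(k * math.pi / 6))
            for k in range(12)]

def shoelace(pts):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

r_i, r_o = 17.0, 25.0                     # the cookies, in mm
star = shoelace(star_vertices(r_i, r_o))  # equals 3 * r_i * r_o
hexagon = 3 * math.sqrt(3) / 2 * r_o**2   # hexagon-tiling cell through the tips
print(star, star / hexagon)               # density of the naive hexagon packing
```

For these cookies the hexagon-tiling density is $2 r_i/(\sqrt{3}\,r_o) \approx 0.785$, so the naive tiling wastes over a fifth of the area, which is what makes the question interesting.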
My university is participating in the implementation of a library borrowing management system at the Richelieu National Library in France. I received the order to formulate the query "find all users having borrowed every book" in relational algebra, in relational calculus, and in SQL (such a query would probably never arise in practice; presumably the librarians want to test the limits of the database). The database has the following schema (the primary keys are in bold): Borrowing( People, Book, DateBorrowing, ExpectedReturnDate, EffectiveReturnDate) Lateness( People, Book, DateBorrowing, LatenessFee) I tried $$\Pi_{People}(Borrowing)\div\Pi_{People}(\sigma_{Book} (Borrowing))$$ But it seemed to be wrong, since to compute $r\div s$ the schema of $s$ must be a subset of the schema of $r$, which does not seem to be the case here -- but why? I'm still talking about people, aren't I? I then tried the following relational calculus formula: $$\{t.People|Borrowing(t)\wedge(\forall u\, Borrowing(u)\Rightarrow t.DateBorrowing)\}$$ to find every book that has a borrowing date. I know this calculation is wrong but I don't know how to do better... Then in SQL: SELECT People FROM Borrowing WHERE FORALL Books EXISTS DateBorrowing That is what I tried, and I know that is not the right way to "find all users having borrowed every book". Can you help me express such a query correctly?
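The "for all" condition is classically expressed in SQL as relational division via a double NOT EXISTS: a person qualifies if there is no book that they have never borrowed. A sketch (the sample data and the sqlite3 harness are mine, not part of the question; column names follow the schema above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Borrowing(People TEXT, Book TEXT, DateBorrowing TEXT,
                       ExpectedReturnDate TEXT, EffectiveReturnDate TEXT);
INSERT INTO Borrowing VALUES
  ('alice', 'X', '2020-01-01', NULL, NULL),
  ('alice', 'Y', '2020-01-02', NULL, NULL),
  ('bob',   'X', '2020-01-03', NULL, NULL);
""")
# Relational division: people for whom no book exists that they never borrowed.
rows = con.execute("""
SELECT DISTINCT b1.People
FROM Borrowing AS b1
WHERE NOT EXISTS (
    SELECT 1 FROM (SELECT DISTINCT Book FROM Borrowing) AS books
    WHERE NOT EXISTS (
        SELECT 1 FROM Borrowing AS b2
        WHERE b2.People = b1.People AND b2.Book = books.Book))
""").fetchall()
print(rows)  # only alice has borrowed every book
```

Here "every book" is taken to mean every book appearing in Borrowing; if books live in a separate table, the inner derived table would be replaced by that table.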
What is the median of the non-central t distribution with non-centrality parameter $\delta \ne 0$? This may be a hopeless question because the CDF appears to be expressed as an infinite sum, and I cannot find any information about the inverse CDF function. You can approximate it. For example, I made the following nonlinear fits for $\nu$ (degrees of freedom) from 1 through 20 and $\delta$ (noncentrality parameter) from 0 through 5 (in steps of 1/2). Let $$a(\nu) = 0.963158 + \frac{0.051726}{\nu-0.705428} + 0.0112409\log(\nu),$$ $$b(\nu) = -0.0214885+\frac{0.406419}{0.659586 +\nu}+0.00531844 \log(\nu),$$ and $$g(\nu, \delta) = \delta + a(\nu) \exp(b(\nu) \delta) - 1.$$ Then $g$ estimates the median to within 0.15 for $\nu=1$, 0.03 for $\nu=2$, 0.015 for $\nu=3$, and 0.007 for $\nu = 4, 5, \ldots, 20$. The estimation was done by computing the values of $a$ and $b$ for each value of $\nu$ from 1 through 20 and then separately fitting $a$ and $b$ to $\nu$. I examined plots of $a$ and $b$ to determine an appropriate functional form for these fits. You can do better by focusing on the intervals of these parameters of interest to you. In particular, if you're not interested in really small values of $\nu$ you could easily improve these estimates, likely to within 0.005 consistently. Here are plots of the median versus $\delta$ for $\nu=1$, the hardest case, and the negative residuals (true median minus approximate value) versus $\delta$: The residuals are truly small compared to the medians. BTW, for all but the smallest degrees of freedom the median is close to the noncentrality parameter. Here's a graph of the median, for $\delta$ from 0 to 5 and $\nu$ (treated as a real parameter) from 1 to 20. For many purposes using $\delta$ to estimate the median might be good enough. Here is a plot of the error (relative to $\delta$) made by assuming the median equals $\delta$ (for $\nu$ from 2 through 20).
If you are interested in (degrees of freedom) $\nu > 2$, the following asymptotic expression [derived from an interpolative approximation to the noncentral Student-t quantile, DL Bartley, Ann. Occup. Hyg., Vol. 52, 2008] is sufficiently accurate for many purposes: $$\operatorname{Median}[t_{\delta,\nu}] \approx \delta\left(1 + \frac{1}{3\nu}\right).$$ With $\nu > 2$, the maximum magnitude of the bias of the above expression relative to the noncentral Student-t median is about 2% and falls off quickly with increasing $\nu$. The contour diagram shows the bias of the asymptotic approximation relative to the noncentral Student-t median:
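The two approximations above are easy to compare directly. This sketch just transcribes the fitted $a$, $b$, $g$ from the first answer and Bartley's asymptotic form (the function names are mine):

```python
import math

# whuber's fitted approximation g(nu, delta) to the noncentral t median.
def a(nu):
    return 0.963158 + 0.051726 / (nu - 0.705428) + 0.0112409 * math.log(nu)

def b(nu):
    return -0.0214885 + 0.406419 / (0.659586 + nu) + 0.00531844 * math.log(nu)

def median_fit(nu, delta):
    return delta + a(nu) * math.exp(b(nu) * delta) - 1

# Bartley's asymptotic approximation, for nu > 2.
def median_asym(nu, delta):
    return delta * (1 + 1 / (3 * nu))

# For moderate nu the two agree closely, and both are near delta itself.
print(median_fit(10, 2.0), median_asym(10, 2.0))
```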
It's well known that chain complexes are an abelian category, and in particular we can consider chain complexes of chain complexes, i.e. double complexes. Given a double complex $A^{\bullet\bullet} \in \mathrm{Kom}(\mathrm{Kom}(\mathcal A))$ we can form the total complex $\newcommand{\tot}{\mathrm{Tot}}\tot(A^{\bullet\bullet})$ which now lies "one level lower", in $\mathrm{Kom}(\mathcal A)$. I can also try to consider chain complexes in the derived category of $\mathcal A$, but it is no longer clear (at least to me) how to build a total complex in the best way, since $d \circ d=0$ only has to hold up to homotopy in $D(\mathcal A)$. But let me now consider more generally a triangulated category $\newcommand{\T}{\mathcal T}\T$ and a sequence of objects $A^0, \ldots, A^n$ with $d_i \colon A^i \to A^{i+1}$ such that $d_{i+1} \circ d_i = 0$. It seems to me that one can define a total complex $\tot(A^\bullet) \in \T$ as an iterated mapping cone: for instance, if $n=2$, then one can first consider $B = \mathrm{Cone}(d_0)$. Then we consider the diagram $$ \begin{matrix} A^0 & \to & A^1 & \to & B \\ \downarrow & & \downarrow & & \\ 0 & \to & A^2 & \to & A^2 & \end{matrix}$$ where both rows are distinguished triangles; by one of the axioms of triangulated categories there is a map $f \colon B \to A^2$ completing this to a map of triangles, and we define $\tot(A^\bullet)=\mathrm{Cone}(f)$. However, this has the drawback of not being functorial. For instance, I think that one would like to say that a map $A^\bullet \to B^\bullet$ of chain complexes in $\T$ is a quasi-isomorphism if $\tot(A^\bullet) \to \tot(B^\bullet)$ is an isomorphism, but this makes no sense unless $\tot$ is a functor. And one would like to say that if $f \colon \T \to \T'$ is a triangulated functor, then there is a natural equivalence between $f \circ \tot$ and $\tot \circ f$ as functors $\mathrm{Kom}(\T) \to \T'$ (am I right?), and again $\tot$ needs to be functorial.
First of all, I would like to know if what I've said so far is correct. Maybe there is a better way to set things up than this? Secondly, I've heard the slogan that stable $\infty$-categories solve all problems arising from the fact that triangulated categories don't have functorial mapping cones. Is there a better behaved notion of a chain complex in a stable $\infty$-category? I suspect that I could answer these questions myself if I started reading Lurie's work, but it's a slightly intimidating amount of text and I thought I'd ask here first.
Banach Journal of Mathematical Analysis
Banach J. Math. Anal. Volume 8, Number 2 (2014), 93-106.

Disjointness preserving linear operators between Banach algebras of vector-valued functions

Abstract

We present vector-valued versions of two theorems due to A. Jimenez-Vargas, by showing that, if $B(X,E)$ and $B(Y,F)$ are certain vector-valued Banach algebras of continuous functions and $T:B(X,E)\to B(Y,F)$ is a separating linear operator, then $\widehat{T}:\widehat{B(X,E)}\to \widehat{B(Y,F)}$, defined by $\widehat{T}\hat{f}=\widehat{Tf}$, is a weighted composition operator, where $\widehat{Tf}$ is the Gelfand transform of $Tf$. Furthermore, it is shown that, under some conditions, every bijective separating map $T:B(X,E)\to B(Y,F)$ is biseparating and induces a homeomorphism between the character spaces $M(B(X,E))$ and $M(B(Y,F))$. In particular, a complete description of all biseparating, or disjointness preserving, linear operators between certain vector-valued Lipschitz algebras is provided. In fact, under certain conditions, if the bijections $T:Lip^{\alpha}(X,E)\to Lip^{\alpha}(Y,F)$ and $T^{-1}$ are both disjointness preserving, then $T$ is a weighted composition operator of the form $Tf(y)=h(y)(f(\phi(y)))$, where $\phi$ is a homeomorphism from $Y$ onto $X$ and $h$ is a map from $Y$ into the set of all linear bijections from $E$ onto $F$. Moreover, if $T$ is multiplicative then $M(E)$ and $M(F)$ are homeomorphic.

Article information
Dates: First available in Project Euclid: 4 April 2014
Permanent link to this document: https://projecteuclid.org/euclid.bjma/1396640054
Digital Object Identifier: doi:10.15352/bjma/1396640054
Mathematical Reviews number (MathSciNet): MR3189541
Zentralblatt MATH identifier: 1308.47047
Subjects: Primary: 47B38: Operators on function spaces (general). Secondary: 47B33: Composition operators; 47B48: Operators on Banach algebras; 46J10: Banach algebras of continuous functions, function algebras [See also 46E25]
Citation: Ghasemi Honary, Taher; Nikou, Azadeh; Sanatpour, Amir Hossein. Disjointness preserving linear operators between Banach algebras of vector-valued functions. Banach J. Math. Anal. 8 (2014), no. 2, 93--106. doi:10.15352/bjma/1396640054.
ClassActivity20130325 Here are the stages that each group should get to:

1. Read in the data and plot the <math>(x_i,y_i)</math> data points, including error bars or some other graphical indication of the <math>\sigma_i</math>'s.

2. Hmm. They look kind of like a raised parabola, don't they? Try fitting a model of the form <math>y = b_0 + b_1 x^2</math>. What are the best-fitting values of <math>b_0, b_1</math>? Plot the best-fit curve on the same plot as you produced in stage 1. Does it look like a good fit? What is your value of <math>\chi^2_{min}</math>? At this stage you might want to automate your process so that you can quickly plug in the following models and get best-fit parameters, <math>\chi^2_{min}</math>, and a graphical plot.

3. Do a linear fit to see how bad it is: <math>y = b_0 + b_1 x</math>

4. Try an exponential: <math>y = b_0 \exp(b_1 x)</math>

5. Try adding a linear term to the parabola to get a general quadratic: <math>y = b_0 + b_1 x + b_2 x^2</math>

6. Does the ordering of the <math>\chi^2_{min}</math> values seem to match your intuitive impression of which curve fits best?

7. Calculate standard errors for your fitted parameters using the Hessian matrix (as described in the segment). Is the value of <math>b_1</math> in stage 5 different enough from zero so that you are sure it isn't zero? (That is, are you justified in adding the extra parameter to the original stage 2 parabola?)

8. Can you answer the same question by looking at the <math>\chi^2_{min}</math> values? (We'll learn more about this later in the course.)
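The stage-2 fit, <math>y = b_0 + b_1 x^2</math>, can be sketched as a weighted least-squares solve of the 2x2 normal equations in pure Python; the data here is synthetic, since the course data file isn't reproduced in the text:

```python
# Weighted least squares for y = b0 + b1*x^2 with weights w_i = 1/sigma_i^2.
# Minimizing chi^2 gives the normal equations
#   S*b0   + Sx2*b1 = Sy
#   Sx2*b0 + Sx4*b1 = Sx2y
def fit_parabola(xs, ys, sigmas):
    w = [1.0 / s**2 for s in sigmas]
    S    = sum(w)
    Sx2  = sum(wi * x**2 for wi, x in zip(w, xs))
    Sx4  = sum(wi * x**4 for wi, x in zip(w, xs))
    Sy   = sum(wi * y for wi, y in zip(w, ys))
    Sx2y = sum(wi * x**2 * y for wi, x, y in zip(w, xs, ys))
    det = S * Sx4 - Sx2**2
    b0 = (Sx4 * Sy - Sx2 * Sx2y) / det
    b1 = (S * Sx2y - Sx2 * Sy) / det
    chi2 = sum(wi * (y - b0 - b1 * x**2)**2 for wi, x, y in zip(w, xs, ys))
    return b0, b1, chi2

xs = [-2, -1, 0, 1, 2]
ys = [1 + 2 * x**2 for x in xs]            # exact parabola: b0 = 1, b1 = 2
b0, b1, chi2 = fit_parabola(xs, ys, [1.0] * 5)
print(b0, b1, chi2)                        # recovers (1.0, 2.0) with chi2 = 0
```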
Yes, you can have both as-fast-as-possible random access and faster-than-full-rebuild insertion time. I assume you know how dynamically growing arrays work. Also, I assume you know how to make them work in $O(1)$ worst-case. These techniques allow us to focus on the problem for lists of limited size only. Let $n$ be the maximal size (capacity) of the list. Let’s divide the list into $O(\sqrt{n})$ blocks of equal size $B$ (which is $O(\sqrt{n})$ too). Let also $b_j$ denote a cyclic shift of the $j$-th block. When you need to access the $i$-th element of the list, you calculate its position as follows in $O(1)$: $$f(i) = \bigl((b_j + i) \bmod B\bigr) + B \cdot j$$ where $j = \lfloor \frac{i}{B} \rfloor$. When you need to insert something into the $i$-th position, you first make a cyclic shift of everything from the $j$-th block to the end: Decrement all $b$ values (modulo $B$) starting from $j$. Swap the elements on the boundaries of consecutive blocks, so that every element is in its right place. Then you fix the $j$-th block by rebuilding it from scratch in $O(B)$ time. This gives you $O(\sqrt{n})$ insertion complexity.
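A sketch of the scheme just described, for a fixed capacity $n$ (the class and method names are mine): `get` is $O(1)$, and `insert` rebuilds one block and then threads a carry through the $O(\sqrt n)$ later blocks.

```python
import math

class SqrtList:
    """Fixed-capacity list: O(1) access, O(sqrt(n)) insertion, via blocks
    of size B ~ sqrt(n) with per-block cyclic offsets b[j]."""
    def __init__(self, capacity):
        self.B = max(1, math.isqrt(capacity))         # block size
        self.nblocks = -(-capacity // self.B)         # ceil(capacity / B)
        self.blocks = [[None] * self.B for _ in range(self.nblocks)]
        self.b = [0] * self.nblocks                   # cyclic shifts
        self.size = 0

    def _phys(self, j, k):
        # physical slot of logical index k within block j
        return (self.b[j] + k) % self.B

    def get(self, i):
        j, k = divmod(i, self.B)
        return self.blocks[j][self._phys(j, k)]

    def append(self, v):
        j, k = divmod(self.size, self.B)
        self.blocks[j][self._phys(j, k)] = v
        self.size += 1

    def insert(self, i, v):
        assert self.size < self.nblocks * self.B
        j, k = divmod(i, self.B)
        # Rebuild block j from scratch with v at local index k; the block's
        # last logical element is carried into the next block.
        old = [self.get(j * self.B + t) for t in range(self.B)]
        self.blocks[j] = old[:k] + [v] + old[k:self.B - 1]
        self.b[j] = 0
        carry = old[self.B - 1]
        # Cyclically shift each later block by one and swap the carry across
        # block boundaries: O(number of blocks) = O(sqrt(n)).
        last = self.size // self.B
        for m in range(j + 1, last + 1):
            self.b[m] = (self.b[m] - 1) % self.B
            p = self._phys(m, 0)
            carry, self.blocks[m][p] = self.blocks[m][p], carry
        self.size += 1

lst = SqrtList(16)
for v in range(12):
    lst.append(v)
lst.insert(3, 99)           # 99 lands at index 3; everything after shifts right
print([lst.get(i) for i in range(lst.size)])
```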
If I have the following system, I am wondering how to calculate the number of valid strings it contains. The system is something like this, and can have arbitrary variations. It only uses characters from an alphabet $\Sigma$. It can't spell any word in a blacklist word list $\rho$. It can't have more than $n$ characters of the same type in a row. Every word $\omega$ it generates is no more than $m$ in length. So for example, we might have this system: $\Sigma = (a, b, c, d, e, f, g, h)$ $l = \texttt{len}(\Sigma)$ $\rho = (\texttt{bed},\texttt{fad},\texttt{dad},\texttt{bead},\texttt{deed},\texttt{fade})$ $n = 3$ (so it can't match /aaaa|bbbb|cccc|dddd|eeee|ffff|gggg|hhhh/) $m = 20$ Say we have a random number generator, which uses a radix algorithm to convert its output into a string using the characters in the alphabet $\Sigma$. The thing is, this random number generator might generate strings like the following, which we just have to drop and forget, then try generating another one until we get one that matches the constraints. I don't know of a better way to do that, but that's beside the point. So it might generate these ones, which are invalid:

afbcdeabeadbeaaaadaf [contains bead and aaaa]
bedddddagheadfffeeee [contains bed and dddd]

So the question is, how do I calculate how many strings are invalid? I know how to calculate the total number of strings of length $m$ ignoring the constraints, which is simply: $$z = l^m = 8^{20}$$ I don't know how to calculate the number of strings that are invalid though, in a general way (so I can change the value of $m$, $n$, $\Sigma$, or $\rho$). I'm wondering how to formulate this problem mathematically or algorithmically so that I can figure out that "there are $x$ invalid strings", and so I can calculate $$y = z - x$$ to get the total number of valid strings given the constraints.
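Before looking for a closed form, it helps to pin down the definition of "invalid" with a brute-force count, which is feasible only for tiny parameters (the toy alphabet and blacklist below are mine, much smaller than the example system):

```python
from itertools import product, groupby

# Brute-force check: enumerate all l^m strings and test the two constraints
# directly. Only feasible for small m, but it fixes the semantics before
# attempting a DP/automaton-based count.
def is_valid(s, blacklist, n):
    if any(word in s for word in blacklist):
        return False                                          # spells a blacklisted word
    return all(len(list(run)) <= n for _, run in groupby(s))  # run-length cap

def count_valid(alphabet, blacklist, n, m):
    return sum(is_valid(''.join(t), blacklist, n)
               for t in product(alphabet, repeat=m))

# Toy instance: Sigma = {a, b}, rho = {"ab"}, n = 2, m = 3.
z = 2 ** 3                                  # l^m, total strings
valid = count_valid('ab', ('ab',), 2, 3)
print(valid, z - valid)                     # valid count and invalid count
```

For this toy instance the only valid strings are `bba` and `baa` (avoiding `ab` forces the form b...ba...a, and `aaa`/`bbb` break the run limit), so valid = 2 and invalid = 6.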
Images are essential elements in most scientific documents. LaTeX provides several options to handle images and make them look exactly the way you need. In this article is explained how to include images in the most common formats, how to shrink, enlarge and rotate them, and how to reference them within your document. Below is an example of how to import a picture. \documentclass{article} \usepackage{graphicx} \graphicspath{ {./images/} } \begin{document} The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at. \includegraphics{universe} There's a picture of a galaxy above \end{document} LaTeX cannot manage images by itself, so we need to use the graphicx package. To use it, we include the following line in the preamble: \usepackage{graphicx} The command \graphicspath{ {./images/} } tells LaTeX that the images are kept in a folder named images under the directory of the main document. The \includegraphics{universe} command is the one that actually includes the image in the document. Here universe is the name of the file containing the image without the extension, so universe.PNG becomes universe. The file name of the image should not contain white space or multiple dots. Note: the file extension is allowed to be included, but it's a good idea to omit it. If the file extension is omitted, it will prompt LaTeX to search for all the supported formats. For more details see the section about generating high resolution and low resolution images. When working on a document which includes several images it's possible to keep those images in one or more separate folders so that your project is more organised. The command \graphicspath{ {images/} } tells LaTeX to look in the images folder. The path is relative to the current working directory, so the compiler will look for the file in the same folder as the code where the image is included.
The path to the folder is relative by default if no initial directory is specified, for instance %Path relative to the .tex file containing the \includegraphics command \graphicspath{ {images/} } This is typically a straightforward way to reach the graphics folder within a file tree, but it can lead to complications when .tex files within folders are included in the main .tex file. Then, the compiler may end up looking for the images folder in the wrong place. Thus, it is best practice to specify the graphics path relative to the main .tex file, denoting the main .tex file directory as ./ , for instance %Path relative to the main .tex file \graphicspath{ {./images/} } as in the introduction. The path can also be absolute, if the exact location of the file on your system is specified. For example: %Path in Windows format: \graphicspath{ {c:/user/images/} } %Path in Unix-like (Linux, Mac OS) format \graphicspath{ {/home/user/images/} } Notice that this command requires a trailing slash / and that the path is in between double braces. You can also set multiple paths if the images are saved in more than one folder. For instance, if there are two folders named images1 and images2, use the command \graphicspath{ {./images1/}{./images2/} }. If no path is set, LaTeX will look for pictures in the folder where the .tex file that includes the image is saved. If we want to further specify how LaTeX should include our image in the document (length, height, etc), we can pass those settings in the following format: \begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.5]{lion-logo} The command \includegraphics[scale=1.5]{lion-logo} will include the image lion-logo in the document; the extra parameter scale=1.5 will do exactly that, scale the image to 1.5 times its real size.
You can also scale the image to some specific width and height. \begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[width=3cm, height=4cm]{lion-logo} As you probably have guessed, the parameters inside the brackets [width=3cm, height=4cm] define the width and the height of the picture. You can use different units for these parameters. If only the width parameter is passed, the height will be scaled to keep the aspect ratio. The length units can also be relative to some elements in the document. If you want, for instance, to make a picture the same width as the text: \begin{document} The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at. \includegraphics[width=\textwidth]{universe} Instead of \textwidth you can use any other default LaTeX length: \columnsep, \linewidth, \textheight, \paperheight, etc. See the reference guide for a further description of these units. There is another common option when including a picture within your document: to rotate it. This can easily be accomplished in LaTeX: \begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.2, angle=45]{lion-logo} The parameter angle=45 rotates the picture 45 degrees counter-clockwise. To rotate the picture clockwise use a negative number. The previous section explained how to include images in your document, but the combination of text and images may not look the way we expect. To change this we need to introduce a new environment. In the next example the figure will be positioned right below this sentence. \begin{figure}[h] \includegraphics[width=8cm]{Plot} \end{figure} The figure environment is used to display pictures as floating elements within the document.
This means you include the picture inside the figure environment and you don't have to worry about its placement; LaTeX will position it in such a way that it fits the flow of the document. Anyway, sometimes we need to have more control over the way the figures are displayed. An additional parameter can be passed to determine the figure positioning. In the example, \begin{figure}[h], the parameter inside the brackets sets the position of the figure to here. Below is a table listing the possible positioning values.

h - Place the float here, i.e., approximately at the same point it occurs in the source text (however, not exactly at the spot)
t - Position at the top of the page.
b - Position at the bottom of the page.
p - Put on a special page for floats only.
! - Override internal parameters LaTeX uses for determining "good" float positions.
H - Places the float at precisely the location in the LaTeX code. Requires the float package, though it may cause problems occasionally. This is somewhat equivalent to h!.

In the next example you can see a picture at the top of the document, despite being declared below the text. In this picture you can see a bar graph that shows the results of a survey which involved some important data studied as time passed. \begin{figure}[t] \includegraphics[width=8cm]{Plot} \centering \end{figure} The additional command \centering will centre the picture. The default alignment is left. It's also possible to wrap the text around a figure. When the document contains small pictures this makes it look better. \begin{wrapfigure}{r}{0.25\textwidth} %this figure will be at the right \centering \includegraphics[width=0.25\textwidth]{mesh} \end{wrapfigure} There are several ways to plot a function of two variables, depending on the information you are interested in. For instance, if you want to see the mesh of a function so it is easier to see the derivative you can use a plot like the one on the left.
\begin{wrapfigure}{l}{0.25\textwidth} \centering \includegraphics[width=0.25\textwidth]{contour} \end{wrapfigure} On the other side, if you are only interested in certain values, you can use the contour plot, like the one on the left. For the commands in the example to work, you have to import the package wrapfig. Add to the preamble the line \usepackage{wrapfig}. Now you can define the wrapfigure environment by means of the commands \begin{wrapfigure}{l}{0.25\textwidth} \end{wrapfigure}. Notice that the environment has two additional parameters enclosed in braces. Below the code is explained in more detail: {l} places the figure at the left side of the text (use r for the right side); {0.25\textwidth} sets the width of the figure area; and \centering centres the picture within that area. For a more complete article about image positioning see Positioning images and tables. Captioning images to add a brief description and labelling them for further reference are two important tools when working on a lengthy text. Let's start with a caption example: \begin{figure}[h] \caption{Example of a parametric plot ($\sin (x), \cos(x), x$)} \centering \includegraphics[width=0.5\textwidth]{spiral} \end{figure} It's really easy: just add the \caption{Some caption} command and inside the braces write the text to be shown. The placement of the caption depends on where you place the command; if it's above the \includegraphics command then the caption will be on top of the figure, if it's below then the caption will also be set below the figure. Captions can also be placed right after the figures.
The sidecap package uses similar code to the one in the previous example to accomplish this. \documentclass{article} \usepackage[rightcaption]{sidecap} \usepackage{graphicx} %package to manage images \graphicspath{ {images/} } \begin{SCfigure}[0.5][h] \caption{Using again the picture of the universe. This caption will be on the right} \includegraphics[width=0.6\textwidth]{universe} \end{SCfigure} There are two new elements here. First, the package sidecap is imported with the option rightcaption, which places the caption on the right of the picture (you can also use leftcaption). Second, the figure goes inside an SCfigure environment, \begin{SCfigure}[0.5][h] \end{SCfigure}, whose positioning parameter h works exactly as in the figure environment. You can do more advanced management of the caption formatting; check the further reading section for references. Figures, just as many other elements in a LaTeX document (equations, tables, plots, etc) can be referenced within the text. This is very easy: just add a label to the figure or SCfigure environment, then later use that label to refer to the picture. \begin{figure}[h] \centering \includegraphics[width=0.25\textwidth]{mesh} \caption{a nice plot} \label{fig:mesh1} \end{figure} As you can see in the figure \ref{fig:mesh1}, the function grows near 0. Also, on page \pageref{fig:mesh1} is the same example. There are three commands that generate cross-references in this example: \label{fig:mesh1}, \ref{fig:mesh1} and \pageref{fig:mesh1}. The \caption command is mandatory to reference a figure. Another great characteristic of a LaTeX document is the ability to automatically generate a list of figures. This is straightforward with the \listoffigures command. It only works on captioned figures, since it uses the captions to build the table. The example above lists the images in this article. Important Note: When using cross-references your LaTeX project must be compiled twice, otherwise the references, the page references and the table of figures won't work. So far, while specifying the image file name in the \includegraphics command, we have omitted file extensions.
Omitting the extension is not strictly necessary, but it is often useful. If the file extension is omitted, LaTeX will search that directory for any supported image format, trying the various extensions in a default order (which can be modified). This is useful for switching between development and production environments. In a development environment (when the article/report/book is still in progress), it is desirable to use low-resolution versions of images (typically in .png format) for fast compilation of the preview. In the production environment (when the final version of the article/report/book is produced), it is desirable to include the high-resolution version of the images. This is accomplished with the \DeclareGraphicsExtensions command, which sets the extension search order. Thus, if we have two versions of an image, venndiagram.pdf (high-resolution) and venndiagram.png (low-resolution), then we can include the following line in the preamble to use the .png version while developing the report: \DeclareGraphicsExtensions{.png,.pdf} The command above will ensure that if two files are encountered with the same base name but different extensions (for example venndiagram.pdf and venndiagram.png), then the .png version will be used first, and in its absence the .pdf version will be used. This is also a good idea if some low-resolution versions are not available. Once the report has been developed, to use the high-resolution .pdf version, we can change the line in the preamble specifying the extension search order to \DeclareGraphicsExtensions{.pdf,.png} Improving on the technique described in the previous paragraphs, we can also instruct LaTeX to generate low-resolution .png versions of images on the fly while compiling the document if there is a PDF that has not been converted to PNG yet.
To achieve that, we can include the following in the preamble after \usepackage{graphicx}: \usepackage{epstopdf} \epstopdfDeclareGraphicsRule{.pdf}{png}{.png}{convert #1 \OutputFile} \DeclareGraphicsExtensions{.png,.pdf} If venndiagram2.pdf exists but not venndiagram2.png, the file venndiagram2-pdf-converted-to.png will be created and loaded in its place. The command convert #1 is responsible for the conversion and additional parameters may be passed between convert and #1, for example: convert -density 100 #1. There are some important things to have in mind though: since the conversion invokes an external program, the document has to be compiled with the --shell-escape option; and for the final high-resolution version we should remove the \epstopdfDeclareGraphicsRule, so that only high-resolution PDF files are loaded. We'll also need to change the order of precedence back.

LaTeX units and lengths:
pt: a point, the default length unit; about 0.3515 mm
mm: a millimetre
cm: a centimetre
in: an inch
ex: the height of an x in the current font
em: the width of an m in the current font
\columnsep: distance between columns
\columnwidth: width of the column
\linewidth: width of the line in the current environment
\paperwidth: width of the page
\paperheight: height of the page
\textwidth: width of the text
\textheight: height of the text
\unitlength: unit of length in the picture environment

About image types in LaTeX:
JPG: best choice if we want to insert photos
PNG: best choice if we want to insert diagrams (if a vector version could not be generated) and screenshots
PDF: even though we are used to seeing PDF documents, a PDF can also store images
EPS: EPS images can be included using the epstopdf package (we just need to install the package, we don't need to use \usepackage{} to include it in our document)

For more information see
Area and definite integrals

The actual definition of ‘integral’ is as a limit of sums, which might easily be viewed as having to do with area. One of the original issues integrals were intended to address was computation of area. First we need more notation. Suppose that we have a function $f$ whose integral is another function $F$: $$\int f(x)\;dx=F(x)+C$$ Let $a,b$ be two numbers. Then the definite integral of $f$ with limits $a,b$ is $$\int_a^b f(x)\;dx=F(b)-F(a)$$ The left-hand side of this equality is just notation for the definite integral. The use of the word ‘limit’ here has little to do with our earlier use of the word, and means something more like ‘boundary’, just like it does in more ordinary English. A similar notation is to write $$[g(x)]_a^b=g(b)-g(a)$$ for any function $g$. So we could also write $$\int_a^b f(x)\;dx=[F(x)]_a^b$$ For example, $$\int_0^5 x^2\;dx=\biggl[{x^3\over 3}\biggr]_0^5={5^3-0^3\over 3}={125\over 3}$$ As another example, $$\int_2^3 (3x+1) \;dx=\biggl[{3x^2\over 2}+x\biggr]_2^3 =\biggl({3\cdot 3^2\over 2}+3\biggr)-\biggl({3\cdot 2^2\over 2}+2\biggr)={17\over 2}$$ All the other integrals we had done previously would be called indefinite integrals since they didn't have ‘limits’ $a,b$. So a definite integral is just the difference of two values of the function given by an indefinite integral. That is, there is almost nothing new here except the idea of evaluating the function that we get by integrating. But now we can do something new: compute areas. For example, if a function $f$ is positive on an interval $[a,b]$, then $$\int_a^b f(x)\;dx = \hbox{ area between graph and $x$-axis, between $x=a$ and $x=b$}$$ It is important that the function be positive, or the result is false.
For example, since $y=x^2$ is certainly always positive (or at least non-negative, which is really enough), the area ‘under the curve’ (and, implicitly, above the $x$-axis) between $x=0$ and $x=1$ is just $$\int_0^1 x^2\;dx=\biggl[{x^3\over 3}\biggr]_0^1={1^3-0^3\over 3}={1\over 3}$$ More generally, the area below $y=f(x)$, above $y=g(x)$, and between $x=a$ and $x=b$ is \begin{align*}\hbox{ area }&=\int_a^b f(x)-g(x) \;dx\\&=\int_{\textit{left limit}}^{\textit{right limit}} (\text{upper curve - lower curve}) \;dx\end{align*} It is important that $f(x)\ge g(x)$ throughout the interval $[a,b]$. For example, the area below $y=e^x$ and above $y=x$, and between $x=0$ and $x=2$ is $$\int_0^2 e^x-x\;dx=\biggl[e^x-{x^2\over 2}\biggr]_0^2=(e^2-2)-(e^0-0)=e^2-3$$ since it really is true that $e^x\ge x$ on the interval $[0,2]$. A person might wonder whether, in general, it is so easy to tell which curve is above and which is below. The procedure to examine the situation is as follows: given two functions $f,g$, to find the intervals where $f(x)\le g(x)$ and vice versa: Find where the graphs cross by solving $f(x)=g(x)$ for $x$ to find the $x$-coordinates of the points of intersection. Between any two solutions $x_1,x_2$ of $f(x)=g(x)$ (and also to the left and right of the left-most and right-most solutions!), plug in one auxiliary point of your choosing to see which function is larger. Of course, this procedure works for a similar reason that the first derivative test for local minima and maxima worked: we implicitly assume that $f$ and $g$ are continuous, so if the graph of one is above the graph of the other, then the situation can't reverse itself without the graphs actually crossing. As an example, and as an example of a certain delicacy of wording, consider the problem of finding the area between $y=x$ and $y=x^2$ with $0\le x\le 2$. To find where $y=x$ and $y=x^2$ cross, solve $x=x^2$: we find solutions $x=0,1$.
In the present problem we don't care what is happening to the left of $0$. Plugging in the value $1/2$ as auxiliary point between $0$ and $1$, we get ${1\over 2}\ge ({1\over 2})^2$, so we see that in $[0,1]$ the curve $y=x$ is the higher. To the right of $1$ we plug in the auxiliary point $2$, obtaining $2^2\ge 2$, so the curve $y=x^2$ is higher there. Therefore, the area between the two curves has to be broken into two parts: $$\hbox{ area }=\int_0^1 (x-x^2)\; dx+\int_1^2 (x^2-x)\; dx$$ since we must always be integrating in the form $$\int_{\textit{left}}^{\textit{right}} \hbox{higher - lower}\;dx$$ In some cases the ‘side’ boundaries are redundant or only implied. For example, the question might be to find the area between the curves $y=2-x$ and $y=x^2$. What is implied here is that these two curves themselves enclose one or more finite pieces of area, without the need of any ‘side’ boundaries of the form $x=a$. First, we need to see where the two curves intersect, by solving $2-x=x^2$: the solutions are $x=-2,1$. So we infer that we are supposed to find the area from $x=-2$ to $x=1$, and that the two curves close up around this chunk of area without any need of assistance from vertical lines $x=a$. We need to find which curve is higher: plugging in the point $0$ between $-2$ and $1$, we see that $y=2-x$ is higher. Thus, the desired integral is $$\hbox{ area}=\int_{-2}^1 (2-x)-x^2 \; dx.$$ Exercises Find the area between the curves $y=x^2$ and $y=2x+3$. Find the area of the region bounded vertically by $y=x^2$ and $y=x+2$ and bounded horizontally by $x=-1$ and $x=3$. Find the area between the curves $y=x^2$ and $y=8+6x-x^2$.
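The two worked areas above can be double-checked numerically. Here is a small sketch using composite Simpson's rule (plain Python, no external libraries); since Simpson's rule is exact for polynomials of degree three or less, both results come out essentially exact:

```python
def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Area between y = x and y = x^2 on [0, 2], split at the crossing x = 1:
area1 = simpson(lambda x: x - x**2, 0, 1) + simpson(lambda x: x**2 - x, 1, 2)
print(area1)  # 1/6 + 5/6 = 1

# Area enclosed by y = 2 - x and y = x^2 (curves cross at x = -2 and x = 1):
area2 = simpson(lambda x: (2 - x) - x**2, -2, 1)
print(area2)  # 9/2
```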
Will a disc or cylinder (rigid body) executing pure rolling on a rough surface stop, neglecting air drag and other heat losses and rolling friction but not static and kinetic friction? If yes, due to which friction will it stop, static or kinetic, and how? Assume the surface has no rolling friction. As Yashas Samaga said, it will not stop on a smooth, but frictional surface. It will stop however on an actual rough surface (as it does in reality – e.g. a steel marble rolling on a rough stone surface will come to a halt quite quickly, although drag / rolling friction is as low as on a smooth glass plate, where the marble would indeed roll very far). The reason is that a rough surface can in general not be continually tangent to the rolling body. Instead, if the object has rolled over a peak, it will not smoothly traverse the following trough but slightly collide with the next peak. If there's no rolling friction, then the collision will (ideally) be perfectly elastic, i.e. the cylinder will bounce off. When it hits the surface again, the vertical kinetic energy will generally not be fully reclaimed as movement in the original direction. In fact, while it still has some velocity in that direction, it will statistically more likely clash with yet another opposing front of the profile, thus losing yet more momentum. So, I reckon ideally this would eventually lead to a random-walk kind of motion.
In reality, this doesn't happen because the collisions are scarcely ever sufficiently elastic – actually a good amount of kinetic energy is lost right when the roller hits the next peak. Assumptions made in this answer: By rough surface, you meant a flat surface which has friction. The cylinder/sphere/disc/etc. are ideal; they do not deform. This is my reasonable guess; I am aware of the terminology used in Indian high school textbooks and exams (I am from India too) but you should still edit your question and make it clear. When a perfect/ideal cylinder (or a sphere, disc, ring, etc.) pure rolls, the velocity of the lowermost point is zero (the condition for pure rolling). As the relative velocity between the surfaces at the point of contact is zero, there is no "kinetic" friction (if there is no external force, there will be zero static friction). Therefore, the cylinder will continue to roll forever in your case. Bonus: The cylinder will continue to roll forever unless it is acted upon by an external unbalanced force. There are situations where you can accelerate the object while pure rolling. One situation where this happens is shown in the figure below: Let $f$ be the frictional force. Let $F$ be the external force ($\le f_{max} = \mu N$). The condition for an object pure rolling is: $$v_{com} = \omega R \tag{1}$$ The translational velocity of the lowermost point cancels out the rotational motion of the lowermost point completely. Differentiating equation $(1)$ with respect to time, you get: $$a_{com} = \alpha R \tag{2}$$ The translational acceleration can be compensated by the angular acceleration such that as the translational velocity increases (or decreases), the angular velocity also increases (or decreases) to ensure that condition $(1)$ is satisfied. In this case, there is no kinetic friction as the contact surfaces are still at rest.
However, static friction acts (had it not, there would be relative motion, as $v$ would change without affecting the value of $\omega$, which would cause $(1)$ to fail). The net force ($F_{net}$) and torque ($\tau_{net}$) can be calculated as follows: $$F_{net} = ma = F - f \tag{3}$$ $$\tau_{net} = I\alpha = -fR \tag{4}$$ You have three equations (equations $(2)$, $(3)$ and $(4)$) and three unknowns ($f$, $a$ and $\alpha$). You can solve for $a$ and $\alpha$. From these values, you can calculate the time it takes for the body to stop rolling. If both the cylinder and the surface are perfectly rigid, then yes, it will roll forever, at least until it encounters a bump that its kinetic energy is not sufficient to overcome. But if the surface can deform, not all of the energy used to deform it will be returned to the cylinder. Some of it will propagate away from the point of contact in the form of sound waves, never to be seen again. The cylinder will lose energy and slowly slow down. Eventually it will encounter a bump that its remaining energy can't carry it over, and it will come to a stop. Similarly, if the cylinder itself can deform, it will be building up internal vibrations from the roughness as well. This will also subtract from the rolling energy. Imagine an ideal (incompressible and with an ideally flat surface) cylinder (radius $R$) lying on an infinite (to avoid momentum transfer) ideally flat and incompressible plane, to avoid elastic energy transfer. Applying two equally big but opposite forces $\vec F$, perpendicular to the central axis of the cylinder, on opposite sides of the cylinder will produce a torque $\vec{\tau}=2\vec F\times\vec R$ along the central axis of the cylinder. This causes the cylinder to spin without imparting a linear velocity to it. Once it spins, imagine the cylinder and the plane it lies on getting rougher and rougher, so that at some point (in time) the cylinder starts to roll with only static friction.
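For a concrete solid cylinder ($I = \frac{1}{2}mR^2$), equations $(2)$–$(4)$ can be solved by simple substitution. The following sketch (plain Python; the numerical values m = 1, R = 1, F = 1 are my own illustrative choices, not from the question) carries out that algebra:

```python
# Pure-rolling dynamics from equations (2)-(4) of the answer above:
#   (2) a = alpha * R
#   (3) m*a = F - f
#   (4) I*alpha = -f*R
m, R, F = 1.0, 1.0, 1.0          # illustrative values, not from the question
I = 0.5 * m * R**2               # moment of inertia of a solid cylinder

# Substitute (2) into (4): f = -I*a/R**2, then plug into (3):
#   m*a = F + I*a/R**2  =>  a*(m - I/R**2) = F
a = F / (m - I / R**2)
alpha = a / R
f = -I * alpha / R

print(a, alpha, f)  # 2.0 2.0 -1.0

# Sanity check: all three original equations are satisfied
assert abs(a - alpha * R) < 1e-12
assert abs(m * a - (F - f)) < 1e-12
assert abs(I * alpha + f * R) < 1e-12
```

The negative value of f simply means the static friction points opposite to the direction assumed in the answer's figure.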
The rough, irregularly shaped surfaces can stay ideally flat overall as the cylinder starts to roll. If we zoom in on the point of contact there isn't one point of contact. There may be all sorts of distortions of the plane and the cylinder which can cause it to roll (despite the "flat roughness"). Once it rolls, some of those distortions are such that they can be broken by the rolling cylinder (incompressibility and ideal flatness don't imply unbreakability), which obviously takes energy away from it, so eventually, after first starting to move, the cylinder stops moving. The cylinder also loses energy by emitting e.m. radiation (very, very little, though) and gravitational radiation (very, very, very little) because of its rotating motion.
I seem to be misunderstanding a claim about linear regression methods that I've seen in various places. The parameters of the problem are: Input: $N$ data samples of $p+1$ quantities each consisting of a "response" quantity $y_i$ and $p$ "predictor" quantities $x_{ij}$ The result desired is a "good linear fit" which predicts the response based on the predictors where a good fit has small differences between the prediction and the observed response (among other criteria). Output: $p+1$ coefficients $\beta_j$ where $\beta_0 + \sum_{j=1}^p x_{ij} * \beta_j$ is a "good fit" for predicting the response quantity from the predictor quantities. I'm confused about the "ridge regression" approach to this problem. In "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman page 63 ridge regression is formulated in two ways. First as the constrained optimization problem: $$ {argmin}_\beta \sum_{i=1}^N { ( y_i - (\beta_0 + \sum_{j=1}^p (x_{ij} * \beta_j)) )^2 } $$ subject to the constraint $$ \sum_{j=1}^p \beta_j^2 \leq t $$ for some positive parameter t. Second is the penalized optimization problem:$${argmin}_\beta ( \lambda \sum_{j=1}^p { \beta_j^2 } ) +\sum_{i=1}^N { ( y_i - (\beta_0 + \sum_{j=1}^p (x_{ij} * \beta_j)) )^2}$$for some positive parameter $\lambda$. The text says that these formulations are equivalent and that there is a "one to one correspondence between the parameters $\lambda$ and $t$". I've seen this claim (and similar ones) in several places in addition to this book. I think I am missing something because I don't see how the formulations are equivalent as I understand it. Consider the case where $N=2$ and $p=1$ with $y_1=0$, $x_{1,1}=0$ and $y_2=1$, $x_{2,1}=1$.
Choosing the parameter $t=2$ the constrained formulation becomes: $$ {argmin}_{\beta_0,\beta_1} ( \beta_0^2 + (1 - (\beta_0 + \beta_1))^2 ) $$ expanded to $$ {argmin}_{\beta_0,\beta_1} ( 2 \beta_{0}^{2} + 2 \beta_{0} \beta_{1} - 2 \beta_{0} + \beta_{1}^{2} - 2 \beta_{1} + 1 ) $$ To solve this find the solution where the partial derivatives with respect to $\beta_0$ and $\beta_1$ are zero: $$ 4 \beta_{0} + 2 \beta_{1} - 2 = 0 $$ $$ 2 \beta_{0} + 2 \beta_{1} - 2 = 0 $$ with solution $\beta_0 = 0$ and $\beta_1 = 1$. Note that $\beta_0^2 + \beta_1^2 \le t$ as required. How does this derivation relate to the other formulation? According to the explanation there is some value of $\lambda$ uniquely corresponding to $t$ where if we optimize the penalized formulation of the problem we will derive the same $\beta_0$ and $\beta_1$. In this case the penalized form becomes $$ {argmin}_{\beta_0,\beta_1} ( \lambda (\beta_0^2 + \beta_1^2) + \beta_0^2 + (1 - (\beta_0 + \beta_1))^2 ) $$ expanded to $$ {argmin}_{\beta_0,\beta_1} ( \beta_{0}^{2} \lambda + 2 \beta_{0}^{2} + 2 \beta_{0} \beta_{1} - 2 \beta_{0} + \beta_{1}^{2} \lambda + \beta_{1}^{2} - 2 \beta_{1} + 1 ) $$ To solve this find the solution where the partial derivatives with respect to $\beta_0$ and $\beta_1$ are zero: $$ 2 \beta_{0} \lambda + 4 \beta_{0} + 2 \beta_{1} - 2 = 0 $$ $$ 2 \beta_{0} + 2 \beta_{1} \lambda + 2 \beta_{1} - 2 = 0 $$ for these equations I get the solution $$ \beta_0 = \lambda/(\lambda^2 + 3\lambda + 1) $$ $$ \beta_1 = (\lambda + 1)/((\lambda + 1)(\lambda + 2) - 1) $$ If that is correct the only way to get $\beta_0 = 0$ is to set $\lambda = 0$. However that would be the same $\lambda$ we would need for $t = 4$, so what do they mean by "one to one correspondence"? In summary I'm totally confused by the two presentations and I don't understand how they correspond to each other. 
I don't understand how you can optimize one form and get the same solution for the other form or how $\lambda$ is related to $t$. This is just one instance of this kind of correspondence -- there are others for other approaches such as lasso -- and I don't understand any of them. Someone please help me.
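One way to see the correspondence concretely is to sweep $\lambda$, use the closed-form penalized solution derived in the question, and record $t(\lambda) = \beta_0^2 + \beta_1^2$: each $\lambda > 0$ then matches the constrained problem whose constraint is active at exactly that $t$. A sketch in plain Python for the toy data above (the formulas are the ones derived in the question):

```python
def penalized_solution(lam):
    """Closed-form minimizer of the penalized problem for the toy data
    (y1, x1) = (0, 0), (y2, x2) = (1, 1), with the intercept penalized too,
    as derived in the question."""
    b0 = lam / (lam**2 + 3 * lam + 1)
    b1 = (lam + 1) / ((lam + 1) * (lam + 2) - 1)
    return b0, b1

# Check the stationarity conditions from the question at a sample lambda:
lam = 0.5
b0, b1 = penalized_solution(lam)
assert abs(2 * b0 * lam + 4 * b0 + 2 * b1 - 2) < 1e-12
assert abs(2 * b0 + 2 * b1 * lam + 2 * b1 - 2) < 1e-12

# As lambda decreases, t(lambda) = b0^2 + b1^2 grows toward the
# unconstrained value 1; each t < 1 is hit by exactly one lambda > 0.
for lam in (2.0, 1.0, 0.5, 0.1, 0.01):
    b0, b1 = penalized_solution(lam)
    print(lam, b0**2 + b1**2)
```

This also suggests where the puzzle in the question comes from: with $t = 2$ the unconstrained optimum already satisfies $\beta_0^2 + \beta_1^2 = 1 \le 2$, so the constraint is inactive and the matching penalty parameter is $\lambda = 0$. The one-to-one map between $\lambda$ and $t$ only holds where the constraint binds (here, $t < 1$). Note also that in the book's formulation the intercept $\beta_0$ is left unpenalized, which changes the algebra slightly.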
Schaefer, J., Sigeneger, F., Sperka, J. et al. (2 more authors) (2017) Searching for order in atmospheric pressure plasma jets. Plasma Physics and Controlled Fusion, 60 (1). 014038. ISSN 0741-3335 Abstract The self-organized discharge behaviour occurring in a non-thermal radio-frequency plasma jet in rare gases at atmospheric pressure was investigated. The frequency of the azimuthal rotation of filaments in the active plasma volume and their inclination were measured along with the gas temperature under varying discharge conditions. The gas flow and heating were described theoretically by a three-dimensional hydrodynamic model. The rotation frequencies obtained by both methods qualitatively agree. The results demonstrate that the plasma filaments forming an inclination angle α with the axial gas velocity u z are forced to a transversal movement with the velocity ${u}_{\phi }=\tan (\alpha )\cdot {u}_{z}$, which is oriented in the inclination direction. Variations of ${u}_{\phi }$ in the model reveal that the observed dynamics minimizes the energy loss due to convective heat transfer by the gas flow. The control of the self-organization regime motivates the application of the plasma jet for precise and reproducible material processing. Copyright, Publisher and Additional Information: © 2017 IOP Publishing Ltd. Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence (https://creativecommons.org/licenses/by/3.0/). Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Keywords: plasma jet; self-organization; hydrodynamic simulation Institution: The University of Sheffield Published Version: https://doi.org/10.1088/1361-6587/aa8f14 Status: Published Publisher: IOP Publishing Refereed: Yes
I have seen at several places, incl. some notes and books, the following inference of the Holomorphic Implicit Function Theorem from the Smooth Real Function Theorem, but I believe this proof to be incorrect or rather incomplete in that it seems to be missing a non-obvious key step. I would like to know how to complete the proof if possible at all. Please note that there are of course other proofs of the Holomorphic Implicit Function Theorem that work, but my question is not about them, but about fixing this one. So, let me illustrate what I have in mind in the case of 2 complex variables: Hol. Impl. Funct. Thm. in 2 var.-s: Let $U,V \subseteq \mathbb{C}$ be open subsets and let $f:U \times V \to \mathbb{C}$ be a holomorphic function. Let $(z_0,w_0) \in U \times V$ be a point such that $f(z_0,w_0) = 0$ and $\frac{\partial f}{\partial w}(z_0,w_0) \neq 0$.Then $z_0$ has an open neighbourhood $\widetilde{U} \subseteq U$ such that there exists a holomorphic function $g: \widetilde{U} \to \mathbb{C}$ with the property $g(z_0)=w_0$ and $\forall z\in \widetilde{U}: f(z,g(z)) = 0$. Proof: By the Real Smooth Implicit Function Theorem there exist an open neighbourhood $\widetilde{U}\ni z_0$, $\widetilde{U}\subseteq U$, and a smooth $g:\widetilde{U} \to \mathbb{C}$ such that $\forall z \in \widetilde{U}: f(z,g(z))=0$ as smooth functions.Thus we only need to show that $g$ is holomorphic in $\widetilde{U}$.Since $f$ is holomorphic in both variables, one computes$$0 = \frac{\partial}{\partial\bar{z}} f(z,g(z)) =\frac{\partial f}{\partial w}(z,g(z)) \frac{\partial g}{\partial\bar{z}},$$hence at $(z_0,w_0)$$$0 = \frac{\partial f}{\partial w}(z_0,w_0) \frac{\partial g}{\partial\bar{z}}(z_0),$$from where it follows that $\frac{\partial g}{\partial\bar{z}}(z_0)=0$ since $\frac{\partial f}{\partial w}(z_0,w_0) \neq 0$ by hypothesis. 
$\Box$ The problem: this only shows that $g$ is complex-differentiable at the point $z_0\in\widetilde{U}$ rather than in all of $\widetilde{U}$, and none of the proofs I have seen actually justifies why the reasoning should extend to the whole neighbourhood. Attempt to rectify the problem: by continuity of $\frac{\partial f}{\partial w}$ there are neighbourhoods $U'\ni z_0$ and $V'\ni w_0$ such that $\forall (z,w)\in U'\times V': \frac{\partial f}{\partial w}(z,w)\neq 0$.So we can take $\widetilde{U}\cap U'$ instead, but this does not suffice because we only know that $g(\widetilde{U}\cap U')\cap V' \ni w_0$, so we don't actually have that$$\forall z\in \widetilde{U}\cap U': \frac{\partial f}{\partial w}(z,g(z))\neq 0.$$ Is there a way to salvage this proof without resorting to a completely different proof strategy? (For example, a completely different strategy would be to invoke the Holomorphic Inverse Function Theorem.) Feel free to add or remove tags as you see fit.
Is Lebesgue's integral the only definition of integral for which the $L^1$ space is a complete vector space (for a natural norm on it)? I'm mostly interested in integrals of functions $f:\Bbb R^n \to \Bbb R$, and where the integral is an explicit operator (it must have a "practical utility", and we should be able to compute some integrals with it). An integral $I$ must verify: $I(\lambda f+g) = \lambda I(f)+I(g)$ (linear) $I(f)\leq I(g)$ if $f \leq g$ $I(f) = \int_{\Bbb R^n} f(x) dx$ for every continuous function (and $\int$ is the Riemann integral) Thanks
Solve the equation: $$2^x=1-x$$ I know this is extremely easy and I know the solution using a graphical approach. Basically, I can see the solution, but I can't work it out algebraically. A different perspective on an algebraic solution: you know that for negative $x$, $2^x\lt 1$ but $1-x \gt 1$ (since the latter is $1+(-x)$ and $-x \gt 0$); contrariwise, for positive $x$, $2^x\gt 1$ but $1-x \lt 1$. This means that the only possibility for a solution is $x=0$, and of course by quick algebra that does in fact work. By inspection $0$ is a solution. As the left side is increasing with $x$ and the right decreasing, that is the only solution. Equations that mix exponentials and polynomials usually need the Lambert W function for a "closed form" solution. The solution is $1-W(2\log(2))/\log(2) = 0$, using the Lambert W function. [Added: Mhenni's answer shows the steps to derive this.] And for the non-real solutions, use the other branches of the Lambert W function. $$\begin{align} &\dots\\&5.430858450 + 42.90897219 i\\ &5.090239758 + 33.81905797 i\\ &4.642925846 + 24.71686730 i\\ &3.988583083 + 15.59001288 i\\ &2.732900763 + 6.418080468 i\\ &0.000000000 + 0.000000000 i\\ &2.732900763 - 6.418080468 i\\ &3.988583083 - 15.59001288 i\\ &4.642925846 - 24.71686730 i\\ &5.090239758 - 33.81905797 i\\ &5.430858450 - 42.90897219 i\\ &\dots \end{align} $$ Let $ 1-x = y $, then we have $$ 2^x = 1-x \Rightarrow 2^{1-y} = y \Rightarrow y \,2^y = 2 \Rightarrow y\,{\rm e}^{y\, \ln(2)}=2 \Rightarrow \frac{z}{\ln(2)} {\rm e}^{z} = 2 \Rightarrow z {\rm e}^{z}= 2\,\ln(2)$$ $$ \Rightarrow z = W( 2 \,\ln (2) ) \Rightarrow y = \frac{W(2\,\ln(2))}{\ln(2)} \Rightarrow 1-x=\frac{W(2\,\ln(2))}{\ln(2)} \Rightarrow x = 1-\frac{W(2\,\ln(2))}{\ln(2)} = 0\,, $$ since $W(2\,\ln(2))=\ln(2)$.
More on Lambert W function. Here is a method for finding complex roots that should be understandable to those of us who didn't learn the Lambert W function at our father's knee. $2^{i\phi}=e^{i\phi\ln 2}$ is on the unit circle in the complex plane. Say we put in some fairly large value of $\phi$. Then $1-x=1-i\phi$ is going to be pretty close to the negative imaginary axis. To make the l.h.s. and r.h.s. of the equation have about the same complex phase, we can just pick a value of $\phi$ such that $2^{i\phi}$ is on the imaginary axis. Suppose we try $\phi=(3\pi/2+10\pi)/\ln 2\approx 52.1$. Now the two sides of the equation match pretty well, except that their magnitudes are mismatched. To fix this, tack on $\ln 52/\ln 2\approx 5.7$ as a real part, giving $x=5.7+52.1i$. This is pretty nearly a solution. Now play around with the real and imaginary parts to minimize the error, and you can converge to a pretty good numerical approximation, about $5.7061+51.99191i$. By replacing the $10\pi$ with other multiples of $2\pi$, it should be clear that you can get as many solutions as you want.
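The hand-tuned approximation above can be polished with a few steps of Newton's method on $f(z)=2^z+z-1$, whose roots solve $2^z = 1-z$. A sketch in plain Python (the starting point $5.7+52.1i$ is the rough estimate from the answer):

```python
import cmath

LN2 = cmath.log(2)

def f(z):
    # roots of f solve 2^z = 1 - z
    return cmath.exp(LN2 * z) + z - 1

def fprime(z):
    return LN2 * cmath.exp(LN2 * z) + 1

def newton(z, steps=50):
    """Plain Newton iteration in the complex plane."""
    for _ in range(steps):
        z = z - f(z) / fprime(z)
    return z

root = newton(5.7 + 52.1j)
print(root)          # refines the rough estimate 5.7 + 52.1i

real_root = newton(0.5)
print(real_root)     # converges to 0, the only real solution
```

Seeding Newton with starting points near other branches (e.g. replacing the $10\pi$ in the answer's construction with other multiples of $2\pi$) picks out the other complex solutions.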
Given a set of points $(x_i, y_i)$, least-squares linear regression finds the linear function $L$ such that $$\sum \varepsilon(y_i, L(x_i))$$ is minimized, where $\varepsilon(y, y') = (y-y')^2$ is the squared error between the actual $y_i$ value and the one predicted by $L$ from $x_i$. Suppose I want to do the same, but I want to use the following penalty function in place of $\varepsilon$: $$ \epsilon(y, L(x)) = \cases{y-L(x) & \text{if $y>L(x)+1$} \\ (y-L(x))^2 & \text{if $y\le L(x)+1$}}$$ If the fitted line passes a certain distance below the actual data point, I want to penalize it much less severely than if the line passes the same distance above the data point. (The threshold for using the cheaper penalty function is $y > L(x)+1$ rather than $y > L(x)$ so that overshooting by some amount $\delta<1$ is not penalized less than undershooting by the same amount—we don't want to overpenalize $L$ for undershooting by too little!) I could probably write a computer program to solve this numerically, by using a hill-climbing algorithm or something similar. But I would like to know if there are any analytic approaches. If I try to follow the approach that works for the usual penalty function, I get stuck early on, because there seems to be no way to expand the expression $\epsilon(y_i, mx_i+b)$ algebraically. I expect that the problem has been studied with different penalty functions, and I would be glad for a reference to a good textbook.
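Lacking a closed form, the total penalty can still be minimized numerically. Below is a self-contained sketch in plain Python (a coarse-to-fine grid search over slope m and intercept b; all names are mine, and a library optimizer would do the same job). Note the penalty has a kink at residual $r = 1$ where the slope drops from $2$ to $1$, so it is not convex there and a purely local method may want restarts:

```python
def penalty(y, pred):
    """The asymmetric penalty from the question."""
    r = y - pred
    return r if r > 1 else r * r   # cheap linear cost for undershooting by > 1

def total_loss(m, b, pts):
    return sum(penalty(y, m * x + b) for x, y in pts)

def fit(pts, center=(0.0, 0.0), span=4.0, rounds=40, grid=9):
    """Coarse-to-fine grid search: repeatedly scan a grid around the current
    best (m, b), then halve the search window."""
    m0, b0 = center
    for _ in range(rounds):
        m0, b0 = min(
            ((m0 + span * (i / (grid - 1) - 0.5),
              b0 + span * (j / (grid - 1) - 0.5))
             for i in range(grid) for j in range(grid)),
            key=lambda mb: total_loss(mb[0], mb[1], pts),
        )
        span *= 0.5
    return m0, b0

# Example: points exactly on y = 2x + 1 must be recovered (zero loss there)
pts = [(0, 1), (1, 3), (2, 5), (3, 7)]
m, b = fit(pts)
print(m, b)
```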
Let $T$ be a bounded operator on a Hilbert space with the property that $T^*(T-I)= 0$. I'd like to show that $T$ is an orthogonal projection. I'm not really sure how to show that an operator is an orthogonal projection. If $A:X\rightarrow U$ is a projection onto a closed subspace $U$ of $X$, then $\langle x- Ax, u_i\rangle = 0 \;\;\forall u_i \in U$? Expanding we get $T^*T = T^* \Longleftrightarrow TT^* = T$ ($T^{**} = T$ in Hilbert spaces?) $$Tx = T^*(Tx) \Longleftrightarrow y = T^*y$$ Hence every vector $y$ of the form $Tx$ is fixed by $T^*$, i.e. an eigenvector with $\lambda = 1$; do $T^*$ and $T$ have the same eigenvalues? There are a lot of question marks here.
The Lazy Laser Physicist is quite shocked about what happened to his setup over night: Almost all of his mirrors were taken away, but some other things have been put on his table. Besides these there is also a message: Your setup was a total mess! I don't know how you could work there, but weren't all these unused mirrors totally in the way? It looked like you had one setup and then changed everything to a different path with possibly the minimal work. I hope you are aware of the depolarization if you use the mirrors at an angle of incidence other than 45°? Here are some things which you can use to reassemble your setup. Because I know you're lazy it also includes a variable phase shifter to change your beam path without moving any mirror. Best wishes Your supervisor Well then, let's have a look what we have: 7 mirrors, which have a reflective coating on one side (blue) and must be used with light incident at a 45° angle. 3 50:50 beamsplitters, which reflect half of the incident light's intensity and transmit the other half. 1 variable phase shifter, which multiplies a phase factor $e^{i \phi}$ onto the transmitted electric field (see below). Unfortunately this phase shifter can only be tuned in the range $\phi \in \left[ -\frac{\pi}{8}, \frac{\pi}{8} \right]$. And it's so narrow that only one beam fits through it. Physics interlude Interference It is known that light can be described as waves of the electric field. When two laser beams are superimposed, not the intensities but the electric fields are added up: $E = E_1 + E_2$. Depending on the relative phase this can lead to constructive $$ E_1 = E_2 \qquad \Rightarrow \qquad E_1 + E_2 = 2E_1 $$ or destructive interference $$ E_1 = -E_2 \qquad \Rightarrow \qquad E_1 + E_2 = 0 $$ or anything in between as $E_1, E_2 \in \mathbb{C}$. Phase change The phase of the electric field changes in the following ways: Propagation through space.
After a distance $L$ the electric field changes from $E$ to $E e^{i \cdot 2\pi \frac{L}{\lambda}}$. To make it easy we set the wavelength $\lambda$ to the length of one grid square. Reflection. When light is reflected from a mirror or beamsplitter it accumulates the phase $e^{i \frac{\pi}{2}} = i$. And of course the phase-shifter. When the light passes through the phase shifter it accumulates a phase of $e^{i \cdot 2\pi} = 1$ due to the length of one grid square, but the phase shifter can imprint an additional phase of $e^{i \phi}$ with $\phi \in \left[ -\frac{\pi}{8}, \frac{\pi}{8} \right]$ onto the electric field. Beamsplitter To clarify the action of the beamsplitter: Imagine 2 beams with $E_{\text{in, 1}}$ and $E_{\text{in, 2}}$ impinging on a beamsplitter in the following way: Splitting the intensity in half means the electric field amplitude is divided by $\sqrt{2}$, as intensity is proportional to the square of the electric field. Hence, $E_{\text{in, 1}}$ contributes $\frac{1}{\sqrt{2}} E_{\text{in, 1}}$ to $E_{\text{out, 1}}$ and $\frac{i}{\sqrt{2}} E_{\text{in, 1}}$ to $E_{\text{out, 2}}$. In an analogous way one gets the contributions from $E_{\text{in, 2}}$, so that in total: $$ E_{\text{out, 1}} = \frac{1}{\sqrt{2}} \left( E_{\text{in, 1}} + i E_{\text{in, 2}} \right) \\ E_{\text{out, 2}} = \frac{1}{\sqrt{2}} \left( i E_{\text{in, 1}} + E_{\text{in, 2}} \right) $$ What happens if only one beam is impinging on the beamsplitter can be extracted if one of the incident fields is set to $0$. Finally the question How does the physicist need to arrange the elements to direct the full intensity of the beam onto either detector A or B being able to switch between these two states by only changing the state of the phase-shifter?
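The beamsplitter relations above can be checked with a few lines of complex arithmetic. The sketch below (plain Python; the single-input, two-arm geometry and its idealizations — equal arm lengths and ignored mirror phases — are my own simplifications, not the puzzle's actual layout) verifies that the transformation conserves total intensity and that recombining the two arms with a relative phase $\phi$ steers light between the output ports:

```python
import cmath
import math

def beamsplitter(e1, e2):
    """50:50 beamsplitter from the puzzle: each reflection picks up a factor i."""
    s = 1 / math.sqrt(2)
    return s * (e1 + 1j * e2), s * (1j * e1 + e2)

def recombine(phi, e_in=1.0):
    """Split, phase-shift one arm by phi, recombine on a second beamsplitter.
    Idealized: equal arm lengths, mirror phases omitted."""
    a, b = beamsplitter(e_in, 0.0)       # only one input port is used
    a *= cmath.exp(1j * phi)             # phase shifter sits in arm a
    return beamsplitter(a, b)

for phi in (0.0, math.pi / 8, math.pi / 2, math.pi):
    o1, o2 = recombine(phi)
    i1, i2 = abs(o1) ** 2, abs(o2) ** 2
    print(f"phi={phi:5.3f}  I1={i1:.4f}  I2={i2:.4f}")
    assert abs(i1 + i2 - 1.0) < 1e-12    # intensity is conserved
```

In this simplified geometry the split is $I_1 = \sin^2(\phi/2)$, $I_2 = \cos^2(\phi/2)$, so the limited range $\phi \in [-\pi/8, \pi/8]$ can move at most $\sin^2(\pi/16)$ of the intensity into the other port — a hint at why the puzzle needs a cleverer arrangement.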
Considering the problem: There were n couples attending a ceremony. Out of the 2n people, exactly m of them ate pizza, chosen randomly over all subsets of 2n people. Let X be the number of couples that didn't eat pizza. Find an explicit expression for the mean of X. There seems to be an easy and correct solution by taking an indicator RV for each couple (whose expectation is easily calculated), and summing them all over the n couples. However I took a different approach by using the law of iterated expectation as follows: Let X be the number of couples that didn't eat pizza, and Y be the number of the men that had pizza, noting that $Y$ is a binomial random variable with $p=\frac m{2n}$. We thus are going to use the law of iterated expectations to calculate the mean of X: \begin{align*} \mathbf E[X] &= \mathbf E[\:\mathbf E[X|Y]\:] \\ \mathbf E[Y] &= m/2 \\\mathbf E[Y^2] &= \mathbf {var}[Y]+\mathbf E[Y]^2 = m/2(1-m/2n)+m^2/4\end{align*} To find $\mathbf E[X|Y]$, we need to determine the expected number of women that didn't eat pizza while their husbands are among the presumed group of men that didn't have pizza. We assume $y$ men had pizza. There are $n$ women, each with probability of $\frac{n-y}{n}$ to be the wife of a man that didn't eat pizza. Also, independently, this woman won't have a pizza with a probability of $1 -\frac{m-y}{n}$. Thus the probability of a couple not having pizza equals: $$ p=(1-\frac{y}{n})\cdot (1-\frac{m-y}{n}) $$ Thus, we have a binomial random variable with the calculated p, hence the mean is: $$ \mathbf E[X|Y] = np = \frac{(n-Y)(n-m+Y)}{n} $$ \begin{align*} \mathbf E[X] &= \mathbf E[\:\mathbf E[X|Y]\:] \\&=\mathbf E\left[\:\frac{(n-Y)(n-m+Y)}{n}\:\right] \\&= \frac 1n\left(n^2-mn+m\mathbf E[Y]-\mathbf E[Y^2]\right) \\&=n-m+m^2/(4n)+m^2/(4n^2)-m/(2n) \end{align*} which does not lead to the correct answer, if there's no mistake in calculation parts I hope. :) Are the steps taken correct, apart from calculation parts? 
Would you please tell me where the fallacy is, if you see any? PS. I believe that my assumption of treating X|Y as a binomial with the calculated p is not correct, since it allows, with positive probability, that none of the women are wives of the presumed group of men! So can any alternative assumptions be made to yield the correct expectation?
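(Not part of the original question.) A quick numerical cross-check of the indicator-RV answer mentioned above (each couple escapes the pizza group with probability $\binom{2n-2}{m}/\binom{2n}{m}$) can be sketched in Python; the function names and trial counts are my own:

```python
import random
from math import comb

def exact_mean(n, m):
    """E[X] via indicators: a fixed couple is disjoint from the m
    pizza-eaters with probability C(2n-2, m) / C(2n, m)."""
    return n * comb(2 * n - 2, m) / comb(2 * n, m)

def simulated_mean(n, m, trials=20000, seed=0):
    """Monte Carlo estimate: persons 2i and 2i+1 form couple i."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        eaters = set(rng.sample(range(2 * n), m))
        total += sum(1 for i in range(n)
                     if 2 * i not in eaters and 2 * i + 1 not in eaters)
    return total / trials
```

For n = 5, m = 4 the exact value is 5 · C(8,4)/C(10,4) = 5/3, and the simulation lands close to it, which is a useful target for any corrected iterated-expectation derivation.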
A lot of the stuff I've written in the past couple of years has been on math.StackExchange. Some of it is pretty mundane, but some is interesting. My summary of April's interesting posts was well-received, so here are the noteworthy posts I made in May 2015. What matrix transforms !!(1,0)!! into !!(2,6)!! and transforms !!(0,1)!! into !!(4,8)!!? was a little funny because the answer is $$\begin{pmatrix}2 & 4 \\ 6 & 8\end{pmatrix}$$ and yeah, it works exactly like it appears to, there's no trick. But if I just told the guy that, he might feel unnecessarily foolish. I gave him a method for solving the problem and figured that when he saw what answer he came up with, he might learn the thing that the exercise was designed to teach him. Is a “network topology” a topological space? is interesting because several people showed up right away to say no, it is an abuse of terminology, and that network topology really has nothing to do with mathematical topology. Most of those comments have since been deleted. My answer was essentially: it is topological, because just as in mathematical topology you care about which computers are connected to which, and not about where any of the computers actually are. Nobody constructing a token ring network thinks that it has to be a geometrically circular ring. No, it only has to be a topologically circular ring. A square is fine; so is a triangle; topologically they are equivalent, both in networking and in mathematics. The wires can cross, as long as they don't connect at the crossings. But if you use something that isn't topologically a ring, like say a line or a star or a tree, the network doesn't work. The term “topological” is a little funny. “Topos” means “place” (like in “topography” or “toponym”) but in topology you don't care about places. Is there a standard term for this generalization of the Euler totient function? was asked by me. 
I don't include all my answers in these posts, but I think maybe I should have a policy of including all my questions. This one concerned a simple concept from number theory which I was surprised had no name: I wanted !!\phi_k(n)!! to be the number of integers !!m!! that are no larger than !!n!! for which !!\gcd(m,n) = k!!. For !!k=1!! this is the famous Euler totient function, written !!\varphi(n)!!. But then I realized that the reason it has no name is that it's simply !!\phi_k(n) = \varphi\left(\frac n k\right)!! so there's no need for a name or a special notation. As often happens, I found the answer myself shortly after I asked the question. I wonder if the reason for this is that my time to come up with the answer is Poisson-distributed. Then if I set a time threshold for how long I'll work on the problem before asking about it, I am likely to find the answer to almost any question that exceeds the threshold shortly after I exceed the threshold. But if I set the threshold higher, this would still be true, so there is no way to win this particular game. Good feature of this theory: I am off the hook for asking questions I could have answered myself. Bad feature: no real empirical support. How many ways can you divide 24 people into groups of two? displays a few oddities, and I think I didn't understand what was going on at that time. OP has calculated the first few special cases: 1:1 2:1 3:3 4:3 5:12 6:15 which I think means that there is one way to divide 2 people into groups of 2, 3 ways to divide 4 people, and 15 ways to divide 6 people. This is all correct! But what could the 1:1, 3:3, 5:12 terms mean? You simply can't divide 5 people into groups of 2. Well, maybe OP was counting the extra odd person left over as a sort of group on their own? Then odd values would be correct; I didn't appreciate this at the time. But having calculated 6 special cases correctly, why can't OP calculate the seventh? 
Perhaps they were using brute force: the next value is 48, hard to brute-force correctly if you don't have enough experience with combinatorics. I tried to suggest a general strategy: look at special cases, and not by brute force, but try to analyze them so that you can come up with a method for solving them. The method is unnecessary for the small cases, where brute force enumeration suffices, but you can use the brute force enumeration to check that the method is working. And then for the larger cases, where brute force is impractical, you use your method. It seems that OP couldn't understand my method, and when they tried to apply it, got wrong answers. Oh well, you can lead a horse to water, etc. The other pathology here is: I think I did what you said and I got 1.585 times 10 to the 21 for the !!n=24!! case. The correct answer is $$23\cdot21\cdot19\cdot17\cdot15\cdot13\cdot11\cdot9\cdot7\cdot5\cdot3\cdot1 = 316234143225 \approx 3.16\cdot 10^{11}.$$ OP didn't explain how they got !!1.585\cdot10^{21}!! so there's not much hope of correcting their weird error. This is someone who probably could have been helped in person, but on the Internet it's hopeless. Their problems are Internet communication problems. Lambda calculus typing isn't especially noteworthy, but I wrote a fairly detailed explanation of the algorithm that Haskell or SML uses to find the type of an expression, and that might be interesting to someone. I think Special representation of a number is the standout post of the month. OP speculates that, among numbers of the form !!pq+rs!! (where !!p,q,r,s!! are prime), the choice of !!p,q,r,s!! is unique. That is, the mapping !!\langle p,q,r,s\rangle \to pq+rs!! is reversible. I was able to guess that this was not the case within a couple of minutes, and replied pretty much immediately: I would bet money against this representation being unique. I was sure that a simple computer search would find counterexamples. 
In fact, the smallest is !!11\cdot13 + 19\cdot 29 = 11\cdot 43 + 13\cdot 17 = 694!!, which is small enough that you could find it without the computer if you are patient. The obvious lesson to learn from this is that many elementary conjectures of this type can be easily disproved by a trivial computer search, and I frequently wonder why more amateur mathematicians don't learn enough computer programming to investigate this sort of thing. (I wrote recently on the topic of An ounce of theory is worth a pound of search, and this is an interesting counterpoint to that.) But the most interesting thing here is how I was able to instantly guess the answer. I explained in some detail in the post. But the basic line of reasoning goes like this. Additive properties of the primes are always distributed more or less at random unless there is some obvious reason why they can't be. For example, let !!p!! be prime and consider !!2p+1!!. This must have exactly one of the three forms !!3n-1, 3n,!! or !!3n+1!! for some integer !!n!!. It obviously has the form !!3n+1!! almost never (the only exception is !!p=3!!). But of the other two forms there is no obvious reason to prefer one over the other, and indeed of the primes up to 10,000, 611 are of the type !!3n!! and 616 are of the type !!3n-1!!. So we should expect the value !!pq+rs!! to be distributed more or less randomly over the set of outputs, because there's no obvious reason why it couldn't be, except for simple stuff, like that it's obviously almost always even. So we are throwing a bunch of balls at random into bins, and the claim is that no bin should contain more than one ball. For that to happen, there must be vastly more bins than balls. But the bins are numbers, and primes are not at all uncommon among numbers, so the number of bins isn't vastly larger, and there ought to be at least some collisions. 
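Here is a sketch of the sort of trivial search I mean (an illustration with my own names and bounds, not the program I actually used); it enumerates sums !!pq+rs!! of two semiprimes and reports the values hit in more than one way:

```python
from collections import defaultdict

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def collisions(limit):
    """Values v = p*q + r*s <= limit reachable from two genuinely
    different choices of the prime pairs {p,q} and {r,s}."""
    ps = primes_upto(limit)
    # all semiprimes p*q <= limit, each remembering its (unique) prime pair
    semis = [(p * q, (p, q)) for i, p in enumerate(ps)
             for q in ps[i:] if p * q <= limit]
    reps = defaultdict(set)
    for i, (a, pair_a) in enumerate(semis):
        for b, pair_b in semis[i:]:
            if a + b <= limit:
                reps[a + b].add(frozenset([pair_a, pair_b]))
    return {v: r for v, r in reps.items() if len(r) > 1}

# 694 = 11*13 + 19*29 = 11*43 + 13*17 shows up among the collisions.
```

A couple of nested loops and a dictionary are all it takes; this is exactly the kind of five-minute experiment I wish more conjecturers would run.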
In fact, a more careful analysis, which I wrote up on the site, shows that the number of balls is vastly larger—to have them be roughly the same, you would need primes to be roughly as common as perfect squares, but they are far more abundant than that—so as you take larger and larger primes, the number of collisions increases enormously and it's easy to find twenty or more quadruples of primes that all map to the same result. But I was able to predict this after a couple of minutes of thought, from completely elementary considerations, so I think it's a good example of Lower Mathematics at work. This is an example of a fairly common pathology of math.se questions: OP makes a conjecture that !!X!! never occurs or that there are no examples with property !!X!!, when actually !!X!! almost always occurs or every example has property !!X!!. I don't know what causes this. Rik Signes speculates that it's just wishful thinking: OP is doing some project where it would be useful to have !!pq+rs!! be unique, so posts in hope that someone will tell them that it is. But there was nothing more to it than baseless hope. Rik might be right. [ Addendum 20150619: A previous version of this article included the delightful typo “mathemativicians”. ]
Formula Macro Inserts a mathematical formula Type: JAR License: GNU Lesser General Public License 2.1 Bundled With: XWiki Enterprise Active Installs: 1 Description The formula is written in TeX markup and rendered as an image. Usage Examples of formula syntax can be seen on the. Parameters imageType: The image format; possible values: png (default), gif, jpg. fontSize: "Fuzzy" sizes, corresponding to the LaTeX font sizes: tiny (LaTeX: \tiny), very_small (LaTeX: \scriptsize), smaller (LaTeX: \footnotesize), small (LaTeX: \small), normal (default) (LaTeX: \normalsize), large (LaTeX: \large), larger (LaTeX: \Large), very_large (LaTeX: \LARGE), huge (LaTeX: \huge), extremely_huge (LaTeX: \Huge). Inserting a formula in the wiki editor Inserting a formula in the WYSIWYG editor Place the TeX markup corresponding to the formula in the content field of the macro insertion dialog. Inline vs. block formula The formula macro can be used inline, inside a paragraph, or as a standalone block. Some differences exist between the two results: just like in LaTeX, inline formulas take up less vertical space than block formulas. For example, the code {{formula}}\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}{{/formula}} generates the corresponding rendered image. Technical information The macro uses third-party tools for generating the image corresponding to a mathematical formula. Currently, four alternatives exist: one based on native system calls to latex, dvips and convert. It gives the best results from a graphical point of view, but requires the presence of external programs, and involves a slight overhead by starting new processes and working with the disk. one based on MathTran as a remote service through HTTP requests. The graphic results are as good as the ones obtained by the native method. 
Additionally, the image contains some useful metadata: the actual formula that generated the image, the position of the baseline for proper alignment, and any errors that may have occurred while processing the request. The disadvantage of this approach is that it relies heavily on a remote server. the third implementation uses SnuggleTeX to transform LaTeX into MathML, which is then rendered into images. The results are not as eye-pleasing as those obtained from LaTeX, but it is a 100% Java, self-contained solution, with no external dependencies. a fourth one was introduced in XWiki 2.1, based on the public Google Chart API as a remote service through HTTP requests. The quality of the results is between those obtained from the native TeX system and those from SnuggleTeX. Unlike MathTran, it doesn't use a native TeX system, thus there are a few disadvantages other than the quality: only block-level equations are produced; there's no support (yet?) for different font sizes and image formats; the results are not 100% in accordance with the TeX syntax (for example it does not understand root, and the content between left and right is lowered). The method used by default is the one based on native system calls. If the required tools are not available on the system, the SnuggleTeX implementation is used instead. The default formula rendering method can be configured by modifying the option macro.formula.renderer in the file WEB-INF/xwiki.properties, with the possible values native, mathtran, snuggletex and googlecharts. Acknowledgments This feature is based on the Tex plugin developed by Guillaume Legris. Prerequisites & Installation Instructions We recommend using the Extension Manager to install this extension (make sure that the text "Installable with the Extension Manager" is displayed at the top right location on this page to know if this extension can be installed with the Extension Manager). Note that installing Extensions when being offline is currently not supported and you'd need to use some . 
You can also use the manual method, which involves dropping the JAR file into the WEB-INF/lib folder and restarting XWiki. Dependencies Dependencies for this extension (org.xwiki.platform:xwiki-platform-formula-macro 11.8.1): org.xwiki.platform:xwiki-platform-formula-renderer 11.8.1 org.xwiki.rendering:xwiki-rendering-transformation-macro 11.8.1
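As an illustration of the renderer option mentioned above, the setting in WEB-INF/xwiki.properties would look something like this (a sketch; the property name and values are the ones listed in this page):

```
# WEB-INF/xwiki.properties
# Possible values: native, mathtran, snuggletex, googlecharts
macro.formula.renderer = snuggletex
```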
Question: Calculate the molar sulfate concentration in a solution that is $0.200$ M in $\ce{Ag+}$ and saturated with $\ce{Ag2SO4}$. My attempt: Looking through old notes and googling led me to this webpage. Example #2 looked similar to my problem, and my notes from general chemistry involved something called an ICE table to deal with stoichiometry and concentrations of ions. This is what I did: Write the dissociation equation $\ce{Ag2SO4 <=> 2Ag+ + SO4^2-}$ Write the solubility constant formula $$K_\mathrm{sp} = \frac{\text{products}}{\text{reactants}}$$ I was given the value of $K_\mathrm{sp}$ as $1.6 \times 10^{-5}$ and I assume these values are listed in an appendix in my text or online. Also $\ce{Ag2SO4}$ is supposed to be a solid, I think? Because the question states the solution is "saturated" with $\ce{Ag2SO4}$, and does "saturation" not imply that the $\ce{Ag2SO4}$ is going in and out of solution as a solid and as an ion? And therefore since it is a solid we do not consider it in the equation (not sure why, but this is similar to what I have in my notes)? $$1.6 \times 10^{-5} = \text{products}$$ $$1.6 \times 10^{-5} = \ce{[Ag+]^2 * [SO4^2- ]}$$ Here is the ICE table I used (note: $\text{I+C=E}$), letting $x$ be the sulfate that dissolves; each formula unit of $\ce{Ag2SO4}$ releases two $\ce{Ag+}$, so the silver concentration rises by $2x$: $$\begin{array}{|c|c|c|}\hline &\ce{Ag+}&\ce{SO4^2-}\\\hline \text{I}&\pu{0.200M}&0\\\hline \text{C}&+2x&+x\\\hline \text{E}&\pu{0.200M}+2x&x\\\hline \end{array}$$ Now I substitute the values I obtained for concentrations from the ICE table into my formula: $$1.6 \times 10^{-5} = [0.200~\text{M} + 2x]^2 \times [x]$$ Now since $K_\mathrm{sp}$ is small, $x$ should be tiny compared with $0.200$ M, so we can ignore the $2x$? I guess because it is very small and therefore negligible? $$1.6 \times 10^{-5} = [0.200~\text{M}]^2 \times [x]$$ $$ \frac{1.6 \times 10^{-5}}{[0.200~\text{M}]^2} = x = 0.0004$$ $$ x = 4.0 \times 10^{-4}~\text{M}$$ Therefore the concentration of $\ce{SO4^2-}$ is $ 4.0 \times 10^{-4}~\text{M}$?
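One way to see whether the neglected term really is negligible (my own check, not from the textbook): solve the full expression numerically and compare with the approximate answer. Note that each formula unit of $\ce{Ag2SO4}$ releases two $\ce{Ag+}$, so the silver concentration rises by $2x$ when $x$ of sulfate dissolves:

```python
def ksp_residual(x, ksp=1.6e-5, ag0=0.200):
    # Each mole of dissolved Ag2SO4 adds 2x to [Ag+] and x to [SO4^2-]
    return (ag0 + 2 * x) ** 2 * x - ksp

def solve_sulfate(lo=0.0, hi=1e-2):
    """Bisection on the full expression; no small-x approximation.
    The residual is negative at lo and positive at hi."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if ksp_residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x_exact = solve_sulfate()
x_approx = 1.6e-5 / 0.200 ** 2   # the approximate value, 4.0e-4 M
```

The exact root is about $3.97 \times 10^{-4}$ M, within 1% of the approximation, so dropping the small term is indeed safe here.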
Topological Methods in Nonlinear Analysis
Approximate controllability for abstract semilinear impulsive functional differential inclusions based on Hausdorff product measures
Lipschitz Retractions onto Sphere VS Spherical Cup in a Hilbert space
Finite-time blow-up in a quasilinear chemotaxis system with an external signal consumption
Equivalence between uniform $L^{2^\star}(\Omega)$ a priori bounds and uniform $L^{\infty}(\Omega)$ a priori bounds for subcritical elliptic equations
A continuation lemma and the existence of periodic solutions of perturbed planar Hamiltonian systems with sub-quadratic potentials
Uniform stability for fractional Cauchy problems and applications
Nonautonomous Conley Index Theory: The Homology Index and Attractor-Repeller decompositions
Nonautonomous Conley Index Theory: Continuation of Morse-decompositions
Optimal retraction problem for proper $k$-ball-contractive mappings in $C^m [0,1]$
On the Lyapunov stability theory for impulsive dynamical systems
Multiplicity of positive solutions for a fractional Laplacian equations involving critical nonlinearity
Multiplicity and Concentration for Kirchhoff Type Equations around Topologically Critical points in Potential
Global Existence for Reaction-Diffusion Systems Modeling Ions Electro-migration Through Biological Membranes with Mass Control and Critical Growth with Respect to the Gradient
A generic result on Weyl tensor
Diffusive logistic equation with U-shaped density dependent dispersal on the boundary
Parabolic equations with localized large diffusion: rate of convergence of attractors
New results of mixed monotone operator equations
Positive ground states for a subcritical and critical coupled system involving Kirchhoff-Schrödinger equations
On positive viscosity solutions of fractional Lane-Emden systems
Nonautonomous Conley Index Theory: The Connecting Homomorphism
Extreme partitions of a Lebesgue space and their application in topological dynamics
Lower and upper bounds for the waists of different spaces
About positive $W_{loc}^{1,\Phi}(\Omega)$-solutions to quasilinear elliptic problems with singular semilinear term
On ground state solutions of Nehari-Pohozaev type for the nonlinear Kirchhoff type problems with a general critical nonlinearity
Amenability and Hahn-Banach extension property for set-valued mappings
Reidemeister spectra for solvmanifolds in low dimensions
Multiplicity results for fractional $p$-Laplacian problems with Hardy term and Hardy-Sobolev critical exponent in $\mathbb{R}^N$
Positive least energy solutions for coupled nonlinear Choquard equations with Hardy-Littlewood-Sobolev critical exponent
Strong convergence of bi-spatial random attractors for parabolic equations on thin domains with rough noise
Markov perfect equilibria in OLG models with risk sensitive agents
On finding the ground state solution to the linearly coupled Brezis-Nirenberg system in high dimensions: the cooperative case
$L^{p}$-pullback attractors for non-autonomous reaction-diffusion equations with delays
Existence of positive solutions for Hardy nonlocal fractional elliptic equations involving critical nonlinearities
Blow-up solutions for a $p$-Laplacian elliptic equation of logistic type with singular nonlinearity
Smale Strategies for the N-Person Iterated Prisoner's Dilemma
Asymptotically almost automorphic solutions of dynamic equations on time scales
Global secondary bifurcation, symmetry breaking and period-doubling
Classification of radial solutions to Henon type equation on the hyperbolic space
A weighted Trudinger-Moser type inequality and its applications to quasilinear elliptic problems with critical growth in the whole Euclidean space
Krasnosel'skii-Schaefer type method in the existence problems
Formal Barycenter Spaces with Weights: The Euler Characteristic
The limit cycles of a class of quintic polynomial vector fields
Existence, localization and stability of limit-periodic solutions to differential equations involving cubic nonlinearities
Solutions for quasilinear elliptic systems with vanishing potentials
Topological characteristics of solution sets for fractional evolution equations and applications to control systems
Generalized fractional differential equations and inclusions equipped with nonlocal generalized fractional integral boundary conditions
Two homoclinic orbits for some second-order Hamiltonian systems
Zero-dimensional compact metrizable spaces as attractors of generalized iterated function systems
Infinitely many solutions for a class of critical Choquard equation with zero mass
On exact multiplicity for a second order equation with radiation boundary conditions
The continuity of additive and convex functions which are upper bounded on non-flat continua in $\mathbb R$
Removing Isolated Zeroes by Homotopy
Some two-point problems for second order integro-differential equations with argument deviations
Multiple Normalized Solutions for Choquard equations involving Kirchhoff type perturbation
Decay rates for a viscoelastic wave equation with Balakrishnan-Taylor and frictional dampings
A Global multiplicity result for a very singular critical nonlocal equation
On the existence of skyrmions in planar liquid crystals
On the linearization of vector fields on a torus with prescribed frequency
Semiclassical states for singularly perturbed Schrödinger-Poisson systems with a general Berestycki-Lions or critical nonlinearity
Asymptotic dynamics of non-autonomous fractional reaction-diffusion equations on bounded domains
© 2019 Juliusz P. Schauder Centre for Nonlinear Studies
Ghatak, Chandana and Ayappa, KG (2004) Solvation force, structure and thermodynamics of fluids confined in geometrically rough pores. In: Journal of Chemical Physics, 120 (20). pp. 9703-9714. PDF Solvation_force.pdf Download (870kB) Abstract The effect of periodic surface roughness on the behavior of confined soft sphere fluids is investigated using grand canonical Monte Carlo simulations. Rough pores are constructed by taking the prototypical slit-shaped pore and introducing unidirectional sinusoidal undulations on one wall. For the above geometry our study reveals that the solvation force response can be phase shifted in a controlled manner by varying the amplitude of roughness. At a fixed amplitude of roughness, $a$, the solvation force for pores with structured walls was relatively insensitive to the wavelength of the undulation, for $2.3 < \lambda/\sigma_{ff} < 7$, where $\sigma_{ff}$ is the Lennard-Jones diameter of the confined fluid. This was not the case for smooth walled pores, where the solvation force response was found to be sensitive to the wavelength, for $\lambda/\sigma_{ff} < 7.0$ and amplitudes of roughness $a/\sigma_{ff} \geq 0.5$. The predictions of the superposition approximation, where the solvation force response for the rough pores is deduced from the solvation force response of the slit-shaped pores, were in excellent agreement with simulation results for the structured pores, and for $\lambda/\sigma_{ff} \geq 7$ in the case of smooth walled pores. Grand potential computations illustrate that interactions between the walls of the pore can alter the pore width corresponding to the thermodynamically stable state, with wall-wall interactions playing an important role at smaller pore widths and higher amplitudes of roughness. Item Type: Journal Article Additional Information: Copyright for this article belongs to American Institute of Physics (AIP). 
Department/Centre: Division of Mechanical Sciences > Chemical Engineering Depositing User: L.Kaini Mahemei Date Deposited: 13 Jan 2005 Last Modified: 19 Sep 2010 04:17 URI: http://eprints.iisc.ac.in/id/eprint/2634
This question already has an answer here: I want to prove (or disprove) that $$\sum_{i=1}^n\log_2 i = \Theta(n\log_2 n)\,,$$ but I totally get stuck with this example. May someone help me with that. Asymptotically, $$\frac{1}{4}n\log_2n < \frac{1}{2}n(\log_2n-1) = \sum_{i=\frac{n}{2}}^{n}\log_2\frac{n}{2}<\sum_{i=1}^{n} \log_2i < \sum_{i=1}^{n} \log_2 n = n \log_2 n\,.$$ The LHS is $\log(n!)$, which is known, by Stirling's formula, to be asymptotic to $$\frac12\log(2\pi n)+n(\log(n) - 1).$$ You can conclude.
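The sandwich argument above is easy to check numerically; this sketch (my own addition, any large n will do) compares the sum with both bounds and with Stirling via `math.lgamma`:

```python
import math

def log_factorial_base2(n):
    """Sum of log2(i) for i = 1..n, i.e. log2(n!)."""
    return sum(math.log2(i) for i in range(1, n + 1))

n = 10_000
s = log_factorial_base2(n)

lower = 0.25 * n * math.log2(n)               # (1/4) n log2 n
upper = n * math.log2(n)                      # n log2 n
stirling = math.lgamma(n + 1) / math.log(2)   # log2(n!) via ln Gamma(n+1)
```

The sum lands strictly between the two bounds and agrees with the Stirling-based value to floating-point precision, which is exactly what $\Theta(n\log_2 n)$ predicts.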