Fujimura's problem

Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture.

n: 0 1 2 3 4 5
[math]\overline{c}^\mu_n[/math]: 1 2 4 6 9 12

n=0

[math]\overline{c}^\mu_0 = 1[/math]: This is clear.

n=1

[math]\overline{c}^\mu_1 = 2[/math]: This is clear.

n=2

[math]\overline{c}^\mu_2 = 4[/math]: This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).

n=3

[math]\overline{c}^\mu_3 = 6[/math]: For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math]. For the upper bound: observe that with only three removals, each of these (non-overlapping) triangles must have one removal:

set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)

Consider choices from set A:
(0,3,0) leaves the triangle (0,2,1) (1,2,0) (1,1,1)
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work
(1,2,0) is symmetrical with (0,2,1)

n=4

[math]\overline{c}^\mu_4=9[/math]: The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c equal to 0 has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so it would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.)

Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].

Remark: curiously, the best constructions for [math]c_4[/math] use only 7 points instead of 9.

n=5

[math]\overline{c}^\mu_5=12[/math]: The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c equal to 0 has 12 elements and doesn't contain any equilateral triangles.

Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if |S| > 12, S doesn't contain any of these.
S can only contain 2 points in each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].

n=6

[math]15 \leq \overline{c}^\mu_6 \leq 17[/math]: [math]15 \leq \overline{c}^\mu_6[/math] from the bound for general n.

Note that there are eight extremal solutions to [math] \overline{c}^\mu_3 [/math]:

Solution I: remove 300, 020, 111, 003
Solution II: remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201

Also consider the same triangular lattice with the point 020 removed, making a trapezoid. Solutions based on I-III are:

Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102
Solution VI: remove 210, 021, 102
Solution VI': remove 120, 012, 201

Suppose we can remove all equilateral triangles on our 7×7×7 triangular lattice with only 10 removals. The triangle 141-411-114 must have at least one point removed. Remove 141, and note that because of symmetry any logic that follows also applies to 411 and 114.

There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed. (Now only six removals remain.) The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose instead the trapezoid 600-420-321-303 uses solution IV (by symmetry the same logic will work with the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006, so again 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice. Note this means the "top triangle" 060-330-033 must have only four points removed, so it must conform to either solution I or II, because of the removal of 141.

Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'.

Suppose the solution of the trapezoid 600-420-321-303 is V. This leaves an equilateral triangle 420-321-330, which forces the "top triangle" to be solution I. This leaves the equilateral triangle 231-321-222. So the trapezoid can't be V.

Therefore the solution of the trapezoid 600-420-321-303 is IV. Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, that means the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves the triangle 321-024-051 open, and we have reached a contradiction.

General n

A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], obtained by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside that triangle. In a similar spirit, we have the lower bound [math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math] for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has its third vertex outside of the original example.
An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), given by the set of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero.

A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math], which comes from deleting the two bottom rows of a triangle-free set and counting how many vertices are possible in those rows.

Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles; since [math]|\Delta_n| = (n+1)(n+2)/2[/math], this leaves (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].

Asymptotics

The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math]. By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
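The small values in the table above can be checked by brute force. The following is a minimal sketch (in Python; the helper names are my own) that enumerates [math]\Delta_n[/math], lists all upward equilateral triangles, and searches over subsets for the largest triangle-free one. It is only feasible for roughly n ≤ 4, since the search is exponential in [math]|\Delta_n|[/math].

```python
from itertools import combinations

def grid(n):
    # All (a, b, c) with nonnegative integer entries summing to n.
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def triangles(n):
    # Upward triangles {(a+r,b,c), (a,b+r,c), (a,b,c+r)} with r > 0 inside Delta_n.
    tris = []
    for r in range(1, n + 1):
        for (a, b, c) in grid(n - r):
            tris.append(frozenset([(a + r, b, c), (a, b + r, c), (a, b, c + r)]))
    return tris

def max_triangle_free(n):
    # Exponential brute force: try subsets from the largest size downwards.
    pts, tris = grid(n), triangles(n)
    for size in range(len(pts), -1, -1):
        for sub in combinations(pts, size):
            s = set(sub)
            if not any(t <= s for t in tris):
                return size
    return 0

if __name__ == "__main__":
    for n in range(5):                    # n = 5 already takes much longer this way
        print(n, max_triangle_free(n))    # expect 1, 2, 4, 6, 9
```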
"$X$ and $Y$ are identically distributed" means that $P(X \in B)=P(Y \in B)$ for each Borel set $B$. Knowing that $F_X=F_Y$ means you know $P(X \in (-\infty,a])=P(Y \in (-\infty,a])$ for every real number $a$. So you prove the desired result in three steps: $P(X \in U)=P(Y \in U)$ for each open $U$. If $P(X \in U)=P(Y \in U)$ for each open $U$, then $P(X \in F)=P(Y \in F)$ for each closed $F$. If $P(X \in U)=P(Y \in U)$ for each open $U$ and $P(X \in F)=P(Y \in F)$ for each closed $F$, then $P(X \in B)=P(Y \in B)$ for each Borel $B$. To prove the first part, start from proving the result for open intervals, and then extend it to general open sets by using the fact that any open set is a countable union of open intervals. To prove the second part you need only note that a closed set is the complement of an open set, so $P(X \in F)=1-P(X \in F^c)$, and we already have agreement for open sets. To prove the third part, you show that the set $E=\{ A : P(X \in A)=P(Y \in A) \}$ is a $\sigma$-algebra. This takes some work. First, the above implies that this set contains an algebra (no $\sigma$ here), namely the algebra consisting of open and closed sets. Next, you can check $E$ is closed under increasing unions and decreasing intersections, using continuity of measure. (Here the fact that a probability space has finite measure is required). Then the monotone class theorem implies that $E$ is a $\sigma$-algebra. Now by definition, any $\sigma$-algebra containing all open sets must contain the Borel $\sigma$-algebra.
I am trying to implement the following optimization (from this paper) in Matlab using fmincon: $\min_\omega\omega'\Sigma\omega$ subject to $\min_U r_p \geq r_0$, where $\Sigma$ is a positive definite matrix and $\omega$ is a vector of weights that sum to 1. Also, we have $r_p=\alpha'\omega$, and $U$ is the circle centred at $\alpha$ with radius equal to $\chi|\alpha|$ for $\chi$ between 0 and 1. The authors of the paper show that: $\min_U r_p=|\alpha||\omega|[\cos(\phi)-\chi]$ where $\phi$ is the angle between the two vectors $\alpha$ and $\omega$. Any ideas how to implement this using Matlab?
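Not the Matlab answer asked for, but a minimal sketch of the same setup in Python with scipy.optimize (SLSQP) may show how the pieces fit together; the data (Sigma, alpha, chi, r0) are placeholders, not values from the paper. The key observation is that $|\alpha||\omega|[\cos(\phi)-\chi] = \alpha'\omega - \chi|\alpha||\omega|$, so the robust return requirement can be passed directly as a nonlinear inequality constraint. The same structure carries over to fmincon: a quadratic objective, the equality constraint that the weights sum to 1, and one nonlinear inequality.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder problem data (assumed, not from the paper).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
alpha = np.array([0.08, 0.10, 0.12])   # expected returns
chi, r0 = 0.5, 0.04                    # uncertainty radius factor and target return

def objective(w):
    return w @ Sigma @ w               # portfolio variance

constraints = [
    {"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},
    # min_U r_p = alpha'w - chi*|alpha|*|w| >= r0
    {"type": "ineq", "fun": lambda w: alpha @ w
                                      - chi * np.linalg.norm(alpha) * np.linalg.norm(w)
                                      - r0},
]

w0 = np.ones(3) / 3
res = minimize(objective, w0, method="SLSQP", constraints=constraints)
print(res.x, res.fun)
```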
Principal Component Analysis (PCA) is a statistical procedure that extracts the most important features of a dataset. Consider that you have a set of 2D points as it is shown in the figure above. Each dimension corresponds to a feature you are interested in. Here some could argue that the points are set in a random order. However, if you have a better look you will see that there is a linear pattern (indicated by the blue line) which is hard to dismiss. A key point of PCA is Dimensionality Reduction. Dimensionality Reduction is the process of reducing the number of dimensions of the given dataset. For example, in the above case it is possible to approximate the set of points by a single line and therefore reduce the dimensionality of the given points from 2D to 1D.

Moreover, you could also see that the points vary the most along the blue line, more than they vary along the Feature 1 or Feature 2 axes. This means that if you know the position of a point along the blue line you have more information about the point than if you only knew where it was on the Feature 1 axis or Feature 2 axis. Hence, PCA allows us to find the direction along which our data varies the most. In fact, the result of running PCA on the set of points in the diagram consists of 2 vectors called eigenvectors, which are the principal components of the data set. The size of each eigenvector is encoded in the corresponding eigenvalue and indicates how much the data vary along the principal component. The beginning of the eigenvectors is the center of all points in the data set. Applying PCA to an N-dimensional data set yields N N-dimensional eigenvectors, N eigenvalues and 1 N-dimensional center point. Enough theory, let's see how we can put these ideas into code.

The goal is to transform a given data set X of dimension p to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X: \[ \mathbf{Y} = \mathbb{K} \mathbb{L} \mathbb{T} \{\mathbf{X}\} \]

Organize the data set

Suppose you have data comprising a set of observations of p variables, and you want to reduce the data so that each observation can be described with only L variables, L < p. Suppose further that the data are arranged as a set of n data vectors \( x_1...x_n \) with each \( x_i \) representing a single grouped observation of the p variables.

Calculate the empirical mean

Place the calculated mean values into an empirical mean vector u of dimensions \( p\times 1 \). \[ \mathbf{u[j]} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{X[i,j]} \]

Calculate the deviations from the mean

Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence, we proceed by centering the data as follows: store the mean-subtracted data in the \( n\times p \) matrix B. \[ \mathbf{B} = \mathbf{X} - \mathbf{h}\mathbf{u^{T}} \] where h is an \( n\times 1 \) column vector of all 1s: \[ h[i] = 1, i = 1, ..., n \]

Find the covariance matrix

Find the \( p\times p \) empirical covariance matrix C from the outer product of matrix B with itself: \[ \mathbf{C} = \frac{1}{n-1} \mathbf{B^{*}} \cdot \mathbf{B} \] where * is the conjugate transpose operator. Note that if B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose.
Find the eigenvectors and eigenvalues of the covariance matrix

Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C: \[ \mathbf{V^{-1}} \mathbf{C} \mathbf{V} = \mathbf{D} \] where D is the diagonal matrix of eigenvalues of C. Matrix D will take the form of a \( p \times p \) diagonal matrix: \[ D[k,l] = \left\{\begin{matrix} \lambda_k, & k = l \\ 0, & k \neq l \end{matrix}\right. \] Here, \( \lambda_k \) is the k-th eigenvalue of the covariance matrix C.

Here we apply the necessary pre-processing procedures in order to be able to detect the objects of interest. Then we find and filter contours by size and obtain the orientation of the remaining ones. The orientation is extracted by a call to the getOrientation() function, which performs the whole PCA procedure. First the data need to be arranged in a matrix of size n x 2, where n is the number of data points we have. Then we can perform the PCA analysis. The calculated mean (i.e. center of mass) is stored in the cntr variable, and the eigenvectors and eigenvalues are stored in the corresponding std::vector's. The final result is visualized through the drawAxis() function, where the principal components are drawn as lines, and each eigenvector is multiplied by its eigenvalue and translated to the mean position. The code opens an image, finds the orientation of the detected objects of interest, and then visualizes the result by drawing the contours of the detected objects of interest, the center point, and the x- and y-axes of the extracted orientation.
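As a language-neutral illustration of the steps above (this is not the OpenCV C++ sample the tutorial refers to, just a minimal NumPy sketch with made-up data), here is the mean, the mean-subtracted matrix B, the covariance C, and its eigendecomposition computed directly:

```python
import numpy as np

# Made-up n x p data matrix (n observations of p = 2 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.5], [0.0, 0.5]])

n, p = X.shape
u = X.mean(axis=0)                # empirical mean vector, shape (p,)
B = X - u                         # mean-subtracted data (h * u^T subtracted row-wise)
C = (B.T @ B) / (n - 1)           # p x p empirical covariance matrix

eigvals, V = np.linalg.eigh(C)    # eigenvalues ascending; columns of V are eigenvectors
order = np.argsort(eigvals)[::-1] # sort principal components by decreasing variance
eigvals, V = eigvals[order], V[:, order]

print("center:", u)
print("eigenvalues:", eigvals)    # variance of the data along each principal component
print("eigenvectors (columns):\n", V)
```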
I'm looking to evaluate whether the following is true or false: For rotation of a rigid body about an arbitrary axis, the angular momentum always points along the axis of rotation. Let: $$r = x \hat i + y \hat j + z \hat k$$ and $$ v = v_x \hat i + v_y \hat j + v_z \hat k$$ Then, the angular momentum is defined as: $$\vec L = \vec r \times m\vec v$$ $$r \times v = (yv_z-zv_y)\hat i - (xv_z-zv_x)\hat j + (xv_y-yv_x) \hat k$$ $$L = m \left((yv_z-zv_y)\hat i - (xv_z-zv_x)\hat j + (xv_y-yv_x) \hat k\right)$$ Here, $L$ is not along a principal axis, but has $3$ components. If the rotating particles have $v_x$, $v_y$ or $v_z = 0$, it is plain to see that $L$ now lies along one principal axis. However, it isn't clear to me that if $L$ has $3$ components it doesn't still point along the axis of rotation. Maybe the axis of rotation has $3$ components as well? Basically, I think my confusion boils down to whether the angular momentum vector determines the axis of rotation (which I think would make sense since it's proportional to $\omega$), which would make the statement true; however, it doesn't seem impossible to me that the axis of rotation and the angular momentum are not necessarily parallel.
I recently asked about the Mahalanobis distance and I got pretty good answers in this post: I think I got the idea, but what I still felt was missing was the derivation of the formula for the Mahalanobis distance. So my question is: "How does one derive the formula for the Mahalanobis distance?" Why does the formula have the form: $$D(\textbf{x},\textbf{y})=\sqrt{ (\textbf{x}-\textbf{y})^TC^{-1}(\textbf{x}-\textbf{y})} $$ Could someone perhaps give an analogous derivation to the one user @sjm.majewski gave for principal component analysis in the link below: UPDATE: From Wikipedia, the intuitive explanation was: "The Mahalanobis distance is simply the distance of the test point from the center of mass divided by the width of the ellipsoid in the direction of the test point." So is $C^{-1}$ the width of the ellipsoid in the direction of the test point? I mean, this distance I can understand: $$\displaystyle\frac{\textbf{x}-\textbf{u}}{\sigma}$$ But this distance confuses me: $$\sqrt{ (\textbf{x}-\textbf{y})^TC^{-1}(\textbf{x}-\textbf{y})} $$
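Not a derivation, but a small numerical illustration of what the formula computes may help. The sketch below (Python/NumPy, with made-up data) shows that the Mahalanobis distance is an ordinary Euclidean distance measured after the data have been "whitened" by the inverse covariance, which is one way to read the $C^{-1}$ in the formula:

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up correlated 2D data and its covariance.
data = rng.multivariate_normal(mean=[0, 0], cov=[[4.0, 1.8], [1.8, 1.0]], size=5000)
C = np.cov(data, rowvar=False)

x = np.array([2.0, 1.0])
y = data.mean(axis=0)
d = x - y

# Mahalanobis distance: sqrt((x-y)^T C^{-1} (x-y)).
maha = np.sqrt(d @ np.linalg.inv(C) @ d)

# The same number obtained as a Euclidean norm in "whitened" coordinates,
# using the symmetric inverse square root of C.
vals, vecs = np.linalg.eigh(C)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T
print(maha, np.linalg.norm(W @ d))   # the two values agree
```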
The aleph numbers, $\aleph_\alpha$

The aleph function, denoted $\aleph$, provides a one-to-one correspondence between the ordinals and the infinite cardinal numbers. In fact, it is the only order-isomorphism between the ordinals and the infinite cardinals, with respect to membership. It is a strictly monotone ordinal function which can be defined via transfinite recursion in the following manner:

$\aleph_0 = \omega$

$\aleph_{n+1} = \bigcap \{ x \in \operatorname{On} : | \aleph_n | \lt |x| \}$

$\aleph_a = \bigcup_{x \in a} \aleph_x$ where $a$ is a limit ordinal.

To translate the formalism, $\aleph_{n+1}$ is the smallest ordinal whose cardinality is greater than that of the previous aleph, and $\aleph_a$ is the supremum of the $\aleph_x$ for $x \lt a$ when $a$ is a limit ordinal.

Aleph one

$\aleph_1$ is the first uncountable cardinal.

The continuum hypothesis

The continuum hypothesis is the assertion that the set of real numbers $\mathbb{R}$ has cardinality $\aleph_{1}$. Gödel showed the consistency of this assertion with ZFC, while Cohen showed using forcing that if ZFC is consistent then ZFC+$\aleph_1<|\mathbb R|$ is consistent.

Equivalent Forms

The cardinality of the power set of $\aleph_{0}$ is $\aleph_{1}$.

There is no set with cardinality $\alpha$ such that $\aleph_{0} < \alpha < \aleph_{1}$.

Generalizations

The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set of S, then it either has the same cardinality as the set S or the same cardinality as the power set of S. That is, for any infinite cardinal \(\lambda\) there is no cardinal \(\kappa\) such that \(\lambda <\kappa <2^{\lambda}.\) GCH is equivalent to:\[\aleph_{\alpha+1}=2^{\aleph_\alpha}\] for every ordinal \(\alpha\) (occasionally called Cantor's aleph hypothesis). For more, see https://en.wikipedia.org/wiki/Continuum_hypothesis

Aleph two

Aleph hierarchy

The $\aleph_\alpha$ hierarchy of cardinals is defined by transfinite recursion:

$\aleph_0$ is the smallest infinite cardinal.

$\aleph_{\alpha+1}=\aleph_\alpha^+$, the successor cardinal to $\aleph_\alpha$.

$\aleph_\lambda=\sup_{\alpha\lt\lambda}\aleph_\alpha$ for limit ordinals $\lambda$.

Thus, $\aleph_\alpha$ is the $\alpha^{\rm th}$ infinite cardinal. In ZFC the sequence $$\aleph_0, \aleph_1,\aleph_2,\ldots,\aleph_\omega,\aleph_{\omega+1},\ldots,\aleph_\alpha,\ldots$$ is an exhaustive list of all infinite cardinalities. Every infinite set is bijective with some $\aleph_\alpha$.

Aleph omega

The cardinal $\aleph_\omega$ is the smallest instance of an uncountable singular cardinal number, since it is larger than every $\aleph_n$, but is the supremum of the countable set $\{\aleph_0,\aleph_1,\ldots,\aleph_n,\ldots\mid n\lt\omega\}$.

Aleph fixed point

A cardinal $\kappa$ is an $\aleph$-fixed point when $\kappa=\aleph_\kappa$. In this case, $\kappa$ is the $\kappa^{\rm th}$ infinite cardinal. Every inaccessible cardinal is an $\aleph$-fixed point, and a limit of such fixed points, and so on. Indeed, every worldly cardinal is an $\aleph$-fixed point and a limit of such. One may easily construct an $\aleph$-fixed point above any ordinal $\beta$: simply let $\beta_0=\beta$ and $\beta_{n+1}=\aleph_{\beta_n}$; it follows that $\kappa=\sup_n\beta_n=\aleph_{\aleph_{\aleph_{\aleph_{\ddots}}}}$ is an $\aleph$-fixed point, since $\aleph_\kappa=\sup_{\alpha\lt\kappa}\aleph_\alpha=\sup_n\aleph_{\beta_n}=\sup_n\beta_{n+1}=\kappa$.
By continuing the recursion to any ordinal, one may construct $\aleph$-fixed points of any desired cofinality. Indeed, the class of $\aleph$-fixed points forms a closed unbounded class of cardinals.
Theorem. Let $T$ be an operator on the finite dimensional complex vector space $\mathbf{W}$. The characteristic polynomial of $T$ equals the minimal polynomial of $T$ if and only if the dimension of each eigenspace of $T$ is $1$.

Proof. Let the characteristic and minimal polynomial be, respectively, $\chi(t)$ and $\mu(t)$, with$$\begin{align*} \chi(t) &= (t-\lambda_1)^{a_1}\cdots (t-\lambda_k)^{a_k}\\ \mu(t) &= (t-\lambda_1)^{b_1}\cdots (t-\lambda_k)^{b_k}, \end{align*}$$where $1\leq b_i\leq a_i$ for each $i$. Then $b_i$ is the size of the largest Jordan block associated to $\lambda_i$ in the Jordan canonical form of $T$, and the sum of the sizes of the Jordan blocks associated to $\lambda_i$ is equal to $a_i$. Hence, $b_i=a_i$ if and only if $T$ has a unique Jordan block associated to $\lambda_i$. Since the dimension of $E_{\lambda_i}$ is equal to the number of Jordan blocks associated to $\lambda_i$ in the Jordan canonical form of $T$, it follows that $b_i=a_i$ if and only if $\dim(E_{\lambda_i})=1$. QED

In particular, if the matrix has $n$ distinct eigenvalues, then each eigenvalue has a one-dimensional eigenspace. Also in particular,

Corollary. Let $T$ be a diagonalizable operator on a finite dimensional vector space $\mathbf{W}$. The characteristic polynomial of $T$ equals the minimal polynomial of $T$ if and only if the number of distinct eigenvalues of $T$ is $\dim(\mathbf{W})$.

Using the Rational Canonical Form instead, we obtain:

Theorem. Let $W$ be a finite dimensional vector space over the field $\mathbf{F}$, and $T$ an operator on $W$. Let $\chi(t)$ be the characteristic polynomial of $T$, and assume that the factorization of $\chi(t)$ into irreducibles over $\mathbf{F}$ is$$\chi(t) = \phi_1(t)^{a_1}\cdots \phi_k(t)^{a_k}.$$Then the minimal polynomial of $T$ equals the characteristic polynomial of $T$ if and only if $\dim(\mathrm{ker}(\phi_i(T))) = \deg(\phi_i(t))$ for $i=1,\ldots,k$.

Proof. Proceed as above, using the Rational Canonical Form instead. The exponent $b_i$ of $\phi_i(t)$ in the minimal polynomial gives the largest power of $\phi_i(t)$ that has a companion block in the Rational Canonical Form, and $\frac{1}{d_i}\dim(\mathrm{ker}(\phi_i(T)))$ (where $d_i=\deg(\phi_i)$) is the number of companion blocks. QED
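A quick computational illustration of the first theorem (a sketch in Python/SymPy, with a made-up matrix; the minimal polynomial is found here by brute force over divisors of the characteristic polynomial, not by any built-in routine):

```python
import sympy as sp
from itertools import product

t = sp.symbols('t')

def eval_poly_at_matrix(p, M):
    # Evaluate the polynomial p(t) at the square matrix M.
    result = sp.zeros(*M.shape)
    for power, coeff in enumerate(reversed(sp.Poly(p, t).all_coeffs())):
        result += coeff * M**power
    return result

def minimal_polynomial(M):
    # Brute force: smallest monic divisor of the characteristic polynomial annihilating M.
    roots = sp.roots(M.charpoly(t).as_expr(), t)   # {eigenvalue: algebraic multiplicity}
    candidates = []
    for exps in product(*[range(1, a + 1) for a in roots.values()]):
        p = sp.expand(sp.prod([(t - lam)**e for lam, e in zip(roots.keys(), exps)]))
        if eval_poly_at_matrix(p, M) == sp.zeros(*M.shape):
            candidates.append(p)
    return min(candidates, key=lambda p: sp.Poly(p, t).degree())

# Made-up example: eigenvalue 2 has two Jordan blocks, so its eigenspace has dimension 2.
A = sp.Matrix([[2, 1, 0, 0],
               [0, 2, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3]])

chi = sp.expand(A.charpoly(t).as_expr())
mu = minimal_polynomial(A)
dims = [len(basis) for (_lam, _mult, basis) in A.eigenvects()]

print("characteristic:", chi)    # expands (t-2)^3 (t-3)
print("minimal       :", mu)     # expands (t-2)^2 (t-3)
print("eigenspace dims:", dims)  # contains a 2, so char poly != min poly, as the theorem says
```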
Conceptually, computing double integrals in polar coordinates is the same as in rectangular coordinates. After all, the idea of an integral doesn't depend on the coordinate system. If $R$ is a region in the plane and $f(x,y)$ is a function, then $\iint_R f(x,y)\, dA$ is what we get when we chop $R$ into small pieces, multiply the area of each piece by the value of $f$ at a point in that piece, and add up the results.

So what is the area of a polar rectangle? The polar rectangle is the difference of two pie wedges, one with radius $b$ and one with radius $a$, and so has area \begin{eqnarray*} \Delta A & = & \frac{b^2}{2}(\beta-\alpha) - \frac{a^2}{2}(\beta - \alpha) \\ & = & \frac{b^2-a^2}{2}(\beta-\alpha) \\ & = & \frac{a+b}{2} (b-a)(\beta - \alpha) \\ & = & r^* \Delta r \Delta \theta, \end{eqnarray*} where $r^*=\frac{a+b}{2}$ is the average value of $r$, $\Delta r = b-a$ is the change in $r$ and $\Delta \theta = \beta-\alpha$ is the change in $\theta$. Another way to understand this is to look at the shape of a small polar rectangle: when $\Delta r$ and $\Delta \theta$ are small, it is approximately an ordinary rectangle with side lengths $\Delta r$ and $r\,\Delta \theta$.
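As a quick sanity check of the $dA = r\,dr\,d\theta$ formula, here is a small sketch (Python/SciPy, my own example) that integrates the constant function 1 over the quarter disk of radius 2 in polar coordinates and compares the result with the known area $\pi \cdot 2^2 / 4 = \pi$:

```python
import numpy as np
from scipy import integrate

# Area of the quarter disk 0 <= r <= 2, 0 <= theta <= pi/2,
# computed as the double integral of 1 * r dr dtheta.
area, err = integrate.dblquad(
    lambda r, theta: r,      # integrand f * r with f = 1; inner variable first
    0, np.pi / 2,            # outer variable theta runs over [0, pi/2]
    lambda theta: 0,         # inner variable r runs from 0 ...
    lambda theta: 2,         # ... to 2
)

print(area, np.pi)           # both are approximately 3.14159
```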
In the small amount of physics that I have learned thus far, there seems to be a (possibly superficial) pattern that I have been wondering about. The formula for the kinetic energy of a moving particle is $\frac{1}{2}mv^2$. The formula for rotational kinetic energy is $\frac{1}{2}I\omega^2$. The formula for the energy stored in a capacitor is $\frac{1}{2}C \Delta V^2$. The formula for the energy delivered to an inductor is $\frac{1}{2}LI^2$. Finally, everyone is aware of Einstein's famous formula $E=mc^2$. I realize there are other energy formulas (gravitational potential energy, for example) that do not take this form, but is there some underlying reason why the formulas above take a similar form? Is it a coincidence? Or is there a motivation for physicists and textbook authors to present these formulas the way they do?
$$\int_0^1\frac{(\ln(1+x))^2\ln(x)\ln(1-x)}{1-x} \, dx=\dfrac{7}{2}\zeta(3)(\ln 2)^2-\dfrac{\pi^2}{6}(\ln 2)^3-\dfrac{\pi^2}{2}\zeta(3)+6\zeta(5)-\dfrac{\pi^4}{48}\ln 2$$

Prove that the equation above is true.

Notation: \( \zeta(\cdot) \) denotes the Riemann zeta function.

This was found in another mathematics forum and it was unanswered there. This is a part of the set Formidable Series and Integrals.

Note by Aditya Kumar, 3 years, 7 months ago

Use this expansion: \( \ln^2(1+x) = \sum_{r=0}^{\infty} \dfrac{H_{r} (-1)^r x^{r+1}}{r+1} \). Then it will be left to evaluate the integral \( \int_{0}^{1} x^{r+1} \dfrac{\ln(x) \ln(1-x)}{1-x}\, dx \). This is a derivative of the beta function.

How will you evaluate the resulting summation?

Wow, nice one! Do you know how to prove it? @Pi Han Goh, add it to your set.

Added! Liked + reshared! Thanks. Do try it. Now @Ishan Singh can solve this.

The integral can be written as: \( \displaystyle \sum_{r,s,t\geq 0} \frac{H_r(-1)^r}{(r+1)(t+1)(r+s+t+3)^2} \). @Mark Hennings, could you help us out?

Hint: Convert into a derivative of the beta function.

Yup, I did that, but at some point I have to take the natural logarithm of (-1), which is imaginary, yet the closed form doesn't contain any imaginary term.

No. Take a limit. For example, this.

@Ishan Singh – You have to do something with the \( \ln^2(1+x) \) term and afterwards it gets converted into the limit. Another method is to use the generating function of the harmonic numbers.

@Ishan Singh – OK, I'll give another shot at Feynman's way.

@Ishan Singh – I got an imaginary term while using that harmonic relation. You are referring to this, \( \sum_{n=1}^{\infty} H_{n} x^{n} = \dfrac{-\ln(1-x)}{1-x} \), no?

@Harsh Shrivastava – Yes. I'm referring to that generating function.

@Ishan Singh – Can you clearly explain how you applied the limit there?

@Surya Prakash – That will take 2-3 pages. I may post it when I'm free.
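One cheap way to gain confidence in the claimed closed form before attempting a proof is to check it numerically. A minimal sketch with mpmath (my own check, not part of the original note):

```python
from mpmath import mp, mpf, quad, log, zeta, pi

mp.dps = 30  # working precision (decimal digits)

# Left-hand side: numerical quadrature of the integrand on (0, 1).
lhs = quad(lambda x: log(1 + x)**2 * log(x) * log(1 - x) / (1 - x), [0, 1])

# Right-hand side: the claimed closed form.
rhs = (mpf(7) / 2 * zeta(3) * log(2)**2
       - pi**2 / 6 * log(2)**3
       - pi**2 / 2 * zeta(3)
       + 6 * zeta(5)
       - pi**4 / 48 * log(2))

print(lhs)
print(rhs)
print(abs(lhs - rhs))  # should be tiny, consistent with the identity
```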
I am a master's student doing an assignment on the finite element method. In the instructions I could not understand the derivation of the weak form, which should not be difficult. I'm sorry for posting this easy and most likely not helpful question for other people. So the derivation is about the weak form of the integral formulation of a 4th-order ODE. It is a simple beam deformation on the interval between 0 and L. $$ \int_0^L\frac{d^2 w}{d x^2}EI\frac{d^2 \hat{u}}{d x^2}dx = ...$$ $$ \left. w EI \frac{d^3 \hat{u}}{d x^3} \right|_{x=0} -\left. \frac{d w}{dx} EI \frac{d^2 \hat{u}}{d x^2} \right|_{x=0} -\left. w EI \frac{d^3 \hat{u}}{d x^3} \right|_{x=L} +\left. \frac{d w}{dx} EI \frac{d^2 \hat{u}}{d x^2} \right|_{x=L} $$ I thought about integration by parts $$\int u(x) v'(x) \, dx = u(x) v(x) - \int v(x) \, u'(x) dx $$ and then I ended up with $$\frac{d^2\hat{u}}{dx^2}\frac{dw}{dx} - \int \frac{dw}{dx}\frac{d^3\hat{u}}{dx^3}d x.$$ If I continue the integration by parts on the 2nd term, then the derivative will be 4th order... How can I get the following? $$ \left. w EI \frac{d^3 \hat{u}}{d x^3} \right|_{x=0} -\left. \frac{d w}{dx} EI \frac{d^2 \hat{u}}{d x^2} \right|_{x=0} -\left. w EI \frac{d^3 \hat{u}}{d x^3} \right|_{x=L} +\left. \frac{d w}{dx} EI \frac{d^2 \hat{u}}{d x^2} \right|_{x=L} $$
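One way to convince yourself that the boundary terms above come from integrating by parts twice is to check the underlying product-rule identity symbolically. Here is a small sketch with SymPy (my own check, with EI treated as a constant):

```python
import sympy as sp

x, EI = sp.symbols('x EI')
w = sp.Function('w')(x)
u = sp.Function('u')(x)      # stands for u-hat

# Integrating by parts twice amounts to the pointwise identity
#   d/dx( w * EI * u''' - w' * EI * u'' ) = w * EI * u'''' - w'' * EI * u''
lhs = sp.diff(w * EI * sp.diff(u, x, 3) - sp.diff(w, x) * EI * sp.diff(u, x, 2), x)
rhs = w * EI * sp.diff(u, x, 4) - sp.diff(w, x, 2) * EI * sp.diff(u, x, 2)

print(sp.simplify(lhs - rhs))   # prints 0; integrating both sides from 0 to L turns
                                # the left side into exactly the four boundary terms
```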
Let $G_1$ and $G_2$ be two semisimple algebraic groups defined over $\mathbb{Q}$, and suppose we have a surjective homomorphism $f: G_1\to G_2$ with finite kernel contained in the center of $G_1$. By a congruence subgroup of $G_i$, for $i=1,2$, we mean $K\cap G_i(\mathbb{Q})$, with $K$ a compact open subgroup of $G_i(\mathbb{A}_f)$. Then consider the images of congruence subgroups of $G_1$ under the map $f$: are they cofinal with the congruence subgroups of $G_2$, i.e. does every congruence subgroup of $G_2$ contain the image of some congruence subgroup of $G_1$, and does every image of a congruence subgroup of $G_1$ contain a congruence subgroup of $G_2$? If the above is not true, then is it true that almost all (except finitely many) congruence subgroups of $G_2$ are contained in the image of some congruence subgroup of $G_1$? For example, let $G_1=SL_2(\mathbb{Q})\times SL_2(\mathbb{Q})\times SL_2(\mathbb{Q})$; then $G_1$ has a natural map into $Sp_8(\mathbb{Q})$ by the tensor product action. Let $G_2$ be the image. Let $C_2$ (resp. $C_1$) be a connected Shimura curve using $G_2$ (resp. $G_1$) and a congruence subgroup $\Gamma_2$ (resp. $\Gamma_1$) of $G_2$ (resp. $G_1$). Then I want to know: is it true that, except for finitely many congruence subgroups $\Gamma_2$, one can always find some $\Gamma_1$ such that there is an étale map from $C_2$ to $C_1$?
A curiosity that's been bugging me. More precisely: Given any integers $b\geq 1$ and $n\geq 2$, there exist integers $0\leq k, l\leq b-1$ such that $b$ divides $n^l(n^k - 1)$ exactly. The question in the title is obviously the case when n = 10. This serves as a motivating example: if we take b = 7 and n = 10, then k = 6, l = 0 works (uniquely), and if we calculate $\dfrac{n^k - 1}{b}$, we see that it turns out to be 142857 - the repeating part of the decimal expansion of 1/7.

A (very sketchy, but correct!) sketch proof, which I've included for completeness: Notice that $\dfrac{1}{99\dots 9} = 0.00\dots 01\; 00\dots 01\; 00\dots 01\dots$. Notice that $1/7$ must have either a repeating or a terminating decimal expansion: just perform the long division, and at each stage you will get remainders of 0 (so it terminates) or 1, 2, ..., 6 (so some of these will cycle in some order). It turns out the decimal expansion is repeating, and the 'repeating part' is 142857. This has length (k=)6. $\dfrac{1}{10^6 - 1} = 0.000001\;000001\dots$, so $\dfrac{142857}{10^6 - 1} = 0.142857\;142857\dots = 1/7$, and so $7\times 142857 = 10^6 - 1$. And we can do the same thing with $\dfrac{1}{10\dots 0} = 0.00\dots 01$ and terminating decimals. And the same proof of course holds in my more general setting.

But this (using the long division algorithm after spotting an unwieldy decimal expansion) feels a little artificial to me, and the statement is sufficiently general that I'm sure there must be a direct proof that I'm missing. Of course, the $n^l$ part is easy, but the $n^k-1$ part has me a little stumped. My question is: is there a direct proof of the latter part? Thanks!
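The claim is easy to test mechanically before looking for a proof. Here is a tiny sketch (Python, my own) that, for given b and n, searches for exponents k ≥ 1 and l ≥ 0 with b | n^l (n^k − 1):

```python
def find_k_l(b, n, max_exp=None):
    # Search for k >= 1 and l >= 0 with b dividing n**l * (n**k - 1).
    max_exp = max_exp if max_exp is not None else b
    for l in range(max_exp + 1):
        for k in range(1, max_exp + 1):
            if (pow(n, l, b) * (pow(n, k, b) - 1)) % b == 0:
                return k, l
    return None

# The motivating example: 1/7 in base 10 has a repeating block of length 6.
print(find_k_l(7, 10))          # (6, 0)
print((10**6 - 1) // 7)         # 142857

# Spot-check the general statement for small b and n.
assert all(find_k_l(b, n) is not None for b in range(1, 100) for n in range(2, 20))
print("ok")
```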
I am running various models in R for the sake of prediction. If I run a model and a specific variable shows itself to be insignificant (say, at the alpha=0.05 level), would I want to simply discard this variable? Would my next step be to remove the variable from the model and re-run that same type of model, then see how drastic the effect was on the R-squared and predictive accuracy? Then, run an ANOVA comparison to see if it is true that the smaller, nested model is just as "good" as the larger model? Thoughts? Is it always safe to simply remove insufficiently-significant variables?

"I am running various models in R for the sake of prediction. If I run a model and a specific variable is showing itself to be insignificant (say, at the alpha=0.05 level), would I want to simply discard this variable? Would my next step be to remove the variable from the model and re-run that same type of model, then see how drastic the effect was on the R-squared and predictive accuracy?"

I do not quite understand what "running various models in R for the sake of prediction" means; I am just assuming that you want to do hypothesis testing and conduct model selection. For example, suppose you are doing a linear regression in R: you regress one response variable on several independent variables. I am assuming that your response variable is continuous. The summary output in R will give the p-value for each of the variables; some of them are significant at $\alpha=0.05$, and some of them might not be. Then you are not sure whether you can just delete the variables that have insignificant p-values and refit the model. You plug a set of new independent variables into this model, and the returned $y$ is your predicted response.

Your question: "would I want to simply discard this variable?" Put simply: no, you should not throw away a variable merely because it is not significant. Even if all of the independent variables appear insignificant, it does not follow that none of them affects the response variable.

The ANOVA test and the F-test in R's summary output are simultaneous tests; this means that the hypothesis concerns a set of coefficients rather than a single one. For instance, given a linear model $y=\alpha + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_p x_p + \epsilon$, we have $H_0: \beta_1 = \beta_2 =\ldots=\beta_p=0$ and $H_A$: at least one $\beta_j \neq 0$. In the ANOVA, the significance of the F-test determines whether we reject $H_0$ or not. This is a simultaneous test: if the F-statistic is significant, at least one $\beta_j$ is nonzero; otherwise, there is no evidence that any of the $\beta_j$ differ from 0.

After understanding the simultaneous test, let's look at the summary output, summary(name of a linear model object); I think this is where you would decide to discard a variable based on its p-value. Still using the example above, $y=\alpha + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_p x_p + \epsilon$, the summary() output has a p-value for each variable, and those p-values are calculated from the t-distribution. After discarding one variable, the p-values in the new model have a different meaning compared to the model above; the hypothesis tested by the F-test is different, and there is no correction for this selection. When you repeat this procedure multiple times and compare your final selected model to your original model, you are comparing apples to oranges (a small simulated illustration is sketched below).
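A minimal simulated illustration (in Python with statsmodels rather than R, and with made-up data): two correlated predictors both drive y, yet in the full fit one of them can look insignificant, and after dropping it the remaining coefficient and its p-value change their meaning.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)   # strongly correlated with x1
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

X_full = sm.add_constant(np.column_stack([x1, x2]))
full = sm.OLS(y, X_full).fit()
print("full model p-values:", full.pvalues)       # x1 and/or x2 may look 'insignificant'
print("full model F-test p-value:", full.f_pvalue)

X_red = sm.add_constant(x1)
reduced = sm.OLS(y, X_red).fit()
print("reduced model p-values:", reduced.pvalues) # x1's coefficient now absorbs x2's effect
```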
However, there is one scenario in which you can drop one variable given all the other variables: when the design matrix is orthogonal. But this situation is very rare. In summary, discarding insignificant variables and stepwise model selection (backward, forward and bidirectional) are all subject to this kind of criticism. Here are several methods to fix this:
When you say "why aren't things being destroyed", you presumably mean "why aren't the chemical bonds that hold objects together being broken". Now, we can determine the energy it takes to break a bond - that's called the "bond energy". Let's take, for example, a carbon-carbon bond, since it's a common one in our bodies. The bond energy of a carbon-carbon bond is $348\,\rm kJ/mol$, which works out to $5.8 \cdot 10^{-19}\,\rm J$ per bond. If an impacting gas molecule is to break this bond, it must (in a simplified collision scenario) have at least that much energy to break the bond. If the average molecule has that much energy, we can calculate what the temperature of the gas must be: $$E_\text{average} = k T$$$$T = \frac{5.8 \cdot 10^{-19}\,\rm J}{1.38 \cdot 10^{-23}\,\rm m^2 kg\, s^{-2} K^{-1}}$$$$T = 41,580\rm °C$$ That's pretty hot! Now, even if the average molecule doesn't have that energy, some of the faster-moving ones might. Let's calculate the percentage that have that energy at room temperature using the Boltzmann distribution for particle energy: $$f_E(E) = \sqrt{\frac{4 E}{\pi (kT)^3}} \exp\left(\frac{-E}{kT} \right)$$ The fraction of particles with energy greater than or equal to that amount should be given by this integral: $$p(E \ge E_0) = \int_{E_0}^{\infty} f_E(E) dE$$ In our situation, $E_0 = 5.8 \cdot 10^{-19}\,\rm J$, and this expression yields $p(E \ge E_0) = 1.9 \cdot 10^{-61}$. So, the fraction of molecules at room temperature with sufficient kinetic energy to break a carbon-carbon bond is $1.9 \cdot 10^{-61}$, an astoundingly small number. To put that in perspective, if you filled a sphere the size of Earth's orbit around the sun with gas at STP, you would need around 16 of those spheres to expect to have even one gas particle with that amount of energy. So that's why these "torpedoes" don't destroy things generally - they aren't moving fast enough at room temperature to break chemical bonds!
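For anyone who wants to reproduce the tail fraction, here is a small sketch with mpmath (my own check). Integrating the energy distribution above from $E_0$ to infinity gives the regularized upper incomplete gamma function $Q(3/2, E_0/kT)$. The exact output depends on the temperature and bond energy you plug in (an assumed 293 K and 348 kJ/mol per mole of bonds are used below), but it stays of the same astronomically small order of magnitude as quoted in the text.

```python
from mpmath import mp, mpf, gammainc

mp.dps = 50

k = mpf('1.380649e-23')            # Boltzmann constant, J/K
NA = mpf('6.02214076e23')          # Avogadro's number
E0 = mpf('348e3') / NA             # C-C bond energy per bond, J (about 5.8e-19 J)
T = mpf(293)                       # assumed room temperature, K

x = E0 / (k * T)

# Fraction of molecules with energy >= E0 under the Boltzmann energy distribution:
# integral of f_E from E0 to infinity = regularized upper incomplete gamma Q(3/2, x).
p = gammainc(mpf(3) / 2, x, mp.inf, regularized=True)
print(p)   # an astronomically small number
```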
Automata Theory - Brzozowski Algebraic Method

A LaTeX-typeset copy of this tutorial is available for download: Brzozowski_Algebraic_Method.pdf

I. Introduction

Recall that a language is regular if and only if it is accepted by some finite state automaton. In other words, each regular expression has a corresponding finite state automaton. An important problem in Automata Theory deals with converting between regular expressions and finite state automata. The Brzozowski Algebraic Method is an intuitive algorithm which takes a finite state automaton and returns a regular expression. This algorithm relies on notions from graph theory and linear algebra; particularly, graph walks and the substitution method for solving systems of equations.

II. Notions From Graph Theory

In this section, the notions of a graph walk and the adjacency matrix will be introduced, along with some relevant results. I start with the definition of a walk.

Walk: Let G be a graph, and let v = (v_{1}, v_{2}, ..., v_{n}) be a sequence of vertices (not necessarily distinct). Then v is a walk if v_{i}, v_{i+1} are adjacent for all i \in \{1, ..., n-1\}. Intuitively, we start at some vertex v_{1}. Then we visit v_{2}, which is a neighbor of v_{1}. Then v_{3} is a neighbor of v_{2}.

Consider the cycle graph on four vertices, C_{4}, which is shown below. Some example walks include (v_{a}, v_{b}, v_{d}) and (v_{a}, v_{b}, v_{a}, v_{c}, v_{a}). However, (v_{a}, v_{d}, v_{a}, v_{b}) is not a walk, as v_{a} and v_{d} are not adjacent.

Consider an example of a walk on a directed graph. Observe that (S, A, A, q_{acc}) is a walk, but (S, A, S) is not a walk as there is no directed edge (A, S) in the graph.

The adjacency matrix will now be defined, and we will explore how it relates to the notion of a walk.

Adjacency Matrix: Let G be a graph. The adjacency matrix A \in \{0, 1\}^{V \times V}, with A_{ij} = 1 if vertex i is adjacent to vertex j, and A_{ij} = 0 otherwise. Note: If G is undirected, then A_{ij} = A_{ji}. Finite state automata diagrams are directed graphs, though, so A_{ij} may be different from A_{ji}.

Consider the simple C_{4} graph above (with edges v_{a}v_{b}, v_{a}v_{c}, v_{b}v_{d}, v_{c}v_{d}). The adjacency matrix for this graph, with rows and columns ordered v_{a}, v_{b}, v_{c}, v_{d}, is:

0 1 1 0
1 0 0 1
1 0 0 1
0 1 1 0

Similarly, the adjacency matrix of the above directed graph is:

The adjacency matrix is connected to the notion of a walk by the fact that (A^{n})_{ij} counts the number of walks of length n starting at vertex i and ending at vertex j. The proof of this theorem provides the setup for the Brzozowski Algebraic Method. For this reason, I will provide a formal proof of this theorem.

Theorem: Let G be a graph and let A be its adjacency matrix. Let n \in \mathbb{N}. Then each cell of A^{n}, denoted (A^{n})_{ij}, counts the number of walks of length n starting at vertex i and ending at vertex j.

Proof: This theorem will be proven by induction on n \in \mathbb{N}. Consider the base case of n = 1. By definition of the adjacency matrix, if vertices i and j are adjacent, then A_{ij} = 1. Otherwise, A_{ij} = 0. Thus, A counts the number of walks of length 1 in the graph. Thus, the theorem holds at n = 1.

Suppose the theorem holds for an arbitrary integer k \geq 1. The theorem will be proven true for the k+1 case. As matrix multiplication is associative, A^{k+1} = A^{k} \cdot A. By the inductive hypothesis, A^{k} counts the number of walks of length k in the graph, and A counts the number of walks of length 1 in the graph.
Consider: (A^{k+1})_{ij} = \sum_{x=1}^{n} ((A^{k})_{ix} \cdot A_{xj}). And so for each x \in \{1, ..., n\}, (A^{k})_{ix} \cdot A_{xj} \neq 0 if and only if there exists at least one walk of length k from vertex i to vertex x, and an edge from vertex x to vertex j. Summing over x therefore counts each walk of length k+1 from vertex i to vertex j exactly once, according to its next-to-last vertex x. Thus, the theorem holds by the principle of mathematical induction.

Example: Consider again the graph C_{4}, and (A(C_{4}))^{2}, given below. This counts the number of walks of length 2 on C_{4}.

2 0 0 2
0 2 2 0
0 2 2 0
2 0 0 2

Observe that ((A(C_{4}))^{2})_{14} states that there are two walks of length 2 from vertex v_{a} to vertex v_{d}. These walks are (v_{a}, v_{b}, v_{d}) and (v_{a}, v_{c}, v_{d}).

III. Brzozowski Algebraic Method

The Brzozowski Algebraic Method takes a finite state automata diagram (the directed graph) and constructs a system of linear equations to solve. Solving a subset of these equations will yield the regular expression for the finite state automaton. I begin by defining some notation. Let E_{i} denote the regular expression describing the strings which take the finite state automaton from state q_{0} to state q_{i}. The system of equations consists of recursive definitions for each E_{i}, where the recursive definition consists of sums of E_{j}R_{ji} products, and where R_{ji} is a regular expression consisting of the union of single characters. That is, R_{ji} represents the selection of single transitions from state j to state i, or single edges (j, i) in the graph. So if \delta(q_{j}, a) = \delta(q_{j}, b) = q_{i}, then R_{ji} = (a + b). In other words, E_{j} takes the finite state automaton from state q_{0} to q_{j}, and R_{ji} is a regular expression describing strings that will take the finite state automaton from state j to state i in exactly one step. That is:

E_{i} = \sum_{j \in Q, \text{ there is an edge from state } j \text{ to state } i} E_{j}R_{ji}

(with an additional \lambda term in the equation for the start state, to account for the empty string). Note: Recall that addition when dealing with regular expressions is the set union operation.

Once we have the system of equations, we solve it by backwards substitution, just as in linear algebra and high school algebra. The explanation of this algorithm is dense, though. Let's work through an example to better understand it.

We seek a regular expression over the alphabet \Sigma = \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\} describing those integers whose value is 0 modulo 3. In order to construct the finite state automaton for this language, we take advantage of the fact that a number n \equiv 0 \pmod{3} if and only if the sum of n's digits is also divisible by 3. For example, we know 3|123 because 1 + 2 + 3 = 6, a multiple of 3. However, 125 is not divisible by 3 because 1 + 2 + 5 = 8 is not a multiple of 3.

Now for simplicity, let's partition \Sigma into its equivalence classes a = \{0, 3, 6, 9\} (values congruent to 0 mod 3), b = \{1, 4, 7\} (values congruent to 1 mod 3), and c = \{2, 5, 8\} (values congruent to 2 mod 3). Similarly, we let state q_{0} represent a running digit sum in class a (congruent to 0), state q_{1} a sum in class b, and state q_{2} a sum in class c. Thus, the finite state automaton diagram is given below, with q_{0} as the accepting halt state (reading a digit in class j while in state q_{i} moves the machine to state q_{(i+j) mod 3}).

We consider the system of equations given by E_{i}, taking the FSM from state q_{0} to q_{i}:

E_{0} = \lambda + E_{0}a + E_{1}c + E_{2}b

To end at q_{0}, we can read in the empty string; or go from q_{0} \to q_{0} and read in a character in a; or go from q_{0} \to q_{1} and read in a character in c; or go from q_{0} \to q_{2} and read in a character from b.
E_{1} = E_{0}b + E_{1}a + E_{2}c

To transition from q_{0} \to q_{1}, we can go from q_{0} \to q_{0} and read in a character from b; go from q_{0} \to q_{1} and read in a character from a; or go from q_{0} \to q_{2} and read in a character from c.

E_{2} = E_{0}c + E_{1}b + E_{2}a

To transition from q_{0} \to q_{2}, we can go from q_{0} \to q_{0} and read in a character from c; go from q_{0} \to q_{1} and read in a character from b; or go from q_{0} \to q_{2} and read in a character from a.

Since q_{0} is the accepting halt state, only a closed-form expression for E_{0} is needed. There are two steps which are employed: the first is to simplify a single equation, and the second is to backwards-substitute it into a different equation. We repeat this process until we have the desired closed-form solution for the relevant E_{i} (in this case, just E_{0}). In order to simplify a variable, we apply Arden's Lemma, which states that if E = \alpha + E\beta, then E = \alpha(\beta)^{*}, where \alpha, \beta are regular expressions.

We start by simplifying E_{2} using Arden's Lemma: E_{2} = (E_{0}c + E_{1}b)a^{*}. We then substitute E_{2} into E_{1}, giving us E_{1} = E_{0}b + E_{1}a + (E_{0}c + E_{1}b)a^{*}c = E_{0}(b + ca^{*}c) + E_{1}(a + ba^{*}c). By Arden's Lemma, we get E_{1} = E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}.

Substituting again, E_{0} = \lambda + E_{0}a + E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}c + (E_{0}c + E_{1}b)a^{*}b. Expanding out, we get E_{0} = \lambda + E_{0}a + E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}c + E_{0}ca^{*}b + E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}ba^{*}b. Then factoring out: E_{0} = \lambda + E_{0}(a + ca^{*}b + (b + ca^{*}c)(a + ba^{*}c)^{*}(c + ba^{*}b) ). By Arden's Lemma, we have: E_{0} = (a + ca^{*}b + (b + ca^{*}c)(a + ba^{*}c)^{*}(c + ba^{*}b) )^{*}, a closed-form regular expression for the integers congruent to 0 mod 3 over \Sigma.

Note: The hard part of this algorithm is the careful bookkeeping. In a substitution step, the regular expression for an E_{i} may grow and require simplification. Be careful to keep track of how the regular expression is expanded on a substitution step, and how it is possible to factor out terms.
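The closed form can be sanity-checked mechanically. Here is a small sketch (Python, my own) that transliterates the final expression into Python regular-expression syntax, with a, b, c replaced by the digit classes [0369], [147], [258], and tests it against ordinary divisibility by 3:

```python
import re

A, B, C = "[0369]", "[147]", "[258]"   # the classes a, b, c from the tutorial

# E0 = ( a + c a* b + (b + c a* c)(a + b a* c)*(c + b a* b) )*
inner = f"(?:{B}|{C}{A}*{C})(?:{A}|{B}{A}*{C})*(?:{C}|{B}{A}*{B})"
E0 = re.compile(f"(?:{A}|{C}{A}*{B}|{inner})*")

for n in range(1, 10000):
    s = str(n)
    assert bool(E0.fullmatch(s)) == (n % 3 == 0), s
print("the regular expression agrees with n % 3 == 0 for all n < 10000")
```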
n=6

[math]15 \leq \overline{c}^\mu_6 \leq 16[/math]:

Here, "upper triangle" means the first four rows (with 060 at top) and "lower trapezoid" means the bottom three rows. Suppose 11 removals leave a triangle-free set.

First, suppose that 5 removals come from the upper triangle and 6 come from the lower trapezoid.

Suppose the trapezoid 600-420-321-303 uses solution IV. There are three disjoint triangles 402-222-204, 213-123-114, and 105-015-006. The remainder of the points in the lower trapezoid (420, 321, 510, 501, 402, 312, 024) must be left open. 024 being open forces either 114 or 015 to be removed. Suppose 114 is removed. Then 213 is open, and with 312 open that forces 222 to be removed. Then 204 is open, and with 024 that forces 006 to be removed. So the bottom trapezoid is a removal configuration of 600-411-303-222-114-006, and the rest of the points in the bottom trapezoid are open. All 10 points in the upper triangle form equilateral triangles with bottom trapezoid points, hence 10 removals in the upper triangle would be needed, so 114 being removed doesn't work. Suppose 015 is removed. Then 006-024 forces 204 to be removed. Regardless of where the removal in 123-213-114 falls, the points 420, 321, 222, 024, 510, 312, 501, 402, 105, and 006 must be open. This forces upper triangle removals at 330, 231, 042, 060, 051, 132, which is more than the 5 allowed, so 015 being removed doesn't work, and hence the trapezoid 600-420-321-303 doesn't use solution IV.

Suppose the trapezoid 600-420-321-303 uses solution VI. The trapezoid 303-123-024-006 can't be IV (already eliminated by symmetry) or VI' (leaves the triangle 402-222-204). Suppose the trapezoid 303-123-024-006 is solution VI. The removals from the lower trapezoid are then 420, 501, 312, 123, 204, and 015, leaving the remaining points in the lower trapezoid open. The remaining open points force 10 upper triangle removals, so the trapezoid 303-123-024-006 can't be solution VI. Therefore the trapezoid 303-123-024-006 is solution V. The removals from the lower trapezoid are then 420, 510, 312, 204, 114, and 105. The remaining points in the lower trapezoid are open, and force 9 upper triangle removals, hence the trapezoid 303-123-024-006 can't be V, and the solution for 600-420-321-303 can't be VI. The solution VI' for the trapezoid 600-420-321-303 can be eliminated by the same logic by symmetry. Therefore it is impossible for 5 removals to come from the upper triangle and 6 to come from the lower trapezoid.

Therefore 4 removals come from the upper triangle and 7 come from the lower trapezoid. At this point note the triangle 141-411-114 must have one point removed, so let it be 141 and note that any logic that follows is also true for a removal of 411 or 114 by symmetry. This implies the upper triangle must have either solution I or II.

Suppose it has solution II. Note there are five disjoint triangles 600-510-501, 411-321-312, 402-222-204, 213-123-114, and 105-015-006. Suppose 420 and 024 are removed. Then, noting 303 must be open, 600 must be removed, leaving 510 open. 510-240 forces 213 to be removed, and 510-150 forces 114 to be removed. But 213 and 114 are in the same disjoint triangle.
Hence 420 and 024 can't both be removed. So at least one of 420 and 024 is open. Let it be 420, noting that by symmetry identical logic will apply if instead 024 is the open one. Then 321, 222, and 123 are removed based on 420 and the open spaces in the upper triangle. This leaves four disjoint triangles 600-501-510, 402-303-312, 213-033-015, 204-114-105. So 411 and 420 are open, forcing the removal of 510. This leaves 501 open, and 501-411 forces the removal of 402. 600, 303, and 330 are then open, forming an equilateral triangle. Therefore 420 isn't open, and by the same symmetry neither is 024, contradicting that at least one of them is open; therefore the upper triangle can't have solution II.

Therefore the upper triangle has solution I. Suppose 222 is open. 222 with open points in the upper triangle forces 420, 321, 123, and 024 to be removed. This leaves four disjoint triangles 411-501-402, 213-303-204, 015-105-006, and 132-312-114. This would force 8 removals in the lower trapezoid, so 222 must be closed.

Therefore 222 is removed. There are six disjoint triangles 150-420-123, 051-321-024, 231-501-204, 132-402-105, 510-240-213, and 312-042-015. So 600, 411, 303, 114, and 006 are open. 600-240 open forces 204 to be removed and 600-150 open forces 105 to be removed. This forces 501 and 402 to be open, but 411 is open, so there is the equilateral triangle 501-411-402. Therefore the solution of the upper triangle is not I, and we have a contradiction. So [math] \overline{c}^\mu_6 \neq 17 [/math].

n = 7

n = 8

[math]\overline{c}^\mu_{8} \geq 22[/math]: 008,026,044,062,107,125,134,143,152,215,251,260,314,341,413,431,440,512,521,620,701,800

n = 9

[math]\overline{c}^\mu_{9} \geq 26[/math]: 027,045,063,081,126,135,144,153,207,216,252,270,315,342,351,360,405,414,432,513,522,531,603,630,720,801

n = 10

[math]\overline{c}^\mu_{10} \geq 29[/math]: 028,046,055,064,073,118,172,181,190,208,217,235,262, 316,334,352,361,406,433,442,541,550,604,613,622, 721,730,901,1000

General n

An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero.

A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math] since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math] which comes from deleting two bottom rows of a triangle-free set and counting how many vertices are possible in those rows.

Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].

Asymptotics

The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math].
By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
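As a sanity check on constructions like the ones listed for n = 8, 9, 10 above, here is a short Python sketch (not from the wiki page) that tests whether a given subset of [math]\Delta_n[/math] is triangle-free in the sense used here; it is shown applied to the 22-point set claimed for n = 8.

```python
def is_triangle_free(points, n):
    """True if `points` (a subset of Delta_n) contains no equilateral triangle
    (a+r, b, c), (a, b+r, c), (a, b, c+r) with r > 0."""
    s = set(points)
    for m in range(n):                 # m = a + b + c of the "base" point, so r = n - m > 0
        r = n - m
        for a in range(m + 1):
            for b in range(m - a + 1):
                c = m - a - b
                if (a + r, b, c) in s and (a, b + r, c) in s and (a, b, c + r) in s:
                    return False
    return True

# The 22-point set listed above for n = 8, written out as coordinate triples.
example_n8 = [(0,0,8), (0,2,6), (0,4,4), (0,6,2), (1,0,7), (1,2,5), (1,3,4), (1,4,3),
              (1,5,2), (2,1,5), (2,5,1), (2,6,0), (3,1,4), (3,4,1), (4,1,3), (4,3,1),
              (4,4,0), (5,1,2), (5,2,1), (6,2,0), (7,0,1), (8,0,0)]

# If the construction on the page is correct, this prints: 22 True
print(len(example_n8), is_triangle_free(example_n8, 8))
```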
We have this formula for centripetal acceleration: $a = v \frac{d\theta}{dt} = v\omega = \frac{v^2}{r}$. In the case of ordinary (linear) acceleration I know that the speed at $t_1$ is $v_0 + a \cdot (t_1 - t_0)$, but in the circular case I don't understand the nature of the acceleration: the speed is always the same if the motion is uniform, yet the acceleration is not zero. EDIT: Suppose T is 1, $t_0 = 0$, $t_1 = .25$, so $v$ was rotated by $\frac{\pi}{2}$. So $\frac{\bar{a}}{4} = \bar{v_1} - \bar{v_0}$ and $|a| = 4\sqrt{v^2 + v^2} = 4\sqrt{2} v$, but with the previous formula the magnitude of a is $\frac{v^2}{4r}$. ANSWER: the physical meaning of $a$ is the length of the arc swept out by the velocity vector per unit time!
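A quick numerical sketch (not part of the original exchange) may help here: it compares the finite-difference estimate $|\bar v_1 - \bar v_0|/\Delta t$ with $v^2/r$ for uniform circular motion, using the same $T = 1$ as in the edit and an illustrative radius $r = 1$. Over a quarter period the estimate is $4\sqrt{2}\,v \approx 35.5$, and as $\Delta t \to 0$ it approaches the instantaneous value $v^2/r = 4\pi^2 \approx 39.5$.

```python
import math

# Uniform circular motion; r and T are illustrative values (T = 1 matches the edit above).
r, T = 1.0, 1.0
omega = 2 * math.pi / T
v = omega * r                      # speed (constant in magnitude)

def velocity(t):
    # velocity vector at time t, tangent to the circle of radius r
    return (-v * math.sin(omega * t), v * math.cos(omega * t))

for dt in (0.25, 0.05, 0.001):
    v0, v1 = velocity(0.0), velocity(dt)
    avg_acc = math.hypot(v1[0] - v0[0], v1[1] - v0[1]) / dt   # |delta v| / delta t
    print(dt, avg_acc)

print("v^2/r =", v * v / r)        # ~39.478, the limit of |delta v|/delta t as dt -> 0
```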
The "iterated" limit$$ \lim_{x\to 0} \lim_{y\to 0} x\sin \Bigl(\frac{1}{y}\Bigr)$$does not exist, because the inner limit does not exist. EDIT: my first treatment of the limit for $(x,y)\to (0,0)$ was not correct. I think that now all works well. We have to be careful about the following limit in $\mathbb{R}^2$$$\lim_{(x,y)\to (0,0)} x\sin \Bigl(\frac{1}{y}\Bigr)$$Call for simplicity $f(x,y) = x\sin \Bigl( \frac{1}{y}\Bigr)$. The point $(0,0)$ is an accumulation point for pairs of the form $(x,0)$, where the inner function is not defined. It turns out that the limit does not exist, because the sequence $(\frac{1}{n},0)$ converges to $(0,0)$ as $n$ approaches $+\infty$, but $f(\frac{1}{n},0)$ does not converge to any value (indeed $f$ is not defined at any point of that sequence); while at the same time, there are infinitely many sequences $(x_n,y_n)$ converging to $(0,0)$ such that $f(x_n,y_n)$ converges to $0$. However, it is meaningful to consider the limit$$\lim_{\substack{(x,y)\to (0,0) \\ y \neq 0}} x\sin \Bigl(\frac{1}{y}\Bigr)$$We are simply restricting ourselves to the domain of $f$. In this region, which is $D = \mathbb{R}^2 \setminus \{y=0\}$, the inner function is everywhere defined and we can argue in a simple way: the sine is bounded everywhere and the function $x$ is infinitesimal as $(x,y)\to (0,0)$. Since this estimate holds in the whole of $D$, we conclude that the above limit exists and is equal to $0$.
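Spelled out, the estimate used in that last step is the following squeeze bound (just a restatement of the argument above in $\varepsilon$-$\delta$ form):

$$\Bigl|x\sin\Bigl(\frac{1}{y}\Bigr)\Bigr| \le |x| \le \sqrt{x^2+y^2} \qquad \text{for all } (x,y)\in D,$$

so, given $\varepsilon>0$, taking $\delta=\varepsilon$ gives $\bigl|x\sin(1/y)-0\bigr|<\varepsilon$ whenever $(x,y)\in D$ and $0<\sqrt{x^2+y^2}<\delta$; hence the restricted limit is $0$.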
If a matrix $A$ satisfies $x^TAx<0$ for some vector $x \neq 0,$ I wanna show that $\|A\| \neq 0$ for any matrix norm. Another one: if the spectral radius of $B$ satisfies $\rho(B)>1,$ I also want to show that $\|B\| \neq 0$ for any matrix norm. I try like this: Since $x^TAx < 0$ for some $ x \neq 0,$ then $ 0 < \|x^TAx\| \leq \|x^T\|\|A\|\|x\|.$ Thus $ \|A\| \geq \frac{\|x^TAx\|}{\|x\|\|x^T\|}>0.$ Then by the equivalence of norms, there is a number $c>0$ such that $ c\|A\| \leq \|A\|_0$ for any matrix norm $\| \cdot\|_0.$ Thus $\|A\|_0 > 0$ for any matrix norm. The second one: If $ \rho(B)>1,$ let $ \lambda$ be an eigenvalue of $B$ with $ | \lambda| = \rho(B)$ and let $x \neq 0$ be a corresponding eigenvector; then $ \|x\| < \rho(B) \|x\| = |\lambda| \|x\| = \|\lambda x\| = \|Bx\| \leq \|B\| \|x\|.$ Thus $\|B\| \geq 1$ for any matrix norm. I think I'm right, yes? Edit: After discussion with @user1551 below, I mention that the proof above assumed that $\|Ax\| \leq \|A\|\|x\|,$ that is, the matrix norm is induced by a vector norm. Otherwise this is not true in general for any matrix norm, and a contradicting example has been provided by @user1551.
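As a quick numerical illustration (not part of the original question) of the fact used in the second argument, namely that $\rho(B) \le \|B\|$ for operator norms induced by vector norms, one can check a small example with NumPy; the matrix below is arbitrary.

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [0.5, 1.5]])

rho = max(abs(np.linalg.eigvals(B)))      # spectral radius, 2.5 for this matrix

# Induced (operator) norms: ord=1 is the max column sum, ord=2 the largest
# singular value, ord=inf the max row sum. Each should be >= rho.
for p in (1, 2, np.inf):
    print(p, np.linalg.norm(B, p), np.linalg.norm(B, p) >= rho)
```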
Time limit: 2 seconds · Memory limit: 512 MB · Submissions: 11 · Accepted: 6 · Solvers: 6 · Acceptance rate: 66.667%

Bessie's younger cousins, Ella and Bella, are visiting the farm. Unfortunately, they have been causing nothing but mischief since they arrived. In their latest scheme, they have decided to mow as much grass as they can. The farm's prime grassland is in the shape of a large $T \times T$ square. The bottom-left corner is $(0,0)$, and the top-right corner is $(T,T)$. The square therefore contains $(T+1)^2$ lattice points (points with integer coordinates). Ella and Bella plan to both start at $(0,0)$ and run at unit speed to $(T, T)$ while each holding one end of a very sharp and very stretchy wire. Grass in any area that is swept by this wire will be cut. Ella and Bella may take different paths, but each path consists of only upward and rightward steps, moving from lattice point to lattice point. Bessie is rather concerned that too much grass will be cut, so she invents a clever plan to constrain the paths Ella and Bella take. There are $N$ yummy flowers ($1 \leq N \leq 2 \cdot 10^5$) scattered throughout the grassland, each on a distinct lattice point. Bessie will pick a set $S$ of flowers that will be required for both Ella and Bella to visit (so Ella's path must visit all the flowers in $S$, and so must Bella's path). In order to add as many waypoints to these paths as possible, Bessie will choose $S$ to be as large as possible among subsets of flowers that can be visited by a cow moving upward and rightward from $(0,0)$ to $(T,T)$. Ella and Bella will try to maximize the amount of grass they cut, subject to the restriction of visiting flowers in $S$. Please help Bessie choose $S$ so that the amount of grass cut is as small as possible.

The first line contains $N$ and $T$ ($1 \leq T \leq 10^6$). Each of the next $N$ lines contains the integer coordinates $(x_i, y_i)$ of a flower. It is guaranteed that $1 \leq x_i, y_i \leq T-1$ for all $i$, and no two flowers lie on the same horizontal or vertical line. In at least 20% of the test cases, it is further guaranteed that $N \leq 3200$.

A single integer, giving the minimum possible amount of cut grass.

Sample input:
5 20
19 1
2 6
9 15
10 3
13 11

Sample output:
117

In the above example, it is optimal for Bessie to pick the flowers at $(10,3)$ and $(13,11)$. Then in the worst case, Ella and Bella will cut three rectangles of grass with total area $117$.
Let X be a complex connected projective algebraic surface and let L be an ample line bundle on X. The maps associated with the pluriadjoint bundles t(K_X+L), t \geq 2, are studied by combining an ampleness result for K_X+L with a very recent result by Reider. It turns out that apart from some exceptions and up to reductions: 1) 3(K_X+L) is very ample; 2) 2(K_X+L) is ample and spanned by global sections, and is very ample unless either g(L)=2 (arithmetic genus of L) or X contains an elliptic curve E with E^2 = 0, EL=1; 3) when 2(K_X+L) is not very ample, the associated map has degree \leq 4, equality implying that g(L)=2 and \chi(O_X)=0.

Title: Pluriadjoint bundles of polarized surfaces
Authors: LANTERI, ANTONIO (first author)
Keywords: projective algebraic surface; ample line bundle; adjunction
Scientific disciplinary sector: MAT/03 - Geometria (Geometry)
Publication date: 1990
Journal:
Type: Article (author)
Digital Object Identifier (DOI): 10.1007/BF01298854
Appears in collections: 01 - Journal article
The IPv6 (Internet Protocol version 6) has a very large address space. In fact, since an IPv6 address is $128$ bits long, the number of possible IPv6 addresses is: $$ \boxed{ N_{\textrm{IPv6}} = 2^{128} \approx 3.4 \times 10^{38} } $$ For comparison, the number of: cells in a human body is about $100$ trillion ($10^{14}$), stars in the observable universe seems to be in the range $10^{22} - 10^{24}$ In other words, the number of possible IPv6 addresses is unimaginably immense. In contrast, IPv4 addresses are only $32$ bits long, so the IPv4 address space contains only $N_{\textrm{IPv4}} = 2^{32} \approx 4.3 \times 10^{9}$ (about $4.3$ billion) addresses. This small address space is one of the causes of the IPv4 address exhaustion. Let's compute the number of unique IPv6 addresses that could be assigned to each square meter of the Earth's surface. Since the Earth is approximately a sphere with radius $R_{\textrm{Earth}} = 6.4 \times 10^6\textrm{m}$, its area can be computed as below: $$ A_{\textrm{Earth}} = 4\pi R_{\textrm{Earth}}^2 \approx 4 \times 3.14 \times (6.4 \times 10^6 \textrm{m})^2 \approx 5.1 \times 10^{14} \textrm{m}^2 $$ The number of IPv6 addresses per square meter of the Earth's surface is then: $$ \boxed{ \lambda_{\textrm{IPv6}} := \displaystyle\frac{N_{\textrm{IPv6}}}{A_{\textrm{Earth}}} \approx 6.6\times 10^{23} \textrm{m}^{-2} } $$ Interestingly, this number is close to the Avogadro constant ($N_A = 6.022\times 10^{23}$). From this we can compute how much area a single IPv6 address would "occupy". This value is given by: $$ \boxed{ A_{\textrm{IPv6}} := \displaystyle\frac{1}{\lambda_{\textrm{IPv6}}} \approx 1.5\times 10^{-24} m^2 = 1.5 \textrm{pm}^2 } $$ since $1\textrm{pm} = 10^{-12}\textrm{m}$ ($\textrm{pm}$ stands for picometer). Given that a Helium atom, which is the smallest (electrically neutral) atom, has a maximum cross-sectional area (imagine a plane cutting through the nucleus of a Helium atom; the area of the plane inside the atom is what I mean by its "maximum cross-sectional area") of approximately: $$ A_{\textrm{He}} = \pi R_{\textrm{He}}^2 \approx 3.14 \times (31 \textrm{pm})^2 \approx 3000\textrm{pm}^2 \approx 2000A_{\textrm{IPv6}} $$ then each atom on the Earth's surface could have thousands of unique IPv6 addresses assigned to it. To finalize, it must be said that although huge, the IPv6 address space might still be small enough to save our planet one day.
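The arithmetic above is easy to reproduce; here is a small Python sketch (not part of the original post) that recomputes the address density and the area per address from the same inputs.

```python
import math

n_ipv6 = 2 ** 128                        # number of possible IPv6 addresses
r_earth = 6.4e6                          # Earth radius in metres, as approximated above
a_earth = 4 * math.pi * r_earth ** 2     # Earth's surface area in m^2

density = n_ipv6 / a_earth               # addresses per square metre
area_per_address = 1 / density           # square metres per address

print(f"N_IPv6            = {n_ipv6:.3e}")                 # ~3.403e+38
print(f"A_Earth           = {a_earth:.3e} m^2")            # ~5.15e+14
print(f"addresses per m^2 = {density:.3e}")                # ~6.6e+23
print(f"area per address  = {area_per_address:.3e} m^2")   # ~1.5e-24, i.e. ~1.5 pm^2
```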
Thanks so much to everyone for trying out my transfinite epistemic logic puzzle, which I have given the name Cheryl’s Rational Gifts, on account of her gifts to Albert and Bernard. I hope that everyone enjoyed the puzzle. See the list of solvers and honorable mentions at the bottom of this post. Congratulations! As I determine it, the solution is that Albert has the number $100\frac38$, and Bernard has the number $100\frac7{16}$. Let me explain my reasoning and address a few issues that came up in the comments. First, let’s understand the nature of the space of possible numbers that Cheryl describes, those of the form the form $$n-\frac{1}{2^k}-\frac{1}{2^{k+r}},$$ where $n$ and $k$ are positive integers and $r$ is a non-negative integer. Although this may seem complicated at first, in fact this set consists simply of a large number of increasing convergent sequences, one after the other. Specifically, the smallest of the numbers is $0=1-\frac12-\frac12$, and then $\frac14$, $\frac38$, $\frac7{16}$, and so on, converging to $\frac12$, which itself arises as $\frac12=1-\frac14-\frac14$. So the numbers begin with the increasing convergent sequence $$0 \quad\frac14\quad \frac38\quad \frac7{16}\quad\cdots\quad\to\quad \frac12.$$Immediately after this comes another sequence of points of the form $1-\frac14-\frac1{2^{2+r}}$, which converge to $\frac34$, which itself arises as $1-\frac18-\frac18$. So we have $$\frac12\quad \frac58\quad \frac{11}{16}\quad\frac{23}{32}\quad\cdots\quad\to\quad \frac34.$$Following upon this, there is a sequence converging to $\frac78$, and then another converging to $\frac{15}{16}$, and so on. Between $0$ and $1$, therefore, what we have altogether is an increasing sequence of increasing sequences of rational numbers, where the start of the next sequence is precisely the limit of the previous sequence. The same pattern recurs between $1$ and $2$, between $2$ and $3$, and indeed between any positive integer $n$ and its successor $n+1$, for the numbers the occur between $n$ and $n+1$ are simply a translation of the numbers between $0$ and $1$. Thus, for every positive integer $k$ we have $n-\frac1{2^k}$ as the limit of the numbers $n-\frac{1}{2^k}-\frac{1}{2^{k+r}}$, as $r$ increases. Between any two non-negative integers, therefore, we have an increasing sequence of converging increasing sequences. Altogether, we have infinitely many copies of the picture between $0$ and $1$, which was infinitely many increasing convergent sequences, one after the other. For those readers who are familiar with the ordinals, what this means is that the set of numbers forms a closed set of order type exactly $\omega^3$. We may associate the number $n-\frac{1}{2^k}-\frac{1}{2^{k+r}}$ with the ordinal number $\omega^2\cdot (n-1)+\omega\cdot (k-1)+r$, and observe that this correspondence is a (continuous) order-isomorphism of our numbers with the ordinals below $\omega^3$. In this way, we could replace all talk of the specific rational numbers in this puzzle with their corresponding ordinals below $\omega^3$ and imagine that Cheryl has actually given her friends ordinals rather than rationals. But to explain the solution, allow me to stick with the rational numbers. The fact that Albert initially does not know who has the larger number implies that Albert does not have $0$, the smallest number overall, since if he were to have had $0$, then he would have known that Bernard’s must have been larger. 
Since then Bernard does not know, it follows that his number is neither $0$ nor $\frac14$, which is the next number, since otherwise he would have known that Albert’s number must have been larger. Since Albert continues not to know, we rule out the numbers up to $\frac38$ for him. And then similarly ruling out the numbers up to $\frac7{16}$ for Bernard. In this way, each step of the back-and-forth continuing denials of knowing eliminates the lowest remaining numbers from possibility. Consequently, when Cheryl interrupts the first time, we learn that Albert and Bernard cannot have numbers on the first increasing sequence (below $\frac12$), since otherwise they would at some point come to know in that back-and-forth procedure who has the larger number, and so it wouldn’t be true that they wouldn’t know no matter how long they continued the back-and-forth, as Cheryl stated. Thus, after her remark, both Albert and Bernard know that both numbers are at least $\frac12$, which is the first limit point of the set of possible numbers. Since at this point Albert states that he still doesn’t know who has the larger number, it cannot be that he has $\frac12$ himself, since otherwise he would have known that he had the smaller number. And then next since Bernard still doesn’t know, it follows that Bernard cannot have either $\frac12$ or $\frac58$, the next number. Thus, if they were to continue the iterative not-knowing-yet pronouncements, they would systematically eliminate the numbers on the second increasing sequence, which converges to $\frac34$. Because of Cheryl’s second interruption remark, therefore, it follows that their numbers do not appear on that second sequence, for otherwise they would have known by continuing that pattern long enough. Thus, after her remark, they both know that both numbers are at least $\frac34$. And since Albert and Bernard in succession state that they still do not know, they have begun to eliminate numbers from the third sequence. Consider now Cheryl’s contentful exasperated remark. What she says in the first part is that no matter how many times the three of them repeat that pattern, they will still not know. The content of this remark is precisely that neither of the two numbers can be on next sequence (the third), nor the fourth, nor the fifth and so on; they cannot be on any of the first $\omega$ many sequences (that is, below $1$), because if one of the numbers occurred on the $k^{th}$ sequence below $1$, as $1-\frac1{2^k}-\frac1{2^{k+r}}$, for example, then after $k-1$ repetitions of the three-way-pattern, it would no longer be true for Cheryl to say that no matter how long Albert and Bernard continued their back-and-forth they wouldn’t know, since they would indeed know after $r$ steps of that at that point. Thus, the first part of Cheryl’s remark implies that the numbers must both be at least $1$. But Cheryl also says that the same statement would be true if she said it again. Thus, the numbers must not lie on any of the first $\omega$ many sequences above $1$. Those sequences converge to the limit points $1\frac12$, $1\frac34$, $1\frac78$ and so on. Consequently, after that second statement, everyone would know that the numbers must both be at least $2$. Similarly, after making the statement a third time, everyone knows the numbers must be at least $3$, and after the fourth time, everyone knows the numbers must be at least $4$. 
Cheryl says that she could make the statement a hundred times altogether in succession (counting the time she has already said it as amongst the one hundred), and it would be true every time. Since each time she makes the statement, it eliminates precisely the possibility that one of the numbers is on any of the next $\omega$ many sequences, what everyone would know after the one hundredth pronouncement is precisely that both numbers are at least $100$. Even though she didn’t actually make the statement one hundred times, Albert and Bernard are entitled to know exactly that information even so, because she had said that the statement would be true every time, if she were to have said it one hundred times. Note that it would be perfectly compatible with Cheryl making that statement one hundred times, if one of the numbers had been $100$, since each additional assertion simply eliminates the possibility that one of the numbers occurs on the sequences strictly before the next integer limit point, without eliminating the integer limit point itself. Next Cheryl makes an additional statement, which it seems to me that some of the commentators did not give sufficient attention. Namely, she says, “And furthermore, even after my having said it altogether one hundred times in succession, you would still not know who has the larger number!” This statement gives additional epistemic information beyond the content of saying that the $100^{th}$ statement would be true. After the $100^{th}$ statement, according to what we have said, both Albert and Bernard would know precisely that both numbers are at least $100$. But Cheryl is telling them that they still would not know, even after the $100^{th}$ statement. Thus, it must be that neither Albert nor Bernard has $100$, since having that number is the only way they could know at that point who has the larger number. (Note also that Cheryl did not say that they would know that the other does not know, but only that they each would not know after the $100^{th}$ assertion, an issue that appeared to trip up some commentators. So she is making a common-knowledge assertion about what their individual epistemic states would be in that case.) The first few numbers after $100$ are: $$100\qquad 100\frac14\qquad 100\frac38\qquad 100\frac7{16}\qquad 100\frac{15}{32}\qquad\cdots\to\quad 100\frac12$$ So putting everything together, what everyone knows after Cheryl’s exasperated remark is that both numbers are at least $100\frac14$. Since Albert still doesn’t know, it means his number is at least $100\frac 38$. Since Bernard doesn’t know after this, it means that Bernard cannot have either $100\frac14$ or $100\frac38$, since otherwise he would know that Albert’s is larger. So Bernard has at least $100\frac7{16}$. But now suddenly, finally, Albert knows who has the larger number! How can this be? So far, all we knew is that Albert’s number was at least $100\frac 38$ and Bernard’s is at least $100\frac7{16}$. If Albert had $100\frac 38$, then indeed he would know that Bernard’s number is larger; but note also that if Albert had $100\frac7{16}$, then he would also know that Bernard must have the larger number (since he knows the numbers are different). But if Albert’s number were larger than $100\frac7{16}$, then he couldn’t know whether Bernard’s number was larger or not. So after Albert’s assertion, what we all know is precisely that Albert has either $100\frac38$ or $100\frac7{16}$. But now, Bernard claims to know both numbers! How could he know which number Albert has? 
The only way that he can distinguish those two possibilities that we mentioned is if Bernard himself has $100\frac7{16}$, since this is the smallest possibility remaining for Bernard and if Bernard’s number were larger than that then Albert could have consistently had either $100\frac38$ or $100\frac7{16}$. Thus, because Bernard knows the numbers, it must be that Bernard has $100\frac7{16}$, which would eliminate this possibility for Albert. So Albert has $100\frac38$ and Bernard has $100\frac7{16}$, and that is the solution of the puzzle. A number of commentators solved the puzzle, coming to the same conclusion that I did, and so let me give some recognition here. Congratulations! Neel Nanda evanjhub Aris Katsaris Joe Dobrow and Kellen Kirchberg, also on math.SE MatNat2 Jonas Warwick Logic Reading Group Gavi Dov Paul Crowley (after fixing the fencepost) Let me also give honorable mentions to the following people, who came very close.
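For readers who like to experiment, here is a small Python sketch (not part of the original post) that enumerates an initial segment of Cheryl's set of numbers; it makes the pattern of increasing convergent sequences described above easy to see.

```python
from fractions import Fraction

def cheryl_numbers(max_n, max_k, max_r):
    """Numbers of the form n - 1/2^k - 1/2^(k+r) for 1 <= n <= max_n,
    1 <= k <= max_k, 0 <= r <= max_r, sorted in increasing order."""
    vals = set()
    for n in range(1, max_n + 1):
        for k in range(1, max_k + 1):
            for r in range(max_r + 1):
                vals.add(Fraction(n) - Fraction(1, 2 ** k) - Fraction(1, 2 ** (k + r)))
    return sorted(vals)

nums = cheryl_numbers(max_n=2, max_k=4, max_r=4)
print(nums[:6])
# [Fraction(0, 1), Fraction(1, 4), Fraction(3, 8), Fraction(7, 16), Fraction(15, 32), Fraction(1, 2)]
```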
Let's light things up ( with an FPGA )! Finally got around to learning the tools and implementing the project onto the DE0 Nano. Check it out! The Idea: Restate and test the MILP model for determining the parameters of the clap algorithm. With the help of GNU Glpk and PyGlpk, the mixed integer linear programming (MILP) model that was described in the last several logs has finally been implemented! And, in doing so, several issues with the original model were discovered and fixed. The prior logs on the system model may even be removed in the future. However, in an attempt to reduce the amount of time spent writing these logs, many of those details are glossed over here; writing these logs takes a significant amount of time, unfortunately. Instead, much of the model is simply restated in this log. In addition to restating the model, this log includes a brief example demonstrating the model works as intended. The example and the respective Python code can be found in the project's repository! And, like the last log, the equivalent inline latex is used to refer to the symbols presented in the images. Hope this isn't too confusing!

The Theory: Model description. The signal $x_n$ represents the sampled audio. The clap that needs identification is defined as a sudden spike in the energy of $x_n$, where each energy $e_m$ is defined over a small interval. A spike in energy can be regarded as a sudden increase in energy over a period, and then a sudden decrease over another period of time. The sudden changes are signified by either the energy $e_m$ exceeding the threshold $e_H$ or falling under the threshold $e_L$, for their respective durations. $\sigma_{H,k,m}$ and $\sigma_{L,k,m}$ are both indicators that represent the aforementioned durations. Specifically, $\sigma_{H,k,m}$ represents the duration for which $e_m \ge e_H$ is true for each clap $k$. The three unknowns are $\Delta m_k$, $\Delta m_H$, and $\Delta m_{H,k}$. $\Delta m_k$ represents the amount of time---measured in samples $m$---between each clap $k$. $\Delta m_H$ is the minimum amount of time $e_m \ge e_H$ needs to be true for clap $k$. $\Delta m_{H,k}$ allows $e_m \ge e_H$ for clap $k$ to be true longer than $\Delta m_H$. And, since it's probably not very clear, assume $m_{L,UB,-1} = 0$. Next, $\sigma_{L,k,m}$ represents the duration for which $e_m \le e_L$ is true for each clap $k$. The unknowns are $\Delta m_{L,k}$ and $\Delta m_L$. $\Delta m_{L,k}$ for clap $k$ allows $e_m$ to fall in between $e_H$ and $e_L$ before $e_m \le e_L$ needs to be true. Similar to $\Delta m_H$, $\Delta m_L$ is the minimum amount of time $e_m\le e_L$ needs to be true. In order to progress with this explanation, consider the following definitions. Assume the domain of variables that begin with $\Delta$ is $M$. With these definitions, the indicators $\sigma_{H,k,m}$ and $\sigma_{L,k,m}$ are reduced to a single indicator. To further simplify, $\sigma_{t,k,m}$ is true if and only if the conditions $0 \le m - m_{t,LB,k}$ and $0 \le m_{t,UB,k} - m$ are true. Taking advantage of the if-then constraint, the aforementioned conditions are represented with the following. The conditions $\beta_{t,LB,k,m}$ and $\beta_{t,UB,k,m}$ are further simplified with a logical-AND operation. The result is $\sigma_{t,k,m}$. Finally, the objective is to find the values of the unknowns that maximize the difference between $e_H$ and $e_L$. The variables that need solving are the following.
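As an illustration of the detection rule just described, here is a small Python sketch of it. The frame length, the thresholds $e_H$ and $e_L$, and the minimum durations are placeholder inputs here; in the project those are exactly the quantities the MILP model solves for, so this is a sketch of the rule rather than the project's implementation.

```python
import numpy as np

def frame_energies(x, frame_len):
    """Energy e_m of consecutive, non-overlapping frames of the signal x_n."""
    x = np.asarray(x, dtype=float)
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sum(frames ** 2, axis=1)

def detect_claps(e, e_H, e_L, dm_H, dm_L):
    """Toy clap detector: a clap is a run of at least dm_H frames with e_m >= e_H,
    followed (after frames that sit between the thresholds) by a run of at least
    dm_L frames with e_m <= e_L. Returns the starting frame index of each clap."""
    claps, m, M = [], 0, len(e)
    while m < M:
        if e[m] >= e_H:
            start = m
            while m < M and e[m] >= e_H:
                m += 1
            if m - start >= dm_H:
                while m < M and e_L < e[m] < e_H:   # energy between the thresholds
                    m += 1
                low_start = m
                while m < M and e[m] <= e_L:
                    m += 1
                if m - low_start >= dm_L:
                    claps.append(start)
        else:
            m += 1
    return claps
```

A call such as detect_claps(frame_energies(x, 256), e_H=5.0, e_L=0.5, dm_H=2, dm_L=3) would then scan a recorded signal x; all of those parameter values are made up for the sake of the example.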
The Implementation: PyGlpk and GNU Glpk For those who have seen the logs on the embedded systems project, GNU Glpk has already been used in other projects as a way to solve LP models, albeit the application was for flow-network problems. Instead of the LEMON Glpk interface as described in the embedded system logs, the presented MILP model is solved with Glpk via the PyGlpk interface. As stated earlier, the source code of the first completed version can be found in the project's repository. There will be one more version of the source code since the current model does not take into consideration multiple sets of audio signals ( and the source code is somewhat messy, admittedly ). Let's start with...

SO, just to extend the brief introduction from the project's home page, this project can be described as an introductory experience for both me and @Neale Estrella. In his case, he's new to the world of RTL design for FPGAs and wants to learn Verilog for behavioral synthesis. For me, my entire FPGA-related experience is with Xilinx FPGAs and for the first time, I want to do a project with an Altera FPGA! In later projects, I hope to check out Altera's OpenCL support and their Nios II softcore processor! But, similar to how I initially started out, we plan to implement this entire project in Verilog. Once we have the design running over the DE0 Nano, I hope to add circuitry to allow the FPGA to toggle an actual lamp! ( The lamp I want to toggle!!!... Just really wanted to throw in a picture somewhere. ) At this point, we already have a few aspects of the project figured out. As briefly mentioned, we both bought the Terasic DE0 Nano development board. The Nano contains the Altera FPGA, however the board was mainly selected for its decent cost, peripheral components, and because it appears to be fairly popular among other hobbyists who work with FPGAs. As for the development environment, I've started to learn Quartus Prime 16 Lite, but I believe Neale needed to get Quartus II Web instead since he uses a 32-bit system. Pretty sure both are nearly identical! And, of course, we are in the process of learning ModelSim Altera for simulation! ( Looks pretty sweet!!!... Can't wait to actually turn it on! ) As of now, the entire RTL design is, in a way, "finished". In the repository, you can see all the modules. However, only the GetSignal module was fully tested, in both simulation and deployment. As for the others, my hope is to have a few of them replaced with Neale's implementations. Finally, in the next several logs, I will explain the theory of operation!
Ineffable cardinal Ineffable cardinals were introduced by Jensen and Kunen in [1] and arose out of their study of $\diamondsuit$ principles. An uncountable regular cardinal $\kappa$ is ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ is stationary. Equivalently an uncountable regular $\kappa$ is ineffable if and only if for every function $F:[\kappa]^2\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^2$ is constant [1]. This second characterization strengthens a characterization of weakly compact cardinals which requires that there exist such an $H$ of size $\kappa$. If $\kappa$ is ineffable, then $\diamondsuit_\kappa$ holds and there cannot be a slim $\kappa$-Kurepa tree [1] . A $\kappa$-Kurepa tree is a tree of height $\kappa$ having levels of size less than $\kappa$ and at least $\kappa^+$-many branches. A $\kappa$-Kurepa tree is slim if every infinite level $\alpha$ has size at most $|\alpha|$. Contents Ineffable cardinals and the constructible universe Ineffable cardinals are downward absolute to $L$. In $L$, an inaccessible cardinal $\kappa$ is ineffable if and only if there are no slim $\kappa$-Kurepa trees. Thus, for inaccessible cardinals, in $L$, ineffability is completely characterized using slim Kurepa trees. [1] Ramsey cardinals are stationary limits of completely ineffable cardinals, they are weakly ineffable, but the least Ramsey cardinal is not ineffable. Ineffable Ramsey cardinals are limits of Ramsey cardinals, because ineffable cardinals are $Π^1_2$-indescribable and being Ramsey is a $Π^1_2$-statement. The least strongly Ramsey cardinal also is not ineffable, but super weakly Ramsey cardinals are ineffable. $1$-iterable (=weakly Ramsey) cardinals are weakly ineffable and stationary limits of completely ineffable cardinals. The least $1$-iterable cardinal is not ineffable. [2, 4] Relations with other large cardinals Measurable cardinals are ineffable and stationary limits of ineffable cardinals. $\omega$-Erdős cardinals are stationary limits of ineffable cardinals, but not ineffable since they are $\Pi_1^1$-describable. [3] Ineffable cardinals are $\Pi^1_2$-indescribable [1]. Ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof) For a cardinal $κ=κ^{<κ}$, $κ$ is ineffable iff it is normal 0-Ramsey. [6] Weakly ineffable cardinal Weakly ineffable cardinals (also called almost ineffable) were introduced by Jensen and Kunen in [1] as a weakening of ineffable cardinals. An uncountable regular cardinal $\kappa$ is weakly ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ has size $\kappa$. If $\kappa$ is weakly ineffable, then $\diamondsuit_\kappa$ holds. Weakly ineffable cardinals are downward absolute to $L$. [1] Weakly ineffable cardinals are $\Pi_1^1$-indescribable. [1] Ineffable cardinals are limits of weakly ineffable cardinals. Weakly ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof) For a cardinal $κ=κ^{<κ}$, $κ$ is weakly ineffable iff it is genuine 0-Ramsey. [6] Subtle cardinal Subtle cardinals were introduced by Jensen and Kunen in [1] as a weakening of weakly ineffable cardinals. 
An uncountable regular cardinal $\kappa$ is subtle if for every $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ and every closed unbounded $C\subseteq\kappa$ there are $\alpha<\beta$ in $C$ such that $A_\beta\cap\alpha=A_\alpha$. If $\kappa$ is subtle, then $\diamondsuit_\kappa$ holds. Subtle cardinals are downward absolute to $L$. [1] Weakly ineffable cardinals are limits of subtle cardinals. [1] Subtle cardinals are stationary limits of totally indescribable cardinals. [1, 7] The least subtle cardinal is not weakly compact as it is $\Pi_1^1$-describable. $\alpha$-Erdős cardinals are subtle. [1] If $δ$ is a subtle cardinal, the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary.[8] for every class $\mathcal{A}$, in every club $B ⊆ δ$ there is $κ$ such that $\langle V_δ, \mathcal{A} ∩ V_δ \rangle \models \text{“$κ$ is $\mathcal{A}$-shrewd.”}$.[9] (The set of cardinals $κ$ below $δ$ that are $\mathcal{A}$-shrewd in $V_δ$ is stationary.) there is an $\eta$-shrewd cardinal below $δ$ for all $\eta < δ$.[9]

Ethereal cardinal

Ethereal cardinals were introduced by Ketonen in [10] (information in this section from there) as a weakening of subtle cardinals. Definition: A regular cardinal $κ$ is called ethereal if for every club $C$ in $κ$ and sequence $(S_α|α < κ)$ of sets such that for $α < κ$, $|S_α| = |α|$ and $S_α ⊆ α$, there are elements $α, β ∈ C$ such that $α < β$ and $|S_α ∩ S_β| = |α|$. I.e., symbolically(?): $$κ \text{ – ethereal} \overset{\text{def}}{⟺} \left( κ \text{ – regular} ∧ \left( \forall_{C \text{ – club in $κ$}} \forall_{S : κ → \mathcal{P}(κ)} \left( \forall_{α < κ} |S_α| = |α| ∧ S_α ⊆ α \right) ⟹ \left( \exists_{α, β ∈ C} α < β ∧ |S_α ∩ S_β| = |α| \right) \right) \right)$$ Properties: Every subtle cardinal is obviously ethereal. Every ethereal cardinal is weakly inaccessible. A strongly inaccessible cardinal is ethereal if and only if it is subtle. If $κ$ is ethereal and $2^\underset{\smile}{κ} = κ$, then $♢_κ$ holds (where $2^\underset{\smile}{κ} = \bigcup \{ 2^α | α < κ \}$ is the weak power of $κ$). To be expanded.

$n$-ineffable cardinal

The $n$-ineffable cardinals for $2\leq n<\omega$ were introduced by Baumgartner in [11] as a strengthening of ineffable cardinals. A cardinal is $n$-ineffable if for every function $F:[\kappa]^n\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^n$ is constant. $2$-ineffable cardinals are exactly the ineffable cardinals. An $n+1$-ineffable cardinal is a stationary limit of $n$-ineffable cardinals. [11] A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$. A $1$-iterable cardinal is a stationary limit of totally ineffable cardinals. (this follows from material in [4])

Helix

(Information in this subsection comes from [7] unless noted otherwise.) For $k \geq 1$ we define: $\mathcal{P}(x)$ is the powerset (set of all subsets) of $x$. $\mathcal{P}_k(x)$ is the set of all subsets of $x$ with exactly $k$ elements. $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$ is regressive iff for all $A \in \mathcal{P}_k(\lambda)$, we have $f(A) \subseteq \min(A)$. $E$ is $f$-homogenous iff $E \subseteq \lambda$ and for all $B,C \in \mathcal{P}_k(E)$, we have $f(B) \cap \min(B \cup C) = f(C) \cap \min(B \cup C)$.
$\lambda$ is $k$-subtle iff $\lambda$ is a limit ordinal and for all clubs $C \subseteq \lambda$ and regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \in \mathcal{P}_{k+1}(C)$. $\lambda$ is $k$-almost ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous $A \subseteq \lambda$ of cardinality $\lambda$. $\lambda$ is $k$-ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogenous stationary $A \subseteq \lambda$. $0$-subtle, $0$-almost ineffable and $0$-ineffable cardinals can be defined as “uncountable regular cardinals” because for $k \geq 1$ all three properties imply being uncountable regular cardinals. For $k \geq 1$, if $\kappa$ is a $k$-ineffable cardinal, then $\kappa$ is $k$-almost ineffable and the set of $k$-almost ineffable cardinals is stationary in $\kappa$. For $k \geq 1$, if $\kappa$ is a $k$-almost ineffable cardinal, then $\kappa$ is $k$-subtle and the set of $k$-subtle cardinals is stationary in $\kappa$. For $k \geq 1$, if $\kappa$ is a $k$-subtle cardinal, then the set of $(k-1)$-ineffable cardinals is stationary in $\kappa$. For $k \geq n \geq 0$, all $k$-ineffable cardinals are $n$-ineffable, all $k$-almost ineffable cardinals are $n$-almost ineffable and all $k$-subtle cardinals are $n$-subtle. Completely ineffable cardinal Completely ineffable cardinals were introduced in [5] as a strengthening of ineffable cardinals. Define that a collection $R\subseteq P(\kappa)$ is a stationary class if $R\neq\emptyset$, for all $A\in R$, $A$ is stationary in $\kappa$, if $A\in R$ and $B\supseteq A$, then $B\in R$. A cardinal $\kappa$ is completely ineffable if there is a stationary class $R$ such that for every $A\in R$ and $F:[A]^2\to2$, there is $H\in R$ such that $F\upharpoonright [H]^2$ is constant. Relations: Completely ineffable cardinals are downward absolute to $L$. [5] Completely ineffable cardinals are limits of ineffable cardinals. [5] There are stationarily many completely ineffable, greatly Erdős cardinals below any Ramsey cardinal.[13] The following are equivalent:[6] $κ$ is completely ineffable. $κ$ is coherent $<ω$-Ramsey. $κ$ has the $ω$-filter property. Every completely ineffable is a stationary limit of $<ω$-Ramseys.[6] Completely Ramsey cardinals and $ω$-Ramsey cardinals are completely ineffable.[6] $ω$-Ramsey cardinals are limits of completely ineffable cardinals.[2] References Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$.Unpublished, 1969. www bibtex Holy, Peter and Schlicht, Philipp. A hierarchy of Ramsey-like cardinals.Fundamenta Mathematicae 242:49-74, 2018. www arχiv DOI bibtex Jech, Thomas J. Third, Springer-Verlag, Berlin, 2003. (The third millennium edition, revised and expanded) www bibtex Set Theory. Gitman, Victoria. Ramsey-like cardinals.The Journal of Symbolic Logic 76(2):519-540, 2011. www arχiv MR bibtex Abramson, Fred and Harrington, Leo and Kleinberg, Eugene and Zwicker, William. Flipping properties: a unifying thread in the theory of large cardinals.Ann Math Logic 12(1):25--58, 1977. MR bibtex Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals., 2018. arχiv bibtex Friedman, Harvey M. Subtle cardinals and linear orderings., 1998. www bibtex Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms., 2014. 
arχiv bibtex Rathjen, Michael. The art of ordinal analysis., 2006. www bibtex Ketonen, Jussi. Some combinatorial principles.Trans Amer Math Soc 188:387-394, 1974. DOI bibtex Baumgartner, James. Ineffability properties of cardinals. I.Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, pp. 109--130. Colloq. Math. Soc. János Bolyai, Vol. 10, Amsterdam, 1975. MR bibtex Kentaro, Sato. Double helix in large large cardinals and iteration ofelementary embeddings., 2007. www bibtex Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties.Ann Pure Appl Logic 162(11):863--902, 2011. www DOI MR bibtex
We have written in our text book: Let $(X_{1},\mathcal{A}_{1},\mu_{1})$ and $(X_{2},\mathcal{A}_{2},\mu_{2})$ be two $\sigma-$finite measures. Define $(X,\mathcal{A},\mu):=(X_{1}\times X_{2},\mathcal{A_{1}} \otimes \mathcal{A_{2}},\mu_{1}\otimes\mu_{2})$ Let $f: X \to \bar{\mathbb R}$ be $\mathcal{A}-$measurable Then for $g \in \{f_{-},f_{+}\}$: $X_{1}\to [0,\infty],x_{1}\mapsto\int_{X_{2}}g(x_{1},x_{2})d\mu_{2}(x_{2})$ is $\mathcal{A_{1}}-$measurable and $X_{2}\to [0,\infty],x_{2}\mapsto\int_{X_{1}}g(x_{1},x_{2})d\mu_{1}(x_{1})$ is $\mathcal{A_{2}}-$measurable Then we can use Tonelli for $f \geq 0$ a.e. as well as Fubini. Problem: I have a case, let's say $f(x,t):=e^{-xt}\sin{x}$ and would like to use Fubini-Tonelli for $R> 0$ on $\int_{[0,R]}\int_{[0,\infty[}e^{-xt}\sin{x}d\lambda(x)d\lambda(t)$ Part of the solution simply states: $\int_{[0,R]}\int_{[0,\infty[}|e^{-xt}\sin{x}|d\lambda(t)d\lambda(x)<\infty$ (drastically shortened) Which intuitively makes sense in order to use Fubini. However, where in this solution is shown that $f:[0,R]\times[0,\infty[\to\bar{\mathbb R}$ is indeed measurable. Does it suffice to simply state $f$ is continuous on $[0,R]$ as well as on $[0,\infty[$ and therefore it (i.e. the function as a whole) is measurable?
Search results:

1. Observation of a peaking structure in the J/psi phi mass spectrum from B± → J/psi phi K± decays. Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261-281. A peaking structure in the J/psi phi mass spectrum near threshold is observed in B± → J/psi phi K± decays, produced in pp collisions at √s = 7 TeV... Journal Article

2. Measurement of the ratio of the production cross sections times branching fractions of B_c± → J/ψ π± and B± → J/ψ K±, and of ℬ(B_c± → J/ψ π± π± π∓)/ℬ(B_c± → J/ψ π±), in pp collisions at √s = 7 TeV. Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1-30. Journal Article

3. Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, pp. 84-102. Journal Article

4. Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5. Journal Article

5. Measurement of the differential cross-sections of inclusive, prompt and non-prompt J/ψ production in proton–proton collisions at √s = 7 TeV. Nuclear Physics, Section B, ISSN 0550-3213, 2011, Volume 850, Issue 3, pp. 387-444. Journal Article

6. Physics Letters B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261-281. A peaking structure in the mass spectrum near threshold is observed, produced in pp collisions collected with the CMS detector at the LHC... Journal Article

7. Search for rare decays of Z and Higgs bosons to J/ψ and a photon in proton-proton collisions at √s = 13 TeV. The European Physical Journal C, ISSN 1434-6044, 2/2019, Volume 79, Issue 2, pp. 1-27. A search is presented for decays of Z and Higgs bosons to a J/ψ meson and a photon, with the subsequent decay of the... Journal Article

8. Suppression of non-prompt J/psi, prompt J/psi, and Upsilon(1S) in PbPb collisions at √s_NN = 2.76 TeV. Journal of High Energy Physics, ISSN 1029-8479, 05/2012, Issue 5. Yields of prompt and non-prompt J/psi, as well as Upsilon(1S) mesons, are measured by the CMS experiment via their mu+ mu- decays in PbPb and pp collisions... Journal Article

9. Animal Behaviour, ISSN 0003-3472, 02/2013, Volume 85, Issue 2, pp. 299-304. In 1980, Paul J. Greenwood published a review of dispersal in birds and mammals that has been widely cited. The review evaluated possible explanations for... Journal Article

10. Search for rare decays of Z and Higgs bosons to J/ψ and a photon in proton-proton collisions at √s = 13 TeV. European Physical Journal C, ISSN 1434-6044, 02/2019, Volume 79, Issue 2, p. 94. A search is presented for decays of Z and Higgs bosons to a J/ψ meson and a photon, with the subsequent decay of the J/ψ... The analysis uses data from proton-proton... Journal Article
Finding the Interval of Convergence

The main tools for computing the radius of convergence are the Ratio and Root Tests. DO: work the following without looking at the solutions, which are below the examples.

Example 1: Find the radius of convergence, then the interval of convergence, for $\displaystyle\sum_{n=1}^\infty(-1)^n\frac{n^2x^n}{2^n}$.

Example 2: Find the radius of convergence, then the interval of convergence, for $\displaystyle\sum_{n=1}^\infty(-1)^n\frac{x^n}{n}$.

Solution 1: $\displaystyle\sqrt[n]{\left|\frac{n^2x^n}{2^n}\right|}=\sqrt[n]{n^2}\frac{|x|}{2}\longrightarrow\frac{1}{2}\vert x\vert\quad$ (We used our very handy previous result: $\sqrt[n]{n^a}\rightarrow 1$ for any $a>0$.) By the Root Test the series converges absolutely when $\frac{1}{2}|x|<1$, so the radius of convergence is $2$. When $x=-2$, we have $\displaystyle\sum_{n=1}^\infty(-1)^n\frac{n^2x^n}{2^n}=\displaystyle\sum_{n=1}^\infty(-1)^n\frac{n^2(-2)^n}{2^n}=\displaystyle\sum_{n=1}^\infty\frac{n^2(2)^n}{2^n}=\displaystyle\sum_{n=1}^\infty n^2$, which diverges by the Divergence Test. When $x=2$, we have $\displaystyle\sum_{n=1}^\infty(-1)^n\frac{n^2x^n}{2^n}=\displaystyle\sum_{n=1}^\infty(-1)^n\frac{n^2(2)^n}{2^n}=\displaystyle\sum_{n=1}^\infty(-1)^n n^2$, which also diverges by the Divergence Test. The interval of convergence is therefore $(-2,2)$.

Solution 2: $\displaystyle\left|\frac{\frac{x^{n+1}}{n+1}}{\frac{x^n}{n}}\right|=\left|\frac{x^{n+1}}{n+1}\frac{n}{x^n}\right|=\frac{n}{n+1}|x|\longrightarrow |x|$, so by the Ratio Test the radius of convergence is $1$. When $x=1$, the series is the alternating harmonic series $\displaystyle\sum_{n=1}^\infty\frac{(-1)^n}{n}$, which converges by the Alternating Series Test; when $x=-1$, it is $\displaystyle\sum_{n=1}^\infty\frac{1}{n}$, the harmonic series, which diverges. The interval of convergence is therefore $(-1,1]$.
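As a quick sanity check outside the course text (this snippet is ours; the symbol names are arbitrary), the Ratio Test limit for Example 1 can be reproduced symbolically:

```python
import sympy as sp

# Ratio of consecutive absolute terms of sum n^2 |x|^n / 2^n (Example 1 above).
n = sp.symbols('n', positive=True)
x = sp.symbols('x', positive=True)   # stands in for |x|
ratio = ((n + 1)**2 * x**(n + 1) / 2**(n + 1)) / (n**2 * x**n / 2**n)
print(sp.limit(ratio, n, sp.oo))     # prints x/2, so the series converges for |x| < 2
```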
I expect most readers of this blog to be familiar with the concept of lossy data compression. For instance, whenever you rip a song from a CD to a lossy format such as MP3 or Ogg Vorbis, you are effectively discarding some data from the song to make it smaller while still preserving most of the audio. A good compression technique yields a song which sounds almost like the original one. Matrices can sometimes also be compressed in the sense above. For instance, given a matrix $A$, it is sometimes possible to cleverly compute another matrix $\tilde{A} \approx A$ such that $\tilde{A}$ can be represented in a way which requires much less data storage than the original matrix $A$. This approximation can be obtained from a very powerful tool in linear algebra: the singular value decomposition (SVD). This post will not present techniques for computing SVDs, but merely discuss this tool in the context of matrix compression. The SVD of an $m \times n$ real matrix $A$ is the factorization $A = U\Sigma V^T$, where $U$ is an $m \times m$ real orthogonal matrix, $\Sigma$ is an $m \times n$ diagonal matrix with real nonnegative values along its diagonal (called the singular values of $A$) and $V$ is an $n \times n$ real orthogonal matrix. Every matrix has an SVD (for a proof of this fact, see [1], theorem 3.2). To illustrate how one can use the SVD to approximate a matrix, I will use octave. On Ubuntu/Debian, you can install octave by opening a terminal and running the following command:

sudo apt-get install octave

Now start octave. I have generated an example matrix which is shown below (in what follows, all user input is highlighted):

octave:1> A
A =

   1.02650   0.92840   0.54947   0.98317   0.71226   0.55847
   0.92889   0.89021   0.49605   0.93776   0.62066   0.52473
   0.56184   0.49148   0.80378   0.68346   1.02731   0.64579
   0.98074   0.93973   0.69170   1.03432   0.87043   0.66371
   0.69890   0.62694   1.02294   0.87822   1.29713   0.82905
   0.56636   0.51884   0.65096   0.66109   0.82531   0.55098

You can generate the matrix above by typing 'A=[' (without the quotes), then copying and pasting the entries from $A$ above and closing it with another square bracket ']'. Octave provides a method for computing the SVD of $A$:

octave:2> [U,Sigma,V] = svd(A)
U =

  -0.418967  -0.449082   0.754882   0.062414  -0.211638   0.065260
  -0.386832  -0.456685  -0.393449   0.517538   0.420908  -0.204913
  -0.369990   0.429218   0.277696  -0.322484   0.605310  -0.362449
  -0.455375  -0.257776  -0.427049  -0.676528  -0.289362  -0.048923
  -0.469840   0.546519  -0.099679   0.407065  -0.521257  -0.182266
  -0.331389   0.201011  -0.076997   0.029526   0.237081   0.887000

Sigma =

Diagonal Matrix

   4.6780e+00            0            0            0            0            0
            0   8.9303e-01            0            0            0            0
            0            0   4.5869e-02            0            0            0
            0            0            0   7.2919e-03            0            0
            0            0            0            0   1.5466e-16            0
            0            0            0            0            0   4.5360e-17

V =

  -0.418967  -0.449082   0.726833   0.183844  -0.240614   0.053029
  -0.386832  -0.456685  -0.363745  -0.694211  -0.126874  -0.107069
  -0.369990   0.429218  -0.101603  -0.070784  -0.356422   0.732468
  -0.455375  -0.257776  -0.373669   0.487768   0.559525   0.188602
  -0.469840   0.546519   0.309430  -0.289389   0.439192  -0.328915
  -0.331389   0.201011  -0.306111   0.396987  -0.541307  -0.552684

As the output above shows, the singular values of $A$ (the highlighted diagonal entries of $\Sigma$) are placed in decreasing order along the diagonal of $\Sigma$. Compared to the first two singular values ($\Sigma_{11}$ and $\Sigma_{22}$), the other four ($\Sigma_{33}$, $\Sigma_{44}$, $\Sigma_{55}$ and $\Sigma_{66}$) are relatively small. Being so small, and since $A = U\Sigma V^T$, the effect of discarding them should still imply $A \approx U\Sigma V^T$ for this modified $\Sigma$.
Let's discard them and see what happens:

octave:3> Sigma(3,3) = Sigma(4,4) = Sigma(5,5) = Sigma(6,6) = 0
Sigma =

Diagonal Matrix

   4.67800         0         0         0         0         0
         0   0.89303         0         0         0         0
         0         0   0.00000         0         0         0
         0         0         0   0.00000         0         0
         0         0         0         0   0.00000         0
         0         0         0         0         0   0.00000

Discarding $\Sigma_{33}$ means the third column of $U$ and the third row of $V^T$ (the third column of $V$) have no effect on $U\Sigma V^T$ since they contain the only entries of $U$ and $V$ respectively which are multiplied by $\Sigma_{33}$. We can then discard these entries and also discard the third row and the third column of $\Sigma$ altogether. Similarly, we can discard the fourth, fifth and sixth columns of $U$ and $V$ and also the fourth, fifth and sixth rows and columns of $\Sigma$:

octave:4> U = U(1:6,1:2)
U =

  -0.41897  -0.44908
  -0.38683  -0.45668
  -0.36999   0.42922
  -0.45538  -0.25778
  -0.46984   0.54652
  -0.33139   0.20101

octave:5> V = V(1:6,1:2)
V =

  -0.41897  -0.44908
  -0.38683  -0.45669
  -0.36999   0.42922
  -0.45537  -0.25778
  -0.46984   0.54652
  -0.33139   0.20101

octave:6> Sigma = Sigma(1:2,1:2)
Sigma =

Diagonal Matrix

   4.67800         0
         0   0.89303

Now let's compute $U\Sigma V^T$:

octave:7> A_tilde = U*Sigma*V'
A_tilde =

   1.00125   0.94131   0.55302   0.99588   0.70167   0.56888
   0.94131   0.88626   0.49448   0.92918   0.62733   0.51770
   0.55302   0.49448   0.80491   0.68936   1.02269   0.65062
   0.99588   0.92918   0.68936   1.02940   0.87507   0.65967
   0.70168   0.62733   1.02269   0.87507   1.29940   0.82647
   0.56888   0.51770   0.65062   0.65967   0.82647   0.54982

The values of $\tilde{A}$ are very close to the values of the original matrix $A$. Let's then compute the relative error for each entry of $\tilde{A}$ (below the './' operation computes the division of each entry of $(A - \tilde{A})$ by each corresponding entry of $A$):

octave:8> (A - A_tilde) ./ A
ans =

   2.4599e-02  -1.3907e-02  -6.4608e-03  -1.2934e-02   1.4859e-02  -1.8656e-02
  -1.3375e-02   4.4309e-03   3.1586e-03   9.1550e-03  -1.0756e-02   1.3384e-02
   1.5708e-02  -6.1061e-03  -1.4028e-03  -8.6418e-03   4.4995e-03  -7.4838e-03
  -1.5442e-02   1.1226e-02   3.3826e-03   4.7511e-03  -5.3225e-03   6.0837e-03
  -3.9746e-03  -6.3450e-04   2.4899e-04   3.5946e-03  -1.7525e-03   3.1091e-03
  -4.4631e-03   2.1875e-03   5.2814e-04   2.1558e-03  -1.3991e-03   2.1170e-03

As the numbers above show, the relative errors are small, so indeed $\tilde{A} \approx A$. However, storing $A$ requires storing $N_{A} = 36$ elements, but $\tilde{A}$ needs not be stored as a $6 \times 6$ matrix. Instead, we can merely store $U$, $\Sigma$ and $V$ after many of their entries are removed. This yields: $$ N_{\tilde{A}} = \textrm{size}(U) + \textrm{size}(V) + \textrm{size}(\Sigma) = 12 + 12 + 2 = 26 $$ so we can obtain $\tilde{A}$ by storing only $N_{\tilde{A}} = 26$ instead of $N_A = 36$ elements. The reason why $\textrm{size}(\Sigma)= 2$ is because $\Sigma$ is a diagonal matrix, so its off-diagonal entries need not be stored as we know they are zero. This represents a $28\%$ reduction in the required storage space. For large matrices, the method above can lead to even better compression ratios. It is possible to use the technique above to compress images while still preserving most of their visual properties. If you want to know how, please visit this blog again in the future :-)

References

[1] James W. Demmel, Applied Numerical Linear Algebra, SIAM; 1st edition (1997)
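For readers more comfortable with Python than Octave, here is a rough NumPy sketch of the same idea (this is our own illustration, with a made-up nearly rank-2 matrix; it is not part of the original Octave session):

```python
import numpy as np

def low_rank_approximation(A, k):
    """Keep only the k largest singular values and the matching columns of U and V."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
# A 6x6 matrix that is close to rank 2: a rank-2 product plus a small perturbation.
A = rng.random((6, 2)) @ rng.random((2, 6)) + 0.01 * rng.random((6, 6))
A_tilde = low_rank_approximation(A, 2)
# Largest entrywise relative error; it should be small, mirroring the Octave result.
print(np.max(np.abs(A - A_tilde) / np.abs(A)))
```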
By applying a voltage and a magnetic field on a (let's say metallic, to keep things as simple as possible) sample, one is able to create the Hall effect and to obtain the Hall coefficient $R_H \sim 1/n$ where $n$ is the charge carrier density. But what is $n$, really? Is it the $n$ that appears in the conductivity formula $\vec J = en\vec v$? If so, I face a huge problem. Indeed, one can find in numerous sources that either Drude's model or a quantum mechanical treatment leads to the same formula for $R_H$. This implies that whatever model is used to explain a metal, from experiment one finds that $n$ is a sort of universal value that does not depend on the model of the solid. But it is well known (e.g. Ziman, "Physics of solids") that in a QM treatment of a solid, where electrons satisfy the Pauli exclusion principle and obey Fermi-Dirac statistics, very few of the free electrons actually participate in electrical conduction. When one looks at the Fermi sphere with and without an applied $\vec E$ field, the application of the $\vec E$ field has the same end result as a displacement of the Fermi sphere in the direction opposite to the $\vec E$ field. Thus only electrons at the Fermi surface that were moving in the $\vec E$ field direction and had a momentum near $-\vec p_F$ get their momentum changed to near $\vec p_F$ (actually slightly higher than that, and with a constant that depends linearly on the $\vec E$ field). Those are very few electrons compared to the total number of free electrons (those that constitute the Fermi sphere), and they move extremely fast compared to the drift velocity that arises from Drude's model. Indeed, I would expect that from a QM treatment, since so few electrons actually participate in electrical conduction and they move only about 2 orders of magnitude slower than the speed of light in vacuum, if we want to describe a particular $\vec J$, then $\vec J \approx en'\vec v_F$ where $n'$ would be a tiny fraction of the $n$ that appears in Drude's model. But it turns out that $n'=n$, so I do not see any way to explain consistently a current density from Drude's model and a QM treatment. So I do not understand exactly what "n", the charge carrier density, really is. How can it be the same regardless of the model used to describe a solid, while the models give very different values for the number of electrons participating in electrical conduction and very different values of "drift velocity"?
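For reference, and as our own addition rather than part of the question: the standard Drude-model steady-state argument behind the quoted $R_H$ goes, for carriers of charge $q$,

$$qE_y = qv_xB \;\Rightarrow\; E_y = v_xB, \qquad J_x = nqv_x \;\Rightarrow\; E_y = \frac{J_xB}{nq}, \qquad R_H \equiv \frac{E_y}{J_xB} = \frac{1}{nq},$$

where the first relation says that in steady state the Hall field balances the magnetic part of the Lorentz force on the drifting carriers; for electrons, $q = -e$ and $R_H = -1/(ne)$.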
Sage wrongly suggests there are only trivial solutions, why? I have a system of two quadratic equations which I want to solve. WolframAlpha shows me whole families of solutions, but sage says there are only trivial solutions. The equations are \begin{align} d\cdot\lvert\alpha\rvert^2 + \alpha^* \beta + \alpha\beta^* =&0, \end{align} \begin{align} d\cdot\lvert\beta\rvert^2 + \alpha^* \beta + \alpha\beta^* =&0, \end{align} where $d$ is constant and I want to solve for the complex numbers $\alpha, \beta$. I have the following minimal working example

var('a,b,q')
assume(a, 'complex')
assume(b, 'complex')
assume(q, 'real')
eq(x,y) = q*norm(x) + conjugate(x)*y + x*conjugate(y)
solution = solve([eq(a,b) == 0, eq(b,a) == 0], [a,b])
polySolution = solve([eq(a,b) == 0, eq(b,a) == 0], [a,b], to_poly_solve=True)

and the output then is

sage: solution
[[a == 0, b == 0]]
sage: polySolution
[[a == 0, b == 0]]

which is simply not correct. Now, as I said, I have found solutions with WolframAlpha, but the next candidate I have to compute gives 18 equations in 5 complex variables, which I cannot possibly solve with WA. Sage, again, tells me that there are only trivial solutions, but because of this smaller example presented here, I believe that that is wrong. Does anyone know how to force sage to give solutions?
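One way to probe the system (this is our own sketch, written in SymPy rather than Sage, and the variable names are made up) is to split $\alpha$ and $\beta$ into real and imaginary parts, so the complex equations become real polynomial equations; whether this recovers the full solution family that WolframAlpha reports is not guaranteed.

```python
from sympy import symbols, I, conjugate, re, expand, solve

# Real and imaginary parts of alpha and beta as independent real unknowns.
ar, ai, br, bi, q = symbols('ar ai br bi q', real=True)
a = ar + I*ai
b = br + I*bi

def eq(x, y):
    # q*|x|^2 + conj(x)*y + x*conj(y), the left-hand side of the equations above
    return q*x*conjugate(x) + conjugate(x)*y + x*conjugate(y)

# Both left-hand sides are real, so only their real parts give nontrivial equations.
eqs = [expand(re(eq(a, b))), expand(re(eq(b, a)))]
# Attempt to solve for two of the four real unknowns in terms of the rest.
print(solve(eqs, [ar, br], dict=True))
```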
I've been thinking about ways to tackle an epidemic modelling problem I've been working on, and I've come up against a conceptual difficulty over the way survival analysis works. Here's a really simplified version with all the extraneous details stripped out. There is a collection of $n$ individuals. At the beginning (time $t = 0$), 1 individual is infectious with a disease, and the other $n-1$ individuals are susceptible. As time progresses, susceptible people can get the disease through contact with infectious individuals, and pass the disease on to other people. To keep focus on the core of my question we'll use these (unrealistic) simplifying assumptions: There are no births, deaths, immigration, or emigration Individuals mix uniformly (e.g. no preferential mixing by age or sex or location, etc) No delay between contracting the disease and ability to infect others (i.e. no incubation period) Once infected, an individual remains infectious forever (no recovery) Here is how transmission works: we use a continuous-time additive hazard model. This means that we let each uninfected individual $i$ have their own hazard function $h_i(t)$. This function tells us "given that the individual $i$ has survived until the time $t$, what is the infinitesimal rate of failure (i.e. infection) at time $t$?" We define it as follows: $$h_i(t ;\lambda) = \lambda \sum_{j \neq i}^n I_j(t)$$ Where $I_j(t)$ is simply an indicator function that is 1 when individual $j$ is infectious at time $t$, and 0 otherwise. The $\lambda$ is a strictly-positive parameter governing how transmissible the disease is: the higher the value, the faster the disease spreads. Given our assumptions, $h_i(t)$ is thus piecewise constant and non-decreasing: it looks like a staircase as the epidemic spreads to more people and the risk of infection increases. The goal is this: given a dataset consisting of the times $t_1, t_2, \dots, t_n$ that each individual became infected, infer the value of $\lambda$. At first this seems simple: since we have complete data, we can easily calculate what the hazard functions must have been for each individual over time, and we can work out the pdf for the infection times from the hazard functions: $$f_i(t | \lambda) = h_i(t)\exp{\left(- \int_0^t h_i(u)du \right)}$$ (The integral in the exponential is just the cumulative hazard function, see the Wikipedia link for more details.) With this, it's a "simple" matter of writing down the likelihood of the entire data and maximizing it with respect to $\lambda$: $$\hat{\lambda} = \mathrm{argmax}_\lambda \prod_i^n f_i(t_i | \lambda)$$ But here's the problem: Writing the product of probabilities in that way is assuming that the events are independent, and I'm really not so sure that's the case. If we were to change one of the infection times (e.g. $t_5$ becomes $t'_5$), that would propagate through and potentially change many of the hazard functions, because they are dependent on when the other individuals become infectious (via $I_j(t)$). This would go on to change the other probability densities. It would be as if, in a linear regression model, some units' $X$-covariates causally depended on other units' $Y$-observations. On the other hand, I can't think of how else to write down the likelihood. And well, it's damned tempting to just ignore the issue entirely because it gives me a headache to think about otherwise. My questions are: Is this a big problem, or can I just ignore it? Will I get anything meaningful by just maximizing that "likelihood"? 
If it is a big problem, is this approach salvageable or do I just need to scrap the whole idea? If the latter, what are some other ways of going about this (this being analysing epidemics in continuous time rather than discrete)? If anyone is wondering about the XY-problem, let me briefly explain why I'm formulating it this way. I want to make a stochastic continuous-time model for my epidemic data, and since it can be understood as "time-to-failure", survival analysis seems like a logical way of tackling it. Having it be an additive (rather than proportional) risk model works for this case because there is no such thing as a "baseline hazard" in this kind of epidemic (i.e. if there are no infected people around, the chance of transmission is exactly zero, which cannot be represented in a proportional hazard model). It also lets me incorporate more complexity in a very natural way: I can have changing population size and incubation periods just by making straightforward changes to the infection indicator functions, and I can have preferential mixing by introducing more $\lambda$-like parameters that act on covariate information of the individuals.
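For what it is worth, with complete data the product of densities written above is straightforward to evaluate and maximize numerically. The sketch below is ours (the infection times are made up, and it relies on the same simplifying assumptions as the question: one initial case at $t=0$, no recovery, uniform mixing); it is not meant to settle the dependence issue, only to make the objective concrete.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(lam, infection_times):
    """Negative log of prod_i f_i(t_i | lambda) for the additive-hazard model above.
    infection_times must include the initial case at time 0."""
    times = np.sort(np.asarray(infection_times, dtype=float))
    nll = 0.0
    for t in times[1:]:                          # skip the initial case
        earlier = times[times < t]               # individuals already infectious at time t
        hazard = lam * len(earlier)              # h_i(t) = lambda * (number infectious)
        cum_hazard = lam * np.sum(t - earlier)   # integral of h_i(u) du over [0, t]
        nll -= np.log(hazard) - cum_hazard
    return nll

times = [0.0, 0.8, 1.1, 1.9, 2.0, 2.4]           # made-up complete data
res = minimize_scalar(lambda lam: neg_log_likelihood(lam, times),
                      bounds=(1e-6, 10.0), method='bounded')
print(res.x)
# For this particular objective the maximizer is also available in closed form:
# lambda_hat = (n - 1) / sum_i sum_{j: t_j < t_i} (t_i - t_j).
```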
I think the picture can explain it better than words, but I'm wondering how to figure this out. Given three ratios of distances from corners, not lengths (in the picture I set the base, $base=1$, to be the distance from the top left corner, while the distances to the other two corners are $\alpha\cdot base$ and $\beta\cdot base$), and given the height $H$ and width $W$ of a rectangle, what are the coordinates of a point with said ratios? I'm sure the Apollonian Theorem comes into play, but I can't quite figure it out. Thanks! Segment $1$ and segment $\alpha$ create a circle of possibilities, with the endpoints of its diameter at $\frac{W}{1+\alpha}$ and $\frac{W}{1-\alpha}$ (assuming $\alpha \neq 1$; for $\alpha = 1$ the locus is a perpendicular bisector instead of a circle). $\beta$ and $H$ work similarly. You can then intersect these circles to find two solutions. To find the points we can reach via $\alpha$ and $W$, we can use the distance formula from the two points $(0,0)$ and $(W,0)$: $$ \begin{align} \alpha\sqrt{ x^2 + y^2} &= \sqrt{\left(x-W\right)^2 + y^2}\\ \alpha^2 \left(x^2 + y^2\right) &= \left(x-W\right)^2 + y^2\\ \alpha^2x^2 + \alpha^2y^2 &= x^2 - 2Wx + W^2 + y^2 \\ \left(\alpha^2-1\right)x^2 + \left(\alpha^2-1\right)y^2 + 2Wx &= W^2\\ x^2 + y^2 + \frac{2W}{\alpha^2-1}x &= \frac{W^2}{\alpha^2-1}\\ x^2 + y^2 + \frac{2W}{\alpha^2-1}x + \left(\frac{W}{\alpha^2-1}\right)^2&= \frac{W^2}{\alpha^2-1} + \left(\frac{W}{\alpha^2-1}\right)^2\\ \left(x + \frac{W}{\alpha^2-1}\right)^2 + y^2 &= \frac{\left(\alpha^2-1\right)W^2}{\left(\alpha^2-1\right)^2} + \frac{W^2}{\left(\alpha^2-1\right)^2}\\ \left(x + \frac{W}{\alpha^2-1}\right)^2 + y^2 &= \frac{\alpha^2W^2}{\left(\alpha^2-1\right)^2}\\ \left(x + \frac{W}{\alpha^2-1}\right)^2 + y^2 &= \left(\frac{\alpha W}{\alpha^2-1}\right)^2\\ \end{align}$$ So the candidate points form a circle centered at $A = \left(\frac{W}{1-\alpha^2}, 0\right)$ with radius $a = \left|\frac{\alpha W}{\alpha^2-1}\right|$. We can do the same thing with $H$ and $\beta$ to get a second circle centered at $B = \left(0, \frac{H}{1-\beta^2}\right)$ with radius $b = \left|\frac{\beta H}{\beta^2-1}\right|$. Now, let's find the intersections of these circles. First, we need the distance between the centers: $$d=\sqrt{\left(\frac{W}{1-\alpha^2}\right)^2 + \left(\frac{H}{1-\beta^2}\right)^2}$$ Now, the intersection. I'll use circle $a$ as the first circle. At this point the calculations are getting a little too nasty, and I haven't found anything nice after here, so you'll just have to do the math by plugging previous results in: The distance from $A$ to the chord joining the two solution points is $u = \frac{d^2 + a^2 - b^2}{2d}$. The distance from the line between $A$ and $B$ to the solution points is $v = \sqrt{a^2 - u^2}$. A unit vector pointing from $A$ to $B$ is $U = \frac{B-A}{d}$. Then $V$ is a unit vector perpendicular to $U$ - just switch the coordinates and flip one's sign. Finally, we can find the (up to) two solution points: $$P = A + uU \pm vV$$
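A small numerical sketch of the recipe above (our own code; the rectangle dimensions and ratios in the example are made up), assuming $\alpha \neq 1$, $\beta \neq 1$ and that the two circles actually intersect:

```python
import numpy as np

def locate_point(W, H, alpha, beta):
    """Intersect the two Apollonius circles derived above and return both candidates."""
    A = np.array([W / (1 - alpha**2), 0.0])   # center of the circle for the (0,0)-(W,0) ratio
    a = abs(alpha * W / (1 - alpha**2))       # its radius
    B = np.array([0.0, H / (1 - beta**2)])    # center of the circle for the (0,0)-(0,H) ratio
    b = abs(beta * H / (1 - beta**2))         # its radius

    d = np.linalg.norm(B - A)                 # distance between the centers
    u = (d**2 + a**2 - b**2) / (2 * d)        # distance from A to the chord of intersection
    v = np.sqrt(a**2 - u**2)                  # half-length of that chord
    U = (B - A) / d                           # unit vector from A toward B
    V = np.array([-U[1], U[0]])               # unit vector perpendicular to U
    return A + u * U + v * V, A + u * U - v * V

# Example: a 4 x 3 rectangle with ratios alpha = 2 and beta = 1.5 (illustrative values).
P1, P2 = locate_point(4.0, 3.0, 2.0, 1.5)
print(P1, P2)
```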
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

Title: Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.

February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu.
A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For a fixed positive integer m, we consider the product of m independent n by n random matrices with iid entries, in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge. Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.

April 18, Andrea Agazzi, Duke Title: Large Deviations Theory for Chemical Reaction Networks Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.

April 25, Kavita Ramanan, Brown Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs Abstract: Many applications can be modeled as a large system of homogeneous interacting particles on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown Title: Tales of Random Projections Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
In this post, we will prove that the set $\mathcal{L}_{\{0,1\}}$ of all languages over the set $\{0,1\}$ is not countable, i.e., we cannot enumerate the infinitely many languages in $\mathcal{L}_{\{0,1\}}$. A language over $\{0,1\}$ is a set of finite-length strings formed using only the symbols $0$ and $1$. For example, $\{0, 10, 01, 101\}$ and $\{1, 10, 100, 1000, \ldots\}$ are languages over $\{0,1\}$. Notice that while each string in a language must have finite length, the language itself may have infinitely many strings as illustrated in the second example just given. Our proof will be based on the fact that a contradiction is obtained if $\mathcal{L}_{\{0,1\}}$ is countable. To start, notice that we can enumerate the strings in the set $W$ of all finite-length strings over $\{0,1\}$. To see that, let $W' = \{1s \mid s \in W\}$ be the set which is built from $W$ by adding a $1$ in front of each of its strings. We can interpret every string in $W'$ as the binary representation of some natural number. As an example, for a string $0110 \in W$, we have $10110 \in W'$, and $10110$ represents the natural number $22$ in the decimal base. The reason why we need to build $W'$ comes from the fact that distinct strings such as $0010$ and $10$ are both in $W$ but represent the same natural number in binary representation because they differ only by leading zeros; $W'$ does not have this issue and shows us how we can generate a one-to-one mapping from $W$ to $\mathbb{N}$: just add a $1$ in front of each string in $W$ and interpret the resulting strings as binary numbers (distinct strings in $W$ are mapped to distinct numbers in $\mathbb{N}$). Since $W$ has infinitely many strings and since a one-to-one mapping from $W$ to $\mathbb{N}$ exists, $W$ is countable. In other words, the strings in $W$ can be enumerated and we can therefore write $W = \{s_1, s_2, s_3, \ldots\}$, with $s_j$ being the $j$-th string over $\{0,1\}$. Now assume that $\mathcal{L}_{\{0,1\}}$ is countable, i.e., that $\mathcal{L}_{\{0,1\}} = \{L_1, L_2, L_3, \ldots\}$ with each $L_i$ being a language over $\{0,1\}$. Given that each $L_i$ is a set whose elements are strings from $W$, and since $W$ is countable, we can build a table whose row indices are language indices and whose column indices are string indices as follows: for each table cell with row index $i$ and column index $j$, write $1$ if the language $L_i$ contains the string $s_j$ or $0$ otherwise. This table completely specifies which strings $s_j \in W$ are contained in each language $L_i \in \mathcal{L}_{\{0,1\}}$. Below is an example of what such a table would look like:

            $s_1$     $s_2$     $s_3$     $s_4$     $\ldots$
$L_1$       $1$       $0$       $1$       $1$       $\ldots$
$L_2$       $0$       $0$       $0$       $1$       $\ldots$
$L_3$       $1$       $1$       $0$       $0$       $\ldots$
$L_4$       $0$       $0$       $1$       $1$       $\ldots$
$\vdots$    $\vdots$  $\vdots$  $\vdots$  $\vdots$  $\ddots$

Consider now the language built through the following procedure: flip the value of every diagonal cell on the table above, then collect all strings $s_j$ such that the diagonal cell on the column of $s_j$ has a $1$ after flipping; let $L_{\textrm{diag}}$ be the set of all such strings.
To clarify the way $L_{\textrm{diag}}$ is built, take a look at the table below, which is built by flipping the diagonal entries of the table above:

            $s_1$     $s_2$     $s_3$     $s_4$     $\ldots$
$L_1$       $0$       $0$       $1$       $1$       $\ldots$
$L_2$       $0$       $1$       $0$       $1$       $\ldots$
$L_3$       $1$       $1$       $1$       $0$       $\ldots$
$L_4$       $0$       $0$       $1$       $0$       $\ldots$
$\vdots$    $\vdots$  $\vdots$  $\vdots$  $\vdots$  $\ddots$

From the procedure just described, $L_{\textrm{diag}} = \{s_2, s_3, \ldots\}$, so $L_{\textrm{diag}}$ does not contain $s_1$ and $s_4$ but contains $s_2$ and $s_3$. $L_{\textrm{diag}}$ is a language with a special property: it is different from every language $L_i \in \mathcal{L}_{\{0,1\}}$. Indeed, for every $L_i \in \mathcal{L}_{\{0,1\}}$, if $L_i$ contains $s_i$, $L_{\textrm{diag}}$ does not, but if $L_i$ does not contain $s_i$, $L_{\textrm{diag}}$ does. This implies $L_{\textrm{diag}} \neq L_i$ for all $L_i \in \mathcal{L}_{\{0,1\}}$ and therefore $L_{\textrm{diag}} \notin \mathcal{L}_{\{0,1\}}$. However, since $L_{\textrm{diag}}$ is a set of strings which are in $W$, $L_{\textrm{diag}}$ is a language over $\{0,1\}$ and therefore $L_{\textrm{diag}} \in \mathcal{L}_{\{0,1\}}$, a contradiction. Hence $\mathcal{L}_{\{0,1\}}$ cannot be countable.
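To make the diagonal construction concrete, here is a tiny Python sketch (ours; the three example languages are made up and given as membership tests) of the string enumeration and the diagonal flip:

```python
def nth_string(j):
    """The j-th binary string (j >= 1) in the enumeration above: write j in binary
    and strip the leading 1, so s_1 = '', s_2 = '0', s_3 = '1', s_4 = '00', ..."""
    return bin(j)[3:]

# Each language L_i is represented by a membership test; these are made-up examples.
languages = [
    lambda s: s.endswith('0'),     # L_1
    lambda s: '11' in s,           # L_2
    lambda s: len(s) % 2 == 0,     # L_3
]

def in_L_diag(j):
    """s_j belongs to L_diag exactly when s_j does NOT belong to L_j (the flipped diagonal)."""
    return not languages[j - 1](nth_string(j))

for j in range(1, len(languages) + 1):
    print(j, repr(nth_string(j)), in_L_diag(j))
```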
diff options -rw-r--r-- docs/tutorial/solver.md 79 1 files changed, 78 insertions, 1 deletions diff --git a/docs/tutorial/solver.md b/docs/tutorial/solver.md index 17f793e..b150f64 100644 --- a/docs/tutorial/solver.md +++ b/docs/tutorial/solver.md @@ -6,7 +6,14 @@ title: Solver / Model Optimization The solver orchestrates model optimization by coordinating the network's forward inference and backward gradients to form parameter updates that attempt to improve the loss. The responsibilities of learning are divided between the Solver for overseeing the optimization and generating parameter updates and the Net for yielding loss and gradients. -The Caffe solvers are Stochastic Gradient Descent (SGD), Adaptive Gradient (ADAGRAD), and Nesterov's Accelerated Gradient (NESTEROV). +The Caffe solvers are: + +- Stochastic Gradient Descent (`SGD`), +- AdaDelta (`ADADELTA`), +- Adaptive Gradient (`ADAGRAD`), +- Adam (`ADAM`), +- Nesterov's Accelerated Gradient (`NESTEROV`) and +- RMSprop (`RMSPROP`) The solver @@ -104,6 +111,32 @@ If learning diverges (e.g., you start to see very large or `NaN` or `inf` loss v [ImageNet Classification with Deep Convolutional Neural Networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). *Advances in Neural Information Processing Systems*, 2012. +### AdaDelta + +The **AdaDelta** (`solver_type: ADADELTA`) method (M. Zeiler [1]) is a "robust learning rate method". It is a gradient-based optimization method (like SGD). The update formulas are + +$$ +\begin{align} +(v_t)_i &= \frac{\operatorname{RMS}((v_{t-1})_i)}{\operatorname{RMS}\left( \nabla L(W_t) \right)_{i}} \left( \nabla L(W_{t'}) \right)_i +\\ +\operatorname{RMS}\left( \nabla L(W_t) \right)_{i} &= \sqrt{E[g^2] + \varepsilon} +\\ +E[g^2]_t &= \delta{E[g^2]_{t-1} } + (1-\delta)g_{t}^2 +\end{align} +$$ + +and + +$$ +(W_{t+1})_i = +(W_t)_i - \alpha +(v_t)_i. +$$ + +[1] M. Zeiler + [ADADELTA: AN ADAPTIVE LEARNING RATE METHOD](http://arxiv.org/pdf/1212.5701.pdf). + *arXiv preprint*, 2012. + ### AdaGrad The **adaptive gradient** (`solver_type: ADAGRAD`) method (Duchi et al. [1]) is a gradient-based optimization method (like SGD) that attempts to "find needles in haystacks in the form of very predictive but rarely seen features," in Duchi et al.'s words. @@ -124,6 +157,28 @@ Note that in practice, for weights $$ W \in \mathcal{R}^d $$, AdaGrad implementa [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf). *The Journal of Machine Learning Research*, 2011. +### Adam + +The **Adam** (`solver_type: ADAM`), proposed in Kingma et al. [1], is a gradient-based optimization method (like SGD). This includes an "adaptive moment estimation" ($$m_t, v_t$$) and can be regarded as a generalization of AdaGrad. The update formulas are + +$$ +(m_t)_i = \beta_1 (m_{t-1})_i + (1-\beta_1)(\nabla L(W_t))_i,\\ +(v_t)_i = \beta_2 (v_{t-1})_i + (1-\beta_2)(\nabla L(W_t))_i^2 +$$ + +and + +$$ +(W_{t+1})_i = +(W_t)_i - \alpha \frac{\sqrt{1-(\beta_2)_i^t}}{1-(\beta_1)_i^t}\frac{(m_t)_i}{\sqrt{(v_t)_i}+\varepsilon}. +$$ + +Kingma et al. [1] proposed to use $$\beta_1 = 0.9, \beta_2 = 0.999, \varepsilon = 10^{-8}$$ as default values. Caffe uses the values of `momemtum, momentum2, delta` for $$\beta_1, \beta_2, \varepsilon$$, respectively. + +[1] D. Kingma, J. Ba. + [Adam: A Method for Stochastic Optimization](http://arxiv.org/abs/1412.6980). + *International Conference for Learning Representations*, 2015. 
+ ### NAG **Nesterov's accelerated gradient** (`solver_type: NESTEROV`) was proposed by Nesterov [1] as an "optimal" method of convex optimization, achieving a convergence rate of $$ \mathcal{O}(1/t^2) $$ rather than the $$ \mathcal{O}(1/t) $$. @@ -149,6 +204,28 @@ What distinguishes the method from SGD is the weight setting $$ W $$ on which we [On the Importance of Initialization and Momentum in Deep Learning](http://www.cs.toronto.edu/~fritz/absps/momentum.pdf). *Proceedings of the 30th International Conference on Machine Learning*, 2013. +### RMSprop + +The **RMSprop** (`solver_type: RMSPROP`), suggested by Tieleman in a Coursera course lecture, is a gradient-based optimization method (like SGD). The update formulas are + +$$ +(v_t)_i = +\begin{cases} +(v_{t-1})_i + \delta, &(\nabla L(W_t))_i(\nabla L(W_{t-1}))_i > 0\\ +(v_{t-1})_i \cdot (1-\delta), & \text{else} +\end{cases} +$$ + +$$ +(W_{t+1})_i =(W_t)_i - \alpha (v_t)_i, +$$ + +If the gradient updates results in oscillations the gradient is reduced by times $$1-\delta$$. Otherwise it will be increased by $$\delta$$. The default value of $$\delta$$ (`rms_decay`) is set to $$\delta = 0.02$$. + +[1] T. Tieleman, and G. Hinton. + [RMSProp: Divide the gradient by a running average of its recent magnitude](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). + *COURSERA: Neural Networks for Machine Learning.Technical report*, 2012. + ## Scaffolding The solver scaffolding prepares the optimization method and initializes the model to be learned in `Solver::Presolve()`.
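To make the Adam formulas quoted in the patch concrete, here is a plain NumPy sketch of one update step (our own illustration of the equations above, not Caffe's actual C++ implementation; the toy loss and hyperparameter values are arbitrary):

```python
import numpy as np

def adam_update(w, grad, m, v, t, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step following the (m_t, v_t) and (W_{t+1}) formulas quoted above."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    step = alpha * np.sqrt(1 - beta2**t) / (1 - beta1**t)   # bias-corrected step size
    w = w - step * m / (np.sqrt(v) + eps)
    return w, m, v

# Toy usage on the quadratic loss L(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 1001):
    w, m, v = adam_update(w, w, m, v, t)
print(w)   # should have moved close to the minimizer at zero
```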
To the best of my knowledge, the main strategy to compute a moduli space $M(A, J)$ consists in: finding a symplectic form $\omega$ on $M$ which (at least) tames $J$ (ideally, $J$ and $\omega$ are compatible) and for which $J$ is regular i.e. a regular value for the projection map $\mathcal{M}(A, \mathcal{J}_{\omega}) := \bigcup_{J' \in \mathcal{J}_{\omega}} \mathcal{M}(A, J') \to \mathcal{J}_{\omega}$; finding a path $\{J_t\}_{t \in [0,1]} \subset \mathcal{J}_{\omega}$ of regular almost complex structures such that $J_0 = J$ and $J_1$ is integrable i.e. a genuine complex structure on $M$ (if $M$ supports such a structure), so that the different $\mathcal{M}(A, J_t)$ are all diffeomorphic; computing $\mathcal{M}(A, J_1)$ using techniques from algebraic geometry. An important aspect of Gromov's use of almost complex structures and (pseudo)holomorphic curves was to explain the first two bullets. In your example, $J$ is already integrable (being the standard complex structure on $M = \mathbb{C}P^3$), so the first two bullets above are irrelevant. I shall describe the moduli space $\mathcal{M}(A, J)$ for $M = \mathbb{C}P^n$ equipped with its standard (integrable) complex structure $J$ and $A$ the homology class of linear embeddings of $\mathbb{C}P^1$ into $\mathbb{C}P^n$. Claim 1: The image of $u \in \mathcal{M}(A, J)$ is an algebraic subvariety of $\mathbb{C}P^n$. Proof: We know that $u : \mathbb{C}P^1 \to \mathbb{C}P^n$ is a smooth $(j, J)$-holomorphic map. Though we don't know if it is embedded at this point, one could argue (along the lines of this argument for instance) that the image is a (complex) analytic subvariety. But then Chow's theorem implies that the image of $u$ is an algebraic subvariety. We are therefore really allowed to use algebraic geometric tools. $\square$ Claim 2: $u \in \mathcal{M}(A, J)$ is a diffeomorphic parametrization of a projective line in $\mathbb{C}P^n$, i.e. the vanishing locus of $\mathrm{dim}_{\mathbb{C}}(\mathbb{C}P^n) - \mathrm{dim}_{\mathbb{C}}(\mathbb{C}P^1) = n-1$ linearly independent complex-linear functions. Proof: It is a fact that given two distinct points in $\mathbb{C}P^n$, there is a unique projective line containing these two points. It is also a fact that the intersection number of a projective line with a generic hyperplane (i.e. the zero-locus of a single complex-linear map) is $+1$. Consider $u \in \mathcal{M}(A, J)$. Since $[u(\mathbb{C}P^1)] = A$, its intersection number (which is a homological invariant) with a generic hyperplane $H$ is $+1$. Pick two distinct points $p,q$ in the image $Im(u)$ of $u$, consider the unique projective line $L$ passing through these two points and consider any hyperplane $H$ which contains $L$. By Bertini's theorem, either $H$ contains $Im(u)$ or it intersects it in finitely many isolated points. If the second case occurred, then $H$ and $Im(u)$ would intersect at least in $p,q$; by positivity of intersection, each intersection would contribute at least $+1$, which contradicts $H \cdot Im(u) = 1$. Hence $H \supset Im(u)$. Since this is true for all $H$ which contain $L$ and since $L$ is the intersection of all such $H$, we deduce $Im(u) \subset L$. Since $[Im(u)] = [L]$, the map $u : \mathbb{C}P^1 \to L$ not only has to be surjective (for otherwise $Im(u)$ would be contractible), but it also has to be injective and a submersion (for otherwise, by the open mapping theorem, it would be a multi-covering). $\square$ So the problem is reduced to computing the moduli space of projective lines.
Each projective line is the image in $\mathbb{C}P^n$ of a unique $2$-dimensional complex subspace $K \subset \mathbb{C}^{n+1}$, which is the common zero-locus of $n-1$ linearly independent complex-linear maps to $\mathbb{C}$, that is, the kernel of a surjective complex-linear map $l : \mathbb{C}^{n+1} \to \mathbb{C}^{n-1}$. We can interpret $l$ as a complex $(n-1)$-frame in $\mathbb{C}^{n+1}$. Two such maps $l, l' : \mathbb{C}^{n+1} \to \mathbb{C}^{n-1}$ have the same kernel $K$ if and only if there is an element $M \in Gl(n-1, \mathbb{C})$ such that $l' = M \circ l$. We deduce that $\mathcal{M}(A, J)/G$ is the quotient by $Gl(n-1, \mathbb{C})$ of the manifold of complex $(n-1)$-frames in $\mathbb{C}^{n+1}$, namely the Grassmannian $Gr_{\mathbb{C}}(n-1, n+1)$. This is a compact manifold of real dimension $$2(n-1)[(n+1)-(n - 1)] = 4(n-1) = 2n + 2(n+1) - 6 = 2n + c_1(A) - 6 \; .$$ Remark: the compactness of $\mathcal{M}(A,J)/G$ could have been deduced directly from Gromov's compactness theorem, since the energy $\omega(A)$ is constant (hence bounded) and $A$ cannot be decomposed as a positive sum of other homology classes (since it generates $H_2(\mathbb{C}P^n, \mathbb{Z})$). One can expect that there are a lot of triplets $(M, J, A)$ for which $\mathcal{M}(A, J)$ is known, since algebraic geometry is such an old, developed and active subject. It is thus difficult to refer to any specific place in the literature for any specific computations, as there is a lot of 'folklore' scattered here and there. (However this might be only a personal lack of expertise on my part.) I would thus mention only one other moduli space $\mathcal{M}(A, J)$, which was considered by Gromov in order to prove (among other things) his symplectic nonsqueezing theorem. If $(V, \omega)$ is a compact symplectic manifold which is symplectically aspherical (i.e. the symplectic form vanishes on $\pi_2(V)$), we can consider $(M = V \times S^2, \omega \oplus \omega_0)$ where $(S^2, \omega_0)$ is the standard symplectic sphere (of some area). Taking $A = [\{pt\} \times S^2]$, then for a generic choice $J$ of compatible almost complex structure on $M$, the moduli space $\mathcal{M}(A, J)/G \cong V$; in fact, through each point of $M$, there is one and only one $J$-holomorphic curve in the class $A$ (there is no bubbling phenomenon thanks to the asphericity condition).
I am studying entanglement entropy. For any local quantum system, the entanglement entropy of a region $A$ in a highly mixed state is extensive, $$ S_A \sim \frac{\text{Vol}(A)}{\epsilon^d} $$ where $\epsilon$ is the length between sites (or the UV cutoff for a regularized QFT). This is because $$S_A=\log[\text{dim}(\mathcal{H}_A)]$$ where $\mathcal{H}_A$ is the Hilbert space of the degrees of freedom in $A$, and the dimension scales as the number of degrees of freedom per site raised to the number of sites, $\sim \exp(\text{Vol}(A)/\epsilon^d)$. On the other hand, for a pure state the entanglement entropy is not extensive, since $S_A=S_B$, where $B$ is the complement of $A$. In fact it is proven that it follows an 'area law'. The usual intuitive argument for this area law is that to compute the entanglement entropy we count the pairs entangled across the boundary of $A$. Since the theory is local, the most entangled pairs will be those that lie within $\sim\epsilon$ of the boundary, while sites farther away contribute little to the entropy. So this way we have that the EE must scale as $\text{Vol}(\partial A)$. My problem is that I don't know if I am understanding why we count all the sites in $A$ in one case and only the sites near the boundary in the other. The reason I found is that in the case of the pure state we are talking about the vacuum: the system is in its ground state and a site can only see what is near around it, since the interactions are local. But if we excite the system (for example, if we consider a thermal state $\rho = e^{-\beta H}/\text{tr }e^{-\beta H}$, which is a mixed state), the system has enough energy to go beyond the local behavior and then we need to consider all the sites for the EE. Am I right about that?
Consider $N$ independent scalar fields $φ_i (x)$ in 4D space. Also consider a lagrangian density $$\mathcal{L} = \mathcal{L}(φ_i, \partial_μφ_i).$$ Suppose we perform the following infinitesimal transformations: $$x'^μ = x^μ + ε^α Χ^μ_α \tag{1} $$ $$φ_i '(x')=φ_i(x)+ ε^α Ψ_{iα} .\tag{2} $$ Let us denote $δφ = φ'(x')-φ(x)$ and $\barδφ = φ'(x)-φ(x)$. It's easy to see that $$ \barδφ_i= δφ_i - δx^ν \partial_ν φ_i. \tag{3}$$ Now doing some calculations which can be found in many books (I'm following D. Gross' lecture notes, also Peskin's and Weinberg's introductory QFT books), we see that under the above transformations the action changes as: $$ δS=\int d^4x \left\{ \frac{δS}{δφ_i} \barδφ_i + \partial_μ\left[\mathcal{L}δx^μ +\frac{\partial \mathcal{L}}{\partial(\partial_μ φ_i)} \barδφ_i\right] \right\} \tag{4}$$ where $\frac{δS}{δφ_i} =0 $ are the equations of motion (EOM). My first point of confusion is the following: What do we want from this transformation so as to get Noether's Theorem: do we want the Lagrangian to change up to a 4-divergence; the Lagrangian to be invariant, with the equations of motion satisfied or not; or the action to be invariant, i.e. to remain unchanged no matter if the equations of motion are satisfied or not? Supposing the EOM are satisfied, then from $(4)$ we get only a 4-divergence in the integral of the action. Second point: If we allow the Lagrangian to change by a 4-divergence, as Peskin says in chapter 2.2, then every infinitesimal transformation would be a symmetry and give a conserved quantity, right? (Peskin only deals with field transformations I think, but still what he says should be a special case and thus generalize to what we're doing here.) On the other hand, upon getting to eq $(4)$ Gross says that if S is to be invariant under those transformations (for arbitrary volumes, he notes; is this important?) then that 4-divergence should be 0, thus we have our conserved current. Indeed I know that the general Noether current is exactly what we see from (4): $$J^μ_α = \mathcal{L}X^μ_α +\frac{\partial \mathcal{L}}{\partial(\partial_μ φ_i)}Ψ_{iα}-\frac{\partial \mathcal{L}}{\partial(\partial_μ φ_i)} \partial_ν φ_i X^ν_α. $$ So when I have ANY transformation I can just close my eyes, apply this "formula" and find a conserved current? Moreover, doesn't "invariant action" mean that it doesn't matter if the equations of motion are satisfied or not, the action still doesn't change? So Gross means something else? In this question Energy momentum tensor from Noether's theorem the accepted answer says "For actions that only depend on first derivatives of the fields, the variation of the action will inevitably have the form $$ S = \int (\partial_\mu a) j^\mu d^d x $$ where $j^\mu$ is some particular function of the fields or other degrees of freedom (and their derivatives)". Does $(4)$ assume this form? Should it? Note that I haven't specified if $ε^α$ are constant or not; in getting $(4)$ we've been as general as possible on this matter (right?).
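As a concrete check (our addition, not part of the question): specializing the general current above to constant spacetime translations, $X^\mu_\alpha = \delta^\mu_\alpha$ and $\Psi_{i\alpha} = 0$, gives

$$J^\mu{}_\nu = \mathcal{L}\,\delta^\mu_\nu - \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_i)}\,\partial_\nu \phi_i,$$

which is, up to an overall sign convention, the canonical energy-momentum tensor $T^\mu{}_\nu$; this is the standard textbook instance of the general formula.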
8. Natural Deduction for First Order Logic¶ 8.1. Rules of Inference¶ In the last chapter, we discussed the language of first-order logic, and the rules that govern its use. We summarize them here: The universal quantifier: In the introduction rule, \(x\) should not be free in any uncanceled hypothesis. In the elimination rule, \(t\) can be any term that does not clash with any of the bound variables in \(A\). The existential quantifier: In the introduction rule, \(t\) can be any term that does not clash with any of the bound variables in \(A\). In the elimination rule, \(y\) should not be free in \(B\) or any uncanceled hypothesis. Equality: Strictly speaking, only \(\mathrm{refl}\) and the second substitution rule are necessary. The others can be derived from them. 8.2. The Universal Quantifier¶ The following example of a proof in natural deduction shows that if, for every \(x\), \(A(x)\) holds, and for every \(x\), \(B(x)\) holds, then for every \(x\), they both hold: Notice that neither of the assumptions 1 or 2 mention \(y\), so that \(y\) is really “arbitrary” at the point where the universal quantifiers are introduced. Here is another example: As an exercise, try proving the following: Here is a more challenging exercise. Suppose I tell you that, in a town, there is a (male) barber that shaves all and only the men who do not shave themselves. You can show that this is a contradiction, arguing informally, as follows: By the assumption, the barber shaves himself if and only if he does not shave himself. Call this statement (*). Suppose the barber shaves himself. By (*), this implies that he does not shave himself, a contradiction. So, the barber does not shave himself. But using (*) again, this implies that the barber shaves himself, which contradicts the fact we just showed, namely, that the barber does not shave himself. Try to turn this into a formal argument in natural deduction. Let us return to the example of the natural numbers, to see how deductive notions play out there. Suppose we have defined \(\mathit{even}\) and \(\mathit{odd}\) in such a way that we can prove: \(\forall n \; (\neg \mathit{even}(n) \to \mathit{odd}(n))\) \(\forall n \; (\mathit{odd}(n) \to \neg \mathit{even}(n))\) Then we can go on to derive \(\forall n \; (\mathit{even}(n) \vee \mathit{odd}(n))\) as follows: We can also prove \(\forall n \; \neg (\mathit{even}(n) \wedge \mathit{odd}(n))\): As we move from modeling basic rules of inference to modeling actual mathematical proofs, we will tend to shift focus from natural deduction to formal proofs in Lean. Natural deduction has its uses: as a model of logical reasoning, it provides us with a convenient means to study metatheoretic properties such as soundness and completeness. For working within the system, however, proof languages like Lean’s tend to scale better, and produce more readable proofs. 8.3. The Existential Quantifier¶ Remember that the intuition behind the elimination rule for the existential quantifier is that if we know \(\exists x \; A(x)\), we can temporarily reason about an arbitrary element \(y\) satisfying \(A(y)\) in order to prove a conclusion that doesn’t depend on \(y\). Here is an example of how it can be used. The next proof says that if we know there is something satisfying both \(A\) and \(B\), then we know, in particular, that there is something satisfying \(A\).
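Since the text notes that such proofs are often more readable in Lean, here is a hedged sketch (Lean 3 syntax; the variable and hypothesis names are ours) of the two statements just discussed: the conjunction example for the universal quantifier and the ∃-elimination example described in the last paragraph.

```lean
variables {U : Type} (A B : U → Prop)

-- If everything satisfies A and everything satisfies B, then everything satisfies both.
example (h₁ : ∀ x, A x) (h₂ : ∀ x, B x) : ∀ x, A x ∧ B x :=
λ y, and.intro (h₁ y) (h₂ y)

-- If something satisfies both A and B, then in particular something satisfies A.
example (h : ∃ x, A x ∧ B x) : ∃ x, A x :=
exists.elim h (λ y hy, exists.intro y hy.left)
```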
The following proof shows that if there is something satisfying either \(A\) or \(B\), then either there is something satisfying \(A\), or there is something satisfying \(B\). The following example is more involved: In this proof, the existential elimination rule (the line labeled \(3\)) is used to cancel two hypotheses at the same time. Note that when this rule is applied, the hypothesis \(\forall x \; (A(x) \to \neg B(x))\) has not yet been canceled. So we have to make sure that this formula doesn’t contain the variable \(x\) freely. But this is o.k., since this hypothesis contains \(x\) only as a bound variable. Another example is that if \(x\) does not occur in \(P\), then \(\exists x \; P\) is equivalent to \(P\): This is short but tricky, so let us go through it carefully. On the left, we assume \(\exists x \; P\) to conclude \(P\). We assume \(P\), and now we can immediately cancel this assumption by existential elimination, since \(x\) does not occur in \(P\), so it doesn’t occur freely in any assumption or in the conclusion. On the right we use existential introduction to conclude \(\exists x \; P\) from \(P\). 8.4. Equality¶ Recall the natural deduction rules for equality: Keep in mind that we have implicitly fixed some first-order language, and \(r\), \(s\), and \(t\) are any terms in that language. Recall also that we have adopted the practice of using functional notation with terms. For example, if we think of \(r(x)\) as the term \((x + y) \times (z + 0)\) in the language of arithmetic, then \(r(0)\) is the term \((0 + y) \times (z + 0)\) and \(r(u + v)\) is \(((u + v) + y) \times (z + 0)\). So one example of the first inference on the second line is this: The second axiom on that line is similar, except now \(P(x)\) stands for any formula, as in the following inference: Notice that we have written the reflexivity axiom, \(t = t\), as a rule with no premises. If you use it in a proof, it does not count as a hypothesis; it is built into the logic. In fact, we can think of the first inference on the second line as a special case of the first. Consider, for example, the formula \(((u + v) + y) \times (z + 0) = (x + y) \times (z + 0)\). If we plug \(u + v\) in for \(x\), we get an instance of reflexivity. If we plug in \(0\), we get the conclusion of the first example above. The following is therefore a derivation of the first inference, using only reflexivity and the second substitution rule above: Roughly speaking, we are replacing the second instance of \(u + v\) in an instance of reflexivity with \(0\) to get the conclusion we want. Equality rules let us carry out calculations in symbolic logic. This typically amounts to using the equality rules we have already discussed, together with a list of general identities. For example, the following identities hold for any real numbers \(x\), \(y\), and \(z\): commutativity of addition: \(x + y = y + x\) associativity of addition: \((x + y) + z = x + (y + z)\) additive identity: \(x + 0 = 0 + x = x\) additive inverse: \(-x + x = x + -x = 0\) multiplicative identity: \(x \cdot 1 = 1 \cdot x = x\) commutativity of multiplication: \(x \cdot y = y \cdot x\) associativity of multiplication: \((x \cdot y) \cdot z = x \cdot (y \cdot z)\) distributivity: \(x \cdot (y + z) = x \cdot y + x \cdot z, \quad (x + y) \cdot z = x \cdot z + y \cdot z\) You should imagine that there are implicit universal quantifiers in front of each statement, asserting that the statement holds for any values of \(x\), \(y\), and \(z\). 
Note that \(x\), \(y\), and \(z\) can, in particular, be integers or rational numbers as well. Calculations involving real numbers, rational numbers, or integers generally involve identities like this. The strategy is to use the elimination rule for the universal quantifier to instantiate general identities, use symmetry, if necessary, to orient an equation in the right direction, and then using the substitution rule for equality to change something in a previous result. For example, here is a natural deduction proof of a simple identity, \(\forall x, y, z \; ((x + y) + z = (x + z) + y)\), using only commutativity and associativity of addition. We have taken the liberty of using a brief name to denote the relevant identities, and combining multiple instances of the universal quantifier introduction and elimination rules into a single step. There is generally nothing interesting to be learned from carrying out such calculations in natural deduction, but you should try one or two examples to get the hang of it, and then take pleasure in knowing that it is possible. 8.5. Counterexamples and Relativized Quantifiers¶ Consider the statement: Every prime number is odd. In first-order logic, we could formulate this as \(\forall p \; (\mathit{prime}(p) \to \mathit{odd}(p))\). This statement is false, because there is a prime number which is even, namely the number 2. This is called a counterexample to the statement. More generally, given a formula \(\forall x \; A(x)\), a counterexample is a value \(t\) such that \(\neg A(t)\) holds. Such a counterexample shows that the original formula is false, because we have the following equivalence: \(\neg \forall x \; A(x) \leftrightarrow \exists x \; \neg A(x)\). So if we find a value \(t\) such that \(\neg A(t)\) holds, then by the existential introduction rule we can conclude that \(\exists x \; \neg A(x)\), and then by the above equivalence we have \(\neg \forall x \; A(x)\). Here is a proof of the equivalence: One remark about the proof: at the step marked by \(4\) we cannot use the existential introduction rule, because at that point our only assumption is \(\neg \forall x \; A(x)\), and from that assumption we cannot prove \(\neg A(t)\) for a particular term \(t\). So we use a proof by contradiction there. As an exercise, prove the “dual” equivalence yourself: \(\neg \exists x \; A(x) \leftrightarrow \forall x \; \neg A(x)\). This can be done without using proof by contradiction. In Chapter 7 we saw examples of how to use relativization to restrict the scope of a universal quantifier. Suppose we want to say “every prime number is greater than 1”. In first order logic this can be written as \(\forall n (\mathit{prime}(n) \to n > 1)\). The reason is that the original statement is equivalent to the statement “for every natural number, if it is prime, then it is greater than 1”. Similarly, suppose we want to say “there exists a prime number greater than 100.” This is equivalent to saying “there exists a natural number which is prime and greater than 100,” which can be expressed as \(\exists n \; (\mathit{prime}(n) \wedge n > 100)\). As an exercise you can prove the above results about negations of quantifiers also for relativized quantifiers. 
Specifically, prove the following statements: \(\neg \exists x \; (A(x) \wedge B(x)) \leftrightarrow \forall x \; ( A(x) \to \neg B(x))\) \(\neg \forall x \; (A(x) \to B(x)) \leftrightarrow \exists x \; (A(x) \wedge \neg B(x))\) For reference, here is a list of valid sentences involving quantifiers: \(\forall x \; A \leftrightarrow A\) if \(x\) is not free in \(A\) \(\exists x \; A \leftrightarrow A\) if \(x\) is not free in \(A\) \(\forall x \; (A(x) \land B(x)) \leftrightarrow \forall x \; A(x) \land \forall x \; B(x)\) \(\exists x \; (A(x) \land B) \leftrightarrow \exists x \; A(x) \land B\) if \(x\) is not free in \(B\) \(\exists x \; (A(x) \lor B(x)) \leftrightarrow \exists x \; A(x) \lor \exists x \; B(x)\) \(\forall x \; (A(x) \lor B) \leftrightarrow \forall x \; A(x) \lor B\) if \(x\) is not free in \(B\) \(\forall x \; (A(x) \to B) \leftrightarrow (\exists x \; A(x) \to B)\) if \(x\) is not free in \(B\) \(\exists x \; (A(x) \to B) \leftrightarrow (\forall x \; A(x) \to B)\) if \(x\) is not free in \(B\) \(\forall x \; (A \to B(x)) \leftrightarrow (A \to \forall x \; B(x))\) if \(x\) is not free in \(A\) \(\exists x \; (A \to B(x)) \leftrightarrow (A \to \exists x \; B(x))\) if \(x\) is not free in \(A\) \(\exists x \; A(x) \leftrightarrow \neg \forall x \; \neg A(x)\) \(\forall x \; A(x) \leftrightarrow \neg \exists x \; \neg A(x)\) \(\neg \exists x \; A(x) \leftrightarrow \forall x \; \neg A(x)\) \(\neg \forall x \; A(x) \leftrightarrow \exists x \; \neg A(x)\) All of these can be derived in natural deduction. The last two allow us to push negations inwards, so we can continue to put first-order formulas in negation normal form. Other rules allow us to bring quantifiers to the front of any formula, though, in general, there will be multiple ways of doing this. A formula with all the quantifiers in front is said to be in prenex form.
8.6. Exercises
Give a natural deduction proof of \[\forall x \; (A(x) \to B(x)) \to (\forall x \; A(x) \to \forall x \; B(x)).\] Give a natural deduction proof of \(\forall x \; B(x)\) from hypotheses \(\forall x \; (A(x) \vee B(x))\) and \(\forall y \; \neg A(y)\). From hypotheses \(\forall x \; (\mathit{even}(x) \vee \mathit{odd}(x))\) and \(\forall x \; (\mathit{odd}(x) \to \mathit{even}(s(x)))\) give a natural deduction proof of \(\forall x \; (\mathit{even}(x) \vee \mathit{even}(s(x)))\). (It might help to think of \(s(x)\) as the function defined by \(s(x) = x + 1\).) Give a natural deduction proof of \(\exists x \; A(x) \vee \exists x \; B(x) \to \exists x \; (A(x) \vee B(x))\). Give a natural deduction proof of \(\exists x \; (A(x) \wedge C(x))\) from the assumptions \(\exists x \; (A(x) \wedge B(x))\) and \(\forall x \; (A(x) \wedge B(x) \to C(x))\). Prove some of the other equivalences in the last section. Consider some of the various ways of expressing “nobody trusts a politician” in first-order logic: \(\forall x \; (\mathit{politician}(x) \to \forall y \; (\neg \mathit{trusts}(y,x)))\) \(\forall x,y \; (\mathit{politician}(x) \to \neg \mathit{trusts}(y,x))\) \(\neg \exists x,y \; (\mathit{politician}(x) \wedge \mathit{trusts}(y,x))\) \(\forall x, y \; (\mathit{trusts}(y,x) \to \neg \mathit{politician}(x))\) They are all logically equivalent. Show this for the second and the fourth, by giving natural deduction proofs of each from the other. (As a shortcut, in the \(\forall\) introduction and elimination rules, you can introduce / eliminate both variables in one step.)
Formalize the following statements, and give a natural deduction proof in which the first three statements appear as (uncancelled) hypotheses, and the last line is the conclusion: Every young and healthy person likes baseball. Every active person is healthy. Someone is young and active. Therefore, someone likes baseball. Use \(Y(x)\) for “is young,” \(H(x)\) for “is healthy,” \(A(x)\) for “is active,” and \(B(x)\) for “likes baseball.” Give a natural deduction proof of \(\forall x, y, z \; (x = z \to (y = z \to x = y))\) using the equality rules in Section 8.4. Give a natural deduction proof of \(\forall x, y \; (x = y \to y = x)\) using only these two hypotheses (and none of the new equality rules): \(\forall x \; (x = x)\) \(\forall u, v, w \; (u = w \to (v = w \to u = v))\) (Hint: Choose instantiations of \(u\), \(v\), and \(w\) carefully. You can instantiate all the universal quantifiers in one step, as on the last homework assignment.) Give a natural deduction proof of \(\neg \exists x \; (A(x) \wedge B(x)) \leftrightarrow \forall x \; (A(x) \to \neg B(x))\) Give a natural deduction proof of \(\neg \forall x \; (A(x) \to B(x)) \leftrightarrow \exists x \; (A(x) \wedge \neg B(x))\) Remember that both the following express \(\exists!x \; A(x)\), that is, the statement that there is a unique \(x\) satisfying \(A(x)\): \(\exists x \; (A(x) \wedge \forall y \; (A(y) \to y = x))\) \(\exists x \; A(x) \wedge \forall y \; \forall y' \; (A(y) \wedge A(y') \to y = y')\) Do the following: Give a natural deduction proof of the second, assuming the first as a hypothesis. Give a natural deduction proof of the first, asssuming the second as a hypothesis. (Warning: these are long.)
1. Homework Statement
Find a basis and the dimension of [itex]V,W,V\cap W,V+W[/itex] where [itex]V=\{p\in\mathbb{R}_4(x):p'(0) \wedge p(1)=p(0)=p(-1)\},W=\{p\in\mathbb{R}_4(x):p(1)=0\}[/itex]
2. Homework Equations
-Vector spaces
3. The Attempt at a Solution
Could someone give a hint on how to get a general representation of a vector in [itex]V[/itex] and [itex]W[/itex]?
J. D. Hamkins, “Tall cardinals,” Math.~Logic Q., vol. 55, iss. 1, pp. 68-86, 2009. @ARTICLE{Hamkins2009:TallCardinals, AUTHOR = {Hamkins, Joel D.}, TITLE = {Tall cardinals}, JOURNAL = {Math.~Logic Q.}, FJOURNAL = {Mathematical Logic Quarterly}, VOLUME = {55}, YEAR = {2009}, NUMBER = {1}, PAGES = {68--86}, ISSN = {0942-5616}, MRCLASS = {03E55 (03E35)}, MRNUMBER = {2489293 (2010g:03083)}, MRREVIEWER = {Carlos A.~Di Prisco}, DOI = {10.1002/malq.200710084}, URL = {http://wp.me/p5M0LV-3y}, file = F, } A cardinal $\kappa$ is tall if for every ordinal $\theta$ there is an embedding $j:V\to M$
For your first question, the original paper ( J. Chem. Soc., Dalton Trans. 1984, 1349–1356) that described the geometry index $\tau_5$ defined it as an "index of trigonality". For example, they write for a compound with $\tau_5=0.48$ By this criterion, the irregular co-ordination geometry of $\ce{[Cu(bmdhp)(OH2)]2+}$ in the solid state is described as being $48\%$ along the pathway of distortion from square pyramidal toward trigonal bipyramidal. For your second question, while I haven't been able to find a $\tau_7$ or $\tau_8$ used in the literature, it seems possible to define such parameters under the right conditions. To devise a $\tau_8$, we can see that for a regular cube $\ce{MX_8}$, there can only be bond angles of $70.5^\circ$ (between adjacent $\ce{X}$ in the same square) and $109.5^\circ$ (between opposite corner $\ce{X}$ of the same square or between corner $\ce{X}$ of different squares). However, an antiprism instead has an angle of $99.6^\circ$ separating the $\ce{X}$ of different squares. (Image obtained from Inorganic Chemistry by Miessler and Tarr) This suggests using a formula reminiscent of $\tau_5$ to define $\tau_8$ as the antiprismatic distortion index. One possibility is $$\tau_8=\frac{\beta-\alpha}{9.9^\circ}$$ where $\beta > \alpha$ are the two largest valence angles and $9.9^\circ$ is a normalization factor to make it between $0$ and $1$. So when $\alpha=\beta=109.5^\circ \to \tau_8=0 \to$ cubic geometry and $\alpha=109.5^\circ$ $\beta=99.6^\circ \to \tau_8=1 \to$ antiprismatic geometry. This will only work if the structure is a regular antiprism (i.e an anticube). The same is true for defining $\tau_7$ between a pentagonal bipyramid and a monocapped trigonal prismatic. This is because the angles for these will vary if all the attached groups are not the same and so a consistent scheme based on the angles would not suffice. I also imagine that $\tau_7$ would be harder to define in this way because I don't think there is pair of angles that on its own could describe the distortion between the two geometries.
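For experimenting with structures, both indices are trivial to compute from the two largest valence angles. A minimal sketch (the function names are mine, and τ₈ here is exactly the speculative definition proposed above, not an established literature parameter):

def tau5(beta, alpha):
    """Trigonality index from the two largest valence angles in degrees (beta >= alpha):
    0 for an ideal square pyramid (beta = alpha), 1 for an ideal trigonal bipyramid (180 and 120)."""
    return (beta - alpha) / 60.0

def tau8(beta, alpha):
    """Proposed cube-vs-square-antiprism distortion index, normalised by 9.9 degrees."""
    return (beta - alpha) / 9.9

print(tau5(180.0, 120.0))   # 1.0  -> ideal trigonal bipyramid
print(tau5(174.0, 145.2))   # 0.48 -> illustrative angles only, matching the example value quoted above
print(tau8(109.5, 109.5))   # 0.0  -> regular cube
print(tau8(109.5, 99.6))    # 1.0  -> regular square antiprism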
Let $A$ be a Metzler matrix, i.e. a real matrix (not necessarily symmetric) whose off-diagonal elements are all non-negative. Then, for $t\ge 0$, the matrix exponential $\exp(At)$ will have all non-negative elements. Numerically, it seems that given a single off-diagonal element of $\exp(At)$, it is always a log-concave (i.e. log-convex downward) function of $t$, for $t\ge 0$, and that the diagonal elements are always log-convex upward functions. That is, it looks like the function $$ f(t) = \log\Big(\big(\exp(At)\big)_{ij}\Big) $$ is a concave function of $t$ for $t\ge 0$ if $i\ne j$, and a convex function of $t$ if $i=j$. My question is whether this indeed is the case. Note that in this expression $\exp$ is a matrix exponential, but $\log$ is just an ordinary logarithm of a positive number. Here is a typical result. The plot shows each element of $\exp(At)$ as a function of $t$, where $A$ is a matrix whose off-diagonal elements are independently uniformly sampled from $[0,1]$ and whose diagonal elements are independently uniformly sampled from $[-1,0]$. The diagonal elements of $\exp(At)$ are shown in red. Showing this should be a simple case of finding the second derivative and showing that it can't be positive, but I haven't seen a way to do that. I haven't been able to find a counterexample either.
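The observation is easy to reproduce numerically. A minimal sketch (it samples one matrix as described above and inspects second differences of the log of a single off-diagonal entry; this is only a spot check, not a proof):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 5
A = rng.uniform(0.0, 1.0, size=(n, n))                  # off-diagonal entries uniform in [0, 1]
np.fill_diagonal(A, rng.uniform(-1.0, 0.0, size=n))     # diagonal entries uniform in [-1, 0]

ts = np.linspace(0.01, 10.0, 400)
i, j = 0, 1                                             # an off-diagonal entry
f = np.array([np.log(expm(A * t)[i, j]) for t in ts])

# For a concave function the discrete second differences should be non-positive.
second_diff = f[:-2] - 2.0 * f[1:-1] + f[2:]
print("largest second difference:", second_diff.max())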
Suppose $\kappa$ is a regular cardinal and $P$ is a $\kappa$-c.c. partial order. I want to know when are small sets added by subforcings of size $<\kappa$. The following seems well-known: Fact:If $\kappa$ is weakly compact, and $P$ has size $\kappa$ and is $\kappa$-c.c., then for any $P$-name $\tau$ for a set of ordinals of size $<\kappa$, there is a $Q \lhd P$ and a $Q$-name $\sigma$ such that $|Q| < \kappa$, and $1 \Vdash_P \sigma = \tau$. I think the easiest way to see the fact is to use the extension property. Every such $P$ can be coded as a set of ordinals $A \subseteq \kappa$, and there is some transitive $X$ of rank > $\kappa$ and a set $B$ such that $(V_\kappa,\in,A) \prec (X,\in,B)$. Any $P$-name for a small set is seen by the larger structure as captured by $P$, so this reflects. Question 1: Are there counterexamples for some large cardinals which are weaker than weakly compact? If $\kappa$ is supercompact, then the same conclusion holds for all $\kappa$-c.c. $P$. This is easy to see by taking a supercompactness embedding $j$ with closure at least $|P|$ and noting that $j[P]$ is a regular suborder of $j(P)$. Supercompactness must be a total overkill hypothesis. Question 2: For what cardinals $\kappa$ do we have that every $\kappa$-c.c. partial order captures small sets in small factors? Update: I think I have a partial answer to Question 2. Supercompactness is indeed overkill, and weak compactness is enough after all. Let $\kappa$ be weakly compact, and $P$ be $\kappa$-c.c. Let $\theta > \kappa$ be regular such that $P \in H_\theta$. Let $\tau$ be any $P$-name for a $<\kappa$ sized set of ordinals. Let $M \prec H_\theta$ be such that $P,\tau \in M$, $|M| \leq \kappa$, and $M^{<\kappa} \subseteq M$. This is possible just because $\kappa$ is inaccessible. Then $P \cap M$ is a regular suborder of $P$, since all antichains contained in $P \cap M$ are members of $M$, and $M$ knows which ones are maximal. By the Fact, there is $Q \lhd P \cap M \lhd P$ and a $Q$-name $\sigma$ such that $|Q| < \kappa$ and $\Vdash_{P \cap M} \tau = \sigma$.
Guitar Speaker Power Handling
The power rating of a guitar speaker is an indication of how much power it can handle without being damaged thermally or mechanically. It is not an indication of how loud the speaker will sound in comparison to other speakers. Let us examine the basics of how speakers work, how the power rating is determined and look at things from the perspective of the guitar amplifier so that we can choose guitar speakers that will last a lifetime.
How an Electrodynamic Loudspeaker Works
Guitar speakers are a type of loudspeaker known as electrodynamic or "moving coil" loudspeakers. The magnetic circuit (composed of front plate, back plate, pole piece and magnet) and voice-coil make up the motor of a guitar speaker. An alternating electrical current flowing through the voice-coil generates an alternating magnetic field perpendicular to the flow of current through the coil. The magnetic circuit creates a strongly focused magnetic field in the air gap between the front plate and the pole piece on which the voice-coil is centered. The voice-coil is pushed and pulled through the air gap based on the interaction between these two magnetic fields. Since the speaker cone is connected to the voice-coil, it now has a mechanical force with which to push air particles and make sound.
Thermal Damage
Speakers are transducers. They convert electrical energy provided by an amplifier into acoustical energy. They are actually very inefficient transducers because most of the electrical energy is converted into heat instead of sound. The reference efficiency (ratio of acoustic power out to electrical power in) for most guitar speakers is around 2% to 6%, which means that 98% to 94% of the electrical energy is dissipated in the form of heat. From the aspect of power dissipation, a guitar speaker can be modeled as a resistor. Most guitar amp enthusiasts are familiar with the equation for electrical power and how it can be used to determine the power dissipated across a resistor. Resistors can be thought of as transducers that intentionally convert electrical energy to heat in order to create a voltage drop. Resistors have a power rating that indicates how much power they can dissipate before being damaged and this rating is analogous to the speaker power rating. Equation for Power$$P = \frac{V^2}{R}$$ where ~P~ = power, ~V~ = voltage, ~R~ = resistance For example:$$P = \frac{V^2}{R} = \frac{{20}^2}{8} = \frac{400}{8} = 50\text{ W}$$ The voice-coil is the electrical interface of the speaker and is given a "nominal impedance" specification (e.g. 4, 8 or 16 ohm) which can be used to approximate power dissipation when connected to an amplifier. (The actual impedance of the voice-coil varies with frequency). Just as a resistor will eventually burn up if its power rating is not high enough, the speaker's voice-coil will burn up if it is overpowered by the amplifier. One of the most common symptoms of an overpowered speaker is a burned voice-coil, which usually measures as an open circuit on an ohm meter. No sound can be produced by a speaker with an open voice-coil. An overheated voice-coil former may also become warped and begin to rub against the pole piece causing the speaker to buzz loudly.
Mechanical Damage
Loudspeakers can be damaged mechanically by over-excursion of the voice-coil and cone. This is more common with old speakers that have worn suspensions and adhesives, but may also occur at extreme low frequencies outside of the speaker's useable frequency range.
When over-excursion occurs, the voice-coil can become misaligned or bottom out. The cone and suspension (surround and spider) can also become stretched or torn.
How the Speaker Power Rating is Determined
Many speaker manufacturers rate their speakers based on industry standards similar to IEC 60268-5 or AES2-1984. These standards specify a pink noise test signal with a crest factor of two (i.e. 6 dB) which is meant to simulate the transient character of music having an average value, as well as frequent instantaneous voltage spikes that swing up to twice the average value. Pink noise is a particular type of random noise with equal energy per octave and actually sounds like a space shuttle launch. The test signal is applied to the test speaker for a few hours, allowing for a reasonable way to test the speaker's real world thermal and mechanical capabilities. After testing, the speaker must be in working order, without permanent alteration of its technical features. The power rating is calculated using the RMS value of the applied voltage and the minimum value of electrical impedance within the working range of the speaker.
Guitar Amplifier Power Output Ratings
The power output rating of a guitar amp is mostly a ballpark figure for what it can put out. Amp specifications commonly list power output in a form similar to the following: Power Output: 50W into 8 ohm at 5% THD This type of power output rating is obtained by using a sine wave from a signal generator (usually 1 kHz) as the input signal. The 5% THD (total harmonic distortion) figure means that the sine wave was able to generate 50W of power output with relatively low distortion (near the threshold of clipping or overdrive). THD measurements were one of the first conventions used to objectively compare the fidelity of audio amplifiers. Guitar amps are unconventional audio amplifiers. While most audio amplifiers are designed to keep distortion as low as possible, guitar amplification has evolved to where overdrive distortion is usually a requirement. For example, the Marshall® JCM800 2203 is a 100W tube amp that has a highly regarded overdrive sound. The owner's manual lists the power output as follows: Typical power at clipping, measured at 1kHz, average distortion 4% 115 watts RMS into 4, 8, 16 ohms. Typical output power at 10% distortion 170 watts into 4 ohms. This example shows that for many guitar amplifiers, the power output rating (100W in this case) is not a maximum power output rating, but more of a ballpark clean power output specification.
RMS and Overdrive Distortion
RMS (root mean square) is a kind of average value that can be used to compare the power dissipation from different signals on equal terms. For example, a 20 VDC power supply dissipates the same amount of heat across an 8 ohm resistor as a sine wave with an RMS value of 20 VAC. General Equation for the RMS value of a periodic function$$V_{RMS} = \sqrt{\frac{1}{T} \int_{0}^{T} [V(t)]^2 dt}$$ where the Square portion of the equation is ~[V(t)]^2~, the Mean portion is the time average ~\frac{1}{T} \int_{0}^{T} \cdot \; dt~, and the Root portion is the outer square root. Guitar amp output ratings are usually based on a sine wave at low distortion, but if the volume is turned up further or a gain boosting effect is used, the sine wave becomes more overdriven and can approach the shape of a square wave. The RMS value of a square wave is equal to its amplitude, while the RMS value of a sine wave is equal to its amplitude divided by the square root of two.
Plugging the RMS values into the equation for power shows that a square wave dissipates twice as much power across the same load as a sine wave with the same amplitude. Power calculation for a sine wave$$P = \frac{(\frac{V_m}{\sqrt{2}})^2}{R}$$$$P = \frac{\frac{{V_m}^2}{2}}{R}$$$$P = \frac{1}{2} \times \frac{{V_m}^2}{R}$$ Power calculation for a square wave$$P = \frac{{V_m}^2}{R}$$ This simplified overdrive distortion model illustrates how the 100 watt Marshall® amp which puts out 115 watts at 4% THD could put out an additional 50 watts at 10% THD.
Tube vs. Solid State Outputs
Many tube guitar amps use output transformers with secondary taps connected to an impedance switch allowing for the same power output when connected to 4, 8 or 16 ohm load impedances. Solid state amps do not use output transformers and do not have the same power output when connected to different load impedances. For tube outputs, it is important to match the load impedance to the amp's output impedance. For solid state outputs, it is important to use a load that is greater than or equal to the rated minimum load impedance and to know the amp's power output at that load. For example, the Fender M-80 is a solid state amp rated for 69 W(RMS) at 5% THD into 8 ohms and 94 W(RMS) at 5% THD into 4 ohms (the minimum load impedance). With solid-state amps, overdrive distortion generated by the power-amp is not generally considered musically pleasing, so most people will not exceed the amp's low THD power rating. Tube power amps, on the other hand, are often played well beyond their low THD rating.
Amps with Multiple Speakers
When an amp uses multiple speakers, the output power is divided between them. The nominal impedance of each speaker should be the same value so that power is distributed equally and so that the output impedance of the amplifier can be matched. Formula for calculating the equivalent overall impedance of speakers wired in parallel ~Z_{\text{total}}~ = Equivalent Overall Impedance ~Z_1~ = Impedance of speaker 1, etc.$$ Z_{\text{total}} = \frac{1}{\frac{1}{Z_1} + \frac{1}{Z_2} + \frac{1}{Z_3} + \ldots + \frac{1}{Z_n}}$$ Example: Two Speakers in Parallel$$Z_{\text{total}} = \frac{1}{\frac{1}{Z_1} + \frac{1}{Z_2}} = \frac{1}{\frac{1}{16Ω} + \frac{1}{16Ω}} = \frac{1}{\frac{1}{8}} = 8Ω$$ Formula for calculating the equivalent overall impedance of speakers wired in series ~Z_{\text{total}}~ = Equivalent Overall Impedance ~Z_1~ = Impedance of speaker 1, etc.$$ Z_{\text{total}} = Z_1 + Z_2 + Z_3 + \ldots + Z_n$$ Example: Two Speakers in Series$$Z_{\text{total}} = Z_1 + Z_2 = 4Ω + 4Ω = 8Ω$$
Choosing Guitar Speakers to Last a Lifetime
There is no standard method used by all amp manufacturers when selecting an appropriate speaker power rating. If you want to choose a speaker to last a lifetime, you will want to choose a speaker that can handle the maximum amount of preamp and power amp overdrive distortion that can possibly be put into it and safely avoid exceeding the speaker's thermal limits. In the case of single speaker setups, this means choosing a speaker rated for at least twice the rated output power of the amp. For multiple speakers, choose twice the rated power that would be distributed to it. You might decide to go with a lower power rating because you know that you will never be cranked at full power and love the sound of a lower power rated speaker. In the same way you may choose a speaker with a much higher power rating because of the way it sounds.
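These formulas translate directly into a small calculation. The following is a minimal sketch (the function names and the example numbers are mine, not from the article), combining the impedance formulas with the "twice the distributed power" rule of thumb:

def parallel_impedance(*z):
    """Equivalent impedance of speakers wired in parallel."""
    return 1.0 / sum(1.0 / zi for zi in z)

def series_impedance(*z):
    """Equivalent impedance of speakers wired in series."""
    return sum(z)

def recommended_rating(amp_watts, num_speakers):
    """'Lifetime' rule of thumb: at least twice the power distributed to each speaker."""
    return 2.0 * amp_watts / num_speakers

print(parallel_impedance(16, 16))    # 8.0 ohms, as in the parallel example above
print(series_impedance(4, 4))        # 8 ohms, as in the series example above
print(recommended_rating(50, 2))     # 50.0 W per speaker for a hypothetical 50 W amp into two speakers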
A Real World Example: Speakers for a Fender® '65 Twin Reverb® Reissue 1) Determine the rated output power of the amp. Amplifiers have two power ratings: power consumption and power output. The power consumption is always much higher than the power output. In this case the output power is 85 watts RMS and the power consumption is 260 watts. 2) Determine the output impedance for that output power rating. In this case it is 4 ohm. 3) Determine the number of speakers. In this case there are two 12" (8 ohm) speakers wired in parallel for an overall impedance of 4 ohms. For this amp, speaker choices to last a lifetime should be rated for at least 85 watts each. There are a lot of speaker choices rated for 100 watts and this rating would be very safe. Actually, the stock speaker for this amp is the Jensen C12K and it is rated for 100 watts. By Kurt Prange (BSEE), Sales Engineer for Antique Electronic Supply - based in Tempe, AZ. Kurt began playing guitar at the age of nine in Kalamazoo, Michigan. He is a guitar DIY'er and tube amplifier designer who enjoys helping other musicians along in the endless pursuit of tone.
Higher-dimensional Fujimura
Let [math]\overline{c}^\mu_{n,4}[/math] be the largest subset of the tetrahedral grid: [math] \{ (a,b,c,d) \in {\Bbb Z}_+^4: a+b+c+d=n \}[/math] which contains no tetrahedrons [math](a+r,b,c,d), (a,b+r,c,d), (a,b,c+r,d), (a,b,c,d+r)[/math] with [math]r \gt 0[/math]; call such sets tetrahedron-free. These are the currently known values of the sequence: n 0 1 2 3 4 5 6 7 [math]\overline{c}^\mu_{n,4}[/math] 1 3 7 14 24 37 55 78
n=0
[math]\overline{c}^\mu_{0,4} = 1[/math]: There are no tetrahedrons, so no removals are needed.
n=1
[math]\overline{c}^\mu_{1,4} = 3[/math]: Removing any one point on the grid will leave the set tetrahedron-free.
n=2
[math]\overline{c}^\mu_{2,4} = 7[/math]: Suppose the set can be tetrahedron-free in two removals. One of (2,0,0,0), (0,2,0,0), (0,0,2,0), and (0,0,0,2) must be removed. Removing any one of the four leaves three tetrahedrons to remove. However, no point coincides with all three tetrahedrons, therefore there must be more than two removals. Three removals (for example (0,0,0,2), (1,1,0,0) and (0,0,2,0)) leaves the set tetrahedron-free with a set size of 7.
General n
A lower bound of 2(n-1)(n-2) can be obtained by keeping all points with exactly one coordinate equal to zero. You get a non-constructive quadratic lower bound for the quadruple problem by taking a random subset of size [math]cn^2[/math].
If c is not too large the linearity of expectation shows that the expected number of tetrahedrons in such a set is less than one, and so there must be a set of that size with no tetrahedrons. However, [math] c = (24^{1/4})/6 + o(1/n)[/math], which is lower than the previous lower bound. With coordinates (a,b,c,d), consider the value a+2b+3c. This forms an arithmetic progression of length 4 for any of the tetrahedrons we are looking for. So we can take subsets of the form a+2b+3c=k, where k comes from a set with no such arithmetic progressions. [This paper (http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.3057v2.pdf), Corollary 1] gives this formula for a lower bound on the proportion of retained points: [math]C\frac{(\log N)^{1/4}}{2^{\sqrt{8 \log N}}}[/math], for some absolute constant C. An upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has n(n+1)(n+2)(n+3)/24 tetrahedrons. Each point on the grid is part of n tetrahedrons, so at least (n+1)(n+2)(n+3)/24 points must be removed to remove all tetrahedrons. This gives an upper bound of (n+1)(n+2)(n+3)/8 remaining points.
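For very small n, the tabulated values can be checked directly by exhaustive search. The following is a rough sketch (the function names are mine); it is only practical for n up to about 3, since it enumerates subsets from the largest size downward:

from itertools import combinations

def grid(n):
    """All lattice points (a,b,c,d) of non-negative integers with a+b+c+d = n."""
    return [(a, b, c, n - a - b - c)
            for a in range(n + 1)
            for b in range(n + 1 - a)
            for c in range(n + 1 - a - b)]

def tetrahedra(n):
    """All upward tetrahedra {(a+r,b,c,d),(a,b+r,c,d),(a,b,c+r,d),(a,b,c,d+r)} with r > 0."""
    tets = []
    for r in range(1, n + 1):
        for (a, b, c, d) in grid(n - r):
            tets.append({(a + r, b, c, d), (a, b + r, c, d),
                         (a, b, c + r, d), (a, b, c, d + r)})
    return tets

def max_tetrahedron_free(n):
    pts, tets = grid(n), tetrahedra(n)
    for k in range(len(pts), 0, -1):
        for subset in combinations(pts, k):
            s = set(subset)
            if not any(t <= s for t in tets):
                return k
    return 0

print([max_tetrahedron_free(n) for n in range(4)])   # expected: [1, 3, 7, 14]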
The following question arised as a side-question in a geometric problem. It has a "feel" similar to problems in Ramsey-theory, but I have not found any mention of it (also I'm not very familiar with the field). Was this problem considered before? Does it have an easy answer? Consider the set of grid points $[n] \times [n]$, and color each point either black or white, giving rise to the sets $B,W$ (such that $B \cup W = [n] \times [n]$, and $B \cap W = \emptyset$). Are the following true? (stronger): Either $B$ or $W$ contains every permutation of $[n/2]$. (weaker, implied by stronger): Every permutation of $[n/2]$ is contained in either $B$ or $W$. if true, holds also for $k$ colors and $n/k$? if not true, what is largest $m$ for which it holds? A set of points $X$ containing a permutation $\sigma$ of $[n]$ means that: there are points $(x_1, y_1), \dots, (x_n,y_n) \in X$, such that $y_1<y_2<y_3<\dots<y_n$, and $x_i$ have the same relative ordering as $\sigma_i$ (meaning: $\sigma_i < \sigma_j \iff x_i < x_j$, for all $i,j$). For example, $[n] \times [n]$ contains all permutations of $[n]$. Some easy observations: One of $B$ and $W$ might not contain all permutations, even if it contains more than half of the original points (construction: L-shape thinner than n/2). If $n/2$ bound holds, it is best possible (construction: color left half black, right half white).
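The containment relation in the definition above is easy to test by brute force for small point sets, which is convenient when experimenting with colourings by hand. Here is a minimal sketch (the function names are mine); it simply tries all ways of picking the required number of points:

from itertools import combinations, permutations

def contains(points, sigma):
    """True if the point set contains the permutation pattern sigma (a tuple over 1..m):
    some points with strictly increasing y have their x-values ordered like sigma."""
    m = len(sigma)
    pts = sorted(points, key=lambda p: (p[1], p[0]))
    for combo in combinations(pts, m):
        xs = [p[0] for p in combo]
        ys = [p[1] for p in combo]
        if any(ys[k] >= ys[k + 1] for k in range(m - 1)):
            continue
        if len(set(xs)) < m:
            continue
        rank = {x: r for r, x in enumerate(sorted(xs), start=1)}
        if tuple(rank[x] for x in xs) == tuple(sigma):
            return True
    return False

n = 4
full_grid = [(x, y) for x in range(1, n + 1) for y in range(1, n + 1)]
# The full grid [n] x [n] contains every permutation of [n], as noted above.
print(all(contains(full_grid, s) for s in permutations(range(1, n + 1))))   # True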
Integer programming algorithms minimize or maximize a linear function subject to equality, inequality, and integer constraints. Integer constraints restrict some or all of the variables in the optimization problem to take on only integer values. This enables accurate modeling of problems involving discrete quantities (such as shares of a stock) or yes-or-no decisions. When there are integer constraints on only some of the variables, the problem is called a mixed-integer linear program. Example integer programming problems include portfolio optimization in finance, optimal dispatch of generating units (unit commitment) in energy production, and scheduling and routing in operations research. Integer programming is the mathematical problem of finding a vector \(x\) that minimizes the function: \[\min_{x} \left\{f^{\mathsf{T}}x\right\}\] Subject to the constraints: \[\begin{eqnarray}Ax \leq b & \quad & \text{(inequality constraint)} \\A_{eq}x = b_{eq} & \quad & \text{(equality constraint)} \\lb \leq x \leq ub & \quad & \text{(bound constraint)} \\ x_i \in \mathbb{Z} & \quad & \text{(integer constraint)} \end{eqnarray}\] Solving such problems typically requires using a combination of techniques to narrow the solution space, find integer-feasible solutions, and discard portions of the solution space that do not contain better integer-feasible solutions. Common techniques include: Cutting planes: Add additional constraints to the problem that reduce the search space. Heuristics: Search for integer-feasible solutions. Branch and bound: Systematically search for the optimal solution. The algorithm solves linear programming relaxations with restricted ranges of possible values of the integer variables. For more information on integer programming, see Optimization Toolbox™.
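The text points to MATLAB's Optimization Toolbox; purely as an illustration of the same standard form, here is a tiny mixed-integer linear program solved with SciPy's milp (available in SciPy 1.9 and later). The particular numbers are made up for the example:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# minimize f^T x  subject to  A x <= b,  x >= 0,  x integer
f = np.array([-3.0, -2.0])            # maximizing 3*x1 + 2*x2 by minimizing the negative
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

res = milp(c=f,
           constraints=LinearConstraint(A, ub=b),
           integrality=np.ones(2),     # both variables must take integer values
           bounds=Bounds(lb=0.0))
print(res.x, res.fun)                  # optimal integer solution and objective value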
This question is based on exercise $2.14$ of chapter $2$ of Hartshorne. Suppose $\varphi:S\rightarrow T$ is a graded homomorphism of graded (commutative, unital) rings such that $\varphi_d := \varphi|_d$ is an isomorphism for all $d$ sufficiently large. Then I want to show that the natural morphism $ f: $Proj $T \rightarrow $Proj $S$ is an isomorphism by explicitly constructing an inverse. The morphism is given on spaces by $P \mapsto \varphi^{-1}(P)$ and on sections by composing pointwise with $\varphi_P : S_{\varphi^{-1}(P)}\rightarrow T_P$. I would like to show that this is an isomorphism by exhibiting an inverse homeomorphism to $f$ and showing the stalk maps are isomorphic. I appreciate that we can cover Proj $S$ with affine pieces and then show that the corresponding maps from the pullbacks to these pieces are isomorphisms, but I would like to know what $f^{-1}$ looks like explicitly. If there is a good way to see what $f^{-1}$ is by chasing through the local method of showing that $f$ is an isomorphism then I would also appreciate an explanation of that. My candidate for $f^{-1}$ was $P \mapsto \sqrt{(\varphi(P))}=I$, the radical of the ideal generated by $\varphi(P)$. I think that I have shown that $I$ is homogeneous, doesn't contain $T_+$, $\varphi^{-1}(I) = P$ and is almost prime in the sense that if $a,b \in T$ are homogeneous and have degree at least $1$ then $ab \in I \implies a \in I$ or $b \in I$. But I think that in fact $I$ is not in general prime, since the degree $0$ component of $I$ is exactly $\sqrt{(\varphi(P_0))}$ in the ring $T_0$, where $P_0$ is the ideal of $S_0$ given by $S_0 \cap P$ and that for general rings $A$ and $B$, with $\rho:A\rightarrow B$, $P$ prime in $A$ doesn't imply $\sqrt{(\rho(P))}$ prime in $B$.
In the use of Bayes' Theorem to calculate the posterior probabilities that constitute inference about model parameters, the weak likelihood principle is automatically adhered to: $$\mathrm{posterior} \propto \mathrm{prior} \times \mathrm{likelihood}$$ Nevertheless, in some objective Bayesian approaches the sampling scheme determines the choice of prior, the motivation being that an uninformative prior should maximize the divergence between the prior and posterior distributions—letting the data have as much influence as possible. Thus they violate the strong likelihood principle. Jeffreys priors, for instance, are proportional to the square root of the determinant of the Fisher information, an expectation over the sample space. Consider inference about the probability parameter $\pi$ of Bernoulli trials under binomial & negative binomial sampling. The Jeffreys priors are $$\def\Pr{\mathop{\rm Pr}\nolimits}\begin{align}\Pr_\mathrm{NB}(\pi) &\propto \pi^{-1} (1-\pi)^{-\tfrac{1}{2}}\\\Pr_\mathrm{Bin}(\pi) &\propto \pi^{-\tfrac{1}{2}} (1-\pi)^{-\tfrac{1}{2}}\end{align}$$ & conditioning on $x$ successes from $n$ trials leads to the posterior distributions $$\begin{align}\Pr_\mathrm{NB}(\pi \mid x,n) \sim \mathrm{Beta}(x, n-x+\tfrac{1}{2})\\\Pr_\mathrm{Bin}(\pi \mid x,n)\sim \mathrm{Beta}(x+\tfrac{1}{2}, n-x+\tfrac{1}{2})\end{align}$$ So observing say 1 success from 10 trials would lead to quite different posterior distributions under the two sampling schemes: Though following such rules for deriving uninformative priors can sometimes leave you with improper priors, that in itself isn't the root of the violation of the likelihood principle entailed by the practice. An approximation to the Jeffreys prior, $ \pi^{-1+c} (1-\pi)^{-1/2}$, where $0 < c\ll 1$, is quite proper, & makes negligible difference to the posterior. You might also consider model checking—or doing anything as a result of your checks—as contrary to the weak likelihood principle; a flagrant case of using the ancillary part of the data.
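To see the discrepancy concretely for the "1 success from 10 trials" case, the two posteriors can be compared numerically. A minimal sketch using SciPy's beta distribution (the choice of summary statistics here is mine):

from scipy.stats import beta

x, n = 1, 10
post_nb = beta(x, n - x + 0.5)          # Beta(1, 9.5): negative binomial sampling, Jeffreys prior
post_bin = beta(x + 0.5, n - x + 0.5)   # Beta(1.5, 9.5): binomial sampling, Jeffreys prior

for name, dist in [("neg. binomial", post_nb), ("binomial", post_bin)]:
    lower, upper = dist.ppf([0.025, 0.975])
    print(name, "posterior mean:", round(dist.mean(), 3),
          "central 95% interval:", (round(lower, 3), round(upper, 3)))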
Football: Mirror Icosahedron
The surface of a classic soccer ball is composed of 12 slightly curved black regular pentagons and 20 white regular hexagons. By the way, such a ball was not always considered “classic”: this cut and colouring were first used for the official world cup ball in 1970 in Mexico. The black-and-white colouring was then chosen from contrast considerations – so the ball was more visible on the then-common black-and-white TVs. It was even named Telstar — after a TV satellite. In the years to come the official balls changed their colourings, but the cut remained unchanged until the 2006 championship in Germany. From a mathematical point of view, a classic soccer ball is a truncated icosahedron. This fact and the theory of reflection groups (in the three-dimensional case — of Coxeter groups) allow one to make a simple yet beautiful model. One should take a trihedral angle composed of three identical isosceles triangles. Given the base length $a$, the length of the legs that are glued together to form the trihedral angle should be $r=\frac{1}{4}\sqrt{2(5+\sqrt{5})}\>a$ which with a good precision is $r\approx 0.95\>a$. (For example, if $a=10$ cm, then $r=9.5$ cm.) The mirror angle is very close to that of a regular tetrahedron, yet differs. Another important detail is a (plane) regular triangle coloured black-and-white in such a way that the white interior is a regular hexagon. (To achieve this, the sides of the black triangles should be taken one third of the side of the original regular triangle.) If such a triangle is now put in the trihedral angle, a model of a classic soccer ball is seen inside! The image won't change if the angle is rotated around the line of sight. For the “ball” to be seen completely, the triangle put in shouldn't be too large. One should not put it further than a third of the mirror angle's height from the vertex. (That is, with the base of the mirror triangle being $a=10$ cm, the side of the triangle to put in can be taken $3$ cm, so the sides of the small black triangles on it — $1$ cm.) The simplest way to make the isosceles mirror triangles is to cut them out of plastic with mirror coating. They can be put together with duct tape or with wide electrical tape, gluing along the legs of the triangles — the trihedral angle's edges. What kind of magic mirror angle is it that the mirror image forms a soccer ball? (In fact — an icosahedron, which is even more clearly visible if one puts in a solid colour triangle.) The mirror angle is associated with the icosahedron itself: its vertex is at the icosahedron's center, and the mirrors pass through the sides of one of its faces. That is where the conditions on the sides of the isosceles triangles forming the mirror angle come from: if the triangle's base $a$ is the length of the icosahedron's edge, then the leg $r$ is the radius of its circumscribed sphere. And the fact that the image in this mirror angle is an icosahedron is guaranteed by the theory of reflection groups.
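The quoted relation between the icosahedron's edge and its circumradius is easy to sanity-check numerically; a minimal sketch:

import math

a = 10.0                                                   # edge length / base of the mirror triangle, in cm
r = 0.25 * math.sqrt(2.0 * (5.0 + math.sqrt(5.0))) * a     # circumradius formula quoted above
print(round(r, 2), round(r / a, 3))                        # about 9.51 cm, i.e. r is roughly 0.951 * a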
A Three Group Split Problem
Solution
Let's represent the three group split by three numbers "###," each standing for the number of even numbers in a group. The order of the groups is of no consequence. $000$ is the starting configuration. There are nine open slots to place the first even number in. However it is done, the next state is $100.$ With one even number in place, there are $8$ slots - two in the same group as the first even number. Placing the second even number there moves to the state $200;$ otherwise, with probability $\displaystyle \frac{6}{8},$ it will move to the state $110.$ We continue in this manner, distributing the even numbers, of which there are four. When all is said and done, there are only three possible states with four even numbers: $211,$ meaning no group having three odd numbers, $220,$ and $310$ - in the latter two cases there is one group (the one corresponding to $0$) in which all numbers are odd. The probability of getting into the state $220$ is $\displaystyle\begin{align}P(220)&=\left(\frac{6}{8}\cdot\frac{4}{7}+\frac{2}{8}\cdot\frac{6}{7}\right)\cdot\frac{2}{6}\\ &=\frac{24+12}{56}\cdot\frac{1}{3}=\frac{3}{14}. \end{align}$ The probability of getting into the state $310$ is $\displaystyle\begin{align}P(310)&=\frac{2}{8}\cdot\left(\frac{1}{7}\cdot\frac{6}{6}+\frac{6}{7}\cdot\frac{1}{6}\right)+\frac{6}{8}\cdot\frac{4}{7}\cdot\frac{1}{6}\\ &=\frac{1}{4}\cdot\frac{6+6}{42}+\frac{1}{14}=\frac{2}{14}. \end{align}$ Thus the probability of having a group of only odd numbers is $\displaystyle \frac{2}{14}+\frac{3}{14}=\frac{5}{14}.$
Acknowledgment
This is a rephrase of a problem from the mathpages website by Kevin Brown.
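The value $5/14$ can be double-checked by a direct count or a quick simulation. A minimal sketch (it uses the fact that each group is a uniformly random 3-subset of the nine numbers, and that two all-odd groups are impossible since there are only five odd numbers):

from math import comb
import random

# Exact count: P(some group is all odd) = 3 * C(5,3) / C(9,3).
print(3 * comb(5, 3) / comb(9, 3))            # 5/14 = 0.3571...

# Simulation for comparison.
trials, hits = 100_000, 0
labels = [1] * 5 + [0] * 4                    # 1 = odd, 0 = even; five odd and four even numbers
for _ in range(trials):
    random.shuffle(labels)
    groups = [labels[0:3], labels[3:6], labels[6:9]]
    hits += any(sum(g) == 3 for g in groups)  # a group with three odd labels
print(hits / trials)                          # should be close to 0.357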
Most randform readers might have heard that the so-called greenhouse effect is one of the main causes of global warming. The effect is not easy to understand. There are two posts which give a nice intro to the greenhouse effect on Azimuth. One is by Tim van Beek and one is by John Baez. The greenhouse effect can also be understood in a slightly more quantitative way by looking at an idealized greenhouse model. In the above diagram I have enhanced this idealized greenhouse model (as of Jan 2017) in order to get an idea of the hypothetical size of the effect of an absorption of non-infrared sunlight and its reradiation as infrared light, i.e. the possible effect size of a certain type of fluorescence. I sort of felt forced to do this, because at the time of writing (February 2017) the current climate models did not take the absorption of UV and near infrared light in methane (here a possible candidate for that above mentioned hypothetical greenhouse gas) into account and I wanted to get an insight into how important such an omission might be. The simple model here is far from any realistic scenario – in particular no specific absorption lines are considered, just the feature of absorption and reradiation. The above diagram shows the earth temperature in Kelvin as a function of two parameters, as given by this enhanced model. The two parameters can be seen as being (somewhat) proportional to densities of a hypothetical greenhouse gas which would display this type of fluorescence. That is, the parameter x is seen as (somewhat) proportional to the density of that hypothetical greenhouse gas within the atmosphere, while y is (somewhat) proportional to the density near the surface of the earth. Why I wrote “somewhat” in brackets is explained below. The middle of the “plate” is at x=0, y=0, which is the “realistic” case of the idealized greenhouse model, i.e. the case where the infrared absorptivity is 0.78 and the reflectivity of the earth is 0.3. The main point of this visualization is that linearly increasing x and y in the same way leads to an increase of the temperature. Or in other words, although raising x by a certain amount leads to cooling, this effect is easily trumped by raising y by the same amount. As far as I learned from discussions with climate scientists, the omission of non-infrared radiation in the climate models was mostly motivated by the fact that absorption of non-infrared light mostly happens in the upper atmosphere (because methane rises quickly, though there are also circulations) and thus leads to a global cooling effect rather than a global warming effect, so in particular it doesn't contribute to global warming. The enhanced simple model here thus confirms that if absorption takes place in the upper atmosphere then this leads to cooling. The enhanced model however also displays that the contribution of methane that has not risen, i.e. methane that is close to the earth's surface, is to warm upon absorption of non-infrared light, and that the warming effect is much stronger than the cooling effect in the upper atmosphere. Unfortunately I can't say how much stronger for a given amount of methane, since for assessing this one would need to know more about the actual densities (see also the discussion below and the comment about circulations). Nonetheless this is a quite disquieting observation.
I had actually exchanged a couple of emails with Gunnar Myrhe, the lead author of the corresponding chapter in the IPCC report, who confirmed that non-infrared light absorption in methane hasn't so far been taken into account, but that some people intended to work on the near-infrared absorption. He didn't know about the UV absorption that I had found e.g. here (unfortunately my email to Keller-Rudek and Moortgat from 2015 asking whether there is more data for methane, especially in the range 170nm-750nm, stayed unanswered) and thanked me for pointing it out to him. He appeared to be very busy and drowning in (a lot of administrative) work, so I fear that those absorption lines still might not have been looked at. That is also why I decided to publish this now. I sent a copy of this post to Gunnar Myrhe, Zong-Liang Yang and John Baez in June 2017, where I pointed out that: I have strong concerns that the estimations of the global warming potential of methane need to be better assessed and that the new value might eventually be very different from the current one. – but I got no answer. The Wikipedia entry on the Idealized greenhouse model is based on course notes of the course 387H: Physical Climatology by instructor Zong-Liang Yang, where he used this simple model to motivate more complicated models with many layers: As said, I have enhanced this simple model in a certain way in order to get some insight into the temperature sensitivity of absorption of non-infrared light and its conversion into infrared light. I currently don't have access to a commercial computer algebra system and I so far haven't got along with the Sage syntax, so in particular solving spherical Navier-Stokes equations as done in GCMs is quite out of reach. So I tried to use this enhanced model with Julia. The code is below. The enhanced model is depicted in the following image: The notation is as in the Wikipedia article (see first image above), with a few alterations. That is, $S=\frac{1}{4}S_0 = 341 W/m^2$ is here one fourth of the total incoming solar radiation (the factor one fourth is because the area of a sphere (i.e. here the earth) is four times the area of its circular shadow; this is e.g. motivated here) and $\alpha_p$ is set here $\alpha_p = \rho_s$, where I chose $\rho$ as in “reflected”. I kept the notation for the subscripts as they were already used for the temperatures $T$ in Wikipedia, so the subscripts are $s$ as in “surface” and $a$ as in “atmosphere”. The symbol $\epsilon_{IR}$ denotes the absorptivity/emissivity of infrared light in the atmosphere (in the Wikipedia entry just $\epsilon$), likewise $\epsilon_{UV}$ denotes the absorptivity/emissivity of ultraviolet and other non-infrared light, which is here now assumed to be reradiated as infrared light within the atmosphere. As there seems to be no “simplify” in Julia, I had to shuffle the algebraic expressions by hand, which is of course error-prone, but I hope there are no mistakes. Below are the code and intermediate steps. Anyway, if you look at the code you see that $\epsilon_{UV}$ and $\rho_s$ depend on the variables delta1 and delta2. In the model they describe “small deviations” from some standard values. delta1 describes the deviation from the UV absorptivity and delta2 the deviation from the reflectivity of the earth.
The idea behind this is that if there is some greenhouse gas which absorbs non-infrared light and reradiates it as infrared, then increasing delta1 increases the non-infrared absorptivity of the atmosphere, which is as if there were “more of that absorbing” greenhouse gas in the atmosphere. This is why in the beginning I wrote the word “somewhat” in brackets: I don't know the exact relation between the absorptivity and the density of a greenhouse gas, and apart from this I don't know much about actual densities (see also the comment about circulation and this post). Likewise delta2 could describe “more of that greenhouse gas” at the surface of the earth. In the diagram delta1 is x and delta2 is y.

#code is GPL by Nadja Kutz
S = 341.5
deltaAt = 0.0
deltaSur = 0.0
epsuv = 0.0 + deltaAt
epsir = 0.78    #epsir=0.78 corresponds to the usual CO2 forcing
rhos = 0.3 - deltaSur
sigma = 0.00000005670367
Ts = (1/((1-0.5*epsir)*sigma)*(((1-rhos)*(1-epsuv) -0.5*(-epsuv*(1-rhos)-rhos + (1-epsuv)^2*rhos))*S))^0.25
Ta = (1/((epsuv + epsuv*(1-epsuv)*rhos + epsir)*sigma)*((1-(1-epsuv)^2*rhos)*S-(1-epsir)*sigma*Ts^4))^0.25
println("deltaAt=", deltaAt, " deltaSur=", deltaSur, " Ts=", Ts, " Ta=", Ta)

#calculation, see image Greenhouse.svg (the following lines are derivation notes, kept as comments)
#Term 1: -(1-(1-epsuv)^2*rhos)*S + (epsuv + epsuv*(1-epsuv)*rhos + epsir)*sigma*Ta^4 + (1-epsir)*sigma*Ts^4
#Term 2: (1-rhos)*(1-epsuv)*S + (epsuv + epsuv*(1-epsuv)*rhos + epsir)*sigma*Ta^4 - sigma*Ts^4
#Term1 + Term2 != 0:
#  -epsuv*(1-rhos)*S - rhos*S + (1-epsuv)^2*rhos*S + 2*(epsuv + epsuv*(1-epsuv)*rhos + epsir)*sigma*Ta^4 + (-epsir)*sigma*Ts^4
#Solve Term1 + Term2 != 0 for Ta:
#  Ta^4 = -1/(2*(epsuv + epsuv*(1-epsuv)*rhos + epsir)*sigma)*((-epsuv*(1-rhos)-rhos + (1-epsuv)^2*rhos)*S + (-epsir)*sigma*Ts^4)
#Into Term 2:
#  (1-rhos)*(1-epsuv)*S - 0.5*((-epsuv*(1-rhos)-rhos + (1-epsuv)^2*rhos)*S + (-epsir)*sigma*Ts^4) - sigma*Ts^4
#Simplify:
#  ((1-rhos)*(1-epsuv) - 0.5*(-epsuv*(1-rhos)-rhos + (1-epsuv)^2*rhos))*S + (0.5*epsir-1)*sigma*Ts^4
#Solve for Ts:
#  Ts = (1/((1-0.5*epsir)*sigma)*(((1-rhos)*(1-epsuv) -0.5*(-epsuv*(1-rhos)-rhos + (1-epsuv)^2*rhos))*S))^0.25

The plot of the function (with some help from Tim) can be obtained with this code:

function Surfacetemp(deltaAt, deltaSur)
    S = 341.5
    epsuv = 0.0 + deltaAt
    epsir = 0.78    #epsir=0.78 is assumed as the CO2 forcing
    rhos = 0.3 - deltaSur
    sigma = 0.00000005670367
    (1/((1-0.5*epsir)*sigma)*(((1-rhos)*(1-epsuv) -0.5*(-epsuv*(1-rhos)-rhos + (1-epsuv)^2*rhos))*S))^0.25
    #Ta = (1/((epsuv + epsuv*(1-epsuv)*rhos + epsir)*sigma)*((1-(1-epsuv)^2*rhos)*S-(1-epsir)*sigma*Ts^4))^0.25
end

using Plots
plotly()
x = y = linspace(-0.1, 0.1, 20)   # in Julia 1.0 and later, use range(-0.1, 0.1, length=20) instead
plot(x, y, Surfacetemp, st=:surface)

thanks to Tim for helping me decipher the Julia documentation
I have read it a thousand times: "you only need local information to compute derivatives." To be more precise: when you take a derivative, in say point $a$, what you are essentially doing is taking a limit, so you only need to look at the open region $ (a-\delta,a+\delta) $. Taylor's theorem seems to contradict this: from the derivatives in just one point, you can reconstruct the whole function within its radius of convergence (which can be infinity). For example, consider the function: $f: \mathbb{R} \rightarrow \mathbb{R}:x\mapsto \left\{ \begin{array}{lr} x+3\pi/2: & x \leq-3\pi/2 \\ \cos(x): & -3\pi/2\leq x \leq3\pi/2\\ -x+3\pi/2: & x\geq3\pi/2 \end{array} \right.$ Wolfram Alpha tells me that $D^{100}f(0)=\cos(0)$... This should give us more than enough information to get a Taylor expansion that converges beyond the point where $f$ is the $\cos$ function ($R=\infty$ for $\cos$ so eventually we have to get there) ... Let me put it this way: Look at the limiting case. All you need to have for a Taylor expansion that converges over all the reals is all the derivatives in 0. This would give you the exact same Taylor expansion as you'd get for the cosine function, while the function from which we took the derivatives is clearly not the cosine function over all the reals. So my question is: Is Wolfram Alpha wrong? If it is right, why does this seem to violate Taylor's theorem? If it's wrong, is that because the local region of the domain you need to compute the nth derivative grows with n? Edit 1: en.m.wikipedia.org/wiki/Taylor%27s_theorem. The most basic version of Taylor's theorem for one variable does not mention analyticity, and it's easy to prove that the "remainder" goes to zero as you take more and more derivatives, so that f(x) is determined at any x by the derivatives of f in 0.
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death. In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid. In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid. Question: In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
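Since the actual grid is only given as a picture, here is a generic breadth-first-search sketch for this kind of "glide until you hit a wall" puzzle. The wall encoding and the tiny example maze are hypothetical, not the grid from the question; the point is only to show how the least number of pushes can be computed.

from collections import deque

DIRS = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}

def slide(pos, d, cols, rows, walls):
    """Glide from pos in direction d until a wall stops us; return None if we float out of the grid."""
    (x, y), (dx, dy) = pos, DIRS[d]
    while True:
        if ((x, y), d) in walls:          # solid wall on this side of the current square
            return (x, y)
        x, y = x + dx, y + dy
        if not (0 <= x < cols and 0 <= y < rows):
            return None                   # floated out of the grid

def min_pushes(start, goal, cols, rows, walls):
    """Breadth-first search over stopping squares; returns the least number of pushes, or None."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        pos, n = queue.popleft()
        if pos == goal:
            return n
        for d in DIRS:
            nxt = slide(pos, d, cols, rows, walls)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, n + 1))
    return None

# Hypothetical 3x3 example: walls on the right of (2,0), below (2,2), left of (0,2), above (0,0).
walls = {((2, 0), "R"), ((2, 2), "D"), ((0, 2), "L"), ((0, 0), "U")}
print(min_pushes((0, 0), (2, 2), 3, 3, walls))   # 2 pushes in this made-up maze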
Hi, is there any lower bound for $\Re\zeta(1+it)$? I did try with a computer up to some ordinate and I saw $\Re\zeta(1+it)>0$. If it is true, is there any reference to prove it? Thanks.
There is no lower bound for Re$(\zeta(1+it))$. This can be found in Lamzouri's paper http://www.math.uiuc.edu/~lamzouri/distribzeta.pdf (see e.g. page 3, formula (5)). (Choose $\tau$ large enough, $\theta_1=3\pi/4$ and $\theta_2=5\pi/4$, and subtract.) The results of Lamzouri however also imply that on average the argument of $\zeta(1+it)$ is small and that Re$(\zeta(1+it))$ is positive more often than it is negative.
No, this is not true; see Table 5 in http://arxiv.org/abs/1001.2962 and the conclusions. In particular the real part is negative for $t=682112.9$; and this is the smallest value given there (and it was found via testing at steps of size $.1$, so perhaps no much-smaller ones were missed). You might also be interested in this question's answers for related information for the critical line $1/2 + it$.
Let $K = \mathbb Q(\mu_m)$ and $\zeta_K$ its Dedekind zeta function. We know from the class number formula that, around $0$: $$\zeta_K(s) \sim s^{r_1+r_2-1}h(K)R(K)/w(K) $$ where $h,R,w$ stand for the size of the class group, the regulator and the size of the subgroup of roots of unity. On the other hand, we have the decomposition: $$\zeta_K(s) = \prod_\chi L(s,\chi)$$ where the product is over suitable Dirichlet characters and moreover, $L(0,\chi) = -B_{1,\chi}$ - the generalized Bernoulli numbers. On the other hand, $R(K)$ should not be algebraic and there should be a corresponding transcendental contribution from $L(0,\chi)$, and the explicit formula for $B_{1,\chi}$ shows that this comes only from those $\chi$ such that $L(0,\chi) = 0.$ Question 1: In the case that $L(0,\chi) = 0$, is it possible to say what the first non-zero Taylor coefficient is? Question 2: Can we decompose (even if it is only conjecturally) $h(K),R(K),w(K)$ into factors corresponding to each Dirichlet character appearing in the decomposition of $\zeta_K$? What about the order of vanishing of $L(0,\chi)$?
Trigonometry, Clocks, and YouWritten by Curtis Dyer Edit (9/28/2016): some major conceptual changes have been made to the section regarding explanations of the placement of the hour labels. Departing Euclidean Planes Most people are familiar with the basics of graphing things like linear functions of the form $Ax + By = C$ or, in slope-intercept form, $y = mx + b$. Two-dimensional Euclidean space is commonly defined using Cartesian coordinates. For example, if the point $P(x, y)$ is a coordinate pair on a Cartesian plane, then the $x$-coordinate gives the directed horizontal distance from the point of origin and the $y$-coordinate gives the directed vertical distance. In trigonometry, we don't always find it convenient to deal with rectangular coordinates. For example, if I asked you to graph a circle, what would you need to know? You would be perfectly reasonable to ask only for the length of the radius (for the moment, we won't bother considering graphing circles not located at the origin). So $r=1$ probably seems more intuitive than $x^2+y^2=1$, which represents a circle at the origin with a radius of one unit on a Cartesian plane. Consider instead a coordinate system predicated upon radius length and angle measure: the polar coordinate system. Polar coordinates are defined in terms of $(r, \theta)$, where $r$ is the directed distance of a ray measured from the origin and $\theta$ (read as the Greek letter theta) is the angle measured between the ray and polar axis. The polar axis coincides with the positive $x$-axis. Further, due to the modular nature of angles, each point has an infinite amount of equivalent points. For example: \[ (r, \theta) = (3, 45\deg) = \left(3, \, 45\deg + (n \times 360\deg) \right) \] where $n$ is an integer. Since the ray's distance is directed, it can be negative. Given the angle remains constant, changing the sign of $r$ will reflect the coordinate with respect to the origin. You can also think of this as rotating the coordinate by $180\deg$. Using the previous example, we would have: \[ (r,\theta) = (3, 45\deg) = (-3, 45\deg\pm180\deg). \] Furthermore, angle measures can also be negative. In trigonometry (and in physics), angles are typically measured from standard position, which means positive angles are measured from the polar axis in the counterclockwise (CCW) direction and negative angles are measured in the clockwise (CW) direction. Another example shows: \[ (r, \theta) = (-3, 45\deg) = (3, -45\deg) \] and \[ (r, \theta) = (3, 45\deg) = (-3, -45\deg). \] When thinking about an analog clock shape and its hands, we find angle measure and radius are the most intuitively useful pieces of information, which is why graphics APIs generally require these pieces of information for drawing arcs, for example. However, we have a problem, because we also need to think about the pixels on the screen with respect to rectangular coordinates. To proceed, we need to marry these two worlds, but thankfully, it's not as difficult as it might seem. First, however, we need to go over the basics of the trigonometric functions sine and cosine. Trig functions: sine and cosine The sine and cosine functions can be defined in different ways, but we're interested in the definitions with respect to right triangles. 
Considering a right triangle whose sides coincide with the $x$- and $y$-axes, we have (see Figure 1): \[ \cos \theta = \dfrac xr, \quad \sin \theta = \dfrac yr \] Given the definitions of sine and cosine, we can rearrange them to show the relationship between a rectangular $(x,y)$ coordinate pair and a polar coordinate pair, $(r,\theta)$. \[ x = r \cos \theta, \quad y = r \sin \theta \] This will be key in instructing our graphics libraries how and where to draw the components for our clock. It's easiest to think of arranging numbers around a clock face in terms of angular measurement. However, when we need to draw the hands, we often need to use rectangular coordinates, but thankfully, as shown above, we can relate a radius and angle of measure with the corresponding $x$- and $y$-coordinates. Now that we're armed with the capabilities of two different coordinate systems, we also need to consider a different type of angle measure, other than degrees. To that end, we will briefly cover the radian and its definition. Measuring in radians In many cases, it's often more convenient to measure angles in radians. A radian is the standard SI angle measure; it describes the angle subtended from the origin by a circular arc as a ratio of its length to the length of the radius. For a more rigorous definition, and helpful animations illustrating the concept, Wikipedia has you covered. More formally, the definition of a radian is given by \[ \theta = \frac sr, \quad r \neq 0. \] We especially want to note the case when $r = 1$, as this describes the unit circle. The unit circle is just a circle whose radius measure is 1 unit. When $r = 1$, we have $s = \theta$. This gives us a helpful equality with respect to arc length and angle, the values of which are tangible pure numbers. This is because the radian is a ratio of two lengths, so units cancel out. For this reason, you will often see that answers given in radians are not usually accompanied by a unit suffix. If you really need to make it clear you're working in radians, you can use something like: $2\pi$ (rad). This can sometimes be helpful when writing out conversion factors, where keeping track of units is especially important. Knowing that the circumference of a circle is $2\pi r$, we can deduce that 1 revolution around the circle ($360\deg$) is $2\pi$. Since $1 \textrm{ revolution} = 2\pi \textrm{ radians} = 360\deg$, we simplify to get: $\pi = 180\deg$. We use this fact to convert between degrees and radians. For example, to convert $60\deg$ to radians, we would have \[ \left( \dfrac {60\deg}{1} \right) \left( \dfrac {\pi}{180\deg} \right) = \dfrac {\pi}{3} \] I've set up a Desmos graph to facilitate experimenting visually with a unit circle and the corresponding relationships with the trigonometric functions. Making a Clock Thanks to the HTML5 <canvas> library, we can embed 2D and 3D drawing and animation in web pages. Mozilla began introducing <canvas> implementations from Firefox 1.5 and onward, Microsoft in IE9, and Google in all versions of Chrome (MDN). Most other browsers have had support for quite some time. While drawing a circle is fairly trivial, the primary challenges of making a clock require us to evenly distribute the hour labels along the edge of the clock and draw the hands on the clock. The lines need to be redrawn at least once per second (the animation). Right about now, you must be thinking, this is going to be a lot of work! 
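Before building the clock, here is a quick numerical check of the two conversions just covered; it is written in Python rather than the JavaScript used later, purely as a sketch of the formulas $\theta_{\mathrm{rad}} = \theta_{\mathrm{deg}}\cdot\pi/180$, $x = r\cos\theta$ and $y = r\sin\theta$.

import math

def deg_to_rad(deg):
    return deg * math.pi / 180.0

def polar_to_rect(r, theta):
    # convert a polar coordinate pair (r, theta) to rectangular (x, y)
    return r * math.cos(theta), r * math.sin(theta)

print(deg_to_rad(60))                       # pi/3, about 1.0472
print(polar_to_rect(3, deg_to_rad(45)))     # about (2.121, 2.121)
print(polar_to_rect(-3, deg_to_rad(225)))   # same point: (-3, 45deg + 180deg)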
Fortunately, we only really need the concepts just covered, a little JavaScript experience, and the art of Google-Fu to fill in the blanks. Let's break the task down into manageable bits. It's helpful if you make each step Google search friendly; it's an extremely useful process in refining your understanding of a problem, as well as making it easy to look up further information later. Use numeric variables to store data in JavaScript Draw a circle with HTML5 <canvas> API Draw a line with HTML5 <canvas> API Animate frames HTML5 <canvas> API Get date/time information in JavaScript Compute sine and cosine functions in JavaScript Divide $2\pi$ radians, which is the same as $360\deg$, into 12 equal measures Divide $2\pi$ radians into 60 equal measures (for minute marks) Since the radius is used to determine how big our clock is, we can use the measurement of the radius to scale the length of all other useful lines related to the clock (like clock hands or minute hash marks). This gives us an elegant way to keep the clock in proportion if we decide to change its size later (or allow users to change the size). Next, we'll cover a brief example of how we might distribute hour labels along the clock, draw hands, and draw marks. Distributing Hour Labels We know there are twelve hours to place around the clock. We know one complete revolution is $2\pi$ radians. Therefore, it must be that $\frac{2\pi}{12} = \frac{\pi}{6}$ gives us the evenly distributed arc lengths around the circle. In our code below, we use the hours as multiples of $\frac{\pi}{6}$ to place each label around the clock face. However, we need to account for the fact that screen pixels are generally measured differently than most people are used to. On the $y$-axis, $+5$ pixels measures five pixels from the top of the screen toward the bottom. In other words, all of the $y$ values are inverted. Further, recall that $0$ radians is at the 3-o'clock position. Therefore, unless we adjust our angles, our 12-o'clock will be where 3-o'clock should be, 11-o'clock will be at 4-o'clock, 10-o'clock will be at 5-o'clock, etc. To make life easier, we'll assume that $y$ values will be negated later on, so we'll make an equation that correctly transforms all of our angles in a way that assumes the positive $y$ axis points up, not down. This means we can assume all measures will be measured from standard position! We need to: offset our starting point to begin at the 12-o'clock position and determine the signs for our offset and hour angle measures. What is the angle between 3-o'clock and 12-o'clock?It's obviously a $90\deg$ separation, which we know to be $\frac {\pi}{2}$ radians. Since we're operating under the assumption that angles are measured from standard position, a positive $\frac {\pi}{2}$ indicates a counterclockwise rotation, which is exactly what we want. Now, if we start from the 12-o'clock position, in what direction is 1-o'clock? We see that it is clockwise with respect to 12. Therefore, we know that all of our hour angle measures must be negative. Putting this all together, we just add our initial $90\deg$ rotation to our negative offsets. Let's create an equation showing how this is done. We begin by directly stating our findings above. From there, we use algebra to attempt to simplify whatever we can. We let $\theta\, '$ represent our transformed angle and we let $h$ be defined as the hour. 
We then have: \[ \begin{align} \theta\, ' &= \frac {\pi}{2} - \frac {h\pi}{6} \\[1em] &= \frac {3\pi - h\pi}{6} \\[1em] &= \frac {\pi(3 - h)}{6} \\[1em] &= \frac {\pi}{6}(3-h). \end{align} \] I prefer to simplify the expression here, because it looks a little neater, and there's only one instance of $\pi$, as opposed to two. Sometimes, doing the simplifications using algebra can lead to much cleaner expressions, so it's definitely worth brushing up on those skills.

// canvas context
var ctx = document.getElementById("canvas").getContext("2d");
var r = 150;   // radius
var pad = -20; // padding

// change origin of canvas to center of clock
ctx.translate(ctx.canvas.width/2, ctx.canvas.height/2);

// orient text alignment correctly
ctx.textAlign = "center";
ctx.textBaseline = "middle";

// set up some style options for how the clock looks
ctx.strokeStyle = "black";
ctx.font = "12pt arial";

// draw edge of clock
ctx.arc(0, 0, r, 0, 2*Math.PI);
ctx.stroke();

// drawing clock labels
for (var label = 1; label <= 12; ++label) {
    // we offset the angle and subtract the measure of our current
    // hour angle. Here, we're also using symmetry
    var theta = Math.PI*(label - 3) / 6;

    // these are the equivalent rectangular coordinates
    var x = (r+pad) * Math.cos(theta);
    var y = (r+pad) * Math.sin(theta);

    // draw the text
    ctx.fillText(label, x, y);
}

I've put up a demonstration of this code on JSFiddle. When dealing with pixel coordinates, it's important to remember that, by default, $(0, 0)$ is located at the top-left of the window. So, again, the $y$-values increase as we travel down toward the bottom of the window. However, the Canvas API does let us move our origin to a different location, so we can at least avoid dealing with that extra transformation headache. You'll notice that the JSFiddle code actually takes a different approach than what we've discussed above. It makes use of a useful property of the sine and cosine functions: symmetry! I should point out that we don't necessarily have to reverse the $y$-values, because the sine and cosine functions have useful symmetry properties. We can just negate the angle, and our clock coordinates will turn out exactly as we want! You may recall learning about odd and even functions. As a quick refresher, let's consider functions $f$ and $g$ with respect to any real number $x$. If $f$ is an odd function, then $f(-x)=-f(x)$ for all values of $x$. If $g$ is an even function, then $g(-x)=g(x)$ for all values of $x$. Odd functions are symmetrical with respect to the origin, and even functions with respect to the $y$-axis. It just so happens that the sine function is odd, and the cosine function is even: \[ \sin(-\theta)=-\sin \theta \, ; \quad \cos(-\theta) = \cos \theta \] We can use this fact to simply negate our earlier equation so as to avoid negating the $y$ values. Our original equation becomes: \[ \theta \, ' = \frac {\pi}{6}(h - 3). \]

Drawing Hands and Marks

We can extend the same concept to drawing lines to represent the hands or hash marks. An interesting thing is that the <canvas> API lets us rotate the canvas itself. So, for example, if we wanted to draw 60 evenly spaced hash marks around the clock, we could simply draw a short line at the edge of the clock, then rotate the canvas by $\frac {2\pi}{60}=\frac {\pi}{30}$ radians and then repeat until we've rotated for a total of $2\pi$ radians.

Just Another Tool

Sometimes, you find things become unexpectedly useful when coming across various problems while programming.
Although mathematics seems an obviously useful tool to have, it's not always clear in what ways it can specifically aid you. Sometimes, the things you learn only sit in your mind with a vague sense of utility. For me, this was a little exercise in bridging programming and trigonometry class. A real-world project with concrete goals can provide a deeper, more pragmatic level of understanding of abstract concepts.

Challenges

If you're interested in the trigonometry behind these concepts, try out the problems below to help solidify them.

Calculating angles: Consider a clock with a minute and hour hand. If the time reads 10:30 on the clock, where both hands' positions are accurate to within a minute, what is the radian measure of the least positive angle between both hands? What is the measure in degrees? Tip: try drawing a sketch to help solve the problem.

Concepts: Above, we cover canvas rotation as an alternative method for drawing hash marks or hands for the clock. Explain why this approach works well for drawing lines, but not necessarily for drawing text. Try getting an example running on JSFiddle to demonstrate the problem.

References

^ - Mozilla Developer Network. Accessed 25 May 2015.
Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity

Xinru Cao
1. Institute of Mathematical Sciences, Renmin University, Beijing 100872, China
2. Institut für Mathematik, Universität Paderborn, 33098 Paderborn, Germany

$\begin{equation} \left\{ \begin{array}{llc} \displaystyle u_t=\Delta u-\chi\nabla\cdot(u\nabla v)+\kappa u-\mu u^2, &(x,t)\in \Omega\times (0,T),\\ \displaystyle \tau v_t=\Delta v-v+u, &(x,t)\in\Omega\times (0,T), \end{array} \right.(\star) \end{equation}$

Mathematics Subject Classification: Primary: 35B40, 35K45.

Citation: Xinru Cao. Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9): 3369-3378. doi: 10.3934/dcdsb.2017141
Research | Open Access

The finite speed of propagation for solutions to stochastic viscoelastic wave equation

Boundary Value Problems, volume 2019, Article number: 121 (2019)

Abstract

In this paper, a class of second order stochastic evolution equations with memory is considered, where f is a continuous function with polynomial growth of order less than or equal to \(n/(n-2)\) and σ is Lipschitz with \(\sigma (0)=0\). By Tartar's energy method, we prove that for any solution to the equation the propagation speed is finite.

Introduction

In this paper, we consider the stochastic viscoelastic wave equation (1.1), where D is a bounded domain in \(\mathbb{R}^{n}\) with a smooth boundary ∂D and g is the relaxation function satisfying the hyperbolicity and the existence condition, i.e.

(G1) \(g\in C^{1}[0,\infty )\) is a nonnegative, non-increasing function satisfying $$ \int _{0}^{\infty }g(s)\,ds=1-l>0. $$

\(f: \mathbb{R}\rightarrow \mathbb{R}\) satisfies

(A1) f is a continuous function with \(f(s)s\geq 0\), \(\forall s \in \mathbb{R}\), and $$ \bigl\vert f(s) \bigr\vert \leq C \bigl(1+ \vert s \vert ^{p+1}\bigr),\quad \forall s\in \mathbb{R}, $$ (1.2) where p satisfies $$ \textstyle\begin{cases} 0\leq p\leq \frac{2}{n-2}, &\mbox{if } n>2, \\ p\geq 0, &\mbox{if } n=1,2. \end{cases} $$ (1.3)

\(\{W(t,x):t\geq 0\}\) is an H-valued Q-Wiener process on the probability space with the covariance operator Q satisfying \(\operatorname{Tr}Q<\infty \).

When \(g(t)=0\), (1.1) becomes a nonlinear wave equation. Marinelli and Quer-Sardanyons [1] proved existence of weak solutions in the probabilistic sense for a general class of stochastic semilinear wave equations on bounded domains of \(\mathbb{R}^{n}\) driven by a possibly discontinuous square integrable martingale. Under a more restrictive condition on f, Barbu and Röckner [2] found that the propagation speed of the solutions of (1.1) with dissipative damping is finite by Tartar's energy method. The result is similar to the classical finite speed of propagation result for the solution to the Klein–Gordon equation. When \(g(t)\neq 0\), for the current equation (1.1), the memory part makes it difficult to estimate the energy by using these methods. Hence, Wei and Jiang [3] studied (1.1) with \(\sigma \equiv 1\) in another way. They showed the existence and uniqueness of a solution for (1.1) and obtained a decay estimate for the energy function of the solution. In [4], Liang and Guo obtained asymptotic stability and extended the decay estimate of [3] to the general equation (1.1) with multiplicative noise. Moreover, Liang and Gao [5] also obtained the existence and uniqueness of global mild solutions for (1.1) driven by Lévy noise.

In this paper, we prove that any solution to (1.1) propagates, with probability one, with velocity less than or equal to 1. This localization result is new for the class of second order stochastic evolution equations with memory considered here. The standard strategy to prove this property for the deterministic Klein–Gordon equation is based on the Paley–Wiener theorem combined with point arguments [6]. However, for the current equation (1.1), the memory part makes it difficult to use these methods. So we shall use a different approach, inspired by Tartar's energy method [7].

This paper is organized as follows. In Sect. 2 we present some assumptions needed for our work and give the existence theorem for a unique global weak solution. Section 3 is devoted to the proof of the finite propagation speed.
Preliminaries Set \(H=L^{2}(D)\) and \(V=H^{1}_{0}(D)\) with norm denoted by \(\|\cdot \|\) and \(\|\nabla \cdot \|\), respectively. In addition, both H and V are Hilbert spaces if we endow them with the usual inner products \((\cdot ,\cdot )\) and \(\langle \cdot ,\cdot \rangle \), respectively. Let \(\mathscr{H}=V\times H\) with the norm \(\|U\|_{\mathscr{H}}=(\|\nabla u\|^{2}+\|v\|^{2})^{\frac{1}{2}}\) for any \(U=(u,v)\in \mathscr{H}\). Let \((\varOmega ,P,\mathcal{F})\) be a complete probability space for which a \(\{\mathcal{F}_{t}, t\geq 0\}\) filtration of sub- σ-fields of \(\mathcal{F}\) is given. A point of Ω will be denoted by ω and \(\mathbf{E}(\cdot )\) stands for the expectation with respect to probability measure P. Suppose that \(\{W(t,x):t\geq 0\}\) is a H-valued Q-Wiener process on the probability space with the covariance operator Q satisfying \(\operatorname{Tr}Q<\infty \). It has mean \(\mathbf{E}W(x,t)=0\) and satisfies for any \(\varphi ,\psi \in H\). Moreover, we can assume that Q has the following form: where \({\lambda _{i}}\) are eigenvalues of Q satisfying \(\sum_{i=1} ^{\infty }\lambda _{i}<\infty \) and \(\{e_{i}\}\) are the corresponding eigenfunctions with \(c_{0}:=\sup_{i\geq 1}\|e_{i}\|_{\infty }<\infty \) (where \(\|\cdot \|_{\infty }\) denotes the super-norm). In this case, where \(\{B_{i}(t)\}\) is a sequence of independent copies of standard Brownian motions in one dimension. Definition 2.1 hold in the sense of distributions over \((0,T)\times D\) for almost all ω. Remark 2.1 for all \(t\in [0,T]\) and all \(\phi \in H_{0}^{1}(D)\). In fact, (2.6) is a conventional form for the definition of solution to stochastic differential equations. Here we say u is a strong solution of Eq. (1.1). As regards the existence equation (1.1), moreover, we assume f also satisfies: (A2) Assume that, for every \(N>0\), there exists constant \(L_{f}(N)\) such that, for all \(t\geq 0\) and all \(u,v\in V\) with \(\| \nabla u\|\leq N\) and \(\|\nabla v\|\leq N\),$$ \bigl\Vert f(u)-f(v) \bigr\Vert ^{2}\leq L_{f}(N) \bigl\Vert \nabla ( u-v) \bigr\Vert ^{2}. $$(2.7) Remark 2.2 It is clear that \(f(u)=|u|^{p}u\) satisfies conditions (A1) and (A2), where p satisfies (1.3). Theorem 2.1 Assume that (G1), (A1) and (A2) are satisfied and \((u_{0}(x),u _{1}(x)): \varOmega \rightarrow \mathscr{H}\) be \(\mathcal{F}_{0}\)- measurable. Then (1.1) admits a unique local mild solution \(u\in C^{1}([0,\tau _{\infty })\times D;H)\cap C([0, \tau _{\infty });V)\) satisfying and for all \(t>0\) and \(k\in \mathbb{N}\), where \(\tau _{\infty }\) is a stopping time defined by and \(S(t)\) is the resolvent operator for the equation Moreover, if \(u_{0}\in H^{2}(D)\cap V\) and \(u_{1}\in V\), then the mild solution of (1.1) is a strong solution and belongs to \(C^{1}([0,\tau _{\infty })\times D; H^{2}(D)\cap V)\). Define the energy functional \(\mathcal{E}(t)\) associated with our system (1.1) where Theorem 2.2 Assume that (G1), (A1) and (A2) are satisfied and \(u(0)=(u_{0}(x),u _{1}(x)): \varOmega \rightarrow \mathscr{H}\) be \(\mathcal{F}_{0}\)- measurable. Let u be the unique local mild solution to problem (1.1) with life span \(\tau _{\infty }\), then \(\tau _{\infty }=\infty \) \(\mathbb{P}\)- a. s. Proof First, we consider the case of \(\mathcal{E}(u(0))<\infty \). Let \(u(t)\), \(0\leq t<\tau _{\infty }\), be a maximal local mild solution to problem (1.1). Define a sequence of stopping times by By Theorem 2.1, then \(\lim_{k\rightarrow \infty } \tau _{k}=\tau _{\infty }\). 
For any \(t\geq 0\), we will show that \(u(t\wedge \tau _{k})\rightarrow u(t)\) a.s. as \(k\rightarrow \infty \), so that the local solution becomes a global one. To this end, it suffices to show that \(\tau _{k}\rightarrow \infty \) as \(k\rightarrow \infty \) with probability one. Now one of the main obstacles is that the solution u to problem (1.1) may have only a finite lifespan, i.e., \(\tau _{\infty }< \infty \). For this purpose, we fix \(k\in \mathbb{N}\) and introduce the following function: One can see that the processes f̃ and σ̃ are bounded. We consider the following linear nonhomogeneous stochastic equation: Hence the stopped process \(v(\cdot \wedge \tau _{k})\) satisfies where One can observe that (see [10]) Therefore, for every \(k\geq 1\), from Theorem 2.1, we have By virtue of the Itô rule for \(\|v_{t}(t\wedge \tau _{k})\|^{2}\), we have (Note that we could only use the Itô formula on a strong solution, we can approximate the energy function of a mild solution v by a sequence of energy functions such that the corresponding strong solution sequence \(\{v_{m}\}\) converges to v; see Theorem 3.1 for details.) Using the condition (G1), we have which implies that By the definition of f̃ and σ̃, we have and where we also use the definition of \(\mathcal{E}(t)\). Recalling \(v(t)=u(t)\) for \(t\leq \tau _{k}\), from (2.14) we get where \(F(s)=\int _{0}^{s}f(\tau )\,d\tau \) and C is a positive constant. From (A1), we have \(F(s)\geq 0\) for \(s\in \mathbb{R}\). So using the Gronwall inequality to (2.15), we have for any \(k\geq 1\) and \(t\geq 0\), where \(C_{1}=2\mathbb{E}\int _{D} F(u _{0})\,dx\). It follows that where l is defined in (G1). Since \(\mathcal{E}(u(0))<\infty \), the above inequality gives \(\mathbb{P}(\{\tau _{k}< t\})\leq \frac{C_{t}}{k ^{2}}\), which, with the aid of the Borel–Cantelli lemma implies that \(\mathbb{P}(\{\tau _{\infty }< t\})=0\) or \(\tau _{\infty }=\infty \), \(\mathbb{P}\)-a.s. Therefore, Theorem 2.2 holds under the additional condition \(\mathcal{E}(u(0))<\infty \). In fact, we can get a unique global mild solution to (1.1) for the deterministic initial condition \(u(0)=(u_{0}(x),u_{1}(x))\in \mathcal{H}\). Consequently, for any Borel probability measure μ on \(\mathcal{H}\) there exists a martingale solution to (1.1) with the initial condition μ by [11]. Using pathwise uniqueness and a suitable version of the Yamada–Watanabe theory (see [12], Theorem 2) we find a unique global mild solution to (1.1) for every \(\mathcal{F} _{0}\)-measurable initial condition \(u(0): \varOmega \rightarrow \mathcal{H}\). □ The finite speed of propagation In this section, we will apply Tartar’s energy method to show that for any solution to Eq. (1.1) the propagate speed is finite. Let K is a closed subset of D and denote by \(d_{K}(x)\) the distance from \(x\in D\) to K, i.e. For any \(r>0\), we set For a given function \(\varphi : D\rightarrow \mathbb{R}\), let the support \(\{\varphi \}\) denote the closure of the set \(\{x\in D; \varphi (x)\neq 0\}\). Then we have the following. Theorem 3.1 Assume that (G1) and (A1) hold. Let \(1\leq d<\infty \) and K be a closed subset of D. Let \(u(t)\) be any solution to (1.1) with initial data \(u_{0}(x)\in V\) and \(v_{0}(x)\in H\). If then \(\mathbb{P}\)- a. s. Proof Define a \(C^{1}\) function ρ such that We consider the local energy function \(\phi :[0,\infty )\times V \times H\rightarrow \mathbb{R}\) defined by Note that we could only use the Itô formula on a strong solution to Eq. 
(1.1), we can approximate the energy function of a mild solution u by a sequence of energy functions such that the corresponding strong solution sequence \(\{u_{m}\}\) converges to u. Set then \(D(A)=H^{2}(D)\cap H^{1}_{0}(D)\) and \(R(m;A)\) is bounded by \(1/m\). Let From (1.1), \((u_{m}(t),v_{m}(t) )\) satisfies In addition, by the Sobolev embedding theorem and condition (A1), we have \(f(u)\in L^{2} ([0,T]; L^{2}(\varOmega \times D) )\), which implies Applying the Itô formula to \(\phi (t,u_{m},v_{m})\), we get where Taking into account that we infer that \(d_{K}\in W ^{1,\infty }(\mathbb{R}^{d})\) and By Green’s formula we have In virtue of (G1) and (3.13), it follows that On the other hand, from (3.10) and (G1), we have and Note that Then, letting \(m\rightarrow \infty \) in (3.17), we obtain Recalling \(\operatorname{Tr}Q<\infty \) and \(c_{0}:=\sup_{i\geq 1}\|e_{i}\|_{\infty }< \infty \), it follows from (3.19) that Since \(t\mapsto \phi (t, u,v)\) is continuous \(\mathbb{P}\)-a.s, the Gronwall inequality implies that Therefore, for any \(t\geq 0\), \(\mathbb{P}\)-a.s. Remark 3.1 Note that Theorem 3.1 does not assert the existence of a solution to (1.1) with properties (3.1). It simply refers to the finite speed propagation property of solutions to (1.1). In other word, (3.2) implies that the wave front of the solution at time t is in the neighborhood \(K_{t}\) of the set K \(\mathbb{P}\)-a.s. This amounts to saying that any solution \(u(t)\) of (1.1) propagates with finite velocity less than or equal 1 with probability 1. The solution \(u(t)\) to (1.1) has its support in the space-time cone \(\{(t,x)\in (0,\infty )\times D; d_{K}(x)\leq t\}\). Remark 3.2 where h is a monotonically nondecreasing \(C^{1}\) function satisfying a polynomial growth condition. Leaving aside the existence problem for (3.21), we note that in this case there arises one more term in the energy equation (3.13), which is nonpositive and so we conclude the proof as in the previous case. References 1. Marinelli, C., Quer-Sardanyons, L.: Existence of weak solutions for a class of semilinear stochastic wave equations. SIAM J. Math. Anal. 44, 906–925 (2012) 2. Barbu, V., Röckner, M.: The finite speed of propagation for solutions to nonlinear stochastic wave equations driven by multiplicative noise. J. Differ. Equ. 255, 560–571 (2013) 3. Wei, T.T., Jiang, Y.M.: Stochastic wave equations with memory. Chin. Ann. Math. 31B, 329–342 (2010) 4. Liang, F., Guo, Z.H.: Asymptotic behavior for second order stochastic evolution equations with memory. J. Math. Anal. Appl. 419, 1333–1350 (2014) 5. Liang, F., Guo, H.J.: Stochastic nonlinear wave equation with memory driven by compensated Poisson random measures. J. Math. Phys. 55, 033503 (2014) 6. Reed, M., Simon, B.: Methods of Modern Mathematical Physics II. Academic Press, New York (1975) 7. Tartar, L.: Topics in nonlinear analysis Publications Mathématiques d’Orsay, Report, Orsay (1978) 8. Liang, F., Gao, H.J.: Global existence and explosive solution for stochastic viscoelastic wave equation with nonlinear damping. Rev. Math. Phys. 26, Article ID 1450013 (2014) 9. Liang, F., Gao, H.J.: Explosive solutions of stochastic viscoelastic wave equations with damping. Rev. Math. Phys. 23, 883–902 (2011) 10. Brzeźniak, Z., Maslowski, B., Seidler, J.: Stochastic nonlinear beam equations. Probab. Theory Relat. Fields 132, 119–149 (2005) 11. Ondreját, M.: Uniqueness for stochastic evolution equations in Banach space. Diss. Math. 426, 1–63 (2004) 12. 
Ondreját, M.: Brownian representation of cylindrical local martingales, martingale problem and strong Markov property of weak solutions of SPDEs in Banach space. Czechoslov. Math. J. 55, 1003–1039 (2005)

Acknowledgements The authors are indebted to the editor for giving some important suggestions which improved the presentation of this paper.

Funding Supported in part by China NSF Grant No. 11501442, the Natural Science Basic Research Plan in Shaanxi Province of China No. 2019JM-283, and the Excellent Youth Fund of Xi An University of Science and Technology Grant Nos. 201YQ2-14, 201YQ3-12.
As you said, the trend in your example data is obvious. If you want just to justify this fact by a hypothesis test, then besides using linear regression (the obvious parametric choice), you can use the non-parametric Mann-Kendall test for monotonic trend. The test is used to assess if there is a monotonic upward or downward trend of the variable of interest over time. A monotonic upward (downward) trend means that the variable consistently increases (decreases) through time, but the trend may or may not be linear. (http://vsp.pnnl.gov/help/Vsample/Design_Trend_Mann_Kendall.htm) Moreover, as noted by Gilbert (1987), the test is particularly useful since missing values are allowed and the data need not conform to any particular distribution. The test statistic is the difference between the numbers of positive and negative differences $x_j-x_i$ among all the $n(n-1)/2$ possible pairs, i.e. $$ S = \displaystyle\sum_{i=1}^{n-1}\displaystyle\sum_{j=i+1}^{n}\mathrm{sgn}(x_j-x_i) $$ where $\mathrm{sgn}(\cdot)$ is the sign function. $S$ can be used to calculate the $\tau$ statistic, which is similar to a correlation coefficient as it ranges from $-1$ to $+1$, where the sign suggests a negative or positive trend and the value of $\tau$ is proportional to the slope of the trend. $$ \tau = \frac{S}{n(n-1)/2} $$ Finally, you can compute $p$-values. For samples of size $n \le 10$ you can use tables of precomputed $p$-values for different values of $S$ and different sample sizes (see Gilbert, 1987). With larger samples, first you need to compute the variance of $S$, $$ \mathrm{var}(S) = \frac{1}{18}\Big[n(n-1)(2n+5) - \displaystyle\sum_{p=1}^{g}t_p(t_p-1)(2t_p+5)\Big] $$ where $g$ is the number of tied groups and $t_p$ is the number of observations in the $p$-th tied group, and then compute the $Z_{MK}$ test statistic $$ Z_{MK} =\begin{cases}\frac{S-1}{\sqrt{\mathrm{var}(S)}} & \text{if} ~ S > 0 \\0 & \text{if} ~ S = 0 \\\frac{S+1}{\sqrt{\mathrm{var}(S)}} & \text{if} ~ S < 0\end{cases}$$ The value of $Z_{MK}$ is compared to standard normal quantiles: $Z_{MK} \ge Z_{1-\alpha}$ for an upward trend, $Z_{MK} \le -Z_{1-\alpha}$ for a downward trend, $|Z_{MK}| \ge Z_{1-\alpha/2}$ for an upward or downward trend. In this thread you can find R code implementing this test. Since the $S$ statistic is computed from all possible pairs of observations, instead of using the normal approximation for the $p$-value you can use a permutation test, which is natural in this case. First, you compute the $S$ statistic from your data, then you randomly shuffle your data multiple times and compute it for each of the shuffled samples. The $p$-value is simply the proportion of cases where $S_\text{permutation} \ge S_\text{data}$ for an upward trend, or $S_\text{permutation} \le S_\text{data}$ for a downward trend. Gilbert, R.O. (1987). Statistical Methods for Environmental Pollution Monitoring. Wiley, NY. Önöz, B., & Bayazit, M. (2003). The power of statistical tests for trend detection. Turkish Journal of Engineering and Environmental Sciences, 27(4), 247-251.
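Since the answer points to R code elsewhere, here is a compact, self-contained sketch of the same statistics in Python (no tie correction, made-up illustrative data), computing $S$, $\tau$, the normal-approximation $Z_{MK}$, and a permutation $p$-value for an upward trend.

import math, random

def mann_kendall_S(x):
    # S = number of positive minus number of negative differences x_j - x_i, j > i
    return sum((xj > xi) - (xj < xi)
               for i, xi in enumerate(x) for xj in x[i + 1:])

def mann_kendall(x, n_perm=10000, seed=0):
    n = len(x)
    S = mann_kendall_S(x)
    tau = S / (n * (n - 1) / 2)
    var_S = n * (n - 1) * (2 * n + 5) / 18          # no ties assumed here
    if S > 0:
        z = (S - 1) / math.sqrt(var_S)
    elif S < 0:
        z = (S + 1) / math.sqrt(var_S)
    else:
        z = 0.0
    # permutation p-value for an upward trend
    rng = random.Random(seed)
    hits = sum(mann_kendall_S(rng.sample(x, n)) >= S for _ in range(n_perm))
    return S, tau, z, hits / n_perm

data = [3.1, 3.4, 3.3, 3.9, 4.2, 4.1, 4.8, 5.0]     # made-up upward-trending series
print(mann_kendall(data))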
Learning Outcomes

Conduct and interpret hypothesis tests for two population means, population standard deviations known.

Even though this situation is not likely (knowing the population standard deviations is not likely), the following example illustrates hypothesis testing for independent means with known population standard deviations. The sampling distribution for the difference between the means is normal and both populations must be normal. The random variable is [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}[/latex]. The normal distribution has the following format:

Normal distribution is: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}\sim{N}\Bigg[{\mu_{{1}}-\mu_{{2}},\sqrt{{\frac{{(\sigma_{{1}})}^{{2}}}{{n}_{{1}}}+\frac{{(\sigma_{{2}})}^{{2}}}{{n}_{{2}}}}}\Bigg]}[/latex]

The standard deviation is: [latex]\displaystyle\sqrt{\frac{(\sigma_1)^2}{n_1}+\frac{(\sigma_2)^2}{n_2}}[/latex]

The test statistic (z-score) is: [latex]\displaystyle{z}=\frac{(\overline{x}_1-\overline{x}_2)-(\mu_1-\mu_2)}{\sqrt{\frac{(\sigma_1)^2}{n_1}+\frac{(\sigma_2)^2}{n_2}}}[/latex]

Example

Independent groups, population standard deviations known: The mean lasting time of two competing floor waxes is to be compared. Twenty floors are randomly assigned to test each wax. Both populations have normal distributions. The data are recorded in the table.

Wax | Sample Mean Number of Months Floor Wax Lasts | Population Standard Deviation
1 | 3 | 0.33
2 | 2.9 | 0.36

Does the data indicate that wax 1 is more effective than wax 2? Test at a 5% level of significance.

Solution: This is a test of two independent groups, two population means, population standard deviations known.

Random Variable: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}[/latex] = difference in the mean number of months the competing floor waxes last.

H0: μ1 ≤ μ2
Ha: μ1 > μ2

The words "is more effective" say that wax 1 lasts longer than wax 2, on average. "Longer" is a ">" symbol and goes into Ha. Therefore, this is a right-tailed test.

Distribution for the test: The population standard deviations are known, so the distribution is normal. Using the formula, the distribution is: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}\sim{N}\Bigg[{0,\sqrt{{\frac{{(0.33)}^{{2}}}{20}+\frac{{(0.36)}^{{2}}}{20}}}}\Bigg][/latex]

Since μ1 ≤ μ2, we have μ1 − μ2 ≤ 0, and the mean for the normal distribution is zero.

Calculate the p-value using the normal distribution: p-value = 0.1799

Graph: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}={3}-{2.9}={0.1}[/latex]

Compare α and the p-value: α = 0.05 and p-value = 0.1799. Therefore, α < p-value.

Make a decision: Since α < p-value, do not reject H0.

Conclusion: At the 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the mean time wax 1 lasts is longer (wax 1 is more effective) than the mean time wax 2 lasts.

Using a Calculator

Press STAT. Arrow over to TESTS and press 3:2-SampZTest. Arrow over to Stats and press ENTER. Arrow down and enter .33 for sigma1, .36 for sigma2, 3 for the first sample mean, 20 for n1, 2.9 for the second sample mean, and 20 for n2. Arrow down to μ1: and arrow to > μ2. Press ENTER. Arrow down to Calculate and press ENTER. The p-value is p = 0.1799 and the test statistic is 0.9157. Do the procedure again, but instead of Calculate do Draw.

try it

The means of the number of revolutions per minute of two competing engines are to be compared. Thirty engines are randomly assigned to be tested. Both populations have normal distributions. The table below shows the result. Do the data indicate that Engine 2 has higher RPM than Engine 1?
Test at a 5% level of significance.

Engine | Sample Mean Number of RPM | Population Standard Deviation
1 | 1,500 | 50
2 | 1,600 | 60

The p-value is almost zero, so we reject the null hypothesis. There is sufficient evidence to conclude that Engine 2 runs at a higher RPM than Engine 1.

Example

An interested citizen wanted to know if Democratic U.S. senators are older than Republican U.S. senators, on average. On May 26, 2013, the mean age of 30 randomly selected Republican senators was 61 years 247 days old (61.675 years) with a standard deviation of 10.17 years. The mean age of 30 randomly selected Democratic senators was 61 years 257 days old (61.704 years) with a standard deviation of 9.55 years. Do the data indicate that Democratic senators are older than Republican senators, on average? Test at a 5% level of significance.

Solution: This is a test of two independent groups, two population means. The population standard deviations are unknown, but the sum of the sample sizes is 30 + 30 = 60, which is greater than 30, so we can use the normal approximation to the Student's-t distribution. Subscripts: 1: Democratic senators; 2: Republican senators.

Random variable: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}[/latex] = difference in the mean age of Democratic and Republican U.S. senators.

H0: µ1 ≤ µ2 (equivalently, µ1 − µ2 ≤ 0)
Ha: µ1 > µ2 (equivalently, µ1 − µ2 > 0)

The words "older than" translate as a ">" symbol and go into Ha. Therefore, this is a right-tailed test.

Distribution for the test: The distribution is the normal approximation to the Student's t for means, independent groups. Using the formula, the distribution is: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}\sim{N}\Bigg[{0,\sqrt{{\frac{{(9.55)}^{{2}}}{30}+\frac {{(10.17)}^{{2}}}{30}}}}\Bigg][/latex]

Since µ1 ≤ µ2, we have µ1 − µ2 ≤ 0, and the mean for the normal distribution is zero.

(Calculating the p-value using the normal distribution gives p-value = 0.4040.)

Compare α and the p-value: α = 0.05 and p-value = 0.4040. Therefore, α < p-value.

Make a decision: Since α < p-value, do not reject H0.

Conclusion: At the 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the mean age of Democratic senators is greater than the mean age of the Republican senators.

Concept Review

A hypothesis test of two population means from independent samples where the population standard deviations are known (typically approximated with the sample standard deviations) will have these characteristics:

Random variable: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}[/latex] = the difference of the means
Distribution: normal distribution

Formula Review

Normal Distribution: [latex]\displaystyle\overline{{X}}_{{1}}-\overline{{X}}_{{2}}\sim{N}\Bigg[{\mu_{{1}}-\mu_{{2}},\sqrt{{\frac{{(\sigma_{{1}})}^{{2}}}{{n}_{{1}}}+\frac{{(\sigma_{{2}})}^{{2}}}{{n}_{{2}}}}}\Bigg]}[/latex] Generally µ1 − µ2 = 0.

Test Statistic (z-score): [latex]\displaystyle{z}=\frac{(\overline{x}_1-\overline{x}_2)-(\mu_1-\mu_2)}{\sqrt{\frac{(\sigma_1)^2}{n_1}+\frac{(\sigma_2)^2}{n_2}}}[/latex] Generally µ1 − µ2 = 0.

where: σ1 and σ2 are the known population standard deviations, n1 and n2 are the sample sizes, [latex]\displaystyle\overline{{x}}_{{1}}[/latex] and [latex]\overline{{x}}_{{2}}[/latex] are the sample means, and μ1 and μ2 are the population means.
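As a numerical cross-check of the two examples above (a sketch using only the summary statistics given; sample 1 below is whichever group the alternative hypothesis claims is larger), the two-sample z statistic and right-tailed p-value can be computed directly:

import math
from statistics import NormalDist

def two_sample_z(x1bar, x2bar, sigma1, sigma2, n1, n2):
    # test statistic and right-tailed p-value for H0: mu1 - mu2 = 0 vs Ha: mu1 - mu2 > 0
    se = math.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2)
    z = (x1bar - x2bar) / se
    return z, 1 - NormalDist().cdf(z)

print(two_sample_z(3.0, 2.9, 0.33, 0.36, 20, 20))   # floor wax: z ~ 0.916, p ~ 0.18
print(two_sample_z(1600, 1500, 60, 50, 30, 30))     # engines:   z ~ 7.0,  p ~ 0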
The Annals of Applied Probability Ann. Appl. Probab. Volume 24, Number 3 (2014), 1049-1080. Pathwise optimal transport bounds between a one-dimensional diffusion and its Euler scheme Abstract In the present paper, we prove that the Wasserstein distance on the space of continuous sample-paths equipped with the supremum norm between the laws of a uniformly elliptic one-dimensional diffusion process and its Euler discretization with $N$ steps is smaller than $O(N^{-2/3+\varepsilon})$ where $\varepsilon$ is an arbitrary positive constant. This rate is intermediate between the strong error estimation in $O(N^{-1/2})$ obtained when coupling the stochastic differential equation and the Euler scheme with the same Brownian motion and the weak error estimation $O(N^{-1})$ obtained when comparing the expectations of the same function of the diffusion and of the Euler scheme at the terminal time $T$. We also check that the supremum over $t\in[0,T]$ of the Wasserstein distance on the space of probability measures on the real line between the laws of the diffusion at time $t$ and the Euler scheme at time $t$ behaves like $O(\sqrt{\log(N)}N^{-1})$. Article information Source Ann. Appl. Probab., Volume 24, Number 3 (2014), 1049-1080. Dates First available in Project Euclid: 23 April 2014 Permanent link to this document https://projecteuclid.org/euclid.aoap/1398258095 Digital Object Identifier doi:10.1214/13-AAP941 Mathematical Reviews number (MathSciNet) MR3199980 Zentralblatt MATH identifier 1296.65010 Citation Alfonsi, A.; Jourdain, B.; Kohatsu-Higa, A. Pathwise optimal transport bounds between a one-dimensional diffusion and its Euler scheme. Ann. Appl. Probab. 24 (2014), no. 3, 1049--1080. doi:10.1214/13-AAP941. https://projecteuclid.org/euclid.aoap/1398258095
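To make the notions in the abstract concrete, here is a rough Monte-Carlo sketch (not from the paper) of the strong error obtained by coupling an SDE and its Euler scheme through the same Brownian increments; the drift and diffusion coefficients below are arbitrary illustrative choices, and a finer Euler grid stands in for the exact path.

import math, random

def strong_error(N, T=1.0, n_paths=1000, fine=8, seed=0):
    """Estimate E[ max over coarse grid times of |X_t - X^N_t| ] under the coupling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        dt = T / (N * fine)             # fine grid step (proxy for the true solution)
        x_fine = x_euler = 1.0
        dw_coarse, sup = 0.0, 0.0
        for step in range(N * fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            # illustrative SDE: dX = -X dt + (1 + 0.5 sin X) dW
            x_fine += -x_fine * dt + (1 + 0.5 * math.sin(x_fine)) * dw
            dw_coarse += dw
            if (step + 1) % fine == 0:  # a coarse step is complete: advance the Euler scheme
                x_euler += -x_euler * (T / N) + (1 + 0.5 * math.sin(x_euler)) * dw_coarse
                dw_coarse = 0.0
                sup = max(sup, abs(x_fine - x_euler))
        total += sup
    return total / n_paths

for N in (10, 20, 40, 80):
    print(N, strong_error(N))    # the error should shrink roughly like N^(-1/2)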
Literature on Carbon Nanotube Research

I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!

Contents
1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
7 High-Performance Carbon Nanotube Fiber

Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes

B. G. Demczyk et al., Materials and Engineering, A334, 173-178, 2002

The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device. On this device CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young modulus and bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% of strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.

Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis

Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004

The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2).
It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,... The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows one to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as of hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).

Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology

M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004

In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa based on the 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given: <math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math> where <math>\alpha</math> is the helix angle of the spun yarn, i.e. fiber direction relative to yarn axis. The constant <math>k=\sqrt{dQ/\mu}/3L</math> is given by the fiber diameter d=1nm, the fiber migration length Q (distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the quantity <math>\mu=0.13</math>, which is the friction coefficient of CNTs (the friction coefficient is the ratio of maximum along-fiber force divided by lateral force pressing the fibers together), and <math>L=30{\rm \mu m}</math>, the fiber length. A critical review of this formula is given here. (A small numerical evaluation of this formula is sketched at the end of this literature review.) In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.

Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen

Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO.
The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon. In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can control the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper. In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs. This article can be found in our archive.

Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning

In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation

The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, due to its inherent shape, as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and so drives the growth process mechanically. Of course for us in the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough that we can spin them into a yarn that would hold the 100 GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst.
If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock. If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing. High-Performance Carbon Nanotube Fiber K. Koziol et al., Science, 318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (low-density, porous, solid material) of SWNT and MWNT that has been formed by carbon vapor deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values as the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel. They show in their results plot (Figure 3A) that typically the fibers split in two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample, which have no weak point, and some, which have one or more, provided the length of the fibers is in the order of the frequency of occurrence of weak points. This can be seen by the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood to encounter a weak point on a 20mm long fiber is 20 times higher than encountering one on a 1mm long fiber. As a conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength of better than 3GPa using the proposed method is enormous. This comes back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnect them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber of better than Kevlar is still open.
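As promised above, here is a small numerical evaluation of the yarn-strength formula quoted in the Zhang et al. (2004) summary, sigma_yarn/sigma_fiber = cos^2(alpha) * (1 - k/sin(alpha)) with k = sqrt(dQ/mu)/(3L). The values of d, mu and L are those given in the summary; the migration length Q is not given there, so the value used below is only an assumed order of magnitude for illustration.

import math

d = 1e-9     # fiber diameter, 1 nm
mu = 0.13    # CNT friction coefficient
L = 30e-6    # fiber length, 30 micron
Q = 5e-6     # ASSUMED migration length (not given in the summary above)

k = math.sqrt(d * Q / mu) / (3 * L)

for alpha_deg in (10, 20, 30, 45):
    a = math.radians(alpha_deg)
    ratio = math.cos(a) ** 2 * (1 - k / math.sin(a))
    print(alpha_deg, round(ratio, 3))   # sigma_yarn / sigma_fiber at a few helix angles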
A metal (or otherwise, suitably elastic) circle is cut and the points are slid up and down a vertical axis as shown: How would one describe the resultant curves mathematically? This problem was first formulated by Leonhard Euler in 1744: "That among all curves of the same length which not only pass through the points A and B, but are also tangent to given straight lines at these points, that curve be determined which minimizes the value of \begin{equation} \int_A^B \frac{ds}{R^2} \end{equation}" It is a problem of the calculus of variations, and the Euler-Lagrange equations allow one to solve it as an ODE of the type: \begin{equation} \frac{dy}{dx} = \frac{a^2 - c^2 + x^2}{\sqrt{(c^2 - x^2)(2a^2 - c^2 + x^2)}} \end{equation} The physical meaning is this: the wire will take the shape that minimizes the total energy related to bending further at each point. This energy is similar to the spring potential energy for deformations, but in this case the measure of the deformation is the curvature $k = \frac{1}{R}$. Since in your proposal the elastic line was initially a circle, I would propose the integral to be: \begin{equation} \int_A^B \left(\frac{1}{R} - k_0\right)^2 ds \end{equation} where $k_0$ would be the initial curvature, which should be the rest one. I would set it constant here since for the circle it is the same at every point, but in general if you start with a different rest position, the rest curvature will differ from point to point. Now let's analyze the curves and group them. If we number them from left to right and from top to bottom we can make the following two groups: Fixed ends condition: the curves 1, 2 and 6. These curves are determined only by fixing the extremes of the curve, i.e. the boundary conditions. This means that they are shapes that the curves will take naturally with no external forces acting on any point of them. Fixed ends + one fixed end angle condition: the curves 3, 4 and 5. It can be seen that 4 and 5 are the same. These curves need, apart from the fixed-extremes condition, the fixing of the angle at one or both of the extremes. A bending force there, or some general external force acting on one or more points, would cause them as well. If they did not have this extra condition, nothing would prevent them from falling back to form 2 or 6. Finally, here is a review of the solutions, with a great historical presentation of the problem: ElasticaHistory. But if you really want to get serious I recommend A Treatise on the Mathematical Theory of Elasticity.
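To get a feel for what one of these solutions looks like, here is a minimal numerical sketch (my own addition, not part of the original answer) that integrates the quoted ODE with arbitrary illustrative values for the constants $a$ and $c$:

```python
# Trace one arch of the elastica by integrating the ODE quoted above,
#   dy/dx = (a^2 - c^2 + x^2) / sqrt((c^2 - x^2)(2a^2 - c^2 + x^2)),
# with assumed sample values for a and c (not taken from the answer).
import numpy as np

a, c = 1.0, 0.8
x = np.linspace(-0.999 * c, 0.999 * c, 2001)    # stay inside |x| < c where the square root is real
dydx = (a**2 - c**2 + x**2) / np.sqrt((c**2 - x**2) * (2 * a**2 - c**2 + x**2))
y = np.cumsum(dydx) * (x[1] - x[0])             # crude cumulative integration of dy/dx
print(float(y[0]), float(y[-1]))                # heights of the two endpoints of the arch
```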
You are asking for a condition on $f$ and $g$ such that the curve: $$C: f(y) - g(x) = 0$$ has a uniformly bounded number of rational points. Note that if $f$ and $g$ are equivalent under an affine transformation, then $C$ is divisible by a linear factor and hence is not irreducible. The converse is almost true. Namely, as long as the degree of $f$ and $g$ is sufficiently large, and $f$ and $g$ are not of the form $a \circ b$ for polynomials $a$ and $b$ of degree $> 1$, then $C$ will be irreducible. (This follows from CFSG. Of course, using composition of functions one can create many degenerate examples: $P(y)^2 - Q(x)^2$ is divisible by $P(y) - Q(x)$. The example in the comments giving an example in even degrees arises in this way, by taking a degree two example and using composition.) If the degree $d$ of $f$ (and $g$) is prime, then $f$ and $g$ are certainly indecomposable, so let's concentrate on that case, since there are no reductions to smaller degree. For convenience, let's also only consider the case when $C$ is irreducible (if $d$ is prime, this is automatic if $d$ is sufficiently large, by the remark above). If $C$ has genus at least two, then $C$ will have only finitely many rational points (Faltings). Work of Caporaso, Harris, and Mazur: http://www.ams.org/journals/jams/1997-10-01/S0894-0347-97-00195-1/home.html suggests that the number of solutions may even be bounded in terms of the genus, and hence in terms of the degree. Whether you believe Lang's conjectures or not, you are unlikely to disprove Lang's conjectures easily, so any negative example to your claim should come from a pair of functions $f$ and $g$ so that $C$ has small genus. In small genus, we may have many rational points, but as far as integral points we also have Siegel's theorem to contend with. A projective model $\widetilde{C}$ of $C$ is given by $Z^d f(Y/Z) - Z^d g(X/Z) = 0$. Setting $Z = 0$, we obtain the equation "at infinity" $Y^d - X^d = 0$, which has $d$ points over the complex numbers. Hence, assuming $d \ge 3$, $$\# \widetilde{C} \setminus C \ge 3.$$ By Siegel's theorem we deduce $C$ has only finitely many integral solutions. Thus, when the degree $d$ is prime and sufficiently large (or more generally, providing one avoids degeneracies arising from the phenomena alluded to in the first paragraph), any $f$ and $g$ in different equivalence classes will only coincide on a fixed number of integers. Your question, however, asks whether there is a uniform bound. There is certainly no uniform bound for Siegel's theorem, at least when the genus is $\le 1$. There is a standard "renormalization" trick which takes a curve with infinitely many rational points and produces a curve with many integral points. This trick works in this case. Specifically, suppose that $C: f(y) - g(x) = 0$ has infinitely many rational points. Then there certainly exists some integer $N$ such that $C$ has a bizillion points of the form $(u/N,v/N)$ (take $N$ to be a common denominator). We may then write down the different integral model: $$C': N^d f(y/N) - N^d g(x/N) = 0,$$ which now has a bizillion integral points $(u,v)$. This also allows one to answer your question in general degrees, simply by choosing $f$ and $g$ so that $C$ has infinitely many rational points, and then renormalizing appropriately. The easiest specific example would be to take $C$ of genus zero.
For example, take $f = t^n$ and $g = t^{n-1}(t-1)$. Then $C: f(y) - g(x) = y^n - x^{n-1}(x-1)$ has genus zero, as can be seen from the parametrization $$x = \frac{1}{1 - t^n}, \qquad y = \frac{t}{1 - t^n}.$$ From the above construction, there will exist positive integers $N$ such that the polynomials $t^n$ and $t^{n-1}(t - N)$ will take on the same bizillion values. This answers your question in the negative. EDIT: I guess the last example can be made quite concrete. Let $$N = (1 - 2^d)(1 - 3^d)(1 - 4^d) \ldots (1 - M^d).$$ Then $t^d$ and $t^{d-1}(t-N)$ both take on the values $\displaystyle{\left(\frac{aN}{1 - a^d} \right)^d}$ for $a = 2, \ldots, M$.
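The concrete example is easy to verify numerically; the following sketch (mine, not from the answer) checks the claim for the assumed small values $d = 3$ and $M = 4$:

```python
# Verify that t^d and t^(d-1)(t - N) take the common values (a*N/(1 - a^d))^d
# for a = 2, ..., M, where N = (1 - 2^d)(1 - 3^d)...(1 - M^d).
d, M = 3, 4                                  # small illustrative values
N = 1
for a in range(2, M + 1):
    N *= (1 - a**d)

f = lambda t: t**d
g = lambda t: t**(d - 1) * (t - N)

for a in range(2, M + 1):
    t_f = a * N // (1 - a**d)                # integer preimage under f
    t_g = N // (1 - a**d)                    # integer preimage under g
    value = (a * N // (1 - a**d)) ** d       # the claimed common value
    assert f(t_f) == value and g(t_g) == value
print("common values verified for a = 2, ...,", M)
```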
In a recent and fantastic collaboration between Jake Levinson and myself, we discovered new links between several different geometric and combinatorial constructions. We’ve weaved them together into a beautiful mathematical story, a story filled with drama and intrigue. So let’s start in the middle. Slider puzzles for mathematicians Those of you who played with little puzzle toys growing up may remember the “15 puzzle”, a $4\times 4$ grid of squares with 15 physical squares and one square missing. A move consisted of sliding a square into the empty square. The French name for this game is “jeu de taquin”, which translates to “the teasing game”. We can play a similar jeu de taquin game with semistandard Young tableaux. To set up the board, we need a slightly more general definition: a skew shape $\lambda/\mu$ is a diagram of squares formed by subtracting the Young diagram (see this post) of a partition $\mu$ from the (strictly larger) Young diagram of a partition $\lambda$. For instance, if $\lambda=(5,3,3,1)$ and $\mu=(2,1)$, then the skew shape $\lambda/\mu$ consists of the white squares shown below. A semistandard Young tableau is then a way of filling the squares in such a skew shape with positive integers in such a way that the entries are weakly increasing across rows and strictly increasing down columns: Now, an inner jeu de taquin slide consists of choosing an empty square adjacent to two of the numbers, and successively sliding entries inward into the empty square in such a way that the tableau remains semistandard at each step. This is an important rule, and it implies that, once we choose our inner corner, there is a unique choice between the squares east and south of the empty square at each step; only one can be slid to preserve the semistandard property. An example of an inwards jeu de taquin slide is shown (on repeat) in the animation below: Here’s the game: perform a sequence of successive jeu de taquin slides until there are no empty inner corners left. What tableaux can you end up with? It turns out that, in fact, it doesn’t matter how you play this game! No matter which inner corner you pick to start the jeu de taquin slide at each step, you will end up with the same tableau in the end, called the rectification of the original tableau. Since we always end up at the same result, it is sometimes more interesting to ask the question in reverse: can we categorize all skew tableaux that rectify to a given fixed tableau? This question has a nice answer in the case that we fix the rectification to be superstandard, that is, the tableau whose $i$th row is filled with all $i$’s: It turns out that a semistandard tableau rectifies to a superstandard tableau if and only if it is Littlewood-Richardson, defined as follows. Read the rows from bottom to top, and left to right within a row, to form the reading word. Then the tableau is Littlewood-Richardson if every suffix (i.e. consecutive subword that reaches the end) of the reading word is ballot, which means that it has at least as many $i$’s as $i+1$’s for each $i\ge 1$. For instance, the Littlewood-Richardson tableau below has reading word 352344123322111, and the suffix 123322111, for instance, has at least as many $1$’s as $2$’s, $2$’s as $3$’s, etc. Littlewood-Richardson tableaux are key to the Littlewood-Richardson rule, which allows us to efficiently compute products of Schur functions. 
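The ballot condition is completely mechanical to check, so a tiny program suffices; the following sketch (my own, not from the post) tests whether every suffix of a reading word is ballot and runs it on the reading word from the example above:

```python
def is_littlewood_richardson(word):
    """Return True if every suffix of the reading word is ballot, i.e. each
    suffix contains at least as many i's as (i+1)'s for every i >= 1."""
    counts = {}
    for letter in reversed(word):            # suffixes grow as we scan right to left
        counts[letter] = counts.get(letter, 0) + 1
        if letter > 1 and counts[letter] > counts.get(letter - 1, 0):
            return False
    return True

# The reading word 352344123322111 from the example above:
print(is_littlewood_richardson([3, 5, 2, 3, 4, 4, 1, 2, 3, 3, 2, 2, 1, 1, 1]))   # True
```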
A convoluted commutator The operation that Jake and I studied is a sort of commutator of rectification with another operation called "shuffling". The process is as follows. Start with a Littlewood-Richardson tableau $T$, with one of the corners adjacent to $T$ on the inside marked with an "$\times$". Call this extra square the "special box". Then we define $\omega(T)$ to be the result of the following four operations applied to $T$. Rectification: Treat $\times$ as having value $0$ and rectify the entire skew tableau. Shuffling: Treat the $\times$ as the empty square to perform an inward jeu de taquin slide. The resulting empty square on the outer corner is the new location of $\times$. Un-rectification: Treat $\times$ as having value $\infty$ and un-rectify, using the sequence of moves from the rectification step in reverse. Shuffling back: Treat the $\times$ as the empty square to perform a reverse jeu de taquin slide, to move the $\times$ back to an inner corner. We can iterate $\omega$ to get a permutation on all pairs $(\times,T)$ of a Littlewood-Richardson tableau $T$ with a special box marked on a chosen inner corner, with total shape $\lambda/\mu$ for some fixed $\lambda$ and $\mu$. As we'll discuss in the next post, this permutation is related to the monodromy of a certain covering space of $\mathbb{RP}^1$ arising from the study of Schubert curves. But I digress. One of the main results in our paper provides a new, more efficient algorithm for computing $\omega(T)$. In particular, the first three steps of the algorithm are what we call the "evacuation-shuffle", and our local rule for evacuation shuffling, starting with $i = 1$, is as follows: Phase 1. If the special box does not precede all of the $i$'s in reading order, switch the special box with the nearest $i$ prior to it in reading order. Then increment $i$ by $1$ and repeat this step. If, instead, the special box precedes all of the $i$'s in reading order, go to Phase 2. Phase 2. If the suffix of the reading word starting at the special box has more $i$'s than $i+1$'s, switch the special box with the nearest $i$ after it in reading order whose suffix has the same number of $i$'s as $i+1$'s. Either way, increment $i$ by $1$ and repeat this step until $i$ is larger than any entry of $T$. So, to get $\omega(T)$, we first follow the Phase 1 and Phase 2 steps, and then we slide the special box back with a simple jeu de taquin slide. We can then iterate $\omega$, and compute an entire $\omega$-orbit, a cycle of its permutation. An example of this is shown below. That's all for now! In the next post I'll discuss the beginning of the story: where this operator $\omega$ arises in geometry and why this algorithm is exactly what we need to understand it.
(Sorry, was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$ (i.e. $\Delta\phi = k\,\Delta x$); see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
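As a quick sanity check of the conversion (my own two-liner, with made-up numbers): a path difference of a quarter wavelength should correspond to a phase difference of $\pi/2$.

```python
import math

wavelength, path_difference = 500e-9, 125e-9       # arbitrary example: lambda/4 path difference
k = 2 * math.pi / wavelength
phase_difference = k * path_difference              # phase difference = k * path difference
print(phase_difference / math.pi)                   # 0.5, i.e. a phase shift of pi/2
```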
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form:$$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability if I'm not mistaken one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
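A quick numerical sanity check of the uncontroversial direction in the discussion (my own sketch, not part of the chat): exponentiating $i$ times a Hermitian matrix gives a unitary, while a simple non-Hermitian (nilpotent) example does not.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2                         # Hermitian by construction
U = expm(1j * A)
print(np.allclose(U @ U.conj().T, np.eye(4)))    # True: e^{iA} is unitary

N = np.array([[0, 1], [0, 0]], dtype=complex)    # nilpotent, not Hermitian
V = expm(1j * N)                                 # equals I + iN
print(np.allclose(V @ V.conj().T, np.eye(2)))    # False: not unitary
```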
Geometry and Topology Seminar
Fall 2016
date | speaker | title | host(s)
September 9 | Bing Wang (UW Madison) | "The extension problem of the mean curvature flow" | (Local)
September 16 | Ben Weinkove (Northwestern University) | "Gauduchon metrics with prescribed volume form" | Lu Wang
September 23 | Jiyuan Han (UW Madison) | "Deformation theory of scalar-flat ALE Kahler surfaces" | (Local)
September 30 |
October 7 | Yu Li (UW Madison) | "Ricci flow on asymptotically Euclidean manifolds" | (Local)
October 14 | Sean Howe (University of Chicago) | "Representation stability and hypersurface sections" | Melanie Matchett Wood
October 21 | Nan Li (CUNY) | "Quantitative estimates on the singular sets of Alexandrov spaces" | Lu Wang
October 28 | Ronan Conlon | "New examples of gradient expanding K\"ahler-Ricci solitons" | Bing Wang
November 4 | Jonathan Zhu (Harvard University) | "Entropy and self-shrinkers of the mean curvature flow" | Lu Wang
November 7 | Gaven Martin (University of New Zealand) | "TBA" | Simon Marshall
November 11 | Richard Kent (Wisconsin) | Analytic functions from hyperbolic manifolds | local
November 18 | Caglar Uyanik (Illinois) | "TBA" | Kent
Thanksgiving Recess
December 2 | Peyman Morteza (UW Madison) | "TBA" | (Local)
December 9 |
December 16 |
Fall Abstracts
Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of $\mathbb{C}^2/\Gamma$, where $\Gamma$ is a finite subgroup of $U(2)$ without complex reflections. This is a joint work with Jeff Viaclovsky.
Sean Howe Representation stability and hypersurface sections We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to $\infty$. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in $\mathbb{P}^n$ is $\mathbb{P}^{n-1}$!
Nan Li Quantitative estimates on the singular sets of Alexandrov spaces The definition of quantitative singular sets was initiated by Cheeger and Naber.
They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber. Yu Li In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature. Gaven Marin TBA Peyman Morteza TBA Richard Kent Analytic functions from hyperbolic manifolds Thurston's Geometrization Conjecture, now a celebrated theorem of Perelman, tells us that most 3-manifolds are naturally geometric in nature. In fact, most 3-manifolds admit hyperbolic metrics. In the 1970s, Thurston proved the Geometrization conjecture in the case of Haken manifolds, and the proof revolutionized 3-dimensional topology, hyperbolic geometry, Teichmüller theory, and dynamics. Thurston's proof is by induction, constructing a hyperbolic structure from simpler pieces. At the heart of the proof is an analytic function called the skinning map that one must understand in order to glue hyperbolic structures together. A better understanding of this map would more brightly illuminate the interaction between topology and geometry in dimension three. I will discuss what is currently known about this map. Caglar Uyanik TBA Bing Wang The extension problem of the mean curvature flow We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li. Ben Weinkove Gauduchon metrics with prescribed volume form Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti. Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, they conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set. 
Archive of past Geometry seminars 2015-2016: Geometry_and_Topology_Seminar_2015-2016 2014-2015: Geometry_and_Topology_Seminar_2014-2015 2013-2014: Geometry_and_Topology_Seminar_2013-2014 2012-2013: Geometry_and_Topology_Seminar_2012-2013 2011-2012: Geometry_and_Topology_Seminar_2011-2012 2010: Fall-2010-Geometry-Topology
Let $f:\mathbb R^n\to\mathbb R$ be a Morse function with uniform nondegenerate Hessian at critical points, i.e. for some $\delta>0$ $$\forall x \in \{\nabla f=0\}\;\forall \xi\in\mathbb R^n: \quad |\langle \xi, \nabla^2 f(x)\,\xi \rangle | \geq \delta |\xi|^2 .$$ Edit: Liviu Nicolaescu pointed out a condition to ensure the sublevel sets $\{f\leq c\}$ to be diffeomorphic to the sphere for $c$ large enough. Therefore, we require $f$ to have at least linear radial growth at infinity: $$ \langle x , \nabla f(x) \rangle \geq A |x| - B > 0 ,\quad A,B>0 .$$ In particular, the above two conditions ensure that $f$ has only finitely many critical points. Let w.l.o.g. $0$ be a local minimum of $f$ and let $\Phi_s(x)$ be the negative gradient flow with respect to $f$, i.e. $$ \dot \Phi_s(x) = -\nabla f(\Phi_s(x)) \quad\text{and}\quad \Phi_0(x)= x . $$ In addition $\Omega$ denotes the basin of attraction of $0$, or in other words just the stable manifold of $0$, i.e. $$ \Omega = \{ x : \Phi_s(x) \to 0 \text{ for } s\to \infty \} $$ Does $f$ satisfy a Neumann boundary condition on $\partial\Omega$ in the sense that the following integration by parts holds: $$ \int_\Omega (-\Delta f)\; g \; dx = \int_\Omega \nabla f \cdot \nabla g \; dx \quad\text{for all $g$ such that} \quad \int_\Omega |\nabla f| \; |\nabla g| \; dx < \infty\quad ? $$ Strategy so far: if $f$ is Morse-Smale, then $\partial \Omega$ is the union of the stable manifolds heteroclinically connected to $0$; for the integration by parts only the $(n-1)$-dimensional stable manifolds of saddles of index 1 are relevant; hence $H^{n-1}$-almost all $x\in \partial \Omega$ lie on a stable manifold of a 1-saddle, and there the claim follows by contradiction and the definition of $\Omega$. Hereby $H^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure. Is it necessary for $f$ to be Morse-Smale? Is there some soft argument? What are relevant references?
Overview: When comparing two morphologically similar (i.e. geometrically similar) species that may differ in size, it's natural to ask under what circumstances their gaits might be similar. Given that the differences in size might be important (consider, for example, the difference in size between a cat and a rhinoceros), a dimensionless analysis is necessary. 130 years ago, it was William Froude who introduced a dimensionless parameter that proved to be an important criterion for dynamic similarity when comparing boats of different hull lengths. In particular, dimensionless analysis using the Froude number proved very useful in understanding why the Great Eastern, the largest ship in the world at the time, was a massive failure. In fact, the ship couldn't earn enough to pay for its fuel. Essentially, Froude found that large and small models of geometrically similar hulls produced similar wave patterns when their Froude numbers were equal. To be precise, the Froude number is equal to: \begin{equation} Fr = \frac{\lVert v \rVert^2}{gL} \end{equation} where $v$ is the velocity, $g$ is the gravitational acceleration and $L$ is the characteristic length. While Froude concentrated on the movement of ships, it was D'Arcy Wentworth Thompson who first recognised the connection between the Froude number and animal locomotion. On page 23 of On Growth and Form, Thompson notes: In two similar and closely related animals, as is also in two steam engines, the law is bound to hold that the rate of working must tend to vary with the square of the linear dimension, according to Froude's Law of steamship comparison. Despite the popularity of Thompson's work, the importance of the Froude number as a tool for analysing locomotion wasn't fully appreciated until Robert Alexander, a Zoology professor at Leeds, empirically demonstrated that the movement of animals of geometrically similar form but different size would be dynamically similar when they moved with the same Froude number. Alexander's dynamic similarity criteria: One of Alexander's most striking observations was that the galloping movements of cats and rhinoceroses are remarkably similar even though the rhino is three orders of magnitude larger. After much empirical analysis, Alexander postulated five dynamic similarity criteria in [3]: Each leg has the same phase relationship. Corresponding feet have equal duty factors (% of cycle in ground contact). Relative (i.e. dimensionless) stride lengths are equal. Forces on corresponding feet are equal multiples of body weight. Power outputs are proportional to body weight times speed. Alexander hypothesised, and provided the necessary experimental evidence to demonstrate, that animals meet these five criteria when they travel at speeds that translate to equal values of $Fr$. This important work by Alexander indicates that although the Froude number may appear to oversimplify complex problems in biomechanics, it has empirically proved to be an important factor in the dimensionless analysis of dynamic similarity. At this stage we may marvel at the fact that the Froude number, which emerged from a problem in hydrodynamics, should also play a key role in the comparative analysis of terrestrial locomotion. While this theoretical issue isn't addressed in [1], I have attempted to show the mathematical connection between the Froude number as it occurs in hydrodynamics and the Froude number as it occurs in biomechanics.
Analysis of the Froude number: In this section I shall demonstrate that in both the case of a surface water wave and that of a bipedal walker, the Froude number provides a similar description. In fact, if we note that a surface water wave is approximately a transverse wave and the walking motion of a biped is approximately a longitudinal wave, then the Froude number is simply the force magnitude responsible for linear displacement divided by the magnitude of the gravitational force. Surface water waves: Given a surface water wave moving through a medium with density $\rho$ with constant velocity $v$ (i.e. longitudinal displacement $L$ within a period $T$), the magnitude of the inertial force required to halt the motion of a volume $L^3$ is given by: \begin{equation}\begin{split}\lVert F_i \rVert = \text{mass}\cdot\text{acceleration} & = \rho L^3 \cdot \frac{\lVert \Delta v \rVert}{\Delta t} \\ & = \rho L^3 \cdot \frac{\lVert v \rVert}{T} \\ & = \rho L^2 \cdot \lVert v \rVert \cdot \frac{L}{T} \\ & = \rho L^2 \cdot \lVert v \rVert^2 \end{split} \end{equation} On the other hand, the magnitude of the gravitational force acting on this volume is given by: \begin{equation} \lVert F_g \rVert = \text{mass}\cdot\text{gravitational acceleration} = \rho L^3 \cdot \lVert g \rVert \end{equation} and we may define the Froude number in terms of the ratio of these force magnitudes: \begin{equation} Fr = \frac{\lVert F_i \rVert}{\lVert F_g \rVert} = \frac{\rho L^2 \cdot \lVert v \rVert^2}{\rho L^3 \cdot \lVert g \rVert } = \frac{\lVert v \rVert^2}{Lg} \end{equation} and this number describes the stability of the flowing wave as shown in this video. In particular, when $Fr < 1$ the shallow water wave is stable and the motion of the wave is dominated by gravitational forces, so surface waves generated by downstream disturbances can travel upstream, but when $Fr > 1$ this is impossible. Bipedal walkers: The inverted pendulum is a useful model for analysing bipedal walking, as a leg forms the radius of an arc and the motion of the biped may then be approximated by a longitudinal wave. Furthermore, if we make reasonable modelling assumptions we may infer the speed limits on a bipedal walker. In particular, we make the following assumptions: We neglect air resistance. We assume that the legs are rigid and interact with a single point on the ground. We neglect any pelvic motion. We neglect the inertial role of arm motions. Given these assumptions, note that the force magnitude associated with motion in a circular arc is given by: \begin{equation} \lVert F \rVert = \frac{M \lVert v \rVert^2}{L} \end{equation} where $M$ is the mass of the biped, $\lVert v \rVert^2 / L$ is the inward acceleration of the mass, $v$ is the tangential velocity and $L$ is the radius of the circular orbit (i.e. the limb length). Note further that during walking, the inward acceleration due to normal forces on the foot shouldn't exceed the gravitational acceleration: \begin{equation} g > \frac{\lVert v \rVert^2}{L} \end{equation} Meanwhile, if we consider the ratio of the centripetal force magnitude to the gravitational force magnitude we have: \begin{equation} Fr = \frac{\lVert F_c \rVert}{\lVert F_g \rVert} = \frac{\frac{M \lVert v \rVert^2}{L}}{Mg}=\frac{\lVert v \rVert^2}{gL} \end{equation} and in order to allow stable walking we must have: \begin{equation} Fr < 1 \implies \lVert v \rVert < \sqrt{gL} \end{equation} so the maximum walking speed of a biped is proportional to the square root of $L$, the length of its legs. Likewise, when $Fr > 1$ running is necessary.
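To put numbers on the walking limit (my own illustration, using an assumed leg length of 0.9 m rather than any figure from the post):

```python
import math

g = 9.81                       # gravitational acceleration in m/s^2
L = 0.9                        # assumed human leg length in metres
v_max = math.sqrt(g * L)       # Fr < 1  =>  v < sqrt(g * L)
print(round(v_max, 2), "m/s")  # about 2.97 m/s before walking becomes unstable

v = 1.5                        # a comfortable walking speed
print(round(v**2 / (g * L), 2))   # Froude number of about 0.25
```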
It follows that in both the case of a shallow water wave and that of a bipedal walker, $Fr = 1$ defines the boundary between fundamentally different dynamics. Open questions: Does the implication of dynamic similarity via equal Froude numbers hold for nonlinear motions? The assumption of constant velocity appears to require steady-state assumptions. Can the Froude number be generalised to handle the case of intermittent locomotion? These questions might have been answered by researchers in the robotics and biomechanics community but at this point I myself don't have satisfactory answers. References: [1] Froude and the contribution of naval architecture to our understanding of bipedal locomotion. C. Vaughan & M. O'Malley. 2004. [2] On Growth and Form. D'Arcy Wentworth Thompson. 1917. [3] A dynamic similarity hypothesis for the gaits of quadrupedal mammals. Alexander RM, Jayes AS. 1983.
Answer The reduction formula is $$-\sin\theta$$ Work Step by Step *Summary of the method: For a formula $f(Q\pm\theta)$ 1) See whether $Q$ terminates on the $x$ or $y$ axis. If it terminates on the $x$ axis, go for Case 1. If it terminates on the $y$ axis, go for Case 2. 2) Case 1: - For a small positive value of $\theta$, determine in which quadrant $Q\pm\theta$ lies. - If $f\gt0$ in that quadrant, use a $+$ sign. If $f\lt0$, use a $-$ sign. - The reduced form will have that sign, with $f$ as the function and $\theta$ as the angle. 3) Case 2: - For a small positive value of $\theta$, determine in which quadrant $Q\pm\theta$ lies. - If $f\gt0$ in that quadrant, use a $+$ sign. If $f\lt0$, use a $-$ sign. - The reduced form will have that sign, with the cofunction of $f$ as the function and $\theta$ as the angle. $$\cos(90^\circ+\theta)$$ 1) $90^\circ$ terminates on the $y$ axis. We go for Case 2. 2) As $\theta$ is a very small positive value, which means $\theta\gt0$, $$90^\circ\lt(90^\circ+\theta)\lt180^\circ$$ So $90^\circ+\theta$ lies in quadrant II. 3) Cosine is negative in quadrant II. So we use a $-$ sign. 4) The cofunction of cosine is sine, which is used in Case 2. Overall, the reduced form would be $$-\sin\theta$$
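A one-line numerical check of the result (my own addition) confirms the reduced form:

```python
import math

theta = math.radians(17.0)     # an arbitrary small positive angle
lhs = math.cos(math.radians(90.0) + theta)
rhs = -math.sin(theta)
print(math.isclose(lhs, rhs))  # True: cos(90 degrees + theta) = -sin(theta)
```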
It’s Tax Day here in the United States, and I spent a larger portion of the past weekend than I would have liked filling out the appropriate forms. But even among tax forms and legal documents you can find a mathematical gemstone or two! The instructions for filling out, say, IRS form 1040, are rather peculiar in that they try give the reader very simple mathematical operations to carry out, one at a time, to end up with a complicated function of many variables. As far as I could tell, the valid operations are: Entering a value for a variable (e.g. income, standard deduction, etc.) $+$: Addition of two entries $\times$: Multiplication $\min$: Taking the minimum of two entries $\overset{\cdot}{-}$: A modified version of subtraction, where $a \overset{\cdot}{-} b$ is defined to be $a-b$ if $a\ge b$ and $0$ otherwise. I found it interesting that every complicated tax computing formula can be broken down into steps of this form. In fact, it says something about the tax formulae: they can all be written as a composition of addition, multiplication, scalar multiplication, min, and $\overset{\cdot}{-}$. The most interesting of the operations is $\overset{\cdot}{-}$, which is pronounced “ monus”. According to the Wikipedia article on monus, the monus operation can in fact be defined on any commutative monoid $C$ in which the relation $\le$ (defined by $a\le b$ if and only if there exists $c\in C$ for which $a+c=b$) is a partial order. This is certainly true for the nonnegative real numbers, which seems to be the domain of operation of the IRS. The nicest fact about monus in this context, however, is that we can express $\min$ in terms of it: $$\min(a,b)=a\overset{\cdot}{-} (a\overset{\cdot}{-} b)$$ This means that every tax formula can be written as a polynomial in several variables in which we replace minus by monus! Let’s call such polynomials “Monus polynomials”. So one example of a monus polynomial would be: $$2xy+x^2yz\overset{\cdot}{-} 3z^3$$ This opens up a plethora of interesting questions. What are the properties of monus polynomials? Are there nice analogues of theorems such as the fundamental theorem of algebra from the classical case? Can we define monus-algebraic field extensions? This may indeed be an interesting field of study, because tropical polynomials are a special case of monus polynomials, written using only $+$ and $\min$. Who knew that tropical geometry would arise so naturally in the study of IRS Form 1040?
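Since the whole point is that $\min$ can be recovered from monus, here is a small sketch (my own, not from the post) of the operation and the identity $\min(a,b)=a\overset{\cdot}{-}(a\overset{\cdot}{-}b)$ over the nonnegative reals:

```python
def monus(a, b):
    """Truncated subtraction: a - b if a >= b, else 0."""
    return a - b if a >= b else 0

def min_via_monus(a, b):
    return monus(a, monus(a, b))

for a, b in [(3, 7), (7, 3), (2.5, 2.5), (0, 4)]:
    assert min_via_monus(a, b) == min(a, b)
print("min(a, b) = a monus (a monus b) holds on the sample pairs")
```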
Intersection of Subgroups is Subgroup/General Result Theorem Let $\struct {G, \circ}$ be a group. Let $\mathbb S$ be a non-empty set of subgroups of $\struct {G, \circ}$. Then the intersection $\bigcap \mathbb S$ of the elements of $\mathbb S$ is itself a subgroup of $\struct {G, \circ}$, and is the largest subgroup of $\struct {G, \circ}$ contained in each element of $\mathbb S$. Proof Let $H = \bigcap \mathbb S$. Let $H_k$ be any element of $\mathbb S$. Then: $a, b \in H$ $\leadsto \forall k: a, b \in H_k$ (Definition of Intersection of Set of Sets) $\leadsto \forall k: a \circ b^{-1} \in H_k$ (Group properties) $\leadsto a \circ b^{-1} \in H$ (Definition of Intersection of Set of Sets) $\leadsto H \le G$ (One-Step Subgroup Test) $\Box$ Now to show that $\struct {H, \circ}$ is the largest such subgroup. Let $K$ be a subgroup of $\struct {G, \circ}$ such that: $\forall S \in \mathbb S: K \subseteq S$ Then by definition $K \subseteq H$. Let $x, y \in K$. Then: $x \circ y^{-1} \in K \implies x \circ y^{-1} \in H$ $\blacksquare$
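For a concrete instance of the theorem (my own illustrative check, using the additive group $\mathbb{Z}_{12}$): the intersection of two subgroups passes the one-step subgroup test used in the proof.

```python
# Subgroups of Z_12 under addition mod 12:
H1 = {0, 2, 4, 6, 8, 10}        # the subgroup generated by 2
H2 = {0, 3, 6, 9}               # the subgroup generated by 3
H = H1 & H2                     # their intersection, {0, 6}

# One-step subgroup test: a - b (the additive analogue of a o b^{-1}) stays in H.
closed = all((a - b) % 12 in H for a in H for b in H)
print(sorted(H), closed)        # [0, 6] True
```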
Definition:Antitransitive Relation Definition Let $\mathcal R \subseteq S \times S$ be a relation in $S$. $\mathcal R$ is antitransitive if and only if: $\left({x, y}\right) \in \mathcal R \land \left({y, z}\right) \in \mathcal R \implies \left({x, z}\right) \notin \mathcal R$ that is: $\left\{ {\left({x, y}\right), \left({y, z}\right)}\right\} \subseteq \mathcal R \implies \left({x, z}\right) \notin \mathcal R$ Also known as Some sources use the term intransitive. However, as intransitive is also found in other sources to mean non-transitive, it is better to use the clumsier, but less ambiguous, antitransitive. Also see Results about relation transitivity can be found here.
Suppose $f(x) \in \mathbb{Z}[x]$ is such that $f(0)$ and $f(1)$ are odd. How do I show that $f(x)$ has no integer roots? If $r$ is an integer root and $f(x)=\sum_{k=0}^na_kx^k$, $a_k\in\mathbb Z$, then $\sum_{k=0}^na_kr^k=0$. If $r$ is even, then reducing modulo $2$ we get that $a_0\equiv 0[2]$, hence $f(0)$ is even, which cannot be the case by hypothesis. If $r$ is odd, then $r^k\equiv 1[2]$ for each $k \geqslant 0$, hence $\sum_{k=0}^na_k\equiv 0[2]$. Thus $f(1)=\sum_{k=0}^na_k$ is even, and we get a contradiction. Another proof can be done using the fact that if $f$ has integer coefficients and $a\neq b$ are integers, then $a-b \mid f(a)-f(b)$. Because $f(0)$ and $f(1)$ are odd, it follows that $f(k)$ is odd for every integer $k$: indeed, $k-0$ or $k-1$ is even, so $f(k)$ has the same parity as $f(0)$ or as $f(1)$. Therefore no integer can be a root. Using the basic properties of congruences you can show easily that for any polynomial $f(x)$ with integer coefficients $$x \equiv y \pmod n \qquad \Rightarrow \qquad f(x) \equiv f(y) \pmod n.$$ See the proof at proofwiki. In this case for $n=2$ you get: if $x$ is even then $f(x)\equiv f(0)\pmod 2$, and if $x$ is odd then $f(x)\equiv f(1)\pmod 2$. In both cases, $f(x)$ is an odd integer, hence it is non-zero. Note that this is basically the same answer as given by Beni Bogosel, but I thought that if you are familiar with congruences, this approach might be more clear for you. I'm not being very original here, but reducing $x$ modulo 2 in the expression $f(x)$ gives $f(x)\equiv f(x\bmod 2)\pmod 2$. It may be interesting to note that the result is false for polynomials with rational coefficients that take integer values on $\mathbf Z$, for instance $\frac{x^2-x-2}2$. A slight (but cute, in my opinion) variation of the other methods is this one: If $f\in \Bbb Z[x]$ has an integer root $m$, then by Ruffini's rule $f(x)=(x-m)(b_n x^n+\cdots + b_0)=(x-m)g(x)$ for some $b_0,\cdots, b_n\in \Bbb Z$. Then, $$\begin{cases} f(0)=-m\cdot g(0)\\ f(1)=(1-m)\cdot g(1)\end{cases}$$ But at least one of $-m$ and $1-m$ must be even, so at least one of $f(0)$ and $f(1)$ would be even, a contradiction.
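The congruence argument is easy to illustrate numerically; the following sketch (mine, with an arbitrarily chosen polynomial) checks that when $f(0)$ and $f(1)$ are odd, $f(k)$ is odd, and hence nonzero, for a range of integers $k$:

```python
def f(x):
    # An arbitrary example with f(0) = 1 and f(1) = 3, both odd.
    return 3 * x**3 - 2 * x**2 + x + 1

# f(k) mod 2 only depends on k mod 2, so every value should be odd.
print(all(f(k) % 2 == 1 for k in range(-50, 51)))   # True
```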
Blow-up for the 3-dimensional axially symmetric harmonic map flow into $ S^2 $ 1. Instituto de Matemáticas, Universidad de Antioquia, Calle 67, No. 53–108, Medellín, Colombia 2. Departamento de Ingeniería Matemática-CMM, Universidad de Chile, Santiago 837-0456, Chile 3. Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, United Kingdom 4. Department of Mathematics, University of British Columbia, Vancouver, B.C., Canada, V6T 1Z2 The paper concerns the harmonic map flow into $ S^2 $, $ \begin{align*} u_t & = \Delta u + |\nabla u|^2 u \quad \text{in } \Omega\times(0,T) \\ u & = u_b \quad \text{on } \partial \Omega\times(0,T) \\ u(\cdot,0) & = u_0 \quad \text{in } \Omega , \end{align*} $ for maps $ u(x,t): \bar \Omega\times [0,T) \to S^2 $, where $ \Omega $ is a bounded domain in $ \mathbb{R}^3 $. Given a curve $ \Gamma \subset \Omega $ and a time $ T>0 $, a solution $ u(x,t) $ is constructed that blows up at time $ T $ along $ \Gamma $, in the sense that $ | {\nabla} u(\cdot ,t)|^2 \rightharpoonup | {\nabla} u_*|^2 + 8\pi \delta_\Gamma \quad\mbox{as}\quad t\to T , $ where $ u_*(x) $ is a limiting map and $ \delta_\Gamma $ denotes the Dirac measure supported on $ \Gamma $. Keywords: Blow-up, semilinear parabolic equation, harmonic map flow, codimension 2 singular set, finite time blow-up. Mathematics Subject Classification: Primary: 35K58, 35B44; Secondary: 35R01. Citation: Juan Dávila, Manuel Del Pino, Catalina Pesce, Juncheng Wei. Blow-up for the 3-dimensional axially symmetric harmonic map flow into $ S^2 $. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12) : 6913-6943. doi: 10.3934/dcds.2019237
The points on an elliptic curve $E$ over a field $K$ form a commutative additive group with identity $\mathcal{O}$, the point at infinity, also notated as $P_\infty$. The scalar multiplication $[k]P$ means adding $P$ to itself $k$ times. More formally, let $k \in \mathbb{N}\backslash\{ 0\}$; then \begin{align}[k]:& E \to E\\&P\mapsto [k]P=\underbrace{P+P+\cdots+P}_{\text{$k$ times}},\end{align} with $[0]P = \mathcal{O}$ and $[k]P=[-k](-P)$ for $k<0$. Bitcoin uses Secp256k1, which has characteristic $p$ and is defined over the prime field $\mathbb{Z}_p$ with the curve equation $y^2=x^3+7$. Point addition over $\mathbb{Z}_p$ has an interesting property: since the number of elements is finite, if you add a point $P$ to itself many times you will eventually get the identity $\mathcal{O}$. $$\underbrace{P+P+\cdots+P}_{\text{$t$ times}} = [t]P= \mathcal{O}$$ The smallest such $t$ is the order of the subgroup generated by $P$. For security, we want this order to be huge. Note 1: a point $P$ may not generate the whole group, but it generates a cyclic subgroup. Note 2: as pointed out by Squeamish Ossifrage, Smart showed that if the order of the curve and the order of the base field $K$ are the same (anomalous curves) then the discrete logarithm on such curves can be computed in linear time.
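To make the finite-order phenomenon concrete, here is a toy sketch (my own; this is a tiny curve over $\mathbb{Z}_{11}$ with the same equation $y^2 = x^3 + 7$, emphatically not Secp256k1 itself) that adds a point to itself until it reaches $\mathcal{O}$:

```python
p = 11                                     # a toy prime field, for illustration only
O = None                                   # the point at infinity

def add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over Z_p."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:    # P + (-P) = O
        return O
    if P == Q:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

P = (2, 2)                                 # a point on the curve, since 2^2 = 2^3 + 7 (mod 11)
Q, t = P, 1
while Q is not O:
    Q, t = add(Q, P), t + 1
print(t)                                   # 4: the order of the subgroup generated by P
```

On this toy curve the order of the subgroup is tiny, which is exactly why a real curve such as Secp256k1 is chosen so that the base point generates a subgroup of prime order on the order of $2^{256}$.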
This is a countable family of first-order statements, so it holds for every real-closed field, since it holds over $\mathbb R$. From $2 \times 2$ matrices, we immediately derive that such a field must satisfy the property that the sum of two perfect squares is a perfect square. Indeed, the matrix: $ \left(\begin{array}{cc} a & b \\ b & -a \end{array}\right)$ has characteristic polynomial $x^2-a^2-b^2$, so for it to be diagonalizable $a^2+b^2$ must be a perfect square. Moreover, $-1$ is not a perfect square, or else the matrix: $ \left(\begin{array}{cc} i & 1 \\ 1 & -i \end{array}\right)$ would be diagonalizable, thus zero, an obvious contradiction. So the semigroup generated by the perfect squares consists of just the perfect squares, which are not all the elements of the field, so the field can be ordered. However, the field need not be real-closed. Consider the field $\mathbb R((x))$. Take a symmetric matrix over that field. Without loss of generality, we can take it to be a matrix over $\mathbb R[[x]]$. Looking at it mod $x$, it is a symmetric matrix over $\mathbb R$, so we can diagonalize it using an orthogonal matrix. If its eigenvalues mod $x$ are all distinct, we are done, because we can find roots of its characteristic polynomial in $\mathbb R[[x]]$ by Hensel's lemma. If they are all the same, say $\lambda$, we can reduce: subtract $\lambda I$, divide by $x$ and diagonalize again. The only remaining case is if some are the same and some are distinct. If we can handle that case, then we can diagonalize any matrix. Lemma: Let $M$ be a symmetric matrix over $\mathbb R[[x]]$ such that some eigenvalues are distinct mod $x$. There exists an orthogonal matrix $A$ such that $AMA^{-1}$ is block diagonal, with the blocks symmetric. Proof: Consider the scheme of such orthogonal matrices. Each connected component of this scheme corresponds to a partition of the eigenvalues into blocks. Choose one. Since we can diagonalize the matrix with an orthogonal matrix mod $x$, there is certainly a mod $x$ point on this component. We want to lift this to a point over the whole ring. We can do this if the scheme is smooth over $\mathbb R[[x]]$. Assuming the blocks have distinct eigenvalues, the variety of ways to do this looks, over an algebraically closed field, like $O(n_1) \times O(n_2) \times \cdots \times O(n_k)$ where $n_1,\dots,n_k$ are the sizes of the blocks. This is because the only way to keep a diagonal matrix block diagonal is to hit it with one of those. So as long as the blocks are chosen such that the eigenvalues in different blocks are distinct and remain so on reduction mod $x$, the variety is smooth over $\mathbb R((x))$ and smooth over $\mathbb R$, and has the same dimension over both, so is smooth over $\mathbb R[[x]]$. (This bit might not be entirely correct.) Thus there is a lift and the matrix can be put in this form. Then we do an induction on dimension. The only way we would be unable to put a matrix in a form where two of its eigenvalues are distinct mod $x$ is if its eigenvalues are all the same, in which case, since $\mathbb R((x))$ is contained in a real closed field, it's a scalar matrix and we're done.
Definition:F-Homomorphism Definition Let $R, S$ be rings with unity. Let $F$ be a subfield of both $R$ and $S$. Then a ring homomorphism $\varphi: R \to S$ is called an $F$-homomorphism if: $\forall a \in F: \map \varphi a = a$ That is, $\varphi \restriction_F = I_F$, where $\varphi \restriction_F$ denotes the restriction of $\varphi$ to $F$ and $I_F$ denotes the identity mapping on $F$. Linguistic Note The word homomorphism comes from the Greek morphe (μορφή) meaning form or structure, with the prefix homo- meaning similar. Thus homomorphism means similar structure.
In this tutorial section it is assumed that the parts Connection techniques I – naming policy ‘integrate’ and Connection techniques II – naming policy ‘encapsulate’ have been worked through. In this section the use of ports in MOSAICmodeling is introduced. Ports are useful to define standardized input and output variables and to make equation systems usable like a unit in the process systems engineering sense. In MOSAICmodeling there is no fixed specification of the variables in a port or stream. In fact ports and streams can be defined by the modeler and the variable definitions used therein are reusable. The small example given here is not related to process systems engineering and consequently it does not use any physical values. To make the concepts more transparent, however, some variables are related to colors. Preparations Example equation system (colors) The following equation system and notation are used as a starting point for this tutorial. As you will see, colors are used in the description of the variables. Equation system ‘eqsys small latin’ Notation: ‘nota small latin’ Equations: Equation ‘eq small latin one’: Equation ‘eq small latin two’: Notation ‘nota small latin’ Base Names Name Description a light blue value b dark blue value c light green value d dark green value e yellow value f pink value g light gray value h dark gray value m black value Task Create notation ‘nota small latin’ and ‘eqsys small latin’. When creating the notation, please use the colors given in the description as they will help to intuitively distinguish the variables later. Standardization via modular Interfaces As mentioned above, the specification of the variables contained in a port or stream must be defined by the user. This is done with the help of a model element called Interface. An Interface contains a list of variables and a Notation defining their meaning. Such an interface can then be used in ports as a modular means of standardization. Motivating example An Interface for a chemical engineering unit could contain the variables F, z_{i}, P, T and w. The notation for such an interface would explain F as molar feed stream, z_{i} as molar fraction of the composition (phase independent), P as pressure, T as temperature, and w as vapor fraction. This interface can be used in ports for units that define material streams with the above mentioned contents. Continuing the tutorial example (colors) In the tutorial example we want to define an interface for light and dark color values. We will use the interface for the equation system ‘eqsys small latin’ created above; to demonstrate the generality of the concept, we use a different notation in the interface. Interface ‘itfc small greek’ Notation: ‘nota small greek’ Variables \lambda, \tau Notation ‘nota small greek’ Base Names Name Description \lambda light color value \tau dark color value Work flow to create the interface Create notation ‘nota small greek’. Select the Interface Editor tab from the Editor bar. Load notation ‘nota small greek’ in the Notation file panel. Below the Variable Naming table press [Add] to open the ‘Create Variable Naming’ dialog. Create the Variable Naming ‘lambda’ (latex: \lambda) by typing ‘\lambda’, followed by clicks on ‘Render’ [Shift+Enter] and ‘OK’ [Ctrl+Enter]. Create ‘tau’ (latex: \tau) in the same way ([Alt+a], ‘\tau’, [Shift+Enter], [Ctrl+Enter]). Save the Interface.
Creating units from equation systems and interfaces
Now we will use the Interface to add ports to the equation system ‘eqsys small latin’ created above. A Port contains an Interface and a Connector that defines the mapping of the variables of the equation system to the Interface variables.
Unit ‘unit small latin’
Equation system: ‘eqsys small latin’
Ports:
- ‘port blue’: Direction: ‘out’, Interface: ‘itfc small greek’, Connector: ‘conn small latin port blue’
- ‘port green’: Direction: ‘in’, Interface: ‘itfc small greek’, Connector: ‘conn small latin port green’
Connector ‘conn small latin port blue’
Subnotation: ‘nota small latin’
Supernotation: ‘nota small greek’
Value list: a -> \lambda, b -> \tau
Connector ‘conn small latin port green’
Subnotation: ‘nota small latin’
Supernotation: ‘nota small greek’
Value list: c -> \lambda, d -> \tau
Work flow for creating the unit
Create the connectors ‘conn small latin port blue’ and ‘conn small latin port green’ as specified above.
Select the EQSystemEditor and create a new equation system that uses the notation ‘nota small latin’.
Activate the Connected Elements tab and add the equation system ‘eqsys small latin’ using the naming policy ‘integrate’ (and without any connector).
Activate the External Ports tab. Here the ports of the unit can be specified.
To define port ‘blue’ press [Add] in the lower right corner. This will display the Edit External Port dialog, which allows you to enter all data listed in the port sections above.
In the Port Name text field enter ‘blue’.
Select the Direction ‘out’.
Press [Load] in the Interface file panel and load ‘itfc small greek’.
Press [Load] in the Connector file panel and load ‘conn small latin port blue’.
Press [OK] to close the dialog.
Repeat the above steps to define port ‘green’ as given in the unit description.
Save the equation system as ‘unit small latin’.
Recapitulation
We created a unit with ports based on a simple equation system. The concurrence of the model elements is visualized in figure 1.
Evaluating the unit
The unit can be evaluated like other equation systems. However, the variable namings are added for the port variables.
Load the equation system ‘unit small latin’ in the Simulation Editor.
Activate the Instantiated System tab.
Push down the [Ports Level] button. If Ports Level mode is activated the port variables are displayed by the variable namings defined in the interface.
As you use two ports with the same interface, this display is ambiguous. Push [Namespaces], which displays the namespace information for each variable. To know which ports p0 and p1 are referring to, you may look at the tab Specifications->Namespaces.
Activate the tab Specifications->Variables and select variable ‘a’. Look at the different variable namings of this variable with the help of the [T], [\uparrow], and [\downarrow] buttons.
A second unit as preparation for the next steps
In the next tutorial section two units will be connected by streams. As a preparation and for practice we will create this second unit.
Tasks
Create ‘nota cap latin’ and ‘eqsys cap latin’ as they are stated below. Please stick to the descriptions given in the notation.
Equation system ‘eqsys cap latin’
Notation: ‘nota cap latin’
Equations:
Equation ‘eq cap latin one’: (A)^{2} + E/B + F\cdot C = D
Equation ‘eq cap latin two’: B + G\cdot C = (D)^{2}
Notation ‘nota cap latin’
Base Names
Name  Description
A     the Pale Purple Value
B     the Deep Purple Value
C     the Pale Yellow Value
D     the Deep Yellow Value
E     the Red Value
F     the Pink Value
G     the Orange Value
Create the notation ‘nota cap greek’ and the interface ‘itfc cap greek’ specified below.
Interface ‘itfc cap greek’
Notation: ‘nota cap greek’
Variables: \Lambda, \Omega
Notation ‘nota cap greek’
Base Names
Name     Description
\Lambda  Pale Color Value
\Omega   Deep Color Value
Create the connectors ‘conn cap latin port purple’ and ‘conn cap latin port yellow’. Finally create the equation system ‘unit cap latin’.
Unit ‘unit cap latin’
Equation system: ‘eqsys cap latin’
Ports:
- ‘port purple’: Direction: ‘in’, Interface: ‘itfc cap greek’, Connector: ‘conn cap latin port purple’
- ‘port yellow’: Direction: ‘out’, Interface: ‘itfc cap greek’, Connector: ‘conn cap latin port yellow’
Connector ‘conn cap latin port purple’
Subnotation: ‘nota cap latin’
Supernotation: ‘nota cap greek’
Value list: A -> \Lambda, B -> \Omega
Connector ‘conn cap latin port yellow’
Subnotation: ‘nota cap latin’
Supernotation: ‘nota cap greek’
Value list: C -> \Lambda, D -> \Omega
Recapitulation
The concurrence of the model elements is visualized in figure 2.
Figure 2: Equation system ‘unit cap latin’ and the related model elements
Given a sequence of cadlag (right-continuous with left limits) martingales $X^n=(X^n_t)_{0\le t\le 1}$, we may use well-known criteria to determine whether it is weakly convergent, i.e. to extract a subsequence $(X^{n_k})_{k\ge 1}$ such that for all Skorokhod-continuous and bounded functions $f$ one has $$\lim_{k\to\infty}E[f(X^{n_k})]~~=~~E[f(X)],$$ where the process $X$ is called the weak limit, and is again a martingale. Now, let us consider a different convergence. The sequence $(X^{n})_{n\ge 1}$ is said to be point-wise weakly convergent iff for any subsequence $(X^{n_k})_{k\ge 1}$ there exists a cadlag process $X$ (which is also a martingale) such that $$(X_{t_1}^{n_k},\ldots, X_{t_m}^{n_k})~~\stackrel{Law}{\longrightarrow}~~(X_{t_1},\ldots, X_{t_m}) \mbox{ for all } 0\le t_1\le \cdots \le t_m\le 1.$$ My question is the following: assume that the martingales have the same marginal distributions, i.e. $Law(X_t^n)=\mu_t$ for all $n\ge 1$, where $(\mu_t)_{0\le t\le 1}$ is a family of distributions on $\mathbb R$. Could we then show that the sequence $(X^{n})_{n\ge 1}$ is point-wise weakly convergent? I strongly believe that it is not true, but cannot find a counterexample. Could someone give an example or prove this claim? Thanks a lot for the reply!
On a (simply connected) domain $\Omega$ for a smooth vector field $F\colon \Omega \to \mathbb{R}^3$, when does $\nabla\times(\nabla\times F)=0$ imply $\nabla \times F=0$. I know that $n\cdot(\nabla\times F)=0$ on $\partial\Omega$ is sufficient, and also $t\cdot(\nabla\times F)=0$ is sufficient ($t$ the tangential). Is there a weaker condition? Here are some basic thoughts. Let $G$ be a vector field which is of the form $\nabla \times F$, and also obeys $\nabla \times G = 0$. Since $\nabla \times G = 0$, the vector field $G$ is locally of the form $\nabla h$ for some scalar valued function $h$. The condition that $G = \nabla \times F$ imples that $\nabla \cdot G=0$ or, in other words, $\nabla^2(h)=0$. This says that $h$ is harmonic. So, locally, the condition is equivalent to $G$ being the gradient of a harmonic function. Globally, if $\Omega$ is not simply connected, then traveling around a loop in $\Omega$ may change $h$ by a constant. From this, we can see that, if $G$ is compactly supported, it is zero: If $G$ is $0$ outside a ball of radius $R$, then $h$ is constant outside that ball, and thus $h$ is constant everywhere. We can also trot out our favorite conditions to ensure that a harmonic function is constant. For example, if $\Omega = \mathbb{R}^3$, and $G(x) \to 0$ as $|x| \to \infty$, then $|h(x)| = o(x)$ and a variant of Liouville's theorem tells us that $h$ is constant and $G=0$. The OP seems to be interested in conditions on the flux of $G$ across $\partial \Omega$. In order to make this make sense, I am going to assume that the set up is that $\overline{\Omega}$ is a compact manifold with boundary. Let $R$ be the flux of $G$ across $\partial \Omega$. As the OP already knows, if $R=0$ then $G=0$. And the assignment of $G \mapsto R$ is linear, so this shows that $R$ determines $G$. Let $\mathcal{V}$ be the vector space of functions on $\partial \Omega$ which can occur as such an $R$. The OP asks for conditions which force $G$ to be zero. In other words, he wants a vector space $\mathcal{W}$ of functions on $\partial \Omega$ which is transverse to $\mathcal{V}$. I find that an odd way to think about the question -- better to just characterize $\mathcal{V}$! Now, the obvious observation is that $\int_{\partial \Omega} R=0$ for any $R$ in $\mathcal{V}$, since $\nabla \cdot G=0$. It would be really cool if that were a precise characterization of $\mathcal{V}$. But I don't think it is. Let $\Omega$ be a solid cylinder, and let $R$ be zero on the sides of $\Omega$ and a hat function on top and bottom, dying out smoothly as it approaches the sides. Is $R$ the flux of some $G$? I couldn't figure out how to make it be. For some scalar function g: $ rot\ F=grad\ g$. Necessary and sufficient!
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. 
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
I prepared a $4\ \mathrm{M}$ solution of $\ce{NaOH}$, and the glass electrode I use cannot measure its $\mathrm{pH}$ correctly. What other options do I have to know the solution's $\mathrm{pH}$ exactly?
For $\mathrm{pH}>12$ and $\ce{Li+}$ or $\ce{Na+}$ concentrations greater than $0.1\ \mathrm{M}$, glass electrodes experience alkali error. Because the concentration of $\ce{H3O+}$ ions in solution is so low, interference from alkali metal ions (like $\ce{Na+}$) becomes noticeable and causes the electrode to read lower than the true $\mathrm{pH}$ of the solution. $\ce{K+}$ typically causes less error than $\ce{Li+}$ or $\ce{Na+}$ due to its larger size. Since sodium hydroxide is a strong base and dissociates completely into $\ce{Na+}$ and $\ce{OH-}$ ions, though, it's actually quite simple to just calculate the $\mathrm{pH}$. $$K_\mathrm{w}=\ce{[H3O+][OH-]} \Rightarrow$$ $$ 1 \times 10^{-14} \ \mathrm{M^2}=\ce{[H3O+]} \cdot 4\ \mathrm{M} \Rightarrow $$ $$\ce{[H3O+]}=2.5 \times 10^{-15}\ \mathrm{M}$$ $$\mathrm{pH}=-\log(2.5 \times 10^{-15}\ \mathrm{M})=14.60$$ If you are worried about whether or not you truly have a $4\ \mathrm{M}$ solution of $\ce{NaOH}$, you should try titrating it with a known concentration of acid (in the past I always used potassium hydrogen phthalate). This process is called standardization in acid-base parlance.
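As a quick sanity check on the arithmetic above, here is a minimal Python sketch (my addition, not part of the original answer) that reproduces the nominal pH from $K_\mathrm{w}$ and the stated hydroxide concentration. It assumes ideal behaviour (activity coefficients of 1), which is only a rough approximation at such a high ionic strength:

# Nominal pH of a strong-base solution from Kw, assuming complete dissociation
# and ideal behaviour (a crude assumption at 4 M).
import math

Kw = 1e-14          # ion product of water at 25 deg C, M^2
c_OH = 4.0          # [OH-] from complete dissociation of 4 M NaOH
c_H3O = Kw / c_OH   # [H3O+] in M
pH = -math.log10(c_H3O)
print(c_H3O, pH)    # 2.5e-15 M, 14.60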
Electronic Communications in Probability Electron. Commun. Probab. Volume 23 (2018), paper no. 87, 13 pp. A functional limit theorem for the profile of random recursive trees Abstract Let $X_n(k)$ be the number of vertices at level $k$ in a random recursive tree with $n+1$ vertices. We prove a functional limit theorem for the vector-valued process $(X_{[n^t]}(1),\ldots , X_{[n^t]}(k))_{t\geq 0}$, for each $k\in \mathbb N$. We show that after proper centering and normalization, this process converges weakly to a vector-valued Gaussian process whose components are integrated Brownian motions. This result is deduced from a functional limit theorem for Crump-Mode-Jagers branching processes generated by increasing random walks with increments that have finite second moment. Let $Y_k(t)$ be the number of the $k$th generation individuals born at times $\leq t$ in this process. Then, it is shown that the appropriately centered and normalized vector-valued process $(Y_{1}(st),\ldots , Y_k(st))_{t\geq 0}$ converges weakly, as $s\to \infty $, to the same limiting Gaussian process as above. Article information Source Electron. Commun. Probab., Volume 23 (2018), paper no. 87, 13 pp. Dates Received: 14 January 2018 Accepted: 4 November 2018 First available in Project Euclid: 23 November 2018 Permanent link to this document https://projecteuclid.org/euclid.ecp/1542942176 Digital Object Identifier doi:10.1214/18-ECP188 Mathematical Reviews number (MathSciNet) MR3882228 Zentralblatt MATH identifier 07023473 Subjects Primary: 60F17: Functional limit theorems; invariance principles 60J80: Branching processes (Galton-Watson, birth-and-death, etc.) Secondary: 60G50: Sums of independent random variables; random walks 60C05: Combinatorial probability 60F05: Central limit and other weak theorems Citation Iksanov, Alexander; Kabluchko, Zakhar. A functional limit theorem for the profile of random recursive trees. Electron. Commun. Probab. 23 (2018), paper no. 87, 13 pp. doi:10.1214/18-ECP188. https://projecteuclid.org/euclid.ecp/1542942176
I am using Analysis on Manifolds by Munkres to study for a course and the following question comes from an early section in topology. Definition of limit point: Let $A \subset R^n$ and let $x_o \in R^n$. $x_o$ is a limit point of $A$ if, for every $r>0$, $B(x_o,r)$ contains a point of $A \setminus \{x_o\}$. Exercise: Let $A$ be a subset of $X$ where $X$ is a metric space. Show that if $C$ is a closed subset of $X$ and $C$ contains $A$, then $C$ contains the closure of $A$. This is what I have so far: $C$ is closed implies that $C$ contains its limit points. $A \subset C$, so $C$ contains all points of $A$ (by a theorem from the book). Since $\bar{A} = A \cup \{\text{limit points of } A\}$, it only remains to show that the set of limit points of $A$ is contained in $C$. Let $p$ be a limit point of $A$. Suppose $p \notin C$. Then $p \in X \setminus C$. We know that $X \setminus C$ is open... EDITED (changed proof strategy after doing more research): Let $p$ be a limit point of $A$ and suppose $p$ is not in $C$. Then $X \setminus C$ is an open set containing $p$ that does not intersect $C$, and hence does not intersect $A$ (since $A \subset C$). This contradicts the fact that $p$ is a limit point of $A$: because $X \setminus C$ is open, it contains some ball $B(p,r)$ with $r>0$ (by the definition of open), and by the definition of limit point this ball must contain a point of $A$ other than $p$.
I would like to numerically solve the following system of coupled nonlinear differential equations: $$ -\frac{\hbar^2}{2m_a} \frac{\partial^2}{\partial x^2}\psi_a + V_{ext}\psi_a + \left( g_a |\psi_a|^2 + g_{ab} |\psi_b|^2 \right)\psi_a = \mu_a \psi_a $$ $$ -\frac{\hbar^2}{2m_b} \frac{\partial^2}{\partial x^2}\psi_b + V_{ext}\psi_b + \left( g_b |\psi_b|^2 + g_{ab} |\psi_a|^2 \right)\psi_b=\mu_b\psi_b $$ where $\hbar$, $m_a$, $m_b$, $g_a$, $g_b$, $g_{ab}$ are known coefficients and $V_{ext}$ is a known function of $x$, i.e.: $$ V_{ext}= -P \left[\cos\left(\frac{3}{2}\, \frac{x}{L}\, 2\pi \right)\right]^2 $$ The unknowns are the eigenfunctions $\psi_a(x)$, $\psi_b(x)$ and the eigenvalues $\mu_a$ and $\mu_b$. $V_{ext}$, $\psi_a$ and $\psi_b$ are all defined on the domain $x\in[0,L]$, and the functions $\psi_a(x)$ and $\psi_b(x)$ are complex. The boundary conditions are periodic, i.e.: $$ \psi_a(x+L)=\psi_a(x) \qquad \psi_b(x+L)=\psi_b(x) $$ Notice that the period of the eigenfunctions should be $L$ while the period of the external potential is $L/3$. Finally, there is a constraint on the norms of $\psi_a$ and $\psi_b$, namely: $$ \int_0^L |\psi_a|^2 \, \mathrm{d}x= N, \qquad \int_0^L |\psi_b|^2 \, \mathrm{d}x= M $$ Can you please give me a good strategy to handle this problem and, possibly, a MATLAB code?
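No answer is reproduced here, but one common strategy for such coupled Gross-Pitaevskii-type eigenproblems is imaginary-time propagation (normalized gradient flow): discretize the Laplacian with periodic boundary conditions, step both wavefunctions down the energy gradient, renormalize them to $N$ and $M$ after every step, and read off $\mu_a$, $\mu_b$ as Rayleigh quotients at the end. The question asks for MATLAB; the sketch below is only a rough illustration of the idea in Python/NumPy, with placeholder parameter values that are not taken from the question:

# Imaginary-time propagation sketch for the coupled system above.
import numpy as np

# Placeholder parameters (hypothetical, for illustration only)
hbar, ma, mb = 1.0, 1.0, 1.0
ga, gb, gab = 1.0, 1.0, 0.5
P, L, N, M = 1.0, 10.0, 1.0, 1.0
nx = 256
x = np.linspace(0.0, L, nx, endpoint=False)
dx = x[1] - x[0]
Vext = -P * np.cos(1.5 * (x / L) * 2 * np.pi) ** 2

def lap(psi):
    # second derivative with periodic boundary conditions
    return (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx**2

def normalize(psi, norm):
    return psi * np.sqrt(norm / (np.sum(np.abs(psi)**2) * dx))

def apply_H(psi, other, m, g, g12):
    return -hbar**2 / (2*m) * lap(psi) + (Vext + g*np.abs(psi)**2 + g12*np.abs(other)**2) * psi

psi_a = normalize(np.ones(nx, dtype=complex), N)
psi_b = normalize(np.ones(nx, dtype=complex), M)

dtau = 1e-4                      # imaginary-time step (placeholder)
for _ in range(20000):
    Ha = apply_H(psi_a, psi_b, ma, ga, gab)
    Hb = apply_H(psi_b, psi_a, mb, gb, gab)
    psi_a = normalize(psi_a - dtau * Ha, N)
    psi_b = normalize(psi_b - dtau * Hb, M)

# chemical potentials as Rayleigh quotients of the converged states
mu_a = np.real(np.sum(np.conj(psi_a) * apply_H(psi_a, psi_b, ma, ga, gab)) / np.sum(np.abs(psi_a)**2))
mu_b = np.real(np.sum(np.conj(psi_b) * apply_H(psi_b, psi_a, mb, gb, gab)) / np.sum(np.abs(psi_b)**2))
print(mu_a, mu_b)

This produces the ground-state pair; the starting guess, step size and iteration count would need tuning for a real calculation, and the same scheme translates essentially line by line to MATLAB.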
Fix an integer $h$ and let$$ S_h(x) = \sum_{t=0}^{h-1}\;\left(\frac{x+t}{p}\right) $$where $(\cdot/p)$ represents the Legendre symbol. We want to prove that$$ \sum_{x=0}^{p-1}S_h(x)^{2r} < (2r)^r p h^r + 4rh^{2r}\sqrt{p} $$Developing the powers on the left and interchanging summations we obtain the sum $$\sum_{m_1,\dots, m_{2r} = 1}^h \sum_{x=0}^{p-1} \left( \frac{(x+m_1)(x+m_2)\dots(x+m_{2r})}{p} \right ) $$ The outer sum is over the $h^{2r}$ tuples $(m_1,\dots,m_{2r})$ where each $m_i$ varies independently from $1$ to $h$. In order to bound this sum above we divide the outer sum into two sets of tuples. In the first set we pick the tuples $(m_1,\dots,m_{2r})$ for which the polynomial $$(x+m_1)(x+m_2) \dots (x+m_{2r})$$ is a square; in this case we have $$ \sum_{x=0}^{p-1} \left( \frac{(x+m_1)(x+m_2)\dots(x+m_{2r})\,}{p} \right ) \le p $$ because the value of the polynomial is a square for every $x$ and in consequence the value of the Legendre symbol inside the sum is always 0 or 1. Now if the polynomial is a square, that means that we can group the $m_i$'s of a tuple into $r$ pairs with the same value in each pair. So the number of tuples which lead to a square polynomial can be bounded above by the number of partitions of the $2r$ positions into $r$ pairs, times the number of independent $r$-tuples of values from 1 to $h$. The number of partitions of $1,2,\dots, 2r$ into $r$ pairs is simply $$ (2r-1)(2r-3)\dots 5\cdot 3\cdot 1$$Just observe that the first index can be paired with any of the other $2r-1$ positions, then the first free index can be paired with any of the $2r-3$ free positions, and so on. But $$ (2r-1)(2r-3)\dots 5\cdot 3\cdot 1 < (2r)^r$$as there are $r$ factors, all smaller than $2r$. Now to each pair we can assign any integer from 1 to $h$; as there are $r$ pairs we have $h^r$ possible assignments. So finally the number of tuples $m_1,\dots,m_{2r}$ which lead to a square polynomial $(x+m_1)(x+m_2)\dots(x+m_{2r})$ is bounded by $(2r)^rh^r$. Note that this estimation of the number of tuples leading to a square polynomial is a little rough; for example with $r=2$ and $h=3$ there are only 21 tuples in this group (starting with (1,1,1,1), (1,1,2,2), (1,1,3,3), (1,2,1,2), (1,2,2,1), (1,3,1,3), (1,3,3,1), $\dots$), but $(2r)^rh^r= 144$. In the second group we pick the tuples $(m_1,\dots, m_{2r})$ for which $(x+m_1)\dots(x+m_{2r})$ is not a square. This means that we can write the polynomial as a product $g(x)^2F(x)$ where $F(x)$ is squarefree. For the Legendre symbol we then have$$ \left(g(x)^2F(x) \over p \right ) = \left( F(x) \over p \right) $$except possibly when $g(x) = 0$; as $g(x)$ has degree at most $r$ this means that the sums $$ \sum_{x=1}^p \left(g(x)^2F(x)\over p\right) \quad\text{and}\quad \sum_{x=1}^p \left(F(x)\over p\right) $$differ by at most $r$. For the second sum we can now use the bound (André Weil)$$\left \lvert \sum_{x=0}^{p-1} \left( F(x) \over p \right ) \right \rvert \leq (\deg F -1) \sqrt{p} $$and so $$\left \lvert \sum_{x=0}^{p-1} \left( g(x)^2F(x) \over p \right ) \right \rvert \leq 2r + 2r\sqrt{p}< 4r\sqrt{p} $$ As the number of tuples in this case is at most $h^{2r}$ (the total number of tuples), we finally have the total upper bound of $4rh^{2r}\sqrt{p}$ for the sum restricted to this second set of tuples, giving the lemma. EDIT 2: I have rewritten the proof to make it more clear.
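The inequality is easy to check numerically for small parameters. Here is a brute-force Python sketch (my addition, not part of the proof) that evaluates both sides for a few arbitrarily chosen odd primes:

# Check sum_x S_h(x)^(2r) < (2r)^r * p * h^r + 4*r * h^(2r) * sqrt(p)
# by brute force for small parameters (p an odd prime).
from math import sqrt

def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def both_sides(p, h, r):
    lhs = sum(sum(legendre(x + t, p) for t in range(h)) ** (2 * r) for x in range(p))
    rhs = (2 * r) ** r * p * h ** r + 4 * r * h ** (2 * r) * sqrt(p)
    return lhs, rhs

for p in (31, 101, 499):      # small odd primes, chosen arbitrarily
    print(p, both_sides(p, h=3, r=2))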
P(n): For all $n$, the number of straight line segments determined by $n$ points in the plane, no three of which lie on the same straight line,is: $\large \frac{n^2 - n}{2}$. Inductive hypotheses: given $n = k$ points, assume $P(k)$ is true: $P(k) = \dfrac{k^2 - k}{2}$. Proving $P(k+1)$ would require proving that for $n = k+1$ points, using your inductive hypothesis, the number of lines passing through $k + 1$ points is equal to $$P(k+1) = \dfrac{(k+1)^2 - (k+1)}{2}$$That is, $P(k+1)$ is the sum of $P(k)$, the number of lines determined by $k$ points, plus the number of additional line segments resulting from the additional point: the $(k+1)$th point. Since there are $k$ original points, the number of line segments that can connect with the $(k+1)$st point is precisely $k$, one line segment connecting each of the $k$ original points with $k+1$th point. That is, our sum is: $$\begin{align}P(k) + k &= \dfrac{(k^2 - k)}{2} + k = \dfrac{(k^2 - k)}{2} + \dfrac {2k}{2} \\ \\& = \dfrac{ k^2 + 2k - k}{2} \\ \\ & = \frac{k^2 + 2k +1 - k - 1}{2} \\ \\& = \frac{(k+1)^2 - (k + 1)}{2} \\\end{align}$$ Hence, from the truth of the base case, and the fact that $P(k+1)$ follows from assuming $P(k)$, we have thus proved by induction on $n$ that $P(n) = \dfrac{n^2 - n}{2}$
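As a trivial cross-check of the closed form (my addition, not part of the proof), the count of segments determined by $n$ points with no three collinear is just the number of unordered pairs of points, which a few lines of Python confirm equals $(n^2-n)/2$:

# Each unordered pair of points determines exactly one segment when no three
# points are collinear, so the count is C(n, 2) = (n^2 - n) / 2.
from itertools import combinations

for n in range(2, 10):
    segments = sum(1 for _ in combinations(range(n), 2))
    assert segments == (n * n - n) // 2
    print(n, segments)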
I am going to show in detail one unsupervised learning. The major use case is behavioral-based anomaly detection, so let’s start with that. Imagine you are collecting daily activity from people. In this example there are 6 people \(S_1 – S_6\) When all the data are sorted and pre-processed, then result might look like this list. \(S_1 =\) eat, read book, ride bicycle, eat, play computer games, write homework, read book, eat, brush teeth, sleep \(S_2 =\) read book, eat, walk, eat, play tennis, go shopping, eat snack, write homework, eat, brush teeth, sleep \(S_3 =\) wake up, walk, eat, sleep, read book, eat, write homework, wash bicycle, eat, listen music, brush teeth, sleep \(S_4 =\) eat, ride bicycle, read book, eat, play piano, write homework, eat, exercise, sleep \(S_5 =\) wake up, eat, walk, read book, eat, write homework, watch television, eat, dance, brush teeth, sleep \(S_6 =\) eat, hang out, date girl, skating, use mother’s CC, steal clothes, talk, cheating on taxes, fighting, sleep \(S_1\) is set of the daily activity of the first person, \(S_2\) of the second one and so on. If you look at this list, then you can pretty easily recognize that activity of \(S_6\) is somehow different from the others. That’s because there are only 6 people. What if there were 6 thousand? Or 6 million? Unfortunately there is no way you could recognize the anomalies. And that’s what machines can do. Once a machine can solve such problem in a small scale, then it can usually handle the large scale relatively easy. Therefore the goal here is to build an unsupervised learning model which will identify the \(S_6\) as an anomaly. What is this good for? Let me give you 2 examples. The first example is traditional audit log analysis for the purpose of suspicious activity detection. Let’s look at e-mail. Almost everyone has his own usage pattern on day-to-day basis. If this pattern suddenly changes, then this is considered “suspicious”. It might mean that someone has stolen your credentials. And it can also mean that you just changed your habit. Machines can’t know the underlying reason. What machines can do is analyze millions of accounts and pick up only the suspicious ones, which is typically a very small number. Then the operator can manually call to these people and discover what is going on. Or imagine you are doing pre-sales research. You employ an agency to make a country-wise survey. And there is a question like ‘Please give us 40-50 words feedback’. Let’s say you have got 30,000 responses which satisfies the length. Now you want to choose the responses which are somehow special. Because they might be extremely good, extremely bad, or just interesting. All of these give you valuable insight and possibly direction for the future. Since the overall amount is relatively high, then any human would certainly fail in this job. For machines, this is just a piece of cake. Now let’s look at how to teach the machine to do the job. Example project (button for download is right above this paragraph) is a standards java maven project. Unpack it into any folder, compile by ‘ mvn package‘, and run by executing ‘ java -jar target/anomalybagofwords-1.0.jar 0.5 sample-data-small.txt‘. If you run the program this way, it will execute the described process over the cooked data set and identifies \(S_6\) as an anomaly. If you want to drill down the code, then start with ‘ BagofwordsAnomalyDetectorApp‘ class. Terminology Let’s briefly establish useful terminology. 
Bag of words is a set of unique words within a text, where each word is paired with the number of its occurrences. One specific point is that the order of words is ignored by this structure. If a word is not present in the text, then its occurrence is considered to be \(0\). For example the bag of words for ‘eat, read book, ride bicycle, eat, play computer games, write homework, read book, eat, brush teeth, sleep‘ can be written as the following table.
Word      Number of occurrences
eat       3
read      2
book      2
ride      1
bicycle   1
play      1
computer  1
games     1
write     1
homework  1
brush     1
teeth     1
sleep     1
Sometimes you can find the visualization as a histogram. For example this one. Notation \(B(x)\) will be used for a bag of words. Following the example for \( S_1\):
\(B(S_1) = \left( \begin{array}{cc} eat & 3 \\ read & 2 \\ book & 2 \\ ride & 1 \\ bicycle & 1 \\ play & 1 \\ computer & 1 \\ games & 1 \\ write & 1 \\ homework & 1 \\ brush & 1 \\ teeth & 1 \\ sleep & 1 \end{array} \right)\)
The next term is the distance between 2 bags of words. The distance will be written as \(|B(x) - B(y)|\) and is calculated as the sum of the absolute values of the differences of the counts, over all words appearing in either bag. Following the example.\(|B(read\ article\ and\ book) - B(write\ book\ and\ book)| = \\ = \left| \left( \begin{array}{cc} read & 1 \\ write & 0 \\ article & 1 \\ and & 1 \\ book & 1 \end{array} \right) - \left( \begin{array}{cc} read & 0 \\ write & 1 \\ article & 0 \\ and & 1 \\ book & 2 \end{array} \right) \right| = \textbf{4}\) Applying this definition, you can calculate the distance between all the example sequences. For example \(|B(S_1) - B(S_2)| = \textbf{12}\) and \(|B(S_1) - B(S_6)| = \textbf{30}\). The latter is higher because \(S_1\) and \(S_6\) differ in words more than \(S_1\) and \(S_2\) do. This is an analogy to the distance between 2 points in space. The last term is the probability density function. A probability density function is a continuous function defined over the whole real number space which is greater than or equal to zero for every input and whose integral over the whole space is 1. Notation \(P(x)\) will be used. More formally this means the following: \(P(x) \geq 0 \;\; \forall x \in \mathbb{R} \quad \text{and} \quad \int_{\mathbb{R}}P(x)\,\mathrm{d}x = 1\) A typical example of a probability density function is the normal distribution. The example source code is using a more complex one, called a normal distribution mixture. The parameter \(x\) is called a random variable. In a very simplistic way, the higher \(P(x)\) is, the more “likely” the variable \(x\) is. If \(P(x)\) is low, then the variable \(x\) falls away from the standard. This will be used when setting up the threshold value. Finally let’s make a note about how to create a probability density from a finite number of random variables. If \([x_1,\ldots,x_N]\) is a set of N random variables (or samples you can collect), then there is a process called estimation which transforms this finite set of numbers into a continuous probability density function \(P\). Explanation of this process is out of scope for this article; just remember there is such a thing. In particular, the attached example is using a variation of the EM algorithm. Process Now it’s time to explain the process. The whole process can be separated into 2 phases called training and prediction. Training is the phase where all the data is iterated through and a relatively small model is produced. This is usually the most time consuming operation and the outcome is sometimes called a predictive model. Once the model is prepared, then the prediction phase comes into place. In this phase an unknown data record is examined by the model. Next let’s drill down the details.
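Before the training details, the bag-of-words representation and the distance defined above are straightforward to express in code. Here is a minimal Python sketch (my own illustration; the downloadable example project is in Java) using collections.Counter, which reproduces the numbers quoted above:

# Bag of words and the distance |B(x) - B(y)| as defined above:
# sum of absolute count differences over all words appearing in either bag.
from collections import Counter

def bag_of_words(activity):
    # activity is a comma-separated string of actions, e.g. "eat, read book, ..."
    return Counter(activity.replace(",", " ").split())

def distance(b1, b2):
    return sum(abs(b1[w] - b2[w]) for w in set(b1) | set(b2))

S1 = "eat, read book, ride bicycle, eat, play computer games, write homework, read book, eat, brush teeth, sleep"
S2 = "read book, eat, walk, eat, play tennis, go shopping, eat snack, write homework, eat, brush teeth, sleep"

print(bag_of_words(S1)["eat"])                                                                 # 3
print(distance(bag_of_words("read article and book"), bag_of_words("write book and book")))   # 4
print(distance(bag_of_words(S1), bag_of_words(S2)))                                           # 12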
Training phase
Two inputs are required for the training phase.
A set of activities \([S_1, …, S_N]\). This might be the example set from the beginning.
A sensitivity factor \(\alpha\), which is just a number initially picked by a human such that \(\alpha \geq 0\). More on this one later.
The whole process is pretty straightforward and you can find the implementation in the source code, class BagofwordsAnomalyDetector, method performTraining.
For each activity, calculate a bag of words. The result of this step is \(N\) bags of words \([B(S_1), …, B(S_N)]\).
Calculate random variables. One random variable is calculated for each bag of words. The result of this step is \(N\) random variables \([x_1, …, x_N]\). The formula for the calculation is the following:\(x_i = \frac{\sum_{j=1}^N |B(S_i) - B(S_j)|}{N} \quad \forall i = 1..N\)
Estimate the probability density function \(P\). This process takes the random variables \([x_1, …, x_N]\) and produces the probability density function \(P\). A variation of the EM algorithm is used in the example program.
Calculate the threshold value \(\theta\). The value is calculated from the estimated density \(P\) and the sensitivity factor \(\alpha\). Regarding the sensitivity factor \(\alpha\): the higher \(\alpha\) is, the more activities will be identified as anomalies. The problem with an unsupervised learning model is that the data is not labeled and therefore there is no way to know what the correct answers are and how to set up the optimal \(\alpha\). Therefore some rules of thumb are used instead. For example, set up \(\alpha\) to report a reasonable percentage of activity as anomalies. Typically it is required that the amount of identified anomalies must be manageable by the human investigators. In a bigger system there is usually a feedback loop which incrementally adjusts \(\alpha\) until the optimal value is reached. This is then called reinforcement learning. For this small example, \(\alpha\) was picked manually by trial and error just to reach the goal.
Store all bags of words, \(P\) and \(\theta\) for later usage.
When the training phase finishes, the model is ready to be used in the prediction phase.
Prediction phase
This is the phase when potentially unseen activities are tested by the model. The model evaluates them and returns whether the activities are considered an anomaly or not. The whole process works for each activity \(S_U\) separately (U stands for “unknown”), and it can be summarized by these points.
Calculate the bag of words \(B(S_U)\).
Calculate the random variable \(x_U\) as\(x_U = \frac{\sum_{i=1}^N |B(S_i) - B(S_U)|}{N}\)
If \(P(x_U) \le \theta\) then activity \(S_U\) is considered an anomaly. Otherwise the activity is considered normal.
Summary
You have learned about a relatively simple model for identifying unusual sequences among a bulk of them. Now you can play with the source code, try different variations and see how this affects the result. Here are a few ideas to start with.
Normalize the bags of words. In other words don’t count the absolute number, just the relative frequency.
Use chunks of more than one word. This is then called an n-gram model.
Try to implement different ways to measure the distance between items, for example sequence alignment.
Key takeaways
There is no knowledge about what the correct outcome is at the beginning of unsupervised learning. Therefore a best guess and possibly a feedback loop are implemented.
Predictive models are usually built in the training phase and then used to classify the unknown data in the prediction phase.
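To make the two phases concrete, here is a compact end-to-end sketch in Python (my own illustration, not the Java project that accompanies the article). It uses a Gaussian kernel density estimate as a stand-in for the EM-fitted normal distribution mixture, and a made-up threshold rule theta = alpha times the mean training density, since the article's exact threshold formula is not reproduced above:

# Minimal bag-of-words anomaly detector: training and prediction phases.
# Assumptions: Gaussian KDE instead of the EM-fitted mixture, and a
# hypothetical threshold rule theta = alpha * mean of the training densities.
from collections import Counter
import numpy as np
from scipy.stats import gaussian_kde

def bag(activity):
    return Counter(activity.replace(",", " ").split())

def dist(b1, b2):
    return sum(abs(b1[w] - b2[w]) for w in set(b1) | set(b2))

class Detector:
    def train(self, activities, alpha=0.5):
        self.bags = [bag(a) for a in activities]
        n = len(self.bags)
        xs = np.array([sum(dist(bi, bj) for bj in self.bags) / n for bi in self.bags])
        self.density = gaussian_kde(xs)                 # stand-in for the EM mixture
        self.theta = alpha * self.density(xs).mean()    # hypothetical threshold rule

    def is_anomaly(self, activity):
        b = bag(activity)
        x = sum(dist(b, bi) for bi in self.bags) / len(self.bags)
        return self.density(x)[0] <= self.theta

Whether a record such as \(S_6\) from the introduction gets flagged depends on \(\alpha\); as the article notes, \(\alpha\) is tuned by trial and error (or by a feedback loop) until the set of reported anomalies is manageable.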
In order to be able to find the outliers, abstract features like sentences or actions need to be transformed into a measurable form. After that, probability and statistics are used to establish the baselines and find the outliers.
TextBlob Sentiment: Calculating Polarity and Subjectivity
Sunday June 7, 2015
The TextBlob package for Python is a convenient way to do a lot of Natural Language Processing (NLP) tasks. For example:
from textblob import TextBlob
TextBlob("not a very great calculation").sentiment
## Sentiment(polarity=-0.3076923076923077, subjectivity=0.5769230769230769)
This tells us that the English phrase “not a very great calculation” has a polarity of about -0.3, meaning it is slightly negative, and a subjectivity of about 0.6, meaning it is fairly subjective. But where do these numbers come from? Let's find out by going to the source. (This will refer to sloria/TextBlob on GitHub at commit eb08c12.) After digging a bit, you can find that the main default sentiment calculation is defined in _text.py, which gives credit to the pattern library. (I'm not sure how much is original and how much is from pattern.) There are helpful comments like this one, which gives us more information about the numbers we're interested in:
# Each word in the lexicon has scores for:
# 1) polarity: negative vs. positive (-1.0 => +1.0)
# 2) subjectivity: objective vs. subjective (+0.0 => +1.0)
# 3) intensity: modifies next word? (x0.5 => x2.0)
The lexicon it refers to is in en-sentiment.xml, an XML document that includes the following four entries for the word “great”.
<word form="great" cornetto_synset_id="n_a-525317" wordnet_id="a-01123879" pos="JJ" sense="very good" polarity="1.0" subjectivity="1.0" intensity="1.0" confidence="0.9" />
<word form="great" wordnet_id="a-01278818" pos="JJ" sense="of major significance or importance" polarity="1.0" subjectivity="1.0" intensity="1.0" confidence="0.9" />
<word form="great" wordnet_id="a-01386883" pos="JJ" sense="relatively large in size or number or extent" polarity="0.4" subjectivity="0.2" intensity="1.0" confidence="0.9" />
<word form="great" wordnet_id="a-01677433" pos="JJ" sense="remarkable or out of the ordinary in degree or magnitude or effect" polarity="0.8" subjectivity="0.8" intensity="1.0" confidence="0.9" />
In addition to the polarity, subjectivity, and intensity mentioned in the comment above, there's also “confidence”, but I don't see this being used anywhere. In the case of “great” here it's all the same part of speech (JJ, adjective), and the senses are themselves natural language and not used. To simplify for readability:
word   polarity  subjectivity  intensity
great  1.0       1.0           1.0
great  1.0       1.0           1.0
great  0.4       0.2           1.0
great  0.8       0.8           1.0
When calculating sentiment for a single word, TextBlob uses a sophisticated technique known to mathematicians as “averaging”.
TextBlob("great").sentiment
## Sentiment(polarity=0.8, subjectivity=0.75)
At this point we might feel as if we're touring a sausage factory. That feeling isn't going to go away, but remember how delicious sausage is! Even if there isn't a lot of magic here, the results can be useful—and you certainly can't beat it for convenience. TextBlob doesn't not handle negation, and that ain't nothing!
TextBlob("not great").sentiment
## Sentiment(polarity=-0.4, subjectivity=0.75)
Negation multiplies the polarity by -0.5, and doesn't affect subjectivity. TextBlob also handles modifier words!
Here's the summarized record for “very” from the lexicon:
word  polarity  subjectivity  intensity
very  0.2       0.3           1.3
Recognizing “very” as a modifier word, TextBlob will ignore polarity and subjectivity and just use intensity to modify the following word:
TextBlob("very great").sentiment
## Sentiment(polarity=1.0, subjectivity=0.9750000000000001)
The polarity gets maxed out at 1.0, but you can see that subjectivity is also modified by “very” to become \( 0.75 \cdot 1.3 = 0.975 \). Negation combines with modifiers in an interesting way: in addition to multiplying by -0.5 for the polarity, the inverse intensity of the modifier enters for both polarity and subjectivity.
TextBlob("not very great").sentiment
## Sentiment(polarity=-0.3076923076923077, subjectivity=0.5769230769230769)
How's that? \[ \text{polarity} = -0.5 \cdot \frac{1}{1.3} \cdot 0.8 \approx -0.31 \] \[ \text{subjectivity} = \frac{1}{1.3} \cdot 0.75 \approx 0.58 \] TextBlob will ignore one-letter words in its sentiment phrases, which means things like this will work just the same way:
TextBlob("not a very great").sentiment
## Sentiment(polarity=-0.3076923076923077, subjectivity=0.5769230769230769)
And TextBlob will ignore words it doesn't know anything about:
TextBlob("not a very great calculation").sentiment
## Sentiment(polarity=-0.3076923076923077, subjectivity=0.5769230769230769)
TextBlob goes along finding words and phrases it can assign polarity and subjectivity to, and it averages them all together for longer text. And while I'm being a little critical, and such a system of coded rules is in some ways the antithesis of machine learning, it is still a pretty neat system and I think I'd be hard-pressed to code up a better such solution. Check out the source yourself to see all the details!
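To connect the lexicon numbers with the outputs above, here is a small standalone Python sketch (my addition, not from the post or from TextBlob's source) that reproduces the reported values with plain arithmetic: averaging the four lexicon senses of "great", applying the -0.5 negation factor, and dividing by the intensity of "very":

# Reproduce the sentiment values quoted above from the lexicon numbers alone.
great_senses = [(1.0, 1.0), (1.0, 1.0), (0.4, 0.2), (0.8, 0.8)]  # (polarity, subjectivity)

# "great": plain average over the senses
pol = sum(p for p, _ in great_senses) / len(great_senses)        # 0.8
subj = sum(s for _, s in great_senses) / len(great_senses)       # 0.75

# "not great": negation multiplies polarity by -0.5, subjectivity unchanged
print(-0.5 * pol, subj)                                          # -0.4  0.75

# "not very great": negation plus the inverse intensity of "very" (1.3)
print(-0.5 * (1 / 1.3) * pol, (1 / 1.3) * subj)                  # about -0.3077 and 0.5769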
I was doing an exercise and during this exercise I had to solve a definite integral in order to calculate an area. The region was $A=\{(x,z):x,z \geq 0, x+\frac{1}{2}z \leq 1, x^{2}+z^{2} \geq \frac{4}{5}\}$ The region is a triangle cut by a quarter of a circle, so the area is easy to evaluate and equals $1-\frac{\pi}{5}$. I tried doing $\int_0^1dx\int_{\sqrt{\frac{4}{5}-x^2}}^{-2(x-1)}dz$ But I had a problem with this piece: $$-\int_0^1 \sqrt{\frac{4}{5}-x^2}{\rm d}x$$ I solved the indefinite integral by putting $x=\frac{2}{\sqrt{5}}\sin(t)$. But now the problem is that the limits of integration are in $\mathbb C$ and no longer in $\mathbb R$. I noticed (but I don't know why) that solving the integral in $\mathbb C$ and then taking the real part of the result gives the right answer. My question is: is there a way to solve this integral in $\mathbb R$? And why did the way I adopted to solve the integral work?
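A quick numerical cross-check of the stated area (my addition, not part of the question): a grid estimate over the bounding box $[0,1]\times[0,2]$ should match $1-\frac{\pi}{5}\approx 0.3717$.

# Grid estimate of the area of A = {(x,z): x,z >= 0, x + z/2 <= 1, x^2 + z^2 >= 4/5}.
import numpy as np

n = 2000
x = np.linspace(0.0, 1.0, n)
z = np.linspace(0.0, 2.0, n)
X, Z = np.meshgrid(x, z)
inside = (X + 0.5 * Z <= 1.0) & (X**2 + Z**2 >= 0.8)
area = inside.mean() * 2.0        # fraction of the 1 x 2 bounding box times its area
print(area, 1.0 - np.pi / 5.0)    # both approximately 0.3717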
[1] Effect of Clustering The msd of agents with and without clustering-effect was produced. [2] Density-Dependency of Diffusion-Coefficient A MC-simulation was made to check the density-dependency of the diffusion-coefficient. Therefore, the "Pauli-effect" (a patch can only be occupied once), the clustering-effect (if agent has neighbours, the hoppingrate is 0.01 times the "normal" hoppingrate without neighbours) and periodic boundaries were used. The lattice consists of 100 x 100 patches. Simulation with different Agent-Densities A simulation with 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000 agents was made. The simulation data is here The result of the density-dependency of the msd is shown in this figure. Plot of the MSD vs. timelag of different numbers of agents. This plot is logarithmical averaged: Same plot as above, but "normal" averaged: The same simulation was done for $\# \text{Agents} > 5000$. The plots are shown here: Normally, there should no diffusion if the lattice is completely occupied (100 x 100 Lattice). But one can see a diffusion-constant unequal to zero for #Agents=10000. This can be explained using this plot of the initial-agent-configuration on the 100 x 100 Lattice: The lattice is not fully occupied, so diffusion can occur. [3] Hopping Rate, Diffusion Constant and MSD CM: I have calculated the relation between the Monte Carlo hopping rate $R$, the size $h$ of a single lattice site and the resulting diffusion constant $D$ for free particle diffusion in 2D. The results can be readily extended to $\eta$-dimensional space:(1) and(2) The formula were tested by a simulation. For the case $R=1\;,\;h=1\;,\;\eta=2$ one expects $\overline{\Delta R^2}(t) = 4\; t$. This seems to be indeed the case: [4] Effective Diffusion constant in 2D CM: Assume we have NAG particles on a lattice of size XMX * YMX, where each lattice site has a linear size $h$. Then the particle density is(3) Let the interaction range of the particles be $r_c = h$, corresponding to nearest neighbor coupling on the lattice. In 2D, the correponding area is(4) The probability to have no neighbor within range $r_c$ is(5) Let the normal diffusion constant be $D$ and the slow-down factor be $s$. We then expect a density dependent effective diffusion constant of the form(6) This exponential form does not depend on the number of spatial dimensions. [5] Diffusion-Constant and Agent-Density AK: The simulated agent-density-dependency of the MSD as shown in [2] is now checked with the analytical result of [4]:(7) where(8) and $h=1$ $s$, the slow downfactor equals $0.01$, thus(9) $\rho$ is defined as(10) In the case of [2], $\text{NAG}$ is a variable and $\text{XMX}=\text{YMX}=100$. So,(11) Combining eq. (9) and (11) yields(12) Due to $\eta=2$, the MSD can be calculated to(13) [2] shows a plot of the MSD at $t=1$ and $t=8$, so the analytical result can be computed an plotted in the plot of the simulated MSD. The result is shown in this figure: The assumed poisson-distribution does not fit at all! The first three data-points (1,2,5 agents) could be described by this assumption, but all other agent-densities cannot be explained by poisson-dis. Alternative Approaches 1. Mean-Field approximation Clustering takes effect if an agent has at least one neighbour. 
If $P_\text{NC}$ is the probability to have no neighbour, hence no clustering, the effective diffusion-constant $D_\text{eff}$ can be described as above:(14) Assuming a mean-field approximation,(15) Thus,(16) The simulation defines a neighbour as an agent inside a circle of radius $h$, thus each patch has $8$ neighbours. The resulting function and the simulation-data is shown in this plot: The assumed mean-field-approximation yields higher effective diffusion-constanst, because the clustering-effect is not considered correctly: If clustering occurs, the system does not behave as any equilibrium. Thus, a mean-field approx. is not allowed. In the next step, the data can be used to fit the distribution, described above, with respect to a critical density $\rho_c$ and a critical maximal agent-number $N_\text{max}$ respectively or an effective number of neighbours $N_\text{NB}$:(17) The result is shown in this figure: where $a$ in the figure is $N_\text{max}$ The fit of the effective-number of neighbours is shown here: 2. Another Function To get a better fit-result, a fit-function(18) The result is shown here: [6] Simulation of Diffusion only with Pauli-Effect AK: A simulation without the clustering-effect and with pauli-effect was made to get out if the MSD-Density-Distribution is a result of the pauli effect or of the clustering effect. The result is shown here: Thus, the Pauli-Effect alone cannot describe the occured curvature. [7] Diffusion-Simulation AK: Due to a problem in the MC-simulation, we decided to simulate the cell-diffusion with an enhanced simulation. The simulaton is done using a 100 x 100 lattice. The number of agents varies between 1 and 9999. The different combinations of diffusion are shown in the next sections: [7.1] Free-Diffusion In this simulation, all agents diffuse freely. Thus, one expects that the MSD does not depend on the agent-density. Furthermore, all MSD should look be proportional to $4Dt$. The plots are shown in this figures: As expected, the MSD is not a function of the number of agents, because of free diffusion without any special effect. Each agent can diffuse freely. Furthermore, there are some statistical fluctuations. [7.2] Diffusion with Pauli-Effect This simulation uses free-diffusion with the constraint that a patch only can be occupied by one agent. There should be a small density-dependency of the diffusion-constant. The MSD vs. time looks as expected: The diffusion is less for a higher number of agents, because an agent cannot diffuse to an already occupied patch. Furthermore, it looks as if for $\rho=0.5$ the MSD for a fixed time and hence the diffusion-constant $D$ is half the diffusion-constant for one agent. [7.3] Diffusion with Clustering-Effect The diffusion is a free diffusion with the constraint that the diffusion-constant of an agent is $D_0$ if the agent has no neighbours, and $D=sD_0$ if the agent has at least one neighbour. Thus, clusters of agents should appear and the MSD should strongly depend on the agent-density. [7.4] Diffusion with Pauli- and Clustering-Effect In this simulation, the pauli- and the clustering-effect were used. In the next plots, one can estimate the influence of both effects to the "true" diffusion of agents. The MSD vs. Time-Plot is shown in this figure: The MSD vs. #Agents at two different times is shown here: As one can see, the MSD for an agent-density of $\approx 1$ (#Agents = 9999) the MSD falls dramatically down, as expected. 
Firstly, the clustering-effect takes effect, so that there is only a diffusion with $D=0.01$, secondly due to the pauli-effect, there is only one patch, where an agent can diffuse. This explains the small diffusion in the MSD vs time-plot and the very low MSD for a high number of agents. Comparing these plots with the results of clustering only and pauli only suggests that for small agent-densities the clustering effect dominates and the pauli-effect for high agent-densities. [7.4.1] Fitting the Pauli- and Clustering-Effect Curve The MSD vs. #Agents-graph was fitted using the function(19) $D_0$ is the intial diffusion-constant, thus $D_0=32$ for $t=8.00$ and $D_0=4$ for $t=1.00$. The fit-results and the resulting plots are shown here: In this fit, $a$ and $b$ had no initial values. So $b$ does not fit for both fits. In this fit, $b$ was pre-initalised with $b=9500$ for both cases. It seems that the fit-routine has some problems fitting these data, so one has to pre-initialise $b$ very carefully. To get a good $b$, the data for "Pauli-Only" should be fitted using(20) And the data for "Clustering-Only" using(21) to get good initial-conditions for the fit-routine. This plot: shows a fit of $a$ only, using $b=10000$, the maximum number of agents on the lattice. [8] Anti-Clustering AK: A "Anti-Clustering"-Effect was simulated as well. Therefore, the "slow-down-factor" was chosen to be $1.99$. The Diffusion of 0-6000 Agents was simulated using an anti-clustering-effect only and the combination of the pauli. and anti-clustering-effect. The results are shown here: [8.1] Anti-Clustering Only This plot shows the MSD at two different times $t=1.00$ and $t=8.00$ as a function of the agent-number. This plot shows the MSD vs. time for the used agent-numbers. [8.2] Anti-Clustering and Pauli-Effect This plot shows the MSD at different times as shown above. In this plot, the Anti-Clustering-Effect and the Pauli-Effect was simuolated. This plot shows the resulting MSD vs. time graphs: [9] Simulation of diffusing agents in 3D AK: [9.1] Free-Diffusion MSD vs. time: MSD vs. #Agents [9.2] Pauli-Only MSD vs. time: MSD vs. #Agents [9.3] Clustering-Only MSD vs. time MSD vs. #Agents: [9.4] Pauli and Clustering MSD vs. time MSD vs. #Agents
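The lattice Monte-Carlo rules described in the sections above (hops on a periodic 100 x 100 lattice, single occupancy for the Pauli effect, and a hopping rate reduced by the factor s = 0.01 when an agent has at least one of its 8 neighbouring patches occupied) are easy to reproduce. Below is a rough, simplified Python sketch of such a simulation that records the MSD per sweep; it is an illustration of the described rules, not the original simulation code:

# Simplified 2D lattice Monte-Carlo: periodic boundaries, optional Pauli
# (single occupancy) and clustering (hop probability s if >= 1 of the 8
# neighbouring patches is occupied). The occupancy set assumes at most one
# agent per patch, i.e. the Pauli effect switched on.
import numpy as np

rng = np.random.default_rng(0)
L, n_agents, sweeps, s = 100, 500, 200, 0.01
pauli, clustering = True, True

idx = rng.choice(L * L, size=n_agents, replace=False)      # distinct start sites
pos = np.column_stack((idx // L, idx % L))
unwrapped = pos.astype(float).copy()                        # for the MSD
occupied = set(map(tuple, pos))
hops = [(1, 0), (-1, 0), (0, 1), (0, -1)]
nbrs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def has_neighbour(p):
    return any(((p[0] + dx) % L, (p[1] + dy) % L) in occupied for dx, dy in nbrs)

start, msd = unwrapped.copy(), []
for _ in range(sweeps):
    for i in rng.permutation(n_agents):
        rate = s if (clustering and has_neighbour(pos[i])) else 1.0
        if rng.random() >= rate:
            continue                                        # hop suppressed by clustering
        dx, dy = hops[rng.integers(4)]
        new = ((pos[i][0] + dx) % L, (pos[i][1] + dy) % L)
        if pauli and new in occupied:
            continue                                        # target patch already taken
        occupied.discard(tuple(pos[i]))
        occupied.add(new)
        pos[i] = new
        unwrapped[i] += (dx, dy)
    msd.append(np.mean(np.sum((unwrapped - start) ** 2, axis=1)))

print(msd[:5], msd[-1])   # MSD(t); with pauli = clustering = False it grows linearly in t

Switching the pauli and clustering flags reproduces, at least qualitatively, the four scenarios compared above: free diffusion, Pauli only, clustering only, and both effects combined.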
It was an awkward problem, but application of a powerful general principle provided the breakthrough. This problem, in contrast to the earlier one, looked difficult. The challenge was on - would we be able to achieve its elegant solution? We do not always know about the end at the beginning of our journey, but finally a concept-based strategic approach provided the initial breakthrough and then led us to the elegant solution in a few more steps. In this session we will showcase a hard problem that seemed to have no elegant solution, and then show how the use of a more general problem solving principle quickly led us to familiar grounds. It was the application of a series of rich, powerful concepts that produced the elegant solution in a few steps, quickly. We will present first the cumbersome conventional solution and then the elegant solution, for critical comparison and appreciation of the power of problem solving principles and techniques in producing elegant and efficient solutions. Before going ahead you should refer to our concept tutorials on Trigonometry.
Chosen Problem. If $(a^2-b^2)sin \theta + 2abcos \theta=a^2+b^2$ then the value of $tan \theta$ is, $\displaystyle\frac{1}{2ab}(a^2+b^2)$ $\displaystyle\frac{1}{2}(a^2-b^2)$ $\displaystyle\frac{1}{2}(a^2+b^2)$ $\displaystyle\frac{1}{2ab}(a^2-b^2)$
First solution: Conventional approach that involves raising the powers of functions and extensive deduction. As the value of $tan \theta$ is wanted, dividing by $cos \theta$ converts the $sin \theta$ to $tan \theta$, creating an additional $sec \theta$ in the process. But $sec \theta$ and $tan \theta$ being a friendly trigonometric function pair having the relation $sec^2 \theta=tan^2 \theta +1$ at the basic concept level, it is expected that a solution can be reached by raising the power of the variable functions to 2 by squaring, $(a^2-b^2)sin \theta + 2abcos \theta=a^2+b^2$, Or, $(a^2-b^2)tan \theta + 2ab=(a^2+b^2)sec \theta$ Squaring the equation, $(a^2-b^2)^2tan^2 \theta + 4abtan \theta(a^2-b^2) $ $\hspace{30mm}+ 4a^2b^2=(a^2+b^2)^2sec^2 \theta$, Or, $(a^2-b^2)^2tan^2 \theta + 4abtan \theta(a^2-b^2) $ $\hspace{30mm}+ 4a^2b^2=(a^2-b^2)^2sec^2 \theta+4a^2b^2sec^2 \theta$, Or, $4abtan \theta(a^2-b^2) + 4a^2b^2$ $\hspace{30mm}=(a^2-b^2)^2(sec^2 \theta-tan^2 \theta)+4a^2b^2sec^2 \theta$, Or, $(a^2-b^2)^2+4a^2b^2(tan^2 \theta+1)$ $\hspace{30mm}-4abtan \theta(a^2-b^2) - 4a^2b^2=0$, Or, $(a^2-b^2)^2-4abtan \theta(a^2-b^2) +4a^2b^2tan^2 \theta=0$, Or, $\left[(a^2-b^2)-2abtan \theta\right]^2=0$, So, $(a^2-b^2)-2abtan \theta=0$ Or, $tan \theta=\displaystyle\frac{1}{2ab}(a^2-b^2)$. Answer: Option d: $\displaystyle\frac{1}{2ab}(a^2-b^2)$. This is a hard, extensive-deduction-based, and not very reliable solution path.
Second solution: Elegant solution: Stage 1: Problem solving approach: Problem analysis revealed lack of harmony in the expression. The expression seemed to be discordant, with a lack of harmony and too many terms. No clear elegant solution path could be seen at first analysis. Let us explain what we mean by discordance and lack of harmony.
Principle of harmony or discordance in expressions. The variables are the functions here. The first important characteristic is that the two functions involved, $sin \theta$ and $cos \theta$, are a friendly trigonometric function pair, though in comparison this pair of functions does not have as much potential in simplification of expressions as the other two friendly trigonometric function pairs, $sec \theta$, $tan \theta$ and $cosec \theta$, $cot \theta$.
Each of these latter two pairs has the powerful property of a mutually inverse expression relationship of the form $sec \theta + tan \theta=\displaystyle\frac{1}{sec \theta - tan \theta}$, which in general carries high potential in simplifying complex expressions easily and elegantly.
Coming back to the discordant form of the given expression, the discordance and lack of harmony originate from the term associations of the friendly functions $sin \theta$ and $cos \theta$: the function $sin \theta$ is associated with a factor $a^2-b^2$ whereas its pair partner $cos \theta$ is associated with a factor $2ab$, which is very different in structure and form from $a^2-b^2$. If the factor of $cos \theta$ were, say, $a^2+b^2$, we could have accepted the expression as an expression in harmony.
The principle of harmony or discordance in expressions states: the more the harmony, or the less the discordance, in an expression, the greater the chances of the existence of an elegant conceptual solution in a few steps.
How to increase the harmony in the expression - first stage: On a closer look, a possibility of introducing significant harmony (still part harmony, not whole) in the expression could be identified. If both sides of the equation are divided by $cos \theta$, in a single step we break the discordant association of $cos \theta$ with $2ab$ and transform the equation in terms of the more preferred friendly trigonometric function pair $sec \theta$ and $tan \theta$; on top of it, we form term associations of the variables $sec \theta$ and $tan \theta$ that are similar in structure and form, namely $(a^2+b^2)$ and $(a^2-b^2)$: $(a^2-b^2)sin \theta + 2abcos \theta=a^2+b^2$, Or, $(a^2-b^2)tan \theta + 2ab=(a^2+b^2)sec \theta$. In one simple action, the expression is transformed to a much more balanced and harmonious form with less discordance.
How to increase the harmony in the expression - second stage: apply the principle of collection of friendly terms to create mutually inverse expressions in friendly trigonometric function pairs. Knowing the power of the mutually inverse expressions $sec \theta +tan \theta$ and $sec \theta-tan \theta$ of the friendly trigonometric function pair $sec \theta$ and $tan \theta$, the immediate next step in increasing the harmony in the expression further is to apply the principle of collection of friendly terms and bring the terms involving $sec \theta$ and $tan \theta$ together. This is a familiar situation and decisions are easy to take with the end state clearly visible. But in the very beginning it was not so. The key breakthrough was achieved by increasing the harmony or balance in the expression. From the previous stage, $(a^2-b^2)tan \theta + 2ab=(a^2+b^2)sec \theta$, Or, $2ab=a^2(sec \theta - tan \theta) + b^2(sec \theta + tan \theta)$.
Second solution: Elegant solution: Stage 2: Resolving the last trace of discordance in the expression: Forming mutually inverse coefficient factors. With the RHS transformed into a promising form, we turn our attention to the last offending discordant factor term $ab$ in the LHS. This term is in no way similar to the coefficients $a^2$ or $b^2$, but if we divide the equation by $ab$, not only is it removed from the expression, but also a pair of mutually inverse coefficient factors, $\displaystyle\frac{a}{b}$ and $\displaystyle\frac{b}{a}$, is formed in the RHS.
This situation is still more welcome as the other two factors of the two terms, $sec \theta - tan \theta$ and $sec \theta + tan \theta=\displaystyle\frac{1}{sec \theta-tan \theta}$, are already present as a second pair of mutually inverse expressions. From the previous stage, $2ab=a^2(sec \theta - tan \theta) + b^2(sec \theta + tan \theta)$, Or $2=\displaystyle\frac{a}{b}(sec \theta-tan \theta)+\displaystyle\frac{b}{a}(sec \theta+tan \theta)$.
Second solution: Elegant solution: Stage 3: Two pairs of mutually inverse factors allowed component expression substitution following the reduction in number of variables technique. The reduction in number of variables technique states: in simplifying an expression, the more we are able to reduce the number of variables in the expression, the easier and simpler the steps to the solution will be. To reduce the number of variables and elements in the expression drastically, we find the situation ideal for applying component expression substitution by using an intermediate dummy variable $z=\displaystyle\frac{a}{b}(sec \theta-tan \theta)$, reducing the number of variables to just 1. From the previous result, $2=\displaystyle\frac{a}{b}(sec \theta-tan \theta)+\displaystyle\frac{b}{a}(sec \theta+tan \theta)$, Or, $2=z+\displaystyle\frac{1}{z}$, where $z=\displaystyle\frac{a}{b}(sec \theta-tan \theta)$. This is in accordance with the variable reduction technique applied at a higher level in an expression. The problem is now transformed to a much simpler one, and it is a case of solving a simpler problem.
Second solution: Elegant solution: Final Stage 4: Solving a simpler problem, reverse substitution and friendly trigonometric function pairs concepts. From the previous stage, $2=z+\displaystyle\frac{1}{z}$, Or, $z^2-2z+1=0$, Or, $(z-1)^2=0$, Or, $z=\displaystyle\frac{a}{b}(sec \theta-tan \theta)=1$, and by reverse substitution, Or, $sec \theta - tan \theta=\displaystyle\frac{b}{a}$. This expression is perfect for applying the friendly trigonometric function pairs concepts, and with $sec \theta-tan \theta =\displaystyle\frac{1}{sec \theta + tan \theta}$ we get, $sec \theta - tan \theta=\displaystyle\frac{b}{a}$, Or, $\displaystyle\frac{1}{sec \theta + tan \theta} = \displaystyle\frac{b}{a}$, Or, $sec \theta + tan \theta = \displaystyle\frac{a}{b}$. This is familiar ground and we just subtract the value of $sec \theta -tan \theta$ from $sec \theta + tan \theta$ to get $tan \theta$: $2tan \theta = \displaystyle\frac{a}{b} - \displaystyle\frac{b}{a}$, Or, $tan \theta =\displaystyle\frac{1}{2ab}(a^2-b^2)$. We always need to square an expression when we have to derive any trigonometric function from any other, excluding the mutually inverse functions such as $cosec \theta$ from $sin \theta$. But we have delayed the squaring till the very last stage, and even then avoided squaring the trigonometric functions. Just like delayed evaluation, the delayed raising of power technique states: the earlier an expression or its variables are raised to higher powers, the more cumbersome, time-consuming and complex the steps to the solution will be; conversely, the more the raising of powers is delayed or avoided altogether, the simpler and faster the steps to the solution will be. Answer: d: $\displaystyle\frac{1}{2ab}(a^2-b^2)$. In this elegant solution, we have reduced the number of variables to just one using simple algebraic manipulations. Finally, we avoided squaring of trigonometric functions completely.
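A quick symbolic check of the final answer (my addition, not part of the article): substituting $tan \theta = \frac{1}{2ab}(a^2-b^2)$ back into the original equation should reduce it to an identity. A short SymPy sketch:

# Verify that tan(theta) = (a^2 - b^2) / (2*a*b) satisfies
# (a^2 - b^2)*sin(theta) + 2*a*b*cos(theta) = a^2 + b^2.
import sympy as sp

a, b = sp.symbols('a b', positive=True)
theta = sp.atan((a**2 - b**2) / (2*a*b))
lhs = (a**2 - b**2)*sp.sin(theta) + 2*a*b*sp.cos(theta)
print(sp.simplify(lhs - (a**2 + b**2)))   # expected output: 0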
This avoidance of raising powers refers to yet another basic principle of algebraic simplification, the minimum order simplest solution principle. Primarily, the more you increase the order (or power) of the terms in the deductive process, the more you deviate from the shortest-path solution and the harder you make the problem to solve.

This fundamental algebraic principle of minimum order simplest solution states:

In a solution process, if you keep the order of the variables to the minimum, generally linear of unit power, you will have the simplest solution.

Adhering to this principle, we always try to keep the expressions involved linear.

Key concepts and techniques used: -- Problem analysis -- Key pattern identification -- Principle of harmony or discordance in expressions -- Discordant variable associations -- Friendly trigonometric function pairs concepts -- Mutually inverse expression resource -- Principle of collection of friendly terms -- Component expression substitution -- Variable reduction technique -- Solving a simpler problem -- Reverse substitution technique -- Basic algebraic concepts -- Rich algebraic techniques -- Minimum order simplest solution principle -- Delayed raising of power technique -- Efficient simplification -- Many ways technique -- Real life problem solving -- Domain mapping -- Multilevel abstraction -- Degree of abstraction.

Special note

Though the detailed explanations in the elegant solution section may seem long and abstract, the actual solution could be carried out wholly mentally. This was possible because the actions were all guided by conceptual analysis and were inherently simple. The structures manipulated became more and more harmonious and balanced (and hence easy to remember) after each step, converging finally to the simplest form. If one is practised and conversant not only with applying the powerful topic-related problem solving concepts, strategies and techniques, but also with analysing a new situation to sense the right path by forming and following more general principles, no problem should pose any difficulty.

On the Principle of harmony or discordance, a very general problem solving resource

The key breakthrough was achieved by applying the principle of harmony or discordance, which tells about harmonious relationships between the various elements in an expression. Its application originated (as far as we are concerned) when solving a Trigonometry problem elegantly. Inherently though, this principle belongs to the topic of Algebra, where more of its applications should be found.

On afterthought though, we classify this principle as one of the most general and so most powerful principles, applicable not only to maths but to all kinds of problem states, including real life problem solving. In a real life problem situation, the variables and coefficients may be thought of as equivalent to persons, attributes and things, whereas the associations are equivalent to relations between the real life agents and elements. Obviously the binary or unary math operations may also need to be mapped suitably. This is done by domain mapping. Freedom from domain is achieved by extensive multilevel abstraction of the highest order; without freeing a problem solving principle from the topic area or domain where it is first applied, it cannot be generalized and applied in solving problems from other domains.

The more the generalization and degree of abstraction of a problem solving resource, the broader is its scope of application.
Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra:

Tutorials on Trigonometry
General guidelines for success in SSC CGL
Efficient problem solving in Trigonometry
How to solve a difficult SSC CGL level problem in a few conceptual steps, Trigonometry 8

A note on usability: The Efficient math problem solving sessions on School maths are equally usable by SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ-type questions, and secondly, the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving.
Reciprocation (pole and polar)

Like the Fano plane, the real projective plane is self-dual. How can we exchange the points and lines in such a way that collinearity becomes concurrency and vice versa? The answer lies in the pole of a line and the polar of a point.

Polar: Fix a circle $\omega$ in the Euclidean plane with center $O$. Let $A$ be any point in the projective plane, let $A’$ be its inverse about $\omega$ (where all the points at infinity invert to $O$), and let $a$ be the line through $A’$ that is perpendicular to line $OA’$ (or $OA$, if $A’=O$). Then $a$ is the polar of $A$.

Pole: The pole of a line $a$ is the reverse construction. First, the pole of the line at infinity is $O$. Now, given a line $a$ other than the line at infinity, let $A’$ be the foot of the altitude from $O$ to $a$ and let $A$ be the inverse of $A’$ about $\omega$. Then $A$ is the pole of $a$ (and $a$ is the polar of $A$).

Exercise. Show that $A$ lies on the polar of $B$ if and only if the polar of $A$ passes through $B$.

We now know that reciprocation, the transformation about a circle that replaces each point with its polar and each line with its pole, interchanges lines and points in a way that preserves incidence. This enables us to “dualize” many theorems about concurrency to obtain theorems about collinearity, or vice versa.

Reciprocation has the added benefit that it preserves conics. One way to define a conic is as the reciprocal of some circle. Alternatively, we can define a conic as the locus of solutions $(x:y:z)$ to some homogeneous equation of degree $2$ in $x,y,z$, such as $x^2+2y^2-z^2=0$ or $x^2-yz=0$.

Exercise. What kind of conics do the two quadratic equations above describe when restricted to the Euclidean plane ($z\neq 0$)?

Projective transformations

In homogeneous coordinates, a projective transformation is a map of the form $$(x:y:z)\mapsto (ax+by+cz:dx+ey+fz:gx+hy+iz)$$ for some real $a,b,c,d,e,f,g,h,i$ forming an invertible coefficient matrix. The main facts we need to know about projective transformations are:

Projective transformations send lines to lines and conics to conics.

Projective transformations preserve incidence (collinearity and concurrency).

Projective transformations preserve cross ratios of collinear points. (The cross ratio of four collinear points $A,B,C,D$ is $$(A,B;C,D)=\frac{AC\cdot BD}{AD\cdot BC}.$$ Each factor is a directed length according to a fixed orientation of the line.)

Given four points $A,B,C,D$ in the projective plane, no three collinear, and another such choice of points $X,Y,Z,W$, there is a unique projective transformation sending $A$ to $X$, $B$ to $Y$, $C$ to $Z$, and $D$ to $W$.

It is often useful to choose an important line in a diagram and apply a projective transformation sending that line to the line at infinity.

Some theorems and their duals

We finally have the tools to prove some powerful theorems about Euclidean geometry that are more natural in the context of projective geometry. Can you see why reciprocation sends each theorem to its dual?

Theorem — Dual

Ceva: If $ABC$ is a triangle, the cevians $AY, BZ, CX$ concur if and only if $\frac{AX}{XB}\cdot\frac{BY}{YC}\cdot\frac{CZ}{ZA}=1$. — Menelaus: If $ABC$ is a triangle, points $X, Y, Z$ on lines $AB, BC, CA$ respectively are collinear if and only if $\frac{AX}{XB}\cdot\frac{BY}{YC}\cdot\frac{CZ}{ZA}=-1$.

Desargues (its own dual): If $ABC$ and $DEF$ are two triangles, then $AD$, $BE$, and $CF$ are concurrent if and only if $AB\cap DE$, $BC\cap EF$, and $CA\cap FD$ are collinear.

Pascal: Given a hexagon inscribed in a conic, the intersection points of the pairs of opposite sides are collinear. — Brianchon: Given a hexagon circumscribed about a conic, the lines joining the pairs of opposite vertices are concurrent.
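As an optional numerical illustration (a small sketch of our own, not part of the notes), take $\omega$ to be the circle $x^2+y^2=r^2$ centered at the origin. Following the construction above, the inverse of a point $A$ is $A'=(r^2/|A|^2)A$, and the polar of $A=(x_0,y_0)$ works out to the line $x_0x+y_0y=r^2$. The symmetry of this equation in the two points makes the first exercise above (often called La Hire's theorem) easy to spot-check:

    # Polar of a point with respect to the circle x^2 + y^2 = r^2 (centered at O).
    # For A = (x0, y0), the polar is the line x0*x + y0*y = r^2, which is exactly
    # the line through the inverse A' = (r^2/|A|^2) * A perpendicular to OA.
    from math import isclose

    def polar(point, r):
        # Return (u, v, w) describing the line u*x + v*y = w.
        x0, y0 = point
        return (x0, y0, r * r)

    def lies_on(point, line, tol=1e-9):
        x, y = point
        u, v, w = line
        return isclose(u * x + v * y, w, abs_tol=tol)

    r = 2.0
    A, B, C = (3.0, 1.0), (1.0, 1.0), (1.0, 2.0)
    # A lies on the polar of B exactly when B lies on the polar of A:
    print(lies_on(A, polar(B, r)), lies_on(B, polar(A, r)))  # True  True  (3*1 + 1*1 = 4 = r^2)
    print(lies_on(A, polar(C, r)), lies_on(C, polar(A, r)))  # False False (3*1 + 1*2 = 5 != 4)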
Let’s illustrate the power of projective transformations by proving Desargues’ Theorem. Let $ABC$ and $DEF$ be triangles and let $X=AB\cap DE$, $Y=BC\cap EF$, and $Z=CA\cap FD$. Consider a projective transformation that sends $X$ and $Y$ to two points $X’$ and $Y’$ on the line at infinity. Since projective transformations preserve incidence, we only need to show that the image of $Z$ under this transformation, $Z’$, is on the line at infinity if and only if the images of the triangles, $A’B’C’$ and $D’E’F’$, have the property that $A’D’$, $B’E’$, and $C’F’$ are concurrent. (In this case we say the triangles are perspective from a point.) Since $X’=A’B’\cap D’E’$ and $Y’=B’C’\cap E’F’$ and these are both on the line at infinity, we have $A’B’\parallel D’E’$ and $B’C’\parallel E’F’$. So we’ve created parallel lines to work with.

Now, if $Z’$ is also on the line at infinity, then the third pair of sides of the triangles, $A’C’$ and $D’F’$, is also parallel, and so the triangles are homothetic (that is, one can be mapped to the other via a dilation of the plane centered at some point), or related by a translation, in which case $A’D’$, $B’E’$, and $C’F’$ are parallel and concur at a point at infinity. In the homothetic case the center of the homothety must lie on each of $A’D’$, $B’E’$, and $C’F’$. Either way, the triangles $A’B’C’$ and $D’E’F’$ are perspective from a point.

Conversely, if $A’D’$, $B’E’$, and $C’F’$ intersect at $P$, then the homothety centered at $P$ sending $B’$ to $E’$ must send $A’$ to $D’$ and $C’$ to $F’$ because of the two pairs of parallel lines. So the third pair of lines must also be parallel, and hence $Z’$ is on the line at infinity.
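To complement the synthetic argument, here is a small computational sketch (our own illustration using numpy; the coordinates are arbitrary) that checks Desargues’ theorem on a concrete pair of triangles. It uses the standard homogeneous-coordinate toolkit: the cross product of two points gives the line through them, the cross product of two lines gives their intersection, and three points are collinear exactly when the determinant of their coordinate matrix vanishes.

    # Numerical spot-check of Desargues' theorem in homogeneous coordinates.
    import numpy as np

    def join(p, q):
        # Line through two points, as a homogeneous coordinate vector.
        return np.cross(p, q)

    def meet(l, m):
        # Intersection point of two lines.
        return np.cross(l, m)

    def collinear(p, q, r, tol=1e-9):
        return abs(np.linalg.det(np.vstack([p, q, r]))) < tol

    # Triangle ABC, and triangle DEF perspective with it from P = (0, 0):
    # D, E, F are chosen on the rays PA, PB, PC with different ratios (2, 3, 4),
    # so AD, BE, CF all pass through P, i.e. the triangles are perspective from P.
    A, B, C = np.array([1.0, 0, 1]), np.array([0, 1.0, 1]), np.array([-1.0, -1, 1])
    D, E, F = np.array([2.0, 0, 1]), np.array([0, 3.0, 1]), np.array([-4.0, -4, 1])

    X = meet(join(A, B), join(D, E))   # AB ∩ DE
    Y = meet(join(B, C), join(E, F))   # BC ∩ EF
    Z = meet(join(C, A), join(F, D))   # CA ∩ FD
    print(collinear(X, Y, Z))          # True: X, Y, Z lie on a common line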