The numbers $a$, $b$ and $c$ are real. Prove that at least one of the three numbers $$(a+b+c)^2 -9bc \hspace{1cm} (a+b+c)^2 -9ca \hspace{1cm} (a+b+c)^2-9ab$$ is non-negative. Any hints would be appreciated too.

Hint: If all three numbers were negative, then: $$bc > \left(\frac{a+b+c}{3}\right)^2 \hspace{1cm} ca > \left(\frac{a+b+c}{3}\right)^2 \hspace{1cm} ab > \left(\frac{a+b+c}{3}\right)^2$$ Since the right-hand sides are non-negative, all three products on the left are positive, so we may multiply the three inequalities: $$a^2b^2c^2 > \left(\frac{a+b+c}{3}\right)^6$$ or equivalently: $$\left(\sqrt[3]{abc}\right)^6 > \left(\frac{a+b+c}{3}\right)^6$$ Do you know any inequality you can use here to disprove this?

Alternatively: $$\sum [(a+b+c)^2-9bc]=3(a^2+b^2+c^2-ab-bc-ca)=\frac32\sum (a-b)^2\ge0,$$ whereas if each $(a+b+c)^2-9bc<0$, we would have $$\sum [(a+b+c)^2-9bc]<0,$$ a contradiction.
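A quick symbolic check of the closing identity (a SymPy sketch, not part of the original argument):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
s = (a + b + c)**2
total = (s - 9*b*c) + (s - 9*c*a) + (s - 9*a*b)
# The sum of the three numbers equals (3/2)[(a-b)^2 + (b-c)^2 + (c-a)^2] >= 0,
# so they cannot all be negative.
diff = total - sp.Rational(3, 2) * ((a - b)**2 + (b - c)**2 + (c - a)**2)
print(sp.expand(diff))  # 0
```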
An index of tutorials from Introduction to Differential Calculus (each entry links to a full article):

- The derivative of the hyperbolic cosine function.
- The derivative of the hyperbolic tangent function.
- The derivative of the inverse hyperbolic sine function, with an example.
- The derivative of the inverse hyperbolic secant function, with an example.
- Example: differentiate $${\cosh ^{ - 1}}\left( {{x^2} + 1} \right)$$ with respect to $$x$$.
- The differentials of independent and dependent variables, with some applications of differentials.
- Using differentials to approximate the value of $$\sqrt {49.5} $$.
- Basic differentiation rules: $\frac{d}{dx}\left( c \right) = 0$, $\frac{d}{dx}\left( {x^n} \right) = n{x^{n - 1}}$, $\frac{d}{dx}\left[ {cf\left( x \right)} \right] = cf'\left( x \right)$, ...
- Derivatives of trigonometric functions: $\frac{d}{dx}\left( {\sin x} \right) = \cos x$, $\frac{d}{dx}\left( {\cos x} \right) = -\sin x$, $\frac{d}{dx}\left( {\tan x} \right) = \sec^2 x$, ...
Linear representation theory of symmetric group:S5

This article gives specific information, namely, linear representation theory, about a particular group, namely: symmetric group:S5. View linear representation theory of particular groups | View other specific information about symmetric group:S5

This article describes the linear representation theory of symmetric group:S5, a group of order $120$. We take this to be the group of permutations on the set $\{1,2,3,4,5\}$.

Summary

| Item | Value |
| --- | --- |
| Degrees of irreducible representations over a splitting field (such as $\mathbb{C}$ or $\overline{\mathbb{Q}}$) | 1,1,4,4,5,5,6 (maximum: 6, lcm: 60, number: 7, sum of squares: 120) |
| Schur index values of irreducible representations | 1,1,1,1,1,1,1 (maximum: 1, lcm: 1) |
| Smallest ring of realization for all irreducible representations (characteristic zero) | $\mathbb{Z}$, the ring of integers |
| Smallest field of realization for all irreducible representations, i.e., smallest splitting field (characteristic zero) | $\mathbb{Q}$; hence it is a rational representation group |
| Criterion for a field to be a splitting field | any field of characteristic not equal to 2, 3, or 5 |
| Smallest size splitting field | field:F7, i.e., the field of 7 elements |
Family contexts

| Family name | Parameter values | General discussion of linear representation theory of family |
| --- | --- | --- |
| symmetric group | 5 | linear representation theory of symmetric groups |
| projective general linear group of degree two | field:F5 | linear representation theory of projective general linear group of degree two |

Degrees of irreducible representations

FACTS TO CHECK AGAINST FOR DEGREES OF IRREDUCIBLE REPRESENTATIONS OVER SPLITTING FIELD:
Divisibility facts: degree of irreducible representation divides group order | degree of irreducible representation divides index of abelian normal subgroup
Size bounds: order of inner automorphism group bounds square of degree of irreducible representation | degree of irreducible representation is bounded by index of abelian subgroup | maximum degree of irreducible representation of group is less than or equal to product of maximum degree of irreducible representation of subgroup and index of subgroup
Cumulative facts: sum of squares of degrees of irreducible representations equals order of group | number of irreducible representations equals number of conjugacy classes | number of one-dimensional representations equals order of abelianization

Note that the linear representation theory of the symmetric group of degree four works over any field of characteristic not equal to two or three, and the list of degrees is 1,1,2,3,3.

Interpretation as symmetric group

| Common name of representation | Degree | Corresponding partition | Conjugate partition | Representation for conjugate partition |
| --- | --- | --- | --- | --- |
| trivial representation | 1 | 5 | 1 + 1 + 1 + 1 + 1 | sign representation |
| sign representation | 1 | 1 + 1 + 1 + 1 + 1 | 5 | trivial representation |
| standard representation | 4 | 4 + 1 | 2 + 1 + 1 + 1 | product of standard and sign representation |
| product of standard and sign representation | 4 | 2 + 1 + 1 + 1 | 4 + 1 | standard representation |
| irreducible five-dimensional representation | 5 | 3 + 2 | 2 + 2 + 1 | other irreducible five-dimensional representation |
| irreducible five-dimensional representation | 5 | 2 + 2 + 1 | 3 + 2 | other irreducible five-dimensional representation |
| exterior square of standard representation | 6 | 3 + 1 + 1 | 3 + 1 + 1 | the same representation, because the partition is self-conjugate |

Interpretation as projective general linear group of degree two

Compare and contrast with linear representation theory of projective general linear group of degree two over a finite field. Below, $q$ denotes an odd prime power; here $q = 5$.

| Description of collection of representations | Parameter for describing each representation | How the representation is described | Degree (general odd $q$) | Degree ($q = 5$) | Number (general odd $q$) | Number ($q = 5$) | Sum of squares (general odd $q$) | Sum of squares ($q = 5$) | Symmetric group name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Trivial | -- | -- | 1 | 1 | 1 | 1 | 1 | 1 | trivial representation |
| Sign representation | -- | Kernel is projective special linear group of degree two (in this case, alternating group:A5), image is $\{\pm 1\}$ | 1 | 1 | 1 | 1 | 1 | 1 | sign representation |
| Nontrivial component of permutation representation on the projective line over $\mathbb{F}_q$ | -- | -- | $q$ | 5 | 1 | 1 | $q^2$ | 25 | irreducible five-dimensional representation |
| Tensor product of sign representation and nontrivial component of permutation representation on projective line | -- | -- | $q$ | 5 | 1 | 1 | $q^2$ | 25 | other irreducible five-dimensional representation |
| Induced from one-dimensional representation of Borel subgroup | ? | ? | $q + 1$ | 6 | $(q-3)/2$ | 1 | $(q-3)(q+1)^2/2$ | 36 | exterior square of standard representation |
| Unclear | a nontrivial homomorphism $\varphi:\mathbb{F}_{q^2}^\ast \to \mathbb{C}^\ast$, with the property that $\varphi(x)^{q+1} = 1$ for all $x$, and $\varphi$ takes values other than $\pm 1$; identify $\varphi$ and $\varphi^q$ | unclear | $q - 1$ | 4 | $(q-1)/2$ | 2 | $(q-1)^3/2$ | 32 | standard representation, product of standard and sign |
| Total | NA | NA | NA | NA | $q + 2$ | 7 | $q^3 - q$ | 120 | NA |
Character table

FACTS TO CHECK AGAINST (for characters of irreducible linear representations over a splitting field):
Orthogonality relations: character orthogonality theorem | column orthogonality theorem
Separation results (basically saying the rows are independent and the columns are independent): splitting implies characters form a basis for space of class functions | character determines representation in characteristic zero
Numerical facts: characters are cyclotomic integers | size-degree-weighted characters are algebraic integers
Character value facts: irreducible character of degree greater than one takes value zero on some conjugacy class | conjugacy class of more than average size has character value zero for some irreducible character | zero-or-scalar lemma

| Representation / conjugacy class representative and size | $()$ (size 1) | $(1,2)$ (size 10) | $(1,2)(3,4)$ (size 15) | $(1,2,3)$ (size 20) | $(1,2,3)(4,5)$ (size 20) | $(1,2,3,4,5)$ (size 24) | $(1,2,3,4)$ (size 30) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| trivial representation | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| sign representation | 1 | -1 | 1 | 1 | -1 | 1 | -1 |
| standard representation | 4 | 2 | 0 | 1 | -1 | -1 | 0 |
| product of standard and sign representation | 4 | -2 | 0 | 1 | 1 | -1 | 0 |
| irreducible five-dimensional representation | 5 | 1 | 1 | -1 | 1 | 0 | -1 |
| irreducible five-dimensional representation | 5 | -1 | 1 | -1 | -1 | 0 | 1 |
| exterior square of standard representation | 6 | 0 | -2 | 0 | 0 | 1 | 0 |

Below are the size-degree-weighted characters, i.e., these are obtained by multiplying the character value by the size of the conjugacy class and then dividing by the degree of the representation. Note that size-degree-weighted characters are algebraic integers.

| Representation / conjugacy class representative and size | $()$ (size 1) | $(1,2)$ (size 10) | $(1,2)(3,4)$ (size 15) | $(1,2,3)$ (size 20) | $(1,2,3)(4,5)$ (size 20) | $(1,2,3,4,5)$ (size 24) | $(1,2,3,4)$ (size 30) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| trivial representation | 1 | 10 | 15 | 20 | 20 | 24 | 30 |
| sign representation | 1 | -10 | 15 | 20 | -20 | 24 | -30 |
| standard representation | 1 | 5 | 0 | 5 | -5 | -6 | 0 |
| product of standard and sign representation | 1 | -5 | 0 | 5 | 5 | -6 | 0 |
| irreducible five-dimensional representation | 1 | 2 | 3 | -4 | 4 | 0 | -6 |
| irreducible five-dimensional representation | 1 | -2 | 3 | -4 | -4 | 0 | 6 |
| exterior square of standard representation | 1 | 0 | -5 | 0 | 0 | 4 | 0 |

GAP implementation

The degrees of irreducible representations can be computed using GAP's CharacterDegrees function:

gap> CharacterDegrees(SymmetricGroup(5));
[ [ 1, 2 ], [ 4, 2 ], [ 5, 2 ], [ 6, 1 ] ]

This means that there are 2 irreducible representations of degree 1, 2 of degree 4, 2 of degree 5, and 1 of degree 6.

The characters of all irreducible representations can be computed in full using GAP's CharacterTable function (note that GAP lists the conjugacy classes in its own order, which differs from the tables above):

gap> Irr(CharacterTable(SymmetricGroup(5)));
[ Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, -1, 1, 1, -1, -1, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, -2, 0, 1, 1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, -1, 1, -1, -1, 1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 6, 0, -2, 0, 0, 0, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, 1, 1, -1, 1, -1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, 2, 0, 1, -1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, 1, 1, 1, 1, 1, 1 ] ) ]
WHY? The largest drawback of training a Generative Adversarial Network (GAN) is its instability. In particular, the power of the discriminator greatly affects the performance of the GAN. This paper suggests weakening the discriminator by restricting its functional space in order to stabilize training.

Note: Matrix norms can be defined in various ways. One of them is the induced p-norm, defined as $\|A\|_p = \sup_{x\neq 0}\frac{\|Ax\|_p}{\|x\|_p}$. This measures the largest change in magnitude under the linear transformation $A$. The norm is called the spectral norm when $p = 2$. The spectral norm of a matrix $A$ is its largest singular value, which equals the square root of the largest eigenvalue of the positive semi-definite matrix $A^*A$ ($A^*$ is the conjugate transpose of $A$).

WHAT? The spectral norm of $A$, written $\sigma(A)$, can be defined as follows: $$\sigma(A):=\max_{h\neq 0}\frac{\|Ah\|_2}{\|h\|_2} = \max_{\|h\|\leq 1}\|Ah\|_2$$ Given a layer function $g : h_{in} \rightarrow h_{out}$, the Lipschitz norm $\|g\|_{Lip}$ equals $\sup_{h}\sigma(\nabla g(h))$. If $g$ is a linear layer ($g(h)=Wh$), then $\|g\|_{Lip} = \sup_{h}\sigma(\nabla g(h)) = \sigma(W)$. Since the Lipschitz norms of most non-linear activation functions are bounded by 1, the Lipschitz norm of a discriminator $f$ can be bounded as follows: $$\|f\|_{Lip} \leq \prod_{l=1}^{L+1}\sigma(W^l)$$ Spectral normalization divides the weights of the discriminator by their spectral norm so that the Lipschitz norm of the discriminator is bounded by 1: $$\bar{W}_{SN}(W) := \frac{W}{\sigma(W)}$$ To approximate the spectral norm quickly, we can use the power iteration method: $$\tilde{v} \leftarrow \frac{W^T \tilde{u}}{\|W^T\tilde{u}\|_2}, \qquad \tilde{u} \leftarrow \frac{W \tilde{v}}{\|W\tilde{v}\|_2}, \qquad \sigma(W)\approx \tilde{u}^T W \tilde{v}$$ If we analyze the gradient of the spectrally normalized weights, we can see that the gradient is penalized when the gradient of $W$ is close to the first singular components, preventing the gradient from focusing too much on a single direction.

So? SN showed better overall performance under various hyperparameter settings on CIFAR-10 and STL-10.

Critic: I really need to review linear algebra.
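A minimal NumPy sketch of the power-iteration estimate above (names are illustrative; in practice one iteration per training step, reusing $\tilde{u}$ across steps, is the usual choice):

```python
import numpy as np

np.random.seed(0)

def spectral_norm(W, n_iters=1, u=None):
    """Approximate sigma(W) with the power-iteration updates above."""
    if u is None:
        u = np.random.randn(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v                 # second update multiplies by W, not W^T
        u /= np.linalg.norm(u)
    return u @ W @ v, u           # sigma estimate, plus u to reuse next call

W = np.random.randn(64, 32)
sigma, _ = spectral_norm(W, n_iters=50)
print(np.isclose(sigma, np.linalg.svd(W, compute_uv=False)[0]))  # True
W_sn = W / sigma                  # spectrally normalized weights W_SN
```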
As $5$ is a prime number, $\sqrt{5}$ is an irrational number. Now I am thinking about how to prove: if $r$ is a nonzero rational number, how do we prove that $r\sqrt{5}$ is an irrational number?

I was thinking that since $r$ is a rational number, $r$ can be expressed as a fraction in simplified form, that is, $r = \frac{a}{b}$ with $a,b \in \Bbb{Z}$ and $\gcd(a,b)=1$. So $r\sqrt{5} = \frac{a\sqrt{5}}{b}$, but how can this guarantee the irrationality of $r\sqrt{5}$?

Also, let $c = r\sqrt{5}$; then $c^2 = 5r^2$. If we could prove $c^2$ is a prime number, then its square root $c$ would be irrational and we would be done, but unfortunately $c^2$ is not prime, as it has more than one factor (such as $r^2$ and $5$).
Derivative-Free Optimization for Data Fitting

Calibrating the parameters of complex numerical models to fit real-world observations is one of the most common problems found in industry (finance, multi-physics simulations, engineering, etc.). Let us consider a process that is observed at times $t_i$ and measured with results $y_i$, for $i=1,2,\dots,m$. Furthermore, the process is assumed to behave according to a numerical model $\phi(t,x)$, where $x$ are the parameters of the model. Given that the measurements might be inaccurate and the process might not exactly follow the model, it is beneficial to find model parameters $x$ so that the error of the fit of the model to the measurements is minimized. This can be formulated as an optimization problem in which $x$ are the decision variables and the objective function is the sum of squared errors of the fit at each individual measurement:
\begin{equation} \min_{x\in \mathbb{R}^n} \sum_{i=1}^m{\left(\phi(t_i,x) - y_i\right)^2} \tag{1} \end{equation}

NAG introduces, at Mark 26.1, a model-based derivative-free solver (e04ff) able to exploit the structure of calibration problems. It is part of the NAG Optimization Modelling Suite, which significantly simplifies the interface of the solver and related routines.

Derivative-free Optimization for Least Squares Problems

To solve a nonlinear least squares problem (1), most standard optimization algorithms, such as Gauss–Newton, require derivatives of the model or estimates thereof. They can be computed by:

- explicitly written derivatives
- algorithmic differentiation (see NAG AD tools)
- finite differences (bumping), $\frac{\partial \phi}{\partial x_i} \approx \frac{\phi(x+he_i) - \phi(x)}{h}$

If exact derivatives are easy to compute then using derivative-based methods is preferable. However, explicitly writing the derivatives or applying AD methods might be impossible if the model is a black box. The alternative, estimating derivatives via finite differences, can quickly become impractical or too computationally expensive as it presents several issues:

- Expensive: one gradient evaluation requires at least $n+1$ model evaluations;
- Inaccurate: the size of the model perturbation $h$ greatly influences the quality of the derivative estimates and is not easy to choose;
- Sensitive to noise: if the model is subject to some randomness (e.g. Monte Carlo simulations) or is computed to low accuracy to save computing time, then finite difference estimates will be highly inaccurate;
- Poor utilization of model evaluations: each evaluation is only used for one element of one gradient, and the information is discarded as soon as that gradient is no longer useful to the solver.

These issues can greatly slow down the convergence of the optimization solver or even completely prevent it. In these cases, using a derivative-free solver is the preferable option, as it is:

- able to reach convergence with far fewer function evaluations;
- more robust with respect to noise in the model evaluations.

Illustration on a noisy test case: Consider the following unbounded problem, where $\epsilon$ is random uniform noise in the interval $\left[-\nu,\nu\right]$ and the $r_i$ are the residuals of the Rosenbrock test function $(m = n = 2)$:
\begin{equation} \min_{x\in \mathbb{R}^n} \sum_{i=1}^m{(r_i(x) + \epsilon)^2} \label{eq:rosen} \tag{2} \end{equation}

Let us solve this problem with a Gauss–Newton method combined with finite differences (e04fc) and the new derivative-free solver (e04ff).
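The noise sensitivity of finite differences is easy to reproduce. Below is a small, self-contained illustration (my own sketch, not NAG code) of forward-difference gradients of the noisy Rosenbrock sum of squares; the gradient error grows roughly like $\nu/h$:

```python
import numpy as np

# Forward-difference gradient of f(x) = sum((r_i(x) + eps)^2) with Rosenbrock
# residuals r1 = 10(x2 - x1^2), r2 = 1 - x1, and uniform noise eps in [-nu, nu].
def f(x, nu, rng):
    r = np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
    return np.sum((r + rng.uniform(-nu, nu, size=2)) ** 2)

rng = np.random.default_rng(1)
x, h = np.array([-1.2, 1.0]), 1e-7
# Exact gradient of the noiseless objective, for comparison.
exact = np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                  200.0 * (x[1] - x[0] ** 2)])
for nu in [0.0, 1e-10, 1e-6, 1e-2]:
    g = np.array([(f(x + h * e, nu, rng) - f(x, nu, rng)) / h for e in np.eye(2)])
    print(f"nu = {nu:.0e}   gradient error = {np.linalg.norm(g - exact):.2e}")
```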
We present in the following table the number of model evaluations needed to reach a point within $10^{-5}$ of the actual solution for various noise levels $\nu$ (non-convergence is marked as $\infty$).

| level of noise $\nu$ | 0.0e00 | 1.0e$-$10 | 1.0e$-$08 | 1.0e$-$06 | 1.0e$-$04 | 1.0e$-$02 | 1.0e$-$01 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e04fc | 89 | 92 | 221 | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| e04ff | 29 | 29 | 29 | 29 | 29 | 31 | $\infty$ |

On this example, the new derivative-free solver is both cheaper in terms of model evaluations and far more robust with respect to noise.

References

Powell M J D (2009) The BOBYQA algorithm for bound constrained optimization without derivatives. Report DAMTP 2009/NA06, University of Cambridge. http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf

Zhang H, Conn A R and Scheinberg K (2010) A derivative-free algorithm for least-squares minimization. SIAM J. Optim. 20(6), 3555–3576
WHY? Former neural module networks for VQA depend on a naive semantic parser to unroll the layout of the network. This paper suggests End-to-End Module Networks (N2NMN) to directly learn the layout from the data.

WHAT? The layout policy selects among predefined modules. Instead of using a semantic parser to derive the module layout from the question, N2NMN uses an encoder-decoder structure. An RNN is used as the question encoder and another RNN (with attention) implements the layout policy. Beam search is used to find the layout of maximum probability (a small sketch of the attention step appears at the end of this note): $$u_{ti} = v^{\top}\tanh(W_1 h_i + W_2 h_t), \quad \alpha_{ti} = \frac{\exp(u_{ti})}{\sum_{j=1}^T \exp(u_{tj})}, \quad c_t = \sum_{i=1}^T \alpha_{ti} h_i, \quad p(m^{(t)}\mid m^{(1)},\dots,m^{(t-1)}, q) = \mathrm{softmax}(W_3 h_t + W_4 c_t)$$ The sequence of modules from the decoder is mapped to a syntax tree using Reverse Polish Notation (the post-order traversal). Since the layout policy is not fully differentiable, the REINFORCE algorithm is used to approximate the gradient, with entropy regularization (weight 0.005) to encourage exploration: $$L(\theta) = E_{l\sim p(l|q;\theta)}[\tilde{L}(\theta, l; q, I)], \quad \nabla L \approx \frac{1}{M}\sum_{m=1}^M \left([\tilde{L}(\theta, l_m) - b]\nabla_{\theta}\log p(l_m|q;\theta) + \nabla_{\theta}\tilde{L}(\theta, l_m)\right)$$ Since learning the layout from scratch is challenging, additional knowledge (an expert policy) can be provided for initialization; the KL divergence between the layout policy and the expert policy is added to the loss function.

So? N2NMN showed better results than NMN on SHAPES, CLEVR and VQA. The expert policies for SHAPES and CLEVR are provided with the datasets, and the Stanford parser is used for VQA. The layout policy is shown to learn an appropriate policy. Learning from the expert policy is shown to be even better than cloning it.
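A NumPy sketch of the attention step written out above (shapes and names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def attention_step(H, h_t, W1, W2, v):
    """H: (T, d) encoder states; h_t: (d,) decoder state; returns context c_t."""
    u = np.tanh(H @ W1.T + h_t @ W2.T) @ v   # scores u_ti, shape (T,)
    alpha = np.exp(u - u.max())              # numerically stable softmax
    alpha /= alpha.sum()                     # attention weights alpha_ti
    return alpha @ H                         # c_t = sum_i alpha_ti h_i

T, d = 12, 16
rng = np.random.default_rng(0)
c_t = attention_step(rng.normal(size=(T, d)), rng.normal(size=d),
                     rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                     rng.normal(size=d))
print(c_t.shape)  # (16,)
```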
I read about this simple yet interesting question the other day: can \(n!\) be a square number? Of course, here we implicitly assume that \(n\) is a positive integer. Moreover, we're aware of the trivial case \(n=1\) (and yes, it holds in that case). The problem seems easy at first glance but turns out not to be at all. There are a bunch of ways for you to factorize \(n!\) into prime numbers, to construct a pattern of products in the form of a square number, etc. Some may work, some not.

Original Problem

The best solution is given by Bertrand's postulate, a theorem which states that for any positive integer \(n>1\) we always have at least one prime number \(p\) s.t. \(n<p<2n\). The proof is not easy and the statement is just as strong. With Bertrand's postulate (applied to \(\lfloor n/2 \rfloor\)) we know there is always a prime \(p\) satisfying \(\lfloor n / 2\rfloor < p \le n\), and since \(2p > n\), no other number \(m\le n\) is divisible by \(p\); hence \(p\) divides \(n!\) with exponent exactly one. A square contains every prime factor an even number of times, so we conclude \(n!\) can never be a square number for any \(n>1\).

Generalized Problem

Now that we've successfully arrived at a conclusion for \(n!\), let's try another problem that highly resembles the one we just solved: can a product of \(n\) consecutive integers be a square number? Again we neglect the trivial case \(n=1\), because that is just the question of whether the only number is a square or not. When \(n>1\), the resemblance is apparent: when we consider the first \(n\) positive integers, the problem falls back to the first one. However, how are we going to answer this question in general? An educated guess would be a "no", given the conclusion of the first problem and the straightforward observation for \(n=2\).

When \(n=2\), we're basically asking whether \(k\cdot(k+1)\) can be a perfect square for positive \(k\). It can't, because \(k^2<k\cdot(k+1)<(k+1)^2\) and there's simply no square number strictly between two consecutive squares.

When \(n=3\), it's a little trickier but we can still handle it. The product is \((k-1)k(k+1) = k(k^2-1)\), and by the Euclidean algorithm \(k\) is coprime with both \(k-1\) and \(k+1\), and therefore also with \(k^2-1\). A product of two coprime numbers is a perfect square only if each factor is one, and \(k^2-1\) is never a square for \(k>1\), since it lies strictly between \((k-1)^2\) and \(k^2\). Hence the product cannot be a perfect square.

When \(n=4\), write the four consecutive integers as \(m, m+1, m+2, m+3\) and let \(k=m+\frac32\), so the product can be written as \[ \begin{align} m(m+1)(m+2)(m+3) &= \left(k-\frac{3}{2}\right)\left(k-\frac{1}{2}\right)\left(k+\frac{1}{2}\right)\left(k+\frac{3}{2}\right)\\&= \left(k^2-\frac{9}{4}\right)\left(k^2 - \frac{1}{4}\right) \\&= \left(k^2-\frac{5}{4} - 1\right)\left(k^2 - \frac{5}{4} + 1\right) = \left(k^2- \frac{5}{4}\right)^{\!\!2} - 1 \end{align} \] where \(k^2-\frac54 = m^2+3m+1\) is an integer. So the product is one less than a perfect square, and (again because there is no square strictly between consecutive squares) it cannot itself be a perfect square.

What about \(n=5\)? What about larger and even general \(n>1\)? It turns out to be an extremely hard question to answer, but luckily Paul Erdős and John Selfridge gave a rigorous proof in their paper published back in 1975. The conclusion, namely that the product of any two or more consecutive positive integers is never a perfect square, is merely a special case of their first result. Actually, theorem 1 in the paper states that the product of two or more consecutive positive integers is never a perfect power, i.e. it can never be represented in the form \(p^t\) where \(p\) and \(t\) are positive integers with \(t\ge 2\).
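A brute-force sanity check of the generalized claim (just a scan over small cases, not a proof):

```python
import math

def is_square(x: int) -> bool:
    r = math.isqrt(x)
    return r * r == x

# Products of n >= 2 consecutive positive integers over a small search window.
found = [(k, n)
         for n in range(2, 8)
         for k in range(1, 2000)
         if is_square(math.prod(range(k, k + n)))]
print(found)  # [] -- consistent with the Erdős–Selfridge theorem
```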
[tutorial] Semidefinite Programming for Intensity-Only Estimation of the Transmission Matrix

The possibility of measuring the transmission matrix using intensity-only measurements is a much sought-after feature, as it allows us not to rely on interferometry. Interferometry usually requires laboratory-grade stability that is difficult to obtain in real-world applications. Typically, we want to be able to retrieve the transmission matrix from a set of pairs composed of input masks and output intensity patterns. However, this problem, which corresponds to a phase retrieval problem, is not convex, hence difficult to solve using standard techniques. The idea proposed in [I. Waldspurger et al., Math. Program (2015)] is to relax some constraints to approximate the problem by a convex one that can be solved using the semidefinite programming approach. I briefly detail the approach and provide an example of the procedure to reconstruct the transmission matrix using Python. A Jupyter notebook can be found on my Github account: semidefiniteTM_example.ipynb.

Context

Measuring the full complex transmission matrix requires access to the phase of the optical field. While non-interferometric approaches exist, they usually reduce the resolution, which is detrimental for complex media applications where the speckle pattern shows high-spatial-frequency fluctuations. Other methods require measuring the intensity pattern at different planes, adding more constraints on the experimental setup. Ideally, we want to be able to reconstruct the transmission matrix from a set of pairs, each consisting of one input field and the corresponding output intensity pattern. Various approaches have been proposed, in particular statistical machine learning, deep learning, and semidefinite programming. We focus here on the semidefinite programming approach. It was first proposed in [I. Waldspurger et al., Math. Program (2015)] and later demonstrated experimentally in [N'Gom et al., Sci. Rep. (2017)] to measure the transmission matrix of a scattering medium.

The mathematical problem

Let's consider a linear medium with transmission matrix \(\mathbf{H}\) of size \(M\times N\) that links the input field \(x\) to the output one \(y\). The \(j^\text{th}\) row \(H_j\) of the transmission matrix corresponds to the effect of the different input elements on the \(j^\text{th}\) output measurement point, with field \(y_j\). The reconstruction of each row of the matrix can be treated independently, so we consider only the output pixel \(j\) in the following. We assume we have at our disposal a set of input/output pairs \(\left\{X^k,\lvert Y_j^k\rvert\right\}\), with \(k \in [1\ldots P]\), where \(X^k\) is a complex vector corresponding to an input wavefront, \(Y_j^k=H_j X^k= \lvert Y_j^k\rvert e^{i\Phi_k}\) is the corresponding output complex field, and \(P\) is the number of elements in the data set. \(\mathbf{X}\) is the matrix containing all the input training masks, and \(Y_j\) is the vector containing the output fields at the target point \(j\) for all input masks.
As we only have access to the amplitude \(\lvert Y_j\rvert\) of the output field, we want to solve: $$ \begin{aligned} \text{min.} & \quad &\lVert H_j\mathbf{X}-\lvert Y_j\rvert e^{i\Phi_j}\rVert_ 2^2 \\ \text{subject to} & \quad &H_j \in \mathbb{C}^N, \, \Phi_j \in [0,2\pi]^P \end{aligned}\tag{1} $$ It is shown in [I. Waldspurger et al., Math. Program (2015)] that this expression can be rearranged to become $$ \begin{aligned} \text{min.}& \quad & u^\dagger \mathbf{Q} u = \mathrm{Tr}\left(\mathbf{Q}u u^\dagger\right)\\ \text{subject to} & \quad & u\in\mathbb{C}^P,\,\lvert u_k\rvert=1 \quad \forall k\in[1..P]\\ \text{with } & \quad & \mathbf{Q} = \text{diag}(\lvert Y_j\rvert)\left(\mathbf{I}-\mathbf{X}\mathbf{X}^p\right) \text{diag}(\lvert Y_j\rvert) \end{aligned}\tag{2} $$ \({}^p\) stands for the Moore-Penrose pseudoinverse and \({}^\dagger\) for the transpose conjugate. The vector \(u\) contains the phase of the \(j^\text{th}\) output point for all the elements of the data set, so that \(u_k=e^{i\Phi_k}\). The equivalence between these two expressions is guaranteed by the fact that \(\mathbf{Q}\) is a positive semidefinite Hermitian matrix. By construction, \(\mathbf{U}=u u^\dagger\) is of rank \(1\). By relaxing this constraint, the problem can be written as a convex problem that can be solved using semidefinite programming: $$ \begin{aligned} \text{min.}& \quad & \mathrm{Tr}\left(\mathbf{Q}\mathbf{U}\right)\\ \text{subject to} & \quad & \mathbf{U}=\mathbf{U}^\dagger,\, \text{diag}\left(\mathbf{U}\right) = 1, \mathbf{U} \succeq 0\\ \end{aligned}\label{eq:SDP}\tag{3} $$ with \(\mathbf{U} \succeq 0\) denoting the positive semidefinite constraint on \(\mathbf{U}\). We can now use standard convex solvers to find a solution. The difficulty is that \(\mathbf{U}\) is no longer of rank \(1\). To find an approximate solution, we take the first singular vector \(V_0\) of \(\mathbf{U}\), which gives the phase of the output field with good accuracy. Now that we have the complex output field, we can use a pseudo-inversion to retrieve the transmission matrix: $$ H_j = \lvert Y_j \rvert V_0\mathbf{X}^p\tag{4} $$

Python implementation

The only important part concerns solving the convex problem. In Python, CVXPY allows writing the problem in a natural way, i.e. exactly as we wrote it in equation \ref{eq:SDP}. The Matlab module CVX does the same thing. The part of the code that corresponds to solving the convex problem is very concise; a sketch is given after the remarks below. A full Python code that simulates the reconstruction of a random transmission matrix using this procedure in the presence of noise can be found here.

Remarks

Using this approach, as is also the case when using machine learning, the output pixels are treated independently. For each output pixel, the system is not sensitive to a global phase shift or conjugation. That implies that the relative phase between the rows of the matrix is not known. That is not detrimental for the generation of output intensity patterns, but can be important otherwise. It would then require an additional measurement to find these relative phases.
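Here is the promised sketch of the convex step, written with CVXPY; the matrix \(Q\) below is built from random toy data purely for illustration (equation (2) defines the real one), so sizes and names are assumptions:

```python
import cvxpy as cp
import numpy as np

# Toy data: P random complex input masks (rows of X) and intensity-only
# measurements |Y_j| for one output pixel; Q as in equation (2).
rng = np.random.default_rng(0)
P, N = 16, 8
X = rng.normal(size=(P, N)) + 1j * rng.normal(size=(P, N))
Yabs = np.abs(X @ (rng.normal(size=N) + 1j * rng.normal(size=N)))
Q = np.diag(Yabs) @ (np.eye(P) - X @ np.linalg.pinv(X)) @ np.diag(Yabs)
Q = (Q + Q.conj().T) / 2                      # enforce Hermitian symmetry

# Semidefinite program (3): Hermitian U, unit diagonal, U >= 0 (PSD).
U = cp.Variable((P, P), hermitian=True)
problem = cp.Problem(cp.Minimize(cp.real(cp.trace(Q @ U))),
                     [U >> 0, cp.diag(U) == 1])
problem.solve()

# Rank-one reduction: the leading eigenvector of U estimates the phases u_k.
phases = np.exp(1j * np.angle(np.linalg.eigh(U.value)[1][:, -1]))
```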
The information required to estimate evolutionary rates is efficiently summarized in the early (but still useful) phylogenetic comparative method of independent contrasts (Felsenstein 1985). Independent contrasts summarize the amount of character change across each node in the tree, and can be used to estimate the rate of character change across a phylogeny. There is also a simple mathematical relationship between contrasts and maximum-likelihood rate estimates that I will discuss below. We can understand the basic idea behind independent contrasts if we think about the branches in the phylogenetic tree as the historical "pathways" of evolution. Each branch on the tree represents a lineage that was alive at some time in the history of the Earth, and during that time experienced some amount of evolutionary change. We can imagine trying to measure that change initially by comparing sister taxa. We can compare the trait values of the two sister taxa by finding the difference in their trait values, and then compare that to the total amount of time they have had to evolve that difference. By doing this for all sister taxa in the tree, we will get an estimate of the average rate of character evolution (Figure 4.1A). But what about deeper nodes in the tree? We could use other non-sister species pairs, but then we would be counting some branches in the tree of life more than once (Figure 4.1B). Instead, we use a "pruning algorithm" (Felsenstein 1985; Felsenstein 2004), chopping off pairs of sister taxa to create a smaller tree (Figure 4.1C). Eventually, all of the nodes in the tree will be trimmed off and the algorithm will finish. Independent contrasts provide a way to generalize the approach of comparing sister taxa so that we can quantify the rate of evolution throughout the whole tree. A more precise algorithm describing how Phylogenetic Independent Contrasts (PICs) are calculated is provided in Box 4.2, below (from Felsenstein 1985). Each contrast can be described as an estimate of the direction and amount of evolutionary change across the nodes in the tree. PICs are calculated from the tips of the tree towards the root, as differences between trait values at the tips of the tree and/or calculated average values at internal nodes. The differences themselves are sometimes called "raw contrasts" (Felsenstein 1985). These raw contrasts will all be statistically independent of each other under a wide range of evolutionary models. In fact, as long as each lineage in a phylogenetic tree evolves independently of every other lineage, regardless of the evolutionary model, the raw contrasts will be independent of each other. However, people almost never use raw contrasts because they are not identically distributed; each raw contrast has a different expected distribution that depends on the model of evolution and the branch lengths of the tree. In particular, under Brownian motion we expect more change on longer branches of the tree. Felsenstein (1985) divided the raw contrasts by their expected standard deviation under a Brownian motion model, resulting in standardized contrasts. These standardized contrasts are, under a BM model, both independent and identically distributed, and can be used in a variety of statistical tests. Note that we must assume a Brownian motion model in order to standardize the contrasts; results derived from the contrasts, then, depend on this Brownian motion assumption.
Box 4.2: Algorithm for Phylogenetic Independent Contrasts

One can calculate PICs using the algorithm from Felsenstein (1985). I reproduce this algorithm below. Keep in mind that this is an iterative algorithm: you repeat the five steps below once for each contrast, or n − 1 times over the whole tree (see Figure 4.1C as an example).

1. Find two tips on the phylogeny that are adjacent (say nodes i and j) and have a common ancestor, say node k. Note that the choice of which node is i and which is j is arbitrary. As you will see, we will have to account for this "arbitrary direction" property of PICs in certain analyses where we use them!

2. Compute the raw contrast, the difference between the two tip values: \[c_{ij} = x_i - x_j \label{4.1}\] Under a Brownian motion model, \(c_{ij}\) has expectation zero and variance proportional to \(v_i + v_j\).

3. Calculate the standardized contrast by dividing the raw contrast by its expected standard deviation: $$ s_{ij} = \frac{c_{ij}}{\sqrt{v_i + v_j}} = \frac{x_i - x_j}{\sqrt{v_i + v_j}} \label{4.2}$$ Under a Brownian motion model, this contrast follows a normal distribution with mean zero and variance equal to the Brownian motion rate parameter \(\sigma^2\).

4. Remove the two tips from the tree, leaving behind only the ancestor k, which now becomes a tip. Assign it the character value: $$ x_k = \frac{(1/v_i)x_i+(1/v_j)x_j}{1/v_i+1/v_j} \label{4.3}$$ It is worth noting that \(x_k\) is a weighted average of \(x_i\) and \(x_j\), but it does not represent an ancestral state reconstruction, since the value is only influenced by species that descend directly from that node and not by other relatives.

5. Lengthen the branch below node k by increasing its length from \(v_k\) to \(v_k + v_i v_j/(v_i + v_j)\). This accounts for the uncertainty in assigning a value to \(x_k\).

As mentioned above, we can apply the algorithm of independent contrasts to learn something about rates of body size evolution in mammals. We have a phylogenetic tree with branch lengths as well as body mass estimates for 49 species (Figure 4.2). If we ln-transform mass and then apply the method above to our data on mammal body size, we obtain a set of 48 standardized contrasts. A histogram of these contrasts is shown as Figure 4.2 (data from Garland 1992).

Figure 4.2. Histogram of PICs for ln-transformed mammal body mass on a phylogenetic tree with branch lengths in millions of years (data from Garland 1992). Image by the author, can be reused under a CC-BY-4.0 license.

Note that each contrast is an amount of change, \(x_i - x_j\), scaled by the branch lengths \(v_i + v_j\), which are a measure of time. Thus, PICs from a single trait can be used to estimate \(\sigma^2\), the rate of evolution under a Brownian model. The PIC estimate of the evolutionary rate is: $$ \hat{\sigma}_{PIC}^2 = \frac{\sum{s_{ij}^2}}{n-1} \label{4.4}$$ That is, the PIC estimate of the evolutionary rate is the average of the n − 1 squared contrasts. The sum is taken over all standardized independent contrasts \(s_{ij}\), i.e. over all \((i, j)\) pairs of sister branches in the phylogenetic tree. For a fully bifurcating tree with n tips, there are exactly n − 1 such pairs. If you are statistically savvy, you might note that this formula looks a bit like a variance. In fact, if we state that the contrasts have a mean of 0 (which they must, because Brownian motion has no overall trends), then this is a formula to estimate the variance of the contrasts.
If we calculate the mean sum of squared contrasts for the mammal body mass data, we obtain a rate estimate of $\hat{\sigma}_{PIC}^2$ = 0.09. We can put this into words: if we simulated mammalian body mass evolution under this model, we would expect the variance across replicated runs to increase by 0.09 per million years. Or, in more concrete terms, if we think about two lineages diverging from one another for a million years, we can draw changes in ln-body mass for both of them from a normal distribution with a variance of 0.09. Their difference, then, which is the amount of expected divergence, will be normal with a variance of 2 ⋅ 0.09 = 0.18. Thus, with 95% confidence, we can expect the two species to differ maximally by two standard deviations of this distribution, $2 \cdot \sqrt{0.18} = 0.85$. Since we are on a log scale, this amount of change corresponds to a factor of $e^{0.85} \approx 2.3$, meaning that one species will commonly be about twice as large (or small) as the other after just one million years.
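A compact Python sketch of the Box 4.2 algorithm (my own illustration; the tree encoding and names are assumptions, not from the chapter):

```python
import numpy as np

# Trees are nested tuples (left, right, v_left, v_right); a leaf is a bare
# trait value.
def pic(node):
    """Return (node value, extra branch length, list of standardized contrasts)."""
    if not isinstance(node, tuple):              # leaf: trait value
        return float(node), 0.0, []
    left, right, vl, vr = node
    xl, dl, cl = pic(left)
    xr, dr, cr = pic(right)
    vl, vr = vl + dl, vr + dr                    # step 5 applied to child branches
    s = (xl - xr) / np.sqrt(vl + vr)             # steps 2-3: standardized contrast
    xk = (xl / vl + xr / vr) / (1 / vl + 1 / vr) # step 4: weighted average
    return xk, vl * vr / (vl + vr), cl + cr + [s]  # step 5: extra length below k

# Example: tree ((A:1, B:1):0.5, (C:2, D:1):0.5) with tip values 1, 3, 5, 4.
tree = ((1.0, 3.0, 1.0, 1.0), (5.0, 4.0, 2.0, 1.0), 0.5, 0.5)
_, _, contrasts = pic(tree)
print(contrasts)                                  # n - 1 = 3 contrasts
print(sum(c * c for c in contrasts) / len(contrasts))  # sigma^2 hat, eq. 4.4
```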
I was reading and trying to reproduce the results in the arXiv preprint of Periodic Gabor Functions with Biorthogonal Exchange: A Highly Accurate and Efficient Method for Signal Compression by Asaf Shimshovitz et al., and a few questions arose. In brief, the paper outlines a new (as of 2012) family of wavelet-like expansions (closely related to the von Neumann lattice functions) which are time-frequency localized, and I have questions regarding how badly the method can fail when it is potentially used in signal compression or quantum-chemical calculations.

Most of the details of the paper can be stripped away to give a simple linear algebra procedure, as follows. Given a vector $\mathbf{v}\in\mathbb{C}^n$ and a (not necessarily orthogonal) basis $\mathbf{G}=\{\mathbf{g}_1,\mathbf{g}_2,...,\mathbf{g}_n\}$ of column vectors spanning $\mathbb{C}^n$, the following identity holds: $$\mathbf{B}\mathbf{G}^\dagger \mathbf{v}=\mathbf{v}$$ where $\mathbf{B}=(\mathbf{G}^\dagger)^{-1}$. In Dirac notation, this trivial statement reads as $$\sum_{j=1}^n|\mathbf{b}_j\rangle\langle\mathbf{g}_j|\mathbf{v}\rangle=\sum_{j=1}^n\beta_j|\mathbf{b}_j\rangle=|\mathbf{v}\rangle$$ which can be interpreted as saying that given any vector $\mathbf{v}$ and basis $\mathbf{G}$, the expansion coefficients $\beta_j$ of $\mathbf{v}$ in the basis $\mathbf{B}$ biorthogonal to $\mathbf{G}$ are simply given by the inner products $\langle\mathbf{g}_j|\mathbf{v}\rangle$.

With a clever choice of $\mathbf{G}$, one often finds that the majority of the inner products $\langle\mathbf{g}_j|\mathbf{v}\rangle$ nearly vanish, and one is naively tempted to approximate $\mathbf{v}$ as $$|\mathbf{v}\rangle\approx\sum_{\substack{j=1 \\ |\langle\mathbf{g}_j|\mathbf{v}\rangle|\geq\delta}}^n|\mathbf{b}_j\rangle\langle\mathbf{g}_j|\mathbf{v}\rangle$$ where $\delta$ is some compression threshold. In matrix notation, this reads $$\mathbf{v}\approx\mathbf{B}_{tr}(\mathbf{G}_{tr})^\dagger\mathbf{v}$$ where $\mathbf{G}_{tr}=\{\mathbf{g}_j : |\langle\mathbf{g}_j|\mathbf{v}\rangle|\geq\delta\}$ is the basis $\mathbf{G}$ truncated to only include the basis columns which have considerable overlap with $\mathbf{v}$, and $\mathbf{B}_{tr}$ is the corresponding truncated version of $\mathbf{B}$. This approximation is decent but not optimal; the optimal approximation is actually $$\mathbf{v}\approx\mathbf{B}_{tr}'(\mathbf{G}_{tr})^\dagger\mathbf{v}$$ where $\mathbf{B}_{tr}'=(\mathbf{G}_{tr}^\dagger)^+$, with $+$ denoting the Moore-Penrose pseudoinverse; a proof of optimality is simple and relies on basic facts about pseudoinverses, so I'll skip it. The use of $\mathbf{B}_{tr}'$ instead of $\mathbf{B}_{tr}$ is referred to as the Porat (Genossar) correction in the paper.

Simply put, my question is: are there any reliable bounds on the error inherent in this approximation, given knowledge of the coefficients $\langle\mathbf{g}_j|\mathbf{v}\rangle$ and the compression threshold $\delta$? For example, if we choose $\delta$ so that $\frac{\sum_{|\langle\mathbf{g}_j|\mathbf{v}\rangle|\geq\delta}|\langle\mathbf{g}_j|\mathbf{v}\rangle|^2}{\sum_{j=1}^n|\langle\mathbf{g}_j|\mathbf{v}\rangle|^2}=0.99$, can we say that the truncation-compressed vector is 99% close to the original? Or is it much worse than that? Or can we say nothing? If we can say nothing, is there any further info which would allow us to say something useful?
The paper gives no statements regarding error bounds, which I assume the authors had set aside to figure out later; this is probably largely due to the fact that the original motivation for the von Neumann wavelet procedure was quantum mechanical calculations rather than signal compression applications. Nevertheless, knowing bounds on how reliable the technique is seems like an essential facet of any compression method.

For now, let's assume that the basis vectors $\mathbf{g}_j$ of $\mathbf{G}$ are normalized to one ($\mathbf{g}_j^\dagger\mathbf{g}_j=1$). My first thought on attacking this was that the error can be quantified by the expression $$\epsilon:=\frac{|\mathbf{v}-\mathbf{B}_{tr}'\mathbf{G}_{tr}^\dagger\mathbf{v}|^2}{|\mathbf{v}|^2}=\frac{|(I-\mathbf{A}^+\mathbf{A})\mathbf{v}|^2}{|\mathbf{v}|^2}=\frac{|P_{R(\mathbf{G}_{tr})^\perp}\mathbf{v}|^2}{|\mathbf{v}|^2}$$ where $\mathbf{A}=\mathbf{G}_{tr}^\dagger$ and $P_{R(\mathbf{G}_{tr})^\perp}$ is the orthogonal projector onto the orthogonal complement of the subspace spanned by the columns of $\mathbf{G}_{tr}$. But I'm stuck here, and not sure where to go (I'm not particularly good at math). Does anyone have any helpful clues or intuition?

As an unrelated but interesting bit of visual imagery, here is a picture of the signal $\frac{k^2}{67108864}+\sin \left(\frac{k^3}{64000000}-\frac{k^2}{4900}+\frac{k}{4}\right) \cos \left(\frac{k}{3}\right)$ for $1\leq k\leq 16384$ in the von Neumann domain (brightness corresponds to the absolute value of the coefficient and color corresponds to the phase angle in the complex plane, with red being $1$, green being $i$, cyan being $-1$, and purple being $-i$, with the time axis on the bottom and the frequency axis on the side):
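For intuition (not an answer), here is a small numerical sketch of the truncated, Porat-corrected reconstruction and the error ratio $\epsilon$ on a random normalized basis; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
G /= np.linalg.norm(G, axis=0)                   # normalize columns g_j
v = rng.normal(size=n) + 1j * rng.normal(size=n)

coeffs = G.conj().T @ v                          # <g_j | v>
keep = np.abs(coeffs) >= np.quantile(np.abs(coeffs), 0.75)  # threshold delta
G_tr = G[:, keep]

# Porat/Genossar correction: B'_tr = (G_tr^dagger)^+, so the reconstruction
# is the least-squares-optimal one given the kept coefficients.
v_tr = np.linalg.pinv(G_tr.conj().T) @ (G_tr.conj().T @ v)
eps = np.linalg.norm(v - v_tr) ** 2 / np.linalg.norm(v) ** 2
kept_energy = np.sum(np.abs(coeffs[keep]) ** 2) / np.sum(np.abs(coeffs) ** 2)
print(eps, kept_energy)                          # compare the two ratios
```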
I won't give the full mapping from poles (1)/zeroes (0) to the frequency response, but I think I can explain the connection between frequency and zero/infinite response: why you have infinite/zero response at $e^{-jw}=z_\text{zero/pole}$, i.e. what $e^{-jw}$ has to do with $z$.

The general form of the linear system is $$y_n+a_1y_{n-1}+a_2y_{n-2}+\cdots = b_0 x_n + b_1x_{n-1}+b_2x_{n-2}+\cdots, $$ which can be solved in z-form as $$Y(z) = {(b_0 + b_1 z + b_2 z^2+\cdots)\over(1+a_1z+a_2z^2+\cdots)}X(z) = H(z)X(z) = {(1-z_0z)(1-z_1z)\cdots \over(1-p_0z)(1-p_1z)\cdots}X(z).$$ (Note the convention here: $z$ stands for a single clock delay, so $X(z)=\sum_n x_n z^n$.) In the end, the series of binomial products $(1-z_0 z)\cdots{1\over 1-p_0 z}$ can be considered as a series of systems, where the first output is the input of the next. I would like to analyze the effect of a single pole and a single zero.

Let's single out the first zero, considering it the transfer function, so that the rest of $H(z)X(z)$ is the input signal: $Y(z)=(1-z_0z)X(z),$ which corresponds to some $y_n = b_0x_n + b_1x_{n-1}.$ Let's take $b_0=b_1=1$ for simplicity, i.e. $y_n = x_n + x_{n-1}$. We want to determine the effect of the system $H(z)$ upon a harmonic signal. That is, the input is going to be the test signal $$x_n = e^{jwn}\overset{z}{\leftrightarrow} 1 + e^{jw}z + e^{2jw} z^2 + \cdots = {1\over 1-e^{jw}z} = X(z).$$ The response is going to be $$y_n = x_n + x_{n-1}\big|_{x_n = e^{jwn}} = e^{jwn} + e^{jw(n-1)} = e^{jwn}(1+e^{-jw}),$$ that is, $1+e^{-jw}$ is the transfer function, or $Y(z) = {(1+z)\over (1-e^{jw}z)} =(1+z)X(z)$. Please note that $1+z$ basically says that the output is the sum of the input signal and the shifted signal, since a single $z$ stands for a single clock delay in the time domain.

Now, $H(jw) = 1 + e^{-jw} = e^{-jw/2}(e^{jw/2}+e^{-jw/2}) =e^{-jw/2}\,2\cos(w/2)$. The cosine makes it behave like a low-pass filter: $$ \begin{cases} w=0 & \Rightarrow &H(j0) = 1\cdot 2 \cos (0) = 2\\ w=\pi & \Rightarrow & H(j\pi) = e^{-j\pi/2}\,2\cos(\pi/2) = 0 \end{cases}$$ It is also a good lesson that $2 \cos \alpha = e^{j\alpha} + e^{-j\alpha}$, because you will supply real signals rather than complex imaginary ones in real life.

The LTI system with impulse response $\{1,-1\}$, i.e. $y_n=x_n -x_{n-1}\big|_{x_n=e^{jwn}} =e^{jwn}(1-e^{-jw})$, has transfer function $H(jw) = 1-e^{-jw} = e^{-jw/2}(e^{jw/2}-e^{-jw/2})=e^{-jw/2}\,2j\sin(w/2)$, which has a zero at $w=0$ since $\sin(0)=0$; it can also be found from the frequency response: $$H(jw)=1-e^{-jw} = 0 \Rightarrow e^{-jw} = 1 = e^0 \Rightarrow w = 0.$$

Following the textbooks, I can spot the surprising coincidence between the transfer function $H(z) = 1\pm z$ and the frequency response $H(jw)=1\pm e^{-jw}$. That is, $z$ somehow corresponds to $e^{-jw}$, which is important for zero/pole analysis. I read it like this: the z-factor stands for a clock shift, and $y_n = x_n \pm x_{n-1}=0$ means that, to get zero response, the next sample must be $\mp$ the previous one, i.e. we need $1\pm z=0$ in front of $X(z)$. But the frequency-domain basis functions $e^{jwn}$ evolve by multiplying the current value $e^{jw(n-1)}$ by $e^{jw}$ every clock. Therefore, we have $e^{jwn}(1 \pm e^{-jw}) =0$ as the condition for identically zero output. The latter $1\pm e^{-jw}$ matches perfectly with the transfer-function zero condition $1\pm z=0$.

In general, a single-zero LTI system is given by $y_n = b_0 x_n + b_1 x_{n-1}$, or $$Y(z) = (b_0 + b_1 z)X(z) = (b_0+ b_1z)(x_0+x_1z+x_2z^2+\cdots) = b_0x_0 + (b_0 x_1 + b_1 x_0)z + (b_0 x_2 + b_1 x_1)z^2 + \cdots. $$ The transfer function vanishes when $b_0+b_1 z = 0$, i.e.
when $z=-b_0/b_1$. The frequency response, on the other hand, is $$y_n\big|_{x_n = e^{jwn}} = b_0 e^{jwn} + b_1 e^{jw(n-1)} = e^{jwn}(b_0+b_1 e^{-jw})= e^{jwn}\,b_0(1-z_0 e^{-jw}),$$ which goes to zero when $1-z_0 e^{-jw} = 0$, or $e^{-jw} = 1/z_0$, which matches the computation for $z$ if $z=e^{-jw}$.

The only thing that bothers me is that a fixed-amplitude complex exponential is not enough for the frequency (harmonic) basis. You cannot obtain an arbitrary ratio $1/z_0= e^{-jw}$ by choosing an appropriate frequency $w$; a decaying harmonic signal is needed for that. That is weird, because I have heard that any signal can be represented as a sum of (constant-amplitude) sines and cosines. But, anyway, we see that a system zero encodes a relationship between adjacent samples of the input signal: when they satisfy it, the output is identically 0, and when the zero lies on the unit circle we can choose a frequency $w$ so that $z = 1/z_0 = e^{-jw}$.

Now, what about the poles? Let's single out a single pole $a$. The system has the form $y_n = a y_{n-1} + (x_n + x_{n-1} + \cdots)$ and, under the assumption $y_0 = 0$, has z-transform $Y(z) = X(z)/(1-az)$. The feedback $a$ is equivalent to the infinite impulse response $\{1,a,a^2,\ldots\} \overset{z}{\leftrightarrow} 1 + az + a^2z^2 + \cdots = 1/(1-az)$. It says that the response is infinite when $z=1/a$. What does it mean if we apply the test signal $$x_n=e^{jwn}\overset{z}{\leftrightarrow} X(z) = 1+e^{jw}z+e^{2jw}z^2+\cdots = {1\over 1-e^{jw}z}$$ to our system? We'll get $Y(z)={1\over 1-az}{1\over 1-e^{jw}z},$ or $$y_n = e^{jwn} + ae^{jw(n-1)} +a^2e^{jw(n-2)} +\cdots = e^{jwn}(1+ae^{-jw} + a^2e^{-2jw} + \cdots) ={e^{jwn}\over 1-ae^{-jw}}.$$ That is, the frequency response is $1/(1-ae^{-jw})$, which goes to infinity when $e^{-jw}=1/a$, the same as $z_\text{pole}$ above: $e^{-jw}=z_\text{pole}=1/a$. But again, you cannot always arrive at the pole $1/a$ by adjusting the frequency $w$ alone; the frequency basis functions must have decaying amplitude in general and look like $(ke^{jw})^n$.

That is, the zeroes and poles of the transfer function $H(z)$ happen to match the zeroes and poles of the frequency response $H(jw)$, which is really amazing. I noticed that this is related to the relation between adjacent samples, $e^{jwn}/e^{jw(n-1)} = e^{jw} = 1/z_\text{zero}$ in the case of zeroes. The fact that the basis $e^{jwn}$ evolves multiplicatively over time, like the system with feedback $a$, also seems to be the key to the matching between $e^{jw}$ and $z_\text{pole}$. It also seems important that you cannot simply look for an appropriate frequency of $e^{jwn}$: the basis function must also have an adjustable amplitude factor $k^n$. I would be happy if anybody could explain the same more concisely or more crisply.
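A small SciPy check of the two FIR examples above (my sketch; `freqz` uses the usual $z^{-1}$ convention, which matches the $1\pm e^{-jw}$ responses):

```python
import numpy as np
from scipy.signal import freqz

# y_n = x_n + x_{n-1}: |H| = 2 cos(w/2), a low-pass with a zero at w = pi.
w, h_sum = freqz([1, 1], worN=8)
print(np.allclose(np.abs(h_sum), 2 * np.cos(w / 2)))  # True

# y_n = x_n - x_{n-1}: |H| = 2 |sin(w/2)|, with a zero at w = 0.
w, h_diff = freqz([1, -1], worN=8)
print(np.allclose(np.abs(h_diff), 2 * np.abs(np.sin(w / 2))))  # True
```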
So I've just had my first exam (it went pretty well), but I ran into this as the first part of the last question: $$\int \frac{\cos x}{\sin x+\cos x}\,dx$$ I had a look on Wolfram after the exam and it advised multiplying top and bottom by $\sec^3x$. Is there another way to tackle this if you didn't know that trick? I find it hard to believe I was meant to know it, and it was disproportionately harder than any type of integration question I've come across when practicing. I couldn't get anywhere when trying to solve it. Thanks.
I need to show that a non-degenerate skew-symmetric real matrix $A$ of even degree is similar to the block diagonal matrix $$\begin{pmatrix} D_1 &0&\dots &0\\ 0 &\ddots &0 &\vdots\\ \vdots&0 &\ddots&0\\ 0 &\dots &0&D_n \end{pmatrix}, \text{where } D_i=\begin{pmatrix} 0&-\lambda_i\\ \lambda_i &0\end{pmatrix}.$$ So far, I observed that the matrix $iA$ is Hermitian, so it can be diagonalized, and hence $A$ can also be diagonalized, with purely imaginary eigenvalues. By skew-symmetry, the eigenvalues come in pairs $\pm i\lambda_i$. I guess that the $\lambda_i$ in $D_i$ and in the diagonal matrix coincide, as each $D_i$ can be diagonalized by $S=\begin{pmatrix}i&-i\\1&1 \end{pmatrix}$: $D_i=S\begin{pmatrix}-i\lambda_i&0\\0&i \lambda_i \end{pmatrix}S^{-1}$. It still remains to show that we can find real matrices which perform the similarity transformation, i.e. we need to find a real basis in which the bilinear form has the block diagonal form. I read that this basis can even be chosen orthonormal. Why is that?

Edit: I already looked at the similar question Proof of the Wirtinger inequality, where it is stated that one has to "Follow[] the usual linear algebra protocol (taking real and imaginary parts of the complex eigenvectors)". But I have no clue what that means.
The set of all values of $m$ for which $mx^2 - 6mx + 5m + 1 > 0$ for all real $x$ is? The answer given is $0\le m<\frac14$.

My working: $D\ge 0$ $\Rightarrow (-6m)^2 -4m(5m+1)\ge 0$ $\Rightarrow m(4m-1)\ge 0$ $\Rightarrow$ either $m\ge \frac14$ or $m\le 0$. Where am I going wrong?

You need $mx^2-6mx+5m+1>0$ for all real $x$, so $mx^2-6mx+5m+1$ must never be zero for real $x$: the discriminant should be negative, not non-negative. You also need $m$ to be positive. This gives $0<m<\dfrac 14$. However, when $m=0$, $mx^2-6mx+5m+1\equiv 1$ is positive as well. Therefore, $0\le m<\dfrac14$.

In other words: if we want $mx^2 - 6mx + 5m + 1 > 0$ then we do not want any real roots! That is, the discriminant should satisfy $\Delta<0$, not $\Delta\ge 0$. Therefore we have $m(4m-1)<0$, from which you can easily conclude the result. (With $D<0$ the quadratic has no roots at all, and is thus always positive or always negative; the result thus obtained confirms that $m$ is positive after all.)

Another option: write $y=m(x^2-6x+5) +1>0$.

0) $m=0$: then $y\equiv 1>0$. ✓
1) $m>0$: a parabola opening upward, with minimum where $y'=m(2x-6)=0$, i.e. at $x=3$, giving $y_{\min}=m(9-18+5)+1= -4m+1$. We require $y_{\min}= -4m+1>0$, or $m<1/4$. Combining: $0 \le m < 1/4$.
2) $m<0$ is ruled out (parabola opening downward).
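A quick symbolic confirmation of the combined answer (a SymPy sketch):

```python
import sympy as sp

m, x = sp.symbols('m x', real=True)
expr = m*x**2 - 6*m*x + 5*m + 1

disc = sp.discriminant(expr, x)        # b^2 - 4ac
print(sp.factor(disc))                 # 4*m*(4*m - 1)
# Positive for all x: either m = 0 (expr == 1), or m > 0 with disc < 0.
print(sp.solve([disc < 0, m > 0], m))  # (0 < m) & (m < 1/4)
```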
As this is a separable problem, I would suggest doing the entire solution analytically instead of numerically. The separation of variables can be performed along the same lines as in this closely related answer. I had to modify the steps slightly to get the variables r and x to separate properly, so I'll list the steps here.

First define the PDE and the separation ansatz. In pde2, I use Expand to get additional cancellations that Simplify alone doesn't achieve. To enforce the boundary conditions, I use Solve after determining the general solution to each separated function (called ax[x] and ar[r] here). The separation constant is called $\kappa^2$ in this calculation:

pde = Function[c, ω^2 (D[c, {r, 2}] + D[c, r]/r) + D[c, {x, 2}] - 2 λ D[c, x] - 4 ϕ^2 c];
ansatz = ar[r] ax[x];
pde2 = Expand[Apply[Subtract, pde[ansatz]/ansatz == 0]]

$$\frac{\omega ^2 \text{ar}''(r)}{\text{ar}(r)}+\frac{\omega ^2 \text{ar}'(r)}{r\, \text{ar}(r)}+\frac{\text{ax}''(x)}{\text{ax}(x)}-\frac{2 \lambda\, \text{ax}'(x)}{\text{ax}(x)}-4 \phi^2$$

ar[r] /. First@DSolve[Select[pde2, D[#, x] == 0 &] == κ^2, ar[r], r]
(* ==> BesselJ[0, (I r Sqrt[4 ϕ^2 + κ^2])/ω] C[1] + BesselY[0, -((I r Sqrt[4 ϕ^2 + κ^2])/ω)] C[2] *)

rSolution[r_] = % /. C[2] -> 0
(* ==> BesselJ[0, (I r Sqrt[4 ϕ^2 + κ^2])/ω] C[1] *)

rCoefficients = First@Solve[rSolution[1] == 1, C[1]];

xSolution[x_] = ax[x] /. First@DSolve[Select[pde2, D[#, x] =!= 0 &] == -κ^2, ax[x], x, GeneratedParameters -> B]
(* ==> E^(x (λ - Sqrt[-κ^2 + λ^2])) B[1] + E^(x (λ + Sqrt[-κ^2 + λ^2])) B[2] *)

xCoefficients = First@Solve[xSolution[0] == 1 && xSolution[1] == 1, {B[1], B[2]}];

generalSolution[r_, x_] = Simplify[rSolution[r] xSolution[x] /. xCoefficients /. rCoefficients]

$$\frac{e^{(x-1) \left(-\sqrt{\lambda ^2-\kappa ^2}\right)-\lambda } \left(e^{\sqrt{\lambda ^2-\kappa ^2}+\lambda +\lambda x}+e^{x \left(2 \sqrt{\lambda ^2-\kappa ^2}+\lambda \right)}-e^{(2 x-1) \sqrt{\lambda ^2-\kappa ^2}+\lambda (x+1)}-e^{\lambda x}\right) J_0\left(\frac{i r \sqrt{4 \phi ^2+\kappa ^2}}{\omega }\right)}{\left(e^{2 \sqrt{\lambda ^2-\kappa ^2}}-1\right) J_0\left(\frac{i \sqrt{4 \phi ^2+\kappa ^2}}{\omega }\right)}$$

FullSimplify[pde[generalSolution[r, x]] == 0]
(* ==> True *)

In the expression pde2, selecting the terms that depend on one or the other variable has to be done with care, since there is also a term that doesn't depend on either variable ($-4\phi^2$). So instead of going with FreeQ to determine whether a variable occurs, I test the derivative of each term with respect to the given variable. That way, in Select[pde2, D[#, x] =!= 0 &] I only collect terms that really depend on x, whereas in Select[pde2, D[#, x] == 0 &] I include the terms that depend on r or are constant.
Krishnan, V V, Murali, N and Kumar, Anil (1989) A diffusion equation approach to spin diffusion in biomolecules. Journal of Magnetic Resonance, 84(2), pp. 255-267.

Abstract: A theoretical description of ${}^1\mathrm{H}$-${}^1\mathrm{H}$ dipolar nuclear spin relaxation in a multispin system has been worked out by forming a diffusion equation for a one-dimensional chain of equidistant spins. The spin-diffusion equation is formed from first principles by assuming nearest-neighbor interactions for a molecule undergoing isotropic random reorientation. This equation describes diffusion only in the long correlation limit (for $\omega \tau_c > 1.118$) and is solved for driven NOE experiments, for spins in a chain of infinite length $(0 <x< \infty)$ or of finite length $(0 <x< L)$. The solutions are obtained using the method of the Laplace transform for specified initial and boundary conditions. The observed selectivity of the NOE transfer in driven NOE experiments on a biomolecule with a correlation factor $\omega\tau_c \sim 3$ is indeed in conformity with the predictions obtained from the spin-diffusion equation.

Copyright of this article belongs to Elsevier Science. URI: http://eprints.iisc.ac.in/id/eprint/12793
Prove that $\Gamma\left(\frac{1}{2}\right)= \sqrt\pi$ using $$\Gamma(p)\Gamma(1-p) = \frac{\pi}{\sin(\pi p)}.$$ Let $p=\frac12$; we have $$\Gamma\left(\frac12\right)\Gamma\left(\frac12\right)=\frac{\pi}{\sin\frac{\pi}{2}}=\pi.$$ Since $\Gamma\left(\frac12\right)=\int_0^\infty t^{-1/2}e^{-t}\,dt>0$, taking the positive square root gives $$\Gamma\left(\frac12\right)=\sqrt \pi.$$
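As a quick floating-point sanity check of the identity (my own addition, standard library only):

```python
import math

print(math.gamma(0.5))       # 1.7724538509055159
print(math.sqrt(math.pi))    # 1.7724538509055159
print(math.isclose(math.gamma(0.5), math.sqrt(math.pi)))  # True
```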
Convex Functions

Part of the Universitext book series (UTX)

Abstract

Let $X$ be a convex subset of a vector space $V$. We say that $f : X\rightarrow \mathbb{R}$ is convex if for all $x, y \in X$ and $\lambda \in (0, 1)$ we have $$f(\lambda x + (1 - \lambda )y) \leq \lambda f(x) + (1 - \lambda )f(y).$$ If the inequality is strict when $x \neq y$, then we say that $f$ is strictly convex. In this chapter we aim to look at some properties of these functions, in particular when $E$ is a normed vector space. For differentiable functions we will obtain a characterization, which will enable us to generalize the concept of a convex function.

Keywords: Vector Space, Convex Function, Convex Hull, Convex Subset, Differentiable Function

© Springer Science+Business Media New York 2012
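As a small illustration (my own sketch, not from the chapter), the defining inequality can be spot-checked numerically for a function such as $f(x)=x^2$:

```python
# Randomized spot-check of f(l*x + (1-l)*y) <= l*f(x) + (1-l)*f(y) for f(x) = x^2.
import random

f = lambda x: x * x
for _ in range(10_000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lam = random.random()
    assert f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-12
print("convexity inequality held on all sampled points")
```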
I have not studied category theory in extreme depth, so perhaps this question is a little naive, but I have always wondered if analysis could be taught naturally using categories. I ask this because it seems like quite a lot of topological and group-theoretic concepts can be defined most succinctly using categorical concepts, and the categorical definitions are more revealing. So my question is: (1) Is it possible/beneficial to teach analysis using category theory? and (2) Are there any good textbooks that use this method?

I hesitate to let this out, but there's always this cute little note that I learned about from another MO answer (I don't know which one): http://www.maths.gla.ac.uk/~tl/glasgowpssl/banach.pdf. Maybe this will satisfy your curiosity, but I maintain that it takes a warped mind to identify such a categorical formulation of integration as the "right" way to think about integrals. The advantage of categorical thinking in my view is that it helps to organize computations and arguments involving several different kinds of structures at the same time. For instance, (co)homology is all about capturing useful invariants associated to a complicated structure (e.g. a geometric object) in a much simpler structure (e.g. an abelian group). When we want to determine how the invariants behave under certain operations on the complicated structure (e.g. products, (co)limits), it helps to have a theory already set up to tell us what will happen to the simpler structure. That's where category theory comes into its own, and instances of this paradigm are so ubiquitous in algebra and topology that category theory has taken on a life of its own. It seems that people working in those areas have found it convenient to build categorical constructions into the foundations of their work in order to emphasize generality (one can treat algebraic varieties and solutions to diophantine equations on virtually the same footing), keep track of different notions of equivalence (e.g. homotopy versus homeomorphism), build new kinds of spaces (e.g. groupoids), and to achieve many other aims.

In many kinds of analysis, this kind of abstraction isn't necessary because there's often only one structure to keep track of: $\mathbb{R}$. When you think about it, analysis is only possible because we are willing to seriously overburden $\mathbb{R}$. Take, for example, the expression "$\frac{d}{dt}\int_X f_t(x) d\mu(x)$" and consider all of the different ways real numbers are being used. It is used as a geometric object (odds are X is built out of some construction involving the real numbers or a subspace thereof), a way to give $X$ additional structure (it wouldn't hurt to guess that $\mu$ is a real-valued measure), a parameter ($t$), and a reference system ($f$ probably takes values in $\mathbb{R}$ or something related to it). In algebraic geometry, one would probably take each of these roles seriously and understand what kind of structure they are meant to bring to the problem. But part of the power and flexibility of analysis is that we can sweep these considerations under the rug and ultimately reduce most complications to considerations involving the real numbers. All that being said, the tools of category theory and homological algebra actually have started to make their way into analysis.
Because analysts generally consider problems tied to certain very specific kinds of structure, they have historically focused on providing the sharpest and most detailed solutions to their problems rather than extracting the crude, qualitative invariants for which cohomological thinking is most appropriate. However, as analysts have become more and more attuned to the deep relationships between functional analysis and geometry, they have turned to ideas from category theory to help keep things organized. K-theory and K-homology have become indispensable tools in operator theory; there is even a bivariant functor $KK(-,-)$ from the category of C*-algebras to the category of abelian groups relating the two constructions, and many deep theorems can be subsumed in the assertion that there is a category whose objects are C*-algebras and whose morphism spaces are given by $KK(A,B)$. Cyclic homology and cohomology have also become extremely relevant to the interface between analysis and topology. So ultimately I think it all comes down to what kinds of subtleties are most relevant in a given problem. There is just something fundamentally different about the kind of thinking required to estimate the propagation speed of the solution operator for a nonlinear PDE compared to the kind of thinking required to relate the fixed-point theory in characteristic 0 of a linear group acting on a variety to that in characteristic p.

Others can definitely give better opinions, but I currently have "Lectures and exercises on functional analysis" checked out from the library, and I have been enjoying the few parts that I've read so far. I cannot comment on the use of category theory in analysis, but for people who aren't very comfortable with more abstract fields where category theory plays a major role, a book like the one above is great, since it goes over a lot of basic category theory while keeping the main characters from analysis. At the very least it's a great way to get accustomed to the language.

This community wiki answer is addressed to the OP's comment that he is looking for an "axiomatic" approach to the integral. I don't (yet) understand what axioms have to do with category theory. In particular, with respect to the example you give, I don't see what is particularly categorical about the Eilenberg-Steenrod axioms (unless you mean to count the functorial nature of co/homology as one of the axioms). As an example of an axiomatic treatment of the (Riemann) integral, see Section 2 of the notes linked here. (Note: this is nothing very original. For instance, shortly after I wrote this I saw that Lang had almost the same treatment in his undergraduate analysis text.) Here I see no category theory whatsoever. Is this what you had in mind? Why or why not? Perhaps you were talking about the Lebesgue integral rather than the Riemann integral. In that respect, I would say that the Daniell approach to the Lebesgue integral (i.e., characterizing it in terms of the completion of a certain normed linear space) feels "axiomatic" to me but still not categorical.
Molecular characterization of anisotropic weak Musielak-Orlicz Hardy spaces and their applications

College of Mathematics and System Science, Xinjiang University, Urumqi 830046, China

Let $A$ be a real $n\times n$ matrix all of whose eigenvalues $\lambda$ satisfy $|\lambda|>1$. Let $\varphi: \mathbb{R}^n\times[0,\infty)\to[0,\infty)$ be an anisotropic Musielak-Orlicz function, i.e., $\varphi(x,\cdot)$ is an Orlicz function uniformly in $x\in{\mathbb{R}^n}$ and $\varphi(\cdot,t)$ is an anisotropic Muckenhoupt $\mathcal{A}_\infty({\mathbb{R}^n})$ weight uniformly in $t\in(0,\infty)$. In this article, the authors introduce the anisotropic weak Musielak-Orlicz Hardy space $WH^{\varphi}_A(\mathbb{R}^n)$ via the grand maximal function and establish its molecular characterization, which is an anisotropic extension of a result of Liang, Yang and Jiang (Math. Nachr. 289: 634-677, 2016). As an application, the boundedness of anisotropic Calderón-Zygmund operators from $H_A^\varphi(\mathbb{R}^n)$ to $WH_A^\varphi(\mathbb{R}^n)$ in the critical case is presented.

Keywords: Expansive dilation, Muckenhoupt weight, weak Hardy space, atom, molecule, Calderón-Zygmund operator.

Mathematics Subject Classification: Primary: 42B30, 42B20, 46E30; Secondary: 42B25.

Citation: Ruirui Sun, Jinxia Li, Baode Li. Molecular characterization of anisotropic weak Musielak-Orlicz Hardy spaces and their applications. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2377-2395. doi: 10.3934/cpaa.2019107
WHY? Recent neural network models are getting bigger to push performance to the limit. This paper suggests MobileNet, which makes the network small enough to deploy on mobile devices.

WHAT? Several techniques are used in MobileNet. The most important component of MobileNet is the depthwise separable convolution (a sketch follows after this summary). Assume a feature map of size $D_F \cdot D_F \cdot M$. A standard convolution layer consists of $N$ filters of size $D_K \cdot D_K \cdot M$. Instead, a depthwise separable convolution replaces this with $M$ depthwise convolution filters of size $D_K \cdot D_K \cdot 1$ and $N$ pointwise convolution filters of size $1 \cdot 1 \cdot M$. The new efficient architecture based on this method not only reduces the number of multiply-adds, but also concentrates the computation in the pointwise convolution layers, which are among the most efficient operations thanks to general matrix multiplication (GEMM). MobileNet introduces two additional hyperparameters to reduce the computation further. The width multiplier $\alpha$ reduces the number of channels in each layer. The resolution multiplier $\rho$ reduces the height and width of each layer. The number of computations is thereby reduced from $D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F$ to $D_K \cdot D_K \cdot \alpha M \cdot \rho D_F \cdot \rho D_F + \alpha M \cdot \alpha N \cdot \rho D_F \cdot \rho D_F$.

So? MobileNet decreased the number of parameters and computations dramatically with only a slight decrease in performance on various tasks, including classification and detection.

Critic: It is amazing that convolution filters can be represented with a depthwise convolution and a pointwise convolution while preserving much of their representational power. Could there be a similar method for RNNs?
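Here is a minimal PyTorch sketch of the block described above (my own illustration, assuming the standard torch.nn API; channel counts and kernel size are arbitrary). The depthwise stage applies one $D_K \cdot D_K \cdot 1$ filter per input channel via groups=M, and the pointwise stage mixes channels with $1 \cdot 1 \cdot M$ filters:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # M filters of size D_K x D_K x 1: one filter per input channel
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        # N filters of size 1 x 1 x M: mixes information across channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 56, 56)           # D_F = 56, M = 32
y = DepthwiseSeparableConv(32, 64)(x)    # N = 64
print(y.shape)                           # torch.Size([1, 64, 56, 56])
```

Relative to a standard convolution, the multiply-add count shrinks by a factor of $\frac{1}{N}+\frac{1}{D_K^2}$, which for $3\times3$ kernels is roughly an 8-9x saving.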
I am assuming the usual framework and notation of ramification theory. Let $G=\operatorname{Gal}(L/K)$. We define the decomposition group of a prime ideal $\mathfrak{q}$ above $\mathfrak{p}$ as \begin{equation} G^{Z}(\mathfrak{q}\:|\:\mathfrak{p})=\{\sigma\in G\:|\:\sigma(\mathfrak{q})=\mathfrak{q}\}. \end{equation} Now, from the valuation-theory point of view, we define, for an extension of valuations $w\:|\:v$, \begin{equation} G_{w}=\{\sigma\in G\:|\:w\circ\sigma=w\}. \end{equation} Are these the same groups? I guess they are, but I am not sure how to justify this: if $v$ is the valuation that comes from $\mathfrak{p}$, does every extension $w$ of $v$ come from an ideal $\mathfrak{q}$ over $\mathfrak{p}$? Thanks.
area

Given three circles of radius 2, tangent to each other as shown in the following diagram, what is the area of the shaded region?

Chibi 11/04/2017 at 11:31: Let A, B, C be the centers of the circles. Then AB = BC = CA = 2R = 4, so ABC is an equilateral triangle. Let S be the area of the shaded region and \(S_A\) the area of the sector of circle A cut off by the two tangent points (and similarly for B and C). Then \(S = S_{ABC} - 3S_A\), where \(S_{ABC}=\dfrac{1}{2}\cdot4\cdot4\cdot\dfrac{\sqrt{3}}{2} = 4\sqrt{3}\) and \(S_A=\dfrac{60}{360}S_{circle}=\dfrac{1}{6}\pi\cdot2^2=\dfrac{2\pi}{3}\). Therefore \(S = 4\sqrt{3} - 3\cdot\dfrac{2\pi}{3} = 4\sqrt{3} - 2\pi\). Selected by MathYouLike

Let R and S be points on the sides BC and AC, respectively, of ΔABC, and let P be the intersection of AR and BS. Determine the area of ΔABC if the areas of ΔAPS, ΔAPB, and ΔBPR are 5, 6, and 7, respectively.

An Duong 09/04/2017 at 07:31: We have \(\dfrac{SP}{PB}=\dfrac{area\left(APS\right)}{area\left(ABP\right)}=\dfrac{5}{6}\). Call the area of PSR x and the area of CSR y. We have \(\dfrac{area\left(PSR\right)}{area\left(PBR\right)}=\dfrac{SP}{PB}=\dfrac{5}{6}\), so \(\dfrac{x}{7}=\dfrac{5}{6}\) and \(x=\dfrac{35}{6}\). Also \(\dfrac{BR}{CR}=\dfrac{area\left(BSR\right)}{area\left(CRS\right)}=\dfrac{7+x}{y}\) (1) and \(\dfrac{BR}{CR}=\dfrac{area\left(ABR\right)}{area\left(ACR\right)}=\dfrac{13}{x+y+5}\) (2). From (1) and (2), \(\dfrac{7+x}{y}=\dfrac{13}{x+y+5}\), so \(y=\dfrac{\left(7+x\right)\left(5+x\right)}{6-x}=\dfrac{5005}{6}\). So \(area\left(ABC\right)=5+6+7+x+y=5+6+7+\dfrac{35}{6}+\dfrac{5005}{6}=858\). John selected this answer.

Given a rectangular paper with a circular hole as in the figure below. How can you cut the paper with one straight line so that the two parts have equal area?

An Duong 25/03/2017 at 21:31: Because a line through the center of a rectangle (or a circle) divides it into two parts of equal area, you should cut the paper along the line connecting the two centers of the rectangle and the circle (see the figure). Selected by MathYouLike

A circle of radius 3 is inscribed in the pictured quadrant of a circle. Find the area of the shaded section.

mathlove 16/03/2017 at 18:29: The circle of radius 3 has area \(9\pi\). Writing r for the radius of the pictured quadrant, we have \(r=3\sqrt{2}+3\).
Put x for the area to calculate; we have \(\dfrac{1}{4}\pi r^2=2x+\pi\cdot3^2+\left(3^2-\dfrac{1}{4}\pi\cdot3^2\right)=2x+9\left(1+\dfrac{3\pi}{4}\right)\) \(\Leftrightarrow\dfrac{\pi}{4}\left(3\sqrt{2}+3\right)^2=2x+9\left(1+\dfrac{3\pi}{4}\right)\Leftrightarrow2x=\dfrac{\left(27+18\sqrt{2}\right)\pi}{4}-\dfrac{36+27\pi}{4}\) \(\Leftrightarrow x=\dfrac{9\sqrt{2}\pi-36}{4}\). Selected by MathYouLike

mathlove 17/03/2017 at 10:45: We have \(y=3^2-\dfrac{1}{4}\pi 3^2\) and \(r-3=3\sqrt{2}\Rightarrow r=3+3\sqrt{2}\). Selected by MathYouLike

Continuing the previous post, I have another problem: calculate the area of the curved square below (crossed area):

mathlove 13/03/2017 at 18:12: Set x as the area to find. It is easy to see that \(\left(1\right)+\left(2\right)+\left(1\right)=1-\dfrac{\pi}{4}\). According to the previous post, \(\left(1\right)=1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\). So \(\left(2\right)=\left(1-\dfrac{\pi}{4}\right)-2\left(1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\right)=-1+\dfrac{\pi}{12}+\dfrac{\sqrt{3}}{2}\). Therefore \(x=\dfrac{\pi}{4}-\left[3\cdot\left(2\right)+2\cdot\left(1\right)\right]=\dfrac{\pi}{4}-\left[\left(-3+\dfrac{\pi}{4}+\dfrac{3\sqrt{3}}{2}\right)+\left(2-\dfrac{\sqrt{3}}{2}-\dfrac{\pi}{3}\right)\right]=1+\dfrac{\pi}{3}-\sqrt{3}\). Selected by MathYouLike

mathlove 11/03/2017 at 18:27: Let x be the area to calculate. We see that EAD is an equilateral triangle with edge 1, whose altitude equals \(\dfrac{\sqrt{3}}{2}\). So \(EF=1-\dfrac{\sqrt{3}}{2}\). The angle EDC is \(30^0\), so \(\dfrac{1}{2}\cdot\dfrac{1}{2}\left(1-\dfrac{\sqrt{3}}{2}\right)-\dfrac{x}{2}=\dfrac{\pi}{12}-\dfrac{1}{2}\cdot1\cdot1\cdot\sin30^0=\dfrac{\pi}{12}-\dfrac{1}{4}\). So \(x=1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\).
This question is cross-posted from math.stackexchange because it might be too technical. Let $S_n$ be the symmetric group. Recall that the number of inversions of a permutation $\sigma\in S_n$ is the number of ordered pairs $i< j$ with $\sigma(i) > \sigma(j)$. Now, call the number of permutations with $k$ inversions $I_n(k)$. It's easy to see that going from $n-1$ to $n$ we can insert $n$ into spot $j$ to add $n-j$ inversions: $$I_n(k)=I_{n-1}(k)+I_{n-1}(k-1)+\ldots +I_{n-1}(k-n+1),$$ with the convention $I_{n-1}(j)=0$ for $j<0$. If we let $G_n(t)=\sum_{k=0}^{\binom{n}{2}}I_n(k)t^k$, then the above gives $$G_n(t)=(1+t+t^2+\ldots+t^{n-1})G_{n-1}(t),$$ and it quickly follows that $G_n(t)=\prod_{j=1}^n\frac{1-t^j}{1-t}$. I am interested in something more complicated. Let $I^{\sigma(y)=x}_n(k)$ count the number of permutations $\sigma$ of length $n$ such that for a given (fixed) $x,y$ we have $\sigma(y)=x$. In other words, I am forcing $y$ to be in bin $x$. Proceeding along similar lines to the above, I get: $$I_{n}^{\sigma(y)=x}(k)=\sum_{i=0}^{n-1-y}I_{n-1}^{\sigma(y)=x}(k-i)+\sum_{i=n-y+1}^nI^{\sigma(y-1)=x}_{n-1}(k-i)$$ where similar logic was used as before, except now we have to be careful whether we are inserting $n$ to the right or to the left, respectively (inserting to the left shifts $x$ up one bin). Assuming the above is right, is it at all tractable to derive an asymptotic formula for $I_n^{\sigma(y)=x}(k)$ as $n\rightarrow\infty$? As far as I understand, the way to derive asymptotics for $I_n(k)$ needs something akin to the Knuth-Netto formula: $$I_{n}(k)=\binom{n+k-1}{k}+\sum_{j=1}^\infty (-1)^j\binom{n+k-u_j-j-1}{k-u_j-j}+\sum_{j=1}^\infty(-1)^j\binom{n+k-u_j-1}{k-u_j},$$ where the $u_j=j(3j-1)/2$ are the pentagonal numbers. The above can be "simplified" using Stirling's approximation and a bunch of careful arithmetic to give asymptotics. Here is a reference for such a calculation. Naively, the above formula comes from the Euler pentagonal number theorem. I would think one needs a specialized form of this theorem for what I am interested in. Can a similar asymptotic feat be accomplished for $I_n^{\sigma(y)=x}(k)$?
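For small n the setup is easy to check by computer. The following sketch (mine, not from the post) computes $I_n(k)$ from the recurrence and confirms by brute force that the fixed-position counts $I_n^{\sigma(y)=x}(k)$, summed over all values $x$, partition $I_n(k)$:

```python
from itertools import permutations

def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def I_table(n):
    # I_n(k) for k = 0..n(n-1)/2 via I_n(k) = sum_{i=0}^{n-1} I_{n-1}(k-i)
    row = [1]                       # I_1(0) = 1
    for m in range(2, n + 1):
        new = [0] * (len(row) + m - 1)
        for k in range(len(new)):
            new[k] = sum(row[k - i] for i in range(m) if 0 <= k - i < len(row))
        row = new
    return row

n, y = 6, 2
constrained = {x: {} for x in range(1, n + 1)}
for p in permutations(range(1, n + 1)):
    k = inversions(p)
    x = p[y - 1]                    # the value sitting in position y
    constrained[x][k] = constrained[x].get(k, 0) + 1

total = [sum(constrained[x].get(k, 0) for x in constrained)
         for k in range(n * (n - 1) // 2 + 1)]
print(total == I_table(n))          # True
```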
number

Let N be the largest number of regions that can be formed by drawing 2016 straight lines on a plane. Find the sum of all digits of N.

Tôn Thất Khắc Trịnh 23/07/2018 at 16:04: I won't take credit for this solution, as it goes to this website: https://www.cut-the-knot.org/proofs/LinesDividePlane.shtml. n lines in general position divide the plane into \(1+\dfrac{n\left(n+1\right)}{2}\) regions, so applying n = 2016 to the formula we get \(N=1+\dfrac{2016\cdot2017}{2}=2033137\) regions, whose digit sum is \(2+0+3+3+1+3+7=19\). Selected by MathYouLike

Mr Puppy 24/07/2018 at 01:44: Thank you!!!!! :D

Choose the correct answer: the number less than 709 and greater than 707 was: a) 708 b) 710 c) 712 d) 714

Alex placed 9 number cards and 8 addition symbol cards on the table as shown: 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1. Keeping the cards in the same order, he decided to remove one of the addition cards to form a 2-digit number. If his new total was 99, which 2-digit number did he form? A.32 B.43 C.54 D.65 E.76

Lê Quốc Trần Anh Coordinator 27/07/2017 at 09:13: The total of the cards is \(9+8+7+6+5+4+3+2+1=45\). Removing the addition card between the digits \(a\) and \(a-1\) replaces \(a+\left(a-1\right)\) in the sum by the two-digit number \(10a+\left(a-1\right)\), so the new total is \(45+9a\). From \(45+9a=99\) we get \(a=6\), so the number formed is 65: indeed \(9+8+7+65+4+3+2+1=99\). So the answer is D.65.

This cube has a different whole number on each face, and has the property that whichever pair of opposite faces is chosen, the two numbers multiply to give the same result. What is the smallest possible total of all 6 numbers on the cube?

Dao Trong Luan 28/07/2017 at 19:04: With visible faces 6, 9 and 12 and hidden opposite faces x, y, z, we need \(12x=9y=6z\) with all six numbers different. The common product must be a multiple of \(\text{lcm}\left(6,9,12\right)=36\), and the products 36, 72, 108 and 144 each force one of x, y, z to repeat a visible face, so the smallest admissible product is 180, giving \(x=15\), \(y=20\), \(z=30\). The smallest total is therefore \(6+9+12+15+20+30=92\). Answer: 92

Rani wrote down the numbers from 1 to 100 on a piece of paper and then correctly added up all the individual digits of the numbers. What sum did she obtain?

Searching4You 27/07/2017 at 09:29: The answer is 901. From 1 to 9 the sum is 45. From 10 to 19 we have 1+0, 1+1, 1+2, ..., 1+9, for a sum of 55. Next, from 20 to 29 the sum is 65; 30 to 39: 75; 40 to 49: 85; 50 to 59: 95; 60 to 69: 105; 70 to 79: 115; 80 to 89: 125; 90 to 99: 135. Finally, 100 contributes 1+0+0 = 1. So the sum she obtained is 45 + 55 + 65 + 75 + 85 + 95 + 105 + 115 + 125 + 135 + 1 = 901. Selected by MathYouLike
Lê Quốc Trần Anh Coordinator 27/07/2017 at 09:24: From 1 to 9 the sum of the individual digits is \(1+2+3+4+5+6+7+8+9=45\). From 1 to 99, the ones digits run through 0 to 9 in each decade while each tens digit 1 to 9 appears ten times, so the ones digits and the tens digits each contribute \(45\cdot10=450\). The number 100 has digit sum \(1+0+0=1\). She therefore obtained \(\left[\left(1+2+\cdots+9\right)\cdot10\right]+\left[\left(1+2+\cdots+9\right)\cdot10\right]+1=450+450+1=901\).

Let a, b, c be integers such that \(\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a}=3\). Prove that abc is the cube of an integer.

John 10/04/2017 at 15:24: Without loss of generality, we may assume gcd(a,b,c) = 1 (otherwise, for d = gcd(a,b,c) and a' = a/d, b' = b/d, c' = c/d, the equation still holds for a', b', c', and a'b'c' is a cube if and only if abc is a cube). Multiplying the equation by abc, we get \(a^2c+b^2a+c^2b=3abc\) (*). If \(abc=\pm1\), the problem is solved. Otherwise, let p be a prime divisor of abc. Since gcd(a,b,c) = 1, (*) implies that p divides exactly two of a, b, c. By symmetry, we may assume p divides a and b but not c. Suppose that the largest powers of p dividing a, b are m, n, respectively. If n < 2m, then \(n+1\le2m\) and \(p^{n+1}\) divides \(a^2c\), \(b^2a\) and \(3abc\); hence \(p^{n+1}\) divides \(c^2b\), forcing \(p\mid c\) (a contradiction). If n > 2m, then \(n\ge2m+1\) and \(p^{2m+1}\) divides \(c^2b\), \(b^2a\) and \(3abc\); hence \(p^{2m+1}\) divides \(a^2c\), forcing \(p\mid c\) (a contradiction). Therefore n = 2m, so every prime divides abc to a power divisible by 3, and \(abc=\pm\prod p^{3m_p}\) is a cube. Carter selected this answer.
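As a sanity check on the statement (my own sketch, not part of the proof), one can search small triples exactly with rational arithmetic and confirm that abc is always a perfect cube:

```python
from fractions import Fraction

def is_cube(n):
    # integer cube test via a rounded cube root of |n| (sign is absorbed: -m^3 = (-m)^3)
    r = round(abs(n) ** (1 / 3))
    return any((r + d) ** 3 == abs(n) for d in (-1, 0, 1))

for a in range(-20, 21):
    for b in range(-20, 21):
        for c in range(-20, 21):
            if 0 in (a, b, c):
                continue
            if Fraction(a, b) + Fraction(b, c) + Fraction(c, a) == 3:
                assert is_cube(a * b * c), (a, b, c)
print("abc was a perfect cube for every solution found")
```

Besides the trivial family a = b = c, the search also hits non-trivial solutions such as (a, b, c) = (-2, 4, 1), where abc = -8 = (-2)^3.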
We start with the familiar Schrodinger equation: $$ i\hbar \frac{\partial \left|\psi_S\right\rangle}{\partial t} = \hat{H}_S \left|\psi_S\right\rangle $$ We switch to a picture other than the Schrodinger picture with a unitary transformation $\hat U$: $$ \left|\psi_S\right\rangle = \hat{U}\left|\psi_P\right\rangle $$ ($S$ indicating the Schrodinger picture and $P$ the arbitrary picture). If we plug $\left|\psi_S\right\rangle = \hat{U}\left|\psi_P\right\rangle$ into the Schrodinger equation, we obtain: $$ i\hbar \frac{\partial \left|\psi_P\right\rangle}{\partial t} = \hat{H}_P \left|\psi_P\right\rangle $$ where $$ \hat{H}_P = U^\dagger \hat{H_S} U - i\hbar U^\dagger \frac{\partial U}{\partial t} $$ is the Hamiltonian in this arbitrary picture. So the question is: if the Hamiltonian is an observable, shouldn't it have the same expectation value in both pictures? Yet the second term in $\hat H_P$ makes them unequal, because $$ \left\langle\psi_P\right|\hat{H}_P\left|\psi_P\right\rangle = \left\langle\psi_P\right| U^\dagger \hat{H_S} U \left|\psi_P\right\rangle - i\hbar \left\langle\psi_P\right| U^\dagger \frac{\partial U}{\partial t}\left|\psi_P\right\rangle $$ simplifies to $$ \left\langle\psi_P\right|\hat{H}_P\left|\psi_P\right\rangle = \left\langle\psi_S\right| \hat{H_S} \left|\psi_S\right\rangle - i\hbar \left\langle\psi_P\right| U^\dagger \frac{\partial U}{\partial t}\left|\psi_P\right\rangle $$ telling us that the expectation values in the two pictures are not the same. I do not see a reason why the last term should be zero. What is wrong here? Is the Hamiltonian somehow different from other observables?
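A concrete 2x2 numerical illustration of this discrepancy (my own sketch; SciPy assumed, and the operators and state are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H0 = np.array([[1.0, 0.0], [0.0, -1.0]])     # generator of the picture change, U = exp(-i H0 t)
HS = np.array([[1.0, 0.5], [0.5, -1.0]])     # Schrodinger-picture Hamiltonian
psiS = np.array([0.8, 0.6], dtype=complex)   # normalized state

t, dt = 0.7, 1e-6
U = expm(-1j * H0 * t / hbar)
dU = (expm(-1j * H0 * (t + dt) / hbar) - U) / dt    # finite-difference dU/dt

psiP = U.conj().T @ psiS
HP = U.conj().T @ HS @ U - 1j * hbar * (U.conj().T @ dU)

print(np.vdot(psiS, HS @ psiS).real)   # <H_S> = 0.76
print(np.vdot(psiP, HP @ psiP).real)   # 0.48 = <H_S> - <psi_S|H0|psi_S>, shifted by the extra term
```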
I am trying to understand how the correlation function in John Bell's paper on EPR is derived for a spin singlet state $|{\Psi}\rangle$. This is defined to be $$ \langle{\Psi}|\left(\bf{\sigma}\cdot\bf{a}\right)\left(\bf{\sigma}\cdot\bf{b}\right)|{\Psi}\rangle=-\bf a \cdot\bf b. $$ I tried to compute it explicitly by using the Pauli matrices, but was unable to derive the scalar product of the two direction vectors. Edit: One attempt to prove this is by using $$ \langle{\Psi}|\left(\bf{\sigma}\cdot\bf{a}\right)\left(\bf{\sigma}\cdot\bf{b}\right)|{\Psi}\rangle=\langle{\Psi}|(\bf {a} \cdot\bf{ b})\bf I +i(\bf a \times\bf b)\cdot\sigma|{\Psi}\rangle $$ However, I am not able to derive the minus sign or get rid of the cross product term either. See the comment section.
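A quick numerical check of the claimed identity (my own sketch, not Bell's derivation): build $(\sigma\cdot a)\otimes(\sigma\cdot b)$ explicitly, with $\sigma\cdot a$ acting on the first spin and $\sigma\cdot b$ on the second, and take the expectation in the singlet state for random unit vectors:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sdot(n):  # sigma . n
    return n[0] * sx + n[1] * sy + n[2] * sz

# singlet (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(5):
    a, b = rng.normal(size=3), rng.normal(size=3)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    op = np.kron(sdot(a), sdot(b))          # (sigma.a) on spin 1, (sigma.b) on spin 2
    lhs = np.vdot(psi, op @ psi).real
    print(np.isclose(lhs, -np.dot(a, b)))   # True every time
```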
A posted answer was soon withdrawn after my comment showed a fundamental flaw in it. The comment saved the answerer some embarrassment, and perhaps provided a little illumination for him. Also, it spared readers from having their time wasted, or even being misled, by a false argument. However, there is no trace of this event. Possibly the answerer was grateful for being rescued from his folly, as I would have been, but it seems that there is no way to express thanks in this situation.

I was the one who retracted the answer, but not because of your comment. Sorry :) It was another comment, by Didier Piau, which pointed out the flaw. My answer was: Here is another answer, which essentially rewords the answer of @Didier. The sine is a contraction mapping on the interval $(-1,1)$, i.e. it satisfies $$|\sin x-\sin y|<\beta |x-y|$$ where $\beta<1$. Now for a contraction mapping $T$ with fixed point $v$ and any point $v_0$ we have that $$|T^kv_0-v|\le\beta^k|v_0-v|.$$ Take $v_0=\sin n$ and $v=0$ and we get that $$|\sin^{(n)}n|\le\beta^{n-1}|\sin n|\le\beta^{n-1}.$$ Taking the limit, we get zero.

And the comments were: "I don't see your first statement. For example, let $y=0$ and $\beta=1-\epsilon$, where $0<\epsilon<1$. Then we can easily find $\delta>0$ such that $\sin x - x > -\epsilon x$ for all $x>0$ such that $x<\delta$. (Taking $\delta=\epsilon$ would do.)" – John Bentin, 17 hours ago. "@John, $\beta$ is not chosen freely; the statement says that such a $\beta$ exists. It follows from the mean-value theorem: $\sin x-\sin y=\cos\alpha\cdot(x-y)$ with $\alpha\in(-1,1)$; take $\beta=\sup\{\cos\alpha,\alpha\in(-1,1)\}$." – mpiktas, 17 hours ago. "Precisely: this $\beta$ is 1 (and a geometric rate of convergence cannot hold, for good reasons). Sorry." – Didier Piau, 16 hours ago. "@Didier, thanks for catching that. I will retract my answer." – mpiktas, 16 hours ago.

I do not know whether Didier got my message. I thought about commenting separately, but since all of that happened close to the end of my working day, I never did. My intentions for the answer were, I hope, good; I thought that there was a simple answer relating to a standard result about contraction operators. For some reason I figured $\cos$ is less than one on the interval $(-1,1)$ :) These things happen, and for that reason I always thoroughly recheck my results, usually the next day, with a fresh mind. This time somebody else found the mistake before me. Since it was a major mistake, I immediately withdrew my answer.
Elaborate Modeling Technology

Modeling Complex Nonlinear Materials at a Micro Level

\( \Large \nabla \times \frac{1}{\mu_0}\nabla \times \vec{A} = \vec{J} - \sigma \frac{\partial \vec{A}}{\partial t} + \nabla \times \vec{M} \)

The equation indicated above is the same basic magnetic field analysis equation that can be found in textbooks. Despite its simplicity, this equation implies an elaborate treatment of materials, because the electric conductivity, σ, and the magnetization, M, have nonlinear characteristics. These characteristics complicate the physical phenomena while drastically affecting the performance of an electrical device. The material modeling used to simulate complicated material properties therefore has a crucial role in simulation technology. Modeling microscopic nonlinear material properties was achieved through years of cumulative experience. The magnetizing properties of nonlinear materials can be calculated using the Newton-Raphson method by specifying a point sequence for the BH curve. An optimal model for an iron loss analysis is also available using the Steinmetz empirical formula. However, materials have to be modeled very accurately for a limit design that aims to miniaturize a device while increasing efficiency.

Accurate Modeling and Powerful Analysis Capabilities Required for Limit Design

Motors used for various applications, such as cars, require miniaturization while also demanding a reduction in cogging torque and losses. A simulation must obtain highly accurate results to correctly evaluate the minuscule differences in cogging torque. The noise canceling technology implemented in the mesh generation engine, introduced in the previous Technical Report, can provide a highly accurate numerical analysis. However, the analysis results and actual measurements will not match even with a highly accurate numerical analysis, because the accuracy of the material properties that are modeled, such as anisotropic magnetic materials, affects a simulation more as the accuracy of an analysis increases. This means the materials need to be modeled more accurately for a limit design. Material modeling requires a specific number of elements to simulate phenomena accurately, but there is another aspect to these "evaluation" tools. Performance characteristics can be obtained through measurements and experimentation, but the effects of those characteristics, such as the effects of stress from shrink fitting on magnetic properties, cannot be obtained easily. However, simulation technology has come to be known as an essential tool for analyzing small but vital differences in material properties, providing engineers with the ability to distinguish these slight differences. Evaluation capability through material modeling is crucial to finding solutions to problems. For example, a magnetization analysis can be performed when the properties of the magnet are inaccurate and the back EMF waveform is off. This type of material modeling is extremely accurate and indispensable when trying to solve analytical problems. The effects of deterioration caused by stress cannot be visualized with measurements or experimentation.

Integrated Modeling Tools for Analyses

It is difficult to cover the extensive range of material modeling in just a few words.
However, the logic behind modeling materials can be viewed as: material modeling = material properties + composite materials + external environment. The accuracy of the materials that are modeled cannot simply be thought of as improving the quality of the material properties, but rather as modeling the material considering a wide range of variables, from modeling composites, such as laminated steel sheets, to the effects of heat and stress. JMAG implements a variety of tools to cover the vast range of variables necessary to model materials accurately.

(1) Material Database with Expertise

JMAG implements a database of material properties that includes magnetic and loss characteristics provided by the material manufacturers. JMAG's material database has reliable data that is guaranteed by, and created with, the mastery of the manufacturers. An irreversible demagnetization analysis can be performed using data for a magnet provided by Hitachi Metals, or the magnetic characteristics of the stress dependency of a material provided by Nippon Steel can be used. Each user can also model irreversible demagnetization or the stress dependency of magnetic characteristics in JMAG based on this know-how.

(2) Modeling Specific to Composite Materials

It is essential to model members made of laminated or anisotropic materials accurately while selecting the appropriate method for modeling. For instance, a mesh is not usually generated to model each steel sheet. Generally, the laminated steel sheets are modeled as a block, and the analysis load is reduced by specifying the lamination factor. However, a mesh needs to be generated to model the lamination on the surface of a motor if the magnetic flux in the axial direction is strong, as in slim motors. Tools that increase the accuracy of modeling by selecting the appropriate method based on the purpose of an analysis are said to be one of JMAG's strengths.

(3) Accounting for External Environmental Effects using a Coupled Analysis

Material properties vary with stress and temperature. The temperature and stress dependency not only needs to be determined to visualize these effects, but the temperature distribution and stress distribution also need to be obtained accurately. JMAG implements a coupled analysis function that can accurately evaluate the impact of the temperature and stress dependency. More information about the coupled analysis technology will be introduced in the next edition of the JMAG Newsletter.

"Powerful Modeling" Achieved Through Experience

The modeling tools generally used for analyses in JMAG have been introduced, but the larger question of how the various options are actually used has not been addressed. The modeling technology in JMAG has been built on a foundation supported by 30 years of experience. The modeling tools capture this expertise, whether in using the functions to increase accuracy when modeling materials or in combining actual measurements into an analysis. The JMAG analysis engineers have designed a process, proven through actual measurements, for developing valid new methods of modeling.

The God of Design Resides in Detail

The essential aspects of design, such as limit-based design, are hidden in the small but vital "differences" between the members of materials. JMAG offers modeling tools and powerful analysis functions that cast light on these minuscule differences. JMAG will continue to strive to meet the challenges of modeling complicated materials.
Physical modeling such as coupled analyses will be introduced in the next edition of the JMAG Newsletter. [JMAG Newsletter Winter, 2010]
Scaling of Sub-Ballistic 1D Random Walks Among Biased Random Conductances

2019, v.25, Issue 1, 171-187

ABSTRACT

We consider two models of one-dimensional random walks among biased i.i.d. random conductances: the first is the classical exponential tilt of the conductances, while the second comes from the effect of adding an external field to a random walk on a point process (the bias depending on the distance between points). We study the case when the walk is transient to the right but sub-ballistic, and identify the correct scaling of the random walk: we find $\gamma \in[0,1]$ such that $\log X_n /\log n \to \gamma$. Interestingly, $\gamma$ does not depend on the intensity of the bias in the first case, but it does in the second case.

Keywords: random walk, random environment, limit theorems, conductance model, Mott random walk
Conservation of the number of particles is a symmetry of the system. As Akshay Kumar said in his response, when the number operator commutes with the Hamiltonian, it is conserved. It simply means, well, that the number of particles is conserved. Particles are all that is discussed in condensed matter (better to say quasi-particles, actually), like electrons and holes (certainly the most famous ones, but we should say quasi-particles of positive and negative excitation energy relative to the Fermi energy if we were not lazy: I think the length of their exact names is sufficient to keep electron and hole in the following :-). So it should be fine to know whether some (quasi-)particles can pop out from nowhere or not. Fortunately enough, when the particle number is conserved, they do not pop out from nowhere; they can only be transmuted from another (quasi-)particle. That's what happens with superconductivity: two electrons disappear and one Cooper pair emerges (in a really pictorial way of speaking).

Now for superconductivity, it is easier to say that you will conserve the number of particles if your Hamiltonian is invariant with respect to the transformation $$c\rightarrow e^{\mathbf{i}\theta}c$$ and $$c^{\dagger}\rightarrow e^{-\mathbf{i}\theta}c^{\dagger}$$ where the $c$'s are the fermionic operators and $\theta$ is an angle. Actually, $\theta$ is better defined as the generator of the U(1) rotation. In particular, if your Hamiltonian (better to say a Lagrangian) is invariant under the phase-shift operation defined above, you can associate a Noether current to it. For the U(1) rotation symmetry, the conserved current is the current of particles. In particular, for time-independent problems (to simplify, say), the number of particles is conserved if your Hamiltonian is invariant under the above-defined transformation.

The BCS Hamiltonian describing conventional superconductivity reads (I discard the one-body term and the spin for simplicity: they change nothing in the conclusions we want to arrive at) $$H_{\text{BCS}}\propto c^{\dagger}c^{\dagger}cc$$ such that making the U(1) rotation does not change it, since there is the same number of $c$'s as of $c^{\dagger}$ operators. Below the critical temperature, the new superconducting phase appears, characterised by a non-vanishing order parameter (i.e. the number of Cooper pairs, still in a pictorial way of speaking -- better to say the superconducting gap parameter) $$\Delta\propto cc$$ which transforms under a U(1) phase shift like $$\Delta\rightarrow e^{2\mathbf{i}\theta}\Delta$$ since there are now two $c$ operators not compensated by any $c^{\dagger}$. So the order parameter $\Delta$ is not invariant under the U(1) phase transformation. One says that the ground state of superconductivity does not conserve the number of particles.

Note that: Saying that the number of particles is not conserved is an abuse of language, since the total number of electrons is the same in both the normal and superconducting phases. The condensed (superconducting) phase simply does not respect the invariance under the U(1) rotation. But it is true that some electrons are disappearing, in a sense. As I said in the introduction, they are transmuted into Cooper pairs (once again, that's a pictorial way of speaking). Such a mechanism, when the Hamiltonian verifies a symmetry that its ground state does not, is called spontaneous symmetry breaking. Superconductivity is just one example of such a mechanism.
$\Delta$ remains invariant under the restricted rotation $c\rightarrow e^{\mathbf{i}n\pi}c$ with $n\in\mathbb{Z}$. Since there are only two such rotation elements $e^{\mathbf{i}n\pi}=\pm 1$, one says that U(1) has been broken to $\mathbb{Z}_{2}$ (a fancy notation for the group of only two elements). Post-Scriptum: Please tell me if you need more explanations about some terminology. I don't know where you're starting from and my answer is a little bit abrupt for young students I believe.
How do I calculate the standard error of the mean?

The standard error is the standard deviation of the sampling distribution of a statistic, most commonly of the mean; the standard error of the mean is represented by the symbol $\sigma_{\bar{x}}$. The mean of all possible sample means is equal to the population mean, and the spread of the sample means indicates how far a particular sample mean is likely to fall from the population mean. The larger the sample, the smaller the standard error, and the closer the sample mean approximates the population mean.

The sample mean itself is just the sum of the observations divided by their number: for 1, 7, 8, 4, 2 it is (1+7+8+4+2)/5 = 22/5 = 4.4, and for 12, 55, 74, 79, 90 it is (12+55+74+79+90)/5 = 62.

As an illustration, consider a population of 9,732 runners whose ages have mean μ = 33.88 years and standard deviation 9.27 years. A sample of n = 16 runners selected at random from the 9,732 might have sample mean x̄ = 37.25 and sample standard deviation s = 10.23; this sample mean is greater than the true population mean μ = 33.88 years. Repeating the sampling 20,000 times and plotting the distribution of the 20,000 sample means (superimposed on the distribution of ages for all 9,732 runners) shows how far sample means typically stray from the population mean, and that distribution narrows as the sample size grows.

Because of random variation in sampling, the proportion or mean calculated from a sample varies from sample to sample; a larger sample gives a more precise measurement, since it has proportionately less sampling variation around the mean. The relative standard error expresses the standard error as a percentage of the estimate: an estimate with a standard error of $5,000 has a relative standard error of 20% if the estimate is $25,000 and 10% if it is $50,000. Finally, when the population σ is unknown and is estimated from the sample, the resulting estimated distribution follows the Student t-distribution, which is a good approximation for these calculations.
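A minimal sketch of the computation the page circles around, using only the standard library (scipy.stats.sem, if available, computes the same quantity):

```python
import math
import statistics

data = [12, 55, 74, 79, 90]
mean = statistics.mean(data)                          # 62
sem = statistics.stdev(data) / math.sqrt(len(data))   # sample std dev / sqrt(n)
print(mean, sem)
```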
Your question actually describes an infinite-dimensional division algebra - the field of formal Laurent series. In particular, the elements of this field are just expressions of the form$$a_{-k}x^{-k}+a_{-k+1}x^{-k+1}+\ldots+a_{-1}x^{-1}+a_0+a_1x+a_2x^2+\ldots$$where the series can go on forever to the right, but may not have infinitely many terms of negative exponent (although it can have arbitrarily many). Note that we're not concerning ourselves with convergence or anything like that - these are purely expressions that you manipulate by adding them coefficient-wise and multiplying them by taking every pair of terms from the two series, taking their product, then collecting like terms (which is, for each coefficient, a finite process due to the fact that there are only finitely many negative terms included in any series). It's sort of a pain to write out the exact formula for the multiplicative inverse of an element, but you can do it fairly nicely in two steps: First, note that every non-zero element is of the form$$c\cdot x^n\cdot F$$where $c$ is an element of the field we're taking our coefficients from and where $F$ is of the form $F=1+a_1x+a_2x^2+\ldots$. Since we can clearly invert $c$ as it's just a real number (or something like that) and we can invert $x^n$, all we need to do is invert $F$. We can do that by solving$$(1+a_1x+a_2x^2+\ldots)\cdot (1+b_1x+b_2x^2+\ldots)=1+0x+0x^2+\ldots$$which gives the equations, for each $n\geq 1$, that$$\sum_{i=0}^{n}a_ib_{n-i}=0$$which rearranges to say$$b_n=-\sum_{i=1}^na_ib_{n-i}$$after we pull out one term from the sum. We can then inductively figure out the power series inverse to anything of the form $1+a_1x+a_2x^2+\ldots$ and extend that as you wish. It's also worth noting that this construction gives a division ring whenever we take our coefficients from a division ring - so if we want something non-commutative, we could apply this construction to have quaternion coefficients.
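Here is a small Python sketch of exactly this induction (mine, using exact rational arithmetic): given $F = 1 + a_1x + a_2x^2 + \ldots$, the coefficients of $1/F$ follow from $b_n = -\sum_{i=1}^{n} a_i b_{n-i}$ with $b_0 = 1$:

```python
from fractions import Fraction

def invert_series(a, n_terms):
    """a[i] is the coefficient of x^i, with a[0] == 1; returns b[0..n_terms-1]."""
    b = [Fraction(1)]
    for n in range(1, n_terms):
        b.append(-sum(Fraction(a[i]) * b[n - i]
                      for i in range(1, n + 1) if i < len(a)))
    return b

# 1/(1 - x) = 1 + x + x^2 + ...
print(invert_series([1, -1], 6))   # [1, 1, 1, 1, 1, 1]
# 1/(1 + x) = 1 - x + x^2 - ...
print(invert_series([1, 1], 6))    # [1, -1, 1, -1, 1, -1]
```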
Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser and Geoffrey Hinton, 2017

Paper summary (davidstutz): Pereyra et al. propose an entropy regularizer for penalizing over-confident predictions of deep neural networks. Specifically, given the predicted distribution $p_\theta(y_i|x)$ for labels $y_i$ and network parameters $\theta$, a regularizer $-\beta \max(0, \Gamma - H(p_\theta(y|x)))$ is added to the learning objective. Here, $H$ denotes the entropy and $\beta$, $\Gamma$ are hyper-parameters allowing to weight and limit the regularizer's influence. In experiments, this regularizer showed slightly improved performance on MNIST and Cifar-10. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).

arXiv e-Print archive - 2017. Keywords: cs.NE, cs.LG. First published: 2017/01/23.

Abstract: We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.
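A hedged sketch of the penalty as summarized above (my reading, PyTorch assumed): the summary's term is added to a maximized objective, so in a minimized training loss it enters with the opposite sign, $+\beta \max(0, \Gamma - H(p))$, penalizing examples whose predictive entropy falls below the threshold $\Gamma$:

```python
import torch
import torch.nn.functional as F

def confidence_penalty(logits, beta=0.1, gamma=0.5):
    # entropy H(p_theta(y|x)) of each predicted distribution
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)
    # hinge: only penalize examples more confident than the threshold gamma
    return beta * torch.clamp(gamma - entropy, min=0).mean()

logits = torch.randn(8, 10)              # batch of 8, 10 classes
targets = torch.randint(0, 10, (8,))
loss = F.cross_entropy(logits, targets) + confidence_penalty(logits)
print(loss)
```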
I know magnetic field lines due to a circuit always form closed loops. Therefore $\nabla \cdot \vec{B}=0$ everywhere (even at points on the circuit). However, due to the singularity, magnetic fields are not defined at points on the circuit. Then how does it make sense to speak of the divergence of the magnetic field at points on the circuit?

The magnetic field strength outside a (long) wire falls off as $r^{-1}$. The gist of your question seems to be: what happens as $r \rightarrow 0$? The answer is that the $r^{-1}$ dependence is only true outside the wire. Inside the wire you would need to use Ampere's law with a finite current density to work out what current was encircled by a chosen loop. E.g. for a uniform current density the magnetic field scales as $r$ inside the wire and $B \rightarrow 0$ as $r \rightarrow 0$.

EDIT: You are asking about mathematical abstractions (1-dimensional currents) rather than physical situations; this is how to proceed. Ampere's law (in magnetostatics) says $$\oint \vec{B}\cdot d\vec{l} = \int \vec{J}\cdot d\vec{A}$$ If we consider an infinitely long wire defined by the z-axis, then taking a circular loop around the z-axis, the enclosed current is always the same. The B-field is therefore $\propto r^{-1}$ and would become infinite at $r=0$. However, if we say instead that we have a uniform current density $\vec{J}$ that occupies a cylinder of radius $a$, then this treatment only applies for $r>a$. If we allow $r<a$ then Ampere's law gives $$ 2\pi r B = \pi r^2 J$$ For any finite current density, as $r \rightarrow 0$ the right-hand side goes to zero faster than the left-hand side, and $B \rightarrow 0$. If instead you allow the current density to be infinite, so that a 1-d wire can carry a current, then do not be surprised that you get an infinite B-field! (You also need an infinite E-field because $\vec{J} = \sigma \vec{E}$.)
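To make the two regimes concrete, here is a small numeric sketch (my own, assuming a uniform current density and SI units) of the piecewise field profile described above.

```python
# B ∝ r inside a wire of radius a (encircled current grows as r^2) and
# B ∝ 1/r outside, so B -> 0 as r -> 0 for any finite current density.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, SI

def B_wire(r, I=1.0, a=1e-3):
    """Magnitude of B at distance r from the axis of a wire of radius a."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    b = np.empty_like(r)
    inside = r < a
    b[inside] = MU0 * I * r[inside] / (2 * np.pi * a**2)
    b[~inside] = MU0 * I / (2 * np.pi * r[~inside])
    return b

print(B_wire([0.0, 0.5e-3, 1e-3, 2e-3]))  # rises linearly, then falls as 1/r
```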
I did some algebra... In Planck units, if you make $\mu_0 = 4\pi$ and $\epsilon_0 = \frac{1}{4\pi}$ you get: $$\mu_0 = 4\pi \cdot \frac{m_p l_p}{t_p^2 I_p^2} = 1.2566368452237765 \cdot 10^{-6}\ N \cdot A^{-2}$$ (where $4\pi$ is the supposed value of $\mu_0$, $m_p$ is the Planck mass, $l_p$ is the Planck length, $t_p$ is the Planck time and $I_p$ is the Planck current). This is very near to the CODATA value in SI and probably is the correct value. CODATA: $1.25663706212(19) \cdot 10^{-6}\ N\, A^{-2}$. Similarly for the $\epsilon$ of the vacuum: $$\epsilon_0 = \frac{1}{4\pi} \cdot \frac{t_p^4 I_p^2}{m_p l_p^3} = 8.85419142073371 \cdot 10^{-12}\ F/m$$ CODATA value: $8.8541878128(13) \times 10^{-12}\ F\, m^{-1}$. It is clear to me that the measurements are approximations of these exact mathematical values $4\pi$ and $\frac{1}{4\pi}$, so that $\mu_0\epsilon_0c^2=1$, $c^2 = \frac{1}{\mu_0\epsilon_0}$ and $c = \frac{1}{\sqrt{\mu_0\epsilon_0}}$; in fact: $$\mu_0\cdot\epsilon_0 = 4\pi \cdot \frac{m_p l_p}{t_p^2 I_p^2} \cdot \frac{1}{4\pi} \cdot \frac{t_p^4 I_p^2}{m_p l_p^3} = \frac{t_p^2}{l_p^2} = \frac{1}{c^2}$$ in Planck units. The Coulomb constant $k_C$, at this point, is: $$k_C = \frac{1}{4\pi\epsilon_0} = \frac{c^2\mu_0}{4\pi} = c^2 \cdot 10^{-7}\ H\, m^{-1} = 8987548129.98536\ N\, m^2\, C^{-2}$$ So we have exact values for $\epsilon_0, \mu_0, c, k_C$ in Planck units, namely $\epsilon_0 = \frac{1}{4\pi}$, $\mu_0 = 4\pi$, $c=1$, $k_C=1$, and by multiplying by their dimensions expressed in Planck units we obtain the correct, exact values in SI.
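For anyone who wants to redo the arithmetic, here is a quick numeric check (my own, using CODATA-recommended Planck values and taking the Planck current to be the Planck charge divided by the Planck time, which is an assumption of this sketch):

```python
# Numeric check of mu_0 = 4*pi * m_p*l_p/(t_p^2 * I_p^2) and the matching
# expression for eps_0, evaluated with CODATA Planck-unit values in SI.
import math

m_p = 2.176434e-8    # Planck mass, kg
l_p = 1.616255e-35   # Planck length, m
t_p = 5.391247e-44   # Planck time, s
q_p = 1.875546e-18   # Planck charge, C
I_p = q_p / t_p      # Planck current, A (assumed definition)

mu0 = 4 * math.pi * m_p * l_p / (t_p**2 * I_p**2)
eps0 = (1 / (4 * math.pi)) * t_p**4 * I_p**2 / (m_p * l_p**3)
c = 1 / math.sqrt(mu0 * eps0)

print(mu0)   # ~1.2566e-6  N A^-2
print(eps0)  # ~8.854e-12  F m^-1
print(c)     # ~2.9979e8   m s^-1
```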
I thought I knew this but have found it surprisingly difficult to find good references. I am interested in solving $$\left\{\begin{align}& \Delta \psi = - \rho & & \mbox{in } \mathbb{R}^3, &(1) \\& \psi(\infty) = 0, & & &(2)\end{align}\right.$$ where $\rho$ is a compactly supported function. We know the answer should be given (up to constant factors) by \begin{equation} \psi(x) = \int_{\mathbb{R}^3} \frac{\rho(y)\,dy}{|x-y|}. \qquad (3) \end{equation} My first question is: what are the mildest conditions that can be imposed on $\rho$ for (1) and (2) to hold pointwise, and is the solution indeed given by (3)? I think Gilbarg and Trudinger give it for $\rho$ Hölder continuous with Hölder exponent $\alpha \in (0,1]$. My second question is: what if I relax the condition that (1) hold pointwise, and instead seek weak solutions, i.e. $\psi$ satisfying $$ \int_{\mathbb{R}^3}\left[ \nabla\psi\cdot\nabla\varphi - \rho\varphi\right]dx = 0, \qquad \psi(\infty)=0 \qquad (4) $$ for all test functions (i.e. $C_c^\infty$) $\varphi$? Then how does the generality improve? Can it be broadened to allow for measures $\rho$ if we replace $\int\varphi\rho\, dx$ by $\int \varphi\, d\rho$ in (4)? FYI the books I have been consulting include Evans's PDE, Gilbarg and Trudinger's Elliptic PDE, Landkof's Potential Theory, Helms' Potential Theory, and Jackson's Electrodynamics. Thank you all in advance.
Journal of Graph Algorithms and Applications, ISSN: 1526-1719. DOI: 10.7155/jgaa.00497

COOMA: A Components Overlaid Mining Algorithm for Enumerating Connected Subgraphs with Common Itemsets
Kazuya Haraguchi, Yusuke Momoi, Aleksandar Shurbevski, and Hiroshi Nagamochi
Vol. 23, no. 2, pp. 434-458, 2019. Regular paper.

Abstract In the present paper, we consider the graph mining problem of enumerating what we call connectors. Suppose that we are given a data set $(G,I,\sigma)$ that consists of a graph $G=(V,E)$, an item set $I$, and a function $\sigma:V\rightarrow 2^{I}$. For $X\subseteq V$, we define $A_\sigma(X)\triangleq\bigcap_{v\in X}\sigma(v)$. Note that, for $X,Y\subseteq V$, $X\subseteq Y$ implies that $A_\sigma(X)\supseteq A_\sigma(Y)$. A vertex subset $X$ is called a connector if (i) the subgraph $G[X]$ induced from $G$ by $X$ is connected; and (ii) for any $v\in V\setminus X$, $G[X\cup\{v\}]$ is disconnected or $A_\sigma(X\cup\{v\})\subsetneq A_\sigma(X)$. To enumerate all connectors, we propose a novel algorithm named COOMA (components overlaid mining algorithm). The algorithm mines connectors by "overlaying" an already discovered connector on a certain subgraph of $G$ iteratively. By overlaying, we mean taking an intersection between the connector and connected components of a certain induced subgraph. Interestingly, COOMA is a total-polynomial time algorithm, i.e., the running time is polynomially bounded with respect to the input and output size. We show the efficiency of COOMA in comparison with COPINE [Sese et al., 2010], a depth-first-search based algorithm.

Submitted: January 2019. Reviewed: April 2019. Revised: June 2019. Accepted: July 2019. Final: July 2019. Published: July 2019. Communicated by Seok-Hee Hong
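To make the definition concrete, here is a small Python sketch (mine, not COOMA itself) that checks the connector condition directly for a given vertex subset; it assumes the `networkx` library and represents $\sigma$ as a dict from vertices to frozensets of items.

```python
# X is a connector iff G[X] is connected and no outside vertex v can be added
# without disconnecting G[X ∪ {v}] or strictly shrinking A_sigma(X ∪ {v}).
import networkx as nx
from functools import reduce

def common_items(X, sigma):
    """A_sigma(X) = intersection of sigma(v) over v in X."""
    return reduce(frozenset.__and__, (sigma[v] for v in X))

def is_connector(G, sigma, X):
    X = set(X)
    if not nx.is_connected(G.subgraph(X)):
        return False
    items = common_items(X, sigma)
    for v in set(G) - X:
        bigger = X | {v}
        if nx.is_connected(G.subgraph(bigger)) and common_items(bigger, sigma) == items:
            return False  # X extends without losing any common item
    return True

# Toy data: a path 0-1-2 where {0, 1} is maximal for the common item "a".
G = nx.path_graph(3)
sigma = {0: frozenset("ab"), 1: frozenset("a"), 2: frozenset("b")}
print(is_connector(G, sigma, {0, 1}))  # True
```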
In this article, I am going to apply the divide and conquer algorithm to find the closest pair of points from a given set of points. Two points are closest when the Euclidean distance between them is smaller than that of any other pair of points. The Euclidean distance between points $p_1(x_1, y_1)$ and $p_2(x_2, y_2)$ is given by the following mathematical expression $$distance = \sqrt{(y_2 - y_1)^2 + (x_2 - x_1)^2}$$ We can find the closest pair of points using the brute force method in $O(n^2)$ time. Using divide and conquer techniques we can reduce the time complexity to $O(n\log n)$. Even though the algorithm described in this article takes $O(n \log^2 n)$ time, I will give you the idea of how to make the complexity $O(n\log n)$ towards the end of the article. Before entering the main procedure of the algorithm, the given set of points is first sorted by x-coordinate and y-coordinate respectively to create two separate point sets $P_x$ and $P_y$. The algorithm proceeds as follows.

1. If the number of points is less than 4, use the brute force method to find the closest pair in constant time.
2. Find a vertical line $l$ that bisects the point set into two halves - a left half and a right half. Let $Q$ denote the left half and $R$ denote the right half. Since we already have the points sorted by $x$-coordinate, this can be done in constant time.
3. Sort both $Q$ and $R$ by x-coordinate and y-coordinate respectively to produce $Q_x$, $Q_y$, $R_x$ and $R_y$. This operation can take from $O(n)$ to $O(n\log n)$ depending upon the implementation. Normal sorting takes $O(n\log n)$, but there are techniques to divide sorted points into two sorted halves in linear time. In this article, I am going to use normal sorting.
4. Recursively find the closest pair and distance in both halves. Let the minimum distance in the left half be $\delta_l$ and the minimum distance in the right half be $\delta_r$. If the closest pair lies in either half, we are done. But sometimes the closest pair might be somewhere else: this happens when one point of the closest pair is in the left half and the other is in the right half.
5. Find the closest pair of points such that one point is in the left half and the other is in the right half. This step is a little bit tricky to do in linear time. I explain this step in detail in the next section.
6. Now we have three distances and pairs: one from the left half, one from the right half and the last one from between the left and right halves. Find the best of the three and return it.

Consider a set of points divided by a vertical line $l$ (shown by a red line) as in the figure below. In the figure, $\delta_l$ is the minimum distance on the left, $\delta_r$ is the minimum distance on the right and $\delta = \min (\delta_l, \delta_r)$. To find the minimum distance between a point on the left-hand side and a point on the right-hand side, we do not need to check the distances between every point on the left and every point on the right. We only need to consider the points that lie inside the area that is $\delta$ units away from $l$ on either side, i.e. the points shown in green in the above figure. Sometimes all the points may lie in this area (the area between the two dotted lines). In that case, do we need to check the distance between every point on the left and every point on the right? The answer is NO. It is sufficient to find the distance from a point on the left to only a constant number of points on the right, provided the points are sorted by y-coordinate.
That means we use $P_y$ and find the closest pair of points by comparing the distance between a point and its at most 7 neighboring points (for the proof see here) for every point inside the area. In the worst case, if all the points lie in that region, the complexity will be $7n$, which is linear.

Before calling the main procedure, we sort the points by $x$ and $y$ coordinates respectively:

px = sorted(points, key = lambda p: p.x)

px and py are passed to the function closest_pair that finds the closest pair of points.

# px : set of points sorted by x coordinates

The base case is straightforward: use brute force when the number of points is less than 4.

# calculates distances from every point to every other point

Finally, step 5 is implemented with the strip technique described above; a complete sketch appears after the references below.

The recurrence relation for this divide and conquer algorithm can be written as $$T(n) = 2T(n/2) + O(n\log n)$$ Since we are performing a sorting operation inside the divide step, the algorithm spends $O(n\log n)$ in each recursive call. The overall complexity is, therefore, $O(n\log^2 n)$. Line numbers 13 - 16 in the closest_pair function can be improved. Since px and py are already sorted, we can get the sorted halves in linear time. This sounds like the opposite of the merge procedure in merge sort, where we produce a sorted list from two sorted lists in linear time.

Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (n.d.). Introduction to algorithms (3rd ed.). The MIT Press.

Closest Pair of Points. (n.d.). Lecture. Retrieved August 26, 2018, from http://www.cs.ubc.ca/~liorma/cpsc320/files/closest-points.pdf
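A compact, self-contained sketch of the procedure described above (my own reconstruction, using coordinate tuples instead of point objects; it remains $O(n\log^2 n)$ because it re-sorts the strip by y-coordinate):

```python
# Divide-and-conquer closest pair: split by a vertical line, recurse, then
# scan the strip of width 2*delta around the line with <= 7 comparisons each.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def brute_force(pts):
    return min((dist(p, q), (p, q))
               for i, p in enumerate(pts) for q in pts[i + 1:])

def closest_pair(px):
    # px: points sorted by x-coordinate
    n = len(px)
    if n < 4:
        return brute_force(px)
    mid = n // 2
    x_line = px[mid][0]                       # the vertical line l
    best = min(closest_pair(px[:mid]), closest_pair(px[mid:]))
    delta = best[0]
    # Points within delta of l, sorted by y; 7 neighbors suffice per point.
    strip = sorted((p for p in px if abs(p[0] - x_line) < delta),
                   key=lambda p: p[1])
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:
            best = min(best, (dist(p, q), (p, q)))
    return best

points = [(0, 0), (3, 4), (1, 1), (7, 7), (0.5, 0.9)]
print(closest_pair(sorted(points)))  # ((0.5, 0.9), (1, 1)) at distance ~0.51
```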
I'm reading through the steepest descent method, and I'm struggling to understand a specific bound. I'll try to be as clear as I can, but the notation used is a bit messy. The book basically explains what steepest descent is, as a generalization of the Gauss-Seidel method. It is also said that it's NOT a good algorithm, but that it is worth looking further into its properties. You can assume we have to solve the problem $$ \min_{x \in \mathbb{R}^n} f(x) $$ where $f : \mathbb{R}^n \rightarrow \mathbb{R}$. Theorem 2.11 in this book states a convergence result; parts (i) and (iii) are the relevant ones. Here $q(t)$ is defined as $f(x_k + td_k)$, so it's a parametric version of the next iterate, and for a given direction we wish to find the best "step". The theorem is proved, and it makes sense (it basically says that the method converges to a minimum, or keeps decreasing). However, after the proof something else is stated about the "convergence speed", and there is specifically the following bound I don't quite understand: $$ f(x) \leq f^* + \frac{1}{2}\text{dist}^2(x,X^*) $$ Can anyone tell me why that is? My guess is that it's probably related to the mean value theorem and Lipschitz conditions, but I can't really figure out why that's the bound.
Energy is a concept to which laymen often attach either a mysterious meaning or, on the contrary, an excessively corpuscular one. What do I mean? The laymen either think that energy is some ill-defined, science-transcending form of a soul or happiness or something that can't be quantified. In that case, they associate it with health, with the hugging of the trees, and similar things. Other laymen realize that energy is a fully quantitative concept, something that can be moved from one place to another. But to make this assertion compatible with their imagination, they think of energy as some kind of marbles or material, one that is excessively similar to water and similar stuff. This leads them to believe that energy has to remain equally visible at all times. Neither of these two ideas is right. The truth is different and, to some extent, it is in between the two positions above. Energy is a quantity that may be quantified but its units resemble neither marbles nor molecules of water. Energy doesn't have any molecules or other indivisible minimal units. Its values are continuous in general and its units are abstract, different from any "object" we know. Energy may change its forms in a huge number of ways, it may hide and become invisible to the human eyes (but visible to other ways of measuring it). In this sense, it resembles the spirit or ghosts envisioned by the first group of the laymen.

What is the most universal definition of energy? Energy is the quantity that is conserved – whose total value is unchanged – whenever the laws of physics that govern the evolution of the physical system in time have one particular property. This property is an example of a symmetry. And the symmetry associated with energy is the time-translational symmetry. It essentially requires that the laws of physics don't change with time. If you repeat the same experiment with the same initial conditions tomorrow, it will yield the same results – or, in quantum mechanics, Nature will predict the same probabilities of various outcomes as it did today.

Why is it so? This relationship between energy and the symmetry is known as Noether's theorem, after Emmy Noether, one of the best female mathematicians of all time. The proof in classical physics looks contrived and Noether's papers were almost unreadable for a modern physicist. However, the relationship becomes crystal clear in quantum mechanics. In quantum mechanics, the time derivative of a quantity \(L\) is determined, via Heisenberg's equations of motion, as a multiple of its commutator with the Hamiltonian:
\[ \frac{dL}{dt} = \frac{1}{i\hbar} [L,H]. \]
Now, a conserved quantity is obviously one (\(L\)) for which both sides are equal to zero, i.e. a quantity that commutes with the Hamiltonian. But when \(L\) commutes with the Hamiltonian, we may say another thing about \(L\). We may say that if we first transform the initial state vector by a transformation generated by \(L\) and then "wait" (i.e. evolve it in time), we get the same final state as if we first "wait" and then transform it by the symmetry:
\[ \exp(i \alpha L)\exp(Ht/i\hbar) \left|\psi\right\rangle = \exp(Ht/i\hbar) \exp(i\alpha L)\left|\psi\right\rangle. \]
This identity follows from \(HL=LH\) because \(HL=LH\) also implies \(f(H)g(L)=g(L)f(H)\) for any functions \(f,g\) of these two operators. It means that every conserved quantity is a generator of a symmetry and vice versa.
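A toy numerical check (mine, not from the original text) of these two statements, using a random Hermitian matrix as the Hamiltonian and a polynomial in it as the conserved quantity:

```python
# If [L, H] = 0 then <L> is conserved under U(t) = exp(Ht/(i*hbar)), and the
# evolution and the symmetry exp(i*alpha*L) can be applied in either order.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T          # a random Hermitian Hamiltonian (hbar = 1)
L = H @ H                   # any polynomial in H commutes with H

U = expm(H / 1j)            # evolution for t = 1
S = expm(1j * 0.7 * L)      # the symmetry transformation, alpha = 0.7

print(np.allclose(U @ S, S @ U))            # True: the two operators commute

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
before = psi.conj() @ L @ psi
after = (U @ psi).conj() @ L @ (U @ psi)
print(np.allclose(before, after))           # True: <L> is conserved
```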
\(L=H\) is a particular example of a conserved quantity because \([H,H]=0\). Well, that's obviously true. And we may see that \(H\) is the generator of translations in time; that's a direct interpretation of Schrödinger's equation (or even Heisenberg's equations of motion, for that matter). The classical proofs of the relationship between conserved quantities and symmetries are actually a bit more messy. For example, you may replace the "simple" commutators by Poisson brackets which seem to be given by rather convoluted formulae. This messiness is a hint that quantum mechanics is more fundamental than classical physics. I hope that many readers who didn't understand the text above continued to read, anyway. It was a test of some discipline. Now, things will get easier.

Energy in classical mechanics

Some years after Isaac Newton wrote his equations that govern the motion of the planets in the Solar System and other things, people such as Newton's foe Gottfried Leibniz noticed that one may define an expression – a mathematical function of the masses, positions, and velocities – that isn't changing with time. In this way, a specific formula was given to the ancient visions of Thales of Miletus (550 BC) and other philosophers. An important term in the energy is the kinetic energy
\[ E_{\rm kin} = \sum_j \frac{m_j v_j^2}{2}. \]
Each object \(j\) contributes a term that increases with the velocity – quadratically – and that is proportional to the mass. The factor of one-half and the second power are "natural" because if you calculate the time derivative of this energy, you will get
\[ \frac{dE_{\rm kin}}{dt} = \sum_j m_j \vec a_j\cdot \vec v_j \]
where \(\vec a_j\), the acceleration of the object \(j\), arises from the differentiation and the factor of \(1/2\) cancels due to Leibniz's rule for the derivative of a product. But \(m_j\vec a_j=\vec F_j\) is nothing else than the force that must act on the object \(j\) and \(\vec v_j = d\vec x_j/dt\). And \(\vec F_j\cdot d\vec x_j\) is the infinitesimal work. So the change of the kinetic energy of some object per unit time is the rate of work that others do on this object. All these formulae are consistent with each other, but the main claim one may actually check is that if you calculate the derivative of the total energy with respect to time and use the equations of motion, you get zero.

In the SI units, the kinetic energy has units of
\[ [E]= {\rm kg}\cdot {\rm m}^2/{\rm s}^2. \]
This may look abstract but there's nothing wrong about considering powers of well-known units and their products. These days, we also use the name "joule" to describe the unit above. This name celebrates James Joule who discovered the equivalence between work and heat; this will be discussed later. Note that all forms of energy have the same unit, otherwise they couldn't be added. But they must be added all the time because it's the total value (sum) of energy in all of its forms that is conserved. An individual planet's kinetic energy isn't conserved because the speed is changing. However, what is constant is the planet's total energy that also contains the potential energy
\[ E_{\rm pot} = -\frac{G M_{\rm Sun} m_j}{R}. \]
For an approximately uniform gravitational field, e.g. one above the surface of the Earth, we have \[ E_{\rm pot} = m_j gh_j \] where \(h_j\) is the height of the object \(j\). If you consider balls moving in the gravitational field of the Earth, the total sum
\[ E = \frac{mv^2}{2} + mgh \]
is conserved. The ball may reach higher altitudes but it's slower over there, and when it drops, its velocity increases so that \(E\) is constant. By summing over all objects and their pairwise interaction energies, this formula is easily generalized to the case of many particles. Also, these formulae assume that the objects are "point masses" and their internal motion (spin) is neglected. However, an object that is extended and that may be rotated may also be decomposed into small particles (imagine atoms). When you sum their kinetic energies, the kinetic energy of the internal motion (spin) of the extended object is easily shown to be
\[ E_{\rm rot} = \frac{I\omega^2}{2} \]
where \(I\) is the moment of inertia with respect to the axis of rotation,
\[ I = \int \rho_{dm}^2 \,dm \]
(with \(\rho_{dm}\) the distance of the mass element \(dm\) from the axis), and \(\omega\) is the angular frequency of the rotation. Note that the energy of a spinning object is again proportional to "something like the mass", the moment of inertia \(I\), and it increases with the second power of "something like the velocity", namely the angular velocity. However, balls that are made of too light materials quickly slow down due to the friction. It seems that the energy is lost. However, it isn't lost without a trace. It actually heats up the balls or the air. If you carefully measure the temperature change of the ball and other objects and multiply the temperature changes by the heat capacity \(C\),
\[ \Delta E_{\rm thermal} = \Delta T\cdot C, \]
you will obtain the thermal energy hiding in the materials. This linearized formula is analogous to \(mgh\). In reality, the heat capacity \(C\) depends on \(T\) as well, so the exact thermal energy stored in a piece of material is nonlinear, much like \(-GMm/r\) was nonlinear (the formula for the thermal energy of a material is different and more messy than the gravitational potential energy in general). At any rate, if you observe the velocity, position, and temperature of the moving balls (and the surrounding air) and add the relevant pieces of energy up, you will get a quantity that is conserved, i.e. whose value isn't changing with time.

James Joule was the guy who clarified how the mechanical forms of energy – potential or kinetic energy – may be converted to heat (energy of thermal form that manifests itself by increasing the temperature of objects) and how much heat you actually get. Before Joule, heat would be measured in different units – calories (we still use them for nutrition values of foods) – but after Joule, it was clear that one may use the same unit for both, much like we use the same unit for horizontal and vertical distances. To celebrate Joule, the modern unit of energy/work/heat is named after him.

I have mentioned several forms of energy. Point masses have kinetic and gravitational potential energy, rotating objects have the kinetic energy of spin, warmer objects have a higher thermal energy which may be transferred as heat when they're in contact (or created by friction). We need to add many more terms to the overall energy for our discussion to be sufficiently comprehensive. When you consider charged objects, they have an electrostatic potential energy,
\[ E_{\rm elst} = \frac{Q_1 Q_2}{4\pi\epsilon_0 R}. \]
Similar formulae exist for the magnetostatic potential energy and so on.
The electromagnetic field itself carries energy whose density per unit volume is
\[ \frac{dE_\text{elmg. field}}{dV} = \rho_\text{elmg. field} = \frac 12\left( \epsilon_0 E^2+ \frac{1}{\mu_0} B^2\right). \]
Electromagnetic waves carry nonzero energy even if you can't account for the sources that created them. These waves may also be interpreted, in quantum theory, as a flow of photons whose energy is \(E=\hbar \omega\) per photon where \(\omega\) is the angular frequency of the photon. In special relativity, the mechanical energy \(mv^2/2\) is replaced by
\[ E_{\rm rel,kin} = \frac{mc^2}{\sqrt{1-v^2/c^2}}. \]
If you Taylor expand this formula around \(v/c=0\), which is useful for \(v\ll c\), you get
\[ E_{\rm rel, kin} = mc^2+\frac{mv^2}{2} +\frac{3mv^4}{8c^2}+\dots \]
Aside from increasingly negligible terms such as the one whose coefficient is \(3/8\), you also see the usual non-relativistic energy \(mv^2/2\) in between – it's what we need to reduce relativity to Newtonian physics. But there's also the huge leading term \(E=mc^2\), Einstein's famous addition to the energy of the world. This latent energy hiding in every piece of mass may be released by annihilation or by evaporation of a black hole that devoured the mass. And a part of this energy may be released if you change the structure of the material. Thermonuclear fusion releases roughly \(mc^2/100\), fission gives us ten times less, \(mc^2/1,000\), and chemical reactions release about 1 million times less than the nuclear reactions (the million also arises as the ratio of \(1\,{\rm MeV}\) and \(1\,{\rm eV}\) which are the estimated energies coming from one nucleus/atom in nuclear and chemical reactions, respectively), i.e. about \(mc^2/100,000,000\).
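A quick symbolic check of the expansion above (my own addition, using sympy):

```python
# Taylor expansion of the relativistic energy m*c^2/sqrt(1 - v^2/c^2) in v.
import sympy as sp

m, v, c = sp.symbols('m v c', positive=True)
E = m * c**2 / sp.sqrt(1 - v**2 / c**2)
print(sp.series(E, v, 0, 6))
# m*c**2 + m*v**2/2 + 3*m*v**4/(8*c**2) + O(v**6)
```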
We got to this "transmutation of materials" through a discussion of special relativity. But long before relativity, people knew some chemistry and they knew that energy – e.g. heat – may be obtained not only by friction (recall our discussions of balls with friction) but also by burning objects, e.g. fire. Burning one kilogram of a particular material in a particular way releases a particular amount of energy (which depends on the material and the way of burning etc.). This chemical energy, chemistry's contribution to the overall energy, has to be added if you want the total energy to be conserved. However, in physics, chemistry may be reduced to the reorganization of electrons and their motion around nuclei, so all the chemical energy may be replaced by a more accurate formula for the kinetic and (mostly electrostatic) energy of electrons (and nuclei) in the atoms and molecules.

I have mentioned the kinetic energy of spinning bodies and materials' thermal and chemical energy – and the whole \(E=mc^2\) which is hard to release unless you manage to do something "truly existential" with the material. But in thermodynamics, there are other forms of energy of materials, too. In particular, if you compress gas to a high pressure (or any other material, for that matter, but it's harder for other materials), the gas wants to expand back to the original volume (for which the pressures are balanced) and it is willing to do work to achieve its dream. The ability to do work is always – and this situation is no exception – a measure of the internal energy. So along the "trajectory" in the space of states of gas that may be reached without heat exchange (adiabatic processes), the energy of a body of gas is a decreasing function of the volume. The infinitesimal change of the energy, the work \(p\cdot dV\), may be converted either to heat, changing the temperature, or it may do mechanical work (and, for example, change the potential energy of heavy solid bodies), or do other things. One must understand all these conversions and which of them occurs if he studies things like aerodynamics or atmospheric physics, of course. Recall our debates related to the greenhouse effect and climatology in general.

I could go on and go on and go on. The importance of the newer examples I would be adding would arguably decrease. And I could also discuss the price of 1 kWh of electrical energy and lots of values of energy in the real world. However, I want to address one conceptual question. You may be surprised why pretty much any kind of natural phenomenon contributes to the overall energy. Well, that's a good question. You're actually far too modest. Every type of phenomenon in Nature has to contribute to the energy for a fundamental reason that was already mentioned at the beginning. The total energy – in abstract and quantum mechanics known as the Hamiltonian – is given by a formula and this formula actually determines the evolution of anything and everything in time. Any quantity (position, velocity, electric field at a given point, volume of a gas, the percentage of an isotope or carbon dioxide in it, and so on) that is evolving in time is evolving because it doesn't commute with the Hamiltonian – with the operator of energy. So the energy has to depend on a related "complementary" quantity describing very similar things. It's not hard to realize that this is the reason why the total energy has to depend on "really everything that may change and that may be measured in the world". The energy conservation law – one of the defining reasons why we talk about energy at all – is only one real constraint on the degrees of freedom in your system. If you have too many, it tells you "almost nothing" about their evolution. However, the formula for the energy – for the Hamiltonian – isn't just a curiosity, a number that accidentally happens to be conserved. It determines the evolution of the whole Cosmos and every piece of it in time.
In general chemistry, it is common to teach students to determine a molecular dipole by having them first determine "bond dipoles", which are just based on electronegativity. Then, by adding up these vectors, one determines the direction of the overall dipole. In quantum mechanics, however, the molecular dipole is found by calculating the expectation value $\langle\psi|\hat{\mu}|\psi\rangle$. This can be decomposed into $x$, $y$, and $z$ components if one so desires. Regardless, the actually quite useful idea of a bond dipole is nowhere to be found. I assume that there is not a rigorous way of defining a bond dipole so that a sum of these bond dipoles equals the expectation value above. This is because there is no way to partition the electron density that is both well-defined and exact. I suppose one could do this using the theory of atoms in molecules. Nonetheless, other ideas like local-mode vibrations are approximate in the sense that a local mode is not an eigenfunction of the polyatomic vibrational Hamiltonian, yet local modes are still very useful and even more accurate in some contexts. The same is sort of true of bond dipoles: if one imagines a very long molecule which is polar at one end, a nearby molecule which moves along this large molecule will experience a changing electric field, because the dipole of the molecule is only a true point-dipole when the molecule is very far away. So, the field experienced by nearby molecules is more akin to the sum of the bond dipoles in the region nearby. So, is there a rigorous way to determine bond dipoles from first principles? Is it as simple as projecting the total dipole onto a bond axis? What is the form of this projector? Also, in cases where the partitioning of atoms is well-defined, such as the theory of atoms in molecules, are bond dipoles well-defined and do they relate to the total dipole as we expect? Answers to any of these questions would be welcome. To be more clear, I am talking about the bond dipoles described on this wikipedia page. They do not provide any formal way of actually calculating the bond dipoles. They mention one can get these dipoles after calculating the total dipole, but I am not sure of the uniqueness of this under unitary transformations.
I'm trying to work on an exercise in Wolfgang Rindler's book "Introduction to Special Relativity" and I'm stuck on the following exercise: Two particles move along the x-axis of S at velocities 0.8c and 0.9c, respectively, the faster one momentarily 1 m behind the slower one. How many seconds elapse before collision? My confusion comes from the discrepancy between two different approaches I took:

Version A: Don't switch frames. The first object moves with $x_1(t) = 0.8c\cdot t$, the second one with $x_2(t) = -1\ \mathrm{m} + 0.9c\cdot t$. The event where $x_1(t) = x_2(t)$ is $x = 7.9992\ \mathrm{m}$ and $t=3.3333\times 10^{-8}\ \mathrm{s}$. For this I don't use any special relativity properties, which makes me slightly suspicious of this approach.

Version B: Choose a comoving frame $S'$ with the slower particle at its origin. We set $S$ and $S'$ such that $x=x'$, $y=y'$ and $z=z'$ at $t=0$. The relative velocity of $S'$ to $S$ is the velocity of the slower particle, i.e. $v=0.8c$ in the direction of $x$. We drop the $y$ and $z$ coordinates now because we won't need them. We call the slower object's coordinate $x_1$ in $S$ and $x_1'$ in $S'$, and the other one's $x_2$ and $x_2'$, respectively. Then $x_1'=0$ for all times as we choose object 1 as the origin of $S'$. By the velocity transformation, $$u_2' = \frac{u_2-v}{1-u_2v/c^2} = 0.35714c,$$ which is the relative velocity of the faster object to the slower object. The initial position of the second object is by Lorentz transformation given by $$x_2'(0) = \gamma \cdot x_2(0) = -1.6667\ \mathrm{m}.$$ Now we need to find the time $t'$ in $S'$ when $x_2'$ hits the origin (and thus the other particle): $$x_2'(t') = -1.6667\ \mathrm{m}+0.35714c\cdot t' = 0,$$ hence $$t' = 1.5556\cdot 10^{-8}\ \mathrm{s}.$$ The collision event in $S'$ is thus given by $x'=0$ and $t'= 1.5556\cdot 10^{-8}\ \mathrm{s}$. Transforming back to $S$, we get $$x = \gamma\cdot(x'+vt') = 6.2223\ \mathrm{m}$$ and $$t = \gamma\cdot(t'+vx'/c^2) = \gamma t' = 2.5926\cdot 10^{-8}\ \mathrm{s}.$$ This is not the same as Version A. Where did I go wrong?
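For reference, a small script (mine, not part of the question) that reproduces both computations numerically, so that plain arithmetic can be ruled out as the source of the discrepancy:

```python
# Version A in frame S, Version B via the boost to the slower particle.
c = 299792458.0

# Version A: 0.8c*t = -1 + 0.9c*t
u1, u2, x20 = 0.8 * c, 0.9 * c, -1.0
t_A = x20 / (u1 - u2)
print(t_A, u1 * t_A)                     # ~3.336e-8 s, 8.0 m

# Version B:
g = 1 / (1 - 0.8**2) ** 0.5              # gamma = 5/3
u2p = (u2 - u1) / (1 - u1 * u2 / c**2)   # ~0.35714c
x20p = g * x20                           # ~-1.6667 m (the t = 0 position boosted)
t_p = -x20p / u2p
print(t_p, g * t_p)                      # ~1.556e-8 s, then ~2.594e-8 s
```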
[pstricks] Searching for a numeric solution for coupled differential equation systems of order 2
Alexander Grahn A.Grahn at hzdr.de
Fri May 25 11:15:11 CEST 2012

On Thu, May 24, 2012 at 10:21:06PM +0200, Juergen Gilg wrote:
> Dear PSTricks list,
>
> is there anybody out there who knows how to generate PS code for
> "Runge-Kutta 4" to solve a "coupled differential equation system of
> order 2"?
>
> Such code is needed to animate a double pendulum in real
> time, following the coupled differential equations in the variables
> (\varphi_1, \varphi_2) and their derivatives:
>
> (1) l_1\ddot{\varphi}_1+\frac{m_2}{m_1+m_2}l_2\ddot{\varphi}_2\cos(\varphi_1-\varphi_2)-\frac{m_2}{m_1+m_2}l_2\dot{\varphi}_2^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_1=0
>
> (2) l_2\ddot{\varphi}_2+l_1\ddot{\varphi}_1\cos(\varphi_1-\varphi_2)-l_1\dot{\varphi}_1^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_2=0

I'd first transform this set of DEs of second order into a set of four DEs of first order, since Runge-Kutta only works on first-order DE sets, as far as I know. However, it may still not be amenable to Runge-Kutta, because the right-hand sides of the resulting four-equation set of DEs, put into normal form, still contain derivatives:

\begin{align} \dot\varphi_{11}& = \varphi_{12}\\ \dot\varphi_{12}& = f_1(\varphi_{11}, \varphi_{21}, \varphi_{22}, \dot\varphi_{22})\\ \dot\varphi_{21}& = \varphi_{22}\\ \dot\varphi_{22}& = f_2(\varphi_{11}, \varphi_{12}, \dot\varphi_{12}, \varphi_{21}) \end{align}

Perhaps an iteration procedure is required to determine $\dot\varphi_{12}$ and $\dot\varphi_{22}$ on the right-hand sides of the equations above? An implementation of the Runge-Kutta 4 method is given in the animate package documentation (Lorenz attractor example). It writes the solution vector into a file that can be read out for plotting by other PSTricks utilities. The DE system for calculating the right-hand sides must be written in PS syntax.

Alexander
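Not PostScript, but for reference here is a short Python sketch (mine, not from the thread) of the standard way around the dangling derivatives: equations (1)-(2) are linear in $\ddot\varphi_1, \ddot\varphi_2$, so one can solve a 2x2 linear system for them at every evaluation of the right-hand side, and plain RK4 then applies to the first-order system; the physical constants are illustrative.

```python
# Double pendulum: solve for the angular accelerations from the linear
# system implied by (1)-(2), then integrate with classic RK4.
import numpy as np

g = 9.81
m1, m2, l1, l2 = 1.0, 1.0, 1.0, 1.0
mu = m2 / (m1 + m2)

def rhs(y):
    p1, p2, w1, w2 = y                      # phi_1, phi_2 and their derivatives
    d = p1 - p2
    A = np.array([[l1, mu * l2 * np.cos(d)],
                  [l1 * np.cos(d), l2]])
    b = np.array([mu * l2 * w2**2 * np.sin(d) - g * np.sin(p1),
                  l1 * w1**2 * np.sin(d) - g * np.sin(p2)])
    a1, a2 = np.linalg.solve(A, b)          # the angular accelerations
    return np.array([w1, w2, a1, a2])

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(y + h / 2 * k1)
    k3 = rhs(y + h / 2 * k2)
    k4 = rhs(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([1.0, 0.5, 0.0, 0.0])          # initial angles, starting at rest
for _ in range(1000):
    y = rk4_step(y, 0.001)
print(y)                                    # state after one second
```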
This weekend I spent some time solving exercises from Introduction to Lambda Calculus. There is one notable exercise where I spent quite some time figuring out the proof. It was: Given the Church numeral $\textbf{c}_n \equiv \lambda f.\lambda x.f^n(x)$, a lemma ($(\textbf{c}_nx)^m (y) = x^{n*m}(y)$), and a proposition $\textbf{A}_* \equiv \lambda x.\lambda y.\lambda z.x(yz)$, prove $\textbf{A}_* \textbf{c}_n\textbf{c}_m = \textbf{c}_{n*m}$ using the lemma. Following is my proof: \[ \begin{aligned} L.H.S. = \textbf{A}_*\textbf{c}_n\textbf{c}_m &= (\lambda xyz.x(yz)) \textbf{c}_n\textbf{c}_m & \\ &= \lambda z . \textbf{c}_n (\textbf{c}_m z) & \\ &= \lambda z . (\lambda f . \lambda y . f^n(y)) (\textbf{c}_m z) & [ \because \text{Church numeral } \textbf{c}_n \equiv \lambda f. \lambda x.f^n(x)] \\ &= \lambda z . \lambda y . (\textbf{c}_m z)^n(y) & \\ &= \lambda z . \lambda y . z^{m*n} (y) & [ \because (\textbf{c}_m x)^n (y) = x^{m*n}(y), \text{ the lemma with } n \text{ and } m \text{ swapped} ] \\ &= \textbf{c}_{m*n} = \textbf{c}_{n*m} = R.H.S. \end{aligned} \] Hence proved. Most of the time was spent understanding the lemma and the different roles of the superscript: in $\textbf{c}_n$ the superscript counts how many times $f$ is applied, whereas in the lemma it is an exponent on an iterated function. It was fun. 😄
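As a sanity check of the proposition (my own addition), one can encode Church numerals as Python higher-order functions and read the result back as an integer:

```python
# Church numerals: c_n applies f to x exactly n times; A_* composes them,
# which multiplies the numerals.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

A_star = lambda x: lambda y: lambda z: x(y(z))   # the combinator A_*

def to_int(c):
    return c(lambda k: k + 1)(0)                 # apply successor to 0

c3, c4 = church(3), church(4)
print(to_int(A_star(c3)(c4)))   # 12 == 3 * 4
```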
Following are my lecture notes from Prof. Yuri Balasanov's course Mathematical Market Microstructure.\(\newcommand{\1}[1]{\unicode{x1D7D9}_{\{#1\}}}\newcommand{\Cov}{\text{Cov}}\newcommand{\P}{\text{P}}\newcommand{\E}{\text{E}}\newcommand{\V}{\text{V}}\newcommand{\bs}{\boldsymbol}\newcommand{\R}{\mathbb{R}}\newcommand{\rank}{\text{rank}}\newcommand{\norm}[1]{\left\lVert#1\right\rVert}\newcommand{\diag}{\text{diag}}\newcommand{\tr}{\text{tr}}\newcommand{\braket}[1]{\left\langle#1\right\rangle}\newcommand{\C}{\mathbb{C}}\)

Introduction

In this section we start with an overview of market microstructure as a whole.

Definition of Market Microstructure

Maureen O'Hara defines market microstructure as

… the study of the process and outcomes of exchanging assets under explicit trading rules. While much of economics abstracts from the mechanics of trading, microstructure literature analyzes how specific trading mechanisms affect the price formation process.

which is nowadays exemplified by high-frequency trading.

Frog's Eye View (Fundamental Assumption)

The Central Limit Theorem does not work. Price is not observable unless there's a trade, and thus neither the number nor the size of price movements during a period of time is guaranteed. In fact, no matter how many points we sample from historical data, the mass distribution of price jumps has fatter tails than the normal distribution, which means the CLT is not working.

(Price Formation and Discovery) The last price is not necessarily an indicator of where the price has now formed. Also, price discovery is a destructive experiment involving a unique counterparty.

(Uncertainty Principle) As in quantum mechanics, we can never simultaneously measure the price and its volatility manifested in a derivative product. Instead of a number, price is considered a distribution.

(The Two Slits Experiment) An order which passed through the previous slit may pass again or be submitted as one of the following: hit, lift or join. This activity affects the state of the trader's decision at subsequent times.

(Technology) Colocated servers; GPS antennas for timing; fiber optics vs. microwave; Field-Programmable Gate Arrays (FPGA) and Graphics Processing Units (GPU); big data.

(Regulation) Spoofing (also see the figure below); Rule 610 (locking the market); Dodd-Frank Act.

(Future) Direct Market Access (DMA); dark pools; cost of connectivity; speed of light.

Principle of Ma

Ma (間) means emptiness, spatial void, and an interval of space or time in Japanese. The Zen Principle of Ma, in the microstructure context, basically emphasizes that the more "micro" we go into the data, the more randomness we'll observe.

Characteristics of Transactions Data

Randomly spaced time intervals (Principle of Ma). Trading intensity contains important information. Discrete-valued prices can only be multiples of the tick size. Diurnal patterns: periodic intensity, for example, high at the beginning and at the end of the trading session. To observe microstructure, time resolution currently needs to be in microseconds.

Characteristics of Nonsynchronous Trading Data

Cross-correlation between stock returns at lag 1. Autocorrelation at lag 1 in portfolio returns. (Bid-Ask Bounce) Negative autocorrelations in the returns of a single stock.

Example Stocks A and B are independent. Stock A is traded more frequently than B. News arriving at the very end of the day session will more likely affect stock A than B. Stock B will react more the next day. Then in daily prices there will be a 1-day lag due to the difference in trading frequency even when the two stocks are independent.
Models

In this section, we will introduce a series of mathematical models that explain the abovementioned nonsynchronous characteristics.

Compound Poisson Model

Let \(r_t\) be the continuously compounded return at time \(t\). Assume that the \(r_t\) are i.i.d. latent variables, \(\E[r_t] = \mu\), \(\V[r_t]=\sigma^2\). For each \(t\) the probability that the asset is not traded is \(\pi\). Let \(r_t^0\) be the manifest return variable. If at \(t\) there is no trade, \(r_t^0 = 0\). If at \(t\) there is a trade, then \(r_t^0\) is the cumulative return since the previous trade. It can be shown that
\[ \begin{align} &\P[r_t^0=\textstyle{\sum_{i=0}^k} r_{t-i}] = \pi^k(1-\pi)^2,\quad\E[r_t^0] = \mu,\\&\V[r_t^0]=\sigma^2+\frac{2\pi\mu^2}{1-\pi},\quad \Cov(r_t^0, r_{t-1}^0) = -\pi\mu^2. \end{align} \]
This simple model explains the negative autocorrelation induced by nonsynchronous trading (see the simulation sketch at the end of these notes).

Ordered Probit Model

Let \(y_t\) be a latent variable depending on time. The observed variable is \(u_t\). Assume \(u_t\) is an ordered categorical variable with categories \(u^{(0)},\dots,u^{(k)}\):
\[ u_t = \begin{cases} u^{(0)} & \text{if }y_t\in (-\infty,\theta_1),\\ u^{(i)} & \text{if }y_t\in [\theta_i,\theta_{i+1}),\ i=1,2,\ldots,k-1,\\ u^{(k)} & \text{if }y_t\in [\theta_k,\infty). \end{cases} \]
The variable \(y_t\) is predicted using a linear model \(y_t=\bs{\beta}\bs{X}_t + \epsilon_t\), which gives
\[ \begin{align} \P[u_t=u^{(i)}\mid \bs{X}_t] &= \P[y_t \in I_i \mid \bs{X}_t] \quad (I_i \text{ the } i\text{-th interval above})\\ &= \begin{cases} \Phi\!\left(\frac{\theta_1-\bs{\beta X}_t}{\sigma_t}\right) & i=0,\\ \Phi\!\left(\frac{\theta_{i+1}-\bs{\beta X}_t}{\sigma_t}\right) - \Phi\!\left(\frac{\theta_{i}-\bs{\beta X}_t}{\sigma_t}\right) & i=1,2,\ldots,k-1,\\ 1 - \Phi\!\left(\frac{\theta_{k}-\bs{\beta X}_t}{\sigma_t}\right) & i=k. \end{cases} \end{align} \]
Note that here we assume \(\epsilon_t\sim\mathcal{N}(0,\sigma_t^2)\) and thus apply \(\Phi(\cdot)\) as the link function, which explains why it's a probit model.

Decomposition Model

Assume the price change \(y_i = P_{t_i} - P_{t_{i-1}}\) can be decomposed into the product of three components:

Indicator of price change \(A_i\in\{0,1\}\). Direction of price change \(D_i\in\{-1,+1\}\). Size of price change \(S_i\in\mathbb{N}_+\).

Specifically, for \(p_i=\P[A_i=1]\) we let
\[ \ln\left(\frac{p_i}{1-p_i}\right) = \bs{\beta X}_i\Rightarrow p_i = \frac{\exp(\bs{\beta X}_i)}{1 + \exp(\bs{\beta X}_i)}. \]
For \(\delta_i=\P[D_i=1\mid A_i=1]\) we let
\[ \ln\left(\frac{\delta_i}{1-\delta_i}\right) = \bs{\gamma Z}_i\Rightarrow \delta_i = \frac{\exp(\bs{\gamma Z}_i)}{1 + \exp(\bs{\gamma Z}_i)}. \]
For \(S_i\) we let
\[ S_i\mid (D_i,A_i=1)\sim 1 + g(\lambda_{u,i})\1{D_i=+1} + g(\lambda_{d,i})\1{D_i=-1} \]
where \(g(\lambda_{\xi,i})\) is a geometric distribution with parameter \(\lambda_{\xi,i}\) estimated from
\[ \ln\left(\frac{\lambda_{\xi,i}}{1-\lambda_{\xi,i}}\right) = \bs{\theta}_\xi\bs{W}_i\Rightarrow \lambda_{\xi,i} = \frac{\exp(\bs{\theta}_\xi\bs{W}_i)}{1 + \exp(\bs{\theta}_\xi\bs{W}_i)}, \quad \xi=u,d. \]

Examples

We can choose features as below:
\[ \bs{X}_i = (1, A_{i-1}),\ \bs{Z}_i=(1,D_{i-1})\ \text{and}\ \bs{W}_i = (1,S_{i-1}), \]
from which we can train a simple decomposition model using in-sample data. To Be Continued …
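A quick simulation (my own sketch, with arbitrary parameter values) of the compound Poisson model above, checking the induced first-order autocovariance \(-\pi\mu^2\) of the observed returns:

```python
# Nonsynchronous trading: latent returns accumulate between trades; the
# observed series acquires a negative lag-1 autocovariance of -pi*mu^2.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, pi, T = 0.2, 1.0, 0.4, 1_000_000

r = rng.normal(mu, sigma, T)              # latent returns
traded = rng.random(T) > pi               # a trade occurs with prob 1 - pi

r_obs = np.zeros(T)
acc = 0.0
for t in range(T):
    acc += r[t]                           # returns accumulate between trades
    if traded[t]:
        r_obs[t] = acc
        acc = 0.0

cov = np.cov(r_obs[1:], r_obs[:-1])[0, 1]
print(cov, -pi * mu**2)                   # both approximately -0.016
```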
For $n \in \mathbb{N}$, $n>1$, let $\mathbb{Z}_n^\times := \{1, \dots, n-1\}$.

For positive integers $a$ and $n$, show that $ax \bmod{n} = 1$ has a solution if and only if $\gcd(a,n)=1$. I have this part of the proof solved.

Using the above, show that $(\mathbb{Z}_n^\times,\cdot)$, where $a\cdot b := (ab) \bmod{n}$, is a group if $n$ is a prime. This is the part I am having trouble solving. I am using Euclid's lemma and the fact that multiplication is associative in the DA. I think I have to show closure, but I am not sure how to do so using the above.
Physical Review Letters, ISSN 0031-9007, 10/2017, Volume 119, Issue 16, p. 161101
On August 17, 2017 at 12:41:04 UTC the Advanced LIGO and Advanced Virgo gravitational-wave detectors made their first observation of a binary neutron star...
GAMMA-RAY BURST | MASSES | EQUATION-OF-STATE | PHYSICS, MULTIDISCIPLINARY | ADVANCED LIGO | PULSAR | MERGERS | RADIATION | DENSE MATTER | General Relativity and Quantum Cosmology | Physics | Astrophysics
Journal Article

2. Measurement of the ratio of the production cross sections times branching fractions of $B_{c}^{\pm} \to J/\psi \pi^{\pm}$ and $B^{\pm} \to J/\psi K^{\pm}$ and $\mathcal{B}(B_{c}^{\pm} \to J/\psi \pi^{\pm}\pi^{\pm}\pi^{\mp})/\mathcal{B}(B_{c}^{\pm} \to J/\psi \pi^{\pm})$ in pp collisions at $\sqrt{s} =$ 7 TeV
ISSN 1126-6708, 2015
B/c+ --> J/psi 2pi+ pi | experimental results | rapidity | B: transverse momentum | CMS | B+ --> J/psi K | B: decay modes | kinematics | CERN LHC Coll | cross section: branching ratio: ratio | B: hadronic decay | B/c+ --> J/psi pi | p p: colliding beams | p p: scattering | 7000 GeV-cms | B/c: hadronic decay | B/c: branching ratio: measured | B: branching ratio
Journal Article

3. Physical Review Letters, ISSN 0031-9007, 06/2016, Volume 116, Issue 24, p. 241103
We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the...
MASS-DISTRIBUTION | CHOICE | PHYSICS, MULTIDISCIPLINARY | STELLAR | RADIATION | MERGER | MAXIMUM MASS | Coalescing | Searching | Gravitational waves | Detectors | Luminosity | Coalescence | Deviation | Interferometers | General Relativity and Quantum Cosmology | Astrophysics | Physics | High Energy Astrophysical Phenomena
Journal Article

4. Physical Review X, ISSN 2160-3308, 10/2016, Volume 6, Issue 4, p. 041015
The first observational run of the Advanced LIGO detectors, from September 12, 2015 to January 19, 2016, saw the first detections of gravitational waves from...
(POST)(5/2)-NEWTONIAN ORDER | MASS-DISTRIBUTION | PARAMETER-ESTIMATION | GENERAL-RELATIVITY | PHYSICS, MULTIDISCIPLINARY | INSPIRALLING COMPACT BINARIES | COALESCING BINARIES | NEUTRON-STAR | COMMON ENVELOPE | MAXIMUM MASS | GRAVITATIONAL-WAVE TRANSIENTS | General Relativity and Quantum Cosmology | Astrophysics | Cosmology and Extra-Galactic Astrophysics | Physics
Journal Article
5. Precise determination of the mass of the Higgs boson and tests of compatibility of its couplings with the standard model predictions using proton collisions at 7 and 8 TeV
European Physical Journal C, ISSN 1434-6044, 05/2015, Volume 75, Issue 5, p. 1
Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of...
TRANSVERSE-MOMENTUM | TOP-PAIR | RATIOS | NLO | RESUMMATION | ELECTROWEAK CORRECTIONS | BROKEN SYMMETRIES | HADRON COLLIDERS | LHC | QCD CORRECTIONS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment
Journal Article

6. Physics Letters B, ISSN 0370-2693, 09/2012, Volume 716, Issue 1, pp. 30 - 61
Results are presented from searches for the standard model Higgs boson in proton–proton collisions at $\sqrt{s} = 7$ and 8 TeV in the Compact Muon Solenoid experiment at the...
CMS | Higgs | Physics | PARTON DISTRIBUTIONS | SEARCH | PHYSICS, NUCLEAR | COLLIDERS | STANDARD MODEL | ELECTROWEAK CORRECTIONS | BROKEN SYMMETRIES | ASTRONOMY & ASTROPHYSICS | QCD CORRECTIONS | PP COLLISIONS | SPECTRUM | MODEL HIGGS-BOSON | PHYSICS, PARTICLES & FIELDS | Analysis | Collisions (Nuclear physics) | Searching | Elementary particles | Decay | Higgs bosons | Standard deviation | Solenoids | Standards | Bosons | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
WHY? Separating multiple sources of audio is a difficult task. Previous works mostly constructed a mask for each source in the time-frequency domain.

WHAT? This paper formulates the source separation task as estimating a mixture weight vector for multiple sources directly in the waveform domain:
$$x(t) = \sum_{i=1}^C s_i(t), \qquad x = wB, \qquad s_i = d_iB,$$
$$w = \sum_{i=1}^C d_i = \sum_{i=1}^C w \odot (d_i \oslash w) := w \odot \sum_{i=1}^C m_i, \qquad d_i = m_i \odot w.$$
The Time-domain Audio Separation Network (TasNet) tries to find $m_i$, the relative contribution of each source to $w$, where $B$ is a matrix of $N$ basis signals of shape $N \times L$. The encoder finds $w$ for $B$ by applying a 1-D gated convolution layer:
$$w_k = \mathrm{ReLU}(x_k \circledast U)\odot\sigma(x_k\circledast V).$$
The separation network uses an LSTM and a fully connected layer to generate the masks $m_i$. With $w$ and the $m_i$ found above, $d_i$ can be recovered with the decoder. The scale-invariant signal-to-noise ratio (SI-SNR) is used as the loss.

So? TasNet not only showed comparable performance on the WSJ0-2mix dataset, but was also shown to learn its own basis.

Luo, Yi, and Nima Mesgarani. "TasNet: time-domain audio separation network for real-time, single-channel speech separation." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
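A short sketch (mine, not the paper's code) of the SI-SNR objective mentioned above: the estimate is projected onto the target, and the residual counts as noise.

```python
# SI-SNR: project the estimate onto the target signal, then compare the
# energy of the projection to the energy of the residual, in dB.
import numpy as np

def si_snr(est, target, eps=1e-8):
    est = est - est.mean()
    target = target - target.mean()
    s_target = (est @ target) / (target @ target + eps) * target
    e_noise = est - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps))

t = np.sin(np.linspace(0, 100, 16000))
print(si_snr(3.0 * t, t))                           # huge: scale-invariant
print(si_snr(t + 0.1 * np.random.randn(16000), t))  # roughly 17 dB
```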
FIELD Math: CMC
DATE July 15 (Mon), 2019
TIME 14:00-15:30
PLACE 1424
SPEAKER Kwon, Soonsik
HOST Oh, Sung-Jin
INSTITUTE KAIST
TITLE On pseudoconformal blow-up solutions to the self-dual Chern-Simons-Schroedinger equation: existence, uniqueness, and instability. Part 1
ABSTRACT (For the compiled version, see the attached file) We consider the self-dual Chern-Simons-Schr\"odinger equation (CSS), also known as a gauged nonlinear Schr\"odinger equation (NLS). CSS is $L^2$-critical, admits solitons, and has the pseudoconformal symmetry. These features are similar to the $L^2$-critical NLS. In this work, we consider pseudoconformal blow-up solutions under $m$-equivariance, $m \geq 1$. Our result is threefold. Firstly, we construct a pseudoconformal blow-up solution $u$ with given asymptotic profile $z^\ast$: $$ \left[ u(t, r) - \frac{1}{|t|} Q \left( \frac{r}{|t|} \right) e^{-i \frac{r^2}{4 |t|}} \right] e^{i m \theta} \to z^\ast \quad \hbox{ in } H^1 $$ as $t \to 0^-$, where $Q(r) e^{i m \theta}$ is a static solution. Secondly, we show that such blow-up solutions are unique in a suitable class. Lastly, yet most importantly, we exhibit an instability mechanism of $u$. We construct a continuous family of solutions $u^{(\eta)}$, $0 \leq \eta \ll 1$, such that $u^{(0)} = u$ and for $\eta > 0$, $u^{(\eta)}$ is a global scattering solution. Moreover, we exhibit a rotational instability as $\eta \to 0^+$: $u^{(\eta)}$ takes an abrupt spatial rotation by the angle $$ \left( \frac{m+1}{m} \right) \pi $$ on the time interval $|t| \lesssim \eta$. We are inspired by works on the $L^2$-critical NLS. In the seminal work of Bourgain and Wang (1997), they constructed such pseudoconformal blow-up solutions. Merle, Rapha\"el, and Szeftel (2013) showed an instability of Bourgain-Wang solutions. Although CSS shares many features with NLS, there are essential differences and obstacles compared to NLS. Firstly, the soliton profile of CSS shows a slow polynomial decay $r^{-(m+2)}$. This causes many technical issues for small $m$. Secondly, due to the nonlocal nonlinearities, there are strong long-range interactions even between functions at far different scales. This leads to a nontrivial correction of our blow-up ansatz. Lastly, the instability mechanism of CSS is completely different from that of NLS. Here, the phase rotation is the main source of the instability. On the other hand, the self-dual structure of CSS is our sponsor for overcoming these obstacles. We exploited the self-duality in many places such as the linearization, spectral properties, and construction of modified profiles. In the talks, the first author will present the background of the problem, the main theorems, and an outline of the proof, as well as a comparison with NLS results. The second author will explain the heuristics of the main features, such as the long-range interaction between $Q^{\sharp}$ and $z$, the rotational instability mechanism, and the Lyapunov/virial functional method.
FILE 929001562135368253_1.pdf
I'm having trouble understanding some parts of the paper "Provably computable functions and the fast growing hierarchy" by Buchholz and Wainer (1987). On page 183 they say that their system has axioms corresponding to the "defining equations of each elementary function $f$". They give as an example the axioms of "$+$". Earlier, on page 182, they define the "elementary functions" as "those which can be explicitly defined from the zero, successor, subtraction, projection and addition functions using bounded sums and products." My questions: 1) Why isn't it enough to have only "$+$" and "$\cdot$" in the system? Can't all other functions be expressed in terms of addition and multiplication (using additional variables and quantifiers)? Why do they need their system to have a symbol for the graph of each possible function? 2) The above-mentioned definition of "elementary function" is kind of vague. Are they referring to primitive recursive functions? $\mu$-recursive functions? 3) I know the defining axioms of the graphs of addition and multiplication. But what are, in general, the defining axioms of the graph of an arbitrary elementary function $f$? Then, on page 192 they define "positive $\Sigma_1$ formulas" as (roughly) formulas that do not use $\forall$ quantifiers. And on pages 193-194 (Theorem 5) the main result is based on such formulas. But I don't understand: in order to express other functions in terms of $+$ and $\cdot$ you do need $\forall$ sometimes. Well, obviously this has to do with my previous questions.
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' Google Translate says this means something like: 'The "trajectory" only comes into being because we observe it.'

@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation – it calls for someone who knows how German is used in talking about quantum mechanics.

Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...

@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am, I will have to adapt ;)

I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of ten billion characters to start getting a good chance that COVFEFE appears.

@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$

@BalarkaSen sorry, if you were in our discord you would know

@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.

@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.

@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could do better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.

since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition

Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. " so Ive got a small bottle that I filled up with salt. I put it on the scale and it's mass is 83g. I've also got a jup of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? so: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators than I suggested in the comments of the post are fine but I additionally would like to control the width of the Poisson Distribution (much like we can do for the normal distribution using variance). Do you know that this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$? As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including:ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room.An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. Just we need to carefully write down the potential function which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing or something else, I'm not sure Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on I personally do not know much about postmodernist philosophy, so I shall not comment on it myself I do have strong affirmative opinions on textual interpretation, made disjoint from authoritorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. 
I can see why a man of that generation would lean towards that idea. I do too.
I'm trying to do this integral, which is shown on the Wikipedia page on the Hankel transformation: $$\int_0^{2\pi}\mathrm d\varphi\;e^{\mathrm im\varphi}e^{\mathrm ikr\cos(\varphi)}$$ The answer is supposed to be $$2\pi\mathrm i^m J_m(kr)$$ Mathematica cannot seem to do this integral; it simply gives me the input back: Integrate[ Exp[I m phi] Exp[I k r Cos[phi]], {phi, 0, 2 Pi}, Assumptions -> Element[m, Integers]] Why can't this be done? It can do it for explicit values of m = 0 and m = 1, but after that it begins reporting the answer in terms of polynomials times a zero-order Bessel function or something similar. Perhaps I'm expecting too much of the software, but I'd expect it to be able to verify integrals listed on Wikipedia pages. Is there something I'm overlooking?
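For what it's worth, the identity itself can at least be confirmed numerically. Here is a small sketch (not Mathematica — Python with scipy, which is an assumption about available tooling) comparing direct quadrature of the left-hand side against $2\pi\mathrm i^m J_m(kr)$ for arbitrary test values:

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Check: integral_0^{2pi} e^{i m phi} e^{i k r cos(phi)} dphi = 2 pi i^m J_m(kr)
m, kr = 3, 1.7  # arbitrary test values
re, _ = quad(lambda p: np.cos(m * p + kr * np.cos(p)), 0, 2 * np.pi)
im, _ = quad(lambda p: np.sin(m * p + kr * np.cos(p)), 0, 2 * np.pi)
print(re + 1j * im)                   # quadrature of the integrand
print(2 * np.pi * 1j**m * jv(m, kr))  # the closed form quoted above

The two printed values should agree to quadrature precision, so the question is about Mathematica's symbolic engine, not the identity.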
WHY? Neural networks are poor at manipulating numerical information outside the range of the training set. WHAT? This paper suggests two models that learn to manipulate and extrapolate numbers. The first model is the neural accumulator (NAC), which accumulates quantities additively. It is a relaxed version of a linear layer whose weight matrix $\mathbf{W}$ is softly constrained towards entries in $\{-1, 0, 1\}$: $$\mathbf{W} = \tanh(\hat{\mathbf{W}})\odot\sigma(\hat{\mathbf{M}}), \qquad \mathbf{a} = \mathbf{W}\mathbf{x}.$$ The second model is the neural arithmetic logic unit (NALU), which can also perform multiplicative arithmetic. A NALU is a gated combination of one NAC and a second NAC that operates in log space: $$\mathbf{g} = \sigma(\mathbf{G}\mathbf{x}), \qquad \mathbf{m} = \exp\bigl(\mathbf{W}\log(|\mathbf{x}|+\epsilon)\bigr), \qquad \mathbf{y} = \mathbf{g}\odot\mathbf{a} + (1 - \mathbf{g})\odot\mathbf{m}.$$ So? The NALU successfully operated on various tasks with extrapolated numerical values, including simple function learning tasks, MNIST counting and arithmetic tasks, language-to-number translation tasks and a program evaluation task. It even performed well on non-numerical extrapolation tasks such as tracking time in a Grid-World environment and an MNIST parity prediction task.
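To make the two definitions concrete, here is a minimal NumPy sketch of the forward passes (my own illustration, not the authors' code; the parameter shapes and the name eps are assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nac(x, W_hat, M_hat):
    # W is softly pushed towards {-1, 0, 1} by the tanh/sigmoid product.
    W = np.tanh(W_hat) * sigmoid(M_hat)
    return x @ W.T, W

def nalu(x, W_hat, M_hat, G, eps=1e-7):
    # Gate g blends the additive NAC output a with the log-space
    # (multiplicative) path m, elementwise.
    a, W = nac(x, W_hat, M_hat)
    g = sigmoid(x @ G.T)
    m = np.exp(np.log(np.abs(x) + eps) @ W.T)
    return g * a + (1.0 - g) * m

Here x has shape (batch, in_dim) and W_hat, M_hat, G each have shape (out_dim, in_dim); training them is ordinary backprop on whatever task loss you use.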
This is a variant of the question Compact Embedding of $W^{1,2}(0,T;\mathbb{R}^d)$ in $C(0,T;\mathbb{R}^d)$, where we had $X=\mathbb{R}^d$. Let now $X$ be some Banach space. Question: Is $H^1(0,T;X)$ compactly embedded in $C([0,T];X)$? We define $H^1(0,T;X)=\{u : (0,T) \to X ~|~ u \in L^2(0,T;X), u' \in L^2(0,T;X)\}$. A proof of the continuous embedding can be found in many books, e.g. Roubicek's book. Also, it is standard that $H^1(0,T;\mathbb{R})$ is compactly embedded in $C([0,T];\mathbb{R})$, see e.g. Brezis's book. The main difficulty is that the classical Arzela-Ascoli theorem only handles spaces like $C(S;\mathbb{K}^d)$ and not $C(S;Y)$ where $Y$ is some infinite-dimensional Banach space. But for this scenario there is the following version, which can be found in the book of Boyer. Arzela-Ascoli: Let $E$ be a compact metric space and $F$ a metric space. Let $\mathcal{F}$ be a subset of $C(E,F)$. If $\{f(x) : f \in \mathcal{F} \}$ is relatively compact in $F$ for every $x\in E$ and $\mathcal{F}$ is equicontinuous, then $\mathcal{F}$ is relatively compact in $C(E,F)$. I am translating the proof of Brezis to our setting. Proof: Let $u \in B:=\{v \in H^1(0,T;X) : \|v\|_{L^2(0,T;X)}+\|v'\|_{L^2(0,T;X)} \leq 1\}$. In particular $u \in C([0,T];X)$. Then we have for all $t,s \in [0,T]$ $$\|u(t)-u(s)\|_X \leq \int_s^t \|u'(\tau)\|_X \, \text{d}\tau \leq \|u'\|_{L^2(0,T;X)} |t-s|^{1/2} \leq |t-s|^{1/2},$$ i.e. the family $B$ is equicontinuous. Further, if we can show that $\{ u(t): u \in B\}$ is relatively compact in $X$ for every $t\in [0,T]$, then we would be finished. Can someone help me with this last step, written in bold font? I am not sure if this result is even correct. I think I have seen it in an article with $X=L^2$ or $X=H^1$, but I don't remember clearly. Previously there was a picture of another version of the Arzela-Ascoli theorem which was stated in an unclear/erroneous way. I've updated it with the version in Boyer's book, which coincides with the notes provided in the comments.
WHY? Earlier word representations such as Word2Vec or GloVe don't capture linguistic context. WHAT? This paper suggests Embeddings from Language Models (ELMo) to include the contextual information of a word. Assume that $x_k$ is a context-independent representation (a token embedding or a CNN over characters). A bidirectional $L$-layer LSTM is used to predict the previous or next token. Each hidden vector of the LSTM can be viewed as a representation of a word ($2L+1$ representations per word). These representations are combined in a weighted sum to form a task-specific representation: $$R_k = \{x_{k}, \overrightarrow{h}^{LM}_{k,j}, \overleftarrow{h}^{LM}_{k,j} \mid j = 1, \dots, L\} = \{h^{LM}_{k,j} \mid j = 0, \dots, L\},$$ $$\mathrm{ELMo}^{task}_k = E(R_k; \Theta^{task}) = \gamma^{task}\sum^{L}_{j=0}s^{task}_j h^{LM}_{k,j}.$$ The task-specific parameters can be learned within the task. Layer normalization can be applied to each representation before weighting. Using the pretrained biLM, we can build the $2L+1$ representations for each word in the task corpus and take their weighted sum to get a fixed representation. This representation can be used either only in the input or both in the input and the output. So? ELMo achieved state-of-the-art results in nearly every major NLP task, including QA, textual entailment, semantic role labeling, coreference resolution, named entity extraction and sentiment analysis. Since a word's representation differs with context, ELMo has the effect of word sense disambiguation and carries information useful for POS tagging. Also, training a model using ELMo reached SOTA results much faster (up to 98% fewer updates). Critique: Another incredible breakthrough in NLP.
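As an illustration of the weighted sum (my own sketch, not the authors' code; the softmax normalization of the $s_j$ follows the paper, while the array layout is an assumption):

import numpy as np

def elmo_combine(layer_reps, s_logits, gamma):
    # layer_reps: (L+1, seq_len, dim) stack of per-layer word representations
    # s_logits:   (L+1,) task-specific scalars, softmax-normalized below
    # gamma:      task-specific scale
    s = np.exp(s_logits - s_logits.max())
    s = s / s.sum()
    return gamma * np.tensordot(s, layer_reps, axes=1)  # (seq_len, dim)

Both s_logits and gamma would be trained jointly with the downstream task while the biLM itself stays frozen.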
This article is aimed at relatively new LaTeX users. It is written particularly for my own students, with the aim of helping them to avoid making common errors. The article exists in two forms: this WordPress blog post and a PDF file generated by LaTeX, both produced from the same Emacs Org file. Since WordPress does not handle LaTeX very well I recommend reading the PDF version. 1. New Paragraphs In LaTeX a new paragraph is started by leaving a blank line. Do not start a new paragraph by using \\ (it merely terminates a line). Indeed you should almost never type \\, except within environments such as array, tabular, and so on. 2. Math Mode Always type mathematics in math mode (as $..$ or \(..\)), to produce “$y = f(x)$” (with italic variables) instead of “y = f(x)” in text font, and “the dimension $n$” instead of “the dimension n”. For displayed equations use $$..$$, \[..\], or one of the display environments (see Section 12). For inline equations, punctuation should appear outside math mode, otherwise the spacing will be incorrect. Here is an example. Correct: The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$. Incorrect: The variables $x,$ $y,$ and $z$ satisfy $x^2 + y^2 = z^2.$ For displayed equations, punctuation should appear as part of the display. All equations must be punctuated, as they are part of a sentence. 3. Mathematical Functions in Roman Mathematical functions should be typeset in roman font. This is done automatically for the many standard mathematical functions that LaTeX supports, such as \sin, \tan, \exp, \max, etc. If the function you need is not built into LaTeX, create your own. The easiest way to do this is to use the amsmath package and type, for example, \usepackage{amsmath} ... % In the preamble. \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\inert}{Inertia} Alternatively, if you are not using the amsmath package you can type \def\diag{\mathop{\mathrm{diag}}} 4. Maths Expressions Ellipses (dots) are never explicitly typed as “…”. Instead they are typed as \dots for baseline dots, as in $x_1,x_2,\dots,x_n$, or as \cdots for vertically centered dots, as in $x_1 + x_2 + \cdots + x_n$. Type $i$th instead of $i'th$ or $i^{th}$. (For some subtle aspects of the use of ellipses, see How To Typeset an Ellipsis in a Mathematical Expression.) Avoid using \frac to produce stacked fractions in the text: write the slashed form $n/2$ flops instead of the stacked $\frac{n}{2}$ flops. For “much less than”, type \ll, giving “$\ll$”, not <<, which gives “$< <$”. Similarly, “much greater than” is typed as \gg, giving “$\gg$”. If you are using angle brackets to denote an inner product use \langle and \rangle: incorrect: $<x,y>$, typed as $<x,y>$. correct: $\langle x,y \rangle$, typed as $\langle x,y \rangle$ 5. Text in Displayed Equations When a displayed equation contains text such as “subject to $x \ge 0$”, instead of putting the text in \mathrm put the text in an \mbox, as in \mbox{subject to $x \ge 0$}. Note that \mbox switches out of math mode, and this has the advantage of ensuring the correct spacing between words. If you are using the amsmath package you can use the \text command instead of \mbox. Example $$ \min\{\, \|A-X\|_F: \mbox{$X$ is a correlation matrix} \,\}. $$ 6. BibTeX Produce your bibliographies using BibTeX, creating your own bib file. Note three important points. “Export citation” options on journal websites rarely produce perfect bib entries. More often than not the entry has an improperly cased title, an incomplete or incorrectly accented author name, improperly typeset maths in the title, or some other error, so always check and improve the entry.
If you wish to cite one of my papers, download the latest version of njhigham.bib (along with strings.bib supplied with it) and include it in your \bibliography command. Decide on a consistent format for your bib entry keys and stick to it. In the format used in the Numerical Linear Algebra group at Manchester a 2010 paper by Smith and Jones has key smjo10, a 1974 book by Aho, Hopcroft, and Ullman has key ahu74, while a 1990 book by Smith has key smit90. 7. Spelling Errors and LaTeX Errors There is no excuse for your writing to contain spelling errors, given the wide availability of spell checkers. You’ll need a spell checker that understands LaTeX syntax. There are also tools for checking LaTeX syntax. One that comes with TeX Live is lacheck, which describes itself as “a consistency checker for LaTeX documents”. Such a tool can point out possible syntax errors, or semantic errors such as unmatched parentheses, and warn of common mistakes. 8. Quotation Marks LaTeX has a left quotation mark, denoted here \lq, and a right quotation mark, denoted here \rq, typed as the single left and right quotes on the keyboard, respectively. A left or right double quotation mark is produced by typing two single quotes of the appropriate type. The double quotation mark key itself always produces the same as two right quotation marks. Example: “hello” is typed as \lq\lq hello \rq\rq. 9. Captions Captions go above tables but below figures. So put the caption command at the start of a table environment but at the end of a figure environment. The \label statement should go after the \caption statement (or it can be put inside it), otherwise references to that label will refer to the subsection in which the label appears rather than the figure or table. 10. Tables LaTeX makes it easy to put many rules, some of them double, in and around a table, using \cline, \hline, and the | column formatting symbol. However, it is good style to minimize the number of rules. A common task for journal copy editors is to remove rules from tables in submitted manuscripts. 11. Source Code LaTeX source code should be laid out so that it is readable, in order to aid editing and debugging, to help you to understand the code when you return to it after a break, and to aid collaborative writing. Readability means that logical structure should be apparent, in the same way as when indentation is used in writing a computer program. In particular, it is a good idea to start new sentences on new lines, which makes it easier to cut and paste them during editing, and also makes a diff of two versions of the file more readable. Example: Good: $$ U(\zbar) = U(-z) = \begin{cases} -U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases} $$ Bad: $$U(\zbar) = U(-z) = \begin{cases}-U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases}$$ 12. Multiline Displayed Equations For displayed equations occupying more than one line it is best to use the environments provided by the amsmath package. Of these, align (and align* if equation numbers are not wanted) is the one I use almost all the time. Example: \begin{align*} \cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots,\\ \sin(A) &= A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots, \end{align*} Others, such as gather and aligned, are occasionally needed. Avoid using the standard environment eqnarray, because it doesn’t produce as good results as the amsmath environments, nor is it as versatile. For more details see the article Avoid Eqnarray. 13. Synonyms This final category concerns synonyms and is a matter of personal preference.
I prefer \ge and \le to the equivalent \geq and \leq (why type the extra characters?). I also prefer to use $..$ for math mode instead of \(..\) and $$..$$ for display math mode instead of \[..\]. My preferences are the original TeX syntax, while the alternatives were introduced by LaTeX. The slashed forms are obviously easier to parse, but this is one case where I prefer to stick with tradition. If dollar signs are good enough for Don Knuth, they are good enough for me! I don’t think many people use LaTeX’s verbose \begin{math}..\end{math} or \begin{displaymath}..\end{displaymath}. Also note that \begin{equation*}..\end{equation*} (for unnumbered equations) exists in the amsmath package but not in LaTeX itself.
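To close, here is a short hypothetical document skeleton of my own devising that pulls together several of the recommendations above: amsmath, \DeclareMathOperator, punctuation as part of the sentence, and align* for a multiline display.

\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator{\diag}{diag}
\begin{document}
The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$.
\begin{align*}
  A       &= \diag(x_1, x_2, \dots, x_n),\\
  \cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} - \cdots.
\end{align*}
\end{document}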
2019-10-09 06:01 HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN) / HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment have highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN, has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085 Detailed record - Similar records 2019-10-09 06:01 Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, were constructed and successfully commissioned. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064 Detailed record - Similar records 2019-10-09 06:00 The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex and its scientific infrastructure up to 2040 through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 Detailed record - Similar records 2019-10-09 06:00 The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward.
The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0\nu\bar{\nu})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+\nu\bar{\nu})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061 Detailed record - Similar records 2019-09-21 06:01 Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in the case of specific failures. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066 Detailed record - Similar records 2019-09-20 08:41 Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023 Detailed record - Similar records 2019-04-09 06:05 The new CGEM Inner Tracker and the new TIGER ASIC for the BES III Experiment / Marcello, Simonetta (INFN, Turin ; Turin U.) ; Alexeev, Maxim (INFN, Turin ; Turin U.) ; Amoroso, Antonio (INFN, Turin ; Turin U.) ; Baldini Ferroli, Rinaldo (Frascati ; Beijing, Inst. High Energy Phys.) ; Bertani, Monica (Frascati) ; Bettoni, Diego (INFN, Ferrara) ; Bianchi, Fabrizio Umberto (INFN, Turin ; Turin U.) ; Calcaterra, Alessandro (Frascati) ; Canale, N (INFN, Ferrara) ; Capodiferro, Manlio (Frascati ; INFN, Rome) et al. A new detector exploiting the technology of Gas Electron Multipliers is under construction to replace the innermost drift chamber of the BESIII experiment, since its efficiency is compromised owing to the high luminosity of the Beijing Electron Positron Collider. The new inner tracker, with a cylindrical shape, will deploy several new features. [...] SISSA, 2018 - 4 p.
- Published in : PoS EPS-HEP2017 (2017) 505 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.505 Detailed record - Similar records 2019-04-09 06:05 CaloCube: a new homogenous calorimeter with high-granularity for precise measurements of high-energy cosmic rays in space / Bigongiari, Gabriele (INFN, Pisa) / Calocube The direct observation of high-energy cosmic rays, up to the PeV region, will depend on highly performing calorimeters, and the physics performance will be primarily determined by their acceptance and energy resolution. Thus, it is fundamental to optimize their geometrical design, granularity, and absorption depth, with respect to the total mass of the apparatus, probably the most important constraint for a space mission. Furthermore, a calorimeter-based space experiment can provide not only flux measurements but also energy spectra and particle identification to overcome some of the limitations of ground-based experiments. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 481 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.481 Detailed record - Similar records
The method of undetermined coefficients is a tricky one, but I refer you to Tenenbaum and Pollard's Ordinary Differential Equations. It tells you when and how to make a good ansatz to a problem like that. There are conditions like (I don't remember off the top of my head): if the R.H.S. $f(x)$ does not appear in the complementary function, then you make.... but if it does, then you need to consider a linear combination like say $(ax+b)(\cos(x) + \sin(x))$ and all its linearly independent derivatives. However, a surefire way to get an answer is using the operator method (please see the textbook). In operator notation, your problem reads: $$(D^2 + s^2)[y] = b\cos(sx).$$ You have already told us that the solution to the homogeneous problem is $y = A\cos(sx ) + B\sin(sx)$. To find a particular solution to the inhomogeneous problem, we solve the related problem $$(D^2 + s^2)[y] = be^{isx}$$ and take its real part. For details on this operator method, please see the textbook I mentioned above. So, according to the reference, your solution can be found by writing $$y = \frac{1}{D^2 + s^2}[be^{isx}],$$ and then using the so-called shift formula given in the reference above; namely, you can take the exponential out of the operator, but must replace $D$ with $(D + is)$. The operator here is $\displaystyle\frac{1}{D^2 + s^2}$, which operates on the right hand side $be^{isx}$. So $$\begin{align*} y &= e^{isx}\frac{1}{(D + is)^2 + s^2}[b] \\ &= e^{isx}\frac{1}{D^2 + 2isD}[b] \end{align*}$$ I'll leave the rest for you to think about. With this method, you can treat an operator as if it were a "number", so that $$\frac{1}{1-D} = 1 + D + D^2 + \cdots.$$ The only difference is that one does not talk of $|D|\lt1$ here (I know nothing about functional analysis, so I don't know what it means for a linear operator to be bounded). Next step: Notice that the operator is operating on a constant, so when an infinite series like the above operates on $b$, it must terminate. To answer your question why sometimes solutions to the homogeneous problem are included inside the particular solution: it's because when constructing the one-sided Green's function (this is the same as the method of variation of parameters here) you exclude the arbitrary constants in front of the two linearly independent solutions to the homogeneous problem. Attention Noteventhetutorknows: The differential equation in self-adjoint form $L[y] = -(p(x)y')'+ q(x)y = f(x)$ with initial conditions $y(x_0) = y'(x_0) = 0$ has particular solution $y(x) = \int_{x_0}^{x} G(x,\tau) f(\tau)\, d\tau$, where $G(x, \tau)$ is the one-sided Green's function given by $G(x, \tau) = \begin{cases} \frac{y_1(x)y_2(\tau) - y_2(x)y_1(\tau)}{W(\tau)}, &\text{if} \quad x \geq \tau \\ 0,& \text{if} \quad x < \tau. \end{cases}$
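If you want to check the resonant particular solution without grinding through the operator algebra, a quick SymPy sketch (my addition, assuming SymPy is available) should reproduce it:

import sympy as sp

x, s, b = sp.symbols('x s b', positive=True)
y = sp.Function('y')

# y'' + s^2 y = b cos(s x): resonant forcing, so the particular
# solution picks up a factor of x.
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + s**2 * y(x), b * sp.cos(s * x)), y(x))
print(sol)  # expect C1*cos(s*x) + C2*sin(s*x) + b*x*sin(s*x)/(2*s)

The $b\,x\sin(sx)/(2s)$ term is exactly what falls out of $e^{isx}\frac{1}{D^2+2isD}[b]$ above after taking the real part.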
[pstricks] Searching for a numeric solution for coupled differential equation systems of order 2 Juergen Gilg gilg at acrotex.net Thu May 24 22:21:06 CEST 2012 Dear PSTricks list, is there anybody out there who knows how to generate PS code for "Runge-Kutta 4" to solve a "coupled differential equation system of order 2"? There is the need for such code to animate a double pendulum in real time, following the coupled differential equations with the variables (\varphi_1, \varphi_2) and their derivatives: (1) l_1\ddot{\varphi}_1+\frac{m_2}{m_1+m_2}l_2\ddot{\varphi}_2\cos(\varphi_1-\varphi_2)-\frac{m_2}{m_1+m_2}l_2\dot{\varphi}_2^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_1=0 (2) l_2\ddot{\varphi}_2+l_1\ddot{\varphi}_1\cos(\varphi_1-\varphi_2)-l_1\dot{\varphi}_1^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_2=0 Any help is appreciated!!! Many thanks for every hint, Jürgen -- Jürgen Gilg Austr. 59 70376 Stuttgart tel 0711 59 27 88 email gilg at acrotex.net More information about the PSTricks mailing list
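Not PostScript, but for what it's worth, here is a sketch of the usual approach in Python/NumPy (parameter values are placeholders): rewrite (1)-(2), exactly as given above, as a first-order system by solving the linear 2x2 system for the angular accelerations at each step, then apply a generic RK4 step. The same structure should translate directly to PS.

import numpy as np

g = 9.81
m1, m2, l1, l2 = 1.0, 1.0, 1.0, 1.0   # placeholder masses and lengths
mu = m2 / (m1 + m2)

def deriv(state):
    # state = (phi1, phi2, w1, w2); solve eqs. (1)-(2) for the accelerations.
    p1, p2, w1, w2 = state
    c, s = np.cos(p1 - p2), np.sin(p1 - p2)
    A = np.array([[l1,     mu * l2 * c],
                  [l1 * c, l2]])
    rhs = np.array([mu * l2 * w2**2 * s - g * np.sin(p1),
                    l1 * w1**2 * s - g * np.sin(p2)])
    a1, a2 = np.linalg.solve(A, rhs)
    return np.array([w1, w2, a1, a2])

def rk4_step(y, h):
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * h * k1)
    k3 = deriv(y + 0.5 * h * k2)
    k4 = deriv(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, -0.5, 0.0, 0.0])  # initial angles and angular velocities
for _ in range(10000):                   # 10 s of simulated time at h = 1 ms
    state = rk4_step(state, 1e-3)
print(state)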
Importance sampling is a powerful and pervasive technique in statistics, machine learning and randomized algorithms. Basics Importance sampling is a technique for estimating the expectation \(\mu\) of a random variable \(f(x)\) under distribution \(p\) from samples of a different distribution \(q.\) The key observation is that \(\mu\) can be expressed as the expectation of a different random variable \(f^*(x)=\frac{p(x)}{q(x)}\! \cdot\! f(x)\) under \(q.\) Technical condition: \(q\) must have support everywhere \(p\) does, \(p(x) > 0 \Rightarrow q(x) > 0.\) Without this condition, the estimator is biased! Note: \(q\) can support things that \(p\) doesn't. Terminology: The quantity \(w(x) = \frac{p(x)}{q(x)}\) is often referred to as the "importance weight" or "importance correction". We often refer to \(p\) as the target density and \(q\) as the proposal density. Now, given samples \(\{ x^{(i)} \}_{i=1}^{n}\) from \(q,\) we can use the Monte Carlo estimate \(\hat{\mu} = \frac{1}{n} \sum_{i=1}^n f^{*}(x^{(i)})\) as an unbiased estimator of \(\mu.\) Remarks There are a few reasons we might want to use importance sampling: Convenience: It might be trickier to sample directly from \(p.\) Bias-correction: Suppose we're developing an algorithm which requires samples to satisfy some "safety" condition (e.g., a minimum support threshold) and be unbiased. Importance sampling can be used to remove bias while satisfying the condition. Variance reduction: It might be the case that sampling directly from \(p\) would require more samples to estimate \(\mu.\) Check out these great notes for more. Off-policy evaluation and learning: We might want to collect some "exploratory data" from \(q\) and evaluate different "policies", \(p\) (e.g., to pick the best one). Here's a link to a future post on off-policy evaluation and counterfactual reasoning and some cool papers: counterfactual reasoning, reinforcement learning, contextual bandits, domain adaptation. There are a few common cases for \(q\) worth separate consideration: Control over \(q\): This is the case in experimental design, variance reduction, active learning and reinforcement learning. It's often difficult to design \(q,\) which results in an estimator with "reasonable" variance. A very difficult case is in off-policy evaluation because it (essentially) requires a good exploratory distribution for every possible policy. (I have much more to say on this topic.) Little to no control over \(q\): For example, you're given some dataset (e.g., news articles) and you want to estimate performance on a different dataset (e.g., Twitter). Unknown \(q\): In this case, we want to estimate \(q\) (typically referred to as the propensity score) and use it in the importance sampling estimator. This technique, as far as I can tell, is widely used to remove selection bias when estimating effects of different treatments. Drawbacks: The main drawback of importance sampling is variance. A few bad samples with large weights can drastically throw off the estimator. Thus, it's often the case that a biased estimator is preferred, e.g., estimating the partition function, clipping weights, indirect importance sampling. A secondary drawback is that both densities must be normalized, which is often intractable. What's next? I plan to cover "variance reduction" and off-policy evaluation in more detail in future posts.
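A minimal numerical sketch of the basic estimator (my own, with an arbitrary Gaussian target/proposal pair, not tied to any of the applications above):

import numpy as np

rng = np.random.default_rng(0)

# Target p = N(1, 1), proposal q = N(0, sd=2); estimate mu = E_p[x^2] = 2.
def log_p(x):
    return -0.5 * (x - 1.0)**2 - 0.5 * np.log(2 * np.pi)

def log_q(x):
    return -0.5 * (x / 2.0)**2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

x = rng.normal(0.0, 2.0, size=100_000)   # samples from q
w = np.exp(log_p(x) - log_q(x))          # importance weights p(x)/q(x)
print(np.mean(w * x**2))                 # should be close to 2

Working with log-densities, as here, is the usual way to keep the weights numerically stable.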
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin\theta = x = \pm\sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta = z = R\cos\sigma.$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$ I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? Hmm, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity an application of the inference rule MP seems to be necessary.
But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti? @AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow one, in some cases, to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\dots,a_{n-1}$ all equal to zero, because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Search Now showing items 31-40 of 167 Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... 
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Homework Statement The electric field outside and an infinitesimal distance away from a uniformly charged spherical shell, with radius R and surface charge density σ, is given by Eq. (1.42) as ##\sigma/\epsilon_0##. Derive this in the following way. (a) Slice the shell into rings (symmetrically located with respect to the point in question), and then integrate the field contributions from all the rings. You should obtain the incorrect result of ##\frac{\sigma}{2\epsilon_0}##. (b) Why isn’t the result correct? Explain how to modify it to obtain the correct result of ##\frac{\sigma}{\epsilon_0}##. Hint: You could very well have performed the above integral in an effort to obtain the electric field an infinitesimal distance inside the shell, where we know the field is zero. Does the above integration provide a good description of what’s going on for points on the shell that are very close to the point in question? Homework Equations Coulomb's Law Hi! I need help with this problem. I tried to do it the way you can see in the picture. I then have this: ##dE_z=dE\cdot \cos\theta##, thus ##dE_z=\frac{\sigma\, dA}{4\pi\epsilon_0 L^2}\cos\theta=\frac{\sigma 2\pi L^2\sin\theta d\theta}{4\pi\epsilon_0 L^2}\cos\theta##. Then I integrated and ended up with ##E=\frac{\sigma}{2\epsilon_0}\int \sin\theta\cos\theta d\theta##. The problem is that I don't know the limits of integration; I first tried with ##\pi##, but I got 0. What am I doing wrong?
I was going to write this in a comment, but I think it's better if I give an answer. I'm going to begin by pointing out things about the two statements in the question. The first statement does have the intended meaning, though I think it can still be made more precise. The second statement is circular, or ambiguous at best: there is no way of saying what the largest natural number is until it has been decided definitely whether it exists or not. Finally, although the domain has been described in the worded statement, for the logical statement to make sense it must include the correct domain. Since T. Gunn's answer very nicely covers the question, I will instead give an example of a correct statement which is useful in the study of modern algebra, of how number (and other) sets are constructed, of the relations between them, and of the rigorous framework used to describe these ideas: $$\forall n \in \mathbb N,\; n + 1 > n \land n + 1 \in \mathbb N$$ Though, to be precise, this is a logically equivalent statement, not an identical one. And I have assumed the addition operator and the ordering relation ">" have already been defined. This statement makes the intended meaning, and its demonstration, more explicit. It describes a function which guarantees the required property will be satisfied.
I know there is already a question about resolving a quadrilateral from three sides and two angles, but I want to ask about a special case. Firstly, two of the sides are known to be of equal size. Secondly, I'm only interested in the area, not in the remaining angles or lengths. Can anyone suggest a simple formula? Here's a brute-force derivation ... $$\begin{align} |\square PQRS| &= |\triangle PQR| + |\triangle PRS| \\[4pt] &=\frac12 p q \sin \angle Q + \frac12 d r \sin(\angle R-\angle PRQ) \\[4pt] &=\frac12 \left(\;p q \sin \angle Q + r \left(\;\sin\angle R\cdot d \cos\angle PRQ - \cos\angle R\cdot d\sin\angle PRQ\;\right) \;\right)\\[4pt] &=\frac12 \left(\;p q \sin \angle Q + r \left(\;\sin\angle R\cdot(q-p\cos\angle Q) - \cos\angle R\cdot p\sin\angle Q\;\right) \;\right)\\[4pt] &=\frac12 \left(\;p q \sin \angle Q +qr\sin\angle R - pr \left(\;\sin\angle R \cos\angle Q + \cos\angle R\sin\angle Q\;\right) \;\right)\\[16pt] &=\frac12 \left(\;p q \sin \angle Q +qr\sin\angle R - pr \sin(\angle Q+\angle R)\;\right)\\ \end{align}$$ In the specific case of $p=r$, the formula can be manipulated thusly: $$\begin{align} |\square PQRS| &= \frac{p}{2}\;\left(\;q(\sin Q + \sin R) - p \sin(Q+R) \;\right) \\[4pt] &= p\sin\frac{Q+R}{2}\;\left(\;q\cos\frac{Q-R}{2}- p \cos\frac{Q+R}{2} \;\right) \end{align}$$ We can try using coordinate geometry and determinants to find the area. WLOG, suppose $u=AB$ lies on the $x$-axis with $A=(0,0)$ and $B=(u,0)$. We can drop a line perpendicular to the $x$-axis through $E$, and we have a right triangle. Since $L=EA$, we can easily write points $E,F$ in $xy$-coordinate space. And thus we have the following vertices: $A=(0,0)$, $B=(u,0)$, $E=(L\cos a, L\sin a)$, and $F=(u-L\cos b, L\sin b)$. Let's split the quadrilateral into two triangles: $\triangle AEF$ and $\triangle ABF$. From here, we can write the area as: $$A_{ABEF}=\frac12\left| \begin{array}{ccc} 0 & 0 & 1 \\ u & 0 & 1 \\ u-L \cos (b) & L \sin (b) & 1 \\ \end{array} \right|-\frac{1}{2}\left| \begin{array}{ccc} 0 & 0 & 1 \\ L \cos (a) & L \sin (a) & 1 \\ u-L \cos (b) & L \sin (b) & 1 \\ \end{array} \right|$$ which simplifies to: $$A_{ABEF}=\frac{1}{2} L \,u (\sin (a)+\sin (b))-\frac{1}{2} L^2 \sin (a+b)$$ Note: it is important to consider the signs of the resulting $\cos$ of angles $a,b$ and of the determinants in the derivation of the area.
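Since the two answers use different parameterizations, a quick numerical cross-check of the first formula against the shoelace area (my own sketch, with arbitrary test values) may be reassuring:

import numpy as np

p, q, r = 2.0, 3.0, 2.0   # sides PQ, QR, RS (here p = r)
Q, R = 1.0, 1.2           # interior angles at Q and R, in radians

# Place Q at the origin and R on the x-axis; read off P and S.
P_pt = np.array([p * np.cos(Q), p * np.sin(Q)])
Q_pt = np.array([0.0, 0.0])
R_pt = np.array([q, 0.0])
S_pt = R_pt + r * np.array([np.cos(np.pi - R), np.sin(np.pi - R)])

def shoelace(pts):
    x, y = np.array(pts).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

formula = 0.5 * (p * q * np.sin(Q) + q * r * np.sin(R) - p * r * np.sin(Q + R))
print(shoelace([P_pt, Q_pt, R_pt, S_pt]), formula)  # the two should agree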
Today, my teacher asked for a real function that is integrable but such that $\displaystyle\lim_{t\to \infty}f(t)\neq 0$ ($f$ need not have a limit). It's easy to forge an example: a function with tightening spikes, each with area $1/n^2$, does the trick. He then asked for a real $C^\infty$ function that is integrable but such that $\displaystyle\lim_{t\to \infty}f(t)\neq 0$ ($f$ need not have a limit). One heuristic argument is to take the previous function and "smoothen" the spikes. However, he referred to a nicer function that can be defined explicitly. Do you have an idea? I've looked for something involving $\sin$, $\cos$, $\tan$.
I got the same answers as @JS1, but by a different approach. Let $f(n)$ be the worst-case cost of guessing from $n$ numbers, e.g., $0\dots n-1$. Trivially, $f(1)=0$: if there's only one number to choose from, you're certain to get it on the first guess (and incur no cost/penalty). Let's try to find $f(n+1)$ inductively/recursively, assuming that we know $f(n)$ (and all values before that). Let $f(n+1,m)$ be the worst-case cost of guessing from $n+1$ numbers, e.g., $0\dots n$, given that your first guess is $m$ ($0 \le m \le n$). $f(n+1,m)=\begin{cases}x+f(n-m)&\text{if you guess low}\quad\text{(cost for guessing low} +\\&\qquad\qquad\text{cost for guessing from $(m+1)\dots n$)}\\0&\text{if you guess right}\\y+f(m)&\text{if you guess high}\quad\text{(cost for guessing high}+\\&\qquad\qquad\text{cost for guessing from $0\dots(m-1)~$)}\end{cases}$ Note that $m=0$ and $m=n$ are special, boundary cases. If $m=0$, you can't have guessed high, and if $m=n$, you can't have guessed low. To handle these cases, I notationally declare $f(0)$ (which is logically undefined) to be $-\infty$, to make the corresponding cases in the $f(n+1,m)=$ formula (above) drop out. Well, since $f(n+1,m)$ is the worst case, it's the maximum of the above: $\max (x+f(n-m),~~y+f(m))$. But now the question of strategy arises: we can minimize $f(n+1)$ by choosing the $m$ that gives us the lowest $f(n+1,m)$. (Intuitively, if $x=y$, then the ideal $m$ is $\big\lfloor{n \over 2}\big\rfloor$. If $x>y$, then the best $m$ is somewhat higher. If $x \gg y$, then the optimum $m$ is much higher.) So $f(n+1)$ is $\min_{(0 \le m \le n)}f(n+1,m)$. Unfortunately, I couldn't figure out how to reduce the above algebraically. But I was able to develop an Excel spreadsheet to calculate $f$, and it gave me these results: $\qquad f(101)=\begin{cases}9&\text{if $x=2~~~$ and $y=1$}\\7.5&\text{if $x=1.5$ and $y=1$}\end{cases}$ matching JS1's answer.
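The recurrence is easy to transcribe into a short dynamic program; this sketch (mine, not the spreadsheet) should reproduce the quoted values:

from functools import lru_cache

def worst_case_cost(n, x, y):
    @lru_cache(maxsize=None)
    def f(k):
        # Worst-case cost of guessing among k consecutive numbers.
        if k == 1:
            return 0.0
        best = float('inf')
        for m in range(k):  # first guess: the (m+1)-th smallest candidate
            lo = x + f(k - 1 - m) if m < k - 1 else 0.0  # guessed low
            hi = y + f(m) if m > 0 else 0.0              # guessed high
            best = min(best, max(lo, hi))
        return best
    return f(n)

print(worst_case_cost(101, 2.0, 1.0))  # expect 9
print(worst_case_cost(101, 1.5, 1.0))  # expect 7.5

Setting the impossible branches to 0 inside the max plays the same role as the $f(0)=-\infty$ convention above: they never dominate.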
What are all the non-split Lie (and topological) group extensions $0 \to \mathbb{R} \to G \to \mathbb{R}^2 \to 0$? Here, $\mathbb{R}$ and $\mathbb{R}^2$ are regarded as Lie (and topological) groups with respect to the usual addition. One example of a non-split extension is the Heisenberg group $H_3(\mathbb{R})$ (Please see a post by Alain Valette at https://mathoverflow.net/questions/63630). Since every abelian topological extension of $\mathbb{R}^n$ by a locally compact abelian group is trivial, we have that every abelian topological extension of $\mathbb{R}^2$ by $\mathbb{R}$ is trivial. Hence, we need to consider only non-abelian extensions. Central extensions $$ 0 \to \mathbb{R} \to G \to \mathbb{R}^2 \to 0 $$ in which $G$ is a principal $\mathbb{R}$-bundle over $\mathbb{R}^2$ (I suppose you mean that by "topological") are classified by continuous maps $$ f: \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R} $$ satisfying the 2-cocycle condition $$ f(x,y) + f(x+y,z) = f(y,z) + f(x,y+z). $$ The abelian ones are those corresponding to maps with $f(x,y) = f(y,x)$. This follows from a general theory for topological central extensions described in J.-L. Brylinski's "Differentiable cohomology of gauge groups" (for the smooth case, but that is not relevant) combined with the fact that every principal bundle over $\mathbb{R}^2$ is trivializable. EDIT: From a map $f$, you get the extension $G$ explicitly as the topological space $G = \mathbb{R} \times \mathbb{R}^2$ with the multiplication given by $$(a_1,x_1)(a_2,x_2) = (a_1 + a_2 + f(x_1,x_2),\,x_1 + x_2).$$ This is just an answer to the request for the Bianchi classification, not to the original question. I'm putting it as an answer because it's too long for a comment. A 3-dimensional Lie algebra $L$ is either semi-simple, in which case it is isomorphic to either ${\frak{so}}(3)$ or ${\frak{sl}}(2,\mathbb{R})$, or else it has a basis $x_1,x_2,x_3$ such that $$ [x_1,x_2]=0\qquad [x_2,x_3] = b_{11} x_1 + b_{12}x_2\qquad [x_3,x_1] = b_{21} x_1 + b_{22}x_2 $$ where the $2$-by-$2$ matrix $B = (b_{ij})$ is equal to one of the following $$ \begin{pmatrix}0&0\cr 0&0\end{pmatrix},\ \begin{pmatrix}1&0\cr 0&0\end{pmatrix},\ \begin{pmatrix}1&0\cr 0&1\end{pmatrix},\ \begin{pmatrix}1&0\cr 0&-1\end{pmatrix} $$ or $$ \begin{pmatrix}0&1\cr -1&0\end{pmatrix},\ \begin{pmatrix}1&1\cr -1&0\end{pmatrix},\ \begin{pmatrix}\sigma&1\cr -1&\sigma\end{pmatrix},\ \begin{pmatrix}\sigma&1\cr-1&-\sigma\end{pmatrix} $$ where $\sigma>0$ is a real number. These are all pairwise non-isomorphic. The proof is fairly straightforward and can be found in many places.
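Returning to the first answer: as a sanity check of my own (not taken from the answers above), the standard bilinear choice for the Heisenberg group, $f(x,y) = x_1 y_2$, satisfies the 2-cocycle identity yet is not symmetric, so the resulting extension is non-abelian:
\[
  f(x,y) + f(x+y,\,z) = x_1 y_2 + (x_1 + y_1) z_2
                      = x_1 y_2 + x_1 z_2 + y_1 z_2,
\]
\[
  f(y,z) + f(x,\,y+z) = y_1 z_2 + x_1 (y_2 + z_2)
                      = x_1 y_2 + x_1 z_2 + y_1 z_2.
\]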
Ergodicity Versus Non-Ergodicity for Probabilistic Cellular Automata on Rooted Trees 2019, v.25, Issue 2, 189-216 ABSTRACT In this article we study a class of shift-invariant, positive-rate probabilistic cellular automata (PCAs) on rooted $d$-regular trees $\mathbb{T}^d$. In a first result we extend the results of \cite{pca} to trees: namely, we prove that to every stationary measure $\nu$ of the PCA we can associate a space-time Gibbs measure $\mu_{\nu}$ on $\mathbb{Z} \times \mathbb{T}^d$. Under certain assumptions on the dynamics the converse is also true. A second result establishes sufficient conditions for ergodicity and non-ergodicity of our PCA on $d$-ary trees for $d\in \{ 1,2,3\}$ and characterizes the invariant Bernoulli product measures. Keywords: probabilistic cellular automata, d-ary trees, ergodicity, invariant measure, space-time Gibbs measure
area A rectangle has length $p$ cm and breadth $q$ cm, where $p$ and $q$ are integers satisfying the equation $pq + q = 13 + q^2$; then the maximum possible area of the rectangle is... Tôn Thất Khắc Trịnh 24/07/2018 at 13:22 OwO, it's a typo: it's supposed to be width, okay? Selected by MathYouLike If you use the AM-GM theorem, you'll know that a rectangle achieves the maximum possible area for a given perimeter when both of its measurements are equal, making it a square (a square is technically a rectangle too). So we have $p=q$. Therefore, the equation will be $q^2+q=13+q^2$ $\Leftrightarrow q=13$ (cm) $\Leftrightarrow p=q=13$ (cm) $\Leftrightarrow S=pq=13^2=169$ (cm$^2$) (P.S: don't do this here, cuz you'll get to a dead end, oof) $pq+q=13+q^2$ $\Leftrightarrow S=pq=q^2-q+13$ Uchiha Sasuke 24/07/2018 at 13:34 OMG OMG THANK YOU SO VERY MUCH!!!!!!!!!!!!!!!!!!!!! :DDDDDDDDDDDD Given three circles of radius 2, tangent to each other as shown in the following diagram, what is the area of the shaded region? Chibi 11/04/2017 at 11:31 Centers of the circles: $A$, $B$, $C$ => $AB = BC = AC = 2R = 4$ => $ABC$ is an equilateral triangle. Denote the area of the shaded region by $S$ and the area of the sector defined by $A$ and the 2 tangent points by $S_A$. Then $S = S_{ABC} - 3S_A$, with $S_{ABC} = \dfrac{1}{2}\cdot 4\cdot 4\cdot\dfrac{\sqrt{3}}{2} = 4\sqrt{3}$ and $S_A = \dfrac{60}{360}S_{circle} = \dfrac{1}{6}\pi 2^2 = \dfrac{2\pi}{3}$. Selected by MathYouLike => $S = 4\sqrt{3} - 3\cdot\dfrac{2\pi}{3} = 4\sqrt{3} - 2\pi$ Let $R$ and $S$ be points on the sides $BC$ and $AC$, respectively, of $\triangle ABC$, and let $P$ be the intersection of $AR$ and $BS$. Determine the area of $\triangle ABC$ if the areas of $\triangle APS$, $\triangle APB$, and $\triangle BPR$ are 5, 6, and 7, respectively. An Duong 09/04/2017 at 07:31 We have $\dfrac{SP}{PB}=\dfrac{area\left(APS\right)}{area\left(ABP\right)}=\dfrac{5}{6}$. Call the area of $PSR$ $x$ and the area of $CSR$ $y$; we have: $\dfrac{area\left(PSR\right)}{area\left(PBR\right)}=\dfrac{SP}{PB}=\dfrac{5}{6}$ $\Rightarrow\dfrac{x}{7}=\dfrac{5}{6}$ $\Rightarrow x=\dfrac{35}{6}$ $\dfrac{BR}{CR}=\dfrac{area\left(BSR\right)}{area\left(CRS\right)}=\dfrac{7+x}{y}$ (1) $\dfrac{BR}{CR}=\dfrac{area\left(ABR\right)}{area\left(ACR\right)}=\dfrac{13}{x+y+5}$ (2) (1), (2) => $\dfrac{7+x}{y}=\dfrac{13}{x+y+5}$ $\Rightarrow y=\dfrac{\left(7+x\right)\left(5+x\right)}{6-x}=\dfrac{5005}{6}$ So $area\left(ABC\right)=5+6+7+x+y=5+6+7+\dfrac{35}{6}+\dfrac{5005}{6}=858$. John selected this answer. Given a rectangular paper with a circular hole as in the figure below. How can we cut the paper with one straight line so that we get two parts with equal area?
An Duong 25/03/2017 at 21:31 Because a line through the center of a rectangle (or a circle) divides it into two parts with equal area, you should cut the paper along the line connecting the two centers of the rectangle and the circle (see the following figure). Selected by MathYouLike A circle of radius 3 is inscribed in the pictured quadrant of a circle. Find the area of the shaded section. mathlove 16/03/2017 at 18:29 The circle of radius 3 has area \(9\pi\). Write \(r\) for the radius of the pictured quadrant; then \(r=3\sqrt{2}+3\). Put x as the area to calculate; we have \(\dfrac{1}{4}\pi r^2=2x+\pi.3^2+\left(3^2-\dfrac{1}{4}.\pi.3^2\right)=2x+9\left(1+\dfrac{3\pi}{4}\right)\) \(\Leftrightarrow\dfrac{\pi}{4}\left(3\sqrt{2}+3\right)^2=2x+9\left(1+\dfrac{3\pi}{4}\right)\Leftrightarrow2x=\dfrac{\left(27+18\sqrt{2}\right)\pi}{4}-\dfrac{36+27\pi}{4}\) \(\Leftrightarrow x=\dfrac{9\sqrt{2}\pi-18}{4}\) Selected by MathYouLike mathlove 17/03/2017 at 10:45 We have \(y=3^2-\left(\dfrac{1}{4}\pi3^2\right)\) and \(r-3=3\sqrt{2}\Rightarrow r=3+3\sqrt{2}\). Selected by MathYouLike Continuing the previous post: I have another problem: Calculate the area of the curved square below (crossed area): mathlove 13/03/2017 at 18:12 Setting x as the area to find. It is easy to see that \(\left(1\right)+\left(2\right)+\left(1\right)=1-\dfrac{\pi}{4}\). According to the previous post: \(\left(1\right)=1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\). So that \(\left(2\right)=\left(1-\dfrac{\pi}{4}\right)-2\left(1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\right)=-1+\dfrac{\pi}{12}+\dfrac{\sqrt{3}}{2}\). Therefore \(x=\dfrac{\pi}{4}-\left[3.\left(2\right)+2.\left(1\right)\right]=\dfrac{\pi}{4}-\left[\left(-3+\dfrac{\pi}{4}+\dfrac{3\sqrt{3}}{2}\right)+\left(2-\dfrac{\sqrt{3}}{2}-\dfrac{\pi}{3}\right)\right]\) \(=1+\dfrac{\pi}{3}-\sqrt{3}\). Selected by MathYouLike mathlove 11/03/2017 at 18:27 Let x be the area to calculate. We see that EAD is an equilateral triangle with edge 1, whose altitude equals \(\dfrac{\sqrt{3}}{2}\). So \(EF=1-\dfrac{\sqrt{3}}{2}\). The angle EDC is \(30^0\), so that \(\dfrac{1}{2}.\dfrac{1}{2}\left(1-\dfrac{\sqrt{3}}{2}\right)-\dfrac{x}{2}=\dfrac{\pi}{12}-\dfrac{1}{2}.1.1.\sin30^0=\dfrac{\pi}{12}-\dfrac{1}{4}\) So \(x=1-\dfrac{\sqrt{3}}{4}-\dfrac{\pi}{6}\).
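A quick numeric cross-check of the curved-square result \(1+\dfrac{\pi}{3}-\sqrt{3}\approx0.315\) above; this is a hedged Monte Carlo sketch in Python (my own script, assuming the curved square is the set of points of the unit square within distance 1 of all four corners, which is how such figures are usually drawn):

import numpy as np
from math import pi, sqrt

rng = np.random.default_rng(0)
u, v = rng.uniform(size=(2, 1_000_000))
# Inside the curved square iff within distance 1 of all four corners
# of the unit square (intersection of the four unit quarter-discs).
inside = ((u**2 + v**2 <= 1) & ((u - 1)**2 + v**2 <= 1) &
          (u**2 + (v - 1)**2 <= 1) & ((u - 1)**2 + (v - 1)**2 <= 1))
print(inside.mean())          # ~0.3151
print(1 + pi / 3 - sqrt(3))   # 0.31514674...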
Shinichi Mochizuki of Kyoto divided the steps needed to prove the 1985 conjecture by Oesterlé and Masser into four papers listed at the bottom of the Nature article above; up to a few exceptions to be proved separately, this amounts to a strengthening of Fermat's Last Theorem. Four days ago, Nature described a potentially exciting development in mathematics, namely number theory: The newly revealed proof works with mathematical structures such as the Hodge theaters (a theater with Patricia Hodge is above, I hope it's close enough) and with canonical splittings of the log-theta lattice (yes, the word "splitting" is appropriate above, too). What is the conjecture about and why is it important, perhaps more important than Fermat's Last Theorem itself? First, before I tell you what it is about, let me say that, as shown by Goldfeld in 1996, it is "almost stronger" than Fermat's Last Theorem (FLT), i.e. it "almost implies" FLT. What does "almost" mean? It means that it only implies a weakened FLT in which the exponent has to be larger than a certain large finite number. I am not sure whether all the exponents for which the \(abc\) theorem doesn't imply FLT had been proved before Wiles or whether the required Goldfeld bound is much higher. Please tell me if you know the answer: what's the minimum Goldfeld exponent? Recall that Wiles proved Fermat's Last Theorem in 1995 and his complicated proof is based on elliptic curves. That's also true for Mochizuki's new (hopefully correct) proof. However, Mochizuki also uses Teichmüller theory, Hodge–Arakelov theory, log-volume computations and log-theta lattices, and various sophisticated algebraic structures generalizing simple sets, permutations, topologies and matrices. To give an example, one of these objects is the "Hodge theater" which sounds pretty complicated and cultural. ;-) I am not gonna verify the proof although I hope that some readers will try to do it. But let me just tell everybody what the FLT theorem and the \(abc\) theorem are. Fermat's Last Theorem says that if positive integers \(a,b,c,n\) obey\[ a^n+b^n = c^n, \] then it must be that \(n\leq 2\). Indeed, you can find solutions with powers \(1,2\) such as \(2+3=5\) and \(3^2+4^2=5^2\) but you will fail for all higher powers. Famous mathematicians had been trying to prove the theorem for centuries but at most, they were able to prove it for individual exponents \(n\), not the no-go theorem for all values of \(n\). The \(abc\) conjecture says the following. For any (arbitrarily small) \(\epsilon\gt 0\), there exists a (large enough but fixed) constant \(C_\epsilon\) such that for each triplet of relatively prime (i.e. having no common divisor greater than one) integers \(a,b,c\) that satisfies \[ a+b=c, \] the following inequality holds:\[ \Large \max (|a|, |b|, |c|) \leq C_\epsilon \prod_{p|(abc)} p^{1+\epsilon}. \] That's it. I used larger fonts because it's a key inequality of this blog entry. In other words, we're trying to compare the maximum of the three numbers \(|a|,|b|,|c|\) with the "square-free part" of their product (the product in which we eliminate all copies of primes that appear more than once in the decomposition).
The comparison is such that if the "square-free part" is increased by exponentiating it to a power \(1+\epsilon\), slightly greater than one but arbitrarily close to it, the "square-free part" will typically still be smaller than \(abc\) and even than \(a,b,c\) themselves, but the factor by which it is smaller than either \(a\) or \(b\) or \(c\) will never exceed a bound, \(C_\epsilon\), that may be chosen to depend on \(\epsilon\) but isn't allowed to depend on \(a,b,c\). See e.g. Wolfram MathWorld for a longer introduction. Now, you may have been motivated to jump to the hardcore maths and verify all the implications and proofs that have been mentioned above or construct your own.
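If you want to play with the inequality numerically before diving in, here is a small Python sketch (the function name and setup are mine; sympy assumed available) computing the "quality" \(q=\log c/\log \operatorname{rad}(abc)\) of a coprime triple; the conjecture is equivalent to saying that \(q>1+\epsilon\) occurs only finitely often:

from math import log
from sympy import primefactors

def quality(a, b):
    # q = log(c) / log(rad(abc)) for a + b = c with gcd(a, b) = 1;
    # the abc conjecture amounts to: q > 1 + eps only finitely often.
    c = a + b
    rad = 1
    for p in primefactors(a * b * c):   # distinct primes dividing abc
        rad *= p
    return log(c) / log(rad)

# The record-quality triple 2 + 3^10 * 109 = 23^5:
print(quality(2, 3**10 * 109))   # ~1.6299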
Let's assume that by "enough gravity for humans to live there" you mean "enough gravity for humans to stick to the surface (due to gravity)". This implies you're asking a volumetric mass density problem, since gravity is a function of how mass density influences surrounding space. Specifically, you're asking how massive this small planet would have to be given its radius (so how dense it must be). Let's call the small planet "Small" and the earthly planet "Earth". For there to be "enough gravity for humans to stick to the surface", let's assume planet "Small" has exactly the same mass as planet "Earth", regardless of volume, since humans stick to the surface of Earth due to gravity pretty well. Volumetric mass density is defined as mass per unit volume: $$ \rho = \frac{m}{V} $$ So let's represent Earth's mass as:$$ m_e = \rho_e \cdot V_e $$ Likewise let's represent Small's mass as:$$ m_s = \rho_s \cdot V_s $$ Since (we're saying) humans stick equally well to both planets, why not make Small's mass (regardless of radius) the same as Earth's mass, since we stick pretty well to Earth. That means:$$ m_s = m_e $$ or: $$ \rho_s \cdot V_s = \rho_e \cdot V_e $$ But we want to solve for Small's volumetric mass density $\rho_s$ so we can see what it's made of (Earth is mostly molten nickel and iron). To rearrange this equation and solve for Small's mass density $\rho_s$, let's divide both sides by $V_s$, leaving: $$ \rho_s = \frac{\rho_e \cdot V_e}{V_s} $$ But recall that the numerator of this fraction, $\rho_e \cdot V_e$, is really just $m_e$. So let's simplify by replacing $\rho_e \cdot V_e$ with $m_e$, making our equation: $$ \rho_s = \frac{m_e}{V_s} $$ This says the mass density of planet Small must be equal to the mass of the Earth divided by the volume of planet Small. So let's figure it out! The Earth's mass $m_e$ is $5.9721986×10^{21}$ metric tons; the radius $r_s$ of planet Small is 50 m; and the volume $V_s$ of planet Small is calculated from its radius using $$V_s=\frac{4}{3} \cdot \pi \cdot {r_s}^3$$ (which works out to be 523599 $m^3$). Through substitution, $\rho_s$ must be: $$ \rho_s = \frac{5.9721986×10^{21}\,t}{523599\,m^3} $$ $$ \rho_s = 1.14061×10^{16}\,t/m^3$$ Answer: Asking Wolfram Alpha what has this density, we indeed get the answer that this planet would be more dense than a neutron star ($8×10^{13}$ to $2×10^{15}$ $t/m^3$), putting it into the range of exotics such as gravastars, objects that exist inside the Schwarzschild radius of an Earth-mass object.
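For completeness, the same arithmetic as a short Python sketch (variable names mine, numbers as quoted above):

import math

M_EARTH = 5.9721986e21    # Earth's mass in metric tons, as quoted above
r_small = 50.0            # radius of planet "Small" in metres

V_small = 4.0 / 3.0 * math.pi * r_small**3    # ~523599 m^3
rho_small = M_EARTH / V_small                 # required density, t/m^3
print(f"V_s = {V_small:.0f} m^3")
print(f"rho_s = {rho_small:.5e} t/m^3")       # ~1.14061e16 t/m^3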
I am in my 4th year (3 semesters left including the current one), taking mechanics, E&M, quantum mechanics, and a lab course. For each of the three main courses, we get one problem set per week that's around 5-8 questions. In addition to that there's a lab report due every 1-2 weeks. It really... This is a random problem I am trying to figure out. The context doesn't matter. I wish to define a function z(x, y) based on the following limits: 1. lim z (x→∞) = 0 2. lim z (x→0) = y 3. lim z (y→∞) = ∞ 4. lim z (y→0) = 0 This is a somewhat vague question that stems from the entries in a directional cosine matrix and I believe the answer will either be much simpler or much more complicated than I expect. So consider the transformation of an arbitrary vector, v, in ℝ² from one frame f = {x₁, x₂} to a primed... I am looking at an explanation of the gradient operator acting on a scalar function ## \phi ##. This is what is written: In the steps 1.112 and 1.113 it is written that ## \frac {\partial x'_k} {\partial x'_i} ## is equivalent to the Kronecker delta. It makes sense to me that if i=k, then... 1. Homework Statement: Integrate by changing to polar coordinates: ## \int_{0}^6 \int_{0}^{\sqrt{36-x^2}} \tan^{-1} \left( \frac y x \right) \, dy \, dx ## 2. Homework Equations: ## x = r \cos \left( \theta \right) ##, ## y = r \sin \left( \theta \right) ## 3. The Attempt at a Solution: So this... I assume this is a simple summation of the normal components of the vector fields at the given points multiplied by dA, which in this case would be 1/4. This is not being accepted as the correct answer. Not sure where I am going wrong. My textbook doesn't discuss estimating surface integrals... 1. Homework Statement: Given this diagram, the problem is to find an expression for β/ΘE in terms of X/ΘE and Y/ΘE. 2. Homework Equations: β = Θ – α(Θ), Dsβ = DsΘ – Dlsα'(Θ) 3. The Attempt at a Solution: I really only need help starting this problem. In my textbook and every document I can... 2-3 hours is just on the one homework assignment that is due each week. That does not count time spent studying for the weekly quizzes or just reviewing lecture material. And also the same question in my reply above is really what I was looking for. Thanks. Sorry I didn't mean to sound like I'm not interested in learning the harder problems but I am in multivariable calculus, advanced statistics, and principles of astrophysics as well and I don't just have time to be delving into subjects I don't think I will be tested on. Believe me I would love... I am currently in Honors Physics 3 which is the third introductory course of my physics degree program and covers modern physics beginning with special relativity. So far we have covered Lorentz transformations and velocity additions, relativistic energy and momentum, blackbody radiation, photon... The problem is to find the general term ##a_n## (not the partial sum) of the infinite series with starting point n=1 $$\frac {8} {1^2 + 1} + \frac {1} {2^2 + 1} + \frac {8} {3^2 + 1} + \frac {1} {4^2 + 1} + \text {...}$$ The denominator is easy, just ##n^2 + 1##, but I can't think of... Thanks, so just to be clear that would mean that I should express the change in volume and the uncertainty in the change in volume to the nearest thousandth of a centimeter because they were measured to that degree of accuracy in the question, right? 1.
Homework Statement: A car engine moves a piston with a circular cross section of 7.500 ± 0.005 cm diameter a distance of 3.250 ± 0.001 cm to compress the gas in the cylinder. (a) By what amount is the gas decreased in volume in cubic centimeters? (b) Find the uncertainty in this volume. 2... 1. Homework Statement: A uniform disk of radius R = 1 m rotates counterclockwise with angular velocity ω = 2 rad/s about a fixed perpendicular axle passing through its center. The rotational inertia of the disk relative to this axis is I = 9 kg⋅m². A small ball of mass m = 1 kg is launched with speed v = 4 m/s... 1. Homework Statement: A tall, cylindrical chimney falls over when its base is ruptured. Treat the chimney as a thin rod of length 49.0 m. Answer the following for the instant it makes an angle of 32.0° with the vertical as it falls. (Hint: Use energy considerations, not a torque.) (a) What is... 583.1 is the distance the rock slides, found by the Pythagorean theorem \(\sqrt{300^2+500^2}\). I tried both positive and negative with those answers. I assume the work done by friction is always negative since it is opposite the motion, and the kinetic energy should be positive because the velocity would be... 1. Homework Statement: During a rockslide, a 710 kg rock slides from rest down a hillside that is 500 m long and 300 m high. The coefficient of kinetic friction between the rock and the hill surface is 0.23...
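For the piston problem above, a minimal Python sketch of the usual first-order error propagation (my own variable names; fractional errors simply added, which is the worst-case convention):

import math

d, dd = 7.500, 0.005    # piston diameter and its uncertainty, cm
h, dh = 3.250, 0.001    # stroke and its uncertainty, cm

V = math.pi * (d / 2)**2 * h            # swept volume V = (pi/4) d^2 h
# First-order propagation: dV/V = 2*(dd/d) + dh/h (the factor 2 because
# the diameter enters squared).
dV = V * (2 * dd / d + dh / h)
print(f"V = {V:.2f} cm^3, dV = {dV:.2f} cm^3")   # V ~ 143.58, dV ~ 0.24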
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra. Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied by a given vector (x,y), and how will the magnitude of that vector be changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$, for example writing $x = ac+bd\delta$ and $y = bc+ad$; we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set). theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. surreals are the largest field possible. It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that imply CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wondered whether I could show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums.
The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7: Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion
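As a small check of the finitist partial-sum construction discussed above, the sums \(\sum_{k=1}^{M} b^{-k!}\) are exactly computable; a Python sketch with \(b=10\) (helper name mine):

from fractions import Fraction

def liouville_partial(M, b=10):
    # Exact partial sum sum_{k=1}^{M} 1/b^(k!) as a rational number.
    s, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k                  # fact = k!
        s += Fraction(1, b**fact)
    return s

for M in range(1, 5):
    print(M, float(liouville_partial(M)))
# 1 0.1
# 2 0.11
# 3 0.110001
# 4 0.110001  (the 1/10^24 term is below float resolution)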
In physical sciences, numbers are paired with units and called quantities. In this augmented number system, dimensional analysis provides a crucial sanity check, much like type checking in a programming language. There are simple rules for building up units and constraints on what operations are allowed. For example, you can't multiply quantities which are not conformable or add quantities with different units. Also, we generally know the units of the input and desired output, which allows us to check that our computations at least produce the right units. In this post, we'll discuss the dimensional analysis of gradient ascent, which will hopefully help us understand why the "step size" parameter is so finicky and why it even exists. Gradient ascent is an iterative procedure for (locally) maximizing a function, \(f: \mathbb{R}^d \mapsto \mathbb{R}\), via updates of the form \(x_{t+1} = x_t + \alpha\, \nabla f(x_t)\). In general, \(\alpha\) is a \(d \times d\) matrix, but often we constrain the matrix to be simple, e.g., \(a\cdot I\) for some scalar \(a\) or \(\text{diag}(a)\) for some vector \(a\). Now, let's look at the units of the change \(\Delta x=x_{t+1} - x_t = \alpha\, \nabla f(x_t)\). The units of \(\Delta x\) must be \((\textbf{units }x)\), while the gradient carries \((\textbf{units }f) / (\textbf{units }x)\). Solving for the units of \(\alpha\) we get \((\textbf{units }x)^2 / (\textbf{units }f)\), which simplifies to \((\textbf{units }x)^2\) if we assume \(f\) is unit free. This gives us an idea for what \(\alpha\) should be. For example, the inverse Hessian passes the unit check. The disadvantage of the Hessian is that it needs to be positive-definite (or at least invertible) in order to be a valid "step size" (i.e., we need step sizes to be \(> 0\)). Another method for handling step sizes is line search. However, line search won't let us run online. Furthermore, line search would be too slow in the case where we want a step size for each dimension. In machine learning, we've become fond of online methods, which adapt the step size as they go. The general idea is to estimate a step-size matrix that passes the unit check (for each dimension of \(x\)). Furthermore, we want to do as little extra work as possible to get this estimate (e.g., we want to avoid computing a Hessian because that would be extra work). So, the step size should be based only on iterates and gradients up to time \(t\). AdaGrad doesn't pass the unit check. This motivated AdaDelta. AdaDelta uses the ratio of (running estimates of) the root-mean-squares of \(\Delta x\) and \(\partial f / \partial x\). The mean is taken using an exponentially weighted moving average. See the paper for the actual implementation. Adam came later and made some tweaks to remove (unintended) bias in the AdaDelta estimates of the numerator and denominator. In summary, it's important/useful to analyze the units of numerical algorithms in order to get a sanity check (i.e., catch mistakes) as well as to develop an understanding of why certain parameters exist and how properties of a problem affect the values we should use for them.
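To make the unit check concrete, here is a minimal AdaDelta-style sketch in Python; this is my own illustrative code, not the paper's exact pseudocode, and it is written in the usual descent form:

import numpy as np

def adadelta_step(g, state, rho=0.95, eps=1e-6):
    # One AdaDelta-style update (a sketch, not the paper's exact pseudocode).
    # The step -RMS(dx)/RMS(g) * g carries (units x): RMS(dx) has (units x)
    # and RMS(g) has (units f)/(units x), so their ratio has the units of alpha.
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * g**2       # running E[g^2]
    dx = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * g
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx**2    # running E[dx^2]
    return dx

x = np.array([1.0, -2.0])
state = {"Eg2": np.zeros_like(x), "Edx2": np.zeros_like(x)}
for _ in range(500):
    g = 2 * x                  # gradient of f(x) = ||x||^2 (descent form)
    x = x + adadelta_step(g, state)
print(x)                       # heads toward the minimizer at the origin

The point of the ratio is visible in the comment: RMS(Δx)/RMS(g) carries exactly the units we solved for \(\alpha\) above, which is why no hand-tuned learning rate appears.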
Below you'll find the programme for the Seminar on Combinatorics, Games and Optimisation (a joining together of the former Seminar on Discrete Mathematics and Game Theory and the former Seminar on Operations Research). This seminar series covers many of the research areas in the Department: discrete mathematics, algorithms, game theory and operational research. Unless stated below, this Seminar normally takes place: Questions, suggestions, etc., about the seminar can be forwarded to Enfale Farooq, the Research Manager, at E.Farooq@lse.ac.uk Upcoming Speakers: Thursday 17 October - Anurag Bishnoi (FU Berlin) Venue: NAB.2.16 from 14:00 - 15:30 Clique-free pseudorandom graphs One of the outstanding open problems in the theory of pseudorandom graphs is to find a construction of $K_s$-free $(n, d, \lambda)$-graphs, for $s > 3$, with $\lambda = O(\sqrt{d})$ and the highest possible edge density of $d/n = \Theta(n^{-1/(2s - 3)})$. For $s = 3$, there is a famous construction of Alon from 1994 that provides such a family of triangle-free graphs. For $s \geq 5$, the best known construction is due to Alon and Krivelevich from 1996 that has edge density $\Theta(n^{-1/(s - 2)})$. Very recently, Mubayi and Verstraete have shown that a construction with edge density $\Omega(n^{-1/(s + \epsilon)})$, for any $\epsilon > 0$, would imply an improvement in the best known lower bounds on the off-diagonal Ramsey numbers $R(s, t)$, $s$ fixed and $t \rightarrow \infty$. In this talk I will present a new construction of $K_s$-free pseudorandom graphs with an edge density of $\Theta(n^{-1/(s - 1)})$, thus improving the Alon-Krivelevich construction but still falling short of improving the lower bounds on Ramsey numbers. (Joint work with Ferdinand Ihringer and Valentina Pepe) Thursday 31 October - Hervé Moulin (University of Glasgow) Venue: NAB.2.16 from 14:00 - 15:30 Guarantees in Fair Division, under informational parsimony Steinhaus's Diminishing Share (DS) algorithm (generalizing Divide & Choose, D&C), as well as Dubins and Spanier's Moving Knife (MK) algorithm, guarantee to all participants a Fair Share of the manna (worth at least 1/n-th of the whole manna to them) while eliciting parsimonious information from them. However DS and MK are only defined when 1. preferences are represented by additive utilities; and 2. every part of the manna to be divided is desirable to every participant (a cake), or every part is unpleasant to everybody (a chore). Our n-person Divide & Choose rule takes care of issue 2 when utilities are additive: it requires no trimming or padding, and works for mixed manna with subjective goods and bads. It also implements the canonical approximation of the Fair Share (up to one item) when we allocate indivisible items. Issue 1 is much deeper: it challenges us to define a Fair Share Guarantee when 1/n-th of the whole manna makes no sense. The same D&C rule implements such a bound, for very general preferences restricted by a continuity assumption but no monotonicity whatsoever. The minMax utility of an agent is that of his best share in the worst possible partition. It is lower than his Maxmin utility (that of his worst share in the best possible partition), which cannot be guaranteed to all agents. When the manna contains only goods, or only bads, the minMax Guarantee can be improved in infinitely many ways. Our Bid & Choose rules improve upon the MK rules by fixing a benchmark value of shares, and asking agents to bid the smallest size of an acceptable share.
The resulting Guarantees fall between their minMax and Maxmin. Joint work with Anna Bogomolnaia. Thursday 7 November - Miquel Oliu-Barton (Université Paris-Dauphine) Venue: NAB.2.16 from 14:00 - 15:30 Title and abstract TBC Wednesday 13 November - Gyorgy Turan (UIC) Venue: 32L.LG.14 from 15:30 - 17:00 Title and abstract TBC Thursday 14 November - Eliza Jablonska (University of Cracow) Venue: NAB.2.16 from 14:00 - 15:30 Title and abstract TBC Previous seminars in the series: Michaelmas Term Thursday 10 October - Ahmad Abdi (LSE) Graphs, Matroids and Clutters (talk 2) In a series of two talks, I will try to motivate and describe my area of research, and the mathematical objects that I deal with on a daily basis. The talks will be self-contained and will only assume basic knowledge of Linear Algebra, Polyhedral Theory, and Graph Theory. The two talks center around the following conjecture that I made together with Gerard Cornuejols and Dabeen Lee: "Let A be a 0,1 matrix where every row has at least two 1s and the polyhedron { x>=0 : Ax >= 1 } is integral. We conjecture that the columns of A can be partitioned into 4 color classes such that every row gets two 1s with different colors. This is still open even if 4 is replaced by any universal constant." In the first talk, I will give two other equivalent formulations of this conjecture, one being the blocking version of this conjecture, the other being the "cuboidal" version. In the second talk, I will talk about how this conjecture extends known prominent results in Graph Theory and Matroid Theory. In particular, we will see how the conjecture extends Jaeger's 8-flow theorem, and how a variation of it extends Tutte's 4-flow conjecture. Thursday 3 October - Ahmad Abdi (LSE) Clutters, blockers, and cuboids (talk 1) (Same abstract as for the 10 October talk above.) 2019, 2018, 2017, 2016 The Seminar on Combinatorics, Games and Optimisation started in 2016. It is a combination of two previous seminar series: Seminar on Discrete Mathematics and Game Theory: 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003 Seminar on Operations Research: 2016, 2015, 2014, 2013, 2012, 2011, 2010
There's a couple of different ways you can reformulate your problem, which might help you in finding some article on the issue. So, if you have $n$ uniformly distributed variables $X_i\sim U([0,1])$, then you are looking for the probability distribution of their power means: $$\mathbb{P}\left[\left(\frac{1}{n}\sum_{i=1}^n X_i^p\right)^{\frac{1}{p}} \leq x\right]$$ which is the cumulative distribution, but the density can be found by taking the derivative with respect to $x$. This cumulative distribution can be rewritten as $$\mathbb{P}\left[\sum_{i=1}^n X_i^p \leq n x^p\right]$$ provided $p>0$. But in that case, what we are looking at is the cumulative probability distribution of a sum of powers of i.i.d. random variables, in other words, moments. I'd be surprised if this hasn't been tackled in some way. You could also try to tackle it yourself from here. For instance by trying to find the following moment generating functions: $$\mathbb{E}\left[e^{tX^p}\right] \; .$$ The nice thing about moment generating functions (or if you want you can also use characteristic functions) is that they are the Laplace transforms (resp. Fourier transforms) of the probability density functions. If you can invert the transform, you might find an explicit expression for these functions. Going a step further, note that if $Y\sim U([0,1])$ then $$\mathbb{P}(Y^p \leq y) = \mathbb{P}(Y \leq \sqrt[p]{y}) = \sqrt[p]{y}$$ or in other words, $Y^p$ has density $$f_{Y^p}(y) = \frac{1}{p} y^{\frac{1-p}{p}} \text{ for } y \in (0,1]$$ Then, your problem is one of looking for the distribution of a sum of i.i.d. "generalized Pareto random variables". I'm kinda borrowing the term from here, although it doesn't fit in the same parameter range as the wiki example.
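If it helps to see the target distribution before hunting for closed forms, a quick Monte Carlo sketch (numpy assumed, names mine):

import numpy as np

def power_mean_samples(n, p, size=100_000, seed=0):
    # Monte Carlo draws of the power mean of n iid U(0,1) variables.
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(size, n))
    return (np.mean(x**p, axis=1))**(1.0 / p)

samples = power_mean_samples(n=5, p=2.0)
# Empirical CDF P[power mean <= x] at a few points:
for x in (0.4, 0.6, 0.8):
    print(x, np.mean(samples <= x))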
UPDATE: My previous response did not answer the OP's question. The following addresses the question directly: Bottom line: Prior to windowing in time, each sample in frequency is an IID Gaussian random variable, since the Fourier Transform of an AWGN waveform in time results in an identically distributed waveform in frequency (Gaussian distributed and white, meaning each sample is independent of the next). After windowing in time, a dependence is created between the adjacent samples in frequency. But the overall frequency response will still be white (uniform and equal power overall) and Gaussian. The variance of a sine wave in relation to the variance/Hz of the white noise process (variance for an AWGN process must be given as a density in units of power/Hz, as a truly white noise process has infinite power) will be unchanged: if the window caused the power of the sine wave to go down by one half, the power of the noise would also go down by one half. The actual values depend on how normalization is done in the computations, but for a straight power computation, which is energy/time, reducing the window by one half (for example) would reduce the power by one half independent of what kind of waveform was involved (sine, AWGN, etc). This is in contrast to what would happen if we convolved with a rectangular window, which is covered in the second half of the post below (what was my original, but misguided, response). Details: For discrete time signals, consider the following from Parseval's Theorem, which shows that the energy of the signal in time and frequency is the same: When time goes from $-\infty$ to $+\infty$, which would be for the DTFT: $$\sum_{n=-\infty}^{\infty}|x[n]|^2=\frac{1}{2\pi}\int_{-\pi}^{\pi}|X(e^{j\phi})|^2d\phi\tag{1}$$ Note when using normalized frequency (1) becomes the form below that is perhaps easier to follow: $$\sum_{n=-\infty}^{\infty}|x[n]|^2=\int_{-0.5}^{0.5}|X(f)|^2df$$ When time is limited (windowed), as would be for the DFT: $$\sum_{n=0}^{N-1}|x[n]|^2=\frac{1}{N}\sum_{k=0}^{N-1}|X[k]|^2\tag{2}$$ In the above DFT relationship using Parseval's Theorem we are comparing energy; if we further scale by M, where M represents the total observation time in samples, we will then be comparing power under various rectangular window sizes of N samples, which we can apply to both sinusoidal tones and white noise: $$\frac{1}{M}\sum_{n=0}^{N-1}|x[n]|^2=\frac{1}{M}\frac{1}{N}\sum_{k=0}^{N-1}|X[k]|^2\tag{3}$$ The DTFT case will not converge without any window applied (infinite energy), but we can get insight into the answer by considering an arbitrarily large window (the DFT) and then comparing that to what happens when we reduce it with a smaller window. Sine Wave: Consider a sine wave with an arbitrarily long window N with an observation time that also equals N: If the window is indeed very large compared to a cycle of the sine wave, then the DFT of the sine wave will be well approximated by two impulses (as is the case exactly when the window is an integer number of cycles of the sine wave), each with a magnitude that is N/2 times the peak magnitude of the sine wave in time.
Thus for a sine wave with an arbitrarily long window, Parseval's theorem results in the expected variance of a sine wave with peak $A_p$ (using M=N in Equation (3)): $$\frac{1}{N^2}\sum_{k=0}^{N-1}|X[k]|^2 = \frac{1}{N^2}\left( \left(\frac{N}{2}A_p\right)^2+\left(\frac{N}{2}A_p\right)^2\right)=\frac{A_p^2}{2}=\sigma^2$$ As we reduce the window for the sine wave, the frequency response of the sine wave is indeed "smeared" into other bins; the impulses will become Sinc functions in frequency that get wider as the window gets narrower, and the total power, when considering the squared sum over all bins, will go down as the ratio N/M, where M represents the original window size. Note that the total power of the original window size M will change in both domains if the residual fraction of a sine wave cycle becomes significant compared to the integrated area under one cycle squared, as is the case when the window duration is not significantly longer than one cycle of a sine wave. If we were considering a single complex exponential frequency tone, this variation as the window size became significantly reduced would not occur. However, to be noted in either case, the power in time is equal to the power in frequency regardless of window duration and frequency of the tone (the power in both is equally affected). AWGN: An additive Gaussian white noise process in time is an additive Gaussian white noise process in frequency, with the same distribution in both domains. (So as far as a mathematical function goes, it is just a change of variable from time to frequency when using a unitary Fourier transform.) Let's also remind ourselves of what AWGN is conceptually: It is white, meaning it has equal power density over ALL frequencies (and therefore unlimited power, and therefore not realizable), and Gaussian, meaning the distribution of its magnitude in time takes on a Gaussian shape. The Fourier transform of a Gaussian white process is also a Gaussian white process; what does that mean? In the frequency domain, the distribution in magnitude of the function versus frequency also takes on a Gaussian shape, and in this case being "white" means explicitly that the transform of this function (the time domain function) has equal power over ALL time. Bottom line, as far as we are concerned, besides the variable defining the domain, the functions are identical. With regard to Fourier transforms, multiplying by a window in one domain is convolution with the window kernel (Fourier Transform of the window) in the other domain. When we filter a signal, we convolve the signal with the impulse response of the filter, which is the inverse Fourier transform of the frequency response. Further to be noted, when working with the DFT as we have done above, the convolution itself is a circular convolution. With that said, consider what would happen to the frequency response of an AWGN process when we window it in time: Prior to windowing, which is the case of an arbitrarily long window N with an observation time equal to N, the frequency response is indeed white, and as we noted above the "time response" is also similarly "white" in this case (meaning it extends over the full length with all the samples having a similar distribution). Also to note, relative to our sample time interval, each sample in time is uncorrelated with the next (therefore resulting in a spectrum over our digital frequency interval that is indeed white).
The variance of our time domain signal is equal to the variance of our DFT when we scale the DFT by N=M as shown in (3). Just as in the case of the sine wave, if we reduce the rectangular window to N < M samples, the power (variance) will reduce by N/M, but what is interesting and pertinent to the question is that the frequency response will remain white and Gaussian! Why is this? By reducing the rectangular window to N samples, we are convolving the frequency response with a Sinc function (or, in our discrete system, what well approximates a Sinc function for large N and is actually an "aliased" Sinc function), and as noted this is a circular convolution. Thus the frequency response would still be white, but to be noted, we have created a dependence for each sample in frequency on adjacent samples due to the convolution operation. This means in frequency each sample is no longer independent from sample to sample; correspondingly, the time domain function is no longer "white" over all time, but in the frequency domain the amplitude distribution itself will still be Gaussian, and the power density will still be uniform over all frequencies within the digital frequency interval used, so it is indeed still white in frequency. Thus the impact of a rectangular window in time on the frequency domain is to remove the independence between the adjacent frequency samples, and to reduce the overall power proportionally when compared over the same observation interval (equally as is done with a sine wave, so it does not change SNR); but it does not change the statistical description of being white (in frequency) and Gaussian distributed. The dependence between samples in frequency is similar to the effect of a dependence between samples in time: When we have a dependence between samples in time, we have a band-limited (low-pass filtered) process, which we can therefore say is "frequency limited". When we have a dependence between samples in frequency, we have a time-limited process, which is what the rectangular window is doing. As a final point to help see what is going on: sometimes it is easier to think in one domain instead of the other, so consider if we applied the rectangular window to an AWGN signal in frequency that is initially white (uniform density over all frequencies). Prior to windowing, the time domain signal would extend over our complete observation interval, and the DFT would extend over the complete frequency space defined by our sampling time interval. When observing the signal in time, no matter how much we zoomed into the time domain waveform, it would appear as in the first plot below for AWGN, because every sample is independent of the next. And the histogram of the magnitude distribution is Gaussian. If we were to band-limit the frequency response (by multiplying the frequency response with a rectangular window), we would see in the time domain something similar to the second plot below, in that as we zoom in, we can see defined trajectories from one sample to the next! Note that the histogram of the magnitude (as long as we do it over enough samples) does not change and is still Gaussian. And important to note, our time domain function still extends over our complete observation time with a uniform power - so it is "white" in time and Gaussian, but it is no longer white in frequency. Thus we see directly what would happen to the frequency response in the case of the OP's question. Instead of the waveforms below being time, they would be frequency.
The frequency response is still uniform in power (white) and Gaussian, but due to the windowing in time we would now be able to zoom in on the frequency response and observe the sample-to-sample correlation that would now exist that didn't exist prior to windowing. Prior to windowing, each sample in frequency would be independent from adjacent samples, so as we zoomed in on the frequency response it would continue to look like the first plot below. But if the time domain function was windowed, it would create dependence between the adjacent samples in frequency, and when we zoomed in to the frequency response in that case we would start to observe something like the second plot below: we would see a definite trajectory of the frequency response waveform as we move from one sample to the next; however it is still white (the power on average over all frequencies would be flat) and Gaussian distributed. White Gaussian Noise (AWGN) Band-Limited Gaussian Noise A further way to prove that the frequency response remains white after multiplying the time domain function with a rectangular window is to observe the autocorrelation function in each case: The autocorrelation function for an AWGN signal is an impulse, and the frequency response of an impulse is a uniform function. Adding zeros to the AWGN function (or equivalently windowing) does not change the result from being an impulse, and therefore the frequency response will still be uniform (white). Adding zeros does interpolate between the existing samples in frequency, and thus the trajectories previously described are created... and to note from that, for a given window of length T of an AWGN signal, the samples in frequency separated by 1/T will remain independent, but all samples in between will be dependent on the two adjacent samples separated by 1/T. Previous post: The following was initially given as a response, but this is specific to convolving with a rectangular window, which was not the question asked: A window's duration and shape affect the spectral density of white noise based on the frequency response of the window directly. Noise will be reduced in power based on the relative length of the window, meaning as a sum of squares or $\int_0^T(x^2)dx$, while a sine wave within the correlation bandwidth of the window (meaning frequency < 1/T, where T is the window length) would increase as a summation. I prefer to consider the window as a moving average, such that the sine wave (if low enough in frequency) does not change and the noise is proportionally smaller. This just means we normalized the window to its length, but it is more intuitive that the window would not affect the sine wave itself but would remove noise. The normalization, if not used, just results in an arbitrary scaling, but the ratio of signal to noise is what is of interest in the end in either case. Consider an example (digital) white noise process with total variance = 1. If we filtered this with a 10-tap unity-gain filter (representing convolving the white noise process with a discrete rectangular window [1 1 1 1 1 1 1 1 1 1]), the noise from tap to tap in the filter would be uncorrelated, so it would go up by sqrt(10) in standard deviation (which represents its magnitude quantity), while a sine wave that was within the filter bandwidth would be correlated and would increase by a factor of 10 in magnitude. Observe the frequency response of such a filter, where the DC gain of 20 dB represents the factor of 10 described above, as 20·log10(10).
This response shows exactly what would happen to the power level of a single tone at any frequency within the filter's spectrum, while the power of multiple tones would be the sum of their individual powers (which is how we handle what happens to the noise, as in $\sum x^2 $): And the expected effect on the white noise: The noise is now shaped (colored) due to the lowpass nature of the window, and the overall noise power after processing through this filter should only go up by 10·log10(10) = 10 dB. Thus the SNR has increased by 10 dB, since the tone (signal) went up by 20 dB while the noise went up by 10 dB; or, if we normalize to the level of the tone, the noise has gone down by 10 dB, or to 1/10th in total power. Testing this experimentally (noise powers measured as variances, so the filtered value should be about 10 times the input value): noise = randn(2^12,1); var1 = var(noise); noisefilt = filter(ones(10,1),1,noise); var2 = var(noisefilt); freqz(ones(10,1)); % frequency response of the window Results in var1 = 1.00355 and var2 = 10.64. The increase is just a constant (and arbitrary) gain factor, so what is important is how the noise is affected relative to a sine wave: the window reduces the power of white noise proportionally to its length (in this case, compare a wider window to one 1/10th its size: the smaller one passes 1/10th of the noise power), while it reduces a sine wave according to a Sinc function with the first null at 1/T, where T is the length of the window (or, for any arbitrary window, based on the Fourier transform of the window itself). Also, as I mentioned in the comment under the original posting, I believe fred harris handles the mathematics well in describing coherent vs non-coherent gain, equivalent noise bandwidth, etc. in windowed systems in this classic paper that I reference often: https://www.utdallas.edu/~cpb021000/EE%204361/Great%20DSP%20Papers/Harris%20on%20Windows.pdf
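As a final numerical illustration of the "Bottom line" at the top (my own script, not part of the original answer, numpy assumed): windowing AWGN in time leaves the spectrum flat and Gaussian but correlates adjacent DFT bins, while Parseval's relation (2) continues to hold.

import numpy as np

rng = np.random.default_rng(0)
N, M = 4096, 1024                      # observation length, window length
x = rng.standard_normal(N)
x[M:] = 0.0                            # rectangular window in time

X = np.fft.fft(x)
# Parseval: time-domain power equals the scaled frequency-domain power.
print(np.mean(x**2), np.mean(np.abs(X)**2) / N)
# Adjacent frequency bins are now correlated (they were independent for M=N).
print(np.corrcoef(np.abs(X[:-1]), np.abs(X[1:]))[0, 1])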
Diophantine equations John 10/04/2017 at 15:08 Since \(3^{\dfrac{1}{3}}>4^{\dfrac{1}{4}}>5^{\dfrac{1}{5}}>...\) we have \(y^z\ge z^y\) if \(y\ge3\). Since \(1\le x\le y\), the only possible values for (x, y) are (1,1), (1,2), (2,2). These lead to the equations \(1+1=z\), \(1+2^z=z\), \(4+2^z=z^2\). The first equation has the unique solution (1,1,2). The second equation has no solution because \(2^z>z\). The third equation also has no solution, since \(2^z\ge z^2\) for \(z\ge4\) and (2,2,3) is not a solution. So the equation has the unique solution (1,1,2). Carter selected this answer. Phan Huy Toàn 14/08/2017 at 08:15 Show that the number A = 54^5 - 54^4 is divisible by 53. John 10/04/2017 at 20:40 Note that from \(3^x=\left(5^w+2^y\right)\left(5^w-2^y\right)\) we can infer \(5^w+2^y=3^x,5^w-2^y=1\), because if \(5^w+2^y=3^m,5^w-2^y=3^{x-m}\) with \(0<x-m<x\), then \(\left(5^w+2^y\right)+\left(5^w-2^y\right)=2\cdot5^w=3^m+3^{x-m}\) would be divisible by 3 (a contradiction). Selected by MathYouLike John 10/04/2017 at 15:43 We will show there is exactly one set of solutions, namely x = y = z = 2. To simplify the equation \(3^x+4^y=5^z\), we consider it modulo 3. We have: \(1=0+1^y\equiv3^x+4^y=5^z\equiv\left(-1\right)^z\left(mod3\right)\) It follows that \(z\) must be even, say z = 2w; then: \(3^x=5^z-4^y=5^{2w}-2^{2y}=\left(5^w+2^y\right)\left(5^w-2^y\right)\) \(\Rightarrow\left\{{}\begin{matrix}5^w+2^y=3^x\\5^w-2^y=1\end{matrix}\right.\) \(\Rightarrow\left\{{}\begin{matrix}\left(-1\right)^w+\left(-1\right)^y\equiv0\left(mod3\right)\\\left(-1\right)^w-\left(-1\right)^y\equiv1\left(mod3\right)\end{matrix}\right.\) => \(w\) is odd and \(y\) is even. If \(y>2\) then \(5\equiv5^w+2^y=3^x\equiv1\)or \(3\) (mod 8), a contradiction. So \(y=2\). Then \(5^w-2^y=1\) => \(w=1,z=2\) and finally \(x=2\).
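The claimed uniqueness is also easy to corroborate by brute force over a small range; a hedged Python sketch (the bound is chosen arbitrarily):

# Exhaustive check over a small box, consistent with the proof above
# that (x, y, z) = (2, 2, 2) is the only solution of 3^x + 4^y = 5^z.
B = 30
sols = [(x, y, z)
        for x in range(1, B)
        for y in range(1, B)
        for z in range(1, B)
        if 3**x + 4**y == 5**z]
print(sols)   # [(2, 2, 2)]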
2018-09-02 17:21 Measurement of $P_T$-weighted Sivers asymmetries in leptoproduction of hadrons / COMPASS Collaboration The transverse spin asymmetries measured in semi-inclusive leptoproduction of hadrons, when weighted with the hadron transverse momentum $P_T$, allow for the extraction of important transverse-momentum-dependent distribution functions. In particular, the weighted Sivers asymmetries provide direct information on the Sivers function, which is a leading-twist distribution that arises from a correlation between the transverse momentum of an unpolarised quark in a transversely polarised nucleon and the spin of the nucleon. [...] arXiv:1809.02936; CERN-EP-2018-242. Geneva : CERN, 2019-03 - 20 p. Published in : Nucl. Phys. B 940 (2019) 34-53 2018-02-14 11:43 Light isovector resonances in $\pi^- p \to \pi^-\pi^-\pi^+ p$ at 190 GeV/${\it c}$ / COMPASS Collaboration We have performed the most comprehensive resonance-model fit of $\pi^-\pi^-\pi^+$ states using the results of our previously published partial-wave analysis (PWA) of a large data set of diffractive-dissociation events from the reaction $\pi^- + p \to \pi^-\pi^-\pi^+ + p_\text{recoil}$ with a 190 GeV/$c$ pion beam. The PWA results, which were obtained in 100 bins of three-pion mass, $0.5 < m_{3\pi} < 2.5$ GeV/$c^2$, and simultaneously in 11 bins of the reduced four-momentum transfer squared, $0.1 < t' < 1.0$ $($GeV$/c)^2$, are subjected to a resonance-model fit using Breit-Wigner amplitudes to simultaneously describe a subset of 14 selected waves using 11 isovector light-meson states with $J^{PC} = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and spin-exotic $1^{-+}$ quantum numbers. [...] arXiv:1802.05913; CERN-EP-2018-021. Geneva : CERN, 2018-11-02 - 72 p. Published in : Phys. Rev. D 98 (2018) 092003 2018-02-07 15:23 Transverse Extension of Partons in the Proton probed by Deeply Virtual Compton Scattering / Akhunzyanov, R. (Dubna, JINR) ; Alexeev, M.G. (Turin U.) ; Alexeev, G.D. (Dubna, JINR) ; Amoroso, A. (Turin U. ; INFN, Turin) ; Andrieux, V. (Illinois U., Urbana ; IRFU, Saclay) ; Anfimov, N.V. (Dubna, JINR) ; Anosov, V. (Dubna, JINR) ; Antoshkin, A. (Dubna, JINR) ; Augsten, K. (Dubna, JINR ; CTU, Prague) ; Augustyniak, W. (NCBJ, Swierk) et al. We report on the first measurement of exclusive single-photon muoproduction on the proton by COMPASS using 160 GeV/$c$ polarized $\mu^+$ and $\mu^-$ beams of the CERN SPS impinging on a liquid hydrogen target. [...] CERN-EP-2018-016; arXiv:1802.02739. 2018. 13 p. 2017-09-19 08:11 Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering / COMPASS Collaboration A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN.
The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range $0.02~(\text{GeV}/c)^2 < P_{\rm{hT}}^{2} < 3~(\text{GeV}/c)^2$. [...] CERN-EP-2017-253; arXiv:1709.07374.- Geneva : CERN, 2018-02-08 - 23 p. - Published in : Phys. Rev. D 97 (2018) 032006

2017-07-08 20:47 New analysis of $\eta\pi$ tensor resonances measured at the COMPASS experiment / JPAC Collaboration We present a new amplitude analysis of the $\eta\pi$ $D$-wave in $\pi^- p\to \eta\pi^- p$ measured by COMPASS. Employing an analytical model based on the principles of the relativistic $S$-matrix, we find two resonances that can be identified with the $a_2(1320)$ and the excited $a_2^\prime(1700)$, and perform a comprehensive analysis of their pole positions. [...] CERN-EP-2017-169; JLAB-THY-17-2468; arXiv:1707.02848.- Geneva : CERN, 2018-04-10 - 9 p. - Published in : Phys. Lett. B 779 (2018) 464-472

2017-01-05 16:00 First measurement of the Sivers asymmetry for gluons from SIDIS data / COMPASS Collaboration The Sivers function describes the correlation between the transverse spin of a nucleon and the transverse motion of its partons. It was extracted from measurements of the azimuthal asymmetry of hadrons produced in semi-inclusive deep inelastic scattering of leptons off transversely polarised nucleon targets, and it turned out to be non-zero for quarks. [...] CERN-EP-2017-003; arXiv:1701.02453.- Geneva : CERN, 2017-09-10 - 11 p. - Published in : Phys. Lett. B 772 (2017) 854-864
I am trying to prove the following: Let $M$ be a smooth manifold of dimension $n$. Then $\bigwedge^k T^*M$ is a smooth subbundle of dimension $\binom{n}{k}$ of $\bigotimes^kT^*M$. To do this, I think the following theorem will be helpful: Theorem: The image and kernel of a constant rank smooth map between smooth vector bundles over a manifold $M$ are both subbundles of their ambient bundles. So a natural thing to try is to define a map $f:\bigotimes^k T^*M\to \bigotimes^k T^*M$ by $f(\omega)=\operatorname{Alt}(\omega)$ for all $\omega\in \bigotimes^k T^*M$, where $$ \operatorname{Alt}(\omega)=\frac{1}{k!}\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\,{}^{\sigma}\omega $$ and ${}^{\sigma}\omega$ denotes $\omega$ with its arguments permuted by $\sigma$. The image of $f$ is $\bigwedge^k T^*M$. But I am unable to see that the rank of $f$ is constant. Can somebody help? Thanks.
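One observation that may help here (my own sketch, not part of the original question): fibrewise, $\operatorname{Alt}$ is a linear projection, and a projection has rank equal to the dimension of its image, which is the same in every fibre:

$$\operatorname{Alt}\circ\operatorname{Alt}=\operatorname{Alt},\qquad \operatorname{im}\bigl(\operatorname{Alt}_p\bigr)=\textstyle\bigwedge^k T_p^*M,\qquad \operatorname{rank}\operatorname{Alt}_p=\binom{n}{k}\quad\text{for every }p\in M,$$

since in any local coframe the alternating $k$-tensors at $p$ form a subspace of fixed dimension $\binom{n}{k}$, independent of $p$.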
The suggestion in the comments to use brute force is a little unsatisfactory; what if $15$ were $15000$? Here is a general method. Suppose$$m = \sigma(n) = \prod_{p^k ||\: n} \frac{p^{k+1}-1}{p-1}.\tag{1}$$Each factor $d$ in the product is an integer (a finite geometric series). The prime factorization of $m$ will severely restrict the possible multiplicands on the r.h.s. of (1), and then their unusual shape (viz. $\frac{p^{k+1}-1}{p-1}$) will reveal much about the prime factorization of $n$. To wit, if $d$ divides $m$ and $\frac{p^{k+1} - 1}{p-1} = d$ then $p^{k+1} = pd - (d - 1)$, which means $p$ divides $d-1$. This yields one equation to solve per prime $p$ dividing $d-1$, namely the following equation: $$p^k = d - \frac{d-1}{p}.\tag{2}$$ Already we can see that $d = 1$ is forbidden, since it implies $p^k = 1$, which is absurd (it contradicts the fundamental theorem of arithmetic). Thus the general procedure is to consider a factor $d \ne 1$ of $m$ and check, for each prime $p$ dividing $d-1$, whether the r.h.s. of (2) is a power of $p$; if so, repeat the process replacing $m$ by $m/d$ (and excluding the prime $p$ from further consideration). This can get complicated; luckily we are dealing here only with $m = 15 = 3 \cdot 5$, a semiprime. If $d = 3$ then $p \mid 2$ so $p = 2$ and (2) becomes $2^k = 2$, which means $k = 1$. Thus, possibly, $n$ is singly even. However, moving on to the complementary divisor $15/3 = 5$: if $d = 5$ then $p \mid 4$ so $p = 2$, and this is a problem: the prime $2$ was supposed to contribute only one term on the r.h.s. of (1). [If this is unconvincing, use (2) again: $2^k = 3$, which can't happen ($k \in \mathbf{Z}$).] Thus the appearance of $5$ and $3$ together on the r.h.s. of (1) is inconsistent. That leaves us with $d = 15$. This implies $p \mid 14$ so $p = 2$ or $7$. But $7^k = 13$ can't happen. On the other hand, $2^k = 8$ has the solution $k = 3$, and this is the only consistent description of $n$ we can glean. It remains to check that $n = 2^3 = 8$ works: $\sigma(8) = 1 + 2 + 4 + 8 = 15$ ($ = 1111_2$ in binary!) so, yes!
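For comparison, here is the brute-force alternative dismissed above, as a Python cross-check of the structural argument (my own sketch; the helper names are hypothetical). It is fine for $m = 15$ but hopeless for large $m$, which is exactly the answer's point:

    # Direct search for all n with sigma(n) = m.
    def sigma(n):
        return sum(d for d in range(1, n + 1) if n % d == 0)

    def sigma_inverse(m):
        # sigma(n) >= n + 1 for n > 1, so every candidate is bounded by m.
        return [n for n in range(1, m + 1) if sigma(n) == m]

    print(sigma_inverse(15))  # [8]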
I want to be able to magnify (by say 2X) a group of equations in the output of PDFLaTeX by a single click on the box containing them. A second click is to revert to the previous state. The viewer is Adobe Reader.

One solution would be to add two copies of the page to the document, one with the small formula and one with the enlarged formula. Then you can use hyperlinks to switch between the pages, effectively toggling the formula between the small and big version. As a proof of concept, try the TeX code below: clicking on the formula enlarges it, and clicking again returns it to the initial state.

\documentclass{article}
\usepackage{hyperref}
\usepackage{amsmath}
\begin{document}
Click on the formula to make it bigger:
\hyperlink{page.2}{\begin{align*}
  d\bigl(f(t, X_t)\bigr) &= \frac{\partial}{\partial t} f(t, X_t) \,dt + \sum_{i=1}^d \frac{\partial}{\partial x_i} f(t, X_t) \,dX^{(i)}_t \\
  &\hskip2cm + \frac12 \sum_{i,j=1}^d \frac{\partial^2}{\partial x_i \partial x_j} f(t, X_t) \,dX^{(i)}_t \, dX^{(j)}_t,
\end{align*}}
Some more text could be here.
\newpage
Click on the formula to make it smaller:
\hyperlink{page.1}{\Large\begin{align*}
  d\bigl(f(t, X_t)\bigr) &= \frac{\partial}{\partial t} f(t, X_t) \,dt + \sum_{i=1}^d \frac{\partial}{\partial x_i} f(t, X_t) \,dX^{(i)}_t \\
  &\hskip2cm + \frac12 \sum_{i,j=1}^d \frac{\partial^2}{\partial x_i \partial x_j} f(t, X_t) \,dX^{(i)}_t \, dX^{(j)}_t,
\end{align*}}
Some more text could be here.
\end{document}

The extra page(s) could be hidden at the end of the document to not disturb the normal sequence of pages.
Deep Learning from Scratch to GPU - 2 - Bias and Activation Function

You can adopt a pet function! Support my work on my Patreon page, and access my dedicated discussion server. Can't afford to donate? Ask for a free invite. February 11, 2019. New books are available for subscription.

If you haven't yet, read my introduction to this series in Deep Learning in Clojure from Scratch to GPU - Part 0 - Why Bother?. The previous article, Part 1, is here: Representing Layers and Connections. To run the code, you need a Clojure project with Neanderthal included as a dependency. If you're in a hurry, you can clone the Neanderthal Hello World project. Don't forget to read at least some of the introduction from Neural Networks and Deep Learning, start up the REPL from your favorite Clojure development environment, and let's continue with the tutorial.

(require '[uncomplicate.commons.core :refer [with-release]] '[uncomplicate.fluokitten.core :refer [fmap!]] '[uncomplicate.neanderthal [native :refer [dv dge]] [core :refer [mv! mv axpy! scal!]] [math :refer [signum exp]] [vect-math :refer [fmax! tanh! linear-frac!]]])

The Network Diagram

I'll repeat the basic diagram from the previous post, as a reference.

Threshold and Bias

In the current state, the network combines all layers into a single linear transformation. We can introduce basic decision-making capability by adding a cutoff to the output of each neuron. When the weighted sum of its inputs is below that threshold, the output is zero, and when it is above, the output is one.

\begin{equation} output = \left\{ \begin{array}{ll} 0 & W\mathbf{x} \leq threshold \\ 1 & W\mathbf{x} > threshold \\ \end{array} \right. \end{equation}

Since we keep the current outputs in a (potentially) huge vector, it would be inconvenient to write scalar-based logic for that. I prefer to use a vectorized function, or to create one if there is not exactly what we need. Neanderthal does not have the exact cutoff function, but we can create one by subtracting threshold from the maximum of each threshold and the signal value and then mapping the signum function over the result. There are simpler ways to compute this, but I wanted to use the existing functions and do the computation in place. It is of purely educational value, anyway. We will see soon that there are better things to use for transforming the output than the vanilla step function.

(defn step! [threshold x] (fmap! signum (axpy! -1.0 threshold (fmax! threshold x x))))

(let [threshold (dv [1 2 3]) x (dv [0 2 7])] (step! threshold x))
nil
#RealBlockVector[double, n:3, offset: 0, stride:1] [ 0.00 0.00 1.00 ]

I'm going to show you a few steps in the evolution of the code, so I will reuse weights and x. To simplify the example, we will use global def and not care about properly releasing the memory. It will not matter in a REPL session, but do not forget to do it in real code. Continuing the example from Part 1:

(def x (dv 0.3 0.9)) (def w1 (dge 4 2 [0.3 0.6 0.1 2.0 0.9 3.7 0.0 1.0] {:layout :row})) (def threshold (dv 0.7 0.2 1.1 2))

Since we do not care about extra instances at the moment, we'll use the pure mv function instead of mv! for convenience. mv creates the resulting vector y, instead of mutating one that has to be provided as an argument.

(step!
threshold (mv w1 x))
nil
#RealBlockVector[double, n:4, offset: 0, stride:1] [ 0.00 1.00 1.00 0.00 ]

The bias is simply the threshold moved to the left side of the equation:

\begin{equation} output = \left\{ \begin{array}{ll} 0 & W\mathbf{x} - bias \leq 0 \\ 1 & W\mathbf{x} - bias > 0 \\ \end{array} \right. \end{equation}

(def bias (dv 0.7 0.2 1.1 2)) (def zero (dv 4))

(step! zero (axpy! -1.0 bias (mv w1 x)))
nil
#RealBlockVector[double, n:4, offset: 0, stride:1] [ 0.00 1.00 1.00 0.00 ]

Remember that bias is the same as threshold. There is no need for the extra zero vector.

(step! bias (mv w1 x))
nil
#RealBlockVector[double, n:4, offset: 0, stride:1] [ 0.00 1.00 1.00 0.00 ]

Activation Function

The decision capabilities supported by the step function are rather crude. The neuron either outputs a constant value (1), or zero. It is better to use functions that offer different levels of signal strength. Instead of the step function, the output of each neuron passes through an activation function. Countless functions could serve as an activation function, but a handful have proved to be the best choice. Like neural networks themselves, the functions that work well are simple. Activation functions have to be chosen carefully to support the learning algorithms, most importantly to be easily differentiable. Until recently, the sigmoid and tanh functions were the top picks. More recently an even simpler function, ReLU, became the activation function of choice.

Rectified Linear Unit (\(ReLU\))

ReLU is short for Rectified Linear Unit. Sounds mysterious, but it is a straightforward linear function that has zero value below the threshold, which is typically zero.

\begin{equation} f(x) = max(0, x) \end{equation}

It's even simpler to implement than the step function, so we do this, if nothing else, for fun.

(defn relu! [threshold x] (axpy! -1.0 threshold (fmax! threshold x x)))

It might seem strange that I kept the threshold as an argument to the relu function. Isn't ReLU always cut off at zero? Consider it a bit of optimization. There is no built-in optimized ReLU function. To implement the formula \(f(x) = max(0, x)\) we either have to map over the max function, or use the vectorized fmax, which requires an additional vector that holds the zeros. Since we need to subtract the biases before the activation anyway, by fusing these two phases I avoided the need to maintain the extra array of zeros. That may or may not be the best choice for a complete library, but since the main point of this blog is teaching, we stick to the YAGNI principle.

(relu! bias (mv w1 x))
nil
#RealBlockVector[double, n:4, offset: 0, stride:1] [ 0.00 1.63 2.50 0.00 ]

Hyperbolic Tangent (\(tanh\))

One popular activation function is tanh.

\begin{equation} tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{e^{2x} - 1}{e^{2x} + 1} \end{equation}

Note how it is close to the identity function \(f(x) = x\) over a large part of the interval between \(-1\) and \(1\). As the absolute value of \(x\) gets larger, \(tanh(x)\) asymptotically approaches \(1\). Thus, the output is between \(-1\) and \(1\).

Since Neanderthal has a vectorized variant of the tanh function in its vect-math namespace, the implementation is easy.

(tanh! (axpy! -1.0 bias (mv w1 x)))
nil
#RealBlockVector[double, n:4, offset: 0, stride:1] [ -0.07 0.93 0.99 -0.80 ]

Sigmoid function

Until ReLU became popular, sigmoid was the most often used activation function. Sigmoid refers to a whole family of S-shaped functions, or, often, to a special case - the logistic function.
Standard libraries often do not come with an implementation of the sigmoid function. We have to implement our own. We could implement it in the most straightforward way, just following the formula. That might be a good approach if we're only flexing our muscles, but it may not be completely safe if we intend to use such an implementation for real work. (exp 710) is too big to fit even into a double, while (exp 89) does not fit into a float and produces an infinity (##Inf).

[(exp 709) (exp 710) (float (exp 88)) (float (exp 89))]
nil
[8.218407461554972E307 ##Inf 1.6516363E38 ##Inf]

You can program that implementation as an exercise, and I'll show you another approach instead. Let me pull the following equality out of the magic mathematical hat, and ask you to believe me that it is true:

\begin{equation} S(x) = \frac{1}{2} + \frac{1}{2} \times tanh(\frac{x}{2}) \end{equation}

We can implement that easily by combining the vectorized tanh! and a bit of vectorized scaling.

(defn sigmoid! [x] (linear-frac! 0.5 (tanh! (scal! 0.5 x)) 0.5))

Let's program our layer with the logistic sigmoid activation.

(sigmoid! (axpy! -1.0 bias (mv w1 x)))
nil
#RealBlockVector[double, n:4, offset: 0, stride:1] [ 0.48 0.84 0.92 0.25 ]

You can benchmark both implementations with large vectors and see whether there is a difference in performance. I expect it to be only a fraction of the complete run time of the network. Consider that tanh(x) is safe, since it comes from a standard library, while you'll have to investigate whether the straightforward formula translation is good enough for what you want to do.

The next step

The layers of our fully connected network now go beyond linear transformations. We can stack as many as we'd like and do the inference.

(with-release [x (dv 0.3 0.9)
               w1 (dge 4 2 [0.3 0.6 0.1 2.0 0.9 3.7 0.0 1.0] {:layout :row})
               bias1 (dv 0.7 0.2 1.1 2)
               h1 (dv 4)
               w2 (dge 1 4 [0.75 0.15 0.22 0.33])
               bias2 (dv 0.3)
               y (dv 1)]
  (tanh! (axpy! -1.0 bias1 (mv! w1 x h1)))
  (println (sigmoid! (axpy! -1.0 bias2 (mv! w2 h1 y)))))
#RealBlockVector[double, n:1, offset: 0, stride:1] [ 0.44 ]

This is getting repetitive. For each layer we add, we have to herd a few more disconnected matrices, vectors, and activation functions into place. In the next article, we will fix this by abstracting it into easy-to-use layers. After that, we will make a few minor adjustments that enable our code to run on the GPU, just to make sure that it is easy to do. Then we will be ready to tackle the remaining 95% of the work: creating the code for learning these weights from data, so that the numbers that the network computes become relevant.

The next article: Fully Connected Inference Layers.

Thanks to Clojurists Together for financially supporting the writing of this series. Big thanks to all Clojurians who contribute, and thank you for reading and discussing this series.
WHY? Most deep directed latent-variable models, including the VAE, try to maximize the marginal likelihood by maximizing the Evidence Lower Bound (ELBO). However, the marginal likelihood alone is not sufficient to characterize the performance of the model.

WHAT? Instead of the marginal likelihood, this paper suggests measuring the information between the observed X and the latent Z, bracketed by a variational lower bound and upper bound:

$$I_e(X;Z) = \iint dx\,dz\, p_e(x,z)\log \frac{p_e(x,z)}{p^*(x)\,p_e(z)}$$
$$H - D \leq I_e(X; Z) \leq R$$
$$H \equiv -\int dx\, p^*(x)\log p^*(x)$$
$$D \equiv -\int dx\, p^*(x) \int dz\, e(z|x)\log d(x|z)$$
$$R \equiv \int dx\, p^*(x) \int dz\, e(z|x)\log \frac{e(z|x)}{m(z)}$$

H is the data entropy, D is the distortion, which is equivalent to the reconstruction negative log-likelihood, and R is the rate, which is equivalent to the average KL divergence. The hyperparameter β in β-VAE turns out to set the trade-off between R and D. The relationship between distortion and rate can be drawn in the RD-plane. High R with low D implies that the encoder did not fit the prior, so the latent representation of the data is expressive but the model fails to generate samples under the assumed prior. On the other hand, high D with low R implies that the encoder fit the prior too closely, so the latent representation fails to contain sufficient information (auto-decoding). As seen in the RD-plane, the performance of models with the same ELBO may vary with the ratio between distortion and rate. This paper points out that previous models minimize the ELBO by suppressing R, but at least some amount of information in R (up to the entropy of the data) is needed for a desired representation. So the paper suggests reducing the KL penalty term (β), or fixing the rate to a target value such as the data entropy, if available. First, the paper compares a normal VAE with a target-rate model on a toy dataset with known parameters. Second, the authors compare models of varying ELBO and varying R-to-D ratio.

So? The VAE failed to capture the multimodality of the z-space by reducing R to near zero, and thus failed to reconstruct. However, the model with a target rate succeeded in capturing the multimodality of the z-space and reconstructed well. By the ratio of R and D, the models can be categorized into four categories: auto-encoder, syntactic encoder, semantic encoder, and auto-decoder. These categories show varying performance, as described above. Even models with the same ELBO show extremely varying performance depending on the ratio.

Critic: Amazingly, it points out a critical drawback of all the previous VAE models. The description was easily understandable and the experiments were brilliant.
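As a concrete reading of the two terms, here is a minimal NumPy sketch (my own illustration, not from the paper): the rate is the closed-form KL divergence between a diagonal-Gaussian encoder e(z|x) and a standard-normal marginal m(z), and the distortion is a Monte Carlo estimate of the reconstruction negative log-likelihood. All inputs here are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy encoder outputs for a batch of data points (made-up values).
    mu = rng.normal(size=(8, 2))              # posterior means, (batch, latent)
    log_var = 0.1 * rng.normal(size=(8, 2))   # posterior log-variances

    # Rate: KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch.
    rate = 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))

    # Distortion: reconstruction NLL under a unit-variance Gaussian decoder,
    # with one z sample per x; x_hat is a stand-in for a real decoder network.
    x = rng.normal(size=(8, 2))
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)
    x_hat = z
    distortion = np.mean(np.sum(0.5 * ((x - x_hat)**2 + np.log(2 * np.pi)), axis=1))

    # Negative ELBO with KL weight beta; beta sets the R/D trade-off.
    beta = 1.0
    loss = distortion + beta * rate
    print(rate, distortion, loss)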
Evaluate $$\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx$$ Source: Putnam. By the property $\displaystyle \int_0^af(x)\,dx=\int_0^af(a-x)\,dx$: $$=\int_0^{\pi/2}\frac{(\pi/2-x)\sin x\cos x}{\sin^4 x+\cos^4 x}dx=\frac{\pi}{2}\int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4 x+\cos^4 x}dx-\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx$$ $$\Longleftrightarrow\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}dx=\frac{\pi}{4}\int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4x+\cos^4x}dx$$ Now I'm stuck. WolframAlpha says the indefinite integral of $\dfrac{\sin x\cos x}{\sin^4 x+\cos^4x}$ evaluates nicely to $-\frac12\arctan(\cos(2x))$. I have already factored $\sin^4 x+\cos^4 x$ into $1-\left(\frac{\sin(2x)}{\sqrt{2}}\right)^2$, but I don't know how to continue. Should I try a substitution $u=\frac{\sin(2x)}{\sqrt{2}}$? Could someone provide me with a hint, or maybe an easier method I can refer to in the future?
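For completeness, here is one way to finish from where the question stops (my own addition, not part of the original post), using the substitution $u=\cos 2x$, which matches the antiderivative WolframAlpha reports:

$$\sin^4 x+\cos^4 x=1-\tfrac12\sin^2 2x=\frac{1+\cos^2 2x}{2},\qquad du=-2\sin 2x\,dx=-4\sin x\cos x\,dx,$$

so

$$\int_0^{\pi/2}\frac{\sin x\cos x}{\sin^4 x+\cos^4 x}\,dx=\frac12\int_{-1}^{1}\frac{du}{1+u^2}=\frac{\pi}{4},$$

and therefore

$$\int_0^{\pi/2}\frac{x\sin x\cos x}{\sin^4 x+\cos^4 x}\,dx=\frac{\pi}{4}\cdot\frac{\pi}{4}=\frac{\pi^2}{16}.$$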
In the usual singular homology of a topological space $X$, one considers the free abelian group generated by all continuous maps from the standard simplex $\Delta^{n}$ to $X$. Now we can replace $\Delta^{n}$ by the orthogonal group $O(n)$. There are $n+1$ obvious embeddings (both topological and group-theoretical) $\epsilon_{i}:O(n)\to O(n+1)$, $i=1,2,\ldots,n+1$. These obvious embeddings are as follows: $$\epsilon_{1}(A)=1 \oplus A,\qquad \epsilon_{i}(A)=\lambda_{i}\epsilon_{1}(A)\lambda_{i}^{-1}$$ where $\lambda_{i}$ is the elementary matrix obtained from the identity matrix by interchanging the first and the $i$-th rows. For example, for $n=2$, the matrix $\epsilon_{i} \left (\begin{pmatrix} a&b\\c&d \end{pmatrix} \right)$ is $\begin{pmatrix}1&0&0\\0&a&b\\0&c&d \end{pmatrix}$, $\begin{pmatrix}a&0&b\\0&1&0\\c&0&d \end{pmatrix}$ and $\begin{pmatrix} a&b&0\\c&d&0\\0&0&1\end{pmatrix}$, for $i=1,2,3$, respectively. For a topological space $X$, one can define $\overline{C_{n}(X)}$ as the free abelian group generated by all continuous maps from $O(n)$ to $X$. The boundary map $\delta: \overline{C_{n}(X)} \to \overline{C_{n-1}(X)}$ is defined by $\delta(\phi)=\sum (-1)^i \phi \circ \epsilon_i$. Then we have $\delta \circ \delta =0$. So we obtain a kind of "homology" as a functor on the category of topological spaces. Of course, homeomorphic spaces have isomorphic homology. (However, this functor is not necessarily a homotopy-invariant functor, but it imposes an equivalence relation on the space of all continuous maps $f,g:X\to Y$: $f\simeq g$ iff $f_{*}=g_{*}$.) Is this equivalence relation stronger than homotopy equivalence? On the other hand, for a group $G$, one can define the free abelian group generated by all group homomorphisms from $O(n)$ to $G$. The same process as above gives us a homology of groups. Have these types of homologies been studied already?
Consider the problem of minimizing an integral cost $\int_0^T c(x(t), u(t))\, dt$ over measurable controls $u:[0, T]\to \mathbb U$, subject to a finite-dimensional control system $\dot x(t) = f(x(t), u(t))$ with given initial condition $x(0) = \bar x$ and given final condition $x(T) = \hat x$, assuming that $T > 0$ is given, the maps $c:\mathbb R^d\times\mathbb R^m\to\mathbb R$ and $f:\mathbb R^d\times\mathbb R^m \to \mathbb R^d$ are continuously differentiable, and that $\mathbb U\subset\mathbb R^m$ is non-empty and compact. (It is known that under milder assumptions on the function $c:\mathbb R^d\times\mathbb R^m\to\mathbb R$ there exists a solution of the above problem; see, e.g., doi:10.1007/978-1-4612-6380-7 for the relevant theory.) Recall that the Pontryagin maximum principle (PMP) provides necessary conditions for solutions to the preceding optimal control problem; the resulting characterization of optimal controls takes the form of a two-point boundary value problem that looks like the following: If $u_\star:[0, T]\to\mathbb U$ solves the above problem and $x_\star:[0, T]\to\mathbb R^d$ is the corresponding solution of the control system, then there exists a pair $(\eta, p)$ with $\eta\in\{0, 1\}$, and $p:[0, T]\to\mathbb R^d$ absolutely continuous, satisfying $$\dot x_\star(t) = f(x_\star(t), u_\star(t))\quad\text{for a.e. }t \text{ with }x_\star(0) = \bar x \text{ and } x_\star(T) = \hat x,$$ $$-\dot p(t) = \partial_x f(x_\star(t), u_\star(t)) p(t) - \eta \partial_x c(x_\star(t), u_\star(t))\quad\text{for a.e. }t,$$ $$u_\star(t)\in\text{argmax}_{v\in\mathbb U} \bigl\{ \langle p(t), f(x_\star(t), v)\rangle - \eta c(x_\star(t), v) \bigr\},$$ plus a few other conditions. Of course, the preceding description assumes that $f$ and $c$ are continuously differentiable, but there are results that relax this requirement, most notably those proposed by Francis Clarke: see, e.g., doi:10.1007/978-1-4471-4820-3, Chapter 22, for the relevant material. While it is currently not known (see the thread posted a few years ago: existence of optimal control) whether the optimal control problem above admits solutions if the instantaneous cost function $c$ is discontinuous in $u$, the PMP proposed by Clarke (see Theorem 22.26 in the reference above) nevertheless provides necessary conditions for optimality. Notice that the characterization above requires us to solve two $d$-dimensional o.d.e.s simultaneously with $2d$ given boundary conditions, all of them imposed on $x$; there are no boundary conditions for $p$. This is precisely the case that I'm interested in: two-point boundary value problems such as the one above in terms of the joint variables $(x, p)$, where $p$ has no given boundary condition. I'm seeking pointers to any relevant articles/books/lecture-notes that treat this topic.
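To make the structure of such boundary value problems concrete, here is a toy single-shooting sketch in Python (my own illustration, not from the question). For the scalar problem of minimizing $\int_0^1 u^2/2\,dt$ subject to $\dot x = u$, $x(0)=0$, $x(1)=1$, the PMP with $\eta=1$ and unconstrained $u$ (a simplification of the compact-$\mathbb U$ setting above) gives $u_\star = p$ and $\dot p = 0$, and one shoots on the unknown initial costate $p(0)$ to satisfy the terminal condition on $x$:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def terminal_state(p0):
        # Hamiltonian system from the PMP: x' = u* = p, p' = 0.
        def rhs(t, s):
            x, p = s
            return [p, 0.0]
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, p0], rtol=1e-9)
        return sol.y[0, -1]  # x(1)

    # Shooting residual: enforce the prescribed final condition x(1) = 1.
    residual = lambda p0: terminal_state(p0) - 1.0
    p0_star = brentq(residual, -10.0, 10.0)
    print(p0_star)  # expected: 1.0, so u*(t) = 1 and x*(t) = t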
This article is aimed at relatively new LaTeX users. It is written particularly for my own students, with the aim of helping them to avoid making common errors. The article exists in two forms: this WordPress blog post and a PDF file generated by LaTeX, both produced from the same Emacs Org file. Since WordPress does not handle LaTeX very well I recommend reading the PDF version.

1. New Paragraphs

In LaTeX a new paragraph is started by leaving a blank line. Do not start a new paragraph by using \\ (it merely terminates a line). Indeed you should almost never type \\, except within environments such as array, tabular, and so on.

2. Math Mode

Always type mathematics in math mode (as $..$ or \(..\)), to produce "$y = f(x)$" instead of "y = f(x)", and "the dimension $n$" instead of "the dimension n". For displayed equations use $$, \[..\], or one of the display environments (see Section 12). Punctuation should appear outside math mode for inline equations, otherwise the spacing will be incorrect. Here is an example.

Correct: The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$.
Incorrect: The variables $x,$ $y,$ and $z$ satisfy $x^2 + y^2 = z^2.$

For displayed equations, punctuation should appear as part of the display. All equations must be punctuated, as they are part of a sentence.

3. Mathematical Functions in Roman

Mathematical functions should be typeset in roman font. This is done automatically for the many standard mathematical functions that LaTeX supports, such as \sin, \tan, \exp, \max, etc. If the function you need is not built into LaTeX, create your own. The easiest way to do this is to use the amsmath package and type, for example,

\usepackage{amsmath} ... % In the preamble.
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\inert}{Inertia}

Alternatively, if you are not using the amsmath package you can type

\def\diag{\mathop{\mathrm{diag}}}

4. Maths Expressions

Ellipses (dots) are never explicitly typed as "...". Instead they are typed as \dots for baseline dots, as in $x_1,x_2,\dots,x_n$ (giving $x_1,x_2,\dots,x_n$) or as \cdots for vertically centered dots, as in $x_1 + x_2 + \cdots + x_n$ (giving $x_1 + x_2 + \cdots + x_n$). Type $i$th instead of $i'th$ or $i^{th}$. (For some subtle aspects of the use of ellipses, see How To Typeset an Ellipsis in a Mathematical Expression.) Avoid using \frac to produce stacked fractions in the text. Write flops in roman instead of $flops$ in math italic. For "much less than", type \ll, giving $\ll$, not <<, which gives $<<$. Similarly, "much greater than" is typed as \gg, giving $\gg$. If you are using angle brackets to denote an inner product use \langle and \rangle:

incorrect: $<x,y>$, typed as $<x,y>$
correct: $\langle x,y \rangle$, typed as $\langle x,y \rangle$

5. Text in Displayed Equations

When a displayed equation contains text such as "subject to $x \ge 0$", instead of putting the text in \mathrm put the text in an \mbox, as in \mbox{subject to $x \ge 0$}. Note that \mbox switches out of math mode, and this has the advantage of ensuring the correct spacing between words. If you are using the amsmath package you can use the \text command instead of \mbox.

Example
$$ \min\{\, \|A-X\|_F: \mbox{$X$ is a correlation matrix} \,\}. $$

6. BibTeX

Produce your bibliographies using BibTeX, creating your own bib file. Note three important points. "Export citation" options on journal websites rarely produce perfect bib entries. More often than not the entry has an improperly cased title, an incomplete or incorrectly accented author name, improperly typeset maths in the title, or some other error, so always check and improve the entry.
If you wish to cite one of my papers download the latest version of njhigham.bib (along with strings.bib supplied with it) and include it in your \bibliography command. Decide on a consistent format for your bib entry keys and stick to it. In the format used in the Numerical Linear Algebra group at Manchester a 2010 paper by Smith and Jones has key smjo10, a 1974 book by Aho, Hopcroft, and Ullman has key ahu74, while a 1990 book by Smith has key smit90.

7. Spelling Errors and LaTeX Errors

There is no excuse for your writing to contain spelling errors, given the wide availability of spell checkers. You'll need a spell checker that understands LaTeX syntax. There are also tools for checking LaTeX syntax. One that comes with TeX Live is lacheck, which describes itself as "a consistency checker for LaTeX documents". Such a tool can point out possible syntax errors, or semantic errors such as unmatched parentheses, and warn of common mistakes.

8. Quotation Marks

LaTeX has a left quotation mark, denoted here \lq, and a right quotation mark, denoted here \rq, typed as the single left and right quotes on the keyboard, respectively. A left or right double quotation mark is produced by typing two single quotes of the appropriate type. The double quotation mark key itself always produces the same as two right quotation marks. Example: ``hello'' is typed as \lq\lq hello \rq\rq.

9. Captions

Captions go above tables but below figures. So put the caption command at the start of a table environment but at the end of a figure environment. The \label statement should go after the \caption statement (or it can be put inside it), otherwise references to that label will refer to the subsection in which the label appears rather than the figure or table.

10. Tables

LaTeX makes it easy to put many rules, some of them double, in and around a table, using \cline, \hline, and the | column formatting symbol. However, it is good style to minimize the number of rules. A common task for journal copy editors is to remove rules from tables in submitted manuscripts.

11. LaTeX Source Code

LaTeX source code should be laid out so that it is readable, in order to aid editing and debugging, to help you to understand the code when you return to it after a break, and to aid collaborative writing. Readability means that logical structure should be apparent, in the same way as when indentation is used in writing a computer program. In particular, it is a good idea to start new sentences on new lines, which makes it easier to cut and paste them during editing, and also makes a diff of two versions of the file more readable. Example:

Good:
$$ U(\zbar) = U(-z) =
\begin{cases}
-U(z), & z\in D, \\
-U(z)-1, & \mbox{otherwise}.
\end{cases}
$$

Bad:
$$U(\zbar) = U(-z) = \begin{cases}-U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases}$$

12. Multiline Displayed Equations

For displayed equations occupying more than one line it is best to use the environments provided by the amsmath package. Of these, align (and align* if equation numbers are not wanted) is the one I use almost all the time. Example:

\begin{align*}
\cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} + \cdots,\\
\sin(A) &= A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots,
\end{align*}

Others, such as gather and aligned, are occasionally needed. Avoid using the standard environment eqnarray, because it doesn't produce as good results as the amsmath environments, nor is it as versatile. For more details see the article Avoid Eqnarray.

13. Synonyms

This final category concerns synonyms and is a matter of personal preference.
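For illustration, an entry following the key convention described in Section 6 might look like the following (a made-up example of mine, not from the article; all bibliographic details are placeholders):

@article{smjo10,
  author  = {Smith, John and Jones, Jane},
  title   = {A Placeholder Title},
  journal = {SIAM J. Matrix Anal. Appl.},
  volume  = {31},
  number  = {3},
  pages   = {1--10},
  year    = {2010}
}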
I prefer \ge and \le to the equivalent \geq and \leq (why type the extra characters?). I also prefer to use $..$ for math mode instead of \(..\) and $$..$$ for display math mode instead of \[..\]. My preferences are the original TeX syntax, while the alternatives were introduced by LaTeX. The slashed forms are obviously easier to parse, but this is one case where I prefer to stick with tradition. If dollar signs are good enough for Don Knuth, they are good enough for me! I don't think many people use LaTeX's verbose \begin{math}..\end{math} or \begin{displaymath}..\end{displaymath}. Also note that \begin{equation*}..\end{equation*} (for unnumbered equations) exists in the amsmath package but not in LaTeX itself.
The Khan Academy is a non-profit educational organization created in 2006 by Salman Khan. With the stated mission of "providing a high quality education to anyone, anywhere", the website supplies a free online collection of micro-lectures via video tutorials st... Tags: mathematics, video, digital resources, non-profit educational organization

As per an article on the wikispaces blog, you can insert complex math formulas in wikispaces. Simply enclose the expression between [[math]] tags: [[math]]\tilde{f}(\omega)=\int_{-\infty}^{\infty} f(t) e^{-i\omega t}\,dt[[math]] Rendering when embedded in wikispaces math tags. You can r...

The "Other widget" option in wikispaces allows you to embed any type of custom code. Among other things, this lets you use powerful JavaScript libraries. For instance, the impressive jsxGraph library, a powerful cross-browser library for interactive geometry, function plotting, charting, an...

This post attempts to capture what was shared in the webinar. It's worth watching! You can access a recording of the LIVE webinar here>>> Have you considered Flipping your Maths program? A Flipped Maths program rethinks how and when maths instruction can take place, thus poten...
Published on Saturday, 29 December 2018, by sebastien.popoff

\( \def\ket#1{{\left|{#1}\right\rangle}} \def\bra#1{{\left\langle{#1}\right|}} \def\braket#1#2{{\left\langle{#1}|{#2}\right\rangle}} \)

[tutorial] Numerical Estimation of Multimode Fiber Modes and Propagation Constants: Part 2: Bent Fibers

We saw in the first part of the tutorial that the profiles and the propagation constants of the propagation modes of a straight multimode fiber can easily be evaluated for an arbitrary index profile by inverting a large but sparse matrix. Under some approximations [1], a portion of fiber with a fixed radius of curvature leads to a similar problem that can be solved with the same numerical tools, as we illustrate with the PyMMF Python module [2]. Moreover, when the modes are known for the straight fiber, the modes for a fixed radius can be approximated by diagonalizing a square matrix whose size is the number of propagating modes [1]. This allows fast computation of the modes for different radii of curvature.

Effect of bending

Introduction

We base the following on the results of Ref. [1]. In this study, the authors state that even when the bending radius is very large, the changes to the effective index of the fiber are not that small, leading perturbation theory to provide inaccurate results. Let's consider a short portion of fiber centered at the origin of the coordinate system with a constant bend radius \(\rho\) (see Figure 1). The center of curvature is located at the position \((\rho,0,0)\).

Figure 1. Geometry of a bent segment of a multimode fiber.

The curvature will affect propagation through three effects:

Geometrical effect: the bending forces guided modes to follow the curvature to keep the light inside the fiber,
Compression effect: the compression and dilatation of the material modify the local refractive index,
Modification of the cross section: the deformation of the fiber modifies the circular shape of the fiber.

We will neglect the last term, as the cross section is unchanged to first order in \(x/\rho\) [1].

Geometrical effect of bending

Due to the bending, the longitudinal wavenumber \(k_z(x)\), i.e. the projection of the wave vector on the propagation axis \(z\), is no longer constant across the fiber cross-section. Let's call \(\beta'\) the value of the wavenumber at the center of the fiber, i.e. for \(x=0\): \(\beta'=k_z(0)\). For the light to stay confined inside the fiber core, for any given propagating mode, the optical field has to stay in phase in any plane orthogonal to the axis of the fiber. This gives:

\begin{equation} k_z(x)\, \theta \, \rho(x) = cst = \beta' \, \theta \, \rho \tag{1} \end{equation}

As the local radius of curvature can be expressed as \(\rho(x) = \rho-x\), we can then express the longitudinal wavenumber as

\begin{equation} k_z(x) = \frac{\beta'}{1-x/\rho} \approx \beta'(1+x/\rho)\tag{2} \end{equation}

The latter approximation holds for a curvature radius large compared to the fiber core radius. In the Helmholtz equation, we can use

\begin{equation} k_z^2(x) \approx \beta'^2\left(1+2\frac{x}{\rho}\right) \label{eq:ksq_bending}\tag{3} \end{equation}

Compression effect of bending

Bending introduces an inhomogeneous deformation of the fiber: the core undergoes compression for \(x>0\) and dilatation for \(x<0\) along the \(y\)-axis. The longitudinal deformation in turn induces a transverse deformation, linked to it by the Poisson coefficient \(\sigma\) of the material.
The components of the deformation tensor read

\begin{align} \epsilon_{xx} &= \epsilon_{zz} = -\sigma \epsilon_{yy}, \tag{4}\\ \epsilon_{yy} &= -\frac{x}{\rho}. \end{align}

The relative change of density can be written

\begin{equation} \frac{\Delta \eta}{\eta} = -\frac{\Delta V}{V} = - \epsilon_{yy} (1-2\sigma), \tag{5} \end{equation}

with \(V\) the volume. The Gladstone–Dale relation approximates \((n-1)/\eta\) by a constant:

\begin{equation} \frac{n'(\mathbf{r})-1}{\eta+\Delta\eta} = \frac{n(\mathbf{r})-1}{\eta}. \tag{6} \end{equation}

We then find

\begin{equation} n'(\mathbf{r})= n(\mathbf{r})+\frac{x}{\rho}(n(\mathbf{r})-1)(1-2\sigma). \tag{7} \end{equation}

We will use the following approximation:

\begin{equation} n'^2(\mathbf{r}) \approx n^2(\mathbf{r})\left(1+2\frac{x}{\rho}\,\frac{n(\mathbf{r})-1}{n(\mathbf{r})}(1-2\sigma)\right). \tag{8} \end{equation}

In the Helmholtz equation, \(n'\) only appears in \(k_0^2n'^2\). As \(\beta'\) in the curved fiber is close to \(k_0 n\) to first order, we can write:

\begin{equation} k_0^2 n'^2(\mathbf{r}) \approx k_0^2 n^2(\mathbf{r})+ 2\beta'^2\,\frac{x}{\rho}\,\frac{n(\mathbf{r})-1}{n(\mathbf{r})}(1-2\sigma). \label{ksq_comp} \tag{9} \end{equation}

We note that the geometrical effect of bending in equation \ref{eq:ksq_bending} and the effect of compression in equation \ref{ksq_comp} both contribute with a similar form but with opposite signs. The effect of compression can thus be included by renormalizing the geometrical effect by a factor \(\xi\) with

\begin{equation} \xi(\mathbf{r}) = 1-\frac{n(\mathbf{r})-1}{n(\mathbf{r})}(1-2\sigma). \tag{10} \end{equation}

Modified Helmholtz equation

We then rewrite the scalar Helmholtz equation as

\begin{equation} \left[\Delta_{\perp} +n^2(\mathbf{r})k_0^2-\beta'^2\left(1+\frac{2x}{\rho}\xi(\mathbf{r})\right)\right] \ket{\psi'} = 0 \tag{11}\label{eq:modified_HE} \end{equation}

with \(\ket{\psi'}\) the field in the curved fiber. We can recast this equation as an eigenvalue problem:

\begin{gather} \mathbf{\hat{A}'}\ket{\psi'} = \beta'^2 \ket{\psi'} \label{eq:EVP_curv} \tag{12}\\ \text{with } \mathbf{\hat{A}'} = \left(1-\frac{2x}{\rho}\xi(\mathbf{r})\right)\left[\Delta_{\perp} +n^2(\mathbf{r})k_0^2\right]. \end{gather}

The elements of the matrix \(\mathbf{A'}\) representing the operator \(\mathbf{\hat{A}'}\) projected onto a discretized space basis read

\begin{equation} \mathbf{A'}_{ij} = \left(1-\frac{2x_i}{\rho}\xi(x_i,y_i)\right)\mathbf{A}_{ij}, \tag{13} \end{equation}

with \(\mathbf{A}\) the matrix representing the operator for the straight fiber. We can then solve this equation numerically as we did for the unperturbed fiber. This eigenvalue problem is very similar to the one for the straight fiber: the only difference is the factor \(1/\left(1+2x\xi(\mathbf{r})/\rho\right)\), approximated by \(\left(1-2x\xi(\mathbf{r})/\rho\right)\) in equation \ref{eq:EVP_curv}. This factor does not change the number of non-zero terms in \(\mathbf{A'}\) compared to the case of the straight fiber, leading to a similar computational complexity for the numerical solution. This approach is the one adopted in the PyMMF Python module [2]. Note that this solution requires, for a given waveguide, recalculating the eigenvalues and eigenvectors of a large matrix for each value of the radius. If one wants to compute different radii for the same fiber, a different approach is required to decrease the computational time.
Another approach is to express the field \(\ket{\psi'}\) in the bent waveguide as a linear combination of the straight-waveguide modes and then solve an eigenvalue problem to find the coefficients, knowing the propagation constants of the straight-fiber modes. As it requires calculating the straight-fiber modes first, this approach is efficient only when one wants to find the modes for different radii of the same fiber. Let's express \(\ket{\psi'}\) in the basis of the straight-waveguide modes:

\begin{equation} \ket{\psi'}=\sum_i^{M'}{c_i\ket{\psi_i}} \label{eq:decomp} \tag{14} \end{equation}

where \(M'\) is the number of modes of the straight fiber we want to express our new modes in.

Important note: The set of guided modes of the straight fiber is not a complete basis of the space. Rigorously, to obtain exact results, one should express the new modes as linear combinations of all the modes of the straight-fiber system, including evanescent modes and non-guided propagating modes (modes that propagate both in the core and the cladding). If we did so, the current method would have no advantage over the previous one in terms of calculation speed. However, because the system is weakly perturbed, modes couple preferentially to neighboring modes. If one takes \(M' = M\), the low-order modes will be correctly represented, as their neighboring modes are propagating modes, but the modes close to cut-off, which overlap with non-guided modes of the straight fiber, will have inaccurate propagation constants. The correct number of modes to take into account depends on the system; the correct way to proceed is to add modes until the solution converges.

By definition, the eigenmodes \(\ket{\psi_i}\), \(i \in [1,N]\), of the straight fiber and their corresponding propagation constants \(\beta_i\) satisfy

\begin{equation} \left[\Delta_{\perp} +n^2(\mathbf{r})k_0^2-\beta_i^2\right] \ket{\psi_i} = 0. \label{eq:beta_straight} \tag{15} \end{equation}

By injecting \ref{eq:decomp} into \ref{eq:modified_HE} and using \ref{eq:beta_straight}, we obtain

\begin{equation} \sum_i^{M'}c_i\left[\beta_i^2-\beta'^2\left(1+\frac{2x}{\rho}\xi(\mathbf{r})\right)\right]\ket{\psi_i} = 0. \tag{16} \end{equation}

We project the previous relation onto the \(j^\text{th}\) mode \(\bra{\psi_j}\) of the straight waveguide:

\begin{equation} \sum_ic_i\beta_i^2\braket{\psi_j}{\psi_i}-\sum_ic_i\beta'^2\braket{\psi_j}{\psi_i}-\frac{2}{\rho}\beta'^2\sum_ic_i\bra{\psi_j}x\xi(\mathbf{r})\ket{\psi_i} = 0 \quad \forall j \in [1,N]. \label{eq:int_psiprime} \tag{17} \end{equation}

We use the orthogonality relation of the normalized guided modes,

\begin{equation} \braket{\psi_j}{\psi_i}=\iint_S \psi_j^*(x,y)\psi_i(x,y)\,dx\, dy = \delta_{ij} \quad \forall j \in [1,N], \tag{18} \end{equation}

to simplify the relation \ref{eq:int_psiprime}:

\begin{equation} c_j\beta_j^2-c_j\beta'^2-\frac{2}{\rho}\beta'^2\sum_ic_i \bra{\psi_j}x\xi(\mathbf{r})\ket{\psi_i}=0 \quad \forall j \in [1,N] \tag{19} \end{equation}

\begin{gather} \frac{1}{\beta_j^2}\left[c_j+\frac{2}{\rho}\sum_i c_i \Gamma_{ji}\right] = \frac{1}{\beta'^2}c_j \quad \forall j \in [1,N] \label{eq:gamma} \tag{20}\\ \text{with}\quad \Gamma_{ji}=\bra{\psi_j}x\,\xi(\mathbf{r})\ket{\psi_i}=\iint_S \psi_j^*(x,y)\,x\,\xi(x,y)\,\psi_i(x,y)\,dx\, dy. \tag{21} \end{gather}

This result was originally demonstrated in this form in [1]. Note that this relation is an eigenvalue problem for \(1/\beta'^2\), which can be solved directly.
To further simplify the resolution of the system, we use the fact that the variation of the propagation constants is small in the weakly guiding approximation (\(\Delta n/n_{max} = 1-n_{min}/n_{max} \ll 1\)), since \(\beta' \in [n_{min}k_0,n_{max}k_0]\). We can then write

\begin{gather} \beta'=\beta_{min}+\delta\beta', \quad \delta \beta' \ll \beta_{min}, \tag{22}\\ \beta_j=\beta_{min}+\delta\beta_j, \quad \delta \beta_j \ll \beta_{min}. \tag{23} \end{gather}

We inject this into \ref{eq:gamma}:

\begin{equation} \frac{1}{\beta_{min}^2} \left[1-2\frac{\delta\beta_j}{\beta_{min}}\right]\left[c_j+\frac{2}{\rho}\sum_i c_i \Gamma_{ji}\right] = \frac{1}{\beta_{min}^2} \left[1-2\frac{\delta\beta'}{\beta_{min}}\right]c_j. \tag{24} \end{equation}

It follows, by neglecting the terms in \(\Gamma_{ji}\,\delta\beta_j\):

\begin{equation} \left[\delta\beta_j-\delta\beta'\right]c_j-\frac{\beta_{min}}{\rho}\sum_i c_i \Gamma_{ji} = 0. \tag{25} \end{equation}

Using \(\delta\beta_j-\delta\beta' = \beta_j-\beta'\), we finally obtain:

\begin{equation} \beta_jc_j - \frac{\beta_{min}}{\rho}\sum_i c_i \Gamma_{ji} = \beta'c_j. \tag{26} \end{equation}

This relation can be written as an eigenvalue problem:

\begin{gather} \mathbf{B}\,\mathbf{c} = \beta' \,\mathbf{c} \label{eq:EVP_B} \tag{27}\\ \text{with } \mathbf{B}_{ij} = \beta_i \delta_{ij}-\frac{\beta_{min}}{\rho}\bra{\psi_i}x\,\xi(\mathbf{r})\ket{\psi_j}. \tag{28} \end{gather}

The relation \ref{eq:EVP_B} can then be solved numerically for any value of the curvature. \(\mathbf{B}\) is an \(M\) by \(M\) matrix; as the number of modes \(M\) is typically much smaller than the number of points \(N^2\) in the discretized transverse plane, solving equation \ref{eq:EVP_B} is much faster than solving equation \ref{eq:EVP_curv}.

Acknowledgment

I want to thank T. Cizmar and T. Tyc for their kind replies to my questions about their papers.

Bibliography
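To make the second approach concrete, here is a minimal NumPy sketch of equations (21) and (27)-(28) (my own illustration with synthetic placeholder modes, not code from PyMMF): build Γ from precomputed straight-fiber modes, assemble B, and diagonalize it for each bend radius.

    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder inputs; a real computation would take these from the
    # straight-fiber mode solver (e.g. PyMMF).
    M, N = 6, 64                        # number of modes, grid size
    x = np.linspace(-1.0, 1.0, N)       # transverse coordinate grid
    X, Y = np.meshgrid(x, x)
    modes = rng.normal(size=(M, N, N))  # stand-in mode profiles psi_i(x, y)
    # Orthonormalize the stand-in modes so the discrete <psi_i|psi_j> = delta_ij.
    flat, _ = np.linalg.qr(modes.reshape(M, -1).T)
    modes = flat.T[:M].reshape(M, N, N)
    betas = np.sort(rng.uniform(5.0, 6.0, size=M))[::-1]  # fake propagation constants
    xi = np.ones((N, N))                # xi(r); ~1 up to the elasto-optic correction

    def bent_modes(rho):
        # Gamma_ji = <psi_j| x * xi |psi_i>, discretized as a plain sum.
        weighted = modes * (X * xi)
        gamma = np.einsum('imn,jmn->ij', modes, weighted)
        B = np.diag(betas) - (betas.min() / rho) * gamma
        vals, vecs = np.linalg.eig(B)   # beta' and the coefficient vectors c
        return vals, vecs

    vals, vecs = bent_modes(rho=1e3)
    print(np.sort(vals.real))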
$\newcommand{\Span}{\operatorname{Span}}$Let $V$ be a vector space such that $\dim V = n$, and let $v_1,\ldots,v_k \in V$ be independent vectors with $1<k\leq n$. Now let $w_1,\ldots,w_r\in$ $\Span\left\{ v_1,\ldots,v_k \right\}$ be independent vectors with $1\leq r <k$. My question is this: How do I find the missing vectors $w_{r+1},\ldots,w_k\in \Span\left\{ v_1,\ldots,v_k \right\}$ so that $\Span\left\{w_1,\ldots,w_r,w_{r+1},\ldots,w_k\right\} = \Span\left\{ v_1,\ldots,v_k \right\}$? Now, I know they exist; I just don't know how to actually find them. For example, let's look at $\mathbb{R}^5$ and $$U=\Span\left\{ \begin{pmatrix} 5\\2\\3\\7\\3 \end{pmatrix},\begin{pmatrix} 2\\4\\4\\8\\1 \end{pmatrix} ,\begin{pmatrix} 3\\4\\7\\6\\1 \end{pmatrix},\begin{pmatrix} 5\\8\\6\\4\\8 \end{pmatrix} \right\}$$ Those are all independent vectors. Now let's take $2$ independent vectors that are linear combinations of them, say $$w_1 = \begin{pmatrix} 6\\2\\11\\15\\-3 \end{pmatrix} \; w_2=\begin{pmatrix} 0\\4\\-2\\12\\1 \end{pmatrix}$$ How can we complete these two vectors to form a basis of $U$? I would really like to understand the general idea here. This is actually a general version of something I need it for, which is the process of finding a Jordan basis for matrices/transformations. Thanks for any help
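One standard recipe (my own sketch of the usual sifting argument, not from the question): start from $w_1,\dots,w_r$ and scan through $v_1,\dots,v_k$, keeping each $v_i$ that increases the rank of the set collected so far. A NumPy illustration with the vectors above, assuming, as stated, that the $w_i$ are independent and lie in $U$:

    import numpy as np

    v = [np.array([5, 2, 3, 7, 3]), np.array([2, 4, 4, 8, 1]),
         np.array([3, 4, 7, 6, 1]), np.array([5, 8, 6, 4, 8])]
    w = [np.array([6, 2, 11, 15, -3]), np.array([0, 4, -2, 12, 1])]

    basis = list(w)
    for vi in v:
        # Keep vi only if it is independent of everything kept so far.
        candidate = np.vstack(basis + [vi])
        if np.linalg.matrix_rank(candidate) > len(basis):
            basis.append(vi)

    print(len(basis))  # 4 = dim U, so {w1, w2} was completed to a basis of U
    for b in basis:
        print(b)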
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful? closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40 Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question. Here's a cute and lovely theorem. There exist two irrational numbers $x,y$ such that $x^y$ is rational. Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$ (Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.) How about the proof that $$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$ I remember being impressed by this identity and the proof can be given in a picture: Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments. Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list. I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction! Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$. Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that $$x+iy = (a+ib)(c+id)$$ Taking the magnitudes of both sides are squaring gives $$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$ I would go for the proof by contradiction of an infinite number of primes, which is fairly simple: Assume that there is a finite number of primes. Let $G$ be the set of allprimes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously notin $G$. Otherwise, noneof its prime factors are in $G$. Conclusion: $G$ is notthe set of allprimes. I think I learned that both in high-school and at 1st year, so it might be a little too simple... By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$ The first player in Hex has a winning strategy. There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy. You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$. 
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."

Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$, and let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{x^{11}-1}{11(x-1)}, $$ which is clearly nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros. But because $p$ and $q$ are polynomials of degree $5$, which is odd, they must each have a real zero. Therefore $r(x)=p(x)q(x)$ has a real zero. A contradiction.

Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks. Proof: Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you remove two tiles with the same color. Thus, it is no longer possible to cover the remaining area. (Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)

One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $(x,y,z)$ with $x^2 + y^2 = z^2$. Dividing by $z^2$ yields $(x/z)^2+(y/z)^2 = 1$, so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $P = (1,1)$. It intersects the circle in the rational point $A = (4/5,3/5)$, yielding the triple $(3,4,5)$. Next reflect the point $A$ into the other quadrants by taking all possible signs of each component, i.e. $(\pm4/5,\pm3/5)$, yielding the inscribed rectangle below. As before, the line through $A_B = (-4/5,-3/5)$ and $P$ intersects the circle in $B = (12/13, 5/13)$, yielding the triple $(12,5,13)$. Similarly the points $A_C, A_D$ yield the triples $(20,21,29)$ and $(8,15,17)$. We can iterate this process with the new points $B, C, D$, doing the same as we did for $A$, obtaining further triples. Iterating this process generates the primitive triples as a ternary tree.

Descent in the tree is given by the formula $$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$ e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.

Ascent in the tree is by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$ $\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$ $\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$ See my MathOverflow post for further discussion, including generalizations and references.

I like the proof that there are infinitely many Pythagorean triples. Theorem: There are infinitely many triples of integers $x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof: $$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$

One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1. Proof: project the disk and the strips onto a hemisphere sitting on top of the disk. The projection of each strip has area at most 1/100th of the area of the hemisphere.

If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other. Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can't all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$'s)$\cdot d$. This is because if $d$ is the largest odd divisor of a number, the rest of its factorization can't include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$'s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.) In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it's easier to follow if you see it for a specific $n$ first.

The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice: Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal. This congruence argument is nicer than that of cutting the triangle up into two right-angled triangles.

Parity of the sine and cosine functions using Euler's formula: $e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$ and $e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$. Hence $\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$, and thus $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$. $\blacksquare$ The proof is actually just the first two lines.

I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years.
He tackled it quicker than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \text{ times}}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$

If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.

Fermat's little theorem, from noting that modulo a prime $p$ we have, for $a\neq 0$: $$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$

Proposition (No universal set): There does not exist a set which contains all sets (even itself). Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set $$C=\{A\in X: A \notin A\}$$ of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction. Edit: Assuming that one is working in ZF (as almost everywhere :P). (In particular, this proof really impressed me the first time I saw it, and it is also very simple.)

Most proofs concerning the Cantor set are simple but amazing. The total length (measure) of the set is zero. It is uncountable. Every number in the set can be represented in ternary using just 0s and 2s. No number with a 1 in its ternary representation appears in the set. The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers, which are dense in every interval. The Menger sponge, which is a 3D extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume.

The derivation of the derivative from first principles is amazing, easy, useful, and simply outstanding in all aspects. I put it here: Suppose we have a quantity $y$ whose value depends upon a single variable $x$ and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as $y=f(x)$. This relationship can be visualized by drawing a graph of the function $y = f(x)$, regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure (a). Consider the point $P$ on the curve $y = f(x)$ whose coordinates are $(x, y)$ and another point $Q$ whose coordinates are $(x + \Delta x, y + \Delta y)$. The slope of the line joining $P$ and $Q$ is given by: $$\tan\theta = \frac{\Delta y}{\Delta x} = \frac{(y + \Delta y) - y}{\Delta x}$$ Suppose now that the point $Q$ moves along the curve towards $P$. In this process, $\Delta y$ and $\Delta x$ decrease and approach zero, though their ratio $\frac{\Delta y}{\Delta x}$ will not necessarily vanish. What happens to the line $PQ$ as $\Delta y\to0$, $\Delta x\to0$? You can see that this line becomes a tangent to the curve at point $P$, as shown in Figure (b). This means that $\tan \theta$ approaches the slope of the tangent at $P$, denoted by $m$: $$m=\lim_{\Delta x\to0} \frac{\Delta y}{\Delta x} = \lim_{\Delta x\to0} \frac{(y+\Delta y)-y}{\Delta x}$$ The limit of the ratio $\Delta y/\Delta x$ as $\Delta x$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$. It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f(x)$ and $y + \Delta y = f(x + \Delta x)$, we can write the definition of the derivative as:
$$\frac{dy}{dx}=\frac{d f(x)}{dx} = \lim_{\Delta x\to 0} \left[\frac{f(x+\Delta x)-f(x)}{\Delta x}\right],$$
which is the required formula.

This proof that $n^{1/n} \to 1$ as integer $n \to \infty$: By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2}$. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2}$. Since also $n^{1/n} \ge 1$, the squeeze theorem gives $n^{1/n} \to 1$.

Can a chess knight, starting at one corner of the board, move so as to touch every square exactly once, ending at the opposite corner? The solution turns out to be childishly simple. Every time the knight moves (up two, over one), it hops from a black square to a white square, or vice versa. Assuming the knight starts on a black corner of the board, it will need to touch 63 other squares: 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making the tour impossible.

The eigenvalues of a skew-Hermitian matrix are purely imaginary. The eigenvalue equation is $A\vec x = \lambda\vec x$, and forming the inner product with $\vec x$ gives
$$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2,$$
and since $\|\vec x\|^2 > 0$, we can cancel it from both sides, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second-to-last step uses the definition of skew-Hermitian ($A^{T*} = -A$). Using the definition of Hermitian or unitary matrices instead yields the corresponding statements about their eigenvalues. (A quick numerical spot-check appears below.)

I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way, but in another way it is kind of deep.
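Here is the promised computational check of the Pythagorean-triple facts above (a minimal Python sketch of my own; the function and variable names are not from the original answers). It verifies the parametrization $(2ab, a^2-b^2, a^2+b^2)$ on a range of inputs and reproduces the first ascent step of the tree:

    # Verify the parametrization (2ab, a^2 - b^2, a^2 + b^2) gives triples.
    def is_triple(x, y, z):
        return x * x + y * y == z * z

    for a in range(2, 20):
        for b in range(1, a):
            assert is_triple(2 * a * b, a * a - b * b, a * a + b * b)

    # One ascent step from (-3, 4, 5), as in the first example above.
    x, y, z = -3, 4, 5
    s = x + y - z
    print(tuple(v - 2 * s for v in (x, y, z)))   # -> (5, 12, 13)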
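The largest-odd-divisor argument is also easy to watch in action. The sketch below (mine, not from the original answer) picks 51 random integers from 1..100 and exhibits the promised pair; by the pigeonhole argument, the loop always finds one:

    import random

    def largest_odd_divisor(n):
        while n % 2 == 0:
            n //= 2
        return n

    picks = random.sample(range(1, 101), 51)
    seen = {}                      # largest odd divisor -> first number seen
    for n in picks:
        d = largest_odd_divisor(n)
        if d in seen:
            a, b = sorted((seen[d], n))
            print(f"{b} is a multiple of {a}: {b} = {b // a} * {a}")
            break
        seen[d] = n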
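And the numerical spot-check of the skew-Hermitian eigenvalue claim (a minimal numpy sketch; the random-matrix construction is my own): build $A = B - B^{T*}$, which satisfies $A^{T*} = -A$, and confirm that its eigenvalues have numerically zero real part.

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A = B - B.conj().T                   # skew-Hermitian: A^{T*} = -A
    eigvals = np.linalg.eigvals(A)
    print(np.max(np.abs(eigvals.real)))  # ~ 1e-15, i.e. purely imaginary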
As JDługosz wrote, what will cause problems in the scenario you describe isn't so much your orbit as the fact that you are within the gas giant's atmosphere. I'm going to use Jupiter here so that I have a specific gas giant for examples. Feel free to look up the relevant data for any other gas giant, or come up with your own.

For the case we are interested in — a small mass orbiting a much larger mass, where the radius of the orbit is equal to the larger body's radius (just dipping your toes into the Jovian atmosphere) — orbital speed can be approximated as $$ v_o \approx \frac{v_e}{\sqrt{2}} $$

The escape velocity of Jupiter is approximately 59.5 km/s, so to dip our toes into the atmosphere we get an orbital velocity of approximately $$ v_o \approx \frac{59~500~\text{m/s}}{\sqrt{2}} \approx 42~100~\text{m/s} $$

To give an idea of how freakishly fast this is: it's equivalent to approximately 152,000 km/h or 94,200 miles per hour, and it would get you between the Earth and the Moon in about 2.5 hours. In mid-1976, an airplane managed to get to 3,530 km/h, which is about 1/43 of the orbital speed at the edge of Jupiter's atmosphere. The best we have managed on anything resembling a repeat basis is around 2,500 km/h, or 1/60 of what you would need.

For comparison, Jupiter's wind speeds peak in excess of 150 m/s. While quite a stiff gale, that's nowhere near orbital velocity; by the above estimate, about 1/280 (and that's assuming that top wind speeds occur in the uppermost layers of the atmosphere, which might not be the case). With such a large difference between orbital speeds and wind speeds, we can largely ignore wind speeds for the purposes of this question; even in a perfect situation, wind speed will contribute less than 0.36% of the required velocity. (Interestingly enough, according to the same source, Jupiter's wind speeds have a peak very near the equator, which works well for us.)

Given that Jupiter has an equatorial diameter of 142,984 km and that the circumference of a circle is $\pi d$, 42.1 km/s gives an orbital period (if you can call it that) of $\frac{142984 \pi}{42.1} \approx 10~700~\text{seconds}$, or just under three hours. By contrast, Wikipedia gives Jupiter's sidereal rotation period ("day") as 9.925 hours (a shade over 9 hours 55 minutes).

For further comparison, to get into a reasonably stable low Earth orbit you need a velocity of approximately 7.8 km/s (corresponding to an orbital period of about 90 minutes). To go to the Moon (which requires getting pretty close to escape velocity), you need about 10.5 km/s relative to the Earth; actual Earth escape velocity is 11,186 m/s. Compare Apollo by the numbers: Translunar Injection, looking particularly at the Earth Fixed velocity figures for the various lunar missions.

Let's say you can somehow handwave the issue of absolute speed away. (After all, you got there somehow, and that already takes quite a bit of speed.) Let's also say that your craft is a very, very long, perfect cylinder with a forward cross section of 1 square meter, built to handle constant hurricane-level wind speeds. Every second, you are moving through 42,100 meters of atmosphere. That means that every second, your craft will need to push aside 42,100 cubic meters of atmospheric gases while maintaining its speed (at least if you plan on staying at that altitude). Wikipedia gives the composition of Jupiter's atmosphere as approximately $89.8 \pm 2.0 \% ~\text{H}_2$ and $10.2 \pm 2.0 \% ~\text{He}$.
Despite the fact that these two gases are among the lightest known, and that the density is still going to be low at the altitude we are talking about, pushing aside over 40,000 cubic meters of gas per second is going to cause massive drag. And that, my friend, is what will cause your craft to heat up, lose speed very quickly, and eventually descend into the atmosphere, ruining your day.
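To put rough numbers on this, here is a minimal back-of-the-envelope sketch in Python (my own; the atmospheric density of roughly 0.16 kg/m³ near the 1-bar level and the drag coefficient of 1 are assumptions, not figures from the answer above):

    from math import pi, sqrt

    v_esc = 59_500.0        # Jupiter escape velocity, m/s
    d_eq  = 142_984e3       # equatorial diameter, m
    rho   = 0.16            # assumed density near the 1-bar level, kg/m^3
    area  = 1.0             # forward cross-section, m^2
    c_d   = 1.0             # assumed drag coefficient for a blunt cylinder

    v_orb  = v_esc / sqrt(2)                       # ~42,100 m/s
    period = pi * d_eq / v_orb                     # ~10,700 s
    drag_power = 0.5 * rho * c_d * area * v_orb**3

    print(f"orbital speed  ~ {v_orb:,.0f} m/s")
    print(f"orbital period ~ {period:,.0f} s")
    print(f"drag power     ~ {drag_power:.1e} W")  # on the order of terawatts

Under these assumptions the craft would need terawatts of thrust power per square meter of cross section just to hold its speed, which makes the "ruining your day" conclusion vivid.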
I have been looking at the Schwarzschild metric, presented to me in lectures as
$$ds^2=-\frac{\textrm{d}r^2}{1+\frac{\gamma}{r}}-r^2\textrm{d}\theta^2-r^2\sin^2\theta\,\textrm{d}\phi^2+c^2\left(1+\frac{\gamma}{r}\right)\textrm{d}t^2,$$
where $\gamma=-\frac{2GM}{c^2}$. For circular motion the radius can be taken as constant, and $\theta=\frac{\pi}{2}$ can be set without loss of generality. From this metric, taking the constant variables into consideration, the Lagrangian is
$$L= -\frac{\dot{r}^2}{1+\gamma/r}-r^2\dot{\phi}^2+c^2\left(1+\gamma/r\right)\dot{t}^2.$$
The Euler-Lagrange equations are
$$\frac{\textrm{d}}{\textrm{d}s}\left[\frac{\partial L}{\partial \dot{x}^{\mu}}\right]-\frac{\partial L}{\partial x^{\mu}}=0.$$
My question arises here: up until this point, whenever a variable has been given as a constant, I have been able to eliminate the corresponding part of the metric, since the derivative of a constant is 0. From inspecting the Lagrangian I can see that this has been done in the case of $\theta$, but even though $r$ has been stated to be constant, it is kept in the statement of the Lagrangian. Why is this?

Through further manipulations, considering the Euler-Lagrange equation for $\mu=1$, it is possible to get
$$r\dot{\phi}^2=\frac{GM}{r^2}\dot{t}^2,$$
whereby a statement of Kepler's law is obtained:
$$\Omega^2=\left(\frac{\textrm{d}\phi}{\textrm{d}t}\right)^2=\frac{GM}{r^3}.$$
I am having trouble understanding how this is valid, given that it seems to rely on $\dot{r}$ being kept in the Lagrangian even though $r$ is stated to be constant, and therefore $\dot{r}$ should be 0.
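For what it's worth, the algebra in the question can be checked symbolically. The sketch below is mine (sympy, with $\dot r$, $\dot\phi$, $\dot t$ treated as independent symbols); it computes $\partial L/\partial r$ before imposing $\dot r = 0$ and recovers $\Omega^2 = GM/r^3$:

    import sympy as sp

    r, rdot, phidot, tdot, c, G, M = sp.symbols(
        'r rdot phidot tdot c G M', positive=True)
    gamma = -2 * G * M / c**2

    # Lagrangian from the question (dots = derivatives w.r.t. proper time s)
    L = -rdot**2 / (1 + gamma / r) - r**2 * phidot**2 \
        + c**2 * (1 + gamma / r) * tdot**2

    # r-equation: d/ds(dL/drdot) - dL/dr = 0. On a circular orbit the first
    # term vanishes, but dL/dr must be computed BEFORE setting rdot = 0.
    dL_dr = sp.diff(L, r).subs(rdot, 0)

    Omega2 = sp.solve(sp.Eq(dL_dr, 0), phidot**2)[0] / tdot**2
    print(sp.simplify(Omega2))   # G*M/r**3, i.e. Kepler's third law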
Goal: sampling from a discrete distribution parametrized by unnormalized log-probabilities \(x_1, \dots, x_K\), i.e.

\( \pi_k = \frac{\exp(x_k)}{\sum_{j=1}^K \exp(x_j)}. \)

The usual way: exponentiate and normalize (using the exp-normalize trick), then use an algorithm for sampling from a discrete distribution (aka categorical):

    from numpy import exp
    from numpy.random import uniform

    def usual(x):
        cdf = exp(x - x.max()).cumsum()   # the exp-normalize trick
        z = cdf[-1]                       # normalization constant
        u = uniform(0, 1)
        return cdf.searchsorted(u * z)

The Gumbel-max trick:

\( y = \underset{k \in \{1,\dots,K\}}{\operatorname{argmax}}\ (x_k + z_k), \)

where \(z_1, \dots, z_K\) are i.i.d. \(\text{Gumbel}(0,1)\) random variates. It turns out that \(y\) is distributed according to \(\pi\). (See the short derivations in this blog post.) Implementing the Gumbel-max trick is remarkably easy:

    from numpy.random import gumbel

    def gumbel_max_sample(x):
        z = gumbel(loc=0, scale=1, size=x.shape)
        return (x + z).argmax()           # index of the sampled outcome

If you don't have access to a Gumbel random variate generator, you can use \(-\log(-\log(\text{Uniform}(0,1)))\).

Comparison:

Number of calls to the random number generator: Gumbel-max requires \(K\) samples from a uniform, whereas the usual algorithm only requires \(1\).

Gumbel-max is a one-pass algorithm: it does not need to see all of the data (e.g., to normalize) before it can start sampling. Thus, Gumbel-max can be used for weighted sampling from a stream.

Low-level efficiency: the Gumbel-max trick requires \(2K\) calls to \(\log\), whereas the usual approach requires \(K\) calls to \(\exp\). Since \(\exp\) and \(\log\) are expensive functions, we'd like to avoid calling them. What gives? Well, Gumbel's calls to \(\log\) do not depend on the data, so they can be precomputed; this is handy for implementations which rely on vectorization for efficiency, e.g., python+numpy.

Further reading: I have a few posts relating to the Gumbel-max trick. Have a look at posts tagged with Gumbel.
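A quick empirical check (my own snippet, using numpy and the `usual` function defined above) that both samplers target the same distribution: draw many samples each way and compare the empirical frequencies to the exact softmax probabilities.

    import numpy as np
    from numpy import exp
    from numpy.random import gumbel

    x = np.array([1.0, 2.0, 3.0])              # unnormalized log-probabilities
    p = exp(x - x.max()); p /= p.sum()         # exact probabilities

    n = 100_000
    usual_counts  = np.bincount([usual(x) for _ in range(n)], minlength=x.size)
    gumbel_counts = np.bincount(
        (x + gumbel(size=(n, x.size))).argmax(axis=1), minlength=x.size)

    print(p)                    # e.g. [0.090 0.245 0.665]
    print(usual_counts / n)     # ~ p
    print(gumbel_counts / n)    # ~ p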