WHY?
Batch normalization is known as a good method to stabilize the optimization of neural networks by reducing internal covariate shift. However, batch normalization inherently depends on the minibatch, which impedes its use in recurrent models.
WHAT?
The core idea of weight normalization is to reparameterize the weight vector by decomposing it into a scale parameter and a direction parameter, and then perform gradient descent with respect to each parameter. Weight normalization is applied per neuron unit.
$$y = \phi(\mathbf{w}\cdot \mathbf{x} + b), \qquad \mathbf{w} = \frac{g}{\|\mathbf{v}\|}\mathbf{v}$$ The gradient with respect to each parameter can be obtained with a minor modification of the usual gradient. $M_{\mathbf{w}}$ is a projection matrix that projects onto the complement of the $\mathbf{w}$ vector.
$$\nabla_g L = \frac{\nabla_{\mathbf{w}} L\cdot \mathbf{v}}{\|\mathbf{v}\|}, \qquad \nabla_{\mathbf{v}} L = \frac{g}{\|\mathbf{v}\|}\nabla_{\mathbf{w}}L - \frac{g\nabla_g L}{\|\mathbf{v}\|^2}\mathbf{v} = \frac{g}{\|\mathbf{v}\|} M_{\mathbf{w}}\nabla_{\mathbf{w}}L, \qquad M_{\mathbf{w}} = I - \frac{\mathbf{w}\mathbf{w}'}{\|\mathbf{w}\|^2}$$ We can see that weight normalization scales the gradient and projects it away from the current weight vector. These effects bring the covariance matrix of the gradient closer to the identity. Unlike batch normalization, weight normalization does not directly scale the features, so proper initialization of the parameters is needed. The vector $\mathbf{v}$ is sampled from a Gaussian with mean zero and standard deviation 0.05; $g$ and $b$ are then initialized using a single minibatch of data.
$$t = \frac{\mathbf{v}\cdot\mathbf{x}}{\|\mathbf{v}\|}, \qquad g \leftarrow \frac{1}{\sigma[t]}, \qquad b \leftarrow \frac{-\mu[t]}{\sigma[t]}$$ Mean-only batch normalization can additionally be used to control the mean of the output:
$$t = \mathbf{w}\cdot \mathbf{x}, \qquad \tilde{t} = t - \mu[t] + b, \qquad y = \phi(\tilde{t})$$
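The reparameterization and data-dependent initialization can be sketched in NumPy (a minimal single-layer sketch of my own, not the authors' code; the 0.05 standard deviation and the init rules follow the formulas above, and the activation is taken to be the identity for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear layer y = phi(w.x + b) with w = (g/||v||) v, one (v, g, b) per output unit.
d_in, d_out = 64, 32
v = rng.normal(0.0, 0.05, size=(d_in, d_out))   # v ~ N(0, 0.05^2)

# Data-dependent initialization from a single minibatch x:
#   t = (v.x)/||v||,  g <- 1/sigma[t],  b <- -mu[t]/sigma[t]
x = rng.normal(size=(256, d_in))
t = (x @ v) / np.linalg.norm(v, axis=0)
g = 1.0 / t.std(axis=0)
b = -t.mean(axis=0) / t.std(axis=0)

def forward(x):
    """Weight-normalized forward pass."""
    w = (g / np.linalg.norm(v, axis=0)) * v     # w = (g/||v||) v, column-wise
    return x @ w + b

def forward_mean_only_bn(x):
    """Mean-only batch norm on top of WN: t = w.x, t_tilde = t - mu[t] + b."""
    w = (g / np.linalg.norm(v, axis=0)) * v
    t = x @ w
    return t - t.mean(axis=0) + b

# After initialization, each unit's pre-activation has ~zero mean and unit variance.
y = forward(x)
print(float(y.mean()), float(y.std()))
```

On the initialization minibatch this reduces to $y = (t - \mu[t])/\sigma[t]$ per unit, which is why the printed mean and standard deviation come out near 0 and 1.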
So?
WN + mean-only BN was shown to improve performance in supervised classification (CIFAR-10), generative modelling (convolutional VAE, DRAW), and reinforcement learning (DQN).
Critic
Implementing both WN and mean-only BN sounds a little cumbersome… I'm not sure they are worth it.
This is somewhat out of date for me. I have now purchased GrindEQ Word->LaTeX, which is highly idiosyncratic, and have written Word macros to greatly improve it. See XXXXX. LaTeX is a markup language used to display equations on websites such as Physics Forums, the Desmos grapher, the Symbolab equation solver, and this Blogger blog with MathJax. LaTeX is described on Physics Forums here and more completely by Mark Gates here. I also have some LaTeX tips here (MS-Word). I started by using something from Codecogs, but it was not good at displaying inline equations (##\mu=0##) or adjusting the font size of indexed variables ( ##{\partial }_{\mu }y^{\alpha }## ).
GrindEQ can save a Word document containing equations as a .tex plain text file. This contains some non-LaTeX (for the MS-Word text) and bits of delimited LaTeX, so it is fairly easy to copy and paste the LaTeX to other websites. The only difference is in the characters that delimit the LaTeX. These are summarised below.
| Format | Delimiter for | Delimiter |
|---|---|---|
| PF (website) | New line | $$ |
| PF (website) | In line | ## |
| .tex (file from GrindEQ) | New line | \[ |
| .tex (file from GrindEQ) | In line | $ |
| MathJax | New line | $$ |
| MathJax | In line | $ |
| Symbolab | — | none |
| Desmos | — | none |
Here's something I wrote and pasted the RHS into Desmos and Symbolab.
$$ x = \frac{t\left(\sin \left(l\right)-1\right)}{\cos \left(l\right)} $$
You can copy and paste the Latex of the RHS directly into Desmos and Symbolab equations. It is
\frac{t\left(\sin \left(l\right)-1\right)}{\cos \left(l\right)}
Then I created a test Word document pictured below
Image from MS-word
So the pullback operator is (##\mu## is row index, ##\alpha## is column)
$$ {\partial }_{\mu }y^{\alpha }=\left( \begin{array}{cc} {\mathrm{cos} \theta \ } & {\mathrm {sin} \theta \ } \\ -r{\mathrm{sin} \theta \ } & r{\mathrm{cos} \theta \ } \end{array} \right) $$
$${\partial }_{\mu }y^{\alpha }\equiv \frac{\partial y^{\alpha }}{\partial x^{\mu }} =\left( \begin{array}{cc} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \\ \ & \ \\ \frac{\partial x}{\partial \theta } & \frac{\partial y}{\partial \theta } \end{array} \right)=\left( \begin{array}{cc} {\mathrm{cos} \theta \ } & {\mathrm{sin} \theta \ } \\ -r{\mathrm{sin} \theta \ } & r{\mathrm{cos} \theta \ } \end{array} \right)$$
or, on the final equation, using \Large, \large and \small (\normal does not work; the standard command is \normalsize)
##{\partial }_{\mu }y^{\alpha } \equiv \Large \frac{\partial y^{\alpha }}{\partial x^{\mu }} \Large = \left( \begin{array}{cc} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} \\ \ & \ \\ \frac{\partial x}{\partial \theta } & \frac{\partial y}{\partial \theta } \end{array} \right) \small= \left( \begin{array}{cc} {\mathrm{cos} \theta \ } & {\mathrm{sin} \theta \ } \\ -r{\mathrm{sin} \theta \ } & r{\mathrm{cos} \theta \ } \end{array} \right)##
Sadly I only get ten free uses of GrindEQ and then I have to pay 50€.
Installing the mathjax code to show equations in Blogger is simple and instructions are contained here. However the page freezes shortly after opening and the Blogger instructions are out of date, so the vital parts are worth repeating:
To enable MathJax, just drop in the following code snippet after the header (<head>) in the Blogger template (Theme → Edit HTML → Edit Template). The £ sign should be replaced with the dollar currency sign. It is hard to get that in the text here! The unadulterated code snippet is also in this text file.
<!--Script to enable Latex from http://web.archive.org/web/20110412103745/http://mnnttl.blogspot.com/2011/02/latex-on-blogger.html -->
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js">
MathJax.Hub.Config({
extensions: ["tex2jax.js","TeX/AMSmath.js","TeX/AMSsymbols.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['£','£'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
No doubt one could use different delimiters by changing the inlineMath and displayMath lines if one wanted.
Hi, I'm doing question 2/II/32D at the top of page 68 here (http://www.maths.cam.ac.uk/undergrad/pastpapers/2005/Part_2/list_II.pdf [Broken]). I have done everything except for the last sentence of the question.
This is what I have attempted so far:
[tex]|\chi\rangle=|\uparrow\rangle=\left( \begin{array}{c}
1 \\
0 \end{array} \right)[/tex]
Then [tex]U|\chi\rangle=\left( \begin{array}{c}
\cos(\theta /2) \\
0 \end{array} \right)-(\boldsymbol{n}\cdot\boldsymbol{\sigma})\left( \begin{array}{c}
i\sin(\theta /2) \\
0 \end{array} \right)[/tex]
Now I need to choose n. If I want the spin-up state measured along the direction $(\sin\theta, 0, \cos\theta)$, am I correct in thinking I need the eigenvector corresponding to the +1 eigenvalue of this matrix?:
[tex]\sigma_1 \sin\theta+\sigma_3 \cos\theta=\left( \begin{array}{cc}
\cos\theta & \sin\theta \\
\sin\theta & -\cos\theta \end{array} \right)[/tex]
In which case the desired state is [tex]U|\chi\rangle=\left( \begin{array}{c}
\sin\theta \\
1-\cos\theta \end{array} \right)[/tex]
But I don't think it's possible to choose n such that this is the case, so where have I gone wrong? Also do I need to worry about any [tex]\hbar /2[/tex] since [tex]\boldsymbol{S}=\frac{\hbar}{2}\boldsymbol{\sigma}[/tex]?
Thanks.
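(A numerical aside of my own, not part of the original thread: the attempted state is in fact a +1 eigenvector of the matrix written above, since $(\sin\theta, 1-\cos\theta) = 2\sin(\theta/2)\,(\cos(\theta/2), \sin(\theta/2))$. A quick NumPy check:)

```python
import numpy as np

theta = 0.7  # arbitrary angle

# Pauli matrices sigma_1 and sigma_3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The matrix from the post: sigma_1 sin(theta) + sigma_3 cos(theta)
M = s1 * np.sin(theta) + s3 * np.cos(theta)

# The attempted (unnormalised) state (sin(theta), 1 - cos(theta))
chi = np.array([np.sin(theta), 1 - np.cos(theta)], dtype=complex)

# Check it is an eigenvector of M with eigenvalue +1
print(np.allclose(M @ chi, chi))  # True

# It equals 2 sin(theta/2) * (cos(theta/2), sin(theta/2)),
# i.e. the normalised +1 eigenvector up to scale
chi_norm = np.array([np.cos(theta / 2), np.sin(theta / 2)])
print(np.allclose(chi / (2 * np.sin(theta / 2)), chi_norm))  # True
```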
I want to know exactly how derived functor cohomology and Cech cohomology can fail to be the same.
I started worrying about this from this answer to an MO question, and Brian Conrad's comments to another MO question. Let $\mathcal{F}$ be a sheaf of abelian groups on a space X. (Here I want to be a little vague about what a "space" means. I'm thinking of either a scheme or a topological space). Then Cech cohomology of X with respect to a cover $U \to X$ can be defined as cohomology of the complex
$$ \mathcal{F}(U) \to \mathcal{F}(U^{[2]}) \to \mathcal{F}(U^{[3]}) \to \cdots $$
where $U^{[ n ]} = U \times_X U \times_X \cdots \times_X U$ ($n$ factors). The total Cech cohomology of $X$, $\check H^{ * }(X, \mathcal{F})$, is then given by taking the colimit over all covers $U$ of $X$. Now if the following condition is satisfied:
Condition 1: For sufficiently many covers $U$, the sheaf $\mathcal{F}|_{U^{[ n ]}}$ is an acyclic sheaf for each n
then this cohomology will agree with the derived functor version of sheaf cohomology. We have,
$$\check H^{ * }(X, \mathcal{F}) \cong H^*(X; \mathcal{F}).$$
I am told, however, that even if $\mathcal{F}|_U$ is acyclic this doesn't imply that it is acyclic on the intersections. It is still okay if this condition fails for some covers as long as it is satisfied for enough covers. However I am also told that there are spaces for which there is
no cover satisfying condition 1.
Instead you can replace your covers by hypercovers. Basically this is an augmented simplicial object $$V_\bullet \to X$$ which you use instead of the simplicial object $U^{[ \bullet +1 ]} \to X$. There are some conditions which a simplicial object must satisfy in order to be a hypercover, but I don't want to get into it here. You can then define cohomology with respect to a hypercover analogously to Cech cohomology with respect to a cover, and then take a colimit. This seems to always reproduce derived functor sheaf cohomology.
So my question is
when is this really necessary?
Question 1: What is the easiest example of a scheme and a sheaf of abelian groups (specifically representable ones such as $\mathbb{G}_m$) for which Cech cohomology of that sheaf and derived functor cohomology disagree?
Question 2: What is the easiest example of a (Hausdorff) topological space and a reasonable sheaf for which Cech cohomology and derived functor cohomology disagree?
I also want to be a little flexible about what a "cover" is supposed to be. I definitely want to allow interesting Grothendieck topologies, and would be interested in knowing if passing to a different Grothendieck topology changes the answer. It changes both the notion of sheaf and the notion of Cech cohomology, so I don't really know what to expect.
Also, I edited question 1 slightly from the original version, which just asked about quasi-coherent sheaves. Brian Conrad kindly pointed out to me that for any quasi-coherent sheaf the Cech cohomology and the sheaf cohomology will agree (at least with reasonable assumptions on our scheme, like quasi-compact quasi-separated?) and that the really interesting case is for more general sheaves of groups.
WHY?
There has been little study of representation learning that focuses on clustering.
WHAT?
Deep Embedding Clustering (DEC) consists of two phases: (1) parameter initialization with a deep autoencoder and (2) parameter optimization. The paper first describes the second phase. Assuming the encoder and initial cluster centroids are given, two steps are alternated to improve clustering: 1) compute a soft assignment of embedded points to centroids, and 2) update the encoder and refine the centroids.
Student’s t-distribution is used as the kernel for the soft assignment. An auxiliary distribution $p$ is designed to help learn from high-confidence assignments. The KL divergence between the soft assignment and the auxiliary distribution is minimized by updating the encoder and centroids.
$$q_{ij} = \frac{(1+\|z_i - \mu_j\|^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{j'}(1+\|z_i - \mu_{j'}\|^2/\alpha)^{-\frac{\alpha+1}{2}}}, \qquad p_{ij} = \frac{q_{ij}^2/f_j}{\sum_{j'}q_{ij'}^2/f_{j'}}, \quad f_j = \sum_i q_{ij}$$ $$L = KL(P\|Q) = \sum_i \sum_j p_{ij} \log \frac{p_{ij}}{q_{ij}}$$ A stacked autoencoder (SAE), trained with a denoising autoencoder for each layer, is used to initialize the embeddings. Initial centroids are determined by k-means clustering of the initial embeddings.
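The soft assignment, the auxiliary target, and the loss can be sketched in NumPy (a toy sketch of my own with made-up embeddings and centroids, using $\alpha = 1$; in DEC these would come from the encoder and the k-means init):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0                       # degrees of freedom of the Student's t kernel

# Toy embedded points and centroids (stand-ins for encoder outputs and k-means init)
z = rng.normal(size=(100, 10))    # z_i: embedded points
mu = rng.normal(size=(5, 10))     # mu_j: cluster centroids

# Soft assignment q_ij: Student's t kernel, normalised over clusters
d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)   # ||z_i - mu_j||^2
q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
q /= q.sum(axis=1, keepdims=True)

# Auxiliary target p_ij: square q (sharpen) and normalise by soft cluster size f_j
f = q.sum(axis=0)                 # f_j = sum_i q_ij
p = (q ** 2) / f
p /= p.sum(axis=1, keepdims=True)

# Loss L = KL(P || Q), minimised w.r.t. the encoder parameters and centroids
kl = float((p * np.log(p / q)).sum())
print(kl)
```

Each row of `q` and `p` is a distribution over clusters, so the summed row-wise KL is non-negative.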
So?
DEC outperformed many clustering methods, including k-means, LDMGI, and SEC, on unsupervised clustering of MNIST, STL-10, and Reuters. Visualization verified that the embeddings are well separated.
Critic
I like the idea of learning the representation jointly with clustering, but the auxiliary target distribution seems quite arbitrary to me.
Let
$$f(x) = \begin{cases} \frac{1}{2}, & \text{if $\lvert x \rvert \le 1$} \\ 0, & \text{otherwise} \end{cases}$$
I want to calculate the convolution of $f$ with itself.
I am given the following formula:
$$f*g=\int_{-\infty}^{+\infty} f(y)g(x-y) dy$$
so $f*f=\int_{-\infty}^{+\infty} f(y)f(x-y)\, dy$. How do I evaluate this integral?
While doing some research online I found that one can calculate the convolution by using the Fourier transform: $$\mathcal F(f(x)f(x))=\frac{1}{\sqrt{2 \pi}} \hat{f}(k) *\hat{f}(k)$$
The problem with using this method is that I don't know how to multiply a piecewise function with itself. Would it just be:
$$f(x) = \begin{cases} \color{red}{\frac{1}{4}}, & \text{if $\lvert x \rvert \le 1$} \\ 0, & \text{otherwise} \end{cases}$$
or am I doing something wrong here?
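(An aside of my own, not part of the original question: the integral can be checked numerically. Since $f(y)f(x-y) = 1/4$ exactly where the intervals $[-1,1]$ and $[x-1,x+1]$ overlap, $(f*f)(x) = (2-|x|)/4$ for $|x| \le 2$ and $0$ otherwise, a triangle with peak $1/2$. A Riemann-sum sketch:)

```python
import numpy as np

# f(x) = 1/2 for |x| <= 1 and 0 otherwise, as defined above
def f(x):
    return np.where(np.abs(x) <= 1, 0.5, 0.0)

# Approximate (f*f)(x) = \int f(y) f(x-y) dy by a Riemann sum
y = np.linspace(-3.0, 3.0, 60001)
dy = y[1] - y[0]

def conv_ff(x):
    return float((f(y) * f(x - y)).sum() * dy)

# Compare against the overlap-of-intervals closed form (2 - |x|)/4 on |x| <= 2
for x in [0.0, 0.5, 1.0, 1.5, 2.5]:
    print(x, conv_ff(x), max(2 - abs(x), 0) / 4)
```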
In the previous article, we discussed various methods for solving a wide variety of recurrence relations. In this article, we talk about two methods for solving a special kind of recurrence relation known as divide and conquer recurrences. These two methods, called the Master Method and the Akra-Bazzi method, solve such recurrences almost instantly.
The master method is applicable to a special kind of divide and conquer recurrence, whereas the Akra-Bazzi method is applicable to all kinds of divide and conquer recurrences; the Akra-Bazzi method can be considered a generalization of the master method.
I strongly recommend reading the following article first.
The Master method is applicable for solving recurrences of the form
$$\begin{align} T(n) = aT\left (\frac{n}{b} \right) + f(n) \end{align}$$ where $a \ge 1$ and $b > 1$ are constants and $f(n)$ is an asymptotically positive function. Depending upon the values of $a$, $b$ and the function $f(n)$, the master method has three cases:
1. If $f(n) = O(n^{\log_b a-\epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.
2. If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a}\log n)$.
3. If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \le cf(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.
If we memorize these three cases, we can solve many divide and conquer recurrences quite easily. Basically, in the master method we compare $f(n)$ with $n^{\log_b a}$: if $f(n)$ is polynomially smaller, we use case 1; if the two are of the same order, we use case 2; and if $f(n)$ is polynomially larger (and the regularity condition holds), we use case 3. Now we will use the master method to solve some recurrences.
Example 1: Consider the recurrence $$T(n) = 2T(n/4) + 1$$ The recurrence is in the form given by (1), so we can use the master method. Comparing it with (1), we get $$a = 2, b = 4 \text{ and } f(n) = 1$$ Next we calculate $n^{\log_b a} = n^{\log_4 2} = n^{0.5}$. Since $f(n) = O(n^{0.5 - 0.1})$, with $\epsilon = 0.1$, we can apply case 1 of the master method. Therefore, $$T(n) = \Theta(n^{\log_b a}) = \Theta(n^{0.5})$$

Example 2: Consider $$T(n) = T(n/2) + \Theta(1)$$ For this recurrence, we have $a = 1, b = 2, f(n) = \Theta(1)$ and $n^{\log_b a} = n^0 = 1$. Since $f(n) = \Theta(1)$, we can use case 2 of the master method. Therefore, $$T(n) = \Theta(n^{\log_b a}\log n) = \Theta(\log n)$$

Example 3: Consider $$T(n) = 3T(n/4) + n\log n$$ For this recurrence, we have $a = 3, b = 4, f(n) = n\log n$ and $n^{\log_b a} = n^{\log_4 3}$. Since $f(n) = \Omega(n^{\log_4 3 + 0.01})$, this is the third case of the master method. Case 3 requires an additional check: we need $c < 1$ such that $af(n/b) \le cf(n)$.
$$af(n/b) = 3(n/4)\log (n/4) \le cn\log n \text{ if } c = 3/4$$
Therefore, the solution is (case 3)
$$T(n) = \Theta(f(n)) = \Theta(n\log n)$$
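The three-way comparison above can be mechanised for the simplest situation, where the driving function is a pure polynomial $f(n) = n^k$ (a small helper of my own, not from the referenced texts; log factors, as in Examples 3 and 4, are outside its scope):

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the three master-method cases.
    Only handles pure polynomial f(n) = n^k."""
    eps = 1e-9
    c = math.log(a, b)            # the critical exponent log_b a
    if k < c - eps:
        return "case 1: Theta(n^%g)" % c
    if abs(k - c) <= eps:
        return "case 2: Theta(n^%g log n)" % c
    # k > log_b a: case 3; regularity a*(n/b)^k <= c0*n^k holds with c0 = a/b^k < 1
    return "case 3: Theta(n^%g)" % k

print(master_case(2, 4, 0))       # Example 1: case 1, Theta(n^0.5)
print(master_case(1, 2, 0))       # Example 2: case 2, Theta(n^0 log n) = Theta(log n)
print(master_case(9, 3, 1))       # T(n) = 9T(n/3) + n: case 1, Theta(n^2)
```

For polynomial $f(n)$ the case-3 regularity condition is automatic ($a(n/b)^k \le (a/b^k)\,n^k$ with $a/b^k < 1$ exactly when $k > \log_b a$), which is why the helper does not need a separate check.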
There are some recurrences that appear to have the proper form for the master method, but we cannot use the master method to solve them.
Example 4: Consider a recurrence, $$T(n) = 4T(n/2) + n^2\log n$$ In this recurrence, $a = 4, b = 2, f(n) = n^2\log n$, and thus $n^{\log_b a} = n^{\log_2 4} = n^2$. We can clearly see that $f(n) = n^2\log n = \Omega(n^2)$ and we may be tempted to use case 3. But hold on: we need a small positive number $\epsilon$ such that $n^2\log n = \Omega(n^{2 + \epsilon})$. Since $n^2\log n$ is not polynomially larger than $n^2$, no such $\epsilon$ exists. Consequently, we cannot use the master method to solve this example.
The master method does not apply to a recurrence such as
$$T(n) = T(n/3) + T(2n/3) + O(n)$$ In such cases, we use another, more powerful and general method known as the Akra-Bazzi method. The Akra-Bazzi method solves recurrence relations of the form $$\begin{align}T(n) = \sum_{i = 1}^k a_iT(b_in) + f(n) \end{align}$$ where $a_i > 0$ is a constant for $1 \le i \le k$, $b_i \in (0, 1)$ is a constant for $1 \le i \le k$, $k \ge 1$ is a constant, and $f(n)$ is a non-negative function.
The solution of the recurrence given in (2) is
$$T(n) = \Theta\left(n^p \left( 1 + \int_{1}^{n} \frac{f(u)}{u^{p + 1}}\, du \right ) \right)$$ where $p$ is the unique real number satisfying $$\sum_{i = 1}^{k}a_ib_i^p = 1.$$ Example 1: Consider the recurrence $$T(n) = 2T(n/4) + 3T(n/6) + \Theta(n\log n)$$
For this recurrence, $a_1 = 2, b_1 = 1/4, a_2 = 3, b_2 = 1/6, f(n) = n\log n$. The value of $p$ can be calculated as,
$$a_1b_1^p + a_2b_2^p = 2\times(1/4)^p + 3\times (1/6)^p = 1$$ $p = 1$ satisfies the above equation, since $2/4 + 3/6 = 1$. The solution is $$\begin{align} T(n) &= \Theta\left(n^p \left( 1 + \int_{1}^{n} \frac{f(u)}{u^{p + 1}}\, du \right ) \right)\\ & = \Theta\left(n \left( 1 + \int_{1}^{n} \frac{u\log u}{u^2}\, du \right ) \right) \\ &= \Theta\left(n \left( 1 + \frac{\log^2n}{2} \right ) \right)\\ &= \Theta(n\log^2n) \end{align}$$

References:
Akra, M., & Bazzi, L. (1998). On the solution of linear recurrence equations. Computational Optimization and Applications, 10, 195-210. Retrieved September 16, 2018, from http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/LinearRecurrenceEquations.pdf
Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms (3rd ed.). The MIT Press.
Contributors: Ganian, Robert, Kalany, Martin, Szeider, Stefan, Träff, Jesper Larsson
Date: 2015-06-30
... We show that the problem of constructing tree-structured descriptions of data layouts that are optimal with respect to space or other criteria from given sequences of displacements, can be solved in polynomial time. The problem is relevant for efficient compiler and library support for communication of noncontiguous data, where tree-structured descriptions with low-degree nodes and small index arrays are beneficial for the communication soft- and hardware. An important example is the Message-Passing Interface (MPI) which has a mechanism for describing arbitrary data layouts as trees using a set of increasingly general constructors. Our algorithm shows that the so-called MPI datatype reconstruction problem by trees with the full set of MPI constructors can be solved optimally in polynomial time, refuting previous conjectures that the problem is NP-hard. Our algorithm can handle further, natural constructors, currently not found in MPI. Our algorithm is based on dynamic programming, and requires the solution of a series of shortest path problems on an incrementally built, directed, acyclic graph. The algorithm runs in $O(n^4)$ time steps and requires $O(n^2)$ space for input displacement sequences of length $n$.
Contributors: Shafiee, Mohammad Javad, Wong, Alexander, Fieguth, Paul
Date: 2015-06-30
... Random fields have remained a topic of great interest over past decades for the purpose of structured inference, especially for problems such as image segmentation. The local nodal interactions commonly used in such models often suffer the short-boundary bias problem, which are tackled primarily through the incorporation of long-range nodal interactions. However, the issue of computational tractability becomes a significant issue when incorporating such long-range nodal interactions, particularly when a large number of long-range nodal interactions (e.g., fully-connected random fields) are modeled. In this work, we introduce a generalized random field framework based around the concept of stochastic cliques, which addresses the issue of computational tractability when using fully-connected random fields by stochastically forming a sparse representation of the random field. The proposed framework allows for efficient structured inference using fully-connected random fields without any restrictions on the potential functions that can be utilized. Several realizations of the proposed framework using graph cuts are presented and evaluated, and experimental results demonstrate that the proposed framework can provide competitive performance for the purpose of image segmentation when compared to existing fully-connected and principled deep random field frameworks.
Contributors: Refsgaard, J., Kirsebom, O. S., Dijck, E. A., Fynbo, H. O. U., Lund, M. V., Portela, M. N., Raabe, R., Randisi, G., Renzi, F., Sambi, S.
Date: 2015-06-30
... While the 12C(a,g)16O reaction plays a central role in nuclear astrophysics, the cross section at energies relevant to hydrostatic helium burning is too small to be directly measured in the laboratory. The beta-delayed alpha spectrum of 16N can be used to constrain the extrapolation of the E1 component of the S-factor; however, with this approach the resulting S-factor becomes strongly correlated with the assumed beta-alpha branching ratio. We have remeasured the beta-alpha branching ratio by implanting 16N ions in a segmented Si detector and counting the number of beta-alpha decays relative to the number of implantations. Our result, 1.49(5)e-5, represents a 24% increase compared to the accepted value and implies an increase of 14% in the extrapolated S-factor.
Contributors: Schrade, Constantin, Zyuzin, A. A., Klinovaja, Jelena, Loss, Daniel
Date: 2015-06-30
... We study two microscopic models of topological insulators in contact with an $s$-wave superconductor. In the first model the superconductor and the topological insulator are tunnel coupled via a layer of scalar and of randomly oriented spin impurities. Here, we require that spin-flip tunneling dominates over spin-conserving one. In the second model the tunnel coupling is realized by an array of single-level quantum dots with randomly oriented spins. It is shown that the tunnel region forms a $\pi$-junction where the effective order parameter changes sign. Interestingly, due to the random spin orientation the effective descriptions of both models exhibit time-reversal symmetry. We then discuss how the proposed $\pi$-junctions support topological superconductivity without magnetic fields and can be used to generate and manipulate Kramers pairs of Majorana fermions by gates.
Contributors: Tsui, K. H.
Date: 2015-06-30
... Following the basic principles of a charge separated pulsar magnetosphere \citep{goldreich1969}, we consider the magnetosphere to be stationary in space, instead of corotating, and the electric field to be uploaded from the potential distribution on the pulsar surface, set up by the unipolar induction. Consequently, the plasma of the magnetosphere undergoes guiding center drifts of the gyro motion due to the transverse forces to the magnetic field. These forces are the electric force, magnetic gradient force, and field line curvature force. Since these plasma velocities are of drift nature, there is no need to introduce an emf along the field lines, which would contradict the $E_{\parallel}=\vec E\cdot\vec B=0$ plasma condition. Furthermore, there is also no need to introduce the critical field line separating the electron and ion open field lines. We present a self-consistent description where the magnetosphere is described in terms of electric and magnetic fields and also in terms of plasma velocities. The fields and velocities are then connected through the space charge densities self-consistently. We solve the pulsar equation analytically for the fields and construct the standard steady state pulsar magnetosphere. By considering the unipolar induction inside the pulsar and the magnetosphere outside the pulsar as one coupled system, and under the condition that the unipolar pumping rate exceeds the Poynting flux in the open field lines, plasma pressure can build up in the magnetosphere, in particular in the closed region. This could cause a periodic opening up of the closed region, leading to a pulsating magnetosphere, which could be an alternative for pulsar beacons. The closed region can also be opened periodically by the build-up of toroidal magnetic field through a positive feedback cycle.
Contributors: Moolekamp, Fred, Mamajek, Eric
Date: 2015-06-30
... As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool that allows them to browse the files on a server, quickly view very large images ($>$ 2 Gb) taken with DECam and other cameras with a large FOV, and create their own visualization tools that can be added on as extensions to the default Toyz framework.
Contributors: Berezhiani, Zurab
Date: 2015-06-30
... The excess of high energy neutrinos observed by the IceCube collaboration might originate from baryon number violating decays of heavy shadow baryons from dark mirror sector which produce shadow neutrinos. These sterile neutrino species then oscillate into ordinary neutrinos transferring to them specific features of their spectrum. In particular, this scenario can explain the end of the spectrum above 2 PeV and the presence of the energy gap between 400 TeV and 1 PeV.
Contributors: Carayol, Arnaud, Löding, Christof, Serre, Olivier
Date: 2015-06-30
... We consider imperfect information stochastic games where we require the players to use pure (i.e. non randomised) strategies. We consider reachability, safety, B\"uchi and co-B\"uchi objectives, and investigate the existence of almost-sure/positively winning strategies for the first player when the second player is perfectly informed or more informed than the first player. We obtain decidability results for positive reachability and almost-sure B\"uchi with optimal algorithms to decide existence of a pure winning strategy and to compute one if exists. We complete the picture by showing that positive safety is undecidable when restricting to pure strategies even if the second player is perfectly informed.
Contributors: Simpson, Gideon, Watkins, Daniel
Date: 2015-06-30
... One way of getting insight into non-Gaussian measures, posed on infinite dimensional Hilbert spaces, is to first obtain good approximations in terms of Gaussians. These best fit Gaussians then provide notions of mean and variance, and they can be used to accelerate sampling algorithms. This begs the question of how one should measure optimality. Here, we consider the problem of minimizing the distance between a family of Gaussians and the target measure, with respect to relative entropy, or Kullback-Leibler divergence, as has been done previously in the literature. Thus, it is desirable to have algorithms, well posed in the abstract Hilbert space setting, which converge to these minimizers. We examine this minimization problem by seeking roots of the first variation of relative entropy, taken with respect to the mean of the Gaussian, leaving the covariance fixed. We prove the convergence of Robbins-Monro type root finding algorithms, highlighting the assumptions necessary for them to converge to relative entropy minimizers.
Contributors: Al-Safadi, Ebrahim B., Al-Naffouri, Tareq Y., Masood, Mudassir, Ali, Anum
Date: 2015-06-30
... A novel method for correcting the effect of nonlinear distortion in orthogonal frequency division multiplexing signals is proposed. The method depends on adaptively selecting the distortion over a subset of the data carriers, and then using tools from compressed sensing and sparse Bayesian recovery to estimate the distortion over the other carriers. Central to this method is the fact that carriers (or tones) are decoded with different levels of confidence, depending on a coupled function of the magnitude and phase of the distortion over each carrier, in addition to the respective channel strength. Moreover, as no pilots are required by this method, a significant improvement in terms of achievable rate can be achieved relative to previous work.
What helps to solve your problem is the rule ${\rm vec}\{A \cdot {\rm diag}(b) \cdot C^T\} = (C \diamond A)\cdot b$, where
- ${\rm vec}\{X\}$ is the vectorization operator that rearranges the elements of a matrix $X \in \mathbb{R}^{m \times n}$ into a vector in $\mathbb{R}^{m \cdot n \times 1}$ (stacking the columns);
- $\diamond$ is the "Khatri-Rao product", also known as the column-wise Kronecker product between two matrices, i.e., for given matrices $A = [a_1, \ldots, a_k] \in \mathbb{R}^{m \times k}$ and $B = [b_1, \ldots, b_k] \in \mathbb{R}^{n \times k}$, the matrix $C = A \diamond B$ is given by $C = [a_1 \otimes b_1, \ldots, a_k \otimes b_k] \in \mathbb{R}^{m \cdot n \times k}$, where $\otimes$ is the Kronecker product.
Let us apply this rule to your problem. First, we rearrange a bit:
$$\begin{align}-\omega^2U^T(M+\Delta M)U+U^T(K+\Delta K)U&=0_M \\-\omega^2U^T{\rm diag}\{\Delta m\} U+U^T{\rm diag}\{\Delta k\} U&=\omega^2 U^T M U - U^T K U\end{align}$$Now, let us vectorize:$$\begin{align}-(U^T \diamond \omega^2 U^T) \cdot \Delta m+(U^T \diamond U^T) \cdot \Delta k& = {\rm vec} \{ \omega^2 U^T M U - U^T K U\} \\\left[-U^T \diamond \omega^2 U^T, U^T \diamond U^T\right]\cdot\left[\Delta m^T, \Delta k^T\right]^T & = {\rm vec} \{ \omega^2 U^T M U - U^T K U\},\end{align}$$which is in the desired form $A x = b$, where $A = \left[-U^T \diamond \omega^2 U^T, U^T \diamond U^T\right] \in \mathbb{R}^{9 \times 10}$, $x = \left[\Delta m^T, \Delta k^T\right]^T\in \mathbb{R}^{10 \times 1}$, and $b = {\rm vec} \{ \omega^2 U^T M U - U^T K U\} \in \mathbb{R}^{9 \times 1}$.
As a slight simplification, you can rewrite your system matrix $A$ into $$A = \left[-(I_3 \otimes \omega^2) \cdot (U^T \diamond U^T), U^T \diamond U^T\right]= \left[-D \cdot (U^T \diamond U^T), U^T \diamond U^T\right],$$ where $D = {\rm diag}\{\omega_1^2, \omega_2^2, \omega_3^2, \omega_1^2, \omega_2^2, \omega_3^2, \omega_1^2, \omega_2^2, \omega_3^2\} \in \mathbb{R}^{9 \times 9}$, since $\omega^2$ is diagonal.
Clearly, if your $\omega_i$ are equal, the system has rank at most 5 (not surprisingly). In general, you may be luckier. I just tried with randomly drawn data and got rank 9. My guess is that for randomly drawn $\omega$, $U$ (from a continuous distribution) you get full rank almost surely, but that's a guess only.
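The vectorization rule above is easy to check numerically. Below is a quick sketch (the `khatri_rao` helper is my own construction, and `vec` is taken to stack columns, hence the Fortran-order flatten):

```python
import numpy as np

rng = np.random.default_rng(0)

def khatri_rao(A, B):
    "Column-wise Kronecker product (Khatri-Rao) of A and B."
    m, k = A.shape
    n, _ = B.shape
    # T[i, j, k] = A[i, k] * B[j, k]; column k of the result is a_k ⊗ b_k.
    return np.einsum('ik,jk->ijk', A, B).reshape(m * n, k)

# Check vec{A diag(b) C^T} = (C ⋄ A) b on random data.
A = rng.standard_normal((3, 4))
C = rng.standard_normal((5, 4))
b = rng.standard_normal(4)

lhs = (A @ np.diag(b) @ C.T).flatten(order='F')  # vec stacks columns
rhs = khatri_rao(C, A) @ b
assert np.allclose(lhs, rhs)
```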
There is another type of important morphism between the (orbifold) fundamental groups of the moduli spaces $M_{g,\nu}\rightarrow M_{g',\nu'}$ that is considered in Grothendieck's tower. You can see this morphism in three different ways. One is directly on the surfaces of type $(g,\nu)$ and $(g',\nu')$ (of genus $g$ with $\nu$ boundary components, resp. genus $g'$ with $\nu'$ boundary components). This morphism exists if you can put a set of disjoint simple closed loops on the surface of type $(g',\nu')$ such that when you cut along them, you cut your surface into one piece of type $(g,\nu)$, or else into several pieces of which at least one is of type $(g,\nu)$. You can also think of including the smaller surface of type $(g,\nu)$ into the bigger one by gluing it to other smaller pieces along the edges of their boundary components, to form the bigger one of type $(g',\nu')$ (which is the image Grothendieck had in mind when he talked about Lego).
The second way to see this morphism is as a morphism of moduli spaces, where $M_{g,\nu}$ is mapped to a boundary component of the Deligne-Mumford compactification $\overline{M}_{g',\nu'}$, in fact precisely the boundary component corresponding to taking the simple closed loops on the surface of type $(g',\nu')$ that "cut out" the one of type $(g,\nu)$ and shrinking them to length zero, so they become nodes.
The third way to view this same morphism is on the fundamental groups. This is pretty easy, since the (orbifold) fundamental group of $M_{g,\nu}$ is generated by Dehn twists along simple closed loops on the surface of type $(g,\nu)$, and these just map to the Dehn twists along the same simple closed loops when the $(g,\nu)$ surface is included in the $(g',\nu')$ one as above.
The Teichmüller tower can be considered to be the collection of all the fundamental groups of the $M_{g,\nu}$ linked by the point-erasing morphisms and by these. Or, as Grothendieck wanted, instead of fundamental groups, which depend on a certain choice of base point, you can replace the groups by more symmetric fundamental groupoids based at all "tangential base points" on the moduli spaces.
The automorphism group of the Teichmüller tower basically then consists of tuples $(\phi_{g,\nu})$ such that each $\phi_{g,\nu}$ is an automorphism of $\pi_1(M_{g,\nu})$ and the different $\phi_{g,\nu}$ in the same tuple commute with the homomorphisms of the tower.
1.
2.
The Terwilliger algebra of a distance-regular graph of negative type
Štefko Miklavič
, 2009, original scientific article
Description: Let $\Gamma$ denote a distance-regular graph with diameter $D \ge 3$. Assume $\Gamma$ has classical parameters $(D,b,\alpha,\beta)$ with $b < -1$. Let $X$ denote the vertex set of $\Gamma$ and let $A \in {\mathrm{Mat}}_X(\mathbb{C})$ denote the adjacency matrix of $\Gamma$. Fix $x \in X$ and let $A^\ast \in {\mathrm{Mat}}_X(\mathbb{C})$ denote the corresponding dual adjacency matrix. Let $T$ denote the subalgebra of ${\mathrm{Mat}}_X(\mathbb{C})$ generated by $A,A^\ast$. We call $T$ the Terwilliger algebra of $\Gamma$ with respect to $x$. We show that up to isomorphism there exist exactly two irreducible $T$-modules with endpoint 1; their dimensions are $D$ and $2D-2$. For these $T$-modules we display a basis consisting of eigenvectors for $A^\ast$, and for each basis we give the action of $A$. Found in: people. Keywords: distance-regular graph, negative type, Terwilliger algebra. Published: 15.10.2013; Views: 1558; Downloads: 55. Full text (0.00 KB)
3.
4.
5.
6.
Distance-balanced graphs: Symmetry conditions
Klavdija Kutnar
, Aleksander Malnič
, Dragan Marušič
, Štefko Miklavič
, 2006, original scientific article
Description: A graph $X$ is said to be distance-balanced if for any edge $uv$ of $X$, the number of vertices closer to $u$ than to $v$ is equal to the number of vertices closer to $v$ than to $u$. A graph $X$ is said to be strongly distance-balanced if for any edge $uv$ of $X$ and any integer $k$, the number of vertices at distance $k$ from $u$ and at distance $k+1$ from $v$ is equal to the number of vertices at distance $k+1$ from $u$ and at distance $k$ from $v$. Exploring the connection between symmetry properties of graphs and the metric property of being (strongly) distance-balanced is the main theme of this article. That a vertex-transitive graph is necessarily strongly distance-balanced and thus also distance-balanced is an easy observation. With only a slight relaxation of the transitivity condition, the situation changes drastically: there are infinite families of semisymmetric graphs (that is, graphs which are edge-transitive, but not vertex-transitive) which are distance-balanced, but there are also infinite families of semisymmetric graphs which are not distance-balanced. Results on the distance-balanced property in product graphs prove helpful in obtaining these constructions. Finally, a complete classification of strongly distance-balanced graphs is given for the following infinite families of generalized Petersen graphs: GP$(n,2)$, GP$(5k+1,k)$, GP$(3k+3,k)$, and GP$(2k+2,k)$. Found in: people. Keywords: graph theory, graph, distance-balanced graphs, vertex-transitive, semisymmetric, generalized Petersen graph. Published: 15.10.2013; Views: 1726; Downloads: 30. Full text (0.00 KB)
7.
8.
Q-polynomial distance-regular graphs with $a_1 = 0$ and $a_2 \ne 0$
Štefko Miklavič
, 2008, original scientific article
Description: Let $\Gamma$ denote a $Q$-polynomial distance-regular graph with diameter $D \ge 3$ and intersection numbers $a_1=0$, $a_2 \ne 0$. Let $X$ denote the vertex set of $\Gamma$ and let $A \in {\mathrm{Mat}}_X ({\mathbb{C}})$ denote the adjacency matrix of $\Gamma$. Fix $x \in X$ and let $A^\ast \in {\mathrm{Mat}}_X ({\mathbb{C}})$ denote the corresponding dual adjacency matrix. Let $T$ denote the subalgebra of ${\mathrm{Mat}}_X ({\mathbb{C}})$ generated by $A$, $A^\ast$. We call $T$ the Terwilliger algebra of $\Gamma$ with respect to $x$. We show that up to isomorphism there exists a unique irreducible $T$-module $W$ with endpoint 1. We show that $W$ has dimension $2D-2$. We display a basis for $W$ which consists of eigenvectors for $A^\ast$. We display the action of $A$ on this basis. We show that $W$ appears in the standard module of $\Gamma$ with multiplicity $k-1$, where $k$ is the valency of $\Gamma$. Found in: people. Keywords: mathematics, graph theory, adjacency matrix, distance-regular graph, Terwilliger algebra. Published: 15.10.2013; Views: 1377; Downloads: 9. Full text (0.00 KB)
9.
10.
GCD and LCM
In the name of love 20/03/2017 at 14:45
(a;b)=(-1;1;-2;2;-5;5;-10;10;-25;25;-50;50)
Good! ^^
Nguyễn Ngọc Mai 21/03/2017 at 10:30
\(a,b\in\) { 1 ; - 1 ; 2 ; -2 ; -5 ; 5 ; 10 ; -10 ; -25 ; 25 ; 50 ; -50 }
1 l i k e !
Love people Name Jiang 20/03/2017 at 17:12
LCM(a, b) = 50 => (a, b) = (25, 50), (10, 50), (1, 50), (2, 50), (5, 50)
Good
FA KAKALOTS 09/02/2018 at 22:03
Because the greatest common divisor of m and n is 15, we put:
m = 15k, n = 15h (GCD(k, h) = 1)
⇒ 3m + 2n = 45k + 30h = 225
⇒ 15(3k + 2h) = 225 ⇒ 3k + 2h = 15
+ If h = 0 then k = 5; and that result doesn't satisfy GCD(k;h) = 1.
So h > 0; then k is an odd number.
3k<15⇒k<5⇒k∈{1;3}
If k = 1 then h = 6 ⇒ m = 15, n = 90 ⇒ mn = 15 · 90 = 1350
If k = 3 then h = 3; that result doesn't satisfy GCD(k;h)=1.
Therefore, the answer is 1350.
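The answer above is easy to verify against all the stated constraints in a few lines of Python:

```python
from math import gcd

# Check the claimed solution m = 15, n = 90.
m, n = 15, 90
assert gcd(m, n) == 15      # greatest common divisor is 15
assert 3*m + 2*n == 225     # 45 + 180 = 225
assert m * n == 1350        # the requested product
```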
A cardboard \(140cm\times240cm\) is cut into many congruent squares. What is the largest possible square size? How many squares can be cut from the cardboard without wastage?
»ﻲ†hïếu๖ۣۜGïลﻲ« 25/03/2017 at 19:06
We have: GCD(140, 240) = 20
So the largest square can be 20 cm × 20 cm, and (140 ÷ 20) × (240 ÷ 20) = 7 × 12 = 84 squares can be cut without wastage.
The product of a grandfather's age and hos grandchild's age is 1339 next year. How old are they now?
1339 = 103 x 13 = 1 x 1339.
No one can be 1339 years old, so the grandfather will be 103 years old and his grandchild will be 13 years old next year.
Thus, the grandfather is 102 years old, and his grandchild is 12 years old now.
The remainder is 2 when n is divided by 3. The remainder is 3, 4 and 5 when the divisors are 4, 5 and 6 respectively. What is the smallest possible value of n?
We have : 3 - 2 = 4 - 3 = 5 - 4 = 6 - 5 = 1
So n + 1 is divisible by 3,4,5,6 but we need to find the smallest value of n.
=> n + 1 = LCM(3,4,5,6) = 60 => n = 59
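This reasoning can be checked directly in Python (using `math.lcm`, available in Python 3.9+):

```python
from math import lcm

# n + 1 must be a common multiple of 3, 4, 5, 6; the smallest is their LCM.
n = lcm(3, 4, 5, 6) - 1
assert n == 59
# n leaves the stated remainders for each divisor:
assert n % 3 == 2 and n % 4 == 3 and n % 5 == 4 and n % 6 == 5
```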
Alvin, Ben, Carl and Dan are salesmen for refrigerators. In 2009, Alvin sold 7 times as many refrigerators as Ben, 5 times as many as Carl and 4 times as many as Dan. In all, the 4 salesmen sold 669 refrigerators. What would be the highest possible number of refrigerators Alvin had sold?
Let 140a be the number of refrigerators Alvin sold in 2009, where a is a constant value. Thus,
- Ben sold 140a ÷ 7 = 20a (refrigerators);
- Carl sold 140a ÷ 5 = 28a (refrigerators);
- Dan sold 140a ÷ 4 = 35a (refrigerators).
According to the problem, we have
140a + 20a + 28a + 35a = 669
223a = 669
a = 3.
Therefore, Alvin had sold at most 3 × 140 = 420 refrigerators.
Answer: 420
The remainder is 1 when a certain number is divided by 2. The remainder is also 1 when the divisors are 3, 4, 5 and 6 respectively.
What is the smallest possible value of this number?
FA KAKALOTS 09/02/2018 at 22:05
Let n be the smallest value of that number. When n is divided by 2,3,4,5,6,the remainder is always 1,so n - 1 is divisible by 2,3,4,5,6
=> n - 1 = LCM(2,3,4,5,6) = 60 => n = 61
FA KAKALOTS 09/02/2018 at 22:05
We have: 882 = 2 × 3^2 × 7^2; 1134 = 2 × 3^4 × 7. So:
GCD(882, 1134) = 2 × 3^2 × 7 = 126
LCM(882, 1134) = 2 × 3^4 × 7^2 = 7938
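Both values can be double-checked with the identity gcd(a, b) · lcm(a, b) = a · b:

```python
from math import gcd

a, b = 882, 1134
g = gcd(a, b)
l = a * b // g          # LCM via gcd(a, b) * lcm(a, b) = a * b
assert g == 126
assert l == 7938
```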
This question already has an answer here:
Entropy of an ideal gas is defined as the logarithm of the number of possible states the gas can have multiplied by Boltzmann's constant:
$${\displaystyle S=k_{\mathrm {B} }\log \Omega .}$$
In deriving the Maxwell-Boltzmann distribution, we initially start by counting a finite number of states, so this definition of entropy makes sense. But in the end we say that the number of possible states is so high that we can actually say the distribution is continuous. But if the distribution is continuous, the number of possible states is infinite. So why is entropy not always infinite when a continuous distribution is used?
In this post, I'll describe a neat trick for maintaining a summary quantity (e.g., sum, product, max, log-sum-exp, concatenation, cross-product) under changes to its inputs. The trick and its implementation are inspired by the well-known max-heap datastructure. I'll also describe a really elegant application to fast sampling under an evolving categorical distribution.
Setup: Suppose we'd like to efficiently compute a summary quantity under changes to its \(n\)-dimensional input vector \(\boldsymbol{w}\). The particular form of the quantity we're going to compute is \(z = \bigoplus_{i=1}^n w_i\), where \(\oplus\) is some associative binary operator with identity element \(\boldsymbol{0}\).
The trick: Essentially, the trick boils down to parenthesis placement in the expression which computes \(z\), a freedom we get from the associative property.
I'll demonstrate by example with \(n=8\).
Linear structure: We generally compute something like \(z\) with a simple loop. This looks like a right-branching binary tree when we think about the order of operations,
Heap structure: Here the parentheses form a balanced tree, which looks much more like a recursive implementation that computes the left and right halves and \(\oplus\)s the results (divide-and-conquer style). The benefit of the heap structure is that there are \(\mathcal{O}(\log n)\) intermediate quantities that depend on any input, whereas the linear structure has \(\mathcal{O}(n)\). The intermediate quantities correspond to the values of each of the parenthesized expressions.
Since fewer intermediate quantities depend on a given input, fewer intermediates need to be adjusted upon a change to the input. Therefore, we get faster algorithms for maintaining the output quantity \(z\) as the inputs change.
Heap datastructure (aka binary index tree or Fenwick tree): We're going to store the values of the intermediate quantities and inputs in a heap datastructure, which is a complete binary tree. In our case, the tree has depth \(1 + \lceil \log_2 n \rceil\), with the values of \(\boldsymbol{w}\) at its leaves (aligned left) and padding with \(\boldsymbol{0}\) for the remaining leaves. Thus, the array's length is \(< 4 n\).
This structure makes our implementation really nice and efficient because we don't need pointers to find the parent or children of a node (i.e., no need to wrap elements into a "node" class like in a general tree data structure). So, we can pack everything into an array, which means our implementation has great memory/cache locality and low storage overhead.
Traversing the tree is pretty simple: Let \(d\) be the number of internal nodes, nodes \(1 \le i \le d\) are internal. For node \(i\), left child \(\rightarrow {2 \cdot i},\) right child \(\rightarrow {2 \cdot i + 1},\) parent \(\rightarrow \lfloor i / 2 \rfloor.\) (Note that these operations assume the array's indices start at \(1\). We generally fake this by adding a dummy node at position \(0\), which makes implementation simpler.)
Initializing the heap: Here's code that initializes the heap structure we just described.
```python
import numpy as np

def sumheap(w):
    "Create sumheap from weights `w` in O(n) time."
    n = w.shape[0]
    d = int(2**np.ceil(np.log2(n)))  # number of intermediates
    S = np.zeros(2*d)                # intermediates + leaves
    S[d:d+n] = w                     # store `w` at leaves.
    for i in reversed(range(1, d)):
        S[i] = S[2*i] + S[2*i + 1]
    return S
```
Updating \(w_k\) boils down to fixing intermediate sums that (transitively) depend on \(w_k.\) I won't go into all of the details here; instead, I'll give code below. I'd like to quickly point out that the term "parents" is not great for our purposes because the parents are actually the dependents: when an input changes, its parents, grandparents, great-grandparents, etc., become stale and need to be recomputed bottom up (from the leaves). The code below implements the update method for changing the value of \(w_k\) and runs in \(\mathcal{O}(\log n)\) time.

```python
def update(S, k, v):
    "Update `w[k] = v` in time O(log n)."
    d = S.shape[0]
    i = d//2 + k
    S[i] = v
    while i > 0:  # fix parents in the tree.
        i //= 2
        S[i] = S[2*i] + S[2*i + 1]
```
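Here is a tiny self-contained check of the two routines above (restated so the snippet runs on its own; I stop the fix-up loop at the root, node 1, so the dummy slot 0 is never touched):

```python
import numpy as np

def sumheap(w):
    "Create sumheap from weights `w` in O(n) time."
    n = w.shape[0]
    d = int(2**np.ceil(np.log2(n)))   # number of internal nodes
    S = np.zeros(2*d)                 # intermediates + leaves
    S[d:d+n] = w                      # store `w` at the leaves
    for i in reversed(range(1, d)):   # fill parents bottom-up
        S[i] = S[2*i] + S[2*i + 1]
    return S

def update(S, k, v):
    "Update `w[k] = v` in O(log n) time."
    i = S.shape[0]//2 + k
    S[i] = v
    while i > 1:                      # fix ancestors up to the root S[1]
        i //= 2
        S[i] = S[2*i] + S[2*i + 1]

w = np.array([1., 2., 3., 4., 5.])
S = sumheap(w)
assert S[1] == 15.0                   # root holds the total
update(S, 0, 10.0)                    # change w[0]: 1 -> 10
assert S[1] == 24.0                   # total reflects the change
```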
Remarks
Numerical stability: If the operations are noisy (e.g., a floating-point operator), then the heap version may be better behaved. For example, if operations have an independent, additive noise rate \(\varepsilon\), then the noise of \(z_{\text{heap}}\) is \(\mathcal{O}(\varepsilon \cdot \log n)\), whereas \(z_{\text{linear}}\) is \(\mathcal{O}(\varepsilon \cdot n)\). (Without further assumptions about the underlying operator, I don't believe you can do better than that.)
Relationship to max-heap: In the case of a max or min heap, we can avoid allocating extra space for intermediate quantities because all intermediate values are equal to exactly one element of \(\boldsymbol{w}\).
Change propagation: The general idea of adjusting cached intermediate quantities is a neat one. In fact, we encounter it each time we type make at the command line! The general technique goes by many names, including change propagation, incremental maintenance, and functional reactive programming, and applies to basically any side-effect-free computation. However, it's most effective when the dependency structure of the computation is sparse and requires little overhead to find and refresh stale values. In our example of computing \(z\), these considerations manifest themselves as the heap vs. linear structures and our fast array implementation instead of a generic tree datastructure.
Generalizations
No zero? No problem. We don't actually require a zero element. So, it's fair to work with \(\boldsymbol{K} \cup \{ \textsf{null} \}\), where \(\textsf{null}\) is a distinguished value (i.e., \(\textsf{null} \notin \boldsymbol{K}\)) that acts just like a zero after we overload \(\oplus\) to satisfy the definition of a zero (e.g., by adding an if-statement).
Generalization to arbitrary maps instead of fixed vectors is possible with a "locator" map, which is a bijective map from elements to indices in a dense array.
Support for growing and shrinking: We support growing by maintaining an underlying array that is always slightly larger than we need, which we're already doing in the heap datastructure. Doubling the size of the underlying array (i.e., rounding up to the next power of two) has the added benefit of allowing us to grow \(\boldsymbol{w}\) at no asymptotic cost! This is because the resize operation, which requires \(\mathcal{O}(n)\) time to allocate a new array and copy old values, happens so infrequently that it can be completely amortized. We get the effect of shrinking by replacing the old value with \(\textsf{null}\) (or \(\boldsymbol{0}\)). We can shrink the underlying array when the fraction of nonzeros dips below \(25\%\); this prevents "thrashing" between shrinking and growing.
Application
Sampling from an evolving distribution: Suppose that \(\boldsymbol{w}\) corresponds to a categorical distribution over \(\{1, \ldots, n\}\) and that we'd like to sample elements in proportion to this (unnormalized) distribution.
Other methods like the alias or inverse-CDF methods are efficient after a somewhat costly initialization step. But they are not as efficient as the heap sampler when the distribution is being updated. (I'm not sure whether variants of alias that support updates exist.)

| Method | Sample   | Update   | Init |
|--------|----------|----------|------|
| alias  | O(1)     | O(n)?    | O(n) |
| i-CDF  | O(log n) | O(n)     | O(n) |
| heap   | O(log n) | O(log n) | O(n) |
Use cases include stochastic priority queues, where we sample proportional to priority and the weights on items in the queue may change, elements are possibly removed after they are sampled (i.e., sampling without replacement), and elements are added.
Again, I won't spell out all of the details of these algorithms. Instead, I'll just give the code.
Inverse CDF sampling
```python
from numpy.random import uniform

def sample(w):
    "Ordinary sampling method, O(n) init, O(log n) per sample."
    c = w.cumsum()            # build cdf, O(n)
    p = uniform() * c[-1]     # random probe, p ~ Uniform(0, z)
    return c.searchsorted(p)  # binary search, O(log n)
```
Heap sampling is essentially the same, except the cdf is stored as a heap, which is perfect for binary search!
```python
def hsample(S):
    "Sample from sumheap, O(log n) per sample."
    d = S.shape[0]//2        # number of internal nodes.
    p = uniform() * S[1]     # random probe, p ~ Uniform(0, z)
    # Use binary search to find the index of the largest CDF (represented as a
    # heap) value that is less than a random probe.
    i = 1
    while i < d:
        # Determine if the value is in the left or right subtree.
        i *= 2               # Point at left child
        left = S[i]          # Probability mass under left subtree.
        if p > left:         # Value is in right subtree.
            p -= left        # Subtract mass from left subtree
            i += 1           # Point at right child
    return i - d
```
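As a quick empirical sanity check of the heap sampler, the sketch below (using `numpy`'s `default_rng` in place of the post's bare `uniform()`) samples repeatedly from a skewed distribution and confirms the heaviest index dominates:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Build a sumheap over a small distribution (n = 4 is already a power of two).
w = np.array([0.1, 0.1, 0.1, 0.7])
d = 4
S = np.zeros(2*d)
S[d:d+4] = w
for i in reversed(range(1, d)):
    S[i] = S[2*i] + S[2*i + 1]

def hsample(S):
    "Sample an index in proportion to its weight, O(log n) per sample."
    d = S.shape[0]//2
    p = rng.uniform() * S[1]      # random probe, p ~ Uniform(0, z)
    i = 1
    while i < d:
        i *= 2                    # descend to the left child
        if p > S[i]:              # probe falls in the right subtree
            p -= S[i]
            i += 1
    return i - d

counts = Counter(hsample(S) for _ in range(10_000))
assert counts[3] > counts[0]      # the heaviest index dominates
```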
WHY?
CycleGAN has been used effectively for image-to-image translation. However, handling more than two domains was difficult. This paper proposes StarGAN to handle multiple domains with a single model.
WHAT?
StarGAN can be considered a domain-conditioned version of CycleGAN.
The discriminator of StarGAN not only classifies real versus fake but also predicts domain labels. The loss function of StarGAN consists of three terms: an adversarial loss, a domain classification loss, and a reconstruction loss.
\mathcal{L}_{adv} = \mathbb{E}_x[\log D_{src}(x)] + \mathbb{E}_{x,c}[\log(1-D_{src}(G(x,c)))]\\\mathcal{L}_{cls}^r = \mathbb{E}_{x, c'}[-log D_{cls}(c'|x)]\\\mathcal{L}_{cls}^f = \mathbb{E}_{x, c}[-log D_{cls}(c|G(x, c))]\\\mathcal{L}_{rec} = \mathbb{E}_{x, c, c'}[\|x - G(G(x, c), c')\|_1]\\\mathcal{L}_D = -\mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^r\\\mathcal{L}_G = \mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^f + \lambda_{rec}\mathcal{L}_{rec}
\lambda_{cls} = 1 and \lambda_{rec} = 10 are used in all of the experiments. To apply StarGAN across multiple datasets with different label information, a mask vector m is introduced as part of the conditional information.
\tilde{c} = [c_1,...,c_n, m]
The model architecture is adopted from CycleGAN, and the WGAN-GP objective is used as the adversarial loss.
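As a rough sketch (not the authors' code; the tensors here are hypothetical toy arrays), the non-adversarial terms of the generator objective can be written as:

```python
import numpy as np

def l1_reconstruction(x, x_rec):
    "Cycle term: mean of ||x - G(G(x, c), c')||_1 over a batch."
    return np.abs(x - x_rec).mean()

def domain_cls_loss(logits, target):
    "Cross-entropy -log D_cls(target | .) from raw classifier logits."
    z = logits - logits.max()                # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

# Toy tensors; lambda_cls = 1 and lambda_rec = 10 as in the paper.
rec = l1_reconstruction(np.zeros((2, 3)), np.full((2, 3), 0.5))
cls = domain_cls_loss(np.array([0.0, 0.0]), target=1)
g_loss_without_adv = 1.0 * cls + 10.0 * rec
```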
So?
StarGAN successfully generated images conditioned on labels from different domains with a single model.
Critic
I think the key point in StarGAN is effective conditioning on labels. Recent conditioning methods from PGGAN or BigGAN, or AdaIN as used in SB-GAN, seem like they could be effective in this setting.
The standard deviation (SD) is a descriptive statistic: it measures the spread of individual observations. The standard error of the mean (SE, or SEM) instead measures how precisely a sample mean estimates the population mean. The two are related by

SE = SD / sqrt(n),

where n is the number of observations in the sample. For example, if the population standard deviation of runners' ages is σ = 9.27 years, then the standard error of the mean for samples of size 16 is 9.27/√16 = 2.32 years.

Because the sampling distribution of the mean is approximately normal, a 95% confidence interval can be obtained as the values 1.96 × SE either side of the mean; such an interval is 3.92 standard errors wide. In practice the population standard deviation σ is usually unknown, so the sample standard deviation s is used in its place, and Student's t distribution gives a good approximation to the sampling distribution for small samples. The sample standard deviation slightly underestimates σ; Gurland and Tripathi (1971) give a simple correction for unbiased estimation of the standard deviation, and for n = 6 the underestimate is only about 5%.

Which quantity to report depends on the message: use the standard deviation to express the variability of the data, and the standard error (or the relative standard error, the SE expressed as a percentage of the estimate) to express the precision of an estimated mean, e.g., for confidence intervals and margins of error.
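The conversion is a one-liner; here is a minimal sketch using the example numbers from the text:

```python
import math

def standard_error(sd, n):
    "Standard error of the mean from a standard deviation and sample size."
    return sd / math.sqrt(n)

# Example from the text: sigma = 9.27 years, samples of size 16.
se = standard_error(9.27, 16)
assert abs(se - 9.27/4) < 1e-12        # 9.27 / sqrt(16) = 2.3175, i.e. ~2.32

# 95% confidence interval half-width under the normal approximation:
half_width = 1.96 * se
```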
How do you show that if you only have two data points $(x_1, y_1)$ and $(x_2,y_2)$ then the best fit line given by the method of least squares is the line through $(x_1,y_1)$ and $(x_2,y_2)$?
If $x_1 \ne x_2$, then we can construct a line linking them, and since both points lie on the line, every residual is zero; a sum of squared errors cannot be smaller than zero, so this line is optimal.
However, care is needed in the case where $x_1 = x_2$, since then no non-vertical line can pass through both points. In that case we want to minimize $$(\hat{y}-y_1)^2+(\hat{y}-y_2)^2$$
and the minimum is attained at $\hat{y}= \frac{y_1+y_2}2$; that is, the best fit line passes through $(x_1, \frac{y_1+y_2}2)$.
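The $x_1 \ne x_2$ case is easy to confirm numerically (a sketch using `numpy.polyfit` on two arbitrary points):

```python
import numpy as np

# Two points with x1 != x2: the least-squares line interpolates them exactly.
x = np.array([1.0, 3.0])
y = np.array([2.0, 8.0])
slope, intercept = np.polyfit(x, y, 1)

assert abs(slope - 3.0) < 1e-9         # (8 - 2) / (3 - 1)
assert abs(intercept + 1.0) < 1e-9     # the line y = 3x - 1 hits both points
residual = y - (slope * x + intercept)
assert np.allclose(residual, 0.0)      # zero error, hence optimal
```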
Now showing items 1-10 of 51
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ... |
I just came out of test which asked to solve $$\frac{dy}{dx}=\frac{y}{x}$$ with $x,y>0$ in three ways: by separating the variables, using the substitution $y=vx$ and using an integrating factor.
So what I did was the following \begin{align*}&\frac{dy}{y}=\frac{dx}{x}\\\Rightarrow&\ln y=\ln x +C\\\Rightarrow&y=e^C x=kx.\end{align*}
Then using the substitution $y=vx$, I had $$\frac{d}{dx}(vx)=v,$$ which by the product rule becomes $$x\frac{dv}{dx}+v=v\Rightarrow x\frac{dv}{dx}=0,$$ from which it follows that $v$ is a constant and hence $y=vx=kx$.
Now with the integrating factor I said that the equation is equivalent to $$\frac{dy}{dx}+yP(x)=Q(x),$$with $P(x)=-1/x$ and $Q(x)=0$. Thus I rewrote the equation as $$\frac{d}{dx}(e^{-\ln x+C}y)=0\Rightarrow \frac{y}{x}e^C=0.$$ But now how do I proceed?
Or am I not allowed to have $Q(x)=0$ anyway?
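All three methods should lead to the same general solution $y = kx$; a quick cross-check with SymPy (assuming SymPy is available — variable names below are ours) confirms it:

```python
import sympy as sp

# Solve dy/dx = y/x for x > 0; the general solution should be y = C1*x,
# matching the separation-of-variables and substitution results above.
x = sp.symbols('x', positive=True)
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x) / x), y(x))
print(sol)  # Eq(y(x), C1*x)
```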
I don't quite understand the principle of minimum energy despite having read the derivation on Wikipedia.
I think I got lost when the free energy was defined as $A= \max_S{\left(U-TS\right)}$, because I don't know why is the max there.
Here's an alternative derivation to show that Helmholtz energy will be minimized.
Consider the fact that, by the second law of thermodynamics, the total entropy $S$ of the universe cannot decrease. That is, the sum of all entropies satisfies
$$ dS_{\rm universe} = dS_{\rm sys} + dS_{\rm surr} \geq 0,$$
where 'sys' and 'surr' represent our system and its surroundings, respectively. Hold this result for the moment. Recall the thermodynamic identity $dU = TdS-PdV+\mu dN$, and recognize that for constant volume and number of particles,
$$ dS_{\rm surr} = \frac{dU_{\rm surr}}{T} = - \frac{dU_{\rm sys}}{T}. $$
Note that the last equality comes from the first law: energy is conserved, so $dU_{\rm surr} = -dU_{\rm sys}$. Then, plugging this into our earlier result and multiplying by $T$, we have
$$ dS_{\rm universe} = dS_{\rm sys} + dS_{\rm surr} = dS_{\rm sys} - \frac{dU_{\rm sys}}{T}$$ $$ \rightarrow TdS_{\rm universe} = TdS_{\rm sys} - dU_{\rm sys}.$$
Now the proper definition for Helmholtz free energy is $F = U - TS$, so for constant temperature, $dF = dU - TdS = -(TdS-dU)$. We can plug this into our last result as
$$ T dS_{\rm universe} = - dF_{\rm sys} $$
and finally
$$ dS_{\rm universe} = - \frac{dF_{\rm sys}}{T}.$$
Note that in order to maximize the entropy of the universe, you have to make the Helmholtz free energy of the system as negative as possible (negative, and of large magnitude).
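As a concrete illustration of this conclusion, minimizing $F = U - TS$ over the states of a small system in contact with a bath reproduces the Boltzmann distribution. A sketch, assuming a hypothetical two-level system with $k_B = 1$ (not from the derivation above):

```python
import numpy as np

# Two-level system (energies 0 and 1, k_B = 1) exchanging energy with a bath
# at temperature T: the occupation probability that minimizes F = U - T*S
# coincides with the Boltzmann distribution.
E = np.array([0.0, 1.0])
T = 0.7

def free_energy(p1):
    """F = U - T*S for occupation probability p1 of the upper level."""
    p = np.array([1.0 - p1, p1])
    U = p @ E
    S = -np.sum(p * np.log(p))
    return U - T * S

ps = np.linspace(1e-6, 1.0 - 1e-6, 20001)
p_min = ps[np.argmin([free_energy(p) for p in ps])]
p_boltz = np.exp(-E[1] / T) / (1.0 + np.exp(-E[1] / T))  # Boltzmann weight
print(p_min, p_boltz)  # both ≈ 0.193
```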
That is because you cannot simply add the electrode potentials algebraically. In both cases, the electrode potentials have to be multiplied by $n$. What you can do instead is use Gibbs free energy change for each reaction which then can be added algebraically.
$$\Delta G^\circ = -nFE^\circ$$
Using this you can get $\Delta G^\circ$ for each reaction; these can then be added algebraically, in the same way you add the reactions. Equating the resultant $\Delta G^\circ_\mathrm{net}$ with $-nFE^\circ_\mathrm{net}$ gives the value of $E^\circ_\mathrm{net}$, which should be your answer.
In this approach, since the values of $n$ for the two reactions are different, the relative weight of each reaction's electrode potential in determining the resultant $E^\circ$ differs, and therefore the final $E^\circ$ will be different from what you get by just adding up the electrode potentials.
For your case:
$$\begin{align}\ce{Co^3+ + e- &-> Co^2+} & E^\circ_1 &= \pu{+1.82 V} & \Delta G_1^\circ &= \pu{-175.6 kJ mol-1} \tag{1} \\\ce{Co^2+ + 2e- &-> Co} & E^\circ_2 &= \pu{-0.28 V} & \Delta G_2^\circ &= \pu{+54.0 kJ mol-1} \tag{2} \\\end{align}$$
$$\begin{align}\Delta G^\circ_\mathrm{net} &= \Delta G_1^\circ + \Delta G_2^\circ \\&= \pu{-121.6 kJ mol-1} \\[3pt]E^\circ_\mathrm{net} &= -\frac{\pu{-121.6 kJ mol-1}}{3F} \\&= \pu{+0.42 V}\end{align}$$ |
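The arithmetic above can be checked in a few lines; this is a sketch of the same calculation, with only the Faraday constant added:

```python
# Combining half-reactions via Delta G = -n * F * E (F in C/mol).
F = 96485.0  # Faraday constant

def delta_g(n, e_std):
    """Gibbs free energy change (J/mol) of an n-electron half-reaction."""
    return -n * F * e_std

dG1 = delta_g(1, +1.82)   # Co3+ + e-  -> Co2+
dG2 = delta_g(2, -0.28)   # Co2+ + 2e- -> Co
dG_net = dG1 + dG2        # Co3+ + 3e- -> Co  (n = 3)
E_net = -dG_net / (3 * F)
print(round(E_net, 2))    # 0.42
```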
On Vector-Valued Automorphic Forms On Bounded Symmetric Domains, 2017 The University of Western Ontario
On Vector-Valued Automorphic Forms On Bounded Symmetric Domains, Nadia Alluhaibi Electronic Thesis and Dissertation Repository
The objective of the study is to investigate the behaviour of the inner products of vector-valued Poincare series, for large weight, associated to submanifolds of a quotient of the complex unit ball, and how vector-valued automorphic forms could be constructed via Poincare series. In addition, it provides a proof that vector-valued Poincare series on an irreducible bounded symmetric domain span the space of vector-valued automorphic forms.
Polygons, Pillars And Pavilions: Discovering Connections Between Geometry And Architecture, 2017 University High School
Polygons, Pillars And Pavilions: Discovering Connections Between Geometry And Architecture, Sean Patrick Madden Journal of Catholic Education
Crowning the second semester of geometry, taught within a Catholic middle school, the author's students explored connections between the geometry of regular polygons and the architecture of local buildings. They went on to explore how these principles apply to famous buildings around the world, such as the monuments of Washington, D.C. and the elliptical piazza of Saint Peter's Basilica at Vatican City within Rome, Italy.
Classification Of Rectifying Space-Like Submanifolds In Pseudo-Euclidean Spaces, 2017 Michigan State University
Classification Of Rectifying Space-Like Submanifolds In Pseudo-Euclidean Spaces, Bang-Yen Chen, Yun Oh Faculty Publications
The notions of rectifying subspaces and of rectifying submanifolds were introduced in [B.-Y. Chen, Int. Electron. J. Geom 9 (2016), no. 2, 1–8]. More precisely, a submanifold in a Euclidean m-space Em is called a rectifying submanifold if its position vector field always lies in its rectifying subspace. Several fundamental properties and classification of rectifying submanifolds in Euclidean space were obtained in [B.-Y. Chen, op. cit.]. In the present article, we extend the results in [B.-Y. Chen, op. cit.] to rectifying space-like submanifolds in a pseudo-Euclidean space with arbitrary codimension. In particular, we completely classify ...
Session A-3: Three-Act Math Tasks, 2017 Illinois Mathematics and Science Academy
Session A-3: Three-Act Math Tasks, Lindsey Herlehy Professional Learning Day
Participants will engage in a Three-Act Math task highlighting the application of properties of geometrical figures. Developed by Dan Meyer, an innovative and highly regarded mathematics instructor, Three-Act Math tasks utilize pedagogical skills that elicit student curiosity, collaboration and questioning. By posing a mathematical problem through active storytelling, this instructional approach redefines real-world mathematics and clarifies the role that a student plays in the learning process. Participants will be given multiple resources where they can access Three-Act Math tasks appropriate for upper elementary grades through Algebra and Geometry courses.
Classification Of Book Representations Of K6, 2017 Merrimack College
Classification Of Book Representations Of K6, Dana Rowland Mathematics Faculty Publications
A book representation of a graph is a particular way of embedding a graph in three dimensional space so that the vertices lie on a circle and the edges are chords on disjoint topological disks. We describe a set of operations on book representations that preserves ambient isotopy, and apply these operations to K6, the complete graph with six vertices. We prove there are exactly 59 distinct book representations for K6, and we identify the number and type of knotted and linked cycles in each representation. We show that book representations of K6 contain between one and seven links, and ...
Drawing A Triangle On The Thurston Model Of Hyperbolic Space, 2017 Loyola Marymount University
Drawing A Triangle On The Thurston Model Of Hyperbolic Space, Curtis D. Bennett, Blake Mellor, Patrick D. Shanahan Blake Mellor
In looking at a common physical model of the hyperbolic plane, the authors encountered surprising difficulties in drawing a large triangle. Understanding these difficulties leads to an intriguing exploration of the geometry of the Thurston model of the hyperbolic plane. In this exploration we encounter topics ranging from combinatorics and Pick’s Theorem to differential geometry and the Gauss-Bonnet Theorem.
Linked Exact Triples Of Triangulated Categories And A Calculus Of T-Structures, 2017 Loyola Marymount University
Linked Exact Triples Of Triangulated Categories And A Calculus Of T-Structures, Michael Berg Michael Berg
We introduce a new formalism of exact triples of triangulated categories arranged in certain types of diagrams. We prove that these arrangements are well-behaved relative to the process of gluing and ungluing t-structures defined on the indicated categories, and we connect our constructs to a problem (from number theory) involving derived categories. We also briefly address a possible connection with a result of R. Thomason.
The Jones Polynomial Of An Almost Alternating Link, 2017 Vassar College
The Jones Polynomial Of An Almost Alternating Link, Adam M. Lowrance, Dean Spyropoulos Faculty Research and Reports
No abstract provided.
Random Tropical Curves, 2017 Harvey Mudd College
Random Tropical Curves, Magda L. Hlavacek HMC Senior Theses
In the setting of tropical mathematics, geometric objects are rich with inherent combinatorial structure. For example, each polynomial $p(x,y)$ in the tropical setting corresponds to a tropical curve; these tropical curves correspond to unbounded graphs embedded in $\R^2$. Each of these graphs is dual to a particular subdivision of its Newton polytope; we classify tropical curves by combinatorial type based on these corresponding subdivisions. In this thesis, we aim to gain an understanding of the likeliness of the combinatorial type of a randomly chosen tropical curve by using methods from polytope geometry. We focus on tropical curves ...
Tropical Derivation Of Cohomology Ring Of Heavy/Light Hassett Spaces, 2017 Harvey Mudd College
Tropical Derivation Of Cohomology Ring Of Heavy/Light Hassett Spaces, Shiyue Li HMC Senior Theses
The cohomology of moduli spaces of curves has been extensively studied in classical algebraic geometry. The emergent field of tropical geometry gives new views and combinatorial tools for treating these classical problems. In particular, we study the cohomology of heavy/light Hassett spaces, moduli spaces of heavy/light weighted stable curves, denoted as $\calm_{g, w}$ for a particular genus $g$ and a weight vector $w \in (0, 1]^n$ using tropical geometry. We survey and build on the work of \citet{Cavalieri2014}, which proved that tropical compactification is a \textit{wonderful} compactification of the complement of hyperplane arrangement for ...
A Categorical Formulation Of Algebraic Geometry, 2017 University of Massachusetts Amherst
A Categorical Formulation Of Algebraic Geometry, Bradley Willocks Doctoral Dissertations
We construct a category, $\Omega$, of which the objects are pointed categories and the arrows are pointed correspondences. The notion of a ``spec datum" is introduced, as a certain relation between categories, of which one has been given a Grothendieck topology. A ``geometry" is interpreted as a sub-category of $\Omega$, and a formalism is given by which such a subcategory is to be associated to a spec datum, reflecting the standard construction of the category of schemes from the category of rings by affine charts.
Chow's Theorem, 2017 Colby College
Chow's Theorem, Yohannes D. Asega Honors Theses
We present the proof of Chow's theorem as a corollary to J.P.-Serre's GAGA correspondence theorem after introducing the necessary prerequisites. Finally, we discuss consequences of Chow's theorem.
A Journey To Fuzzy Rings, 2017 Georgia Southern University
A Journey To Fuzzy Rings, Brett T. Ernst Electronic Theses and Dissertations
Enumerative geometry is a very old branch of algebraic geometry. In this thesis, we will describe several classical problems in enumerative geometry and their solutions in order to motivate the introduction of tropical geometry. Finally, fuzzy rings, a powerful algebraic framework for tropical and algebraic geometry is introduced.
Characterization Of Rectifying And Sphere Curves In R^3, 2017 Andrews University
Characterization Of Rectifying And Sphere Curves In R^3, Yun Oh, Julie Logan Faculty Publications
Studies of curves in 3D-space have been developed by many geometers, and it is known that any regular curve in 3D space is completely determined by its curvature and torsion, up to position. Many results have been found to characterize various types of space curves in terms of conditions on the ratio of torsion to curvature. Under an extra condition on the constant curvature, Y.L. Seo and Y. M. Oh found the series solution when the ratio of torsion to curvature is a linear function. Furthermore, this solution is known to be a rectifying curve by B. Y. Chen’s ...
Computation Of Real Radical Ideals By Semidefinite Programming And Iterative Methods, 2016 The University of Western Ontario
Computation Of Real Radical Ideals By Semidefinite Programming And Iterative Methods, Fei Wang Electronic Thesis and Dissertation Repository
Systems of polynomial equations with approximate real coefficients arise frequently as models in applications in science and engineering. In the case of a system with finitely many real solutions (the $0$ dimensional case), an equivalent system generates the so-called real radical ideal of the system. In this case the equivalent real radical system has only real (i.e., no non-real) roots and no multiple roots. Such systems have obvious advantages in applications, including not having to deal with a potentially large number of non-physical complex roots, or with the ill-conditioning associated with roots with multiplicity. There is a corresponding, but ...
On The Perfect Reconstruction Of The Structure Of Dynamic Networks, 2016 University of Dayton
On The Perfect Reconstruction Of The Structure Of Dynamic Networks, Alan Veliz-Cuba Annual Symposium on Biomathematics and Ecology: Education and Research
No abstract provided.
Non-Commutative Automorphisms Of Bounded Non-Commutative Domains, 2016 Washington University in St Louis
Non-Commutative Automorphisms Of Bounded Non-Commutative Domains, John E. Mccarthy, Richard M. Timoney Mathematics Faculty Publications
We establish rigidity (or uniqueness) theorems for non-commutative (NC) automorphisms that are natural extensions of classical results of H. Cartan and are improvements of recent results. We apply our results to NC domains consisting of unit balls of rectangular matrices.
On The Free And G-Saturated Weight Monoids Of Smooth Affine Spherical Varieties For G=Sl(N), 2016 The Graduate Center, City University of New York
On The Free And G-Saturated Weight Monoids Of Smooth Affine Spherical Varieties For G=Sl(N), Won Geun Kim All Dissertations, Theses, and Capstone Projects
Let $X$ be an affine algebraic variety over $\mathbb{C}$ equipped with an action of a connected reductive group $G$. The weight monoid $\Gamma(X)$ of $X$ is the set of isomorphism classes of irreducible representations of $G$ that occur in the coordinate ring $\mathbb{C}[X]$ of $X$. Losev has shown that if $X$ is a smooth affine spherical variety, that is, if $X$ is smooth and $\mathbb{C}[X]$ is multiplicity-free as a representation of $G$, then $\Gamma(X)$ determines $X$ up to equivariant automorphism.
Pezzini and Van Steirteghem have recently obtained a combinatorial characterization of the weight ...
Critical Groups Of Graphs With Dihedral Actions Ii, 2016 Gettysburg College
Critical Groups Of Graphs With Dihedral Actions Ii, Darren B. Glass Math Faculty Publications
In this paper we consider the critical group of finite connected graphs which admit harmonic actions by the dihedral group $D_n$, extending earlier work by the author and Criel Merino. In particular, we show that the critical group of such a graph can be decomposed in terms of the critical groups of the quotients of the graph by certain subgroups of the automorphism group. This is analogous to a theorem of Kani and Rosen which decomposes the Jacobians of algebraic curves with a $D_n$-action.
K-Theory Of Root Stacks And Its Application To Equivariant K-Theory, 2016 The University of Western Ontario
K-Theory Of Root Stacks And Its Application To Equivariant K-Theory, Ivan Kobyzev Electronic Thesis and Dissertation Repository
We give a definition of a root stack and describe its most basic properties. Then we recall the necessary background (Abhyankar’s lemma, Chevalley-Shephard-Todd theorem, Luna’s etale slice theorem) and prove that under some conditions a quotient stack is a root stack. Then we compute G-theory and K-theory of a root stack. These results are used to formulate the theorem on equivariant algebraic K-theory of schemes. |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
Efficient Summarization with Read-Again and Copy Mechanism. Wenyuan Zeng, Wenjie Luo, Sanja Fidler and Raquel Urtasun, 2016.
### Read-Again

Two options:

* GRU: run a pass of a regular GRU over the input text $x_1,\ldots,x_n$. Use its hidden states $h_1,\ldots,h_n$ to compute a weight for every step $i$: $\alpha_i = \tanh \left( W_e h_i + U_e h_n + V_e x_i \right)$, and then run a second GRU pass over the same input text. In the second pass the weights $\alpha_i$ from the first pass are multiplied with the internal $z_i$ GRU gating (controlling whether the hidden state is directly copied) of the second pass.
* LSTM: concatenate the hidden states from the first pass with the input text, $\left[ x_i, h_i, h_n \right]$, and run a second pass on this new input.

In the case of multiple sentences, the above passes are done per sentence. In addition, the $h^s_n$ of each sentence $s$ is concatenated with the $h^{s'}_n$ of the other sentences, or with $\tanh \left( \sum_s V_s h_s + v \right)$.

### Decoder with copy mechanism

An LSTM with hidden state $s_t$. Its input is the previously generated word $y_{t-1}$ and a context computed with an attention mechanism: $c_t = \sum_i^n \beta_{it} h_i$. Here the $h_i$ are the hidden states of the second pass of the encoder. The weights are $\beta_{it} = \text{softmax} \left( v_a^T \tanh \left( W_a s_{t-1} + U_a h_i \right) \right)$.

The decoder vocabulary $Y$ used is small. If $y_{t-1}$ does not appear in $Y$ but does appear in the input at $x_i$, then its embedding is replaced with $p_t = \tanh \left( W_c h_i + b_c \right)$, and with <UNK> otherwise. $p_t$ is also used to copy the input to the output (details not given).

### Experiments

Abstractive summarization on the [DUC2003 and DUC2004 competitions](http://www-nlpir.nist.gov/projects/duc/data.html).
Efficient Summarization with Read-Again and Copy Mechanism. Wenyuan Zeng, Wenjie Luo, Sanja Fidler and Raquel Urtasun. arXiv e-Print archive, 2016, via Local arXiv. Keywords: cs.CL
First published: 2016/11/10. Abstract: Encoder-decoder models have been widely used to solve sequence to sequence prediction tasks. However current approaches suffer from two shortcomings. First, the encoders compute a representation of each word taking into account only the history of the words it has read so far, yielding suboptimal representations. Second, current decoders utilize large vocabularies in order to minimize the problem of unknown words, resulting in slow decoding times. In this paper we address both shortcomings. Towards this goal, we first introduce a simple mechanism that first reads the input sequence before committing to a representation of each word. Furthermore, we propose a simple copy mechanism that is able to exploit very small vocabularies and handle out-of-vocabulary words. We demonstrate the effectiveness of our approach on the Gigaword dataset and DUC competition outperforming the state-of-the-art.
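A minimal NumPy sketch of the decoder attention described in the summary (dimensions, names, and the random parameters are illustrative, not from the paper's code):

```python
import numpy as np

# beta_i = softmax_i( v_a . tanh(W_a s_{t-1} + U_a h_i) );  c_t = sum_i beta_i h_i
def attention(s_prev, H, W_a, U_a, v_a):
    scores = np.tanh(W_a @ s_prev + H @ U_a.T) @ v_a  # one score per encoder state
    e = np.exp(scores - scores.max())                 # numerically stable softmax
    beta = e / e.sum()
    c_t = beta @ H                                    # context vector
    return beta, c_t

rng = np.random.default_rng(0)
n, d_h, d_s, d_a = 6, 4, 3, 5                 # illustrative sizes
H = rng.normal(size=(n, d_h))                 # encoder states h_1..h_n (2nd pass)
s_prev = rng.normal(size=d_s)                 # decoder state s_{t-1}
W_a = rng.normal(size=(d_a, d_s))
U_a = rng.normal(size=(d_a, d_h))
v_a = rng.normal(size=d_a)
beta, c_t = attention(s_prev, H, W_a, U_a, v_a)
print(beta.sum())  # 1.0: attention weights form a distribution over inputs
```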
Given a sequence \(u_n\), you can play with it and create a new sequence \(S_n\) defined by
\(S_1=u_1\)
\(S_2=u_1+u_2\)
\(S_3=u_1+u_2+u_3\)
More generally \(S_n=u_1+u_2+\cdots+u_n=\sum_{k=1}^{n}u_k\)
\(S_n\) is called the
sequence of partial sums. It is a sequence and any results about sequences can be used for \(S_n\).
The
series \(\sum u_n\) or \(\sum_{n=0}^{\infty} u_n\) is the limit of \(S_n\) when \(n\) goes to \(\infty\). Basically a series is a real number, the value of the limit when \(S_n\) is convergent.
We say that the
series is convergent when the sequence \(S_n\) is convergent, i.e. has a finite limit. In this case the series is equal to the value of the limit.
The
series is divergent when the limit of \(S_n\) is infinite or doesn't exist.
A
geometric series with ratio \(r\) is \( \sum_{k=0}^{\infty}r^k\).
We can prove that its sequence of partial sums satisfies \(S_n=\sum_{k=0}^{n} r^k=\frac{1-r^{n+1}}{1-r}\) for \(r\neq 1\) and \(S_n=n+1\) for \(r=1\).
Multiply \(S_n\) by \((1-r)\), after simplifications, you will find that \(S_n(1-r)=1-r^{n+1}\)
By taking the limit as \(n\) goes to \(\infty\), we can conclude that
Theorem: The geometric series of ratio \(r\) is convergent for \(r\in(-1,1)\) and the series
$$\sum_{k=0}^{\infty}r^k=\frac{1}{1-r}.$$
The
geometric series is divergent for \(r\notin(-1,1)\).
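A quick numerical illustration of the theorem (a minimal Python sketch; not part of the original notes):

```python
def partial_sum_geometric(r, n):
    """S_n = sum_{k=0}^{n} r**k, computed term by term."""
    return sum(r**k for k in range(n + 1))

r = 0.5
# the closed form (1 - r**(n+1)) / (1 - r) matches the term-by-term sum
assert abs(partial_sum_geometric(r, 20) - (1 - r**21) / (1 - r)) < 1e-12
# and for |r| < 1 the partial sums approach 1 / (1 - r) = 2
print(partial_sum_geometric(r, 50))  # ≈ 2.0
```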
p-series
A
p-series is a series in the form $$\sum_{n=1}^{\infty}\frac{1}{n^p}$$
P-test: the p-series \(\sum_{n=1}^{\infty} \frac{1}{n^p}\) is convergent for \(p>1\). The p-series is divergent for \(p\leqslant 1\).
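The P-test can be illustrated numerically; a small Python sketch comparing a convergent and a divergent p-series:

```python
def partial_sum_p(p, n):
    """Partial sum of the p-series: sum_{k=1}^{n} 1 / k**p."""
    return sum(1.0 / k**p for k in range(1, n + 1))

# p = 2 > 1: the partial sums settle down (the limit is pi^2/6 ≈ 1.6449)
print(partial_sum_p(2, 10000))   # ≈ 1.6448
# p = 1: the harmonic series keeps growing (roughly like ln n)
print(partial_sum_p(1, 10), partial_sum_p(1, 10000))
```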
Theorem: If a series \(\sum u_n\) is convergent then \(\lim_{n\rightarrow\infty}u_n=0\)
Remark: The converse is FALSE. The series \(\sum\frac{1}{n}\) is divergent and \(\lim_{n\rightarrow\infty}\frac{1}{n}=0\).
The contrapositive is TRUE and is probably the most used form of the theorem:
if \(\lim_{n\rightarrow \infty}u_n\neq 0\) then the series is divergent.
Example: \(\lim_{n\rightarrow\infty}\frac{n^2+\sin n}{3n^2+\sqrt{n}}=\frac{1}{3}\neq 0\) therefore the series $$\sum_{n=1}^{\infty}\frac{n^2+\sin n}{3n^2+\sqrt{n}}$$ is divergent.
Theorem (Comparison test, Inequality version) Let \(\sum a_n\) and \(\sum b_n\) be series with non-negative terms such that, for all indices greater than some \(N\), \(0\leqslant a_n\leqslant b_n\). Then: if \(\sum b_n\) is convergent, \(\sum a_n\) is convergent; and if \(\sum a_n\) is divergent, \(\sum b_n\) is divergent.
Theorem (Comparison test, limit version) Let \(\sum a_n\) and \(\sum b_n\) be series with positive terms such that \(\lim_{n\rightarrow\infty} \frac{a_n}{b_n}=c\neq 0\); then either \(\sum a_n\) and \(\sum b_n\) are both convergent or \(\sum a_n\) and \(\sum b_n\) are both divergent.
Theorem (Ratio Test) Let \(\sum a_n\) and \(\sum b_n\) be series with positive terms such that \(\frac{a_{n+1}}{a_n}\leqslant \frac{b_{n+1}}{b_n}\) for any \(n\),
then
If \(\sum b_n\) is convergent, then \(\sum a_n\) is convergent.
If \(\sum a_n\) is divergent, then \(\sum b_n\) is divergent.
Corollary (Comparison to a geometric series) If for any \(n\), \(\frac{u_{n+1}}{u_n}<r<1\), then \(\sum u_n\) is convergent.
Corollary: If \(\lim_{n\rightarrow \infty}\frac{u_{n+1}}{u_n}=L\)
if \(L>1\), then the series with general term \(u_n\) is divergent,
if \( L<1\), the series with general term \(u_n\) is convergent.
if \(L=1\), we cannot conclude.
Theorem (comparison with an integral)
Given a positive function \(f\) that is non-increasing on an interval \([1,\infty)\).
Then the series \(\sum f(n)\) and the integral \(\int_1^{\infty} f(t)\,dt\) are both convergent or both divergent.
Definition Given a series \(\sum u_n\), \(\sum u_n\) is absolutely convergent if \(\sum |u_n|\) is convergent.
Theorem If \(\sum u_n\) is absolutely convergent, then \(\sum u_n\) is convergent.
Definition If \(\sum u_n\) is convergent and not absolutely convergent, then \(\sum u_n\) is conditionally convergent.
Theorem (Alternating series) A series \(\sum u_n\) such that its terms alternate between positive and negative, and such that \(|u_n|\) is decreasing toward 0, is convergent.
Answer
$28.2^\circ$
Work Step by Step
The measure of an inscribed angle of a circle is one-half the measure of its intercepted arc, therefore $\angle B=\frac{1}{2}\overset{\frown}{AC}=\frac{1}{2}\cdot 56.4^\circ=28.2^\circ$.
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
Background
Let me start this question with a long introduction, because I assume that only a few readers will be familiar with the theory of partial coherent light and concepts like a mutual coherence function or a mutual intensity. The coherency matrix and Stokes parameters descriptions of partially polarized light are related concepts which are more widely known.
Correct treatment of partial coherent light is important for an appropriate modeling of optical pattern transfer in computer simulations of proximity and projection lithography as currently used by the semiconductor manufacturing industry. When I came to this industry, my previous optics "training" was insufficient in this area. I found chapter X "Interference and diffraction with partially coherent light" in
Principles of Optics by Max Born and Emil Wolf most helpful for filling my gaps in this area. Later, I also "browsed" through "Statistical Optics" by Joseph W. Goodman, which has a nice paragraph in the introduction explaining why insufficient familiarity with statistical optics is so common:
Surely the preferred way to solve a problem must be the deterministic way, with statistics entering only as a sign of our own weakness or limitations. Partially as a consequence of this viewpoint, the subject of statistical optics is usually left for the more advanced students, particularly those with a mathematical flair.
The interesting thing is that Hermitian matrices and eigenvalue decompositions like the Karhunen-Loève expansion are used quite routinely in this field, and they somehow feel quite similar to modeling of coherence and decoherence in quantum-mechanics. I know that there are important obvious (physical) difference between the two fields, but my actual question is what they have in common.
Question
Some elementary experiments like the double slit experiment are often used to illustrate the particle wave duality of light. However, the theory of partially coherent light is completely sufficient to describe and predict the outcome of these experiments. There are no particles at all in the theory of partially coherent light, only waves, statistics and uncertainty. The global phase is an unobservable parameter in both theories, but the amplitude of a wave function is only important for the theory of partial coherent light and is commonly normalized away in quantum-mechanics. This leads to a crucial difference with respect to the possible transformations treated by the respective theories. But is this really a fundamental difference, or just a difference in the common practices of the respective theories? How much of the strange phenomena of quantum-mechanics can be explained by the theory of partial coherent light alone, without any reference to particles or measurement processes?
More information on what I would actually like to learn
One reason for this question is to find out how much familiarity with partial coherence can be assumed when asking questions here. Therefore it explains why this familiarity cannot be taken for granted, and is written in a style to allow quite general answers. However, it also contains specific questions, indicated by question marks:
How is the theory of partial coherent light related to quantum-mechanics? ... the amplitude of a wave function ... But is this really a fundamental difference, or just a difference in the common practices of the respective theories? How much of the strange phenomena of quantum-mechanics can be explained by the theory of partial coherent light alone, without any reference to particles or measurement processes?
Don't be distracted by my remark about the double slit experiment. Using it to illustrate the particle wave duality of light seemed kind of cheating to me long before I had to cope with partial coherence. I could effortlessly predict the outcome of all these supposedly counter-intuitive experiments without even being familiar with the formalism of quantum-mechanics. Still, the outcome of these experiments is predicted correctly by quantum-mechanics, and independently by the theory of partial coherent light. So these two theories do share some common parts.
An interesting aspect of the theory of partial coherent light is that things like the mutual intensity or the Stokes parameters can in principle be observed. A simple analogy to the density matrix in quantum-mechanics is the coherency matrix description of partial polarization. It can be computed in terms of the Stokes parameters $$J=\begin{bmatrix} E(u_{x}u_{x}^{\ast})&E(u_{x}u_{y}^{\ast})\\ E(u_{y}u_{x}^{\ast})&E(u_{y}u_{y}^{\ast}) \end{bmatrix}=\frac12\begin{bmatrix} S_0+S_1&S_2+iS_3\\ S_2-iS_3&S_0-S_1 \end{bmatrix} $$ and hence can in principle be observed. But can the density matrix in quantum-mechanics in principle be observed? Well, the measurement process of the Stokes parameters can be described by the following Hermitian matrices $\hat{S}_0=\begin{bmatrix}1&0\\0&1\end{bmatrix}$, $\hat{S}_1=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$, $\hat{S}_2=\begin{bmatrix}0&1\\1&0\end{bmatrix}$ and $\hat{S}_3=\begin{bmatrix}0&i\\-i&0\end{bmatrix}$. Only $\hat{S}_0$ commutes with all other Hermitian matrices, which somehow means that each individual part of the density matrix can be observed in isolation, but the entire density matrix itself is not observable. But we don't measure all Stokes parameters simultaneously either, or at least that's not what we mean when we say that the Stokes parameters can be measured in principle. Also note the relation between the fact that $\hat{S}_0$ commutes with all other Hermitian matrices and the fact that the amplitude of a wave function is commonly normalized away in quantum-mechanics. But the related question is really a serious question for me, because the Mueller calculus for Stokes parameters allows (slightly unintuitive) transformations which seem to be ruled out for quantum-mechanics.
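The coherency-matrix formula is easy to check numerically. A minimal pure-Python sketch (the Stokes values below are made up for illustration):

```python
def coherency_from_stokes(s0, s1, s2, s3):
    """Build the 2x2 coherency matrix J from Stokes parameters,
    following J = (1/2) [[S0+S1, S2+iS3], [S2-iS3, S0-S1]]."""
    return [[0.5 * (s0 + s1), 0.5 * (s2 + 1j * s3)],
            [0.5 * (s2 - 1j * s3), 0.5 * (s0 - s1)]]

J = coherency_from_stokes(1.0, 0.3, 0.2, 0.1)
# J is Hermitian: J[1][0] equals the conjugate of J[0][1]
print(J[1][0] == J[0][1].conjugate())  # True
# trace(J) = S0, the total intensity
print(J[0][0] + J[1][1])               # 1.0
# degree of polarization sqrt(S1^2+S2^2+S3^2)/S0 is at most 1
print((0.3**2 + 0.2**2 + 0.1**2) ** 0.5 / 1.0)
```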
How to solve the pulley problems (on a inclined plane)
Pulleys on a flat surface
Let's talk about a different type of problem with using pulleys
Pulley on an inclined plane
How to solve pulley on incline plane
$$a=\dfrac{m_2g-m_1g\sin\theta}{m_1+m_2}$$
$$T_1=m_1g\pm m_1a$$ Going up
$$T_1=m_1g+ m_1a$$
$$T_2=m_2g\pm m_2a$$ Going down
$$T_2=m_2g- m_2a$$
Now what if we consider friction?
Now tilt it a little bit.
$$a=\dfrac{m_2g\sin\theta_2-m_1g\sin\theta_1}{m_1+m_2}$$
$$T_1=T_2$$
$$T_2=m_2g\sin\theta_2\pm m_2a$$
$$T_2=m_2g\sin\theta_2- m_2a$$
See also: Pulley Hanging from the ceiling
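The first (frictionless) set of formulas can be sanity-checked with a few lines of Python; the masses and angle below are made-up example values:

```python
import math

def incline_pulley(m1, m2, theta, g=9.8):
    """Mass m1 on a frictionless incline of angle theta, m2 hanging;
    acceleration from a = (m2*g - m1*g*sin(theta)) / (m1 + m2)."""
    return (m2 * g - m1 * g * math.sin(theta)) / (m1 + m2)

a = incline_pulley(m1=2.0, m2=3.0, theta=math.radians(30.0))
print(round(a, 3))  # 3.92; positive means m2 descends, m1 moves up the incline
# limiting cases: theta = 90 degrees reduces to an Atwood machine,
# and m2 = m1*sin(theta) gives zero acceleration
print(round(incline_pulley(1.0, 1.0, math.radians(90.0)), 6))  # 0.0
```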
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (roughly: "The 'path' only comes into being because we observe it.") Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waiving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
I'm puzzled with the following question: find an analog of the Hubble's law for the Kasner solution.
Kasner metric is a solution to the vacuum Einstein equations $$ds^2=dt^2-\sum_{i=1}^3a^2_i(t)(dx^i)^2$$ with coefficients $a_i(t)=t^{p_i}$, where $p_i$ are constant parameters satisfying $\sum_ip_i=1$ and $\sum_i (p_i)^2=1$. This solution describes anisotropic space.
It is known that in the Robertson-Walker metric, when coefficients along all axes are the same $a_i=a$ wavelengths of a photon at times $t$ and $t_0$ are related by the scale parameter $a(t)$ as $$\frac{\lambda(t)}{\lambda(t_0)}=\frac{a(t)}{a(t_0)}$$
this is true if the rate of expansion of space \(\dot{a}/a\) is significantly smaller than the frequency of the photon.
The question is how to generalize this statement in the case of the Kasner metric. A natural idea is to suppose that the statement is generalized directly to the propagation along "main" axes $$\frac{\lambda_i(t)}{\lambda_i(t_0)}=\frac{a_i(t)}{a_i(t_0)}$$ and "linearly" for a generic direction, i.e. for $\vec{\lambda}=\lambda_i \vec{e_i}$ $$\frac{|\vec{\lambda}(t)|}{|\vec{\lambda}(t_0)|}=\frac{\dot{a}_i(t)\lambda_i}{a_i(t)\lambda_i}$$
However, I'm not able to confirm this conjecture by any quantitative argument. I've tried to show that the EOM in a curved space-time $\nabla^\mu F_{\mu\nu}=0$ admit a solution of a kind $A_\mu(x)\sim e_{\mu}\exp{\left(i g_{\mu\nu}q^\mu x^\nu\right)}$ with $q^\mu$ being a rescaled "minkowskian" momentum $q^i(t)=\frac{k_i}{a_i(t)}, \eta^{\mu\nu}k_\mu k_\nu=0$ but failed to succeed.
After some time I've realized that one should probably look not for an exact solution of this type (even in the Robertson-Walker case the relation $\lambda(t)\sim a(t)$ is approximate) but for a "high-frequency" approximate solution. But still I'm not currently able to find it.
So my questions are:
1) Is the suggested generalization of the Hubble's law correct?
2) If so, how to provide quantitative evidence for it? I would be satisfied even with calculations for a massless scalar field if they are simpler to perform in order to achieve the goal. |
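To make the per-axis conjecture concrete, here is a small numeric sketch that simply assumes the conjectured relation $\frac{\lambda_i(t)}{\lambda_i(t_0)}=\frac{a_i(t)}{a_i(t_0)}=(t/t_0)^{p_i}$, using the standard Kasner triple $(-1/3, 2/3, 2/3)$:

```python
# Kasner exponents must satisfy sum(p) = 1 and sum(p^2) = 1
p = (-1.0 / 3.0, 2.0 / 3.0, 2.0 / 3.0)
assert abs(sum(p) - 1.0) < 1e-12
assert abs(sum(x * x for x in p) - 1.0) < 1e-12

def axis_redshift(t, t0, p_i):
    """Conjectured per-axis wavelength stretch:
    lambda_i(t) / lambda_i(t0) = (t / t0) ** p_i."""
    return (t / t0) ** p_i

# along the contracting axis (p = -1/3) photons are blueshifted,
# along the expanding axes (p = 2/3) they are redshifted
print([round(axis_redshift(8.0, 1.0, pi), 4) for pi in p])  # [0.5, 4.0, 4.0]
```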
This exercise started in section 2.9 on which I wrote a seven page commentary before accepting Carroll's challenge to prove the modified Leibniz rule for the exterior derivative.
Carroll introduces differential forms: "A differential p-form is a completely antisymmetric (0,p) tensor. Thus scalars are automatically 0-forms and dual vectors (downstairs index) are one-forms. We also have the 4-form ##{\epsilon }_{\mu \nu \rho \sigma }##, the Levi-Civita tensor." I had a slight problem with 0- and 1-forms but this was resolved on Physics Forums.
First we meet the wedge product:
Given a##\ p##-form ##A## and a ##q##-form ##B## we can form a ##(p+q)##-form known as the wedge product ##A\wedge B## by taking the antisymmetrised tensor product:
$${\left(A\wedge B\right)}_{{\mu }_1\dots {\mu }_{p+q}}=\frac{\left(p+q\right)!}{p!q!}A_{[{\mu }_1\dots {\mu }_p}B_{{\mu }_{p+1}\dots {\mu }_{p+q}]}$$
I prove that the wedge product of an n-dimensional 2-form and 1-form is completely antisymmetric in any number of dimensions n ##\mathrm{\ge}## 2 and therefore a 3-form.
Then we meet the exterior derivative
$${\left(\mathrm{d}A\right)}_{{\mu }_1\dots {\mu }_{p+1}}=\left(p+1\right){\partial }_{[{\mu }_1}A_{{\mu }_2\dots {\mu }_{p+1}]}$$
They both involve the ghastly total antisymmetrisation operation [] on indices. It is defined back in his equation (1.80) as
$$T_{[{\mu }_1\dots {\mu }_n]}=\frac{1}{n!}\left(T_{{\mu }_1\dots {\mu }_n}\pm \mathrm{sum\ over\ permutations\ of}\ {\mu }_1\dots {\mu }_n\right)$$
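The antisymmetrisation in (1.80) and the wedge formula can be checked numerically for small $p,q$; a Python sketch with arbitrary component values:

```python
from itertools import permutations
from math import factorial

def parity(perm):
    """Sign of a permutation given as a tuple of indices 0..n-1."""
    sign, seen = 1, list(perm)
    for i in range(len(seen)):
        while seen[i] != i:
            j = seen[i]
            seen[i], seen[j] = seen[j], seen[i]
            sign = -sign
    return sign

def antisymmetrize(T, n_indices):
    """T[(mu1,...,muk)] -> T_[mu1...muk]: signed average over permutations,
    per Carroll's (1.80)."""
    out = {}
    for idx in T:
        total = 0.0
        for perm in permutations(range(n_indices)):
            total += parity(perm) * T[tuple(idx[k] for k in perm)]
        out[idx] = total / factorial(n_indices)
    return out

# wedge of two 1-forms A, B in 3 dimensions:
# (A ^ B)_{mu nu} = 2! * A_[mu B_nu] = A_mu B_nu - A_nu B_mu
A, B, dim = [1.0, 2.0, 3.0], [0.5, -1.0, 4.0], 3
T = {(m, n): A[m] * B[n] for m in range(dim) for n in range(dim)}
wedge = {idx: 2.0 * v for idx, v in antisymmetrize(T, 2).items()}
print(wedge[(0, 1)], A[0] * B[1] - A[1] * B[0])  # -2.0 -2.0
print(wedge[(0, 1)] + wedge[(1, 0)])             # 0.0 (antisymmetric)
```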
This led on to Exercise 2.08
Question
Verify (2.78): For an exterior derivative of a product of a p-form ω and a q-form η, we have the modified Leibnitz rule:
$$\mathrm{d}\left(\omega \wedge \eta \right)=\left(\mathrm{d}\omega \right)\wedge \eta +{\left(-1\right)}^p\omega \wedge \left(\mathrm{d}\eta \right)$$
Answer
Here we have the ghastly total antisymmetrisation operation [] again,
and nested in itself. I had to invent a new notation
$$\sum_{\mp \mathrm{\circlearrowleft }}{A_{{\mu }_1\dots {\mu }_n}}\equiv \left(A_{{\mu }_1\dots {\mu }_n}\pm \mathrm{sum\ over\ permutations\ of}\ {\mu }_1\dots {\mu }_n\ where\ we\ use\ -\ for\ odd\ permutations\ and\ +\ for\ even.\right)$$
because writing the stuff about the permutations every time would be stupid and does not fit on a line. I expanded each term in the question equation and reached expressions like
$$\frac{{\left(-1\right)}^{p\left(q+1\right)}}{\left(q+1\right)!p!q!}\sum_{\mp \mathrm{\circlearrowleft }}{\left(\sum_{\mp \mathrm{\circlearrowleft }}{{\mathrm{\partial }}_{{\mu }_1}{\eta }_{{\mu }_2\dots {\mu }_{q+1}}}\right){\omega }_{{\mu }_{q+2}\dots {\mu }_{p+q+1}}}$$
where you can see the nested expansions explicitly. Each term had a different variant of the nesting so the nesting had to be removed and I proved, for example and avoiding too many subscripts, that
$$\sum_{\mp \mathrm{\circlearrowleft }}{\left(\sum_{\mp \mathrm{\circlearrowleft }}{{\mathrm{\partial }}_a{\eta }_{c_1\dots c_q}}\right){\omega }_{b_1\dots b_p}}\mathrm{=}\left(q+1\right)!{\left(-1\right)}^{q(p+q)}\sum_{\mp \mathrm{\circlearrowleft }}{{\mathrm{\partial }}_a{\omega }_{b_1\dots b_p}{\eta }_{c_1\dots c_q}}$$
factorials cancelled beautifully but I was left with
$${\mathrm{d}\left(\omega \wedge \eta \right)}_{\ }=\left(\mathrm{d}\omega \right)\wedge \eta +{\left(-1\right)}^{\left(q+p\right)}\omega \wedge \left(\mathrm{d}\eta \right)$$
which is not the same as the modified Leibnitz rule, in other words, junk.
I also show that the equation does not work for two 1-forms. In fact I get$$2\mathrm{d}\left(\omega \wedge \eta \right)=\left(\mathrm{d}\omega \right)\wedge \eta +{\left(-1\right)}^p\omega \wedge \left(\mathrm{d}\eta \right)$$which agrees with my wrong answer. I am within spitting distance of the correct proof! I hope to come back to this problem.
Read the full account
Commentary 2.9 Differential forms.pdf (7 pages)
Ex 2.08 Exterior derivative and modified Leibnitz rule.pdf (8 pages - I have long thought that he was Leibnitz. That is acceptable, but Leibniz is better.)
There is also an answer here from University of California, Santa Barbara (UCSB).
It starts from ##\mathrm{d}\left(\omega \wedge \eta \right)## and uses the ordinary Leibnitz rule to split that in two. It then manipulates the two parts to convert them into the RHS of the question equation. There are some rather 'hand-waving' steps such as the first line of text "We drop the inner set of antisymmetrization brackets in the second line because the terms are antisymmetrized in all of their indices". It works in the opposite direction to what I did. |
In the context of Fermi gases (or fluids in general), one would typically in the grand-canonical formalism use the formula
$\langle N \rangle = -\frac{\partial \psi}{\partial \mu}$, where $\psi$ is the grand/Landau potential (generally interpreted as an integral weighted by the density of states).
However I came across some solutions to problems (the problems were all oriented around a gas enclosed in a container of some geometry) where $\langle N \rangle$ was calculated using a formula along the lines of
$\langle N \rangle=k\int\int f(x,p)\, d^np\, d^nx$, where $k$ is some constant and $f$ is some function, both of which I haven't yet been able to figure out the general expression for from the examples I have ($k$ is most likely $\frac{2}{4\pi^2\hbar^2}$). The limits are often related to the Fermi energy $\varepsilon_F$.
Due to my limited information about this approach I am not having much luck finding information about it. Could someone kindly provide me an explanation or point me into the direction of one?
EDIT: Here's an example question to showcase how it is used
A fully degenerate Fermi gas (spin $\frac12$) is characterised by $\langle N \rangle$ non-interacting electrons confined to a plane within a circle of radius $R$ centered at the origin. The energy of a single particle is $\varepsilon=\frac{p^2}{2m}+\alpha r$ where $\alpha>0$ and $r$ is the radial distance from the origin. Determine $\varepsilon_F$ for $\alpha R << \varepsilon_F$.
The allowed domain of integration is obtained by imposing $\varepsilon \leq \varepsilon_F$
$$\langle N \rangle = \frac{2}{4\pi^2\hbar^2}\int\int_D d^2pd^2q=\frac{2}{\hbar^2}\int_0^R\int_0^{^{\sqrt{2m(\varepsilon_F-\alpha r)}}} (p r) dpdr=...$$ |
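In the $\alpha \to 0$ limit of this example the phase-space integral has the closed form $\langle N\rangle = \frac{2}{4\pi^2\hbar^2}(\pi R^2)(\pi p_F^2) = \frac{m\varepsilon_F R^2}{\hbar^2}$, which a short numeric check reproduces (units with $\hbar = m = 1$ assumed):

```python
import math

HBAR = 1.0
M = 1.0

def n_analytic(eps_f, R):
    """alpha -> 0 closed form: <N> = m * eps_F * R^2 / hbar^2."""
    return M * eps_f * R * R / HBAR**2

def n_numeric(eps_f, R, steps=2000):
    """Midpoint evaluation of <N> = 2/(4 pi^2 hbar^2) * int d^2p d^2q
    over the domain p^2/(2m) <= eps_F, |q| <= R (alpha = 0)."""
    p_f = math.sqrt(2.0 * M * eps_f)
    area_q = math.pi * R * R                     # position-space disk
    dp = p_f / steps                             # momentum-space disk, 2*pi*p*dp shells
    area_p = sum(2.0 * math.pi * ((i + 0.5) * dp) * dp for i in range(steps))
    return 2.0 / (4.0 * math.pi**2 * HBAR**2) * area_p * area_q

print(n_analytic(2.0, 3.0))           # 18.0
print(round(n_numeric(2.0, 3.0), 6))  # ≈ 18.0
```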
Sean Carroll, my guide and nemesis
After equation 3.161 for the geodesic in terms of 4 momentum ##p^\lambda\nabla_\lambda p^\mu=0## Carroll says that by metric compatibility we are free to lower the index ## \mu##. Metric compatibility means that ##\nabla_\rho g_{\mu\nu}=\nabla_\rho g^{\mu\nu}=0## so I tried to show that given that, ##\nabla_\lambda p^\mu=\nabla_\lambda p_\mu##. Here was my first attempt:
Lower the index with the metric, use the Leibnitz rule, use metric compatibility$$
\nabla_\lambda p^\mu=\nabla_\lambda g^{\mu\nu}p_\nu=p_\nu\nabla_\lambda g^{\mu\nu}+g^{\mu\nu}\nabla_\lambda p_\nu=0+\nabla_\lambda p^\mu
$$Then I tried painfully expanding ##\nabla_\lambda g^{\mu\nu}p_\nu## and got the same result. So then I asked on Physics Forums: Why does metric compatibility imply ##\nabla_\lambda p^\mu=\nabla_\lambda p_\mu##? I got my wrist slapped by martinbn who pointed out that ##\nabla_\lambda p^\mu=\nabla_\lambda p_\mu## made no sense because there are different types of tensors on each side of the equation. (The ## \mu## is up on one side and down on the other).
I was embarrassed😡.
What Carroll is really saying is that metric compatibility means that$$
\nabla_\lambda p^\mu=0\Rightarrow\nabla_\lambda p_\mu=0
$$which is quite different and easy to show:$$
\nabla_\lambda p^\mu=0
$$$$
\Rightarrow g^{\mu\nu}\nabla_\lambda p_\nu=0
$$$$
\Rightarrow g_{\rho\mu}g^{\mu\nu}\nabla_\lambda p_\nu=0
$$$$
\Rightarrow\delta_\rho^\nu\nabla_\lambda p_\nu=0
$$$$
\Rightarrow\nabla_\lambda p_\rho=0
$$I posted something very like those steps and there was silence, which usually means they are correct. The first step uses ##\nabla_\lambda p^\mu=g^{\mu\nu}\nabla_\lambda p_\nu##, which can be done in several ways:
1) ##\nabla_\lambda p^\mu## is a tensor so you can lower (or raise) an index with the metric as usual.
2) ##\nabla_\lambda p^\mu=\nabla_\lambda\left(g^{\mu\nu}p_\nu\right)=p_\nu\nabla_\lambda g^{\mu\nu}+g^{\mu\nu}\nabla_\lambda p_\nu=g^{\mu\nu}\nabla_\lambda p_\nu## as in (1), using the Leibnitz rule and metric compatibility
3) ##\nabla_\lambda p^\mu=\nabla_\lambda\left(g^{\mu\nu}p_\nu\right)=g^{\mu\nu}\nabla_\lambda p_\nu## using Carroll's third rule for covariant derivatives: that they commute with contractions.
The Leibnitz rule was the second rule of covariant derivatives and I discussed all four in Commentary 3.2 Christoffel Symbol. The third caused angst and another question on PF. I now think that the third rule is just saying that because the covariant derivative is a tensor you can raise and lower indexes on it. 3 and 1 above are really the same. I have suitably amended Commentary 3.2 Christoffel Symbol.
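The index gymnastics used above (lower with ##g_{\mu\nu}##, contract to ##\delta##, recover the original components) can be sanity-checked with a toy diagonal metric; the components below are made up:

```python
# toy diagonal metric g_{mu nu} = diag(-1, t^2, t^2, t^2) and its inverse
t = 2.0
g_dn = [-1.0, t * t, t * t, t * t]     # diagonal of g_{mu nu}
g_up = [1.0 / c for c in g_dn]         # diagonal of g^{mu nu}

# g_{rho mu} g^{mu nu} = delta_rho^nu (diagonal case: componentwise product)
delta = [g_dn[i] * g_up[i] for i in range(4)]
print(delta)  # [1.0, 1.0, 1.0, 1.0]

# lowering an index and then raising it returns the original components
p_up = [3.0, 1.0, -2.0, 0.5]
p_dn = [g_dn[i] * p_up[i] for i in range(4)]
p_back = [g_up[i] * p_dn[i] for i in range(4)]
print(p_back == p_up)  # True
```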
Sometimes I hate Carroll!
There was also another post on the thread ahead of the first two which referred to a similar question on Stack Exchange. MathematicalPhysicist was asked to show that $$
U^\alpha\nabla_\alpha V^\beta=W^\beta\Rightarrow U^\alpha\nabla_\alpha V_\beta=W_\beta
$$The proof for this is very similar to the above:$$
U^\alpha\nabla_\alpha V^\beta=W^\beta
$$$$
\Rightarrow U^\alpha g^{\beta\gamma}\nabla_\alpha V_\gamma=g^{\beta\gamma}W_\gamma
$$$$
\Rightarrow U^\alpha g_{\mu\beta}g^{\beta\gamma}\nabla_\alpha V_\gamma=g_{\mu\beta}g^{\beta\gamma}W_\gamma
$$$$
\Rightarrow U^\alpha\delta_\mu^\gamma\nabla_\alpha V_\gamma=\delta_\mu^\gamma W_\gamma
$$$$
\Rightarrow U^\alpha\nabla_\alpha V_\mu=W_\mu
$$Once again there are three ways to do the first step. Metric compatibility is not essential.
See Commentary 3.8 Symmetries and Killing vectors.pdf first two pages. Then I run into another problem with Killing. |
WHY?
Representing the bilinear relationship between two inputs is expensive. MLB efficiently reduced the number of parameters by substituting the bilinear operation with a Hadamard product. This paper extends that idea to capture bilinear attention between two multi-channel inputs.
WHAT?
Using low-rank bilinear pooling, attention on visual inputs given a question vector can be calculated efficiently, and this can be generalized to a bilinear model for two multi-channel inputs. With $\mathbf{f}_k'$ denoting the $k$th element of the intermediate representation,
\mathbf{f}_k' = (\mathbf{X}^T\mathbf{U}')^T_k\mathcal{A}(\mathbf{Y}^T\mathbf{V}')_k = \sum_{i=1}^{\rho}\sum_{j=1}^{\phi}\mathcal{A}_{ij}(\mathbf{X}^T_i\mathbf{U}_k')(\mathbf{V}_k'^T\mathbf{Y}_j) = \sum_{i=1}^{\rho}\sum_{j=1}^{\phi}\mathcal{A}_{ij}\mathbf{X}_i^T(\mathbf{U}_k'\mathbf{V}_k'^T)\mathbf{Y}_j\\\mathbf{f} = \mathbf{P}^T\mathbf{f}' = BAN(\mathbf{X}, \mathbf{Y}; \mathcal{A})
The attention map between the two input channels can be calculated similarly to MLB.
\mathcal{A}_g = \text{softmax}(((\mathbb{1}\cdot\mathbf{p}_g^T)\circ \mathbf{X}^T\mathbf{U})\mathbf{V}^T\mathbf{Y})
BAN thus amounts to a linear projection over multiple co-attention maps.
Low-rank bilinear pooling allowed efficient calculation of multiple glimpses of attention distribution given a single reference vector. BAN generalized this idea for two multi-channel inputs.
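As a concrete illustration, here is a small numpy sketch of the low-rank bilinear form above (toy shapes and random values of my own choosing, not from the paper), computing $\mathbf{f}'$ both with einsum and with the explicit double sum:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, phi, dx, dy, K, C = 4, 3, 8, 6, 5, 7       # toy sizes (assumptions)
X = rng.normal(size=(dx, rho))                  # visual features, one column per region
Y = rng.normal(size=(dy, phi))                  # question features, one column per word
U = rng.normal(size=(dx, K))                    # low-rank projection U'
V = rng.normal(size=(dy, K))                    # low-rank projection V'
A = rng.random(size=(rho, phi)); A /= A.sum()   # bilinear attention map
P = rng.normal(size=(K, C))                     # pooling matrix

# f'_k = (X^T U')_k^T A (Y^T V')_k, vectorized over k with einsum
Xp, Yp = X.T @ U, Y.T @ V                       # shapes (rho, K) and (phi, K)
f_prime = np.einsum('ik,ij,jk->k', Xp, A, Yp)
f = P.T @ f_prime                               # BAN(X, Y; A), shape (C,)

# sanity check against the explicit double sum in the note
f_loop = np.array([sum(A[i, j] * Xp[i, k] * Yp[j, k]
                       for i in range(rho) for j in range(phi)) for k in range(K)])
assert np.allclose(f_prime, f_loop)
```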
Reading and writing in DNC are implemented with differentiable attention mechanism.
The controller of the DNC is a variant of the LSTM architecture that takes an input vector ($x_t$) and a set of read vectors ($r_{t-1}^1,...,r_{t-1}^R$), concatenated, as input. This input and the hidden vectors from the previous timestep ($h_{t-1}^l$) and from the previous layer ($h_t^{l-1}$) are concatenated again and used as input for the LSTM to produce the next hidden vector ($h_t^l$). The hidden vectors from all layers at a timestep are concatenated to emit an output vector ($\upsilon_t$) and an interface vector ($\xi_t$). The output vector ($y_t$) is the sum of $\upsilon_t$ and a projection of the read vectors of the current timestep.
\upsilon_t = W_y[h_t^1;...;h_t^L]\\\xi_t = W_{\xi}[h_t^1;...;h_t^L]\\y_t = \upsilon_t + W_r[r_t^1;...;r_t^R]
The interface vector consists of many components that interact with memory: $R$ read keys ($\mathbf{k}_t^{r,i}\in \mathbb{R}^W$), read strengths ($\beta_t^{r,i}$), a write key ($\mathbf{k}_t^w\in \mathbb{R}^W$), a write strength ($\beta_t^w$), an erase vector ($\mathbf{e}_t\in \mathbb{R}^W$), a write vector ($\mathbf{v}_t\in \mathbb{R}^W$), $R$ free gates ($f_t^i$), the allocation gate ($g_t^a$), the write gate ($g_t^w$) and $R$ read modes ($\mathbf{\pi}_t^i$).
\mathbf{\xi}_t = [\mathbf{k}_t^{r,1};...;\mathbf{k}_t^{r,R};\beta_t^{r,1};...;\beta_t^{r,R};\mathbf{k}_t^w;\beta_t^w;\mathbf{e}_t;\mathbf{v}_t;f_t^1;...;f_t^R;g_t^a;g_t^w;\mathbf{\pi}_t^1;...;\mathbf{\pi}_t^R]
Read vectors are computed by applying the read weights to the memory. The memory matrix is updated with the write weights, the write vector and the erase vector.
\mathbf{r}_t^i = M_t^T\mathbf{w}_t^{r,i}\\M_t = M_{t-1}\odot(E-\mathbf{w}^w_t\mathbf{e}_t^T)+\mathbf{w}^w_t\mathbf{v}_t^T
Memory is addressed with content-based addressing and dynamic memory allocation. Content-based addressing is essentially the same as an attention mechanism. Dynamic memory allocation is designed to free and reuse memory, analogous to a free-list memory allocation scheme.
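A minimal numpy sketch of one read/write cycle (toy sizes of my own choosing; the content-addressing helper follows the usual cosine-similarity-plus-softmax scheme, and the temporal-link and allocation machinery is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
N, W, R = 6, 4, 2                     # memory slots, word size, read heads (toy sizes)
M = rng.normal(size=(N, W))           # memory matrix M_{t-1}

def content_addressing(M, key, beta):
    # softmax over cosine similarity between the key and each memory row,
    # sharpened by the strength beta
    sims = (M @ key) / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    return w / w.sum()

w_w = content_addressing(M, rng.normal(size=W), beta=5.0)  # write weighting
e = rng.random(W)                                          # erase vector, entries in (0, 1)
v = rng.normal(size=W)                                     # write vector

# M_t = M_{t-1} o (E - w^w e^T) + w^w v^T   (o = elementwise product)
M = M * (1.0 - np.outer(w_w, e)) + np.outer(w_w, v)

# r_t^i = M_t^T w_t^{r,i} for each read head
w_r = np.stack([content_addressing(M, rng.normal(size=W), 5.0) for _ in range(R)])
r = w_r @ M                                                # (R, W) read vectors
```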
So?
DNC showed good results on the bAbI task and on graph tasks.
Guitar Speaker Power Handling
The power rating of a guitar speaker is an indication of how much power it can handle without being damaged thermally or mechanically. It is not an indication of how loud the speaker will sound in comparison to other speakers. Let us examine the basics of how speakers work, how the power rating is determined and look at things from the perspective of the guitar amplifier so that we can choose guitar speakers that will last a lifetime.
How an Electrodynamic Loudspeaker Works
Guitar speakers are a type of loudspeaker known as electrodynamic or "moving coil" loudspeakers. The magnetic circuit (composed of front plate, back plate, pole piece and magnet) and voice-coil make up the motor of a guitar speaker. An alternating electrical current flowing through the voice-coil generates an alternating magnetic field perpendicular to flow of current through the coil. The magnetic circuit creates a strongly focused magnetic field in the air gap between the front plate and the pole piece on which the voice-coil is centered. The voice-coil is pushed and pulled through the air gap based on the interaction between these two magnetic fields. Since the speaker cone is connected to the voice-coil, it now has a mechanical force with which to push air particles and make sound.
Thermal Damage
Speakers are transducers. They convert electrical energy provided by an amplifier into acoustical energy. They are actually very inefficient transducers because most of the electrical energy is converted into heat instead of sound. The reference efficiency (ratio of acoustic power out to electrical power in) for most guitar speakers is around 2% to 6%, which means that between 94% and 98% of the electrical energy is dissipated in the form of heat.
From the aspect of power dissipation, a guitar speaker can be modeled as a resistor. Most guitar amp enthusiasts are familiar with the equation for electrical power and how it can be used to determine the power dissipated across a resistor. Resistors can be thought of as transducers that intentionally convert electrical energy to heat in order to create a voltage drop. Resistors have a power rating that indicates how much power they can dissipate before being damaged and this rating is analogous to the speaker power rating.
Equation for Power$$P = \frac{V^2}{R}$$
where ~P~ = power, ~V~ = voltage, ~R~ = resistance
For example:$$P = \frac{V^2}{R} = \frac{{20}^2}{8} = \frac{400}{8} = 50\text{ W}$$
The voice-coil is the electrical interface of the speaker and is given a "nominal impedance" specification (e.g. 4, 8 or 16 ohm) which can be used to approximate power dissipation when connected to an amplifier. (The actual impedance of the voice-coil varies with frequency). Just as a resistor will eventually burn up if its power rating is not high enough, the speaker's voice-coil will burn up if it is overpowered by the amplifier. One of the most common symptoms of an overpowered speaker is a burned voice-coil, which usually measures as an open circuit on an ohm meter. No sound can be produced by a speaker with an open voice-coil. An overheated voice-coil former may also become warped and begin to rub against the pole piece causing the speaker to buzz loudly.
Mechanical Damage
Loudspeakers can be damaged mechanically by over-excursion of the voice-coil and cone. This is more common with old speakers that have worn suspensions and adhesives, but may also occur at extreme low frequencies outside of the speaker's useable frequency range. When over-excursion occurs, the voice-coil can become misaligned or bottom out. The cone and suspension (surround and spider) can also become stretched or torn.
How the Speaker Power Rating is Determined
Many speaker manufacturers rate their speakers based on industry standards similar to IEC 60268-5 or AES2-1984. These standards specify a pink noise test signal with a crest factor of two (i.e. 6 dB) which is meant to simulate the transient character of music having an average value, as well as frequent instantaneous voltage spikes that swing up to twice the average value. Pink noise is a particular type of random noise with equal energy per octave and actually sounds like a space shuttle launch. The test signal is applied to the test speaker for a few hours, allowing for a reasonable way to test the speaker's real world thermal and mechanical capabilities. After testing, the speaker must be in working order, without permanent alteration of its technical features. The power rating is calculated using the RMS value of the applied voltage and the minimum value of electrical impedance within the working range of the speaker.
Guitar Amplifier Power Output Ratings
The power output rating of a guitar amp is mostly a ballpark figure for what it can put out. Amp specifications commonly list power output in a form similar to the following:
Power Output: 50W into 8ohm at 5% THD
This type of power output rating is obtained by using a sine wave from a signal generator (usually 1 kHz) as the input signal. The 5% THD (total harmonic distortion) figure means that the sine wave was able to generate 50W of power output with relatively low distortion (near the threshold of clipping or overdrive). THD measurements were one of the first conventions used to objectively compare the fidelity of audio amplifiers.
Guitar amps are unconventional audio amplifiers. While most audio amplifiers are designed to keep distortion as low as possible, guitar amplification has evolved to where overdrive distortion is usually a requirement. For example, the Marshall® JCM800 2203 is a 100W tube amp that has a highly regarded overdrive sound. The owner's manual lists the power output as follows:
Typical power at clipping, measured at 1kHz, average distortion 4% 115 watts RMS into 4, 8, 16 ohms. Typical output power at 10% distortion 170 watts into 4 ohms.
This example shows that for many guitar amplifiers, the power output rating (100W in this case) is not a maximum power output rating, but more of a ballpark clean power output specification.
RMS and Overdrive Distortion
RMS (root mean square) is a kind of average value that can be used to compare the power dissipation from different signals on equal terms. For example, a 20 VDC power supply dissipates the same amount of heat across an 8 ohm resistor as a sine wave with an RMS value of 20 VAC.
General Equation for the RMS value of a periodic function$$V_{RMS} = \sqrt{\frac{1}{T} \int_{0}^{T} [V(t)]^2 dt}$$
where the Root, Mean and Square portions of the equation are, respectively, the outer square root, the averaging term $\frac{1}{T}\int_{0}^{T}\cdots\,dt$ over one period, and the squared signal $[V(t)]^2$.
Guitar amp output ratings are usually based on a sine wave at low distortion, but if the volume is turned up further or a gain boosting effect is used, the sine wave becomes more overdriven and can approach the shape of a square wave. The RMS value of a square wave is equal to its amplitude, while the RMS value of a sine wave is equal to its amplitude divided by the square root of two.
Plugging the RMS values into the equation for power shows that a square wave dissipates twice as much power across the same load as a sine wave with the same amplitude.
Power calculation for a sine wave$$P = \frac{(\frac{V_m}{\sqrt{2}})^2}{R}$$$$P = \frac{\frac{{V_m}^2}{2}}{R}$$$$P = \frac{1}{2} \times \frac{{V_m}^2}{R}$$ Power calculation for a square wave$$P = \frac{{V_m}^2}{R}$$
This simplified overdrive distortion model illustrates how the 100 watt Marshall® amp, which puts out 115 watts at 4% THD, could put out an additional 55 watts at 10% THD.
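The sine-versus-square comparison can be checked numerically. This short script (mine) uses the same 20 V amplitude and 8 ohm load as the earlier example:

```python
import numpy as np

Vm, R = 20.0, 8.0                                   # amplitude (V) and load (ohms)
t = np.linspace(0.0, 1.0, 200000, endpoint=False)   # one period, densely sampled
sine = Vm * np.sin(2 * np.pi * t)
square = Vm * np.sign(np.sin(2 * np.pi * t))

def power(v, R):
    vrms = np.sqrt(np.mean(v ** 2))                 # numerical RMS
    return vrms ** 2 / R

# power(sine, R) ~= Vm^2/(2R) = 25 W; power(square, R) ~= Vm^2/R = 50 W,
# i.e. the square wave dissipates twice the power of a sine of equal amplitude
```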
Tube vs. Solid State Outputs
Many tube guitar amps use output transformers with secondary taps connected to an impedance switch allowing for the same power output when connected to 4, 8 or 16 ohm load impedances. Solid state amps do not use output transformers and do not have the same power output when connected to different load impedances.
For tube outputs, it is important to match the load impedance to the amp's output impedance. For solid state outputs, it is important to use a load that is greater than or equal to the rated minimum load impedance and to know the amp's power output at that load. For example, the Fender M-80 is a solid state amp rated for 69 W(RMS) at 5% THD into 8 ohms and 94 W(RMS) at 5% THD into 4 ohms (the minimum load impedance).
With solid-state amps, overdrive distortion generated by the power-amp is not generally considered musically pleasing, so most people will not exceed the amp's low THD power rating. Tube power amps, on the other hand, are often played well beyond their low THD rating.
Amps with Multiple Speakers
When an amp uses multiple speakers the output power is divided between them. The nominal impedance of each speaker should be the same value so that power is distributed equally and so that the output impedance of the amplifier can be matched.
Formula for calculating the equivalent overall impedance of speakers wired in parallel
~Z_{\text{total}}~ = Equivalent Overall Impedance
~Z_1~ = Impedance of speaker 1, etc.$$ Z_{\text{total}} = \frac{1}{\frac{1}{Z_1} + \frac{1}{Z_2} + \frac{1}{Z_3} + \ldots + \frac{1}{Z_n}}$$
Example: Two Speakers in Parallel$$Z_{\text{total}} = \frac{1}{\frac{1}{Z_1} + \frac{1}{Z_2}} = \frac{1}{\frac{1}{16Ω} + \frac{1}{16Ω}} = \frac{1}{\frac{1}{8}} = 8Ω$$
Formula for calculating the equivalent overall impedance of speakers wired in series
~Z_{\text{total}}~ = Equivalent Overall Impedance
~Z_1~ = Impedance of speaker 1, etc.$$ Z_{\text{total}} = Z_1 + Z_2 + Z_3 + \ldots + Z_n$$
Example: Two Speakers in Series$$Z_{\text{total}} = Z_1 + Z_2 = 4Ω + 4Ω = 8Ω$$
Choosing Guitar Speakers to Last a Lifetime
There is no standard method used by all amp manufacturers when selecting an appropriate speaker power rating. If you want to choose a speaker to last a lifetime, you will want to choose a speaker that can handle the maximum amount of preamp and power amp overdrive distortion that can possibly be put into it and safely avoid exceeding the speaker's thermal limits. In the case of single speaker setups, this means choosing a speaker rated for at least twice the rated output power of the amp. For multiple speakers, choose twice the rated power that would be distributed to it.
You might decide to go with a lower power rating because you know that you will never be cranked at full power and love the sound of a lower power rated speaker. In the same way you may choose a speaker with a much higher power rating because of the way it sounds.
A Real World Example: Speakers for a Fender® '65 Twin Reverb® Reissue
1) Determine the rated output power of the amp.
Amplifiers have two power ratings: power consumption and power output. The power consumption is always much higher than the power output. In this case the output power is 85 watts RMS and the power consumption is 260 watts.
2) Determine the output impedance for that output power rating.
In this case it is 4 ohm.
3) Determine the number of speakers.
In this case there are two 12" (8 ohm) speakers wired in parallel for an overall impedance of 4 ohms.
For this amp, speaker choices to last a lifetime should be rated for at least 85 watts each. There are a lot of speaker choices rated for 100 watts and this rating would be very safe. Actually, the stock speaker for this amp is the Jensen C12K and it is rated for 100 watts.
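The selection procedure can be condensed into two helper functions (names are mine). For the Twin Reverb example they reproduce the 4 ohm overall load and the 85 W per-speaker figure:

```python
def parallel_impedance(*z):
    # equivalent impedance of speakers wired in parallel
    return 1.0 / sum(1.0 / zi for zi in z)

def min_speaker_rating(amp_watts, n_speakers):
    # the article's rule of thumb: at least twice the power distributed to each speaker
    return 2.0 * amp_watts / n_speakers

parallel_impedance(8, 8)     # two 8 ohm speakers in parallel -> 4.0 ohms
min_speaker_rating(85, 2)    # 85 W amp, two speakers -> 85.0 W rating each
```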
By Kurt Prange (BSEE), Sales Engineer for Antique Electronic Supply - based in Tempe, AZ. Kurt began playing guitar at the age of nine in Kalamazoo, Michigan. He is a guitar DIY'er and tube amplifier designer who enjoys helping other musicians along in the endless pursuit of tone.
Let $G=\mathrm{Gl}_n\mathbb C$ and let $X$ be an affine $G$-variety. Let $\phi:\tilde X\to X$ be the normalization of $X$, i.e. the spectrum of the integral closure of $\mathbb C[X]$ in its fraction field. Can $\tilde X$ be given the structure of a $G$-variety such that $\phi$ is equivariant?
Here is the sort of example I think Jason Starr was raising. (I looked at Brian's webpage, but it wasn't obvious which paper to read.) Take $k$ to be a perfect field of characteristic $p$, with $p \neq 0$, $2$. Let $A = k[x,y]/(y^2-x^p)$. The normalization of $A$ is $\tilde{A} = k[t]$, with $y=t^p$ and $x=t^2$.
Let $G$ be the group scheme with underlying space $k[\epsilon]/\epsilon^p$ and multiplication given by the map $\epsilon \to \epsilon \otimes 1 + 1 \otimes \epsilon$. If, like me, you prefer to think in terms of functors of points, $G(R) = \{ \epsilon \in R : \epsilon^p =0 \}$ and the multiplication map $G(R) \times G(R) \to G(R)$ is $(\epsilon_1, \epsilon_2) \mapsto \epsilon_1 + \epsilon_2$.
Let $G(R)$ act on $A(R)$ by $(\epsilon, (x,y)) \mapsto (x+2 \epsilon, y)$. If you prefer maps of algebras, $x \mapsto 1 \otimes x + 2 \epsilon \otimes 1$, $y \mapsto 1 \otimes y$. I claim that this action does not lift to $\tilde{A}$. Suppose, to the contrary, that $t \mapsto 1 \otimes t_0 + \epsilon \otimes t_1 + \cdots + \epsilon^{p-1} \otimes t_{p-1}$, with the $t_i \in k[t]$. Writing out that the action must preserve the relation $t x^{(p-1)/2} = y$ gives $$ \left( 1 \otimes t_0 + \epsilon \otimes t_1 + \cdots \right) (1 \otimes t^2 + 2 \epsilon \otimes 1)^{(p-1)/2} = 1 \otimes t^p $$
Equating the coefficients of $1$ and $\epsilon$ gives $t_0 t^{p-1} = t^p$ and $t_1 t^{p-1} + (p-1) t_0 t^{p-3} = 0$. So $t_0=t$ and $t_1 = 1/t$. But $1/t$ isn't in $k[t]$.
Morally, the action wants to be $(\epsilon, t) \mapsto t \sqrt{1+2\epsilon t^{-2}}=1+\epsilon t^{-1} - (1/2) \epsilon^2 t^{-3} + \cdots $. The trouble is that $\left( t \sqrt{1+2\epsilon t^{-2}} \right)^k$ is in $k[t, \epsilon]/\epsilon^p$ when $k$ is even or is $\geq p$, but not in general.
This failure can never occur when $G$ is normal. Over a field of characteristic zero, all group schemes are regular, and regular implies normal, so there are no counterexamples over a field of characteristic zero.
Recall the universal property of normalization: for any normal variety $Y$, the induced map $\mathrm{DomHom}(Y, \tilde{X}) \to \mathrm{DomHom}(Y,X)$ is bijective, where $\mathrm{DomHom}$ denotes dominant morphisms.
Proof: If $G$ is normal, then $G \times \tilde{X}$ is normal. We have a map $G \times \tilde{X} \to G \times X \to X$, where the first map is $\mathrm{Id} \times \phi$ and the second map is the group action. By the universal property of the normalization, there is a map $G \times \tilde{X} \to \tilde{X}$ making the obvious diagram commute. We claim that this map gives an action of $G$ on $\tilde{X}$.
Consider the two maps $G \times G \times \tilde{X} \to \tilde{X}$. We must show they are equal. Again, by the universal property of normalization, it is enough to show that the two compositions $G \times G \times \tilde{X} \to X$ are equal. But these are the same as $G \times G \times \tilde{X} \to G \times G \times X$, followed by the two maps $G \times G \times X \to X$, and these are equal because $G$ acts on $X$.
I believe the answer is yes. Since $X$ is an affine $G$-variety, $G$ acts on $\mathbb{C}[X]$ by $\mathbb{C}$-algebra automorphisms. This yields an action of $G$ on the fraction field of $\mathbb{C}[X]$ by algebra automorphisms, which restricts to an action on the integral closure of $\mathbb{C}[X]$, so that the inclusion of $\mathbb{C}[X]$ into its integral closure is $G$-equivariant. We can now apply Spec to this inclusion to obtain the desired equivariant map.
Jaume Oliver Lafont
Electronics Engineer
The simplest rearrangements of the cancelling harmonic series converge to the logarithms of positive rationals.
$$ \log\left(\frac{p}{q}\right)=\sum_{i=0}^\infty \left(\sum_{j=pi+1}^{p(i+1)}\frac{1}{j}-\sum_{k=qi+1}^{q(i+1)}\frac{1}{k}\right) $$
$\pi^2$ is so close to $10$ because $$\sum_{k=0}^\infty\frac{1}{((k+1)(k+2))^3}$$ is small.
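Both identities are easy to check numerically; the truncation limits and tolerances below are mine:

```python
from math import pi, log

# log(p/q) as the simplest rearrangement of the cancelling harmonic series:
# block i contributes 1/(pi+1)+...+1/(p(i+1)) minus 1/(qi+1)+...+1/(q(i+1))
def log_ratio(p, q, terms=20000):
    total = 0.0
    for i in range(terms):
        total += sum(1.0 / j for j in range(p * i + 1, p * (i + 1) + 1))
        total -= sum(1.0 / k for k in range(q * i + 1, q * (i + 1) + 1))
    return total

# pi^2 = 10 - sum_{k>=0} 1/((k+1)(k+2))^3, so the sum measures how close pi^2 is to 10
correction = sum(1.0 / ((k + 1) * (k + 2)) ** 3 for k in range(10000))

# log_ratio(3, 2) ~= log(3/2); 10 - correction ~= pi^2
```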
I'm reading about Convolutional Neural Networks (CNNs) in Deep Learning by Ian Goodfellow.
CNNs are different from traditional neural networks in that they use convolution in place of general matrix multiplication in at least one of their layers. The convolution is introduced as follows:
Suppose that we are tracking the location of a spaceship with a laser sensor. Our laser provides a single output $x(t)$, the position of the spaceship at time $t$. Both $x$ and $t$ are real-valued, that is, we can get a different reading from the laser sensor at any instant in time. Now suppose our laser is somewhat noisy. To obtain a less noisy estimate of the spaceship's position, we would like to average several measurements. Of course, more recent measurements are more relevant, so we will want this to be a weighted average that gives more weight to recent measurements. We can do this with a weighting function $w(a)$, where $a$ is the age of a measurement. If we apply such a weighted average operation at every moment, we obtain a new function $s$ providing a smoothed estimate of the position of the spaceship: $$\displaystyle s(t) = \int x(a)w(t - a)da$$ This operation is called
convolution
$\ldots$
In convolutional network terminology, the first argument (in this example, the function $x$) to the convolution is often referred to as the input, and the second argument (in this example, the function $w$) as the kernel. The output is sometimes referred to as the feature map.
$\ldots$
In machine learning applications, the input is usually a multidimensional array of data, and the kernel is usually a multidimensional array of parameters that are adapted by the learning algorithm. We will refer to those multidimensional arrays as tensors. Because each element of the input and kernel must be explicitly stored separately, we usually assume that these functions are zero everywhere but in the finite set of points for which we store the values. This means that in practice, we can implement the infinite summation as a summation over a finite number of array elements. Finally, we often use convolutions over more than one axis at a time. For example, if we use a two-dimensional image $I$ as our input, we probably also want to use a two-dimensional kernel $K$: $$S(i,j) = (I*K)(i,j) = \sum_m\sum_nI(m,n)K(i-m,j-n)$$
(I assume that $S(i,j)$ means the feature map at point $(i,j)$).
The author then gives an example of $2$-D convolution with the following image:
I don't understand how this image illustrates what the author explains earlier. If we consider the input an image, then $a$ would resemble $I(0,0)$ right? Using the given definition of the feature map I find that $S(0,0) = \sum_m\sum_nI(m,n)K(0-m,0-n) = I(0,0)K(0,0) = aw$. Since e would resemble $I(1,0)$ I find that $$S(1,0) = \sum_m\sum_nI(m,n)K(1-m,0-n) = I(0,0)K(1,0) + I(1,0)K(0,0) = ay + ew$$ However, according to the image the output would be $aw + bx + ey + fz$.
Question: Why does the output equal $aw + bx + ey + fz$? Edit: If the displayed outputs (feature maps) are only the outputs corresponding to the internal points on the grid, then I think I understand the figure. It would mean that the highlighted output corresponds to $S(1,1)$ and that the outputs $S(0,0), S(1,0), \ldots$ are simply not shown here, right?
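A common resolution of this puzzle: the book's figure shows a cross-correlation (no kernel flip), while the displayed formula, taken literally, flips the kernel. The following self-contained numpy sketch (function names are mine) computes both conventions for a $2\times 2$ example, reproducing both the figure's $aw + bx + ey + fz$ and the flipped-kernel value:

```python
import numpy as np

def xcorr2d_valid(I, K):
    # 'valid' cross-correlation: slide K over I *without* flipping it,
    # which is what the figure (and most deep learning libraries) computes
    kh, kw = K.shape
    H, W = I.shape
    return np.array([[np.sum(I[i:i + kh, j:j + kw] * K)
                      for j in range(W - kw + 1)] for i in range(H - kh + 1)])

def conv2d_valid(I, K):
    # true convolution = cross-correlation with the kernel flipped on both axes
    return xcorr2d_valid(I, np.flip(K))

a, b, e, f = 1.0, 2.0, 3.0, 4.0
w, x, y, z = 10.0, 20.0, 30.0, 40.0
I = np.array([[a, b], [e, f]])
K = np.array([[w, x], [y, z]])
xcorr2d_valid(I, K)[0, 0]   # a*w + b*x + e*y + f*z = 300.0 (the figure's value)
conv2d_valid(I, K)[0, 0]    # a*z + b*y + e*x + f*w = 200.0 (flipped kernel)
```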
Thanks in advance! |
Printing with FLTK
Revision as of 16:15, 19 August 2014
Printing with graphics_toolkit FLTK has some known limitations:
* Tex/Latex symbols won't show up in the generated print even if they are visible in the plot window. See bugs #42988, #42320, #42340 which are mostly duplicate entries for the same problem.
* Can't print multiline text objects: bug #31468
However there are some ways to overcome these:
Use print [ps|eps|pdf] latex [standalone] for symbols and formulas
See "help print" for a description of 'pslatex', 'epslatex' 'pdflatex', 'pslatexstandalone', 'epslatexstandalone', 'pdflatexstandalone'
Code: print with fltk and epslatexstandalone
close all
graphics_toolkit fltk
sombrero ();
title ("The sombrero function:")
fcn = "$z = \\frac{\\sin\\left(\\sqrt{x^2 + y^2}\\right)}{\\sqrt{x^2 + y^2}}$";
text (0.5, -10, 1.8, fcn, "fontsize", 20);
print -depslatexstandalone sombrero
## process generated files with latex
system ("latex sombrero.tex");
## dvi to ps
system ("dvips sombrero.dvi");
## convert to png for wiki page
system ("gs -dNOPAUSE -dBATCH -dSAFER -sDEVICE=png16m -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r100x100 -dEPSCrop -sOutputFile=sombrero.png sombrero.ps")
Use psfrag
TODO: Fill me! |
Eigenfunction expansion method and the long-time asymptotics for the damped Boussinesq equation
Department of Mathematics, The University of Texas at Austin, Austin, TX 78712-1082, United States
We consider the damped Boussinesq equation
$u_{t t} - 2b\Delta u_t = -\alpha \Delta^2 u + \Delta u + \beta\Delta(u^2)$
in a unit ball $B$. Homogeneous boundary conditions and small initial data are examined. The existence of mild global-in-time solutions is established in the space $C^0([0,\infty), H^s_0(B))$, $s < 3/2$, and the solutions are constructed as expansions in the eigenfunctions of the Laplace operator in $B$. For $-3/2+\varepsilon \leq s < 3/2$, where $\varepsilon > 0$ is small, uniqueness is proved. The second-order long-time asymptotics is calculated; it is essentially nonlinear and exhibits nonlinear mode multiplication.
Keywords: construction of solutions, damped Boussinesq equation, higher-order long-time asymptotics. Mathematics Subject Classification: 35K20, 35K55, 35B9. Citation: Vladimir Varlamov. Eigenfunction expansion method and the long-time asymptotics for the damped Boussinesq equation. Discrete & Continuous Dynamical Systems - A, 2001, 7 (4): 675-702. doi: 10.3934/dcds.2001.7.675
$$\int_{0}^{3} {(x^2+1)}d[x]$$ is equal to ___________ .
Attempt
$[x]$ is constant on each interval between consecutive integers, so on those intervals $d[x]=0$ and the given integral should be zero. Am I right? Though the answer given is $\dfrac{17}{2}$.
You must be careful about how you define your integral. I am guessing that you are defining $$ \int_a^b f(x) dg(x) = \lim \sum_i f(c_i) (g(x_{i+1}) - g(x_i)),$$ where $c_i \in [x_i, x_{i+1}]$, and the limit is really a limit over partitions of the interval $[a,b]$. This is typically called the Riemann-Stieltjes integral.
Then in your case, $d \lfloor x \rfloor$ is $0$ when it's defined, but at integer values one must be a bit delicate. For instance, looking just at $$ \int_{1/2}^{3/2} (x^2 + 1) d\lfloor x \rfloor = \lim \sum (x_i^2 + 1)\Big( \lfloor x_{i+1} \rfloor - \lfloor x_i \rfloor\Big),$$ all summands are zero except for the one summand where $x_i < 1$ and $x_{i+1} \geq 1$. For that one term, $\lfloor x_{i+1} \rfloor - \lfloor x_i \rfloor = 1$. And as the partitions of $[1/2, 3/2]$ become finer, the endpoints $x_i$ and $x_{i+1}$ (by which I mean the two partition points surrounding $1$ in the corresponding partition --- there is a minor abuse of notation here) both approach $1$. Thus $$ \int_{1/2}^{3/2} (x^2 + 1) d\lfloor x \rfloor = \lim (x_i^2 + 1) (1) = 1^2 + 1 = 2.$$
In fact, what this really is is exactly the value of $x^2 + 1$ at $1$, and more generally $$ \int_a^b f(x) d \lfloor x \rfloor = \sum_{a < n \leq b} f(n)$$ for a continuous function $f$.
Having described this, I hope it is now not so hard to compute the entire integral.
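As a quick numerical check of this formula (script and tolerances are mine): approximating the Riemann-Stieltjes sum on a fine uniform partition of $[0,3]$ recovers $f(1)+f(2)+f(3) = 2+5+10 = 17$ for $f(x)=x^2+1$:

```python
import math

def rs_floor(f, a, b, n=200000):
    # Riemann-Stieltjes sum  sum_i f(x_i) (floor(x_{i+1}) - floor(x_i))
    # with left-endpoint tags on a uniform partition of [a, b]
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f(xs[i]) * (math.floor(xs[i + 1]) - math.floor(xs[i]))
               for i in range(n))

rs_floor(lambda x: x * x + 1, 0, 3)  # picks up the jumps at 1, 2 and 3, giving ~17.0
```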
In your reasoning you forget that at the integers $d[x]$ has point masses. To write the integral in a tractable way you should use $$ \int_a^b f(x) dy = \int_a^b f(x) \frac{dy}{dx} dx $$ and apply it to your integral, yielding $$ \int_0^3 (x^2+1) \frac{d[x] }{dx} dx. $$ The value of $\frac{d[x] }{dx}$ is a sum of Dirac deltas. However, there is a delta at each of the endpoints of the integration domain, and the value of an integral whose integrand contains a delta at an endpoint is not defined, so you cannot assign a value to this integral.
Brownian Motion Between Two Random Trajectories
2019, v.25, Issue 2
ABSTRACT
Consider the first exit time of one-dimensional Brownian motion $\{B_s\}_{s\geq 0}$ from a random passageway. We discuss a Brownian motion with two time-dependent random boundaries in the quenched sense. Let $\{W_s\}_{s\geq 0}$ be another one-dimensional Brownian motion independent of $\{B_s\}_{s\geq 0}$ and let $\mathbf{P}(\cdot|W)$ denote the conditional probability given the realization of $\{W_s\}_{s\geq 0}$.
We show that $$-t^{-1}\ln\mathbf{P}^x(\forall_{s\in[0,t]}\; a+\beta W_s\leq B_s\leq b+\beta W_s|W)$$ converges to a finite positive constant $\gamma(\beta)(b-a)^{-2}$ almost surely and in $L^p$ $(p\geq 1)$ if $a<B_0=x<b$ and $W_0=0$. When $\beta=1,\ a+b=2x$, this is equivalent to the random small ball probability problem in the sense of equidistribution, which has been investigated in [DL2005]. We also establish some properties of the function $\gamma(\beta)$, and an important moment estimate is obtained, which can be applied to the small deviations of random walks in a time-dependent random environment (see [Lv2018]).
Keywords: Brownian motion, first exit time, random boundary, limit theorem
In the opening scene of “The Euclid Alternative”, we see Sheldon (Jim Parsons) demanding that Leonard (Johnny Galecki) drive him around to run various errands, even though Leonard has just spent the night in the lab using the new free-electron laser to perform X-ray diffraction experiments. On the whiteboard in the background, we can see equations that describe a rolling ball problem.
Rolling motion plays an important role in many familiar situations, so this type of motion is paid considerable attention in many introductory mechanics courses in physics and engineering. One of the more challenging aspects to grasp is that rolling (without slipping) is a combination of both translation and rotation, where the point of contact is instantaneously at rest. The equations on the whiteboard describe the velocity at the point of contact on the ground, at the center of the object and at the top of the object.
Pure Translational Motion
When an object undergoes pure translational motion, all of its points move with the same velocity as the center of mass: each point moves at the same speed and in the same direction, \(v_{\textrm{cm}}\).
Pure Rotational Motion
In the case of a rotating body, the speed of any point on the object depends on how far it is from the axis of rotation; in this case, the center. For rolling without slipping, the speed of a point on the edge (measured relative to the center) must equal \(v_{\textrm{cm}}\). We may think that all these points moving at different speeds poses a problem, but we know something else — the object’s angular velocity.
The angular speed tells us how fast an object rotates. In this case, we know that all points along the object’s surface complete a revolution in the same time. In physics, we define this by the equation: \begin{equation} \omega=\frac{v}{r} \end{equation} where \(\omega\) is the angular speed. We can rewrite this equation to tell us the speed of any point at a distance \(r\) from the center: \begin{equation} v(r)=\omega r \end{equation} If we look at the center, where \(r=0\), we expect the speed to be zero. When we plug zero into the above equation that is exactly what we get: \begin{equation} v(0)= \omega \times 0 = 0 \label{eq:zero} \end{equation} If we know the object’s speed, \(v_{\textrm{cm}}\), and the object’s radius, \(R\), using a little algebra we can define \(\omega\) as: \[\omega=\frac{v_{\textrm{cm}}}{R}\] or the speed at the edge, \(v(R)\), to be: \begin{equation} v_{\textrm{cm}}=v(R) = \omega R \label{eq:R} \end{equation}
Putting it all Together
To determine the absolute speed of any point of a rolling object we must add the translational and rotational speeds together. Some of the rotational velocities point in the opposite direction from the translational velocity and must be subtracted. As horrifying as this looks, we can reduce the problem somewhat to what we see on the whiteboard. Here we see the boys reduce the problem and look at three key areas: the point of contact with the ground (\(P\)), the center of the object (\(C\)) and the top of the object (\(Q\)).
We have done most of the legwork at this point and now the rolling ball problem is easier to solve.
At point \(Q\)
At point \(Q\), we know the translational speed to be \(v_{\textrm{cm}}\) and the rotational speed to be \(v(R)\). So the total speed at that point is
\begin{equation} v = v_{\textrm{cm}} + v(R) \label{eq:Q1} \end{equation} Looking at equation \eqref{eq:R}, we can write \(v(R)\) as \begin{equation} v(R) = \omega R \end{equation} Putting this into \eqref{eq:Q1}, we get \begin{aligned} v & = v_{\textrm{cm}} + v(R) \\ & = v_{\textrm{cm}} + \omega R \\ & = v_{\textrm{cm}} + \frac{v_{\textrm{cm}}}{R}\cdot R \\ & = v_{\textrm{cm}} + v_{\textrm{cm}} = 2v_{\textrm{cm}} \end{aligned} which looks almost exactly like Leonard’s board, so we must be doing something right.
At point \(C\)
At point \(C\) we know the rotational speed to be zero (see equation \eqref{eq:zero}).
Putting this back into equation \eqref{eq:Q1}, we get \begin{aligned} v & = v_{\textrm{cm}} + v(r) \\ & = v_{\textrm{cm}} + v(0) \\ & = v_{\textrm{cm}} + \omega \cdot 0 \\ & = v_{\textrm{cm}} + 0 \\ & = v_{\textrm{cm}} \end{aligned} Again we get the same result as the board.
At point \(P\)
At the point of contact with the ground, \(P\), we don’t expect a wheel to be moving (unless it skids or slips). If we look at our diagrams, we see that the rotational speed is in the opposite direction to the translational speed and its magnitude is
\begin{aligned} v(R) & = -\omega R \\ & = -\frac{v_{\textrm{cm}}}{R}\cdot R \\ & = -v_{\textrm{cm}} \end{aligned} It is negative because the speed is in the opposite direction. Equation \eqref{eq:Q1} becomes \begin{aligned} v & = v_{\textrm{cm}} + v(r) \\ & = v_{\textrm{cm}} - \omega R \\ & = v_{\textrm{cm}} - \frac{v_{\textrm{cm}}}{R}\cdot R \\ & = v_{\textrm{cm}} - v_{\textrm{cm}} = 0 \end{aligned} Not only do we get the same result for the rolling ball problem we see on the whiteboard but it is what we expect. When a rolling ball, wheel or object doesn’t slip or skid, the point of contact is stationary.
Cycloid and the Rolling ball
If we were to trace the path drawn by a point on the ball we get something known as a cycloid. The rolling ball problem is an interesting one, and it is studied because the body undergoes two types of motion at the same time: pure translation and pure rotation. This means that the point that touches the ground, the contact point, is stationary while the top of the ball moves twice as fast as the center. It seems somewhat counter-intuitive, which is why we don’t often think about it, but imagine if at the point of contact our car’s tires weren’t stationary but moved. We’d slip and slide and not go anywhere fast. But that is another problem entirely.
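As a footnote, the three whiteboard results (\(0\), \(v_{\textrm{cm}}\) and \(2v_{\textrm{cm}}\)) can be checked in a few lines of Python; this is just a sketch, with arbitrary illustrative values for the speed and radius:

```python
v_cm = 3.0          # translational speed of the center (m/s), arbitrary
R = 0.5             # radius of the ball (m), arbitrary
omega = v_cm / R    # rolling-without-slipping condition

def speed(r_signed):
    """Total speed at a point: translation plus the signed rotational part omega*r.
    r_signed is +R at the top (Q), 0 at the center (C), -R at the contact point (P)."""
    return v_cm + omega * r_signed

print(speed(R), speed(0.0), speed(-R))  # 6.0 3.0 0.0
```

The single signed radius captures the "add on top, subtract at the bottom" bookkeeping from the derivation above.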
Does anybody know if there has been any study of the construction of rings and/or fields out of groups, particularly with respect to certain operations? For example, the set of the reals with the operation $a \cdot b := a + b - 1$ forms a commutative group, but how would one find a second operation that would create a field or ring? If it exists, could you construct it out of addition, multiplication and their inverses? How would you approach problems like these in general?
In general, when you have a structure $M$ (in some language $L$; in this case $L$ is the language of rings and the $L$-structure is $\mathbb{R}$) and a set $S$ such that there exists a bijective map $f:M\to S$, you can transport the structure of $M$ to the set $S$ and you get a new $L$-structure. In the case of rings you can define:
$+^\sim : S\times S\to S$ such that for every $(a,b)\in S\times S$ $a+^\sim b:=f(f^{-1}(a)+f^{-1}(b))$
$\cdot^\sim : S\times S\to S$ such that for every $(a,b)\in S\times S$ $a\cdot^\sim b:=f(f^{-1}(a)f^{-1}(b))$
So you can prove that $(S,+^\sim, \cdot^\sim, f(0), f(1))$ is a ring (with unity if $M$ is a ring with unity).
Now we can define a simple bijective map $f:\mathbb{R}\to \mathbb{R}$, the function that maps every $a\in \mathbb{R}$ to $f(a):=a+1$.
In this case $\mathbb{R}$ becomes a new ring with unity under the new operations:
$+^\sim : \mathbb{R} \times \mathbb{R}\to \mathbb{R} $ such that for every $(a,b)\in \mathbb{R}\times \mathbb{R} $ $a+^\sim b:=f(f^{-1}(a)+f^{-1}(b))=$
$f((a-1)+(b-1))=f((a+b)-2)=a+b-1$
$\cdot^\sim : \mathbb{R}\times \mathbb{R}\to \mathbb{R} $ such that for every $(a,b)\in \mathbb{R}\times \mathbb{R} $ $a\cdot^\sim b:=$
$f(f^{-1}(a)f^{-1}(b))=f((a-1)(b-1))=$
$f(ab-(a+b)+1)=ab-(a+b)+2$
You can observe that $f(0)=1$, so the new additive identity is $1$, the old multiplicative identity (and $f(1)=2$, so the new multiplicative identity is $2$, not $0$).
In general for rings you can find a bijective map with $f(0)=1$ and $f(1)=0$.
What is this map?
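A quick numerical sanity check of the transported operations derived above (a sketch; ordinary floats stand in for $\mathbb{R}$):

```python
def add(a, b):
    """Transported addition: a +~ b = a + b - 1."""
    return a + b - 1

def mul(a, b):
    """Transported multiplication: a *~ b = ab - (a + b) + 2."""
    return a * b - (a + b) + 2

# f(0) = 1 is the new additive identity, f(1) = 2 the new multiplicative identity
for a in [-3.5, 0.0, 7.0]:
    assert add(a, 1) == a
    assert mul(a, 2) == a

# distributivity is inherited from R through f: a *~ (b +~ c) == (a *~ b) +~ (a *~ c)
a, b, c = 2.0, 5.0, -1.0
assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
```

The assertions hold automatically because every axiom is pulled back through the bijection $f$.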
This problem arose in an algebraic geometry course I'm taking, and my understanding of it comes from Shafarevich's "Basic Algebraic Geometry." The question is this: Given a projective variety $$X = V(y^2z - x^3 - xz^2)$$ and a divisor $D = 2[0:0:1]$, find a basis for $\mathcal{L} (D)$. That is, I'm trying to find a rational map corresponding to $D$ that lets me construct any other map from $X$ corresponding to $D$ by composing it with various projection maps. I'm working over $\mathbb{C}$.
This is my approach:
A basis of rational functions $\{f_0, \cdots , f_r\}$ for $\mathcal{L} (D)$ must satisfy $div(f_i) + D \geq 0$. Thus any such $f$ can have a pole only at $[0:0:1]$, of order at most 2. If I write any such $f$ as a quotient of 1-forms, Bézout's theorem forces additional zeros and poles, which have to cancel between the numerator and denominator.
I then found the divisors of different 1-forms in x, y, and z, which really amounted to finding $div(x), div(y), div(z)$ then using properties of valuations to conclude that, if the denominator is a 1-form, it has to be $x$, as only $div(x)$ has the point at infinity with multiplicity 2. That is, $$div(x) = 2[0:0:1] + [0:1:0].$$ I then found that the only rational functions in $k(X)$ that are in $\mathcal{L} (D)$ are of the form $$f = \frac{az + by}{x}$$ and so a basis for $\mathcal{L} (D)$ is $\{ f_1 = \frac{z}{x}, f_2 = 1\}$, whence we find our rational map from $X$ associated to $D$ is $$\phi = [f_1: f_2].$$ We can clear the denominator to then have $$\phi = [z:x].$$
Is this correct? This is the first time I've attempted to construct such a rational map from a divisor, so I'm not sure the process was correct. I'm concerned because this isn't defined at $[0:1:0]$, and I was expecting it to be defined everywhere on X.
Regards,
Garnet
EDIT: I'll show how I found $div(x)$ using Bézout's theorem. The points of intersection of $V(y^2z - x^3 - xz^2)$ with $V(x)$ are easily seen to be $[0:0:1]$ and $[0:1:0]$. Using Bézout's theorem, I know that X and $V(x)$ have to intersect in three points with multiplicity. I'll find the intersection multiplicity of $[0:1:0]$:
To do this, I'll intersect both X and $V(x)$ with the open set $U_y = \{[x:1:z]\}$ and work in affine space. In particular, $$X \cap U_y = V(z-x^3-xz^2)$$ and $V(x) \cap U_y$ is just $V(x)$; I then consider the point $p = (0,0)$.
Because the tangent to $X \cap U_y$ is given by the line $z=0$ and $X \cap U_y$ is nonsingular at $(0,0)$, I conclude that the maximal ideal of the local ring of $X \cap U_y$ at p is $$\mathfrak{M}_p(X\cap U_y) = (x)$$. From this it's easy to see that the intersection multiplicity of $X\cap U_y$ with $V(x)$ at $(0,0)$ is just 1, whence the intersection multiplicity of $X$ with $V(x)$ at $[0:1:0]$ is 1. By Bézout's theorem, I can immediately conclude that the intersection multiplicity of $X$ with $V(x)$ at $[0:0:1]$ is 2, so $$div(x) = 2[0:0:1] + [0:1:0]$$.
A similar calculation shows that $div(z) = 3[0:1:0]$, and so $$div(\frac{z}{x}) = div(z) - div(x) = 2[0:1:0] - 2[0:0:1].$$
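For what it's worth, the divisors of $x$ and $z$ can be sanity-checked by restricting the defining cubic to each line and reading off the multiplicities of the resulting binary form (a SymPy sketch alongside the chart computations above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = y**2*z - x**3 - x*z**2   # the curve X

# On the line V(x), points are [0:y:z]; the restricted cubic factors as y^2 * z,
# i.e. X meets V(x) at [0:0:1] with multiplicity 2 and at [0:1:0] with multiplicity 1.
assert sp.factor(F.subs(x, 0)) == y**2 * z

# On the line V(z), points are [x:y:0]; the restriction is -x^3,
# i.e. X meets V(z) at [0:1:0] with multiplicity 3.
assert F.subs(z, 0) == -x**3
```

Both assertions agree with the Bézout count (a line meets a cubic in three points with multiplicity).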
A problem on a recent assignment defined the Torsion subset $F(G)$ of a Group $G$ as the set of elements of G of finite order. It then asked to prove that $F(GL_2(\mathbb{R}))$ is not a subgroup of $GL_2(\mathbb{R})$, which I did by example (two finite order matrices multiplied to give one with infinite order, proof by induction). However this led me to thinking about, more generally, the requirements separating those elements of finite and infinite order.
My first thought was that, by the definition of finite order, there exists some $n \in \mathbb{N}$ such that $A^n = I$, and since taking determinants satisfies the conditions for a homomorphism, $\det{(A^n)} = \det{(A)}^n = \det{(I)} = 1$, so $\det{(A)} \in \{ \pm 1 \}$. However, clearly there is a stronger condition holding that I am not seeing, as taking (for example) $A = \bigl( \begin{smallmatrix}1 & -1\\ 0 & 1\end{smallmatrix}\bigr)$, clearly $\det{(A)} = 1$, but for any $n \in \mathbb{N}$, we have $\bigl( \begin{smallmatrix}1 & -1\\ 0 & 1\end{smallmatrix}\bigr)^n = \bigl( \begin{smallmatrix}1 & -n\\ 0 & 1\end{smallmatrix}\bigr)$, so clearly $A$ also has infinite order.
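For what it's worth, the counterexample mentioned in the first paragraph can be made concrete with one standard pair of matrices (an illustrative choice, not necessarily the one from the assignment):

```python
import numpy as np

A = np.array([[0, -1], [1, 0]])    # rotation by 90 degrees: order 4
B = np.array([[0, 1], [-1, -1]])   # order 3

assert (np.linalg.matrix_power(A, 4) == np.eye(2)).all()
assert (np.linalg.matrix_power(B, 3) == np.eye(2)).all()

# Their product is a shear, which has infinite order: (AB)^n = [[1, n], [0, 1]]
P = A @ B                          # the shear [[1, 1], [0, 1]]
assert (np.linalg.matrix_power(P, 5) == np.array([[1, 5], [0, 1]])).all()
```

Both factors have determinant 1, consistent with the determinant observation above, yet the product escapes the torsion set.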
My Linear Algebra is not amazing, so I feel like I may be lacking the tools with which to fully analyse the problem.
Any advice on where to advance next is appreciated.
EDIT: My question originally also concerned the notion of abelian groups in relation to $F(G)$ being a subgroup but it has been brought to my attention that $F(G) \le G$ trivially for abelian groups.
Context
I read this post which asks for an explicit norm on $C^0\left(\Bbb R,\Bbb R\right)$.
Obviously, usual norms don't work because $\Bbb R$ is unbounded: an integral would fail even on constant functions and the $\sup$ would fail on affine functions.
The integral looking too difficult to fix for this space, I decided to try to fix the $\sup$ by crushing functions when they go to $\pm\infty$.
To do that, I assumed there was a sequence of functions $\left(f_n\right)_{n\in\mathbb{N}}$ in $\mathcal C^0\left(\Bbb R,\Bbb R\right)$ such that $\forall g \in \mathcal C^0\left(\Bbb R,\Bbb R\right),\exists n \in \Bbb{N}$, $\cfrac{g}{f_n}$ is bounded on $\Bbb{R}$. And I defined the norm to be $\|g\|=\sup \left|\cfrac{g}{f_n}\right|$ for the smallest such $n$.
Homogeneity and separation worked well but subadditivity didn't, so I favorited the post to have a look later and see if someone else answered, and gave up.
And since then, I'm wondering whether such a sequence of functions exists.
The question
Is there a sequence of functions $\left(f_n\right)_{n\in\mathbb{N}}$ in $\mathcal C^0\left(\Bbb R,\Bbb R\right)$ such that $\forall g \in \mathcal C^0\left(\Bbb R,\Bbb R\right),\exists n \in \Bbb{N}$, $\cfrac{g}{f_n}$ is bounded on $\Bbb{R}$?
Thoughts and comments
My first thought was to take something like $f_n=\exp^n=\exp \circ \dots \circ \exp$ but I have no idea how to prove or disprove that it works... At first I was fairly convinced it would work...
But then I thought of taking something like $f_n(x)= x^n$. Of course this doesn't work since if you took $g=\exp$, $\cfrac{g}{f_n}$ wouldn't be bounded for any $n$. But a few years ago, I would've thought it would work...
Then I tried to search for functions for which $f_n=\exp^n$ wouldn't work. And the only thing I could think of was Ackermann's idea of "creat[ing] an endless procession of arithmetic operations" (which I discovered while skim-reading this). The same way you go from $x\mapsto x+a$ to $x\mapsto ax$ by making the number of $a$s you add depend on $x$, we then have $x\mapsto a^x$ where the number of times you multiply by $a$ depends on $x$ so I guess we could apply the same kind of method once again that would make the $\exp^n$ fail as my $f_n$ but I can't conceive such functions. I'm not even sure they are possible to define...
Thoughts and comments v2
$f_0$ defined by:
$f_0(0)=1$
$\forall n \in\Bbb{N}^*,f_0(n)=f_0(-n)=$ $n^{th}$ Ackermann number
And then the rest is just lines between those previously defined points.
And then $\forall n \in\Bbb{N},f_{n+1}=f_n\circ f_0$
Thank you in advance for your answers. :)
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
This trick is a very close cousin of the infamous log-sum-exp trick (scipy.misc.logsumexp).
Suppose you'd like to evaluate a probability distribution \(\boldsymbol{\pi}\) parametrized by a vector \(\boldsymbol{x} \in \mathbb{R}^n\) as follows: \[\pi_i = \frac{\exp(x_i)}{\sum_{j=1}^n \exp(x_j)}\]
The exp-normalize trick leverages the following identity to avoid numerical overflow. For any \(b \in \mathbb{R}\), \[\pi_i = \frac{\exp(x_i - b)}{\sum_{j=1}^n \exp(x_j - b)}\]
In other words, \(\boldsymbol{\pi}\) is shift-invariant. A reasonable choice is \(b = \max_{i=1}^n x_i\). With this choice, overflow due to \(\exp\) is impossible: the largest number exponentiated after shifting is \(0\).
The naive implementation is terrible when there are large numbers!
>>> x = np.array([1, -10, 1000])
>>> np.exp(x) / np.exp(x).sum()
RuntimeWarning: overflow encountered in exp
RuntimeWarning: invalid value encountered in true_divide
Out[4]: array([ 0., 0., nan])
The exp-normalize trick avoids this common problem.
def exp_normalize(x):
    b = x.max()
    y = np.exp(x - b)
    return y / y.sum()

>>> exp_normalize(x)
array([0., 0., 1.])
Log-sum-exp for computing the log-distribution
\[\log \pi_i = x_i - \mathrm{logsumexp}(\boldsymbol{x})\]
where
\[\mathrm{logsumexp}(\boldsymbol{x}) = b + \log \sum_{j=1}^n \exp(x_j - b)\]
Typically with the same choice for \(b\) as above.
Exp-normalize v. log-sum-exp
Exp-normalize is the gradient of log-sum-exp. So you probably need to know both tricks!
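The gradient relationship is easy to verify numerically; this sketch checks central finite differences of log-sum-exp against exp-normalize:

```python
import numpy as np

def logsumexp(x):
    b = x.max()
    return b + np.log(np.exp(x - b).sum())

def exp_normalize(x):
    b = x.max()
    y = np.exp(x - b)
    return y / y.sum()

x = np.array([1.0, -2.0, 0.5])
eps = 1e-6
# central finite differences of logsumexp should match exp_normalize(x)
grad = np.array([
    (logsumexp(x + eps * e) - logsumexp(x - eps * e)) / (2 * eps)
    for e in np.eye(len(x))
])
assert np.allclose(grad, exp_normalize(x), atol=1e-6)
```

The same check is a handy unit test whenever you hand-derive a softmax gradient.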
If what you want is to remain in log-space, that is, to compute \(\log(\boldsymbol{\pi})\), you should use logsumexp. However, if \(\boldsymbol{\pi}\) is your goal, then the exp-normalize trick is for you, since it avoids the additional calls to \(\exp\) that log-sum-exp would require and, more importantly, it is more numerically stable!
Numerically stable sigmoid function
The sigmoid function can be computed with the exp-normalize trick in order to avoid numerical overflow. In the case of \(\text{sigmoid}(x)\), we have a distribution with unnormalized log probabilities \([x,0]\), where we are only interested in the probability of the first event. From the exp-normalize identity, we know that the distributions \([x,0]\) and \([0,-x]\) are equivalent (to see why, plug in \(b=\max(0,x)\)). This is why sigmoid is often expressed in one of two equivalent ways: \[\text{sigmoid}(x) = \frac{1}{1+\exp(-x)} = \frac{\exp(x)}{1+\exp(x)}\]
Interestingly, each version covers an extreme case: \(x=\infty\) and \(x=-\infty\), respectively. Below is some python code which implements the trick:
def sigmoid(x):
    "Numerically stable sigmoid function."
    if x >= 0:
        z = exp(-x)
        return 1 / (1 + z)
    else:
        # if x is less than zero then z will be small, denom can't be
        # zero because it's 1+z.
        z = exp(x)
        return z / (1 + z)
Closing remarks: The exp-normalize distribution is also known as a Gibbs measure (sometimes called a Boltzmann distribution) when it is augmented with a temperature parameter. Exp-normalize is often called "softmax," which is unfortunate because log-sum-exp is also called "softmax." However, unlike exp-normalize, log-sum-exp earned the name because it is actually a soft version of the max function, whereas exp-normalize is closer to "soft argmax." Nonetheless, most people still call exp-normalize "softmax."
Case Study: Minijets Stinger with DS-51-AXI HDS and HET 700-68-1125kV
In order to avoid disappointment due to unexpected flight behaviour or insufficient performance of the planned model, we would like to take the opportunity to present some substantiated criteria that make it possible to achieve the desired characteristics for the model.
A simple calculator is sufficient for all the indicated formulas. The required data are easily obtainable from the manufacturer's information or by means of a scale (weight).
An important point when choosing the right fan unit for the model is the power/weight ratio. In contrast to the thrust/weight ratio this also considers exhaust speed (all calculations refer to max. battery voltage).
Power/weight ratio:
\(P_{spezges} = {P_{abges} \over m_{ges}}\)
\(P_{abges}\) refers to the output power and \(m_{ges}\) to the overall model weight, which can be determined with a scale or calculated after the fan system has been chosen. \(P_{ab}\) can be calculated from the thrust and exhaust-speed data available in our measurement diagrams.
In order to get \(P_{abges}\) it is also important to consider the efficiency factor of the channels of our model.
\(P_{ab}\) from exhaust speed and thrust:
\(P_{ab} = {S \cdot c \over 2}\)
e.g.: S = 55 N, c = 94 m/s (51HDS with HET 700-68-1125kV)
\(P_{ab} = {55N \cdot 94m/s \over 2} =2585W\)
In extreme cases the efficiency factor of the channels \(\eta _{kanal}\) can drop to values of about 65%. Such an extreme case could occur, for example, with high-velocity models at standstill.
With these fast models and an advantageous design of the ducted fan the efficiency factor will increase remarkably during the flight due to more beneficial inlet flow and a more effective flow around the blades.
Consequently, the efficiency factor of the channel adopts different values depending on the model and flight phase.
In the following you can find estimated factors which occur during the flight:
Short Airliner nacelle, large intake lip radius (e.g. models like Airbus A-300, Boeing 737)
\({\eta _{kanal} \approx 0,95}\)
Long straight nacelle (e.g. ME-262)
\({\eta _{kanal} \approx 0,9}\)
Very long, but straight channels (e.g. MiG-15 etc. but also SU-27)
\({\eta _{kanal} \approx 0,85}\)
Curved channels with small cross-sections, small intake lip radius (e.g. Vampire, F-16, Pampa etc.)
\({\eta _{kanal} < 0,85}\)
These values are considered as approximate indications for informational purposes about the different flow conditions according to the model.
In order to incorporate \(\eta _{kanal}\) it is necessary to simply multiply \(P_{ab}\) with the corresponding value for \(\eta _{kanal}\).
e.g.: \(\eta _{kanal} = 0,85\)
\(P_{abges}=\eta _{kanal} \cdot P_{ab} = 0,85 \cdot 2585W =2197W\)
The resulting value \(P_{abges}\) now has to be plugged into the first equation and then divided by the model weight (e.g. Minijets Stinger 3,7kg).
That way we receive a parameter for our model which enables us to estimate its flight performance with assistance of the following table.
\(P_{spezges} = {2197W \over 3,7kg} =594W/kg\)
Model type: \(P_{spezges}\)
high speed model: 500 – 800 W/kg
sporty Jet, Trainer: 300 – 500 W/kg
moderate speed Jet (Me-262, A-10 etc.): 200 – 300 W/kg
Airliner, Transporter: 150 – 200 W/kg
The values indicated in the table above refer to battery peak voltage; they are on the high-performance side and are well-achievable with our propulsion systems.
Thus, for example an Airliner can still be flown securely with considerably less power.
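The whole calculation chain above can be condensed into a few lines (a sketch; the function name is ours, the values come from the Stinger example):

```python
def specific_power(thrust_n, exhaust_speed_ms, duct_efficiency, mass_kg):
    """P_spezges in W/kg: output power S*c/2, scaled by the duct
    efficiency eta_kanal and divided by the all-up model weight."""
    p_ab = thrust_n * exhaust_speed_ms / 2.0   # P_ab = S*c/2
    return duct_efficiency * p_ab / mass_kg

# Minijets Stinger: S = 55 N, c = 94 m/s, eta_kanal = 0.85, m = 3.7 kg
p = specific_power(55, 94, 0.85, 3.7)
print(round(p))   # 594  -> "high speed model" range in the table
```

Swapping in another duct efficiency from the list above immediately shows how much a curved, narrow duct costs in specific power.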
4.2 - Introduction to Confidence Intervals
In Lesson 4.1 we learned how to construct sampling distributions when population values were known. In real life, we don't typically have access to the whole population. In these cases we can use the sample data that we do have to construct a confidence interval to estimate the population parameter with a stated level of confidence. This is one type of statistical inference.
Confidence Interval: A range computed using sample statistics to estimate an unknown population parameter with a stated level of confidence.
Example: Statistical Anxiety
The statistics professors at a university want to estimate the average statistics anxiety score for all of their undergraduate students. It would be too time consuming and costly to give every undergraduate student at the university their statistics anxiety survey. Instead, they take a random sample of 50 undergraduate students at the university and administer their survey.
Using the data collected from the sample, they construct a 95% confidence interval for the mean statistics anxiety score in the population of all university undergraduate students. They are using \(\bar{x}\) to estimate \(\mu\). If the 95% confidence interval for \(\mu\) is 26 to 32, then we could say, “we are 95% confident that the mean statistics anxiety score of all undergraduate students at this university is between 26 and 32.” In other words, we are 95% confident that \(26 \leq \mu \leq 32\). This may also be written as \(\left [ 26,32 \right ]\).
At the center of a confidence interval is the sample statistic, such as a sample mean or sample proportion. This is known as the point estimate. The width of the confidence interval is determined by the margin of error. The margin of error is the amount that is subtracted from and added to the point estimate to construct the confidence interval.
Point Estimate: Sample statistic that serves as the best estimate for a population parameter.
Margin of Error: Half of the width of a confidence interval; equal to the multiplier times the standard error.
General Form of Confidence Interval: \(sample\ statistic \pm margin\ of\ error\), where \(margin\ of\ error=multiplier(standard\ error)\)
The margin of error will depend on two factors:
The level of confidence, which determines the multiplier
The value of the standard error
In Lesson 2 you first learned about the Empirical Rule which states that approximately 95% of observations on a normal distribution fall within two standard deviations of the mean. Thus, when constructing a 95% confidence interval your textbook uses a multiplier of 2.
General Form of 95% Confidence Interval: \(sample\ statistic\pm2\ (standard\ error)\)
Example: Proportion of Dog Owners
At the beginning of the Spring 2017 semester a representative sample of 501 STAT 200 students were surveyed and asked if they owned a dog. The sample proportion was 0.559. Bootstrapping methods, which we will learn later in this lesson, were used to compute a standard error of 0.022. We can use this information to construct a 95% confidence interval for the proportion of all STAT 200 students who own a dog.
0.559 ± 2(0.022)
0.559 ± 0.044 [0.515, 0.603]
I am 95% confident that the population proportion is between 0.515 and 0.603.
Example: Mean Height
In a random sample of 525 Penn State World Campus students the mean height was 67.009 inches with a standard deviation of 4.462 inches. The standard error was computed to be 0.195. Construct a 95% confidence interval for the mean height of all Penn State World Campus students.
95% confidence interval:
67.009 ± 2(0.195)
67.009 ± 0.390 [66.619, 67.399]
I am 95% confident that the mean height of all Penn State World Campus students is between 66.619 inches and 67.399 inches.
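Both examples follow the same two-step recipe, which fits in a tiny function (a sketch using the multiplier of 2 from the Empirical Rule):

```python
def ci_95(point_estimate, standard_error):
    """95% confidence interval: sample statistic +/- 2 * (standard error)."""
    margin_of_error = 2 * standard_error
    return point_estimate - margin_of_error, point_estimate + margin_of_error

# dog owners: p-hat = 0.559, SE = 0.022  ->  (0.515, 0.603)
lo, hi = ci_95(0.559, 0.022)
print(round(lo, 3), round(hi, 3))

# mean height: x-bar = 67.009, SE = 0.195  ->  (66.619, 67.399)
lo, hi = ci_95(67.009, 0.195)
print(round(lo, 3), round(hi, 3))
```

Only the point estimate and the standard error change between the proportion and the mean examples; the multiplier stays the same.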
When I toss a coin on Mars, is the planet's atmosphere rare enough that I'd rotate with the planet (at its angular velocity), but not the coin?
It depends on where on Mars you toss the coin, and how high you toss it.
In a rotating frame of reference, an object in motion appears to be affected by a pair of fictitious forces - the centrifugal force and the Coriolis force. They are given by
$$\mathbf{\vec{F}_{centrifugal}}=-m\mathbf{\vec\Omega\times(\vec\Omega\times\vec{r})}\\ \mathbf{\vec{F}_{Coriolis}}=-2m\mathbf{\vec\Omega}\times\mathbf{\vec{v}}$$
The question is - when are these forces sufficient to move the coin "away from your hand" - in other words, for what initial velocity $v$ is the total displacement of the coin greater than 10 cm (as a rough estimate of what "back in your hand" might look like; obviously you can change the numbers).
The centrifugal force is only observed when the particle is rotating at the velocity of the frame of reference - once the particle is in free fall, it no longer moves along with the rotating frame of reference and the centrifugal force "disappears". For an object moving perpendicular to the surface of the earth, the Coriolis force is strongest at the equator, becoming zero at the pole; it is a function of the velocity of the coin. We will calculate the expression as a function of latitude - recognizing that it will be a maximum at the equator.
As a simplifying assumption, we assume the change in height is sufficiently small that we ignore changes in the force of gravity; we also ignore all atmospheric drag (in particular, the wind; if the opening scene of "The Martian" were to be believed, it can get pretty windy on the Red Planet.) Finally we will assume that any horizontal velocity will be small - we ignore it for calculating the Coriolis force, but integrate it to obtain the displacement.
The vertical velocity is given by
$$v = v_0 - g\cdot t$$
and the total time taken is $t_t=\frac{2v_0}{g}$. At any moment, the Coriolis acceleration is
$$a_C=2\mathbf{\Omega}~v\cos\theta$$
Integrating once, we get
$$v_h = \int a\cdot dt \\ = 2\mathbf{\Omega}\cos\theta\int_0^t(v_0-gt)dt\\ = 2\mathbf{\Omega}\cos\theta\left(v_0 t-\frac12 gt^2\right)$$
And for the displacement
$$x_h = \int v_h dt \\ = 2\mathbf{\Omega}\cos\theta\int_0^t \left(v_0 t-\frac12 gt^2\right)dt\\ = 2\mathbf{\Omega}\cos\theta \left(\frac12 v_0 t^2-\frac16 gt^3\right)$$
Substituting for $t = \frac{2v_0}{g}$ we get
$$x_h = 2\mathbf{\Omega}\cos\theta\, v_0^3\left(\frac{2}{g^2} - \frac{4}{3 g^2}\right)\\ = \frac{4\mathbf{\Omega}\cos\theta\, v_0^3}{3g^2}$$
The sidereal day of Mars is 24 hours, 37 minutes and 22 seconds - so $\Omega = 7.088\cdot 10^{-5}/s$ and the acceleration of gravity $g = 3.71~m/s^2$. Plugging these values into the above equation, we find $x_h = 6.87\cdot 10^{-6}v_0^3~m$, where velocity is in m/s. From this it follows that you would have to toss the coin with an initial velocity of about 24 m/s for the Coriolis effect to be sufficient to deflect the coin by 10 cm before it comes back down.
On Earth, such a toss would result in a coin that flies for about 5 seconds, reaching a height of about 30 m. It is conceivable that someone could toss a coin that high - but I've never seen it.
AFTERTHOUGHT
Your definition of "vertical" needs to be carefully thought through. There is a North-South component of the centrifugal "force" that is strongest at 45° latitude, and that will cause a mass on a string to hang in a direction that is not-quite-vertical. If you launch your coin in that direction, you will not observe a significant North-South deflection during flight, but if you were to toss the coin "vertically" (in a straight line away from the center of Mars), there will in fact be a small deviation. The relative magnitude of the centrifugal force and gravity can be computed from
$$\begin{align}a_c &= \mathbf{\Omega^2}R\sin\theta\cos\theta \\ &= \frac12 \mathbf{\Omega^2}R\\ &= 8.5~\rm{mm/s^2}\end{align}$$
If you toss the coin at 24 m/s, it will be in the air for approximately 13 seconds. In that time, the above acceleration will give rise to a displacement of about 70 cm. This shows that your definition of "vertical" really does matter (depending on the latitude - it doesn't matter at the poles or the equator, but it is significant at the intermediate latitudes, reaching a maximum at 45° latitude).
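The integration above is easy to check numerically; this sketch steps the Coriolis acceleration through the flight (drag neglected, function name and step count are ours):

```python
import math

def coriolis_deflection(v0, g, omega, lat_deg=0.0, n=200000):
    """Integrate the Coriolis acceleration a_C = 2*omega*cos(lat)*(v0 - g*t)
    twice over the full flight time 2*v0/g to get the horizontal deflection."""
    c = 2 * omega * math.cos(math.radians(lat_deg))
    T = 2 * v0 / g
    dt = T / n
    vh = xh = 0.0
    for i in range(n):
        t = (i + 0.5) * dt            # midpoint of the step
        vh += c * (v0 - g * t) * dt   # d(vh)/dt = a_C
        xh += vh * dt                 # d(xh)/dt = vh
    return xh

# Mars at the equator: omega = 7.088e-5 rad/s, g = 3.71 m/s^2
print(coriolis_deflection(24.4, 3.71, 7.088e-5))   # close to 0.1 m
```

The numerical result agrees with the closed-form expression obtained by evaluating the displacement integral at the total flight time.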
The coin will come back to your hand just like it would on the earth. The effect of the atmosphere is negligible compared to the coin's inertia, so the horizontal position of the coin relative to your hand will hardly be affected. The rareness of the atmosphere will only affect the vertical motion of the coin, such as how quickly the coin falls back into your hand.
Yes, for the simple reason that you're not tossing the coin very high (presumably, anyway). You seem to think that on Earth, atmospheric drag is what keeps the coin "glued" to the tossing frame of reference, but that isn't really a factor at all.
Say that you're on Earth, at sea level, on the equator, and you toss the coin 3 meters straight up. Neglecting drag, the coin will be in the air for 1.56 seconds. The earth is rotating under your feet at 463 m/s, and has a radius of 6.37 * 10^6 m. The coin is gaining an altitude of 3 m, which is 4.71 * 10^-7 earth radii, so the rotational speed at that height will be different by an equal proportion, which works out to 0.00022 m/s. Taking an upper bound by assuming the coin spends the whole time at the maximum height (because I'm lazy), we end up with a deflection of 0.34 mm, which is less than the thickness of the coin, let alone its diameter. Anywhere away from the equator or above sea level, and the number would come out lower still.
Doing the same experiment on Mars, we'll suppose that you can give the coin the same initial velocity, and that you're on the equator at mean elevation. Mars's surface gravity is lower (3.71 m/s^2), so the coin will reach an impressive 7.92 m in height and stay in the air for 4.14 seconds. Mars is rotating under your feet at 241 m/s (less than Earth because it has a smaller circumference but a similar day length) and has a radius of 3.39 * 10^6 m. The coin then gains 2.34 * 10^-6 mars radii, and the rotational speed at that height is different by 0.00056 m/s. Making the same (over)estimate as before, we get 4.14 s * 0.00056 m/s = 2.33 mm. About one coin thickness, but not much more. Certainly not enough to miss your hand on the way down.
Basically, the heights you're dealing with when tossing a coin are just too small, compared to the size of a planet, to make much difference, atmosphere or not. Try lobbing a cannonball 1 km up instead, and you'd be more likely to notice an effect. I haven't worked out the math, but I still don't think that the horizontal component of atmospheric drag would contribute much at all; the atmosphere would be more likely to make a difference by reducing the maximum height reached.
The coin comes down for sure unless you tossed it at escape velocity, and escape velocity depends on the mass of the planet, etc. Even if you are well inside the escape-velocity limit, it may not reach your hand. Your observation is correct that the drag force plays a role in its tangential velocity, but for all short tosses the coin will reach your hand regardless of how rare the atmosphere is compared to Earth's.
I think yes, just as if you were on Earth. The reason is that the coin has the same velocity as you and the surface of Mars; it has nothing to do with the atmosphere.
Yes, you are moving with the surface, but you are not accelerating: you are moving at a constant speed. If you toss in what you think is the vertical (call it the Y direction), the coin will have the same rotational velocity (call it the X direction) as you. Even the atmosphere is moving with you, so over a short distance there is no wind resistance in the X direction. The X velocity of the coin will remain constant and will be exactly your X velocity.
The combination of your two questions leads me to believe that what you are really asking is: does the density (or lack of it) of the atmosphere have an effect on the horizontal position of an object that is initially given only vertical momentum? If this interpretation of your question is correct, then I want to start by telling you that you have a reversed conception of the effect density has. The drag force of an atmosphere is proportional to its density, so as the density goes to zero (no atmosphere), the drag force goes to zero. So the "rarer" the atmosphere, the less effect it has on the motion of the coin. Since we are only interested in the coin returning to the same spot (little or no horizontal displacement), differences in the vertical direction caused by drag and the different Martian gravity are of no consequence. Therefore, since there are little or no horizontal forces acting on the coin, it will return to the same spot it was tossed from (your hand).
|
I am quite surprised that a variant of linear regression has been proposed for a challenge, whereas estimation via ordinary least squares regression has not, despite the fact that this is arguably the most widely used method in applied economics, biology, psychology, and the social sciences!
For details, check out the Wikipedia page on OLS. To keep things concise, suppose one has a model:
$$Y=\beta_0+\beta_1X_1+\beta_2X_2+\dots+\beta_kX_k+U$$
where all right-hand side
X variables—regressors—are linearly independent and are assumed to be exogenous (or at least uncorrelated) to the model error U. Then you solve the problem
$$\min_{\beta_0,\dots,\beta_k}\sum_{i=1}^n\left(Y_i-\beta_0-\beta_1X_{1,i}-\dots-\beta_kX_{k,i}\right)^2$$
on a sample of size
n, given observations
$$\left(\begin{matrix}Y_1&X_{1,1}&\cdots&X_{k,1}\\\vdots&\vdots&\ddots&\vdots\\Y_n&X_{1,n}&\cdots&X_{k,n}\end{matrix}\right)$$
The OLS solution to this problem as a vector looks like
$$\hat{\beta}=(\textbf{X}'\textbf{X})^{-1}(\textbf{X}'\textbf{Y})$$
where
Y is the first column of the input matrix and X is a matrix made of a column of ones and remaining columns. This solution can be obtained via many numerical methods (matrix inversion, QR decomposition, Cholesky decomposition etc.), so pick your favourite! Of course, econometricians prefer slightly different notation, but let’s just ignore them.
None other than Gauss himself is watching you from the skies, so do not disappoint one of the greatest mathematicians of all time and write the shortest code possible.
Task
Given the observations in a matrix form as shown above, estimate the coefficients of the linear regression model via OLS.
Input
A matrix of values. The first column is always
Y[1], ..., Y[n], the second column is
X1[1], ..., X1[n], the next one is
X2[1], ..., X2[n] etc.
The column of ones is not given (as in real datasets), but you have to add it first in order to estimate
beta0 (as in real models).
NB. In statistics, regressing on a constant is widely used. This means that the model is
Y = b0 + U, and the OLS estimate of
b0 is the sample average of
Y. In this case, the input is just a matrix with one column,
Y, and it is regressed on a column of ones.
You can safely assume that the variables are not exactly collinear, so the matrices above are invertible. In case your language cannot invert matrices with condition numbers larger than a certain threshold, state it explicitly, and provide a unique return value (that cannot be confused with output in case of success) denoting that the system seems to be computationally singular (
S or any other unambiguous characters).
Output
OLS estimates of
beta_0, ..., beta_k from a linear regression model in an unambiguous format or an indication that your language could not solve the system.
Challenge rules I/O formats are flexible. A matrix can be several lines of space-delimited numbers separated by newlines, or an array of row vectors, or an array of column vectors etc. This is code-golf, so shortest answer in bytes wins. Built-in functions are allowed as long as they are tweaked to produce a solution given the input matrix as an argument. That is, the two-byte answer
lm
in R is not a valid answer because it expects a different kind of input. Standard rules apply for your answer, so you are allowed to use STDIN/STDOUT, functions/methods with the proper parameters and return-type, or full programs. Default loopholes are forbidden. Test cases
[[4,5,6],[1,2,3]]→ output:
[3,1]. Explanation:
X = [[1,1,1], [1,2,3]], X'X = [[3,6],[6,14]], inverse(X'X) = [[2.333,-1],[-1,0.5]], X'Y = [15, 32], and inverse(X'X) ⋅ X'Y = [3, 1]
[[5.5,4.1,10.5,7.7,6.6,7.2],[1.9,0.4,5.6,3.3,3.8,1.7],[4.2,2.2,3.2,3.2,2.5,6.6]]→ output:
[2.1171050,1.1351122,0.4539268].
[[1,-2,3,-4],[1,2,3,4],[1,2,3,4.000001]]→ output:
[-1.3219977,6657598.3906250,-6657597.3945312]or
S(any code for a computationally singular system).
[[1,2,3,4,5]]→ output:
3.
Bonus points (to your karma, not to your byte count) if your code can solve very ill-conditioned (quasi-multicollinear) problems (e.g. if you throw in an extra zero in the decimal part of
X2[4] in Test Case 3) with high precision.
For fun: you can implement estimation of standard errors (White’s sandwich form or the simplified homoskedastic version) or any other kind of post-estimation diagnostics to amuse yourself if your language has built-in tools for that (I am looking at you, R, Python and Julia users). In other words, if your language allows you to do cool stuff with regression objects in a few bytes, you can show it! |
I will note before starting that this question is related to my own work, in which I am aiming to extend and replicate the methods used in Attanasio et al. (2015), "Human Capital Development and Parental Investment in India"**.
I am carrying out a large confirmatory factor analysis where the relationship between observed measurements and latent variables can be written as follows:
$$ y_{i,j} = \lambda_{i,j} \theta_j + \varepsilon_{i,j} $$
where $y_{i,j}$ is measurement $i$ on latent variable $j$, $\lambda_{i,j}$ is a factor loading, and $\varepsilon_{i,j}$ a measurement error. My aim is to estimate the joint distribution of the latent factors - the $\theta_j$. For my application I cannot assume normality of this distribution, so instead I assume their joint distribution to be a mixture of two normals. I also assume that the latent variables are independent of the measurement errors and that the latter are normally distributed. First, writing the measurement system in matrix form, we have
$$ \textbf{y} = \Lambda \boldsymbol{\theta} + \boldsymbol{\varepsilon} $$
where $\textbf{y}$ is a vector of all measurements on each latent factor, $\boldsymbol{\Lambda}$ is a matrix containing all the factor loadings, $\boldsymbol{\theta}$ a vector containing all latent variables, and $\boldsymbol{\varepsilon}$ is a vector containing all the measurement errors. Using basic rules about the distribution of sums of independent random variables and the convolution operator, the theoretical joint distribution of the measurements, $p(\textbf{y})$, is given by:
$$p(\textbf{y}) = \tau \underbrace{\int g(\textbf{y} -\boldsymbol{\Lambda }\boldsymbol{\theta})f_A(\boldsymbol{\Lambda}\boldsymbol{\theta})d\boldsymbol{\theta}}_{p_A({\cdot})} + (1-\tau) \underbrace{ \int g(\textbf{y} -\boldsymbol{\Lambda }\boldsymbol{\theta})f_B(\boldsymbol{\Lambda}\boldsymbol{\theta})d\boldsymbol{\theta}}_{p_B({\cdot})}$$
with $f_A(\cdot)$ and $f_B(\cdot)$ being multivariate normal probability density functions representing the two mixture components of the joint distribution of the latent factors and $g(\cdot)$ is the joint density of the measurement errors. With the assumption made $p(\textbf{y})$ is also a mixture of two normals.
I then use an ML approximation of the joint distribution of measurements (assuming it is in fact a mixture of two normals) and obtain estimates of the moments of $p_A(\cdot)$ and $p_B({\cdot})$, denoted $\boldsymbol{\mu}^y_A$, $\boldsymbol{\mu}^y_B$, $\boldsymbol{\Sigma}^A_y$, $\boldsymbol{\Sigma}^B_y$, and $\tau$, where $A$ and $B$ denote the first and second mixture components.
Once these are estimated, I would like to set them equal to the implied structure of the distribution given in the equation above. As such I am left with the following equalities:
$$ \tau \boldsymbol{\mu}_A^{\boldsymbol{\theta}} + (1-\tau)\boldsymbol{\mu}_B^{\boldsymbol{\theta}} = 0 \\[5pt] \boldsymbol{\Lambda}\boldsymbol{\mu}^{\boldsymbol{\theta}}_A = \hat{\boldsymbol{\mu}}_A^{\tilde{\textbf{y}}} \\[5pt] \boldsymbol{\Lambda}\boldsymbol{\mu}^{\boldsymbol{\theta}}_B = \hat{\boldsymbol{\mu}}_B^{\tilde{\textbf{y}}} \\[5pt] \boldsymbol{\Lambda} \boldsymbol{\Theta}_A \boldsymbol{\Lambda}' + \boldsymbol{\Psi} = \hat{\boldsymbol{\Sigma}}_A \\[5pt] \boldsymbol{\Lambda} \boldsymbol{\Theta}_B \boldsymbol{\Lambda}' + \boldsymbol{\Psi} = \hat{\boldsymbol{\Sigma}}_B $$
Where $\boldsymbol{\Lambda}$ is still the matrix of factor loadings, and $\boldsymbol{\mu}_j^{\boldsymbol{\theta}}$ and $\boldsymbol{\Theta}_j$ for $j \in \{ A,B \}$ are the vector of means and the covariance matrix of the corresponding component of the mixture distribution of the latent factors. All of the quantities on the right-hand side of the above equations, as well as $\tau$, have been estimated.
The matrices $\boldsymbol{\Lambda}$, $\boldsymbol{\Theta}_j$ and $\boldsymbol{\Psi}$ have a structure implied by the measurement equation, and so my question is: can their parameters be estimated via minimum distance or any other suitable method in Matlab or STATA? If anyone knows of existing functions that can be used to do so, or how to go about this manually, then any comments or advice would be much appreciated.
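One manual route (shown here in Python rather than Matlab/STATA, since the idea carries over directly to `lsqnonlin`/`fminunc`) is to stack all the moment conditions as residuals and minimize their distance. The example below is a hypothetical one-factor illustration, not your specification: three measurements, loadings $[1, l_2, l_3]'$ with the first loading normalized, diagonal $\boldsymbol{\Psi}$, and $\boldsymbol{\mu}_B^{\boldsymbol{\theta}}$ substituted out via the mean constraint. The "first-stage estimates" are made-up numbers generated from $l_2=0.8$, $l_3=1.5$, so the fit is exact.

```python
import numpy as np
from scipy.optimize import least_squares

tau = 0.4
muA_hat = np.array([0.9, 0.72, 1.35])
muB_hat = np.array([-0.6, -0.48, -0.9])
SA_hat = np.array([[1.2, 0.8, 1.5], [0.8, 0.94, 1.2], [1.5, 1.2, 2.5]])
SB_hat = np.array([[0.7, 0.4, 0.75], [0.4, 0.62, 0.6], [0.75, 0.6, 1.375]])

def residuals(p):
    """Stack all moment conditions as residuals of a minimum-distance fit."""
    l2, l3, muA, thA, thB = p[:5]
    psi = p[5:]                                   # diagonal of Psi
    lam = np.array([1.0, l2, l3])                 # loadings, first normalized to 1
    muB = -tau / (1.0 - tau) * muA                # mean constraint substituted out
    rA = thA * np.outer(lam, lam) + np.diag(psi) - SA_hat
    rB = thB * np.outer(lam, lam) + np.diag(psi) - SB_hat
    return np.concatenate([lam * muA - muA_hat, lam * muB - muB_hat,
                           rA.ravel(), rB.ravel()])

p0 = np.array([1.0, 1.0, 0.5, 1.0, 1.0, 0.1, 0.1, 0.1])  # starting values
fit = least_squares(residuals, p0)
print(fit.x[:2])   # recovered loadings, approximately [0.8, 1.5]
```

An efficient minimum-distance estimator would weight the residuals by the inverse covariance of the first-stage estimates; the unweighted version above is just the equally-weighted special case.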
Thanks in advance.
** URL to the original paper: http://egcenter.economics.yale.edu/sites/default/files/files/cdp1052.pdf
This question is linked to a previous question of mine that inquired about the first stage of this procedure - deriving and estimating the mixture of normal distribution for the observed measurements. Link: |
Consider $b>0$ and let $X \sim U[-b,b]$ (the continuous uniform distribution in $[-b,b]$ ). I want to do two computations in Mathematica.
First is to compute the density function of $X^2$, which I calculated (by hand) as being $f_{X^2}(x) = \frac{1}{2b\sqrt{x}}\cdot\textbf{I}_{(0,b^2]}(x)$. Then I used the following code to make Mathematica compute this density.
X = UniformDistribution[{-b, b}];
Z = TransformedDistribution[X^2, X \[Distributed] UniformDistribution[{-b, b}]];
PDF[Z, x]
This output is weird. First because it's different from my own calculation (which I think is correct), and second because $\lim_{x\to b^2}\frac{1}{2b\sqrt{x}} = \frac{1}{2b^2} \neq \frac{1}{4b^2}$, where the last expression is the value of the density suggested by Mathematica's output. This is a problem, for $f_{X^2}$ has to be a continuous function. Probably the problem is me, not Mathematica, but where is my reasoning wrong?
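As a quick sanity check of the hand-derived density (done here in Python rather than Mathematica, and assuming b = 2 for concreteness), the CDF implied by $f_{X^2}$, namely $P(X^2 \le x) = \sqrt{x}/b$, matches simulation away from the endpoint:

```python
import numpy as np

rng = np.random.default_rng(0)
b = 2.0
x2 = rng.uniform(-b, b, 1_000_000) ** 2     # samples of X^2 with X ~ U[-b, b]
for q in (0.5, 1.0, 3.0):
    # empirical CDF of X^2 at q vs the analytic P(X^2 <= q) = sqrt(q)/b
    print(q, (x2 <= q).mean(), np.sqrt(q) / b)
```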
After this, I want to compute the joint density $f_{(X,X^2)}$ of the random vector $(X,X^2)$ and plot it. In this case I'm a bit lost how to proceed.
Thank you for your help. |
Venugopalan, P and Venkatesan, K (1990)
Studies in crystal engineering: topochemical photodimerization and structure of p-chlorobenzylidene-DL-piperitone. In: Acta Crystallographica, Section B: Structural Science, 46 (6). pp. 826-830.
Abstract
3-(p-Chlorophenylvinyl)-6-isopropyl-2-cyclohexen-1-one, $C_{17}H_{19}ClO$, $M_r$ = 274.7, triclinic, P1, a = 6.636(1), b = 10.537(2), c = 10.811(3) $\AA$, $\alpha$ = 95.31(2), $\beta$ = 92.60(2), $\gamma$ = 100.66(2)$^{\circ}$, V = 738.2(2) $\AA^3$, Z = 2, $D_m$ = 1.23, $D_x$ = 1.236 Mg $m^{-3}$, Cu K$\alpha$, $\lambda$ = 1.5418 $\AA$, $\mu$ = 2.09 $mm^{-1}$, F(000) = 292, T = 298 K, R = 0.050 for 2220 observed reflections. The title compound undergoes a topochemical photodimerization giving an anti head-to-tail dimer instead of the syn head-to-head dimer expected on the basis of the ability of the chloro group to steer $\beta$-type packing. There are no significant Cl...Cl intermolecular interactions. Possible reasons for the observed $\alpha$-type packing of benzylidenepiperitones are advanced. The low dimer yield is also rationalized.
Item Type: Journal Article. Additional Information: Copyright of this article belongs to the International Union of Crystallography. Keywords: Crystal Engineering; Topochemical Photodimerization; p-Chlorobenzylidene-DL-piperitone. Department/Centre: Division of Chemical Sciences > Organic Chemistry. Depositing User: Srinivasa Naika. Date Deposited: 13 Nov 2007. Last Modified: 19 Sep 2010. URI: http://eprints.iisc.ac.in/id/eprint/12454 |
WHY?
This paper aims to capture the non-linear dynamics of objects in video.
WHAT?
KVAE (Kalman Variational Autoencoder) combines a Kalman filter with a VAE to model dynamic latent variables. A linear Gaussian state-space model plays the role of the Kalman filter over the latent variables of the variational autoencoder. The matrices
\gamma_t = [A_t, B_t, C_t] are the state transition, control and emission matrices at time t, and Q and R are the covariance matrices of the process and measurement noise. Using the Kalman filter, we can compute the filtered posterior
p(z_t|a_{1:t}, u_{1:t}) and the smoothed posterior
p(z_t|a, u) exactly.
p_{\gamma_t}(z_t|z_{t-1}, u_t) = N(z_t; A_tz_{t-1} + B_t u_t, Q), p_{\gamma_t}(a_t|z_t) = N(a_t;C_t z_t, R)\\ p_{\gamma_t}(a,z|u)=p_{\gamma_t}(a|z)p_{\gamma_t}(z|u) In generative process, joint density of KVAE factorizes as
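The filtered posterior is computed by the standard Kalman predict/update recursion; a minimal NumPy sketch of one step (matrix names as in the equations above, with a hypothetical 1-D example):

```python
import numpy as np

def kalman_step(mu, P, u, A, B, C, Q, R, a):
    """One predict/update step of the Kalman filter for the LGSSM
    z_t = A z_{t-1} + B u_t + w,  a_t = C z_t + v,  w ~ N(0,Q), v ~ N(0,R)."""
    # Predict
    mu_p = A @ mu + B @ u
    P_p = A @ P @ A.T + Q
    # Update with the pseudo-observation a
    S = C @ P_p @ C.T + R                       # innovation covariance
    K = P_p @ C.T @ np.linalg.inv(S)            # Kalman gain
    mu_new = mu_p + K @ (a - C @ mu_p)
    P_new = (np.eye(len(mu)) - K @ C) @ P_p
    return mu_new, P_new

# 1-D example: prior N(0, 1), random-walk dynamics, observation a = 1
mu, P = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([0.0]),
                    A=np.array([[1.0]]), B=np.array([[0.0]]), C=np.array([[1.0]]),
                    Q=np.array([[0.01]]), R=np.array([[0.1]]), a=np.array([1.0]))
print(mu, P)   # posterior mean moves most of the way toward the observation
```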
p(x, a, z|u) = p_{\theta}(x|a)p_{\gamma}(a|z)p_{\gamma}(z|u). In the inference process,
\theta and
\gamma are learned to maximize the log likelihood
\log p_{\theta\gamma}(x|u). Since we can compute
p(z_t|a_{1:t}, u_{1:t}) and
p(z_t|a, u) exactly, the variational lower bound can be rewritten as below. This lower bound can be estimated with a Monte Carlo method.
F(\theta, \gamma, \phi) = E_{q_\phi (a|x)}[log\frac{p_{\theta}(x|a)}{q_{\phi}(a|x)} + E_{p_{\gamma}(z|a,u)}[log\frac{p_{\gamma}(a|z)p_{\gamma}(z|u)}{p_{\gamma}(z|a,u)}]]\\ \hat{F}(\theta, \gamma, \phi) = \frac{1}{I}\Sigma_i log p_{\theta}(x|\tilde{a}^{(i)}) + log p_{\gamma}(\tilde{a}^{(i)}, \tilde{z}^{(i)}|u) - log q_{\phi}(\tilde{a}^{(i)} |x) - log p_{\gamma}(\tilde{z}^{(i)}| \tilde{a}^{(i)},u)
a_t represents the dynamics, but those dynamics may not be linear at all times. Therefore, this paper suggests a dynamics parameter network, which linearly combines a set of base matrices
\gamma^{(k)} with mixing weights
\alpha_t estimated from the previous
a's using an LSTM.
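The mixing itself is just a softmax-weighted convex combination of base matrices; a tiny sketch (the logits here are made-up stand-ins for the LSTM output):

```python
import numpy as np

K, dz = 3, 2                                       # K base modes, latent dim dz
rng = np.random.default_rng(0)
base_A = rng.normal(scale=0.5, size=(K, dz, dz))   # K base transition matrices A^(k)
logits = np.array([0.1, 2.0, -1.0])                # in KVAE, from an LSTM over a_{0:t-1}
alpha = np.exp(logits) / np.exp(logits).sum()      # softmax mixing weights alpha_t
A_t = np.tensordot(alpha, base_A, axes=1)          # A_t = sum_k alpha_t[k] * A^(k)
print(A_t.shape)   # (2, 2)
```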
So?
KVAE performed better at imputing missing data in the bouncing-ball experiment and achieved a higher ELBO in the pendulum experiment.
Critic
Good to know about Kalman filter. |
It has been proved and showed through experiments that light can be bent by the Sun or any other body with considerable mass. Also light is nothing but photons. So can these photons be attracted by massive bodies if they have no mass?
You're trying to mix two theories. If you want to consider that photons are deflected by gravity, then you must consider that mass is energy.
By Einstein's special theory of relativity, the energy and relativistic mass of a body are related by $E=mc^2$. This works both ways. A massive body has energy, and a body with energy has mass. A photon has energy $h\nu$, so it has (relativistic) mass $\frac{h\nu}{c^2}$. Note that it still has 0 rest mass--rest mass is $\frac{E\sqrt{1-v^2/c^2}}{c^2}$. It will be deflected by gravity, but only a tiny bit (due to its tiny mass). And no, in General Relativity, acceleration due to gravity is not independent of mass as Galileo thought.
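As a quick order-of-magnitude check of $m = h\nu/c^2$ (the frequency below is an assumed example value for green light):

```python
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
nu = 5.45e14         # assumed frequency of green light, Hz
E = h * nu           # photon energy E = h*nu
m_rel = E / c**2     # relativistic mass E/c^2
print(E, m_rel)      # ~3.6e-19 J and ~4e-36 kg -- a tiny mass indeed
```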
In general relativity, gravity works not exactly as a force, but more as a distortion of spacetime. Spacetime is distorted such that what you feel is straight is curved from someone else's point of view. So an astronaut in the ISS can reason that the ISS is going straight (as long as he doesn't look at the stars or at the ground), as he feels no force (this is not due to the small value of g; any orbiting body feels no force, even classically). You, on the other hand, are standing on the ground, and you say that it is going in circles. This is due to the distortion of space. (I can clarify this with the rubber-sheet analogy if you wish.) |
On the subject of categorical versus set-theoretic foundations there is too much complicated discussion about structure that misses the essential point about whether "collections" are necessary.
It doesn't matter exactly what your personal list of mathematical requirements may be -- rings, the category of them, fibrations, 2-categories or whatever -- developing the appropriate foundational system for it is just a matter of "programming", once you understand the general setting.
The crucial issue is whether you are taken in by the Great Set-Theoretic Swindle that mathematics depends on collections (completed infinities). (I am sorry that it is necessary to use strong language here in order to flag the fact that I reject a widely held but mistaken opinion.)
Set theory as a purported foundation for mathematics does not and cannot turn collections into objects. It just axiomatises some of the intuitions about how we would like to handle collections, based on the relationship called "inhabits" (eg "Paul inhabits London", "3 inhabits N"). This binary relation, written $\epsilon$, is formalised using first order predicate calculus, usually with just one sort, the universe of sets. The familiar axioms of (whichever) set theory are formulae in first order predicate calculus together with $\epsilon$.
(There are better and more modern ways of capturing the intuitions about collections, based on the whole of the 20th century's experience of algebra and other subjects, for example using pretoposes and arithmetic universes, but they would be a technical distraction from the main foundational issue.)
Lawvere's "Elementary Theory of the Category of Sets" axiomatises some of the intuitions about the category of sets, using the same methodology. Now there are two sorts (the members of one are called "objects" or "sets" and of the other "morphisms" or "functions"). The axioms of a category or of an elementary topos are formulae in first order predicate calculus together with domain, codomain, identity and composition.
Set theorists claim that this use of category theory for foundations depends on prior use of set theory, on the grounds that you need to start with "the collection of objects" and "the collection of morphisms". Curiously, they think that their own approach is immune to the same criticism.
I would like to make it clear that I do NOT share this view of Lawvere's.
Prior to 1870 completed infinities were considered to be nonsense.
When you learned arithmetic at primary school, you learned some rules that said that, when you had certain symbols on the page in front of you, such as "5+7", you could add certain other symbols, in this case "=12". If you followed the rules correctly, the teacher gave you a gold star, but if you broke them you were told off.
Maybe you learned another set of rules about how you could add lines and circles to a geometrical figure ("Euclidean geometry"). Or another one involving "integration by parts". And so on. NEVER was there a "completed infinity".
Whilst the mainstream of pure mathematics allowed itself to be seduced by completed infinities in set theory, symbolic logic continued and continues to formulate systems of rules that permit certain additions to be made to arrays of characters written on a page. There are many different systems -- the point of my opening paragraph is that you can design your own system to meet your own mathematical requirements -- but a certain degree of uniformity has been achieved in the way that they are presented.
We need an inexhaustible supply of VARIABLES for which we may substitute.
There are FUNCTION SYMBOLS that form terms from variables and other terms.
There are BASE TYPES such as 0 and N, and CONSTRUCTORS for forming new types, such as $\times$, $+$, $/$, $\to$, ....
There are TRUTH VALUES ($\bot$ and $\top$), RELATION SYMBOLS ($=$) and CONNECTIVES and QUANTIFIERS for forming new predicates.
Each variable has a type, formation of terms and predicates must respect certain typing rules, and each formation, equality or assertion of a predicate is made in the CONTEXT of certain type-assignments and assumptions.
There are RULES for asserting equations, predicates, etc.
We can, for example, formulate ZERMELO TYPE THEORY in this style. It has type-constructors called powerset and {x:X|p(x)} and a relation-symbol called $\epsilon$. Obviously I am not going to write out all of the details here, but it is not difficult to make this agree with what ordinary mathematicians call "set theory" and is adequate for most of their requirements.
Alternatively, one can formulate the theory of an elementary topos in this style, or any other categorical structure that you require. Then a "ring" is a type together with some morphisms for which certain equations are provable.
If you want to talk about "the category of sets" or "the category of rings" WITHIN your type theory then this can be done by adding types known as "universes", terms that give names to objects in the internal category of sets and a dependent type that provides a way of externalising the internal sets.
So, although the methodology is the one that is practised by type theorists, it can equally well be used for category theory and the traditional purposes of pure mathematics. (In fact, it is better to formalise a type theory such as my "Zermelo type theory" and then use a uniform construction to turn it into a category such as a topos. This is easier because the associativity of composition is awkward to handle in a recursive setting. However, this is a technical footnote.)
A lot of these ideas are covered in my book "Practical Foundations of Mathematics" (CUP 1999), http://www.PaulTaylor.EU/Practical-Foundations. Since writing the book I have written things in a more type-theoretic than categorical style, but they are equivalent. My programme called "Abstract Stone Duality", http://www.PaulTaylor.EU/ASD, is an example of the methodology above, but far more radical than the context of this question in its rejection of set theory, ie I see toposes as being just as bad. |
Short answer:
I think the notation is the main problem here. In your second equation, the LHS $\rho\mathbf{u}$ is a function of $\mathbf{x}_0$ and $t$, while your RHS $\rho\mathbf{u}$ is a function of $\mathbf{x}$ and $t$. The subtle difference is that $\mathbf{x}_0$ should be treated as a particle label, not an actual position. As you suspected, the bridge between the two formulations of $\rho\mathbf{u}$ is given by $\phi$:$$(\rho\mathbf{u})(\mathbf{x},t) = (\rho\mathbf{u})\left[\phi(\mathbf{x},t),t\right]$$In your second equation, the time derivative on the LHS is with respect to a fixed collection of particles, whereas on the RHS it is with respect to fixed locations in space. In fact, the material derivative $\frac{D}{Dt}$ applies to functions of $\mathbf{x}$ and $t$. Using it on a function of particle label and time does not make sense. So the second equation comes from taking the time derivative of the expression above.
$$\begin{align}\frac{d}{dt}(\rho\mathbf{u})\left[\phi(\mathbf{x},t),t\right] & = \frac{d}{dt}(\rho\mathbf{u})(\mathbf{x},t) \\& = \frac{D}{Dt}(\rho\mathbf{u})(\mathbf{x},t)\end{align}$$
Long Answer/Derivation:
To keep the derivation more readable, we will define $\mathbf{f}$ as the transport property we are interested in. In your case, $\mathbf{f} = \rho\mathbf{u}$.The change of variables that you want to perform transforms $\mathbf{f}$ between two different frames of reference.
The Eulerian frame of reference looks at changes in $\mathbf{f}$ from the point of view of an outside observer watching the entire flow field. The property of the fluid depends on its location and time:$$\mathbf{f} = \mathbf{f}(\mathbf{x}, t)$$The Eulerian frame is convenient for experiments and operations involving spatial gradients. However, applying the laws of classical mechanics in this frame of reference is not as straightforward, since the particles that occupy a location $\mathbf{x}$ at different times are usually not the same. In order to use the conservation laws, we switch to a more suitable reference frame.
The Lagrangian reference frame will monitor the change of $\mathbf{f}$ from the point of view of a fluid particle. In such a reference frame, each particle has an associated $\mathbf{f}(t)$. Let's name each particle in our continuum with a label $\mathbf{\xi}$, so that$$\mathbf{f} = \mathbf{f}(\mathbf{\xi}, t)$$gives the property of particle $\mathbf{\xi}$ at time $t$. To distinguish properties in the two reference frames, we will accent the Lagrangian properties with a tilde.$$\tilde{\mathbf{f}} = \tilde{\mathbf{f}}(\mathbf{\xi}, t)$$
Now we need a map between $\tilde{\mathbf{f}}$ and $\mathbf{f}$. Let's define $\phi$ as the mapping that takes the particle's location $\mathbf{x}$ at some time $t$ and returns the label of the particle $\mathbf{\xi}$:$$\mathbf{\xi} = \phi(\mathbf{x},t)$$If we use the location of a particle at $t=0$ as its label, then we get the trajectory function you used:$$\mathbf{x}_0 = \phi(\mathbf{x}, t)$$It is important to note that $\mathbf{x}_0$ serves as a particle label, so while$$\tilde{\mathbf{f}}(\mathbf{x}_0,0) = \mathbf{f}(\mathbf{x}_0,0),$$in general$$\tilde{\mathbf{f}}(\mathbf{x},t) \ne \mathbf{f}(\mathbf{x},t)$$
Instead, the two reference frames are related by$$\mathbf{f}(\mathbf{x},t) = \tilde{\mathbf{f}}\left[\phi(\mathbf{x},t),t\right]$$and the Jacobian should be a function of particle label and time:$$J = J(\mathrm{\xi},t)$$
Using this notation, the first integral in your question becomes$$\begin{align}\frac{d}{dt}\int_{W(t)} \mathbf{f}(\mathbf{x},t) dV & = \frac{d}{dt}\int_W \tilde{\mathbf{f}}\left[\phi(\mathbf{x},t),t\right]J(\mathbf{\xi},t)dV \\& = \int_W \left[ \frac{d\tilde{\mathbf{f}}}{dt}J + \tilde{\mathbf{f}}\frac{dJ}{dt} \right] dV\end{align}$$
The second equation in your question is not applied until the integral above is transformed back from the $(\mathbf{\xi},t)$ frame to the $(\mathbf{x},t)$ frame. Noting that the time derivative of this Jacobian can be written as (this is a whole other derivation):$$\frac{dJ}{dt} = (\nabla \cdot \mathbf{u})J(\mathbf{x},t)$$
we have:
$$\begin{align}\int_W \left[ \frac{d\tilde{\mathbf{f}}}{dt}J + \tilde{\mathbf{f}}\frac{dJ}{dt} \right] dV & = \int_{W(t)} \left[\frac{d}{dt}\mathbf{f}(\mathbf{x},t) + \mathbf{f}(\mathbf{x},t)(\nabla \cdot \mathbf{u})\right] dV\end{align}$$
The first term in the integral can be expanded using the chain rule$$\frac{d}{dt}\mathbf{f}(\mathbf{x},t) = \frac{\partial \mathbf{f}}{\partial t} + \frac{\partial \mathbf{x}}{\partial t} \cdot \frac{\partial \mathbf{f}}{\partial \mathbf{x}}$$
which is just the definition of the material derivative $\frac{D}{Dt}$. |
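The chain-rule identity above can be checked symbolically in a simple 1-D case. The velocity field, scalar field and trajectory below are assumed examples chosen so that the particle path solves $dx/dt = u$:

```python
import sympy as sp

x, t, x0 = sp.symbols('x t x0')
u = x                                    # assumed 1-D velocity field u(x,t) = x
f = x * t                                # sample scalar field f(x,t)
Df = sp.diff(f, t) + u * sp.diff(f, x)   # material derivative Df/Dt = f_t + u f_x
traj = x0 * sp.exp(t)                    # particle path solving dx/dt = u, x(0) = x0
lagrangian_rate = sp.diff(f.subs(x, traj), t)        # d/dt f(x(t), t)
residue = sp.simplify(lagrangian_rate - Df.subs(x, traj))
print(residue)   # 0: the two rates agree along the particle path
```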
1. Homework Statement
Hi, I need to calculate the following integral:
[itex]\int_{-\infty}^{+\infty}dx \frac{(\pi+\sqrt{x^2+m^2})^2(1+\cos x)}{(x^2-\pi^2)^2\sqrt{x^2+m^2}}[/itex]
3. The Attempt at a Solution
I tried complexifying it:
[itex]\oint dz \frac{(\pi+\sqrt{z^2+m^2})^2(1+e^{iz})}{(z^2-\pi^2)^2\sqrt{z^2+m^2}}[/itex]
And having this over the following contour (sorry for the quality of the image):
https://www.dropbox.com/s/t7ioou1kjs3y7ej/paint.jpg?dl=0
The red dots are the poles at: [itex]\pm\pi[/itex] and [itex]\pm im[/itex].
But is it a valid contour in this case, or should I pay extra attention to the branch cuts of the sqrt in the denominator? If so, how do I choose the contour properly?
|
Hi all! I just found this site today, and I am really hoping that I can get some useful advice here. That said, I have two problems--one easy, one not so easy.
Easy problem:
Basically, I was wondering if anybody out there knows of an algorithm to calculate [itex] g_{ij} [/itex], given only the 'shape' of the underlying manifold? For example, I have what can best be described as a dumbell, or [itex] S^2 [/itex] with the equator contracted in to a small radius--no corners anywhere, though. Keep in mind that I am not looking so much for a solution for the dumbell, but something that could be applied to a more general, finite dimensional, connected [itex] C^\infty [/itex] manifold.
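Not a general algorithm, but the standard starting point: if you can write down an embedding (or parametrization) of the shape, the metric is the pullback of the Euclidean metric, $g_{ij} = \partial_i \mathbf{x} \cdot \partial_j \mathbf{x}$. A symbolic sketch for a hypothetical dumbbell-like surface of revolution (the radius profile is an arbitrary made-up choice):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
# Dumbbell-like surface of revolution: axis coordinate u, radius r(u) pinched near u = 0
r = 1 - sp.Rational(7, 10) * sp.exp(-4 * u**2)
X = sp.Matrix([u, r * sp.cos(v), r * sp.sin(v)])   # embedding into R^3
J = X.jacobian([u, v])                             # 3x2 Jacobian dX/d(u,v)
g = (J.T * J).applyfunc(sp.simplify)               # induced metric g_ij
print(g)   # diagonal: [[1 + r'(u)**2, 0], [0, r(u)**2]]
```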
Harder problem:
As a related problem, I am also trying to come up with some sort of meaningful geometric interpretation of the Laplacian when it is being applied to some function that lies on a Riemannian manifold, rather than Euclidean space. In particular, it seems that something like the heat equation:
[itex]\frac{\partial}{\partial t}u=\Delta u [/itex]
should lend itself to a, let's say, rewording as something like a Gauss curvature flow:
[itex]\frac{\partial}{\partial t}x=-K \nu [/itex]
where [itex] x [/itex] is an embedding. Of course, the manifold where this new surface would be embedded would require some additional structure, since the geometric heat equation is valid only for (smooth) compact Riemannian surfaces without boundary--like [itex] S^2 [/itex], or something diffeomorphic to it. That problem, however, has already been taken care of, so please don't waste your time even considering it.
Anyway, I thank you all in advance for your time, and if you require clarification of the question at all, please don't hesitate to ask.
Cheers!
Easy problem:
Basically, I was wondering if anybody out there knows of an algorithm to calculate [itex] g_{ij} [/itex], given only the 'shape' of the underlying manifold? For example, I have what can best be described as a dumbell, or [itex] S^2 [/itex] with the equator contracted in to a small radius--no corners anywhere, though. Keep in mind that I am not looking so much for a solution for the dumbell, but something that could be applied to a more general, finite dimensional, connected [itex] C^\infty [/itex] manifold.
Harder problem:
As a related problem, I am also trying to come up with some sort of meaningful geometric interpretation of the Laplacian when it is being applied to some function that lies on a Riemannian manifold, rather than Euclidean space. In particular, it seems that something like the heat equation:
[itex]\frac{\partial}{\partial t}u=\Delta u [/itex]
should lend itself to a, let's say, rewording as something like a Gauss curvature flow:
[itex]\frac{\partial}{\partial t}x=-K \nu [/itex]
where [itex] x [/itex] is an embedding. Of course, the manifold where this new surface would be embedded would require some additional structure, since the geometric heat equation is valid only for (smooth) compact Riemannian surfaces without boundary--like [itex] S^2 [/itex], or something diffeomorphic to it. That problem, however, has already been taken care of, so please don't waste your time even considering it.
Anyway, I thank you all in advance for your time, and if you require clarification of the question at all, please don't hesitate to ask.
Cheers!
WHY?
The dimension of a word embedding is usually chosen heuristically.
WHAT?
This paper suggests a new metric to evaluate the quality of word embeddings and uses it to find their optimal dimensionality. Word embedding algorithms have been shown to implicitly factorize a PMI matrix. However, the L2 loss between the embedding and the factorized matrix cannot be used directly, because word embeddings have a unitary-invariance property: an embedding is invariant under rotation. This paper therefore suggests the Pairwise Inner Product (PIP) loss, which measures relative position shifts between embeddings.
PIP(E) = EE^T\\ \|PIP(E_1) - PIP(E_2)\| = \|E_1 E_1^T - E_2 E_2^T\| = \sqrt{\sum_{i, j}(\langle v_i^{(1)}, v_j^{(1)}\rangle - \langle v_i^{(2)}, v_j^{(2)}\rangle)^2}
Using the PIP loss, the optimal dimensionality can be found as the one that maximizes the quality of the embedding. The paper reports that choosing the dimensionality of a word embedding can be explained as a bias-variance trade-off: a larger dimensionality lowers bias but increases estimation noise, while a smaller dimensionality does the opposite. A mathematical proof is provided to show that this bias-variance trade-off captures the signal-to-noise ratio.
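A small numerical sketch (function and variable names are mine, not from the paper) of the PIP matrix and the unitary invariance that motivates the PIP loss: rotating an embedding leaves its PIP matrix unchanged, so the loss between an embedding and its rotation is zero.

```python
# Sketch: PIP matrix and PIP loss, and a check of unitary invariance.
import numpy as np

def pip_loss(E1, E2):
    # Frobenius norm of the difference of the two PIP matrices E @ E.T
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T)

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))               # toy embedding: 5 words, 3 dims
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix

print(pip_loss(E, E @ Q))  # ~0: rotating the embedding leaves PIP(E) unchanged
```

Since (EQ)(EQ)^T = E Q Q^T E^T = E E^T for orthogonal Q, the loss compares embeddings up to rotation, which the plain L2 loss between embedding matrices cannot do.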
So?
Using these facts, the paper provides two findings. Suppose the embedding is obtained as
E = U_{\cdot,1:d}D_{1:d,1:d}^{\alpha}.
First, a larger \alpha makes the embedding more robust to over-fitting. Second, the optimal dimensionality can be found as the one that minimizes the PIP loss, balancing the bias-variance trade-off.
Jack, I will explain the problem here first in a mathematical rather than physical way. The mathematical issue at play here is that the operation you are proposing is not well-defined at the level of basic physics. Let's take a look at some situations in math where this type of problem crops up that have nothing to do with physical units.
In calculus, we have $\int_a^b f(x)\,dx = F(b) - F(a)$ where $F(x)$ is any antiderivative of $f(x)$. What if someone came along and asked if a new operation $I(f,a,b) = F(b) + F(a)$ has any useful meaning in terms of the original function $f(x)$ and interval $[a,b]$. It does not, because
if you change the antiderivative you change the answer. For any two antiderivatives $F(x)$ and $G(x)$ of $f(x)$, they differ by a constant, say $G(x) = F(x) + C$. This means that a difference of antiderivatives of $f(x)$ at $a$ and $b$ is independent of the choice of antiderivatives but a sum of antiderivatives of $f(x)$ at $a$ and $b$ is not:$$G(b) - G(a) = (F(b) + C) - (F(a) + C) = F(b) - F(a)$$while$$G(b) + G(a) = (F(b) + C) + (F(a) + C) = F(b) + F(a) + 2C, $$which is not $F(b) + F(a)$ unless $C = 0$ (i.e., $G(x) = F(x)$). So a difference of values of an antiderivative of $f(x)$ is a well-defined number in terms of the original function $f(x)$, but a sum of values of an antiderivative is not. If you want to produce a definite real number from an antiderivative of $f(x)$, and have that definite number be determined solely by $f(x)$ and not the choice of antiderivative, differences make sense but sums do not. By the way, this has a role in physics: potential energy is only defined up to an overall additive constant, which explains why a difference of potential energy values has a physical meaning but a sum of potential energy values does not.
Another example is in geometry. We add angles but we don't ever multiply angles. Is there a mathematical problem with multiplying angles? Yes: angle measurement is intrinsically only determined up to an integer multiple of $2\pi$, and this property is respected by addition but not by multiplication. If $\theta_2 = \theta_1 + 2\pi{k}$ and $\varphi_2 = \varphi_1 + 2\pi{\ell}$ for some integers $k$ and $\ell$, then $$\theta_2 + \varphi_2 = \theta_1 + \varphi_1 + 2\pi(k+\ell),$$so the two sums $\theta_2 + \varphi_2$ and $\theta_1 + \varphi_1$ are again equal up to an integer multiple of $2\pi$. However, $$\theta_2\varphi_2 = \theta_1\varphi_1 + 2\pi(k\varphi_1 + \ell\theta_1 + 2\pi{k}\ell)$$ and $k\varphi_1 + \ell\theta_1 + 2\pi{k}\ell$ is not an integer all the time. (If you have had abstract algebra, I could say the "problem" here is that $2\pi{\mathbf Z}$ is a subgroup of ${\mathbf R}$ but not an ideal in ${\mathbf R}$, so the quotient ${\mathbf R}/2\pi{\mathbf Z}$ can be given the structure of an additive group but not a ring.) If you want to say "but I can talk about $\sin(xy)$ for any numbers $x$ and $y$, and that is multiplying angles", I'd say it is not: in the expression $\sin(xy)$ with real variables $x$ and $y$, the numbers $x$ and $y$ must be regarded as real numbers, not angles. The story is different with $\sin(2x)$, which is well-defined when $x$ is thought of as an angle (a number up to addition by an integral multiple of $2\pi$). This distinction is why the $x$ in a Fourier series $$f(x) = \sum_{n \in {\mathbf Z}} c_ne^{2\pi{i}nx}$$can be thought of as lying on a circle if you wish, but the $x$ in a Fourier transform $$({\mathcal F}f)(x) = \int_{{\mathbf R}} f(y)e^{2\pi{i}xy}\,dy$$can not and must be thought of on the real line: the Fourier transform is not a $2\pi$-periodic function of $x$, so it is not well-defined to regard the Fourier transform as a function on the unit circle.
In linear algebra, the trace of a linear operator $A \colon V \rightarrow V$ on a finite-dimensional vector space is defined to be $\sum_{i} a_{ii}$ where $(a_{ij})$ is a matrix representation of $A$ in a basis of $V$. It is crucial that this sum is
independent of the choice of basis. We used a basis to compute the trace, but if you want the trace to be a function purely of the operator $A$ then it has to have the same value no matter what basis you use on $V$. In a linear algebra course you learn that the trace is independent of the basis used to compute it. On the other hand, the "anti-trace" $\sum_{i} a_{i,n+1-i}$ (sum on the antidiagonal) or "border trace" (sum around the boundary of a matrix representation of $A$) are not well-defined because if you change the basis then the new matrix representation has a different value for its anti-trace or border trace. That's why you never hear anyone talk about such sums in linear algebra, since they are not well-defined functions of the original operator: they depend on the choice of basis. To the extent you agree that geometric concepts should not depend on your choice of coordinate system, you'll agree that useful concepts in linear algebra should be independent of the choice of basis.
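The contrast between the trace and the "anti-trace" can be checked numerically. A small sketch (my own, with an arbitrary random change of basis):

```python
# Sketch: the trace survives a change of basis A -> P^{-1} A P;
# the "anti-trace" (sum along the antidiagonal) does not.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
P = rng.normal(size=(3, 3))       # a random matrix is almost surely invertible
B = np.linalg.inv(P) @ A @ P      # the same operator in a different basis

def anti_trace(M):
    # sum along the antidiagonal -- NOT basis-independent
    n = M.shape[0]
    return sum(M[i, n - 1 - i] for i in range(n))

print(np.isclose(np.trace(A), np.trace(B)))      # True: trace is well-defined
print(np.isclose(anti_trace(A), anti_trace(B)))  # almost surely False
```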
In algebraic geometry, polynomials are not well-defined functions on projective space since their values change if the homogeneous coordinates change. But ratios of homogeneous polynomials of the same degree do give the same answer for all homogeneous coordinates of a point, and that is why ratios of homogeneous polynomials of the same degree are the natural functions on projective space.
In grade school math, addition of fractions is
not $(a/b) + (c/d) = (a+c)/(b+d)$, since this operation is not well-defined: although 1/2 = 5/10 and 3/4 = 6/8, this fake way of combining fractions by adding numerators and denominators doesn't lead to the same answer when you change the way you write the fraction: $(1+3)/(2+4) = 4/6$ and $(5+6)/(10+8) = 11/18 \not= 4/6$. If you were to fix a preferred representation of fractions, such as the representation using relatively prime numerator and denominator with a positive denominator, then this "add the numerators and add the denominators" is a well-defined operation, but it would be very awkward to use because it would depend on the way you write the fractions. This fake addition does have an interesting application, which you'll learn if you read about Farey fractions; it just doesn't correspond to addition, so we shouldn't denote it as +, and it does not generalize to fractions where the numerator and denominator are in a ring that lacks unique factorization (and a preferred choice of unit multiple of each nonzero element).
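This failure of well-definedness can be demonstrated mechanically. A tiny sketch (my own): the same two fractions, written two different ways, produce different results under the fake addition.

```python
# Sketch: "add the numerators, add the denominators" is not well-defined.
from fractions import Fraction

def fake_add(a, b, c, d):
    # combine a/b and c/d by the mediant rule (a+c)/(b+d)
    return Fraction(a + c, b + d)

assert Fraction(1, 2) == Fraction(5, 10) and Fraction(3, 4) == Fraction(6, 8)
print(fake_add(1, 2, 3, 4))    # 2/3 (i.e. 4/6)
print(fake_add(5, 10, 6, 8))   # 11/18 -- a different answer for the same fractions
```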
If you don't think having operations be well-defined is important in math then you're going to be in for a mountain of trouble when you learn algebra (quotient groups) or differential geometry (manifolds), where you regularly have to define functions by making a choice and then check the answer is independent of the choice that was made (a choice could mean a coset representative or a choice of coordinate system near a point).
And if you don't think issues of "units" occur in math, you're mistaken. They are just hidden enough that you may not notice them. To measure angles we prefer to use radians. If you wanted to use other systems of measuring angles then the familiar derivative formulas for trigonometric functions would change: while $\sin'(x) = \cos(x)$ when $x$ is an angle in radians, if you change $x$ to degrees then $\sin'(x) = (\pi/180)\cos(x)$. We prefer radians because they lead to the simplest calculus formulas, without any weird factors like $\pi/180$ showing up. In Fourier analysis, some prefer to define the Fourier transform using $e^{ixy}$ instead of $e^{2\pi{i}xy}$, and then factors of $2\pi$ or $\sqrt{2\pi}$ start showing up in other formulas from Fourier analysis such as Parseval's formula. In linear algebra, we prefer to take as the "natural" isomorphism from a finite-dimensional vector space $V$ to its double dual space the mapping $v \mapsto {\rm ev}_v$, where ${\rm ev}_v(\varphi) = \varphi(v)$ for all linear functionals $\varphi$ on $V$, but there are other possibilities, namely $v \mapsto c\cdot{\rm ev}_v$ for any nonzero element $c$ of the underlying scalar field. Category-theoretic arguments show that these are essentially the only possible natural isomorphisms to the double-dual space.
Now I'll turn to physical measurements. If you want to add a length and a time together, you need to recognize that there is no natural standard for measuring either of these quantities: any two systems of measuring length differ by a scaling factor, and any two systems of measuring time differ by a scaling factor. Even if everyone on our planet used the metric system, it doesn't make that system physically profound. At some point in the past someone picked a length and declared it one meter, but that human convention doesn't have any physical importance. (If you think metric units are actually an essential part of the fabric of nature, then something has gone badly wrong in your education. Maybe the "radius of the electron" or the Planck length could be considered a physically fundamental length, but your question is on a much more elementary level than that.) The link between different measurements of the same physical quantity is not always just a scaling factor (temperature is the best example of that, where $F = (9/5)C + 32$), but for simplicity let's stick to conversions between different systems of measurement as being just scaling factors.
Because of the physical "fact" that different systems of measuring the same physical concept differ by a scaling factor, a physical measurement can be thought of as a real-valued function defined up to an overall positive scaling factor. If $f$ and $g$ are two ways of measuring the same physical quantity, then $g = cf$ for some positive $c$. For instance, if we are measuring length ($L$) and write $f_L$ for the meter-function and $g_L$ for the feet-function, then $c = 3.28$: $g_L(x) = 3.28f_L(x)$ (that is, to convert from meters to feet, multiply the meters value by 3.28). If we are measuring time ($T$), with $f_T$ for the second-function and $g_T$ for the minute-function, then $c = .016$: $g_T(y) = .016f_T(y)$ (to convert from seconds to minutes, multiply the second value by .016). Now ask yourself: if a function is defined up to an overall scaling factor, and another function is defined up to an overall scaling factor, what can I do with them and keep the result defined up to an overall scaling factor? You can multiply them or divide them, but you can't add them.
For example, if $g_L = 3.28f_L$ and $g_T = .016f_T$, then $g_L/g_T = 205f_L/f_T$. Recalling what these functions meant above, this last equation says if you want to convert from meters per second to feet per minute, multiply by 205. And $g_Lg_T = .05248f_Lf_T$, so to convert from meters-seconds (whatever that means) to feet-minutes, multiply by .05248.
Let's finally try addition: if $g_L = 3.28f_L$ and $g_T = .016f_T$, is $g_L + g_T = c(f_L+f_T)$ for some $c > 0$? This is the test for whether addition of measurements is well-defined. Since $g_L + g_T = 3.28f_L + .016f_T$, you want $3.28f_L + .016f_T = c(f_L + f_T)$, so you need $(3.28-c)f_L = (c-.016)f_T$, and therefore $f_L = ((c-.016)/(3.28-c))f_T$. In other words, you need to be able to convert between length and time: length and time have to be different ways of measuring the same thing. Are they? At an elementary level they are not, and that is why you can't add length and time physically.
To add two physical measurements in a well-defined way, we have seen (for the examples of length and time) that the two quantities you're measuring have to be convertible into each other. In relativity, we learn that the speed of light is a fundamental physical speed, and if we decide it is truly fundamental we can use it to convert between length and time. In general relativity it is convenient to declare the speed of light to be 1, which sets a definite conversion between meters and seconds, or feet and seconds, or any preferred choice of measuring length and time so that the value of the speed of light using those systems of measurement turns out to be 1. (It's like our preference for radians over degrees because in calculus the use of radians makes certain coefficients in derivative formulas equal to 1.) Once you have a standard for turning length into time then you can add length and time, and then all you're doing is adding length and length. Google the term "Planck units" to see how fundamental physical theories lead to a way of converting between mass, length, and time.
I'll leave it to you to decide what this viewpoint has to say about the physical possibility of adding meters and feet. Hint: be careful about whether you're dealing with functions of the same object or different objects. |
In the previous post, we introduced the concept of recurrence relations. In this article and the following two articles, we will learn how to solve recurrence relations to get the running time of recursive algorithms. Solving a recurrence relation means finding a closed form expression in terms of $n$. There are various techniques available to solve recurrence relations. Some techniques can be used for all kinds of recurrence relations and some are restricted to recurrence relations with a specific format.
One of the simplest methods for solving simple recurrence relations is forward substitution. In this method, we evaluate the recurrence for $n = 1, 2, 3, \ldots$ until we see a pattern. Then we make a guess and predict the running time. The final and important step in this method is to verify that our guess is correct using induction. Consider a recurrence relation
$$T(n) = \begin{cases} 1 & \text{ if } n = 1 \\ T(n - 1) + 1 & \text{otherwise} \end{cases}$$ We can calculate the running time for $n = 1, 2, 3, \ldots$ as follows
$T(1) = 1$
$T(2) = T(1) + 1 = 1 + 1 = 2$
$T(3) = T(2) + 1 = 1 + 1 + 1 = 3$
$T(4) = T(3) + 1 = 1 + 1 + 1 + 1 = 4$
We can easily see the pattern here: when $n = k$, $T(n) = k$. So the running time is
$$T(n) = n$$ We need to verify our guessed running time by induction: the base case $T(1) = 1$ holds, and if $T(n - 1) = n - 1$, then $T(n) = T(n - 1) + 1 = (n - 1) + 1 = n$.
Consider another example
$$T(n) = \begin{cases} 1 & \text{ if } n = 1 \\ 2T(n - 1) + 1 & \text{otherwise} \end{cases}$$
As above, we can calculate the running time for $n = 1, 2, 3, \ldots$ as follows
$T(1) = 1$
$T(2) = 2T(1) + 1 = 2 \cdot 1 + 1 = 3$
$T(3) = 2T(2) + 1 = 2 \cdot 3 + 1 = 7$
$T(4) = 2T(3) + 1 = 2 \cdot 7 + 1 = 15$
$T(5) = 2T(4) + 1 = 2 \cdot 15 + 1 = 31$
We can already see the pattern here. When the value of $n$ is $k$, the pattern is
$$T(k) = 2^{k - 1} + 2^{k - 2} + … + 2^0$$ This is a geometric series whose sum can be easily calculated to be $2^k - 1$. Therefore, the running time of the algorithm is $$T(n) = 2^n - 1$$ The correctness of this running time can be easily proved by putting $n = 1, 2, 3, …$ in the above equation.
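The guess can also be checked mechanically. A small sketch (my own) comparing the recurrence against the guessed closed form:

```python
# Sketch: verify that T(n) = 2^n - 1 satisfies T(1) = 1, T(n) = 2T(n-1) + 1.
def T(n):
    return 1 if n == 1 else 2 * T(n - 1) + 1

for n in range(1, 11):
    assert T(n) == 2**n - 1
print("closed form verified for n = 1..10")
```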
In the forward substitution method, we put $n = 1, 2, 3, …$ in the recurrence relation until we see a pattern. In backward substitution, we do the opposite, i.e. we put $n = n, n - 1, n - 2, …$ or $n = n, n/2, n/4, …$ until we see the pattern. After we see the pattern, we make a guess for the running time and then verify it. Let us use this method in some examples.
Consider an example recurrence relation given below
$$T(n) = \begin{cases} 1 & \text{ if } n = 1 \\ 2T \left (\frac{n}{2} \right) + n & \text{otherwise} \end{cases}$$
Given $T(n)$, we can calculate the value of $T(n/2)$ from the above recurrence relation as
$$T(n/2) = 2T\left ( \frac{n}{4} \right ) + \frac{n}{2}$$ Now we back substitute the value of $T(n/2)$ in $T(n)$ $$T(n) = 2^2T\left (\frac{n}{2^2}\right ) + 2n$$ We proceed in a similar way $$\begin{align}T(n) &= 2^3T \left (\frac{n}{2^3} \right ) + 3n \\ &= 2^4T \left (\frac{n}{2^4} \right ) + 4n\\ &= 2^kT \left (\frac{n}{2^k} \right ) + kn \end{align}$$ Now we should use the boundary (base) condition i.e. $T(1) = 1$. In order to use the boundary condition, the entity inside $T()$ must be 1 i.e. $$\frac{n}{2^k} = 1$$ Taking $\log_2$ on both sides, $$k = \log_2 n$$ The expression then becomes $$\begin{align} T(n) &= 2^{\log_2 n}T \left (\frac{n}{2^{\log_2 n}} \right ) + n\log_2 n \\ & = nT(1) + n\log_2 n \\ & = n\log_2 n + n \end{align}$$ The correctness of the above running time can be proved using induction. Put $n = 2, 4, 8, 16, …$ and you can easily verify that the guessed running time is actually correct.
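The verification on powers of two can be automated. A small sketch (my own):

```python
# Sketch: verify T(n) = n*log2(n) + n on powers of two, given
# T(1) = 1, T(n) = 2T(n/2) + n.
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

for k in range(11):
    n = 2**k
    assert T(n) == n * k + n   # k = log2(n), so this is n*log2(n) + n
print("guess verified for n = 1, 2, 4, ..., 1024")
```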
We rarely use the forward and backward substitution methods in practice. There are much more sophisticated and faster methods. But these methods can be used as a last resort when other methods are powerless to solve some kinds of recurrences.
Some recurrences take the form
$$a_0T(n) + a_1T(n - 1) + … +a_kT(n - k) = 0$$ Such a recurrence is called a homogeneous linear recurrence with constant coefficients and can be solved easily using the technique of characteristic equations. The steps are as follows.
1. Write the recurrence relation in characteristic equation form.
2. Change the characteristic equation into a characteristic polynomial of degree $k$.
3. Find the roots of the characteristic polynomial. A polynomial $p(x)$ of degree $k$ has exactly $k$ roots $r_1, r_2, …, r_k$.
4. If the roots are distinct, the solution of the recurrence relation is $$T(n) = \sum_{i = 1}^k c_ir_i^n$$ where $c_1, c_2, …, c_k$ are constants.
5. Find the values of the constants $c_1, c_2, …, c_k$ using the boundary conditions. If there are 3 constants then we need 3 equations.
If the roots are not distinct then the solution becomes $$T(n) = \sum_{i = 1}^l \sum_{j = 0}^{m_i - 1} c_{ij}n^jr_i^n$$ where $r_1, r_2, …, r_l$ are the $l$ distinct roots of the characteristic polynomial and $m_1, m_2, …, m_l$ are their multiplicities respectively.
We use these steps to solve a few recurrence relations, starting with the Fibonacci numbers. The Fibonacci recurrence relation is given below.
$$T(n) = \begin{cases} n & \text{ if } n = 1 \text{ or } n = 0\\ T(n - 1) + T(n - 2) & \text{otherwise} \end{cases}$$ The first step is to write the above recurrence relation in characteristic equation form. For this, we ignore the base case and move everything on the right-hand side of the recursive case to the left, i.e. $$T(n) - T(n - 1) - T(n - 2) = 0 $$ Next we change the characteristic equation into a characteristic polynomial as $$x^2 - x - 1 = 0$$ The roots of this characteristic polynomial are $$r_1 = \frac{1 + \sqrt{5}}{2} \text{ and } r_2 = \frac{1 - \sqrt{5}}{2}$$ Since the roots are distinct, the solution is, therefore, of the form $$T(n) = c_1r_1^n + c_2r_2^n = c_1\left (\frac{1 + \sqrt{5}}{2} \right )^n + c_2 \left (\frac{1 - \sqrt{5}}{2} \right )^n$$ The values of $c_1$ and $c_2$ can be calculated using the base conditions. We know that $T(0) = 0$ and $T(1) = 1$. Putting in these two values, we get $$\begin{align} & c_1 + c_2 = 0 \\ & r_1c_1 + r_2c_2 = 1\end{align}$$ Solving these equations, we get $$c_1 = \frac{1}{\sqrt{5}} \text{ and } c_2 = -\frac{1}{\sqrt{5}}$$ Thus, $$T(n) = \frac{1}{\sqrt{5}}\Bigg [ \left (\frac{1 + \sqrt{5}}{2} \right )^n - \left (\frac{1 - \sqrt{5}}{2} \right )^n \Bigg ]$$
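The derived closed form (Binet's formula) can be checked against the recurrence directly. A small sketch (my own); the closed-form value is a float, so we round before comparing:

```python
# Sketch: compare the Fibonacci recurrence against its closed form.
import math

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
for n in range(20):
    assert fib(n) == round((phi**n - psi**n) / math.sqrt(5))
print("closed form matches fib(0..19)")
```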
Consider another recurrence,
$$T(n) = \begin{cases} n & \text{ if } n = 0, 1 \text{ or } 2\\ 5T(n - 1) - 8T(n - 2) + 4T(n - 3) & \text{otherwise} \end{cases}$$ First we write the characteristic equation $$T(n) - 5T(n - 1) + 8T(n - 2) - 4T(n - 3) = 0$$ The characteristic polynomial is, $$x^3 - 5x^2 + 8x - 4 = 0$$ The roots are $r_1 = 1$ of multiplicity $m_1 = 1$ and $r_2 = 2$ of multiplicity $m_2 = 2$. Since roots are repeated, the solution is $$T(n) = c_11^n + c_22^n + c_3n2^n$$ The base conditions give, $$\begin{align} & c_1 + c_2 = 0 \\ & c_1 + 2c_2 + 2c_3 = 1 \\ & c_1 + 4c_2 + 8c_3 = 2 \end{align}$$ Solving these equations, we get $c_1 = -2, c_2 = 2 $ and $c_3 = -\frac{1}{2}$. Therefore $$T(n) = 2^{n + 1} - n2^{n - 1} - 2$$
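The repeated-root solution can also be verified numerically. A small sketch (my own); note the closed form is written so the check stays in exact integers:

```python
# Sketch: verify T(n) = 2^{n+1} - n*2^{n-1} - 2 against the recurrence
# T(n) = 5T(n-1) - 8T(n-2) + 4T(n-3) with T(0)=0, T(1)=1, T(2)=2.
def T(n):
    return n if n < 3 else 5*T(n - 1) - 8*T(n - 2) + 4*T(n - 3)

for n in range(12):
    # n * 2**n // 2 is n*2^{n-1}, written to avoid fractions at n = 0
    assert T(n) == 2**(n + 1) - n * 2**n // 2 - 2
print("closed form verified for n = 0..11")
```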
In homogeneous recurrences, the linear combination of the $T(n - i)$ terms is zero. In inhomogeneous recurrences, the linear combination is not zero, and therefore the solution is more difficult than in the homogeneous case. Inhomogeneous recurrences take the following form
$$a_0T(n) + a_1T(n - 1) + … + a_kT(n - k) = b^np(n)$$ Where $b$ is a constant and $p(n)$ is a polynomial in $n$ of degree $d$.
To get the characteristic polynomial of an inhomogeneous recurrence, we follow exactly the same procedure as in the homogeneous case for the left-hand side of the equation, and for the right-hand side we multiply the characteristic polynomial by $(x - b)^{d + 1}$.
Example 1: Consider the recurrence, $$T(n) - 2T(n - 1) = 3^n$$ The right side must be in $b^np(n)$ format. We can easily change the right side to match the format as $$T(n) - 2T(n - 1) = 3^nn^0$$ Here $b = 3$ and $d = 0$. The characteristic polynomial of the left part of the equation is (exactly the same as in the homogeneous case) $$x - 2 = 0$$ Now we multiply this by $(x - b)^{d + 1}$ to get $$(x - 2)(x - 3) = 0$$ This is the characteristic polynomial and we can easily solve this using the techniques discussed in the homogeneous section. Example 2: Consider another recurrence, $$T(n) = 2T(n - 1) + n$$ This can be written as $$T(n) - 2T(n - 1) = 1^nn$$ Comparing the right hand side with $b^np(n)$, we get $b = 1$ and $d = 1$. Therefore the characteristic polynomial is $$(x - 2)(x - 1)^2 = 0$$ This has repeated roots and we can solve this using the techniques discussed above.
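Example 2 can be carried to a closed form and checked numerically. A sketch (my own, assuming the base condition $T(1) = 1$, which the text does not state): the characteristic polynomial $(x-2)(x-1)^2$ gives $T(n) = c_12^n + c_2 + c_3n$, and the base values $T(1) = 1$, $T(2) = 4$, $T(3) = 11$ yield $c_1 = 2$, $c_2 = -2$, $c_3 = -1$, i.e. $T(n) = 2^{n+1} - n - 2$.

```python
# Sketch: verify T(n) = 2^{n+1} - n - 2 against T(1) = 1, T(n) = 2T(n-1) + n.
# The base condition T(1) = 1 is my assumption, not from the text.
def T(n):
    return 1 if n == 1 else 2 * T(n - 1) + n

for n in range(1, 15):
    assert T(n) == 2**(n + 1) - n - 2
print("T(n) = 2^(n+1) - n - 2 verified for n = 1..14")
```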
Sometimes changing the variable in a recurrence relation helps to solve complicated recurrences. By changing the variable, we can convert some complicated recurrences into linear homogeneous or inhomogeneous form, which can then be solved easily. At last, we put the original variable back into the recurrence to get the required solution. Let me explain this with the help of an example.
Example: Consider a recurrence $$T(n) = 3T(n/2) + n$$ In the above recurrence relation, the value of $n$ is reduced to half in every iteration. If we replace $n$ with $2^i$, we get $$T(2^i) = 3T(2^{i - 1}) + 2^i$$ If we let $T(2^i) = t_i$, $T(2^{i - 1})$ becomes $t_{i - 1}$ and the recurrence relation converts to $$t_i - 3t_{i - 1} = 2^i$$ This is a linear inhomogeneous recurrence with $b = 2$ and $d = 0$. The characteristic polynomial is $$(x - 3)(x - 2) = 0$$ It has roots $r_1 = 3$ and $r_2 = 2$. The solution will be of the form $$t_i = c_13^i + c_22^i$$ Initially we changed the variable $n$ to $2^i$; $n$ and $2^i$ are equal when $i = \log_2 n$. Therefore, $$t_{\log_2 n} = c_13^{\log_2 n} + c_22^{\log_2 n}$$ We let $T(2^i) = t_i$, and thus $T(n) = t_{\log_2 n}$ $$T(n) = c_13^{\log_2 n} + c_22^{\log_2 n}$$ or $$T(n) = c_1n^{\log_2 3} + c_2n$$ Using the base conditions, the values of $c_1$ and $c_2$ can easily be calculated.
Brassard, G., & Bratley, P. (2008). Fundamentals of Algorithmics. New Delhi: PHI Learning Private Limited.
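The example can be finished numerically. A sketch (my own, assuming the base condition $T(1) = 1$, which the text leaves open): the base condition gives $c_1 = 3$ and $c_2 = -2$, so $T(n) = 3n^{\log_2 3} - 2n$. For $n = 2^k$ we have $n^{\log_2 3} = 3^k$, so the check stays in exact integers.

```python
# Sketch: verify T(n) = 3*n^{log2 3} - 2n on powers of two, given
# T(1) = 1 (my assumed base case), T(n) = 3T(n/2) + n.
def T(n):
    return 1 if n == 1 else 3 * T(n // 2) + n

for k in range(11):
    n = 2**k
    assert T(n) == 3 * 3**k - 2 * n   # 3^k = n^{log2 3} when n = 2^k
print("T(n) = 3*n^(log2 3) - 2n verified on powers of two")
```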
When I was in high school, I was taught how to prove statements using proof by contrapositive. I also learned how to prove statements using mathematical induction. Now I realize that, as the inductive step is a conditional statement, it might be proved using proof by contrapositive. However, I cannot find an example that uses this technique in a way that is easier than regular induction. Is there a case where this is useful? (Using both concepts)
As a concrete example, consider the following two Peano Axioms:
$PA 1: \forall x \ \neg s(x) = 0$
$PA 2: \forall x \forall y (s(x) = s(y) \rightarrow x = y)$
These two axioms are about the successor function $s$, normally understood as $s(x) = x + 1$.
Using induction and contraposition, you can now prove that $\forall x \ s(x) \not = x$:
Base: $x=0$. By $PA1$, we have $s(0) \not = 0$. Check!
Step: Take some arbitrary $n$. We want to show the conditional $s(n) \not = n \rightarrow s(s(n)) \not = s(n)$
Well, we can do this by contraposition: By $PA2$ we immediately get $s(s(n))=s(n) \rightarrow s(n)=n$. Check!
===
As a more general comment, I think you might also be interested in learning about the proof technique of Proof by Infinite Descent, which combines ideas from induction and proof by contradiction.
In Infinite Descent, you prove that no natural number has a certain property by proving that if there is a number with that property, then there will always be a smaller number with that property. But since such a 'descent' along the natural numbers has to come to a stop at some point, no 'infinite descent' is possible.
As it turns out, Infinite Descent is very closely related (in fact, it is equivalent) to Strong Induction, but it has a different conceptual flavor.
In the sequel $\mathbb N=\{0,1,2,3,\dots\}$.
You can start any proof by induction - where you prove something of the form $\forall n\in\mathbb N[P(n)]$ and $P$ denotes a property of natural numbers - by assuming that the statement is not true.
That means that the set $\{n\in\mathbb N\mid \neg P(n)\}$ is not empty and non-empty subsets of $\mathbb N$ have a smallest element.
Now let $n_0$ denote this element, so that $\neg P(n_0)$ and for every $k<n_0$ we have $P(k)$.
Then the base case is the same as proving that $n_0\neq0$ so that $n_0-1\in\mathbb N$ with $P(n_0-1)$.
Then the inductive step is the same as proving that $P(n_0-1)$ implies $P(n_0)$ and a contradiction is found.
A contrapositive proof with application of induction is born. |
Is there a third dimension of numbers like real numbers, imaginary numbers, [blank] numbers?
Alas, there are no algebraically coherent "triplexes". The next step in the construction, as has been said already, is the "quaternions", with 4 dimensions.
Many young aspiring mathematicians have tried to find them since Hamilton in the 19th century. This impossibility links geometric dimensionality, fundamental properties of polynomial equations, algebraic systems and many other aspects of mathematics. It is really worth studying.
A quite recent book by modern mathematicians which details all this for advanced college undergraduates is Numbers by Ebbinghaus, Hermes, Hirzebruch, Koecher, Mainzer, Neukirch, Prestel, Remmert, and Ewing.
However, the set of quaternions with zero real part is an interesting system of dimension 3 with very interesting properties, linked to the composition of rotations in space.
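The link between pure quaternions (zero real part) and spatial rotations can be made concrete in a few lines. A minimal sketch (my own, not from the answer above): a unit quaternion $q$ rotates a vector $v$, embedded as a pure quaternion, via the conjugation $qvq^{-1}$, and composing rotations is just multiplying quaternions.

```python
# Sketch: rotating a 3D vector with a unit quaternion (w, x, y, z).
import math

def qmul(a, b):
    # Hamilton product of two quaternions
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    # Embed v as a pure quaternion, conjugate by q, read off the vector part.
    p = (0.0, *v)
    _, x, y, z = qmul(qmul(q, p), qconj(q))
    return (x, y, z)

# A 90-degree rotation about the z-axis
theta = math.pi / 2
q = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))
print(rotate(q, (1.0, 0.0, 0.0)))  # approximately (0, 1, 0)
```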
You may also find of interest some more general results besides the mentioned Frobenius Theorem. Weierstrass (1884) and Dedekind (1885) showed that every finite dimensional commutative extension ring of $\mathbb R$ without nilpotents ($x^n = 0\,\Rightarrow\, x = 0$) is isomorphic as a ring to a direct sum of copies of $\rm\:\mathbb R\:$ and $\rm\:\mathbb C\:.\:$ Wedderburn and Artin proved a generalization that every finite-dimensional associative algebra without nilpotent elements over a field $\rm\:F\:$ is a finite direct sum of fields.
Such structure theoretic results greatly simplify classifying such rings when they arise in the wild. For example, I applied a special case of these results last week to prove that a finite ring is a field if its units $\cup\ \{0\}$ comprise a field of characteristic $\ne 2\:.\:$ For another example, a sci.math reader once proposed an extension of the real numbers with multiple "signs". This turns out to be a very simple case of the above results. Below is my 2009.6.16 sci.math post on these "PolySign" numbers.
The results in Eitzen's paper Understanding PolySign Numbers the Standard Way, characterizing Tim Golden's so-called PolySign numbers as ring direct sums of $\mathbb R$ and $\mathbb C$, have been known for over a century and a half. Namely that $\rm\:P_n =\: \mathbb R[x]/(1+x+x^2+\:\cdots\: + x^{n-1})\ $ is isomorphic to a certain ring direct sum of $\:\mathbb R$ and $\:\mathbb C\:,\:$ is just a special case of more general results due to Weierstrass and Dedekind in the 1860s. These classic results are so well-known that you will find them mentioned even in many elementary textbooks on number systems and their generalizations. For example, in
Numbers by Ebbinghaus et al., p. 120:
Weierstrass (1884) and Dedekind (1885) showed that every finite dimensional commutative ring extension of R with unit element but without nilpotent elements, is isomorphic to a ring direct sum of copies of R and C.
Ditto for historical expositions, e.g. Bourbaki's
Elements of the History of Mathematics, p. 119:
By 1861, Weierstrass, making precise a remark of Gauss, had, in his lectures, characterized commutative algebras without nilpotent elements over R or C as direct products of fields (isomorphic to R or C); Dedekind had on his side reached the same conclusions around 1870, in connection with his "hypercomplex" conception of the theory of commutative fields, their proofs were published in 1884-85 [1,2]. [...] These methods rely above all on the consideration of the characteristic polynomial of an element of the algebra relative to its regular representation (a polynomial already met in the work of Weierstrass and Dedekind quoted earlier) and on the decomposition of the polynomial into irreducible factors.
Nowadays these fundamental results are merely special cases of more general structure theories for algebras that are part of any first course on algebras (but not always met in a first course on abstract algebra). A web search turns up more on the subsequent history, e.g. excerpted from
Y. M. Ryabukhin,
Algebras without nilpotent elements, I, Algebra i Logika, Vol. 8, No. 2, pp. 181-214, March-April, 1969 http://www.springerlink.com/index/3Q765670P5571176.pdf
Algebras without nilpotent elements have been studied long ago. So, Weierstrass characterized in his lectures in 1861 finite-dimensional associative-commutative algebras without nilpotent elements over the field of real or complex numbers as finite direct sums of fields. To be exact, some nonessential restrictions have there been imposed. In 1870 Dedekind removed those nonessential restrictions. The following theorem of Weierstrass-Dedekind is now considered as a classical one: every finite-dimensional associative-commutative algebra without nilpotent elements over a field F is a finite direct sum of fields. The results of Weierstrass and Dedekind (for the case when F is the field of complex or real numbers) have been published in [1,2]. The results of works of Molien, Cartan, Wedderburn and Artin [3-6] imply that Dedekind's theorem holds for any field F. Moreover, the following theorem of Wedderburn-Artin holds: every finite-dimensional associative algebra without nilpotent elements over a field F is a finite direct sum of fields." [...]
K. Weierstrass, "Zur Theorie der aus n Haupteinheiten gebildeten complexen Grossen," Gott. Nachr. (1884). R. Dedekind, "Zur Theorie der aus n Haupteinheiten gebildeten complexen Grossen," Gott. Nachr. (1885). F. Molien, "Ueber Systeme hoherer complexer Zahlen," Math. Ann., XLI, 83-156 (1893). E. Cartan, "Les groupes bilineaires et les systemes de nombres complexes," Ann. Fac. Sci., Toulouse (1898). J. Wedderburn, "On hypercomplex numbers," Proc. London Math. Soc. (2), VI, 349-352 (1908). E. Artin, "Zur Theorie der hyperkomplexen Zahlen," Abh. Math. Sem. Univ. Hamburg, 5, 251-260 (1927).
and excerpted from its sequel
Y.M. Ryabukhin,
Algebras without nilpotent elements, II, Algebra i Logika, Vol. 8, No. 2, pp. 215-240, March-April, 1969 http://www.springerlink.com/index/BQ2L50708GL150J0.pdf
In [1] we proved structural theorems on the decomposition of algebras without nilpotent elements into direct sums of division algebras; certain chain conditions were imposed on these algebras.
Yet it is possible to prove structural theorems also without imposing any chain conditions. In this case the direct sums are replaced by subdirect sums and instead of division algebras we shall consider algebras without zero divisors.
The first structural theorem of this kind is apparently the classical theorem of Krull [2]:
Any associative-commutative ring without nilpotent elements can be represented by a subdirect sum of rings without zero divisors. Krull's theorem was subsequently extended to the case of any associative ring. This was done by various authors and in various directions. In [3], Thierrin came very close to a final generalization of Krull's theorem to the associative, but not commutative case. The final result was obtained in [4]:
Any associative ring without nilpotent elements can be represented by a subdirect sum of rings without zero divisors. At the Ninth All-Union Conference on General Algebra (held at Gomel'), I. V. L'vov reported an even stronger result:
Any alternative ring without nilpotent elements can be represented by a subdirect sum of rings without zero divisors.
It could be assumed that the theorem on decomposition into a subdirect sum of algebras without zero divisors holds for any ring. Yet this assumption is erroneous (see [1]), since there exists a finite-dimensional simple, special Jordan algebra without nilpotent elements that has zero divisors and cannot therefore be decomposed into a subdirect sum of algebras (or rings) without zero divisors.
There naturally arises the following question: what conditions must a ring without nilpotent elements satisfy to permit its representation by a subdirect sum of rings without zero divisors?
In this paper we answer this question:
An algebra R over an associative-commutative ring F with unity can be represented by a subdirect sum of rings without zero divisors, iff it is a conditionally associative algebra without nilpotent elements.
Let us recall that an algebra R is said to be conditionally associative, iff we have in R the conditional identity x(yz) = 0 iff (xy)z = 0.
We say a (not necessarily associative) algebra R does not have nilpotent elements, iff in R we have the conditional identity x^2 = 0 iff x = 0.
From this theorem we easily obtain the above-mentioned results of [2-4], as well as the result of L'vov (it suffices to take as the ring F the ring Z of integers). [...]
Yu. M. Ryabukhin, "Algebras without nilpotent elements, I," this issue, pp. 215-240. W. Krull, "Subdirect representations of sums of integral domains," Math. Z., 52, 810-823 (1950). J. Thierrin, "Completely simple ideals of a ring," Acad. Belg. Bull. C1. Sci., 5 N 43, 124-132 (1957). V. A. Andrunakievich and Yu. M. Ryabukhin, "Rings without nilpotent elements in completely simple ideals," DAN SSSR, 180, No. 1, 9 (1968).
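To make the Weierstrass-Dedekind decomposition concrete, here is a small illustrative sketch (my own toy example, not taken from the excerpts above): the two-dimensional commutative $\mathbb{R}$-algebra $\mathbb{R}[x]/(x^2-1)$ has no nilpotent elements, and the idempotents $(1\pm x)/2$ split it as a direct product of two copies of $\mathbb{R}$.

```python
# Toy illustration of the Weierstrass-Dedekind decomposition: in the
# commutative algebra R[x]/(x^2 - 1), the idempotents e1 = (1+x)/2 and
# e2 = (1-x)/2 are orthogonal and sum to 1, exhibiting the algebra as a
# direct product of two fields (two copies of R).

from fractions import Fraction

def mul(a, b):
    """Multiply (a0 + a1*x)(b0 + b1*x) in R[x]/(x^2 - 1), where x^2 = 1."""
    a0, a1 = a
    b0, b1 = b
    return (a0 * b0 + a1 * b1, a0 * b1 + a1 * b0)

half = Fraction(1, 2)
e1 = (half, half)    # (1 + x)/2
e2 = (half, -half)   # (1 - x)/2

assert mul(e1, e1) == e1                          # idempotent
assert mul(e2, e2) == e2                          # idempotent
assert mul(e1, e2) == (0, 0)                      # orthogonal
assert (e1[0] + e2[0], e1[1] + e2[1]) == (1, 0)   # e1 + e2 = 1
```

The two projections $a \mapsto a\,e_1$ and $a \mapsto a\,e_2$ are exactly the two field factors of the decomposition.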
Every finite-dimensional division algebra over $\mathbb{R}$ is one of $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$. This is what is called the Frobenius theorem; you may look it up for details.
You might look up quaternions.
In addition to complex numbers and quaternions, you might want to look up Clifford algebras, which encapsulate both and extend to arbitrary dimension. The complex numbers and quaternions are subalgebras of the Clifford algebras over $\mathbb{R}^2$ and $\mathbb{R}^3$ respectively.
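As a quick illustration of what these hypercomplex systems look like in practice, here is a minimal quaternion multiplication sketch (plain Python, my own example) showing the non-commutativity that sets them apart from the complex numbers:

```python
# Minimal quaternion arithmetic: quaternions stored as tuples
# (w, x, y, z) = w + x*i + y*j + z*k, multiplied by the Hamilton product.

def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, i) == (-1, 0, 0, 0)    # i^2 = -1
assert qmul(i, j) == k                # i*j =  k
assert qmul(j, i) == (0, 0, 0, -1)    # j*i = -k: not commutative
```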
Short answer: no.
A lot of responses bring up the fact that the next closed algebraic system is the quaternions. But these aren't perfect, since there is no problem to which the quaternions naturally arise as the solution:
For example, we know $i$ naturally arises as a solution to the previously unsolvable $x^2 = -1$. That being said, all polynomials have their roots in the complex plane, so we are guaranteed that no other new algebraic unit like $i$ will be naturally formed this way.
Now I personally don't know of a theorem that says this CANNOT happen again. For example it could be that:
$$x^x = i$$ Or (if we define ${}^xx$ to mean tetration)
$${}^xx = i$$
Etc. Following the pattern, these may not have solutions in $\mathbb{C}$. In that case we are free to generate a new elementary unit $j$; however, this elementary unit will be quite strange, since it is associated with a higher operator, so expressions such as:
$$j, j^2, j^3 ...$$
are all distinct and cannot be simplified, leading us to an infinite-dimensional system of numbers.
So 3D is not possible, but I believe infinite dimensional is still possible.
Unless you like the unnatural creations that are the quaternions etc... (I only dislike them because of their lack of natural formation)
There are, of course, examples in $\mathbb{R}^3$, in the form $X, Y, Z$, which behave as three separate real numbers, with an invariant when the cyclic substitution $X \to Y \to Z \to X$ applies.
Such numbers turn up in the study of the heptagon and enneagon. The integer systems are discrete points forming a lattice in this space; the invariants cycle through the solutions to the heptagonal ($x^3-x^2-2x+1=0$) and enneagonal equations, which transforms e.g. {7} to {7/3} to {7/2}.
Such a projection converts an infinitely dense arrangement of points onto a sparse lattice in three dimensions. That is, in $x,y,z$ form, there is a sphere of some size containing no more than one legitimate value. So if you evaluate a value even approximately in the three coordinates, it is possible to find the exact value.
If one considers the span of numbers like $1.801937736$, $-1.2469796037$ and $0.445041867912$, which serve for the heptagon the same role that $1.618033$ and $-0.618033$ do for the pentagon, then the cyclic rotation of this list will preserve multiplication relations, etc. Suppose that the order is $p, q, r$. Then $p^2=2-q$, $q^2=2-r$, and $r^2=2-p$. One notes that $p+q+r=1$ and $p\,q\,r=-1$. These $p$, $q$, $r$ are the solutions to the cubic $x^3-x^2-2x+1=0$ in much the same way that $1.618033$ and $-0.618033$ solve $x^2-x-1=0$.
The product, for example, of $ap+bq+cr$ and its transforms ($aq+br+cp$ and $ar+bp+cq$) is always an integer, specifically a product of factors drawn from 7, cubes, and primes of the form $7n+1$ or $7n-1$.
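These relations are easy to verify numerically. The sketch below is my own check, assuming $p = 2\cos(\pi/7)$, $q = 2\cos(5\pi/7)$, $r = 2\cos(3\pi/7)$ (which match the decimal values quoted above); it confirms the cyclic squaring relations and, via Vieta on $x^3-x^2-2x+1$, that $p+q+r=1$ while $pqr=-1$:

```python
# Numerical check of the heptagonal constants p, q, r: they are the three
# roots of x^3 - x^2 - 2x + 1, namely 2cos(pi/7), 2cos(3pi/7), 2cos(5pi/7).

import math

p = 2 * math.cos(math.pi / 7)        #  1.8019...
q = 2 * math.cos(5 * math.pi / 7)    # -1.2469...
r = 2 * math.cos(3 * math.pi / 7)    #  0.4450...

for t in (p, q, r):
    assert abs(t**3 - t**2 - 2*t + 1) < 1e-12    # each solves the cubic

# the cyclic squaring relations quoted in the text
assert abs(p**2 - (2 - q)) < 1e-12
assert abs(q**2 - (2 - r)) < 1e-12
assert abs(r**2 - (2 - p)) < 1e-12

# symmetric functions, by Vieta on x^3 - x^2 - 2x + 1
assert abs(p + q + r - 1) < 1e-12    # sum of roots = 1
assert abs(p * q * r + 1) < 1e-12    # product of roots = -1
```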
One can suppose that there is a real hypercomplex plane, of the form of numbers $a+bj$, where $j^2=1$ but $j \neq \pm 1$. The conjugate is $a-jb$. The pentagonal numbers belong here. It features unit hyperbolae, and two 'real' axes, onto which the value and its conjugate project.
And if one can do it meaningfully with two conjugates, it's possible with three, four, five, &c. I use such systems extensively when wrangling with polygons.
If you think of the dimensions for numbers as going real numbers (1st dimension), fuzzy numbers (2nd dimension), then the 3rd dimension ends up fuzzy numbers of dimension two. For more details see A. Kaufmann and M. M. Gupta's
Introduction to Fuzzy Arithmetic or George and Maria Bojadziev's Fuzzy Sets, Fuzzy Logic, Applications. |
Let N be the largest number of regions that can be formed by drawing 2016 straight lines on a plane. Find the sum of all digits of N.
Tôn Thất Khắc Trịnh 23/07/2018 at 16:04
I won't take credit for this solution, as it goes to this website: https://www.cut-the-knot.org/proofs/LinesDividePlane.shtml
Applying n = 2016 to the formula, we get 2033137 regions, whose digit sum is 19.
Mr Puppy 24/07/2018 at 01:44
Thank you!!!!! :D
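The cut-the-knot formula referenced above is $L(n) = n(n+1)/2 + 1$ for $n$ lines in general position; a short check:

```python
# Sanity check of the regions formula: n lines in general position divide
# the plane into L(n) = n(n+1)/2 + 1 regions.

def regions(n):
    # each new line crosses all previous ones, adding (existing lines + 1) regions
    return n * (n + 1) // 2 + 1

assert regions(1) == 2
assert regions(2) == 4
assert regions(3) == 7

N = regions(2016)
assert N == 2033137
assert sum(int(d) for d in str(N)) == 19
```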
Choose the correct answer:
The number smaller than 709 and greater than 707 is:
a)708 b)710 c)712 d)714
Alex placed 9 number cards and 8 addition symbol cards on the table as shown.
9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1
Keeping the cards in the same order he decided to remove one of the addition cards to form a 2-digit number. If his new total was 99, which 2-digit number did he form?
A.32 B.43 C.54 D.65 E.76
Lê Quốc Trần Anh Coodinator 27/07/2017 at 09:13
The total of all nine cards is \(9+8+7+6+5+4+3+2+1=45\)
Removing the addition card between adjacent digits \(d\) and \(e\) replaces \(d+e\) by the 2-digit number \(10d+e\), so the total changes from \(45\) to \(45+9d\)
Setting \(45+9d=99\) gives \(d=6\), so the 2-digit number formed is \(65\)
So the answer is
D
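A brute-force check of all eight possible removals (replacing adjacent digits $d$ and $e$ by $10d+e$ changes the total by $9d$, so $45+9d=99$ forces $d=6$ and the number $65$):

```python
# Try removing each '+' from 9+8+7+6+5+4+3+2+1: the two adjacent digits
# merge into a 2-digit number. Collect the merges that make the total 99.

digits = [9, 8, 7, 6, 5, 4, 3, 2, 1]

solutions = []
for i in range(len(digits) - 1):
    merged = 10 * digits[i] + digits[i + 1]              # the new 2-digit number
    total = merged + sum(digits) - digits[i] - digits[i + 1]
    if total == 99:
        solutions.append(merged)

assert solutions == [65]   # only removing the '+' between 6 and 5 works
```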
This cube has a different whole number on each face, and has the property that whichever pair of opposite faces is chosen, the two numbers multiply to give the same result. What is the smallest possible total of all 6 numbers on the cube?
FA KAKALOTS 08/02/2018 at 22:01
The six numbers must all be different, so the three pairs of opposite faces are three different factor pairs with a common product.
The smallest product admitting three such pairs is \(12 = 1\times12 = 2\times6 = 3\times4\), giving the total \(1+2+3+4+6+12 = 28\). (A \(0\) cannot help: its opposite face would give product \(0\), and then the other two pairs would each also need a \(0\), which is not allowed.) Answer: 28
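Since the six faces must carry different numbers, the three opposite-face pairs are three distinct factor pairs of a common product; a brute force over small products confirms the minimum total (zeros are excluded automatically: a 0 face forces product 0 on every pair, which would need three 0s):

```python
# Brute-force the cube puzzle: choose a common product P, pick three factor
# pairs of P whose six entries are all distinct, and minimize the total.
# Products above 100 cannot beat the optimum (each pair sums to >= 2*sqrt(P)).

from itertools import combinations

best = None
for product in range(1, 101):
    pairs = [(a, b) for a in range(1, product)
                    for b in range(a + 1, product + 1) if a * b == product]
    for triple in combinations(pairs, 3):
        faces = [x for pair in triple for x in pair]
        if len(set(faces)) == 6:                  # all six face numbers distinct
            total = sum(faces)
            if best is None or total < best:
                best = total

assert best == 28   # attained by the pairs (1,12), (2,6), (3,4), product 12
```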
Rani wrote down the numbers from 1 to 100 on a piece of paper and then correctly added up all the individual digits of the numbers. What sum did she obtain?
Searching4You 27/07/2017 at 09:29
The answer is 901.
From 1 to 9 we have sum : 45.
From 10 to 19 we have: 1 + 0 + 1 + 1 + 1 + 2 + 1 + 3 + ... + 1 + 9, giving sum 55.
Next from 20 to 29 we have sum 65.
................. 30 to 39 we have sum 75.
................. 40 to 49 we have sum 85.
................. 50 to 59 we have sum 95.
................. 60 to 69 we have sum 105.
................. 70 to 79 we have sum 115.
................. 80 to 89 we have sum 125.
................. 90 to 99 we have sum 135.
And the last 100 we have sum 1 + 0 + 0 = 1.
So the sum she obtained is: 45 + 55 + 65 + 75 + 85 + 95 + 105 + 115 + 125 + 135 + 1 = 901.
Lê Quốc Trần Anh Coodinator 27/07/2017 at 09:24
From 1 to 9 the sum of the individual digits are: \(1+2+3+4+5+6+7+8+9=45\)
From 10 to 19: the tens digits are all the same (1), while the ones digits follow the same pattern as 1 to 9.
The same holds for each of the groups 20-29 through 90-99.
The number 100 has the individual digits sum: \(1+0+0=1\)
She has obtained the sum: \(\left[\left(1+2+3+4+5+6+7+8+9\right).10\right]+\left[\left(1+2+3+4+5+6+7+8+9\right).10\right]+1\)
\(=450+450+1=901\)
So she obtained the sum: \(901\)
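A direct check of the total, and of the per-decade sums listed in the first answer:

```python
# Sum of all individual digits of the numbers 1..100.
total = sum(int(d) for n in range(1, 101) for d in str(n))
assert total == 901

# Per-decade digit sums: 10-19 gives 55, 20-29 gives 65, ..., 90-99 gives 135.
decade = [sum(int(d) for n in range(10*k, 10*k + 10) for d in str(n))
          for k in range(10)]
assert decade[1:] == [55, 65, 75, 85, 95, 105, 115, 125, 135]
```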
Let \(a, b, c\) be nonzero integers such that \(\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a}=3\). Prove that abc is the cube of an integer.
John 10/04/2017 at 15:24
Without loss of generality, we may assume gcd(a,b,c) = 1
(otherwise, if d = gcd(a,b,c), then for a' = a/d, b' = b/d, c' = c/d the equation still holds, and a'b'c' is a cube if and only if abc is a cube).
We multiply equation by abc, we have:
\(a^2c+b^2a+c^2b=3abc\)(*)
if \(abc=\pm1\), the problem is solved.
Otherwise, let p be a prime divisor of abc. Since gcd(a,b,c) = 1, (*) implies that p divides exactly two of a, b, c. By symmetry, we may assume p divides a and b but not c. Suppose that the largest powers of p dividing a, b are m, n, respectively.
If n < 2m, then \(n+1\le2m\) and \(p^{n+1}\mid a^2c,\ b^2a,\ 3abc\). Hence \(p^{n+1}\mid c^2b\), forcing \(p\mid c\) (a contradiction). If n > 2m, then \(n\ge2m+1\) and \(p^{2m+1}\mid c^2b,\ b^2a,\ 3abc\). Hence \(p^{2m+1}\mid a^2c\), forcing \(p\mid c\) (a contradiction). Therefore n = 2m, so \(abc=\prod p^{3m}\) over the primes \(p\mid abc\), and abc is a cube. |
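A numerical sanity check of the statement above (not of the proof): search small nonzero integer triples satisfying the equation and confirm that $abc$ is a perfect cube in every case found. The search range here is my own arbitrary choice.

```python
# For every integer triple (a, b, c) with a/b + b/c + c/a = 3 found in a
# small box, verify that a*b*c is the cube of an integer.

from fractions import Fraction

def is_cube(n):
    """True iff the integer n is a perfect cube (handles negatives)."""
    m = abs(n)
    r = round(m ** (1 / 3))
    # check neighbors of the rounded root to guard against float error
    return any(c ** 3 == m for c in (r - 1, r, r + 1) if c >= 0)

hits = 0
R = range(-12, 13)
for a in R:
    for b in R:
        for c in R:
            if 0 in (a, b, c):
                continue
            if Fraction(a, b) + Fraction(b, c) + Fraction(c, a) == 3:
                assert is_cube(a * b * c), (a, b, c)
                hits += 1

assert hits > 0    # at least the diagonal a = b = c shows up
```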
By definition, Hadamard transformation (acting on a qubit) maps the unit vector in the $Y$ axis direction of the Bloch Sphere ($S^2$) to its negative, equivalent to a rotation of $\pi$ rad around $X+Z$ axis. I understand it pictorially.
I have trouble showing this explicitly using the matrix representation of the Hadamard transformation, $H= \frac{1}{\sqrt{2}}\begin{bmatrix} {1}&{1}\\ {1}&{-1} \end{bmatrix}$, for the qubit $\hat y$. This qubit $\hat y$ (the unit vector in the $Y$ axis direction of the Bloch sphere) has $\theta=\frac{\pi}{2}$ and $\phi=\frac{\pi}{2}$, where these angles are the ones that define the state of a qubit ($\lvert \psi\rangle=\cos(\frac{\theta}{2})\lvert 0\rangle+e^{i\phi}\sin(\frac{\theta}{2})\lvert 1\rangle$). Therefore, $\hat y = \frac{1}{\sqrt2}\lvert 0\rangle + e^{i\frac{\pi}{2}}\frac{1}{\sqrt2}\lvert 1\rangle= \frac{1}{\sqrt2}\lvert 0\rangle + \frac{i}{\sqrt2} \lvert 1\rangle$. Now, acting with the Hadamard matrix on it, we have: $H\hat y=\frac{1}{\sqrt{2}}\begin{bmatrix} {1}&{1}\\ {1}&{-1} \end{bmatrix}\begin{bmatrix} \frac{1}{\sqrt2}\\\frac{i}{\sqrt2}\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1+i\\1-i\end{bmatrix}$. In contrast, by the pictorial definition of the Hadamard transformation, we expected the result of $H\hat y$ to be $-\hat y$. What am I missing? Any help regarding the mistake that I'm making here is appreciated.
You have to take out a factor $\frac{1+i}{\sqrt{2}}$ -- the action on the Bloch sphere is only defined up to a phase. Also, note that the qubit vector corresponding to $-\hat y$ is $(|0\rangle-i|1\rangle)/\sqrt{2}$. |
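A numerical version of this resolution, in plain Python: applying $H$ to the $+Y$ state yields exactly $e^{i\pi/4} = \frac{1+i}{\sqrt{2}}$ times the $-Y$ state, and that global phase is invisible on the Bloch sphere.

```python
# H|y+> equals e^{i*pi/4} |y->, i.e. -y_hat up to a global phase.

import cmath
import math

s = 1 / math.sqrt(2)
y_plus = [s, 1j * s]        # (|0> + i|1>)/sqrt(2), the +Y state
y_minus = [s, -1j * s]      # (|0> - i|1>)/sqrt(2), the -Y state

# H = (1/sqrt(2)) [[1, 1], [1, -1]] applied to y_plus by hand
out = [s * (y_plus[0] + y_plus[1]),
       s * (y_plus[0] - y_plus[1])]

phase = cmath.exp(1j * math.pi / 4)   # = (1+i)/sqrt(2), a pure phase
assert all(abs(a - phase * b) < 1e-12 for a, b in zip(out, y_minus))

# |<y-|H y+>| = 1, so the two states share the same Bloch vector
overlap = sum(b.conjugate() * a for a, b in zip(out, y_minus))
assert abs(abs(overlap) - 1) < 1e-12
```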
I am not able to make myself understood over at Mathematics Stackexchange in this question.
Therefore I ask here the Latex experts how to write these linear programming problems as latex, the standard way of communicating mathematics at the stack exchange network:
Linear programming problem written in Mathematica 8.0:
(*start*)
nn = 18;
TableForm[
 L2 = Table[
   LinearProgramming[
    Table[1/n, {n, 1, k}],
    {Table[If[n == 1, k, 1], {n, 1, k}]},
    {{1, 0}},
    Table[If[n == 1, {-1, 1}, {-2 (n - 1), 0 (n - 1)}], {n, 1, k}]],
   {k, 1, nn}]]
(*end*)
Here is my own try:
$$\begin{array}{ll} \text{minimize} & \displaystyle\sum_{n=1}^{k} \frac{x_{n}}{n} \\ \text{subject to constraints:} & k x_1 + \displaystyle\sum_{n=2}^{k}x_{n}=1 \\ & -1 \leq x_1 \leq 1 \end{array}$$for all $k$ and for all $n>1:$$$-2(n-1) \leq x_n \leq 0 \tag{4}$$ |
Edit this post describes a situation where I was unable to view the HTML artifact allowing me to edit or delete a comment in Chrome.
First, an apology for breaking this question. I'm unable to edit my comment, and the HTML link doesn't appear on the main site.
The comment I was trying to write was
w.r.t. "order of operations" in your final sample you have 9\times 3\equiv 27 \equiv 7\pmod{10}$ why is it not written as 9\pmod{10}$$ \times 3 \pmod{10}$$ \equiv 27\pmod{10}$$ \equiv 7\pmod{10}$$ ... in other words, when looking at one half of a congruence,it's not immediately clear if \pmod{10}$$ has been applied or not.
I broke the internetz by deleting one of the "$" signs where the parsing seemed to make my plain text "mathified".
I'm new at se.Math and I'm new to this markup. Sorry about the bug I created/discovered! |
No. String theory's resolution of the old paradox is a sign of the hidden cleverness of string theory.
After he read some of my essays on the electron's spin, Tom W. Larkin asked an interesting question:
Does string theory resolve the paradox of (post-)classical physics that the electron, if imagined as a spinning ball of a very small radius, has to rotate faster than the speed of light for its spin to be \(\hbar/2\)?

One natural, fast, legitimate, but cheap reaction is to say: the electron isn't really a rotating ball. The spin may be carried even by a point-like particle, without any violations of relativity, as QED shows, so the paradox has never been there.
Of course that a string theorist is likely to answer in this way, too. Quantum field theory is a limit of string theory so any explanation that was OK within quantum field theory may be said to be correct within string theory, too. The paradox doesn't exist because the electron isn't a classical ball that gets the mass from the electrostatic self-interaction energy.
However, string theory does represent the electron (and other elementary particles) as some kind of an extended object which is qualitatively analogous to the rotating ball, so some version of the "superluminal spinning" paradox may be said to reemerge in string theory. Does it cause inconsistencies within string theory?
It doesn't but the reasons are tricky and ingenious, if you allow me to lick Nature's buttocks a little bit.
The angular momentum (I will call it "spin") of a gyroscope etc. is equal to\[
\vec S = I\vec \omega
\] where \(\omega\) is the angular frequency and \(I\) is the moment of inertia. Up to constants of order one, the moment of inertia is equal to\[
I \sim mr^2
\] where \(r\) is some typical distance of the points of the object from the axis of rotation and \(m\) is the mass of the spinning object. That was some elementary classical mechanics, OK?
Now, if you assume that \(E=mc^2\) for the electron where \(m\) is the rest mass and that the full latent energy \(E\) is obtained as some electrostatic energy of the electron's charge \(e\) interacting with itself, via (up to numerical constants of order one)\[
E \sim \frac{e^2}{4\pi \epsilon_0 r},
\] then you may derive the typical distance \(r\) between the "pieces of the charge" of the electron i.e. the "classical electron radius" which is a few femtometers (that is \(10^{-15}\) meters).
If you know \(r\) and \(m\), you may calculate the moment of inertia \(I\sim mr^2\) as well as the angular frequency \(\omega\sim S/I\sim \hbar/I\). When you multiply this \(\omega\) by the radius \(r\) again, you get the velocity \(v\) of the points on the spinning surface of the electron. And it will be much higher than the speed of light!
I invite you to complete the steps above if you have never done so. You may evaluate all these things in the SI units, as if it were a basic school problem in mechanics. However, it's also nice to calculate it as an adult physicist, in the Planck units. In the Planck units, the electron mass is something like \(10^{-23}\). Similarly, setting the fine-structure constant to one for a while (we will have to return to this approximation), the electrostatic formula above implies that \(E\sim 1/r\) and the classical radius is therefore about \(10^{23}\) Planck lengths.
The moment of inertia is roughly \(mr^2\) so there are two factors of \(10^{23}\) "up" and one factor down, so it is again \(10^{23}\) in Planck units. The angular frequency is \(\omega\sim 1 / I\) where \(1\) means \(\hbar\) and it is again of order \(10^{-23}\), but if you multiply it by the radius \(10^{23}\) to get the speed, you get something of order one (the speed of light).
To quickly see (without real calculations) whether the speed on the surface is greater than the speed of light, we must be a bit more careful about the fine-structure constant. The mass was OK, \(10^{-23}\), but the electron radius is \(137\) times smaller than we have said because this reduced \(r\) has to cancel the \(\alpha=1/137\) that appeared in the numerator. The moment of inertia is \(137^2\) times smaller than we said, because it's \(mr^2\), and the angular velocity is therefore \(137^2\) times larger than we said because \(\hbar = I\omega\) has to be kept fixed. Even when multiplied by the \(137\) times smaller radius, we still get the velocity \(137\) times larger than we said.
The speed of the surface of the electron is therefore comparable to \(c/\alpha\sim 137 c\), up to numbers of order one that hopefully don't reduce \(137\) below one. You see that the speed of the classical electron is higher than the speed of light. A bummer.
If you quickly and naively think about the changes that string theory makes to this calculation, string theory makes the problem worse because the electron in string theory is smaller. The energy of the electron comes from very stiff strings inside, not from the electrostatic energy, so the extended string hiding in the electron isn't \(10^{-15}\) meters large but \(10^{-35}\) meters tiny or so, not far from the Planck length (the string length, about 100 or 1,000 times longer, would be a better estimate).
So the size of the electron has seemingly shrunk \(10^{20}\) times which means that the required velocity on the surface has to increase \(10^{20}\) times – hopelessly larger than the speed of light. Do the points on the string move with these excessive superluminal speeds?
The answer is, of course, No. String theory reduces the "classical radius" of the electron but it changes other things, too. Most importantly, it changes the relevant mass, too.
The trick is that the thing that is spinning isn't as light as the electron. It's as heavy as the Planck mass (again, more precisely, the string mass, the square root of the string tension). Why? Because the electron, like all massless and observably light particles, comes from the massless level of the string whose mass is constructed in a similar way as the massless level of the bosonic string theory which I pick as an example because of its simplicity. The massless open bosonic strings are given by\[
|\gamma\rangle = \alpha_{-1}^\mu\, |0\rangle.
\] I called the one-string state a "photon". A similar relationship holds for massless closed strings (the graviton, the dilaton, and the \(B\)-field) but there are two alpha excitations (one left-moving and one right-moving) in front of the tachyonic ground state \(|0\rangle\).
If we want to see how the string pretending to be a point-like particle is spinning, we may see that it's really the oscillator excitation \(\alpha_{-1}^\mu\) or the analogous excitations in the superstring case (that may include fermionic world sheet fields) that carries all the spin because it carries the \(\mu\) Lorentz vector index (or spinor indices, in the Green-Schwarz formalism for the superstrings). This \(\alpha\) oscillator literally corresponds to adding some relative motion to the individual points of the string, so that they move as a wave with one cycle (the subscript) around the string.
The tachyonic ground state \(|0\rangle\) is not spinning. The tachyon is a scalar, after all. The squared mass of the tachyonic ground state (which is filtered out in superstring theory but may still be used as a useful starting point to construct the spectrum in the RNS superstring) is equal to \(-(D-2)/24\) times \(1/\alpha'\) for the open string – so it's exactly \(-1/\alpha'\) for \(D=26\) – because the term\[
\frac 12 \left( 1+2+3+4+5+\dots \right) = -\frac{1}{24}
\] is contributed by each of the \(D-2\) transverse spatial dimensions. I've discussed this semi-heuristic explanation of the zero-point energies in string theory many times (calculations that avoid all of this heuristics exist, too, but they prove that the semi-heuristic treatment involving the sum of positive integers is at least morally right whether people like it or not).
And the oscillator \(\alpha_{-1}^\mu\) is increasing the squared mass back to the massless level, by \(+1/\alpha'\). And it's the spinning part of the electron or other particles in string theory. So the relevant estimate for the mass of the "gyroscope" that we should use in string theory isn't the tiny electron mass but the string mass, \(1/\sqrt{\alpha'}\) or so.
The estimate for the speed of the "stringy surface" of the electron is easy to calculate now. In the string units \(\hbar=c=\alpha'=1\), the radius is one, the spin is one, the moment of inertia is one, the angular velocity is therefore also one, and so is the speed of the surface. This estimate is compatible with the assumption that the pieces of the strings never really move faster than light although they get close – and more accurate derivations within string theory may be shown to confirm this claim in some detail.
Note that no counterpart of the fine-structure constant such as the string coupling \(g_{\rm string}\) entered in our calculation based on string units. Everything was comparable to the string scale – which may differ from the Planck scale by a power of \(g_{\rm string}\) but the string scale really simplifies the calculation more than the Planck scale. For \(g_{\rm string}\sim 1\), you don't have to distinguish the string scale and the Planck scale.
The "string-scale" part of the electron in perturbative string theory is very heavy and therefore it's easy for it to produce the angular momentum of order \(\hbar=1\) even with velocities that don't breach the speed-of-light limit. And the tachyonic, negative contribution to the squared mass cancels most of the squared mass from the "positive excitation" and it makes the particle massless (or, when subleading effects are included, very light). This tachyonic part doesn't enter the calculations of the gyroscope.
It's very natural that string theory had to solve this puzzle – even though you could simply deny its existence – because string theory partly restored the assumptions that were used in the derivation of the nonsensical superluminal speed of spinning electron. You may see that string theory is a typical unifying theory that really wants to see all quantities of fundamental objects as being close to "one" in the Planck units. And if some quantity is much smaller than the natural Planck unit, e.g. if the electron is much lighter than the Planck mass, it's due to some cancellations that are known to occur almost everywhere in string theory.
But the fundamental parts of the explanations that matter – in this case, I mean the "positive-mass" part of the electron's gyroscope – universally work in the regime where "everything is of order one in some units". Whenever dimensional analysis is any useful in string theory, except for telling you that everything is comparable to the Planck/string scale, it's always in situations where some leading Planck-scale natural contributions "mostly cancel". Quantum field theorists like to think that any precise cancellation is "unnatural" but string theory offers us tons of totally sensible, justifiable, provable cancellations like that.
The cancellations resulting from supersymmetry represent a well-known example but string theory really does imply similar cancellations even in the absence of SUSY (or cancellations that don't seem to be consequences of SUSY), too. |
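The back-of-the-envelope estimate from the post can be checked in SI units; the sketch below reproduces it (up to the same $O(1)$ factors the text drops), showing that the surface speed of a classical electron comes out to $c/\alpha \approx 137\,c$:

```python
# If the electron were a classical ball of the "classical electron radius"
# carrying spin ~ hbar, its surface speed would be hbar/(m_e r) ~ 137 c.

import math

hbar = 1.0545718e-34      # J s
c = 2.99792458e8          # m/s
m_e = 9.1093837e-31       # kg
e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m

# classical electron radius from m c^2 = e^2 / (4 pi eps0 r)
r = e**2 / (4 * math.pi * eps0 * m_e * c**2)
assert abs(r - 2.82e-15) / 2.82e-15 < 0.01      # ~2.82 femtometers

# moment of inertia ~ m r^2; angular frequency from S = I omega ~ hbar
I = m_e * r**2
omega = hbar / I
v_surface = omega * r                            # = hbar / (m_e r)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)   # fine-structure constant
assert abs(v_surface / c - 1 / alpha) < 1        # surface speed ~ c/alpha
assert v_surface > 100 * c                       # far above the speed of light
```

Algebraically, $v/c = \hbar/(m_e r c) = 4\pi\epsilon_0\hbar c/e^2 = 1/\alpha$ exactly in this estimate, which is the "$137\,c$" quoted in the post.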
I am reading Spacetime and Geometry: An Introduction to General Relativity by Sean M. Carroll and have arrived at chapter 3, where he introduces the covariant derivative ##{\mathrm{\nabla }}_{\mu }##. He makes demands on this which are \begin{align}
\mathrm{1.\ Linearity:}\mathrm{\ }\mathrm{\nabla }\left(T+S\right)=\mathrm{\nabla }T+\mathrm{\nabla }S & \phantom {10000}(1) \\ \mathrm{2.\ Leibniz\ rule:}\mathrm{\nabla }\left(T\ \otimes \ \ S\right)=\left(\mathrm{\nabla }T\right)\ \ \otimes \ \ S+T\ \otimes \ \ \left(\mathrm{\nabla }S\right) & \phantom {10000}(2) \\ {\mathrm{3.\ Commutes\ with\ contractions:}\mathrm{\nabla }}_{\mu }\left(T^{\lambda }_{\ \ \ \lambda \rho }\right)={\left(\mathrm{\nabla }T\right)}^{\mathrm{\ \ \ }\lambda}_{\mu \ \ \lambda \rho } & \phantom {10000}(3) \\ {\mathrm{4.\ Reduces\ to\ partial\ derivative\ on\ scalars:}\mathrm{\nabla }}_{\mu }\phi ={\partial }_{\mu }\phi & \phantom {10000}(4) \\ \end{align}1,2 and 4 seem reasonable but I cannot understand 3 and he does not seem to use it, even though he implies that he does. The LHS of (3) seems straight forward\begin{align} {\mathrm{\nabla }}_{\mu }\left(T^{\lambda }_{\ \ \ \lambda \rho }\right) & ={\partial }_{\mu }T^{\lambda }_{\ \ \ \lambda \rho }+{\mathrm{\Gamma }}^{\lambda }_{\mu \kappa }T^{\kappa }_{\ \ \ \lambda \rho }-{\mathrm{\Gamma }}^{\kappa }_{\mu \lambda }T^{\lambda }_{\ \ \ \kappa \rho }-{\mathrm{\Gamma }}^{\kappa }_{\mu \rho }T^{\lambda }_{\ \ \ \lambda k} & \phantom {10000}(5) \\ & ={\partial }_{\mu }T^{\lambda }_{\ \ \ \lambda \rho }-{\mathrm{\Gamma }}^{\kappa }_{\mu \rho }T^{\lambda }_{\ \ \ \lambda k} & \phantom {10000}(6) \\ \end{align}Which is very like the rule for the covariant derivative of a (0,1) tensor. I understand that the ##\mathrm{\nabla }T## in (1) and (2) means ##{\mathrm{\nabla }}_{\sigma}T## where ##T## is some tensor. So the RHS of (3) appears to be ##{\left({\mathrm{\nabla }}_{\sigma}T\right)}^{\mathrm{\ \ \ }\lambda}_{\mu \ \ \lambda \rho }## which leaves too many indices on the RHS. Otherwise the RHS is some kind of derivative with one contra- and three co-variant indices. What is that? Help! Posted on Physics Forums at https://www.physicsforums.com/threads/question-on-covariant-derivatives.967509/
Let $U \subset \mathbb{R}$ be an open set. I want to show that there exist nested compact sets $C_1 \subset C_2 \subset \dots$ such that $\bigcup_{j=1}^{\infty} C_j = U$. I have a feeling that I need to resort to the fact that $U$ can be formed by an infinite union of open intervals $I_j$, i.e. $U = \bigcup_{j=1}^{\infty} I_j$. I have a feeling that for each $i \in \mathbb{N}$, I can form nested compact sets
$$C_{i1} \subset C_{i2} \subset \dots$$
such that $\bigcup_{j=1}^{\infty} C_{ij} = I_i$. Then I can set
$$C_i = \bigcup_{j=1}^{\infty} C_{ji}.$$
However, I am not sure whether an infinite union of compact sets is still compact. Hence, I could use some help on moving through this problem. Could I perhaps do something involving an infinite intersection of compact sets?
Please prove: Suppose $X$ is a metric space and $A$ is a sequentially compact set in $X$. Then for any real number $a > 0$, there are finitely many open balls of radius $a$ whose union covers $A$. ($a$ is chosen arbitrarily.)
HINT: Suppose not; then for any finite $F\subseteq A$, the set $\{B(x,a):x\in F\}$ does not cover $A$. This means that we can recursively construct a sequence $\langle x_n:n\in\Bbb N\rangle$ in $A$ in the following way.
Let $x_0\in A$. Given $\{x_k:k<n\}$, we know that $\{B(x_k,a):k<n\}$ does not cover $A$, so we can choose $x_n\in A\setminus\bigcup_{k<n}B(x_k,a)$. In this way we construct a sequence $\langle x_n:n\in\Bbb N\rangle$ in $A$.
Show that $d(x_m,x_n)\ge a$ whenever $m,n\in\Bbb N$ and $m\ne n$.
Now use the sequential compactness of $A$ to get a subsequence $\langle x_{n_k}:k\in\Bbb N\rangle$ converging to some $x\in A$ and derive a contradiction. |
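The recursive choice in the hint is just a greedy construction, and it can be mimicked numerically on a dense sample of a compact set. A sketch (the set $[0,1]$, the radius $a = 0.3$, and the grid resolution are all illustrative choices, not part of the problem):

```python
# Greedy construction from the hint, run on a dense sample of the compact
# set A = [0, 1]: keep choosing a point not covered by the balls B(x_k, a)
# already chosen.  Compactness forces the process to stop.
def greedy_net(points, a):
    centers = []
    for x in points:
        if all(abs(x - c) >= a for c in centers):
            centers.append(x)
    return centers

points = [i / 1000 for i in range(1001)]  # dense sample of [0, 1]
a = 0.3
centers = greedy_net(points, a)

# the chosen points are pairwise at least a apart ...
assert all(abs(c1 - c2) >= a for i, c1 in enumerate(centers)
           for c2 in centers[i + 1:])
# ... and finitely many balls B(c, a) cover the sample
assert all(any(abs(x - c) < a for c in centers) for x in points)
```

The first assertion is exactly the property $d(x_m, x_n) \ge a$ the hint asks you to show; the second is the covering that must eventually hold, since otherwise the construction would produce an infinite $a$-separated sequence with no convergent subsequence.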
Answer
You could have been going $27.94 \ m/s$, which is greater than the posted speed of the road.
Work Step by Step
We find the angle of the road based on the maximum posted speed. Thus, we obtain: $\theta = \tan^{-1}(\frac{v^2}{rg})=\tan^{-1}(\frac{22.22^2}{(210)(9.81)})=13.48^{\circ}$ Now that we have found this, we find the maximum force of friction possible: $F_f=F_n \mu = mg\cos\theta\, \mu = m(9.81)\cos(13.48^{\circ})\times 0.15 = 1.43m$ We know the force of friction and part of the force of gravity are the centripetal forces acting on you, so we find the maximum velocity you could have been going: $ 1.43m+mg\sin\theta=\frac{mv^2}{r}\\ v=\sqrt{1.43r+gr\sin\theta}=\sqrt{(1.43 \times 210)+(9.81\times 210\times \sin 13.48^{\circ})}=27.94 \ m/s$
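A quick numerical re-check of the arithmetic above (same given values: posted speed 22.22 m/s, r = 210 m, μ = 0.15):

```python
import math

g, r, mu = 9.81, 210.0, 0.15
v_posted = 22.22                           # posted speed in m/s

theta = math.atan(v_posted**2 / (r * g))   # banking angle of the road
f_per_m = g * math.cos(theta) * mu         # max friction force per unit mass
v_max = math.sqrt(f_per_m * r + g * r * math.sin(theta))

print(round(math.degrees(theta), 2))  # ≈ 13.48 degrees
print(round(v_max, 2))                # ≈ 27.94 m/s
```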
Cumulative distribution function
In probability theory and statistics, the cumulative distribution function (CDF), or just distribution function, describes the probability that a real-valued random variable X with a given probability distribution will be found to have a value less than or equal to x. In the case of a continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.

Definition
The cumulative distribution function of a real-valued random variable X is the function given by

<math>F_X(x) = \operatorname{P}(X\leq x),</math>

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x. The probability that X lies in the semi-closed interval (a, b], where a < b, is therefore

<math>\operatorname{P}(a < X \le b)= F_X(b)-F_X(a).</math>
In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.
If treating several random variables X, Y, ... etc. the corresponding letters are used as subscripts while, if treating only one, the subscript is usually omitted. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution. If X has a probability density function f_X, then the CDF is given by

<math>F_X(x) = \int_{-\infty}^x f_X(t)\,dt.</math>

In the case of a random variable X which has a distribution having a discrete component at a value b,

<math>\operatorname{P}(X=b) = F_X(b) - \lim_{x \to b^{-}} F_X(x).</math>
If F_X is continuous at b, this equals zero and there is no discrete component at b.

Properties

Every CDF is non-decreasing and right-continuous, and satisfies

<math>\lim_{x\to -\infty}F(x)=0, \quad \lim_{x\to +\infty}F(x)=1.</math>

Every function with these four properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.
If X is a purely discrete random variable, then it attains values x_1, x_2, ... with probability p_i = P(X = x_i), and the CDF of X will be discontinuous at the points x_i and constant in between:

<math>F(x) = \operatorname{P}(X\leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p(x_i).</math>
If the CDF F of a real valued random variable X is continuous, then X is a continuous random variable; if, furthermore, F is absolutely continuous, then there exists a Lebesgue-integrable function f(x) such that

<math>F(b)-F(a) = \operatorname{P}(a< X\leq b) = \int_a^b f(x)\,dx</math>

for all real numbers a and b.

Examples
As an example, suppose <math>X</math> is uniformly distributed on the unit interval [0, 1]. Then the CDF of <math>X</math> is given by
<math>F(x) = \begin{cases}
0 &:\ x < 0\\ x &:\ 0 \le x < 1\\ 1 &:\ x \ge 1. \end{cases}</math>
Suppose instead that <math>X</math> takes only the discrete values 0 and 1, with equal probability. Then the CDF of <math>X</math> is given by
<math>F(x) = \begin{cases}
0 &:\ x < 0\\ 1/2 &:\ 0 \le x < 1\\ 1 &:\ x \ge 1. \end{cases}</math>
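Both example CDFs are easy to write as plain functions, which makes the jump of size P(X = 0) = 1/2 in the discrete case visible; a minimal sketch:

```python
def cdf_uniform(x):
    """CDF of X uniformly distributed on [0, 1]."""
    if x < 0:
        return 0.0
    if x < 1:
        return float(x)
    return 1.0

def cdf_bernoulli_half(x):
    """CDF of X taking the values 0 and 1 with probability 1/2 each."""
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

# monotone, with right-continuous jumps at the atoms of the discrete case:
assert cdf_uniform(0.25) == 0.25
assert cdf_bernoulli_half(0.0) == 0.5   # jump of size P(X = 0) at x = 0
assert cdf_bernoulli_half(1.0) == 1.0
```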
Derived functions

Complementary cumulative distribution function (tail distribution)
Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as

<math>\bar F(x) = \operatorname{P}(X > x) = 1 - F(x).</math>
This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, T, has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value t of the test statistic,

<math>p= \operatorname{P}(T \ge t) = \operatorname{P}(T > t) =1 - F_T(t).</math>

Properties

For a non-negative continuous random variable having an expectation, Markov's inequality states that [1]

<math>\bar F(x) \leq \frac{\mathbb E(X)}{x} .</math>

As <math> x \to \infty, \bar F(x) \to 0 \ </math>, and in fact <math> \bar F(x) = o(1/x) </math> provided that <math>\mathbb E(X)</math> is finite.

Proof: Assuming X has a density function f, for any <math> c> 0 </math>,

<math>\mathbb E(X) = \int_0^\infty xf(x)dx \geq \int_0^c xf(x)dx + c\int_c^\infty f(x)dx</math>
Then, on recognizing <math>\bar F(c) = \int_c^\infty f(x)dx </math> and rearranging terms, <math>
0 \leq c\bar F(c) \leq \mathbb E(X) - \int_0^c x f(x)dx \to 0 \text{ as } c \to \infty </math>
as claimed.

Folded cumulative distribution
While the plot of a cumulative distribution often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, [2] [3] thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median and dispersion (the mean absolute deviation from the median [4]) of the distribution or of the empirical results.

Inverse distribution function (quantile function)
If the CDF F is strictly increasing and continuous then <math> F^{-1}( y ), y \in [0,1], </math> is the unique real number <math> x </math> such that <math> F(x) = y </math>. In such a case, this defines the inverse distribution function or quantile function.
Unfortunately, the distribution does not, in general, have an inverse. One may define, for <math> y \in [0,1] </math>, the generalized inverse distribution function:
F^{-1}(y) = \inf \{x \in \mathbb{R}: F(x) \geq y \}. </math>
Example 1: The median is <math>F^{-1}( 0.5 )</math>. Example 2: Put <math> \tau = F^{-1}( 0.95 ) </math>. Then we call <math> \tau </math> the 95th percentile.
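The quantile function is what drives inverse transform sampling: if $Y$ is uniform on $[0,1]$ then $F^{-1}(Y)$ has CDF $F$. A minimal sketch for the exponential distribution, whose inverse CDF is $F^{-1}(y) = -\ln(1-y)/\lambda$ (the rate $\lambda = 2$ and the sample size are arbitrary choices):

```python
import random
import math

random.seed(0)
lam = 2.0

def inv_cdf_exp(y, lam):
    # inverse of the exponential CDF F(x) = 1 - exp(-lam * x)
    return -math.log(1.0 - y) / lam

# push uniform samples through the inverse CDF
samples = [inv_cdf_exp(random.random(), lam) for _ in range(100_000)]

# the sample mean should approach E(X) = 1/lam = 0.5
mean = sum(samples) / len(samples)
print(round(mean, 2))
```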
The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions. Some useful properties of the inverse cdf are:

<math>F^{-1}</math> is nondecreasing.
<math>F^{-1}(F(x)) \leq x</math>.
<math>F(F^{-1}(y)) \geq y</math>.
<math>F^{-1}(y) \leq x</math> if and only if <math>y \leq F(x)</math>.
If <math>Y</math> has a <math>U[0, 1]</math> distribution then <math>F^{-1}(Y)</math> is distributed as <math>F</math>. This is used in random number generation using the inverse transform sampling method.
If <math>\{X_\alpha\}</math> is a collection of independent <math>F</math>-distributed random variables defined on the same sample space, then there exist random variables <math>Y_\alpha</math> such that <math>Y_\alpha</math> is distributed as <math>U[0,1]</math> and <math>F^{-1}(Y_\alpha) = X_\alpha</math> with probability 1 for all <math>\alpha</math>.

Multivariate case
When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables X, Y, the joint CDF <math>F</math> is given by

<math>F(x,y) = \operatorname{P}(X\leq x,Y\leq y),</math>

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.
Every multivariate CDF is:
Monotonically non-decreasing for each of its variables.
Right-continuous for each of its variables.
<math>0\leq F(x_{1},...,x_{n})\leq 1</math>
<math>\lim_{x_{1},...,x_{n}\rightarrow+\infty}F(x_{1},...,x_{n})=1</math> and <math>\lim_{x_{i}\rightarrow-\infty}F(x_{1},...,x_{n})=0,\quad \mbox{for all }i</math>

Use in statistical analysis
The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution.
Kolmogorov–Smirnov and Kuiper's tests
The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test to see whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic as in day of the week. For instance Kuiper's test might be used to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
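The Kolmogorov–Smirnov statistic itself is simple: the largest vertical gap between the empirical CDF of the data and the reference CDF. A hand-rolled sketch (the samples are invented for illustration):

```python
def ks_statistic(sample, cdf):
    """Sup-norm distance between the empirical CDF of `sample` and `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # the ECDF jumps from i/n to (i+1)/n at x; check both sides of the jump
        d = max(d, abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
    return d

uniform_cdf = lambda x: min(max(x, 0.0), 1.0)

# a sample placed exactly at the uniform quantiles has a tiny statistic ...
good = [(i + 0.5) / 100 for i in range(100)]
# ... while a sample squashed toward 0 does not
bad = [x ** 3 for x in good]

assert ks_statistic(good, uniform_cdf) < 0.01
assert ks_statistic(bad, uniform_cdf) > 0.2
```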
References

[1] Zwillinger, Daniel; Kokoska, Stephen (2010). CRC Standard Probability and Statistics Tables and Formulae. CRC Press. p. 49. ISBN 978-1-58488-059-2.
[2] Gentle, J.E. (2009). Computational Statistics. Springer. ISBN 978-0-387-98145-1. Retrieved 2010-08-06.
[3] Monti, K.L. (1995). "Folded Empirical Distribution Function Curves (Mountain Plots)". The American Statistician 49: 342–345. JSTOR 2684570.
[4] Xue, J. H.; Titterington, D. M. (2011). "The p-folded cumulative distribution function and the mean absolute deviation from the p-quantile". Statistics & Probability Letters 81(8): 1179–1182. doi:10.1016/j.spl.2011.03.014.
This article is aimed at relatively new users. It is written particularly for my own students, with the aim of helping them to avoid making common errors. The article exists in two forms: this WordPress blog post and a PDF file generated by LaTeX, both produced from the same Emacs Org file. Since WordPress does not handle LaTeX very well I recommend reading the PDF version.
1. New Paragraphs
In LaTeX a new paragraph is started by leaving a blank line. Do not start a new paragraph by using \\ (it merely terminates a line). Indeed you should almost never type \\, except within environments such as array, tabular, and so on.
2. Math Mode
Always type mathematics in math mode (as $..$ or \(..\)), to produce "$y = f(x)$" instead of "y = f(x)", and "the dimension $n$" instead of "the dimension n". For displayed equations use $$..$$, \[..\], or one of the display environments (see Section 12).
For inline equations, punctuation should appear outside math mode; otherwise the spacing will be incorrect. Here is an example.
Correct:
The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$.
Incorrect:
The variables $x,$ $y,$ and $z$ satisfy $x^2 + y^2 = z^2.$
For displayed equations, punctuation should appear as part of the display. All equations must be punctuated, as they are part of a sentence.

3. Mathematical Functions in Roman

Mathematical functions should be typeset in roman font. This is done automatically for the many standard mathematical functions that LaTeX supports, such as
\sin,
\tan,
\exp,
\max, etc.
If the function you need is not built into LaTeX, create your own. The easiest way to do this is to use the amsmath package and type, for example,
\usepackage{amsmath} ... % In the preamble.
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\inert}{Inertia}
Alternatively, if you are not using the
amsmath package you can type
\def\diag{\mathop{\mathrm{diag}}}

4. Maths Expressions
Ellipses (dots) are never explicitly typed as "…". Instead they are typed as \dots for baseline dots, as in $x_1,x_2,\dots,x_n$, or as \cdots for vertically centered dots, as in $x_1 + x_2 + \cdots + x_n$. Type $i$th instead of $i'th$ or $i^{th}$. (For some subtle aspects of the use of ellipses, see How To Typeset an Ellipsis in a Mathematical Expression.)
Avoid using \frac to produce stacked fractions in the text: write, for example, $n^3/3$ flops with a slash rather than the stacked $\frac{n^3}{3}$ flops.

For "much less than", type \ll, giving $\ll$, not <<, which gives $<<$. Similarly, "much greater than" is typed as \gg, giving $\gg$. If you are using angle brackets to denote an inner product use \langle and \rangle:

incorrect: $<x,y>$

correct: $\langle x,y \rangle$
5. Text in Displayed Equations
When a displayed equation contains text such as "subject to $x \ge 0$", instead of putting the text in \mathrm put the text in an \mbox, as in \mbox{subject to $x \ge 0$}. Note that \mbox switches out of math mode, and this has the advantage of ensuring the correct spacing between words. If you are using the amsmath package you can use the \text command instead of \mbox.
Example:

$$ \min\{\, \|A-X\|_F: \mbox{$X$ is a correlation matrix} \,\}. $$

6. BibTeX
Produce your bibliographies using BibTeX, creating your own bib file. Note three important points.
“Export citation” options on journal websites rarely produce perfect bib entries. More often than not the entry has an improperly cased title, an incomplete or incorrectly accented author name, improperly typeset maths in the title, or some other error, so always check and improve the entry. If you wish to cite one of my papers download the latest version of njhigham.bib (along with strings.bib supplied with it) and include it in your \bibliography command.
Decide on a consistent format for your bib entry keys and stick to it. In the format used in the Numerical Linear Algebra group at Manchester a 2010 paper by Smith and Jones has key smjo10, a 1974 book by Aho, Hopcroft, and Ullman has key ahu74, while a 1990 book by Smith has key smit90.
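Under that key convention, a hypothetical entry for the 2010 Smith–Jones paper might look as follows (all bibliographic details here are invented for illustration):

```bibtex
@article{smjo10,
  author  = {Smith, John A. and Jones, Mary B.},
  title   = {An Illustrative Title with Proper {C}asing},
  journal = {Some Journal of Examples},
  volume  = {1},
  number  = {2},
  pages   = {1--10},
  year    = {2010}
}
```

Note the braces around {C} in the title, which protect the capital letter from being lowercased by bibliography styles.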
7. Spelling Errors and LaTeX Errors

There is no excuse for your writing to contain spelling errors, given the wide availability of spell checkers. You'll need a spell checker that understands LaTeX syntax.

There are also tools for checking LaTeX syntax. One that comes with TeX Live is lacheck, which describes itself as "a consistency checker for LaTeX documents". Such a tool can point out possible syntax errors, or semantic errors such as unmatched parentheses, and warn of common mistakes.
8. Quotation Marks
LaTeX has a left quotation mark, denoted here \lq, and a right quotation mark, denoted here \rq, typed as the single left and right quotes on the keyboard, respectively. A left or right double quotation mark is produced by typing two single quotes of the appropriate type. The double quotation mark key itself always produces the same as two right quotation marks. Example: "hello" is typed as \lq\lq hello \rq\rq.
9. Captions
Captions go above tables but below figures. So put the \caption command at the start of a table environment but at the end of a figure environment. The \label statement should go after the \caption statement (or it can be put inside it), otherwise references to that label will refer to the subsection in which the label appears rather than the figure or table.
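A minimal skeleton showing both placements (labels and caption text are placeholders):

```latex
\begin{table}[t]
\caption{Caption above the table.}\label{tab:example}
\centering
\begin{tabular}{lc}
Item & Value \\
A    & 1     \\
\end{tabular}
\end{table}

\begin{figure}[t]
\centering
\includegraphics{myfigure}
\caption{Caption below the figure.}\label{fig:example}
\end{figure}
```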
10. Tables
LaTeX makes it easy to put many rules, some of them double, in and around a table, using \cline, \hline, and the | column formatting symbol. However, it is good style to minimize the number of rules. A common task for journal copy editors is to remove rules from tables in submitted manuscripts.
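In practice a table rarely needs more than a rule at the top, one under the header, and one at the bottom; the booktabs package (my suggestion, not mentioned above) is designed for exactly this style:

```latex
\usepackage{booktabs} % in the preamble

\begin{tabular}{lcc}
\toprule
Method & Error & Time (s) \\
\midrule
A      & 0.01  & 1.2      \\
B      & 0.02  & 0.8      \\
\bottomrule
\end{tabular}
```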
11. Source Code
LaTeX source code should be laid out so that it is readable, in order to aid editing and debugging, to help you to understand the code when you return to it after a break, and to aid collaborative writing. Readability means that logical structure should be apparent, in the same way as when indentation is used in writing a computer program. In particular, it is a good idea to start new sentences on new lines, which makes it easier to cut and paste them during editing, and also makes a diff of two versions of the file more readable.
Example:
Good:
$$ U(\zbar) = U(-z) = \begin{cases} -U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases} $$
Bad:
$$U(\zbar) = U(-z) = \begin{cases}-U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases}$$ 12. Multiline Displayed Equations
For displayed equations occupying more than one line it is best to use the environments provided by the amsmath package. Of these,
align (and
align* if equation numbers are not wanted) is the one I use almost all the time. Example:
\begin{align*} \cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} + \cdots,\\ \sin(A) &= A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots, \end{align*}
Others, such as
gather and
aligned, are occasionally needed.
Avoid using the standard environment
eqnarray, because it doesn’t produce as good results as the amsmath environments, nor is it as versatile. For more details see the article Avoid Eqnarray.
13. Synonyms
This final category concerns synonyms and is a matter of personal preference. I prefer \ge and \le to the equivalent \geq and \leq (why type the extra characters?).
I also prefer to use $..$ for math mode instead of \(..\) and $$..$$ for display math mode instead of \[..\]. My preferences are the original TeX syntax, while the alternatives were introduced by LaTeX. The slashed forms are obviously easier to parse, but this is one case where I prefer to stick with tradition. If dollar signs are good enough for Don Knuth, they are good enough for me!
I don’t think many people use LaTeX’s verbose \begin{math}..\end{math} or \begin{displaymath}..\end{displaymath}. Also note that \begin{equation*}..\end{equation*} (for unnumbered equations) exists in the amsmath package but not in LaTeX itself.
For quite some time I have been struggling to understand section 6.4 in Weinberg volume 1. He observes there that if the interaction hamiltonian density is extended by coupling to c-number fields $\epsilon$, $$ \mathcal H_{\mathrm{int}}(x) \mapsto \mathcal H_{\mathrm{int}}(x) + \epsilon(x) o(x), $$ where $o$ are some operators in the interaction picture, then the S matrix becomes a functional of $\epsilon$. This functional can be calculated using Feynman rules. That is quite clear. However, he then claims that if we calculate the variational derivative with respect to $\epsilon$ at $\epsilon=0$ we get a sum of terms represented by Feynman diagrams with only internal lines meeting at the $o(x)$ vertices. I don't understand the reason for discarding diagrams with external lines flowing into the $o(x)$ vertices. Explicitly, I obtained the formula (which is also written down one page later in Weinberg) $$ \left. \frac{\delta^r S_{\beta \alpha}[\epsilon]}{\delta \epsilon(y_1)...\delta\epsilon(y_r)} \right|_{\epsilon=0} = \sum_{n=0}^{\infty} (-i)^{n+r} \langle \beta | T \left\{ \prod_{i=1}^n \left[ \int \mathrm d x_i \mathcal H_{\mathrm{int}}(x_i) \right] o(y_1)...o(y_r) \right\} |\alpha \rangle . $$ It seems to me that the field operators in $o(y)$ can be contracted with initial and final states, just like those in $\mathcal H_{\mathrm{int}}$. What is the difference here?
In Feynman diagrams in coordinate representation, external lines are those with one end fixed (i.e. having fixed coordinate which does not take part in integrations) and the other end being internal vertex.
In your formula, if the operators $o(y_i)$ are of one-particle nature (i.e. contain $\Psi$ or $\Psi^+$ but not their products), then you have $r$ external lines starting from $y_1,\ldots,y_r$. See Fig. 1: it is an example for $r=4$, external lines are blue.
When the operators $o(y_i)$ are two-particle (for example, current operators like $\Psi^+\hat{J}\Psi$), we have rather external vertices with coordinates $y_1,\ldots,y_r$, each of them being a source for two external lines (see Fig. 2, external lines are blue).
As for the initial and final states $|\alpha\rangle$ and $\langle\beta|$: if they depend on its own coordinates, this can introduce additional external vertices to the diagram. For example, if $|\alpha\rangle=\Psi(z_\alpha)|0\rangle$, $|\beta\rangle=\Psi(z_\beta)|0\rangle$, you will get additional external vertices with the coordinates $z_\alpha$ and $z_\beta$. If $|\alpha\rangle=\Psi^+(z_\alpha)\Psi(z_\alpha)$, it will correspond to two-particle vertex, and so on. |
Given a quantum state, the Born rule lets us compute probabilities. Conversely, given probabilities, can we reconstruct the quantum state? I think the answer is almost trivially positive but how simple can the reconstruction formula be?
Let me illustrate with the archetypical example of the spin 1/2 system. I give myself a density operator $D$ and $n$ projectors $P_i$, and then the associated probabilities of measurement
$$\newcommand{\tr}{\mathop{\rm tr}}p_i=\tr DP_i.$$
I only require that “all cases are covered”, i.e. that
$$\sum_{i=1}^n p_i = 1.$$
Can I express $D$ as an expansion on those projectors? I.e.
$$D=\sum_{i=1}^n a_iP_i,$$
where the coefficients $a_i$ can be computed from the probabilities $p_i$? It turns out that not only the answer is positive but that there is a very simple formula,
$$D = \sum_{i=1}^4 \left(\frac{3}{2}p_i-\frac{1}{4}\right)P_i,$$
by choosing the projectors as
$$P_i=\frac{1}{2}\left(I+\frac{u_i\cdot\sigma}{\sqrt{3}}\right),$$
where $\sigma$ is the usual Pauli vector and where,
$$\begin{align} u_1 &= (+1,+1,+1),\\ u_2 &= (+1,-1,-1),\\ u_3 &= (-1,+1,-1),\\ u_4 &= (-1,-1,+1). \end{align}$$
This result took me quite a bit of trial and error, but it is surely well known! However, I have not encountered it in lectures or textbooks. A reference would be appreciated!
Let me emphasise that my question is not whether such a reconstruction of $D$ from the probabilities exists: that is not surprising, as the coefficients $a_i$ can be obtained as the solution of the system of equations
$$\sum_{j=1}^n \tr(P_iP_j)\ a_j = p_i,\ i=1,\cdots,n.$$
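That system is easy to solve numerically. The sketch below builds the four tetrahedral projectors from the question, computes $p_i = \operatorname{tr}(DP_i)$ for a sample state, solves the Gram system, and checks that $\sum_i a_i P_i$ reproduces $D$ (the Bloch vector is an arbitrary choice):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# the four tetrahedral directions from the question
us = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
P = [0.5 * (I2 + (u[0]*sx + u[1]*sy + u[2]*sz) / np.sqrt(3)) for u in us]

# a sample qubit state with Bloch vector (0.3, 0.2, 0.4)
D = 0.5 * (I2 + 0.3*sx + 0.2*sy + 0.4*sz)

p = np.array([np.trace(D @ Pi).real for Pi in P])
G = np.array([[np.trace(Pi @ Pj).real for Pj in P] for Pi in P])  # Gram matrix
a = np.linalg.solve(G, p)

D_rec = sum(ai * Pi for ai, Pi in zip(a, P))
assert np.allclose(D_rec, D)
```

The reconstruction is exact because the four projectors are linearly independent and therefore span the 4-dimensional real space of Hermitian 2×2 matrices.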
But that one can choose the projectors so that the coefficients are as simple as in the reconstruction above is really surprising to me. So my question is whether this generalises to quantum systems more complex than the spin 1/2 system. Are there other systems where such a simple reconstruction holds? Would there even be a general framework to find such reconstructions?
Answer
$\mu = \frac{v^2}{gr} - \tan\left(\tan^{-1}\left(\frac{v_0^2}{rg}\right)\right) = \frac{v^2 - v_0^2}{gr}$
Work Step by Step
We find the angle of the road based on the maximum posted speed. Thus, we obtain: $\theta = \tan^{-1}(\frac{v_0^2}{rg})$ We also know: $ mg\cos\theta\, \mu+mg\sin\theta=\frac{mv^2}{r}$ Thus, we find an expression for $\mu$: $\mu=\frac{\frac{v^2}{r}-g\sin\theta}{g\cos\theta}$ $\mu = \frac{v^2}{gr} - \tan\theta$ Plugging in the value of theta gives: $\mu = \frac{v^2}{gr} - \tan(\tan^{-1}(\frac{v_0^2}{rg}))$
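Since $\tan(\tan^{-1} x) = x$, the boxed answer is equivalent to $\mu = (v^2 - v_0^2)/(gr)$; a quick numerical check (the speeds below are illustrative values):

```python
import math

g, r = 9.81, 210.0
v0, v = 22.22, 27.94   # illustrative speeds in m/s

mu_boxed = v**2 / (g * r) - math.tan(math.atan(v0**2 / (r * g)))
mu_simple = (v**2 - v0**2) / (g * r)

# tan and arctan cancel, so both forms agree
assert abs(mu_boxed - mu_simple) < 1e-9
print(round(mu_boxed, 3))
```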
Now multinomial-free, I believe that Ghassan Sarkis and I have a proof of the following
Theorem. Let $h\ge2$, and let $L(x)=x + x^{p^h}/p + x^{p^{2h}}/p^2+\cdots$ be the logarithm of the formal group $F(x,y)\in\Bbb Z_p[[x,y]]$. Then $F(x,y)\in\Bbb Z_p\{\{x\}\}[[y]]$, where $\Bbb Z_p\{\{x\}\}$ is the ring of convergent power series: those whose coefficients go to zero.
A word about this ring: it’s the completion of the polynomials with respect to the “Gauss norm”, i.e. the uniform norm on the closed unit disk; or, if you like, the $p$-adic completion of the ring of polynomials.
Since you get $\Bbb F_p[x]$ when you tensor the ring of convergent series with $\Bbb F_p$, Neil Strickland’s guess turns out to be correct, in a very strong way.
Now for an outline of the proof, which depends entirely on $L'(x)$ being a convergent series, but the proof I found depends also on the particular form of the logarithm.
(Perhaps I should say that the cognoscenti may look at all this and say, C’mon, it’s all clear ’cause the invariant differential is a convergent series, and it all drops out automatically from general facts. But I’m no cognoscente in anything, so I have to go through at least some of the motions. I add that Ghassan wonders whether the present result may be in Hazewinkel already, though in some indecipherable formulation.)
Treat $F(x,y)$ as an element of $\Bbb Z_p[[x]][[y]]$, so write it as$$F(x,y)=x +\sum_{m\ge 1}f_m(x)y^m\,.$$The aim is to show that each $f_m$ is in $\Bbb Z_p\{\{x\}\}$, not just in $\Bbb Z_p[[x]]$. The argument is by induction, starting with $f_1$, which we already know to be $1/L'(x)$, so convergent. We write out the fundamental property of the logarithm:$$L\bigl(F(x,y)\bigr)=L(x)+L(y)\,,$$and arrange the pieces differently:$$0=\sum_{N\ge0}\Bigl[F(x,y)^{p^{Nh}} - y^{p^{Nh}}\Bigr]\Big/p^N-L(x)\,.$$In the above, we want to look at the total coefficient-function of $y^s$, knowing inductively that all $f_m(x)$ for $m<s$ are in $\Bbb Z_p\{\{x\}\}$. In this, we’re not interested in the participation of any monomial with $y$-degree greater than $s$, so we may truncate, and again rearrange:$$-(x+\sum_{m=1}^sf_m(x)y^m)\equiv \sum_{N\ge1}\Bigl[(x+\sum_{m=1}^s f_my^m)^{p^{Nh}} - y^{p^{Nh}}\Bigr] - L(x)\pmod{y^{s+1}}\,.$$Now, when you look at the occurrence of $y^s$ for each piece with $N\ge1$, there’s only one of them, and lo and behold, the coefficient is $p^{N(h-1)}x^{p^{Nh}-1}$, one of the monomials in $L'(x)$. Collect them all on the other side, and get$$-f_s(x)L'(x) = \text{$y^s$-coefficient in}\sum_{N\ge1}\Bigl[x+\sum_{m=1}^{s-1} f_my^m\Bigr]^{p^{Nh}}\Big/p^N\,,$$though in case $s=p^{nh}$, one must add on the left $1/p^n$, an inconsequential change. But here, my friends, our tale is almost done.
The last display exhibits $f_s$ as a $\Bbb Q_p$-series in the series $f_1,\dots, f_{s-1}$. But look at the tail-end of the outer sum: because the degrees in $y$ are bounded, the binomial coefficients for the $p^{Nh}$-powers far overwhelm the denominators, and the total coefficients go to zero. So we know that the tail-end is convergent, just as a series of elements of $\Bbb Z_p\{\{x\}\}$. And the part before the tail-end? That is a polynomial with $\Bbb Q_p$-coefficients in the $s-1$ series in $\Bbb Z_p\{\{x\}\}$. Let’s call it $g(x)$ for the moment. We now have $-f_sL'=g$, and thus $f_s=-f_1g$, an element of $\Bbb Q_p\otimes_{\Bbb Z_p}\Bbb Z_p\{\{x\}\}$ that’s also in $\Bbb Z_p[[x]]$, since of course we know that $F$ has its coefficients in $\Bbb Z_p$. Thus $f_s(x)\in\Bbb Z_p\{\{x\}\}$, as desired.
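A weaker statement implicit above — that $F = L^{-1}(L(x)+L(y))$ has $\Bbb Z_p$-coefficients at all — can be brute-force checked in low degrees. The sketch below takes $p=2$, $h=2$, computes $F$ modulo total degree 10 with exact rational arithmetic, and verifies that every coefficient has odd denominator, i.e. is a $2$-adic integer (the prime, height, and truncation degree are arbitrary choices for illustration):

```python
from fractions import Fraction

P, H, DEG = 2, 2, 9   # prime, height, truncation: keep total degree <= DEG

def mul(f, g):
    """Multiply bivariate series stored as {(i, j): Fraction}, truncating."""
    out = {}
    for (a, b), c in f.items():
        for (d, e), k in g.items():
            if a + b + d + e <= DEG:
                key = (a + d, b + e)
                out[key] = out.get(key, Fraction(0)) + c * k
    return out

def compose(coeffs, s):
    """Evaluate the univariate series sum_n coeffs[n]*u^n at u = s (bivariate)."""
    out, power = {}, {(0, 0): Fraction(1)}
    for n in range(len(coeffs)):
        if coeffs[n]:
            for key, c in power.items():
                out[key] = out.get(key, Fraction(0)) + coeffs[n] * c
        power = mul(power, s)
    return out

# L(x) = x + x^{p^h}/p + x^{p^{2h}}/p^2 + ...; with p = 2, h = 2 only the
# exponents 1 and 4 survive truncation at total degree 9.
# Invert L term by term: G satisfies L(G(u)) = u, i.e. G = u - G^{p^h}/p here,
# iterated to a fixed point modulo u^{DEG+1}.
G = [Fraction(0)] * (DEG + 1)
G[1] = Fraction(1)
for _ in range(DEG):
    g_bi = {(i, 0): c for i, c in enumerate(G) if c}
    g_pow = g_bi
    for _ in range(P**H - 1):
        g_pow = mul(g_pow, g_bi)
    newG = [Fraction(0)] * (DEG + 1)
    newG[1] = Fraction(1)
    for (i, _), c in g_pow.items():
        newG[i] -= c / P
    G = newG

S = {(1, 0): Fraction(1), (0, 1): Fraction(1),
     (P**H, 0): Fraction(1, P), (0, P**H): Fraction(1, P)}   # L(x) + L(y)
F = compose(G, S)

# every coefficient of F(x, y) is a 2-adic integer (odd denominator),
# and F(x, y) = x + y + higher-order cross terms
assert all(c.denominator % P != 0 for c in F.values())
assert F[(1, 0)] == 1 and F[(0, 1)] == 1
```

In this range one finds $F(x,y) = x + y - 2x^3y - 3x^2y^2 - 2xy^3 + \cdots$: the halves in $L$ cancel exactly as the cancellation in the proof predicts.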
What this result says is that there is an action of the formal group $F$ on the closed disk. Again, maybe the cognoscenti have known this all along, but I certainly didn’t. You certainly don’t expect such a thing for a random formal group, even (as here) of height greater than $1$.
FIELD: Math: CMC
DATE: July 15 (Mon), 2019
TIME: 16:00-17:30
PLACE: 1424
SPEAKER: Kim, Kihyun
HOST: Oh, Sung-Jin
INSTITUTE: KAIST
TITLE: On pseudoconformal blow-up solutions to the self-dual Chern-Simons-Schroedinger equation: existence, uniqueness, and instability. Part 2
ABSTRACT: (For the compiled version, see the attached file)
We consider the self-dual Chern-Simons-Schr\"odinger equation (CSS), also known as a gauged nonlinear Schr\"odinger equation (NLS). CSS is $L^2$-critical, admits solitons, and has the pseudoconformal symmetry. These features are similar to the $L^2$-critical NLS. In this work, we consider pseudoconformal blow-up solutions under $m$-equivariance, $m \geq 1$. Our result is threefold. Firstly, we construct a pseudoconformal blow-up solution $u$ with given asymptotic profile $z^\ast$:
$$ \left[ u(t, r) - \frac{1}{|t|} Q \left( \frac{r}{|t|} \right) e^{-i \frac{r^2}{4 |t|}} \right] e^{i m \theta} \to z^\ast \quad \hbox{ in } H^1 $$
as $t \to 0^-$, where $Q(r) e^{i m \theta}$ is a static solution. Secondly, we show that such blow-up solutions are unique in a suitable class. Lastly, yet most importantly, we exhibit an instability mechanism of $u$. We construct a continuous family of solutions $u^{(\eta)}$, $0 \leq \eta \ll 1$, such that $u^{(0)} = u$ and for $\eta > 0$, $u^{(\eta)}$ is a global scattering solution. Moreover, we exhibit a rotational instability as $\eta \to 0^+$: $u^{(\eta)}$ takes an abrupt spatial rotation by the angle
$$ \left( \frac{m+1}{m} \right) \pi $$
on the time interval $|t| \lesssim \eta$.
We are inspired by works in the $L^2$-critical NLS. In the seminal work of Bourgain and Wang (1997), they constructed such pseudoconformal blow-up solutions. Merle, Rapha\"el, and Szeftel (2013) showed an instability of Bourgain-Wang solutions. Although CSS shares many features with NLS, there are essential differences and obstacles over NLS. Firstly, the soliton profile to CSS shows a slow polynomial decay $r^{-(m+2)}$. This causes many technical issues for small $m$. Secondly, due to the nonlocal nonlinearities, there are strong long-range interactions even between functions in far different scales. This leads to a nontrivial correction of our blow-up ansatz. Lastly, the instability mechanism of CSS is completely different from that of NLS. Here, the phase rotation is the main source of the instability. On the other hand, the self-dual structure of CSS is our main tool for overcoming these obstacles. We exploited the self-duality in many places such as the linearization, spectral properties, and construction of modified profiles.
In the talks, the first author will present background of the problem, main theorems, and outline of the proof, as well as a comparison with NLS results. The second author will explain heuristics of main features, such as the long-range interaction between $Q^{\sharp}$ and $z$, rotational instability mechanism, and Lyapunov/virial functional method.
|
WHY?
In many RL environments, rewards are delayed relative to the actions that caused them. This paper proves that delayed rewards exponentially increase the convergence time of TD learning and exponentially increase the variance of MC estimates.
WHAT?
RUDDER (Return Decomposition for Delayed Rewards) redistributes rewards to reduce the delay. To do this, the paper defines a new MDP that is state-enriched with additional information compared to the original MDP yet has the same optimal policy. The additional information is
\rho, which represents the accumulated previously received reward. To redistribute the rewards, a model predicts the final return from the state-action sequence. Using contribution analysis (layer-wise relevance propagation (LRP), Taylor decomposition, or integrated gradients (IG)), the contribution of each state-action pair to the prediction of the final reward can be identified. To prevent the model from predicting the final reward solely from the final state, s is replaced with the difference between consecutive states. At optimality, the Q-value of a state-action pair equals the accumulated contribution.
g((s,a)_{0:T}) = \tilde{r}_{T+1}\\ g((s,a)_{0:T}) = \sum^T_{t=0} h(a_t, s_t)\\ g((a, \delta)_{0:T}) = \tilde{r}_{T+1} = \sum^T_{t=0} h(a_t, \delta(s_t, s_{t+1}))\\ r_{t+1} = \tilde{r}_{T+1}h(a_t, \delta(s_t, s_{t+1}))/g((a, \delta)_{0:T})\\ R_t = h_t = h(a_t, \delta(s_t, s_{t+1})) = q^{\pi}(s_t, a_t) - q^{\pi}(s_{t-1}, a_{t-1}) RUDDER consists of 1) safe exploration, which includes annealed PPO2 exploration and a safe exploration strategy, 2) a lesson replay buffer, similar to prioritized replay but prioritized by the prediction error, and 3) contribution analysis, implemented with an LSTM.
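A minimal sketch of the redistribution idea (not the authors' implementation): if a sequence model outputs, at every step, a prediction of the final return, then differences of consecutive predictions serve as redistributed rewards, and they telescope back to the final predicted return. The prediction values below are hypothetical.

```python
import numpy as np

def redistribute(return_predictions):
    """return_predictions[t] is the model's prediction of the final return
    after seeing (s_0, a_0), ..., (s_t, a_t). The redistributed reward at
    step t is the change in prediction caused by step t."""
    g = np.asarray(return_predictions, dtype=float)
    return np.diff(np.concatenate(([0.0], g)))  # h_t = g_t - g_{t-1}

# Hypothetical per-step predictions for a 5-step episode whose only
# (delayed) reward is 1.0 at the end:
preds = [0.1, 0.2, 0.8, 0.9, 1.0]
h = redistribute(preds)
# The redistributed rewards sum to the final prediction:
assert np.isclose(h.sum(), preds[-1])
```

Note how most of the reward mass lands on step 2, where the prediction jumped, i.e. where the decisive action was taken.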
So?
RUDDER converged much faster than other models and outperformed most of them, including DQN, DDQN, prioritized DDQN, Dueling DDQN, Noisy DQN, Distributional DQN, and Ape-X DQN, in the Atari games Bowling and Venture.
Critic
Although RUDDER itself seems like a product of engineering, I was surprised to see all the mathematical proofs attached. It would be nicer if there were more benchmarks on other Atari games. The great overview of reinforcement learning in the intro and the detailed explanation in the appendix were appreciated.
tl;dr answer, and my math might be a little off, but here goes:
The Hill Sphere is an approximation which takes just 4 things into account:
$$r \approx a (1-e) \cdot \sqrt[3]{\frac{m}{3 M}}$$ the semi-major axis $a$, the eccentricity of the orbit $e$, and the mass of the 2 objects, in this case the Earth and the Moon.
The Hill Sphere formula doesn't, by the way, take into account gravitational imperfections due to varied density, so it doesn't rise and fall with every mountain on Earth and it approximates a radius, which by definition, represents a perfect sphere, so, the Hill Sphere is a sphere, calculated by a formula. By definition, as David Hammen said.
So ...
...because I have a budding sense it's not quite spherical. Or at
least the center isn't equal to the body's center. For one thing, at
the distant end (say, in case of Moon's Hill Sphere, the point
opposite from Earth) Earth's gravity is weaker than on the Earth side
(due to the distance), meaning Moon's sphere of influence should reach
farther. Or in case of certain elliptic polar orbits (similar to
Sun-synchronous Earth orbit) they could reach much farther as the
distance from "the big body" doesn't change by much, and the smaller
body's gravity should be able to pull the satellite back from a quite
a distance "above" or "below".
What you're describing works exactly like that with planetary magnetic fields. It hardly works with gravitational orbits at all, because orbits don't switch from being stable to unstable as they go from the Earth side of the Moon to the far side of the Moon. The entire orbit is considered one or the other, though in a certain sense, all orbits are ultimately unstable given enough time, so there's some fuzziness to those definitions as well.
So, what's the shape of the Hill Sphere? And what magnitude of
eccentricity/irregularity in its shape can be expected?
Well, we know the Hill Sphere is a perfect sphere, and what we might call a region of orbital stability, generally considered 1/3 to 1/2 of the Hill Sphere's radius (also in the Wikipedia article), is pretty much a perfect sphere too, though it's also not definable with precision.
But, is there any form of gravitational influence that is stretched on the far side and compacted on the near side? Maybe. Any object's gravitational influence pretty much extends to any distance at the speed of light, so gravitational influence doesn't work either, but let's try this:
Let's make up a term and call it gravitational capture zone, or capture zone for short, which I'll define as a region where a planet can capture an object, usually an asteroid or comet that flies close to it, and let's define capture as a situation where the object completes at least 1 full, 360 degree orbit. One is enough. Now, obviously, capture also depends on the speed of the asteroid or comet, so this is kind of rough, but the Earth does have, in a sense, a capture zone - so does Mars; in fact, Mars' 2 moons are believed to be captured asteroids. In fact, all the outer planets have captured asteroids orbiting them as moons.
Another way of looking at this, is, an asteroid that flies at a set speed, 1,000 km above the Moon's surface will be influenced by the Moon's gravity more than the Earth's, at least, when it flies that close (before it gets that close, it's more affected by the Earths - but the math gets complicated).
So, an asteroid flying 1,000 km above the Moon on the far side of the Earth would bend more than an asteroid flying 1,000 km above the Moon but in between the Moon and the Earth, where, in that position, the Moon's and Earth's gravity on the asteroid would be pulling in opposite directions - much less bend, and if we take that in mind, the influence where the Moon might capture an asteroid is greater on the away side of Earth than the near side of the Earth.
So, lets return to Hill sphere, and I'm going to simplify the math by ignoring eccentricity and assuming circular orbits - from the same Wikipedia link:
$$r \approx a \cdot \sqrt[3]{\frac{m}{3M}}$$
Lets do the Earth-Sun first;
$a$ (semi major axis, or, lets just call it distance), and I hate math so we'll call $a$ 1, for 1 AU, and mass, lets call little $m$ 1 and big $M$ 333,000 because the Sun is 333,000 times more massive than the Earth:
$1\,\mathrm{AU} \cdot \sqrt[3]{1/(3\cdot333{,}000)}$ is just about 1/100th of an AU, so the Earth's Hill Sphere extends from roughly 0.99 AU (near end, toward the Sun) to 1.01 AU (far end). We're basically just talking tidal effects at this point, but gravity decreases by the square of the distance, so this is about a 4% change in the Sun's gravitation from the far end to the near end of the Earth's Hill sphere.
The sphere is still a sphere, but the Earth should be slightly more able to capture an asteroid on its far side from the Sun than the near side. In that sense, the term I made up, capture zone, would be stretched by a little bit. 4% might be the upper limit of the stretching, and it's probably less than that, because a gravitational capture zone would be smaller than the Hill Sphere. (If somebody wants to correct that, feel free, but this is ballpark.)
The Earth actually does this, by the way - captures asteroids. The Moon probably doesn't, because its gravity is too weak and, comparatively, asteroids move too fast. Asteroids captured by the Earth usually don't stick around very long though:
Source: http://gizmodo.com/5869840/earth-has-a-second-moon-astronomers-say
Now, let's look at the Earth-Moon system. The Earth weighs approximately 81 times as much as the Moon, so the Hill Sphere of the Moon is $\sqrt[3]{1/(81\cdot3)}$, or about 16% of the distance between the Earth and the Moon. The Moon's theoretical gravitational capture zone would be significantly more elongated around its far side than its Earth side. The elongation of a capture zone depends on the mass ratio of the 2 objects, so the Sun being comparatively so large makes Earth's "capture zone" much closer to a sphere, not much more stretched out as you suggest.
Now, in reality, the Moon's Hill sphere is all but irrelevant, because there is no stable orbit around the Moon due to tidal effects from the Earth and, to a lesser extent, orbital perturbation from the Sun. But if the Moon were to capture an asteroid - one that would need to just about exactly match the Moon's speed - in that hypothetical scenario, I think the Moon would be more likely to capture an asteroid on its far side than its near side. So there is, in a sense, an elongation for temporary capture of a passing object.
But this is all quite vague, because anything that orbits the Moon also orbits the Earth, and any elongation of this theoretical capture zone is pretty irrelevant in terms of stable orbits, because a stable orbit has to account for all parts of the orbit, not just the part where the tidal effect is the lowest. So, I don't think it's a good way of looking at orbital mechanics, but I think you're correct in thinking there's something about a planet's gravitational sphere of influence not being quite spherical.
Loosely related on Moon's capturing asteroids: http://www.universetoday.com/109666/can-moons-have-moons/ |
2019-09-04 12:06
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $\sqrt s = 13\ \rm{TeV}$, as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and}\ 8\ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $\sqrt s = 13\ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
2019-08-15 17:39
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-08-15 17:36
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-02-12 14:01
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that provided the observation of several exotic states. The latest results on the spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
2019-01-21 09:59
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011-2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest LHCb measurements of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
2019-01-15 14:22
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
2018-12-20 16:31
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
2018-12-14 16:02
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1~\text{MeV}~n_{eq}~\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8. |
WHY?
The reparameterization trick is a useful technique for estimating the gradient of a loss function containing stochastic variables. While score-function estimators suffer from high variance, the reparameterization trick allows the gradient to be estimated with pathwise derivatives. Although the trick can be applied to various kinds of random variables, enabling backpropagation through them, it has not been applicable to discrete random variables.
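As a quick illustration of the pathwise idea (my own toy example, not from the paper): to estimate $\frac{d}{d\mu}\mathbb{E}[x^2]$ with $x \sim N(\mu, 1)$, write $x = \mu + \epsilon$, $\epsilon \sim N(0,1)$, so each sample contributes the pathwise derivative $2(\mu + \epsilon)$.

```python
import numpy as np

# Reparameterize x ~ N(mu, 1) as x = mu + eps, eps ~ N(0, 1); then
# d(x^2)/dmu = 2*(mu + eps) is an unbiased pathwise gradient estimate
# of d/dmu E[x^2] = 2*mu.
rng = np.random.default_rng(0)
mu = 1.5
eps = rng.standard_normal(200_000)
pathwise_grad = np.mean(2.0 * (mu + eps))
assert abs(pathwise_grad - 2.0 * mu) < 0.05  # true gradient is 3.0
```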
WHAT?
Reparameterization of a discrete random variable is enabled by relaxing Categorical variables to Concrete variables (continuous relaxations of discrete random variables). The Concrete variable is motivated by the Gumbel-Max trick. A Gumbel random variable can be defined as $-\log(-\log U)$ with $U \sim \mathrm{Uniform}(0,1)$. If we set $D_k = 1$ for the $k$ that maximizes $\log\alpha_k - \log(-\log U_k)$, then
\mathbb{P}(D_k = 1) = \frac{\alpha_k}{\sum_{i=1}^n \alpha_i}. This turns sampling a discrete random variable into a deterministic transformation of uniform random variables. However, the argmax of the Gumbel-Max trick is not suitable for backpropagation. The Concrete distribution substitutes the argmax with a softmax with temperature \lambda, which approaches the argmax as \lambda \rightarrow 0. The sampling formula and probability density are as follows.
X_k = \frac{\exp((\log\alpha_k + G_k)/\lambda)}{\sum_{i=1}^n \exp((\log\alpha_i + G_i)/\lambda)}\\ p_{\alpha,\lambda}(x) = (n-1)!\,\lambda^{n-1}\prod_{k=1}^n\left(\frac{\alpha_k x_k^{-\lambda - 1}}{\sum_{i=1}^n \alpha_i x_i^{-\lambda}}\right) We can use this Concrete random variable when computing gradients of discrete stochastic variables. By substituting Concrete random variables for discrete ones, the variational lower bound can be relaxed as follows.
L_1(\theta, a, \alpha) = \mathbb{E}_{D\sim Q_{\alpha}(d|x)}\left[\log\frac{p_\theta(x|D)P_a(D)}{Q_{\alpha}(D|x)}\right]\\ L_1(\theta, a, \alpha) \approx \mathbb{E}_{Z\sim q_{\alpha, \lambda_1}(z|x)}\left[\log\frac{p_\theta(x|Z)p_{a,\lambda_2}(Z)}{q_{\alpha, \lambda_1}(Z|x)}\right]
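A minimal sketch of sampling the Concrete relaxation described above: Gumbel noise $G_k = -\log(-\log U_k)$ is added to the log-weights and pushed through a tempered softmax, yielding a point in the probability simplex rather than a one-hot vector.

```python
import numpy as np

def sample_concrete(alpha, lam, rng):
    """Sample from Concrete(alpha, lam): softmax((log alpha + G) / lam),
    with G ~ Gumbel(0, 1). As lam -> 0 the sample approaches a one-hot
    vector selecting argmax_k (log alpha_k + G_k)."""
    u = rng.uniform(size=len(alpha))
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    logits = (np.log(alpha) + g) / lam
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
alpha = np.array([1.0, 2.0, 3.0])
x = sample_concrete(alpha, lam=0.5, rng=rng)
assert np.isclose(x.sum(), 1.0) and np.all(x > 0)  # lies in the simplex
```

Because the sample is a smooth function of `alpha`, gradients can flow through it, unlike through the hard argmax of the Gumbel-Max trick.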
So?
The Concrete relaxation outperformed VIMCO in structured output prediction and in density estimation with non-linear models.
Critic
Backpropagation through discrete stochastic variables can be useful in other areas too. More varied experiments would have been nice.
Is the half life of a material only accurate as long as you are still in a macroscopic regime? If I had 8 particles in a box would I observe a fluctuation in half lives, and what would occur within the 4th half life?
Half life is, by definition, the amount of time until half of an infinitely large sample would decay. That's precisely equivalent (according to the frequentist interpretation of probability, if that matters to you) to the time until an individual particle's probability of decay reaches one half. The half life is a theoretical quantity that doesn't depend on the actual number of particles you're dealing with.
If you actually put 8 particles in a box and watch how long it takes for half of them to decay, you could consider that a measurement of the half life of the particles. As with any measurement, the value you measure will not, in general, be the same as the true (theoretical) value. So yes, there will be fluctuations, and once the number of particles remaining drops to two or one or zero, those fluctuations will be very very large. But what is fluctuating is your measurement of the half life, not the true theoretical half life itself.
Yes, it is a statistical average in the sense that the measured half life will approach the true half life if you do lots of measurements.
In other words, if you did the experiment many, many times you would find that on average you had 4 particles left after a half-life had passed.
For any
individual experiment, the results would vary.
Each atom has a probability of surviving intact after a time $t$ according to $$p = \exp(-\lambda t)$$ where $\lambda$ is the decay constant and the half life $t_{1/2} = \ln 2/\lambda$.
If you wait 4 half lives then $t = 4\ln 2/\lambda$ and the probability of an individual particle surviving is $\exp(-4\ln 2) = 0.0625$.
In practice, you have to have an integer number of particles, so the most likely outcomes are that either one or zero intact atoms remain.
If you have 8 atoms and the probability that any given one has survived is $p=0.0625$, then one can use the binomial probability distribution to work out the probability that a number $n$ will survive from a population of $N$: $$ P(n) = \frac{N!}{n! (N-n)!} p^{n}(1-p)^{N-n}$$
So $P(0)= 0.597$, $P(1) = 0.318$, $P(2)= 0.074$ and so on.
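These numbers can be reproduced directly from the binomial formula above:

```python
from math import comb

# P(n survive) = C(N, n) p^n (1-p)^(N-n), with N = 8 atoms and
# survival probability p = exp(-4 ln 2) = 0.0625 after four half-lives.
def p_survivors(n, N=8, p=0.0625):
    return comb(N, n) * p**n * (1 - p) ** (N - n)

print(round(p_survivors(0), 3))  # 0.597
print(round(p_survivors(1), 3))  # 0.318
print(round(p_survivors(2), 3))  # 0.074
```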
Now, if your aim is to
estimate the half life based on a single experiment with these 8 atoms, then I see (at least) two possibilities.
(i) If you measure the time it takes for the 4th decay to occur, then you can calculate $P(4)$ as above, but calculate it for a range of possible values of $\lambda$. This will give you a probability distribution for $\lambda$ from which you can find the maximum likelihood value or a confidence interval.
(ii) If you have the individual decay times of each decay, then for each atom you can calculate a probability that it would have decayed in less than its observed decay time, given an assumed $\lambda$, which is $P_i(\lambda) = (1- \exp[-\lambda t_i])$. You can also include any atoms that haven't decayed, $P_i(\lambda) = \exp[-\lambda t_i]$. You then form the product of these probabilities $P(\lambda)= \prod P_i(\lambda)$ to give you an overall likelihood distribution for $\lambda$, from which you can estimated a maximum likelihood value for $\lambda$ and a confidence interval. |
I have asked this question on MSE but did not receive an answer. I thought I could try it here.
Let $T$ be a self-adjoint trace-class operator on $L^2(\mathbb{R})$. Is it true that it can be represented as an integral operator?
I thought the kernel would be $$k_T(x,y) =\sum_{i=1}^\infty \lambda_i \phi_i(x) \bar\phi_i(y).$$
Here $\{\phi_i\}$ is an eigenbasis of $T$, i.e. $T=\sum_i \lambda_i |\phi_i\rangle\langle\phi_i|$. Then, we have $$\int k_T(\cdot,y) f(y)\,dy = \int\sum_i \lambda_i \phi_i(\cdot) \bar\phi_i(y) f(y)\,dy = \sum_i \lambda_i \phi_i \langle \phi_i, f\rangle=\sum_i \lambda_i |\phi_i\rangle\langle\phi_i|f\rangle = Tf.$$
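As a numerical sanity check of the identity (a discretized finite-rank sketch, not a proof of the general trace-class case): build $T = \sum_i \lambda_i |\phi_i\rangle\langle\phi_i|$ on a grid and verify that integrating the kernel $k_T(x,y)$ against $f$ reproduces $Tf$.

```python
import numpy as np

# Discretize [0, 1) with n points; phi1, phi2 are orthonormal in L^2(0, 1).
n = 400
x, dx = np.linspace(0.0, 1.0, n, endpoint=False, retstep=True)
phi1 = np.ones(n)
phi2 = np.sqrt(2.0) * np.cos(2 * np.pi * x)
lam = np.array([0.7, -0.3])

# Kernel k_T(x, y) = sum_i lam_i phi_i(x) conj(phi_i(y)) (real here).
k = lam[0] * np.outer(phi1, phi1) + lam[1] * np.outer(phi2, phi2)
f = np.sin(2 * np.pi * x) + x**2

Tf_kernel = k @ f * dx  # ∫ k(x, y) f(y) dy by Riemann sum
Tf_spectral = lam[0] * phi1 * (phi1 @ f * dx) + lam[1] * phi2 * (phi2 @ f * dx)
assert np.allclose(Tf_kernel, Tf_spectral)
```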
Is this correct? |
For discussion directly related to ConwayLife.com, such as requesting changes to how the forums or wiki function.
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X
I have some questions:
1. Is there a minimal age for joining conwaylife?
2. Who is unname66609?
3. Does LifeWiki store the IP Addresses of users?
Ok! That will be it for now
Airy Clave White It Nay
Saka wrote: 1. Is there a minimum age for joining conwaylife?
The terms and conditions shown when you first register say nothing about age, and I think the youngest member here, Gustavo, is 11 or 12 - so no.
Saka wrote: 2. Who is unname66609?
Just another forum user as far as I can tell. They probably live in or near China. Why do you ask?
Saka wrote: 3. Does LifeWiki store the IP Addresses of users?
Yes.
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X
1. So it's ok if a a six year old joins?
2. Because he never really posts anything... (just things like "what is sesame oil?"
3. Ok
And by the the way did you notice the second "a" I put in number 1? There's also something else about this line...
Airy Clave White It Nay
Saka wrote: 1. So it's ok if a a six year old joins?
Hmm... Technically, it's perfectly possible, but I can hardly imagine a six-year-old doing stuff like this. If he/she decides to join, definitely tell me the username. If that person achieves anything, I will clap.
Saka wrote: 2. Because he never really posts anything... (just things like "what is sesame oil?")
So do you think unname is a bot? It is not probable since he/she used to post some relevant content. Unname is just a strange guy, like Gustavo raised to the 10th power.
Saka wrote: And by the the way did you notice the second "a" I put in number 1?
No. I failed this test for the the second time in my life
There are 10 types of people in the world: those who understand binary and those who don't.
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X
Alexey_Nigin wrote: No. I failed this test for the the second time in my life
And I passed.
Airy Clave White It Nay
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X
Sarp wrote: Nope I'm younger
How young?
Airy Clave White It Nay
The Turtle wrote: Are there any catalogues of LWSS collisions? Or other *WSS collisions?
Meaning, a collection of syntheses where the recipes involve colliding LWSSes instead of colliding gliders? For the second question, are combinations of LWSS, MWSS, and HWSS allowed, or should a single recipe have only one kind of *WSS?
In any case, I think the answer is no -- nothing really organized, anyway, because no one has had a long-term use for it. It's fairly easy to put together the beginnings of such a collection by running gencols for a while, along these general lines.
If you just want a table of two-*WSS collisions, that's very easy to generate with gencols. Maybe someone has a stamp collection lying around already.
I've been curious for a while now about construction mechanisms using *WSS slow salvos -- though not curious enough yet to do the research myself, apparently. Seems as if slow LWSSes would probably be just about as effective as gliders for building self-constructing circuitry, and Geminoid construction-arm elbows can be programmed to produce *WSSes instead of gliders. It's several times more expensive, but it allows for four new construction directions.
dvgrn wrote: I've been curious for a while now about construction mechanisms using *WSS slow salvos [...]
I know I found a while back that LWSS slow salvos can move a blinker anywhere, but I can't find it anywhere but in a quote in the Accidental Discoveries thread.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Mr. Missed Her Posts:90 Joined:December 7th, 2016, 12:27 pm Location:Somewhere within [time in years since this was entered] light-years of you.
What's with the alternating shades of the posts? Sometimes, tables/lists do that so you can keep track of which item is which, but I don't see how it helps here. Besides, there's almost no difference in the colors.
There is life on Mars. We put it there with not-completely-sterilized rovers.
And, for that matter, the Moon, Jupiter, Titan, and 67P/Churyumov–Gerasimenko.
Mr. Missed Her
I only noticed when I tried to make the background of my profile photo the color of the posts. As you can see, the colors match up perfectly here, but they don't in the last post.
Mr. Missed Her
I might be able to, but I only know how to edit images through Paint, which is like crappy Photoshop that comes with windows.
Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X
Mr. Missed Her wrote: I might be able to, but I only know how to edit images through Paint, which is like crappy Photoshop that comes with windows.
Try gimp
Airy Clave White It Nay
Mr. Missed Her
Well, thanks.
BlinkerSpawn Posts:1906 Joined:November 8th, 2014, 8:48 pm Location:Getting a snacker from R-Bee's
gameoflifemaniac wrote: Does Nathaniel have access to private messages?
BlinkerSpawn wrote: On this note, how do I delete read private messages?
Click the checkbox to the right, then choose "Delete marked" at the bottom, from the dropdown where it normally says "Mark/Unmark as important".
Surprisingly well hidden, isn't it? It's a very clever design -- keeps you from deleting things accidentally (or on purpose).
gameoflifemaniac wrote: Does Nathaniel have access to private messages?
In theory, yes. PMs are stored as plain text in the database and while there is no direct ability within phpbb for the admin to read PMs, any person or program (such as backup software) with access to the database has access to the content of private messages. I see no reason to be concerned about this. |
01/10/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Gunter Schütz, Jülich University
The Fibonacci family of dynamical universality classes
We use the theory of nonlinear fluctuating hydrodynamics to study stochastic transport far from thermal equilibrium in terms of the dynamical structure function which is universal at low frequencies and for large times and which encodes whether transport is diffusive or anomalous. For generic one-dimensional systems we predict that transport of mass, energy and other locally conserved quantities is governed by mode-dependent dynamical universality classes with dynamical exponents $z$ which are Kepler ratios of neighboring Fibonacci numbers, starting with $z = 2$ (corresponding to a diffusive mode) or $z = 3/2$ (Kardar-Parisi-Zhang (KPZ) mode). If neither a diffusive nor a KPZ mode are present, all modes have as dynamical exponent the golden mean $z=(1+\sqrt 5)/2$. The universal scaling functions of the higher Fibonacci modes are Lévy distributions. The theoretical predictions are confirmed by Monte-Carlo simulations of a three-lane asymmetric simple exclusion process.
05/07/2019, 15:00 — 16:30 — Room P4.35, Mathematics Building Martin Evans, Edinburgh University
Generalisations to Multispecies (V)
Second class particle; multispecies exclusion process; hierarchical matrix solution and proof; queueing interpretation.
Continuation of Lecture 4.
04/07/2019, 15:00 — 16:30 — Room P4.35, Mathematics Building Martin Evans, Edinburgh University
Generalisations to Multispecies (IV)
Second class particle; multispecies exclusion process; hierarchical matrix solution and proof; queueing interpretation.
03/07/2019, 15:00 — 16:30 — Room P4.35, Mathematics Building Martin Evans, Edinburgh University
Phase Diagram (III)
Complex zeros of nonequilibrium partition function; open ASEP phase transitions; continuous and discontinuous transitions; coexistence line.
02/07/2019, 15:00 — 16:30 — Room P4.35, Mathematics Building Martin Evans, Edinburgh University
Matrix Product Solution (II)
Matrix product ansatz; proof of stationarity; computation of partition function $Z_L$; large $L$ asymptotics of $Z_L$; current and density profile; combinatorial approaches.
01/07/2019, 15:00 — 16:30 — Room P4.35, Mathematics Building Martin Evans, Edinburgh University
Open Boundary ASEP (I)
The asymmetric simple exclusion process (ASEP) has been studied in probability theory since Spitzer in 1970. Remarkably a version with open boundaries had already been introduced as a model for RNA translation in 1968. This “open ASEP” has since the 1990’s been widely studied in the theoretical physics community as a model of a nonequilibrium system, which sustains a stationary current. In these lectures I will introduce and motivate the model then present a construction — the matrix product ansatz — which yields the exact stationary state for all system sizes. I will derive the phase diagram and analyse the nonequilibrium phase transitions. Finally I will discuss how the approach generalises to multispecies systems.
In this first lecture I will introduce the motivations; correlation functions; mean-field theory and hydrodynamic limit; dynamical mean-field theory; domain wall theory.
28/06/2019, 15:00 — 16:00 — Room P3.31, Mathematics Building Joe Chen, Colgate University
Random walks, electric networks, moving particle lemma, and hydrodynamic limits
While the title of my talk is a riff on the famous monograph
Random walks and electric networks by Doyle and Snell, the contents of my talk are very much inspired by the book. I'll discuss how the concept of electrical resistance can be applied to the analysis of interacting particle systems on a weighted graph. I will start by summarizing the results of Caputo-Liggett-Richthammer, myself, and Hermon-Salez connecting the many-particle stochastic process to the one-particle random walk process on the level of Dirichlet forms. Then I will explain how to use this type of energy inequality to bound the cost of transporting particles by the effective resistance, and to perform coarse-graining on a class of state spaces which are bounded in the resistance metric. This new method plays a crucial role in the proofs of scaling limits of boundary-driven exclusion processes on the Sierpinski gasket.
28/06/2019, 14:00 — 15:00 — Room P3.31, Mathematics Building Gabriel Nahum, Instituto Superior Técnico
On the algebraic solvability of the MPA approach to the Multispecies SSEP
In this mini-course we will learn how to extend the MPA formulation to the multispecies case with the very simple SSEP dynamics. Due to the jump symmetries of each particle in the bulk, this formulation allows us to compute any physical quantity without resorting to the matrix representations. Nevertheless, their existence is still a necessary condition for the formulation to work. We will give an example of such matrices and focus on the exploitation of the induced algebra.
27/06/2019, 15:00 — 16:00 — Room P3.31, Mathematics Building Hugo Tavares, Faculdade de Ciências, Universidade de Lisboa
Least energy solutions of Hamiltonian elliptic systems with Neumann boundary conditions
In this talk, we will discuss existence, regularity, and qualitative properties of solutions to the Hamiltonian elliptic system $$ -\Delta u = |v|^{q-1} v\ \ \ \text{in} \ \Omega,\quad -\Delta v = |u|^{p-1} u\ \ \ \text{in} \ \Omega,\quad \partial_\nu u=\partial_\nu v=0\ \ \ \text{on} \ \partial\Omega,$$with $\Omega\subset \mathbb R^N$ bounded, both in the sublinear $pq< 1$ and superlinear $pq>1$ problems, in the subcritical regime. In balls and annuli, we show that least energy solutions are
not radial functions, but only partially symmetric (namely foliated Schwarz symmetric). A key element in the proof is a new $L^t$-norm-preserving transformation, which combines a suitable flipping with a decreasing rearrangement. This combination allows us to treat annular domains, sign-changing functions, and Neumann problems, which are nonstandard settings to use rearrangements and symmetrizations. Our theorems also apply to the scalar associated model, where our approach provides new results as well as alternative proofs of known facts.
27/06/2019, 14:00 — 15:00 — Room P3.31, Mathematics Building Renato De Paula, Instituto Superior Técnico
Matrix product ansatz for the totally asymmetric exclusion process
Generally, it is very difficult to compute nonequilibrium stationary states of a particle system. It turns out that, in some cases, you can find a solution with a quite interesting structure. The goal of this first part of the seminar is to present the structure of this solution — known as matrix product solution (or matrix product ansatz) — using the totally asymmetric exclusion process (TASEP) as a toy model.
18/06/2019, 13:30 — 14:30 — Room P4.35, Mathematics Building Pietro Caputo, Università Roma Tre
The spectral gap of the interchange process: a review
Aldous’ spectral gap conjecture asserted that on any graph the random walk process and the interchange process have the same spectral gap. In this talk I will review the work in collaboration with T. M. Liggett and T. Richthammer from 2009, in which we proved the conjecture by means of a recursive strategy. The main idea, inspired by electric network reduction, was to reduce the problem to the proof of a new comparison inequality between certain weighted graphs, which we referred to as the
octopus inequality. The proof of the latter inequality is based on suitable closed decompositions of the associated matrices indexed by permutations. I will first survey the problem, with background and consequences of the result, and then discuss the recursive approach based on network reduction together with some sketch of the proof. I will also present a more general, yet unproven conjecture.
17/06/2019, 13:30 — 14:30 — Room P4.35, Mathematics Building Pietro Caputo, Università Roma Tre
Mixing time of the adjacent walk on the simplex
By viewing the $N$-simplex as the set of positions of $N-1$ ordered particles on the unit interval, the adjacent walk is the continuous time Markov chain obtained by updating independently at rate $1$ the position of each particle with a sample from the uniform distribution over the interval given by the two particles adjacent to it. We determine its spectral gap and mixing time and show that the total variation distance to the uniform distribution displays a cutoff phenomenon. The results are extended to a family of log-concave distributions obtained by replacing the uniform sampling by a symmetric Beta distribution. This is joint work with Cyril Labbé and Hubert Lacoin.
04/06/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Brian Hall, University of Notre Dame
Large-$N$ Segal-Bargmann transform with application to random matrices
I will describe the Segal-Bargmann transform for compact Lie groups, with emphasis on the case of the unitary group $U(N)$. In this case, the transform is a unitary map from the space of $L^2$ functions on $U(N)$ to the space of $L^2$ holomorphic functions on the "complexified" group $\operatorname{GL}(N;\mathbb{C})$. I will then discuss what happens in the limit as $N$ tends to infinity. Finally, I will describe an application to the eigenvalues of random matrices in $\operatorname{GL}(N;\mathbb{C})$. The talk will be self-contained and have lots of pictures.
30/05/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Simão Correia, Faculdade de Ciências, Universidade de Lisboa
Critical well-posedness for the modified Korteweg-de Vries equation and self-similar dynamics
We consider the modified Korteweg-de Vries equation over $\mathbb{R}$ $$ u_t + u_{xxx}=\pm(u^3)_x. $$ This equation arises, for example, in the theory of water waves and vortex filaments in fluid dynamics. A particular class of solutions to (mKdV) are those which do not change under scaling transformations, the so-called
self-similar solutions. Self-similar solutions blow up when $t\to 0$ and determine the asymptotic behaviour of the evolution problem at $t=+\infty$. The known local well-posedness results for the (mKdV) fail when one considers critical spaces, where the norm is scaling-invariant. This means that self-similar solutions lie outside of the scope of these results. Consequently, the dynamics of (mKdV) around self-similar solutions are currently unknown. In this talk, we will show existence and uniqueness of solutions to the (mKdV) lying on a critical space which includes both regular and self-similar solutions. Afterwards, we present several results regarding global existence, asymptotic behaviour at $t=+\infty$ and blow-up phenomena at $t=0$. This is joint work with Raphaël Côte and Luis Vega.
28/05/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Diogo Arsénio, New York University Abu Dhabi
Recent progress on the mathematical theory of plasmas
The incompressible Navier–Stokes–Maxwell system is a classical model describing the evolution of a plasma (i.e. an electrically conducting fluid). Although small smooth solutions to this system (in the spirit of Fujita–Kato) are known to exist, the existence of large weak solutions (in the spirit of Leray) in the energy space remains unknown. This defect can be attributed to the difficulty of coupling the Navier–Stokes equations with a hyperbolic system. In this talk, we will describe recent results aiming at building solutions to Navier–Stokes–Maxwell systems in large functional spaces. In particular, we will show, for any initial data with finite energy, how a smallness condition on the electromagnetic field alone is sufficient to grant the existence of global solutions.
21/05/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Cédric Bernardin, University of Nice Sophia-Antipolis
Microscopic models for multicomponents SPDE’s with a KPZ flavor
The usual KPZ equation is the scaling limit of weakly asymmetric microscopic models with one conserved quantity. In this talk I will present some weakly asymmetric microscopic models with several conserved quantities for which it is possible to derive macroscopic SPDEs with a KPZ flavor.
Joint work with R. Ahmed, T. Funaki, P. Gonçalves, S. Sethuraman and M. Simon.
14/05/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Conrado Costa, Leiden University
Random walks in cooling random environments: stable and unstable behaviors under regular diverging cooling maps
Random Walks in Cooling Random Environments (RWCRE), a model introduced by L. Avena and F. den Hollander, is a dynamic version of Random Walk in Random Environment (RWRE) in which the environment is fully resampled along a sequence of deterministic times, called refreshing times. In this talk I will consider the effects of the resampling map on the fluctuations associated with the annealed law and on the Large Deviation principle under the quenched measure. I conclude by clarifying the paradox of different fluctuations and identical LDPs for RWCRE and RWRE. This is joint work with L. Avena, Y. Chino, and F. den Hollander.
16/04/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Phillipo Lappicy, ICMC, Universidade de São Paulo e CAMGSD-IST, Universidade de Lisboa
A nonautonomous Chafee-Infante attractor: a connection matrix approach
The goal of this talk is to present the construction of the global attractor for a genuine nonautonomous variant of the Chafee-Infante parabolic equation in one spatial dimension. In particular, the attractor consists of asymptotic profiles (which correspond to the equilibria in the autonomous counterpart) and heteroclinic solutions between those profiles. We prove the existence of heteroclinic connections between periodic and almost periodic asymptotic profiles, yielding the same connection structure as the well-known Chafee-Infante attractor. This work is still an ongoing project with Alexandre N. Carvalho (ICMC - Universidade de São Paulo).
02/04/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Clement Erignoux, Università Roma Tre
Hydrodynamics for a non-ergodic facilitated exclusion process
The Entropy Method introduced by Guo, Papanicolaou and Varadhan (1988) has been used with great success to derive the hydrodynamic scaling behavior of a wide range of conserved lattice gases (CLG). It requires estimating the entropy of the measure of the studied process w.r.t. some good, usually product, measure. In this talk, I will present an exclusion model inspired by a model introduced by Gonçalves, Landim, Toninelli (2008), with a dynamical constraint, where a particle at site $x$ can only jump to $x+\delta$ iff site $x-\delta$ is occupied as well. I will give some insight on the different microscopic and macroscopic situations that can occur for this model, and briefly describe the steps to derive the hydrodynamic limit for this model by adapting the Entropy Method to non-product reference measures. I will also expand on the challenges and questions raised by this model and on some of its nice mapping features. Joint work with O. Blondel, M. Sasada, and M. Simon.
26/03/2019, 15:00 — 16:00 — Room P3.10, Mathematics Building Ofer Busani, University of Bristol
Transversal fluctuations in last passage percolation
In Last Passage Percolation (LPP) we assign i.i.d. Exponential weights to the lattice points of the first quadrant of $\mathbb{Z}^2$. We then look for the up-right path going from $(0,0)$ to $(n,n)$ that collects the most weight along the way. One is then often interested in questions regarding (1) the total weight collected along the maximal path, and (2) the behavior of the maximal path. It is known that this path's fluctuations around the diagonal are of order $n^{2/3}$. The proof, however, is only given in the context of integrable probability, where one relies on some algebraic properties satisfied by the Exponential distribution. We give a probabilistic proof for this phenomenon, where the main novelty is the probabilistic proof of the lower bound. Joint work with Marton Balazs and Timo Seppalainen
Yesterday in the High Dimensional Probability class, I encountered the proof of Grothendieck’s inequality, which relates the solution of an integer linear program (ILP) to that of its corresponding real linear program (LP).
The inequality states that
Grothendieck’s Inequality. There exists a finite constant \(K>0\) such that for every \(l,m,n\in \mathbb N\) and every matrix \(M=(M_{ij})\in \mathbb F^{m\times n}\) (where \(\mathbb F\) can be \(\mathbb R\) or \(\mathbb C\)), \[ \max\limits_{\|x_i\|=\|y_j\|=1 } \left|\sum\limits_{i=1}^m\sum\limits_{j=1}^n M_{ij}\langle x_i,y_j\rangle\right| \le K\max\limits_{|\varepsilon_i|=|\delta_j|=1} \left|\sum\limits_{i=1}^m\sum\limits_{j=1}^n M_{ij}\varepsilon_i \delta_j\right| \] where the inner product is taken in \(\mathbb F^l\).
The smallest constant \(K_G\) that satisfies the inequality is called
Grothendieck’s constant. From the (beautiful) proof of Krivine, \(K_G\le \frac{\pi}{ 2\ln(1+\sqrt{2}) }\). What is more surprising is that this is not the sharpest constant, even though the proof uses a very sharp argument (often referred to as the kernel trick in machine learning).
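Krivine's bound is easy to evaluate numerically (a trivial check, just to see the magnitude of the constant):

```python
import math

# Krivine's upper bound on the Grothendieck constant: pi / (2 ln(1 + sqrt(2)))
K = math.pi / (2 * math.log(1 + math.sqrt(2)))
print(round(K, 4))  # 1.7822
```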
Kiran, T and Rajan, Sundar B (2005)
Optimal STBCs from Codes Over Galois Rings. In: IEEE International Conference on Personal Wireless Communications, 2005. ICPWC 2005, 23-25 January, New Delhi, India, 120 -124.
Abstract
A Space-Time Block Code (STBC) $C_{ST}$ is a finite collection of $n_t\times l$ complex matrices. If $S$ is a complex signal set, then $C_{ST}$ is said to be completely over $S$ if all the entries of each of the codeword matrices are restricted to $S$. The transmit diversity gain of such a code is equal to the minimum of the ranks of the difference matrices $(X - X')$ for any $X \neq X'\in C_{ST}$, and the rate is $R=\frac{\log_{|S|}|C_{ST}|}{l}$ complex symbols per channel use, where $|C_{ST}|$ denotes the cardinality of $C_{ST}$. For an STBC completely over $S$ achieving transmit diversity gain equal to $d$, the rate is upper-bounded as $R\leq n_t - d + 1$. An STBC which achieves equality in this tradeoff is said to be optimal. A Rank-Distance (RD) code $C_{FF}$ is a linear code over a finite field $F_q$ where each codeword is an $n_t\times l$ matrix over $F_q$. RD codes have found applications as STBCs by using suitable rank-preserving maps from $F_p$ to $S$. In this paper, we generalize these rank-preserving maps, leading to generalized constructions of STBCs from codes over the Galois ring $GR(p^a,k)$. To be precise, for any given value of $d$, we construct $n_t\times l$ matrices over $GR(p^a,k)$ and use a rank-preserving map that yields optimal STBCs with transmit diversity gain equal to $d$. The Galois ring includes the finite field $F_{p^k}$ when $a=1$ and the integer ring $Z_{p^a}$ when $k=1$. Our construction includes as a special case the earlier construction by Lusina et al., which is applicable only for RD codes over $F_p$ ($p=4s+1$) and transmit diversity gain $d=n_t$.
Item Type: Conference Paper
Additional Information: ©1990 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Department/Centre: Division of Electrical Sciences > Electrical Communication Engineering
Depositing User: HS Usha
Date Deposited: 25 Nov 2005
Last Modified: 19 Sep 2010 04:21
URI: http://eprints.iisc.ac.in/id/eprint/4088
WHY?
Autoregressive models have been the dominant approach to density estimation. Meanwhile, various non-linear transformation techniques have made it possible to track the density after a change of variables. Transformation Autoregressive Networks (TAN) combine non-linear transformations with an autoregressive model to capture more complicated data densities.
WHAT?
TAN is composed of two modules: an autoregressive model and a non-linear transformation. The autoregressive model represents the probability of the data as a product of sequential conditionals.
p(x_1,...,x_d) = \prod^d_{i=1}p(x_i|x_{i-1},...,x_1)\\ p(x_i|x_{i-1},...,x_1) = p(x_i|MM(\theta(x_{i-1},...,x_1)))\\ \theta(x_{i-1},...,x_1) = f(h_i)\\ h_i = g_i(x_{i-1},...,x_1)
TAN proposes two kinds of autoregressive models. The first is the Linear Autoregressive Model (LAM), which has a different weight matrix per conditional. The second is the Recurrent Autoregressive Model (RAM). The two models trade off flexibility against the number of parameters.
g_i(x_{i-1},...,x_1) = W^{(i)}x_{<i} + b\\h_i = g(x_{i-1}, g(x_{i-2},...,x_1)) = g(x_{i-1}, h_{i-1})
For the probability of the transformed variables to remain tractable, the transformation must be invertible and the determinant of its Jacobian must be easy to compute. The paper suggests several transformations that meet these criteria.
p(x_1, ..., x_d) = \left|\det \frac{dq}{dx}\right| \prod_{i=1}^d p(z_i|z_{i-1},...,z_1)
First is the Linear Transformation. The determinant of the matrix $A$ can be computed via its LU decomposition.
z = Ax + b\\ A = LU\\ \det\frac{dz}{dx} = \prod^d_{i=1} U_{ii}
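A minimal numerical sketch of this fact (the matrix sizes and values here are illustrative assumptions): if $L$ is unit lower triangular and $U$ is upper triangular, then $\det(LU)$ is just the product of $U$'s diagonal, so the log-determinant costs $O(d)$ instead of $O(d^3)$:

```python
import numpy as np

# Parameterize A directly as A = LU with L unit lower triangular,
# as in the linear transformation described above.
rng = np.random.default_rng(0)
d = 4
L = np.tril(rng.standard_normal((d, d)), -1) + np.eye(d)  # unit diagonal
U = np.triu(rng.standard_normal((d, d)))
A = L @ U

det_via_diag = np.prod(np.diag(U))      # det(L) = 1, so det(A) = prod(U_ii)
print(np.isclose(det_via_diag, np.linalg.det(A)))  # True
```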
Second is the Recurrent Transformation, where $r$ is a ReLU unit and $r_{\alpha}$ is a leaky ReLU unit.
z_i = r_{\alpha}(yx_i + w^Ts_{i-1} + b),\quad s_i = r(ux_i + v^Ts_{i-1} + a)\\ r_{\alpha}(t) = \mathbb{I}\{t<0\}\alpha t + \mathbb{I}\{t \geq 0\}t\\ x_i = \frac{1}{y}(r^{-1}_{\alpha}(z_i) - w^T s_{i-1} - b)\\ s_i = r(ux_i + v^Ts_{i-1} + a)\\ \det\frac{dz}{dx} = y^d\prod^d_{i=1}r'_{\alpha}(yx_i + w^Ts_{i-1} + b)\\ r'_{\alpha}(t) = \mathbb{I}\{t>0\} + \alpha\mathbb{I}\{t < 0\}
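A small numpy sketch of these equations (dimensions and parameter values are toy assumptions, not from the paper), showing that the transformation is exactly invertible by replaying the hidden state $s_i$ during inversion:

```python
import numpy as np

def leaky_relu(t, alpha=0.1):
    return np.where(t < 0, alpha * t, t)

def leaky_relu_inv(z, alpha=0.1):
    # Leaky ReLU is strictly monotone for alpha > 0, hence invertible.
    return np.where(z < 0, z / alpha, z)

rng = np.random.default_rng(1)
d, k, alpha = 5, 3, 0.1            # sequence length, hidden size (toy values)
y, b, a = 1.5, 0.2, -0.1           # y must be nonzero for invertibility
w, u = rng.standard_normal(k), rng.standard_normal(k)
v = rng.standard_normal((k, k))

def forward(x):
    s, z = np.zeros(k), np.zeros(d)
    for i in range(d):
        z[i] = leaky_relu(y * x[i] + w @ s + b, alpha)  # uses s_{i-1}
        s = np.maximum(u * x[i] + v @ s + a, 0.0)       # ReLU state update
    return z

def inverse(z):
    s, x = np.zeros(k), np.zeros(d)
    for i in range(d):
        x[i] = (leaky_relu_inv(z[i], alpha) - w @ s - b) / y
        s = np.maximum(u * x[i] + v @ s + a, 0.0)       # replay s_i
    return x

x = rng.standard_normal(d)
print(np.allclose(inverse(forward(x)), x))  # True
```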
Third is the Recurrent Shift Transformation, which performs an additive shift based on prior dimensions. The determinant of the Jacobian is always 1 for this transformation.
z_i = x_i + m(s_{i-1})\\s_i = g(x_i, s_{i-1})
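Because the Jacobian of this shift is lower triangular with unit diagonal, $m$ and $g$ can be arbitrary functions and the determinant is still 1. A toy sketch (the particular choices of $m$ and $g$ below are hypothetical):

```python
import numpy as np

def m(s):            # arbitrary nonlinear shift of the next coordinate
    return np.tanh(s)

def g(x, s):         # arbitrary recurrent state update
    return 0.5 * s + x

def shift_forward(x):
    s, z = 0.0, np.zeros_like(x)
    for i in range(len(x)):
        z[i] = x[i] + m(s)   # z_i = x_i + m(s_{i-1})
        s = g(x[i], s)
    return z

def shift_inverse(z):
    s, x = 0.0, np.zeros_like(z)
    for i in range(len(z)):
        x[i] = z[i] - m(s)   # subtract the same shift, replaying the state
        s = g(x[i], s)
    return x

x = np.array([0.3, -1.2, 2.0])
print(np.allclose(shift_inverse(shift_forward(x)), x))  # True
```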
These various transformations can be composed. By combining transformations of variables with rich autoregressive models, we can estimate the density of the data.
-\log p(x_1,...,x_d) = -\sum_{t=1}^T \log\left|\det\frac{d q^{(t)}}{d q^{(t-1)}}\right| - \sum_{i=1}^d \log p(q_i(x)|h_i)
So?
Compared to MADE, Real NVP, and MAF, TAN achieved the best log-likelihood not only on MNIST but also on various real-world datasets.
Critique
A clever combination of autoregressive models and flows.
Let $(M, \omega)$ be a symplectic manifold and $G$ be a compact Lie group. Suppose we have a Hamiltonian $G$-action on $M$, with moment map $\mu: M \to {\mathfrak g}^*$.
We assume that the moment map is proper in case $M$ is noncompact.
The question is: for any loop $\gamma: S^1 \to G$ and a point $x\in M$, is the loop $t\mapsto \gamma(t) x$ contractible in $M$? Here we assume neither $G$ nor $M$ is simply connected.
We may assume that $\gamma$ is actually a 1-parameter subgroup of $G$, generated by a vector $\xi \in {\mathfrak g}$. In the case where $M$ is compact, we restrict the moment map to this subgroup, which gives a real-valued function $\mu_\gamma$. The gradient flow of this function should then push the loop to a critical point, which is a fixed point of this subgroup. This shows that the loop is contractible.
Now if $M$ is noncompact, the gradient flow doesn't necessarily converge to a critical point (could escape to $\pm \infty$). Note that the real valued function $\mu_\gamma$ is not necessarily proper. So the above method fails. But I still guess that the loop should be contractible.
Is there any proof or counter-example? Or should we add some conditions to guarantee this? |
I am wondering whether there is no injective homomorphism from $\mathbb{Z}\times\mathbb{Z}$ to $\mathbb{Z}$, and similarly whether there is no injective homomorphism from $\mathbb{Z}*\mathbb{Z}$ to $\mathbb{Z}$.
I got this from Hatcher's Algebraic Topology, Section 1.1, Exercise 16, which asks to show there are no retractions $r:S^1\times D^2\to S^1\times S^1$. We have $\pi_1(S^1\times D^2)\simeq\mathbb{Z}$ and $\pi_1(S^1\times S^1)\simeq\mathbb{Z}\times\mathbb{Z}$. I am trying to reach the conclusion that $\mathbb{Z}\times\mathbb{Z}$ does not inject into $\mathbb{Z}$, since a retraction of a space onto a subspace induces an injective homomorphism on fundamental groups; that way I can get a contradiction. But is there really no injection between $\mathbb{Z}\times\mathbb{Z}$ and $\mathbb{Z}$?
As far as I know, cartesian product of countable set is countable, so why could there be no injection between $\mathbb{Z}\times\mathbb{Z}$ and $\mathbb{Z}$?
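Indeed, a set-level injection $\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}$ certainly exists — for instance, the Cantor pairing function composed with a fold of $\mathbb{Z}$ onto $\mathbb{N}$ — which is a quick sketch of why the obstruction cannot be about cardinality:

```python
def z_to_n(z):
    # Fold Z onto N injectively: 0, -1, 1, -2, 2, ... -> 0, 2, 1, 4, 3, ...
    return 2 * z - 1 if z > 0 else -2 * z

def pair(a, b):
    # Cantor pairing on N, composed with the fold: an injection Z x Z -> Z.
    m, n = z_to_n(a), z_to_n(b)
    return (m + n) * (m + n + 1) // 2 + n

pairs = [(a, b) for a in range(-10, 11) for b in range(-10, 11)]
images = {pair(a, b) for a, b in pairs}
print(len(images) == len(pairs))  # True: no collisions on this range
```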
Also, I am not sure whether $\mathbb{Z}*\mathbb{Z}$ is countably or uncountably infinite, if it is countably infinite, why could there also be no injection?
Does anybody have a good explanation for this? Any hints would also be appreciated. Thanks.
Here is a quick example of the kinds of problems you can tackle with cvxstoc (see the gentle walkthrough for more examples).
Suppose we are interested in a stochastic variation on a portfolio optimization problem, i.e., we wish to allocate our wealth across \(n\) assets such that the returns on our investments are (on average) maximized, so long as we keep the probability of a (catastrophic) loss as low as possible; we model our investment choices as a vector \(x \in {\bf R}^n_+\) (we require that the components of \(x\) sum to one), the (uncertain) price change of each asset as a vector \(p \in {\bf R}^n \sim \textrm{Normal}(\mu, \Sigma)\) for simplicity, and our loss threshold and tolerance as \(\alpha\) and \(\beta\), respectively (typically, \(\alpha\) is negative and \(\beta\) is small, e.g., 0.05).
These considerations lead us to the following (convex) optimization problem:
\[ \begin{array}{ll} \text{maximize} & \mathbf{E}[x^T p] \\ \text{subject to} & \mathbf{Prob}(x^T p \le \alpha) \le \beta \\ & x \ge 0, \quad \mathbf{1}^T x = 1 \end{array} \tag{1} \]
with variable \(x\).
We can directly express (1) using cvxstoc as follows:
from cvxstoc import NormalRandomVariable, expectation, prob
from cvxpy import Maximize, Problem
from cvxpy.expressions.variables import Variable
import numpy

# Create problem data.
n = 10
mu = numpy.zeros(n)
Sigma = 0.1*numpy.eye(n)
p = NormalRandomVariable(mu, Sigma)
alpha = -1
beta = 0.05

# Create and solve stochastic optimization problem.
x = Variable(n)
p = Problem(Maximize(expectation(x.T*p, num_samples=100)),
            [x >= 0,
             x.T*numpy.ones(n) == 1,
             prob(x.T*p <= alpha, num_samples=100) <= beta])
p.solve()
On Mac OS X: |