Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: this action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$, where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something. he based much of his success on principles like this, I can't believe I've forgotten it. it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc, but this is the biggest source of sense of purpose in the non-scholar. yeah there is this thing called the internet, and well yes there are better books than others you can study from, provided they are not stolen from you by drug dealers. you should buy a textbook that they base university courses on, if you can save for one. I was working from "Problems in Analytic Number Theory", Second Edition, by M. Ram Murty, prior to the idiots robbing me and taking that with them; it was a fantastic book to self-learn from, one of the best I've had actually. Yeah I wasn't happy about it either, it was more than $200 USD actually. well look, if you want my honest opinion, self study doesn't exist; you are still being taught something by Euclid if you read his works, despite him having died a few thousand years ago, but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no, you have failed in life and should give up entirely.
but that is a very good book regardless of you attending Princeton university or not. yeah me neither, you are the only one I remember talking to on it, but I have been well and truly banned from this IP address for that forum now, which, as you might have guessed, was for being too polite and sensitive to delicate religious sensibilities. but no it's not my forum, I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age, which according to your profile at the time you were - i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it - well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old. at the time i think i was still holding on to some sort of hopes of a career in non-stupidity-related fields, which was at some point abandoned. @TedShifrin thanks for that; in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a clearer way of asking. Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, and about half second-year students who'd taken various first-year calculus paths in college. long time ago tho, even the credits have expired, not the student debt though, so i think they are trying to hint i should go back and start from first year and double said debt, but im a terrible student, it really wasn't worthwhile the first time round considering my rate of attendance then, and how unlikely that would be different going back now. @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus, i loved it back then and for some reason not quite so when i began focusing on prime numbers. What do you all think of this theorem: the number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd, and $24$ times the sum of odd divisors of $n$ if $n$ is even. A proof of this uses (basically) Fourier analysis, even though it looks like a rather innocuous albeit surprising result in pure number theory. @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as, i have simply told myself that is what i am studying, it really started with me horsing around not even knowing what category of math you call it.
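That four-square count is Jacobi's four-square theorem, and it's easy to sanity-check by brute force. A small Python script (all names mine) verifying it for n up to 50:

from itertools import product
from math import isqrt

def r4(n):
    # number of (a, b, c, d) in Z^4 with a^2 + b^2 + c^2 + d^2 = n (signs and order count)
    m = isqrt(n)
    return sum(1 for t in product(range(-m, m + 1), repeat=4)
               if sum(x * x for x in t) == n)

def jacobi(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 8 * sum(divs) if n % 2 else 24 * sum(d for d in divs if d % 2)

assert all(r4(n) == jacobi(n) for n in range(1, 51))

For instance r4(2) = 24: six ways to place the two nonzero squares, times four sign choices.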
actually, ill show you the exact subject you and i discussed on mmf, that reminds me. you were actually right, i don't know if i would have taken it well at the time tho. yeah looks like i deleted the stack exchange question on it anyway. i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was, that's all i remember lol @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, i agree that pulling up said past conversations isn't productive. absolutely, me too, but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned, i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about @Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by $(1, 2|0, 1)$ and $(0, -1|1, 0)$, then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$; $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and at $-1$ have no constant coefficient is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1$, $\Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane. Paste those two lines together, and paste half of the semicircle (from $-1$ to $i$) to the other half (from $i$ to $1$) by folding along $i$. Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing. I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know. Cusp forms vanish at the cusps (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps. So it sort of makes sense. Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues like the maximum value theorem in complex analysis @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable; you shouldn't be so hard on your character, you are very humble considering your calibre. You probably don't realise how low the bar drops where integrity of character is concerned; trust me, you wouldn't have come as far as you clearly have if you were a know-it-all. it was actually the best thing for me, at the age of 30, to have met a 10 year old that was well beyond what ill ever realistically become as far as math is concerned. someone like you is going to be accused of arrogance simply because you intimidate many; ignore the good majority of that, mate
I'm not very familiar with (even simple examples of) orbifolds, so my first question is: Let $C_2$ be $\mathbb{C}$ with one cone singularity at 0 of index 2. What is the fundamental group of $C_2$ minus $k$ points? My naive answer is: take $\mathbb{C}^*$ minus the same $k$ points. Its fundamental group is freely generated by the $k+1$ loops around the punctures. Now decide that you don't have a "hole" at 0 anymore, but a cone singularity, meaning that the generator corresponding to a loop around 0 is now of order 2. Then I would say that the fundamental group of $C_2$ minus $k$ points is $\langle a_0,\dots,a_k \mid a_0^2=1\rangle$, i.e. $\mathbb{Z}_2\ltimes F_{2k}$, where $F_{2k}$ is generated by $\{a_i,\ a_0a_ia_0 : i\geq 1\}$. Now recall the following construction: take the pure braid group $P_n$ with its standard generators $x_{i,j}$, $1\leq i < j\leq n$, given by taking the $j$th strand, letting it go behind all the other strands, loop around the $i$th one, and go back. Then it's quite easy to see that the subgroup generated by the $x_{i,n}$ is free: it is the subgroup of pure braids for which all but the last strand are fixed straight lines. In fact, it leads to a semidirect product decomposition $P_n=P_{n-1}\ltimes F_{n-1}$. This decomposition is actually a so-called "almost direct" product, which is quite an important fact. This construction has a nice geometric interpretation: let $X_n$ be the configuration space of $n$ points in $\mathbb{C}$, and recall that $P_n=\pi_1(X_n)$. Then the map $X_n \rightarrow X_{n-1}$ which forgets the last coordinate is a locally trivial fibration with fiber $\mathbb{C}$ minus $n-1$ points. It induces a (split) short exact sequence of fundamental groups $$1\rightarrow F_{n-1} \rightarrow P_n \rightarrow P_{n-1}\rightarrow 1$$ Let's try to do something similar with the "orbifold braid group" of $C_2$, that is, the fundamental group $P_n(C_2)$ of $O_n=\{(z_1,\dots,z_n) \in C_2^n : z_i \neq z_j\}$. It seems to me that $P_n(C_2)=P_{n+1}/ \langle x_{1,i}^2=1,\ i=2, \dots, n+1 \rangle$. The above construction seems to work "at the algebraic level": let $G_n$ be the subgroup of $P_n(C_2)$ generated by (the images of) the $x_{i,n+1}$. What is stated in this paper (in a slightly different form) is that $P_n(C_2)=P_{n-1}(C_2) \ltimes G_n$, and that it is an almost direct product too. But $G_n$ satisfies some relations: for example, $x_{i,n+1}$ and $x_{1,n+1}x_{i,n+1}x_{1,n+1}$ commute for a given $i$; hence it is not isomorphic to the fundamental group of $C_2$ minus $n-1$ points (at least if my first naive try is not wrong). While this construction strongly resembles, and shares many algebraic properties with, the construction for $P_n$, it does not seem to come from a natural geometric construction. So my real question is: Am I wrong somewhere? Is there a natural interpretation of $G_n$? Edit: Here is roughly what happens: assuming that $n=2$ for the sake of simplicity, it doesn't make sense to "freeze" the first strand (and its negative) and to make the second one loop around, because the following relation holds: pushing the red loop (seen as a loop in the 2-punctured plane) to the bottom plane, we see that it has to be identified with its conjugate by a loop around the two strands at once, i.e. by the product of the generators of $F_2$. Therefore, this product has to be central, leading to the relation holding in $G_n$ above. So one can ask: Is there a topological space modelled on this situation, i.e. which looks like the "complement in $\mathbb{C}\times[0,1]$ of two strands, modulo homotopy"? Or at least, is there a way to prove that there are no relations other than the one declaring that the big loop is central?
Swapping two axes is the same as reversing an axis. \begin{align}\large {\left(*A\right)}_{{\mu }_1\dots {\mu }_{n-p}}=\frac{1}{p!}{\epsilon }^{{\nu }_1\dots {\nu }_p}_{\ \ \ \ \ \ \ \ \ \ \ \ \ \ {\mu }_1\dots {\mu }_{n-p}}A_{{\nu }_1\dots {\nu }_p} & \phantom {10000}(1) \\ \end{align} which maps ##A## to "##A## dual". We have also used the Levi-Civita tensor which is, as the name suggests, a tensor, not a mere number, defined as (Carroll's 2.69) \begin{align}\large {\epsilon }_{{\mu }_1{\mu }_2\dots {\mu }_n}=\sqrt{\left|g\right|}{\widehat{\epsilon }}_{{\mu }_1{\mu }_2\dots {\mu }_n} & \phantom {10000}(2) \\ \end{align} where ##\widehat{\epsilon }## is the Levi-Civita number and ##g## (also not a tensor) is the determinant of the metric. Then at his 2.84 Carroll writes: "In three-dimensional Euclidean space the Hodge dual of the wedge product of two 1-forms gives another 1-form: \begin{align} *{\left(U\wedge V\right)}_i={\epsilon }^{\ \ jk}_iU_jV_k & \phantom {10000}(7) \\ \end{align} (All of the prefactors cancel.) Since 1-forms in Euclidean space are just like vectors, we have a map from two vectors to a single vector. You should convince yourself that this is just the conventional cross product, and that the appearance of the Levi-Civita tensor explains why the cross product changes sign under parity (interchange of two coordinates or equivalently basis vectors.)" Proving (7) itself was quite hard, and the first step was to prove that the Levi-Civita tensor is completely antisymmetric even when some indices are up and some are down. Carroll had not mentioned this; he probably thought it was obvious. It was vital to the proof, which followed with some tricky index swapping. The prefactors cancelling was easy. The next part, showing that (7) is the same as the cross product, was very easy, including showing the dependence on the parity of the coordinate system, which involves a lady flying over the north pole (diagram above) and the vexed question of fingering or screwing. Read all four pages and 25 beautifully numbered equations in Commentary 2.9 Hodge star operator - in Euclidean space.pdf
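Here is a quick numerical check of (7) in flat Euclidean space, where ##\sqrt{\left|g\right|}=1## and indices can be raised and lowered freely: a small Python/NumPy sketch (variable names are mine), confirming that the Hodge dual of the wedge product reproduces the conventional cross product.

import numpy as np

# Levi-Civita symbol in three dimensions
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

U = np.array([1.0, 2.0, 3.0])
V = np.array([-2.0, 0.5, 4.0])

# (*(U wedge V))_i = eps_ijk U_j V_k, which should equal the ordinary cross product
dual = np.einsum('ijk,j,k->i', eps, U, V)
assert np.allclose(dual, np.cross(U, V))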
WHY? VAEs can learn useful representations, while GANs can sample sharp images. WHAT? The Introspective Variational Autoencoder (IVAE) combines the advantages of VAE and GAN into a model that learns a useful representation and outputs sharp images. IVAE uses the encoder to introspectively assess generated samples against the training data, acting as a discriminator.

$$L_{AE} = -\mathbb{E}_{q_{\phi}(z|x)}\log p_{\theta}(x|z) = \frac{1}{2}\sum_{i=1}^N\|x_{r,i}-x_i\|^2_F$$
$$L_{REG} = D_{KL}(q_{\phi}(z|x)\,\|\,p(z)) = -\frac{1}{2}\sum_{i=1}^N(1+\log\sigma_i^2 - \mu_i^2 - \sigma_i^2)$$
$$L_E(x,z) = E(x) + [m-E(G(z))]^+ + \beta L_{AE} = L_{REG}(Enc(x)) + \alpha\sum_{s=r,p}[m-L_{REG}(Enc(ng(x_s)))]^+ + \beta L_{AE}(x, x_r)$$
$$L_G(z) = E(G(z)) + \beta L_{AE} = \alpha \sum_{s=r,p} L_{REG}(Enc(x_s)) + \beta L_{AE}(x, x_r)$$

(Here $ng(\cdot)$ denotes a stop-gradient and $[\cdot]^+ = \max(0,\cdot)$.) The training algorithm alternates these encoder and generator updates. So? IVAE achieved realistic-quality reconstructions and samples on CelebA, CelebA-HQ, and LSUN Bedroom. Also, the representations learned by IVAE showed a meaningful latent manifold. Critic Amazing image quality!
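For concreteness, here is a rough PyTorch-style sketch of the two losses as I read them; the module names (enc, dec), the hyperparameters (m, alpha, beta), and the use of mean squared error are my assumptions, and ng(.) is rendered as .detach():

import torch
import torch.nn.functional as F

def kl_reg(mu, logvar):
    # L_REG = KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims, averaged over batch
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

def introvae_losses(enc, dec, x, z_p, m=10.0, alpha=0.25, beta=0.5):
    # sketch of one IVAE step: enc returns (mu, logvar), dec maps z -> x
    mu, logvar = enc(x)
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization
    x_r, x_p = dec(z), dec(z_p)                           # reconstruction, prior sample
    l_ae = F.mse_loss(x_r, x)

    # encoder: low KL on real data, KL pushed above margin m on (detached) fakes
    l_enc = kl_reg(mu, logvar) + beta * l_ae
    for x_s in (x_r.detach(), x_p.detach()):
        l_enc = l_enc + alpha * torch.clamp(m - kl_reg(*enc(x_s)), min=0)

    # generator: make the encoder assign low KL to generated samples
    l_gen = beta * l_ae
    for x_s in (x_r, x_p):
        l_gen = l_gen + alpha * kl_reg(*enc(x_s))
    return l_enc, l_gen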
WHY? Choosing an appropriate prior is important for a VAE. This paper suggests a two-layered VAE with the flexible VampPrior. WHAT? The original variational lower bound of the VAE can be decomposed as follows:

$$\mathbb{E}_{x\sim q(x)}[\ln p(x)] \geq \mathbb{E}_{x\sim q(x)}\big[\mathbb{E}_{q_{\phi}(z|x)}[\ln p_{\theta}(x|z)+\ln p_{\lambda}(z) - \ln q_{\phi}(z|x)]\big] \triangleq \mathcal{L}(\phi, \theta, \lambda)$$
$$= \mathbb{E}_{x \sim q(x)}\big[\mathbb{E}_{q_{\phi}(z|x)}[\ln p_{\theta}(x|z)]\big] + \mathbb{E}_{x\sim q(x)}\big[\mathbb{H}[q_{\phi}(z|x)]\big] - \mathbb{E}_{z\sim q(z)}[-\ln p_{\lambda}(z)]$$

The first component is the negative reconstruction error, the second component is the expectation of the entropy of the variational posterior, and the last component is (minus) the cross-entropy between the aggregated posterior and the prior. Usually the prior is a simple distribution such as a standard Gaussian, but the prior that optimizes the ELBO can be found explicitly: it is the aggregated posterior.

$$\max_{p_{\lambda}(z)} \; -\mathbb{E}_{z \sim q(z)}[-\ln p_{\lambda}(z)] + \beta \Big(\int p_{\lambda}(z)\,dz -1\Big) \quad\Longrightarrow\quad p_{\lambda}^*(z) = \frac{1}{N}\sum_{n=1}^N q_{\phi}(z|x_n)$$

However, this not only leads to overfitting but is also expensive to compute. So this paper suggests the variational mixture of posteriors prior (VampPrior), which approximates this optimum with a mixture of variational posteriors at pseudo-inputs. The pseudo-inputs are learned by backpropagation.

$$p_{\lambda}(z) = \frac{1}{K}\sum^K_{k=1}q_{\phi}(z|u_k)$$

To prevent the inactive-stochastic-units problem, the paper suggests a two-layered VAE:

$$q_{\phi}(z_1|x, z_2)\, q_{\psi}(z_2|x),\qquad p_{\theta}(x|z_1, z_2)\, p_{\lambda}(z_1|z_2)\,p(z_2)$$
$$p(z_2) = \frac{1}{K}\sum_{k=1}^K q_{\psi}(z_2|u_k),\qquad p_{\lambda}(z_1|z_2) = \mathcal{N}(z_1|\mu_{\lambda}(z_2), \mathrm{diag}(\sigma_{\lambda}^2(z_2)))$$
$$q_{\phi}(z_1|x, z_2) = \mathcal{N}(z_1|\mu_{\phi}(x, z_2), \mathrm{diag}(\sigma_{\phi}^2(x, z_2))),\qquad q_{\psi}(z_2|x) = \mathcal{N}(z_2|\mu_{\psi}(x), \mathrm{diag}(\sigma_{\psi}^2(x)))$$

So? The HVAE with VampPrior achieved good results on various datasets (MNIST, dynamic MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches), not only in log-likelihood (LL) but also in sample quality, reducing the blurring problem of the standard VAE.
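A minimal sketch of evaluating the VampPrior log-density in PyTorch, assuming an encoder that returns (mu, logvar) and a learnable (K, input_dim) tensor of pseudo-inputs (names are mine):

import math
import torch

def log_normal_diag(z, mu, logvar):
    # log N(z | mu, diag(exp(logvar))), summed over the last dimension
    return -0.5 * (logvar + (z - mu) ** 2 / logvar.exp() + math.log(2 * math.pi)).sum(-1)

def vamp_prior_logp(z, encoder, pseudo_inputs):
    # log p(z) = log( (1/K) * sum_k q(z | u_k) ), a mixture over pseudo-inputs u_k
    mu, logvar = encoder(pseudo_inputs)        # each of shape (K, D)
    K = pseudo_inputs.shape[0]
    comp = log_normal_diag(z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0))  # (N, K)
    return torch.logsumexp(comp, dim=1) - math.log(K)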
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (Roughly: 'The "path" only comes into being because we observe it.') Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate: COVFEFE is 7 characters, and the probability of a random 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8$ billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry, if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition. Why is the graviton spin 2, beyond hand-waving? The sense I have is, you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
I am reading these notes http://terrytao.wordpress.com/2010/10/02/245a-notes-4-modes-of-convergence/ by Terry Tao. I have a question about the difference between convergence in $L^{\infty}$ and almost uniform convergence. Is the difference that almost uniform convergence guarantees uniform convergence outside a set of arbitrarily small but possibly positive measure, while convergence in $L^{\infty}$ gives uniform convergence outside a set of exactly measure zero? Formal definitions follow to make the ideas precise. Let $(X, \mathcal{M}, \mu)$ be a measure space. Let $f, f_1, f_2, \ldots$ be measurable functions. We say that $f_n \to f$ in $L^{\infty}$ if for all $\varepsilon > 0$ there is an $N_{\varepsilon}$ such that $|f_n(x) - f(x)| \leq \varepsilon$ $\mu$-a.e. when $n \geq N_{\varepsilon}$. We say that $f_n \to f$ almost uniformly if for all $\varepsilon > 0$ there is a set $E \in \mathcal{M}$ with $\mu(E) \leq \varepsilon$ such that $f_n \to f$ uniformly on $E^c$. I.e., for each $\delta > 0$ there is an $N_{\delta}$ such that $|f_n(x) - f(x)| \leq \delta$ for all $x \in E^c$ when $n \geq N_{\delta}$.
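A standard example separating the two notions, on $[0,1)$ with Lebesgue measure: take $f_n(x) = x^n$ and $f = 0$. For every $\varepsilon > 0$ the convergence is uniform on $[0, 1-\varepsilon]$ and $\mu((1-\varepsilon, 1)) \leq \varepsilon$, so $f_n \to 0$ almost uniformly; but $\mathrm{ess\,sup}_{x \in [0,1)} |f_n(x)| = 1$ for every $n$, so $f_n \not\to 0$ in $L^{\infty}$. So the distinction described above is exactly right: $L^{\infty}$ convergence is uniform convergence off a null set, which is strictly stronger than uniform convergence off sets of arbitrarily small (possibly positive) measure.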
I'm trying to prove that if $L$ is regular, then $L_S$ is regular as well, where $L_S = \{x \mid \exists\, w \in \Sigma^* \text{ such that } wx\in L\}$. I know one way to do this would be to take an NFA that accepts $L$, then modify it so it accepts $L_S$. Try adding $\epsilon$-transitions from the initial state to each state reachable from it. Added: If you prefer, you can start with a right regular grammar for $L$, with initial symbol $S$, and add productions $S\to X$ for each non-terminal symbol $X$. Added 2: Since $L$ is regular, it can be generated by a right regular grammar $G=\langle N,\Sigma,P,S\rangle$. It's well known that we may without loss of generality assume that $S$ does not appear on the righthand side of any production in $P$, and that every non-terminal other than $S$ occurs in some derivation from $S$ in $G$. Let $G'=\langle N,\Sigma,P',S\rangle$, where $P'=P\cup\big\{S\to X:X\in N\setminus\{S\}\big\}$; in other words, $G'$ is the same as $G$, except that we've added a production $S\to X$ for each non-terminal symbol $X$ other than $S$ itself. $G'$ is clearly an extended right regular grammar, so it generates a regular language. We're done if we can prove that $G'$ generates the language $L_S$. To do this, we must prove two things: that every word of $L_S$ is generated by $G'$, and that every word generated by $G'$ belongs to $L_S$. Suppose, then, that $x\in L_S$. By definition this means that there is a $w\in\Sigma^*$ such that $wx\in L$, and that means that there is a derivation $S\Rightarrow^* wx$ in $G$. Since $G$ is right regular, the symbols of $wx$ are generated from left to right during the derivation. That is, if $wx=\sigma_1\sigma_2\ldots\sigma_n$, the derivation must have the form $$S\Rightarrow\sigma_1X_1\Rightarrow\sigma_1\sigma_2X_2\Rightarrow\ldots\Rightarrow\sigma_1\sigma_2\ldots\sigma_{n-1}X_{n-1}\Rightarrow\sigma_1\sigma_2\ldots\sigma_{n-1}\sigma_nX_n\Rightarrow\sigma_1\sigma_2\ldots\sigma_{n-1}\sigma_n\;,$$ where $X_1,\dots,X_n$ are non-terminal symbols, not necessarily distinct. Thus, at some stage of the derivation we must have a word of the form $wX$ for some non-terminal $X$, and the derivation therefore must have the form $S\Rightarrow^* wX\Rightarrow^* wx$, and we can see that $X\Rightarrow^* x$. That is, $G$ allows us to generate the word $x$ if we get to start with the non-terminal $X$. $G'$ has a production $S\to X$, and it also has all of the productions of $G$ that are used in the $G$-derivation $X\Rightarrow^* x$, so in $G'$ we can form the derivation $S\Rightarrow X\Rightarrow^* x$; this shows that $G'$ generates $x$. Now suppose that $G'$ generates $x$, meaning that there is a $G'$-derivation $S\Rightarrow^* x$. The productions in $P'$ with $S$ on the lefthand side have one of the following forms: $S\to\epsilon$; $S\to\sigma$ for some $\sigma\in\Sigma$; $S\to\sigma X$ for some $\sigma\in\Sigma$ and $X\in N$; or $S\to X$ for some $X\in N$. Thus, the derivation of $x$ must begin $S\Rightarrow\epsilon$, $S\Rightarrow\sigma$, $S\Rightarrow\sigma X$, or $S\Rightarrow X$. In the first three cases the whole derivation uses only productions of $G$, so $x\in L\subseteq L_S$ (take $w=\epsilon$). In the last case everything after the first step is a $G$-derivation $X\Rightarrow^* x$, and since $X$ occurs in some $G$-derivation from $S$, there is a $w\in\Sigma^*$ with $S\Rightarrow^* wX$ in $G$; combining these gives $S\Rightarrow^* wX\Rightarrow^* wx$, so $wx\in L$. In all cases, therefore, $x\in L_S$, and the proof is complete.
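To make the NFA suggestion concrete, here is a small Python sketch (the NFA encoding, with "" for $\epsilon$-moves, is my own choice); it adds a fresh start state with $\epsilon$-transitions into every state reachable in the original NFA, so a run may begin wherever some prefix $w$ could have ended:

from collections import deque

def suffix_nfa(states, start, accepts, delta):
    # delta maps (state, symbol) -> set of states; "" marks an epsilon move
    reachable, frontier = {start}, deque([start])
    while frontier:
        q = frontier.popleft()
        for (p, _sym), targets in delta.items():
            if p == q:
                for t in targets - reachable:
                    reachable.add(t)
                    frontier.append(t)
    s0 = object()  # fresh start state, guaranteed not to clash with existing ones
    new_delta = {k: set(v) for k, v in delta.items()}
    new_delta[(s0, "")] = set(reachable)
    return states | {s0}, s0, accepts, new_delta

The fresh start state matters: reusing the old one would let a run jump mid-word whenever it revisits the start, accepting words that are not suffixes.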
One can construct a Chebyshev series approximation to the integrand for an interval, such as -5 <= x <= 5 mentioned in the comments, and integrate it to get a series expansion for the integral. It is well known that Chebyshev series representations have numerical advantages over power series. I saw a comment about an NPU-supported method, but I don't know what that is, this being a Mathematica site. Most numerical systems have access to an FFT, in one way or another, I think, and, of course, to trigonometric functions. That is all that is really needed. Some auxiliary functions used here (code below): chebSeries[f, {a, b}, n, precision] computes the Chebyshev expansion of order n for f[x] over a <= x <= b. iCheb[c, {a, b}, k] integrates the Chebyshev series c, plus k. f = chebFunc[c, {a, b}] computes the function represented by the Chebyshev coefficients c = {c0, c1,...} over the interval {a, b}. The first step is to compute the coefficients of the integrand as a function of x for a given value of c. They are saved (memoized) for the sake of speed, but it is not essential. An antiderivative of the integrand is computed with iCheb[coeffs[c], {-5, 5}]. Depending on c, one needs a series of order 70 to 90+ to get a theoretical error of less than machine precision, so computing one of order 2^7 = 128 is sufficient for all c. (See Boyd, Chebyshev and Fourier Spectral Methods, Dover, New York, 2001, ch. 2, for a discussion of the convergence theory of Chebyshev series.) ClearAll[coeffs]; coeffs[c_?NumericQ] := coeffs[c] = Module[{ cc, (* Chebyshev coefficients *) pg = 16, (* Precision goal *) sum, (* sum of tail = error bound *) max, (* max coefficient to measure rel. error *) len}, (* how many terms of the tail to drop *) cc = chebSeries[Function[x, Exp[-x^2] Erf[x + c]], {-5, 5}, 128]; max = Max@Abs@cc; sum = 0; (* trim tail of series of unneeded coefficients (smaller than desired precision) *) len = LengthWhile[Reverse[cc], (sum += Abs[#1]) < 10^-pg max &]; Drop[cc, -len] ]; Next we can define the user's sought-after function in terms of the antiderivative: func[a_, b_, c_] := With[{antiderivative = chebFunc[ N@iCheb[coeffs[SetPrecision[c, Infinity]], {-5, 5}, 0], {-5, 5}]}, antiderivative[b] - antiderivative[a] ]; The following computes the relative and absolute error of func on a hundred random inputs, using a high-precision NIntegrate[] to compute the "true" value. cmp[a_, b_, c_] := {(#1 - #2)/#2, #1 - #2} &[ func[a, b, c], NIntegrate[Exp[-x^2] Erf[x + SetPrecision[c, Infinity]], {x, a, b}, WorkingPrecision -> 40] ] ListLinePlot[ Transpose@RealExponent[cmp @@@ RandomReal[{-5, 5}, {100, 3}]], PlotRange -> {-20, 0}, PlotLabel -> "Error", PlotLegends -> {"Rel.", "Abs."}] The yellow line shows the absolute error is limited to a few ulps. The theoretical bound on the error does not take into account rounding error (in both the coefficients and the evaluation of the series). The lines that drop off the bottom are the result of an error of zero.
Auxiliary functions (* Chebyshev extreme points *) chebExtrema::usage = "chebExtrema[n,precision] returns the Chebyshev extreme points of order n"; chebExtrema[n_, prec_: MachinePrecision] := N[Cos[Range[0, n]/n Pi], prec]; (* Chebyshev series approximation to f *) Clear[chebSeries]; chebSeries::usage = "chebSeries[f,{a,b},n,precision] computes the Chebyshev expansion \ of order n for f[x] over a <= x <= b."; chebSeries[f_, {a_, b_}, n_, prec_: MachinePrecision] := Module[{x, y, cc}, x = Rescale[chebExtrema[n, prec], {-1, 1}, {a, b}]; y = f /@ x; (* function values at Chebyshev points *) cc = Sqrt[2/n] FourierDCT[y, 1]; (* get coeffs from values *) cc[[{1, -1}]] /= 2; (* adjust first & last coeffs *) cc ]; (* Integrate a Chebyshev series -- cf. Clenshaw-Norton, Comp. J., 1963, p. 89, eq. (12) *) Clear[iCheb]; iCheb::usage = "iCheb[c, {a, b}, k] integrates the Chebyshev series c, plus k"; iCheb[c0_, {a_, b_}, k_: 0] := Module[{c, i, i0}, c[1] = 2 First[c0]; c[n_] /; 1 < n <= Length[c0] := c0[[n]]; c[_] := 0; i = 1/2 (b - a) Table[(c[n - 1] - c[n + 1])/(2 (n - 1)), {n, 2, Length[c0] + 1}]; i0 = i[[2 ;; All ;; 2]]; Prepend[i, k - Sum[(-1)^n*i0[[n]], {n, Length[i0]}]]] (* chebFunc[c,...] computes the function represented by a Chebyshev series *) chebFunc::usage = "f = chebFunc[c,{a,b}], c = {c0,c1,...} Chebyshev coefficients, over the interval {a,b} y = chebFunc[c,{a,b}][x] evaluates the function"; chebFunc[c_, dom_][x_] := chebFunc[c, dom, x]; chebFunc[c_?VectorQ, {a_, b_}, x_] := Cos[Range[0, Length[c] - 1] ArcCos[(2 x - (a + b))/(b - a)]].c; Update: Comparison of Chebyshev and power series Perhaps it would be worth illustrating the difference between power series and Chebyshev series approximations for those who are not familiar with it. (One should become familiar with it, for Chebyshev expansions are to functions what decimal expansions are to numbers.) Key differences: Symbolic series expansion of the function Erf[x + c] grows extremely fast and takes a much longer time to evaluate than the DCT-I used to compute the Chebyshev coefficients. Attempting a degree 40 expansion hung the computer and I had to kill the kernel. Aside from not being able to compute the series expansion to arbitrary order, it is probably impossible to get convergence for a fixed precision due to rounding error. At machine precision, you cannot even get 2 digits throughout the interval {c, -5, 5}, for a = -4, b = 4, up to order 25. OTOH, the Chebyshev series has an exponential order of convergence and can nearly achieve machine precision with machine-precision coefficients. It is fairly easy to figure out when you have enough terms of a Chebyshev series $\sum a_j T_j$, because the error is bounded by the tail $\sum |a_j|$ and the $a_n \rightarrow 0$ roughly geometrically on average. If you don't have fast trigonometric functions, then instead of the code in chebFunc above, you can use Clenshaw's algorithm (see chebeval) to evaluate the series. Here's another implementation of Mariusz's power series idea. I speed up the integration with a "power rule" int[{n}] for $$\int \exp\left(-x^2\right) x^n \; dx\,.$$ Of course it turned out that Series was the bigger bottleneck. ClearAll[sol, int]; int[{0}] = Integrate[Exp[-x^2], x, Assumptions -> x ∈ Reals]; int[{n_}] = Integrate[Exp[-t^2] t^n, {t, 0, x}, Assumptions -> n > 0 && n ∈ Integers && t ∈ Reals]; $seriesCoefficientPart = 3; sol[n_] := sol[n] = Total@MapIndexed[ First@Differences[#1 int[#2 - 1] /.
{{x -> a}, {x -> b}}] &, Series[Erf[x + c], {x, 0, n}][[$seriesCoefficientPart]] ]; (* Times *) First@*AbsoluteTiming@*sol /@ {2, 10, 20, 22, 23, 24, 25} (* {0.013267, 0.071065, 2.01908, 7.6296, 23.4752, 33.1553, 48.3542} *) (* Sizes *) LeafCount /@ sol /@ {2, 10, 20, 22, 23, 24, 25} (* {163, 693, 2413, 2765, 1100459, 1779935, 2879267} *) Chebyshev speed for c = 4, per evaluation of c: First@AbsoluteTiming@func[a, b, 4, #] & /@ {2, 10, 20, 25} (* {0.002047, 0.000184, 0.000248, 0.000276} *) To illustrate the issue with power series, the graphics below show the error in approximating Erf[x + c] by its Taylor series (times Exp[-x^2]) for c = -4, -2, 0, 2, 4 and various orders. It does pretty well for Abs[x] < 1 as the order increases, but it gets worse for Abs[x] > 4. GraphicsRow[ Table[ Plot[ Evaluate@Table[ Exp[-x^2] (Erf[x + c] - Normal@Series[Erf[x + c], {x, 0, n}]) // RealExponent, {c, -4, 4, 2}], {x, -5, 5}, PlotRange -> {-18, 0}, Frame -> True, Axes -> False, PlotLabel -> Row[{"Order ", n}], AspectRatio -> 1, FrameLabel -> {"x", "Log error"}], {n, {2, 10, 20, 25}}], PlotLabel -> "Error in approximating integrand by power series"] The two plots below compare the absolute error of approximating by power series and by Chebyshev series. The convergence of the Chebyshev series is remarkable by comparison. Plot[Evaluate@Table[ sol[n] - exact[a, b, c] /. {a -> -4, b -> 4} // RealExponent, {n, {2, 10, 20, 25}}], {c, -5, 5}, PlotRange -> {-18, 0}, Frame -> True, Axes -> False, GridLines -> {None, -Range[2, 16, 2]}, PlotLabel -> "Log error for power series of order n", FrameLabel -> {"c", "Log error"}, PlotLegends -> {2, 10, 20, 25}] Plot[Evaluate@Table[ func[a, b, c, n] - exact[a, b, c] /. {a -> -4, b -> 4} // RealExponent, {n, {2, 10, 20, 30, 40, 50, 60}}], {c, -5, 5}, PlotRange -> {-18, 0}, Frame -> True, Axes -> False, GridLines -> {None, -Range[2, 16, 2]}, PlotLabel -> "Log error for Chebyshev series of order n", FrameLabel -> {"c", "Log error"}, PlotLegends -> {2, 10, 20, 30, 40, 50, 60}] Determining the order of the approximation: trim[cc_, eps_] := Module[{sum, max, len}, max = Max@Abs@cc; sum = 0; len = LengthWhile[Reverse[cc], (sum += Abs[#1]) < eps max &]; Drop[cc, -len] ] Manipulate[ With[{cc = iCheb[chebSeries[Exp[-#^2] Erf[# + c] &, {-5, 5}, 128, 32], {-5, 5}]}, With[{order = Length@trim[cc, 10^-accuracy]}, ListPlot[ RealExponent@cc, PlotLabel -> Column[{ Row[{"Chebyshev coefficients a[n] for ", HoldForm["c"] -> Chop[N[c, {2, 1.5}], 0.05], ", "}], Row[{"For accuracy ", SetPrecision[10^-accuracy, 2], " use order ", order}] }, Alignment -> Center], Frame -> True, FrameLabel -> {"n", "exponent of a[n]"}, GridLines -> {{order}, {-accuracy}}, PlotRange -> {-31, 1}] ]], {{c, 4}, -5, 5, 1/10}, {{accuracy, 16.}, 2, 28}] Addendum: Failed ideas - Maybe someone can make them work... The Chebyshev series of Exp[-x^2] can be computed over {-x0, x0} exactly in terms of modified Bessel functions BesselI[]. Therefore, so can the series for Erf[x]. I was seduced into trying to come up with a way to compute the OP's function in this way, but the +c in Erf[x + c] was too ornery. One thing that would be needed is a way to write ChebyshevT[n, x + c] as a Chebyshev series in ChebyshevT[n, x]. The coefficients would be polynomials in c (with integer coefficients), which themselves could be represented as Chebyshev expansions. This can be done, in fact, but it's a bit cumbersome and slow. Further, the Chebyshev coefficients for n = 64 get bigger than 2^100, and I worried about numerical stability.
For the moment, I have given up without testing it. The way above seems superior, in simplicity, as well as (probably) in speed and numerics.
Wells (forthcoming) has a really nice example of a sequential decision problem in which an evidential decision theorist will end up predictably poorer than a causal decision theorist. Wells thinks that this case shows that we should reject evidential decision theory. I agree that we should reject evidential decision theory, but I don’t think that a proponent of CDT should use Wells’s case to argue for this conclusion. The reason is that there are sequential decision problems in which a causal decision theorist will end up predictably poorer than an evidential decision theorist, even when both the causal decision theorist and the evidential decision theorist face this decision problem in the same circumstances. If predictable relative poverty like this gives a sufficient reason to reject EDT, then it likewise gives a sufficient reason to reject CDT. (I think we should tollens—though I won’t be making that case here.) Consider the following decision problem, adapted from Hunter & Richter (1978): *Hunter Richter*: You are given the opportunity to play a game. You can, if you wish, take either box $A$, box $B$, or box $C$. Or you can decide to not play and not take any box ($N$). A reliable predictor made a prediction about how you would choose, and allocated prizes in the boxes according to the following rules: if they predicted that you would take $A$, then they put 100 dollars in box $A$ and left a bill for 100 dollars in boxes $B$ and $C$ (so that, if you were to pick either box $B$ or $C$, then you'd lose 100 dollars). If they predicted that you would take $B$, then they put 100 dollars in $B$ and left a bill for 100 dollars in boxes $A$ and $C$. If they predicted that you would take $C$, then they put 100 dollars in box $C$ and left a bill for 100 dollars in boxes $A$ and $B$. If they predicted that you wouldn't play, then they left all boxes empty. Using ‘$K_A$’ to stand for the proposition that the predictor predicted you would take box $A$, and likewise for ‘$K_B$’, ‘$K_C$’, and ‘$K_N$’, we have the following decision matrix: $$\begin{array}{c|cccc} & K_A & K_B & K_C & K_N \\ \hline A & 100 & -100 & -100 & 0 \\ B & -100 & 100 & -100 & 0 \\ C & -100 & -100 & 100 & 0 \\ N & 0 & 0 & 0 & 0 \end{array}$$ For simplicity, suppose that your utilities are linear in dollars, and suppose that the predictor is 100% reliable—that is, conditional on you selecting act $X$, the probability that the predictor predicted you would select act $X$ is 100% (this is a harmless simplification, since we could run the same case with 70% reliability, but it would make the math more complicated than it needs to be). Then, the evidential decision theorist tells you to select either $A$, $B$, or $C$ (it doesn't matter which), since $A$, $B$, and $C$ each have an expected value of 100, and $N$ has an expected value of 0. (Because the predictor is perfectly reliable, the evidential decision theorist is only interested in the diagonal entries of the decision matrix.) On the other hand, what the causal decision theorist tells you to do will depend upon how confident you are that you will end up selecting $A$, $B$, or $C$ (since this makes a difference with respect to how confident you are that the predictor predicted you’d choose $A$, $B$, $C$, or $N$). If your credence that you take $A$ is $a$, your credence that you take $B$ is $b$, and your credence that you take $C$ is $c$, then the causal decision theorist’s $U$-values for the acts $A$, $B$, $C$, and $N$ are as shown below.
$$ \begin{aligned} U(A) &= 100 (a-b-c) \\ U(B) &= 100 (b-a-c) \\ U( C ) &= 100 (c - a- b) \\ U(N) &= 0 \end{aligned} $$ Suppose that $a=b=c=$ 25%—that is, you think you’re equally likely to select any of the available acts. Then, $U(A) = U(B) = U( C ) = -25$, while $U(N) = 0$, so CDT will advise you to not play the game. Consider now a slightly different decision problem. *Two Stage Hunter Richter*: At stage one, you are given a choice: either play, $\sim N$, or don't. If you choose to not play, then you walk away without gaining or losing any money. If you choose to play, then you must select either box $A$, box $B$, or box $C$. As in *Hunter Richter*, if the predictor predicted you would not play, the boxes were left empty. If they predicted you would take box $X$, then 100 dollars were left in box $X$ and bills for 100 dollars were left in the other boxes. In Two Stage Hunter Richter, causal decision theory will tell you to play, $\sim N$, no matter how likely you think you are to play or not, and no matter how likely you think you are to pick $A$, $B$, or $C$, given that you play. By way of explanation: label the factors which you are not in a position to affect ‘$K$’, and those which you are in a position to affect ‘$C$’. Then, the causal decision theorist says to maximize the $U$-value of your act, where $$U(A) = \sum_K \Pr(K) \cdot \sum_C \Pr(C \mid KA) \cdot V(KCA)$$ ($V$ is your value function. This is Skyrms’s formulation of CDT, but the same point would go through with other formulations.) Since it conditions the probability of each downstream factor $C$ on the performance of your act, $\Pr(C \mid KA)$, $U$-value is sensitive to correlations between your acts and the goods that they cause. Since it additionally conditions the probability of each $C$ on each $K$, $U$-value is additionally sensitive to correlations between factors out of your control and factors causally downstream of your act. So if one of the downstream causal consequences of your act is a subsequent act of yours—as in Two Stage Hunter Richter—then $U$-value will be sensitive to correlations between subsequent acts of yours and factors over which you have no control. Applied to Two Stage Hunter Richter: deciding to play causes you to take either box $A$, $B$, or $C$. So, when you are deciding whether to play or not, causal decision theory tells you to take into consideration correlations between your subsequent decision (take $A$, $B$, or $C$) and the predictor’s prediction about which box you would take. Since these correlations are perfect, $$ \Pr(A \mid K_A \sim N) = \Pr(B \mid K_B \sim N) = \Pr(C \mid K_C \sim N) = 1, $$ the $U$-value of playing will be $$\begin{aligned} U(\sim N) &= \Pr(K_A) \cdot 100 + \Pr(K_B) \cdot 100 + \Pr(K_C) \cdot 100 + \Pr(K_N) \cdot 0\\ &= 100 (a + b + c) \end{aligned}$$ So: as long as you aren’t certain that you won’t play the game, CDT advises you to play. Let’s alter the case once more. Here’s a three-stage version of Hunter-Richter: *Three Stage Hunter Richter*: At stage one, you are given a choice: you may either pay 90 dollars, $P$, or pay nothing, $\sim P$. If you pay nothing, then you go on to play the original Hunter Richter game. If you pay 90 dollars, then you go on to play Two Stage Hunter Richter. What does causal decision theory say to do in Three Stage Hunter Richter? Again, it depends upon how likely you think you are to end up selecting $A$, $B$, $C$, or $N$ in the final stage. Let’s suppose, as before, that you think each outcome is equally likely.
Now: suppose you don’t pay the 90 dollars, $\sim P$. Then, you’ll face the original Hunter Richter game. And we know what CDT will advise you to do there: it will tell you to not play, $N$. So you’ll walk away with nothing. Suppose that you do pay the 90 dollars, $P$. Then, you’ll face the Two Stage Hunter Richter game. And we know what CDT will advise you to do there: it will tell you to play, and you’re certain that you’ll end up making 100 dollars. Minus the 90 you paid up front, you’ll end up with a net 10 dollars. 10 dollars is better than 0 dollars, so CDT tells you to pay the 90 up front. So: if a causal decision theorist plays this game, they’ll walk away with 10 dollars. What about the evidential decision theorist? If they play this three stage game, then they know that they’ll choose to take either $A$, $B$, or $C$ in the original Hunter Richter game, so they will see no reason to pay 90 dollars at stage one. So: if an evidential decision theorist plays this game, they’ll walk away with 100 dollars. Notice that the evidential decision theorist didn’t have more money provided to them by the predictor. Both the causalist and the evidentialist had 100 dollars placed in one box and bills in the other two—we can even suppose that it was the very same box for each. And both had the choice to pay or not. But the evidentialist ended up 90 dollars richer than the causalist.
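As a sanity check on the arithmetic in this post, a few lines of Python (variable names mine) reproduce the quoted values:

a = b = c = 0.25          # credence of ending up taking box A, B, C

# one-stage game: diagonal entries pay 100, off-diagonal -100, N pays 0
U_A = 100 * (a - b - c)   # -25
U_B = 100 * (b - a - c)   # -25
U_C = 100 * (c - a - b)   # -25
U_N = 0.0                 # so CDT says: don't play

# two-stage game: the box you take is perfectly correlated with the prediction
U_play = 100 * (a + b + c)  # 75 > 0, so CDT plays

# three-stage game: paying 90 leads to a sure 100, refusing leads to N and 0
print(100 - 90)  # CDT nets 10; EDT never pays and nets 100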
WHY? The spatial sampling of a convolutional neural network is geometrically fixed. This paper suggests two modules that let a CNN capture geometric structure more flexibly. WHAT? Deformable convolution modifies the regular sampling grid $\mathcal{R}$ of a convolution by augmenting $\mathcal{R}$ with offsets. The offsets are generated by a conv layer with $2N$ channels (two offset coordinates per sampling location). For example, consider a convolution with a 3x3 kernel and dilation 1:

$$\mathcal{R} = \{(-1, -1), (-1, 0),\dots,(0, 1), (1, 1)\}$$
$$\mathbf{y}(\mathbf{p}_0) = \sum_{\mathbf{p}_n\in\mathcal{R}}\mathbf{w}(\mathbf{p}_n)\cdot\mathbf{x}(\mathbf{p}_0 + \mathbf{p}_n) \quad\longrightarrow\quad \mathbf{y}(\mathbf{p}_0) = \sum_{\mathbf{p}_n\in\mathcal{R}}\mathbf{w}(\mathbf{p}_n)\cdot\mathbf{x}(\mathbf{p}_0 + \mathbf{p}_n + \delta\mathbf{p}_n)$$

Since offsets can be fractional, bilinear interpolation is used:

$$\mathbf{x}(\mathbf{p}) = \sum_q G(\mathbf{q},\mathbf{p})\,\mathbf{x}(\mathbf{q}),\qquad G(\mathbf{q},\mathbf{p}) = g(q_x, p_x)\cdot g(q_y, p_y),\qquad g(a, b) = \max(0, 1 - |a - b|)$$

This kind of augmentation enables a CNN to handle various transformations of scale, aspect ratio, and rotation. The second module is deformable RoI pooling for object detection. Deformable RoI pooling divides the RoI into k x k bins and outputs a k x k feature map y; here the offsets are generated by an fc layer:

$$\mathbf{y}(i,j) = \sum_{\mathbf{p}\in bin(i, j)} \mathbf{x}(\mathbf{p}_0 + \mathbf{p})/n_{ij} \quad\longrightarrow\quad \mathbf{y}(i,j) = \sum_{\mathbf{p}\in bin(i, j)} \mathbf{x}(\mathbf{p}_0 + \mathbf{p} + \delta \mathbf{p}_{ij})/n_{ij},\qquad \delta \mathbf{p}_{ij} = \gamma\cdot\delta\hat{\mathbf{p}}_{ij} \circ (w, h)$$

Position-sensitive RoI pooling replaces the general feature map with position-sensitive score maps with $k^2(C+1)$ channels. So? The deformable convolution network performed better on semantic segmentation and object detection than a normal convolution network. Critic The amazing property of DCN is that the receptive field of its filter can vary with object size. I assume that feature vectors of DCN may represent real objects, which could be useful in VQA.
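Since the bilinear kernel is what makes fractional offsets differentiable, here is a minimal Python sketch of it (single-channel feature map; the function name and encoding are mine):

import numpy as np

def bilinear_sample(x, p):
    # x(p) = sum_q G(q, p) x(q), with G(q, p) = g(q_x, p_x) g(q_y, p_y) and
    # g(a, b) = max(0, 1 - |a - b|); only the four integer neighbours contribute
    H, W = x.shape
    px, py = p
    x0, y0 = int(np.floor(px)), int(np.floor(py))
    val = 0.0
    for qx in (x0, x0 + 1):
        for qy in (y0, y0 + 1):
            if 0 <= qx < H and 0 <= qy < W:
                g = max(0.0, 1 - abs(qx - px)) * max(0.0, 1 - abs(qy - py))
                val += g * x[qx, qy]
    return val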
Answer Chord AB is about 1.27 inches closer. Work Step by Step Let x represent the distance from chord AB to point P. Let y represent the distance from chord CD to point P. The perpendicular bisector from chord AB to point P splits triangle APB into two 45-45-90 triangles with radii for hypotenuses. Therefore: x=$\frac{8}{\sqrt 2}$, so x=4$\sqrt 2$. The perpendicular bisector from chord CD to point P splits triangle CPD into two 30-60-90 triangles with radii for hypotenuses. Therefore: y=$\frac{8}{2}$($\sqrt 3$), so y=4$\sqrt 3$. y-x=4$\sqrt 3$-4$\sqrt 2$$\approx$1.27
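A quick check of the arithmetic in Python:

import math
x = 8 / math.sqrt(2)    # distance from chord AB to P: 4*sqrt(2)
y = 4 * math.sqrt(3)    # distance from chord CD to P: 4*sqrt(3)
print(round(y - x, 2))  # 1.27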
WHY? Efficient exploration is an important issue for agents in reinforcement learning. Conventional exploration heuristics include $\epsilon$-greedy for DQN and an entropy reward for A3C. WHAT? NoisyNet is a neural network whose parameters are replaced with a parametric function of noise:

$$\theta \stackrel{\mathrm{def}}{=} \mu + \Sigma \odot \epsilon, \qquad y \stackrel{\mathrm{def}}{=} (\mu^w + \sigma^w \odot \epsilon^w)x + \mu^b + \sigma^b \odot \epsilon^b$$

There are two options for the noise: independent Gaussian noise and factorised Gaussian noise. Independent Gaussian noise inserts noise per weight ($pq + q$ variables per layer), while factorised Gaussian noise inserts noise per input and output unit ($p + q$ variables per layer). Since factorised Gaussian noise requires generating fewer random variables, it is used for the single-thread DQN agent, and independent noise is used for the distributed A3C. The loss of the noisy network applied to DQN is

$$\bar{L}(\zeta) = \mathbb{E}\Big[\mathbb{E}_{(x, a, r, y)\sim D}\big[r + \gamma Q(y, b^*(y), \epsilon'; \zeta^-) - Q(x, a, \epsilon; \zeta)\big]^2\Big], \qquad b^*(y) = \arg\max_{b\in \mathcal{A}}Q(y, b, \epsilon''; \zeta)$$

So? NoisyNet-DQN/Dueling/A3C showed overall improved performance over the models without noise. The evolution of $\sigma$ differed significantly across tasks. Critic I’m quite surprised that this actually works. More experiments on when the agent chooses to explore more would be helpful.
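Here is a rough PyTorch sketch of a factorised-Gaussian noisy linear layer as described above (the initialisation constants follow the paper's general recipe but should be treated as assumptions):

import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    # y = (mu_w + sigma_w * eps_w) x + mu_b + sigma_b * eps_b,
    # factorised noise: eps_w = f(eps_out) f(eps_in)^T with f(e) = sign(e) sqrt(|e|)
    def __init__(self, p, q, sigma0=0.5):
        super().__init__()
        bound = 1 / p ** 0.5
        self.mu_w = nn.Parameter(torch.empty(q, p).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((q, p), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.empty(q).uniform_(-bound, bound))
        self.sigma_b = nn.Parameter(torch.full((q,), sigma0 * bound))
        self.p, self.q = p, q

    @staticmethod
    def f(e):
        return e.sign() * e.abs().sqrt()

    def forward(self, x):
        eps_in = self.f(torch.randn(self.p))    # p + q noise variables in total
        eps_out = self.f(torch.randn(self.q))
        w = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        return x @ w.T + b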
WHY? Learning directed generative models is difficult. WHAT? The Deep AutoRegressive Network (DARN) models images with hierarchical, autoregressive hidden layers. DARN has three components: an encoder q(H|X), a decoder prior p(H), and a decoder conditional p(X|H). All latent variables h in this model are binary. The decoder prior is an autoregressive model, and the conditional probabilities can be modeled with simple logistic regressions:

$$p(h) = \prod_{j=1}^{n_h}p(h_j|h_{1:j-1}),\qquad p(H_j = 1|h_{1:j-1}) = \sigma(W_j^{(H)}\cdot h_{1:j-1} + b_j^{(H)})$$
$$p(x|h) = \prod_{j=1}^{n_x}p(x_j|x_{1:j-1}, h),\qquad p(X_j = 1|x_{1:j-1}, h) = \sigma(W_j^{(X|H)}\cdot (x_{1:j-1},h) + b_j^{(X|H)})$$
$$q(h|x) = \prod_{j=1}^{n_h}q(h_j|h_{1:j-1}, x),\qquad q(H_j = 1|h_{1:j-1}, x) = \sigma(W_j^{(H|X)}\cdot x + b_j^{(H|X)})$$

We can build deeper DARN architectures by adding stochastic hidden layers, deterministic hidden layers, and other kinds of autoregressivity such as NADE:

$$p(H^{(l)}|H^{(l+1)}) = \prod_{j=1}^{n_h^{(l)}} p(H_j^{(l)}|H_{1:j-1}^{(l)}, H^{(l+1)}),\qquad q(H^{(k)}|H^{(k-1)}) = \prod_{j=1}^{n_h^{(k)}} q(H_j^{(k)}|H_{1:j-1}^{(k)}, H^{(k-1)})$$
$$d^{(l)} = \tanh(Uh^{(l+1)}),\qquad p(H_j^{(l)} = 1 | h^{(l)}_{1:j-1}, h^{(l+1)}) = \sigma(W_j^{(H)}\cdot (h^{(l)}_{1:j-1},d^{(l)}) + b_j^{(H)})$$

The cost function is the minimum description length (MDL), which is equivalent to the Helmholtz variational free energy:

$$L(x) = \sum_h q(h|x)(L(h) + L(x|h)) = -\sum_h q(h|x)(\log_2 p(x,h) - \log_2 q(h|x)) = \sum_{h_1=0}^1 q(h_1|x) \cdots \sum_{h_{n_h}=0}^1 q(h_{n_h}|h_{1:n_h-1},x)\,(\log_2 q(h|x) - \log_2 p(x,h))$$

Calculating the derivative of the cost function is intractable, so an MC approximation is used to estimate the gradient. A Laplacian pyramid framework is used for restoring compressed images. When an image is compressed to a smaller size, it loses information of the high-resolution image, so simply enlarging the image would not be enough to restore the original data. The Laplacian pyramid framework saves the differences between the enlarged low-resolution image and the high-resolution image at each stage. So? DARN performed better than RBM, FVSBN, and NADE in log-likelihood on the UCI and binarised MNIST datasets. Critic Backpropagation seems inconvenient.
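A toy sketch of the autoregressive decoder prior $p(h)$ with logistic conditionals, in Python/PyTorch (W is a square weight matrix of which only the strictly lower-triangular part is used; names are mine):

import torch

def darn_prior_logp(h, W, b):
    # log p(h) = sum_j log p(h_j | h_{1:j-1}),
    # with p(h_j = 1 | h_{1:j-1}) = sigmoid(W_j . h_{1:j-1} + b_j); h is a binary vector
    logp = 0.0
    for j in range(h.shape[0]):
        logit = W[j, :j] @ h[:j] + b[j] if j > 0 else b[j]
        pj = torch.sigmoid(logit)
        logp = logp + torch.log(pj if h[j] == 1 else 1 - pj)
    return logp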
Answer $$(1-\cos^2\alpha)(1+\cos^2\alpha)=2\sin^2\alpha-\sin^4\alpha$$ By dealing with the left side, we prove that both sides are equal and that this is thus an identity. Work Step by Step $$(1-\cos^2\alpha)(1+\cos^2\alpha)=2\sin^2\alpha-\sin^4\alpha$$ We try the left side first: $$A=(1-\cos^2\alpha)(1+\cos^2\alpha)$$ We notice that $\sin^2\alpha=1-\cos^2\alpha$. That means $$A=\sin^2\alpha(1+\cos^2\alpha)$$ $$A=\sin^2\alpha+\sin^2\alpha\cos^2\alpha$$ Also, since the right side only includes $\sin\alpha$, it is better to change $\cos^2\alpha$ into $1-\sin^2\alpha$: $$A=\sin^2\alpha+\sin^2\alpha(1-\sin^2\alpha)$$ $$A=\sin^2\alpha+\sin^2\alpha-\sin^4\alpha$$ $$A=2\sin^2\alpha-\sin^4\alpha$$ Thus the left side is equal to the right side, and the expression is therefore an identity.
Note: Having spent some time on the original problem below, I saw that it can be boiled down to a simpler problem. Here is that simpler problem: In a vector space (over $\mathbb{R}$) of dimension $n$, consider the intersection of an affine subspace H of dimension $m < n$ with an orthant (the extension of a quadrant to several dimensions). First question: how many extreme points (vertices) can I have? I would be happy with only the maximum of that number depending on $n$ and $m$. Second question: find an efficient algorithm to enumerate the coordinates of those extreme points, given the linear equations defining H. I tried to reason in the simple case where the intersection of H with any facet of the orthant (dimension $n-1$) is a subspace of dimension $m-1$. In that case, we can prove that extreme points have $m$ coordinates equal to 0. Element of proof: the intersection of $p$ facets with H is of dimension $m-p$, so extreme points (faces of dimension 0) are intersections of $m$ facets with H. Furthermore, it is clear that we can have at most one extreme point for each choice of $m$ coordinates set to 0. So the number of extreme points is less than or equal to $n \choose m$. But in dimension 3, for instance, if H is of dimension 1 (a line), the number of extreme points is 2 or less (while ${3 \choose 1} =3$). So the maximum is less than the found upper bound. =================== Original problem ======== Extreme points of a probability distribution subject to marginal constraints. Suppose that I have a probability distribution over $n$ binary variables $\chi=(X_i)_{1 \leq i \leq n}$. That distribution can be seen as a point on the simplex [$\sum_{x \in val(\chi)}{P(x)}=1$ and $\forall x \in val(\chi), P(x) \ge 0$], where the $P(x)$ are the coordinates of my distribution and $val(\chi)$ is the set of outcomes of the variables of $\chi$. Suppose, now, that I know a set of (coherent) marginal constraints over subsets of $\chi$. Thus, for each constraint $P(X_i=x_i,X_j=x_j, \dots ,X_k=x_k)=C$, I would have an additional linear constraint $\sum_{x \in S}P(x)=C$, where S is the subset of $val(\chi)$ compatible with $X_i=x_i,X_j=x_j, \dots ,X_k=x_k$. The space that is valid for all my constraints is a convex polytope. Its dimension is $card(val(\chi))-1-N=2^n-1-N$, where $N$ is the number of additional linear constraints. The question is how to find the extreme points of that polytope, the number of those points (or an upper bound on that number) and (that would be great) an algorithm enabling one to find the neighbouring extreme points of any extreme point. So far, I have devised a simple but fastidious way to identify extreme points: create counters for each marginal constraint; then, for each ordering of the $x \in val(\chi)$: sort the $x \in val(\chi)$ in that order, attribute to each x in turn the highest valid probability P(x), and decrement the counters accordingly. E.g. suppose I have 2 variables $X_1$ and $X_2$ and 2 marginal constraints: $P(X_1=0)=0.3$, $P(X_2=0)=0.6$. I create 2 pairs of counters: P($X_1$) = [0.3, 0.7], P($X_2$) = [0.6, 0.4]. Then, I start with the natural ordering for $x=(x_1,x_2)$: (0,0), (0,1), (1,0), (1,1). I set P(0,0)=0.3 and decrement my counters: P($X_1$) = [0, 0.7], P($X_2$) = [0.3, 0.4]. Then I have no choice for P(0,1)=0. Then I set P(1,0)=0.3 and decrement my counters: P($X_1$) = [0, 0.4], P($X_2$) = [0, 0.4]. Then it comes P(1,1)=0.4, and I continue with different orderings. Well, I have not proved that this method provides all and only the extreme points. Furthermore, it requires doing the job for each ordering, so the complexity is not polynomial.
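For what it's worth, here is a small Python sketch of the ordering procedure on the 2-variable example above (the counter encoding is mine); it enumerates the candidate extreme points produced by all orderings:

from itertools import permutations

p1 = [0.3, 0.7]   # marginal counters for X1: P(X1=0)=0.3
p2 = [0.6, 0.4]   # marginal counters for X2: P(X2=0)=0.6
outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]

def greedy(order):
    c1, c2 = p1[:], p2[:]
    P = {}
    for (x1, x2) in order:
        m = min(c1[x1], c2[x2])   # highest probability still valid for this outcome
        P[(x1, x2)] = m
        c1[x1] -= m
        c2[x2] -= m
    return P

vertices = {tuple(sorted(greedy(o).items())) for o in permutations(outcomes)}
for v in vertices:
    print(dict(v))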
Here's the very impressive Stokes's theorem, illustrated by the diagram in the original post: $$\int^{\ }_{\mathrm{\Sigma }}{{\mathrm{\nabla }}_{\mu }V^{\mu }\sqrt{\left|g\right|}}d^nx=\int^{\ }_{\mathrm{\partial }\mathrm{\Sigma }}{n_{\mu }V^{\mu }\sqrt{\left|\gamma \right|}}d^{n-1}x $$ At his (3.36) Carroll says "if ##\mathrm{\nabla }## is the Christoffel symbol, ##{\omega }_{\mu }## is a one-form, and ##X^{\mu }## and ##Y^{\mu }## are vector fields, we can write $${\left(\mathrm{d}\omega \right)}_{\mu \nu }=2{\partial }_{[\mu }{\omega }_{\nu ]}=2{\mathrm{\nabla }}_{[\mu }{\omega }_{\nu ]} $$" The phrase "if ##\mathrm{\nabla }## is the Christoffel symbol" is bizarre, and it is easy to prove the equation without it, assuming only that the connection is torsion-free (##{\mathrm{\Gamma }}^{\lambda }_{\mu \nu }={\mathrm{\Gamma }}^{\lambda }_{\nu \mu }##). I think our author meant "if the connection is torsion-free". Read more at Commentary 3.2 Properties of covariant derivative.pdf (7 pages)
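For the record, the torsion-free computation is short: using ##{\mathrm{\nabla }}_{\mu }{\omega }_{\nu }={\partial }_{\mu }{\omega }_{\nu }-{\mathrm{\Gamma }}^{\lambda }_{\mu \nu }{\omega }_{\lambda }## and antisymmetrising, $$2{\mathrm{\nabla }}_{[\mu }{\omega }_{\nu ]}={\partial }_{\mu }{\omega }_{\nu }-{\partial }_{\nu }{\omega }_{\mu }-\left({\mathrm{\Gamma }}^{\lambda }_{\mu \nu }-{\mathrm{\Gamma }}^{\lambda }_{\nu \mu }\right){\omega }_{\lambda }=2{\partial }_{[\mu }{\omega }_{\nu ]},$$ where the connection terms cancel precisely because ##\mathrm{\Gamma }## is symmetric in its lower indices.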
EF and BD are diagonals of rectangle EBFD; thus, EF = BD. The length of EF is minimal exactly when BD is perpendicular to AC (BD is then the altitude through B of triangle ABC). Therefore: $\sin \alpha = \dfrac{BD}{AB} = \dfrac{BC}{AC}$ $\dfrac{BD}{40} = \dfrac{30}{50}$ $BD = 24 \, \text{ cm}$ $\sin \beta = \dfrac{y}{BD} = \dfrac{AB}{AC}$ $\dfrac{y}{24} = \dfrac{40}{50}$ $y = 19.2 \, \text{ cm}$ answer
- I was learning about beta decay, and how a down quark decays into an up quark by emitting a W- boson which then becomes an electron and an electron antineutrino. I have two main questions - Firstly, how can the down quark be considered a fundamental particle, when it can break down to produce...
- Historically, quantum mechanics, or wave mechanics, arose due to the anomaly that accelerating free charges (electrons) radiated EM waves. The quantisation theory provided a solution. Quarks also have electric charge, and are moving at relativistic speeds, and bound in the nucleon. As the...
- Some physics papers today describe the b quark as a beauty quark; other physics papers refer to b quarks as bottom quarks. The b quark is a particle that was theoretically predicted to exist in 1973 and first observed experimentally in 1977. But, here we...
- Deuterium exists only with the proton and neutron of aligned spin, which suggests that the residual strong force is greater with aligned spins, i.e. the binding energy is greater if the spins are aligned. On the other hand the mass of ##\Delta^{+}## is greater than the mass of the proton ##p##...
- I am trying to work through a problem in the textbook "Particle Physics in a Nutshell." However, I am realizing how little I actually understand about working through problems involving quark color pairs. Given in the problem is the meson color singlet $\frac{1}{\sqrt{3}}(r\bar{r}+g\bar{g}+b\bar{b})$...
- Mesons and baryons have both a ground state and excited states involving the same valence quarks but a higher mass (which can in principle be calculated from QCD). Fundamental fermions and bosons, however, do not appear to display this behavior. They have a ground state, and while there are...
- Hi! I am asked to calculate the portion of antimatter present in protons. I am told that this portion is given by: r=(3R-1)/(3-R), where R=σ(antineutrino)/σ(neutrino). Another definition of r is r=∫x*Q'(x)dx/∫x*Q(x)dx, where Q(x) are partonic densities, Q(x)=d(x)+u(x) (d is down quark and...
- Hi there ppl! I have a question! I learned that the neutral muon (μ0) is made of a quark and its antiquark (can't remember which), which explains its very short lifetime (around 10-24 s if I'm not wrong). Now I wonder how two particles, that are meant to destroy each other, form for a little time...
- When protons, due to their electric charge, interact with photons, are the quarks somehow also involved in this same electric interaction? After all, the quarks do have fractional electric charges. Thanks in advance.
- Hello everyone, I'm sorry if it's a dumb question, I'm very new to self-studying particle physics. May I ask, when we associate a charge (i.e. positive or negative) to a quark or lepton, is it only because of their attraction or repulsion toward each other? In other words, is it just to name one...
- Hey! I'm studying some particle physics. I ran into this example of a gluon decaying into a u - anti-u pair. (According to example 9: http://teachers.web.cern.ch/teachers/archiv/HST2002/feynman/examples.htm) How come this happens via the strong force? Why isn't a Z0 boson doing this instead? Thanks!
- My understanding of beta radiation is that an up quark in a proton changes to a down quark, forming a neutron and emitting an electron as the result of the change in charge. My questions are: 1. Why does the quark change? 2. How does it change and how does it change charge?
Dear PF Forum, Just out of curiosity. What happens when an antiproton hits a 'normal' neutron? According to this: https://en.wikipedia.org/wiki/Proton https://en.wikipedia.org/wiki/Neutron A proton has 2 up quarks, 1 down quark. A neutron has 1 up quark, 2 down quarks. 1. Does an antiproton have 2... In the big rip theory, the force of dark energy isn't constant and increases over time. This causes first galaxies to fly apart, then solar systems, then planets, then stars, then atoms, then the atomic nuclei. If it keeps increasing, it would start pulling the quarks inside protons and neutrons... I am familiar with the proton:neutron ratio and stability, but what about this instability actually causes a quark to emit a boson and change flavour? And what does this have to do with the weak nuclear force? Thanks 1. Homework Statement: How does the pi-meson (π0) work without annihilating itself? 2. Homework Equations: ##|\pi^0\rangle = \tfrac{1}{\sqrt{2}}\left(|u\bar{u}\rangle - |d\bar{d}\rangle\right)##, where ##\bar{u}## is the anti-up quark and ##\bar{d}## the anti-down quark. 3. The Attempt at a Solution: I do understand that it is a superposition, but why does it work when it is a particle with its... Quarks join up with other quarks to form composite particles like protons and neutrons, but in the center of something like a nucleus, how do they know which quarks are in THEIR proton or neutron? When all the quarks are together and it becomes a "soup" of quarks, why doesn't it form things like... There are exotic atoms such as protonium (proton+antiproton) and positronium (electron+positron); I was wondering if quark-antiquark particles could appear even if they only exist for a fraction of a second. Dear PF Forum, I have a question to ask. Supernovae produce a neutron star (or black hole). This is what I summarize from Wikipedia. 1. Is P + e = N? Is it that simple? Judging by its mass, although slightly off. 2. Is Up Quark + e = Down Quark? Thanks for any answer As far as I know a baryon is made of three quarks (e.g. uud, udd etc) and a meson of two quarks, a quark/antiquark pair. As I am not a student/scholar in physics but very deeply interested in this field, I couldn't find any explanation why a meson is only made up of a quark/antiquark pair. What... 1. Homework Statement: (a) What are their vertex and propagator factors? (b) Find the value of R. (c) Explain the peaks at 3, 10 and 100 GeV. 2. Homework Equations 3. The Attempt at a Solution: Part (a): They have the same propagator factor ##\frac{1}{P \cdot P}##. Vertex factor for muon... Hello all, I'm trying to answer a question about a meson and a proton interacting to form a neutron and an unknown meson - to be determined. The equation is: (anti-up and strange) + (uud) --> udd + (xy). I arrived at strange and anti-down OR anti-strange and down. Are both possibilities... This isn't a homework problem. I am preparing for a particle physics exam and although I understand the theoretical side of field theory, I have little idea how to approach practical scattering questions like these. THE PROBLEM: Dark matter might be observed at the LHC with monojet and...
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
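To make that ε–T condition concrete, here is a minimal sketch (an assumed toy system dx/dt = -x integrated with forward Euler, not the system from the quoted problem) showing |x(t)| dropping below any fixed ε for large enough T:

#include <cstdio>
#include <cmath>

int main()
{
    // Toy linear system dx/dt = -x; the origin is asymptotically stable.
    double x = 5.0;          // initial condition
    const double dt = 0.01;  // forward-Euler step size
    for ( int step = 1; step <= 1000; ++step )
    {
        x += dt * ( -x );    // Euler update
        if ( step % 200 == 0 )
            std::printf( "t = %.1f  |x(t)| = %.6f\n", step * dt, std::fabs( x ) );
    }
    // For any eps > 0, one can read off a T with |x(t)| < eps whenever t > T.
}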
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
I've been wondering if we can attribute any physical meaning to the inverse metric. I mean when we talk about the metric itself, there are lots of insights we can have towards its role in spacetime, yet I cannot see any physical meaning for the inverse metric. For now, I just see it as a tensor with the special property of giving the identity when contracted with the metric. Rigorously speaking, I would say it is not even an "inverse" actually, as it doesn't map like one. But still, is there any physical way of interpreting this tensor? Another way of looking at user40330's answer is to think of the inverse metric as the map from the space of one-forms (or differentials, if you prefer) to the space of vectors (or directional derivatives, if you prefer that language), and then to think of the metric as the inverse of this map. Namely $$g^{-1}({ d}v)=g^{ab}v_{b} = v^{a} = {\vec v}$$ and $$g({\vec v}) = g_{ab}v^{b} = v_{a} = dv$$ This map is obviously invertible, thanks to the properties of matrices, and you obviously have $g^{-1}(dv,dv) = g({\vec v},{\vec v})$. The tensor algebra is symmetric between one-forms and vectors. One could start with defining either of them first and then obtain the rest of the things. The inverse metric tensor is a bilinear map that takes two one-forms on a manifold and maps them into $\mathbb{R}$: $g^{\mu \nu}: A_\mu, B_\nu \rightarrow \mathbb{R}$. It of course transforms like a vector with respect to both of the indices. So to answer your question, the inverse metric tensor is as physical as the metric tensor. Just as the metric tensor provides a way to measure the length of vector fields, the inverse metric provides a way to measure the length of one-form fields.
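To see the two maps acting as mutual inverses in the simplest possible setting, here is a minimal sketch (an assumed flat Minkowski metric diag(-1, 1, 1, 1), which is its own inverse; indices are lowered with g_{ab} and raised back with g^{ab}):

#include <cstdio>

int main()
{
    // Assumed metric: Minkowski, diagonal entries of g_{ab} and g^{ab}.
    const double g[4]    = { -1.0, 1.0, 1.0, 1.0 };
    const double ginv[4] = { -1.0, 1.0, 1.0, 1.0 };
    const double v[4]    = { 2.0, 3.0, -1.0, 0.5 }; // a vector v^a

    for ( int a = 0; a < 4; ++a )
    {
        double v_low = g[a] * v[a];     // v_a = g_{ab} v^b (diagonal case)
        double v_up  = ginv[a] * v_low; // v^a = g^{ab} v_b, back again
        std::printf( "v^%d = %g, v_%d = %g, raised back = %g\n",
                     a, v[a], a, v_low, v_up );
    }
}

The round trip returning the original components is exactly the "inverse map" picture described in the answer.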
Uncertainty Quantification

A core task in metrology is the determination of measurement uncertainties. According to an international agreement, measurement uncertainties are determined by the "Guide to the expression of uncertainty in measurement" (GUM) and its supplements. There, the effects of uncertain input variables of nonlinear problems are treated as random variables, and output fluctuations are obtained by Monte Carlo (MC) sampling. Many applications in metrology, physics and engineering use partial differential equations (pde) which are solved by finite element methods (FEM). Such methods are computationally expensive, and the determination of uncertainties according to the GUM is often unfeasible. Our main goal is to develop methods for the determination of uncertainties for computationally expensive systems in compliance with the GUM.

Introduction

To investigate the effect of uncertain parameters of a pde on its solutions, a map is typically defined from these parameters to the pde's solutions. Such a map is called a forward model. Uncertain parameters (input quantities) are expressed as probability distributions. The aim is to determine the probability distribution of the solutions of the pde or derived quantities (output quantities), see Fig. 1.

Methods

Sampling methods

Sampling methods are characterized by randomly drawing from input distributions and calculating the associated output quantities. To obtain a distribution of output quantities from this procedure, a large number of repetitions is necessary. The choice of a specific sampling method can reduce the sample size, and in turn the number of forward calculations, significantly. For example, in many applications Latin hypercube sampling is more effective than the Monte Carlo method and therefore needs less computing time for the same precision/accuracy.

Polynomial chaos

Within the polynomial chaos approach, the output quantity of interest is approximated as a weighted sum of orthogonal polynomials of random variables. The weights are defined as integrals over the (multi-dimensional) space of input quantities, with the integrands being a product of the underlying deterministic model function and one of the polynomials. The integrals are evaluated using numerical quadrature, requiring model evaluations. This approach has very rapid convergence properties (in many cases, low-order polynomials give a good approximation of the model output), meaning that the method can outperform most sampling methods, at least as long as only a few random parameters are considered. Thus, the method is also applicable to computationally expensive systems as they occur in computational fluid dynamics, for example.

Surrogate models

Surrogate models approximate the forward model itself, rather than reducing the number of forward evaluations. In the case of pdes, solutions are interpolated within a certain parameter range by nonlinear functions. In particular, for certain input quantities (training points), output quantities are calculated rigorously using the forward model. From these results a surrogate model for a certain parameter range is constructed. The surrogate model connects the inputs to the output quantities by simple functions. With this method a speed-up of the computational time by several orders of magnitude is possible. Fig. 3 shows a sketch for the determination of the diffraction pattern of a nano-grating (scatterometry). The upper path depicts the rigorous FEM calculation of electromagnetic waves (computational time 2 min).
The lower path depicts the replacement with the surrogate model (less than one second). The pre-calculation time to construct the surrogate model was about 20 h.

Inverse Problems

For indirect measurements, the determination of uncertainties requires solving a statistical inverse problem. A flexible method for solving a statistical inverse problem that allows for the inclusion of prior knowledge is the Bayesian approach. The inclusion of prior knowledge is advantageous in metrology; here it is possible to combine different measurements to reduce measurement uncertainties (hybrid metrology). Bayes' theorem provides the basis for this approach. Prior knowledge about a certain measurand, given by the distribution $\pi_0$, is combined with the information from the measurement (likelihood function $\mathcal{L}$) to update the knowledge about the desired quantity (posterior distribution $\pi$), i.e., \[\pi (\theta;{\bf y} ) = \frac{\mathcal{L}(\theta;{\bf y})\pi_0(\theta)}{\int \mathcal{L}(\theta;{\bf y})\pi_0(\theta) \, d\theta }.\] The calculation of the output distribution (posterior) requires a large number of evaluations of the forward model. For computationally expensive problems this is very time consuming, which makes the direct calculation of uncertainties unfeasible. However, by using the prior knowledge, output distributions can be efficiently approximated and uncertainties calculated. (A minimal sketch of such a Bayesian update follows below.)

Applications

Flow

Since the polynomial chaos method requires only a small number of evaluations of the underlying deterministic problem, it is well suited for computationally expensive problems like CFD (computational fluid dynamics). In cooperation with AG 7.52, the influence of disturbed inflow profiles on the measurement result of a single-beam ultrasonic flow meter has been investigated using the polynomial chaos method in conjunction with CFD. The disturbed profiles were produced by a double bend out of plane. This case is especially important for metrology, since in practice one bend usually follows another, which can lead to huge measurement errors. Fig. 1 shows the expected error of the volume flow measured by the simulated measurement device (in cyan) as well as its standard deviation (in red). Additionally, the maximal deviation of the prediction with respect to the exact volume flow is shown (in blue). One can see that in the considered configuration the flow rate is underestimated on average. With increasing distance between the double bend and the measurement device, the mean error decreases from around 4% to almost 0%.

Scatterometry

The second application deals with the statistical inverse problem in scatterometry. Scatterometry is an indirect optical measurement method and is used to measure periodic nanostructures on surfaces. In particular, critical dimensions and sidewall angles of lines on a photomask can be measured nondestructively. Their precise production and measurement is of particular interest to the semiconductor industry. Fig. 5 shows a cross-section of a realistic EUV mask used for simulations. In our case the forward model is given by the map from the line geometry onto the diffraction pattern. Rigorous calculations are performed by a standard FEM solver (DIPOG-WIAS). For the geometry parameters line width, line height and sidewall angle, a certain prior distribution was chosen ($\pi_0$) and a surrogate model based on the polynomial chaos method was constructed.
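Here is the promised sketch of the Bayesian update (a hypothetical one-parameter toy model standing in for the FEM solver, with a Gaussian prior and likelihood; the posterior is evaluated on a grid rather than by MCMC):

#include <cstdio>
#include <cmath>

// Hypothetical forward model: stands in for the expensive FEM solver.
static double forward( double theta ) { return theta * theta; }

int main()
{
    const double y_meas = 4.0, sigma = 0.5; // measurement and its noise
    const double mu0 = 1.5, tau = 1.0;      // Gaussian prior pi_0
    const int N = 400;
    const double dtheta = 4.0 / N;
    double post[400], Z = 0.0;

    for ( int i = 0; i < N; ++i )
    {
        const double theta = i * dtheta;
        const double prior = std::exp( -0.5 * std::pow( ( theta - mu0 ) / tau, 2 ) );
        const double lik   = std::exp( -0.5 * std::pow( ( y_meas - forward( theta ) ) / sigma, 2 ) );
        post[i] = lik * prior;  // unnormalized posterior from Bayes' theorem
        Z += post[i] * dtheta;  // normalizing integral
    }

    double mean = 0.0;          // posterior mean as a simple uncertainty summary
    for ( int i = 0; i < N; ++i )
        mean += ( i * dtheta ) * ( post[i] / Z ) * dtheta;
    std::printf( "posterior mean of theta = %.3f\n", mean );
}

With an expensive forward model, the call to forward() is exactly what the surrogate replaces.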
The associated posterior distribution could be determined in several hours using a Markov chain Monte Carlo method (compared with a computational time of half a year if the rigorous method were used). Fig. 6 depicts the distributions for the geometry parameters. Green dashed lines indicate reference values.

Publications

2017
• S. Heidenreich, H. Gross, M. Bär and L. Wright: Uncertainty propagation in computationally expensive models: A survey of sampling methods and application to scatterometry. Measurement, 97, 79--87, 2017.

2016
• A. Weissenbrunner, A. Fiebach, S. Schmelter, M. Bär, P. Thamsen and T. Lederer: Simulation-based determination of systematic errors of flow meters due to uncertain inflow conditions. Flow Measurement and Instrumentation, 2016.
• S. Schmelter, A. Fiebach and A. Weissenbrunner: Polynomchaos zur Unsicherheitsquantifizierung in Strömungssimulationen für metrologische Anwendungen. tm - Technisches Messen, 83(2), 71-76, 2016.

2015
• A. Weissenbrunner, A. Fiebach, S. Schmelter, M. Straka, M. Bär and T. Lederer: Proceedings of IMEKO 2015 XXI World Congress "Measurement in Research and Industry", 2015.
• S. Heidenreich, H. Gross and M. Bär: Bayesian approach to the statistical inverse problem of scatterometry: Comparison of three surrogate models. International Journal for Uncertainty Quantification, 511, 2015.
• S. Schmelter, A. Fiebach, R. Model and M. Bär: Int. J. Comp. Fluid Dyn., 29(6-8), 411-422, 2015.

2014
• S. Heidenreich, H. Gross, M.-A. Henn, C. Elster and M. Bär: J. Phys. Conf. Ser., 490(1), 012007, 2014.
As I’ll use the name here, the independence of irrelevant alternatives (IIA) says that adding an additional option to the menu can’t transform an impermissible choice into a permissible one. An old story from Sidney Morgenbesser illustrates the seeming irrationality of violating this principle: asked to decide between steak and chicken, a man says “I’d rather have the steak”. The waiter tells him that they also have fish, to which he responds: “Oh, in that case, I’ll have the chicken”. This behavior looks irrational, and a principle like IIA explains why. I recently realized that causal decision theory (CDT) doesn’t abide by the IIA. Not just orthodox CDT, but basically any theory worth calling ‘causalist’ will end up violating the principle. The point of today’s post is to explain why.

1. Stage Setting

Orthodox CDT measures the choiceworthiness of acts with their utility, $\mathcal{U}$, where $$ \mathcal{U}(A) \stackrel{\text{df}}{=} \sum_K \Pr(K) \cdot \mathcal{D}(KA) $$(In the above, the $K$s are a partition of states of nature, and $\mathcal{D}(KA)$ says how strongly you desire that you perform $A$ in the state of nature $K$.) Orthodox CDT says that an act is permissible iff it maximizes utility. Let me use ‘$\mathbf{M}$’ for a menu of options, and I’ll use “$\mathcal{P}(\mathbf{M})$” for the permissible options on the menu $\mathbf{M}$. Then, orthodox CDT says that an act is permissible iff its utility is no less than that of any alternative act, $$ \mathcal{P}(\mathbf{M}) = \{ A \in \mathbf{M} \mid (\forall B \in \mathbf{M}) \mathcal{U}(A) \geqslant \mathcal{U}(B) \} $$ One noteworthy feature of the measure $\mathcal{U}$ is that its values can depend upon how likely you think you are to select each act. Let’s write “$\mathcal{U}_A(B)$” for the utility you would assign to the act $B$, were you to learn only that you had performed the act $A$: $$ \mathcal{U}_A(B) \stackrel{\text{df}}{=} \sum_K \Pr(K \mid A) \cdot \mathcal{D}(KA) $$ Then, in a choice between two options, $A$ and $B$, both of the following situations are possible:

Self-Undermining Choice Once chosen, every act is worse than the alternative. \begin{aligned} \mathcal{U}_A(B) > \mathcal{U}_A(A) \qquad \text{ and } \qquad \mathcal{U}_B(A) > \mathcal{U}_B(B) \end{aligned}

Self-Reinforcing Choice Once chosen, every act is better than the alternative. \begin{aligned} \mathcal{U}_A(A) > \mathcal{U}_A(B) \qquad\text{ and } \qquad \mathcal{U}_B(B) > \mathcal{U}_B(A) \end{aligned}

This can lead CDT’s verdicts to change as you make up your mind about what to do. In a self-undermining choice, once you follow CDT’s advice and intend to do the act it called rational, it will change its mind and begin to call you irrational. In a self-reinforcing choice, if you disregard its advice and do what it said was irrational, CDT will change its mind and call you rational for doing so. I’ve come to think that this is a reason to doubt orthodox CDT. But this feature won’t be relevant to anything I’m saying today. All I will need to appeal to here is the following, minimal commitment of CDT:

Minimal CDT In a choice between two options, $A$ and $B$, if the utility of $A$ exceeds the utility of $B$, and it would continue to do so whether you choose $A$ or $B$, then $B$ is impermissible. $$ \left( \mathcal{U}_A(A) > \mathcal{U}_A(B) \text{ and } \mathcal{U}_B(A) > \mathcal{U}_B(B) \right) \Rightarrow B \notin \mathcal{P}(\{ A, B \}) $$ Minimal CDT says that one-boxing is impermissible.
2. CDT violates IIA

What I’ll show here is that the following three principles are jointly inconsistent.

Independence of Irrelevant Alternatives (IIA) Given any two menus of options where the first is a subset of the second, and $A$ appears on both, if it is not permissible to choose $A$ from the smaller menu, then it is not permissible to choose $A$ from the larger menu. $$ A \in \mathbf{M} \subseteq \mathbf{M}^+ \Rightarrow \left[ A \notin \mathcal{P}(\mathbf{M}) \Rightarrow A \notin \mathcal{P}(\mathbf{M}^+) \right] $$

Minimal CDT In a choice between two options, $A$ and $B$, if the utility of $A$ exceeds the utility of $B$, and it would continue to do so whether you choose $A$ or $B$, then $B$ is impermissible. $$ \left( \mathcal{U}_A(A) > \mathcal{U}_A(B) \text{ and } \mathcal{U}_B(A) > \mathcal{U}_B(B) \right) \Rightarrow B \notin \mathcal{P}(\{ A, B \}) $$

No Dilemmas Given any menu of options, *some* option is permissible to choose from that menu. $$ \forall \mathbf{M} \quad \mathcal{P}(\mathbf{M}) \neq \varnothing $$

Since CDT is clearly committed to Minimal CDT and No Dilemmas, it must reject the IIA. To see why these three principles are jointly inconsistent, consider the following decision: You must choose between three boxes, labeled ‘$A$’, ‘$B$’, and ‘$C$’. You can take one, and only one, of the boxes. Yesterday, a reliable predictor made a prediction about how you would choose. Their predictions are 80% reliable—so, conditional on you taking box $X$, you’re 80% sure that this is what they predicted you’d do. If they predicted that you would take $A$, then they left nothing in $A$, 10 dollars in $C$, and a bill for 10 dollars in $B$. If they predicted that you would take $B$, then they left nothing in $B$, 10 dollars in $A$, and a bill for 10 dollars in $C$. If they predicted that you would take $C$, then they left nothing in $C$, 10 dollars in $B$, and a bill for 10 dollars in $A$. If we use “$K_X$” for the state of nature in which it was predicted that you would take box $X$, then the desirabilities and probabilities for this decision are given by the matrices below (rows of the first matrix are the acts $A, B, C$ and its columns the states $K_A, K_B, K_C$; the entry of the second matrix in row $K$ and column $X$ is $\Pr(K \mid X)$): $$ \mathcal{D} = \begin{pmatrix} 0 & 10 & -10 \\ -10 & 0 & 10 \\ 10 & -10 & 0 \end{pmatrix} \qquad \Pr = \begin{pmatrix} 0.8 & 0.1 & 0.1 \\ 0.1 & 0.8 & 0.1 \\ 0.1 & 0.1 & 0.8 \end{pmatrix} $$ Multiplying the left-hand-side matrix by the right-hand-side matrix gives us the following matrix of the utility you would assign to the row act, were you to learn only that you had performed the column act: $$ \mathcal{U} = \begin{pmatrix} 0 & 7 & -7 \\ -7 & 0 & 7 \\ 7 & -7 & 0 \end{pmatrix} $$ Now, suppose that, instead of being given a choice between $A$, $B$, and $C$, you were instead offered a choice between just $A$ and $B$—$C$ is taken off of the menu (though there’s still a 10% chance that the predictor falsely predicted that you would take $C$). In that case, notice that both $\mathcal{U}_A(A) > \mathcal{U}_A(B)$ and $\mathcal{U}_B(A) > \mathcal{U}_B(B)$. So Minimal CDT says that, in a decision between $A$ and $B$, $B$ is an impermissible choice, $B \notin \mathcal{P}(\{ A, B \})$. Also notice that a choice between $B$ and $C$ is exactly like a choice between $A$ and $B$. As is a choice between $C$ and $A$. That is: both $\mathcal{U}_B(B) > \mathcal{U}_B( C )$ and $\mathcal{U}_C(B) > \mathcal{U}_C( C )$. So Minimal CDT says that, in a decision between $B$ and $C$, $C$ is an impermissible choice, $C \notin \mathcal{P}(\{ B, C \})$. And both $\mathcal{U}_C( C ) > \mathcal{U}_C(A)$ and $\mathcal{U}_A( C ) > \mathcal{U}_A(A)$. So Minimal CDT says that, in a decision between $C$ and $A$, $A$ is an impermissible choice, $A \notin \mathcal{P}(\{ C, A \})$. Now, consider what it is permissible to choose from the full menu of options, $\{ A, B, C \}$.
By No Dilemmas, some option on this menu is permissible. Suppose it is permissible to choose $A$, $A \in \mathcal{P}(\{ A, B, C \})$. Then, $A \notin \mathcal{P}(\{ C, A \})$ and $A \in \mathcal{P}(\{ C, A, B \})$. This violates IIA. Suppose, on the other hand, that it is permissible to choose $B$, $B \in \mathcal{P}(\{ A, B, C \})$. Then, $B \notin \mathcal{P}(\{ A, B \})$ and $B \in \mathcal{P}(\{ A, B, C \})$. And this violates IIA. Suppose, finally, that it is permissible to choose $C$, $C \in \mathcal{P}(\{ A, B, C \})$. Then, $C \notin \mathcal{P}(\{ B, C \})$ and $C \in \mathcal{P}(\{ B, C, A \})$. Again, this violates IIA. So: whichever of $A, B,$ and $C$ is permissible, there will be a violation of IIA. So: if we assume Minimal CDT and No Dilemmas, then we will have violations of IIA.
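Stepping back, the cyclic dominance pattern driving this argument is easy to verify mechanically. A short sketch that recomputes the utility matrix from the payoffs and the 80%-reliable prediction probabilities given above:

#include <cstdio>

int main()
{
    // D[act][state]: desirability of the row act when the prediction was K_state.
    const double D[3][3] = {
        {   0.0,  10.0, -10.0 },  // act A under K_A, K_B, K_C
        { -10.0,   0.0,  10.0 },  // act B
        {  10.0, -10.0,   0.0 }   // act C
    };
    // P[state][act]: Pr(K_state | act) for the 80% reliable predictor.
    const double P[3][3] = {
        { 0.8, 0.1, 0.1 },
        { 0.1, 0.8, 0.1 },
        { 0.1, 0.1, 0.8 }
    };
    const char name[] = "ABC";

    // U[r][c] = U_c(r): utility of row act given you learn you chose column act.
    for ( int r = 0; r < 3; ++r )
    {
        double row[3];
        for ( int c = 0; c < 3; ++c )
        {
            row[c] = 0.0;
            for ( int k = 0; k < 3; ++k ) row[c] += D[r][k] * P[k][c];
        }
        std::printf( "U_.(%c): %5.1f %5.1f %5.1f\n", name[r], row[0], row[1], row[2] );
    }
    // Output reproduces the matrix in the post: e.g. U_A(A) = 0 > U_A(B) = -7
    // and U_B(A) = 7 > U_B(B) = 0, so Minimal CDT rules out B against A.
}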
Given two functions \(f\left(x\right)=\sqrt{25x^2-30x+9}\) and \(g\left(y\right)=y\). How many values of a are there such that \(f\left(a\right)=g\left(a\right)+7\)?

Nguyễn Linh Chi 08/08/2019 at 09:44

\(f\left(a\right)=\sqrt{25a^2-30a+9}\), \(g\left(a\right)=a\). We have the following:

\(f\left(a\right)=g\left(a\right)+7\)
\(\Leftrightarrow\sqrt{25a^2-30a+9}=a+7\)
\(\Leftrightarrow\left\{{}\begin{matrix}25a^2-30a+9=\left(a+7\right)^2\\a+7\ge0\end{matrix}\right.\)
\(\Leftrightarrow\left\{{}\begin{matrix}a\ge-7\\24a^2-44a-40=0\end{matrix}\right.\Leftrightarrow\left\{{}\begin{matrix}a\ge-7\\\left[{}\begin{matrix}a=\dfrac{5}{2}\\a=-\dfrac{2}{3}\end{matrix}\right.\end{matrix}\right.\Leftrightarrow\left[{}\begin{matrix}a=\dfrac{5}{2}\\a=-\dfrac{2}{3}\end{matrix}\right.\)

Finally, there are two values of a.

Given two functions \(f\left(x\right)=5x+1\) and \(g\left(x\right)=ax+3\). Find the value of g(1) if \(a=f\left(2\right)-f\left(-1\right)\).

How many values of the whole number m are there such that the function \(y=\left(2016-m^2\right)x+3\) is increasing?

Tôn Thất Khắc Trịnh 13/06/2019 at 03:46

For the function to be increasing, \(2016-m^2\) must be positive. Therefore, 2016 must be greater than \(m^2\). Since 2016 is a positive number, m has to range from \(-\sqrt{2016}\) to \(\sqrt{2016}\). In other words, m has to be between -44.899 and 44.899. Since m is a whole number, the minimal value of m is -44 and the maximum is 44. To calculate the number of values, we use this formula: \(N=\frac{44-(-44)}{1}+1=89\). To conclude, there are 89 values of m such that the function is increasing.
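A quick numeric check of both answers (a sketch that just plugs the candidate values back into the original conditions):

#include <cstdio>
#include <cmath>

int main()
{
    // First problem: verify f(a) = a + 7 for a = 5/2 and a = -2/3.
    const double roots[2] = { 5.0 / 2.0, -2.0 / 3.0 };
    for ( double a : roots )
    {
        const double f = std::sqrt( 25 * a * a - 30 * a + 9 );
        std::printf( "a = %g: f(a) = %g, a + 7 = %g\n", a, f, a + 7 );
    }

    // Last problem: count whole numbers m with 2016 - m^2 > 0.
    int count = 0;
    for ( int m = -50; m <= 50; ++m )
        if ( 2016 - m * m > 0 ) ++count;
    std::printf( "increasing for %d values of m\n", count ); // expect 89
}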
Bounds for Avalanche Critical Values of the Bak-Sneppen Model

2006, v.12, Issue 4, 679-694

ABSTRACT

We study the Bak-Sneppen model on locally finite transitive graphs $G$, in particular on $\mathbb{Z}^d$ and on $T_{\Delta}$, the regular tree with common degree $\Delta$. We show that the avalanches of the Bak-Sneppen model dominate independent site percolation, in a sense to be made precise. Since avalanches of the Bak-Sneppen model are dominated by a simple branching process, this yields upper and lower bounds for the so-called avalanche critical value $p_c^{BS}(G)$. Our main results imply that $1/ (\Delta+1) \le p_c^{BS}(T_\Delta) \le 1/(\Delta -1)$, and that $1/(2d+1) \leq p_c^{BS}(\mathbb{Z}^d) \leq 1/2d + 1/(2d)^2 + O(d^{-3})$, as $d\to\infty$.

Keywords: Bak-Sneppen model, critical values, coupling, site percolation, branching process
Probability Seminar

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.

If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton

Title: Survival and extinction of epidemics on random graphs with general degrees

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University

Title: When particle systems meet PDEs

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

Title: Fluctuations of the KPZ equation in $d \geq 2$ in a weak disorder regime

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d \geq 2$.

February 14, Timo Seppäläinen, UW-Madison

Title: Geometry of the corner growth model

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH

Title: On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Probability related talk in PDE Geometric Analysis seminar: Monday, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue

Title: Functional Limit Laws for Recurrent Excited Random Walks

Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location.
While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model, the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 7, TBA

March 14, TBA

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison

Title: Harmonic Analysis on $GL_n$ over finite fields, and Random Walks

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
The topics we'll look at in this article are:

- Friction
- Scene
- Collision Jump Table

I highly recommend reading up on the previous two articles in the series before attempting to tackle this one. Some key information in the previous articles is built upon within this article.

Note: Although this tutorial is written using C++, you should be able to use the same techniques and concepts in almost any game development environment.

Video Demo

Here's a quick demo of what we're working towards in this part:

Friction

Friction is a part of collision resolution. Friction always applies a force upon objects in the direction opposite to the motion in which they are to travel. In real life, friction is an incredibly complex interaction between different substances, and in order to model it, vast assumptions and approximations are made. These assumptions are implied within the math, and are usually something like "the friction can be approximated by a single vector" - similarly to how rigid body dynamics simulates real-life interactions by assuming bodies with uniform density that cannot deform.

Take a quick look at the video demo from the first article in this series: The interactions between the bodies are quite interesting, and the bouncing during collisions feels realistic. However, once the objects land on the solid platform, they just sort of all press away and drift off the edges of the screen. This is due to a lack of friction simulation.

Impulses, Again?

As you should recall from the first article in this series, a particular value, j, represented the magnitude of an impulse required to separate two penetrating objects during a collision. This magnitude can be referred to as jnormal or jN, as it is used to modify velocity along the collision normal. Incorporating a friction response involves calculating another magnitude, referred to as jtangent or jT. Friction will be modeled as an impulse. This magnitude will modify the velocity of an object along the negative tangent vector of the collision, or in other words along the friction vector. In two dimensions solving for this friction vector is straightforward, but in 3D the problem becomes much more complex.

Friction is quite simple, and we can make use of our previous equation for j, except we will replace all instances of the normal n with a tangent vector t.

\[ Equation 1:\\ j = \frac{-(1 + e)((V^{B}-V^{A})\cdot n)} {\frac{1}{mass^A} + \frac{1}{mass^B}}\]

Replace n with t:

\[ Equation 2:\\ j = \frac{-(1 + e)((V^{B}-V^{A})\cdot t)} {\frac{1}{mass^A} + \frac{1}{mass^B}}\]

Although only a single instance of n was replaced with t in this equation, once rotations are introduced a few more instances must be replaced besides the single one in the numerator of Equation 2.

Now the matter of how to calculate t arises. The tangent vector is a vector perpendicular to the collision normal that points "more towards" the relative velocity. This might sound confusing - don't worry, I have a diagram! Below you can see the tangent vector perpendicular to the normal. The tangent vector can either point to the left or the right. To the left would be "more away" from the relative velocity; it is defined as the perpendicular to the normal that is pointing "more towards" the relative velocity. As stated briefly earlier, friction will be a vector facing opposite to the tangent vector. This means that the direction in which to apply friction can be directly computed, since the normal vector was found during collision detection.
Knowing this, the tangent vector is (where n is the collision normal):

\[ V^R = V^{B}-V^{A} \\ t = V^R - (V^R \cdot n)\, n \]

All that is left to solve for jt, the magnitude of the friction impulse, is to compute the value directly using the equations above. There are some very tricky pieces after this value is computed that will be covered shortly, so this isn't the last thing needed in our collision resolver:

// Re-calculate relative velocity after normal impulse
// is applied (impulse from first article, this code comes
// directly thereafter in the same resolve function)
Vec2 rv = VB - VA

// Solve for the tangent vector
Vec2 tangent = rv - Dot( rv, normal ) * normal
tangent.Normalize( )

// Solve for magnitude to apply along the friction vector
float jt = -Dot( rv, tangent )
jt = jt / (1 / MassA + 1 / MassB)

The above code follows Equation 2 directly. Again, it's important to realize that the friction vector points in the opposite direction of our tangent vector, and as such we must apply a negative sign when we dot the relative velocity along the tangent to solve for the relative velocity along the tangent vector. This negative sign flips the tangential velocity so that it points in the direction in which friction acts.

Coulomb's Law

Coulomb's law is the portion of friction simulation that most programmers have trouble with. I myself had to do quite a bit of studying to figure out the correct way of modeling it. The trick is that Coulomb's law is an inequality. Coulomb friction states:

\[ Equation 3: \\ F_f \leq \mu F_n \]

In other words, the force of friction is always less than or equal to the normal force multiplied by some constant μ (whose value depends on the materials of the objects). The normal force is just our old j magnitude multiplied by the collision normal. So if our solved jt (representing the force of friction) is less than μ times the normal force, then we can use our jt magnitude as friction. If not, then we must use our normal force times μ instead. This "else" case is a form of clamping our friction below some maximum value, the max being the normal force times μ. The whole point of Coulomb's law is to perform this clamping procedure. This clamping turns out to be the most difficult portion of friction simulation for impulse-based resolution to find documentation on anywhere - until now, at least! Most white papers I could find on the subject either skipped friction entirely, or stopped short and implemented improper (or non-existent) clamping procedures. Hopefully by now you appreciate that getting this part right is important. Let's just dish out the clamping all in one go before explaining anything.
This next code block is the previous code example with the finished clamping procedure and friction impulse application all together:

// Re-calculate relative velocity after normal impulse
// is applied (impulse from first article, this code comes
// directly thereafter in the same resolve function)
Vec2 rv = VB - VA

// Solve for the tangent vector
Vec2 tangent = rv - Dot( rv, normal ) * normal
tangent.Normalize( )

// Solve for magnitude to apply along the friction vector
float jt = -Dot( rv, tangent )
jt = jt / (1 / MassA + 1 / MassB)

// PythagoreanSolve = A^2 + B^2 = C^2, solving for C given A and B
// Use to approximate mu given friction coefficients of each body
float mu = PythagoreanSolve( A->staticFriction, B->staticFriction )

// Clamp magnitude of friction and create impulse vector
Vec2 frictionImpulse
if(abs( jt ) < j * mu)
  frictionImpulse = jt * tangent
else
{
  float dynamicFriction = PythagoreanSolve( A->dynamicFriction, B->dynamicFriction )
  frictionImpulse = -j * tangent * dynamicFriction
}

// Apply
A->velocity -= (1 / A->mass) * frictionImpulse
B->velocity += (1 / B->mass) * frictionImpulse

I decided to use this formula to solve for the friction coefficient between two bodies, given a coefficient for each body:

\[ Equation 4: \\ Friction = \sqrt[]{Friction^2_A + Friction^2_B} \]

I actually saw someone else do this in their own physics engine, and I liked the result. An average of the two values would work perfectly fine to get rid of the use of the square root. Really, any form of picking the friction coefficient will work; this is just what I prefer. Another option would be to use a lookup table where the type of each body is used as an index into a 2D table.

It is important that the absolute value of jt is used in the comparison, since the comparison is theoretically clamping raw magnitudes below some threshold. Since j is always positive, it must be flipped in order to represent a proper friction vector, in the case where dynamic friction is used.

Static and Dynamic Friction

In the last code snippet, static and dynamic friction were introduced without any explanation! I'll dedicate this whole section to explaining the difference between, and the necessity of, these two types of values. Something interesting happens with friction: it requires an "energy of activation" in order for objects to start moving when at complete rest. When two objects are resting upon one another in real life, it takes a fair amount of energy to push on one and get it moving. However, once you get something sliding it is often easier to keep it sliding from then on.

This is due to how friction works on a microscopic level. Another picture helps here: As you can see, the small deformities between the surfaces are really the major culprit that creates friction in the first place. When one object is at rest on another, microscopic deformities rest between the objects, interlocking. These need to be broken or separated in order for the objects to slide against one another.

We need a way to model this within our engine. A simple solution is to provide each type of material with two friction values: one for static and one for dynamic. The static friction is used to clamp our jt magnitude. If the solved jt magnitude is low enough (below our threshold), then we can assume the object is at rest, or nearly at rest, and use the entire jt as an impulse.
On the flipside, if our solved jt is above the threshold, it can be assumed that the object has already broken the "energy of activation", and in such a situation a lower friction impulse is used, which is represented by a smaller friction coefficient and a slightly different impulse computation.

Scene

Assuming you did not skip any portion of the Friction section, well done! You have completed the hardest part of this entire series (in my opinion). The Scene class acts as a container for everything involving a physics simulation scenario. It calls and uses the results of any broad phase, contains all rigid bodies, runs collision checks and calls resolution. It also integrates all live objects. The scene also interfaces with the user (as in the programmer using the physics engine). Here is an example of what a scene structure may look like:

class Scene
{
public:
  Scene( Vec2 gravity, real dt );
  ~Scene( );

  void SetGravity( Vec2 gravity )
  void SetDT( real dt )

  Body *CreateBody( ShapeInterface *shape, BodyDef def )

  // Inserts a body into the scene and initializes the body (computes mass).
  //void InsertBody( Body *body )

  // Deletes a body from the scene
  void RemoveBody( Body *body )

  // Updates the scene with a single timestep
  void Step( void )

  float GetDT( void )
  LinkedList *GetBodyList( void )
  Vec2 GetGravity( void )
  void QueryAABB( CallBackQuery cb, const AABB& aabb )
  void QueryPoint( CallBackQuery cb, const Point2& point )

private:
  float dt     // Timestep in seconds
  float inv_dt // Inverse timestep in seconds
  LinkedList body_list
  uint32 body_count
  Vec2 gravity
  bool debug_draw
  BroadPhase broadphase
};

There is not anything particularly complex about the Scene class. The idea is to allow the user to add and remove rigid bodies easily. The BodyDef is a structure that holds all information about a rigid body, and can be used to allow the user to insert values as a sort of configuration structure. The other important function is Step(). This function performs a single round of collision checks, resolution and integration. This should be called from within the timestepping loop outlined in the second article of this series.

Querying a point or AABB involves checking to see which objects actually collide with either a point or an AABB within the scene. This makes it easy for gameplay-related logic to see how things are placed within the world.

Jump Table

We need an easy way to pick out which collision function should be called, based on the types of two different objects. In C++ there are two major ways that I am aware of: double dispatch and a 2D jump table. In my own personal tests I found the 2D jump table to be superior, so I'll go into detail about how to implement that. If you're planning to use a language other than C or C++, I am sure an array of functions or functor objects can be constructed similarly to a table of function pointers (which is another reason I chose to talk about jump tables rather than other options that are more specific to C++).

A jump table in C or C++ is a table of function pointers. Indices representing arbitrary names or constants are used to index into the table and call a specific function. The usage could look something like this for a 1D jump table:

enum Animal
{
  Rabbit,
  Duck,
  Lion,
};

void (*const talk[])( void ) = {
  RabbitTalk,
  DuckTalk,
  LionTalk,
};

// Call a function from the table with 1D virtual dispatch
talk[Rabbit]( ) // calls the RabbitTalk function

The above code actually mimics what the C++ language itself implements with virtual function calls and inheritance.
However, C++ only implements single-dimensional virtual calls. A 2D table can be constructed by hand. Here is some pseudocode for a 2D jump table used to call collision routines:

collisionCallbackArray = {
  AABBvsAABB
  AABBvsCircle
  CirclevsAABB
  CirclevsCircle
}

// Call a collision routine for collision detection between A and B,
// two colliders, without knowing their exact collider type
// (type can be either AABB or Circle)
collisionCallbackArray[A->type][B->type]( A, B )

And there we have it! The actual types of each collider can be used to index into a 2D array and pick a function to resolve the collision. Note, however, that AABBvsCircle and CirclevsAABB are almost duplicates. This is necessary! The normal needs to be flipped for one of these two functions, and that is the only difference between them. This allows for consistent collision resolution, no matter the combination of objects to resolve.

Conclusion

By now we have covered a huge number of topics in setting up a custom rigid body physics engine entirely from scratch! Collision resolution, friction, and engine architecture are the topics that have been covered thus far. An entirely successful physics engine suitable for many production-level two-dimensional games can be constructed with the knowledge presented in this series so far.

Looking ahead, I plan to write one more article devoted entirely to a very desirable feature: rotation and orientation. Oriented objects are exceedingly attractive to watch interacting with one another, and they are the final piece that our custom physics engine requires. Resolution of rotation turns out to be quite simple, though collision detection takes a hit in complexity.

Good luck until next time, and please do ask questions or post comments below!
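If you'd like to play with the dispatch idea outside of C++, here is a minimal Python sketch of the same 2D jump table. The type constants, collider class and function names are my own placeholders, not the engine's actual API:

# 2D dispatch table indexed by the pair of collider types
def aabb_vs_aabb( a, b ):     print( "resolve AABB vs AABB" )
def aabb_vs_circle( a, b ):   print( "resolve AABB vs Circle" )
def circle_vs_aabb( a, b ):   print( "resolve Circle vs AABB (flipped normal)" )
def circle_vs_circle( a, b ): print( "resolve Circle vs Circle" )

AABB, CIRCLE = 0, 1

collision_callback_array = [
    [aabb_vs_aabb,   aabb_vs_circle],
    [circle_vs_aabb, circle_vs_circle],
]

class Collider:
    def __init__( self, shape_type ):
        self.type = shape_type

A, B = Collider( AABB ), Collider( CIRCLE )
collision_callback_array[A.type][B.type]( A, B )  # -> resolve AABB vs Circle

The design point is the same as in C++: the pair of type constants fully determines which routine runs, so no if/else ladder or double dispatch is needed.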
How to write the plus/minus signs aligned with the bottom of the fraction? I can write the rest of equations (1.1) and (1.2), but cannot write the plus/minus sign at the bottom.

Here are two options:

\documentclass{article}
\usepackage{amsmath}
\newcommand{\subplus}{\mathbin{\genfrac{}{}{0pt}{}{}{+}}}
\newcommand{\subminus}{\mathbin{\genfrac{}{}{0pt}{}{}{-}}}
\newcommand{\subcdots}{\genfrac{}{}{0pt}{}{}{\cdots}}
\begin{document}

Option 1:
\[
  1 - \begin{array}{@{}*{8}{c@{}}}
    1 & & 1 & & 1 & & 1 \\
    \cline{1-1}\cline{3-3}\cline{5-5}\cline{7-7}
    1 & {}+{} & 1 & {}-{} & 1 & {}+{} & 1 & {}- \cdots
  \end{array}
  = \frac{\sqrt{5}-1}{2}
\]

Option 2:
\[
  1 - \frac{1}{1} \subplus \frac{1}{1} \subminus \frac{1}{1} \subplus \frac{1}{1} \subminus \subcdots
  = \frac{\sqrt{5}-1}{2}
\]

\end{document}

Option 1 sets the fraction as an array, using \cline to simulate the fraction lines. Option 2 uses amsmath to set a fraction with a 0pt horizontal rule. Additional macros have been created to set these using \subplus, \subminus and \subcdots.
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks

@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :)

2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus

Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown

Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them

I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.

@ooolb Even if that is really possible (I always can talk about things in a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams

@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs

Actually that makes me wonder, is the space of all coordinate choices bigger than that of all possible moves of Go?

enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes

orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others

Btw, since gravity is nonlinear, do we expect that if a region where spacetime is frame-dragged in the clockwise direction is superimposed on a spacetime that is frame-dragged in the anticlockwise direction, the result will be a spacetime with no frame drag?
(one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge)

Well, I'm a beginner in the study of General Relativity, ok? My knowledge of the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves?

@JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves.

Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like.

Pardon, I just spent some naive-philosophy time here with these discussions

The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.

My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago

(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)

that's true. though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-spacebar indentation convention

@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice

I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy

I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do)

Hi to all. Does anyone know where I could write matlab code online (for free)?
Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks.

@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)

@Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA.

@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but by connecting to the university server - which means remotely running another environment - I found an older version of matlab). But thanks again.

@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding.

If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
WHY? Word embedding using neural networks (skip-gram) seems to outperform traditional count-based distributional models. However, this paper points out that the current superiority of word2vec is not because of the algorithm itself, but because of system design choices and hyperparameter optimizations.

Note: The traditional method of word representation is count-based (bag-of-contexts). This method counts the co-occurrence of two words within a certain window, forming a sparse matrix M with a row per vocabulary word and a column per context. The popular measure of association is pointwise mutual information (PMI). To solve the problem of negative values in PMI, the modified positive PMI (PPMI) is widely used.

$$PMI(w, c) = \log\frac{\hat{P}(w,c)}{\hat{P}(w)\hat{P}(c)} = \log\frac{\#(w,c)\cdot|D|}{\#(w)\cdot\#(c)}$$
$$PPMI(w,c) = \max(PMI(w,c), 0)$$

Since the word representations of a count-based model can be too sparse, Singular Value Decomposition (SVD) is used on the PPMI matrix to reduce the dimension.

$$M_d = U_d\cdot\Sigma_d\cdot V_d^T,\qquad W^{SVD} = U_d\cdot\Sigma_d,\qquad C^{SVD} = V_d$$

WHAT? The authors compared PPMI, SVD, SGNS (skip-gram with negative sampling), and GloVe while varying different hyperparameters. The paper defines three kinds of hyperparameters: pre-processing hyperparameters, association metric hyperparameters, and post-processing hyperparameters. Dynamic context window (dyn), subsampling (sub), and deleting rare words (del) are the pre-processing hyperparameters. Shifted PMI (neg) and context distribution smoothing (cds) are the association metric hyperparameters. Adding context vectors (w+c), eigenvalue weighting (eig) and vector normalization (nrm) are the post-processing hyperparameters. The space of hyperparameters considered is tabulated in the paper, which enumerated the full hyperparameter space for each embedding method. The quality of the embeddings was tested on two kinds of task. Word similarity was tested on the WordSim Similarity, WordSim Relatedness, MEN, Mechanical Turk, Rare Words, and SimLex-999 datasets. Word analogy was tested on MSR's and Google's analogy datasets with two different measures: 3CosAdd and 3CosMul.

So? The most important result was that no single algorithm outperformed the others. The performance varied by dataset and hyperparameter configuration. Beneficial configurations and practical recommendations are given in the paper.

Critic: Deep analysis of word embeddings. Amazing experiments.

Levy, Omer, Yoav Goldberg, and Ido Dagan. "Improving distributional similarity with lessons learned from word embeddings." Transactions of the Association for Computational Linguistics 3 (2015): 211-225.
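To make the count-based pipeline concrete, here is a small NumPy sketch of PPMI followed by truncated SVD. It is a toy illustration with a made-up count matrix, not the authors' code:

import numpy as np

# Toy word-context co-occurrence counts #(w, c)
counts = np.array([[4., 1., 0.],
                   [1., 3., 2.],
                   [0., 2., 5.]])
total = counts.sum()                         # |D|
w_counts = counts.sum(axis=1, keepdims=True) # #(w)
c_counts = counts.sum(axis=0, keepdims=True) # #(c)

# PMI(w, c) = log( #(w,c) * |D| / (#(w) * #(c)) ); log(0) -> -inf is fine here
with np.errstate(divide="ignore"):
    pmi = np.log(counts * total / (w_counts * c_counts))
ppmi = np.maximum(pmi, 0.0)                  # PPMI = max(PMI, 0)

# Rank-d truncated SVD: W = U_d Sigma_d, C = V_d
U, S, Vt = np.linalg.svd(ppmi)
d = 2
W = U[:, :d] * S[:d]
C = Vt[:d].T
print(ppmi.round(2))
print(W.round(2))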
While trying to answer this question I initially tried to argue via contradiction and was led to the following result:

Unproved Theorem: For each positive integer $n$ let $J_n$ be a union of a finite number of non-overlapping (closed) sub-intervals of $[a, b]$, and further let the combined length of the sub-intervals in $J_n$ be greater than or equal to $\delta>0$ for all $n$. Then there is at least one point $c\in[a, b]$ which lies in infinitely many $J_n$.

Two intervals are non-overlapping if their interiors are disjoint.

Unfortunately I was unable to prove the theorem and I need some help here. Also, approaches without the use of measure theory are desired, as I was able to answer the linked question using measure theory.

The relation between the above theorem and the linked question is based on the fact that if a Riemann integral is positive then the function being integrated is positive on some union like $J_n$. More formally, Apostol proves the following theorem in an exercise (see Exercise 7.35, page 180, Mathematical Analysis):

Theorem: Let $f:[a, b] \to\mathbb {R}$ be a non-negative Riemann integrable function such that $I=\int_{a} ^{b} f(x) \, dx>0$. Let $M$ be a positive bound for $f$ and $\delta=\dfrac{I} {2(M+b-a)}$. Then the set $A=\{x\mid f(x) \geq \delta\}$ contains a finite number of non-overlapping intervals whose combined length is at least $\delta$.

Here is one approach which sounds promising, but there are certain doubts. Let's assume on the contrary that no such point $c$ exists. Then for each $x\in[a, b]$ there is a positive integer $n_x$ such that $x\notin J_n$ for all $n\geq n_x$. Since $J_n$ is closed, it follows that there is a neighborhood $I_x$ such that $I_x\cap J_n=\emptyset$ for all $n\geq n_x$. By Heine–Borel we can choose a finite number of such neighborhoods, say $I_{x_1},\dots,I_{x_p}$, which cover $[a, b]$. Let $N$ be the maximum of $n_{x_1},\dots,n_{x_p}$. Then $[a, b] \cap J_n=\emptyset$ for $n\geq N$, and this is a contradiction.

The problem with the above proof is that the value $\delta$ is used nowhere; that part of the hypothesis seems crucial. The wrong inference has since been found and struck out (closedness of $J_n$ gives a neighborhood avoiding $J_n$ for each fixed $n$, but not a single neighborhood working for all $n\geq n_x$ at once). I will leave the above wrong proof there so that readers can avoid it.

Update: On further investigation, as well as looking at this answer to the linked question, it appears that the unproved theorem mentioned above is equivalent to the theorem of Arzelà:

Arzelà's Theorem: Let $\{f_n\}$ be a sequence of functions $f_n:[a, b] \to\mathbb {R}$ such that each $f_n$ is Riemann integrable on $[a, b]$ and $|f_n(x)|\leq M$ for all positive integers $n$ and all $x\in[a, b]$. Further let $f_n$ converge pointwise to a Riemann integrable function $f$ on $[a, b]$. Then $\int_{a} ^{b} f_n(x) \, dx\to\int_{a} ^{b} f(x) \, dx$.

By considering the functions $f_n-f$, the above theorem is reduced to the special case $f(x) =0$ for all $x\in[a, b]$. Thus let $f_n(x) \to 0$ for all $x\in[a, b]$ and $|f_n(x)| \leq M$ for all $x\in[a, b]$ and all $n\in\mathbb {N}$. Also, since $|\int_{a} ^{b} f_n(x) \, dx|\leq \int_{a} ^{b} |f_n(x) |\, dx$, the theorem is reduced to the case when each $f_n$ is non-negative. Thus Arzelà's theorem is reduced to the equivalent simpler formulation:

Arzelà's Theorem (Simplified): Let $\{f_n\}$ be a sequence of non-negative Riemann integrable functions $f_n:[a, b] \to\mathbb{R}$ which converges to $0$ pointwise on $[a, b]$, and let $f_n(x) \leq M$ for all $x\in[a, b]$ and all $n\in\mathbb {N}$.
Then $\int_{a} ^{b} f_n(x) \, dx$ converges to $0$.

Arguing by contradiction, suppose that $\int_{a} ^{b} f_n(x) \, dx$ does not converge to $0$. Then there is an $\epsilon>0$ and a subsequence $f_{n_k}$ such that $\int_{a} ^{b} f_{n_k} (x) \, dx\geq \epsilon$. Without any loss of generality we can assume the subsequence $f_{n_k}$ to be the entire sequence $f_n$, so that $\int_{a} ^{b} f_n(x) \, dx\geq \epsilon$. By the theorem from Apostol's exercise, this means that if $\delta=\dfrac{\epsilon} {2(M+b-a)}$ then the set $A_n=\{x\mid f_n(x) \geq\delta\}$ contains a union $J_n$ of non-overlapping subintervals of $[a, b]$ whose combined length is not less than $\delta$. By the theorem stated at the beginning of this post there is a point $c\in[a, b]$ which lies in infinitely many $J_n$, and hence $f_n(c) \geq \delta$ for infinitely many $n$. This contradicts the hypothesis that $f_n$ converges to $0$ pointwise on $[a, b]$. This completes the proof of Arzelà's theorem.

Assuming the truth of Arzelà's theorem, one can prove the theorem mentioned at the beginning of this post. Just take $f_n$ to be the indicator function of $J_n$. If every $x\in[a, b]$ lies in only a finite number of $J_n$, then $f_n$ converges pointwise to $0$ on $[a, b]$. It can be checked that all hypotheses of Arzelà's theorem are satisfied, and hence the integrals $\int_{a} ^{b} f_n(x) \, dx$ converge to $0$. But these integrals represent the length of $J_n$, which is at least $\delta$, and thus we reach a contradiction. In this manner we see that the theorem mentioned at the beginning of the post is equivalent to the theorem of Arzelà.
SageMath

Revision as of 14:17, 29 September 2017

SageMath (formerly Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative to commercial programs such as Maple, Matlab, and Mathematica. SageMath provides support for the following:

Calculus: using Maxima and SymPy.
Linear Algebra: using the GSL, SciPy and NumPy.
Statistics: using R (through RPy) and SciPy.
Graphs: using matplotlib.
An interactive shell using IPython.
Access to Python modules such as PIL, SQLAlchemy, etc.

Installation

The sagemath package contains the command-line version; a separate documentation package provides HTML documentation and inline help from the command line. sage-notebook includes the browser-based notebook interface.

Note: Most of the standard and optional Sage packages are available as optional dependencies of the sagemath package or in the AUR, therefore they have to be installed additionally as normal Arch packages in order to take advantage of their features. Note that there is no need to install them with sage -i; in fact, this command will not work if you installed SageMath with pacman.

Usage

SageMath mainly uses Python as a scripting language, with a few modifications to make it better suited for mathematical computations.

SageMath command-line

SageMath can be started from the command-line:

$ sage

For information on the SageMath command-line see this page. Note, however, that it is not very comfortable for some uses such as plotting.
When you try to plot something, for example:

sage: plot(sin,(x,0,10))

SageMath opens the plot in an external application.

Sage Notebook

Note: The SageMath Flask notebook is deprecated in favour of the Jupyter notebook. The Jupyter notebook is recommended for all new worksheets. You can use the sage-notebook-exporter application to convert your Flask notebooks to Jupyter.

A better-suited interface for advanced usage of SageMath is the Notebook. To start the Notebook server from the command-line, execute:

$ sage -n

The notebook will be accessible in the browser from http://localhost:8080 and will require you to log in. However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command:

$ sage -c "notebook(automatic_login=True)"

Jupyter Notebook

SageMath also provides a kernel for the Jupyter notebook in the sagemath-jupyter package. To use it, launch the notebook with the command

$ jupyter notebook

and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the %display latex command, and 3D plots if jmol is installed.

Cantor

Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath. Cantor is available in the official repositories as the cantor package, or as part of the KDE application groups.

Optional additions

SageTeX

If you have TeX Live installed on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically, so you can start using it straight away. As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex):

Include the sagetex package in the preamble of your document with the usual \usepackage{sagetex}.

Create a sagesilent environment in which you insert your code:

\begin{sagesilent}
dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1)))
dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1)))
p1 = plot(dob,(x, 1, 10), color='blue')
p2 = plot(dpr,(x, 1, 10), color='red')
ptot = p1 + p2
ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$'])
\end{sagesilent}

Create the plot, e.g. inside a float environment:

\begin{figure}
\begin{center}
\sageplot[width=\linewidth]{ptot}
\end{center}
\end{figure}

Compile your document with the following procedure:

$ pdflatex <doc.tex>
$ sage <doc.sage>
$ pdflatex <doc.tex>

Then you can have a look at your output document. The full documentation of SageTeX is available on CTAN.

Troubleshooting

TeX Live does not recognize SageTeX

If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder):

Copy the files to the texmf directory:

# cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/

Refresh TeX Live:

# texhash /usr/share/texmf/
texhash: Updating /usr/share/texmf/.//ls-R...
texhash: Done.
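As a quick illustration of the Jupyter kernel's LaTeX display mode mentioned above, a notebook cell session might look like this (a sketch; the integral is just an arbitrary example):

sage: %display latex
sage: integrate(sin(x)^2, x)    # the result is now rendered as typeset mathematics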
Well, consider an orbit - I'm trying to calculate the exact time spent in the shadow of the body you orbit around. An explanation of "shadow" (the sun is positioned to the far left):

For a circular orbit this is quite easy: one just calculates the orbit radius and solves it using a simple sine ($T$ is the orbital period):
$$R_{earth} = r \cdot\sin(\theta)$$
$$t = T \cdot \frac{\theta}{\pi}$$
(notice the division by $\pi$, since $\theta$ represents half of the shadow arc.)

However, this is for the specific case of a circular orbit. I'm wondering how to do it for a (highly) eccentric orbit. The simple equation above becomes slightly more complicated, since $r$ is no longer constant:
$$r = a \frac{1-e^2}{1 + e \cdot \cos(\theta)}$$
For a point around the periapsis, filling that into the equation above results (Wolfram Alpha) in something I do not particularly like. Solving it at an arbitrary point is even worse:
$$r = a \frac{1-e^2}{1 + e \cdot \cos(\theta_{avg} \pm \theta)}$$
Once I have the true anomalies ($\theta$) I could use a straightforward solution: eccentric anomaly -> mean anomaly -> time.

Before I start this, a question pops up: does the time in shadow actually depend on the position (true anomaly/radius) of the shadow? When we're close to the planet, the planet overshadows a larger angle of the orbit. However, due to Kepler's law, an object also moves faster at this point. Specifically, does Kepler's second law prove that the time in shadow is independent of the true anomaly of the shadow? Kepler's second law:
$$\frac{dA}{dt} = \tfrac{1}{2} r^2 \frac{d\theta}{dt}$$
I have a feeling that through Kepler's law the above problem could be reduced a lot. Once I know the position of the longest shadow time, I could solve the above equations (wondering if I should try to fill in the equations or solve it numerically).
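Not a closed form, but as a sanity check for whatever analytic route you take, the shadow time can be computed numerically straight from Kepler's second law, dt = (r^2/h) dtheta. This is my sketch; the orbital elements and the cylindrical-shadow model with the sun along the theta = 0 direction are assumptions of mine:

import numpy as np

mu = 3.986e14                     # Earth's gravitational parameter, m^3/s^2
R  = 6.371e6                      # Earth's radius, m
a, e = 2.0e7, 0.4                 # assumed semi-major axis and eccentricity

h = np.sqrt(mu * a * (1 - e**2))  # specific angular momentum

theta = np.linspace(0.0, 2.0 * np.pi, 400001)
r = a * (1 - e**2) / (1 + e * np.cos(theta))

# Cylindrical shadow: sun towards theta = 0, so shadowed points lie on the
# anti-sun side (cos(theta) < 0) within one planet radius of the shadow axis.
in_shadow = (np.cos(theta) < 0) & (np.abs(r * np.sin(theta)) < R)

# Kepler's second law: dt = r^2 / h dtheta, integrated over the shadowed arc
dtheta = theta[1] - theta[0]
t_shadow = np.sum(r[in_shadow]**2) * dtheta / h
print(t_shadow / 60.0, "minutes in shadow per orbit")

Rotating the shadow axis in the in_shadow condition and recomputing lets you test the side question directly, i.e. whether the shadow time changes with where the shadow sits along the orbit.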
Regularity of extremal solutions of semilinear fourth-order elliptic problems with general nonlinearities

School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, 16846-13114, Iran

The paper concerns the semilinear fourth-order problem $\Delta^{2}u = \lambda f(u)$ in a bounded domain $\Omega \subset \mathbb{R}^{n}$, with boundary conditions $u = \Delta u = 0$ on $\partial \Omega$, where $\lambda > 0$ and $f:[0, a_{f}) \to \Bbb{R}_{+}$ $\left( 0 < {a_f} \le \infty \right)$ satisfies $f(0) > 0$ and
$$0<\tau_{-}: = \liminf\limits_{t \to a_{f}} \frac{f(t)f''(t)}{f'(t)^{2}}\leq \tau_{+}: = \limsup\limits_{t \to a_{f}} \frac{f(t)f''(t)}{f'(t)^{2}}<2.$$
The analysis rests on the stability inequality
$$\sqrt{\lambda_{m}}\int_{\Omega}\sqrt{f'(u_{m})}\,\phi^{2}\,dx\leq\int_{\Omega}|\nabla\phi|^{2}\,dx, \quad\text{for all}~\phi \in H^{1}_{0}(\Omega),$$
and on the roots of the equation
$$(2-\tau_{-})^{2} \alpha^{4}- 8(2-\tau_{+})\alpha^{2}+4(4-3\tau_{+})\alpha-4(1-\tau_{+}) = 0.$$

Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

Citation: A. Aghajani, S. F. Mottaghi. Regularity of extremal solutions of semilinear fourth-order elliptic problems with general nonlinearities. Communications on Pure & Applied Analysis, 2018, 17 (3) : 887-898. doi: 10.3934/cpaa.2018044
It's important to note that none of this is specific to quantum mechanics, and that you get exactly the same structure for the corresponding problem within classical hamiltonian mechanics. There, you have the hamiltonian$$H=\frac{1}{2m}p_x^2 + \frac{1}{2m}\left(p_y -\frac{eB}{c}x \right)^2,$$which naturally conserves $p_y$, since $y$ does not appear in $H$ and therefore $\{p_y,H\}=0$. To see what's going on, it helps to write out Hamilton's equations explicitly:\begin{align}\dot x & = \frac{\partial H}{\partial p_x} = \frac{p_x}{m}\\\dot p_x & = -\frac{\partial H}{\partial x} = \frac{eB}{mc}\left(p_y-\frac{eB}{c}x\right)\\\dot y & = \frac{\partial H}{\partial p_y} = \frac{p_y-\tfrac{eB}{c}x}{m}\\\dot p_y & = -\frac{\partial H}{\partial y} = 0.\end{align} Here you get the same structure as in quantum mechanics: the canonical momentum $p_y$ is conserved, but it differs from the kinematic momentum $m\dot y = p_y-\tfrac{eB}{c}x$, which is obviously not conserved (as opposed, in particular, to what you stated in a comment).

That said, it is indeed very odd, at least at first glance, that you can change which component of the canonical momentum is conserved and which one isn't using just a gauge transformation. Since the gauge transformation isn't physical (i.e. it's just in our heads), this does look very odd; however, it's important to note that canonical momentum is also a quantity that's just in our heads, so there isn't that much of a paradox.

Digging down a bit deeper, though, it is intuitively obvious that there should be two independent constants of the motion, and with a bit of work one can see that this is indeed the case. Leaving the canonical momenta out of the picture for a moment, these two constants of the motion are given by$$x_0= x+\frac{mc}{eB}v_y \quad\text{and}\quad y_0 =y-\frac{m c}{eB}v_x.$$It is easy to verify manually from the equations of motion that these two quantities are conserved. Moreover, using this conservation property, their two definitions above can be easily rephrased in the form\begin{align}\frac{\mathrm d}{\mathrm dt}(y-y_0) & = -\tfrac{eB}{mc}(x-x_0)\\\frac{\mathrm d}{\mathrm dt}(x-x_0) & = +\tfrac{eB}{mc}(y-y_0).\end{align}This is a first-order ODE in $(x-x_0,y-y_0)$, which is easily solved to show that $(x,y)$ circles about $(x_0,y_0)$; that is, the conserved quantity $(x_0,y_0)$ is the centre of the particle's circular motion.

The reason the Landau gauge has a conserved canonical momentum $p_y$ is that it manages to align one of its canonical momentum axes with one of these conserved quantities, and indeed it's just a bit of algebra to show that$$p_y = \frac{eB}{c} x_0.$$Of course, in the process you lose the direct relationship between the canonical and the kinematic momentum - and no amount of gauge changing will make the kinematic momentum, which is a physical quantity, be conserved.

Finally, to comment a bit on the quantum mechanical side of the problem, it's important to note that while it looks like you can set $p_y$ to be arbitrarily large, thus giving the electron more and more velocity in a regime where it should just be moving in circles, this is not actually the case. In the standard way to solve this, you simply look for eigenstates of $\hat H$ which are also eigenstates of the canonical momentum $\hat p_y = -i\hbar \frac{\mathrm d}{\mathrm dy}$ (i.e.
plane waves along $y$), and then you look at the $x$ coordinate to get the reduced hamiltonian$$\hat H = \frac{\hat{p}_x^2}{2m}+\frac{e^2B^2}{2mc^2}(x-x_0)^2.$$This hamiltonian describes simple harmonic oscillations about a centre $x_0=\frac{c}{eB}p_y$ given by the eigenvalue $p_y$ of the $y$-direction plane wave. For a fixed hamiltonian eigenvalue, the eigenfunctions are supported in a strip, which can be displaced side-to-side by varying $p_y$. Here the important point is that the velocity operator is not $\hat{p}_y$ at all; instead, it is ($1/m$ times) the kinematic momentum $m\hat{v}_y = \hat{p}_y -\frac{eB}{c}\hat x$. However big you make $\hat{p}_y$, if you look along the centerline of the strip, the velocity is zero, and it varies antisymmetrically about it: $v_y$ is negative for $x>x_0$, and vice versa. Thus, while it is still very much its own thing, the quantum mechanical 'motion' is a lot more consistent with the classical circling electrons than it looks at first glance.
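As a quick numerical check of the guiding-centre story above, one can integrate the classical equations of motion and confirm that x_0 and y_0 stay constant. This is my sketch, in units where the cyclotron frequency eB/(mc) equals 1:

import numpy as np
from scipy.integrate import solve_ivp

omega = 1.0  # cyclotron frequency eB/(mc) in chosen units

def rhs(t, s):
    x, y, vx, vy = s
    # Lorentz force for B along z: dvx/dt = omega*vy, dvy/dt = -omega*vx
    return [vx, vy, omega * vy, -omega * vx]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 1.0, 0.5], rtol=1e-10, atol=1e-12)
x, y, vx, vy = sol.y
x0 = x + vy / omega   # x + (mc/eB) v_y
y0 = y - vx / omega   # y - (mc/eB) v_x
print(np.ptp(x0), np.ptp(y0))  # both ~0: the guiding centre is conserved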
It is elementary that differential forms can be pulled back via a smooth map between manifolds. However, I was reading a paper and came across a construction for the push forward of a differential form via a submersion which I didn't fully understand. The paper pointed to Differential Forms in Algebraic Topology by Bott and Tu for reference. However, since I have little background in algebraic topology, I would like to know if anyone can show me a more detailed explanation, or point me to some references. Below is the construction as described in the paper:

Let $f: X\rightarrow Y$ be a submersion from an oriented manifold of dimension $n$ to an oriented manifold of dimension $m \leq n$. Then the fibers are manifolds of dimension $r=n-m$.

So far this is OK, and it continues:

Integration over the fibers gives a map $f_*: D^p(X)\rightarrow D^{p-r}(Y)$ defined as follows. Any $p$-form $\phi$ on $X$ with compact support can be written $\phi = \psi \wedge f^*\omega$, where $\psi$ is an $r$-form with compact support on $X$ and $\omega$ is a $(p-r)$-form on $Y$. To see this, use a partition of unity to write $\phi$ as a sum of forms with support in a coordinate neighborhood, and in local coordinates the decomposition becomes obvious. We can then consider $f_*\psi$ on $Y$ with compact support defined by $f_*\psi (y)=\int_{f^{-1}(y)} \psi$ and define $f_*\phi = f_* \psi \wedge \omega$.

What I didn't understand is how the $p$-form $\phi$ on $X$ can be decomposed as $\psi \wedge f^*\omega$. (Even though it says it's obvious in local coordinates...) Is this decomposition unique? If not, then the push forward $f_* \phi$ had better not depend on the decomposition..?
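For what it's worth, here is the local-coordinate picture I believe is being gestured at (my notation, following the usual fiber-integration setup, not a quote from Bott and Tu): in a chart adapted to the submersion, $f$ is the projection $(t_1,\dots,t_r,y_1,\dots,y_m)\mapsto(y_1,\dots,y_m)$, and a $p$-form supported in the chart is a sum of terms $a(t,y)\,dt_I\wedge dy_J$. Fiber integration keeps only the terms with $|I| = r$:

\[
f_*\Bigl(\sum_{|I|+|J|=p} a_{I,J}(t,y)\, dt_I \wedge dy_J\Bigr)
= \sum_{|J|=p-r} \Bigl(\int_{\mathbb{R}^r} a_{(1,\dots,r),J}(t,y)\, dt_1\cdots dt_r\Bigr)\, dy_J .
\]

In particular, a single term $a\,dt_I\wedge dy_J$ with $|I|<r$ integrates to zero along the fiber, which is consistent with the decomposition $\phi=\psi\wedge f^*\omega$ not being unique while $f_*\phi$ is still well defined.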
When you say "why aren't things being destroyed", you presumably mean "why aren't the chemical bonds that hold objects together being broken". Now, we can determine the energy it takes to break a bond - that's called the "bond energy". Let's take, for example, a carbon-carbon bond, since it's a common one in our bodies. The bond energy of a carbon-carbon bond is $348\,\rm kJ/mol$, which works out to $5.8 \cdot 10^{-19}\,\rm J$ per bond.

If an impacting gas molecule is to break this bond, it must (in a simplified collision scenario) have at least that much energy. If the average molecule has that much energy, we can calculate what the temperature of the gas must be: $$E_\text{average} = k T$$$$T = \frac{5.8 \cdot 10^{-19}\,\rm J}{1.38 \cdot 10^{-23}\,\rm m^2\, kg\, s^{-2}\, K^{-1}}$$$$T \approx 42{,}000\,\rm K$$

That's pretty hot! Now, even if the average molecule doesn't have that energy, some of the faster-moving ones might. Let's calculate the fraction that have that energy at room temperature using the Boltzmann distribution for particle energy: $$f_E(E) = \sqrt{\frac{4 E}{\pi (kT)^3}} \exp\left(\frac{-E}{kT} \right)$$ The fraction of particles with energy greater than or equal to some amount $E_0$ is given by this integral: $$p(E \ge E_0) = \int_{E_0}^{\infty} f_E(E)\, dE$$ In our situation, $E_0 = 5.8 \cdot 10^{-19}\,\rm J$, and this expression yields $p(E \ge E_0) = 1.9 \cdot 10^{-61}$.

So, the fraction of molecules at room temperature with sufficient kinetic energy to break a carbon-carbon bond is $1.9 \cdot 10^{-61}$, an astoundingly small number. To put that in perspective, if you filled a sphere the size of Earth's orbit around the sun with gas at STP, you would need around 16 of those spheres to expect to have even one gas particle with that amount of energy.

So that's why these "torpedoes" don't destroy things generally - they aren't moving fast enough at room temperature to break chemical bonds!
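If you want to reproduce the tail estimate without fighting floating-point underflow, note that the integral of the energy distribution from E_0 to infinity is a regularized upper incomplete gamma function, which mpmath evaluates happily. A sketch, where T = 293 K is my assumption for "room temperature":

from mpmath import mp, mpf, gammainc

mp.dps = 30
k  = mpf('1.380649e-23')  # Boltzmann constant, J/K
E0 = mpf('5.8e-19')       # C-C bond energy per bond, J (from the answer)
T  = mpf(293)             # assumed room temperature, K

x = E0 / (k * T)
# Substituting u = E/(kT), the tail integral of f_E(E) from E0 to infinity
# becomes Gamma(3/2, x) / Gamma(3/2), the regularized incomplete gamma function
p = gammainc(mpf(3) / 2, a=x, regularized=True)
print(p)  # astronomically small, on the order of 10^-61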
An indefinite nonlinear diffusion problem in population genetics, II: Stability and multiplicity

1. Department of Mathematics, The Ohio State University, Columbus, Ohio 43210
2. School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455
3. School of Mathematics, University of Minnesota, Minneapolis, MN 55455

We study
$$u_{t}=d\Delta u+g(x)f(u),\quad 0\leq u\leq 1,\quad\text{in } \Omega \times (0, \infty),$$
$$\frac{\partial u}{\partial\nu}=0 \quad\text{on } \partial\Omega \times (0, \infty),$$
where $\Delta$ denotes the Laplace operator, $g$ may change sign in $\Omega$, and $f(0)=f(1)=0$, $f(s)>0$ for $s\in(0,1)$. Our main results include stability/instability of the trivial steady states $u\equiv 0$ and $u\equiv 1$, and the multiplicity of nontrivial steady states. This is a continuation of our work [12]. In particular, the conjecture of Nagylaki and Lou [11, p. 152] has been largely resolved. Similar results are obtained for Dirichlet and Robin boundary value problems as well.

Mathematics Subject Classification: Primary: 35K57; Secondary: 35B3.

Citation: Yuan Lou, Wei-Ming Ni, Linlin Su. An indefinite nonlinear diffusion problem in population genetics, II: Stability and multiplicity. Discrete & Continuous Dynamical Systems - A, 2010, 27 (2) : 643-655. doi: 10.3934/dcds.2010.27.643
I am having trouble understanding one detail of the standard use of Brownian motion to solve the Dirichlet problem. I will write the statement and proof and then point to the detail I don't understand.

Let $U$ be a bounded domain in $\mathbb{R}^n$, let $B_t$ be a Brownian motion in $\mathbb{R}^n$, let $\tau_{2}$ be the hitting time of $\partial U$, and let $\phi(x)$ be a continuous function defined on $\partial U$. Then the following function is harmonic on $U$: $u(x) = \mathbb{E}_x \phi(B_{\tau_{2}})$.

Here is the proof. Let $x\in U$, and let $r$ be small enough that $\partial D(x,r)\subseteq U$, where $D(x,r)$ denotes the ball of radius $r$ around $x$. Let $\tau_1$ be the hitting time of $\partial D(x,r)$; by the strong Markov property the process $\tilde{B}_t = B_{t+\tau_1}-B_{\tau_1}$ is a Brownian motion independent of $\mathcal{F}_{\tau_1}$. Now let $\tau_2 = \inf\{t>0:B_t\in \partial U\}$. We show that $u(x)$ satisfies the mean value property: \begin{align} u(x) &= \mathbb{E}_x\phi(B_{\tau_2})\\ &= \mathbb{E}_x(\mathbb{E}[\phi(B_{\tau_2})|\mathcal{F}_{\tau_1}])\\ &= \mathbb{E}_x(\mathbb{E}[\phi(B_{\tau_2}-B_{\tau_1}+B_{\tau_1})|\mathcal{F}_{\tau_1}])\\ &= \mathbb{E}_x( \psi(B_{\tau_1})) \end{align} where $\psi(y) = \mathbb{E}_{y} \phi(\tilde{B}_{\tau_2-\tau_1})$. Because Brownian motion is rotationally invariant, the distribution of $B_{\tau_1}$ is uniform over the sphere; thus if $\lambda$ is the uniform distribution over the sphere, \begin{align} u(x) &= \mathbb{E}_x( \psi(B_{\tau_1}))\\ &= \int\limits_{\partial D(x,r)} \mathbb{E}_{y} \phi(\tilde{B}_{\tau_2-\tau_1}) \lambda(dy) \end{align} If $\tilde{B}_{\tau_2-\tau_1}$ represents the position at the hitting time of $\partial U$ for each $y$, then this is equal to $\int\limits_{\partial D(x,r)} u(y)\lambda(dy)$ and we are done. However, I don't see why $\tilde{B}_{\tau_2-\tau_1}$ necessarily represents the position at the hitting time of $\partial U$ - or maybe I have the proof wrong somewhere else.
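Side note: the representation u(x) = E_x[phi(B_tau)] is easy to test numerically, which can help build intuition while sorting out the measure-theoretic details. My toy sketch for the unit disc, where the harmonic extension of phi(cos t, sin t) = cos t is u(x, y) = x:

import numpy as np

rng = np.random.default_rng(0)

def harmonic_mc(x, phi, n_paths=4000, dt=1e-3):
    # Estimate u(x) = E_x[phi(B_tau)] by running Brownian paths to the boundary
    total = 0.0
    for _ in range(n_paths):
        p = np.array(x, dtype=float)
        while p @ p < 1.0:
            p += np.sqrt(dt) * rng.standard_normal(2)
        total += phi(p / np.linalg.norm(p))  # project the exit point to the circle
    return total / n_paths

print(harmonic_mc((0.3, 0.2), lambda p: p[0]))  # close to 0.3, i.e. u(x, y) = x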
Prove the following inequality: $$\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n-1}\geq \log (2)$$ I have tried experimenting with different values of $n$ and I see that the sum seems to converge to $\log(2)$ as $n$ gets larger, but I am having some difficulty proving this inequality. I realize it probably has something to do with the fact that $\frac{d}{dx}\ln(x)=\frac{1}{x}$, but I cannot find a proper solution. The theme of these problems is that they can generally be solved with some sort of drawing or visual aid, and I am unsure of what I can draw to make this solution more clear. Any help is appreciated, thanks.
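In case it helps, the picture may be the standard area comparison with $y = 1/x$: since $1/x$ is decreasing, each term $1/k$ dominates the area under the curve on $[k, k+1]$ (a sketch of the estimate, not necessarily the intended solution):

\[
\sum_{k=n}^{2n-1} \frac{1}{k}
\;\geq\; \sum_{k=n}^{2n-1} \int_{k}^{k+1} \frac{dx}{x}
\;=\; \int_{n}^{2n} \frac{dx}{x}
\;=\; \log 2 .
\]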
Let $f(x)$ and $g(x)$ be non-negative, convex functions in $C^2([M,\infty))$, where $M > 0$. Also, assume $f(x)$ is strictly decreasing on $[M,\infty)$, and that $g(x)$ is strictly increasing on $[M,\infty)$. We also have that $g(M) \geq f(M)$, and that $\lim_{x\to \infty} f(x)g(x) = 0$. Does this imply that $f(x)g(x)$ is decreasing on the whole domain $[M,\infty)$? Help much appreciated! Edit: Added that $g(M) \geq f(M)$, and hence $g(x) > f(x)$ for all $x \in (M,\infty)$.
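For what it's worth, a quick test case (mine, not from the post) suggests the answer is no: take $M = 1$, $f(x) = e^{-x}$ and $g(x) = x^2$. Then $f$ is non-negative, convex and strictly decreasing, $g$ is non-negative, convex and strictly increasing on $[1,\infty)$, $g(1) = 1 \geq f(1) = e^{-1}$, and $f(x)g(x) \to 0$, yet

\[
\frac{d}{dx}\left(x^{2}e^{-x}\right) = x(2-x)e^{-x} > 0 \quad\text{for } 1 < x < 2,
\]

so the product increases on $(1, 2)$ before eventually decaying.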
Answer

Please see the work below.

Work Step by Step

We know that $v=\frac{2\pi r}{T}$. We plug in the known values to obtain:

$v=\frac{2\pi(42.16\times 10^6)}{24\times 60\times 60}$

$v \approx 3100\ \frac{m}{s}$
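A one-line numerical check of the arithmetic (my sketch):

from math import pi
r, T = 42.16e6, 24 * 60 * 60   # orbital radius (m) and period (s) from above
print(2 * pi * r / T)          # ~3066 m/s, i.e. 3100 m/s to two significant figures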
I'm trying to find the Hamiltonian function for a system consisting of a single particle in one dimension colliding elastically with a wall at $x = 0$. Everything I've read on the topic (e.g. this question Why can't collisions be elastic?) says that the wall can be represented by a step-function potential barrier $V = K\theta(x)$ with $K$ any number higher than the maximum kinetic energy of the particle - often taken to be infinite for simplicity (e.g. $V=0$ if $x < 0$, $V = \infty$ if $x > 0$, so the Hamiltonian would be $H = p^2/2m + K\theta(x)$, where $\theta(x)$ is the step function, $\theta(x) = 0$ for $x < 0$, $\theta(x) = 1$ if $x > 0$, and $K$ is a constant that we eventually take to infinity)...but I cannot get that to work. When I apply Hamilton's equations of motion to that Hamiltonian, I end up finding that the collision is only elastic and energy-conserving if $K$ is not only finite, but also dependent on the particle's momentum. I'm fairly sure that there's just something wrong with my derivation, because using infinite potential barriers to exclude particles from certain positions is done everywhere in physics (e.g. hard-sphere dynamics in molecular dynamics or billiard dynamics, the van der Waals forces, etc)...but I just can't see where I'm going wrong. So, below is my full derivation (I start with two particles of diameter $D$, and then take the mass of the second particle to infinity at the end; however, I get the same results if I start with just one particle of diameter $\frac{D}{2}$). Note: the problem is specifically getting the Hamiltonian; the equations of motion themselves are trivial to get via e.g. conservation of energy/momentum arguments; $x$ is proportional to $|t|$, and $p$ to $\mathrm{sign}(t)$.

Variables

D := diameter of particle
K := height of "wall" (nonzero; assumed by most references to be a constant, often infinite, but always greater than the maximum kinetic energy; this derivation only assumes that it is position-independent)
$m_i$ := mass of particle $i$ (for a wall, $m_2 \to \infty$)
$x_1, x_2, p_1, p_2$ := coordinates and momenta of the particles
$r = |x_1 - x_2|$ := absolute distance between the centers of the particles (so $(D - r) \geq 0$ iff they are touching)
$H = T + U$
$U(r) = K\theta(D - r)$
$T = p_1^2/2m_1 + p_2^2/2m_2$

Definitions / abbreviations

$\epsilon(x) := \mathrm{sign}(x)$
$\theta(x) := \mathrm{step}(x)$
$\partial r/\partial x_1 = \epsilon(x_1 - x_2)$
$\partial r/\partial x_2 = -\epsilon(x_1 - x_2)$
$\partial\theta/\partial x = \delta(x)$

Derivation

(0) Hamilton's equations:
$\partial H/\partial x_1 = -dp_1/dt = \partial U/\partial x_1$
$\partial H/\partial x_2 = -dp_2/dt = \partial U/\partial x_2$

(1) Chain rule (prime means differentiation with respect to the appropriate argument):
$\partial U/\partial x_1 = K\theta'(D - r)(-r') = -K\delta(D - |x_1 - x_2|)\epsilon(x_1 - x_2)$
$\partial U/\partial x_2 = K\theta'(D - r)(-r') = K\delta(D - |x_1 - x_2|)\epsilon(x_1 - x_2)$

(2) Note that the derivatives are equal and opposite:
$-\partial U/\partial x_1 = dp_1/dt = -dp_2/dt = \partial U/\partial x_2$

(3) Find the total change in momenta $\Delta P_i$ (and the final momenta $p_i(t_2)$):
Let $t_1$ := the time where $x_1 - x_2 = +D$; let $t_0 := t_1 - dt$, $t_2 := t_1 + dt$.
Let $p_1(t_1) = P_1$, $p_2(t_1) = P_2$.
Let $\Delta P_1 := \int_{t_0}^{t_2} (dp_1/dt)~dt = K = -\Delta P_2$.
Then $p_1(t_2) = P_1 + K$; $p_2(t_2) = P_2 - K$.

(4) But energy must be conserved: $T(p_i) = T(p_i + \Delta P_i)$
$(P_1 + K)^2/2m_1 + (P_2 - K)^2/2m_2 = P_1^2/2m_1 + P_2^2/2m_2$
$(P_1^2 + 2KP_1 + K^2)/2m_1 + (P_2^2 - 2KP_2 + K^2)/2m_2 = P_1^2/2m_1 + P_2^2/2m_2$
$(2KP_1 + K^2)/2m_1 + (-2KP_2 + K^2)/2m_2 = 0$

Multiply through by $2m_1m_2$:
$(2KP_1 + K^2)m_2 + (-2KP_2 + K^2)m_1 = 0$
$2K(m_2 P_1 + \tfrac{1}{2} K m_2 - m_1 P_2 + \tfrac{1}{2}K m_1) = 0$

(5) By definition, $|K| > 0$. So
$m_2 P_1 - m_1 P_2 + \tfrac{1}{2}K(m_1 + m_2) = 0$

Conclusion: $K = 2(m_1 P_2 - m_2 P_1)/(m_1 + m_2)$

So to conserve kinetic energy, $K$ (and hence $U$) must depend on the momentum.

(6) In the special case where $m_2 \to \infty$:
$\tfrac{1}{2}K = (m_1 P_2)/(m_1 + m_2) - (m_2 P_1)/(m_1 + m_2)$
$\tfrac{1}{2}K = (m_1 P_2)/m_2 - (m_2 P_1)/m_2$
$\tfrac{1}{2}K = 0 - P_1$, i.e. $K = -2P_1$, as expected.

(7) However, since the potential now depends on the momenta, $dx_i/dt$ is no longer simply $p_i/m_i$; we have
$H = p_1^2/2m_1 + p_2^2/2m_2 + 2(m_1 P_2 - m_2 P_1)\theta(D - r)/(m_1 + m_2)$
$\partial H/\partial p_1 = dx_1/dt = p_1/m_1 - 2m_2\theta(D - r)/(m_1 + m_2)$
$\partial H/\partial p_2 = dx_2/dt = p_2/m_2 + 2m_1\theta(D - r)/(m_1 + m_2)$

(8) Or, in the special case of the wall,
$H = p_1^2/2m_1 + p_2^2/2m_2 - 2P_1\theta(D - r)$
$\partial H/\partial p_1 = dx_1/dt = p_1/m_1 - 2P_1 \theta(D - r)$
$\partial H/\partial p_2 = dx_2/dt = p_2/m_2 + 2P_1 \theta(D - r)$

...which is zero outside of the barrier, but there is ambiguity in the definition of the step function at zero; we can retain the normal equations of motion only if $\theta(0) = 0$. Any thoughts would be greatly appreciated.
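For completeness, here is how the hard wall is usually handled in practice when the step-potential Hamiltonian gets awkward: impose the reflection as an event rather than a force. A toy sketch, not a resolution of the derivation above:

# Event-based toy: a free particle elastically reflecting off a wall at x = 0.
# The "collision" is imposed as p -> -p at the wall, which is the limiting
# behaviour the step-potential Hamiltonian is meant to reproduce.
m, dt = 1.0, 1e-3
x, p = -1.0, 1.0                # start left of the wall, moving towards it
for _ in range(3000):
    x += (p / m) * dt           # free flight between collisions
    if x > 0.0:                 # crossed the wall at x = 0:
        x, p = -x, -p           # reflect position and flip momentum
print(x, p)                     # kinetic energy p**2/(2*m) is conserved exactly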
Technical Report

Issue 6: Generating Highly Accurate Machine Models Indispensable to Model Based Design (MBD) - Elaborate Modeling Technology. Ideal MBD and Its Challenges: Model based design (MBD) has been around the field of circuits/controls for motors for a long time, but has not been at th…

Issue 5: Versatile Mapping that Supports Multiphysics Simulations - Elaborate Modeling Technology. Material Modeling and Mapping Technology Supporting Multiphysics Simulations: Multiphysics simulations, such as coupled analyses (magnetic field, therma…

Issue 4: Material Modeling and Powerful Analysis Capabilities that Contribute to Limit Design - Elaborate Modeling Technology. Modeling Complex Nonlinear Materials at a Micro Level: \( \nabla \times \frac{1}{\mu_0}\nabla \times A = J - \sigma \frac{\partial A}{\partial t} \)…

Issue 3: What Does the JMAG Mesh Generation Engine have to Offer? - A Powerful Simulation Engine. These technical reports introduce the scope of JMAG's technological development. This edition introduces the value and future of one of the two major fo…

Issue 2: How JMAG Realized Accelerated Speed - A Powerful Simulation Engine. This technical report introduces content concerning the development of JMAG technology. As the previous section introduced why matrix solvers are necess…

Issue 1: Why is a high-speed calculation engine necessary? - A Powerful Simulation Engine. In recent years, the needs of engineers are diversifying as computer aided engineering (CAE) tools are applied more regularly when designing electromag…
Given a field $F$, can you necessarily construct a field extension $E \supset F$ such that $\operatorname{Gal}(E/F) = S_n\,$?

We will prove that for every integer $n \geq 1$ there exists a finite Galois extension $K/\mathbb{Q}$ such that $Gal(K/\mathbb{Q}) = S_n$. We will follow mostly van der Waerden's book on algebra. You can also see his proof in Milne's course notes on Galois theory. However, Milne refers to his book for a crucial theorem (Proposition 1 below) whose proof uses multivariate polynomials. Instead, we will use elementary commutative algebra to prove this theorem.

Notations

We denote by $|S|$ the number of elements of a finite set $S$.

Let $K$ be a field. We denote by $K^*$ the multiplicative group of $K$.

Let $\tau = (i_1, ..., i_m)$ be a cycle in $S_n$. The set $\{i_1, ..., i_m\}$ is called the support of $\tau$.

Let $\sigma \in S_n$ and let $\sigma = \tau_1\dots\tau_r$, where each $\tau_i$ is a cycle of length $m_i$ and they have mutually disjoint supports. Then we say $\sigma$ is of type $[m_1, ..., m_r]$.

Definition 1. Let $F$ be a field. Let $f(X)$ be a non-constant polynomial of degree $n$ in $F[X]$. Let $K/F$ be a splitting field of $f(X)$. Suppose $f(X)$ has $n$ distinct roots in $K$. Then $f(X)$ is called separable. Since the splitting fields of $f(X)$ over $F$ are isomorphic to each other, this definition does not depend on the choice of a splitting field of $f(X)$.

Definition 2. Let $F$ be a finite field with $|F| = q$. Let $K/F$ be a finite extension of $F$. Let $\sigma$ be the map $K \rightarrow K$ defined by $\sigma(x) = x^q$ for each $x \in K$. Then $\sigma$ is an automorphism of $K/F$, called the Frobenius automorphism of $K/F$.

Definition 3. Let $G$ be a permutation group on a set $X$, and let $G'$ be a permutation group on a set $X'$. Let $f:X \rightarrow X'$ be a bijective map and $\lambda:G \rightarrow G'$ an isomorphism. Suppose $f(gx) = \lambda(g)f(x)$ for any $g \in G$ and any $x \in X$. Then $G$ and $G'$ are said to be isomorphic as permutation groups.

Lemma 1. Let $F$ be a field. Let $f(X)$ be a separable polynomial of degree $n$ in $F[X]$. Let $K/F$ be a splitting field of $f(X)$ and let $G = Gal(K/F)$. Let $S$ be the set of roots of $f(X)$ in $K$. Then $G$ acts transitively on $S$ if and only if $f(X)$ is irreducible in $F[X]$.

Proof: If $f(X)$ is irreducible, clearly $G$ acts transitively on $S$. Conversely, suppose $f(X)$ is not irreducible. Let $f(X) = g(X)h(X)$, where $g$ and $h$ are non-constant polynomials in $F[X]$. Let $T$ be the set of roots of $g(X)$ in $K$. Since $G$ maps $T$ to itself and $\emptyset \neq T \neq S$, the action of $G$ on $S$ is not transitive. QED

Lemma 2. Let $F$ be a field. Let $f(X)$ be a separable polynomial in $F[X]$ and let $f(X) = f_1(X)...f_r(X)$, where $f_1(X), ..., f_r(X)$ are distinct irreducible polynomials in $F[X]$. Let $K/F$ be a splitting field of $f(X)$ and let $G = Gal(K/F)$. Let $S$ be the set of roots of $f(X)$ in $K$, and let $S_i$ be the set of roots of $f_i(X)$ in $K$ for each $i$. Then $S = \cup S_i$ is a disjoint union and each $S_i$ is a $G$-orbit.

Proof: This follows immediately from Lemma 1.

Lemma 3. Let $F$ be a finite field and let $K/F$ be a finite extension of $F$. Then $K/F$ is a Galois extension and $Gal(K/F)$ is a cyclic group generated by the Frobenius automorphism $\sigma$.

Proof: Let $|F| = q$ and let $n = (K : F)$. Since $|K^*| = q^n - 1$, we have $x^{q^n - 1} = 1$ for each $x \in K^*$. Hence $x^{q^n} = x$ for each $x \in K$, so $\sigma^n = 1$.
Let $m$ be an integer such that $1 \leq m < n$. Since the polynomial $X^{q^m} - X$ has at most $q^m$ roots in $K$, $\sigma^m \neq 1$. Hence $\sigma$ generates a subgroup $G$ of order $n$ of $\operatorname{Aut}(K/F)$. Since $n = (K : F)$, $G = \operatorname{Aut}(K/F)$. Since $|\operatorname{Aut}(K/F)| = n$, $K/F$ is a Galois extension. QED Lemma 4 Let $F$ be a finite field. Let $f(X)$ be an irreducible polynomial of degree $n$ in $F[X]$. Let $K/F$ be a splitting field of $f(X)$ and $\sigma$ the Frobenius automorphism of $K/F$. Then $\operatorname{Gal}(K/F)$ is a cyclic group of order $n$ generated by $\sigma$. Proof: Let $\alpha$ be a root of $f(X)$ in $K$. By Lemma 3, $F(\alpha)/F$ is a Galois extension; in particular it is normal, so it contains every root of $f(X)$, and hence $K = F(\alpha)$. By Lemma 3, $\operatorname{Gal}(F(\alpha)/F)$ is a cyclic group of order $n$ generated by $\sigma$. QED Lemma 5 Let $F$ be a finite field. Let $f(X)$ be an irreducible polynomial of degree $n$ in $F[X]$. Let $K/F$ be a splitting field of $f(X)$, $G = \operatorname{Gal}(K/F)$, and $\sigma$ the Frobenius automorphism of $K/F$. Let $S$ be the set of roots of $f(X)$. We regard $G$ as a permutation group on $S$. Then $\sigma$ is an $n$-cycle. Proof: This follows immediately from Lemma 4. Lemma 6 Let $F$ be a finite field. Let $f(X)$ be a separable polynomial in $F[X]$ and write $f(X) = f_1(X)\cdots f_r(X)$, where $f_1(X), \dots, f_r(X)$ are distinct irreducible polynomials in $F[X]$. Let $m_i = \deg f_i(X)$ for each $i$. Let $K/F$ be a splitting field of $f(X)$, $G = \operatorname{Gal}(K/F)$, and $\sigma$ the Frobenius automorphism of $K/F$. Let $S$ be the set of roots of $f(X)$. We regard $G$ as a permutation group on $S$. Then $\sigma$ is a permutation of type $[m_1, \dots, m_r]$. Proof: This follows immediately from Lemma 2, Lemma 3 and Lemma 5. Lemma 7 $S_n$ is generated by the transpositions $(k, n)$, $k = 1, \dots, n - 1$. Proof: Let $(a, b)$ be a transposition on $\{1, \dots, n\}$. If $a \neq n$ and $b \neq n$, then $(a, b) = (a, n)(b, n)(a, n)$. Since $S_n$ is generated by transpositions, we are done. QED Lemma 8 Let $G$ be a transitive permutation group on a finite set $X$ with $n = |X|$. Suppose $G$ contains a transposition and an $(n-1)$-cycle. Then $G$ is the symmetric group on $X$. Proof: Without loss of generality, we may assume that $X = \{1, \dots, n\}$ and $G$ contains the cycle $\tau = (1, \dots, n-1)$ and a transposition $(i, j)$. Since $G$ acts transitively on $X$, there exists $\sigma \in G$ such that $\sigma(j) = n$. Let $k = \sigma(i)$. Then $\sigma(i, j)\sigma^{-1} = (k, n) \in G$. Taking conjugates of $(k, n)$ by powers of $\tau$, we get $(m, n)$ for $m = 1, \dots, n - 1$. Hence, by Lemma 7, $G = S_n$. QED Lemma 9 Let $F$ be a finite field and $n \geq 1$ an integer. Then there exists an irreducible polynomial of degree $n$ in $F[X]$. Proof: Let $|F| = q$. Let $K/F$ be a splitting field of the polynomial $X^{q^n} - X$ in $F[X]$, and let $S$ be the set of roots of $X^{q^n} - X$ in $K$. It is easy to see that $S$ is a subfield of $K$ containing $F$; hence $S = K$. Since $X^{q^n} - X$ is separable, $|S| = q^n$, and hence $(K : F) = n$. Since $K^*$ is a cyclic group, $K^*$ has a generator $\alpha$. Let $f(X)$ be the minimal polynomial of $\alpha$ over $F$. Since $K = F(\alpha)$, the degree of $f(X)$ is $n$. QED Lemma 10 Let $f(X) \in \mathbb{Z}[X]$ be a monic polynomial and $p$ a prime number. Suppose $f(X)$ mod $p$ is separable in $(\mathbb{Z}/p\mathbb{Z})[X]$. Then $f(X)$ is separable in $\mathbb{Q}[X]$. Proof: Suppose $f(X)$ is not separable in $\mathbb{Q}[X]$. Since $\mathbb{Q}$ is perfect, there exists a monic irreducible non-constant $g(X) \in \mathbb{Z}[X]$ such that $f(X)$ is divisible by $g(X)^2$. Then $f(X)$ mod $p$ is divisible by $g(X)^2$ mod $p$, contradicting separability mod $p$.
QED Proposition 1 Let $A$ be an integrally closed domain and let $P$ be a prime ideal of $A$. Let $K$ be the field of fractions of $A$ and $\tilde{K}$ the field of fractions of $A/P$. Let $f(X) \in A[X]$ be a monic polynomial without multiple roots, and let $\tilde{f}(X) \in (A/P)[X]$ be the reduction of $f(X)$ mod $P$. Suppose $\tilde{f}(X)$ is also without multiple roots. Let $L$ be the splitting field of $f(X)$ over $K$, let $G$ be the Galois group of $L/K$, and let $S$ be the set of roots of $f(X)$ in $L$. We regard $G$ as a permutation group on $S$. Let $\tilde{L}$ be the splitting field of $\tilde{f}(X)$ over $\tilde{K}$, let $\tilde{G}$ be the Galois group of $\tilde{L}/\tilde{K}$, and let $\tilde{S}$ be the set of roots of $\tilde{f}(X)$ in $\tilde{L}$. We regard $\tilde{G}$ as a permutation group on $\tilde{S}$. Then there exists a subgroup $H$ of $G$ such that $H$ and $\tilde{G}$ are isomorphic as permutation groups. Proof: See my answer here. Corollary Let $f(X) \in \mathbb{Z}[X]$ be a monic polynomial of degree $m$ and $p$ a prime number. Suppose $f(X)$ mod $p$ is separable in $(\mathbb{Z}/p\mathbb{Z})[X]$, and suppose $f \equiv f_1\cdots f_r$ (mod $p$), where each $f_i$ is monic and irreducible of degree $m_i$ in $(\mathbb{Z}/p\mathbb{Z})[X]$. Let $K/\mathbb{Q}$ be a splitting field of $f(X)$ and $M$ the set of roots of $f(X)$. $G = \operatorname{Gal}(K/\mathbb{Q})$ can be regarded as a permutation group on $M$. Then $G$ contains an element of type $[m_1, \dots, m_r]$. Proof: By Lemma 10, $f(X)$ is separable in $\mathbb{Q}[X]$. Let $F_p = \mathbb{Z}/p\mathbb{Z}$. Let $\tilde{f}(X) \in F_p[X]$ be the reduction of $f(X)$ mod $p$. Let $\tilde{K}/F_p$ be a splitting field of $\tilde{f}(X)$, $\tilde{G}$ the Galois group of $\tilde{K}/F_p$, and $\tau$ the Frobenius automorphism of $\tilde{K}/F_p$. Let $\tilde{M}$ be the set of roots of $\tilde{f}(X)$. We regard $\tilde{G}$ as a permutation group on $\tilde{M}$. By Lemma 6, $\tau$ is a permutation of type $[m_1, \dots, m_r]$. Hence the assertion follows by Proposition 1. QED Theorem For every integer $n \geq 1$ there exists a finite Galois extension $K/\mathbb{Q}$ such that $S_n = \operatorname{Gal}(K/\mathbb{Q})$. Proof (van der Waerden): By Lemma 9, we can find the following irreducible polynomials. Let $f_1$ be a monic irreducible polynomial of degree $n$ in $(\mathbb{Z}/2\mathbb{Z})[X]$. Let $g_0$ be a monic polynomial of degree 1 in $(\mathbb{Z}/3\mathbb{Z})[X]$ and $g_1$ a monic irreducible polynomial of degree $n - 1$ in $(\mathbb{Z}/3\mathbb{Z})[X]$, and let $f_2 = g_0g_1$. If $n - 1 = 1$, we choose $g_1$ such that $g_0 \ne g_1$. Hence $f_2$ is separable. Let $h_0$ be a monic irreducible polynomial of degree 2 in $(\mathbb{Z}/5\mathbb{Z})[X]$. If $n - 2$ is odd, let $h_1$ be a monic irreducible polynomial of degree $n - 2$ in $(\mathbb{Z}/5\mathbb{Z})[X]$ and let $f_3 = h_0h_1$. Since $h_0 \neq h_1$, $f_3$ is separable. If $n - 2$ is even, $n - 2 = 1 + a$ for some odd integer $a$. Let $h_1$ and $h_2$ be monic irreducible polynomials of degree $1$ and $a$ respectively in $(\mathbb{Z}/5\mathbb{Z})[X]$, and let $f_3 = h_0h_1h_2$. If $a = 1$, we choose $h_2$ such that $h_1 \ne h_2$. Hence $f_3$ is separable. Let $f = -15f_1 + 10f_2 + 6f_3$, lifting each $f_i$ to a monic polynomial in $\mathbb{Z}[X]$. Since each of $f_1, f_2, f_3$ is monic of degree $n$ and $-15 + 10 + 6 = 1$, $f$ is monic of degree $n$. Since $-15 \equiv 1$ (mod 2), $10 \equiv 1$ (mod 3) and $6 \equiv 1$ (mod 5), while the other two coefficients vanish modulo the respective primes, we get $f \equiv f_1$ (mod 2), $f \equiv f_2$ (mod 3), $f \equiv f_3$ (mod 5). Since $f \equiv f_1$ (mod 2) and $f_1$ is irreducible, $f$ is irreducible in $\mathbb{Q}[X]$. Let $K/\mathbb{Q}$ be the splitting field of $f$, let $G = \operatorname{Gal}(K/\mathbb{Q})$, and let $M$ be the set of roots of $f$ in $K$. We regard $G$ as a permutation group on $M$.
Since $f$ is irreducible, $G$ acts transitively on $M$. Since $f \equiv f_2$ (mod 3), $G$ contains an $(n-1)$-cycle by the Corollary of Proposition 1. Similarly, since $f \equiv f_3$ (mod 5), $G$ contains a permutation $\tau$ of type $[2, a]$ or $[2, 1, a]$, where $a$ is odd. Then $\tau^a$ is a transposition: the $a$-cycle dies, and since $a$ is odd the $2$-cycle survives. Hence $G$ contains a transposition, and $G$ is the symmetric group on $M$ by Lemma 8. QED There is no general solution to your question; it depends on the base field $F$. I will show that your problem is solved affirmatively when $F$ is a field of rational functions over any field. Let $k$ be a field and let $K = k(X_1, \dots, X_n)$ be the field of rational functions over $k$. Each element of $S_n$ acts on $K$ as a $k$-automorphism, so $S_n$ can be regarded as a subgroup of $\operatorname{Aut}(K/k)$. Let $F$ be the fixed field of $S_n$. By Artin's theorem, $K/F$ is a Galois extension and its Galois group is $S_n$. Let $s_1, \dots, s_n$ be the elementary symmetric functions of $X_1, \dots, X_n$. Then $F = k(s_1, \dots, s_n)$. It can easily be proved that $s_1, \dots, s_n$ are algebraically independent over $k$. Hence $F$ can be regarded as a field of rational functions of $n$ variables. Therefore we get the following proposition. Proposition Let $k$ be a field and let $F = k(x_1, \dots, x_n)$ be the field of rational functions of $n$ variables over $k$. Then there exists a Galois extension $E$ of $F$ such that $S_n = \operatorname{Gal}(E/F)$.
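As a concrete illustration of van der Waerden's construction in the Theorem above, here is a small sketch in Python/SymPy for $n = 4$. The particular factors mod 2, 3, 5 are my own choices, hand-checked to be irreducible; they are not taken from the answer itself.

from sympy import symbols, Poly

x = symbols('x')

# Hand-picked factors with the shapes required by the Theorem for n = 4:
f1 = Poly(x**4 + x + 1, x)                    # irreducible mod 2
f2 = Poly(x * (x**3 - x + 1), x)              # deg 1 times irreducible deg 3 mod 3
f3 = Poly((x**2 + 2) * (x - 1) * (x - 2), x)  # irreducible deg 2 plus two linears mod 5

# CRT coefficients: -15 = 1 (mod 2), 10 = 1 (mod 3), 6 = 1 (mod 5), others = 0.
f = -15 * f1 + 10 * f2 + 6 * f3
print(f)                  # a monic quartic over Z
print(f.is_irreducible)   # True: f = f1 mod 2, so Gal(f/Q) = S_4 by the proof

Running this gives $f = X^4 - 18X^3 + 14X^2 - 41X + 9$; its reductions mod 2, 3, 5 give the transitive action, the $3$-cycle, and the transposition, exactly as in the proof.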
I want to know if there is a way of typing into Mathematica an expression like the following, $$\epsilon^{\mu \nu \lambda} f^{abc} A^a_\mu A^b_\nu A^c_\lambda + g\epsilon^{\mu \nu \lambda} A^a_\mu \partial_\lambda A^a_\nu + \bar{\psi}(\gamma^\mu(\partial_\mu + gA^a_\mu T^a))\psi$$ (..repeated indices are understood to be summed over..) where $g$ is a number, $A^a_\mu$ can be thought of as matrices with $a,b,c \in \{1,2,3,\dots,N\}$ for some $N$ and $\mu, \nu, \lambda \in \{ 0,1,2\}$. So the $\partial_\mu$ are partial derivatives, $\partial_\mu = \frac{\partial}{\partial x^\mu}$. $f^{abc}$ is a set of numbers depending on the values of $a$, $b$ and $c$, and it is completely antisymmetric and cyclic in its indices. $\epsilon^{\mu \nu \lambda}$ evaluates to $0$ if any two or more of its indices are equal, and evaluates to $1$ or $-1$ depending on whether the three distinct entries are in cyclic or anti-cyclic order. $\psi$ should also be thought of as a matrix $\psi^a_i$ where $i,j \in \{0,1,2\}$. $\gamma^\mu$ are a chosen set of $3\times 3$ matrices. Each $T^a$ is an $N \times N$ matrix. Then the terms involving $\psi$, when expanded out, look like $$\bar{\psi}\gamma^\mu \partial _\mu \psi = (\psi^\dagger)^a_i (\gamma^0\gamma^\mu \partial_\mu )_{ij}\psi_j^a \quad{ \rm and }\quad\bar{\psi}\gamma^\mu A^a_\mu T^a \psi = (\psi^\dagger)^a_i(\gamma^\mu)_{ij}(A^c_\mu T^c)^{ab} \psi^b_j$$ I would like to be able to input the above expression into Mathematica without having to explicitly specify the numbers $f^{abc}$ and the matrices $T^a$. I would like to be able to manipulate the expression with the matrices $T,A,\psi$ and the numbers $f^{abc}$, $g$ as variables. If the above is possible, then I would eventually like to do something like write $A^a_\mu = B^a_\mu + C^a_\mu$ and $\psi^a_i = \eta ^a _i + \xi ^a _i$ and expand the expression in terms of $B$, $C$, $\eta$ and $\xi$.
I don't think most of these answers actually answer the question in generality. They are restricted to the case where there is a simple null hypothesis and the test statistic has an invertible CDF (as for a continuous random variable with a strictly increasing CDF). These are the cases most people tend to care about, with the z-test and t-test, though for testing a binomial mean (for example) one does not have such a CDF. What is provided above seems correct to my eyes for these restricted cases. If null hypotheses are composite then things are a bit more complicated. The most general proof of this fact I've seen in the composite case, using some assumptions regarding rejection regions, is provided in Lehmann and Romano's "Testing Statistical Hypotheses," pages 63-64. I'll try to reproduce the argument below... We test a null hypothesis $H_0$ versus an alternative hypothesis $H_1$ based upon a test statistic, which we'll denote as the random variable $X$. The test statistic is assumed to come from some parametric class, i.e., $X \sim P_\theta$, where $P_\theta$ is an element of the family of probability distributions $\mathcal{P} \equiv \{P_\theta \mid \theta \in \Theta \}$, and $\Theta$ is a parameter space. The null hypothesis $H_0: \theta \in \Theta_0$ and the alternative hypothesis $H_1: \theta \in \Theta_1$ form a partition of $\Theta$ in that$$\Theta = \Theta_0 \cup \Theta_1 $$where$$\Theta_0 \cap \Theta_1 = \emptyset.$$ The result of the test may be denoted$$\phi_\alpha(X) = 1_{R_\alpha}(X)$$where for any set $S$ we define$$1_{S}(X) = \begin{cases}1, & X \in S, \\0, & X \notin S.\end{cases}$$Here $\alpha$ is our significance level, and $R_\alpha$ denotes the rejection region of the test at significance level $\alpha$. Suppose the rejection regions satisfy the nesting property$$R_\alpha \subset R_{\alpha'}$$whenever $\alpha < \alpha'$. In this case of nested rejection regions, it is useful to determine not only whether or not the null hypothesis is rejected at a given significance level $\alpha$, but also the smallest significance level at which the null hypothesis would be rejected. This level is known as the p-value,$$\hat{p} = \hat{p}(X) \equiv \inf\{\alpha \mid X \in R_\alpha\}.$$This number gives us an idea of how strongly the data (as portrayed by the test statistic $X$) contradict the null hypothesis $H_0$. Suppose that $X \sim P_\theta$ for some $\theta \in \Theta$ and that $H_0: \theta \in \Theta_0$. Suppose additionally that the rejection regions $R_\alpha$ obey the nesting property stated above. Then the following holds: If $\sup_{\theta \in \Theta_0} P_\theta(X \in R_\alpha) \leq \alpha$ for all $0 < \alpha < 1$, then for $\theta \in \Theta_0$, $$P_\theta(\hat{p} \leq u) \leq u \quad \text{for all} \quad 0 \leq u \leq 1.$$ If for $\theta \in \Theta_0$ we have $P_\theta(X \in R_\alpha) = \alpha$ for all $0 < \alpha < 1$, then for $\theta \in \Theta_0$ we have$$P_\theta(\hat{p} \leq u) = u \quad \text{for all} \quad 0 \leq u \leq 1.$$ Note the first property just tells us that the false positive rate is controlled at $u$ by rejecting when the p-value is less than $u$, and the second property tells us (given an additional assumption) that p-values are uniformly distributed under the null hypothesis. The proof is as follows: Let $\theta \in \Theta_0$, and assume $\sup_{\theta \in \Theta_0} P_\theta(X \in R_\alpha) \leq \alpha$ for all $0 < \alpha < 1$. Then by definition of $\hat{p}$, we have $\{\hat{p} \leq u\} \subset \{X \in R_v\}$ for all $u < v$.
By monotonicity and the assumption, it follows that $P_\theta(\hat{p} \leq u) \leq P_\theta(X \in R_v) \leq v$ for all $u < v$. Letting $v \searrow u$, it follows that $P_\theta(\hat{p} \leq u) \leq u$. Let $\theta \in \Theta_0$, and assume that $P_\theta(X \in R_\alpha) = \alpha$ for all $0 < \alpha < 1$. Then $\{X \in R_u\} \subset \{\hat{p}(X) \leq u\}$, and by monotonicity it follows that $u = P_\theta(X \in R_u) \leq P_\theta(\hat{p} \leq u)$. Combining this with (1), it follows that $P_\theta(\hat{p}(X) \leq u) = u$. Note that the assumption in (2) does not hold when the test statistic is discrete, even if the null hypothesis is simple rather than composite. Take for instance $X \sim \mathrm{Binom}(10, \theta)$ with $H_0: \theta = 0.5$ and $H_1: \theta > 0.5$. I.e., flip a coin ten times and test whether it's fair vs. biased towards heads (encoded as a 1). The probability of seeing 10 heads in 10 fair coin flips is $(1/2)^{10} = 1/1024$. The probability of seeing 9 or 10 heads in 10 fair coin flips is $11/1024$. For any $\alpha$ strictly between $1/1024$ and $11/1024$, you'd reject the null iff $X = 10$, but we don't have $\Pr(X \in R_\alpha) = \alpha$ for those values of $\alpha$ when $\theta = 0.5$; instead $\Pr(X \in R_\alpha) = 1/1024$ for such $\alpha$.
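To see the discreteness point numerically, here is a small simulation sketch (mine, not from Lehmann and Romano); it checks that the one-sided binomial p-value is conservative, i.e. $P_{0.5}(\hat{p} \leq u) \leq u$, with strict inequality at levels falling between attainable p-values:

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
x = rng.binomial(10, 0.5, size=100_000)   # test statistics drawn under H0
pvals = binom.sf(x - 1, 10, 0.5)          # one-sided p-value: P(X >= x) = sf(x - 1)

for u in (0.01, 0.05, 0.10):
    # empirical P(p-hat <= u); always at or below u for this discrete test
    print(u, (pvals <= u).mean())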
I'm extremely new to calculus so please excuse my lack of lingo/formatting. I'm doing homework for my calc class, and I looked on wolfram alpha. It told me the limit was $\frac{1}{8}$ but I wanted to do it on my own to make sure I actually knew what I was doing. Wolfram$\alpha$ told me to use l'Hospital's rule, but I've never learned it and couldn't figure it out based on some google searches. It also said the limit as $x\rightarrow2$ was $\frac 1 8$, but the answers I got are either $-\frac {1} {8}$ or $-1$. Here is the problem: $$ \lim_{x\to 2} \frac{2 - \sqrt{x+2}}{ x^2 - 6x + 8} $$ So I tried to rationalize by multiplying the numerator by $2 + \sqrt{x+2}$, but then my final answer came out to $\frac{-4}4$ when I plugged $2$ into $$ \frac{x-6}{(x^2-6x+8)(2+\sqrt{x+2})}$$ I'm really just not sure what I'm doing wrong. I haven't taken a precalc course since senior year and I'm a sophomore now, but we did mostly trig, so derivatives and all that are absolutely new to me. Any help is appreciated.
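For reference, here is how the rationalization usually goes; note that the numerator becomes $(2-\sqrt{x+2})(2+\sqrt{x+2}) = 4 - (x+2) = 2 - x$, not $x - 6$: $$ \lim_{x\to 2} \frac{2 - \sqrt{x+2}}{x^2 - 6x + 8} = \lim_{x\to 2} \frac{2 - x}{(x-2)(x-4)\left(2+\sqrt{x+2}\right)} = \lim_{x\to 2} \frac{-1}{(x-4)\left(2+\sqrt{x+2}\right)} = \frac{-1}{(2-4)(2+2)} = \frac{1}{8}. $$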
Answer: $230$ Newtons. Work step by step: we rearrange the equation for torque to solve for the force: $F = \frac{\tau}{r\sin\theta}=\frac{95}{0.45 \times \sin 113^{\circ}} \approx 230 \ \mathrm{N}$.
The question is interesting because it's a natural one which leads to an impasse for research and (probably as a result) apparently didn't make it into the literature, even though people have long recognized that the answer is yes. Probably the most intriguing aspect of the question is how efficiently it can be answered: how few elementary facts about category $\mathcal{O}$ are actually needed to construct a rigorous proof. For example, does one need to know about the existence of enough projectives in $\mathcal{O}$? Here is a fairly straightforward treatment of the rank 1 case, which makes Ben's helpful sketch more transparent, though it would require more combinatorial detail about weight spaces to make the general argument rigorous. In general it's probably natural to carry out some reductions first, including the (unnecessary) assumption that all weights are integral. The axioms imply fairly directly that all objects in $\mathcal{O}$ have finite Jordan–Hölder length (a finite number of composition factors). Since all simple modules are highest weight modules $L(\lambda)$ (the unique quotient of a Verma module $M(\lambda)$), and $\mathcal{O}$ is closed under subquotients, it's enough to assume that $M = L(\lambda)$ is infinite dimensional, which just means that $\lambda$ is not dominant. Now choose $N$ to be a simple Verma module $M(\mu)$ (so $\mu$ is antidominant). For the rank 1 simple Lie algebra $\mathfrak{sl}_2$, integral weights can be treated just as integers. Here the possibilities for simple and Verma modules in $\mathcal{O}$ are very limited: the finite dimensional simples $L(\lambda)$ with $\lambda \geq 0$, and the infinite dimensional simples of negative highest weight, which are also Verma modules. Moreover, the weight spaces here all have dimension 1, with the weights forming a string $\lambda, \lambda -2, \dots$ But in a tensor product $M \otimes N = M(\lambda) \otimes M(\mu)$ with $\lambda, \mu$ negative, the dimensions grow very rapidly. As Ben observes, this creates a contradiction. In particular, a weight space $(M \otimes N)_\nu$ with $\nu = \lambda+\mu - \theta$ and $\theta \in \mathbb{Z}^+$ can be arbitrarily large. On the other hand, if $M \otimes N \in \mathcal{O}$, it has only a finite number $n$ of composition factors and thus each weight space has dimension at most $n$. ADDED: For the general case I've tried here to write down a concise proof with references, though it's unclear to me what approach will be "easiest" in terms of using only the most basic properties of $\mathcal{O}$ such as finite generation.
This problem is from our recitation, which I do not have solutions for, and I'm stuck on the very last part, where I need to satisfy the $u_t(x,0)$ initial condition. The problem is: $$ \left\{ \begin{split} u_{tt} &= c^2u_{xx}, \qquad\quad x>0,\ t>0\\ u_x(0,t) &= 0 \qquad\qquad\qquad t>0\\ u(x,0) &= \sin(x) = f(x)\quad\ x>0\\ u_t(x,0) &= e^{-x} = g(x)\quad\ x>0 \end{split} \right. $$ Since we have a Neumann condition we use an even extension. Let $$ u(x,0) = \begin{cases} f(x) &\text{if } x > 0 \\ f(-x) &\text{if } x < 0 \end{cases} $$ and $$ u_t(x,0) = \begin{cases} g(x) &\text{if } x > 0 \\ g(-x) &\text{if } x < 0 \end{cases} $$ Then d'Alembert's formula gives, at $t = 0$: $$ \begin{split} u(x,0) &= \frac{1}{2}\big(f(x-ct)+f(x+ct)\big)\Big|_{t=0} + \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds\,\Big|_{t=0}\\ &= \frac{1}{2}\big(f(x)+f(x)\big) + \frac{1}{2c}\int_{x}^{x} g(s)\,ds \\ &= f(x)=\sin(x) \end{split} $$ thus the first initial condition is satisfied. For the second one we have $$ u_t(x,t) = \frac{1}{2}\big(-cf'(x-ct)+cf'(x+ct)\big) + \frac{d}{dt}\bigg [ \frac{1}{2c} \int_{x-ct}^{x+ct} g(s)\,ds \bigg ] $$ My problem is I'm not sure what to do with the $$ \frac{d}{dt}\bigg [ \frac{1}{2c} \int_{x-ct}^{x+ct} g(s)\,ds \bigg ] $$ term. Should I differentiate the integral and then plug in $t = 0$ for the whole expression? How do I get the term $e^{-x}$ from this?
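If it helps, differentiating the integral is exactly the fundamental theorem of calculus (Leibniz rule) applied to both moving endpoints, after which one sets $t = 0$: $$ \frac{d}{dt}\bigg[\frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds\bigg] = \frac{1}{2c}\Big(c\,g(x+ct) - (-c)\,g(x-ct)\Big) = \frac{1}{2}\big(g(x+ct)+g(x-ct)\big) \;\xrightarrow{\ t=0\ }\; g(x) = e^{-x}. $$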
If $D$ is the critical dimension of bosonic strings, a particular derivation goes like the following, where we arrive finally at $$ \frac{D-2}{2}\sum_{n=1}^\infty n + 1 = 0. $$ Now mathematically this is clearly a divergent series, but using zeta function regularization here we are taking $$ \sum_{n=1}^\infty n = \zeta(-1) = -\frac{1}{12}, $$ and obtain $ D = 26 $, where $\zeta $ is the analytic continuation of the zeta function we know. But it makes no sense to put $ s = -1 $ into the formula $$ \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}, $$ as the above is only valid for $ \operatorname{Re}(s) > 1 $. So what is going on here? Can anyone give me a reasonable explanation of how $ -1/12 $ is obtained? I know some derivations in which one can track the emergence of the concrete value, without having to buy that the second order contribution in the Euler–Maclaurin formula (see the other answer) is $-\frac{1}{2!}$ times the second Bernoulli number $B_2$. The limit $\lim_{z\to 1}$ of the sum $0+1\,z^1+2\,z^2+3\,z^3+\dots$ diverges, because of the pole in $\sum_{k=0}^\infty k\,z^k=z\frac{{\mathrm d}}{{\mathrm d}z}\sum_{k=0}^\infty z^k=z\frac{{\mathrm d}}{{\mathrm d}z}\frac{1}{1-z}=\frac{z}{(z-1)^2}, \hspace{1cm} z\in(0,1).$ We are instead going to consider the sum of smooth deviations from the above, using the local mean $\langle f(k)\rangle:=\int_{k}^{k+1}f(k')\,{\mathrm d}k'$, for which $\langle k\,z^k\rangle=z\frac{{\mathrm d}}{{\mathrm d}z}\langle z^k\rangle=z\frac{{\mathrm d}}{{\mathrm d}z}\langle {\mathrm e}^{k \log(z)}\rangle=z\frac{{\mathrm d}}{{\mathrm d}z}\frac{z^{k'}}{\log(z)}\left|_{k}^{k+1}\right.$. Because of canceling upper and lower bounds, the sum $\sum_{k=0}^n\langle k\,z^k\rangle$ is $\frac{z^0}{\log(z)^2}$ plus terms suppressed by $z^n$. Finally, using the expansion $\dfrac{1}{\left(\log(1+r)\,/\,r\right)^2}=\dfrac{1}{1-r+\left(1-\frac{1}{1!\,2!\,3!}\right)r^2+{\mathrm{O}}(r^3)}=1+r+\dfrac{1}{1!\,2!\,3!}r^2+{\mathrm{O}}(r^3),$ we find $\sum_{k=0}^\infty \left(k\,z^k-\langle k\,z^k\rangle\right)=\dfrac{z}{(z-1)^2}-\dfrac{1}{\log(z)^2}=-\dfrac{1}{12}+{\mathcal O}\left((z-1)^1\right).$ The picture shows the two functions $\dfrac{z}{(z-1)^2}$ and $\dfrac{1}{\log(z)^2}$, as well as their difference (blue, red, yellow). While the functions themselves clearly have a pole at $z=1$, their difference converges to $$-\frac{1}{1!\,2!\,3!}=-\frac{1}{12}=-0.08{\dot 3}.$$ A way to do this is regularization by subtracting a continuous integral, with the help of the Euler–Maclaurin formula. You can write: $$ \sum_{\text{Regularized}} =\sum_{n=0}^{+\infty}f(n) - \int_0^{+\infty} f(t) \,dt = \frac{1}{2}(f(\infty) + f(0)) + \sum_{k=1}^{+\infty} \frac{B_k}{k!} (f^{(k - 1)} (\infty) - f^{(k - 1)} (0)),$$ where the $B_k$ are the Bernoulli numbers. With the function $f(t) = te^{-\epsilon t}$, $\epsilon > 0$, you have $f^{(k)}(\infty) = 0$ and $f(0) = 0$, so in the limit $\epsilon \rightarrow 0$ you find: $$\sum_{\text{Regularized}} = - \frac{B_1}{1!} f (0) - \frac{B_2}{2!} f' (0) = - \frac{1}{12},$$ because $f(0) = 0$ and $B_2 = \frac{1}{6}$.
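As a quick numerical sanity check of the middle derivation (my own snippet, not from the answers), the difference $\frac{z}{(z-1)^2}-\frac{1}{\log(z)^2}$ indeed approaches $-1/12$ as $z \to 1^-$:

from math import log

# Compare z/(z-1)^2 with 1/log(z)^2 near z = 1: each term blows up,
# but the difference tends to -1/12 = -0.08333...
for z in (0.9, 0.99, 0.999, 0.9999):
    print(z, z / (z - 1)**2 - 1 / log(z)**2)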
In electrostatics, we have Maxwell's equations: $\nabla \cdot E = \rho$ $\nabla \times E = 0$ These four equations (the second line standing for three equations) can also be written in terms of the electrostatic potential: $ -\nabla^2 V=\rho $ $ E = -\nabla V $ Now if we know the positions of every charge in our system, we can find the electrostatic field (completely and entirely, with no additional information required) using Coulomb's law: $ E(x) = -\nabla \int \frac{\rho(x')}{4\pi|x-x'|} \mathrm{d}^3 x' $ My question is: to what extent can we do the same with Maxwell's equations? For instance, whenever Coulomb's law is derived from Maxwell's equations, an appeal needs to be made to spherical symmetry. Must we do this? Can we not use the vanishing curl somehow to reach the same conclusion? Similarly, if we derive Coulomb's law from Poisson's equation, we must specify boundary conditions. We must specify that the potential is some constant, say zero, at infinity. It appears that the information content is less. I have read a little bit about the Helmholtz decomposition, and it appears that the curl and divergence of a vector field ($E$, in this case) do completely determine the vector field, provided certain restrictions are placed on the smoothness and decay of the field at infinity. In other words, it appears that Maxwell's equations (in the context of electrostatics) on their own carry less information than Coulomb's law, with the boundary/decay conditions supplying the difference.
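For concreteness, the Helmholtz decomposition alluded to here can be written explicitly for fields decaying suitably at infinity (this formula is standard, not part of the original question): $$\mathbf{E}(x) = -\nabla\,\frac{1}{4\pi}\int \frac{\nabla'\cdot \mathbf{E}(x')}{|x-x'|}\,\mathrm{d}^3 x' \;+\; \nabla\times\frac{1}{4\pi}\int \frac{\nabla'\times \mathbf{E}(x')}{|x-x'|}\,\mathrm{d}^3 x',$$ and substituting $\nabla\cdot\mathbf{E}=\rho$ and $\nabla\times\mathbf{E}=0$ reproduces exactly the Coulomb integral above.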
Let $X_k$ be $\mathbb{P}^2$ blown up at $k$ points (where $k$ is $0$ to $8$). Let $\beta \in H_2(X_k, \mathbb{Z})$ be the homology class given by $$ \beta := n L + m_1 E_1 + \ldots + m_k E_k $$ where $L$ is the homology class of a line and the $E_i$ are the exceptional divisors. Also, define $$ \delta_{\beta} := \langle c_1(TX_k), \beta\rangle-1= 3n + m_1 + \ldots + m_k-1. $$ Let $N_{\beta}$ be the number of genus zero curves in the class $\beta$ passing through $\delta_{\beta}$ generic points. $\textbf{Questions:} $ I have two questions. In their paper Kontsevich and Manin give a recursive formula to compute $N_{\beta}$ (page $29$). 1) Is it known that the numbers $N_{\beta}$ that one gets from their formula are actually the enumerative numbers (i.e. they are actually the honest count of curves through the right number of generic points)? A priori, Gromov–Witten invariants need not be enumerative, and I suspect the formula given by Kontsevich and Manin is for the genus zero GW invariants. In particular, on page $26$ of their paper (second last paragraph), they make the remark "We expect that $N_{\beta}$ counts the number of rational curves in the homology class $\beta$ passing through $\delta_{\beta}$ points, at least in unobstructed problems." This remark seems to suggest that at the time of writing the paper they did not know if the numbers are actually enumerative. Is this presently known (i.e. are genus zero GW invariants on del Pezzo surfaces enumerative)? The answer is yes for $\mathbb{P}^2$. 2) My second question is how does one actually compute $N_{\beta}$ using their recursive formula? One needs enough initial conditions for the recursion. On page $29$ (just after they state the formula) they say that $N_{\beta}$ is "expected" to be one for all indecomposable $\beta$. This seems to imply that $$N_{3L-E_1-E_2-\ldots- E_8} = 1.$$ But as observed by Mark in this post, it seems that this number ought to be the same as the number of rational planar cubics through $8$ generic points, i.e. $12$. So what have I misunderstood here?
I am reading the book "Fourier Series and Integrals" by Dym & McKean. There is an exercise (page 106): Exercise: Check that if $f$ is a real, even, summable function and if $f(0+)$ and $f(0-)$ exist, then either $f(0-) =f(0+)$ or $\hat f(\gamma)$ changes sign infinitely often as $|\gamma| \to \infty$. Note that $\hat f(\gamma)$ is a real function, so its "sign" makes sense! There is a hint for the exercise, as follows: Hint: The function $\hat f$ is summable if it is of one sign far out, as you can see from $$\frac{f(0-) +f(0 + )}{2} = \lim_{t \to 0} \, (P_t * f) (0) = \lim_{t \to 0} \int_{-\infty}^{\infty} \exp(-2 \pi^2 \gamma^2 t) \hat f(\gamma) \, \mathrm{d}\gamma.$$ Here $P_t=P_t(x)=\dfrac{1}{\sqrt{2\pi t}}\exp\left(-\dfrac{x^2}{2t}\right)$ is the Gauss kernel and $P_t * f$ means the convolution of $f$ with the Gauss kernel. My try: If $\hat f$ is of one sign far out, then by using $$\frac{f(0-) +f(0 + )}{2} = \lim_{t \to 0} \int_{-\infty}^{\infty} \exp(-2 \pi^2 \gamma^2 t) \hat f(\gamma) \, \mathrm{d}\gamma$$ and the Monotone Convergence Theorem we deduce that $\hat f \in L^1(\mathbb{R})$, so $$f(-x)=\hat {\hat f}(x)$$ is continuous, and consequently $f(x)$ will be continuous at $x=0$; therefore $f(0-) =f(0+)$. I don't know how to handle the other half. Thanks.
Consider the $d$-dimensional integer lattice, $Z^d$. Call two points in $Z^d$ "neighbors" if their Euclidean distance is 1 (i.e., if they differ by 1 on exactly one coordinate). Let $C$ be a two-coloring of $Z^d$, which makes each point either red or blue. We'll assume $C$ has the following "nontriviality" property: the origin is colored red, but on each of the $d$ axes through the origin, there's a point on that axis that's colored blue. Let the "sensitivity" of a point $x$ with respect to $C$, or $s_x(C)$, be the number of $x$'s neighbors that are colored differently from $x$. Then let $s(C) = \max_{x \in Z^d} s_x(C)$. QUESTION: Can you give me any decent lower bound on $s(C)$ in terms of $d$? For example, that $s(C) \ge k \sqrt{d}$ for some constant $k > 0$? REMARK 1: If you prove a lower bound of the form $k d^l$ (for constants $k,l > 0$), then you'll have solved an old open problem in the study of Boolean functions, namely the "sensitivity versus block sensitivity" problem (posed by Noam Nisan in 1991). But please don't let that discourage you! My variant feels more approachable, and maybe something is even already known about it. (I'll be happy to supply full details of the reduction on request. But here's the basic idea: let $f : \lbrace 0,1 \rbrace ^n \rightarrow \lbrace 0,1 \rbrace$ be a Boolean function such that the block sensitivity $bs(f)$ is much much larger than the sensitivity $s(f)$. Then there must be an input $x$ of $f$ that has $bs(f)$ disjoint sensitive blocks. Let $d=bs(f)$. Then we can construct a two-coloring of $Z^d$ with the properties listed above, and such that $s(C) \le 2 s(f)$ where $s(f)$ is the sensitivity of $f$. The input $x$ gets mapped to the origin of $Z^d$, while each of the $d$ sensitive blocks of $x$ gets mapped to one of the axes of $Z^d$. To map a Boolean assignment to an integer, in a way that preserves the sensitivity, we use the Gray Code. Finally, we color each point $y \in Z^d$ either red or blue, depending on whether $f(x)$ is 0 or 1 for the corresponding Boolean point $x$.) REMARK 2: I can give an example of a coloring with $s(C) = O(\sqrt{d})$, meaning that $s(C) \ge k \sqrt{d}$ really is the best lower bound one can hope for. This coloring can be obtained by starting from "Rubinstein's function" -- a Boolean function $f : \lbrace 0,1 \rbrace ^n \rightarrow \lbrace 0,1 \rbrace$ with $bs(f) = n/2$ and $s(f) = 2 \sqrt{n}$ -- and then applying the reduction sketched in Remark 1. (For those who are interested, let me now go ahead and describe a coloring with $s(C) = O( \sqrt{d} )$ explicitly. Assume for simplicity that $d$ is a perfect square. Partition the $d$ coordinates of $x$ into $\sqrt{d}$ "blocks" of $\sqrt{d}$ coordinates each. Then we'll color $x$ blue, if and only if at least one of the blocks has a single coordinate equal to $2$ and all other coordinates equal to $0$. I'll leave it as an exercise for you to verify that $s(C) = 2 \sqrt{d}$.) Note: I edited the above paragraph a little, to simplify the construction and insert a missing factor of 2. REMARK 3: At the moment, I don't even have a proof that $s(C)$ has to grow with $d$ (!!). But I suspect at least $s(C) \ge k \log d$ ought to be doable. EDIT: Sorry to switch notations in the middle of the game, but I have a better one if you want to talk about low dimensions (per domotorp's question below)! Let's let $r_x(C)$ be the number of axes (up/down, left/right, etc.) along which $x$ has a neighbor that's colored differently than $x$ is. Then let $r(C) = \max_x r_x(C)$. 
Clearly $r(C) \le s(C) \le 2r(C)$ for all $C$. In fact, something even stronger than that is true: given any coloring $C$, one can create a new coloring C' that satisfies $s(C')=r(C)$, by simply "blowing up" each point $x$ into a cube of $2^d$ points, which are all colored the same way $x$ was colored in $C$. The nontriviality and sensitivity properties will clearly be preserved; all this transformation does is to eliminate the problem of a point having two differently-colored neighbors along the same axis. So without loss of generality, we can shift attention to $r(C)$. Now let $r_d = \min_C r(C)$ over all nontrivial colorings $C$ of $Z^d$. Then here's what I know: $r_1 = 1$ $r_2 = 2$ $r_3 = 2$ $r_4 = 2$ $r_5 \in \lbrace 2,3 \rbrace$ UPDATE: I created an image that shows an explicit coloring of $Z^3$ that satisfies both the nontriviality condition and $r(C)=2$. (That is, from every point, you can change color by moving along at most 2 different axes.) As explained above, this can easily be converted into a coloring with $s(C)=2$ as well. domotorp is right that proving $r_5=3$ could be a great start...
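To make Remark 2 concrete, here is a small brute-force sketch (my own; the verification is left as an exercise in the post) that checks the block coloring for $d = 4$: it colors $x$ blue iff some block is a permutation of $(2,0,\dots,0)$, and confirms that the maximum sensitivity over a small box is $2\sqrt{d} = 4$:

import itertools

def is_blue(x, b):
    # x lives in Z^(b*b); blocks are consecutive runs of b coordinates.
    # Blue iff some block has exactly one coordinate equal to 2, rest 0.
    for i in range(b):
        block = x[i * b:(i + 1) * b]
        if block.count(2) == 1 and all(v in (0, 2) for v in block):
            return True
    return False

def sensitivity(x, b):
    # Number of lattice neighbors of x colored differently from x.
    c = is_blue(x, b)
    count = 0
    for j in range(b * b):
        for delta in (1, -1):
            y = list(x); y[j] += delta
            if is_blue(tuple(y), b) != c:
                count += 1
    return count

b = 2  # so d = b*b = 4
box = itertools.product(range(-1, 4), repeat=b * b)
print(max(sensitivity(x, b) for x in box))  # prints 4 = 2*sqrt(d)

This is of course only a finite check over a box, not a proof, but the extremal points it finds are exactly the blue points with a single singleton block, matching the counting argument $2 + 2(\sqrt{d}-1) = 2\sqrt{d}$.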
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions.
I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Does $\text{exp}\bigg(\dfrac{\pi^2}{6 e^{\gamma}}\dfrac{\sigma(p_n\#)}{p_n\#}\bigg)$ bound $p_n$ from above? (Note: From Daniel Fischer's answer here.) Update: ... and from below by $\log(p_n\#)$?
You asked, Is there an efficient way to set all the integration limits in the document to \textstyle? When TeX is in displaymath mode, \int produces a "large" integral symbol and the upper and lower limits of integration are typeset in scriptstyle. What you're encountering is that the numerators and denominators inside these limits are typeset in scriptscriptstyle if \frac is used. To change this default behavior globally, i.e., to switch from scriptstyle to textstyle in the limits of integration on a document-wide basis, would require major surgery on TeX's math innards and is not to be undertaken lightly. And I would hate to call such coding efficient. Moreover, you will probably not like the resulting look if the limits contain simple numbers and letters (since these will now be set in textstyle, i.e., the same size as applies for $\sin x\,\mathrm{d}x$). You could use \tfrac explicitly -- \tfrac is short for \textstyle\frac -- to get scriptstyle- rather than scriptscriptstyle-sized numerator and denominator terms; the resulting look is also shown in @Bernard's earlier answer. Alternatively, you could use "inline-style" fraction notation, i.e., write \pi/4 and \pi/2 in the limits; these terms will be set in scriptstyle automatically.

\documentclass{article}
\usepackage{amsmath} % for \text and \tfrac macros
\begin{document}
\[
\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\sin x\,\mathrm{d}x   % \frac
\quad\text{vs.}\quad
\int_{\tfrac{\pi}{4}}^{\tfrac{\pi}{2}}\sin x\,\mathrm{d}x % \tfrac
\quad\text{vs.}\quad
\int_{\pi/4}^{\pi/2}\sin x\,\mathrm{d}x                   % inline-style
\]
\end{document}
Let $\lambda=(\lambda_1,\lambda_2,\cdots,\lambda_k)$ be a partition with $|\lambda|=n$ and $\lambda_1\geq \lambda_2\geq\cdots\geq \lambda_k$. For any Standard Young Tableau (SYT) $T$ of shape $\lambda$, define the "flattened tableau" by deleting the first row (of length $\lambda_1$), and then relabeling all entries in the tableau with respect to their relative order, thus giving a SYT $T'$ of shape $\lambda':=(\lambda_2,\cdots,\lambda_k)$ with $|\lambda'|=n-\lambda_1$. For example, if $\lambda=(4,3,2,1,1)$ with $|\lambda|=10$, then $\lambda'=(3,2,1,1)$. As an example, take $T=$ \begin{array}{cccc} 1 & 3 & 5 & 6\\ 2 & 4 & 7 & \ \\ 8 & 11 & \ & \ \\ 9 & \ & \ & \ \\ 10 & \ & \ &\ \end{array} Then, $T'=$ \begin{array}{cccc} 1 & 2 & 3 & \ \\ 4 & 7 & \ & \ \\ 5 & \ & \ & \ \\ 6 & \ & \ &\ \end{array} Does this operation have a common name in the literature? Has it been studied before? As a map, the flattening operation $\phi: SYT(\lambda)\rightarrow SYT(\lambda')$ is clearly a surjection (and not a bijection). On the other hand, are there specific $\lambda$ for which $\phi$ is uniform over $SYT(\lambda')$? In other words, is $|\{T: \ \phi(T)=T'\}|$ the same for all $T'$?
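In case it is useful for experimentation, here is a small sketch of the flattening map in Python (my own helper, with a tableau stored as a list of rows):

def flatten_tableau(T):
    # Delete the first row of a standard Young tableau and relabel the
    # remaining entries 1..m according to their relative order.
    rest = T[1:]
    vals = sorted(v for row in rest for v in row)
    rank = {v: i + 1 for i, v in enumerate(vals)}
    return [[rank[v] for v in row] for row in rest]

# The example from the question:
T = [[1, 3, 5, 6], [2, 4, 7], [8, 11], [9], [10]]
print(flatten_tableau(T))  # [[1, 2, 3], [4, 7], [5], [6]]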
A linear transformation maps straight lines continuously to straight lines, equal distances on a single line to equal distances, and additionally the origin to the origin. Without the third condition, it's called an affine transformation. Proof that those conditions are sufficient to recover the usual definition: Let $V$ and $W$ be vector spaces and $f:V\to W$ a function that fulfils those conditions. Let $v_1$ and $v_2$ be vectors in $V$, and $w_i = f(v_i)$. Since the origin is preserved, $f(0_V) = 0_W$. Now consider the straight line $\{\lambda v_1: \lambda\in\mathbb R\}$. Since straight lines are mapped to straight lines, and a line is fixed by two points, we know that the image of the line is $\{\mu w_1: \mu\in\mathbb R\}$, where $\mu(\lambda)$ is by assumption a continuous function. We also know $\mu(1)=1$ because $w_1 = f(v_1)$. Now since on a single line equal lengths are mapped to equal lengths, we know that $f(\lambda v_1+v_1)=\mu(\lambda)w_1+w_1$. From that we can derive that for integer $n$, $f(n v_1)=n w_1$, and with an analogous argument that for any rational number $q$, $f(q v_1) = q w_1$. Continuity then gives us $\mu(\lambda)=\lambda$, that is, $f(\lambda v_1) = \lambda w_1$. Of course since $v_1$ is arbitrary, this is true for every vector. Now consider the straight line going through $2\alpha v_1$ and $2\beta v_2$. This line is given by $\lambda 2\alpha v_1 + (1-\lambda) 2 \beta v_2$. This line is mapped to the straight line through $2\alpha w_1$ and $2\beta w_2$, given by $\mu(\lambda) 2\alpha w_1+(1-\mu(\lambda)) 2\beta w_2$. Clearly $\mu(0)=0$ and $\mu(1)=1$. Now consider specifically $\lambda=\frac12$, that is, the point $\alpha v_1 + \beta v_2$. That point has equal distance from $2\alpha v_1$ and $2\beta v_2$. Therefore, since equal distances on a line are mapped to equal distances, the image point also has to have equal distance from $2\alpha w_1$ and $2\beta w_2$, that is, it must be the point $\alpha w_1 + \beta w_2$. Therefore we have$$f(\alpha v_1 + \beta v_2) = \alpha w_1 + \beta w_2 = \alpha f(v_1) + \beta f(v_2).$$But this is the conventional definition of a linear function.
Consider the following statement: Prove that it is possible to write $\Bbb R$ as a union $\Bbb R= \bigcup_{i\in I} A_{i}$ where $A_{i} \cap A_{j}= \emptyset$ if $i\neq j$, $i,j \in I$, and such that each $A_{i}$ and $I$ are uncountable sets. There is the same question here (The real numbers as the uncountably infinite union of disjoint uncountably infinite sets). And my question is about one of the answers to that question. Thanks to Kyle Gannon, who gives a constructive proof: Since $|\mathbb{R}| = |\mathbb{R} \times \mathbb{R}| $, there exists a bijection $f$ from $\mathbb{R} \to \mathbb{R} \times \mathbb{R} $. Then $\mathbb{R} = \bigcup_{a \in \mathbb{R}} f^{-1}(\mathbb{R},a)$ where $f^{-1}(\mathbb{R},a) = \{b \in \mathbb{R}: f(b) = (c,a)$ for some $c \in \mathbb{R} \}$. This answer is reasonable to me. But I am struggling with proving the facts that the set $f^{-1}(\mathbb{R},a)$ is uncountable, that each $f^{-1}(\mathbb{R},a)$ is disjoint from the others, and that their union is the whole set of real numbers. They all seem intuitively true to me, but I just want to know how to formally prove them. I would really appreciate it if someone could help me. Thanks so much!
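For what it's worth, all three facts come straight from $f$ being a bijection; writing $A_a := f^{-1}(\mathbb{R}\times\{a\})$, a compressed version of the checks reads: $$\begin{aligned} &\textbf{Uncountable: } f \text{ restricts to a bijection } A_a \to \mathbb{R}\times\{a\}, \text{ so } |A_a| = |\mathbb{R}|;\\ &\textbf{Disjoint: } b \in A_a \cap A_{a'} \implies f(b) \text{ has second coordinate both } a \text{ and } a', \text{ so } a = a';\\ &\textbf{Cover: } \text{for any } b \in \mathbb{R},\ f(b) = (c,a) \text{ for some } c, a, \text{ hence } b \in A_a. \end{aligned}$$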
27/06/2019, 15:00 — 16:00 — Room P3.31, Mathematics Building Hugo Tavares, Faculdade de Ciências, Universidade de Lisboa Least energy solutions of Hamiltonian elliptic systems with Neumann boundary conditions In this talk, we will discuss existence, regularity, and qualitative properties of solutions to the Hamiltonian elliptic system $$ -\Delta u = |v|^{q-1} v\ \ \ \text{in} \ \Omega,\quad -\Delta v = |u|^{p-1} u\ \ \ \text{in} \ \Omega,\quad \partial_\nu u=\partial_\nu v=0\ \ \ \text{on} \ \partial\Omega,$$with $\Omega\subset \mathbb R^N$ bounded, both in the sublinear $pq< 1$ and superlinear $pq>1$ problems, in the subcritical regime. In balls and annuli, we show that least energy solutions are not radial functions, but only partially symmetric (namely foliated Schwarz symmetric). A key element in the proof is a new $L^t$-norm-preserving transformation, which combines a suitable flipping with a decreasing rearrangement. This combination allows us to treat annular domains, sign-changing functions, and Neumann problems, which are nonstandard settings to use rearrangements and symmetrizations. Our theorems also apply to the scalar associated model, where our approach provides new results as well as alternative proofs of known facts.
I haven't thought about this one before, so here is an approach that will work if you work hard enough at it. Before I begin banging on, point number 1: Should I assume a spin 3/2 system (4x4 matrix) or an entangled Hilbert space with spin 1/2 and spin 1 (6x6 matrix)? Unquestionably the latter. It is a bipartite system and its state space is the tensor product of the two particle spaces. It simply cannot be anything else. The basic principle here is conservation of angular momentum, so your basic procedure to solve your problem is: Work out the matrices for the observables for the three nett angular momentum components (the three nett angular momentum operators); Find the most general Hamiltonian which commutes with all three of these, as commutation with the Hamiltonian is equivalent to invariance with time of all the moments of the probability distributions of the measurements. Part 1: The Three Angular Momentum Operators The $x$-AM component observable for the spin half particle, $$\sigma_x=\left(\begin{array}{cc}0&1\\1&0\end{array}\right)$$ has AM eigenvectors: $$\psi_+=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\1\end{array}\right);\quad\psi_-=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\-1\end{array}\right)$$ and AM eigenvalues $\lambda_+=+\frac{1}{2}$ and $\lambda_-=-\frac{1}{2}$, respectively. The $x$-AM component observable for the spin 1 particle, $$S_x = \left(\begin{array}{ccc}0&0&0\\0&0&i\\0&-i&0\end{array}\right)$$ has AM eigenvectors: $$\Psi_+=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\i\\1\end{array}\right);\quad\Psi_-=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\1\\i\end{array}\right);\quad\Psi_0=\left(\begin{array}{c}1\\0\\0\end{array}\right)$$ and AM eigenvalues $\Lambda_+=+1$, $\Lambda_-=-1$ and $\Lambda_0=0$, respectively. So now, for the two particle system, the six $x$-AM eigenstates are: $\psi_+\otimes\Psi_+$ with AM eigenvalue $\frac{1}{2}+1=\frac{3}{2}$ $\psi_+\otimes\Psi_0$ with AM eigenvalue $\frac{1}{2}+0=\frac{1}{2}$ $\psi_+\otimes\Psi_-$ with AM eigenvalue $\frac{1}{2}-1=-\frac{1}{2}$ $\psi_-\otimes\Psi_+$ with AM eigenvalue $-\frac{1}{2}+1=\frac{1}{2}$ $\psi_-\otimes\Psi_0$ with AM eigenvalue $-\frac{1}{2}+0=-\frac{1}{2}$ $\psi_-\otimes\Psi_-$ with AM eigenvalue $-\frac{1}{2}-1=-\frac{3}{2}$ and so, if we order the eigenstates as above, the eigenvectors as columns are $\mathrm{vec}(\psi_+\otimes\Psi_+),\,\mathrm{vec}(\psi_+\otimes\Psi_0)\cdots$ (see the Wikipedia Vectorization Page) and so at last we get as the total $x$-AM component observable $\Sigma_X = P_X \Lambda_X P_X^\dagger$ where $$P_X=\left(\begin{array}{cccccc} 0 & \frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\ 0 & \frac{1}{\sqrt{2}} & 0 & 0 & -\frac{1}{\sqrt{2}} & 0 \\ \frac{i}{2} & 0 & \frac{1}{2} & \frac{i}{2} & 0 & \frac{1}{2} \\ \frac{i}{2} & 0 & \frac{1}{2} & -\frac{i}{2} & 0 & -\frac{1}{2} \\ \frac{1}{2} & 0 & \frac{i}{2} & \frac{1}{2} & 0 & \frac{i}{2} \\ \frac{1}{2} & 0 & \frac{i}{2} & -\frac{1}{2} & 0 & -\frac{i}{2} \\\end{array}\right)$$ and $\Lambda_X =\mathrm{diag}\left(\frac{3}{2},\,\frac{1}{2},\,-\frac{1}{2},\,\frac{1}{2},\,-\frac{1}{2},\,-\frac{3}{2}\right)$.
The result is: $$\Sigma_X=\left(\begin{array}{cccccc} 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{3}{4} & -\frac{1}{4} & \frac{i}{4} & \frac{3 i}{4} \\ 0 & 0 & -\frac{1}{4} & \frac{3}{4} & \frac{3 i}{4} & \frac{i}{4} \\ 0 & 0 & -\frac{i}{4} & -\frac{3 i}{4} & \frac{3}{4} & -\frac{1}{4} \\ 0 & 0 & -\frac{3 i}{4} & -\frac{i}{4} & -\frac{1}{4} & \frac{3}{4} \\\end{array}\right)$$ From here on it should be conceptually clear how to go, although tedious. You do the same for the $y$-AM observables: $$\sigma_y=\left(\begin{array}{cc}0&-i\\i&0\end{array}\right)$$$$S_y = \left(\begin{array}{ccc}0&0&-i\\0&0&0\\i&0&0\end{array}\right)$$ to find the total system $y$-AM observable $\Sigma_Y$ and for the $z$-AM observables: $$\sigma_z=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)$$$$S_z = \left(\begin{array}{ccc}0&i&0\\-i&0&0\\0&0&0\end{array}\right)$$ to get the total system $z$-AM observable $\Sigma_Z$. Part 2: Find the Most General Hamiltonian Your most general Hamiltonian will be defined by the three commutator relationships expressing conservation of AM: $$[\hat{H},\,\Sigma_j]=0;\;j=X,\,Y,\,Z$$ You'll need to work out the invariant spaces of the three $\Sigma$s to do this. You'll get a linear space of possible $\hat{H}$s: in the case of two coupled spin half particles there is essentially only one possible Hamiltonian that falls out of this approach, namely one proportional to $\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y+\sigma_z\otimes\sigma_z$ (plus a term proportional to the $4\times4$ identity matrix expressing the shift in the ground state energy), but in this six dimensional case things will be a bit more complicated. Now as I said, I've never done this before, so I daresay there is a more systematic and less cumbersome way to work this all out. But any method is going to rest on the first principles expressed above. Magnetic Field What are the terms for the influence of the magnetic field? Well, that's an easy one: in the ordering we have studied above, the uncoupled Hamiltonian will be: $$\hat{H} = \gamma_{\frac{1}{2}}\left(\sigma_x\,B_x + \sigma_y\,B_y+ \sigma_z\,B_z\right)\otimes 1_{3\times3} + \gamma_1\,1_{2\times2}\otimes\left(S_x\,B_x + S_y\,B_y+ S_z\,B_z\right)$$ where $\gamma_{\frac{1}{2}}$ and $\gamma_1$ are the respective gyromagnetic ratios. Notes on completing the method. You can also represent a bipartite state $\Phi=\psi\otimes\Psi$ as the literal $2\times 3$ matrix that is the outer product $\Phi=\psi\,\Psi^T$ of the $2\times 1$ and $3\times 1$ column vectors. Then the operators on the first space act on the left and the operators on the second act on the right. So our $x$-component observable would be the linear, homogeneous transformation: $$\Phi\mapsto \sigma_x\,\Phi\,S_x^T$$ and the vectorization operator (see the Vectorization Wiki page), which reorders our states into $6\times 1$ column vectors as in my answer, writes this as $$\mathrm{vec}(\Phi) \mapsto S_x\otimes\sigma_x\,\mathrm{vec}(\Phi),$$ using the standard formula $\mathrm{vec}(A\,B\,C) = C^T\otimes A \,\mathrm{vec}(B)$. By dint of the formula $(A\otimes B)\, (C\otimes D) = (A\,C)\otimes(B\,D)$, and using the fact that inverse, complex conjugate, Hermitian conjugate and transpose operations distribute over the Kronecker product, we can diagonalize $S_x\otimes\sigma_x$ inside the Kronecker product and find that the coupled system's eigenvectors are the columns of $\Pi_x\otimes \pi_x$, where $\Pi_x,\,\pi_x$ are the matrices of eigenvectors of the individual multiplicands written as columns.
So this will let you calculate $\Sigma_j,\,j=X,\,Y,\,Z$ systematically and fast. Now to find the most general Hamiltonian, you need to find the invariant spaces of the group of matrices generated by the three matrices $\exp(i\,\Sigma_j)$ and find its irreducible subspaces: equivalently, the smallest vector subspaces of $\mathbb{C}^6$ left invariant by the group. By Schur's lemma, any matrix commuting with all three must be proportional to the identity operator when restricted to each such subspace. The scaling factor is possibly nought – i.e. the operator could possibly be the zero endomorphism. This completely characterizes the most general Hamiltonian: it can be any operator which is proportional to the identity when restricted to each irreducible subspace. You could also find the subspace which is common to all three of the nullspaces of the three $36\times 36$ matrices $1_{6\times 6}\otimes \Sigma_j - \Sigma_j^T\otimes 1_{6\times 6}$ in Mathematica or Matlab, but I suspect there is a much more elegant method grounded on Schur's lemma!
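Here is a small numerical sketch of Part 2 (my own, using the standard Hermitian spin-1/2 and spin-1 matrices rather than the exact conventions above); it builds the three total-AM operators with Kronecker products and computes the dimension of the space of commuting Hamiltonians, which comes out as 2, matching $\frac{1}{2}\otimes 1 = \frac{3}{2}\oplus\frac{1}{2}$ and the span of the identity and the $\vec{s}\cdot\vec{S}$ coupling:

import numpy as np

# Standard spin-1/2 (s = sigma/2) and spin-1 matrices (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

I2, I3, I6 = np.eye(2), np.eye(3), np.eye(6)
Sigmas = [np.kron(s, I3) + np.kron(I2, S)      # total AM on C^2 (x) C^3
          for s, S in zip((sx, sy, sz), (Sx, Sy, Sz))]

# [H, Sigma_j] = 0 as a linear system on vec(H):
# (I (x) Sigma_j - Sigma_j^T (x) I) vec(H) = 0, stacked over j = x, y, z.
A = np.vstack([np.kron(I6, S) - np.kron(S.T, I6) for S in Sigmas])
sing = np.linalg.svd(A, compute_uv=False)
print(int(np.sum(sing < 1e-10)))   # 2: the identity and the s.S coupling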
Because I can see the non-dust pattern with the naked eye... Just three days ago, I wrote a blog post about BICEP2. So I wasn't terribly excited to write down another blog post once the Planck Collaboration published a paper claiming that the BICEP2 signal could be due to dust, especially because I don't find the Planck paper to be terribly new, interesting, insightful, or game-changing. A random image, taken from Perceiving randomness: egalitarian bias They've been saying similar things since the spring and the arguments they presented today don't seem stronger than the previous ones. Their fits don't seem to be too good without the error margins and as far as I can say, they are inflating the errors by inventing various kinds of "extra errors" (such as the "conversion error") in order to dilute and obfuscate the signal that they may have failed to discover, despite their superior gadgets and huge funding. This production of spurious errors sort of reminds me of Gerhard Schröder's invention of new taxes such as the environmental tax, the beverage tax, the bad weather tax, and others (Schröder wasn't a sufficiently arrogant hardcore thief to propose a carbon tax, however!). Much of this tension is a clash of personalities. I think that what BICEP2 has shown is experimental science of the best kind and unless some embarrassing error emerges (I really mean something like a loosened OPERA cable: it hasn't emerged so far), I will continue to think of them highly even if their discovery is ultimately reduced to dust (or another background). Like proper stereotypical experimenters, they didn't really believe a word that the theorists like to say (all proper experimenters think that gravity is actually caused by leprechauns and GR is just a theorists' fairy-tale for babies to sleep smoothly; but if a theorist needs the experimenters to empirically determine something, the good experimenters are as reliable as a vacuum cleaner). However, after they spent a very long time on efforts to show that their signal is due to something else, they published a paper with the discovery claim and it was undoubtedly right that they did so. Science couldn't operate if the publication of a discovery were viewed as a blasphemy. On the other hand, I do feel that the Planck folks lack audacity, combine that with prejudices, and the plan to "just impose new limits" is their modus operandi, a dogma predetermined to direct their behavior for years to come. I am even afraid that they will abuse their stronger political power and their joint paper with BICEP2 (to be out in November 2014) already has a predetermined qualitative conclusion now – it will be just a new limit on \(r\), the tensor-to-scalar ratio. But let me stop with these ethical questions which are clearly dividing the people. There is a scientific substance. Everyone who is interested in cosmology would love to know whether the imprints of the primordial gravitational waves have been seen. I agree with those who say that this discovery, if true, is the greatest discovery in many years if not decades or a century. I would probably place it above the Higgs boson discovery because unlike the Higgs boson, it wasn't really guaranteed. However, we must ask: is the discovery real? Of course I am not 100.00000% sure. But I still think it's significantly more likely than not that the BICEP2 discovery is genuine and the pattern they see is simply not dust. Why? Because it clearly doesn't look like dust. What do I mean?
Just look at the key "photograph" of the BICEP2 field with the polarized CMB signal. My question for you is the following: Can you distinguish this picture from the picture of the "colorful smoke" at the top of this blog post? I sincerely hope you can. But why are these two pictures so different? The BICEP2 picture is very regular in some very specific way: you may talk about a preferred length scale, the typical distance of adjacent red blobs (or adjacent blue blobs). It pretty much looks as if the adjacent, nearby blue blobs (or adjacent red blobs) wanted to be a rather well-defined distance from each other – and the distance is something like 1/7 of the width of the picture (about 4° steps in the declination, the vertical axis – or about 8° steps in the right ascension on the horizontal axis; note that \(\Delta \phi\) has to be multiplied by \(\sin\theta\) to get a distance on the unit sphere). Do you agree? Needless to say, the step \(\Delta\theta=4^\circ\) exactly corresponds to \(\ell=90\) because one may squeeze \(90\) waves of \(4^\circ\) each onto a circle that has \(360^\circ\) in total. On the other hand, the colorful smoke at the top doesn't seem to have a preferred length scale. To a large extent, it is self-similar. If you need to know, the colors in the "colorful smoke" picture were computed from functions defined by Fourier series, and the coefficients in the momentum representation were random numbers comparable to a (decreasing) power law of the momentum, to guarantee that the color is sort of continuous. If you work on the sphere, the "momentum modes" become "spherical harmonics" and \(\ell\), the orbital angular momentum, replaces \(|\vec k|\). OK. What is the decomposition of the pictures into spherical harmonics? Open the new Planck paper on page 8. You find Figure 2 over there which has two similar (electric, magnetic spectral) parts and the top one-half of each part features three red almost straight lines (top) and three blue almost straight lines (bottom) which are the predictions for the dust. These monotonic predictions may leak to the polarization data and the resulting polarization, in the bottom part of the pictures, is less monotonic but it is still monotonic enough. The slopes are small and the local maxima are very mild. It simply looks very different from the BICEP2 spectrum that seems to have a pretty clear local maximum near \(\ell=90\) or so while the \(\ell=50\) harmonics are weaker by a factor of two. I don't want to say which particular increase or decrease in the BICEP2 spectrum most visibly contradicts the predictions from the "dust" hypothesis. If I wanted to choose the criterion that, in my opinion, discriminates between the BICEP2 observations and the predictions of dust more sharply, it's the presence of a preferred length scale (preferred value of \(\ell\approx 90\), or preferred angular scale in the sky) that BICEP2 seems to clearly and "repeatedly" see while, as far as I can say, all dust-like explanations would tend to predict a self-similar smoke-like pattern that has no finite preferred angular scale. It's my feeling that if you quantified some "confidence level" telling you "how strongly BICEP2 sees a preferred finite angular scale smaller than the size of the window", you would get a pretty high confidence. And the fact that this preferred angular scale \(\ell\approx 90\) agrees with the predictions of inflationary cosmology is one more consistency check for me to be even more certain.
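For concreteness, here is a tiny NumPy sketch of the "smoke" construction just described; the grid size and the spectral slope are my assumptions, not the values used for the actual image.

    import numpy as np

    n, alpha = 256, 2.0                    # grid size and power-law slope: assumptions
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                       # drop the constant mode
    rng = np.random.default_rng(1)
    coeffs = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * k ** -alpha
    smoke = np.fft.ifft2(coeffs).real      # self-similar field with no preferred scale

Because the coefficient envelope is a pure power law in \(|\vec k|\), the resulting field has no preferred scale, unlike a spectrum peaked at \(\ell\approx 90\).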
Again, I am in no way 100.00000% certain that the gravitational waves are being seen over there. Of course, if there is a risk of something like a "loose cable" in the BICEP2 apparatuses, the confidence level may go to 50% or lower. But if there is no risk of an embarrassing error and the competing hypotheses are really dust-like, I feel that my confidence is still above 99% that what BICEP2 sees is not dust/smoke, because that would have a much more self-similar, scaleless appearance. For those reasons, it looks like Planck is a victim of confirmation bias, a team looking for excuses why they haven't seen those things before their competition and trying to sell their own "absence of evidence" as "evidence of absence".
A water planet. Water has a much lower density than any rock you could make a planet out of and is nearly incompressible. However, some funny stuff happens when you try to make an entire planet out of it. For the sake of easy calculation, my planet is going to be a balmy 350K, at least for now. What we're going to do is run through a range of pressures, and water changes forms as that happens. Take a look at the phase diagram of water as I talk you through it. (From this very helpful website) We're staying on the 350K line for now and moving vertically upward as we travel toward the center of the planet. We start at around 100 kPa at the surface, convert to ice VII at ~2 GPa, and convert to ice X at ~50 GPa, where we stay all the way to the core, which should be around 500 GPa. Respective densities: liquid water, at 1g/cm$^3$; ice VII, at 1.5g/cm$^3$; and ice X, at 2.5g/cm$^3$. However, these densities also increase with depth according to the bulk modulus equation. $$B = \frac{\Delta P}{\Delta V / V}$$ From this website and this article (paywall, sorry), I managed to find the bulk modulus ($B$) of water and ice VII. I couldn't find one for ice X, so I'll assume it's similar to ice VII. Liquid water has a bulk modulus of 2.2 GPa and it takes 200km of water to reach 2GPa, according to the classic conversion 101kPa/10m. Thus, we can solve for the final density with this equation: $$\rho_f = \frac{(\Delta P + B)\,\rho_i}{B}$$ where $\rho_f$ is the final density, $\Delta P$ is the change in pressure, and $\rho_i$ is the initial density of water (1g/cm$^3$). This gives us a water density at the bottom of our ocean of 1.9g/cm$^3$. For the rest of the math, I'll use the average value of 1.5g/cm$^3$. The same equation can be used for the ices, but it's already been done by this graph, made by people (paywall, sorry) far more qualified than I: As you can see, the density of ice VII starts at something like 1.5g/cm$^3$ at 2GPa and is projected to increase to something like 3g/cm$^3$ (7cm$^3$/mol) around 500GPa (which would be our core). I'll use an average density of 2.3g/cm$^3$ for the rest of the math. So, we now have a planet with 200km deep global surface oceans and a thick core of dense ice. Let's get an actual radius for this thing. Our equation in this case will be something like $$g = \frac{G\,M_{planet}}{r^2} = \frac{G\left(V_{core}\,\rho_{core}+V_{ocean}\,\rho_{ocean}\right)}{r^2}$$ Substituting and solving gives us a radius of 15,000 kilometers. Whew. Of course, I handwaved a lot in there, with my biggest handwave being the assumption of constant temperature. To account for that, the vertical line we used on the water phase diagram would curve to the right as we increase the pressure. This means we wouldn't pass through the transitions as quickly, which would actually increase our radius, not decrease it, because we'd have more of the lighter stuff (water and ice VII). Additionally, being forced to average out the densities with respect to depth annoyed me, but I didn't want to work with nasty differential equations. If the "solid surface" requirement truly means solid, we've also got an easy solution: freeze it! Instead of a temperature like 350K, a planet near 200 or 100K would have a frozen surface and a similar radius – remember that ice Ih (normal ice) actually has a lower density than water. As far as creating such a planet goes, I wouldn't be surprised if we found one in the universe somewhere. There's a lot of water around, and one hypothesis for Earth's water is comets.
Smash a bunch of comets together and you've got a water planet. As other answers have pointed out, this is implausible, but not impossible. There would likely be a solid core of some other substance, and it would raise a few scientific eyebrows if the planet turned out to be made of pure water. Other options Other answers have pointed out some good ideas, but I still think that water is the ideal material. Substances like liquid hydrogen or organic molecules (hexane, for example) do indeed have lower densities but they have MUCH higher bulk moduli, which was really the deciding factor in this whole equation. See below for a similar graph from here (again, paywall) – and note the difference in axes, where $H$ has a much more dramatic change with pressure. I wasn't able to find a similar one for hexane, but it'd be between the two based on its bulk modulus alone (paywall. sad.).
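As a sanity check on the numbers above, here is a small Python sketch of the same two-layer estimate. The layer densities, bulk modulus and ocean depth are the approximate values quoted in the text; the target surface gravity of 9.8 m/s² is my assumption (the answer never states it explicitly).

    import numpy as np
    from scipy.optimize import brentq

    G = 6.674e-11                                # m^3 kg^-1 s^-2

    # Compressed density at the bottom of the 200 km ocean
    B, rho_i, dP = 2.2e9, 1000.0, 2e9 - 1e5      # Pa, kg/m^3, Pa
    print((dP + B) * rho_i / B)                  # ~1900 kg/m^3, i.e. 1.9 g/cm^3

    # Radius giving Earth-like surface gravity for an ice core + 200 km ocean
    g, t, rho_ocean, rho_core = 9.8, 200e3, 1500.0, 2300.0
    def gravity_error(r):
        V_core = 4 / 3 * np.pi * (r - t) ** 3
        V_ocean = 4 / 3 * np.pi * r ** 3 - V_core
        return G * (V_core * rho_core + V_ocean * rho_ocean) / r ** 2 - g
    print(brentq(gravity_error, 1e6, 1e8) / 1e3, "km")   # ~15,000 km

The root-finder lands close to the quoted 15,000 km radius, so the averaged-density shortcut holds up numerically.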
Hi - I'm taking a linear algebra course and the first few chapters are on set theory. I'm having trouble with a very simple proof and I'd like some help on it. Let \(S\) and \(T\) be sets. Given \(T \subseteq S\) prove: a) \(S \cap T = T\) b) \(S \cup T = S\) I know these are very simple proofs but please keep in mind (and I know it's ridiculous) I was never taught proofs in high school, so the concept of proving something quite obvious is new to me. Here's my shot at it: a) Let \(x \in T\) - And since \(T \subseteq S\), then \(x \in S\) - Which means \(x \in S \cap T\) - And of course, \(S \cap T \subseteq S\) - And since \(S \cap \{ x \mid \{x\} \subseteq S\} = \{x \mid \{x\} \subseteq S\}\) - It is implied that \(S \cap T = T\) b) Let \(x \in T\) and \(y \in S\) - Since \(S \cup T = \{x,y \mid x \in T,\ y \in S \}\) and it is given that \(T \subseteq S\) - And we already know that \(x \in T \Rightarrow x \in S\) - Then the previous statement can be rewritten as \(S \cup T = \{x,y \mid x \in S, y \in S \}\) - Which obviously implies that in fact \(S \cup T = \{ x \mid x \in S\}\) - So \(S \cup T = S\) Can anyone tell me if these proofs are correct and if not, explain them in similar terms so I can improve my strategies?
Introduction to Integral Calculus – a tutorial index:
- The integral of $x\sin^{-1}x$, solved with the…
- The integral of $x\ln x$.
- The integral of $\frac{1}{a^2-x^2}$; the integration is of the form $\int\frac{1}{a^2-x^2}\,dx$ …
- The integral of $e^x$ times the sine function, and how this integral can be evaluated…
- The definite integral of $\sec x\tan x$ from limits 0 to π…
- The definite integral of inverse tangent from limits 0 to π/4.
- The area of the region between the x-axis and the curve $y = x^2$…
- The area bounded by the curve $y = x^3 + 1$, the x-axis and…
- The area bounded by the curve $y = x^3 - x$ and the x-axis.
- A differential equation of the form $y' + \sqrt{\frac{1-y^2}{1-x^2}}$…
- An application of differential equations in solving mathematics and physics problems.
- A very simple application of differential equations in physics.
- An application of a simple differential equation in business, where $P$ is the principal…
I think I found two errors in the book here. They are highlighted in red below. Part 1: Causality jargon This section was introducing a ton of jargon and, as ever, Carroll confused me with his brevity and power sentences. Early on we had an achronal hypersurface, which is one where no two points are connected by a timelike curve. Carroll gives any edgeless spacelike hypersurface in Minkowski space as an example. I was having a bit of trouble imagining an 'edgeless spacelike hypersurface in Minkowski space' when I found Fig 1. That made it obvious. The HYPERSURFACE OF THE PRESENT therein is achronal. Fig 1: timelike inside the light cone (massive particle); null (aka lightlike) on the light cone (photon); spacelike outside the light cone. On a flat (x,t) spacetime diagram any line is a hypersurface; if it's edgeless it continues forever, and it is spacelike if its gradient m is always limited: -1 < m < 1 – it is more parallel to the space axis than the time axis. Fig 2: Various types of hypersurface. Both spacelike surfaces are achronal. Thinking about an achronal hypersurface S, Carroll defines one + four (or eight) new terms:
- Causal curve: one which is timelike or null everywhere. Two are shown.
- Causal future of S, J+(S): the set of points that can be reached from S by following a future-directed causal curve.
- Chronological future of S, I+(S): the set of points that can be reached from S by following a future-directed timelike curve.
- Future domain of dependence of S, D+(S): the set of all points p such that every past-moving inextendible* causal curve through p intersects S. Points predictable from S (see below).
- Future Cauchy horizon of S, H+(S): the boundary of D+(S). Limit of predictable points (see below).
* inextendible means the curve goes on forever.
We'll now concentrate on the spacelike hypersurfaces S and T, which are achronal, in Fig 3. Fig 3 Before all the definitions, Carroll had mysteriously said that "We look at the problem of evolving matter fields …". Light dawned: the evolution from events (the initial conditions) in S can only completely specify future events in D+(S), its future domain of dependence. Events beyond its future Cauchy horizon cannot be predicted from the initial conditions. There were some more terms:
- Cauchy surface: a closed achronal surface Σ whose domain of dependence D+(Σ) is the entire manifold.
- Globally hyperbolic: a spacetime that has a Cauchy surface.
- Partial Cauchy surface (?): a Cauchy surface whose domain of dependence D+(Σ) is not the entire manifold.
- Closed timelike curve: see below.
From information on a Cauchy surface we can predict what happens throughout the entire manifold / entire universe / all spacetime. Part 2: Cylindrical spacetime We then have a simple example: consider a two-dimensional geometry with coordinates ##\{t , x\}##, such that points with coordinates ##(t , x)## and ##(t , x+1)## are identified. The topology is thus ##\boldsymbol{\mathrm{R}}\times S^1##. We take the metric to be $${ds}^2=-\cos \lambda \,{\mathrm{d}t}^2-\sin \lambda \left[\mathrm{d}t\,\mathrm{d}x+\mathrm{d}x\,\mathrm{d}t\right]+\cos \lambda \,{\mathrm{d}x}^2$$ where $$\lambda ={\mathrm{cot}}^{-1} t$$ which goes from ##\lambda =0## (##t = - \infty ##) to ##\lambda = \pi## (##t = \infty ##). ##\lambda ={\mathrm{cot}}^{-1} t## is the same as ##\lambda ={\mathrm{tan}}^{-1} ( 1 / t )## and so $$t= 1 /\tan \lambda$$ That's a problem. As ##\lambda \to 0, t \to \infty ## not ##-\infty ##.
So we really want $$\lambda =-{\mathrm{cot}}^{-1} t$$ To find the light cone we want a null vector ##V^{\mu }## at various times. We can get this from the metric, and Desmos plotted various light cones from 0 to ##\pi## as shown below. Details are in the pdf. I had arrived at the same diagram as Carroll! Fig. 4 shows the light cones in red in the distant past (λ = 0) to the distant future (λ = π). In our diagram we are identifying points with coordinates (t,x) and (t, x+~100), so that we can better see the strange light cones for large t or λ≈π. Our light cones rotate the same way as Carroll's Fig 2.25. Carroll says "When t > 0, x becomes the timelike coordinate." (Because x, not t, is in the light cone. Moreover light and particles can only move in the positive x direction, while they can move in both + and - t directions.) We can now draw two causal curves from a point p as shown. One reaches the surface S, the other does not. Therefore p is outside the future Cauchy horizon of S. This applies to any point p with t > 0. As he says, "There is thus necessarily a Cauchy horizon at t=0." Surely it's worse than that. There is a 'global' Cauchy horizon at t = 0. Perhaps that is what he meant. In plainer language: "Nothing at t > 0 is predictable by things at t < 0". I don't see why we had to have the cylindrical coordinate system. The closed causal curve guarantees that the causal curve from p is inextendible, but we could have had a curve that waved around forever keeping its t coordinate always > 0. Part 3: A singularity There seems to be another error in the book here. His fig 2.26 shows a singularity at a point p and the text discusses a point p that is in the future of the singularity. So I will repeat the paragraph and the diagram using separate p's. Fig 5 Fig 5 shows a singularity at s and an achronal surface Σ that extends indefinitely in the plus and minus x directions. The point p cannot be in D+(Σ), the future domain of dependence of Σ, because there are causal curves from p that end at s. Therefore there is a future Cauchy horizon at H+(Σ) as shown. H+(Σ) also extends indefinitely in the plus and minus x directions. I am not sure if the right branch of the past light cone from p should escape the influence of the singularity, but it does not matter for this argument. Part 4: A Diversion From fig 4 it is clear that photons and particles from t > 0 can travel backward in time to t = 0 or nearby. Sadly I was not able to find the equations of motion. I need to know more about geodesics perhaps. There is more detail about that and the equations in part 2 here: Commentary 2.7 Causality.pdf
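The null directions can also be read straight off the metric; here is a small SymPy sketch (my own check, not from the pdf). Setting ds² = 0 for a direction with slope dx/dt = v gives v = tan λ ± sec λ, which reproduces the familiar 45° cones at λ = 0 and shows the cones rotating as λ grows.

    import sympy as sp

    lam, v = sp.symbols('lambda v')
    # ds^2 = -cos(l) dt^2 - 2 sin(l) dt dx + cos(l) dx^2 = 0 along dx/dt = v
    sols = sp.solve(sp.Eq(-sp.cos(lam) - 2*sp.sin(lam)*v + sp.cos(lam)*v**2, 0), v)
    print(sols)                             # (sin(l) - 1)/cos(l) and (sin(l) + 1)/cos(l)
    print([s.subs(lam, 0) for s in sols])   # [-1, 1]: 45-degree cones in the distant past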
LaTeX typesetting is made by using special tags or commands that provide a handful of ways to format your document. Sometimes standard commands are not enough to fulfil some specific needs; in such cases new commands can be defined, and this article explains how. Contents Most of the LaTeX commands are simple words preceded by a special character. In a document there are different types of \textbf{commands} that define the way the elements are displayed. These commands may insert special elements: $\alpha \beta \Gamma$ In the previous example there are different types of commands. For instance, \textbf will make boldface the text passed as a parameter to the command. In mathematical mode there are special commands to display Greek characters. Commands are special words that determine LaTeX's behaviour. Usually these words are preceded by a backslash and may take some parameters. The command \begin{itemize} starts an environment; see the article about environments for a better description. Below the environment declaration is the command \item, which tells LaTeX that this is an item that is part of a list, and thus has to be formatted accordingly, in this case by adding a special mark (a small black dot called a bullet) and indenting it. Some commands need one or more parameters to work. The example at the introduction includes a command to which a parameter has to be passed, \textbf; this parameter is written inside braces and it's necessary for the command to do something. There are also optional parameters that can be passed to a command to change its behaviour; these optional parameters have to be put inside brackets. In the example above, the command \item[\S] does the same as \item, except that the \S inside the brackets replaces the black dot before the line with a special character. LaTeX is shipped with a huge number of commands for a large number of tasks; nevertheless it is sometimes necessary to define some special commands to simplify repetitive and/or complex formatting. New commands are defined by the \newcommand statement; let's see an example of the simplest usage. \newcommand{\R}{\mathbb{R}} The set of real numbers is usually represented by a blackboard bold capital R: \( \R \). The statement \newcommand{\R}{\mathbb{R}} has two parameters that define the new command:
- \R: the name of the new command.
- \mathbb{R}: what the new command does; note that \mathbb requires a package such as amssymb to be loaded.
After the command definition you can see how the command is used in the text. Even though in this example the new command is defined right before the paragraph where it's used, good practice is to put all your user-defined commands in the preamble of your document. It is also possible to create new commands that accept some parameters. \newcommand{\bb}[1]{\mathbb{#1}} Other numerical systems have similar notations. The complex numbers \( \bb{C} \), the rational numbers \( \bb{Q} \) and the integer numbers \( \bb{Z} \). The line \newcommand{\bb}[1]{\mathbb{#1}} defines a new command that takes one parameter:
- \bb: the name of the new command.
- [1]: the number of parameters the command takes.
- \mathbb{#1}: what the command does; #1 is replaced by the first parameter passed in.
User-defined commands are even more flexible than the examples shown above.
You can define commands that take optional parameters: \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1} To save some time when writing many expressions with exponents, we can define a new command that makes them simpler: \[ \plusbinomial{x}{y} \] And even the exponent can be changed: \[ \plusbinomial[4]{y}{y} \] Let's analyse the syntax of the line \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}:
- \plusbinomial: the name of the new command.
- [3]: the number of parameters the command takes.
- [2]: the default value for the first, optional parameter.
- (#2 + #3)^#1: what the command does; #1, #2 and #3 are replaced by the parameters.
If you define a command that has the same name as an already existing LaTeX command you will see an error message in the compilation of your document and the command you defined will not work. If you really want to override an existing command this can be accomplished with \renewcommand: \renewcommand{\S}{\mathbb{S}} The Riemann sphere (the complex numbers plus $\infty$) is sometimes represented by \( \S \) In this example the command \S (see the example in the commands section) is overwritten to print a blackboard bold S. \renewcommand uses the same syntax as \newcommand.
Trigonometry – a formula index:
- An angle whose measure is $90^\circ$ is called a right angle; an angle whose measure is greater than…
- Angle: the union of two non-collinear rays which have a common end point is called an angle.
- Fundamental identities: $\sin^2\theta + \cos^2\theta = 1$, $1 + \tan^2\theta = \sec^2\theta$, $1 + \cot^2\theta = \csc^2\theta$, $\sin(\alpha + \beta)$…
- Sum-to-product formulas: $\sin a + \sin b = 2\sin\left(\frac{a+b}{2}\right)\cos\left(\frac{a-b}{2}\right)$, $\sin a - \sin b = 2\cos\left(\frac{a+b}{2}\right)\sin\left(\frac{a-b}{2}\right)$ …
- If $r$ denotes the in-radius, then $r = \sqrt{\frac{(s-a)(s-b)(s-c)}{s}} = \frac{\Delta}{s}$ …
- Inverse trigonometric identities: $\sin^{-1}a + \sin^{-1}b = \sin^{-1}\left(a\sqrt{1-b^2} + b\sqrt{1-a^2}\right)$ …
- Hyperbolic relations: $\sinh ix = i\sin x$; $\cosh ix = \cos x$; $\tanh ix = i\tan x$; $i\sinh x = \sin ix$ …
- Euler formulas: $\sin x = \frac{e^{ix} - e^{-ix}}{2i}$; $\cos x = \frac{e^{ix} + e^{-ix}}{2}$; $\tan x = \frac{e^{ix} - e^{-ix}}{i\left(e^{ix} + e^{-ix}\right)}$
There is a well known vectorization operator $\mbox{vec}$ in matrix analysis. I've vectorized my matrix equations, did some transformations of the vectorized equations and now I want to get back to the matrix form. Is there a special operator for it? The inverse of the vectorization operator $$\mbox{vec} : \mathbb{R}^{m \times n} \to \mathbb{R}^{mn}$$ is the operator $$\mbox{vec}^{-1} : \mathbb{R}^{mn} \to \mathbb{R}^{m \times n}$$ such that $\mbox{vec}^{-1} (\mbox{vec} (X)) = X$ for all $X \in \mathbb{R}^{m \times n}$ and $\mbox{vec} (\mbox{vec}^{-1} (x)) = x$ for all $x \in \mathbb{R}^{m n}$. Once a matrix is vectorized, the original dimensions of the matrix are "forgotten". Hence, it would be wise to write the dimensions of the inputs of $\mbox{vec}$ and of the outputs of $\mbox{vec}^{-1}$ in subscripts, e.g., $\mbox{vec}_{m,n}$ and $\mbox{vec}_{m,n}^{-1}$. Note that $\mbox{vec}_{3,2}^{-1}$ is the inverse of $\mbox{vec}_{3,2}$, but not of $\mbox{vec}_{2,3}$. Adding to the excellent answer by Rodrigo de Azevedo, I would like to point out that there is an explicit formula for the inverse $\operatorname{vec}_{m\times n}^{-1}$, given by $$ \mathbb{R}^{mn}\ni x \mapsto \operatorname{vec}_{m\times n}^{-1}(x) = \big((\operatorname{vec} I_n)^\top \otimes I_m\big)(I_n \otimes x) \in \mathbb{R}^{m\times n}, $$ where $I_n$ denotes the $n\times n$ identity matrix and $\otimes$ denotes the Kronecker product. The formula above can be verified in the following way: Let $X\in\mathbb{R}^{m\times n}$ be such that $\operatorname{vec}{X}=x\in\mathbb{R}^{mn}$, and let $X_k$ denote the $k$-th column of $X$. Additionally, let $M=I_n\otimes x \in\mathbb{R}^{mn^2\times n}$ and let $M_k$ denote the $k$-th column of $M$. Finally, we let $e_k\in\mathbb{R}^n$ be the $k$-th column of $I_n$. Note that $M_k=e_k\otimes\operatorname{vec}{X}$. Recall the identity $\operatorname{vec}(ABC) = (C^\top\otimes A)\operatorname{vec}{B}$, which we shall make heavy use of. Observing that $\operatorname{vec}(e_k^\top\otimes X)=M_k$, we see that $$ \big((\operatorname{vec} I_n)^\top \otimes I_m\big)M_k = \big((\operatorname{vec} I_n)^\top \otimes I_m\big)\operatorname{vec}(e_k^\top\otimes X) = \operatorname{vec}\big(I_m(e_k^\top\otimes X)\operatorname{vec}{I_n}\big) = \operatorname{vec}\big((e_k^\top\otimes X)\operatorname{vec}{I_n}\big) = \operatorname{vec}\big(\operatorname{vec}(XI_ne_k)\big) = \operatorname{vec}(Xe_k) = X_k, $$ and thus $$ \big((\operatorname{vec} I_n)^\top \otimes I_m\big)M = \big((\operatorname{vec} I_n)^\top \otimes I_m\big) \begin{bmatrix} M_1 & \ldots & M_n \end{bmatrix} = \begin{bmatrix} X_1 & \ldots & X_n \end{bmatrix} = X. $$
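In practice both $\operatorname{vec}$ and $\operatorname{vec}^{-1}$ are just column-major reshapes; the following small NumPy sketch (my own addition) also checks the explicit Kronecker formula numerically.

    import numpy as np

    m, n = 3, 2
    X = np.arange(m * n, dtype=float).reshape(m, n)
    x = X.reshape(-1, order="F")                     # vec_{3,2}(X): stack the columns
    assert (x.reshape(m, n, order="F") == X).all()   # vec^{-1}_{3,2} is just a reshape

    # The explicit formula ((vec I_n)^T (x) I_m)(I_n (x) x) also recovers X
    In, Im = np.eye(n), np.eye(m)
    vec_In = In.reshape(1, -1, order="F")            # row vector (vec I_n)^T
    lhs = np.kron(vec_In, Im) @ np.kron(In, x[:, None])
    assert np.allclose(lhs, X)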
I am reading the book "Elliptic partial differential equations of second order" by D. Gilbarg and N. S. Trudinger. Specifically, I am interested in Hölder regularity estimates for solutions of elliptic problems in divergence form with Hölder coefficients on a domain whose boundary is smooth ($C^2$ for example). Theorem 8.33, p. 210 of this book is exactly what I am looking for. However, the right-hand side of the inequality depends on a constant which depends on the Hölder norms of the coefficients in a non-explicit way. I am looking for any references (or arguments) which make this constant explicit. In particular, I would like to know what the dependency on the Hölder norms of the coefficients of the elliptic problem is. For simplicity, I am restating this inequality: \begin{align} |u|_{1,\alpha}\leq C (|u|_0+|g|_0+|f|_{0,\alpha}) \end{align} where $u$ is a $C^{1,\alpha}(\overline{\Omega})$ solution of the elliptic problem \begin{align} L(u)=g+D_if^i, \end{align} with $u=0$ on the boundary, $L(u)=D_i(a^{i,j}(x)D_ju)$, $\max_{i,j} |a^{i,j}|_{0,\alpha}=K<+\infty$ and $C>0$ depending on $K$. I want to know how $C$ depends on $K$. Thanks in advance.
Calculus – a tutorial index:
- The derivative of the sine square root function, on which most examples in trigonometric differentiation are based.
- The basic formulas of differentiation for trigonometric functions: $\frac{d}{dx}\sin x = \cos x$ …
- The derivative of inverse trigonometric functions, with a proof of the derivative of the inverse secant…
- The basic formulas of differentiation for inverse trigonometric functions: $\frac{d}{dx}\sin^{-1}$…
- The logarithmic function, defined by $y = \log_a x,\;x > 0$, where $x = a^y$, $a > 0$, $a \ne 1$.
- Example: differentiate $\cosh^{-1}\left(x^2 + 1\right)$ with respect to $x$, considering the function $y = \cosh^{-1}$…
WHY? \beta-VAE is known to disentangle latent variables. WHAT? This paper further decomposed the KL divergence term in \beta-VAE (TC decomposition). \mathcal{L}_{\beta} = \frac{1}{N}\sum_{n=1}^N\big(\mathbb{E}_q[\log p(x_n|z)]-\beta\, KL(q(z|x_n)\|p(z))\big)\\ \mathbb{E}_{p(n)}[KL(q(z|n)\|p(z))] = KL(q(z, n)\|q(z)p(n)) + KL(q(z)\|\prod_j q(z_j)) + \sum_j KL(q(z_j)\|p(z_j)) The first term on the right-hand side is referred to as the index-code mutual information (MI), which is the mutual information between data and latent variable. The second term is referred to as the total correlation (TC), which measures the statistical dependency between the latent variables. The third term is the dimension-wise KL, a complexity penalty from the diagonal Gaussian prior. Disentangling is mostly driven by the second term (TC), but it is hard to compute because of q(z): evaluating q(z) = \mathbb{E}_{p(n)}[q(z|n)] requires statistics of the entire dataset. Instead of taking statistics of the whole dataset, minibatch statistics can be used to approximate it: \mathbb{E}_{q(z)}[\log q(z)]\approx \frac{1}{M} \sum_{i=1}^{M}\Big[\log\frac{1}{NM}\sum_{j=1}^M q(z(n_i)|n_j)\Big] Therefore, the objective function of \beta-TCVAE is as follows. \mathcal{L}_{\beta-TC} = \mathbb{E}_{q(z|n)p(n)}[\log p(n|z)] - \alpha I_q(z;n) - \beta KL(q(z)\|\prod_j q(z_j)) - \gamma \sum_jKL(q(z_j)\|p(z_j)) Increasing \beta while fixing \alpha and \gamma to 1 leads to better disentanglement. So? According to the disentanglement measure this paper suggested (Mutual Information Gap: MIG), \beta-TCVAE showed better performance than \beta-VAE. Critic Great improvement with a simple modification and clean theoretical backup.
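As a concrete illustration of the minibatch-weighted approximation above, here is a minimal NumPy sketch (my own illustration, assuming diagonal-Gaussian encoders; the function name is made up):

    import numpy as np
    from scipy.special import logsumexp

    def tc_estimate(z, mu, logvar, N):
        # z, mu, logvar: (M, D) minibatch samples and encoder statistics; N: dataset size
        M, D = z.shape
        # pair[i, j, d] = log q(z_d(n_i) | n_j) under a diagonal Gaussian encoder
        pair = -0.5 * (np.log(2 * np.pi) + logvar[None]
                       + (z[:, None, :] - mu[None]) ** 2 / np.exp(logvar)[None])
        log_qz = logsumexp(pair.sum(-1), axis=1) - np.log(N * M)      # log q(z)
        log_qzj = (logsumexp(pair, axis=1) - np.log(N * M)).sum(-1)   # sum_j log q(z_j)
        return (log_qz - log_qzj).mean()                              # the TC term

During training this estimate would be weighted by \beta and added to the reconstruction and MI terms.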
In Season 1 Episode 2 of The Big Bang Theory, "The Big Bran Hypothesis", Penny (Kaley Cuoco) asks Leonard (Johnny Galecki) to sign for a furniture delivery if she isn't home. Unfortunately for Leonard and Sheldon, they are left with the task of getting a huge (and heavy) box up to Penny's apartment. To solve this problem, Leonard suggests using the stairs as an inclined plane, one of the six classical simple machines defined by Renaissance scientists. Both Leonard and Sheldon have the right idea here. Not only are inclined planes used to raise heavy loads but they require less effort to do so. Though this may make moving a heavy load easier, the tradeoff is that the load must now be moved over a greater distance. So while, as Leonard correctly calculates, the effort required to move Penny's furniture is reduced by half, he and Sheldon must move the furniture twice the distance it would take to raise it directly. Mathematics of the Inclined Plane Effort to lift block on Inclined Plane "Now we got an inclined plane. Force required to lift is reduced by the sine of the angle of the stairs… call it 30 degrees, so about half." To analyze the forces acting on a body, physicists and engineers use rough sketches or free body diagrams. This diagram can help physicists model a problem on paper and determine how forces act on an object. We can resolve the forces to see the effort needed to move the block up the stairs. If the weight of Penny's furniture is \(W\) and the angle of the stairs is \(\theta\) then \[\angle_{\mathrm{stairs}}\equiv\theta \approx 30^\circ\] and \[\Rightarrow\sin 30^\circ = \frac{1}{2}\] So the effort needed to keep the box in place is about half the weight of the furniture box, or \(\frac{1}{2}W\), just as Leonard says. Distance moved along Inclined Plane While the inclined plane allows Leonard and Sheldon to push the box with less effort, the tradeoff is that the distance they move along the incline is twice the height needed to raise the box vertically. Geometry shows us that \[\sin \theta = \frac{h}{d}\] If we again assume that the angle of the stairs is approximately \(30^\circ\) and \(\sin 30^{\circ} = 1/2\), then we have \(d=2h\). Uses of the Inclined Plane We see inclined planes daily without realizing it. They are used as loading ramps to load and unload goods. Wheelchair ramps also allow wheelchair users, as well as users of strollers and carts, to access buildings easily. Roads sometimes have inclined planes to form a gradual slope to allow vehicles to move over hills without losing traction. Inclined planes have also played an important part in history and were used to build the Egyptian pyramids and possibly used to move the heavy stones to build Stonehenge. Lombard Street (San Francisco) Lombard Street in San Francisco is famous for its eight tight hairpin turns (or switchbacks) that have earned it the distinction of being the crookedest street in the world (though this title is contested). These eight switchbacks are crucial to the street's design as they reduce the hill's natural 27° grade, which is too steep for most vehicles. It is also a hazard to pedestrians, who are more accustomed to a more reasonable 4.86° incline due to wheelchair navigability concerns. Technically speaking, the "zigzag" path doesn't make climbing or coming down the hill any easier. As we have seen, all it does is change how various forces are applied. It just requires less effort to move up or down, but the tradeoff is that you travel a longer distance.
This has several advantages. Car engines have to be less powerful to climb the hill and, in the case of descent, less force needs to be applied on the brakes. There are also safety considerations. A car will not accelerate down the switchback path as fast as it would if driven straight down, making speeds safer and more manageable for motorists. This idea of using zigzagging paths to climb steep hills and mountains is also used by hikers and rock climbers for very much the same reason Lombard Street zigzags. The tradeoff is that the distance traveled along the path is greater than if a climber goes straight up. The Descendants of Archimedes "We don't need strength, we're physicists. We are the intellectual descendants of Archimedes. Give me a fulcrum and a lever and I can move the Earth. It's just a matter of… I don't have this, I don't have this!" We see that Leonard had the right idea. If we assume, based on the size of the box, that the furniture is approximately 150 lbs (68 kg) and the effort is reduced by half, then they need to push with at least 75 lbs of force. This is equivalent to moving a 34 kg mass. If they both push equally, they are each left pushing a very manageable 37.5 lbs, the equivalent of pushing a 17 kg mass. Penny's apartment is on the fourth floor, and if we assume a standard US building design of ten feet per floor, this means a 30 foot vertical rise. The boys are left with the choice of lifting 150 lbs vertically 30 feet or moving 75 lbs a distance of 60 feet. The latter is more manageable but then again, neither of our heroes has any upper body strength.
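A quick Python sketch of the arithmetic above (the 150 lb weight and 30° stair angle are the article's assumptions, not figures from the episode):

    import math

    W, theta, h = 150.0, math.radians(30), 30.0   # weight (lb), stair angle, rise (ft)
    effort = W * math.sin(theta)                  # 75.0 lb of push along the ramp
    distance = h / math.sin(theta)                # 60.0 ft of ramp for a 30 ft rise
    print(effort, distance, effort / 2)           # each pushes 37.5 lb if they share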
You are transported in a parallel universe where people write mathematical equations on computers as ASCII art by hand. As a LaTeX addict, this is totally unacceptable, and you ought to automate this process somewhat. Your goal is to write a program that outputs an ASCII version of an equation inputted as a LaTeX math command. Mandatory LaTeX commands to support Sum: the LaTeX command for a sum is \sum_{lower bound}^{upper bound} The ASCII figure you have to use for sums is:

    upper bound
     ___
     \  `
     /__,
    lower bound

Product: the LaTeX command for a product is \prod_{lower bound}^{upper bound} The ASCII figure you have to use for products is:

    upper bound
     ____
     |  |
     |  |
    lower bound

Fraction: the LaTeX command for fractions is \frac{numerator}{denominator} The ASCII figure you have to use for fractions is:

     numerator
    -----------
    denominator

Anything that is not one of those three commands is displayed as is. For example, \sum_{i=3}^{e^10}\frac{3x+5}{2} should be displayed as

    e^10
    ___  3x+5
    \  ` ----
    /__,  2
    i=3

Inputs The input is a LaTeX command passed as a string (or your language's equivalent to strings). LaTeX commands can be nested, for instance \frac{\frac{1}{2}}{3} is a valid input. Inputs are supposed to be always correct (no need to check LaTeX's syntax in your code). Inputs will only consist of the three LaTeX commands presented above and 'text' that you won't need to format. LaTeX commands will always come with the syntax presented above, i.e. sums and products will always have upper and lower bounds (although they can be empty) and there will always be a numerator and denominator for fractions. We assume that the bounds of sums and products are at most 4 characters long (= the width of the sum and product symbols), so that you don't have to worry about possible overlap issues. For similar reasons, we assume that the bounds are just 'text' and will never be LaTeX commands, e.g. \sum_{\sum_{1}^{2}}^{1} is not a valid input. Outputs Your program's output is the ASCII representation of the LaTeX command you were given as input. Your program has to take horizontal alignment into account: for instance, the bounds of the sum or the product have to be horizontally aligned with the sum or product symbol (which are both 4 characters wide). If the bound has an odd number of characters, it does not matter whether it is one character off to the right or to the left of the center, whichever is fine. The fraction's line has to be as long as the numerator or the denominator, whichever is the longest. Your program has to take vertical alignment into account: for instance, \frac{\frac{1}{2}}{3} = \frac{1}{6} should be displayed as

    1
    -
    2   1
    - = -
    3   6

For sums and products, since the symbols are 4 characters high, the vertical center is assumed to be the second line from the top. Horizontal spacing is assumed to be correct in the given input, i.e. the spaces in the input should be displayed in the output. Test cases Input abc = 2 Output abc = 2 Input e = \sum_{n=0}^{+inf} \frac{1}{n!} Output

        +inf
        ___   1
    e = \  `  --
        /__,  n!
        n=0

Input e^x = 1 + \frac{x}{1 - \frac{x}{2 + x - ...}} Output

                    x
    e^x = 1 + ---------------
                     x
              1 - -----------
                  2 + x - ...

Input \prod_{i=1}^{n} \frac{\sum_{j=0}^{m} 2j}{i + 1} Output

          m
         ___
         \  ` 2j
     n   /__,
    ____ j=0
    |  | -------
    |  | i + 1
    i=1

Input \frac{sum}{prod} = \sum_{frac}^{prod} sum Output

           prod
    sum    ___
    ---- = \  `  sum
    prod   /__,
           frac

Scoring This is code-golf, so the shortest code wins.
I am trying to work out what I think about the suitability of this question. The author is in a very real and awkward situation: His$^{\ast}$ graduate student has a result. The result is not that exciting, but it is not clearly superseded by prior results. (I'll back these statements up at the end of this question.) The student has done the right thing: Asked his advisor whether the result is worthwhile. Unfortunately, the advisor also doesn't know the conventions of the field well enough to answer. One answer is that the advisor should e-mail friends of his who are closer to the relevant field, but this can take a long time without a good answer, even if he has such friends. He can simply encourage the student to submit the result and see what happens. The question for us is, do we want "post the question on MO" to be a third option? The advantage I see is that we have readers who are very familiar with the publication standards of number theory journals. I think the argument against is that the question is seen as too subjective, and is usually the sort of thing that appears on academia.SE (where the question started) not here. I've cast a vote to reopen, but I'd be glad to hear arguments both ways. $^{\ast}$ Gender presumed on the basis of the poster's choice of pseudonym. As i707107 computes, the result follows from the prime number theorem with error term $x/(\log x)^3$. But the question says that the proof only uses Rosser's theorem which looks to me to roughly be equivalent to a one-sided estimate $\pi(x) < (1+\epsilon) x/\log x$. There is a standard trick to convert one-sided estimates into two-sided estimates, but it seems surprising to me that the proof needs two powers of $\log x$ less than expected. Is that surprise worth publishing? I don't know. How much more elementary is Rosser than getting $x/(\log x)^N$ error? How interested are number theory journals in new proofs of Bertrand like results which follow immediately from PNT with strong bounds? I feel like this is the sort of question that only people familiar with the publishing practices of analytic number theory journals can answer.
Consider the simple hopping model in second quantization, $\hat{H} = -J \sum_{i,j=1}^\infty \left(\hat{c}_i \hat{c}^\dagger_j + h.c.\right)$ where $J$ is real and $\hat{c}_i$ are annihilation fermionic operators for an infinite 1D periodic lattice. As well known, a single-excitation eigenstate of this Hamiltonian can be found in the form $|\psi^{(1)}\rangle = \hat{d}^\dagger_k |0\rangle$ where $\hat{d}^\dagger_k \propto \sum_{i=1}^\infty \hat{c}^\dagger_i e^{i k x_i}$, $x_i$ is the position of the i-th site and $k$ is a wavevector in the first Brillouin zone. Is there a standard recipe to find the two-excitation eigenstates? That is, for states of the form $|\psi^{(2)}\rangle = \sum_{i,j<i}^\infty A_{i,j} \hat{c}^\dagger_i \hat{c}^\dagger_j |0\rangle$ is there a known Ansatz to find analytically the matrix $A_{i,j}$ such that $|\psi^{(2)}\rangle$ is an eigenstate of $\hat{H}$?
We are given that $Y_i$ ($i=1,2,3,\ldots,10$) is an independent random sample from a $N(-1,4)$ distribution, and that Q is given by: $$Q=\frac{1}{400}\Bigl(\sum_{i=1}^{10}(Y_i +1)\Bigl)^2$$ The question is to name the distribution of Q, and state the parameter(s) of the distribution. My working was: $\sum_{i=1}^{10}(Y_i +1) \sim N(-10, 400)$, so $\Bigl(\sum_{i=1}^{10}(Y_i +1)\Bigl)^2 \sim \chi^2(1)$. My specific question is what the distribution of Q is. But more generally, what is the general approach and intuition for dealing with ratios in these sorts of problems? To take another example, suppose $X_i$ is a random variable with a $N(0, \sigma^2)$ distribution, ($i=1,2,\ldots,5$). If $U= X_1 +X_2 +X_3 +X_4$, then $U \sim N(0,4\sigma^2)$. I can accept this. But $\frac{U}{4}$ is given as $N(0,\frac{4\sigma^2}{16})$. I am not clear what $\frac{U}{4}$ represents and why we would want to divide U by 4 (or in the first example, why divide by 400) and why the variance changes to $\frac{4\sigma^2}{16}$ (and also, for the normal distribution, whether the divisor would affect the mean also). Is there any intuitive explanation for including a divisor and is there any general rule, when trying to solve these problems, for how to deal with the divisor? Hope this makes sense. Thanks
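A quick simulation makes the role of the divisor visible. The sketch below (my own check) takes the stated $N(-1,4)$ at face value, i.e. variance 4, so the centred sum has variance $10\cdot 4 = 40$; dividing the squared sum by exactly that variance is what produces a $\chi^2(1)$ variable.

    import numpy as np

    rng = np.random.default_rng(0)
    Y = rng.normal(-1, 2, size=(100_000, 10))   # N(-1, 4) means sd = 2 here
    S = (Y + 1).sum(axis=1)                     # centred sum: mean 0
    print(S.var())                              # ~40 = 10 * 4, the natural divisor
    Z2 = S**2 / 40                              # (S / sd)^2
    print(Z2.mean(), Z2.var())                  # ~1 and ~2, matching chi-square(1)

The same logic explains the second example: $U/4$ is just a rescaling, and scaling a normal variable by $c$ multiplies its mean by $c$ and its variance by $c^2$.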
The integral of inverse cotangent, $\cot^{-1}x$, is an important integral, but there is no direct formula for finding it. We shall find the integral of inverse cotangent by using the integration by parts method. The integral of inverse cotangent is of the form \[I = \int {{{\cot }^{-1}}x\,dx} \] Integration by parts needs at least two functions, but here we have only one function, $\cot^{-1}x$, so we take the second function to be $1$. Now the integration becomes \[I = \int {{{\cot }^{-1}}x \cdot 1\,dx} \quad\quad \left( {\text{i}} \right)\] The first function is $\cot^{-1}x$ and the second function is $1$. Using the formula for integration by parts, we have \[\int {\left[ {f\left( x \right)g\left( x \right)} \right]dx = f\left( x \right)\int {g\left( x \right)dx - \int {\left[ {\frac{d}{{dx}}f\left( x \right)\int {g\left( x \right)dx} } \right]dx} } } \] Using the formula above, equation (i) becomes \[\begin{gathered} I = {\cot ^{-1}}x\int {1dx - \int {\left[ {\frac{d}{{dx}}{{\cot }^{-1}}x\int {1dx} } \right]dx} } \\ \Rightarrow I = x{\cot ^{-1}}x - \int {\left[ { - \frac{1}{{1 + {x^2}}}x} \right]dx} \\ \Rightarrow I = x{\cot ^{-1}}x + \int {\frac{x}{{1 + {x^2}}}dx} \\ \end{gathered} \] Multiplying and dividing by 2, we have \[I = x{\cot ^{-1}}x + \frac{1}{2}\int {\frac{{2x}}{{1 + {x^2}}}dx} \] Using the formula \[\int {\frac{{f'\left( x \right)}}{{f\left( x \right)}}dx = \ln f\left( x \right)} + c\] we have \[\begin{gathered} I = x{\cot ^{-1}}x + \frac{1}{2}\ln \left( {1 + {x^2}} \right) + c \\ \Rightarrow \int {{{\cot }^{-1}}xdx} = x{\cot ^{-1}}x + \frac{1}{2}\ln \left( {1 + {x^2}} \right) + c \\ \end{gathered} \] Now we can also use this integral of inverse cotangent as a formula.
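The result is easy to double-check symbolically; a two-line SymPy sketch:

    import sympy as sp

    x = sp.symbols('x')
    # Should reproduce x*acot(x) + log(x**2 + 1)/2, up to the integration constant
    print(sp.integrate(sp.acot(x), x))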
Ok, after a lot of brainstorming, I came up with a proof. The advantages are that it's very constructive, gives a bit of an algorithmic picture of what goes on with the eigenvalues, and uses nothing abstract (no graph theory!). The disadvantages?...Well...it's pretty dang technical/inefficient--you may even want to go as far as to say that this is "brute force". The first answer gives the more "eloquent use of pretty theorems" approach. Let $B_n$ denote the normalized $n \times n$ adjacency matrix. We will begin by proving the following claims. Claim 1. Given an eigenvalue $\lambda$ of $B_n$ and an eigenvector $\left[\begin{matrix} a_1 \\ \vdots \\ a_n \end{matrix}\right]$, we have $a_2=\lambda a_1$, $a_{n-1}=\lambda a_n$ and $a_k=2\lambda a_{k-1}-a_{k-2}$ for $2<k \leq n$. Proof. Observe the recursive pattern\begin{align*}\left[\begin{matrix} a_2 \\ \frac{a_1}{2}+\frac{a_3}{2} \\ \vdots \\ \frac{a_{n-2}}{2}+\frac{a_n}{2} \\ a_{n-1} \end{matrix}\right]=B_n\left[\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \\ a_n \end{matrix}\right]=\lambda \left[\begin{matrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \\ a_n \end{matrix}\right]. \quad \square\end{align*} Claim 2. Given an eigenvalue $\lambda$ of $B_n$, define\begin{align*}p_k(\lambda)=\begin{cases}1, & \text{if $k=1$,} \\\lambda, & \text{if $k=2$,} \\2\lambda p_{k-1}(\lambda)-p_{k-2}(\lambda), & \text{if $k>2$.}\end{cases}\end{align*}We find $\left[\begin{matrix} p_1(\lambda) \\ \vdots \\ p_n(\lambda) \end{matrix}\right]$ is an eigenvector. Proof. Choose an eigenvector $\left[\begin{matrix} a_1 \\ \vdots \\ a_n \end{matrix}\right]$ of $\lambda$. Using Claim 1, one can show that $\left[\begin{matrix} a_1 \\ \vdots \\ a_n \end{matrix}\right]=a_1 \left[\begin{matrix} p_1(\lambda) \\ \vdots \\ p_n(\lambda) \end{matrix}\right]$, which means $a_1$ must be non-zero. Our conclusion follows. $\quad \square$ Claim 3. $\lambda$ is an eigenvalue of $B_n$ if and only if $\lambda$ is a solution to $0=\lambda p_n(\lambda)-p_{n-1}(\lambda)$. (We have more generally found that $\lambda p_n(\lambda)-p_{n-1}(\lambda)$ is the "characteristic polynomial" of $B_n$.) Proof. $\implies$ Follows from applying Claim 1 to the eigenvector $\left[\begin{matrix} p_1(\lambda) \\ \vdots \\ p_n(\lambda) \end{matrix}\right]$ obtained from Claim 2 and getting $p_{n-1}(\lambda)=\lambda p_n(\lambda)$. $\impliedby$ Follows from the fact that\begin{align*}\left[\begin{matrix} p_2(\lambda) \\ \frac{p_1(\lambda)+p_3(\lambda)}{2}\\ \vdots \\ \frac{p_{n-2}(\lambda)+p_n(\lambda)}{2} \\ p_{n-1}(\lambda) \end{matrix}\right]=B_n \left[\begin{matrix} p_1(\lambda) \\ p_2(\lambda) \\ \vdots \\ p_{n-1}(\lambda) \\ p_n(\lambda) \end{matrix}\right]=\lambda \left[\begin{matrix} p_1(\lambda) \\ p_2(\lambda) \\ \vdots \\ p_{n-1}(\lambda) \\ p_n(\lambda) \end{matrix}\right],\end{align*}by our converse hypothesis (the hypothesis is applied to the $n$th entry in particular). $\quad \square$ Now to prove your question: Final Proof. It shall suffice by Claim 3 to show that all real solutions to $0=\lambda p_n(\lambda)-p_{n-1}(\lambda)$ are within the bounds of $[-1, 1]$. It is easy to verify using our recursive construction that $-1$ and $1$ are solutions to $0=\lambda p_n(\lambda)-p_{n-1}(\lambda)$, for all $n \geq 2$. Next, we check that all real roots lie in $[-1, 1]$. This is done by just using calculus and recursion to show that $\lambda p_n(\lambda)-p_{n-1}(\lambda)$ never hits zero outside of $[-1, 1]$.
$\quad \square$ The final step is rather inconvenient, but I believe it's doable since $\lambda p_n(\lambda)-p_{n-1}(\lambda)$ turns out to be monotonic outside of $[-1, 1]$. I'll leave it up to you unless you find it too hard/tedious and would rather have me cook it up. UPDATE: Corrected my mistake on Claim 3. Should have written "solution", not "nonzero solution".
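If you want a numerical sanity check before grinding through that last step, here is a small NumPy sketch (my own addition): it builds $B_n$ from the pattern in Claim 1 and confirms the spectrum stays inside $[-1,1]$. Incidentally, the recursion defining $p_k$ is exactly the Chebyshev recurrence, so $p_k(\lambda)=T_{k-1}(\lambda)$.

    import numpy as np

    def B(n):
        # Normalized adjacency from Claim 1: (B x)_i averages the neighbours of i
        M = np.zeros((n, n))
        M[0, 1] = M[-1, -2] = 1.0
        for i in range(1, n - 1):
            M[i, i - 1] = M[i, i + 1] = 0.5
        return M

    for n in (2, 5, 20, 100):
        ev = np.linalg.eigvals(B(n)).real
        print(n, ev.min(), ev.max())   # the spectrum stays inside [-1, 1]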
This is just a fun question: Prove a 45-45-90 triangle has side ratios \(1:1:\sqrt2\) WITHOUT the Pythagorean Theorem or Trigonometry. Can you do it? (also, I'm kind of curious because I don't know how to prove it without Pythag. or Trig, is it even possible?) :P Sure, it's possible by considering the area. We can assume that the legs have length 1 and so we wish to find the length of the hypotenuse, which we will call x. The area then becomes \(\frac{1}{2}\). If we draw the height of the triangle from the hypotenuse we notice that it has the length \(\frac{x}{2} \) and so the area can also be calculated to be \(\frac{x \cdot \frac{x}{2}}{2}\). We then get the equation \(\frac{x^2}{4}=\frac{1}{2} \) or \(x^2=2\) and so \(x= \sqrt{2}\) and the ratio is \(1:1:\sqrt{2}\). Well, that's because when we draw the height we split the triangle into two isosceles triangles and also split the hypotenuse into two equal parts of length \(x/2\). Because half the hypotenuse is one of the legs of such a small triangle and the other leg is the height, they must have the same length.
Answer a) The magnitude of the force is $\text{273 lb}$. b) The magnitude of the force is $\text{61 lb}$. Work Step by Step (a) To calculate the magnitude of the force along the ramp, we resolve the given force into components. Since the barrel makes an angle of $12.5{}^\circ$ with the ramp, the component along the ramp comes from the cosine of that angle, so we take into account only the horizontal component. Therefore we get $280\cos 12.5{}^\circ \approx 273\text{ lb}$ (b) The magnitude of the force of the barrel against the ramp is the perpendicular component, which comes from the sine of the same angle; in this case we take only the vertical component into account and ignore the horizontal component. Therefore, we get $280\sin 12.5{}^\circ \approx 61\text{ lb}$
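The two components can be checked in a couple of lines of Python:

    import math

    F, theta = 280.0, math.radians(12.5)
    print(round(F * math.cos(theta)))   # 273 lb, the component along the ramp
    print(round(F * math.sin(theta)))   # 61 lb, the component against the ramp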
WHY? Modeling data with a known probability distribution has a lot of advantages. We can exactly calculate the log likelihood of the data and easily sample new data from the distribution. However, finding a tractable transformation of data into a probability distribution, or vice versa, is difficult. For instance, a neural encoder is a common way to transform data, but its log-likelihood is known to be intractable and another separately trained decoder is required to sample data. WHAT? This paper suggests a tractable transformation of probability distributions. Suppose we model data with known distributions. We want to find an invertible function that maps h to x (or x to h: f) so as to maximize the log-likelihood of the data (the dimension of h is the same as that of x). p_H(h) is a prior distribution that we define (e.g. an isotropic Gaussian). If the prior distribution is factorial, we call this estimation of probability the non-linear independent components estimation (NICE) criterion. h \sim p_H(h)\\ p_H(h) = \prod_d p_{H_d}(h_d)\\ x = f^{-1}(h)\\ p_X(x) = p_H(f(x))\left|\det\frac{\partial f(x)}{\partial x}\right|\\ \log p_X(x) = \sum^D_{d=1} \log p_{H_d}(f_d(x)) + \log\left|\det\frac{\partial f(x)}{\partial x}\right| We can see that we need to calculate the Jacobian matrix \frac{\partial f(x)}{\partial x} to find the log likelihood. So we need to find a transformation that is invertible and whose Jacobian matrix is easy to compute. Note that the determinant of a matrix is easy to compute when the matrix is diagonal, lower triangular or upper triangular. This paper suggests an elementary component of the transformation with a Jacobian that satisfies the above conditions. A general coupling layer transforms x to y as follows, where I_1 and I_2 are a partition of [1, D] with d = |I_1|: y_{I_1} = x_{I_1}\\ y_{I_2} = g(x_{I_2};m(x_{I_1}))\\ \frac{\partial y}{\partial x} = \left[\begin{array}{cc} I_d & 0\\ \frac{\partial y_{I_2}}{\partial x_{I_1}} & \frac{\partial y_{I_2}}{\partial x_{I_2}}\end{array} \right] We call g: \mathbb{R}^{D-d} \times m(\mathbb{R}^d) \rightarrow \mathbb{R}^{D-d} the coupling law, m a coupling function and this transformation a coupling layer. We can see that \det \frac{\partial y}{\partial x} = \det \frac{\partial y_{I_2}}{\partial x_{I_2}}. If we take the coupling law to be addition (an additive coupling layer), the determinant of the Jacobian is 1. Since one part of the input is left unchanged by a coupling layer, we exchange the roles of the two partitions in alternating layers (at least three layers are composed). Rescaling can then be added by defining a diagonal matrix S. So? NICE was able to model MNIST, TFD, SVHN and CIFAR-10 with the best log likelihood. Also NICE showed good performance in the inpainting task on MNIST. Critic Maybe more elaborate transformations are needed for more elaborate generation.
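Here is a minimal NumPy sketch of an additive coupling layer and its exact inverse (my own illustration; the coupling function m below is an arbitrary stand-in for the neural network used in the paper):

    import numpy as np

    def coupling_forward(x, m, I1, I2):
        # Additive coupling: y_I1 = x_I1, y_I2 = x_I2 + m(x_I1).
        # The Jacobian is triangular with unit diagonal, so log|det| = 0.
        y = x.copy()
        y[..., I2] = x[..., I2] + m(x[..., I1])
        return y

    def coupling_inverse(y, m, I1, I2):
        # Exact inverse: x_I2 = y_I2 - m(y_I1), since y_I1 = x_I1.
        x = y.copy()
        x[..., I2] = y[..., I2] - m(y[..., I1])
        return x

    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 2))
    m = lambda u: np.tanh(u @ W)      # any function of x_I1 works; no invertibility needed
    x = rng.normal(size=(5, 4))
    I1, I2 = [0, 1], [2, 3]
    y = coupling_forward(x, m, I1, I2)
    assert np.allclose(coupling_inverse(y, m, I1, I2), x)

Note that m itself never has to be inverted, which is why an arbitrary network can be used as the coupling function.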
I will give an elaborate answer because I think this is an area where Mathematica really shines ;) Let me digress. We know that, in general, an equation system defines a set of points as a function of the parameters in it, namely the set of variable assignments or valuations that make the equation true. Thus an equation system can also be seen as a predicate $E(\vec x, \vec p)$ which defines the set of its solutions given the parameters $\vec p$:$$sol(E, \vec p) := \{\vec x : E(\vec x, \vec p)\}$$ Note that "solving" an equation system, even for given parameter values (for example, if you have no parameters), does not usually give a single value $\vec x$. Now what does Mathematica's Solve[E, {x1,...,xn}] (or NSolve) do? It gives a representation of this set in the form of a finite list of replacement-rule sets, $\{r_1, r_2, ..., r_n\}$. Each $r_i$ contains rules of the form $x_{ij} \mapsto f_{ij}(\vec x', \vec p)$. Let $(\vec x)_i'$ be the sublist of $\vec x$ not appearing as an $x_{ij}$ (on the left-hand side of a rule), and $(\vec x)_i''$ be the others. Then the ruleset $r_i$ defines a map $F_i$ with$$(\vec x)_i'' := F_i((\vec x)_i', \vec p).$$The set of solutions is then characterized as follows:$$\vec x \in sol(E, \vec p) \iff \exists i\quad (\vec x)_i'' = F_i((\vec x)_i', \vec p).$$ Turning this into a function Ideally, we would want to get a function $G$ with $$G(\vec p) := sol(E, \vec p)$$ Obviously, this is in general impossible because the set of solutions is infinite (sometimes not even countable). But assume we knew $E$ has a finite number of solutions for every fixed $\vec p$. Can we derive these from the output of Solve? In general no, for two reasons: because of the parameters $\vec C$ Mathematica might add to describe the solution space, and because we don't know which $(\vec x)_i'$ will not generate Undefined values with the functions we are given. And we get no explicit way of enumerating only valid candidates. Still, if the $F_i$ are total in $(\vec x)_i'$, we might be able to leverage them to describe our solution space in a more explicit way. In particular, we can ask Mathematica to give us the solution in one variable in terms of the parameters and the others, simply by 'solving' for that variable only. Using ParametricPlot and friends you can then visualize the solution. Wrap up Let's assume you know that $(\vec x)_i'' = \vec x$ for all $r_i$, i.e. that all variables you care about have been explicitly solved for. Then the function $$f(\vec p) := \{F_1(\vec p), ...,F_n(\vec p)\}$$ can be defined in the following two ways in Mathematica. (Let us write {{x1, ...}*} = f[{p1, p2,...}].) 1. Solving the equation system once If you know (or enforce) that x1,...,p1,... are undefined at the point of definition: ClearAll[f, p1, p2, x1, x2]; (*Define f[{p1,...,pn}]. {x1,...,xn, p1,...,pn} must not be defined at this point \ for this to work*) f[{p1_, p2_}] := Evaluate[{x1, x2} /. Solve[ (*The equation system*) x2 == p1^2 + p1 && p2 == x1, {x1, x2}]]; (*Test*) ?f f[{3, 4}] Otherwise (namespace-clean version): ClearAll[f]; (*Prepare f*) Evaluate[Module[{x1, x2, p1, p2}, f[params] = {p1, p2}; f[variables] = {x1, x2}; f[solutions] = f[variables] /. Solve[ (*The equation system*) x2 == p1^2 + p1 && p2 == x1, f[variables]]] ]; (*Define f[{p1,...,pn}]*) f[p_] := f[solutions] /. Rule @@@ Transpose@{f[params], p}; (*Test*) ?f f[{3, 4}] 2. Solving the equation system on every call Namespace-dirty version -- x1,... must not be defined when calling f.
ClearAll[f, x1, x2]; (*When calling this function, x1,...,xn must be undefined*) f[{p1_, p2_}] := {x1, x2} /. Solve[ (*The equation system*) x2 == p1^2 + p1 && p2 == x1, {x1, x2}]; (*Test*) ?f f[{3, 4}] To make this clean, simply wrap the definition in a module: ClearAll[f, x1, x2]; Module[{x1, x2}, f[{p1_, p2_}] := {x1, x2} /. Solve[ (*The equation system*) x2 == p1^2 + p1 && p2 == x1, {x1, x2}]; ]; (*Test*) ?f f[{3, 4}] Some more details Note that the $F_i$ are in general partial functions. You may get x -> ConditionalExpression[1, a > 0], which is Undefined for certain values; try it: ConditionalExpression[1, a > 0] /. {a -> 0} The above explanation makes it clear why {} denotes the empty set of solutions, and also why {{}} is the full-dimensional set of solutions: in this case $(\vec x)_1' = \vec x$, $(\vec x)_1''$ is empty, and the function $F_1$ returns nothing, thus $(\vec x)_1'' = F_1((\vec x)_1',\vec p)$ for any $\vec x$. Sometimes Solve will introduce new parameters $\vec C$ in the solution. These parametrize the $F_i$ so that, implicitly, infinitely many $F_i$ are available to pick from to construct a solution: $$\vec x \in sol(E, \vec p) \iff \exists i\ \exists \vec C\quad (\vec x)_i'' = F_i((\vec x)_i', \vec p, \vec C).$$ Because "Solve gives generic solutions only. Solutions that are valid only when continuous parameters satisfy equations are removed", the picture I painted above is a bit too simple. Sometimes there will be solutions that Mathematica is well aware of but does not give because of this. It will generate ConditionalExpression with inequalities for parameters: Solve[x^2 == a - b, x, Reals] {{x -> ConditionalExpression[-Sqrt[a - b], a > b]}, {x -> ConditionalExpression[Sqrt[a - b], a > b]}} but by default 'forgets' all solutions that would require an equality between parameters: Solve[x == 0 && x^2 == a - b, x, Reals] {} It will also generate solutions that are wrong for a finite set of parameter assignments: Solve[x a == 1 , {x, y}] {{x -> 1/a}} I don't see why you shouldn't always use MaxExtraConditions -> All, which gives these solutions back: Solve[x == 0 && x^2 == a - b, x, Reals, MaxExtraConditions -> All] {{x -> ConditionalExpression[0, a == b]}} Solve[x a == 1 , {x, y}, MaxExtraConditions -> All] {{x -> ConditionalExpression[1/a, a != 0]}}
Triangle: A triangle ABC has sides of 10 cm each. Let D be the midpoint of AB, E the midpoint of BC, and G the midpoint of AC, as shown. After drawing DG and DE, find the perimeter of quadrilateral CDEG. A triangle ABC has perimeter 80 m. If side AB = 1200 mm and side BC = 180 cm, what is the length of side CA? Given a right triangle ABC with the right angle at A, draw AH perpendicular to BC at H, and let HD and HE be the altitudes from H to AB and AC respectively. a) Prove that AH = DE. b) Let I be the midpoint of HB and K the midpoint of HC. Prove that DI // EK. Good luck :) Kayasari Ryuunosuke, Coordinator, 08/08/2017 at 21:46 a) We have: \(\left\{{}\begin{matrix}\widehat{ADH}=90^0\\\widehat{DAE}=90^0\\\widehat{AEH}=90^0\end{matrix}\right.\) So quadrilateral DHEA is a rectangle, hence AH = DE (its two diagonals). It also means OD = OH = OE = OA (with O the intersection of the diagonals). b) Consider \(\Delta ODI\) and \(\Delta OHI\); we have: DI = IH (DI is the median to the hypotenuse BH of right triangle BDH), DO = OH, and OI common, so \(\Delta ODI = \Delta OHI\) (s-s-s). Hence \(\widehat{IDO}=\widehat{IHO}=90^0\Rightarrow DE\perp ID\) (1). Similarly, for \(\Delta OHK\) and \(\Delta OEK\): \(\Delta OHK = \Delta OEK\) (s-s-s), so \(\widehat{OEK}=90^0\Rightarrow DE\perp EK\) (2). From (1) and (2), DI // EK. Let G be the centroid of \(\Delta ABC\) and d a line outside \(\Delta ABC\). Draw \(AM,BN,CP,GQ\perp d\). Prove that \(AM+BN+CP=3GQ\). Nguyễn Tất Đạt 05/09/2017 at 18:04 Draw the median CD of \(\Delta\)ABC and let E be the midpoint of CG. From D and E draw DH and EK perpendicular to the line d. Since DG = GE = CD/3, G is the midpoint of DE, so GQ is the midsegment of trapezium DHKE: \(GQ=\dfrac{DH+EK}{2}\Leftrightarrow2GQ=DH+EK\), hence \(4GQ=2DH+2EK\) (1). In trapezium MABN, DH is the midsegment, so \(2DH=AM+BN\) (2). Similarly, \(2EK=GQ+PC\) (3). Substituting (2) and (3) into (1): \(AM+BN+GQ+PC=4GQ\), therefore \(AM+BN+PC=3GQ\).
Say I have a function $f(\theta) = 1 + \cos^2(\theta)$ that can be expressed in terms of the Legendre polynomials. When calculating coefficients, should I change the Legendre polynomials from the variable $x$ to $\theta$? E.g. the third Legendre polynomial, usually written $0.5(3x^2-1)$, would have $\theta$ rather than $x$ (since my function is a function of $\theta$, not $x$)? If I am correct, would it also make sense to then change the limits of integration from $-1, 1$ to $-\pi, \pi$? My third coefficient equation then looks like (sorry, I don't know how to write this out correctly): $c_3 = \frac52 \int_{-\pi} ^ \pi (1+\cos^2(\theta))(0.5(3\theta^2-1))d \theta$ Sorry if this is hard to read. Any help?
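The usual convention (an assumption here, since the question does not state it) is to substitute $x=\cos\theta$ rather than to rename $x$ as $\theta$: then $f(\theta)=1+\cos^2\theta$ becomes $1+x^2$ on $[-1,1]$, and the limits of integration stay at $-1$ and $1$. A minimal Python sketch of the coefficients under that assumption:

import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

# Expand f(theta) = 1 + cos^2(theta) in Legendre polynomials P_n(x)
# under the substitution x = cos(theta), so f becomes 1 + x^2 on [-1, 1].
def f(x):
    return 1.0 + x**2

def coefficient(n):
    # c_n = (2n + 1)/2 * integral_{-1}^{1} f(x) P_n(x) dx
    Pn = legendre.Legendre.basis(n)
    value, _ = quad(lambda x: f(x) * Pn(x), -1.0, 1.0)
    return (2 * n + 1) / 2.0 * value

for n in range(4):
    print(n, round(coefficient(n), 6))
# Expected: c_0 = 4/3, c_1 = 0, c_2 = 2/3, higher coefficients vanish,
# since 1 + x^2 = (4/3) P_0(x) + (2/3) P_2(x).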
Apologies, I don't claim my reasoning is perfect, but I would appreciate any critiques. Thank you. Let us consider commutation relations on a general Riemannian manifold $M$, where the commutator is defined as: $$\left[A,B\right]=AB-BA$$ Specifically, we consider the basic commutation relation between one of the basis vectors on $M$, $e^{\mu}$, and the basis vector dual to it, $e_{\mu}$: $$\left[e^{\mu},e_{\mu}\right]$$ Choosing our basis locally to be a coordinate basis, we obtain: $$\left[e^{\mu},e_{\mu}\right]\xi=\left[\frac{\partial}{\partial x^{\mu}},dx^{\mu}\right]\xi$$ where $\xi$ is some test function defined over the manifold. Expanding out the terms of the commutator, we obtain: $$\left[\frac{\partial}{\partial x^{\mu}},dx^{\mu}\right]\xi=\frac{\partial}{\partial x^{\mu}}\left(dx^{\mu}\xi\right)-dx^{\mu}\frac{\partial\xi}{\partial x^{\mu}}$$ $$=\frac{\partial(dx^{\mu})}{\partial x^{\mu}}\xi+\left(dx^{\mu}\frac{\partial\xi}{\partial x^{\mu}}-dx^{\mu}\frac{\partial\xi}{\partial x^{\mu}}\right)$$ $$=\frac{\partial(dx^{\mu})}{\partial x^{\mu}}\xi$$ or, written without the test function: $$\left[\frac{\partial}{\partial x^{\mu}},dx^{\mu}\right]=\frac{\partial(dx^{\mu})}{\partial x^{\mu}}$$ The lattermost term, $\frac{\partial(dx^{\mu})}{\partial x^{\mu}}$, is the change in the element $dx^{\mu}$ with the change in $x^{\mu}$. Because $dx^{\mu}$ is, in general, a function of position, it can be nonzero. Clearly such a quantity is related to curvature. Because $dx^{\mu}=e^{\mu}$ is dual to $\frac{\partial}{\partial x^{\mu}}=e_{\mu}$, a change in one effects a change in the other. Consider now Euler's definition of extrinsic curvature: $$\frac{\partial(T_{x^{\mu}}M)}{\partial S}=\frac{1}{R}$$ where $R$ is the radius of curvature of the manifold in the $x^{\mu}$ direction, $T_{x^{\mu}}M$ is the unit tangent vector on $M$ in the $\mu$ direction, and $\partial S$ is the differential distance element. Of course, the cotangent/dual space represented by the dual basis is defined as $$dx^{\mu}=(T_{x^{\mu}}M)^{\star}$$ where $\star$ is the Hodge star operator. It is then evident that, for our local coordinate patch: $$(T_{x^{\mu}}M)^{\star}=-i(T_{x^{\mu}}M)$$ Thus we can write: $$\frac{\partial(dx^{\mu})}{\partial x^{\mu}}=\frac{\partial(T_{x^{\mu}}M)^{\star}}{\partial S}\left(\frac{\partial S}{\partial x^{\mu}}\right)=-i\frac{\partial(T_{x^{\mu}}M)}{\partial S}\left(\frac{\partial S}{\partial x^{\mu}}\right)=-\frac{i}{R}\left(\frac{\partial S}{\partial x^{\mu}}\right)$$ Now the distance element can be defined as: $$dS=\left(dx^{\mu}dx_{\mu}\right)^{\frac{1}{2}}=\left(dx^{\mu}g_{\mu\nu}dx^{\nu}\right)^{\frac{1}{2}}=\gamma_{\mu}dx^{\mu}=\gamma^{\mu}dx_{\mu}$$ where in the latter two terms we have utilized Dirac's "trick" and factored the square root with gamma matrices defined by the relation $\{\gamma^{\mu},\gamma^{\nu}\}=\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}=2g^{\mu\nu}$. (Interestingly, Dirac's mathematical "trick" of factoring the square root with gamma matrices can be used throughout classical relativity, though in practice it is reserved almost exclusively for quantum mechanics, for reasons ambiguous to the author.)
Thus we may write the expression: $$\frac{dS}{dx^{\mu}}=\gamma_{\mu}$$ so that we may rewrite (7) as: $$\left[\frac{\partial}{\partial x^{\mu}},dx^{\mu}\right]=\frac{\partial(dx^{\mu})}{\partial x^{\mu}}=-\frac{i}{R}\gamma_{\mu}$$ It is important to note here that equation (8) is rather fundamental: it demonstrates that one may write the curvature in a coordinate direction in terms of a tangent vector and its dual 1-form. Note also that, in general, $R$ and $\gamma_{\mu}$ are functions of the manifold coordinates. Let us now consider a Taylor series expansion of $dx^{\mu}$ about the point $x_{0}$: $$dx^{\mu}=\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{\partial^{n}(dx^{\mu})}{(\partial x^{\mu})^{n}}\Big|_{x_{0}}\right)(x-x_{0})^{n}$$ $$=dx^{\mu}\big|_{x_{0}}+\left\{ \frac{\partial(dx^{\mu})}{\partial x^{\mu}}\Big|_{x_{0}}\right\} (x^{\mu}-x_{0}^{\mu})+\cdots$$ $$=dx^{\mu}\big|_{x_{0}}-\frac{i}{R}\gamma_{\mu}\Big|_{x_{0}}(x^{\mu}-x_{0}^{\mu})+\cdots$$ We will only consider terms to first order. This is equivalent to claiming the curvature is constant, or rather that changes in curvature are negligible over the portion of the manifold we are considering. Inserting (12) back into expression (9), we obtain: $$\left[\frac{\partial}{\partial x^{\mu}},dx^{\mu}\big|_{x_{0}}-\frac{i}{R}\gamma_{\mu}(x^{\mu}-x_{0}^{\mu})\right]\backsimeq-\frac{i}{R}\gamma_{\mu}$$ All constant terms cancel, yielding: $$\left[\frac{\partial}{\partial x^{\mu}},-\frac{i}{R}\gamma_{\mu}x^{\mu}\right]=\left[\frac{\partial}{\partial x^{\mu}},-\frac{i}{R}\gamma^{\mu}x_{\mu}\right]\backsimeq-\frac{i}{R}\gamma_{\mu}$$ This may be rewritten equivalently as: $$\left[-\frac{i}{R}\gamma^{\mu}\frac{\partial}{\partial x^{\mu}},\,x_{\mu}\right]\backsimeq-\frac{i}{R}\gamma_{\mu}$$ Equation (17) merits careful consideration. It is the first-order approximation to the curvature at a point varied from the point $x_{0}$, but it is more than that: if the manifold being considered is an n-sphere, all approximations are exact, and (15) appears to form the Heisenberg group over it. Is this right or horribly wrong?
So, I was trying to obtain the point form of the conservation of linear momentum equation in integral form, namely: $$\int_{\partial \Omega} \vec{V} \rho \vec{V} \cdot \vec{dS} + \int_{\partial \Omega} p \,\vec{dS} = 0$$ According to the Gauss theorem for a closed surface $S$: $$\iint_S \vec{A} \cdot \vec{dS} = \iiint_V \nabla \cdot \vec{A} \,dV$$ But if I apply that to the above equation I get $$\int_{\partial \Omega} \vec{V} \rho \vec{V} \cdot \vec{dS} = \int_{\Omega} \nabla \cdot (\vec{V} \rho \vec{V})\, dV = \int_{\Omega} (\nabla \cdot \vec{V})\, \rho \vec{V} \,dV$$ which can't be right, since for an incompressible flow $\nabla \cdot \vec{V} = 0$. Isn't the dot product supposed to be commutative? What am I missing? I apologize for any misuse of mathematical notation; let me know of any mistakes.
Let $\psi \in L^2( \mathbb{R})$ and suppose that it satisfies the admissibility condition $$ \int_{-\infty}^{\infty} \frac{|\widehat{\psi}(\omega)|^2}{|\omega|}d\omega = C_{\psi} < \infty $$ where $ \widehat{\psi}$ denotes the Fourier transform of $\psi$. Then my textbook in Fourier analysis (and every other book I've seen) says that this implies $$ \widehat{\psi}(0) = \int_{-\infty}^{\infty} \psi(x)\, dx = 0 .$$ I can see why this should be the case, since plugging $\omega = 0$ into the first expression would make the integral infinite; however, I'm not sure how to prove it rigorously. Could someone give me a hint on how to do that? I think we can also assume that $\psi$ is continuous. Thanks
Generalized Susceptibilities for a Perfect Quantum Gas 2005, v. 11, Issue 2, 177-188 ABSTRACT The system we consider here is a gas of charged fermions in the effective-mass approximation, under grand-canonical conditions. We assume that the particles are confined in a three-dimensional cubic box $\Lambda$ with side $L\geq 1$ and subjected to a constant magnetic field of intensity $B \geq 0$. Define the grand-canonical generalized susceptibilities $\chi_L^N$, $N\geq 1$, as the successive partial derivatives with respect to $B$ of the grand-canonical pressure $P_L$. Denote by $P_{\infty}$ the thermodynamic limit of $P_L$. Our main result is that the $\chi_L^N$ admit as thermodynamic limit the corresponding partial derivatives with respect to $B$ of $P_{\infty}$. In this paper we only give the main steps of the proofs; technical details will be given elsewhere. Keywords: quantum gas, magnetic field, thermodynamic limit
Citation: Journal of Approximation Theory, 2007, vol. 148, n. 1, p. 92-110. ISSN: 0021-9045. DOI: 10.1016/j.jat.2007.02.005. Sponsor: The authors acknowledge financial support from project MTM2004-01367 (Ministerio de Educación y Ciencia). J.S. acknowledges financial support from Project BFM2003-06335-C03-02 (Ministerio de Educación y Ciencia). Liouville-Green transformations of the Gauss hypergeometric equation with changes of variable $$z(x)=\int^x t^{p-1}(1-t)^{q-1}\,dt$$ are considered. When $p+q=1$, $p=0$ or $q=0$, these transformations, together with the application of Sturm theorems, lead to properties satisfied by all the real zeros $x_i$ of any of its solutions in the interval $(0,1)$. Global bounds on the differences $z(x_{k+1})-z(x_k)$, with $0<x_k<x_{k+1}<1$ being consecutive zeros, and monotonicity of their distances as a function of $k$ can be obtained. We investigate the parameter ranges for which these two different Sturm-type properties are available. Classical results for Jacobi polynomials (Szegő's bounds, Grosjean's inequality) are particular cases of these more general properties. Similar properties are found for other values of $p$ and $q$, particularly for certain ranges of $\alpha$ and $\beta$, the usual Jacobi parameters.
Not looking for a solution, just wondering what formula to use (and why): Given population A and population B, where standard deviation(A) = standard deviation(B) = $100$, if I select an equal sample size $N$ from A and B, how big does the sample size need to be to estimate the difference in population means to within $10$ with 99% confidence? My attempt: If both of the population standard deviations are known, then the formula for a confidence interval for the difference between two population means is $\bar{x_1} - \bar{x_2} \pm z\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$ where $\bar{x_1} - \bar{x_2}$ is the difference in the sample means. We know the sample sizes are the same: $n_1 = n_2 = N$. And the $z$ value for the 99% confidence level is 2.58, so: $\bar{x_1} - \bar{x_2} \pm 2.58\sqrt{\frac{100^2 +100^2}{N}}$ Is $\pm 2.58\sqrt{\frac{100^2 +100^2}{N}}$ the margin of error? Then solving $10 = 2.58\sqrt{\frac{100^2 +100^2}{N}}$ yields $N = 1331.28$. Am I on the right track here?
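A quick numerical check of this computation (a minimal Python sketch; it just solves the margin-of-error equation above for N and rounds up, using 2.58 as the 99% z-value as in the attempt):

import math

sigma_a = sigma_b = 100.0   # given population standard deviations
margin = 10.0               # desired margin of error
z = 2.58                    # z-value for 99% confidence (as used above)

# margin = z * sqrt((sigma_a^2 + sigma_b^2) / N)  =>  solve for N
N = (z / margin) ** 2 * (sigma_a**2 + sigma_b**2)
print(round(N, 2))       # 1331.28, matching the attempt
print(math.ceil(N))      # sample sizes must be whole, so round up: 1332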
2018-09-02 17:21 Measurement of $P_T$-weighted Sivers asymmetries in leptoproduction of hadrons / COMPASS Collaboration. The transverse spin asymmetries measured in semi-inclusive leptoproduction of hadrons, when weighted with the hadron transverse momentum $P_T$, allow for the extraction of important transverse-momentum-dependent distribution functions. In particular, the weighted Sivers asymmetries provide direct information on the Sivers function, which is a leading-twist distribution that arises from a correlation between the transverse momentum of an unpolarised quark in a transversely polarised nucleon and the spin of the nucleon. [...] arXiv:1809.02936; CERN-EP-2018-242. Geneva: CERN, 2019-03, 20 p. Published in: Nucl. Phys. B 940 (2019) 34-53.
2018-02-14 11:43 Light isovector resonances in $\pi^- p \to \pi^-\pi^-\pi^+ p$ at 190 GeV/$c$ / COMPASS Collaboration. We have performed the most comprehensive resonance-model fit of $\pi^-\pi^-\pi^+$ states using the results of our previously published partial-wave analysis (PWA) of a large data set of diffractive-dissociation events from the reaction $\pi^- + p \to \pi^-\pi^-\pi^+ + p_\text{recoil}$ with a 190 GeV/$c$ pion beam. The PWA results, which were obtained in 100 bins of three-pion mass, $0.5 < m_{3\pi} < 2.5$ GeV/$c^2$, and simultaneously in 11 bins of the reduced four-momentum transfer squared, $0.1 < t' < 1.0$ (GeV/$c)^2$, are subjected to a resonance-model fit using Breit-Wigner amplitudes to simultaneously describe a subset of 14 selected waves using 11 isovector light-meson states with $J^{PC} = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and spin-exotic $1^{-+}$ quantum numbers. [...] arXiv:1802.05913; CERN-EP-2018-021. Geneva: CERN, 2018-11-02, 72 p. Published in: Phys. Rev. D 98 (2018) 092003.
2018-02-07 15:23 Transverse Extension of Partons in the Proton probed by Deeply Virtual Compton Scattering / Akhunzyanov, R. (Dubna, JINR) et al. We report on the first measurement of exclusive single-photon muoproduction on the proton by COMPASS using 160 GeV/$c$ polarized $\mu^+$ and $\mu^-$ beams of the CERN SPS impinging on a liquid hydrogen target. [...] CERN-EP-2018-016; arXiv:1802.02739. 2018, 13 p.
2017-09-19 08:11 Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering / COMPASS Collaboration. A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN. The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range 0.02 (GeV/$c)^2 < P_{\rm{hT}}^{2} < 3$ (GeV/$c$)$^2$. [...] CERN-EP-2017-253; arXiv:1709.07374. Geneva: CERN, 2018-02-08, 23 p. Published in: Phys. Rev. D 97 (2018) 032006.
2017-07-08 20:47 New analysis of $\eta\pi$ tensor resonances measured at the COMPASS experiment / JPAC Collaboration. We present a new amplitude analysis of the $\eta\pi$ $D$-wave in $\pi^- p\to \eta\pi^- p$ measured by COMPASS. Employing an analytical model based on the principles of the relativistic $S$-matrix, we find two resonances that can be identified with the $a_2(1320)$ and the excited $a_2^\prime(1700)$, and perform a comprehensive analysis of their pole positions. [...] CERN-EP-2017-169; JLAB-THY-17-2468; arXiv:1707.02848. Geneva: CERN, 2018-04-10, 9 p. Published in: Phys. Lett. B 779 (2018) 464-472.
2017-01-05 16:00 First measurement of the Sivers asymmetry for gluons from SIDIS data / COMPASS Collaboration. The Sivers function describes the correlation between the transverse spin of a nucleon and the transverse motion of its partons. It was extracted from measurements of the azimuthal asymmetry of hadrons produced in semi-inclusive deep inelastic scattering of leptons off transversely polarised nucleon targets, and it turned out to be non-zero for quarks. [...] CERN-EP-2017-003; arXiv:1701.02453. Geneva: CERN, 2017-09-10, 11 p. Published in: Phys. Lett. B 772 (2017) 854-864.
From our previous knowledge we are now able to distinguish between a population and a sample, a parameter and a statistic, … Estimation: In the point estimation procedure we make an attempt to compute a numerical value from sample observations, which could be … From the central limit theorem we know that \[Z = \frac{\overline X - \mu }{\sigma / \sqrt n}\] is a standard … As long as $${\sigma ^2}$$ is known, the confidence interval estimate of a population mean can be obtained by the … (1) Large Samples: The difference between two means is of considerable importance in testing the homogeneity of populations. In this … The term standard error has already been introduced very briefly in previous tutorials while discussing the sampling distribution of means. …
WHY? A caption for an image can be generated with an attention-based model by aligning each word to a part of the image. WHAT? A convolutional neural network extracts features from the raw image, resulting in a series of feature vectors. To generate a series of words as a caption, an LSTM with an attention model is used. The extracted feature vectors serve as annotation vectors for the attention. The previous word, the previous hidden vector, and the current context vector are concatenated and projected to the dimension of the hidden vector to be used as inputs for the gates. A deep output layer produces the output word probability distribution. $$e_{ti} = f_{att}(\mathbf{a}_i, \mathbf{h}_{t-1}), \qquad \alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{k=1}^L \exp(e_{tk})}, \qquad \hat{\mathbf{z}}_t = \phi(\{\mathbf{a}_i\}, \{\alpha_i\})$$ $$p(\mathbf{y}_t \mid \mathbf{a}, \mathbf{y}_1^{t-1}) \propto \exp(\mathbf{L}_o(\mathbf{E}\mathbf{y}_{t-1} + \mathbf{L}_h \mathbf{h}_t + \mathbf{L}_z\hat{\mathbf{z}}_t))$$ The weighting function $\phi$ of the annotation vectors and attention weights can vary. Stochastic hard attention treats the attention weights as the parameters of a multinoulli distribution and samples a place of attention from that distribution; a variational lower bound on the marginal log-likelihood can then be maximized by estimating gradients with the REINFORCE algorithm. Deterministic soft attention instead takes the expectation of the context vector, i.e. a weighted sum of the annotation vectors. Doubly stochastic attention is used to encourage the model to focus on every part of the image by imposing a regularization term on the loss function: $$L_d = -\log p(\mathbf{y}\mid\mathbf{x}) + \lambda\sum^L_i\left(1 - \sum_t^C\alpha_{ti}\right)^2$$ So? The attention model was able to generate captions by sequentially focusing on parts of the image. Critic: The attention for words other than keywords drifts around. There could also be attention over relations, since some words refer to relations between objects.
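A minimal NumPy sketch of the deterministic soft-attention step above (the shapes and the bilinear scoring function are my own illustrative assumptions; the paper's $f_{att}$ is a small MLP):

import numpy as np

def soft_attention(annotations, h_prev, W_a, W_h):
    """One step of deterministic soft attention.

    annotations: (L, D) feature vectors a_i from the CNN
    h_prev:      (H,)   previous LSTM hidden state
    W_a, W_h:    projections for a toy bilinear scoring function
                 standing in for the paper's MLP f_att
    """
    # Scores e_ti = score(a_i, h_{t-1})
    e = annotations @ W_a @ (W_h @ h_prev)          # (L,)
    # Softmax over locations -> attention weights alpha_ti
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    # Context vector: expectation of annotation vectors under alpha
    z_hat = alpha @ annotations                     # (D,)
    return alpha, z_hat

rng = np.random.default_rng(0)
L, D, H = 196, 512, 256                             # e.g. a 14x14 feature map
a = rng.normal(size=(L, D))
h = rng.normal(size=(H,))
alpha, z = soft_attention(a, h, rng.normal(size=(D, D)) / D,
                          rng.normal(size=(D, H)) / H)
assert np.isclose(alpha.sum(), 1.0) and z.shape == (D,)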
I'm looking for a formula to generate all solutions $x$, $y$, $z$ of $x^2 + y^2 = 5z^2$. Any advice? Ok, so I am assuming rational solutions. This method can yield a parametrization of all integer solutions without too much work. Note that $(1,2)$ lies on the circle $x^2 + y^2 = 5$. Let $x_0 = 1, y_0 = 2$. Now suppose $x^2 + y^2 = 5$. Let $m = x - x_0$, $n = y - y_0$; then we have $$m^2 + 2mx_0 + x_0^2 + n^2 + 2ny_0 + y_0^2 = 5$$ and thus $m^2 + 2mx_0 + n^2 + 2ny_0 = 0$. Let $\lambda = \frac{m}{n}$. Then we have: $$n^2\lambda^2 + 2n\lambda x_0 + n^2 + 2ny_0 = 0$$ $$n\lambda^2 + 2\lambda x_0 + n + 2y_0 = 0$$ $$n = \frac{-2y_0 - 2\lambda x_0}{1 + \lambda^2}$$ Plugging in $x_0 = 1, y_0 = 2$: $$n = \frac{-4 - 2 \lambda}{1 + \lambda^2}$$ Thus it follows that $$(x,y) = \left (1 + \frac{-4\lambda - 2\lambda^2}{1 + \lambda^2}, 2 + \frac{-4 - 2 \lambda}{1 + \lambda^2} \right )$$ where $\lambda$ is an arbitrary number in $\mathbb{Q}$. Now for $x^2 + y^2 = 5z^2$, we simply need: $$(x,y,z) = \left (z + \frac{-4z\lambda - 2z\lambda^2}{1 + \lambda^2}, 2z + \frac{-4z - 2z \lambda}{1 + \lambda^2},z \right )$$ If you want solutions in $\mathbb{Z}$, it takes only a little more work to finish. EDIT: So either I made a massive reading failure or the author changed the title. So here's how to finish. A slightly neater form to work with is $$(x,y,z) = \left (1 + \lambda^2 -4\lambda - 2\lambda^2,\; 2 + 2\lambda^2 -4 - 2\lambda,\; 1 + \lambda^2 \right )$$ Letting $\lambda = \frac{m}{n}$, we see that: $$(x,y,z) = \left (m^2 - n^2 -4mn,\; 2n^2 -2m^2 - 2mn,\; m^2 + n^2 \right )$$ I couldn't help but notice the pattern $x^2 + y^2 = 5 z^2 = z^2 + (2z)^2$; owing to $(am+bn)^2 + (an-bm)^2 = (an+bm)^2 + (am-bn)^2$, if we let $x=am+bn$, $y=an-bm$, $z=am-bn$, we need $an + bm = 2(am-bn)$, i.e. $a(n - 2m) + b(m+2n) =0$, which is satisfied by $a = (m+2n)k$, $b=(2m-n)k$, where $a,b,m,n,k \in \Bbb Z$. So the solutions are: $x = k( m^2 + 4mn - n^2 )$; $y = 2k(mn + n^2 - m^2 )$; $z = k( m^2 + n^2 )$. These are the same as what "dinoboy" seems to have obtained with comparatively more effort. Presumably you're looking for solutions from $\mathbb Z$. When you have such a solution, you can divide the equation through by $z^2$ to find rational numbers $\lambda$ and $\mu$ such that $\lambda^2+\mu^2=5$. So far so good. Now you can think of $(\lambda,\mu)$ as the complex number $\lambda+\mu i\in{\mathbb{Q}}(i)$, the field of Gaussian numbers. And, when we call $z=\lambda+\mu i$, the condition is that $\mathbf{N}(z)=5$, where $\mathbf N$ is the norm map, $z\mapsto z\bar z$, which you see is multiplicative. Now, in case $\mathbf{N}(z)=5$ and $\mathbf{N}(u)=1$, you see that $zu$ is another point on your circle of radius $\sqrt5$, just the kind of number you're looking for. But we know all the Gaussian numbers of norm $1$; they correspond to Pythagorean triples, just as $5/13 + 12i/13$ corresponds to the Pythagorean triple $(5,12,13)$, and there are various ways of describing these triples, in other words the appropriate Gaussian numbers of norm $1$. Here's one way of describing all the P-triples: The Gaussian numbers of norm $1$ form an infinitely generated abelian group. The torsion subgroup is $\{\pm1,\pm i\}$, and, modulo these, the group is free abelian, with generators indexed by the primes congruent to $1$ modulo $4$.
For each such prime $p$, you write $p=m^2+n^2$, and the corresponding generator of the above-mentioned free abelian group is $(m+ni)/(m-ni)$. The upshot is that once you've made your choices of these generators $\{g_p\}$, every Gaussian number of norm $1$ is uniquely writable as $\varepsilon\prod_p g_p^{e_p}$. Here the product must be finite, that is, all but finitely many of the exponents $e_p$ must be zero, and $\varepsilon$ is $\pm1$ or $\pm i$. Example: take $g_5=(2+i)/(2-i)=(3+4i)/5$. Then, using the fixed Gaussian number $2+i$ for your $z$, if you use $u=g_5^2=(-7+24i)/25$, you get $uz=(-38+41i)/25$. This yields the solution $x=-38$, $y=41$, $z=25$ to your original equation.
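As a quick sanity check on both answers above, a minimal Python sketch that verifies the parametrization $x = k(m^2+4mn-n^2)$, $y = 2k(mn+n^2-m^2)$, $z = k(m^2+n^2)$ over a small grid of integers, as well as the worked Gaussian-number example $(-38, 41, 25)$:

# Verify the parametrization from the answers above on a small grid,
# and the worked Gaussian-number example (-38, 41, 25).
def triple(m, n, k):
    x = k * (m * m + 4 * m * n - n * n)
    y = 2 * k * (m * n + n * n - m * m)
    z = k * (m * m + n * n)
    return x, y, z

for k in range(1, 4):
    for m in range(-6, 7):
        for n in range(-6, 7):
            x, y, z = triple(m, n, k)
            assert x * x + y * y == 5 * z * z, (m, n, k)

x, y, z = -38, 41, 25
assert x * x + y * y == 5 * z * z   # 1444 + 1681 = 3125 = 5 * 625
print("parametrization and example verified")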