What are some of the methods used to test the accuracy of a numerical solution when an analytical solution isn't available and the numerical solution converges?
You should also read about the Method of Manufactured Solutions (PDF) which will show you how to generate analytical solutions to your problem.
Verification of a numerical solution is not fully possible when there is no analytical solution for comparison, but there are still several ways to gain confidence in the "correctness".
In your question the meaning of "the numerical solution converges" is not fully clear. "Convergence" in the sense that the numerical code produces a finite answer in finite time is of course necessary for correctness. More strongly, typically a mesh-refinement study will be performed to show that the solution converges as the discretization becomes finer.
If your PDE system is $F(x)=0$, then the error is $\Delta x=x_h-x_*$, where $x_h$ is the discrete solution for mesh-size $h$ and $x_*$ is the exact solution. You would want the convergence to be monotone (i.e. refining the mesh does not increase the error), and possibly also show a consistent order of convergence (e.g. $\|\Delta x\|=O(h^{p})$ for some $p>0$).
In your case, you could choose some proxy for the "exact" solution. For example, this could be a solution on a very fine mesh, or a solution from another code, e.g. from the literature. (For
validation, rather than verification, you would use observations from the natural/experimental/engineered system your PDE is modeling.) Then you can use this proxy in the mesh refinement convergence analysis.
Rather than the solution
error, you can also look at the solution residual, i.e. $|F_h(x_h)|$. In this case you should normalize the residual so it does not inherently grow with mesh refinement. For example the residual could be averaged, or integrated over the fixed domain size (conceptually, $\int_\Omega F d\Omega = \bar{F}|\Omega|$, with $h=|\Omega|/N$).
I would also recommend doing code verification as well. That is, solve some problems that do have an analytical solution using the same code, and verify that these are correct. (Here, you can manufacture test cases that by design have an analytical solution; see Bill Barth's answer.)
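The mesh-refinement study described above takes only a few lines; here is a hedged sketch (the error values are synthetic placeholders standing in for errors measured against your proxy solution) that estimates the observed order of convergence:

```python
import math

# Estimate the observed order of convergence p, assuming e(h) ~ C * h^p,
# from errors measured on meshes h, h/2, h/4, ... against a proxy solution.
def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# synthetic errors from a second-order scheme (placeholders, not real data)
errors = [1.0e-1, 2.5e-2, 6.25e-3]           # meshes h, h/2, h/4
orders = [observed_order(errors[i], errors[i + 1])
          for i in range(len(errors) - 1)]   # both approximately 2.0
```

A roughly constant estimated order across several refinement levels is the usual evidence that the code converges at the expected rate.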
Many thanks to Bart Andrews for this contribution!
Question Show that a relation of the kind \(f(x, y, z) = 0\) between the three quantities x, y, and z implies the relation \[\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1\] between the partial derivatives. The grand canonical partition sum \(Z_G\) is a function of the three parameters β, V, and μ. All other state variables are derived from \(Z_G\) and its partial derivatives. Therefore, there must be a relation \(f(N, μ, V, T) = 0\) for the four quantities N, μ, V, and T. Consider this equation for constant T and show that
\[ \left( \frac{\partial {\langle N \rangle}}{\partial \mu} \right)_{V,T}=-{\langle N \rangle}\frac{\left(\frac{\partial {\langle N \rangle}}{\partial V}\right)_{\mu,T}}{\left(\frac{\partial {\langle N \rangle} \mu}{\partial V}\right)_{{\langle N \rangle},T}} \]
Solution
This is once again a two part question. It makes sense, however, to do the first part first on this occasion. The first part asks you to derive the cyclic relation for partial derivatives. So let us first write down the total differential for \(dz\) and then move along the curve of constant \(z\).
\[dz = \bigg( {\partial z \over \partial x} \bigg)_y dx + \bigg( {\partial z \over \partial y} \bigg)_x dy \stackrel{!}{=} 0 \]
It follows that the differential of \(y\) along this curve is given by…
\[ dy = \bigg( {\partial y \over \partial x} \bigg)_z dx \]
…since \(dz=0\). Now if we substitute this expression back into the total differential of \(z\), we arrive at the cyclic relation (assuming the reciprocity relation).
\begin{align}
0 & = \bigg( {\partial z \over \partial x} \bigg)_y dx + \bigg( {\partial z \over \partial y} \bigg)_x \bigg( {\partial y \over \partial x} \bigg)_z dx \nonumber \\ 0 & = \Bigg( \bigg( {\partial z \over \partial x} \bigg)_y + \bigg( {\partial z \over \partial y} \bigg)_x \bigg( {\partial y \over \partial x} \bigg)_z \Bigg) dx \;\;\;\;\;\;\;\; \forall x \nonumber \\ \bigg( {\partial z \over \partial x} \bigg)_y & = - \bigg( {\partial z \over \partial y} \bigg)_x \bigg( {\partial y \over \partial x} \bigg)_z \nonumber \\ \bigg( {\partial x \over \partial y} \bigg)_z \bigg( {\partial y \over \partial z} \bigg)_x \bigg( {\partial z \over \partial x} \bigg)_y & = -1 \; \blacksquare \nonumber \end{align}
The next part of the question asks you to consider a specific case in statistical mechanics. The principle is the same; it is just that the equation has physical meaning. The question asks you to start with \(f( \langle N \rangle , \mu , V , T)=0\). However, \(T\) is not a variable in this case, it is a constant, and so we know that \(f( \langle N \rangle , \mu , V)=0\). By analogy we can write down the cyclic relation.
\[ \bigg( {\partial {\langle N \rangle} \over \partial \mu} \bigg)_{V,T} \bigg( {\partial \mu \over \partial V} \bigg)_{\langle N \rangle, T} \bigg( {\partial V \over \partial {\langle N \rangle}} \bigg)_{\mu,T} = -1 \]
After some algebraic manipulation with the assumption of the reciprocity relation, we arrive at the desired result.
\begin{eqnarray}
\bigg( {\partial {\langle N \rangle} \over \partial \mu} \bigg)_{V,T} {\langle N \rangle} \bigg( {\partial \mu \over \partial V} \bigg)_{\langle N \rangle, T} \bigg( {\partial V \over \partial {\langle N \rangle}} \bigg)_{\mu,T} & & = -{\langle N \rangle} \nonumber \\ \bigg( {\partial {\langle N \rangle} \over \partial \mu} \bigg)_{V,T} \bigg( {\partial {{\langle N \rangle} \mu} \over \partial V} \bigg)_{\langle N \rangle, T} \bigg( {\partial V \over \partial {\langle N \rangle}} \bigg)_{\mu,T} & & = -{\langle N \rangle} \nonumber \\ \bigg( {\partial {\langle N \rangle} \over \partial \mu} \bigg)_{V,T} \bigg( {\partial {{\langle N \rangle} \mu} \over \partial V} \bigg)_{\langle N \rangle, T} & & = -{\langle N \rangle} {\bigg( {\partial {\langle N \rangle} \over \partial V} \bigg)_{\mu,T}}\nonumber \\ \bigg( {\partial {\langle N \rangle} \over \partial \mu} \bigg)_{V,T} & & = -{\langle N \rangle} \frac{{\left( {\partial {\langle N \rangle} \over \partial V} \right)_{\mu,T}}}{\left( {\partial {{\langle N \rangle} \mu} \over \partial V} \right)_{\langle N \rangle, T}} \; \blacksquare \nonumber \end{eqnarray} |
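As a quick sanity check (my own addition, not part of the exercise), the cyclic relation can be verified symbolically for a concrete relation \(f(x,y,z)=0\); here a sketch using the ideal gas law \(PV - RT = 0\) (one mole) with sympy:

```python
import sympy as sp

# cyclic-relation check for f(P, V, T) = P*V - R*T = 0 (one mole of ideal gas)
P, V, T, R = sp.symbols('P V T R', positive=True)

dPdV = sp.diff(R * T / V, V)   # (dP/dV)_T, from solving f = 0 for P
dVdT = sp.diff(R * T / P, T)   # (dV/dT)_P, from solving f = 0 for V
dTdP = sp.diff(P * V / R, P)   # (dT/dP)_V, from solving f = 0 for T

# substitute the constraint P = R*T/V and simplify: the product is -1
product = sp.simplify((dPdV * dVdT * dTdP).subs(P, R * T / V))
assert product == -1
```

Each factor is computed by solving the constraint for one variable and differentiating with the third held fixed, exactly as in the derivation above.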
Consider the equations of motion of an object in an orbit around a central body. We use an inertial frame originating at the centre of mass of the central body: $$ \frac{d\mathbf{r}}{dt} = \mathbf{V} \tag{1} $$ $$ \frac{d\mathbf{V}}{dt} = -\frac{\mu}{r^3}\mathbf{r} + \mathbf{A}_\text{ext} \tag{2} $$ where $\mathbf{r}$ is the position vector of the object, $\mathbf{V}$ is the velocity, $\mu$ is the gravitational parameter of the central body and $\mathbf{A}_\text{ext}$ are accelerations due to some other sources (for example, thrust from engines, drag, etc.).
I am interested in specifying these with respect to the specific orbital energy $\epsilon$. For this, I need to know $\dfrac{d\epsilon}{dt}$
I know that $$\epsilon = \frac{V^2}{2}-\frac{\mu}{r} \tag{3}$$ So, differentiating: $$ \frac{d\epsilon}{dt} = V\dfrac{dV}{dt}+\dfrac{\mu}{r^2}\dfrac{dr}{dt} \tag{4} $$
If we multiply equation (2) with $\dfrac{d\mathbf{r}}{dt}$, we get: $$ \dfrac{d\mathbf{r}}{dt} \cdot \frac{d\mathbf{V}}{dt} = - \dfrac{d\mathbf{r}}{dt} \cdot \frac{\mu}{r^3}\mathbf{r} + \dfrac{d\mathbf{r}}{dt} \cdot \mathbf{A}_\text{ext} \tag{5} $$
Rearranging it, $$ \mathbf{V} \cdot \frac{d\mathbf{V}}{dt} + \dfrac{d\mathbf{r}}{dt} \cdot \frac{\mu}{r^3}\mathbf{r} = \mathbf{V} \cdot \mathbf{A}_\text{ext} \tag{6} $$
I am stuck here. $\epsilon$ is a scalar, and so is its derivative. However, it appears that the left-hand side of equation (6) is the vector form of equation (4).
Could you help me proceed? If I have the above, then I could formulate the equations of motion. Or, if you already know the equations of motion with respect to energy, that would be great!
The Wikipedia article on specific orbital energy states that $\dfrac{d\epsilon}{dt} = \mathbf{V}\cdot \mathbf{A}_\text{ext}$ but doesn't give any reference or steps.
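For what it's worth, the gap between (4) and (6) closes by noting that \(r^2 = \mathbf{r}\cdot\mathbf{r}\) gives \(r\,\frac{dr}{dt} = \mathbf{r}\cdot\mathbf{V}\), so \(\frac{\mu}{r^2}\frac{dr}{dt} = \frac{\mu}{r^3}\,\mathbf{r}\cdot\mathbf{V}\), and likewise \(V\frac{dV}{dt} = \mathbf{V}\cdot\frac{d\mathbf{V}}{dt}\); hence the left side of (6) is exactly \(\frac{d\epsilon}{dt}\). A symbolic spot-check of the resulting identity (a sketch using sympy, my own addition):

```python
import sympy as sp

# check that d(eps)/dt = V . A_ext follows from eqs. (1)-(3)
t, mu = sp.symbols('t mu', positive=True)
r_vec = sp.Matrix([sp.Function(n)(t) for n in ('x', 'y', 'z')])
V_vec = r_vec.diff(t)                        # eq. (1)
r = sp.sqrt(r_vec.dot(r_vec))

eps = V_vec.dot(V_vec) / 2 - mu / r          # eq. (3)
A_ext = V_vec.diff(t) + mu / r**3 * r_vec    # A_ext solved from eq. (2)

# the difference simplifies to zero identically
assert sp.simplify(eps.diff(t) - V_vec.dot(A_ext)) == 0
```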
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html with latexml or tex4ht, then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals. |
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious?
Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc
Thanks. I'll email first but it sounds like a flat file with a TDS included is the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto generated so there's no packaging overhead...)
@Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be).
@Bubaya (gotta go now, no time for followups on this one …)
@egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE:
\documentclass[10pt]{scrartcl}
\usepackage{lmodern}
\usepackage{amsfonts}
\begin{document}
\noindent
If all indices are even, then all $\gamma_{i,i\pm1}=1$.
In this case the $\partial$-elementary symmetric polynomials
specialise to those from at $\gamma_{i,i\pm1}=1$,
which we recognise at the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$.
The induction formula from indeed gives
\end{document}
@PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.)
@JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out. Oddly enough facebook and linked in do the same, as did research gate before I spam filtered RG:-)
@DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users.
@UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe
@UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it?
@DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ...
@JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter or should I be doing the cherry picking stuff (not that the git history is so important here) github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer)
@JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions; it failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility:-)
In his classic paper
"Modular Equations and Approximations to $\pi$ (1914)", Ramanujan gives a standard technique to obtain a general family of series for $1/\pi$ based on series for $(2K/\pi)^{2}$ in terms of $k$ (see the details here). Most of the series which Ramanujan provides are based on the following expressions for $(2K/\pi)^{2}$:
\begin{align}\left(\frac{2K}{\pi}\right)^{2}&= 1 + \left(\frac{1}{2}\right)^{3}(2kk')^{2} + \left(\frac{1\cdot 3}{2\cdot 4}\right)^{3}(2kk')^{4} + \cdots\tag{1}\\ \left(\frac{2K}{\pi}\right)^{2}&= (1 + k^{2})^{-1}\,_{3}F_{2}\left(\frac{1}{4},\frac{3}{4},\frac{1}{2}; 1, 1; \left(\frac{g^{12} + g^{-12}}{2}\right)^{-2}\right)\tag{2}\\ \left(\frac{2K}{\pi}\right)^{2}&= (1 - 2k^{2})^{-1}\,_{3}F_{2}\left(\frac{1}{4},\frac{3}{4},\frac{1}{2}; 1, 1; -\left(\frac{G^{12} - G^{-12}}{2}\right)^{-2}\right)\tag{3}\\ \left(\frac{2K}{\pi}\right)^{2}&= \{1 - (kk')^{2}\}^{-1/2}\,_{3}F_{2}\left(\frac{1}{6},\frac{5}{6}, \frac{1}{2};1;1; \frac{27G^{24}}{(4G^{24} - 1)^{3}}\right)\tag{4}\end{align}
(In the above $G = (2kk')^{-1/12}, g = (2k/k'^{2})^{-1/12}$. I have left the series related to functions ${}_{3}F_{2}(1/3, 2/3, 1/2; 1; 1; a(k))$ based on elliptic functions to base 3).
Next, the Chudnovsky brothers (around 1989) use the following series $$\left(\frac{2K}{\pi}\right)^{2} = \{1 - 4G^{-24}\}^{-1/2}\,_{3}F_{2}\left(\frac{1}{6},\frac{5}{6}, \frac{1}{2};1;1; \frac{-27G^{48}}{(G^{24} - 4)^{3}}\right)\tag{5}$$ to give their famous series based on $G_{163}$. Another series can be obtained from $(5)$ above by changing the nome $q$ into $(-q)$: $$\left(\frac{2K}{\pi}\right)^{2} = \{k'^{4} + 16k^{2}\}^{-1/2}\,_{3}F_{2}\left(\frac{1}{6},\frac{5}{6}, \frac{1}{2};1;1; \frac{27g^{48}}{(g^{24} + 4)^{3}}\right)\tag{6}$$
The general family of series for $1/\pi$ based on equation $(6)$ is as follows: $$\frac{1}{\pi} = \sum_{m = 0}^{\infty}\frac{(1/6)_{m}(5/6)_{m}(1/2)_{m}}{(m!)^{3}}(A + mB)X_{n}^{m}\tag{7}$$ where \begin{align} X_{n} &= \frac{27g_{n}^{48}}{(g_{n}^{24} + 4)^{3}}\notag\\ A &= \frac{1}{2k\sqrt{g_{n}^{24} + 4}}\left(\frac{\sqrt{n}}{3}(1 - 2k^{2}) - \frac{R_{n}(k, k')}{6}\right) - \sqrt{n}\cdot\frac{g_{n}^{12}(k^{2} + 7)}{4(g_{n}^{24} + 4)^{3/2}}\notag\\ B &= \sqrt{n}(g_{n}^{12} + k)\cdot\frac{g_{n}^{24} - 8}{(g_{n}^{24} + 4)^{3/2}}\notag\\ n &> 4\notag\\ k &= k(q) = k(e^{-\pi\sqrt{n}})\notag \end{align}
I am currently trying to work out a series based on the above equation $(7)$ (using $n = 58$) and am a bit bogged down in calculations with various radicals. I don't know if equation $(7)$ above has been used anywhere to generate series for $1/\pi$ (let me know of any references if this is already in the literature). I want to know whether this approach would lead to a genuinely new family of series for $1/\pi$.
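For illustration only (this is the classic \(G_{163}\) Chudnovsky series alluded to above, not the sought \(n=58\) case), members of these \(A + mB\) families converge extremely fast; each term of this one adds roughly 14 correct digits:

```python
from mpmath import mp, mpf, factorial

# Chudnovsky series for 1/pi (the G_163 instance), summed to high precision
mp.dps = 60
s = mpf(0)
for k in range(5):  # 5 terms give ~60+ digits
    num = factorial(6 * k) * (13591409 + 545140134 * k)
    den = factorial(3 * k) * factorial(k) ** 3 * mpf(640320) ** (3 * k)
    s += (-1) ** k * num / den

# 1/pi = 12 * s / 640320^(3/2), so:
pi_approx = mpf(640320) ** mpf('1.5') / (12 * s)
```

Truncating the sum after a handful of terms already pins down \(\pi\) to dozens of digits, which is why radical evaluations like those for \(g_{58}\) are the hard part, not the summation.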
I am reading Pragmatic MPC (link) about the BGW protocol. I also cross-reference the slides here (click on the lecture link): The BGW Construction for the Information Theoretic Setting – Benny Pinkas. I do not understand why the multiplication gate calculation can work.
First, let's say of $N$ parties, party $P_i$ has shares of wires $\alpha$, $\beta$ as $[v_{\alpha}]$, $[v_{\beta}]$. He can multiply them together to get a point on the polynomial $q(x) = p_{\alpha}(x) p_{\beta}(x)$. But everyone's values together could then only be reconstructed with $q(x)$, where $q(x)$ is degree $2t$.
Useful fact: we say there exist $\lambda_i$ for $i = 1$ to $N$ (or one can index only up to $2t+1$, but I don't know why) such that $q(0) = \sum_{i=1}^N \lambda_i q(i)$ (I guess party $P_i$ has the value $q(i)$ in his possession). The $\lambda_i$ are the "appropriate Lagrange coefficients".
Every party $P_i$ can share their value $q(i)$. They pick $g_i$ such that $g_i(0) = q(i)$. (This polynomial was unnamed in the book, but maybe makes things more explicit). Then $P_i$ shares to all the parties so they have shares $[q(i)]$.
If we then think of what $P_i$ is receiving, he gets shares of $g_j(0)$ for each $j$, for him that means he gets the value $g_j(i)$.
Then, each $P_i$, on inputs $g_j(i)$, uses these points to create his share $[q(0)] = \sum_{j=1}^N \lambda_j [q(j)]$. For party $P_i$, this means he can compute his share of $q(0)$ as $\sum_{j=1}^N \lambda_j g_j(i)$.
The value $q(i)$ turns out to be their share $[v_{\alpha}v_{\beta}]$.
I understand that these $\lambda_i$ should be the same as the one for the useful fact. Why?
Where did the useful fact come from?
I also took a look at a paper (which I didn't read through), A Full Proof of the BGW Protocol for Perfectly-Secure Multiparty Computation (link), where they say that $q(x)$ (they call it $h(x)$) has a "specific structure". What does that mean?
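For what it's worth, the "useful fact" is Lagrange interpolation evaluated at \(0\): a polynomial of degree at most \(2t\) is determined by any \(2t+1\) of its values, which is why indexing up to \(2t+1\) already suffices. A hedged numeric sketch over a prime field (all names here are mine, not from the book):

```python
import random

# Verify q(0) = sum_i lambda_i * q(i) over GF(p) for a degree-2t polynomial.
p = 2**61 - 1                      # a Mersenne prime
t = 2
xs = list(range(1, 2 * t + 2))     # party indices 1..2t+1 are enough points

def lambdas_at_zero(xs, p):
    """Lagrange coefficients lambda_i for evaluating at x = 0."""
    out = []
    for xi in xs:
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        out.append(num * pow(den, -1, p) % p)   # modular inverse (Py >= 3.8)
    return out

coeffs = [random.randrange(p) for _ in range(2 * t + 1)]  # random degree-2t poly
q = lambda x: sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p

lams = lambdas_at_zero(xs, p)
assert sum(l * q(x) for l, x in zip(lams, xs)) % p == q(0)
```

The \(\lambda_i\) depend only on the evaluation points, not on \(q\), which is why the same coefficients work both on the values \(q(i)\) and, linearly, on shares of those values.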
We know that a massless $\phi^4$ theory $$S=\int d^4x \left[\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-\frac{\lambda}{4!}\phi^4\right],$$ has conformal invariance at the classical level. But within the Coleman-Weinberg mechanism, at the one-loop level, quantum fluctuations will generate a vacuum expected value for $\phi$, introducing a mass scale and breaking the conformal invariance. Is this phenomenon a dynamical symmetry breaking or an anomaly? How can we distinguish between them?
First, dynamical symmetry breaking (which I take to be either synonymous with or a subset of spontaneous symmetry breaking) and anomalies are two completely different things. An
anomaly is when a symmetry group acquires a central extension, due to some obstruction in the process of representing it in our theory. Such obstructions can exist purely classically, or they can arise in the course of quantization, but they are crucially features of the whole theory. For more information on anomalies, see this excellent answer by DavidBarMoshe. In contrast, in spontaneous symmetry breaking, the theory retains the symmetry, but its vacuum state does not, which leads to the symmetry being non-linearly realized on the natural perturbative degrees of freedom (being "broken").
Just $\phi$ acquiring a VEV would not mean an anomaly, that would just be ordinary spontaneous symmetry breaking. However, the appearance of the $\phi^2$ term in the effective potential
also means that we have an anomaly, i.e. the quantum effective action is not invariant under the classical symmetry - this is a clear case of a quantum anomaly. That is, in this case, the Coleman-Weinberg mechanism leads to both spontaneous symmetry breaking and a quantum anomaly, but it is perfectly conceivable to have one without the other - they are completely distinct things. It might be debatable whether we want to speak of spontaneously "breaking" a symmetry that became anomalous to begin with, though.
Strictly speaking, the phrase
dynamical symmetry breaking, although essentially a kind of spontaneous symmetry breaking process, means something slightly different. While spontaneous symmetry breaking usually refers to cases where an elementary scalar field (such as $\phi$) acquires a VEV, dynamical symmetry breaking refers to cases where the scalar field that acquires a VEV is a composite field. In this sense the term dynamical refers to a force that is strong enough to bind fields so strongly that the composite acquires a VEV. Chiral symmetry breaking is an example of dynamical symmetry breaking. Technicolor theories are attempts to describe electroweak symmetry breaking in terms of a dynamical symmetry breaking process.
In special relativity, the coordinates of an event are in general written using a 4-vector: $$x^{\mu} = \binom{ct}{\textbf{x}}$$ where $\textbf{x} = (x,y,z)$ are the spatial coordinates. This is a contravariant vector, and its covariant representation is written as $$x_{\mu} = \binom{ct}{-\textbf{x}}.$$ However, I know that contravariant and covariant vectors are defined by the way their components transform. If $A$ is contravariant and $B$ is covariant: $$A'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}} A^{\nu}, \ \ \ \ B'_{\mu} = \frac{\partial x^{\nu}}{\partial x'^{\mu}} B_{\nu}.$$ Using this definition, how can I show that $x_{\mu}$ is in fact covariant?
As a side issue, coordinates don't really act like vectors. Infinitesimal changes in the coordinates do. It makes a difference when spacetime isn't flat or when you use coordinates that aren't Minkowski. But this is not really relevant to your question.
Consider the coordinate transformation $(t,x)\mapsto(t',x')=(\alpha t,\alpha x)$, where $\alpha$ is a positive constant. This is a change of units. By comparing this with the rules for transforming vectors and covectors, you'll see that the pair of coordinates acts like a vector, not a covector.
If the metric was given by the line element $dt^2-dx^2$ in the original coordinates, then it's given by $\alpha^{-2}dt'^2-\alpha^{-2}dx'^2$ in the new coordinates. Then lowering an index gives the components $(\alpha^{-2}t',-\alpha^{-2}x')$ for the covector in the primed coordinate system. You can now check that this behaves according to the rule for transforming covectors.
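The check suggested here can also be done numerically; a minimal sketch (the value \(\alpha = 3\) and the sample coordinates are my own choices), treating the Jacobian and metric as matrices:

```python
import numpy as np

alpha = 3.0
g = np.diag([1.0, -1.0])        # metric of dt^2 - dx^2 in original coordinates
J = np.diag([alpha, alpha])     # Jacobian dx'/dx of (t, x) -> (alpha*t, alpha*x)
Jinv = np.linalg.inv(J)

x = np.array([2.0, 5.0])        # coordinate pair (t, x), acting as a vector
x_prime = J @ x                 # vector rule: A'^mu = (dx'^mu/dx^nu) A^nu

g_prime = Jinv.T @ g @ Jinv     # metric transforms with two lower indices
low_prime = g_prime @ x_prime   # index lowered in primed coordinates
low = g @ x                     # index lowered in original coordinates

# covector rule: B'_mu = (dx^nu/dx'^mu) B_nu, i.e. Jinv.T @ B
assert np.allclose(low_prime, Jinv.T @ low)
```

The last line confirms that lowering the index and then transforming agrees with transforming by the covector rule, which is the claim being checked.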
There is no general rule. However there is a class of bounded self-adjoint operators whose spectrum is made of a bounded set of isolated points (proper eigenvalues) -- except for $0$ at most -- and the eigenspaces associated to these eigenvalues are finite dimensional. They are the so-called
compact operators (this class includes classes of operators important in QM, like Hilbert-Schmidt and trace class ones). However, there are operators which are not compact but have a pure point spectrum. An example is the Hamiltonian of the harmonic oscillator, whose eigenspaces are also finite dimensional but which is not bounded. The reason the spectrum has the same features as that of compact operators is that inverse powers of these operators, or the associated resolvent operators, are bounded and compact.
Conversely, an example of a
bounded operator with pure continuous spectrum is the position operator in $L^2([0,1], dx)$ defined as usual: $$(X\psi)(x) = x \psi(x)\quad \forall \psi \in L^2([0,1], dx)\:.$$ It does not admit (proper) eigenvalues. The spectrum is $\sigma(X)= [0,1]$. Since, for a self-adjoint operator (more generally, for a normal operator), $$||A||= \sup_{\lambda \in \sigma(A)} |\lambda|,$$ you see that $||X||=1$.
NOTE. Regarding your added last point (a connection between normalisability of eigenfunctions and discreteness of eigenvalues) the situation is the following.
If $\lambda\in \sigma(A)$ is an
isolated point of the spectrum $\sigma(A)$ of the self-adjoint operator $A$, then $\lambda$ is a proper eigenvalue, and thus its eigenvectors are proper (normalizable) eigenvectors. So, as you suppose, in the jargon of physicists, "discrete eigenvalues" are proper eigenvalues with normalizable eigenvectors.
The converse is however generally
false. You can have points $\lambda$ in a continuous part of $\sigma(A)$ (say, $\lambda \in (a,b)$ with $(a,b) \subset \sigma(A)$) which are proper eigenvalues. Even, in a non-separable Hilbert space, it is possible to construct a self-adjoint operator $A$ such that $\sigma(A)=[0,1]$ and all points of $[0,1]$ are proper eigenvalues with proper eigenvectors. In a separable Hilbert space this is not possible, but one can easily construct an operator whose set of proper eigenvalues is dense in $[0,1]$.
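A numerical illustration of the contrast drawn in this answer (the grid parameters are my own choice): a finite-difference harmonic oscillator Hamiltonian has isolated low-lying eigenvalues near \(n + 1/2\), whereas a discretized position operator on \([0,1]\) is simply \(\mathrm{diag}(x_j)\), whose eigenvalues fill \([0,1]\) ever more densely under refinement:

```python
import numpy as np

# Finite-difference H = -1/2 d^2/dx^2 + x^2/2 on a large interval:
# the lowest eigenvalues approximate the isolated points n + 1/2.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
H = (np.diag(1.0 / h**2 + 0.5 * x**2)
     + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))
evals = np.linalg.eigvalsh(H)   # evals[:3] close to 0.5, 1.5, 2.5

# Discretized position operator on [0, 1]: its eigenvalues are just the
# grid points, filling the whole interval with no gaps as N grows.
X = np.diag(np.linspace(0.0, 1.0, 500))
```

Of course the discretized \(X\) has (trivial) eigenvectors; the point is that refinement produces ever denser eigenvalues with no isolated points, mimicking the purely continuous spectrum \([0,1]\) of the true operator.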
The problem is:
Original Problem \[\begin{align}\min\>&\sum_i \color{darkred}x_i\\ & \mathbf{sd}(\color{darkred}x) \lt \color{darkblue}\alpha\\ & \color{darkred}x_i \in \{0,1\}\end{align}\]
Notes:
Here \(\mathbf{sd}\) is the standard deviation. We assume \(x\) has \(n\) components. Of course, \(\lt\) is problematic in optimization, so the equation should become a \(\le\) constraint. The standard formula for the standard deviation is: \[ \sqrt{\frac{\sum_i (x_i-\bar{x})^2}{n-1}}\] where \(\bar{x}\) is the average of \(x\). This is an easy problem: just choose \(x_i=0\). When we use \(\max \sum_i x_i\) things are equally simple; in that case choose \(x_i = 1\). There is symmetry: \(\mathbf{sd}(x) = \mathbf{sd}(1-x)\). A more interesting problem is to have \(\mathbf{sd}(x)\ge\alpha\).
Updated problem
A slightly different problem, and somewhat reformulated is:
MIQCP problem \[\begin{align}\min\>&\bar{\color{darkred}x} \\ & \bar{\color{darkred}x}= \frac{\sum_i \color{darkred}x_i}{\color{darkblue}n}\\ & \frac{\sum_i (\color{darkred}x_i - \bar{\color{darkred}x})^2}{\color{darkblue}n-1} \ge \color{darkblue}\alpha^2 \\ & \color{darkred}x_i \in \{0,1\}\end{align}\]
First, we replaced \(\lt\) by \(\ge\) to make the problem more interesting. Furthermore I got rid of the square root. This removes a possible problem with being non-differentiable at zero. The remaining problem is a non-convex quadratically constrained (MIQCP=Mixed Integer Quadratically Constrained Problem). The non-convexity implies we want a global solver.
This model solves easily with solvers like Baron or Couenne.
Integer variable
When we look at the problem a bit more, we see we are not really interested in which \(x_i\)'s are zero or one. Rather, we need only to worry about the number. Let \(k = \sum_i x_i\). Obviously \(\bar{x}=k/n\). But more interestingly: \[\mathbf{sd}(x) = \sqrt{\frac{k (1-\bar{x})^2+(n-k)(0-\bar{x})^2}{n-1}}\] The integer variable \(k\) is restricted to \(k=0,1,\dots,n\).
Thus we can write:
MINLP problem \[\begin{align}\min\>&\color{darkred}k \\ & \frac{\color{darkred}k (1-\color{darkred}k/\color{darkblue}n)^2+(\color{darkblue}n-\color{darkred}k) (\color{darkred}k/\color{darkblue}n)^2}{\color{darkblue}n-1} \ge \color{darkblue}\alpha^2 \\ & \color{darkred}k = 0,1,\dots,\color{darkblue}n \end{align}\]
The constraint can be simplified into \[\frac{k-k^2/n}{n-1}\ge \alpha^2\] This is now so simple we can do this by enumerating \(k=0,\dots,n\), check the constraint, and pick the best.
Because of the form of the standard deviation curve (note the symmetry), we can specialize the enumeration loop and restrict the loop to \(k=1,\dots,\lfloor n/2 \rfloor\). Pick the first \(k\) that does not violate the constraint (and when found exit the loop). For very large \(n\) we can use something like a bisection to speed things up even further.
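This enumeration (and the bisection refinement) is easy to sketch in Python; the values \(n=100\) and \(\alpha=0.3\) are just an illustrative example:

```python
import math

def sd(k, n):
    """Standard deviation of a binary vector with k ones among n entries."""
    return math.sqrt((k - k * k / n) / (n - 1))

def min_k_enumeration(n, alpha):
    """Smallest k with sd(k) >= alpha, enumerating k = 1..n/2 (by symmetry)."""
    for k in range(1, n // 2 + 1):
        if sd(k, n) >= alpha:
            return k
    return None  # infeasible: alpha exceeds the largest achievable sd

def min_k_bisection(n, alpha):
    """Same answer via bisection, valid because sd(k) increases on [0, n/2]."""
    lo, hi = 1, n // 2
    if sd(hi, n) < alpha:
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if sd(mid, n) >= alpha:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_k_enumeration(100, 0.3))  # -> 10
print(min_k_bisection(100, 0.3))    # -> 10
```

For large \(n\) the bisection variant does \(O(\log n)\) evaluations instead of \(O(n)\).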
So this example optimization problem does not really need to use optimization at all.
References
Constrained optimisation with function in the constraint and binary variable, https://stackoverflow.com/questions/57850149/constrained-optimisation-with-function-in-the-constraint-and-binary-variable
Another problem that minimizes the standard deviation, https://yetanothermathprogrammingconsultant.blogspot.com/2017/09/minimizing-standard-deviation.html
Variable definitions:
\(B_i\): Loan balance after the \(i\)th payment
\(B_0\): Principal, or initial loan balance
\(r\): Interest rate
\(t_\text{term}\): Loan term
\(rt_\text{term}\): Loan product, an important parameter which fully specifies a loan
\(n\): Number of loan payments
\(\Delta t\): Time between loan payments
\(R\): Helpful collection of variables
\(\tau\): Fraction of loan term
\(P\): Repayment rate (dollars per time)
\(\phi_i\): Fraction of payment to interest during the \(i\)th payment
\(\phi_0\): Fraction of initial payment to interest
Total payment to interest
\(V\): Sum of all payments
\(\frac{V}{B_0}\): Overpay ratio
In the article on continuous solutions to the loan equation, we found that the entire loan could be specified by the loan product, \(rt_\text{term}\), or the initial fraction of payment to interest, \(\phi_0\). In practice, loan payments and interest are not continuous payment rates. If they were, a borrower's bank balance would continuously drop by a few fractions of a cent per second and their balance would change in real-time while they viewed it. Instead, banks charge interest and collect payments at discrete intervals, usually monthly.
In this article, we will show that the mathematical model used in practice to determine a loan balance is the finite difference method for first order ODEs. We will show that both the loan product and the number of payment periods during the loan term, \(n\), are necessary to compute all the relevant parameters of the loan. We will find formulas for the principal fraction \(\frac{B_i}{B_0}\), the payment to interest \(\phi_i\), and the overpay ratio \(\frac{V}{B_0}\) as functions of \(rt_\text{term}\) and \(n\), and for the repayment rate \(P\) as a function of \(r\), \(t_\text{term}\), and \(n\).
Set up the first order differential equation describing a loan using the variable definitions above just as we did in the continuous case here.
Rate of change of loan balance \(=\) Interest \(-\) Repayment rate:
\[\frac{dB}{dt} = rB - P\]
When a bank computes the balance on a loan, they use a recursive function based on the finite difference approximation of this first order ordinary differential equation. The equation can be discretized by approximating \(dB \approx B_{i+1} - B_i\) and \(dt \approx \Delta t\):
\[B_{i+1} = B_i + (rB_i - P)\Delta t = B_i(1+r\Delta t) - P\Delta t\]
This is the formula commonly programmed into spreadsheet programs to compute loan balances as a function of time.
Notice that \(r\Delta t\) is unitless. For example, with an annual rate \(r\) and \(\Delta t = 1\) month, the conversion appears as \(r\,[\text{yr}^{-1}] \times \frac{1}{12}\,[\text{yr}] = \frac{r}{12}\).
Let us expand this recursive function into an explicit one, simplifying with the definition \(R = 1+r\Delta t = 1+ \frac{rt_\text{term}}{n}\):
\[B_1 = RB_0 - P\Delta t\]
\[B_2 = RB_1 - P\Delta t = R^2B_0 - P\Delta t(R+1)\]
\[B_3 = R^3B_0 - P\Delta t(R^2+R+1)\]
Now we easily see the pattern for a closed-form expression of \(B_i\). Recognizing that \(\Delta t\) and \(P\) are independent of \(i\), factor them out of the sum and evaluate the geometric series:
\[B_i = R^iB_0 - P\Delta t\sum_{j=0}^{i-1}R^j = R^iB_0 - P\Delta t\,\frac{R^i-1}{R-1}\]
From this follows an expression for the payment as a function of the loan term, rate, and payment frequency: setting \(B_n = 0\), with the number of payment cycles \(n=t_\text{term}/\Delta t\),
\[P\Delta t = B_0\,\frac{R^n(R-1)}{R^n-1}.\]
From here we can derive an equation for \(\frac{B_i}{B_0}\) that is solely a function of the total number of payment periods \(n\) and the loan product \(rt_\text{term}\):
\[\frac{B_i}{B_0} = \frac{R^n-R^i}{R^n-1}.\]
The overpay ratio follows easily. \(V\) is the total amount repaid, \(V= Pt_\text{term}\), and the ratio between what is repaid and what is borrowed is
\[\frac{V}{B_0} = \frac{Pt_\text{term}}{B_0} = n\,\frac{R^n(R-1)}{R^n-1}.\]
This can be viewed as a function of \((r, \, t_\text{term}, \, \Delta t)\) or as a function of \((rt_\text{term}, \, n)\).
We can also derive an expression for the fraction of each payment put to interest as a function of \(i\), \(rt_\text{term}\), and \(n\).
When we plot \(\phi_i\) vs \(\tau\) with increasing \(n\), we eventually approximate the continuous solution (recall that \(\tau=i/n\)). The continuous solution becomes recognizable by \(n= \) 20 and becomes visually indistinguishable around \(n= \) 200. A future article will discuss the error between the continuous and discrete solutions when solving for the overpay ratio or the repayment rate. |
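The discrete formulas can be sanity-checked with a short Python sketch (mine, not the article's; the 30-year, 5% monthly loan is a hypothetical example). It computes the per-period payment from the closed form and confirms the spreadsheet recursion drives the balance to zero:

```python
def loan_summary(B0, r, t_term, n):
    """Discrete loan model: balance recursion B_{i+1} = B_i*(1 + r*dt) - payment.

    B0: principal, r: annual interest rate, t_term: loan term in years,
    n: total number of payments. Returns (per-period payment, overpay ratio).
    """
    R = 1 + r * t_term / n                      # growth factor per period
    payment = B0 * R**n * (R - 1) / (R**n - 1)  # per-period payment, from B_n = 0

    B = B0
    for _ in range(n):                          # the spreadsheet recursion
        B = B * R - payment
    assert abs(B) < 1e-6 * B0                   # balance is paid off after n payments

    overpay = n * payment / B0                  # V / B0
    return payment, overpay

payment, overpay = loan_summary(100_000, 0.05, 30, 360)
print(round(payment, 2))   # -> 536.82 (monthly payment)
print(round(overpay, 3))   # -> 1.933 (total repaid per dollar borrowed)
```

The $536.82 monthly payment for a $100,000, 30-year, 5% loan matches the standard amortization formula.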
I understand that $f(x)$ must be linear with a first derivative equal to a constant. I'm just not sure how I can use the mean value property of integrals to show something about $f''(x)$. The hint on this question is to use the fundamental theorem of calculus or Jensen's inequality.
Assuming $f$ is integrable and twice differentiable (otherwise your statement about average value doesn't make sense, nor your final statement*), $$\int_a^bf(x)\,\mathrm dx=(b-a)f\left(\cfrac{a+b}{2}\right)$$
Differentiate both sides w.r.t $\,b$, using the Leibniz integral rule (derived from fundamental theorem of calculus) for the LHS:
$$f(b)=\frac{b}{2}f'\left(\cfrac{a+b}{2}\right)+f\left(\cfrac{a+b}{2}\right)-\frac{a}{2}f'\left(\cfrac{a+b}{2}\right)$$
Now set $b=0$ and $a=2x$:
$$f(0)=f(x)-xf'(x)$$
Differentiate both sides w.r.t $x$:
$$0=f'(x)-f'(x)-xf''(x)$$
so $f''(x)=0$ for all $x\neq 0$.
Thus we have proven the function is linear everywhere except possibly at $0$. Since $f'(0)$ exists and $f'(x)$, $f'(-x)$ are constant for $x>0$, $f'(0)$ has to be equal to each of these, and thus $f''(0)=0$.
*I'm not sure if you're able to prove that $f$ has to be twice differentiable.
I will assume that $f$ is locally integrable for an obvious reason. Our aim is to prove that $f$ is linear, which is then enough to conclude that $f$ is twice-differentiable with $f'' \equiv 0$.
Let $x < y$ and $0 < \lambda < 1$ be arbitrary. Set
$$ c = \lambda x + (1-\lambda) y, \qquad a = 2x - c, \qquad b = 2y - c. $$
Then $a < x < c < y < b$ and
$$ \frac{a+b}{2} = (1-\lambda)x + \lambda y, \qquad \frac{a+c}{2} = x, \qquad \frac{c+b}{2} = y. $$
So it follows that
\begin{align*} f((1-\lambda)x+\lambda y) &= \frac{\int_{a}^{b} f(t) \, \mathrm{d}t}{b-a} \\ &= \frac{\int_{a}^{c} f(t) \, \mathrm{d}t + \int_{c}^{b} f(t) \, \mathrm{d}t}{b-a} \\ &= \frac{(c-a)f(x) + (b-c)f(y)}{b-a} \\ &= (1-\lambda) f(x) + \lambda f(y). \end{align*}
This proves that $f$ is linear. |
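As a quick numerical illustration (not part of the proof above), the midpoint property holds for an affine function but fails for $x^2$; a short Python check, using a midpoint Riemann sum for the average value:

```python
def average_value(f, a, b, steps=100_000):
    """Approximate (1/(b-a)) * integral of f over [a, b] by a midpoint Riemann sum."""
    h = (b - a) / steps
    total = sum(f(a + (k + 0.5) * h) for k in range(steps))
    return total * h / (b - a)

affine = lambda x: 3 * x + 2
quadratic = lambda x: x * x

a, b = 0.0, 2.0
mid = (a + b) / 2

# Affine: the average value equals the value at the midpoint.
print(abs(average_value(affine, a, b) - affine(mid)))       # ~ 0
# Quadratic: the average is 4/3 but f(midpoint) is 1 -- the property fails.
print(abs(average_value(quadratic, a, b) - quadratic(mid)))  # ~ 1/3
```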
Normally I read the Divergence Theorem written as (\oiint doesn't exist here):
\begin{align} \oint_{\partial \Omega} \vec{F} \cdot \hat{n} \; dS = \iiint_{\Omega} \nabla \cdot \vec{F} \; dV \end{align}
where $\partial \Omega$ is the boundary of the region $\Omega$, $dS$ is an infinitesimal surface element of $\partial \Omega$, and $dV$ an infinitesimal volume element of $\Omega$.
But reading a paper, Castillo and Grone, A matrix analysis approach to higher order approximations for divergence and gradients satisfying global conservation law, I see it states the Divergence Theorem as:
\begin{align} \int_{\partial \Omega} f \, \vec{v} \cdot \hat{n} \; dS = \int_{\Omega} \nabla \cdot \vec{v} \, f \; dV + \int_{\Omega} \vec{v} \cdot \nabla f \; dV \end{align}
Are those equivalent? If so, how can I correlate the RHS of both equations?
I really think the author is abusing notation by using a single integral instead of a triple one, and by not writing the closed-surface integral explicitly on the left-hand side; but even allowing for that I cannot correlate them. Maybe I'm missing some definition here.
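For what it's worth, the two statements are connected by the product rule $\nabla\cdot(f\vec v)=f\,\nabla\cdot\vec v+\vec v\cdot\nabla f$: apply the usual Divergence Theorem to the field $f\vec v$. A small Python sketch (the fields $f$ and $\vec v$ are arbitrary choices of mine) checks that identity pointwise with central differences:

```python
# Check div(f*v) = f*div(v) + v . grad(f) at a point, via central differences.

h = 1e-5

def f(x, y, z):
    return x * y + z ** 2

def v(x, y, z):
    return (x ** 2, y * z, x + z)

def partial(g, p, i):
    """Central-difference partial derivative of scalar g at point p along axis i."""
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (g(*hi) - g(*lo)) / (2 * h)

def div(field, p):
    """Divergence of a vector field at p, one partial per component."""
    return sum(partial(lambda *q, i=i: field(*q)[i], p, i) for i in range(3))

p = (0.7, -1.3, 0.4)

lhs = div(lambda x, y, z: tuple(f(x, y, z) * c for c in v(x, y, z)), p)
rhs = f(*p) * div(v, p) + sum(c * partial(f, p, i) for i, c in enumerate(v(*p)))

print(abs(lhs - rhs))  # ~ 0 (agreement up to finite-difference error)
```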
This question is motivated by the Lorenz curve used in economic analysis and also the Penrose diagram used in general relativity, used by physicists in order to visualise causal relationships in compactified Minkowski space time models.
It is also motivated deeply by Hamiltonian mechanics, symplectic geometry, and contact geometry (which can be viewed as the odd dimensional counterpart of symplectic geometry). Hamiltonian mechanics was a reformulation of Newtonian mechanics. In Hamiltonian mechanics one studies phase spaces of physical systems, symplectic flows and many other topics.
From a mathematical standpoint, this question is motivated from modern trends in differential geometry, topology, abstract algebra and mathematical physics.
Consider the families of planar curves, for real $x,y\in (0,1)$ and real $s \ge 1$:
$$\zeta:= \{ (x, y) \in \Bbb R^2 | x^s + y^s = 1 \}$$ $$\tau:= \{ (x, y) \in \Bbb R^2 | (1-x)^s + y^s = 1 \} $$ $$\psi:= \{ (x, y) \in \Bbb R^2 | x^s + (1-y)^s = 1 \} $$ $$\phi:= \{ (x, y) \in \Bbb R^2 | (1-x)^s + (1-y)^s = 1 \}. $$
Interpreting $\zeta,\tau,\psi,$ and $\phi$ as phase spaces allows the interpretation of them as infinite dimensional manifolds, specifically infinite dimensional symplectic manifolds:$(\zeta,\omega),(\tau,\omega),(\psi,\omega),(\phi,\omega).$
What follows is a natural Hamiltonian vector field, which defines a Hamiltonian flow on each of the symplectic manifolds.
The process of lifting these manifolds into $\Bbb R^3$ can be achieved with homotopic maps; see the answer by Paul Frost here: https://math.stackexchange.com/questions/2895816/existence-of-homotopic-map. The key thing to understand is that each of the four symplectic manifolds is a unique projection of the curves lifted via homotopy. In other words, one can project the lifted curves onto the planar curves bijectively. The image provides the intuition for what a single lift looks like, but in reality there are infinitely many of these lifts, each at a different height above the planar curves.
Question:
Consider the symplectic manifolds defined above. I'm attempting to count the fixed points for Hamiltonian symplectomorphisms on $T^2,$ and generally $T^{2n}.$ What's the best way to attack this problem?
This is a shape I built of some geodesics to try to get a better understanding of the geometry of the object and the phase space of the particle ensemble for a specific configuration in three dimensions:
The image relates to the equations because each white strand in the image corresponds to a strand of a geodesic on a $2$-manifold, namely $S^2.$ Each white geodesic also corresponds to a specific planar curve listed above for a specific value of $s$. In the picture $s=2.$ The singularities can be viewed as invariants or fixed points because they don't change spatial location for the purpose of this question. Varying $s$ gauges the way the shape looks. As $s$ approaches infinity the shape will look like a cube. As $s$ tends to $1,$ the shape will look like $2$ pairs of perpendicular lines situated in $3$-space.
Maybe it helps to say that the image is the shape of the intersections of $8$ identical copies of $S^2$, with finitely many geodesics shown. It is like a higher-dimensional Venn diagram, if you will.
Noether's theorem relates symmetries to conserved quantities. For a central potential $V \propto \frac{1}{r}$, the Laplace-Runge-Lenz vector is conserved. What is the symmetry associated with the conservation of this vector?
1)
Hamiltonian Problem. The Kepler problem has Hamiltonian
$$ H~=~T+V, \qquad T~:=~ \frac{p^2}{2m}, \qquad V~:=~- \frac{k}{q}, \tag{1} $$
where $m$ is the 2-body reduced mass. The Laplace–Runge–Lenz vector is (up to an irrelevant normalization)
$$ A^j ~:=~a^j + km\frac{q^j}{q}, \qquad a^j~:=~({\bf L} \times {\bf p})^j~=~{\bf q}\cdot{\bf p}~p^j- p^2~q^j,\qquad {\bf L}~:=~ {\bf q} \times {\bf p}.\tag{2}$$
2)
Action. The Hamiltonian Lagrangian is
$$ L_H~:=~ \dot{\bf q}\cdot{\bf p} - H,\tag{3} $$
and the action is
$$ S[{\bf q},{\bf p}]~=~ \int {\rm d}t~L_H .\tag{4}$$
The non-zero fundamental canonical Poisson brackets are
$$ \{ q^i , p^j\}~=~ \delta^{ij}. \tag{5}$$
3)
Inverse Noether's Theorem. Quite generally in the Hamiltonian formulation, given a constant of motion $Q$, then the infinitesimal variation
$$\delta~=~ -\varepsilon \{Q,\cdot\}\tag{6}$$
is a global off-shell symmetry of the action $S$ (modulo boundary terms). Here $\varepsilon$ is an infinitesimal global parameter, and $X_Q=\{Q,\cdot\}$ is a Hamiltonian vector field with Hamiltonian generator $Q$. The full Noether charge is $Q$, see e.g. my answer to this question. (The words
on-shell and off-shell refer to whether the equations of motion are satisfied or not. The minus is conventional.)
4)
Variation. Let us check that the three Laplace–Runge–Lenz components $A^j$ are Hamiltonian generators of three continuous global off-shell symmetries of the action $S$. In detail, the infinitesimal variations $\delta= \varepsilon_j \{A^j,\cdot\}$ read
$$ \delta q^i ~=~ \varepsilon_j \{A^j,q^i\} , \qquad \{A^j,q^i\} ~=~ 2 p^i q^j - q^i p^j - {\bf q}\cdot{\bf p}~\delta^{ij}, $$ $$ \delta p^i ~=~ \varepsilon_j \{A^j,p^i\} , \qquad \{A^j,p^i\}~ =~ p^i p^j - p^2~\delta^{ij} +km\left(\frac{\delta^{ij}}{q}- \frac{q^i q^j}{q^3}\right), $$ $$ \delta t ~=~0,\tag{7}$$
where $\varepsilon_j$ are three infinitesimal parameters.
5) Notice for later that
$$ {\bf q}\cdot\delta {\bf q}~=~\varepsilon_j({\bf q}\cdot{\bf p}~q^j - q^2~p^j), \tag{8} $$
$$ {\bf p}\cdot\delta {\bf p} ~=~\varepsilon_j km(\frac{p^j}{q}-\frac{{\bf q}\cdot{\bf p}~q^j}{q^3})~=~- \frac{km}{q^3}{\bf q}\cdot\delta {\bf q}, \tag{9} $$
$$ {\bf q}\cdot\delta {\bf p}~=~\varepsilon_j({\bf q}\cdot{\bf p}~p^j - p^2~q^j )~=~\varepsilon_j a^j, \tag{10} $$
$$ {\bf p}\cdot\delta {\bf q}~=~2\varepsilon_j( p^2~q^j - {\bf q}\cdot{\bf p}~p^j)~=~-2\varepsilon_j a^j~. \tag{11} $$
6) The Hamiltonian is invariant
$$ \delta H ~=~ \frac{1}{m}{\bf p}\cdot\delta {\bf p} + \frac{k}{q^3}{\bf q}\cdot\delta {\bf q}~=~0, \tag{12}$$
showing that the Laplace–Runge–Lenz vector $A^j$ is classically a constant of motion
$$\frac{dA^j}{dt} ~\approx~ \{ A^j, H\}+\frac{\partial A^j}{\partial t} ~=~ 0.\tag{13}$$
(We will use the $\approx$ sign to stress that an equation is an on-shell equation.)
7) The variation of the Hamiltonian Lagrangian $L_H$ is a total time derivative
$$ \delta L_H~=~ \delta (\dot{\bf q}\cdot{\bf p})~=~ \dot{\bf q}\cdot\delta {\bf p} - \dot{\bf p}\cdot\delta {\bf q} + \frac{d({\bf p}\cdot\delta {\bf q})}{dt} $$ $$ =~ \varepsilon_j\left( \dot{\bf q}\cdot{\bf p}~p^j - p^2~\dot{q}^j + km\left( \frac{\dot{q}^j}{q} - \frac{{\bf q} \cdot \dot{\bf q}~q^j}{q^3}\right)\right) $$ $$- \varepsilon_j\left(2 \dot{\bf p}\cdot{\bf p}~q^j - \dot{\bf p}\cdot{\bf q}~p^j- {\bf p}\cdot{\bf q}~\dot{p}^j \right) - 2\varepsilon_j\frac{da^j}{dt}$$ $$ =~\varepsilon_j\frac{df^j}{dt}, \qquad f^j ~:=~ A^j-2a^j, \tag{14}$$
and hence the action $S$ is invariant off-shell up to boundary terms.
8)
Noether charge. The bare Noether charge $Q_{(0)}^j$ is
$$Q_{(0)}^j~:=~ \frac{\partial L_H}{\partial \dot{q}^i} \{A^j,q^i\}+\frac{\partial L_H}{\partial \dot{p}^i} \{A^j,p^i\} ~=~ p^i\{A^j,q^i\}~=~ -2a^j. \tag{15}$$
The full Noether charge $Q^j$ (which takes the total time-derivative into account) becomes (minus) the Laplace–Runge–Lenz vector
$$ Q^j~:=~Q_{(0)}^j-f^j~=~ -2a^j-(A^j-2a^j)~=~ -A^j.\tag{16}$$
$Q^j$ is conserved on-shell
$$\frac{dQ^j}{dt} ~\approx~ 0,\tag{17}$$
due to Noether's first Theorem. Here $j$ is an index that labels the three symmetries.
9)
Lagrangian Problem. The Kepler problem has Lagrangian
$$ L~=~T-V, \qquad T~:=~ \frac{m}{2}\dot{q}^2, \qquad V~:=~- \frac{k}{q}. \tag{18} $$
The Lagrangian momentum is
$$ {\bf p}~:=~\frac{\partial L}{\partial \dot{\bf q}}~=~m\dot{\bf q} \tag{19} . $$
Let us project the infinitesimal symmetry transformation (7) to the Lagrangian configuration space
$$ \delta q^i ~=~ \varepsilon_j m \left( 2 \dot{q}^i q^j - q^i \dot{q}^j - {\bf q}\cdot\dot{\bf q}~\delta^{ij}\right), \qquad\delta t ~=~0.\tag{20}$$
It would have been difficult to guess the infinitesimal symmetry transformation (20) without using the corresponding Hamiltonian formulation (7). But once we know it we can proceed within the Lagrangian formalism. The variation of the Lagrangian is a total time derivative
$$ \delta L~=~\varepsilon_j\frac{df^j}{dt}, \qquad f^j~:=~ m\left(m\dot{q}^2q^j- m{\bf q}\cdot\dot{\bf q}~\dot{q}^j +k \frac{q^j}{q}\right)~=~A^j-2 a^j . \tag{21}$$
The bare Noether charge $Q_{(0)}^j$ is again
$$Q_{(0)}^j~:=~2m^2\left(\dot{q}^2q^j- {\bf q}\cdot\dot{\bf q}~\dot{q}^j\right) ~=~-2a^j . \tag{22}$$
The full Noether charge $Q^j$ becomes (minus) the Laplace–Runge–Lenz vector
$$ Q^j~:=~Q_{(0)}^j-f^j~=~ -2a^j-(A^j-2a^j)~=~ -A^j,\tag{23}$$
similar to the Hamiltonian formulation (16).
While Kepler's second law is simply a statement of the conservation of angular momentum (and as such it holds for all systems described by central forces), the first and third laws are special and are linked with the unique form of the Newtonian potential $-k/r$. In particular, Bertrand's theorem assures that
only the Newtonian potential and the harmonic potential $kr^2$ give rise to closed orbits (no precession). It is natural to think that this must be due to some kind of symmetry of the problem. In fact, the particular symmetry of the Newtonian potential is described exactly by the conservation of the RL vector (it can be shown that the RL vector is conserved iff the potential is central and Newtonian). This, in turn, is due to a more general symmetry: if conservation of angular momentum is linked to the group of special orthogonal transformations in 3-dimensional space, $SO(3)$, conservation of the RL vector must be linked to a 6-dimensional group of symmetries, since in this case there are apparently six conserved quantities (3 components of $L$ and 3 components of $\mathcal A$). In the case of bound orbits, this group is $SO(4)$, the group of rotations in 4-dimensional space.
Just to fix the notation, the RL vector is:
\begin{equation} \mathcal{A}=\textbf{p}\times\textbf{L}-\frac{km}{r}\textbf{x} \end{equation}
Calculate its total derivative:
\begin{equation}\frac{d\mathcal{A}}{dt}=-\nabla U\times(\textbf{x}\times\textbf{p})+\textbf{p}\times\frac{d\textbf{L}}{dt}-\frac{k\textbf{p}}{r}+\frac{k(\textbf{p}\cdot \textbf{x})}{r^3}\textbf{x} \end{equation}
Make use of Levi-Civita symbol to develop the cross terms:
\begin{equation}\epsilon_{sjk}\epsilon_{sil}=\delta_{ji}\delta_{kl}-\delta_{jl}\delta_{ki} \end{equation}
Finally:
\begin{equation} \frac{d\mathcal{A}}{dt}=\left(\textbf{x}\cdot\nabla U-\frac{k}{r}\right)\textbf{p}+\left[(\textbf{p}\cdot\textbf{x})\frac{k}{r^3}-2\textbf{p}\cdot\nabla U\right]\textbf{x}+(\textbf{p}\cdot\textbf{x})\nabla U \end{equation}
Now, if the potential $U=U(r)$ is central:
\begin{equation} (\nabla U)_j=\frac{\partial U}{\partial x_j}=\frac{dU}{dr}\frac{\partial r}{\partial x_j}=\frac{dU}{dr}\frac{x_j}{r} \end{equation}
so
\begin{equation} \nabla U=\frac{dU}{dr}\frac{\textbf{x}}{r}\end{equation}
Substituting back:
\begin{equation}\frac{d\mathcal A}{dt}=\frac{1}{r}\left(\frac{dU}{dr}-\frac{k}{r^2}\right)[r^2\textbf{p}-(\textbf{x}\cdot\textbf{p})\textbf{x}]\end{equation}
Now, you see that if $U$ has
exactly the newtonian form then the first parenthesis is zero and so the RL vector is conserved.
Maybe there's some slicker way to see it (Poisson brackets?), but this works anyway.
The symmetry is an example of an open symmetry, i.e. a symmetry group which varies from group action orbit to orbit. For bound trajectories, it's SO(4). For parabolic ones, it's SE(3). For hyperbolic ones, it's SO(3,1). Such cases are better handled by groupoids.
Conservation of the Runge-Lenz vector does not correspond to a symmetry of the Lagrangian itself. It arises from an invariance of the integral of the Lagrangian with respect to time, the classical action integral. Some time ago I wrote up a derivation of the conserved vector for any spherically symmetric potential:
The derivation is at the level of Goldstein and is meant to fill in the gap left by its omission from graduate-level classical mechanics texts.
(This post may be old, but we can add some precision.) The conservation of the RL vector is not trifling: it goes with the fact that you consider a central force, derived here from a Newtonian potential $\frac{1}{r}$, which has the property of being invariant under rotations (as does $\frac{1}{r^n}$, but the RL vector is conserved only for $n=1$, as shown by @quark1245).
Therefore we have the $SO(3)$ symmetry, which has not 6 conserved quantities as said before but 3: the 3 generators of the symmetry $J_i$, $i=1..3$, such that the symmetry transformation under an infinitesimal change $x \rightarrow x + \epsilon$ is given in the canonical formalism by $$ \delta_i X = \{X, J_i(\epsilon) \} $$ and the algebra is $$ \{ J_i, J_j \} = \epsilon_{ij}^k J_k. $$ They are conserved because, at least for the Kepler problem, the system is invariant w.r.t. a time translation, so the Hamiltonian is conserved, and the calculations show that $$ \{H,J_i\}= 0. $$
Before their redefinition as shown on Wikipedia to see that the previous algebra is fulfilled, the generators of the rotations are : one is the angular momentum $L$ which shows that the movement is planar, therefore invariant under rotation around $L$, one is the RL vector which is in the plan, therefore perpendicular to $L$ and parallel to the major axis of the ellipse, and the third one has a name I don't remember, but is parallel to the minor axis.
We can see that there are only 3 degrees of freedom if we place ourselves in the frame such that $\vec{J}_1 = \vec{L} = (0,0,L_z)$; then the planar generators are $A = (A_x,0,0)$ and $B = (0,B_y,0)$.
It has been shown that they can be constructed from the Killing-Yano tensors (which mean symmetry), and it works also at dimensions greater than 3. A nice review about the LRL vector derivation can be found in HeckmanVanHaalten
Looking at https://arxiv.org/pdf/1207.5001.pdf one gets a very nice solution. If one is not very keen on the mathematics, their basic idea is to use the infinitesimal transformation $$\delta x^i=\epsilon L^{ik}$$ where $L^{ik}=x^i\dot{x}^k-x^k\dot{x}^i$. Since angular momentum is conserved, the kinetic energy won't change. On the other hand, the potential changes up to order $\epsilon^2$ like $$\frac{k}{r+\delta r}=\frac{k}{((x^i+\delta x^i)(x_i+\delta x_i))^{1/2}}=\frac{k}{r}\left(1-\frac{x_i\delta x^i}{r^2}\right)=\frac{k}{r}-\epsilon\frac{kx_iL^{ik}}{r^3}=\frac{k}{r}-\epsilon\frac{d}{dt}\left(\frac{kx^k}{r}\right).$$
Therefore, the change in the action is $$\epsilon\left[m\dot{x}_iL^{ik}\right]=[m\dot{x}_i\delta x^i]_{t_1}^{t_2}=\delta S=\epsilon\left[\frac{kx^k}{r}\right]_{t_1}^{t_2}.$$ This gives the conservation of the vector $$m\dot{x}_iL^{ik}-\frac{kx^k}{r},$$ which can be easily shown to be the Runge-Lenz vector. |
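As a complementary numerical check (my own sketch, with $m = k = 1$ and an arbitrary bound orbit; it is not taken from the answers above), one can integrate the Kepler equations of motion with RK4 and verify that the Laplace-Runge-Lenz vector stays constant along the orbit:

```python
import math

def accel(x, y):
    """Newtonian acceleration for m = k = 1: a = -x / r^3."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def rk4_step(state, dt):
    """One classical Runge-Kutta step for the state (x, y, px, py)."""
    def deriv(s):
        x, y, px, py = s
        ax, ay = accel(x, y)
        return (px, py, ax, ay)
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def rl_vector(state):
    """Planar components of A = p x L - x_hat, with m = k = 1."""
    x, y, px, py = state
    Lz = x * py - y * px
    r = math.hypot(x, y)
    return (py * Lz - x / r, -px * Lz - y / r)

state = (1.0, 0.0, 0.0, 1.2)   # bound orbit: E = 0.72 - 1 < 0
A0 = rl_vector(state)          # (0.44, 0.0): A points along the major axis
for _ in range(10_000):        # integrate past one full orbital period
    state = rk4_step(state, 0.002)
A1 = rl_vector(state)
print(max(abs(a - b) for a, b in zip(A0, A1)))  # small (integration error only)
```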
I'm currently studying how to deduce Feynman rules for general theories, and I've managed to deduce them for $\phi^3$ and $\phi^4$ theories. Up to this point I've considered the same field for all cases, and deduced the Feynman rules by expanding the interacting term in the correlator and using Wick's theorem to make the contractions.
My question is, if we consider a interacting theory for two different fields, how could we deduce the Feynman rules from Wick's theorem. Consider, for example, the decay of a particle given by the interacting lagrangian term
$$L_\mathrm{int}=-\lambda\Phi\phi^2$$
We see directly it only has vertices with 3 lines. If I expand the exponential that usually comes from this:
$$\exp\left(-i\lambda\int d^4y\ \Phi\phi^2\right)=1+(-i\lambda)\int d^4y\,\Phi_y\phi_y^2+\frac{(-i\lambda)^2}{2!}\int d^4y_1\,d^4y_2\ \Phi_{y_1}\phi_{y_1}^2\Phi_{y_2}\phi_{y_2}^2+\dots$$
However, I'm not sure which terms I should write for the correlators. For simplicity, let's consider the 2 particle correlator:
$$\langle \Omega |T[\phi_1\phi_2]|\Omega\rangle=\lim_{T\rightarrow \infty}\frac{\langle 0 |T[\phi_1\phi_2 \exp(-i\lambda\int d^4y\ \Phi\phi^2)]|0\rangle}{\langle 0 |T[\exp(-i\lambda\int d^4y\ \Phi\phi^2)]|0\rangle}$$
Let's focus on the numerator, which could either be:
$$N(x_1,x_2)=\langle 0 | \phi_{x1}\phi_{x2}\exp(-i\lambda\int d^4y\ \Phi\phi^2) | 0 \rangle$$
Or could it be:
$$N(x_1,x_2)=\langle 0 | \phi_{x1}\Phi_{x2}\exp(-i\lambda\int d^4y\ \Phi\phi^2) | 0 \rangle$$
Or even it could be:
$$N(x_1,x_2)=\langle 0 | \Phi_{x1}\Phi_{x2}\exp(-i\lambda\int d^4y\ \Phi\phi^2) | 0 \rangle$$
Which one should I consider for the expansion? |
I understand the attractiveness and usefulness of infinite-series expansions such as Taylor expansions, but I wonder if they sometimes hide important aspects of the described system.
For example, take $F(x) = e^x$, whose Taylor expansion is $$F(x) = \sum_n \frac{x^n}{ n!}.$$ Sure, we can approximate $F(x)$ using the first few terms of the Taylor expansion for low values of $x$, and that is often useful. However, $F(x)$ is actually an exponential function, so has properties that are not easily evident if F(x) is approximated by truncating a Taylor expansion-- such as the beautiful fact that $\frac{d}{dx}F(x) = F(x)$.
Similarly, a truncated Fourier series representation of the delta function can obscure the fact that a delta function has precisely zero width.
Though chaos theory isn't in my skill set, I imagine that a truncated infinite series representation of a chaotic system might fail to reveal the fact that infinitesimally narrow domains exist in which the system is actually stationary.
It seems that quantum field theory (QFT) makes extensive use of the Euler-Heisenberg effective Lagrangian, which uses only terms up to the second order in the field invariants. One of the dreams of some field theorists has been to show that massive particles are stationary structures that "condense" out of vacuum fields due to nonlinearities; but it seems that the existence of such structures might be completely hidden if the field Lagrangian is not complete "all the way to the end" of an infinite series representation.
I would like to know if there exists an exact Lagrangian for QFT. Presumably it would not be an infinite power series expansion; instead it would be a relatively compact expression like $$L = e^{H(f(E^2 - B^2),\ g(E \cdot B))}$$ where $H$ is a simple function and $f$ and $g$ are low-order functions, or "natural" functions like $\cos$ or $\tanh$, and it would contain only a small handful of constants whose values need to be adjusted to fit constraints set by experimental results. A Lagrangian like this could still be represented as an infinite series, but could also be represented in a very compact non-series form; and in either case it would have only a finite number of adjustable constants.
Bernoulli, Volume 19, Number 5A (2013), 2098-2119.
Marked empirical processes for non-stationary time series
Abstract
Consider a first-order autoregressive process $X_{i}=\beta X_{i-1}+\varepsilon_{i}$, where $\varepsilon_{i}=G(\eta_{i},\eta_{i-1},\ldots)$ and $\eta_{i}$, $i\in\mathbb{Z}$ are i.i.d. random variables. Motivated by two important issues for the inference of this model, namely, the quantile inference for $H_{0}\colon\ \beta=1$, and the goodness-of-fit for the unit root model, the notion of the marked empirical process $\alpha_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}g(X_{i}/a_{n})I(\varepsilon_{i}\leq x)$, $x\in\mathbb{R}$ is investigated in this paper. Herein, $g(\cdot)$ is a continuous function on $\mathbb{R}$ and $\{a_{n}\}$ is a sequence of self-normalizing constants. As the innovation $\{\varepsilon_{i}\}$ is usually not observable, the residual marked empirical process $\hat{\alpha}_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}g(X_{i}/a_{n})I(\hat{\varepsilon}_{i}\leq x)$, $x\in\mathbb{R}$, is considered instead, where $\hat{\varepsilon}_{i}=X_{i}-\hat{\beta}X_{i-1}$ and $\hat{\beta}$ is a consistent estimate of $\beta$. In particular, via the martingale decomposition of stationary process and the stochastic integral result of Jakubowski (
Ann. Probab. 24 (1996) 2141–2153), the limit distributions of $\alpha_{n}(x)$ and $\hat{\alpha}_{n}(x)$ are established when $\{\varepsilon_{i}\}$ is a short-memory process. Furthermore, by virtue of the results of Wu (Bernoulli 95 (2003) 809–831) and Ho and Hsing (Ann. Statist. 24 (1996) 992–1024) on empirical processes and the integral results of Mikosch and Norvaiša (Bernoulli 6 (2000) 401–434) and Young (Acta Math. 67 (1936) 251–282), the limit distributions of $\alpha_{n}(x)$ and $\hat{\alpha}_{n}(x)$ are also derived when $\{\varepsilon_{i}\}$ is a long-memory process.

Article information
Source: Bernoulli, Volume 19, Number 5A (2013), 2098-2119.
First available in Project Euclid: 5 November 2013
Permanent link to this document: https://projecteuclid.org/euclid.bj/1383661215
Digital Object Identifier: doi:10.3150/12-BEJ444
Mathematical Reviews number (MathSciNet): MR3129045
Zentralblatt MATH identifier: 06254555
Citation
Chan, Ngai Hang; Zhang, Rongmao. Marked empirical processes for non-stationary time series. Bernoulli 19 (2013), no. 5A, 2098--2119. doi:10.3150/12-BEJ444. https://projecteuclid.org/euclid.bj/1383661215 |
Wave packet in 2D potential
13 Oct 2018
Having some fun with Julia and simulating a Wave packet in 2D.
Main Idea
We will evolve the wavepacket in 2D by taking the tensor products between two 1D (x,y) spaces.
Wavepacket
The wavepacket is expressed as a Gaussian:
\[\Psi(x) = \frac{\sqrt{\Delta x}}{\pi^{1/4}\sqrt{\sigma}} e^{i p_0 (x-\frac{x_0}{2}) - \frac{(x-x_0)^2}{2 \sigma^2}}\]
with the basis spanning the 1D space. The total state is represented by \(\Psi = \Psi_x \otimes \Psi_y\).
Hamiltonian
The kinetic energy terms are \(p_x^2/2\) and \(p_y^2/2\). The two-dimensional potential is a boundary with two slits.
function potential(x,y)
    if x > 20 && x < 25 && abs(y) > 1 && abs(y) < 5
        return 0
    elseif x > 20 && x < 25
        return 150
    elseif x > 48
        return 150
    else
        return 0
    end
end
I also use a Gaussian potential:
potential(x,y) = exp(-(x^2 + y^2)/15.0)
Time Evolution
At first, I had some trouble installing DifferentialEquations.jl due to compilation errors for Arpack.jl. So I decided to just solve it manually, by separating the real and imaginary components of the state and then applying the discrete Laplace operator for finite differences, followed by adding the potential part. This is what I got:
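A minimal version of that manual scheme, sketched here in Python rather than Julia (the grid parameters and the alternating real/imaginary update are my own choices, not the post's exact code):

```python
import math

# 1D free wave packet, psi split into real part R and imaginary part I.
# Semi-implicit update: advance R with H*I, then I with H*(new R).
n, dx, dt = 400, 0.1, 0.002
xs = [(j - n // 2) * dx for j in range(n)]
x0, p0, sigma = -3.0, 2.0, 1.0

R = [math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * math.cos(p0 * x) for x in xs]
I = [math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * math.sin(p0 * x) for x in xs]
norm = math.sqrt(sum(r * r + i * i for r, i in zip(R, I)) * dx)
R = [r / norm for r in R]
I = [i / norm for i in I]

def apply_h(psi):
    """H psi = -1/2 d^2/dx^2 (free particle, hbar = m = 1, fixed edges)."""
    out = [0.0] * n
    for j in range(1, n - 1):
        out[j] = -0.5 * (psi[j - 1] - 2 * psi[j] + psi[j + 1]) / dx ** 2
    return out

for _ in range(500):
    R = [r + dt * h for r, h in zip(R, apply_h(I))]   # dR/dt = +H I
    I = [i - dt * h for i, h in zip(I, apply_h(R))]   # dI/dt = -H R

prob = [r * r + i * i for r, i in zip(R, I)]
total = sum(prob) * dx
mean_x = sum(x * p for x, p in zip(xs, prob)) * dx / total
print(total)   # stays ~ 1: probability is approximately conserved
print(mean_x)  # ~ -1: the packet drifted from x0 = -3 with speed ~ p0
```

The same idea extends to 2D by applying the discrete Laplacian along both axes and adding the potential term to H.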
However, before I began working on visualization, Arpack.jl mysteriously compiled (probably because I switched to the official binary of Julia, rather than compiling my own). So I started exploring DifferentialEquations.jl. I then discovered QuantumOptics.jl, which makes it pretty easy to simulate various quantum systems, so I decided I should use it. It comes with a nice timeevolution.schroedinger function which will evolve (integrate) the system.
The double slit case starts with:
x0 = -5
y0 = 0
p0_x = 3.0
p0_y = 0.0
σ = 2.0
The Gaussian case starts with:
x0 = -5
y0 = 0
p0_x = 1.5
p0_y = -.5
σ = 2.0
Results
Julia has very nice support for plotting and animations. I abuse this here. Moreover, due to the Julia pre-compilation step, as long as you do not abuse the global space too much, the processing is rather quick.
Double Slit potential with magnitude mapped to color
Gaussian Potential
Represent phase as hue and magnitude as value (HSV)
Decided this is not enough to show the true “wave” nature as the visualization neglects the actual phase component… So time for more color!
Double Slit potential
Gaussian Potential
HSV and Showing Potential
Guess this would be a good time to actually visualize the potential, and get our final nice visualization.
Double Slit potential
Gaussian potential |
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
  switched := false
  FOR j := 0 TO arraylength(list)-(i+1) STEP 1
    IF list[j] > list[j + 1] THEN
      switch(list,j,j+1)
      switched := true
    ENDIF
  NEXT
  IF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two or more stacks or tapes, have been shown to be equi...
Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language.Should we only allow pseudo-code here?...
Some tricks I've seen:
Tricks with notable products
$(a + b)^2 = a^2 + 2ab + b^2$
This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$:$ 197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809 $
The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10:$ 40000 - 1200 = 100(400-12) = 100(398-10) = 100(388) = 38800 $The hardest thing here is to keep track of the amount of zeroes, this takes some practice!
Also note that if we're computing $(a+b)^2$, where $a$ is a multiple of $10^k$ and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the last $k$ digits of $b^2$. We can use this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ are single digits) is the last digit of $b^2$. So we can write that down (or make a mental note that we have the final digit) and worry about the more significant digits.
Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both are odd, you might want to use:
$(a+b)(a-b) = a^2 - b^2$. Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = 625 - 4 = 621$, where $25^2 = (20 + 5)^2 = 400 + 200 + 25 = 625$ by the first trick.
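Both identities are easy to sanity-check mechanically; the helper names below are mine:

```python
def square_via_binomial(a, b):
    # (a + b)^2 = a^2 + 2ab + b^2
    return a * a + 2 * a * b + b * b

def product_via_difference(mid, d):
    # (mid + d)(mid - d) = mid^2 - d^2
    return mid * mid - d * d

# 46^2 = (40 + 6)^2 and 197^2 = (200 - 3)^2
print(square_via_binomial(40, 6), square_via_binomial(200, -3))
# 23 * 27 = (25 - 2)(25 + 2)
print(product_via_difference(25, 2))
```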
Divisibility checks
Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic.
Vedic math
A guy in my class gave a presentation on Vedic math. I don't really remember everything, and there are probably more cool things in the book, but I do remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation.
The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have remember too many numbers at the same time.
Let's illustrate the method by multiplying $456$ by $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up.
We start by multiplying the least significant digits:
$6 \cdot 8 = 48$: the least significant digit is $8$, remember the $4(0)$ for the next round (of course, I don't mean zero times four here but four, or forty, whatever you prefer: be consistent though; if you include the zero here to make forty, you've got to do it everywhere).
$ 8 \cdot 5(0) = 40(0) $
$ 2(0) \cdot 6 = 12(0) $
$ 4(0) + 40(0) + 12(0) = 56(0) $: our next digit (to the left of the $8$) is $6$; remember the $5(00)$
$ 8 \cdot 4(00) = 32(00) $
$ 2(0) \cdot 5(0) = 10(00) $
$ 1(00) \cdot 6 = 6(00) $
$ 5(00) + 32(00) + 10(00) + 6(00) = 53(00) $: our next digit is a $3$; remember the $5(000)$
Pfff... starting with 2-digit numbers is a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster once you have practiced, since you don't have to write it all down.
$ 2(0) \cdot 4(00) = 8(000) $
$ 1(00) \cdot 5(0) = 5(000) $
$ 5(000) + 8(000) + 5(000) = 18(000) $: next digit is an $8$; remember the $1(0000)$
$ 1(00) \cdot 4(00) = 4(0000) $
$ 1(0000) + 4(0000) = 5(0000) $: the most significant digit is a $5$.
So we have $58368$.
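The column-by-column procedure walked through above can be written down directly; this sketch (names mine) reproduces the "compute a column, keep the digit, carry the rest" structure:

```python
def column_multiply(x, y):
    """Digit-by-digit multiplication, least significant column first,
    carrying the leftover of each column to the next (as in the walkthrough)."""
    xd = [int(c) for c in str(x)][::-1]  # digits, least significant first
    yd = [int(c) for c in str(y)][::-1]
    digits, carry = [], 0
    for col in range(len(xd) + len(yd) - 1):
        # sum all digit products that land in this column, plus the carry
        total = carry + sum(xd[i] * yd[col - i]
                            for i in range(len(xd))
                            if 0 <= col - i < len(yd))
        digits.append(total % 10)  # this column's digit
        carry = total // 10        # the part we "remember"
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])))

print(column_multiply(456, 128))  # the example above
```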
Quadratic equations
There are multiple ways to solve a quadratic equation in your head. The easiest are quadratics with integer roots. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to find non-integer solutions this way, but it is usually too hard to actually come up with them mentally.
Another way is just to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore, sigh) every rational solution of a monic polynomial $x^n + \ldots + c = 0$ with integer coefficients must be an integer divisor of $c$. More generally, for integer coefficients a rational root $\frac{a}{b}$ (in lowest terms) must have $a$ dividing the constant term and $b$ dividing the leading coefficient.
If this all fails, we can still put the abc-formula (the quadratic formula) in a much easier form:
$ ux^2 + vx + w = 0 $
$ x^2 + \frac{v}{u}x + \frac{w}{u} = 0 $ $ x^2 - ax - b = 0 \quad (a = -\tfrac{v}{u},\; b = -\tfrac{w}{u}) $
$ x^2 = ax + b $
(This is the form that I found easiest to use!)
$ (x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b $
$ x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b} $
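As a sanity check, this last form can be coded directly; the function name is mine:

```python
import math

def solve_monic(a, b):
    """Solve x^2 = a*x + b via x = a/2 +/- sqrt((a/2)^2 + b)."""
    half = a / 2.0
    disc = math.sqrt(half * half + b)  # assumes real roots exist
    return half - disc, half + disc

# x^2 - 5x + 6 = 0  rewritten as  x^2 = 5x - 6
print(solve_monic(5, -6))
```

For $x^2 = 5x - 6$ this returns the roots $2$ and $3$.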
I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them.
Tricks that aren't really usable but still pretty cool
See this excerpt from Feynman's "Surely you're joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile. |
This benchmark requires that the global optimization algorithm is run starting from the 100 randomly generated (minimized) Lennard-Jones structures. Runs that are greater than 0.001 energy units from the known global minimum are considered failures. If a run was successful, the number of force calls needed to reach the global minimum is recorded.
The starting structures can be downloaded here.
Entries for this benchmark must record the average number of force calls as force_calls in benchmark.dat, the maximum number of force calls as force_calls_max, and the minimum number of force calls as force_calls_min.
Entry               Code   <N>        min N      max N
gmin-csm            GMIN   6.421e+03  7.700e+01  2.698e+04
gmin-symmetrise     GMIN   2.271e+04  7.200e+02  1.212e+05
gmin-nosym          GMIN   2.660e+05  6.457e+03  1.170e+06
eon-basinhopping 1  Eon    5.080e+05  4.579e+03  1.796e+06
pele-basinhopping   pele   5.216e+05  9.447e+03  2.534e+06
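The exact file format of benchmark.dat is not specified here, so the following sketch assumes simple key/value lines; only the three key names come from the text:

```python
def write_benchmark_dat(force_calls, path="benchmark.dat"):
    """Write the three required statistics for a list of per-run
    force-call counts. The key/value layout is an assumption."""
    stats = {
        "force_calls": sum(force_calls) / len(force_calls),  # average
        "force_calls_max": max(force_calls),
        "force_calls_min": min(force_calls),
    }
    with open(path, "w") as fh:
        for key, value in stats.items():
            fh.write(f"{key}: {value}\n")
    return stats
```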
This benchmark is a global optimization benchmark for a Lennard-Jones cluster with 75 atoms. The same criteria apply as in the LJ38 benchmark above.
The starting structures can be downloaded here.
Entry   Code   <N>        min N      max N
gmin 1  GMIN   6.069e+04  1.100e+03  3.101e+05
This benchmark tests the performance of global optimization algorithms on a binary Lennard-Jones $A_{42}B_{58}$ system with a size ratio of 1.3. The form of the potential is
$$V_{ij} = 4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha\beta}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{\alpha\beta}}{r_{ij}}\right)^{6}\right]$$
where $\alpha$ and $\beta$ are the atom types of atoms $i$ and $j$, respectively. Here $\epsilon_{AA}$=$\epsilon_{AB}$=$\epsilon_{BB}$=1, $\sigma_{AA}$=1, $\sigma_{BB}$=1.3, and $\sigma_{AB}$=($\sigma_{AA}$+$\sigma_{BB}$)/2. Previous studies have reported a putative global minimum energy of -604.796307.
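A minimal sketch of the pair energy just defined, assuming the standard Lennard-Jones functional form with the mixing rules stated above (names are mine):

```python
def lj_pair(r, eps, sigma):
    """Lennard-Jones pair energy: 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# (eps, sigma) per type pair, as given in the text
PARAMS = {
    ("A", "A"): (1.0, 1.0),
    ("B", "B"): (1.0, 1.3),
    ("A", "B"): (1.0, (1.0 + 1.3) / 2.0),  # Lorentz mixing rule
}
```

The minimum of each pair interaction sits at $r = 2^{1/6}\sigma_{\alpha\beta}$ with depth $-\epsilon_{\alpha\beta}$, which is a convenient correctness check.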
This benchmark requires that the global optimization algorithm is run starting from the 100 randomly generated (minimized) Lennard-Jones structures. The starting structures can be downloaded here. The global optimization algorithm will be run for not more than two million energy (force) evaluations for each starting structure.
Entries for this benchmark must record the average lowest energy found at the end of each run as average_energy in benchmark.dat, as well as the largest and smallest energies (min_energy and max_energy) found at the end of the 100 runs.
Entry               Code   <E>         min E       max E
gmin                GMIN   -5.895e+02  -5.990e+02  -5.786e+02
eon-basinhopping 1  Eon    -5.846e+02  -5.961e+02  -5.741e+02
This benchmark requires that the global optimization algorithm is run starting from the supplied randomly generated (minimized) structures. Runs that are greater than 0.001 energy units from the known global minimum are considered failures. If a run was successful, the number of force calls needed to reach the global minimum is recorded.
Entries for this benchmark must record the average number of force calls as force_calls in benchmark.dat, the maximum number of force calls as force_calls_max, and the minimum number of force calls as force_calls_min.
Entry       Code   <N>        min N      max N
gmin-tbp 1  GMIN   1.610e+04  1.210e+02  1.451e+05
gmin 1      GMIN   1.624e+04  1.210e+02  1.484e+05
Entry       Code   <N>        min N      max N
gmin-tbp 1  GMIN   6.254e+05  2.345e+03  6.177e+06
gmin 1      GMIN   6.706e+05  4.549e+03  5.026e+06
Entry       Code   <N>        min N      max N
gmin-tbp 1  GMIN   2.557e+07  5.973e+05  1.434e+08
gmin 1      GMIN   2.860e+07  4.412e+05  1.419e+08
Let $A$ denote a commutative ring. Then if $A$ is a field, we may deduce that every $A$-module is free. Does the converse hold? i.e. If every $A$-module is free, can we deduce that $A$ is a field?
I would think so.
Let $0\neq x\in A$ be a non-invertible element. Then the ideal $Ax$ is proper. Now consider the quotient $A/xA$ as an $A$-module. Since $x\cdot\bar1=\bar 0$ with $\bar1\neq\bar0$, it is a nonzero torsion module, thus not free.
If $I$ is a proper ideal of $A$ then $A/I$ is free by assumption, but any element in a basis will be killed by anything in $I$, so we must have $I = (0)$, and thus $A$ must be a field.
Tobias's and Andrea's answers are pretty much optimal for commutative rings. I'd just like to share a strategy that works for noncommutative rings as well.
For any ring (with identity) $R$, all right $R$ modules are free iff $R$ is a division ring.
Proof: Let $S$ be a simple right $R$ module. Then $S$ is free. Since it's simple, it cannot have more than one copy of $R$ in its decomposition into a sum of copies of $R$. Thus $S\cong R$ as right $R$ modules, and this shows $R$ is a simple right $R$ module. That immediately implies $R$ is a division ring.
Assume that $R$ is a commutative ring with $1$ whose every module is free. Let $I\subseteq R$ be an ideal. Note that $R/I$ is an $R$-module and therefore free. If $I\neq\{0\}$, then every element of $R/I$ is a torsion element (since we can act by any non-zero element of I to obtain $0$ in $R/I$). Since $R/I$ is also free, $R/I=\{0\}$. Therefore, $I=R$. So, the only ideals are $\{0\}$ and $R$, meaning $R$ is a field.
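A concrete instance of this argument (my example, not from the answer above): take $A = \mathbb{Z}$ and the proper ideal $I = 2\mathbb{Z}$.

```latex
% In the quotient module we have
\[
  2 \cdot \bar{1} = \bar{0} \quad \text{in } \mathbb{Z}/2\mathbb{Z},
\]
% so every element of a putative basis of Z/2Z would be killed by the
% nonzero scalar 2; hence Z/2Z is a nonzero torsion Z-module and is not
% free, matching the fact that Z is not a field.
```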
Assuming $R$ is a domain, then here is another approach:
Firstly, $R$ must be a PID, because an ideal that is not principal is not a free $R$-module.
Now consider $K$, the field of fractions of $R$, as an $R$-module, and note that $1_R \in K$ is a primitive element (i.e. if $1 = ay$, then $a \in R^{\ast}$), and so there is a basis of $K$ containing $1_R$; call it $\{1_R, v_1, \ldots, v_k\}$. Consider $v_1 = a_1/b_1$ with $b_1\neq 0$ in $R$; then $$ a_11_R - b_1v_1 = 0 $$ which, by linear independence, implies that $a_1 = 0$, whence $v_1 = a_1/b_1 = 0$; but $0$ can never be a basis element, so this is impossible.
So it must follow that $K$ is one-dimensional, and hence $K = R$. |
I have a friend who believes that 17% doesn't have to be equal to 0.17. Even though he says that 17% is equal to 0.17
on its own, he says that 17% at any other time is not equal to 0.17, referring to the argument that $17\%x \neq 0.17$. No matter how I try to explain it to him, he won't believe me when I say that 17% is always equal to 0.17, no matter what. Does anyone have a good explanation for this?
"17 per cent" on its own is $\frac{17}{100} = 0.17$. That's what it means in English language and I'm pretty sure it's the same in most languages.
However $17\%$ of something, say $x$, will be $\frac{17}{100}x = 0.17x$ which of course isn't $0.17$ except for the special case $x = 1$ but that's not very interesting.
If this still doesn't convince your friend, you could take an example:
Say we have an object with a certain price $x$. Then $1\%$ of the price is $1$ hundredth of the price, which is: $$\frac{x}{100} = \frac{1}{100}x = 0.01x$$
$17\%$ of the price of the object is $17$ times greater than $1\%$ of the price, therefore it is: $$17\times\frac{1}{100}x = \frac{17}{100}x = 0.17x$$
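The same computation, written as a throwaway helper (the function name is mine):

```python
def percent_of(p, x):
    """p% of x, i.e. (p / 100) * x."""
    return p / 100.0 * x

# 17% of 1 is 0.17; 17% of 50 is 8.5
print(percent_of(17, 1), percent_of(17, 50))
```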
The term percent comes from the Latin per centum, or "per hundred". 17 per 100 is 0.17, so 17 percent is most definitely 0.17.
I'm going with him on this one. We've come to accept that 17% = .17 because that's how it's interpreted in the context of math, but 17% and .17 are not the same thing semantically.
0.17 is simply a number. 17% is a function. Without another parameter (the number you're calculating a percentage of), 17% is only meaningful in a relative sense. Think of it this way: if I go outside, I can jog for 0.17 miles (probably pretty accurate, too). I can't, on the other hand, jog for 17% miles. (I can jog for 17% of a mile, but again that's using 17% as a function.)
I think your friend is more on the right track than you are. What you're confused about is how to treat the phrase 17% in language, not in mathematics.
He understands that $17\% = 0.17$, in the sense of the term where $17\%$ is an isolated figure. You're trying to convince him that this is the only valid usage of the term $17\%$.
But consider a sentence like this:
On a successful sale, you'll earn anywhere from \$12,000 to \$18,000, and the real estate agency will take 17%.
In this sentence, interpreting $17\%$ as $0.17$ makes absolutely no sense. It's obviously $17\%$ of the \$12,000 to \$18,000 you earn from a successful deal, which is not $0.17$ at all, but rather around two to three thousand dollars.
And something like this:
This week, viewership of our front page went up by 17%.
Viewership can't really go up by $0.17$ (because you can't get $0.17$ visitors to a site). It's referring to a percentage relative to the past week. So if you had 20,000 visitors to your site, you'd now have 23,400 visitors, which represents an increase of about 3,400 visitors; again, nothing to do with $0.17$.
Basically, what's confusing you is how context affects the use of the percentage term. Yes, when you say $17\%$, you're always calculating something multiplied by $0.17$. But this is very different from saying that $17\%$ is
equal to $0.17$ in that case.
Your friend agrees that
17% is equal to 0.17
on its own
and hopefully he would agree that
50% of something = half of something = 0.5 of something
and similarly,
17% of something = 17/100ths of something = 0.17 of something
Therefore in both ways of referring to 17% (on their own and in relation to some other value) it seems to be fully equivalent to just saying 0.17.
At least in my native language (German), you can't have "17%" on its own. You always have to refer (at least implicitly) to some quantity that the 17% are part of. So, at least in German, 17% is totally meaningless on its own - and it's not taught in school that 17% = 0.17 or 17% = 17/100.
17% is not recognized as a number but as a function (percent(17,x) = x/100*17) like for example we have "das Vierfache von" = "the quadruple of". (quadruple(x) = 4*x) I'm sure that also in English it does not make sense to have "a quadruple" on its own. Otherwise, would you say: A quadruple is 4?
Vice versa, in German, it's not even possible to say: "0.17 of something". The terms are not interchangeable from a linguistic point of view.
This question almost seems like it should be on English SE rather than Math. There is a failure to understand the English language more than there is a failure to understand the math; it seems everyone agrees on the numbers, it’s the words that are giving trouble.
"17%
of something" means, in the English language, "17% multiplied by something," so yes, you can still replace 17% by 0.17: the statement just becomes "0.17 multiplied by something" and is still completely true.
Ask him to calculate what 17% of some random high value is. Let him use a calculator. See what he presses.... I hope for him that he will enter your random value and multiply it by 0.17 to get to the answer :)
If your friend is not willing to accept that 17% of x is always 0.17x, then a simple way to make him believe is to ask him to prove otherwise. If he fails to prove his theory mathematically, it's invalid. You cannot deny proofs in mathematics without demonstrating their invalidity mathematically. Ask him: if it's not always 0.17 of something, then you'd like to see what it is, backed with mathematical reasoning.
The core issue here is not an issue of mathematics, but an issue of language. 17% is obviously only ever used to state a proportion of some whole and always has an explicit or implicit "of". While .17 can only really represent a portion of some whole unit (frequently the integer "1") we don't think about it in the same way.
Depending on your perspective the statement 17% = .17 is either always true or always false, but it's silly to say that sometimes it's true and other times false.
If you say that 17% is a number then the statement is absolutely true. If you say that 17% is not a number then it's impossible to ever say that "17%" itself is directly equal to any number.
"Per cent" means "per 100" because "cent" is the Latin root for "hundred". So 17% means 17 per 100, or $17/100 = 0.17$.
Even if you had 34 items out of 200 (two hundred), or 51 per 300 (three hundred), that's still $34/200 = 0.17$ or $51/300 = 0.17$. It all simplifies to a base of 100. As long as the base unit is 100 for dividing your value, it will always equate to a decimal out of 1.
If there were something called "per-dec" (per 10) or "per-mille" (per 1000), then it would vary based on that; per mille does in fact exist, but it is rarely used.
$17 \% \text{ of } x$ is the same as saying $.17x$. The actual value depends upon the value of $x$: if $x=50$, then $17\%$ of $x$ is $8.5$.
A percent always represents a fraction of a whole. what that whole is is undefined until you provide it. Just like hertz: hertz is a unit of measurement that means 1/seconds or per second, but not what is happening per second. Or like verbs in a sentence; by themselves they only define themselves, but within a sentence they can represent a cohesive communication. For future reference, this concept is the definition of a function.
While 17% is mathematically .17 of whatever item you have, remember that we don't always work with real numbers. Sometimes we are limited to discrete values, like integers, and so there are times when your friend may be right.
As an example, imagine you have a formula for dividing a certain quantity of items among several people, such that one of those people gets 17%. Now say you have 100 items. Of course that person gets exactly 17, or .17 of the total. But now lets say that instead of 100 items to distribute you have 101 items. You can't break the items apart, but the formula still says this person should get 17%. So what happens? He still gets 17 of them. However, in this case, that 17% did not work out to exactly .17 of the total. Instead, a quick check of the calculator shows the result comes to .16831683168.
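The indivisible-items example can be reproduced directly; this sketch assumes the leftover fraction is simply dropped (floored), as in the example, and the function name is mine:

```python
def discrete_share(total_items, pct):
    """Whole-item share of pct% when items are indivisible.
    Assumption: the leftover fraction is dropped (floor)."""
    share = total_items * pct // 100  # integer floor for positive inputs
    return share, share / total_items

# 100 items: exactly 0.17 of the total; 101 items: slightly less
print(discrete_share(100, 17), discrete_share(101, 17))
```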
Let $X$ be a smooth curve over a number field $K$ (not necessarily proper). Fix an algebraic closure $\overline{K}$ of $K$.
Let $i,i' : \overline{K}\hookrightarrow\mathbb{C}$ be two abstract embeddings (ie, as $\mathbb{Q}$-algebras). Let $X_\mathbb{C},X_{\mathbb{C}}'$ be the base changes of $X_{\overline{K}}$ to $\mathbb{C}$ via $i$ and $i'$. Then, $X_\mathbb{C}(\mathbb{C})$ has the structure of a Riemann surface. Let $x\in X_\mathbb{C}(\mathbb{C})$ come from a $\overline{K}$-rational point. We may consider its topological fundamental group $\pi_1^{top}(X_\mathbb{C}(\mathbb{C}),x)$. Since every finite cover of the Riemann surface $X_\mathbb{C}(\mathbb{C})$ is algebraic, for every loop in $\pi_1^{top}(X_\mathbb{C}(\mathbb{C}),x)$, its monodromy action on the fibers at $x$ of its finite covers determines an automorphism of the fiber functor at $x$, and hence we obtain a homomorphism $$\pi_1^{top}(X_\mathbb{C}(\mathbb{C}),x)\rightarrow \pi_1^{et}(X_\mathbb{C},x)$$ which is known to be the embedding of the first group into its profinite completion. The map $X_\mathbb{C}\rightarrow X_{\overline{K}}$ given by base change induces an isomorphism on etale fundamental groups, and composing these maps we get $$\pi_1^{top}(X_\mathbb{C}(\mathbb{C}))\longrightarrow \pi_1^{et}(X_\mathbb{C})\stackrel{\sim}{\longrightarrow}\pi_1^{et}(X_\overline{K})$$ where I've omitted the base points because I only care about these maps up to conjugacy (say, inside $\pi_1^{et}(X_{\overline{K}})$). Similarly, with $X_\mathbb{C}'$, we get a map
$$\pi_1^{top}(X'_\mathbb{C}(\mathbb{C}))\longrightarrow \pi_1^{et}(X'_\mathbb{C})\stackrel{\sim}{\longrightarrow}\pi_1^{et}(X_\overline{K})$$
Both of these maps give embeddings of the topological fundamental groups inside $\pi_1^{et}(X_{\overline{K}})$, canonical up to conjugation.
My question is:
When are the images the same (up to conjugation)?
Are there examples when the images are not the same?
I'm particularly interested in the case when $X_\mathbb{C}(\mathbb{C})$ is hyperbolic.
References would also be appreciated. |
Physics > Atomic Physics
Title: Photoionization of Xe and Rn from the relativistic random-phase theory
(Submitted on 30 Apr 2018 (v1), last revised 14 Feb 2019 (this version, v6))
Abstract: Photoionization cross section $\sigma_{n\kappa}$, asymmetry parameter $\beta_{n\kappa}$, and polarization parameters $\xi_{n\kappa}$, $\eta_{n\kappa}$, $\zeta_{n\kappa}$ of Xe and Rn are calculated in the fully relativistic formalism. To deal with the relativistic and correlation effects, we adopt the relativistic random-phase theory with channel couplings among different subshells. Energy ranges for giant \emph{d}-resonance regions are especially considered.

Submission history
From: Chenkai Qiao [view email]
[v1] Mon, 30 Apr 2018 19:33:46 GMT (252kb)
[v2] Mon, 14 May 2018 19:35:38 GMT (252kb)
[v3] Thu, 28 Jun 2018 12:28:45 GMT (322kb)
[v4] Sat, 18 Aug 2018 15:38:19 GMT (324kb)
[v5] Tue, 2 Oct 2018 19:17:56 GMT (343kb)
[v6] Thu, 14 Feb 2019 14:51:43 GMT (347kb)
EDIT (Aug 1):
I posted a small report with a more detailed proof on my blog; the reduction idea is the same, but the "gadget" used are better explained (you can also directly download the pdf from here)
The problem seems NP-complete and this is a possible reduction from SET COVER.
Suppose you have a universe $A$ of $n$ elements: $A = \{a_1,...,a_n\}$, a collection of $m$ subsets $\mathcal{S} = \{S_1,S_2,...,S_m\}$ (with $S_i \subseteq A$) and an integer $k$. The SET COVER problem asks for a sub-collection $\mathcal{C} \subseteq \mathcal{S}$ with $|\mathcal{C}| \leq k$ such that $\bigcup_{S_i \in \mathcal{C}} S_i = A$.
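For reference, the problem being reduced from can be pinned down executably; this brute-force solver (my own sketch, exponential in $k$) is only meant to make the definition concrete:

```python
from itertools import combinations

def set_cover_brute(universe, subsets, k):
    """Return indices of a sub-collection of size <= k whose union
    covers the universe, or None if no such cover exists."""
    for size in range(1, k + 1):
        for combo in combinations(range(len(subsets)), size):
            if set().union(*(subsets[i] for i in combo)) >= universe:
                return combo
    return None

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(set_cover_brute(U, S, 2))  # a cover of size 2 exists
```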
The reduction to your problem (I call it 3DM-relaxed, 3DMR) can be done in the following way.
The subsets $S_i$ are simulated using one or more elements of the set $X$; the elements of the universe $A$ are simulated using elements of the sets $Y$ and $Z$ ($a_{2j}$ is simulated with an element $y_{a_{2j}}$ of $Y$, $a_{2j-1}$ is simulated with an element $z_{a_{2j-1}}$ of $Z$).
We start with $X = \{ e_1,e_2,...,e_{k} \}$; these elements will enforce the $|\mathcal{C}| \leq k$ constraint.
Then, for every subset $S_i$ we create the triple: $(x^1_{S_i}, y_{S_i}, dum)$, where $dum$ is a new element of $Z$; and add the $k$ triples:
$(e_1,y_{S_i}, dum),(e_2,y_{S_i}, dum),...,(e_{k},y_{S_i}, dum)$
Note that all $dum$ elements are distinct! In this way at most $k$ of the $x^1_{S_i}$ will be "free" to cover the elements representing the $a_j$: indeed the $(e,\cdot,\cdot)$ triples can include at most $k$ of the $y_{S_i}$; the remaining $m-k$ must be included by the corresponding $(x^1_{S_i},\cdot,\cdot)$ triple.
To link the $x^1_{S_i}$ to the elements $y_{a_{2j}}, z_{a_{2j-1}}$ that correspond to the elements of the universe in $S_i$, we add three bridge triples: $(x^1_{S_i},y_{B_i},z_{B_i})$, $(x^2_{S_i},y_{B_i},dum)$, $(x^3_{S_i},dum,z_{B_i})$
At this point the elements of $A$ can be linked to $x^2_{S_i}$ and $x^3_{S_i}$ (or we can further extend the capacity of $S_i$ by adding more bridge triples).
For example, adding the triples $(x^2_{S_i},y_{a_2},z_{a_1}), (x^3_{S_i},y_{a_4},z_{a_3})$ we simulate the set $S_i = \{ a_1, a_2, a_3, a_4\}$.
The fundamental point is that if $x^1_{S_i}$ is "used" to cover the element $y_{S_i}$ (blue edges in the figure) then:
$x^2_{S_i}$ must be used to include element $y_{B_i}$ and cannot be used to include elements $y_{a_2}, z_{a_1}$ (red edges in the figure); $x^3_{S_i}$ must be used to include element $z_{B_i}$ and cannot be used to include elements $y_{a_4}, z_{a_3}$ (red edges in the figure).
Some of the $S_i$ included in the cover can have a non-empty intersection; so we must be sure that two distinct $x^p_{S_i}, x^q_{S_j}$ that are linked to the same element $a_i$ (i.e. we have the triples $(x^p_{S_i}, a_i, \cdot), (x^q_{S_j}, a_i, \cdot)$) can both be included in the matching. For this purpose, for every $x^p, p > 1$ we add a triple with two dum elements: $(x^p, dum, dum)$ (green edges in the figure below).
Finally we can add as many distinct triples $(x, y, dum_i), (x, dum_j, z)$ as needed to garbage collect all the dum elements $dum_i \in Z$ and the dum elements $dum_j \in Y$ if they are left alone (otherwise they would not be included in the matching).
Also note that if there are too many $S_i$, the elements $e_i$ can be included in the matching using their $z_{dum}$ element in $Z$ (only the pair $(e_i, z_{dum})$ is picked).
The resulting 3DMR instance has a solution if and only if there is a set cover $\mathcal{C}$ of $A$ of size at most $k$.
In the figure, a triple $(x,y,z)$ is represented with two edges $(x,y),(x,z)$ of the same color. As an example, if $x^1_{S_1}$, corresponding to $S_1 = \{a_1,a_2,a_3,a_4\}$, must be used to include $y_{S_1}$ (blue edges), then it cannot be used to include the elements $z_{a_1},y_{a_2},z_{a_3},y_{a_4}$ (red edges).
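In code, the core gadget construction might look like the following sketch (all names are hypothetical; the final garbage-collection triples and the capacity extension for $|S_i| > 4$ are omitted):

```python
# Sketch of the gadget construction described above. Assumes |S_i| <= 4,
# so the two bridge vertices x^2, x^3 suffice to link the elements of S_i.
def build_gadgets(subsets, k):
    """Build the core 3DMR triples for a SET COVER instance."""
    triples = []
    dum_count = 0

    def dum():
        # every dum element is distinct, as required
        nonlocal dum_count
        dum_count += 1
        return 'dum%d' % dum_count

    for i, S in enumerate(subsets):
        y_si = 'y_S%d' % i
        # (x^1_{S_i}, y_{S_i}, dum) plus the k triples (e_j, y_{S_i}, dum)
        triples.append(('x1_S%d' % i, y_si, dum()))
        for j in range(k):
            triples.append(('e%d' % j, y_si, dum()))
        # the three bridge triples
        triples.append(('x1_S%d' % i, 'y_B%d' % i, 'z_B%d' % i))
        triples.append(('x2_S%d' % i, 'y_B%d' % i, dum()))
        triples.append(('x3_S%d' % i, dum(), 'z_B%d' % i))
        # green triples (x^p, dum, dum) for p > 1
        triples.append(('x2_S%d' % i, dum(), dum()))
        triples.append(('x3_S%d' % i, dum(), dum()))
        # element triples: x^2 and x^3 each link one (odd, even) pair of S_i
        elems = sorted(S)
        for p, (a, b) in enumerate(zip(elems[::2], elems[1::2]), start=2):
            triples.append(('x%d_S%d' % (p, i), 'y_a%d' % b, 'z_a%d' % a))
    return triples

triples = build_gadgets([{1, 2, 3, 4}, {1, 2}], k=1)
print(len(triples))  # 17 triples for this tiny instance
```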
So basically
$dS_t=\mu S_t\,dt+\sigma S_t\,dW_t$
and
$\mu=r-\frac12\sigma^2$
I have just been thinking about this latter equation. It is very interesting because it ties together the risk-free rate, volatility and asset drift. I always like to look at an equation from some simple perspective, for example assuming that one quantity is huge or very small or 0, and watching how that impacts the other variables. This is a good approach for remembering dependencies.
So looking at this latter equation, the first thing to note is the negative sign on the volatility term. This is OK when trying to explain why the VIX is an index of fear and why "investors" don't like increases in volatility. But in macroeconomic theory, an increasing risk-free rate translates into increased demand for bonds and decreased demand for stocks, so stock prices drop. This assumption is quite real in today's market: when US Treasury yields rise, stocks go down, and vice versa.
So this is not in agreement with this also fundamental assumption $\mu=r-\frac12\sigma^2$.
How do you interpret this fact? |
10.8. RMSProp¶
In the experiment in Section 10.7, the learning rate of each element in the independent variable of the objective function declines (or remains unchanged) during iteration because the variable \(\mathbf{s}_t\) in the denominator accumulates the element-wise squares of the mini-batch stochastic gradient, adjusting the learning rate. Therefore, if the learning rate declines very fast during early iterations while the current solution is still not desirable, Adagrad might have difficulty finding a useful solution because the learning rate will be too small at later stages of iteration. To tackle this problem, the RMSProp algorithm [Tieleman.Hinton.2012] makes a small modification to Adagrad.
10.8.1. The Algorithm¶
Unlike in Adagrad, where the state variable \(\mathbf{s}_t\) is the sum of the element-wise squares of all the mini-batch stochastic gradients \(\mathbf{g}_t\) up to time step \(t\), RMSProp uses an exponentially weighted moving average of these element-wise squared gradients. Specifically, given the hyperparameter \(0 \leq \gamma < 1\), RMSProp computes at time step \(t>0\): \(\mathbf{s}_t \leftarrow \gamma \mathbf{s}_{t-1} + (1 - \gamma) \, \mathbf{g}_t \odot \mathbf{g}_t\).
Like Adagrad, RMSProp re-adjusts the learning rate of each element in the independent variable of the objective function with element-wise operations and then updates the independent variable: \(\mathbf{x}_t \leftarrow \mathbf{x}_{t-1} - \frac{\eta}{\sqrt{\mathbf{s}_t + \epsilon}} \odot \mathbf{g}_t\). Here, \(\eta\) is the learning rate while \(\epsilon\) is a constant added to maintain numerical stability, such as \(10^{-6}\).
10.8.1.1. Exponentially Weighted Moving Average (EWMA)¶
Expanding the definition of \(\mathbf{s}_t\) (with \(\mathbf{s}_0 = \mathbf{0}\)), we can see that \(\mathbf{s}_t = (1-\gamma)\sum_{i=0}^{t-1} \gamma^i \, \mathbf{g}_{t-i} \odot \mathbf{g}_{t-i}\). In Section 10.6 we saw that \(\frac{1}{1-\gamma} = 1 + \gamma + \gamma^2 + \cdots\), so the weights sum to 1. In addition, these weights decrease exponentially, which is why this is called an exponentially weighted moving average.
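Before plotting, a quick numerical check (plain Python, no MXNet needed) confirms that the EWMA weights \((1-\gamma)\gamma^i\) decay exponentially and sum to (essentially) 1:

```python
# EWMA weights (1 - gamma) * gamma**i: positive, exponentially decaying,
# and summing to 1 - gamma**T, which is ~1 for a long enough horizon T.
gamma = 0.9
weights = [(1 - gamma) * gamma ** i for i in range(1000)]
total = sum(weights)
print(round(total, 6))                          # 1.0 (up to gamma**1000)
print(weights[0] > weights[10] > weights[100])  # True: exponential decay
```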
We visualize the weights in the past 40 time steps with various \(\gamma\)s.
%matplotlib inline
import d2l
import math
from mxnet import np, npx
npx.set_np()

gammas = [0.95, 0.9, 0.8, 0.7]
d2l.set_figsize((3.5, 2.5))
for gamma in gammas:
    x = np.arange(40).asnumpy()
    d2l.plt.plot(x, (1-gamma) * gamma ** x, label='gamma = %.2f' % gamma)
d2l.plt.xlabel('time');
10.8.2. Implementation from Scratch¶
By convention, we will use the objective function \(f(\mathbf{x})=0.1x_1^2+2x_2^2\) to observe the iterative trajectory of the independent variable in RMSProp. Recall that in Section 10.7, when we used Adagrad with a learning rate of 0.4, the independent variable moved less in later stages of iteration. However, at the same learning rate, RMSProp can approach the optimal solution faster.
def rmsprop_2d(x1, x2, s1, s2):
    g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6
    s1 = gamma * s1 + (1 - gamma) * g1 ** 2
    s2 = gamma * s2 + (1 - gamma) * g2 ** 2
    x1 -= eta / math.sqrt(s1 + eps) * g1
    x2 -= eta / math.sqrt(s2 + eps) * g2
    return x1, x2, s1, s2

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

eta, gamma = 0.4, 0.9
d2l.show_trace_2d(f_2d, d2l.train_2d(rmsprop_2d))
epoch 20, x1 -0.010599, x2 0.000000
Next, we implement RMSProp with the formula in the algorithm.
def init_rmsprop_states(feature_dim):
    s_w = np.zeros((feature_dim, 1))
    s_b = np.zeros(1)
    return (s_w, s_b)

def rmsprop(params, states, hyperparams):
    gamma, eps = hyperparams['gamma'], 1e-6
    for p, s in zip(params, states):
        s[:] = gamma * s + (1 - gamma) * np.square(p.grad)
        p[:] -= hyperparams['lr'] * p.grad / np.sqrt(s + eps)
We set the initial learning rate to 0.01 and the hyperparameter \(\gamma\) to 0.9. Now, the variable \(\mathbf{s}_t\) can be treated as the weighted average of the square term \(\mathbf{g}_t \odot \mathbf{g}_t\) from the last \(1/(1-0.9) = 10\) time steps.
data_iter, feature_dim = d2l.get_data_ch10(batch_size=10)
d2l.train_ch10(rmsprop, init_rmsprop_states(feature_dim),
               {'lr': 0.01, 'gamma': 0.9}, data_iter, feature_dim);
loss: 0.243, 0.058 sec/epoch
10.8.3. Concise Implementation¶
Using the Trainer instance of the algorithm named "rmsprop", we can implement the RMSProp algorithm with Gluon to train models. Note that the hyperparameter \(\gamma\) is assigned by gamma1.
d2l.train_gluon_ch10('rmsprop', {'learning_rate': 0.01, 'gamma1': 0.9}, data_iter)
loss: 0.243, 0.033 sec/epoch
10.8.4. Summary¶
The difference between RMSProp and Adagrad is that RMSProp uses an EWMA on the squares of the elements in the mini-batch stochastic gradient to adjust the learning rate.
10.8.5. Exercises¶
1. What happens to the experimental results if we set the value of \(\gamma\) to 1? Why?
2. Try using other combinations of initial learning rates and \(\gamma\) hyperparameters and observe and analyze the experimental results.
I am confused about a statement made in the paper Linear Time Algorithm for Projective Clustering, section 5.1, second paragraph, second line.
Projective clustering is a natural generalization of k-means clustering. Given a point set $P$, the goal is to find $k$ $j$-dimensional affine subspaces (flats) $F_1,\cdots,F_k$ such that the following objective function is minimized:
$\sum_{p\in P} \min_{1\leq i \leq k}dist^2(p,F_i)$, where $dist(p,F)$ is the orthogonal distance between point $p$ and flat $F$.
Clearly, as is the case with k-means, the $k$ optimal flats induce a partitioning of the point set $P$ into $k$ subsets $C_1,\cdots,C_k$, such that the points in $C_i$ are closer to flat $F_i$ than to any other flat $F_j$, $j\neq i$.
In the aforementioned paper (page 7, third line from the end), it is said that "It is easy to see that $F_i$ passes through the mean $o_i$ of $C_i$". I fail to see the simplicity of this. Can someone please elaborate on that, with a proof sketch?
N.B: mean of a point set $P$ is defined as $\frac{1}{|P|}\sum_{p\in P}p$. |
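For concreteness, the objective above can be computed numerically; here is a small illustration (the helper names are mine, not from the paper), where a flat is given by a point $o$ on it and an orthonormal basis $B$ of its direction space:

```python
import numpy as np

# dist(p, F): orthogonal distance from p to the flat F = o + span(rows of B),
# obtained by removing the in-flat component of p - o.
def dist_to_flat(p, o, B):
    r = p - o
    return np.linalg.norm(r - B.T @ (B @ r))

# the projective clustering objective: sum over points of the squared
# distance to the nearest flat
def cost(P, flats):
    return sum(min(dist_to_flat(p, o, B) ** 2 for o, B in flats) for p in P)

# One 1-dimensional flat in R^2: the x-axis (o = origin, direction e1).
F = (np.zeros(2), np.array([[1.0, 0.0]]))
print(dist_to_flat(np.array([3.0, 4.0]), *F))  # orthogonal distance = 4.0
```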
Algebraic topological model for a cell complex¶
This file contains two functions, algebraic_topological_model() and algebraic_topological_model_delta_complex(). The second works more generally: for all simplicial, cubical, and \(\Delta\)-complexes. The first only works for simplicial and cubical complexes, but it is faster in those cases.
AUTHORS:
John H. Palmieri (2015-09)
sage.homology.algebraic_topological_model.algebraic_topological_model(K, base_ring=None)¶
Algebraic topological model for cell complex K with coefficients in the field base_ring.
INPUT:
- K – either a simplicial complex or a cubical complex
- base_ring – coefficient ring; must be a field
OUTPUT: a pair (phi, M) consisting of
- a chain contraction phi
- a chain complex \(M\)
This construction appears in a paper by Pilarczyk and Réal [PR2015]. Given a cell complex \(K\) and a field \(F\), there is a chain complex \(C\) associated to \(K\) with coefficients in \(F\). The algebraic topological model for \(K\) is a chain complex \(M\) with trivial differential, along with chain maps \(\pi: C \to M\) and \(\iota: M \to C\) such that \(\pi \iota = 1_M\), and there is a chain homotopy \(\phi\) between \(1_C\) and \(\iota \pi\).
In particular, \(\pi\) and \(\iota\) induce isomorphisms on homology, and since \(M\) has trivial differential, it is its own homology, and thus also the homology of \(C\). Thus \(\iota\) lifts homology classes to their cycle representatives.
The chain homotopy \(\phi\) satisfies some additional properties, making it a chain contraction: \(\phi \phi = 0\), \(\pi \phi = 0\), \(\phi \iota = 0\).
Implementation details: the cell complex \(K\) must have an n_cells() method from which we can extract a list of cells in each dimension. Combining the lists in increasing order of dimension then defines a filtration of the complex: a list of cells in which the boundary of each cell consists of cells earlier in the list. This is required by Pilarczyk and Réal's algorithm. There must also be a chain_complex() method, to construct the chain complex \(C\) associated to this cell complex.
In particular, this works for simplicial complexes and cubical complexes. It doesn’t work for \(\Delta\)-complexes, though: the list of their \(n\)-cells has the wrong format.
Note that from the chain contraction phi, one can recover the chain maps \(\pi\) and \(\iota\) via phi.pi() and phi.iota(). Then one can recover \(C\) and \(M\) from, for example, phi.pi().domain() and phi.pi().codomain(), respectively.
EXAMPLES:
sage: from sage.homology.algebraic_topological_model import algebraic_topological_model
sage: RP2 = simplicial_complexes.RealProjectivePlane()
sage: phi, M = algebraic_topological_model(RP2, GF(2))
sage: M.homology()
{0: Vector space of dimension 1 over Finite Field of size 2,
 1: Vector space of dimension 1 over Finite Field of size 2,
 2: Vector space of dimension 1 over Finite Field of size 2}
sage: T = cubical_complexes.Torus()
sage: phi, M = algebraic_topological_model(T, QQ)
sage: M.homology()
{0: Vector space of dimension 1 over Rational Field,
 1: Vector space of dimension 2 over Rational Field,
 2: Vector space of dimension 1 over Rational Field}
If you want to work with cohomology rather than homology, just dualize the outputs of this function:
sage: M.dual().homology()
{0: Vector space of dimension 1 over Rational Field,
 1: Vector space of dimension 2 over Rational Field,
 2: Vector space of dimension 1 over Rational Field}
sage: M.dual().degree_of_differential()
1
sage: phi.dual()
Chain homotopy between:
  Chain complex endomorphism of Chain complex with at most 3 nonzero terms over Rational Field
  and Chain complex morphism:
    From: Chain complex with at most 3 nonzero terms over Rational Field
    To:   Chain complex with at most 3 nonzero terms over Rational Field
In degree 0, the inclusion of the homology \(M\) into the chain complex \(C\) sends the homology generator to a single vertex:
sage: K = simplicial_complexes.Simplex(2)
sage: phi, M = algebraic_topological_model(K, QQ)
sage: phi.iota().in_degree(0)
[0]
[0]
[1]
In cohomology, though, one needs the dual of every degree 0 cell to detect the degree 0 cohomology generator:
sage: phi.dual().iota().in_degree(0)
[1]
[1]
[1]
sage.homology.algebraic_topological_model.algebraic_topological_model_delta_complex(K, base_ring=None)¶
Algebraic topological model for cell complex K with coefficients in the field base_ring.
This has the same basic functionality as algebraic_topological_model(), but it also works for \(\Delta\)-complexes. For simplicial and cubical complexes it is somewhat slower, though.
INPUT:
- K – a simplicial complex, a cubical complex, or a \(\Delta\)-complex
- base_ring – coefficient ring; must be a field
OUTPUT: a pair (phi, M) consisting of
- a chain contraction phi
- a chain complex \(M\)
See algebraic_topological_model() for the main documentation. The difference in implementation between the two: this one uses matrix and vector algebra. The other function does more of the computations "by hand" and uses cells (given as simplices or cubes) to index various dictionaries. Since the cells in \(\Delta\)-complexes are not as nice, the other function does not work for them, while this function relies almost entirely on the structure of the associated chain complex.
EXAMPLES:
sage: from sage.homology.algebraic_topological_model import algebraic_topological_model_delta_complex as AT_model
sage: RP2 = simplicial_complexes.RealProjectivePlane()
sage: phi, M = AT_model(RP2, GF(2))
sage: M.homology()
{0: Vector space of dimension 1 over Finite Field of size 2,
 1: Vector space of dimension 1 over Finite Field of size 2,
 2: Vector space of dimension 1 over Finite Field of size 2}
sage: T = delta_complexes.Torus()
sage: phi, M = AT_model(T, QQ)
sage: M.homology()
{0: Vector space of dimension 1 over Rational Field,
 1: Vector space of dimension 2 over Rational Field,
 2: Vector space of dimension 1 over Rational Field}
If you want to work with cohomology rather than homology, just dualize the outputs of this function:
sage: M.dual().homology()
{0: Vector space of dimension 1 over Rational Field,
 1: Vector space of dimension 2 over Rational Field,
 2: Vector space of dimension 1 over Rational Field}
sage: M.dual().degree_of_differential()
1
sage: phi.dual()
Chain homotopy between:
  Chain complex endomorphism of Chain complex with at most 3 nonzero terms over Rational Field
  and Chain complex morphism:
    From: Chain complex with at most 3 nonzero terms over Rational Field
    To:   Chain complex with at most 3 nonzero terms over Rational Field
In degree 0, the inclusion of the homology \(M\) into the chain complex \(C\) sends the homology generator to a single vertex:
sage: K = delta_complexes.Simplex(2)
sage: phi, M = AT_model(K, QQ)
sage: phi.iota().in_degree(0)
[0]
[0]
[1]
In cohomology, though, one needs the dual of every degree 0 cell to detect the degree 0 cohomology generator:
sage: phi.dual().iota().in_degree(0)
[1]
[1]
[1]
10.6. Momentum¶
In Section 10.3, we mentioned that the negative gradient of the objective function at the current position of the independent variable gives the direction of the objective function's fastest descent. Therefore, gradient descent is also called steepest descent. In each iteration, gradient descent recomputes the gradient at the current position of the independent variable and then updates the variable along the negative gradient direction. However, this can lead to problems if the iterative direction relies exclusively on the current position of the independent variable.
10.6.1. Exercises with Gradient Descent¶
Now, we will consider an objective function \(f(\boldsymbol{x})=0.1x_1^2+2x_2^2\), whose input and output are a two-dimensional vector \(\boldsymbol{x} = [x_1, x_2]\) and a scalar, respectively. In contrast to Section 10.3, here the coefficient of \(x_1^2\) is reduced from \(1\) to \(0.1\). We are going to implement gradient descent based on this objective function, and demonstrate the iterative trajectory of the independent variable using the learning rate \(0.4\).
%matplotlib inline
import d2l
from mxnet import np, npx
npx.set_np()

eta = 0.4

def f_2d(x1, x2):
    return 0.1 * x1 ** 2 + 2 * x2 ** 2

def gd_2d(x1, x2, s1, s2):
    return (x1 - eta * 0.2 * x1, x2 - eta * 4 * x2, 0, 0)

d2l.show_trace_2d(f_2d, d2l.train_2d(gd_2d))
epoch 20, x1 -0.943467, x2 -0.000073
As we can see, at the same position, the slope of the objective function has a larger absolute value in the vertical direction (\(x_2\) axis direction) than in the horizontal direction (\(x_1\) axis direction). Therefore, given the learning rate, using gradient descent for iteration will cause the independent variable to move more in the vertical direction than in the horizontal one. So we need a small learning rate to prevent the independent variable from overshooting the optimal solution for the objective function in the vertical direction. However, this causes the independent variable to move more slowly toward the optimal solution in the horizontal direction.
Now, we try to make the learning rate slightly larger, so the independent variable will continuously overshoot the optimal solution in the vertical direction and gradually diverge.
eta = 0.6
d2l.show_trace_2d(f_2d, d2l.train_2d(gd_2d))
epoch 20, x1 -0.387814, x2 -1673.365109
10.6.2. The Momentum Method¶
The momentum method was proposed to solve the gradient descent problem described above. Since mini-batch stochastic gradient descent is more general than gradient descent, the subsequent discussion in this chapter will continue to use the definition for mini-batch stochastic gradient descent \(\mathbf{g}_t\) at time step \(t\) given in Section 10.5. We set the independent variable at time step \(t\) to \(\mathbf{x}_t\) and the learning rate to \(\eta_t\). At time step \(0\), momentum creates the velocity variable \(\mathbf{v}_0\) and initializes its elements to zero. At time step \(t>0\), momentum modifies the steps of each iteration as follows: \(\mathbf{v}_t \leftarrow \gamma \mathbf{v}_{t-1} + \eta_t \mathbf{g}_t\) and \(\mathbf{x}_t \leftarrow \mathbf{x}_{t-1} - \mathbf{v}_t\).
Here, the momentum hyperparameter \(\gamma\) satisfies \(0 \leq \gamma < 1\). When \(\gamma=0\), momentum is equivalent to a mini-batch SGD.
Before explaining the mathematical principles behind the momentum method, we should take a look at the iterative trajectory of the gradient descent after using momentum in the experiment.
def momentum_2d(x1, x2, v1, v2):
    v1 = gamma * v1 + eta * 0.2 * x1
    v2 = gamma * v2 + eta * 4 * x2
    return x1 - v1, x2 - v2, v1, v2

eta, gamma = 0.4, 0.5
d2l.show_trace_2d(f_2d, d2l.train_2d(momentum_2d))
epoch 20, x1 -0.062843, x2 0.001202
As we can see, when using a smaller learning rate (\(\eta=0.4\)) and momentum hyperparameter (\(\gamma=0.5\)), momentum moves more smoothly in the vertical direction and approaches the optimal solution faster in the horizontal direction. Now, when we use a larger learning rate (\(\eta=0.6\)), the independent variable will no longer diverge.
eta = 0.6d2l.show_trace_2d(f_2d, d2l.train_2d(momentum_2d))
epoch 20, x1 0.007188, x2 0.002553
10.6.2.1. Expanding the velocity variable \(\mathbf v_t\)¶
To understand the momentum method, we can expand the velocity variable over time (with \(\mathbf{v}_0 = \mathbf{0}\)): \(\mathbf{v}_t = \eta_t \mathbf{g}_t + \gamma \eta_{t-1} \mathbf{g}_{t-1} + \cdots = \sum_{i=0}^{t-1} \gamma^i \, \eta_{t-i} \mathbf{g}_{t-i}\).
As we can see, \(\mathbf v_t\) is a weighted sum over all past gradients, each multiplied by its corresponding learning rate, which is the weight update in normal gradient descent. We call each such term a scaled gradient. The weights decrease exponentially at a speed controlled by \(\gamma\).
The following code block shows the weights for the past 40 time steps under various \(\gamma\)s.
gammas = [0.95, 0.9, 0.6, 0]
d2l.set_figsize((3.5, 2.5))
for gamma in gammas:
    x = np.arange(40).asnumpy()
    d2l.plt.plot(x, gamma ** x, label='gamma = %.2f' % gamma)
d2l.plt.xlabel('time')
d2l.plt.legend();
A small \(\gamma\) lets the velocity variable focus on more recent scaled gradients, while a large value makes the velocity variable include more past scaled gradients. Compared to plain gradient descent, momentum makes the weight updates more consistent over time. It may smooth the training progress if \(\mathbf x\) enters a region where the gradient varies sharply, and help it walk out of a region that is too flat.
Also note that \(\frac{1}{1-\gamma} = 1 + \gamma + \gamma^2 + \cdots\). So if all scaled gradients are similar to each other, e.g. \(\eta_t\mathbf g_t\approx \eta\mathbf g\) for all \(t\), then momentum changes the weight update from \(\eta\mathbf g\) in normal gradient descent into \(\frac{\eta}{1-\gamma} \mathbf g\).
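The \(\frac{1}{1-\gamma}\) scaling can be checked numerically with a constant scaled gradient:

```python
# With a constant scaled gradient eta * g, iterating v <- gamma * v + eta * g
# drives v toward the geometric-series limit eta * g / (1 - gamma).
eta, gamma, g = 0.4, 0.5, 1.0
v = 0.0
for _ in range(50):
    v = gamma * v + eta * g
print(v)  # ~0.8 = eta * g / (1 - gamma)
```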
10.6.3. Implementation from Scratch¶
Compared with mini-batch SGD, the momentum method needs to maintain a velocity variable of the same shape for each independent variable, and a momentum hyperparameter is added to the hyperparameter category. In the implementation, we use the state variable states to represent the velocity variable in a more general sense.
def init_momentum_states(feature_dim):
    v_w = np.zeros((feature_dim, 1))
    v_b = np.zeros(1)
    return (v_w, v_b)

def sgd_momentum(params, states, hyperparams):
    for p, v in zip(params, states):
        v[:] = hyperparams['momentum'] * v + hyperparams['lr'] * p.grad
        p[:] -= v
When we set the momentum hyperparameter momentum to 0.5, it can be treated as a mini-batch SGD: the mini-batch gradient here is the weighted average of twice the mini-batch gradient of the last two time steps.
def train_momentum(lr, momentum, num_epochs=2):
    d2l.train_ch10(sgd_momentum, init_momentum_states(feature_dim),
                   {'lr': lr, 'momentum': momentum}, data_iter,
                   feature_dim, num_epochs)

data_iter, feature_dim = d2l.get_data_ch10(batch_size=10)
train_momentum(0.02, 0.5)
loss: 0.244, 0.048 sec/epoch
When we increase the momentum hyperparameter momentum to 0.9, it can still be treated as a mini-batch SGD: the mini-batch gradient here will be the weighted average of ten times the mini-batch gradient of the last 10 time steps. Now we keep the learning rate at 0.02.
train_momentum(0.02, 0.9)
loss: 0.278, 0.048 sec/epoch
We can see that the value change of the objective function is not smooth enough at later stages of iteration. Intuitively, ten times the mini-batch gradient is five times larger than two times the mini-batch gradient, so we can try to reduce the learning rate to 1/5 of its original value. Now, the value change of the objective function becomes smoother after its period of decline.
train_momentum(0.004, 0.9)
loss: 0.243, 0.049 sec/epoch
10.6.4. Concise Implementation¶
In Gluon, we only need to use momentum to define the momentum hyperparameter in the Trainer instance to implement momentum.
d2l.train_gluon_ch10('sgd', {'learning_rate': 0.004, 'momentum': 0.9}, data_iter)
loss: 0.244, 0.039 sec/epoch
10.6.5. Summary¶
The momentum method uses the EWMA concept. It takes the weighted average of past time steps, with weights that decay exponentially by the time step. Momentum makes independent variable updates for adjacent time steps more consistent in direction.
10.6.6. Exercises¶
1. Use other combinations of momentum hyperparameters and learning rates and observe and analyze the different experimental results.
If economic growth is indeed highly desirable (see this question), why must this growth be exponential? With finite resources, exponential growth might hit limits rapidly (or be impossible?). Why not express growth in linear rather than exponential terms?
Growth as is meant here "must" be nothing in particular. It is a specific metric, the percentage change in yearly GNP/GDP, and it is what it is.
In Blanchard and Fischer's "Lectures on Macroeconomics", in the introductory chapter 1, page 2, Figure 1.1, the logarithm of USA GNP 1874-1986 is graphed: and it is impressively linear, bar a disturbance around World War II (a dive before it that was roughly equally compensated immediately after). But this means that
$$\ln Y \approx at \Rightarrow Y \approx e^{at}$$
(for the US Economy, $a \approx 0.030\;\; \text{to} \;\;0.037$ for the period).
It is the data that told us that "growth was exponential" during this period. (Note that "exponential growth" usually includes the concept of a constant growth rate, while in informal language, "exponential" may also refer to exploding paths, paths with an increasing growth rate.) And so economic models were deemed relevant if they could replicate to a respectable degree the observed data.
The question "can this go on forever?" is an altogether different issue, starting with the meaning of the word "forever".
Because linear functions don't match the data.
You can't express a series $$[1,2,4,9,16]$$
as $$f(x)=x+y$$
for any possible $y$.
Because we use today's capital stock to produce tomorrow's output, some fraction of which is invested, so you should expect something like $dK/dt=\alpha f(K)$ where $f$ is increasing in $K$.
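That intuition can be checked numerically under the simplest assumption \(f(K)=K\) (the numbers below are illustrative, not from the text): the accumulation equation \(dK/dt=\alpha K\) produces an exponential path.

```python
import math

# Euler integration of dK/dt = alpha * K versus the exact exponential
# solution K0 * exp(alpha * t). Toy parameters, chosen for illustration.
alpha, K0, T, steps = 0.03, 1.0, 100.0, 100_000
dt = T / steps
K = K0
for _ in range(steps):
    K += alpha * K * dt
print(K, K0 * math.exp(alpha * T))  # both near e^3, about 20.09
```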
Growth makes most sense as a percentage. Looking at absolute numbers does have value, but percentage growth allows for some pretty good comparisons.
You seem to think exponential growth means infinite growth. It is a pretty logical assumption to make, but I believe it takes these models and uses them in a way they were not meant to be used. Economists seldom care about making predictions 200 years in the future. Exponential growth is quite bad at forecasting that far ahead in anything, in shorter time scales it isn't too bad (Source needed).
I'll try and make it clearer:
Consider a basic model of GDP growth. Suppose GDP is growing at 1% per year ($r=1.01$) and initially is at \$1,000,000. Let $Y_t$ denote the GDP $t$ years after the initial value of $Y_0 = \$1,000,000$. If one asks what the GDP will be in 50 years there are two options.
At 1% per year growth, the dynamic equation would be \begin{gather*} Y_{t+1} - Y_t = 0.01 \, Y_t \end{gather*} and the corresponding iteration equation is \begin{gather*} Y_{t+1} = 1.01 \, Y_t \end{gather*} Starting with the initial condition, $Y_0 = 1,000,000$, we could calculate $Y_1 = 1.01 \times 1,000,000 = 1,010,000$, $Y_2 = 1.01 \times 1,010,000 = 1,020,100$ and so on for 50 iterations.
This is equivalent to:
\begin{gather*} Y_t = 1.01^t \left( 1,000,000 \right) \end{gather*} so that we immediately have a formula for the GDP after 50 years: \begin{gather*} Y_{50} = 1.01^{50} \left( 1,000,000 \right) = 1,644,631. \end{gather*}
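The iteration and the closed form agree, which is easy to verify:

```python
# Iterate Y_{t+1} = 1.01 * Y_t for 50 years and compare with 1.01**50 * Y_0.
Y = 1_000_000.0
for _ in range(50):
    Y *= 1.01
print(round(Y))  # approximately 1,644,631, matching the closed form
```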
A point I am trying to make here is that exponential growth is really just the size of something as a function of itself in a different state or time frame. If you want exponential growth over a longer timeframe, it makes sense to extend the model.
What if $r$ was endogenous to the model? As Y gets larger, r gets smaller. Still growing exponentially, and the size of the economy in $t+1$ is still dependent on the size of the economy in $t$. |
Scipy.optimize.linprog [1] recently added a sparse interior point solver [2]. In theory we should be able to solve some larger problems with this solver. However, the input format is matrix based. This makes it difficult to express LP models without much tedious programming. Of course, if the LP model is very structured, things are a bit easier. In [3] the question came up whether we can solve some reasonably sized transportation problems with this solver. I claimed this new interior point solver should be able to tackle reasonably large transportation problems. As transportation problems translate into large but easy LPs (very sparse, network structure), this is a good example to try out this solver. It should not require too much programming.
An LP model for the transportation problem can look like:
Transportation Model \[ \begin{align} \min \> & \sum_{i,j} \color{darkblue}c_{i,j} \color{darkred} x_{i,j} \\ & \sum_j \color{darkred} x_{i,j} \le \color{darkblue}s_i &&\forall i\\ & \sum_i \color{darkred} x_{i,j} \ge \color{darkblue}d_j &&\forall j\\ & \color{darkred}x_{i,j}\ge 0\end{align} \]
Here \(i\) indicate the supply nodes and \(j\) the demand nodes. The problem is feasible if total demand does not exceed total supply (i.e. \(\sum_i s_i \ge \sum_j d_j\)).
Even if the transportation problem is dense (that is each supply node can serve all demand nodes or in other words each link \( i \rightarrow j\) exists), the LP matrix is sparse. There are 2 nonzeros per column.
LP Matrix
The documentation mentions we can pass on the LP matrix as a sparse matrix. Here are some estimates of the difference in memory usage:
                          100x100   500x500   1000x1000
Source Nodes                  100       500       1,000
Destination Nodes             100       500       1,000
LP Variables               10,000   250,000   1,000,000
LP Constraints                200     1,000       2,000
LP Nonzero Elements        20,000   500,000   2,000,000
Dense Memory Usage (MB)        15     1,907      15,258
Sparse Memory Usage (MB)      0.3       7.6        30.5
For the \(1000\times 1000\) case we see that a sparse storage scheme will be about 500 times as efficient.
Sparsity is very important both inside the solver and at the modeling level. Larger but sparser models are often to be preferred over smaller but denser models. Exploiting sparsity will not only save memory but also increase performance: by skipping zero elements we avoid doing all kinds of useless work (like multiplying by zero or adding zeros).
Solving a 1000x1000 transportation problem: Implementation
The package scipy.sparse [4] is used to form a sparse matrix. We use three parallel arrays to populate the sparse matrix: one integer array with the row numbers, one integer array with the column numbers and one floating point array with the values. All these arrays have the same length: the number of nonzeros in the LP matrix. Scipy.optimize.linprog does not allow for \(\ge\) constraints. So our model becomes: \[\begin{align} \min &\sum_{i,j} c_{i,j} x_{i,j}\\ & \sum_j x_{i,j} \le s_i &&\forall i \\ & \sum_i -x_{i,j} \le -d_j &&\forall j\\ & x_{i,j}\ge 0\end{align}\]
When I run this, I see:
This proves we can actually solve a \(1000 \times 1000\) transportation problem (leading to an LP with a million variables) using standard Python tools.
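The setup described above can be sketched on a toy 2x2 instance (the numbers are mine, for illustration; for this tiny demo the matrix is densified so it runs with any linprog method, while at scale one would pass the sparse matrix directly, which is the whole point of the post):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.optimize import linprog

s = [15, 25]                 # supplies
d = [20, 20]                 # demands
c = np.array([[1.0, 2.0],
              [3.0, 1.0]])   # cost c[i][j] of link i -> j
m, n = len(s), len(d)

# three parallel arrays: row numbers, column numbers, values
rows, cols, vals = [], [], []
for i in range(m):
    for j in range(n):
        col = i * n + j        # variable x[i,j]
        rows += [i, m + j]     # supply row i, demand row m+j
        cols += [col, col]
        vals += [1.0, -1.0]    # sum_j x <= s_i ; sum_i -x <= -d_j
A = coo_matrix((vals, (rows, cols)), shape=(m + n, m * n))
b = np.concatenate([s, -np.array(d)])

res = linprog(c.ravel(), A_ub=A.toarray(), b_ub=b, bounds=(0, None))
print(res.fun)  # optimal cost ~50: x11=15, x21=5, x22=20
```

Note the two nonzeros per column, exactly as discussed above.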
Notes
It is noted that a dense transportation problem (with all links \(i \rightarrow j\) allowed) produces a sparse LP. It is also possible that the transportation problem itself is sparse: only some links are allowed. Sparse transportation problems are a little bit more difficult to set up: the LP matrix is less structured, so we need more advanced data structures (probably a dict to establish a mapping from each existing link \(i \rightarrow j\) to a column number). A good modeling tool may help here.

References
1. https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html
2. https://docs.scipy.org/doc/scipy/reference/optimize.linprog-interior-point.html
3. Maximum number of decision variables in scipy linear programming module in Python, https://stackoverflow.com/questions/57579147/maximum-number-of-decision-variables-in-scipy-linear-programming-module-in-pytho
4. https://docs.scipy.org/doc/scipy/reference/sparse.html
Fortunately, and covering the points already made, there is a fairly complete answer to the determination of the excision map
$$\varepsilon_2 : \pi_2(B,C) \to \pi_2(X,A)$$ when $X=A \cup B$, $C = A \cap B$, where the base point lies in $C$, and some other conditions hold, such as $A,B$ being open; we also need connectivity conditions. We say $(B,C)$ is connected if $B,C$ are path connected and the morphism $\pi_1(C) \to \pi_1(B)$ is surjective. Under all these conditions, the excision result is that $(X,A)$ is connected, and that the morphism $\varepsilon_2$ is entirely determined by the morphism $\lambda: \pi_1(C) \to \pi_1(A)$ induced by the inclusion $C \to A$.
To give more detail, we need to recognise, with Henry Whitehead, 1946, that the boundary map $\delta: \pi_2(X,A) \to \pi_1(A)$ has the structure of a crossed module.
A morphism $\mu: M \to P $ of groups is called a
crossed module if there is given an action of the group $P$ on the group $M$, written $(m,p) \mapsto m^p$, such that the following two rules hold for all $p \in P, m,n \in M$:
CM$1$) $\mu(m^p)= p^{-1}\mu(m)p$;
CM$2$) $n^{-1}mn= m^{\mu n}$.
The second rule is
crucial, for example to the homotopical applications.
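A standard first example, classical though not stated in the answer above: any inclusion of a normal subgroup, with the conjugation action, is a crossed module.

```latex
\text{Let } M \trianglelefteq P \text{ and let } \mu : M \hookrightarrow P
\text{ be the inclusion, with } P \text{ acting on } M \text{ by } m^p = p^{-1}mp.
\text{ Then CM}1\text{) is immediate, since } \mu \text{ is the inclusion:}
\qquad \mu(m^p) = p^{-1}mp = p^{-1}\mu(m)p,
\text{ and CM}2\text{) is conjugation computed inside } M:
\qquad m^{\mu n} = n^{-1}mn.
```

Another classical example is the zero morphism $0 : M \to P$ for $M$ a $P$-module, which shows the notion genuinely generalises both normal subgroups and modules.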
Now suppose $\mu : M \to P$ is a crossed module and $\lambda: P \to Q$ is a morphism of groups. Then we can construct a new crossed module $\delta: \lambda_* M \to Q$ called the crossed module
induced from $\mu: M \to P$ by $\lambda $, which comes with a crossed module morphism $\lambda': M \to \lambda_*M \;$ and which with $\lambda $ satisfies a nice universal property for morphisms of crossed modules.
Then we have:
Homotopical Excision in Dimension $2$: Under the above conditions at the start of this account, $$\pi_2(X,A) \cong \lambda_* \pi_2(B,C).$$
An important point is that this result is about nonabelian structures and so not apparently deducible from standard homological tools.
An expository account of the background to this was published as the first article, available here, in the journal HHA, but the first statement and proof were in
R. Brown and P.J. Higgins, "On the connection between the second relative homotopy groups of some related spaces",
Proc. London Math. Soc. (3) 36 (1978) 193-212.
which deduces it from a much more general $2$-dimensional van Kampen type Theorem stated and proved there. A full account is also in the new book on Nonabelian Algebraic Topology (EMS Tract Vol 15, 2011) advertised here. This Excision result is Theorem 5.4.1.
This leads to ideas for calculating induced crossed modules in homotopically relevant situations. See for example
R. Brown and C.D. Wensley, "Computation and homotopical applications of induced crossed modules",
J. Symbolic Computation 35 (2003) 59-72.
which gives some computer based calculations.
The basic philosophy is that a $2$-d van Kampen Theorem allows the computation of some homotopy $2$-types, and from this, the calculation of some second homotopy groups, which are, after all, but a pale shadow of the $2$-type.
Addition 30 Dec: The universal property of the induced crossed module can be stated as that the following diagram of morphisms of crossed modules
$$ \begin{matrix}1\to P & \to & 1 \to Q \\\downarrow & & \downarrow \\M \to P & \to & \lambda_* M \to Q \end{matrix} $$
is a pushout of crossed modules. This should make clear the connection with a van Kampen type theorem, dealing more generally with pushouts of crossed modules involving second relative homotopy groups. This Excision result also implies Whitehead's subtle theorem (in Combinatorial Homotopy II, 1949) that $\pi_2(A \cup_{f_i} \{e^2_i\}, A,a) \to \pi_1(A,a)$ is for connected $A$ the
free crossed module on the characteristic maps $f_i$ of the $2$-cells, as well as the result given by mph on $\pi_2(\Sigma X)$ but without using homological information.
Edit 05/01/14: In particular, if $Q=1$ then $\lambda_* M$ is $M$ factored by the action of $P$. This implies the Relative Hurewicz Theorem in dimension $2$.
Edit Dec 31: Another interesting aspect of this result is that the failure of excision is usually related to the work of Blakers and Massey on triad homotopy groups, which gives an exact sequence
$$\to \pi_3(X,A) \to \pi_3(X;A,B) \to \pi_2(B,C) \to \pi_2(X,A) \to \pi_2(X;A,B) \to . $$
But it is not clear to me how this sequence can yield the above Excision Theorem, and Blakers and Massey's results on determining the first non vanishing triad group do not deal with the non simply connected case.
Edit: Jan 7 2014: I should add that in the paper R. Brown and J.-L. Loday, "Excision homotopique en basse dimension'',
C.R. Acad. Sci. Paris Sér. I 298 (1984) 353-356, and available here, we announce under the above conditions an exact sequence
$$\pi_3(B,C)\to \pi_3(X,A) \to \pi_2(A,C) \otimes \pi_2(B,C) \to \pi_2(B,C) \to \pi_2(X,A) $$ where $\otimes$ denotes a "nonabelian tensor product" of two groups which act on each other, basically via a determination of $\pi_3(X;A,B)$ by a van Kampen type theorem.
I heard that mass and distance are the only deciding criteria for determining the gravitational pull. Keeping the distance constant, if mass were the only deciding factor for gravitational pull, then every super-massive star capable of forming a black hole would absorb light in the first place, and thus it would already look like a black hole throughout its whole life cycle, even before collapsing into one. This is based on the fact that black holes can absorb light rays due to their immense gravitational pull.
You are correct that the status of a black hole is determined by its mass, but
also by its radius. The gravitational field becomes stronger the bigger the mass and the closer you can get to that mass.
A black hole forms once a mass $M$ is compressed inside the Schwarzschild radius $r_s = 2GM/c^2$. i.e. once its density achieves $$ \rho > \frac{3M}{4\pi r_s^{3}}$$ i.e. when a central mass $M$ has a density that exceeds $$ \rho > \frac{3}{32\pi} \frac{c^6}{G^3 M^2} = 1.8\times10^{19} \left(\frac{M}{M_{\odot}}\right)^{-2}\ {\rm kg/m}^3$$ This is a ball park figure and assumes spherical symmetry and neglects any detailed GR treatment, but is more or less correct - a few times higher than typical neutron star densities.
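The quoted figure is easy to reproduce; a quick numerical check of the critical-density formula (constants are standard SI approximations):

```python
from math import pi

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def rho_crit(M):
    """Mean density needed to fit mass M inside its own Schwarzschild radius."""
    return 3.0 * c**6 / (32.0 * pi * G**3 * M**2)

print(rho_crit(M_sun))        # ~1.8e19 kg/m^3, as quoted above
print(rho_crit(1e8 * M_sun))  # a supermassive black hole needs only ~1.8e3 kg/m^3
```

Note the $M^{-2}$ scaling: the more massive the object, the *lower* the density needed, which is why supermassive black holes can form at densities below that of water.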
In other words it is the
density of the material that largely determines whether something becomes a black hole. The mass is only an indirect parameter.
At the start of its life, the density of a super-massive star is actually
lower (on average, and at the core) than the density of the Sun, so nowhere near high enough to form a black hole. Later in its (relatively) short life, after several nuclear burning stages, the core becomes very much denser - of order $10^{12}$ kg/m$^{3}$ - and has a mass of just over a solar mass. This is still way too small to form a black hole. But what happens then is that once the core consists of iron (and other iron-peak elements), there is no further energy generation and electron degeneracy pressure is no longer capable of supporting the core against its weight, and it collapses. If that collapse results in densities exceeding about $10^{18}$ kg/m$^3$ then a black hole can form at the centre.
Most of the question has already been answered in the comments. I'll summarise and expand on some of the points.
The OP asks why we have a requirement that the 2-norm of vectors be preserved under time evolution. The answer, already given in the comments, is as follows. Let $U(t)$ be the time evolution operator, and $\lvert \psi(t) \rangle$ the state of the system at time $t$. Then $\lvert \psi(t) \rangle = U(t)\lvert \psi(0) \rangle$, where $\lvert \psi(0) \rangle$ is the initial state. We insist that the wavefunction is normalised, which in the usual interpretation means that probabilities "add up to 1", so $\langle \psi(0)\lvert \psi(0) \rangle=1$. We therefore require that $U(t)$ be unitary to ensure that the wavefunction remains normalised at all times.
If time evolution wasn't unitary, then you may have $\langle \psi(t)\lvert \psi(t) \rangle \neq 1$. If it's greater than one, then it's nonsense (as long as you believe that the wavefunction can be interpreted as a probability amplitude). If it's less than one, then your system is "leaking" information. This is also nonsense in the usual interpretation. Some people use this to model dissipative effects by using an imaginary potential, which translates to a non-hermitian Hamiltonian and therefore (if you believe Schrödinger) non-unitary evolution.
As the question you linked points out correctly, we need operators corresponding to observables to be normal. This is because by the spectral theorem, this means that they are diagonalisable by an orthogonal set, which is something we need if measurements are to make sense. Notice however that we can build an hermitian Hamiltonian from non-normal operators. The typical example is the harmonic oscillator, for which $H=a^\dagger a$, but $a$ is not normal. If we require the eigenvalues of a normal operator to be real, then the operator is hermitian. Why do we do this? It's what we are used to. Position, momentum and energy eigenvalues have a simple interpretation if they are real. What if they were complex? This happens when you do scattering, for which the wavefunction is not normalisable, and you may get complex momentum and energies, which in that context may be interpreted as decay times (see these notes, page 312 onwards). However it's not clear how to interpret them in general.
We can summarise the logical chain:
$$\textrm{wavefunctions are probabilty amplitudes} \implies \textrm{time evolution should be unitary}$$
$$\textrm{observables give real measurements} \implies \textrm{corresponding operators should be hermitian}$$
$$\textrm{Schrodinger equation holds} \Leftrightarrow \textrm{time evolution is given by } e^{-iHt}$$
So to answer your question in the comments, if you assume 1) but not 2) you still get unitary time evolution, however you do not necessarily know its mathematical expression.
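The first link of that chain is easy to see numerically. Below is a small illustration (my own sketch, not from the answer): for any Hermitian $H$, the operator $U = e^{-iHt}$, built here via the spectral decomposition, is unitary and preserves the 2-norm of an arbitrary state:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # a random Hermitian "Hamiltonian"

# exp(-iHt) via the spectral decomposition H = V diag(E) V^dagger
E, V = np.linalg.eigh(H)
t = 0.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)             # normalise: <psi|psi> = 1
psi_t = U @ psi0

print(np.linalg.norm(psi_t))             # stays 1 up to rounding
```

Replacing $H$ by a non-Hermitian matrix breaks the unitarity of $U$ and the norm drifts, which is exactly the "leaking" behaviour described above.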
Notice however that there are cases in QM when we want to interpret things differently. The most common example is scattering states, whose wavefunctions are not interpreted as probability amplitudes.
Yes. The phase envelope (or equilibrium curve) of most liquids has a similar shape and can be approximated, to a reasonable degree, by using the Clausius-Clapeyron equation:
$$\Delta h_v=T(v^V-v^L)\frac{dP^s}{dT}$$
which requires finding ${dP^s}/{dT}$ from e.g. a vapor-pressure-temperature correlation like the Antoine equation, and also a separate estimate for $\Delta v$ before $\Delta h_v$ can be obtained. This is an exact thermodynamic relationship, but is often simplified by using the ideal gas law for $v^V$ and neglecting the liquid volume $v^L$ i.e. $v^V \gg v^L$, giving
$$\Delta h_v=\frac{RT^2}{P^s}\frac{dP^s}{dT}$$
However, it is
very important to note that the term $(v^V-v^L)$ becomes increasingly decisive as you go above 1 bar, where the assumption $v^V \gg v^L$ is no longer valid. In such a case, separate correlations for $v^L$ and $v^V$ are required before the correct curvature is obtained. $v^L$, for instance, can be calculated from e.g. a density correlation like the Rackett equation, and $v^V$ from an equation of state like Peng-Robinson.
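As a worked sketch of the simplified route (not from the answer itself): taking $dP^s/dT$ analytically from the Antoine equation and plugging it into the ideal-gas form above gives the heat of vaporisation directly. The Antoine constants used here are commonly tabulated values for water ($T$ in °C, $P$ in mmHg, roughly valid 1-100 °C) and should be treated as an assumption:

```python
from math import log

A, B, C = 8.07131, 1730.63, 233.426   # assumed Antoine constants for water
R = 8.314                             # J/(mol K)

def dhv_water(T_C):
    """Simplified Clausius-Clapeyron: dh_v = R T^2 d(ln P)/dT."""
    # log10(P) = A - B/(C + T)  =>  d(ln P)/dT = ln(10) * B / (C + T)^2,
    # so the pressure units cancel and only B, C matter.
    dlnP_dT = log(10.0) * B / (C + T_C) ** 2
    T_K = T_C + 273.15
    return R * T_K ** 2 * dlnP_dT     # J/mol

print(dhv_water(100.0))  # ~41.5 kJ/mol vs. the measured ~40.7 kJ/mol
```

The few-percent overshoot at the boiling point is exactly the ideal-gas/$v^V \gg v^L$ error discussed above, and it grows rapidly at higher pressures.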
Show that L = $\{0^{2^n}| n\geq 0\}$ is not a context free language.
Let string $s = 0^{2^p}$. Then we know we can write $s$ as $s = uvxyz$. I know that $|vy| > 0$ and $|vxy| \leq p$.
So how do I show that $uv^2xy^2z$ is not in $L$?
$\{ 0^{2^n} \mid n \ge 0 \}$ is not context-free.
To show this, you can use any of the usual techniques to show that a language is not context-free, such as the pumping lemma for context-free languages.
The pumping lemma states that if $L$ is context-free, then there exists a pumping length $p$ such that for all $n \ge p$, there exist $u,v,x,y,z$ such that $0^{2^n} = uvxyz$ and $|vy| \ge 1$ and for all $k \ge 0$, $uv^kxy^kz \in L$. Take $n = p$: for all $k \ge 0$, $|uv^kxy^kz| = |uxz| + k |vy|$ must be a power of $2$. This is not possible for large $k$, since it would imply that the distance between consecutive powers of $2$ is never more than $|vy|$.
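The unbounded-gap fact the argument relies on is immediate to check:

```python
# Gaps between consecutive powers of 2: 2^(k+1) - 2^k = 2^k, which is
# unbounded, so a fixed |vy| cannot bridge every consecutive pair.
gaps = [2 ** (k + 1) - 2 ** k for k in range(8)]
print(gaps)  # [1, 2, 4, 8, 16, 32, 64, 128]
```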
You can also use Parikh's theorem, which states that the set of possible numbers of occurrences of a letter in a context-free language is semi-linear (i.e. it's of the form $\{a p + b \mid a \in \mathbb{N}, b \in B\} \cup C$ for some integer $p$ and some finite sets $B$ and $C$). For a language with a singleton alphabet, this means that the set of lengths of words in the language is semi-linear, which $\{2^n \mid n\in\mathbb{N}\}$ isn't.
The integral is exactly the fractional part of $100!\,e$, or in other words $100!\ e-\lfloor100!\ e\rfloor\approx0.00999901019\ldots$
Apply integration by parts to the integral $I_n=\int_0^1e^{1-t}t^n\,dt$ (it's nicer
not to pull the $e$ out to the front) and we find for $n\geq1$, $$I_n=-1+nI_{n-1}$$
This gives us $$I_{100}=-1+100[-1+99[-1+98[-1+\cdots+2[-1+1I_0]\cdots]]]$$
$I_0$ is a straightforward computation: $e-1$. So
This gives us $$I_{100}=-1+100[-1+99[-1+98[-1+\cdots+2[-1+e-1]\cdots]]]$$
Here is a nice observation. Once this is multiplied out, it (clearly?) simplifies to $100!\,e-N$ for some
integer $N$. A graphical examination of the integral reveals that $I_{100}$ is somewhere between $0$ and $1$. (You could prove this using the fact that $e^{1-t}t^{100}=e^{1-t}tt^{99}\leq t^{99}$ on $[0,1]$.) So $N$ must equal the integer part of $100!\,e$, leaving $I_{100}$ to be the fractional part.
It's interesting to note that since $I_n\to0$ as $n\to\infty$, the fractional part of $n!\,e$ must approach zero; that is, $n!\,e$ gets closer and closer to being an integer. (Although I suppose that is obvious if we consider the usual series expansion for $e$.)
For computational purposes, we can use this to find a decimal approximation by throwing out the first $100$ terms or so (which are all integers) of the series expansion for $100!\, e$.
$$
\begin{align}
\int_0^1e^{1-t}t^{100}\,dt &
= 100!\, e-\lfloor100!\,e\rfloor\\
& = \sum_{n=101}^{\infty}\frac{100!}{n!}
\end{align}
$$
This is the series that bgins has found with a slightly different argument. At first, this series converges faster than David Mitra's alternating series. It is correct to at least 17 decimal places after only 8 partial summands. David's requires 18 partial summands to get that much accuracy. However since both series have a ratio of order $1/n$ and David's series is alternating, I think that in the long run for very high accuracy demands, his series might be better.
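For instance, the tail series can be summed exactly with rational arithmetic (a small sketch of my own, confirming the decimal value quoted at the top):

```python
from fractions import Fraction

# Partial sums of sum_{n=101}^inf 100!/n! = 1/101 + 1/(101*102) + ...
s = Fraction(0)
term = Fraction(1)
for n in range(101, 111):   # 10 terms is already far more than enough
    term /= n               # term is now 100!/n!
    s += term
print(float(s))             # 0.00999901019...
```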
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \ldots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1} - 2$ non-constant polynomials in $R$ dividing it.
But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. No, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
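For what it's worth, the check above can be written out in a few lines (my own sketch):

```python
# Positions in miles from the station, as a function of clock time t
# (hours after noon); train B starts 2 hours later at 60 mph.
def pos_A(t):
    return 40 * t

def pos_B(t):
    return 60 * (t - 2) if t >= 2 else 0

print(pos_A(6), pos_B(6))  # 240 240  -> B draws level with A at 6 pm
print(pos_A(4), pos_B(4))  # 160 120  -> at 4 pm A is still 40 miles ahead
```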
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term $e^{(r_1-r_2)x}$ come from?
It seems like it is taken out of the blue, but it yields the desired result. |
Hi,
I asked this question already on math.stackexchange but got no answer (link: https://math.stackexchange.com/questions/22155).
Our setting: A Euclidean vector bundle $(E, h, \nabla^E)$ over a Riemannian manifold $(M,g)$ is said to have bounded geometry if the norms of the curvature tensor $R^E$ and of all its covariant derivatives are bounded. The manifold itself is said to have bounded geometry if the tangent bundle $TM$, equipped with the manifold metric and the Levi-Civita connection, has bounded geometry and additionally the metric is complete and the injectivity radius fulfills $\operatorname{inj rad}(x) > \epsilon > 0$ for all $x$.
The question: We have a Riemannian manifold $(M,g)$ of bounded geometry and some isometric embedding $\iota\colon M \to R^N$. Now we can look at the normal bundle $NM$ over $M$, equipped with the pull-back metric and pull-back connection. Does this bundle have bounded geometry? My intuition says "yes".
I tried it with local computations using the corresponding projection matrices but got nowhere.
I use the fact that a manifold has bounded geometry if and only if the Christoffel symbols of the Levi-Civita connection and all their derivatives are uniformly bounded functions when computed in Riemannian normal coordinates (where the radii of the coordinate balls are the same for all points p). An analogous statement holds for vector bundles of bounded geometry, where the frames we use for the computation of the Christoffel symbols are acquired by choosing an orthonormal basis for the bundle at the point p and then parallel translating it along the radial geodesics in a normal coordinate ball (also with fixed radius for every point).
So if $\partial_{x_i}$ are the normal coordinates and $\{n_i\}$ is the orthonormal frame for the normal bundle we have the following expression: $\Gamma_{ij}^{k, TM} = g^{kl}\langle \nabla_{\partial_{x_i}} \partial_{x_j}, \partial_{x_l}\rangle$ and analogously $\Gamma_{ij}^{k, NM} = h^{kl}\langle \nabla_{\partial_{x_i}} n_j, n_l\rangle$, where $g^{ij}$ is as usually the inverse matrix of the matrix of the metric g (computed w.r.t. the normal coordinates), $h_{ij}$ the matrix of the metric of the normal bundle and $\langle \cdot, \cdot \rangle$ is the Euclidean metric of $R^N$ (we pushed the coordinates $\partial_{x_i}$ and the frame $\{n_i\}$ forward via the embedding $M \to R^N$). Since the frame we use for the normal bundle is orthonormal, we have $h_{ij}=\delta_{ij}$ and so the formula for the Christoffel symbols of the normal bundle reduces to $\Gamma_{ij}^{k, NM} = \langle \nabla_{\partial_{x_i}} n_j, n_k\rangle$.
For the matrices of the projections $p^{TM}: TR^N \to TM$, resp. $p^{NM}$ we get the following expressions w.r.t. the standard coordinates $\{e_i\}$ of $R^N$: $(p^{TM})_{ij} = g^{kl}\langle e_j, \partial_{x_l}\rangle \langle e_i, \partial_{x_k} \rangle$ and analogously $(p^{NM})_{ij} = h^{kl}\langle e_j, n_l\rangle \langle e_i, n_k \rangle$.
Now I want to deduce that if the Christoffel symbols of $TM$ and all their derivatives are uniformly bounded (i.e. the manifold has bounded geometry), then the entries of the projection matrix $p^{TM}$ and all their derivatives are uniformly bounded (which automatically gives the uniform boundedness of the entries of $p^{NM}$ and their derivatives). And from here I want to deduce the uniform boundedness of the Christoffel symbols of $NM$ and all their derivatives. But I do not see how, using the equations I got so far.
Can we get further equations / information which give the desired results? Or maybe there is some other way to answer the question posed in the third paragraph (not using ugly local computations)? I would be happy with any solution.
Thanks, Alex
I've come across a question of which I can't fully understand the solution:
A space station is located in a gravity-free region of space. It consists of a large diameter, hollow thin-walled cylinder which is rotating freely about its axis. The cylinder is of radius $r$ and mass $M$.
Radial spokes, of negligible mass, connect the cylinder to the centre of rotation. If astronaut (mass $m$) now climbs halfway up a spoke and lets go, how far along the cylinder circumference from the base of the spoke will the astronaut hit the cylinder? With the astronaut at the centre, the cylinder spins with angular velocity $\omega_0^2 = g/r$.
Attempt at a solution:
With the man at the centre, the moment of inertia of the system is $I=Mr^2$, spinning at $\omega_0$.
With the man at $r/2$, the moment of inertia of the system will be: $$I' = Mr^2 + m(r/2)^2 = (M + m/4)r^2 $$
By conservation of angular momentum, the cylinder will now be spinning at angular velocity $$\omega' = \frac{{Mr^2}}{{(M+m/4)r^2}}\omega_0 $$ The tangential velocity of the astronaut at the point of release is $v_\mathrm{man}=\omega'(r/2)$.
However, when the man lets go, the MoI of the system returns to $I$, spinning at $\omega_0$.
The man sweeps out an angle $\alpha = \pi/3$ along the circumference, and travels a distance $S_\mathrm{man} = \sqrt{r^2-(r/2)^2} = (\sqrt3/2)r$, taking a time $S_\mathrm{man}/v_\mathrm{man} = \frac{\sqrt3}{\omega_0}\left(1+\frac{m}{4M}\right)$.
In this time, the base of the spoke travels a distance $S_\mathrm{spoke} = v_\mathrm{spoke}\,t = \omega_0 r t = \sqrt3\,r\left(1+\frac{m}{4M}\right)$, and so the difference in distance is $$S_\mathrm{spoke} - S_\mathrm{man} = \left(\sqrt3\left(1+\frac{m}{4M}\right) - \frac{\pi}{3}\right)r.$$
However, the answer given is $(\sqrt3 - \frac{{\pi}}{{3}})r$.
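For what it's worth, the two expressions differ only by the $m/(4M)$ factor; a quick numerical comparison (my own addition, not part of the question) shows the attempt's answer reduces to the book's answer exactly in the limit $m \ll M$:

```python
from math import sqrt, pi

def attempt_coeff(m_over_M):
    """Coefficient of r in the attempt's final answer."""
    return sqrt(3) * (1 + m_over_M / 4) - pi / 3

book_coeff = sqrt(3) - pi / 3          # coefficient of r in the given answer

for ratio in (1.0, 0.1, 0.0):          # astronaut-to-cylinder mass ratios
    print(ratio, attempt_coeff(ratio), book_coeff)
```

So the discrepancy between the two answers is entirely in how the cylinder's spin after release is treated.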
The $\lambda$ value used in the original paper is arbitrary, but you can estimate that by assuming (in the simplest case) 2 assets and running the following model:
$$\sigma^2_{12,t+1} = \lambda\,\sigma^2_{12,t} + (1-\lambda)\,r_{1,t}\,r_{2,t},$$
given $r_{1,t}$ and $r_{2,t}$ respectively as the returns of asset 1 and 2, and $\sigma^2_{12,t}$ the covariance at time $t$.
Solving for $\lambda$ as the unique unknown variable, you can find the $\lambda$ estimate.
To compute the correlation forecast, replace $\sigma^2_{12,t+1}$ in:
$$\rho_{t+1} = \frac{\sigma^2_{12,t+1}}{\sigma_{1,t+1}\,\sigma_{2,t+1}},$$
where $\rho_{t+1}$ is the forecast of the correlation 1 period ahead.
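A minimal sketch of the recursion above (the daily $\lambda = 0.94$ is the RiskMetrics convention, assumed here for illustration, and the synthetic return series are made up with a built-in correlation of 0.8):

```python
import numpy as np

def ewma_corr(r1, r2, lam=0.94):
    """One-step-ahead EWMA correlation forecast from two return series."""
    cov = r1[0] * r2[0]                       # seed covariance and variances
    var1, var2 = r1[0] ** 2, r2[0] ** 2
    for x, y in zip(r1[1:], r2[1:]):
        cov = lam * cov + (1 - lam) * x * y   # the recursion from the text
        var1 = lam * var1 + (1 - lam) * x * x
        var2 = lam * var2 + (1 - lam) * y * y
    return cov / np.sqrt(var1 * var2)

rng = np.random.default_rng(1)
z = rng.normal(size=500)
r1 = 0.01 * z
r2 = 0.01 * (0.8 * z + 0.6 * rng.normal(size=500))  # true correlation 0.8
print(ewma_corr(r1, r2))
```

Because the same weights are used for the covariance and both variances, the resulting $\rho$ is always in $[-1, 1]$, though as a short-memory estimator it is noisy around the true value.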
Here is the reference to the original paper by JP Morgan; I suggest you read the paper and estimate $\lambda$ again, since its value depends on the volatility of returns and changes over time.
The authors used a 20-day returns period to estimate asset volatility and returns and the choice of such time period, again, was arbitrary.
Hope this helps.
I'm looking for advice, or references, for a change of basis to my dependent variables that leads to a less computationally expensive scheme when solving a system of coupled polynomial equations. Below is an explanation of the problem, which will be somewhat involved.
The general idea is this. I have a system of equations I'm solving for the dependent variables $a=(a_1,...,a_N)$, all of them are a function of a parameter $\mu$. That is, my system takes the form ( I will call this method 1)
$$F(a; \mu)=0.$$
For a fixed (prescribed) value of $\mu\in(0,1)$, I solve this system using a Newton-Raphson method (implemented in SUNDIALS), increasing $N$ until my solutions converge to some given precision. For large values of $\mu$, $N$ grows very large and at a certain point becomes very impractical (see my note at the bottom for more about this).
Now, these variables $a_i$ correspond to Fourier coefficients of a physical boundary displacement. In particular, they are Fourier coefficients to two parametric functions $(X(x),Y(x))$, with $X= x+\sum_{n=1}^N a_n \sin n x$, $Y=\sum_{n=1}^N a_n \cos n x$, and $x\in (-\pi,\pi)$.
For a simple example, I have plotted $(X,Y)$, $(x,X(x))$, and $(x,Y(x))$ in black in the figure.
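To make the parametrisation concrete, here is a small sketch; only the functional form is from the question, and the coefficient values are invented for illustration:

```python
import numpy as np

def boundary(a, x):
    """X(x) = x + sum_n a_n sin(nx),  Y(x) = sum_n a_n cos(nx)."""
    n = np.arange(1, len(a) + 1)
    X = x + np.sin(np.outer(x, n)) @ a
    Y = np.cos(np.outer(x, n)) @ a
    return X, Y

a = np.array([0.3, 0.1, 0.05])          # hypothetical Fourier coefficients
x = np.linspace(-np.pi, np.pi, 201)
X, Y = boundary(a, x)
print(X[100], Y[100])                   # at x = 0: X = 0, Y = sum(a)
```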
Okay, so there exists another way to formulate this problem, using a different set of variables that leads to a very different set of equations that returns the same information about the boundary displacement. Using
ginput in Matlab, I can extract the boundary displacements based on results in the literature that employ this other method (call it method 2), to find a set $(U(x),V(x))$, $(x,U(x))$, and $(x,V(x))$, where the independent variable $x$ is exactly the same as before. These are the red plots in the figure.
Now, we note that $(U,V)$ and $(X,Y)$ give approximately the same plot, but $U$ and $V$ are not nearly as localized in space as $X$ and $Y$. This implies that $U$ and $V$ need much lower spectral resolution than $X$ and $Y$.
This motivates me to look for a change of basis in my original problem, to one that needs lower resolution. In particular, this later approach makes me think that I'm working too hard to get at these solutions, and this becomes particularly apparent as I try to describe the system with higher values of $\mu$.
Now, I realize I should just figure out a mapping between the two methods, but this is very difficult due to the fact that both methods use a series of mappings to get to the final set of equations. So, this information is not available.
My question: Is there any general way, or are there any instances where people have found a way, to do a simple transformation of my variables $a$ to converge for smaller $N$? (I'm not sure if this is the same thing as asking about preconditioning the Jacobian of $F$ to allow for more rapid convergence of the Newton-Raphson scheme.) If so, how does one look for this transformation?
I apologize for the vague question, and please feel free to ask any questions of clarification.
Note: I'm actually solving an ODE with a nontrivial set of coefficients on the highest-order derivative, so I'm really solving a system $F(a, t_i)$ at each time step $t_i$. This makes it very important for me to optimize the time taken to solve $F(a)$, as I will do this many, many times.
If I have a random, way too overdetermined system of homogeneous non-linear equations in few variables, what's the better way to find a solution: computing a Gröbner basis or just using linearization?
Thanks
I think this is better suited to be a question on the MSE site. However, I will answer it.
The first thing is that if your system is
random, then the chances it admits at least one solution are extremely small, and it therefore does not make too much sense to "solve" the system. On the other hand, if you already know in advance that your system has a solution (for instance, the system arose by evaluating some polynomials at a point and subtracting the results), there are a couple of techniques that could be used depending on how large the number of equations $m$ is compared to the number of variables $n$ (I'll assume for simplicity that your equations are quadratic).
Intuition says that the more equations you have, the easier the system will be to solve. This is because equations represent "information" about the solution, and the more information you have, the easier it is to characterize the solution. For instance, if you have many, many more equations than variables (something like $m = \binom{n+2}{2}$), then linearization will work and will actually give you the solution in polynomial time. However, if you don't have this condition you are not guaranteed linearization will succeed.
On the other hand, if the ratio $m/n$ is equal to $C$, some work on semi-regular sequences shows that if you take a random system with these characteristics, then the fundamental parameter determining the running time of any Groebner basis algorithm (the Degree of Regularity) will be approximately equal to $$\left(C - \frac{1}{2} - \sqrt{C(C-1)}\right)n$$ (notice this is a decreasing function in $C$, as expected). Using the relation between the arithmetic complexity of general Groebner basis algorithms and this Degree of Regularity, one can show that this complexity is exponential in $n$ if $m = O(n)$, and sub-exponential if $m = \tilde{O}(n)$.
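Plugging numbers into that estimate makes the trend explicit (a quick sketch of my own):

```python
from math import sqrt

def dreg_estimate(C, n):
    """Approximate degree of regularity for m = C*n random quadratics in n variables."""
    return (C - 0.5 - sqrt(C * (C - 1))) * n

# The coefficient decreases as C grows: more equations -> lower degree -> easier.
for C in (1.5, 2.0, 4.0):
    print(C, dreg_estimate(C, 100))
```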
From your question I understand that you are talking about non-linear systems, so you may add that to your original question. Both methods you've suggested are good, but Gröbner bases with lex order are (I believe) better, especially if you have a state-of-the-art implementation of the F5 algorithm (for instance, if you have Magma). If you have too many equations and unknowns the Gröbner method becomes very costly. For the case of quadratic systems over finite fields you may also consider the XL algorithm (which is better than simple linearization). If your system is over the real numbers see
Solving Polynomial Equations: Foundations, Algorithms, and Applications. A. Dickenstein and I.Z. Emiris, eds., vol. 14 in: Algorithms and Computation in Mathematics, 2005.
I need some help with a big O proof. I think I have a proof but I feel like some of the steps aren't logically compatible.
The Question: For all functions $f,g$ from $\mathbb{N}$ to $\mathbb{R}$, $\log(f(n))\in \mathcal{O}(g(n)) \implies f(n) \in \mathcal{O}(3^{g(n)})$
I basically took the definition of big O and took the last part of the inequality $\log(f(n)) \le cg(n)$, so $f(n) \le e^{cg(n)} \le 3^{cg(n)}$. I know I'm close, but I have that $c$ in the exponent. Any help?
Can I do something like $3^{cg(n)} \le c_0^{\,g(n)-cg(n)}\, 3^{cg(n)}$ (the $c_0$ from the definition for the part I want to prove)? I know the definition holds for some $c$ since it is an if-then proof, but I can't choose $c$ since it's existentially quantified.
FYI (def of Big O I used): $f(n)\in\mathcal{O}(g(n))\iff \exists c \in \mathbb{R^+}: \exists B \in \mathbb{N}: \forall n \in \mathbb{N}, n \ge B \implies f(n) \le cg(n)$ |
Let $R(x)$ be a remainder upon dividing $x^{44}+x^{33}+x^{22}+x^{11} +1$ by the polynomial $x^4 +x^3 +x^2 +x +1$. Find: $R(1)+2R(2)+3R(3)$. Answer provided is $0$
closed as off-topic by user26857, Claude Leibovici, user91500, user1551, user223391 Jul 21 '16 at 4:45
Hint:
$$x^{55}-1=(x^{11}-1)(x^{44}+x^{33}+x^{22}+x^{11}+1)$$
$$x^{5}-1=(x-1)(x^{4}+x^{3}+x^{2}+x+1)$$
$$\frac{x^{44}+x^{33}+x^{22}+x^{11}+1}{x^{4}+x^{3}+x^{2}+x+1}=\frac{x^{55}-1}{x^{5}-1}\times \frac{x-1}{x^{11}-1}$$
$$x^{44}+x^{33}+x^{22}+x^{11}+1=\frac{x^{55}-1}{x^{11}-1}=\frac{(x^5-1)\sum_{k=0}^{10}x^{5k}}{x^{11}-1}\\=\frac{x^5-1}{x-1}\cdot\frac{\sum_{k=0}^{10}x^{5k}}{\sum_{k=0}^{10}x^k}=(x^4+x^3+x^2+x+1)\cdot\color{blue}{\frac{\sum_{k=0}^{10}x^{5k}}{\sum_{k=0}^{10}x^k}}$$
The blue fraction is actually a polynomial because the only eleventh root of $1$ which is also a fifth root of $1$ is $1$ itself. Or, if you prefer, because $\operatorname{gcd}(x^{11}-1,x^5-1)$ is obviously $x-1$.
The thing that makes this easy is $x^4 + x^3 + x^2 + x + 1$ divides $x^5 - 1$. Thus, if $$\begin{align} f(x) &\equiv g(x) & &\pmod{x^5 - 1} \\ g(x) &\equiv h(x) & &\pmod{x^4 + x^3 + x^2 + x + 1} \end{align}$$ then $$ f(x) \equiv h(x) \pmod{x^4 + x^3 + x^2 + x + 1} $$ and it's easy to find a low degree $g(x)$ to use as the intermediate, since $x^5 \equiv 1 \pmod{x^5-1}$.
But even without that, you can use the usual tricks for modular arithmetic; e.g. some of the content of How do I compute $a^b\,\bmod c$ by hand? applies here too.
Hint $\,\ (x\!-\!1)f\, =\, x^{\large 5}-1\,\ $ for $\,\ f =\, x^{\large 4}+x^{\large 3}+\cdots+1,\,\ $ therefore
$\qquad\quad \begin{eqnarray}{\rm mod}\ f\!:\,\ x^{\large \color{#c00}5}\equiv 1\ \Rightarrow\ &&x^{\large 4+\color{#c00}5I}+\!&&x^{\large 3+\color{#c00}5J}+\!&&x^{\large 2+\color{#c00}5K}+\!&&x^{\large 1+\color{#c00}5L}+\!&&x^{\large\color{#c00} 5M}\\ \equiv\,\ &&x^{\large 4}\quad + &&x^{\large 3}\quad + &&x^{\large 2} \quad + &&x\quad\ + &&1\,\equiv\, f\,\equiv\, 0\end{eqnarray}$
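For anyone who wants a machine check of these hints, the division can be carried out with sympy (illustration only):

```python
from sympy import symbols, div

x = symbols('x')
p = x**44 + x**33 + x**22 + x**11 + 1
d = x**4 + x**3 + x**2 + x + 1
q, r = div(p, d, x)
# The remainder is identically zero, so R(x) = 0 and
# R(1) + 2 R(2) + 3 R(3) = 0, matching the provided answer.
assert r == 0
```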
Long division of $x^{44}+x^{33}+x^{22}+x^{11}+1$ by $x^4+x^3+x^2+x+1$ gives a quotient of $$ x^{40}-x^{39}+x^{35}-x^{34}+x^{30}-x^{28}+x^{25}-x^{23}+x^{20}-x^{17}+x^{15}-x^{12}+x^{10}-x^{6}+x^{5}-x+1$$ and no remainder. It is not a lot of work: only five lines. |
On the history
See Butcher: A history of Runge-Kutta methods
In summary, people (Nystroem, Runge, Heun, Kutta,...) at the end of the 19th century experimented with success in generalizing the methods of numerical integration of functions in one variable $$\int_a^bf(x)dx,$$ like the Gauss, trapezoidal, midpoint and Simpson methods, to the solution of differential equations, which have an integral form $$y(x)=y_0+\int_{x_0}^x f(s,y(s))\,ds.$$
Carl Runge in 1895 [1] came up with ("by some curious inductive process" - "auf einem eigentümlich induktiven Wege", wrote Heun 5 years later) the 4-stage 3rd order method\begin{align}k_1&=f(x,y)Δx,\\k_2&=f(x+\tfrac12Δx,y+\tfrac12k_1)Δx\\k_3&=f(x+Δx,y+k_1)Δx\\k_4&=f(x+Δx,y+k_3)Δx\\y_{+1}&=y+\tfrac16(k_1+4k_2+k_4)\end{align}
[1] "Über die numerische Auflösung von Differentialgleichungen", Math. Ann. 46, p. 167-178
Inspired by this, Karl Heun in 1900 [2] explored methods of the type$$\left.\begin{aligned}k^i_m &= f(x+ε^i_m,y+ε^i_mk^{i+1}_m)Δx,~~ i=1,..,s,\\ k^{s+1}_m&=f(x,y)Δx\end{aligned}\right\},~~ m=1,..,n\\y_{+1}=y+\sum_{m=1}^n\alpha_mf(x+ε^0_mΔx,y+ε^0_mk^1_m)Δx$$He computed the order conditions by Taylor expansion and constructed methods of this type up to order 4; however, the only today recognizable Runge-Kutta methods among them are the order-2 Heun-trapezoidal method and the order-3 Heun method.
[2] "Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen", Z. f. Math. u. Phys. 45, p. 23-38
Wilhelm Kutta, in his publication one year later in 1901 [3], considered the scheme of Heun wasteful in the number of function evaluations and introduced what is today known as explicit Runge-Kutta methods, where each new function evaluation potentially contains all previous values in the $y$ update. \begin{align}k_1&=f(x,y)Δx,\\k_m&=f(x+c_mΔx, y+a_{m,1}k_1+...+a_{m,s-1}k_{s-1})Δx,&& m=2,...,s\\[0.5em]y_{+1}&=y+b_1k_1+...+b_sk_s\end{align}He computed order conditions and presented methods up to order $5$ in parametrization and examples. He especially noted the 3/8 method for its symmetry and small error term and the "classical" RK4 method for its simplicity in using always only the last function value in the $y$ updates.
[3] "Beitrag zur näherungsweisen Lösung totaler Differentialgleichungen", Z. f. Math. u. Phys. 46, p. 435-453

On the order dependence of the performance
The Euler method has global error order 1. This means that to get an error level of $10^{-8}$ (on well-behaved example problems) you will need a step size of $h=10^{-8}$. Over the interval $[0,1]$ this requires $10^8$ steps with $10^8$ function evaluations.
The classical RK4 method has error order 4. To get an error level of $10^{-8}$ you will thus need a step size of $h=0.01$. Over the interval $[0,1]$ this requires $100$ steps with $400$ function evaluations.
If you decrease the step by a factor of $10$ to $h=0.001$, the RK4 method will need $1000$ steps with $4000$ function evaluations to get an error level of $10^{-12}$. This is still much less effort than used in the Euler example above, with a much better result.
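These step counts are easy to reproduce. Here is a minimal sketch on the test problem $y'=y$, $y(0)=1$ over $[0,1]$ (my own choice of test equation), whose exact value at $t=1$ is $e$:

```python
import math

def f(t, y):          # test problem y' = y, exact solution e^t
    return y

def euler(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        k1 = h * f(t, y)
        k2 = h * f(t + h/2, y + k1/2)
        k3 = h * f(t + h/2, y + k2/2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

exact = math.e
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 100) - exact)  # h = 0.01: error ~ 1e-2
err_rk4   = abs(rk4(f, 1.0, 0.0, 1.0, 100) - exact)    # same h: error roughly 1e-10
```

At equal step size the fourth-order method is better by many orders of magnitude, which is the whole point of the comparison above.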
Using double precision floating point numbers you will not get a much better result with any method using a fixed step size, as smaller step sizes result in accumulating floating point noise that dominates the error of the method. |
An equation such as Gauss' law $$ \nabla \cdot E(x) = \frac{\rho(x)}{\varepsilon_0} $$ is easy to interpret in physical terms. If we use the divergence theorem we have $$ \int_{\partial \Omega} E\cdot \hat{n} \ dx = \int_{\Omega} \frac{\rho(x)}{\varepsilon_0} dx $$ which says that the electric flux out of the domain $\Omega$ is equal to the total amount of charge $\rho$ inside $\Omega$.
Now what about the Helmholtz equation which governs time-harmonic acoustic pressure waves:
$$ \Delta p + k^2 p = 0 $$ where $k$ is the wavenumber. Is it possible to give as direct an interpretation for this equation as it was for Gauss' law? Things were really clear in the case of Gauss' law because we could use the divergence theorem to lead to an integral equation that relates the strength of a variable leaving a domain to the amount of another variable inside the domain.
The same approach can be taken in thermodynamics, i.e. relating the amount of heat flux out of a body to the amount of energy inside the body.
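For what it's worth, the same divergence-theorem manipulation can at least be carried out formally for the Helmholtz equation (a sketch paralleling the Gauss-law computation above):

```latex
\int_{\Omega} \Delta p \, dx + k^2 \int_{\Omega} p \, dx = 0
\quad\Longrightarrow\quad
\int_{\partial \Omega} \nabla p \cdot \hat{n} \, dS = -k^2 \int_{\Omega} p \, dx
```

i.e. the net flux of the pressure gradient out of $\Omega$ equals $-k^2$ times the total pressure inside $\Omega$, though whether this counts as a satisfying physical interpretation is exactly what is being asked.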
Can a similar clear interpretation of the Helmholtz equation be given? |
The equation of the trajectory of a particle can be obtained by eliminating the variable $t$, as we do for the trajectory of a parabolic projectile. So, my question is: if the equation of trajectory of a parabolic projectile is possible, then other simple ones like the simple pendulum should also be possible. So, I defined the degrees of freedom of a simple pendulum using two variables $t, \theta$. Here, $t$ is the time duration of oscillation and $\theta$ is the angle made by the pendulum with respect to its mean equilibrium position. The function $\theta(t)$ is assumed to return the angle $\theta$ as a function of the time variable $t$.
So, first I've to define the coordinates and draw a suitable figure for that. So, here's my take over it.
Sorry for the bad design.
Note: I want that function in either Classical Mechanics or Newtonian Mechanics Only.
I'm presenting the failed try of the Lagrangian Mechanics Part.
Here, my attack was based on calculating the equation of motion of the pendulum using the Lagrangian separately for the x-axis and y-axis and then eliminating $t$ from them.
$$ \begin{cases} T_x = \dfrac 12 m (l \dot\theta \cos \theta)^2 \\ T_y = \dfrac 12 m (l \dot\theta \sin \theta)^2 \end{cases} $$
Using them and working out the equations of motion separately, we get:
$$ \text{For x-axis} \\ U = - mgl \cos \theta \\ L(\theta, \dot \theta, t) = T_x - U \\ = \dfrac{l^2 \dot \theta^2 \cos^2 \theta m}{2} + mgl \cos \theta \\ \dfrac{d}{dt} \left ( \dfrac{\partial L}{\partial \dot \theta} \right) - \dfrac{\partial L}{\partial \theta} = 0 \\ \Rightarrow \dfrac{d}{dt} \left ( l^2 \dot \theta \cos \theta m \right) =- mgl \sin \theta\dots(1) $$
$$\text{For y-axis} \\ U = 0\\ L(\theta, \dot \theta, t) = T_y \\ = \dfrac{l^2 \dot \theta^2 \sin^2 \theta m}{2} \\ \dfrac{d}{dt} \left ( \dfrac{\partial L}{\partial \dot \theta} \right)=0 \\ \Rightarrow l^2 \dot \theta \sin^2 \theta m = C\\ \text{Where, C is a constant}$$
Since this does not involve $t$, I don't know how to get $\theta$ as a function of time; I need help here.
Edit: There is another big problem here: there is no $t$ left to eliminate ;( Edit 2: I've defined the motion of the pendulum in polar coordinates because it's easy to state its position at an instant in the format $(l, \theta)$. I then separated the motion into rectangular coordinates so that I could eliminate $t$, but soon realized that Lagrangians don't work like that. So I'm updating my attempt to a better, improved version (if I get one) |
I have two TeX files that I need to compare, and I thought texdiff could do this for me. Unfortunately, in the manual page, it reads
For texdiff to work, the following LaTeX code must be inserted in the preamble of the LaTeX document:
\usepackage{xcolor} \usepackage{ulem} \usepackage{changebar} \newcommand\TLSins[1]{\cbstart{}\textcolor{ins}{\uline{#1}}\cbend{}} \newcommand\TLSdel[1]{\cbdelete{}\textcolor{del}{\sout{#1}}}
Now, I'm a bit lost as I have never actually used plain TeX and using LaTeX code inside a TeX file is obviously not going to work.
Example:
Let's consider a very simple plain TeX example. If you use a file with the following content:
Let $D$ be a subset of $\bf R$ and let $f \colon D \to {\bf R}$ be a real-valued function on $D$. The function $f$ is said to be {\it continuous} on $D$ if, for all $\epsilon > 0$ and for all $x \in D$, there exists some $\delta > 0$ (which may depend on $x$) such that if $y \in D$ satisfies $$|y - x| < \delta$$ then $$|f(y) - f(x)| < \epsilon.$$\bye
With pdftex file.tex this can easily be converted to a PDF.
Now, if you make a change in the file and store it under a different name, you can run texdiff file1.tex file2.tex diff.tex and you will have the following content inside diff.tex:
Let $D$ be a subset of $\bf R$ and let $f \colon D \to {\bf R}$ be a real-valued function on $D$. The function $f$ is said to be {\it continuous} on $D$ if, for all $\epsilon > 0$ and for all $x \in D$, there exists some $\delta > 0$ \protect\TLSins{(which may depend on $x$)} such that if $y \in D$ satisfies $$|y - x| < \delta$$ then $$|f(y) - f(x)| < \protect\TLSdel{\epsilon$$ should hold.}\protect\TLSins{\epsilon.$$}\bye
The commands \TLSins and \TLSdel don't exist in plain TeX, and I cannot simply put \usepackage and \newcommand inside this file. This would not work.
Question: What must the header look like to define the required commands correctly in plain TeX? |
Two things to note here. First, subtracting inflation from the nominal interest rate is an approximation to the real interest rate, but only in discrete time. Furthermore, the "true" relationship it's approximating isn't division of one rate by the other--you have to add 1 to all three of your quantities (inflation, real interest rate, and nominal interest rate) first to get the true relationship.
Here's a brief overview. Consider the Fisher equation of
$r = i - \pi$
where $r$ is the real interest rate, $i$ is the nominal interest rate, and $\pi$ is the inflation rate.
This equation is often introduced as a linear approximation to the true real rate of interest, given by the equation
$\frac{1 + i}{1 + \pi} = 1 + r$
Let's see how this holds in a discrete time model. Denote your nominal income as $Y$ and the price level as $P$. Your real income is $Y/P$. If all this income is invested in some interest-bearing asset in a discrete time model, your real income in the next time period becomes
$\frac{Y (1 + i)}{P (1 + \pi)}$
and if we want to find a real interest rate that summarizes this change in real income, we would need to write your real income in the next time period as
$\frac{Y}{P} (1 + r)$
which gives us the identity $\frac{1 + i}{1 + \pi} = 1 + r$ that the Fisher equation approximates. However, if we work in continuous time, this breaks down. First, the units don't work out--inflation rates and interest rates are measured in percentage change per year (or some other unit of time), so they cannot be added to the dimension-less number $1$. Second, it turns out that Fisher's approximation is actually completely correct in continuous time. Using derivatives, we define our quantities as follows:
$i = \frac{dY}{dt} \frac{1}{Y}$
$\pi = \frac{dP}{dt} \frac{1}{P}$
$r = \frac{d(Y/P)}{dt} \frac{1}{(Y/P)}$.
Using the quotient rule, we can rewrite $r$ as
$r = \frac{\frac{dY}{dt}P - \frac{dP}{dt}Y}{P^{2}} \frac{1}{(Y/P)}$
which simplifies to
$r = (\frac{dY}{dt} \frac{1}{P} - \frac{dP}{dt} \frac{Y}{P^{2}}) \frac{P}{Y} = \frac{dY}{dt} \frac{1}{Y} - \frac{dP}{dt} \frac{1}{P} = i - \pi$
which gives us the Fisher equation, no approximations about it!
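A quick numeric illustration of how tight the discrete-time approximation is for small rates (the particular values are arbitrary):

```python
i, pi = 0.05, 0.02                   # nominal rate and inflation, chosen for illustration
r_exact  = (1 + i) / (1 + pi) - 1    # true real rate: (1+i)/(1+pi) = 1+r
r_approx = i - pi                    # Fisher approximation
# r_exact ≈ 0.02941 vs r_approx = 0.03: the approximation overstates
# the real rate by about 0.06 percentage points at these magnitudes.
```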
Note that this assumes continuous compounding from your nominal interest rate. If you instead have a compounding rate of $\tau$, we would define $i$ differently, and our equation becomes
$r = \tau \ln(1 + \frac{i}{\tau}) - \pi$.
However, for the purposes of the data you're working with, I am 90% sure that this modification is completely superfluous. If you want to use something more precise than the Fisher equation, you need to know exactly how your data was computed. What price index was used to calculate the inflation rate? Under what assumptions was the nominal interest rate calculated?
In short, while I can't speak to the reasons for your supervisor's recommendation, they're definitely correct that you should subtract inflation rather than divide by it. There's a reason why we use the Fisher equation. |
OK, for real photons there is the following formula when summing over the polarizations:
$$ \sum_{\lambda=\pm}\epsilon^{*\mu}_\lambda\epsilon^\nu_\lambda = -\eta^{\mu\nu}$$
But if I have a matrix element of the form: $$\epsilon^{*\mu}M_\mu\qquad \epsilon^\mu\bar{M}_\mu$$
So when I take the absolute square of that I have: $$|M|^2 = \epsilon^{*\mu}M_\mu\, \epsilon^\nu\bar{M}_\nu = -M_\mu\eta^{\mu\nu}\bar{M}_\nu = -M \cdot \bar{M}$$
But now the absolute square is negative. So I know I have a heavy mistake somewhere here. Can someone help me understand? Thanks a lot. |
I'd recommend "The Art of Molecular Dynamics Simulation" by D. C. Rapaport. The code samples are written in C. I'm not a huge fan of the programming style of the book, but at least it's not FORTRAN. Having said that, my advice would be to take any book where neighbour lists are explained (for instance the Frenkel & Smit book, which I guess is what you are using now) and just implement the algorithm, which is usually written in pseudo-code.
If you want a TL;DR, there are several types of neighbour lists. All of these rely on your interaction potential being 0 outside a certain range $r_c$.
1. Verlet lists ($\mathcal{O}(N^2)$ - $\mathcal{O}(N^{3/2})$ but with a small-ish prefactor): do an $N^2$ sweep where for each particle you fill an array of neighbours that are closer than $r_c + r_v$, where $r_v$ is a parameter, and save the current position of each particle (let's call it $\vec{r}_i(t_0)$). Evolve your simulation by using the list of neighbours to compute the forces. After each integration step check whether, for any particle $i$, $|\vec{r}_i(t) - \vec{r}_i(t_0)| \geq r_v / 2$. If it is, then update all the lists (see step 1).
2. Cell lists ($\mathcal{O}(N)$): Divide your simulation box in cells of linear size $l > r_c$. Use linked lists to assign particles to the cells. For each particle $i$, the list of neighbours is given by all the particles that are in $i$'s cell and in each of the (8 in 2D and 26 in 3D) neighbouring cells. Calculate the forces between $i$ and its neighbours. After each integration step check whether particles have crossed cell boundaries and update the data structure accordingly.
3. Verlet lists built with cell lists (usually the best choice, $\mathcal{O}(N)$ with a smaller prefactor than just using cells): Step 1 of the "Verlet lists" item is carried out by using cell lists. The linked lists that store the cell data structures are kept updated throughout the simulation.
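The cell-list bookkeeping described above can be sketched as follows (a toy implementation of my own, assuming a cubic periodic box and a dict instead of true linked lists; it returns pairs rather than per-particle lists):

```python
import numpy as np

def cell_list_pairs(pos, box, r_cut):
    """Find all pairs closer than r_cut in a cubic periodic box of side `box`
    by binning particles into cells of linear size >= r_cut and scanning
    only neighbouring cells: O(N) bookkeeping instead of an O(N^2) sweep."""
    n_cells = max(1, int(box // r_cut))
    cell = box / n_cells
    # Assign each particle to a cell (dict: cell index tuple -> particle indices).
    cells = {}
    for i, c in enumerate(map(tuple, (pos // cell).astype(int) % n_cells)):
        cells.setdefault(c, []).append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        # Scan the cell itself and its 26 neighbours (with periodic wrap).
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % n_cells, (cy + dy) % n_cells, (cz + dz) % n_cells)
                    for i in members:
                        for j in cells.get(nb, []):
                            if j <= i:          # count each pair once
                                continue
                            d = pos[i] - pos[j]
                            d -= box * np.round(d / box)   # minimum image
                            if d @ d < r_cut**2:
                                pairs.add((i, j))
    return pairs
```

In a real MD code you would keep the cell assignment incremental (particles only move between adjacent cells each step) instead of rebuilding it, and feed the result into the Verlet-list construction of point 3.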
The reason why using Verlet lists instead of cell lists is (in MD simulations) better is that the average number of neighbours is smaller for the former than the latter. The difference is due to the fact that, for each particle, cell lists give you a list of neighbours that are in a volume $(3l)^3$, which, if $r_v$ is chosen wisely, is much larger than the volume of the Verlet sphere ($4/3 \pi (r_c + r_v)^3$). Therefore, on average, you compute much fewer distances with Verlet lists. |
I am looking at two non-zero matrices, $A$ and $B$, both over $ \mathbb{R}$ and of dimension $n \times n$. I think I've proven that their product $AB$ can be equal to zero by the following:
In the first entry of $AB=0$, which is $(AB)_{11} = \sum_{i=1}^{n} a_{1i}b_{i1}$, there exists some term $a_{1k}b_{k1} = -\sum_{i \neq k} a_{1i}b_{i1}$. We just induce this for all remaining entries $(AB)_{ij}$ of $AB$.
Assuming that's a sufficient (and sufficiently elegant [I'm new]) proof, is there a general form of $A$ or $B$? My ultimate goal is to show that $BA \neq 0$ for all such $A$ and $B$, and I think knowing the general form of one of these matrices would help.
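A quick numerical sanity check (small $2\times 2$ examples of my own choosing) confirms that $AB=0$ is possible with both factors nonzero, and also shows that the goal $BA \neq 0$ cannot hold for *all* such pairs, since taking $B = A$ with $A^2 = 0$ makes both products vanish:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])
assert (A @ B == 0).all()   # AB = 0 although A != 0 and B != 0
assert (B @ A != 0).any()   # here BA = [[0, 1], [0, 0]] != 0
assert (A @ A == 0).all()   # but with B = A, both AB and BA are zero
```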
(Also, is it appropriate to ask more than one question about this same system on this same thread?) |
This task is more complex than the task to solve a quadratic equation, for example, and one must master a significant portion of a textbook – such as Georgi's textbook – and perhaps something beyond it to have everything he needs.
For the 8-dimensional representation of $SU(3)$, things simplify because it's the "adjoint rep" of $SU(3)$ – the vector space that formally coincides with the Lie algebra itself. And the action of the generator $G_i$ on the basis vector $V_j=G_j$ of the adjoint representation is given by $$ G_i (V_j) = [G_i,V_j]= \sum_k f_{ij}{}^k G_k $$This implies that the structure constants $f$ directly encode the matrix elements of the generator $G_i$ with respect to the adjoint representation – $j$ and $k$ label the row and the column, respectively.
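To see this formula in action numerically, here is a check for $su(2)$ rather than $su(3)$ (chosen only because its structure constants $f_{ijk} = \epsilon_{ijk}$ are short to write down):

```python
import numpy as np

# Structure constants of su(2): f_{ijk} = epsilon_{ijk} (Levi-Civita symbol)
eps = np.zeros((3, 3, 3))
for a, b, c, s in [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
                   (0, 2, 1, -1), (2, 1, 0, -1), (1, 0, 2, -1)]:
    eps[a, b, c] = s

# Adjoint matrices: (ad G_i)_{kj} = f_{ij}^k, i.e. k labels rows, j columns
ad = np.array([eps[i].T for i in range(3)])

# The matrices reproduce the algebra: [ad_1, ad_2] = f_{12}^k ad_k = ad_3
comm = ad[0] @ ad[1] - ad[1] @ ad[0]
assert np.allclose(comm, ad[2])
```

The same recipe, with the 8 Gell-Mann structure constants in place of $\epsilon_{ijk}$, produces the $8\times 8$ adjoint matrices of $SU(3)$ discussed above.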
The structure constants $f$ determining the commutators may be extracted from all the roots. The whole mathematical structure is beautiful but the decomposition of the generators under the Cartan subalgebra has several pieces, and therefore an even greater number of different types of "pairs of pieces" that appear as the commutators.
Some of the generators $G_i$ ($r$ of them, where $r$ is the rank) are identified with the Cartan generators $u_a$. The rest of the generators $G_j$ are uniquely associated with all the roots.
If you only have the Cartan matrix, you effectively have the inner products of the simple roots only. You first need to get all the roots, and those are connected with the $d-r$ (dimension minus rank) root vectors $r_j$.
The commutators of two Cartan generators vanish, $$[h_i,h_j]=0$$The commutator of a Cartan generator with a non-Cartan generator is given by$$[h_i,G_{r(j)}] = r_i G_{r(j)}$$because we organized the non-Cartan generators as simultaneous eigenstates under all the Cartan generators. Finally, the commutator$$[G_{r(i)},G_{r(j)}]$$is zero if $r_i=r_j$. It is a natural linear combination of the $h_i$ generators if the root vectors obey $r_i=-r_j$. If $r_i+r_j$ is a vector that isn't a root vector, the commutator has to vanish. And if $r_i+r_j$ is a root vector but $r_i\neq \pm r_j$, then the commutator is proportional to $G_{r(i)+r(j)}$ corresponding to this "sum" root vector. The coefficient (mostly sign) in front of this commutator is subtle.
Once you have all these commutators, you have effectively restored all the structure constants $f$, and therefore all the matrix entries with respect to the adjoint representation.
To find matrix elements for a general representation is much more complex. You must first figure out what the representations are. Typically, you want to start with the fundamental (and/or antifundamental) rep, and all others may be obtained as terms composing a direct sum decomposition of tensor products of many copies of the fundamental (and/or antifundamental, if it is different) representation.
All the representations may be obtained from the weight lattice, which contains the root lattice as a sublattice and is similar in character. In fact, the weight lattice is the dual (in the "form" vector space sense) of the root lattice under the natural inner product.
In practice, physicists don't ever do the procedures in this order because that's not how Nature asks us to solve problems. We learn how to deal with the groups we need – which, at some moment, includes all the compact simple Lie groups as the "core" (special unitary, orthogonal, symplectic, and five exceptional), and we learn the reps of these Lie groups – the obvious fundamental ones, the adjoint, and the pattern how to get the more complicated ones.
I am afraid that it doesn't make any sense to fill in any "gaps" if you would need to elaborate upon something because in this way, one would gradually be forced to write another textbook on Lie groups and representation theory as this answer, and I don't think that such a work would be appropriately rewarded – even this work wasn't. ;-) |
One of the postulates of QM states that given a system in a state $|\psi\rangle$ and given an observable $A$ whose eigenstates are $|\phi_i\rangle$, then the state of the system can be expressed as a linear combination of them such that
$$|\psi\rangle=\sum_ic_i|\phi_i\rangle$$
and the probability of the eigenvalue $a_i$ associated to the eigenstate $|\phi_i\rangle$ of coming out when $A$ is measured is determined by $|c_i|^2$.
So far so good. My question is how are the $c_i$ coefficients determined. I mean, if one can only get eigenvalues when doing measurements, and on top of that the system is left on an eigenstate right after that, how can one know the state in which the system is before performing the measurement (and, with that, the probability of getting the different eigenvalues)? |
Added: a Stanford course on neural networks, cs231n, gives yet another form of the steps:
v = mu * v_prev - learning_rate * gradient(x) # GD + momentum
v_nesterov = v + mu * (v - v_prev) # keep going, extrapolate
x += v_nesterov
Here v is velocity aka step aka state, and mu is a momentum factor, typically 0.9 or so. (v, x and learning_rate can be very long vectors; with numpy, the code is the same.) v in the first line is gradient descent with momentum; v_nesterov extrapolates, keeps going. For example, with mu = 0.9:
v_prev v --> v_nesterov
---------------
0 10 --> 19
10 0 --> -9
10 10 --> 10
10 20 --> 29
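The table entries can be verified directly with a one-liner matching the cs231n snippet above:

```python
def nesterov_extrapolate(v_prev, v, mu=0.9):
    # "keep going": extrapolate beyond the plain momentum step
    return v + mu * (v - v_prev)

# Reproduces the table above (mu = 0.9):
assert nesterov_extrapolate(0, 10)  == 19.0
assert nesterov_extrapolate(10, 0)  == -9.0
assert nesterov_extrapolate(10, 10) == 10.0
assert nesterov_extrapolate(10, 20) == 29.0
```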
The following description has 3 terms:
term 1 alone is plain gradient descent (GD), 1 + 2 give GD + momentum, 1 + 2 + 3 give Nesterov GD.
Nesterov GD is usually described as alternating momentum steps $x_t \to y_t$ and gradient steps $y_t \to x_{t+1}$:
$\qquad y_t = x_t + m (x_t - x_{t-1}) \quad $ -- momentum, predictor
$\qquad x_{t+1} = y_t + h\ g(y_t) \qquad $ -- gradient
where $g_t \equiv - \nabla f(y_t)$ is the negative gradient, and $h$ is stepsize aka learning rate.
Combine these two equations to one in $y_t$ only, the points at which the gradients are evaluated, by plugging the second equation into the first, and rearrange terms:
$\qquad y_{t+1} = y_t$
$\qquad \qquad + \ h \ g_t \qquad \qquad \quad $ -- gradient
$\qquad \qquad + \ m \ (y_t - y_{t-1}) \qquad $ -- step momentum
$\qquad \qquad + \ m \ h \ (g_t - g_{t-1}) \quad $ -- gradient momentum
The last term is the difference between GD with plain momentum,and GD with Nesterov momentum.
One could use separate momentum terms, say $m$ and $m_{grad}$:
$\qquad \qquad + \ m \ (y_t - y_{t-1}) \qquad $ -- step momentum
$\qquad \qquad + \ m_{grad} \ h \ (g_t - g_{t-1}) \quad $ -- gradient momentum
Then $m_{grad} = 0$ gives plain momentum, $m_{grad} = m$ Nesterov.
$m_{grad} > 0 $ amplifies noise (gradients can be very noisy), $m_{grad} \sim -.1$ is an IIR smoothing filter.
By the way, momentum and stepsize can vary with time, $m_t$ and $h_t$,or per component (ada* coordinate descent), or both -- more methods than test cases.
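Putting the pieces together, here is a minimal sketch of both variants in the form of the cs231n steps above (the quadratic test function and all parameter values are my own choices):

```python
def gd_momentum(grad, x0, lr=0.1, mu=0.9, nesterov=False, steps=100):
    """Gradient descent with plain or Nesterov momentum (scalar sketch)."""
    x, v = x0, 0.0
    for _ in range(steps):
        v_prev = v
        v = mu * v - lr * grad(x)                        # GD + momentum
        x += v + mu * (v - v_prev) if nesterov else v    # Nesterov: extrapolate
    return x

# Minimise f(x) = x^2 / 2, whose gradient is x and whose minimum is at 0.
x_plain = gd_momentum(lambda x: x, 10.0, steps=300)
x_nest  = gd_momentum(lambda x: x, 10.0, steps=300, nesterov=True)
# Both converge towards 0; on this quadratic the Nesterov iterate's
# oscillation envelope shrinks faster (spectral radius 0.9 vs ~0.95).
```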
A plot comparing plain momentum with Nesterov momentum on a simple 2d test case, $(x / [cond, 1] - 100) + ripple \times \sin( \pi x )$ (plot not reproduced here). |
Given information:
The boiling point of benzene at atmospheric pressure is $353~\mathrm{K}$, and the enthalpy of vaporization of benzene is $30.8~\mathrm{kJ~mol^{−1}}$ at this temperature. The molar heat capacities of the liquid and vapour are $136.1~\mathrm{J~K^{−1}~mol^{-1}}$ and $81.7~\mathrm{J~K^{−1}~mol^{-1}}$, respectively, and may be assumed temperature independent.
Calculate the entropy change of the system, the surroundings and hence the universe when $1~\mathrm{mol}$ of benzene vapour at $343~\mathrm{K}$ and atmospheric pressure becomes liquid benzene at $343~\mathrm{K}$. Also, will this process occur spontaneously?
I know that $\mathrm{d}S = \frac{\mathrm{d}H}{T}$ therefore,
$\displaystyle\mathrm{d}S = \frac{-30.8 \times 10^3~ \mathrm{J}}{343~\mathrm{K}}=-89.8~\mathrm{J~K^{-1}}$ which is the entropy of the system.
How do I continue from there, utilising the molar heat capacities given? |
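One standard way to continue (a sketch of the usual route via a reversible path through the boiling point; treat it as a hint, not a definitive solution):

```python
import math

T, Tb = 343.0, 353.0            # process temperature and boiling point (K)
dH_vap = 30.8e3                 # J/mol, enthalpy of vaporization at Tb
Cp_liq, Cp_vap = 136.1, 81.7    # J/(K mol)

# System: vapour(343 K) -> vapour(353 K) -> liquid(353 K) -> liquid(343 K)
dS_sys = (Cp_vap * math.log(Tb / T)      # heat the vapour reversibly
          - dH_vap / Tb                  # condense at the boiling point
          + Cp_liq * math.log(T / Tb))   # cool the liquid back down

# Surroundings: absorb -dH of the overall isobaric process at constant T
dH = Cp_vap * (Tb - T) - dH_vap + Cp_liq * (T - Tb)
dS_surr = -dH / T

dS_univ = dS_sys + dS_surr
# dS_sys ≈ -88.8 J/K, dS_surr ≈ +91.4 J/K, dS_univ ≈ +2.6 J/K > 0,
# so the condensation of the supercooled vapour is spontaneous.
```

This is where the heat capacities come in: they correct both the entropy and the enthalpy of vaporization from 353 K down to 343 K.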
In this question, a graph is a finite, undirected graph without loops or multiple edges, and a colouring of a graph is a proper vertex colouring. The product $G \times H$ of graphs $G$ and $H$ is the graph whose vertex-set is the product of the vertex-sets of $G$ and $H$ and whose edge-set is the product of the edge-sets of $G$ and $H$, with the obvious incidence relation.
Let $G$ and $H$ be graphs. Any $n$-colouring of $G$ gives rise to an $n$-colouring of $G \times H$: just paint $(x, y)$ the same colour as $x$. (Or, if you prefer, an $n$-colouring of a graph is just a homomorphism into the complete graph $K_n$, so we can compose the colouring $G \to K_n$ with the projection $G \times H \to G$ to obtain a colouring $G \times H \to K_n$.) Similarly, any $n$-colouring of $H$ gives rise to an $n$-colouring of $G \times H$. Let us say that a colouring of $G \times H$ arising in one of these two ways is obtained by projection.
The previous paragraph makes it clear that $\chi(G \times H) \leq \min\{ \chi(G), \chi(H) \}$, where $\chi$ means chromatic number. Hedetniemi's conjecture states that this is an equality. In other words, it says that there are no colourings of a product more economical than those obtained by projection. My question:
Let $G$ and $H$ be graphs. Is every colouring of $G \times H$ with $\chi(G \times H)$ colours obtained by projection?
The answer can't be known to be yes, unless I've missed some news, since that would imply Hedetniemi's conjecture. But perhaps the answer is no, or perhaps it's known that this apparently stronger conjecture would actually be implied by Hedetniemi's original conjecture.
Edit Assume the graphs are connected, otherwise the answer is trivially no. (E.g. consider $(K_2 \sqcup K_2) \times K_2$.) |
Background on why I want this:
I'd like to check that suspension in a simplicial model category is the same thing as suspension in the quasicategory obtained by composing Rezk's assignment of a complete Segal space to a simplicial model category with Joyal and Tierney's "first row functor" from complete Segal spaces to quasicategories. Rezk's functor first builds a bisimplicial set (I will say simplicial space) with a very nice description in terms of the model category, then takes a Reedy fibrant replacement in the category of simplicial spaces. Taking this fibrant replacement is the only part of the process which doesn't feel extremely concrete to me.
To take a Reedy fibrant replacement $X_*'$ for a simplicial space $X_*$ you can factor a sequence of maps of simplicial sets as weak equivalences followed by fibrations. The first such map is $X_0 \rightarrow *$. If I factor that as $X_0 \rightarrow X_0' \rightarrow *$ the next map to factor is $X_1 \rightarrow X_0' \times X_0'$ and factors as $X_1 \rightarrow X_1' \rightarrow X_0' \times X_0'$. (The maps are all from $X_n$ into the $n$th matching space of the replacement you're building, but these first two matching spaces are so easy to describe I wanted to write them down explicitly.)
Background on the question:
The small object argument gives us a way to factor any map $X \rightarrow Y$ in a cofibrantly generated model category as a weak equivalence $X \rightarrow Z$ followed by a fibration $Z \rightarrow Y$. (In fact, as an acyclic cofibration followed by a fibration.) However, the object $Z$ this produces is in general hard to understand.
If the model category we're interested in is the usual one on simplicial sets (weak equivalences are weak homotopy equivalences on the geometric realizations, fibrations are Kan fibrations, and cofibrations are inclusions) we have some special ways of factoring maps $X \rightarrow *$ as a weak equivalence followed by a fibration. Probably the most familiar is using the singular chains on the geometric realization of $X$ to make $X \rightarrow S(\mid X\mid ) \rightarrow *$. This is again big and rather hard to understand.
Another method is Kan's Ex$^\infty$ functor, which I learned about in Goerss and Jardine's book. Ex is the right adjoint to the subdivision functor, which is defined first for simplices in terms of partially ordered sets, so the subdivision functor and Ex are both very combinatorial. There is a natural map $X \rightarrow$ Ex $X$, and Ex$^\infty X$ is the colimit of the sequence $X \rightarrow$ Ex $X \rightarrow$ Ex$^2 X \rightarrow$... . It turns out that $X \rightarrow $Ex$^\infty X \rightarrow *$ is a weak equivalence followed by a fibration.
I would like a way of factoring a map $X \rightarrow Y$ of simplicial sets as a weak equivalence followed by a fibration that is similar in flavor to the use of Kan's Ex$^\infty$ functor to find a fibrant replacement for a simplicial set $X$.
Question:
In the standard model category structure on simplicial sets, is there a combinatorial way of factoring a map as a weak equivalence followed by a fibration? |
Question 1. If the quadratic equation $px^2 - 2\sqrt{5}\,px + 15 = 0$ has two equal roots, then find the value of $p$.
Question 2. In figure 1, a tower AB is 20 m high and BC, its shadow on the ground, is $20\sqrt{3}$ m long. Find the sun's altitude.
View Answer
question_answer3) Two different dice are tossed together. Find the probability that the product of two numbers on the top of the dice is 6. question_answer4)
In figure 2, PQ is a chord of a, circle with centre O and PT is a tangent If \[\angle QPT=60{}^\circ \], find \[\angle PRQ\]. question_answer5)
In figure 3, two tangents RQ and RP are drawn from an external point R to the circle with centre O. If \[\angle PRQ=120{}^\circ \], then prove that, \[OR=PR+RQ\]. question_answer6)
In figure 4, a triangle ABC is drawn to circumscribe a circle of radius 3 cm, such that the segments BD and DC are respectively of lengths 6 cm and 9 cm. If the area of \[\Delta \,ABC\] is \[54\text{ }c{{m}^{2}}\], then find the lengths of sides AB and AC.
View Answer
question_answer7) Solve the following quadratic equation for x: \[4{{x}^{2}}+4bx-\left( {{a}^{2}}-{{b}^{2}} \right)=0\]
View Answer
question_answer8) In an A.P., if \[~{{S}_{5}}+{{S}_{7}}=167\] and \[{{S}_{10}}=235\], find the A.P. where \[{{S}_{n}}\] denotes the sum of its first n terms.
View Answer
question_answer9) The points \[A(4,7),\,\,B(p,3)\] and \[C(7,3)\] are the vertices of a right triangle, right-angled at B. Find the value of p.
View Answer
question_answer10) Find the relation between x and y if the points \[A(x,y),\,B(-5,7)\] and \[C(-4,5)\]are collinear.
View Answer
question_answer11) The 14th term of an AP is twice its 8th term. If its 6th term is\[8\], then find the sum of its first 20 terms.
View Answer
question_answer12) Solve for x: \[\sqrt{3}{{x}^{2}}-2\sqrt{2}x-2\sqrt{3}=0\]
View Answer
question_answer13) The angle of elevation of an aeroplane from point A on the ground is \[60{}^\circ \]. After flight of 15 seconds, the angle of elevation change to \[30{}^\circ \]. If the aeroplane is flying at a constant height of \[1500\sqrt{3}\,m\], find the speed of the plane in km/hr.
View Answer
question_answer14) If the coordinates of points A and B are \[(-2,-2)\] and \[(2,-4)\] respectively find the coordinates of P such that \[AP=\frac{3}{7}AB,\]where P lies on the line segment AB.
View Answer
question_answer15) A probability of selecting a red ball at random from a jar that contains only red, blue and orange is \[\frac{1}{4}\]. The probability of selecting a blue ball at random from the same jar is \[\frac{1}{3}\]. If the jar contains 10 orange balls, find the total number of balls in the jar.
View Answer
question_answer16) Find the area of the minor segment of a circle of radius 14 cm, when its central angle is \[60{}^\circ \]. Also find the area of the corresponding major segment. [Use \[\pi =\frac{22}{7}\]]
View Answer
question_answer17) Due to sudden floods, some welfare associations jointly requested the government to get 100 tents fixed immediately and offered to contribute 50% of the cost. If the lower part of each tent is of the form of a cylinder of diameter 4.2 m and height 4 m with the conical upper part of same diameter but of height 2.8 m and the canvas to be used costs Rs. 100 per sq. m. Find amount the associations will have to pay. What values are shown by these associations?
View Answer
question_answer18) A hemispherical bowl of internal diameter 36 cm contains liquid. This liquid is filled into 72 cylindrical bottles of diameter 6 cm. Find the height of each bottle, if 10% liquid is wasted in this transfer.
View Answer
question_answer19) A cubical block of side 10 cm is surmounted by a hemisphere. What is the largest diameter that the hemisphere can have? Find the cost of painting the total surface area of the solid so formed, at the rate of Rs. 5 per 100 sq. cm [Use\[\pi =3.14\]]
View Answer
question_answer20) 504 cones each of diameter 3.5 cm and height 3 cm. are melted and recast into a metallic sphere. Find the diameter of the sphere and hence find its surface area. \[\left[ \pi =\frac{22}{7} \right]\]
View Answer
question_answer21) The diagonal of a rectangular field is 16 m more than the shorter side. If the longer side is 14 m more than the shorter side, then find the lengths of the sides of the field.
View Answer
question_answer22) Find the 60th term of the A.P. 8, 10, 12.... if it has a total of 60 terms and hence find the sum of its last 10 terms.
View Answer
question_answer23) A train travels at a certain average speed for a distance of 54 km and then travels a distance of 63 km at an average speed of 6 km/h more than the first speed. If it takes 3 hours to complete the total journey, what is its first speed?
View Answer
question_answer24) Prove that the lengths of the tangents drawn from an external point to a circle are equal.
View Answer
question_answer25) Prove that the tangent drawn at the mid-point of an arc of a circle is parallel to the chord joining the end points of the arc.
View Answer
question_answer26) Construct a \[\Delta \text{ }ABC\] in which \[AB=6\text{ }cm,\text{ }\angle A=30{}^\circ \] and \[\angle B=60{}^\circ \]. Construct another \[\Delta \text{ }AB'C\]similar to \[\Delta \text{ }ABC\]with base\[AB'=8\text{ }cm\].
View Answer
question_answer27) At a point A, 20 m above the level of water in a lake, the angle of elevation of a cloud is \[30{}^\circ \]. The angle of depression of the reflection of the cloud in the lake, at A is \[60{}^\circ \]. Find the distance of the cloud from A. question_answer28)
A card is drawn at random from a well-shuffled deck of playing cards. Find the probability that the card drawn is (i) A card of spade or an ace (ii) A black king (iii) Neither a jack nor a king (iv) Either a king or a queen
View Answer
question_answer29) Find the values of k so that the area of the triangle with vertices \[(1,-1),\,\,(-4,2k)\] and \[(-k,-5)\] is 24 sq. units. question_answer30)
In figure 5, PQRS is a square lawn with side \[PQ=42\text{ }m\]. Two circular flower beds are there on the sides PS and QR with center at O, the intersection of its diagonals. Find the total area of the two flower beds (shaded parts).
View Answer
question_answer31) From each end of a solid metal cylinder, metal was scooped out in hemispherical form of same diameter. The height of the cylinder is 10 cm and its base is of radius 4.2 cm. The rest of the cylinder is melted and converted into a cylindrical wire of 1.4 cm thickness. Find the length of the wire.
You need to login to perform this action.
You will be redirected in 3 sec |
Some tricks I've seen:
Tricks with notable products
$(a + b)^2 = a^2 + 2ab + b^2$
This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$:$ 197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809 $
The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10:$ 40000 - 1200 = 100(400-12) = 100(398-10) = 100(388) = 38800 $The hardest thing here is to keep track of the amount of zeroes, this takes some practice!
Also note that if we're computing $(a+b)^2$ where $a$ is a multiple of $10^k$ and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the last $k$ digits of $b^2$, padded with zeroes on the left if $b^2$ has fewer than $k$ digits. We can use this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ each consist of a single digit) is the last digit of $b^2$. So we can write that down (or make a mental note that we have the final digit) and worry about the more significant digits.
Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both are odd, you might want to use:
$(a+b)(a-b) = a^2 - b^2$. Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = (20 + 5)^2 - 4 = 20^2 + 2\cdot20\cdot5 + 5^2 - 4 = 400 + 200 + 25 - 4 = 621$.
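Both identities are mechanical enough to check in code. Here is a small Python sketch of them (my own, not part of the original tricks; `step` is the power of ten you round to):

```python
def square_by_split(n, step=10):
    """(a + b)^2 trick: a is the nearest multiple of `step`, b may be negative."""
    a = round(n / step) * step
    b = n - a
    return a * a + 2 * a * b + b * b

def product_by_difference(x, y):
    """(a + b)(a - b) trick for two numbers of the same parity: use their midpoint."""
    m, d = (x + y) // 2, (y - x) // 2
    return m * m - d * d

print(square_by_split(46))            # 46^2 via (50 - 4)^2 -> 2116
print(square_by_split(197, 100))      # 197^2 via (200 - 3)^2 -> 38809
print(product_by_difference(23, 27))  # (25 - 2)(25 + 2) -> 621
```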
Divisibility checks
Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic.
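As a concrete illustration of the base-$b$ idea for $b = 10$: since $10 \equiv 1 \pmod 9$ and $10 \equiv -1 \pmod{11}$, a digit sum gives the remainder mod 9 and an alternating digit sum gives it mod 11. A small Python sketch (the function names are mine):

```python
def mod9(n):
    """10 ≡ 1 (mod 9), so a number is congruent to its digit sum mod 9 (same idea for 3)."""
    return sum(int(d) for d in str(n)) % 9

def mod11(n):
    """10 ≡ -1 (mod 11), so alternate the digit signs starting from the units digit."""
    digits = [int(d) for d in str(n)][::-1]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits)) % 11

print(mod9(123456789), 123456789 % 9)  # both 0: 123456789 is divisible by 9
print(mod11(918082), 918082 % 11)      # both 0: 918082 is divisible by 11
```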
Vedic math
A guy in my class gave a presentation on Vedic math. I don't remember everything, and there are probably more cool things in the book, but I remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation.
The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have remember too many numbers at the same time.
Let's illustrate the method by multiplying $456$ with $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up.
We start by multiplying the least significant digits:
$6 \cdot 8 = 48$: the least significant digit is $8$; remember the $4(0)$ for the next round. (Of course, I don't mean zero times four here but four, or forty, whatever you prefer. Be consistent though: if you include the zero here to make forty, you have to do it everywhere.) $ 8 \cdot 5(0) = 40(0) $
$ 2(0) \cdot 6 = 12(0) $ $ 4(0) + 40(0) + 12(0) = 56(0) $: our next digit (to the left of the $8$) is $6$: remember the $5(00)$
$ 8 \cdot 4(00) = 32(00) $
$ 2(0) \cdot 5(0) = 10(00) $ $ 1(00) \cdot 6 = 6(00) $ $ 5(00) + 32(00) + 10(00) + 6(00) = 53(00) $: our next digit is a $3$, remember the $5(000)$
Pfff... starting with 2-digit numbers is a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster once you have practiced, since you don't have to write it all down.
$ 2(0) \cdot 4(00) = 8(000) $
$ 1(00) \cdot 5(0) = 5(000)$ $ 5(000) + 8(000) + 5(000) = 18(000)$: next digit is an $8$, remember the $1(0000)$
$ 1(00) \cdot 4(00) = 4(0000) $
$ 1(0000) + 4(0000) = 5(0000) $: the most significant digit is a $5$.
So we have $58368$.
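The digit-by-digit scheme above can be written down directly; here is a Python sketch of it (my own): for each output position it sums every digit product landing there plus the incoming carry, keeps one digit, and passes the rest on as the new carry, exactly as in the worked example:

```python
def lattice_multiply(x, y):
    """Column-by-column multiplication, least significant position first."""
    a = [int(d) for d in str(x)][::-1]  # digits, least significant first
    b = [int(d) for d in str(y)][::-1]
    out, carry = [], 0
    for k in range(len(a) + len(b) - 1):
        # all digit pairs (i, j) with i + j = k contribute to position k
        total = carry + sum(a[i] * b[k - i]
                            for i in range(len(a)) if 0 <= k - i < len(b))
        out.append(total % 10)
        carry = total // 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    return int(''.join(map(str, out[::-1])))

print(lattice_multiply(456, 128))  # 58368, matching the worked example
```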
Quadratic equations
There are multiple ways to solve a quadratic equation in your head. The easiest are quadratics with integer coefficients. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to find non-integer solutions this way, but it is usually too hard to actually come up with them.
Another way is just to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore, sigh) all rational solutions to a monic integer polynomial $x^n + \dots + c = 0$ must be integer divisors of $c$. More generally, for $ax^n + \dots + c = 0$ with integer coefficients, rational solutions must be of the form $\frac{p}{q}$ where $p$ divides $c$ and $q$ divides $a$.
If this all fails, we can still put the abc-formula in a much easier form:
$ ux^2 + vx + w = 0 $
$ x^2 + \frac{v}{u}x + \frac{w}{u} = 0 $, i.e. $ x^2 - ax - b = 0 $ with $a = -\frac{v}{u}$ and $b = -\frac{w}{u}$,
$ x^2 = ax + b $
(This is the form that I found easiest to use!) $ (x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b $ $ x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b} $
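A small Python sketch (my own) of this rearranged formula; it solves $x^2 = ax + b$ directly:

```python
import math

def solve_quadratic(a, b):
    """Solve x^2 = a*x + b via x = a/2 ± sqrt((a/2)^2 + b)."""
    h = a / 2
    d = math.sqrt(h * h + b)
    return h - d, h + d

# x^2 + 5x + 6 = 0 rearranges to x^2 = -5x - 6, i.e. a = -5, b = -6:
print(solve_quadratic(-5, -6))  # (-3.0, -2.0)
```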
I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them.
Tricks that aren't really usable but still pretty cool
See this excerpt from Feynman's "Surely you're joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile. |
Four Color Theorem (4CT) states that every planar graph is four colorable. There are two proofs given by [Appel,Haken 1976] and [Robertson,Sanders,Seymour,Thomas 1997]. Both these proofs are computer-assisted and quite intimidating.
There are several conjectures in graph theory that imply 4CT. Resolution of these conjectures probably requires a better understanding of the proofs of 4CT. Here is one such conjecture :
Conjecture: Let $G$ be a planar graph, let $C$ be a set of colors, and let $f : C \rightarrow C$ be a fixed-point-free involution. Let $L = (L_v : v \in V(G))$ be a list assignment such that $|L_v| \geq 4$ for all $v \in V(G)$, and such that for every $v \in V(G)$ and every $\alpha \in C$, if $\alpha \in L_v$ then $f(\alpha) \in L_v$.
Then there exists an $L$-coloring of the graph $G$.
If you know such conjectures implying 4CT, please list them one in each answer. I could not find a comprehensive list of such conjectures. |
I have a table where I want more than one row in some cells. I am using parbox to achieve this but this makes some items in cells left aligned and some not. How can you fix this to make them all aligned the same way?
\documentclass{beamer}
\usepackage{multirow}
\begin{document}
\begin{frame}
  \frametitle{Test page}
  \resizebox{\textwidth}{!}{%
    \begin{tabular}{|l||c|c|}
      \hline
      \multirow{3}{*}{} & Column one & Column two \\ \hline \hline
      Row one & \textcolor{gray}{$f(n) = n^2$} & \parbox{5cm}{$f(n) = n^2$ \\ $f(n) = n^2$} \\
      Row two & \parbox{7cm}{\textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)}$} \\ $g(n) = n^3$} & $g(n) = n^3$ \\
      Row three & \textcolor{gray}{$f(n) = n^2$} & \parbox{5cm}{$f(n) = n^2$ \\ $f(n) = n^2$} \\
      \hline
    \end{tabular}%
  }
\end{frame}
\end{document}
Update. Following the advice in the comments I have changed the table a little to show another problem.
\resizebox{\textwidth}{!}{%
  \begin{tabular}{|l||c|c|}
    \hline
    \multirow{3}{*}{} & Column one & Column two \\ \hline \hline
    Row one & \textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)}$} & \parbox{5cm}{\centering $f(n) = n^2$ \\ $f(n) = n^2$} \\
    Row two & \parbox{7cm}{\centering \textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)}$} \\ $g(n) = n^3$} & $g(n) = n^3$ \\
    Row three & \textcolor{gray}{$f(n) = n^2$} & \parbox{5cm}{\centering $f(n) = n^2$ \\ $f(n) = n^2$} \\
    \hline
  \end{tabular}%
}
g(n,S) is still not quite aligned the same in the first and second rows of "Column one". |
Let $R$ be a rational map of degree $d$, and suppose that $R$ has no periodic points of period $n$. Then $(d, n)$ is one of the pairs $(2,2), (2,3), (3,2), (4,2)$, and each such pair does arise from some $R$ in this way.
Examples of such pairs:

1. $R(z) = z + \frac{(w-1)(z^2 -1)}{2z}$ has no points of period 3.

2. If $R(z) = \frac{z^3 +6}{3z^2}$, then $R$ has no points of period 2:
$$ R^2(z) = z \;\Rightarrow\; \frac{\left(\frac{z^3 +6}{3z^2}\right)^3 + 6}{3 \left(\frac{z^3 + 6}{3z^2}\right)^2} = z \;\Rightarrow\; \frac{(z^3 +6)^3+(27 \times 6)\, z^6}{z^2 \times 3^2 \times (z^3 + 6)^2} = z $$

3. If $R(z) = \frac{-z(1+ 2z^3)}{1-3z^3}$, then $R$ has no points of period 2.
This work briefly analyzes the characteristics of measuring three-phase electric power parameters and introduces a method of interfacing the S3C44B0X chip (a 32-bit embedded processor based on the ARM7TDMI core) with the ADS7864 A/D application-specific chip. The structure simplifies the hardware circuits and relieves the tension between real-time performance and precision when sampling data.
In the description of the scheme, the general framework of the scheme is shown first; then the CPU (SEP3203), the core-board hardware design, the FPGA-board hardware design, the system bus design, and the system power supply are specified one by one.
If is a $\mathbb{Z}_2$-grading of a simple Lie algebra, we explicitly describe a -module Spin0 () such that the exterior algebra of is the tensor square of this module times some power of 2.
The operation adj on matrices arises from the (n - 1)st exterior power functor on modules; the analogous factorization question for matrix constructions arising from other functors is raised, as are several other questions.
Let q be a power of p and let G(q) be the finite group of Fq-rational points of G.
We prove a Tauberian theorem of the form $\phi * g (x)\sim p(x)w(x)$ as $x \to \infty,$ where $p(x)$ is a bounded periodic function and $w(x)$ is a weight function of power growth.
A new C*-algebra of strong limit power functions is proposed. |
A little background to Stirling’s Formula
Stirling’s approximation is vital to a manageable formulation of statistical physics and thermodynamics. It vastly simplifies calculations involving logarithms of factorials where the factorial is huge. In statistical physics, we are typically discussing systems of particles, where the number of particles is of the order of Avogadro's number, $\sim 10^{23}$. With numbers of such orders of magnitude, this approximation is certainly valid, and also proves incredibly useful.
There are numerous ways of deriving the result, and further refinements to the approximation to be found elsewhere. Here is a simple derivation using an analogy with the Gaussian distribution:
The Formula
Derive the Stirling formula: $$\ln(n!) \approx \left(n+\frac{1}{2}\right)\ln{n} - n + \frac{1}{2}\ln{2\pi}$$
Let’s Go
We begin by calculating the integral $\int_{0}^{\infty} x^n e^{-x}\,dx$ (where $n \in \mathbb{N}$) using integration by parts.
The formula for integration by parts is: $$\int u(x)v´(x)dx=u(x)v(x)-\int u´(x)v(x)dx$$
Given this situation, it's normally better to take the power function $x^n$ as your $u$:
\(u(x)=x^n\) \(u'(x)=nx^{n-1}\)
\(v(x)=-e^{-x}\) \(v'(x)=e^{-x}\)
Thus we have: \begin{equation}\int_{0}^{\infty} x^ne^{-x}dx=[x^n·-e^{-x}]_{0}^{\infty} + n\int_{0}^{\infty}x^{n-1}·e^{-x}dx\label{eq:int1}\end{equation}
We note that the boundary term $[x^n\cdot-e^{-x}]_{0}^{\infty}$ is always zero. Labelling the integral with exponent $n$ as $g_n$, the last integral in \eqref{eq:int1} is just $g_{n-1}$, so $g_n = n\,g_{n-1}$. This can be integrated by parts again to yield a similar expression with the exponent reduced to $n-2$ (and so on), until we reach $g_0 = \int_0^\infty e^{-x}\,dx = 1$, such that we end up with $g_n = n!$:
\begin{equation}
\int_{0}^{\infty} x^ne^{-x}dx=n! \label{eq:eqeq} \end{equation}
Take the logarithm of the integrand in \eqref{eq:int1}: $$\ln(e^{-x}x^n) = -x + n\ln{x} =: f(x)$$ $$f'(x) \stackrel{!}{=} 0 = -1 + \frac{n}{x}$$ $$\Rightarrow x_0=n$$
So we find it has a maximum at $x_0=n$. Expanding around $x_0=n$: $$f(x)=-n+n \ln{n}+\left.\left(-1+\frac{n}{x}\right)\right|_{x=n}(x-n) + \frac{1}{2!} \left.\left(\frac{-n}{x^2}\right)\right|_{x=n}(x-n)^2 + \dots$$ $$=-n+n\ln{n}-\frac{1}{2n}(x-n)^2$$
Therefore, $f(x)\approx -n+n\ln{n}-\frac{1}{2n}(x-n)^2$ near the maximum.
If we take the exponential of $f(x)$ and integrate, we see we have the same integral we just calculated in \eqref{eq:int1} but in a more useful form: $$\int \exp(f(x))dx=\int e^{-x}x^ndx$$ $$\simeq\int \exp(-n + n\ln{n} - \frac{1}{2n}(x-n)^2)\,dx$$ \begin{equation}
=\exp(-n+n\ln{n})\int_{0}^{\infty} \exp(-\frac{(x-n)^2}{2n})dx \stackrel{\eqref{eq:eqeq}}{=} n! \label{eq:nbang} \end{equation}
This is calculable by analogy with the Gaussian distribution, where $$P(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right).$$
Given that the sum of all probabilities is $1$, it follows $$\sqrt{2\pi}\,\sigma = \int_{-\infty}^{\infty} \exp\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right)dx.$$
If we compare this with \eqref{eq:nbang}, we note that $\bar{x}$ and $\sigma^2$ both translate to $n$ in our analogy:
$$n!\simeq\exp(-n+n\ln{n})\int_{0}^{\infty} \exp(-\frac{(x-n)^2}{2n})\,dx$$
\begin{equation}= \exp(-n+n\ln{n})\sqrt{2\pi n} \label{eq:final}\end{equation}
Note that the lower bound on the integral has changed from $-\infty$ to $0$. This approximation is justifiable since the peak of the distribution is at $x=n$, a point much greater than zero, so almost all of the distribution lies to the right of zero.
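As an aside (not part of the derivation), the result \eqref{eq:final} is easy to check numerically; a small Python sketch, with the relative error shrinking roughly like $\frac{1}{12n}$:

```python
import math

def stirling_factorial(n):
    """n! ≈ exp(-n + n ln n) * sqrt(2 pi n), the approximation derived above."""
    return math.exp(-n + n * math.log(n)) * math.sqrt(2 * math.pi * n)

for n in (10, 20, 50):
    rel_err = stirling_factorial(n) / math.factorial(n) - 1
    print(n, rel_err)  # small and shrinking, roughly -1/(12 n)
```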
Finally, taking the logarithm of \eqref{eq:final}, $$\ln{n!} \simeq -n + n\ln{n} + \frac{1}{2}\ln{2\pi n}$$ $$=(n+\frac{1}{2})\ln{n}-n+\frac{1}{2}\ln{2\pi},$$
we recover what we started with. |
I try to understand the actual intuition behind the logarithm properties and came across a post on this site that explains the multiplication and thereby also the division properties very nicely:
Suppose you have a table of powers of 2, which looks like this: (after revision)
$$\begin{array}{rrrrrrrrrr} 0&1&2&3&4&5&6&7&8&9&10\\ 1&2&4&8&16&32&64&128&256&512&1024 \end{array}$$
Each column says how many twos you have to multiply to get the number in that column. For example, if you multiply 5 twos, you get $2\cdot2\cdot2\cdot2\cdot2=32$, which is the number in column 5.
Now suppose you want to multiply two numbers from the bottom row, say $16\cdot 64$. Well, the $16$ is the product of 4 twos, and the $64$ is the product of 6 twos, so when you multiply them together you get a product of 10 twos, which is $1024$.
I found that very helpful to understand the actual proofs for this property.
I still struggle to get the idea behind the change of base rule. I'm familiar with the proof that goes like:
$$\log_a x = y \implies a^y = x$$ $$\log_b a^y = \log_b x$$ $$y \cdot \log_b a = \log_b x$$ $$y = \frac{\log_b x}{\log_b a}$$
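To make the rule concrete with the numbers from the quoted table (my own sketch, not from the quoted answer): $\log_2 1024 = 10$, and the change-of-base rule says dividing base-10 logarithms must recover the same $10$:

```python
import math

# Change of base: log_a(x) = log_b(x) / log_b(a).
# Every doubling is "worth" log_10(2) decimal orders of magnitude, so counting
# in base-10 steps and dividing by the worth of one doubling recovers log_2.
x = 1024
lhs = math.log(x, 2)                 # log_2(1024) = 10
rhs = math.log10(x) / math.log10(2)  # change-of-base right-hand side
print(lhs, rhs)  # both ~10
```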
But can someone provide an explanation in the style of the quoted answer of why this actually works?
So, as far as I understand, Gödel's theorem says that sufficiently strong theories such as Peano Arithmetic are negation-incomplete, which means that there exists a formula $\phi$ such that neither $\phi$ nor its negation $\lnot \phi$ is provable within the formal system.
I have read somewhere that this can be understood as meaning there are different models for the same axioms. By that I mean that there is an interpretation of the axioms under which all axioms are true, say interpretation $I_1$, and a different interpretation under which all axioms are true, say interpretation $I_2$, and also a statement $\phi$ which in $I_1$ is taken to be true but in $I_2$ is taken to be false, so that its negation is taken as true.
Question 1: Is it true that in interpreting formal theories we always assign to each formula either true or false, and that if we interpret some formula $\psi$ as a true statement in the interpretation, then we automatically interpret $\lnot \psi$ as false? Either way, what is the motivation for this convention?
Then, intuitively, it feels to me that if I fix some interpretation, then whatever I prove using the formal system should be true in that interpretation. This is because I agree that the axioms are true and the deductive rules preserve truth, so the reasoning should be sound, and whenever I prove something formally, I prove it also for my interpretation.
Question 2: Is this reasoning correct?
Does it then mean that if there are two different interpretations $I_1, I_2$, then the formula $\psi$ about which they disagree is unprovable? For if it were provable, it would be true in both interpretations by the previous reasoning; the same applies to $\lnot \psi$. Can I thus conclude that Gödel's theorem says that there are models that interpret the axioms as true but differ in the interpretation of some statements?

But then, does it mean that mathematics is empirical? For example, if I want to reason about strings (which are physical objects) and I set up a sufficiently strong axiomatic system for them, then there will be different models; by manipulating strings and finding empirically whether some statement is true or not, I could understand which interpretation makes sense and which does not. Doesn't this show that axiomatization in mathematics makes no sense, because I would still have to perform some empirical measurements to find out which interpretation makes sense, so that I assign truth values according to physical reality?

What I mean is that I thought axiomatization is needed in order to leave intuition behind and work purely syntactically. But say I want to make an axiomatic theory for physics. Then I want to be able to interpret theorems. But in order to do that, I still have to make some intuitive and empirical statements about which interpretation to choose. So an axiomatic theory does not give me a precise answer about what to expect in physical reality.
I probably went somewhere super wrong and far away from mathematics, and probably confused myself. I hope that at least some small portion of this text makes sense. Please feel free to add any comments or advice concerning this question (if it is possible to understand what I mean). |
The point is the following:
Delta, $\Delta$, is defined as $\frac{\partial C}{\partial S}$, where $C$ is the value of the call option, and $S$ is the price of the underlying asset.
So, given that the value of a call option for a non-dividend-paying underlying stock in terms of the Black–Scholes parameters is
$$C = N(d_{1})S - N(d_{2})Ke^{-rT},$$
$$\Delta = \frac{\partial C}{\partial S} = N(d_{1}).$$
Basically, Delta is just the first partial derivative of $C$ with respect to $S$.
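Before walking through the derivation, the identity $\frac{\partial C}{\partial S} = N(d_1)$ can be sanity-checked numerically against a finite difference. A small Python sketch (my own; the parameter values are arbitrary examples):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call, tau = time to maturity."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return norm_cdf(d1) * S - norm_cdf(d2) * K * math.exp(-r * tau)

def bs_delta(S, K, r, sigma, tau):
    """Analytic Delta: N(d1)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

# Compare the analytic Delta with a central finite difference dC/dS.
S, K, r, sigma, tau, h = 100.0, 95.0, 0.05, 0.2, 0.5, 1e-4
fd_delta = (bs_call(S + h, K, r, sigma, tau) - bs_call(S - h, K, r, sigma, tau)) / (2 * h)
print(bs_delta(S, K, r, sigma, tau), fd_delta)  # agree to well within 1e-6
```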
How to derive $\Delta$: $N(x)$ is the cumulative probability that a variable with a standardized normal distribution will be less than $x$; $N'(x)$ is the probability density function for a standardized normal distribution:
$$N'(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}.$$
Then, defining $\tau = T - t$, we have$$ d_{1} = \frac{\ln(\frac{S}{K}) + (r + \frac{\sigma^2}{2})\tau}{\sigma\sqrt{\tau}}$$
and
$$ d_{2} = \frac{\ln(\frac{S}{K}) + (r - \frac{\sigma^2}{2})\tau}{\sigma\sqrt{\tau}}$$
It follows that
$$ N'(d_{1}) = N'(d_{2} + \sigma\sqrt{\tau}) = \frac{1}{\sqrt{2\pi}}e^{-\frac{(d_{2} + \sigma\sqrt{\tau})^2}{2}} = N'(d_{2})e^{-d_{2}\sigma\sqrt{\tau} - \frac{\sigma^2\tau}{2}} = N'(d_{2})\frac{Ke^{-r\tau}}{S}$$
Thus,
$$N'(d_{1})S = N'(d_{2})Ke^{-r\tau}.$$
Then
$$ \frac{\partial d_{1}}{\partial S} = \frac{\partial d_{2}}{\partial S} = \frac{1}{S\sigma\sqrt{\tau}}$$
Since there is an $S$ in $N(d_{1})$ and $N(d_{2})$, we use the chain-rule:
$$ \frac{\partial C}{\partial S} = N(d_{1}) + \frac{\partial d_{1}}{\partial S} N'(d_{1})S - \frac{\partial d_{2}}{\partial S} N'(d_{2})Ke^{-r\tau} = N(d_{1}) + \frac{\partial d_{1}}{\partial S} N'(d_{1})S - \frac{\partial d_{2}}{\partial S} N'(d_{1})S = N(d_{1}) + \frac{1}{S\sigma\sqrt{\tau}} N'(d_{1})S - \frac{1}{S\sigma\sqrt{\tau}} N'(d_{1})S = N(d_{1}).$$ |
12.12. Neural Style Transfer
If you use social sharing apps or happen to be an amateur photographer, you are familiar with filters. Filters can alter the color styles of photos to make the background sharper or people’s faces whiter. However, a filter generally can only change one aspect of a photo. To create the ideal photo, you often need to try many different filter combinations. This process is as complex as tuning the hyper-parameters of a model.
In this section, we will discuss how we can use convolution neural networks (CNNs) to automatically apply the style of one image to another image, an operation known as style transfer [Gatys.Ecker.Bethge.2016]. Here, we need two input images, one content image and one style image. We use a neural network to alter the content image so that its style mirrors that of the style image. In Fig. 12.12.1, the content image is a landscape photo the author took in Mount Rainier National Park near Seattle. The style image is an oil painting of oak trees in autumn. The output composite image retains the overall shapes of the objects in the content image, but applies the oil painting brushwork of the style image and makes the overall color more vivid.
12.12.1. Technique
The CNN-based style transfer model is shown in Fig. 12.12.2. First, we initialize the composite image. For example, we can initialize it as the content image. This composite image is the only variable that needs to be updated in the style transfer process, i.e. the model parameter to be updated in style transfer. Then, we select a pre-trained CNN to extract image features. These model parameters do not need to be updated during training. The deep CNN uses multiple neural layers that successively extract image features. We can select the output of certain layers to use as content features or style features. If we use the structure in Fig. 12.12.2, the pretrained neural network contains three convolutional layers. The second layer outputs the image content features, while the outputs of the first and third layers are used as style features. Next, we use forward propagation (in the direction of the solid lines) to compute the style transfer loss function and backward propagation (in the direction of the dotted lines) to update the model parameter, constantly updating the composite image. The loss functions used in style transfer generally have three parts: 1. Content loss is used to make the composite image approximate the content image as regards content features. 2. Style loss is used to make the composite image approximate the style image in terms of style features. 3. Total variation loss helps reduce the noise in the composite image. Finally, after we finish training the model, we output the style transfer model parameters to obtain the final composite image.
Next, we will perform an experiment to help us better understand the technical details of style transfer.
12.12.2. Read the Content and Style Images
First, we read the content and style images. By printing out the image coordinate axes, we can see that they have different dimensions.
%matplotlib inline
import d2l
from mxnet import autograd, gluon, image, init, np, npx
from mxnet.gluon import nn

npx.set_np()

d2l.set_figsize((3.5, 2.5))
content_img = image.imread('../img/rainier.jpg')
d2l.plt.imshow(content_img.asnumpy());
style_img = image.imread('../img/autumn_oak.jpg')
d2l.plt.imshow(style_img.asnumpy());
12.12.3. Preprocessing and Postprocessing
Below, we define the functions for image preprocessing and postprocessing. The preprocess function normalizes each of the three RGB channels of the input images and transforms the results to a format that can be input to the CNN. The postprocess function restores the pixel values in the output image to their original values before normalization. Because the image printing function requires that each pixel has a floating point value from 0 to 1, we use the clip function to replace values smaller than 0 or greater than 1 with 0 or 1, respectively.
rgb_mean = np.array([0.485, 0.456, 0.406])
rgb_std = np.array([0.229, 0.224, 0.225])

def preprocess(img, image_shape):
    img = image.imresize(img, *image_shape)
    img = (img.astype('float32') / 255 - rgb_mean) / rgb_std
    return np.expand_dims(img.transpose(2, 0, 1), axis=0)

def postprocess(img):
    img = img[0].as_in_context(rgb_std.context)
    return (img.transpose(1, 2, 0) * rgb_std + rgb_mean).clip(0, 1)
12.12.4. Extract Features
We use the VGG-19 model pre-trained on the ImageNet data set to extract image features[1].
pretrained_net = gluon.model_zoo.vision.vgg19(pretrained=True)
To extract image content and style features, we can select the outputs of certain layers in the VGG network. In general, the closer an output is to the input layer, the easier it is to extract image detail information. The farther away an output is, the easier it is to extract global information. To prevent the composite image from retaining too many details from the content image, we select a VGG network layer near the output layer to output the image content features. This layer is called the content layer. We also select the outputs of different layers from the VGG network for matching local and global styles. These are called the style layers. As we mentioned in Section 7.2, VGG networks have five convolutional blocks. In this experiment, we select the last convolutional layer of the fourth convolutional block as the content layer and the first layer of each block as style layers. We can obtain the indexes for these layers by printing the pretrained_net instance.
style_layers, content_layers = [0, 5, 10, 19, 28], [25]
During feature extraction, we only need to use all the VGG layers from the input layer to the content or style layer nearest the output layer. Below, we build a new network, net, which only retains the layers in the VGG network we need to use. We then use net to extract features.
net = nn.Sequential()
for i in range(max(content_layers + style_layers) + 1):
    net.add(pretrained_net.features[i])
Given input X, if we simply call the forward computation net(X), we can only obtain the output of the last layer. Because we also need the outputs of the intermediate layers, we need to perform layer-by-layer computation and retain the content and style layer outputs.
def extract_features(X, content_layers, style_layers):
    contents = []
    styles = []
    for i in range(len(net)):
        X = net[i](X)
        if i in style_layers:
            styles.append(X)
        if i in content_layers:
            contents.append(X)
    return contents, styles
Next, we define two functions: the get_contents function obtains the content features extracted from the content image, while the get_styles function obtains the style features extracted from the style image. Because we do not need to change the parameters of the pre-trained VGG model during training, we can extract the content features from the content image and the style features from the style image before the start of training. As the composite image is the model parameter that must be updated during style transfer, we can only call the extract_features function during training to extract the content and style features of the composite image.
def get_contents(image_shape, ctx):
    content_X = preprocess(content_img, image_shape).copyto(ctx)
    contents_Y, _ = extract_features(content_X, content_layers, style_layers)
    return content_X, contents_Y

def get_styles(image_shape, ctx):
    style_X = preprocess(style_img, image_shape).copyto(ctx)
    _, styles_Y = extract_features(style_X, content_layers, style_layers)
    return style_X, styles_Y
12.12.5. Define the Loss Function¶
Next, we will look at the loss function used for style transfer. The loss function includes the content loss, style loss, and total variation loss.
12.12.5.1. Content Loss¶
Similar to the loss function used in linear regression, content loss uses a square error function to measure the difference in content features between the composite image and the content image. The two inputs of the square error function are both content layer outputs obtained from the extract_features function.
def content_loss(Y_hat, Y):
    return np.square(Y_hat - Y).mean()
12.12.5.2. Style Loss¶
Style loss, similar to content loss, uses a square error function to measure the difference in style between the composite image and the style image. To express the styles output by the style layers, we first use the extract_features function to compute the style layer output. Assuming that the output has 1 example, \(c\) channels, and a height and width of \(h\) and \(w\), we can transform the output into the matrix \(\boldsymbol{X}\), which has \(c\) rows and \(h \cdot w\) columns. You can think of matrix \(\boldsymbol{X}\) as the combination of the \(c\) vectors \(\boldsymbol{x}_1, \ldots, \boldsymbol{x}_c\), each of length \(hw\). Here, the vector \(\boldsymbol{x}_i\) represents the style feature of channel \(i\). In the Gram matrix of these vectors \(\boldsymbol{X}\boldsymbol{X}^\top \in \mathbb{R}^{c \times c}\), element \(x_{ij}\) in row \(i\), column \(j\) is the inner product of vectors \(\boldsymbol{x}_i\) and \(\boldsymbol{x}_j\). It represents the correlation of the style features of channels \(i\) and \(j\). We use this type of Gram matrix to represent the style output by the style layers. Note that, when the \(h \cdot w\) value is large, this often leads to large values in the Gram matrix. In addition, the height and width of the Gram matrix are both the number of channels \(c\). To ensure that the style loss is not affected by the size of these values, we define the gram function below to divide the Gram matrix by the number of its elements, i.e. \(c \cdot h \cdot w\).
def gram(X):
    num_channels, n = X.shape[1], X.size // X.shape[1]
    X = X.reshape(num_channels, n)
    return np.dot(X, X.T) / (num_channels * n)
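As a quick check of the normalization (a plain NumPy sketch of the same computation, independent of MXNet): the Gram matrix of a feature map with \(c\) channels is symmetric, and dividing by \(c \cdot h \cdot w\) keeps its entries of order one regardless of the spatial size.

```python
import numpy as np

def gram_np(X):
    # X: (1, c, h, w) feature map; same normalization as the gram function above
    c = X.shape[1]
    n = X.size // c
    X = X.reshape(c, n)
    return X @ X.T / (c * n)

X = np.random.RandomState(0).randn(1, 4, 8, 8)
G = gram_np(X)
assert G.shape == (4, 4)
assert np.allclose(G, G.T)  # Gram matrices are symmetric
```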
Naturally, the two Gram matrix inputs of the square error function for style loss are taken from the composite image and the style image style layer outputs. Here, we assume that the Gram matrix of the style image, gram_Y, has been computed in advance.
def style_loss(Y_hat, gram_Y):
    return np.square(gram(Y_hat) - gram_Y).mean()
12.12.5.3. Total Variance Loss¶
Sometimes, the composite images we learn have a lot of high-frequency noise, particularly bright or dark pixels. One common noise reduction method is total variation denoising. We assume that \(x_{i,j}\) represents the pixel value at the coordinate \((i,j)\), so the total variance loss is:

\[\sum_{i, j} \left|x_{i, j} - x_{i+1, j}\right| + \left|x_{i, j} - x_{i, j+1}\right|\]
We try to make the values of neighboring pixels as similar as possible.
def tv_loss(Y_hat):
    return 0.5 * (np.abs(Y_hat[:, :, 1:, :] - Y_hat[:, :, :-1, :]).mean() +
                  np.abs(Y_hat[:, :, :, 1:] - Y_hat[:, :, :, :-1]).mean())
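A quick sanity check with plain NumPy (my own sketch, independent of MXNet): the total variance loss of a constant image is zero, while checkerboard noise, where every neighboring pair of pixels differs, is penalized.

```python
import numpy as np

def tv_loss_np(Y):
    # Y: (1, c, h, w); mirrors the tv_loss definition above
    return 0.5 * (np.abs(Y[:, :, 1:, :] - Y[:, :, :-1, :]).mean() +
                  np.abs(Y[:, :, :, 1:] - Y[:, :, :, :-1]).mean())

flat = np.ones((1, 3, 8, 8))                         # constant image: no variation
checker = np.indices((8, 8)).sum(axis=0) % 2          # 0/1 checkerboard pattern
noisy = np.broadcast_to(checker, (1, 3, 8, 8)).astype(float)

assert tv_loss_np(flat) == 0.0     # constant image has zero TV loss
assert tv_loss_np(noisy) == 1.0    # every adjacent pair differs by 1
```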
12.12.5.4. Loss Function¶
The loss function for style transfer is the weighted sum of the content loss, style loss, and total variance loss. By adjusting these weight hyper-parameters, we can balance the retained content, transferred style, and noise reduction in the composite image according to their relative importance.
content_weight, style_weight, tv_weight = 1, 1e3, 10

def compute_loss(X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram):
    # Calculate the content, style, and total variance losses respectively
    contents_l = [content_loss(Y_hat, Y) * content_weight
                  for Y_hat, Y in zip(contents_Y_hat, contents_Y)]
    styles_l = [style_loss(Y_hat, Y) * style_weight
                for Y_hat, Y in zip(styles_Y_hat, styles_Y_gram)]
    tv_l = tv_loss(X) * tv_weight
    # Add up all the losses
    l = sum(styles_l + contents_l + [tv_l])
    return contents_l, styles_l, tv_l, l
12.12.6. Create and Initialize the Composite Image¶
In style transfer, the composite image is the only variable that needs to be updated. Therefore, we can define a simple model, GeneratedImage, and treat the composite image as a model parameter. In the model, forward computation only returns the model parameter.
class GeneratedImage(nn.Block):
    def __init__(self, img_shape, **kwargs):
        super(GeneratedImage, self).__init__(**kwargs)
        self.weight = self.params.get('weight', shape=img_shape)

    def forward(self):
        return self.weight.data()
Next, we define the get_inits function. This function creates a composite image model instance and initializes it to the image X. The Gram matrices for the various style layers of the style image, styles_Y_gram, are computed prior to training.
def get_inits(X, ctx, lr, styles_Y):
    gen_img = GeneratedImage(X.shape)
    gen_img.initialize(init.Constant(X), ctx=ctx, force_reinit=True)
    trainer = gluon.Trainer(gen_img.collect_params(), 'adam',
                            {'learning_rate': lr})
    styles_Y_gram = [gram(Y) for Y in styles_Y]
    return gen_img(), styles_Y_gram, trainer
12.12.7. Training¶
During model training, we constantly extract the content and style features of the composite image and calculate the loss function. Recall our discussion of how synchronization functions force the front end to wait for computation results in Section 11.2. Because we only call the asscalar synchronization function every 50 epochs, the process may occupy a great deal of memory. Therefore, we call the waitall synchronization function during every epoch.
def train(X, contents_Y, styles_Y, ctx, lr, num_epochs, lr_decay_epoch):
    X, styles_Y_gram, trainer = get_inits(X, ctx, lr, styles_Y)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs],
                            legend=['content', 'style', 'TV'],
                            ncols=2, figsize=(7, 2.5))
    for epoch in range(1, num_epochs + 1):
        with autograd.record():
            contents_Y_hat, styles_Y_hat = extract_features(
                X, content_layers, style_layers)
            contents_l, styles_l, tv_l, l = compute_loss(
                X, contents_Y_hat, styles_Y_hat, contents_Y, styles_Y_gram)
        l.backward()
        trainer.step(1)
        npx.waitall()
        if epoch % lr_decay_epoch == 0:
            trainer.set_learning_rate(trainer.learning_rate * 0.1)
        if epoch % 10 == 0:
            animator.axes[1].imshow(postprocess(X).asnumpy())
            animator.add(epoch, [float(sum(contents_l)),
                                 float(sum(styles_l)), float(tv_l)])
    return X
Next, we start to train the model. First, we set the height and width of the content and style images to 150 by 225 pixels. We use the content image to initialize the composite image.
ctx, image_shape = d2l.try_gpu(), (225, 150)
net.collect_params().reset_ctx(ctx)
content_X, contents_Y = get_contents(image_shape, ctx)
_, styles_Y = get_styles(image_shape, ctx)
output = train(content_X, contents_Y, styles_Y, ctx, 0.01, 500, 200)
As you can see, the composite image retains the scenery and objects of the content image, while introducing the color of the style image. Because the image is relatively small, the details are a bit fuzzy.
To obtain a clearer composite image, we train the model using a larger image size: \(900 \times 600\). We increase the height and width of the image used before by a factor of four and initialize a larger composite image.
image_shape = (900, 600)
_, content_Y = get_contents(image_shape, ctx)
_, style_Y = get_styles(image_shape, ctx)
X = preprocess(postprocess(output) * 255, image_shape)
output = train(X, content_Y, style_Y, ctx, 0.01, 300, 100)
d2l.plt.imsave('../img/neural-style.png', postprocess(output).asnumpy())
As you can see, each epoch takes more time due to the larger image size. As shown in Fig. 12.12.3, the composite image produced retains more detail due to its larger size. The composite image not only has large blocks of color like the style image, but these blocks even have the subtle texture of brush strokes.
12.12.8. Summary¶

The loss functions used in style transfer generally have three parts:

1. Content loss is used to make the composite image approximate the content image as regards content features.
2. Style loss is used to make the composite image approximate the style image in terms of style features.
3. Total variation loss helps reduce the noise in the composite image.

We can use a pre-trained CNN to extract image features and minimize the loss function to continuously update the composite image. We use a Gram matrix to represent the style output by the style layers.

12.12.9. Exercises¶

How does the output change when you select different content and style layers?
Adjust the weight hyper-parameters in the loss function. Does the output retain more content or have less noise?
Use different content and style images. Can you create more interesting composite images?
3.7. Concise Implementation of Softmax Regression¶
Just as Gluon made it much easier to implement linear regression in Section 3.3, we’ll find it similarly (or possibly more) convenient for implementing classification models. Again, we begin with our import ritual.
import d2l
from mxnet import gluon, init, npx
from mxnet.gluon import nn

npx.set_np()
Let’s stick with the Fashion-MNIST dataset and keep the batch size at \(256\) as in the last section.
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
3.7.1. Initialize Model Parameters¶
As mentioned in Section 3.4, the output layer of softmax regression is a fully connected (Dense) layer. Therefore, to implement our model, we just need to add one Dense layer with 10 outputs to our Sequential. Again, here, the Sequential isn't really necessary, but we might as well form the habit since it will be ubiquitous when implementing deep models. Again, we initialize the weights at random with zero mean and standard deviation 0.01.
net = nn.Sequential()
net.add(nn.Dense(10))
net.initialize(init.Normal(sigma=0.01))
3.7.2. The Softmax¶
In the previous example, we calculated our model's output and then ran this output through the cross-entropy loss. At its heart it uses -nd.pick(y_hat, y).log(). Mathematically, that's a perfectly reasonable thing to do. However, computationally, things can get hairy when dealing with exponentiation due to numerical stability issues, a matter we've already discussed a few times (e.g. in sec_naive_bayes and in the problem set of the previous chapter). Recall that the softmax function calculates \(\hat y_j = \frac{e^{z_j}}{\sum_{i=1}^{n} e^{z_i}}\), where \(\hat y_j\) is the \(j^\mathrm{th}\) element of yhat and \(z_j\) is the \(j^\mathrm{th}\) element of the input y_linear variable, as computed by the softmax.
If some of the \(z_i\) are very large (i.e. very positive), \(e^{z_i}\) might be larger than the largest number we can have for certain types of float (i.e. overflow). This would make the denominator (and/or numerator) inf and we get zero, or inf, or nan for \(\hat y_j\). In any case, we won't get a well-defined return value for cross_entropy. This is the reason we subtract \(\text{max}(z_i)\) from all \(z_i\) first in the softmax function. You can verify that this shifting in \(z_i\) will not change the return value of softmax.
After the above subtraction/normalization step, it is possible that \(z_j\) is very negative. Thus, \(e^{z_j}\) will be very close to zero and might be rounded to zero due to finite precision (i.e. underflow), which makes \(\hat y_j\) zero and we get -inf for \(\text{log}(\hat y_j)\). A few steps down the road in backpropagation, we start to get horrific not-a-number (nan) results printed to screen.
Our salvation is that even though we're computing these exponential functions, we ultimately plan to take their log in the cross-entropy functions. It turns out that by combining the softmax and cross_entropy operators together, we can escape the numerical stability issues that might otherwise plague us during backpropagation. As shown in the equation below, we avoid calculating \(e^{z_j}\) and can directly use \(z_j\), thanks to \(\log(\exp(\cdot))\):

\[\log \hat{y}_j = \log\left(\frac{e^{z_j}}{\sum_{i=1}^{n} e^{z_i}}\right) = z_j - \log\left(\sum_{i=1}^{n} e^{z_i}\right)\]
We’ll want to keep the conventional softmax function handy in case we ever want to evaluate the probabilities output by our model. But instead of passing softmax probabilities into our new loss function, we’ll just pass \(\hat{y}\) and compute the softmax and its log all at once inside the softmax_cross_entropy loss function, which does smart things like the log-sum-exp trick (see on Wikipedia).
loss = gluon.loss.SoftmaxCrossEntropyLoss()
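To make the trick concrete, here is a minimal NumPy sketch (my own illustration, not Gluon's actual implementation) of a numerically stable fused softmax cross-entropy that combines the max-shift and log-sum-exp steps:

```python
import numpy as np

def stable_softmax_cross_entropy(z, y):
    # z: (n, k) raw scores, y: (n,) integer class labels
    # Shift by the row maximum so exp() cannot overflow
    z_shift = z - z.max(axis=1, keepdims=True)
    # log softmax via log-sum-exp: log(exp(z_j) / sum_i exp(z_i)) = z_j - logsumexp(z)
    log_probs = z_shift - np.log(np.exp(z_shift).sum(axis=1, keepdims=True))
    # cross-entropy is the negative log-probability of the true class
    return -log_probs[np.arange(len(y)), y]

# Huge scores that would overflow a naive exp() still give a finite loss
z = np.array([[1000.0, 0.0]])
print(stable_softmax_cross_entropy(z, np.array([0])))  # [0.]
```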
3.7.3. Optimization Algorithm¶
We use mini-batch stochastic gradient descent with a learning rate of \(0.1\) as the optimization algorithm. Note that this is the same choice as for linear regression and it illustrates the general applicability of the optimizers.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
3.7.4. Training¶
Next, we use the training functions defined in the last section to train a model.
num_epochs = 10
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
Just as before, this algorithm converges to a solution that achieves an accuracy of 83.7%, albeit this time with a lot fewer lines of code than before. Note that in many cases, Gluon takes specific precautions in addition to the most well-known tricks for ensuring numerical stability. This saves us from many common pitfalls that might befall us if we were to code all of our models from scratch.
3.7.5. Exercises¶

Try adjusting the hyper-parameters, such as batch size, number of epochs, and learning rate, to see what the results are.
Why might the test accuracy decrease again after a while? How could we fix this?
fskilnik wrote:
GMATH practice exercise (Quant Class 15)
If the line segment AB is one of the sides of a regular polygon inscribed in the given circle with center O, how many sides does this polygon have?
(1) \(\,\alpha = {30^ \circ }\).
(2) The circumference of the circle is equal to \(4\pi\).
\({\rm{regular}}\,\,N{\rm{ - polygon}}\,\,\left( {N \ge 3\,\,{\mathop{\rm int}} } \right)\)
\(? = N\)
\(\left( 1 \right)\,\,\alpha = {30^ \circ }\,\,\,\, \Rightarrow \,\,\,\angle AOB = {60^ \circ }\,\,\,\left( {{\rm{central}}\,\,{\rm{angle}}} \right)\,\,\,\,\,\,\mathop \Rightarrow \limits^{{\rm{regularity}}} \,\,\,\,\,? = {{{{360}^ \circ }} \over {{{60}^ \circ }}} = 6\,\,\,\,\,\, \Rightarrow \,\,\,\,\,{\rm{SUFF}}.\)
\(\left( 2 \right)\,\,2\pi r = 4\pi \,\,\, \Rightarrow \,\,\,r = 2\,\,\,:\,\,\left( {{\rm{images}}} \right)\,\,\,\left\{ \matrix{
\,{\rm{Take}}\,\,\alpha = {30^ \circ }\,\,\,\,\,\mathop \Rightarrow \limits^{{\rm{regularity}}} \,\,\,\,\,\,? = {{{{360}^ \circ }} \over {{{60}^ \circ }}} = 6\,\,\,\,\left( {{\rm{regular}}\,\,{\rm{hexagon}}} \right) \hfill \cr
\,{\rm{Take}}\,\,\alpha = {60^ \circ }\,\,\,\,\,\mathop \Rightarrow \limits^{{\rm{regularity}}} \,\,\,\,\,\,? = {{{{360}^ \circ }} \over {{{120}^ \circ }}} = 3\,\,\,\,\left( {{\rm{equilateral}}\,\,{\rm{triangle}}} \right) \hfill \cr} \right.\)
The correct answer is (A).
We follow the notations and rationale taught in the GMATH method.
Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
Given a system with $N$ identical fermions, with spin $\frac{1}{2}$ and mass $m$, subject to the potential $V(\vec{r}) = \frac{1}{2}m\omega^{2}(x^{2}+y^{2}+z^{2})$
and the single particle energy levels: $E_{\vec{n}} = \hbar\omega(n_{x}+n_{y}+n_{z}+\frac{3}{2})$,
calculate the density of states $g(\epsilon)$.
I know that the density of states is the number of single particle states with energy between $(\epsilon,\epsilon +d\epsilon)$, per unit energy.
If we define $F(\epsilon)$ = #States with energy $<\epsilon$,
then $ g(\epsilon) = \frac{dF(\epsilon)}{d\epsilon}$
I am unsure how to get this #States in either case.
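If it helps to check whatever closed form you derive, the count can be done by brute force in a few lines of Python (my own sketch; the cutoff nmax is an arbitrary choice). For the isotropic 3D oscillator, the number of levels with $n_x+n_y+n_z \le n_\text{max}$ equals $\binom{n_\text{max}+3}{3}$ by stars and bars, so $F(\epsilon) \approx (\epsilon/\hbar\omega)^3/6$ for large $\epsilon$ and hence $g(\epsilon) \approx \epsilon^2/(2(\hbar\omega)^3)$ per spin state (times 2 for spin $\frac{1}{2}$):

```python
from itertools import product
from math import comb

def count_states(nmax):
    # Number of oscillator levels (n_x, n_y, n_z) with n_x + n_y + n_z <= nmax,
    # i.e. single-particle energy E <= hbar*omega*(nmax + 3/2)
    return sum(1 for nx, ny, nz in product(range(nmax + 1), repeat=3)
               if nx + ny + nz <= nmax)

# Brute force matches the stars-and-bars closed form C(nmax + 3, 3)
for nmax in (5, 10, 20):
    assert count_states(nmax) == comb(nmax + 3, 3)
```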
I was testing some links within my produced PDF file and recognized that all of the links to an algorithm (from the algorithm2e package) will jump to the beginning of the algorithm, skipping half of the algorithm's header and the caption of the algorithm. I use the ruled option of the algorithm2e package such that the caption will be above the actual algorithm.
Here is a minimal example:
\documentclass{scrbook}
\usepackage[colorlinks=true,linkcolor=blue]{hyperref}
\usepackage[ruled]{algorithm2e}
\begin{document}
  Der Algorithmus \ref{alg:roundoff} approximiert das ''Closest Vector Problem``.
  \newpage
  \begin{algorithm}
    \caption{\label{alg:roundoff}Babais \textsf{ROUNDING OFF-PROCEDURE}}
    \emph{Berechne $r_1, r_2, \ldots, r_n \in R$, sodass $\vec{q} = r_1\vec{b}_1+r_2\vec{b}_2+\cdots+r_n\vec{b}_n$ gilt.}\\
    $\vec{v} := \vec{0}$\\
    \For{$i = 1,2,\ldots,n$}
    {
      $\vec{v} := \vec{v} + \lfloor r_i\rceil \cdot \vec{b}_i$
    }
  \end{algorithm}
\end{document}
This code will produce a (blue) link on the first page to the algorithm on the second page. But when zoomed and after clicking the link, it will jump to the following part of the algorithm:
Question: Is it possible to adjust the jump position for algorithms (from package algorithm2e) such that a click will jump to the position before the algorithm header/caption, even if the ruled option of the algorithm2e package is used?
This is a Test of Mathematics Solution (from ISI Entrance). The book, Test of Mathematics at 10+2 Level is Published by East West Press. This problem book is indispensable for the preparation of I.S.I. B.Stat and B.Math Entrance.
Also see: Cheenta I.S.I. & C.M.I. Entrance Course
If the coefficients of a quadratic equation \( ax^2 + bx + c = 0 (a \neq 0 ) \) , are all odd integers, show that the roots cannot be rational.
Suppose m and n are the roots. Then \( \frac{c}{a} = mn \).
Hence if one of them is rational then the other one is rational as well.
If possible say both roots are rational. Then
\( m = \frac{p}{q} , n = \frac {r}{s} \) where p,q,r,s are integers and g.c.d. (p,q) = 1, g.c.d.(r, s) = 1
Also \( mn = \frac{p}{q} \times \frac {r}{s} = \frac {c}{a} \).
Note that both r, s cannot be even (as their g.c.d. is 1). If possible say s is even, then r is odd. Also if s is even, p has to be even (since the 2’s in s must be canceled out, as the product of the two fractions ultimately is \( \frac {c}{a} \) which has odd in the numerator and odd in the denominator). Since p is even, therefore, q must be odd.
So here are the cases:
Case 1: p is even (say $p = 2p'$), q is odd, r is odd, s is even (say $s = 2s'$) (and similar cases where the crosswise numbers are even and the other pair is odd)
Then \( m+n = \frac{p}{q} + \frac {r}{s} = -\frac {b}{a} \).
This implies \( m+n = \frac{2p'}{q} + \frac {r}{2s'} = -\frac {b}{a} \).
Or \( m+n = \frac{4s'p' + qr}{2s'q} = -\frac {b}{a} \).
Since qr is odd and 4s'p' is even, their sum is odd. But the denominator (2s'q) is even. Hence the numerator cannot cancel the 2's in the denominator. But that is a contradiction, as this fraction is supposed to equal \( -\frac {b}{a} \), whose denominator is odd.
Case 2: all of p,q,r,s are odd
\( m+n = \frac{p}{q} + \frac {r}{s} = -\frac {b}{a} \).
Here the numerator is ps+qr which is even (since p, q, r, s are odd). But the denominator qs is odd. Hence 2’s in the numerator won’t cancel out. Hence contradiction.
Thus the roots cannot be rational.
So far when I looked at tetration I noticed it has a recursive relation: $t_2=2^{t_1}$.
For example if we start at point $(0,1)$, we can take the x-value of $0$, and $2^0=1$, then we take $1$ and get $2^1=2$, and so on, for $^{x} 2$.
$$\begin{align} & (0,1)\\ & (1,2) \\ & (2,4) \\ & (4,16) \\ & (16,65536) \\ \end{align} $$
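The integer points in the table above can be generated by iterating the recursion $t_2 = 2^{t_1}$; a small Python sketch:

```python
def tetration_points(num_points):
    # Iterate t -> 2**t starting from t = 0; each step yields one point (t, 2**t)
    points = []
    t = 0
    for _ in range(num_points):
        points.append((t, 2 ** t))
        t = 2 ** t
    return points

# Reproduces the table: (0,1), (1,2), (2,4), (4,16), (16,65536)
print(tetration_points(5))
```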
If you iterate this relation you basically get all the integer points of $^{x} 2$. We can find the point halfway between each integer by using $x^{x^{(t_1)}}=(t_2)$, which gives the intermediate values.
However, unlike exponents, and addition, there is no similarity to the "x's" between each integer.
For example, if we take the function $2^x$, and take $1.5$, you would get $2\sqrt{2}$, and if you multiply by $\sqrt{2}$ you get $4$. For each $1/2$ between each integer the factor was by $\sqrt{2}$.
However, the $x$'s for tetration are different between intervals. Also, for quarter intervals you would want $x^{x^{x^{x^{(t_1)}}}}=2^{(t_1)}$. However, if you take $x^{x^{(2^{(t_1)})}}$, it is not equal to the half-interval values.
Also, if you take the inverse of $2^{(t_1)}$ you get $\log_2(t_1)$. If we take $t_1 = 1$ we get $\log_2(1) = 0$, but for $t_1 = 0$, $\log_2(0)$ is undefined (some would argue it's $-\infty$).
So what are all the requirements for a real continuous tetration function? What are the problems with Kneser's method, if any? Do you personally think there is a solution to a "tetration function"?
So I'm going through Niven's The Theory of Numbers, and it gives the definition that:
$$a \equiv b \pmod m \implies m \mid (a - b)$$
However, a few pages after this definition, it gives a theorem that states "if $\gcd(a, m) = 1$, then there is an $x$ such that $ax \equiv 1 \pmod m$." To prove this theorem, it states that:
If $\gcd(a, m) = 1$, then there exist $x, y$ such that $ax + my = 1.$ That is, $ax \equiv 1 \pmod m$.
Well... from $ax + my = 1$, you can get $my = 1 - ax$, but this shows that $m \mid (1 - ax)$.
However, from the aforementioned definition of an equivalence class, $ax \equiv 1 \pmod m \implies m \mid (ax - 1)$, rather than $m \mid (1 - ax)$.
What is happening here?
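Nothing is actually inconsistent here: $m \mid (1 - ax)$ and $m \mid (ax - 1)$ are equivalent statements, since the two quantities differ only by a sign. A quick check with hypothetical values:

```python
# m | (1 - a*x) iff m | (a*x - 1), because (a*x - 1) = -(1 - a*x)
# Hypothetical values: gcd(3, 7) = 1 and 3*5 = 15 = 2*7 + 1
a, m, x = 3, 7, 5
assert (1 - a * x) % m == 0
assert (a * x - 1) % m == 0
```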
Given a light pulse in vacuum containing a single photon with an energy $E=h\nu$, what is the peak value of the electric / magnetic field?
The electric and magnetic fields of a single photon in a box are in fact very important and interesting. If you fix the size of the box, then yes, you can define the peak magnetic or electric field value. It's a concept that comes up in cavity QED, and was important to Serge Haroche's Nobel Prize this year (along with a number of other researchers). In that experiment, his group measured the electric field of single and a few photons trapped in a cavity. It's a very popular field right now.
However, to have a well-defined energy, you need to specify a volume. In a laser, you find an electric field for a flux of photons ($n$ photons per unit time), but if you confine the photon to a box you get an electric field per photon. I'll show you the second calculation because it's more interesting.
Put a single photon in a box of volume $V$. The energy of the photon is $\hbar \omega$ (or $\frac{3}{2} \hbar \omega$, if you count the zero-point energy, but for this rough calculation let's ignore that). Now, equate that to the classical energy of a magnetic and electric field in a box of volume $V$:
$$\hbar \omega = \frac{\epsilon_0}{2} |\vec E|^2 V + \frac{1}{2\mu_0} |\vec B|^2 V = \frac{1}{2} \epsilon_0 E_\textrm{peak}^2 V$$
There is an extra factor of $1/2$ because, typically, we're considering a standing wave. Also, I've set the magnetic and electric contributions to be equal, as should be true for light in vacuum. An interesting and related problem is the effect of a single photon on a single atom contained in the box, where the energy of the atom is $U = -\vec d \cdot \vec E$. If this sounds interesting, look up the strong coupling regime, vacuum Rabi splitting, or cavity quantum electrodynamics. Incidentally, the electric field fluctuations of photons (or lack thereof!) in vacuum are responsible for the Lamb shift, a small but measurable shift in the energies of the hydrogen atom.
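Plugging in numbers gives a feel for the scale. The sketch below evaluates $E_\textrm{peak} = \sqrt{2\hbar\omega/(\epsilon_0 V)}$ from the energy balance above; the wavelength (500 nm, optical) and cavity volume (a 1 μm cube) are my own assumed values, not from the question:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s

lam = 500e-9              # assumed wavelength: 500 nm (optical photon)
V = (1e-6) ** 3           # assumed cavity volume: a 1 micron cube

omega = 2 * math.pi * c / lam
# hbar*omega = (1/2) * eps0 * E_peak^2 * V  =>  E_peak = sqrt(2*hbar*omega / (eps0*V))
E_peak = math.sqrt(2 * hbar * omega / (eps0 * V))
print(f"E_peak ≈ {E_peak:.2e} V/m")   # of order 1e5 V/m for these assumptions
```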
This is a reasonable question to ask, but the answer is probably not what you're expecting: the electric and magnetic fields don't have well-defined values in a state with a fixed number of photons. The electric and magnetic field operators do not commute with the number operator which counts photons. (They can't, because they are components of the exterior derivative of the field potential operator, which creates/annihilates photons.) The lack of commutativity implies, via Heisenberg's uncertainty principle, that the field might have arbitrarily large values.
If an atom emits energy $hf$, it also emits angular momentum (spin). That combination is called a "photon" or "wave packet". Linking the appropriate formulas from QM and E&M waves, you get the diameter of the wave packet (about $\lambda/2$) but not the length. The radius and the direction of propagation do not change as long as the wave packet is not disturbed. It is not locked in a box but propagates in vacuum.
If the coherence length $L$ is accepted as the length of the cylindrical wave packet, you can calculate the energy density $u \sim f^3/L$ and the electric field strength $E \sim \sqrt{f^3/L}$, which is constant inside the cylinder.
I got the following results: a) The Hydrogen line at 1420 MHz has FWHM≈5 kHz, L≈60,000 m, E≈1e-8 V/m
b) The Sodium D-Line has FWHM≈10 MHz, L≈6 m, E≈220 V/m
c) X-ray, λ≈1e-12 m, L≈1000λ, E≈1e16 V/m
If you choose a different shape, perhaps like a cigar, those values differ
A single photon's wave could have different shapes, really, so the maximum of the Electric field would be impossible to compute given the parameters above. It could be really short with a very high electric field or really long with a low electric field. Or, whatever. That light comes in "chunks" as photons doesn't restrict you.
Suppose the energy of an ocean wave were hv... then how high is it? Well, it would depend on the width and other factors... as surf comes in waves so does light come in photons, but we wouldn't know the exact shape from the question.
In a box of defined, thus finite, volume an infinitely long wave is by definition impossible. Positing an infinitely long wave would also deny the physical reality of the photon having a wavelength, as wavelength is never infinite; measured wavelengths, of visible light for instance, are extremely short, not infinite.
By defining the volume of the box, i.e. by setting a volume arbitrarily, one is in effect setting an arbitrary upper limit on wavelength. But a single photon cannot yield a value for wavelength, since there is no possibility of measuring a peak-to-peak distance between adjacent peaks in the waveform, when there is no second peak to measure to.
Energy is a derivative of amplitude, but only in a statistical sense, as an average of many photons per second, since the uncertainty principle makes measuring a single photon problematic. Its electric and magnetic field values are only a statistical average; individual photons may deviate widely from that average. Equations derived from these group averages are likewise valid only for the group, not for individual photons.
Answer
The cost of sod is around $325.83 dollars.
Work Step by Step
1. Find the area of the semicircles.
Let $S =$ area of one semicircle.
$S = \frac{1}{2}\pi r^{2} = \frac{1}{2} (3.14) (8^{2}) = \frac{1}{2} (200.96) = 100.48$ sq ft
There are 2 semicircles on opposite ends, so this area must be doubled:
$(S \times 2) = (100.48 \times 2) = 200.96$ sq ft
2. Find the area of the rectangle.
Let $R =$ area of the rectangle.
$R = base \times height = 29 \times 16 = 464$ sq ft
3. Add the area of the semicircles and the area of the rectangle.
Let $T =$ total area of the playing field.
$T = 464 + 200.96 = 664.96$ sq ft
4. Find the cost of sod.
Let $P =$ the cost of sod.
$P = 664.96 \times 0.49 = 325.8304 \approx 325.83$ dollars
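The steps above can be reproduced in a few lines of Python (using $\pi \approx 3.14$ as in the worked solution):

```python
r = 8                         # radius of each semicircle, in feet
semi = 0.5 * 3.14 * r ** 2    # area of one semicircle: 100.48 sq ft
ends = 2 * semi               # two semicircular ends: 200.96 sq ft
rect = 29 * 16                # rectangle: 464 sq ft
total = rect + ends           # total playing field: 664.96 sq ft
cost = total * 0.49           # sod at $0.49 per sq ft
print(round(cost, 2))         # 325.83
```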
I've done some testing of different initial temperatures in my simulated annealing algorithm and noticed the starting temperature has an effect on the performance of the algorithm.
Is there any way of calculating a good initial temperature?
As noted by Thomas Klimpel in the comments, a certain acceptance probability is often targeted, say $0.8$. The following is a simple iterative method to find a suitable initial temperature, proposed by Ben-Ameur in 2004 [1]. In the following, $t$ is a strictly positive transition, $\max_t$ and $\min_t$ are the states after and before the transition, $\delta_t$ is the cost difference $E_{\max_t} - E_{\min_t}$, and $\pi_{\min_t} \cdot \dfrac{1}{|N(\min_t)|}$ is the probability to generate a transition $t$ when the energy states are distributed in conformance with the stationary distribution
$$\pi_i = \dfrac{|N(i)|\text{exp}(-E_i/T)}{\sum_j |N(j)| \exp(-E_j/T)}$$, where $N(i)$ denotes the set of neighbors of $i$.
Finally, $\text{exp}(-\delta_t/T)$ is the probability of accepting a positive transition $t$. Now, we can have an estimation $\hat{\chi}$ of the acceptance probability $\chi(T)$ based on a "random" set $S$ of positive transitions:
\begin{eqnarray} \hat{\chi}(T) &=& \dfrac{\sum_{t \in S} \pi_{\min_t} \dfrac{1}{|N(\min_t)|} \text{exp}(- \delta_t / T)}{\sum_{t \in S} \pi_{\min_t} \dfrac{1}{|N(\min_t)|}} \\ &=& \dfrac{ \sum_{t \in S} \text{exp}(- E_{\max_t} / T) }{ \sum_{t \in S} \text{exp}(- E_{\min_t} / T) }. \end{eqnarray}
We want to find a temperature $T_0$ such that $\chi(T_0) = \chi_0$, where $\chi_0 \in ] 0,1 [$ is the acceptance probability we desire.
$T_0$ is computed by an iterative method. Some states and a neighbor for each state is generated. This gives us a set of transitions $S$. The energies $E_{\max_t}$ and $E_{\min_t}$ corresponding with the states of the subset $S$ are stored. Then a value for $T_1$ is chosen, which can be any positive value. $T_0$ is then found with the recursive formula
$$T_{n+1} = T_n \left( \dfrac{\ln(\hat{\chi}(T_n))}{\ln(\chi_0)} \right)^{1/p},$$ where $p$ is a real number $\geq 1$.
When $\hat{\chi}(T_n)$ gets close to $\chi_0$ we can stop. $T_n$ is now a good approximation of the wanted initial temperature $T_0$. For more explanation, proofs, and discussion, please see the first section of the original paper [1].
[1] Ben-Ameur, Walid. "Computing the initial temperature of simulated annealing." Computational Optimization and Applications 29, no. 3 (2004): 369-385.
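A compact Python sketch of this iteration (my own rendering of the paper's method, not a reference implementation); the names E_before and E_after are my own, holding $E_{\min_t}$ and $E_{\max_t}$ for a sampled set of strictly positive transitions:

```python
import math

def initial_temperature(E_before, E_after, chi0=0.8, T=1.0, p=1.0,
                        tol=1e-4, max_iter=1000):
    """Ben-Ameur (2004): find T0 whose estimated acceptance ratio is ~chi0.

    E_before[i], E_after[i] are the energies before/after the i-th strictly
    positive transition (E_after[i] > E_before[i]).
    """
    for _ in range(max_iter):
        # chi_hat(T) = sum_t exp(-E_max_t/T) / sum_t exp(-E_min_t/T)
        chi = (sum(math.exp(-e / T) for e in E_after) /
               sum(math.exp(-e / T) for e in E_before))
        if abs(chi - chi0) < tol:
            break
        # T_{n+1} = T_n * (ln chi_hat(T_n) / ln chi0)^(1/p)
        T *= (math.log(chi) / math.log(chi0)) ** (1.0 / p)
    return T
```

With energies of order one this converges in a handful of iterations; choosing $p > 1$ damps the update when the acceptance ratio is very sensitive to $T$.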
This is a very advanced topic related to getting very tight optimums. To my understanding, the initial temperature is generally considered part of a "temperature schedule" strategy, for which there is some deep research. In other words, both the initial temperature and the temperature decay algorithm (which you don't mention) affect the overall optimization results. Simple strategies or heuristics for both often yield good or "good enough" results.
There is, however, at least one paper that studies the initial temperature alone [1]. The bottom line is that unless you are doing very advanced work, treating the initial temperature as a parameter of the problem and iterating over different initial temperatures as part of the overall optimization (after finding that it does indeed affect results) is a very reasonable and probably widespread practice.
Alternatively, just choosing an initial temperature that gives good results by trial and error is also common (it would be somewhat surprising, and not often the case, for a problem instance's optimization results to vary substantially from a "better" initial temperature found by trial and error). As dhj pointed out, some problems will be more sensitive than others to the initial temperature.
[1] Ben-Ameur, "Computing the initial temperature of simulated annealing" (2004).
[2] Lam & Delosme, "An efficient simulated annealing schedule: derivation".
[3] Munakata & Nakamura, "Temperature control for simulated annealing".
It's my understanding that General Relativity abstracts away the concept of gravity as a force, and instead describes it as a feature of spacetime by which massive objects cause curvature. Then it follows that what we experience as a force is simply the difference between a geodesic on this curved surface and our perceived Euclidean space. What I am unsure of, exactly, is the implication of this.
$$S[q] \equiv \int L(q(t), \tfrac{ dq }{ dt }(t), t)\,dt$$ and Hamilton's Principle states that $$\frac{\delta S}{\delta q(t)} = 0.$$
If $$F(q(t)) = -\nabla U(q(t)) = \nabla (T(\tfrac{ dq }{ dt}(t)) - U(q(t))) = \nabla L,$$ which, as I understand it, is true for a conservative field like gravitation, then are these two statements not equivalent?
What additional insight do we gain by using principles of differential geometry versus classical potential theory? |
In our chemistry lab we performed the following experiment.
Given that $K_\mathrm{sp}(\ce{AgCl}) = 8.2 \times 10^{-11}$ and $K_{\mathrm{sp}}(\ce{PbCl_{2}}) = 1.7 \times 10^{-5}$
Procedure: $0.2\,\mathrm{mL}$ each of $\mathrm{0.05\: M \: \ce{Ag+}}$ ions and $\mathrm{ 0.05\: M \: \ce{Pb^{2+}}}$ ions were added into a centrifuge tube. The solution was diluted to $1\,\mathrm{mL}$. $6\,\mathrm{M}$ $\ce{HCl}$ was added to the solution until precipitation was complete, and the quantity of $\ce{HCl}$ required was recorded. The precipitate and supernatant were separated and kept for subsequent experiments.
Observations: Precipitate formation was observed almost instantly. It took 6 drops of $\ce{HCl}$ to completely precipitate the contents.
Estimate the concentration of $\ce{HCl}$ and the volume of $6\,\mathrm{M}$ $\ce{HCl}$ required to completely precipitate each of the given salts.
Here is my approach:
\begin{align} c_{1}V_{1} &= c_{2}V_{2} \\ \text{Let } c_{1}=0.05\:\mathrm{M}, V_{1}&=0.2\:\mathrm{mL}, \text{and} \ V_{2}=1\:\mathrm{mL}\\ [\ce{Pb^2+}] &= 0.2 \cdot 0.05\ \mathrm{M} \\ [\ce{Ag+}] &= 0.2 \cdot 0.05\ \mathrm{M}\\ K_{\mathrm{sp}}\:(\ce{AgCl}) &= [\ce{Ag+}] [\ce{Cl-}] \\ 8.2 \times 10^{-11} &= 0.2 \cdot 0.05\ \mathrm{M} \cdot s_{1} \\ s_{1} &= 8.2 \times 10^{-9}\ \mathrm{M}\\ K_{\mathrm{sp}}\:(\ce{PbCl_{2}}) &= [\ce{Pb^{2+}}] [\ce{Cl-}]^{2}\\ 1.7 \times 10^{-5} &= 0.2 \cdot 0.05\ \mathrm{M} \cdot (s_{2})^2\\ s_{2} &= 0.041\ \mathrm{M} \end{align}
Thus, to precipitate $\ce{AgCl}$ and $\ce{PbCl_{2}}$ completely, the concentration of chloride ions in solution must exceed $8.2 \times 10^{-9}$ M and $0.041$ M, respectively.
I am not sure of my computations up till this point, and I need some help on how to proceed further.
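The threshold arithmetic above can be double-checked with a few lines (a sketch of mine; same numbers as in the question):

```python
# Ksp values from the lab handout.
ksp_agcl = 8.2e-11
ksp_pbcl2 = 1.7e-5

# After diluting 0.2 mL of each 0.05 M solution to 1 mL:
c_ag = 0.05 * 0.2 / 1.0   # 0.01 M Ag+
c_pb = 0.05 * 0.2 / 1.0   # 0.01 M Pb2+

# Chloride concentration at which each salt just starts to precipitate,
# from Ksp = [Ag+][Cl-] and Ksp = [Pb2+][Cl-]^2 respectively.
cl_for_agcl = ksp_agcl / c_ag             # ~8.2e-9 M
cl_for_pbcl2 = (ksp_pbcl2 / c_pb) ** 0.5  # ~0.041 M
```

Note these are the onset-of-precipitation thresholds (the point the EDIT2 below also arrives at), not the chloride needed to drive precipitation to completion.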
EDIT: My calculations tell me that the volume of $6\,\mathrm{M}\,\ce{HCl}$ necessary is $0.00683\,\mathrm{mL}$. In the lab we estimated the volume of one drop from our pipette as $0.037\,\mathrm{mL}$; based on this I calculated that the number of drops of hydrochloric acid necessary for complete precipitation is $0.18$ drops. But in the actual experiment 6 drops of $\ce{HCl}$ were dispensed, and based on our computations everything should have precipitated. Yet after centrifugation and separation the supernatant still had lead ions, which were precipitated as lead chromate. Why is this so? (This is the main reason why I suspected my calculations and turned to this website for help.)

EDIT2: I now see the flaw in my computations: what I have calculated is in fact the amount of $\ce{HCl}$ necessary to start precipitation. Thanks to everyone for their help.
Let $f\in L^1(\mathbb R)$. Show that $$\int_{\mathbb R}|f|=0\implies f=0\ \text{a.e.}$$
My attempt
Suppose $f$ is continuous and that there is $y$ s.t. $|f(y)|\neq 0$. By continuity, there is $\delta >0$ s.t. $f(x)\neq 0$ for all $x\in [y-\delta ,y+\delta ]$, and moreover there is $m>0$ s.t. $|f(x)|\geq m$ for all $x\in [y-\delta ,y+\delta ]$. Therefore, $0<2m\delta\leq \int_{[y-\delta ,y+\delta ]}|f(x)|\,dx\leq \int_{\mathbb R}|f|,$ which is a contradiction. Therefore $f=0$ everywhere.
Since $f\in L^1$, there is a sequence of continuous functions $(f_n)$ s.t. $$\|f_n-f\|_{L^1}\to 0.$$ In particular, $$\int_{\mathbb R}|f_n|\to \int_{\mathbb R}|f|=0.$$
How can I prove that $f_n=0$ for all $n$ ? |
This looks extremely easy, but then again it's late at night...
Let $k$ be a commutative ring with unity. An element $a$ of a $k$-algebra $A$ is said to be
transcendental over $k$ if and only if every polynomial $P\in k\left[X\right]$ (with $X$ being an indeterminate) such that $P\left(a\right)=0$ must satisfy $P=0$.
Let $n$ be a positive integer. Let $A$ be a $k$-algebra, and $t$ be an element of $A$ such that $t^n$ is transcendental over $k$. Does this yield that $t$ is transcendental over $k$ ?
There is a rather standard approach to a problem like this which works if $k$ is reduced (namely, assume that $t$ is not transcendental, take a nonzero polynomial $P$ annihilating $t$, and consider the product $\prod\limits_\omega P\left(\omega X^{1/n}\right)$, where $\omega$ runs over a full multiset of $n$-th roots of unity adjoined to $k$; this product can be seen to lie in $k\left[X\right]$ and annihilate $t^n$; this all requires a lot more work to put on a solid footing). There are even easier proofs around when $k$ is an integral domain or a field (indeed, in this case, if $t$ is not transcendental over $k$, then $t$ is algebraic over $k$, so that, by a known theorem, $t^n$ is algebraic over $k$ as well, hence not transcendental). I am wondering if there is a counterexample in the general case or I am just blind...
A friend of mine asked me a question, which I considered trivial at first, but after a while gave rise to some doubts.
For instance, we have a potential well in 1 dimension defined by $$ V(x)= \begin{cases} +\infty &\text{if}& x<0 \text{ or } x>L\\ 0 &\text{if} &0\leq x\leq L \end{cases} $$ We know the wave function that describes the particle in the potential at a given energy level $$ E_n=\frac{\hbar^2\pi^2n^2}{2mL^2} $$
Now if we take the state at the energy level $E_2$ we have a wavefunction that behaves like $\psi_2\sim\sin(\frac{2\pi x}{L})$. We are interested in the probability density, so we take the square modulus, which would be $0$ at $L/2$. According to this fact I would say that it's impossible to find the particle in the position $L/2$, which can be said as: the event: "find the particle at $L/2$" is impossible.
The problem is that probability theory tells me that probability zero does not mean the event is impossible. Of course, to get a probability I should integrate over an interval, but then how can I say that the event IS impossible? Or isn't it?
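To quantify the distinction (a short calculation of my own, not from the question): with the normalized $\psi_2(x)=\sqrt{2/L}\,\sin(2\pi x/L)$, the probability of finding the particle within $\varepsilon$ of $L/2$ is, writing $x=L/2+u$,

$$P\!\left(\left[\tfrac{L}{2}-\varepsilon,\tfrac{L}{2}+\varepsilon\right]\right)=\int_{-\varepsilon}^{\varepsilon}\frac{2}{L}\sin^{2}\!\left(\frac{2\pi u}{L}\right)du\approx\frac{16\pi^{2}}{3}\,\frac{\varepsilon^{3}}{L^{3}}\qquad(\varepsilon\ll L),$$

which is strictly positive for every $\varepsilon>0$. Only the idealized point event "the particle is exactly at $L/2$" has probability zero, and for a continuous distribution that is true of every single point, including those where $|\psi|^2\neq 0$.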
Maybe it's a stupid question and I'm missing something, but I just can't resolve this on my own.
Under a scale transformation $$t\rightarrow \bar{t}=\mu t\hspace{0.3cm}\text{and}\hspace{0.3cm}\textbf{r}\to\bar{\textbf{r}}=\lambda\textbf{r},\tag{1}$$ Newton's law takes the form $$m\frac{d^2\textbf{r}}{dt^2}=\textbf{F}\Rightarrow m\frac{d^2\bar{\textbf{r}}}{d\bar{t}^2}=\frac{\lambda}{\mu^2}\textbf{F},\tag{2}$$ which shows that Newton's law is not scale-invariant for a time-independent $\textbf{F}$.
This looks surprising to me because scaling investigates whether the physics is the same at all scales (of magnification), and scale invariance is broken/spoiled if there is a built-in length scale or time scale in the problem. Now, Newton's law for a particle of mass $m$ is not scale invariant, as I've shown in (2). What is the reason for this? There is no built-in length scale or time scale in the problem that one can construct from $\textbf{F}$ and $m$. Therefore, physically it is surprising to me. Does it mean that the breakdown of scale invariance has nothing to do with an intrinsic length scale or time scale?
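A worked check of my own for the gravitational case may sharpen the question: a specific $\textbf{F}$ does tie $\lambda$ and $\mu$ together. With

$$\textbf{F}(\textbf{r})=-\frac{GmM}{|\textbf{r}|^{3}}\,\textbf{r}\quad\Rightarrow\quad\textbf{F}(\bar{\textbf{r}})=\lambda^{-2}\,\textbf{F}(\textbf{r}),$$

the right-hand side of (2) agrees with Newton's law in the barred variables precisely when

$$\frac{\lambda}{\mu^{2}}=\lambda^{-2}\quad\Longleftrightarrow\quad\mu^{2}=\lambda^{3},$$

i.e. Kepler's third law: the dynamics survives only this correlated rescaling of lengths and times, not (1) with independent $\lambda$ and $\mu$.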
I would like to ask a question about random walks. Campbell, Lo & MacKinlay defined the random walk in the following way (RW3):
$$ cov[f(r_{t}),g(r_{t+k})]=0,\qquad k\neq0 $$
for all $f(\cdot)$ and $g(\cdot)$, where $f(\cdot)$ and $g(\cdot)$ are linear functions, and $\{r_{t}\}$ is a series of returns. So, the question is about the following equation: $$ r_{t}=\alpha\varepsilon_{t-1}^{2}+\varepsilon_{t},\qquad\varepsilon_{t}\sim IID(0,\sigma^{2}). $$
Is it a random walk or not? And why? (I have an idea, but I don't know if it's true.)
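To see what the definition asks for concretely, here is a small simulation sketch of mine (assuming symmetric Gaussian IID noise and $\alpha=0.5$; the conclusion for the linear covariances relies on $E[\varepsilon^3]=0$). The lag-1 autocovariance of $r_t$ is statistically zero, consistent with RW3, while $\mathrm{cov}(r_t^2, r_{t+1})=2\alpha\sigma^4\neq 0$, so the series is uncorrelated without being independent:

```python
import random

random.seed(0)
alpha, n = 0.5, 200000
eps = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
# r_t = alpha * eps_{t-1}^2 + eps_t
r = [alpha * eps[t - 1] ** 2 + eps[t] for t in range(1, n + 1)]

def cov(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)

lin = cov(r[:-1], r[1:])                      # linear lag-1 autocovariance: ~0
nonlin = cov([v * v for v in r[:-1]], r[1:])  # nonlinear dependence: ~2*alpha
```

The population values (for standard normal noise) are $\mathrm{cov}(r_t, r_{t+1})=\alpha E[\varepsilon^3]=0$ and $\mathrm{cov}(r_t^2, r_{t+1})=2\alpha=1$, which the sample estimates approach.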
Thanks |
You can look at your problem in a different way. Look at the definition of a norm; you can then define your norm as:
$ \| x \|_{\mathcal{B}} = \sqrt{\sum_i \| x \|_{B_i}^4} = \sqrt{\sum_i \left( x^T B_i x \right)^2} $
where $\mathcal{B} = \{B_i\}_i$ is the set of matrices $B_i$ that you have. You can do this if your matrices $B_i$ are positive definite, or positive semidefinite with at least one of them positive definite. Otherwise you cannot be sure that this defines a norm (the same holds for your matrix $A$). Since they are symmetric and of full rank, I think that is enough.
Then you can rewrite your problem as
$$\arg \min_{x\in \mathbb{R}^n} \frac{x^T A x }{\sum_i \left( x^T B_i x \right)^2} =\arg \min_{\|y\|_{\mathcal{B}}=1} y^T A y$$
where you have used that
$$y = \frac{x}{\sqrt{\sum_i \left( x^T B_i x \right)^2}} .$$
It is strictly an eigenvalue problem. I am not sure whether the eigenvalues of the matrix $A$ are always the same regardless of the norm used to define the problem; I don't think so. Anyway, you can use the power iteration method, and define the norm yourself as in the explanation to normalize the eigenvector. It should work.
EDIT:
I think that it will work. You start with an initial guess $y_0$. Then you iterate, solving the systems. At iteration $i$ you do
$$\hat{y} = A^{-1}y_i, \qquad \lambda_i = \|\hat{y}\|_{\mathcal{B}}, \qquad y_{i+1} = \frac{\hat{y}}{\lambda_i}, \qquad \varepsilon = \left|\frac{\lambda_{i}-\lambda_{i-1}}{\lambda_{i}}\right|$$
until $\varepsilon$ is small enough. Once you have converged, your eigenpair $(\lambda,y)$ is the one with minimum eigenvalue, satisfying $$A y = \lambda y$$ for a vector $y$ satisfying $$\|y\|_{\mathcal{B}} = 1.$$
It is the same as$$\lambda = y^T A y $$
EDIT2:
Sorry, the previous is wrong, as @Johan said in his answer, because of the different exponents of $x$ in the numerator and the denominator. The answer would be valid if the problem were
$$\arg \min_{x\in \mathbb{R}^n} \frac{x^T A x }{\sum_i x^T B_i x}$$
where we don't have the square in the denominator. Then, defining the norm as
$ \| x \|_{\mathcal{B}} = \sqrt{\sum_i \| x \|_{B_i}^2} = \sqrt{\sum_i x^T B_i x} $
the power iteration can be applied. For the original problem this is not valid.
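As a sanity check of the iteration's mechanics, here is a minimal pure-Python sketch on a $2\times 2$ diagonal example (the numbers are mine; with all matrices diagonal the Euclidean and $\mathcal{B}$-norm eigenvectors coincide, so the target is unambiguous). The iterate converges to the eigenvector of the smallest eigenvalue of $A$, rescaled to unit $\mathcal{B}$-norm:

```python
import math

A  = [[2.0, 0.0], [0.0, 5.0]]            # smallest eigenvalue 2, eigenvector e1
Bs = [[[1.0, 0.0], [0.0, 1.0]],
      [[1.0, 0.0], [0.0, 2.0]]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def solve2(M, b):  # Cramer's rule for a 2x2 system M y = b
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - M[0][1]*b[1]) / det,
            (M[0][0]*b[1] - M[1][0]*b[0]) / det]

def bnorm(x):  # ||x||_B = sqrt(sum_i x^T B_i x)
    return math.sqrt(sum(x[0]*matvec(B, x)[0] + x[1]*matvec(B, x)[1]
                         for B in Bs))

y = [1.0, 1.0]
for _ in range(200):
    yhat = solve2(A, y)           # y_hat    = A^{-1} y_i
    lam = bnorm(yhat)             # lambda_i = ||y_hat||_B
    y = [c / lam for c in yhat]   # y_{i+1}  = y_hat / lambda_i

# Rayleigh quotient recovers the smallest eigenvalue of A (here 2),
# while y keeps unit B-norm by construction.
rq = (y[0]*matvec(A, y)[0] + y[1]*matvec(A, y)[1]) / (y[0]*y[0] + y[1]*y[1])
```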
Getting all (complex) solutions of a non polynomial equation
Hi!
I used to solve the following equation with Mathematica. \begin{equation} \alpha_1 + \alpha_2x + \alpha_3x^2 + x^4 + \frac{\alpha_4}{x^2-\alpha_0} + \frac{\alpha_5 x^2}{x^2-\alpha_0}=0 \end{equation} where $\alpha_i$ are constants.
The Mathematica function "Solve" gives me all the numerical roots of this non polynomial equation very easily. These roots can be real or complex.
I'm a very beginner at Sage. I have tried several methods to solve this equation automatically but it seems that all methods I've used work only for polynomial equations. Here they are :
alpha0 = 0.25
alpha1 = -2.5
alpha2 = 6.9282
alpha3 = -5.5
alpha4 = 0.5
alpha5 = -0.5
x = var('x')
eq = alpha1 + alpha2*x + alpha3*x**2 + x**4 + alpha4/(x**2 - alpha0) + alpha5*x**2/(x**2 - alpha0) == 0.

# test 1
# solve(eq, x, ring=CC)
# ==> [0 == 20000*x^6 - 115000*x^4 + 138564*x^3 - 32500*x^2 - 34641*x + 22500]

# test 2
# solve(eq, x, ring=CC, solution_dict=True)
# ==> [{0: 20000*x^6 - 115000*x^4 + 138564*x^3 - 32500*x^2 - 34641*x + 22500}]

# test 3
# eq.roots(x, ring=CC, multiplicities=False)
# ==> TypeError: fraction must have unit denominator
Do you have any idea of a method to get the approximated roots of the equation ?
Thanks in advance
EDIT : correction of an error in the equation ; add few tests |
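In the meantime, here is a library-free workaround of my own (plain Python, not Sage-specific): multiply the equation by $x^2-\alpha_0$ to get the degree-6 polynomial that `solve` already produced, then approximate all complex roots with a basic Durand-Kerner iteration. Any root with $x^2=\alpha_0$ would be a spurious pole and should be discarded (none arises for these coefficients):

```python
def poly_roots(coeffs, iters=500):
    """All complex roots of c[0] + c[1]*x + ... + c[n]*x^n
    via the Durand-Kerner fixed-point iteration."""
    n = len(coeffs) - 1
    lead = coeffs[-1]

    def p(z):  # Horner evaluation
        v = 0j
        for c in reversed(coeffs):
            v = v * z + c
        return v

    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard starting points
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            d = lead
            for j, s in enumerate(roots):
                if j != i:
                    d *= r - s
            new.append(r - p(r) / d)
        roots = new
    return roots

# Clearing the denominator:
# (a1 + a2 x + a3 x^2 + x^4)(x^2 - a0) + a4 + a5 x^2 = 0
a0, a1, a2, a3, a4, a5 = 0.25, -2.5, 6.9282, -5.5, 0.5, -0.5
coeffs = [a4 - a0 * a1,       # x^0
          -a0 * a2,           # x^1
          a1 - a0 * a3 + a5,  # x^2
          a2,                 # x^3
          a3 - a0,            # x^4
          0.0,                # x^5
          1.0]                # x^6
roots = poly_roots(coeffs)
```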
How should one think about simplicial objects in a category versus actual objects in that category? For example, both for intuition and for practical purposes, what's the difference between a [commutative] ring and a simplicial [commutative] ring?
One could say many things about this, and I hope you get many replies! Here are some remarks, although much of this might already be familiar or obvious to you.
In some vague sense, the study of simplicial objects is "homotopical mathematics", while the study of objects is "ordinary mathematics". Here by "homotopical mathematics", I mean the philosophy that, among other things, says that whenever you have a set in ordinary mathematics, you should instead consider a space, with the property that taking $\pi_0$ of this space recovers the original set. In particular, this should be done for Hom sets, so we should have Hom spaces instead. This is formalized in various frameworks, such as infinity-categories, simplicial model categories, and A-infinity categories. Here "space" can mean many different things, in these examples: infinity-category, simplicial set, or chain complex respectively.
For intuition, it helps to think of a simplicial object as an object with a topology. For example, a simplicial set is like a topological space, a simplicial ring is like a topological ring, etc. The precise statements usually take the form of a Quillen equivalence of model categories between the simplicial objects and a suitable category of topological objects. Simplicial sets are Quillen equivalent to compactly generated topological spaces, and I think a similar statement holds if you replace sets by rings, although I am not sure if you need any hypotheses here.
If you like homological algebra, it helps to think of a simplicial object as analogous to a chain complex. The precise statements are given by various generalizations of the Dold-Kan correspondence. For simplicial rings, they should correspond to chain complexes with a product, more precisely DGAs. Again, one has to be a bit careful with the precise statements. I think the following is true: Simplicial commutative unital k-algebras are Quillen equivalent to connective commutative differential graded k-algebras, provided k is a Q-algebra.
A remark about the word "simplicial": A simplicial object in a category C is a functor from the Delta category into C, but for almost all purposes the Delta category could be replaced with any test category in the sense of Grothendieck, see this nLab post for some discussion which doesn't use the terminology of test categories.
Since you used the tag "derived stuff" I guess you are already aware of Toen's derived stacks. Some of his articles have introductions which explain why one would like to use simplicial rings instead of rings. See in particular his really nice lecture notes from a course in Barcelona last year.
I tried to write a blog post on some of this a while ago, there might be something useful there, especially relating to motivation from algebraic geometry.
This is an answer to `What can you do with simplicial objects' instead of how to think of them.
The main context in which I think about simplicial objects is Cohomological Descent. This is, as Martin Olsson put it, Cech cohomology on steroids. It is an extremely useful tool (for instance it makes it totally easy to prove things like GAGA for Deligne-Mumford stacks). Brian Conrad has an excellent set of notes here. SGA IV, expose 5 is another (much more abstract) reference. Deligne's Hodge III is another source. There he uses coh. descent to put Hodge structures on singular (complex analytic) varieties.
Just to amplify the very good points made by Andreas Holmstrom above:
a) it just so happens that simplices are one of the best, or at least best understood, models for higher homotopical structure, and almost every simplicial object that you will run into is secretly a model for an oo-groupoid with extra structure.
b) In particular this is true for the case of simplicial abelian groups, and monoids of these: simplicial abelian rings. The underlying simplicial set of a simplicial group is necessarily a Kan complex, hence an oo-groupoid. So simplicial groups are oo-groupoids with strict group structure on them. In particular simplicial abelian groups are therefore fully abelian oo-groupoids: special kinds of connective spectra.
c) See the introduction at nLab:Dold-Kan correspondence: this correspondence allows to not only understand all these simplicial objects as models for oo-groupoids with extra structure and property, but also lots of objects in homological algebra as being just yet another (computationally convenient) repackaging of that information.
And when you feel at home with a), b), c), open Lurie's "Stable (oo,1)-Categories" and see his (oo,1)-version of the Dold-Kan correspondence there to see the full truth...
In case you are an algebraic geometer and are used to thinking about a commutative ring in terms of its spectrum, it might be helpful to imagine the spectrum of a simplicial commutative ring $A$ as the spectrum of $\pi_0(A)$ together with a fuzzy cloud of generalized nilpotents. This can actually be made precise: for a simplicial commutative ring $A$ there is a closed immersion $\mathrm{Spec}(\pi_0 (A)) \to \mathrm{Spec}(A)$, and their underlying point sets are the same. So the relationship of spectra of simplicial rings to spectra of commutative rings is really much the same as the relationship between general schemes and reduced schemes.
This is a good place to mention the notion of a "Grothendieck test category". This is a small category $A$ which has the property that presheaves of sets on $A$, with an appropriate class of weak equivalences, model the homotopy theory of spaces. So the simplicial indexing category $\Delta$ is a test category, but there are others, such as the category which indexes cubical sets. I would guess that whenever one needs to use simplicial rings (say), you could replace simplicial with any test category (though I don't know that anyone has worked this sort of thing out).
These notions are developed in some papers of Cisinski, and a good introduction (which gives the definition of test category) is the paper of Jardine. Of course, this doesn't really answer your question on how to think about simplicial objects, but perhaps it puts it in a broader context.
It depends on the context. Simplicial objects are often used to "resolve" objects in nonabelian contexts to get a good notion of derived functors. For example, in Andre-Quillen cohomology you take a ring (viewed as a discrete simplicial set) and resolve it by finding a weakly equivalent simplicial ring that's levelwise free. Then you can apply various functors to this (such as abelianization) and get good notions of derived functors.
A simplicial object $\{X_n\}$ in particular always includes two maps from $X_1$ to $X_0$, and you can think of the simplicial object as representing a lift of the coequalizer of these two maps to something living in a derived context.
Simplicial objects are often equivalent in some fashion to topological objects; e.g. simplicial commutative rings are equivalent (in a model-category sense) to topological commutative rings.
Sometimes there's a "realization" functor from simplicial objects to regular objects, but in general simplicial objects play their own role.
The definition of a simplicial object is a functor $X_\bullet\colon \Delta^{op}\to A$ where $\Delta$ is the simplicial category and $A$ is your favorite category. So the easiest answer is that a simplicial object in a category is a sequence of objects in that category together with morphisms $d_i$ and $s_j$ that satisfy a bunch of relations.
We can also think of a simplicial object $X_\bullet$ as an element of $Fun(\Delta^{op}, A)$, the category of functors from $\Delta^{op}$ to $A$. In this context, it is easy to define a morphism between simplicial objects. It maps $X_n\to Y_n$ and commutes with the $d_i$'s and $s_j$'s.
Shameless planar algebra plug: Simplicial objects are also really cool in tandem with an adjoint functor pair. You can use this machinery to get a pictorial representation of the simplicial category using Temperley-Lieb (string) diagrams. In fact, planar algebras are great examples of simplicial vector spaces, although there's a lot more structure too... |
I am trying to vertically and horizontally align items in a table. I tried to follow what I found on the internet, including on this site, but it seems not to work for the last column. Here is what I have.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c||c|c|}
\hline
Doping & $\alpha_b$ $\left(\textup{cm}^{-1}\right)$ & $\eta_{ext}$ \\
\hline\hline
5 \% & $4 \cdot 10^{-4}$ & 0.995 \\ \hline
7.5 \% & $6 \cdot 10^{-4}$ & 0.992 \\ \hline
10 \% & $8 \cdot 10^{-4}$ & 0.987 \\
\hline
\end{tabular}
\caption{Caption.}
\label{theor_cooling_efficiency_parameters}
\end{center}
\end{table}
I want to increase the row height, since it seems too short, especially in the middle column (although it surprises me that LaTeX does not automatically adapt the rows to the math formulas inside)
Anyway, I included the array package, increased the row heights manually via [2ex] after the \\, and set the vertical alignment with m. Here is the code:
\begin{table}[!h]
\begin{center}
\begin{tabular}{|m{3em}||m{4.8em}|m{3em}|}
\hline
Doping & $\alpha_b$ $\left(\textup{cm}^{-1}\right)$ & $\eta_{ext}$ \\[2ex] \hline\hline
5 \% & $4 \cdot 10^{-4}$ & 0.995 \\[2ex] \hline
7.5 \% & $6 \cdot 10^{-4}$ & 0.992 \\[2ex] \hline
10 \% & $8 \cdot 10^{-4}$ & 0.987 \\[2ex] \hline
\end{tabular}
\caption{Caption.}
\label{theor_cooling_efficiency_parameters}
\end{center}
\end{table}
And here is the result:
As you can see, the first two columns are ok, but not the third, which is not centered at all. How can I fix this?
Thanks in advance for all the answers.
P.S.: I am aware that the columns are not horizontally centered and I know how to fix that, for example, by changing the tabular initialization to
\begin{tabular}{|>{\centering\arraybackslash}m{.1\linewidth}||>{\centering\arraybackslash}m{.14\linewidth}|>{\centering\arraybackslash}m{.1\linewidth}|}
which centers horizontally the columns but does not fix the third column vertical alignment. |
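For reference, one common route (a sketch of mine, untested against the full preamble) is to drop the manual \\[2ex], which only adds space below each row and so can spoil the centering of short cells, and instead stretch every row uniformly with \arraystretch while keeping the centered m-columns:

\usepackage{array} % preamble: m{} columns and \arraybackslash

\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{2}% taller rows; m{} content stays centered
\begin{tabular}{|>{\centering\arraybackslash}m{3em}||>{\centering\arraybackslash}m{4.8em}|>{\centering\arraybackslash}m{3em}|}
\hline
Doping & $\alpha_b$ $\left(\textup{cm}^{-1}\right)$ & $\eta_{ext}$ \\
\hline\hline
5 \% & $4 \cdot 10^{-4}$ & 0.995 \\ \hline
7.5 \% & $6 \cdot 10^{-4}$ & 0.992 \\ \hline
10 \% & $8 \cdot 10^{-4}$ & 0.987 \\ \hline
\end{tabular}
\caption{Caption.}
\label{theor_cooling_efficiency_parameters}
\end{table}

The \renewcommand is scoped to the table environment, so other tables are unaffected.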
CAD & 3D Printing
Category Archives: 3D
September 5, 2019 – 1:26 pm
On a recent trip I picked up an art-glass marble for my wife. She liked it so much she bought a nice lighted display stand for it. The stand wasn't designed to display that particular marble though, so it didn't work too well.
This was a problem easily solved with a bit of 3D printing. I figured it’d be trivial to make an adapter ring for the marble to sit securely on. However, when I examined the stand, I discovered the LEDs were flush with the base, and likely to get in the way. The marble needed to be held up above them. Before starting on the adapter, I did a simple model of the stand, so I could check everything cleared.
With the stand and LEDs modeled, I could verify my design properly fit. The project worked great, and fit perfectly with the first print. Success.
The next CAD project I did was a stand for a screwdriver set I purchased. I carefully measured the handle and the various blades, but (perhaps over-confident) didn’t bother to model them.
It wasn’t until I got the print back I discovered a major ergonomic fail: You can’t easily reach the handle because the blades are in the way! Had I taken the time to actually model the blades and handle, I would have visualized this immediately, and chosen another layout.
Moral – it’s a good idea to model the whole environment, not just the piece you’re printing.
June 16, 2019 – 11:15 am
Apple’s recently introduced Mac Pro features a distinctive pattern of holes on the front grill. I’m not likely to own one anytime soon (prices for a well configured machine approach a new car), but that pattern is very appealing, and re-creating it is a fun exercise.
The best clue about the pattern comes from this page pitching the product. About halfway down, by the heading “More air than metal” is a short video clip showing how the hemispherical holes are milled to create the pattern.
Let’s start with a screen grab of the holes from the front. The holes are laid out with their centers equally spaced apart and the tops of the lower circles fall on the same line as the bottom of the circles above them. So the circles are spaced 2
r apart vertically, where r is the radius of the circle.
The horizontal spacing is a bit more work. The angles of the equilateral triangle formed by the centers are 180°/3 = 60° (or $\pi/3$ as they say in the 'hood). If you draw a vertical line from the center of the top circle to the line connecting the centers of the bottom circles, that line (as you see above) is $2r$ long. With a bit of trig, you can find half the horizontal spacing $x$ by using the right triangle formed by that line, $x$ and the side of the equilateral triangle. The angle from the vertical center line to the equilateral triangle edge is half of $\pi/3$, $\pi/6$. So,

\[x=2r\tan \frac{\pi }{6}\]

and $2x$ is the horizontal spacing of the circles.
The hemispherical holes are milled into both sides of the plate, but the holes on the other side are offset so the hole centers on one side fall exactly in the middle of the triangle formed by the hole centers on the other side:
You already know the horizontal offset for the centers from one side to the other is $x$, but how far up do you go to hit the center of that triangle? Let's call that $h$.
You’ll use the same tan(π/6) trick we used above, this time using the triangle formed by
x and h. Like the triangle used to find x, the angle here is also π/6. So:
Let’s clean this up a bit:
\[h=x\tan \frac{\pi }{6}=2r\tan \frac{\pi }{6}\tan \frac{\pi }{6}\]
\[\tan \frac{\pi }{6}=\frac{1}{\sqrt{3}}\] so… \[h=2r\frac{1}{\sqrt{3}}\frac{1}{\sqrt{3}}=\frac{2r}{3}\]
There’s still the issue of how thick the plate is, relative to the size of the holes. I took screen grabs of the film clip and compared them by counting pixels:
Examining the images, the thickness was about 101 pixels, with the diameter ($2r$) of the holes coming in at 176. Now, these numbers aren't at all precise, because of the perspective introduced by whatever animation software was used. But I can't help but notice the following coincidence:
Yeah. The ratio of the plate thickness to the hole diameter ($101/176 \approx 0.574$) is just like the ratio of the hole horizontal spacing $x$ to the hole diameter ($\tan\frac{\pi}{6} \approx 0.577$). So let's turn this around, and summarize by saying that for a plate of any thickness $t$, use:
\[r=\frac{t\sqrt{3}}{2}\]
\[x=2r\tan \frac{\pi }{6}=\frac{2r}{\sqrt{3}}=t\] \[h=\frac{2r}{3}\]
where $r$ is the radius (half the diameter) of the spheres and $2x$ is the horizontal spacing of the sphere centers on a given row. For the next row, the centers are offset by $x$ horizontally from the centers of the previous row. The rows are spaced $2r$ apart vertically, from sphere center to center. The same grid of spheres carved into the back side is displaced by $x$ horizontally and $h$ vertically from the spheres in the front. The centers of the front spheres are on the front surface of the plate, the back spheres on the back.
So to CAD this up, all you need to do is start with a rectangular block of thickness $t$, and use the formulas above to place the centers of the spheres (with diameter $2r$) on the front and back of the block.
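The placement formulas can be turned into a small center-generating helper (my own sketch; names and the grid extents are illustrative, units are whatever $t$ is in):

```python
import math

def grill_centers(t, cols, rows):
    """Sphere centers for the hole pattern on a plate of thickness t.
    Front centers lie on the front face (z = 0), back centers on the
    back face (z = -t), per the formulas above."""
    r = t * math.sqrt(3) / 2            # hole radius: r = t*sqrt(3)/2
    x = 2 * r * math.tan(math.pi / 6)   # half the horizontal spacing (= t)
    h = 2 * r / 3                       # vertical offset of the back grid
    front, back = [], []
    for j in range(rows):
        for i in range(cols):
            x0 = 2 * x * i + (x if j % 2 else 0.0)  # odd rows shift by x
            y0 = 2 * r * j                          # rows are 2r apart
            front.append((x0, y0, 0.0))
            back.append((x0 + x, y0 + h, -t))       # back grid offset (x, h)
    return front, back
```

Feeding these centers to your CAD tool as sphere cut features (diameter $2r$) on a block of thickness $t$ reproduces the pattern.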
If you just want to quickly print or look at the result in 3D, I’ve posted some sample STL files on Thingiverse.
June 23, 2018 – 12:10 pm
Somewhere around middle-school, I came across a diagram of the classic three-piece burr puzzle.
It looked fun, so I endeavored to make one. Unfortunately, the materials available to me then (a scrap of plywood and a janky power scroll saw) didn’t produce very good results. It worked, but was crude and wobbly. Spray painting it black didn’t help.
A couple years ago, I revisited the project, this time with 3D printing.
Printed at Shapeways in dyed plastic, it works great. It’s kind of pricey though, running just over $90 for the three solid pieces. Similar puzzles retail for less than $5. I tried making hollow versions to lower the price, but it reduced it less than 20%, and required annoying holes to drain the trapped material.
Yes, I could just have bought one. But there’s something fun about precisely realizing something you envisioned decades ago.
February 10, 2018 – 12:53 pm
My mom collects teacups, and I liked the idea of creating one using 3D tech. She also grows raspberries in her garden – eating them off the vines is always a treat when we visit in the summer. I was mulling over ideas for a raspberry teacup design when the idea struck of using a raspberry leaf as the saucer. Then I started in earnest. This is perhaps an afternoon project for an experienced ceramics artist. But I'm more fond of CAD than wet clay, so I designed it in Fusion 360, and had it fabricated with Shapeways' porcelain process.
October 17, 2016 – 9:55 am
We have an aging Thermador gas range. One of the original plastic knobs broke. Surprisingly, it was hard to find replacements. Thermador no longer supplied them, and the 6mm D-shaft was an unusual size for generic replacements. The closest I could come was some generic knobs off ebay. But these didn't have the proper stop inside the sleeve, so you couldn't push the shaft in before turning it (a safety feature of the Thermador knobs). I kludged some stops with chopstick pieces, but it was clumsy. Time to roll our own.
I plan to study elliptic functions. Can you recommend some books? What is the relationship between elliptic functions and elliptic curves? Many thanks in advance!
McKean and Moll have written the nice book Elliptic Curves: Function Theory, Geometry, Arithmetic, which cleanly illustrates the connection between elliptic curves and elliptic/modular functions. If you haven't seen the book already, you should.
As for elliptic functions proper, my suggested books tend to be a bit on the old side, so pardon me if I don't know the newer treatments. Anyway, I quite liked Lawden's Elliptic Functions and Applications and Akhiezer's Elements of the Theory of Elliptic Functions. An oldie but goodie is Greenhill's classic, The Applications of Elliptic Functions; the notation is a bit antiquated, but I have yet to see another book with as wide a discussion of the applications of elliptic functions to physical problems.
At one time... every young mathematician was familiar with $\mathrm{sn}\,u$, $\mathrm{cn}\,u$, and $\mathrm{dn}\,u$, and algebraic identities between these functions figured in every examination.
– E.H. Neville
Finally, I would be remiss if I did not mention the venerable Abramowitz and Stegun, and the successor work, the DLMF. The chapters on the Jacobi and Weierstrass elliptic functions give a nice overview of the most useful identities, and also point to other fine work on the subject.
First of all, the ever-modern Course of Modern Analysis by Whittaker & Watson.
For a more introductory style, I highly recommend V. Prasolov & Y. Solovyev, Elliptic Functions and Elliptic Integrals.
The relation between elliptic curves and elliptic functions can be sketched as follows. An elliptic curve is topologically a torus, which can be realized by cutting a parallelogram in $\mathbb{C}$ and identifying its opposite edges. On the other hand, it can be realized in $\mathbb{CP}^2$ by an algebraic equation of the form $$y^2=x^3+ax+b.$$ Elliptic functions provide a map between the two pictures, which is also called uniformization. Essentially, $x,y$ are given by some elementary elliptic functions of $z$ (the complex coordinate on the parallelogram).
Compare this with a more familiar example: trigonometric functions $\sin$, $\cos$ provide a uniformization of the circle, which can be defined either via an algebraic equation or in a parametric form: $$x^2+y^2=1\quad \begin{array}{c}\sin,\cos\\ \Longleftrightarrow \\ \;\end{array}\quad \begin{cases}x=\cos t,\\ y=\sin t,\\ t\in[0,2\pi].\end{cases}$$
There is a classical three-volume series by C.L. Siegel, Topics in Complex Function Theory. It is well written, though the perspective is a little outdated. I believe (no book at hand) you can find treatments by Serge Lang in the GTM series as well. I am not sure whether Stein's book on complex analysis covers this topic.
The Birch and Swinnerton-Dyer conjecture is one of the Millennium Prize Problems listed by the Clay Mathematics Institute. It relates the order of vanishing and the first non-zero Taylor series coefficient of the $L$-function associated to an elliptic curve $E$ defined over $\mathbb{Q}$ at the central point $s=1$ to certain arithmetic data, the BSD invariants of $E$.
If $r$ denotes the order of vanishing, the conjectured formula reads $$\frac{1}{r!} L^{(r)}(E,1)= \frac{\# Ш(E/\mathbb{Q})\cdot \Omega_E \cdot \mathrm{Reg}(E/\mathbb{Q}) \cdot \prod_p c_p}{\#E(\mathbb{Q})_{\mathrm{tor}}^2}.$$
The quantities appearing in this formula are the BSD invariants of $E$:
$r$ is the rank of $E(\mathbb{Q})$ (a non-negative integer);
$\#Ш(E/\mathbb{Q})$ is the order of the Tate–Shafarevich group of $E$ (the group is conjectured to always be finite, making this a positive integer);
$\mathrm{Reg}(E/\mathbb{Q})$ is the regulator of $E/\mathbb{Q}$;
$\Omega_E$ is the real period of $E/\mathbb{Q}$ (a positive real number);
$c_p$ is the Tamagawa number of $E$ at the prime $p$ (a positive integer, equal to $1$ for all but at most finitely many primes);
$\#E(\mathbb{Q})_{\mathrm{tor}}$ is the order of the torsion subgroup of $E(\mathbb{Q})$ (a positive integer).
The first one is expressible in terms of the modified Bessel function of the first kind:
$$1+\cfrac1{2+\cfrac1{3+\cfrac1{4+\cdots}}}=\frac{I_0(2)}{I_1(2)}=1.433127426722\dots$$
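This identity is easy to confirm numerically with nothing but the standard library: compute $I_0(2)$ and $I_1(2)$ from the power series $I_n(x)=\sum_k (x/2)^{2k+n}/(k!\,(k+n)!)$ (at $x=2$ every power of $x/2$ is $1$) and evaluate the continued fraction from the bottom up. The helper names below are my own; this is a sketch, not a library call.

```python
import math

def besseli_at2(n, terms=40):
    # power series for I_n(2): the terms reduce to 1/(k! (k+n)!)
    return sum(1.0 / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def cf(depth=40):
    # evaluate 1 + 1/(2 + 1/(3 + ... + 1/depth)) from the bottom up
    t = float(depth)
    for b in range(depth - 1, 0, -1):
        t = b + 1.0 / t
    return t

print(cf())                              # ≈ 1.4331274267223117
print(besseli_at2(0) / besseli_at2(1))   # should agree to machine precision
```

This continued fraction converges extremely fast (the convergent denominators grow factorially), so a depth of 40 is already far beyond double precision.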
The second one, through an equivalence transformation, can be converted into the following form:
$$1+\cfrac1{\frac12+\cfrac1{\frac23+\cfrac1{\frac38+\cfrac1{b_4+\cdots}}}}$$
where $b_k=\dfrac{k!!}{(k+1)!!}$ and $k!!$ denotes the double factorial. By a theorem of Van Vleck, since
$$\sum_{k=1}^\infty \frac{k!!}{(k+1)!!}$$
diverges, the second continued fraction converges. This CF can be shown to be equal to
$$\frac1{\tfrac1{\sqrt{\tfrac{e\pi}{2}}\mathrm{erfc}\left(\tfrac1{\sqrt 2}\right)}-1}=1.904271233329\dots$$
where $\mathrm{erfc}(z)$ is the complementary error function.
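A numerical check of this closed form, using only the standard library ($\mathrm{erfc}$ is `math.erfc`). The helper name `cf2` and the depth are my own choices; unlike the Bessel CF above, this one converges only at roughly $\exp(-c\sqrt{k})$ speed, hence the large depth.

```python
import math

# left-hand side: 1 / ( 1/( sqrt(e*pi/2) * erfc(1/sqrt 2) ) - 1 )
lhs = 1.0 / (1.0 / (math.sqrt(math.e * math.pi / 2)
                    * math.erfc(1.0 / math.sqrt(2))) - 1.0)

def cf2(depth=20000):
    # 1 + 2/(1 + 3/(1 + 4/(1 + ...))), evaluated from the bottom up
    t = 1.0
    for k in range(depth, 0, -1):
        t = 1.0 + (k + 1) / t
    return t

print(lhs, cf2())   # both ≈ 1.904271233329...
```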
Establishing the value of the "continued fraction constant" (a short sketch)
From the modified Bessel differential equation, we can derive the difference equation
$$Z_{n+1}(x)=-\frac{2n}{x}Z_n(x)+Z_{n-1}(x)$$
where $Z_n(x)$ is either of the two independent solutions $I_n(x)$ or $(-1)^n K_n(x)$. Letting $x=2$, we obtain
$$Z_{n+1}(2)=-n\,Z_n(2)+Z_{n-1}(2)$$
We can divide both sides of the recursion relation with $Z_n(2)$ and rearrange a bit, yielding
$$\frac{Z_n(2)}{Z_{n-1}(2)}=\cfrac1{n+\cfrac{Z_{n+1}(2)}{Z_n(2)}}$$
A similar manipulation can be done in turn for $\dfrac{Z_{n+1}(2)}{Z_n(2)}$; iterating that transformation yields
$$\frac{Z_n(2)}{Z_{n-1}(2)}=\cfrac1{n+\cfrac1{n+1+\cfrac1{n+2+\cdots}}}$$
Now, we don't know a priori whether $Z$ is built from $I$ or $K$; the applicable theorem at this stage is Pincherle's theorem. It states that $Z$ is necessarily the minimal solution of the associated difference equation if and only if the continued fraction converges (which it does, by Śleszyński–Pringsheim). Roughly speaking, the minimal solution of a difference equation is the unique solution that "decays" as the index $n$ increases (all the other solutions, meanwhile, are termed dominant solutions). From the asymptotics of $I$ and $K$, we find that $I$ is the minimal solution of the difference equation ($K$ and any other linear combination of $I$ and $K$ constitute the dominant solutions). By Pincherle, then, the continued fraction has the value $\dfrac{I_n(2)}{I_{n-1}(2)}$. Taking $n=1$ and reciprocating gives the first CF in the OP.
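The minimal/dominant dichotomy can be seen numerically (helper names below are mine). The ratios $I_n(2)/I_{n-1}(2)$ of the minimal solution decay as $n$ grows, and precisely because $I_n$ is minimal, running the recurrence forward in floating point is useless: rounding errors excite the dominant, factorially growing solution and quickly swamp $I_n$.

```python
import math

def besseli_at2(n, terms=40):
    # power series for I_n(2): sum over k of 1/(k! (k+n)!)
    return sum(1.0 / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

# ratios of the minimal solution decay (roughly like 1/n)
ratios = [besseli_at2(n) / besseli_at2(n - 1) for n in range(1, 9)]
print(ratios)

# forward recurrence Z_{n+1} = -n Z_n + Z_{n-1}, seeded with I_0(2), I_1(2):
# rounding noise grows along the dominant solution and takes over
z_prev, z = besseli_at2(0), besseli_at2(1)
for n in range(1, 30):
    z_prev, z = z, -n * z + z_prev
print(z, besseli_at2(30))   # wildly different magnitudes by n = 30
```

This instability is exactly why one evaluates such ratios via the continued fraction (or backward recurrence) rather than forward recursion.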
Here's a short Mathematica script for evaluating the "continued fraction constant", using the Lentz-Thompson-Barnett method:
prec = 50;  (* working precision, in digits *)
y = N[1, prec]; c = y; d = 0; k = 2;  (* y = b0 = 1; c, d are Lentz's C and D *)
While[True,
 c = k + 1/c; d = 1/(k + d);  (* C_j = b_j + a_j/C_{j-1}, D_j = 1/(b_j + a_j D_{j-1}), with a_j = 1, b_j = k *)
 h = c*d; y *= h;             (* multiply the running value by the correction factor *)
 If[Abs[h - 1] <= 10^(-prec), Break[]];  (* stop once the correction is negligible *)
 k++];
y
1.4331274267223117583171834557759918204315127679060
We can check the agreement with the closed form:
y - BesselI[0, 2]/BesselI[1, 2] // InputForm
0``49.70728038020511
Alternative expressions for the second continued fraction
Just to thoroughly beat the stuffing out of this question, I'll talk about a few other expressions that are equivalent to the OP's second CF.
One can build the Euler-Minding series of the continued fraction:
$$1+\sum_{k=0}^\infty \frac{(-1)^k (k+2)!}{B_k B_{k+1}}$$
where $B_k$ is the denominator of the $k$-th convergent of the continued fraction, which satisfies the difference equation $B_k=B_{k-1}+(k+1)B_{k-2}$, with initial conditions $B_{-1}=0$, $B_0=1$. OEIS has a record of this sequence, but there is no mention of a closed form.
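The partial sums of this series are exactly the convergents of the CF, so they can be checked directly. In the pure-Python sketch below (helper name mine), the $B_k$ are kept as exact integers so nothing overflows; Python's big-integer division still produces a correctly rounded float.

```python
def euler_minding(terms=1200):
    # B_k = B_{k-1} + (k+1) B_{k-2}, with B_{-1} = 0, B_0 = 1
    b_prev, b = 0, 1          # B_{-1}, B_0
    fact = 2                  # (k+2)!, starting at k = 0
    s = 1.0
    for k in range(terms):
        b_prev, b = b, b + (k + 2) * b_prev   # advance: b is now B_{k+1}
        s += (-1) ** k * fact / (b_prev * b)  # term (-1)^k (k+2)!/(B_k B_{k+1})
        fact *= k + 3                         # (k+2)! -> (k+3)!
    return s

print(euler_minding())   # ≈ 1.904271233329...
```

The terms decay like $\exp(-2\sqrt{k})$, so 1200 terms is far more than double precision needs.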
One can also split the original continued fraction into odd and even parts, yielding the following contractions:
$$3-\cfrac{6}{8-\cfrac{20}{12-\cfrac{42}{16-\cdots}}}\qquad \text{(odd part)}$$
$$\cfrac1{1-\cfrac{2}{6-\cfrac{12}{10-\cfrac{30}{14-\cdots}}}}\qquad \text{(even part)}$$
The utility of these two contractions is that they converge twice as fast as the original continued fraction, as well as providing "brackets" for the value of the continued fraction.
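The bracketing behaviour can be confirmed without working out the general terms of the contractions: for a continued fraction with positive elements, the even convergents increase toward the limit and the odd convergents decrease toward it, which is exactly what the two contractions exploit. A pure-Python check (helper names mine), using the closed form derived later in this answer as the reference value:

```python
import math

# closed-form value of the CF (see the derivation further down)
limit = 1.0 / (1.0 / (math.sqrt(math.e * math.pi / 2)
                      * math.erfc(1.0 / math.sqrt(2))) - 1.0)

def convergent(n):
    """n-th convergent of 1 + 2/(1 + 3/(1 + 4/(...))), truncated at b_n = 1."""
    t = 1.0
    for k in range(n, 0, -1):
        t = 1.0 + (k + 1) / t
    return t

evens = [convergent(n) for n in range(0, 11, 2)]
odds = [convergent(n) for n in range(1, 12, 2)]
print(evens)  # increase toward the limit from below
print(odds)   # decrease toward the limit from above
```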
Much, much later:
Prompted by GEdgar's question, I have found that the second CF does have a nice closed form. Here is a derivation:
The iterated integrals of the complementary error function, $\mathrm{i}^n\mathrm{erfc}(z)$ (see e.g. Abramowitz and Stegun) satisfy the difference equation
$$\mathrm{i}^{n+1}\mathrm{erfc}(z)=-\frac{z}{n+1}\mathrm{i}^n\mathrm{erfc}(z)+\frac1{2(n+1)}\mathrm{i}^{n-1}\mathrm{erfc}(z)$$
with initial conditions $\mathrm{i}^0\mathrm{erfc}(z)=\mathrm{erfc}(z)$ and $\mathrm{i}^{-1}\mathrm{erfc}(z)=\dfrac2{\sqrt\pi}\exp(-z^2)$.
This recurrence can be rearranged:
$$\frac{\mathrm{i}^n\mathrm{erfc}(z)}{\mathrm{i}^{n-1}\mathrm{erfc}(z)}=\frac1{2z+2(n+1)\tfrac{\mathrm{i}^{n+1}\mathrm{erfc}(z)}{\mathrm{i}^n\mathrm{erfc}(z)}}$$
Iterating this transformation yields the continued fraction
$$\frac{\mathrm{i}^n\mathrm{erfc}(z)}{\mathrm{i}^{n-1}\mathrm{erfc}(z)}=\cfrac1{2z+\cfrac{2(n+1)}{2z+\cfrac{2(n+2)}{2z+\dots}}}$$
(As a note, $\mathrm{i}^n\mathrm{erfc}(z)$ can be shown to be the minimal solution of its difference equation; thus, by Pincherle, the CF given above is correct.)
In particular, the case $n=0$ gives
$$\frac{\sqrt\pi}{2}\exp(z^2)\mathrm{erfc}(z)=\cfrac1{2z+\cfrac2{2z+\cfrac4{2z+\cfrac6{2z+\dots}}}}$$
If $z=\dfrac1{\sqrt 2}$, then
$$\frac{\sqrt{e\pi}}{2}\mathrm{erfc}\left(\frac1{\sqrt 2}\right)=\cfrac1{\sqrt 2+\cfrac2{\sqrt 2+\cfrac4{\sqrt 2+\cfrac6{\sqrt 2+\dots}}}}$$
We now perform an equivalence transformation. Recall that a general equivalence transformation of a CF
$$b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cfrac{a_3}{b_3+\cdots}}}$$
with some sequence $\mu_k, k>0$ looks like this:
$$b_0+\cfrac{\mu_1 a_1}{\mu_1 b_1+\cfrac{\mu_1 \mu_2 a_2}{\mu_2 b_2+\cfrac{\mu_2 \mu_3 a_3}{\mu_3 b_3+\cdots}}}$$
If we apply this to the CF earlier with $\mu_k=\dfrac1{\sqrt 2}$, then
$$\sqrt{\frac{e\pi}{2}}\mathrm{erfc}\left(\frac1{\sqrt 2}\right)=\cfrac1{1+\cfrac1{1+\cfrac2{1+\cfrac3{1+\dots}}}}$$
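Both sides of the transformation can be checked numerically (helper names and depths below are my own; the large depth is needed because these CFs converge only at $\exp(-c\sqrt{k})$ speed). An equivalence transformation leaves the value of a CF unchanged, so the two evaluations must agree up to the factor $\sqrt2$:

```python
import math

def cf_sqrt2(depth=20000):
    # 1/(sqrt2 + 2/(sqrt2 + 4/(sqrt2 + 6/(...)))), evaluated bottom-up
    r2 = math.sqrt(2)
    t = r2
    for k in range(depth, 0, -1):
        t = r2 + 2 * k / t
    return 1.0 / t

def cf_ones(depth=20000):
    # 1/(1 + 1/(1 + 2/(1 + 3/(...)))), evaluated bottom-up
    t = 1.0
    for k in range(depth, 0, -1):
        t = 1.0 + k / t
    return 1.0 / t

closed = math.sqrt(math.e * math.pi / 2) * math.erfc(1.0 / math.sqrt(2))
print(cf_ones(), closed, math.sqrt(2) * cf_sqrt2())   # all three should agree
```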
Thus,
$$\frac1{\tfrac1{\sqrt{\tfrac{e\pi}{2}}\mathrm{erfc}\left(\tfrac1{\sqrt 2}\right)}-1}=1+\cfrac2{1+\cfrac3{1+\cfrac4{1+\dots}}}$$